Tag Archives: Computing/Software

Researchers Can Make AI Forget You

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/researchers-can-make-ai-forget-you

Whether you know it or not, you’re feeding artificial intelligence algorithms. Companies, governments, and universities around the world train machine learning software on unsuspecting citizens’ medical records, shopping history, and social media use. Sometimes the goal is to draw scientific insights, and other times it’s to keep tabs on suspicious individuals. Even AI models that abstract from data to draw conclusions about people in general can be prodded in such a way that individual records fed into them can be reconstructed. Anonymity dissolves.

To restore some amount of privacy, recent legislation such as Europe’s General Data Protection Regulation and the California Consumer Privacy Act provides a right to be forgotten. But making a trained AI model forget you often requires retraining it from scratch with all the data but yours, a process that can take weeks of computation.

Two new papers offer ways to delete records from AI models more efficiently, possibly saving megawatt-hours of energy and making compliance more attractive. “It seemed like we needed some new algorithms to make it easy for companies to actually cooperate, so they wouldn’t have an excuse to not follow these rules,” said Melody Guan, a computer scientist at Stanford and co-author of the first paper.
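
Neither paper’s method is spelled out here, but a rough, hypothetical sketch can show why partitioning the training data makes deletion cheaper: if separate sub-models are trained on separate shards, honoring a deletion request means retraining only the shard that held the record, not the whole model. The data, the toy “model,” and the shard count below are all invented for illustration and are not taken from either paper.

    # Hypothetical sketch of sharded training for efficient data deletion.
    # A generic illustration of the idea, not the algorithm from either paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def train(X, y):
        """Toy 'model': per-class feature means (a nearest-centroid classifier)."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(model, x):
        return min(model, key=lambda c: np.linalg.norm(x - model[c]))

    # Synthetic data split into shards; each shard gets its own sub-model.
    NUM_SHARDS = 4
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)
    shard_of = rng.integers(0, NUM_SHARDS, size=len(X))
    models = [train(X[shard_of == s], y[shard_of == s]) for s in range(NUM_SHARDS)]

    def forget(record_idx):
        """Honor a deletion request by retraining only the shard that held the record."""
        s = shard_of[record_idx]
        keep = shard_of == s
        keep[record_idx] = False
        models[s] = train(X[keep], y[keep])

    forget(17)  # remove record 17 without touching the other three shards

    # Predictions come from a majority vote across the shard models.
    votes = [predict(m, X[0]) for m in models]
    print(max(set(votes), key=votes.count))

With a real model, each sub-model would be costly to train, which is exactly why confining the retraining to a single shard saves so much computation.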

Neural Networks Can Drive Virtual Racecars Without Learning

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/tech-talk/computing/software/neural-networks-ai-artificial-intelligence-drives-virtual-racecars-without-learning

Animals are born with innate abilities and predispositions. Horses can walk within hours of birth, ducks can swim soon after hatching, and human infants are automatically attracted to faces. Brains have evolved to take on the world with little or no experience, and many researchers would like to recreate such natural abilities in artificial intelligence.

New research finds that artificial neural networks can evolve to perform tasks without learning. The technique could lead to AI that is much more adept at a wide variety of tasks such as labeling photos or driving a car.

Artificial neural networks are arrangements of small computing elements (“neurons”) that pass information between them. The networks typically learn to perform tasks like playing games or recognizing images by adjusting the “weights” or strengths of the connections between neurons. A technique called neural architecture search tries lots of network shapes and sizes to find ones that learn better for a specific purpose.
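
The research described here searches over network structures rather than weights. As a loose, hypothetical sketch of how such a search can sidestep learning entirely (the task, the encoding, and the scoring scheme below are invented for illustration, not taken from the paper), one can score each candidate topology by how well it performs across several arbitrary shared-weight settings, so that selection pressure falls on the wiring rather than on tuned weights:

    # Hedged sketch of evolving a network topology that works without weight training.
    # Toy task, toy encoding; not the actual system described in the research.
    import numpy as np

    rng = np.random.default_rng(1)

    def run_network(masks, shared_w, x):
        """Evaluate a feedforward net in which every active connection uses the same weight."""
        h = x
        for m in masks:                       # each mask is a 0/1 connection matrix
            h = np.tanh(h @ (m * shared_w))
        return h

    def fitness(masks, X, y):
        """Average accuracy over several shared-weight values, so the score
        reflects the topology itself rather than any tuned weights."""
        scores = []
        for w in (-2.0, -1.0, 1.0, 2.0):
            pred = (run_network(masks, w, X)[:, 0] > 0).astype(int)
            scores.append((pred == y).mean())
        return float(np.mean(scores))

    def mutate(masks):
        """Flip one random connection on or off in a randomly chosen layer."""
        new = [m.copy() for m in masks]
        m = new[rng.integers(len(new))]
        m[rng.integers(m.shape[0]), rng.integers(m.shape[1])] ^= 1
        return new

    # Toy classification data and a minimal hill-climbing "evolution" loop.
    X = rng.normal(size=(100, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    best = [rng.integers(0, 2, size=(4, 6)), rng.integers(0, 2, size=(6, 1))]
    for _ in range(200):
        candidate = mutate(best)
        if fitness(candidate, X, y) >= fitness(best, X, y):
            best = candidate
    print("fitness of best topology:", fitness(best, X, y))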

5G Small-Cell Base Station Antenna Array Design

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/5g-smallcell-base-station-antenna-array-design

In this eSeminar we will explore state-of-the-art simulation approaches for antenna array design, with a particular focus on 5G small-cell base station antennas.

Realizing the 5G promise of reliably providing high data rate connections to many users simultaneously requires the use of new design approaches for base station antennas. In particular, antenna arrays will increasingly be used to enable agile beam forming and massive MIMO technology, both required to provide good service in dynamic, complex urban environments with a high number of users. The array design capabilities of SIMULIA CST Studio Suite have grown dramatically in recent years and are relied on by many companies around the world.

Join us to learn more about how simulation can help you with your array design, as we answer the following questions.

  • How can antenna elements be designed and evaluated in terms of their suitability as an array element?
  • How can full arrays with real radomes be simulated accurately much more quickly than before using the new simulation-by-zones approach?
  • How can interference between multiple co-located arrays be evaluated using advanced hybrid simulation techniques?
  • Finally, how can the coverage performance of base station arrays in complex urban or indoor environments be predicted?

AI and Economic Productivity: Expect Evolution, Not Revolution

Post Syndicated from Jeffrey Funk original https://spectrum.ieee.org/computing/software/ai-and-economic-productivity-expect-evolution-not-revolution

In 2016, London-based DeepMind Technologies, a subsidiary of Alphabet (which is also the parent company of Google), startled industry watchers when it reported that the application of artificial intelligence had reduced the cooling bill at a Google data center by a whopping 40 percent. What’s more, we learned that year, DeepMind was starting to work with the National Grid in the United Kingdom to save energy throughout the country using deep learning to optimize the flow of electricity.

Could AI really slash energy usage so profoundly? In the three years that have passed, I’ve searched for articles on the application of AI to other data centers but find no evidence of important gains. What’s more, DeepMind’s talks with the National Grid about energy have broken down. And the financial results for DeepMind certainly don’t suggest that customers are lining up for its services: For 2018, the company reported losses of US $571 million on revenues of $125 million, up from losses of $366 million in 2017. Last April, The Economist characterized DeepMind’s 2016 announcement as a publicity stunt, quoting one inside source as saying, “[DeepMind just wants] to have some PR so they can claim some value added within Alphabet.”

This episode encouraged me to look more deeply into the economic promise of AI and the rosy projections made by champions of this technology within the financial sector. This investigation was just the latest twist on a long-standing interest of mine. In the early 1980s, I wrote a doctoral dissertation on the economics of robotics and AI, and throughout my career as a professor and technology consultant I have followed the economic projections for AI, including detailed assessments by consulting organizations such as Accenture, PricewaterhouseCoopers International (PwC), and McKinsey.

These analysts have lately been asserting that AI-enabled technologies will dramatically increase economic output. Accenture claims that by 2035 AI will double growth rates for 12 developed countries and increase labor productivity by as much as a third. PwC claims that AI will add $15.7 trillion to the global economy by 2030, while McKinsey projects a $13 trillion boost by that time.

Other forecasts have focused on specific sectors such as retail, energy, education, and manufacturing. In particular, the McKinsey Global Institute assessed the impact of AI on these four sectors in a 2017 report titled Artificial Intelligence: The New Digital Frontier? and did so for a much longer list of sectors in a 2018 report. In the latter, the institute concluded that AI techniques “have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. This constitutes about 40 percent of the overall $9.5 trillion to $15.4 trillion annual impact that could potentially be enabled by all analytical techniques.”

Wow. These are big numbers. If true, they create a powerful incentive for companies to pursue AI—with or without help from McKinsey consultants. But are these predictions really valid?

Many of McKinsey’s estimates were made by extrapolating from claims made by various startups. For instance, its prediction of a 10 percent improvement in energy efficiency in the U.K. and elsewhere was based on the purported success of DeepMind and also of Nest Labs, which became part of Google’s hardware division in 2018. In 2017, Nest, which makes a smart thermostat and other intelligent products for the home, lost $621 million on revenues of $726 million. That fact doesn’t mesh with the notion that Nest and similar companies are contributing, or are poised to contribute, hugely to the world economy.

So I decided to investigate more systematically how well such AI startups were doing. I found that many were proving not nearly as valuable to society as all the hype would suggest. This assertion will certainly rub a lot of people the wrong way, the analysts at McKinsey among them. So I’d like to describe here how I reached my much more pessimistic conclusions.

My investigation of Nest Labs expanded into a search for evidence that smart meters in general are leading to large gains in energy efficiency. In 2016, the British government began a coordinated campaign to install smart meters throughout the country by 2020. And since 2010, the U.S. Department of Energy has invested some $4.5 billion installing more than 15 million smart meters throughout the United States. Curiously enough, all that effort has had little observed impact on energy usage. The U.K. government recently revised downward the amount it figures a smart meter will save each household annually, from £26 to just £11. And the cost of smart meters and their installation has risen, warns the U.K.’s National Audit Office. All of this is not good news for startups banking on the notion that smart thermostats, smart home appliances, and smart meters will lead to great energy savings.

Are other kinds of AI startups having a greater positive effect on the economy? Tech sector analyst CB Insights reports that overall venture capital funding in the United States was $115 billion in 2018 [PDF], of which $9.3 billion went to AI startups. While that’s just 8 percent of the total, it’s still a lot of money, indicating that there are many U.S. startups working on AI (although some overstate the role of AI in their business plans to acquire funding).

To probe further, I gathered data on the U.S. AI startups that have received the most funding and looked at which industries they were hoping to disrupt. The reason for focusing on the United States is that it has the longest history of startup success, so it seems likely that its AI startups are more apt to flourish than those in other countries. My intention was to evaluate whether these U.S. startups had succeeded in shaking up various industries and boosting productivity or whether they promise to do so shortly.

In all, I examined 40 U.S. startups working on AI. These either had valuations greater than $1 billion or had more than $70 million in equity funding. Other than two that had been acquired by public companies, the startups I looked at are all private firms. I found their names and product offerings in lists of leading startups that Crunchbase, Fortune, and Datamation had compiled and published. I then updated my data set with more recent news about these companies (including reports of some shutdowns).

I categorized these 40 startups by the type of product or service they offered. Seventeen are working on what I would call basic computer hardware and software (Wave Computing and OpenAI, respectively, are examples), including cybersecurity (CrowdStrike, for instance). That is, I included in this category companies building tools that are intended to support the computing environment itself.

Making up another large fraction—8 of the 40—are companies that develop software that automates various tasks. The robotic process automation software being developed by Automation Anywhere, UiPath, and WorkFusion, for example, enables higher productivity among professionals and other white-collar workers. Software from Brain Corp. converts manual equipment into intelligent robots. Algolia, Conversica, and Xant offer software to improve sales and marketing. ZipRecruiter targets human resources.

The remaining startups on my list are spread among various industries. Three (Flatiron Health, Freenome, Tempus Labs) work in health care; three more (Avant, Upstart, ZestFinance) are focused on financial technology; two (Indigo, Zymergen) target agriculture or synthetic biology; and three others (Nauto, Nuro, Zoox) involve transportation. There is just one startup each for geospatial analytics (Orbital Insight), patterns of human interaction (Afiniti), photo/video recognition (Vicarious), and music recognition (SoundHound).

Are there indications that these startups will bring large productivity improvements in the near future? In my view, software that automates tasks normally carried out by white-collar workers is probably the most promising of the products and services that AI is being applied to. Similar to past improvements in tools for white-collar professionals, including Excel for accountants and computer-aided design for engineers and architects, these types of AI-based tools have the greatest potential impact on productivity. For instance, there are high hopes for generative design, in which teams of people input constraints and the system proposes specific designs.

But looking at the eight startups on my list that are working on automation tools for white-collar workers, I realized that they are not targeting things that would lead to much higher productivity. Three of them are focused on sales and marketing, which is often a zero-sum game: The company with the best software takes customers from competitors, with only small increases in productivity under certain conditions. Another one of these eight companies is working on human-resource software, whose productivity benefits may be larger than those for sales and marketing but probably not as large as you’d get from improved robotic process automation.

This leaves four startups that do offer such software, which may lead to higher productivity and lower costs. But even among these startups, none currently offers software that helps engineers and architects become more productive through, for example, generative design. Software of this kind isn’t coming from the largest startups, perhaps because there is a strong incumbent, Autodesk, or because the relevant AI is still not developed enough to provide truly useful tools in this area.

The relatively large number of startups I classified as working on basic hardware and software for computing (17) also suggests that productivity improvements are still many years away. Although basic hardware and software are a necessary part of developing higher-level AI-based tools, particularly ones utilizing machine learning, it will take time for the former to enable the latter. I suppose this situation simply reflects that AI is still in its infancy. You certainly get that impression from companies like OpenAI: Although it has received $1 billion in funding (and a great deal of attention), the vagueness of its mission—“Benefiting all of humanity”—suggests that it will take many years yet for specific useful products and services to evolve from this company’s research.

The large number of these startups that are focused on cybersecurity (seven) highlights the increasing threat of security problems, which raise the cost of doing business over the Internet. AI’s ability to address cybersecurity issues will likely make the Internet more safe, secure, and useful. But in the end, this thrust reflects yet higher costs in the future for Internet businesses and will not, to my mind, lead to large productivity improvements within the economy as a whole.

If not from the better software tools it brings, where will AI bring substantial economic gains? Health care, you would think, might benefit greatly from AI. Yet the number of startups on my list that are applying AI to health care (three) seems oddly small if that were really the case. Perhaps this has something to do with IBM’s experience with its Watson AI, which proved a disappointment when it was applied to medicine.

Still, many people remain hopeful that AI-fueled health care startups will fill the gap left by Watson’s failures. Arguing against this is Robert Wachter, who points out that it’s much more difficult to apply computers to health care than to other sectors. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, details the many reasons that health care lags other industries in the application of computers and software. It’s not clear that adding AI to the mix of digital technologies available will do anything to change the situation.

There are also some big applications missing from the list of well-funded AI startups. Housing represents the largest category of consumer expenditures in the United States, but none of these startups are addressing this sector of the economy at all. Transportation is the second largest expenditure, and it is the focus of just three of these startups. One is working on a product that identifies distracted drivers. Another intends to provide automated local deliveries. Only one startup on the list is developing driverless passenger vehicles. That there is only one working on self-driving cars is consistent with the pessimism recently expressed by executives of Ford, General Motors, and Mercedes-Benz about the prospects for driverless vehicles taking to the streets in large numbers anytime soon, even though $35 billion has already been spent on R&D for them.

Admittedly, my assessment of what these 40 companies are doing and whether their offerings will shake up the world over the next decade is subjective. Perhaps it makes better sense to consider a more objective measure of whether these companies are providing value to the world economy: their profitability.

Alas, good financial data is not available on privately held startups, only two of the companies on my list are now part of public companies, and startups often take years to turn a profit (Amazon took seven years). So there isn’t a lot to go on here. Still, there are some broad trends in the tech sector that are quite telling.

The fraction of tech companies that are profitable by the time they go public dropped from 76 percent in 1980 to just 17 percent in 2018, even though the average time to IPO has been rising—it went from 2.8 years in 1998 to 7.7 years in 2016, for example. Also, the losses of some well-known startups that took a long time to go public are huge. For instance, none of the big ride-sharing companies are making a profit, including those in the United States (Uber and Lyft), China, India, and Singapore, with total losses of about $5 billion in 2018. Most bicycle and scooter sharing, office sharing, food delivery, P2P (peer-to-peer) lending, health care insurance and analysis, and other consumer service startups are also losing vast amounts of money, not only in the United States but also in China and India.

Most of the 40 AI startups I examined will probably stay private, at least in the near term. But even if some do go public several years down the road, it’s unlikely they’ll be profitable at that point, if the experience of many other tech companies is any guide. It may take these companies years more to achieve the distinction of making more money than they are spending.

For the reasons I’ve given, it’s very hard for me to feel confident that any of the AI startups I examined will provide the U.S. economy with a big boost over the next decade. Similar pessimism is also starting to emerge from such normally cheery publications as Technology Review and Scientific American. Even the AI community is beginning to express concerns in books such as The AI Delusion and Rebooting AI: Building Artificial Intelligence We Can Trust, concerns that are growing amid the rising hype about many new technologies.

The most promising areas for rapid gains in productivity are likely to be found in robotic process automation for white-collar workers, continuing a trend that has existed for decades. But these improvements will be gradual, just as those for computer-aided design and computer-aided engineering software, spreadsheets, and word processing have been.

Viewed over the span of decades, the value of such software is impressive, bringing huge gains in productivity for engineers, accountants, lawyers, architects, journalists, and others—gains that enabled some of these professionals (particularly engineers) to enrich the global economy in countless ways.

Such advances will no doubt continue with the aid of machine learning and other forms of AI. But they are unlikely to be nearly as disruptive—for companies, for workers, or for the economy as a whole—as many observers have been arguing.

About the Author

Jeffrey Funk retired from the National University of Singapore in 2017, where he taught (among other subjects) a course on the economics of new technology as a professor of technology management. He remains based in Singapore, where he consults in various areas of technology and business.

Simulating Radar Signals for Meaningful Radar Warning Receiver Tests

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/simulating-radar-signals-for-meaningful-radar-warning-receiver-tests

Testing state-of-the-art radar warning equipment with adequate RF and microwave signals is a challenging task for radar engineers. Generating test signals in a lab that are similar to a real-life RF environment is among the most demanding tasks in evaluating advanced radars. A scenario for testing radar-warning receivers (RWR) can easily contain many highly mobile emitters that are capable of mode changes, many interfering signals, and a moving multi-channel receiver. Today, intuitive and easy-to-use software tools are available that take a lot of work off the hands of radar engineers.

In this seminar, you will learn how to:

  • Create radar scenarios ranging from simple pulses to the most demanding emitter scenarios using PC software
  • Generate complex radar signals with off-the-shelf vector signal generators
  • Increase flexibility during radar simulation by streaming pulse descriptor words (PDW)

Screening Technique Found 142 Malicious Apps in Apple’s App Store

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/new-screening-technique-reveals-142-malicious-apple-apps


Apple’s App Store is renowned for its security—but even Apple is inadvertently allowing a small handful of malicious apps to sneak through its screening process and onto some people’s phones, new research shows. The good news is that the researchers involved, who published their findings on 31 October in IEEE Transactions on Dependable and Secure Computing, have also uncovered a way to detect these Trojan horses.

Thanks to strict guidelines and bans on certain practices that facilitate the spread of malicious apps, the vast majority of apps in Apple’s App Store are safe. However, some malicious apps are still making their way through the screening process by exhibiting one user interface while harboring a second, malicious user interface.   

“[These apps] display a benign user interface under Apple’s review but reveal their hidden, harmful user interfaces after being installed on users’ devices,” explains Yeonjoon Lee, a researcher at Hanyang University who was involved in the study.

Hey, Data Scientists: Show Your Machine-Learning Work

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/computing/software/hey-data-scientists-show-your-machinelearning-work

In the last two years, the U.S. Food and Drug Administration has approved several machine-learning models to accomplish tasks such as classifying skin cancer and detecting pulmonary embolisms. But for the companies who built those models, what happens if the data scientist who wrote the algorithms leaves the organization?

In many businesses, an individual or a small group of data scientists is responsible for building essential machine-learning models. Historically, they have developed these models on their own laptops through trial and error, passing them along to production once they work. But in that transfer, the data scientist might not think to pass along all the information about the model’s development. And if the data scientist leaves, that information is lost for good.

That potential loss of information is why experts in data science are calling for machine learning to become a formal, documented process overseen by more people inside an organization.

Companies need to think about what could happen if their data scientists take new jobs, or if a government organization or an important customer asks to see an audit of the algorithm to ensure it is fair and accurate. Not knowing what data was used to train the model and how the data was weighted could lead to a loss of business, bad press, and perhaps regulatory scrutiny, if the model turns out to be biased.

David Aronchick, the head of open-source machine-learning strategy at Microsoft Azure, says companies are realizing that they must run their machine-learning efforts the same way they run their software-development practices. That means encouraging documentation and codevelopment as much as possible.

Microsoft has some ideas about what the documentation process should look like. The process starts with the researcher structuring and organizing the raw data and annotating it appropriately. Not having a documented process at this stage could lead to poorly annotated data that has biases associated with it or is unrelated to the problem the business wants to solve.

Next, during training, a researcher feeds the data to a neural network and tweaks how it weighs various factors to get the desired result. Typically, the researcher is still working alone at this point, but other people should get involved to see how the model is being developed—just in case questions come up later during a compliance review or even a lawsuit.

A neural network is a black box when it comes to understanding how it makes its decisions, but the data, the number of layers, and how the network weights different parameters shouldn’t be mysterious. The researchers should be able to tell how the data was structured and weighted at a glance.
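
What that record should contain will differ from team to team, but even a plain metadata file written out next to the trained model answers most of the questions a reviewer, auditor, or successor would ask. The sketch below is purely illustrative; every field name and value is an assumption rather than an established standard:

    # A minimal, hypothetical training-run record saved alongside the model artifact.
    # Field names and values are illustrative assumptions, not a standard schema.
    import json
    from datetime import datetime, timezone

    run_record = {
        "model_name": "holiday_spend_classifier",
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data": {
            "source": "orders_2019_q4.parquet",   # assumed file name
            "rows": 1250000,
            "label": "made_repeat_purchase",
            "known_gaps": "no in-store transactions",
        },
        "preprocessing": ["drop rows with missing zip code", "one-hot encode product category"],
        "architecture": {"type": "feedforward", "hidden_layers": [64, 32]},
        "hyperparameters": {"learning_rate": 0.001, "epochs": 20, "class_weights": {"0": 1.0, "1": 3.0}},
        "evaluation": {"holdout_auc": 0.87, "fairness_checks": "pending review"},
        "owner": "data-science-team@example.com",
    }

    with open("holiday_spend_classifier.model-card.json", "w") as f:
        json.dump(run_record, f, indent=2)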

It’s also at this point where having good documentation can help make a model more flexible for future use. For example, a shopping site’s model that crunched data specifically for Christmas spending patterns can’t apply that same model to Valentine’s Day spending. Without good documentation, a data scientist would have to essentially rebuild the model, rather than going back and tweaking a few parameters to adjust it for a new holiday.

The last step in the process is actually deploying the model. Historically, only at this point would other people get involved and acquaint themselves with the data scientist’s hard work. Without good documentation, they’re sure to get headaches trying to make sense of it. But now that data is so essential to so many businesses—not to mention the need to adapt quickly—it’s time for companies to build machine-learning processes that rival the quality of their software-development processes.

This article appears in the December 2019 print issue as “Show Your Machine-Learning Work.”

FedEx Ground Uses Virtual Reality to Train and Retain Package Handlers

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/fedex-ground-uses-vr-to-train-and-retain-package-handlers

Package handlers who work on FedEx Ground loading docks load and unload 8.5 million packages a day. The volume and the physical nature of the work make it a tough job—tougher than many new hires realize until they do it. Some quit almost immediately, according to Denise Abbott, FedEx Ground’s vice president of human resources.

So, when FedEx Corp.’s truck package delivery division evaluated how best to incorporate virtual reality into employee training, teaching newly hired package handlers what to expect on the job and how to stay safe doing it quickly rose to the top of the list.

“It allows us to bring an immersive learning technology into the classroom so people can practice before they step foot on a dock,” said Jefferson Welch, human resource director for FedEx Ground University, the division’s training arm. He and Abbott talked about the company’s foray into VR-based training during a presentation at the recent HR Technology Conference in Las Vegas.

At Domino’s Biggest Franchisee, a Chatbot Named “Dottie” Speeds Up Hiring

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/at-dominos-biggest-franchisee-a-chatbot-named-dottie-speeds-up-hiring

At RPM Pizza, a chatbot nicknamed “Dottie” has made hiring almost as fast as delivering pizzas.

RPM adopted the text message-based chatbot along with live chat and text-based job applications to speed up multiple aspects of the hiring process, including identifying promising job candidates and scheduling initial interviews.

It makes sense to use texting for hiring, said Merrin Mueller, RPM’s head of people and marketing, during a presentation on the chatbot at the recent HR Technology Conference in Las Vegas. Job hunters respond to a text faster than an email. At a time when U.S. unemployment is low, competition for hourly workers is fierce, and company recruiters are overwhelmed, you have to act fast.

“People who apply here are applying at Taco Bell and McDonald’s too, and if we don’t get to them right away and hire them faster, they’ve already been offered a job somewhere else,” Mueller said.

A Guide to Computational Storage

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/a-guide-to-computational-storage

To be successful in this increasingly digital world, organizations need the infrastructure and technology to be capable of delivering and storing data and analytics in a fast, secure and efficient way.

Computational storage enables organizations to create customer value by maximizing the benefits of big data. It puts processing power directly on the storage device, giving companies quick and easy access to vital information.

This easy-to-read guide provides an introduction to computational storage and its benefits, walking through real-world examples of how it’s deployed today and what to consider when implementing it in your device. 

Read this guide to learn: 

  • What computational storage is and how it works
  • The benefits it brings to architects, developers and organizations
  • How to move from a traditional storage solution to a more intelligent device
  • Real-world examples from Arm’s partners

How to Perform a Security Investigation in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-perform-a-security-investigation-in-aws

Do you have a plan in place describing how to conduct an investigation in Amazon Web Services (AWS)? What security controls, techniques, and data sources can you leverage when investigating and containing an incident in the cloud? Join SANS and AWS Marketplace to learn how to leverage different technologies to determine the source and timeline of an event and the systems targeted, so you can define a reliable starting point from which to begin your investigations.

Attendees will learn:

  • Prerequisites for performing an effective investigation
  • Services that enable an investigation
  • How to plan an investigation
  • Steps to completing an investigation in AWS

Test Ops Agile Design and Test

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/test-ops-agile-design-and-test

In the 1990s, agile methods profoundly transformed software development. Agile is far more than a process; it’s a new way to work. Today, a similar transformation is happening in test and measurement: TestOps. Learn about TestOps and how to accelerate your product development workflow.


Accelerate your innovation with NI Wireless Research Handbook

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/accelerate-your-innovation-with-ni-wireless-research-handbook

Download the latest edition of NI’s Wireless Research Handbook, which includes research examples from around the world and across a wide range of advanced wireless research topics. This comprehensive look at next-generation wireless systems will offer you a more in-depth view of how prototyping can enhance research results.

Applications include:

  • Flexible Waveform Shaping Based on UTW-OFDM for 5G and Beyond
  • Flexible Real-Time Waveform Generator for Mixed-Service Scenarios
  • In-Band Full-Duplex SDR for MAC Protocol with Collision Detection
  • Bandwidth-Compressed Spectrally Efficient Communication System
  • World-Leading Parallel Channel Sounder Platform
  • Distributed Massive MIMO: Algorithm for TDD Reciprocity Calibration
  • Wideband/Opportunistic Map-Based Full-Duplex Radios
  • An Experimental SDR Platform for In-Band D2D Communications in 5G
  • Wideband Multichannel Signal Recorder for Radar Applications
  • Passive and Active Radar Imaging
  • Multi-antenna Technology for Reliable Wireless Communication
  • Radio Propagation Analysis for the Factories of the Future

New Alternative to Bitcoin Uses Negligible Energy

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/energywise/computing/software/bitcoin-alternative

A nearly zero-energy alternative to Bitcoin and other blockchain-based cryptocurrencies that promises as much security but far greater speeds is now under development in Europe, a new study finds.

Cryptocurrencies such as Bitcoin are digital currencies that use cryptography to protect and enable financial transactions between individuals, rendering third-party middlemen such as banks or credit card companies unnecessary. The explosion of interest in Bitcoin made it the world’s fastest-growing currency for years.

JumpStart Guide to Security Investigations and Posture Management in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/jumpstart-guide-to-security-investigations-and-posture-management-in-aws

Many organizations know how to conduct a security investigation and have a basic understanding of their security posture. However, some easily overlooked areas of an organization’s environment, such as misconfigurations and hidden interrelationships, can affect its security posture or an investigation.

Solutions are available that can help you conduct effective investigations and improve your organization’s security posture in AWS. This webinar provides guidance on the key considerations when choosing those solutions.

Attendees will learn:

  • Needs and capabilities associated with security investigations and posture management technologies
  • Important business, technical, and operational considerations for implementation of selected tools
  • AWS-specific considerations for selection of data sources, investigation solutions, and posture management solutions
  • Process for making an informed decision about products to integrate
  • How security posture management solutions, such as Barracuda Cloud Security Guardian for AWS, can be integrated into investigation processes

Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong

Post Syndicated from Stuart Russell original https://spectrum.ieee.org/computing/software/many-experts-say-we-shouldnt-worry-about-superintelligent-ai-theyre-wrong

Editor’s note: This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House.

AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.

Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”

Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.

Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:

Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.

Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.

No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.

Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled…. This new danger…is certainly something which can give us anxiety.

Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.

Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.

Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.

Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.

Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.

This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.

The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030 [PDF],” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”

To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.

What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.

If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.

For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.

Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.

Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:

At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.

Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.

The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:

Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.

And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:

If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.

The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.

The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.

Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued [PDF] that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.

Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:

AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.

Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:

There is no reason for AIs to have self-preservation instincts, jealousy, etc…. AIs will not have these destructive “emotions” unless we build these emotions into them.

Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.

A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:

Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.

By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
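
A toy sketch makes the point concrete. The generic learner below (a simple epsilon-greedy bandit, written here purely for illustration and not drawn from any particular system) takes the reward function as an argument and will optimize whichever objective it is handed, whether that objective is paper clips or digits of pi:

    # Toy illustration of the orthogonality point: the learner below accepts any
    # reward function and optimizes it, with no view on whether the objective is
    # sensible. A hypothetical example, not any particular production system.
    import random

    def epsilon_greedy_bandit(reward_fn, num_actions=3, steps=5000, eps=0.1):
        """Generic reward optimizer: estimate each action's value and usually pick
        the best one, whatever 'best' means under the supplied reward_fn."""
        values = [0.0] * num_actions
        counts = [0] * num_actions
        for _ in range(steps):
            explore = random.random() < eps
            a = random.randrange(num_actions) if explore else max(
                range(num_actions), key=lambda i: values[i])
            r = reward_fn(a)
            counts[a] += 1
            values[a] += (r - values[a]) / counts[a]   # running average of reward
        return values

    # Two very different objectives, same algorithm.
    paper_clips = lambda a: random.gauss([0.1, 1.0, 0.5][a], 0.2)    # "maximize paper clips"
    digits_of_pi = lambda a: random.gauss([0.7, 0.2, 0.9][a], 0.2)   # "maximize digits of pi"

    print(epsilon_greedy_bandit(paper_clips))
    print(epsilon_greedy_bandit(digits_of_pi))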

The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”

Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.

In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.

Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.

This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”

About the Author

Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. This month, Viking Press is publishing Russell’s new book, Human Compatible: Artificial Intelligence and the Problem of Control, on which this article is based. He is also active in the movement against autonomous weapons, and he instigated the production of the highly viewed 2017 video Slaughterbots.

COMSOL News 2019 Special Edition Power

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/comsol-news-2019-special-edition-powers

See how power engineers benefit from using the COMSOL Multiphysics® simulation software for the generation, distribution, and use of electrical power. COMSOL News 2019 Special Edition Power includes stories about designers, engineers, and researchers working to develop power transformers, cable systems, transmission lines, power electronics, and more.


World’s First Deepfake Audit Counts Videos and Tools on the Open Web

Post Syndicated from Sally Adee original https://spectrum.ieee.org/tech-talk/computing/software/the-worlds-first-audit-of-deepfake-videos-and-tools-on-the-open-web

If you wanted to make a deepfake video right now, where would you start? Today, an Amsterdam-based startup has published an audit of all the online resources that exist to help you make your own deepfakes. And its authors say it’s a first step in the quest to fight the people doing so. In weeding through software repositories and deepfake tools, they found some unexpected trends—and verified a few things that experts have long suspected.

Democratizing the MBSE Process for Safety-Critical Systems Using Excel

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/democratizing-the-mbse-process-for-safetycritical-systems-using-excel

Compliance with functional safety standards (ISO 26262, IEC 60601, ARP 4754…) is critically important in the design of complex safety-critical systems and requires close collaboration across multiple disciplines, processes, and the supply chain. Systems engineering, using model-based approaches, provides the rigor and consistency to develop a systems model as the “single source of truth”. However, not all stakeholders are systems engineers, and even fewer are model-based systems engineers. Every organization has many players with specific skills and knowledge whose roles need to feed into the systems development process.

This webinar will introduce MapleMBSE, a recently released tool that allows the use of Excel as a way of increasing effective engagement with the systems model by all stakeholders within the IBM Rhapsody ecosystem. The resulting workflow will enable engineers and developers not familiar with the MBSE paradigm to interact easily with a systems model in Excel, capturing and understanding the systems engineering information from the model, doing further analysis with the information, and adding further detail to the model. The presentation will include use cases for the development of safety-critical systems, involving model-based safety analysis and FMEA.