Tag Archives: Computing/Software

TestOps: Agile Design and Test

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/test-ops-agile-design-and-test

In the 1990s, agile methods profoundly transformed software development. Agile is far more than a process; it's a new way of working. Today, a similar transformation is happening in test and measurement: TestOps. Learn what TestOps is and how it can accelerate your product development workflow.

Accelerate your innovation with NI Wireless Research Handbook

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/accelerate-your-innovation-with-ni-wireless-research-handbook

Download the latest edition of NI’s Wireless Research Handbook, which includes research examples from around the world and across a wide range of advanced wireless research topics. This comprehensive look at next-generation wireless systems will offer you a more in-depth view of how prototyping can enhance research results.

Applications include:

· Flexible Waveform Shaping Based on UTW-OFDM for 5G and Beyond

· Flexible Real-Time Waveform Generator for Mixed-Service Scenarios

· In-Band Full-Duplex SDR for MAC Protocol with Collision Detection

· Bandwidth-Compressed Spectrally Efficient Communication System

· World-Leading Parallel Channel Sounder Platform

· Distributed Massive MIMO: Algorithm for TDD Reciprocity Calibration

· Wideband/Opportunistic Map-Based Full-Duplex Radios

· An Experimental SDR Platform for In-Band D2D Communications in 5G

· Wideband Multichannel Signal Recorder for Radar Applications

· Passive and Active Radar Imaging

· Multi-antenna Technology for Reliable Wireless Communication

· Radio Propagation Analysis for the Factories of the Future

New Alternative to Bitcoin Uses Negligible Energy

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/energywise/computing/software/bitcoin-alternative

A nearly zero-energy alternative to Bitcoin and other blockchain-based cryptocurrencies that promises as much security but far greater speeds is now under development in Europe, a new study finds.

Cryptocurrencies such as Bitcoin are digital currencies that use cryptography to protect and enable financial transactions between individuals, rendering third-party middlemen such as banks or credit card companies unnecessary. The explosion of interest in Bitcoin made it the world’s fastest-growing currency for years.
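
The key primitive is the digital signature: the payer signs a transaction with a private key, and anyone can check it against the matching public key, so no bank is needed to vouch for it. Below is a minimal sketch in Python using the third-party ecdsa package (the toy transaction format and variable names are ours, not Bitcoin's, though secp256k1 is the curve Bitcoin actually uses).

    # Sketch: authorizing a payment with a digital signature instead of a bank.
    # Requires the "ecdsa" package (pip install ecdsa).
    import hashlib
    import ecdsa

    # Key pair held by the sender; secp256k1 is the curve Bitcoin uses.
    signing_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
    verifying_key = signing_key.get_verifying_key()

    # A toy "transaction"; real Bitcoin transactions use a binary format.
    transaction = b"pay 0.5 BTC from alice to bob"
    digest = hashlib.sha256(transaction).digest()

    # The sender signs; any node can verify without trusting an intermediary.
    signature = signing_key.sign(digest)
    verifying_key.verify(signature, digest)  # raises BadSignatureError if forged
    print("signature verified")

Blockchains add the further step of ordering such signed transactions in a shared ledger, and it is that consensus step, proof-of-work mining in Bitcoin's case, that accounts for the energy cost the new design seeks to avoid.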

JumpStart Guide to Security Investigations and Posture Management in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/jumpstart-guide-to-security-investigations-and-posture-management-in-aws

Many organizations know how to conduct a security investigation and have a basic understanding of their security posture. However, parts of an organization's environment, such as misconfigurations and hidden interrelationships, are easily overlooked and can affect both its security posture and an investigation.

There are solutions available to enable your ability to conduct effective investigations and help improve your organization’s security posture in AWS. This webinar provides guidance on the key considerations when choosing those solutions.

Attendees will learn:

  • Needs and capabilities associated with security investigations and posture management technologies
  • Important business, technical, and operational considerations for implementation of selected tools
  • AWS-specific considerations for selection of data sources, investigation solutions, and posture management solutions
  • Process for making an informed decision about products to integrate
  • How security posture management solutions, such as Barracuda Cloud Security Guardian for AWS, can be integrated into investigation processes

Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong

Post Syndicated from Stuart Russell original https://spectrum.ieee.org/computing/software/many-experts-say-we-shouldnt-worry-about-superintelligent-ai-theyre-wrong

Editor’s note: This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House.

AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.

Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”

Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.

Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:

Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.

Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.

No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.

Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled…. This new danger…is certainly something which can give us anxiety.

Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.

Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.

Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.

Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.

Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.

This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.

The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030 [PDF],” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”

To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.

What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.

If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.

For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.

Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.

Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:

At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.

Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.

The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:

Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.

And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:

If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.

The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.

The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.

Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued [PDF] that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.

Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:

AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.

Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:

There is no reason for AIs to have self-preservation instincts, jealousy, etc…. AIs will not have these destructive “emotions” unless we build these emotions into them.

Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.

A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:

Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.

By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
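
To see how literally the objective is just a parameter, here is a toy tabular Q-learning sketch (our illustration, not from the article): the update rule is identical whatever reward function is plugged in, whether it rewards reaching a destination or simply accumulating more of something.

    # Toy illustration of the orthogonality thesis for reward optimizers:
    # the Q-learning update below is completely generic and will optimize
    # whatever reward function it is handed, with no view about its content.
    import random
    from collections import defaultdict

    def q_learning(step, reward_fn, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
        """Tabular Q-learning. step(state, action) gives the next state;
        reward_fn(state, action, next_state) can encode any objective at all."""
        q = defaultdict(float)
        for _ in range(episodes):
            state = 0
            for _ in range(50):  # cap episode length
                if random.random() < eps:
                    action = random.choice(actions)
                else:
                    action = max(actions, key=lambda a: q[(state, a)])
                nxt = step(state, action)
                r = reward_fn(state, action, nxt)        # the objective is just an input
                best_next = max(q[(nxt, a)] for a in actions)
                q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
                state = nxt
        return q

    # Two very different "final goals," one unchanged algorithm.
    walk = lambda s, a: max(0, min(10, s + a))              # 1-D world, move left or right
    reach_goal = lambda s, a, nxt: 1.0 if nxt == 10 else 0.0
    hoard_more = lambda s, a, nxt: float(nxt)               # "more is better," whatever it is
    q_learning(walk, reach_goal, actions=[-1, 1])
    q_learning(walk, hoard_more, actions=[-1, 1])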

The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”

Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.

In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.

Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.

This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”

About the Author

Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. This month, Viking Press is publishing Russell’s new book, Human Compatible: Artificial Intelligence and the Problem of Control, on which this article is based. He is also active in the movement against autonomous weapons, and he instigated the production of the highly viewed 2017 video Slaughterbots.

COMSOL News 2019 Special Edition Powers

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/comsol-news-2019-special-edition-powers

See how power engineers benefit from using the COMSOL Multiphysics® simulation software for the generation, distribution, and use of electrical power. COMSOL News 2019 Special Edition Power includes stories about designers, engineers, and researchers working to develop power transformers, cable systems, transmission lines, power electronics, and more.

World’s First Deepfake Audit Counts Videos and Tools on the Open Web

Post Syndicated from Sally Adee original https://spectrum.ieee.org/tech-talk/computing/software/the-worlds-first-audit-of-deepfake-videos-and-tools-on-the-open-web

If you wanted to make a deepfake video right now, where would you start? Today, an Amsterdam-based startup has published an audit of all the online resources that exist to help you make your own deepfakes. And its authors say it’s a first step in the quest to fight the people doing so. In weeding through software repositories and deepfake tools, they found some unexpected trends—and verified a few things that experts have long suspected.

Democratizing the MBSE Process for Safety-Critical Systems Using Excel

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/democratizing-the-mbse-process-for-safetycritical-systems-using-excel

Compliance with functional safety standards (ISO 26262, IEC 60601, ARP 4754…) is critically important in the design of complex safety-critical systems and requires close collaboration across multiple disciplines, processes, and the supply chain. Model-based systems engineering provides the rigor and consistency needed to develop a systems model that serves as the "single source of truth." However, not all stakeholders are systems engineers, and even fewer are model-based systems engineers. Every organization has many contributors with specific skills and knowledge whose work needs to feed into the systems development process.

This webinar will introduce MapleMBSE, a recently released tool that uses Excel to increase effective engagement with the systems model by all stakeholders within the IBM Rhapsody ecosystem. The resulting workflow enables engineers and developers unfamiliar with the MBSE paradigm to interact easily with a systems model in Excel: capturing and understanding the systems engineering information in the model, analyzing it further, and adding detail back to the model. The presentation will include use cases for the development of safety-critical systems, involving model-based safety analysis and FMEA.

U.S. Military, Looking to Automate Post-Disaster Damage Recognition, Seeks a Winning Formula

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/computing/software/defense-department-launches-disastrous-computer-vision-contest

It seems like natural disasters are happening more and more frequently these days. Worse still, we seem ill prepared to deal with them. Even if it’s something we can see coming, like a hurricane, the path to recovery is often a confused mess as first responders scramble to figure out where to allocate resources. Remote sensing technology can help with this, but the current state of the art comes down to comparing aerial before-and-after images from disaster scenes by hand and trying to identify which locations were hit hardest.

To help with this problem, the Defense Innovation Unit (a sort of tech accelerator inside the Department of Defense) is sponsoring a challenge called xView2. Its goal: to develop a computer vision algorithm that can automate the process of detecting and labeling damage based on differences in before-and-after photos. And like all good challenges, there’s a big pile of money at the end for whoever manages to do the best job of it.
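
As a crude illustration of the task (not the xView2 baseline or any entrant's method), a naive approach would difference co-registered before-and-after images and flag the tiles that changed most; the sketch below assumes the images arrive as NumPy arrays of matching size.

    # Naive change detection: difference two co-registered aerial images and
    # flag the tiles with the largest mean change as candidate damage. Real
    # entries use learned models, but the inputs and outputs look similar.
    import numpy as np

    def flag_damaged_tiles(before, after, tile=64, threshold=0.15):
        """before/after: arrays with values in [0, 1] and identical shape.
        Returns a boolean grid, True where a tile's mean absolute change
        exceeds the threshold."""
        diff = np.abs(after.astype(float) - before.astype(float))
        if diff.ndim == 3:                    # average over color channels
            diff = diff.mean(axis=-1)
        rows, cols = diff.shape[0] // tile, diff.shape[1] // tile
        flags = np.zeros((rows, cols), dtype=bool)
        for i in range(rows):
            for j in range(cols):
                patch = diff[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
                flags[i, j] = patch.mean() > threshold
        return flags

    # Example: identical images except for one bright "damaged" patch.
    before = np.random.rand(256, 256)
    after = before.copy()
    after[:64, :64] = 1.0
    print(flag_damaged_tiles(before, after))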

How to Secure App Pipelines in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-secure-app-pipelines-in-aws

Applications today have evolved into containers and microservices deployed in fully automated and distributed environments across data centers and in AWS. This webinar will focus on the security of the continuous integration and continuous deployment (CI/CD) pipeline and security automation. Join SANS and AWS Marketplace as they discuss how to improve and automate security across the entire CI/CD pipeline and runtime environment.

Attendees will learn:

  • App security concepts specific to cloud environments, including management of secrets, security of APIs, serverless applications and security, and privilege management
  • The stages of a cloud-oriented development pipeline and how security can tie into the CI/CD pipeline
  • How to mitigate disruptions caused by perimeter-based and legacy security tools that don’t fit into CI/CD practices
  • Solutions available in AWS Marketplace to help you secure your app pipeline in your AWS environment

Quantum Computing Software Startup Aliro Emerges From Stealth Mode

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/software/quantum-computing-software-startup-aliro-emerges-from-stealth-mode

There are a lot of different types of quantum computers. Arguably, none of them are ready to make a difference in the real world. But some startups are betting that they’re getting so close that it’s time to make it easy for regular software developers to take advantage of these machines. Boston-based Aliro Technologies is one such startup.

Aliro emerged from stealth mode today, revealing that it had attracted US $2.7 million from investors including Crosslink Ventures, Flybridge Capital Partners, and Samsung NEXT's Q Fund. The company was founded by Prineha Narang, a Harvard assistant professor of computational materials science, along with two of her students and a postdoctoral researcher.

Aliro is planning a software stack that will let ordinary developers first determine whether available cloud-based quantum hardware can speed up any of their workloads more than other accelerators, such as GPUs, and then write code that takes advantage of that speedup.
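
Aliro has not published its interfaces, so the decision it aims to automate can only be pictured generically: benchmark the same workload on each available backend and target the quantum hardware only when it actually wins. The sketch below is entirely hypothetical; the backend names and costs are invented and no Aliro API is implied.

    # Hypothetical sketch of the "is this backend worth it?" comparison.
    import time

    def fastest_backend(run_workload, backends, repeats=3):
        """run_workload(name) executes the task on that backend.
        Returns the fastest backend and the average runtimes."""
        timings = {}
        for name in backends:
            samples = []
            for _ in range(repeats):
                start = time.perf_counter()
                run_workload(name)
                samples.append(time.perf_counter() - start)
            timings[name] = sum(samples) / len(samples)
        return min(timings, key=timings.get), timings

    # Stand-in workload: pretend each backend has a fixed cost per run.
    fake_cost = {"cpu": 0.02, "gpu": 0.005, "quantum-sim": 0.05}
    best, times = fastest_backend(lambda b: time.sleep(fake_cost[b]), fake_cost)
    print(best, times)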

Q&A: How Google Implements Code Coverage at Massive Scale

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/qa-how-google-implements-code-coverage-at-massive-scale

In software development, a common metric called code coverage measures the percentage of a system's code that is covered by tests performed prior to deployment. Code coverage is typically measured automatically by a separate program, though some coverage tools can also be invoked manually from the command line. The results show exactly which lines of code were executed when running a test suite and can reveal which lines may need further testing.
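
For a concrete sense of what such a tool reports, here is a small sketch using Python's coverage.py API (one of many coverage tools; the function and test are invented for the example, and the same measurement can also be driven from the command line).

    # Minimal sketch of programmatic coverage measurement with coverage.py.
    # The report lists which lines ran under the test and which never did.
    import coverage

    def classify(n):
        if n < 0:
            return "negative"          # this branch is never exercised below
        return "non-negative"

    cov = coverage.Coverage()
    cov.start()
    assert classify(5) == "non-negative"   # only the non-negative path runs
    cov.stop()
    cov.report(show_missing=True)          # prints per-file coverage and missing lines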

Ideally, software development teams aim for 100 percent code coverage. But in reality, this rarely happens because of the different paths a certain code block could take, or the various edge cases that should (or shouldn’t) be considered based on system requirements.

Measuring code coverage has become common practice for software development and testing teams, but the question of whether this practice actually improves code quality is still up for debate.

Some argue that developers might focus on quantity rather than quality, creating tests just to satisfy the code coverage percentage instead of tests that are robust enough to identify high-risk or critical areas. Others raise concerns about its cost-effectiveness—it takes valuable developer time to review the results and doesn’t necessarily improve test quality. 

For a large organization such as Google—with a code base of one billion lines of code receiving tens of thousands of commits per day and supporting seven programming languages—measuring code coverage can be especially difficult.

A recent study led by Google AI researchers Marko Ivanković and Goran Petrović provides a behind-the-scenes look at the tech giant’s code coverage infrastructure, which consists of four core layers. The bottom layer is a combination of existing code coverage libraries for each programming language, while the middle layers automate and integrate code coverage into the company’s development and build workflows. The top layer deals with visualizing code coverage information using code editors and other custom tools.
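
The study describes that layering only at the architectural level. As a toy way to picture the bottom layer (hypothetical classes, not Google's code), one can imagine a thin adapter that gives each language-specific coverage tool a common interface for the automation layers above it:

    # Toy illustration of the "bottom layer" idea: per-language coverage tools
    # wrapped behind one interface so the higher layers (build integration,
    # dashboards, code review) don't care which language produced the numbers.
    from abc import ABC, abstractmethod

    class CoverageBackend(ABC):
        @abstractmethod
        def measure(self, test_target: str) -> dict:
            """Return {file: fraction_of_lines_covered} for one test target."""

    class PythonCoverage(CoverageBackend):
        def measure(self, test_target):
            return {"example.py": 0.87}    # stand-in for driving coverage.py

    class CppCoverage(CoverageBackend):
        def measure(self, test_target):
            return {"example.cc": 0.74}    # stand-in for driving gcov/llvm-cov

    BACKENDS = {"py": PythonCoverage(), "cc": CppCoverage()}

    def coverage_for(test_target, language):
        # The middle layers would call something like this from the build
        # system and surface the results in code review and dashboards.
        return BACKENDS[language].measure(test_target)

    print(coverage_for("//foo:bar_test", "py"))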

JumpStart Guide for Application Security in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/jumpstart-guide-for-application-security-in-aws

Securing applications in the cloud involves a different set of considerations compared to on-premises environments. Understanding the applications your organization plans to deploy and allowing security and development teams to work in tandem can enable more secure, timely application releases. This webinar provides guidance on how to understand and protect applications in your pipeline as well as solution suggestions to help secure application deployment and delivery on Amazon Web Services (AWS).

Attendees will learn:

  • Hands-on implementation options available to protect applications in the pipeline
  • Needs and capabilities associated with AppSec solutions for development visibility, automated security assessments, and more
  • Tools and features, such as access control for AWS services and applications, to help secure application delivery and deployment
  • Key business, technical, and operational considerations for application security on AWS
  • Practical guidance on tactics and techniques for using resources and ideas to plan future application development