Tag Archives: computing

Accelerate your innovation with NI Wireless Research Handbook

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/accelerate-your-innovation-with-ni-wireless-research-handbook

Download the latest edition of NI’s Wireless Research Handbook, which includes research examples from around the world and across a wide range of advanced wireless research topics. This comprehensive look at next-generation wireless systems will offer you a more in-depth view of how prototyping can enhance research results.

Applications include:

· Flexible Waveform Shaping Based on UTW-OFDM for 5G and Beyond

· Flexible Real-Time Waveform Generator for Mixed-Service Scenarios

· In-Band Full-Duplex SDR for MAC Protocol with Collision Detection

· Bandwidth-Compressed Spectrally Efficient Communication System

· World-Leading Parallel Channel Sounder Platform

· Distributed Massive MIMO: Algorithm for TDD Reciprocity Calibration

· Wideband/Opportunistic Map-Based Full-Duplex Radios

· An Experimental SDR Platform for In-Band D2D Communications in 5G

· Wideband Multichannel Signal Recorder for Radar Applications

· Passive and Active Radar Imaging

· Multi-antenna Technology for Reliable Wireless Communication

· Radio Propagation Analysis for the Factories of the Future

Nonlinear Magnetic Materials Modeling Webinar

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/nonlinear-magnetic-materials-modeling

In this webinar, you will learn how to model ferromagnetic materials and other nonlinear magnetic materials in the COMSOL® software.

Ferromagnetic materials exhibit saturation and hysteresis. These effects are major challenges in the design of electric motors and transformers because they affect the iron loss. In addition, the loss of ferromagnetic properties above the Curie temperature is an important nonlinear multiphysics effect in, for example, induction heating. It can also cause permanent material degradation in permanent magnet motors.

This webinar will demonstrate how to build high-fidelity finite element models of ferromagnetic devices using COMSOL Multiphysics®. The presentation concludes with a Q&A session.

PRESENTERS:

Magnus Olsson, Technology Manager, COMSOL

Magnus Olsson joined COMSOL in 1996 and currently leads development for the electromagnetic design products. He holds an MSc in engineering physics and a PhD in plasma physics and fusion research. Prior to joining COMSOL, he worked as a consulting specialist in electromagnetic computations for the Swedish armed forces.

 

Attendees of this IEEE Spectrum webinar have the opportunity to earn PDHs or Continuing Education Certificates! To request your certificate, you will need a webinar code: once you have registered and viewed the webinar, send a request to [email protected] for the code. Then complete the form here: http://innovationatwork.ieee.org/spectrum/

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.

New Alternative to Bitcoin Uses Negligible Energy

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/energywise/computing/software/bitcoin-alternative

A nearly zero-energy alternative to Bitcoin and other blockchain-based cryptocurrencies that promises as much security but far greater speeds is now under development in Europe, a new study finds.

Cryptocurrencies such as Bitcoin are digital currencies that use cryptography to protect and enable financial transactions between individuals, rendering third-party middlemen such as banks or credit card companies unnecessary. The explosion of interest in Bitcoin made it the world’s fastest-growing currency for years.

JumpStart Guide to Security Investigations and Posture Management in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/jumpstart-guide-to-security-investigations-and-posture-management-in-aws

Many organizations know how to conduct a security investigation and have a basic understanding of their security posture. However, some areas of an organization’s environment, such as misconfigurations and hidden interrelationships, are easily overlooked and can affect its security posture or an investigation.

There are solutions available to enable your ability to conduct effective investigations and help improve your organization’s security posture in AWS. This webinar provides guidance on the key considerations when choosing those solutions.

Attendees will learn:

  • Needs and capabilities associated with security investigations and posture management technologies
  • Important business, technical, and operational considerations for implementation of selected tools
  • AWS-specific considerations for selection of data sources, investigation solutions, and posture management solutions
  • Process for making an informed decision about products to integrate
  • How security posture management solutions, such as Barracuda Cloud Security Guardian for AWS, can be integrated into investigation processes

Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong

Post Syndicated from Stuart Russell original https://spectrum.ieee.org/computing/software/many-experts-say-we-shouldnt-worry-about-superintelligent-ai-theyre-wrong

Editor’s note: This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House.

AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.

Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”

Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.

Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:

Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.

Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.

No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.

Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled…. This new danger…is certainly something which can give us anxiety.

Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.

Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.

Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.

Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.

Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.

This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.

The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030 [PDF],” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”

To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.

What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.

If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.

For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.

Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.

Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:

At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.

Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.

The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:

Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.

And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:

If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.

The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.

The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.

Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued [PDF] that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.

Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:

AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.

Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:

There is no reason for AIs to have self-preservation instincts, jealousy, etc…. AIs will not have these destructive “emotions” unless we build these emotions into them.

Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.

A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:

Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.

By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
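
Bostrom’s orthogonality thesis is easy to see in code. The following minimal Python sketch is my illustration, not anything from the article or a production RL library: the same tabular Q-learning routine is handed two arbitrary reward functions, one that prizes making “paper clips” and one that prizes idleness, and it optimizes each without ever inspecting what the reward means.

```python
# Minimal sketch (not from the article): a tabular Q-learning loop that is
# indifferent to what the reward actually measures. The toy environment and
# the "paper clip" reward below are invented for illustration only.
import random
from collections import defaultdict

def q_learning(step_fn, reward_fn, actions, start_state,
               episodes=500, horizon=50, alpha=0.1, gamma=0.9, eps=0.1):
    """Learn to maximize whatever reward_fn returns; the algorithm never
    inspects the reward's meaning."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = start_state
        for _ in range(horizon):
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda a_: Q[(s, a_)]))
            s2 = step_fn(s, a)
            r = reward_fn(s, a, s2)          # any scalar signal will do
            best_next = max(Q[(s2, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

# A toy "factory" whose state is simply a count of objects produced so far.
step = lambda s, a: s + (1 if a == "make" else 0)
actions = ["make", "idle"]

# Two arbitrary objectives, equally acceptable to the same algorithm:
clip_reward = lambda s, a, s2: 1.0 if a == "make" else 0.0   # maximize clips
idle_reward = lambda s, a, s2: 1.0 if a == "idle" else 0.0   # do nothing

Q_clips = q_learning(step, clip_reward, actions, start_state=0)
Q_idle  = q_learning(step, idle_reward, actions, start_state=0)
print(Q_clips[(0, "make")] > Q_clips[(0, "idle")])  # expected True: learns to make clips
print(Q_idle[(0, "idle")] > Q_idle[(0, "make")])    # expected True: same code, opposite goal
```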

The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”

Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.

In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.

Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.

This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”

About the Author

Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. This month, Viking Press is publishing Russell’s new book, Human Compatible: Artificial Intelligence and the Problem of Control, on which this article is based. He is also active in the movement against autonomous weapons, and he instigated the production of the highly viewed 2017 video Slaughterbots.

COMSOL News 2019 Special Edition Power

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/comsol-news-2019-special-edition-powers

See how power engineers benefit from using the COMSOL Multiphysics® simulation software for the generation, distribution, and use of electrical power. COMSOL News 2019 Special Edition Power includes stories about designers, engineers, and researchers working to develop power transformers, cable systems, transmission lines, power electronics, and more.

World’s First Deepfake Audit Counts Videos and Tools on the Open Web

Post Syndicated from Sally Adee original https://spectrum.ieee.org/tech-talk/computing/software/the-worlds-first-audit-of-deepfake-videos-and-tools-on-the-open-web

If you wanted to make a deepfake video right now, where would you start? Today, an Amsterdam-based startup has published an audit of all the online resources that exist to help you make your own deepfakes. And its authors say it’s a first step in the quest to fight the people doing so. In weeding through software repositories and deepfake tools, they found some unexpected trends—and verified a few things that experts have long suspected.

Dream Your Future with CERN!

Post Syndicated from Cern original https://spectrum.ieee.org/computing/hardware/dream-your-future-with-cern

On 14 and 15 September, CERN opened its doors to the public for its Open Days, a unique opportunity to witness the incredible work going on behind the scenes of this organisation, whose mission is to answer the fundamental questions of the universe. More than 75,000 visitors of all ages and backgrounds came to CERN’s many visit points, with more than 100 activities, guided by 3,000 dedicated and passionate volunteers eager to share the wonders of this extraordinary place to work.

CERN is the world’s largest particle physics research centre. It is an incredible place, with its myriad accelerators, detectors, computing infrastructure, and experiments that serve to research the origins of our universe. Seeing it for oneself is the only way to grasp the sheer enormity of what is going on here. We traditionally welcome over 110,000 visitors per year, and the numbers grow all the time. It is a very popular place to visit at any time, as its ranking on Tripadvisor confirms.

Every five years, CERN enters a ‘long shutdown’ phase for essential upgrades and maintenance work lasting several months, and this is the ideal opportunity to open CERN up to the public with its Open Days, for people to see and experience what science on this scale actually looks like. The theme of these Open Days was “Explore the future with us”, with the aim of engaging visitors in how we work at CERN and in the process of science: a human endeavour driven by values of openness, diversity, and peaceful collaboration.

You can of course visit CERN at any time, although on a smaller scale than during the Open Days. While in operation, the Large Hadron Collider and detectors are clearly inaccessible. In the regular annual shutdown periods, limited underground visits are possible but cannot be guaranteed; however, there are many interesting places to visit above ground at all times, with free-of-charge visits and tours on offer. Furthermore, if coming in person is not feasible, people can take virtual tours, notably of the LHC and the computing centre.

Who works at CERN? A common misconception about CERN is that all employees work in physics. CERN’s mission is to uncover the mysteries of our universe, and it is known as the largest physics laboratory in the world, so in many ways this misconception comes from a logical assumption. What is probably less tangible and less well understood by the public is that to achieve this level of cutting-edge particle physics research, you need the infrastructure and tools to perform it: the accelerators, detectors, technology, computing, and a whole host of other disciplines. CERN employs 2,600 staff members to build, operate, and maintain this infrastructure, which is in turn used by a worldwide community of physicists to perform their world-class research.

Of the 2,600 staff members, only 3% are research physicists – CERN’s core hiring needs are for engineers, technicians, and support staff in a wide variety of disciplines, spanning electricity, mechanics, electronics, materials science, vacuum, and of course computing. Let’s not forget that CERN is the birthplace of the World Wide Web, and advances in computing are key here – it’s a great place to work as a software or hardware engineer!

Working at CERN is enriching on so many levels. It is a privilege to be part of an organization with such a noble mission, uniting people from all over the world around values that truly speak to me: diversity, commitment, creativity, integrity, and professionalism. Every day is a new opportunity to learn, discover, and grow. The benefits of working at CERN are plentiful, and the quality of life offered in the Geneva region is remarkable. We often say it’s working in a place like nowhere else on earth! So don’t hesitate to come find out for yourself, on a visit or … by joining us as a student, a graduate, or a professional. Apply now and take part! https://careers.cern

Key parameters for selecting RF inductors

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/key-parameters-for-selecting-r-f-inductors

Download this application note to learn the key selection criteria an engineer needs to understand in order to properly evaluate and specify RF inductors, including inductance value, current rating, DC resistance (DCR), self-resonant frequency (SRF) and more.

Democratizing the MBSE Process for Safety-Critical Systems Using Excel

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/democratizing-the-mbse-process-for-safetycritical-systems-using-excel

Compliance with functional safety standards (ISO 26262, IEC 60601, ARP 4754…) is critically important in the design of complex safety-critical systems and requires close collaboration across multiple disciplines, processes, and the supply chain. Systems engineering, using model-based approaches, provides the rigor and consistency to develop a systems model as the “single source of truth”. However, not all stakeholders are systems engineers, and even fewer are model-based systems engineers. All organizations have many players with specific skills and knowledge who play key roles that need to feed into the systems development process.

This webinar will introduce MapleMBSE, a recently released tool that uses Excel to increase effective engagement with the systems model by all stakeholders within the IBM Rhapsody ecosystem. The resulting workflow will enable engineers and developers not familiar with the MBSE paradigm to interact easily with a systems model in Excel: capturing and understanding the systems engineering information from the model, doing further analysis with the information, and adding further detail to the model. The presentation will include use cases for the development of safety-critical systems, involving model-based safety analysis and FMEA.

How to do Machine Learning on Arm Cortex-M MCUs

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/how-to-do-machine-learning-on-arm-cortexm-mcus

Machine learning (ML) algorithms are moving processing to the IoT device to address challenges with latency, power consumption, cost, network bandwidth, reliability, security, and more.

As a result, interest is growing in developing neural network (NN) solutions to deploy ML on low-power IoT devices, for example, with microcontrollers powered by proven Arm Cortex-M technology.

To help developers get a head start, Arm offers CMSIS-NN, an open-source library of optimized software kernels that maximize NN performance on Cortex-M processors with minimal memory overhead.

This guide to ML on Cortex-M microcontrollers offers methods for NN architecture exploration using image classification on a sample CIFAR-10 dataset to develop models that fit on power and cost-constrained IoT devices.

What’s included in this guide?

  • Techniques to perform NN model search within the typical compute and memory constraints of microcontroller devices (illustrated in the sketch after this list)
  • Methods used to optimize the NN kernels in CMSIS-NN
  • Ways to maximize NN performance on Cortex-M processors with the lowest memory footprint
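
As a rough illustration of what a constrained model search can look like, here is a hypothetical Python sketch that enumerates small convolutional architectures for a CIFAR-10-sized input, estimates weight memory, peak activation memory, and multiply-accumulate count for each, and keeps only those that fit an assumed flash, SRAM, and compute budget. The candidate space, cost formulas, and budgets are illustrative assumptions, not figures from the Arm guide.

```python
# Hypothetical sketch: filter candidate CNNs for a CIFAR-10-sized input (32x32x3)
# by estimated footprint. The budgets and candidate space are assumptions,
# not values taken from the Arm guide.
from itertools import product

FLASH_BUDGET = 128 * 1024    # bytes for weights (assumed)
SRAM_BUDGET  = 64 * 1024     # bytes for activations (assumed)
MAC_BUDGET   = 25_000_000    # multiply-accumulates per inference (assumed)

def conv_layer_cost(h, w, c_in, c_out, k=3, stride=1):
    """Return (weight_bytes, activation_bytes, macs, out_shape) for one conv
    layer, assuming 8-bit weights/activations and 'same' padding."""
    h_out, w_out = h // stride, w // stride
    weights = k * k * c_in * c_out              # int8 weights
    acts = h_out * w_out * c_out                # int8 output activations
    macs = h_out * w_out * c_out * k * k * c_in
    return weights, acts, macs, (h_out, w_out, c_out)

def estimate(arch, in_shape=(32, 32, 3), num_classes=10):
    h, w, c = in_shape
    weights = acts_peak = macs = 0
    for c_out, stride in arch:                  # each layer: (filters, stride)
        wt, act, mc, (h, w, c) = conv_layer_cost(h, w, c, c_out, stride=stride)
        weights += wt; macs += mc; acts_peak = max(acts_peak, act)
    weights += h * w * c * num_classes          # final fully connected layer
    return weights, acts_peak, macs

# Candidate space: three conv layers, each choosing filter count and stride.
candidates = product([(8, 1), (16, 2), (32, 2), (64, 2)], repeat=3)
feasible = []
for arch in candidates:
    wt, act, mc = estimate(arch)
    if wt <= FLASH_BUDGET and act <= SRAM_BUDGET and mc <= MAC_BUDGET:
        feasible.append((arch, wt, act, mc))

# Rank survivors by MAC count as a crude accuracy/latency proxy.
for arch, wt, act, mc in sorted(feasible, key=lambda x: -x[3])[:5]:
    print(arch, f"weights={wt/1024:.1f}KB", f"act={act/1024:.1f}KB", f"MACs={mc:,}")
```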

What Google’s Quantum Supremacy Claim Means for Quantum Computing

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/how-googles-quantum-supremacy-plays-into-quantum-computings-long-game

Google’s claim to have demonstrated quantum supremacy—one of the earliest and most hotly anticipated milestones on the long road toward practical quantum computing—was supposed to make its official debut in a prestigious science journal. Instead, an early leak of the research paper has sparked a frenzy of media coverage and some misinformed speculation about when quantum computers will be ready to crack the world’s computer security algorithms.

U.S. Military, Looking to Automate Post-Disaster Damage Recognition, Seeks a Winning Formula

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/computing/software/defense-department-launches-disastrous-computer-vision-contest

It seems like natural disasters are happening more and more frequently these days. Worse still, we seem ill prepared to deal with them. Even if it’s something we can see coming, like a hurricane, the path to recovery is often a confused mess as first responders scramble to figure out where to allocate resources. Remote sensing technology can help with this, but the current state of the art comes down to comparing aerial before-and-after images from disaster scenes by hand and trying to identify which locations were hit hardest.

To help with this problem, the Defense Innovation Unit (a sort of tech accelerator inside the Department of Defense) is sponsoring a challenge called xView2. Its goal: to develop a computer vision algorithm that can automate the process of detecting and labeling damage based on differences in before-and-after photos. And like all good challenges, there’s a big pile of money at the end for whoever manages to do the best job of it.

How to Secure App Pipelines in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-secure-app-pipelines-in-aws

Applications today have evolved into containers and microservices deployed in fully automated and distributed environments across data centers and in AWS. This webinar will focus on the security of the continuous integration and continuous deployment (CI/CD) pipeline and security automation. Join SANS and AWS Marketplace as they discuss how to improve and automate security across the entire CI/CD pipeline and runtime environment.

Attendees will learn:

  • App security concepts specific to cloud environments, including management of secrets, API security, serverless application security, and privilege management
  • The stages of a cloud-oriented development pipeline and how security can tie into the CI/CD pipeline
  • How to mitigate disruptions caused by perimeter-based and legacy security tools that don’t fit into CI/CD practices
  • Solutions available in AWS Marketplace to help you secure your app pipeline in your AWS environment

High-Speed Digital Back to Basics Seminar

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/highspeed-digital-back-to-basics-seminar

Whether you are new to high-speed digital design or a seasoned veteran, the goal of this seminar is to give you the knowledge to take to the lab and use immediately for faster time to test and increased confidence in your digital designs. 

Master the fundamentals of High-Speed Digital measurements

Keysight’s Complimentary High-Speed Digital Back to Basics Seminar is coming to the following locations:

  • Oct 3 – Cambridge, MA
  • Oct 10 – Irvine, CA

Join us for a day of face-to-face networking with Keysight experts and others in your field. Enjoy lunch and a product fair, and attend the following technical sessions:

  • How to avoid signal integrity problems with a network analyzer
  • Understanding the fundamentals of oscilloscopes and their applications
  • Creating complex and custom waveforms with an arbitrary waveform generator
  • Conducting accurate receiver validation tests with a BERT

Goodbye, Motherboard. Hello, Silicon-Interconnect Fabric

Post Syndicated from Puneet Gupta original https://spectrum.ieee.org/computing/hardware/goodbye-motherboard-hello-siliconinterconnect-fabric

The need to make some hardware systems tinier and tinier and others bigger and bigger has been driving innovations in electronics for a long time. The former can be seen in the progression from laptops to smartphones to smart watches to hearables and other “invisible” electronics. The latter defines today’s commercial data centers—megawatt-devouring monsters that fill purpose-built warehouses around the world. Interestingly, the same technology is limiting progress in both arenas, though for different reasons.

The culprit, we contend, is the printed circuit board. And the solution is to get rid of it.

Our research shows that the printed circuit board could be replaced with the same material that makes up the chips that are attached to it, namely silicon. Such a move would lead to smaller, lighter-weight systems for wearables and other size-constrained gadgets, and also to incredibly powerful high-performance computers that would pack dozens of servers’ worth of computing capability onto a dinner-plate-size wafer of silicon.

This all-silicon technology, which we call silicon-interconnect fabric, allows bare chips to be connected directly to wiring on a separate piece of silicon. Unlike connections on a printed circuit board, the wiring between chips on our fabric is just as small as wiring within a chip. Many more chip-to-chip connections are thus possible, and those connections are able to transmit data faster while using less energy.

Silicon-interconnect fabric, or Si-IF, offers an added bonus. It’s an excellent path toward the dissolution of the (relatively) big, complicated, and difficult-to-manufacture systems-on-chips that currently run everything from smartphones to supercomputers. In place of SoCs, system designers could use a conglomeration of smaller, simpler-to-design, and easier-to-manufacture chiplets tightly interconnected on an Si-IF. This chiplet revolution is already well under way, with AMD, Intel, Nvidia, and others offering chiplets assembled inside of advanced packages. Silicon-interconnect fabric expands that vision, breaking the system out of the package to include the entire computer.

To understand the value of eliminating the printed circuit board, consider what happens with a typical SoC. Thanks to Moore’s Law, a 1-square-centimeter piece of silicon can pack pretty much everything needed to drive a smartphone. Unfortunately, for a variety of reasons that mostly begin and end with the printed circuit board, this sliver of silicon is then put inside a (usually) plastic package that can be as much as 20 times as large as the chip itself.

The size difference between chip and package creates at least two problems. First, the volume and weight of the packaged chip are much greater than those of the original piece of silicon. Obviously, that’s a problem for all things that need to be small, thin, and light. Second, if the final hardware requires multiple chips that talk to one another (and most systems do), then the distance that signals need to travel increases by more than a factor of 10. That distance is a speed and energy bottleneck, especially if the chips exchange a lot of data. This choke point is perhaps the biggest problem for data-intensive applications such as graphics, machine learning, and search. To make matters worse, packaged chips are difficult to keep cool. Indeed, heat removal has been a limiting factor in computer systems for decades.

If these packages are such a problem, why not just remove them? Because of the printed circuit board.

The purpose of the printed circuit board is, of course, to connect chips, passive components, and other devices into a working system. But it’s not an ideal technology. PCBs are difficult to make perfectly flat and are prone to warpage. Chip packages usually connect to the PCB via a set of solder bumps, which are melted and resolidified during the manufacturing process. The limitations of solder technology combined with surface warpage mean these solder bumps can be no less than 0.5 millimeters apart. In other words, you can pack no more than 400 connections per square centimeter of chip area. For many applications, that’s far too few connections to deliver power to the chip and get signals in and out. For example, the small area taken up by one of the Intel Atom processor’s dies has only enough room for a hundred 0.5-mm connections, falling short of what it needs by 300. Designers use the chip package to make the connection-per-unit-area math work. The package takes tiny input/output connections on the silicon chip—ranging from 1 to 50 micrometers wide—and fans them out to the PCB’s 500-µm scale.

Recently, the semiconductor industry has tried to limit the problems of printed circuit boards by developing advanced packaging, such as silicon interposer technology. An interposer is a thin layer of silicon on which a small number of bare silicon chips are mounted and linked to each other with a larger number of connections than could be made between two packaged chips. But the interposer and its chips must still be packaged and mounted on a PCB, so this arrangement adds complexity without solving any of the other issues. Moreover, interposers are necessarily thin, fragile, and limited in size, which means it is difficult to construct large systems on them.

We believe that a better solution is to get rid of packages and PCBs altogether and instead bond the chips onto a relatively thick (500-µm to 1-mm) silicon wafer. Processors, memory dies, analog and RF chiplets, voltage-regulator modules, and even passive components such as inductors and capacitors can be bonded directly to the silicon. Compared with the usual PCB material—a fiberglass and epoxy composite called FR-4—a silicon wafer is rigid and can be polished to near perfect flatness, so warping is no longer an issue. What’s more, because the chips and the silicon substrate expand and contract at the same rate as they heat and cool, you no longer need a large, flexible link like a solder bump between the chip and the substrate.

Solder bumps can be replaced with micrometer-scale copper pillars built onto the silicon substrate. Using thermal compression—which basically is precisely applied heat and force—the chip’s copper I/O ports can then be directly bonded to the pillars. Careful optimization of the thermal-compression bonding can produce copper-to-copper bonds that are far more reliable than soldered bonds, with fewer materials involved.

Eliminating the PCB and its weaknesses means the chip’s I/O ports can be spaced as little as 10 µm apart instead of 500 µm. We can therefore pack 2,500 times as many I/O ports on the silicon die without needing the package as a space transformer.
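
The density arithmetic is easy to check. Here is a short Python sketch using the pitch figures quoted above; the die area at the end is merely implied by the article’s Atom example, not a published specification.

```python
# Quick check of the connection-density arithmetic quoted above.
def connections_per_cm2(pitch_mm):
    """Connections on a square grid with the given pitch, per square centimeter."""
    per_side = 10.0 / pitch_mm          # 1 cm = 10 mm
    return per_side ** 2

pcb_density  = connections_per_cm2(0.5)     # 0.5-mm solder bumps on a PCB
siif_density = connections_per_cm2(0.010)   # 10-um copper pillars on Si-IF

print(f"PCB-style bumps: {pcb_density:,.0f} connections per cm^2")    # 400
print(f"Si-IF pillars:   {siif_density:,.0f} connections per cm^2")   # 1,000,000
print(f"Improvement:     {siif_density / pcb_density:,.0f}x")         # 2,500x

# The article's Atom example: a die area that fits only ~100 bumps at 0.5-mm
# pitch falls well short of the ~400 connections the chip needs.
die_area_cm2 = 100 / pcb_density
print(f"Die area implied by 100 bumps: {die_area_cm2:.2f} cm^2")      # 0.25 cm^2
```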

Even better, we can leverage standard semiconductor manufacturing processes to make multiple layers of wiring on the Si-IF. These traces can be much finer than those on a printed circuit board. They can be less than 2 µm apart, compared with a PCB’s 500 µm. The technology can even achieve chip-to-chip spacing of less than 100 µm, compared with 1 mm or more using a PCB. The result is that an Si-IF system saves space and power and cuts down on the time it takes signals to reach their destinations.

Furthermore, unlike PCB and chip-package materials, silicon is a reasonably good conductor of heat. Heat sinks can be mounted on both sides of the Si-IF to extract more heat—our estimates suggest up to 70 percent more. Removing more heat lets processors run faster.

Although silicon has very good tensile strength and stiffness, it is somewhat brittle. Fortunately, the semiconductor industry has developed methods over the decades for handling large silicon wafers without breaking them. And when Si-IF–based systems are properly anchored and processed, we expect them to meet or exceed most reliability tests, including resistance to shock, thermal cycling, and environmental stresses.

There’s no getting around the fact that the material cost of crystalline silicon is higher than that of FR-4. Although there are many factors that contribute to cost, the cost per square millimeter of an 8-layer PCB can be about one-tenth that of a 4-layer Si-IF wafer. However, our analysis indicates that when you remove the cost of packaging and complex circuit-board construction and factor in the space savings of Si-IF, the difference in cost is negligible, and in many cases Si-IF comes out ahead.

Let’s look at a few examples of how Si-IF integration can benefit a computer system. In one study of server designs, we found that using packageless processors based on Si-IF can double the performance of conventional processors because of the higher connectivity and better heat dissipation. Even better, the size of the silicon “circuit board” (for want of a better term) can be reduced from 1,000 cm2 to 400 cm2. Shrinking the system that much has real implications for data-center real estate and the amount of cooling infrastructure needed. At the other extreme, we looked at a small Internet of Things system based on an Arm microcontroller. Using Si-IF here not only shrinks the size of the board by 70 percent but also reduces its weight from 20 grams to 8 grams.

Apart from shrinking existing systems and boosting their performance, Si-IF should let system designers create computers that would otherwise be impossible, or at least extremely impractical.

A typical high-performance server contains two to four processors on a PCB. But some high-performance computing applications need multiple servers. Communication latency and bandwidth bottlenecks arise when data needs to move across different processors and PCBs. But what if all the processors were on the same wafer of silicon? These processors could be integrated nearly as tightly as if the whole system were one big processor.

This concept was first proposed by Gene Amdahl at his company Trilogy Systems. Trilogy failed because manufacturing processes couldn’t yield enough working systems. There is always the chance of a defect when you’re making a chip, and the likelihood of a defect increases exponentially with the chip’s area. If your chip is the size of a dinner plate, you’re almost guaranteed to have a system-killing flaw somewhere on it.
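
The usual way to quantify that intuition is an exponential yield model, in which the probability of a defect-free die falls off exponentially with area for a given defect density. The Python sketch below is illustrative only; the defect density and areas are assumptions, not data from the article.

```python
# Illustrative Poisson yield model: Y = exp(-D * A), where D is the defect
# density and A the die area. The numbers below are assumptions for the sketch.
import math

D = 0.1   # defects per cm^2 (assumed)

def yield_poisson(area_cm2, defect_density=D):
    return math.exp(-defect_density * area_cm2)

monolithic_area = 400.0            # one dinner-plate-size "chip", in cm^2 (assumed)
dielet_area = 1.0                  # one small dielet, in cm^2 (assumed)
n_dielets = 400                    # dielets needed to cover the same area

y_monolithic = yield_poisson(monolithic_area)
y_one_dielet = yield_poisson(dielet_area)

print(f"Monolithic wafer-scale die yield: {y_monolithic:.2e}")   # essentially zero
print(f"Single 1 cm^2 dielet yield:       {y_one_dielet:.2%}")   # roughly 90%

# With known-good dielets bonded onto a simple, high-yield Si-IF substrate,
# the system is assembled only from dies that already passed test, so it
# avoids the exp(-D*A) penalty of one giant chip.
expected_good_per_wafer = n_dielets * y_one_dielet
print(f"Expected good dielets out of {n_dielets}: {expected_good_per_wafer:.0f}")
```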

But with silicon-interconnect fabric, you can start with chiplets, which we already know can be manufactured without flaws, and then link them to form a single system. A group of us at the University of California, Los Angeles, and the University of Illinois at Urbana-Champaign architected such a wafer-scale system comprising 40 GPUs. In simulations, it sped calculations more than fivefold and cut energy consumption by 80 percent when compared with an equivalently sized 40-GPU system built using state-of-the-art multichip packages and printed circuit boards.

These are compelling results, but the task wasn’t easy. We had to take a number of constraints into account, including how much heat could be removed from the wafer, how the GPUs could most quickly communicate with one another, and how to deliver power across the entire wafer.

Power turned out to be a major constraint. At a chip’s standard 1-volt supply, the wafer’s narrow wiring would consume a full 2 kilowatts. Instead, we chose to up the supply voltage to 12 V, reducing the amount of current needed and therefore the power consumed. That solution required spreading voltage regulators and signal-conditioning capacitors all around the wafer, taking up space that might have gone to more GPU modules. Encouraged by the early results, we are now building a prototype wafer-scale computing system, which we hope to complete by the end of 2020.
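
The power-delivery trade-off is plain Ohm’s-law arithmetic: delivering the same power at 12 times the voltage requires one-twelfth the current, and resistive loss in the wafer’s wiring scales with the square of that current. Here is a small Python sketch with assumed numbers; only the roughly 2-kilowatt loss at 1 V echoes the article, and the wiring resistance is an illustrative assumption chosen to reproduce it.

```python
# Illustrative I^2*R comparison for on-wafer power delivery. The delivered
# power and wiring resistance are assumptions for the sketch; the point is
# only that raising V cuts I, and loss scales as I^2.
def wiring_loss(delivered_power_w, supply_v, wiring_resistance_ohm):
    current = delivered_power_w / supply_v          # I = P / V
    return current ** 2 * wiring_resistance_ohm     # P_loss = I^2 * R

P_SYSTEM = 2000.0      # watts delivered to the GPU modules (assumed)
R_WIRING = 0.0005      # effective wiring resistance in ohms (assumed)

for volts in (1.0, 12.0):
    loss = wiring_loss(P_SYSTEM, volts, R_WIRING)
    print(f"{volts:>4.0f} V supply: I = {P_SYSTEM / volts:7.1f} A, "
          f"wiring loss = {loss:8.1f} W")

# Going from 1 V to 12 V cuts the current by 12x and the I^2*R loss by 144x,
# at the cost of on-wafer voltage regulators to step 12 V back down to ~1 V.
```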

Silicon-interconnect fabric could play a role in an important trend in the computer industry: the dissolution of the system-on-chip (SoC) into integrated collections of dielets, or chiplets. (We prefer the term dielets to chiplets because it emphasizes the nature of a bare silicon die, its small size, and the possibility that it might not be fully functional without other dielets on the Si-IF.) Over the past two decades, a push toward better performance and cost reduction compelled designers to replace whole sets of chips with ever larger integrated SoCs. Despite their benefits (especially for high-volume systems), SoCs have plenty of downsides.

For one, an SoC is a single large chip, and as already mentioned, ensuring good yield for a large chip is very difficult, especially when state-of-the-art semiconductor manufacturing processes are involved. (Recall that chip yield drops roughly exponentially as the chip area grows.) Another drawback of SoCs is their high one-time design and manufacturing costs, such as the US $2 million or more for the photolithography masks, which can make SoCs basically unaffordable for most designs. What’s more, any change in the design or upgrade of the manufacturing process, even a small one, requires significant redesign of the entire SoC. Finally, the SoC approach tries to force-fit all of the subsystem designs into a single manufacturing process, even if some of those subsystems would perform better if made using a different process. As a result, nothing within the SoC achieves its peak performance or efficiency.

The packageless Si-IF integration approach avoids all of these problems while retaining the SoC’s small size and performance benefits and providing design and cost benefits, too. It breaks up the SoC into its component systems and re-creates it as a system-on-wafer or system–on–Si-IF (SoIF).

Such a system is composed of independently fabricated small dielets, which are connected on the Si-IF. The minimum separation between the dielets (a few tens of micrometers) is comparable to that between two functional blocks within an SoC. The wiring on the Si-IF is the same as that used within the upper levels of an SoC, and therefore the interconnect density is comparable as well.

The advantages of the SoIF approach over SoCs stem from the size of the dielet. Small dielets are less expensive to make than a large SoC because, as we mentioned before, you get a higher yield of working chips when the chips are smaller. The only thing that’s large about the SoIF is the silicon substrate itself. The substrate is unlikely to have a yield issue because it’s made up of just a few easy-to-fabricate layers. Most yield loss in chipmaking comes from defects in the transistor layers or in the ultradense lower metal layers, and a silicon-interconnect fabric has neither.

Beyond that, an SoIF would have all the advantages that industry is looking for by moving to chiplets. For example, upgrading an SoIF to a new manufacturing node should be cheaper and easier. Each dielet can have its own manufacturing technology, and only the dielets that are worth upgrading would need to be changed. Those dielets that won’t get much benefit from a new node’s smaller transistors won’t need a redesign. This heterogeneous integration allows you to build a completely new class of systems that mix and match dielets of various generations and of technologies that aren’t usually compatible with CMOS. For example, our group recently demonstrated the attachment of an indium phosphide die to an SoIF for potential use in high-frequency circuits.

Because the dielets would be fabricated and tested before being connected to the SoIF, they could be used in different systems, amortizing their cost significantly. As a result, the overall cost to design and manufacture an SoIF can be as much as 70 percent less than for an SoC, by our estimate. This is especially true for large, low-volume systems like those for the aerospace and defense industries, where the demand is for only a few hundred to a few thousand units. Custom systems are also easier to make as SoIFs, because both design costs and time shrink.

We think the effect on system cost and diversity has the potential to usher in a new era of innovation where novel hardware is affordable and accessible to a much larger community of designers, startups, and universities.

Over the last few years, we’ve made significant progress on Si-IF integration technology, but a lot remains to be done. First and foremost is the demonstration of a commercially viable, high-yield Si-IF manufacturing process. Patterning wafer-scale Si-IF may require innovations in “maskless” lithography. Most lithography systems used today can make patterns only about 33 by 24 mm in size. Ultimately, we’ll need something that can cast a pattern onto a 300-mm-diameter wafer.

We’ll also need mechanisms to test bare dielets as well as unpopulated Si-IFs. The industry is already making steady progress in bare die testing as chipmakers begin to move toward chiplets in advanced packages and 3D integration.

Next, we’ll need new heat sinks or other thermal-dissipation strategies that take advantage of silicon’s good thermal conductivity. With our colleagues at UCLA, we have been developing an integrated wafer-scale cooling and power-delivery solution called PowerTherm.

In addition, the chassis, mounts, connectors, and cabling for silicon wafers need to be engineered to enable complete systems.

We’ll also need to make several changes to design methodology to deliver on the promise of SoIFs. Si-IF is a passive substrate—it’s just conductors, with no switches—and therefore the interdielet connections need to be short. For longer connections that might have to link distant dielets on a wafer-scale system, we’ll need intermediate dielets to help carry data further. Design algorithms that do layout and pin assignments will need an overhaul in order to take advantage of this style of integration. And we’ll need to develop new ways of exploring different system architectures that leverage the heterogeneity and upgradability of SoIFs.

We also need to consider system reliability. If a dielet is found to be faulty after bonding or fails during operation, it will be very difficult to replace. Therefore, SoIFs, especially large ones, need to have fault tolerance built in. Fault tolerance could be implemented at the network level or at the dielet level. At the network level, interdielet routing will need to be able to bypass faulty dielets. At the dielet level, we can consider physical redundancy tricks like using multiple copper pillars for each I/O port.

Of course, the benefit of dielet assembly depends heavily on having useful dielets to integrate into new systems. At this stage, the industry is still figuring out which dielets to make. You can’t simply make a dielet for every subsystem of an SoC, because some of the individual dielets would be too tiny to handle. One promising approach is to use statistical mining of existing SoC and PCB designs to identify which functions “like” to be physically close to each other. If these functions involve the same manufacturing technologies and follow similar upgrade cycles as well, then they should remain integrated on the same dielet.
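
One simple reading of “statistical mining” here is co-occurrence counting: across a corpus of existing SoC and PCB designs, tally how often pairs of functional blocks appear together, and treat frequently paired blocks as candidates for the same dielet, provided they also share a process node and upgrade cadence. The toy corpus and threshold in the Python sketch below are invented for illustration; the article does not specify the mining method.

```python
# Hypothetical sketch of co-occurrence mining over existing designs to decide
# which functions should share a dielet. The design list and threshold are invented.
from collections import Counter
from itertools import combinations

designs = [                                  # each entry: blocks in one design
    {"cpu", "l2_cache", "ddr_phy", "usb", "gpu"},
    {"cpu", "l2_cache", "ddr_phy", "wifi_mac"},
    {"cpu", "l2_cache", "gpu", "isp"},
    {"cpu", "l2_cache", "ddr_phy", "usb"},
]

pair_counts = Counter()
for blocks in designs:
    for a, b in combinations(sorted(blocks), 2):
        pair_counts[(a, b)] += 1

THRESHOLD = 3   # pairs seen in at least this many designs "like" to be together
together = [pair for pair, n in pair_counts.items() if n >= THRESHOLD]
print("Candidate co-located functions:", together)
# Pairs such as ('cpu', 'l2_cache') show up often enough that, if they also
# share a manufacturing process and upgrade cycle, they stay on one dielet.
```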

This might seem like a long list of issues to solve, but researchers are already dealing with some of them through the Defense Advanced Research Projects Agency’s Common Heterogeneous Integration and IP Reuse Strategies (CHIPS) program as well as through industry consortia. And if we can solve these problems, it will go a long way toward continuing the smaller, faster, and cheaper legacy of Moore’s Law. 

About the Authors

Puneet Gupta and Subramanian S. Iyer are both members of the electrical engineering department at the University of California at Los Angeles. Gupta is an associate professor, and Iyer is Distinguished Professor and the Charles P. Reames Endowed Chair.

Your Navigation App Is Making Traffic Unmanageable

Post Syndicated from Jane Macfarlane original https://spectrum.ieee.org/computing/hardware/your-navigation-app-is-making-traffic-unmanageable

Miguel Street is a winding, narrow route through the Glen Park neighborhood of San Francisco. Until a few years ago, only those living along the road traveled it, and they understood its challenges well. Now it’s packed with cars that use it as a shortcut from congested Mission Street to heavily traveled Market Street. Residents must struggle to get to their homes, and accidents are a daily occurrence.

The problem began when smartphone apps like Waze, Apple Maps, and Google Maps came into widespread use, offering drivers real-time routing around traffic tie-ups. An estimated 1 billion drivers worldwide use such apps.

Today, traffic jams are popping up unexpectedly in previously quiet neighborhoods around the country and the world. Along Adams Street, in the Boston neighborhood of Dorchester, residents complain of speeding vehicles at rush hour, many with drivers who stare down at their phones to determine their next maneuver. London shortcuts, once a secret of black-cab drivers, are now overrun with app users. Israel was one of the first to feel the pain because Waze was founded there; it quickly caused such havoc that a resident of the Herzliya Bet neighborhood sued the company.

The problem is getting worse. City planners around the world have predicted traffic on the basis of residential density, anticipating that some real-time adjustments will be necessary in particular circumstances. To handle those changes, they have installed tools like stoplights and metering lights, embedded loop sensors, variable message signs, radio transmissions, and dial-in messaging systems. For particularly tricky situations—an obstruction, event, or emergency—city managers sometimes dispatch a human being to direct traffic.

But now online navigation apps are in charge, and they’re causing more problems than they solve. The apps are typically optimized to keep an individual driver’s travel time as short as possible; they don’t care whether the residential streets can absorb the traffic or whether motorists who show up in unexpected places may compromise safety. Figuring out just what these apps are doing and how to make them better coordinate with more traditional traffic-management systems is a big part of my research at the University of California, Berkeley, where I am director of the Smart Cities Research Center.

Here’s how the apps evolved. Typically, the base road maps used by the apps represent roads as five functional classes, from multilane freeways down to small residential streets. Each class is designed to accommodate a different number of vehicles moving through per hour at speeds that are adjusted for local conditions. The navigation systems—originally available as dedicated gadgets or built into car dashboards and now in most smartphones—have long used this information in their routing algorithms to calculate likely travel time and to select the best route.

Initially, the navigation apps used these maps to search through all the possible routes to a destination. Although that worked well when users were sitting in their driveways, getting ready to set out on a trip, those searches were too computationally intensive to be useful for drivers already on the road. So software developers created algorithms that identify just a few routes, estimate the travel times of each, and select the best one. This approach might miss the fastest route, but it generally worked pretty well. Users could tune these algorithms to prefer certain types of roads over others—for example, to prefer highways or to avoid them.
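To make the idea concrete, here is a minimal sketch of time-based routing over such a classified road map, using free-flow speeds per functional class as edge weights. The tiny graph and the per-class speeds are assumptions for illustration; production routers work on continent-scale graphs with live data and route-pruning heuristics.

```python
# A minimal sketch of time-based routing over a functional-class road map.
# The graph, class speeds, and free-flow assumption are all illustrative.
import heapq

CLASS_SPEED_KPH = {1: 100, 2: 80, 3: 60, 4: 45, 5: 30}  # assumed per class

# node -> list of (neighbor, length_km, functional_class)
graph = {
    "A": [("B", 5.0, 1), ("C", 2.0, 5)],
    "B": [("D", 4.0, 1)],
    "C": [("D", 2.5, 5)],
    "D": [],
}

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm with edge weight = length / free-flow speed (hours)."""
    queue = [(0.0, start, [start])]
    settled = {}
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == goal:
            return t, path
        if node in settled and settled[node] <= t:
            continue
        settled[node] = t
        for nbr, length_km, fclass in graph[node]:
            dt = length_km / CLASS_SPEED_KPH[fclass]
            heapq.heappush(queue, (t + dt, nbr, path + [nbr]))
    return float("inf"), []

# The freeway route wins despite being longer: about 0.09 hours via A-B-D.
print(fastest_route(graph, "A", "D"))
```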

The digital mapping industry is a small one. Navteq (now Here Technologies) and TomTom, two of the earliest digital-map makers, got started about 30 years ago. They focused mainly on building the data sets, typically releasing updated maps quarterly. In between these releases, the maps and the routes suggested by the navigation apps didn’t change.

When navigation capabilities moved to apps on smartphones, the navigation system providers began collecting travel speeds and locations from all the users who were willing to let the app share their information. Originally, the system providers used these GPS traces as historical data in algorithms designed to estimate realistic speeds on the roads at different times of day. They integrated these estimates with the maps, identifying red, yellow, and green routes—where red meant likely congestion and green meant unrestricted flow.  
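A minimal sketch of that color coding might compare the average of historical GPS-trace speeds for a road segment with its free-flow speed, as below; the thresholds are assumptions, not the values any particular provider uses.

```python
# A minimal sketch of turning aggregated GPS-trace speeds into a
# red/yellow/green overlay. Thresholds are assumptions for illustration.
def congestion_color(observed_kph: float, free_flow_kph: float) -> str:
    ratio = observed_kph / free_flow_kph
    if ratio >= 0.75:
        return "green"   # near free-flow
    if ratio >= 0.40:
        return "yellow"  # slowed but moving
    return "red"         # likely congestion

# Historical traces for one road segment at 8 a.m. (hypothetical values).
segment_speeds_kph = [22.0, 18.5, 25.0, 20.0]
mean_speed = sum(segment_speeds_kph) / len(segment_speeds_kph)
print(congestion_color(mean_speed, free_flow_kph=50.0))  # -> "yellow"
```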

As the historical records of these GPS traces grew and the coverage and bandwidth of the cellular networks improved, developers started providing traffic information to users in nearly real time. Estimates were quite accurate for the more popular apps, which had the most drivers in a particular region.

And then, around 2013, Here Technologies, TomTom, Waze, and Google went beyond just flagging traffic jams ahead. They began offering real-time rerouting suggestions, considering current traffic on top of the characteristics of the road network. That gave their users opportunities to get around traffic slowdowns, and that’s how the chaos began.

On its face, real-time rerouting isn’t a problem. Cities do it all the time by adjusting the signal phase and timing of traffic lights or flashing detour alerts on signs. The real problem is that the navigation apps are not working with existing urban infrastructure to move the most traffic in the most efficient way.

First, the apps don’t account for the peculiarities of a given neighborhood. Remember the five classes of roads along with their estimated free-flow speeds I mentioned? That’s virtually all the apps know about the roads themselves. For example, Baxter Street in Los Angeles—also a scene of increased accidents due to app-induced shortcutting—is an extremely steep road that follows what originally was a network of goat paths. But to the apps, this road looks like any other residential road with a low speed limit. They assume it has parking on both sides and room for two-way traffic in between. They don’t take into account that it has a 32 percent grade and that when you’re at the top you can’t see the road ahead or oncoming cars. This blind spot has caused drivers to stop unexpectedly, leading to accidents on this once-quiet neighborhood street.

The algorithms also may not consider other characteristics of the path they choose. For example, does it include roads on which there are a lot of pedestrians? Does it pass by an elementary school? Does it include intersections that are difficult to cross, such as a small street crossing a major thoroughfare with no signal light assistance?

I recently experienced what such cluelessness can cause. I was in congested traffic on a multilane road when an app offered to get me out of the traffic by sending me into a residential neighborhood. It routed me right past an elementary school at 8:15 a.m. There were crossing guards, minivans double parked, kids jumping out of cars, and drivers facing the bright morning sun having a hard time seeing in the glare. I only added to the chaos.

On top of all these problems, the rerouting apps are out for themselves. Each takes a selfish view in which every vehicle competes for the fastest route to its destination. This can lead the routing algorithms to create new traffic congestion in unexpected places.

Consider cars crossing a thoroughfare without the benefit of a signal light. Perhaps the car on the smaller road has a stop sign. Likely, it was designed as a two-way stop because traffic on the larger road was typically light enough that the wait to cross was comfortably short. Add cars to that larger road, however, and breaks in the traffic become scarce, causing the line of cars waiting at the stop sign to back up onto neighboring streets. If you’re in the car on the larger road, you may be zipping along to your destination. But if you’re on the smaller road, you may have to wait a very long time to cross. And if the apps direct more and more cars to these neighborhood roads, as may happen when a nearby highway is experiencing abnormal delays, the backups build and the likelihood of accidents increases.
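A back-of-the-envelope calculation shows how quickly the side street gets starved. If main-road arrivals are roughly random (Poisson), the fraction of headways long enough to cross is about e^(-q·t_c), where q is the flow in vehicles per second and t_c is the critical gap a crossing driver needs. The 7-second critical gap below is an assumed, illustrative value.

```python
# A back-of-the-envelope sketch of why extra main-road traffic starves the
# side street. Assumes random (Poisson) arrivals and an illustrative 7 s gap.
import math

def usable_gap_fraction(flow_veh_per_hour: float, critical_gap_s: float = 7.0) -> float:
    q = flow_veh_per_hour / 3600.0          # vehicles per second
    return math.exp(-q * critical_gap_s)    # P(headway > critical gap)

for flow in (300, 600, 900):                # app-driven detours push flow up
    print(flow, round(usable_gap_fraction(flow), 2))
# 300 veh/h -> ~0.56 of gaps are crossable; 900 veh/h -> only ~0.17
```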

To compound the “selfish routing” problem, each navigation application provider—Google, Apple, Waze (now owned by Google)—operates independently. Each provider receives data streamed to its servers only from the devices of its users, which means that the penetration of its app colors the system’s understanding of reality. If the app’s penetration is low, the system may fall back on historical traffic speeds for the area instead of getting a good representation of existing congestion. So we have multiple players working independently with imperfect information and expecting that the entire road network is available to absorb their users in real time.

Meanwhile, city transportation engineers are busy managing traffic with the tools they have at their disposal, like those on-ramp metering lights, messaging signs, and radio broadcasts suggesting real-time routing adjustments that I mentioned previously. Their goal is to control the congestion, maintain a safe and effective travel network, and react appropriately to such things as accidents, sporting events, and, in emergency situations, evacuations.

The city engineers are also working in isolation, with incomplete information, because they have no idea what the apps are going to do at any moment. The city now loses its understanding of the amount of traffic demanding access to its roads. That’s a safety issue in the short term and a planning issue in the long term: It blinds the city to information it could use to develop better traffic-mitigation strategies—for example, urging businesses to consider different work shifts or fleet operators to consider different routes.

So you may have recently benefitted from one of these shortcuts, but it’s doubtful that you’re winning the long game. To do that takes thinking about the system as a whole and perhaps even considering aggregate fuel consumption and emissions. Only then can we use these rerouting algorithms for the benefit of all citizens and our environment.

In the meantime, neighborhoods and citizens are fighting back against the strangers using their streets as throughways. In the early days of the problem, around 2014, residents would try to fool the applications into believing there were accidents tying up traffic in their neighborhood by logging fake incidents into the app. Then some neighborhoods convinced their towns to install speed bumps, slowing down the traffic and giving a route a longer base travel time.

Leonia, N.J., simply closed many of its streets to through traffic during commute hours, levying heavy fines on nonresident drivers. Neighboring towns followed suit. All of them then faced an unintended consequence: local businesses lost customers who couldn’t get into town during those hours.

The city of Los Angeles recently responded to the issues on Baxter Street by recasting the street as one-way: downhill only. It’s still not ideal; it means longer trips for residents coming and going from their homes, but it reduced the chaos.

An unfortunate situation in Los Angeles during the 2017 wildfires clearly demonstrated the lack of congruence between the rerouting apps and traditional traffic management: The apps directed drivers onto streets that were being closed by the city, right into the heart of the fire. This is not the fault of the algorithms; it is simply extremely difficult to maintain an up-to-date understanding of the roads during fast-moving events. But it does illustrate why city officials need a way to connect with or even override these apps. Luckily, the city had a police officer in the area, who was able to physically turn traffic away onto a safer route.

These are mere stopgap measures; they serve to reduce, not improve, overall mobility. What we really want is a socially optimum state in which the average travel time is minimized everywhere. Traffic engineers call this state system optimum equilibrium, one of the two Wardrop principles of equilibrium. How do we merge the app-following crowds with an engineered flow of traffic that at least moves toward a socially optimized system, using the control mechanisms we have on hand? We can begin by pooling everyone’s view of the real-time state of the road network. But getting everybody in the data pool won’t be easy. It is a David and Goliath story—some players like Google and Apple have massive back-office digital infrastructures to run these operations, while many cities have minimal funding for advanced technology development. Without the ability to invest in new technology, cities can’t catch up with these big technology providers and instead fall back on regulation. For example, Portland, Ore., Seattle, and many other cities have lowered the speed limits on residential streets to 20 miles per hour.
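The gap between selfish routing and the system optimum shows up even in a classic two-route textbook example, sketched below; the network and numbers are illustrative, not drawn from any real city.

```python
# A minimal sketch of selfish (user-equilibrium) routing versus the system
# optimum, using the classic two-route "Pigou" example. Route 1's travel time
# grows with the share of traffic on it, t1(x) = x hours; route 2 is a fixed
# 1-hour detour. All numbers are illustrative.
def average_time(x: float) -> float:
    """Average travel time when a fraction x of drivers takes route 1."""
    return x * x + (1.0 - x) * 1.0   # x drivers suffer t1(x) = x, the rest 1 hour

# User equilibrium: no driver can do better by switching, so everyone piles
# onto route 1 until t1(x) = 1, i.e. x = 1.
ue_avg = average_time(1.0)                                # 1.00 hour

# System optimum: choose x to minimize the average time (here, x = 0.5).
so_avg = min(average_time(x / 100) for x in range(101))   # ~0.75 hour

print(f"selfish routing: {ue_avg:.2f} h, coordinated routing: {so_avg:.2f} h")
```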

There are better ways. We must convince the app makers that if they share information with one another and with city governments, the rerouting algorithms could consider a far bigger picture, including information from the physical infrastructure, such as the timing schedule for traffic lights and meters and vehicle counts from static sensors, including cameras and inductive loops. This data sharing would make their apps better while simultaneously giving city traffic planners a helping hand.

As a first step, we should form public-private partnerships among the navigation app providers, city traffic engineering organizations, and even transportation companies like Uber and Lyft. Sharing all this information would help us figure out how to best reduce congestion and manage our mobility.

We have a number of other hurdles to overcome before all the apps and infrastructure tools can work together well enough to optimize traffic flow for everyone.

The real challenge with traffic control is the enormous scale of the problem. Using the flood of data from app users along with the data from city sensors will require a new layer of data analytics that takes the key information and combines it, anonymizes it, and puts it in a form that can be more easily digested by government-operated traffic management systems.
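One way to picture that analytics layer is the small sketch below, which aggregates raw per-device GPS pings into per-segment counts and average speeds, dropping device identifiers in the process. The record format and field names are hypothetical.

```python
# A minimal sketch of the combine-and-anonymize step described above.
# The ping format and segment IDs are hypothetical; a real pipeline would
# also handle map matching, time binning, and stronger anonymity guarantees.
from collections import defaultdict

pings = [  # (device_id, road_segment_id, speed_kph) -- raw, identifiable data
    ("dev-123", "seg-42", 21.0),
    ("dev-456", "seg-42", 19.5),
    ("dev-123", "seg-43", 55.0),
]

aggregate = defaultdict(lambda: {"count": 0, "speed_sum": 0.0})
for _device, segment, speed in pings:        # device ID is read but never stored
    aggregate[segment]["count"] += 1
    aggregate[segment]["speed_sum"] += speed

# What gets shared with the city: segment-level counts and mean speeds only.
shared = {
    seg: {"vehicles": d["count"], "mean_kph": round(d["speed_sum"] / d["count"], 1)}
    for seg, d in aggregate.items()
}
print(shared)  # {'seg-42': {'vehicles': 2, 'mean_kph': 20.2}, 'seg-43': ...}
```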

We also need to develop simulation software that can use all this data to model the dynamics of our mobility on an urban scale. Developing this software is a key topic of current research sponsored by the U.S. Department of Energy’s Energy Efficient Mobility Systems program and involving Here Technologies and three national laboratories: Lawrence Berkeley, Argonne, and Pacific Northwest. I am involved with this research program through the Berkeley lab, where I am a guest scientist in the Sustainable Transportation Initiative. To date, a team supported by this program, led by me and staffed by researchers from the three laboratories, has developed simulations for a number of large cities that can run in just minutes on DOE supercomputers. In the past, such simulations took days or weeks. I expect that new approaches to manage congestion that account for the many complexities of the problem will emerge from these simulations.

In one of our projects, we took 22 million origin-and-destination pairs—or trip legs, as defined by the San Francisco County Transportation Authority—and created a simulation for the San Francisco Bay Area that defines the shortest travel time route for each leg as well as the congestion patterns on each route for a full day. We added an algorithm that reroutes vehicles when the simulation anticipates significant congestion. We discovered that approximately 40,000 vehicles are typically rerouted per hour at the peak congestion times in the morning and 120,000 vehicles are rerouted per hour in the evening congestion period; an incident on a highway, of course, would cause these numbers to jump.
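For a flavor of what rerouting on anticipated congestion can mean in code, here is a minimal sketch of a volume-to-capacity trigger; the threshold and the leg data are assumptions for illustration, not the actual logic of the DOE-sponsored simulation.

```python
# A minimal sketch of a congestion-triggered reroute decision: if the
# anticipated volume-to-capacity ratio on a trip leg's current route exceeds
# a threshold, the vehicle is offered a detour. Data and threshold are assumed.
def should_reroute(anticipated_volume: int, capacity_veh_per_hour: int,
                   threshold: float = 0.9) -> bool:
    return anticipated_volume / capacity_veh_per_hour > threshold

legs = [  # (leg id, anticipated volume, capacity) -- hypothetical values
    ("leg-101", 1850, 2000),
    ("leg-102", 2300, 2400),
    ("leg-103", 900, 1800),
]
rerouted = [leg_id for leg_id, vol, cap in legs if should_reroute(vol, cap)]
print(rerouted)  # ['leg-101', 'leg-102'] would be offered alternate routes
```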

This simulation demonstrates how much traffic planners can do to rebalance traffic flow, and it provides numbers that, right now, are not directly available. The next question is how much of the road network you want to use, trading off highway congestion for some additional traffic on neighborhood roads.

Our next step will be to modify our algorithm to consider neighborhood constraints. We know, for example, that we don’t want to reroute traffic into school zones during drop-off and pickup times, and that we should modify navigation algorithms appropriately.

We hope to soon put these tools in the hands of government transportation agencies.

That’s what we’re trying to do with technology to address the problem. But there are nontechnical hurdles as well. For example, location data can contain personal information that cannot be shared indiscriminately. And current business models may make for-profit companies reluctant to give away data that has value.

Solving both the technical and nontechnical issues will require research and public-private partnerships before we can assemble this cooperative ecosystem. But as we learn more about what drives the dynamics of our roads, we will be able to develop effective routing and traffic controls that take into account neighborhood concerns, the business objectives of fleet owners, and people’s health and convenience.

I am confident that most people, when well informed, would be open to a little inconvenience in the furtherance of the common good. Wouldn’t you be willing to drive a few extra minutes to spare a neighborhood and improve the environment?

This article appears in the October 2019 print issue as “When Apps Rule the Road.”

About the Author

Jane Macfarlane is director of the Smart Cities Research Center at the University of California, Berkeley’s Institute of Transportation Studies, where she works on data analytics for emerging transportation issues.

Advanced low-frequency noise measurement system: 9812DX

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/advanced-low-frequency-noise-measurement-system-9812dx

The 9812DX system, consisting of three current amplifiers and one voltage amplifier, fully demonstrates its superior capabilities and versatility by measuring the low-frequency noise characteristics of on-wafer transistors over a wide range of bias voltage (up to 200 V), bias current (up to 200 mA), and operating frequency (0.03 Hz to 10 MHz), down to an extremely low noise resolution of 10⁻²⁷ A²/Hz.

Quantum Computing Software Startup Aliro Emerges From Stealth Mode

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/software/quantum-computing-software-startup-aliro-emerges-from-stealth-mode

There are a lot of different types of quantum computers. Arguably, none of them are ready to make a difference in the real world. But some startups are betting that they’re getting so close that it’s time to make it easy for regular software developers to take advantage of these machines. Boston-based Aliro Technologies is one such startup.

Aliro emerged from stealth mode today, revealing that it had attracted US $2.7 million from investors that include Crosslink Ventures, Flybridge Capital Partners, and Samsung NEXT’s Q Fund. The company was founded by Prineha Narang, a Harvard assistant professor of computational materials science, along with two of her students and a postdoctoral researcher.

Aliro is planning a software stack that will let ordinary developers first determine whether available cloud-based quantum hardware can speed up any of their workloads more effectively than other accelerators, such as GPUs, and then write code that takes advantage of that speedup.

U.S. Energy Department is First Customer for World’s Biggest Chip

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/us-energy-department-is-first-customer-for-worlds-biggest-chip

Argonne National Laboratory and Lawrence Livermore National Laboratory will be among the first organizations to install AI computers made from the largest silicon chip ever built. Last month, Cerebras Systems unveiled a 46,225-square-millimeter chip with 1.2 trillion transistors designed to speed the training of neural networks. Today, such training is often done in large data centers using GPU-based servers. Cerebras plans to begin selling computers based on the notebook-size chip in the fourth quarter of this year.