Tag Archives: Computing/Hardware

Cerebras Unveils First Installation of Its AI Supercomputer at Argonne National Labs

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/cerebras-unveils-ai-supercomputer-argonne-national-lab-first-installation

At Supercomputing 2019 in Denver, Colo., Cerebras Systems unveiled the computer powered by the world’s biggest chip. Cerebras says the computer, the CS-1, has the equivalent machine learning capabilities of hundreds of racks’ worth of GPU-based computers consuming hundreds of kilowatts, but it takes up only one-third of a standard rack and consumes about 17 kW. Argonne National Laboratory, future home of what’s expected to be the United States’ first exascale supercomputer, says it has already deployed a CS-1. Argonne is one of two announced U.S. national laboratory customers for Cerebras, the other being Lawrence Livermore National Laboratory.

Developing Purpose-Built & Turnkey RF Applications

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/developing-purposebuilt-turnkey-rf-applications

This ThinkRF white paper explores how systems integrators (SIs) can develop a purpose-built, turnkey RF application that lets end users understand the spectrum environment and improve their business.


Save Time with Ready-To-Use Measurements

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/save-time-with-readytouse-measurements

The right measurement applications can increase the functionality of your signal analyzer and reduce your time to insight with ready-to-use measurements, built-in results displays, and standards conformance tests. They can also help ensure consistent measurement results across different teams and your design cycle. This efficiency means you can spend less time setting up measurements and more time evaluating and improving your designs. Learn about general-purpose or application-specific measurements that can help save you time and maintain measurement consistency in this eBook.


Register for Our Application Note “Tips and Tricks on How to Verify Control Loop Stability”

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/register-for-our-application-note-tips-and-tricks-on-how-to-verify-control-loop-stability

The application note explains the main measurement concept, guides the user through the measurements, and covers the key topics in a practical manner. Wherever possible, it points out where the user should pay particular attention.

The Latest Techniques in Power Supply Test – Get the App Note

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/the-latest-techniques-in-power-supply-test-get-the-app-note

DC electronic loads are becoming more popular in test systems as more electronic devices convert or store energy. Learn about Keysight’s next-generation electronic loads, which enable a complete DC power-conversion solution on the popular N6700 modular power system.


Understanding ADC Bits

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/white-paper-understanding-adc-bits

Did you know your oscilloscope’s effective number of bits (ENOB) is just as important as the number of ADC bits? The ADC bit count is one of the most widely known specifications, and many engineers rely on it as the sole measure of an oscilloscope’s quality. However, the importance of ADC bits is often exaggerated while other critical indicators of signal integrity get pushed to the background. Learn about the major impacts ENOB has on your measurements.
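
As a point of reference (not drawn from the white paper itself), ENOB is conventionally derived from the measured signal-to-noise-and-distortion ratio (SINAD) of a full-scale sine-wave input. A minimal sketch of that standard relation:

```python
# Illustrative sketch (not from the white paper): the standard relation between
# measured SINAD and effective number of bits (ENOB) for a full-scale sine input.
def enob_from_sinad(sinad_db: float) -> float:
    """ENOB = (SINAD - 1.76 dB) / 6.02 dB per bit."""
    return (sinad_db - 1.76) / 6.02

# Example: a nominal 10-bit scope whose measured SINAD is 50 dB
# delivers roughly 8 effective bits.
print(round(enob_from_sinad(50.0), 2))  # -> 8.01
```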


Google’s Quantum Tech Milestone Excites Scientists and Spurs Rivals

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/googles-quantum-tech-milestone-excites-scientists-and-spurs-rivals

Quantum computing can already seem like the realm of big business these days, with tech giants such as Google, IBM, and Intel developing quantum tech hardware. But even as rivals reacted to Google’s announcement of having shown quantum computing’s advantage over the most powerful supercomputer, scientists have welcomed the demonstration as providing crucial experimental evidence to back up theoretical research in quantum physics.

Nonlinear Magnetic Materials Modeling Webinar

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/nonlinear-magnetic-materials-modeling

In this webinar, you will learn how to model ferromagnetic materials and other nonlinear magnetic materials in the COMSOL® software.

Ferromagnetic materials exhibit saturation and hysteresis. These effects are major challenges in the design of electric motors and transformers because they affect the iron loss. In addition, the loss of ferromagnetic properties at elevated temperatures (above the Curie temperature) is an important nonlinear multiphysics effect in, for example, induction heating. It can also cause permanent material degradation in permanent-magnet motors.
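
To make the idea of saturation concrete, here is a minimal numerical sketch of an anhysteretic B-H curve using a Langevin-style magnetization model. The parameter values are hypothetical placeholders, not material data from the webinar or from COMSOL:

```python
import math

# Minimal illustration of magnetic saturation (values are hypothetical, not from
# the webinar or COMSOL): a Langevin-style anhysteretic B-H curve. At low field the
# response is nearly linear; at high field B flattens toward mu0*(H + Ms).
MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
MS = 1.6e6             # assumed saturation magnetization, A/m
A = 1.0e3              # assumed shape parameter, A/m

def b_anhysteretic(h: float) -> float:
    """Flux density B(H) for a Langevin anhysteretic magnetization curve."""
    if abs(h) < 1e-9:
        m = 0.0
    else:
        x = h / A
        m = MS * (1.0 / math.tanh(x) - 1.0 / x)   # Langevin function
    return MU0 * (h + m)

for h in (1e2, 1e3, 1e4, 1e5):
    print(f"H = {h:8.0f} A/m  ->  B = {b_anhysteretic(h):.3f} T")
```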

This webinar will demonstrate how to build high-fidelity finite element models of ferromagnetic devices using COMSOL Multiphysics®. The presentation concludes with a Q&A session.

PRESENTERS:

Magnus Olsson, Technology Manager, COMSOL

Magnus Olsson joined COMSOL in 1996 and currently leads development for the electromagnetic design products. He holds an MSc in engineering physics and a PhD in plasma physics and fusion research. Prior to joining COMSOL, he worked as a consulting specialist in electromagnetic computations for the Swedish armed forces.

 

Attendees of this IEEE Spectrum webinar have the opportunity to earn PDHs or Continuing Education Certificates! To request your certificate, you will need a code. Once you have registered and viewed the webinar, send a request to [email protected] for a webinar code. Then complete the form here: http://innovationatwork.ieee.org/spectrum/

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.

Dream Your Future with CERN!

Post Syndicated from Cern original https://spectrum.ieee.org/computing/hardware/dream-your-future-with-cern

On 14 and 15 September, CERN opened its doors to the public for its Open Days, a unique opportunity to witness the incredible work going on behind the scenes of an organisation whose mission is to answer the fundamental questions of the universe. More than 75,000 visitors of all ages and backgrounds came to CERN’s many visit points, which offered more than 100 activities, guided by 3,000 dedicated and passionate volunteers eager to share the wonders of this unique place to work.

CERN is the world’s largest particle physics research centre. It is an incredible place, with its myriad accelerators, detectors, computing infrastructure and experiments that serve to research the origins of our universe. Seeing it for oneself is the only way to understand and realise the sheer enormity of what is going on here. We traditionally welcome over 110,000 visitors per year, numbers that grow all the time. It is a very popular place to visit at any time, as its ranking on Tripadvisor confirms.

Every five years, CERN enters a ‘Long Shutdown’ phase for essential upgrades and maintenance work that lasts several months, and this is the ideal opportunity to open CERN up to the public with its Open Days, for people to see, experience and take in what science on this scale actually looks like. The theme of these Open Days was “Explore the future with us”, with the aim of engaging visitors in how we work at CERN and in the process of science: a human endeavour driven by values of openness, diversity and peaceful collaboration.

You can of course visit CERN at any time, although on a more limited scale than during the Open Days. While they are in operation, the Large Hadron Collider and its detectors are inaccessible. In the regular annual shutdown periods, limited underground visits are possible but cannot be guaranteed; however, there are many interesting places to visit above ground at all times, with free-of-charge visits and tours on offer. Furthermore, if coming in person is not feasible, people can take virtual tours, notably of the LHC and the computing centre.

Who works at CERN? A common misconception about CERN is that all employees work in physics. CERN’s mission is to uncover the mysteries of our universe, and it is known as the largest physics laboratory in the world, so in many ways this misconception comes from a logical assumption. What is probably less tangible and less well understood by the public is that to achieve this level of cutting-edge particle physics research, you need the infrastructure and tools to perform it: the accelerators, detectors, technology, computing and a whole host of other disciplines. CERN employs 2,600 staff members to build, operate and maintain this infrastructure, which is in turn used by a worldwide community of physicists to perform their world-class research.

Of the 2,600 staff members, only 3% are research physicists – CERN’s core hiring needs are for engineers, technicians and support staff in a wide variety of disciplines, spanning electricity, mechanics, electronics, materials science, vacuum technology and, of course, computing. Let’s not forget that CERN is the birthplace of the World Wide Web, and advances in computing are key here – it’s a great place to work as a software or hardware engineer!

Working at CERN is enriching on so many levels. It is a privilege to be part of an organisation with such a noble mission, one that unites people from all over the world around values that truly speak to me: diversity, commitment, creativity, integrity and professionalism. Every day is a new opportunity to learn, discover and grow. The benefits of working at CERN are plentiful, and the quality of life offered in the Geneva region is remarkable. We often say it’s working in a place like nowhere else on earth! So don’t hesitate to come find out for yourself, on a visit or … by joining us as a student, a graduate or a professional. Apply now and take part! https://careers.cern

Key parameters for selecting RF inductors

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/key-parameters-for-selecting-r-f-inductors

Download this application note to learn the key selection criteria an engineer needs to understand in order to properly evaluate and specify RF inductors, including inductance value, current rating, DC resistance (DCR), self-resonant frequency (SRF) and more.
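
As one illustration of how these parameters interact (an example of my own, not taken from the application note), the self-resonant frequency is set by the inductance together with the part’s parasitic winding capacitance:

```python
import math

# Illustrative sketch (values are hypothetical, not from the application note):
# an RF inductor's self-resonant frequency (SRF) is set by its inductance L and
# its parasitic winding capacitance C: f_srf = 1 / (2*pi*sqrt(L*C)).
def self_resonant_frequency(l_henries: float, c_farads: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Example: a 100 nH inductor with 0.2 pF of parasitic capacitance
# resonates near 1.1 GHz; above that frequency it behaves capacitively.
f = self_resonant_frequency(100e-9, 0.2e-12)
print(f"SRF ~ {f/1e9:.2f} GHz")
```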

What Google’s Quantum Supremacy Claim Means for Quantum Computing

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/how-googles-quantum-supremacy-plays-into-quantum-computings-long-game

Google’s claim to have demonstrated quantum supremacy—one of the earliest and most hotly anticipated milestones on the long road toward practical quantum computing—was supposed to make its official debut in a prestigious science journal. Instead, an early leak of the research paper has sparked a frenzy of media coverage and some misinformed speculation about when quantum computers will be ready to crack the world’s computer security algorithms.

Goodbye, Motherboard. Hello, Silicon-Interconnect Fabric

Post Syndicated from Puneet Gupta original https://spectrum.ieee.org/computing/hardware/goodbye-motherboard-hello-siliconinterconnect-fabric

The need to make some hardware systems tinier and tinier and others bigger and bigger has been driving innovations in electronics for a long time. The former can be seen in the progression from laptops to smartphones to smart watches to hearables and other “invisible” electronics. The latter defines today’s commercial data centers—megawatt-devouring monsters that fill purpose-built warehouses around the world. Interestingly, the same technology is limiting progress in both arenas, though for different reasons.

The culprit, we contend, is the printed circuit board. And the solution is to get rid of it.

Our research shows that the printed circuit board could be replaced with the same material that makes up the chips that are attached to it, namely silicon. Such a move would lead to smaller, lighter-weight systems for wearables and other size-constrained gadgets, and also to incredibly powerful high-performance computers that would pack dozens of servers’ worth of computing capability onto a dinner-plate-size wafer of silicon.

This all-silicon technology, which we call silicon-interconnect fabric, allows bare chips to be connected directly to wiring on a separate piece of silicon. Unlike connections on a printed circuit board, the wiring between chips on our fabric is just as small as wiring within a chip. Many more chip-to-chip connections are thus possible, and those connections are able to transmit data faster while using less energy.

Silicon-interconnect fabric, or Si-IF, offers an added bonus. It’s an excellent path toward the dissolution of the (relatively) big, complicated, and difficult-to-manufacture systems-on-chips that currently run everything from smartphones to supercomputers. In place of SoCs, system designers could use a conglomeration of smaller, simpler-to-design, and easier-to-manufacture chiplets tightly interconnected on an Si-IF. This chiplet revolution is already well under way, with AMD, Intel, Nvidia, and others offering chiplets assembled inside of advanced packages. Silicon-interconnect fabric expands that vision, breaking the system out of the package to include the entire computer.

To understand the value of eliminating the printed circuit board, consider what happens with a typical SoC. Thanks to Moore’s Law, a 1-square-centimeter piece of silicon can pack pretty much everything needed to drive a smartphone. Unfortunately, for a variety of reasons that mostly begin and end with the printed circuit board, this sliver of silicon is then put inside a (usually) plastic package that can be as much as 20 times as large as the chip itself.

The size difference between chip and package creates at least two problems. First, the volume and weight of the packaged chip are much greater than those of the original piece of silicon. Obviously, that’s a problem for all things that need to be small, thin, and light. Second, if the final hardware requires multiple chips that talk to one another (and most systems do), then the distance that signals need to travel increases by more than a factor of 10. That distance is a speed and energy bottleneck, especially if the chips exchange a lot of data. This choke point is perhaps the biggest problem for data-intensive applications such as graphics, machine learning, and search. To make matters worse, packaged chips are difficult to keep cool. Indeed, heat removal has been a limiting factor in computer systems for decades.

If these packages are such a problem, why not just remove them? Because of the printed circuit board.

The purpose of the printed circuit board is, of course, to connect chips, passive components, and other devices into a working system. But it’s not an ideal technology. PCBs are difficult to make perfectly flat and are prone to warpage. Chip packages usually connect to the PCB via a set of solder bumps, which are melted and resolidified during the manufacturing process. The limitations of solder technology combined with surface warpage mean these solder bumps can be no less than 0.5 millimeters apart. In other words, you can pack no more than 400 connections per square centimeter of chip area. For many applications, that’s far too few connections to deliver power to the chip and get signals in and out. For example, the small area taken up by one of the Intel Atom processor’s dies has only enough room for a hundred 0.5-mm connections, falling short of what it needs by 300. Designers use the chip package to make the connection-per-unit-area math work. The package takes tiny input/output connections on the silicon chip—ranging from 1 to 50 micrometers wide—and fans them out to the PCB’s 500-µm scale.
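
The arithmetic behind those figures is easy to check. Here is a sketch of the connection-density math, assuming a simple square grid of I/O bumps at the stated pitch (which is how the round numbers in this article work out); it also yields the 2,500-fold factor quoted below:

```python
# Back-of-envelope check of the article's connection-density numbers
# (a sketch, assuming a simple square grid of I/O bumps at the stated pitch).
def connections_per_cm2(pitch_mm: float) -> float:
    per_side = 10.0 / pitch_mm          # bumps along 1 cm = 10 mm
    return per_side ** 2

pcb_density = connections_per_cm2(0.5)    # solder bumps to a PCB: 0.5 mm pitch
siif_density = connections_per_cm2(0.01)  # copper pillars on Si-IF: 10 um pitch

print(pcb_density)                 # 400.0 connections per cm^2
print(siif_density / pcb_density)  # 2500.0x more I/O in the same area
```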

Recently, the semiconductor industry has tried to limit the problems of printed circuit boards by developing advanced packaging, such as silicon interposer technology. An interposer is a thin layer of silicon on which a small number of bare silicon chips are mounted and linked to each other with a larger number of connections than could be made between two packaged chips. But the interposer and its chips must still be packaged and mounted on a PCB, so this arrangement adds complexity without solving any of the other issues. Moreover, interposers are necessarily thin, fragile, and limited in size, which means it is difficult to construct large systems on them.

We believe that a better solution is to get rid of packages and PCBs altogether and instead bond the chips onto a relatively thick (500-µm to 1-mm) silicon wafer. Processors, memory dies, analog and RF chiplets, voltage-regulator modules, and even passive components such as inductors and capacitors can be bonded directly to the silicon. Compared with the usual PCB material—a fiberglass and epoxy composite called FR-4—a silicon wafer is rigid and can be polished to near perfect flatness, so warping is no longer an issue. What’s more, because the chips and the silicon substrate expand and contract at the same rate as they heat and cool, you no longer need a large, flexible link like a solder bump between the chip and the substrate.

Solder bumps can be replaced with micrometer-scale copper pillars built onto the silicon substrate. Using thermal compression—which basically is precisely applied heat and force—the chip’s copper I/O ports can then be directly bonded to the pillars. Careful optimization of the thermal-compression bonding can produce copper-to-copper bonds that are far more reliable than soldered bonds, with fewer materials involved.

Eliminating the PCB and its weaknesses means the chip’s I/O ports can be spaced as little as 10 µm apart instead of 500 µm. We can therefore pack 2,500 times as many I/O ports on the silicon die without needing the package as a space transformer.

Even better, we can leverage standard semiconductor manufacturing processes to make multiple layers of wiring on the Si-IF. These traces can be much finer than those on a printed circuit board. They can be less than 2 µm apart, compared with a PCB’s 500 µm. The technology can even achieve chip-to-chip spacing of less than 100 µm, compared with 1 mm or more using a PCB. The result is that an Si-IF system saves space and power and cuts down on the time it takes signals to reach their destinations.

Furthermore, unlike PCB and chip-package materials, silicon is a reasonably good conductor of heat. Heat sinks can be mounted on both sides of the Si-IF to extract more heat—our estimates suggest up to 70 percent more. Removing more heat lets processors run faster.

Although silicon has very good tensile strength and stiffness, it is somewhat brittle. Fortunately, the semiconductor industry has developed methods over the decades for handling large silicon wafers without breaking them. And when Si-IF–based systems are properly anchored and processed, we expect them to meet or exceed most reliability tests, including resistance to shock, thermal cycling, and environmental stresses.

There’s no getting around the fact that the material cost of crystalline silicon is higher than that of FR-4. Although there are many factors that contribute to cost, the cost per square millimeter of an 8-layer PCB can be about one-tenth that of a 4-layer Si-IF wafer. However, our analysis indicates that when you remove the cost of packaging and complex circuit-board construction and factor in the space savings of Si-IF, the difference in cost is negligible, and in many cases Si-IF comes out ahead.

Let’s look at a few examples of how Si-IF integration can benefit a computer system. In one study of server designs, we found that using packageless processors based on Si-IF can double the performance of conventional processors because of the higher connectivity and better heat dissipation. Even better, the size of the silicon “circuit board” (for want of a better term) can be reduced from 1,000 cm2 to 400 cm2. Shrinking the system that much has real implications for data-center real estate and the amount of cooling infrastructure needed. At the other extreme, we looked at a small Internet of Things system based on an Arm microcontroller. Using Si-IF here not only shrinks the size of the board by 70 percent but also reduces its weight from 20 grams to 8 grams.

Apart from shrinking existing systems and boosting their performance, Si-IF should let system designers create computers that would otherwise be impossible, or at least extremely impractical.

A typical high-performance server contains two to four processors on a PCB. But some high-performance computing applications need multiple servers. Communication latency and bandwidth bottlenecks arise when data needs to move across different processors and PCBs. But what if all the processors were on the same wafer of silicon? These processors could be integrated nearly as tightly as if the whole system were one big processor.

This concept was first proposed by Gene Amdahl at his company Trilogy Systems. Trilogy failed because manufacturing processes couldn’t yield enough working systems. There is always the chance of a defect when you’re making a chip, and the likelihood of a defect increases exponentially with the chip’s area. If your chip is the size of a dinner plate, you’re almost guaranteed to have a system-killing flaw somewhere on it.

But with silicon-interconnect fabric, you can start with chiplets, which we already know can be manufactured without flaws, and then link them to form a single system. A group of us at the University of California, Los Angeles, and the University of Illinois at Urbana-Champaign architected such a wafer-scale system comprising 40 GPUs. In simulations, it sped calculations more than fivefold and cut energy consumption by 80 percent when compared with an equivalently sized 40-GPU system built using state-of-the-art multichip packages and printed circuit boards.

These are compelling results, but the task wasn’t easy. We had to take a number of constraints into account, including how much heat could be removed from the wafer, how the GPUs could most quickly communicate with one another, and how to deliver power across the entire wafer.

Power turned out to be a major constraint. At a chip’s standard 1-volt supply, the wafer’s narrow wiring would consume a full 2 kilowatts. Instead, we chose to up the supply voltage to 12 V, reducing the amount of current needed and therefore the power consumed. That solution required spreading voltage regulators and signal-conditioning capacitors all around the wafer, taking up space that might have gone to more GPU modules. Encouraged by the early results, we are now building a prototype wafer-scale computing system, which we hope to complete by the end of 2020.
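
A back-of-envelope sketch of that power-delivery tradeoff follows. The load power and wiring resistance below are hypothetical placeholders chosen only to reproduce the roughly 2-kilowatt figure at 1 volt; they are not the actual values of the wafer-scale design:

```python
# A sketch of why the team raised the supply voltage (all numbers below are
# hypothetical placeholders, chosen only to reproduce the article's ~2 kW figure;
# they are not the actual wafer-scale design values).
P_LOAD = 10_000.0   # assumed power delivered to the GPU dielets, W
R_WIRING = 20e-6    # assumed effective resistance of the on-wafer power wiring, ohms

def distribution_loss(supply_volts: float) -> float:
    current = P_LOAD / supply_volts   # current the wiring must carry
    return current ** 2 * R_WIRING    # I^2 * R loss dissipated in the wiring

loss_1v = distribution_loss(1.0)    # ~2000 W: untenable at a standard 1 V rail
loss_12v = distribution_loss(12.0)  # ~14 W: 144x lower, since loss scales as 1/V^2
print(f"{loss_1v:.0f} W at 1 V vs {loss_12v:.1f} W at 12 V")
```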

Silicon-interconnect fabric could play a role in an important trend in the computer industry: the dissolution of the system-on-chip (SoC) into integrated collections of dielets, or chiplets. (We prefer the term dielets to chiplets because it emphasizes the nature of a bare silicon die, its small size, and the possibility that it might not be fully functional without other dielets on the Si-IF.) Over the past two decades, a push toward better performance and cost reduction compelled designers to replace whole sets of chips with ever larger integrated SoCs. Despite their benefits (especially for high-volume systems), SoCs have plenty of downsides.

For one, an SoC is a single large chip, and as already mentioned, ensuring good yield for a large chip is very difficult, especially when state-of-the-art semiconductor manufacturing processes are involved. (Recall that chip yield drops roughly exponentially as the chip area grows.) Another drawback of SoCs is their high one-time design and manufacturing costs, such as the US $2 million or more for the photolithography masks, which can make SoCs basically unaffordable for most designs. What’s more, any change in the design or upgrade of the manufacturing process, even a small one, requires significant redesign of the entire SoC. Finally, the SoC approach tries to force-fit all of the subsystem designs into a single manufacturing process, even if some of those subsystems would perform better if made using a different process. As a result, nothing within the SoC achieves its peak performance or efficiency.

The packageless Si-IF integration approach avoids all of these problems while retaining the SoC’s small size and performance benefits and providing design and cost benefits, too. It breaks up the SoC into its component systems and re-creates it as a system-on-wafer or system–on–Si-IF (SoIF).

Such a system is composed of independently fabricated small dielets, which are connected on the Si-IF. The minimum separation between the dielets (a few tens of micrometers) is comparable to that between two functional blocks within an SoC. The wiring on the Si-IF is the same as that used within the upper levels of an SoC, and therefore the interconnect density is comparable as well.

The advantages of the SoIF approach over SoCs stem from the size of the dielet. Small dielets are less expensive to make than a large SoC because, as we mentioned before, you get a higher yield of working chips when the chips are smaller. The only thing that’s large about the SoIF is the silicon substrate itself. The substrate is unlikely to have a yield issue because it’s made up of just a few easy-to-fabricate layers. Most yield loss in chipmaking comes from defects in the transistor layers or in the ultradense lower metal layers, and a silicon-interconnect fabric has neither.
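
To see why small dies win, here is a minimal sketch of the standard Poisson yield model, Y = exp(−area × defect density), which underlies the “yield drops roughly exponentially with area” rule of thumb. The defect density used here is a hypothetical placeholder, not a foundry figure:

```python
import math

# A minimal sketch of the "yield drops roughly exponentially with area" argument,
# using the simple Poisson defect model Y = exp(-area * defect_density).
# The defect density is a hypothetical placeholder, not a foundry figure.
D0 = 0.2  # assumed defects per cm^2

def die_yield(area_cm2: float) -> float:
    return math.exp(-area_cm2 * D0)

soc_area = 6.0       # one large 6 cm^2 SoC
dielet_area = 0.5    # the same logic split into twelve 0.5 cm^2 dielets

print(f"large SoC yield:     {die_yield(soc_area):.0%}")     # ~30%
print(f"single dielet yield: {die_yield(dielet_area):.0%}")  # ~90%
# Because dielets are tested before bonding, only known-good dies are assembled,
# so the assembled SoIF avoids the large-die yield penalty.
```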

Beyond that, an SoIF would have all the advantages that industry is looking for by moving to chiplets. For example, upgrading an SoIF to a new manufacturing node should be cheaper and easier. Each dielet can have its own manufacturing technology, and only the dielets that are worth upgrading would need to be changed. Those dielets that won’t get much benefit from a new node’s smaller transistors won’t need a redesign. This heterogeneous integration allows you to build a completely new class of systems that mix and match dielets of various generations and of technologies that aren’t usually compatible with CMOS. For example, our group recently demonstrated the attachment of an indium phosphide die to an SoIF for potential use in high-frequency circuits.

Because the dielets would be fabricated and tested before being connected to the SoIF, they could be used in different systems, amortizing their cost significantly. As a result, the overall cost to design and manufacture an SoIF can be as much as 70 percent less than for an SoC, by our estimate. This is especially true for large, low-volume systems like those for the aerospace and defense industries, where the demand is for only a few hundred to a few thousand units. Custom systems are also easier to make as SoIFs, because both design costs and time shrink.

We think the effect on system cost and diversity has the potential to usher in a new era of innovation where novel hardware is affordable and accessible to a much larger community of designers, startups, and universities.

Over the last few years, we’ve made significant progress on Si-IF integration technology, but a lot remains to be done. First and foremost is the demonstration of a commercially viable, high-yield Si-IF manufacturing process. Patterning wafer-scale Si-IF may require innovations in “maskless” lithography. Most lithography systems used today can make patterns only about 33 by 24 mm in size. Ultimately, we’ll need something that can cast a pattern onto a 300-mm-diameter wafer.

We’ll also need mechanisms to test bare dielets as well as unpopulated Si-IFs. The industry is already making steady progress in bare die testing as chipmakers begin to move toward chiplets in advanced packages and 3D integration.

Next, we’ll need new heat sinks or other thermal-dissipation strategies that take advantage of silicon’s good thermal conductivity. With our colleagues at UCLA, we have been developing an integrated wafer-scale cooling and power-delivery solution called PowerTherm.

In addition, the chassis, mounts, connectors, and cabling for silicon wafers need to be engineered to enable complete systems.

We’ll also need to make several changes to design methodology to deliver on the promise of SoIFs. Si-IF is a passive substrate—it’s just conductors, with no switches—and therefore the interdielet connections need to be short. For longer connections that might have to link distant dielets on a wafer-scale system, we’ll need intermediate dielets to help carry data further. Design algorithms that do layout and pin assignments will need an overhaul in order to take advantage of this style of integration. And we’ll need to develop new ways of exploring different system architectures that leverage the heterogeneity and upgradability of SoIFs.

We also need to consider system reliability. If a dielet is found to be faulty after bonding or fails during operation, it will be very difficult to replace. Therefore, SoIFs, especially large ones, need to have fault tolerance built in. Fault tolerance could be implemented at the network level or at the dielet level. At the network level, interdielet routing will need to be able to bypass faulty dielets. At the dielet level, we can consider physical redundancy tricks like using multiple copper pillars for each I/O port.

Of course, the benefit of dielet assembly depends heavily on having useful dielets to integrate into new systems. At this stage, the industry is still figuring out which dielets to make. You can’t simply make a dielet for every subsystem of an SoC, because some of the individual dielets would be too tiny to handle. One promising approach is to use statistical mining of existing SoC and PCB designs to identify which functions “like” to be physically close to each other. If these functions involve the same manufacturing technologies and follow similar upgrade cycles as well, then they should remain integrated on the same dielet.

This might seem like a long list of issues to solve, but researchers are already dealing with some of them through the Defense Advanced Research Projects Agency’s Common Heterogeneous Integration and IP Reuse Strategies (CHIPS) program as well as through industry consortia. And if we can solve these problems, it will go a long way toward continuing the smaller, faster, and cheaper legacy of Moore’s Law. 

About the Authors

Puneet Gupta and Subramanian S. Iyer are both members of the electrical engineering department at the University of California at Los Angeles. Gupta is an associate professor, and Iyer is Distinguished Professor and the Charles P. Reames Endowed Chair.

Your Navigation App Is Making Traffic Unmanageable

Post Syndicated from Jane Macfarlane original https://spectrum.ieee.org/computing/hardware/your-navigation-app-is-making-traffic-unmanageable

Miguel Street is a winding, narrow route through the Glen Park neighborhood of San Francisco. Until a few years ago, only those living along the road traveled it, and they understood its challenges well. Now it’s packed with cars that use it as a shortcut from congested Mission Street to heavily traveled Market Street. Residents must struggle to get to their homes, and accidents are a daily occurrence.

The problem began when smartphone apps like Waze, Apple Maps, and Google Maps came into widespread use, offering drivers real-time routing around traffic tie-ups. An estimated 1 billion drivers worldwide use such apps.

Today, traffic jams are popping up unexpectedly in previously quiet neighborhoods around the country and the world. Along Adams Street, in the Boston neighborhood of Dorchester, residents complain of speeding vehicles at rush hour, many with drivers who stare down at their phones to determine their next maneuver. London shortcuts, once a secret of black-cab drivers, are now overrun with app users. Israel was one of the first to feel the pain because Waze was founded there; it quickly caused such havoc that a resident of the Herzliya Bet neighborhood sued the company.

The problem is getting worse. City planners around the world have predicted traffic on the basis of residential density, anticipating that a certain amount of real-time adjustment will be necessary in particular circumstances. To handle those changes, they have installed tools like stoplights and metering lights, embedded loop sensors, variable message signs, radio transmissions, and dial-in messaging systems. For particularly tricky situations—an obstruction, event, or emergency—city managers sometimes dispatch a human being to direct traffic.

But now online navigation apps are in charge, and they’re causing more problems than they solve. The apps are typically optimized to keep an individual driver’s travel time as short as possible; they don’t care whether the residential streets can absorb the traffic or whether motorists who show up in unexpected places may compromise safety. Figuring out just what these apps are doing and how to make them better coordinate with more traditional traffic-management systems is a big part of my research at the University of California, Berkeley, where I am director of the Smart Cities Research Center.

Here’s how the apps evolved. Typically, the base road maps used by the apps represent roads as five functional classes, from multilane freeways down to small residential streets. Each class is designed to accommodate a different number of vehicles moving through per hour at speeds that are adjusted for local conditions. The navigation systems—originally available as dedicated gadgets or built into car dashboards and now in most smartphones—have long used this information in their routing algorithms to calculate likely travel time and to select the best route.

Initially, the navigation apps used these maps to search through all the possible routes to a destination. Although that worked well when users were sitting in their driveways, getting ready to set out on a trip, those searches were too computationally intensive to be useful for drivers already on the road. So software developers created algorithms that identify just a few routes, estimate the travel times of each, and select the best one. This approach might miss the fastest route, but it generally worked pretty well. Users could tune these algorithms to prefer certain types of roads over others—for example, to prefer highways or to avoid them.
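
For readers who want to see the mechanics, here is a minimal sketch of travel-time routing over a toy road graph. It illustrates the general technique, not any vendor’s actual algorithm; a production system works on millions of road segments with continuously refreshed speed estimates:

```python
import heapq

# A minimal sketch of travel-time routing over a road graph (a toy network, not any
# navigation vendor's actual algorithm). Edge weights are minutes of travel time,
# which a real app would keep updating from live speed data.
graph = {
    "home":       [("freeway_on", 3), ("miguel_st", 2)],
    "freeway_on": [("downtown", 12)],
    "miguel_st":  [("downtown", 9)],   # the "shortcut" looks faster on paper
    "downtown":   [],
}

def fastest_route(source: str, target: str):
    """Dijkstra's algorithm: returns (total minutes, list of nodes)."""
    queue = [(0.0, source, [source])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph[node]:
            if neighbor not in seen:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

print(fastest_route("home", "downtown"))  # (11.0, ['home', 'miguel_st', 'downtown'])
```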

The digital mapping industry is a small one. Navteq (now Here Technologies) and TomTom, two of the earliest digital-map makers, got started about 30 years ago. They focused mainly on building the data sets, typically releasing updated maps quarterly. In between these releases, the maps and the routes suggested by the navigation apps didn’t change.

When navigation capabilities moved to apps on smartphones, the navigation system providers began collecting travel speeds and locations from all the users who were willing to let the app share their information. Originally, the system providers used these GPS traces as historical data in algorithms designed to estimate realistic speeds on the roads at different times of day. They integrated these estimates with the maps, identifying red, yellow, and green routes—where red meant likely congestion and green meant unrestricted flow.  

As the historical records of these GPS traces grew and the coverage and bandwidth of the cellular networks improved, developers started providing traffic information to users in nearly real time. Estimates were quite accurate for the more popular apps, which had the most drivers in a particular region.

And then, around 2013, Here Technologies, TomTom, Waze, and Google went beyond just flagging traffic jams ahead. They began offering real-time rerouting suggestions, considering current traffic on top of the characteristics of the road network. That gave their users opportunities to get around traffic slowdowns, and that’s how the chaos began.

On its face, real-time rerouting isn’t a problem. Cities do it all the time by changing the signal, phase, and timing of traffic lights or flashing detour alerts on signs. The real problem is that the traffic management apps are not working with existing urban infrastructures to move the most traffic in the most efficient way.

First, the apps don’t account for the peculiarities of a given neighborhood. Remember the five classes of roads along with their estimated free-flow speeds I mentioned? That’s virtually all the apps know about the roads themselves. For example, Baxter Street in Los Angeles—also a scene of increased accidents due to app-induced shortcutting—is an extremely steep road that follows what originally was a network of goat paths. But to the apps, this road looks like any other residential road with a low speed limit. They assume it has parking on both sides and room for two-way traffic in between. They don’t take into account that it has a 32 percent grade and that when you’re at the top you can’t see the road ahead or oncoming cars. This blind spot has caused drivers to stop unexpectedly, causing accidents on this once-quiet neighborhood street.

The algorithms also may not consider other characteristics of the path they choose. For example, does it include roads on which there are a lot of pedestrians? Does it pass by an elementary school? Does it include intersections that are difficult to cross, such as a small street crossing a major thoroughfare with no signal light assistance?

I recently experienced what such cluelessness can cause. I was in congested traffic on a multilane road when an app offered to get me out of the traffic by sending me into a residential neighborhood. It routed me right past an elementary school at 8:15 a.m. There were crossing guards, double-parked minivans, kids jumping out of cars, and drivers facing the bright morning sun having a hard time seeing in the glare. I only added to the chaos.

On top of all these problems, these rerouting apps are all out for themselves. They take a selfish view in which each vehicle is competing for the fastest route to its destination. This can lead to the router creating new traffic congestion in unexpected places.

Consider cars crossing a thoroughfare without the benefit of a signal light. Perhaps the car on the smaller road has a stop sign. Likely, it was designed as a two-way stop because traffic on the larger road was typically light enough that the wait to cross was comfortably short. Add cars to that larger road, however, and breaks in the traffic become few, causing the line of cars waiting at the stop sign to flow onto neighboring streets. If you’re in the car on the larger road, you may be zipping along to your destination. But if you’re on the smaller road, you may have to wait a very long time to cross. And if the apps direct more and more cars to these neighborhood roads, as may happen when a nearby highway is experiencing abnormal delays, the backups build and the likelihood of accidents increases.

To compound the “selfish routing” problem, each navigation application provider—Google, Apple, Waze (now owned by Google)—operates independently. Each provider receives data streamed to its servers only from the devices of its users, which means that the penetration of its app colors the system’s understanding of reality. If the app’s penetration is low, the system may fall back on historical traffic speeds for the area instead of getting a good representation of existing congestion. So we have multiple players working independently with imperfect information and expecting that the entire road network is available to absorb their users in real time.

Meanwhile, city transportation engineers are busy managing traffic with the tools they have at their disposal, like those on-ramp metering lights, messaging signs, and radio broadcasts suggesting real-time routing adjustments that I mentioned previously. Their goal is to control the congestion, maintain a safe and effective travel network, and react appropriately to such things as accidents, sporting events, and, in emergency situations, evacuations.

The city engineers are also working in isolation, with incomplete information, because they have no idea what the apps are going to do at any moment. The city now loses its understanding of the amount of traffic demanding access to its roads. That’s a safety issue in the short term and a planning issue in the long term: It blinds the city to information it could use to develop better traffic-mitigation strategies—for example, urging businesses to consider different work shifts or fleet operators to consider different routes.

So you may have recently benefitted from one of these shortcuts, but it’s doubtful that you’re winning the long game. To do that takes thinking about the system as a whole and perhaps even considering aggregate fuel consumption and emissions. Only then can we use these rerouting algorithms for the benefit of all citizens and our environment.

In the meantime, neighborhoods and citizens are fighting back against the strangers using their streets as throughways. In the early days of the problem, around 2014, residents would try to fool the applications into believing there were accidents tying up traffic in their neighborhood by logging fake incidents into the app. Then some neighborhoods convinced their towns to install speed bumps, slowing down the traffic and giving a route a longer base travel time.

Leonia, a town in New Jersey, simply closed many of its streets to through traffic during commute hours, levying heavy fines on nonresident drivers. Neighboring towns followed suit. And all of them faced an unintended consequence: their local businesses lost customers who couldn’t get through town at those hours.

The city of Los Angeles recently responded to the issues on Baxter Street by recasting the street as one-way: downhill only. It’s still not ideal; it means longer trips for residents coming to and going from their homes, but it has reduced the chaos.

An unfortunate situation in Los Angeles during the 2017 wildfires clearly demonstrated the lack of congruence between the rerouting apps and traditional traffic management: The apps directed drivers onto streets that were being closed by the city, right into the heart of the fire. This is not the fault of the algorithms; it is simply extremely difficult to maintain an up-to-date understanding of the roads during fast-moving events. But it does illustrate why city officials need a way to connect with or even override these apps. Luckily, the city had a police officer in the area, who was able to physically turn traffic away onto a safer route.

These are mere stopgap measures; they serve to reduce, not improve, overall mobility. What we really want is a socially optimum state in which the average travel time is minimized everywhere. Traffic engineers call this state system optimum equilibrium, one of the two Wardrop principles of equilibrium. How do we merge the app-following crowds with an engineered flow of traffic that at least moves toward a socially optimized system, using the control mechanisms we have on hand? We can begin by pooling everyone’s view of the real-time state of the road network. But getting everybody in the data pool won’t be easy. It is a David and Goliath story—some players like Google and Apple have massive back-office digital infrastructures to run these operations, while many cities have minimal funding for advanced technology development. Without the ability to invest in new technology, cities can’t catch up with these big technology providers and instead fall back on regulation. For example, Portland, Ore., Seattle, and many other cities have lowered the speed limits on residential streets to 20 miles per hour.
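
The gap between selfish routing and the system optimum is easiest to see in the textbook two-route example often used to illustrate the Wardrop principles. The numbers below are a standard classroom illustration, not data from my research:

```python
# The textbook two-route (Pigou-style) example behind the Wardrop principles cited
# above. The numbers are a standard illustration, not data from the article.
# Route A: a wide arterial, always 10 minutes. Route B: a neighborhood shortcut whose
# time grows with the fraction x of drivers using it: 10*x minutes.

def average_time(x_on_shortcut: float) -> float:
    time_shortcut = 10.0 * x_on_shortcut
    time_arterial = 10.0
    return x_on_shortcut * time_shortcut + (1.0 - x_on_shortcut) * time_arterial

# User equilibrium (selfish routing): the shortcut is never slower than the arterial,
# so every driver takes it (x = 1) and everyone spends 10 minutes.
print(average_time(1.0))   # 10.0

# System optimum: splitting traffic evenly minimizes the average, at 7.5 minutes.
best_x = min((x / 100 for x in range(101)), key=average_time)
print(best_x, average_time(best_x))   # 0.5 7.5
```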

There are better ways. We must convince the app makers that if they share information with one another and with city governments, the rerouting algorithms could consider a far bigger picture, including information from the physical infrastructure, such as the timing schedule for traffic lights and meters and vehicle counts from static sensors, including cameras and inductive loops. This data sharing would make their apps better while simultaneously giving city traffic planners a helping hand.

As a first step, we should form public-private partnerships among the navigation app providers, city traffic engineering organizations, and even transportation companies like Uber and Lyft. Sharing all this information would help us figure out how to best reduce congestion and manage our mobility.

We have a number of other hurdles to overcome before all the apps and infrastructure tools can work together well enough to optimize traffic flow for everyone.

The real challenge with traffic control is the enormous scale of the problem. Using the flood of data from app users along with the data from city sensors will require a new layer of data analytics that takes the key information and combines it, anonymizes it, and puts it in a form that can be more easily digested by government-operated traffic management systems.

We also need to develop simulation software that can use all this data to model the dynamics of our mobility on an urban scale. Developing this software is a key topic of current research sponsored by the U.S. Department of Energy’s Energy Efficient Mobility Systems program and involving Here Technologies and three national laboratories: Lawrence Berkeley, Argonne, and Pacific Northwest. I am involved with this research program through the Berkeley lab, where I am a guest scientist in the Sustainable Transportation Initiative. To date, a team supported by this program, led by me and staffed by researchers from the three laboratories, has developed simulations for a number of large cities that can run in just minutes on DOE supercomputers. In the past, such simulations took days or weeks. I expect that new approaches to manage congestion that account for the many complexities of the problem will emerge from these simulations.

In one of our projects, we took 22 million origin-and-destination pairs—or trip legs, as defined by the San Francisco County Transportation Authority—and created a simulation for the San Francisco Bay Area that defines the shortest travel time route for each leg as well as the congestion patterns on each route for a full day. We added an algorithm that reroutes vehicles when the simulation anticipates significant congestion. We discovered that approximately 40,000 vehicles are typically rerouted per hour at the peak congestion times in the morning and 120,000 vehicles are rerouted per hour in the evening congestion period; an incident on a highway, of course, would cause these numbers to jump.

This simulation demonstrates how much traffic planners can do to rebalance traffic flow, and it provides numbers that, right now, are not directly available. The next question is how much of the road network you want to use, trading off highway congestion for some additional traffic on neighborhood roads.

Our next step will be to modify our algorithm to consider neighborhood constraints. We know, for example, that we don’t want to reroute traffic into school zones during drop-off and pickup times, and that we should modify navigation algorithms appropriately.

We hope to soon put these tools in the hands of government transportation agencies.

That’s what we’re trying to do with technology to address the problem. But there are nontechnical hurdles as well. For example, location data can contain personal information that cannot be shared indiscriminately. And current business models may make for-profit companies reluctant to give away data that has value.

Solving both the technical and nontechnical issues will require research and public-private partnerships before we can assemble this cooperative ecosystem. But as we learn more about what drives the dynamics of our roads, we will be able to develop effective routing and traffic controls that take into account neighborhood concerns, the business objectives of fleet owners, and people’s health and convenience.

I am confident that most people, when well informed, would be open to a little inconvenience in the furtherance of the common good. Wouldn’t you be willing to drive a few extra minutes to spare a neighborhood and improve the environment?

This article appears in the October 2019 print issue as “When Apps Rule the Road.”

About the Authors

Jane Macfarlane is director of the Smart Cities Research Center at the University of California, Berkeley’s Institute of Transportation Studies, where she works on data analytics for emerging transportation issues.

Advanced low-frequency noise measurement system: 9812DX

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/advanced-low-frequency-noise-measurement-system-9812dx

The 9812DX system, consisting of three current amplifiers and one voltage amplifier, demonstrates its capabilities and versatility by measuring the low-frequency noise characteristics of on-wafer transistors over a wide range of bias voltage (up to 200 V), bias current (up to 200 mA), and frequency (0.03 Hz to 10 MHz), down to an extremely low noise resolution of 1e-27 A²/Hz.

U.S. Energy Department is First Customer for World’s Biggest Chip

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/us-energy-department-is-first-customer-for-worlds-biggest-chip

Argonne National Laboratory and Lawrence Livermore National Laboratory will be among the first organizations to install AI computers made from the largest silicon chip ever built. Last month, Cerebras Systems unveiled a 46,225-square-millimeter chip with 1.2 trillion transistors designed to speed the training of neural networks. Today, such training is often done in large data centers using GPU-based servers. Cerebras plans to begin selling computers based on the notebook-size chip in the fourth quarter of this year.

Delivering on Quantum Innovation

Post Syndicated from University of Maryland original https://spectrum.ieee.org/computing/hardware/delivering-on-quantum-innovation

The University of Maryland (UMD) has announced the launch of the Quantum Technology Center (QTC), which aims to translate quantum physics research into innovative technologies.

The center will capitalize on the university’s strong research programs and partnerships in quantum science and systems engineering, and pursue collaborations with industry and government labs to help take promising quantum advances from the lab to the marketplace. QTC will also train students in the development and application of quantum technologies to produce a workforce educated in quantum-related engineering.

The launch of QTC comes at a pivotal time when quantum science research is expanding beyond physics into materials science, engineering, computer science, chemistry, and biology. Scientists across these disciplines are looking for ways to exploit quantum physics to build powerful computers, develop secure communication networks, and improve sensing and imaging capabilities. In the future, quantum technology could also impact fields such as artificial intelligence, energy, and medicine.

Fearless vision

The rules of quantum physics cover the shockingly strange behaviors of atoms and smaller particles. Technologies based on the first century of quantum physics research are close at hand in your daily life—in your smartphone’s billions of transistors and GPS navigation, for instance.

Today more radical quantum technologies are moving toward commercial reality.

UMD has long been a powerhouse in quantum research and is now accelerating this trend with the launch of QTC. Founded jointly by UMD’s A. James Clark School of Engineering and College of Computer, Mathematical, and Natural Sciences, QTC will translate quantum science to the marketplace.

“QTC will be a community that brings together different types of people and ideas to create new quantum technologies and train a new generation of quantum workforce,” says QTC founding Director Ronald Walsworth. “UMD will focus on developing these technologies in the early stages, and then translating them out to the wider world with diverse partners.”

Like UMD’s existing quantum research programs, QTC is expected to draw strong sponsorship from federal research agencies. National support for quantum research is on the upswing—most notably evidenced by the National Quantum Initiative, signed into law in December 2018, which authorizes $1.275 billion over five years for research. 

Quantum research on the rise

UMD already hosts more than 200 researchers in quantum science, one of the greatest concentrations in the world. Much of the effort has been led by the Joint Quantum Institute (JQI) and Joint Center for Quantum Information and Computer Science (QuICS), both partnerships between UMD and the National Institute of Standards and Technology. JQI and QuICS support many projects that cross boundaries in research disciplines and organizations; this trend will only increase with QTC on campus.

One prime example of constructively blurred lines comes from the research of Distinguished University Professor Chris Monroe. An international leader in isolating individual atoms for quantum computing and simulation, Monroe is a member of all three centers, and well-positioned to tap into the expertise of researchers in related disciplines. 

Professor Edo Waks and Associate Professor Mohammad Hafezi, both members of QTC and JQI, are also among the UMD researchers helping to form the next revolution of quantum research with groundbreaking work on devices for quantum information processing and quantum networks.

In one effort, Waks demonstrated the first single-photon transistor using a semiconductor chip. The device is compact; roughly one million of these new transistors could fit inside a single grain of salt. It is also fast and able to process 10 billion photonic qubits every second.

“Using our transistor, we should be able to perform quantum gates between photons,” says Waks. “Software running on a quantum computer would use a series of such operations to attain exponential speedup for certain computational problems.”

Hafezi studies the fundamental behaviors of light–matter interactions down to the single-photon level. He created the first silicon chip that can reliably constrain light to its four corners. The effect, which arises from interfering optical pathways, could eventually enable the creation of robust sources of quantum light.

“We have been developing integrated silicon photonic systems to realize ideas derived from topology in a physical system,” Hafezi says. “The fact that we use components compatible with current technology means that, if these systems are robust, they could possibly be translated into immediate applications.”

Grounding a quantum community

 “QTC will be a crucible for quantum science and engineering,” says Walsworth, a leader in quantum sensing who was recruited from Harvard University to lead the new center. “We’ll be building bridges between people, between sectors, between theories and technologies. There’s a kind of hunger for a community that pulls people together to pool information and find ways to overcome challenges in this exciting new area.”

According to Clark School Dean and Farvardin Professor Darryll Pines, UMD’s hiring of Walsworth signals an important next step in bringing engineering solutions to the forefront. “He’s the perfect representative to bridge the gap between physics and engineering, because he’s already been doing that himself,” says Pines.

In addition to his broad range of research accomplishments, Walsworth has acted as an advisor for corporations and co-founded two companies based in part on his lab’s work. Quantum Diamond Technologies is developing medical-diagnostics applications for quantum measurement technologies based on synthetic diamonds that operate at room temperature. Hyperfine Research is creating low-cost portable MRI machines.

“If you really want a new community of technology to flourish, you’ve got to have the applications right,” Walsworth adds. “You’ve got to be solving someone’s problems. Some people are busy building their technologies, but they don’t always know what the technologies are good for. Other people are out there complaining about how they can’t solve their problems, but they don’t know what technology exists that might help.” 

Making the match will require QTC researchers to seek out groups across and outside the university to talk about actual challenges where quantum technology might help.

“From a United States perspective, this is a big deal,” he says. “Quantum is one of those areas that requires enormous investment from the federal government, to advance our knowledge in this space. We hope this leads to opportunities that translate to real products with positive impact for people, society, and the U.S. economy.”

Descartes Labs Built a Top 500 Supercomputer From Amazon Cloud

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/descartes-labs-built-a-top-500-supercomputer-from-amazon-cloud

Cofounder Mike Warren talks about the future of high-performance computing in a data-rich, cloud computing world

Descartes Labs cofounder Mike Warren has had some notable firsts in his career, and a surprising number have had lasting impact. Back in 1998, for instance, his was the first Linux-based computer fast enough to gain a spot in the coveted Top 500 list of supercomputers. Today, they all run Linux. Now his company, which crunches geospatial and location data to answer hard questions, has achieved something else that may be indicative of where high-performance computing is headed: It has built the world’s 136th fastest supercomputer using just Amazon Web Services and Descartes Labs’ own software. In 2010, this would have been the most powerful computer on the planet.

Notably, Amazon didn’t do anything special for Descartes. Warren’s firm just plunked down US $5,000 on the company credit card for the use of a “high-network-throughput instance block” consisting of 41,472 processor cores and 157.8 gigabytes of memory. It then worked out some software to make the collection act as a single machine. Running the standard supercomputer test suite, called LinPack, the system reached 1,926.4 teraFLOPS (trillion floating point operations per second). (Amazon itself made an appearance much lower down on the Top 500 list a few years back, but that’s thought to have been for its own dedicated system in which Amazon was the sole user rather than what’s available to the public.)
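
For a sense of scale, here is a quick check of what those reported figures imply per core (simple arithmetic on the numbers above, nothing more):

```python
# Simple arithmetic on the reported benchmark figures.
tflops_total = 1926.4   # LinPack result, in teraFLOPS
cores = 41_472          # processor cores in the rented AWS instance block

gflops_per_core = tflops_total * 1_000 / cores
print(f"{gflops_per_core:.1f} GFLOPS per core")  # roughly 46.5
```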