Tag Archives: Computing/Hardware

Get a method to separate common-mode and differential-mode noise using two oscilloscope channels.

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/get-a-method-to-separate-commonmode-and-differentialmode-separation-using-two-oscilloscope-channels

Learn how to distinguish between common-mode (CM) and differential-mode (DM) noise. This additional information about the dominant mode provides the capability to optimize input filters very efficiently.

Superconducting Microprocessors? Turns Out They’re Ultra-Efficient

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/hardware/new-superconductor-microprocessor-yields-a-substantial-boost-in-efficiency

Computers use a staggering amount of energy today. According to one recent estimate, data centers alone consume two percent of the world’s electricity, a figure that’s expected to climb to eight percent by the end of the decade. To buck that trend, though, perhaps the microprocessor, at the center of the computer universe, could be streamlined in entirely new ways. 

One group of researchers in Japan has taken this idea to the limit, creating a superconducting microprocessor—one with zero electrical resistance. The new device, the first of its kind, is described in a study published last month in the IEEE Journal of Solid-State Circuits.

Superconductor microprocessors offer a potential path to more energy-efficient computing—but for the fact that, at present, these designs require ultra-cold temperatures below 10 kelvin (-263 degrees Celsius). The research group in Japan sought to create a superconductor microprocessor that’s adiabatic, meaning that, in principle, energy is neither gained nor lost from the system during the computing process.

While adiabatic semiconductor microprocessors exist, the new microprocessor prototype, called MANA (Monolithic Adiabatic iNtegration Architecture), is the world’s first adiabatic superconductor microprocessor. It’s composed of superconducting niobium and relies on hardware components called adiabatic quantum-flux-parametrons (AQFPs). Each AQFP is composed of a few fast-acting Josephson junction switches, which require very little energy to support superconductor electronics. The MANA microprocessor consists of more than 20,000 Josephson junctions (or more than 10,000 AQFPs) in total.

Christopher Ayala, an associate professor at the Institute of Advanced Sciences at Yokohama National University in Japan, helped develop the new microprocessor. “The AQFPs used to build the microprocessor have been optimized to operate adiabatically such that the energy drawn from the power supply can be recovered under relatively low clock frequencies up to around 10 GHz,” he explains. “This is low compared to the hundreds of gigahertz typically found in conventional superconductor electronics.”

This doesn’t mean that the group’s current-generation device hits 10 GHz speeds, however. In a press statement, Ayala added, “We also show on a separate chip that the data processing part of the microprocessor can operate up to a clock frequency of 2.5 GHz making this on par with today’s computing technologies. We even expect this to increase to 5-10 GHz as we make improvements in our design methodology and our experimental setup.” 

The price of entry for the niobium-based microprocessor is, of course, the cryogenics and the energy cost of cooling the system down to superconducting temperatures.

“But even when taking this cooling overhead into account,” says Ayala, “The AQFP is still about 80 times more energy-efficient when compared to the state-of-the-art semiconductor electronic device, [such as] 7-nm FinFET, available today.”

Since the MANA microprocessor requires liquid helium-level temperatures, it’s better suited for large-scale computing infrastructures like data centers and supercomputers, where cryogenic cooling systems could be used.

“Most of these hurdles—namely area efficiency and improvement of latency and power clock networks—are research areas we have been heavily investigating, and we already have promising directions to pursue,” he says.

Designing PCBs 5x Faster Despite the Pandemic

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/designing-pcbs-5x-faster-despite-the-pandemic

Join our webinar on January 20th to learn how Kinetic Vision uses Altium’s platform to enable a connected and frictionless PCB design experience, increasing their productivity fivefold even in the midst of Covid.

For over 30 years, Kinetic Vision has provided technology solutions to over 50 of the top Fortune 500 companies in the world. Hear about how their embedded development team needed a better design and collaboration solution to satisfy the increasing needs of their demanding clients and how Altium helped solve their toughest issues.

Altium Designer is their tool of choice for designing PCBs, as it provides the most connected PCB design experience, removing the common points of friction that occur throughout a typical design flow. With the Covid situation, using Altium’s platform became even more essential: it enabled seamless remote working and increased their productivity to five times the pre-Covid rate.

Join Jeremy Jarrett, Executive Vice President at Kinetic Vision, and Michael Weston, Team Lead Engineer at Kinetic Vision, as we discuss what exactly sets Altium apart from the rest of the PCB design solutions out there today, and why it is an essential tool for staying ahead of the game.

Register today!
January 20th | 10:00 AM PST | 1:00 PM EST

3D-printing hollow structures with “xolography”

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/xolography-printing

By shining light beams into liquid resin, a new 3-D printing technique dubbed “xolography” can generate complex hollow structures, including simple machines with moving parts, a new study finds.

“I like to imagine that it’s like the replicator from Star Trek,” says study co-author Martin Regehly, an experimental physicist at the Brandenburg University of Applied Science in Germany. “As you see the light sheet moving, you can see something created from nothing.”

IBM Makes Tape Storage Better Than Ever

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/nanoclast/computing/hardware/tape-is-back-and-better-than-ever

Introduced by strains of the Strauss waltz that served as the soundtrack for “2001: A Space Odyssey,” IBM demonstrated a new world record in magnetic tape storage capabilities in a live presentation this week from its labs in Zurich, Switzerland.

A small group of journalists looked on virtually as IBM scientists showed off a 29-fold increase in the storage capability of its current data-tape cartridge, from 20 terabytes (TB) to 580 TB. That’s roughly 32 times the capacity of LTO-Ultrium (Linear Tape-Open, version 9), the latest industry standard in magnetic tape products.

While these figures may sound quite impressive, some may wonder whether this story might have mistakenly come from a time capsule buried in the 1970s. But the fact is tape is back, by necessity.

Magnetic tape storage has been undergoing a renaissance in recent years, according to Mark Lantz, manager of CloudFPGA and tape technologies at IBM Zurich. This resurgence, Lantz argues, has been driven by the convergence of two trends: exponential data growth and a simultaneous slowing down of areal density in hard-disk drives (HDD).

In fact, the growth rate of HDD areal density has slowed down to under an 8% compound annual growth rate over the last several years, according to Lantz. This slowdown is occurring while data is growing worldwide to the point where it is expected to hit 175 zettabytes by 2025, representing a 61% annual growth rate.

This lack of HDD scaling has resulted in the price per gigabyte of HDD rising dramatically. Estimates put HDD bytes at four times the cost of tape bytes. This creates a troublesome imbalance at an extremely inopportune moment: just as the amount of data being produced is increasing exponentially, data centers can’t afford to store it.

Fortunately a large portion of the data being stored is what’s termed “cold,” meaning it hasn’t been accessed in a long time and is not needed frequently. These types of data can tolerate higher retrieval latencies, making magnetic tape well suited for the job.

Magnetic tape is also inherently more secure from cybercrime, requires less energy, provides long-term durability, and has a lower cost per gigabyte than HDD. Because of these factors, IBM estimates that more than 345,000 exabytes (EB) of data already reside in tape storage systems. In the midst of these market realities for data storage, IBM believes that its record-setting demonstration will enable tape to meet its scaling roadmap for the next decade.

The new record caps a 15-year journey that IBM has undertaken with Fujifilm to continuously push the capabilities of tape technology. After setting six records since 2006, IBM and Fujifilm achieved this latest big leap by improving three main areas of tape technology: the tape medium, a new tape head technology that makes novel use of an HDD detector for reading the data, and servo-mechanical technologies that ensure the tape tracks precisely.

For the new tape medium, Fujifilm set aside the current industry standard of barium ferrite particles and incorporated smaller strontium ferrite particles in a new tape coating, allowing for higher density storage on the same amount of tape.

With Fujifilm’s strontium ferrite particulate magnetic tape in hand, IBM developed a new low-friction tape head technology that could work with the very smooth surfaces of the new tape. IBM also used an ultra-narrow, 29-nm-wide tunnel-magnetoresistance (TMR) read sensor that enables reliable detection of data written on the strontium ferrite media at a linear density of 702 kilobits per inch.

IBM also developed a family of new servo-mechanical technologies for the system. This suite of technologies measures the position of the tape head on the tape and then adjusts that position so the data is written in the correct location. Transducers then scan the center of the tracks during read-back operation. In the aggregate, these new servo technologies made head positioning possible with a world-record accuracy of 3.2 nm, while the tape streams over the read head at a speed of about 15 km/h.

Alberto Pace, head of data storage for the European Organization for Nuclear Research (CERN), put the development in context: “Only 20 years ago, all the data produced by the old Large Electron-Positron (LEP) collider had to be held in the big data center. Today all that data from the old LEP fits into a cabinet in my office. I expect that in less than 20 years we will have all the data from the Large Hadron Collider that now resides in our current data center fitting into a small cabinet in my office.”

Augmented Reality and the Surveillance Society

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/hardware/augmented-reality-and-the-surveillance-society

First articulated in a 1965 white paper by Ivan Sutherland, titled “The Ultimate Display,” augmented reality (AR) lay beyond our technical capacities for 50 years. That changed when smartphones began providing people with a combination of cheap sensors, powerful processors, and high-bandwidth networking—the trifecta needed for AR to generate its spatial illusions. Among today’s emerging technologies, AR stands out as particularly demanding—for computational power, for sensed data, and, I’d argue, for attention to the danger it poses.

Unlike virtual-reality (VR) gear, which creates for the user a completely synthetic experience, AR gear adds to the user’s perception of her environment. To do that effectively, AR systems need to know where in space the user is located. VR systems originally used expensive and fragile systems for tracking user movements from the outside in, often requiring external sensors to be set up in the room. But the new generation of VR accomplishes this through a set of techniques collectively known as simultaneous localization and mapping (SLAM). These systems harvest a rich stream of observational data—mostly from cameras affixed to the user’s headgear, but sometimes also from sonar, lidar, structured light, and time-of-flight sensors—using those measurements to update a continuously evolving model of the user’s spatial environment.

For safety’s sake, VR systems must be restricted to certain tightly constrained areas, lest someone blinded by VR goggles tumble down a staircase. AR doesn’t hide the real world, though, so people can use it anywhere. That’s important because the purpose of AR is to add helpful (or perhaps just entertaining) digital illusions to the user’s perceptions. But AR has a second, less appreciated, facet: It also functions as a sophisticated mobile surveillance system.

This second quality is what makes Facebook’s recent Project Aria experiment so unnerving. Nearly four years ago, Mark Zuckerberg announced Facebook’s goal to create AR “spectacles”—consumer-grade devices that could one day rival the smartphone in utility and ubiquity. That’s a substantial technical ask, so Facebook’s research team has taken an incremental approach. Project Aria packs the sensors necessary for SLAM within a form factor that resembles a pair of sunglasses. Wearers collect copious amounts of data, which is fed back to Facebook for analysis. This information will presumably help the company to refine the design of an eventual Facebook AR product.

The concern here is obvious: When it comes to market in a few years, these glasses will transform their users into data-gathering minions for Facebook. Tens, then hundreds of millions of these AR spectacles will be mapping the contours of the world, along with all of its people, pets, possessions, and peccadilloes. The prospect of such intensive surveillance at planetary scale poses some tough questions about who will be doing all this watching and why.

To work well, AR must look through our eyes, see the world as we do, and record what it sees. There seems no way to avoid this hard reality of augmented reality. So we need to ask ourselves whether we’d really welcome such pervasive monitoring, why we should trust AR providers not to misuse the information they collect, or how they can earn our trust. Sadly, there’s not been a lot of consideration of such questions in our rush to embrace technology’s next big thing. But it still remains within our power to decide when we might allow such surveillance—and to permit it only when necessary.

This article appears in the January 2021 print issue as “AR’s Prying Eyes.”

Modeling and Simulation of 5G Antenna System Innovations

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/modeling-and-simulation-of-5g-antenna-system-innovations

Ansys has developed a breakthrough technology, known as 3D Component Domain Decomposition Method (3D Comp DDM), that enables the accurate and efficient simulation of antenna arrays. Whether solving complex, electrically large antenna arrays or relatively simple ones, this technology enables fast simulation without compromising accuracy. 3D Comp DDM enhances the simulation process, offering a robust and scalable solution for modeling realistic arrays while capturing finite array truncation effects.

In this presentation, a workflow for the analysis of 5G phased array antennas and hand-held antennas will be demonstrated.

Photonic Quantum Computer Displays “Supremacy” Over Supercomputers

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/photonic-quantum-computer-shows-advantage-over-supercomputer

When Google unveiled its unprecedented demonstration of quantum computational advantage over classical computing in late 2019, Chinese researchers reported progress toward their own milestone demonstration.

Just over a year later, the Chinese team has shown a different but no less exciting path forward for quantum computing by achieving the second known demonstration of quantum computational advantage—also known as quantum supremacy.

This new finding does more than just confirm that quantum computers can solve certain problems faster than conventional supercomputers. (That’s the “supremacy” part.) It also adds a surprising twist: It now appears possible to achieve phenomenal computational feats with lasers and nonlinear crystals. Photonic systems have sometimes been considered more of a bridge technology between quantum devices than the core of a quantum computer itself. 

No longer. 

AI System Beats Supercomputer in Combustion Simulation

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/ai-system-beats-supercomputer-at-key-scientific-simulation

Cerebras Systems, which makes a specialized AI computer based on the largest chip ever made, is breaking out of its original role as a neural-network training powerhouse and turning its talents toward more traditional scientific computing. In a simulation having 500 million variables, the CS-1 trounced the 69th-most powerful supercomputer in the world. 

It also solved the problem—combustion in a coal-fired power plant—faster than the real-world flame it simulates. To top it off, Cerebras and its partners at the U.S. National Energy Technology Laboratory (NETL) claim the CS-1 performed the feat faster than any present-day CPU- or GPU-based supercomputer could.

The research, which was presented this week at the supercomputing conference SC20, shows that Cerebras’ AI architecture “is not a one trick pony,” says Cerebras CEO Andrew Feldman.

Weather forecasting, design of airplane wings, predicting temperatures in a nuclear power plant, and many other complex problems are solved by simulating “the movement of fluids in space over time,” he says. The simulation divides the world up into a set of cubes, models the movement of fluid in those cubes, and determines the interactions between the cubes. There can be 1 million or more of these cubes and it can take 500,000 variables to describe what’s happening.

According to Feldman, solving that takes a computer system with lots of processor cores, tons of memory very close to the cores, oodles of bandwidth connecting the cores and the memory, and loads of bandwidth connecting the cores. Conveniently, that’s what a neural-network training computer needs, too. The CS-1 contains a single piece of silicon with 400,000 cores, 18 gigabytes of memory, 9 petabytes per second of memory bandwidth, and 100 petabits per second of core-to-core bandwidth.

Scientists at NETL simulated combustion in a power plant using both a Cerebras CS-1 and the Joule supercomputer, which has 84,000 CPU cores and consumes 450 kilowatts. By comparison, Cerebras runs on about 20 kilowatts. Joule completed the calculation in 2.1 milliseconds. The CS-1 was more than 200 times faster, finishing in 6 microseconds.

This speed has two implications, according to Feldman. One is that there is no combination of CPUs or even of GPUs today that could beat the CS-1 on this problem. He backs this up by pointing to the nature of the simulation—it does not scale well. Just as you can have too many cooks in the kitchen, throwing too many cores at a problem can actually slow the calculation down. Joule’s speed peaked when using 16,384 of its 84,000 cores.

The limitation comes from connectivity between the cores and between cores and memory. Imagine the volume to be simulated as a 370 x 370 x 370 stack of cubes (136,900 vertical stacks with 370 layers). Cerebras maps the problem to the wafer-scale chip by assigning the array of vertical stacks to a corresponding array of processor cores. Because of that arrangement, communicating the effects of one cube on another is done by transferring data between neighboring cores, which is as fast as it gets. And while each layer of the stack is computed, the data representing the other layers reside inside the core’s memory, where they can be quickly accessed.
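
To make that geometric mapping concrete, here is a minimal sketch of the idea. It is not Cerebras or NETL code: the grid size, variable names, and the toy neighbor-averaging update are illustrative assumptions, chosen only to show why each (x, y) column of cubes can live on one core and why only adjacent columns need to exchange data.

    # Illustrative sketch only: a tiny cube grid updated with a simple
    # neighbor average, standing in for the real combustion physics.
    import numpy as np

    NX, NY, NZ = 8, 8, 8   # toy grid; the article describes roughly 370 x 370 x 370

    def step(field: np.ndarray) -> np.ndarray:
        """One explicit time step: every interior cube takes the average of
        its six face neighbors (a stand-in for the real fluid update)."""
        new = field.copy()
        new[1:-1, 1:-1, 1:-1] = (
            field[:-2, 1:-1, 1:-1] + field[2:, 1:-1, 1:-1] +   # x neighbors
            field[1:-1, :-2, 1:-1] + field[1:-1, 2:, 1:-1] +   # y neighbors
            field[1:-1, 1:-1, :-2] + field[1:-1, 1:-1, 2:]     # z neighbors
        ) / 6.0
        return new

    # On the wafer-scale chip, core (x, y) would hold field[x, y, :] locally,
    # so the z-direction terms need no communication at all and the x/y terms
    # only touch the four adjacent cores.
    field = np.zeros((NX, NY, NZ))
    field[NX // 2, NY // 2, NZ // 2] = 1.0   # a single "hot" cube
    for _ in range(10):
        field = step(field)
    print(field.max())   # the spike has diffused outward, so the peak drops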

(Cerebras takes advantage of a similar kind of geometric mapping when training neural networks. [See sidebar “The Software Side of Cerebras,” January 2020.])

And because the simulation completed faster than the real-world combustion event being simulated, the CS-1 could now have a new job on its hands—playing a role in control systems for complex machines.

Feldman reports that the CS-1 has made inroads in the purpose for which it was originally built, as well. Drugmaker GlaxoSmithKline is a known customer, and the CS-1 is doing AI work at Argonne National Laboratory, Lawrence Livermore National Laboratory, and the Pittsburgh Supercomputing Center. He says there are several customers he cannot name in the military, intelligence, and heavy manufacturing industries.

A next-generation CS-1 is in the works, he says. The first generation used TSMC’s 16-nanometer process, but Cerebras already has a 7-nanometer version in hand with more than double the memory—40 GB—and more than double the AI processor cores—850,000.

Increasing test coverage in hard-switching half-bridge configurations

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/increasing-test-coverage-in-hard-switching-half-bridge-configurations

Do you have to pay particular attention to proper switching operations to prevent shoot-through events? Learn more.

Setting up complex real-time trigger conditions using R&S oscilloscopes increases the test coverage and robustness of converter and inverter systems.

Rapid Scale-Up of Commercial Ion-Trap Quantum Computers

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/commercial-iontrap-quantum-computers-showing-rapid-scaleup

Last week, Honeywell’s Quantum Solutions division released its first commercial quantum computer: a system based on trapped ions comprising 10 qubits. The H1, as it’s called, is actually the same ion trap chip the company debuted as a prototype, but with four additional ions. The company revealed a roadmap that it says will rapidly lead to much more powerful quantum computers. Separately, a competitor in ion-trap quantum computing, Maryland-based startup IonQ, unveiled a 32-qubit ion computer last month.

Ion trap quantum computers are made of chips designed to trap and hold ions in a line using a specially designed RF electromagnetic field. The chip can also move specific ions along the line using an electric field. Lasers then encode the ions’ quantum states to perform calculations. Proponents say trapped-ion quantum computers are attractive because the qubits are thought to be longer-lasting, have much higher fidelity, and are potentially easier to connect together than other options, allowing for more reliable computation.

For Honeywell, that means a system that is the only one capable of performing a “mid-circuit measurement” (a kind of quantum equivalent of an if/then), and then recycling the measured qubit back into the computation, says Patty Lee, Honeywell Quantum Solutions chief scientist. The distinction allows for different kinds of quantum algorithms and an ability to perform more complex calculations with fewer ions.

Both companies measure their systems’ abilities using a value called quantum volume. Daniel Lidar, director of the Center for Quantum Information Science and Technology at the University of Southern California in Los Angeles, explained quantum volume to IEEE Spectrum like this:

…. IBM’s team defines quantum volume as 2 to the power of the size of the largest circuit with equal width and depth that can pass a certain reliability test involving random two-qubit gates. The circuit’s size is defined by either width based on the number of qubits or depth based on the number of gates, given that width and depth are equal in this case.

That means a 6-qubit quantum computing system would have a quantum volume of 2 to the power of 6, or 64—but only if the qubits are relatively free of noise and the potential errors that can accompany such noise.
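
In code, the arithmetic behind that definition is trivial. The sketch below is not IBM’s benchmark suite, and it omits the reliability test entirely; it simply turns the size of the largest passing square circuit into a quantum volume figure, using the numbers quoted in this article.

    # Illustrative only: quantum volume is 2**n, where n is the size of the
    # largest circuit with equal width (qubits) and depth (gate layers) that
    # passes the heavy-output reliability test. The testing itself is omitted.

    def quantum_volume(largest_passing_square_circuit: int) -> int:
        return 2 ** largest_passing_square_circuit

    print(quantum_volume(6))   # 64, the worked example in the text
    print(quantum_volume(7))   # 128, the figure Honeywell reports for the H1
    print(quantum_volume(5))   # 32, IonQ's earlier 11-qubit prototype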

Honeywell says its 10-qubit system has a measured quantum volume of 128, the highest in the industry. IonQ’s earlier 11-qubit prototype had a measured quantum volume of 32. Its 32-ion system could theoretically reach higher than 4 million, the company claims, but this hasn’t been proven yet.

With the launch of its commercial system, Honeywell unveiled that it will use a subscription model for access to its computers. Customers would pay for time and engagement with the systems, even as the systems scale up throughout the year. “Imagine if you have Netflix, and next week it’s twice as good, and 3 months from now it’s 1000 times as good,” says Tony Uttley, president of Honeywell Quantum Solutions. “That’d be a pretty cool subscription. And that’s the approach we’ve taken with this.”

Honeywell’s path forward involves first adding more ions to the H1, which has capacity for 40. “We built a large auditorium,” Uttley says. “Now we’re just filling seats.”

The next step is to change the ion trap chip’s single line to a racetrack configuration. This system, called H2, is already in testing; it allows faster interactions among ions, because ions at the ends of the line can be moved around the track to interact with each other. A further scale-up, H3, will come with a chip that has a grid of traps instead of a single line. For this, ions will have to be steered around corners, something Uttley says the company can already do.

For H4, the grid will be integrated with on-chip photonics. Today the laser beams that encode quantum states onto the ions are sent in from outside the vacuum chamber that houses the trap, and that configuration limits the number of points on the chip where computation can happen. An integrated photonics system, which has been designed and tested, would increase the available computation points. In a final step, targeted for 2030, tiles of H4 chips will be stitched together to form a massive integrated system.

For its part, IonQ CEO Peter Chapman told Ars Technica that the company plans to double the number of qubits in its systems every eight months for the next few years. Instead of physically moving ions to get them to interact, IonQ’s system uses carefully crafted pairs of laser pulses on a stationary line of ions.

Despite the progress so far, these systems can’t yet do anything that can’t be done already on a classical computer system. So why are customers buying in now? “With this roadmap we’re showing that we are going to very quickly cross a boundary where there is no way you can fact check” a result, says Uttley.  Companies need to see that their quantum algorithms work on these systems now, so that when they reach a capability beyond today’s supercomputers, they can still trust the result, he says.

Intel Creating Cryptographic Codes That Quantum Computers Can’t Crack

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/how-to-protect-the-internet-of-things-in-the-quantum-computing-era

The world will need a new generation of cryptographic algorithms once quantum computing becomes powerful enough to crack the codes that protect everyone’s digital privacy. An Intel team has created an improved version of such a quantum-resistant cryptographic algorithm that could work more efficiently on the smart home and industrial devices making up the Internet of Things.

The Intel team’s Bit-flipping Key Encapsulation (BIKE) provides a way to create a shared secret that encrypts sensitive information exchanged between two devices. The encryption process requires computationally complex operations involving mathematical problems that could strain the hardware of many Internet of Things (IoT) devices. But Intel researchers figured out how to create a hardware accelerator that enables the BIKE software to run efficiently on less powerful hardware.

“Software execution of BIKE, especially on lightweight IoT devices, is latency and power intensive,” says Manoj Sastry, principal engineer at Intel. “The BIKE hardware accelerator proposed in this paper shows feasibility for IoT-class devices.”

Intel has been developing BIKE as one possible quantum-resistant algorithm among the many being currently evaluated by the U.S. National Institute of Standards and Technology. This latest version of BIKE was presented in a paper [PDF] during the IEEE International Conference on Quantum Computing and Engineering on 13 October 2020. 

BIKE securely establishes a shared secret between two devices through a three-step process, says Santosh Ghosh, a research scientist at Intel and coauthor on the paper. First, the host device creates a public-private key pair and sends the public key to the client. Second, the client sends an encrypted message using the public key to the host. And third, the host decodes the encrypted message through a BIKE decode procedure using the private key. “Of these three steps, BIKE decode is the most compute intensive operation,” Ghosh explains.
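
The division of labor in those three steps can be shown with a short sketch. The code below is emphatically not BIKE: the “key pair” and “encryption” are toy XOR placeholders with no security whatsoever, and the function names are invented for illustration. It only shows which party performs which step and what travels between them.

    # Toy illustration of the key-encapsulation flow (NOT the BIKE algorithm).
    import os

    KEY_BYTES = 32

    def host_keygen():
        """Step 1 (host): create a key pair and publish the public half."""
        private_key = os.urandom(KEY_BYTES)                # toy stand-in
        public_key = bytes(b ^ 0xFF for b in private_key)  # toy stand-in
        return public_key, private_key

    def client_encapsulate(public_key):
        """Step 2 (client): pick a shared secret, encrypt it to the host."""
        shared_secret = os.urandom(KEY_BYTES)
        ciphertext = bytes(s ^ p for s, p in zip(shared_secret, public_key))
        return ciphertext, shared_secret

    def host_decapsulate(ciphertext, private_key):
        """Step 3 (host): the compute-heavy decode recovers the secret."""
        public_key = bytes(b ^ 0xFF for b in private_key)
        return bytes(c ^ p for c, p in zip(ciphertext, public_key))

    pub, priv = host_keygen()
    ct, secret_at_client = client_encapsulate(pub)
    secret_at_host = host_decapsulate(ct, priv)
    assert secret_at_host == secret_at_client   # both sides share the secret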

The improved version of BIKE takes advantage of a new decoder that requires less computing power. In testing, a single BIKE decode operation completed in 1.3 million cycles—about 12 milliseconds at 110 MHz—on an Intel Arria 10 FPGA, which is fairly competitive compared with other options.

“BIKE is well suited for applications where IoT devices are used for encapsulation and a more capable device takes the role of host to generate the keys and perform the decapsulation procedure,” Ghosh says.

This also represents the first hardware implementation of BIKE suitable for Level 5 keys and ciphertexts, with level 5 representing the highest level of security as defined by the U.S. National Institute of Standards and Technology. Each higher level of security requires bigger keys and ciphertexts—the encrypted forms of data that would look unintelligible to prying eyes—which in turn require more compute-intensive operations. 

The team was previously focused on BIKE implementations suitable for the lower security levels of 1 and 3, which meant the public hardware implementation submitted to NIST as reference did not support level 5, says Rafael Misoczki, coauthor on the paper who was formerly at Intel and is now a cryptography engineer at Google.

The latest hardware implementation for BIKE takes the security up a couple of notches.

“Our BIKE decoder supports keys and ciphertexts for Level 5 which provides security equivalent to AES-256,” says Andrew Reinders, security researcher at Intel and coauthor on the paper. “This means a quantum computer needs 2^128 operations to break it.”

The latest version of the BIKE hardware accelerator has a design that offers additional security against side-channel attacks in which attackers attempt to exploit information about the power consumption or even timing of software processes. For example, differential power analysis attacks can track the power consumption patterns associated with running certain computational tasks to reveal some of the underlying computations.

In theory, such attacks against BIKE might attempt to target a small block of the secret being shared between two devices. That block size depends upon how many secret bits work together in underlying sub-operations, Reinders says. Because BIKE’s block size is 1 bit, a BIKE hardware design that processed a single secret bit at a time would be highly vulnerable to such a differential power analysis attack.

But the new BIKE hardware accelerator offers protection against such attacks because it performs all the computations for the 128 bits of the secret in parallel, which makes it difficult to single out power consumption patterns associated with individual computations.

The BIKE hardware accelerator also has protection against timing attacks, because its decoder always runs for a fixed number of rounds that are each the same amount of time.

BIKE was previously chosen as one of eight alternates in the third round of NIST’s Post-Quantum Cryptography Standardization Process that is currently narrowing down the best candidates to replace modern cryptography standards. Seven other algorithms were selected as finalists, making for a total of 15 algorithms remaining under consideration from a starting group of 69 submissions.

The Intel team has submitted its revised version of BIKE for the third round of the NIST challenge and is currently awaiting comments. If all goes well, the federal agency plans to release the initial standard for quantum-resistant cryptography in 2022.

Measuring Progress in the ‘Noisy’ Era of Quantum Computing

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/measuring-progress-in-the-noisy-era-of-quantum-computing

Measuring the progress of quantum computers can prove tricky in the era of “noisy” quantum computing technology. One concept, known as “quantum volume,” has become a favored measure among companies such as IBM and Honeywell. But not every company or researcher agrees on its usefulness as a yardstick in quantum computing.

Protect Electronic Components with Dam & Fill

Post Syndicated from MasterBond original https://spectrum.ieee.org/computing/hardware/protect-electronic-components-with-dam-and-fill

Watch now to see the dam and fill method.

Sensitive electronic components mounted on a circuit board often require protection from exposure to harsh environments. While there are several ways to accomplish this, the dam and fill method offers many benefits. Dam-and-filling entails dispensing the damming material around the area to be encapsulated, thereby preventing the fill from spreading to other parts of the board.

By using a damming compound such as Supreme 3HTND-2DM-1, you can create a barrier around the component. Then, with a flowable filler material like EP3UF-1, the component can be completely covered for protection.

To start, apply the damming compound, Supreme 3HTND-2DM-1, around the component. Supreme 3HTND-2DM-1 is readily dispensed to create a structurally sound barrier. This material will not run and will cure in place in 5-10 minutes at 300°F, in essence forming a dam.

After the damming compound has been applied and cured, a filling compound such as EP3UF-1 is dispensed to fill the area inside the dam and cover the component to be protected. EP3UF-1 is a specialized, low-viscosity, one-part system with a filler of ultra-small particle size, which enables it to flow even in tiny spaces. This system cures in 10-15 minutes at 300°F and features low shrinkage and high dimensional stability once cured.

Both Supreme 3HTND-2DM-1 and EP3UF-1 are thermally conductive, electrically insulating compounds and are available for use in syringes for automated or manual dispensing.

Despite being a two-step process, dam and fill offers the following advantages over glob topping:

  • Flow of the filling compound is controlled and restricted
  • It can be applied to larger sections of the board
  • Filling compound flows better than a glob top, allowing better protection underneath and around the component

IBM Envisions the Road to Quantum Computing Like an Apollo Mission

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/computing/hardware/ibms-envisons-the-road-to-quantum-computing-like-an-apollo-mission

At its virtual Quantum Computing Summit last week, IBM laid out its roadmap for the future of quantum computing. To illustrate the enormity of the task ahead of them, Jay Gambetta, IBM Fellow and VP, Quantum Computing, drew parallels between the Apollo missions and the next generation of Big Blue’s quantum computers. 

In a post published on the IBM Research blog, Gambetta said:  “…like the Moon landing, we have an ultimate objective to access a realm beyond what’s possible on classical computers: we want to build a large-scale quantum computer.”

Lofty aspirations put humankind on the moon, and they may one day enable quantum computers to help solve our biggest challenges, such as administering healthcare and managing natural resources. But it was clear from Gambetta’s presentation that a number of steps will be required to achieve IBM’s ultimate aim: a 1,121-qubit processor named Condor.

For IBM, it’s been a process that started in the mid-2000s with its initial research into superconducting qubits, which are on its roadmap through at least 2023.

This reliance on superconducting qubits stands in contrast to Intel’s roadmap which depends on silicon spin qubits. It would appear that IBM is not as keen as Intel to have qubits resemble a transistor.

However, there is one big issue for quantum computers based on superconducting qubits: they require extremely cold temperatures of about 20 millikelvin (-273 degrees C). In addition, as the number of superconducting qubits increases, the refrigeration system needs to expand as well. With an eye toward reaching its 1,121-qubit processor by 2023, IBM is currently building an enormous “super fridge” dubbed Goldeneye that will be 3 meters tall and 2 meters wide.

Upon reaching the 1,121-qubit threshold, IBM believes it could lay the groundwork for an entirely new era in quantum computing in which it will become possible to scale to error-corrected, interconnected, 1-million-plus-qubit quantum computers.

“At 1,121 qubits, we expect to start demonstrating real error correction which will result in the circuits having higher fidelity than without error correction,” said Gambetta.

Gambetta says that Big Blue engineers will need to overcome a number of technical challenges to get to 1,121 qubits. Back in early September, IBM made available its 65-qubit Hummingbird processor, a step up from its 27-qubit Falcon processor, which had run a quantum circuit long enough for IBM to declare it had reached a quantum volume of 64. (Quantum volume is a measurement of how many physical qubits there are, how connected they are, and how error prone they may be.)

Another issue is the so-called “fan-out” problem that results when you scale up the number of qubits on a quantum chip. As the number of qubits increases, you need to add multiple control wires for each qubit.

The issue has become such a concern that quantum computer scientists have adopted Rent’s Rule, which the semiconductor industry defined back in the mid-1960s. E.F. Rent, a scientist at IBM in the 1960s, observed that there was a relationship between the number of external signal connections to a logic block and the number of logic gates in the logic block. Quantum scientists have adopted the terminology to describe their own challenge with the wiring of qubits.
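
For reference, Rent’s Rule is usually written as a simple power law; the exponent range quoted below is the commonly cited figure for conventional logic, not a number from IBM’s roadmap.

    % Rent's Rule: T is the number of external signal connections (terminals)
    % on a block of logic, G is the number of logic gates inside the block,
    % k is the average number of terminals per gate, and p is the Rent
    % exponent (empirically about 0.5 to 0.8 for conventional logic).
    T = k \, G^{\,p}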

IBM plans to address these issues next year when it introduces its 127-qubit Quantum Eagle processor, which will feature through-silicon vias and multi-level wiring to enable the “fan-out” of a large number of classical control signals while protecting the qubits in order to maintain high coherence times.

“Quantum computing will face things similar to the Rent Rule, but with the multi-level wiring and multiplex readout—things we presented in our roadmap, which we are already working on—we are demonstrating a path to solving how we scale up quantum processors,” said Gambetta.

It is with this Eagle processor that IBM will introduce concurrent real-time classical computing capabilities that will enable the introduction of a broader family of quantum circuits and codes.

It’s in IBM’s next release, in 2022—the 433-qubit Quantum Osprey system—that some pretty big technical challenges will start looming.

“Getting to a 433-qubit machine will require increased density of cryo-infrastructure and controls and cryo-flex cables,” said Gambetta. “It has never been done, so the challenge is to make sure we are designing with a million-qubit system in mind. To this end, we have already begun fundamental feasibility tests.”

After Osprey comes the Condor and its 1,121-qubit processor. As Gambetta noted in his post: “We think of Condor as an inflection point, a milestone that marks our ability to implement error correction and scale up our devices, while simultaneously complex enough to explore potential Quantum Advantages—problems that we can solve more efficiently on a quantum computer than on the world’s best supercomputers.”

Co-designing electronics and microfluidics for a cooling boost

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/tech-talk/computing/hardware/codesigning-electronics-and-microfluidics-for-a-cooling-boost

The heat generated by today’s densely packed electronics is a costly resource drain. To keep systems at the right temperature for optimal computational performance, data center cooling in the United States consumes as much energy and water as all the residents of the city of Philadelphia. Now, by integrating liquid cooling channels directly into semiconductor chips, researchers hope to reduce that drain, at least in power electronics devices, making them smaller, cheaper, and less energy-intensive.

Traditionally, the electronics and the heat management system are designed and made separately, says Elison Matioli, an electrical engineering professor at École Polytechnique Fédérale de Lausanne in Switzerland. That introduces a fundamental obstacle to improving cooling efficiency since heat has to propagate relatively long distances through multiple materials for removal. In today’s processors, for instance, thermal materials syphon heat away from the chip to a bulky, air-cooled copper heat sink.

For a more energy-efficient solution, Matioli and his colleagues have developed a low-cost process to put a 3D network of microfluidic cooling channels directly into a semiconductor chip. Liquids remove heat better than air, and the idea is to put coolant micrometers away from chip hot spots.

But unlike previously reported microfluidic cooling techniques, he says, “we design the electronics and the cooling together from the beginning.” So the microchannels are right underneath the active region of each transistor device, where it heats up the most, which increases cooling performance by a factor of 50. They reported their co-design concept in the journal Nature today.

Researchers first proposed microchannel cooling back in 1981, and startups such as Cooligy have pursued the idea for processors. But the semiconductor industry is moving from planar devices to 3D ones and towards future chips with stacked multi-layer architectures, which makes cooling channels impractical. “This type of embedded cooling solution is not meant for modern processors and chips, like the CPU,” says Tiwei Wei, who studies electronic cooling solutions at Interuniversity Microelectronics Centre and KU Leuven in Belgium.  Instead, this cooling technology makes the most sense for power electronics, he says.

Power electronics circuits manage and convert electrical energy, and are used widely in computers, data centers, solar panels, and electric vehicles, among other things. They use large-area discrete devices made from wide-bandgap semiconductors like gallium nitride. The power density of these devices has gone up dramatically over the years, which means they have to be “hooked to a massive heat sink,” Matioli says.

First Photonic Quantum Computer on the Cloud

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/photonic-quantum

Quantum computers based on photons may possess key advantages over those based on electrons. To benefit from those advantages, quantum computing startup Xanadu has, for the first time, made a photonic quantum computer publicly available over the cloud.

Whereas classical computers switch transistors either on or off to symbolize data as ones and zeroes, quantum computers use quantum bits or “qubits” that, because of the surreal nature of quantum physics, can be in a state known as superposition where they can act as both 1 and 0. This essentially lets each qubit perform two calculations at once.

If two qubits are quantum-mechanically linked, or entangled, they can help perform 2^2 or four calculations simultaneously; three qubits, 2^3 or eight calculations; and so on. In principle, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the visible universe.
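
A rough back-of-the-envelope calculation supports that last claim; the estimate of about 10^80 atoms in the observable universe is the commonly cited figure, not a number from the article.

    2^{300} = \left(2^{10}\right)^{30} \approx \left(10^{3}\right)^{30} = 10^{90},
    \qquad \text{versus roughly } 10^{80} \text{ atoms in the observable universe.}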

What Intel Is Planning for The Future of Quantum Computing: Hot Qubits, Cold Control Chips, and Rapid Testing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/intels-quantum-computing-plans-hot-qubits-cold-control-chips-and-rapid-testing

Quantum computing may have shown its “supremacy” over classical computing a little over a year ago, but it still has a long way to go. Intel’s director of quantum hardware, Jim Clarke, says that quantum computing will really have arrived when it can do something unique that can change our lives, calling that point “quantum practicality.” Clarke talked to IEEE Spectrum about how he intends to get silicon-based quantum computers there:

Jim Clarke on…

IEEE Spectrum: Intel seems to have shifted focus from quantum computers that rely on superconducting qubits to ones with silicon spin qubits. Why do you think silicon has the best chance of leading to a useful quantum computer?

Jim Clarke: It’s simple for us… Silicon spin qubits look exactly like a transistor. … The infrastructure is there from a tool fabrication perspective. We know how to make these transistors. So if you can take a technology like quantum computing and map it to such a ubiquitous technology, then the prospect for developing a quantum computer is much clearer.

I would concede that today silicon spin-qubits are not the most advanced quantum computing technology out there. There has been a lot of progress in the last year with superconducting and ion trap qubits.

But there are a few more things: A silicon spin qubit is the size of a transistor—which is to say it is roughly 1 million times smaller than a superconducting qubit. So if you take a relatively large superconducting chip, and you say “how do I get to a useful number of qubits, say 1000 or a million qubits?” all of a sudden you’re dealing with a form factor that is … intimidating.

We’re currently making server chips with billions and billions of transistors on them. So if our spin qubit is about the size of a transistor, from a form-factor and energy perspective, we would expect it to scale much better.

Spectrum: What are silicon spin-qubits and how do they differ from competing technology, such as superconducting qubits and ion trap systems?

Clarke: In an ion trap you are basically using a laser to manipulate a metal ion through its excited states where the population density of two excited states represents the zero and one of the qubit. In a superconducting circuit, you are creating the electrical version of a nonlinear LC (inductor-capacitor) oscillator circuit, and you’re using the two lowest energy levels of that oscillator circuit as the zero and one of your qubit. You use a microwave pulse to manipulate between the zero and one state.

We do something similar with the spin qubit, but it’s a little different. You turn on a transistor, and you have a flow of electrons from one side to another. In a silicon spin qubit, you essentially trap a single electron in your transistor, and then you put the whole thing in a magnetic field [using a superconducting electromagnet in a refrigerator]. This orients the electron to either spin up or spin down. We are essentially using its spin state as the zero and one of the qubit.

That would be an individual qubit. Then with very good control, we can get two separated electrons in close proximity and control the amount of interaction between them. And that serves as our two-qubit interaction.

So we’re basically taking a transistor, operating at the single electron level, getting it in very close proximity to what would amount to another transistor, and then we’re controlling the electrons.

Spectrum: Does the proximity between adjacent qubits limit how the system can scale?

Clarke: I’m going to answer that in two ways. First, the interaction distance between two electrons to provide a two-qubit gate is not asking too much of our process. We make smaller devices every day at Intel. There are other problems, but that’s not one of them.

Typically, these qubits operate on a sort of a nearest neighbor interaction. So you might have a two-dimensional grid of qubits, and you would essentially only have interactions between one of its nearest neighbors. And then you would build up [from there]. That qubit would then have interactions with its nearest neighbors and so forth. And then once you develop an entangled system, that’s how you would get a fully entangled 2D grid. [Entanglement is a condition necessary for certain quantum computations.]
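
As a concrete picture of that nearest-neighbor layout, here is a small illustrative sketch. The grid size and naming are assumptions for illustration, not Intel’s actual qubit topology; it simply lists which qubit pairs in a 2D grid can perform a two-qubit gate directly.

    # Illustrative only: enumerate nearest-neighbor couplings in a small 2D
    # grid of spin qubits. Two-qubit gates act on these adjacent pairs; longer
    # range entanglement is built up through chains of such interactions.

    ROWS, COLS = 3, 3   # toy grid; a real device would be much larger

    def neighbor_pairs(rows: int, cols: int):
        pairs = []
        for r in range(rows):
            for c in range(cols):
                if c + 1 < cols:
                    pairs.append(((r, c), (r, c + 1)))   # horizontal neighbor
                if r + 1 < rows:
                    pairs.append(((r, c), (r + 1, c)))   # vertical neighbor
        return pairs

    for a, b in neighbor_pairs(ROWS, COLS):
        print(a, "<->", b)   # 12 directly coupled pairs in a 3 x 3 grid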

Spectrum: What are some of the difficult issues right now with silicon spin qubits?

Clarke: By highlighting the challenges of this technology, I’m not saying that this is any harder than other technologies. I’m prefacing this because certainly some of the things that I read in the literature would suggest that qubits are straightforward to fabricate or scale. Regardless of the qubit technology, they’re all difficult.

With a spin qubit, we take a transistor that normally has a current of electrons flowing through it, and we operate it at the single-electron level. This is the equivalent of having a single electron placed into a sea of several hundred thousand silicon atoms and still being able to manipulate whether it’s spin up or spin down.

So we essentially have a small amount of silicon—we’ll call this the channel of our transistor—and we’re controlling a single electron within that piece of silicon. The challenge is that silicon, even a single crystal, may not be as clean as we need it. Some of the defects—these can be extra bonds, charge defects, or dislocations in the silicon—can all impact that single electron that we’re studying. This is really a materials issue that we’re trying to solve.

Back to top

Spectrum: Just briefly, what is coherence time and what’s its importance to computing?

Clarke: The coherence time is the window during which information is maintained in the qubit. So, in the case of a silicon spin qubit, it’s how long before that electron loses its orientation, and randomly scrambles the spin state. It’s the operating window for a qubit.

Now, all of the qubit types have what amounts to coherence times. Some are better than others. The coherence times for spin qubits, depending on the type of coherence time measurement, can be on the order of milliseconds, which is pretty compelling compared to other technologies.

What needs to happen [to compensate for brief coherence times] is that we need to develop an error correction technique. That’s a complex way of saying we’re going to put together a bunch of real qubits and have them function as one very good logical qubit.

Spectrum: How close is that kind of error correction?

Clarke: It was one of the four items that really needs to happen for us to realize a quantum computer that I wrote about earlier. The first is we need better qubits. The second is we need better interconnects. The third is we need better control. And the fourth is we need error correction. We still need improvements on the first three before we’re really going to get, in a fully scalable manner, to error correction.

You will see groups starting to do little bits of error correction on just a few qubits. But we need better qubits and we need a more efficient way of wiring them up and controlling them before you’re really going to see fully fault-tolerant quantum computing.

Back to top

Spectrum: One of the improvements to qubits recently was the development of “hot” silicon qubits. Can you explain their significance?

Clarke: Part of it equates to control.

Right now you have a chip at the bottom of a dilution refrigerator, and then, for every qubit, you have several wires that go from there all the way outside of the fridge. And these are not small wires; they’re coax cables. And so from a form factor perspective and a power perspective—each of these wires dissipates power—you really have a scaling problem.

One of the things that Intel is doing is that we are developing control chips. We have a control chip called Horse Ridge that’s a conventional CMOS chip that we can place in the fridge in close proximity to our qubit chip. Today that control chip sits at 4 kelvins and our qubit chip is at 10 millikelvins and we still have to have wires between those two stages in the fridge.

Now, imagine if we can operate our qubit slightly warmer. And by slightly warmer, I mean maybe 1 kelvin. All of a sudden, the cooling capacity of our fridge becomes much greater. The cooling capacity of our fridge at 10 millikelvin is roughly a milliwatt. That’s not a lot of power. At 1 kelvin, it’s probably a couple of watts. So, if we can operate at higher temperatures, we can then place control electronics in very close proximity to our qubit chip.

By having hot qubits we can co-integrate our control with our qubits, and we begin to solve some of the wiring issues that we’re seeing in today’s early quantum computers.

Spectrum: Are hot qubits structurally the same as regular silicon spin qubits?

Clarke: Within silicon spin qubits there are several different types of materials. Some are what I would call silicon MOS type qubits—very similar to today’s transistor materials. In other silicon spin qubits you have silicon that’s buried below a layer of silicon germanium. We’ll call that a buried channel device. Each has its benefits and challenges.

We’ve done a lot of work with TU Delft working on a certain type of [silicon MOS] material system, which is a little different than most in the community are studying [and lets us] operate the system at a slightly higher temperature.

I loved the quantum supremacy work. I really did. It’s good for our community. But it’s a contrived problem, on a brute force system, where the wiring is a mess (or at least complex).

What we’re trying to do with the hot qubits and with the Horse Ridge chip is put us on a path to scaling that will get us to a useful quantum computer that will change your life or mine. We’ll call that quantum practicality.

Back to top

Spectrum: What do you think you’re going to work on next most intensely?

Clarke: In other words, “What keeps Jim up at night?”

There are a few things. The first is time-to-information. Across most of the community, we use these dilution refrigerators. And the standard way [to perform an experiment] is: You fabricate a chip; you put it in a dilution refrigerator; it cools down over the course of several days; you experiment with it over the course of several weeks; then you warm it back up and put another chip in.

Compare that to what we do for transistors: We take a 300-millimeter wafer, put it on a probe station, and after two hours we have thousands and thousands of data points across the wafer that tells us something about our yield, our uniformity, and our performance.

That doesn’t really exist in quantum computing. So we asked, “Is there a way—at slightly higher temperatures—to combine a probe station with a dilution refrigerator?” Over the last two years, Intel has been working with two companies in Finland [Bluefors Oy and Afore Oy] to develop what we call the cryoprober. And this is just coming online now. We’ve been doing an impressive job of installing this massive piece of equipment in the complete absence of field engineers from Finland due to the coronavirus.

What this will do is speed up our time-to-information by a factor of up to 10,000. So instead of wire bonding a single sample, putting it in the fridge, and taking a week—or even a few days—to study it, we’re going to be able to put a 300-millimeter wafer into this unit and, over the course of an evening, step and scan. So we’re going to get a tremendous increase in throughput. I would say a 100x improvement. My engineers would say 10,000. I’ll leave that as a challenge for them to impress me beyond the 100.

Here's the other thing that keeps me up at night. Prior to starting the Intel quantum computing program, I was in charge of interconnect research in Intel's Components Research Group. (This is the wiring on chips.) So, I'm a little less concerned with the wiring into and out of the fridge than I am about the wiring on the chip itself.

I’ll give an example:  An Intel server chip has probably north of 10 billion transistors on a single chip. Yet the number of wires coming off that chip is a couple of thousand. A quantum computing chip has more wires coming off the chip than there are qubits. This was certainly the case for the Google [quantum supremacy] work last year. This was certainly the case for the Tangle Lake chip that Intel manufactured in 2018, and it’s the case with our spin qubit chips we make now.

So we’ve got to find a way to make the interconnects more elegant. We can’t have more wires coming off the chip than we have devices on the chip. It’s ineffective.

This is something the conventional computing community discovered in the late 1960s with Rent's Rule [which empirically relates the number of interconnects coming out of a block of logic circuitry to the number of gates in the block]. Last year we published a paper with Technical University Delft on the quantum equivalent of Rent's Rule. And it talks about, amongst other things, the Horse Ridge control chip, the hot qubits, and multiplexing.
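
[Editor's note: Rent's Rule is commonly written as

    T = t \, G^{p}

where T is the number of external connections to a block of circuitry, G is the number of gates (or, in the quantum analogue, qubits) inside it, t is the average number of connections per gate, and p is the Rent exponent, empirically found to lie roughly between 0.5 and 0.8 for conventional logic. As a purely hypothetical illustration, a million-qubit chip that achieved t = 2 and p = 0.5 through multiplexing would need on the order of 2,000 external connections instead of millions.]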

We have to find a way to multiplex at low temperatures. And that will be hard. You can’t have a million-qubit quantum computer with two million coax cables coming out of the top of the fridge.

Spectrum: Doesn’t Horse Ridge do multiplexing?

Clarke: It has multiplexing. The second generation will have a little bit more. The form factor of the wires [in the new generation] is much smaller, because we can put it in closer proximity to the [quantum] chip.

So, if you combine everything I've talked about: If I give you a package that has a classical control chip—call it a future version of Horse Ridge—sitting right next to, and in the same package as, a quantum chip, both operating at a similar temperature and making use of very small interconnect wires and multiplexing, that would be the vision.

Spectrum: What’s that going to require?

Clarke: It’s going to require a few things. It’s going to require improvements in the operating temperature of the control chip. It’s probably going to require some novel implementations of the packaging so there isn’t a lot of thermal cross talk between the two chips. It’s probably going to require even greater cooling capacity from the dilution refrigerator. And it’s probably going to require some qubit topology that facilitates multiplexing.

Spectrum: Given the significant technical challenges you’ve talked about here, how optimistic are you about the future of quantum computing?

Clarke: At Intel, we’ve consistently maintained that we are early in the quantum race. Every major change in the semiconductor industry has happened on the decade timescale and I don’t believe quantum will be any different. While it’s important to not underestimate the technical challenges involved, the promise and potential are real. I’m excited to see and participate in the meaningful progress we’re making, not just within Intel but the industry as a whole. A computing shift of this magnitude will take technology leaders, scientific research communities, academia, and policy makers all coming together to drive advances in the field, and there is tremendous work already happening on that front across the quantum ecosystem today.


Three Ways to Hack a Printed Circuit Board

Post Syndicated from Samuel H. Russ original https://spectrum.ieee.org/computing/hardware/three-ways-to-hack-a-printed-circuit-board

In 2018, an article in Bloomberg ­Businessweek made the stupendous assertion that Chinese spy services had created back doors to servers built for Amazon, Apple, and others by inserting millimeter-size chips into circuit boards. 

This claim has been roundly and specifically refuted by the companies involved and by the U.S. Department of Homeland Security. Even so, the possibility of carrying out such a hack is quite real. And there have been more than a dozen documented examples of such system-level attacks.

We know much about malware and counterfeit ICs, but the vulnerabilities of the printed circuit board itself are only now starting to get the attention they deserve. We’ll take you on a tour of some of the best-known weak points in printed-circuit-board manufacturing. Fortunately, the means to shore up those points are relatively straightforward, and many of them simply amount to good engineering practice.

In order to understand how a circuit board can be hacked, it’s worth reviewing how they are made. Printed circuit boards typically contain thousands of components. (They are also known as printed wiring boards, or PWBs, before they are populated with components.) The purpose of the PCB is, of course, to provide the structural support to hold the components in place and to provide the wiring needed to connect signals and power to the components.

PCB designers start by creating two electronic documents, a schematic and a layout. The schematic describes all the components and how they are interconnected. The layout depicts the finished bare board and locates objects on the board, including both components and their labels, called reference designators. (The reference designator is extremely important—most of the assembly process, and much of the design and procurement process, is tied to reference designators.)

Not all of a PCB is taken up by components. Most boards include empty component footprints, called unpopulated components. This is because boards often contain extra circuitry for debugging and testing or because they are manufactured for several purposes, and therefore might have versions with more or fewer components.

Once the schematic and layout have been checked, the layout is converted to a set of files. The most common file format is called “Gerber,” or RS-274X. It consists of ASCII-formatted commands that direct shapes to appear on the board. A second ASCII-formatted file, called the drill file, shows where to place holes in the circuit board. The manufacturer then uses the files to create masks for etching, printing, and drilling the boards. Then the boards are tested.
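
Because the format is line-oriented ASCII, it is easy to inspect by machine. The following minimal sketch, written in Python with a hypothetical file name, tallies the basic operations in one Gerber layer; it relies only on the standard RS-274X codes D01 (draw), D02 (move), and D03 (flash), and the %ADD aperture-definition command.

    # Minimal sketch: summarize the drawing operations in one Gerber layer.
    # "top_copper.gbr" is a hypothetical file name; any RS-274X file will do.
    from collections import Counter

    counts = Counter()
    with open("top_copper.gbr", "r", encoding="ascii", errors="replace") as f:
        for raw in f:
            line = raw.strip()
            if line.endswith("D01*"):
                counts["draw"] += 1       # interpolate with the aperture exposed
            elif line.endswith("D02*"):
                counts["move"] += 1       # move without exposing
            elif line.endswith("D03*"):
                counts["flash"] += 1      # flash the aperture (for example, a pad)
            elif line.startswith("%ADD"):
                counts["aperture definitions"] += 1

    for op, n in counts.items():
        print(f"{op}: {n}")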

Next, “pick and place” machines put surface-mount components where they belong on the board, and the PCBs pass through an oven that melts all the solder at once. Through-hole components are placed, often by hand, and the boards pass over a machine that applies solder to all of the through-hole pins. It’s intricate work: An eight-pin, four-resistor network can cover just 2 millimeters by 1.3 mm, and some component footprints are as small as 0.25 mm by 0.13 mm. The boards are then inspected, tested, repaired as needed, and assembled further into working products.

Attacks can be made at every one of these design steps. In the first type of attack, extra components are added to the schematic. This attack is arguably the hardest to detect because the schematic is usually regarded as the most accurate reflection of the designer’s intent and thus carries the weight of authority.

A variation on this theme involves adding an innocuous component to the schematic, then using a maliciously altered version of the component in production. This type of attack, in which seemingly legitimate components have hardware Trojans, is outside the scope of this article, but it should nevertheless be taken very seriously.

In either case, the countermeasure is to review the schematic carefully, something that should be done in any case. One important safeguard is to run it by employees from other design groups, using their “fresh eyes” to spot an extraneous component.

In a second type of attack, extra components can be added to the layout. This is a straightforward process, but because there are specific process checks to compare the layout to the schematic, it is harder to get away with it: At a minimum, a layout technician would have to falsify the results of the comparison. And combatting this form of attack is simple: Have an engineer—or, better, a group of engineers—observe the layout-to-schematic comparison step and sign off on it.

In a third type of attack, the Gerber and drill files can be altered. There are three important points about the Gerber and drill files from a security perspective: First, they're ASCII-formatted, and therefore editable in very common text-editing tools; second, they're human-readable; and third, they contain no built-in cryptographic protections, such as signatures or checksums. Since a complete set of Gerber files can run to hundreds of thousands of lines, this is a very efficient mode of attack, one that is easily missed.

Consider one example, in which an attacker inserts what appears to be an electrostatic-discharge diode into a design whose files comprise 16 Gerber and drill files. Of the 16 files, nine would need altering; of those nine, seven would vary in a total of 79 lines, and the other two, which specify the power and ground planes, would need changes in about 300 lines each. A more skilled attack, such as one adding vertical connections called vias, would dramatically reduce the number of lines that needed rewriting.
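
Because the files are plain text, comparing a received set against the set the design team released is straightforward, if anyone bothers to do it. Here is a minimal sketch in Python that counts differing lines; the directory names are hypothetical placeholders.

    # Minimal sketch: count lines that differ between the Gerber files a design
    # team released and the copies that arrived at the fab.
    import difflib
    from pathlib import Path

    released = Path("released")   # hypothetical directory names
    received = Path("received")

    for ref in sorted(released.glob("*.gbr")):
        other = received / ref.name
        if not other.exists():
            print(f"{ref.name}: missing from received set")
            continue
        a = ref.read_text(encoding="ascii", errors="replace").splitlines()
        b = other.read_text(encoding="ascii", errors="replace").splitlines()
        changed = sum(1 for line in difflib.unified_diff(a, b, lineterm="")
                      if line.startswith(("+", "-"))
                      and not line.startswith(("+++", "---")))
        if changed:
            print(f"{ref.name}: {changed} changed lines")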

Unprotected Gerber files are vulnerable to even a single bad actor who sneaks in at any point between the designing company and the production of the photolithographic masks. As the Gerber files are based on an industry standard, acquiring the knowledge to make the changes is relatively straightforward.

One might argue that standard cryptographic methods of protecting files would protect Gerber files, too. While it is clear that such protections would guard a Gerber file in transit, it is unclear whether those protections hold when the files reach their destination. The fabrication of circuit boards almost always occurs outside the company that designs them. And, while most third-party manufacturers are reputable companies, the steps they take to protect these files are usually not documented for their customers.

One way to protect files is to add a digital signature, cryptographic hash, or some other sort of authentication code to the internal contents of the file in the form of a comment. However, this protection is effective only if the file is authenticated late in the mask-making process; ideally, the machines that create the photolithography masks should have the ability to authenticate a file. Alternatively, the machine could retain a cryptographic hash of the file that was actually used to create the mask, so that the manufacturer can audit the process. In either case, the mask-making machine would itself require secure handling.
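
As a rough sketch of the in-file authentication code described above, the Python below embeds a SHA-256 digest as a Gerber comment (G04 is the RS-274X comment command) and later verifies it. The file name is hypothetical, and a real deployment would use a keyed signature rather than a bare hash, which an attacker who controls the file could simply recompute.

    # Minimal sketch: embed and verify a SHA-256 digest as a Gerber comment.
    # A bare hash only detects accidental or naive tampering; a real deployment
    # would use a keyed or public-key signature instead.
    import hashlib

    TAG = "G04 SHA256="

    def digest(lines):
        h = hashlib.sha256()
        for line in lines:
            if not line.startswith(TAG):          # skip any embedded digest line
                h.update(line.encode("ascii", "replace") + b"\n")
        return h.hexdigest()

    def sign(path):
        lines = [l for l in open(path).read().splitlines() if not l.startswith(TAG)]
        lines.append(TAG + digest(lines) + "*")   # append the digest as a comment
        open(path, "w").write("\n".join(lines) + "\n")

    def verify(path):
        lines = open(path).read().splitlines()
        stored = [l for l in lines if l.startswith(TAG)]
        if not stored:
            return False
        expected = stored[-1][len(TAG):].rstrip("*")
        return digest(lines) == expected

    sign("top_copper.gbr")            # hypothetical file name
    print(verify("top_copper.gbr"))   # True unless the file was altered afterward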

If bad actors succeed at one of these three attacks, they can add an actual, physical component to the assembled circuit board. This can occur in three ways.

First, the extra component can be added in production. This is difficult because it requires altering the supply chain to add the component to the procurement process, programming the pick-and-place machine to place the part, and attaching a reel of parts to the machine. In other words, it would require the cooperation of several bad actors, a conspiracy that might indicate the work of a corporation or a state.

Second, the extra component can be added in the repair-and-rework area—a much easier target than production. It’s common for assembled circuit boards to require reworking by hand. For example, on a board with 2,000 components, the first-pass yield—the fraction of boards with zero defects—might be below 70 percent. The defective boards go to a technician who then adds or removes components by hand; a single technician could easily add dozens of surreptitious components per day. While not every board would have the extra component, the attack might still succeed, especially if there was a collaborator in the shipping area to ship the hacked boards to targeted customers. Note that succeeding at this attack (altered Gerber files, part inserted in repair, unit selectively shipped) requires only three people.

Third, a component can be added by hand to a board after production—in a warehouse, for instance. The fact that an in-transit attack is possible may require companies to inspect incoming boards to confirm that unpopulated parts remain unpopulated.

Knowing how to sabotage a PCB is only half the job. Attackers also have to know what the best targets are on a computer motherboard. They'll try the data buses, specifically those with two things in common—low data rates and low pin counts. High-speed buses, such as SATA, M.2, and DDR, are so sensitive to data rates that the delay of an extra component would very likely keep them from working correctly. And a component with a smaller number of pins is simpler to sneak into a design; therefore, buses with low pin counts are easier targets. On a PC motherboard, there are three such buses.

The first is the System Management Bus (SMBus), which controls the voltage regulators and clock frequency on most PC motherboards. It’s based on the two-wire Inter-IC (I2C) standard created by Philips Semiconductor back in 1982. That standard has no encryption, and it allows a number of connected devices to directly access critical onboard components, such as the power supply, independently of the CPU.

A surreptitious component on an SMBus could enable two types of attacks against a system. It could change the voltage settings of a regulator and damage components. It could also interfere with communications between the processor and onboard sensors, either by impersonating another device or by intentionally interfering with incoming data.
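
One simple defensive counterpart, sketched below in Python with the third-party smbus2 library, is to enumerate the devices that acknowledge on an SMBus and flag any address that is not in the board's documentation. The bus number and the expected-address list here are assumptions; real values depend on the platform, and probing can itself upset some devices, so this should be done with care.

    # Minimal sketch: enumerate devices on an SMBus/I2C bus and flag addresses
    # that are not on an expected list. Bus number and expected addresses are
    # placeholders; real values come from the board's design documentation.
    from smbus2 import SMBus

    BUS_NUMBER = 1                       # e.g., /dev/i2c-1 on Linux (assumption)
    EXPECTED = {0x2D, 0x48, 0x50}        # placeholder addresses

    present = set()
    with SMBus(BUS_NUMBER) as bus:
        for addr in range(0x03, 0x78):   # valid 7-bit address range
            try:
                bus.read_byte(addr)      # simple probe; may disturb some devices
                present.add(addr)
            except OSError:
                pass                     # no device acknowledged at this address

    for addr in sorted(present - EXPECTED):
        print(f"Unexpected device at 0x{addr:02X}")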

The second target is the Serial Peripheral Interface (SPI) bus, a four-wire bus created by Motorola in the mid-1980s. It’s used by most modern flash-memory parts, and so is likely to be the bus on which the important code, such as the BIOS (Basic Input/Output System), is accessed.

A well-considered attack against the SPI bus could alter any portion of the data that is read from an attached memory chip. Modifications to the BIOS as it is being accessed could change hardware configurations done during the boot process, leaving a path open for malicious code.
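
A basic integrity check against this kind of tampering is to dump the SPI flash contents with a flash-reading utility or an external programmer and compare the image against a known-good copy. The sketch below is a minimal Python comparison; the file names are hypothetical.

    # Minimal sketch: compare a dumped BIOS/firmware image against a known-good
    # reference copy. File names are hypothetical; the dump itself would come
    # from a flash-reading utility or an external programmer.
    import hashlib

    def sha256_of(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk)
                if not block:
                    break
                h.update(block)
        return h.hexdigest()

    dumped = sha256_of("bios_dump.bin")
    golden = sha256_of("bios_golden.bin")
    print("match" if dumped == golden else
          f"MISMATCH:\n  dumped {dumped}\n  golden {golden}")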

The third target is the LPC (Low Pin Count) bus, and it’s particularly attractive because an attack can compromise the operation of the computer, provide remote access to power and other vital control functions, and compromise the security of the boot process. This bus carries seven mandatory signals and up to six optional signals; it is used to connect a computer’s CPU to legacy devices, such as serial and parallel ports, or to physical switches on the chassis, and in many modern PCs, its signals control the fans.

The LPC bus is such a vulnerable point because many servers use it to connect a separate management processor to the system. This processor, called the baseboard management controller (BMC), can perform basic housekeeping functions even if the main processor has crashed or the operating system has not been installed. It’s convenient because it permits remote control, repair, and diagnostics of server components. Most BMCs have a dedicated Ethernet port, and so an attack on a BMC can also result in network access.

The BMC also has a pass-through connection to the SPI bus, and many processors load their BIOS through that channel. This is a purposeful design decision, as it permits the BIOS to be patched remotely via the BMC.

Many motherboards also use the LPC bus to access hardware implementing the Trusted Platform Module (TPM) standard, which provides cryptographic keys and a range of other services to secure a computer and its software.

Start your search for surreptitious components with these buses. You can search them by machine: At the far end of the automation is a system developed by Mark M. Tehranipoor, director of the Florida Institute for Cybersecurity Research, in Gainesville. It uses optical scans, microscopy, X-ray tomography, and artificial intelligence to compare a PCB and its components with the intended design. Or you can do the search by hand, which consists of four rounds of checks. While these manual methods may take time, they don’t need to be done on every single board, and they require little technical skill.

In the first round, check the board for components that lack a reference designator. This is a bright red flag; there is no way that a board so hobbled could be manufactured in a normal production process. Finding such a component is a strong indication of an attack on the board layout files (that is, the Gerber and drill files), because that step is the likeliest place to add a component without adding a reference designator. Of course, a component without a reference designator is a major design mistake and worth catching under any circumstances.

In the second round of checks, make sure that every reference designator is found in the schematic, layout, and bill of materials. A bogus reference designator is another clear indication that someone has tampered with the board layout files.
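
This second-round check is easy to automate. The Python sketch below assumes, purely for illustration, that the schematic, layout, and bill of materials have each been exported to a CSV file with a refdes column; it reports designators that appear in one document but not the others.

    # Minimal sketch: cross-check reference designators across the schematic,
    # layout, and bill of materials. The file names and the "refdes" column are
    # assumptions about how the exports were generated.
    import csv

    def refdes_set(path, column="refdes"):
        with open(path, newline="") as f:
            return {row[column].strip() for row in csv.DictReader(f)
                    if row[column].strip()}

    docs = {
        "schematic": refdes_set("schematic.csv"),
        "layout":    refdes_set("layout.csv"),
        "bom":       refdes_set("bom.csv"),
    }

    for ref in sorted(set.union(*docs.values())):
        missing = [name for name, refs in docs.items() if ref not in refs]
        if missing:
            print(f"{ref}: missing from {', '.join(missing)}")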

In the third round, focus on the shape and size of the component footprints. For example, if a four-pin part is on the schematic and the layout or board has an eight-pin footprint, this is clear evidence of a hack.

The fourth round of checks should examine all the unpopulated parts of the board. Although placing components in an unpopulated spot may well be the result of a genuine mistake, it may also be a sign of sabotage, and so it needs to be checked for both reasons.

As you can see, modern motherboards, with their thousands of sometimes mote-size components, are quite vulnerable to subversion. Some of those exploits make it possible to gain access to vital system functions. Straightforward methods can detect and perhaps deter most of these attacks. As with malware, heightened sensitivity to the issue and well-planned scrutiny can make attacks unlikely and unsuccessful.

About the Author

Samuel H. Russ is an associate professor of electrical engineering at the University of South Alabama. Jacob Gatlin is a Ph.D. candidate at the university.

The CPU’s Silent Partner: The Coprocessor’s Role Is Often Unappreciated

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/hardware/the-cpus-silent-partner-the-coprocessors-role-is-often-unappreciated

One reason the PC has endured for nearly 40 years is that its design was almost entirely open: No patents restricted reproduction of the fully documented hardware and firmware. When you bought a PC, IBM gave you everything you needed to manufacture your own clone. That openness seeded an explosion of PC compatibles, the foundation of the computing environment we enjoy today.

In one corner of the original PC’s motherboard, alongside the underpowered-but-epochal 8088 CPU, sat an empty socket. It awaited an upgrade it rarely received: an 8087 floating-point coprocessor.

Among the most complex chips of its day, the 8087 accelerated mathematical computations—in particular, the calculation of transcendental functions—by two orders of magnitude. While not something you’d need for a Lotus 1-2-3 spreadsheet, for early users of AutoCAD those functions were absolutely essential. Pop that chip into your PC and rendering detailed computer-aided-design (CAD) drawings no longer felt excruciatingly slow. That speed boost didn’t come cheap, though. One vendor sold the upgrade for US $295—almost $800 in today’s dollars.

Recently, I purchased a PC whose CPU runs a million times as fast as that venerable 8088 and uses a million times as much RAM. That computer cost me about as much as an original PC—but in 2020 dollars, it’s worth only a third as much. Yet the proportion of my spend that went into a top-of-the-line graphics processing unit (GPU) was the same as what I would have invested in an 8087 back in the day.

Although I rarely use CAD, I do write math-intensive code for virtual or augmented reality and videogrammetry. My new coprocessor—a hefty slice of Nvidia silicon—performs its computations at 200 million times the speed of its ancestor.

That kind of performance bump can only partially be attributed to Moore’s Law. Half or more of the speedup derives from the massive parallelism designed into modern GPUs, which are capable of simultaneously executing several thousand pixel-shader programs (to compute position, color, and other attributes when rendering objects).

Such massive parallelism has its direct analogue in another, simultaneous revolution: the advent of pervasive connectivity. Since the late 1990s, it’s been a mistake to conceive of a PC as a stand-alone device. Through the Web, each PC has been plugged into a coprocessor of a different sort: the millions of other PCs that are similarly connected.

The computing hardware we quickly grew to depend on was eventually refined into a smartphone, representing the essential parts of a PC, trimmed to accommodate a modest size and power budget. And smartphones are even better networked than early PCs were. So we shouldn't think of the coprocessor in a smartphone as its GPU, which helps draw pretty pictures on the screen. The real coprocessor is the connected capacity of some 4 billion other smartphone-carrying people, each capable of sharing with and learning from one another through the Web or on various social-media platforms. It's something that brings out both the best and worst in us.

We all now have powerful tools at our fingertips for connecting with others. When we plug ourselves into this global coprocessor, we can put our heads together to imagine, plan, and produce—or to conspire, harass, and thwart. We can search and destroy, or we can create and share. With the great power that this technology confers comes great responsibility. That’s something we should remind ourselves of every time we peer at our screens.

This article appears in the September 2020 print issue as “The CPU’s Silent Partner.”