All posts by Samuel K. Moore

How to Keep the Automotive Chip Shortage From Happening Again

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/transportation/advanced-cars/how-to-keep-the-automotive-chip-shortage-from-happening-again

“The automotive supply chain is a very complicated animal,” said Bob O’Donnell, president of TECHnalysis Research, at an automotive technology panel held Monday at GlobalFoundries Fab 8 in Malta, N.Y. “And very few people understand it.”

O’Donnell made this observation as part of a discussion involving executives from the auto and chip industries, who portrayed a supply chain whose shortcomings have recently brought car makers to their knees. The panelists—executives from chip manufacturer GlobalFoundries, IC maker Analog Devices, system integrator Aptiv, and automaker Ford—all agreed that this must never happen again.

Meanwhile, the semiconductor content in cars is growing at an unprecedented rate—and those semiconductors are being integrated into new architectures driven by the change to electric vehicles.

“We have to revisit risk management across the board,” said Jonathan Jennings, vice president of global commodities purchasing and supplier technical assistance at Ford. He explained that the industry thought it had been covering itself against risks by using multiple suppliers. However, they did not realize that those suppliers or the suppliers of those suppliers were all using the output of the same small set of semiconductor foundries.

Kevin P. Clark, president and CEO of Aptiv, which as a Tier 1 supplier builds electronics systems for automakers, gave a sense of the scale of his company’s part of the supply chain, saying, “We receive 220 million parts from 400 suppliers daily. Of which we produce more than 90 million components shipped to 7000 to 8000 customers daily.”

Car makers typically deal closely with their Tier 1 suppliers, and Jennings said people in his position rarely met with chip manufacturers directly. “But we have now,” he said.

The suppliers agreed that they need deeper relationships with the car makers. “What it requires is strategic relationships all the way down the chain,” said Aptiv’s Clark. It will take, he continued, “co-investment not just from a dollars standpoint, but from a relationship standpoint.”

What might that mean for chip manufacturers like GlobalFoundries? According to GlobalFoundries senior vice president Mike Hogan, car maker involvement could lead to faster introduction of new chip technologies. For example, the first version of a new technology could be designed to meet auto industry standards rather than today’s model, in which tech developed for other industries is adapted to car makers’ needs.

This reimagining of the supply chain is happening as the car industry confronts big changes. “If you look at where we’re going from a technology standpoint, we will advance more in the next ten years than we will have in the last hundred,” Jennings said.

The move to battery electric vehicles presents a major chance to simplify the way the electronic systems in vehicles are designed. With existing internal combustion cars, those electronics have been layered on as new technologies were developed and deployed, leading to a lot of complexity in both hardware and software, explained Hogan. (For a deep dive into just how complex the software situation has gotten, read “How Software Is Eating the Car.”)

Battery electric redesigns offer “a real opportunity to rethink how a vehicle is architected,” said Aptiv’s Clark. But for the supply chain to work efficiently, he thinks suppliers need to participate in that rearchitecting.

How long will it take before this dream supply chain emerges? It will likely be the work of years, executives say.

Ignoring Intel Rumors, GlobalFoundries Will Do $1-billion Expansion

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/devices/ignoring-intel-rumors-globalfoundries-will-do-1billion-expansion

It used to be rare that the semiconductor industry made the news, as Thomas Caulfield, CEO of GlobalFoundries, said on Monday. Caulfield was addressing an assembly of industry partners and political dignitaries including Senator Charles Schumer of New York and Commerce Secretary Gina Raimondo. “Every day now we’re in the news, not about what we did but what we haven’t been able to do enough of,” he said. “Our manufacturing capacity, worldwide, has been outpaced by demand.”

Hoping to cash in on remedying that situation, GlobalFoundries announced a $1-billion investment aimed at adding 150,000 wafers per year of capacity at Fab 8 in Malta, N.Y., the company’s most advanced facility. Caulfield also announced that GlobalFoundries would build a new fab in Malta. However, he provided no details about that fab.

The truth is that GlobalFoundries has been in the news quite a lot lately, but not for any reason Caulfield would talk about. Intel is rumored to be interested in purchasing GlobalFoundries for US $30 billion, according to the Wall Street Journal. Caulfield would not acknowledge the rumors. Such a tie-up would be a good fit for Intel, analysts say, because the company is in the process of rebooting its foundry business. GlobalFoundries would provide both ready-made capacity and, perhaps more importantly, know-how to get that business off the ground.

The Fab 8 expansion would generate 1000 jobs at GlobalFoundries and thousands more in the local economy, Caulfield said. The $1-billion expansion comes on top of a global $1.4 billion expansion and a new fab in Singapore that will add 450,000 wafers per year.

Fab 8 has had its ups and downs. In 2018, the company installed two extreme ultraviolet lithography machines in a drive to produce 7-nanometer chips, then the industry’s most advanced. However, company executives calculated that they would never turn a profit by continuing to chase Moore’s Law, and they abandoned the project. Instead, the company has focused on adding technology features to its existing processes, such as embedded MRAM memory and advanced RF capabilities. IBM is now suing GlobalFoundries because, it says, GlobalFoundries was obligated to produce IBM chips at 10 nanometers or better, a process technology between 7 nm and the 12 nm that GlobalFoundries currently operates at Fab 8.

The lack of detail about the new fab in Malta, N.Y., may be due in part to the considerable amount of federal government money likely at stake. Schumer scored a rare bipartisan win pushing the United States Innovation and Competition Act of 2021 through the U.S. Senate. The bill includes $52 billion for semiconductor manufacturing and R&D, but it must pass in the House of Representatives before it’s signed into law. And after that, the Commerce Department must figure out how to dispense the funds.

The $52 billion seems small compared to other programs in the works in South Korea and Europe, which could amount to hundreds of billions of dollars over a decade. Raimondo described the U.S. plan as “a historic, once in a generation investment.” But she also acknowledged that $52 billion won’t be enough. “It’s just the tip of the spear.”

How and When the Chip Shortage Will End, in 4 Charts

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/devices/how-and-when-the-chip-shortage-will-end-in-4-charts

Historians will probably spend decades picking apart the consequences of the COVID-19 pandemic. But the shortage of chips that it’s caused will be long over by then. A variety of analysts agree that the most problematic shortages will begin to ease in the third or fourth quarter of 2021, though it could take much of 2022 for the resulting chips to work their way through the supply chain to products. The supply relief will not be coming from the big, national investments in the works right now by South Korea, the United States, and Europe but from older chip fabs and foundries running processes far from the cutting edge and on comparatively small silicon wafers.

Before we get into how the shortage will end, it’s worth summing up how it began. With panic, lockdowns, and general uncertainty rolling across the globe, automakers cancelled orders. However, those conditions meant a big fraction of the workforce recreated the office at home, purchasing computers, monitors, and other equipment. At the same time, entire school systems switched to virtual learning via laptops and tablets. And more time at home also meant more spending on home entertainment, such as TVs and game consoles. These, the 5G rollout, and continued growth in cloud computing quickly hoovered up the capacity automakers had unceremoniously freed. By the time car makers realized people still wanted to buy their goods, they found themselves at the back of the line for the chips they needed.

At $39.5 billion, the auto industry makes up less than 9 percent of chip demand by revenue, according to market research firm IDC. That figure is set to increase by about 10 percent per year to 2025. However, the auto industry—which employs more than 10 million people globally— is something both consumers and politicians are acutely sensitive to, especially in the United States and Europe.

Chips for the automotive sector are made using processes intended to meet safety criteria that are different from those meant for other industries. But they are still fabricated on the same production lines as the analog ICs, power management chips, display drivers, microcontrollers, and sensors that go in everything else. “The common denominator is the process technology is 40 nanometers and older,” says Mario Morales, vice president, enabling technologies and semiconductors at IDC.

This chip manufacturing technology was last cutting edge 15 years ago or more, yet lines producing chips at these old nodes represent a full 54 percent of installed capacity, according to IDC. Today these old nodes typically run on 200-mm silicon wafers. To reduce cost, the industry began moving to 300-mm wafers in 2000, but much of the old 200-mm infrastructure remained in use and has even expanded.
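
To see why the industry made that 300-mm move, consider the geometry: a 300-mm wafer has 2.25 times the area of a 200-mm one, so each pass through the fab yields far more chips. Here is a back-of-the-envelope sketch (simple geometry only, not cost figures from IDC or SEMI):

```python
import math

def wafer_area_cm2(diameter_mm):
    # Usable area of a circular wafer, ignoring edge exclusion and die shape.
    return math.pi * (diameter_mm / 2) ** 2 / 100

area_200 = wafer_area_cm2(200)
area_300 = wafer_area_cm2(300)

# A 300-mm wafer offers 2.25x the area of a 200-mm wafer, which is the core
# economic argument behind the industry's shift that began in 2000.
print(f"200-mm wafer: {area_200:.0f} cm^2")
print(f"300-mm wafer: {area_300:.0f} cm^2")
print(f"area ratio:   {area_300 / area_200:.2f}x")
```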

Despite the auto industry’s desperation, there’s no great rush to build new 200-mm fabs. “The return on investment just isn’t there,” says Morales. What’s more, there are already many legacy-node plants in China that are not operating efficiently right now, but “at some point, they will,” he says, further reducing the incentive to build new fabs. According to the chip manufacturing equipment industry association SEMI, the number of 200-mm fabs will go from 212 in 2020 to 222 in 2022, about half the expected increase of the more profitable 300-mm fabs.

More than 40 companies will increase capacity by more than 750,000 wafers per month from the beginning of 2020 to the end of 2022. The long-term trend to the end of 2024 is for a 17 percent increase in capacity for 200-mm facilities. Spending on equipment for these fabs is set to rise to $4.6 billion in 2021 after crossing the $3-billion mark in 2020 for the first time in years, SEMI says. But then spending will drop back to $4 billion in 2022. In comparison, spending to equip 300-mm fabs is expected to hit $78 billion in 2021.

The chip shortage is happening simultaneously with national and regional efforts to boost advanced logic chip manufacturing. South Korea announced a push worth $450 billion over ten years, the United States is pushing legislation worth $52 billion, and the EU could plow up to $160 billion into its semiconductor sector. Chipmakers were already on a spending spree. Globally, spending on capital equipment for semiconductor production grew 56 percent year-on-year through April 2021, according to SEMI. SEMI’s 3 June 2021 World Fab Forecast indicates that 10 new 300-mm fabs will start operation in 2021 with 14 more coming up in 2022.

“The push for building IC capacity around the world will certainly drive fab investment of the current decade to a new high,” says Christian Gregor Dieseldorff, senior principal for semiconductors at SEMI. “We expect to see record spending and more new fab announcements in the next few years.”

One potential hiccup on the road to ending the shortage is that some of the skyrocketing demand appears to be from customers that are double-ordering to bulk up on inventory, says Jim Feldhan, president of Semico Research. “I don’t know of any product that needs twice the amount of analog” as the year before, he says.  But manufacturers “don’t want a 12-cent part to hold up a 4K television,” so they’re stocking up.

The auto industry needs to do more than just stock up, according to Bharat Kapoor, lead partner, Americas, in the high-tech practice of global strategy and management consulting firm, Kearney. To keep future shortages at bay, the chip industry and auto executives need a more direct connection going forward so signals about supply and demand are clearer, he says.

This post was corrected on 30 June to clarify historic 200-mm fab equipment spending.

Samsung’s 3-nm Tech Shows Nanosheet Transistor Advantage

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/memory/samsungs-3nm-tech-shows-nanosheet-transistor-advantage

The logic chip industry is heading toward a fundamental change in the structure of transistors. Today’s transistors, called FinFETs, will give way to devices variously called nanosheet transistors, multibridge channel FETs, and gate-all-around transistors. Apart from the drive to make transistors that are both better performing and smaller, nanosheets add a degree of freedom to circuit design that FinFETs lack. Earlier this month at the IEEE International Solid-State Circuits Conference, Samsung engineers showed how that extra flexibility leads to on-chip memory cells that can be written to using hundreds of millivolts less potential, likely saving power on future chips.

Although Taiwan Semiconductor Manufacturing Co. (TSMC)  plans to stay with FinFETs for its next generation process, the 3-nanometer node, Samsung chose to forge ahead with its version of nanosheets, multibridge channel MOSFETs (MBCFET). In FinFETs, the channel region, the part of the transistor through which current flows, is a vertical fin that protrudes up from the surrounding silicon. The gate drapes over the fin, covering it on three sides to control the flow of current through the channel. Nanosheets replace the fin with a stack of horizontal sheets of silicon. The gate completely surrounds each sheet. 

“We have used FinFET transistors for about a decade, however at 3-nm we are using a gate all around transistor,” Samsung Electronics vice president Taejoong Song told attendees of the virtual conference. The new transistor “provides high speed, low power, and small area.”

But, as early nanosheet developers explained in IEEE Spectrum, the new device structure adds a degree of design flexibility that FinFETs lack. The key here is the “effective width,” or Weff, of the transistor channel. Generally, a wider channel can drive more current for a given voltage, effectively reducing its resistance. Because you can’t vary the height of the fin in a FinFET, the only way to boost Weff with today’s transistors is to add more fins per transistor. So with a FinFET you can double or triple Weff, but you can’t increase it by 25 percent or decrease it by 20 percent. You can, however, vary the width of the sheets in a nanosheet device, so a circuit using them can be composed of transistors with a variety of properties.
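
To make the contrast concrete, here is a simplified sketch with invented dimensions (not any foundry’s actual design rules) showing that a FinFET’s effective width grows only in whole-fin jumps, while a nanosheet stack’s can be tuned in much finer steps:

```python
# Illustrative only: the dimensions are invented, not any foundry's design rules.

def finfet_weff(n_fins, fin_height_nm=50.0, fin_width_nm=7.0):
    # Each fin contributes two sidewalls plus its top, so effective width
    # can only grow in whole-fin increments.
    return n_fins * (2 * fin_height_nm + fin_width_nm)

def nanosheet_weff(n_sheets, sheet_width_nm, sheet_thickness_nm=5.0):
    # The gate wraps all four sides of each sheet, and the designer can
    # choose the sheet width freely.
    return n_sheets * 2 * (sheet_width_nm + sheet_thickness_nm)

# A FinFET jumps straight from one fin to two, doubling Weff.
print(finfet_weff(1), finfet_weff(2))                    # 107.0 214.0 (nm)

# A three-sheet nanosheet stack can instead be widened in fine increments.
print(nanosheet_weff(3, 20.0), nanosheet_weff(3, 25.0))  # 150.0 180.0 (nm)
```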

“Recently, designers are facing many challenges for [achieving maximum device frequency] and low power,” Song said. “Due to this design flexibility SRAM… can be improved more.”

Song and his team took advantage of this flexibility to improve the performance of a potential next generation SRAM. SRAM is a six-transistor memory cell predominantly used as cache memory on processors, and it’s also among the most densely-packed parts of a logic chip. Samsung tested two schemes to improve SRAM’s write margin, the minimum voltage needed to switch the state of the cell. That value has been under pressure as chip interconnects have been shrunk down and their resistance has consequently increased.

SRAM’s six transistors can be divided into three pairs: the pass gates, the pull ups, and the pull downs. In a FinFET design, the Weff of all three types would be equal. But with nanosheet devices, the Samsung team was free to make alterations. In one scheme, they made the pass gates and the pull downs wider. In another, they made the pass gates wider and the pull downs narrower.

The aim was to decrease the voltage needed to write to the SRAM cell without making the cell so unstable that reading it would accidentally flip a bit. The two schemes they came up with exploited these width adjustments—in particular widening the pass gate transistors relative to the pull up transistors—to arrive at an SRAM cell that writes at a voltage 230 mV lower than it otherwise would.

Samsung is expected to move to a 3-nm process with its MBCFET in 2022.

Darpa Hacks Its Secure Hardware, Fends Off Most Attacks

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/embedded-systems/darpa-hacks-its-secure-hardware-fends-off-most-attacks

Last summer, Darpa asked hackers to take their best shots at a set of newly designed hardware architectures. After 13,000 hours of hacking by 580 cybersecurity researchers, the results are finally in: just 10 vulnerabilities. Darpa is calling it a win, not because the new hardware fought off every attack, but because it “proved the value of the secure hardware architectures developed under its System Security Integration Through Hardware and Firmware (SSITH) program while pinpointing critical areas to further harden defenses,” says the agency.

Researchers in SSITH, which is part of Darpa’s multibillion dollar Electronics Resurgence Initiative, are now in the third and final phase of developing security architectures and tools that guard systems against common classes of hardware vulnerabilities that can be exploited by malware. [See “How the Spectre and Meltdown Hacks Really Worked.”] The idea is to find a way past the long-standing security model of “patch and pray,” in which vulnerabilities are found and then patched with software updates.

In an essay introducing the bug bounty, Keith Rebello, the project’s leader, wrote that patching and praying is a particularly ineffective strategy for IoT hardware, because of the cost and inconsistency of updating and qualifying a hugely diverse set of systems. [See “DARPA: Hack Our Hardware”]

Rebello described the common classes of vulnerabilities as buffer errors, privilege escalations, resource management attacks, information leakage attacks, numeric errors, code injection attacks, and cryptographic attacks. SSITH teams came up with RISC-V-based architectures meant to render them impossible. These were then emulated using FPGAs. A full stack of software, including a bunch of apps known to be vulnerable, ran on the FPGAs. The researchers also allowed outsiders to add their own vulnerable applications. The Defense Department then loosed hackers upon the emulated systems using a crowdsourced security platform provided by Synack in a bug bounty effort called Finding Exploits to Thwart Tampering (FETT).

“Knowing that virtually no system is unhackable, we expected to discover bugs within the processors. But FETT really showed us that the SSITH technologies are quite effective at protecting against classes of common software-based hardware exploits,” said Rebello, in a press release. “The majority of the bug reports did not come from exploitation of the vulnerable software applications that we provided to the researchers, but rather from our challenge to the researchers to develop any application with a vulnerability that could be exploited in contradiction with the SSITH processors’ security claims. We’re clearly developing hardware defenses that are raising the bar for attackers.”

Of the 10 vulnerabilities discovered, four were fixed during the bug bounty, which ran from July to October 2020. Seven of those 10 were deemed critical, according to the Common Vulnerability Scoring System 3.0 standards. Most of those resulted from weaknesses introduced by interactions between the hardware, firmware, and the operating system software. For example, one hacker managed to steal the Linux password authentication manager from a protected enclave by hacking the firmware that monitors security, Rebello explains.

In the program’s third and final phase, research teams will work on boosting the performance of their technologies and then fabricating a silicon system-on-chip that implements the security enhancements. They will also take the security tech, which was developed for the open-source RISC-V instruction set architecture, and adapt it to processors with the much more common Arm and x86 instruction set architectures. How long that last part will take depends on the approach the research team took, says Rebello. However, he notes that three teams have already ported their architectures to Arm processors in a fraction of the time it took to develop the initial RISC-V version.

New Type of DRAM Could Accelerate AI

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/memory/new-type-of-dram-could-accelerate-ai

One of the biggest problems in computing today is the “memory wall”—the difference between processing time and the time it takes to shuttle data over to the processor from separate DRAM memory chips. The increasing popularity of AI applications has only made that problem more pronounced, because the huge networks that find faces, understand speech, and recommend consumer goods rarely fit in a processor’s on-board memory.

In December at the IEEE International Electron Devices Meeting (IEDM), separate research groups in the United States and in Belgium argued that a new kind of DRAM might be the solution. The new DRAM, made from oxide semiconductors and built in the layers above the processor, holds bits hundreds or thousands of times longer than commercial DRAM and could provide huge area and energy savings when running large neural nets, they say.

Plasmonics: A New Way to Link Processors With Light

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/optoelectronics/plasmonics-a-new-way-to-link-processors-with-light

Fiber optic links are already the main method of slinging data between clusters of computers in data centers, and engineers want to bring their blazing bandwidth to the processor. That step comes at a cost that researchers at the University of Toronto and Arm think they can greatly reduce.

Silicon photonics components are huge in comparison to their electronic counterparts. That’s a function of optical wavelengths being so much larger than today’s transistors and the copper interconnects that tie them together into circuits. Silicon photonic components are also surprisingly sensitive to changes in temperature, so much so that photonics chips must include heating elements that take up about half of their area and energy consumption, as Charles Lin, one of the team at University of Toronto, explained last month at the IEEE International Electron Device Meeting.

At the virtual conference Lin, a researcher in the laboratory of Amr S. Helmy, described new silicon transceiver components that dodge both of these problems by relying on plasmonics instead of photonics. The results so far point to transceivers capable of at least double the bandwidth while consuming only one third the energy and taking up a mere 20 percent of the area. What’s more, they could be built right atop the processor, instead of on separate chiplets as is done with silicon photonics.

When light strikes the interface between a metal and an insulator at a shallow angle, it forms plasmons: waves of electron density that propagate along the metal surface. Conveniently, plasmons can travel down a waveguide that is much narrower than the light that forms them, but they typically peter out very quickly because the metal absorbs light.

The Toronto researchers invented a structure to take advantage of plasmonics’ smaller size while greatly reducing the loss. Called the coupled hybrid plasmonic waveguide (CPHW), it is essentially a stack made up of silicon, the conductor indium tin oxide, silicon dioxide, aluminum, and more silicon. That combination forms two types of semiconductor junctions—a Schottky diode and a metal-oxide-semiconductor—with the aluminum that contains the plasmon in common between the two. Within the metal, the plasmon in the top junction interferes with the plasmon in the bottom junction in such a way that loss is reduced by almost two orders of magnitude, Lin said.

Using the CPHW as a base, the Toronto group built two key photonics components—a modulator, which turns electronic bits into photonic bits, and a photodetector, which does the reverse. (As is done in silicon photonics, a separate laser provides the light; the modulator blocks the light or lets it pass to represent bits.) The modulator took up just 2 square micrometers and could switch as fast as 26 gigahertz, the limit of the Toronto team’s equipment. Based on the device’s measured capacitance, the real limit could be as high as 636 GHz. The plasmonic photodetector was a near match for silicon photonics’ sensitivity, but it was only 1/36th the size.
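
That 636-GHz estimate is presumably the usual RC-limited bandwidth calculation, f = 1/(2πRC). The sketch below uses placeholder resistance and capacitance values, chosen only to show the scaling rather than the device’s actual measured numbers:

```python
import math

def rc_limited_bandwidth_ghz(resistance_ohm, capacitance_ff):
    # 3-dB bandwidth of an RC-limited device: f = 1 / (2 * pi * R * C).
    capacitance_f = capacitance_ff * 1e-15
    return 1.0 / (2 * math.pi * resistance_ohm * capacitance_f) / 1e9

# Placeholder values for illustration only -- not the device's measured numbers.
# A junction of a few femtofarads driven through ~50 ohms already lands in the
# hundreds of gigahertz, far above the 26-GHz limit of the test equipment.
print(f"{rc_limited_bandwidth_ghz(resistance_ohm=50, capacitance_ff=5):.0f} GHz")
```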

One of the CPHW’s biggest advantages is its insensitivity to temperature. Silicon photonics components must be kept within about one degree of their target temperature to operate at the proper wavelength. Temperature sensitivity is a “big challenge for silicon photonics,” explains Saurabh Sinha, a principal research engineer at Arm. Managing that tolerance requires both extra circuitry and extra energy. In a simulated 16-channel silicon photonics transceiver, heating circuits consume half of the energy and take up nearly that fraction of the total area, and that translates to a huge difference in size: 0.37 mm² for silicon photonics versus 0.07 mm² for plasmonic transceivers.

Simulations of the CPHW-based plasmonics transceiver predict a number of benefits over silicon photonics. The CPHW system consumed less than one-third of the energy per bit transmitted of a competing silicon photonics system—0.49 picojoules per bit versus 1.52 pJ/b. It could also comfortably transmit more than three times as many bits per second at acceptable Ethernet error rates without relying on error correction—150 gigabits per second versus 39 Gb/s.

Sinha says Arm and the Toronto group are discussing next steps. Those might include exploring other potential benefits of these transceivers, such as the fact that CPHWs could be constructed atop processor chips, whereas silicon photonics devices must be made separately from the processor and then linked to it inside the processor package using chiplet technology.

System Creates the Illusion of an Ideal AI Chip

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/system-creates-the-illusion-of-an-ideal-ai-chip

What does an ideal neural network chip look like? The most important part is to have oodles of memory on the chip itself, say engineers. That’s because data transfer (from main memory to the processor chip) generally uses the most energy and produces most of the system lag—even compared to the AI computation itself. 

Cerebras Systems solved these problems, collectively called the memory wall, by making a computer consisting almost entirely of a single, large chip containing 18 gigabytes of memory. But researchers in France, Silicon Valley, and Singapore have come up with another way.

Called Illusion, it uses processors built with resistive RAM memory in a 3D stack above the silicon logic, so it costs little energy or time to fetch data. By itself, even this isn’t enough, because neural networks are increasingly too large to fit on one chip. So the scheme also requires multiple such hybrid processors and an algorithm that both intelligently slices up the network among the processors and knows when to rapidly shut processors off while they’re idle.

In tests, an eight-chip version of Illusion came within 3-4 percent of the energy use and latency of a “dream” processor that had all the needed memory and processing on one chip.

U.S. Takes Strategic Step to Onshore Electronics Manufacturing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/us-takes-strategic-step-to-onshore-electronics-manufacturing

Late last week, the U.S. Congress passed the annual policy law that guides the U.S. Defense Department. Tucked away inside the National Defense Authorization Act of 2021 (NDAA) are provisions that supporters hope will lead to a resurgence in chip manufacturing in the United States. The provisions include billions of dollars of financial incentives for construction or modernization of facilities “relating to the fabrication, assembly, testing, advanced packaging, or advanced research and development of semiconductors.”


The microelectronics incentives in the law stem from U.S. officials’ concerns about China’s rapidly growing share of the global chipmaking industry and about the United States’ shrinking stake. The legislation frames that as an issue of U.S. national security.

Although China does not hold a technological lead in chipmaking, its geographic proximity to those who do worries some in the United States. Today, foundries using the most advanced manufacturing processes (currently the 5-nanometer node) are operated by Samsung in South Korea and by Taiwan Semiconductor Manufacturing Company (TSMC) in Taiwan and nowhere else.

Both companies provide foundry services, manufacturing chips for U.S.-based tech giants like Nvidia, AMD, Google, Facebook, and Qualcomm. For years, Intel was their match and more in terms of manufacturing technology, but the company has struggled to move to new processes.

Admittedly, there are already some moves toward state-of-the-art foundry construction in the United States even in the absence of the NDAA. TSMC announced in May that it planned to spend up to $12 billion over nine years on a new 5-nanometer fab in Arizona. The TSMC board of directors took the first step in November when they approved $3.5 billion to establish a wholly-owned subsidiary in the state.

But the Semiconductor Industry Association, the U.S. trade group, says government incentives will accelerate construction. The SIA calculates that a $20-billion incentive program over 10 years would yield 14 new fabs and attract $174 billion in investment versus 9 fabs and $69 billion without the federal incentives. A $50-billion program would yield 19 fabs and attract $279 billion.

The NDAA specifies a cap of $3 billion per project unless Congress and the President agree to more, but how much money actually gets spent in total on microelectronics capacity will depend on separate “appropriations” bills.

“The next step is for leaders in Washington to fully fund the NDAA’s domestic chip manufacturing incentives and research initiatives,” said Bob Bruggeworth, chair of SIA and president, CEO, and director of RF-chipmaker Qorvo, in a press release.

Getting the NDAA’s microelectronics and other technology provisions funded will be one of IEEE USA’s top priorities in 2021, says the organization’s director of government relations, Russell T. Harrison.

Beyond financial incentives, the NDAA also authorizes microelectronics-related R&D, development of a “provably secure” microelectronics supply chain, the creation of a National Semiconductor Research Technology Center to help move new technology into industrial facilities, and establishment of committees to create strategies toward adding capacity at the cutting edge. It also authorizes quantum computing and artificial intelligence initiatives.

The NDAA “has a lot of provisions in it that are very good for IEEE members,” says Harrison.

The semiconductor strategy and investment portion of the law began as separate bills in the House of Representatives and the Senate. In the Senate, it was called the American Foundries Act of 2020, and was introduced in July. The act called for $15 billion for state-of-the-art construction or modernization and $5 billion in R&D spending, including $2 billion for the Defense Advanced Research Projects Agency’s Electronics Resurgence Initiative. In the House, the bill was called the CHIPS for America Act. It was introduced in June and offered similar levels of R&D funding.

Some in industry objected to early conceptions of the legislation, believing them to be too narrowly focused on cutting-edge silicon CMOS. Industry lobbied Congress to make the law more inclusive—potentially allowing for expansion of facilities like SkyWater Technology’s 200-mm fab in Bloomington, Minn.

The language in later versions of the bill signals that the government “still wants to pursue advanced nodes but that they understand that we have an existing manufacturing capability in the U.S. that needs support and can still play a big role in making us competitive,” says John Cooney, director of strategic government relations at SkyWater.

Realizing that little legislating was likely to happen in an election year, supporters chose to try to fold the microelectronics language into the NDAA, which is considered a must-pass bill and had a 59-year bipartisan streak going into December. President Trump vetoed the NDAA last month, but Congress quickly overrode the veto at the start of January.

“What we’ve seen increasingly over the last nine months, is that there is a bicameral, bipartisan consensus building in Congress that the United States needs to do more to promote technology and technology research [domestically],” says Harrison.

“All of this is a huge step in the right direction, and we’re really excited about it,” says SkyWater’s Cooney. “But it is just the first step to be competitive.”

The U.S. move is just one among a series of maneuvers taking place globally as countries and regions seek to build up or regain chipmaking capabilities. China has been on an investment streak through its Made in China 2025 plan. In December, Belgium, France, Germany, and 15 other European Union nations agreed to jointly bolster Europe’s semiconductor industry, including moving toward 2-nanometer node production. The money for this would come from the 145-billion-euro portion of the EU’s pandemic recovery fund set aside for “digital transition.”

Gravity Energy Storage Will Show Its Potential in 2021

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/energy/batteries-storage/gravity-energy-storage-will-show-its-potential-in-2021


Cranes are a familiar fixture of practically any city skyline, but one in the Swiss canton of Ticino, near the Italian border, would stand out anywhere: It has six arms. This 110-meter-high starfish of the skyline isn’t intended for construction. It’s meant to prove that renewable energy can be stored by hefting heavy loads and dispatched by releasing them.

Energy Vault, the Swiss company that built the structure, has already begun a test program that will lead to its first commercial deployments in 2021. At least one competitor, Gravitricity, in Scotland, is nearing the same point. And there are at least two companies with similar ideas, New Energy Let’s Go and Gravity Power, that are searching for the funding to push forward.

To be sure, nearly all the world’s currently operational energy-storage facilities, which can generate a total of 174 gigawatts, rely on gravity. Pumped hydro storage, where water is pumped to a higher elevation and then run back through a turbine to generate electricity, has long dominated the energy-storage landscape. But pumped hydro requires some very specific geography—two big reservoirs of water at elevations with a vertical separation that’s large, but not too large. So building new sites is difficult.

Energy Vault, Gravity Power, and their competitors seek to use the same basic principle—lifting a mass and letting it drop—while making an energy-storage facility that can fit almost anywhere. At the same time they hope to best batteries—the new darling of renewable-energy storage—by offering lower long-term costs and fewer environmental issues.

In action, Energy Vault’s towers are constantly stacking and unstacking 35-metric-ton bricks arrayed in concentric rings. Bricks in an inner ring, for example, might be stacked up to store 35 megawatt-hours of energy. Then the system’s six arms would systematically disassemble it, lowering the bricks to build an outer ring and discharging energy in the process.

This joule-storing Jenga game can be complicated. To maintain a constant output, one block needs to be accelerating while another is decelerating. “That’s why we use six arms,” explains Robert Piconi, the company’s CEO and cofounder.

What’s more, the control system has to compensate for gusts of wind, the deflection of the crane as it picks up and sets down bricks, the elongation of the cable, pendulum effects, and more, he says.

Piconi sees several advantages over batteries. Advantage No. 1 is environmental. Instead of chemically reactive and difficult-to-recycle lithium-ion batteries, Energy Vault’s main expenditure is the bricks themselves, which can be made on-site using available dirt and waste material mixed with a new polymer from the Mexico-based cement giant Cemex.

Another advantage, according to Piconi, is the lower operating expense, which the company calculates to be about half that of a battery installation with equivalent storage capacity. Battery-storage facilities must continually replace cells as they degrade. But that’s not the case for Energy Vault’s infrastructure.

The startup is confident enough in its numbers to claim that 2021 will see the start of multiple commercial installations. Energy Vault raised US $110 million in 2019 to build the demonstration unit in Ticino and prepare for a “multicontinent build-out,” says Piconi.

Compared with Energy Vault’s effort, Gravitricity’s energy-storage scheme seems simple. Instead of a six-armed crane shuttling blocks, Gravitricity plans to pull one or just a few much heavier weights up and down abandoned, kilometer-deep mine shafts.

These great masses, each one between 500 and 5,000 metric tons, need only move at mere centimeters per second to produce megawatt-level outputs. Using a single weight lends itself to applications that need high power quickly and for a short duration, such as dealing with second-by-second fluctuations in the grid and maintaining grid frequency, explains Chris Yendell, Gravitricity’s project development manager. Multiple-weight systems would be more suited to storing more energy and generating for longer periods, he says. 

Proving the second-to-second response is a primary goal of a 250-kilowatt concept demonstrator that Gravitricity is building in Scotland. Its 50-metric-ton weight will be suspended 7 meters up on a lattice tower. Testing should start during the first quarter of 2021. “We expect to be able to achieve full generation within less than one second of receiving a signal,” says Yendell.
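
The demonstrator’s specifications line up with simple gravitational-potential-energy arithmetic (E = mgh). A quick sketch using only the figures above shows why a single weight suits short bursts of power:

```python
G = 9.81  # gravitational acceleration, m/s^2

def stored_energy_joules(mass_kg, drop_height_m):
    # Gravitational potential energy released by lowering a mass: E = m * g * h.
    return mass_kg * G * drop_height_m

# Gravitricity's concept demonstrator: a 50-metric-ton weight on a 7-m tower.
energy_j = stored_energy_joules(mass_kg=50_000, drop_height_m=7)
print(f"stored energy: {energy_j / 1e6:.1f} MJ (~{energy_j / 3.6e6:.2f} kWh)")

# At the rated 250 kW, that energy lasts only seconds -- consistent with the
# focus on fast, short-duration grid services rather than bulk storage.
print(f"full-power duration: {energy_j / 250_000:.0f} s")
```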

The company will also be developing sites for a full-scale prototype during 2021. “We are currently liaising with mine owners in Europe and in South Africa, [and we’re] certainly interested in the United States as well,” says Yendell. Such a full-scale system would then come on line in 2023.

Gravity Power and its competitor New Energy Let’s Go, which acquired its technology from the now bankrupt Heindl Energy, are also looking underground for energy storage, but they are more closely inspired by pumped hydro. Instead of storing energy using reservoirs at different elevations, they pump water underground to lift an extremely heavy piston. Allowing the piston to fall pushes water through a turbine to generate electricity.

“Reservoirs are the Achilles’ heel of pumped hydro,” says Jim Fiske, Gravity Power’s founder. “The whole purpose of a Gravity Power plant is to remove the need for reservoirs. [Our plants] allow us to put pumped-hydro-scale power and storage capacity in 3 to 5 acres [1 to 2 hectares] of flat land.”

Fiske estimates that a 400-megawatt plant with 16 hours of storage (or 6.4 gigawatt-hours of energy) would have a piston with a mass of more than 8 million metric tons. That might sound ludicrous, but it’s well within the lifting abilities of today’s pumps and the constraints of construction processes, he says.
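
The same E = mgh arithmetic hints at why the piston must be so massive. A rough sketch, ignoring the water column, losses, and other engineering details:

```python
G = 9.81  # gravitational acceleration, m/s^2

# Figures from the article: 400 MW for 16 hours, and a piston of roughly
# 8 million metric tons.
energy_j = 400e6 * 16 * 3600      # 6.4 GWh expressed in joules
piston_mass_kg = 8e6 * 1000

# How far would that piston have to descend to release 6.4 GWh?
# Simplified: real plants also involve the pumped water column and losses.
drop_height_m = energy_j / (piston_mass_kg * G)
print(f"required drop: ~{drop_height_m:.0f} m")   # roughly 300 m
```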

While these companies expect such underground storage sites to be more economical than battery installations, they will still be expensive. But nations concerned about the changing climate may be willing to pay for storage options like these when they recognize the gravity of the crisis.

This article appears in the January 2021 print issue as “The Ups and Downs of Gravity Energy Storage.”

Intel’s Stacked Nanosheet Transistors Could Be the Next Step in Moore’s Law

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/intels-stacked-nanosheet-transistors-could-be-the-next-step-in-moores-law

The logic circuits behind just about every digital device today rely on a pairing of two types of transistors—NMOS and PMOS. The same voltage signal that turns one of them on turns the other off. Putting them together means that electricity should flow only when a bit changes, greatly cutting down on power consumption. These pairs have sat beside each other for decades, but if circuits are to continue shrinking they’re going to have to get closer still. This week, at the IEEE International Electron Devices Meeting (IEDM), Intel showed a different way: stacking the pairs so that one is atop the other. The scheme effectively cut the footprint of a simple CMOS circuit in half, meaning a potential doubling of transistor density on future ICs.

HPE Invents First Memristor Laser

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/hpe-invents-first-memristor-laser

Researchers at Hewlett Packard Labs, where the first practical memristor was created, have invented a new variation on the device—a memristor laser. It’s a laser that can have its wavelength electronically shifted and, uniquely, hold that adjustment even if the power is turned off. At the IEEE International Electron Devices Meeting, the researchers suggest that, in addition to simplifying photonic transceivers for data transmission between processors, the new devices could form the components of superefficient brain-inspired photonic circuits.

Scaled-Down Carbon Nanotube Transistors Inch Closer to Silicon Abilities

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/scaleddown-carbon-nanotube-transistors-inch-closer-to-silicon-abilities

Carbon nanotube devices are getting closer to silicon’s abilities thanks to a series of developments, the latest of which was revealed today at the IEEE International Electron Devices Meeting (IEDM). Engineers from Taiwan Semiconductor Manufacturing Company (TSMC), the University of California San Diego, and Stanford University explained a new fabrication process that leads to better control of carbon nanotube transistors. Such control is crucial to ensuring that transistors, which act as switches in logic circuits, turn fully off when they are meant to. Interest in carbon nanotube transistors has accelerated recently, because they can potentially be shrunk down further than silicon transistors can and offer a way to produce stacked layers of circuitry much more easily than can be done in silicon.

The team invented a process for producing a better gate dielectric. That’s the layer of insulation between the gate electrode and the transistor channel region. In operation, voltage at the gate sets up an electric field in the channel region that cuts off the flow of current. As silicon transistors were scaled down over the decades, however, that layer of insulation, which was made of silicon dioxide, had to become thinner and thinner in order to control the current using less voltage, reducing energy consumption. Eventually, the insulation barrier was so thin that charge could actually tunnel through it, leaking current and wasting energy.
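
A back-of-the-envelope way to see that scaling pressure is the textbook parallel-plate capacitor formula for the gate oxide; the sketch below is illustrative only and uses no numbers from the TSMC paper:

```python
EPS0 = 8.854e-12    # vacuum permittivity, F/m
KAPPA_SIO2 = 3.9    # relative permittivity of silicon dioxide

def oxide_capacitance_uf_per_cm2(thickness_nm, kappa=KAPPA_SIO2):
    # Parallel-plate capacitance per unit area: C = eps0 * kappa / t.
    # More capacitance means the gate couples more strongly to the channel,
    # so the same control takes less voltage.
    c_per_m2 = EPS0 * kappa / (thickness_nm * 1e-9)
    return c_per_m2 * 100   # convert F/m^2 to uF/cm^2

# Halving the oxide thickness doubles the gate's grip on the channel...
for t_nm in (4, 2, 1):
    print(f"{t_nm} nm SiO2: {oxide_capacitance_uf_per_cm2(t_nm):.2f} uF/cm^2")
# ...but below roughly 2 nm, electrons tunnel straight through the film,
# which is the leakage problem described above.
```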

AI System Beats Supercomputer in Combustion Simulation

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/ai-system-beats-supercomputer-at-key-scientific-simulation

Cerebras Systems, which makes a specialized AI computer based on the largest chip ever made, is breaking out of its original role as a neural-network training powerhouse and turning its talents toward more traditional scientific computing. In a simulation with 500 million variables, the CS-1 trounced the 69th-most powerful supercomputer in the world.

It also solved the problem—combustion in a coal-fired power plant—faster than the real-world flame it simulates. To top it off, Cerebras and its partners at the U.S. National Energy Technology Laboratory claim, the CS-1 performed the feat faster than any present-day CPU- or GPU-based supercomputer could.

The research, which was presented this week at the supercomputing conference SC20, shows that Cerebras’ AI architecture “is not a one trick pony,” says Cerebras CEO Andrew Feldman.

Weather forecasting, design of airplane wings, predicting temperatures in a nuclear power plant, and many other complex problems are solved by simulating “the movement of fluids in space over time,” he says. The simulation divides the world up into a set of cubes, models the movement of fluid in those cubes, and determines the interactions between the cubes. There can be 1 million or more of these cubes, and it can take 500,000 variables to describe what’s happening.

According to Feldman, solving that takes a computer system with lots of processor cores, tons of memory very close to the cores, oodles of bandwidth connecting the cores and the memory, and loads of bandwidth connecting the cores. Conveniently, that’s what a neural-network training computer needs, too. The CS-1 contains a single piece of silicon with 400,000 cores, 18 gigabytes of memory, 9 petabytes of memory bandwidth, and 100 petabits per second of core-to-core bandwidth.

Scientists at NETL simulated combustion in a power plant using both a Cerebras CS-1 and the Joule supercomputer, which has 84,000 CPU cores and consumes 450 kilowatts. By comparison, the CS-1 runs on about 20 kilowatts. Joule completed the calculation in 2.1 milliseconds. The CS-1 was more than 200 times faster, finishing in 6 microseconds.

This speed has two implications, according to Feldman. One is that there is no combination of CPUs or even of GPUs today that could beat the CS-1 on this problem. He backs this up by pointing to the nature of the simulation—it does not scale well. Just as you can have too many cooks in the kitchen, throwing too many cores at a problem can actually slow the calculation down. Joule’s speed peaked when using 16,384 of its 84,000 cores.

The limitation comes from connectivity between the cores and between cores and memory. Imagine the volume to be simulated as a 370 x 370 x 370 stack of cubes (136,900 vertical stacks with 370 layers each). Cerebras maps the problem to the wafer-scale chip by assigning the array of vertical stacks to a corresponding array of processor cores. Because of that arrangement, communicating the effects of one cube on another is done by transferring data between neighboring cores, which is as fast as it gets. And while each layer of the stack is computed, the data representing the other layers reside inside the core’s memory, where they can be quickly accessed.

(Cerebras takes advantage of a similar kind of geometric mapping when training neural networks. [See sidebar “The Software Side of Cerebras,” January 2020.])
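
Here is a toy sketch of that kind of geometric mapping (invented array sizes and a drastically simplified update rule, not Cerebras’ actual software): each core owns one vertical stack of cells, keeps every layer of that stack in local memory, and exchanges data only with its nearest neighbors in the plane.

```python
import numpy as np

# Toy domain decomposition in the spirit of the description above -- invented
# sizes and a drastically simplified update, not Cerebras' actual software.
# Each "core" at grid position (i, j) owns one full vertical stack of cells,
# so the data it works on stays in local memory; only values on faces shared
# with neighboring cores would need to be exchanged.

NX = NY = 8      # pretend core grid (the real wafer has ~400,000 cores)
NZ = 16          # layers per vertical stack (370 in the article's example)

field = np.random.rand(NX, NY, NZ)

def step(f):
    # One diffusion-like update: each interior cell averages itself with its
    # four in-plane neighbors, mimicking neighbor-to-neighbor core traffic.
    new = f.copy()
    new[1:-1, 1:-1, :] = 0.2 * (
        f[1:-1, 1:-1, :]
        + f[:-2, 1:-1, :] + f[2:, 1:-1, :]
        + f[1:-1, :-2, :] + f[1:-1, 2:, :]
    )
    return new

for _ in range(10):
    field = step(field)
print(field.mean())
```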

And because the simulation completed faster than the real-world combustion event being simulated, the CS-1 could now have a new job on its hands—playing a role in control systems for complex machines.

Feldman reports that the CS-1 has also made inroads in the purpose for which it was originally built. Drugmaker GlaxoSmithKline is a known customer, and the CS-1 is doing AI work at Argonne National Laboratory, Lawrence Livermore National Laboratory, and the Pittsburgh Supercomputing Center. He says there are several customers he cannot name in the military, intelligence, and heavy manufacturing industries.

A next-generation CS-1 is in the works, he says. The first generation used TSMC’s 16-nanometer process, but Cerebras already has a 7-nanometer version in hand with more than double the memory (40 gigabytes) and more than double the number of AI processor cores (850,000).

Rapid Scale-Up of Commercial Ion-Trap Quantum Computers

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/commercial-iontrap-quantum-computers-showing-rapid-scaleup

Last week, Honeywell’s Quantum Solutions division released its first commercial quantum computer: a 10-qubit system based on trapped ions. The H1, as it’s called, is actually the same ion trap chip the company debuted as a prototype, but with four additional ions. The company revealed a roadmap that it says will rapidly lead to much more powerful quantum computers. Separately, a competitor in ion-trap quantum computing, Maryland-based startup IonQ, unveiled a 32-qubit ion computer last month.

Ion trap quantum computers are made of chips that are designed to trap and hold ions in a line using a specially designed RF electromagnetic field. The chip can also move specific ions along the line using an electric field. Lasers then encode the ions’ quantum states to perform calculations. Proponents say trapped-ion-based quantum computers are attractive because the qubits are thought to be longer lasting, higher fidelity, and potentially easier to connect together than other options, allowing for more reliable computation.

For Honeywell, that means a system that is the only one capable of performing a “mid-circuit measurement” (a kind of quantum equivalent of an if/then), and then recycling the measured qubit back into the computation, says Patty Lee, Honeywell Quantum Solutions chief scientist. The distinction allows for different kinds of quantum algorithms and an ability to perform more complex calculations with fewer ions.

Both companies measure their systems’ abilities using a value called quantum volume. Daniel Lidar, director of the Center for Quantum Information Science and Technology at the University of Southern California in Los Angeles explained quantum volume to IEEE Spectrum like this:

…. IBM’s team defines quantum volume as 2 to the power of the size of the largest circuit with equal width and depth that can pass a certain reliability test involving random two-qubit gates. The circuit’s size is defined by either width based on the number of qubits or depth based on the number of gates, given that width and depth are equal in this case.

That means a 6-qubit quantum computing system would have a quantum volume of 2 to the power of 6, or 64—but only if the qubits are relatively free of noise and the potential errors that can accompany such noise.

Honeywell says its 10-qubit system has a measured quantum volume of 128, the highest in the industry. IonQ’s earlier 11-qubit prototype had a measured quantum volume of 32. Its 32-ion system could theoretically reach higher than 4 million, the company claims, but this hasn’t been proven yet.
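
Because quantum volume is defined as 2 raised to the size of the largest square circuit that passes the reliability test, the reported figures can be translated back into effective circuit sizes. A small sketch of that arithmetic (it ignores the hard part, which is achieving the fidelity needed to pass the test):

```python
import math

def quantum_volume(largest_passing_circuit_size):
    # QV = 2^n, where n is the width (= depth) of the largest square random
    # circuit the machine can run reliably.
    return 2 ** largest_passing_circuit_size

def effective_circuit_size(qv):
    # Invert the definition to see what circuit size a reported QV implies.
    return math.log2(qv)

print(quantum_volume(6))                   # 64, the article's example
print(effective_circuit_size(128))         # Honeywell's H1: 7
print(effective_circuit_size(32))          # IonQ's 11-qubit prototype: 5
print(effective_circuit_size(4_000_000))   # IonQ's theoretical claim: ~21.9
```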

With the launch of its commercial system, Honeywell announced that it will use a subscription model for access to its computers. Customers would pay for time and engagement with the systems, even as the systems scale up throughout the year. “Imagine if you have Netflix, and next week it’s twice as good, and 3 months from now it’s 1000 times as good,” says Tony Uttley, president of Honeywell Quantum Solutions. “That’d be a pretty cool subscription. And that’s the approach we’ve taken with this.”

Honeywell’s path forward involves first adding more ions to the H1, which has capacity for 40. “We built a large auditorium,” Uttley says. “Now we’re just filling seats.”

The next step is to change the ion trap chip’s single line to a racetrack configuration. This system, called H2, is already in testing; it allows faster interactions between ions, because ions at the ends of the line can be moved around to interact with each other. A further scale-up, H3, will come with a chip that has a grid of traps instead of a single line. For this, ions will have to be steered around corners, something Uttley says the company can already do.

For H4, the grid will be integrated with on-chip photonics. Today the laser beams that encode quantum states onto the ions are sent in from outside the vacuum chamber that houses the trap, and that configuration limits the number of points on the chip where computation can happen. An integrated photonics system, which has been designed and tested, would increase the available computation points. In a final step, targeted for 2030, tiles of H4 chips will be stitched together to form a massive integrated system.

For its part, IonQ CEO Peter Chapman told Ars Technica that the company plans to double the number of qubits in its systems every eight months for the next few years. Instead of physically moving ions to get them to interact, IonQ’s system uses carefully crafted pairs of laser pulses on a stationary line of ions.

Despite the progress so far, these systems can’t yet do anything that can’t be done already on a classical computer system. So why are customers buying in now? “With this roadmap we’re showing that we are going to very quickly cross a boundary where there is no way you can fact check” a result, says Uttley.  Companies need to see that their quantum algorithms work on these systems now, so that when they reach a capability beyond today’s supercomputers, they can still trust the result, he says.

New AI Inferencing Records

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/new-ai-inferencing-records

MLPerf, a consortium of AI experts and computing companies, has released a new set of machine learning records. The records were set on a series of benchmarks that measure the speed of inferencing: how quickly an already-trained neural network can accomplish its task with new data. For the first time, benchmarks for mobiles and tablets were contested. According to David Kanter, executive director of MLPerf’s parent organization, a downloadable app is in the works that will allow anyone to test the AI capabilities of their own smartphone or tablet.

MLPerf’s goal is to present a fair and straightforward way to compare AI systems. Twenty-three organizations—including Dell, Intel, and Nvidia—submitted a total of 1200 results, which were peer reviewed and subjected to random third-party audits. (Google was conspicuously absent this round.) As with the MLPerf records for training AIs released over the summer, Nvidia was the dominant force, besting what competition there was in all six categories for both datacenter and edge computing systems. Including submissions by partners like Cisco and Fujitsu, 1029 results, or 85 percent of the total for edge and data center categories, used Nvidia chips, according to the company.

“Nvidia outperforms by a wide range on every test,” says Paresh Kharaya, senior director of product management, accelerated computing at Nvidia. Nvidia’s A100 GPUs powered its wins in the datacenter categories, while its Xavier was behind the GPU-maker’s edge-computing victories. According to Kharaya, on one of the new MLPerf benchmarks, Deep Learning Recommendation Model (DLRM), a single DGX A100 system was the equivalent of 1000 CPU-based servers.

There were four new inferencing benchmarks introduced this year, adding to the two carried over from the previous round:

  • BERT, for Bidirectional Encoder Representations from Transformers, is a natural language processing AI contributed by Google. Given a question input, BERT predicts a suitable answer.
  • DLRM, for Deep Learning Recommendation Model, is a recommender system that is trained to optimize click-through rates. It’s used to recommend items for online shopping and rank search results and social media content. Facebook was the major contributor of the DLRM code.
  • 3D U-Net is used in medical imaging systems to tell which 3D voxels in an MRI scan are part of a tumor and which are healthy tissue. It’s trained on a dataset of brain tumors.
  • RNN-T, for Recurrent Neural Network Transducer, is a speech recognition model. Given a sequence of speech input, it predicts the corresponding text.

In addition to those new metrics, MLPerf put together the first set of benchmarks for mobile devices, which were used to test smartphone and tablet platforms from MediaTek, Qualcomm, and Samsung as well as a notebook from Intel. The new benchmarks included:

  • MobileNetEdgeTPU is an image classification benchmark; classification is considered the most ubiquitous task in computer vision. It’s representative of how a photo app might pick out your face or the faces of your friends.
  • SSD-MobileNetV2, for Single Shot multibox Detection with MobileNetv2, is trained to detect 80 different object categories in input frames with 300×300 resolution. It’s commonly used to identify and track people and objects in photography and live video.
  • DeepLabv3+ MobileNetV2 is an image segmentation network. It’s used to understand a scene for things like VR and navigation, and it plays a role in computational photography apps.
  • MobileBERT is a mobile-optimized variant of the larger natural language processing BERT model that is fine-tuned for question answering. Given a question as input, MobileBERT generates an answer.

The benchmarks were run on a purpose-built app that should be available to everyone within months, according to Kanter. “We want something people can put into their hands for newer phones,” he says.

The results released this week were dubbed version 0.7, as the consortium is still ramping up. Version 1.0 is likely to be complete in 2021.

Arm Spinout Reveals Correlated-Electron Memory Plans

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/memory/arm-spinout-details-correlatedelectron-memory-plans

Earlier this month, a group of eight Arm Research engineers established a startup, Cerfe Labs, to commercialize an experimental memory technology they had been working on for the past five years with Austin-based Symetrix. The technology, called correlated electron RAM (CeRAM), could become a nonvolatile replacement for the fast-access embedded SRAM used in processor high-level cache memory today. Besides being able to hold data in the absence of a power supply, which SRAM cannot do, CeRAM is likely to be considerably smaller than SRAM, potentially easing IC area issues as the industry’s ability to keep shrinking transistors reaches its end.

Much of semiconductor physics relies on the assumption that you can treat electrons individually. But more than half a century ago, Neville Francis Mott showed that in certain materials, when electrons are forced together, “[those materials] will do weird things,” says Greg Yeric, formerly a Fellow at Arm Research and now Cerfe Labs’ CTO. One of those things is a reversible transition between a metallic state and an insulating state, called a Mott transition. Laboratories around the world have been studying this phenomenon in vanadium oxide and other materials, and HP Labs recently described a neuron-like device that relies on the principle.

“What we think we have through our partnership with Symetrix is a correlated-electron switch—a material that can switch resistance states,” says Yeric. The companies are exploring a number of materials, but the one they’ve invested the most in so far is a carbon-doped nickel oxide. Its native state is non-conducting; that is, there is a gap between the allowable energy states of electrons that are bound to atoms and those that are free to move. But if enough electrons are injected into the material, they “screen out” the presence of the nickel atoms (from the perspective of other electrons). This has the effect of shifting the two energy bands so they meet, and allowing current to freely flow as if the material were a metal.

“We think we have a set of materials that exhibit this transition and, importantly, have a nonvolatile state on either side” of it, says Yeric.

The device itself is just the correlated electron material sandwiched between two electrodes, similar in structure to resistive RAM, phase-change RAM, and magnetic RAM, though less complex than magnetic RAM. And like those three, it is constructed in the metal interconnect layers above the silicon, requiring only one transistor in the silicon layer to access it, as opposed to SRAM’s six. Yeric says the company has made devices that fit with 7-nanometer CMOS processes, and that they should be scalable in both size and voltage to 5 nanometers (today’s cutting edge).
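
A back-of-envelope way to see why the one-transistor cell helps with area is sketched below; the per-transistor area is a hypothetical placeholder, and peripheral circuitry is ignored.

    # Back-of-envelope density comparison, assuming silicon area scales roughly
    # with the number of transistors per bit. The area-per-transistor figure is
    # a hypothetical placeholder, for illustration only.
    AREA_PER_TRANSISTOR_UM2 = 0.01   # square microns, hypothetical

    def silicon_area_mm2(bits: int, transistors_per_bit: int) -> float:
        return bits * transistors_per_bit * AREA_PER_TRANSISTOR_UM2 / 1e6

    cache_bits = 8 * 1024 * 1024 * 8                  # an 8-megabyte cache
    print(f"6T SRAM : {silicon_area_mm2(cache_bits, 6):.1f} mm^2 of silicon")
    print(f"1T CeRAM: {silicon_area_mm2(cache_bits, 1):.1f} mm^2 of silicon")
    # The CeRAM element itself sits in the metal layers above the transistors,
    # so only the single access transistor counts against silicon area here.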

But it’s CeRAM’s speed that could make it a good replacement for SRAM. To date, the company has made CeRAM with a 2-nanosecond pulse width for writing data, which is on par with what’s needed for a processor’s L3 cache; Yeric says they expect this speed to improve with development.

The carbon-doped nickel oxide material also has properties that go well beyond what today’s nonvolatile memories can do, though these are not yet as thoroughly proven. For example, Cerfe Labs has shown that the device works at temperatures as low as 1.5 kelvins—well beyond what any nonvolatile memory can do, and in range for a role in quantum computing control circuits. In the other direction, they’ve demonstrated device operation up to 125 °C and showed that it retains its bits at up to 400 °C. But these figures were limited by the equipment the company had available. What’s more, the device’s theory of operation suggests that CeRAM should be naturally resistant to ionizing radiation and magnetic field disturbances.

Symetrix, which also develops ferroelectric RAM, explored correlated electron materials in theoretical studies for a Defense Advanced Research Projects Agency (DARPA) program called FRANC, for Foundations Required for Novel Computing. Symetrix “put together models and were able to predict the materials,” says Cerfe Labs CEO Eric Hennenhoefer, another Arm veteran.

“System designers are always on the lookout for improved memory, as virtually every system is limited in some way by the memory it can access,” says Yeric. “In [Arm’s] canvassing of possible future technologies, we came across the Symetrix technology. We eventually licensed the technology, based on its (very early and speculative) promise to advance embedded memory speed, density, cost, and power, without tradeoff.”

Cerfe Labs’ goal isn’t to manufacture CeRAM, but to develop the technology to the point that a large-scale manufacturer will want to take over development, he says. A new memory technology’s journey from discovery to commercialization typically takes eight or nine years at least. CeRAM is about halfway there, he estimates.

Among the questions still to be answered is the memory’s endurance—how many times it can switch before it begins to fail. In theory, there’s no element of the CeRAM device that would wear out. But it would be naïve to think there won’t be problems in the real world. “There are always extrinsic things that wind up limiting endurance,” Yeric says.

Memristor Breakthrough: First Single Device To Act Like a Neuron

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/memristor-first-single-device-to-act-like-a-neuron

One thing that’s kept engineers from copying the brain’s power efficiency and quirky computational skill is the lack of an electronic device that can, all on its own, act like a neuron. It would take a special kind of device to do that, one whose behavior is more complex than any yet created.

Suhas Kumar of Hewlett Packard Laboratories, R. Stanley Williams now at Texas A&M, and the late Stanford student Ziwen Wang have invented a device that meets those requirements. On its own, using a simple DC voltage as the input, the device outputs not just simple spikes, as some other devices can manage, but the whole array of neural activity—bursts of spikes, self-sustained oscillations, and other stuff that goes on in your brain. They described the device last week in Nature.

It combines resistance, capacitance, and what’s called a Mott memristor all in the same device. Memristors are devices that hold a memory, in the form of resistance, of the current that has flowed through them. Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance. Materials in a Mott transition go between insulating and conducting according to their temperature. It’s a property seen since the 1960s, but only recently explored in nanoscale devices.

The transition happens in a nanoscale sliver of niobium oxide in the memristor. Here when a DC voltage is applied, the NbO2 heats up slightly, causing it to transition from insulating to conducting. Once that switch happens, the charge built up in the capacitance pours through. Then the device cools just enough to trigger the transition back to insulating. The result is a spike of current that resembles a neuron’s action potential.
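
A toy circuit model makes the spiking mechanism easier to picture: treat the Mott device as a threshold-switching resistor in parallel with the capacitance, fed from a DC source through a series resistance. The component values and switching thresholds below are illustrative, not the parameters reported in the paper.

    # Toy relaxation-oscillator model of the Mott-memristor neuron: a capacitor
    # charges through a series resistor until the insulator-to-metal threshold
    # is crossed, dumps its charge as a current spike, then the device cools and
    # re-insulates. All values are illustrative, not the published parameters.
    V_DC, R_SERIES, C = 1.0, 1e5, 1e-12          # volts, ohms, farads
    R_INSULATING, R_METALLIC = 1e7, 1e3          # device resistance in each state
    V_SWITCH_ON, V_SWITCH_OFF = 0.6, 0.2         # switching thresholds, volts
    dt, steps = 1e-11, 200_000

    v_cap, metallic, spikes = 0.0, False, []
    for step in range(steps):
        r_dev = R_METALLIC if metallic else R_INSULATING
        i_dev = v_cap / r_dev                            # current through the device
        v_cap += ((V_DC - v_cap) / R_SERIES - i_dev) * dt / C
        if not metallic and v_cap > V_SWITCH_ON:         # heating trips the
            metallic = True                              # insulator-to-metal switch
        elif metallic and v_cap < V_SWITCH_OFF:          # device cools, re-insulates
            metallic = False
            spikes.append(step * dt)

    print(f"{len(spikes)} spikes in {steps * dt * 1e9:.0f} ns")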

 “We’ve been working for five years to get that,” Williams says. “There’s a lot going on in that one little piece of nanoscale material structure.”

According to Kumar, memristor inventor Leon Chua predicted that if you mapped out the possible device parameters there would be regions of chaotic behavior in between regions where behavior is stable. At the edge of some of these chaotic regions, devices can exist that do what the new artificial neuron does.

Williams credits Kumar with doggedly fine tuning the device’s material and physical parameters to find a combination that works. “You cannot find this by accident,” he says. “Everything has to be perfect before you see this characteristic, but once you’re able to make this thing, it’s actually very robust and reproducible.”

They tested the device first by building spiking versions of Boolean logic gates (NAND and NOR), and then by building a small analog optimization circuit.

There’s a lot of work ahead to turn these into practical devices and scale them up to useful systems that could challenge today’s machines. For example, Kumar and Williams plan to explore other possible materials that experience Mott transitions at different temperatures. NbO2’s transition happens at a worrying 800 °C. That temperature occurs only in a nanometers-thin layer, but scaled up to millions of devices it could be a problem.

Others have researched vanadium oxide, which transitions at a more pleasant 60 °C. But that might be too low, says Williams, given that systems in data centers often operate at 100 °C.

There may even be materials that can use other types of transitions to achieve the same result. “Finding the goldilocks material is a very interesting issue,” says Williams.

Breakthrough Could Lead to Amplifiers for 6G Signals

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/devices/breakthrough-could-lead-to-amplifiers-for-6g-signals

With 5G just rolling out and destined to take years to mature, it might seem odd to worry about 6G. But some engineers say that this is the perfect time to worry about it. One group, based at the University of California, Santa Barbara, has been developing a device that could be critical to efficiently pushing 6G’s terahertz-frequency signals out of the antennas of future smartphones and other connected devices. They reported key aspects of the device—including an “N-polar” gallium nitride high-electron-mobility transistor—in two papers that recently appeared in IEEE Electron Device Letters.

Testing so far has focused on 94 gigahertz frequencies, which are at the edge of terahertz. “We have just broken through records of millimeter-wave operation by factors which are just stunning,” says Umesh K. Mishra, an IEEE Fellow who heads the UCSB group that published the papers. “If you’re in the device field, if you improve things by 20 percent people are happy. Here, we have improved things by 200 to 300 percent.”

The key power amplifier technology is called a high-electron-mobility transistor (HEMT). It is formed around a junction between two materials having different bandgaps: in this case, gallium nitride and aluminum gallium nitride. At this “heterojunction,” gallium nitride’s natural polarity causes a sheet of excess charge called a two-dimensional electron gas to collect. The presence of this charge gives the device the ability to operate at high frequencies, because the electrons are free to move quickly through it without obstruction.

Gallium nitride HEMTs are already making their mark in amplifiers, and they are a contender for 5G power amplifiers. But to efficiently amplify terahertz frequencies, the typical GaN HEMT needs to scale down in a particular way. Just as with silicon logic transistors, bringing a HEMT’s gate closer to the channel through which current flows—the electron gas in this case—lets it control the flow of current using less energy, making the device more efficient. More specifically, explains Mishra, you want to maximize the ratio of the gate’s length to the distance from the gate to the electron gas. That’s usually done by reducing the amount of barrier material between the gate’s metal and the rest of the device. But you can only go so far with that strategy. Eventually the barrier will be too thin to prevent current from leaking through, which harms efficiency.

But Mishra says his group has come up with a better way: They stood the gallium nitride on its head.

Ordinary gallium nitride is what’s called gallium-polar. That is, if you look down at the surface, the top layer of the crystal will always be gallium. But the Santa Barbara team discovered a way to make nitrogen-polar crystals, so that the top layer is always nitrogen. It might seem like a small difference, but it means that the structure that makes the sheet of charge, the heterojunction, is now upside down.

This delivers a bunch of advantages. First, the source and drain electrodes now make contact with the electron gas via a lower band-gap material (a nanometers-thin layer of GaN) rather than a higher-bandgap one (aluminum gallium nitride), lowering resistance. Second, the gas itself is better confined as the device approaches its lowest current state, because the AlGaN layer beneath acts as a barrier against scattered charge.

Devices made to take advantage of these two characteristics have already yielded record-breaking results. At 94 GHz, one device produced 8.8 watts per millimeter (W/mm) at 27 percent efficiency. A similar gallium-polar device produced only about 2 W/mm at that efficiency.

But the new geometry also allows for further improvements by positioning the gate even closer to the electron gas, giving it better control. For this to work, however, the gate has to act as a low-leakage Schottky diode. Unlike ordinary p-n junction diodes, which are formed by the junction of regions of semiconductor chemically doped to have different excess charges, Schottky diodes are formed by a layer of metal on a semiconductor. The Schottky diode Mishra’s team cooked up—ruthenium deposited one atomic layer at a time on top of N-polar GaN—provides a high barrier against current sneaking through it. And, unlike in other attempts at the gate diode, this one doesn’t lose current through random pathways that shouldn’t exist in theory but do in real life.

“Schottky diodes are typically extremely difficult to get on GaN without them being leaky,” says Mishra. “We showed that this material combination… gave us the nearly ideal Schottky diode characteristics.”

The UC Santa Barbara team hasn’t yet published the results from a HEMT made with this new diode as the gate, says Mishra. But the data so far is promising. And they plan to eventually test the new devices at even higher frequencies than before—140 GHz and 230 GHz—both firmly in the terahertz range.

What Intel Is Planning for The Future of Quantum Computing: Hot Qubits, Cold Control Chips, and Rapid Testing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/intels-quantum-computing-plans-hot-qubits-cold-control-chips-and-rapid-testing

Quantum computing may have shown its “supremacy” over classical computing a little over a year ago, but it still has a long way to go. Intel’s director of quantum hardware, Jim Clarke, says that quantum computing will really have arrived when it can do something unique that can change our lives, calling that point “quantum practicality.” Clarke talked to IEEE Spectrum about how he intends to get silicon-based quantum computers there:

IEEE Spectrum: Intel seems to have shifted focus from quantum computers that rely on superconducting qubits to ones with silicon spin qubits. Why do you think silicon has the best chance of leading to a useful quantum computer?

Jim Clarke: It’s simple for us… Silicon spin qubits look exactly like a transistor. … The infrastructure is there from a tool fabrication perspective. We know how to make these transistors. So if you can take a technology like quantum computing and map it to such a ubiquitous technology, then the prospect for developing a quantum computer is much clearer.

I would concede that today silicon spin-qubits are not the most advanced quantum computing technology out there. There has been a lot of progress in the last year with superconducting and ion trap qubits.

But there are a few more things: A silicon spin qubit is the size of a transistor—which is to say it is roughly 1 million times smaller than a superconducting qubit. So if you take a relatively large superconducting chip, and you say “how do I get to a useful number of qubits, say 1000 or a million qubits?” all of a sudden you’re dealing with a form factor that is … intimidating.

We’re currently making server chips with billions and billions of transistors on them. So if our spin qubit is about the size of a transistor, from a form-factor and energy perspective, we would expect it to scale much better.

Spectrum: What are silicon spin-qubits and how do they differ from competing technology, such as superconducting qubits and ion trap systems?

Clarke: In an ion trap you are basically using a laser to manipulate a metal ion through its excited states where the population density of two excited states represents the zero and one of the qubit. In a superconducting circuit, you are creating the electrical version of a nonlinear LC (inductor-capacitor) oscillator circuit, and you’re using the two lowest energy levels of that oscillator circuit as the zero and one of your qubit. You use a microwave pulse to manipulate between the zero and one state.

We do something similar with the spin qubit, but it’s a little different. You turn on a transistor, and you have a flow of electrons from one side to another. In a silicon spin qubit, you essentially trap a single electron in your transistor, and then you put the whole thing in a magnetic field [using a superconducting electromagnet in a refrigerator]. This orients the electron to either spin up or spin down. We are essentially using its spin state as the zero and one of the qubit.
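
To put rough numbers on that, the energy difference between the spin-up and spin-down states (the Zeeman splitting) sets the frequency used to address the qubit. Below is a textbook-level estimate, with the magnetic field strength as an illustrative value rather than an Intel figure.

    # Rough Zeeman-splitting estimate for an electron spin in a magnetic field.
    # The field strength is an illustrative value, not an Intel specification.
    BOHR_MAGNETON = 9.274e-24   # J/T
    PLANCK = 6.626e-34          # J*s
    G_FACTOR = 2.0              # approximate free-electron g-factor

    def qubit_frequency_ghz(field_tesla: float) -> float:
        return G_FACTOR * BOHR_MAGNETON * field_tesla / PLANCK / 1e9

    print(qubit_frequency_ghz(1.0))   # ~28 GHz per tesla: microwave-range control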

That would be an individual qubit. Then with very good control, we can get two separated electrons in close proximity and control the amount of interaction between them. And that serves as our two-qubit interaction.

So we’re basically taking a transistor, operating at the single electron level, getting it in very close proximity to what would amount to another transistor, and then we’re controlling the electrons.

Spectrum: Does the proximity between adjacent qubits limit how the system can scale?

Clarke: I’m going to answer that in two ways. First, the interaction distance between two electrons to provide a two-qubit gate is not asking too much of our process. We make smaller devices every day at Intel. There are other problems, but that’s not one of them.

Typically, these qubits operate on a sort of nearest-neighbor interaction. So you might have a two-dimensional grid of qubits, and each qubit would essentially have interactions only with its nearest neighbors. And then you would build up [from there]. That qubit would then have interactions with its nearest neighbors and so forth. And then once you develop an entangled system, that’s how you would get a fully entangled 2D grid. [Entanglement is a condition necessary for certain quantum computations.]
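
To make the nearest-neighbor constraint concrete, here is a minimal sketch that enumerates the two-qubit couplings available on a small grid; the grid size is arbitrary.

    # Enumerate the two-qubit interactions available on a small 2D grid where
    # each qubit couples only to its nearest neighbors. Grid size is arbitrary.
    def nearest_neighbor_pairs(rows: int, cols: int):
        pairs = []
        for r in range(rows):
            for c in range(cols):
                if c + 1 < cols:
                    pairs.append(((r, c), (r, c + 1)))   # horizontal neighbor
                if r + 1 < rows:
                    pairs.append(((r, c), (r + 1, c)))   # vertical neighbor
        return pairs

    print(len(nearest_neighbor_pairs(3, 3)), "two-qubit couplings on a 3x3 grid")  # 12
    # Entangling two distant qubits means chaining operations through the
    # neighbors that connect them.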

Spectrum: What are some of the difficult issues right now with silicon spin qubits?

Clarke: By highlighting the challenges of this technology, I’m not saying that this is any harder than other technologies. I’m prefacing this because certainly some of the things that I read in the literature would suggest that qubits are straightforward to fabricate or scale. Regardless of the qubit technology, they’re all difficult.

With a spin qubit, we take a transistor that normally has a current of electrons flowing through it, and we operate it at the single-electron level. This is the equivalent of having a single electron placed into a sea of several hundred thousand silicon atoms and still being able to manipulate whether it’s spin up or spin down.

So we essentially have a small amount of silicon, we’ll call this the channel of our transistor, and we’re controlling a single electron within that piece of silicon. The challenge is that silicon, even a single crystal, may not be as clean as we need it. Some of the defects—these defects can be extra bonds, they can be charge defects, they can be dislocations in the silicon—these can all impact that single electron that we’re studying. This is really a materials issue that we’re trying to solve.

Spectrum: Just briefly, what is coherence time and what’s its importance to computing?

Clarke: The coherence time is the window during which information is maintained in the qubit. So, in the case of a silicon spin qubit, it’s how long before that electron loses its orientation, and randomly scrambles the spin state. It’s the operating window for a qubit.

Now, all of the qubit types have what amounts to coherence times. Some are better than others. The coherence times for spin qubits, depending on the type of coherence time measurement, can be on the order of milliseconds, which is pretty compelling compared to other technologies.
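
As a rough illustration of why millisecond coherence is compelling, one can treat decoherence as a simple exponential decay and compare the coherence window to gate times. The gate time and gate counts below are illustrative, and real decay curves depend on the noise.

    # Rough illustration: fraction of coherence remaining after a sequence of
    # gates, treating decoherence as a simple exponential decay with constant T2.
    # Gate time and counts are illustrative, not measured values.
    import math

    def coherence_remaining(gate_time_s: float, num_gates: int, t2_s: float) -> float:
        return math.exp(-gate_time_s * num_gates / t2_s)

    T2 = 1e-3            # ~1 millisecond, the order quoted for spin qubits
    GATE = 1e-6          # a hypothetical 1-microsecond gate
    for n in (1, 100, 1000):
        print(n, "gates:", round(coherence_remaining(GATE, n, T2), 3))
    # ~99.9% of the coherence survives one gate, but only ~37% survives a thousand.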

What needs to happen [to compensate for brief coherence times] is that we need to develop an error correction technique. That’s a complex way of saying we’re going to put together a bunch of real qubits and have them function as one very good logical qubit.

Spectrum: How close is that kind of error correction?

Clarke: It was one of the four items I wrote about earlier that really need to happen for us to realize a quantum computer. The first is we need better qubits. The second is we need better interconnects. The third is we need better control. And the fourth is we need error correction. We still need improvements on the first three before we’re really going to get, in a fully scalable manner, to error correction.

You will see groups starting to do little bits of error correction on just a few qubits. But we need better qubits and we need a more efficient way of wiring them up and controlling them before you’re really going to see fully fault-tolerant quantum computing.

Spectrum: One of the improvements to qubits recently was the development of “hot” silicon qubits. Can you explain their significance?

Clarke: Part of it equates to control.

Right now you have a chip at the bottom of a dilution refrigerator, and then, for every qubit, you have several wires that go from there all the way outside of the fridge. And these are not small wires; they’re coax cables. And so from a form factor perspective and a power perspective—each of these wires dissipates power—you really have a scaling problem.

One of the things that Intel is doing is that we are developing control chips. We have a control chip called Horse Ridge that’s a conventional CMOS chip that we can place in the fridge in close proximity to our qubit chip. Today that control chip sits at 4 kelvins and our qubit chip is at 10 millikelvins and we still have to have wires between those two stages in the fridge.

Now, imagine if we can operate our qubit slightly warmer. And by slightly warmer, I mean maybe 1 kelvin. All of a sudden, the cooling capacity of our fridge becomes much greater. The cooling capacity of our fridge at 10 millikelvin is roughly a milliwatt. That’s not a lot of power. At 1 kelvin, it’s probably a couple of watts. So, if we can operate at higher temperatures, we can then place control electronics in very close proximity to our qubit chip.
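
A back-of-envelope check using only the cooling capacities Clarke quotes shows why the warmer stage matters; the control-chip power figure below is a hypothetical placeholder, not a Horse Ridge specification.

    # Back-of-envelope check of where a control chip can live in the fridge,
    # using the cooling capacities quoted above. The control-chip dissipation
    # is a hypothetical placeholder.
    COOLING_BUDGET_W = {"10 mK stage": 1e-3, "1 K stage": 2.0}   # from the text
    CONTROL_CHIP_POWER_W = 1.0                                   # hypothetical

    for stage, budget in COOLING_BUDGET_W.items():
        fits = CONTROL_CHIP_POWER_W <= budget
        print(f"{stage}: budget {budget} W -> control chip fits: {fits}")
    # A ~1 W control chip overwhelms the millikelvin stage a thousand times over,
    # but sits comfortably within the couple of watts available near 1 K.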

By having hot qubits we can co-integrate our control with our qubits, and we begin to solve some of the wiring issues that we’re seeing in today’s early quantum computers.

Spectrum: Are hot qubits structurally the same as regular silicon spin qubits?

Clarke: Within silicon spin qubits there are several different types of materials. Some are what I would call silicon MOS type qubits—very similar to today’s transistor materials. In other silicon spin qubits you have silicon that’s buried below a layer of silicon germanium. We’ll call that a buried channel device. Each has its benefits and challenges.

We’ve done a lot of work with TU Delft on a certain type of [silicon MOS] material system, which is a little different from what most in the community are studying [and lets us] operate the system at a slightly higher temperature.

I loved the quantum supremacy work. I really did. It’s good for our community. But it’s a contrived problem, on a brute force system, where the wiring is a mess (or at least complex).

What we’re trying to do with the hot qubits and with the Horse Ridge chip is put us on a path to scaling that will get us to a useful quantum computer that will change your life or mine. We’ll call that quantum practicality.

Spectrum: What do you think you’re going to work on next most intensely?

Clarke: In other words, “What keeps Jim up at night?”

There are a few things. The first is time-to-information. Across most of the community, we use these dilution refrigerators. And the standard way [to perform an experiment] is: You fabricate a chip; you put it in a dilution refrigerator; it cools down over the course of several days; you experiment with it over the course of several weeks; then you warm it back up and put another chip in.

Compare that to what we do for transistors: We take a 300-millimeter wafer, put it on a probe station, and after two hours we have thousands and thousands of data points across the wafer that tell us something about our yield, our uniformity, and our performance.

That doesn’t really exist in quantum computing. So we asked, “Is there a way—at slightly higher temperatures—to combine a probe station with a dilution refrigerator?” Over the last two years, Intel has been working with two companies in Finland [Bluefors Oy and Afore Oy] to develop what we call the cryoprober. And this is just coming online now. We’ve been doing an impressive job of installing this massive piece of equipment in the complete absence of field engineers from Finland due to the coronavirus.

What this will do is speed up our time-to-information by a factor of up to 10,000. So instead of wire bonding a single sample, putting it in the fridge, and taking a week, or even a few days, to study it, we’re going to be able to put a 300-millimeter wafer into this unit and, over the course of an evening, step and scan. So we’re going to get a tremendous increase in throughput. I would say a 100X improvement. My engineers would say 10,000. I’ll leave that as a challenge for them to impress me beyond the 100.

Here’s the other thing that keeps me up at night. Prior to starting the Intel quantum computing program, I was in charge of interconnect research in Intel’s Components Research Group. (This is the wiring on chips.) So, I’m a little less concerned with the wiring into and out of the fridge than I am just about the wiring on the chip.

I’ll give an example:  An Intel server chip has probably north of 10 billion transistors on a single chip. Yet the number of wires coming off that chip is a couple of thousand. A quantum computing chip has more wires coming off the chip than there are qubits. This was certainly the case for the Google [quantum supremacy] work last year. This was certainly the case for the Tangle Lake chip that Intel manufactured in 2018, and it’s the case with our spin qubit chips we make now.

So we’ve got to find a way to make the interconnects more elegant. We can’t have more wires coming off the chip than we have devices on the chip. It’s ineffective.

This is something the conventional computing community discovered in the late 1960s with Rent’s Rule [which empirically relates the number of interconnects coming out of a block of logic circuitry to the number of gates in the block]. Last year we published a paper with Technical University Delft on the quantum equivalent of Rent’s Rule. And it talks about, amongst other things, the Horse Ridge control chip, the hot qubits, and multiplexing.
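
For reference, Rent’s Rule is usually written T = t × g^p, where T is the number of terminals leaving a block of g gates, t is the average number of terminals per gate, and p is the Rent exponent, typically well below 1. The sketch below shows why a sublinear exponent matters; the coefficient and exponent are illustrative, not fitted values from the paper.

    # Rent's Rule: terminals T = t * g**p for a block of g gates. An exponent
    # p < 1 means wiring demand grows much more slowly than device count.
    # The coefficient and exponent below are illustrative, not fitted values.
    def rent_terminals(gates: int, t: float = 4.0, p: float = 0.6) -> float:
        return t * gates ** p

    for gates in (1_000, 1_000_000, 1_000_000_000):
        print(f"{gates:>13,} gates -> ~{rent_terminals(gates):,.0f} terminals")
    # Today's qubit chips are effectively at p ~= 1 (one or more wires per qubit);
    # multiplexing and on-chip control are what push the exponent down.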

We have to find a way to multiplex at low temperatures. And that will be hard. You can’t have a million-qubit quantum computer with two million coax cables coming out of the top of the fridge.

Spectrum: Doesn’t Horse Ridge do multiplexing?

Clarke: It has multiplexing. The second generation will have a little bit more. The form factor of the wires [in the new generation] is much smaller, because we can put it in closer proximity to the [quantum] chip.

So, combine everything I’ve talked about: if I give you a package that has a classical control chip—call it a future version of Horse Ridge—sitting right next to and in the same package as a quantum chip, both operating at a similar temperature and making use of very small interconnect wires and multiplexing, that would be the vision.

Spectrum: What’s that going to require?

Clarke: It’s going to require a few things. It’s going to require improvements in the operating temperature of the control chip. It’s probably going to require some novel implementations of the packaging so there isn’t a lot of thermal cross talk between the two chips. It’s probably going to require even greater cooling capacity from the dilution refrigerator. And it’s probably going to require some qubit topology that facilitates multiplexing.

Spectrum: Given the significant technical challenges you’ve talked about here, how optimistic are you about the future of quantum computing?

Clarke: At Intel, we’ve consistently maintained that we are early in the quantum race. Every major change in the semiconductor industry has happened on the decade timescale and I don’t believe quantum will be any different. While it’s important to not underestimate the technical challenges involved, the promise and potential are real. I’m excited to see and participate in the meaningful progress we’re making, not just within Intel but the industry as a whole. A computing shift of this magnitude will take technology leaders, scientific research communities, academia, and policy makers all coming together to drive advances in the field, and there is tremendous work already happening on that front across the quantum ecosystem today.
