
Are Antiferromagnets the Next Step for MRAM?

Post Syndicated from Amy Nordrum original https://spectrum.ieee.org/nanoclast/semiconductors/memory/antiferromagnets-next-step-mram

Quasi-magnetic materials known as antiferromagnets are attracting research interest for their potential to hold far more data in a computer’s memory than traditional magnets allow. 

Though the early work required to prove the concept has only just begun, a series of new studies shows progress in being able to electrically manipulate bits stored in antiferromagnets and to do so with components compatible with standard CMOS manufacturing techniques. 

Antiferromagnets exhibit different properties than traditional ferromagnets, which are used in a variety of modern memory technologies including magnetoresistive random-access memory (MRAM). 

MRAM has clear advantages over other memory technologies. Reading and writing data using MRAM can be done at speeds similar to volatile technologies such as DRAM and SRAM. But MRAM consumes less power and, like flash, is non-volatile, meaning it doesn’t need a steady power supply to retain data.  

Despite its advantages, MRAM could still be considered a boutique memory technology. And in theory, at least, antiferromagnets could fix a problem that has prevented MRAM from achieving broader adoption. 

MRAM stores information as the spins of electrons—a property related to an electron’s intrinsic angular momentum. Ferromagnets have unpaired electrons that spin, or point, in one of two directions. Most electrons in a ferromagnet point in the same direction. When a current runs nearby, its magnetic field can cause most of those electrons to change their spins. The magnet records a “1” or a “0” depending on which direction they point.

A drawback of ferromagnets is that they can be influenced by external magnetic fields, which can cause bits to flip unintentionally. And the spins of adjacent ferromagnets can influence one another unless there’s enough space between them—which limits MRAM’s ability to scale to higher densities for lower costs.  

Antiferromagnets—which include compounds of common metals such as manganese, platinum, and tin—don’t have that problem. Unlike ferromagnets, the spins of electrons within the same antiferromagnet don’t all point in the same direction. Electrons on neighboring atoms point opposite to each other, effectively canceling one another out. 

The collective orientation of all spins in an antiferromagnet can still record bits, but the magnet as a whole has no magnetic field. As a result, antiferromagnets can’t influence each other, and they aren’t bothered by external fields. Which means you can pack them in tight.

And because spin dynamics in antiferromagnets are much faster, bits can be switched in picoseconds at terahertz frequencies—far faster than the nanosecond-scale, gigahertz-frequency switching of today’s ferromagnetic MRAM. Theoretically, antiferromagnets could increase the writing speed of MRAM by three orders of magnitude. 
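That "three orders of magnitude" claim follows directly from the switching frequencies involved; a quick sanity check in Python, using 1 GHz and 1 THz as representative values rather than measurements from any specific device:

```python
# Switching time scales as the inverse of the resonance frequency.
# The frequencies below are representative, not device measurements.

FERRO_FREQ = 1e9        # ~gigahertz precession in ferromagnetic MRAM, Hz
ANTIFERRO_FREQ = 1e12   # ~terahertz spin dynamics in antiferromagnets, Hz

ferro_switch = 1 / FERRO_FREQ        # ~1 nanosecond
antiferro_switch = 1 / ANTIFERRO_FREQ  # ~1 picosecond

speedup = ferro_switch / antiferro_switch
print(f"switching speedup: {speedup:.0f}x")  # 1000x, i.e. three orders of magnitude
```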

Only in the past five years have antiferromagnets been seriously investigated for their potential in memory, since researchers in Europe demonstrated it was possible to use an electric current to control the spins of electrons within an antiferromagnet. That work has led to a flurry of research investigating different types of antiferromagnets and switching techniques. 

“There are a very wide range of antiferromagnetic materials one could choose,” says Pedram Khalili-Amiri, an associate professor of electrical and computer engineering at Northwestern University. “There’s more of them than there are ferromagnets. This is a blessing and a curse.” 

Researchers have reported several advances using antiferromagnets since the start of this year. Khalili-Amiri led a team that showed switching in tiny pillars of platinum manganese, an antiferromagnet used in hard drives and magnetic field sensors today. The team described its work in February in Nature Electronics. “We wanted to build a device that was CMOS-compatible,” he says. 

In March, a group involving Markus Meinert of the Technical University of Darmstadt in Germany wrote in Physical Review Research of an experiment showing a novel MRAM technique for switching bits, known as spin-orbit torque, could also work for switching bits stored in one type of antiferromagnet. 

And in April, Satoru Nakatsuji at the University of Tokyo and his collaborators described in Nature an experiment that successfully switched bits in an antiferromagnet (Mn3Sn) that hosts a particular type of electron known as Weyl fermions. The spin states of these fermions are relatively easy to measure, allowing for a much simpler device than other antiferromagnetic designs.

Despite this progress, Barry Zink from the University of Denver says it’s too early to bet on any one type of antiferromagnet. “It’s a really exciting field. I think it’s not clear yet just exactly which material, or if just one of them by itself, is going to be the winner in all this,” he says.  

A number of technical challenges would have to be resolved before antiferromagnets could ever be used in commercial devices. One issue that Zink has written about is that heat from a current appears to produce a voltage pattern in some antiferromagnetic devices that looks similar to the signal produced by a switch in electron spin. To read data back reliably, the two will have to be distinguished. 

And reading data from an antiferromagnet is still much slower and more difficult than reading data stored in ferromagnets. “We need to find ways of reading more efficiently,” says Meinert. 

Already, companies are beginning to take note. Though he declined to share names, Nakatsuji says he’s been contacted by large technology companies for his lab’s work on antiferromagnets. “I think in the near future, a lot will become possible,” he says.

Optical Atomic Clocks Are Ready to Redefine Time

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/optical-atomic-clock-advantage-expands-electronics

Optical atomic clocks will likely redefine the international standard for measuring a second in time. They are far more accurate and stable than the current standard, which is based on microwave atomic clocks.

Now, researchers in the United States have figured out how to convert high-performance signals from optical clocks into a microwave signal that can more easily find practical use in modern electronic systems.

Cable Assemblies: Determining a Reliable & Cost-Effective Approach

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/cable_assemblies_determing_a_reliable_and_cost_effective_approach

Cable Assemblies

Cable assemblies fulfill an important role in high-density electronic systems. Often overlooked as ‘a couple of connectors and some cable’, these are in fact an essential element of modern engineering design and should not be underestimated.

In this paper, we outline the process of creating cable assemblies for application sectors requiring the highest levels of reliability, such as aerospace, defense, space and motorsport.

Sharing Manufacturing IP Could Help Us Deal with COVID-19

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/semiconductors/devices/sharing-manufacturing-ip-could-help-us-deal-with-covid19


Back in 2012, Netflix released Chaos Monkey, an open-source tool for triggering random failures in critical computing infrastructure. Similar stress testing, at least in a simulated environment, has been applied in other contexts such as banking, but we’ve never stress tested industrial production during a viral pandemic. Until now.

COVID-19 has demonstrated beyond doubt the fragility of the global system of lean inventories and just-in-time delivery. Many nations have immediate need for critical medical supplies and equipment, even as we grope for the switch that will allow us to turn the global economy back on. That means people have to be able to manufacture stuff, where it’s needed, when it’s needed, and from components that can be locally sourced. That’s a big ask, because most of our technology comes to us from far away, often in seamlessly opaque packaging—like a smartphone, all surface with no visible interior.

The manufacture of even basic products can be so encumbered by secrecy or obscurity that it quickly becomes difficult to learn how to make them or to re-create their functionality in some other way. While we normally tolerate such impediments as part of normal business practice, they have thrown up unexpected roadblocks to keeping the world operating through the present crisis.

We must do whatever we can to lower the barriers to getting things built, and that begins by embracing a newfound flexibility in our approaches to both manufacturing and intellectual property. Companies are already rising to this challenge.

For example, Medtronic shared the designs and code for its portable ventilator at no charge, enabling other capable manufacturers to take up the challenge of building and distributing enough units to meet peak demand during the pandemic. Countless other pieces of electronic equipment—everything from routers to thermostats—operate in critical environments and need immediate replacement should they fail. Where they cannot be replaced, we will fall deeper into the ditch we now find ourselves in.

It would be ideal if any sort of equipment could be “printed” on demand. We already have the capacity for such rapid manufacturing in some realms, but to address the breadth of the present crisis would require a comprehensive database of product designs, testing, firmware, and much else. Little of that infrastructure exists at present, highlighting a real danger: If we don’t construct a distributed, global build-it-here-now capacity, we might burn through our existing inventories without any way to replenish them. Then we will be truly stuck.

Many firms will no doubt have reservations about handing over their intellectual property, even to satisfy critical needs. This tension between normal business practice and public good echoes the contours of the dilemmas facing personal privacy and public health. We nevertheless need urgently to find a way to share trade secrets—temporarily—to preserve the kind of world within which business can one day operate normally.

Some governments have already signaled greater flexibility in enforcing patents and other intellectual property protections during this crisis. Yet more is needed. Like Medtronic, businesses should take the plunge: open up, share their trade secrets, and provide guidance to others (even former competitors) to help speed our way into a post-pandemic economy. Sharing today will make that return much faster and far less painful. To paraphrase a wise old technologist, we either hang together, or we will no doubt hang separately.

This article appears in the June 2020 print issue as “Not Business as Usual.”

TSMC’s Geopolitical Dance

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/tsmcs-geopolitical-dance

The global semiconductor supply chain is having an interesting year. Having adjusted to the potential and realities of a U.S.-China trade war, it is now faced with an economy-halting pandemic. Friday’s news seemed a microcosm of what is emerging from this moment: a combination of less concentrated advanced manufacturing and attempts to pressure companies to bend to geopolitical objectives.

On 15 May, the world’s largest semiconductor foundry, TSMC, announced that it planned to build a $12-billion fab in Arizona, which would begin production in 2024. (The $12 billion would be invested between 2021 and 2029.) That same day, the Trump Administration said it would now require TSMC and other non-U.S. chipmakers to get a license from the U.S. Commerce Department if they want to ship chips to Huawei that were made using U.S. software and technology.

First, the new fab: According to VLSI Research’s Dan Hutcheson, a U.S. fab is partly a ploy to keep Apple happy. The iPhone-etc.-maker’s CEO Tim Cook has been pushing for such a move for some time to ensure supply continuity for the processors that go in the company’s products. These processors have historically used leading-edge chip making technology. Currently that’s TSMC’s 7-nanometer process, but the company says the next generation process, 5-nanometers, is in production now.

TSMC, of course, has other important customers for its leading-edge technologies. AMD, Xilinx, Qualcomm, and Nvidia are among them; and more recently, so are cloud giants such as Google, Microsoft, Facebook, and Amazon, which have been developing their own server and AI designs.

To keep them happy, the Arizona fab will have to operate at the most advanced process available. TSMC is promising a 5-nanometer fab there, but by 2024 when production is set to begin, TSMC may be moving to another process generation, 3-nanometers. But fabs are built to be upgraded, Hutcheson points out. They aren’t built around a particular technology, and it seems assured that whatever 3-nanometer and more advanced processes entail, they will still rely mainly on extreme-ultraviolet lithography, the same tech central to the 7-nanometer and 5-nanometer processes.

However, transferring a manufacturing process to a new location and getting it to the point that it yields a profitable proportion of wafers is never easy. Hutcheson notes that TSMC struggled with that when it first built fabs in Tainan, which is little more than an hour away by high-speed rail from its headquarters in Hsinchu. However, depending on where in Arizona the fab is located, the company may benefit from infrastructure and experienced employees related to Intel’s advanced fabs in Chandler.

The plant’s projected 20,000-wafers-per-month capacity figure is actually quite low compared to other facilities. It matches the company’s recently built 16-nanometer Fab 16 in Nanjing, China. But it’s not in the same league as the company’s planned 5-nanometer Fab 18 in southern Taiwan, which will have a nameplate capacity of 120,000 wafers per month. Still, 20,000 wafers per month is in line with the first phase of other new fabs, says Joanne Itow, managing director at Semico Research. And that capacity could translate to 144 million applications processors per year, according to Itow. That’s enough to partly supply several customers and generate about $1.44 billion in revenue for TSMC.
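Itow's capacity-to-chips conversion implies a dies-per-wafer figure that can be checked with simple arithmetic; the ~600 good dies per wafer below is an assumption inferred from her numbers, not a figure from TSMC or Semico:

```python
# Back-of-the-envelope check of Semico Research's estimates for the
# planned Arizona fab. dies_per_wafer is an inferred assumption.

wafers_per_month = 20_000
dies_per_wafer = 600   # assumed good application-processor dies per wafer

chips_per_year = wafers_per_month * 12 * dies_per_wafer
print(f"{chips_per_year:,} chips/year")  # 144,000,000 — matches Itow's figure

revenue_estimate = 1.44e9  # Itow's annual revenue estimate, USD
print(f"implied revenue per chip: ${revenue_estimate / chips_per_year:.2f}")  # $10.00
```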

That’s all assuming this fab actually happens. “Right now, it’s a Powerpoint fab,” says Hutcheson. TSMC’s own press release gives a very conditional feel: TSMC “announced its intention to build and operate an advanced semiconductor fab in the United States with the mutual understanding and commitment to support from the U.S. federal government and the State of Arizona.”

“Technically, it probably doesn’t matter where the chips are manufactured; however, in today’s tense trade arena the optics of having a fab in the United States provide a more positive partnership atmosphere,” says Itow.

The other TSMC news is much less of a win-win. The U.S. government has sought to starve Huawei of advanced semiconductors. In 2019, its Bureau of Industry and Security (BIS) added Huawei and its affiliates, particularly its semiconductor arm HiSilicon, to its list of entities that U.S. firms can’t sell to without a license. Huawei got around this by stepping up its own chip design capabilities, though it relies on foundries, especially TSMC, to manufacture its advanced chips. BIS is now seeking to tighten the screws by extending the licensing requirement to foundries using U.S. software and tools to make Huawei’s chips.

In effect, the rule boils down to one country specifying which tools can be used in a factory in a second country to produce goods for a customer in a third. TSMC is among the largest customers of U.S. chip-tool makers, and those toolmakers have reason to worry, according to the Semiconductor Industry Association. “We are concerned this rule may create uncertainty and disruption for the global semiconductor supply chain, but it seems to be less damaging to the U.S. semiconductor industry than the very broad approaches previously considered,” the organization’s CEO John Neuffer said in a statement.

The new rule will likely accelerate Huawei’s ongoing shift away from U.S. technology, says Nelson Dong, a senior partner in charge of national security at the international law firm Dorsey & Whitney and a board member at the non-profit advocacy group the National Committee on US-China Relations. Indirectly, “this move may well force the global semiconductor industry to look away from U.S. suppliers of semiconductor design tools and semiconductor production equipment and even to create new rival companies in other countries, including China itself,” he says. He cites the example of export restrictions in the satellite industry, which ultimately led to the growth of competing businesses outside the United States and higher prices for U.S. satellite makers due to their suppliers’ smaller market.

It is difficult to imagine how the U.S. could enforce such a rule in an advanced fab. “Fabs are an extreme version of ‘What happens in Vegas, stays in Vegas,’” quips Hutcheson. Manufacturing processes are proprietary and very closely guarded. “How would they even know it was going on?”

3 Ways Chiplets Are Remaking Processors

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/semiconductors/processors/3-ways-chiplets-are-remaking-processors

Want more computing power in your processor? Add more silicon. But complexity and cost are starting to erode that maxim.

“The absolute die size has been going up relentlessly over time and is trending to bump into” the limit of what chipmaking equipment can produce, Samuel Naffziger of Advanced Micro Devices (AMD) told engineers at the International Solid-State Circuits Conference (ISSCC) in San Francisco earlier this year. At the same time, for a fixed die size, the cost per square millimeter “has been increasing relentlessly and is now accelerating,” he says.

The combined squeeze of rising costs and ever-larger chip sizes is leading to a solution in which processors are made up of collections of smaller, ­less-expensive-to-produce chiplets bound together by high-bandwidth connections within a single package. At ISSCC, AMD, Intel, and the French research organization CEA-Leti showed how far this scheme can go.

The CEA-Leti processor stacks six 16-core chiplets on top of an “active interposer,” made of a thin sliver of silicon, to create a 96-core processor. The interposer houses voltage-regulation systems that are usually found on the processor itself. It also features a network-on-chip that uses three different communication circuits to link the cores’ on-chip SRAM memories. The network is capable of slinging 3 terabytes per second per square millimeter of silicon with a latency of just 0.6 nanosecond per millimeter.
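The per-millimeter figures reported for the network-on-chip can be turned into system-level numbers; in the sketch below, the 20-mm path and 4-mm² network area are illustrative assumptions, while the latency and bandwidth densities are as reported:

```python
# System-level implications of the reported network-on-chip figures.
# path_mm and network_area_mm2 are assumed values for illustration only.

latency_per_mm = 0.6     # nanoseconds per millimeter, as reported
bandwidth_density = 3.0  # terabytes per second per square millimeter, as reported

path_mm = 20             # assumed corner-to-corner hop across the interposer
network_area_mm2 = 4     # assumed silicon area devoted to the network

print(f"worst-case latency: {latency_per_mm * path_mm:.1f} ns")                  # 12.0 ns
print(f"aggregate bandwidth: {bandwidth_density * network_area_mm2:.0f} TB/s")   # 12 TB/s
```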

Active interposers are the best way forward for chiplet technology, if it’s ever going to allow the integration of disparate technologies from multiple vendors into single systems, according to Pascal Vivet, a scientific director at CEA-Leti. “If you want to integrate chiplets from vendor A with chiplets from vendor B, and their interfaces are not compatible, you need a way to glue them together,” he says. “And the only way to glue them together is with active circuits in the interposer.”

Chiplet enthusiasts imagine a remaking of the system-on-chip industry so that chiplets from multiple vendors could all be integrated with little effort, thanks to standardized interfaces. The result would be cheaper, more flexible mix-and-match systems.

But industry is not there yet. Rather than produce simple mix-and-match systems, Intel and AMD each designed their new chiplets to coordinate closely with one another and with the packages that integrate them. Nevertheless, the results seem worthwhile.

Intel used its 3D chiplet-integration tech, called Foveros, to produce the new Lakefield mobile processor. Foveros provides high-data-rate interconnects between chiplets by stacking them atop one another and delivering power and data from the package vertically through the bottom die. At ISSCC, Intel’s Wilfred Gomes explained that among the goals for Lakefield was to boost graphics computation by about 50 percent while consuming one-tenth the standby power of its predecessor. No single manufacturing process can produce transistors that would achieve both targets, but Foveros allows for mixing dies that have transistors designed for high-performance computing with those designed for superior standby power consumption.

AMD has been using chiplets connected on an organic substrate within the chip package. At ISSCC, the company detailed how it designed the chiplets and package together for its second-generation EPYC high-performance processors. The previous generation had been made up of four chiplets. But in order to fit more silicon in while keeping costs down, the company redesigned the chiplets so that only the computing cores were upgraded to Taiwan Semiconductor Manufacturing Co.’s 7-nanometer process technology, the most advanced available. All other functions were piled into a central input/output chiplet made using older, less costly technology. Once that was done, “we had to figure out how to route nine chiplets in the same package size that we had done with four,” said Naffziger. The result was “an unprecedented amount of silicon/package codesign.”

This article appears in the May 2020 print issue as “Chiplets Are the Future of Processors.”

How the Father of FinFETs Helped Save Moore’s Law

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/semiconductors/devices/how-the-father-of-finfets-helped-save-moores-law

It was 1995. Advances in chip technology continued apace with Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years, generally because of the shrinking size of those transistors.

But the horizon no longer seemed limitless. Indeed, for the first time, murmurs throughout the semiconductor industry predicted the death of Moore’s Law. The golden days would be coming to an end, the predictions went, when the size of a critical transistor feature, then around 350 nanometers, reached below 100 nm. Even the U.S. government was worried—so worried that DARPA raised an alarm, launching a program seeking new chip technologies that could extend progress.
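Those predictions had simple arithmetic behind them: if transistor density doubles every two years and scales with the inverse square of the feature size, the feared sub-100-nm threshold was only a handful of process generations away. A sketch, treating the two-year cadence and square-law scaling as idealizations:

```python
import math

# How soon 1995's ~350-nm transistors would cross the feared 100-nm line,
# assuming density scales as 1/feature^2 and doubles every two years.

feature_then = 350    # nanometers, mid-1990s
feature_feared = 100  # nanometers, the predicted breaking point

density_gain = (feature_then / feature_feared) ** 2  # ~12.25x
doublings = math.log2(density_gain)                  # ~3.6 doublings
years = 2 * doublings                                # ~7 years

print(f"{density_gain:.1f}x density in {doublings:.1f} doublings, ~{years:.0f} years away")
```

On this idealized cadence, the industry in 1995 had only about seven years of conventional scaling left before hitting the predicted wall.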

Chenming Hu, then a professor of electrical engineering and computer science at the University of California, Berkeley, jumped at the challenge. He immediately thought of a solution—actually, two solutions—and, on a plane ride a few days later, sketched out those designs. One of those ideas, raising the channel through which current flows so that it sticks out above the surface of the chip, became the FinFET, a technology that earned Hu this year’s IEEE Medal of Honor “for a distinguished career of developing and putting into practice semiconductor models, particularly 3-D device structures, that have helped keep Moore’s Law going over many decades.”

The story of the FinFET didn’t begin with Hu putting pencil to paper on an airline tray table, of course.

It started in Taiwan, where Hu was a curious child, conducting stove-top experiments on seawater and dismantling—and reassembling—alarm clocks. As he approached the end of high school, he was still interested in science, mostly chemistry. But instead of targeting a chemistry degree, he applied for the electrical engineering program at the National Taiwan University, even though he didn’t really know what an EE actually did. It was simply a challenge—the electrical engineering program required the highest test scores to get in.

During his last year of college, Hu discovered the industry he would later shake up, thanks to Frank Fang, then a visiting professor from the United States.

“It was 1968,” Hu recalls, “and he told us semiconductors were going to be the material for future televisions, and the televisions would be like photographs we could hang on the wall.”

That, in an era of bulky tube televisions, got Hu’s attention. He decided that semiconductors would be the field for him and applied to graduate programs in the United States. In 1969, he landed at Berkeley, where he joined a research group working on metal-oxide semiconductor (MOS) transistors.

His career soon took a detour because semiconductors, he recalls, just seemed too easy. He switched to researching optical circuits, did his Ph.D. thesis on integrated optics, and went off to MIT to continue his work in that field.

But then came the 1973 oil embargo. “I felt I had to do something,” he said, “something that was useful, important; that wasn’t just writing papers.”

So he switched his efforts toward developing low-cost solar cells for terrestrial applications—at the time, solar cells were used only on satellites. In 1976, he returned to Berkeley, this time as a professor, planning to do research in energy topics, including hybrid cars, an area that transported him back to semiconductors. “Electric cars,” Hu explains, “needed high voltage, high current semiconductor devices.”

Come the early 1980s, that move back to semiconductor research turned out to be a good thing. Government funding for energy research dried up, but a host of San Francisco Bay Area companies were supporting semiconductor research, and transitioning to corporate funding “was not very difficult,” Hu says. He started spending time down in Silicon Valley, not far from Berkeley, invited by companies to teach short courses on semiconductor devices. And in 1982, he spent a sabbatical in the heart of Silicon Valley, at National Semiconductor in Santa Clara.

“Being in industry then ended up having a long influence on me,” Hu says. “In academia, we learn from each other about what is important, so what I thought was interesting really came just because I was reading another paper and felt, ‘Hey, I can do better than that.’ But once I opened my eyes to industry, I found that’s where the interesting problems are.” And that epiphany got Hu looking harder at the 3D structure of transistors.

A field-effect transistor has four basic parts—a source, a drain, a conductive channel that connects the two, and a gate to control the flow of current down the channel. As these components were made smaller, people started noticing that the behaviors of transistors were changing with long-term use. These changes weren’t showing up in short-term testing, and companies had difficulty predicting the changes.

In 1983, Hu read a paper published by researchers at IBM that described this challenge. Having spent time at National Semiconductor, he realized the kinds of problems this lack of long-term reliability could cause for the industry. Had he not worked in the trenches, he says, “I wouldn’t have known just how important a problem it was, and so I wouldn’t have been willing to spend nearly 10 years working on it.”

Hu decided to take on the challenge, and with a group of students he developed what he called the hot-carrier-injection theory for predicting the reliability of MOS semiconductors. It’s a quantitative model for how a device degrades as electrons migrate through it. He then turned to investigating another reliability problem: the ways in which oxides break down over time, a rising concern as manufacturers made the oxide layers of semiconductors thinner and thinner.

These research efforts, Hu says, required him to develop a deep understanding of what happens inside transistors, work that evolved into what came to be called the Berkeley Reliability Tool (BERT) and BSIM, a set of transistor models. BSIM became an industry standard and remains in use today; Hu still leads the effort to regularly update its models.

Hu continued to work with his students to study the basic characteristics of transistors—how they work, how they fail, and how they change over time—well into the 1990s. Meanwhile, commercial chips continued to evolve along the path predicted by Moore’s Law. But by the mid-1990s, with the average feature size around 350 nm, the prospects for being able to shrink transistors further had started looking worrisome.

“The end of Moore’s Law was in view,” recalls Lewis Terman, who was at IBM Research at the time.

The main problem was power. As features grew smaller, current that leaked through when a transistor was in its “off” state became a bigger issue. This leakage grew so great that it increased—or in some devices even dominated—a chip’s power consumption.

“Papers started projecting that Moore’s Law for CMOS would come to an end below 100 nm, because at some point you would dissipate more watts per square centimeter than a rocket nozzle,” Hu recalled. “And the industry declared it a losing battle.”

Not ready to give up on Moore’s Law, DARPA (the Defense Advanced Research Projects Agency) looked to fund research that promised to break that barrier, launching an effort in mid-1995 to develop what it called the 25-nm Switch.

“I liked the idea of 25 nm—that it was far enough beyond what the industry thought possible,” Hu says.

To Hu, the fundamental problem was clear: the channel had to be made very thin to prevent electrons from sneaking past the gate. To date, solutions had involved thinning the gate’s oxide layer. That gave the gate better control over the channel, reducing leakage current. But Hu’s work in reliability had shown him that this approach was close to a limit: Make the oxide layer sufficiently thin and electrons could jump across it into the silicon substrate, forming yet another source of leakage. 

Two other approaches immediately came to mind. One involved making it harder for the charges to sneak around the gate by adding a layer of insulation buried in the silicon beneath the transistor. That design came to be called fully depleted silicon-on-insulator, or FDSOI. The other involved giving the gate greater control over the flow of the charge by extending the thin channel vertically above the substrate, like a shark’s fin, so that the gate could wrap around the channel on three sides instead of just sitting on top. This structure was dubbed the FinFET, which had the additional advantage that using space vertically relieved some of the congestion on the 2D plane, ushering in the era of 3D transistors.

There wasn’t a lot of time to get a proposal submitted to DARPA, however. Hu had heard about the DARPA funding from a fellow Berkeley faculty member, Jeffrey Bokor, who, in turn, had heard about it while windsurfing with a DARPA program director. So Hu quickly met with Bokor and another colleague, Tsu Jae King, and confirmed that the team would pull together a proposal within a week. On a plane trip to Japan a day or two later, he sketched out the two designs, faxing his sketches and a description of his technical approach back to Berkeley when he arrived at his hotel in Japan. The team submitted the proposal, and DARPA later awarded them a four-year research grant.

Ideas similar to FinFET had been described before in theoretical papers. Hu and his team, however, actually built manufacturable devices and showed how the design would make transistors 25 nm and smaller possible. “The others who read the papers didn’t see it as a solution, because it would be hard to build and may or may not work. Even the people who wrote the papers did not pursue it,” says Hu. “I think the difference was that we looked at it and said, we want to do this not because we want to write another paper, or get another grant, but because we want to help the industry. We felt we had to keep [Moore’s Law] going.

“As technologists,” Hu continues, “we have the responsibility to make sure the thing doesn’t stop, because once it stops, we’re losing the biggest hope for us to have more abilities to solve the world’s difficult problems.”

Hu and his team “were well-poised to develop the FinFET because of the way he trains his students to think about devices,” says Elyse Rosenbaum, a former student of his and now a professor at the University of Illinois at Urbana-Champaign. “He emphasizes big picture, qualitative understanding. When studying a semiconductor device, some people focus on creating a model and then numerically solving all the points in its 3D grid. He taught us to step back, to try to visualize where the electric field is distributed in a device, where the potential barriers are located, and how the current flow changes when we change the dimension of a particular feature.”

Hu felt that visualizing the behavior of semiconductor devices was so important, Rosenbaum recalls, that once, struggling to teach his students his process, he “built us a model of the behavior of an MOS transistor using his kids’ Play-Doh.”

“These things looked like a lightning invention,” said Fari Assaderaghi, a former student who is now senior vice president of innovation and advanced technology at NXP Semiconductors. “But his team had been working on fundamental concepts of what an ideal device should be, working from first principles of physics early on; how to build the structure comes from that.”

By 2000, at the end of the four-year grant term, Hu and his team had built working devices and published their research, raising immediate, widespread interest within the industry. It took another decade, however, before chips using FinFETs began rolling off of manufacturing lines, the first from Intel in 2011. Why so long?

“It was not broken yet,” Hu explains, referring to the industry’s ability to make semiconductor circuits more and more compact. “People were thinking it was going to break, but you never fix anything that’s not broken.”

It turned out that the DARPA program managers were prescient—they had called the project the 25-nm Switch, and FinFETs came into play when the semiconductor industry moved to sub-25-nm geometries.

FDSOI, meanwhile, also progressed and is used in industry today. In particular, it’s found in optical and RF devices, but FinFETs currently dominate the processor industry. Hu says he never really promoted one approach over the other.

In FinFET’s dormant years, Hu took a three-year break from Berkeley to serve as chief technology officer of semiconductor manufacturer TSMC in Taiwan. He saw that as a chance to pay back the country where he received his initial education. He returned to Berkeley in 2004, continuing his teaching, research in new energy-efficient semiconductor devices, and efforts to support BSIM. In 2009, Hu stopped teaching regular classes, but as a professor emeritus, he still works with graduate students.

Since Hu moved back to Berkeley, FinFET technology has swept the industry. And Moore’s Law did not come to an end at 25 nm, although its demise is still regularly predicted.

“It is going to gradually slow down, but we aren’t going to have a replacement for MOS semiconductors for a hundred years,” Hu says. This does not make him pessimistic, though. “There are still ways of improving circuit density and power consumption and speed, and we can expect the semiconductor industry to keep giving people more and more useful and convenient and portable devices. We just need more creativity and a big dose of confidence.”

This article appears in the May 2020 print issue as “The Father of FinFETs.”

Ultraviolet-LED Maker Demonstrates 30-Second Coronavirus Kill

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/ultravioletled-maker-demonstrates-30second-coronavirus-kill

Robots and stranger machines have been using a particular band of ultraviolet light to sterilize surfaces that might be contaminated with coronavirus. Those that must decontaminate large spaces, such as hospital rooms or aircraft cabins, use large, power-hungry mercury lamps to produce ultraviolet-C light. Companies around the world are working to improve the capabilities of UV-C-producing LEDs, to offer a more compact and efficient alternative. Earlier this month, Seoul Viosys showed what it says is the first 99.9 percent sterilization of SARS-COV-2, the coronavirus that causes COVID-19, using ultraviolet LEDs.

UV-C light is deadly to viruses and bacteria, because the 100-280 nanometer wavelengths of the C-band shred genetic material. Unfortunately, UV-C is also strongly absorbed by oxygen and ozone in the air, so sources have to be powerful to have an effect at a distance. (The atmosphere is such a strong barrier that the sun’s UV-C doesn’t reach the Earth’s surface.) Working with researchers at Korea University, in Seoul, the company showed that its Violed LED modules could eliminate 99.9 percent of the SARS-COV-2 virus using a 30-second dose from a distance of three centimeters.

Unfortunately, the company did not disclose how many of its LEDs were used to achieve that. Assuming that it and the university researchers used a single Violed CMD-FSC-CO1A integrated LED module, a 30-second dose would have delivered at most 600 millijoules of energy. That’s roughly in line with expectations: a study of UV-C’s ability to kill influenza A viruses on N95 respirator masks indicated that about 1 joule per square centimeter would do the job.
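The dose figures above can be sanity-checked with a little arithmetic. The module’s optical output power and the illuminated area below are assumptions (20 milliwatts and 1 square centimeter), chosen only to be consistent with the article’s 600-millijoule, 30-second numbers:

```python
# Sanity check of the UV-C dose arithmetic.
# Assumptions: a single module emitting 20 mW of optical power,
# spread over ~1 cm^2 at the 3 cm working distance.
module_power_w = 0.020   # optical output, watts (assumed)
exposure_s = 30          # dose duration, seconds

energy_j = module_power_w * exposure_s
print(f"Delivered energy: {energy_j * 1000:.0f} mJ")  # 600 mJ

area_cm2 = 1.0           # illuminated area (assumed)
dose_j_per_cm2 = energy_j / area_cm2
print(f"Dose: {dose_j_per_cm2:.1f} J/cm^2")
```

That ~0.6 J/cm² is the same order of magnitude as the roughly 1 J/cm² the influenza study found effective, which is why the result is plausible even without knowing the exact LED count.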

While the 3-centimeter distance may work in tight spaces such as an air filter or water purifier—products that UV LEDs already serve—it won’t do for hospital-room-sterilizing robots. The GermFalcon airplane cabin sterilizer, for example, needs to bathe an aircraft cabin in light strong enough to kill the virus in seconds from a distance of about 30 centimeters, its inventor Dr. Arthur Kreitenberg told IEEE Spectrum last month. Today’s UV-C LEDs can’t produce enough light for the job, he said. But with the GermFalcon’s mercury lamps, which measure output in watts, that power comes at a large cost in energy and bulk. The system’s iron-phosphate battery pack has to deliver 100 amperes to produce the needed UV power.
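The gap between a 3-centimeter and a 30-centimeter working distance is bigger than it sounds. A rough inverse-square estimate makes the point; treating the emitter as a point source is an idealization here, since real lamps and LED arrays have different beam patterns:

```python
# Inverse-square estimate: irradiance from a small source falls off
# with the square of distance, so delivering the same dose in the
# same time at 10x the distance needs ~100x the optical power.
# Point-source idealization only.
near_cm, far_cm = 3.0, 30.0
power_ratio = (far_cm / near_cm) ** 2
print(f"Required power scales by ~{power_ratio:.0f}x at {far_cm:.0f} cm")
```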

The potential advantages of UV-C LEDs over mercury lamps include a lack of toxic mercury, better robustness, longer lifetimes, faster startup, and emission at a diversity of wavelengths, which may aid in their germicidal role. But it’s their potential for efficiency that could be most important.

At the moment, mercury lamps have a better wall-plug efficiency—electrical power in versus optical power out—than the UV-C LEDs on the market. The wall-plug efficiency of today’s UV-C LEDs is just 2.8 percent, with 3.3-percent-efficient devices in the R&D phase, according to Jae-hak Jeong, technical research fellow and vice president at Seoul Semiconductor, Seoul Viosys’ parent company. Mercury lamps boast 15 to 35 percent.

The mercury lamp’s advantage is not expected to last, because researchers expect UV-C LEDs to follow an efficiency-improvement path similar to that of solid-state lighting’s blue LEDs. However, UV-C devices have a long way to go. Blue LEDs typically have an internal quantum efficiency (the fraction of electrons injected into the LED’s active region that generate photons) of about 90 percent; for UV-C it’s 30 to 40 percent, says Jeong. For external quantum efficiency, the ratio of photons emitted to electrons passing through the LED, the comparison is even worse: about 70 percent for blue LEDs versus 10 to 16 percent for UV-C devices.
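The quoted figures chain together in a standard textbook decomposition, in which external quantum efficiency is roughly the product of carrier-injection efficiency, internal quantum efficiency, and a light-extraction fraction. The decomposition and the assumption of near-unity injection efficiency are mine, not the article’s:

```python
# Textbook decomposition of LED efficiency (an assumption, not from
# the article): EQE ~= injection_eff * IQE * extraction_eff.
def eqe(injection_eff: float, iqe: float, extraction_eff: float) -> float:
    """External quantum efficiency: photons out per electron in."""
    return injection_eff * iqe * extraction_eff

# Blue LED: ~90% IQE and ~70% EQE imply a light-extraction fraction
# near 78%, assuming near-unity injection efficiency.
blue_extraction = 0.70 / 0.90
assert abs(eqe(1.0, 0.90, blue_extraction) - 0.70) < 1e-9

# UV-C LED: ~35% IQE and ~13% EQE (midpoints of the quoted ranges)
# imply extraction of only ~37% -- both factors lag blue LEDs.
uvc_extraction = 0.13 / 0.35
print(f"blue extraction ~{blue_extraction:.0%}, UV-C ~{uvc_extraction:.0%}")
```

The arithmetic shows why both epitaxy (which sets internal quantum efficiency) and fabrication (which affects extraction) need to improve, as Jeong notes below.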

According to Jeong, boosting these figures will take improvements both to fabrication process and to epitaxy, the growth of the semiconductor crystal that the LEDs are made upon. These LEDs are usually built by using epitaxy to grow a layer of crystalline aluminum nitride atop a sapphire wafer. Defects in the crystal are a main limiter to LED performance, so improving the epitaxy process is one path toward brighter LEDs.

Experts Invent a New Way to Track Semiconductor Technology

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/experts-invent-a-new-way-to-track-semiconductor-technology


A group of some of the most noted device engineers on the planet, including several IEEE Fellows and this year’s IEEE Medal of Honor recipient, is proposing a new way of judging the progress of semiconductor technology. Today’s measure, the technology node, began to go off the rails almost two decades ago. Since then, the gap between what a technology node is called and the size of the devices it can make has only grown. After all, there is nothing in a 7-nanometer chip that is actually that small. This mismatch is much more of a problem than you might think, argues one member of the group, Stanford University professor H.-S. Philip Wong.

“The broader research community has a feeling that [device] technology is over,” he says. “Nothing could be further from the truth.”

Here’s What It’s Like Inside a Chip Foundry During the COVID-19 Pandemic

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/heres-what-its-like-inside-a-chip-foundry-during-the-covid19-pandemic

“Ironically, one of the safest places to be right now is in a cleanroom,” points out Thomas Sonderman, president of SkyWater Technology, in Bloomington, Minn.

Like every business, semiconductor foundries like SkyWater and GlobalFoundries have had to make some pretty radical changes to their operations in order to keep their workers safe and comply with new government mandates, but there are some unique problems to running a 24/7 chip-making operation.

GlobalFoundries’ COVID-19 plan is basically an evolution of its response to a previous coronavirus outbreak, the SARS epidemic of 2002-2003. When the company acquired Singapore-based Chartered Semiconductor in 2010, it inherited a set of fabs that had managed to produce chips through the worst of that outbreak. (According to the World Health Organization, Singapore suffered 238 SARS cases and 33 deaths.)

“During that period we established business policies, protocols, and health and safety measures to protect our team while maintaining operations,” says Ronald Sampson, GlobalFoundries’ senior vice president and general manager of U.S. fab operations. “That was a successful protocol that served as the basis for this current pandemic that we’re experiencing together now. Since that time we’ve implemented it worldwide and of course in our three U.S. factories.”

At Fab 8 in Malta, N.Y., GlobalFoundries’ most advanced 300-mm CMOS facility, that translates into a host of procedures. Some of them are common, such as working from home, forbidding travel, limiting visitors, and temperature screening. Others are particular to fab operations. For example, workers are split into two teams that never come into contact with each other; they aren’t in the building on the same day, and they even use separate gowning rooms to enter the cleanroom floor. Those gowning rooms are marked off in roughly 2-meter squares, and no two people are allowed to occupy the same square.

Once employees are suited up and in the cleanroom, they’re taking advantage of it. “It’s one of the cleanest places on earth,” says Sampson. “We’ve moved all of our operations meetings onto the factory floor itself,” instead of having physically separated team members in a conference room.

GlobalFoundries is sharing some of what makes that safety possible, too. It’s assisted healthcare facilities in New York and Vermont, where its U.S. fabs are located, with available personal protective equipment, such as face shields and masks, in addition to making cash donations to local food banks and other causes near its fabs around the world. (SkyWater is currently evaluating what the most significant needs are in its community and whether it is able to play a meaningful role in addressing them.)

SkyWater occupies a very different niche in the foundry universe than does GlobalFoundries Fab 8. It works on 200-mm wafers and invests heavily in co-developing new technology processes with its customers. In addition to manufacturing an essential microfluidic component for a coronavirus sequencing and identification system, it’s developing 3D carbon-nanotube-based chips through a $61-million DARPA program, for example.

But there are plenty of similarities with GlobalFoundries in SkyWater’s current operations, including telecommuting engineers, staggered in-person work shifts, and restricted entry for visitors. There are, of course, few visitors these days. Customers and technology development partners are meeting remotely with SkyWater’s engineers. And many chip making tools can be monitored by service companies remotely.

(Applied Materials, a major chip equipment maker, says that many customers’ tools are monitored and diagnosed remotely already. The company installs a server in the fab that allows field engineers access to the tools without having to set foot on premises.)

With the whole world in economic upheaval, you might expect that the crisis would lead to some surprises in foundry supply chains. Both GlobalFoundries and SkyWater say they are well prepared. For SkyWater, a relatively small U.S.-based foundry with just the one fab, a big reason for that preparedness was the trade war between the United States and China that began in 2018.

“If you look at the broader supply chain, we’ve been preparing for this since tariffs began,” says Sonderman. Those necessitated a deep dive into the business’s potential vulnerabilities that’s helped guide the response to the current crisis, he says.

At press time no employees of either company had tested positive for the virus. But that situation is likely to change as the virus spreads, and the companies say they will adapt. Like everybody else, “we’re finding our new normal,” says Sampson.

Graphene Solar Thermal Film Could Be a New Way to Harvest Renewable Energy

Post Syndicated from John Boyd original https://spectrum.ieee.org/energywise/semiconductors/materials/graphene-solar-heating-film-potential-new-renewable-energy-source

Researchers at the Center for Translational Atomaterials (CTAM) at Swinburne University of Technology in Melbourne, Australia, have developed a new graphene-based film that can absorb sunlight with an efficiency of over 90 percent, while simultaneously eliminating most IR thermal emission loss—the first time such a feat has been reported.

The result is an efficient solar heating metamaterial that can heat up rapidly to 83 degrees C (181 degrees F) in an open environment with minimal heat loss. Proposed applications for the film include thermal energy harvesting and storage, thermoelectricity generation, and seawater desalination.

Topological Photonics: What It Is and Why We Need It

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/semiconductors/optoelectronics/topological-photonics-what-it-is-why-we-need-it

Since topological insulators were first created in 2007, these novel materials, which are insulating on the inside and conductive on the outside, have intrigued researchers for their potential in electronics. However, a related but more obscure field—topological photonics—may reach practical applications first.

Topology is the branch of mathematics that investigates what aspects of shapes withstand deformation. For example, an object shaped like a ring may deform into the shape of a mug, with the ring’s hole forming the hole in the cup’s handle, but cannot deform into a shape without a hole.

Using insights from topology, researchers developed topological insulators. Electrons traveling along the edges or surfaces of these materials strongly resist any disturbances that might hinder their flow, much as the hole in a deforming ring would resist any change.

Recently, scientists have designed photonic topological insulators in which light is similarly “topologically protected.” These materials possess regular variations in their structures that lead specific wavelengths of light to flow along their exterior without scattering or losses, even around corners and imperfections.

Here are three promising potential uses for topological photonics.

TOPOLOGICAL LASERS Among the first practical applications of these novel materials may be lasers that incorporate topological protection. For example, Mercedeh Khajavikhan of the University of Southern California and her colleagues developed topological lasers that were more efficient and proved more robust against defects than conventional devices.

The first topological lasers each required an external laser to excite them to work, limiting practical use. However, scientists in Singapore and England recently developed an electrically driven topological laser.

The researchers started with a wafer made of gallium arsenide and aluminum gallium arsenide layers sandwiched together. When electrically charged, the wafer emitted bright light.

The scientists drilled a lattice of holes into the wafer. Each hole resembled an equilateral triangle with its corners snipped off. The lattice was surrounded by holes of the same shape oriented the opposite way.

The topologically protected light from the wafer flowed along the interface between the different sets of holes, and emerged from nearby channels as laser beams. The device proved robust against defects, says electrical and optical engineer Qi Jie Wang at Nanyang Technological University in Singapore.

The laser works in terahertz frequencies, which are useful for imaging and security screening. Khajavikhan and her colleagues are now working to develop ones that work at near-infrared wavelengths, possibly for telecommunications, imaging, and lidar.

PHOTONIC CHIPS By using photons instead of electrons, photonic chips promise to process data more quickly than conventional electronics can, potentially supporting high-capacity data routing for 5G or even 6G networks. Photonic topological insulators could prove especially valuable for photonic chips, guiding light around defects.

However, topological protection works only on the outsides of materials, meaning the interiors of photonic topological insulators are effectively wasted space, greatly limiting how compact such devices can get.

To address this problem, optical engineer Liang Feng at the University of Pennsylvania and his colleagues developed a photonic topological insulator with edges they could reconfigure so the entire device could shuttle data. They built a photonic chip 250 micrometers wide and etched it with oval rings. By pumping the chip with an external laser, they could alter the optical properties of individual rings, such that “we could get the light to go anywhere we wanted in the chip,” Feng says—from any input port to any output port, or even multiple outputs at once.

All in all, the chip hosted hundreds of times as many ports as seen in current state-of-the-art photonic routers and switches. Instead of requiring an off-chip laser to reconfigure the chip, the researchers are now developing an integrated way to perform that task.

QUANTUM CIRCUITRY Quantum computers based on qubits are theoretically extraordinarily powerful. But qubits based on superconducting circuits and trapped ions are susceptible to electromagnetic interference, making it difficult to scale up to useful machines. Qubits based on photons could avoid such problems.

Quantum computers work only if their qubits are “entangled,” or linked together to work as one. Entanglement is very fragile—researchers hope topological protection could defend photonic qubits from scattering and other disruptions that can occur when photons run across inevitable fabrication errors.

Photonic scientist Andrea Blanco-Redondo, now head of silicon photonics at Nokia Bell Labs, and her colleagues made lattices of silicon nanowires, each 450 nanometers wide, and lined them up in parallel. Occasionally a nanowire in the lattice was separated from the others by two thick gaps. This generated two different topologies within the lattice, and entangled photons traveling along the border between these topologies were topologically protected, even when the researchers added imperfections to the lattices. The hope is that such topological protection could help quantum computers based on light scale up to solve problems far beyond the capabilities of mainstream computers.

This article appears in the April 2020 print issue as “3 Practical Uses for Topological Photonics.”

Google Invents AI That Learns a Key Part of Chip Design

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/design/google-invents-ai-that-learns-a-key-part-of-chip-design

There’s been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that’s optimized to do today’s AI, not the AI of two to five years ago. Google’s solution: have an AI design the AI chip.

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work, posted today to arXiv.

“We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn’t exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.”

Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks’ worth of design effort by human experts in terms of power, performance, and area.

Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that performance is maximized while power consumption and chip area are minimized. Heightening the challenge is the requirement that all this happen while obeying rules about the density of interconnects. Goldie and Mirhoseini targeted chip placement because, even with today’s advanced tools, it takes a human expert weeks of iteration to produce an acceptable design.

Goldie and Mirhoseini modeled chip placement as a reinforcement learning problem. Reinforcement learning systems, unlike typical deep learning, do not train on a large set of labeled data. Instead, they learn by doing, adjusting the parameters in their networks according to a reward signal when they succeed. In this case, the reward was a proxy measure of a combination of power reduction, performance improvement, and area reduction. As a result, the placement-bot becomes better at its task the more designs it does.
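The learning-by-doing framing can be illustrated with a toy sketch. Everything here is an illustrative assumption: the grid, the two-pin nets, and the “learning” loop, which is reduced to best-of-N sampling against a wirelength proxy, standing in for Google’s learned policy network and its richer power/performance/area reward:

```python
import random

# Toy "placement" sketch: put 4 macros on an 8x8 grid so that a proxy
# cost (total Manhattan wirelength over the nets) is minimized.
# Illustrative only; the real agent uses a learned neural policy.
random.seed(0)
GRID = 8
N_MACROS = 4
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]  # pairs of connected macros

def wirelength(placement):
    """Proxy cost: Manhattan distance summed over all nets."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def random_placement():
    """Sample 4 distinct grid cells, one per macro."""
    cells = [(x, y) for x in range(GRID) for y in range(GRID)]
    return random.sample(cells, N_MACROS)

# "Reward signal": keep whichever placement scores best so far.
best = random_placement()
for episode in range(2000):
    candidate = random_placement()
    if wirelength(candidate) < wirelength(best):
        best = candidate

print("best wirelength found:", wirelength(best))
```

A real reinforcement-learning agent would adjust network parameters from the reward instead of merely remembering the best sample, which is what lets it generalize across chip designs rather than restart from scratch each time.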

The team hopes AI systems like theirs will lead to the design of “more chips in the same time period, and also chips that run faster, use less power, cost less to build, and use less area,” says Goldie.

Multiphysics Modeling of Piezoelectric Sensors and Actuators

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/multiphysics_modeling_of_piezoelectric_sensors_and_actuators

Are you interested in modelling piezoelectric sensors and actuators? If so, then join us for this webinar.

In this webinar, James Ransley from Veryst Engineering will work through a case study in which the properties of a piezoelectric bender actuator are optimized. The material orientation and dimensions are optimized to maximize the efficiency of the bender for the application. The blocking curve is computed and the dimensions are modified to achieve a given specification with minimal actuator volume.

The webinar will also include a live demo of piezoelectric device design using the COMSOL Multiphysics® simulation software. A Q&A session will conclude the webinar.


Optimal crystal orientation and displacement profile (color is proportional to relative displacement) for a piezoelectric bender made from lithium niobate (left) and PZT 5H (right).

Edge-AI Startup Gets $60 Million, Preps for Mass Production

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/israeli-edgeai-startup-gets-60million-preps-for-mass-production

Israeli startup Hailo says it has raised US $60 million in a second funding round, which it will use to mass-produce its Hailo-8 chip. The Hailo-8 is designed to do deep learning in cars, robots, and other “edge” machines. Such edge chips are meant to reduce the cost, size, and power consumption of using AI to process high-resolution information from sensors such as HD cameras.

Experiment Reveals the Peculiar Way Light Travels in a Photonic Crystal

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/topological-photonic-crystal-light

In novel materials known as photonic topological insulators, wavelengths of light can flow around sharp corners with virtually no losses. Now scientists have witnessed key details of what the light does inside these structures, which could help them to better engineer these materials for real-world applications.

Topology is the branch of mathematics that explores what features of shapes withstand deformation. For instance, an object shaped like a doughnut can get pushed and pulled into the shape of a mug, with the doughnut’s hole forming the hole in the cup’s handle, but it could not get deformed into a shape that lacked a hole.

Using insights from topology, researchers developed the first electronic topological insulators in 2007. Electrons traveling along the edges or surfaces of these materials strongly resist any disturbances that might hinder their flow, much as a doughnut might resist any change that would remove its hole.

Cartesiam Hopes to Make Embedded AI Easier for Everyone

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/cartesiam-startup-news-embedded-ai-arm-microcontrollers

French startup Cartesiam was founded because of the predicted inundation of IoT sensors and products. Even a few years ago, the idea was that these tens of billions of smart sensors would deliver their data to the cloud. AI and other software there would understand what it meant and trigger the appropriate action.

To Cartesiam’s founders, as to many others in the embedded-systems space, this scheme looked a little ludicrous. “We were thinking: it doesn’t make sense,” says general manager and cofounder Marc Dupaquier. Transporting all that data was expensive in terms of energy and money, it wasn’t secure, it added latency between an event and the needed reaction, and it endangered users’ privacy. So Cartesiam set about building a system that allows ordinary Arm microcontrollers to run a kind of AI called unsupervised learning.

Engineers Bang Tiny Laser Drum To Speed Data Transmission

Post Syndicated from Neil Savage original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/acoustic-wave-modulate-quantum-cascade-laser

Societies in Africa, the Amazon basin, and New Guinea used to send messages over long distances by banging on drums. Now a group of scientists in the United Kingdom is adapting that idea, using sound pulses to speed up the transmission of data.

Data centers and satellite relays have vast amounts of information to send from one place to another, so speeding up transmission would help enormously. Quantum cascade lasers (QCLs) can emit light at terahertz frequencies, but data has to be encoded onto the laser beam, and the basic laws of physics place a limit on how fast electronic systems can modulate the beam.

So engineers from the University of Leeds and the University of Nottingham in England decided to skip the electronics and use an acoustic wave to modulate the light instead. They describe their proof-of-concept in a recent paper in Nature Communications.

A QCL consists of a series of quantum wells, small areas that confine electrons at specific energy levels. As an electron drops from one well to the next in a sort of waterfall effect, it emits a photon, so a single electron can produce many photons.

To modulate the emission of those photons, and thus encode data onto the laser beam, the research team attached a thin aluminum film to one contact of the laser. They then hit the film with pulses from a different type of laser. Each brief pulse caused the aluminum skin to produce an acoustic wave that ran through the QCL, slightly deforming the structure.

“It’s as if the whole system’s being shaken really,” says John Cunningham, a professor of electronic and electrical engineering at Leeds who led the research. “It changes the probability of electron transfer between the quantum wells.”

The team used an off-the-shelf QCL to create its prototype system, and only achieved modulation of about 6 percent. Cunningham says it should be possible to reach 100 percent modulation by redesigning the laser so that the quantum wells are specifically engineered to respond to acoustic waves. He’d also like to incorporate a semiconductor phonon laser—a saser, the sonic equivalent of a laser—invented by Tony Kent, a professor of physics at Nottingham and a co-author of the paper. That would make the system more compact and efficient.

Electronic circuits, limited by inductance, capacitance, and resistance, can modulate a laser at a few tens of gigahertz at most. Cunningham says an acoustic system should increase that to hundreds of gigahertz for a tenfold increase in transmission speed, and might one day get even faster.