Tag Archives: Semiconductors/Devices

IMEC and Intel Researchers Develop Spintronic Logic Device

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/devices/imec-and-intel-researchers-develop-spintronic-logic-device

Spintronics is a budding path in the quest for a future beyond CMOS. Where today’s electronics harness an electron’s charge, spintronics plays with another key property: an electron’s spin, which is its intrinsic angular momentum that points either “up” or “down.” Spintronic devices use much less power than their CMOS counterparts—and keep their data even when the device is switched off.

Spin is already used in memory, such as magnetoresistive RAM (MRAM). However, manipulating tiny magnetic effects for the purpose of performing logical operations is more difficult. In 2018, for instance, MIT researchers experimented with hydrogen ions as a possible basis for spintronic transistors. Their efforts were very preliminary; the team admitted that there was still a long way to go before a fully functioning spintronic transistor that could store and process information might be developed. 

Now, researchers at imec and Intel, led by PhD candidate Eline Raymenants, have created a spintronic logic device that can be fully controlled with electric current rather than magnetic fields. The Intel-imec team presented its work at the recent IEEE International Electron Devices Meeting (IEDM).

An electron’s spin generates a magnetic moment. When many electrons with identical spins are close together, their magnetic moments can align and join forces to form a larger magnetic field. Such a region is called a magnetic domain, and the boundaries between domains are called domain walls. A material can consist of many such domains and domain walls, assembled like a magnetized mosaic.

Devices can encode 0s and 1s in those domains. A domain pointing “up” could represent a 0, while a “down” domain represents a 1. The Intel-imec device strings domains in a single-file line along a nanoscale wire. The device then uses current to shift those domains and their walls along the wire, like cars along a train track.
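
As a rough illustration (a toy model only, not the imec-Intel device physics), the track behaves like a shift register. Here is a minimal Python sketch, with all names invented for illustration:

```python
from collections import deque

# Toy shift-register model of a domain-wall track: bits ride along the
# wire as magnetic domains ("up" = 0, "down" = 1), and each current
# pulse shifts every domain one position, like cars along a train track.
track = deque([0, 1, 1, 0, 1])

def pulse(track, new_bit):
    """One current pulse: a freshly written domain enters at one end,
    and the domain at the far end shifts out, where it can be read."""
    track.appendleft(new_bit)
    return track.pop()

bit_out = pulse(track, 1)  # write a 1; read whatever reaches the far end
```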

The track meets the rest of the circuit at a magnetic tunnel junction (MTJ). It’s similar to the read-heads of today’s hard disks, but the researchers have implemented a new type of MTJ that’s optimized to move the domain walls more quickly. The MTJs read information from the track and act as logic inputs. At IEDM, the researchers presented a proof of concept: several MTJs feeding an AND gate.

The same MTJ is also where information is written into the track. To do this, the Intel-imec device uses the same technology that’s used in MRAM today. The device passes a spin-polarized current—most of whose electrons have spins in one direction—through a magnetic domain. That current can realign the magnetic field’s direction, creating or editing domain walls in the process.

It’s similar to racetrack memory, an experimental form of data storage that was first proposed over a decade ago. Racetrack memory also writes information into magnetic domains and uses current to shuttle those domains along a nanoscale wire, or “racetrack.” But the Intel-imec device takes advantage of advances in materials, allowing domain walls to move down the line far more quickly. This, the researchers say, is key for allowing logic.

Researchers so far have largely focused on optimizing those materials, according to Van Dai Nguyen, a researcher at imec. “But to build the full devices,” he says, “that’s been missing.”

The Intel-imec team is not alone. Earlier in 2020, researchers at ETH Zurich created a logic gate using domain-wall logic. Researchers at MIT also recently demonstrated a domain-wall-based artificial neuron. Like the Intel-imec researchers and like racetrack memory, these devices also use current to shift domains down the line.

But the Zurich and MIT devices rely on magnetic fields to write information. For logic, that’s not ideal. “If you build a logic circuit,” says Iuliana Radu, a researcher at imec, “you’re not going to put…a huge magnet that you change direction or switch on and off to implement the logic.” Full electrical control, Radu says, will also allow the Intel-imec device to be connected to CMOS circuits.

The researchers say their next steps will be to show their device in action. They’ve designed a majority gate, which returns a positive result if the majority of its inputs are positive. Radu, however, says that they have yet to really explore this design. Only then will the researchers know how their spintronic logic will fare against the CMOS establishment.
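
For reference, here is a minimal Python sketch of the logic a majority gate computes; the real gate does this with magnetic domains, of course, and the code is purely illustrative:

```python
# A majority gate outputs 1 when more than half of its inputs are 1.
def majority(*bits):
    return int(sum(bits) > len(bits) // 2)

assert majority(1, 1, 0) == 1
assert majority(1, 0, 0) == 0
# Pinning one input recovers the basic gates, which is what makes majority
# logic attractive: majority(a, b, 0) is AND, and majority(a, b, 1) is OR.
```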

Intel’s Stacked Nanosheet Transistors Could Be the Next Step in Moore’s Law

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/intels-stacked-nanosheet-transistors-could-be-the-next-step-in-moores-law

The logic circuits behind just about every digital device today rely on a pairing of two types of transistors—NMOS and PMOS. The same voltage signal that turns one of them on turns the other off. Putting them together means that electricity should flow only when a bit changes, greatly cutting down on power consumption. These pairs have sat beside each other for decades, but if circuits are to continue shrinking they’re going to have to get closer still. This week, at the IEEE International Electron Devices Meeting (IEDM), Intel showed a different way: stacking the pairs so that one is atop the other. The scheme effectively cut the footprint of a simple CMOS circuit in half, meaning a potential doubling of transistor density on future ICs.

Bright Silicon LED Brings Light And Chips Closer

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/devices/bright-silicon-led-brings-light-and-chips-closer

For decades, researchers have tried to fashion an effective silicon LED, one that could be fabricated together with its chip. At the device level, this quest matters because cheap, easily fabricated sources of infrared light would enable applications our mobile devices don’t yet have.

Silicon LEDs specialize in infrared light, making them useful for autofocusing cameras or measuring distances—abilities that most phones now have. But virtually no electronics use silicon LEDs, instead opting for more expensive materials that have to be manufactured separately.

However, prospects for the elusive, light-emitting, silicon-based diode may be looking up. MIT researchers, led by PhD student Jin Xue, have designed a functional CMOS chip with a silicon LED, manufactured by GlobalFoundries in Singapore. They presented their work at the recent IEEE International Electron Devices Meeting (IEDM).

The chief problem to date has been that, to put it bluntly, silicon isn’t a very good LED material.

An LED consists of an n-type region, rich in excited free electrons, joined to a p-type region containing positively charged “holes” for those electrons to fill. As electrons plop into those holes, they drop energy levels, releasing the difference in energy. Standard LED materials like gallium nitride or gallium arsenide are direct bandgap materials, whose electrons readily release that energy as light.

Silicon, on the other hand, is an indirect bandgap material. Its electrons tend to turn that energy into heat, rather than light. That makes silicon LEDs slower and less efficient than their counterparts. Silicon LED makers must find a way around that indirect bandgap.
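
The infrared emission mentioned earlier follows directly from silicon’s bandgap. A standard back-of-envelope conversion, not taken from the paper, gives:

```latex
% Photon wavelength set by silicon's bandgap, E_g \approx 1.12 eV:
\lambda = \frac{hc}{E_g} \approx \frac{1240\ \mathrm{eV \cdot nm}}{1.12\ \mathrm{eV}} \approx 1100\ \mathrm{nm}
% just beyond the visible range, in the near infrared.
```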

One way might be to alloy silicon with germanium. Earlier this year, in fact, a group at Eindhoven University of Technology in the Netherlands fashioned a silicon-based laser out of a nanowire-grown silicon-germanium alloy. Their tiny laser, they reported, might one day send data cheaply and efficiently from one chip to another.

That’s one perhaps elaborate approach to the problem. Another has been considered for more than 50 years—operating silicon LEDs in what’s called reverse-biased mode. Here, the voltage is applied in the direction opposite to the one that would normally allow current to flow. This changeup prevents electrons from filling their holes until the electric field reaches a critical intensity. Then, the electrons accelerate with enough zeal to knock other electrons loose, multiplying the current into an electrical avalanche. LEDs can harness that avalanche to create bright light, but they need voltages several times higher than the norm for microelectronics.

Since the turn of the millennium, other researchers have tinkered with forward-biased LEDs, in which electrons flow easily and without interruption. These LEDs can operate at 1 volt, much closer to the operating voltage of a transistor in a typical CMOS chip, but they’ve never been bright enough for consumer use.

The MIT-GlobalFoundries team followed the forward-biased path. The key to their advance is a new type of junction between the n-type and p-type regions. Previous silicon LEDs placed the two side-by-side, but the MIT-GlobalFoundries design stacks the two vertically. That shoves both the electrons and their holes away from the surfaces and edges. Doing that discourages the electrons from releasing energy as heat, channeling more of it into emitting light.

“We’re basically suppressing all the competing processes to make it feasible,” says Rajeev Ram, one of the MIT researchers. Ram says their design is ten times brighter than previous forward-biased silicon LEDs. That’s still not bright enough to be rolled out into smartphones quite yet, but Ram believes there are more advances to come.

Sonia Buckley, a researcher at the U.S. National Institute of Standards and Technology (NIST) who isn’t part of the MIT-GlobalFoundries research group, says these LEDs prioritize power over efficiency. “If you have some application that can tolerate low efficiencies and high power driving your light source,” she says, “then this is a lot easier and, likely, a lot cheaper to make” than present LEDs, which aren’t integrated with their chips.

That application, Ram thinks, is proximity sensing. Ram says the team is close to creating an all-silicon system that could tell a phone how far away its surroundings are. “I think that might be a relatively near-term application,” he says, “and it’s certainly driving the collaboration we have with GlobalFoundries.”

Scaled-Down Carbon Nanotube Transistors Inch Closer to Silicon Abilities

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/scaleddown-carbon-nanotube-transistors-inch-closer-to-silicon-abilities

Carbon nanotube devices are getting closer to silicon’s abilities thanks to a series of developments, the latest of which was revealed today at the IEEE International Electron Devices Meeting (IEDM). Engineers from Taiwan Semiconductor Manufacturing Company (TSMC), the University of California, San Diego, and Stanford University explained a new fabrication process that leads to better control of carbon nanotube transistors. Such control is crucial to ensuring that transistors, which act as switches in logic circuits, turn fully off when they are meant to. Interest in carbon nanotube transistors has accelerated recently, because they can potentially be shrunk down further than silicon transistors can and offer a way to produce stacked layers of circuitry much more easily than can be done in silicon.

The team invented a process for producing a better gate dielectric. That’s the layer of insulation between the gate electrode and the transistor channel region. In operation, voltage at the gate sets up an electric field in the channel region that cuts off the flow of current. As silicon transistors were scaled down over the decades, however, that layer of insulation, which was made of silicon dioxide, had to become thinner and thinner in order to control the current using less voltage, reducing energy consumption. Eventually, the insulation barrier was so thin that charge could actually tunnel through it, leaking current and wasting energy.

Adhesives can be effective to handle variations in thermal expansion

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/adhesives-can-be-effective-to-handle-variations-in-thermal-expansion

Forming reliable bonds between different materials can be challenging because there can be large variations in CTEs (coefficients of thermal expansion). Adhesive compounds play a critical role in the fabrication of assemblies for electronic, optical, and mechanical systems. Learn more about CTEs in this paper.

Download now to learn more.

Learn the fundamentals of brushless DC motors (BLDC motors)

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/learn-the-fundamentals-of-brushless-dc-motors-bldc-motors

Machines ranging from simple devices to complex equipment make use of brushless DC motors that convert electrical energy into rotational motion.

Basics of BLDC motors include:

• A comparison of brushed and brushless DC motors
• Simulating BLDC motors to observe the back-EMF profile
• Six-step commutation
• Motor inner workings and torque generation
• How three-phase inverters work

Advanced Wide Band Gap High-Power Semiconductor Measurement Techniques Webinar

Post Syndicated from Keysight original https://spectrum.ieee.org/semiconductors/devices/advanced-wide-band-gap-highpower-semiconductor-measurement-techniques-webinar

In this webcast, Keysight Technologies will explain new measurement techniques, technologies and equipment that can meet the tough characterization challenges presented by wide band gap semiconductor devices.

Meet the next generation of quantum analyzers at this Zurich Instruments launch event

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/meet-the-next-generation-of-quantum-analyzers-at-this-zurich-instruments-launch-event

Would you like to improve the readout of your superconducting qubits, increase the fidelity of your quantum algorithm, or scale up your qubit system size? These are the goals that motivated Zurich Instruments to bring to the market the SHFQA Quantum Analyzer. This virtual launch event will provide a technical overview of the instrument’s capabilities, including the strengths of the SHFQA’s integrated and mixer-calibration-free frequency conversion scheme. Practical instrument demonstrations will then show you how to measure a resonator at 8 GHz with two microwave cables only, and how to perform the readout of 16 qubits in parallel.

Register Now

Memristor Breakthrough: First Single Device To Act Like a Neuron

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/memristor-first-single-device-to-act-like-a-neuron

One thing that’s kept engineers from copying the brain’s power efficiency and quirky computational skill is the lack of an electronic device that can, all on its own, act like a neuron. It would take a special kind of device to do that, one whose behavior is more complex than any yet created.

Suhas Kumar of Hewlett Packard Laboratories, R. Stanley Williams now at Texas A&M, and the late Stanford student Ziwen Wang have invented a device that meets those requirements. On its own, using a simple DC voltage as the input, the device outputs not just simple spikes, as some other devices can manage, but the whole array of neural activity—bursts of spikes, self-sustained oscillations, and other stuff that goes on in your brain. They described the device last week in Nature.

It combines resistance, capacitance, and what’s called a Mott memristor all in the same device. Memristors are devices that hold a memory, in the form of resistance, of the current that has flowed through them. Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance. Materials in a Mott transition go between insulating and conducting according to their temperature. It’s a property seen since the 1960s, but only recently explored in nanoscale devices.

The transition happens in a nanoscale sliver of niobium oxide in the memristor. Here when a DC voltage is applied, the NbO2 heats up slightly, causing it to transition from insulating to conducting. Once that switch happens, the charge built up in the capacitance pours through. Then the device cools just enough to trigger the transition back to insulating. The result is a spike of current that resembles a neuron’s action potential.
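
The cycle amounts to a relaxation oscillator. Below is a deliberately crude threshold-switching sketch in Python; every parameter value is an invented placeholder rather than a figure from the Nature paper, but it reproduces the qualitative spiking described above:

```python
# Crude relaxation-oscillator sketch of a threshold-switching (Mott) device
# in parallel with a capacitor, fed from a DC source through a resistor.
V_SRC = 1.0                # DC input voltage (V)
R_SER = 1e4                # series resistance (ohm)
C = 1e-12                  # parallel capacitance (F)
R_INS, R_MET = 1e6, 1e2    # device resistance: insulating vs. metallic (ohm)
V_ON, V_OFF = 0.5, 0.2     # switching thresholds for the Mott transition (V)

dt, steps = 1e-11, 20000   # 10-ps time step, 200 ns simulated
v, metallic, trace = 0.0, False, []
for _ in range(steps):
    r_dev = R_MET if metallic else R_INS
    # KCL at the device node: C dv/dt = (V_SRC - v)/R_SER - v/r_dev
    v += ((V_SRC - v) / R_SER - v / r_dev) / C * dt
    if not metallic and v > V_ON:
        metallic = True    # heating flips the device from insulating to metallic
    elif metallic and v < V_OFF:
        metallic = False   # the device cools, switches back, and the cycle repeats
    trace.append(v)
# `trace` now holds a train of voltage spikes reminiscent of action potentials
```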

 “We’ve been working for five years to get that,” Williams says. “There’s a lot going on in that one little piece of nanoscale material structure.”

According to Kumar, memristor inventor Leon Chua predicted that if you mapped out the possible device parameters there would be regions of chaotic behavior in between regions where behavior is stable. At the edge of some of these chaotic regions, devices can exist that do what the new artificial neuron does.

Williams credits Kumar with doggedly fine-tuning the device’s material and physical parameters to find a combination that works. “You cannot find this by accident,” he says. “Everything has to be perfect before you see this characteristic, but once you’re able to make this thing, it’s actually very robust and reproducible.”

They tested the device first by building spiking versions of Boolean logic gates—NAND and NOR—and then by building a small analog optimization circuit.

There’s a lot of work ahead to turn these into practical devices and scale them up to useful systems that could challenge today’s machines. For example, Kumar and Williams plan to explore other possible materials that experience Mott transitions at different temperatures. NbO2’s transition happens at a worrying 800 C. That temperature occurs only in a nanometers-thin layer, but scaled up to millions of devices, it could be a problem.

Others have researched vanadium oxide, which transitions at a more pleasant 60 C. But that might be too low, says Williams, given that systems in data centers often operate at 100 C.

There may even be materials that can use other types of transitions to achieve the same result. “Finding the Goldilocks material is a very interesting issue,” says Williams.

Breakthrough Could Lead to Amplifiers for 6G Signals

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/devices/breakthrough-could-lead-to-amplifiers-for-6g-signals

With 5G just rolling out and destined to take years to mature, it might seem odd to worry about 6G. But some engineers say that this is the perfect time to worry about it. One group, based at the University of California, Santa Barbara, has been developing a device that could be critical to efficiently pushing 6G’s terahertz-frequency signals out of the antennas of future smartphones and other connected devices. They reported key aspects of the device—including an “n-polar” gallium nitride high-electron mobility transistor—in two papers that recently appeared in IEEE Electron Device Letters.

Testing so far has focused on 94 gigahertz frequencies, which are at the edge of terahertz. “We have just broken through records of millimeter-wave operation by factors which are just stunning,” says Umesh K. Mishra, an IEEE Fellow who heads the UCSB group that published the papers. “If you’re in the device field, if you improve things by 20 percent people are happy. Here, we have improved things by 200 to 300 percent.”

The key power amplifier technology is called a high-electron-mobility transistor (HEMT). It is formed around a junction between two materials having different bandgaps: in this case, gallium nitride and aluminum gallium nitride. At this “heterojunction,” gallium nitride’s natural polarity causes a sheet of excess charge called a two-dimensional electron gas to collect. The presence of this charge gives the device the ability to operate at high frequencies, because the electrons are free to move quickly through it without obstruction.

Gallium nitride HEMTs are already making their mark in amplifiers, and they are a contender for 5G power amplifiers. But to efficiently amplify terahertz frequencies, the typical GaN HEMT needs to scale down in a particular way. Just as with silicon logic transistors, bringing a HEMT’s gate closer to the channel through which current flows—the electron gas in this case—lets it control the flow of current using less energy, making the device more efficient. More specifically, explains Mishra, you want to maximize the ratio of the gate length to the distance from the gate to the electron gas. That’s usually done by reducing the amount of barrier material between the gate’s metal and the rest of the device. But you can only go so far with that strategy. Eventually the barrier will be too thin to prevent current from leaking through, thereby harming efficiency.

But Mishra says his group has come up with a better way: They stood the gallium nitride on its head.

Ordinary gallium nitride is what’s called gallium-polar. That is, if you look down at the surface, the top layer of the crystal will always be gallium. But the Santa Barbara team discovered a way to make nitrogen-polar crystals, so that the top layer is always nitrogen. It might seem like a small difference, but it means that the structure that makes the sheet of charge, the heterojunction, is now upside down.

This delivers a bunch of advantages. First, the source and drain electrodes now make contact with the electron gas via a lower-bandgap material (a nanometers-thin layer of GaN) rather than a higher-bandgap one (aluminum gallium nitride), lowering resistance. Second, the gas itself is better confined as the device approaches its lowest current state, because the AlGaN layer beneath acts as a barrier against scattered charge.

Devices made to take advantage of these two characteristics have already yielded record-breaking results. At 94 GHz, one device produced 8.8 watts per millimeter at 27 percent efficiency. A similar gallium-polar device produced only about 2 W/mm at that efficiency.

But the new geometry also allows for further improvements by positioning the gate even closer to the electron gas, giving it better control. For this to work, however, the gate has to act as a low-leakage Schottky diode. Unlike ordinary p-n junction diodes, which are formed by the junction of regions of semiconductor chemically doped to have different excess charges, Schottky diodes are formed by the junction of a metal and a semiconductor. The Schottky diode Mishra’s team cooked up—ruthenium deposited one atomic layer at a time on top of N-polar GaN—provides a high barrier against current sneaking through it. And, unlike in other attempts at the gate diode, this one doesn’t lose current through random pathways that shouldn’t exist in theory but do in real life.

“Schottky diodes are typically extremely difficult to get on GaN without them being leaky,” says Mishra. “We showed that this material combination… gave us the nearly ideal Schottky diode characteristics.”

The UC Santa Barbara team hasn’t yet published the results from a HEMT made with this new diode as the gate, says Mishra. But the data so far is promising. And they plan to eventually test the new devices at even higher frequencies than before—140 GHz and 230 GHz—both firmly in the terahertz range.

A Better Way to Measure Progress in Semiconductors

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/semiconductors/devices/a-better-way-to-measure-progress-in-semiconductors

One of the most famous maxims in technology is, of course, Moore’s Law. For more than 55 years, the “Law” has described and predicted the shrinkage of transistors, as denoted by a set of roughly biannual waypoints called technology nodes. Like some physics-based doomsday clock, the node numbers have ticked down relentlessly over the decades as engineers managed to regularly double the number of transistors they could fit into the same patch of silicon.

When Gordon Moore first pointed out the trend that carries his name, there was no such thing as a node, and only about 50 transistors could economically be integrated on an IC.

But after decades of intense effort and hundreds of billions of dollars in investment, look how far we’ve come! If you’re fortunate enough to be reading this article on a high-end smartphone, the processor inside it was made using technology at what’s called the 7-nanometer node. That means that there are about 100 million transistors within a square millimeter of silicon. Processors fabricated at the 5-nm node are in production now, and industry leaders expect to be working on what might be called the 1-nm node inside of a decade.

And then what?

After all, 1 nm is scarcely the width of five silicon atoms. So you’d be excused for thinking that soon there will be no more Moore’s Law, that there will be no further jumps in processing power from semiconductor manufacturing advances, and that solid-state device engineering is a dead-end career path.

You’d be wrong, though. The picture the semiconductor technology node system paints is false. Most of the critical features of a 7-nm transistor are actually considerably larger than 7 nm, and that disconnect between nomenclature and physical reality has been the case for about two decades. That’s no secret, of course, but it does have some really unfortunate consequences.

One is that the continuing focus on “nodes” obscures the fact that there are actually achievable ways semiconductor technology will continue to drive computing forward even after there is no more squeezing to be accomplished with CMOS transistor geometry. Another is that the continuing node-centric view of semiconductor progress fails to point the way forward in the industry-galvanizing way that it used to. And, finally, it just rankles that so much stock is put into a number that is so fundamentally meaningless.

Efforts to find a better way to mark the industry’s milestones are beginning to produce clearly better alternatives. But will experts in a notoriously competitive industry unite behind one of them? Let’s hope they do, so we can once again have an effective way of measuring advancement in one of the world’s largest, most important, and most dynamic industries.

So, how did we get to a place where the progress of arguably the most important technology of the past hundred years appears, falsely, to have a natural endpoint? Since 1971, the year the Intel 4004 microprocessor was released, the linear dimensions of a MOS transistor have shrunk down by a factor of roughly 1,000, and the number of transistors on a single chip has increased about 15-million-fold. The metrics used to gauge this phenomenal progress in integration density were primarily dimensions called the metal half-pitch and gate length. Conveniently, for a long time, they were just about the same number.

Metal half-pitch is half the distance from the start of one metal interconnect to the start of the next on a chip. In the two-dimensional or “planar” transistor design that dominated until this decade, gate length measured the space between the transistor’s source and drain electrodes. In that space sat the device’s gate stack, which controlled the flow of electrons between the source and drain. Historically, it was the most important dimension for determining transistor performance, because a shorter gate length suggested a faster-switching device.

In the era when gate length and metal half-pitch were roughly equivalent, they came to represent the defining features of chip-manufacturing technology, becoming the node number. These features on the chip were typically made 30 percent smaller with each generation. Such a reduction enables a doubling of transistor density, because reducing both the x and y dimensions of a rectangle by 30 percent means a halving in area.
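
The arithmetic behind that statement, made explicit:

```latex
% A 30 percent linear shrink scales each dimension by 0.7, so the area of
% any feature rectangle scales by 0.7^2:
A_{\mathrm{new}} = (0.7x)(0.7y) = 0.49\,xy \approx \tfrac{1}{2} A_{\mathrm{old}}
% which is why a 0.7x node-to-node shrink doubles transistor density.
```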

Using the gate length and half-pitch as the node number served its purpose all through the 1970s and ’80s, but in the mid-1990s, the two features began to uncouple. Seeking to continue historic gains in speed and device efficiency, chipmakers shrank the gate length more aggressively than other features of the device. For example, transistors made using the so-called 130-nm node actually had 70-nm gates. The result was the continuation of the Moore’s Law density-doubling pathway, but with a disproportionately shrinking gate length. Yet industry, for the most part, stuck to the cadence of the old node-naming convention.

Developments in the early 2000s drove things further apart, as processors ran up against the limitations of how much power they could dissipate. Engineers found ways to keep devices improving. For example, putting part of the transistor’s silicon under strain allows charge carriers to zip through faster at lower voltages, increasing the speed and power efficiency of CMOS devices without making the gate length much smaller.

Things got even stranger as current-leakage problems necessitated structural changes to the CMOS transistor. In 2011, when Intel switched to FinFETs at the 22-nm node, the devices had 26-nm gate lengths, a 40-nm half-pitch, and 8-nm-wide fins.

The industry’s node number “had by then absolutely no meaning, because it had nothing to do with any dimension that you can find on the die that related to what you’re really doing,” says Paolo Gargini, an IEEE Life Fellow and Intel veteran who is leading one of the new metric efforts.

There’s broad, though not universal, agreement that the semiconductor industry needs something better. One solution is simply to realign the nomenclature with the sizes of actual features important to the transistor. That doesn’t mean going back to the gate length, which is no longer the most important feature. Instead, the suggestion is to use two measures that denote a real limit on the area needed to make a logic transistor. One is called the contacted gate pitch. This phrase refers to the minimum distance from one transistor’s gate to another’s. The other vital metric, metal pitch, measures the minimum distance between two horizontal interconnects. (There is no longer any reason to divide metal pitch in half, because gate length is now less relevant.)

These two values are the “least common denominator” in creating logic in a new process node, explains Brian Cline, a principal research engineer at Arm. The product of these two values is a good estimate of the minimum possible area for a transistor. Every other design step—forming logic or SRAM cells, blocks of circuits—adds to that minimum. “A good logic process with well-thought-out physical design characteristics will enable the least degradation” from that value, he says.

Gargini, who is chairman of the IEEE International Roadmap for Devices and Systems (IRDS), proposed in April that the industry “return to reality” by adopting a three-number metric that combines contacted gate pitch (G), metal pitch (M), and, crucially for future chips, the number of layers, or tiers, of devices on the chip (T). (IRDS is the successor to the International Technology Roadmap for Semiconductors, or ITRS, a now-defunct, decades-long, industry-wide effort that forecast aspects of future nodes, so that the industry and its suppliers had a unified goal.)

“These three parameters are all you need to know to assess transistor density,” says Gargini, who also led ITRS.

The IRDS road map shows that the coming 5-nm chips have a contacted gate pitch of 48 nm, a metal pitch of 36 nm, and a single tier—making the metric G48M36T1. It doesn’t exactly roll off the tongue, but it does convey much more useful information than “5-nm node.”
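
To see what such a label implies, here is a rough density bound computed from those numbers. Assuming one transistor per gate-pitch-by-metal-pitch rectangle is an optimistic simplification introduced here for illustration; real logic cells span several such rectangles:

```python
# Upper-bound transistor density implied by G48M36T1.
gate_pitch_nm, metal_pitch_nm, tiers = 48, 36, 1
min_area_nm2 = gate_pitch_nm * metal_pitch_nm        # 1,728 nm^2 per device
density_per_mm2 = tiers * 1e12 / min_area_nm2        # 1 mm^2 = 1e12 nm^2
print(f"{density_per_mm2/1e6:.0f} million transistors/mm^2 upper bound")
# ~579 million/mm^2; shipping 5-nm logic is reported closer to ~170 million/mm^2,
# the gap being interconnect, cell overhead, and design margin.
```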

As with the node nomenclature, the gate pitch and metal pitch values of this GMT metric will continue to diminish throughout the decade. However, they will do so more and more slowly, reaching an endpoint about 10 years from now, at current rates of progress. By that time, metal pitch will be nearing the limits of what extreme-ultraviolet lithography can resolve. And while the previous generation of lithography machines managed to cost-effectively push well past the perceived limits of their 193-nm wavelengths, nobody expects the same thing will happen with extreme ultraviolet.

“Around 2029, we reach the limit of what we can do with lithography,” says Gargini. After that, “the way forward is to stack…. That’s the only way to increase density that we have.”

That’s when the number of tiers (T) term will start to become important. Today’s advanced silicon CMOS is a single layer of transistors linked together into circuits by more than a dozen layers of metal interconnects. But if you could build two layers of transistors, you might nearly double the density of devices at a stroke.

For silicon CMOS, that’s still in the lab for now, but it shouldn’t be for long. For more than a decade, industrial researchers have been exploring ways to produce “monolithic 3D ICs,” chips where layers of transistors are built atop one another. It hasn’t been easy, because silicon-processing temperatures are usually so high that building one layer can damage another. Nevertheless, several industrial research efforts (notably at Belgian nanotech research firm Imec, France’s CEA-Leti, and Intel) are developing technology that would build the two types of transistors in CMOS logic—NMOS and PMOS—one on top of the other.

Upcoming nonsilicon technology could go 3D even sooner. For example, MIT professor Max Shulaker and his colleagues have been involved in the development of 3D chips that rely on tiers of carbon-nanotube transistors. Because you can process these devices at relatively low temperatures, you can build them up in multiple tiers much more easily than you can with silicon devices.

Others are working on logic or memory devices that can be built within the layers of metal interconnect above the silicon. These include micromechanical relays and transistors made from atom-thin semiconductors such as tungsten disulfide.

About a year ago, a prominent group of academics got together on the campus of the University of California, Berkeley, to come up with their own metric.

The informal group included some of the biggest names in semiconductor research. In attendance at that June 2019 meeting were all three of the Berkeley engineers credited with the FinFET: Chenming Hu, Tsu-Jae King Liu, and Jeffrey Bokor. Bokor is chair of electrical engineering at the university. Hu is a former chief technology officer at Taiwan Semiconductor Manufacturing Co. (TSMC), which is the world’s largest semiconductor foundry, and he was awarded the IEEE Medal of Honor this year. Liu is dean of the college of engineering and sits on the board of directors at Intel. Also present from Berkeley was Sayeef Salahuddin, a pioneer in the development of ferroelectric devices.

From Stanford University, there was H.-S. Philip Wong, a professor and corporate research vice president at TSMC; Subhasish Mitra, who invented a key self-test technology and codeveloped the first carbon-nanotube-based computer with Wong; and James D. Plummer, a former board member at Intel and the longest-serving dean of engineering at Stanford. TSMC researcher Kerem Akarvardar and MIT’s Dimitri Antoniadis joined later.

They all had the sense that their field was becoming less attractive to top students, particularly U.S. students, says Liu. The logic behind that conviction seemed straightforward: If you saw a field where advances were unlikely just 10 years from now, why would you spend four to six years training for it? This perceived loss of appeal comes just when “we actually need more and more innovative solutions to continue to advance computing technology,” she says.

This mix of experts sought a metric that would erase the node’s doomsday-clock vibe. Crucially, this metric should have no natural endpoint, they decided. In other words, numbers should go up with progress rather than down. It also had to be simple, accurate, and relevant to the main purpose of improving semiconductor technology—more capable computing systems.

To that end they wanted something that did more than describe just the technology used to make the processor, as the IRDS’s GMT metric does. They wanted a metric that took into account not just the processor but also other key performance-impacting aspects of the entire computer system. That may seem overly ambitious, and perhaps it is, but it jibes with the direction computing is beginning to go.

Crack open the package of an Intel Stratix 10 field-programmable gate array, and you’ll find much more than an FPGA processor. Inside the package, the processor die is surrounded by a range of “chiplets,” including, notably, two high-bandwidth DRAM chips. A small sliver of silicon etched with a dense array of interconnects links the processor to the memory.

At its most basic, a computer is just that: logic, memory, and the connections between them. So to come up with their new metric, Wong and his colleagues chose as parameters the density of each of those components, calling them DL, DM, and DC. Combining the subscripts, they dubbed their idea the LMC metric.

Together, improvements in DL, DM, and DC are prime contributions to the overall speed and energy efficiency of computing systems, especially in today’s age of data-centric computing, according to the originators of the LMC metric. They have plotted historical data showing a correlation between the growth in logic, memory, and connectivity that suggests a balanced increase of DL, DM, and DC has been going on for decades. This balance is implicit in computer architectures, they argue—and, strikingly, it holds true for computing systems of various degrees of complexity, from mobile and desktop processors all the way up to the world’s fastest supercomputers. This balanced growth suggests that similar improvements will be needed in the future, says Wong.

In the LMC metric, DL is the density of logic transistors, in number of devices per square millimeter. DM is the density of a system’s main memory in memory cells per square millimeter. And DC is the connections between logic and main memory, in interconnects per square millimeter. If there are multiple tiers of devices or a 3D stack of chips, the entire volume above that square millimeter counts.

DL is perhaps the most historically familiar of the three, as people have been counting the number of transistors on a chip since the first ICs. While it sounds simple, it’s not. Different types of circuits on a processor vary in density, largely because of the interconnects that link the devices. The most dense part of a logic chip is typically the SRAM memory that makes up the processor’s caches, where data is stored for fast, repeated access. These caches are large arrays of six-transistor cells that can be packed closely together, in part because of their regularity. By that measure the highest value reported for DL so far is a 135-megabit SRAM array made using TSMC’s 5-nm process, which packs in the equivalent of 286 million transistors per square millimeter. In the proposed nomenclature, that’d be written 286M.
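
That figure squares with the article’s own numbers, as a quick back-of-envelope check shows (the array area is inferred here, not reported):

```python
# Back-of-envelope check of the 286M/mm^2 figure for a 135-megabit SRAM array.
bits = 135e6                  # 135-megabit array
transistors = bits * 6        # six-transistor (6T) SRAM cells
density_per_mm2 = 286e6       # reported equivalent transistor density
print(f"implied array area: {transistors / density_per_mm2:.2f} mm^2")  # ~2.83 mm^2
```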

But blocks of logic are more complex, less uniform, and less dense than the SRAM that’s embedded in them. So judging a technology on SRAM alone might not be fair. In 2017, then Intel senior fellow Mark Bohr advocated a formula that uses weighted densities of some common logic cells. The formula looks at the transistor count per unit area for a simple and ubiquitous two-input, four-transistor NAND gate and for a common but more complex circuit called a scan flip-flop. It weights each according to the proportion of such small gate and large cells in a typical design to produce a single transistors-per-square-millimeter result. Bohr said at the time that SRAM is so different in its density that it should be measured separately.
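
For reference, here is Bohr’s formula as he presented it in 2017, with its 60/40 weighting:

```latex
% Intel's weighted transistor-density metric (Mark Bohr, 2017):
\mathrm{density} = 0.6 \times \frac{N_{\mathrm{NAND2}}}{A_{\mathrm{NAND2}}}
                 + 0.4 \times \frac{N_{\mathrm{SFF}}}{A_{\mathrm{SFF}}}
% N = transistor count and A = cell area, for a two-input NAND gate
% and a scan flip-flop (SFF); SRAM is reported separately.
```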

Internally, AMD uses something similar, according to AMD senior fellow Kevin Gillespie. If a metric doesn’t take into account how devices are connected, it won’t be accurate, he says.

Another possibility, separately suggested by several experts, would be to measure the average density across some agreed-upon, large block of semiconductor intellectual property, such as one of the widely available processor designs by Arm.

Indeed, Arm abandoned its attempts at a single metric in favor of extracting the density of functional blocks of circuitry from complete processor designs, according to Arm’s Cline. “I don’t think there is a one-size-fits-all logic density metric for all hardware applications” because the diversity of different types of chips and systems is too great, he says. Different types of processors—CPUs, GPUs, neural network processors, digital signal processors—have different ratios of logic and SRAM, he points out.

In the end, the LMC originators chose not to specify a particular way of measuring DL, leaving it for debate in the industry.

Measuring DM is a bit more straightforward. Right now, main memory generally means DRAM, because it is inexpensive, has a high endurance, and is relatively fast to read and write from.

A DRAM cell consists of a single transistor that controls access to a capacitor that stores the bit as charge. Because the charge leaks out over time, the cells must periodically be refreshed. These days the capacitor is built in the interconnect layers above the silicon, so density is influenced not just by the size of the transistor, but by the geometry of the interconnects. The highest DM value the LMC group could find in the published literature came from Samsung. In 2018, the company detailed DRAM technology with a density of 200 million cells per square millimeter (200M).

DRAM may not always hold its position as main memory. Alternative memory technologies such as magnetoresistive RAM, ferroelectric RAM, resistive RAM, and phase-change RAM are in commercial production today, some as memory embedded on the processor itself, and some as stand-alone chips.

Providing adequate connectivity between main memory and logic is already a major bottleneck in today’s computational systems. Interconnects between processor and memory, what DC measures, have historically been created by package-level technology rather than chipmaking technology. Compared with logic density and memory density, DC has improved much less steadily over the decades. Instead there have been discrete jumps as new packaging technologies are introduced and then refined. The last decade has been particularly eventful, as single-die systems-on-chip (SoCs) have begun to give way to collections of chiplets bound tightly together on silicon interposers (so-called 2.5-D systems) or stacked in 3D arrangements. A system using TSMC’s System on Integrated Chips 3D chip-stacking technology had the highest published DC at 12,000 interconnects per square millimeter (12K).

However, DC need not necessarily connect logic to a separate memory chip. For certain systems, main memory is entirely embedded. For example, Cerebras Systems’ machine-learning megachip relies entirely on SRAM embedded in proximity with its logic cores on a single massive slab of silicon.

The LMC originators suggest that a system combining the best of all three parameters—DL, DM, and DC—would be described as [286M, 200M, 12K].

The time is long gone when a single number could describe how advanced a semiconductor node is, argues Intel CTO Michael Mayberry. However, he does like the idea of having a comprehensive system-level metric, in principle. “Picking something that is agreed upon, even if imperfect, is more useful than the current node branding,” he says.

He’d like to see the LMC expanded with an additional level of detail to specify what’s being measured and how. For example, regarding the DM value, Mayberry says that it might need to specifically relate to memory that is within the same chip package as the processor it serves. And what classifies as “main memory” may need fine-tuning as well, he adds. In the future, there may be multiple layers of memory between the processor and data-storage devices. Intel and Micron, for example, make 3D XPoint memory, a type of nonvolatile system that occupies a niche between DRAM and storage.

A further criticism is that both a density-based metric like LMC and a lithography-based one like GMT are a step away from what customers of foundries and memory chipmakers want. “There’s area [density], but there’s also performance, power, and cost,” says AMD’s Gillespie. Each chip design makes trade-offs around those four axes, to the point that “there is no single number that can ever capture how good a node is,” adds Mayberry.

“The most important metric for memory and storage is still cost per bit,” says Gurtej Singh Sandhu, senior fellow and vice president at the world’s No. 3 DRAM maker, Micron Technology. “Several other factors, including various performance metrics based on specific market applications, are also closely considered.”

There’s also a faction that argues that a new metric isn’t even needed at this point. Such measures are “useful really only in applications dominated by scaling,” says Gregg Bartlett, senior vice president for engineering and quality at GlobalFoundries, which ended its pursuit of a 7-nm process in 2018. “There are only a few companies manufacturing in this space and a limited number of customers and applications, so it is less relevant to the vast majority of the semiconductor industry.” Only Intel, Samsung, and TSMC are left pursuing the last few CMOS logic nodes, but they are hardly bit players, generating a big fraction of global semiconductor manufacturing revenue.

Bartlett, whose company is not in that group, sees the integration of CMOS logic with specialized technologies, such as embedded nonvolatile memory and millimeter-wave radio, as more crucial to the future of the industry than scaling.

But there’s no doubt that continued scaling is important for many semiconductor consumers. And the originators of the LMC metric and of the GMT metric both feel a sense of urgency, though for different reasons. For Wong and the LMC supporters, the industry needs to make clear its long-term future in an era when transistor scaling is less important, so that it can recruit the technical talent to make that future happen.

For Gargini and the GMT backers, it’s about keeping the industry on track. In his view, without the synchronization of a metric, the industry becomes less efficient. “It increases the probability of failure,” he says. “We have 10 years” until silicon CMOS stops shrinking entirely. “That’s barely sufficient” to produce the needed breakthroughs that will keep computing going.

This article appears in the August 2020 print issue as “The Node is Nonsense.”

How RF MEMS Tech Finally Delivered the “Ideal Switch”

Post Syndicated from Glenn Zorpette original https://spectrum.ieee.org/semiconductors/devices/how-rf-mems-tech-finally-delivered-the-ideal-switch

Twenty years ago, engineers specializing in radio-frequency circuits dared to dream of an “ideal switch.” It would have superlow resistance when “on,” superhigh when “off,” and so much more. It would be tiny, fast, readily manufacturable, capable of switching fairly high currents, able to withstand billions of on-off cycles, and would require very little power to operate. It would conduct signals well up in the tens or even hundreds of gigahertz with no distortion at all (close-to-perfect linearity).

It was no pipe dream, and there were ready markets for such a switch in big, budding industries. Sustained by breakthroughs in microelectromechanical systems, or MEMS, major projects sprang up all over the United States and Europe. Many were funded by either the Defense Advanced Research Projects Agency or a European Union Network of Excellence. And now, after one of the longer and more turbulent tech-development efforts of the 21st century, RF MEMS switches finally appear headed for commercial success.

In the United States, two companies in particular appear to be gaining a solid foothold in fast-moving and competitive markets. Menlo Micro, in Irvine, Calif., a spinoff from GE Ventures, has more than 30 customers in aerospace, military, and wireless-infrastructure industries, according to Chris Giovaniello, a cofounder and senior vice president of the company. Meanwhile Cavendish Kinetics, a Silicon Valley company formed to develop RF MEMS switches for use in smartphones, was acquired last October by Qorvo, a leading maker of wireless devices and systems, including many used in smartphones. Terms of the deal were not released, but up to that point, Cavendish had raised more than US $58 million in at least three rounds. (Semiconductor giant Analog Devices also sells an RF MEMS switch, aimed at applications in testing and instrumentation.)

The successes come after a long, tumultuous period of hopes intermittently raised and dashed. “After many, many, many companies tried, we have these two that succeeded, and good for them. And maybe 20 companies failed, and c’est la vie,” says Gabriel Rebeiz, a pioneer in RF MEMS R&D and a professor of electrical engineering at the University of California, San Diego. “One out of ten succeeding is normal for startups in technology.”

The new devices combine some of the best features of electromechanical relay switches—superlow resistance and leakage currents, and very high linearity—with some pluses of semiconductor switches: small size, very high reliability, and ruggedness. Conceptually, they’re similar to the relay switches. In operation, an electrostatic force pulls a conductive beam, called an actuator or a cantilever, toward an electrical contact. Unlike a relay, whose actuation is triggered by an electromagnet, the RF MEMS switches use a simple DC voltage in the range of 50 to 100 volts to produce a static electric field that pulls the beam to the contact. (The relatively high voltage comes from a DC-to-DC converter fed by the 3- to 5-V circuit voltage.) Because the field is static, the current and therefore power consumption are extremely low.
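
The pull in question is the textbook parallel-plate electrostatic force, shown here as a generic approximation rather than any company’s device model:

```latex
% Parallel-plate approximation of the electrostatic pull on the beam:
F = \frac{\varepsilon_0 A V^2}{2 g^2}
% A: electrode area, g: beam-to-electrode gap, V: the 50-100 V DC bias.
% A static field draws essentially no steady-state current, which is why
% the hold power is so low.
```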

One of the most difficult technical challenges, says Giovaniello, was finding an electrically conductive alloy that could withstand billions of bending-unbending cycles. “The real issue was the actuator,” he says. “That’s really where GE put the bulk of its effort, coming up with alloys. We’ve developed some proprietary alloys that are highly conductive, that make them really good for relays. But they’re extremely strong mechanically, almost like polysilicons.

“GE had, for decades, done a lot of work in alloys for jet engines and it was really some of those people that helped us to solve some of these fundamental reliability issues,” he added. Menlo has not revealed the composition of its alloys, but a research paper written by GE and Menlo engineers about five years ago indicates that they were then working with separate alloys of nickel and of gold.

Giovaniello says that some of Menlo’s customers are using the devices, which measure about 50 by 50 micrometers, in wireless base stations, military radios, or phased-array radars. Increasingly, advanced radios and wireless systems are handling many different frequency bands, each selected by one or more different filters. “We’ve got customers that have 20 filters in their radio,” he says. “When you have to select between many filters with a traditional switch, you can have significant losses going through all the switches to select all the different filters.” The power losses can amount to 3 to 4 decibels, he explains, noting that a 3-dB reduction translates to a whopping 50 percent loss.
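
The decibel-to-percentage conversion behind that remark:

```latex
% Fraction of signal power lost at an insertion loss of L decibels:
\mathrm{loss} = 1 - 10^{-L/10}
% L = 3: 1 - 10^{-0.3} \approx 0.50, the "whopping 50 percent"
% L = 4: 1 - 10^{-0.4} \approx 0.60
```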

Qorvo, meanwhile, is targeting applications inside smartphones (as is Menlo Micro). It’s a gigantic market—around a billion and a half smartphones are sold each year—but it’s also one that makes extremely stringent demands on components, says Julio Costa, senior director of technology development at Qorvo. “Your phone is such a pivotal part of your day-to-day life that if [a new component] is not reliable beyond any doubt, it won’t be incorporated.”

There are a couple of obvious possible uses for the devices in smartphones, Costa explains. The first one Qorvo is pursuing is antenna tuning. A modern smartphone has up to eight antennas to accommodate a number of frequency bands that is growing as wireless transitions from 4G to 5G. To better match the antennas to frequency, switches embedded along an antenna can change its configuration and also switch in resonant devices, like capacitors or inductors, to fine-tune the antenna’s response. For this application, handset makers now use semiconductor switches, based on silicon-on-insulator (SOI) technology. But the higher frequencies and linearities possible with the MEMS devices make them an attractive alternative, particularly for some 5G bands, Costa says. He expects to see handsets incorporating the switches “in a couple of years.”

In Europe, too, commercial offerings are following lengthy R&D projects. A startup called AirMems is marketing RF MEMS switches based on work at the University of Limoges, in France. And in Germany, the research institute IHP (Innovations for High Performance Microelectronics) developed a process that integrates RF MEMS switches directly into a bipolar CMOS chip.

In a far-ranging interview, Giovaniello said wireless, radar, and instrumentation applications are only the beginning. Declaring that Menlo Micro has already managed to conduct 20 amperes through one of the tiny switches, he envisions a future role for the devices as a kind of “resettable, electronically controllable fuse.”

“Every few generations, there’s a new technology that comes along that gives you a different way to make switches,” he adds. “There was only mechanical back 100 years ago and then there were vacuum tubes, and then the transistor, and then the integrated circuit. They’re all just different ways of making switches if you think about it.

“And that’s how we like to describe this ideal-switch technology. It’s a combination of the best of the mechanical and the semiconductor worlds but in the end it’s a new process technology for making switches that will allow us and our partners to make hundreds or even thousands of different products over the next decade.”

This article appears in the August 2020 print issue as “RF MEMS Deliver the ‘Ideal Switch.’”

Why Sweat Will Power Your Next Wearable

Post Syndicated from Patrick Mercier original https://spectrum.ieee.org/semiconductors/devices/why-sweat-will-power-your-next-wearable

Admit it—you love your smart watch and all the amazing things you can do with it. But the task of keeping it charged can be annoying. A longer battery life would be great—but with batteries, the amount of energy stored correlates with volume, and bigger batteries add bulk and weight. All wearables, including fitness trackers, audio-enhancing hearables that go in your ear, and augmented-reality contact lenses, have a similar power problem. And today’s batteries are too bulky and stiff for use in wearables that are woven into textiles or directly mounted on a user’s skin.

The power demands for these kinds of devices range from 1 milliwatt for a basic step counter to tens of milliwatts for more advanced smart watches. When using small, centimeter-size batteries, which have capacities on the order of 10 to 300 milliampere hours, this results in battery lifetimes of only a few days at most.
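
A representative calculation, with capacity and load picked from the ranges quoted above (the 3.7-volt cell voltage is an assumption):

```python
# Illustrative wearable battery-life arithmetic.
capacity_mah = 100        # small centimeter-scale cell (10-300 mAh range)
voltage_v = 3.7           # nominal Li-ion cell voltage (assumed)
load_mw = 10              # advanced smart-watch-class load (tens of mW range)
hours = capacity_mah * voltage_v / load_mw   # mAh * V = mWh; mWh / mW = h
print(f"{hours:.0f} hours, about {hours/24:.1f} days")   # ~37 h, ~1.5 days
```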

Some researchers are tackling the wearable power challenge by developing new types of stretchable batteries and supercapacitors. However, it’s hard to produce such batteries using screen printing, a process that dramatically lowers costs. Other developers are trying to bypass batteries altogether by using Near Field Communication chipsets for wireless power transmission. But NFC technology requires you to have an external power source, like a mobile phone, within a few centimeters of the wearable; once you move the phone away, the wearable stops working.

At the Center for Wearable Sensors at the University of California, San Diego, we think there’s a better way to power the next generation of wearables: scavenging energy—especially biofuels—from the wearers themselves. Devices powered this way could be so small that you’ll forget they are there. We call them “unaware-ables.”

And the first practical biofuel will be your own sweat.

Harvesting energy from your body or your surroundings isn’t a new idea. Earlier approaches involved tapping into motion, light, and heat. Since the 1770s, for example, self-winding watches have used the natural motion of the body to scavenge power. In the first such devices, a weight inside the watch would wind the mechanism. Later versions had magnetic weights that passed through a coil, which generated electricity to charge a battery. Modern self-winding watches use piezoelectric materials, crystalline substances that release an electric charge when bent or squeezed.

Scavenging light is done with fingernail-size photovoltaic cells, which have been used to power calculators for decades. Most are rigid, but researchers in our center are developing solar cells that are flexible and even stretchable, to better conform to the human body.

Body heat is a third energy-scavenging approach. In most climates and settings, the temperature of the human body is well above ambient temperature. Tiny thermoelectric generators exploit this differential. The body-heat scavenger used on the PowerWatch, which is a fairly basic smart watch, fits on the skin-facing side of the device.

Unfortunately, few of these tried-and-true energy-harvesting methods have been able to supply sufficient power for a small, flexible, useful wearable. Motion harvesters are fine for watches, but there’s not enough motion to harvest in the areas where the newest wearables are worn, such as on the chest or in the ear. Thermoelectric generators can scavenge energy effectively only when their design includes a large heat sink, typically made from aluminum, to collect the body’s heat and transfer it to the device. And thermoelectric generators and solar cells don’t work well if the wearable is worn under clothing.

That’s why we’re so excited about sweat power. Certain chemicals found in human sweat can be used as fuel in wearable-size fuel cells. These biofuel cells could offer high power densities in a more practical, wearable form than is possible with any of the existing energy-scavenging approaches. Our team has already developed prototype wearables that generate power from sweat.

Here’s how a wearable turns sweat into energy: A fuel cell consists of two electrodes—an anode and a cathode—with an electrolyte between them. The fuel goes into the anode, where a catalyst separates its molecules into electrons and protons. The protons pass through a membrane to the cathode, while the electrons flow into a circuit. The Welsh scientist William Robert Grove first worked out the process in 1839; he used hydrogen as the fuel and oxygen as the oxidizer to generate water and electrical current.

Hydrogen is impractical for a wearable fuel cell because it’s highly flammable. Sweat, on the other hand, is easily acquired and abundant, particularly when a person is exercising or playing sports. And, given that athletes embraced all sorts of wearables early and widely, they represent an attractive early market for sweat-powered devices.

Sweat isn’t just water. It contains trace amounts of a wide variety of minerals and other substances like glucose and lactate. These substances, called metabolites, are by-products of the chemical processes that constantly go on inside living beings, and they make attractive biofuels. We are particularly interested in lactate, because its concentration in sweat rises with exertion. In our sweat biofuel cells, a layer of enzymes reacts with the lactate in sweat, splitting off electrons and protons to create an electrical current.
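For orientation, the half-reactions below are the standard textbook description of an enzymatic lactate fuel cell; the exact composition of the enzyme and catalyst layers in our cells isn’t spelled out in this article, so take this as a generic sketch:

```latex
\text{anode (enzyme-catalyzed): } \mathrm{C_3H_5O_3^-}\ \text{(lactate)} \rightarrow \mathrm{C_3H_3O_3^-}\ \text{(pyruvate)} + 2\,\mathrm{H^+} + 2\,e^- \\
\text{cathode: } \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{H_2O}
```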

We’re not the first researchers to think of using body fluids as fuel. Some of the original pacemaker and cochlear implants proposed in the 1970s were intended to use glucose biofuel cells for power. Given the abundance of biofuels inside the body, using them for implantable devices was a logical choice. The main drawback was that the enzymes used to catalyze the fuel cell reaction would degrade, and the electrode would stop functioning within a few days. The only way to restore the fuel cell’s operation was to surgically remove the implant, which was obviously impractical.

To avoid the enzyme-depletion issue, our group instead has focused on developing disposable wearables worn outside the body. We demonstrated our first biofuel cells in 2014. The lactate biofuel cells were screen-printed onto a fabric headband and a wrist-worn sweat guard.

Our test subjects each wore a headband and a sweat guard while riding an exercise bike. We’d hooked up each biofuel cell to a small DC-DC converter. The converter raised the voltage generated by the perspiring bike riders to the levels necessary to light up a microwatt-level LED and power a digital wristwatch. In the experiment, the biofuel cells generated up to 100 microwatts per square centimeter, enough to power both the LED and the watch. That’s a greater power density than would be possible with a thermoelectric generator paired with a small, wearable-size heat sink, and somewhat greater than the power density of a solar cell operating in normal indoor lighting. (We did not measure lactate concentrations during the experiment; these vary by subject. Our previous in vitro experiments, which produced similar power levels, used lactate concentrations of 3 × 10⁻³ to 15 × 10⁻³ moles per liter, or 3 to 15 moles per cubic meter.)
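Plugging in the demo’s power density shows why it was sufficient; the electrode area and load figures in this sketch are assumptions for illustration:

```python
# At 100 microwatts per square centimeter, a few square centimeters of
# printed electrode comfortably covers a microwatt-class load.
# Area and load values below are illustrative assumptions.

POWER_DENSITY_UW_PER_CM2 = 100.0

area_cm2 = 2.0                          # assumed printed-electrode area
available_uw = POWER_DENSITY_UW_PER_CM2 * area_cm2
load_uw = 10.0 + 1.0                    # assumed LED plus digital watch draw
print(available_uw >= load_uw)          # True: ~200 uW available, ~11 uW needed
```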

To our knowledge, this was the first demonstration of a printable, wearable biofuel cell powering an everyday object. What’s more, because the biofuel cell was printed onto a flexible textile, the design was reasonably comfortable to wear, and it continued to function even after repeated bending. However, it didn’t generate quite enough power to operate a sophisticated activity tracker or a full smart watch. It also didn’t conform to the skin as comfortably as we would have liked.

Of course, truly useful wearables have more components than a digital watch does. Even the simplest activity tracker includes an accelerometer, memory, and a Bluetooth radio. Together, these items consume a milliwatt or two, roughly 10 times what we generated in our 2014 demonstration. Indeed, boosting the power density of wearable biofuel cells is one of our biggest challenges. If the cells can’t generate much more power than alternative technologies can, they won’t make it into real products.

In reviewing our fuel cell’s design, we realized that the surface of the anode was relatively flat. That meant the anode was exposed to only the molecules resting directly below it, which greatly limited the amount of biofuel the cell could catalyze. In 2017, in collaboration with our UC San Diego colleague Sheng Xu and his team, we came up with a 3D carbon-nanotube structure in the form of pellets that could be attached to the top of the anode and the cathode. These pellets increased the electrodes’ effective surface area without increasing the actual area of the device. Although using this 3D structure meant our cell was no longer fully printable, it allowed us to load each electrode with more of the catalyst, giving access to more fuel and enabling each cell to generate much more power.

Increasing the cell’s effective surface area and making additional improvements in the chemical composition of our catalyst enabled us to boost the power density by a factor of 10, to 1 milliwatt per square centimeter. That power density approaches that of a small solar cell in direct sunlight.

To test this design, we built a circuit board that included a Bluetooth radio, typically the most power-hungry component of a wearable device. Our prototype lactate fuel cell generated around a milliwatt, enough to successfully power the radio, a small microprocessor, and peripheral components, including a temperature sensor and voltage converter.

The 3D electrodes we used in this design weren’t flexible. Ultimately, we want a fuel cell that’s not just flexible but also stretchable. The human body is bumpy and curvy, so flexibility alone isn’t sufficient to ensure that the fuel cell will conform smoothly to a user’s skin—think of trying to wrap a sheet of paper around your wrist. Something stretchy like nylon or cling wrap would be much better.

To create flexibility and stretch, we again turned to carbon nanotubes, using an “island bridge” structure developed by Xu’s group. This biofuel cell is dotted with rows of “islands,” each one of which is a 3D electrode. Half of the islands make up the anode, and the other half form the cathode. The islands are connected by stretchy “bridges” made of thin, spring-shaped wires that can stretch and bend. However, the electrodes themselves don’t stretch, so over time, the mechanical mismatch can create a lot of strain and cause the fuel cell to fail. Our team is currently working to solve this problem.

There’s one more hurdle to bringing sweat power to wearables: In most situations, people don’t sweat constantly—or at least not heavily enough to generate much power. If you’re not sweating, your fuel cell will run dry and stop producing power. This may not be an issue in applications like exercise and athletics, but it’s a big deal in most other cases.

There are three ways to work around this limitation. We could use the scavengers only for applications where the availability of sweat is guaranteed. Or we could add an energy-storage element to the wearable. Or, finally, we could add a complementary, nonbiofuel energy scavenger to the wearable.

We don’t want to limit the applications to athletes in action, so we’ve been focusing on the latter two approaches. For wearables that need a constant supply of energy—for example, smart watches—an obvious solution is adding a battery or an ultracapacitor to act as an energy buffer. If the fuel cell has a high power density but the availability of power is intermittent, then the wearable device will charge its battery when power is available and discharge the battery when the biofuel cell stops producing power.
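In outline, the control logic is simple. The following minimal simulation, with invented numbers, shows a small buffer riding out a dry spell:

```python
# Charge the buffer when the biofuel cell produces surplus power; drain it
# when sweating (and therefore harvesting) stops. All values are invented.

def simulate(harvest_mw, load_mw=1.0, capacity_mwh=5.0, step_h=0.25):
    stored = capacity_mwh / 2                 # start half full
    levels = []
    for h in harvest_mw:
        stored += (h - load_mw) * step_h      # net energy over this interval
        stored = min(max(stored, 0.0), capacity_mwh)
        levels.append(stored)
    return levels

profile = [3.0] * 8 + [0.0] * 16              # 2 h of sweating, then 4 h dry
print(simulate(profile)[-1])                  # 1.0 mWh left: still running
```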

This energy buffer needs to have the same general physical properties as the rest of the wearable. It doesn’t make sense to have a biofuel cell that’s small, soft, and stretchable if the battery on top is large and rigid.

Last year, researchers in our center created a stretchable textile that uses sweat to feed its biofuel module and stores the energy in a flexible supercapacitor. We’ve also demonstrated stretchable and rechargeable zinc-silver-oxide battery prototypes, made using printed electronics. These stretchy batteries can be recharged hundreds of times and can generate relatively high currents, although their energy densities have a long way to go before they can match those of their rigid counterparts.

To increase the likelihood of successful energy harvesting and therefore decrease the demands on the energy buffer, we’re also looking at using multiple types of energy scavenging to power a single wearable. Users may not sweat all the time, but if their wearable combines a biofuel cell with a solar cell and a thermoelectric generator, then the chances of at least one of these devices generating power at a particular time are higher than with any one of them acting alone.
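The intuition is basic probability: if source i independently delivers power a fraction p_i of the time, the chance that at least one source is active is

```latex
P(\text{at least one active}) = 1 - \prod_{i=1}^{n} (1 - p_i)
```

Three independent scavengers that are each available 30 percent of the time, for example, would keep the device powered about 66 percent of the time (1 − 0.7³ ≈ 0.66). Treating the sources as independent is itself an assumption; sweat, light, and body heat can all correlate with the wearer’s activity.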

Of course, integrating several scavenging technologies needs to happen without compromising the wearer’s comfort. As with the biofuel cell–energy buffer combination, we have to make these gadgets small, soft, and stretchy. Researchers have lately been considering how to redesign energy-scavenging devices for better integration into what we like to call an energy sandwich—that is, a collection of energy scavengers stacked in a single small, stretchable device. In 2018 we developed a small circuit that can simultaneously extract energy from multiple sources and send it to multiple wearables and a battery.

So far, we’ve been talking about using biofuel cells to replace traditional batteries in wearables. UC San Diego researchers are also making progress on printable batteries that are smaller, cheaper, and more flexible than traditional batteries. These, along with supercapacitors, will eventually be useful for briefly storing the energy harvested by wearable biofuel cells.

And here’s the real magic of biofuel cells: They can act as sensors. That’s something no battery can do.

The sensing is possible because the output power of a biofuel cell is usually proportional to the underlying concentration of metabolites being used as fuel. We can exploit this phenomenon to build self-powered sensors—devices that perform sensing and energy harvesting at the same time. This idea was first reported in 2001 by a team led by researchers at the Hebrew University of Jerusalem.
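Here is a minimal sketch of how such a device might map its own output back to a concentration, assuming an approximately linear calibration over the working range; the calibration points are invented for illustration:

```python
# Self-powered sensing: if output power rises roughly linearly with lactate
# concentration, a two-point calibration turns the harvester into a sensor.
# Calibration points below are invented, not measured values.

CAL = [(3.0, 30.0), (15.0, 150.0)]      # (mmol/L, microwatts) assumed pairs

def concentration_mmol(power_uw):
    (c0, p0), (c1, p1) = CAL
    slope = (c1 - c0) / (p1 - p0)       # mmol/L per microwatt
    return c0 + slope * (power_uw - p0)

print(concentration_mmol(90.0))         # ~9 mmol/L inferred at 90 uW output
```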

Our team has developed self-powered biosensors to detect changes in the lactate or glucose concentration of sweat while using that same lactate or glucose to generate the power the sensor needs to operate. Lactate is a good marker of exertion, and knowing one’s level is useful for athletes and also for people fighting illness. Glucose is a marker for nutrition as well as for blood sugar levels. Custom low-power microelectronic circuits in the device—powered by the biofuel—read the sensor data, convert it to a digital format, and wirelessly transmit the reading to a smart watch or phone. We believe this is the first practical demonstration of a noninvasive, integrated, self-powered chemical biosensing system with a wireless readout.

Sweat power is a new technology, and many challenges remain. For one thing, not everybody sweats in the same amounts; we have to make sure our systems can adapt to varying conditions. We also need to do a better job of integrating these biofuel cells with other harvesters, energy sources, and electronics to create useful wearables.

And finally, the longevity of our biofuel cells needs improvement. Right now, most of our prototypes are designed for use in disposable devices, as easy to put on and take off as a Band-Aid. These biofuel cells might go into wearables meant to be worn for a day at most, for use by athletes trying to optimize training, people fighting illness who need to monitor temperature and hydration, or those just interested in tracking their general well-being. While we feel this approach is reasonable for many applications, we’d still like to try to extend the devices’ longevity a bit further.

If we do our job well, we will create a class of small, flexible, robust, rugged, washable wearables that can be used around the clock and never need to be recharged. Sweat isn’t the only biofuel—our bodies produce a variety of potential biofuels. Tears could power smart contact lenses, while saliva could power smart mouth guards. A baby’s urine could power a smart diaper, and fluid collected from the skin using microneedles could power a drug-delivery device. These and other self-powered biosensors would measure myriad biological parameters and report their data wirelessly to a user’s health app, yielding far better information than what’s now available from traditional wearables. That’s our vision of an “unaware-ables” future.

This article appears in the July 2020 print issue as “Powered By Sweat.”

About the Authors

Patrick Mercier is an associate professor of electrical and computer engineering, and Joseph Wang is a professor of nanoengineering, at the University of California, San Diego. Mercier and Wang are also co-director and director, respectively, of the university’s Center for Wearable Sensors.

Atom-Thin Switches Could Route 5G and 6G Radio Signals

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/atomthin-switches-5g-6g-radio-signals

Two-dimensional, atom-thin materials are good for a lot of things, but until two years ago, nobody thought they’d make good memory devices. Then Deji Akinwande, Jack Lee, and their team at UT Austin tried it out. It turns out that sandwiching a 2D material like molybdenum disulfide between two electrodes makes a memristor—a two-terminal device that stores data as a change in resistance. In research reported last week, they demonstrated an important potential application for these “atomristors”: analog RF switches for 5G and perhaps future 6G radios.

Sharing Manufacturing IP Could Help Us Deal with COVID-19


Post Syndicated from Mark Pesce original https://spectrum.ieee.org/semiconductors/devices/sharing-manufacturing-ip-could-help-us-deal-with-covid19

Back in 2012, Netflix released Chaos Monkey, an open-source tool for triggering random failures in critical computing infrastructure. Similar stress testing, at least in a simulated environment, has been applied in other contexts such as banking, but we’ve never stress tested industrial production during a viral pandemic. Until now.

COVID-19 has demonstrated beyond doubt the fragility of the global system of lean inventories and just-in-time delivery. Many nations have immediate need for critical medical supplies and equipment, even as we grope for the switch that will allow us to turn the global economy back on. That means people have to be able to manufacture stuff, where it’s needed, when it’s needed, and from components that can be locally sourced. That’s a big ask, because most of our technology comes to us from far away, often in seamlessly opaque packaging—like a smartphone, all surface with no visible interior.

The manufacture of even basic products can be so encumbered by secrecy or obscurity that it quickly becomes difficult to learn how to make them or to re-create their functionality in some other way. While we normally tolerate such impediments as part of normal business practice, they have thrown up unexpected roadblocks to keeping the world operating through the present crisis.

We must do whatever we can to lower the barriers to getting things built, and that begins by embracing a newfound flexibility in our approaches to both manufacturing and intellectual property. Companies are already rising to this challenge.

For example, Medtronic shared the designs and code for its portable ventilator at no charge, enabling other capable manufacturers to take up the challenge of building and distributing enough units to meet peak demand during the pandemic. Countless other pieces of electronic equipment—everything from routers to thermostats—operate in critical environments and need immediate replacement should they fail. Where they cannot be replaced, we will fall deeper into the ditch we now find ourselves in.

It would be ideal if any sort of equipment could be “printed” on demand. We already have the capacity for such rapid manufacturing in some realms, but to address the breadth of the present crisis would require a comprehensive database of product designs, testing, firmware, and much else. Little of that infrastructure exists at present, highlighting a real danger: If we don’t construct a distributed, global build-it-here-now capacity, we might burn through our existing inventories without any way to replenish them. Then we will be truly stuck.

Many firms will no doubt have reservations about handing over their intellectual property, even to satisfy critical needs. This tension between normal business practice and the public good echoes the dilemma of balancing personal privacy against public health. We nevertheless urgently need to find a way to share trade secrets—temporarily—to preserve the kind of world in which business can one day operate normally.

Some governments have already signaled greater flexibility in enforcing patent and other intellectual property protections during this crisis. Yet more is needed. Like Medtronic, businesses should take the plunge: open up, share trade secrets, and provide guidance to others (even former competitors) to help speed our way into a post-pandemic economy. Sharing today will make that return much faster and far less painful. To paraphrase a wise old technologist, we either hang together, or we will no doubt hang separately.

This article appears in the June 2020 print issue as “Not Business as Usual.”

How the Father of FinFETs Helped Save Moore’s Law


Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/semiconductors/devices/how-the-father-of-finfets-helped-save-moores-law

It was 1995. Advances in chip technology continued apace with Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years, generally because of the shrinking size of those transistors.

But the horizon no longer seemed limitless. Indeed, for the first time, murmurs throughout the semiconductor industry predicted the death of Moore’s Law. The golden days would be coming to an end, the predictions went, when the size of a critical transistor feature, then around 350 nanometers, reached below 100 nm. Even the U.S. government was worried—so worried that DARPA raised an alarm, launching a program seeking new chip technologies that could extend progress.

Chenming Hu, then a professor of electrical engineering and computer science at the University of California, Berkeley, jumped at the challenge. He immediately thought of a solution—actually, two solutions—and, on a plane ride a few days later, sketched out those designs. One of those ideas, raising the channel through which current flows so that it sticks out above the surface of the chip, became the FinFET, a technology that earned Hu this year’s IEEE Medal of Honor “for a distinguished career of developing and putting into practice semiconductor models, particularly 3-D device structures, that have helped keep Moore’s Law going over many decades.”

The story of the FinFET didn’t begin with Hu putting pencil to paper on an airline tray table, of course.

It started in Taiwan, where Hu was a curious child, conducting stove-top experiments on seawater and dismantling—and reassembling—alarm clocks. As he approached the end of high school, he was still interested in science, mostly chemistry. But instead of targeting a chemistry degree, he applied for the electrical engineering program at the National Taiwan University, even though he didn’t really know what an EE actually did. It was simply a challenge—the electrical engineering program required the highest test scores to get in.

During his last year of college, Hu discovered the industry he would later shake up, thanks to Frank Fang, then a visiting professor from the United States.

“It was 1968,” Hu recalls, “and he told us semiconductors were going to be the material for future televisions, and the televisions would be like photographs we could hang on the wall.”

That, in an era of bulky tube televisions, got Hu’s attention. He decided that semiconductors would be the field for him and applied to graduate programs in the United States. In 1969, he landed at Berkeley, where he joined a research group working on metal-oxide semiconductor (MOS) transistors.

His career soon took a detour because semiconductors, he recalls, just seemed too easy. He switched to researching optical circuits, did his Ph.D. thesis on integrated optics, and went off to MIT to continue his work in that field.

But then came the 1973 oil embargo. “I felt I had to do something,” he said, “something that was useful, important; that wasn’t just writing papers.”

So he switched his efforts toward developing low-cost solar cells for terrestrial applications—at the time, solar cells were used only on satellites. In 1976, he returned to Berkeley, this time as a professor, planning to do research in energy topics, including hybrid cars, an area that transported him back to semiconductors. “Electric cars,” Hu explains, “needed high voltage, high current semiconductor devices.”

Come the early 1980s, that move back to semiconductor research turned out to be a good thing. Government funding for energy research dried up, but a host of San Francisco Bay Area companies were supporting semiconductor research, and transitioning to corporate funding “was not very difficult,” Hu says. He started spending time down in Silicon Valley, not far from Berkeley, invited by companies to teach short courses on semiconductor devices. And in 1982, he spent a sabbatical in the heart of Silicon Valley, at National Semiconductor in Santa Clara.

“Being in industry then ended up having a long influence on me,” Hu says. “In academia, we learn from each other about what is important, so what I thought was interesting really came just because I was reading another paper and felt, ‘Hey, I can do better than that.’ But once I opened my eyes to industry, I found that’s where the interesting problems are.” And that epiphany got Hu looking harder at the 3D structure of transistors.

A field-effect transistor has four basic parts—a source, a drain, a conductive channel that connects the two, and a gate to control the flow of current down the channel. As these components were made smaller, people started noticing that the behaviors of transistors were changing with long-term use. These changes weren’t showing up in short-term testing, and companies had difficulty predicting the changes.

In 1983, Hu read a paper published by researchers at IBM that described this challenge. Having spent time at National Semiconductor, he realized the kinds of problems this lack of long-term reliability could cause for the industry. Had he not worked in the trenches, he says, “I wouldn’t have known just how important a problem it was, and so I wouldn’t have been willing to spend nearly 10 years working on it.”

Hu decided to take on the challenge, and with a group of students he developed what he called the hot-carrier-injection theory for predicting the reliability of MOS semiconductors. It’s a quantitative model for how a device degrades as electrons migrate through it. He then turned to investigating another reliability problem: the ways in which oxides break down over time, a rising concern as manufacturers made the oxide layers of semiconductors thinner and thinner.

These research efforts, Hu says, required him to develop a deep understanding of what happens inside transistors, work that evolved into what came to be called the Berkeley Reliability Tool (BERT) and BSIM, a set of transistor models. BSIM became an industry standard and remains in use today; Hu still leads the effort to regularly update its models.

Hu continued to work with his students to study the basic characteristics of transistors—how they work, how they fail, and how they change over time—well into the 1990s. Meanwhile, commercial chips continued to evolve along the path predicted by Moore’s Law. But by the mid-1990s, with the average feature size around 350 nm, the prospects for being able to shrink transistors further had started looking worrisome.

“The end of Moore’s Law was in view,” recalls Lewis Terman, who was at IBM Research at the time.

The main problem was power. As features grew smaller, current that leaked through when a transistor was in its “off” state became a bigger issue. This leakage grew so great that it increased—or even dominated—a chip’s power consumption.

“Papers started projecting that Moore’s Law for CMOS would come to an end below 100 nm, because at some point you would dissipate more watts per square centimeter than a rocket nozzle,” Hu recalled. “And the industry declared it a losing battle.”

Not ready to give up on Moore’s Law, DARPA (the Defense Advanced Research Projects Agency) looked to fund research that promised to break that barrier, launching an effort in mid-1995 to develop what it called the 25-nm Switch.

“I liked the idea of 25 nm—that it was far enough beyond what the industry thought possible,” Hu says.

To Hu, the fundamental task was clear: make the channel very thin, so that electrons couldn’t sneak past the gate. To date, solutions had involved thinning the gate’s oxide layer. That gave the gate better control over the channel, reducing leakage current. But Hu’s work in reliability had shown him that this approach was close to a limit: Make the oxide layer sufficiently thin and electrons could jump across it into the silicon substrate, forming yet another source of leakage.

Two other approaches immediately came to mind. One involved making it harder for the charges to sneak around the gate by adding a layer of insulation buried in the silicon beneath the transistor. That design came to be called fully depleted silicon-on-insulator, or FDSOI. The other involved giving the gate greater control over the flow of the charge by extending the thin channel vertically above the substrate, like a shark’s fin, so that the gate could wrap around the channel on three sides instead of just sitting on top. This structure was dubbed the FinFET, which had the additional advantage that using space vertically relieved some of the congestion on the 2D plane, ushering in the era of 3D transistors.

There wasn’t a lot of time to get a proposal submitted to DARPA, however. Hu had heard about the DARPA funding from a fellow Berkeley faculty member, Jeffrey Bokor, who, in turn, had heard about it while windsurfing with a DARPA program director. So Hu quickly met with Bokor and another colleague, Tsu Jae King, and confirmed that the team would pull together a proposal within a week. On a plane trip to Japan a day or two later, he sketched out the two designs, faxing his sketches and a description of his technical approach back to Berkeley when he arrived at his hotel in Japan. The team submitted the proposal, and DARPA later awarded them a four-year research grant.

Ideas similar to FinFET had been described before in theoretical papers. Hu and his team, however, actually built manufacturable devices and showed how the design would make transistors 25 nm and smaller possible. “The others who read the papers didn’t see it as a solution, because it would be hard to build and may or may not work. Even the people who wrote the papers did not pursue it,” says Hu. “I think the difference was that we looked at it and said, we want to do this not because we want to write another paper, or get another grant, but because we want to help the industry. We felt we had to keep [Moore’s Law] going.

“As technologists,” Hu continues, “we have the responsibility to make sure the thing doesn’t stop, because once it stops, we’re losing the biggest hope for us to have more abilities to solve the world’s difficult problems.”

Hu and his team “were well-poised to develop the FinFET because of the way he trains his students to think about devices,” says Elyse Rosenbaum, a former student of his and now a professor at the University of Illinois at Urbana-Champaign. “He emphasizes big picture, qualitative understanding. When studying a semiconductor device, some people focus on creating a model and then numerically solving all the points in its 3D grid. He taught us to step back, to try to visualize where the electric field is distributed in a device, where the potential barriers are located, and how the current flow changes when we change the dimension of a particular feature.”

Hu felt that visualizing the behavior of semiconductor devices was so important, Rosenbaum recalls, that once, struggling to teach his students his process, he “built us a model of the behavior of an MOS transistor using his kids’ Play-Doh.”

“These things looked like a lightning invention,” said Fari Assaderaghi, a former student who is now senior vice president of innovation and advanced technology at NXP Semiconductors. “But his team had been working on fundamental concepts of what an ideal device should be, working from first principles of physics early on; how to build the structure comes from that.”

By 2000, at the end of the four-year grant term, Hu and his team had built working devices and published their research, raising immediate, widespread interest within the industry. It took another decade, however, before chips using FinFETs began rolling off of manufacturing lines, the first from Intel in 2011. Why so long?

“It was not broken yet,” Hu explains, referring to the industry’s ability to make semiconductor circuits more and more compact. “People were thinking it was going to break, but you never fix anything that’s not broken.”

It turned out that the DARPA program managers were prescient—they had called the project the 25-nm Switch, and FinFETs came into play when the semiconductor industry moved to sub-25-nm geometries.

FDSOI, meanwhile, has also progressed and is used in industry today, particularly in optical and RF devices, but FinFETs currently dominate the processor industry. Hu says he never really promoted one approach over the other.

In FinFET’s dormant years, Hu took a three-year break from Berkeley to serve as chief technology officer of semiconductor manufacturer TSMC in Taiwan. He saw that as a chance to pay back the country where he received his initial education. He returned to Berkeley in 2004, continuing his teaching, research in new energy-efficient semiconductor devices, and efforts to support BSIM. In 2009, Hu stopped teaching regular classes, but as a professor emeritus, he still works with graduate students.

Since Hu moved back to Berkeley, FinFET technology has swept the industry. And Moore’s Law did not come to an end at 25 nm, although its demise is still regularly predicted.

“It is going to gradually slow down, but we aren’t going to have a replacement for MOS semiconductors for a hundred years,” Hu says. This does not make him pessimistic, though. “There are still ways of improving circuit density and power consumption and speed, and we can expect the semiconductor industry to keep giving people more and more useful and convenient and portable devices. We just need more creativity and a big dose of confidence.”

This article appears in the May 2020 print issue as “The Father of FinFETs.”

Experts Invent a New Way to Track Semiconductor Technology

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/experts-invent-a-new-way-to-track-semiconductor-technology

A group of some of the most noted device engineers on the planet, including several IEEE Fellows and this year’s IEEE Medal of Honor recipient, is proposing a new way of judging the progress of semiconductor technology. Today’s measure, the technology node, began to go off the rails almost two decades ago. Since then, the gap between what a technology node is called and the size of the devices it can make has only grown. After all, there is nothing in a 7-nanometer chip that is actually that small. This mismatch is much more of a problem than you might think, argues one of the group, Stanford University professor H.-S. Philip Wong.

“The broader research community has a feeling that [device] technology is over,” he says. “Nothing could be further from the truth.”

TSMC’s 5-Nanometer Process on Track for First Half of 2020

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/tsmc-5-nanometer-process

“Those who know, know.” That was all that TSMC senior director of advanced technology Geoffrey Yeap would say about the mystery ingredient that helps boost the performance of devices made using the company’s next-generation manufacturing process. N5, TSMC’s 5-nanometer process, is on track for high-volume production during the first half of 2020, Yeap told engineers at the IEEE International Electron Devices Meeting in San Francisco on Wednesday.

Compared with the company’s 7-nanometer process, used to make the iPhone XS processor among other high-end chips, N5 yields devices that are 15 percent faster and 30 percent more power efficient. It produces logic that is 1.84 times as dense as the previous process’s and SRAM cells that, at just 0.021 square micrometers, are the most compact ever reported, Yeap said.

The process is currently in what’s called risk production—initial customers are taking a risk that it will work for their designs. Yeap reported that initial average SRAM yield was about 80 percent and that yield improvement has been faster for N5 than any other recent process introduction.

Some of that yield improvement is likely due to the use of extreme ultraviolet lithography (EUV). N5 is the first TSMC process designed around EUV. The previous generation was developed using the established 193-nanometer immersion lithography first; when EUV was later introduced, some of the most difficult-to-produce chip features were made with the new technology. Because it uses 13.5-nanometer light instead of 193-nanometer light, EUV can define chip features in one step, compared with three or more steps using 193-nanometer light. With more than 10 EUV layers, N5 is the first new process “in quite a long time” that uses fewer photolithography masks than its predecessor, Yeap said.

Part of the performance enhancement comes from the inclusion, for the first time in a TSMC process, of a “high-mobility channel.” Charge-carrier mobility describes how quickly electrons or holes move through a transistor’s channel, and it limits how quickly the device can switch. Asked (several times) about the makeup of the high-mobility channel, Yeap declined to offer details. “Those who know, know,” he said, prompting laughter from the audience. TSMC and others have explored germanium-based channels in the past. And earlier in the day, Intel showed a 3D process with silicon NMOS on the bottom and a layer of germanium PMOS above it.

Yeap would not even be tied down on which type of transistor, NMOS or PMOS or both, had the enhanced channel. The answer, however, is probably not very mysterious. Holes generally travel more slowly through silicon devices than electrons do, so the PMOS devices would benefit most from enhanced mobility. When pressed, Yeap confirmed that only one variety of device had the high-mobility channel.

How to Reduce the Bill of Material Costs with Digital Signal Processing

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/how-to-reduce-the-bill-of-material-costs-with-digital-signal-processing

The push to decrease bill-of-materials (BOM) costs in embedded products is being driven by demand for high-volume, low-cost sensor systems. As IoT devices become more sophisticated, they require developers to use digital signal processing (DSP) to handle more features within the product, such as device provisioning.

In this paper, we will examine how DSP can be used to reduce a product’s cost.

You will learn:

  • The technology trends moving data processing to the edge of the network to enable more compute performance
  • The benefits of digital signal processing, including decreased product dimensions, product flexibility, shorter design cycles, and in-field adaptability
  • How to convert analog circuits to software using modeling software such as MathWorks MATLAB or the Advanced Solutions Nederlands (ASN) filter designer (see the sketch after this list)
  • How to select the right DSP processor solution to benefit from reduced BOM costs
  • The capabilities and features of the Arm Cortex-M processors with DSP extensions that help you get your signal processing application running as quickly as possible
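As a taste of the analog-to-software conversion described above, here is a minimal Python sketch that uses SciPy’s bilinear transform to turn a first-order analog RC low-pass into its digital equivalent; the cutoff, sample rate, and test signal are illustrative, and production firmware would typically run the resulting filter as fixed-point C on the Cortex-M rather than in Python:

```python
# Convert an analog RC low-pass, H(s) = wc / (s + wc), into a digital filter
# via the bilinear transform, then run it on a two-tone test signal.
import numpy as np
from scipy import signal

fc = 100.0                        # assumed analog cutoff frequency, Hz
fs = 1000.0                       # assumed sample rate, Hz
wc = 2 * np.pi * fc

b_dig, a_dig = signal.bilinear([wc], [1.0, wc], fs=fs)

t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 400 * t)
y = signal.lfilter(b_dig, a_dig, x)   # 400 Hz tone is strongly attenuated
```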

Artificial Intelligence in Software Defined SIGINT Systems

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/artificial-intelligence-in-software-defined-sigint-systems

As wireless protocols grow more complex, spectrum environments become more contested and electronic warfare increases in sophistication.

Read how you can combine artificial intelligence and deep learning with commercial off-the-shelf software-defined radio hardware to train algorithms.

You can teach them to detect new threats faster, reduce development risk and support burden, and deploy in signals intelligence and spectrum monitoring scenarios limited by SWaP (size, weight, and power).