Tag Archives: Semiconductors

White paper on flame retardant epoxies and silicones

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/white-paper-on-flame-retardant-epoxies-and-silicones

Flame retardant epoxies

Epoxies and silicones used in aircraft applications must maintain their primary role as adhesives or coatings while exhibiting resistance to heat and flame in accordance with government and industry specifications. Master Bond’s flame-retardant systems comply with specifications for flame resistance and reduction of smoke density and toxic emissions.

Plasmonics: A New Way to Link Processors With Light

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/optoelectronics/plasmonics-a-new-way-to-link-processors-with-light

Fiber optic links are already the main method of slinging data between clusters of computers in data centers, and engineers want to bring their blazing bandwidth to the processor. That step comes at a cost that researchers at the University of Toronto and Arm think they can greatly reduce.

Silicon photonics components are huge in comparison to their electronic counterparts. That’s a function of optical wavelengths being so much larger than today’s transistors and the copper interconnects that tie them together into circuits. Silicon photonic components are also surprisingly sensitive to changes in temperature, so much so that photonics chips must include heating elements that account for about half of their area and energy consumption, as Charles Lin, a member of the team at the University of Toronto, explained last month at the IEEE International Electron Devices Meeting.

At the virtual conference, Lin, a researcher in the laboratory of Amr S. Helmy, described new silicon transceiver components that dodge both of these problems by relying on plasmonics instead of photonics. The results so far point to transceivers capable of at least double the bandwidth while consuming only one-third the energy and taking up a mere 20 percent of the area. What’s more, they could be built right atop the processor, instead of on separate chiplets as is done with silicon photonics.

When light strikes the interface between a metal and an insulator at a shallow angle, it forms plasmons: waves of electron density that propagate along the metal surface. Conveniently, plasmons can travel down a waveguide that is much narrower than the light that forms them, but they typically peter out very quickly because the metal absorbs light.

The Toronto researchers invented a structure that takes advantage of plasmonics’ smaller size while greatly reducing the loss. Called the coupled hybrid plasmonic waveguide (CHPW), it is essentially a stack made up of silicon, the conductor indium tin oxide, silicon dioxide, aluminum, and more silicon. That combination forms two types of semiconductor junctions—a Schottky diode and a metal-oxide-semiconductor junction—with the plasmon-carrying aluminum shared between the two. Within the metal, the plasmon in the top junction interferes with the plasmon in the bottom junction in such a way that loss is reduced by almost two orders of magnitude, Lin said.

Using the CHPW as a base, the Toronto group built two key photonics components—a modulator, which turns electronic bits into photonic bits, and a photodetector, which does the reverse. (As in silicon photonics, a separate laser provides the light; the modulator blocks the light or lets it pass to represent bits.) The modulator took up just 2 square micrometers and could switch as fast as 26 gigahertz, the limit of the Toronto team’s test equipment. Based on the device’s measured capacitance, the real limit could be as high as 636 GHz. The plasmonic photodetector was a near match for silicon photonics’ sensitivity, but it was only 1/36th the size.

One of the CHPW’s biggest advantages is its insensitivity to temperature. Silicon photonics components must be held to within about one degree of their target temperature to operate at the proper wavelength. Temperature sensitivity is a “big challenge for silicon photonics,” explains Saurabh Sinha, a principal research engineer at Arm. Managing it requires both extra circuitry and extra energy. In a simulated 16-channel silicon photonics transceiver, heating circuits consume half of the energy and take up nearly half of the total area, which translates into a huge difference in footprint: 0.37 mm² for the silicon photonics transceiver versus 0.07 mm² for its plasmonic counterpart.

Simulations of the CHPW-based plasmonic transceiver predict a number of benefits over silicon photonics. The CHPW system consumed less than one-third of the energy per transmitted bit of a competing silicon photonics system—0.49 picojoules per bit versus 1.52 pJ/b. It could also comfortably transmit more than three times as many bits per second at acceptable Ethernet error rates without relying on error correction—150 gigabits per second versus 39 Gb/s.
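
For a rough sense of what those per-bit figures mean, here is a back-of-the-envelope sketch (our own arithmetic, using only the numbers quoted above) that converts energy per bit and line rate into link power:

```python
# Back-of-the-envelope check using the figures quoted above. The energy-per-bit
# and data-rate numbers come from the article; the calculation itself
# (power = energy per bit x bits per second) is just illustrative arithmetic.

PJ = 1e-12     # joules per picojoule
GBPS = 1e9     # bits per second per Gb/s

def link_power_watts(energy_pj_per_bit, rate_gbps):
    """Power needed to run a link: energy per bit times bits per second."""
    return energy_pj_per_bit * PJ * rate_gbps * GBPS

plasmonic = link_power_watts(0.49, 150)   # ~0.074 W at 150 Gb/s
photonic = link_power_watts(1.52, 39)     # ~0.059 W at 39 Gb/s

print(f"plasmonic link: {plasmonic * 1e3:.0f} mW for 150 Gb/s")
print(f"photonic link:  {photonic * 1e3:.0f} mW for 39 Gb/s")
print(f"energy advantage per bit: {1.52 / 0.49:.1f}x")
```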

Sinha says Arm and the Toronto group are discussing next steps, which might include exploring other potential benefits of these transceivers. Chief among them is that CHPWs could be constructed atop processor chips, whereas silicon photonics devices must be made separately from the processor and then linked to it inside the processor package using chiplet technology.

Shining a HOT light on optomechanics

Post Syndicated from Paul Seidler original https://spectrum.ieee.org/semiconductors/optoelectronics/shining-a-hot-light-on-optomechanics

We all use micromechanical devices in our daily lives, although we may be less aware of them than electronic and optical technologies. Micromechanical sensors, for example, detect the motion of our smartphones and cars, allowing us to play the latest games and drive safely, and mechanical resonators serve as filters to extract the relevant cellular network signal from the broadband radio waves caught by a mobile phone’s antenna. For applications such as these, micromechanical devices are integrated with their control and readout electronics in what are known as micro-electro-mechanical systems (MEMS). Decades of research and development have turned MEMS into a mature technology, with a global market approaching one hundred billion US dollars.

In recent years, a community of researchers from various universities and institutes across Europe and the United States set out to explore the physics of micro- and nano-mechanical devices coupled to light. The initial focus of these investigations was on demonstrating and exploiting uniquely quantum effects in the interaction of light and mechanical motion, such as quantum superposition, where a mechanical oscillator occupies two places simultaneously. The scope of this work quickly broadened as it became clear that these so-called optomechanical devices would open the door to a broad range of new applications.

Hybrid Optomechanical Technologies (HOT) is a research and innovation action funded by the European Commission’s FET Proactive program that supports future and emerging technologies at an early stage. HOT is laying the foundation for a new generation of devices that bring together several nanoscale platforms in a single hybrid system. It unites researchers from thirteen leading academic groups and four major industrial companies across Europe working to bring technologies to market that exploit the combination of light and motion.

One key set of advances made in the HOT consortium involves a family of non-reciprocal optomechanical devices, including optomechanical circulators. Imagine a device that acts like a roundabout for light or microwaves, where a signal input from one port emerges from a second port, and a signal input from that second port emerges from a third one, and so on. Such a device is critical to signal processing chains in radiofrequency or optical systems, as it allows efficient distribution of information among sources and receivers and protection of fragile light sources from unwanted back-reflections. It has, however, proven very tricky to implement a circulator at small scales without involving strong magnetic fields to facilitate the required unidirectional flow of signals.

Introducing a mechanical component makes it possible to overcome this limitation. Motion induced by optical forces causes light to flow in one direction through the roundabout. The resulting devices are more compact, do not require strong permanent magnets, and are therefore more amenable to large-scale device integration.

HOT researchers have also created mechanical systems that are simultaneously coupled to an electric and an optical resonator. These quintessentially hybrid devices interconvert electronic and optical signals via a mechanical intermediary, and they do so with very low added noise, high quantum efficiency, and a compact footprint. This makes them interesting for applications that benefit from the advantages of analog signal transmission over optical fibers instead of copper cables, such as those requiring high bandwidth, low loss, low crosstalk, and immunity to harsh environmental conditions.

An example of such a device is a receiver for a magnetic resonance imaging (MRI) scanner, as used in hospitals for three-dimensional imaging inside the human body. In MRI, tiny electronic signals are collected from several sensors on a patient inside the scanner. The signals need to be extracted from the scanner in the presence of large magnetic fields of several tesla, with the lowest possible distortion, to form high-resolution images. Conversion to the optical domain provides a means of protecting the signal. A prototype of an MRI sensor that uses optical readout has been developed by HOT researchers.

Another application of simultaneous optical and electronic control over mechanical resonators is the realization of very stable oscillators. These can function as on-chip clocks and microwave sources with ultrahigh purity. HOT researchers filed a patent application that shows how to stabilize nanoscale mechanical resonators that naturally oscillate at gigahertz frequencies driven by optical and electric fields. Combining all components on a single chip makes such devices extremely compact.

A somewhat more exotic application of hybrid transducers, but one with potentially far-reaching implications, is the interconnection of quantum computers. Quantum computers hold the promise of tackling computational problems that our current classical computers will never be able to solve. The leading contender as the platform for future quantum computers encodes information in microwave photons confined in superconducting circuits to form qubits. Unlike the bits used in conventional computers that take on values of either 0 or 1, qubits can exist in states representing both 0 and 1 simultaneously. The qubits however are bound to the ultracold environment of a dilution refrigerator to prevent thermal noise from destroying their fragile quantum states. Transferring quantum information to and from computing nodes, even within a quantum data center, will require conversion of the stationary superconducting qubits to so-called flying qubits that can be transmitted between separate locations. Optical photons represent a particularly attractive option for flying qubits, as they are robust at room temperature and thus provide one of the few practical means of transmitting quantum states over distances greater than a few meters. In fact, the transfer of quantum information encoded in optical photons is now routinely achieved over distances of hundreds of kilometers.

A key prerequisite for quantum networking is therefore quantum-coherent bidirectional conversion between microwave and optical frequencies. To date, no experimental demonstration exists of efficient transduction at the level of individual quantum states. However, many research groups around the world are diligently pursuing various possible solutions. The approaches that have come the closest so far utilize a mechanical system as an intermediary, and this is where the technologies pursued by the HOT consortium come into play.

HOT researchers have created compact chip-scale devices on commercially available silicon wafers that are fully compatible with both silicon photonics and superconducting qubit technology. The unique optomechanical designs developed by the HOT consortium exploit strong optical field confinement, producing large optomechanical coupling. As a result, electrical signals at the gigahertz frequencies typical of superconducting qubits can be coherently converted to optical frequencies commonly used for telecommunication. Such integrated photonic devices employing optomechanical coupling are often plagued by the deleterious effects of heating due to absorption of high-intensity light. The thermal problems can be circumvented by optimizing the device design and using alternative dielectric materials, and internal efficiencies exceeding unity have been achieved for ultra-low optical pump powers.

With the capabilities provided by such transducers, the power of quantum information processing could be brought to a whole new class of tasks, such as secure data sharing, in addition to creating networks of quantum devices.

As hybrid optomechanical systems enter the quantum regime, new challenges emerge. One particularly important consideration is that the very act of measuring the state of a system must be rethought. Contrary to our everyday experience, quantum mechanics requires that any measurement exerts some inevitable backaction onto the system being measured. This often has adverse effects; the response of an optomechanical sensor to a signal of interest, for example, can be washed out by the backaction caused by reading out the sensor. Luckily, these effects are well understood today, and can be corrected for using advanced quantum measurement and control techniques.

HOT researchers have pioneered the application of such techniques to mechanical sensors. They have shown how quantum state estimation and feedback can help overcome the measurement challenges. The classical counterparts of these approaches are widely used in many areas of engineering and are familiar to consumers in such products as noise-cancelling headphones. 

In the setting of optomechanics, they have been used to measure and control the quantum state of motion of a mechanical sensor. For example, HOT researchers have managed to limit the random thermal fluctuations of a vibrating drum to the minimal level allowed by quantum mechanics. This provides an excellent starting point to detect even the smallest forces exerted by other quantum systems like a single electron or photon.

With its focus on real-world technologies, the HOT consortium also considers such practical matters as device packaging and large-scale fabrication. Optomechanical devices require electronic and optical connectivity in a package that also keeps the mechanical element under vacuum. Whereas such demands have been met separately before, consortium member and industry giant STMicroelectronics is addressing their combination in a single device package as well as the potential for mass production.
 

This project is financed by the European Commission through its Horizon 2020 research and innovation programme under grant agreement no. 732894 (FET-Proactive HOT).

For more information about Hybrid Optomechanical Technologies, visit our website, like our Facebook page, and follow our Twitter stream. You can watch our videos on YouTube, including an explanation of the optical circulator built by the network and how to silence a quantum drum.

System Creates the Illusion of an Ideal AI Chip

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/system-creates-the-illusion-of-an-ideal-ai-chip

What does an ideal neural network chip look like? The most important part is to have oodles of memory on the chip itself, say engineers. That’s because data transfer (from main memory to the processor chip) generally uses the most energy and produces most of the system lag—even compared to the AI computation itself. 

Cerebras Systems solved these problems, collectively called the memory wall, by making a computer consisting almost entirely of a single, large chip containing 18 gigabytes of memory. But researchers in France, Silicon Valley, and Singapore have come up with another way.

Called Illusion, it uses processors whose resistive RAM (RRAM) is built in a 3D stack above the silicon logic, so fetching data costs little energy or time. By itself, even this isn’t enough, because neural networks are increasingly too large to fit on one chip. So the scheme also requires multiple such hybrid processors and an algorithm that both intelligently slices up the network among the processors and knows when to rapidly turn processors off while they’re idle.
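
The scheduling idea is easy to picture in code. The sketch below is a deliberately simplified toy model, not the Illusion team’s actual algorithm: the layer sizes, chip capacities, and greedy slicing heuristic are all our own illustrative choices.

```python
# Toy model of the idea described above: slice a network's layers across
# several memory-rich chips, then power-gate each chip except while its
# slice is running. Not the Illusion system's actual algorithm.

from dataclasses import dataclass, field

@dataclass
class Chip:
    capacity_mb: float                  # on-chip weight storage (e.g., RRAM)
    layers: list = field(default_factory=list)
    powered: bool = False               # gated off until its slice is needed

def slice_network(layer_sizes_mb, chips):
    """Greedily assign consecutive layers to chips without exceeding capacity."""
    it = iter(chips)
    chip = next(it)
    for i, size in enumerate(layer_sizes_mb):
        used = sum(layer_sizes_mb[j] for j in chip.layers)
        if used + size > chip.capacity_mb:
            chip = next(it)             # move on to the next chip
        chip.layers.append(i)
    return chips

def run_inference(chips):
    """Wake one chip at a time; the rest stay power-gated."""
    for chip in chips:
        chip.powered = True
        print(f"chip running layers {chip.layers}")
        chip.powered = False            # gate it off again when its slice is done

chips = slice_network([3, 2, 4, 1, 3, 2], [Chip(capacity_mb=6) for _ in range(3)])
run_inference(chips)
```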

In tests, an eight-chip version of Illusion came within 3-4 percent of the energy use and latency of a “dream” processor that had all the needed memory and processing on one chip.

Nanoscale Revelations Could Lower the Cost of Desalination

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/nanoscale-revelations-could-lower-the-cost-of-desalination

A few parched Middle Eastern countries already rely on removing salt from seawater to satiate their thirst. Many others might have to turn to desalination as they deal with increasing populations and climate change. But the technology is still expensive and requires a lot of energy.

Today’s foremost desalination method, reverse osmosis, uses membranes that block salt and impurities as seawater is pushed through them. Better membranes translate to more energy-efficient, less costly desalination.

To help design improved membranes, a team of researchers has used electron microscopy and 3D computational modeling for a nanoscale understanding of how water flows through the barriers. It turns out that uniform membrane density down to the nanometer scale, and not the thinness of the membranes, is crucial for improving water flow. Improving uniformity could increase efficiency by over 30 percent, the team, from the University of Texas, Penn State, and DuPont, reported in the journal Science last week.

The team used common reverse osmosis polymer membranes made by DuPont Water Solutions. Think of these solid membranes as tangled mats of polymer strings. Water flows through the tiny, Angstrom-scale voids formed between the polymer strings.

Membrane manufacturers change the internal structure and thicknesses of these polymer membranes to increase the flow of water. But it has been difficult to pinpoint exactly which parameter affects performance. DuPont researchers, for instance, recently found that making some of the company’s membranes thicker counterintuitively increased the amount of water that flowed through. 

To get to the bottom of this paradox, Manish Kumar at the University of Texas, Enrique Gomez at Penn State, and their collaborators at DuPont used a transmission electron microscopy technique called electron tomography to get a 3D view of the membranes at the nanoscale. It involves scanning a high-intensity electron beam across the surface of the membrane to make a series of 2D images at different angles, which are then combined to create a 3D image. “This gives you information on the density of the polymer,” Kumar says. “Now you can find the density of each cubic nanometer of membrane to see how much is occupied by polymer and how much with free space.”
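
The sort of analysis that enables is sketched below. The 3D array of polymer volume fractions stands in for a real tomographic reconstruction, and the coefficient of variation is just one simple way to quantify uniformity; both are our own illustrative choices, not the team’s actual data or code.

```python
# Illustrative sketch of the analysis described above: given a 3D map of
# polymer volume fraction (one value per roughly cubic-nanometer voxel, as
# reconstructed from electron tomography), quantify how uniform the density
# is. The random array below stands in for real tomography data.

import numpy as np

rng = np.random.default_rng(0)
# Fake 100 x 100 x 100 voxel "membrane": ~60% polymer on average, with noise.
density = np.clip(rng.normal(loc=0.60, scale=0.08, size=(100, 100, 100)), 0, 1)

mean_density = density.mean()
# Coefficient of variation as a simple uniformity metric: the reported finding
# is that smaller nanoscale density fluctuations, not thinner membranes,
# correlate with higher water permeability.
fluctuation = density.std() / mean_density

print(f"mean polymer fraction: {mean_density:.2f}")
print(f"relative density fluctuation: {fluctuation:.1%}")
```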

Next, they fed the image data on the membrane’s nanometer-scale structure into a supercomputer at the Texas Advanced Computing Center. There they ran large-scale computer simulations that revealed the pathways water took through the membranes.

Water, of course, chooses the path of least resistance. But even in the thinnest membranes, the simulations showed that densely packed polymers could form structures that obstruct water and make it take a longer route, Kumar says. So membrane density at the nanoscale rather than thickness, the team found, is the chief parameter that affects water transport. The most permeable membrane was the least dense and its density did not fluctuate much across the membrane.

If manufacturers could similarly evaluate their reverse osmosis membranes at the nanoscale and figure out how to use the right chemistry and processing techniques to make more uniformly dense membranes, Kumar says, that would make desalination more cost-effective.

IMEC and Intel Researchers Develop Spintronic Logic Device

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/devices/imec-and-intel-researchers-develop-spintronic-logic-device

Spintronics is a budding path in the quest for a future beyond CMOS. Where today’s electronics harness an electron’s charge, spintronics plays with another key property: an electron’s spin, which is its intrinsic angular momentum that points either “up” or “down.” Spintronic devices use much less power than their CMOS counterparts—and keep their data even when the device is switched off.

Spin is already used in memory, such as magnetoresistive RAM (MRAM). However, manipulating tiny magnetic effects for the purpose of performing logical operations is more difficult. In 2018, for instance, MIT researchers experimented with hydrogen ions as a possible basis for spintronic transistors. Their efforts were very preliminary; the team admitted that there was still a long way to go before a fully functioning spintronic transistor that could store and process information might be developed. 

Now, researchers at imec and Intel, led by PhD candidate Eline Raymenants, have created a spintronic logic device that can be fully controlled with electric current rather than magnetic fields. The Intel-imec team presented its work at the recent IEEE International Electron Devices Meeting (IEDM).

An electron’s spin generates a magnetic moment. When many electrons with identical spins are close together, their magnetic moments can align and join forces to form a larger magnetic field. Such a region is called a magnetic domain, and the boundaries between domains are called domain walls. A material can consist of many such domains and domain walls, assembled like a magnetized mosaic.

Devices can encode 0s and 1s in those domains. A domain pointing “up” could represent a 0, while a domain pointing “down” represents a 1. The Intel-imec device uses domains arranged single file along a nanoscale wire. The device then uses current to shift those domains and their walls along the wire, like cars along a train track.

The track meets the switch at a magnetic tunnel junction (MTJ). It’s similar to the read-heads of today’s hard disks, but the researchers have implemented a new type of MTJ that’s optimized to move the domain walls more quickly. The MTJs read information from the track and act as logic inputs. At IEDM, the researchers presented a proof of concept: several MTJs feeding an AND gate.
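
The logic scheme is simple enough to capture in a toy model. The sketch below treats the track as a list of bits, current pulses as shifts, and MTJ reads as lookups; it is purely conceptual, with names and positions of our own choosing, and it ignores the real magnetization dynamics.

```python
# Conceptual toy model of the scheme described above: domains along the
# nanowire "track" are a line of bits, a current pulse shifts them one
# position, and MTJs at fixed sites read the domains that feed a logic gate.
# This ignores the actual device physics entirely.

def shift(track, new_bit):
    """One current pulse: push a freshly written domain in at one end."""
    return [new_bit] + track[:-1]

def mtj_read(track, position):
    """An MTJ senses the domain orientation at its site (1 = 'down', 0 = 'up')."""
    return track[position]

def and_gate(track, mtj_positions):
    """AND of the domains under several MTJs, as in the proof of concept."""
    return int(all(mtj_read(track, p) for p in mtj_positions))

track = [0] * 8
for bit in [1, 1, 0, 1]:            # write a few domains and shift them in
    track = shift(track, bit)

print(track)                         # [1, 0, 1, 1, 0, 0, 0, 0]
print(and_gate(track, [0, 2, 3]))    # 1: all three sensed domains are 'down'
```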

The same MTJ is also where information is written into the track. To do this, the Intel-imec device uses the same technology that’s used in MRAM today. The device passes a spin-polarized current—most of whose electrons have spins pointing in one direction—through a magnetic domain. That current can realign the domain’s magnetization, creating or editing domain walls in the process.

It’s similar to racetrack memory, an experimental form of data storage that was first proposed over a decade ago. Racetrack memory also writes information into magnetic domains and uses current to shuttle those domains along a nanoscale wire, or “racetrack.” But the Intel-imec device takes advantage of advances in materials, allowing domain walls to move down the line far more quickly. This, the researchers say, is key for allowing logic.

Researchers so far have largely focused on optimizing those materials, according to Van Dai Nguyen, a researcher at imec. “But to build the full devices,” he says, “that’s been missing.”

The Intel-imec team is not alone. Earlier in 2020, researchers at ETH Zurich created a logic gate using domain-wall logic. Researchers at MIT also recently demonstrated a domain-wall-based artificial neuron. Like the Intel-imec device and like racetrack memory, these devices use current to shift domains down the line.

But the Zurich and MIT devices rely on magnetic fields to write information. For logic, that’s not ideal. “If you build a logic circuit,” says Iuliana Radu, a researcher at imec, “you’re not going to put…a huge magnet that you change direction or switch on and off to implement the logic.” Full electrical control, Radu says, will also allow the Intel-imec device to be connected to CMOS circuits.

The researchers say their next steps will be to show their device in action. They’ve designed a majority gate, which returns a positive result if the majority of its inputs are positive. Radu, however, says that they have yet to really explore this design. Only then will the researchers know how their spintronic logic will fare against the CMOS establishment.

U.S. Takes Strategic Step to Onshore Electronics Manufacturing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/us-takes-strategic-step-to-onshore-electronics-manufacturing

Late last week, the U.S. Congress passed the annual policy law that guides the U.S. Defense Department. Tucked away inside the National Defense Authorization Act of 2021 (NDAA) are provisions that supporters hope will lead to a resurgence in chip manufacturing in the United States. The provisions include billions of dollars of financial incentives for construction or modernization of facilities “relating to the fabrication, assembly, testing, advanced packaging, or advanced research and development of semiconductors.”

 

The microelectronics incentives in the law stem from U.S. officials’ concerns about China’s rapidly growing share of the global chipmaking industry and the United States’ shrinking stake. The legislation frames that as an issue of U.S. national security.

Although China does not hold a technological lead in chipmaking, its geographic proximity to those who do worries some in the United States. Today, foundries using the most advanced manufacturing processes (currently the 5-nanometer node) are operated by Samsung in South Korea and by Taiwan Semiconductor Manufacturing Company (TSMC) in Taiwan and nowhere else.

Both companies provide foundry services, manufacturing chips for U.S.-based tech giants like Nvidia, AMD, Google, Facebook, and Qualcomm. For years, Intel was their match and more in terms of manufacturing technology, but the company has struggled to move to new processes.

Admittedly, there are already some moves toward state-of-the-art foundry construction in the United States even in the absence of the NDAA. TSMC announced in May that it planned to spend up to $12 billion over nine years on a new 5-nanometer fab in Arizona. The TSMC board of directors took the first step in November when they approved $3.5 billion to establish a wholly-owned subsidiary in the state.

But the Semiconductor Industry Association, the U.S. trade group, says government incentives will accelerate construction. The SIA calculates that a $20-billion incentive program over 10 years would yield 14 new fabs and attract $174 billion in investment versus 9 fabs and $69 billion without the federal incentives. A $50-billion program would yield 19 fabs and attract $279 billion.

The NDAA specifies a cap of $3 billion per project unless Congress and the President agree to more, but how much money actually gets spent in total on microelectronics capacity will depend on separate “appropriations” bills.

“The next step is for leaders in Washington to fully fund the NDAA’s domestic chip manufacturing incentives and research initiatives,” said Bob Bruggeworth, chair of SIA and president, CEO, and director of RF-chipmaker Qorvo, in a press release.

Getting the NDAA’s microelectronics and other technology provisions funded will be one of IEEE USA’s top priorities in 2021, says the organization’s director of government relations, Russell T. Harrison.

Beyond financial incentives, the NDAA also authorizes microelectronics-related R&D, development of a “provably secure” microelectronics supply chain, the creation of a National Semiconductor Research Technology Center to help move new technology into industrial facilities, and establishment of committees to create strategies toward adding capacity at the cutting edge. It also authorizes quantum computing and artificial intelligence initiatives.

The NDAA “has a lot of provisions in it that are very good for IEEE members,” says Harrison.

The semiconductor strategy and investment portion of the law began as separate bills in the House of Representatives and the Senate. In the Senate, it was called the American Foundries Act of 2020, and was introduced in July. The act called for $15 billion for state-of-the-art construction or modernization and $5 billion in R&D spending, including $2 billion for the Defense Advanced Research Projects Agency’s Electronics Resurgence Initiative. In the House, the bill was called the CHIPS for America Act. It was introduced in June and offered similar levels of R&D funding.

Some in industry objected to early conceptions of the legislation, believing them to be too narrowly focused on cutting-edge silicon CMOS. Industry lobbied Congress to make the law more inclusive—potentially allowing for expansion of facilities like SkyWater Technology’s 200-mm fab in Bloomington, Minn.

The language in later versions of the bill signals that the government “still wants to pursue advanced nodes but that they understand that we have an existing manufacturing capability in the U.S. that needs support and can still play a big role in making us competitive,” says John Cooney, director of strategic government relations at SkyWater.

Realizing that little legislating was likely to happen in an election year, supporters chose to try to fold the microelectronics language into the NDAA, which is considered a must-pass bill and had a 59-year bipartisan streak going into December. President Trump vetoed the NDAA last month, but Congress quickly overrode the veto at the start of January.

“What we’ve seen increasingly over the last nine months, is that there is a bicameral, bipartisan consensus building in Congress that the United States needs to do more to promote technology and technology research [domestically],” says Harrison.

“All of this is a huge step in the right direction, and we’re really excited about it,” says SkyWater’s Cooney. “But it is just the first step to be competitive.”

The U.S. move is just one among a series of maneuvers taking place globally as countries and regions seek to build up or regain chipmaking capabilities. China has been on an investment streak through its Made in China 2025 plan. In December, Belgium, France, Germany, and 15 other European Union nations agreed to jointly bolster Europe’s semiconductor industry, including moving toward 2-nanometer node production. The money for this would come from the 145-billion-euro portion of the EU’s pandemic recovery fund set aside for “digital transition.”

Intel’s Stacked Nanosheet Transistors Could Be the Next Step in Moore’s Law

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/intels-stacked-nanosheet-transistors-could-be-the-next-step-in-moores-law

The logic circuits behind just about every digital device today rely on a pairing of two types of transistors—NMOS and PMOS. The same voltage signal that turns one of them on turns the other off. Putting them together means that electricity should flow only when a bit changes, greatly cutting down on power consumption. These pairs have sat beside each other for decades, but if circuits are to continue shrinking they’re going to have to get closer still. This week, at the IEEE International Electron Devices Meeting (IEDM), Intel showed a different way: stacking the pairs so that one is atop the other. The scheme effectively cut the footprint of a simple CMOS circuit in half, meaning a potential doubling of transistor density on future ICs.
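
The complementary pairing is worth spelling out. The sketch below is a textbook idealized-switch model of a CMOS inverter, nothing specific to Intel’s stacked nanosheet devices; it simply shows why only one transistor of the pair conducts at a time, so no static current flows.

```python
# Idealized-switch model of the CMOS pairing described above: the same input
# that turns the NMOS on turns the PMOS off, so there is never a static path
# from supply to ground. Textbook behavior, not anything specific to the
# stacked nanosheet devices.

def inverter(vin):
    nmos_on = bool(vin)       # NMOS conducts when the input is high
    pmos_on = not vin         # PMOS conducts when the input is low
    return {
        "output": int(pmos_on),              # pulled high by PMOS, low by NMOS
        "static_path": nmos_on and pmos_on,  # never True in steady state
    }

for vin in (0, 1):
    print(vin, inverter(vin))
```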

Bright Silicon LED Brings Light And Chips Closer

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/devices/bright-silicon-led-brings-light-and-chips-closer

For decades, researchers have tried to fashion an effective silicon LED, one that could be fabricated together with its chip. The quest matters because cheap, easily fabricated sources of infrared light would enable applications our mobile devices don’t yet have.

Silicon LEDs specialize in infrared light, making them useful for autofocusing cameras or measuring distances—abilities that most phones now have. But virtually no electronics use silicon LEDs, instead opting for more expensive materials that have to be manufactured separately.

However, prospects for the elusive, light-emitting, silicon-based diode may be looking up. MIT researchers, led by PhD student Jin Xue, have designed a functional CMOS chip with a silicon LED, manufactured by GlobalFoundries in Singapore. They presented their work at the recent IEEE International Electron Devices Meeting (IEDM).

The chief problem to date has been that, to be blunt, silicon isn’t a very good LED material.

An LED consists of an n-type region, rich in excited free electrons, joined to a p-type region containing positively charged “holes” for those electrons to fill. As electrons plop into those holes, they drop to lower energy levels, releasing the difference in energy. Standard LED materials like gallium nitride or gallium arsenide are direct bandgap materials, whose electrons are powerful emitters of light.

Silicon, on the other hand, is an indirect bandgap material. Its electrons tend to turn that energy into heat, rather than light. That makes silicon LEDs slower and less efficient than their counterparts. Silicon LED makers must find a way around that indirect bandgap.
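
A quick calculation shows why silicon’s band-to-band emission lands in the infrared. The sketch below uses the standard relation between photon energy and wavelength plus textbook bandgap values; it is only an illustration, not data from the MIT work.

```python
# Why silicon LEDs emit infrared light: band-to-band emission has a photon
# energy set by the bandgap, and wavelength = hc / E. The bandgap values are
# standard textbook numbers, used purely for illustration.

HC_EV_NM = 1239.84    # Planck's constant times the speed of light, in eV*nm

bandgaps_ev = {
    "silicon (indirect)": 1.12,
    "GaAs (direct)": 1.42,
    "GaN (direct)": 3.4,
}

for material, eg in bandgaps_ev.items():
    print(f"{material:20s} -> ~{HC_EV_NM / eg:4.0f} nm")
# silicon: ~1107 nm (infrared); GaAs: ~873 nm (near infrared); GaN: ~365 nm (near UV)
```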

One way might be to alloy silicon with germanium. Earlier this year, in fact, a group at Eindhoven University of Technology in the Netherlands fashioned a silicon-based laser out of a nanowire-grown silicon-germanium alloy. Their tiny laser, they reported, might one day send data cheaply and efficiently from one chip to another.

That’s one, perhaps elaborate, approach to the problem. Another has been considered for more than 50 years: operating silicon LEDs in what’s called reverse-biased mode. Here, the voltage is applied opposite to the direction that would normally allow current to flow. This changeup prevents electrons from filling their holes until the electric field reaches a critical intensity. Then, the electrons accelerate with enough zeal to knock other electrons loose, multiplying the current into an electrical avalanche. LEDs can harness that avalanche to create bright light, but they need voltages several times higher than the norm for microelectronics.

Since the turn of the millennium, other researchers have tinkered with forward-biased LEDs, in which electrons flow easily and uninterrupted. These LEDs can operate at 1 volt, much closer to the operating voltage of a transistor in a typical CMOS chip, but they’ve never been bright enough for consumer use.

The MIT-GlobalFoundries team followed the forward-biased path. The key to their advance is a new type of junction between the n-type and p-type regions. Previous silicon LEDs placed the two side-by-side, but the MIT-GlobalFoundries design stacks the two vertically. That shoves both the electrons and their holes away from the surfaces and edges. Doing that discourages the electrons from releasing energy as heat, channelling more of it into emitting light.

“We’re basically suppressing all the competing processes to make it feasible,” says Rajeev Ram, one of the MIT researchers. Ram says their design is ten times brighter than previous forward-biased silicon LEDs. That’s still not bright enough to be rolled out into smartphones quite yet, but Ram believes there are more advances to come.

Sonia Buckley, a researcher at the U.S. National Institute of Standards and Technology (NIST) who isn’t part of the MIT-GlobalFoundries research group, says these LEDs prioritize power over efficiency. “If you have some application that can tolerate low efficiencies and high power driving your light source,” she says, “then this is a lot easier and, likely, a lot cheaper to make” than present LEDs, which aren’t integrated with their chips.

That application, Ram thinks, is proximity sensing. Ram says the team is close to creating an all-silicon system that could tell a phone how far away its surroundings are. “I think that might be a relatively near-term application,” he says, “and it’s certainly driving the collaboration we have with GlobalFoundries.”

Atoms-Thick Transistors Get Faster Using Less Power

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/tech-talk/semiconductors/materials/atomsthick-transistors-get-faster-using-less-power

For post-silicon electronics, engineers have been doubling down on research aimed at making transistors from atoms-thick two-dimensional materials. The most famous one is graphene, but experts believe that 2D semiconductors such as molybdenum disulfide and tungsten disulfide might be better suited for the job. Graphene lacks a bandgap, the property that makes a material a semiconductor.

Now, by combining graphene and MoS2, researchers have made a transistor that operates at half the voltage and has a higher current density than any state-of-the-art 2D transistor previously demonstrated. This should slash the power consumption of integrated circuits based on these 2D devices.

“We were able to fully explore the untapped potential of 2D materials to make a transistor that shows better performance in terms of energy consumption and switching speed,” says Huamin Li, an electrical engineering professor at the University at Buffalo who presented the device at the IEEE International Electron Devices Meeting (IEDM).

Interestingly, the device takes advantage of graphene’s lack of a bandgap. In a transistor, a voltage at the gate electrode injects charge carriers into the channel region to create a conductive path between the source and drain electrodes. Conventional silicon transistors and 2D MoS2 transistors rely on the emission of high-energy “hot” electrons from the source, which places a fundamental limit of 60 millivolts of gate voltage for each ten-fold increase in the drain current (60 mV/decade).
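
That 60 mV/decade figure is the room-temperature thermionic (Boltzmann) limit on subthreshold swing, ln(10)·kT/q. The quick check below recomputes it and compares it with the 29 mV/decade result described below; the code is our own illustration of a standard formula, not material from the paper.

```python
# The "60 mV per decade" figure is the room-temperature thermionic limit on
# subthreshold swing, ln(10) * kT / q. Quick check of that number, and of how
# the reported 29 mV/decade cold-source device compares.

import math

k = 1.380649e-23       # Boltzmann constant, J/K
q = 1.602176634e-19    # elementary charge, C
T = 300.0              # roughly room temperature, K

limit_mv = math.log(10) * k * T / q * 1e3   # about 59.5 mV/decade
print(f"thermionic limit at {T:.0f} K: {limit_mv:.1f} mV/decade")
print(f"reported device: 29 mV/decade, about {limit_mv / 29:.1f}x steeper than the limit")
```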

But graphene, with no bandgap, acts as a “cold” electron source, Li says. That means less energy is required to send electrons out across the channel region to the drain electrode. The result: The device current can be switched on and off more rapidly.

“Using this unique mechanism, we were able to break the fundamental limit of switching,” Li says. The group’s 1-nanometer-thick transistor needs only 29 mV to achieve that 10-fold change in device current. “We use less voltage to switch the device and control more current, so our transistor is much more energy efficient.”

HPE Invents First Memristor Laser

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/hpe-invents-first-memristor-laser

Researchers at Hewlett Packard Labs, where the first practical memristor was created, have invented a new variation on the device—a memristor laser. It’s a laser that can have its wavelength electronically shifted and, uniquely, hold that adjustment even if the power is turned off. At the IEEE International Electron Devices Meeting, the researchers suggest that, in addition to simplifying photonic transceivers for data transmission between processors, the new devices could form the components of superefficient brain-inspired photonic circuits.

Scaled-Down Carbon Nanotube Transistors Inch Closer to Silicon Abilities

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/devices/scaleddown-carbon-nanotube-transistors-inch-closer-to-silicon-abilities

Carbon nanotube devices are getting closer to silicon’s abilities thanks to a series of developments, the latest of which was revealed today at the IEEE International Electron Devices Meeting (IEDM). Engineers from Taiwan Semiconductor Manufacturing Company (TSMC), the University of California San Diego, and Stanford University explained a new fabrication process that leads to better control of carbon nanotube transistors. Such control is crucial to ensuring that transistors, which act as switches in logic circuits, turn fully off when they are meant to. Interest in carbon nanotube transistors has accelerated recently, because they can potentially be shrunk down further than silicon transistors can and offer a way to produce stacked layers of circuitry much more easily than can be done in silicon.

The team invented a process for producing a better gate dielectric. That’s the layer of insulation between the gate electrode and the transistor channel region. In operation, voltage at the gate sets up an electric field in the channel region that cuts off the flow of current. As silicon transistors were scaled down over the decades, however, that layer of insulation, which was made of silicon dioxide, had to become thinner and thinner in order to control the current using less voltage, reducing energy consumption. Eventually, the insulation barrier was so thin that charge could actually tunnel through it, leaking current and wasting energy.
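
The scaling pressure on that insulating layer comes straight from parallel-plate electrostatics. The sketch below uses textbook constants and illustrative thicknesses, none of it from the TSMC paper, to show how capacitance per unit area, and hence gate control, rises as the oxide thins:

```python
# Why the gate dielectric had to keep thinning: in a simple parallel-plate
# model, gate capacitance per unit area is epsilon / thickness, and more
# capacitance means the gate controls the channel with less voltage.
# Constants are textbook values; the thicknesses are illustrative.

EPS0 = 8.854e-12    # vacuum permittivity, F/m
K_SIO2 = 3.9        # relative permittivity of silicon dioxide

def oxide_cap_uf_per_cm2(thickness_nm, k=K_SIO2):
    c_per_m2 = k * EPS0 / (thickness_nm * 1e-9)   # F/m^2
    return c_per_m2 * 100                          # 1 F/m^2 = 100 uF/cm^2

for t_nm in (5.0, 2.0, 1.0):
    print(f"SiO2 at {t_nm:.1f} nm -> {oxide_cap_uf_per_cm2(t_nm):.1f} uF/cm^2")
# Below roughly 1-2 nm, though, electrons tunnel straight through the film,
# which is the leakage problem described above.
```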

New Hollow-core Optical Fiber Is Clearer Than Glass

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/new-hollow-core-optical-fiber-is-clearer-than-glass

Conventional fiber optics guide light through solid glass cores that are extremely transparent. The clearest fibers have loss as low as 0.142 decibel per kilometer, which means that more than one percent of the input light remains after a hundred kilometers. Yet solid glass fibers cannot carry very high powers, particularly in short pulses, limiting their use for applications including delivering intense light for laser machining.
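
A quick check of that arithmetic, applying nothing more than the definition of the decibel (our own illustrative calculation):

```python
# Quick check of the attenuation arithmetic above, straight from the
# definition of the decibel: transmitted fraction = 10 ** (-loss_dB / 10).

def fraction_remaining(loss_db_per_km, length_km):
    return 10 ** (-(loss_db_per_km * length_km) / 10)

frac = fraction_remaining(0.142, 100)
print(f"after 100 km at 0.142 dB/km, {frac:.1%} of the light remains")   # ~3.8%
```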

Now a team at the University of Southampton’s Optoelectronics Research Center has made hollow-core fibers that are clearer than solid-core fibers at some important wavelengths. They report their results in Nature Communications.

Interest in hollow-core fibers initially took off in 1998 after Philip St. John Russell, then at the University of Bath in the UK, showed that microstructured optical fibers could guide light within a hollow core by blocking transmission through layers around the core at some wavelengths. However, the air-filled microstructures in the fiber were very complex, and these “photonic bandgap” fibers transmitted too limited a range of wavelengths to be practical.

In recent years, Francesco Poletti at Southampton has pioneered a new range of experimental fibers that depart from Russell’s basic idea by limiting the microstructure only to the hollow core of the fiber.  Poletti’s 2014 design spaced several pairs of nested tubes, like small straws placed inside larger ones, around the outside of the central core.   

“Lumenisity Ltd., a startup from our group, has already installed [fibers with the new design] in optical cables running live traffic,” Poletti said in an email. Light travels 50% faster in air than in glass, so the initial market for hollow-core fibers is short data links requiring extreme speed. He cites financial transfers, 5G, and links inside and between data centers. It’s a potential boon for high-frequency securities traders. However, the new fiber has yet to match the transmission of solid-core fibers in the telecommunications band at 1550 nm.

Now Poletti’s group has developed new hollow-core fibers that match or even beat the transparency of solid-core fibers in key wavelengths outside the telecommunications band, where glass is most transparent. They tailored fibers for 660 and 850 nm photons (these bands in the red and near infrared are widely used in biology and quantum networks) as well as for 1064 nm (another near infrared band that’s popular for industrial laser machining).

He says the new technology could offer advantages over conventional solid-core fiber for high-precision sensors, laser beam delivery, and time-frequency measurement. A handful of vendors already sell older types of hollow-core fibers that deliver the rapid series of ultrashort laser pulses used to cut glass for smartphones. “There is no alternative to hollow-core fibers because solid-core fibers cannot compete,” Poletti says. The intense light pulses have to be kept out of the glass to avoid damaging it.

So far, glass cutting is a small share of the overall market. But he says his group’s new hollow-core fibers can transport “kilowatt scale continuous powers in a fundamental mode over tens or hundreds of meters, and could soon challenge solid-core delivery fibers in the much bigger market of [industrial machining with] high average power lasers.” Analysts put sales well into the billions of dollars.

The new hollow-core fibers have other advantages besides damage resistance and faster light speed. Keeping light out of the glass avoids nonlinear effects that can distort signals, and it improves the quality of the transmitted laser beam.

Fundamental limits of the new fibers are unknown. Light scattering dominates the loss of solid-core fibers at wavelengths shorter than the telecommunications band. “Up to a few months ago, we thought that scattering of light at the air-glass interfaces [in hollow-core fibers] would eventually dominate loss at those shorter wavelengths,” Poletti says. But the loss keeps dropping as they improve fiber fabrication, and he now speculates that in a few years the new hollow fibers might be ten times clearer in that band than the fundamental limit of solid-core fibers.

Adhesives can be effective to handle variations in thermal expansion

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/adhesives-can-be-effective-to-handle-variations-in-thermal-expansion

Forming reliable bonds between different materials can be challenging because there can be large variations in CTEs (coefficients of thermal expansion). Adhesive compounds play a critical role in the fabrication of assemblies for electronic, optical, and mechanical systems. Learn more about CTEs in this paper.

Download now to learn more.

Semiconductor Industry Forecast: Sunny and Bright with Few Clouds in Sight

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/semiconductors/processors/semiconductor-industry-forecast-sunny-and-bright-with-few-clouds-in-sight

“I’ve never seen a better time for this industry,” said Mark Edelstone. “Chips are cool again.”

Edelstone, who is chairman of global semiconductor investment banking for Morgan Stanley, and has some 30 years of experience in the chip business, was speaking on a panel at the annual semiconductor forum held (virtually this year) by startup incubator Silicon Catalyst. He was not alone in his assessment.

“The market is hot,” said fellow panelist Ann Kim, managing director and head of the frontier technology group for Silicon Valley Bank. “There is a strong funding environment and the cost of capital is low. Venture capital funds have over $150 billion of dry powder. Companies in semiconductor space are raising massive growth rounds…[and] semiconductor entrepreneurs should be attacking the market right now.”

The reason for such a sunshiny outlook? Surprisingly, it’s due in large part to winds that changed as a result of the coronavirus storm.

Said Edelstone: “The shift to the cloud and work from home are significant trends right now, catalyzed by COVID.”

Both are fueling demand for semiconductors, he indicated. In particular, the acceleration of the move by companies from their own computing infrastructure to cloud services will continue post-pandemic. “We are only about 10 percent of the way there in terms of what can move to the cloud.”

The fact that the pandemic has proven to be more of a boost than a drag on semiconductor industry fortunes was not exactly expected.

Said Jodi Shelton, cofounder and CEO of the Global Semiconductor Alliance: “I don’t think any of us anticipated how well things would hold up. People [initially] talked about a lot of cost-cutting, but by May and June, the attitude changed.”

“The pandemic has been toxic for the economy,” she said, “but Nasdaq and the Dow hit records, there are record deals, record amounts of money being raised….You wouldn’t know we are in a pandemic if you look at those numbers.”

The (Brief) Pandemic Pause

Kim also reported that the initial fears by semiconductor company leaders were short-lived. “We spoke to the VC-backed companies that we work with at the beginning of sheltering in place,” she said. “There was definitely a pause. Management teams had to take a fresh look at their runways; people rushed to the capital market to close equity rounds. But then they realized that debt is available and cheap, and they can use debt as a safety blanket.”

“We thought we would see a U-shaped curve,” said Edelstone. He was referring to a rapid drop and then a slow period for semiconductor companies before a recovery—one that would be “easier to come out of than the dot-com crash.”

“But to see how it has shot up has been amazing,” he said.

Contributing to that extremely short pause and quick turnaround has been the ability of the semiconductor industry to pivot to remote work, a more daunting challenge than the adjustment faced by the software-centric tech companies.

“The thing that has impressed me the most is just how productive everybody has been,” Edelstone said. “All of our companies in our industry are operating virtually, designing these complex devices and taping them out relatively on schedule. It has been incredible to watch the resiliency that technology has been able to deliver.”

Industry executives had been somewhat worried that the boom was due to inventory stockpiling, and were concerned that either manufacturing issues due to the virus or trade issues with China would cause the companies that use semiconductors in their products to order more devices earlier, panelists reported. But that seems to not be the case. Said Shelton: “It seems a lot of inventory has been burned off.”

The China Question

Those trade issues still form the hint of a cloud on the horizon, panelists indicated. “Things with China may likely get worse before they get better,” said Shelton. And the tensions between the U.S. and China could spread to Taiwan, affecting TSMC. With TSMC being such a dominant manufacturer in terms of both technical capabilities and market share, any disruption there would ripple throughout the entire industry.

So, said Shelton, the question is, “Can we reset the relationship with China, dialing down the rhetoric and moving towards a solution? The Biden administration will have pressure placed on it to remain tough on China. And we have six weeks more of the Trump administration; there are things they could do [with regards to China] that would be hard to undo.”

Beyond the COVID Era

After the pandemic, when the world settles back to normal—or a new normal—what will the chip industry look like? That was the question posed by moderator Don Clark, a New York Times contributing journalist.

Said Shelton: “We are very optimistic about the future. The industry has a lot of room to grow.”

But the way the industry grows may be different than in the past, when semiconductor companies competed with each other to create the most powerful or lowest-power processors.

Instead, said Edelstone, “I think we will increasingly see semiconductors as a core competency for companies. They start with software, develop a chip to drive it, and go to market as a systems company.”

Consider Nvidia, he said. Even 10 years ago, “[Nvidia CEO] Jensen [Huang] would have said that it is not a semiconductor company, it is a full stack, end to end, solution provider. That is where the future is going to be,” not, he indicated, in bringing a $10 product to market as a standalone semiconductor company.

Learn the fundamentals of brushless DC motors (BLDC motors)

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/learn-the-fundamentals-of-brushless-dc-motors-bldc-motors

Machines ranging from simple devices to complex equipment make use of brushless DC motors that convert electrical energy into rotational motion.  

  Basics of BLDC motors include: 

  • A comparison of brushed and brushless DC motors 
  • Simulating BLDC motors to observe the back-EMF profile 
  • Six-step commutation (see the sketch after this list) 
  • Motor inner workings and torque generation 
  • How three-phase inverters work 
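
The six-step commutation mentioned in the list above is compact enough to sketch. The table below shows one conventional step ordering, purely as an illustration that is not taken from the white paper; a real controller selects the step from Hall-sensor or back-EMF feedback rather than from a known rotor angle.

```python
# Minimal sketch of six-step (trapezoidal) commutation: in each 60-electrical-
# degree step, one motor phase is driven high, one is driven low, and one is
# left floating. One conventional ordering; real controllers derive the step
# from Hall-sensor or back-EMF feedback rather than a known rotor angle.

SIX_STEP = [
    # (high phase, low phase, floating phase)
    ("A", "B", "C"),
    ("A", "C", "B"),
    ("B", "C", "A"),
    ("B", "A", "C"),
    ("C", "A", "B"),
    ("C", "B", "A"),
]

def commutate(electrical_angle_deg):
    """Pick the drive step for a given electrical rotor angle."""
    return SIX_STEP[int(electrical_angle_deg % 360 // 60)]

for angle in range(0, 360, 60):
    high, low, floating = commutate(angle)
    print(f"{angle:3d} deg: {high}+ {low}-  ({floating} floating)")
```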


Advanced Wide Band Gap High-Power Semiconductor Measurement Techniques Webinar

Post Syndicated from Keysight original https://spectrum.ieee.org/semiconductors/devices/advanced-wide-band-gap-highpower-semiconductor-measurement-techniques-webinar

In this webcast, Keysight Technologies will explain new measurement techniques, technologies and equipment that can meet the tough characterization challenges presented by wide band gap semiconductor devices.

MIT Spinoff Building New Solid-State Lidar-on-a-Chip System

Post Syndicated from Josué J. López original https://spectrum.ieee.org/tech-talk/semiconductors/design/kyber-photonics-solid-state-lidar-on-a-chip-system

To enable safe and affordable autonomous vehicles, the automotive industry needs lidar systems that are around the size of a wallet, cost one hundred dollars, and can see targets at long distances with high resolution. With the support of DARPA, our team at Kyber Photonics, in Lexington, Mass., is advancing the next generation of lidar sensors by developing a new solid-state, lidar-on-a-chip architecture that was recently demonstrated at MIT. The technology has an extremely wide field of view, a simplified control approach compared to the state-of-the-art designs, and has the promise to scale to millions of units via the wafer-scale fabrication methods of the integrated photonics industry.

 

Light detection and ranging (lidar) sensors hold great promise for allowing autonomous machines to see and navigate the world with very high precision. But current technology suffers from several drawbacks that need to be addressed before widespread adoption can occur. Lidar sensors provide spatial information by scanning an optical beam, typically in the wavelength range between 850 and 1550 nm, and using the reflected optical signals to build a three-dimensional map of an area of interest. They complement cameras and radar by providing high resolution and unambiguous ranging and velocity information under both daytime and nighttime conditions.

This type of information is critical because it allows an autonomous vehicle to detect and safely navigate around other vehicles, cyclists, pedestrians, and any potentially dangerous obstacles on the road. However, before lidar can be widely used by autonomous vehicles and robots, lidar units need to be manufacturable at large volumes, have high performance, and cost two orders of magnitude less than current commercial systems, which cost several thousands of dollars apiece. Although there has been major progress in the industry over the past 10 years, there is yet to be a lidar sensor that can meet all of the industry needs simultaneously.

The challenges facing the lidar industry

The fully autonomous vehicles application space highlights the performance and development challenges facing the lidar industry. These requirements include cost on the order of US $100 per unit, ranging greater than 200 meters at 10 percent object reflectivity, a minimum field of view (FOV) of 120° horizontal by 20° vertical, 0.1° angular resolution, at least 10 frames per second with approximately 100,000 pixels per frame, and scalable manufacturing (millions of units per year).
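
A quick tally of what those numbers imply for data throughput (the frame-rate and pixel figures come from the requirements above; the full-grid cross-check is our own arithmetic):

```python
# Rough tally of the point throughput implied by the requirements above.
# Frame rate and pixels per frame come from the stated requirements; the
# full-grid cross-check at the end is our own arithmetic.

frames_per_second = 10
pixels_per_frame = 100_000
print(f"required point rate: {frames_per_second * pixels_per_frame:,} points/s")

# Cross-check: sampling the full 120 x 20 degree field of view on a 0.1 degree
# grid would be (120 / 0.1) * (20 / 0.1) = 240,000 beam positions per frame,
# the same order of magnitude as the ~100,000-pixel spec.
full_grid = int(120 / 0.1) * int(20 / 0.1)
print(f"full-grid positions per frame: {full_grid:,}")
```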

Any viable lidar solution must also demonstrate multi-year reliability under the guidelines established by the Automotive Electronics Council, which is the global coalition of automotive electronics suppliers. While many commercial lidar systems can meet some of these criteria, none have met all of the requirements to date. Our team at Kyber Photonics, which we founded in early 2020, aims to solve these challenges through a new integrated photonic design that we anticipate will meet these performance targets and leverage the manufacturing infrastructure built for the integrated circuits industry to meet volume and cost requirements.

To understand the benefits of our approach, it is first useful to survey currently available technology. To date, autonomous vehicles and systems have relied primarily on lidar that uses moving components to steer an optical beam. The most commonly used lidar designs comprise an array of lasers, optics, electronics, and detectors on a mechanically rotating stage. The need to assemble and align all of these parts leads to high costs and relatively low manufacturing volumes, and the wear and tear on the mechanical components raises questions about long-term reliability.

While these lidar systems, produced on scales of tens of thousands of units, have allowed the field of autonomous vehicles to make headway, they are not suitable for ubiquitous deployment. For these reasons, there is a strong push toward eliminating mechanical components and moving to compact designs that are more reliable and can be manufactured at higher volumes and lower unit cost.

Seeking a solid-state lidar-on-a-chip design

There are two concepts in lidar design that address the reliability and scalability challenges for this technology. First, “solid-state” describes a lidar sensor that has eliminated the mechanical modes of failure by using no moving parts. Second, “lidar-on-a-chip” describes a system with the lasers, electronics, detectors, and optical beam-steering mechanism all integrated onto semiconductor chips. Lidar-on-a-chip systems go beyond solid-state because these designs attempt to further reduce the size, weight, and power requirements by integrating all of the key components onto the smallest footprint possible.

This system miniaturization reduces assembly complexity and enables high-volume throughput that can result in a 10 to 100x cost reduction at scale. This is possible because lidar-on-a-chip architectures can fully leverage CMOS-compatible materials and the wafer-scale fabrication methods established by the semiconductor industry. Once a final solution is proven, the expectation is that, just as with the integrated circuits inside computers and smartphones, hundreds of millions of lidar units could be manufactured every year.

To realize the vision of ubiquitous lidar, there have been several emerging approaches to solid-state lidar and lidar-on-a-chip design. Some of the key approaches include:

Microelectromechanical Systems (MEMS)

  • This approach uses an assembly of optics and a millimeter-scale deflecting mirror for free-space beam steering of a laser or laser array. Although MEMS mirrors are fabricated using semiconductor wafer manufacturing, the architecture itself is not a fully integrated lidar-on-a-chip system, since optical alignment is still needed between the lasers, MEMS mirrors, and optics.
  • One trade-off in this design pits scanning speed and maximum steering angle against aperture/mirror size, which directly relates to detection range. The trade-off exists because a MEMS mirror acts like a spring: as its size and mass increase, its resonant frequency decreases, limiting scanning speed.

Liquid-Crystal Metasurfaces (LCM)

  • The architecture is similar to a MEMS system, but the MEMS mirror is replaced by a liquid-crystal metasurface (LCM). LCMs use an array of subwavelength scatterers mixed with liquid crystals to impart a tunable phase front onto a reflected laser beam, which allows beam steering.
  • LCM designs have large optical apertures and a wide FOV, but current modules are limited to 150-meter ranging. Like MEMS lidar systems, the LCM steering mechanism is built using a semiconductor wafer manufacturing process, but the system is not fully on-chip, since the laser input must be aligned at an angle to the surface of the LCM.

Optical Phased Arrays (OPA)

  • This approach is compatible with cheap, high-volume manufacturing of the optical portion of the system. It uses an array of optical antennas, each requiring electronic phase control, to generate constructive interference patterns that form a steerable optical beam. However, as the size of the emitting aperture increases, the electronic complexity grows and limits scaling, since thousands of active phase shifters must be controlled simultaneously.
  • The size of optical waveguides places practical fabrication constraints on the ideal antenna spacing (half a wavelength), which degrades the optical beam through an effect called aliasing and limits the usable horizontal FOV (see the sketch after this list). More recent designs use aperiodic waveguide spacing and different waveguide cross-sections to address these issues, but each introduces trade-offs, such as limitations on vertical scanning and a reduction of optical power in the main beam.
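
The half-wavelength spacing constraint in that last bullet can be made concrete with the textbook grating-lobe (aliasing) condition for a uniformly spaced array. The wavelength and pitch below are illustrative assumptions, not values from any particular OPA design.

```python
import math

# Grating-lobe (aliasing) condition for a uniformly spaced emitter array.
# Illustrative values only: a telecom-band wavelength and a waveguide pitch
# that is difficult to shrink much further in practice.
wavelength_um = 1.55
pitch_um = 2.0

if pitch_um <= wavelength_um / 2:
    # Half-wavelength spacing (or tighter): no grating lobes anywhere,
    # so the full +/-90 degree horizontal range is usable in principle.
    print("Alias-free over the full field of view")
elif wavelength_um / pitch_um <= 1:
    # With the beam at broadside, the first grating lobe (alias) appears at
    # sin(theta) = wavelength / pitch, which caps the usable horizontal FOV.
    theta_alias = math.degrees(math.asin(wavelength_um / pitch_um))
    print(f"First grating lobe at about {theta_alias:.0f} degrees from broadside")
else:
    print("No grating lobe at broadside, but one appears once the beam is steered far enough")
```

Shrinking the pitch toward half a wavelength removes the alias, but as noted above, the waveguide dimensions make that spacing impractical to fabricate.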

Each of these approaches has shown an improvement over conventional mechanical lidar in metrics such as range, reliability, and cost, but it is still unclear whether any of them can hit all of the requirements for fully autonomous vehicles.

A novel beam-steering lidar architecture

Recognizing these challenges, the Photonic and Modern Electro-Magnetics Group at MIT and researchers at MIT Lincoln Laboratory collaborated to develop an alternative approach to solid-state optical beam steering that could solve the remaining challenges for lidar. A point of inspiration was the Rotman lens, invented in the 1960s to enable passive beam-forming networks in the microwave regime without the need for active electronic phase control. The Rotman lens receives microwave radiation from multiple inputs along an outer focal arc of the lens. The lens spreads the microwaves along an inner focal arc, where an array of delay lines imparts a phase shift to the microwaves to form a collimated beam.

While much of the intuition from the microwave regime translates to the optical and near-infrared domain, the length scales and available materials require a different lens design and system architecture. Over the past three years, the MIT-MIT Lincoln Laboratory team has designed, fabricated, and experimentally demonstrated a new solid-state beam steering architecture that works in the near-infrared.

This novel beam-steering architecture includes an on-chip planar lens for steering in the horizontal direction and a grating to scan in the vertical direction, as shown in Figure 2. The laser light is edge-coupled into an optical waveguide, which guides and confines light along a desired optical path. This initial waveguide feeds into a switch matrix that expands into a tree pattern. The switch matrix can comprise hundreds of photonic waveguides, yet requires only about 10 active phase shifters to route the light into the desired waveguide. Each waveguide provides a distinct path that feeds into the planar lens and corresponds to one beam in free space. The planar lens consists of a stack of CMOS-compatible materials tens of nanometers thick and is designed to collimate the light fed in by each waveguide. The collimation creates a flat wavefront that minimizes spreading, and therefore reduces loss, as the light propagates over long distances.

Since each waveguide is designed to feed the lens at a unique angle, selecting a particular waveguide makes the light enter the lens and exit at a specific angle, as seen in Figure 3. The system is designed such that the beams emitted from adjacent waveguides overlap in the far field. In this way, the entire horizontal field of view is covered by beams emitted from a discrete number of waveguides. For instance, a 100° FOV lidar system with 0.1° resolution can be designed using 1000 equally spaced waveguides along the focal arc of the lens and a sufficiently large aperture to generate the required beam width and overlap. This approach allows the design to meet the FOV and resolution requirements of lidar applications.
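The arithmetic behind that example is simple enough to restate directly; the snippet below uses only the numbers quoted above.

```python
# Waveguides needed to tile the horizontal field of view at the stated resolution
# (restating the example in the text; no new design values assumed).
horizontal_fov_deg = 100
resolution_deg = 0.1

num_waveguides = round(horizontal_fov_deg / resolution_deg)   # 1000 inputs along the focal arc
beam_pitch_deg = horizontal_fov_deg / num_waveguides          # adjacent beams overlap at ~0.1 deg

print(num_waveguides, beam_pitch_deg)   # 1000 0.1
```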

To control the vertical angle of the emitted beam, our design uses a wavelength selective component called a grating coupler. The grating is a periodic structure composed of straight or curved rulings that repeat every few hundred nanometers. The interface of each repeated ruling causes light to diffract and thus scatter out of the plane of the chip. Since the emission angle of the grating depends on the wavelength of light, tuning the laser frequency controls the angle at which the beam is emitted into free space. As a result, selecting the waveguide via the switch matrix steers the beam in the plane of the chip (horizontally), and tuning the wavelength of the laser steers the beam out of the plane of the chip (vertically).
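The wavelength-to-angle relationship follows the standard first-order grating-coupler phase-matching condition. In the sketch below, the effective index and grating period are assumed, round-number values (and dispersion is ignored), so the output illustrates the trend rather than describing our actual chip.

```python
import math

# First-order grating-coupler phase matching (emission into air):
#   sin(theta) = n_eff - wavelength / period
# n_eff and period are assumed values for a silicon nitride-style waveguide;
# the wavelength dependence of n_eff is ignored for simplicity.
n_eff = 1.8
period_um = 0.90

def emission_angle_deg(wavelength_um: float) -> float:
    return math.degrees(math.asin(n_eff - wavelength_um / period_um))

for wl in (1.50, 1.55, 1.60):
    print(f"{wl:.2f} um -> {emission_angle_deg(wl):+.1f} deg from vertical")
# Sweeping the laser over ~100 nm shifts the beam by several degrees vertically,
# which is the wavelength-steering effect described above.
```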

An alternative to time-of-flight detection

How does our planar lens architecture address key remaining challenges of optical beam steering for lidar applications? Like optical phased arrays (OPAs), the architecture takes advantage of CMOS-compatible fabrication techniques and materials used in recently established photonic foundries. Compared to OPAs, our lens-based design can have up to twice the horizontal FOV and also reduces electronic complexity by two orders of magnitude because we do not need hundreds of simultaneously active phase shifters.

Since the number of active phase shifters scales as the binary logarithm (log base 2) of the number of waveguides, our system requires approximately 10 phase shifters to select the waveguide input and steer the optical beam. To enable a full lidar system, the lens-based approach can also take advantage of lasers and detection schemes similar to those demonstrated with OPAs and other solid-state beam steering approaches. Combining all of these benefits enables a lidar-on-a-chip design that can lead to a low-cost sensor producible at high manufacturing volumes.
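If the switch matrix is treated as a plain binary tree of 1x2 switches (an assumption consistent with the log-base-2 scaling above, though the exact topology is not spelled out here), selecting a waveguide amounts to setting one switch per tree level, i.e., one bit of the target waveguide's index:

```python
import math

# Route light to one of N output waveguides through a binary tree of 1x2 switches.
# Assumption for illustration: a plain binary tree, so the switch settings along
# the path are simply the bits of the output index.
num_waveguides = 1000
levels = math.ceil(math.log2(num_waveguides))   # 10 switch levels -> ~10 active phase shifters

def switch_settings(target: int, depth: int = levels) -> list[int]:
    """Bit of the target index at each tree level, root first (0 = one arm, 1 = the other)."""
    return [(target >> (depth - 1 - level)) & 1 for level in range(depth)]

print(levels)                 # 10
print(switch_settings(613))   # [1, 0, 0, 1, 1, 0, 0, 1, 0, 1]
```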

To address the range requirement of 200 meters, we are designing our system for a coherent detection scheme called frequency modulated continuous wave (FMCW) rather than the more traditionally used time-of-flight (ToF) detection. FMCW has advantages in terms of the instantaneous velocity information it can provide and the lower susceptibility to interference from other light sources. Note that FMCW has been successfully demonstrated by several lidar companies with ranges beyond 300 meters, though those companies have yet to fully prove that their systems can be produced at low cost and high volumes.

Rather than emitting pulses of light, FMCW lidars emit a continuous stream of light from a frequency-modulated laser. Before being transmitted, a small fraction of the beam is split off and kept on the chip, then recombined with the main beam after that beam has been transmitted, reflected by an object, and received. Because the laser light is coherent, the two beams constructively and destructively interfere at the detector, generating a beat-frequency waveform that is detected electronically and contains both ranging and velocity information. Since a beat signal is only generated by coherent light sources, FMCW lidars are less sensitive to interference from sunlight and other lidars. Moreover, the detection process suppresses noise from other frequency bands, giving FMCW a sensitivity advantage over ToF.
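For readers who want to see how range and velocity fall out of the beat signal, here is a minimal sketch of the standard FMCW relations for a triangular (up/down) chirp. The chirp parameters and sign conventions are generic textbook assumptions, not our actual design values.

```python
# Standard FMCW lidar relations for a triangular (up/down) frequency chirp.
# All parameters are generic textbook assumptions, not a specific product's values.
c = 3.0e8                 # speed of light, m/s
wavelength = 1.55e-6      # laser wavelength, m
chirp_bandwidth = 1.0e9   # Hz swept during one chirp
chirp_duration = 10e-6    # s
slope = chirp_bandwidth / chirp_duration   # chirp slope, Hz/s

def beat_frequencies(range_m: float, velocity_mps: float) -> tuple[float, float]:
    """Beat frequencies on the up and down chirps for a target approaching at velocity_mps."""
    f_range = 2 * range_m * slope / c          # delay (range) term
    f_doppler = 2 * velocity_mps / wavelength  # Doppler term
    return f_range - f_doppler, f_range + f_doppler

def range_and_velocity(f_up: float, f_down: float) -> tuple[float, float]:
    """Invert the two measured beat frequencies back into range and velocity."""
    f_range = (f_up + f_down) / 2
    f_doppler = (f_down - f_up) / 2
    return f_range * c / (2 * slope), f_doppler * wavelength / 2

f_up, f_down = beat_frequencies(200.0, 30.0)   # 200 m target closing at 30 m/s
print(range_and_velocity(f_up, f_down))        # -> (200.0, 30.0)
```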

Proof of concept and next steps

To demonstrate the viability of the optical beam steering, a proof-of-concept chip was fabricated using the silicon nitride integrated photonic platform at MIT Lincoln Laboratory. Optical microscope images of the fabricated chips are shown in Figure 4. The planar-lens beam steering was tested using a fiber-coupled tunable laser with a wavelength range of 1500 to 1600 nm. This testing confirmed that the chip was capable of providing a 40° (horizontal) by 12° (vertical) FOV, as seen in Figure 5. Recently, we have improved our designs to enable a 160° (horizontal) by 20° (vertical) FOV so that we can better address the needs of lidar applications. These fabricated chips are undergoing testing to verify the expected beam-steering performance.

Figure 5. Animation compiled from infrared camera images showing the operation of the planar lens beam-steering chip: (a) beam steering via port selection and (b) beam steering via wavelength tuning. Images: Kyber Photonics

As Kyber Photonics moves forward with development, we are expanding from optical beam-steering demonstrations into the engineering of a complete lidar-on-a-chip system. The planar lens architecture will be tested in combination with the necessary frequency-modulated laser, detectors, and electronics to demonstrate ranging using FMCW detection.

Kyber Photonics will start by building a compact demonstration that uses a separately packaged laser that is fiber-coupled onto our chip and then move towards an on-chip laser solution. Many photonic foundries are adding lasers to their process development kits, so we plan to integrate and refine the on-chip laser solution at different stages of product development.

Subsequent steps will also include integrating the necessary application-specific integrated circuits to handle the photonic drivers, laser modulation, and receive signal processing. Our end goal is that within the next two to three years, Kyber Photonics will have a lidar-on-a-chip unit that is the size of a wallet and has a clear production path to meeting the demands of low cost, high reliability, high performance, and scalability for the autonomous vehicle industry and beyond.

About the Authors

Josué J. López is CEO and co-founder of Kyber Photonics and a 2020 Fellow of Activate, a two-year fellowship supported by DARPA that helps scientists bring their innovations to the marketplace.

Thomas Mahony is CTO and co-founder of Kyber Photonics and a 2020 Activate Fellow. 

Samuel Kim is co-founder of Kyber Photonics.

The authors would like to acknowledge key inventors and contributors to the technology described in this article. They include members of the Soljačić Group at MIT: Dr. Scott Skirlo, Jamison Sloan, and Professor Marin Soljačić; and MIT Lincoln Laboratory researchers: Dr. Cheryl Sorace-Agaskar, Dr. Paul Juodawlkis, and members of the Advanced Technology Division at Lincoln Laboratory.

Quantum Dot Paint Could Make Airframe Inspection Quick and Easy

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/quantum-dot-paint-could-make-airframe-inspection-quick-and-easy

Technicians of the future might be able to test airplane fuselages for airworthiness, check a bridge’s structural integrity, or inspect an intricate 3D-printed part for defects with only a quick scan from a camera.

Researchers at the Air Force Institute of Technology and Los Alamos National Laboratory are developing a paint containing quantum dots that could allow just that. Manufacturers could apply this paint to the surfaces of objects including vehicles, infrastructure, and spare parts, the researchers say. An inspector or a quality assurance specialist could rapidly gauge strain on those surfaces—how much they’ve been deformed—by analyzing light emitted by that paint.

To test surfaces today without destroying them in the process, technicians might apply a magnetic field, but this often requires unwieldy equipment and produces results that are complicated to interpret. They might also use X-rays, which come with radiation safety concerns. Other scanning methods, such as ultrasound or microwaves, are limited in the materials they can measure.

“Imagine holding a camera and seeing a 2D strain surface map on your screen,” says Michael Sherburne of the Air Force Institute of Technology, one of the researchers who developed the quantum dot–infused paint.

Quantum dots are particles of semiconductor a few nanometers wide. Their small size brings on some unique quantum effects, as if each quantum dot were an artificial atom. For instance, quantum dots emit light when they’re excited by electromagnetic radiation. That emitted light’s wavelength changes depending on a dot’s size; smaller dots emit bluer light.
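That size dependence can be estimated with the simplest quantum-confinement model, a particle in a box. The material parameters in the sketch below are rough, CdSe-like values chosen purely for illustration (the researchers have not specified their dot material here), and the model ignores Coulomb and surface effects, so treat the output as a trend rather than a calibration.

```python
import math

# Crude particle-in-a-box estimate of quantum-dot emission versus diameter.
# Rough, CdSe-like parameters for illustration only; Coulomb and surface effects ignored.
HBAR2_OVER_2M0 = 3.81    # eV * angstrom^2  (hbar^2 / 2 m_electron)
E_GAP_BULK_EV = 1.74     # bulk band gap, eV
M_EFF_ELECTRON = 0.13    # effective masses in units of the free-electron mass
M_EFF_HOLE = 0.45

def emission_wavelength_nm(diameter_nm: float) -> float:
    d_angstrom = diameter_nm * 10.0
    confinement_ev = (HBAR2_OVER_2M0 * math.pi**2 / d_angstrom**2) * (1 / M_EFF_ELECTRON + 1 / M_EFF_HOLE)
    energy_ev = E_GAP_BULK_EV + confinement_ev   # gap widens as the dot shrinks (~1/d^2)
    return 1239.84 / energy_ev                   # photon energy in eV -> wavelength in nm

for d in (3.0, 4.0, 6.0):
    print(f"{d:.0f} nm dot -> ~{emission_wavelength_nm(d):.0f} nm emission")
# Smaller dots emit at shorter (bluer) wavelengths, as described above.
```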

The “paint” consists of quantum dots suspended within a clear polymer. The dots emit green light when irradiated with ultraviolet light. Other quantum dot paints do exist, but they’re mostly used for aesthetics, such as to give objects a luminescent glow in the dark.

This paint’s quantum dots have an additional property: When they’re strained, the wavelength of their emitted light changes, becoming shorter (more blue) if they’re stretched and longer (more red) if they’re squeezed. The researchers envision a camera that uses a laser to bathe a painted surface with ultraviolet light, then captures the resulting emissions and creates a map of the surface’s strain.
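In essence, the envisioned camera would run a per-pixel version of the conversion below. The calibration slope here is a made-up placeholder, since no sensitivity figures are given in this article; a real system would use a measured wavelength-versus-strain curve for the specific paint.

```python
# Turn a measured shift of the quantum-dot emission peak into a strain estimate.
# Both numbers below are hypothetical placeholders, not published values.
REFERENCE_PEAK_NM = 530.0        # unstrained emission peak (assumed)
NM_PER_PERCENT_STRAIN = -2.0     # assumed slope: stretching shifts emission bluer, per the article

def strain_percent(measured_peak_nm: float) -> float:
    return (measured_peak_nm - REFERENCE_PEAK_NM) / NM_PER_PERCENT_STRAIN

print(strain_percent(529.0))     # ~0.5 (% tensile strain) for a 1 nm blue shift
```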

Ideas for using light to measure strain have been around for several decades, but using quantum dots for that purpose is new, says Kirk Schanze, a professor of chemistry at the University of Florida who has previously worked in the field. 

The paint does have its limitations, however. For one, very high temperatures would melt the polymer. And, of course, the quantum-dot coating can only measure strain that occurred after the paint was applied; if a technician coated an old aircraft with it, for example, the paint wouldn’t reveal any deformation that happened beforehand.

But Sherburne wants to address that latter problem. “This is definitely future work,” he says.

In addition to making an inspection process faster and easier, the paint could be more versatile than today's strain-sensing instruments. Manufacturers could apply it to 3D-printed parts with complex shapes that don't lend themselves well to today's strain-measuring techniques. Furthermore, a device could pipe UV light through fiber-optic cables, allowing scanning of surfaces in otherwise hard-to-reach locations.

Schanze calls the researchers’ work “an interesting proof of concept,” but he says the path to using the paint for full-scale measurements is not yet clear.

Sherburne agrees that any practical applications are still a long way off. One next step is constructing a camera system that scans in ultraviolet and measures the returning colors. Although the technology exists, the researchers have yet to develop it beyond a concept. So it will likely be more than a decade before maintenance workers start coating bridges in strain-sensing paint and scanning them with UV beams.

Optical Conversion Tech Images Infrared “In Color”

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/turning-infrared-into-new-colors

Mid-infrared (mid-IR) light is rich with information. For cameras sensitive to this portion of the spectrum, cancer cells are sometimes easily spotted. The earth under our feet glows like a campfire. Many “invisible” gases, like carbon dioxide, methane, and propane, are also very much visible in mid-IR.

This new realm of “colors” not available to the human eye may ultimately be coming to your smartphone camera, too. That's if the research performed by Michael Mrejen, a research scientist at the Femto-Nano Laboratory at Tel Aviv University, can be scaled up to commercial-grade consumer technology.

He says the device they’ve begun to develop could “bring ‘color’ mid-IR imaging to the masses and [offer] a window to a whole new wealth of data not available so far to the public.”

They can do this, he says, by “leveraging the widespread availability of cheap, high resolution, fast and efficient silicon-based [visible light] color sensors.” 

Today's conventional mid-IR sensors are expensive and insensitive; they must operate below room temperature and lack the high resolution of mass-produced silicon camera chips.

Yet Mrejen and colleagues are pioneering a new technique that shifts mid-IR light to visible wavelengths that mass-produced silicon camera chips can detect. This way, smartphones and other portable cameras might be able to include mid-IR light in their images.

Wavelength, or frequency, shifting is a long-established technique in the radio spectrum, where signals have been shifted between frequency bands for more than a century. Light wavelengths were first shifted soon after the invention of the laser.

Today, the light from green laser pointers is produced by doubling the output frequency of invisible infrared light from tiny lasers. Yet shifting light to non-harmonic frequencies is more difficult, and shifting multiple wavelengths to higher frequencies—or equivalently, to shorter wavelengths—is even more difficult.

The problem is that different wavelengths travel at different speeds through nonlinear materials, causing light waves with different frequencies to drift out of phase with each other, so little of the light is converted to the visible.
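To get a feel for how severe that mismatch is, the sketch below computes the phase mismatch and the resulting coherence length for summing a mid-IR wavelength with a pump laser. The refractive indices are rough illustrative numbers for a generic nonlinear crystal, not exact values for any particular material.

```python
import math

# Sum-frequency conversion: 1/lambda_vis = 1/lambda_ir + 1/lambda_pump.
# Phase mismatch dk = 2*pi*(n_vis/l_vis - n_ir/l_ir - n_pump/l_pump); conversion
# only builds up over the coherence length L_c = pi/|dk|.
# Indices below are rough, illustrative values for a generic nonlinear crystal.
l_ir_um, l_pump_um = 4.0, 1.064
l_vis_um = 1.0 / (1.0 / l_ir_um + 1.0 / l_pump_um)

n_ir, n_pump, n_vis = 2.08, 2.23, 2.25   # assumed indices at the three wavelengths

dk_per_um = 2 * math.pi * (n_vis / l_vis_um - n_ir / l_ir_um - n_pump / l_pump_um)
coherence_length_um = math.pi / abs(dk_per_um)

print(f"Converted wavelength: {l_vis_um:.3f} um")        # ~0.84 um, detectable by silicon sensors
print(f"Coherence length: {coherence_length_um:.1f} um") # microns, not millimeters
# Without some form of phase matching (or the adiabatic approach described below),
# conversion that drifts out of phase after a few microns stays extremely inefficient.
```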

Until now, the only practical way to shift multiple wavelengths has been to convert them one at a time. But that requires very sophisticated cameras, too expensive for most users.

Seeking a better way to shift the mid-IR photons to a wavelength that silicon can see, Mrejen’s group relied on a nonlinear process his coauthor Haim Suchowski had previously developed that slowly changes the infrared waves into visible photons. This so-called adiabatic technique of frequency conversion, described in a recent issue of Laser and Photonics Reviews, compensates for the phase mismatch problem noted above. 

Suchowski says he hopes mass-produced special crystals and pump lasers (altogether costing a few hundred dollars) that use this adiabatic frequency conversion trick could be mounted on smartphones.

“We humans see between red and blue,” Suchowski said in a press release. “If we could see in the infrared realm, we would see that elements like hydrogen, carbon and sodium have a unique color. An environmental monitoring satellite that would take a picture in this region would see a pollutant being now emitted from a plant, or a spy satellite would see where explosives or uranium are being hidden. In addition, since every object emits heat in the infrared, all this information can be seen even at night.”