Tag Archives: Semiconductors/Design

MIT Spinoff Building New Solid-State Lidar-on-a-Chip System

Post Syndicated from Josué J. López original https://spectrum.ieee.org/tech-talk/semiconductors/design/kyber-photonics-solid-state-lidar-on-a-chip-system

To enable safe and affordable autonomous vehicles, the automotive industry needs lidar systems that are around the size of a wallet, cost on the order of one hundred dollars, and can see targets at long distances with high resolution. With the support of DARPA, our team at Kyber Photonics, in Lexington, Mass., is advancing the next generation of lidar sensors by developing a new solid-state, lidar-on-a-chip architecture that was recently demonstrated at MIT. The technology has an extremely wide field of view, a simplified control approach compared with state-of-the-art designs, and promises to scale to millions of units via the wafer-scale fabrication methods of the integrated photonics industry.

Light detection and ranging (lidar) sensors hold great promise for allowing autonomous machines to see and navigate the world with very high precision. But current technology suffers from several drawbacks that need to be addressed before widespread adoption can occur. Lidar sensors provide spatial information by scanning an optical beam, typically in the wavelength range between 850 and 1550 nm, and using the reflected optical signals to build a three-dimensional map of an area of interest. They complement cameras and radar by providing high resolution and unambiguous ranging and velocity information under both daytime and nighttime conditions.

This type of information is critical because it allows an autonomous vehicle to detect and safely navigate around other vehicles, cyclists, pedestrians, and any potentially dangerous obstacles on the road. However, before lidar can be widely used by autonomous vehicles and robots, lidar units need to be manufacturable at large volumes, have high performance, and cost two orders of magnitude less than current commercial systems, which cost several thousands of dollars apiece. Although there has been major progress in the industry over the past 10 years, no lidar sensor yet meets all of the industry's needs simultaneously.

The challenges facing the lidar industry

The fully autonomous vehicle application space highlights the performance and development challenges facing the lidar industry. These requirements include a cost on the order of US $100 per unit, ranging greater than 200 meters at 10 percent object reflectivity, a minimum field of view (FOV) of 120° horizontal by 20° vertical, 0.1° angular resolution, at least 10 frames per second with approximately 100,000 pixels per frame, and scalable manufacturing (millions of units per year).

Any viable lidar solution must also demonstrate multi-year reliability under the guidelines established by the Automotive Electronics Council, which is the global coalition of automotive electronics suppliers. While many commercial lidar systems can meet some of these criteria, none have met all of the requirements to date. Our team at Kyber Photonics, which we founded in early 2020, aims to solve these challenges through a new integrated photonic design that we anticipate will meet these performance targets and leverage the manufacturing infrastructure built for the integrated circuits industry to meet volume and cost requirements.

To understand the benefits of our approach, it is first useful to survey currently available technology. To date, autonomous vehicles and systems have relied primarily on lidar that uses moving components to steer an optical beam. The most commonly used lidar designs comprise an array of lasers, optics, electronics, and detectors on a mechanically rotating stage. The need to assemble and align all of these parts leads to high costs and relatively low manufacturing volumes, and the wear and tear on the mechanical components raises questions about long-term reliability.

While these lidar systems, produced on scales of tens of thousands of units, have allowed the field of autonomous vehicles to make headway, they are not suitable for ubiquitous deployment of lidar. For these reasons, there is a big push towards eliminating mechanical components and moving towards compact designs that are more reliable and can be manufactured at larger volumes and a lower unit cost.

Seeking a solid-state lidar-on-a-chip design

There are two concepts in lidar design that address the reliability and scalability challenges for this technology. First, “solid-state” describes a lidar sensor that has eliminated the mechanical modes of failure by using no moving parts. Second, “lidar-on-a-chip” describes a system with the lasers, electronics, detectors, and optical beam-steering mechanism all integrated onto semiconductor chips. Lidar-on-a-chip systems go beyond solid-state because these designs attempt to further reduce the size, weight, and power requirements by integrating all of the key components onto the smallest footprint possible.

This system miniaturization reduces the complexity of assembly and enables the high-volume throughput that can result in a 10 to 100x cost reduction at scale. This is all possible because lidar-on-a-chip architectures can fully leverage CMOS-compatible materials and the wafer-scale fabrication methods established by the semiconductor industry. Once a final solution is proven, the expectation is that, just like the integrated circuits inside computers and smartphones, hundreds of millions of lidar units could be manufactured every year.

To realize the vision of ubiquitous lidar, there have been several emerging approaches to solid-state lidar and lidar-on-a-chip design. Some of the key approaches include:

Microelectromechanical Systems (MEMS)

  • This approach uses an assembly of optics and a millimeter-scale deflecting mirror for free-space beam steering of a laser or laser array. Although MEMS mirrors are fabricated using semiconductor wafer manufacturing, the architecture itself is not a fully integrated lidar-on-a-chip system since optical alignment is still needed between the lasers, MEMS mirrors, and optics.
  • One of the trade-offs in this design is between scanning speed and maximum steering angle on one side and aperture (mirror) size on the other, and aperture size directly relates to detection range. This trade-off exists because a MEMS mirror acts like a spring: as its size and mass increase, its resonant frequency decreases, limiting the scanning speed (a rough numerical sketch follows this list).
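
To make that trade-off concrete, here is a minimal spring-mass sketch: a MEMS scan mirror behaves roughly like a harmonic oscillator with resonant frequency f0 = (1/2π)√(k/m), so a heavier mirror resonates, and hence scans, more slowly. The stiffness and mass values below are illustrative assumptions, not measurements of any particular device.

```python
import math

def mems_resonant_frequency_hz(stiffness_n_per_m, mass_kg):
    """Spring-mass model of a MEMS scan mirror: f0 = (1/(2*pi)) * sqrt(k/m)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# Illustrative assumptions, not measurements: a ~1-mm silicon mirror weighs
# roughly 0.1 mg; doubling its diameter at fixed thickness quadruples the
# mass and halves the resonant frequency for the same spring stiffness k.
k = 10.0  # assumed effective spring constant, N/m
for label, mass_kg in [("1-mm mirror", 1e-7), ("2-mm mirror", 4e-7)]:
    f0 = mems_resonant_frequency_hz(k, mass_kg)
    print(f"{label}: f0 ~ {f0 / 1e3:.2f} kHz")
```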

Liquid-Crystal Metasurfaces (LCM)

  • The architecture is similar to a MEMS system, but with the MEMS mirror replaced by a liquid-crystal metasurface (LCM). LCMs use an array of subwavelength scatterers mixed with liquid crystals to impart a tunable phase front onto a reflected laser beam, which allows for beam steering.
  • LCM designs have large optical apertures and a wide FOV, but current modules are limited to 150-meter ranging. Like MEMS lidar systems, the LCM steering mechanism is built using a semiconductor wafer manufacturing process, but the system is not fully on-chip since the laser input needs to be aligned at an angle to the surface of the LCM.

Optical Phased Arrays (OPA)

  • This approach is compatible with cheap, high-volume manufacturing of the optical portion of the system. It uses an array of optical antennas, each requiring electronic phase control, to generate constructive interference patterns that form a steerable optical beam. However, as the size of the emitting aperture increases, the electronic complexity grows and limits scaling, since thousands of active phase shifters must be controlled simultaneously.
  • The size of optical waveguides places practical fabrication constraints on achieving the ideal antenna spacing (half a wavelength); wider spacing degrades the optical beam through an effect called aliasing, which limits the usable horizontal FOV (see the sketch after this list). More recent designs use aperiodic waveguide spacing and different waveguide cross-sections to address these issues, but each creates trade-offs, such as limitations on vertical scanning and a reduction of optical power in the main beam.
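
The aliasing limit can be made concrete with the standard grating-lobe condition for a uniform array: lobes appear at sin θm = sin θ0 + mλ/d, so the alias-free steering range shrinks as the emitter pitch d grows beyond half a wavelength. This minimal sketch assumes a 1550-nm wavelength and illustrative pitches:

```python
import math

def alias_free_fov_deg(wavelength_um, pitch_um):
    """Grating-lobe-free steering range of a uniform optical phased array.

    Grating lobes sit at sin(theta_m) = sin(theta_0) + m * lambda / d, so the
    beam can be steered +/- arcsin(lambda / (2 d)) before an alias enters
    visible space, giving a full alias-free FOV of 2 * arcsin(lambda / (2 d)).
    """
    s = wavelength_um / (2 * pitch_um)
    if s >= 1:
        return 180.0  # half-wavelength (or finer) pitch: no grating lobes
    return 2 * math.degrees(math.asin(s))

lam = 1.55  # um, typical telecom-band lidar wavelength
for pitch in [0.775, 1.55, 3.1]:  # lambda/2, lambda, 2*lambda
    print(f"pitch {pitch:.3f} um -> alias-free FOV ~ "
          f"{alias_free_fov_deg(lam, pitch):.0f} deg")
```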

Each one of these approaches has shown an improvement over conventional mechanical-based lidar in various metrics such as range, reliability, and cost, but it is still unclear whether they can hit all of the requirements for fully autonomous vehicles. 

A novel beam-steering lidar architecture

Recognizing these challenges, the Photonic and Modern Electro-Magnetics Group at MIT and researchers at MIT Lincoln Laboratory collaborated to develop an alternative solution to solid-state optical beam steering that could solve the remaining challenges for lidar. A point of inspiration was the Rotman lens, invented in the 1960s to enable passive beam-forming networks in the microwave regime without the need for active electronic phase control. The Rotman lens receives microwave radiation from multiple inputs along an outer focal arc of the lens. The lens spreads the microwaves along an inner focal arc, where an array of delay lines imparts a phase shift to the microwaves to form a collimated beam.

While much of the intuition from the microwave regime translates to the optical and near-infrared domain, the length scales and available materials require a different lens design and system architecture. Over the past three years, the MIT-MIT Lincoln Laboratory team has designed, fabricated, and experimentally demonstrated a new solid-state beam steering architecture that works in the near-infrared.

This novel beam steering architecture includes an on-chip planar lens for steering in the horizontal direction and a grating to scan in the vertical direction, as shown in Figure 2. The laser light is edge-coupled into an optical waveguide, which guides and confines light along a desired optical path. This initial waveguide feeds into a switch matrix that expands into a tree pattern. This switch matrix can comprise hundreds of photonic waveguides, yet requires only about 10 active phase shifters to route the light into the desired waveguide. Each waveguide provides a distinct path that feeds into the planar lens and corresponds to one beam in free space. This planar lens consists of a stack of CMOS compatible materials tens of nanometers thick and is designed to collimate the light fed in by each waveguide. The collimation creates a flat wavefront for the light and allows it to minimize spreading and therefore reduces loss as it propagates over long distances.

Since each waveguide is designed to feed the lens at a unique angle, selecting a particular waveguide will make the light enter the lens and exit at a specific angle, as seen in Figure 3. The system is designed such that the beams emitted from adjacent waveguides overlap in the far field. In this way, the entire horizontal field of view is covered by beams emitted from the discrete set of waveguides. For instance, a 100° FOV lidar system with a 0.1° resolution can be designed using 1000 equally spaced waveguides along the focal arc of the lens and by using a sufficiently large aperture to generate the required beam width and overlap. This approach allows the design to meet the FOV and resolution requirements of lidar applications.
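
As a back-of-the-envelope check of that example (the numbers below are rough estimates of ours, not a device specification), the port count follows directly from FOV divided by resolution, and a diffraction-limit argument gives the order of magnitude of the required lens aperture:

```python
import math

# One input waveguide = one far-field beam, so covering the horizontal FOV at
# a given angular resolution takes FOV / resolution ports along the focal arc.
fov_deg, res_deg = 100.0, 0.1
n_waveguides = math.ceil(fov_deg / res_deg)

# Rough diffraction-limit estimate for a 0.1-deg beam: the divergence of an
# aperture of size D is theta ~ lambda / D, so D ~ lambda / theta.
wavelength_m = 1.55e-6
aperture_mm = wavelength_m / math.radians(res_deg) * 1e3

print(f"{n_waveguides} waveguides, aperture on the order of {aperture_mm:.1f} mm")
```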

To control the vertical angle of the emitted beam, our design uses a wavelength selective component called a grating coupler. The grating is a periodic structure composed of straight or curved rulings that repeat every few hundred nanometers. The interface of each repeated ruling causes light to diffract and thus scatter out of the plane of the chip. Since the emission angle of the grating depends on the wavelength of light, tuning the laser frequency controls the angle at which the beam is emitted into free space. As a result, selecting the waveguide via the switch matrix steers the beam in the plane of the chip (horizontally), and tuning the wavelength of the laser steers the beam out of the plane of the chip (vertically).
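
A minimal sketch of this wavelength steering uses the first-order grating equation for emission into air. The effective index and grating pitch below are assumed, illustrative values chosen so the beam exits near-vertical at 1550 nm; they are not the parameters of our chip:

```python
import math

def grating_emission_angle_deg(wavelength_um, n_eff, pitch_um):
    """First-order grating-coupler equation for emission into air:
    sin(theta) = n_eff - wavelength / pitch (theta from the chip normal)."""
    return math.degrees(math.asin(n_eff - wavelength_um / pitch_um))

N_EFF = 1.8    # assumed effective index of a silicon-nitride waveguide mode
PITCH = 0.861  # um, assumed so the beam exits near-vertical at 1550 nm
for wl in [1.50, 1.55, 1.60]:
    angle = grating_emission_angle_deg(wl, N_EFF, PITCH)
    print(f"{wl * 1000:.0f} nm -> {angle:+.1f} deg")
```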

An alternative to time-of-flight detection

How does our planar lens architecture address key remaining challenges of optical beam steering for lidar applications? Like optical phased arrays (OPAs), the architecture takes advantage of CMOS-compatible fabrication techniques and materials used in recently established photonic foundries. Compared to OPAs, our lens-based design can have up to twice the horizontal FOV and also reduces electronic complexity by two orders of magnitude because we do not need hundreds of simultaneously active phase shifters.

Since the number of active phase shifters scales as the binary logarithm of the number of waveguides, our system requires only approximately 10 phase shifters to select the waveguide input and steer the optical beam. To enable a full lidar system, the lens-based approach can also take advantage of similar lasers and detection schemes that have been demonstrated with OPAs and other solid-state beam steering approaches. Combining all of these benefits enables a lidar-on-a-chip design that can lead to a low-cost sensor producible at high manufacturing volumes.
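
A quick way to see the scaling (a sketch, not our control code):

```python
import math

# A binary-tree switch matrix adds one switching stage per doubling of
# outputs, so N output waveguides need ceil(log2(N)) cascaded stages (each a
# phase-shifter-based 1x2 switch), rather than N simultaneously driven
# phase shifters as in a comparable optical phased array.
for n in [128, 512, 1024]:
    print(f"{n:4d} waveguides -> {math.ceil(math.log2(n))} switch stages")
```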

To address the range requirement of 200 meters, we are designing our system for a coherent detection scheme called frequency modulated continuous wave (FMCW) rather than the more traditionally used time-of-flight (ToF) detection. FMCW has advantages in terms of the instantaneous velocity information it can provide and the lower susceptibility to interference from other light sources. Note that FMCW has been successfully demonstrated by several lidar companies with ranges beyond 300 meters, though those companies have yet to fully prove that their systems can be produced at low cost and high volumes.

Rather than emitting pulses of light, an FMCW lidar emits a continuous beam of laser light whose frequency is modulated. Before being transmitted, a small fraction of the beam is split off, kept on the chip, and recombined with the main beam after it has been transmitted, reflected by an object, and received. Because the laser light is coherent, these two beams constructively and destructively interfere at the detector, generating a beat-frequency waveform that is detected electronically and contains both ranging and velocity information. Since a beat signal is only generated by coherent light sources, FMCW lidars are less sensitive to interference from sunlight and other lidars. Moreover, the detection process also suppresses noise from other frequency bands, giving FMCW a sensitivity advantage over ToF.
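
For readers who want the arithmetic, here is a minimal sketch of the beat-frequency relations for a triangular chirp. The chirp bandwidth, ramp time, and target values are illustrative assumptions, and the sign convention assumes an approaching target:

```python
C = 3.0e8  # speed of light, m/s

def fmcw_beats(range_m, velocity_mps, chirp_bw_hz, chirp_time_s,
               wavelength_m=1.55e-6):
    """Up- and down-ramp beat frequencies for a triangular FMCW chirp.

    The range term is f_r = 2 * R * S / c (S = chirp slope) and the Doppler
    term is f_d = 2 * v / lambda; measuring both ramps separates them:
      range    R = c * (f_up + f_down) / (4 * S)
      velocity v = lambda * (f_down - f_up) / 4
    """
    slope = chirp_bw_hz / chirp_time_s
    f_r = 2 * range_m * slope / C
    f_d = 2 * velocity_mps / wavelength_m
    return f_r - f_d, f_r + f_d  # up-ramp, down-ramp (approaching target)

# Illustrative numbers only: a 1-GHz chirp over 10 us, target at 200 m
# closing at 30 m/s.
f_up, f_down = fmcw_beats(200.0, 30.0, 1e9, 10e-6)
print(f"up-ramp beat: {f_up / 1e6:.1f} MHz, down-ramp beat: {f_down / 1e6:.1f} MHz")
```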

Proof of concept and next steps

To demonstrate the viability of the optical beam steering, a proof-of-concept chip was fabricated using the silicon nitride integrated photonic platform at MIT Lincoln Laboratory. Optical microscope images of the fabricated chips are shown in Figure 4. The planar lens beam steering was tested using a fiber-coupled tunable laser with a wavelength range of 1500 to 1600 nm. This testing confirmed that the chip was capable of providing a 40° (horizontal) by 12° (vertical) FOV, seen in Figure 5. Recently, we have improved our designs to enable a 160° (horizontal) by 20° (vertical) FOV so that we can better address the needs of lidar applications. These fabricated chips are undergoing testing to verify the expected beam-steering performance.

Images: Kyber Photonics
Figure 5. Animation consisting of compiled infrared images showing the operation of the planar lens beam-steering chip: (a) beam-steering via port selection and (b) beam-steering via wavelength tuning. Captured with an infrared camera. 

As Kyber Photonics moves forward with development, we are expanding from optical beam-steering demonstrations into the engineering of a complete lidar-on-a-chip system. The planar lens architecture will be tested in combination with the necessary frequency-modulated laser, detectors, and electronics to demonstrate ranging using FMCW detection.

Kyber Photonics will start by building a compact demonstration that uses a separately packaged laser that is fiber-coupled onto our chip and then move towards an on-chip laser solution. Many photonic foundries are adding lasers to their process development kits, so we plan to integrate and refine the on-chip laser solution at different stages of product development.

Subsequent steps will also include integrating the necessary application-specific integrated circuits to handle the photonic drivers, laser modulation, and receive signal processing. Our end goal is that within the next two to three years, Kyber Photonics will have a lidar-on-a-chip unit that is the size of a wallet and has a clear production path to meeting the demands of low cost, high reliability, high performance, and scalability for the autonomous vehicle industry and beyond.

About the Authors

Josué J. López is CEO and co-founder of Kyber Photonics and a 2020 Fellow of Activate, a two-year fellowship supported by DARPA that helps scientists bring their innovations to the marketplace.

Thomas Mahony is CTO and co-founder of Kyber Photonics and a 2020 Activate Fellow. 

Samuel Kim is co-founder of Kyber Photonics.

The authors would like to acknowledge key inventors and contributors to the technology described in this article. They include members of the Soljačić Group at MIT: Dr. Scott Skirlo, Jamison Sloan, and Professor Marin Soljačić; and MIT Lincoln Laboratory researchers: Dr. Cheryl Sorace-Agaskar, Dr. Paul Juodawlkis, and members of the Advanced Technology Division at Lincoln Laboratory.

Introducing Calibre nmLVS-Recon – A new paradigm for circuit verification

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/introducing-calibre-nmlvsrecon-a-new-paradigm-for-circuit-verification

Each year, at least 50% of tapeouts are late, with physical and circuit verification closure a significant contributing factor. Mentor combined the know-how of our industry-leading Calibre nmLVS sign-off tool with lessons learned from customers to create an innovative smart engine specifically engineered to help design teams find and fix high-impact systemic errors early in the design flow. As part of our growing suite of early-stage design verification technologies, the Calibre nmLVS-Recon tool enables designers to accelerate early-stage design analysis and debug cycles and reduce the time needed to reach tapeout. We explain the concept behind our innovative technology and introduce the first Calibre nmLVS-Recon use model: short isolation analysis.

IBM Toolkit Aims at Boosting Efficiencies in AI Chips

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/semiconductors/design/ibm-toolkit-aims-at-boosting-efficiencies-in-ai-chips

In February 2019, IBM Research launched its AI Hardware Center with the stated aim of improving AI computing efficiency by 1,000 times within the decade. Over the last two years, IBM says they’ve been meeting this ambitious goal: They’ve improved AI computing efficiency, they claim, by two-and-a-half times per year.

Big Blue’s big AI efficiency push comes in the midst of a boom in AI chip startups. Conventional chips often choke on the huge amount of data shuttling back and forth between memory and processing. And many of these AI chip startups say they’ve built a better mousetrap. 

There’s an environmental angle to all this, too. Conventional chips waste a lot of energy performing AI algorithms inefficiently, which can have deleterious effects on the climate.

Recently IBM reported two key developments on their AI efficiency quest. First, IBM will now be collaborating with Red Hat to make IBM’s AI digital core compatible with the Red Hat OpenShift ecosystem. This collaboration will allow for IBM’s hardware to be developed in parallel with the software, so that as soon as the hardware is ready, all of the software capability will already be in place.

“We want to make sure that all the digital work that we’re doing, including our work around digital architecture and algorithmic improvement, will lead to the same accuracy,” says Mukesh Khare, vice president of IBM Systems Research. 

Second, IBM and the design automation firm Synopsys are open-sourcing an analog hardware acceleration kit — highlighting the capabilities analog AI hardware can provide.

Analog AI is aimed at the so-called von Neumann bottleneck, in which data gets stuck between computation and memory. Analog AI addresses this challenge by performing the computation in the memory itself.
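
To illustrate the idea (this is a generic toy model of in-memory computing, not the API of IBM's toolkit), a crossbar of memory devices stores a weight matrix as conductances; applying input voltages to the rows makes the column currents sum to the matrix-vector product, so no weights ever move. The noise magnitude below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy analog crossbar: weights are stored as device conductances G, inputs
# are applied as row voltages V, and Kirchhoff's current law sums the column
# currents I = V @ G -- the multiply-accumulate happens inside the memory
# array. (Negative weights are realized with differential device pairs in
# real hardware; here they are just signed numbers.)
weights = rng.standard_normal((4, 3))  # 4 inputs x 3 outputs
voltages = rng.standard_normal(4)      # input activations as voltages

ideal = voltages @ weights             # what a digital MAC unit would compute

# Analog devices are imperfect: model conductance programming error as
# multiplicative Gaussian noise (the 5% magnitude is illustrative).
noisy_weights = weights * (1 + 0.05 * rng.standard_normal(weights.shape))
analog = voltages @ noisy_weights

print("ideal :", np.round(ideal, 3))
print("analog:", np.round(analog, 3))
```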

The Analog AI toolkit will be available to startups, academics, students, and businesses, according to Khare. “They can all … learn how to leverage some of these new capabilities that are coming down the pipeline. And I’m sure the community may come up with even better ways to exploit this hardware than some of us can come up with,” Khare says.

A big part of this toolkit will be the design tools provided by Synopsys. 

“The data movement is so vast in an AI chip that it really necessitates that memory has to be very close to its computing to do these massive matrix computations,” says Arun Venkatachar, vice president of Artificial Intelligence & Central Engineering at Synopsys. “As a result, for example, the interconnects become a big challenge.”

He says IBM and Synopsys have worked together on both hardware and software for the Analog AI toolkit. 

“Here, we were involved through the entire stack of chip development: materials research and physics, to the device and all the way through verification and software,” Venkatachar says. 

Khare says for IBM this holistic approach translated to fundamental device and materials research, chip design, chip architecture, system design software and emulation, as well as a testbed for end-users to validate performance improvements.

“It’s important for us to work in tandem and across the entire stack,” Khare adds. “Because developing hardware without having the right software infrastructure is not complete, and the other way around as well.”

Cable Assemblies: Determining a Reliable & Cost-Effective Approach

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/cable_assemblies_determing_a_reliable_and_cost_effective_approach

Cable Assemblies

Cable assemblies fulfill an important role in high-density electronic systems. Often overlooked as ‘a couple of connectors and some cable’, these are in fact an essential element of modern engineering design and should not be underestimated.

In this paper, we outline the process of creating cable assemblies for application sectors requiring the highest levels of reliability, such as aerospace, defense, space and motorsport.

Google Invents AI That Learns a Key Part of Chip Design

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/design/google-invents-ai-that-learns-a-key-part-of-chip-design

There’s been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that’s optimized to do today’s AI, not the AI of two to five years ago. Google’s solution: have an AI design the AI chip.

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work that posted today to arXiv.

“We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn’t exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.”

Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks’ worth of design effort by human experts in terms of power, performance, and area.

Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that performance is maximized while power consumption and the area of the chip are minimized. Heightening the challenge is the requirement that all this happen while obeying rules about the density of interconnects. Goldie and Mirhoseini targeted chip placement because, even with today’s advanced tools, it takes a human expert weeks of iteration to produce an acceptable design.

Goldie and Mirhoseini modeled chip placement as a reinforcement learning problem. Reinforcement learning systems, unlike typical deep learning, do not train on a large set of labeled data. Instead, they learn by doing, adjusting the parameters in their networks according to a reward signal when they succeed. In this case, the reward was a proxy measure of a combination of power reduction, performance improvement, and area reduction. As a result, the placement-bot becomes better at its task the more designs it does.
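
As a toy illustration of what such a proxy reward might look like (this is our own sketch, not Google's code; the article describes the reward as a proxy for power, performance, and area), half-perimeter wirelength is the classic cheap stand-in that placement tools use to score a layout quickly:

```python
import random

def hpwl(placement, nets):
    """Half-perimeter wirelength: a standard, cheap-to-compute proxy for
    routed wirelength (and hence power and performance) of a placement."""
    total = 0.0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Toy episode: an "agent" drops 4 macros onto a 10x10 grid; the reward the
# learner would receive is the negative proxy cost (shorter wires = better).
random.seed(1)
nets = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
placement = {m: (random.randint(0, 9), random.randint(0, 9)) for m in "abcd"}
reward = -hpwl(placement, nets)
print(f"placement: {placement}")
print(f"reward (negative HPWL): {reward:.1f}")
```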

The team hopes AI systems like theirs will lead to the design of “more chips in the same time period, and also chips that run faster, use less power, cost less to build, and use less area,” says Goldie.

Multiphysics Modeling of Piezoelectric Sensors and Actuators

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/multiphysics_modeling_of_piezoelectric_sensors_and_actuators

Are you interested in modeling piezoelectric sensors and actuators? If so, then join us for this webinar.

In this webinar, James Ransley from Veryst Engineering will work through a case study in which the properties of a piezoelectric bender actuator are optimized. The material orientation and dimensions are tuned to maximize the efficiency of the bender for the application. The blocking curve is computed, and the dimensions are modified to achieve a given specification with minimal actuator volume.

The study will also include a live demo of piezoelectric device design using the COMSOL Multiphysics® simulation software. A Q&A session will conclude the webinar.


Optimal crystal orientation and displacement profile (color is proportional to relative displacement) for a piezoelectric bender made from lithium niobate (left) and PZT 5H (right).

Arm Shows Backside Power Delivery as Path to Further Moore’s Law

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/design/arm-shows-backside-power-delivery-as-path-to-further-moores-law

People often think that Moore’s Law is all about making smaller and smaller transistors. But these days, a lot of the difficulty is squeezing in the tangle of interconnects needed to get signals and power to them. Those smaller, more dense interconnects are more resistive, leading to a potential waste of power. At the IEEE International Electron Devices Meeting in December, Arm engineers presented a processor design that demonstrates a way to reduce the density of interconnects and deliver power to chips with less waste.
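
To see why denser interconnects waste power, consider a toy resistance calculation (the dimensions and current below are assumptions for illustration, and real power-delivery networks parallel many rails, so actual drops are smaller): halving a rail's width and thickness quadruples its resistance, which is why backside delivery, with its shorter, thicker rails on the wafer's back, cuts the loss.

```python
RHO_CU = 1.7e-8  # resistivity of bulk copper, ohm*m (thin wires are worse)

def rail_loss(current_a, length_um, width_nm, thickness_nm):
    """Resistance, IR drop, and I^2*R loss of a straight power rail."""
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    r = RHO_CU * (length_um * 1e-6) / area_m2
    return r, current_a * r, current_a ** 2 * r

# Halving width and thickness quadruples resistance for the same length:
for w, t in [(100, 100), (50, 50)]:
    r, drop, loss = rail_loss(1e-3, 100, w, t)  # 1 mA through a 100-um rail
    print(f"{w}x{t} nm rail: R = {r:.0f} ohm, "
          f"IR drop = {drop * 1e3:.0f} mV, loss = {loss * 1e3:.2f} mW")
```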

Tips and Tricks on how to verify control loop stability

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/tips-and-tricks-on-how-to-verify-control-loop-stability

This application note explains the main measurement concept, guides the user through the measurements, and covers the main topics in a practical manner. Wherever possible, it points out where the user should pay particular attention.


Forget Moore’s Law—Chipmakers Are More Worried About Heat and Power Issues

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/semiconductors/design/power-problems-might-drive-chip-specialization

Power consumption and heat generation: these hurdles impede progress toward faster, cheaper chips, and are worrying semiconductor industry veterans far more than the slowing of Moore’s Law. That was the takeaway from several discussions about current and future chip technologies held in Silicon Valley this week.

John Hennessy—president emeritus of Stanford University, Google chairman, and MIPS Computer Systems founder—says Moore’s Law “was an ambition, a goal. It wasn’t a law; it was something to shoot for.”

Next-Gen AR Glasses Will Require New Chip Designs

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/semiconductors/design/dramatic-changes-in-chip-design-will-be-necessary-to-make-ar-glasses-a-reality

What seems like a simple task—building a useful form of augmented reality into comfortable, reasonably stylish, eyeglasses—is going to need significant technology advances on many fronts, including displays, graphics, gesture tracking, and low-power processor design.

That was the message of Sha Rabii, Facebook’s head of silicon and technology engineering. Rabii, speaking at Arm TechCon 2019 in San Jose, Calif., on Tuesday, described a future with AR glasses that enable wearers to see at night, improve overall eyesight, translate signs on the fly, prompt wearers with the names of people they meet, create shared whiteboards, encourage healthy food choices, and allow selective hearing in crowded rooms. This type of AR will be, he said, “an assistant, connected to the Internet, sitting on your shoulders, and feeding you useful information to your ears and eyes when you need it.”

X-Ray Tech Lays Chip Secrets Bare

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/design/xray-tech-lays-chip-secrets-bare

Scientists and engineers in Switzerland and California have come up with a technique that can reveal the 3D design of a modern microprocessor without destroying it.

Typically today, such reverse engineering is a time-consuming process that involves painstakingly removing each of a chip’s many nanometers-thick interconnect layers and mapping them using a hierarchy of different imaging techniques, from optical microscopy for the larger features to electron microscopy for the tiniest features. 

The inventors of the new technique, called ptychographic X-ray laminography, say it could be used by integrated circuit designers to verify that manufactured chips match their designs, or by government agencies concerned about “kill switches” or hardware trojans that could have secretly been added to ICs they depend on.

“It’s the only approach to non-destructive reverse engineering of electronic chips—[and] not just reverse engineering but assurance that chips are manufactured according to design,” says Anthony F. J. Levi, professor of electrical and computer engineering at University of Southern California, who led the California side of the team. “You can identify the foundry, aspects of the design, who did the design. It’s like a fingerprint.”

How to select current sensors and transformers

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/how-to-select-current-sensors-and-transformers

Choosing the right current sensor does not need to be guesswork

Download this application note to learn the parameters for selecting current sensors, as well as the limitations of alternative technologies such as current-sense resistors.

Semiconductor CEOs on Computing’s Big Role in Slowing the Advance of Climate Change

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/semiconductors/design/semiconductor-ceos-on-life-after-moores-law-climate-change

The end of Moore’s Law will drive a renaissance in chip innovation, CEOs say. But the semiconductor industry must face the existential question of power consumption driving climate change.

We have entered a “Renaissance of Silicon.” That was the thesis of a panel that brought together semiconductor industry CEOs at Micron Technology’s San Jose campus last week. This renaissance, the executives indicated, will lead to exciting—but not predictable—innovation in chip technology, driven by applications that demand more computing power and by the demise of Moore’s Law.

“I’ve never seen a more exciting time in my 40 years in the industry,” said Sanjay Mehrotra, CEO of Micron Technology.

“I hadn’t heard semiconductor and Renaissance in the same sentence in 20 years,” said Tammy Kiely, Goldman Sachs global head of semiconductor investment banking. Kiely moderated the panel, which was organized by the Churchill Club.

The driving force behind this renaissance is “burning necessity,” said Xilinx CEO Victor Peng. Arm CEO Simon Segars agreed.

“For the last 15 years, the driver of growth was mobile,” Segars said. “Over the last five years, the industry was in a bit of a lull. Then all of a sudden there is this combination of effects.” He listed cloud computing, handheld computing, IoT devices, 5G, AI, and autonomous vehicles as contributing to the boom. “Lots of things are coming together at once,” he said, along with “fundamental algorithm development.”

All these things, Xilinx’s Peng said, mean that the industry will have to come up with a way to improve computing power and storage by a factor of 100—if not 1000—over the next 10 years. That will require new architectures, new packaging—and a new way of looking at the entire ecosystem. “The entire data center is a computer,” he said, pointing out that computation will have to happen all over it, in memory, in switches, even in the communications lines.

Getting a 100- to 1000-times improvement in processing power will also require innovation in software, Peng continued. “People got used to Moore’s law enabling them to throw cycles away to enable abstraction. It’s not that simple anymore…. When you need 100 times [improvement], you don’t evolve the same architecture, you start all over. When Moore’s law was chipping away every year, you didn’t rethink the entire problem. But now you have to look at hardware and software together.”

Concerned About Climate Change

The panelists reminded the audience that it’s also no longer just about making chips better, faster, and cheaper (or these days, as Peng points out, getting one or two of those things at best). The semiconductor industry also has to drive power consumption down.

“Power consumption is an existential question, [considering] climate change,” Peng said, noting that data centers now consume about 10 percent of the world’s electric power. “That cannot scale exponentially and be sustainable.” Getting power consumption down is, he said, “not only a huge business opportunity but a moral imperative.”

Climate change, Segars said, will be a big driver for semiconductor innovation over the next five years. The industry, he said, will have to “create different computing engines to solve things in a more efficient way… [and innovate] on microarchitectures to get power down. In the long term, [we have to] think about workloads. The ultimate architecture might be dedicated engines for different commonly executed tasks that do things in a more efficient way.”

Segars also suggested that we ought to consider the bigger picture when weighing the power costs of computing. “Smart cities,” he said, “may result in more energy getting burned in data centers to process the IoT data, but their net could be energy savings.”

Don’t Expect a Startup Surge

This boom in innovation is unlikely to lead to a boom in startups, the panelists agreed. That may be counter to what we expect from Silicon Valley, but it’s the reality, they indicated.

“The cost of taking a chip out at 5 nanometers is astronomical, so unless you can amortize costs over multiple designs, nobody can afford it,” Segars said. “So we will need the large semiconductor companies to keep progress aggressive. Of course, the more expensive it is, the fewer that can afford to do it. But unlike other industries, like steel, I don’t think innovation is going to dry up.”

“Larger companies have a greater ability to invest in future innovations, to make big bets,” Mehrotra agreed. However, he said, “there are startups that are forming that are working on silicon aspects. Certainly what Simon [Segars] said about increased complexity and the time and money [involved] compared to the past is true.” But, he said, at least some architecture innovation is happening outside the big companies—though “not the way it was when I joined the industry 40 years ago.”

“There has been a lot of VC money going into AI chip companies,” Segars concurred. But he predicts that, “unfortunately, I don’t think we are going back to the days where Sand Hill Road is going to hand out wheelbarrows of money to people to design chips.”