Tag Archives: Semiconductors

Ultrasensitive Microscope Reveals How Charging Changes Molecular Structures

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/structural-changes-of-molecules-during-charging-revealed

New ability to image molecules under charging promises big changes for molecular electronics and organic photovoltaics

All living systems depend on the charging and discharging of molecules to convert and transport energy. While science has revealed many of the fundamental mechanisms of how this occurs, one area has remained shrouded in mystery: How does a molecule’s structure change during charging? The answer could have implications for a range of applications including molecular electronics and organic photovoltaics.

Now a team of researchers from IBM Research in Zurich, the University of Santiago de Compostela and ExxonMobil has reported in the journal Science the ability to image, with unprecedented resolution, the structural changes that occur to individual molecules upon charging.

This ability to peer into this previously unobserved phenomenon should reveal the molecular charge-function relationships and how they relate to biological systems converting and transporting energy. This understanding could play a critical role in the development of both organic electronic and photovoltaic devices.

“Molecular charge transition is at the heart of many important phenomena, such as photoconversion, energy and molecular transport, catalysis, chemical synthesis, molecular electronics, to name some,” said Leo Gross, research staff member at IBM Zurich and co-author of the research. “Improving our understanding of how the charging affects the structure and function of molecules will improve our understanding of these fundamental phenomena.”

This latest breakthrough is based on research going back 10 years when Gross and his colleagues developed a technique to resolve the structure of molecules with an atomic force microscope. AFMs map the surface of a material by recording the vertical displacement necessary to maintain a constant force on the cantilevered probe tip as it scans a sample’s surface.

Over the years, Gross and his colleagues refined the technique so it could see the charge distribution inside a molecule, and then were able to get it to distinguish between individual bonds of a molecule.

The trick to these techniques was to functionalize the tip of the AFM probe with a single carbon monoxide (CO) molecule. Last year, Gross and his colleague Shadi Fatayer at IBM Zurich believed that the ultra-high resolution possible with the CO tips could be combined with controlling the charge of the molecule being imaged.

“The main hurdle was in combining two capabilities, the control and manipulation of the charge states of molecules and the imaging of molecules with atomic resolution,” said Fatayer.

The concern was that the functionalization of the tip would not be able to withstand the applied bias voltages used in the experiment. Despite these concerns, Fatayer explained that they were able to overcome the challenges in combining these two capabilities by using multi-layer insulating films, which avoid charge leakage and allow charge state control of molecules.

The researchers were able to control the charge state by transferring single electrons from the AFM tip to the molecule, or vice versa. This was achieved by applying a voltage between the tip and the molecule. “We know when an electron is attached or removed from the molecule by observing changes in the force signal,” said Fatayer.
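The paper’s measurement details go beyond what is described here, but the basic signature—an abrupt jump in the force (frequency-shift) signal when an electron hops on or off the molecule as the bias is swept—can be sketched with made-up numbers:

```python
# Illustrative sketch only: how a single-electron charging event can appear as an
# abrupt jump in an AFM frequency-shift signal while the tip-sample bias is swept.
# The parabola parameters below are invented, not taken from the paper.
import numpy as np

def freq_shift(bias, contact_potential, curvature=-1.0):
    """Kelvin-probe-like parabola: frequency shift vs. applied bias (arbitrary units)."""
    return curvature * (bias - contact_potential) ** 2

bias = np.linspace(-1.0, 1.0, 401)

# Assume the molecule gains an electron once the bias crosses +0.4 V, which shifts
# the local contact potential difference seen by the tip.
charged = bias > 0.4
signal = np.where(charged,
                  freq_shift(bias, contact_potential=0.25),
                  freq_shift(bias, contact_potential=-0.05))

# The charging event shows up as a discontinuity in the otherwise smooth curve.
jump_index = np.argmax(np.abs(np.diff(signal)))
print(f"Charging event detected near bias = {bias[jump_index]:.2f} V")
```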

The IBM researchers expect that this research could have an impact in the fundamental understanding of single-electron based and molecular devices. This field of molecular electronics promises a day when individual molecules become the building blocks of electronics.

Another important prospect of the research, according to Fatayer and Gross, would be its impact on organic photovoltaic devices. Organic photovoltaics have been a tantalizing solution for solar power because they are cheap to manufacture. However, organic solar cells have been notoriously less efficient than silicon solar cells at converting sunlight into electricity.

The hope is that by revealing how the structural changes of molecules under charge impact the charge transition of molecules, engineers will be able to further optimize organic photovoltaics.

Applied Materials’ New Memory Machines

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/memory/applied-materials-new-memory-machines

Tools designed to rapidly build embedded MRAM, RRAM, and phase change memories on logic chips expand foundry options

Chip equipment giant Applied Materials wants foundry companies to know that it feels their pain. Continuing down the traditional Moore’s Law path of increasing the density of transistors on a chip is too expensive for all but the three richest players—Intel, Samsung, and TSMC. So to keep the customers coming, other foundries can instead add new features, such as the ability to embed new non-volatile memories—RRAM, phase change memory, and MRAM—right on the processor. The trouble is, those are really hard things to make at scale. So Applied has invented a pair of machines that boost throughput by more than an order of magnitude. It unveiled the machines on 9 July at Semicon West, in San Francisco.

Is Graphene by Any Other Name Still Graphene?

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/is-graphene-by-any-other-name-still-graphene

Consumers may finally have a way to know if their graphene-enabled products actually get any benefit from the wonder material

Last year, the graphene community was rocked by a series of critical articles that appeared in some high-profile journals. First there was an Advanced Materials article with the rather innocuous title “The Worldwide Graphene Flake Production.” It was perhaps the follow-up article that appeared in the journal Nature that really shook things up with its incendiary title: “The war on fake graphene.”

In these two articles it was revealed that material that had been claimed to be high-quality (and high-priced) graphene was little more than graphite powder. Boosted by their appearance in high-impact journals, these articles threatened the foundations of the graphene marketplace.

But while these articles triggered a lot of hand wringing among the buyers and sellers of graphene, it’s not clear that their impact extended much beyond the supply chain of graphene. Whether or not graphene has aggregated back to being graphite is one question. An even bigger one is whether or not consumers are actually being sold a better product on the basis that it incorporates graphene. 

Consumer products featuring graphene today include everything from headphones to light bulbs. Consequently, there is already confusion among buyers about the tangible benefits graphene is supposed to provide. And of course the situation becomes even worse if the graphene sold to make products may not even be graphene: how are consumers supposed to determine whether graphene infuses their products with anything other than a buzzword?

Another source of confusion arises because when graphene is incorporated into a product it is effectively a different animal from graphene in isolation. There is ample scientific evidence that graphene, when included in a material matrix like a polymer or even paper, can impart new properties to the material. “You can transfer some very useful properties of graphene into other materials by adding graphene, but just because the resultant material contains graphene it does not mean it will behave like free-standing graphene,” explains Tom Eldridge of UK-based Fullerex, a consultancy that provides companies with information on how to include graphene in a material matrix.

Eldridge added: “This is why it is often misleading to talk about the superlative properties of free-standing graphene for benefiting applications, because almost always graphene is being combined with other materials. For instance, if I combine graphene with concrete I will not get concrete which is 200 times stronger than steel.”

This is what leaves consumers a bit lost at sea: Graphene can provide performance improvements to a product, but what kind and by how much?

The Graphene Council (Disclosure: The author of this story has also worked for The Graphene Council) recognized this knowledge gap in the market and has just launched a “Verified Graphene Product” Program in addition to its “Verified Graphene Producer” program. The Verified Graphene Producer program takes raw samples of graphene and characterizes them to verify the type of graphene it is, while the Verified Graphene Product program addresses the issue of what graphene is actually doing in products that claim to use it. 

Companies that are marketing products that claim to be enhanced by graphene can use this service, and the verification can be applied to their product to give buyers confidence that graphene is actually doing something. (It’s not known if there are any clients taking advantage of it yet.)

“Consumers want to know that the products they purchase are genuine and will perform as advertised,” said Terrance Barkan, executive director of The Graphene Council. “This applies equally to purchasers of graphene enhanced materials and applications. This is why independent, third-party verification is needed.”

Nvidia Chip Takes Deep Learning to the Extremes

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/nvidia-chip-takes-deep-learning-to-the-extremes

Individual accelerator chips can be ganged together in a single module to tackle both the small jobs and the big ones without sacrificing efficiency

There’s no doubt that GPU-powerhouse Nvidia would like to have a solution for all size scales of AI—from massive data center jobs down to the always-on, low-power neural networks that listen for wakeup words in voice assistants.

Right now, that would take several different technologies, because none of them scale up or down particularly well. It’s clearly preferable to be able to deploy one technology rather than several. So, according to Nvidia chief scientist Bill Dally, the company has been seeking to answer the question: “Can you build something scalable… while still maintaining competitive performance-per-watt across the entire spectrum?” 

It looks like the answer is yes. Last month at the VLSI Symposia in Kyoto, Nvidia detailed a tiny test chip that can work on its own to do the low-end jobs or be linked tightly together with up to 36 of its kin in a single module to do deep learning’s heavy lifting. And it does it all while achieving roughly the same top-class performance.

The individual accelerator chip is designed to perform the execution side of deep learning rather than the training part. Engineers generally measure the performance of such “inferencing” chips in terms of how many operations they can do per joule of energy or per square millimeter of area. A single one of Nvidia’s prototype chips peaks at 4.01 tera-operations per second (TOPS; a tera-operation is 1,000 billion operations) and 1.29 TOPS per square millimeter. Compared with prior prototypes from other groups using the same precision, the single chip was at least 16 times as area efficient and 1.7 times as energy efficient. But linked together into a 36-chip system, it reached 127.8 TOPS. That’s a 32-fold performance boost. (Admittedly, some of the efficiency comes from not having to handle higher-precision math, certain DRAM issues, and other forms of AI besides convolutional neural nets.)
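A quick check of how the module-level number relates to the single-chip figure, using simple arithmetic on the values quoted above:

```python
# Back-of-the-envelope check of the scaling figures quoted in the article.
single_chip_tops = 4.01   # peak tera-operations per second for one chip
module_tops = 127.8       # peak for the 36-chip module
chips = 36

speedup = module_tops / single_chip_tops
print(f"36-chip module speedup: {speedup:.1f}x (ideal would be {chips}x)")
print(f"Scaling efficiency: {speedup / chips:.0%}")
```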

Companies have mainly been tuning their technologies to work best for their particular niches. For example, Irvine, Calif.-based startup Syntiant uses analog processing in flash memory to boost performance for very-low-power, low-demand applications. Google’s original tensor processing unit, by contrast, would be wasted on anything other than the data center’s high-performance, high-power environment.

With this research, Nvidia is trying to demonstrate that one technology can operate well in all those situations. Or at least it can if the chips are linked together with Nvidia’s mesh network in a multichip module. These modules are essentially small printed circuit boards or slivers of silicon that hold multiple chips in a way that lets them be treated as one large IC. They are becoming increasingly popular because they allow systems to be built from several smaller chips—often called chiplets—instead of a single larger and more expensive chip.

“The multichip module option has a lot of advantages not just for future scalable [deep learning] accelerators but for building versions of our products that have accelerators for different functions,” explains Dally.

Key to the Nvidia multichip module’s ability to bind together the new deep learning chips is an interchip network that uses a technology called ground-referenced signaling. As its name implies, GRS uses the difference between a voltage signal on a wire and a common ground to transfer data, while avoiding many of the known pitfalls of that approach. It can transmit 25 gigabits/s using a single wire, whereas most technologies would need a pair of wires to reach that speed. Using single wires boosts how much data you can stream off of each millimeter of the edge of the chip to a whopping terabit per second. What’s more, GRS’s power consumption is a mere picojoule per bit.
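Taking the quoted figures at face value, a rough calculation shows what a terabit per second per millimeter of chip edge implies in wire count and signaling power:

```python
# Rough arithmetic on the ground-referenced-signaling (GRS) figures quoted above.
gbps_per_wire = 25           # single-ended link speed, Gb/s per wire
edge_bandwidth_tbps = 1.0    # claimed escape bandwidth per mm of chip edge, Tb/s
energy_pj_per_bit = 1.0      # quoted signaling energy, ~1 pJ/bit

wires_per_mm = edge_bandwidth_tbps * 1000 / gbps_per_wire
# Power to sustain 1 Tb/s: (1e12 bits/s) * (1e-12 J/bit) = 1 W
watts_per_tbps = edge_bandwidth_tbps * 1e12 * energy_pj_per_bit * 1e-12

print(f"~{wires_per_mm:.0f} wires per mm of chip edge at {gbps_per_wire} Gb/s each")
print(f"~{watts_per_tbps:.1f} W to sustain 1 Tb/s at 1 pJ/bit")
```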

“It’s a technology that we developed to basically give the option of building multichip modules on an organic substrate, as opposed to on a silicon interposer, which is much more expensive technology,” says Dally.

The accelerator chip presented at VLSI is hardly the last word on AI from Nvidia. Dally says they’ve already completed a version that essentially doubles this chip’s TOPS/W. “We believe we can do better than that,” he says. His team aspires to find inferencing-acceleration techniques that blow past the VLSI prototype’s 9.09 TOPS/W and reach 200 TOPS/W while still being scalable.

DARPA’S $1.5-Billion Remake of U.S. Electronics: Progress Report

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/devices/darpas-15billion-remake-of-us-electronics-progress-report

Agency is adding security and AI design to a mix meant to boost U.S. industry

About a year ago, the U.S. Defense Advanced Research Projects Agency pulled back the covers on its five-year, $1.5-billion scheme to remake the U.S. electronics industry. The Electronics Resurgence Initiative included efforts in “aggressive specialization” for chip architectures, systems that are smart enough to reconfigure themselves for whatever data you throw at them, open-source hardware, 24-hour push-button system design, and carbon-nanotube-enabled 3D chip manufacturing, among other cool things. As always with DARPA, this is high-risk research; but if even half of it works out, it could change the nature not just of what kinds of systems are designed but also of who makes them and how they’re made.

On 18 June, IEEE Spectrum spoke with IEEE Fellow Mark Rosker, the new director of the Microsystems Technology Office, which is leading ERI at DARPA. Rosker talked about what’s happened in the ERI programs, what new components have been added, and what to expect from the 2nd ERI Summit. That event will be held 15-17 July in Detroit, Mich., and headlined by the CEOs of AMD, GlobalFoundries, and Qualcomm.


IEEE Spectrum: How is the Electronics Resurgence Initiative going?

Mark Rosker: It’s a really good question. And I guess that is the question that is the underpinning of the second summit that we’re going to be having, I guess, about a month from now. I think it’s going extremely well. What we’re doing now is moving closer towards DARPA’s more traditional mode of operation, in which we have identified specific areas that we think are really game-changing technologies that we’re trying to go after, and have specific programs that address each of those areas. At the same time, I’m constantly on the lookout for new projects, new programs, that will be disruptive and be within the charter of the Electronics Resurgence Initiative. So to a degree, that feels very comfortable to us, but in no way is that to say it’s incremental. It’s just how we do business.

IEEE Spectrum: What was the advantage to making a concerted push last year?

Mark Rosker: Running that many new-program starts at the same time probably is not the most efficient thing for us. By probably, I mean it isn’t. But in terms of its value in capturing the community, in capturing attention, in getting the people who traditionally have not been that interested in participating in government research, to pay attention, I think it had great value. I think that where we are now is: We have their attention. And so, it’s probably more important to regain that efficiency.

IEEE Spectrum: Can you give any updates on the progress in some of the key programs that started ERI off last year? Some of them had pretty amazing goals, such as seeking push-button 24-hour system design and making 90-nanometer foundries competitive with 7-nm ones.

Mark Rosker: A lot of the discussion that will be held in Detroit will be talking about some of the details of the specifics of what each of the performers have done. And I don’t want to get too far ahead and make some generalizations about how we’re doing in that area.

IEEE Spectrum: ERI’s programs were centered on three main pillars: design, architecture, and materials & integration. Has that evolved any now that there are new sub-programs in play?

Mark Rosker: At the summit, I’ll probably be talking about how, going forward, ERI will be divided into four different areas. The first area [will focus on] new materials and devices that go beyond the materials and devices people traditionally had to use. Largely, we’re talking about silicon in that case. The second area is a very familiar theme that [former ERI director, now special assistant to the director at DARPA] Bill Chappell talked a lot about: specialized-function circuits that are really focused and optimized to do specific tasks. The third area is really tools that help you organize those specialized functions, and also enable you to incorporate security without necessarily being an expert at security. And then lastly is heterogeneous integration: How do you tie these new specialized functions together?


IEEE Spectrum: Heterogeneous integration, particularly through chiplets, is something that’s being actively explored and debated now by industry. Is there a back-and-forth between this program and that debate?

Mark Rosker: Yeah. I really do believe that in 10 years or in 20 years, this period of electronics, more than anything else, may be associated with heterogeneous integration. Really, it’s just the physical manifestation of everything we’ve been talking about. So what you’re asking is, what are the standards and processes that are going to allow this to take place on a global scale. The CHIPS program [Common Heterogeneous Integration and IP Reuse Strategies] certainly is an example of trying to create such standards, and push a community towards adopting that because there are mutual gains to be had if everybody designs in ways that allow reuse and compatibility.

My own opinion is that DARPA will not be able to drive that standards formation, certainly not in the commercial world. But what we can do is encourage the creation of those standards by the larger community by showing that, in certain cases and in certain domains, there are really large advantages to having those commercial standards. 


IEEE Spectrum: What’s the roadmap for ERI going forward?

Mark Rosker: We started with a very large investment, we pushed a bunch of ideas out to the community, and we got a great reaction. We’re now sort of in year two, which is a really good time to take stock. We have a commitment to maintain at least five years in this process, and I probably shouldn’t speculate about what happens after five years. In any case, if we’re going to make midcourse changes, if we’re going to cover new areas that maybe we might have missed that we should have covered, now is a good time to be having that discussion and think about what those things should be. For example, in our second year, we had an increased focus on security, as well as on strengthening the manufacturing options that are available for specialized functions, like photonics or RF.

IEEE Spectrum: You began several new programs some months after the official launch. Could you talk about those a bit?

Mark Rosker: At the launch in July, we had a day of workshops. And from those workshops, we took some of that community feedback and created what we call our Phase Two of programs. It was six additional programs that we’ve announced since November. Those haven’t kicked off yet, but they have been announced. There are six new efforts that are categorized around security, as well as a defense applications initiative and some manufacturing initiatives.


IEEE Spectrum: Can you walk us through some of those new ones?

Mark Rosker: I can certainly tell you about what the program goals are.

I don’t think there’s any particular point to the order here. But the first program is one called PIPES, which stands for Photonics in the Package for Extreme Scalability. And what this is really about is very-high-bandwidth optical signaling for digital interconnects. Photonic interconnects are something that everyone understands, but we’re really talking about driving very high bandwidth photonics all the way down to the package level.

IEEE Spectrum: Currently it stops at the rack level, right?

Mark Rosker: Exactly. Exactly. So, this would be useful for achieving sensationally high transfer rates all the way to the package.

[For some of the reasons why that hasn’t yet been achieved commercially, see “Silicon Photonics Stumbles at the Last Meter,” IEEE Spectrum, September 2018.]


IEEE Spectrum: What’s next?

Mark Rosker: So the second program is called T-MUSIC, which stands for Technologies for Mixed-mode Ultra Scaled Integrated Circuits. [Hesitates.] I always have to check, because once you come up with the acronym no one ever remembers what it stands for. This is a program that is really focused on trying to develop very integrated and very broadband RF electronics. It’s combining silicon CMOS with silicon germanium technologies to get to next-generation mixed mode devices. These are things that could probably be up to the terahertz in terms of performance. Clearly, this is very highly relevant to the Department of Defense. The DOD typically is asking for extremely high performance even by commercial standards. But what it also offers is a route to onshore manufacturing. That’s very important in this particular program.

The third program in this list is the program called GAPS, which stands for Guaranteed Architecture for Physical Security. Again, this is getting back to the physical security part of the problem. Really what you’re talking about doing here is taking architectures that can be provably separated and provably shown to be secure. So it’s hardware components and interfaces, co-designed tools, and integration of the tools into systems that can be validated.

IEEE Spectrum: I’m going to need a little unpacking for that. What do you mean provably separate, provably secure? I’m not sure what’s being separated from what.

Mark Rosker: So to explain this, I want you to imagine that you have multiple tasks that you would like to do and that you want to ensure that one task does not talk to another task or that someone who is supposed to be getting information from one task doesn’t receive information that’s related to the other task. Ultimately, these could be things that could be at a different level of security from each other, or simply they may be—in the commercial world—they may be simply tasks that you want to make sure are kept separate. That is a significant problem in the DOD and government space.

IEEE Spectrum: And this is at the level of computer architecture?

Mark Rosker: Yup.

IEEE Spectrum: Is that what you’re talking about? This sounds a little bit like a response to Spectre and Meltdown in a way—information bleeding from one process to another due to an architecture issue.

Mark Rosker: I think that is certainly within the scope of the kinds of things that we’re interested in looking at.

The fourth program is called DRBE. We got a little bit creative with the acronymship, it’s Digital RF Battlefield Emulator. This is really quite interesting, because we’re using a problem that is of interest to the Department of Defense to serve as something that drives high performance computing in a larger way: high fidelity emulation of RF environments. If you were, for example, in downtown Chicago, and you had a large number of emitters around you—cell phone towers, just all the things that you’ll find in a city—trying to understand how that RF environment works is an immense computational problem.

IEEE Spectrum: Obviously, since you’re working on it I think I know the answer to this, but that’s not something that AT&T or Verizon can currently do? I mean, they can’t just stand in the space and get a complete picture of the RF environment?

Mark Rosker: Actually, it’s not anything that anyone can do. It depends on the level of fidelity at which you’re trying to simulate, of course. You can do an emulation of a system with a spreadsheet. But if you ask for a complex model that models what’s going on with a large number of emitters and a large number of what’s called multipaths, the problem grows geometrically. If I take a very small number—say, 10—it’s easy to do that. But if I take a very large number like the number of people in an urban environment, not really. No one can do that.

I’m being glib here because I’m saying number of people or number of emitters, but I also have to worry about the number of paths. It’s a multipath problem. And so that problem becomes very—it becomes intractable, actually.

IEEE Spectrum: What are some other new programs?

Mark Rosker: These are very new. One is called Real-Time Machine Learning, RTML. It probably sounds like what it is. It’s trying to reduce the design costs of developing AI or machine learning by developing ways to automatically generate the chip designs for machine learning. Really, I guess what you would say is that RTML is about making a machine learning compiler. If you could do that, it would be enormously important in terms of reducing the cost of building—I guess you could call it—a machine learning processor.

IEEE Spectrum: And is this aimed at the inferencing or training chips?

Mark Rosker: The training kind. These are basically the tensor processors and PyTorches of the world.

IEEE Spectrum: The ERI already has a hardware compiler component through the IDEA program, right? 

Mark Rosker: Right, but there is no machine learning compiler that exists. It’s a completely separate problem. So this would be a first of its kind. The hardware compiler technology under development through POSH and IDEA, those programs are more traditional Von Neumann-type generalized processing.

IEEE Spectrum: Are there any other programs you want to talk about?

Mark Rosker: There’s one more that I haven’t mentioned and we call it AISS, Automatic Implementation of Secure Silicon. And this is a design program. It is run by [DARPA program manager] Serge Leef, and, basically, what it is about is creating an augmented chip design flow that is consistent with security mechanisms.

IEEE Spectrum: How does this differ from the other automated design programs that you’re already working on through POSH and IDEA?

Mark Rosker: POSH and IDEA are really about trying to deal with complexity. This is about secure silicon. How do you make a design which provides a way of evaluating and making sure you have achieved some security metric?

IEEE Spectrum: Security is always a moving target. What sort of things are going to have to be guarded against, by design? Or have you decided what those things are?

Mark Rosker: You’re right; in the security space you have to define the problem. AISS is specifically dealing with four threat vectors: side channel attacks, Trojan insertion, reverse engineering, and supply chain attacks such as cloning and counterfeiting.

IEEE Spectrum: Sort of like the GAPS program, but with automated design?

Mark Rosker: Yes. It is in that space between the two.

IEEE Spectrum: What’s your ideal outcome from the symposium in July?

Mark Rosker: I think, for us, the summit is all about engagement with the larger community. I think we have been very successful in the first year at attracting the attention of a number of companies, ones we call non-traditional performers, [by which we mean] people who have not traditionally answered our call for responding to new ideas.

What I think we want to do moving forward is to couple better with those people and those communities and some of the more traditional performers that we have who work on problems.

But, again, we, at DARPA are always mindful that we’re a part of the Department of Defense. Ultimately, those improvements and capabilities that we develop, we want to see realized in applications that are important and disruptive for the Department of Defense.

So the ideal outcome is engagement. Not just between us and different communities, but between traditional and non-traditional performer communities. Having them together and talking to each other. And, hopefully, working with each other in ERI as we move forward.

IEEE Spectrum: Any fiscal year 2019 budget information you can share for ERI?

Mark Rosker: Well, I don’t think we announced the budget to the dollar. We committed to at least $1.5 billion over 5 years, and we are absolutely going to deliver on that.


Racing Toward Yottabyte Information

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/semiconductors/memory/racing-toward-yottabyte-information

Yotta, yotta, yotta—that’s the Greek prefix we’ll soon need to describe the vastness of our data archives

Once upon a time, information was deposited only inside human brains, and ancient bards could spend hours retelling stories of conflicts and conquests. Then external data storage was invented. 

Small clay cylinders and tablets, invented in Sumer some 5,000 years ago, often contained just a dozen cuneiform characters, equivalent to a few hundred bytes (10² B). The Oresteia, a trilogy of Greek tragedies by Aeschylus (fifth century BCE), amounts to about 300,000 B (10⁵ B). Some rich senators in imperial Rome had libraries housing hundreds of scrolls, with one large collection holding at least 10⁸ B (100 megabytes).

A radical shift came with Johannes Gutenberg’s printing press, using movable type. By 1500, less than half a century after printing’s introduction, European printers had released more than 11,000 new book editions. The extraordinary rise of printing was joined by other forms of stored information. First came engraved and woodcut music scores, illustrations, and maps. Then, in the 19th century, came photographs, sound recordings, and movies. Reference works and other regularly published statistical compendia during the 20th century came on the backs of new storage modes, notably magnetic tapes and long-playing records.

Beginning in the 1960s, computers expanded the scope of digitization to medical imaging (a digital mammogram [PDF] is 50 MB), animated movies (2–3 gigabytes), intercontinental financial transfers, and eventually the mass emailing of spam (more than 100 million messages sent every minute). Such digitally stored information rapidly surpassed all printed materials. Shakespeare’s plays and poems amount to 5 MB, the equivalent of just a single high-resolution photograph, or of 30 seconds of high-fidelity sound, or of eight seconds of streamed high-definition video.
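Those equivalences roughly check out against common bit rates (the CD-audio and HD-streaming rates below are typical values assumed for the calculation, not figures from the article):

```python
# Sanity check of the 5-MB equivalences, using assumed typical bit rates.
shakespeare_mb = 5

cd_audio_bytes_per_s = 44_100 * 2 * 2      # 44.1 kHz, 16-bit samples, stereo
hd_stream_bits_per_s = 5_000_000           # ~5 Mb/s HD video stream (assumed)

audio_seconds = shakespeare_mb * 1e6 / cd_audio_bytes_per_s
video_seconds = shakespeare_mb * 1e6 * 8 / hd_stream_bits_per_s

print(f"5 MB is roughly {audio_seconds:.0f} s of CD-quality audio")
print(f"5 MB is roughly {video_seconds:.0f} s of 5-Mb/s HD video")
```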

Printed materials have thus been reduced to a marginal component of overall global information storage. By the year 2000, all books in the Library of Congress were on the order of 10¹³ B (more than 10 terabytes), but that was less than 1 percent of the total collection (10¹⁵ B, about 3 petabytes) once all photographs, maps, movies, and audio recordings were added.

And in the 21st century this information is being generated ever faster. In its latest survey of data generated per minute in 2018, Domo [PDF], a cloud service, listed more than 97,000 hours of video streamed by Netflix users, nearly 4.5 million videos watched on YouTube, just over 18 million forecast requests on the Weather Channel, and more than 3 quadrillion bytes (3.1 petabytes) of other Internet data used in the United States alone. By 2016, the annual global data-creation rate surpassed 16 ZB (a zettabyte is 10²¹ B), and by 2025, it is expected to rise by another order of magnitude—that is, to about 160 ZB or 10²³ B. And according to Domo, by 2020 1.7 MB of data will be generated every second for every one of the world’s nearly 8 billion people.
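As a quick check, Domo’s per-second figure converts into the “more than 50 trillion bytes of information per person per year” mentioned below:

```python
# Converting Domo's quoted rate (1.7 MB generated per second per person)
# into an annual per-person volume.
mb_per_second_per_person = 1.7
seconds_per_year = 365 * 24 * 3600

bytes_per_person_per_year = mb_per_second_per_person * 1e6 * seconds_per_year
print(f"~{bytes_per_person_per_year:.1e} bytes per person per year")  # ~5.4e13, i.e. >50 trillion
```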

These quantities lead to some obvious questions. Only a fraction of the data flood could be stored, but which part should that be? Challenges of storage are obvious even if less than 1 percent of this flow gets preserved. And for whatever we decide to store, the next question is how long should the data be preserved. No storage need last forever, but what is the optimal span?

The highest prefix in the international system of units is yotta, Y = 10²⁴. We’ll have that many bytes within a decade. And once we start creating more than 50 trillion bytes of information per person per year, will there be any real chance of making effective use of it? It is easier to find new prefixes for large databases than to decide how large is large enough. After all, there are fundamental differences between accumulated data, useful information, and insightful knowledge.

This article appears in the July 2019 print issue as “Data World: Racing Toward Yotta.”

Solve Your Thin Film Challenges in High-Volume Compound Semi Manufacturing

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/addressing-thin-film-challenges-in-highvolume-compound-semiconductor-manufacturing-a-360degree-solution

In this white paper, you’ll learn how investing in a robust, reliable thin film deposition solution will better position compound semi manufacturers for high-volume production.

Scaling into high-volume production for compound semiconductor manufacturing does not just involve achieving a higher throughput and factory output. Compound semi manufacturers need to invest in a robust, reliable thin film deposition solution that is configured for high throughput and excellent precision. In this white paper, you’ll learn how a flexible configuration with the right hardware, software and partner support will lead to a better production process and performance and a lower cost of ownership.


Magnet Sets World Record at 45.5 Teslas

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/semiconductors/nanotechnology/a-beachhead-to-superstrong-magnetic-fields

It’s the strongest continuous DC magnetic field ever recorded and could help scientists study nuclear fusion and exotic states of matter

A new multicomponent, partially superconducting electromagnet—currently the world’s strongest DC magnet of any kind—is poised to reveal a path to substantially stronger magnets still. The new magnet technology could help scientists study many other phenomena, including nuclear fusion, exotic states of matter, “shape-shifting” molecules, and interplanetary rockets, to name a few.

The National High Magnetic Field Laboratory in Tallahassee, Florida is home to four types of advanced, ultra-strong magnets. One supports magnetic resonance studies. Another is configured for mass spectrometry. And a different type produces the strongest magnetic fields in the world. (Sister MagLab campuses at the University of Florida and Los Alamos National Laboratory provide three more high-capacity magnets for other fields of study.)

It’s that last category on the Tallahassee campus—world’s strongest magnet—that the latest research is attempting to complement. The so-called MagLab DC Field Facility, in operation since 1999, is nearing a limit in the strength of magnetic fields it can produce with its current materials and technology.

The MagLab’s DC magnet maintains a steady 45 Tesla of field strength, which until very recently was the strongest continuous magnetic field produced in the world. (Not to be confused with the electric car brand of the same name, Tesla is also a unit of magnetic field strength. The higher its Tesla rating, the stronger the magnet. For comparison, a typical MRI machine is built around a superconducting magnet with approximately 3 Tesla of field strength. The Earth’s magnetic field, felt at the planet’s surface, is 0.00005 T.)
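For a sense of scale, the ratios implied by those field strengths:

```python
# Field-strength ratios implied by the figures quoted above.
record_field_t = 45.5     # new record DC field (teslas)
mri_field_t = 3.0         # typical clinical MRI magnet
earth_field_t = 0.00005   # Earth's field at the surface

print(f"Record field vs. MRI magnet:    {record_field_t / mri_field_t:.0f}x")
print(f"Record field vs. Earth's field: {record_field_t / earth_field_t:,.0f}x")
```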

U.S.-China Trade War Portends Painful Times for U.S. Semiconductor Industry

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/semiconductors/processors/china-trade-war-portends-period-of-pain-for-semiconductor-industry

Semiconductor industry mavens in the United States anticipate damage from U.S.-China trade policy and call for a national strategy for semiconductor manufacturing

“There is going to be a lot of pain for the semiconductor industry before it normalizes,” says Dan Hutcheson.

“It’s a mess, and it’s going to get a lot worse before it gets better,” says David French.

“If we aren’t going to sell them chips, it is not going to take them long [to catch up to us]; it is going to hurt us,” says Mar Hershenson.


French, Hutcheson, and Hershenson, along with Ann Kim and Pete Rodriguez, were discussing the U.S.-China trade war that escalated last month when the United States placed communications behemoth Huawei on a trade blacklist. All five are semiconductor industry veterans and investors: French is currently chairman of Silicon Power Technology; Hutcheson is CEO of VLSI Research; Hershenson is managing partner of Pear Ventures; Kim is managing director of Silicon Valley Bank’s Frontier Technology Group; and Rodriguez is CEO of startup incubator Silicon Catalyst. The five took the stage at Silicon Catalyst’s second industry forum, held in Santa Clara, Calif., last week to discuss several aspects of the trade war.

A Faster Way to Rearrange Atoms Could Lead to Powerful Quantum Sensors

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/a-faster-way-to-rearrange-atoms

The technique is also more accurate than the traditional method of poking atoms with the tip of a scanning tunneling microscope

The fine art of adding impurities to silicon wafers lies at the heart of semiconductor engineering and, with it, much of the computer industry. But this fine art isn’t yet so finely tuned that engineers can manipulate impurities down to the level of individual atoms.

As technology scales down to the nanometer size and smaller, though, the placement of individual impurities will become increasingly significant. That makes last month’s announcement all the more interesting: Scientists can now rearrange individual impurities (in this case, single phosphorus atoms) in a sheet of graphene by using electron beams to knock them around like croquet balls on a field of grass.

The finding suggests a new vanguard of single-atom electronic engineering. Research team member Ju Li, a professor of nuclear science and engineering at MIT, says gone are the days when individual atoms could be moved around only mechanically—often clumsily, on the tip of a scanning tunneling microscope.

This MicroLED Display Is Smaller Than a Bug

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/this-microled-display-is-smaller-than-a-bug

Mojo Vision’s microLED display has record-breaking pixel density and a somewhat mysterious purpose

A Silicon Valley-based startup has recently emerged from stealth mode to reveal what it claims is the smallest, most pixel-dense dynamic display ever built. Mojo Vision’s display is just 0.48 millimeters across, but it has about 300 times as many pixels per square inch as a typical smartphone display.

Europe Has Invested €1 Billion Into Graphene—But For What?

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/europe-has-invested-1-billion-into-graphenebut-for-what

Six years into an ambitious 10-year research project, experts weigh in on whether the Graphene Flagship can help the “wonder material” make it through the Valley of Death

Six years ago, the European Union (EU) embarked on an ambitious project to create a kind of Silicon Valley for the “wonder material” of the last decade: graphene. The project—called the Graphene Flagship—would leverage €1 billion over 10 years to push graphene into commercial markets. The project would bring together academic and industrial research institutes to not only ensure graphene research would be commercialized, but to also make Europe an economic powerhouse for graphene-based technologies.

To this day, the EU’s investment in the Graphene Flagship represents the single largest project in graphene research and development (though some speculate that graphene-related projects in China may have surpassed it). In the past six years, the Graphene Flagship has spawned nine companies and 46 new graphene-based products. Despite these achievements, there remains a sense among critics that the wonder material has not lived up to expectations and the Flagship’s efforts have not done much to change that perception.

Graphene’s unique properties have engendered high expectations in a host of areas, including for advanced composites and new types of electronic devices. While graphene can come in many forms, its purest form is that of a one-atom-thick layer of graphite. This structure has provided the highest thermal conductivity ever recorded—10 times higher than copper. It also has one of the highest intrinsic electron mobilities of any material (the speed at which electrons can travel through a material), which is approximately 100 times greater than silicon—a tantalizing property for electronic applications.
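Roughly, using typical literature values (the specific numbers below are assumptions for illustration, not figures from the Flagship or the article), those comparisons work out as follows:

```python
# Approximate literature values (assumed) behind the "10 times copper" and
# "~100 times silicon" comparisons in the text.
graphene_thermal_w_mk = 5000        # in-plane thermal conductivity, W/(m*K), approx.
copper_thermal_w_mk = 400

graphene_mobility_cm2_vs = 140_000  # intrinsic electron mobility, cm^2/(V*s), approx.
silicon_mobility_cm2_vs = 1_400

print(f"Thermal conductivity: ~{graphene_thermal_w_mk / copper_thermal_w_mk:.0f}x copper")
print(f"Electron mobility:    ~{graphene_mobility_cm2_vs / silicon_mobility_cm2_vs:.0f}x silicon")
```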

The Graphene Flagship is now more than halfway through its 10-year funding cycle. To many observers, the project’s achievements—or lack thereof—is a barometer for the commercial status of graphene, which was first synthesized at the UK’s University of Manchester in 2004, earning its discoverers the Nobel Prize in 2010. When it was founded, the Flagship wrestled with a key question that it still faces today: Was the Flagship set up to support “fundamental” research or “applied” research in its quest to make Europe the “Graphene Valley” of the world?

Another Step Toward the End of Moore’s Law

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/semiconductors/devices/another-step-toward-the-end-of-moores-law

Samsung and TSMC move to 5-nanometer manufacturing

Two of the world’s largest foundries—Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung—announced in April that they’d climbed one more rung on the Moore’s Law ladder. TSMC spoke first, saying its 5-nanometer manufacturing process is now in what’s called “risk production”—the company believes it has finished the process, but initial customers are taking a chance that it will work for their designs. Samsung followed quickly with a similar announcement.

TSMC says its 5-nm process offers a 15 percent speed gain or a 30 percent improvement in power efficiency. Samsung is promising a 10 percent performance improvement or a 20 percent efficiency improvement. Analysts say these figures are in line with expectations. Compared, though, with the sometimes 50 percent improvements of a decade ago, it’s clear that Moore’s Law is not what it used to be. But judging by the investments big foundries are making, customers still think it’s worthwhile.

Why is 5 nanometers special?

The 5-nm node is the first to be built from the start using extreme ultraviolet lithography (EUV). With a wavelength of just 13.5 nm, EUV light can produce extremely fine patterns on silicon. Some of these patterns could be made with the previous generation of lithographic tools, but those tools would have to lay down three or four different patterns in succession to produce the same result that EUV manages in a single step.

Foundries began 7-nm manufacturing without EUV, but later used it to collapse the number of lithographic steps and improve yield. At 5 nm, the foundries are thought to be using 10 to 12 EUV steps, which would translate to 30 or more steps in the older technology, if it were even possible to use the older tech.
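In rough numbers, combining the 10 to 12 EUV layers with the three or four exposures each would have required on the previous generation of tools:

```python
# Rough step-count comparison between EUV and older multi-patterning, per the text.
euv_layers = range(10, 13)    # 10 to 12 EUV exposures at 5 nm
patterns_per_layer = (3, 4)   # older tools need 3-4 exposures per such layer

for layers in euv_layers:
    low, high = layers * patterns_per_layer[0], layers * patterns_per_layer[1]
    print(f"{layers} EUV steps -> {low}-{high} steps with older multi-patterning")
```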

Because the photomasks that contain the patterns are so expensive and each lithography machine itself is a US $100 million–plus investment, “EUV costs more per layer,” says G. Dan Hutcheson, of VLSI Research. But it’s a net gain on a per-wafer basis, and EUV will form the core of all future processes.

Who will use it?

The new manufacturing processes aren’t for everyone. At least not yet. But both companies identified some likely early adopters, including suppliers that make smartphone application processors and 5G infrastructure. “You have to have high volume and a need for either speed or power efficiency,” says Len Jelinek, a semiconductor-manufacturing analyst at IHS Markit.

Whom you’re competing against counts too, explains Kevin Krewell at TIRIAS Research. Graphics processing units, field-programmable gate arrays, and high-performance microprocessors used to be the first to take advantage of the bleeding edge of Moore’s Law. But with less competition in those markets, it’s the mobile processors that need the new tech to distinguish themselves, he says.

Is it okay that there are only two companies left?

Only Samsung and TSMC are offering 5-nm foundry services. GlobalFoundries gave up at 14 nm and Intel, which is years late with its rollout of an equivalent to competitors’ 7 nm, is thought to be pulling back on its foundry services, according to analysts.

Samsung and TSMC remain because they can afford the investment and expect a reasonable return. Samsung was the largest chipmaker by revenue in 2018, but its foundry business ranks fourth, with TSMC in the lead. TSMC’s capital expenditure was $10 billion in 2018. Samsung expects to nearly match that on a per-year basis until 2030.

Can the industry function with only two companies capable of the most advanced manufacturing processes? “It’s not a question of can it work?” says Hutcheson. “It has to work.”

“As long as we have at least two viable solutions, then the industry will be comfortable,” says Jelinek.

What’s next?

Chipmakers’ pipelines have traditionally had 5 nm following 7 nm and 3 nm following 5 nm. But analysts say to expect foundries to offer a variety of technologies with incremental improvements that fill in the gaps. Indeed, both Samsung and TSMC are offering what they’re calling a 6-nm process. Foundries will need those intermediate products to keep customers coming to the edge of Moore’s Law. After all, there aren’t many numbers left between 5 and 0.

New Optimization Chip Tackles Machine Learning, 5G Routing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/georgia-tech-optimization-chip-solves-huge-class-of-hard-problems

A 49-core chip by Georgia Tech uses a 1980s-era algorithm to solve some of today’s toughest optimization problems faster than a GPU

Engineers at Georgia Tech say they’ve come up with a programmable prototype chip that efficiently solves a huge class of optimization problems, including those needed for neural network training, 5G network routing, and MRI image reconstruction. The chip’s architecture embodies a particular algorithm that breaks up one huge problem into many small problems, works on the subproblems, and shares the results. It does this over and over until it comes up with the best answer. Compared to a GPU running the algorithm, the prototype chip—called OPTIMO—is 4.77 times as power efficient and 4.18 times as fast.

The training of machine learning systems and a wide variety of other data-intensive work can be cast as a type of mathematical problem called constrained optimization. In it, you’re trying to minimize the value of a function under some constraints, explains Georgia Tech professor Arijit Raychowdhury. For example, training a neural net could involve seeking the lowest error rate under the constraint of the size of the neural network.

“If you can accelerate [constrained optimization] using smart architecture and energy-efficient design, you will be able to accelerate a large class of signal processing and machine learning problems,” says Raychowdhury. A 1980s-era algorithm called the alternating direction method of multipliers, or ADMM, turned out to be the solution. The algorithm solves enormous optimization problems by breaking them up and then reaching a solution over several iterations.
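The OPTIMO design itself isn’t spelled out here, but ADMM’s split-solve-share loop is easy to illustrate on a toy consensus problem (the sketch below is purely illustrative and is not the chip’s implementation):

```python
# Minimal sketch of ADMM's split-solve-share pattern (consensus form).
# Each "core" holds one scalar subproblem f_i(x) = 0.5*(x - a_i)^2; the shared
# consensus solution is simply the mean of the a_i.
import numpy as np

a = np.array([1.0, 4.0, 7.0, 10.0])   # local data, one value per subproblem
rho = 1.0                              # ADMM penalty parameter
x = np.zeros_like(a)                   # local variables
z = 0.0                                # shared (consensus) variable
u = np.zeros_like(a)                   # scaled dual variables

for iteration in range(50):
    x = (a + rho * (z - u)) / (1.0 + rho)   # local solves, fully parallelizable
    z = np.mean(x + u)                       # gather/share step
    u = u + x - z                            # dual update

print(f"ADMM consensus value: {z:.4f} (direct average: {np.mean(a):.4f})")
```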

Speed up DC-DC converter control design with Simulink

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/speed-up-dc-dc-converter-control-design-with-simulink

Simulink optimizes system behavior by simulating multiple design options from a single environment.

Learn how to: use simulation to develop a digital controller for a DC-DC power converter; model passive circuit elements, power semiconductors, power sources and loads; simulate continuous and discontinuous conduction modes; simulate power losses and thermal behavior; tune controller gains to meet design requirements; and generate C code for a TI C2000 microcontroller.