All posts by Samuel K. Moore

U.S. Invests in Fabs That Make Radiation-Hardened Chips

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/devices/us-invests-in-radiationhardenedchip-fabs

The U.S. is investing in upgrades to the fabrication facility that makes the radiation-hardened chips for its nuclear arsenal. It is also spending up to US $170 million to enhance the capabilities of SkyWater Technology Foundry, in Bloomington, Minn., in part to improve the company’s radiation-hardened-chip line for other Defense Department needs.

X-Ray Tech Lays Chip Secrets Bare

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/design/xray-tech-lays-chip-secrets-bare

Scientists and engineers in Switzerland and California have come up with a technique that can reveal the 3D design of a modern microprocessor without destroying it.

Typically today, such reverse engineering is a time-consuming process that involves painstakingly removing each of a chip’s many nanometers-thick interconnect layers and mapping them using a hierarchy of different imaging techniques, from optical microscopy for the larger features to electron microscopy for the tiniest features. 

The inventors of the new technique, called ptychographic X-ray laminography, say it could be used by integrated circuit designers to verify that manufactured chips match their designs, or by government agencies concerned about “kill switches” or hardware trojans that could have secretly been added to ICs they depend on.

“It’s the only approach to non-destructive reverse engineering of electronic chips—[and] not just reverse engineering but assurance that chips are manufactured according to design,” says Anthony F. J. Levi, professor of electrical and computer engineering at University of Southern California, who led the California side of the team. “You can identify the foundry, aspects of the design, who did the design. It’s like a fingerprint.”

Custom Computer Makes Inverse Lithography Technology Practical for First Time

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/nanoclast/semiconductors/materials/custom-computer-makes-inverse-lithography-practical-for-first-time

Silicon Valley-based D2S revealed last week that it had solved the last problem in a nascent technique called inverse lithography technology, or ILT. The breakthrough could speed the process of making chips and allow semiconductor fabs to produce more advanced chips without upgrading equipment. The solution, a custom-built computer system, reduces the amount of time needed for a critical step from several weeks to a single day.

In most of the photolithography used to make today’s microchips, light with a wavelength of 193 nanometers is shone through lenses and a patterned photomask so that the pattern is shrunk down and projected onto the silicon wafer, where it defines device and circuit features. (The most modern chipmaking technology, extreme ultraviolet lithography, works a bit differently. But only a few chipmakers have these tools.)
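To see why this step is so compute-hungry, it helps to think of lithography as a forward model (mask in, printed pattern out) and of ILT as the inverse problem of searching for the mask that prints a desired pattern. The toy sketch below illustrates only that inversion, using a crude blur-and-threshold stand-in for the optics; D2S’s actual system relies on rigorous optical simulation running on its custom hardware, not anything like this.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def print_image(mask, blur_sigma=1.5, threshold=0.5):
    """Toy forward model: the optics blur the mask, the resist thresholds the result."""
    return (gaussian_filter(mask.astype(float), blur_sigma) > threshold).astype(int)

def invert(target, iters=3, blur_sigma=1.5):
    """Naive inverse step: greedily flip mask pixels whenever the flip brings the
    printed image closer to the target pattern."""
    mask = target.copy()
    for _ in range(iters):
        for i in range(mask.shape[0]):
            for j in range(mask.shape[1]):
                err_before = np.abs(print_image(mask, blur_sigma) - target).sum()
                mask[i, j] ^= 1
                err_after = np.abs(print_image(mask, blur_sigma) - target).sum()
                if err_after >= err_before:      # undo flips that don't help
                    mask[i, j] ^= 1
    return mask

target = np.zeros((24, 24), dtype=int)
target[8:16, 8:16] = 1                           # the feature we want on the wafer
mask = invert(target)
print("pixels still wrong:", int(np.abs(print_image(mask) - target).sum()))
```

Even this miniature version evaluates the forward model thousands of times for a 24-by-24-pixel patch, which hints at why full-chip ILT has historically taken weeks of computation.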

U.S. Seeks Superconducting Offshore Wind Generators

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/energywise/green-tech/wind/us-seeks-superconducting-offshore-wind-generators

Around this time last year, engineers were preparing to hoist the first superconducting wind turbine generator to the top of a tower in Denmark. The turbine capped an ambitious group of European efforts to find a way to increase the power of offshore turbines without making them too massive to build. Now it’s the United States’ turn. The U.S. Department of Energy will dole out US $8 million for three advanced drivetrain efforts, two of which rely on superconductors.

Ayer, Mass.-based American Superconductor Corporation (AMSC) will build a generator based on high-temperature superconductors (HTS), which lose their resistance at temperatures around 77 kelvin, reachable with liquid nitrogen. General Electric Research, in Niskayuna, N.Y., will develop a generator using low-temperature superconductors, leveraging its experience using liquid helium to cool the materials below 10 K in MRI machines.

Quantum Computing Software Startup Aliro Emerges From Stealth Mode

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/software/quantum-computing-software-startup-aliro-emerges-from-stealth-mode

There are a lot of different types of quantum computers. Arguably, none of them are ready to make a difference in the real world. But some startups are betting that they’re getting so close that it’s time to make it easy for regular software developers to take advantage of these machines. Boston-based Aliro Technologies is one such startup.

Aliro emerged from stealth mode today, revealing that it had attracted US $2.7 million from investors that include Crosslink Ventures, Flybridge Capital Partners, and Samsung NEXT’s Q Fund. The company was founded by Prineha Narang, a Harvard assistant professor of computational materials science, along with two of her students and a postdoctoral researcher.

Aliro plans a software stack that will let ordinary developers first determine whether available cloud-based quantum hardware can speed any of their workloads more than other accelerators, such as GPUs, and then write code that takes advantage of that speedup.
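In spirit, that first step is a dispatch decision: try the same sub-problem on a quantum backend and on a classical accelerator, and route the work to whichever wins. The sketch below is purely hypothetical; the solver functions and their timings are invented stand-ins, not Aliro’s API, but they show the shape of that decision.

```python
import time

# Hypothetical stand-ins: in practice these would submit the same sub-problem
# to a cloud quantum backend and to a GPU-accelerated classical solver.
def solve_on_quantum_backend(problem):
    time.sleep(0.20)                      # pretend queueing plus execution latency
    return "quantum result"

def solve_on_gpu(problem):
    time.sleep(0.05)
    return "classical result"

def best_backend(problem, trial_runs=3):
    """Time a small trial of each backend and return the faster one."""
    def timed(solver):
        start = time.perf_counter()
        for _ in range(trial_runs):
            solver(problem)
        return time.perf_counter() - start
    return min((solve_on_quantum_backend, solve_on_gpu), key=timed)

solver = best_backend({"size": 20})
print("dispatching to:", solver.__name__)
```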

U.S. Energy Department is First Customer for World’s Biggest Chip

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/us-energy-department-is-first-customer-for-worlds-biggest-chip

Argonne National Laboratory and Lawrence Livermore National Laboratory will be among the first organizations to install AI computers made from the largest silicon chip ever built. Last month, Cerebras Systems unveiled a 46,225-square-millimeter chip with 1.2 trillion transistors designed to speed the training of neural networks. Today, such training is often done in large data centers using GPU-based servers. Cerebras plans to begin selling computers based on the notebook-size chip in the fourth quarter of this year.

Stretchy Wearable Patch Allows Two-Way Communication With Robots

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/stretchy-wearable-patch-allows-twoway-communication-with-robots

Multifunctional metal-oxide semiconductor used to build flexible RRAM, transistors, and sensors

Engineers at the University of Houston are trying to make the melding of humans and machines a little easier on the humans. They’ve developed an easy-to-manufacture flexible electronics patch that, when attached to a human, translates the person’s motion and other commands to a robot and receives temperature feedback from the robot.

Led by University of Houston assistant professor Cunjiang Yu, the team developed transistors, RRAM memory cells, strain sensors, UV-light detectors, temperature sensors, and heaters all using the same set of materials in a low-temperature manufacturing process. They integrated the different devices into a 4-micrometer-thick adhesive plastic patch.

A paper describing the Houston researchers’ work appears this week in Science Advances.

With the patch on the back of a volunteer’s hand, the researchers were able to control a robot hand, causing it to close or open in response to the strain that the human hand’s motion produced in the patch’s sensors. What’s more, they were able to close the human-robot control loop by providing temperature feedback from the robotic hand to the human one using the patch’s integrated heater circuits.
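In software terms, the loop the researchers describe is simple: strain in, gripper command out; robot temperature in, heater power out. The sketch below is only a hypothetical illustration of that two-way loop; read_strain, set_gripper, read_robot_temperature, and set_heater are invented stand-ins for the actual device interfaces, which the paper implements in hardware.

```python
import time

# Invented stand-ins for the hardware interfaces described in the paper: the
# patch's strain sensor and heater, and the robot hand's gripper and
# fingertip temperature sensor.
def read_strain() -> float:
    return 0.7                      # 0.0 = hand open, 1.0 = fist fully closed

def set_gripper(closure: float) -> None:
    print(f"gripper closure -> {closure:.2f}")

def read_robot_temperature() -> float:
    return 35.0                     # degrees C at the robot's fingertips

def set_heater(power: float) -> None:
    print(f"patch heater power -> {power:.2f}")

def control_step() -> None:
    # Forward path: human motion, sensed as strain, drives the robot hand.
    set_gripper(read_strain())
    # Feedback path: the robot's temperature is rendered on the skin by the heater.
    temp = read_robot_temperature()
    set_heater(min(max((temp - 20.0) / 40.0, 0.0), 1.0))   # map 20-60 C onto 0-1 power

if __name__ == "__main__":
    for _ in range(3):
        control_step()
        time.sleep(0.05)            # roughly a 20 Hz loop in a real system
```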

Descartes Labs Built a Top 500 Supercomputer From Amazon Cloud

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/descartes-labs-built-a-top-500-supercomputer-from-amazon-cloud

Cofounder Mike Warren talks about the future of high-performance computing in a data-rich, cloud computing world

Descartes Labs cofounder Mike Warren has had some notable firsts in his career, and a surprising number have had lasting impact. Back in 1998, for instance, his was the first Linux-based computer fast enough to gain a spot on the coveted Top 500 list of supercomputers. Today, they all run Linux. Now his company, which crunches geospatial and location data to answer hard questions, has achieved something else that may be indicative of where high-performance computing is headed: It has built the world’s 136th-fastest supercomputer using just Amazon Web Services and Descartes Labs’ own software. In 2010, this would have been the most powerful computer on the planet.

Notably, Amazon didn’t do anything special for Descartes. Warren’s firm just plunked down US $5,000 on the company credit card for the use of a “high-network-throughput instance block” consisting of 41,472 processor cores and 157.8 terabytes of memory. It then worked out some software to make the collection act as a single machine. Running the standard supercomputer benchmark, LINPACK, the system reached 1,926.4 teraFLOPS (trillion floating-point operations per second). (Amazon itself made an appearance much lower down on the Top 500 list a few years back, but that’s thought to have been for a dedicated system on which Amazon was the sole user, rather than for the resources available to the public.)
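As a back-of-the-envelope check on those figures, the measured LINPACK score works out to roughly 46 gigaFLOPS of sustained double-precision throughput per core:

```python
rmax_tflops = 1_926.4     # measured LINPACK performance, in teraFLOPS
cores = 41_472            # processor cores rented from AWS

per_core_gflops = rmax_tflops * 1e12 / cores / 1e9
print(f"{per_core_gflops:.1f} GFLOPS per core")   # about 46.5 GFLOPS per core
```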

Intel Shows Off Chip Packaging Powers

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/intel-shows-off-chip-packaging-powers

Three research directions should bind chiplets more tightly together

Packaging has arguably never been a hotter subject. With Moore’s Law no longer providing the oomph it once did, one path to better computing is to connect chips more tightly together within the same package.

At Semicon West earlier this month, Intel showed off three new research efforts in packaging. One combines two of its existing technologies to more tightly integrate chiplets—smaller chips linked together in a package to form the kind of system that would, until recently, be made as a single large chip. Another adds better power delivery to dies at the top of a 3D stack of chips. And the final one is an improvement on Intel’s chiplet-to-chiplet interface called Advanced Interface Bus (AIB).

First Programmable Memristor Computer

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/first-programmable-memristor-computer

Michigan team builds memristors atop standard CMOS logic to demo a system that can do a variety of edge computing AI tasks

Hoping to speed AI and neuromorphic computing and cut down on power consumption, startups, scientists, and established chip companies have all been looking to do more computing in memory rather than in a processor’s computing core. Memristors and other nonvolatile memories seem to lend themselves to the task particularly well. However, most demonstrations of in-memory computing have been in standalone accelerator chips that either are built for a particular type of AI problem or need the off-chip resources of a separate processor in order to operate. University of Michigan engineers are claiming the first memristor-based programmable computer for AI that can work all on its own.
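The appeal of memristors for this job comes down to crossbar physics: program the network’s weights as conductances, apply the inputs as voltages, and the currents that accumulate on the array’s columns deliver the vector-matrix product that dominates neural-network workloads. Below is a minimal numerical sketch of that idea, not the Michigan group’s hardware or programming model.

```python
import numpy as np

# Each column of conductances G (in siemens) stores one weight vector of a
# layer; applying input voltages V to the rows yields column currents
# I = V @ G by Ohm's and Kirchhoff's laws -- the multiply-accumulate happens
# inside the memory array instead of in a processor core.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(64, 10))   # memristor conductances, 64 rows x 10 columns
V = rng.uniform(0.0, 0.2, size=64)           # input voltages on the word lines

I = V @ G                                    # column currents = analog vector-matrix product
print(I.shape)                               # (10,) -- one accumulated current per output
```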