All posts by Rahul Rao

Folding Drone Can Drop Into Inaccessible Mines

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/automaton/robotics/drones/folding-drone-can-drop-into-inaccessible-mines

Inspecting old mines is a dangerous business. For humans, mines can be lethal: prone to rockfalls and filled with noxious gases. Robots can go where humans might suffocate, but even robots can only do so much when mines are inaccessible from the surface.

Now, researchers in the UK, led by Headlight AI, have developed a drone that could cast a light in the darkness. Named Prometheus, this drone can enter a mine through a borehole not much larger than a football, before unfurling its arms and flying around the void. Once down there, it can use its payload of scanning equipment to map mines where neither humans nor robots can presently go. This, the researchers hope, could make mine inspection quicker and easier. The team behind Prometheus published its design in November in the journal Robotics.

Mine inspection might seem like a peculiarly specific task to fret about, but old mines can collapse, causing the ground to sink and damaging nearby buildings. It’s a far-reaching threat: the geotechnical engineering firm Geoinvestigate, based in Northeast England, estimates that around 8 percent of all buildings in the UK are at risk from any of the thousands of abandoned coal mines near the country’s surface. It’s also a threat to transport, such as road and rail. Indeed, Prometheus is backed by Network Rail, which operates Britain’s railway infrastructure.

Such grave dangers mean that old mines need periodic check-ups. To enter depths that are forbidden to traditional wheeled robots—such as those featured in the DARPA SubT Challenge—inspectors today drill boreholes down into the mine and lower scanners into the darkness.

But that can be an arduous and often fruitless process. Inspecting the entirety of a mine can take multiple boreholes, and that still might not be enough to chart a complete picture. Mines are jagged, labyrinthine places, and much of the void might lie out of sight. Furthermore, many old mines aren’t well-mapped, so it’s hard to tell where best to enter them.

Prometheus can fly around some of those challenges. Inspectors can lower Prometheus, tethered to a docking apparatus, down a single borehole. Once inside the mine, the drone can undock and fly around, using LIDAR scanners—common in mine inspection today—to generate a 3D map of the unknown void. Prometheus can fly through the mine autonomously, using infrared data to plot out its own course.

Other drones exist that can fly underground, but they’re either too small to carry a relatively heavy payload of scanning equipment, or too large to easily fit down a borehole. What makes Prometheus unique is its ability to fold its arms, allowing it to squeeze down spaces its counterparts cannot.

It’s that ability to fold and enter a borehole that makes Prometheus remarkable, says Jason Gross, a professor of mechanical and aerospace engineering at West Virginia University. Gross calls Prometheus “an exciting idea,” but he does note that it has a relatively short flight window and few abilities beyond scanning.

The researchers have conducted a number of successful test flights, both in a basement and in an old mine near Shrewsbury, England. Not only was Prometheus able to map out its space, but it was also able to plot its own course through an unknown area.

The researchers’ next steps, according to Puneet Chhabra, co-founder of Headlight AI, will be to test Prometheus’s ability to unfold in an actual mine. Following that, researchers plan to conduct full-scale test flights by the end of 2021.

NIST Develops Compact Laser-Cooled Atom Trap

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/optoelectronics/nist-develops-compact-laser-cooled-atom-trap

Using lasers, scientists can deep-freeze atoms down to a hair’s breadth above absolute zero. Because such cold atoms move very slowly, their properties can easily be measured. That makes them especially good for making ever-more-precise atomic clocks, navigation systems, and measurements of the Earth. But chilling atoms and keeping them confined usually requires an apparatus that’s mounted on a table, and so those clocks and navigation devices tend to be stuck in the lab.

Now, scientists might have found a way to change that. Researchers at the U.S. National Institute of Standards and Technology (NIST) have built a hand-size device that can trap atoms in a magnetic field and chill them to less than a millikelvin. NIST’s miniaturization could help cooled atoms exit the laboratory and start down the road to chips.

“There’s been this dream since the early 2000s to take [laser cooling] off of large optical tables…and make real devices out of them,” says William McGehee, a NIST researcher who helped build the device.

Scientists have used laser cooling for decades. It might seem counterintuitive, but if a laser’s light is fine-tuned to just the right frequency, and if the laser beam’s photons strike an atom like a headwind, the atom can absorb that photon and slow down. That kills off some of the atom’s energy and cools it. By repeatedly bombarding a cloud of atoms, scientists can plunge them to millionths of a degree above absolute zero.
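The scale of this process can be sketched with back-of-the-envelope numbers. The Python snippet below uses textbook constants for rubidium-87, a workhorse species in laser cooling (and the atom the NIST trap holds); none of these figures come from the NIST work itself.

```python
# Rough numbers for laser cooling of rubidium-87. All constants are
# standard textbook values, not taken from the article.
H = 6.626e-34        # Planck constant, J*s
M_RB = 1.443e-25     # mass of a Rb-87 atom, kg
WAVELENGTH = 780e-9  # Rb-87 D2 cooling transition, m

# Each absorbed photon kicks the atom's velocity by h / (lambda * m).
recoil_v = H / (WAVELENGTH * M_RB)

# A room-temperature Rb atom moves at roughly a few hundred m/s.
thermal_v = 270.0
n_photons = thermal_v / recoil_v  # absorptions needed to stop the atom

print(f"recoil per photon: {recoil_v * 1000:.2f} mm/s")
print(f"photons to stop a room-temperature atom: about {n_photons:.0f}")
```

Each photon only nudges the atom by a few millimeters per second, which is why tens of thousands of absorptions, repeated millions of times per second, are needed to bring an atom nearly to rest.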

Laser cooling was how physicists created that exotic state of matter called a Bose-Einstein condensate, winning the 2001 Nobel Prize in Physics. But cold atoms have more applications than just fun physics. In addition to clocks and navigation, they’re ideal for quantum computers and for linking them into quantum networks.

That’s part of what inspired the NIST researchers to find laser-cooling methods more amenable to industry. Other compact laser-cooling techniques do exist, but it’s difficult to mass-produce them at scales that devices of the future might need. “If you want to make, say, a hundred thousand nodes of a quantum network,” McGehee says, “you can’t take something that’s just a small version of what sits on an optics bench and make a hundred thousand of them.”

The NIST team’s device, about 15 centimeters long, contains three parts. First, is a photonic integrated circuit that launches a laser beam. But that beam is too narrow, so it passes through a metasurface, an array of silicon pillars that act like a prism, widening the beam. The metasurface also controls the beam’s brightness, preventing it from being more intense in its middle, and polarizes its light, which is important to helping it cool atoms.

The third part is a diffraction grating chip that splits the widened beam into multiple beams that strike the atoms from all sides. Those beams, with the help of a magnetic field, cool the atoms and corral them into a trap that’s about a centimeter wide. Using that trap, the NIST group has been able to cool rubidium atoms down to around 200 microkelvins.
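That 200-microkelvin figure sits near the theoretical floor for this cooling technique. As a rough check, the standard Doppler cooling limit, T_D = ħΓ/(2k_B), can be computed from textbook values for rubidium-87’s cooling transition; again, these constants are not drawn from the NIST paper.

```python
import math

# Doppler cooling limit T_D = hbar * Gamma / (2 * k_B) for Rb-87,
# using standard textbook constants.
HBAR = 1.0546e-34             # reduced Planck constant, J*s
K_B = 1.3807e-23              # Boltzmann constant, J/K
GAMMA = 2 * math.pi * 6.07e6  # Rb-87 D2 transition linewidth, rad/s

t_doppler = HBAR * GAMMA / (2 * K_B)
print(f"Doppler limit for Rb-87: about {t_doppler * 1e6:.0f} microkelvin")
```

The result, roughly 146 microkelvin, suggests the NIST trap operates close to the Doppler limit for rubidium.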

This device and its components are flat, allowing them to sit in parallel atop a larger chip. It’s possible that those components can be manufactured and placed alongside traditional CMOS devices. “You could imagine taking them and stacking them in some packaged instrument,” says McGehee.

“This research clearly demonstrates how atomic systems can be assembled using integrated optics on a chip and other compact and reliable components,” says Chris Monroe, co-founder and chief scientist of IonQ, a firm specializing in using similar technology to build quantum computers.

The NIST device isn’t perfect, of course. Integrated cooling lasers like these won’t be as effective as their larger counterparts used in the lab, says McGehee. And manufacturers won’t be able to make cooled-atom devices just yet. Before that can happen, the NIST device will need to get even smaller: one-tenth or even one-twentieth its current size. That’s a goal McGehee believes is achievable.

Three Frosty Innovations for Better Quantum Computers

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/tech-talk/computing/hardware/three-super-cold-devices-quantum-computers

For most quantum computers, heat is the enemy. Heat creates error in the qubits that make a quantum computer tick, scuttling the operations the computer is carrying out. So quantum computers need to be kept very cold, just a tad above absolute zero.

“But to operate a computer, you need some interface with the non-quantum world,” says Jan Craninckx, a research scientist at imec. Today, that means a lot of bulky backend electronics that sit at room temperature. To make better quantum computers, scientists and engineers are looking to bring more of those electronics into the dilution refrigerator that houses the qubits themselves.

At December’s IEEE International Electron Devices Meeting (IEDM), researchers from more than a half dozen companies and universities presented new ways to run circuits at cryogenic temperatures. Here are three such efforts:

Google’s cryogenic control circuit could start shrinking quantum computers

At Google, researchers have developed a cryogenic integrated circuit for controlling the qubits and connecting them with other electronics. The Google team first unveiled this work back in 2019, but they’re continuing to scale up the technology, with an eye toward building larger quantum computers.

This cryo-CMOS circuit isn’t much different from its room-temperature counterparts, says Joseph Bardin, a research scientist with Google Quantum AI and a professor at the University of Massachusetts, Amherst. But designing it isn’t so straightforward. Existing simulations and models of components aren’t tailored for cryogenic operation. Much of the researchers’ challenge comes in adapting those models for cold temperatures.

Google’s device operates at 4 kelvins inside the refrigerator, just slightly warmer than the qubits that are about 50 centimeters away. That could drastically shrink what are now room-sized racks of electronics. Bardin claims that their cryo-IC approach “could also eventually bring the cost of the control electronics way down.” Efficiently controlling quantum computers, he says, is crucial as they reach 100 qubits or more.

Cryogenic low-noise amplifiers make reading qubits easier

A key part of any quantum computer is the electronics that read out the qubits. On their own, those qubits emit weak RF signals. Enter the low-noise amplifier (LNA), which can boost those signals and make the qubits far easier to read. It’s not just quantum computers that benefit from cryogenic LNAs; radio telescopes and deep-space communications networks use them, too.

Researchers at Chalmers University of Technology in Gothenburg, Sweden, are among those trying to make cryo-LNAs. Their circuit uses high-electron-mobility transistors (HEMTs), which are especially useful for rapidly switching and amplifying current. The Chalmers researchers use transistors made from indium phosphide (InP), a familiar material for LNAs, though gallium arsenide is more common commercially. Jan Grahn, a professor at Chalmers, says that InP HEMTs are ideal for the deep freeze because the material conducts electrons even better at low temperatures than at room temperature.

Researchers have tinkered with InP HEMTs in LNAs for some time, but the Chalmers group is pushing its circuits to run at lower temperatures and use less power than ever. The devices operate as low as 4 kelvins, a temperature that makes them at home in the upper reaches of a quantum computer’s dilution refrigerator.

imec researchers are pruning those cables

Any image of a quantum computer is dominated by its byzantine cabling. Those cables connect the qubits to their control electronics, reading out the states of the qubits and feeding inputs back. Some of those cables can be weeded out by an RF multiplexer (RF MUX), a circuit that controls the signals to and from multiple qubits. And researchers at imec have developed an RF MUX that can join the qubits in the fridge.

Unlike many experimental cryogenic circuits, which work at 4 kelvins, imec’s RF MUX can operate down to millikelvins. Jan Craninckx says that getting an RF MUX to work at that temperature meant entering a world where the researchers and device physicists had no models to work from. He describes fabricating the device as a process of “trial and error,” of cooling components down to millikelvins and seeing how well they still work. “It’s totally unknown territory,” he says. “Nobody’s ever done that.”

This circuit sits right next to the qubits, deep in the cold heart of the dilution refrigerator. Further up and away, researchers can connect other devices, such as LNAs, and other control circuits. This setup could make it less necessary for each individual qubit to have its own complex readout circuit, and make it much easier to build complex quantum computers with much larger numbers of qubits—perhaps even thousands.

IMEC and Intel Researchers Develop Spintronic Logic Device

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/devices/imec-and-intel-researchers-develop-spintronic-logic-device

Spintronics is a budding path in the quest for a future beyond CMOS. Where today’s electronics harness an electron’s charge, spintronics plays with another key property: an electron’s spin, which is its intrinsic angular momentum that points either “up” or “down.” Spintronic devices use much less power than their CMOS counterparts—and keep their data even when the device is switched off.

Spin is already used in memory, such as magnetoresistive RAM (MRAM). Manipulating those tiny magnetic effects to perform logic operations, however, is more difficult. In 2018, for instance, MIT researchers experimented with hydrogen ions as a possible basis for spintronic transistors. Their efforts were very preliminary; the team admitted that a fully functioning spintronic transistor that could both store and process information was still a long way off.

Now, researchers at imec and Intel, led by PhD candidate Eline Raymenants, have created a spintronic logic device that can be fully controlled with electric current rather than magnetic fields. The Intel-imec team presented its work at the recent IEEE International Electron Devices Meeting (IEDM).

An electron’s spin generates a magnetic moment. When many electrons with identical spins are close together, their magnetic moments can align and join forces to form a larger magnetic field. Such a region is called a magnetic domain, and the boundaries between domains are called domain walls. A material can consist of many such domains and domain walls, assembled like a magnetized mosaic.

Devices can encode 0s and 1s in those domains: a domain pointing “up” could represent a 0, while a domain pointing “down” represents a 1. The Intel-imec device strings domains single file along a nanoscale wire, then uses current to shift those domains and their walls along the wire, like cars along a train track.
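The train-track analogy can be captured in a toy model. The Python sketch below is purely illustrative, not a model of the actual Intel-imec hardware: bits sit single file on a wire, each current pulse shifts every domain one site, and a fixed read head (standing in for the MTJ) reports whichever domain is beneath it.

```python
from collections import deque

class DomainWallTrack:
    """Toy shift-register model of domain-wall 'racetrack' logic."""

    def __init__(self, bits):
        self.track = deque(bits)  # 0 = domain "up", 1 = domain "down"

    def shift(self):
        """One current pulse moves every domain one site down the wire."""
        self.track.rotate(-1)

    def read(self, head_pos=0):
        """The fixed read head reports the domain currently beneath it."""
        return self.track[head_pos]

track = DomainWallTrack([1, 0, 1, 1])
out = []
for _ in range(4):
    out.append(track.read())  # read the domain under the head...
    track.shift()             # ...then pulse the current to advance
print(out)  # the stored pattern streams past the read head in order
```

Shifting the bits past a fixed read point, rather than wiring a sensor to every bit, is the same design choice racetrack memory makes, as the article notes below.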

The track meets the rest of the circuit at a magnetic tunnel junction (MTJ). It’s similar to the read heads of today’s hard disks, but the researchers have implemented a new type of MTJ that’s optimized to move the domain walls more quickly. The MTJs read information from the track and act as logic inputs. At IEDM, the researchers presented a proof of concept: several MTJs feeding an AND gate.

The same MTJ is also where information is written into the track. To do this, the Intel-imec device uses the same technology that’s used in MRAM today. The device passes a spin-polarized current—most of whose electrons have spins in one direction—through a magnetic domain. That current can realign the magnetic field’s direction, creating or editing domain walls in the process.

It’s similar to racetrack memory, an experimental form of data storage that was first proposed over a decade ago. Racetrack memory also writes information into magnetic domains and uses current to shuttle those domains along a nanoscale wire, or “racetrack.” But the Intel-imec device takes advantage of advances in materials, allowing domain walls to move down the line far more quickly. This, the researchers say, is key for allowing logic.

Researchers so far have largely focused on optimizing those materials, according to Van Dai Nguyen, a researcher at imec. “But to build the full devices,” he says, “that’s been missing.”

The Intel-imec team is not alone. Earlier in 2020, researchers at ETH Zurich created a logic gate using domain-wall logic. Researchers at MIT also recently demonstrated a domain-wall-based artificial neuron. Like the Intel-imec device and like racetrack memory, these devices use current to shift domains down the line.

But the Zurich and MIT devices rely on magnetic fields to write information. For logic, that’s not ideal. “If you build a logic circuit,” says Iuliana Radu, a researcher at imec, “you’re not going to put…a huge magnet that you change direction or switch on and off to implement the logic.” Full electrical control, Radu says, will also allow the Intel-imec device to be connected to CMOS circuits.

The researchers say their next steps will be to show their device in action. They’ve designed a majority gate, which returns a positive result if the majority of its inputs are positive. Radu, however, says that they have yet to really explore this design. Only then will the researchers know how their spintronic logic will fare against the CMOS establishment.
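A majority gate’s behavior is easy to pin down in a few lines. The plain-Python sketch below shows the logic the spintronic version would implement, along with the standard trick of fixing one input to turn a three-input majority gate into AND or OR; it illustrates the gate’s truth table only, not the device itself.

```python
def majority(*inputs):
    """Return 1 when more than half of the binary inputs are 1."""
    return int(sum(inputs) > len(inputs) / 2)

# A three-input majority gate subsumes the basic gates:
def and_gate(a, b):
    return majority(a, b, 0)  # fixing one input to 0 yields AND

def or_gate(a, b):
    return majority(a, b, 1)  # fixing one input to 1 yields OR

print(majority(1, 0, 1))  # → 1
print(and_gate(1, 0))     # → 0
print(or_gate(1, 0))      # → 1
```

This versatility is why majority gates are a popular first target for beyond-CMOS logic: one physical primitive covers several Boolean functions.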

Bright Silicon LED Brings Light And Chips Closer

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/devices/bright-silicon-led-brings-light-and-chips-closer

For decades, researchers have tried to fashion an effective silicon LED, one that could be fabricated together with its chip. At the device level, this quest matters because cheap and easily fabricated sources of infrared light could unlock applications our mobile devices don’t yet have.

Silicon LEDs specialize in infrared light, making them useful for autofocusing cameras or measuring distances—abilities that most phones now have. But virtually no electronics use silicon LEDs, instead opting for more expensive materials that have to be manufactured separately.

However, prospects for the elusive, light-emitting, silicon-based diode may be looking up. MIT researchers, led by PhD student Jin Xue, have designed a functional CMOS chip with a silicon LED, manufactured by GlobalFoundries in Singapore. They presented their work at the recent IEEE International Electron Devices Meeting (IEDM).

The chief problem to date has been that, to be blunt, silicon isn’t a very good LED material.

An LED consists of an n-type region, rich in excited free electrons, joined with a p-type region containing positively charged “holes” for those electrons to fill. As electrons plop into those holes, they drop energy levels, releasing the difference in energy. Standard LED materials like gallium nitride or gallium arsenide are direct bandgap materials, whose electrons are efficient emitters of light.

Silicon, on the other hand, is an indirect bandgap material. Its electrons tend to turn that energy into heat, rather than light. That makes silicon LEDs slower and less efficient than their counterparts. Silicon LED makers must find a way around that indirect bandgap.
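The bandgap also sets the emission color, via the one-line formula λ = hc/E_g. The Python sketch below uses standard textbook bandgap values (not figures from the MIT work) to show why silicon’s emission lands in the infrared while common LED materials can reach the visible range and beyond:

```python
# Emission wavelength from bandgap: lambda = h*c / E_g.
H_C_EV_NM = 1239.84  # h*c expressed in electronvolt-nanometers

# Textbook room-temperature bandgaps, in eV.
BANDGAPS_EV = {
    "silicon": 1.12,
    "gallium arsenide": 1.42,
    "gallium nitride": 3.4,
}

emission_nm = {m: H_C_EV_NM / eg for m, eg in BANDGAPS_EV.items()}

for material, nm in emission_nm.items():
    print(f"{material}: ~{nm:.0f} nm")
```

Silicon’s roughly 1.12 eV gap corresponds to about 1,100 nm, comfortably in the infrared, which is why silicon LEDs are candidates for ranging and proximity sensing rather than displays.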

One way might be to alloy silicon with germanium. Earlier this year, in fact, a group at Eindhoven University of Technology in the Netherlands fashioned a silicon-based laser out of a nanowire-grown silicon-germanium alloy. Their tiny laser, they reported, might one day send data cheaply and efficiently from one chip to another.

That’s one perhaps elaborate approach to the problem. Another has been considered for more than 50 years—operating silicon LEDs in what’s called reverse-biased mode. Here, the voltage is applied opposite to the direction that would normally allow current to flow. This changeup prevents electrons from filling their holes until the electric field reaches a critical intensity. Then, the electrons accelerate with enough zeal to knock other electrons loose, multiplying the current into an electrical avalanche. LEDs can harness that avalanche to create bright light, but they need voltages several times higher than the norm for microelectronics.

Since the turn of the millennium, other researchers have tinkered with forward-biased LEDs, in which electrons flow easily and uninterrupted. These LEDs can operate at 1 volt, much closer to a transistor in a typical CMOS chip, but they’ve never been bright enough for consumer use.

The MIT-GlobalFoundries team followed the forward-biased path. The key to their advance is a new type of junction between the n-type and p-type regions. Previous silicon LEDs placed the two side-by-side, but the MIT-GlobalFoundries design stacks the two vertically. That shoves both the electrons and their holes away from the surfaces and edges. Doing that discourages the electrons from releasing energy as heat, channeling more of it into emitting light.

“We’re basically suppressing all the competing processes to make it feasible,” says Rajeev Ram, one of the MIT researchers. Ram says their design is ten times brighter than previous forward-biased silicon LEDs. That’s still not bright enough to be rolled out into smartphones quite yet, but Ram believes there are more advances to come.

Sonia Buckley, a researcher at the U.S. National Institute of Standards and Technology (NIST) who isn’t part of the MIT-GlobalFoundries research group, says these LEDs prioritize power over efficiency. “If you have some application that can tolerate low efficiencies and high power driving your light source,” she says, “then this is a lot easier and, likely, a lot cheaper to make” than present LEDs, which aren’t integrated with their chips.

That application, Ram thinks, is proximity sensing. Ram says the team is close to creating an all-silicon system that could tell a phone how far away its surroundings are. “I think that might be a relatively near-term application,” he says, “and it’s certainly driving the collaboration we have with GlobalFoundries.”

Quantum Dot Paint Could Make Airframe Inspection Quick and Easy

Post Syndicated from Rahul Rao original https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/quantum-dot-paint-could-make-airframe-inspection-quick-and-easy

Technicians of the future might be able to test airplane fuselages for airworthiness, check a bridge’s structural integrity, or inspect an intricate 3D-printed part for defects with only a quick scan from a camera.

Researchers at the Air Force Institute of Technology and Los Alamos National Laboratory are developing a paint containing quantum dots that could allow just that. Manufacturers could apply this paint to the surfaces of objects including vehicles, infrastructure, and spare parts, the researchers say. An inspector or a quality assurance specialist could rapidly gauge strain on those surfaces—how much they’ve been deformed—by analyzing light emitted by that paint.

To test surfaces today without destroying them in the process, technicians might create a magnetic field, but this often requires unwieldy equipment and provides results that are complicated to read. They might also use X-rays, which come with radiation safety concerns. Other scanning methods, such as ultrasound or microwaves, are limited in the materials they can measure.

“Imagine holding a camera and seeing a 2D strain surface map on your screen,” says Michael Sherburne of the Air Force Institute of Technology, one of the researchers who developed the quantum dot–infused paint.

Quantum dots are particles of semiconductor a few nanometers wide. Their small size brings on some unique quantum effects, as if each quantum dot were an artificial atom. For instance, quantum dots emit light when they’re excited by electromagnetic radiation. That emitted light’s wavelength changes depending on a dot’s size; smaller dots emit bluer light.

The “paint” consists of quantum dots suspended within a clear polymer. The dots emit green light when irradiated with ultraviolet light. Other quantum dot paints do exist, but they’re mostly used for aesthetics, such as to give objects a luminescent glow in the dark.

This paint’s quantum dots have an additional property: When they’re strained, the wavelength of their emitted light changes, becoming shorter (more blue) if they’re stretched and longer (more red) if they’re squeezed. The researchers envision a camera that uses a laser to bathe a painted surface with ultraviolet light, then captures the resulting emissions and creates a map of the surface’s strain.
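Such a camera would, in effect, run a per-pixel conversion from measured wavelength to strain. The Python sketch below illustrates the idea with a made-up linear calibration; the rest wavelength and the shift-per-unit-strain coefficient are hypothetical placeholders, since real values would come from characterizing the actual paint.

```python
# Hypothetical calibration for a quantum-dot strain paint.
REST_WAVELENGTH_NM = 530.0   # assumed unstrained green emission
NM_PER_UNIT_STRAIN = -100.0  # assumed: stretching -> shorter wavelength

def strain_at(measured_nm):
    """Strain at one pixel: positive = stretched, negative = squeezed."""
    return (measured_nm - REST_WAVELENGTH_NM) / NM_PER_UNIT_STRAIN

def strain_map(image_nm):
    """Apply the per-pixel conversion across a 2D map of wavelengths."""
    return [[strain_at(px) for px in row] for row in image_nm]

# A tiny 2x2 "image" of measured emission wavelengths, in nm.
pixels = [[530.0, 529.5],
          [530.5, 530.0]]
print(strain_map(pixels))
```

In this toy calibration, the pixel emitting at 529.5 nm (blue-shifted) reads as stretched, while the 530.5 nm pixel (red-shifted) reads as squeezed, matching the behavior the researchers describe.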

Ideas for using light to measure strain have been around for several decades, but using quantum dots for that purpose is new, says Kirk Schanze, a professor of chemistry at the University of Florida who has previously worked in the field. 

The paint does have its limitations, however. For one, very high temperatures would melt the polymer. And, of course, the quantum-dot coating can only measure strain that occurred after the paint was applied; if a technician coated an old aircraft with it, for example, the paint wouldn’t reveal any deformation that happened beforehand.

But Sherburne wants to address that latter problem. “This is definitely future work,” he says.

In addition to making an inspection process faster and easier, the paint could be more versatile than today’s strain-sensing instruments. Manufacturers could apply it to 3D-printed parts with complex shapes that don’t lend themselves well to today’s strain-measuring techniques. Furthermore, a device could pipe UV light through fiber-optic cables, allowing scanning of surfaces that are in otherwise hard-to-reach locations.

Schanze calls the researchers’ work “an interesting proof of concept,” but he says the path to using the paint for full-scale measurements is not yet clear.

Sherburne agrees that any practical applications are still a long way off. One next step is constructing a camera system that scans in ultraviolet and measures the returning colors. Although the technology exists, the researchers have yet to develop it beyond a concept. So it will likely be more than a decade before maintenance workers start coating bridges in strain-sensing paint and scanning them with UV beams.