All posts by Charles Q. Choi

Google’s Quantum Computer Exponentially Suppresses Errors

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/googles-quantum-computer-exponentially-suppress-errors

In order to develop a practical quantum computer, scientists will have to design ways to deal with any errors that will inevitably pop up in its performance. Now Google has demonstrated that exponential suppression of such errors is possible, experiments that may help pave the way for scalable, fault-tolerant quantum computers.

A quantum computer with enough components known as quantum bits or “qubits” could in theory achieve a “quantum advantage” allowing it to find the answers to problems no classical computer could ever solve.

However, a critical drawback of current quantum computers is the way in which their inner workings are prone to errors. Current state-of-the-art quantum platforms typically have error rates near 10^-3 (or one in a thousand), but many practical applications call for error rates as low as 10^-15.
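To see how exponential suppression could bridge a gap like that, consider the standard error-correcting-code scaling, in which the logical error rate falls exponentially with the code distance. This is a minimal illustrative sketch; the threshold `p_th` and prefactor `C` below are placeholder values chosen to make the arithmetic concrete, not Google's measured figures.

```python
# Illustrative sketch of exponential error suppression.
# p_th (error threshold) and C (prefactor) are placeholders,
# not measured values from Google's experiment.
def logical_error_rate(p_physical, distance, p_th=1e-2, C=0.1):
    """Below threshold, the logical error rate shrinks exponentially
    as the code distance (and thus the number of qubits) grows."""
    return C * (p_physical / p_th) ** ((distance + 1) // 2)

# With a physical error rate of 1e-3, as in current hardware:
for d in (3, 11, 27):
    print(f"distance {d}: logical error rate ~{logical_error_rate(1e-3, d):.0e}")
```

Each step up in distance multiplies the suppression, which is why adding qubits can, below threshold, drive logical error rates from the 10^-3 neighborhood toward the 10^-15 that applications demand.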

For the First Time, Scientists Detect a Moving Photon Multiple Times Without Destroying It

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/photons-trace-twice

The act of detecting a photon typically destroys it. Now scientists have for the first time nondestructively detected a single traveling photon not just once, but multiple times. Such research could make future quantum communications networks and quantum computers faster and more robust, they say.

Quantum communications networks are in principle virtually unhackable, and quantum computers theoretically can be more powerful than any supercomputer. These devices often use photons for communications or computations.

Detecting photons typically requires absorbing them. However, in some applications, a single photon may carry valuable information, such as data needed for a quantum computation. In such cases, scientists would like to read out the photon’s data but also keep track of the photon as it travels in order to make sure it reaches its final destination successfully.

“No detector is ever 100% efficient—there’s always a chance a photon may slip through not detected,” says study senior author Stephan Welte, a quantum physicist at the Max Planck Institute for Quantum Optics in Garching, Germany. With more than one detector linked together in a series, “you enhance the chance of detection.”

Previous research developed a number of different ways to detect a photon without destroying it, often by analyzing how the photon interacts with an ion, superconducting qubit or other quantum system. However, such techniques could either make a single nondestructive detection of a moving photon, or repeated nondestructive detections of a stationary photon trapped inside a cavity.

Now researchers in Germany have detected a single photon twice without destroying it as it flew down an optical fiber. They detailed their findings online June 25 in the journal Physical Review Letters.

In the new study, the scientists designed a nondestructive detector made of a single rubidium atom trapped within a reflective cavity. A photon entering the cavity reflects off its walls, altering the atom’s quantum state, which the researchers can monitor with laser pulses.

The researchers placed two of their cavity devices 60 meters apart; both could detect the same photon without absorbing it. In principle, “an infinite number of these detectors could detect a photon an infinite number of times in a row,” Welte says. However, in practice, there is currently a roughly 33% chance a photon is lost whenever one of these detectors is used, he notes.
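That 33% per-use loss compounds: the chance a photon survives a chain of such detectors falls geometrically with the number of detectors, which is why reducing per-detector loss matters for longer chains. A quick sketch using the study's rough figure:

```python
# Probability a photon survives a chain of nondestructive detectors,
# assuming the study's roughly 33% loss per detection.
def survival_probability(n_detectors, loss_per_detector=0.33):
    return (1 - loss_per_detector) ** n_detectors

print(survival_probability(2))  # two detectors, as in the experiment: ~0.45
print(survival_probability(5))  # five detectors: ~0.14
```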

These new findings could one day help scientists keep track of photons in quantum technology. This could make such quantum systems faster by ensuring photons are successfully going where they should.

“Let’s say you want to share quantum information between Munich and New York,” says study lead author Emanuele Distante, a quantum physicist at the Max Planck Institute for Quantum Optics in Garching, Germany. “You can have many points in the middle to see if the photon made it through—you can check to see if it made it to Paris, for instance.”

If detectors find that a photon has gotten lost in transit, “you can then restart immediately,” Welte says. “You don’t have to wait to finish your whole transmission just to make sure it all went through.”

Although this new technique can help researchers detect moving photons multiple times without destroying them, the researchers do not think it can help one eavesdrop on quantum communication links. “It’s a bit like when you trace a parcel—you can detect where the parcel is, but learn nothing about its content,” Welte says. In much the same way, “the photon is a particle with some quantum information in it, and you can nondestructively detect it but not learn anything about its quantum information.”

In the future, the scientists want to show they can nondestructively detect entangled photons multiple times to show their work has use for quantum technologies, Distante says. Future research could also try reducing the chances of photon loss from the detectors using a number of different techniques, such as more reflective cavities, Welte adds. This could help them use more than two detectors to keep track of a photon.

A New Laser For More Efficient Communications

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/roomtemp-topolaser

Novel devices known as topological lasers can prove more efficient at shining light than conventional lasers. Now scientists have created the first electrically driven topological laser that works at room temperature, which could find use in telecommunications.

Topology is the branch of mathematics that investigates what aspects of shapes can survive deformation. For example, an object shaped like a ring may deform into the shape of a mug, with the ring’s hole forming the hole in the cup’s handle. However, this object could not deform into a shape without a hole without changing into a fundamentally different shape.

Handheld Device Fights Fatigue by Stimulating Vagus Nerve

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/biomedical/devices/vagus-nerve

Feeling tired? Applying electricity to a nerve in the neck with a handheld device can reduce fatigue and improve multitasking in sleep-deprived people, according to a new study conducted on U.S. Air Force volunteers.

Previous research found that electrically stimulating the brain with a technique known as transcranial direct current stimulation can improve mental performance in a variety of ways, such as combating fatigue. Prior work suggested the benefits were rooted in the stimulation of a region in the brainstem known as the locus coeruleus, thought to play a key role in attention, wakefulness, memory formation and memory retention.

Still, although transcranial direct current stimulation is a non-invasive technique, it is difficult to administer by oneself and may require 30-minute doses to boost attention and vigor. As such, researchers wanted to find simpler, quicker methods to achieve similar results.

So scientists explored stimulating the vagus nerves that run from the abdomen into the locus coeruleus. Vagal nerve stimulation has been an FDA-approved medical treatment for epilepsy and depression for more than two decades, and recent work has found vagal nerve stimulation boosts memory and learning in both rats and humans.

In the new study, researchers experimented with a commercially available handheld device previously approved by the FDA to treat migraines and cluster headaches that delivers an electric current through the skin to the cervical vagus nerves in the neck.

In the experiment, 40 active-duty U.S. Air Force personnel stayed awake for 34 hours. “Fatigue is an enduring issue within the military,” says study co-author Andy McKinley, a biomedical engineer at the Air Force Research Laboratory at Wright-Patterson Air Force Base near Dayton, Ohio. “There can be days at a time when sleep is rare, or you can’t get good sleep due to environmental factors, or you go from a day shift to a night shift or a night shift to a day shift, and such large shifts in one’s circadian cycle makes sleep difficult, and leads to a lot of fatigue as well.”

The volunteers were tested nine times during the experiment on four tasks that analyzed their ability to stay alert and multi-task. These included pressing a button as soon as they saw a light move a specific way and keeping track of multiple gauges, lights, sounds and other alerts to deal with simulated malfunctions and other challenges.

Twelve hours into the experiment, half the volunteers each received two minutes of electrical stimulation to the left cervical vagus nerve and two to the right, with two minutes of rest in between. The other half had a sham device that looked identical to the real machine pressed to their necks that made similar vibrations and clicking sounds but did not deliver any current. Both groups were asked to not nap or consume any caffeine or similar stimulants during the experiment.

The scientists found the volunteers who received vagus nerve stimulation performed better at tasks testing focus and multi-tasking. They also reported less fatigue and higher energy. These benefits peaked 12 hours after stimulation, with boosts to alertness lasting for up to 19 hours. The device had no apparent effect on how well or how long the volunteers slept after the experiment, says study lead author Lindsey McIntire, a human factors psychologist at defense technology company Infoscitex in Dayton, Ohio.

“There’s chronic use of caffeine and energy drinks to make people perform better, but the more you use caffeine, the less effective it is. It only usually lasts a few hours, and it makes you jittery,” McKinley says. “The nice thing about this technology is that it’s very easy to administer and its effects last much longer than we typically see with caffeine, and not only do they experience a performance improvement, but they feel better overall.”

Outside of the military, this device could benefit many professions where fatigue is a serious and unavoidable problem, such as medicine and transportation, McIntire says. “You can almost imagine this helping anywhere people are working in extreme environments, such as the arctic or space—NASA funded this study,” she adds.

There is no research that McKinley has seen that suggests that such techniques might lead to addictive behavior. Still, “I definitely think there should be guidelines on the number of doses per day a person should have,” he says.

Future research can investigate how well this device can work on even greater levels of sleep loss, McKinley says. Further work can also explore the best time of day, frequency, duration and strength for doses of stimulation, test the device in messier and less controlled settings, and explore specifically what effects it has on the body and brain, the researchers say.

The scientists detailed their findings online June 10 in the journal Communications Biology.

This Quantum Computer is Sized For Server Rooms

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/iontrap-server

Usually quantum computers are devices that can fill entire laboratories. Now a compact prototype quantum computer can fit in two 19-inch server racks like those found in data centers throughout the world, which could help improve battery designs or crack codes, a new study finds.

A quantum computer with enough components known as quantum bits or “qubits” could in theory achieve a “quantum advantage” enabling it to find the answers to problems no classical computer could ever solve.

However, the most advanced current quantum computers typically have only a few dozen qubits at most, yet fill up whole rooms. As such, scaling up to hundreds of qubits for practical applications is often thought to be a challenging task.

In the Race to Hundreds of Qubits, Photons May Have “Quantum Advantage”

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/race-to-hundreds-of-photonic-qubits-xanadu-scalable-photon

Quantum computers based on photons may have some advantages over electron-based machines, including operating at room temperature rather than at temperatures colder than deep space. Now scientists at quantum computing startup Xanadu say you can add one more advantage to the photon side of the ledger. Their photonic quantum computer, they say, could scale up to rival or even beat the fastest classical supercomputers, at least at some tasks.

Whereas conventional computers switch transistors either on or off to symbolize data as ones and zeroes, quantum computers use quantum bits or “qubits” that, because of the bizarre nature of quantum physics, can exist in a state known as superposition where they can act as both 1 and 0. This essentially lets each qubit perform multiple calculations at once.

The more qubits are quantum-mechanically entangled together, the more calculations they can simultaneously perform. A quantum computer with enough qubits could in theory achieve a “quantum advantage” enabling it to grapple with problems no classical computer could ever solve. For instance, a quantum computer with 300 mutually entangled qubits could theoretically perform more calculations in an instant than there are atoms in the visible universe.

Quantum Computing Makes Inroads Towards Pharma

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/biomedical/diagnostics/quantum-drug

Quantum computers can theoretically prove more powerful than any supercomputer. And recent moves from computer giants such as Google and pharmaceutical titans such as Roche now suggest drug discovery might prove to be quantum computing’s first killer app.

Whereas classical computers switch transistors either on or off to symbolize data as ones or zeroes, quantum computers use quantum bits, or qubits, that, because of the surreal nature of quantum physics, can be in a state of superposition where they are both 1 and 0 simultaneously.

Superposition lets one qubit essentially perform two calculations at once, and if two qubits are linked through a quantum effect known as entanglement, they can help perform 2^2, or four, calculations simultaneously; three qubits, 2^3, or eight, calculations; and so on. In theory, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the visible universe.
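The arithmetic behind that 300-qubit claim is easy to verify, taking the common estimate of roughly 10^80 atoms in the visible universe:

```python
# n entangled qubits span 2**n simultaneous amplitudes.
def amplitudes(n_qubits):
    return 2 ** n_qubits

print(amplitudes(2), amplitudes(3))  # 4 and 8, as described above
print(amplitudes(300) > 10 ** 80)    # exceeds atoms in the visible universe: True
```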

Could “Topological Materials” Be A New Medium for Ultra-Fast Electronics?

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/semiconductors/materials/topological-materials-laser-controlled-new-electronics-medium

Potential future transistors that can exceed Moore’s law may rely on exotic materials called “topological matter,” in which electricity flows across surfaces only, with virtually no dissipation of energy. And now new findings suggest these special topological materials might one day find use in high-speed, low-power electronics and in quantum computers. Scientists have recently revealed they can switch on these exotic electronic properties using blasts from laser beams.

Topological matter suggests one possible future of electronics operating with virtually no losses, meaning they could potentially burn much less energy and operate much faster than conventional, silicon-based electronics. And now researchers have the most direct evidence yet that light can manipulate these strange but incredible topological properties.

3D-Printed Thruster Boosts Range of CubeSat Applications

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/aerospace/satellites/3dprint-thruster

A new 3-D printed electric thruster could one day help make advanced miniature satellites significantly easier and more affordable to build, a new study finds.

Conventional rockets use chemical reactions to generate propulsion. In contrast, electric thrusters produce thrust by using electric fields to accelerate electrically charged propellants away from a spacecraft.

The main weakness of electric propulsion is that it generates much less thrust than chemical rockets, making it too weak to launch a spacecraft from Earth’s surface. On the other hand, electric thrusters are extremely efficient at generating thrust, given the small amount of propellant they carry. This makes them very useful where every bit of weight matters, as in the case of a satellite that’s already in orbit.
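That trade-off follows from the basic rocket relation: thrust equals propellant mass flow rate times exhaust velocity. Electric thrusters push a tiny propellant flow to very high exhaust velocity, so they produce little thrust but use propellant sparingly. The numbers below are purely illustrative, not figures from the study:

```python
# Thrust = mass flow rate (kg/s) * exhaust velocity (m/s).
# The values here are illustrative only.
def thrust_newtons(mdot_kg_s, exhaust_velocity_m_s):
    return mdot_kg_s * exhaust_velocity_m_s

chemical = thrust_newtons(100.0, 3_000)   # large flow, modest velocity: 300 kN
electric = thrust_newtons(1e-6, 30_000)   # tiny flow, high velocity: 0.03 N
print(chemical, electric)
```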

North Korea-Backed Hackers Targeting Security Researchers

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/networks/northkorea-security

According to Google, North Korean-backed hackers are pretending to be security researchers, complete with a fake research blog and bogus Twitter profiles. These actions are supposedly part of spying efforts against actual security experts.

In an online report posted on 25 January, Google’s Threat Analysis Group, which focuses on government-backed attacks, provided an update on this ongoing campaign, which Google identified over the past several months. The North Korean hackers are apparently focusing on computer scientists at different companies and organizations working on research and development into computer security vulnerabilities.

“Sophisticated security researchers and professionals often have important information of value to attackers, and they often have special access privileges to sensitive systems. It is no wonder they are targets,” says Salvatore Stolfo, a computer scientist specializing in computer security at Columbia University, who did not take part in this research. “Successfully penetrating the security these people employ would provide access to very valuable secrets.”

Brainwave: If Memristors Act Like Neurons, Put Them in Neural Networks

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/memristor-random

A low-power and non-volatile technology called the memristor shows initial promise as a basis for machine learning. According to new research, memristors efficiently tackle AI medical diagnosis problems, an encouraging development that suggests additional applications in other fields, especially low-power or network “edge” applications. This may be, the researchers say, because memristors artificially mimic some of the neuron’s essential properties. 

Memristors, or memory resistors, are a kind of building block for electronic circuits that scientists predicted roughly 50 years ago but only created for the first time a little more than a decade ago. These components, also known as resistive random access memory (RRAM) devices, are essentially electric switches that can remember whether they were toggled on or off after their power is turned off. As such, they resemble synapses—the links between neurons in the human brain—whose electrical conductivity strengthens or weakens depending on how much electrical charge has passed through them in the past.

Superintelligent AI May Be Impossible to Control; That’s the Good News

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/super-artificialintelligence

It may be theoretically impossible for humans to control a superintelligent AI, a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it’s on the verge of being created. 

Slightly less grim is the timetable. By at least one estimate, many decades lie ahead before any such existential computational reckoning could be in the cards for humanity. 

Alongside news of AI besting humans at games such as chess, Go and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. “The question about whether superintelligence could be controlled if created is quite old,” says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. “It goes back at least to Asimov’s First Law of Robotics, in the 1940s.”

3D-printing hollow structures with “xolography”

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/xolography-printing

By shining light beams in liquid resin, a new 3-D printing technique dubbed “xolography” can generate complex hollow structures, including simple machines with moving parts, a new study finds.

“I like to imagine that it’s like the replicator from Star Trek,” says study co-author Martin Regehly, an experimental physicist at the Brandenburg University of Applied Science in Germany. “As you see the light sheet moving, you can see something created from nothing.”

Thin Holographic Video Display For Mobile Phones

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/consumer-electronics/portable-devices/holovideo-phones

A thin holographic display could one day enable 4K 3-D videos on mobile devices as well as household and office electronics, Samsung researchers and their colleagues say.

Conventional holograms are photographs that, when illuminated, essentially act like 2D windows looking onto 3D scenes. The pixels of each hologram scatter light waves falling onto them, making the light waves interact with each other in ways that generate an image with the illusion of depth.

Holography creates static holograms by using laser beams to encode an image onto a recording medium such as a film or plate. By sending the coherent light from lasers through a device known as a spatial light modulator—which can actively manipulate features of light waves such as their amplitude or phase—scientists can instead produce holographic videos.
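The light-wave “interaction” underlying holography is interference: the combined intensity of two coherent waves depends on their phase difference, which is exactly the quantity a spatial light modulator adjusts. A minimal sketch:

```python
import cmath

# Intensity of two superposed unit-amplitude coherent waves:
# |e^(i*phi1) + e^(i*phi2)|^2 depends only on the phase difference.
def intensity(phi1, phi2):
    return abs(cmath.exp(1j * phi1) + cmath.exp(1j * phi2)) ** 2

print(intensity(0.0, 0.0))       # in phase: constructive, intensity 4
print(intensity(0.0, cmath.pi))  # opposite phase: destructive, ~0
```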

Hum to Google to Identify Songs

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/consumer-electronics/portable-devices/google-hum

Ever have a song you can’t remember the name of, or any of its words? Now Google has a new feature where you can simply hum the melody and it can hopefully name that tune.

Identifying songs through singing, humming or whistling instead of lyrics is not a new idea—the music app SoundHound has offered hum-to-search for at least a decade. Google’s new feature should help the search engine with the many requests it receives to identify music. Aparna Chennapragada, a Google vice president who introduced the new feature during a streamed event Oct. 15, said people ask Google “what song is playing” nearly 100 million times each month.

Far-Infrared Now Near: Researchers Debut Compact Terahertz Laser

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/biomedical/imaging/compact-terahertzlaser

Terahertz rays could have a dizzying array of applications, from high-speed wireless networks to detecting cancers and bombs. Now researchers say they may finally have created a portable, high-powered terahertz laser.

Terahertz waves (also called submillimeter radiation or far-infrared light) lie between optical waves and microwaves on the electromagnetic spectrum. Ranging in frequency from 0.1 to 10 terahertz, terahertz rays could find many applications in imaging, such as detecting many explosives and illegal drugs, scanning for cancers, identifying protein structures, non-destructive testing and quality control. They could also be key to future high-speed wireless networks, which will transmit data at terabits (trillions of bits) per second.
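The "submillimeter" name follows directly from the wavelength relation lambda = c/f:

```python
# Wavelength of terahertz radiation: lambda = c / f.
SPEED_OF_LIGHT = 299_792_458  # m/s

def wavelength_mm(freq_thz):
    return SPEED_OF_LIGHT / (freq_thz * 1e12) * 1e3  # convert meters to mm

print(wavelength_mm(0.1))  # ~3 mm at the low end of the band
print(wavelength_mm(10))   # ~0.03 mm at the high end, hence "submillimeter"
```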

However, terahertz rays are largely restricted to laboratory settings due to a lack of powerful and compact terahertz sources. Conventional semiconductor devices can generate terahertz waves ranging either below 1 terahertz or above 10 terahertz in frequency. The range of frequencies in the middle, known as the terahertz gap, might prove especially valuable for imaging, bomb detection, cancer detection and chemical analysis applications, says Qing Hu, an electrical engineer at MIT.

Researchers Pack 10,000 Metasurface Pixels Per Inch in New OLED Display

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/consumer-electronics/audiovideo/metasurface-oled-display

A new OLED display from Samsung and Stanford can achieve more than 10,000 pixels per inch, which might lead to advanced virtual reality and augmented reality displays, researchers say.

An organic light-emitting diode (OLED) display possesses a film of organic compounds that emits light in response to an electric current. A commercial large-scale OLED television might have a pixel density of about 100 to 200 pixels per inch (PPI), whereas a mobile phone’s OLED display might achieve 400 to 500 PPI.

Two different kinds of OLED displays have reached commercial success in mobile devices and large-scale TVs. Mobile devices mostly use red, green and blue OLEDs, which companies manufacture by depositing dots of organic film through metal sheets with many tiny holes punched in them. However, the thickness of these metal sheets limits how small these fabricated dots can be, and sagging of these metal sheets limits how large these displays can get.

In contrast, large-scale TVs use white OLEDs with color filters placed over them. However, these filters absorb more than 70% of light from the OLEDs. As such, these displays are power-hungry and can suffer “burn-in” of images that linger too long. The filters also limit how much the pixels can scale down in size.

The new display uses OLED films to emit white light between two reflective layers, one of which is made of a silver film, whereas the other is a “metasurface,” or forest of microscopic pillars each spaced less than a wavelength of light apart. Square clusters of these 80-nanometer-high, 100-nanometer-wide silver pillars served as pixels each roughly 2.4 microns wide, or slightly less than 1/10,000th of an inch.
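The quoted density follows from the pixel width: with 25,400 micrometers to the inch, 2.4-micron pixels work out to just over 10,000 per inch. A quick check:

```python
# Pixel density from pixel pitch: 1 inch = 25,400 micrometers.
MICRONS_PER_INCH = 25_400

def pixels_per_inch(pixel_width_um):
    return MICRONS_PER_INCH / pixel_width_um

print(pixels_per_inch(2.4))  # ~10,583 PPI, i.e. "more than 10,000"
```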

Each pixel in the new display’s metasurface is divided into four subpixels of equal size. In principle, the OLED films can specify which subpixels they illuminate. The nano-pillars in each subpixel manipulate white light falling onto them, such that each subpixel can reflect a specific color of light, depending on the amount of spacing between its nano-pillars. In each pixel, the subpixel with the most densely packed nano-pillars yields red light; the one with moderately densely packed nano-pillars yields green light; and the two with the least densely packed nano-pillars yield blue light.

Emitted light reflects back and forth between the display’s reflective layers until it finally escapes through the silver film out the display’s surface. The way in which light can build up within the display gives it twice the luminescence efficiency of standard color-filtered white OLED displays, as well as higher color purity, the researchers say.

“If you think of a musical instrument, you often see an acoustic cavity that sounds come out of that helps make a nice and beautiful pure tone,” says study senior author Mark Brongersma, an optical engineer at Stanford University. “The same happens here with light — the different colors of light can resonate in these pixels.”

In the near term, one potential application for this new display is with virtual reality (VR). Since VR headsets place their displays close to a user’s eyes, high-resolutions are key to help create the illusion of reality, Brongersma says.

As impressive as 10,000 pixels per inch might sound, “according to our simulation results, the theoretical scaling limit of pixel density is estimated to be 20,000 pixels per inch,” says study lead author Won-Jae Joo, a nanophotonic engineer at the Samsung Advanced Institute of Technology in Suwon, Korea. “The challenge is the trade-off in brightness when the pixel dimensions go below one micrometer.”

Other research groups have developed displays they say range from 10,000 to 30,000 pixels per inch, typically using micro-LED technology, such as Jade Bird Display in China and VueReal in Canada. In terms of how the new OLED display compares with those others, “our color purity is very high,” Brongersma says.

In the future, metasurfaces might also find use trapping light in applications such as solar cells and light sensors, Brongersma says.

The scientists detailed their findings online Oct. 22 in the journal Science.

Print These Electronic Circuits Directly Onto Skin

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/devices/skin-circuits

New circuits can get printed directly on human skin to help monitor vital signs, a new study finds. 

Wearable electronics are growing increasingly more comfortable and more powerful. A next step for such devices might include electronics printed directly onto the skin to better monitor and interface with the human body. 

Scientists wanted a way to sinter—that is, use heat to fuse—metal nanoparticles to fabricate circuits directly on skin, fabric or paper. However, sintering usually requires heat levels far too high for human skin. Other techniques for fusing metal nanoparticles into circuits, such as lasers, microwaves, chemicals or high pressure, are similarly dangerous for skin.

In the new study, researchers developed a way to sinter nanoparticles of silver at room temperature. The key behind this advance is a so-called a sintering aid layer, consisting of a biodegradable polymer paste and additives such as titanium dioxide or calcium carbonate. 

Positive electrical charges in the sintering aid layer neutralized the negative electrical charges the silver nanoparticles could accumulate from other compounds in their ink. This meant it took less energy for the silver nanoparticles printed on top of the sintering aid layer to come together, says study senior author Huanyu Cheng, a mechanical engineer at Pennsylvania State University.

The sintering aid layer also created a smooth base for circuits printed on top of it. This in turn improved the performance of these circuits in the face of bending, folding, twisting and wrinkling.

In experiments, the scientists placed the silver nanoparticle circuit designs and the sintering aid layer onto a wooden stamp, which they pressed onto the back of a human hand. They next used a hair dryer set to cool to evaporate the solvent in the ink. A hot shower could easily remove these circuits without damaging the underlying skin.

After the circuits sintered, they could help the researchers measure body temperature, skin moisture, blood oxygen, heart rate, respiration rate, blood pressure and bodily electrical signals such as electrocardiogram (ECG or EKG) readings. The data from these sensors were comparable to or better than those measured using conventional commercial sensors that were simply stuck onto the skin, Cheng says.

The scientists also used this new technique to fabricate flexible circuitry on a paper card, to which they added a commercial off-the-shelf chip to enable wireless connectivity. They attached this flexible paper-based circuit board to the inside of a shirt sleeve and showed it could gather and transmit data from sensors printed on the skin. 

“With the use of a novel sintering aid layer, our method allows metal nanoparticles to be sintered at low or even room temperatures, as compared to several hundreds of degrees Celsius in alternative approaches,” Cheng says. “With enhanced signal quality and improved performance over their commercial counterparts, these skin-printed sensors with other expanded modules provide a repertoire of wearable electronics for health monitoring.”

The scientists are now interested in applying these sensors for diagnostic and treatment applications “for cardiopulmonary diseases, including COVID-19, pneumonia, and fibrotic lung diseases,” Cheng says. “This sensing technology can also be used to track and monitor marine mammals.”

The scientists detailed their findings online Sept. 11 in the journal ACS Applied Materials & Interfaces.

Electronic Blood Vessels to the Rescue

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/bionics/electronic-bloodvessel

The number one cause of mortality worldwide is cardiovascular disease. Now a new study reveals that electronic blood vessels might one day use electricity to stimulate healing and deliver gene therapies to help treat such maladies.

One-third of all U.S. deaths are linked to cardiovascular disease, according to the American Heart Association. When replacement blood vessels are needed to treat advanced cases of cardiovascular disease, doctors prefer ones taken from the patient’s own body, but sometimes the patient’s age or condition prevents such a strategy.

Artificial blood vessels that can prove helpful in cases where replacements more than 6 millimeters wide are needed are now commercially available. However, when it comes to smaller artificial blood vessels, so far none have succeeded in clinical settings. That’s because a complex interplay between such vessels and blood flow often triggers inflammatory responses, causing the walls of natural blood vessels to thicken and cut off blood flow, says Xingyu Jiang, a biomedical engineer at the Southern University of Science and Technology in Shenzhen, China. Jiang and his team report having developed a promising new artificial blood vessel that doesn’t cause inflammatory response. The scientists detailed their findings online on 1 October in the journal Matter.

An Open-Source Bionic Leg

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/bionics/opensource-bionicleg

Details on the design and clinical tests of an open-source bionic leg are now freely available online, so that researchers can hopefully create and test safe and useful new prosthetics.

Bionic knees, ankles and legs under development worldwide to help patients walk are equipped with electric motors. Getting the most from such powered prosthetics requires safe and reliable control systems that can account for many different types of motion: for example, shifting from striding on level ground to walking up or down ramps or stairs.

However, developing such control systems has proven difficult. “The challenge stems from the fact that these limbs support a person’s body weight,” says Elliott Rouse, a biomedical engineer and director of the neurobionics lab at the University of Michigan, Ann Arbor. “If it makes a mistake, a person can fall and get seriously injured. That’s a really high burden on a control system, in addition to trying to have it help people with activities in their daily life.”