Tag Archives: Biomedical

High Quality Asphere Manufacturing from Edmund Optics

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/high-quality-asphere-manufacturing-from-edmund-optics

Edmund Optics’ asphere experts Amy Frantz and Oleg Leonov, and moderator Lars Sandström, Precision Optics Senior Business Line Manager, present the benefits of using aspheres in optical system design and the factors that need to be taken into account during the design process. These key manufacturability considerations can significantly reduce asphere lead time and cost if addressed early in the design process.

At the conclusion of this webinar, participants will have a strong understanding of:

  • Benefits of using aspheres in optics system design
  • Challenges of asphere manufacturing
  • Key factors in designing manufacturable aspheres

Quadriplegic Pilots Race for Gold in Cybathlon Brain Race

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/bionics/quadriplegic-pilots-race-for-gold-in-cybathlon-brain-race

The competitors were neck-and-neck going into the final turns of the last heat, and in the end, Italy beat Thailand by four seconds. But unlike the Olympic games, none of the competitors in this race could move their bodies. Instead, they competed using only their thoughts. 

This is Olympic racing, cyborg-style. Using brain-computer interface (BCI) systems, the competitors—all of whom are paralyzed from the neck down—navigated computer avatars through a racetrack using thought-controlled commands. 

The race was part of Cybathlon 2020, the second-ever cyborg Olympics, in which people with paralysis or amputated limbs turn themselves into cyborg athletes using robotics and algorithms. Proud competitors raced with their exoskeletons, powered wheelchairs, and prosthetic limbs through obstacle courses as their tech teams cheered them on.

Turning the Body into a Wire

Post Syndicated from Shreyas Sen original https://spectrum.ieee.org/biomedical/devices/turning-the-body-into-a-wire

In 2007, U.S. vice president Dick Cheney ordered his doctors to disable all wireless signals to and from his Internet-connected pacemaker. Cheney later said that the decision was motivated by his desire to prevent terrorists from being able to hack his pacemaker and use it to lethally shock his heart. Cheney’s command to his doctors might seem to some to be overly cautious, but wirelessly connected medical devices have a history of exploitable vulnerabilities. At a series of conferences in 2011 and 2012, for example, New Zealand hacker Barnaby Jack showed that connected medical devices could be remotely attacked. Jack used a high-gain antenna to capture the unencrypted electromagnetic signals transmitted by an insulin pump on a mannequin 90 meters away. He then used those signals to hack into the pump and adjust the level of insulin the pump delivered. He also hacked a pacemaker and made it deliver deadly electric shocks.

Eight years after those demonstrations, connected medical devices remain vulnerable. In June 2020, for example, the U.S. Department of Homeland Security recalled a model of connected insulin pumps. The pumps were transmitting sensitive information without encryption, making the data accessible to anyone nearby who might want to listen in.

Medical devices are only the tip of the iceberg when it comes to the wireless devices people are putting in or on their bodies. The list includes wireless earbuds, smartwatches, and virtual-reality headsets. Technologies still in development, such as smart contact lenses that display information and digital pills that transmit sensor data after being swallowed, will also be at risk.

All of these devices need to transmit data securely at low power and over a short range. That’s why researchers have started to think about them as individual components of a single human-size wireless network, referred to as a body-area network. The term “Internet of Bodies” (IoB) is also coming into use, taking a cue from the Internet of Things.

At the moment, IoB devices use established wireless technologies, mainly Bluetooth, to communicate. While these technologies are low power, well understood, and easy to implement, they were never designed for IoB networks. One of Bluetooth’s defining features is the ability for two devices to easily find and connect to one another from meters away. That feature is precisely what allows a hypothetical attacker to snoop on or attack the devices on someone’s body. These wireless technologies were also designed to send signals through air or vacuum, not through the human body, so they are less efficient than a method of communication designed from the start for that medium.

Through our research at Purdue University, we have developed a new method of communication that will keep medical devices, wearables, and any other devices on or near the body more secure than they are using low-power wireless signals to communicate with one another. The system capitalizes on the human body’s innate ability to conduct tiny, harmless electrical signals to turn the entire body into a wired communication channel. By turning the body into the network, we will make IoB devices more secure.

Sensitive personal data like medical information should always be encrypted when it’s transmitted, whether wirelessly or in an email or via some other channel. But there are three other especially good reasons to prevent an attacker from gaining access to medical devices locally.

The first is that medical data should be containable. You don’t want a device broadcasting information that someone might eavesdrop on. The second reason is that you don’t want the integrity of the device to be compromised. If you have a glucose monitor connected to an insulin pump, for example, you don’t want the pump to deliver more insulin because the monitor’s data was compromised. Not enough glucose in the blood can cause headaches, weakness, and dizziness, while too much can lead to vision and nerve problems, kidney disease, and strokes. Either situation can eventually lead to death. The third reason is that the device’s information always needs to be available. If an attacker were to jam the signals from an insulin pump or a pacemaker, the device might not even know it needed to respond to a sudden problem in the body.

So if security and privacy are so important, why not use wires? A wire creates a dedicated channel between two devices. Someone can eavesdrop on a wired signal only if they physically tap the wire itself. That’s going to be hard to do if the wire in question is on or inside your body.

Setting aside the benefits of security and privacy, there are some important reasons why you wouldn’t want wires crisscrossing your body. If a wire isn’t properly insulated, the body’s own biochemical processes can corrode the metal in the wire, which could in turn cause heavy-metal poisoning. It’s also a matter of convenience. Imagine needing to repair or replace a pacemaker with wires. Rethreading the wires through the body would be a very delicate task.

Rather than choose between wireless signals, which are easy for eavesdroppers to snoop, and wired signals, which bring risk to the body, why not a third option that combines the best of both? That’s the inspiration behind our work to use the human body as the communication medium for the devices in someone’s body-area network.

We call the method of sending signals directly through the body electro-quasistatic human-body communication. That’s a mouthful, so let’s just think of it as a body channel. The important takeaway is that by exploiting the body’s own conductive properties, we can avoid the pitfalls of both wired and wireless channels.

Metal wires are great conductors of electric charge. It’s a simple matter to transmit data by encoding 1s and 0s as different voltages. You need only define 1s as some voltage, which would cause current to flow through the wire, and 0s as zero voltage, which would mean no current flowing through the wire. By measuring the voltage over time at the other end of the wire, you end up with the original sequence of 1s and 0s. However, given that you don’t want metal wires running around or through the body, what can you do instead?
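The voltage-encoding scheme just described can be sketched in a few lines. This is an illustrative toy, not anything from the Purdue system; the voltage levels and threshold are arbitrary choices.

```python
# Illustrative sketch of encoding bits as voltage levels on a wire.
# V_HIGH, V_LOW, and the threshold are arbitrary values for illustration.
V_HIGH = 1.0  # volts representing a 1 (current flows)
V_LOW = 0.0   # zero volts representing a 0 (no current)

def encode(bits):
    """Map each bit to a voltage level on the wire."""
    return [V_HIGH if b else V_LOW for b in bits]

def decode(voltages, threshold=0.5):
    """Recover bits by comparing each measured voltage against a threshold."""
    return [1 if v > threshold else 0 for v in voltages]

bits = [1, 0, 1, 1, 0]
assert decode(encode(bits)) == bits  # round trip recovers the sequence
```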

The average adult human is about 60 percent water by weight. And though pure water is a terrible electrical conductor, water filled with conductive particles like electrolytes and salts conducts electricity better. Your body is filled with a watery solution called the interstitial fluid that sits underneath your skin and around the cells of your body. The interstitial fluid is responsible for carrying nutrients from the bloodstream to the body’s cells, and is filled with proteins, salts, sugars, hormones, neurotransmitters, and all sorts of other molecules that help keep the body going. Because interstitial fluid is everywhere in the body, it allows us to establish a circuit between two or more communicating devices sitting pretty much anywhere on the body.

Imagine someone with diabetes who uses an insulin pump and a separate monitor on the abdomen to manage blood glucose levels. Suppose they want their smartwatch, among its many other functions, to display current glucose levels and the operational status of the pump. Traditionally, these devices would have to be connected wirelessly, which would make it theoretically possible for anyone to grab a copy of the user’s personal data, or worse, to attack the pump itself. Today, many medical devices still aren’t encrypted, and even for those that are, encryption is not a guarantee of security.

Here’s how it would work with a body channel instead. The pump, the monitor, and the smartwatch would each be outfitted with a small copper electrode on its back, in direct contact with the skin. Each device also has a second electrode not in contact with the skin that functions as a sort of floating ground, which is a local electrical ground that is not directly connected with Earth’s ground. When the monitor takes a blood glucose measurement, it will need to send that data to both the pump, in case the insulin level needs to be adjusted, and to the smartwatch, so that the individual can see the level. The smartwatch can also store data for long-term monitoring, or encrypt it and send it to the user’s computer, or their doctor’s computer, for remote storage and analysis.

The monitor communicates its glucose measurements by encoding the data into a series of voltage values. Then, it transmits these values by applying a voltage between its two copper electrodes—the one touching the human body, and the one acting as a floating ground.

This applied voltage very slightly changes the potential of the entire body with respect to Earth’s ground. This tiny change in potential between the body and Earth’s ground is just a fraction of the potential difference between the monitor’s two electrodes. But it’s enough to be picked up, as an even smaller fraction after crossing the body, by the devices elsewhere. Because both the pump on the waist and the smartwatch on the wrist are on the body, they can detect this change in potential across their own two electrodes (one on-body, one floating). The pump and the smartwatch then convert these potential measurements back into data, all without the actual signal ever traveling beyond the skin.
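A rough numerical sketch of this transmit, attenuate, and receive chain. The transmit voltage and channel gain below are made-up placeholders (the article gives no figures for the body’s attenuation); the point is that a receiver thresholding against the received amplitude can decode the data even after heavy loss.

```python
# Hypothetical illustration of the body-channel signaling described above.
# TX_VOLTAGE and CHANNEL_GAIN are invented values, not measurements.
TX_VOLTAGE = 1.0     # volts applied between the monitor's two electrodes
CHANNEL_GAIN = 1e-3  # assumed fraction of the swing surviving the body path

def transmit(bits):
    """Encode bits as voltages, then attenuate them through the body channel."""
    return [TX_VOLTAGE * b * CHANNEL_GAIN for b in bits]

def receive(samples):
    """Threshold at half the expected received amplitude to recover the bits."""
    threshold = TX_VOLTAGE * CHANNEL_GAIN / 2
    return [1 if s > threshold else 0 for s in samples]

data = [1, 0, 1, 1, 0, 0, 1]
assert receive(transmit(data)) == data  # round trip survives the attenuation
```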

One of the biggest challenges for realizing this method of body communication is in selecting the best wavelengths for the electrical signals. Electrical wavelengths like the ones we’re considering here are much longer than the RF wavelengths for wireless communications.

The reason selecting a frequency is a challenge is that there is a range of frequencies at which the human body itself can become an antenna. An ordinary radio antenna creates a signal when an alternating current causes the electrons in its material to oscillate and create electromagnetic waves. The frequency of the transmitted waves depends on the frequency of the alternating current fed into the antenna. Likewise, an alternating current at certain frequencies applied to the human body will cause the body to radiate a signal. This signal, while weak, is still strong enough to be picked up with the right equipment and from some distance away. And if the body is acting as an antenna, it can also pick up unwanted signals from elsewhere that might interfere with wearables’ and implants’ ability to talk with one another.

For the same reason you don’t want to use technologies like Bluetooth, you want to keep electrical signals confined to the body and not accidentally radiating from or to it. So you have to avoid electrical frequencies at which the human body becomes an antenna, which are in the range of 10 to 100 megahertz. Above that are the wireless bands, and we’ve already mentioned the problems there. The upshot is that you need to use frequencies in the range of 0.1 to 10 MHz, in which signals will stay confined to the body.
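A quick back-of-the-envelope check of why 10 to 100 MHz is the body-as-antenna region: a dipole antenna resonates when its length is roughly half a wavelength, and an adult body of about 1.7 meters (an assumed round number, not a figure from the article) is close to half a wavelength near 90 MHz. The sketch below verifies that, and encodes the safe 0.1-to-10 MHz band as a simple predicate.

```python
C_LIGHT = 3.0e8      # speed of light in m/s (approximate)
BODY_LENGTH_M = 1.7  # assumed adult height, for illustration only

def wavelength_m(freq_hz):
    """Free-space wavelength for a given frequency."""
    return C_LIGHT / freq_hz

# Near 88 MHz, a half wavelength is about the length of a human body,
# which is why the 10-100 MHz region can turn the body into an antenna.
assert abs(wavelength_m(88e6) / 2 - BODY_LENGTH_M) < 0.1

def in_body_confined_band(freq_hz):
    """True if the carrier lies in the 0.1-10 MHz band that stays confined."""
    return 1e5 <= freq_hz <= 1e7

assert in_body_confined_band(1e6)       # 1 MHz: safe body-channel band
assert not in_body_confined_band(50e6)  # 50 MHz: body-as-antenna region
```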

Earlier attempts to use the human body to communicate have usually shied away from these lower frequencies because the body channel is typically high loss there. In other words, signals at these lower frequencies require more power to guarantee that they reach their destination. Without a significant boost in power, a signal from a glucose monitor on the abdomen might degrade into unreadability before reaching a smartwatch on the wrist. These previous efforts were high loss because they focused on sending electrical current directly through the body, rather than encoding information in small changes of potential. We’ve found that the parasitic capacitance between a device and the body is key to creating a working channel.

Capacitance refers to the ability of an object to store electrical charge. Parasitic capacitance is unwanted capacitance that occurs unintentionally between any two objects, such as two charged areas in close proximity on a circuit board, or a person’s hand and their phone. Typically, parasitic capacitance is a nuisance, although it also enables certain applications like touch screens.

Astute readers may have picked up that we haven’t mentioned one key aspect of circuits before now: A circuit needs to be a closed loop for electrical communication to be possible. Up until now, we’ve restricted our discussion to the forward path, meaning the part of the circuit from the transmitting electrode to the receiving electrode. But we need a path back. We have one thanks to parasitic capacitance between the floating ground electrodes on the devices and Earth’s ground.

Here’s how to picture the circuit we’re using. First, imagine two circuit loops. The first loop begins with the transmitting device, at the electrode touching the skin. The circuit then goes through the body, down through the feet to the actual ground, and then back up through the air to the other (floating) electrode on the transmitting device. We should note here that this is not a loop through which direct current can flow. But because parasitic capacitances exist between any two objects, such as your feet and your shoes, and your shoes and the ground, a small alternating current can exist.

The second loop, in a similar fashion, begins with the receiving device, at its electrode that is touching the skin. It then goes through the body—both loops share this segment—to the ground, and back through the air to the floating-ground electrode on the receiving device.

The key here is to understand that the circuit loops are important not because we have to push a current through them necessarily, but because we need a closed path of capacitors. In a circuit, if the voltage changes across one capacitor—for example, the two electrodes of the transmitting device—it creates a slight alternating current in the loop. The other capacitors, meaning both the body and the air, “see” this current and, because of their impedances, or resistances to the current, their voltages change as well.

Remember that the circuit loop with the transmitting device and the one with the receiving device share the body as a segment of their respective loops. Because they share that segment, the receiving device also responds to the slight change in the body’s voltage. The two electrodes making up the receiving device’s capacitor detect the body’s changing voltage and allow that measurement to be decoded as meaningful information.

We have found that we want any IoB device’s electrode capacitor to present a high impedance at the signal frequency. If it does, the relatively high voltages created by the transmitting device will result in extremely low currents in the body itself. Obviously, this makes sense from a safety perspective: We don’t want to run high current through the body, after all. But it also makes the communications channel low loss. That’s because a high-impedance capacitor is particularly sensitive to minor changes in current. The upshot is that we can keep the current low (and safe) and still get clear voltage measurements at the receiving device. We’ve found that our technique results in a reduction in loss of two orders of magnitude compared with previous attempts to create a communications channel in the body, which relied on sending an electrical signal via current directly through the body.
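To see why a high-impedance coupling keeps body currents tiny, recall that an ideal capacitor’s impedance magnitude is 1/(2πfC). The capacitance and voltage values below are illustrative guesses, not figures from the research:

```python
import math

def capacitive_impedance(freq_hz, capacitance_f):
    """|Z| = 1 / (2*pi*f*C) for an ideal capacitor."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

# Illustrative (not measured) values: a small coupling capacitance
# presents a large impedance at body-channel frequencies, so even a
# ~1 V swing drives only microamps through the loop.
z = capacitive_impedance(1e6, 1e-12)  # 1 picofarad at 1 MHz
current = 1.0 / z                     # amps drawn by a 1 V swing
assert 150e3 < z < 170e3              # about 159 kilohms
assert current < 10e-6                # well under 10 microamps
```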

Our method for turning the human body into a communications channel shifts the distance at which signals can be intercepted from the 5- to 10-meter range of Bluetooth and similar signals to below 15 centimeters. In other words, we’ve reduced the distance over which an attacker can both intercept and interfere with signals by two orders of magnitude. With our method, an attacker would need to be so close to the target that there’s no way to hide.

Not only does our method offer more privacy and security for anyone with a medical implant or device, but as a bonus, the communications are far more energy efficient as well. Because we’ve developed a system that is low loss at low frequencies, we can send information between devices using far less power. Our method requires less than 10 picojoules per transferred bit. For reference, that’s about 0.01 percent of the energy required by Bluetooth. Using 256-bit encryption, it drew 415 nanowatts of power to transmit 1 kilobit per second, which is more than three orders of magnitude below Bluetooth (which draws between 1 and 10 milliwatts).
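The power comparison is easy to verify. Note also that at the quoted rate, 415 nW spread over 1 kilobit per second works out to 415 picojoules per transferred bit, so the sub-10-pJ figure presumably describes raw transmission without the 256-bit encryption overhead:

```python
import math

avail_power_w = 415e-9  # 415 nW with 256-bit encryption, from the article
bluetooth_low_w = 1e-3  # 1 mW, low end of the quoted Bluetooth draw
bit_rate = 1000         # 1 kilobit per second

# "More than three orders of magnitude below Bluetooth":
orders = math.log10(bluetooth_low_w / avail_power_w)
assert orders > 3

# Energy per bit at that rate is simply power divided by bit rate.
energy_per_bit = avail_power_w / bit_rate
assert abs(energy_per_bit - 415e-12) < 1e-15  # 415 picojoules per bit
```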

Medical devices like pacemakers and insulin pumps have been around for decades. Bluetooth earbuds and smartwatches may be newer, but neither life-saving medical equipment nor consumer tech is leaving our bodies any time soon. It only makes sense to make both categories of devices as secure as possible. Data is always most vulnerable to a malicious attack when it is moving from one point to another, and our IoB communication technique can finally close the loop on keeping personal data from leaving your body.

This article appears in the December 2020 print issue as “To Safeguard Sensitive Data, Turn Flesh and Tissue Into a Secure Wireless Channel.”

About the Author

Shreyas Sen is an associate professor of electrical and computer engineering at Purdue University. He is a Senior Member of the IEEE. Shovan Maity and Debayan Das are graduate students of Sen at Purdue University.

Important Asphere Specifications and Their Impact on Optical Performance

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/important-asphere-specifications-and-their-impact-on-optical-performance

Aspheres, as key optical components, are true “enablers” in the field of optics and photonics, especially for applications that require light weight and small size. This whitepaper gives an overview of important asphere specifications and the impact they can have on optical performance.

Learn about Aspheres and their specifications and understand how to best use them to optimize performance of your optical system.

Flexible, Wearable Sensors Detect Workers’ Fatigue

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/biomedical/devices/flexible-wearable-sensors-detect-workers-fatigue

Fatigue in the workplace is a serious issue today—leading to accidents, injuries and worse. Some of history’s worst industrial disasters, in fact, can be traced at least in part to worker fatigue, including the 2005 Texas City BP oil refinery explosion and the nuclear accidents at Chernobyl and Three Mile Island.

Given the potential consequences of worker fatigue, scientists have been exploring wearable devices for monitoring workers’ alertness, which correlates with physiological parameters such as heart rate, breathing rate, sweating, and muscle contraction. In a recent study published November 6 in IEEE Sensors Journal, a group of Italian researchers describe a new wearable design that measures the frequency of the user’s breathing, which they argue is a proxy for fatigue. Breathing frequency is also used to identify stressful conditions such as excessive cold, heat, hypoxia, pain, and discomfort.

“This topic is very important since everyday thousands of work-related accidents occur throughout the world, affecting all sectors of the economy,” says Daniela Lo Presti, a PhD student at Università Campus Bio-Medico di Roma, in Rome, Italy, who was involved in the study. “We believe that monitoring workers’ physiological state during [work]… may be crucial to prevent work-related accidents and improve the workers’ quality performances and safety.”

The sensor system that her team designed involves two elastic bands, worn just below the chest (at the thorax) and around the abdomen. Each band is flexible, made of a soft silicone matrix with embedded fiber optic sensors, and conforms well to the user’s chest as he or she breathes.

“These sensors work as optical strain gauges. When the subject inhales, the diaphragm contracts and the stomach inflates, so the flexible sensor that is positioned on the chest is strained,” explains Lo Presti. “Conversely, during the exhalation, the diaphragm expands, the stomach depresses, and the sensor is compressed.”

The sensors were tested on 10 volunteers while they did a variety of movements and activities, ranging from sitting and standing to lateral arm movements and lifting objects from the ground. The results suggest that the flexible sensors are adept at estimating respiratory frequency, providing similar measurements to a flow meter (a standard machine for measuring respiration). The researchers also found that their sensor could be strained by up to 2.5% of its initial length.

Lo Presti says this design has several strengths, including how well the sensor conforms to the user’s body. The silicone matrix is dumbbell shaped, allowing for better adhesion of the sensing component to the band, she says.

However, the sensing system must be plugged into a bulky instrument for processing the fiber optical signals (called an optical interrogator). Lo Presti says other research teams are currently working on making these devices smaller and cheaper. “Once high-performant, smaller interrogators are available, we will translate our technology to a more compact wearable system easily usable in a real working scenario.”

Electric Fields Accelerate CRISPR-based COVID Test

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/researchers-improve-crisprbased-covid-test-with-electric-fields

The world desperately needs a portable, reliable COVID-19 test that can deliver immediate results. Now scientists at Stanford University say they hope that a new diagnostic assay will fill that void.

In a paper published last week in the journal PNAS, the scientists describe how they used electric fields and the genetic engineering technique CRISPR to build a microfluidic lab-on-a-chip that can detect the novel coronavirus. The test delivers results in about half an hour—a record for CRISPR-based assays, according to the authors—and uses a lower volume of scarce reagents, compared with other CRISPR-based tests in development.

“We’re showing that we have all the elements required to achieve a miniaturized and automated device with no moving parts,” says Juan Santiago, vice chair of mechanical engineering at Stanford, who led the research. He adds, however, that more work lies ahead before their test could be ready for the public.  

Telemedicine Comes to the Operating Room

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/biomedical/devices/telemedicine-comes-to-the-operating-room

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

You know what a hospital operating room looks like—at least from TV shows. There’s the surgeon, of course, maybe a surgical resident, nurses, a scrub tech, the anesthesiologist, maybe a few aides; some students, if it’s a teaching hospital. 

But an actual modern hospital operating room probably has someone you never see on television: a medical device company representative. The device might be a special saw or probe or other tool for the surgeon to use; it might be a device being implanted, such as an artificial hip, knee, or mandible; a pacemaker—even, lately, internal braces to stabilize someone’s spine.

The toolkits for some of these devices might include dozens of wrenches and screws. The surgeon may be using the device and the kit for the first time. The medical device company representative quite probably knows more about the device and its insertion than anyone on the surgical team.

Obviously, in the time of the coronavirus, it’s a plus to have as few people in the OR as possible. But even in non-Covid times, it’s inefficient to fly these company reps around the country to observe and advise an operation that might only take an hour. And so, in a handful of ORs, you’ll see something else—one or more cameras, mounted strategically, and a flat-panel screen on a console, connected to a remote console. The medical device rep—or a consulting surgeon—can be a thousand kilometers away, controlling the cameras, looking at an MRI scan, and making notations on their tablet that can be seen on the one in the operating room.

It’s telemedicine for the OR, and it’s the brainchild of my guest today.

Daniel Hawkins is a serial inventor with well over 100 patents to his name and a serial entrepreneur with several startups on his résumé. His latest, Avail Medsystems, is the one whose system we’re talking about today. He joins us by Zoom.

Daniel, welcome to the podcast.

Daniel Hawkins Thanks for the opportunity, Steven. Happy to be here.

Steven Cherry Daniel, I didn’t know anything about these medical device reps. I gather they’re often part of the marketing or customer support teams at their companies, but they undergo some real surgical training before they start advising doctors.

Daniel Hawkins They do, in fact, Steven. Typically the training regimens are several weeks, if not several months, long. After reps complete those regimens, they’re required to travel with somebody very experienced in the operating room. What begins as didactic training in a classroom setting, or possibly even a cadaveric lab, then converts to real-world settings in operating rooms, where their teacher, if you will, someone who has been on the job for an extended period, mentors them on an ongoing basis for several weeks, if not a few months, before the new representative is turned loose.

Steven Cherry This isn’t just Zoom for operating rooms. The cameras, for example, aren’t like the webcam in my computer.

Daniel Hawkins No, they’re not. These are, in fact, 30x optical-zoom cameras. I can confidently say there’s not a camera on the planet that we haven’t tried! We ultimately chose a pair of cameras with incredible clarity, color balancing, and appropriate low-light image capture, because in operating rooms you need all of those things. The remote individual, whether a sales rep or a trained physician advising on an open surgery, needs crystal-clear images of the tissue being operated on. And color balancing, white balancing, and tissue-plane identification all rely on high-end optical clarity.

Steven Cherry The cameras were just one of the engineering challenges you faced.

Daniel Hawkins We require high-definition audio and high-definition video at the local source, meaning the operating room. We transfer that via a HIPAA-compliant, fully encrypted Internet connection, bouncing off the cloud and then down to the remote participant, who may be the industry representative or an advising surgeon across town, across the country, or across the globe. Our system is designed to have latency of less than half a second. Now, of course, we’re dependent on the quality of the local and the remote Internet connections. But before we install a system, we take care of the local issues with provisioning of the network in the hospital.

Steven Cherry Another challenge was the business model. There’s a hundred thousand dollars worth of equipment here, but your solution doesn’t involve customers shelling out that money.

Daniel Hawkins That’s right. I’ve been in the medical device business for twenty-six years, Steven, and one of the first capital-equipment businesses I was involved with in health care was actually the da Vinci surgical robot produced by Intuitive Surgical. That’s a two-million-dollar robot. Be it two million dollars, two hundred thousand dollars, or even two thousand dollars, capital equipment requires extensive approvals inside of hospitals through a capital acquisition process. And that would really delay our commercialization if we required it to get our systems placed. We decided instead to pursue a very aggressive model, inasmuch as we’re not charging at all for the hardware. We’re not charging a capital cost, we’re not charging a lease. We’re not even charging for upkeep, maintenance, or technical support. It’s fully free of charge to the hospitals from a capital perspective. What we do instead is market the utilization of these systems as a fee-for-service based on time.

Steven Cherry In some sense, your customer is also the medical device manufacturer.

Daniel Hawkins Yes, we’re really a two-sided network. The first side, of course, is placing the consoles in hospitals or ambulatory surgery centers; we generate our revenues from the fees paid by the remote participant. In the vast majority of cases that is, in fact, the medical device manufacturer, whether a Johnson & Johnson, a Medtronic, an Abbott, or a Boston Scientific. The medical device companies have, in aggregate, well over 100 000 sales reps and clinical specialists. The clinical specialists are somewhat like salespeople, except they don’t have a sales quota; their whole job is to support procedures. There are some 110 000 sales reps alone, and probably something similar in the clinical-specialist field force. These people need access to operating rooms every day. They waste an extraordinary amount of time driving between customers from one hospital to the next, and waiting at each hospital for the next procedure. The estimates are that about 50 percent of their time is lost to logistics. With remote access, you can have a significant increase in the efficiency of time spent supporting your customers, those customers being the surgeons conducting the operations.

Steven Cherry We think of the remote experience as being inferior, but it seems there are some advantages here. For example, being able to look at scans more easily.

Daniel Hawkins That’s a great way of thinking about it. There are really a number of advantages. In an operating room, when you go in as an industry representative to help a surgeon through the specifics of using a device that you represent, you have to observe what’s called a sterile field, kind of an imaginary bubble that extends probably six or eight feet in every direction around the operating table. That means you need to stand back. If you’re standing back, it’s hard to see the operating field itself. And you can’t point to anything unless you use a laser pointer, which is a common tool in many reps’ bags.

And you also can’t really annotate or draw on a screen. Imagine there’s a screen displaying part of the procedure: it could be from a moving X-ray called an angiogram, if it’s an angioplasty placing a stent in the heart, or it could be a full video image, if it’s a minimally invasive procedure such as laparoscopic surgery. You might want to point something out to the surgeon, and you can’t really do that without a tool that would allow you to draw. Those are two examples of things that we solve with the Avail system. But because of the nature of our cameras and our console, you can actually get a better view of the operating field using our system than you could get if you were physically in the room. One of our cameras is on a boom arm, positioned over the operating field, so you are able to look directly down onto the operating field, zoom in, and quite literally count the eyelashes on the patient if you wanted to. The level of visual acuity is quite impressive. We also give somebody remote the ability to draw on the screen, almost like you might see on Monday Night Football.

Steven Cherry So is there an increased interest in your system because of the pandemic, or maybe less so because so much in hospitals is on hold while they deal with that one overriding problem?

Daniel Hawkins That’s a great question. The fundamental issues that we’re solving have existed for forty years. Medical devices have always been supported, trained on, and introduced in person. And that’s a challenge. In fact, somewhere between 25 percent and 100 percent of cases require physical presence from industry. For some procedures, like angioplasty, about one in four times there’s a person from a medical device company physically in the room. Pacemakers are actually not implanted unless there’s somebody in the room, because the medical device representative is integral to the procedure. The pandemic shone a spotlight on the issues of access and needing that access. And interest levels, Steven, actually went up. The awareness of the need for those people in the room, set against the restrictions on being able to come into the hospital, made it very, very apparent that a remote capability was needed.

Another thing happened that was really interesting. What was otherwise an assumption—that health care needed to be delivered in person—has been shattered, and dozens and dozens of medical device companies have approached us; we are under contract with several dozen right now.

Steven Cherry Daniel, you have something like one hundred and fifty patents. Your last startup, which I guess you’re still an adviser to, took some medical techniques that were well-known in kidney stone treatment and applied them to arterial plaque. None of this seems like the kind of thing that somebody would come up with if their degrees were from Wharton and Stanford in business and management.

Daniel Hawkins So I have been, in many respects, a medical device junkie for a few decades here, 26 years in total. But really, my interest stems even prior to that. My father was a physician. I grew up around medicine. I also grew up around entrepreneurship. What I really sought was a way to combine the two and didn’t know much about the medical-device industry. But what I did understand is I really thought the tools that surgeons used were pretty interesting.

When I was an undergraduate, I actually attempted to pursue a joint undergrad Wharton and premed degree. And thankfully, the deans of the schools made a different recommendation and suggested I pick one. I knew I didn’t want to actually be a physician, but I did know that I wanted to be involved in health care. And after business school, I got involved in health care immediately. Really, I didn’t have any patents at all until 2005, I believe it was.

I joined a couple of engineers in an incubator of sorts, sponsored by venture firms, and our task was to create new medical technologies for disease states that were underserved. They showed me how to invent, is probably the best way to describe it, Steven. And after that, I was hooked. It just became something where I would observe that there’s an issue. By the nature of that incubation process, I was the idea guy; I was the one trying to find the unmet needs. I would see those needs, and what I would hear from the engineers I was working with were the many different types of solutions that could be brought to bear. The beautiful part about that was that I was just informed enough to ask the question and just ignorant enough to not stop myself from wanting to pursue it.

Steven Cherry My grandmother was a doctor and, like your father, her office was downstairs in the house I grew up in, but I don’t have scores of medical-related patents, so I knew there was more to this story. You were also an executive at Intuitive Surgical, which makes the da Vinci surgical robot. In some ways, the Avail system backs away from robot-aided surgery. Why did neither of your recent startups go further down the robotic path?

Daniel Hawkins Really, robotics is a … it’s fascinating … It’s absolutely fascinating. And I think it’s frankly undertapped. There’s a level of expertise that is needed in robotics that I simply don’t have. Having said that, I am an adviser to a brand-new robotic surgery company, and what they’re working on is really just incredibly interesting, though I’m not at liberty to talk too much about it.

Steven Cherry Getting back to Avail, it would seem helpful for a rural community, say maybe where there’s no surgeon at all, but a doctor or even a nurse practitioner needing to perform a procedure for which they need trained guidance. Is there interest outside of big hospitals in big cities?

Daniel Hawkins There absolutely is. Rural applications, I think, are very relevant. As are military surgery centers. And, you know, there are many different use cases. And in some ways, I’d encourage you to think of what we’re doing as a telecommunications platform. We are connecting expertise from outside of the procedure room and delivering it inside the procedure room. And that means really anyone who is an outside expert can clinically contribute to a surgery where somebody might have incrementally less expertise.

It’s also relevant for ambulatory surgery centers where there tend not to be five or six or seven surgeons in a practice group all working the same day at that same location. If there’s a case in a large hospital that a surgeon is working on and they have a question that they think one of their colleagues might be able to help out, they’ll ask a circulating nurse or a technician to call doctor so-and-so. And that physician, if they’re otherwise available, might put on a mask and a pair of gloves and come in and have a look. And they might consult for five minutes or 15 minutes. That’s incredibly valuable and it happens all the time.

Steven Cherry I can imagine the expertise flipping around. This seems like a good tool for observing an operation, if you’re a student at a teaching hospital. Better than being maybe dozens of feet away in the theater.

Daniel Hawkins Absolutely true. In fact, we’re working with a couple of medical universities that are interested in revamping their curriculum to solve exactly that problem. The issue is that there might be a dozen and a half or two dozen surgeon trainees rotating through the operating rooms, trying to observe what they can. But as a practical matter, you can really only have two, maybe at most three, trainee surgeons in an operating room at any given point in time to observe. Past that it becomes difficult to see, and didactically a lot more challenging.

Steven Cherry What about outside of medicine? I can imagine a complex engine repair on an oil rig in the Arctic, for example.

Daniel Hawkins Most certainly. Our technology is not really dependent on the content of what it’s carrying; the capability is really universal for anything that involves audio and video. It has been proposed for the type of remote repair setting that you just described. It’s actually been proposed for use in hospitals in a similar fashion, where the manufacturer repairing an MRI machine would consult with the biomedical engineer in the facility, who points the cameras at the MRI machine and can be walked through the steps. And for the remote setting you just described, out in the Arctic: one of the interesting use cases that we’re actively exploring is a military application where one of our units might be on a Marine vessel, as long as they’re able to get a satellite Internet connection. We’re talking about the military, so that should not be an issue.

Steven Cherry Well, Daniel, that’s a pretty creative solution to a problem I think most of us didn’t even know existed. I’m sure hospitals and medical device reps are grateful for it. And I’m grateful for your joining us today.

Daniel Hawkins Thanks very much.

Steven Cherry We’ve been speaking with Daniel Hawkins, founder of Avail Medsystems, a startup that’s moving telemedicine from the doctor’s office to the hospital operating room.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

And we’re grateful to benefit from open-source—our music is by Chad Crouch and our editing tool is Audacity. This interview was recorded November 2, 2020. Radio Spectrum can be subscribed to on the Spectrum website, Spotify, Apple Podcasts, Stitcher, or wherever you get your podcasts. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Far-Infrared Now Near: Researchers Debut Compact Terahertz Laser

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/biomedical/imaging/compact-terahertzlaser

Terahertz rays could have a dizzying array of applications, from high-speed wireless networks to detecting cancers and bombs. Now researchers say they may finally have created a portable, high-powered terahertz laser.

Terahertz waves (also called submillimeter radiation or far-infrared light) lie between optical waves and microwaves on the electromagnetic spectrum. Ranging in frequency from 0.1 to 10 terahertz, terahertz rays could find many applications in imaging, such as detecting many explosives and illegal drugs, scanning for cancers, identifying protein structures, non-destructive testing and quality control. They could also be key to future high-speed wireless networks, which will transmit data at terabits (trillions of bits) per second.
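As a quick sanity check on the “submillimeter” name, free-space wavelength follows λ = c/f. The sketch below is illustrative only; the 0.1-to-10-terahertz band edges are the only figures taken from the text:

```python
# Convert the terahertz band edges to free-space wavelengths: lambda = c / f.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_thz: float) -> float:
    """Free-space wavelength in millimeters for a frequency given in terahertz."""
    return C / (freq_thz * 1e12) * 1e3  # meters -> millimeters

# Band edges from the article: 0.1 THz and 10 THz.
print(f"0.1 THz -> {wavelength_mm(0.1):.2f} mm")
print(f"10  THz -> {wavelength_mm(10.0):.3f} mm")
```

The band spans roughly 3 mm down to 0.03 mm (30 micrometers), which is why the same waves get called both “submillimeter radiation” and “far-infrared light.”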

However, terahertz rays are largely restricted to laboratory settings due to a lack of powerful and compact terahertz sources. Conventional semiconductor devices can generate terahertz waves at frequencies either below 1 terahertz or above 10 terahertz. The range of frequencies in the middle, known as the terahertz gap, might prove especially valuable for imaging, bomb detection, cancer detection and chemical analysis applications, says Qing Hu, an electrical engineer at MIT.

New Stent-like Electrode Allows Humans to Operate Computers With Their Thoughts

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/bionics/new-stent-like-electrode-allows-humans-to-operate-computers-with-their-thoughts

Two Australian men with neuromuscular disorders regained some personal independence after researchers implanted stent-like electrodes in their brains, allowing them to operate computers using their thoughts.

This is the first time such a device, dubbed a “stentrode,” has been implanted in humans, according to its inventors. The system also makes real-world use of brain-computer interfaces (BCIs)—devices that enable direct communication between the brain and a computer—more feasible.

The feat was described today in the Journal of Neurointerventional Surgery. “This paper represents the first fully implantable, commercial BCI system that patients can take home and use,” says Tom Oxley, founder of Melbourne-based Synchron, which developed the device.

No Implants Needed For Precise Control Deep Into The Brain

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/devices/deep-brain-control-without-implants

The first time Karl Deisseroth used light to control brain cells in a dish, people had a lot of questions, three in particular. Can the technique be used in living animals? Can it target different cell types? Can it work without implanting a light source into the brain?

In the years since that initial groundbreaking 2004 experiment, Deisseroth’s team and others found the answers to the first two questions: yes and yes. This month they answered the third question with another yes, successfully introducing an implant-free version of the technique. It is the first demonstration that optogenetics—which uses a combination of light and genetic engineering to control brain cells—can accurately switch the cells on and off without surgery.

“This is kind of a nice bookend to 16 years of research,” says Deisseroth, a neuroscientist and bioengineer at Stanford University. “It took years and years for us to sort out how to make it work.” The result is described this month in the journal Nature Biotechnology.

New Sensor Integrated Within Dental Implants Monitors Bone Health

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/the-human-os/biomedical/devices/new-sensor-integrated-within-dental-implants-monitors-bone-health

Scientists have created a new sensor that can be integrated within dental implants to passively monitor bone growth, bypassing the need for multiple x-rays of the jaw. The design is described in a study published September 25 in IEEE Sensors Journal.

Currently, x-rays are used to monitor jaw health following a dental implant. Dental x-rays typically involve low doses of radiation, but people with dental implants may require more frequent x-rays to monitor their bone health following surgery. And, as professor Alireza Hassanzadeh of Shahid Beheshti University, Tehran, notes, “Too many X-rays is not good for human health.”

To reduce this need for x-rays, Hassanzadeh and two graduate students at Shahid Beheshti University designed a new sensor that can be integrated within dental implants. It passively measures changes in the surrounding electrical field (capacitance) to monitor bone growth. Two designs, for short- and long-term monitoring, were created.

The sensors are made of titanium and poly-ether-ether-ketone, and are integrated directly into a dental implant using microfabrication methods. The designs do not require any battery, and passively monitor changes in capacitance once the dental implant is in place.

“When the bone is forming around the sensor, the capacitance of the sensor changes,” explains Hassanzadeh. This indicates how the surrounding bone growth changes over time. The changes in capacitance, and thus bone growth, are then conveyed to a reader device that transfers the measurements into a data logger.  
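The study doesn’t disclose the electrode geometry, but the sensing principle can be illustrated with the textbook parallel-plate model, C = εrε0A/d: as material with a different permittivity fills the space around the sensor, the capacitance shifts. In the sketch below, every number (electrode area, gap, and permittivity values) is hypothetical, chosen only to show the direction of the effect:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r: float, area_mm2: float, gap_um: float) -> float:
    """Parallel-plate capacitance C = eps_r * eps0 * A / d, in farads."""
    return eps_r * EPS0 * (area_mm2 * 1e-6) / (gap_um * 1e-6)

# Hypothetical permittivities: fluid-filled gap just after surgery,
# versus mineralized bone once healing has progressed.
c_initial = capacitance(eps_r=50.0, area_mm2=2.0, gap_um=100.0)
c_healed = capacitance(eps_r=12.0, area_mm2=2.0, gap_um=100.0)
print(f"initial: {c_initial * 1e12:.2f} pF, after bone growth: {c_healed * 1e12:.2f} pF")
```

A reader device tracking this capacitance over time would see a steady drift as bone replaces fluid, which is the trend the data logger records.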

In their study, the researchers tested the sensors in the femur and jaw bone of a cow. “The results reveal that the amount of bone around the implant has a direct effect on the capacitance value of the sensor,” says Hassanzadeh.

He says that the sensor still needs to be optimized for size and different implant shapes, and clinical experiments will need to be completed with different kinds of dental implant patients. “We plan to commercialize the device after some clinical tests and approval from FDA and authorities,” says Hassanzadeh.

Print These Electronic Circuits Directly Onto Skin

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/devices/skin-circuits

New circuits can get printed directly on human skin to help monitor vital signs, a new study finds. 

Wearable electronics are growing increasingly comfortable and powerful. A next step for such devices might include electronics printed directly onto the skin to better monitor and interface with the human body. 

Scientists wanted a way to sinter—that is, use heat to fuse—metal nanoparticles to fabricate circuits directly on skin, fabric or paper. However, sintering usually requires heat levels far too high for human skin. Other techniques for fusing metal nanoparticles into circuits, such as lasers, microwaves, chemicals or high pressure, are similarly dangerous for skin.

In the new study, researchers developed a way to sinter nanoparticles of silver at room temperature. The key behind this advance is a so-called sintering aid layer, consisting of a biodegradable polymer paste and additives such as titanium dioxide or calcium carbonate. 

Positive electrical charges in the sintering aid layer neutralized the negative electrical charges the silver nanoparticles could accumulate from other compounds in their ink. This meant it took less energy for the silver nanoparticles printed on top of the sintering aid layer to come together, says study senior author Huanyu Cheng, a mechanical engineer at Pennsylvania State University.

The sintering aid layer also created a smooth base for circuits printed on top of it. This in turn improved the performance of these circuits in the face of bending, folding, twisting and wrinkling.

In experiments, the scientists placed the silver nanoparticle circuit designs and the sintering aid layer onto a wooden stamp, which they pressed onto the back of a human hand. They next used a hair dryer set to cool to evaporate the solvent in the ink. A hot shower could easily remove these circuits without damaging the underlying skin.

After the circuits sintered, they could help the researchers measure body temperature, skin moisture, blood oxygen, heart rate, respiration rate, blood pressure and bodily electrical signals such as electrocardiogram (ECG or EKG) readings. The data from these sensors were comparable to or better than those measured using conventional commercial sensors that were simply stuck onto the skin, Cheng says.

The scientists also used this new technique to fabricate flexible circuitry on a paper card, to which they added a commercial off-the-shelf chip to enable wireless connectivity. They attached this flexible paper-based circuit board to the inside of a shirt sleeve and showed it could gather and transmit data from sensors printed on the skin. 

“With the use of a novel sintering aid layer, our method allows metal nanoparticles to be sintered at low or even room temperatures, as compared to several hundred degrees Celsius in alternative approaches,” Cheng says. “With enhanced signal quality and improved performance over their commercial counterparts, these skin-printed sensors with other expanded modules provide a repertoire of wearable electronics for health monitoring.”

The scientists are now interested in applying these sensors for diagnostic and treatment applications “for cardiopulmonary diseases, including COVID-19, pneumonia, and fibrotic lung diseases,” Cheng says. “This sensing technology can also be used to track and monitor marine mammals.”

The scientists detailed their findings online Sept. 11 in the journal ACS Applied Materials & Interfaces.

Scientists Can Now Take Virtual Walks Through Human Cells

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/devices/scientists-can-now-take-virtual-walks-through-human-cells

In his “Plenty of Room at the Bottom” lecture at Caltech in 1959, physicist Richard Feynman urged his audience to make the microscope ever more powerful so that biologists could explore the “staggeringly small world” beyond. It would be a lot easier to answer fundamental biological questions if we could “just look at the thing,” he said. 

A few years later, in the science fiction movie Fantastic Voyage, a submarine crew shrinks to microscopic size and goes on a mission through the human body to repair brain damage. The 1966 movie trailer says the film “drops the bottom out of the world you know and understand,” and sends viewers “where no man or camera has gone before.” 

Now, scientists have combined the visions of the mid-century physicist and filmmakers in one groovy virtual reality experience. In a paper published last week in Nature Medicine, researchers described new software that enables scientists to step inside and explore a cell or other biological structures using a virtual reality (VR) headset.

Brain Implant Bypasses Eyes To Help Blind People See

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/bionics/progress-toward-a-brain-implant-for-the-blind

Early humans were hunters, and their vision systems evolved to support the chase. When gazing out at an unchanging landscape, their brains didn’t get excited by the incoming information. But if a gazelle leapt from the grass, their visual cortices lit up. 

That neural emphasis on movement may be the key to restoring sight to blind people. Daniel Yoshor, the new chair of neurosurgery at the University of Pennsylvania’s Perelman School of Medicine, is taking cues from human evolution as he devises ways to use a brain implant to stimulate the visual cortex. “We’re capitalizing on that inherent bias the brain has in perception,” he tells IEEE Spectrum.

He recently described his experiments with “dynamic current steering” at the Bioelectronic Medicine Summit, and also published the research in the journal Cell in May. By tracing shapes with electricity onto the brain’s surface, his team is producing a relatively clear and functional kind of bionic vision. 

Yoshor is involved in an early feasibility study of the Orion implant, developed by Second Sight, a Los Angeles-based company that’s been at the forefront of technology workarounds for people with vision problems.

In 2013, the U.S. Food and Drug Administration approved Second Sight’s retinal implant system, the Argus II, which uses an eyeglass-mounted video camera that sends information to an electrode array in the eye’s retina. Users have reported seeing light and dark, often enough to navigate on a street or find the brightness of a face turned toward them. But it’s far from normal vision, and in May 2019 the company announced that it would suspend production of the Argus II to focus on its next product.

The company has had a hard time over the past year: At the end of March it announced that it was winding down operations, citing the impact of COVID-19 on its ability to secure financing. But in subsequent months it announced a new business strategy, an initial public offering of stock, and finally in September the resumption of clinical trials for its Orion implant.  

The Orion system uses the same type of eyeglass-mounted video camera, but it sends information to an electrode array atop the brain’s visual cortex. In theory, it could help many more people than a retinal implant: The Argus II was approved only for people with an eye disease called retinitis pigmentosa, in which the photoreceptor cells in the retina are damaged but the rest of the visual system remains intact and able to convey signals to the brain. The Orion system, by sending info straight to the brain, could help people with more widespread damage to the eye or optic nerve.

Six patients have received the Orion implant thus far, and each now has an array of 60 electrodes that tries to represent the image transmitted by the camera. But imagine a digital image made up of 60 pixels—you can’t get much resolution. 

Yoshor says his work on dynamic current steering began with “the fact that getting info into the brain with static stimulation just didn’t work that well.” He says that one possibility is that more electrodes would solve the problem, and wonders aloud about what he could do with hundreds of thousands of electrodes in the brain, or even 1 million. “We’re dying to try that, when our engineering catches up with our experimental imagination,” he says. 

Until that kind of hardware is available, Yoshor is focusing on the software that directs the electrodes to send electrical pulses to the neurons. His team has conducted experiments with two blind Second Sight volunteers as well as with sighted people (epilepsy patients who have temporary electrodes in their brains to map their seizures). 

One way to understand dynamic current steering, Yoshor says, is to think of a trick that doctors commonly use to test perception—they trace letter shapes on a patient’s palm. “If you just press a ‘Z’ shape into the hand, it’s very hard to detect what that is,” he says. “But if you draw it, the brain can detect it instantaneously.” Yoshor’s technology does something similar, grounded in well-known information about how a person’s visual field maps to specific areas of their brain. Researchers have constructed this retinotopic map by stimulating specific spots of the visual cortex and asking people where they see a bright spot of light, called a phosphene.

The static form of stimulation that disappointed Yoshor essentially tries to create an image from phosphenes. But, says Yoshor, “when we do that kind of stimulation, it’s hard for patients to combine phosphenes to a visual form. Our brains just don’t work that way, at least with the crude forms of stimulation that we’re currently able to employ.” He believes that phosphenes cannot be used like pixels in a digital image. 

With dynamic current steering, the electrodes stimulate the brain in sequence to trace a shape in the visual field. Yoshor’s early experiments have used letters as a proof of concept: Both blind and sighted people were able to recognize such letters as M, N, U, and W. This system has an additional advantage of being able to stimulate points in between the sparse electrodes, he adds. By gradually shifting the amount of current going to each (imagine electrode A first getting 100 percent while electrode B gets zero percent, then shifting to ratios of 80:20, 50:50, 20:80, 0:100), the system activates neurons in the gaps. “We can program that sequence of stimulation; it’s very easy,” he says. “It goes zipping across the brain.”
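That ratio sweep is easy to express in code. The sketch below is a hypothetical illustration (the function name, the 100-unit total current, and the list representation are all assumptions); only the five ratios come from Yoshor’s description:

```python
def steering_sequence(total_current_ua, fractions_to_b):
    """Split a fixed total current between electrodes A and B.

    Each fraction f in fractions_to_b yields the pair
    (current to A, current to B) = (total * (1 - f), total * f).
    Intermediate ratios activate neurons at "virtual" points
    lying between the two physical electrodes.
    """
    return [(total_current_ua * (1.0 - f), total_current_ua * f)
            for f in fractions_to_b]

# The ratios quoted in the article: 100:0, 80:20, 50:50, 20:80, 0:100.
seq = steering_sequence(100.0, [0.0, 0.2, 0.5, 0.8, 1.0])
for a, b in seq:
    print(f"electrode A: {a:5.1f}   electrode B: {b:5.1f}")
```

Stepping through such a sequence quickly, one pair after another, is the electrical analogue of drawing the letter on the visual cortex rather than pressing its whole shape at once.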

Second Sight didn’t respond to requests for comment for this article, so it’s unclear whether the company is interested in integrating Yoshor’s stimulation technique into its technology. 

But Second Sight isn’t the only entity working on a cortical vision prosthetic. One active project is at Monash University in Australia, where a team has been preparing for clinical trials of its Gennaris bionic vision system.

Arthur Lowery, director of the Monash Vision Group and a professor of electrical and computer systems engineering, says that Yoshor’s research seems promising. “The ultimate goal is for the brain to perceive as much information as possible. The use of sequential stimulation to convey different information with the same electrodes is very interesting, for this reason,” he tells IEEE Spectrum in an email. “Of course, it raises other questions about how many electrodes should be simultaneously activated when presenting, say, moving images.”

Yoshor thinks the system will eventually be able to handle complex moving shapes with the aid of today’s advances in computer vision and AI, particularly if there are more electrodes in the brain to represent the images. He imagines a microprocessor that converts whatever image the person encounters in daily life into a pattern of dynamic stimulation.

Perhaps, he speculates, the system could even have different settings for different situations. “There could be a navigation mode that helps people avoid obstacles when they’re walking; another mode for conversation where the prosthetic would rapidly trace the contours of the face,” he says. That’s a far-off goal, but Yoshor says he sees it clearly. 

Treating Tinnitus Through the…Tongue?

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/devices/treating-tinnitus-through-thetongue

It can sound like a soft buzzing in one’s ears. Or a sudden hissing. Or a loud roaring. Tinnitus, the sensation of hearing phantom sounds, ranges from annoying to debilitating, and it affects an estimated 10 to 15 percent of the population. Unfortunately, finding relief from these symptoms can be tough.

Doctors and patients may find themselves attempting many treatments for tinnitus, including sound machines to mask the phantom noise, medications to treat underlying anxiety or depression, and investigational brain implants or vagus nerve stimulation. In the United States, there are currently no clinically approved drugs or devices to treat tinnitus.

Now, in a paper published today in the journal Science Translational Medicine, researchers at Dublin-based biotech Neuromod Devices, along with academic collaborators, present positive results from a year-long, randomized clinical trial of a device that pairs sound with gentle electrical tongue stimulation to treat tinnitus. In a group of 326 adults, 12 weeks of treatment with the device significantly reduced tinnitus symptom severity for up to 12 months after treatment.

Electronic Blood Vessels to the Rescue

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/bionics/electronic-bloodvessel

Cardiovascular disease is the number one cause of mortality worldwide. Now a new study reveals that electronic blood vessels might one day use electricity to stimulate healing and deliver gene therapies to help treat such maladies.

One-third of all U.S. deaths are linked to cardiovascular disease, according to the American Heart Association. When replacement blood vessels are needed to treat advanced cases of cardiovascular disease, doctors prefer ones taken from the patient’s own body, but sometimes the patient’s age or condition prevents such a strategy.

Artificial blood vessels that can prove helpful in cases where replacements more than 6 millimeters wide are needed are now commercially available. However, when it comes to smaller artificial blood vessels, so far none have succeeded in clinical settings. That’s because a complex interplay between such vessels and blood flow often triggers inflammatory responses, causing the walls of natural blood vessels to thicken and cut off blood flow, says Xingyu Jiang, a biomedical engineer at the Southern University of Science and Technology in Shenzhen, China. Jiang and his team report having developed a promising new artificial blood vessel that doesn’t cause an inflammatory response. The scientists detailed their findings online on 1 October in the journal Matter.

An Open-Source Bionic Leg

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/bionics/opensource-bionicleg

Details on the design and clinical tests of an open-source bionic leg are now freely available online, so that researchers can hopefully create and test safe and useful new prosthetics.

Bionic knees, ankles and legs under development worldwide to help patients walk are equipped with electric motors. Getting the most from such powered prosthetics requires safe and reliable control systems that can account for many different types of motion: for example, shifting from striding on level ground to walking up or down ramps or stairs.

However, developing such control systems has proven difficult. “The challenge stems from the fact that these limbs support a person’s body weight,” says Elliott Rouse, a biomedical engineer and director of the neurobionics lab at the University of Michigan, Ann Arbor. “If it makes a mistake, a person can fall and get seriously injured. That’s a really high burden on a control system, in addition to trying to have it help people with activities in their daily life.”

The Next Pandemic

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/static/the-next-pandemic

COVID-19 has galvanized tech communities. The tens of billions we’re spending on vaccines, antivirals, tests, robots, and devices are transforming how we’ll respond to future outbreaks of infectious disease.

1. Grand Biomedical Challenges

2. Testing, Tracing, and Modeling

3. Tech Takes on COVID-19

Tracking Respiratory Droplets on The Fly

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/devices/new-sensors-detect-respiratory-droplets-escaping-through-masks

COVID-19 spreads via droplets expelled from an infected person’s lungs, so determining how the release of moisture is affected by different masks is an important step toward better protective gear. Now, using a new 3D-printing technique, University of Cambridge researchers have created tiny, freestanding, conducting fibers they claim can detect respiratory moisture more effectively than anything currently on the market.

The researchers demonstrated the fiber sensors by testing the amount of breath moisture that leaks through face coverings. They attached their fiber array to the outside of the mask, wired it to a computer, and found that it outperformed conventional planar chip-based commercial sensors, particularly when monitoring rapid breathing. (A paper describing the invention was published today in the journal Science Advances.)

Dubbed “inflight fiber printing,” the technique enables the researchers to print the fibers and hook them into a monitoring circuit, all in one step. 

“Previously you could have very small conducting fiber production but it could not be incorporated directly into a circuit,” says Shery Huang, a lecturer in bioengineering at the University of Cambridge who led the research. “The main innovation here is we can directly incorporate these small conducting fibers onto the circuit with designable fiber pattern structures,” she says.

Here’s How We Prepare for the Next Pandemic

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/biomedical/devices/heres-how-we-prepare-for-the-next-pandemic

When the Spanish flu pandemic swept across the globe in 1918, it ravaged a population with essentially no technological countermeasures. There were no diagnostic tests, no mechanical ventilators, and no antiviral or widely available anti-inflammatory medications other than aspirin. The first inactivated-virus vaccines would not become available until 1936. An estimated 50 million people died.

Today, a best-case scenario predicts 1.3 million fatalities from COVID-19 in 2020, according to projections by Imperial College London, and rapidly declining numbers after that. That in a world with 7.8 billion people—more than four times as many as in 1918. Many factors have lessened mortality this time, including better implementation of social-distancing measures. But technology is also a primary bulwark.

Since January of this year, roughly US $50 billion has been spent in the United States alone to ramp up testing, diagnosis, modeling, treatment, vaccine creation, and other tech-based responses, according to the Committee for a Responsible Federal Budget. The massive efforts have energized medical, technical, and scientific establishments in a way that hardly anything else has in the past half century. And they will leave a legacy of protection that will far outlast COVID-19.

In the current crisis, though, it hasn’t been technology that separated the winners and losers. Taking stock of the world’s responses so far, two elements set apart the nations that have successfully battled the coronavirus: foresight and a painstakingly systematic approach. Countries in East Asia that grappled with a dangerous outbreak of the SARS virus in the early 2000s knew the ravages of an unchecked virulent pathogen, and acted quickly to mobilize teams and launch containment plans. Then, having contained the first wave, some governments minimized further outbreaks by carefully tracing every subsequent cluster of infections and working hard to isolate them. Tens of thousands of people, maybe hundreds of thousands, are alive in Asia now because of those measures.

In other countries, most notably the United States, officials initially downplayed the impending disaster, losing precious time. The U.S. government did not act quickly to muster supplies, nor did it promulgate a coherent plan of action. Instead states, municipalities, and hospitals found themselves skirmishing and scrounging for functional tests, for personal protective equipment, and for guidance on when and how to go into lockdown.

The best that can be said about this dismal episode is that it was a hard lesson about how tragic the consequences of incompetence can be. We can only hope that the lesson was learned well, because there will be another pandemic. There will always be another pandemic. There will always be pathogens that mutate ever so slightly, making them infectious to human hosts or rendering existing drug treatments ineffective. Acknowledging that fact is the first step in getting ready—and saving lives.

The cutting-edge technologies our societies have developed and deployed at lightning speed are not only helping to stem the horrendous waves of death. Some of these technologies will endure and—like a primed immune system—put us on a path toward an even more effective response to the next pandemic.

Consider modeling. In the early months of the crisis, the world became obsessed with the models that forecast the future spread of the disease. Officials relied on such models to make decisions that would have mortal consequences for people and multibillion-dollar ones for economies. Knowing how much was riding on the curves they produced, the modelers who create projections of case numbers and fatalities pulled out all the stops. As Matt Hutson recounts in “The Mess Behind the Models,” they adapted their techniques on the fly, getting better at simulating both a virus that nobody yet understood and the maddening vagaries of human behavior.
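The projections described here build on compartmental epidemic models. As a minimal sketch of the idea, here is the classic SIR (Susceptible–Infected–Recovered) model with forward-Euler integration; the parameter values are purely illustrative and are not fitted to COVID-19 data.

```python
# Minimal SIR epidemic model over fractions of a closed population.
# beta is the transmission rate, gamma the recovery rate (both per day);
# the values below are illustrative, not fitted to any real outbreak.
def simulate_sir(beta=0.3, gamma=0.1, i0=1e-4, days=160):
    s, i, r = 1.0 - i0, i0, 0.0          # susceptible, infected, recovered
    history = [(s, i, r)]
    for _ in range(days):                # one Euler step per day
        new_infections = beta * s * i    # transmission term
        new_recoveries = gamma * i       # recovery term
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

peak_infected = max(i for _, i, _ in simulate_sir())
```

Even this toy version reproduces the familiar epidemic curve: with beta/gamma around 3, a small seed infection grows exponentially, peaks, and burns out as the susceptible pool shrinks. Real forecasting models layer on age structure, mobility data, and behavioral change, which is where the difficulty the article describes comes in.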

In the development of both vaccines and antiviral drugs, researchers have committed to timelines that would have seemed like fantasies a year ago. In “AI Takes Its Best Shot,” Emily Waltz describes how artificial intelligence is reshaping vaccine makers’ efforts to find the viral fragments that trigger a protective immune response. The speed record for vaccine development and approval is four years, she writes, and that honor is held by the mumps vaccine; if a coronavirus vaccine is approved for the general public before the end of this year, it will blow that record away.

Antiviral researchers have it even tougher in some ways. As Megan Scudellari writes, hepatitis C was discovered in 1989—and yet the first antiviral effective against it didn’t become available until 26 years later, in 2015. “Automating Antivirals” describes the high-tech methods researchers are creating that could cut the current drug-development timeline from five or more years to six months. That, too, will mean countless lives saved: Even with a good vaccine, some people inevitably become sick. For some of them, effective antivirals will be the difference between life and death.

Beyond Big Pharma, engineers are throwing their energies into a host of new technologies that could make a difference in the war we’re waging now and in those to come. For example, this pandemic is the first to be fought with robots alongside humans on the front lines. In hospitals, robots are checking on patients and delivering medical supplies; elsewhere, they’re carting groceries and other goods to people in places where a trip to the store can be fraught with risk. They’re even swabbing patients for COVID-19 tests, as Erico Guizzo and Randi Klett reveal in a photo essay of robots that became essential workers.

Among the most successful of the COVID-fighting robots are those buzzing around hospital rooms and blasting floors, walls, and even the air with ultraviolet-C radiation. Transportation officials are also starting to deploy UV-C systems to sanitize the interiors of passenger aircraft and subway cars, and medical facilities are using them to sterilize personal protective equipment. The favored wavelength is around 254 nanometers, which destroys the virus by shredding its RNA. The problem is, such UV-C light can also damage human tissues and DNA. So, as Mark Anderson reports in “The Ultraviolet Offense,” researchers are readying a new generation of so-called far-UV sterilizers that use light at 222 nm, which is supposedly less harmful to human beings.
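Germicidal effectiveness is usually specified as a dose: irradiance integrated over exposure time. The back-of-envelope sketch below shows that relationship; the 3 mJ/cm² target used in the example is a made-up illustrative figure, not a published inactivation threshold for SARS-CoV-2.

```python
# Back-of-envelope UV-C exposure calculation.
# Dose (mJ/cm^2) = irradiance (mW/cm^2) x exposure time (s),
# so the time needed to hit a target dose is dose / irradiance.
def required_exposure_s(target_dose_mj_cm2: float,
                        irradiance_mw_cm2: float) -> float:
    return target_dose_mj_cm2 / irradiance_mw_cm2

# A hypothetical 3 mJ/cm^2 target at 0.5 mW/cm^2 takes 6 seconds.
t = required_exposure_s(3.0, 0.5)
```

The practical upshot is that a robot's dwell time in each position scales inversely with lamp irradiance at the surface, which is why shadowed areas and distant corners need much longer passes.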

When compared with successful responses in Korea, Singapore, and other Asian countries, two notable failures in the United States become clear: testing and contact tracing. For too long, testing was too scarce and too inaccurate in the United States. That was especially true early on, when it was most needed. And getting results sometimes took two weeks—a devastating delay, as the SARS-CoV-2 virus is notorious for being spread by people who don’t even know they’re sick and infectious. Researchers quickly realized that what was really needed was something “like a pregnancy test,” as one told Wudan Yan: “Spit on a stick or into a collection tube and have a clear result 5 minutes later.” Soon, we’ll have such a test.

Digital contact tracing, too, could be an enormously powerful weapon, as Jeremy Hsu reports in “The Dilemma of Contact-Tracing Apps.” But it’s a tricky one to deploy. During the pandemic, many municipalities have used some form of tracing. But much of it was low-key and low-tech—sometimes little more than a harried worker contacting people on a list. Automated contact tracing, using cloud-based smartphone apps that track people’s movements, proved capable of rapidly suppressing the contagion in places like China and South Korea. But most Western countries balked at that level of intrusiveness. Technical solutions that trade off some surveillance stringency for privacy have been developed and tested. But they couldn’t solve the most fundamental problem: a pervasive lack of trust in government among Americans and Europeans.

It has been 102 years since the Spanish flu taught us just how bad a global pandemic can be. But almost nobody expects that long of an interval until the next big one. Nearly all major infectious outbreaks today are caused by “zoonotic transfer,” when a pathogen jumps from an animal to human beings. And a variety of unrelated factors, including the loss of natural habitats due to deforestation and the rapid growth of livestock farming to feed industrializing economies, are stressing animal populations and putting them into more frequent contact with people.

We’re unlikely to halt or even measurably slow such global trends. What we can do is make sure we have suitable technology, good governance, and informed communities. That’s how we’ll mount a tougher response to the next pandemic.

This article appears in the October 2020 print issue as “Prepping for the Next Big One.”