Tag Archives: Biomedical/Bionics

Exclusive Q&A: Neuralink’s Quest to Beat the Speed of Type

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/tech-talk/biomedical/bionics/exclusive-neuralinks-goal-of-bestinworld-bmi

Elon Musk’s brain tech company, Neuralink, is subject to rampant speculation and misunderstanding. Just start a Google search with the phrase “can Neuralink…” and you’ll see the questions that are commonly asked, which include “can Neuralink cure depression?” and “can Neuralink control you?” Musk hasn’t helped ground the company’s reputation in reality with his public statements, including his claim that the Neuralink device will one day enable “AI symbiosis” in which human brains will merge with artificial intelligence.

It’s all somewhat absurd, because the Neuralink brain implant is still an experimental device that hasn’t yet gotten approval for even the most basic clinical safety trial. 

But behind the showmanship and hyperbole, the fact remains that Neuralink is staffed by serious scientists and engineers doing interesting research. The fully implantable brain-machine interface (BMI) they’ve been developing is advancing the field with its super-thin neural “threads” that can snake through brain tissue to pick up signals and its custom chips and electronics that can process data from more than 1000 electrodes.

In August 2020 the company demonstrated the technology in pigs, and this past April it dropped a YouTube video and blog post showing a monkey using the implanted device, called a Link, to control a cursor and play the computer game Pong. But the BMI team hasn’t been public about its current research goals, and the steps it’s taking to achieve them.

In this exclusive Q&A with IEEE Spectrum, Joseph O’Doherty, a neuroengineer at Neuralink and head of its brain signals team, lays out the mission. 

 

Joseph O’Doherty on…

  1. Aiming for a World Record
  2. The Hardware
  3. The Software
  4. What He’s Working on Right Now
  5. What the Limits Are, Where the Ceiling Is

    Aiming for a World Record

    IEEE Spectrum: Elon Musk often talks about the far-future possibilities of Neuralink: a future in which everyday people could get voluntary brain surgery and have Links implanted to augment their capabilities. But whom is the product for in the near term?

    Joseph O’Doherty: We’re working on a communication prosthesis that would give back keyboard and mouse control to individuals with paralysis. We’re pushing towards an able-bodied typing rate, which is obviously a tall order. But that’s the goal.

    We have a very capable device and we’re aware of the various algorithmic techniques that have been used by others. So we can apply best practices engineering to tighten up all the aspects. What it takes to make the BMI is a good recording device, but also real attention to detail in the decoder, because it’s a closed-loop system. You need to have attention to that closed-loop aspect of it for it to be really high performance.

    We have an internal goal of trying to beat the world record in terms of information rate from the BMI. We’re extremely close to exceeding what, as far as we know, is the best performance. And then there’s an open question: How much further beyond that can we go? 

    My team and I are trying to meet that goal and beat the world record. We’ll either nail down what we can, or, if we can’t, figure out why not, and how to make the device better.


    The Hardware

    Spectrum: The Neuralink system has been through some big design changes over the years. When I was talking to your team in 2019, the system wasn’t fully implantable, and there was still a lot in flux about the design of the threads, how many electrodes per thread, and the implanted chip. What’s the current design? 

    O’Doherty: The threads are often referred to as the neural interface itself; that’s the physical part that actually interfaces with the tissue. The broad approach has stayed the same throughout the years: It’s our hypothesis that making these threads extremely small and flexible is good for the long-term life of the device. We hope it will be something that the immune system likes, or at least dislikes less. That approach obviously comes with challenges because we have very, very small things that need to be robust over many years. And a lot of the techniques that are used to make things robust have to do with adding thickness and adding layers and having barriers. 

    Spectrum: I imagine there are a lot of trade-offs between size and robustness.  

    O’Doherty: There are other flexible and very cool neural interfaces out in the world that we read about in academic publications. But those demonstrations often only have to work for the one hour or one day that the experiment is done. Whereas we need to have this working for many, many, many, many days. It’s a totally different solution space.  

    Spectrum: When I was talking to your team in 2019, there were 128 electrodes per thread. Has that changed? 

    O’Doherty: Right now we’re doing 16 contacts per thread, spaced out by 200 microns. The earlier devices were denser, but it was overkill in terms of sampling neurons in various layers of the cortex. We could record the same neurons on multiple adjacent channels when the contacts were something like 20 microns apart. So we could do a very good job of characterizing the individual neurons we were recording from, but it required a lot of density, a lot of stuff packed in one spot, and that meant more power requirements. That might be great if you’re doing neuroscience, but perhaps less so if you’re trying to make a functional product.

    That’s one reason why we changed our design to spread out our contacts in the cortex, and to distribute them on many threads across the cortical area. That way we don’t have redundant information. The current design is 16 channels per thread, and we have 64 of these threads that we can place wherever we want within the cortical region, which adds up to 1,024 channels. Those threads go to a single tiny device that’s smaller than a quarter, which has the algorithms, the spike detection, battery, telemetry, everything.

    In addition to 64×16, we’re also testing 128×8 and 256×4 configurations to see if there are performance gains. We ultimately have the flexibility to do any configuration of 1,024 electrodes we’d like.

    Spectrum: Does each Link device have multiple chips?

    O’Doherty: Yes. The actual hardware is a 256-channel chip, and there are four of them, which adds up to 1,024 channels. The Link acts as one thing, but it is actually made up of four chips. 

    Spectrum: I imagine you’re continually upgrading the software as you push toward your goal, but is the hardware fixed at this point?

    O’Doherty: Well, we’re constantly working on the next thing. But it is the case that we have to prove the safety of a particular version of the device so that we can translate that to humans. We use what are called design controls, where we fix the device so we can describe what it is very well and describe how we are testing its safety. Then we can make changes, but we do it under an engineering control framework. We describe what we’re changing and then we can either say this change is immaterial to safety or we have to do these tests.


    The Software

    Spectrum: It sounds like a lot of the spike detection is being done on the chips. Is that something that’s evolved over time? I think a few years back it was being done on an external device. 

    O’Doherty: That’s right. We have a slightly different approach to spike detection. Let me first give a couple of big picture comments. For neuroscience, you often don’t just want to detect spikes. You want to detect spikes and then sort spikes by which neurons generated them. If you detect a spike on a channel and then realize, Oh, I can actually record five different neurons here. Which neuron did it come from? How do I refer each spike to the neuron that generated it? That’s a very difficult computational problem. That’s something that’s often done in post-processing—so after you record a bunch of data, then you do a bunch of math. 

    There’s another extreme where you simply put a threshold on your voltage, and you say that every time something crosses that threshold, it’s a spike. And then you just count how many of those happen. That’s all. That’s all the information you can use. 

    Both extremes are not great for us. In the first one, you’re doing a lot of computation that’s perhaps infeasible to do in a small package. With the other extreme, you’re very sensitive to noise and artifacts because many things can cause a threshold crossing that are not neurons firing. So we’re using an approach in the middle, where we’re looking for shapes that look like the signals that neurons generate. We transmit those events, along with a few extra bits of information about the spike, like how tall it is, how wide it is, and so on.

    That’s something that we were previously doing on the external part of the device. At the time we validated that algorithm, we had much higher bandwidth, because it was a wired system. So we were able to stream a lot of data and develop this algorithm. Then the chip team took that algorithm and implemented it in hardware. So now that’s all happening automatically on the chip. It’s automatically tuning its parameters—it has to learn about the statistical distribution of the voltages in the brain. And then it just detects spikes and sends them out to the decoder.
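As a rough illustration of that middle-ground approach (and not Neuralink's actual on-chip algorithm), a shape-matching spike detector that also reports a couple of per-spike features might look like the sketch below; the sampling rate, template shape, and thresholds are assumptions.

```python
# Rough illustration of "middle ground" spike detection: instead of raw
# threshold crossings, look for waveform segments that resemble a canonical
# spike shape, then keep a few summary features per event.
import numpy as np

FS = 20_000                      # samples per second (assumed)
TEMPLATE = -np.hanning(32)       # crude negative-going spike template (assumed shape)
TEMPLATE -= TEMPLATE.mean()
TEMPLATE /= np.linalg.norm(TEMPLATE)

def detect_spikes(voltage, match_thresh=0.6):
    """Return (sample_index, height, width_samples) for spike-like events."""
    voltage = np.asarray(voltage, dtype=float)
    events = []
    w = len(TEMPLATE)
    i = 0
    while i < len(voltage) - w:
        seg = voltage[i:i + w] - voltage[i:i + w].mean()
        norm = np.linalg.norm(seg)
        if norm > 0:
            similarity = float(np.dot(seg / norm, TEMPLATE))   # shape match, -1..1
            if similarity > match_thresh:
                height = float(seg.min())                      # trough depth ("how tall")
                width = int((seg < 0.5 * seg.min()).sum())     # samples below half-trough ("how wide")
                events.append((i, height, width))
                i += w                                         # crude refractory skip
                continue
        i += 1
    return events

# Usage: spikes = detect_spikes(raw_channel_voltage)
```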

    Spectrum: How much data is coming off the device these days?  

    O’Doherty: To address this in brain-machine interface terms, we are detecting spikes within a 25-millisecond window, or “bin.” So the vectors of information that we use in our algorithms for closed-loop control are vectors of spike counts: 1,024 by 25 milliseconds. We count how many spikes occur per channel and that’s what we send out. We only need about four bits per bin, so that’s four bits times forty bins per second times 1,024 channels, or about 20 kilobytes each second. 

    That degree of compression is made possible by the fact that we’re spike detecting with our custom algorithm on the chip. The maximum bandwidth would be 1,024 channels times 20,000 samples per second, which is a pretty big number. That’s if we could send everything. But the compressed version is just the number of spike events that occur—zero, one, two, three, four, whatever—times 1,024 channels.

    For our application, which is controlling our communications prosthesis, this data compression is a good way to go—and we still have usable signals for closed-loop control.
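Plugging O'Doherty's figures into a quick back-of-the-envelope calculation confirms the compressed rate he quotes; the uncompressed number is left in samples per second because he doesn't specify a raw sample bit depth.

```python
# Back-of-the-envelope data rates from the figures in the interview.
CHANNELS = 1024
BIN_MS = 25                      # spike-count bin width
BITS_PER_BIN = 4                 # per channel, per bin

bins_per_second = 1000 // BIN_MS                       # 40 bins/s
compressed_bits_per_s = BITS_PER_BIN * bins_per_second * CHANNELS
print(compressed_bits_per_s / 8 / 1000)                # 20.48, i.e. "about 20 kilobytes each second"

# Uncompressed: every channel sampled at 20 kHz (bit depth not given).
raw_samples_per_s = CHANNELS * 20_000
print(raw_samples_per_s)                               # 20,480,000 samples/s: "a pretty big number"
```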

    Spectrum: When you say closed-loop control, what does that mean in this context?  

    O’Doherty: Most machine learning is open-loop. Say you have an image and you analyze it with a model and then produce some results, like detecting the faces in a photograph. You have some inference you want to do, but how quickly you do it doesn’t generally matter. But here the user is in the loop—the user is thinking about moving and the decoder is, in real time, decoding those movement intentions, and then taking some action. It has to act very quickly because if it’s too slow, it doesn’t matter. If you throw a ball to me and it takes my BMI five seconds to infer that I want to move my arm forward—that’s too late. I’ll miss the ball.

    So the user changes what they’re doing based on visual feedback about what the decoder does: That’s what I mean by closed loop. The user makes a motor intent; it’s decoded by the Neuralink device; the intended motion is enacted in the world by physically doing something with a cursor or a robot arm; the user sees the result of that action; and that feedback influences what motor intent they produce next. I think the closest analogy outside of BMI is the use of a virtual reality headset—if there’s a big lag between what you do and what you see on your headset, it’s very disorienting, because it breaks that closed-loop system.
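A toy simulation makes that loop concrete: the user re-aims based on what they see, a stand-in decoder turns intent into a noisy velocity estimate, and the cursor update closes the loop. Every number here is a placeholder for illustration, not a Neuralink parameter.

```python
# Toy closed-loop simulation: intent -> decode -> move cursor -> visual feedback -> next intent.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.8, -0.3])
cursor = np.array([0.0, 0.0])
dt = 0.025                                   # one update per 25 ms bin

def decode(intent):
    """Stand-in decoder: noisy estimate of the intended velocity."""
    return intent + rng.normal(scale=0.05, size=2)

for step in range(200):
    error = target - cursor                  # feedback: the user sees cursor vs. target
    intent = 2.0 * error                     # user re-aims based on what they see
    velocity = decode(intent)                # decoder output for this bin
    cursor = cursor + velocity * dt          # action rendered on screen
    if np.linalg.norm(target - cursor) < 0.05:
        print(f"target acquired after {(step + 1) * dt * 1000:.0f} ms")
        break
```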


    What He’s Working on Right Now

    Spectrum: What has to happen to get from where you are right now to best-in-world? 

    O’Doherty: Step one is to find the sources of latency and eliminate all of them. We want to have low latency throughout the system. That includes detecting spikes; that includes processing them on the implant; that includes the radio that has to transmit them—there’s all kinds of packetization details with Bluetooth that can add latency. And that includes the receiving side, where you do some processing in your model inference step, and that even includes drawing pixels on the screen for the cursor that you’re controlling. Any small amount of lag that you have there adds delay and that affects closed-loop control.
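Those stages add up quickly. The budget below uses placeholder values purely to show how a handful of small delays can eat into a 25-millisecond update bin; they are not measured Neuralink figures.

```python
# Hypothetical end-to-end latency budget (placeholder values, not measurements).
budget_ms = {
    "spike detection on implant": 1.0,
    "on-implant processing": 2.0,
    "Bluetooth packetization and radio": 15.0,
    "model inference on the receiving side": 5.0,
    "drawing the cursor (display refresh)": 8.0,
}
total = sum(budget_ms.values())
print(f"end-to-end lag ~{total:.0f} ms vs. a 25 ms update bin")
```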

    Spectrum: OK, so let’s imagine all latency has been eliminated. What next? 

    O’Doherty: Step two is the decoder itself, and the model that it uses. There’s great flexibility in terms of the model—it could be very simple, very complex, very nonlinear, or very deep in terms of deep learning—how many layers your entire network has. But we have particular constraints. We need our decoder model to work fast, so we can’t use a sophisticated decoder that’s very accurate but takes too long to be useful. We’re also potentially interested in running the decoder on the implant itself, and that requires both low memory usage, so we don’t have to store a lot of parameters in a very constrained environment, and also compute efficiency so we don’t use a lot of clock cycles. But within that space, there’s some clever things we can do in terms of mapping neural spikes to movement. There are linear models that are very simple and nonlinear models that give us more flexibility in capturing the richness of neural dynamics. We want to find the right sweet spot there.
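As a concrete instance of the simple end of that spectrum, here is a minimal linear decoder: ridge regression from a 1,024-channel vector of binned spike counts to a 2D cursor velocity. The data are synthetic and the model is illustrative only, not the decoder Neuralink uses.

```python
# Minimal linear decoder sketch: ridge regression from binned spike counts
# to 2D cursor velocity, fit on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
CHANNELS, BINS = 1024, 2000

true_weights = rng.normal(scale=0.1, size=(CHANNELS, 2))
spike_counts = rng.poisson(lam=1.0, size=(BINS, CHANNELS)).astype(float)
velocity = spike_counts @ true_weights + rng.normal(scale=0.5, size=(BINS, 2))

# Closed-form ridge solution: W = (X^T X + lambda I)^-1 X^T Y
ridge_lambda = 10.0
X, Y = spike_counts, velocity
W = np.linalg.solve(X.T @ X + ridge_lambda * np.eye(CHANNELS), X.T @ Y)

new_bin = rng.poisson(lam=1.0, size=CHANNELS).astype(float)
print("decoded velocity for one 25 ms bin:", new_bin @ W)
```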

    Other factors include the speed at which you can calibrate the decoder to the user. If we have to spend a long time training the decoder, that’s not a great user experience. We want something that can come online really quickly and give the subject a lot of time to practice with the device.

    We’re also focusing on models that are robust. So from day one to day two to day three, we don’t want to have to recalibrate or re-tune the decoder. We want one that works on day one and that works reliably for a long time. Eventually, we want decoders that calibrate themselves, even without the user thinking about it. So the user is just going about their day doing things that cause the model to stay calibrated. 

    Spectrum: Are there any decoder tricks or hacks you’ve figured out that you can tell me about?  

    O’Doherty: One thing we find particularly helpful is decoding click intention. When a BMI user moves a cursor to a target, they typically need to dwell on that target for a certain amount of time, and that is considered a click. The user dwelled for 200 milliseconds, so they selected it. Which is fine, but it adds delay because the user has to wait that amount of time for the selection to happen. But if we decode click intention directly, that lets the user make selections that much faster. 

    And this is something that we’re working on—we don’t have a result yet. But we can potentially look into the future. Imagine you’re making a movement with the brain-controlled cursor, and I know where you are now… but maybe I also know where you’re going to want to go in a second. If I know that, I can do some tricks, I can just teleport you there and get you there faster.

    And honestly, practice is a component. These neuroprosthetic skills have to be learned by the user, just like learning to type or any other skill. We’ve seen this with non-human primates, and I’ve heard it’s also true of human participants in BrainGate trials. So we want a decoder that doesn’t pose too much of a learning burden. 

    Beyond that, I can speculate on cool stuff that could be done. For example, you type faster with two fingers than one finger, or text faster with two thumbs versus one pointer finger. So imagine decoding movement intention for two thumbs to control your brain-controlled keyboard and mouse. That could potentially be a way to boost performance. 

    Spectrum: What is the current world record for BMI rate? 

    O’Doherty: Krishna Shenoy of Stanford has been keeping track of this in some tables of BCI performance, which include the paper that recently came out from his group. That paper set the record with a maximum bit rate of 6.18 bits per second with human participants. For non-human primates, the record is 6.49 bits per second.
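The performance tables he mentions typically report an "achieved bitrate" along the lines below. This is one common definition in the BCI literature; the exact convention behind the 6.18 bits-per-second figure may differ.

```python
# One common way BCI "achieved bitrate" is computed in the literature
# (illustrative; conventions vary between papers).
from math import log2

def achieved_bitrate(num_targets, correct, incorrect, seconds):
    """Bits per second: credit net correct selections, penalize wrong ones."""
    net = max(correct - incorrect, 0)
    return net * log2(num_targets - 1) / seconds

# Hypothetical example: 60 selections among 8 targets in 30 s, 57 correct, 3 wrong.
print(achieved_bitrate(num_targets=8, correct=57, incorrect=3, seconds=30.0))  # ~5.05 bits/s
```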

    Spectrum: And can you prove best-in-world BMI with non-human primates, or do you need to get into humans for that?

    O’Doherty: That’s a good question. Non-human primates can’t talk or read English, so to some extent we have to make inferences. With a human participant you might say, here’s a sentence we’d like you to copy, please type it as best you can. And then we can look at performance there. For the monkey, we can create a string of sequences and ask them to do it quickly and compute performance rates that way. Monkeys are motivated and they’ll do those tasks. So I don’t see any reason, in principle, why one is superior to the other for this. For linguistic and semantic tasks like decoding speech or decoding text directly from your brain we’ll have to prototype in humans, of course. But until we get to that point, and even after that, non-human primates and other animal models are really important for proving out the technology.


    What the Limits Are, Where the Ceiling Is

    Spectrum: You said earlier that your team will either achieve a new world record or find out the reason why you can’t. Are there reasons why you think it might not work? 

    O’Doherty: The 2D cursor control is not a very high-dimension task. There are probably limits that have to do with intention and speed. Think about how long it takes to move a cursor around and hit targets: It’s the time it takes the user to go from point A to point B, and the time it takes to select when they’re at point B. Also if they make a mistake and click the wrong button, that’s really bad. So they have to go faster between A and B, they have to click there more reliably, and they can’t make mistakes.

    At some point, we’re going to hit a limit, because the brain can’t keep up. If the cursor is going too fast, the user won’t even see it moving. I think that’s where the limits will come from—not the neural interfacing, but what it means to move a cursor around. So then we’ll have to think about other interesting ways to interface with the brain to get beyond that. There are other ways of communicating that might be better—maybe it will involve ten-finger typing. I think it’s an open question where that ceiling is.

    Spectrum: Both the games that the monkey played were basically just cursor control: finding targets and using a cursor to move the paddle in Pong. Can you imagine any tests that would go beyond that for non-human primates?

    O’Doherty: Non-human primates can learn other more complicated tasks. The training can be lengthy, because we can’t tell them what to do; we have to show them and take small steps toward more complicated things. To pick a game out of a hat: Now we know that monkeys can play Pong, but can they play Fruit Ninja? There’s a training burden, but I think it’s within their capability.

    Spectrum: Is there anything else you want to emphasize about the technology, the work you’re doing, or how you’re doing it?

    O’Doherty: I first started working on BMI in an academic environment. The concerns that we have at Neuralink are different from the concerns involved with making a BMI for an academic demonstration. We’re really interested in the product, the experience of the user, the robustness, and having this device be useful across a long period of time. And those priorities necessarily lead to slightly different optimizations than I think we would choose if we were doing this for a one-off demonstration. We really enjoyed the Pong demo, but we’re not here to make Pong demos. That’s just a teaser for what will be possible when we bring our product to market.


Bionic muscles that are stronger, faster, and more efficient

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/biomedical/bionics/nanotube-bionic-muscles-are-10-times-stronger

Artificial muscles, once a tangle of elaborate servomotors and hydraulic and pneumatic actuators, are now a thing of shape-memory alloys and hair-thin carbon nanotube (CNT) fibers. Bionics are, in brief, getting smaller—though perhaps not simpler. 

Electrochemical CNT muscles are also energy efficient, and they provide larger muscle strokes. Recently, a group of researchers from the U.S., Australia, South Korea and China, working with polymer-coated CNT fibers twisted into yarn, demonstrated how these muscles can be made faster, more powerful and more energy efficient.

Electrochemically driven CNT muscles actuate when a voltage is applied between the muscle fiber and a counter-electrode, driving ions between the surrounding electrolyte and the muscle. Generally speaking, the muscle either contracts or expands until the potential crosses the potential of zero charge, after which the motion reverses direction. In other words, a bipolar muscle stroke ensues. This bipolar movement, however, results in a smaller muscle stroke, reducing the muscle’s efficiency. 

The research team devised a way to circumvent this limitation. “When we coated the internal surface of the yarns with about a nanometer thickness of special polymers, we could shift the potential of zero charge of the muscle to outside the [stability window of the electrolyte, a voltage range beyond which it breaks down],” says Ray Baughman, director of the Alan G. MacDiarmid NanoTech Institute at the University of Texas at Dallas and one of the authors of the paper. 

These polymers are ionically conducting materials with either positively or negatively charged chemical groups. In other words, they can accept either positive ions (cations) or negative ions (anions). With the potential of zero charge outside the electrolyte’s stability window, only one kind of ion (either cations or anions) infiltrates the muscle, and the muscle actuates in a unipolar direction. 

In the lab, the researchers used a CNT yarn muscle with a non-actuating counter-electrode to demonstrate their concept. But Baughman says that this doesn’t have to be the case. They found that by using two different types of their polymer-coated carbon nanotube yarns—one with positive substituents and the other with negative—they could create a dual-electrode unipolar muscle. “You can use the mechanical work being done by each muscle [additively]… [by putting] an unlimited number of these muscles together.”

The team was also able to make a dual-electrode CNT yarn muscle with a solid-state electrolyte, eliminating the need for a liquid electrolyte bath. “These dual electrode, unipolar muscles were woven to make actuating textiles that could be used for morphing clothing,” said Zhong Wang, a doctoral student and co-author, in the press release.

The group’s electrochemical unipolar muscles generate an average mechanical power output that is 10 times the average capability of human muscles, and about 2.2 times the weight-normalized power capability of a turbocharged V-8 diesel engine.

As such, the muscles have a wide range of potential applications—including robotics and adaptable clothing. On the robotics side, Baughman says conventional motors can be heavy and difficult to coordinate in a device with broad freedom of movement. By contrast, the artificial muscles could power electric robotic exoskeletons, which could enable a person to work in a warehouse and move heavy items around with ease. 

Clothing that adjusts for comfort is another application, allowing wearers to change the porosity of textiles depending on the weather. Medical implants, like a heart-assist apparatus, could also use compact and lightweight artificial muscles, as could prosthetics. “We are [currently]… writing a proposal for doing studies that will actually involve patients,” Baughman adds. 

Before real-world applications become possible, the challenge is to produce cost-effective, high-quality carbon nanotube yarn at scale. Baughman and his team also hope to adapt the unipolar CNT muscles to make more powerful mechanical energy harvesters.

Brain Implants and Wearables Let Paralyzed People Move Again

Post Syndicated from Chad Bouton original https://spectrum.ieee.org/biomedical/bionics/brain-implants-and-wearables-let-paralyzed-people-move-again

In 2015, a group of neuroscientists and engineers assembled to watch a man play the video game Guitar Hero. He held the simplified guitar interface gingerly, using the fingers of his right hand to press down on the fret buttons and his left hand to hit the strum bar. What made this mundane bit of game play so extraordinary was the fact that the man had been paralyzed from the chest down for more than three years, without any use of his hands. Every time he moved his fingers to play a note, he was playing a song of restored autonomy.

His movements didn’t rely on the damaged spinal cord inside his body. Instead, he used a technology that we call a neural bypass to turn his intentions into actions. First, a brain implant picked up neural signals in his motor cortex, which were then rerouted to a computer running machine-learning algorithms that deciphered those signals; finally, electrodes wrapped around his forearm conveyed the instructions to his muscles. He used, essentially, a type of artificial nervous system.

We did that research at the Battelle Memorial Institute in Columbus, Ohio. I’ve since moved my lab to the Institute of Bioelectronic Medicine at the Feinstein Institutes for Medical Research, in Manhasset, N.Y. Bioelectronic medicine is a relatively new field, in which we use devices to read and modulate the electrical activity within the body’s nervous system, pioneering new treatments for patients. My group’s particular quest is to crack the neural codes related to movement and sensation so we can develop new ways to treat the millions of people around the world who are living with paralysis—5.4 million people in the United States alone. To do this we first need to understand how electrical signals from neurons in the brain relate to actions by the body; then we need to “speak” the language correctly and modulate the appropriate neural pathways to restore movement and the sense of touch. After working on this problem for more than 20 years, I feel that we’ve just begun to understand some key parts of this mysterious code.

My team, which includes electrical engineer Nikunj Bhagat, neuroscientist Santosh Chandrasekaran, and clinical manager Richard Ramdeo, is using that information to build two different kinds of synthetic nervous systems. One approach uses brain implants for high-fidelity control of paralyzed limbs. The other employs noninvasive wearable technology that provides less precise control, but has the benefit of not requiring brain surgery. That wearable technology could also be rolled out to patients relatively soon.

Ian Burkhart, the participant in the Guitar Hero experiment, was paralyzed in 2010 when he dove into an ocean wave and was pushed headfirst into a sandbar. The impact fractured several vertebrae in his neck and damaged his spinal cord, leaving him paralyzed from the middle of his chest down. His injury blocks electrical signals generated by his brain from traveling down his nerves to trigger actions by his muscles. During his participation in our study, technology replaced that lost function. His triumphs—which also included swiping a credit card and pouring water from a bottle into a glass—were among the first times a paralyzed person had successfully controlled his own muscles using a brain implant. And they pointed to two ways forward for our research.

The system Burkhart used was experimental, and when the study ended, so did his new autonomy. We set out to change that. In one thrust of our research, we’re developing noninvasive wearable technology, which doesn’t require a brain implant and could therefore be adopted by the paralyzed community fairly quickly. As I’ll describe later in this article, tetraplegic people are already using this system to reach out and grasp a variety of objects. We are working to commercialize this noninvasive technology and hope to gain clearance from the U.S. Food and Drug Administration within the next year. That’s our short-term goal.

We’re also working toward a long-term vision of a bidirectional neural bypass, which will use brain implants to pick up the signals close to the source and to return feedback from sensors we’ll place on the limb. We hope this two-way system will restore both motion and sensation, and we’ve embarked on a clinical trial to test this approach. We want people like Burkhart to feel the guitar as they make music with their paralyzed hands.

Paralysis used to be considered a permanent condition. But in the past two decades, there’s been remarkable progress in reading neural signals from the brain and using electrical stimulation to power paralyzed muscles.

In the early 2000s, the BrainGate consortium began groundbreaking work with brain implants that picked up signals from the motor region of the brain, using those signals to control various machines. I had the privilege of working with the consortium during the early years and developed machine-learning algorithms to decipher the neural code. In 2007, those algorithms helped a woman who was paralyzed due to a stroke drive a wheelchair with her thoughts. By 2012, the team had enabled a paralyzed woman to use a robotic arm to pick up a bottle. Meanwhile, other researchers were using implanted electrodes to stimulate the spinal cord, enabling people with paralyzed legs to stand up and even walk.

My research group has continued to tackle both sides of the problem: reading the signals from the brain as well as stimulating the muscles, with a focus on the hands. Around the time that I was working with the BrainGate team, I remember seeing a survey that asked people with spinal cord injuries about their top priorities. Tetraplegics—that is, people with paralysis of all four limbs—responded that their highest priority was regaining function in their arms and hands.

Robotics has partially filled that need. One commercially available robotic arm can be operated with wheelchair controls, and studies have explored controlling robotic arms through brain implants or scalp electrodes. But some people still long to use their own arms. When Burkhart spoke to the press in 2016, he said that he’d rather not have a robotic arm mounted on his wheelchair, because he felt it would draw too much attention. Unobtrusive technology to control his own arm would allow him “to function almost as a normal member of society,” he said, “and not be treated as a cyborg.”

Restoring movement in the hands is a daunting challenge. The human hand has more than 20 degrees of freedom, or ways in which it can move and rotate—that’s many more than the leg. That means there are many more muscles to stimulate, which creates a highly complex control-systems problem. And we don’t yet completely understand how all of the hand’s intricate movements are encoded in the brain. Despite these challenges, my group set out to give tetraplegics back their hands.

Burkhart’s implant was in his brain’s motor cortex, in a region that controls hand movements. Researchers have extensively mapped the motor cortex, so there’s plenty of information about how general neuronal activity there correlates with movements of the hand as a whole, and each finger individually. But the amount of data coming off the implant’s 96 electrodes was formidable: Each one measured activity 30,000 times per second. In this torrent of data, we had to find the discrete signals that meant “flex the thumb” or “extend the index finger.”

To decode the signals, we used a combination of artificial intelligence and human perseverance. Our stalwart volunteer attended up to three sessions weekly for 15 weeks to train the system. In each session, Burkhart would watch an animated hand on a computer screen move and flex its fingers, and he’d imagine making the same movements while the implant recorded his neurons’ activity. Over time, a machine-learning algorithm figured out which pattern of activity corresponded to the flexing of a thumb, the extension of an index finger, and so on.
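In spirit, that calibration works like the sketch below: cued, imagined movements supply the labels, binned activity across the 96 channels supplies the features, and a classifier learns which pattern means which movement. The data are synthetic and the nearest-centroid model is a stand-in for the more sophisticated machine-learning algorithms the team actually used.

```python
# Sketch of cue-based calibration: binned activity from 96 channels is labeled
# by the movement the participant was asked to imagine, and a simple
# nearest-centroid classifier learns the patterns. Synthetic data only.
import numpy as np

rng = np.random.default_rng(2)
CHANNELS = 96
MOVEMENTS = ["flex thumb", "extend index finger", "close hand"]

# Fake training data: each movement has its own mean activity pattern.
prototypes = rng.normal(size=(len(MOVEMENTS), CHANNELS))
X = np.vstack([p + rng.normal(scale=0.8, size=(200, CHANNELS)) for p in prototypes])
y = np.repeat(np.arange(len(MOVEMENTS)), 200)

centroids = np.array([X[y == k].mean(axis=0) for k in range(len(MOVEMENTS))])

def classify(features):
    dists = np.linalg.norm(centroids - features, axis=1)
    return MOVEMENTS[int(np.argmin(dists))]

print(classify(prototypes[1] + rng.normal(scale=0.8, size=CHANNELS)))  # likely "extend index finger"
```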

Once our neural-bypass system understood the signals, it could generate a pattern of electrical pulses for the muscles of Burkhart’s forearm, in theory mimicking the pulses that the brain would send down an undamaged spinal cord and through the nerves. But in reality, translating Burkhart’s intentions to muscle movements required another intense round of training and calibration. We spent countless hours stimulating different sets of the 130 electrodes wrapped around his forearm to determine how to control the muscles of his wrist, hand, and each finger. But we couldn’t duplicate all of the movements the hand can make, and we never quite got control of the pinkie! We knew we had to develop something better.

To make a more practical and convenient system, we decided to develop a version that is completely noninvasive, which we call GlidePath. We recruited volunteers who have spinal cord injuries but still have some mobility in their shoulders. We placed a proprietary mix of inertial and biometric sensors on the volunteers’ arms, and asked them to imagine reaching for different objects. The data from the sensors fed into a machine-learning algorithm, enabling us to infer the volunteers’ grasping intentions. Flexible electrodes on their forearms then stimulated their muscles in a particular sequence. In one session, volunteer Casey Ellin used this wearable bypass to pick up a granola bar from a table and bring it to his mouth to take a bite. We published these results in 2020 in the journal Bioelectronic Medicine.

My team is working to integrate the sensors and the stimulators into lightweight and inconspicuous wearables; we’re also developing an app that will be paired with the wearable, so that clinicians can check and adjust the stimulation settings. This setup will allow for remote rehabilitation sessions, because the data from the app will be uploaded to the cloud.

To speed up the process of calibrating the stimulation patterns, we’re building a database of how the patterns map to hand movements, with the help of both able-bodied and paralyzed volunteers. While each person responds differently to stimulation, there are enough similarities to train our system. It’s analogous to Amazon’s Alexa voice assistant, which is trained on thousands of voices and comes out of the box ready to go—but which over time further refines its understanding of its specific users’ speech patterns. Our wearables will likewise be ready to go immediately, offering basic functions like opening and closing the hand. But they’ll continue to learn about their users’ intentions over time, helping with the movements that are most important to each user.

We think this technology can help people with spinal cord injuries as well as people recovering from strokes, and we’re collaborating with Good Shepherd Rehabilitation Hospital and the Barrow Neurological Institute to test our technology. Stroke patients commonly receive neuromuscular electrical stimulation, to assist with voluntary movements and help recover motor function. There’s considerable evidence that such rehab works better when a patient actively tries to make a movement while electrodes stimulate the proper muscles; that connected effort by brain and muscles has been shown to increase “plasticity,” or the ability of the nervous system to adapt to damage. Our system will ensure that the patient is fully engaged, as the stimulation will be triggered by the patient’s intention. We plan to collect data over time, and we hope to see patients eventually regain some function even when the technology is turned off.

As exciting as the wearable applications are, today’s noninvasive technology doesn’t readily control complex finger movements, at least initially. We don’t expect the GlidePath technology to immediately enable people to play Guitar Hero, much less a real guitar. So we’ve continued to work on a neural bypass that involves brain implants.

When Burkhart used the earlier version of the neural bypass, he told us that it offered a huge step toward independence. But there were a lot of practical things we hadn’t considered. He told us, “It is strange to not feel the object I’m holding.” Daily tasks like buttoning a shirt require such sensory feedback. We decided then to work on a two-way neural bypass, which conveyed movement commands from the brain to the hand and sent sensory feedback from the hand to the brain, skipping over the damaged spinal cord in both directions.

To give people sensation from their paralyzed hands, we knew that we’d need both finely tuned sensors on the hand and an implant in the sensory cortex region of the brain. For the sensors, we started by thinking about how human skin sends feedback to the brain. When you pick something up—say, a disposable cup filled with coffee—the pressure compresses the underlying layers of skin. Your skin moves, stretches, and deforms as you lift the cup. The thin-film sensors we developed can detect the pressure of the cup against the skin, as well as the shear (transverse) force exerted on the skin as you lift the cup and gravity pulls it down. This delicate feedback is crucial, because there’s a very narrow range of appropriate movement in that circumstance; if you squeeze the cup too tightly, you’ll end up with hot coffee all over you.

Each of our sensors has different zones that detect the slightest pressure or shear force. By aggregating the measurements, our system determines exactly how the skin is bending or stretching. The processor will send that information to the implants in the sensory cortex, enabling a user to feel the cup in their hand and adjust their grip as needed.
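That aggregation step can be pictured with a toy example: summing zone readings estimates the normal (grip) force, while left-right and up-down imbalances estimate shear. The 2-by-2 zone layout and thresholds below are assumptions for illustration, not the actual thin-film sensor design.

```python
# Simplified aggregation of multi-zone skin-sensor readings into overall
# normal (grip) and shear (lift) force estimates. Layout and constants are
# assumptions for illustration only.
import numpy as np

zone_readings = np.array([[0.42, 0.38],     # raw zone outputs (arbitrary units)
                          [0.55, 0.51]])

normal_force = zone_readings.sum()                                   # total pressure against the skin
shear_x = zone_readings[:, 1].sum() - zone_readings[:, 0].sum()      # left/right imbalance
shear_y = zone_readings[1, :].sum() - zone_readings[0, :].sum()      # up/down imbalance (gravity on the cup)

print(f"grip ~{normal_force:.2f}, shear ~({shear_x:+.2f}, {shear_y:+.2f})")
if abs(shear_y) > 0.2 * normal_force:
    print("object slipping: increase grip")   # the kind of adjustment sensory feedback enables
```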

Figuring out exactly where to stimulate the sensory cortex was another challenge. The part of the sensory cortex that receives input from the hand hasn’t been mapped exhaustively via electrodes, in part because the regions dealing with the fingertips are tucked into a groove in the brain called the central sulcus. To fill in this blank spot on the map, we worked with our neurosurgeon colleagues Ashesh Mehta and Stephan Bickel, along with hospitalized epilepsy patients who underwent procedures to map their seizure activity. Depth electrodes were used to stimulate areas within that groove, and patients were asked where they felt sensation. We were able to elicit sensation in very specific parts of the hand, including the crucial fingertips.

That knowledge prepared us for the clinical trial that marks the next step in our research. We’re currently enrolling volunteers with tetraplegia for the study, in which the neurosurgeons on our team will implant three arrays of electrodes in the sensory cortex and two in the motor cortex. Stimulating the sensory cortex will likely bring new challenges for the decoding algorithms that interpret the neural signals in the motor cortex, which is right next door to the sensory cortex—there will certainly be some changes to the electrical signals we pick up, and we’ll have to learn to compensate for them.

In the study, we’ve added one other twist. In addition to stimulating the forearm muscles and the sensory cortex, we’re also going to stimulate the spinal cord. Our reasoning is as follows: In the spinal cord, there are 10 million neurons in complex networks. Earlier research has shown that these neurons have some ability to temporarily direct the body’s movements even in the absence of commands from the brain. We’ll have our volunteers concentrate on an intended movement, to physically make the motion with the help of electrodes on the forearm, and receive feedback from the sensors on the hand. If we stimulate the spinal cord while this process is going on, we believe we can promote plasticity within its networks, strengthening connections between neurons within the spinal cord that are involved in the hand’s movements. It’s possible that we’ll achieve a restorative effect that lasts beyond the duration of the study: Our dream is to give people with damaged spinal cords their hands back.

One day, we hope that brain implants for people with paralysis will be clinically proven and approved for use, enabling them to go well beyond playing Guitar Hero. We’d like to see them making complex movements with their hands, such as tying their shoes, typing on a keyboard, and playing scales on a piano. We aim to let these people reach out to clasp hands with their loved ones and feel their touch in return. We want to restore movement, sensation, and ultimately their independence.

This article appears in the February 2021 print issue as “Bypassing Paralysis.”

About the Author

Chad Bouton is vice president of advanced engineering at the Feinstein Institutes for Medical Research at Northwell Health, a health care network in New York.

Quadriplegic Pilots Race for Gold in Cybathlon Brain Race

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/bionics/quadriplegic-pilots-race-for-gold-in-cybathlon-brain-race

The competitors were neck-and-neck going into the final turns of the last heat, and in the end, Italy beat Thailand by four seconds. But unlike the Olympic games, none of the competitors in this race could move their bodies. Instead, they competed using only their thoughts. 

This is Olympic racing, cyborg-style. Using brain-computer interface (BCI) systems, the competitors—all of whom are paralyzed from the neck down—navigated computer avatars through a racetrack using thought-controlled commands. 

The race was part of Cybathlon 2020: The second ever cyborg Olympics, in which people with paralysis or amputated limbs turn themselves into cyborg athletes using robotics and algorithms. Proud competitors raced with their exoskeletons, powered wheelchairs and prosthetic limbs through obstacle courses as their tech teams cheered them on.

New Stent-like Electrode Allows Humans to Operate Computers With Their Thoughts

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/bionics/new-stent-like-electrode-allows-humans-to-operate-computers-with-their-thoughts

Two Australian men with neuromuscular disorders regained some personal independence after researchers implanted stent-like electrodes in their brains, allowing them to operate computers using their thoughts.

This is the first time such a device, dubbed a “stentrode,” has been implanted in humans, according to its inventors. The system also makes real-world use of brain-computer interfaces (BCIs)—devices that enable direct communication between the brain and a computer—more feasible.

The feat was described today in the Journal of Neurointerventional Surgery. “This paper represents the first fully implantable, commercial BCI system that patients can take home and use,” says Tom Oxley, founder of Melbourne-based Synchron, which developed the device.

Brain Implant Bypasses Eyes To Help Blind People See

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/bionics/progress-toward-a-brain-implant-for-the-blind

Early humans were hunters, and their vision systems evolved to support the chase. When gazing out at an unchanging landscape, their brains didn’t get excited by the incoming information. But if a gazelle leapt from the grass, their visual cortices lit up. 

That neural emphasis on movement may be the key to restoring sight to blind people. Daniel Yoshor, the new chair of neurosurgery at the University of Pennsylvania’s Perelman School of Medicine, is taking cues from human evolution as he devises ways to use a brain implant to stimulate the visual cortex. “We’re capitalizing on that inherent bias the brain has in perception,” he tells IEEE Spectrum.

He recently described his experiments with “dynamic current steering” at the Bioelectronic Medicine Summit, and also published the research in the journal Cell in May. By tracing shapes with electricity onto the brain’s surface, his team is producing a relatively clear and functional kind of bionic vision. 

Yoshor is involved in an early feasibility study of the Orion implant, developed by Los Angeles-based Second Sight, a company that’s been at the forefront of technology workarounds for people with vision problems.

In 2013, the U.S. Food and Drug Administration approved Second Sight’s retinal implant system, the Argus II, which uses an eyeglass-mounted video camera that sends information to an electrode array in the eye’s retina. Users have reported seeing light and dark, often enough to navigate on a street or find the brightness of a face turned toward them. But it’s far from normal vision, and in May 2019 the company announced that it would suspend production of the Argus II to focus on its next product.

The company has had a hard time over the past year: At the end of March it announced that it was winding down operations, citing the impact of COVID-19 on its ability to secure financing. But in subsequent months it announced a new business strategy, an initial public offering of stock, and finally in September the resumption of clinical trials for its Orion implant.  

The Orion system uses the same type of eyeglass-mounted video camera, but it sends information to an electrode array atop the brain’s visual cortex. In theory, it could help many more people than a retinal implant: The Argus II was approved only for people with an eye disease called retinitis pigmentosa, in which the photoreceptor cells in the retina are damaged but the rest of the visual system remains intact and able to convey signals to the brain. The Orion system, by sending info straight to the brain, could help people with more widespread damage to the eye or optic nerve.

Six patients have received the Orion implant thus far, and each now has an array of 60 electrodes that tries to represent the image transmitted by the camera. But imagine a digital image made up of 60 pixels—you can’t get much resolution. 

Yoshor says his work on dynamic current steering began with “the fact that getting info into the brain with static stimulation just didn’t work that well.” He says that one possibility is that more electrodes would solve the problem, and wonders aloud about what he could do with hundreds of thousands of electrodes in the brain, or even 1 million. “We’re dying to try that, when our engineering catches up with our experimental imagination,” he says. 

Until that kind of hardware is available, Yoshor is focusing on the software that directs the electrodes to send electrical pulses to the neurons. His team has conducted experiments with two blind Second Sight volunteers as well as with sighted people (epilepsy patients who have temporary electrodes in their brains to map their seizures). 

One way to understand dynamic current steering, Yoshor says, is to think of a trick that doctors commonly use to test perception—they trace letter shapes on a patient’s palm. “If you just press a ‘Z’ shape into the hand, it’s very hard to detect what that is,” he says. “But if you draw it, the brain can detect it instantaneously.” Yoshor’s technology does something similar, grounded in well-known information about how a person’s visual field maps to specific areas of their brain. Researchers have constructed this retinotopic map by stimulating specific spots of the visual cortex and asking people where they see a bright spot of light, called a phosphene.

The static form of stimulation that disappointed Yoshor essentially tries to create an image from phosphenes. But, says Yoshor, “when we do that kind of stimulation, it’s hard for patients to combine phosphenes to a visual form. Our brains just don’t work that way, at least with the crude forms of stimulation that we’re currently able to employ.” He believes that phosphenes cannot be used like pixels in a digital image. 

With dynamic current steering, the electrodes stimulate the brain in sequence to trace a shape in the visual field. Yoshor’s early experiments have used letters as a proof of concept: Both blind and sighted people were able to recognize such letters as M, N, U, and W. This system has an additional advantage of being able to stimulate points in between the sparse electrodes, he adds. By gradually shifting the amount of current going to each (imagine electrode A first getting 100 percent while electrode B gets zero percent, then shifting to ratios of 80:20, 50:50, 20:80, 0:100), the system activates neurons in the gaps. “We can program that sequence of stimulation, it’s very easy,” he says. “It goes zipping across the brain.”
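The ratio sweep he describes is easy to picture in code: as the share of current shifts from electrode A to electrode B, the evoked phosphene glides across the gap between them. The current value and the simple position model below are schematic, not the parameters used in the experiments.

```python
# Schematic sketch of dynamic current steering between two electrodes:
# shifting the share of current from A to B sweeps the evoked phosphene
# across the cortical gap between them. Illustrative values only.
total_current_ua = 1000          # hypothetical total stimulation current (microamps)
ratios = [(1.00, 0.00), (0.80, 0.20), (0.50, 0.50), (0.20, 0.80), (0.00, 1.00)]

for step, (share_a, share_b) in enumerate(ratios):
    i_a = share_a * total_current_ua
    i_b = share_b * total_current_ua
    # Crude model: the perceived phosphene sits at the current-weighted midpoint.
    position = share_b            # 0.0 = at electrode A, 1.0 = at electrode B
    print(f"step {step}: A={i_a:4.0f} uA  B={i_b:4.0f} uA  phosphene at {position:.2f} of the A-to-B gap")
```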

Second Sight didn’t respond to requests for comment for this article, so it’s unclear whether the company is interested in integrating Yoshor’s stimulation technique into its technology. 

But Second Sight isn’t the only entity working on a cortical vision prosthetic. One active project is at Monash University in Australia, where a team has been preparing for clinical trials of its Gennaris bionic vision system.

Arthur Lowery, director of the Monash Vision Group and a professor of electrical and computer systems engineering, says that Yoshor’s research seems promising. “The ultimate goal is for the brain to perceive as much information as possible. The use of sequential stimulation to convey different information with the same electrodes is very interesting, for this reason,” he tells IEEE Spectrum in an email. “Of course, it raises other questions about how many electrodes should be simultaneously activated when presenting, say, moving images.”

Yoshor thinks the system will eventually be able to handle complex moving shapes with the aid of today’s advances in computer vision and AI, particularly if there are more electrodes in the brain to represent the images. He imagines a microprocessor that converts whatever image the person encounters in daily life into a pattern of dynamic stimulation.

Perhaps, he speculates, the system could even have different settings for different situations. “There could be a navigation mode that helps people avoid obstacles when they’re walking; another mode for conversation where the prosthetic would rapidly trace the contours of the face,” he says. That’s a far-off goal, but Yoshor says he sees it clearly. 

Electronic Blood Vessels to the Rescue

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/bionics/electronic-bloodvessel

Cardiovascular disease is the number one cause of mortality worldwide. Now a new study reveals that electronic blood vessels might one day use electricity to stimulate healing and deliver gene therapies to help treat such maladies.

One-third of all U.S. deaths are linked to cardiovascular disease, according to the American Heart Association. When replacement blood vessels are needed to treat advanced cases of cardiovascular disease, doctors prefer ones taken from the patient’s own body, but sometimes the patient’s age or condition prevents such a strategy.

Artificial blood vessels are already commercially available for cases in which replacements more than 6 millimeters wide are needed. However, when it comes to smaller artificial blood vessels, so far none have succeeded in clinical settings. That’s because a complex interplay between such vessels and blood flow often triggers inflammatory responses, causing the walls of natural blood vessels to thicken and cut off blood flow, says Xingyu Jiang, a biomedical engineer at the Southern University of Science and Technology in Shenzhen, China. Jiang and his team report having developed a promising new artificial blood vessel that doesn’t cause an inflammatory response. The scientists detailed their findings online on 1 October in the journal Matter.

An Open-Source Bionic Leg

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/the-human-os/biomedical/bionics/opensource-bionicleg

Details on the design and clinical tests of an open-source bionic leg are now freely available online, in the hope that researchers will use them to create and test safe and useful new prosthetics.

Bionic knees, ankles and legs under development worldwide to help patients walk are equipped with electric motors. Getting the most from such powered prosthetics requires safe and reliable control systems that can account for many different types of motion: for example, shifting from striding on level ground to walking up or down ramps or stairs.

However, developing such control systems has proven difficult. “The challenge stems from the fact that these limbs support a person’s body weight,” says Elliott Rouse, a biomedical engineer and director of the neurobionics lab at the University of Michigan, Ann Arbor. “If it makes a mistake, a person can fall and get seriously injured. That’s a really high burden on a control system, in addition to trying to have it help people with activities in their daily life.”

The Tech Behind The Mind Reading Pangolin Dress Could Lead To Wireless—And Batteryless—Exoskeleton Control

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/the-human-os/biomedical/bionics/the-tech-behind-a-mind-reading-dress-could-lead-to-wireless-batteryless-exoskeleton-control

An eye-catching dress that’s part art project, part cutting-edge tech, was presented today at the Ars Electronica Festival in Linz, Austria. The dress showcases an ultra-low energy, high resolution, brain-computer interface that’s so sensitive that if the wearer thinks about moving a single finger, it can identify which one—without the need for an implant.

The dress is the result of a collaboration between researchers at Johannes Kepler University Linz (JKU), developers at medical engineering company G.tec, and fashiontech designer Anouk Wipprecht.  Known as the Pangolin dress, it has 1,024 individual head-mounted electrodes in 64 groups of 16. These detect electrical signals coming from the brain. The data from these sensors is combined, analyzed, and converted into colors displayed by 32 Neopixel LEDs and 32 servo-driven scales, creating a whole-body visualization of neural activity.

The biggest technical advance is in the new custom chip connected to each electrode. Measuring 1.6 millimeters on a side, the single-channel chip integrates an amplifier, an analog-to-digital converter (ADC), and a digital signal processor—and it draws less than 5 microwatts of power. Because it uses so little power, the chip can be powered by a nearby basestation in the manner of a contactless RFID chip, and return data wirelessly as well (although for the Pangolin dress, wired data connections were used to allow the creators to focus on developing and debugging other system elements while COVID-19 restrictions limited access to facilities). Powering and communicating with the chip wirelessly would eliminate the need to tether a patient to a test system, as a wired setup requires, and would even eliminate the bulk and weight of the batteries found in conventional wireless BCI systems. The biggest challenge was fitting into “the power budget for the sensor electronics,” says Harald Pretl, professor at the Institute for Integrated Circuits at JKU. “We had to create an amplifier design dedicated to this, an ADC dedicated to this, and also, based on ultrawideband, our own transmission protocol.” Another challenge was dealing with “fading and shadowing effects around the head,” adds Pretl.

But although the chip design was a custom build, it was deliberately made without recourse to exotic fabrication techniques: “We tried to focus on not using expensive technology there. We are using 180-nanometer [fabrication] technology, but we were able to get higher performance by using some advanced circuit design tricks,” explains JKU’s Thomas Faseth, who is the Pangolin dress’s project lead.

For the Pangolin dress, each group of electrodes and chips was mounted on a six-sided tile, and the wearer’s head—in this case, that of Megi Gin in Austria, who became as much a test participant as model—was covered in tiles, each hosting a custom chip and antenna.

The next step up the technology stack was provided by the Austrian commercial BCI developer and manufacturer G.tec, which brought its experience of analyzing neural signals to bear, integrating and interpreting the data coming from the tiles. For example, when you decide to move a muscle voluntarily, it sets off a localized pattern of activity in your motor cortex that can be detected and identified. "With 1,024 separate channels, you can get single finger resolution. This is something you normally cannot do with surface electrodes; you need implants," says G.tec co-founder Christoph Guger. Normally, says Guger, surface electrode systems have 64 channels, which is only enough to distinguish, for example, whether the movement is intended for the right or left arm.

Because brains vary, identifying such fine-grained detail requires calibrating the system to the individual wearer, using machine learning to recognize the patterns associated with different motions. However, the system doesn't actually need you to be able to move any given muscle—in fact, it learns more easily when the participant simply imagines performing a movement, because imagining a movement typically takes longer than performing it, producing a more sustained signal. This means that patients with paralyzed or missing limbs may be able to avail themselves of the technology, and Guger even speculates about the possibility of BCI-controlled exoskeletons.
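
G.tec's decoding pipeline is proprietary, but the calibration step described above can be sketched generically: extract band-power features from labeled EEG epochs recorded while the wearer imagines different movements, then fit a linear classifier. The sampling rate, frequency band, and choice of classifier below are assumptions, not details of the Pangolin system.

```python
# Minimal per-user calibration sketch: tell imagined movements apart from
# multichannel EEG using band-power features and a linear classifier.
# This is a generic baseline, not G.tec's actual pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250           # assumed sampling rate, Hz
MU_BAND = (8, 13)  # sensorimotor rhythms typically used for motor imagery

def band_power(epoch, fs=FS, band=MU_BAND):
    """Mean mu-band power per channel for one (channels x samples) epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

def calibrate(epochs, labels):
    """epochs: (n_trials, n_channels, n_samples); labels: imagined movement per trial."""
    features = np.array([band_power(e) for e in epochs])
    clf = LinearDiscriminantAnalysis()
    clf.fit(features, labels)
    return clf

def predict(clf, epoch):
    """Classify a new (channels x samples) epoch after calibration."""
    return clf.predict(band_power(epoch)[None, :])[0]
```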

The Pangolin dress determines which of a number of mental states the wearer is in, including stressed, neutral, and meditative, and expresses this through motion and light. The dress itself was created by designer Wipprecht, who previously wrote for IEEE Spectrum about her collaboration with G.tec in designing the Unicorn headset, intended to help therapists tune programs for children with ADHD.

The dress combines rigid and fabric elements, with the rigid elements 3D-printed using selective laser sintering in nine interlocking segments. The modular nature of the sensors sparked the idea of taking inspiration from the pangolin's flexible, often lustrous keratin scales, says Wipprecht. For Wipprecht, part of the challenge was that brain signals can change much faster than a servo can change angle, so she decided to focus on reflecting the frequency of change instead: when low-frequency brain waves predominate, the servos and lights move and pulse slowly. "So much of the interaction design in my pieces is about creating an elegant behavior that gives a good sense of what the data is doing. In this case, since we have a lot of data points through all the 1,024 channels in the dress, my interactions can become even more spot-on … A hectic state is reflected in quick jittery movements and white lights, a neutral state is reflected in neutral movements and blue lights, and a meditative state is where the dress will turn purple with smooth, flowing movements," says Wipprecht.
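
The dress's firmware isn't published, but the interaction mapping Wipprecht describes can be sketched as follows: estimate which EEG frequency band currently dominates and translate that into a servo speed and light color. The band edges and the band-to-state mapping here are illustrative assumptions.

```python
# Sketch of the mapping Wipprecht describes: drive servo speed and LED color
# from whichever EEG frequency band currently dominates. Band edges and the
# band-to-state mapping are assumptions, not the dress's actual firmware.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
STATE_FOR_BAND = {"beta": "hectic", "alpha": "neutral", "theta": "meditative"}
BEHAVIOR = {
    "hectic":     {"color": "white",  "servo_speed": 1.0},  # quick, jittery
    "neutral":    {"color": "blue",   "servo_speed": 0.4},
    "meditative": {"color": "purple", "servo_speed": 0.1},  # smooth, flowing
}

def dominant_state(eeg_window, fs=250):
    """eeg_window: (channels x samples). Returns the behavior for the dominant band."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[:, mask].mean()
    state = STATE_FOR_BAND[max(powers, key=powers.get)]
    return state, BEHAVIOR[state]
```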

The Pangolin collaboration began late last year and continued despite the complication of COVID-19 restrictions. Although the JKU and G.tec teams were located in Austria, Wipprecht is in the hard-hit state of Florida. For the Austrian teams this meant severely restricted access to their labs, but they were able to send four sensor modules to Wipprecht as she worked on the dress design. The completed dress only arrived in Austria last month for final testing.

The Pangolin team hopes to create another iteration of the dress, this time using the sensors in a completely wireless mode. Discussions about commercializing the sensor technology are ongoing, but Guger estimates that within about two years it could be an off-the-shelf option for other researchers.

The Quest for a Bionic Breast

Post Syndicated from Edd Gent original https://spectrum.ieee.org/the-human-os/biomedical/bionics/quest-bionic-breast

As many as 100,000 breast cancer patients have one or both breasts removed in mastectomies every year in the United States. This surgery frequently leads to loss of breast sensation, which is thought to contribute to the high rates of sexual dysfunction among breast cancer survivors. But a bionic breast now under development could restore these important tactile sensations.

The project is the brainchild of Stacy Lindau, a gynecologist at the University of Chicago who specializes in the sexual function of women with cancer. She says she's dealt with countless women over the past decade whose sex lives have been dramatically impacted by this loss of sensation.

Upgraded Google Glass Helps Autistic Kids “See” Emotions

Post Syndicated from Nick Haber original https://spectrum.ieee.org/biomedical/bionics/upgraded-google-glass-helps-autistic-kids-see-emotions

Imagine this scene: It’s nearly dinnertime, and little Jimmy is in the kitchen. His mom is rushing to get dinner on the table, and she puts all the silverware in a pile on the counter. Jimmy, who’s on the autism spectrum, wants the silverware to be more orderly, and while his mom is at the stove he carefully begins to put each fork, knife, and spoon back in its slot in the silverware drawer. Suddenly Jimmy hears shouting. His mom is loud; her face looks different. He continues what he’s doing.

Now imagine that Jimmy is wearing a special kind of Google Glass, the augmented-reality headset that Google introduced in 2013. When he looks up at his mom, the head-up display lights up with a green box, which alerts Jimmy that he’s “found a face.” As he focuses on her face, an emoji pops up, which tells Jimmy, “You found an angry face.” He thinks about why his mom might be annoyed. Maybe he should stop what he’s doing with the silverware and ask her.

Our team has been working for six years on this assistive technology for children with autism, which the kids themselves named Superpower Glass. Our system provides behavioral therapy to the children in their homes, where social skills are first learned. It uses the glasses’ outward-facing camera to record the children’s interactions with family members; then our software detects the faces in those videos and interprets their expressions of emotion. Through an app, caregivers can review auto-curated videos of social interactions.

Over the years we’ve refined our prototype and run clinical trials to prove its beneficial effects: We’ve found that its use increases kids’ eye contact and social engagement and also improves their recognition of emotions. Our team at Stanford University has worked with coauthor Dennis Wall’s spinoff company, Cognoa, to earn a “breakthrough therapy” designation for Superpower Glass, which puts the technology on a fast track toward approval by the U.S. Food and Drug Administration (FDA). We aim to get health insurance plans to cover the costs of the technology as an augmented-reality therapy.

When Google Glass first came out as a consumer device, many people didn’t see a need for it. Faced with lackluster reviews and sales, Google stopped making the consumer version in 2015. But when the company returned to the market in 2017 with a second iteration of the device, Glass Enterprise Edition, a variety of industries began to see its potential. Here we’ll tell the story of how we used the technology to give kids with autism a new way to look at the world.

When Jimmy puts on the glasses, he quickly gets accustomed to the head-up display (a prism) in the periphery of his field of view. When Jimmy begins to interact with family members, the glasses send the video data to his caregiver’s smartphone. Our app, enabled by the latest artificial-intelligence (AI) techniques, detects faces and emotions and sends the information back to the glasses. The boundary of the head-up display lights up green whenever a face is detected, and the display then identifies the facial expression via an emoticon, emoji, or written word. The users can also choose to have an audio cue—a voice identifying the emotion—from the bone-conducting speaker within the glasses, which sends sound waves through the skull to the inner ear. The system recognizes seven facial expressions—happiness, anger, surprise, sadness, fear, disgust, and contempt, which we labeled “meh” to be more child friendly. It also recognizes a neutral baseline expression.
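
The Superpower Glass app relies on the team's own trained models and streams video from the glasses, but the feedback loop it implements (detect a face, classify its expression, display a cue) can be sketched with off-the-shelf tools. In this sketch, OpenCV's stock face detector and a placeholder emotion classifier stand in for the real components, and a laptop webcam stands in for the glasses.

```python
# Minimal sketch of the detect-face -> classify-emotion -> cue loop described
# above. The real system runs the team's own models on video streamed from the
# glasses; here a webcam and a placeholder classifier stand in.
import cv2

EMOJI = {"happy": ":)", "angry": ">:(", "neutral": ""}  # cue shown to the wearer

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_img):
    """Placeholder: the real system runs a trained facial-expression model here."""
    return "neutral"

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # "green box" cue
        label = classify_emotion(gray[y:y + h, x:x + w])
        cv2.putText(frame, EMOJI.get(label, label), (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Superpower Glass (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```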

To encourage children to wear Superpower Glass, the app currently offers two games: “Capture the Smile,” in which the child tries to elicit happiness or another emotion in others, and “Guess the Emotion,” in which people act out emotions for the child to name. The app also logs all activity within a session and tags moments of social engagement. That gives Jimmy and his mom the ability to watch together the video of their conflict in the kitchen, which could prompt a discussion of what happened and what they can do differently next time.

The three elements of our Superpower Glass system—face detection, emotion recognition, and in-app review—help autistic children learn as they go. The kids are motivated to seek out social interactions, they learn that faces are interesting, and they realize they can gather valuable information from the expressions on those faces. But the glasses are not meant to be a permanent prosthesis. The kids do 20-minute sessions a few times a week in their own homes, and the entire intervention currently lasts for six weeks. Children are expected to quickly learn how to detect the emotions of their social partners and then, after they’ve gained social confidence, stop using the glasses.

Our system is intended to ameliorate a serious problem: limited access to intensive behavioral therapy. Although there’s some evidence that such therapy can diminish or even eliminate core symptoms associated with autism, kids must start receiving it before the age of 8 to see real benefits. Currently the average age of diagnosis is between 4 and 5, and waitlists for therapy can stretch over 18 months. Part of the reason for the shortage is the shocking 600 percent rise since 1990 in diagnoses of autism in the United States, where about one in 40 kids is now affected; less dramatic surges have occurred in some parts of Asia and Europe.

Because of the increasing imbalance between the number of children requiring care and the number of specialists able to provide therapy, we believe that clinicians must look to solutions that can scale up in a decentralized fashion. Rather than relying on the experts for everything, we think that data capture, monitoring, and therapy—the tools needed to help all these children—must be placed in the hands of the patients and their parents.

Efforts to provide in situ learning aids for autistic children date back to the 1990s, when Rosalind Picard, a professor at MIT, designed a system with a headset and minicomputer that displayed emotional cues. However, the wearable technology of the day was clunky and obtrusive, and the emotion-recognition software was primitive. Today, we have discreet wearables, such as Google Glass, and powerful AI tools that leverage massive amounts of publicly available data about facial expressions and social interactions.

The design of Google Glass was an impressive feat, as the company’s engineers essentially packed a smartphone into a lightweight frame resembling a pair of eyeglasses. But with that form factor comes an interesting challenge for developers: We had to make trade-offs among battery life, video streaming performance, and heat. For example, on-device processing can generate too much heat and automatically trigger a cutback in operations. When we tried running our computer-vision algorithms on the device, that automatic system often reduced the frame rate of the video being captured, which seriously compromised our ability to quickly identify emotions and provide feedback.

Our solution was to pair Glass with a smartphone via Wi-Fi. The glasses capture video, stream the frames to the phone, and deliver feedback to the wearer. The phone does the heavy computer-vision work of face detection and tracking, feature extraction, and facial-expression recognition, and also stores the video data.

But the Glass-to-phone streaming posed its own problem: While the glasses capture video at a decent resolution, we could stream it only at low resolution. We therefore wrote a protocol to make the glasses zoom in on each newly detected face so that the video stream is detailed enough for our vision algorithms.
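
The zoom protocol itself isn't published, but the idea can be sketched: whenever a new face is detected in the low-resolution stream, compute a padded crop around its bounding box in full-sensor coordinates, so the limited streaming bandwidth is spent on face pixels. The resolutions and coordinates below are illustrative, and how the crop is actually pushed to the Glass camera is omitted, since that interface isn't public.

```python
# Sketch of the "zoom in on each newly detected face" idea: pad a face bounding
# box, clamp it to the frame, and map it to full-sensor coordinates so the
# low-resolution stream concentrates on the face. Values are illustrative.
def zoom_region(face_box, stream_size, sensor_size, pad=0.5):
    """face_box: (x, y, w, h) in stream pixels; returns a crop in sensor pixels."""
    x, y, w, h = face_box
    sw, sh = stream_size
    fw, fh = sensor_size
    # Pad the box so the whole head stays in frame, then clamp to the stream.
    cx, cy = x + w / 2, y + h / 2
    half = max(w, h) * (1 + pad) / 2
    x0, y0 = max(0, cx - half), max(0, cy - half)
    x1, y1 = min(sw, cx + half), min(sh, cy + half)
    # Scale from stream resolution up to full sensor resolution.
    sx, sy = fw / sw, fh / sh
    return int(x0 * sx), int(y0 * sy), int((x1 - x0) * sx), int((y1 - y0) * sy)

# Example: a face found at (200, 120, 80, 80) in a 640x360 stream from a
# 1920x1080 sensor -> crop the sensor to roughly that region before streaming.
crop = zoom_region((200, 120, 80, 80), (640, 360), (1920, 1080))
```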

Our computer-vision system originally used off-the-shelf tools. The software pipeline was composed of a face detector, a face tracker, and a facial-feature extractor; it fed data into an emotion classifier trained on both standard data sets and our own data sets. When we started developing our pipeline, it wasn’t yet feasible to run deep-learning algorithms that can handle real-time classification tasks on mobile devices. But the past few years have brought remarkable advances, and we’re now working on an updated version of Superpower Glass with deep-learning tools that can simultaneously track faces and classify emotions.

This update isn’t a simple task. Emotion-recognition software is primarily used in the advertising industry to gauge consumers’ emotional responses to ads. Our software differs in a few key ways. First, it won’t be used in computers but rather in wearables and mobile devices, so we have to keep its memory and processing requirements to a minimum. The wearable form factor also means that video will be captured not by stable webcams but by moving cameras worn by kids. We’ve added image stabilizers to cope with the jumpy video, and the face detector also reinitializes frequently to find faces that suddenly shift position within the scene.

Failure modes are also a serious concern. A commercial emotion-recognition system might claim, for example, a 98 percent accuracy rate; such statistics usually mean that the system works well on most people but consistently fails to recognize the expressions of a small handful of individuals. That situation might be fine for studying the aggregate sentiments of people watching an ad. But in the case of Superpower Glass, the software must interpret a child’s interactions with the same people on a regular basis. If the system consistently fails on two people who happen to be the child’s parents, that child is out of luck.

We’ve developed a number of customizations to address these problems. In our “neutral subtraction” method, the system first keeps a record of a particular person’s neutral-expression face. Then the software classifies that person’s expressions based on the differences it detects between the face he or she currently displays and the recorded neutral estimate. For example, the system might come to learn that just because Grandpa has a furrowed brow, it doesn’t mean he’s always angry. And we’re going further: We’re working on machine-learning techniques that will rapidly personalize the software for each user. Making a human–AI interaction system that adapts robustly, without too much frustration for the user, is a considerable challenge. We’re experimenting with several ways to gamify the calibration process, because we think the Superpower Glass system must have adaptive abilities to be commercially successful.

We realized from the start that the system would be imperfect, and we’ve designed feedback to reflect that reality. The green box face-detection feature was originally intended to mitigate frustration: If the system isn’t tracking a friend’s face, at least the user knows that and isn’t waiting for feedback that will never come. Over time, however, we came to think of the green box as an intervention in itself, as it provides feedback whenever the wearer looks at a face, a behavior that can be noticeably different for children on the autism spectrum.

To evaluate Superpower Glass, we conducted three studies over the past six years. The first one took place in our lab with a very rudimentary prototype, which we used to test how children on the autism spectrum would respond to wearing Google Glass and receiving emotional cues. Next, we built a proper prototype and ran a design trial in which families with autistic kids took the devices home for several weeks. We interacted with these families regularly and made changes to the prototype based on their feedback.

With a refined prototype in hand, we then set out to test the device's efficacy in a rigorous way. We ran a randomized controlled trial in which one group of children received typical at-home behavioral therapy, while a second group received that therapy plus a regimen with Superpower Glass. We used four tests that are commonly deployed in autism research to look for improvement in emotion recognition and broader social skills. As we described in our 2019 paper in JAMA Pediatrics, the intervention group showed significant gains over the control group in one test (the socialization portion of the Vineland Adaptive Behavior Scales).

We also asked parents to tell us what they had noticed. Their observations helped us refine the prototype’s design, as they commented on technical functionality, user frustrations, and new features they’d like to see. One email from the beginning of our at-home design trial stands out. The parent reported an immediate and dramatic improvement: “[Participant] is actually looking at us when he talks through google glasses during a conversation…it’s almost like a switch was turned.… Thank you!!! My son is looking into my face.”

This email was extremely encouraging, but it sounded almost too good to be true. Yet comments about increased eye contact continued throughout our studies, and we documented this anecdotal feedback in a publication about that design study. To this day, we continue to hear similar stories from a small group of “light switch” participants.

We’re confident that the Superpower Glass system works, but to be honest, we don’t really know why. We haven’t been able to determine the primary mechanism of action that leads to increased eye contact, social engagement, and emotion recognition. This unknown informs our current research. Is it the emotion-recognition feedback that most helps the children? Or is our device mainly helping by drawing attention to faces with its green box? Or are we simply providing a platform for increased social interaction within the family? Is the system helping all the kids in the same way, or does it meet the needs of various parts of the population differently? If we can answer such questions, we can design interventions in a more pointed and personalized way.

The startup Cognoa, founded by coauthor Dennis Wall, is now working to turn our Superpower Glass prototype into a clinical therapy that doctors can prescribe. The FDA breakthrough therapy designation for the technology, which we earned in February 2019, will speed the journey toward regulatory approval and acceptance by health insurance companies. Cognoa’s augmented-reality therapy will work with most types of smartphones, and it will be compatible not only with Google Glass but also with new brands of smart glasses that are beginning to hit the market. In a separate project, the company is working on a digital tool that physicians can use to diagnose autism in children as young as 18 months, which could prepare these young kids to receive treatment during a crucial window of brain development.

Ultimately, we feel that our treatment approach can be used for childhood concerns beyond autism. We can design games and feedback for kids who struggle with speech and language, for example, or who have been diagnosed with attention deficit hyperactivity disorder. We’re imagining all sorts of ubiquitous AI-powered devices that deliver treatment to users, and which feed into a virtuous cycle of technological improvement; while acting as learning aids, these devices can also capture data that helps us understand how to better personalize the treatment. Maybe we’ll even gain new scientific insights into the disorders in the process. Most important of all, these devices will empower families to take control of their own therapies and family dynamics. Through Superpower Glass and other wearables, they’ll see the way forward.

This article appears in the April 2020 print issue as “Making Emotions Transparent.”

About the Authors

When Stanford professor Dennis Wall met Catalin Voss and Nick Haber in 2013, it felt like a “serendipitous alignment of the stars,” Wall says. He was investigating new therapies for autism, Voss was experimenting with the Google Glass wearable, and Haber was working on machine learning and computer vision. Together, the three embarked on the Superpower Glass project to encourage autistic kids to interact socially and help them recognize emotions.

DARPA Races To Create a “Firebreak” Treatment for the Coronavirus

Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/bionics/darpas-firebreak-treatment-for-the-coronavirus

When DARPA launched its Pandemic Preparedness Platform (P3) program two years ago, the pandemic was theoretical. It seemed like a prudent idea to develop a quick response to emerging infectious diseases. Researchers working under the program sought ways to confer instant (but short-term) protection from a dangerous virus or bacteria. 

Today, as the novel coronavirus causes a skyrocketing number of COVID-19 cases around the world, the researchers are racing to apply their experimental techniques to a true pandemic playing out in real time. “Right now, they have one shot on goal,” says DARPA program manager Amy Jenkins. “We’re really hoping it works.”

The P3 program’s plan was to start with a new pathogen and to “develop technology to deliver medical countermeasures in under 60 days—which was crazy, unheard of,” says Jenkins. The teams have proven they can meet this ambitious timeline in previous trials using the influenza and Zika viruses. Now they’re being asked to pull off the same feat with the new coronavirus, which more formally goes by the name SARS-CoV-2 and causes the illness known as COVID-19.

James Crowe, director of the Vanderbilt Vaccine Center at the Vanderbilt University Medical Center, leads one of the four P3 teams. He spoke with IEEE Spectrum from Italy, where he and his wife are currently traveling—and currently recovering from a serious illness. "We both got pretty sick," Crowe says. "We're pretty sure we already had the coronavirus." For his wife, he says, it was the worst illness she's had in three decades. They're trying to get tested for the virus.

Ironically, Crowe is quite likely harboring the raw material that his team and others need to do their work. The DARPA approach called for employing antibodies, the proteins that our bodies naturally use to fight infectious diseases, which remain in our bodies after an infection.

In the P3 program, the 60-day clock begins when a blood sample is taken from a person who has fully recovered from the disease of interest. Then the researchers screen that sample to find all the protective antibodies the person’s body has made to fight off the virus or bacteria. They use modeling and bioinformatics to choose the antibody that seems most effective at neutralizing the pathogen, and then determine the genetic sequence that codes for the creation of that particular antibody. That snippet of genetic code can then be manufactured quickly and at scale, and injected into people.

Jenkins says this approach is much faster than manufacturing the antibodies themselves. Once the genetic snippets are delivered by an injection, “your body becomes the bioreactor” that creates the antibodies, she says. The P3 program’s goal is to have protective levels of the antibodies circulating within 6 to 24 hours.

DARPA calls this a “firebreak” technology, because it can provide immediate immunity to medical personnel, first responders, and other vulnerable people. However, it wouldn’t create the permanent protection that vaccines provide. (Vaccines work by presenting the body with a safe form of the pathogen, thus giving the body a low-stakes opportunity to learn how to respond, which it remembers on future exposures.)

Robert Carnahan, who works with Crowe at the Vanderbilt Vaccine Center, explains that their method offers only temporary protection because the snippets of genetic code are messenger RNA, molecules that carry instructions for protein production. When the team’s specially designed mRNA is injected into the body, it’s taken up by cells (likely those in the liver) that churn out the needed antibodies. But eventually that RNA degrades, as do the antibodies that circulate through the blood stream. 

“We haven’t taught the body how to make the antibody,” Carnahan says, so the protection isn’t permanent. To put it in terms of a folksy aphorism: “We haven’t taught the body to fish, we’ve just given it a fish.”

Carnahan says the team has been scrambling for weeks to build the tools that enable them to screen for the protective antibodies; since very little is known about this new coronavirus, “the toolkit is being built on the fly.” They were also waiting for a blood sample from a fully recovered U.S. patient. Now they have their first sample, and are hoping for more in the coming weeks, so the real work is beginning. “We are doggedly pursuing neutralizing antibodies,” he says. “Right now we have active pursuit going on.”

Jenkins says that all of the P3 groups (the others are Greg Semposki’s lab at Duke University, a small Vancouver company called AbCellera, and the big pharma company AstraZeneca) have made great strides in technologies that rapidly identify promising antibodies. In their earlier trials, the longer part of the process was manufacturing the mRNA and preparing for safety studies in animals. If the mRNA is intended for human use, the manufacturing and testing processes will be much slower because there will be many more regulatory hoops to jump through. 

Crowe and Carnahan’s team has seen another project through to human trials; last year the lab worked with the biotech company Moderna to make mRNA that coded for antibodies that protected against the Chikungunya virus. “We showed it can be done,” Crowe says.

Moderna was involved in a related DARPA program known as ADEPT that has since ended. The company’s work on mRNA-based therapies has led it in another interesting direction—last week, the company made news with its announcement that it was testing an mRNA-based vaccine for the coronavirus. That vaccine works by delivering mRNA that instructs the body to make the “spike” protein that’s present on the surface of the coronavirus, thus provoking an immune response that the body will remember if it encounters the whole virus. 

While Crowe’s team isn’t pursuing a vaccine, they are hedging their bets by considering both the manufacture of mRNA to code for the protective antibodies and the manufacture of the antibodies themselves. Crowe says that the latter process is typically slower (perhaps 18 to 24 months), but he’s talking with companies that are working on ways to speed it up. The direct injection of antibodies is a standard type of immunotherapy.

Crowe’s convalescence in Italy isn’t particularly relaxing. He’s working long days, trying to facilitate the shipment of samples to his lab from Asia, Europe, and the United States, and he’s talking to pharmaceutical companies that could manufacture whatever his lab comes up with. “We have to figure out a cooperative agreement for something we haven’t made yet,” he says, “but I expect that this week we’ll have agreements in place.”

Manufacturing agreements aren’t the end of the story, however. Even if everything goes perfectly for Crowe’s team and they have a potent antibody or mRNA ready for manufacture by the end of April, they’d have to get approval from the U.S. Food and Drug Administration. To get a therapy approved for human use typically takes years of studies on toxicity, stability, and efficacy. Crowe says that one possible shortcut is the FDA’s compassionate use program, which allows people to use unapproved drugs in certain life-threatening situations. 

Gregory Poland, director of the Mayo Clinic Vaccine Research Group, says that mRNA-based prophylactics and vaccines are an important and interesting idea. But he notes that all talk of these possibilities should be considered “very forward-looking statements.” He has seen too many promising candidates fail after years of clinical trials to get excited about a new bright idea, he says. 

Poland also says the compassionate use shortcut would likely only be relevant “if we’re facing a situation where something like Wuhan is happening in a major city in the U.S., and we have reason to believe that a new therapy would be efficacious and safe. Then that’s a possibility, but we’re not there yet,” he says. “We’d be looking for an unknown benefit and accepting an unknown risk.” 

Yet the looming regulatory hurdles haven’t stopped the P3 researchers from sprinting. AbCellera, the Vancouver-based biotech company, tells IEEE Spectrum that its researchers have spent the last month looking at antibodies against the related SARS virus that emerged in 2002, and that they’re now beginning to study the coronavirus directly. “Our main discovery effort from a COVID-19 convalescent donor is about to be underway, and we will unleash our full discovery capabilities there,” a representative wrote in an email. 

At Crowe’s lab, Carnahan notes that the present outbreak is the first one to benefit from new technologies that enable a rapid response. Carnahan points to the lab’s earlier work on the Zika virus: “In 78 days we went from a sample to a validated antibody that was highly protective. Will we be able to hit 78 days with the coronavirus? I don’t know, but that’s what we’re gunning for,” he says. “It’s possible in weeks and months, not years.”

Mum No More: 3D Printed Vocal Tract Lets Mummy Speak

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/bionics/mummy-3d-printed-vocal-tract

The coffin that holds the mummified body of the ancient Egyptian Nesyamun, who lived around 1100 B.C., expresses the man’s desire for his voice to live on. Now, 3,000 years after his death, that wish has come true.

Using a 3D-printed replica of Nesyamun’s vocal tract and an electronic larynx, researchers in the UK have synthesized the dead man’s voice. The researchers described the feat today in the journal Scientific Reports.

“We’ve created sound for Nesyamun’s vocal tract exactly as it is positioned in his coffin,” says David Howard, head of the department of electrical engineering at Royal Holloway University of London, who coauthored the report. “We didn’t choose the sound. It is the sound the tract makes.”

10 Tantalizing Tech Milestones to Look for in 2020

Post Syndicated from The IEEE Spectrum editorial staff original https://spectrum.ieee.org/biomedical/bionics/10-tantalizing-tech-milestones-to-look-for-in-2020

  • Mind-Controlled Bionic Limbs Will Debut in the Boston Marathon

    MIT researchers have developed a way of controlling bionic limbs with thoughts alone. First tried in humans in 2016, the method will be hitting new strides in 2020, when Brandon Korona, a veteran who lost his leg in Afghanistan, uses his new bionic limb to compete in the Boston Marathon. The mind-control technique involves reconstructing muscles near the base of the amputation site and linking them so that the muscles contract and extend in unison. This dynamic interaction and the electrical impulses it generates make it possible for the limb’s processor, which controls the bionic joints, to exchange signals with the brain. This exchange tells the brain where a joint is, how fast it is moving, and what size load it is bearing.

  • Artificial Diamonds Will Really Shine

    Diamonds can cost a lot, in money and even blood, given the sometimes shady ethics of the trade. A solution may lie in lab-grown diamonds. Production of these artificial gems will ramp up considerably in 2020, when one of the world’s largest diamond companies, De Beers Group, opens a manufacturing facility in Oregon that will produce about 500,000 rough carats per year. There, mixed gases and hundreds of chemical substrates will be added to reactors and subjected to high temperatures, transforming carbon into its diamond form. It takes about two weeks to make a 1-carat sparkler, which will retail for about US $800.

  • Huge Pumped-Storage Project Breaks Ground in Montana

    As the electric grid increasingly relies on wind turbines and solar panels, it requires ever more backup energy to make up for shortfalls on windless or cloudy days. To help address this need, Absaroka Energy will begin construction on a new pumped-storage hydroelectric facility this year, harnessing three massive reservoirs inside the Gordon Butte mountain, in Montana. When electricity is needed, the reservoirs will release water onto three turbine generators below, which together can generate 400 megawatts of electricity. Surplus electricity will be used to pump water back into the reservoirs. The new system uses separate motors to control the pumps, as well as separate turbines and generators. Isolating the components gives the system about 80 percent efficiency.

  • Will a Facebook-Backed Cryptocurrency Overcome Legal Hurdles?

    In July 2019, Facebook and 27 other companies announced plans to release a worldwide cryptocurrency, called Libra. The Libra Association aims to create a safe, stable currency, one that could be especially convenient for the 1.7 billion people globally who don’t have bank accounts. However, the announcement has prompted concerns about how Libra might be used for money laundering and the privatization of money, among other issues. The Libra Association says it is taking proper measures for security and data privacy, an assertion repeated by Facebook CEO Mark Zuckerberg at hearings before the U.S. Congress last October. Meanwhile, seven major companies, including PayPal, Visa, and Mastercard, have left the group. But the association says it still plans to roll out Libra in 2020.

  • Tesla’s 2020 Roadster to Hit the Streets

    Here’s some news to get any gearhead’s heart racing: Elon Musk claims that the 2020 Tesla Roadster will be able to rocket from 0 to 97 kilometers per hour (60 miles per hour) in a mere 1.9 seconds. This new machine—not to be confused with Tesla’s 2008–2012 Roadster—will have two motors in the rear and one in the front and offer the option of rocket thrusters powered by compressed air. Aside from the excitement surrounding the Roadster’s acceleration and top speed (which will exceed 200 mph), perhaps the most important spec is range. Currently, the longest range of an electric car is around 600 km (375 miles), but the Roadster will be good for 1,000 km (620 miles).

  • Rolls-Royce to Fly Record-Breaking Electric Plane

    In the first quarter of 2020, Rolls-Royce will unveil ACCEL, which it says is the fastest all-electric plane ever designed. The company claims its one-seat racing plane should exceed 480 kilometers per hour (300 miles per hour), smashing the current record of 340 km/h, set in 2017 by a Siemens plane. Rolls-Royce and its partners had to monitor more than 20,000 data points per second to optimize the plane’s battery system. An active cooling system allows the battery to discharge at high rates. Look to the skies over Britain to see this plane in action.

  • Lockheed Martin Takes Another Step Toward Compact, Convenient Nuclear Fusion

    Controlled nuclear fusion has been the object of many a failed quest over the past 60 years. Though it could go far to solve the world’s energy needs, the technical demands of fusion power are stupendous. Lockheed Martin has a patent pending on a reactor design that it says has a real chance at success. The reactor is compact, relying on magnetic fields to confine the hydrogen plasma and on electromagnetic fields to ignite and sustain the plasma. This process causes hydrogen atoms to fuse into helium, releasing torrents of energy. Throughout 2020, the company will test the fifth prototype, T5, which it says is significantly more powerful than earlier versions. The tests will show whether the design can handle the immense heat and pressure from the highly energized plasma inside.

  • A New Job for a Robot: Mowing

    In 2020, the long wait for a lawn-mowing counterpart to the Roomba will finally be over. iRobot plans to launch Terra, which can mow a lawn in straight, even rows without any human oversight. To navigate, the robot relies on a handful of radio-frequency beacons strategically placed throughout the yard, keeping at least three beacons within its line of sight at all times. By measuring the time it takes for signals to travel between itself and the beacons, Terra can locate itself on a preprogrammed map of the lawn (see the trilateration sketch after this list). If anyone tries to steal this hard-working robot, antitheft software registers that the machine has left the premises and renders it inoperable.

  • GE Unveils a Bigger, Better Wind Turbine

    In 2020, GE Renewable Energy will seek certification for its Haliade-X offshore wind turbine, whose rated capacity of 12 megawatts would make it the largest and most powerful on the market. It boasts 107-meter-long blades made of a composite of glass and carbon fiber in a resin matrix. The massive area swept by those blades will let the turbine capture up to 67 gigawatt-hours annually, enough clean energy to power 16,000 households and save up to 42,000 metric tons of CO2. Assuming certification in 2020, sales are expected to commence in 2021.

  • Google and Apple Compete for the Attention of Gamers

    Gamers will be busy with two gaming services from Apple and Google. Critical to the new services—which launched recently and are expected to sweep the industry in the coming year—is the expansion of bandwidth, in the form of faster Wi-Fi and the emerging 5G capability, both of which greatly reduce lag. For US $4.99 per month, players can access more than 100 games through Apple Arcade, with more being rolled out each month. Google Stadia is available for $9.99 per month, with the option to purchase additional games at up to 4K resolution and at 60 frames per second. While Google Stadia games can be played on a variety of devices, Apple Arcade is, unsurprisingly, available only on Apple devices.
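
Returning to the Terra mower above: iRobot hasn’t published its localization code, but time-of-flight trilateration against known beacon positions is the standard recipe. A minimal sketch follows, with made-up beacon coordinates and timings.

```python
# Sketch of the time-of-flight localization described for the Terra mower:
# convert signal travel times to ranges, then solve for position by least
# squares against known beacon coordinates. Beacon positions and timings are
# made up for illustration; iRobot's actual implementation is not public.
import numpy as np

C = 299_792_458.0  # propagation speed of the RF signal, m/s

def locate(beacons, travel_times):
    """beacons: (n, 2) known x,y positions in meters; travel_times: (n,) seconds."""
    ranges = C * np.asarray(travel_times)
    # Linearize by subtracting the first beacon's range equation from the rest.
    x0, y0 = beacons[0]
    r0 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # estimated (x, y) of the robot

# Three beacons at the corners of a yard and illustrative travel times:
beacons = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 15.0]])
true_pos = np.array([8.0, 6.0])
times = np.linalg.norm(beacons - true_pos, axis=1) / C
print(locate(beacons, times))  # ~[8. 6.]
```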

Caltech’s Brain-Controlled Exoskeleton Will Help Paraplegics Walk

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/biomedical/bionics/caltechs-braincontrolled-exoskeleton-will-help-paraplegics-walk


Bipedal robots have long struggled to walk as humans do—balancing on two legs and moving with that almost-but-not-quite falling forward motion that most of us have mastered by the time we’re a year or two old. It’s taken decades of work, but robots are starting to get comfortable with walking, putting them in a position to help people in need.

Roboticists at the California Institute of Technology have launched an initiative called RoAMS (Robotic Assisted Mobility Science), which uses the latest research in robotic walking to create a new kind of medical exoskeleton. With the ability to move dynamically, using neurocontrol interfaces, these exoskeletons will allow users to balance and walk without the crutches that are necessary with existing medical exoskeletons. This might not seem like much, but consider how often you find yourself standing up and using your hands at the same time.

“The only way we’re going to get exoskeletons into the real world helping people do everyday tasks is through dynamic locomotion,” explains Aaron Ames, a professor of civil and mechanical engineering at Caltech and colead of the RoAMS initiative. “We’re imagining deploying these exoskeletons in the home, where a user might want to do things like make a sandwich and bring it to the couch. And on the clinical side, there are a lot of medical benefits to standing upright and walking.”

The Caltech researchers say their exoskeleton is ready for a major test: They plan to demonstrate dynamic walking through neurocontrol this year.

Getting a bipedal exoskeleton to work so closely with a human is a real challenge. Ames explains that researchers have a deep and detailed understanding of how their robotic creations operate, but biological systems still present many unknowns. “So how do we get a human to successfully interface with these devices?” he asks.

There are other challenges as well. Ashraf S. Gorgey, an associate professor of physical medicine and rehabilitation at Virginia Commonwealth University, in Richmond, who has researched exoskeletons, says factors such as cost, durability, versatility, and even patients’ desire to use the device are just as important as the technology itself. But he adds that as a research system, Caltech’s approach appears promising: “Coming up with an exoskeleton that can provide balance to patients, I think that’s huge.”

One of Ames’s colleagues at Caltech, Joel Burdick, is developing a spinal stimulator that can potentially help bypass spinal injuries, providing an artificial connection between leg muscles and the brain. The RoAMS initiative will attempt to use this technology to exploit the user’s own nerves and muscles to assist with movement and control of the exoskeleton—even for patients with complete paraplegia. Coordinating nerves and muscles with motion can also be beneficial for people undergoing physical rehabilitation for spinal cord injuries or stroke, where walking with the support and assistance of an exoskeleton can significantly improve recovery, even if the exoskeleton does most of the work.

“You want to train up that neurocircuitry again, that firing of patterns that results in locomotion in the corresponding muscles,” explains Ames. “And the only way to do that is have the user moving dynamically like they would if they weren’t injured.”

Caltech is partnering with a French company called Wandercraft to transfer this research to a clinical setting. Wandercraft has developed an exoskeleton that has received clinical approval in Europe, where it has already enabled more than 20 paraplegic patients to walk. In 2020, the RoAMS initiative will focus on directly coupling brain or spine interfaces with Wandercraft’s exoskeleton to achieve stable dynamic walking with integrated neurocontrol, which has never been done before.

Ames notes that these exoskeletons are designed to meet very specific challenges. For now, their complexity and cost will likely make them impractical for most people with disabilities to use, especially when motorized wheelchairs can more affordably fulfill many of the same functions. But he is hoping that the RoAMS initiative is the first step toward bringing the technology to everyone who needs it, providing an option for situations that a wheelchair or walker can’t easily handle.

“That’s really what RoAMS is about,” Ames says. “I think this is something where we can make a potentially life-changing difference for people in the not-too-distant future.”

This article appears in the January 2020 print issue as “This Exoskeleton Will Obey Your Brain.”

You Want a Prosthetic Leg With a Tesla Coil and Spark Gaps? No Problem

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/biomedical/bionics/you-want-a-prosthetic-leg-with-a-tesla-coil-and-spark-gaps-no-problem

Imagine performing complicated choreography with thousands of volts rippling up and down inside your leg, creating a ladder of buzzing miniature bolts of lightning. That’s what Viktoria Modesta does in a promotional art video released this week for Rolls Royce’s line of Black Badge cars. The video, which you can see at the bottom of this page, is in a “cybernetic glam” style that shows Modesta striding through futuristic settings until she transforms into the famous leaning woman ornament that adorns the front bonnet of Rolls Royces.

Modesta is a self-described bionic pop artist who often uses very elaborate versions of her prosthetic limb. To create her outfit for the video—including the 3D printed matching bodice—she turned to fashiontech designer Anouk Wipprecht (who wrote about her creation of a maker-friendly EEG headset in the June issue of IEEE Spectrum) and Sophie de Oliveira Barata of the Alternative Limb Project.

Modesta, Wipprecht, and de Oliveira Barata started visiting their local Rolls Royce dealerships and brainstorming about what Modesta’s look could be for the video. Wipprecht began thinking about using a Tesla coil to generate electric “wings,” and brought on two previous collaborators: Joe DiPrima (profiled in Spectrum’s May issue) and his brother John. The DiPrimas are the founders of ArcAttack, which makes and performs with Tesla coils. ArcAttack often makes large coils, some of which have been installed as attractions in U.S. science museums. Soon the team began wondering if they could put a Jacob’s Ladder inside a hollow leg. (If you’ve ever seen an old-school mad scientist’s laboratory in a movie, you’re familiar with the rising spark effect of a Jacob’s Ladder.)

But while “Jacob Ladders are cool things, they’re sort of unreliable. If you’re walking around with it and the wind starts blowing and stuff like that, it just breaks the arc. Maybe it might work like one out of five times,” says Joe DiPrima. Given that the video was designed to promote a car, “I thought that maybe we could design a Jacob’s Ladder [that] worked more like a distributor cap in a car. So we have an actual mechanical switch that switches high voltage from the Tesla coil and energizes a specific spark gap. And so by doing that we make the effect bulletproof.” The result was a series of spark gaps that can fire in a Jacob’s Ladder-like way, but with the speed and pattern under the control of a programmable microcontroller. Wipprecht and DiPrima shared with IEEE Spectrum some behind-the-scenes footage of how they prototyped and built the leg for the video.
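
ArcAttack’s controller firmware isn’t public, but the distributor-cap idea DiPrima describes reduces to a simple sequencer: energize one spark gap at a time, bottom to top, at a programmable rate. A minimal sketch follows, with the gap count and timing as assumptions; it prints the firing pattern rather than switching real high-voltage hardware.

```python
# Sketch of the "distributor cap" sequencing idea: instead of letting an arc
# climb on its own (and blow out in the wind), a controller energizes one spark
# gap at a time, bottom to top, at a programmable rate. Gap count and timing
# are assumptions; the real ArcAttack controller runs on a microcontroller.
import time

NUM_GAPS = 8   # assumed number of spark gaps up the leg
STEP_MS = 40   # assumed dwell time per gap for a ladder-like climb

def fire_gap(index, on):
    """Stand-in for switching high voltage onto gap `index`."""
    print(f"gap {index}: {'FIRE' if on else 'off'}")

def ladder_climb(cycles=3, step_ms=STEP_MS):
    """Fire gaps in sequence from bottom to top, like a Jacob's Ladder."""
    for _ in range(cycles):
        for gap in range(NUM_GAPS):
            fire_gap(gap, True)
            time.sleep(step_ms / 1000)
            fire_gap(gap, False)

if __name__ == "__main__":
    ladder_climb()
```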

Making a wearable Tesla coil came with unusual challenges: “A Tesla coil doesn’t have a lot of capacitance when they’re super small. So they’re really sensitive to their surroundings as far as frequency is concerned and stuff like that,” says DiPrima—even a nearby hand can affect the coil. “One of the things that I did to make the frequency of the coil more stable is I put a capacitor string inside of it…that added 20 picofarads of capacitance, which is pretty significant for a coil that small.” As well as stabilizing the coil itself, the extra capacitance gave the design more flexibility with regard to the length of the wires used to connect to the spark gaps in the leg.
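
The sensitivity DiPrima describes follows from the resonance formula f = 1/(2π√(LC)): when a coil’s own capacitance is only a few picofarads, a picofarad or two from a nearby hand shifts the operating frequency noticeably, while a fixed 20 pF swamps that variation. The inductance and capacitance values below are assumptions chosen only to illustrate the effect, not measurements of this coil.

```python
# Why 20 pF is "pretty significant for a coil that small": with only a few
# picofarads of self-capacitance, a nearby hand (a pF or two) detunes the
# resonant frequency a lot; a fixed 20 pF makes that same hand a small
# fractional change. L and C values here are illustrative assumptions.
from math import pi, sqrt

L = 30e-3         # assumed secondary inductance, henries
C_SELF = 5e-12    # assumed self-capacitance of a very small coil, farads
C_HAND = 2e-12    # assumed extra capacitance from a nearby hand
C_ADDED = 20e-12  # the capacitor string DiPrima describes

def f_res(L, C):
    return 1 / (2 * pi * sqrt(L * C))

for label, c in [("bare coil", C_SELF),
                 ("bare coil + hand", C_SELF + C_HAND),
                 ("with 20 pF added", C_SELF + C_ADDED),
                 ("with 20 pF + hand", C_SELF + C_ADDED + C_HAND)]:
    print(f"{label:>20}: {f_res(L, c) / 1e3:,.0f} kHz")
# The same 2 pF shifts the bare coil by roughly 15 percent but the padded
# coil by only a few percent, which is the stabilizing effect described above.
```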

The team decided to fit the coil and its supporting electronics into a shoe, so they worked with United Nude to make a custom pair that would let them fit everything into the heel. The development process took about six months and grew to include a number of other specialists, such as Alex Freire.

“I’m not a prosthetist,” says Wipprecht. For an artificial leg “there’s really specific needs, like the socket placement, the length of the feet, the balance.” Freire made the sockets and the feet, which were then sent to de Oliveira Barata to complete the leg based on Wipprecht’s design sketches, which featured an open space for the spark electrodes. The feet are made of rubber, while the upper part of the prosthetic is made of carbon fiber, but much of the length is made from fiberglass. “That is the space where the electrical effects needed to happen, so it couldn’t be carbon fiber because that’s conductive,” says Wipprecht.

In the finished video, these effects have been enhanced using CGI, but they are still based on real sparks as a practical effect. The bodice Modesta wears is designed to evoke the engine of the Black Badge car. To create it, Wipprecht worked from a 3D body scan of Modesta and printed the form using a nylon material with a laser-sintering printer. The printed neck and hip parts were used to make a mold from which parts were cast, which were then layered with the same carbon fiber material used in Black Badge cars.

Mind Meld: Tetris Players Electronically Connect Their Brains

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/bionics/video-game-players-electronically-connect-their-brains

Humans collaborate using brain-to-brain communication to play a video game

Have you ever felt a strong emotion, such as elation from taking in a scenic view, and wanted to share it with the people around you? Not “share” as in tell them about it or post it on social media, but actually share it—like beam the feeling from your brain into theirs?  

Researchers at the University of Washington in Seattle say they would like to give humans that kind of brain-to-brain interaction, and have demonstrated a baby step toward that goal. In a set of experiments described in the journal Scientific Reports, the researchers enabled small groups of people to communicate collaboratively using only their minds.

In the experiments, participants played a Tetris-like video game. They worked in groups of three to decide whether to rotate a digital shape as it fell toward rows of blocks at the bottom of the screen. 

The participants could not see, hear, or communicate with each other in any way other than through thinking. Their thoughts—electrical signals in the brain—were read using electroencephalography (EEG) and delivered using transcranial magnetic stimulation (TMS).

The messages sent between participants’ brains were limited to “yes” and “no”. But the researchers who developed the system hope to expand upon it to enable the sharing of more complex information or even emotions. “Imagine if you could make a person feel something,” says Andrea Stocco, an assistant professor at the University of Washington, who collaborated on the experiments.

We already try to elicit emotions from each other—to varying degrees of success—using touch, words, pictures and drugs. And brain stimulation techniques such as TMS have for more than a decade been used to treat psychiatric disorders such as depression. Sharing emotions using brain-to-brain interaction is an extension of these existing practices, Stocco says. “It might sound creepy, but it’s important,” he says.

Stocco’s experiments with the technology, called brain-to-brain interface, or BBI, are the first demonstration of BBI between more than two people, he and his colleagues say. 

To be accurate, the technology should probably be called brain-to-computer-to-computer-to-brain interface. The brain signals of one person are recorded with EEG and decoded for their meaning (computer number one). The message is then re-coded and sent to a TMS device (computer number two), which delivers electrical stimulation to the brain of the recipient.

In Stocco’s experiments, this chain of communication has to happen in the amount of time it takes a Tetris-style block to drop (about 30 seconds—it’s slow Tetris). Participants work in groups of three: two senders and one receiver. The senders watch the video game and each decide whether to rotate the block or not. Then they send their yes or no decision to the receiver, who sits in a different room, and is charged with taking action to rotate the block or not, based on the senders’ messages. (Receivers can only see half the game—the piece that is falling—and not the rows of blocks into which the piece is supposed to fit, so they depend on the senders’ advice.) 

If senders want to rotate the block, they focus their attention on an area of their screen that says “yes” with an LED flashing beneath it at 17 hertz. If they do not want to rotate it, they focus their attention on an area of the screen that says “no” with an LED flashing beneath it at 15 hertz.

The difference in brain activity caused by looking at these two different rates of flashing light is fairly easy to detect with EEG, says Stocco. A computer then evaluates the brainwave patterns, determines whether they correspond to yes or no, and sends that information to the third person in the group (the receiver).
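
The University of Washington group’s decoder isn’t reproduced here, but this frequency-tagging trick lends itself to a short sketch: compare EEG power near the 17-hertz “yes” flicker with power near the 15-hertz “no” flicker over the decision window. The sampling rate, window length, and single-channel input are assumptions.

```python
# Minimal sketch of the frequency-tagging decision described above: compare
# EEG power near the 17 Hz "yes" flicker with power near the 15 Hz "no"
# flicker. Sampling rate, window length, and the single-channel input are
# assumptions, not the UW team's actual decoder.
import numpy as np
from scipy.signal import welch

FS = 256          # assumed EEG sampling rate, Hz
YES_HZ, NO_HZ = 17.0, 15.0

def band_peak_power(signal, target_hz, halfwidth=0.5, fs=FS):
    freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)  # ~0.25 Hz resolution
    mask = np.abs(freqs - target_hz) <= halfwidth
    return psd[mask].mean()

def decode_decision(eeg_window):
    """eeg_window: 1-D array from an occipital channel during the gaze period."""
    yes_power = band_peak_power(eeg_window, YES_HZ)
    no_power = band_peak_power(eeg_window, NO_HZ)
    return "yes" if yes_power > no_power else "no"

# Quick self-test with a synthetic 17 Hz response buried in noise:
t = np.arange(0, 8, 1 / FS)
fake = np.sin(2 * np.pi * YES_HZ * t) + 2 * np.random.randn(t.size)
print(decode_decision(fake))  # expected: "yes"
```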

The receiver wears a TMS device that noninvasively induces gentle electrical stimulation in the brain. If the message from a sender is “yes,” the TMS device stimulates the receiver’s brain in a way that produces some kind of visual cue, such as a flash of color. If the message is “no,” the receiver gets no visual cue. Messages to the receiver from the two senders arrive one after the other, in the same order each time.

To make things interesting, the researchers asked one of the two senders to frequently transmit the wrong answer. All the receivers noticed the pattern of bad advice, and chose to listen to the more accurate sender. 

Compared with the complexity of human thought, this binary form of brain-to-brain communication is just one step toward something that might be useful outside the lab. No emotions were shared between participants, aside from perhaps a little nostalgia among the participants who grew up with the real Tetris. 

To reach a higher level of sophistication in brain-to-brain communication, researchers will likely need equipment that can read brain activity with more spatial resolution than EEG, and can stimulate with more specificity than TMS. For this, some BBI researchers are turning to fMRI and ultrasound, Stocco says.

BBI work is slow going. Stocco and his University of Washington colleague Rajesh Rao first demonstrated a form of BBI in 2013. Other groups followed on shortly after. Now, six years later, researchers working on the technology are only inches from where they started. “There are maybe four or five groups working on this globally, so we get maybe one paper a year,” says Stocco. 

DARPA Funds Ambitious Brain-Machine Interface Program

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/bionics/darpa-funds-ambitious-neurotech-program

The N3 program aims to develop wearable devices that let soldiers communicate directly with machines.

DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program has awarded funding to six groups attempting to build brain-machine interfaces that match the performance of implanted electrodes but with no surgery whatsoever.

By simply popping on a helmet or headset, soldiers could conceivably command control centers without touching a keyboard; fly drones intuitively with a thought; even feel intrusions into a secure network. While the tech sounds futuristic, DARPA wants to get it done in four years.

“It’s an aggressive timeline,” says Krishnan Thyagarajan, a research scientist at PARC and principal investigator of one of the N3-funded projects. “But I think the idea of any such program is to really challenge the community to push the limits and accelerate things which are already brewing. Yes, it’s challenging, but it’s not impossible.”

The N3 program fits right into DARPA’s high-risk, high-reward biomedical tech portfolio, including programs in electric medicine, brain implants and electrical brain training. And the U.S. defense R&D agency is throwing big money at the program: Though a DARPA spokesperson declined to comment on the amount of funding, two of the winning teams are reporting eye-popping grants of $19.48 million and $18 million.

Plenty of noninvasive neurotechnologies already exist, but not at the resolution necessary to yield high-performance wearable devices for national security applications, says N3 program manager Al Emondi of DARPA’s Biological Technologies Office.

Following a call for applications back in March, a review panel narrowed the pool to six teams across industry and academia, Emondi told IEEE Spectrum. The teams are experimenting with different combinations of magnetic fields, electric fields, acoustic fields (ultrasound) and light. “You can combine all these approaches in different, unique and novel ways,” says Emondi. What the program hopes to discover, he adds, is which combinations can record brain activity and communicate back to the brain with the greatest speed and resolution.

Specifically, the program is seeking technologies that can read and write to brain cells in just 50 milliseconds round-trip, and can interact with at least 16 locations in the brain at a resolution of 1 cubic millimeter (a space that encompasses thousands of neurons).

The four-year N3 program will consist of three phases, says Emondi. In the current phase 1, teams have one year to demonstrate the ability to read (record) and write to (stimulate) brain tissue through the skull. Teams that succeed will move to phase 2. Over the ensuing 18 months, those groups will have to develop working devices and test them on living animals. Any group left standing will proceed to phase 3—testing their device on humans.

Four of the teams are developing totally noninvasive technologies. A team from Carnegie Mellon University, for example, is planning to use ultrasound waves to guide light into and out of the brain to detect neural activity. They plan to use interfering electrical fields to write to specific neurons.

The three other teams proposing noninvasive techniques are Johns Hopkins University’s Applied Physics Laboratory, Thyagarajan’s team at PARC, and a team from Teledyne Technologies, a California-based industrial company.

The two remaining teams are developing what DARPA calls “minutely invasive” technologies which, as we described in September, require no incisions or surgery but may involve technology that is swallowed, sniffed, injected or absorbed into the human body in some way.

Rice University, for example, is developing a system that requires exposing neurons to a viral vector to deliver instructions for synthetic proteins that indicate when a neuron is active. Ohio-based technology company Battelle is developing a brain-machine interface that relies on magnetoelectric nanoparticles injected into the brain.

“This is uncharted territory for DARPA, and the next step in brain-machine interfaces,” says Emondi. “If we’re successful in some of these technologies…that’s a whole new ecosystem that doesn’t exist right now.”