Tag Archives: Biomedical

Facebook Closer to Augmented Reality Glasses With Brain Implant That Decodes Dialogue From Neural Activity

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/devices/brain-implant-decodes-dialogue-from-neural-activity

The project is the first to decode question-and-answer speech from brain signals in real time

In 2017, Facebook’s Mark Chevillet gave himself two years to prove whether it was feasible to build a non-invasive technology able to read out 100 words per minute from brain activity.

It’s been two years, and the verdict is in: “The promise is there,” Chevillet told IEEE Spectrum. “We do think it will be possible.”

As research director of Facebook Reality Labs’ brain-computer interface program, Chevillet plans to push ahead with the project—and the company’s ultimate goal to develop augmented reality (AR) glasses that can be controlled without having to speak aloud.

Chevillet’s optimism is fueled in large part by a first in the field of brain-computer interfaces that hit the presses this morning: In the journal Nature Communications, a team at the University of California, San Francisco, in collaboration with Facebook Reality Labs, has built a brain-computer interface that accurately decodes dialogue—words and phrases both heard and spoken by the person wearing the device—from brain signals in real time.

What the Media Missed About Elon Musk’s $150 Million Augmented Brain Project

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/devices/elon-musks-150-million-augmented-brain-project-what-the-media-missed

Elon Musk’s company Neuralink last week announced a plan to sync our brains with artificial intelligence. Here’s what news outlets overlooked.

Elon Musk is known for trumpeting bold and sometimes brash plans. So it was no surprise last week when the Tesla founder made an announcement—in front of a live audience and streamed online, with a video trailer and thematic music—that his new company Neuralink plans to sync our brains with artificial intelligence. (Don’t worry, he assured the audience, “this is not a mandatory thing.”)

What was surprising was the breathless coverage in most media, which lacked context or appreciation for the two decades of research on which Neuralink’s work stands. 

Upgrade to Superhuman Reflexes Without Feeling Like a Robot

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/the-human-os/biomedical/devices/enabling-superhuman-reflexes-without-feeling-like-a-robot

Electrical muscle stimulation can speed up human reflexes. But can we use it without giving up control of our bodies?

We’re used to seeing robots with reflexes that are much faster than anything humans are capable of, but it’s not just high-speed actuators that make them so quick. Their perception and control systems also outperform our own nervous systems and brains, which means we can boost our own reaction times simply by letting a robot take control of our muscles. Doing this isn’t very hard: all it takes is electrical muscle stimulation (EMS), which uses electrode pads attached to the skin to gently shock your muscles, causing them to contract.

By adding sensors to an EMS system, physical reaction times can be significantly reduced, since the EMS is able to actuate your muscles much faster than your brain can. Essentially, you’d be a puppet, having surrendered your free will to a computer that can force you to move. This may not be a trade-off that you want to accept, which is why researchers from Sony and the University of Chicago are working on an EMS system that makes your reflexes noticeably faster without completely removing the feeling that you’re doing things yourself.
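To make the idea concrete, here is a minimal sketch of the kind of sensor-triggered stimulation loop such a system could use: when the sensor sees the event, the computer fires the stimulator well before the wearer’s own reflex would kick in. The `Sensor` and `EMSStimulator` interfaces below are hypothetical stand-ins, not the researchers’ actual hardware API.

```python
import time

# Hypothetical hardware interfaces; real sensor and EMS APIs will differ.
class Sensor:
    def event_detected(self) -> bool:
        """Return True when the tracked event (e.g., a falling object) is seen."""
        raise NotImplementedError

class EMSStimulator:
    def pulse(self, channel: int, intensity: float, duration_ms: int) -> None:
        """Deliver a brief stimulation pulse that contracts the target muscle."""
        raise NotImplementedError

def reaction_loop(sensor: Sensor, stim: EMSStimulator, poll_hz: int = 1000) -> None:
    """Fire the muscle as soon as the sensor sees the event.

    A camera or IMU can flag an event within a few milliseconds, while a human
    visual reaction takes on the order of 200 ms, so triggering EMS here moves
    the hand before the wearer's own reflex would.
    """
    period = 1.0 / poll_hz
    while True:
        if sensor.event_detected():
            stim.pulse(channel=0, intensity=0.6, duration_ms=150)  # contract the forearm
        time.sleep(period)
```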

With This AI, 60 Percent of Patients Who Had Surgery Could Have Avoided It

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/more-than-60-of-patients-could-have-avoided-surgery-if-this-ai-had-been-their-doctor

Machine learning algorithms that combine clinical and molecular data are the “wave of the future,” experts say

A man walks into a doctor’s office for a CT scan of his gallbladder. The gallbladder is fine but the doctor notices a saclike pocket of fluid on the man’s pancreas. It’s a cyst that may lead to cancer, the doctor tells him, so I’ll need to cut it out to be safe.

It’ll take three months to recover from the surgery, the doctor adds—plus, there’s a 50 percent chance of surgical complications, and a 5 percent chance the man will die on the table.

An estimated 800,000 patients in the United States are incidentally diagnosed with pancreatic cysts each year, and doctors have no good way of telling which cysts harbor a deadly form of cancer and which are benign. This ambiguity results in thousands of unnecessary surgeries: One study found that up to 78 percent of cysts referred for surgery turned out not to be cancerous.

Now there’s a machine learning algorithm that could help. In a study described today in the journal Science Translational Medicine, surgeons and computer scientists at Johns Hopkins University present a test called CompCyst (for comprehensive cyst analysis) that is significantly better than today’s standard of care, namely doctor observations and medical imaging, at predicting whether patients should be sent home, monitored, or sent to surgery.
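As a rough illustration of what such a test involves, here is a minimal sketch of a three-way classifier trained on combined clinical and molecular features. The feature names, the synthetic data, and the gradient-boosting model are assumptions made for illustration; this is not the published CompCyst pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Assumed feature layout: a few clinical variables (cyst size, age, symptom
# score) concatenated with binary molecular markers from cyst fluid.
rng = np.random.default_rng(0)
n_patients = 500
clinical = rng.normal(size=(n_patients, 3))            # cyst size, age, symptom score
molecular = rng.integers(0, 2, size=(n_patients, 8))   # mutation markers (0/1)
X = np.hstack([clinical, molecular])
y = rng.integers(0, 3, size=n_patients)                # 0 = discharge, 1 = monitor, 2 = surgery

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test),
                            target_names=["discharge", "monitor", "surgery"]))
```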

Mind Meld: Tetris Players Electronically Connect Their Brains

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/bionics/video-game-players-electronically-connect-their-brains

Humans collaborate using brain-to-brain communication to play a video game

Have you ever felt a strong emotion, such as elation from taking in a scenic view, and wanted to share it with the people around you? Not “share” as in tell them about it or post it on social media, but actually share it—like beam the feeling from your brain into theirs?  

Researchers at the University of Washington in Seattle say they would like to give humans that kind of brain-to-brain interaction, and have demonstrated a baby step toward that goal. In a set of experiments described in the journal Scientific Reports, the researchers enabled small groups of people to communicate collaboratively using only their minds.

In the experiments, participants played a Tetris-like video game. They worked in groups of three to decide whether to rotate a digital shape as it fell toward rows of blocks at the bottom of the screen. 

The participants could not see, hear, or communicate with each other in any way other than through thinking. Their thoughts—electrical signals in the brain—were read using electroencephalography (EEG) and delivered using transcranial magnetic stimulation (TMS).

The messages sent between participants’ brains were limited to “yes” and “no”. But the researchers who developed the system hope to expand upon it to enable the sharing of more complex information or even emotions. “Imagine if you could make a person feel something,” says Andrea Stocco, an assistant professor at the University of Washington, who collaborated on the experiments.

We already try to elicit emotions from each other—to varying degrees of success—using touch, words, pictures and drugs. And brain stimulation techniques such as TMS have for more than a decade been used to treat psychiatric disorders such as depression. Sharing emotions using brain-to-brain interaction is an extension of these existing practices, Stocco says. “It might sound creepy, but it’s important,” he says.

Stocco’s experiments with the technology, called brain-to-brain interface, or BBI, are the first demonstration of BBI between more than two people, he and his colleagues say. 

To be accurate, the technology should probably be called brain-to-computer-to-computer-to-brain interface. The brain signals of one person are recorded with EEG and decoded for their meaning (computer number one). The message is then re-coded and sent to a TMS device (computer number two), which delivers electrical stimulation to the brain of the recipient.
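A minimal sketch of that chain might look like the following, with placeholder callables standing in for the real EEG decoder (computer number one) and the TMS trigger (computer number two). The names and structure are illustrative assumptions, not the team’s actual software.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class BBIChain:
    """Brain-to-computer-to-computer-to-brain chain, as described above."""
    decode_eeg: Callable[[np.ndarray], bool]   # computer #1: EEG window -> yes/no decision
    fire_tms: Callable[[], None]               # computer #2: trigger the TMS coil

    def transmit(self, sender_eeg_window: np.ndarray) -> None:
        decision = self.decode_eeg(sender_eeg_window)   # read and decode the sender's intent
        message = "YES" if decision else "NO"           # re-code for the stimulation computer
        if message == "YES":
            self.fire_tms()   # the receiver perceives a visual cue only for "yes"
```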

In Stocco’s experiments, this chain of communication has to happen in the amount of time it takes a Tetris-style block to drop (about 30 seconds—it’s slow Tetris). Participants work in groups of three: two senders and one receiver. The senders watch the video game and each decide whether to rotate the block or not. Then they send their yes or no decision to the receiver, who sits in a different room, and is charged with taking action to rotate the block or not, based on the senders’ messages. (Receivers can only see half the game—the piece that is falling—and not the rows of blocks into which the piece is supposed to fit, so they depend on the senders’ advice.) 

If senders want to rotate the block, they focus their attention on an area of their screen that says “yes” with an LED flashing beneath it at 17 hertz. If they do not want to rotate it, they focus their attention on an area of the screen that says “no” with an LED flashing beneath it at 15 hertz.

The difference in brain activity caused by looking at these two different rates of flashing light is fairly easy to detect with EEG, says Stocco. A computer then evaluates the brainwave patterns, determines whether they correspond to “yes” or “no,” and sends that information to the third person in the group (the receiver).
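One simple way to make that determination is to compare spectral power at the two flicker frequencies. The sketch below does this with a plain FFT on a synthetic EEG window; the sampling rate, window length, and simple power comparison are assumptions, and real steady-state visual decoders typically use more robust methods than this.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, freq: float, width: float = 0.5) -> float:
    """Power of the signal within +/- width Hz of a target frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= freq - width) & (freqs <= freq + width)
    return float(spectrum[mask].sum())

def decode_attention(eeg_window: np.ndarray, fs: float = 250.0) -> str:
    """Return 'yes' if the 17 Hz flicker dominates the EEG response, else 'no'."""
    p_yes = band_power(eeg_window, fs, 17.0)   # the "yes" LED flashes at 17 Hz
    p_no = band_power(eeg_window, fs, 15.0)    # the "no" LED flashes at 15 Hz
    return "yes" if p_yes > p_no else "no"

# Quick check with a synthetic 2-second signal dominated by a 17 Hz component:
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
fake_eeg = np.sin(2 * np.pi * 17.0 * t) + 0.3 * np.random.randn(t.size)
print(decode_attention(fake_eeg, fs))  # expected: "yes"
```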

The receiver wears a TMS device that noninvasively induces gentle electrical stimulation in the brain. If the message from a sender is “yes,” the TMS device stimulates the receiver’s brain in a way that produces some kind of visual cue, such as a flash of color. If the message is “no,” the receiver gets no visual cue. Messages to the receiver from the two senders arrive one after the other, in the same order each time.

To make things interesting, the researchers asked one of the two senders to frequently transmit the wrong answer. All the receivers noticed the pattern of bad advice, and chose to listen to the more accurate sender. 

Compared with the complexity of human thought, this binary form of brain-to-brain communication is just one step toward something that might be useful outside the lab. No emotions were shared between participants, aside from perhaps a little nostalgia among the participants who grew up with the real Tetris. 

To reach a higher level of sophistication in brain-to-brain communication, researchers will likely need equipment that can read brain activity with more spatial resolution than EEG, and can stimulate with more specificity than TMS. For this, some BBI researchers are turning to fMRI and ultrasound, Stocco says.

BBI work is slow going. Stocco and his University of Washington colleague Rajesh Rao first demonstrated a form of BBI in 2013. Other groups followed shortly after. Now, six years later, researchers working on the technology are only inches from where they started. “There are maybe four or five groups working on this globally, so we get maybe one paper a year,” says Stocco.

Why So Many Medical Advances Never Make it to Mainstream Medicine

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/devices/qa-the-future-of-digital-medicine

Eric Topol on why promising technologies like smartphone ultrasound struggle to achieve widespread adoption

For more than a decade, engineers have been innovating their way through the nascent field of digital medicine: creating pills with sensors on them, disease-detecting facial recognition software, tiny robots that swim through the body to perform tasks, smartphone-based imaging and diagnostics, and sensors that track our vitals. But for all that creativity, only a small portion of these inventions get widely adopted in health care. In an essay published today in Science Translational Medicine, Eric Topol, a cardiologist, geneticist, digital medicine researcher, and director of the Scripps Research Translational Institute, discusses the state of digital medicine, ten years in. IEEE Spectrum caught up with Topol to ask a few questions for our readers.

Smart Speaker Listens for Audible Signs of Cardiac Arrest

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/smart-speaker-listens-for-cardiac-arrest

This AI system detects unique gasping sounds that occur when the heart stops beating

When a person’s heart malfunctions and suddenly stops beating, death can occur within minutes—unless someone intervenes. A bystander administering CPR right away can triple a person’s chances of surviving a cardiac arrest.

Last July, we described a smart watch designed to detect cardiac arrest and summon help. Now, a team at the University of Washington has developed a totally contactless AI system that listens to detect the telltale sound of agonal breathing—a unique guttural gasping sound made by 50 percent of cardiac arrest patients.

The smart speaker system, described today in the journal npj Digital Medicine, detected agonal breathing events 97 percent of the time with almost no false alarms in a proof-of-concept study.
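In broad strokes, such a detector is a binary audio classifier running on short clips of bedroom sound. Here is a minimal sketch that turns clips into log-spectrogram features and fits a simple model on toy data; the feature choice, clip length, and logistic-regression classifier are assumptions for illustration, not the published system.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

def audio_features(clip: np.ndarray, fs: int = 16000) -> np.ndarray:
    """Summarize a short audio clip as average log power per frequency bin."""
    _, _, sxx = spectrogram(clip, fs=fs, nperseg=512, noverlap=256)
    return np.log(sxx + 1e-10).mean(axis=1)

# Toy training data: random stand-ins for agonal-breathing clips (label 1)
# and ordinary bedroom audio such as snoring or silence (label 0).
rng = np.random.default_rng(0)
fs, clip_len = 16000, 16000 * 3                      # 3-second clips
clips = rng.normal(size=(200, clip_len))
labels = rng.integers(0, 2, size=200)

X = np.stack([audio_features(c, fs) for c in clips])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

def monitor(clip: np.ndarray) -> bool:
    """True if the classifier flags the clip as possible agonal breathing."""
    return bool(clf.predict(audio_features(clip, fs).reshape(1, -1))[0])
```

In a deployed system, a positive prediction would trigger an alarm or a call for help rather than simply returning a flag.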

The team imagines using the tool—which can run on Amazon’s Alexa or Google Home, among other devices—to passively monitor bedrooms for the sound of agonal breathing and, if detected, set off an alarm.