Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/smart-speaker-listens-for-cardiac-arrest
This AI system detects unique gasping sounds that occur when the heart stops beating
When a person’s heart malfunctions and suddenly stops beating, death can occur within minutes—unless someone intervenes. A bystander administering CPR right away can triple a person’s chances of surviving a cardiac arrest.
Last July, we described a smart watch designed to detect cardiac arrest and summon help. Now, a team at the University of Washington has developed a completely contactless AI system that listens for the telltale sound of agonal breathing—a distinctive guttural gasping made by 50 percent of cardiac arrest patients.
The smart speaker system, described today in the journal npj Digital Medicine, detected agonal breathing events 97 percent of the time with almost no false alarms in a proof-of-concept study.
The team imagines using the tool—which can run on Amazon’s Alexa or Google Home, among other devices—to passively monitor bedrooms for the sound of agonal breathing and, if detected, set off an alarm.
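A proof-of-concept detector like this one is judged by its sensitivity (97 percent here) weighed against its false-alarm rate. As a minimal illustration of how those two quantities are computed from labeled audio clips, here is a sketch in Python; the function and the toy labels are illustrative, not the study's data or code:

```python
# Minimal sketch of sensitivity vs. false-positive rate for a
# binary audio classifier. Labels and predictions are invented
# for illustration: 1 = agonal breathing, 0 = other audio.

def detector_metrics(labels, predictions):
    """Return (sensitivity, false_positive_rate) for paired 0/1 lists."""
    tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    tn = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
    sensitivity = tp / (tp + fn)          # fraction of true events caught
    false_positive_rate = fp / (fp + tn)  # fraction of benign audio flagged
    return sensitivity, false_positive_rate
```

For a bedside monitor that runs all night, every night, the false-positive rate matters as much as sensitivity: even a tiny per-clip false-alarm probability compounds over thousands of hours of snoring and other benign sounds.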
The first study of a new treatment in humans demonstrates a noninvasive, harmless cancer killer
Tumor cells that spread cancer via the bloodstream face a new foe: a laser beam, shined from outside the skin, that finds and kills these metastatic little demons on the spot.
In a study published today in Science Translational Medicine, researchers revealed that their system accurately detected these cells in 27 out of 28 people with cancer, with a sensitivity that is about 1,000 times better than current technology. That’s an achievement in itself, but the research team was also able to kill a high percentage of the cancer-spreading cells, in real time, as they raced through the veins of the participants.
If developed further, the tool could give doctors a harmless, noninvasive, and thorough way to hunt and destroy such cells before those cells can form new tumors in the body. “This technology has the potential to significantly inhibit metastasis progression,” says Vladimir Zharov, director of the nanomedicine center at the University of Arkansas for Medical Sciences, who led the research.
Post Syndicated from Amos Zeeberg original https://spectrum.ieee.org/the-human-os/biomedical/devices/engineering-a-medical-revolution
Manufacturing immunotherapies in machines, instead of by hand, could reduce errors and improve access to these promising cancer drugs
When Novartis’s cancer treatment Kymriah was approved by the U.S. Food and Drug Administration in 2017, it signaled the arrival of CAR-T, a much-hyped form of therapy that proved stunningly effective at curing some hard-to-treat forms of cancer in trials. Like other CAR-T treatments, though, Kymriah is difficult to make and is produced specially for each patient. Novartis set its price at US $400,000 per treatment.
But before long, Novartis was simply giving Kymriah away for free to some patients. The company couldn’t consistently manufacture the drug to meet the specifications spelled out in the FDA’s approval. Doses that were outside of those specifications couldn’t legally be sold.
Novartis’s costly manufacturing issues are representative of the state of cell therapy, which includes CAR-T and other treatments in which living cells are injected into a patient. CAR-T is one of the most promising fields in medicine, and at the end of 2018 there were more than 400 CAR-T trials underway around the world.
But for all the promise, the challenges of manufacturing cell therapies are making it hard to deliver the actual treatments. “We’re still banging into fundamental challenges of manufacturing a living cell–based product,” says Scott Burger, a long-time consultant in the cell-therapy industry.
Gates/Bezos-funded charity champions research into methods for early detection of Alzheimer’s disease
The Alzheimer’s Drug Discovery Foundation (ADDF), a public charity co-founded by Bill Gates, Jeff and MacKenzie Bezos, and former Estée Lauder CEO Leonard Lauder, just announced the first award recipients of its $50 million Diagnostics Accelerator research program.
The four recipients, chosen from a pool of 300 applicants across 30 countries, are developing reliable, cost-effective ways to diagnose Alzheimer’s disease, including one that will use machine learning to detect early signs of concern through an eye scan.
Using this technique, microrobots could deliver stem cells to hard-to-reach places
Engineers have built microrobots to perform all sorts of tasks in the body, and can now add to that list another key skill: delivering stem cells. In a paper published today in Science Robotics, researchers describe propelling a magnetically controlled, stem-cell-carrying bot through a live mouse.
Under a rotating magnetic field, the microrobots moved with rolling and corkscrew-style locomotion. The researchers, led by Hongsoo Choi at the Daegu Gyeongbuk Institute of Science & Technology (DGIST), in South Korea, also demonstrated their bot’s moves in slices of mouse brain, in blood vessels isolated from rat brains, and in a multi-organ-on-a-chip.
The invention provides an alternative way to deliver stem cells, which are increasingly important in medicine. Such cells can be coaxed into becoming nearly any kind of cell, making them great candidates for treating neurodegenerative disorders such as Alzheimer’s.
Post Syndicated from Eliza Strickland original https://spectrum.ieee.org/the-human-os/biomedical/devices/a-wearable-that-helps-women-get-not-get-pregnant
The in-ear sensor from Yono Labs will soon predict a woman’s fertile days
Women’s bodies can be mysterious things—even to the women who inhabit them. But a wearable gadget called the Yono aims to replace mystery with knowledge derived from statistics, big data, and machine learning.
A woman who is trying to get pregnant may spend months tracking her ovulation cycle, often making a daily log of biological signals to determine her few days of fertility. While a plethora of apps promise to help, several studies have questioned these apps’ accuracy and efficacy.
Meanwhile, a woman who is trying to avoid pregnancy by the “fertility awareness method” may well not avoid it, since the method is only 75 percent effective.
Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/bionics/darpa-funds-ambitious-neurotech-program
The N3 program aims to develop wearable devices that let soldiers communicate directly with machines.
DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program has awarded funding to six groups attempting to build brain-machine interfaces that match the performance of implanted electrodes but with no surgery whatsoever.
By simply popping on a helmet or headset, soldiers could conceivably command control centers without touching a keyboard; fly drones intuitively with a thought; even feel intrusions into a secure network. While the tech sounds futuristic, DARPA wants to get it done in four years.
“It’s an aggressive timeline,” says Krishnan Thyagarajan, a research scientist at PARC and principal investigator of one of the N3-funded projects. “But I think the idea of any such program is to really challenge the community to push the limits and accelerate things which are already brewing. Yes, it’s challenging, but it’s not impossible.”
The N3 program fits right into DARPA’s high-risk, high-reward biomedical tech portfolio, including programs in electric medicine, brain implants, and electrical brain training. And the U.S. defense R&D agency is throwing big money at the program: Though a DARPA spokesperson declined to comment on the amount of funding, two of the winning teams are reporting eye-popping grants of $19.48 million and $18 million.
Plenty of noninvasive neurotechnologies already exist, but not at the resolution necessary to yield high-performance wearable devices for national security applications, says N3 program manager Al Emondi of DARPA’s Biological Technologies Office.
Following a call for applications back in March, a review panel narrowed the pool to six teams across industry and academia, Emondi told IEEE Spectrum. The teams are experimenting with different combinations of magnetic fields, electric fields, acoustic fields (ultrasound) and light. “You can combine all these approaches in different, unique and novel ways,” says Emondi. What the program hopes to discover, he adds, is which combinations can record brain activity and communicate back to the brain with the greatest speed and resolution.
Specifically, the program is seeking technologies that can read and write to brain cells in just 50 milliseconds round-trip, and can interact with at least 16 locations in the brain at a resolution of 1 cubic millimeter (a space that encompasses thousands of neurons).
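As a back-of-the-envelope check on those targets, the sketch below turns them into derived numbers. The latency, site count, and voxel size come from the program description above; the cortical neuron density is an assumed order-of-magnitude figure, not a number from DARPA:

```python
# Back-of-the-envelope arithmetic on the N3 performance targets.
# NEURONS_PER_MM3 is an assumed order-of-magnitude density for
# cortex, not a figure from the program announcement.

ROUND_TRIP_MS = 50          # read + write latency target, milliseconds
SITES = 16                  # minimum number of brain locations
VOXEL_MM3 = 1.0             # spatial resolution per site, cubic millimeters
NEURONS_PER_MM3 = 50_000    # assumed cortical neuron density

# Each 1 mm^3 voxel lumps together tens of thousands of neurons,
# which is why the text says "thousands of neurons" per location.
neurons_per_site = int(VOXEL_MM3 * NEURONS_PER_MM3)

# A 50 ms round trip allows a closed read-stimulate loop to run
# about 20 times per second.
updates_per_second = 1000 / ROUND_TRIP_MS
```

Framed this way, the targets describe population-level recording at interactive speeds, not single-neuron resolution, which is what makes a noninvasive approach at least conceivable.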
The four-year N3 program will consist of three phases, says Emondi. In the current phase 1, teams have one year to demonstrate the ability to read (record) and write to (stimulate) brain tissue through the skull. Teams that succeed will move to phase 2. Over the ensuing 18 months, those groups will have to develop working devices and test them on living animals. Any group left standing will proceed to phase 3—testing their device on humans.
Four of the teams are developing totally noninvasive technologies. A team from Carnegie Mellon University, for example, is planning to use ultrasound waves to guide light into and out of the brain to detect neural activity. They plan to use interfering electrical fields to write to specific neurons.
The three other teams proposing non-invasive techniques include Johns Hopkins University’s Applied Physics Laboratory, Thyagarajan’s team at PARC, and a team from Teledyne Technologies, a California-based industrial company.
The two remaining teams are developing what DARPA calls “minutely invasive” technologies which, as we described in September, require no incisions or surgery but may involve technology that is swallowed, sniffed, injected or absorbed into the human body in some way.
Rice University, for example, is developing a system that requires exposing neurons to a viral vector to deliver instructions for synthetic proteins that indicate when a neuron is active. Ohio-based technology company Battelle is developing a brain-machine interface that relies on magnetoelectric nanoparticles injected into the brain.
“This is uncharted territory for DARPA, and the next step in brain-machine interfaces,” says Emondi. “If we’re successful in some of these technologies…that’s a whole new ecosystem that doesn’t exist right now.”
Post Syndicated from Jean Kumagai original https://spectrum.ieee.org/biomedical/imaging/xray-detection-may-be-perovskites-killer-app
The wonder crystal could yield imagers that are far more sensitive than commercial detectors
The crystalline material known as perovskite makes for a superefficient photovoltaic cell. Researchers are also exploring perovskites’ potential in transistors and LED lighting. But there’s yet another use for this wonder crystal, and it may be the most promising of all: as X-ray detectors.
Dozens of groups around the world are exploring this area, and major X-ray imaging manufacturers, including Samsung and Siemens, are considering perovskite for their next-generation machines. Compared with today’s X-ray imagers, detectors based on perovskite compounds are far more sensitive and use less power. And for certain applications, the materials can be tuned to emit color when irradiated. Lab prototypes of imagers that use perovskite have been demonstrated to be at least 100 times as efficient as their conventional counterparts.
“Interest in perovskite crystals for imaging emerged out of all the recent enthusiasm to get better solar panels,” says I. George Zubal, director of the nuclear medicine and computed tomography programs at the National Institute of Biomedical Imaging and Bioengineering (NIBIB), in Bethesda, Md. His program funds research into new imaging devices, procedures, and software, including groups looking at perovskite X-ray detection.
What makes perovskites so useful for X-ray detection is the same thing that makes them good for solar cells: They’re excellent at converting light into electrical charge. In a direct detector, X-ray photons are converted into electrons inside a semiconductor. In a scintillator imager, the X-ray photons are first converted into visible light, which is then converted into electrons by a photodiode array.
Conventional direct X-ray detectors have higher resolution than do scintillators, but they take longer to acquire an image. That’s because the semiconductor material they typically use—amorphous selenium—isn’t great at stopping X-rays. Scintillator imagers, on the other hand, are more sensitive than direct X-ray imagers—meaning you need fewer X-rays to create the image—but yield a lower-quality image.
Perovskites could be the answer to the main shortcomings of current X-ray imagers, says Zubal. “Perovskite stops a lot more of the X-rays [compared to amorphous selenium], and being a semiconductor it should give us higher-resolution images, showing the small structures of objects…. You’re also lowering the radiation dose to the patient, which is another main reason for the NIBIB’s enthusiasm.”
In one experiment, Xiaogang Liu’s group at the National University of Singapore started with a commercial flat-panel X-ray detector that used bulk scintillators of thallium-doped cesium iodide, CsI(Tl). The group removed the CsI(Tl) layer and replaced it with a layer of nanocrystals of cesium lead bromide—an inorganic perovskite—directly coating them onto photodiode arrays. When coupled with photomultiplier tubes, the resulting device had a detection limit that was just 1/400 that of medical X-ray machines, as the group reported in Nature last September. Several X-ray manufacturers are now testing nanocrystal scintillators using his group’s approach, Liu says.
Liu credits grad student Qiushui Chen for coming up with the idea of using perovskite nanocrystals in this way. “A lot of our recent work involves rare-earth materials, which is what conventional scintillators use,” Liu says. To form the perovskite layer, the researchers mixed the nanocrystals with liquid cyclohexane and then spin-coated the mixture onto a flexible substrate.
“We got a little bit lucky, because we discovered that the nanocrystals had to be deposited on the substrate through a solid-state process,” Liu says. “If the particles are dispersed in solution, it’s no good.”
Researchers have also demonstrated perovskites in direct X-ray detectors with vastly superior performance to that of commercial imagers. In general, says the NIBIB’s Zubal, direct X-ray detectors are “highly more desirable” than scintillators because they avoid the extra step of converting visible light into electrons. The projects that NIBIB is supporting involve direct detection.
Jinsong Huang and his group at the University of North Carolina at Chapel Hill have been studying direct X-ray detectors based on perovskites since 2014. (Huang also works on perovskite photovoltaics.) In one experiment, they coated methylammonium lead tribromide—a common perovskite compound—onto a regular X-ray detector that used amorphous silicon to convert the X-rays to electrons. The addition of the perovskite layer made it 3,000 times as sensitive.
“When you want extremely efficient and sensitive detectors, you need to count single photons, and that’s not easy,” Huang explains. “We showed that we can make materials that allow you to distinguish the signal from the noise.” Huang recently created a startup to commercialize radiation detectors based on his group’s work.
There are still a number of hurdles to clear before perovskite scintillators or direct X-ray imagers will be ready for market. A big obstacle is that some perovskites are sensitive to moisture. Liu has developed a method for coating each nanocrystal with silicon dioxide and is exploring other protective methods. Perovskite layers can also be encapsulated in glass, much like traditional solar cells are.
But in general, perovskite X-ray imagers won’t need to be quite as hardy as perovskite PVs or LEDs, because the environmental conditions they’ll face are more benign. Solar panels need to perform even after being exposed to the elements for 20 years, while LEDs are exposed to heat and, of course, light, both of which can degrade a perovskite compound. X-ray machines, by contrast, are typically used in climate-controlled settings. For that reason, Liu and Huang believe perovskite X-ray detectors will be commercialized much more quickly than other perovskite applications.
Huang predicts that perovskite detectors will open up new applications for X-rays, expanding what’s already a multibillion-dollar industry. More efficient imagers would draw less power, lending themselves to portable machines that run on batteries. Liu’s group has also demonstrated a variety of tunable, color-emitting perovskite nanocrystals. That work could lead to multicolor X-ray displays, which are impossible with today’s scintillator X-ray machines.
And because they use flexible substrates, perovskite imagers could conform to whatever’s being scanned; anyone who has experienced the discomfort of a mammogram will appreciate that feature. Faster, more sensitive imagers would also reduce the radiation from dental and medical X-rays and airport security scanners.
“Once we can make X-rays much safer, the market will change because you’ll be able to put the detectors everywhere,” Huang says.
Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/devices/algorithms-play-doctor-in-brain-stimulation
These algorithms spot mood changes before you do, and could someday tell a stimulator to zap your brain to treat disorders
A man with depression is driving to work when his mood darkens, and the familiar inklings of anxiety begin to creep in. His brain cells start to fire in a pattern that has, in the past, led him down a dark, dangerous mental road. But this man has a set of electrodes implanted in his brain, and wireless software nearby that’s closely monitoring his neural activity. Algorithms recognize the shift in his brain waves and order up a therapeutic dose of electrical stimulation, zapping the faulty circuits. The man arrives at work calm and stable.
The technology in this scenario doesn’t exist yet, but it’s the vision of Maryam Shanechi, an electrical engineer at the University of Southern California’s Viterbi School of Engineering, and Edward Chang, a neurosurgeon at the University of California, San Francisco. Shanechi presented their progress this week in Nashville, Tennessee, at a neurotechnology meeting held by DARPA, the research arm of the U.S. Department of Defense.
So far, Shanechi and her team have successfully developed algorithms that decoded the brain activity associated with mood changes in seven people. Now, they’re figuring out how to stimulate the brain to affect those mood changes, she reported at the meeting.
When the two pieces of the technology come together, they would form a closed-loop system that puts stimulation therapy decisions in the hands of an algorithm. “We are developing a precise, personalized therapy that takes readings of brain activity, and, based on that, makes decisions on stimulation parameters,” Shanechi said in her presentation in Nashville on Wednesday.
Participants were less accurate and became more tired when completing a task with the HoloLens, compared to the naked eye
With the right device, some programming, and the flick of a switch, augmented reality (AR) can change the world—or at least change what we see a few centimeters in front of our eyes. But while the industry rapidly expands and works hard to improve the AR experience, it must also overcome an important natural barrier: the way in which our eyes focus on objects.
A recent study shows that our eyes are not quite up to the task of simultaneously focusing on two separate objects—one real and one not—in close proximity to one another.
The results, published 6 May in IEEE Transactions on Biomedical Engineering, suggest that accomplishing an AR-assisted task that’s close at hand (within two meters) and requires a high level of precision may not be feasible with existing technology. This could be unwelcome news for researchers attempting to design certain AR-assisted programs.
Engineers have designed a scheme to let thousands of brain implants talk at up to 10 megabits per second
Brain-computer interfaces have managed some amazing feats: allowing paralyzed people to type words and move a robot using only their minds, to name two examples. Brown University neuroengineering professor Arto Nurmikko has had a hand in some of those developments, but even he says the technology is at only a rudimentary stage—the equivalent of the computer understanding the brain’s intention to bend a single finger.
“We’re trying to go from the bending-of-the-finger paradigm to tying shoe laces and even to the concert pianist level. That requires lots more spatial and temporal resolution from an electronic brain interface,” Nurmikko says. His team is hoping that kind of resolution will come along with the transition from a single, hardwired neural implant to a thousand or more speck-size neural implants that wirelessly communicate with computers outside the brain. At the IEEE Custom Integrated Circuits Conference, engineers from Brown University, Qualcomm, and the University of California San Diego presented the final part of a communications scheme for these implants. It allows bidirectional communication between the implants and an external device with an uplink rate of 10 megabits per second and a downlink rate of 1 Mb/s.
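Dividing the aggregate uplink across a thousand implants gives a feel for the per-implant budget. The arithmetic below is illustrative; the implant count and per-sample bit width are assumptions, not figures from the paper:

```python
# Rough per-implant bandwidth budget if the 10 Mb/s aggregate
# uplink is shared by ~1,000 implants. The implant count and the
# 10-bit sample size are illustrative assumptions.

UPLINK_BPS = 10_000_000     # 10 Mb/s aggregate uplink
DOWNLINK_BPS = 1_000_000    # 1 Mb/s aggregate downlink
N_IMPLANTS = 1000           # "a thousand or more" speck-size implants
BITS_PER_SAMPLE = 10        # assumed ADC resolution per neural sample

# Each implant gets about 10 kb/s of the shared uplink.
per_implant_bps = UPLINK_BPS / N_IMPLANTS

# At 10 bits per sample, that is roughly 1,000 samples per second
# per implant: enough to report detected spike events, but far too
# little for raw broadband waveforms, which would need ~10x more.
samples_per_sec = per_implant_bps / BITS_PER_SAMPLE
```

Budgets like this are why wireless implant schemes typically do spike detection or compression on the implant itself rather than streaming raw voltage traces.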
Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/devices/kidneyx-prize-winners-redesign-dialysis
Artificial kidneys and a wearable that prevents blood clots were among the winning designs
In the 1960s, a pacemaker was the size of a microwave and a dialysis machine was the size of a refrigerator. Today, a pacemaker is the size of a vitamin and a dialysis machine is, well, the size of a refrigerator. They do have nice LED displays though.
“Kidney disease is underserved,” says Shuvo Roy, a bioengineer at the University of California, San Francisco. Although failed kidneys kill more people each year than breast or prostate cancer, “the field has not seen much innovation in the last 50 years,” says Roy.
The U.S. Department of Health and Human Services and the American Society of Nephrology want to change that. The organizations have teamed up to bring money and attention to the disease with the KidneyX: Redesign Dialysis competition. Started in 2018, the competition challenges innovators in any field to propose tools and technologies that could enable the design of new artificial kidney devices.
Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/biomedical/devices/koreas-new-futuristic-hospital
Hologram visitors, indoor navigation, facial recognition security, and voice-controlled rooms are coming to a hospital in South Korea
When Yonsei University Health System opens its newest hospital next year, in Yongin, about 25 miles outside of Seoul, it will be decked out with some of tech’s hottest gadgets.
Very sick patients in isolation rooms will be able to visit with holograms of their loved ones. Visitors will find their way around the hospital using an augmented reality (AR)-based indoor navigation system. Authorized medical workers will use facial recognition to enter secure areas. Patients will be able to call a nurse and control their bed, lights, and TV with an Alexa-style voice assistant.
That’s the vision, at least. Yonsei and Korean telecommunications company SK Telecom last week jointly announced that they had signed a memorandum of understanding to build technology for the futuristic hospital, scheduled to open in February 2020. SK Telecom will support the technology with a 5G network, and is considering securing it with quantum cryptography, according to the announcement.
Post Syndicated from University of Maryland original https://spectrum.ieee.org/biomedical/devices/saving-liveswith-robots
University of Maryland engineer wants to equip ambulances with medical robots enhanced by machine learning to help trauma patients
At the moment of traumatic injury, no physician is present. Emergency medical technicians respond first—they stabilize the patient during ambulance transport, while specialized trauma teams prepare to receive the patient at a hospital.
That is, if the patient makes it there.
“The ride to the hospital is the riskiest part for the trauma patient,” says Axel Krieger, assistant professor of mechanical engineering at the University of Maryland, who specializes in medical robotics and computer vision. Krieger says that estimates suggest one-third of trauma fatalities likely would have survived if they had access to hospital-level care sooner. He aims to help make that level of care standard on the ambulance ride—a long way from his undergraduate days in Germany, where he studied automotive engineering.
To improve the care trauma patients receive during the ambulance ride, Krieger wants to equip the ambulance with a medical robot enhanced by machine learning (ML). “One of the biggest dangers during the ambulance ride is undiagnosed, internal hemorrhagic bleeding,” he says. “It’s currently undetectable with methods available on the ambulance ride. You can’t see it.”
But a robot can.
“Imagine you have a patient in the emergency vehicle, and a robot scans the patient and obtains ultrasound images,” says Krieger, who is a member of the Maryland Robotics Center. “This can provide a critical level of life-saving diagnosis and care not yet possible during an emergency ambulance ride.”
The robot scans and visualizes the injury, then compares and analyzes the scans with its ML algorithm—which was trained using data from similar real-life patient images. It focuses on anatomic areas known to be especially vulnerable to hidden injury and bleeding—such as the pelvic area and space between the lungs, spleen, and liver—to determine severity of wounds based on location, depth, and interaction with vital anatomy; compute volume of blood loss; and assess hemorrhagic potential. Analyzing these characteristics en route would help produce an injury profile useful in triaging the patient so he or she can receive appropriate care as soon as possible—perhaps in the ambulance, and most certainly upon arrival at the hospital.
To develop this ML-based intelligent scanning robot, Krieger and several A. James Clark School of Engineering graduate students collaborated with trauma experts at the University of Maryland Medical Center’s R Adams Cowley Shock Trauma Center.
The research is still experimental and not yet approved for clinical use with patients—but Krieger believes it will be soon.
“It’s the translational aspect to patient care that really excites me,” he says. “If we can help more people survive, this is the best use of our work.”
Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/biomedical/ethics/is-life-expectancy-finally-topping-out
A slowing rate of improvement hints at a looming asymptote, at least on a population-wide basis
Ray Kurzweil, Google’s chief futurist, says that if you can just hang on until 2029, medical advances will start to “add one additional year, every year, to your life expectancy. By that I don’t mean life expectancy based on your birth date but rather your remaining life expectancy.” Curious readers can calculate what this trend would do to the growth of the global population, but I will limit myself here to a brief review of survival realities.
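The arithmetic behind Kurzweil's claim is simple: if each calendar year adds a full year of remaining life expectancy, your expected years remaining never shrink, and your expected age at death recedes ahead of you indefinitely. A minimal sketch, with arbitrary starting values chosen purely for illustration:

```python
# Sketch of the Kurzweil scenario: from 2029 on, each calendar year
# adds one full year of *remaining* life expectancy. The year lived
# and the year added cancel, so expected years remaining stay
# constant and expected age at death grows by one every year.

def expected_age_at_death(age_in_2029, remaining_in_2029, years_elapsed):
    """Expected age at death after `years_elapsed` years under the
    one-year-added-per-year assumption (starting values arbitrary)."""
    current_age = age_in_2029 + years_elapsed
    remaining = remaining_in_2029  # unchanged: +1 added, -1 lived
    return current_age + remaining
```

Under these assumptions, a 60-year-old with 25 expected years left in 2029 is, a decade later, a 70-year-old with the same 25 years still ahead: the finish line moves as fast as the runner.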
In 1850, the combined life expectancies of men and women stood at around 40 years in the United States, Canada, Japan and much of Europe. Since then the values have followed an impressive, almost perfectly linear increase that nearly doubled them, to almost 80 years. Women live longer in all societies, with the current maximum at just above 87 years in Japan.
The trend may well continue for a few decades, given that life expectancies of elderly people in affluent countries rose almost linearly from 1950 to 2000 at a combined rate of about 34 days per year. But absent fundamental discoveries that change the way we age, this trend to longer life must weaken and finally end. The long-term trajectory of Japanese female life expectancies—from 81.91 years in 1990 to 87.26 years in 2017—fits a symmetrical logistic curve that is already close to its asymptote of about 90 years. The trajectories for other affluent countries also show the approaching ceiling. Available records show two distinct periods of rising longevity: faster linear gains (about 20 years in half a century) until 1950, followed by slower gains since.
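A logistic trajectory of this kind is easy to sketch. The 90-year asymptote is the figure cited above; the rate and midpoint below are values chosen so the curve passes through the two quoted Japanese data points, not parameters from a published fit:

```python
import math

# Sketch of a symmetric logistic curve for Japanese female life
# expectancy. The asymptote (~90 years) is from the text; RATE and
# MIDPOINT are assumed values tuned to pass through the two quoted
# data points (81.91 years in 1990, 87.26 years in 2017).

ASYMPTOTE = 90.0   # approximate ceiling, in years
RATE = 0.0424      # growth rate per year (assumed)
MIDPOINT = 1935.4  # year at half the asymptote (assumed)

def life_expectancy(year):
    """Logistic life expectancy, flattening toward the asymptote."""
    return ASYMPTOTE / (1.0 + math.exp(-RATE * (year - MIDPOINT)))
```

The key property is visible immediately: pushing the curve out to 2100 yields a value still below 90, so under this model each additional decade buys ever smaller gains.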
If we are still far from the limit to the human life-span, then the largest survival gains should be recorded among the oldest people. This was indeed the case for studies conducted in France, Japan, the United States, and the United Kingdom from the 1970s to the early 1990s. Since then, however, the gains have leveled off.
There may be no specific genetically programmed limit to life-span—much as there is no genetic program that limits us to a specific running speed. But life-span is a bodily characteristic that arises from the interaction of genes with the environment. Genes may themselves introduce biophysical limits, and so can environmental effects, such as smoking.
The world record life-span is the 122 years claimed for Jeanne Calment, a Frenchwoman who died in 1997. Strangely, after more than two decades, she still remains the oldest survivor ever, and by a substantial margin. (Indeed, the margin is so big as to be suspicious: Her age and even her identity are in question.) The second oldest supercentenarian died at 119, in 1999, and since that time there have been no survivors beyond the 117th year.
And if you think that you have a high chance to make it to 100 because some of your ancestors lived that long, you should know that the estimated heritability of life-span is modest, just between 15 and 30 percent. Given that people tend to marry others like themselves, a phenomenon known as assortative mating, the true heritability of human longevity is probably even lower than that.
Of course, as with all complex matters, there is always room for different interpretation of published statistical analyses. Kurzweil hopes that dietary interventions and other tricks will extend his own life until such time as major scientific advances can preserve him forever. It is true that there are ideas on how such preservation might be achieved, among them the rejuvenation of human cells by extending their telomeres, the nucleotide sequences at the ends of a chromosome that fray with age. If it works, maybe it can lift the realistic maximum well above 125 years.
But in 2019 the best advice I can give to all but a few remarkably precocious readers of these essays is to plan ahead—but not as far ahead as the 22nd century.
This article appears in the May 2019 print issue as “Life-Span and Life Expectancy.”
A brain-computer interface that records signals in the motor cortex can synthesize speech from activity in a user’s brain
Two years ago, a 64-year-old man paralyzed by a spinal cord injury set a record when he used a brain-computer interface (BCI) to type at a speed of eight words per minute.
Today, in the journal Nature, scientists at the University of California, San Francisco, present a new type of BCI, powered by neural networks, that might enable individuals with paralysis or stroke to communicate at the speed of natural speech—an average of 150 words per minute.
Post Syndicated from University of Maryland original https://spectrum.ieee.org/biomedical/imaging/shedding-light-on-the-future-of-lasik
A University of Maryland-developed microscopy technique could eliminate the “surgery” aspect of LASIK
Fischell Department of Bioengineering (BIOE) researchers have developed a microscopy technique that could one day be used to improve LASIK and eliminate the “surgery” aspect of the procedure. Their findings were published in March in Physical Review Letters.
In the 20 years since the FDA first approved LASIK surgery, more than 10 million Americans have had the procedure done to correct their vision. When performed on both eyes, the entire procedure takes about 20 minutes and can rid patients of the need to wear glasses or contact lenses.
While LASIK has a very high success rate, virtually every procedure involves an element of guesswork. This is because doctors have no way to precisely measure the refractive properties of the eye. Instead, they rely heavily on approximations that correlate with the patient's visual acuity—how close to 20/20 he or she can see without the aid of glasses or contacts.
In search of a solution, BIOE Assistant Professor Giuliano Scarcelli and members of his Optics Biotech Laboratory have developed a microscopy technique that could allow doctors to perform LASIK using precise measurements of how the eye focuses light, instead of approximations.
“This could represent a tremendous first for LASIK and other refractive procedures,” Scarcelli said. “Light is focused by the eye’s cornea because of its shape and what is known as its refractive index. But until now, we could only measure its shape. Thus, today’s refractive procedures rely solely on observed changes to the cornea, and they are not always accurate.”
The cornea—the outermost layer of the eye—functions like a window that controls and focuses light that enters the eye. When light strikes the cornea, it is bent—or refracted. The lens then fine-tunes the light’s path to produce a sharp image onto the retina, which converts the light into electrical impulses that are interpreted by the brain as images. Common vision problems, such as nearsightedness or farsightedness, are caused by the eye’s inability to sharply focus an image onto the retina.
To fix this, LASIK surgeons use lasers to alter the shape of the cornea and change its focal point. But, they do this without any ability to precisely measure how much the path of light is bent when it enters the cornea.
To measure the path light takes, one needs to measure a quantity known as the refractive index; it represents the ratio of the velocity of light in a vacuum to its velocity in a particular material.
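The two quantities at play here can be sketched in a few lines: the refractive index as the ratio of light speeds, and Snell's law, which governs how much a ray bends at an interface. This is a generic illustration, not part of the study; the corneal value n ≈ 1.376 is a common textbook figure.

```python
import math

C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s


def refractive_index(v_in_medium):
    """n is the ratio of light's vacuum speed to its speed in the material."""
    return C_VACUUM / v_in_medium


def refraction_angle_deg(n1, n2, incidence_deg):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))


# A ray hitting the cornea (n ~ 1.376) from air at 30 degrees bends
# toward the normal:
print(round(refraction_angle_deg(1.000, 1.376, 30.0), 1))  # ~21.3
```

A small change in the assumed index shifts this bend angle, which is why approximating the index rather than measuring it leaves guesswork in the procedure.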
By mapping the distribution and variations of the local refractive index within the eye, doctors would know the precise degree of corneal refraction. Equipped with this information, they could better tailor the LASIK procedure such that, rather than merely improved vision, patients could expect to walk away with perfect vision, or even vision that tops 20/20.
What's more, doctors might no longer need to cut into the cornea.
“Non-ablative technologies are already being developed to change the refractive index of the cornea, locally, using a laser,” Scarcelli said. “Providing local refractive index measurements will be critical for their success.”
Knowing this, Scarcelli and his team developed a microscopy technique that can measure the local refractive index using Brillouin spectroscopy—a light-scattering technology that was previously used to sense the mechanical properties of tissue and cells without disrupting or destroying either.
“We experimentally demonstrated that, by using a dual Brillouin scattering technology, we could determine the refractive index directly, while achieving three-dimensional spatial resolution,” Scarcelli said. “This means that we could measure the refractive index of cells and tissue at locations in the body—such as the eyes—that can only be accessed from one side.”
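One way to see why two scattering geometries help: the Brillouin frequency shift depends on both the refractive index n and the local sound speed, so a single measurement cannot separate the two. Combining two geometries lets the unknown sound speed cancel. The sketch below uses the classic "platelet" trick, in which a 90-degree geometry yields an n-independent shift; the specific geometries, formulas, and probe wavelength are illustrative assumptions, not necessarily the paper's exact configuration.

```python
import math

WAVELENGTH = 780e-9  # probe wavelength in vacuum, m (assumed)


def brillouin_shift_back(n, v_acoustic, wavelength=WAVELENGTH):
    """Backscattering (180 deg) shift: depends on both n and sound speed."""
    return 2.0 * n * v_acoustic / wavelength


def brillouin_shift_90a(v_acoustic, wavelength=WAVELENGTH):
    """90-degree 'platelet' geometry shift: independent of n."""
    return math.sqrt(2.0) * v_acoustic / wavelength


def refractive_index(shift_back, shift_90a):
    """Take the ratio of the two shifts so the sound speed cancels."""
    return shift_back / (math.sqrt(2.0) * shift_90a)


# Sanity check with water-like values: n = 1.33, sound speed 1500 m/s
nu_back = brillouin_shift_back(1.33, 1500.0)
nu_90 = brillouin_shift_90a(1500.0)
print(round(refractive_index(nu_back, nu_90), 3))  # 1.33
```

The ratio recovers n exactly in this idealized model; the experimental challenge, which the dual-geometry instrument addresses, is acquiring both shifts from the same spot when the tissue is accessible from only one side.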
In addition to measuring corneal or lens refraction, the group is working on improving its resolution to analyze mass density behavior in cell biology or even cancer pathogenesis, Scarcelli said.
Along with Scarcelli, BIOE Ph.D. student Antonio Fiore (first author) and Carlo Bevilacqua, a visiting student from the University of Bari Aldo Moro in Bari, Italy, contributed to the paper.