Publicly available skin cancer detection apps, such as SkinVision, use AI-based analysis to determine if a new or changing mole is a source of concern or nothing to worry about. Yet according to a new analysis of the scientific evidence behind those apps, there’s a lot to worry about.
Scientists have just announced the completion of a lofty DARPA challenge to integrate 10 human organs-on-chips in an automated system to study how drugs work in the body. The technology provides an alternative to testing drugs on humans or other animals.
Referred to as the “Interrogator” by its developers, the system links up to ten different human organ chips—miniaturized microfluidic devices containing living cells that mimic the function of the organs they represent, such as intestine, heart or lung—and maintains their function for up to three weeks. In two experiments, the system successfully predicted how a human body metabolizes specific drugs.
“This is a wonderful technology for the field of organ-on-a-chip,” says Yu Shrike Zhang, a bioengineer at Harvard Medical School and Brigham and Women’s Hospital in Boston, who was not involved in the research. A platform that automates the culturing, linking, and maintenance of multiple organs-on-chips, all while inside a sterile incubator, “represents a great technological advancement,” says Zhang, who last year wrote about the promises and challenges of organ-on-a-chip systems for IEEE Spectrum.
In 2010, Donald Ingber, founding director of Harvard’s Wyss Institute for Biologically Inspired Engineering, and his colleagues reported the first human organ-on-a-chip, a lung. Each chip, roughly the size of a computer memory stick, is composed of a clear polymer containing hollow channels: one channel is lined with endothelial cells, the same cells that line human blood vessels, and another hosts organ-specific cells, such as liver or kidney cells.
After creating numerous individual organ chips, Ingber received a 2012 DARPA grant to try to integrate 10 organs-on-chips and use them to study how drugs are absorbed and metabolized in the body.
Eight years and three prototypes later, the team succeeded. The most recent version of the platform took four years to develop, says Richard Novak, a senior staff engineer at the Wyss Institute who built the machine. Within that time, a whole year was needed to develop a user interface that biologists with no programming experience could easily operate.
“It enables a really complex experiment to be set up in two minutes,” says Novak.
The “Interrogator,” as Novak fondly calls it, consists of a robotic system that pipettes liquids—such as a blood substitute or a drug of choice—into the channels; a peristaltic pump to move those liquids through the microfluidic chips; custom software with an easy drag-and-drop interface; and a mobile microscope that monitors the chips and their connections without anyone having to manually reach in and take out each chip for examination, as was done with older systems.
Best of all, says Novak, the whole machine fits into a standard laboratory incubator, which maintains living cells at constant temperature and light conditions.
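A drag-and-drop interface like the one Novak describes ultimately has to compile down to a timed list of fluid-transfer steps between chips. As a purely hypothetical sketch—the chip names, volumes, and intervals below are illustrative assumptions, not details of the Wyss system—such a schedule might be represented like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TransferStep:
    """One pipetting action: move fluid between two chip channels."""
    source_chip: str
    dest_chip: str
    volume_ul: float
    hour: float  # hours from experiment start

def build_schedule(chips: List[str], volume_ul: float,
                   interval_h: float, duration_h: float) -> List[TransferStep]:
    """Chain chips in sequence (gut -> liver -> kidney ...), transferring
    medium downstream at a fixed interval for the whole experiment."""
    steps = []
    t = 0.0
    while t <= duration_h:
        for src, dst in zip(chips, chips[1:]):
            steps.append(TransferStep(src, dst, volume_ul, t))
        t += interval_h
    return steps

schedule = build_schedule(["gut", "liver", "kidney"], volume_ul=30.0,
                          interval_h=12.0, duration_h=72.0)
print(len(schedule))  # 2 transfers per round x 7 rounds = 14
```

Reducing an experiment to a declarative step list like this is what lets a robot, rather than a biologist, carry out days of round-the-clock fluid handling.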
Finally, the team was ready to interrogate the Interrogator. Could the system truly mimic the human body in a drug test? To find out, the scientists connected a human gut chip, liver chip, and kidney chip, then added nicotine to the gut chip to simulate oral intake of the drug (as when a person chews nicotine gum). The time it took the nicotine to reach each tissue, and the maximum nicotine concentration in each tissue, closely matched levels previously measured in patients.
In a second test, the researchers linked liver, kidney, and bone marrow chips and administered cisplatin, a common chemotherapy drug. Once again, the drug was metabolized and cleared by the kidney and liver at levels that closely matched those measured in patients. Cells in the kidney chip even expressed the same biological markers of injury as a living kidney does during chemotherapy treatment.
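Those comparisons rest on standard pharmacokinetic metrics: the time to peak concentration (Tmax) and the peak concentration itself (Cmax). As a minimal sketch, a textbook one-compartment oral-absorption model—with illustrative parameters, not the study’s values—shows how both are computed:

```python
import math

def concentration(t, dose=2.0, F=0.9, V=40.0, ka=1.2, ke=0.3):
    """One-compartment oral-absorption model (Bateman function).
    dose in mg, bioavailability F, volume V in liters, ka/ke in 1/h;
    returns plasma concentration in mg/L at time t (hours)."""
    return (F * dose * ka) / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# For this model the time of peak concentration has a closed form.
t_max = math.log(1.2 / 0.3) / (1.2 - 0.3)  # ln(ka/ke) / (ka - ke)
c_max = concentration(t_max)
print(f"Tmax = {t_max:.2f} h, Cmax = {c_max:.3f} mg/L")
```

Matching a chip system to patients means its measured Tmax and Cmax curves line up with clinical data, which is exactly the comparison the team reports.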
“Compared against clinical studies, they matched up really nicely,” says Novak. The team is now using the linked organ chips to study the gut microbiome and influenza transmission. The intellectual property behind the Interrogator has been licensed by Boston-based Emulate, a Wyss Institute spin-off that Ingber founded.
Biological organisms have certain useful attributes that synthetic robots do not, such as the abilities to heal, adapt to new situations, and reproduce. Yet molding biological tissues into robots or tools has been exceptionally difficult to do: Experimental techniques, such as altering a genome to make a microbe perform a specific task, are hard to control and not scalable.
Now, a team of scientists at the University of Vermont and Tufts University in Massachusetts has used a supercomputer to design novel lifeforms with specific functions, then built those organisms out of frog cells.
The new, AI-designed biological bots crawl around a petri dish and heal themselves. Surprisingly, the biobots also spontaneously self-organize and clear their dish of small trash pellets.
I remember a faded yellow booklet, about the size of a wallet, that my mother used to pull out once a year at the doctor’s office to record my vaccines. Today, nurses document my children’s vaccination history in electronic health records that will likely follow them to adulthood.
To eradicate a disease—such as polio or measles—healthcare workers need to know who was vaccinated and when. Yet in developing countries, vaccination records are sparse and, in some cases, non-existent. For example, during a rural vaccination campaign, a healthcare worker may mark a child’s fingernail with a Sharpie, which can wash or scrape off within days.
Now, a team of MIT bioengineers has developed a way to keep invisible vaccine records under the skin. Delivered through a microneedle patch, biocompatible quantum dots embed in the skin and fluoresce under near-infrared light—creating a glowing trace that can be detected at least five years after vaccination. The work is described today in the journal Science Translational Medicine.
Our nervous system is specialized to produce and conduct electrical currents, so it’s no surprise that gentle electric stimulation has healing powers. Neural stimulation—also known as neuromodulation, bioelectronic medicine, or electroceuticals—is currently used to treat pain, epilepsy, and migraines, and is being explored as a way to combat paralysis, inflammation, and even hair loss. Muscle stimulation can also bestow superhuman reflexes and improve short-term memory.
But to reach critical areas of the body, such as the brain or the spine, many treatments require surgically implanted devices, such as a cuff that wraps around the spinal cord. Implanting such a device can involve cutting through muscle and nerves (and may require changing a battery every few years).
Now, a team of biomedical engineers has created a type of electrode that can be injected into the body as a liquid, then harden into a stretchy, taffy-like substance. In a paper in the journal Advanced Healthcare Materials, the multi-institutional team used their “injectrodes” to stimulate the nervous systems of rats and pigs, with comparable results to existing implant technologies.
A mother smiles at her toddler via a live video feed, then runs her fingers along the computer screen. Miles away, the boy feels the strokes of her hand on his back.
A man with a lower-arm amputation picks up a beer can with his prosthetic hand and feels the artificial fingers make contact with the can.
A gamer’s animated character is struck on the arm and shoulder by an opponent, and the gamer feels pressure on her corresponding body parts.
These are real-life applications of a new electronic skin technology from the lab of John Rogers and his colleagues at Northwestern University, detailed in a paper published today in the journal Nature. The soft, lightweight sheet of electronics is wireless, battery-free, sticks right to the skin, and re-creates a sensation of touch.
During the Framingham Heart Study, a long-term research study initiated in 1948 that collected health data from thousands of people, researchers discovered that high cholesterol and elevated blood pressure increase one’s risk of heart disease. Thanks to that insight, at-risk individuals can reduce their chances of developing the condition by taking drugs to lower cholesterol and blood pressure.
Nick Williams didn’t ask permission. The graduate student just stuck his pinky finger under the printer and watched it paint two silver lines on his skin. When printing was complete, Williams put a small LED light at one end of the lines and applied voltage to the other. He smiled as the light glowed.
Williams showed the electronically active tattoo to his advisor, Duke University electrical engineer Aaron Franklin. Since Williams barely felt the printing, and the silver washed off with soap and water, they tried it again.
Why waste the energy used to tilt one’s head or digest food? University of Wisconsin-Madison engineer Xudong Wang is an expert at harvesting the body’s mechanical energy to power devices, such as an electric bandage that accelerates healing and a stomach implant that subdues hunger.
Now, Wang’s team is back with a self-powered wearable to tackle an age-old nemesis: hair loss.
Wang’s lab has created a motion-activated, flexible wearable that promotes hair regeneration via gentle electrical stimulation. They describe their work in a study published this month in the journal ACS Nano. In rodents, the device stimulated hair growth better than conventional topical medications.
The device can be discreetly hidden under a baseball cap, says Wang. He hopes to begin a clinical trial with humans within six months.
The project is the first to decode question-and-answer speech from brain signals in real time
In 2017, Facebook’s Mark Chevillet gave himself two years to prove whether it was feasible to build a non-invasive technology able to read out 100 words per minute from brain activity.
It’s been two years, and the verdict is in: “The promise is there,” Chevillet told IEEE Spectrum. “We do think it will be possible.”
As research director of Facebook Reality Labs’ brain-computer interface program, Chevillet plans to push ahead with the project—and the company’s ultimate goal to develop augmented reality (AR) glasses that can be controlled without having to speak aloud.
Chevillet’s optimism is fueled in large part by a first in the field of brain-computer interfaces that hit the presses this morning: In the journal Nature Communications, a team at the University of California, San Francisco, in collaboration with Facebook Reality Labs, has built a brain-computer interface that accurately decodes dialogue—words and phrases both heard and spoken by the person wearing the device—from brain signals in real time.
Machine learning algorithms that combine clinical and molecular data are the “wave of the future,” experts say
A man walks into a doctor’s office for a CT scan of his gallbladder. The gallbladder is fine, but the doctor notices a saclike pocket of fluid on the man’s pancreas. It’s a cyst that may lead to cancer, the doctor tells him, and it will need to be cut out to be safe.
It’ll take three months to recover from the surgery, the doctor adds—plus, there’s a 50 percent chance of surgical complications, and a 5 percent chance the man will die on the table.
An estimated 800,000 patients in the United States are incidentally diagnosed with pancreatic cysts each year, and doctors have no good way of telling which cysts harbor a deadly form of cancer and which are benign. This ambiguity results in thousands of unnecessary surgeries: One study found that up to 78 percent of cysts referred for surgery turned out not to be cancerous.
Now there’s a machine learning algorithm that could help. Surgeons and computer scientists at Johns Hopkins University have built a test called CompCyst (for comprehensive cyst analysis), described today in the journal Science Translational Medicine, that is significantly better than today’s standard of care (doctor observations and medical imaging) at predicting whether patients should be sent home, monitored, or undergo surgery.
This AI system detects unique gasping sounds that occur when the heart stops beating
When a person’s heart malfunctions and suddenly stops beating, death can occur within minutes—unless someone intervenes. A bystander administering CPR right away can triple a person’s chances of surviving a cardiac arrest.
Last July, we described a smart watch designed to detect cardiac arrest and summon help. Now, a team at the University of Washington has developed a totally contactless AI system that listens to detect the telltale sound of agonal breathing—a unique guttural gasping sound made by 50 percent of cardiac arrest patients.
The smart speaker system, described today in the journal npj Digital Medicine, detected agonal breathing events 97 percent of the time with almost no false alarms in a proof-of-concept study.
The team imagines using the tool—which can run on Amazon’s Alexa or Google Home, among other devices—to passively monitor bedrooms for the sound of agonal breathing and, if detected, set off an alarm.
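For always-on monitoring like this, the false-positive rate matters as much as the 97 percent detection rate: a classifier scored thousands of times per night compounds even a tiny per-window error rate. A back-of-the-envelope sketch, where the window length and rates are illustrative assumptions rather than figures from the study:

```python
def expected_false_alarms(fp_rate_per_window, window_s, hours):
    """Expected nightly false alarms for a classifier scored once per window."""
    windows = hours * 3600 / window_s
    return fp_rate_per_window * windows

# Illustrative numbers: 2.5-second audio windows over an 8-hour night.
for fp in (1e-3, 1e-5):
    # 1e-3 -> ~11.5 false alarms per night; 1e-5 -> ~0.1
    print(fp, expected_false_alarms(fp, window_s=2.5, hours=8))
```

The arithmetic explains why the researchers emphasize “almost no false alarms”: at bedroom-monitoring scale, even a one-in-a-thousand error rate would wake households nightly.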
The four recipients, chosen from a pool of 300 applicants across 30 countries, are developing reliable, cost-effective ways to diagnose Alzheimer’s disease, including one approach that will use machine learning to detect early signs of the disease through an eye scan.
By simply popping on a helmet or headset, soldiers could conceivably command control centers without touching a keyboard; fly drones intuitively with a thought; even feel intrusions into a secure network. While the tech sounds futuristic, DARPA wants to get it done in four years.
“It’s an aggressive timeline,” says Krishnan Thyagarajan, a research scientist at PARC and principal investigator of one of the N3-funded projects. “But I think the idea of any such program is to really challenge the community to push the limits and accelerate things which are already brewing. Yes, it’s challenging, but it’s not impossible.”
Plenty of noninvasive neurotechnologies already exist, but not at the resolution necessary to yield high-performance wearable devices for national security applications, says N3 program manager Al Emondi of DARPA’s Biological Technologies Office.
Following a call for applications back in March, a review panel narrowed the pool to six teams across industry and academia, Emondi told IEEE Spectrum. The teams are experimenting with different combinations of magnetic fields, electric fields, acoustic fields (ultrasound), and light. “You can combine all these approaches in different, unique and novel ways,” says Emondi. What the program hopes to discover, he adds, is which combinations can record brain activity and communicate back to the brain with the greatest speed and resolution.
Specifically, the program is seeking technologies that can read and write to brain cells in just 50 milliseconds round-trip, and can interact with at least 16 locations in the brain at a resolution of 1 cubic millimeter (a space that encompasses thousands of neurons).
The four-year N3 program will consist of three phases, says Emondi. In the current phase 1, teams have one year to demonstrate the ability to read (record) and write to (stimulate) brain tissue through the skull. Teams that succeed will move to phase 2. Over the ensuing 18 months, those groups will have to develop working devices and test them on living animals. Any group left standing will proceed to phase 3—testing their device on humans.
Four of the teams are developing totally noninvasive technologies. A team from Carnegie Mellon University, for example, is planning to use ultrasound waves to guide light into and out of the brain to detect neural activity. They plan to use interfering electrical fields to write to specific neurons.
The three other teams proposing non-invasive techniques include Johns Hopkins University’s Applied Physics Laboratory, Thyagarajan’s team at PARC, and a team from Teledyne Technologies, a California-based industrial company.
The two remaining teams are developing what DARPA calls “minutely invasive” technologies which, as we described in September, require no incisions or surgery but may involve technology that is swallowed, sniffed, injected, or absorbed into the human body in some way.
Rice University, for example, is developing a system that requires exposing neurons to a viral vector to deliver instructions for synthetic proteins that indicate when a neuron is active. Ohio-based technology company Battelle is developing a brain-machine interface that relies on magnetoelectric nanoparticles injected into the brain.
“This is uncharted territory for DARPA, and the next step in brain-machine interfaces,” says Emondi. “If we’re successful in some of these technologies…that’s a whole new ecosystem that doesn’t exist right now.”
Artificial kidneys and a wearable that prevents blood clots were among the winning designs
In the 1960s, a pacemaker was the size of a microwave and a dialysis machine was the size of a refrigerator. Today, a pacemaker is the size of a vitamin and a dialysis machine is, well, the size of a refrigerator. They do have nice LED displays though.
“Kidney disease is underserved,” says Shuvo Roy, a bioengineer at the University of California, San Francisco. Although failed kidneys kill more people each year than breast or prostate cancer, “the field has not seen much innovation in the last 50 years,” says Roy.
The U.S. Department of Health and Human Services and the American Society of Nephrology want to change that. The organizations have teamed up to bring money and attention to the disease with the KidneyX: Redesign Dialysis competition. Started in 2018, the competition challenges innovators in any field to propose tools and technologies that could enable the design of new artificial kidney devices.
A brain-computer interface that records signals in the motor cortex can synthesize speech from activity in a user’s brain
Two years ago, a 64-year-old man paralyzed by a spinal cord injury set a record when he used a brain-computer interface (BCI) to type at a speed of eight words per minute.
Today, in the journal Nature, scientists at the University of California, San Francisco, present a new type of BCI, powered by neural networks, that might enable individuals with paralysis or stroke to communicate at the speed of natural speech—an average of 150 words per minute.