All posts by Michelle Hampson

Why Aren’t COVID Tracing Apps More Widely Used?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/why-arent-covid-tracing-apps-more-widely-used

As the COVID-19 pandemic began to sweep around the globe in early 2020, many governments quickly mobilized to launch contact tracing apps to track the spread of the virus. If enough people downloaded and used the apps, it would be much easier to identify people who had potentially been exposed. In theory, contact tracing apps could play a critical role in stemming the pandemic.

In reality, adoption of contact tracing apps by citizens was largely sporadic and unenthusiastic. A trio of researchers in Australia decided to explore why contact tracing apps weren’t more widely adopted. Their results, published on 23 December in IEEE Software, emphasize the importance of social factors such as trust and transparency.

Muneera Bano is a senior lecturer of software engineering at Deakin University, in Melbourne. Bano and her co-authors study human aspects of technology adoption. “Coming from a socio-technical research background, we were intrigued initially to study the contact tracing apps when the Australian Government launched the CovidSafe app in April 2020,” explains Bano. “There was a clear resistance from many citizens in Australia in downloading the app, citing concerns regarding trust and privacy.”

To better understand the satisfaction—or dissatisfaction—of app users, the researchers analyzed data from the Apple and Google app stores. At first, they looked at average star ratings and numbers of downloads, and conducted a sentiment analysis of app reviews.
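
To give a sense of what such a review sentiment analysis looks like in practice, here is a minimal sketch using NLTK's general-purpose VADER analyzer as a stand-in; the study does not specify which tool or pipeline was used, and the review texts below are invented.

```python
# Minimal sketch: scoring app-store reviews with a general-purpose sentiment
# analyzer (NLTK's VADER). The tool and the reviews below are illustrative
# stand-ins, not the pipeline or data used in the IEEE Software study.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

reviews = [
    "Drains my battery and Bluetooth keeps dropping out.",
    "Easy to install and gives me some peace of mind.",
    "I don't trust the government with my location data.",
]

sia = SentimentIntensityAnalyzer()
for text in reviews:
    scores = sia.polarity_scores(text)  # returns neg/neu/pos plus a compound score in [-1, 1]
    print(f"{scores['compound']:+.2f}  {text}")

# Averaging the 'compound' score over thousands of reviews gives one crude
# popularity signal; the study found such scores alone did not capture whether
# a contact tracing app actually succeeded.
```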

However, just because a person downloads an app doesn’t guarantee that they will use it. What’s more, Bano’s team found that sentiment scores—which are often indicative of an app’s popularity, success, and adoption—were not an effective means for capturing the success of COVID-19 contact tracing apps.

“We started to dig deeper into the reviews to analyze the voices of users for particular requirements of these apps. More or less all the apps had issues related to the Bluetooth functionality, battery consumption, reliability and usefulness during pandemic.”

For example, apps that relied on Bluetooth for tracing had issues related to range, proximity, signal strength, and connectivity. A significant number of users also expressed frustration over battery drainage. Some efforts have been made to address this issue; for example, Singapore launched an updated version of its TraceTogether app that allows it to operate with Bluetooth while running in the background, with the goal of improving battery life.

But, technical issues were just one reason for lack of adoption. Bano emphasizes that, “The major issues around the apps were social in nature, [related to] trust, transparency, security, and privacy.”

In particular, the researchers found that resistance to downloading and using the apps was high in countries with a voluntary adoption model and low levels of trust in government, such as Australia, the United Kingdom, and Germany.

“We observed slight improvement only in the case of Germany because the government made sincere efforts to increase trust. This was achieved by increasing transparency during ‘Corona-Warn-App’ development by making it open source from the outset and by involving a number of reputable organizations,” says Bano. “However, even as the German officials were referring to their contact tracing app as the ‘best app’ in the world, Germany was struggling to avoid the second wave of COVID-19 at the time we were analyzing the data, in October 2020.”

In some cases, even when measures to improve trust and address privacy issues were taken by governments and app developers, people were hesitant to adopt the apps. For example, a Canadian contact tracing app called COVID Alert is open source, requires no identifiable information from users, and all data are deleted after 14 days. Nevertheless, a survey of Canadians found that two-thirds would not download any contact tracing app because it was still “too invasive.” (The survey covered tracing apps in general, and was not specific to the COVID Alert app.)

Bano plans to continue studying how politics and culture influence the adoption of these apps in different countries around the world. She and her colleagues are interested in exploring how contact tracing apps can be made more inclusive for diverse groups of users in multi-cultural countries.

Superconducting Microprocessors? Turns Out They’re Ultra-Efficient

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/hardware/new-superconductor-microprocessor-yields-a-substantial-boost-in-efficiency

Computers use a staggering amount of energy today. According to one recent estimate, data centers alone consume two percent of the world’s electricity, a figure that’s expected to climb to eight percent by the end of the decade. To buck that trend, though, perhaps the microprocessor, at the center of the computer universe, could be streamlined in entirely new ways. 

One group of researchers in Japan has taken this idea to the limit, creating a superconducting microprocessor—one with zero electrical resistance. The new device, the first of its kind, is described in a study published last month in the IEEE Journal of Solid-State Circuits.

Superconductor microprocessors offer a potential route to more energy-efficient computing, but at present these designs require ultra-cold temperatures below 10 kelvin (about -263 degrees Celsius). The research group in Japan sought to create a superconductor microprocessor that’s adiabatic, meaning that, in principle, energy is not gained or lost from the system during the computing process.

While adiabatic semiconductor microprocessors exist, the new microprocessor prototype, called MANA (Monolithic Adiabatic iNtegration Architecture), is the world’s first adiabatic superconductor microprocessor. It’s composed of superconducting niobium and relies on hardware components called adiabatic quantum-flux-parametrons (AQFPs). Each AQFP is composed of a few fast-acting Josephson junction switches, which require very little energy to support superconductor electronics. The MANA microprocessor consists of more than 20,000 Josephson junctions (or more than 10,000 AQFPs) in total.

Christopher Ayala is an Associate Professor at the Institute of Advanced Sciences at Yokohama National University, in Japan, who helped develop the new microprocessor. “The AQFPs used to build the microprocessor have been optimized to operate adiabatically such that the energy drawn from the power supply can be recovered under relatively low clock frequencies up to around 10 GHz,” he explains. “This is low compared to the hundreds of gigahertz typically found in conventional superconductor electronics.”

This doesn’t mean that the group’s current-generation device hits 10 GHz speeds, however. In a press statement, Ayala added, “We also show on a separate chip that the data processing part of the microprocessor can operate up to a clock frequency of 2.5 GHz making this on par with today’s computing technologies. We even expect this to increase to 5-10 GHz as we make improvements in our design methodology and our experimental setup.” 

The price of entry for the niobium-based microprocessor is of course the cryogenics and the energy cost for cooling the system down to superconducting temperatures.  

“But even when taking this cooling overhead into account,” says Ayala, “The AQFP is still about 80 times more energy-efficient when compared to the state-of-the-art semiconductor electronic device, [such as] 7-nm FinFET, available today.”
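
To make the shape of that comparison concrete, here is a back-of-the-envelope sketch of how a cooling-adjusted efficiency figure can be framed. Every number in it is a hypothetical placeholder, not a value from the paper; only the form of the calculation is the point.

```python
# Back-of-the-envelope sketch of a cooling-adjusted efficiency comparison.
# All values are hypothetical placeholders, NOT figures from the IEEE JSSC
# paper; they only illustrate how a cooling overhead enters the arithmetic.

finfet_energy_per_op = 1.0            # arbitrary units, room-temperature device
aqfp_energy_per_op_raw = 1.0 / 8000   # hypothetical raw switching energy

# Cryogenic cooling is costly: the placeholder factor below stands in for the
# watts of wall-plug power needed per watt of heat removed at ~4 K.
cooling_overhead = 100.0

aqfp_energy_effective = aqfp_energy_per_op_raw * cooling_overhead
advantage = finfet_energy_per_op / aqfp_energy_effective
print(f"Effective advantage: {advantage:.0f}x")  # -> 80x with these placeholders
```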

Since the MANA microprocessor requires liquid helium-level temperatures, it’s better suited for large-scale computing infrastructures like data centers and supercomputers, where cryogenic cooling systems could be used.

Hurdles remain before that can happen, though. “Most of these hurdles—namely area efficiency and improvement of latency and power clock networks—are research areas we have been heavily investigating, and we already have promising directions to pursue,” Ayala says.

How to Find the Ideal Replacement When a Research Team Member Leaves

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/consumer-electronics/audiovideo/how-to-find-the-ideal-replacement-when-an-academic-team-member-leaves

Two brilliant minds are always better than one. Same goes for three, four, and so on. Of course the greater the number of team members, the greater the chance one or more will end up departing from the team mid-project—leaving the rest of the group casting about for the ideal replacement. 

Research, thankfully, can answer how best to replace a departing research team member. A new model not only helps identify a replacement who has the right skill set, but also accounts for social ties within the group. The researchers say their model “outperforms existing methods when applied to computer science academic teams.”

The model is described in a study published December 8 in IEEE Intelligent Systems.

Feng Xia is an associate professor in the Engineering, IT and Physical Sciences department at Federation University Australia who co-authored the study. He notes that each member within an academic collaboration plays an important contributing role and has a certain degree of irreplaceability within the team. “Meanwhile, turnover rate has also increased, leading to the fact that collaborative teams are [often] facing the problem of the member’s absence. Therefore, we decided to develop this approach to minimize the loss,” he says.

Xia’s previous research has suggested that members with stable collaboration relationships can improve team performance, yielding higher quality output. Therefore, Xia and his colleagues incorporated ways of accounting for familiarity among team members into their model, including relationships between just two members (pair-wise familiarity) and multiple members (higher-order familiarity).

“The main idea of our technique is to find the best alternate for the absent member,” explains Xia. “The recommended member is supposed to be the best choice in context of familiarity, skill, and collaboration relationship.”
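
The sketch below illustrates the flavor of that idea: ranking candidates by blending skill overlap with pairwise familiarity (how often a candidate has previously collaborated with the remaining members). The data, weights, and scoring formula are invented for illustration and are not the published model.

```python
# Illustrative sketch (not the published model): rank replacement candidates by
# combining skill overlap with the team's needs and pairwise familiarity, i.e.
# how often a candidate has previously collaborated with remaining members.

# Hypothetical co-authorship counts: (scholar_a, scholar_b) -> joint papers
coauthorships = {("ana", "raj"): 4, ("ana", "wei"): 1, ("liu", "raj"): 2}
skills = {
    "ana": {"machine learning", "graph mining"},
    "liu": {"databases"},
}
team_remaining = ["raj", "wei"]
required_skills = {"machine learning", "graph mining", "databases"}

def familiarity(a, b):
    return coauthorships.get((a, b), 0) + coauthorships.get((b, a), 0)

def score(candidate, alpha=0.5):
    skill_match = len(skills[candidate] & required_skills) / len(required_skills)
    pairwise = sum(familiarity(candidate, m) for m in team_remaining)
    pairwise = pairwise / (1 + pairwise)        # squash to [0, 1)
    return alpha * skill_match + (1 - alpha) * pairwise

ranked = sorted(skills, key=score, reverse=True)
print(ranked)  # candidate with the best blend of skills and familiarity first
```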

The researchers used two large datasets to develop and test their model. They explored 42,999 collaborative relationships between 15,681 scholars using the CiteSeerX dataset, which captures data within the field of computer science. Another 436,905 collaborative relationships between 252,439 scholars was explored through the MAG (Microsoft Academic Graph) dataset, which contains scientific records and relative information covering multiple disciplines.

Testing of the model reveals that it is effective at finding a good replacement member for teams. “Teams that choose the recommended candidates achieved better team performance and less communication costs,” says Xia. These results mean that the replacement members had good communication with the others, and illustrate the importance of familiarity among collaborators.

The researchers aim to make their model freely available on platforms such as GitHub. Now they are working towards models for team recognition, team formation, and team optimization. “We are also building a complete team data set and a self-contained team optimization online system to help form research teams,” says Xia.

These AIs Can Predict Your Moral Principles

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/these-ais-can-predict-your-moral-principles

The death penalty, abortion, gun legislation: There’s no shortage of controversial topics that are hotly debated today on social media. These topics are so important to us because they touch on an essential underlying force that makes us human, our morality.

Researchers in Brazil have developed and analyzed three models that can describe the morality of individuals based on the language they use. The results were published last month in IEEE Transactions on Affective Computing.

Ivandré Paraboni is an associate professor at the School of Arts, Sciences and Humanities at the University of São Paulo who led the study. His team chose to focus on a theory commonly used by social scientists called moral foundations theory. It postulates several key categories of morality, including care, fairness, loyalty, authority, and purity.

The aim of the new models, according to Paraboni, is to infer a person’s values on those five moral foundations just by looking at their writing, regardless of what they are talking about. “They may be talking about their everyday life, or about whatever they talk about on social media,” Paraboni says. “And we may still find underlying patterns that are revealing of their five moral foundations.”

To develop and validate the models, Paraboni’s team provided more than 500 volunteers with questionnaires. Participants were asked to rate eight topics (e.g., same sex marriage, gun ownership, drug policy) with sentiment scores (from 0 = ‘totally against’ to 5 = ‘totally in favor’). They were also asked to write out explanations of their ratings.

Human judges then gave their own rating to a subset of explanations from participants. The exercise determined how well humans could infer the intended opinions from the text. “Knowing the complexity of the task from a human perspective in this way gave us a more realistic view of what the computational models can or cannot do with this particular dataset,” says Paraboni.

Using the text opinions from the study participants, the research team created three machine learning algorithms that could assess the language used in each participant’s statement. The models analyzed psycholinguistics (emotional context of words), words, and word sequences, respectively.

All three models were able to infer an individual’s moral foundations from the text. The first two models, which focus on individual words used by the author, were more accurate than the deep learning approach that analyzes word sequences.

Paraboni adds, “Word counts–such as how often an individual uses words like ‘sin’ or ‘duty’–turned out to be highly revealing of their moral foundations, that is, predicting with higher accuracy their degrees of care, fairness, loyalty, authority, and purity.”
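
A minimal sketch of that word-count approach is shown below: bag-of-words features regressed onto five moral-foundation scores with scikit-learn. The texts and scores are invented placeholders (the real study used opinions written by roughly 500 volunteers), and a ridge regressor stands in for the paper's models.

```python
# Sketch of the word-count approach: bag-of-words features regressed onto
# five moral-foundation scores. Texts and scores are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

texts = [
    "It is our duty to protect the vulnerable from harm.",
    "People should be free to choose without government interference.",
    "Tradition and loyalty to family come before personal desires.",
]
# One row per author: [care, fairness, loyalty, authority, purity] in [0, 1]
foundations = [
    [0.9, 0.6, 0.3, 0.4, 0.3],
    [0.4, 0.8, 0.2, 0.1, 0.2],
    [0.5, 0.3, 0.9, 0.8, 0.6],
]

vectorizer = CountVectorizer()               # raw word counts, as in the quote above
X = vectorizer.fit_transform(texts)
model = MultiOutputRegressor(Ridge()).fit(X, foundations)

new_text = ["Breaking the rules is a sin against the community."]
print(model.predict(vectorizer.transform(new_text)))  # predicted five scores
```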

He says his team plans to continue to incorporate other forms of linguistic analysis into their models. They are, he says, exploring other models that focus more on the text (independent of the author) as a way to analyze Twitter data.

App Aims To Reduce Deaths From Opioid Overdose

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/unityphilly-app-opioid-overdose


If administered promptly enough, naloxone can chemically reverse an opioid overdose and save a person’s life. However, timing is critical–quicker administration of the medication can not only save a life, but also reduce the chances that brain damage will occur.

In exploring new ways to administer naloxone faster, a team of researchers has harnessed an effective, community-based approach. It involves an app for volunteers, who receive an alert when another app user nearby indicates that an overdose is occurring and naloxone is needed. The volunteers then have the opportunity to respond to the request.

A recent pilot study in Philadelphia shows that the approach has the potential to save lives. The results were published November 17 in IEEE Pervasive Computing.

“This concept had worked with other medical emergencies like cardiac arrest and anaphylaxis, so we thought we could have success with transferring it to opioid overdoses,” says Gabriela Marcu, an assistant professor at the University of Michigan’s School of Information, who was involved in the study.

The app, called UnityPhilly, is meant for bystanders who are in the presence of someone overdosing. If that bystander doesn’t have naloxone, they can use UnityPhilly to send out an alert with a single push of a button to nearby volunteers who also have the app. Simultaneously, a separate automated call is sent to 911 to initiate an emergency response.
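
The sketch below captures the shape of that flow: one button press notifies nearby volunteers who carry naloxone and triggers an emergency call in parallel. The radius, volunteer records, and notification and dialing functions are all hypothetical stand-ins, not UnityPhilly's actual implementation.

```python
# Sketch of the alert flow described above. All data and helper functions are
# hypothetical; this is not the UnityPhilly codebase.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

volunteers = [
    {"id": "v1", "lat": 39.997, "lon": -75.120, "has_naloxone": True},
    {"id": "v2", "lat": 40.050, "lon": -75.200, "has_naloxone": True},
]

def dial_emergency_services(lat, lon):
    print(f"911 notified of suspected overdose at ({lat}, {lon})")

def send_overdose_alert(lat, lon, radius_km=1.0):
    notified = []
    for v in volunteers:
        if v["has_naloxone"] and haversine_km(lat, lon, v["lat"], v["lon"]) <= radius_km:
            notified.append(v["id"])      # would be a push notification in a real app
    dial_emergency_services(lat, lon)     # parallel automated 911 call (stub)
    return notified

print(send_overdose_alert(40.000, -75.121))
```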

Marcu’s team chose to pilot the app in Philadelphia because the city has been hit particularly hard by the opioid crisis, with 46.8 per 100,000 people dying from overdoses each year. They piloted the UnityPhilly app in the neighborhood of Kensington between March 2019 and February 2020.

In total, 112 participants were involved in the study, half of whom self-identified as active opioid users. Over the one-year period, 291 suspected overdose alerts were reported.

About 30% of alerts were false alarms, cancelled within two minutes of the alert being sent. On the other hand, at least one dose of naloxone was administered by a study participant in 36.6% of cases. Of these instances when naloxone was administered, 96% resulted in a successful reversal of the overdose. This means that a total of 71 out of 291 cases resulted in a successful reversal. 
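
One way to reconcile those percentages with the count of 71 successful reversals is to read the 36.6 percent figure as applying only to the alerts that were not cancelled as false alarms. The quick check below makes that reading explicit (it is an interpretation, not a breakdown stated in the article).

```python
# Consistency check of the figures above, under the reading that 36.6 percent
# applies to alerts that were not cancelled as false alarms.
total_alerts = 291
genuine = total_alerts * (1 - 0.30)    # ~204 alerts not cancelled as false alarms
administered = genuine * 0.366         # ~75 cases where naloxone was given
reversed_ok = administered * 0.96      # ~72 successful reversals
print(round(genuine), round(administered), round(reversed_ok))  # close to the 71 reported
```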

Marcu notes that there are many advantages to the UnityPhilly approach. “It has been designed with the community, and it’s driven entirely by the community. It’s neighbors helping neighbors,” she explains.

One reported drawback of the current version of UnityPhilly is that volunteers only see a location on a map, with no context of what kind of building or environment they will be entering to deliver the naloxone dose. To address this, Marcu says her team is interested in refining the user experience and enhancing how app users can communicate with one another before, during, and after they respond to an overdose.

“What’s interesting is that so many users still remained motivated to incorporate this app into their efforts, and their desire to help others drove adoption and acceptance of the app in spite of the imperfect user experience,” says Marcu. “So we look forward to continuing our work with the community on this app… Next, we plan on rolling it out city-wide in Philadelphia.”

Flexible, Wearable Sensors Detect Workers’ Fatigue

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/biomedical/devices/flexible-wearable-sensors-detect-workers-fatigue

Fatigue in the workplace is a serious issue today—leading to accidents, injuries and worse. Some of history’s worst industrial disasters, in fact, can be traced at least in part to worker fatigue, including the 2005 Texas City BP oil refinery explosion and the nuclear accidents at Chernobyl and Three Mile Island.

Given the potential consequences of worker fatigue, scientists have been exploring wearable devices for monitoring workers’ alertness, which correlates with physiological parameters such as heart rate, breathing rate, sweating, and muscle contraction. In a recent study published November 6 in IEEE Sensors Journal, a group of Italian researchers describe a new wearable design that measures the frequency of the user’s breathing—which they argue is a proxy for fatigue. Breathing frequency is also used to identify stressing conditions such as excessive cold, heat, hypoxia, pain, and discomfort.

“This topic is very important since everyday thousands of work-related accidents occur throughout the world, affecting all sectors of the economy,” says Daniela Lo Presti, a PhD student at  Università Campus Bio-Medico di Roma, in Rome, Italy, who was involved in the study. “We believe that monitoring workers’ physiological state during [work]… may be crucial to prevent work-related accidents and improve the workers’ quality performances and safety.”

The sensor system that her team designed involves two elastic bands that are worn just below the chest (thorax) and around the abdomen. Each band is flexible, made of a soft silicone matrix and fiber optic technology that conforms well to the user’s chest as he or she breathes.

“These sensors work as optical strain gauges. When the subject inhales, the diaphragm contracts and the stomach inflates, so the flexible sensor that is positioned on the chest is strained,” explains Lo Presti. “Conversely, during the exhalation, the diaphragm expands, the stomach depresses, and the sensor is compressed.”

The sensors were tested on 10 volunteers while they did a variety of movements and activities, ranging from sitting and standing to lateral arm movements and lifting objects from the ground. The results suggest that the flexible sensors are adept at estimating respiratory frequency, providing similar measurements to a flow meter (a standard machine for measuring respiration). The researchers also found that their sensor could be strained by up to 2.5% of its initial length.
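
To illustrate the basic measurement, here is a sketch of estimating breaths per minute from a chest-strain signal by peak detection. The waveform is synthetic and the processing chain is an illustration, not the one the authors used with their fiber-optic bands.

```python
# Sketch: estimating respiratory frequency from a chest-strain signal by peak
# detection. The signal is synthetic; this is not the study's processing chain.
import numpy as np
from scipy.signal import find_peaks

fs = 50.0                                  # sample rate, Hz
t = np.arange(0, 60, 1 / fs)               # one minute of data
breaths_per_min = 15
strain = 0.01 * np.sin(2 * np.pi * (breaths_per_min / 60) * t)  # ~1% strain swing
strain += 0.001 * np.random.randn(t.size)  # sensor noise

# Each inhalation stretches the thoracic band, producing one strain peak.
peaks, _ = find_peaks(strain, distance=fs * 2)  # assume at most one breath per 2 s
estimated_rate = len(peaks) / (t[-1] / 60.0)
print(f"Estimated respiratory rate: {estimated_rate:.1f} breaths/min")
```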

Lo Presti says this design has several strengths, including the conformation of the sensor to the user’s body. The silicone matrix is dumbbell shaped, allowing for better adhesion of the sensing component to the band, she says.

However, the sensing system must be plugged into a bulky instrument for processing the fiber optical signals (called an optical interrogator). Lo Presti says other research teams are currently working on making these devices smaller and cheaper. “Once high-performant, smaller interrogators are available, we will translate our technology to a more compact wearable system easily usable in a real working scenario.”

AI-Directed Robotic Hand Learns How to Grasp

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/automaton/robotics/humanoids/robotic-hand-uses-artificial-neural-network-to-learn-how-to-grasp-different-objects

Reaching for a nearby object seems like a mindless task, but the action requires a sophisticated neural network that took humans millions of years to evolve. Now, robots are acquiring that same ability using artificial neural networks. In a recent study, a robotic hand “learns” to pick up objects of different shapes and hardness using three different grasping motions.  

The key to this development is something called a spiking neuron. Like real neurons in the brain, artificial neurons in a spiking neural network (SNN) fire together to encode and process temporal information. Researchers study SNNs because this approach may yield insights into how biological neural networks function, including our own. 

“The programming of humanoid or bio-inspired robots is complex,” says Juan Camilo Vasquez Tieck, a research scientist at FZI Forschungszentrum Informatik in Karlsruhe, Germany. “And classical robotics programming methods are not always suitable to take advantage of their capabilities.”

Conventional robotic systems must perform extensive calculations, Tieck says, to track trajectories and grasp objects. But a robotic system like Tieck’s, which relies on an SNN, first trains its neural net to better model system and object motions, after which it grasps items more autonomously by adapting to the motion in real time.

The new robotic system by Tieck and his colleagues uses an existing robotic hand, called a Schunk SVH 5-finger hand, which has the same number of fingers and joints as a human hand.

The researchers incorporated an SNN into their system, dividing it into several sub-networks. One sub-network controls each finger individually, either flexing or extending it. Another handles each type of grasping movement, for example whether the robotic hand needs to perform a pinching, spherical, or cylindrical grasp.

For each finger, a neural circuit detects contact with an object using the currents of the motors and the velocity of the joints. When contact with an object is detected, a controller is activated to regulate how much force the finger exerts.
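
The plain-Python sketch below captures the per-finger logic just described: detect contact from motor current and joint velocity, then regulate grip force. The thresholds, gains, and if/else structure are hypothetical; in the actual system this behavior emerges from spiking neural sub-networks rather than explicit rules.

```python
# Plain-Python sketch of the per-finger logic described above. Thresholds and
# gains are hypothetical; the real controller is built from spiking networks.

CURRENT_THRESHOLD = 0.8    # A, rising motor current suggests the finger is loaded
VELOCITY_THRESHOLD = 0.05  # rad/s, the joint nearly stops when it meets an object

def contact_detected(motor_current, joint_velocity):
    return motor_current > CURRENT_THRESHOLD and abs(joint_velocity) < VELOCITY_THRESHOLD

def finger_command(motor_current, joint_velocity, target_force, measured_force, kp=0.5):
    """Return a motor command: keep flexing until contact, then hold a force setpoint."""
    if not contact_detected(motor_current, joint_velocity):
        return 1.0                               # continue closing the finger
    return kp * (target_force - measured_force)  # proportional force regulation

# Example: the finger has met an object (high current, near-zero joint velocity)
print(finger_command(motor_current=1.1, joint_velocity=0.01,
                     target_force=2.0, measured_force=1.5))
```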

“This way, the movements of generic grasping motions are adapted to objects with different shapes, stiffness and sizes,” says Tieck. The system can also adapt its grasping motion quickly if the object moves or deforms.

The robotic grasping system is described in a study published October 24 in IEEE Robotics and Automation Letters. The researchers’ robotic hand used its three different grasping motions on objects without knowing their properties. Target objects included a plastic bottle, a soft ball, a tennis ball, a sponge, a rubber duck, different balloons, a pen, and a tissue pack. The researchers found, for one, that pinching motions required more precision than cylindrical or spherical grasping motions.

“For this approach, the next step is to incorporate visual information from event-based cameras and integrate arm motion with SNNs,” says Tieck. “Additionally, we would like to extend the hand with haptic sensors.”

The long-term goal, he says, is to develop “a system that can perform grasping similar to humans, without intensive planning for contact points or intense stability analysis, and [that is] able to adapt to different objects using visual and haptic feedback.”
 

New Sensor Integrated Within Dental Implants Monitors Bone Health

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/the-human-os/biomedical/devices/new-sensor-integrated-within-dental-implants-monitors-bone-health


Scientists have created a new sensor that can be integrated within dental implants to passively monitor bone growth, bypassing the need for multiple x-rays of the jaw. The design is described in a study published September 25 in IEEE Sensors Journal.

Currently, x-rays are used to monitor jaw health following a dental implant. Dental x-rays typically involve low doses of radiation, but people with dental implants may require more frequent x-rays to monitor their bone health following surgery. And, as professor Alireza Hassanzadeh of Shahid Beheshti University, Tehran, notes, “Too many X-rays is not good for human health.”

To reduce this need for x-rays, Hassanzadeh and two graduate students at Shahid Beheshti University designed a new sensor that can be integrated within dental implants. It passively measures changes in capacitance, an electrical property of the surrounding tissue, to monitor bone growth. Two designs, for short- and long-term monitoring, were created.

The sensors are made of titanium and poly-ether-ether-ketone, and are integrated directly into a dental implant using microfabrication methods. The designs do not require any battery, and passively monitor changes in capacitance once the dental implant is in place.

“When the bone is forming around the sensor, the capacitance of the sensor changes,” explains Hassanzadeh. This indicates how the surrounding bone growth changes over time. The changes in capacitance, and thus bone growth, are then conveyed to a reader device that transfers the measurements into a data logger.  
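
On the reader and data-logger side, the idea amounts to tracking relative capacitance change over time as a rough bone-formation trend. The sketch below is purely illustrative: the readings are invented, and the study reports only that more surrounding bone shifts the sensor's capacitance, not a specific calibration curve.

```python
# Sketch of the reader/data-logger side: track relative capacitance change over
# time as a bone-formation trend. Readings below are hypothetical.
readings_pf = [2.10, 2.13, 2.19, 2.27, 2.36]  # weekly capacitance readings, picofarads

baseline = readings_pf[0]
for week, c in enumerate(readings_pf):
    change_pct = 100.0 * (c - baseline) / baseline
    print(f"week {week}: {c:.2f} pF ({change_pct:+.1f}% vs. baseline)")
```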

In their study, the researchers tested the sensors in the femur and jaw bone of a cow. “The results reveal that the amount of bone around the implant has a direct effect on the capacitance value of the sensor,” says Hassanzadeh.

He says that the sensor still needs to be optimized for size and different implant shapes, and clinical experiments will need to be completed with different kinds of dental implant patients. “We plan to commercialize the device after some clinical tests and approval from FDA and authorities,” says Hassanzadeh.

Use AI To Convert Ancient Maps Into Satellite-Like Images

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/ai-ancient-maps-satellite-images

Ancient maps give us a glimpse of how landscapes looked centuries ago. But what would we see if we looked at these older maps with a modern lens?

Henrique Andrade is a student at Escola Politécnica da Universidade de Pernambuco who has been studying maps of his hometown of Recife, Brazil, for several years now. “I gathered all these digital copies of maps, and I ended up discovering things about my hometown that aren’t so widely known,” he says. “I feel that in Recife people were denied access to their own past, which makes it difficult for them to understand who they are, and consequently what they can do about their own future.”

Andrade approached a professor at his university, Bruno Fernandes, with an idea: to develop a machine learning algorithm that could transform old maps into Google satellite images. Such an approach, he believes, could inform people of how land use has changed over time, including the social and economic impacts of urbanization.

To see the project realized, they used an existing AI tool called Pix2pix, which relies on two neural networks. The first creates images based on the input set, while the second decides whether a generated image is real or fake. The two networks are trained against each other, with the first ultimately learning to create realistic-looking images based on the historical data provided.
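
For readers curious about the mechanics, here is a highly condensed sketch of that adversarial setup: a generator maps an old-map image to a satellite-style image, and a discriminator learns to tell generated pairs from real ones. Tiny multilayer perceptrons and random tensors stand in for the real pix2pix networks and the Recife map dataset.

```python
# Condensed sketch of a pix2pix-style conditional GAN. Toy networks and random
# tensors stand in for the real architecture and the historical-map dataset.
import torch
import torch.nn as nn

IMG = 16 * 16 * 3  # toy 16x16 RGB images, flattened

G = nn.Sequential(nn.Linear(IMG, 128), nn.ReLU(), nn.Linear(128, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG * 2, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

old_maps = torch.rand(8, IMG) * 2 - 1   # placeholder "historical map" batch
satellite = torch.rand(8, IMG) * 2 - 1  # placeholder paired "satellite" batch

for step in range(100):
    # Discriminator: real (map, satellite) pairs vs. (map, generated) pairs
    fake = G(old_maps).detach()
    d_real = D(torch.cat([old_maps, satellite], dim=1))
    d_fake = D(torch.cat([old_maps, fake], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator, plus an L1 term as in pix2pix
    fake = G(old_maps)
    d_fake = D(torch.cat([old_maps, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + (fake - satellite).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("final D loss:", float(loss_d), "G loss:", float(loss_g))
```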

Andrade and Fernandes describe their approach in a study published 24 September 2020 in IEEE Geoscience and Remote Sensing Letters. In this study, they took a map of Recife from 1808 and generated modern day images of the area.

“When you look at the images, you get a better grasp of how the city has changed in 200 years,” explains Andrade. “The city’s geography has drastically changed—landfills have reduced the water bodies and green areas were all removed by human activity.”

He says an advantage of this AI approach is that it requires relatively little input data; however, the input must be accompanied by some historical context, and the resolution of the generated images is lower than what the researchers would like.

“Moving forward, we are working on improving the resolution of the images, and experimenting on different inputs,” says Andrade. He sees this approach to generate modern images of the past as widely applicable, noting that it could be applied to various locations and could be used by urban planners, anthropologists, and historians.

A New Way For Autonomous Racing Cars to Control Sideslip

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/get-a-grip-a-new-way-for-autonomous-racing-cars-to-avoid-sideslip

When race car drivers take tight turns at high speeds, they rely on their experience and gut feeling to hit the gas pedal without spinning out. But how does an autonomous race car make the same decision?

Currently, many autonomous cars rely on expensive external sensors to calculate a vehicle’s velocity and chance of sideslipping on the racetrack. In a different approach, one research team in Switzerland has recently developed a novel machine learning algorithm that harnesses measurements from simpler sensors. They describe their design in a study published August 14 in IEEE Robotics and Automation Letters.

As a race car takes a turn around the track, its forward and lateral velocity determines how well the tires grip the road—and how much sideslip occurs.

“(Autonomous) race cars are typically equipped with special sensors that are very accurate, exhibit almost no noise, and measure the lateral and longitudinal velocity separately,” explains Victor Reijgwart, of the Autonomous Systems Lab at ETH Zurich and a co-creator of the new design.

These state-of-the-art sensors only require simple filters (or calculations) to estimate velocity and control sideslip. But, as Reijgwart notes, “Unfortunately, these sensors are heavy and very expensive—with single sensors often costing as much as an entry-level consumer car.”

His group, whose Formula Student team is named AMZ Racing, sought a novel solution. Their resulting machine learning algorithm relies on several measurements, including data from two ordinary inertial measurement units, the rotation speeds and motor torques at all four wheels, and the steering angle. They trained their model using real data from racing cars on flat, gravel, bumpy, and wet road surfaces.
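
The estimation problem can be framed as supervised regression: learn lateral and longitudinal velocity from the inexpensive sensor channels, using the expensive optical sensor as ground truth during training. The sketch below uses synthetic data and an off-the-shelf regressor; it does not reproduce the AMZ team's actual network or training set.

```python
# Sketch of the velocity-estimation problem with synthetic data and a generic
# regressor; not the AMZ team's model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Columns: ax, ay, yaw_rate, 4 wheel speeds, 4 motor torques, steering angle
X = rng.normal(size=(n, 12))
# Hypothetical ground truth from the expensive reference sensor: [v_x, v_y]
y = np.column_stack([X[:, 3:7].mean(axis=1), 0.3 * X[:, 11] + 0.1 * X[:, 1]])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)
v_hat = model.predict(X_te)
print("RMSE [v_x, v_y]:", np.sqrt(((v_hat - y_te) ** 2).mean(axis=0)))
```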

In their study, the researchers compared their approach to the external velocity sensors that have been commonly used at multiple Formula Student Driverless events across Europe in 2019. Results show that the new approach demonstrates comparable performance when the cars are undergoing a high level of sideslip (10 degrees at the rear axle), but offers several advantages. For example, the new approach is better at rejecting biases and outlier measurements. The results also show that the machine learning approach is 15 times better than using just simple algorithms with non-specialized sensors.

“But learning from data is a two-edged sword,” says Sirish Srinivasan, another AMZ Racing member at ETH Zurich. “While the approach works well when it has been used under circumstances that are similar to the data it was trained on, safe behavior of the [model] cannot yet be guaranteed when it is used in conditions that significantly differ from the training data.”

Some examples include unusual weather conditions, changes in tire pressure, or other unexpected events.

The AMZ Racing team participates in yearly Formula Student Driverless engineering competitions, and hopes to apply this technique in the next race.

In the meantime, the team is interested in further improving their technique. “Several open research questions remain, but we feel like the most central one would be how to deal with unforeseen circumstances,” says Reijgwart. “This is, arguably, a major open question for the machine learning community in general.”

He notes that adding more “common sense” to the model, which would give it more conservative but safe estimates in unforeseen circumstances, is one option.  In a more complex approach, the model could perhaps be taught to predict its own uncertainty, so that it hands over control to a simpler but more reliable mode of calculation when the AI encounters an unfamiliar scenario.

This Motorized Backpack Eases the Burden for Hikers

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/consumer-electronics/portable-devices/this-motorized-backpack-eases-the-burden-for-hikers


For backpackers, it’s a treat to escape civilization and traverse the woods, enjoying the ruggedness of the great outdoors. But carrying that heavy backpack for days can undoubtedly be a drag.

One group of researchers in China has some welcome news to lighten the load. They’ve designed a new backpack that accounts for the inertial forces of the bag against a backpacker’s body as they walk, reducing the metabolic energy required by the user by an average of 11%. Their design is described in a study published July 27 in IEEE Transactions on Neural Systems and Rehabilitation Engineering.

Caihua Xiong, a professor at Huazhong University of Science and Technology who was involved in the study, notes that humans around the world and across the ages have been exploring ways to lighten their loads. “Asian people utilized flexible bamboo poles to carry bulky goods, and Romans designed suspended backpacks to carry heavy loads, which show energetic benefits,” he notes. “These designed passive carrying tools have the same principle [as ours].”

As humans walk, our gait is particularly energy efficient when only one foot is on the ground. But when we transition to the other foot and both are temporarily grounded, this is where the energy transfer becomes less efficient. And if we are carrying a heavy backpack, the extra inertial force from the backpack’s vertical movement and oscillations creates further inefficiencies during this transition.

To adjust for these inertial forces, Xiong’s team designed a motorized backpack that has two different modes. In its passive mode, two symmetrically arranged elastic ropes balance the weight of the load within the backpack. Alternatively, the user can switch the system into active mode, in which a rotary motor regulates the acceleration of the load. The whole backpack weighs 5.3 kg, and was designed to carry loads up to 30 kg.
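
The goal of the active mode, in essence, is to drive the vertical acceleration of the load toward zero by commanding the motor each control cycle. The sketch below shows one simple way to express that idea; the control law, gains, and signals are hypothetical stand-ins, not the controller published in the paper.

```python
# Sketch of what an active mode like this aims to do: oppose the measured
# vertical acceleration of the load. Gains and control law are hypothetical.
def active_mode_step(load_accel, prev_accel, dt=0.001, kp=40.0, kd=2.0):
    """Return a motor command that opposes measured load acceleration (PD-style)."""
    d_accel = (load_accel - prev_accel) / dt
    return -(kp * load_accel + kd * d_accel)

prev = 0.0
for a in [0.8, 0.5, 0.2, -0.1]:           # measured vertical load accel, m/s^2
    command = active_mode_step(a, prev)
    prev = a
    print(f"accel {a:+.1f} m/s^2 -> motor command {command:+.1f}")
```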

In experiments with seven similarly sized men, the researchers compared the energy requirements of using a typical rucksack to those of their new backpack system, with the motorized system both on and off. The participants were assigned to try these three scenarios in random order, while surface electromyography signals from their leg muscles and respiratory measurements were taken to analyze their energy expenditure.

Results show that the motorized backpack in active mode reduces the load acceleration by 98.5% on average. In terms of metabolic cost for the user, the motorized backpack design required on average 8% and 11% less energy in passive and active modes, respectively, compared to the standard rucksack. Xiong cautions that these reductions may in part be due to the distribution of weight within the two backpacks, since the designed system has the motor in a fixed position higher in the backpack. In contrast, the contents of the rucksack are loose in the compartment and thus the weight distribution compared to the user’s center-of-mass is different.

Also of note, this study involved people walking on flat ground. Xiong says he is interested in potentially commercializing the product, but aims to first explore ways of improving the system for different walking speeds and terrains.

Amaran the Tree-Climbing Robot Can Safely Harvest Coconuts

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/automaton/robotics/industrial-robots/amaran-tree-climbing-robot-can-safely-harvest-coconuts


Coconuts may be delicious and useful for producing a wide range of products, but harvesting them is no easy task. Specially trained harvesters must risk their lives by climbing trees roughly 15 meters high to hack off just one bunch of coconuts. A group of researchers in India has designed a robot, named Amaran, that could reduce the need for human harvesters to take such a risk. But is the robot up to the task?

The researchers describe the tree-climbing robot in a paper published in the latest issue of IEEE/ASME Transactions on Mechatronics. Along with lab tests, they compared Amaran’s ability to harvest coconuts to that of a 50-year-old veteran harvester. Whereas the man bested the robot in terms of overall speed, the robot excelled in endurance.

To climb, Amaran relies on a ring-shaped body that clasps around trees of varying diameter. The robot carries a control module, motor drivers, a power management unit, and a wireless communications interface. Eight wheels allow it to move up and down a tree, as well as rotate around the trunk. Amaran is controlled by a person on the ground, who can use an app or joystick system to guide the robot’s movements.

Once Amaran approaches its target, an attached controller unit wields a robotic arm with 4 degrees of freedom to snip the coconut bunch. As a safety feature, if Amaran’s main battery dies, a backup unit kicks in, helping the robot return to ground.

Rajesh Kannan Megalingam, an assistant professor at Amrita Vishwa Vidyapeetham University, in South India, says his team has been working on Amaran since 2014. “No two coconut trees are the same anywhere in the world. Each one is unique in size, and has a unique alignment of coconut bunches and leaves,” he explains. “So building a perfect robot is an extremely challenging task.”

While testing the robot in the lab, Megalingam and his colleagues found that Amaran is capable of climbing trees when the inclination of the trunk is up to 30 degrees with respect to the vertical axis. Megalingam says that many coconut trees, especially under certain environmental conditions, grow at such an angle.

Next, the researchers tested Amaran in the field, and compared its ability to harvest coconuts to the human volunteer. The trees ranged from 6.2 to 15.2 m in height.

It took the human on average 11.8 minutes to harvest one tree, whereas it took Amaran an average of 21.9 minutes per tree (notably 14 of these minutes were dedicated to setting up the robot at the base of the tree, before it even begins to climb).

But Megalingam notes that Amaran can harvest more trees in a given day. For example, the human harvester in their trials could scale about 15 trees per day before getting tired, while the robot can harvest up to 22 trees per day, if the operator does not get tired. And although the robot is currently teleoperated, future improvements could make it more autonomous, improving its climbing speed and harvesting capabilities. 

“Our ultimate aim is to commercialize this product and to help the coconut farmers,” says Megalingam. “In Kerala state, there are only 7,000 trained coconut tree climbers, whereas the requirement is about 50,000 trained climbers. The situation is similar in other states in India like Tamil Nadu, Andhra, and Karnataka, where coconut is grown in large numbers.”

He acknowledges that the current cost of the robot is a barrier to broader deployment, but notes that community members could pool resources to share the costs and use of the robot. Most importantly, he notes, “Coconut harvesting using Amaran does not involve risk for human life. Any properly trained person can operate Amaran. Usually only male workers take up this tree climbing job. But Amaran can be operated by anyone irrespective of gender, physical strength, and skills.”

Predicting the Lifespan of an App

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/internet/predicting-the-lifespan-of-an-app

The number of apps smartphone users have to choose from is daunting, with roughly 2 million available through the Apple Store alone. But survival of the fittest applies to the digital world too, and not all of these apps will go on to become the next TikTok. In a study published 29 July in IEEE Transactions on Mobile Computing, researchers describe a new model for predicting the long-term survival of apps, which outperforms seven existing designs.

“For app developers, understanding and tracking the popularity of an app is helpful for them to act in advance to prevent or alleviate the potential risks caused by the dying apps,” says Bin Guo, a professor at Northwestern Polytechnical University who helped develop the new model.

“Furthermore, the prediction of app life cycle is crucial for the decision-making of investors. It helps evaluate and assess whether the app is promising for the investors with remarkable rewards, and provides in advance warning to avoid investment failures.”

In developing their new model, AppLife, Guo’s team took a Multi-Task Learning (MTL) approach. This involves dividing data on apps into segments based on time, and analyzing factors – such as download history, ratings, and reviews – at each time interval. AppLife then predicts the likelihood of an app being removed within the next one or two years.
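
The sketch below gives a feel for that setup: segment each app's history into time windows, summarize downloads, ratings, and review sentiment per window, and predict whether the app is removed within the following period. The data are invented and a single classifier stands in for the paper's multi-task learning model.

```python
# Sketch of time-windowed app features feeding a removal classifier. Data are
# invented; a single classifier stands in for AppLife's multi-task model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_apps, n_windows = 500, 4
downloads = rng.poisson(1000, size=(n_apps, n_windows))
avg_rating = rng.uniform(1, 5, size=(n_apps, n_windows))
review_sentiment = rng.uniform(-1, 1, size=(n_apps, n_windows))

X = np.hstack([downloads, avg_rating, review_sentiment])  # one row per app
# Hypothetical label: apps with falling downloads and poor ratings die more often
risk = (downloads[:, -1] < downloads[:, 0]) & (avg_rating[:, -1] < 3)
y = (risk & (rng.random(n_apps) < 0.8)).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Predicted removal probability for app 0:", clf.predict_proba(X[:1])[0, 1])
```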

The researchers evaluated AppLife using a real-world dataset with more than 35,000 apps from the Apple Store that were available in 2016, but had been released the previous year. “Experiments show that our approach outperforms seven state-of-the-art methods in app survival prediction. Moreover, the precision and the recall reach up to 84.7% and 95.1%, respectively,” says Guo.

Intriguingly, AppLife was particularly good at predicting the survival of apps for tools—even more so than apps for news and video. Guo says this could be because more apps for tools exist in the dataset, feeding the model with more data to improve its performance in this respect. Or, he says, it could be caused by greater competition among tool apps, which in turn leads to more detailed and consistent user feedback.

Moving forward, Guo says he plans on building upon this work. While AppLife currently looks at factors related to individual apps, Guo is interested in exploring interactions among apps, for example which ones complement each other. Analyzing the usage logs of apps is another area of interest, he says.

New LiDAR Sensor Uses Mirrors to Achieve High Efficiency

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/new-lidar-sensor-uses-mirrors-to-achieve-high-efficiency

Light Detection and Ranging (LiDAR) technology relies on fast, precisely timed laser pulses, making it useful for various kinds of sensors, including those that support the Internet of Things (IoT). However, many current sensors that rely on LiDAR are expensive, bulky, heavy, and power hungry.

One group of researchers is proposing a novel design for a LiDAR-based sensor that is both affordable and requires very little power. The sensor relies on a collection of microelectromechanical (MEMS) mirrors to achieve high efficiency – enough so to be powered by a 9-volt battery. The design is described in a study published 2 July in IEEE Sensors Letters.

“This system is especially suitable for smart buildings,” says Huikai Xie, a professor at the University of Florida who co-designed the new LiDAR-based sensor. “For instance, this system may enable heating, ventilation, and air-conditioning (HVAC) systems to control air flow by estimating the population distribution and tracking people’s movement.”

An outstanding issue, however, is creating LiDAR sensors that can achieve a wide field of view without consuming too much energy. Existing designs tend to rely on motorized optomechanical scanners to disperse the LiDAR signals and achieve a wider field of view—yet these devices typically consume about 10 watts of power.

Instead of a motorized optomechanical scanner, the new design by Xie’s group relies on MEMS mirrors to control the LiDAR signals. The mirrors require significantly less power to manipulate than the bulkier motorized scanners that have typically been used. What’s more, a passive infrared sensor ensures that the whole system is only activated when people are present.

The design currently relies on an off-the-shelf time-of-flight (TOF) engine for analyzing the returning laser signals, which Xie says is the bulkiest and most energy-intensive component. Moving forward, his team plans on developing its own, smaller TOF device that uses less energy. As well, he says, “We will continue advancing the MEMS technology by making MEMS mirrors with larger optical aperture, larger scan angle and faster scan frequency.”

“The whole MEMS LiDAR system can be powered by a battery with a maximum power consumption of 2.7 W,” says Xie. “The passive infrared sensor can put the system into an idle mode, which extends the battery life by three times or more.”
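
The power-gating idea is simple enough to express in a few lines: keep the LiDAR idle until the passive infrared sensor reports presence. In the sketch below, the 2.7 W active figure comes from the article, while the idle draw and occupancy fraction are hypothetical, chosen only to show how a roughly threefold battery-life extension can arise.

```python
# Sketch of PIR-gated duty cycling. The 2.7 W figure is from the article;
# the idle power and occupancy fraction are hypothetical.
ACTIVE_POWER_W = 2.7
IDLE_POWER_W = 0.3  # hypothetical idle draw

def system_power(pir_detects_person: bool) -> float:
    """Return instantaneous power draw given the PIR sensor's output."""
    return ACTIVE_POWER_W if pir_detects_person else IDLE_POWER_W

# If people are present only a quarter of the time, average power drops sharply,
# which is how the PIR sensor can stretch battery life by roughly 3x or more.
occupancy = 0.25
avg_power = occupancy * ACTIVE_POWER_W + (1 - occupancy) * IDLE_POWER_W
print(f"Average power: {avg_power:.2f} W vs. {ACTIVE_POWER_W} W always-on")
```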

In the future, Xie says he envisions this sensor being used not just for detecting people in smart homes, but for applications ranging from robotics to small unmanned air vehicles.

Robotic Chameleon Tongue Snatches Nearby Objects in the Blink of an Eye

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/robotics/robotics-hardware/robotic-chameleon-tongue-snatches-objects

Chameleons may be slow-moving lizards, but their tongues can accelerate at astounding speeds, snatching insects before they have any chance of fleeing. Inspired by this remarkable skill, researchers in South Korea have developed a robotic tongue that springs forth quickly to snatch up nearby items.

They envision the tool, called Snatcher, being used by drones and robots that need to collect items without getting too close to them. “For example, a quadrotor with this manipulator will be able to snatch distant targets, instead of hovering and picking up,” explains Gwang-Pil Jung, a researcher at Seoul National University of Science and Technology (SeoulTech) who co-designed the new device.

There has been other research into robotic chameleon tongues, but what’s unique about Snatcher is that it packs chameleon-like snatching speed into a portable form factor: the total size is 12 x 8.5 x 8.5 centimeters and it weighs under 120 grams. Still, it’s able to snatch an object of up to 30 grams from 80 centimeters away in under 600 milliseconds.

To create Snatcher, Jung and a colleague at SeoulTech, Dong-Jun Lee, set about developing a spring-like device that’s controlled by an active clutch combined with a single series elastic actuator. Powered by a wind-up spring, a steel tapeline—analogous to a chameleon’s tongue—passes through two geared feeders. The clutch is what allows the single spring unwinding in one direction to drive both the shooting and the retracting, by switching a geared wheel between driving the forward feeder or the backward feeder.

The end result is a lightweight snatching device that can retrieve an object 0.8 meters away within 600 milliseconds. Jung notes that some other, existing devices designed for retrieval are capable of accomplishing the task quicker, at about 300 milliseconds, but these designs tend to be bulky. A more detailed description of Snatcher was published July 21 in IEEE Robotics and Automation Letters.

“Our final goal is to install the Snatcher to a commercial drone and achieve meaningful work, such as grasping packages,” says Jung. One of the challenges they still need to address is how to power the actuation system more efficiently. “To solve this issue, we are finding materials having high energy density.” Another improvement is designing a chameleon tongue-like gripper, replacing the simple hook that’s currently used to pick up objects. “We are planning to make a bi-stable gripper to passively grasp a target object as soon as the gripper contacts the object,” says Jung.

New AI Dupes Humans into Believing Synthesized Sound Effects Are Real

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/new-ai-dupes-humans-into-believing-synthesized-sound-effects-are-real


Imagine you are watching a scary movie: the heroine creeps through a dark basement, on high alert. Suspenseful music plays in the background, while some unseen, sinister creature creeps in the shadows… and then–BANG! It knocks over an object.

Such scenes would hardly be as captivating and scary without the intense, but perfectly timed sound effects, like the loud bang that sent our main character wheeling around in fear. Usually these sound effects are recorded by Foley artists in the studio, who produce the sounds using oodles of objects at their disposal. Recording the sound of glass breaking may involve actually breaking glass repeatedly, for example, until the sound closely matches the video clip.

In a more recent plot twist, researchers have created an automated program that analyzes the movement in video frames and creates its own artificial sound effects to match the scene. In a survey, the majority of people polled indicated that they believed the fake sound effects were real. The model, AutoFoley, is described in a study published June 25 in IEEE Transactions on Multimedia.

“Adding sound effects in post-production using the art of Foley has been an intricate part of movie and television soundtracks since the 1930s,” explains Jeff Prevost, a professor at the University of Texas at San Antonio who co-created AutoFoley. “Movies would seem hollow and distant without the controlled layer of a realistic Foley soundtrack. However, the process of Foley sound synthesis therefore adds significant time and cost to the creation of a motion picture.”

Intrigued by the thought of an automated Foley system, Prevost and his PhD student, Sanchita Ghose, set about creating a multi-layered machine learning program. They created two different models that could be used in the first step, which involves identifying the actions in a video and determining the appropriate sound.

The first machine learning model extracts image features (e.g., color and motion) from the frames of fast-moving action clips to determine an appropriate sound effect.

The second model analyzes the temporal relationship of an object in separate frames. By using relational reasoning to compare different frames across time, the second model can anticipate what action is taking place in the video.

In a final step, sound is synthesized to match the activity or motion predicted by one of the models. Prevost and Ghose used AutoFoley to create sound for 1,000 short movie clips capturing a number of common actions, like falling rain, a galloping horse, and a ticking clock.
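
Putting the three stages together, the pipeline has the skeleton sketched below. Every function body is a placeholder standing in for the paper's convolutional feature extractor, relational-reasoning network, and sound-synthesis stage; only the overall flow is meant to be informative.

```python
# Skeleton of the three-stage pipeline described above. All function bodies are
# placeholders for AutoFoley's actual networks and synthesis stage.
def extract_frame_features(frames):
    # stand-in for model 1 (color/motion features) or model 2 (frame-pair relations)
    return [0.0 for _ in frames]

def classify_action(features):
    # stand-in for the action classifier; e.g. "rain", "galloping horse"
    return "rain"

def synthesize_sound(action, duration_s, sample_rate=44100):
    # stand-in for the synthesis stage: produce a waveform for the action class
    return b"\x00" * int(sample_rate * duration_s)

def autofoley_like(frames, fps=24.0):
    feats = extract_frame_features(frames)
    action = classify_action(feats)
    return synthesize_sound(action, duration_s=len(frames) / fps)

print(len(autofoley_like(frames=[None] * 48)))  # 2 seconds of placeholder audio
```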

Analysis shows–unsurprisingly–that AutoFoley is best at producing sounds where the timing doesn’t need to align perfectly with the video (e.g., falling rain, a crackling fire). But the program is more likely to be out of sync with the video when visual scenes contain random actions with variation in time (e.g., typing, thunderstorms).

Next, Prevost and Ghose surveyed 57 local college students on which movie clips they thought included original soundtracks. In assessing soundtracks generated by the first model, 73% of students surveyed chose the synthesized AutoFoley clip as the original piece, over the true original sound clip. In assessing the second model, 66% of respondents chose the AutoFoley clip over the original sound clip.

“One limitation in our approach is the requirement that the subject of classification is present in the entire video frame sequence,” says Prevost, also noting that AutoFoley currently relies on a dataset with limited Foley categories. While a patent for AutoFoley is still in the early stages, Prevost says these limitations will be addressed in future research.

New AI Estimates the Size of Remote Refugee Camps Using Satellite Data

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/new-ai-estimates-the-size-of-remote-refugee-camps-using-satellite-data


Over the course of the Syrian civil war that has been raging since 2011, the number of displaced refugees has been growing at an alarming rate. The Rukban refugee camp at the nation’s southern border near Jordan is one eye-opening example. According to UN data, the number of refugee tents set up in the area has increased from roughly 132 to 11,702 over the four-year period between 2015 and 2019. 

Being able to monitor the expansion of refugee camps is important for humanitarian planning and resource allocation. But keeping a reliable count can prove challenging when refugee camps become so large. To support these efforts, several models have been developed to analyze satellite data and estimate the population of refugee camps based on the number of tents that are detected. One recent DARPA-funded research project has led to a new machine learning algorithm with high precision and accuracy in accomplishing the task. (In this case, precision and accuracy are closely related but not synonymous. Accuracy refers to the ratio of all correctly identified pixels—with and without tents; precision is the correct identification of tent-containing pixels only.) The new model is described in a recent publication in IEEE Geoscience and Remote Sensing Letters.
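
A small worked example makes the accuracy/precision distinction concrete; the pixel counts below are made up, not figures from the paper.

```python
# Worked example of the accuracy vs. precision distinction defined above,
# using made-up pixel counts.
tp = 700    # tent pixels correctly labeled as tent
fp = 100    # non-tent pixels wrongly labeled as tent
fn = 75     # tent pixels missed
tn = 9125   # non-tent pixels correctly labeled as non-tent

accuracy = (tp + tn) / (tp + fp + fn + tn)  # all correctly identified pixels
precision = tp / (tp + fp)                  # correctness of tent-labeled pixels only
print(f"accuracy = {accuracy:.3f}, precision = {precision:.3f}")
# A model can score high accuracy simply because most pixels contain no tents,
# which is why precision is reported separately.
```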

Jiang Li, a professor at Old Dominion University, is a co-developer of the new model. The program was trained and tested using satellite data across two time points, in 2016 and 2017. It takes satellite images and breaks them down into arbitrary pieces, extracting spectral and spatial information. Filters help classify the data at the pixel level. This approach is referred to as a fully convolutional network (FCN) model.

In their study, the researchers compared their FCN model to several others, finding modest increases in accuracy and substantial increases in precision. Their model was up to 4.49 percent more accurate than the others, and as much as 41.99 percent more precise.

“In the validation data, manual labelling showed 775 tents and our FCN model discovered 763 tents, with 1.55 percent error,” says Li. “All other competing models have errors ranging from 12.9 percent to 510.90 percent [meaning they drastically overcounted the number of tents], and are much less accurate.”

With the model developed, Li says his team is waiting for guidance from DARPA before implementing the tool in a real-world setting. “We are certain that real applications of our new model will require us to connect to some potential users, such as UNOSAT, DHS, and other government agencies. This process may take time,” he says. “In any event, our model is generic and can be adapted to other applications such as flood and hurricane damage assessment, urban change detection, et cetera.”

While it’s certainly true that the model could be adapted for other applications, Li cautions that the process could be labor intensive. His team relied on an existing database of labeled image data to build the tent-sensing FCN model, but adapting the model for other purposes could potentially mean manually labelling a new dataset of relevant images. 

“The data hungry problem is currently a big hurdle for this application. We are investigating state-of-the-art data augmentation strategies and active learning methods as alternatives in order to further improve training efficiency,” says Li.
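As a loose illustration of the data-augmentation idea Li mentions, simple geometric transforms can multiply the number of labeled training tiles without any additional manual labeling; the transforms below are generic choices, not necessarily the ones his team is investigating.

```python
import numpy as np

def augment(image, mask):
    """Generate extra training pairs by flipping and rotating a labeled tile.

    Because the mask is transformed identically, no new manual labeling is needed.
    """
    pairs = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        img_r = np.rot90(image, k, axes=(0, 1))
        msk_r = np.rot90(mask, k, axes=(0, 1))
        pairs.append((img_r, msk_r))
        pairs.append((np.fliplr(img_r), np.fliplr(msk_r)))  # horizontal flip
    return pairs

tile = np.random.rand(64, 64, 4)          # hypothetical 4-band image tile
label = np.random.randint(0, 2, (64, 64)) # hypothetical tent/no-tent mask
print(len(augment(tile, label)))          # 8 augmented pairs from one labeled tile
```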

New Hardware Mimics Spiking Behavior of Neurons With High Efficiency

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/hardware/new-hardware-mimics-spiking-behavior-of-neurons-with-high-efficiency


Nothing computes more efficiently than a brain, which is why scientists are working hard to create artificial neural networks that mimic the organ as closely as possible. Conventional approaches use artificial neurons that work together to learn different tasks and analyze data; however, these artificial neurons cannot actually “fire” the way real neurons do, emitting bursts of electrical activity that travel to the neurons they are connected with. The third generation of this computing technology aims to capture that spiking process more accurately, but doing so efficiently is hard.
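To see what “firing” means in computational terms, consider a leaky integrate-and-fire neuron, one of the simplest spiking models: it accumulates input over time and emits a discrete spike only when its internal potential crosses a threshold. The short simulation below is a generic illustration of that behavior, not a model of the hardware reported in the study, and its constants are arbitrary.

```python
import numpy as np

def leaky_integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    """Simulate one spiking neuron: accumulate input with leak, and emit a
    spike (1) and reset whenever the membrane potential crosses the threshold."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = leak * potential + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
print(leaky_integrate_and_fire(rng.uniform(0, 0.5, size=20)))
```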

Smart Home Devices Can Reveal Behaviors Associated With Dementia

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/biomedical/devices/smart-home-devices-can-reveal-the-health-status-of-individuals


As people age, cognitive decline can happen in subtle ways that are not always immediately obvious to family members or friends. One solution for better detecting these subtle changes, however, could already be in the homes of many people, in the form of a smart home device.

In a recent study, researchers demonstrate that it’s possible to use data from smart home devices to detect behavioral differences between people who are experiencing cognitive decline and healthy individuals. The results, which could have broader implications for the monitoring of many different health conditions, were published 3 June in IEEE Journal of Biomedical and Health Informatics.

Gina Sprint, an assistant professor of computer science at Gonzaga University, is one of the researchers involved in the study. Sprint and her collaborators at Washington State University developed a novel algorithm, called Behavior Change Detection for Groups (BCD-G), for analyzing data from smart home devices. In particular, the algorithm analyzes how residents’ behavioral patterns change across time.

In the study, 14 volunteers were monitored continuously in their homes for one month. Seven of these volunteers were living with dementia, while the other seven comprised a healthy control group of similar age and educational background. BCD-G was then used to assess the volunteers as they engaged in 16 types of activities, such as bathing, cooking, sleeping, working, and taking medications.
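The published version of BCD-G is more involved than can be shown here, but the flavor of the comparison can be sketched: from a smart-home event log, compute simple per-activity features, such as how long an activity lasts and when it starts, for each group, and then compare them. The data layout and feature choices below are illustrative assumptions, not the algorithm itself.

```python
from collections import defaultdict
from statistics import mean, pvariance

# Hypothetical smart-home log: (resident_group, activity, start_hour, duration_minutes)
log = [
    ("dementia", "cooking", 11.5, 55), ("dementia", "cooking", 14.0, 20),
    ("control",  "cooking", 12.0, 30), ("control",  "cooking", 12.2, 33),
    ("dementia", "sleeping", 22.0, 540), ("control", "sleeping", 23.0, 460),
]

def group_features(records):
    """Summarize each group's activities by mean duration and start-time variance."""
    by_key = defaultdict(list)
    for group, activity, start, duration in records:
        by_key[(group, activity)].append((start, duration))
    summary = {}
    for (group, activity), rows in by_key.items():
        starts, durations = zip(*rows)
        summary[(group, activity)] = {
            "mean_duration_min": mean(durations),
            "start_time_variance": pvariance(starts) if len(starts) > 1 else 0.0,
        }
    return summary

for key, feats in group_features(log).items():
    print(key, feats)
```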

Using BCD-G to compare the two groups revealed some intriguing differences in behavior.

“First, the in-home walk speed of the cognitively impaired group was about half as fast as the age-matched healthy control group,” says Sprint. “Also, the cognitively impaired group had a greater variance in the duration of the activities they performed and what time they started the activities. They slept more during the day and at night, and lastly, they exhibited large behavioral differences related to how often and when they would leave their home, take their medications, and get dressed.”

While BCD-G proved useful for uncovering the behavioral patterns of people with dementia in this study, Sprint notes that the algorithm can be applied to a number of other health conditions. For example, BCD-G could be used to monitor patients recovering from a stroke or traumatic brain injury.

“Because BCD-G looks at changes across time points, it has potential to help with almost any condition where a clinician would want to know if someone is improving or declining,” explains Sprint.

Moving forward, her team plans to consult with clinicians to gather their feedback on BCD-G and further expand upon the tool. “Involving clinicians with frontline experience when creating algorithms like BCD-G is key to making machine learning applicable in the real-world,” she says. “Successful applications can assist clinicians in treatment planning and ultimately improve patients’ health.”

New Electronic Nose Sniffs Out Perfectly Ripe Peaches for Harvest

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/chemical-sensors/new-electronic-nose-sniffs-out-perfectly-ripe-peaches-for-harvest

Have you ever tried to guess the ripeness of a peach by its smell? Farmers with a well-trained nose may be able to detect the unique combination of alcohols, esters, ketones, and aldehydes, but even an expert may struggle to know when the fruit is perfect for the picking. To help with harvesting, scientists have been developing electronic noses for sniffing out the ripest and most succulent peaches. In a recent study, one such e-nose achieved an accuracy of more than 98 percent.

Sergio Luiz Stevan Jr. and colleagues at Federal University of Technology – Paraná and State University of Ponta Grossa, in Brazil, developed the new e-nose system. Stevan notes that even within a single, large orchard, fruit on one tree may ripen at different times than fruit on another tree, thanks to microclimates of varying ventilation, rain, soil, and other factors. Farmers can inspect the fruit and make their best guess at the prime time to harvest, but risk losing money if they choose incorrectly.

Fortunately, peaches emit vaporous molecules, called volatile organic compounds, or VOCs. “We know that volatile organic compounds vary in quantity and type, depending on the different phases of fruit growth,” explains Stevan. “Thus, the electronic noses are an [option], since they allow the online monitoring of the VOCs generated by the culture.”

The e-nose system created by his team has a set of gas sensors sensitive to particular VOCs. The measurements are digitized and pre-processed in a microcontroller. Next, a pattern recognition algorithm is used to classify each unique combination of VOC molecules associated with three stages of peach ripening (immature, ripe, over-ripe). The data is stored internally on an SD memory card and transmitted via Bluetooth or USB to a computer for analysis.
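The article doesn't detail the pattern-recognition step, but a hedged sketch of the general approach, training a small classifier to map a vector of gas-sensor readings onto the three ripeness stages, could look like the following; the sensor channels and readings are made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical readings from four VOC-sensitive gas sensors per sample,
# labeled with the ripeness stage observed at harvest.
X = np.array([
    [0.12, 0.30, 0.05, 0.08],   # immature
    [0.15, 0.28, 0.07, 0.09],   # immature
    [0.45, 0.60, 0.22, 0.31],   # ripe
    [0.48, 0.58, 0.25, 0.29],   # ripe
    [0.80, 0.75, 0.55, 0.62],   # over-ripe
    [0.83, 0.78, 0.52, 0.66],   # over-ripe
])
y = ["immature", "immature", "ripe", "ripe", "over-ripe", "over-ripe"]

# A small classifier stands in for the pattern-recognition stage that would
# run after the microcontroller's pre-processing.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[0.47, 0.59, 0.24, 0.30]]))  # expected: ['ripe']
```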

The system is also equipped with a ventilation mechanism that draws in air from the surrounding environment at a constant rate. The passing air is subjected to a set level of humidity and temperature to ensure consistent measurements. The idea, Stevan says, is to deploy several of these “noses” across an orchard to create a sensing network. 

He notes several advantages of this system over existing ripeness-sensing approaches, including that it is online, conducts real-time continuous analyses in an open environment, and does not require direct handling of the fruit. “It is different from the other [approaches] present in the literature, which are generally carried out in the laboratory or warehouses, post-harvest or during storage,” he says. The e-nose system is described in a study published 4 June in IEEE Sensors Journal.

While the study shows that the e-nose system already has a high rate of accuracy at more than 98 percent, the researchers are continuing to work on its components, focusing in particular on improving the tool’s flow analysis. They have filed for a patent and are exploring the prospect of commercialization.

For those who prefer their fruits and grains in drinkable form, there is additional good news. Stevan says his team has previously developed a similar e-nose for beer, to analyze both alcohol content and aromas. Now the researchers are working on an e-nose for wine, as well as for a variety of other fruits.