All posts by Michelle Hampson

GPUs Can Now Analyze a Billion Complex Vectors in Record Time

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/gpus-can-now-analyze-millions-of-images-in-record-time

The complexity of a digital photo cannot be overstated.

Each pixel comprises many data points, and there can be millions of pixels in just a single photo. These many data points in relation to each other are referred to as “high-dimensional” data and can require immense computing power to analyze, say if you were searching for similar photos in a database. Computer programmers and AI experts refer to this as “the curse of high dimensionality.”

In a study published July 1 in IEEE Transactions on Big Data, researchers at Facebook AI Research propose a novel solution that aims to ease the burden of this curse. Rather than relying on a computer’s central processing units (CPUs), the traditional means of analyzing high-dimensional media, they’ve harnessed graphics processing units (GPUs). The advancement allows four GPUs to analyze more than 95 million high-dimensional images in just 35 minutes. This speed is 8.5 times faster than previous techniques that used GPUs to analyze high-dimensional data.

“The most straightforward technique for searching and indexing [high-dimensional data] is by brute-force comparison, whereby you need to check [each image] against every other image in the database,” explains Jeff Johnson, a research engineer at Facebook AI Research who co-developed the new approach using GPUs. “This is impractical for collections containing billions of vectors.”

CPUs, which have high memory storage and thus can handle large volumes of data, are capable of such a task. However, it takes a substantial amount of time for CPUs to transfer data among the various other supercomputer components, which causes an overall lag in computing time.

In contrast, GPUs offer more raw processing power. Therefore, Johnson and his team developed an algorithm that allows GPUs to both host and analyze a library of vectors. In this way, the data is managed by a small handful of GPUs that do all the work. Notably, GPUs typically have less overall memory storage than CPUs, but Johnson and his colleagues were able to overcome this pitfall using a technique that compresses vector databases and makes them more manageable for the GPUs to analyze.

“By keeping computations purely on a GPU, we can take advantage of the much faster memory available on the accelerator, instead of dealing with the slower memories of CPU servers and even slower machine-to-machine network interconnects within a traditional supercomputer cluster,” explains Johnson.

The researchers tested their approach against a database with one billion vectors, comprising 384 gigabytes of raw data. Their approach reduced the number of vector combinations that need to be analyzed, which would normally be a quintillion (10^18), by at least four orders of magnitude.

“Both the improvement in speed and the decrease in database size allow for solving problems that would otherwise take hundreds of CPU machines, in effect democratizing large-scale indexing and search techniques using a much smaller amount of hardware,” he says.

Their approach has been made freely available through the Facebook AI Similarity Search (Faiss) open source library. Johnson notes that the computing tech giant Nvidia has already begun building extensions using this approach, which were unveiled at the company’s 2021 GPU Technology Conference.
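
For readers who want to try the library, the sketch below shows one way to build a compressed, GPU-resident Faiss index and query it; the dimensionality, dataset size, and index parameters are illustrative choices, not the configuration reported in the paper.

```python
import numpy as np
import faiss  # Facebook AI Similarity Search library

d = 128            # vector dimensionality (illustrative)
nb = 1_000_000     # database size; the paper scales this idea to a billion vectors
xb = np.random.random((nb, d)).astype("float32")

# IVF + product quantization compresses each vector so a large database fits in GPU memory
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 1024, 16, 8)  # 1024 clusters, 16 sub-vectors, 8 bits each

# Move the index onto GPU 0 so both storage and search stay on the accelerator
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, index)

gpu_index.train(xb[:200_000])                   # learn the quantizers from a sample
gpu_index.add(xb)                               # add the full database
distances, ids = gpu_index.search(xb[:5], 10)   # 10 nearest neighbors for 5 query vectors
```

Compressing the vectors with product quantization is what allows the whole database to sit in the GPU’s comparatively small memory, which is the trade-off the article describes.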

New AR System Alters Sight, Sound and Touch

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/new-ar-system-alters-sight-sound-and-touch

There’s the typical pitter-patter sound that comes from a person drumming their fingers along a tabletop. But what if this normal pitter-patter sound was perceived as a series of hollow echoes? Or rolling thunder? A new augmented reality system called Tactile Echoes provides users with experiences like this, and could be used for a wide range of gaming, entertainment and research purposes.

Notably, the system does not require any equipment between the user’s fingertips and the contact surface, meaning users can enjoy the real sensation of their environment along with the augmented visual, haptic, and auditory enhancements.

“Tactile Echoes is possibly the first programmable system for haptic augmented reality that allows its users to freely touch physical objects or surfaces augmented with multimodal digital feedback using their hands,” says Anzu Kawazoe, a PhD candidate at the University of California, Santa Barbara who co-designed Tactile Echoes.

It accomplishes this using a sensor that is placed on the top of the user’s fingernail, which detects the vibrations that are naturally produced within the finger as it touches a surface. The vibrational signals are processed and translated into programmed sounds.

Different tactile feedback and sounds can be played with each interaction, because the vibrational patterns in our fingers change depending on what surface we touch, or the intensity of pressure applied. For example, Tactile Echoes may play a light, fun echo when you tap an object lightly, or play a sudden thud if you jab the object with force.
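
As a loose illustration of that idea, and not the team’s actual signal chain, the sketch below maps the strength of a detected fingertip vibration to the loudness and delay of a synthesized echo; the thresholds, sound names, and mapping are invented for the example.

```python
import numpy as np

def tap_intensity(vibration):
    """Estimate how hard the surface was tapped from a short window of
    fingertip-vibration samples (root-mean-square amplitude)."""
    return float(np.sqrt(np.mean(np.square(vibration))))

def echo_parameters(intensity):
    """Map tap intensity to playback parameters for a synthetic echo.
    The breakpoints here are arbitrary, for illustration only."""
    if intensity < 0.05:      # light tap -> soft, airy echo
        return {"gain": 0.2, "delay_ms": 120, "sample": "light_echo"}
    elif intensity < 0.2:     # moderate tap -> fuller, hollow echo
        return {"gain": 0.6, "delay_ms": 80, "sample": "hollow_echo"}
    else:                     # forceful jab -> sudden thud
        return {"gain": 1.0, "delay_ms": 20, "sample": "thud"}

# Example: a simulated burst of vibration from a firm tap
window = 0.3 * np.random.randn(256)
print(echo_parameters(tap_intensity(window)))
```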

“We were motivated by the idea of being able to almost magically augment any ordinary object or surface, such as a simple wooden table, with lively haptic and [sound] effects that playfully respond to how or where we touch,” explains Kawazoe.

Her team took the system one step further by integrating the wearable device with virtual environments created by a smart projector or a VR or AR headset. In this way, users can “touch” virtual objects in their real environment, and experience enhanced graphic, sound and haptic feedback.

The researchers tested Tactile Echoes through a number of user experiments, described in a study published May 26 in IEEE Transactions on Haptics. First, study participants were asked to describe different sounds, and what perceptions and associations are evoked with each one. In a second experiment, participants used Tactile Echoes to complete an interactive, augmented reality video game that was projected onto an ordinary desktop surface. Users reported that the Tactile Echoes feedback greatly enhanced the responsiveness, and their level of engagement and agency in playing the game.

While Tactile Echoes is still a prototype, Kawazoe says her team is interested in collaborating with companies to commercialize this tech. “Some of the most promising applications we can envisage include augmented reality games that can be played on any table top, musical devices whose interface can be projected wherever needed, and educational systems for enlivening learning by K-12 students,” she says.

She also notes that her team’s research so far with Tactile Echoes has revealed some interesting perceptual phenomena that occur when haptic feedback is delayed through the system.  In particular, they believe that perceptual masking is happening, whereby the perception of one stimulus affects the perceived intensity of a second stimulus.

“We are thinking that this tactile masking effect is working on the Tactile Echoes system. Specifically, time-delayed tactile feedback is perceived as stronger,” explains Kawazoe. “We are preparing new experiments to investigate these effects, and plan to use the results to further improve the Tactile Echoes system.”

Tracking Plastics in the Ocean By Satellite

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/remote-sensing/scientists-track-the-flow-of-microplastics-in-the-ocean-from-space

It’s well known that microplastics have infiltrated much of the world’s oceans, where these persistent pieces of garbage threaten the livelihood of marine life. But truly quantifying the extent of this problem globally remains a challenge.

Now, a pair of researchers are proposing a novel approach to tracking these minute scraps of plastic in the world’s oceans—remotely from space. They describe their new satellite-based approach in a study published this month in IEEE Transactions on Geoscience and Remote Sensing; their results hint at seasonal fluxes of microplastic concentrations in the oceans.

The problem of plastic persisting in environments is only growing. The annual global production of plastic has been increasing steadily every year since the 1950s, reaching 359 million metric tons in 2018. “Microplastic pollution is a looming threat to humans and our environment, the extent of which is not well-defined through traditional sampling alone,” explains Madeline Evans, a Research Assistant with the Department of Climate and Space Sciences and Engineering, University of Michigan. “In order to work towards better solutions, we need to know more about the problem itself.”

Evans was an undergraduate student at the University of Michigan when she began working with Christopher Ruf, a professor of climate and space science at the institution. The two have been collaborating over the past couple of years to develop and validate their approach, which uses bistatic radar measurements from NASA’s Cyclone Global Navigation Satellite System (CYGNSS) to track microplastics in the ocean.

The underlying concept behind their approach hinges on how the presence of microplastics alters the surface of the ocean. “The presence of microplastics and surfactants in the ocean causes the ocean surface to be less responsive to roughening by the wind,” says Ruf.

Therefore, Ruf and Evans explored whether CYGNSS measurements of the ocean’s surface roughness that deviated from predicted values, given local wind speeds, could be used to confirm the presence of microplastics. Indeed, they validated their approach by comparing it to other models for predicting microplastics. But whereas existing models provide a static snapshot of the extent and scope of microplastic pollution, this new approach using CYGNSS can be used to understand microplastic concentrations in real time.
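
In essence, the retrieval flags grid cells where the measured surface roughness is lower than what the local wind speed alone would predict. The sketch below shows that comparison in its simplest form; the wind-to-roughness model and the anomaly threshold are placeholders rather than the values used in the study.

```python
import numpy as np

def predicted_mss(wind_speed_ms):
    """Placeholder model: mean-square slope (roughness) expected for a clean
    ocean surface at a given wind speed. A real retrieval would use an
    empirically fitted geophysical model function."""
    return 0.003 + 0.00185 * wind_speed_ms

def roughness_anomaly(measured_mss, wind_speed_ms):
    """Fractional suppression of roughness relative to the wind-only prediction.
    Larger positive values suggest surfactants or microplastics damping the waves."""
    expected = predicted_mss(wind_speed_ms)
    return (expected - measured_mss) / expected

# Example: three CYGNSS-like grid cells with wind speed (m/s) and measured roughness
winds = np.array([6.0, 7.5, 5.0])
measured = np.array([0.0141, 0.0120, 0.0122])
anomalies = roughness_anomaly(measured, winds)
suspect = anomalies > 0.10   # placeholder threshold for "anomalously smooth"
print(anomalies, suspect)
```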

Using their approach to analyze global patterns, the researchers found that microplastic concentrations in the North Indian Ocean tend to be highest in late winter/early spring and lowest in early summer. These fluxes coincide with monsoon season, prompting the authors to suggest that the outflow patterns of rivers into the ocean and the diluting power of the rain may influence microplastic concentrations, although more research is needed to confirm this. As well, they detected a strong seasonal pattern in the Great Pacific Garbage Patch, whereby concentrations of microplastics are highest in summer and lowest in winter.

“The seasonal shift in our measurements surprised me the most,” says Evans. “Before working on this project, I conceptualized the Great Pacific Garbage Patch as a consistent, mostly static mass. I was surprised to witness these large accumulations of microplastics behaving in such a dynamic way.”

Ruf emphasizes the novelty of being able to track microplastic concentrations in real time. “Time lapse measurements of microplastic concentration have never been possible before, so we are able to see for the first time some of its dynamic behavior, like the seasonal dependence and the outflow from major rivers into the ocean,” he says.

However, he notes that not being able to validate the model with direct sampling of microplastics remains a major limitation. Ruf and Evans are now serving on a global task force dedicated to remote sensing of marine litter and debris, which aims to improve these measurements in the future.

“It is very exciting to be involved in something this new and important,” says Ruf. “[This work] will hopefully lead to better awareness of the problem of ocean microplastics and other marine debris and litter, and maybe someday to changes in industrial practices and lifestyle choices to help reduce the problem.”

Search-and-Rescue Drone Locates Victims By Homing in on Their Phones

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/robotics/drones/searchandrescue-drone-locates-victims-by-homing-in-on-their-phones

When a natural disaster strikes, first responders must move quickly to search for survivors. To support the search-and-rescue efforts, one group of innovators in Europe has succeeded in harnessing the power of drones, AI, and smartphones, all in one novel combination.

Their idea is to use a single drone as a moving cellular base station, which can do large sweeps over disaster areas and locate survivors using signals from their phones. AI helps the drone methodically survey the area and even estimate the trajectory of survivors who are moving.

The team built its platform, called Search-And-Rescue DrOne based solution (SARDO), using off-the-shelf hardware and tested it in field experiments and simulations. They describe the results in a study published 13 January in IEEE Transactions on Mobile Computing.

“We built SARDO to provide first responders with an all-in-one victims localization system capable of working in the aftermath of a disaster without existing network infrastructure support,” explains Antonio Albanese, a Research Associate at NEC Laboratories Europe GmbH, which is headquartered in Heidelberg, Germany.

The point is that a natural disaster may knock out cell towers along with other infrastructure. SARDO, which is equipped with a lightweight cellular base station, is a mobile solution that could be implemented regardless of what infrastructure remains after a natural disaster.

To detect and map out the locations of victims, SARDO performs time-of-flight measurements (using the timing of signals emitted by the users’ phones to estimate distance). 

A machine learning algorithm is then applied to the time-of-flight measurements to calculate the positions of victims. The algorithm compensates for when signals are blocked by rubble.
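
The article does not spell out the algorithm, but the underlying geometry is straightforward: each time-of-flight measurement yields a range from a known drone waypoint to the phone, and several such ranges pin down a position. The sketch below uses a plain least-squares solve as a stand-in for the paper’s machine-learning pipeline, with invented coordinates.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def tof_to_range(round_trip_seconds):
    """Convert a round-trip signal time into a one-way distance estimate."""
    return C * round_trip_seconds / 2.0

def locate(waypoints, ranges):
    """Least-squares multilateration: estimate the 2-D phone position from
    ranges measured at several known drone waypoints."""
    x0, y0 = waypoints[0]
    r0 = ranges[0]
    A, b = [], []
    for (x, y), r in zip(waypoints[1:], ranges[1:]):
        A.append([2 * (x - x0), 2 * (y - y0)])
        b.append(r0**2 - r**2 + x**2 - x0**2 + y**2 - y0**2)
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return solution  # (x, y) estimate of the phone

# Example: four drone waypoints (meters) and noisy ranges to a phone at (60, 40)
waypoints = [(0, 0), (100, 0), (0, 100), (100, 100)]
true_pos = np.array([60.0, 40.0])
ranges = [np.linalg.norm(true_pos - np.array(w)) + np.random.normal(0, 1.0) for w in waypoints]
print(locate(waypoints, ranges))
```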

If a victim is on the move in the wake of a disaster, a second machine learning algorithm, tasked with estimating the person’s trajectory based on their current movement, kicks in—potentially helping first responders locate the person sooner.   

After sweeping an area, the drone is programmed to automatically maneuver closer to the position of a suspected victim to retrieve more accurate distance measurements. If too many errors are interfering with the drone’s ability to locate victims, it’s programmed to enlarge the scanning area.

In their study, Albanese and his colleagues tested SARDO in several field experiments without rubble, and used simulations to test the approach in a scenario where rubble interfered with some signals. In the field experiments, the drone was able to pinpoint the location of missing people to within a few tens of meters, requiring approximately three minutes to locate each victim (within a field roughly 200 meters squared). As would be expected, SARDO was less accurate when rubble was present or when the drone was flying at higher speeds or altitudes.

Albanese notes that a limitation of SARDO–as is the case with all drone-based approaches–is the battery life of the drone. But, he says, the energy consumption of the NEC team’s design remains relatively low.

The group is consulting the laboratory’s business experts on the possibility of commercializing this tech.  Says Albanese: “There is interest, especially from the public safety divisions, but still no final decision has been taken.”

In the meantime, SARDO may undergo further advances. “We plan to extend SARDO to emergency indoor localization so [it is] capable of working in any emergency scenario where buildings might not be accessible [to human rescuers],” says Albanese.

Smart City Video Platform Finds Crimes and Suspects

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/networks/smart-tracking-platform-anveshak-proves-effective-on-the-streets-of-bangalore

In many cities, whenever a car is stolen or someone is abducted, there are video cameras on street corners and in public spaces that could help solve those crimes. However, investigators and police departments typically don’t have the time or resources required to sift through hundreds of video feeds and tease out the evidence they need. 

Aakash Khochare, a doctoral student at the Indian Institute of Science, in Bangalore, has been working for several years on a platform that could help. Khochare’s Anveshak, which means “Investigator” in Hindi, won an award at the IEEE TCSC SCALE Challenge in 2019. That annual competition is sponsored by the IEEE Technical Committee on Scalable Computing. Last month, Anveshak was described in detail in a study published in IEEE Transactions on Parallel and Distributed Systems.

Anveshak features a novel tracking algorithm that narrows down the range of cameras likely to have captured video of the object or person of interest. Positive identifications further reduce the number of camera feeds under scrutiny. “This reduces the computational cost of processing the video feeds—from thousands of cameras to often just a few cameras,” explains Khochare.

The algorithm selects which cameras to analyze based on factors such as the local road network, camera location, and the last-seen position of the entity being tracked.

Anveshak also decides how long to buffer a video feed (i.e., download a certain amount of data) before analyzing it, which helps reduce delays in computer processing.

Last, if the computer becomes overburdened with processing data, Anveshak begins to intelligently cut out some of the video frames it deems least likely to be of interest. 
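
To make the camera-selection idea concrete, the sketch below chooses which feeds to process by walking a toy road network outward from the last confirmed sighting and keeping only cameras the target could plausibly have reached; the graph, travel times, and camera placement are made up for illustration and are not Anveshak’s actual data structures.

```python
import heapq

# Toy road network: node -> list of (neighbor, travel_time_seconds)
ROADS = {
    "A": [("B", 30), ("C", 60)],
    "B": [("A", 30), ("D", 45)],
    "C": [("A", 60), ("D", 20)],
    "D": [("B", 45), ("C", 20), ("E", 50)],
    "E": [("D", 50)],
}
CAMERAS = {"cam1": "A", "cam2": "B", "cam3": "D", "cam4": "E"}

def reachable_within(start, budget_s):
    """Dijkstra over the road graph: nodes reachable within budget_s seconds."""
    best = {start: 0}
    heap = [(0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue
        for nbr, t in ROADS[node]:
            new = cost + t
            if new <= budget_s and new < best.get(nbr, float("inf")):
                best[nbr] = new
                heapq.heappush(heap, (new, nbr))
    return set(best)

def cameras_to_activate(last_seen_node, seconds_since_sighting):
    """Only process feeds whose locations the target could plausibly have reached."""
    zone = reachable_within(last_seen_node, seconds_since_sighting)
    return [cam for cam, node in CAMERAS.items() if node in zone]

print(cameras_to_activate("B", 70))   # ['cam1', 'cam2', 'cam3']
```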

Khochare and his colleagues tested the platform using images from an open dataset generated by 1,000 virtual cameras covering a 7-square-kilometer region across the Indian Institute of Science’s Bangalore campus. They simulated an object of interest moving through the area and used Anveshak to track it within 15 seconds.

Khochare says Anveshak brings cities closer to a practical and scalable tracking application. However, he says future versions of the program will need to take privacy issues into account.

“Privacy is an important consideration,” he says. “We are working on incorporating privacy restrictions within the platform, for example by allowing analytics to track vehicles, but not people. Or, analytics that track adults but not children. Anonymization and masking of entities who are not of interest or should not be tracked can also be examined.”

Next, he says, the team plans to extend Anveshak so that it is capable of tracking multiple objects at a time. “The use of camera feeds from drones, whose location can be dynamically controlled, is also an emerging area of interest,” says Khochare.

He says the goal is to eventually make Anveshak an open source platform for researchers to use.

Ultrasonic Holograms: Who Knew Acoustics Could Go 3D?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/acoustic-holograms-and-3d-ultrasound

Although its origins trace back to 1985, acoustic holography as a field has been hampered by rampant noise created by widespread reflection and scattering. (Lasers tamp this problem down somewhat for conventional optical holograms; ultrasound as yet offers no such technological quick fix.) 

But in a recent study published last month in IEEE Sensors Journal, a group of researchers report an improved method for creating acoustic holograms. While the advance won’t lead to treatment with acoustic holograms in the immediate future, the improved technique yields higher sensitivity and a better focusing effect than previous acoustic hologram methods.

There are a number of intriguing possibilities that come with manipulating matter using sound, including medical applications. Ultrasound can penetrate human tissues and is already used for medical imaging. But more precise manipulation and imaging of human tissues using 3D holographic ultrasound could lead to completely new therapies—including targeted neuromodulation using sound.

The nature of sound itself poses the first hurdle to be overcome. “The medical application of acoustic holograms is limited owing to the sound reflection and scattering at the acoustic holographic surface and its internal attenuation,” explains Chunlong Fei, an associate professor at Xidian University who is involved in the study.

To address these issues, his team created their acoustic hologram via a “lens” consisting of a disc with a hole at its center. They placed a 1 MHz ultrasound transducer in water and used the outer part of the transducer surface to create the hologram. Because of the hole at the center of the lens, the center of the transducer generates and receives soundwaves with less reflection and scattering.

Next, the researchers compared their new disc approach to more conventional acoustic hologram techniques. They performed this A vs. B comparison via ultrasound holographic images of several thin needles, 1.25 millimeters in diameter or less.

“The most notable feature of the hole-hologram we proposed is that it has high sensitivity and maintains good focusing effect [thanks to the] holographic lens,” says Fei. He notes that these features will lead to less scattering and propagation loss than what occurs with traditional acoustic holograms.

Fei envisions several different ways in which this approach could one day be applied medically, for example by complementing existing medical imaging probes to achieve better resolution, or for applications such as nerve regulation or non-invasive brain stimulation. However, the current setup, which operates in water, would need to be modified to be suitable for a medical setting, and several next steps related to characterizing and manipulating the hologram remain, says Fei.

The varied design improvements Fei’s team hopes to develop match the equally eclectic possible applications of ultrasonic hologram technology. In the future, Fei says they hope acoustic holographic devices might achieve super-resolution imaging, particle trapping, selective biological tissue heating—and even find new applications in personalized medicine.

3D Fingerprint Sensors Get Under Your Skin

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/sensors/imagers/3d-fingerprint-sensor-id-subcutaneous

Many people already use fingerprint recognition technology to access their phones, but one group of researchers wants to take this skin-deep concept further. In a study published January 18 in IEEE Sensors Journal, they described a technique that maps out not only the unique pattern of a person’s fingerprint, but the blood vessels underneath. The approach adds another layer of security to identity authentication.

Typically, a fingerprint scan only accounts for 2D information that captures a person’s unique fingerprint pattern of ridges and valleys, but this 2D information could easily be replicated.

“Compared with the existing 2D fingerprint recognition technologies, 3D recognition that captures the finger vessel pattern within a user’s finger will be ideal for preventing spoofing attacks and be much more secure,” explains Xiaoning Jing, a distinguished professor at North Carolina State University and co-author of the study.

To develop a more secure approach using 3D recognition, Jing and his colleagues created a device that relies on ultrasound pulses. When a finger is placed upon the system, triggering a pressure sensor, a high-frequency pulsed ultrasonic wave is emitted. The amplitudes of reflecting soundwaves can then be used to determine both the fingerprint and blood vessel patterns of the person.
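
The pulse-echo principle at work here is the same one used in medical ultrasound: the delay of each returning echo indicates the depth of the reflecting structure, while its amplitude helps distinguish ridge, valley, and vessel. The sketch below converts echo delays to depths under an assumed speed of sound in soft tissue; the numbers are illustrative only.

```python
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a typical assumed value for soft tissue

def echo_depth_mm(round_trip_time_us):
    """Depth of a reflecting interface from the round-trip echo delay."""
    round_trip_s = round_trip_time_us * 1e-6
    return (SPEED_OF_SOUND_TISSUE * round_trip_s / 2.0) * 1000.0  # one-way depth in mm

# Example: echoes returning 0.4 us and 2.6 us after the pulse
print(echo_depth_mm(0.4))  # ~0.31 mm: near-surface ridge/valley interface
print(echo_depth_mm(2.6))  # ~2.0 mm: deeper structure such as a blood vessel
```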

In an experiment, the device was tested using an artificial finger created from polydimethylsiloxane, which has an acoustic impedance similar to human tissues. Bovine blood was added to vessels constructed in the artificial finger. Through this setup, the researchers were able to obtain electronic images of both the fingerprint and blood vessel patterns with resolutions of 500 × 500 dots per inch, which they say is sufficient for commercial applications.

Intriguingly, while the blood vessel features beneath the ridges of the artificial finger could be determined, this was not the case for 40% of the blood vessels that lay underneath the valleys of the fingerprint. Jing explains that this is because the high-frequency acoustic waves cannot propagate through the tiny spaces confined within the valleys of the fingerprint. Nevertheless, he notes that enough of the blood vessels throughout the finger can be distinguished to make this approach worthwhile, and that data interpolation or other advanced processing techniques could be used to reconstruct the undetected portion of vessels.

Chang Peng, a postdoctoral research scholar at North Carolina State University and co-author of the study, sees this approach as widely applicable. “We envision this 3D fingerprint recognition approach can be adopted as a highly secure bio-recognition technique for broad applications including consumer electronics, law enforcement, banking and finance, as well as smart homes,” he says, noting that the group is seeking a patent and looking for industry partners to help commercialize the technology.

Notably, the current setup, which uses a single ultrasound transducer, takes an hour or more to acquire an image. To improve upon this, the researchers are planning to explore how an array of ultrasonic fingerprint sensors will perform compared to the single sensor that was used in this study. Then they aim to test the device with real human fingers, comparing the security and robustness of their technique to commercialized optical and capacitive techniques for real fingerprint recognition.

Smart Surface Sorts Signals—The “Metaprism” Evaluated

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/wireless/could-you-receive-data-using-a-metaprism

The search for better ways to optimize and direct wireless communication is never-ending. These days the uses of metamaterials—marvels of material science engineered to have properties not found in naturally-occurring stuff—also seem to be similarly boundless. Why not use the latter to attempt the former? 

This was the basic idea behind a recent theoretical study, published earlier this month in IEEE Transactions on Wireless Communications. In it, Italian scientists describe a means of directing wireless communication signals using a reflective surface made from a metamaterial. This meta “prism” differently redirects microwave and radio signals depending on their frequencies, much the way conventional prisms bend light differently depending on a light beam’s color.

Earlier generations of wireless networks relied on lower-frequency electromagnetic waves that can easily penetrate objects and that are omnidirectional, meaning a signal can be sent in a general area and does not require a lot of directional precision to be received.

But as these lower frequencies became overburdened with increasing demand, network providers have started using shorter wavelengths, which yield substantially higher rates of data transmission—but require more directional precision. These higher frequencies also must be relayed around objects that the waves cannot penetrate. 5G networks, now being rolled out, require such precision. Future networks that rely on even higher frequencies will too. 

Davide Dardari is an Associate Professor at the University of Bologna, Italy, who co-authored the new study. “The idea is rather simple,” he says. “The metaprism we proposed… [is] made of metamaterials capable of modifying the way an impinging electromagnetic field is reflected.” 

Just like a regular prism refracts white light to create a rainbow, this proposed metaprism could be used to create and direct an array of different frequencies, or sub-carrier channels. Sub-carrier frequencies would then be assigned to different users and devices based on their positioning.
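
In rough terms, each reflected frequency leaves the metaprism at its own angle, so users at different positions can be served on different subcarriers. The sketch below makes that assignment explicit; the frequency band and the frequency-to-angle mapping are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy frequency-to-angle mapping for a hypothetical metaprism: each subcarrier
# reflects toward a different angle (degrees). All values are invented.
subcarrier_ghz = np.linspace(28.0, 29.0, 8)
reflection_angle_deg = np.interp(subcarrier_ghz, [28.0, 29.0], [-30.0, 30.0])

def assign_subcarriers(user_angles_deg):
    """Give each user the subcarrier whose reflected beam points closest to them."""
    assignments = {}
    for user, angle in user_angles_deg.items():
        idx = int(np.argmin(np.abs(reflection_angle_deg - angle)))
        assignments[user] = float(subcarrier_ghz[idx])
    return assignments

print(assign_subcarriers({"laptop": -22.0, "phone": 5.0, "sensor": 27.0}))
```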

A major advantage of this approach is that metaprisms, if distributed properly around an area, could passively direct wireless signals, without requiring any power supply. “One or more surfaces designed to act as metaprisms can be deployed in the environment, for example on walls, to generate artificial reflections that can be exploited to enhance the coverage in obstructed areas,” explains Dardari.

He also notes that, because the metaprism would direct the multiple, parallel communication channels in different directions, this will help reduce latency when multiple users are present. “As a [result of these advantages], we expect the cost of the prisms will be much lower than that of the other solutions,” he says.

While this work is currently theoretical, Dardari plans to collaborate with other experts to develop the tailored metamaterials required for the metaprism. He also plans to explore the advantages of the metaprism as a means of controlling signal interference.

Smart Algorithm Bursts Social Networks’ “Filter Bubbles”

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/networks/finally-a-means-for-bursting-social-media-bubbles

Social media today depends on building echo chambers (a.k.a. “filter bubbles”) that wall off users into like-minded digital communities. These bubbles create higher levels of engagement, but come with pitfalls— including limiting people’s exposure to diverse views and driving polarization among friends, families and colleagues. This effect isn’t a coincidence either. Rather, it’s directly related to the profit-maximizing algorithms used by social media and tech giants on platforms like Twitter, Facebook, TikTok and YouTube.

In a refreshing twist, one team of researchers in Finland and Denmark has a different vision for how social media platforms could work. They developed a new algorithm that increases the diversity of exposure on social networks, while still ensuring that content is widely shared.

Antonis Matakos, a PhD student in the Computer Science Department at Aalto University in Espoo, Finland, helped develop the new algorithm. He expresses concern about the social media algorithms being used now.

While current algorithms mean that people more often encounter news stories and information that they enjoy, the effect can decrease a person’s exposure to diverse opinions and perspectives. “Eventually, people tend to forget that points of view, systems of values, ways of life, other than their own exist… Such a situation corrodes the functioning of society, and leads to polarization and conflict,” Matakos says.

“Additionally,” he says, “people might develop a distorted view of reality, which may also pave the way for the rapid spread of fake news and rumors.”

Matakos’ research is focused on reversing these harmful trends. He and his colleagues describe their new algorithm in a study published in November in IEEE Transactions on Knowledge and Data Engineering.

The approach involves assigning numerical values to both social media content and users. The values represent a position on an ideological spectrum, for example far left or far right. These numbers are used to calculate a diversity exposure score for each user. Essentially, the algorithm is identifying social media users who would share content that would lead to the maximum spread of a broad variety of news and information perspectives.

Then, diverse content is presented to a select group of people with a given diversity score who are most likely to help the content propagate across the social media network—thus maximizing the diversity scores of all users.
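
The paper’s exact objective and optimization are not reproduced here, but the sketch below captures the flavor of the approach: score how ideologically diverse each user’s feed is, then greedily pick the seed users whose sharing would raise the network-wide diversity score the most. The network, the scores, and the greedy heuristic are simplified stand-ins.

```python
import numpy as np

# Toy data: each user's ideology score in [-1, 1] and the items already in their feed
USERS = {"u1": -0.8, "u2": -0.2, "u3": 0.1, "u4": 0.9}
FEEDS = {"u1": [-0.9, -0.7], "u2": [-0.3, -0.1], "u3": [0.0, 0.2], "u4": [0.8, 1.0]}
FOLLOWERS = {"u1": ["u2"], "u2": ["u1", "u3"], "u3": ["u2", "u4"], "u4": ["u3"]}

def diversity(feed):
    """Spread of ideological positions a user is exposed to (simplified score)."""
    return float(np.max(feed) - np.min(feed)) if feed else 0.0

def total_diversity(feeds):
    return sum(diversity(f) for f in feeds.values())

def gain_if_seeded(user, item, feeds):
    """Diversity gained if `user` shares `item` and it reaches their followers."""
    trial = {u: list(f) for u, f in feeds.items()}
    for reader in FOLLOWERS[user] + [user]:
        trial[reader].append(item)
    return total_diversity(trial) - total_diversity(feeds)

def pick_seeds(item, k=2):
    """Greedily choose k users whose sharing of `item` maximizes the diversity gain."""
    scores = {u: gain_if_seeded(u, item, FEEDS) for u in USERS}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(pick_seeds(item=0.6, k=2))   # e.g. ['u2', 'u1']
```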

In their study, the researchers compare their new social media algorithm to several other models in a series of simulations. One of these other models was a simpler method that selects the most well-connected users and recommends content that maximizes a person’s individual diversity exposure score.

Matakos says his group’s algorithm provides a feed for social media users that is at least three times more diverse (according to the researchers’ metric) than this simpler method, and even more so for baseline methods used for comparison in the study.  

These results suggest that targeting a strategic group of social media users and feeding them the right content is more effective for propagating diverse views through a social media network than focusing on the most well-connected users. Importantly, the simulations completed in the study also suggest that the new model is scalable.

A major hurdle, of course, is whether social media networks would be open to incorporating the algorithm into their systems. Matakos says that, ideally, his new algorithm could be an opt-in feature that social media networks offer to their users.

“I think [the social networks] would be potentially open to the idea,” says Matakos. “However, in practice we know that the social network algorithms that generate users’ news feeds are orientated towards maximizing profit, so it would need to be a big step away from that direction.”

Why Aren’t COVID Tracing Apps More Widely Used?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/why-arent-covid-tracing-apps-more-widely-used

As the COVID-19 pandemic began to sweep around the globe in early 2020, many governments quickly mobilized to launch contact tracing apps to track the spread of the virus. If enough people downloaded and used the apps, it would be much easier to identify people who have potentially been exposed. In theory, contact tracing apps could play a critical role in stemming the pandemic.

In reality, adoption of contact tracing apps by citizens was largely sporadic and unenthusiastic. A trio of researchers in Australia decided to explore why contact tracing apps weren’t more widely adopted. Their results, published on 23 December in IEEE Software, emphasize the importance of social factors such as trust and transparency.

Muneera Bano is a senior lecturer of software engineering at Deakin University, in Melbourne. Bano and her co-authors study human aspects of technology adoption. “Coming from a socio-technical research background, we were intrigued initially to study the contact tracing apps when the Australian Government launched the CovidSafe app in April 2020,” explains Bano. “There was a clear resistance from many citizens in Australia in downloading the app, citing concerns regarding trust and privacy.”

To better understand the satisfaction—or dissatisfaction—of app users, the researchers analyzed data from the Apple and Google app stores. At first, they looked at average star ratings and numbers of downloads, and conducted a sentiment analysis of app reviews.

However, just because a person downloads an app doesn’t guarantee that they will use it. What’s more, Bano’s team found that sentiment scores—which are often indicative of an app’s popularity, success, and adoption—were not an effective means for capturing the success of COVID-19 contact tracing apps.

“We started to dig deeper into the reviews to analyze the voices of users for particular requirements of these apps,” says Bano. “More or less all the apps had issues related to the Bluetooth functionality, battery consumption, reliability, and usefulness during the pandemic.”

For example, apps that relied on Bluetooth for tracing had issues related to range, proximity, signal strength, and connectivity. A significant number of users also expressed frustration over battery drainage. Some efforts have been made to address this issue; for example, Singapore launched an updated version of its TraceTogether app that allows it to operate with Bluetooth while running in the background, with the goal of improving battery life.

But, technical issues were just one reason for lack of adoption. Bano emphasizes that, “The major issues around the apps were social in nature, [related to] trust, transparency, security, and privacy.”

In particular, the researchers found that resistance to downloading and using the apps was high in countries with a voluntary adoption model and a low trust index in their governments, such as Australia, the United Kingdom, and Germany.

“We observed slight improvement only in the case of Germany because the government made sincere efforts to increase trust. This was achieved by increasing transparency during ‘Corona-Warn-App’ development by making it open source from the outset and by involving a number of reputable organizations,” says Bano. “However, even as the German officials were referring to their contact tracing app as the ‘best app’ in the world, Germany was struggling to avoid the second wave of COVID-19 at the time we were analyzing the data, in October 2020.”

In some cases, even when measures to improve trust and address privacy issues were taken by governments and app developers, people were hesitant to adopt the apps. For example, a Canadian contact tracing app called COVID Alert is open source, requires no identifiable information from users, and all data are deleted after 14 days. Nevertheless, a survey of Canadians found that two thirds would not download any contact tracing app because they considered such apps still “too invasive.” (The survey covered tracing apps in general, and was not specific to the COVID Alert app).

Bano plans to continue studying how politics and culture influence the adoption of these apps in different countries around the world. She and her colleagues are interested in exploring how contact tracing apps can be made more inclusive for diverse groups of users in multi-cultural countries.

Superconducting Microprocessors? Turns Out They’re Ultra-Efficient

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/hardware/new-superconductor-microprocessor-yields-a-substantial-boost-in-efficiency

Computers use a staggering amount of energy today. According to one recent estimate, data centers alone consume two percent of the world’s electricity, a figure that’s expected to climb to eight percent by the end of the decade. To buck that trend, though, perhaps the microprocessor, at the center of the computer universe, could be streamlined in entirely new ways. 

One group of researchers in Japan has taken this idea to the limit, creating a superconducting microprocessor—one with zero electrical resistance. The new device, the first of its kind, is described in a study published last month in the IEEE Journal of Solid-State Circuits.

Superconductor microprocessors could offer a potential solution for more energy efficient computing power—but for the fact that, at present, these designs require ultra-cold temperatures below 10 kelvin (or -263 degrees Celsius). The research group in Japan sought to create a superconductor microprocessor that’s adiabatic, meaning that, in principle, energy is not gained or lost from the system during the computing process.

While adiabatic semiconductor microprocessors exist, the new microprocessor prototype, called MANA (Monolithic Adiabatic iNtegration Architecture), is the world’s first adiabatic superconductor microprocessor. It’s composed of superconducting niobium and relies on hardware components called adiabatic quantum-flux-parametrons (AQFPs). Each AQFP is composed of a few fast-acting Josephson junction switches, which require very little energy to support superconductor electronics. The MANA microprocessor consists of more than 20,000 Josephson junctions (or more than 10,000 AQFPs) in total.

Christopher Ayala is an Associate Professor at the Institute of Advanced Sciences at Yokohama National University, in Japan, who helped develop the new microprocessor. “The AQFPs used to build the microprocessor have been optimized to operate adiabatically such that the energy drawn from the power supply can be recovered under relatively low clock frequencies up to around 10 GHz,” he explains. “This is low compared to the hundreds of gigahertz typically found in conventional superconductor electronics.”

This doesn’t mean that the group’s current-generation device hits 10 GHz speeds, however. In a press statement, Ayala added, “We also show on a separate chip that the data processing part of the microprocessor can operate up to a clock frequency of 2.5 GHz making this on par with today’s computing technologies. We even expect this to increase to 5-10 GHz as we make improvements in our design methodology and our experimental setup.” 

The price of entry for the niobium-based microprocessor is of course the cryogenics and the energy cost for cooling the system down to superconducting temperatures.  

“But even when taking this cooling overhead into account,” says Ayala, “the AQFP is still about 80 times more energy-efficient when compared to the state-of-the-art semiconductor electronic device, [such as] 7-nm FinFET, available today.”

Since the MANA microprocessor requires liquid helium-level temperatures, it’s better suited for large-scale computing infrastructures like data centers and supercomputers, where cryogenic cooling systems could be used.

Ayala says the hurdles that remain before such processors see practical use are already the focus of his team’s work. “Most of these hurdles—namely area efficiency and improvement of latency and power clock networks—are research areas we have been heavily investigating, and we already have promising directions to pursue,” he says.

How to Find the Ideal Replacement When a Research Team Member Leaves

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/consumer-electronics/audiovideo/how-to-find-the-ideal-replacement-when-an-academic-team-member-leaves

Two brilliant minds are always better than one. Same goes for three, four, and so on. Of course the greater the number of team members, the greater the chance one or more will end up departing from the team mid-project—leaving the rest of the group casting about for the ideal replacement. 

Research, thankfully, can answer how best to replace a research team member. A new model not only helps identify a replacement who has the right skill set, but it also accounts for social ties within the group. The researchers say their model “outperforms existing methods when applied to computer science academic teams.”

The model is described in a study published December 8 in IEEE Intelligent Systems.

Feng Xia is an associate professor in the Engineering, IT and Physical Sciences department at Federation University Australia who co-authored the study. He notes that each member within an academic collaboration plays an important contributing role and has a certain degree of irreplaceability within the team. “Meanwhile, turnover rate has also increased, leading to the fact that collaborative teams are [often] facing the problem of the member’s absence. Therefore, we decided to develop this approach to minimize the loss,” he says.

Xia’s previous research has suggested that members with stable collaboration relationships can improve team performance, yielding higher quality output. Therefore, Xia and his colleagues incorporated ways of accounting for familiarity among team members into their model, including relationships between just two members (pair-wise familiarity) and multiple members (higher-order familiarity).

“The main idea of our technique is to find the best alternate for the absent member,” explains Xia. “The recommended member is supposed to be the best choice in context of familiarity, skill, and collaboration relationship.”
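
The paper’s scoring function is not reproduced here, but the sketch below shows the general shape of such a recommender: rank candidates by a weighted mix of skill overlap with the departed member and pair-wise familiarity with the remaining team. The data, weights, and similarity measures are invented for illustration.

```python
# Toy records: each scholar's skills and their numbers of past co-authored papers
SKILLS = {
    "alice": {"nlp", "graphs"},
    "bob": {"graphs", "systems"},
    "carol": {"nlp", "vision"},
    "dave": {"nlp", "graphs", "systems"},
    "erin": {"nlp", "vision"},
}
COLLABS = {("alice", "bob"): 4, ("alice", "dave"): 5, ("bob", "dave"): 3,
           ("bob", "erin"): 1, ("carol", "dave"): 2}

def familiarity(candidate, team):
    """Pair-wise familiarity: total past collaborations with the remaining members."""
    return sum(COLLABS.get(tuple(sorted((candidate, member))), 0) for member in team)

def skill_match(candidate, departed):
    """Jaccard overlap between the candidate's skills and the departed member's."""
    a, b = SKILLS[candidate], SKILLS[departed]
    return len(a & b) / len(a | b)

def recommend(team, departed, candidates, w_skill=0.6, w_fam=0.4):
    """Rank candidates by a weighted blend of skill match and familiarity."""
    max_fam = max(familiarity(c, team) for c in candidates) or 1
    def score(c):
        return w_skill * skill_match(c, departed) + w_fam * familiarity(c, team) / max_fam
    return max(candidates, key=score)

# Example: carol leaves a team of alice and bob; dave and erin are possible stand-ins
print(recommend(team=["alice", "bob"], departed="carol", candidates=["dave", "erin"]))
```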

The researchers used two large datasets to develop and test their model. They explored 42,999 collaborative relationships between 15,681 scholars using the CiteSeerX dataset, which captures data within the field of computer science. Another 436,905 collaborative relationships between 252,439 scholars were explored through the MAG (Microsoft Academic Graph) dataset, which contains scientific records and relative information covering multiple disciplines.

Testing of the model reveals that it is effective at finding a good replacement member for teams. “Teams that choose the recommended candidates achieved better team performance and less communication costs,” says Xia. These results mean that the replacement members had good communication with the others, and illustrate the importance of familiarity among collaborators.

The researchers aim to make their model freely available on platforms such as GitHub. Now they are working towards models for team recognition, team formation, and team optimization. “We are also building a complete team data set and a self-contained team optimization online system to help form research teams,” says Xia.

These AIs Can Predict Your Moral Principles

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/these-ais-can-predict-your-moral-principles

The death penalty, abortion, gun legislation: There’s no shortage of controversial topics that are hotly debated today on social media. These topics are so important to us because they touch on an essential underlying force that makes us human, our morality.

Researchers in Brazil have developed and analyzed three models that can describe the morality of individuals based on the language they use. The results were published last month in IEEE Transactions on Affective Computing.

Ivandré Paraboni is an associate professor at the School of Arts, Sciences and Humanities at the University of São Paulo who led the study. His team chose to focus on a theory commonly used by social scientists called moral foundations theory. It postulates several key categories of morality including care, fairness, loyalty, authority, and purity.

The aim of the new models, according to Paraboni, is to infer an individual’s values on those five moral foundations just by looking at their writing, regardless of what they are talking about. “They may be talking about their everyday life, or about whatever they talk about on social media,” Paraboni says. “And we may still find underlying patterns that are revealing of their five moral foundations.”

To develop and validate the models, Paraboni’s team provided more than 500 volunteers with questionnaires. Participants were asked to rate eight topics (e.g., same sex marriage, gun ownership, drug policy) with sentiment scores (from 0 = ‘totally against’ to 5 = ‘totally in favor’). They were also asked to write out explanations of their ratings.

Human judges then gave their own rating to a subset of explanations from participants. The exercise determined how well humans could infer the intended opinions from the text. “Knowing the complexity of the task from a human perspective in this way gave us a more realistic view of what the computational models can or cannot do with this particular dataset,” says Paraboni.

Using the text opinions from the study participants, the research team created three machine learning algorithms that could assess the language used in each participant’s statement. The models analyzed psycholinguistics (emotional context of words), words, and word sequences, respectively.

All three models were able to infer an individual’s moral foundations from the text. The first two models, which focus on individual words used by the author, were more accurate than the deep learning approach that analyzes word sequences.

Paraboni adds, “Word counts–such as how often an individual uses words like ‘sin’ or ‘duty’–turned out to be highly revealing of their moral foundations, that is, predicting with higher accuracy their degrees of care, fairness, loyalty, authority, and purity.”
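
To give a sense of what a word-count model looks like in practice, the sketch below fits a small bag-of-words regression that predicts a score on the care foundation from short opinion texts. The tiny dataset, the single foundation, and the choice of ridge regression are invented for the example and are not the authors’ pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

# Invented training data: opinion snippets with a 0-5 score on the "care" foundation
texts = [
    "we have a duty to protect the vulnerable from harm",
    "people should be free to decide for themselves",
    "compassion for victims must guide the law",
    "tradition and order matter more than feelings",
]
care_scores = [4.5, 2.0, 4.8, 1.5]

vectorizer = CountVectorizer()          # raw word counts as features
X = vectorizer.fit_transform(texts)
model = Ridge(alpha=1.0).fit(X, care_scores)

new_text = ["cruelty to the weak can never be tolerated"]
print(model.predict(vectorizer.transform(new_text)))   # predicted care score
```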

He says his team plans to continue to incorporate other forms of linguistic analysis into their models. They are, he says, exploring other models that focus more on the text (independent of the author) as a way to analyze Twitter data.

App Aims To Reduce Deaths From Opioid Overdose

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/unityphilly-app-opioid-overdose

If administered promptly enough, naloxone can chemically reverse an opioid overdose and save a person’s life. However, timing is critical–quicker administration of the medication can not only save a life, but also reduce the chances that brain damage will occur.

In exploring new ways to administer naloxone faster, a team of researchers has harnessed an effective, community-based approach. It involves an app for volunteers, who receive an alert when another app user nearby indicates that an overdose is occurring and naloxone is needed. The volunteers then have the opportunity to respond to the request.

A recent pilot study in Philadelphia shows that the approach has the potential to save lives. The results were published November 17 in IEEE Pervasive Computing.

“This concept had worked with other medical emergencies like cardiac arrest and anaphylaxis, so we thought we could have success with transferring it to opioid overdoses,” says Gabriela Marcu, an assistant professor at the University of Michigan’s School of Information, who was involved in the study.

The app, called UnityPhilly, is meant for bystanders who are in the presence of someone overdosing. If that bystander doesn’t have naloxone, they can use UnityPhilly to send out an alert with a single push of a button to nearby volunteers who also have the app. Simultaneously, a separate automated call is sent to 911 to initiate an emergency response.

Marcu’s team chose to pilot the app in Philadelphia because the city has been hit particularly hard by the opioid crisis, with 46.8 per 100,000 people dying from overdoses each year. They piloted the UnityPhilly app in the neighborhood of Kensington between March 2019 and February 2020.

In total, 112 participants were involved in the study, half of whom self-identified as active opioid users. Over the one-year period, 291 suspected overdose alerts were reported.

About 30% of alerts were false alarms, cancelled within two minutes of the alert being sent. On the other hand, at least one dose of naloxone was administered by a study participant in 36.6% of the remaining alerts. Of these instances when naloxone was administered, 96% resulted in a successful reversal of the overdose. This means that a total of 71 out of 291 cases resulted in a successful reversal.

Marcu notes that there are many advantages to the UnityPhilly approach. “It has been designed with the community, and it’s driven entirely by the community. It’s neighbors helping neighbors,” she explains.

One reported drawback of the current version of UnityPhilly is that volunteers only see a location on a map, with no context of what kind of building or environment they will be entering to deliver the naloxone dose. To address this, Marcu says her team is interested in refining the user experience and enhancing how app users can communicate with one another before, during, and after they respond to an overdose.

“What’s interesting is that so many users still remained motivated to incorporate this app into their efforts, and their desire to help others drove adoption and acceptance of the app in spite of the imperfect user experience,” says Marcu. “So we look forward to continuing our work with the community on this app… Next, we plan on rolling it out city-wide in Philadelphia.”

Flexible, Wearable Sensors Detect Workers’ Fatigue

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/biomedical/devices/flexible-wearable-sensors-detect-workers-fatigue

Fatigue in the workplace is a serious issue today—leading to accidents, injuries and worse. Some of history’s worst industrial disasters, in fact, can be traced at least in part to worker fatigue, including the 2005 Texas City BP oil refinery explosion and the nuclear accidents at Chernobyl and Three Mile Island.

Given the potential consequences of worker fatigue, scientists have been exploring wearable devices for monitoring workers’ alertness, which correlates with physiological parameters such as heart rate, breathing rate, sweating, and muscle contraction. In a recent study published November 6 in IEEE Sensors Journal, a group of Italian researchers describe a new wearable design that measures the frequency of the user’s breathing—which they argue is a proxy for fatigue. Breathing frequency is also used to identify stressing conditions such as excessive cold, heat, hypoxia, pain, and discomfort.

“This topic is very important since everyday thousands of work-related accidents occur throughout the world, affecting all sectors of the economy,” says Daniela Lo Presti, a PhD student at  Università Campus Bio-Medico di Roma, in Rome, Italy, who was involved in the study. “We believe that monitoring workers’ physiological state during [work]… may be crucial to prevent work-related accidents and improve the workers’ quality performances and safety.”

The sensor system that her team designed involves two elastic bands that are worn just below the chest (thorax) and around the abdomen. Each band is flexible, made of a soft silicone matrix and fiber optic technology that conforms well to the user’s chest as he or she breathes.

“These sensors work as optical strain gauges. When the subject inhales, the diaphragm contracts and the stomach inflates, so the flexible sensor that is positioned on the chest is strained,” explains Lo Presti. “Conversely, during the exhalation, the diaphragm expands, the stomach depresses, and the sensor is compressed.”
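
To illustrate how a respiratory rate can be read out of such a strain signal, the sketch below counts breathing cycles by detecting peaks in a simulated chest-band waveform; the signal, sampling rate, and peak-detection settings are illustrative and not the team’s actual processing chain.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 50                       # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)  # one minute of data
breaths_per_min = 16
# Simulated band strain: a slow breathing oscillation plus sensor noise
strain = np.sin(2 * np.pi * (breaths_per_min / 60) * t) + 0.1 * np.random.randn(t.size)

# Each inhalation produces one strain peak; enforce a plausible minimum spacing
peaks, _ = find_peaks(strain, distance=fs * 1.5, prominence=0.5)
estimated_rate = len(peaks) * (60 / (t[-1] - t[0]))
print(f"Estimated respiratory rate: {estimated_rate:.1f} breaths per minute")
```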

The sensors were tested on 10 volunteers while they did a variety of movements and activities, ranging from sitting and standing to lateral arm movements and lifting objects from the ground. The results suggest that the flexible sensors are adept at estimating respiratory frequency, providing similar measurements to a flow meter (a standard machine for measuring respiration). The researchers also found that their sensor could be strained by up to 2.5% of its initial length.

Lo Presti says this design has several strengths, including the conformation of the sensor to the user’s body. The silicone matrix is dumbbell-shaped, allowing for better adhesion of the sensing component to the band, she says.

However, the sensing system must be plugged into a bulky instrument for processing the fiber optical signals (called an optical interrogator). Lo Presti says other research teams are currently working on making these devices smaller and cheaper. “Once high-performant, smaller interrogators are available, we will translate our technology to a more compact wearable system easily usable in a real working scenario.”

AI-Directed Robotic Hand Learns How to Grasp

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/automaton/robotics/humanoids/robotic-hand-uses-artificial-neural-network-to-learn-how-to-grasp-different-objects

Reaching for a nearby object seems like a mindless task, but the action requires a sophisticated neural network that took humans millions of years to evolve. Now, robots are acquiring that same ability using artificial neural networks. In a recent study, a robotic hand “learns” to pick up objects of different shapes and hardness using three different grasping motions.  

The key to this development is something called a spiking neuron. Like real neurons in the brain, artificial neurons in a spiking neural network (SNN) fire together to encode and process temporal information. Researchers study SNNs because this approach may yield insights into how biological neural networks function, including our own. 

“The programming of humanoid or bio-inspired robots is complex,” says Juan Camilo Vasquez Tieck, a research scientist at FZI Forschungszentrum Informatik in Karlsruhe, Germany. “And classical robotics programming methods are not always suitable to take advantage of their capabilities.”

Conventional robotic systems must perform extensive calculations, Tieck says, to track trajectories and grasp objects. But a robotic system like Tieck’s, which relies on an SNN, first trains its neural net to better model system and object motions, after which it grasps items more autonomously by adapting to motion in real time.

The new robotic system by Tieck and his colleagues uses an existing robotic hand, called a Schunk SVH 5-finger hand, which has the same number of fingers and joints as a human hand.

The researchers incorporated an SNN into their system, dividing it into several sub-networks. One sub-network controls each finger individually, flexing or extending it as needed. Another handles the type of grasping movement, determining whether the robotic hand should perform a pinching, spherical, or cylindrical grasp.

For each finger, a neural circuit detects contact with an object using the currents of the motors and the velocity of the joints. When contact with an object is detected, a controller is activated to regulate how much force the finger exerts.
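The study implements this logic with spiking networks, but the underlying idea can be sketched in plain Python: infer contact from the motor current and joint velocity, then switch on force regulation. The thresholds and the simple proportional law below are illustrative assumptions, not values from the paper.

```python
def finger_update(motor_current, joint_velocity, force_target,
                  current_threshold=0.8, velocity_threshold=0.05, gain=0.1):
    """Return a command for one finger of the hand.

    Contact is inferred when the motor current rises while the joint has nearly
    stopped moving; a proportional controller then regulates the applied force,
    using motor current as a rough proxy for fingertip force.
    """
    in_contact = (motor_current > current_threshold
                  and abs(joint_velocity) < velocity_threshold)
    if not in_contact:
        return "flex", 1.0                 # keep closing the finger
    correction = gain * (force_target - motor_current)
    return "hold", correction

# Example: a finger that has just met resistance on a soft object.
print(finger_update(motor_current=0.9, joint_velocity=0.01, force_target=1.0))
```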

“This way, the movements of generic grasping motions are adapted to objects with different shapes, stiffness and sizes,” says Tieck. The system can also adapt its grasping motion quickly if the object moves or deforms.

The robotic grasping system is described in a study published October 24 in IEEE Robotics and Automation Letters. The researchers’ robotic hand used its three different grasping motions on objects without knowing their properties. Target objects included a plastic bottle, a soft ball, a tennis ball, a sponge, a rubber duck, different balloons, a pen, and a tissue pack. The researchers found, for one, that pinching motions required more precision than cylindrical or spherical grasping motions.

“For this approach, the next step is to incorporate visual information from event-based cameras and integrate arm motion with SNNs,” says Tieck. “Additionally, we would like to extend the hand with haptic sensors.”

The long-term goal, he says, is to develop “a system that can perform grasping similar to humans, without intensive planning for contact points or intense stability analysis, and [that is] able to adapt to different objects using visual and haptic feedback.”
 

New Sensor Integrated Within Dental Implants Monitors Bone Health

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/the-human-os/biomedical/devices/new-sensor-integrated-within-dental-implants-monitors-bone-health

Journal Watch report logo, link to report landing page

Scientists have created a new sensor that can be integrated within dental implants to passively monitor bone growth, bypassing the need for multiple X-rays of the jaw. The design is described in a study published September 25 in IEEE Sensors Journal.

Currently, X-rays are used to monitor jaw health following a dental implant. Dental X-rays typically involve low doses of radiation, but people with dental implants may require more frequent X-rays to monitor their bone health following surgery. And, as professor Alireza Hassanzadeh of Shahid Beheshti University, in Tehran, notes, “Too many X-rays is not good for human health.”

To reduce this need for X-rays, Hassanzadeh and two graduate students at Shahid Beheshti University designed a new sensor that can be integrated within dental implants. It monitors bone growth passively, by measuring changes in capacitance, that is, how much electrical charge the structure around the implant can store. Two designs were created, one for short-term and one for long-term monitoring.

The sensors are made of titanium and poly-ether-ether-ketone, and are integrated directly into a dental implant using microfabrication methods. The designs do not require any battery, and passively monitor changes in capacitance once the dental implant is in place.

“When the bone is forming around the sensor, the capacitance of the sensor changes,” explains Hassanzadeh. This indicates how the surrounding bone growth changes over time. The changes in capacitance, and thus bone growth, are then conveyed to a reader device that transfers the measurements into a data logger.  
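As a rough illustration of that readout chain, the sketch below converts logged capacitance readings into a relative bone-growth estimate using an assumed linear calibration. The calibration values are invented for illustration and are not taken from the study.

```python
def bone_growth_fraction(capacitance_pf, c_no_bone_pf=2.0, c_full_bone_pf=5.0):
    """Map a capacitance reading (in picofarads) to a 0-to-1 bone-growth estimate.

    Assumes a roughly linear calibration between the capacitance measured with
    no surrounding bone and the capacitance once the bone has fully formed.
    """
    span = c_full_bone_pf - c_no_bone_pf
    fraction = (capacitance_pf - c_no_bone_pf) / span
    return min(max(fraction, 0.0), 1.0)        # clamp to the valid range

# Example: hypothetical readings logged over several weeks after surgery.
weekly_readings_pf = [2.1, 2.6, 3.4, 4.2, 4.8]
print([round(bone_growth_fraction(c), 2) for c in weekly_readings_pf])
```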

In their study, the researchers tested the sensors in the femur and jaw bone of a cow. “The results reveal that the amount of bone around the implant has a direct effect on the capacitance value of the sensor,” says Hassanzadeh.

He says that the sensor still needs to be optimized for size and different implant shapes, and clinical experiments will need to be completed with different kinds of dental implant patients. “We plan to commercialize the device after some clinical tests and approval from FDA and authorities,” says Hassanzadeh.

Use AI To Convert Ancient Maps Into Satellite-Like Images

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/ai-ancient-maps-satellite-images

Ancient maps give us a glimpse of what landscapes looked like centuries ago. But what would we see if we looked at these older maps through a modern lens?

Henrique Andrade, a student at Escola Politécnica da Universidade de Pernambuco, has been studying maps of his hometown of Recife, Brazil, for several years. “I gathered all these digital copies of maps, and I ended up discovering things about my hometown that aren’t so widely known,” he says. “I feel that in Recife people were denied access to their own past, which makes it difficult for them to understand who they are, and consequently what they can do about their own future.”

Andrade approached a professor at his university, Bruno Fernandes, with an idea: develop a machine learning algorithm that could transform old maps into satellite-style images like those in Google Maps. Such an approach, he believes, could show people how land use has changed over time, including the social and economic impacts of urbanization.

To realize the project, they used an existing AI tool called Pix2pix, which relies on two neural networks. The first, a generator, creates images based on the input maps, while the second, a discriminator, judges whether a generated image looks real. The two networks are trained against each other, pushing the generator to produce increasingly realistic-looking images from the historical data provided.
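For those curious about the mechanics, Pix2pix is a conditional generative adversarial network (GAN). The sketch below, written with PyTorch, shows that adversarial training loop in miniature, using random tensors as stand-ins for map and satellite-image pairs; it is a simplified illustration rather than the authors’ code or the full Pix2pix architecture.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the Pix2pix generator (map -> satellite-like image) and
# discriminator (judges whether a map/image pair looks real).
generator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),   # input: map and image stacked
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Random tensors stand in for a batch of (historical map, modern image) pairs.
maps = torch.rand(4, 3, 64, 64)
real_images = torch.rand(4, 3, 64, 64) * 2 - 1   # scaled to the tanh output range

for step in range(100):
    # 1) Train the discriminator to tell real pairs from generated pairs.
    fake_images = generator(maps).detach()
    d_real = discriminator(torch.cat([maps, real_images], dim=1))
    d_fake = discriminator(torch.cat([maps, fake_images], dim=1))
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator, plus an L1 term that
    #    keeps its output close to the target image, as Pix2pix does.
    fake_images = generator(maps)
    d_fake = discriminator(torch.cat([maps, fake_images], dim=1))
    g_loss = (bce(d_fake, torch.ones_like(d_fake))
              + nn.functional.l1_loss(fake_images, real_images))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```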

Andrade and Fernandes describe their approach in a study published 24 September 2020 in IEEE Geoscience and Remote Sensing Letters. In this study, they took a map of Recife from 1808 and generated modern day images of the area.

“When you look at the images, you get a better grasp of how the city has changed in 200 years,” explains Andrade. “The city’s geography has drastically changed—landfills have reduced the water bodies and green areas were all removed by human activity.”

He says an advantage of this AI approach is that it requires relatively little input data; however, the inputs need some historical context, and the resolution of the generated images is lower than the researchers would like.

“Moving forward, we are working on improving the resolution of the images, and experimenting on different inputs,” says Andrade. He sees this way of generating modern-style images from historical maps as widely applicable, noting that it could be applied to many locations and could be useful to urban planners, anthropologists, and historians.

A New Way For Autonomous Racing Cars to Control Sideslip

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/cars-that-think/transportation/self-driving/get-a-grip-a-new-way-for-autonomous-racing-cars-to-avoid-sideslip

When race car drivers take tight turns at high speeds, they rely on their experience and gut feeling to hit the gas pedal without spinning out. But how does an autonomous race car make the same decision?

Currently, many autonomous cars rely on expensive external sensors to calculate a vehicle’s velocity and chance of sideslipping on the racetrack. In a different approach, one research team in Switzerland has recently developed a novel machine learning algorithm that harnesses measurements from simpler sensors. They describe their design in a study published August 14 in IEEE Robotics and Automation Letters.

As a race car takes a turn around the track, its forward and lateral velocity determines how well the tires grip the road—and how much sideslip occurs.

“(Autonomous) race cars are typically equipped with special sensors that are very accurate, exhibit almost no noise, and measure the lateral and longitudinal velocity separately,” explains Victor Reijgwart, of the Autonomous Systems Lab at ETH Zurich and a co-creator of the new design.

These state-of-the-art sensors only require simple filters (or calculations) to estimate velocity and control sideslip. But, as Reijgwart notes, “Unfortunately, these sensors are heavy and very expensive—with single sensors often costing as much as an entry-level consumer car.”

His group, whose Formula Student team is named AMZ Racing, sought a novel solution. Their resulting machine learning algorithm relies on several measurements: readings from two standard inertial measurement units, the rotation speeds and motor torques at all four wheels, and the steering angle. They trained their model using real data from racing cars on flat, gravel, bumpy, and wet road surfaces.
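The team’s estimator is more sophisticated than this, but the sketch below conveys the general shape of the approach: a small learned regressor maps the inexpensive sensor readings to longitudinal and lateral velocity, from which sideslip follows. The feature layout, network size, and synthetic training data are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# One feature vector per time step (layout assumed for illustration):
#   [ax, ay, yaw_rate,            # inertial measurement unit readings
#    w_fl, w_fr, w_rl, w_rr,      # wheel rotation speeds
#    tq_fl, tq_fr, tq_rl, tq_rr,  # motor torques
#    steering_angle]
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))            # stand-in for logged sensor data
# Targets: longitudinal and lateral velocity from a reference sensor during
# training runs (here just a synthetic function of the features).
y = np.column_stack([
    X[:, 3:7].mean(axis=1),                # fake "longitudinal velocity"
    0.3 * X[:, 1] + 0.1 * X[:, 11],        # fake "lateral velocity"
])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# At race time, the model estimates velocity (and hence sideslip) from the
# inexpensive on-board sensors alone.
v_lon, v_lat = model.predict(X[:1])[0]
print(v_lon, v_lat, np.arctan2(v_lat, v_lon))   # sideslip angle in radians
```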

In their study, the researchers compared their approach to the external velocity sensors that have been commonly used at multiple Formula Student Driverless events across Europe in 2019. Results show that the new approach performs comparably when the cars are undergoing a high level of sideslip (10° at the rear axle), but offers several advantages. For example, the new approach is better at rejecting biases and outlier measurements. The results also show that the machine learning approach is 15 times more accurate than applying simple filtering algorithms to the same non-specialized sensors.

“But learning from data is a two-edged sword,” says Sirish Srinivasan, another AMZ Racing member at ETH Zurich. “While the approach works well when it has been used under circumstances that are similar to the data it was trained on, safe behavior of the [model] cannot yet be guaranteed when it is used in conditions that significantly differ from the training data.”

Some examples include unusual weather conditions, changes in tire pressure, or other unexpected events.

The AMZ Racing team participates in yearly Formula Student Driverless engineering competitions, and hopes to apply this technique in the next race.

In the meantime, the team is interested in further improving their technique. “Several open research questions remain, but we feel like the most central one would be how to deal with unforeseen circumstances,” says Reijgwart. “This is, arguably, a major open question for the machine learning community in general.”

He notes that one option is to add more “common sense” to the model, which would give it more conservative but safe estimates in unforeseen circumstances. In a more complex approach, the model could perhaps be taught to predict its own uncertainty, so that it hands over control to a simpler but more reliable mode of calculation when the AI encounters an unfamiliar scenario.

This Motorized Backpack Eases the Burden for Hikers

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/consumer-electronics/portable-devices/this-motorized-backpack-eases-the-burden-for-hikers

Journal Watch report logo, link to report landing page

For backpackers, it’s a treat to escape civilization and traverse the woods, enjoying the ruggedness of the great outdoors. But carrying that heavy backpack for days can undoubtedly be a drag.

One group of researchers in China has some welcome news to lighten the load. They’ve designed a new backpack that accounts for the inertial forces of the bag against a backpacker’s body while walking, reducing the metabolic energy required by the user by an average of 11%. Their design is described in a study published July 27 in IEEE Transactions on Neural Systems and Rehabilitation Engineering.

Caihua Xiong, a professor at Huazhong University of Science and Technology who was involved in the study, notes that humans around the world and across the ages have been exploring ways to lighten their loads. “Asian people utilized flexible bamboo poles to carry bulky goods, and Romans designed suspended backpacks to carry heavy loads, which show energetic benefits,” he notes. “These designed passive carrying tools have the same principle [as ours].”

As humans walk, our gait is particularly energy efficient when only one foot is on the ground. But when we transition to the other foot and both feet are temporarily on the ground, the energy transfer becomes less efficient. And if we are carrying a heavy backpack, the extra inertial force from the backpack’s vertical movement and oscillations creates further inefficiencies during this transition.

To compensate for these inertial forces, Xiong’s team designed a motorized backpack with two modes. In its passive mode, two symmetrically arranged elastic ropes balance the weight of the load within the backpack. Alternatively, the user can switch the system into active mode, in which a rotary motor regulates the acceleration of the load. The whole backpack weighs 5.3 kg and was designed to carry loads of up to 30 kg.
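The control idea in active mode can be sketched simply: command the motor so that the load’s vertical acceleration stays near zero as the wearer’s torso bobs up and down. The proportional-derivative form and the gains below are illustrative assumptions, not the controller published in the study.

```python
def active_mode_command(load_accel, load_velocity, kp=40.0, kd=8.0):
    """Return a motor command that drives the load's vertical acceleration toward zero.

    load_accel    : measured vertical acceleration of the load, gravity removed (m/s^2)
    load_velocity : vertical velocity of the load relative to the backpack frame (m/s)
    """
    # Simple PD law: oppose residual acceleration and damp the relative motion.
    return -kp * load_accel - kd * load_velocity

# Example: the load is accelerating upward at 0.5 m/s^2 while drifting up at
# 0.1 m/s, so the motor is commanded to pull in the opposite direction.
print(active_mode_command(load_accel=0.5, load_velocity=0.1))
```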

In experiments with seven similarly sized men, the researchers compared the energy required to carry a typical rucksack with the energy required for their new backpack system, with the motorized system both on and off. The participants tried these three scenarios in random order, while surface electromyography signals from their leg muscles and respiratory measurements were recorded to analyze their energy expenditure.

Results show that the motorized backpack in active mode reduces the load acceleration by 98.5% on average. In terms of metabolic cost to the user, the motorized backpack design required on average 8% and 11% less energy in passive and active modes, respectively, compared with the standard rucksack. Xiong cautions that these reductions may in part be due to the distribution of weight within the two backpacks, since the designed system holds its motor in a fixed position higher in the pack, while the contents of the rucksack sit loose in the compartment, placing the weight differently relative to the user’s center of mass.

Also of note, this study involved people walking on flat ground. Xiong says he is interested in potentially commercializing the product, but aims to first explore ways of improving the system for different walking speeds and terrains.