All posts by Payal Dhar

Study: 6G’s Haptic, Holographic Future?

Post Syndicated from Payal Dhar original


Imagine a teleconference, but with holograms instead of a checkerboard of faces. Or envision websites and media outlets across the Internet that let you make haptic connections (i.e., those involving touch as well as sight and sound). Researchers studying the future of sixth-generation (6G) wireless communications are now sketching out possibilities—though not certainties—for the kinds of technologies a 6G future could entail.

Sixth-generation wireless technology, says Harsh Tataria, a communications engineering lecturer at Lund University, Sweden, will be characterized by low latencies and ultrahigh frequencies, with data transfer speeds potentially hitting 100 Gbps. Tataria, along with colleagues from Lund University, Spark New Zealand, the University of Southern California (USC), and King’s College London, recently published a paper in Proceedings of the IEEE presenting a holistic, top-down view of 6G wireless system design. Their study began by considering the challenges and technical requirements of next-generation networks—and forecasting some of the technological possibilities that could be practically realizable within that context.

Such future-casting is to be expected: as 5G deployment picks up speed around the world, subsequent generations of wireless technology come into sharper focus. Tataria calls it “a natural progression” to look at emerging trends in both technology and consumer demands. “When we look at 6G, we’re really look[ing] at vastly connected societies,” he says, “even a step beyond what 5G is capable of doing, such as real-time holographic communications.”

The study outlines what it calls a “high-fidelity holographic society,” one in which “Holographic presence will enable remote users [to be represented] as a rendered local presence. For instance, technicians performing remote troubleshooting and repairs, doctors performing remote surgeries, and improved remote education in classrooms could benefit from hologram renderings.” The authors note that 4G and expected 5G data rates may not enable such technologies—but that 6G might—owing to the fact that “holographic images will need transmission from multiple viewpoints to account for variation in tilts, angles, and observer positions relative to the hologram.”

Even simple phone conversations could involve new levels of multimedia-rich experience. “For example, in this interview…we could be talking to the rendered presence [of each other],” says Mansoor Shafi, another study co-author. “And that would provide a much richer experience than the audio call we are having at the moment.”

Another promising possibility the study teases involves what they call a haptic Internet. “We believe that a variety of sensory experiences may get integrated with holograms,” the authors write. “To this end, using holograms as the medium of communication, emotion-sensing wearable devices capable of monitoring our mental health, facilitating social interactions, and improving our experience as users will become the building blocks of networks of the future.”

Mischa Dohler, another co-author, believes that 6G will consolidate the “Internet of skills,” or the ability to transmit skills over the Internet. “We can do it with audio and video, but we can’t touch [through] the Internet…[or] move objects.” The consolidation of edge computing, robotics, AI, augmented reality, and 6G communications will make this possible, he says. “This next-generation Internet…will democratize skills the very same way as the Internet has democratized information.”

Andreas Molisch, a study co-author at USC, also hopes that 6G will bring better chip-to-chip communication. “As we go to 200 Gbps or more…the cable connections are just not able to keep up,” he says. “As we are [moving] to higher data rates, higher processing speeds…wireless links are one way in which this bottleneck can be overcome.” This also means increased reliability, as wireless connections are not impacted by shaking or vibration, and lower costs, because replacing cables “might be more expensive than just putting in wireless transceivers on to the chips.”

Other use cases mentioned in the paper involve what they call extremely high-rate “information showers”—hotspots where one can experience terabits-per-second data transfer rates—mobile edge computing, and space-terrestrial integrated networks. But, as Molisch cautions, “There is [still] a lot of research that needs to be done…before the actual standardisation process can start.”

With 6G going up to terahertz frequencies, there will be tremendous challenges in building new hardware as well, the researchers say. Better semiconductor technologies will also be needed for faster devices. Other challenges remain as well, including power consumption.

With frequency bands moving up in the hundreds of gigahertz, “even fundamental things like circuits and substrates to develop circuits [are] extremely tricky,” says Tataria. “So getting all those things right, and going from the fundamental-level details all the way up to building a system is going to be substantially harder than what it first came across.” Their study, therefore, attempts to explore the trade-offs involved in each futuristic technology.

As the authors point out, this study is not a comprehensive or definitive account of 6G’s capabilities and limitations—but rather a documentation of the research conducted to date and the interesting directions for 6G technologies that future researchers could pursue.

Visible Touch: How Cameras Can Help Robots Feel


The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs. 

To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.

A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.” 

Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback to those approaches, however, is that even to achieve coarse spatial resolution, many sensors are needed in a small area.

However, working with non-rigid, inflatable robots, Hoffman and his co-researchers installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of view.

“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.

This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense is unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device. 

The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users would be able to cover their robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Thus, even though the camera is prevented from capturing a high-resolution image of the user or their surrounding environment, with the right kind of training datasets the robot can continue to monitor some kinds of non-tactile activities.

In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.

As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model in a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.”
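The retraining strategy Hoffman describes can be sketched in miniature. In the toy below, a fixed random projection stands in for the frozen, pretrained layers, and only a softmax last layer is trained, on synthetic "shadow" data; the real system would use a pretrained image network and the team's own dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS, N_FEATURES, N_CLASSES = 64, 16, 6  # six touch gestures

# "Frozen" pretrained layers: weights are fixed and never updated.
W_frozen = rng.normal(size=(N_PIXELS, N_FEATURES))

def extract_features(images):
    return np.maximum(images @ W_frozen, 0.0)  # ReLU features

# Synthetic "shadow image" dataset: each class clusters around a prototype.
prototypes = rng.normal(size=(N_CLASSES, N_PIXELS))
labels = rng.integers(0, N_CLASSES, size=300)
images = prototypes[labels] + 0.1 * rng.normal(size=(300, N_PIXELS))

# Trainable last layer: softmax regression on the frozen features.
X = extract_features(images)
W_last = np.zeros((N_FEATURES, N_CLASSES))
onehot = np.eye(N_CLASSES)[labels]
for _ in range(200):                      # plain gradient descent
    logits = X @ W_last
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W_last -= 0.01 * X.T @ (p - onehot) / len(X)

accuracy = float(((X @ W_last).argmax(axis=1) == labels).mean())
print(f"training accuracy: {accuracy:.2f}")
```

Because only the last layer is trained, a few hundred examples suffice, which is the point Hoffman makes about not needing millions of robot-touching images.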

3D Printed Home Technologies Scaling Up Around the World


In December 2020, Larsen & Toubro Construction in India 3D-printed a concrete, 65-square-meter model residential building (a ground floor plus one story) at their test facility in Kanchipuram. Larsen & Toubro used COBOD’s robotic 3D construction printers with a concrete mix developed in-house by L&T.

It was, to be clear, only a proof of concept. But the dwelling (pictured above) is also a sign of developments to come. 

Printing with concrete is a complicated matter. The concrete must flow fluidly enough to be extruded through the printer, it must be quick-hardening and strong enough to take the load of subsequent layers, and it must be able to bond sufficiently between those layers.

“In addition, 3D concrete printing must overcome the challenge of [working] in an open-to-sky environment by controlling variables like ambient temperature, moisture, humidity, dust and the like that can affect the concrete properties,” says M.V. Satish, senior executive vice-president at L&T. A suitable 3D concrete mix, therefore, took numerous field trials to perfect, he says, especially as the team wanted to use locally available building material.

Integrating horizontal and vertical reinforcement bars during the printing process proved a challenge too, Satish says. To meet that challenge, his team modified their print head to embed the rebars.

India faces a massive housing shortfall, and the government’s Housing for All program promises an ambitious 20 million urban and 30 million rural homes by 2022. “3D printing will certainly be a technology well-suited to meet the speed and scale demanded by India’s mass housing industry,” says Satish, but these are still early days.

So far, L&T have demonstrated that the technology is feasible, but their prototype is still too early-stage to know how much it would cost and how long it would take to build if developed at a large scale. “That would depend on various factors like building configuration, level of architectural sophistication, wall dimensions, project built-up area, number of floors or apartments in a building, and such like,” Satish says.

While the Kanchipuram model complies with Indian building codes, “there are various design validation processes [still required from] statutory bodies, research institutions, et cetera,” he says.

On the other side of the world, Mighty Buildings, based in California, has already been delivering a selection of 3D-printed structures to customers.

Using a proprietary thermoset composite material—“effectively synthetic stone,” says Sam Ruben, co-founder and chief sustainability officer—they print entire 350-square-foot studios, which are then placed on the prepared foundation at the building site and connected to utilities. Two such studio modules can be connected to form a bigger house. Later this year, the firm will roll out larger custom houses, made from 3D-printed panels that will be delivered to the building site and assembled there.

Even though most of their sales have been accessory dwelling units sold direct to consumers, Mighty Buildings would like to move into B2B services by plugging into the existing construction industry supply chain. Ruben calls it disruption through collaboration. “For us, it makes sense to focus on the 3D printing, on the material,” he says. “And then work with developers and builders who have the expertise in their [specific] areas.”

To that end, Mighty Buildings has already signed their first contract for a development project in southern California. The company’s vision, Ruben says, is to distribute their production process both across the U.S. and “around the world, in areas where we have … partners—and we have the demand.”

Both Satish and Ruben say that 3D printing has the potential to completely change the dynamics of the mass construction market. Despite the high capital investment needed for automated 3D-printing construction compared to conventional technologies, there are concomitant advantages, Satish says, in terms of safety, speed, scale, and design complexity.

“We still build using techniques and equipment and tools that someone building 100 years ago would be familiar with,” Ruben says. “It is the last major industry that hasn’t embraced new technology as a way to increase productivity.”

Bionic muscles that are stronger, faster, and more efficient


Artificial muscles, once a tangle of elaborate servomotors and hydraulic and pneumatic actuators, are now a thing of shape-memory alloys and hair-thin carbon nanotube (CNT) fibers. Bionics are, in brief, getting smaller—though perhaps not simpler.

Electrochemical CNT muscles are also energy efficient, and they provide larger muscle strokes as well. Recently, a group of researchers from the U.S., Australia, South Korea, and China, working with polymer-coated CNT fibers twisted into yarn, demonstrated how these muscles can be made faster, more powerful, and more energy efficient.

Electrochemically driven CNT muscles actuate when a voltage is applied between the muscle fiber and a counter-electrode, driving ions between the surrounding electrolyte and the muscle. Generally speaking, this makes the muscle either contract or expand until the applied potential crosses the potential of zero charge, after which the motion reverses direction. In other words, a bipolar muscle stroke ensues. This bipolar movement, however, results in a smaller net muscle stroke, reducing the muscle’s efficiency.

The research team devised a way to circumvent this limitation. “When we coated the internal surface of the yarns with about a nanometer thicknesses of special polymers, we could shift the potential of zero charge of the muscle to outside the [stability window of the electrolyte, a voltage range beyond which it breaks down],” says Ray Baughman, director of the Alan G. MacDiarmid NanoTech Institute, University of Texas at Dallas, one of the authors of the paper. 

These polymers are ionically conducting materials with either positively or negatively charged chemical groups. In other words, they can accept either positive (cations) or negative ions (anions). With the potential of zero charge outside the electrolyte’s stability window, only one kind of ion (either cations or anions) infiltrates the muscle, and the muscle actuates in a unipolar direction. 
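A toy numerical model (not from the paper) captures why shifting the potential of zero charge matters. If stroke is taken to scale with the magnitude of stored ionic charge, a zero-charge potential inside the scanned voltage window forces the stroke to collapse and reverse mid-scan, while one shifted outside the window yields a one-directional, unipolar stroke:

```python
import numpy as np

# Illustrative model: the yarn's stroke is taken to scale with the
# magnitude of stored ionic charge, Q = C * (V - V_pzc), where V_pzc
# is the potential of zero charge.

def stroke(V, V_pzc, C=1.0, k=0.01):
    return k * np.abs(C * (V - V_pzc))   # stroke ~ |stored charge|

V = np.linspace(-1.0, 1.0, 201)          # scanned voltage window, volts

bipolar = stroke(V, V_pzc=0.0)           # V_pzc inside the window
unipolar = stroke(V, V_pzc=-1.5)         # V_pzc shifted outside it

# Bipolar case: stroke collapses to zero at V = V_pzc, then reverses.
print("bipolar stroke reverses mid-scan:", bool(bipolar.min() < 1e-12))
# Unipolar case: stroke changes monotonically across the whole window.
print("unipolar stroke is one-directional:", bool(np.all(np.diff(unipolar) > 0)))
```

The constants here are arbitrary; only the qualitative shape of the two curves reflects the behavior the researchers describe.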

In the lab, the researchers used a CNT yarn muscle with a non-actuating counter-electrode to demonstrate their concept. But Baughman says that this doesn’t have to be the case. They found that by using two different types of their polymer-coated carbon nanotube yarns—one with positive substituents and the other with negative—they could create a dual-electrode unipolar muscle. “You can use the mechanical work being done by each muscle [additively]… [by putting] an unlimited number of these muscles together.”

The team were also able to make a dual-electrode CNT yarn muscle with a solid-state electrolyte, eliminating the need for a liquid electrolyte bath. “These dual electrode, unipolar muscles were woven to make actuating textiles that could be used for morphing clothing,” said Zhong Wang, a doctoral student and co-author, in the press release.

The group’s electrochemical unipolar muscles generate an average mechanical power output that is 10 times the average capability of human muscles, and about 2.2 times the weight-normalized power capability of a turbocharged V-8 diesel engine.

As such, the muscles have a wide range of applications, including robotics and adaptable clothing. On the robotics front, Baughman says motors can be heavy and difficult to coordinate in a device with broad freedom of movement. By contrast, the artificial muscles could power electric robotic exoskeletons, which could enable a person to work in a warehouse and move heavy items around with ease.

Clothing that could adjust for comfort is another application, allowing wearers to change the porosity of textiles depending on the weather. Medical implants, like a heart-assist apparatus, could also use compact and lightweight artificial muscles, as could prosthetics. “We are [currently]… writing a proposal for doing studies that will actually involve patients,” Baughman adds.

Before real-world applications become possible, the challenge is to produce cost-effective, high-quality carbon nanotube yarn at scale. Baughman and his team also hope to adapt the unipolar CNT muscles to make more powerful mechanical energy harvesters.

Noise-Canceling Headphones Without the Headphones


Active noise cancellation/control (ANC) headsets rely not only on electronics but also on passive sound attenuation from the padding and cups found in headphones and earphones. This combination makes ANC headphones effective noise control devices, although wearing them for extended periods of time can be uncomfortable and even cause injury. A group of researchers from the University of Technology Sydney’s Centre for Audio, Acoustics and Vibration has been working on a new kind of virtual noise cancellation system that moves the ANC components away from personal audio equipment and into the headrest of a chair.

Active noise cancellation has an almost century-long history—going back to the 1930s when the first patent was awarded in the US for a device that could cancel out unwanted noise by inverting the polarity of these sounds and playing them back. In 1989 the first working prototype of ANC headphones was released for the aviation industry. In 1991, ANC was used for noise-control in an enclosed space—for the hardtop model of the Nissan Bluebird in Japan.
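The core idea of that original patent, inverting the polarity of the unwanted sound so that noise and "anti-noise" cancel by destructive interference, can be sketched in a few lines. This is an idealized illustration; real ANC systems must estimate and adapt the anti-noise signal in real time:

```python
import numpy as np

fs = 8000                                   # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)               # 100 ms of audio
noise = 0.5 * np.sin(2 * np.pi * 120 * t)   # a 120 Hz hum
anti_noise = -noise                         # polarity-inverted copy

residual = noise + anti_noise               # what the listener hears

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

print(f"noise RMS:    {rms(noise):.4f}")
print(f"residual RMS: {rms(residual):.4f}")  # perfect cancellation -> 0
```

In practice the anti-noise is never a perfect inverted copy; any delay or amplitude mismatch leaves a nonzero residual, which is why adaptive filtering and good sensing (such as the LDV described below) matter.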

“However, this ANC headrest system has seen very little progress over the years compared to ANC headphones,” says Tong Xiao, one of the authors of the current study.

There are multiple reasons for this. Existing ANC headrests use microphones set in strategic positions around a user’s head to sample the sounds reaching them. These setups are best for dealing with low-frequency noise up to 1 kHz. However, the passive noise control provided by the cushioning and cups of headphones, which reduces high-frequency noise more effectively, is absent. Those higher frequencies include human speech, which extends to around 4 to 6 kHz.

Thus the Sydney team needed to demonstrate an ANC system that worked with both high and low frequencies. So they used a remote acoustic sensing system built around a laser Doppler vibrometer (LDV), which measures vibrations without contact over a wide frequency range. In their tests, they placed a tiny, jewelry-sized, retro-reflective membrane in the ear as a pick-up for the LDV.

The test was a success, reports Xiao, with noise cancellation effective up to 6 kHz and attenuation of 10 to 20 dB.
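For a sense of scale, decibels compare acoustic power on a logarithmic scale, so 10 to 20 dB of attenuation corresponds to a 10- to 100-fold reduction in noise power:

```python
# Every 10 dB of attenuation cuts acoustic power by a factor of 10.

def attenuation_factor(db):
    return 10 ** (db / 10)        # linear power ratio

print(attenuation_factor(10))     # 10 dB -> 10x less noise power
print(attenuation_factor(20))     # 20 dB -> 100x less noise power
```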

The researchers tested their system on three representative types of environmental noise—aircraft interior noise, airplane flyby noise, and speech. Xiao says their system performs closer to a pair of ANC headphones than to any existing ANC headrest, with a wide frequency response but without the need for any bulky headsets or other devices. “We call it ‘virtual ANC headphones,’” he says.

The performance of these virtual headphones was demonstrated on a head-and-torso simulator (HATS) device to record consistent measurements. However, Xiao says, they have performed initial tests with human subjects since the paper was published.

“The configuration…[is] simple and yet effective,” he says. “[One] limitation…so far is the cost, since the system uses LDVs, which are delicate scientific instruments and can be pricey. However, we believe science advancements in the near future can make the system more cost effective.”

Other aspects of the system still need to be ironed out: a more realistic head-tracking system; improved materials for the membrane placed in the user’s ears; lasers that are safe and invisible to humans; and, of course, still higher levels of noise reduction.

Solar-based Electronic Skin Generates Its Own Power


Replicating the human sense of touch is complicated—electronic skins need to be flexible, stretchable, and sensitive to temperature, pressure, and texture; they need to be able to read biological data and provide electronic readouts. On top of all that, powering electronic skin for continuous, real-time use is a big challenge.

To address this, researchers from Glasgow University have developed an energy-generating e-skin made out of miniaturized solar cells, without dedicated touch sensors. The solar cells not only generate their own power—and some surplus—but also provide tactile capabilities for touch and proximity sensing. An early-view paper of their findings was published in IEEE Transactions on Robotics.

When exposed to a light source, the solar cells on the e-skin generate energy. If a cell is shadowed by an approaching object, the intensity of the light, and therefore the energy generated, drops, reaching zero when the cell makes contact with the object, confirming touch. In proximity mode, the light intensity indicates how far the object is from the cell. “In real time, you can then compare the light intensity…and after calibration find out the distances,” says Ravinder Dahiya of the Bendable Electronics and Sensing Technologies (BEST) Group, James Watt School of Engineering, University of Glasgow, where the study was carried out. The team paired infrared LEDs with the solar cells for better proximity-sensing results.
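The calibration Dahiya describes can be sketched as a simple lookup. The numbers below are hypothetical, but the logic follows the description above: intensity falls monotonically as an object approaches and shadows the cell, so inverting a calibrated intensity-distance curve yields a proximity estimate, with zero intensity signalling touch:

```python
import numpy as np

# Hypothetical calibration table: distance (cm) -> relative intensity.
cal_distance = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
cal_intensity = np.array([0.0, 0.30, 0.55, 0.85, 1.00])

def estimate_distance(intensity):
    # np.interp requires increasing x values; intensity rises with
    # distance here, so the calibrated curve can be inverted directly.
    return float(np.interp(intensity, cal_intensity, cal_distance))

print(estimate_distance(1.00))  # unobstructed -> 8.0 cm (or farther)
print(estimate_distance(0.55))  # partly shadowed -> 2.0 cm
print(estimate_distance(0.0))   # fully dark -> 0.0 cm, i.e. touch
```

A real skin would calibrate each cell separately and account for ambient light, which is one reason the team added infrared LEDs.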

To demonstrate their concept, the researchers wrapped a generic 3D-printed robotic hand in their solar skin, which was then recorded interacting with its environment. The proof-of-concept tests showed an energy surplus of 383.3 mW from the palm of the robotic arm. “The eSkin could generate more than 100 W if present over the whole body area,” they reported in their paper.

“If you look at autonomous, battery-powered robots, putting an electronic skin [that] is consuming energy is a big problem because then it leads to reduced operational time,” says Dahiya. “On the other hand, if you have a skin which generates energy, then…it improves the operational time because you can continue to charge [during operation].” In essence, he says, they turned a challenge—how to power the large surface area of the skin—into an opportunity—by turning it into an energy-generating resource.

Dahiya envisages numerous applications for BEST’s innovative e-skin, given its material-integrated sensing capabilities, apart from the obvious use in robotics. For instance, in prosthetics: “[As] we are using [a] solar cell as a touch sensor itself…we are also [making it] less bulk[y] than other electronic skins.” This, he adds, will help create prosthetics of optimal weight and size, making life easier for prosthetics users. “If you look at electronic skin research, the real action starts after it makes contact… Solar skin is a step ahead, because it will start to work when the object is approaching…[and] have more time to prepare for action.” This could effectively reduce the time lag often seen in brain–computer interfaces.

There are also possibilities in the automotive sector, particularly in electric and interactive vehicles. A car covered with solar e-skin, thanks to its proximity-sensing capabilities, would be able to “see” an approaching obstacle or person. It isn’t “seeing” in the biological sense, Dahiya clarifies, but from the point of view of a machine. And the skin can be integrated with other objects, not just cars, for a variety of uses. “Gestures can be recognized as well…[which] could be used for gesture-based control…in gaming or in other sectors.”

In the lab, tests were conducted with a single source of white light at 650 lux, but Dahiya sees interesting possibilities if the e-skin could work with, and differentiate between, multiple light sources. “We are exploring different AI techniques [for that],” he says, “processing the data in an innovative way [so] that we can identify the directions of the light sources as well as the object.”

The BEST team’s achievement brings us closer to a flexible, self-powered, cost-effective electronic skin that can touch as well as “see.” At the moment, however, there are still some challenges. One of them is flexibility. In their prototype, the researchers used commercial solar cells made of amorphous silicon, each 1 cm × 1 cm. “They are not flexible, but they are integrated on a flexible substrate,” Dahiya says. “We are currently exploring nanowire-based solar cells…[with which] we hope to achieve good performance in terms of energy as well as sensing functionality.” Another shortcoming is what Dahiya calls “the integration challenge”—how to make the solar skin work with different materials.

AlphaFold Proves That AI Can Crack Fundamental Scientific Problems


Any successful implementation of artificial intelligence hinges on asking the right questions in the right way. That’s what the British AI company DeepMind (a subsidiary of Alphabet) accomplished when it used its neural network to tackle one of biology’s grand challenges, the protein-folding problem. Its neural net, known as AlphaFold, was able to predict the 3D structures of proteins based on their amino acid sequences with unprecedented accuracy. 

AlphaFold’s predictions at the 14th Critical Assessment of protein Structure Prediction (CASP14) were accurate to within an atom’s width for most of the proteins. The competition consisted of blindly predicting the structure of proteins that have only recently been experimentally determined—with some still awaiting determination.

Called the building blocks of life, proteins consist of 20 different amino acids in various combinations and sequences. A protein’s biological function is tied to its 3D structure. Therefore, knowledge of the final folded shape is essential to understanding how a specific protein works—how it interacts with other biomolecules, how it may be controlled or modified, and so on. “Being able to predict structure from sequence is the first real step towards protein design,” says Janet M. Thornton, director emeritus of the European Bioinformatics Institute. It also has enormous benefits in understanding disease-causing pathogens. For instance, at the moment, the structures of only about 18 of the 26 proteins in the SARS-CoV-2 virus are known.

Predicting a protein’s 3D structure is a computational nightmare. In 1969, Cyrus Levinthal estimated that there are 10³⁰⁰ possible conformational combinations for a single protein—which would take longer than the age of the known universe to evaluate by brute-force calculation. AlphaFold can do it in a few days.
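The arithmetic behind Levinthal's argument is simple to reproduce. Assuming a wildly optimistic, hypothetical sampler that tests one conformation per femtosecond:

```python
conformations = 10 ** 300   # Levinthal's 1969 estimate for one protein
rate = 10 ** 15             # hypothetical: one conformation per femtosecond
universe_age_s = 4.35e17    # ~13.8 billion years, in seconds

seconds_needed = conformations // rate       # 10^285 seconds
universes = seconds_needed / universe_age_s
print(f"brute force would take ~{universes:.1e} universe lifetimes")
```

Even at that absurd sampling rate, enumeration takes on the order of 10²⁶⁷ universe lifetimes, which is why structure prediction has to be something smarter than search.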

As scientific breakthroughs go, AlphaFold’s discovery is right up there with the likes of James Watson and Francis Crick’s DNA double-helix model, or, more recently, Jennifer Doudna and Emmanuelle Charpentier’s CRISPR-Cas9 genome editing technique.

How did a team that just a few years ago was teaching an AI to master a 3,000-year-old game end up training one to answer a question plaguing biologists for five decades? That, says Briana Brownell, data scientist and founder of the AI company PureStrategy, is the beauty of artificial intelligence: The same kind of algorithm can be used for very different things. 

“Whenever you have a problem that you want to solve with AI,” she says, “you need to figure out how to get the right data into the model—and then the right sort of output that you can translate back into the real world.”

DeepMind’s success, she says, wasn’t so much a function of picking the right neural nets but rather “how they set up the problem in a sophisticated enough way that the neural network-based modeling [could] actually answer the question.”

AlphaFold showed promise in 2018, when DeepMind introduced a previous iteration of its AI at CASP13, achieving the highest accuracy among all participants. The team had trained that AI to model target shapes from scratch, without using previously solved proteins as templates.

For 2020, they built new deep-learning architectures into the AI, using an attention-based model that was trained end-to-end. Attention in a deep-learning network refers to a component that manages and quantifies the interdependence between the input and output elements, as well as among the input elements themselves.
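The attention operation described here can be sketched in its generic scaled dot-product form. This is the textbook operation, not AlphaFold's exact architecture; each output element is a weighted mix of all input elements, with the weights computed from the inputs themselves:

```python
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)           # pairwise interdependence
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)      # softmax over input elements
    return w @ V                            # weighted mix of the values

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 8))     # 5 input elements, 8 features each
out = attention(x, x, x)        # self-attention: inputs attend to each other
print(out.shape)                # one output per input element
```

For protein structure prediction, the appeal is that attention lets every residue weigh its relationship to every other residue, however far apart they sit in the sequence.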

The system was trained on public datasets of the approximately 170,000 known experimental protein structures in addition to databases with protein sequences of unknown structures. 

“If you look at the difference between their entry two years ago and this one, the structure of the AI system was different,” says Brownell. “This time, they’ve figured out how to translate the real world into data … [and] created an output that could be translated back into the real world.”

Like any AI system, AlphaFold may need to contend with biases in the training data. For instance, Brownell says, AlphaFold is using available information about protein structure that has been measured in other ways. However, there are also many proteins with as yet unknown 3D structures. Therefore, she says, a bias could conceivably creep in toward those kinds of proteins that we have more structural data for. 

Thornton says it’s difficult to predict how long it will take for AlphaFold’s breakthrough to translate into real-world applications.

“We only have experimental structures for about 10 per cent of the 20,000 proteins [in] the human body,” she says. “A powerful AI model could unveil the structures of the other 90 per cent.”

Apart from increasing our understanding of human biology and health, she adds, “it is the first real step toward… building proteins that fulfill a specific function. From protein therapeutics to biofuels or enzymes that eat plastic, the possibilities are endless.”

4G on the Moon: One Small Leap, One Giant Step


Standing up a 4G/LTE network might seem well below the pay grade of a legendary innovation hub like Nokia Bell Labs. However, standing up that same network on the moon is another matter. 

Nokia and 13 other companies — including SpaceX and Lockheed Martin — have won five-year contracts totaling over US $370 million from NASA to demonstrate key infrastructure technologies on the lunar surface, all of which is part of NASA’s Artemis program to return humans to the moon.

NASA also wants the moon to be a stepping stone for further explorations in the solar system, beginning with Mars.

For that, says L.K. Kubendran, lead for commercial space technology partnerships, a lunar outpost would need, at the very least, power, shelter, and a way to communicate.

And the moon’s 4G network will not only be carrying voice and data signals like those found in terrestrial applications, it’ll also be handling remote operations such as controlling lunar rovers from a distance. The antennas and base stations will need to be ruggedized for the harsh, radiation-filled lunar environment. Plus, over the course of a typical lunar day-night cycle (28 Earth days), the network infrastructure will face temperature swings of over 250 °C between sunlight and shadow. Of course, every piece of hardware in the network must first be transported to the moon, too.

“The equipment needs to be hardened for environmental stresses such as vibration, shock and acceleration, especially during launch and landing procedures, and the harsh conditions experienced in space and on the lunar surface,” says Thierry Klein, head of the Enterprise & Industrial Automation Lab, Nokia Bell Labs. 

Oddly enough, it’ll probably all work better on the moon, too. 

“The propagation model will be different,” explains Dola Saha, assistant professor in Electrical and Computer Engineering at the University at Albany, SUNY. She adds that the lack of atmosphere as well as the absence of typical terrestrial obstructions like trees and buildings will likely mean better signal propagation. 
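With no atmosphere or obstructions, the dominant loss is the baseline free-space path loss, which grows with distance and carrier frequency. A quick sketch (the 1.8 GHz carrier and 5 km range are illustrative assumptions, not figures from the Nokia design):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in decibels: 20 * log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Hypothetical example: a 1.8 GHz LTE carrier over a 5 km line of sight.
print(round(fspl_db(5_000, 1.8e9), 1))  # about 111.5 dB
```

Terrestrial networks add losses from buildings, foliage, and multipath on top of this baseline, which is why the obstruction-free lunar surface should, if anything, make propagation easier.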

To ferry the equipment for their lunar 4G network, Nokia will be collaborating with Intuitive Machines, which is also developing a lunar ice drill for the moon’s south pole. “Intuitive Machines is building a rover that they want to land on the moon in late 2022,” says Kubendran. That rover mission now seems likely to be the one that begins hauling 4G infrastructure to the lunar surface.

For all the technological innovation a lunar 4G network might bring about, its signals could unfortunately also mean bad news for radio astronomy. Radio telescopes are notoriously vulnerable to interference (a.k.a. radio frequency interference, or RFI). For instance, even a stray phone signal from Mars carries enough power to interfere with the Jodrell Bank radio observatory in Manchester, U.K. So, asks Emma Alexander, an astrophysicist writing for The Conversation, “how would it fare with an entire 4G network on the moon?”

That depends, says Saha. 

“Depends on which frequencies you’re using, and…the side lobes and…filters [at] the front end of these networks,” Saha says. On Earth, she continues, there are so many applications that the FCC in the US, and corresponding bodies in other countries, allocate frequencies to each of them.

“RFI can be mitigated at the source with appropriate shielding and precision in the emission of signals,” Alexander writes. “Astronomers are constantly developing strategies to cut RFI from their data. But this increasingly relies on the goodwill of private companies.” It also relies, she adds, on government regulations meant to shield earthly radio telescopes from human-generated interference from above.

Understanding Causality Is the Next Challenge for Machine Learning

“Causality is very important for the next steps of progress of machine learning,” said Yoshua Bengio, a Turing Award-winning scientist known for his work in deep learning, in an interview with IEEE Spectrum in 2019. So far, deep learning has comprised learning from static datasets, which makes AI really good at tasks involving correlations and associations. However, neural nets do not interpret cause and effect, or why these associations and correlations exist. Nor are they particularly good at tasks that involve imagination, reasoning, and planning. This, in turn, limits AI’s ability to generalize its learning and transfer its skills to related environments.

The lack of generalization is a big problem, says Ossama Ahmed, a master’s student at ETH Zurich who has worked with Bengio’s team to develop a robotic benchmarking tool for causality and transfer learning. “Robots are [often] trained in simulation, and then when you try to deploy [them] in the real world…they usually fail to transfer their learned skills. One of the reasons is that the physical properties of the simulation are quite different from the real world,” says Ahmed. The group’s tool, called CausalWorld, demonstrates that with some of the methods currently available, the generalization capabilities of robots aren’t good enough—at least not to the extent that “we can deploy [them] safely in any arbitrary situation in the real world,” says Ahmed.

The paper on CausalWorld, available as a preprint, describes benchmarks in a simulated robotics manipulation environment using the open-source TriFinger robotics platform. The main purpose of CausalWorld is to accelerate research in causal structure and transfer learning using this simulated environment, where learned skills could potentially be transferred to the real world. Robotic agents can be given tasks that comprise pushing, stacking, placing, and so on, informed by how children have been observed to play with blocks and learn to build complex structures. There is a large set of parameters, such as weight, shape, and appearance of the blocks and the robot itself, on which the user can intervene at any point to evaluate the robot’s generalization capabilities.

In their study, the researchers gave the robots a number of tasks ranging from simple to extremely challenging, based on three different curricula. The first involved no environment changes; the second had changes to a single variable; and the third allowed full randomization of all variables in the environment. They observed that as the curricula got more complex, the agents showed less ability to transfer their skills to the new conditions.
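The three curricula amount to progressively broader interventions on the environment's variables. The toy sketch below illustrates that structure; the parameter names, defaults, and sampling scheme are hypothetical stand-ins for illustration, not the actual CausalWorld API:

```python
import random

# Hypothetical environment variables an evaluator could intervene on.
DEFAULTS = {"block_mass": 0.1, "block_size": 0.05, "floor_friction": 0.5}

def sample_environment(curriculum, rng):
    """Sample environment parameters under one of three curricula:
    'fixed' changes nothing, 'single' intervenes on one variable,
    and 'full' randomizes every variable."""
    params = dict(DEFAULTS)
    if curriculum == "fixed":
        return params
    targets = (list(params) if curriculum == "full"
               else [rng.choice(list(params))])
    for name in targets:
        params[name] *= rng.uniform(0.5, 2.0)  # scale the variable at random
    return params

rng = random.Random(0)
for curriculum in ("fixed", "single", "full"):
    env = sample_environment(curriculum, rng)
    changed = [k for k in DEFAULTS if env[k] != DEFAULTS[k]]
    print(curriculum, changed)
```

As the evaluation distribution widens from "fixed" to "full," an agent trained only on the default environment faces increasingly out-of-distribution conditions, which is the effect the study measured.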

“If we continue scaling up training and network architectures beyond the experiments we report, current methods could potentially solve more of the block stacking environments we propose with CausalWorld,” points out Frederik Träuble, one of the contributors to the study. Träuble adds that “What’s actually interesting is that we humans can generalize much, much quicker [and] we don’t need such a vast amount of experience… We can learn from the underlying shared rules of [certain] environments…[and] use this to generalize better to yet other environments that we haven’t seen.”

A standard neural network, on the other hand, would require insane amounts of experience with myriad environments in order to do the same. “Having a model architecture or method that can learn these underlying rules or causal mechanisms, and utilize them could [help] overcome these challenges,” Träuble says.

CausalWorld’s evaluation protocols, say Ahmed and Träuble, are more versatile than those in previous studies because of the possibility of “disentangling” generalization abilities. In other words, users are free to intervene on a large number of variables in the environment, and thus draw systematic conclusions about what the agent generalizes to—or doesn’t. The next challenge, they say, is to actually use the tools available in CausalWorld to build more generalizable systems.

Despite how dazzled we are by AI’s ability to perform certain tasks, Yoshua Bengio, in 2019, estimated that present-day deep learning is less intelligent than a two-year-old child. Though the ability of neural networks to parallel-process on a large scale has given us breakthroughs in computer vision, translation, and memory, research is now shifting to developing novel deep architectures and training frameworks for addressing tasks like reasoning, planning, capturing causality, and obtaining systematic generalization. “I believe it’s just the beginning of a different style of brain-inspired computation,” Bengio said, adding, “I think we have a lot of the tools to get started.”

Spinach Gives Fuel Cells a Power Up

When Shouzhong Zou and his team at the Department of Chemistry, American University, decided to try spinach as way to improve the performance of fuel cells, even they were a little surprised at how well it worked. In their proof-of-concept experiments, they used spinach—bought from local supermarkets—to make a carbon-rich catalyst that can be used in fuel cells and metal-air batteries.

The spinach was used as a precursor for the high-performance catalysts required for the oxygen reduction reactions (ORRs) in fuel cells. Traditionally, fuel cells have used platinum-based catalysts, but platinum is not only expensive and difficult to obtain, it can also be vulnerable to chemical poisoning in certain conditions. Consequently, researchers have looked into biomass-derived, carbon-based catalysts to replace platinum, but preparing the materials in a cost-effective and nontoxic way has been a bottleneck. “We were a little bit lucky to pick up spinach,” says Zou, because of its high iron and nitrogen content. “At this point [our method] does require us to add a little bit more nitrogen into the starting material, because even though [spinach] has a lot of nitrogen to begin with, during the preparation process, some of this nitrogen gets lost.”

Zou and his team weren’t the first to explore the electrochemical potential of spinach, though earlier studies used the leafy greens for other purposes. For example, a 2014 study harvested activated carbon from spinach to create capacitor electrodes, while a more recent paper investigated spinach-based nanocomposites as photocatalysts. Spinach, apart from being abundant in iron and nitrogen (both essential in ORRs), is easy to cultivate, and “definitely cheaper than platinum,” Zou adds.

The preparation of the spinach-based catalyst sounds at first suspiciously like a smoothie recipe—wash fresh leaves, pulverize them into a juice, and freeze-dry. The freeze-dried juice is then ground into a powder, to which melamine is added as a nitrogen promoter. Salts like sodium chloride and potassium chloride—“pretty much like the table salt that we use in our kitchen,” says Zou—are also added; they are necessary for creating pores that increase the surface area available for reactions. Nanosheets are produced from the spinach–melamine–salt composites by pyrolyzing them at 900 °C a couple of times. “Obviously…we can optimize how we prepare this material [to make it more efficient].”

An efficient catalyst means a faster, more efficient reaction. In the case of fuel cells, this can increase the energy output of batteries. This is where the porosity of the nanosheets helps. “Even though we call them nanosheets,” Zou says, “when they are stacked together, it’s not like a stack of paper that is very solid.” The added salts create tiny holes that allow oxygen to penetrate the material rather than reaching only the outer surfaces. “We need to make it porous enough that…all the active sites can be used.”

The other factor that favorably disposed the American University team toward spinach was that it is a renewable source of biomass. “Sustainability is a very important factor in our consideration,” says Zou. The big question to explore, he adds, is how to avoid competition “with the dinner table.” (Biofuel production has already raised concerns about food crops being diverted away from hungry mouths.) “And the second is, how do we keep the carbon footprint down in terms of this catalyst preparation…because currently we do use high temperatures in our preparation procedure?… If we can find different ways to do these to achieve the same type of material, that will cut back the energy consumption and reduce significantly the carbon footprint.”

Even though the results are promising, there is still a long way to go. Zou cautions that the study so far is only a proof of principle. “We need to be very careful when we talk about practical applications because something that shows excellent performance in [lab] conditions could become more challenging when we implement them in the real device.” Another aspect that needs further study, he adds, is that while the spinach-derived catalyst outperforms platinum-based catalysts in alkaline conditions, its performance in an acidic medium is not as efficient. “So obviously, there is still some tuning we need to do to see if they can work through a range of pH.”

A complete prototype is obviously a next step—testing the catalyst derived from spinach in a fuel cell. “That’s the kind of expertise I don’t have in my lab at this point,” Zou admits. “We are thinking about collaborating with other groups, or we can build up our expertise in this area, because it’s a necessary step.”

ITER Celebrates Milestone, Still at Least a Decade Away From Fusing Atoms

It was a twinkle in U.S. President Ronald Reagan’s eye, an enthusiasm he shared with General Secretary Mikhail Gorbachev of the Soviet Union: boundless stores of clean energy from nuclear fusion.

That was 35 years ago. 

On July 28, 2020, the product of these Cold Warriors’ mutual infatuation with fusion, the International Thermonuclear Experimental Reactor (ITER) in Saint-Paul-lès-Durance, France, inaugurated the start of the machine assembly phase of this industrial-scale tokamak nuclear fusion reactor.

An experiment to demonstrate the feasibility of nuclear fusion as a virtually inexhaustible, waste-free and non-polluting source of energy, ITER has already been 30-plus years in planning, with tens of billions invested. And if there are new fusion reactors designed based on research conducted here, they won’t be powering anything until the latter half of this century.

Speaking from the Élysée Palace in Paris via an internet link during last month’s launch ceremony, President Emmanuel Macron said, “[ITER] is proof that what brings together people and nations is stronger than what pulls them apart. [It is] a promise of progress, and of confidence in science.” Indeed, as the COVID-19 pandemic continues to baffle modern science around the world, ITER is a welcome beacon of hope.

ITER comprises 35 collaborating countries, including members of the European Union, China, India, Japan, Russia, South Korea, and the United States, which are directly contributing to the project either in cash or in kind with components and services. The EU has contributed about 45%, while the others pitch in about 9% each. The total cost of the project could be anywhere between $22 billion and $65 billion, though the latter figure has been disputed.

The idea for ITER was sparked back in 1985, at the Geneva Superpower Summit, where President Ronald Reagan of the United States and General Secretary Mikhail Gorbachev of the Soviet Union spoke of an international collaboration to develop fusion energy. A year later, at the US–USSR Summit in Reykjavik, an agreement was reached among the European Union’s Euratom, Japan, the Soviet Union, and the United States to jointly start work on the design of a fusion reactor. At that time, controlled release of fusion power hadn’t even been demonstrated—that only happened in 1991, at the Joint European Torus (JET) in the UK.

The first big component to be installed at ITER was the 1,250-metric ton cryostat base, which was lowered into the tokamak pit in late May 2020. The cryostat is India’s contribution to the reactor, and uses specialized tools specifically procured for ITER by the Korean Domestic Agency to place components weighing hundreds of tonnes and having positioning tolerances of a few millimeters. Machine assembly is scheduled to finish by the end of 2024, and by mid-2025, we are likely to see first plasma production.

Anil Bhardwaj, group leader of the cryostat team, tells IEEE Spectrum: “First plasma will only verify [various] compliances for initial preparation of the plasma. That does not mean that we are achieving fusion.”

That will come another decade or so down the line.

If everything goes to plan, the first deuterium–tritium fusion experiments will be demonstrated by 2035, and will in essence be replicating the fusion reactions that take place in the sun. ITER estimates that for 50 MW of power injected into the tokamak to heat the plasma (up to 150 million degrees Celsius), 500 MW of thermal power for 400- to 600-second periods will be output, a tenfold return (expressed as Q ≥ 10). The existing record as of now is Q = 0.67, held by the JET tokamak.
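The gain target quoted above reduces to simple arithmetic:

```python
# ITER's stated target: inject 50 MW of heating power, get 500 MW of
# fusion power out, sustained for 400 to 600 seconds.
p_in_mw, p_out_mw = 50.0, 500.0

q = p_out_mw / p_in_mw   # fusion gain factor
assert q >= 10           # the "Q >= 10" goal; JET's record stands at Q = 0.67

# Thermal energy released over a single 400-second pulse, in gigajoules.
pulse_energy_gj = p_out_mw * 1e6 * 400 / 1e9
print(q, pulse_energy_gj)  # 10.0 200.0
```

Note that Q counts only thermal power against injected heating power; converting those 500 thermal megawatts into electricity would incur further losses.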

Despite recent progress, there is still a lot of uncertainty around ITER. Critics decry the hyperbole around it, especially of it being a magic-bullet solution to the world’s energy problems, in the words of Daniel Jassby, a former researcher at the Princeton Plasma Physics Lab. His 2017 article explains why “scaling down the sun” may not be the ideal fallback plan.

“In the most feasible terrestrial fusion reaction [using deuterium–tritium fuel], 80% of the fusion output is in the form of barrages of neutron bullets, whose conversion to electrical energy is a dubious endeavor,” he said in an interview. Switching to a different type of reactor based on much weaker fusion reactions might result in less neutron production, but such reactors are also unlikely to produce net energy of any kind.

Delays and mismanagement have also plagued ITER, something that Jassby contends was a result of poor leadership. “There are only a few people in the world who have the technological, administrative and political expertise that allow them to make continuous progress in directing and completing a multinational project,” he said. Bernard Bigot, who took over as director-general five years ago, possesses the requisite skillset, in Jassby’s opinion. At present, ITER is running about six years behind schedule.

Critics of ITER are also concerned about diverting resources from developing existing renewable energies. “The greatest energy issue of our time is not supply, but how to choose among the plethora of existing energy sources for wide-scale deployment,” Jassby said. ITER’s value, however, he said, lies in delinking the fantasy of electricity from fusion energy, thus saving hundreds of billions of dollars in the long run.

Jassby thinks that if successful, ITER will allow physicists to study long-lived, high-temperature fusioning plasmas and the development of neutron sources. There are practical applications for fusion neutrons, he says, such as isotope production, radiography, and activation analysis. He adds that ITER can have significant benefits if technologies developed for it, such as superconducting magnets, new materials, and novel fabrication techniques, find application in other fields.

Philippa Browning, professor of astrophysics at the University of Manchester, believes that only something of the scale of ITER can test how things work in fusion reactors. “It may well be that in future alternative devices turn out to be better, but those advantages could be incorporated into the successor to ITER which will be a demonstration fusion power station… The route to fusion power is slow, [so] we can hope that it will be ready when it is really needed in the second half of this century.” Meanwhile, she added, “it is important that other approaches to fusion are explored in parallel, smaller and more agile projects.”

One of the most impressive things about ITER, Browning said, is the combination of a truly international cooperation pushing at the frontiers in many ways. “Understanding how plasmas interact with magnetic fields is a hugely challenging scientific problem… There are all sorts of scientific and technological spin-offs, as well as the direct contribution to achieving, hopefully, a fusion power station.”

Indian Mobile Service Providers Suspected of Providing Discriminatory Services

India’s Telecom Disputes Settlement and Appellate Tribunal (TDSAT) has granted interim relief to telecom companies Bharti Airtel and Vodafone Idea, allowing them to continue with their premium-service plans. The TDSAT order came on 18 July, exactly a week after the country’s telecom regulatory authority had blocked the two companies from offering better speeds to higher-paying customers, citing net neutrality violations.

“This is not a final determination by the TDSAT,” says Apar Gupta of the Internet Freedom Foundation, a digital liberties organization that has been at the forefront of the fight for online freedom, privacy, and innovation in India. While the Telecom Regulatory Authority of India (TRAI) continues its inquiry, the two providers will not be prevented from rolling out their plans.

The matter was brought to TRAI’s notice on 8 July by a rival mobile service provider, Reliance Jio, which wrote to the regulatory body asking about Airtel’s and Vodafone Idea’s Platinum and RedX plans, respectively. “Before offering any such plans ourselves…we would like to seek the Authority’s views on whether [these] tariff offerings…are in compliance with the extant regulatory framework,” the letter said.

Three days later, TRAI asked for the respective Airtel and Vodafone Idea plans to be blocked while these claims were investigated. It also sent both telcos a 10-point questionnaire related to various elements of their services, seeking clarification on how they defined “priority 4G network” and “faster speeds,” among other things. Following the blocking of the plans, Vodafone Idea approached TDSAT, arguing that TRAI’s order was illegal and arbitrary, considering that their RedX plan had been rolled out over eight months earlier. When contacted for comment on the matter, Vodafone declined, “as the matter is in TDSAT court.” Airtel, meanwhile, has agreed to comply with TRAI’s directive and not take new customers for its Platinum plan until the matter has been fully investigated.

Although the new tariffs from Airtel and Vodafone Idea are being framed as a net neutrality issue by media coverage and in the court of public opinion, strictly speaking they are not, says Nikhil Pahwa, co-founder of Save the Internet, the campaign that played a key role in framing India’s net neutrality rules. “In India, net neutrality regulation covers…whether specific internet services or apps are either being priced differentially or being offered at speeds different from the rest of the Internet.” However, from a consumer perspective, he adds, “I think it is important for the TRAI to investigate these plans because…it is impossible for telecom operators to guarantee speeds for customers. What needs to be investigated is whether speeds are effectively deprecated for a particular set of consumers, because the throughput from a mobile base station is limited.”

Since July 2018, India has had stringent net neutrality regulations in place—possibly among the strongest in the world—at least on paper. Any form of data discrimination is banned; blocking, degrading, slowing down or granting preferential speeds or treatment by providers is prohibited; and Internet service providers stand to lose their licenses if found in violation. This was the result of a massive, public, volunteer-driven campaign since 2015. Save the Internet estimates that over 1 million citizens were part of the campaign at one point or another.

The concept of net neutrality captured public imagination when, in 2014, Airtel decided it would charge extra for VoIP services. The company pulled its plan after public outcry, but the wheels of differential pricing were set in motion. This resulted in TRAI prohibiting discriminatory tariffs for data services in 2016—a precursor to the net neutrality principles adopted two years later. These developments also forced Facebook to withdraw its zero-rated Free Basics service in India.

“We have not seen net neutrality enforcement in India till now in a very clear manner,” says Gupta, adding that TRAI is in the process of coming up with an enforcement mechanism. “They opened a consultation on it, and invited views from people… Right now they’re in the process of making…recommendations to the Department of Telecom, which can then frame them under the Telegraph Act.” The telecom department exercises wider powers under this Act, even though TRAI also has specific powers in administering certain licensing conditions, including quality of service and interconnection.

“[The] internet is built around the idea that all users have equal right to create websites, applications, and services for the rest of the world, and enables innovation because it is a space with infinite competition,” Pahwa says. And net neutrality is at the core of that freedom.

Peer Review of Scholarly Research Gets an AI Boost

In the world of academics, peer review is considered the only credible validation of scholarly work. Although the process has its detractors, evaluation of academic research by a cohort of contemporaries has endured for over 350 years, with “relatively minor changes.” However, peer review may be set to undergo its biggest revolution ever—the integration of artificial intelligence.

Open-access publisher Frontiers has debuted an AI tool called the Artificial Intelligence Review Assistant (AIRA), which purports to eliminate much of the grunt work associated with peer review. Since the beginning of June 2020, every one of the 11,000-plus submissions Frontiers received has been run through AIRA, which is integrated into its collaborative peer-review platform. This also makes it accessible to external users, accounting for some 100,000 editors, authors, and reviewers. Altogether, this helps “maximize the efficiency of the publishing process and make peer-review more objective,” says Kamila Markram, founder and CEO of Frontiers.

AIRA’s interactive online platform, a first of its kind in the industry, has been in development for three years. It performs three broad functions, explains Daniel Petrariu, director of project management: assessing the quality of the manuscript, assessing the quality of peer review, and recommending editors and reviewers. At the initial validation stage, the AI can make up to 20 recommendations and flag potential issues, including language quality, plagiarism, integrity of images, conflicts of interest, and so on. “This happens almost instantly and with [high] accuracy, far beyond the rate at which a human could be expected to complete a similar task,” Markram says.

“We have used a wide variety of machine-learning models for a diverse set of applications, including computer vision, natural language processing, and recommender systems,” says Markram. This includes simple bag-of-words models, as well as more sophisticated deep-learning ones. AIRA also leverages a large knowledge base of publications and authors.
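The simplest of those models, bag-of-words, represents each document as a vector of word counts over a shared vocabulary, discarding word order entirely. A minimal sketch (a toy illustration of the general technique, not AIRA's implementation):

```python
from collections import Counter

def bag_of_words(texts):
    """Turn each document into a vector of word counts over a shared,
    sorted vocabulary; word order is discarded entirely."""
    vocab = sorted({word for text in texts for word in text.lower().split()})
    vectors = []
    for text in texts:
        counts = Counter(text.lower().split())
        vectors.append([counts[word] for word in vocab])
    return vocab, vectors

docs = ["the method is novel", "is the method novel", "the figures need work"]
vocab, vecs = bag_of_words(docs)
print(vocab)
print(vecs[0] == vecs[1])  # True: same words, different order, same vector
```

Vectors like these can feed a simple classifier for tasks such as scoring language quality, while the deep-learning models Markram mentions capture the ordering and context that bag-of-words throws away.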

Markram notes that, to address issues of possible AI bias, “We…[build] our own datasets and [design] our own algorithms. We make sure no statistical biases appear in the sampling of training and testing data. For example, when building a model to assess language quality, scientific fields are equally represented so the model isn’t biased toward any specific topic.” The models’ outputs, along with feedback from domain experts (including errors), are captured and used as additional training data. “By regularly re-training, we make sure our models improve in terms of accuracy and stay up-to-date.”

The AI’s job is to flag concerns; humans take the final decisions, says Petrariu. As an example, he cites image manipulation detection—something AI is super-efficient at but is nearly impossible for a human to perform with the same accuracy. “About 10 percent of our flagged images have some sort of problem,” he adds. “[In academic publishing] nobody has done this kind of comprehensive check [using AI] before,” says Petrariu. AIRA, he adds, facilitates Frontiers’ mission to make science open and knowledge accessible to all.

Crowdsourced Protein Modeling Efforts Focus on COVID-19

Researchers have been banking on millions of citizen-scientists around the world to help identify new treatments for COVID-19. Much of that work is being done through distributed computing projects that utilize the surplus processing power of PCs to carry out various compute-intensive tasks. One such project is Folding@home, which helped model how the spike protein of SARS-CoV-2 binds with the ACE2 receptor of human cells to cause infection. Started at Stanford University in 2000, Folding@home is currently based at the Washington University School of Medicine in St. Louis; it undertakes research into various cancers and neurological and infectious diseases by studying the movement of proteins.

Proteins are made up of a sequence of amino acids that fold into specific structural forms. A protein’s shape is critical in its ability to undertake its specific function. Viruses have proteins that enable them to suppress a host’s immune system, invade cells, and replicate.

Greg Bowman, director of Folding@home, says, “We’re basically building maps of what these viral proteins can do… [The distributed computing network] is like having people around the globe jump in their cars and drive around their local neighborhoods and send us back their GPS coordinates at regular intervals. If we can develop detailed maps of these important viral proteins, we can identify the best drug compounds or antibodies to interfere with the virus and its ability to infect and spread.”

After COVID-19 was declared a global pandemic, Folding@home prioritized research related to the new virus. The number of devices running its software shot up from some 30,000 to over 4 million as a result. Tech behemoths such as Microsoft, Amazon, AMD, and Cisco, among others, have loaned computing power to Folding@home. The European Organization for Nuclear Research (CERN) has freed up 10,000 CPU cores for the project, and Spain’s premier soccer league, La Liga, has chipped in with the supercomputer it otherwise dedicates to fighting piracy.

While Folding@home models how proteins fold, another distributed computing project called Rosetta@home—this one at the University of Washington’s Institute for Protein Design (IPD)—predicts the final folded shape of the protein. Though the projects are quite different, they are complementary.

“A big difference…is that the Rosetta@home distributed computing is…directly contributing to the design of new proteins… These calculations are trying to craft brand new proteins with new functions,” says Ian C. Haydon, science communications manager and former researcher at IPD. He adds that the Rosetta@home community, which comprises about 3.3 million instances of the software, has helped the research team come up with more than 2 million candidate antiviral proteins that recognize the coronavirus’s spike protein and bind very tightly to it. When that happens, the spike is no longer able to recognize or infect a human cell.

“At this point, we’ve tested more than 100,000 of what we think are the most promising options,” Haydon says. “We’re working with collaborators who were able to show that the best of these antiviral proteins…do keep the coronavirus from being able to infect human cells…. [What’s more,] they have a potency that looks at least as good if not better than the best known antibodies.”

There are many possible outcomes for this line of research, Haydon says. “Probably the fastest thing that could emerge… [is a] diagnostic…tool that would let you detect whether or not the virus is present.” Since this doesn’t have to go into a human body, the testing and approval process is likely to be quicker. “These proteins could [also] become a therapy that…slows down or blocks the virus from being able to replicate once it’s already in the human body… They may even be useful as [a] prophylactic.”

The Search for Extraterrestrial Intelligence Gets a Major Upgrade


We’ve all wondered at one point or another if intelligent life exists elsewhere in the universe. “I think it’s very unlikely that we are alone,” says Eric Korpela, an astronomer at the University of California Berkeley’s Search for ExtraTerrestrial Intelligence (SETI) Research Center. “They aren’t right next door, but they may be within a thousand light years or so.”

Korpela is director of the SETI@home project. For more than two decades, the project harnessed the surplus computing power of over 1.8 million computers around the globe to analyze data collected by radio telescopes for narrow-band radio signals from space that could indicate the existence of extraterrestrial technology. On 31 March 2020, SETI@home stopped putting new data in the queue for volunteers’ computers to process, but it’s not the end of the road for the project.

Now begins the group’s next phase. “We need to sift through the billions of potential extraterrestrial signals that our volunteers have found and find any that show signs of really being extraterrestrial,” says Korpela. That task is difficult, he adds, because humans “make lots of signals that look like what we would expect to see from E.T.”
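The physics behind such a search can be sketched in a few lines: a narrow-band carrier piles its energy into a single frequency bin of a Fourier transform, so it stands out sharply against broadband noise. The toy example below is my own illustration of that idea, not the actual SETI@home pipeline—the sample rate, detection threshold, and synthetic 1,420 Hz tone (a playful nod to the 1,420 MHz hydrogen line that real searches watch closely) are all assumptions for demonstration.

```python
import numpy as np

def detect_narrowband(samples, sample_rate, threshold=50.0):
    """Flag FFT bins whose power stands far above the noise floor.

    A narrow-band carrier concentrates its energy in one bin, while
    broadband noise spreads evenly across all of them.
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    noise_floor = np.median(spectrum)  # robust estimate of typical bin power
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    hits = np.flatnonzero(spectrum > threshold * noise_floor)
    return [(freqs[i], spectrum[i] / noise_floor) for i in hits]

# Synthetic "work unit": one second of white noise plus a faint tone
# at 1,420 Hz, far too weak to see in the raw time series.
rng = np.random.default_rng(0)
rate = 8192
t = np.arange(rate) / rate
samples = rng.normal(0, 1, rate) + 0.2 * np.sin(2 * np.pi * 1420 * t)
hits = detect_narrowband(samples, rate)
```

The hard part, as Korpela notes, is not finding candidates like this but ruling out the human-made transmissions that look exactly the same.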

Could Airships Rise Again?


Transportation produces about one-fourth of global anthropogenic carbon emissions. Maritime shipping accounts for 3 percent of that total, a share expected to grow over the next three decades even as the shipping industry actively seeks greener alternatives and develops near-zero-emission vessels.

Researchers with the International Institute for Applied Systems Analysis (IIASA) in Austria recently explored another potential solution: the return of airships to the skies. Airships rely on jet stream winds to propel them forward to their destinations. They offer clear advantages over cargo ships in terms of both efficiency and avoided emissions. Returning to airships, says Julian Hunt, a researcher at the IIASA and lead author of the new study, could “ultimately [increase] the feasibility of a 100 percent renewable world.”

Today, world leaders are meeting in New York for the UN Climate Action Summit to present plans to address climate change. Already, average land and sea surface temperatures have risen to approximately 1 degree C above pre-industrial levels. If the current rate of emissions remains unchecked, the Intergovernmental Panel on Climate Change estimates that by 2052, temperatures could rise by up to 2 degrees C. At that point, as much as 30 percent of Earth’s flora and fauna could disappear, wheat production could fall by 16 percent, and water could become scarcer.

According to Hunt and his collaborators, airships could play a role in cutting future anthropogenic emissions from the shipping sector. Jet streams flow in a westerly direction with an average wind speed of 165 kilometers per hour (km/h). On these winds, a lighter-than-air vessel could travel around the world in about two weeks (while a ship would take 60 days) and require just 4 percent of the fuel consumed by the ship, Hunt says.
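Those figures pass a quick sanity check. Assuming a great-circle route of roughly 40,000 km (my round number, not the study's route modeling), drifting at the quoted average jet-stream speed works out to about 10 days of pure travel time:

```python
# Back-of-envelope check of the jet-stream travel time quoted above.
# The 40,000 km circumnavigation distance is an assumption of mine;
# the 165 km/h average wind speed is the figure from the article.
EARTH_CIRCUMFERENCE_KM = 40_000
JET_STREAM_SPEED_KMH = 165

hours = EARTH_CIRCUMFERENCE_KM / JET_STREAM_SPEED_KMH
days = hours / 24
print(f"{days:.1f} days")  # roughly 10 days of uninterrupted drifting
```

That is comfortably consistent with the two-week estimate once non-ideal routing and slower segments of the jet stream are accounted for.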

Rooftop Solar Refinery Produces Carbon-Neutral Fuels


Scientists in Switzerland have demonstrated a technology that can produce kerosene and methanol from solar energy and air

Scientists have searched for a sustainable aviation fuel for decades. Now, with emissions from air traffic increasing faster than carbon-offset technologies can mitigate them, environmentalists worry that even with new fuel-efficient technologies and operations, emissions from the aviation sector could double by 2050.

But what if, by 2050, all fossil-derived jet fuel could be replaced by a carbon-neutral one made from sunlight and air?

In June, researchers at the Swiss Federal Institute of Technology (ETH) in Zurich demonstrated a new technology that creates liquid hydrocarbon fuels from thin air—literally. A solar mini-refinery—in this case, installed on the roof of ETH’s Machine Laboratory—concentrates sunlight to create a high-temperature (1,500 degrees C) environment inside the solar thermochemical reactor.

Delhi Rolls Out a Massive Network of Surveillance Cameras


The state government says closed-circuit TVs will help fight crime, but digital liberties activists are concerned about the project’s lack of transparency

In India, the government of Delhi is rolling out an ambitious video surveillance program as a crime-prevention measure. Technicians will install more than a quarter million closed-circuit TV (CCTV) cameras near residential and commercial properties across the city, and in schools. A central monitoring system is expected to take care of behind-the-scenes logistics, though authorities have not shared details on how the feeds will be monitored.

After delays due to political and legal wrangles, the installations began on 7 and 8 July. The first cameras to go up in a residential area were installed in Laxmi Bai Nagar, at a housing society for government employees, and at the upmarket Pandara Road in New Delhi. When the rollout is complete, there will be an average of 4,000 cameras in each of Delhi’s 70 assembly constituencies, for a total of around 280,000 cameras.

In early 2020, the National Capital Territory of Delhi (usually just called ‘Delhi’), which includes New Delhi, the capital of India, will vote to elect a new state assembly. Lowering the crime rate is a key election issue for the incumbent Aam Aadmi Party (literally, Common Man’s [sic] Party). The party has promised that the CCTV cameras will deter premeditated crime and foster a semblance of order among the general public.

Cyberespionage Collective Platinum Targets South Asian Governments


Kaspersky says the group used an HTML-based exploit that’s almost impossible to detect

Following a trail of suspicious digital crumbs left in cloud-based systems across South Asia, Kaspersky Lab’s security researchers have uncovered a steganography-based attack carried out by a cyberespionage group called Platinum. The attack targeted government, military, and diplomatic entities in the region.

Platinum was active years ago but was believed to have since been disarmed. Kaspersky’s cyber-sleuths, however, now suspect that Platinum might have been operating covertly since 2012, through an “elaborate and thoroughly crafted” campaign that allowed it to go undetected for a long time.

The group’s latest campaign harnessed a classic hacking technique known as steganography. “Steganography is the art of concealing a file of any format or communication in another file in order to deceive unwanted people from discovering the existence of [the hidden] initial file or message,” says Somdip Dey, a U.K.-based computer scientist at the University of Essex and the Samsung R&D Institute with a special interest in steganography.
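In its simplest form, the idea Dey describes can be shown with a least-significant-bit scheme: each bit of a secret message replaces the lowest bit of a carrier byte, leaving the carrier nearly indistinguishable from the original. The sketch below is a generic illustration of my own—Platinum's campaign reportedly embedded data in HTML, not in raw bytes as here.

```python
def hide(carrier: bytes, secret: bytes) -> bytes:
    """Embed each bit of `secret` in the least-significant bit of a
    carrier byte. The carrier must be at least 8x the secret's length."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def reveal(carrier: bytes, length: int) -> bytes:
    """Reassemble `length` bytes from the carrier's low-order bits."""
    secret = bytearray()
    for j in range(length):
        byte = 0
        for i in range(8):
            byte |= (carrier[j * 8 + i] & 1) << i
        secret.append(byte)
    return bytes(secret)

cover = bytes(range(256))  # stand-in for image, audio, or document data
stego = hide(cover, b"attack at dawn")
message = reveal(stego, 14)
```

Because no carrier byte changes by more than one, the altered file looks statistically almost identical to the original—which is precisely why attacks like Platinum's are so hard to detect.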