All posts by Dexter Johnson

AI Recodes Legacy Software to Operate on Modern Platforms

Post Syndicated from Dexter Johnson original

Last year, IBM demonstrated how AI can perform the tedious job of software maintenance by updating legacy code. Now Big Blue has introduced AI-based methods for re-coding old applications so that they can run on today’s computing platforms.

The latest IBM initiatives, dubbed Mono2Micro and Application Modernization Accelerator (AMA), give app architects new tools for updating legacy applications and extracting new value from them. These initiatives represent a step towards a day when AI could automatically translate a program written in COBOL into Java, according to Nick Fuller, director of hybrid cloud services at IBM Research.

Fuller cautions that these latest AI approaches are currently capable only of breaking the legacy code of non-modular monolithic programs into standalone microservices. Translating the programming language itself remains a separate step: while the AMA toolkit is in fact designed to modernize COBOL, at this point it provides only an incremental step in the modernization process, according to Fuller. “Language translation is a fundamental challenge for AI that we’re working on to enable some of that legacy code to run in a modern software language,” he added.

In the meantime, IBM’s latest AI tools offer some new capabilities. Mono2Micro first analyzes the old code to reveal the hidden connections within it, such as the multiple components in the underlying business logic that make numerous calls to one another. These connections would be extremely difficult and time consuming for application architects to uncover on their own.

Mono2Micro leverages AI clustering techniques to group similar code together, revealing more clearly how groups of code interact. Once Mono2Micro ingests the code, it analyzes the source and object code both statically (analyzing the program before it runs) and dynamically (analyzing the program while it’s running).
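IBM hasn’t published Mono2Micro’s internals here, but the clustering idea can be illustrated with a toy sketch: represent each class in the monolith by a vector of call counts, and group classes whose call profiles are similar. The class names, numbers, and similarity threshold below are all invented for illustration, not taken from IBM’s tool.

```python
from math import sqrt

# Hypothetical call-count matrix: rows are classes in the monolith,
# columns count how often each class invokes a set of shared routines.
call_profiles = {
    "OrderService":   [9, 8, 0, 0],
    "OrderValidator": [8, 9, 1, 0],
    "InvoicePrinter": [0, 1, 7, 9],
    "ReportWriter":   [0, 0, 9, 8],
}

def cosine(a, b):
    """Cosine similarity between two call-count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Greedy single-link grouping: classes with similar call profiles
# land in the same candidate microservice.
THRESHOLD = 0.8
clusters = []
for name, vec in call_profiles.items():
    for cluster in clusters:
        if any(cosine(vec, call_profiles[m]) >= THRESHOLD for m in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # two candidate microservices emerge
```

Here the order-handling classes cluster together and the reporting classes cluster together, which is the kind of partition an architect would otherwise have to discover by hand.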

The tool then refactors monolithic Java-based programs and their associated business logic and user interfaces into microservices. This refactoring of the monolith into standalone microservices with specific functions minimizes the connections that existed in the software when it was a monolithic program, changing the application’s structure without altering its external behavior.

The objective of the AMA toolkit is to both analyze and refactor legacy applications written in even older languages (COBOL, PL/I). For the AMA toolkit, static analysis of the source code coupled with an understanding of the application structure is used to create a graph that represents the legacy application. When used in conjunction with deep-learning methods, this graph-based approach facilitates data retention as AMA goes through deep-learning processes.

IBM’s AI strategy addresses the key challenges for machine learning when the data input is code and the function is analysis: volume and multiple meanings. Legacy, mission-critical applications are typically hundreds of thousands to millions of lines of code. In this context, applying machine learning (ML) techniques to such large volumes of data can be made more efficient through the concept of embeddings.

These embedding layers represent a way to translate the data into numerical values. The power of embeddings comes from them mapping a large volume of code with multiple possible meanings to numerical values. This is what is done, for example, in translating natural human language to numerical values using “word” embeddings. It is also done in a graph context as it relates to code analysis.
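As a rough illustration of the idea (not IBM’s actual embedding model), a token embedding maps each code token to a vector of numbers; a snippet of any length can then be reduced to one fixed-size vector. The vocabulary, dimension, and random vectors below are stand-ins for what a real system would learn during training.

```python
import random

random.seed(0)

# Toy vocabulary of COBOL-like tokens; real systems use thousands
# of tokens and learn these vectors rather than drawing them randomly.
vocab = ["MOVE", "PERFORM", "IF", "COMPUTE", "DISPLAY"]
DIM = 8
embedding = {tok: [random.gauss(0, 1) for _ in range(DIM)] for tok in vocab}

def embed(tokens):
    """Map a token sequence to one fixed-size vector by averaging."""
    vecs = [embedding[t] for t in tokens]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

snippet = ["IF", "COMPUTE", "MOVE"]
v = embed(snippet)
print(len(v))  # one 8-dimensional vector, regardless of snippet length
```

The payoff is exactly what Fuller describes: however large or ambiguous the source code, downstream machine-learning layers only ever see compact numerical vectors.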

“Embedding layers are tremendous because without them you would struggle to get anything approaching an efficiently performing machine-learning system,” said Fuller.

He added that in the case of code analysis, the ML system gets better at recommending microservices for the refactored legacy application, with the recommended microservices replicating the original application’s functionality.

Fuller noted: “Once you get to that point, you’re not quite home free, but you’re essentially 70 percent done in terms of what you’re looking to gain, namely a mission critical application that is refactored into a microservices architecture.”

IBM Makes Tape Storage Better Than Ever

Introduced by strains of the Strauss waltz that served as the soundtrack for “2001: A Space Odyssey,” IBM demonstrated a new world record in magnetic tape storage capabilities in a live presentation this week from its labs in Zurich, Switzerland.

A small group of journalists looked on virtually as IBM scientists showed off a 29-fold increase in the storage capacity of its current data-tape cartridge, from 20 terabytes (TB) to 580 TB. That’s roughly 32 times the capacity of LTO-Ultrium (Linear Tape-Open, version 9), the latest industry standard in magnetic tape products.

While these figures may sound quite impressive, some may wonder whether this story might have mistakenly come from a time capsule buried in the 1970s. But the fact is tape is back, by necessity.

Magnetic tape storage has been undergoing a renaissance in recent years, according to Mark Lantz, manager of CloudFPGA and tape technologies at IBM Zurich. This resurgence, Lantz argues, has been driven by the convergence of two trends: exponential data growth and a simultaneous slowing of areal density growth in hard-disk drives (HDDs).

In fact, the growth rate of HDD areal density has slowed down to under an 8% compound annual growth rate over the last several years, according to Lantz. This slowdown is occurring while data is growing worldwide to the point where it is expected to hit 175 zettabytes by 2025, representing a 61% annual growth rate.

This lack of HDD scaling has resulted in the price per gigabyte of HDD rising dramatically. Estimates put HDD bytes at four times the cost of tape bytes. This creates a troublesome imbalance at an extremely inopportune moment: just as the amount of data being produced is increasing exponentially, data centers can’t afford to store it.
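The four-to-one cost ratio cited above makes the economics easy to sketch. The absolute dollar-per-gigabyte figures and data volume below are illustrative assumptions, not numbers from IBM or Lantz:

```python
# Back-of-the-envelope cold-storage economics, assuming (per the
# estimate above) HDD bytes cost four times as much as tape bytes.
tape_cost_per_gb = 0.005               # assumed $/GB for tape
hdd_cost_per_gb = 4 * tape_cost_per_gb # implied $/GB for HDD

cold_data_gb = 10_000_000              # 10 PB of rarely accessed data
hdd_bill = cold_data_gb * hdd_cost_per_gb
tape_bill = cold_data_gb * tape_cost_per_gb

# Savings from shifting this cold tier from disk to tape:
print(round(hdd_bill - tape_bill))
```

At any plausible price point, the four-fold ratio means moving a cold tier to tape cuts that tier’s storage bill by 75 percent, which is why the “cold data” argument in the next paragraph carries so much weight.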

Fortunately a large portion of the data being stored is what’s termed “cold,” meaning it hasn’t been accessed in a long time and is not needed frequently. These types of data can tolerate higher retrieval latencies, making magnetic tape well suited for the job.

Magnetic tape is also inherently more secure from cybercrime, requires less energy, provides long-term durability, and has a lower cost per gigabyte than HDD. Because of these factors, IBM estimates that more than 345,000 exabytes (EB) of data already reside in tape storage systems. In the midst of these market realities, IBM believes that its record-setting demonstration will enable tape to meet its scaling roadmap for the next decade.

This new record caps a 15-year journey that IBM has undertaken with Fujifilm to continuously push the capabilities of tape technology. After setting six records since 2006, IBM and Fujifilm achieved this latest leap by improving three main areas of tape technology: the tape medium itself; a new tape-head technology, with a novel use of an HDD detector for reading the data; and servo-mechanical technologies that keep the tape tracking precisely.

For the new tape medium, Fujifilm set aside the current industry standard of barium ferrite particles and incorporated smaller strontium ferrite particles in a new tape coating, allowing for higher density storage on the same amount of tape.

With Fujifilm’s strontium ferrite particulate magnetic tape in hand, IBM developed a new low-friction tape head technology that could work with the very smooth surfaces of the new tape. IBM also used an ultra-narrow 29 nm wide tunnel-magnetoresistance (TMR) read sensor that enables reliable detection of data written on the strontium ferrite media at a linear density of 702 kilobits per inch.

IBM also developed a family of new servo-mechanical technologies for the system. This suite of technologies measures the position of the tape head on the tape and then adjusts that position so the data is written in the correct location; transducers then follow the center of the tracks during read-back. In aggregate, these new servo technologies made head positioning possible at a world-record accuracy of 3.2 nm, all while the tape streams over the read head at a speed of about 15 km/h.
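Taken together, the demo numbers imply an impressive per-track data rate. Assuming the quoted linear density is 702 kilobits per inch (kbpi is the unit usually used for tape recording densities), a back-of-the-envelope calculation gives:

```python
# Rough per-track throughput implied by the demo figures.
INCH_M = 0.0254                                 # meters per inch
linear_density_bits_per_m = 702_000 / INCH_M    # bits per meter of tape
tape_speed_m_s = 15_000 / 3600                  # 15 km/h in m/s

# Bits passing the read sensor per second on one track:
per_track_rate_bits_s = linear_density_bits_per_m * tape_speed_m_s
print(round(per_track_rate_bits_s / 1e6, 1), "Mbit/s per track")
```

That works out to roughly 115 Mbit/s per track before channel-coding overhead; real drives multiply this by reading many tracks in parallel.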

Alberto Pace, head of data storage for the European Organization for Nuclear Research (CERN), put the development in context: “Only 20 years ago, all the data produced by the old Large Electron-Positron (LEP) collider had to be held in the big data center. Today all that data from the old LEP fits into a cabinet in my office. I expect that in less than 20 years we will have all the data from the Large Hadron Collider that now resides in our current data center fitting into a small cabinet in my office.”

Plasmonic Nanomotors Move the Field in New Directions

Over the years, light has been the key to powering and directing the movement of nanomotors. But precisely controlling that movement and actuating motion lateral to a beam of light has remained a challenge.

Now researchers at the Institute of Industrial Science at the University of Tokyo are using light to drive objects with precision, including motion lateral to the light beam.

Of course, light, like any particle, carries momentum. That means light can transfer its momentum too, as in the work that won the 2018 Nobel Prize in Physics for the development of so-called optical tweezers.

The University of Tokyo researchers rely on a technology called plasmonics—exploiting waves of electrons (a.k.a. surface plasmons) that are triggered when photons strike a metal surface.

Earlier this year, researchers from Vanderbilt University in the U.S. extended the utility of those Nobel-winning optical tweezers by using plasmonics to enable handling of delicate biomolecules.

Now the University of Tokyo research, published in the journal Science Advances, harnesses plasmonics to move the nanomotor in directions other than just that of the incident laser beam.

Researchers placed metal nanoparticles in strategic locations to exploit the scattering of light. This makes it possible for the laser plus nanoparticles to move objects laterally as well as along the direction the laser light is propagating. 

This technique provides precise control of the optical force generated from light’s linear momentum, according to Yoshito Tanaka, an assistant professor at the University of Tokyo, and co-author of the research.

“One of the strong points of our nanomotor is fast and precise control, which is essential for lab-on-a-chip applications,” said Tanaka.

Moreover, Tanaka argues that the applications of this nanomotor are not limited to lab-on-a-chip devices.

“By taking advantage of fast and precise control of the optical force, our nanomotor could be used for the measurement of force on protein molecular motors and their motion control,” Tanaka added.

Tanaka also believes that their research could further advance applications of light-driven nanomotors for assembling and powering nanomachines and nanorobots.

The Technology Readiness Level (TRL) of this work and most light-powered nanomotors at this time is still Level 4: validation in laboratory environment, according to Tanaka. He argues that in general the main hurdle for nanomotors to move into the industrial level is the cost of fabrication.

“The relative weakness of our approach is a more expensive fabrication of our nanomotors than others,” conceded Tanaka. “But, if our nanomotors can be fabricated by nanoimprint lithography technology, this weakness should be solved.”

IBM Toolkit Aims at Boosting Efficiencies in AI Chips

In February 2019, IBM Research launched its AI Hardware Center with the stated aim of improving AI computing efficiency by 1,000 times within the decade. Over the last two years, IBM says it has been meeting this ambitious goal: it claims to have improved AI computing efficiency by two-and-a-half times per year.

Big Blue’s big AI efficiency push comes in the midst of a boom in AI chip startups. Conventional chips often choke on the huge amount of data shuttling back and forth between memory and processing. And many of these AI chip startups say they’ve built a better mousetrap. 

There’s an environmental angle to all this, too. Conventional chips waste a lot of energy performing AI algorithms inefficiently, which can have deleterious effects on the climate.

Recently IBM reported two key developments on their AI efficiency quest. First, IBM will now be collaborating with Red Hat to make IBM’s AI digital core compatible with the Red Hat OpenShift ecosystem. This collaboration will allow for IBM’s hardware to be developed in parallel with the software, so that as soon as the hardware is ready, all of the software capability will already be in place.

“We want to make sure that all the digital work that we’re doing, including our work around digital architecture and algorithmic improvement, will lead to the same accuracy,” says Mukesh Khare, vice president of IBM Systems Research. 

Second, IBM and the design automation firm Synopsys are open-sourcing an analog hardware acceleration kit — highlighting the capabilities analog AI hardware can provide.

Analog AI is aimed at the so-called von Neumann bottleneck, in which data gets stuck between computation and memory. Analog AI addresses this challenge by performing the computation in the memory itself.
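IBM’s actual toolkit provides detailed device models, but the core idea of in-memory computation can be sketched simply: the weight matrix lives in memory as device conductances, and the matrix-vector product happens where the weights are stored, at the price of analog noise. The weights, inputs, and noise level below are illustrative assumptions, not figures from IBM’s kit.

```python
import random

random.seed(1)

# Weights stored "in memory" as crossbar conductances; the product
# happens in place via Ohm's and Kirchhoff's laws. We model that as
# an ordinary matmul plus per-device read/programming noise.
weights = [[0.5, -0.2], [0.1, 0.8]]   # conductance-encoded weights
x = [1.0, 2.0]                        # input voltages

NOISE = 0.01  # assumed relative device variability

def analog_matvec(w, x):
    out = []
    for row in w:
        acc = 0.0
        for wij, xj in zip(row, x):
            # each device contributes current = conductance * voltage,
            # perturbed by analog noise
            acc += wij * xj * (1 + random.gauss(0, NOISE))
        out.append(acc)
    return out

ideal = [sum(wij * xj for wij, xj in zip(row, x)) for row in weights]
noisy = analog_matvec(weights, x)
print(ideal, noisy)  # the analog result is close to, not equal to, the ideal
```

The design trade is visible even in this toy: no data shuttles between a memory and a processor, but every result carries a small analog error that the surrounding AI algorithms must tolerate.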

The Analog AI toolkit will be available to startups, academics, students, and businesses, according to Khare. “They can all … learn how to leverage some of these new capabilities that are coming down the pipeline. And I’m sure the community may come up with even better ways to exploit this hardware than some of us can come up with,” Khare says.

A big part of this toolkit will be the design tools provided by Synopsys. 

“The data movement is so vast in an AI chip that it really necessitates that memory has to be very close to its computing to do these massive matrix computations,” says Arun Venkatachar, vice president of Artificial Intelligence & Central Engineering at Synopsys. “As a result, for example, the interconnects become a big challenge.”

He says IBM and Synopsys have worked together on both hardware and software for the Analog AI toolkit. 

“Here, we were involved through the entire stack of chip development: materials research and physics, to the device and all the way through verification and software,” Venkatachar says. 

Khare says for IBM this holistic approach translated to fundamental device and materials research, chip design, chip architecture, system design software and emulation, as well as a testbed for end-users to validate performance improvements.

“It’s important for us to work in tandem and across the entire stack,” Khare adds. “Because developing hardware without having the right software infrastructure is not complete, and the other way around as well.”

The Lithium-Ion Battery With Built-In Fire Suppression

If there are superstars in battery research, you would be safe in identifying at least one of them as Yi Cui, a scientist at Stanford University, whose research group over the years has introduced some key breakthroughs in battery technology.

Now Cui and his research team, in collaboration with SLAC National Accelerator Laboratory, have added some exciting new capabilities to lithium-ion batteries, based on a new polymer material used in the batteries’ current collectors. The researchers claim the new current-collector design increases efficiency in Li-ion batteries and reduces the risk of fires associated with them.

Current collectors are thin metal foils that distribute current to and from electrodes in batteries. Typically these metal foils are made from copper. Cui and his team redesigned these current collectors so that they are still largely made from copper but are now surrounded by a polymer.

The Stanford team claim in their research published in the journal Nature Energy that the polymer makes the current collector 80 percent lighter, leading to an increase in energy density from 16 to 26 percent. This is a significant boost over the average yearly increase of energy density for Li-ion batteries, which has been stuck at 5 percent a year seemingly forever.

This method of lightening the batteries is a novel approach to boosting energy density. Over the years we have seen many attempts to increase energy density by enlarging the surface area of electrodes through new electrode materials, such as nanostructured silicon in place of graphite. But while increased surface area may increase charge capacity, energy density is calculated as the total energy divided by the total weight of the battery.

The Stanford team calculated the 16 to 26 percent increase in the gravimetric energy density of their batteries by replacing commercial copper and aluminum current collectors (8.06 mg/cm2 for copper and 5.0 mg/cm2 for aluminum) with their polymer-based current collectors (1.54 mg/cm2 for the polymer-copper material and 1.05 mg/cm2 for the polymer-aluminum).
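Those areal-density figures are consistent with the roughly 80-percent-lighter claim, which a quick calculation confirms:

```python
# Weight savings implied by the areal densities quoted above (mg/cm^2).
cu, al = 8.06, 5.0              # conventional copper and aluminum foils
poly_cu, poly_al = 1.54, 1.05   # polymer-based replacements

cu_savings = 1 - poly_cu / cu   # fraction of collector weight removed
al_savings = 1 - poly_al / al

print(f"copper collector: {cu_savings:.0%} lighter")    # ~81%
print(f"aluminum collector: {al_savings:.0%} lighter")  # ~79%
```

Both collectors shed about four-fifths of their weight, and since collectors are pure “dead weight” (as Ye explains below), that loss flows directly into gravimetric energy density.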

“Current collectors don’t contribute to the total energy but contribute to the total weight of battery,” explained Yusheng Ye, a researcher at Stanford and co-author of this research. “That’s why we call current collectors ‘dead weight’ in batteries, in contrast to ‘active weight’ of electrode materials.”

By reducing the weight of the current collector, the energy density can be increased even when the total energy of the battery is almost unchanged. Still, the increased energy density may not entirely alleviate the so-called “range anxiety” associated with electric vehicles: the fear of running out of power before reaching the next charging location. While the press release claims this work will extend the range of electric vehicles, Ye noted that the specific energy improvement applies to the battery itself, so it is likely to yield only around a 10 percent improvement in the range of an electric vehicle.

“In order to improve the range from 400 miles to 600 miles, for example, more engineering work would need to be done; the active parts of the batteries will need to be addressed together with our ultra-light current collectors,” said Ye.

Beyond improved energy density, the polymer-based current collectors are expected to help reduce the fires associated with Li-ion batteries. Traditional copper current collectors don’t contribute to battery combustion on their own; combustion issues in Li-ion batteries arise when the electrolyte and separator are used outside the recommended temperature and voltage windows.

“One of the key innovations in our novel current collector is that we are able to embed fire retardant inside without sacrificing the energy density and mechanical strength of the current collector,” said Ye. “Whenever the battery has combustion issues, our current collector will instantaneously release the fire retardant and extinguish the fire. Such function cannot be achieved with traditional copper or aluminum current collector.”

The researchers have patented the technology and are in discussions with battery manufacturers about commercialization. Cui and his team have already worked out some of the costs of adopting the polymer, and they appear attractive. According to Ye, the polymer composite current collector costs around $1.30 per square meter, slightly less than copper foil at around $1.40 per square meter. With these encouraging numbers, Ye added: “We are expecting industry to adopt this technology within the next few years.”

IBM Watson’s Next Challenge: Modernize Legacy Code

IBM’s initiatives in artificial intelligence have served as bellwethers for how AI helps us re-imagine computing, and for how it can transform the industries to which it is applied. There was no clearer demonstration of AI’s capability than when IBM’s Watson supercomputer defeated the human champions on the game show Jeopardy!

The years that followed that success back in 2011, however, were years of struggle for IBM to find avenues for Watson to turn its game-show success into commercially viable problem solving. For example, attempts to translate that problem-solving capability to medical diagnosis have been fraught with challenges.

That said, IBM’s core business is enterprise-wide information technology, and it is this arena that the company has targeted for its next AI initiative: IBM Watson AIOps. Launched in May, the initiative aims to provide a broad range of new AI-powered capabilities and services to help enterprises automate various aspects of IT development, infrastructure, and operations.

A key focus is getting AI to “speak” code. The market demand for AI that communicates in code is immense: according to Gartner, AI augmentation will recover 6.2 billion hours of worker productivity in 2021. Teaching AI code could streamline and automate many of the IT processes that currently require time-consuming manual oversight and troubleshooting, such as security, systems management, and the administration of multiple cloud environments.

To discuss the challenge of getting AI to translate code, and how it fits into the decade-long story of IBM’s Watson, we talked to Ruchir Puri, Chief Scientist at IBM Research, who has been one of the architects behind Watson going back to its success on Jeopardy!

Q: It would seem that being able to have computers communicate with one another, in code, wouldn’t be such a big challenge. Presumably, it’s a language that computers use to communicate with each other. So why has it been such a challenge for AI to do this? And how are you overcoming those challenges?

A: This is a question that we’ve asked ourselves as well. And intuitively, one would hypothesize that it should be easy for AI. The problem actually lies in the not-so-successful endeavors of rule-based systems. So, take programming language translation as an example. If it was easy enough and the rule-based systems would work, then early programming languages like COBOL and others, that have long since seen their heydays, would have been converted by now. So, what is stopping this?

What is stopping this is really—just like human languages—programming languages have context. And the meaning of a particular statement on a line actually is related to what occurs before, and deriving that context and making the translation, just like human languages, takes a lot of effort and time and resources. And the larger the program gets, the harder it gets to translate it over, even more so than human languages. While in human language, the context may be limited to that paragraph or maybe that particular document, here the context can actually relate to multiple libraries and other services that are related to that particular program. So, I think the difficulty really lies in the context, which is preventing the rule-based systems from being successful.

I’ll make it even more concrete. Roughly speaking, rule-based systems will be successful in translating somewhere between 50 and 60 percent of a program. It is true that part of the program can be translated reasonably well; however, that still leaves half of the program to be translated manually, and that remaining 50 percent is the hardest part, typically involving very complex rules. And that’s exactly where AI kicks in, because it can act like humans. It can derive the context with sequence-to-sequence models, just as we have applied in human languages, and really piggyback on a lot of the research being done on natural language processing to make a more significant dent in that technology.

Q: If an IT group at a large organization were to migrate their COBOL-based data into an open-source platform, how long would that typically take? And how much faster is AI at doing those kinds of jobs?

A: So, let me give you some examples. One of the large automobile companies we were working with had a mission-critical application of roughly a million lines of code, all written in old versions of Java technology.

They wanted to build micro-services out of that and build a true cloud-native application. They worked on this for more than a year with some of their experts. And keep in mind, in many of these applications, the architects and the programmers and the coders who wrote these applications are long gone, so that expertise is hard to find.

With our system—we call it the Accelerator for Application Modernization with AI—we were able to reduce the time to roughly around six weeks. And a lot of that was just setting up the data. It wasn’t the execution of the program. So, you are looking at almost an order of magnitude improvement.

Q: One of the challenges with Watson AI—in particular in healthcare—has been overcoming the difference between the way machines learn and the way people work. Will the advancement in enabling AI to speak code, help address those kinds of issues?

A: Let me first step back and say having machines able to speak the language of code will help in every area. However, healthcare is one of the industries that presents some unique challenges in the areas of security and vulnerability and compliance, where it will absolutely be helpful.

The use of AI in healthcare is still evolving, and it’s a journey. To expect AI to be able to give the right answer in all diagnosis scenarios is expecting too much. The technology has not reached that level yet. However, that’s precisely why we say it’s more about augmenting the healthcare experts than it is about replacing in many ways.

It’s about being able to give you the analysis that you may not have seen before, presenting it to the physician, and seeing if it enhances the notion of that particular scenario they are looking at. This is because AI can look at literally thousands and thousands of documents and on-going drug studies, analyze them, find the insights, and summarize that information and present it in the right way to augment the experts.

We have documented cases, in particular, for example, leukemia case studies done in Japan and North Carolina, where it was proven that it has been very helpful. I can certainly say it’s not about presenting the right diagnosis all the time; it’s about augmenting the experts, in this particular case, physicians, with the right information.

AI for code I would say has less relevance in this particular scenario because it is more about patient data. It is more about the documents that have existed or the ongoing drug trials and studies in a fast-evolving medical literature.

Q: Do you think enabling AI to speak in code will soon lead to the obsolescence of human coders?

A: No, I don’t believe this will occur. Does automation lead to elimination of demand for human skills? I would say if anything, it increases it. We’ve seen this throughout history. From the very beginning with the invention of the wheel, when somebody was still carrying things on their back, society has progressed forward. And in this particular case, I would say, use of AI will reduce and eliminate the need for cumbersome tasks like code reviews, testing, migration, translation, search and others, but AI is not going to invent the algorithm for you.

That critical human ingenuity remains key in creating a totally new way, a re-imagined way, of solving a problem. Those kinds of creative breakthroughs are very far away for AI, which is exactly what humans are good at. But the use of AI for automating manually intensive tasks does leave a lot more time for people to think through the right way of solving a problem: the structure, the algorithm, and the overall solution, letting machines do the rest of the job.

So, if anything, I would say, it’s going to make our life and the solutions that the programmers are developing a lot better, because it gives them a lot more time to really focus on the issues that make the biggest difference. This is in contrast to now, when the majority of our time goes into translating code, refactoring it into microservices for hybrid cloud and cloud-native applications, and testing it.

Q: How do you see this current work with Watson AIOps fitting into the overall context of Watson, moving from Watson’s Jeopardy! initiative to where it is now in tackling large enterprise IT issues?

A: IBM has been very forward thinking when it comes to AI. If you look at a lot of companies out there, everybody is applying AI to their own expertise, their own mainline business. For instance, Google is applying it to its search engine and targeted advertising, Facebook to social networks, Amazon to recommendation systems, and so on.

The business that IBM lives and breathes every day is information technology applied to enterprises. So, information technology is what we do, and AI applied to information technology has and will have a transformative impact on enterprises. This is exactly the reason we started this AI initiative: to apply AI to IT, from modernizing legacy applications for hybrid cloud to intelligently managing those applications with Watson AIOps. Underlying all of this are our AI innovations and breakthrough research milestones like Watson’s Jeopardy! win and Project Debater. AI innovations in understanding human language can be leveraged to understand programming languages. It’s about that continuum and transferring that learning into newer domains like software engineering.

Enterprises are on a journey to transform their information technology for hybrid cloud, and that journey has many challenges. I would say most clients are only 20 to 30 percent of the way there; 70 percent of the journey still remains, which is exactly what AI can help accelerate: advising enterprises on that journey, moving and building their legacy applications for hybrid cloud, and intelligently managing those applications.

I believe AI is on a path to disrupt a lot of areas, and help a lot of areas. But the intersection of AI and software engineering is really interesting. It is said that software is eating the world. In this case, I would say AI is eating software. AI is really disrupting software engineering itself.

Here Comes the Internet of Plastic Things, No Batteries or Electronics Required

When technologists talk about the “Internet of Things” (IoT), they often gloss over the fact that all these interconnected things need batteries and electronics to carry out the job of collecting and processing data while they’re communicating to one another. This job is made even more challenging when you consider that many of the objects we would like to connect are made from plastic and do not have electronics embedded into them.

Now researchers at the University of Washington have devised a way of using 3D-printed plastic to create objects that communicate with smartphones or other Wi-Fi devices without the need for batteries or electronics.

This research builds on previous work at the University of Washington dating back to 2014 in which another research team employed battery-less chips that transmit their bits by either reflecting or not reflecting a Wi-Fi router’s signals. With this kind of backscattering, a device communicates by modulating its reflection of the Wi-Fi signal in the space.

The challenge with existing Wi-Fi backscatter systems is that they require multiple electronic components: RF switches that can toggle between reflective and non-reflective states, digital logic that controls the switch to encode the appropriate data, and a power source or harvester that runs all of these components.

In this latest research, the University of Washington team has applied this Wi-Fi backscatter technology to 3D geometry to create easy-to-print wireless devices using commodity 3D printers. To achieve this, the researchers built non-electronic, printable analogues of each of these electronic components using plastic filaments and integrated them into a single computational design.

The researchers are making their CAD models available to 3D printing enthusiasts so that they can create their own IoT objects. The designs include a battery-free slider that controls music volume, a button that automatically orders more cornflakes from an e-commerce website and a water sensor that sends an alarm to your phone when it detects a leak.

“We are using mechanism actuation to transmit information wirelessly from these plastic objects,” explained Shyam Gollakota, an associate professor at the University of Washington, who with students Vikram Iyer and Justin Chan, published their original paper on the research in 2017.

The researchers, who have been steadily working on the technology since their original paper, have leveraged mechanical motion to provide the power for their objects. For instance, when someone opens a detergent bottle, the mechanical motion of unscrewing the top provides the power for it to communicate data.

“We translate this mechanical motion into changes in antenna reflections to communicate data,” said Gollakota. “Say there is a Wi-Fi transmitter sending signals. These signals reflect off the plastic object; we can control the amount of reflections arriving from this plastic object by modulating it with the mechanical motion.”

To ensure that the plastic objects can reflect Wi-Fi signals, the researchers employ composite plastic filament materials with conductive properties. These take the form of plastic with copper and graphene filings.

“These allow us to use off-the-shelf 3D printers to print these objects but also ensure that when there is an ambient Wi-Fi signal in the environment, these plastic objects can reflect them by designing an appropriate antenna using these composite plastics,” said Gollakota.

Once the reflective material was created, the next challenge for the researchers was to communicate the collected data. The researchers ingeniously translated the 0 and 1 bits of traditional electronics into 3D-printed plastic gears: a 0 bit and a 1 bit are encoded by the presence and absence, respectively, of a tooth on the gear. These gears reflect the Wi-Fi signal differently depending on whether they are transmitting a 1 bit or a 0 bit.

“The way to think of it is that you have two parts of an antenna,” explained Gollakota. “As the gear moves, and depending on whether we are using a 0 bit or a 1 bit, we connect or disconnect the two disjointed parts of the antenna. This changes the reflections as seen by a wireless receiver.”
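
The connect/disconnect scheme Gollakota describes can be sketched in a few lines. A hypothetical receiver samples signal strength once per gear-tooth period and thresholds it; the sample values and threshold below are invented for illustration, and real decoding works on modulated reflections rather than raw amplitudes.

```python
def decode_bits(amplitudes, threshold=0.5):
    # Strong reflection -> antenna halves connected -> tooth present -> bit 0,
    # following the tooth/no-tooth convention described above.
    return [0 if a > threshold else 1 for a in amplitudes]

# One invented signal-strength sample per gear-tooth period:
samples = [0.9, 0.2, 0.8, 0.85, 0.1, 0.15]
print(decode_bits(samples))  # [0, 1, 0, 0, 1, 1]
```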

In this arrangement, the mechanical nature of many sensors and widgets is leveraged to power the backscatter design. “We have computational designs that use push buttons to harvest energy from user interaction, as well as a combination of circular plastic springs to store energy,” added Gollakota.

While the researchers are commercializing their technology by making their CAD models available to 3D printing enthusiasts, they envision a fairly broad commercial market for the technology.

Gollakota suggested that e-commerce websites would like to know how a user might be interacting with the objects they sell (after the user has given consent of course). This could alert an e-commerce website that a container needs a refill. For instance, the researchers demonstrated a prototype of a detergent bottle that could report when it is empty.

But perhaps even more consequential is the idea that this technology could be used in point-of-care medical situations, such as tracking when a pill bottle is opened or closed, or how often an insulin pen is used.

Gollakota added: “In a recent version of this work, we showed that we can not only send wireless data, we can also store information about how the object was used outside the wireless range and this information can be uploaded by the push of a button when the person comes in the range of the base station.”

IBM Envisions the Road to Quantum Computing Like an Apollo Mission

Post Syndicated from Dexter Johnson original

At its virtual Quantum Computing Summit last week, IBM laid out its roadmap for the future of quantum computing. To illustrate the enormity of the task ahead of them, Jay Gambetta, IBM Fellow and VP, Quantum Computing, drew parallels between the Apollo missions and the next generation of Big Blue’s quantum computers. 

In a post published on the IBM Research blog, Gambetta said:  “…like the Moon landing, we have an ultimate objective to access a realm beyond what’s possible on classical computers: we want to build a large-scale quantum computer.”

Lofty aspirations landed humankind on the moon, and they may one day enable quantum computers to help solve our biggest challenges, such as administering healthcare and managing natural resources. But it was clear from Gambetta’s presentation that it is going to take a number of steps to achieve IBM’s aim: a 1,121-qubit processor named Condor.

For IBM, it’s been a process that started in the mid-2000s with its initial research into superconducting qubits, which are on its roadmap through at least 2023.

This reliance on superconducting qubits stands in contrast to Intel’s roadmap which depends on silicon spin qubits. It would appear that IBM is not as keen as Intel to have qubits resemble a transistor.

However, there is one big issue for quantum computers based on superconducting qubits: they require extremely cold temperatures of about 20 millikelvin (-273 degrees C). In addition, as the number of superconducting qubits increases, the refrigeration system needs to expand as well. With an eye toward reaching its 1,121-qubit processor by 2023, IBM is currently building an enormous “super fridge” dubbed Goldeneye that will be 3 meters tall and 2 meters wide.

Upon reaching the 1,121-qubit threshold, IBM believes it could lay the groundwork for an entirely new era in quantum computing in which it will become possible to scale to error-corrected, interconnected, 1-million-plus-qubit quantum computers.

“At 1,121 qubits, we expect to start demonstrating real error correction which will result in the circuits having higher fidelity than without error correction,” said Gambetta.

Gambetta says that Big Blue engineers will need to overcome a number of technical challenges to get to 1,121 qubits. Back in early September, IBM made available its 65-qubit Hummingbird processor, a step up from its 27-qubit Falcon processor, which had run a quantum circuit long enough for IBM to declare it had reached a quantum volume of 64. (Quantum volume is a measurement of how many physical qubits there are, how connected they are, and how error prone they may be.)
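
The quantum-volume figure can be unpacked with a little arithmetic. By convention it is reported as 2 to the power n, where n is the width and depth of the largest “square” random circuit the machine runs reliably; the full benchmark also involves a heavy-output statistical test, which this sketch omits.

```python
import math

def quantum_volume(n):
    # Reported as 2**n for the largest n-qubit, n-layer "square" circuit
    # the machine executes reliably (heavy-output test omitted in this sketch).
    return 2 ** n

# IBM's 27-qubit Falcon reached a quantum volume of 64, meaning it reliably
# ran square circuits only up to 6 qubits wide and 6 layers deep:
print(quantum_volume(6))                      # 64
print(int(math.log2(quantum_volume(6))))      # 6, far fewer than 27 physical qubits
```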

Another issue is the so-called “fan-out” problem that results when you scale up the number of qubits on a quantum chip. As the qubit count increases, you need to add multiple control wires for each qubit.

The issue has become such a concern that quantum computer scientists have adopted Rent’s Rule, which the semiconductor industry defined back in the mid-1960s. E.F. Rent, a scientist at IBM in the 1960s, observed that there was a relationship between the number of external signal connections to a logic block and the number of logic gates within it. Quantum scientists have adopted the terminology to describe their own challenge in wiring up qubits.
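
Rent’s observation is usually written as T = t * G**p, where T is the number of external terminals, G the number of internal gates, and t and p empirically fitted constants. The constants below are illustrative placeholders, not values for any real chip or qubit array; the point is the sub-linear but relentless growth in wiring as a block scales up.

```python
def rent_terminals(gates, t=2.5, p=0.6):
    # Rent's rule: external terminals T = t * G**p for G internal gates.
    # t and p here are hypothetical; real designs fit them empirically.
    return t * gates ** p

for g in (100, 10_000, 1_000_000):
    print(f"{g:>9} gates -> ~{rent_terminals(g):.0f} terminals")
```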

IBM plans to address these issues next year when it introduces its 127-qubit Quantum Eagle processor, which will feature through-silicon vias and multi-level wiring to enable the fan-out of a dense array of classical control signals while protecting the qubits in order to maintain high coherence times.

“Quantum computing will face things similar to the Rent Rule, but with the multi-level wiring and multiplex readout—things we presented in our roadmap which we are already working on—we are demonstrating a path to solving how we scale up quantum processors,” said Gambetta.

It is with this Eagle processor that IBM will introduce concurrent real-time classical computing capabilities that will enable the introduction of a broader family of quantum circuits and codes.

It’s in IBM’s next release, in 2022—the 433-qubit Quantum Osprey system—that some pretty big technical challenges will start looming.

“Getting to a 433-qubit machine will require increased density of cryo-infrastructure and controls and cryo-flex cables,” said Gambetta. “It has never been done, so the challenge is to make sure we are designing with a million-qubit system in mind. To this end, we have already begun fundamental feasibility tests.”

After Osprey comes the Condor and its 1,121-qubit processor. As Gambetta noted in his post: “We think of Condor as an inflection point, a milestone that marks our ability to implement error correction and scale up our devices, while simultaneously complex enough to explore potential Quantum Advantages—problems that we can solve more efficiently on a quantum computer than on the world’s best supercomputers.”

Robotics, AI, and Cloud Computing Combine to Supercharge Chemical and Drug Synthesis

Post Syndicated from Dexter Johnson original

IBM must be brimming with confidence about its new automated system for performing chemical synthesis because Big Blue just had twenty or so journalists demo the complex technology live in a virtual room.

IBM even had one of the journalists choose the molecule for the demo: a molecule in a potential Covid-19 treatment. We then watched as the system synthesized and tested the molecule and delivered its analysis in a PDF document displayed on the other journalist’s screen. It all worked; again, that’s confidence.

The complex system is based upon technology IBM started developing three years ago that uses artificial intelligence (AI) to predict chemical reactions. In August 2018, IBM made this service available via the Cloud and dubbed it RXN for Chemistry.

Now, the company has added a new wrinkle to its Cloud-based AI: robotics. This new and improved system is no longer named simply RXN for Chemistry, but RoboRXN for Chemistry.

All of the journalists assembled for this live demo of RoboRXN could watch as the robotic system executed various steps, such as dispensing reagents and solvent into the reactor. The robotic system carried out the entire set of procedures—completing the synthesis and analysis of the molecule—in eight steps.

In regular practice, a user will be able to suggest a combination of molecules they would like to test. The AI will pick up the order and task a robotic system to run the reactions necessary to produce and test the molecule. Users will be provided analyses of how well their molecules performed.

Back in March of this year, Silicon Valley-based startup Strateos demonstrated something similar that they had developed. That system also employed a robotic system to help researchers working from the Cloud create new chemical compounds. However, what distinguishes IBM’s system is its incorporation of a third element: the AI.

The backbone of IBM’s AI model is a machine learning translation method that treats chemistry like language translation: it converts reactants and reagents to products, using the simplified molecular-input line-entry system (SMILES) representation to describe chemical entities.
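
To make “chemistry as language translation” concrete, here is a minimal SMILES tokenizer of the kind used to prepare inputs for such translation models. The regular expression is a commonly published pattern for this task, not IBM’s exact preprocessing, and the aspirin example is purely illustrative.

```python
import re

# Split a SMILES string into chemically meaningful tokens (atoms, bonds,
# branches, ring closures) that a translation model can treat like words.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize(smiles):
    return SMILES_TOKEN.findall(smiles)

tokens = tokenize("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(tokens)  # multi-character tokens like Br or [nH] would stay whole
```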

IBM has also leveraged an automated, data-driven strategy to ensure the quality of its data. Researchers there used millions of chemical reactions to teach the AI system chemistry, but contained within that data set were errors. So how did IBM clean this so-called noisy data to eliminate the potential for bad models?

According to Alessandra Toniato, a researcher at IBM Zurich, the team implemented what they dubbed the “forgetting experiment.”

Toniato explains that, in this approach, they asked the AI model how sure it was that the chemical examples it was given were examples of correct chemistry. When faced with this choice, the AI identified chemistry that it had “never learnt,” “forgotten six times,” or “never forgotten.” The “never forgotten” examples were clean, and in this way the team was able to clean the data the AI had been presented with.
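
The forgetting experiment can be sketched as bookkeeping over training epochs: count how often each example flips from correctly predicted to incorrectly predicted. The per-epoch correctness flags below are invented for illustration; in practice they would come from the model’s predictions after each pass over the reaction data.

```python
def forgetting_counts(correct_history):
    """correct_history: {example_id: [bool per epoch]} -> forgetting events,
    i.e. transitions from correctly predicted to incorrectly predicted."""
    counts = {}
    for ex, flags in correct_history.items():
        counts[ex] = sum(
            1 for prev, cur in zip(flags, flags[1:]) if prev and not cur
        )
    return counts

history = {
    "rxn_A": [True, True, True, True],     # never forgotten -> likely clean
    "rxn_B": [True, False, True, False],   # forgotten twice -> suspicious
    "rxn_C": [False, False, False, True],  # learned late, never forgotten
}
print(forgetting_counts(history))  # {'rxn_A': 0, 'rxn_B': 2, 'rxn_C': 0}
```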

While the AI has always been part of RXN for Chemistry, the robotics is the newest element. The main benefit of handing the reactions over to a robotic system is that it is expected to free chemists from the often tedious process of designing a synthesis from scratch, says Matteo Manica, a research staff member in Cognitive Health Care and Life Sciences at IBM Research Zürich.

“In this demo, you could see how the system is synergistic between a human and AI,” said Manica. “Combine that with the fact that we can run all these processes with a robotic system 24/7 from anywhere in the world, and you can see how it will really help to speed up the whole process.”

There appear to be two business models that IBM is pursuing with its latest technology. One is to deploy the entire system on the premises of a company. The other is to offer licenses to private Cloud installations.

“From a business perspective you can think of having a system like we demonstrated being replicated on premises within companies or research groups that would like to have the technology at their disposal,” says Teodoro Laino, a distinguished research staff member and manager at IBM Research Europe. “On the other hand, we are also pushing to bring the entire system to a service level.”

Just as IBM is brimming with confidence about its new technology, the company also has grand aspirations for it.

Laino adds: “Our aim is to provide chemical services across the world, a sort of Amazon of chemistry, where instead of looking for chemistry already in stock, you are asking for chemistry on demand.”

Building a Quantum Computing Workforce from the Ground Up

Post Syndicated from Dexter Johnson original

Although quantum computing is still in its infancy, its potential means it has already become one of the fastest-growing STEM fields. Consequently, industry and academia are now starting to tackle the problem of creating a labor pool that can leverage the opportunities provided by this new field.

It’s likely that any future quantum workforce will have to come from a diverse universe of scientists and engineers, including material scientists and electronic engineers working on hardware and code developers and mathematicians working on software.

This was the view of education leaders from IBM, NYU and Howard University at a recent virtual meeting set up to discuss the challenges of the anticipated quantum computing talent shortage. 

“You have to have advanced education in order to make a good living in this industry,” explained Tina Brower-Thomas, education director and Howard University executive director of the Center for Integrated Quantum Materials. “So the question is: are we preparing our K through 12 to go to the schools that have the requisite curriculum that will then prepare them to be in the industry? I think, unfortunately, the answer is ‘no,’ and that’s a long-standing problem we’ve had in this country.”

IBM has been trying to pull industry and academia together to prepare for the day when quantum computing requires a large number of trained professionals. One of IBM’s initiatives has been its Qiskit Global Summer School for future quantum software developers (the prerequisites are the ability to multiply two matrices and basic Python programming experience). The school has already drawn over 5,000 applicants from around the world.
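
The prerequisites are modest for a reason: at the entry level, a quantum gate is just a small matrix multiplied into a state vector. This toy example in plain Python (no quantum SDK assumed) applies a Hadamard gate to the |0⟩ state, putting a single qubit into an equal superposition.

```python
import math

def matvec(m, v):
    # Plain matrix-vector multiplication: the only math prerequisite.
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

# The Hadamard gate as a 2x2 matrix, and the |0> state as a vector.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
ket0 = [1.0, 0.0]

state = matvec(H, ket0)
probs = [abs(a) ** 2 for a in state]   # measurement probabilities
print(probs)  # ~[0.5, 0.5]: equal chance of measuring 0 or 1
```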

Abe Asfaw, Global Lead of Quantum Education, IBM Quantum, noted that what’s really helped has been the advent of cloud-based quantum computing.

Cloud-based systems mean no longer having a “huge barrier to entry where you have to learn quantum mechanics and then you have to learn several other things along the way. You can make the barrier a little bit lower to just a question of programming,” said Asfaw.

While being able to program cloud-based systems has democratized the field somewhat, Javad Shabani, an assistant professor of physics at New York University and head of the Shabani Lab, believes that a generation that is really going to make breakthroughs will have to learn the hardware of quantum computers.

“In quantum computing at this stage in its development, you can’t separate software and hardware,” said Shabani. “We know that we don’t have a perfect quantum computer, so in order to make a little improvement you need to know the quantum computer inside and out [because of] the errors that exist in the quantum computers.”

The experiences of Shabani, Asfaw and Brower-Thomas all confirmed that even if you engage people early and broaden the spectrum of people who come into the field, a key issue is offering students realistic, practical expectations of what the immediate future holds for them.

Shabani noted: “We all like to talk about the great potential of quantum computing, but these great capabilities come with great challenges. So we need to be careful about the hype and explain to students the realities of these great challenges and that they also create great opportunities.”

With 5G Rollout Lagging, Research Looks Ahead to 6G

Post Syndicated from Dexter Johnson original

Amid a 5G rollout that has faced its fair share of challenges, it might seem somewhat premature to start looking ahead at 6G, the next generation of mobile communications. But 6G development is happening now, and it’s being pursued in earnest by both industry and academia.

Much of the future landscape for 6G was mapped out in an article published in March of this year in IEEE Communications, titled “Toward 6G Networks: Use Cases and Technologies.” The article presents the requirements, the enabling technologies, and the use cases for a systematic approach to overcoming the research challenges of 6G.

“6G research activities are envisioning radically new communication technologies, network architectures, and deployment models,” said Michele Zorzi,  a professor at the University of Padua in Italy, and one of the authors of the IEEE Communications article. “Although some of these solutions have already been examined in the context of 5G, they were intentionally left out of initial 5G standards developments and will not be part of early 5G commercial rollout mainly because markets are not mature enough to support them.”

The foundational difference between 5G and 6G networks, according to Zorzi, will be the increased role that intelligence will play in 6G networks, going beyond the mere classification and prediction tasks it performs in legacy and 5G systems.

While machine-learning-driven networks are now still in their infancy, they will likely represent a fundamental component of the 6G ecosystem, which will shift towards a fully-user-centric architecture where end terminals will be able to make autonomous network decisions without supervision from centralized controllers.

This decentralization of control will enable the sub-millisecond latency required by several 6G services (below the already challenging 1-millisecond target of emerging 5G systems) and is expected to yield more responsive network management.

To achieve this new kind of performance, the underlying technologies of 6G will be fundamentally different from 5G. For example, says Marco Giordani, a researcher at the University of Padua and co-author of the IEEE Communications article, even though 5G networks have been designed to operate at extremely high frequencies in the millimeter-wave bands, 6G will exploit even higher-spectrum technologies—terahertz and optical communications being two examples.

At the same time, Giordani explains that 6G will have a new cell-less network architecture that is a clear departure from current mobile network designs. The cell-less paradigm can promote seamless mobility support, targeting interruption-free communication during handovers, and can provide quality of service (QoS) guarantees that are in line with the most challenging mobility requirements envisioned for 6G, according to Giordani.

Giordani adds: “While 5G networks (and previous generations) have been designed to provide connectivity for an essentially bi-dimensional space, future 6G heterogeneous architectures will provide three-dimensional coverage by deploying non-terrestrial platforms (e.g., drones, HAPs, and satellites) to complement terrestrial infrastructures.”

IBM’s $3-Billion Research Project Has Kept Computing Moving Forward

Post Syndicated from Dexter Johnson original

Back in 2014, under the looming shadow of the end of Moore’s Law, IBM embarked on an ambitious, US $3 billion project dubbed “7-nm and Beyond”. The bold aim of that five-year research project was to see how computing would continue into the future as the physics of decreasing chip dimensions conspired against it.

Six years later, Moore’s Law isn’t much of a law anymore. The observation by Gordon Moore (and later the industry-wide adherence to that observation) that the number of transistors on a chip doubled roughly every two years seems now almost to be a quaint vestige of days gone by. But innovation in computing is still required, and the “7-nm and Beyond” project has helped meet that continuing need.

“The search for new device architectures to enable the scaling of devices, and the search for new materials for performance differentiation will never end,” says Huiming Bu, Director at IBM’s Advanced Logic & Memory Technology Research, Semiconductor, and AI Hardware Group.

Although the chip industry may not feel as constrained by Moore’s Law as it has in the past, the “7-nm and Beyond” project has delivered important innovations even while some chip manufacturers have seemingly thrown up their hands in frustration at various points in recent years. 

One example of this frustration was the decision two years ago by GlobalFoundries to suspend its 7-nanometer chip development.

Back in 2015, one year into its “7-nm and Beyond” project, IBM announced its first 7-nm test chip, for which extreme-ultraviolet lithography (EUV), supplied by ASML, was a key enabling technology. While there have been growing pains in the use of EUV (only the richest chip manufacturers have continued pursuing the scaling it enables), it has since become a key enabling technology not only for 7-nm nodes but also for 5-nm nodes and beyond, according to Bu.

“Back in the 2014-2015 time window, the whole industry had a big question about the practical feasibility of EUV technology,” says Bu. “Now it’s not a question. Now, EUV has become the mainstream enabler. The first-of-its-kind 7-nm work we delivered based on EUV back then helped build the confidence and momentum toward EUV manufacturing in our industry.”

Of course, EUV has enabled 7-nm nodes, but the aim of IBM was to look beyond that. IBM believes that the foundational element of chips to enable the scaling beyond FinFET will be the nanosheet transistor, which some have suggested may even be the last step in Moore’s Law.

The nanosheet looks to be the replacement for the FinFET architecture and is expected to make possible the transition from the 7-nm and 5-nm nodes to the 3-nm node. In nanosheet field-effect transistors, current flows through multiple stacked sheets of silicon that are completely surrounded by the transistor gate. This design greatly reduces the current that can leak in the off state, allowing more current to drive the device when it is switched on.

“In 2017, the industry had a question about what will be the new device structure beyond FinFET,” says Bu. “At this point, three years later, the whole industry is getting behind nanosheet technology as the next device structure after FinFET.”

The transistors and switches have had some key developments, but the “7-nm and Beyond” project also resulted in some significant insights into how the wiring above all these transistors and switches will be made going into the future.

“Part of our innovation has been to extend copper as far as possible,” says Daniel Edelstein, an IBM Fellow who works on silicon technology research and MRAM/BEOL process strategy. “The hard part, as always, has been simply patterning these extremely tiny and tall trenches and filling them without defects with copper.”

Despite the challenges with using copper, Edelstein doesn’t see the industry migrating away from it to more exotic materials in the near future. “Copper is certainly not at the end of its rope for what’s being manufactured today,” said Edelstein.

He adds: “Several companies have indicated that they intend to continue using it. So I can’t tell you exactly when it breaks. But we have seen that the so-called resistance crossover point keeps getting pushed farther into the future.”

While chip dimensions, architectures and materials have driven much of the innovations of the “7-nm and Beyond” project, both Edelstein and Bu note that artificial intelligence (AI) is also playing a key role in how they are approaching the future of computing.

“With the onset of AI-type, brain-inspired computing and other kinds of non-digital computing, we’re starting to develop, at the research level, additional devices—especially emerging memory devices,” says Edelstein.

Edelstein is referring to emerging memory devices such as phase-change memory (or “memristors,” as some refer to them), which are thought of as analog computing devices.

The emergence of these new memory devices has revived thinking about potential applications over and above conventional data storage. Researchers are imagining new roles for the thirty-year-old magnetoresistive random-access memory (MRAM), which IBM has been working on since MRAM’s debut.

“MRAM has finally had enough breakthroughs where it’s now not only manufacturable, but also approaching the kinds of requirements that it needs to achieve to be competitive with SRAM for system cache, which is kind of the holy grail in the end,” says Edelstein.

The evidence of this embedding of MRAM and other nonvolatile memories—including RRAM and phase-change memory—directly into the processor is seen in the move last year by chip equipment manufacturer Applied Materials to give its customers the tools for enabling this change.

The pursuit of new devices, new materials, and new computing architectures for better power-performance will continue, according to Bu. He also believes that the demand to integrate various components into a holistic computing system is starting to drive a whole new world of heterogeneous integration.

Bu adds: “Building these heterogeneous architecture systems is going to become a key in future computing. It is a new innovation strategy driven by the demands of AI.”

Skin Cream Provides Special Ingredient for Non-Flammable Electrolyte in Li-ion Batteries

Post Syndicated from Dexter Johnson original

Professor Lu (right) and Ph.D. candidate Jing Xie (left) present the molecular-crowding electrolyte and a battery prototype.

Lithium-ion (Li-ion) batteries have proven themselves to be one of the most dependable energy storage technologies at our disposal. Unfortunately, they have a pronounced Achilles’ heel: a non-aqueous electrolyte that is fairly combustible and as a result poses a significant fire hazard.

The stories of battery fires in Tesla cars, or the estimate from the Federal Aviation Administration that a fire in a Li-ion battery grounds a flight every 10 days, serve as testaments to what a serious issue this hazard has become.

In a big step toward solving this problem once and for all, researchers at The Chinese University of Hong Kong (CUHK) have developed a non-flammable, eco-friendly, and low-cost aqueous electrolyte based on a skin-cream ingredient that, unlike previous aqueous electrolytes, can still provide high energy density.

“If one wants to fully eliminate the fire issue in batteries, using a water-based electrolyte is the most effective strategy,” explained Prof. Yi-Chun Lu, who led the research at CUHK. “When we developed our project, our aim was to develop a water-based electrolyte that delivered high voltage while maintaining a low cost and remained environmentally friendly.”

Non-aqueous electrolytes are ubiquitous in the Li-ion batteries of today for one simple reason: they deliver high energy density, 100 to 400 watt-hours per kilogram (Wh/kg). As alternatives, researchers have begun using highly concentrated salts in aqueous electrolytes to try to reach the energy density of non-aqueous electrolytes. However, these high-salt aqueous electrolytes create a battery chemistry that is highly toxic and relatively expensive to produce.

“While aqueous electrolytes have been used for many years, existing water-based electrolytes sacrifice voltage window or increase cost and toxicity in exchange for safety,” said Yi-Chun Lu.

In looking for a solution to this issue, Yi-Chun Lu and her team at CUHK turned to nature to leverage something known as “molecular crowding” for an inexpensive and eco-friendly alternative. Molecular crowding is a phenomenon that occurs inside living cells in which molecular crowding agents significantly suppress water activity by changing the hydrogen-bonding structure inside the cells.

In their research described in the journal Nature Materials, Yi-Chun Lu and her team employed a polymer often used in skin creams to serve as the crowding agent. This crowding agent decreases the water activity and thereby enables a wide electrolyte voltage window with low salt concentrations. (An electrolyte voltage window is the voltage gap between the positive and negative side of the battery, and, in this case, is 3.2V).

While this electrolyte can be used with any kind of electrode materials that are within its electrolyte voltage window, it cannot currently work with electrode materials that fall outside its voltage window.

This limitation on the choice of electrode materials meant that the prototype was only able to achieve 75 to 110 Wh/kg, which is still below the energy density of typical Li-ion batteries, which ranges between 100 and 400 Wh/kg.

This limitation can be overcome, according to Yi-Chun Lu, by increasing the voltage window to enable electrodes that sit at an even lower potential such as lithium metal.

“We have demonstrated this possibility by using gel coating in our new electrolyte to achieve a 4.0V battery using a lithium anode,” said Yi-Chun Lu. “With proper improvements of the electrode and electrolyte, the energy density possible for this class of electrolyte should be the same as typical non-aqueous electrolyte, say between 200 to 260 Wh/Kg.”
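
The voltage-window arithmetic is straightforward: cell energy density scales roughly with voltage times usable capacity, so widening the window from 3.2 V to 4.0 V buys a proportional gain. The capacity figure below is a hypothetical placeholder for illustration, not a number from the CUHK paper.

```python
def energy_density_wh_per_kg(voltage_v, capacity_mah_per_g):
    # 1 mAh/g equals 1 Ah/kg, so volts * mAh/g gives Wh/kg directly.
    return voltage_v * capacity_mah_per_g

cap = 30  # hypothetical usable capacity, mAh per gram of total cell mass
print(energy_density_wh_per_kg(3.2, cap))  # 96.0 Wh/kg, within the 75-110 range
print(energy_density_wh_per_kg(4.0, cap))  # 120.0 Wh/kg from the wider window
```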

Testing the compatibility of the electrolyte with more electrode materials is the focus of ongoing research and one of the engineering challenges in bridging the technology from a lab prototype to commercial production. The researchers also aim to extend the battery's demonstrated lifetime from 300 charge-discharge cycles to 500 as they improve the electrolyte toward a higher voltage window.

In terms of commercializing the technology, the CUHK team has not yet collaborated with commercial battery manufacturers in the development of their prototype, but they are now seeking industrial partners. In the meantime, they have filed for a US patent on the electrolyte.

While the research was not done in collaboration with an industrial partner, it was most certainly guided by commercial considerations.

“Low cost, high voltage and high safety are our major strengths,” said Yi-Chun Lu. “We anticipate good markets for this technology in any application that requires ultra-safe batteries.”

Safety is a more pressing issue in larger battery systems than in smaller portable electronics, according to Yi-Chun Lu.

Yi-Chun Lu added: “I think this technology will be very attractive for applications like large-scale grid storage (interfacing with solar or wind farm) and electric vehicles (with further development in energy density).”

How Brexit Will Affect Europe’s Research Infrastructure


Despite its byzantine structure, European Union research funding has been remarkably effective at producing results, and has been notably beneficial for the United Kingdom's research: the UK received €3.4 billion more in research funding from the EU than it contributed between 2007 and 2013.

Britain's likely exit from the EU, whether in a "no-deal" crash or under a settled agreement (currently scheduled for 31 January 2020), will probably damage scientific research both in the UK and the EU for decades to come, according to Terry Wyatt, a professor at the University of Manchester in the UK and part of the working group at The Royal Society that investigated the impact of Brexit on UK research and development (see sidebar for a detailed breakdown of how the Royal Society thinks things will play out).

“Nothing is irreparable but the danger is that it could take decades to recover from Brexit,” said Wyatt. “The damage which has already been done could continue over the next few years, or could even accelerate.”

Even if a no-deal Brexit is avoided, or Brexit avoided altogether, Wyatt believes that the damage to the UK’s reputation has already been significant, and the effects will not be repaired overnight. “I think there’s no question that damage has been done and will continue to be done to R&D and high tech industry,” he said, adding, “I can’t see how that cannot be the case.”

The impact, according to Wyatt, manifests most clearly in two ways. First, there is already a reluctance to engage UK partners for EU research projects; in particular, projects have avoided UK leadership ever since the 2016 referendum vote to leave the EU. Second, EU nationals are less likely to apply for short-term jobs in the UK. Wyatt concedes that the evidence for this second impact is mostly anecdotal, but that is largely because the relevant data is so hard to collect.

“An international workforce that can migrate across international borders is the life blood of science and research,” said Wyatt. “If EU nationals post-Brexit don’t want to come and live and work in the UK, I think there’s no question that that could seriously damage UK science and technology.”

Wyatt acknowledges that press reports have indicated that the British government has promised to continue to fund ongoing or already approved research schemes and projects. In this scenario, anyone who currently has EU funding through the European Research Council, or other EU research bodies, will continue to have their research funding guaranteed by the UK and that funding will not just drop off a cliff on 31 October.

Both explicitly and implicitly the UK government has been trying to encourage people through these guarantees to continue to apply for EU funding, and they’ve been trying to encourage other European countries and scientists to regard the UK as a source of good collaborators, according to Wyatt. Despite the guarantees, Wyatt’s experience has been that these efforts to allay fears haven’t helped.

“In the past, we had lots of extremely well-qualified applicants from EU countries,” said Wyatt. “We’ve had virtually no such applicants in recent times.”

The situation is further muddied by the fact that the UK government has never said whether it intends to fund such research at the level the EU was sending to the UK, or will instead scale funding to the level the UK was sending to the EU. In the last fully accounted EU research funding period, between 2007 and 2013, the UK contributed €5.4 billion to the EU but received €8.8 billion back in funding for UK research.

But for Wyatt the issue extends beyond the fact that the UK has received more funding than it has put in. He argues that the merit of EU research funding has always been that it's based on scientific excellence. He fears that the UK will shift this guiding principle toward domestic political agendas.

“The money, of course, is very important, but it’s also about the quality of the funding mechanisms and the research that gets done,” said Wyatt. “It’s important that the quality of the research be determined both in terms of excellence and the science. It is vital that research priorities be driven by the scientist rather than some government minister or bureaucrat.”

Beyond the high-level issues of research strategy and direction, the nuts and bolts of bringing researchers into the UK from outside the EU, such as from Asia, are fraught with challenges, according to Wyatt.

“The hoops that you have to jump through to secure a visa for a non-EU national is daunting,” said Wyatt. “If every single research job now has to go through those hoops (unless you can find a UK person), it’s going to be a nightmare and it’s hard to imagine how the visa system is going to cope.”

Wyatt notes that the formal UK involvement in a number of the large European science and technology institutions like CERN will be little impacted by Brexit since those relationships predate the formation of the EU. However, the lack of detail and clarity in how the relationships will work going forward does not offer much confidence.

Wyatt added: “At the moment, people seem to be trading on grand promises and warm words. However, in the end for high-tech industries or for R&D, it’s hard to argue that the net result is going to be positive.”

U.S. Battery Producer Celgard Files Lawsuit for IP Theft


In a lawsuit filed in California, U.S. battery manufacturer Celgard, a subsidiary of Polypore International, has sued Shenzhen Senior Technology Material Co., Ltd. (Senior) for patent infringement and for misappropriating Celgard’s trade secrets and confidential information.

The suit has a bit of a spy novel twist in that Celgard alleges in its complaint that one of its senior scientists left the company in October 2016 and moved to China to join Senior, after which he changed his name to cover up his identity. This scientist is alleged to be the source through which Senior acquired Celgard’s intellectual property.

Brexit Threatens a Steep Loss of Jobs for U.K.-Based Tech Companies


As of press time, the United Kingdom was scheduled to leave the European Union on 31 October. However, there has been chaos in the British parliament, and it is still uncertain if the U.K. will exit on that date, or if it does, on what terms. One thing that has become clear is that Brexit will inflict significant hardship on small- and medium-size enterprises (SMEs).

As we reported in our September 2019 article on the flow of tech workers from the U.K. to Ireland, large multinationals have the capital to move operations in order to mitigate disruptions. For SMEs, the kind of measures that can be taken in the wake of Brexit are far more limited.

“All the stories about Brexit in the Financial Times have been about big multinational companies,” says Ross Brown, a professor at the University of St. Andrews, in Scotland, and coauthor of a paper on the potential impact of Brexit on SMEs. Brown says that small companies are unable to implement the kind of contingency plans that larger companies developed to deal with the chronic uncertainty of the Brexit process: “There have been a number of detrimental impacts on U.K. SMEs, and I think that’s been the hidden aspect of Brexit.”

If you go by the numbers, the impact of Brexit on SMEs should not be so hidden. Currently, SMEs make up over 99 percent of all U.K. companies, according to Brown. (In the U.K., a company having 250 employees or fewer is considered an SME. For comparison, in the United States the threshold is 500 employees.)

There are about 5.7 million SMEs in the U.K., according to Brown, and together they constitute about 60 percent of all private-sector employment. Brown estimates that 14 percent of these 5.7 million SMEs are high-tech engineering companies, meaning approximately 800,000 affected SMEs in the U.K. are in high tech.
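
The arithmetic behind that estimate is straightforward and worth checking, since the figures drive the rest of the argument:

```python
# Sanity check of Brown's figures (illustrative arithmetic only).
total_smes = 5_700_000   # ~5.7 million U.K. SMEs, per Brown
high_tech_share = 0.14   # 14 percent estimated to be high-tech engineering firms

high_tech_smes = total_smes * high_tech_share
print(f"{high_tech_smes:,.0f}")  # ~798,000, i.e. roughly 800,000
```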

Brown’s research indicates that two-thirds of SMEs have already reduced their capital investment. Capital investment is the lifeblood of a business, he points out. “Unless companies are investing, they’re not able to grow, they’re not able to create new products, and they might become less competitive in the marketplace,” he says. “If that’s happening across the board, it’s not going to be small beer.”

Estimates on the number of Brexit-related job losses are problematic, according to Brown. However, he remains certain that the short-term impact (the next five years) will be significant, possibly translating into a loss of around 20 percent of jobs for SMEs in the United Kingdom.

While SMEs can’t match the preparations of multinationals, there are efforts to take some advance measures, according to Brown. “Some capable and innovative companies might have set up offices in other European Union countries so they can continue accessing the single market without potentially having too much disruption,” he says.

Unfortunately, these measures are not available for all SMEs. And companies that are highly R&D focused may suffer the worst: One of the key points of concern for R&D-based companies is the potential difficulty in employing EU citizens.

This is a great fear, especially for companies in a sector like video games that is heavily reliant on Eastern European employees. “For a big company to open up a plant in Eastern Europe is fine, but for a small company of maybe 10 people to open another overseas office is really quite a major undertaking,” says Brown, who believes that many such small companies may downsize rather than take on this kind of additional risk.

This article appears in the October 2019 print issue as “Brexit Threatens British Tech Jobs.”

Ultrasensitive Microscope Reveals How Charging Changes Molecular Structures


New ability to image molecules under charging promises big changes for molecular electronics and organic photovoltaics

All living systems depend on the charging and discharging of molecules to convert and transport energy. While science has revealed many of the fundamental mechanisms of how this occurs, one area has remained shrouded in mystery: How does a molecule's structure change while charging? The answer could have implications for a range of applications, including molecular electronics and organic photovoltaics.

Now a team of researchers from IBM Research in Zurich, the University of Santiago de Compostela and ExxonMobil has reported in the journal Science the ability to image, with unprecedented resolution, the structural changes that occur to individual molecules upon charging.

The ability to observe this previously hidden phenomenon should reveal molecular charge-function relationships and how they relate to the way biological systems convert and transport energy. This understanding could play a critical role in the development of both organic electronic and photovoltaic devices.

“Molecular charge transition is at the heart of many important phenomena, such as photoconversion, energy and molecular transport, catalysis, chemical synthesis, molecular electronics, to name some,” said Leo Gross, research staff member at IBM Zurich and co-author of the research. “Improving our understanding of how the charging affects the structure and function of molecules will improve our understanding of these fundamental phenomena.”

This latest breakthrough is based on research going back 10 years when Gross and his colleagues developed a technique to resolve the structure of molecules with an atomic force microscope. AFMs map the surface of a material by recording the vertical displacement necessary to maintain a constant force on the cantilevered probe tip as it scans a sample’s surface.
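
The constant-force scanning described above is, in essence, a feedback loop: the controller adjusts the tip height until the measured force matches a setpoint, and the height corrections needed at each point trace out the topography. A toy one-dimensional sketch of that principle — the surface profile, force model, and gain below are all invented for illustration, not drawn from IBM's instrument:

```python
# Toy constant-force AFM scan in one dimension (illustrative only).
# The tip-sample force is modeled as decaying with separation; a simple
# proportional controller holds it at a setpoint, so the recorded tip
# heights follow the surface at a constant offset.
import math

def surface(x):
    """Invented sample topography (nm)."""
    return 2.0 + 0.5 * math.sin(x)

def force(separation_nm):
    """Invented force model: stronger when the tip is closer."""
    return math.exp(-separation_nm)

def scan(xs, setpoint=0.5, gain=0.8, steps=200):
    heights = []
    z = 5.0                      # initial tip height (nm)
    for x in xs:
        for _ in range(steps):   # let the feedback loop settle at this pixel
            error = force(z - surface(x)) - setpoint
            z += gain * error    # too much force -> retract; too little -> approach
        heights.append(z)
    return heights

xs = [i * 0.1 for i in range(50)]
trace = scan(xs)

# The recorded trace reproduces surface(x) shifted by a constant separation.
offsets = [trace[i] - surface(xs[i]) for i in range(len(xs))]
print(max(offsets) - min(offsets))  # ~0 -> a constant-force contour
```

In a real AFM the "force" is inferred from cantilever deflection or frequency shift and the feedback runs in analog or FPGA hardware, but the recorded quantity is the same: the height map needed to keep the force constant.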

Over the years, Gross and his colleagues refined the technique so it could see the charge distribution inside a molecule, and then were able to get it to distinguish between individual bonds of a molecule.

The trick to these techniques was to functionalize the tip of the AFM probe with a single carbon monoxide (CO) molecule. Last year, Gross and his colleague Shadi Fatayer at IBM Zurich realized that the ultra-high resolution possible with the CO tips could be combined with control over the charge of the molecule being imaged.

“The main hurdle was in combining two capabilities, the control and manipulation of the charge states of molecules and the imaging of molecules with atomic resolution,” said Fatayer.

The concern was that the functionalization of the tip would not be able to withstand the applied bias voltages used in the experiment. Despite these concerns, Fatayer explained that they were able to overcome the challenges in combining these two capabilities by using multi-layer insulating films, which avoid charge leakage and allow charge state control of molecules.

The researchers were able to control the charge state by transferring single electrons from the AFM tip to the molecule, or vice versa. This was achieved by applying a voltage between the tip and the molecule. "We know when an electron is attached or removed from the molecule by observing changes in the force signal," said Fatayer.

The IBM researchers expect that this research could have an impact in the fundamental understanding of single-electron based and molecular devices. This field of molecular electronics promises a day when individual molecules become the building blocks of electronics.

Another important prospect of the research, according to Fatayer and Gross, is its impact on organic photovoltaic devices. Organic photovoltaics have been a tantalizing option for solar power because they are cheap to manufacture. However, organic solar cells have been notoriously less efficient than silicon solar cells at converting sunlight to electricity.

The hope is that by revealing how the structural changes of molecules under charge impact the charge transition of molecules, engineers will be able to further optimize organic photovoltaics.

Is Graphene by Any Other Name Still Graphene?


Consumers may finally have a way to know if their graphene-enabled products actually get any benefit from the wonder material

Last year, the graphene community was rocked by a series of critical articles that appeared in some high-profile journals. First there was an Advanced Materials article with the rather innocuous title "The Worldwide Graphene Flake Production." It was perhaps the follow-up article in the journal Nature that really shook things up, with its incendiary title: "The war on fake graphene."

In these two articles it was revealed that material that had been claimed to be high-quality (and high-priced) graphene was little more than graphite powder. Boosted by their appearance in high-impact journals, these articles threatened the foundations of the graphene marketplace.

But while these articles triggered a lot of hand wringing among the buyers and sellers of graphene, it’s not clear that their impact extended much beyond the supply chain of graphene. Whether or not graphene has aggregated back to being graphite is one question. An even bigger one is whether or not consumers are actually being sold a better product on the basis that it incorporates graphene. 

Consumer products featuring graphene today include everything from headphones to light bulbs. Consequently, there is already confusion among buyers about the tangible benefits graphene is supposed to provide. And of course the situation becomes even worse if the graphene sold to make products may not even be graphene: how are consumers supposed to determine whether graphene infuses their products with anything other than a buzzword?

Another source of confusion arises because graphene incorporated into a product is effectively a different animal from graphene in isolation. There is ample scientific evidence that graphene, when included in a material matrix like a polymer or even paper, can impart new properties to the material. "You can transfer some very useful properties of graphene into other materials by adding graphene, but just because the resultant material contains graphene it does not mean it will behave like free-standing graphene," explains Tom Eldridge, of UK-based Fullerex, a consultancy that provides companies with information on how to include graphene in a material matrix.

Eldridge added: “This is why it is often misleading to talk about the superlative properties of free-standing graphene for benefiting applications, because almost always graphene is being combined with other materials. For instance, if I combine graphene with concrete I will not get concrete which is 200 times stronger than steel.”

This is what leaves consumers a bit lost at sea: Graphene can provide performance improvements to a product, but what kind and by how much?

The Graphene Council (Disclosure: The author of this story has also worked for The Graphene Council) recognized this knowledge gap in the market and has just launched a “Verified Graphene Product” Program in addition to its “Verified Graphene Producer” program. The Verified Graphene Producer program takes raw samples of graphene and characterizes them to verify the type of graphene it is, while the Verified Graphene Product program addresses the issue of what graphene is actually doing in products that claim to use it. 

Companies that are marketing products that claim to be enhanced by graphene can use this service, and the verification can be applied to their product to give buyers confidence that graphene is actually doing something. (It’s not known if there are any clients taking advantage of it yet.)

“Consumers want to know that the products they purchase are genuine and will perform as advertised,” said Terrance Barkan, executive director of The Graphene Council. “This applies equally to purchasers of graphene enhanced materials and applications. This is why independent, third-party verification is needed.”