Tag Archives: computing

New Hardware Mimics Spiking Behavior of Neurons With High Efficiency

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/hardware/new-hardware-mimics-spiking-behavior-of-neurons-with-high-efficiency


Nothing computes more efficiently than a brain, which is why scientists are working hard to create artificial neural networks that mimic the organ as closely as possible. Conventional approaches use artificial neurons that work together to learn different tasks and analyze data; however, these artificial neurons do not have the ability to actually “fire” like real neurons, releasing bursts of electricity that connect them to other neurons in the network. The third generation of this computing tech aims to capture this real-life process more accurately – but achieving such a feat is hard to do efficiently.
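The “spiking” described above is often modeled in software as a leaky integrate-and-fire neuron: the membrane potential integrates its input, leaks back toward rest, and emits a spike when it crosses a threshold. The Python sketch below is only a minimal illustration of that firing behavior, not the hardware covered in the article; every parameter value is an arbitrary placeholder.

```python
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron driven by an input current trace.

    Returns the membrane-potential trace and the time steps at which the
    neuron "fires" (crosses threshold and resets).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(current):
        # The potential leaks toward rest while integrating the input drive.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:      # threshold crossing: the neuron spikes...
            spikes.append(t)
            v = v_reset        # ...and resets
        trace.append(v)
    return np.array(trace), spikes

# A constant drive strong enough to make the neuron spike periodically.
trace, spikes = simulate_lif(np.full(1000, 60.0))
print(f"{len(spikes)} spikes in 1,000 simulated time steps")
```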

Japan’s Fugaku Supercomputer Completes First-Ever Sweep of High-Performance Benchmarks

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/computing/hardware/japans-fugaku-supercomputer-is-first-in-the-world-to-simultaneously-top-all-high-performance-benchmarks

The public-private partnership of Fujitsu and the national research institute RIKEN put Japan on top of the world supercomputer rankings nine long years ago with the K computer. They’ve done it again, and in spades, with their jointly developed Fugaku supercomputer.

Fugaku, another name for Mount Fuji, sits at the summit of the TOP500 list announced on 22 June. It earned the top spot with an extraordinary performance of 415 Linpack petaflops. This is nearly triple that of the runner-up and previous No. 1, Oak Ridge National Lab’s Summit supercomputer in Tennessee, built by IBM. Fugaku achieved this using 396 racks employing 152,064 A64FX Arm nodes. The Arm components comprise approximately 95 percent of the computer’s almost 159,000 nodes.

In addition to demonstrating world-beating speed, Fugaku beat the competition in the High Performance Conjugate Gradients (HPCG) benchmark, used to test real-world application performance; the Graph500, a rating for data-intensive loads; and HPL-AI, a benchmark for rating artificial intelligence workloads. A Fugaku prototype also took top spot for the most energy-efficient system on the Green500 list last November, achieving an outstanding 16.9 gigaflops per watt of power efficiency during a 2.0-petaflop Linpack performance run.

Driving Fugaku’s success is Fujitsu’s 48-core Arm v8.2-A A64FX CPU, which the company is billing as the world’s first CPU to adopt Scalable Vector Extension—an instruction-set extension of the Arm v8-A architecture for supercomputers. The 2.2-GHz CPU has 512-bit-wide vector units, employs 3D-stacked memory delivering 1,024 gigabytes per second, and can handle half-precision arithmetic and multiply-add operations that reduce memory loads in AI and deep-learning applications where lower precision is admissible. The CPUs are directly linked by the Tofu D interconnect, a 6.8-gigabytes-per-second network that uses a six-dimensional mesh/torus topology.
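To see why half-precision support matters for memory traffic, compare the storage cost of one weight matrix at different precisions. The NumPy sketch below is a generic illustration, not Fugaku- or A64FX-specific code.

```python
import numpy as np

# The same 1,024 x 1,024 weight matrix stored at three precisions.
w = np.random.rand(1024, 1024)
for dtype in (np.float64, np.float32, np.float16):
    print(np.dtype(dtype).name, w.astype(dtype).nbytes / 1e6, "MB")
# float64: ~8.4 MB, float32: ~4.2 MB, float16: ~2.1 MB. Every load from memory
# moves half (or a quarter) as many bytes when lower precision is admissible.
```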

During three years of planning for the computer, starting in 2011, a number of designs and architectures were considered. “Our guiding strategy was to build a science-driven, low-powered machine that was easy to use and could run science and engineering applications efficiently,” says Toshiyuki Shimizu, Principal Engineer of Fujitsu’s Platform Development Unit.

Independent observers say they succeeded in every element of the goal. “Fugaku is very impressive with over 7 million cores,” says Jack Dongarra, director of the Innovative Computing Lab, University of Tennessee, Knoxville. “The machine was designed for doing computational science problems from the ground up. It’s a first.”

As for the choice of Arm architecture, Shimizu notes the large number of application developers supporting Arm. “Fugaku also supports Red Hat Enterprise Linux 8.x, a de facto standard operating system widely used by commercial servers,” he points out. 

Another plus for Fugaku is that it follows the K computer by maintaining an all-CPU design. Shimizu says this makes memory access and CPU interconnectivity more efficient. Most other supercomputers rely on graphic processing units (GPUs) to accelerate performance. 

Dongarra points out an additional benefit here. “A CPU-only system simplifies the programming. Just one program is needed, not two: one for the CPU and one for the GPU.”

Designing and building a computer that, from the ground up, was intended to be Japan’s national flagship didn’t come cheap, of course. The government’s estimated budget for the project’s R&D, acquisitions, and application development is 110 billion yen (roughly US $1 billion). 

Fujitsu dispatched the first units of Fugaku to the RIKEN Center for Computational Science (R-CCS) in Kobe last December and shipments were completed last month. 

Speaking at the ISC 2020 conference in June, Satoshi Matsuoka, Director of R-CCS, said that although Fugaku was scheduled to start up next year, Japan’s government decided it should be deployed now to help combat Covid-19. He noted that it was already being used to study how the virus behaves, which existing drugs might be repurposed to counter it, and how a vaccine could be made.

Other government-targeted application areas given high priority include: disaster-prevention simulations of earthquakes and tsunami; development of fundamental technologies for energy creation, conversion, and storage; creation of new materials to support next-generation industries; and development of new design and production processes for the manufacturing industry. 

Fugaku will also be used to realize the creation of a smarter society—dubbed Society 5.0—“that balances economic advancement with the resolution of social problems by a system that highly integrates cyberspace and physical space.” 

But the supercomputer industry is nothing if not a game of technology leapfrog, with one country or enterprise providing machines with the highest performance only to be outpaced a short time later. Just how long will Fugaku stay No. 1? 

Shimizu doesn’t claim to know, but he says there is room for further improvement of Fugaku’s performance. “The TOP500 result was only 81 percent of peak performance, whereas the efficiency of silicon is higher. We believe we can improve the performance in all the categories.”

But even that might not be enough to keep it on top for long. As Dongarra says, “The U.S. will have exascale machines in 2021.”  

Honeywell Claims It Has Most Powerful Quantum Computer

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/honeywell-claims-it-has-most-powerful-quantum-computer

“We expect within next three months we will be releasing world’s most powerful quantum computer,” Tony Uttley, president of Honeywell Quantum Solutions, told IEEE Spectrum in March. Right on cue, last week the company claimed it had reached that mark. The benchmark measurement, called quantum volume, is essentially a combined measure of the number of physical qubits, how connected they are, and how error prone they are. For Honeywell’s system, which has six qubits, that number is now 64, beating a 53-qubit IBM system that had a quantum volume of 32.

The quantum volume measure isn’t a universally accepted benchmark and has an unclear relationship to the “quantum supremacy” goal Google claimed in 2019, which compares a quantum computer to the theoretical peak performance of classical computers.  But Uttley says it’s the best measure so far. “It takes into account more than just how many physical qubits you have,” he says. Just going by the number of qubits doesn’t work because, “you don’t necessarily get all or even any of the benefits of physical qubits” in real computations.
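For reference, quantum volume (in the benchmark definition IBM introduced and Honeywell uses here) is derived from the largest “square” random circuit, with width and depth both equal to m, that the machine can run while producing so-called heavy outputs more than two-thirds of the time. Here is a sketch of that definition, omitting the benchmark’s statistical confidence requirements:

```latex
% Quantum volume, sketched; the full benchmark also demands confidence bounds.
\[
  \log_2 V_Q \;=\; \max_{m}\Bigl\{\, m \;:\;
    \Pr\bigl[\text{heavy output of a random } m \times m \text{ circuit}\bigr] > \tfrac{2}{3} \Bigr\}
\]
% Passing at m = 6 on Honeywell's six qubits gives V_Q = 2^6 = 64; the IBM
% system cited above, passing at m = 5, gives V_Q = 2^5 = 32.
```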

Honeywell’s computer uses ytterbium ions trapped by an electromagnetic field within a narrow groove built in a chip. The qubit is represented by the spin state of the ion’s outermost electron and that of its nucleus. The qubits are manipulated using lasers and can be moved around the trap to carry out algorithms. Much of the quantum volume advantage of this system comes from the length of time the qubits can hold their state before noise corrupts them and crashes the computation. In the ion trap, they last for seconds, as opposed to the microseconds of many other systems. Uttley says this long “coherence time” allows for mid-circuit measurements—a quantum version of if/then programming statements—in quantum algorithms.

Because of COVID-19, most of the United States went into lockdown within weeks of Honeywell’s March prediction. So hitting the mark took a different path than expected. “We had to redesign physical layout of labs to keep social distance,” including adding plexiglass dividers, explains Uttley. And only 30 percent of the project team worked on site. “We pulled in a tremendous amount of automation,” he says.

The quantum computer itself is designed to be accessed remotely, Uttley explains. The company plans to offer it as a cloud service. And partners, such as the bank JP Morgan Chase, are already running algorithms on it. The latter firm is interested in quantum algorithms for fraud detection, optimization for trading strategies, and security. More broadly, customers want to explore problems of optimization, machine learning, and chemistry and materials science.

Uttley predicts 10-fold boosts in quantum volume per year going forward. His confidence comes from the nature of the ion trap system his team has developed. “It’s like we built a stadium, but right now we’re only occupying a handful of seats.”

Making Blurry Faces Photorealistic Goes Only So Far

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/software/making-blurry-faces-photorealistic-goes-only-so-far

One more trope of Hollywood spy movies has now taken at least a partial turn toward science-fact. You’ve seen it: To save the world, the secret agent occasionally needs a high-res picture of a human face recovered from a very blurry, grainy, or pixelated image.

Now, artificial intelligence has delivered a partial (though probably ultimately unhelpful) basket of goods for that fictional spy. It’s only partial because Claude Shannon’s Theory of Information Entropy always gets the last laugh. As a new algorithm now demonstrates for anyone to see, an AI-generated photorealistic face “upsampled” from the low-res original probably looks very little like the person our hero is racing the clock to track down. There may even be no resemblance at all.

Sorry, Mr. Bond and Ms. Faust. That handful of pixels in the source image contains only so much information. That’s because, however convincingly the AI renders that imaginary face—and computer-generated faces can be quite uncanny these days—there’s no dodging the fact that the original image was, in fact, very information-sparse. No one, not even someone with a license to kill, gets to extract free information out of nowhere.

But that’s not the end of the story, says Cynthia Rudin, professor of computer science at Duke University in Durham, N.C. There may be other kinds of value to be extracted from the AI algorithm she and her colleagues have developed.

For starters, Rudin said, “We kind of proved that you can’t do facial recognition from blurry images because there are so many possibilities. So zoom and enhance, beyond a certain threshold level, cannot possibly exist.”

However, Rudin added that “PULSE,” the Python module her group developed, could have wide-ranging applications beyond just the possibly problematic “upsampling” of pixelated images of human faces. (Though it’d only be problematic if misused for facial recognition purposes. Rudin said there are no doubt any number of unexplored artistic and creative possibilities for PULSE, too.)

Rudin and four collaborators at Duke developed their Photo Upsampling via Latent Space Exploration algorithm (accepted for presentation at the 2020 Conference on Computer Vision and Pattern Recognition earlier this month) in response to a challenge.

“A lot of algorithms in the past have tried to recover the high-resolution image from the low-res/high-res pair,” Rudin said. But according to her, that’s probably the wrong approach. Most real-world applications of this upsampling problem would involve having access to only the low-res original image. That would be the starting point from which one would try to recreate the high-resolution equivalent of that low-res original.

“When we finally abandoned trying to come up with the ground truth, we then were able to take the low-res [picture] and try to construct many very good high-res images,” Rudin said.
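That construction works roughly as follows: wander the latent space of a face-generating GAN and keep candidates that, once downscaled, match the low-res input. The sketch below captures the idea under loose assumptions; `generator`, `latent_dim`, and `downscale` are placeholders rather than the actual PULSE or StyleGAN API, and the real method adds constraints that keep the latent vector on the generator’s natural image manifold.

```python
import torch

def pulse_style_search(low_res, generator, downscale, steps=200, lr=0.1):
    """Minimal latent-space upsampling sketch in the spirit of PULSE.

    `generator` is any pretrained face GAN mapping a latent vector to a
    high-res image; `downscale` shrinks images to the low-res target size.
    Both are placeholders, not the real PULSE interface.
    """
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        candidate = generator(z)                            # photorealistic guess
        loss = torch.norm(downscale(candidate) - low_res)   # must match the blurry input
        loss.backward()
        opt.step()
    return generator(z).detach()  # one of many plausible high-res faces
```

Running the search from different random starting vectors yields the “many very good high-res images” Rudin describes, each consistent with the same blurry original.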

So while PULSE looks beyond the failure point of facial recognition applications, she said, it may still find applications in fields that grapple with their own blurry images—among them, astronomy, medicine, microscopy, and satellite imagery.

Rudin cautions that so long as anyone using PULSE understands that it generates a broad range of possible images, any one of which could be the progenitor of the blurry image that’s available, PULSE has the potential to give researchers a better understanding of a given imaginative space.

Say, for instance, an astronomer has a blurry image of a black hole. Coupled with an AI imaging tool that generates astronomical images, PULSE could render many possible astrophysical scenarios that might have yielded that low-res photograph.

At the moment, PULSE is optimized for human faces, because NVIDIA already developed an AI “generative adversarial network” (GAN) that creates photorealistic human faces. So the applications the PULSE team explored built atop NVIDIA’s StyleGAN algorithm.

In other words, PULSE provides the sorting and exploring tools that sit atop the GAN, which, on its own, mindlessly sprays out endless supplies of images of whatever it has been trained to make.

Rudin also sees a possible PULSE application in the field of architecture and design.

“There’s not a StyleGAN for that many other things right now,” she said. “It’d be nice to be able to generate rooms. If you had just a few pixels, it’d be nice to develop a full picture of a room. That would be cool. And that’s probably coming.

“Anytime you have that kind of generative modeling, you can use PULSE to search through that space,” she said.

And so long as searching through that space doesn’t involve a ticking timebomb set to detonate when it hits “00:00,” this PULSE may still ultimately open more doors than it blows off its hinges.

Can Software Performance Engineering Save Us From the End of Moore’s Law?

Post Syndicated from Charles E. Leiserson original https://spectrum.ieee.org/tech-talk/computing/software/software-engineering-can-save-us-from-the-end-of-moores-law

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

In the early years of aviation, one might have been forgiven for envisioning a future of ever-faster planes. Speeds had grown from 50 kilometers per hour for the Wright brothers in 1903, to about 1000 kph for a Boeing 707 in the 1960s. But since then, commercial aircraft speeds have stagnated because higher speeds make planes so energy-inefficient.

Today’s computers suffer from a similar issue. For decades, our ability to miniaturize components led to us doubling the number of transistors on a silicon chip every two years or so. This phenomenon, known as Moore’s Law (named after Intel co-founder Gordon Moore), has made computing exponentially cheaper and more powerful for decades. But we’re now reaching the limits of miniaturization, and so computing performance is stagnating.

This is a problem. Had Moore’s Law ended 20 years ago, the processors in today’s computers would be roughly 1000 times less powerful, and we wouldn’t have iPhones, Alexa or movie-streaming. What innovations might we miss out on 20 years from now if we can’t continue to improve computing performance? 

In recent years, researchers like us have been scratching our heads about what to do next. Some hope that the answer is new technologies like quantum computing, carbon nanotubes, or photonic computing. But after several years studying the situation with other experts at MIT, we believe those solutions are uncertain and could be many years in the making. In the interim, we shouldn’t count on a complete reinvention of the computer chip; we should re-code the software that runs on it. 

As we outline in an article this week in Science, for years programmers haven’t had to worry about making code run faster, because Moore’s Law did that for them. And so they took shortcuts, prioritizing their ability to write code quickly over the ability of computers to run that code as fast as possible. 

For example, many developers use techniques like “reduction”: taking code that worked on problem A, and using it to solve problem B, even if it is an inefficient way of doing it. Suppose you want to build a Siri-like system to recognize yes-or-no voice commands. Instead of building a custom program to do that, you might be tempted to use an existing program that recognizes a wide range of words, and tweak it to respond only to yes-or-no answers.

The good news is that this approach helps you write code faster. The bad news: It sometimes yields a staggering amount of inefficiency. And inefficiencies can quickly compound.  If a single reduction is 80 percent as efficient as a custom solution, and you write a program with twenty layers of reduction, the code will be 100 times less efficient than it could be.
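The arithmetic behind that figure is simple compounding, as this quick check shows:

```python
# Twenty layers of reduction, each 80 percent as efficient as a custom solution:
efficiency = 0.8 ** 20
print(efficiency)       # ~0.0115
print(1 / efficiency)   # ~87x less efficient, on the order of the "100 times" above
```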

This is no mere thought experiment. Being able to make further advances in fields like machine learning, robotics, and virtual reality will require huge amounts of computational power. If we want to harness the full potential of these technologies, we have to make changes.  As our Science article suggests, there are opportunities in developing new algorithms and streamlining computer hardware. But for most companies, the most practical way to get more computing performance is through software performance engineering—that is, making software more efficient. 

One performance engineering strategy is to “parallelize” code. Most existing software has been designed using decades-old models that assume processors can only perform one operation at a time. That’s inefficient because modern processors can do many calculations at the same time by using multiple cores on each chip, and there is parallelism built into each core as well. Strategies like parallel computing can allow some complex tasks to be completed hundreds of times faster and in a much more energy-efficient way.
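As a concrete, if toy, illustration of that strategy, the Python sketch below spreads a batch of independent tasks across processor cores; it is generic example code, not code from the Science article.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def heavy(x):
    # Stand-in for an expensive computation that does not depend on other tasks.
    return sum(math.sqrt(i) for i in range(x))

if __name__ == "__main__":
    inputs = [2_000_000] * 8

    # Serial: one core processes every task in turn.
    serial = [heavy(x) for x in inputs]

    # Parallel: the same independent tasks run across all available cores.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(heavy, inputs))

    assert parallel == serial  # identical answers, potentially several times faster
```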

While software performance engineering may be the best path forward, it won’t be an easy one. Updating existing programs to run more quickly is a huge undertaking, especially with a shortage of coders trained in parallel programming and other performance-engineering strategies. Moreover, leaders of forward-looking companies must fight against the institutional inertia of doing things how they’ve always been done. 

Nimble tech giants like Google and Amazon have already gotten this memo. The massive scale of their data centers means that even small improvements in software performance can yield big financial returns. Where these companies have led, the rest of the world must follow. For application developers, efficiency can no longer be ignored when rolling out new features and functionality. For companies, it may mean replacing long-standing software systems that are just barely eking along.

Performance engineering will be riskier than Moore’s Law ever was. Companies may not know the benefits of their efforts until after they’ve invested substantial programmer time. And speed-ups may be sporadic, uneven, and unpredictable. But as we reach the physical limits of microprocessors, focusing on software performance engineering seems like the best option for most programmers to get more out of their computers.

The end of Moore’s Law doesn’t mean your laptop is about to grind to a halt. But if we want to make real progress in fields like artificial intelligence and robotics, we must get more creative and spend the time needed to performance engineer our software.

About the Authors:

Charles E. Leiserson is a professor of computer science and engineering at MIT and an IEEE Fellow; Tao B. Schardl and Neil C. Thompson are research scientists at MIT.

Novel Error Correction Code Opens a New Approach to Universal Quantum Computing

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/computing/software/novel-error-correction-code-opens-a-new-approach-to-universal-quantum-computing

Government agencies and universities around the world—not to mention tech giants like IBM and Google—are vying to be the first to answer a trillion-dollar quantum question: How can quantum computers reach their vast potential when they are still unable to consistently produce results that are reliable and free of errors? 

Every aspect of these exotic machines—including their fragility and engineering complexity; their preposterously sterile, low-temperature operating environment; complicated mathematics; and their notoriously shy quantum bits (qubits) that flip if an operator so much as winks at them—is a potential source of errors. It says much for the ingenuity of scientists and engineers that they have found ways to detect and correct these errors and have quantum computers working to the extent that they do: at least long enough to produce limited results before errors accumulate and quantum decoherence of the qubits kicks in.

When it comes to correcting errors arising during quantum operations, an error-correction method known as the surface code has drawn a lot of research attention. That’s because of its robustness and the fact that it’s well suited to being set out on a two-dimensional plane (which makes it amenable to being laid down on a chip). The surface code uses the phenomenon known as entanglement (quantum connectivity) to enable single qubits to share information with other qubits on a lattice layout. The benefit: When qubits are measured, they reveal errors in neighboring qubits.
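The idea that measuring some bits reveals errors in their neighbors, without reading the protected information directly, has a simple classical analogue: parity checks on a repetition code. The sketch below is only that analogy, not the surface code itself.

```python
# Classical analogy only: a 3-bit repetition code with two parity checks.
# Like the surface code's neighbor measurements, the checks reveal *where*
# an error sits without reading out the protected logical bit directly.

def syndrome(bits):
    # Parity of neighboring pairs, loosely analogous to measuring ancilla qubits.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

codeword = [1, 1, 1]          # logical "1", stored redundantly
corrupted = [1, 0, 1]         # a flip has hit the middle bit

lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}   # syndrome -> flipped position
flipped = lookup[syndrome(corrupted)]
if flipped is not None:
    corrupted[flipped] ^= 1   # apply the correction

assert corrupted == codeword
```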

For a quantum computer to tackle complicated tasks, error-correction codes need to be able to perform quantum gate operations; these are small logic operations carried out on qubit information that, when combined, can run algorithms. Classical computing analogs would be AND gates, XOR gates, and the like. 

Physicists describe two types of quantum gate operations (distinguished by their different mathematical approaches) that are necessary to achieve universal computing. One of these, the Clifford gate set, must work in combination with magic-state distillation—a purification protocol that uses multiple noisy quantum states to perform non-Clifford gate operations.

“Without magic-state distillation or its equivalent, quantum computers are like electronic calculators without the division button; they have limited functionality,” says Benjamin Brown, an EQUS researcher at the University of Sydney’s School of Physics. “However, the combination of Clifford and non-Clifford gates can be prohibitive because it eats up so much of a quantum computer’s resources that there’s little left to deal with the problem at hand.”

To overcome this problem, Brown has developed a new type of non-Clifford-gate error-correcting method that removes the need for overhead-heavy distillation. A paper he published on this development appeared in Science Advances on 22 May. 

“Given it is understood to be impossible to use two-dimensional code like the surface code to do the work of a non-Clifford gate, I have used a three-dimensional code and applied it to the physical two-dimensional surface code scheme using time as the third dimension,” explains Brown. “This has opened up possibilities we didn’t have before.”

The non-Clifford gate uses three overlapping copies of the surface code that locally interact over a period of time. This is carried out by taking thin slices of the 3D surface code and collapsing them down into a 2D space. The process is repeated over and over on the fly with the help of just-in-time gauge fixing, a procedure for stacking together the two-dimensional slices onto a chip, as well as dealing with any occurring errors. Over a period of time, the three surface codes replicate the three-dimensional code that can perform the non-Clifford gate function(s).

“I’ve shown this to work theoretically, mathematically,” says Brown. “The next step is to simulate the code and see how well it works in practice.”

Michael Beverland, a senior researcher at Microsoft Quantum, commented on the research: “Brown’s paper explores an exciting, exotic approach to perform fault-tolerant quantum computation. It points the way towards potentially achieving universal quantum computation in two spatial dimensions without the need for distillation—something many researchers thought was impossible.”

Brown notes that reducing errors in quantum computing is one of the biggest challenges facing scientists before machines capable of solving useful problems can be built. “My approach to suppressing errors could free up a lot of the hardware from error correction and will allow the computer to get on with doing useful stuff.”

Crowdsourcing Package Deliveries Using Taxis

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/networks/proposed-crowdsourcing-platform-uses-taxis-to-deliver-packages


For urban areas where the demand for package delivery is only increasing, a group of researchers is proposing an intriguing solution: a crowdsourcing platform that uses taxis to deliver packages. In a study published 29 April in IEEE Transactions on Big Data, they show that such an approach could be used to deliver 9,500 packages on time per day in a city the size of New York.

Their platform, called CrowdExpress, allows a customer to set a deadline by which they want a package delivered. Taxis, which are already traveling in diverse trajectories across urban areas with passengers, are then crowdsourced to deliver the packages. Taxi drivers would be responsible for picking up and dropping off the packages, but only before and after dropping off their passengers, which avoids any inconvenience to passengers.

Chao Chen, a researcher at Chongqing University who was involved in the study, says this proposed approach offers several advantages. “For customers, packages can be delivered more economically, but [also] more speedily,” he says. “For taxi drivers, more money can be earned when taking passengers to their destinations, with only a small additional effort.”

Since the average taxi ride tends to be only a few kilometers, Chen’s group acknowledges that more than one taxi may be required to deliver a package. Therefore, with their approach, they envision a network of relay stations, where packages are collected, temporarily stored, and transferred.

In developing CrowdExpress, they first used historical taxi GPS trajectory data to figure out where to place the nodes on the package transport network. Next, they developed an online scheduling algorithm to coordinate package deliveries using real-time requests for taxis. The algorithm calculates the probability that a package will be delivered on time using a taxi that is currently available, or taxis that are likely to be available in the near future. In this way, the algorithm prioritizes human pick-ups and drop-offs while still meeting package delivery deadlines set by customers.
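In spirit, each dispatch decision weighs handing the package to a taxi that is available now against waiting for a likelier future one. The toy rule below is a placeholder illustration of that trade-off, not the authors’ algorithm; in a real system both probability estimates would come from the historical trajectory data.

```python
def assign_or_wait(p_now, p_future, deadline_hours, hours_elapsed):
    """Toy dispatch rule in the spirit of the description above.

    p_now:    estimated probability that a currently available taxi relays the
              package to its destination before the customer's deadline.
    p_future: estimated probability that waiting for a later taxi still meets
              the deadline. Both estimates and this rule are placeholders.
    """
    time_left = deadline_hours - hours_elapsed
    if time_left <= 0:
        return "assign now"   # nothing left to gain by waiting
    # Dispatch immediately unless waiting is clearly more likely to succeed.
    return "assign now" if p_now >= p_future else "wait for a better taxi"

print(assign_or_wait(p_now=0.7, p_future=0.55, deadline_hours=6, hours_elapsed=2))
```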

In their study, Chen and his colleagues evaluated the platform using the real-world taxi data generated during a single month by over 19,000 taxis in New York City. The results show that this technique could be used to ferry up to 20,000 packages a day—but with a deadline success rate of 40 percent at that volume. If CrowdExpress were used to transport 9,500 packages a day across the Big Apple, the success rate would reach 94 percent.

“We plan to commercialize CrowdExpress, but there are still many practical issues to be addressed before truly realizing the system,” Chen admits. “The maintenance cost and the potential package loss or damage at the package stations is one example,” he adds.

“One promising solution to address the issue is to install unmanned and automatic smart boxes at the stations. In this way, packages can be safely stored and drivers are required to enter a one-time password to get them.”

Lasers Write Data Into Glass

Post Syndicated from Amy Nordrum original https://spectrum.ieee.org/computing/hardware/lasers-write-data-into-glass

Magnetic tape and hard disk drives hold much of the world’s archival data. Compared with other memory and storage technologies, tape and disk drives cost less and are more reliable. They’re also nonvolatile, meaning they don’t require a constant power supply to preserve data. Cultural institutions, financial firms, government agencies, and film companies have relied on these technologies for decades, and will continue to do so far into the future.

But archivists may soon have another option—using an extremely fast laser to write data into a 2-millimeter-thick piece of glass, roughly the size of a Post-it note, where that information can remain essentially forever.

This experimental form of optical data storage was demonstrated in 2013 by researchers at the University of Southampton in England. Soon after, that group began working with engineers at Microsoft Research in an effort called Project Silica. Last November, Microsoft completed its first proof of concept by writing the 1978 film Superman on a single small piece of glass and retrieving it.

With this method, researchers could theoretically store up to 360 terabytes of data on a disc the size of a DVD. For comparison, Panasonic aims to someday fit 1 TB on conventional optical discs, while Seagate and Western Digital are shooting for 50- to 60-TB hard disk drives by 2026.

International Data Corp. expects the world to produce 175 zettabytes of data by 2025—up from 33 ZB in 2018. Though only a fraction of that data will be stored, today’s methods may no longer suffice. “We believe people’s appetite for storage will force scientists to look into other kinds of materials,” says Waguih Ishak, chief technologist at Corning Research and Development Corp.

Microsoft’s work is part of a broader company initiative to improve cloud storage through optics. “I think they see it as potentially a distinguishing technology from something like [Amazon Web Services] and other cloud providers,” says James Byron, a Ph.D. candidate in computer science at the University of California, Santa Cruz, who studies storage methods.

Microsoft isn’t alone—John Morris, chief technology officer at Seagate, says researchers there are also focused on understanding the potential of optical data storage in glass. “The challenge is to develop systems that can read and write with reasonable throughput,” he says.

Writing data to glass involves focusing a femtosecond laser, which pulses very quickly, on a point within the glass. The glass itself is a sort known as fused silica. It’s the same type of extremely pure glass used for the Hubble Space Telescope’s mirror as well as the windows on the International Space Station.

The laser’s pulse deforms the glass at its focal point, forming a tiny 3D structure called a voxel. Two properties that measure how the voxel interacts with polarized light—retardance and change in the light’s polarization angle—can together represent several bits of data per voxel.
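Roughly speaking, discretizing those two properties into a handful of levels turns each voxel into a small symbol alphabet. The toy encoding below illustrates the idea with made-up levels; it is not Project Silica’s actual scheme.

```python
import itertools

# Toy encoding only: four retardance levels x four polarization angles gives
# 16 distinguishable voxel states, i.e. 4 bits per voxel. The specific values
# are placeholders, not Project Silica's real parameters.
RETARDANCE_LEVELS = [0, 1, 2, 3]
ANGLE_LEVELS = [0, 45, 90, 135]          # degrees
STATES = list(itertools.product(RETARDANCE_LEVELS, ANGLE_LEVELS))

def encode_nibble(value):
    """Map a 4-bit value (0-15) to one (retardance, angle) voxel state."""
    return STATES[value]

def decode_voxel(state):
    return STATES.index(state)

assert decode_voxel(encode_nibble(0b1011)) == 0b1011
```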

Microsoft can currently write hundreds of layers of voxels into each piece of glass. The glass can be written to once and read back many times. “This is data in glass, not on glass,” says Ant Rowstron, a principal researcher and deputy lab director at Microsoft Research Lab in Cambridge, England.

Reading data from the glass requires an entirely different setup, which is one potential drawback of this method. Researchers shine different kinds of polarized light—in which light waves all oscillate in the same direction, rather than every which way—onto specific voxels. They capture the results with a camera. Then, machine-learning algorithms analyze those images and translate their measurements into data.

Ishak, who is also an adjunct professor of electrical engineering at Stanford University, is optimistic about the approach. “I’m sure that in the matter of a decade, we’ll see a whole new kind of storage that eclipses and dwarfs everything that we have today,” he says. “And I firmly believe that those pure materials like fused silica will definitely play a major role there.”

But many scientific and engineering challenges remain. “The writing process is hard to make reliable and repeatable, and [it’s hard] to minimize the time it takes to create a voxel,” says Rowstron. “The read process has been a challenge in figuring out how to read the data from the glass using the minimum signal possible from the glass.”

The Microsoft group has added error-correcting codes to improve the system’s accuracy and continues to refine its machine-learning algorithms to automate the read-back process. Already, the team has improved writing speeds by several orders of magnitude from when they began, though Rowstron declined to share absolute speeds.

The team is also considering what it means to store data for such a long time. “We are working on thinking what a Rosetta Stone for glass could look like to help people decode it in the future,” Rowstron says.

This article appears in the June 2020 print issue as “Storing Data in Glass.”

What Is Confidential Computing?

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/computing/hardware/what-is-confidential-computing

A handful of major technology companies are going all in on a new security model they’re calling confidential computing in an effort to better protect data in all its forms.

The three pillars of data security involve protecting data at rest, in transit, and in use. Protecting data at rest means using methods such as encryption or tokenization so that even if data is copied from a server or database, a thief can’t access the information. Protecting data in transit means making sure unauthorized parties can’t see information as it moves between servers and applications. There are well-established ways to provide both kinds of protection.

Protecting data while in use, though, is especially tough because applications need to have data in the clear—not encrypted or otherwise protected—in order to compute. But that means malware can dump the contents of memory to steal information. It doesn’t really matter if the data was encrypted on a server’s hard drive if it’s stolen while exposed in memory.
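The gap is easy to see in code: a record can sit encrypted on disk, yet it must be decrypted into ordinary memory the moment an application needs to compute on it. The sketch below uses the widely available Python cryptography library purely as an illustration; it is not tied to any confidential-computing product.

```python
from cryptography.fernet import Fernet

# Protecting data "at rest": the record is stored encrypted, so a stolen copy
# of the database file is useless without the key.
key = Fernet.generate_key()          # in practice, held in a key-management service
fernet = Fernet(key)

record = b"card=4111111111111111;cvv=123"
stored = fernet.encrypt(record)      # what actually sits on disk

# ...and the gap confidential computing targets: to *use* the data, the
# application must decrypt it into plain memory, where malware could read it.
in_use = fernet.decrypt(stored)
assert in_use == record
```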

Proponents of confidential computing hope to change that. “We’re trying to evangelize there are actually practical solutions” to protect data while it’s in use, said Dave Thaler, a software architect from Microsoft and chair of the Confidential Computing Consortium’s Technical Advisory Council.

The consortium, launched last August under the Linux Foundation, aims to define standards for confidential computing and support the development and adoption of open-source tools. Members include technology heavyweights such as Alibaba, AMD, Arm, Facebook, Fortanix, Google, Huawei, IBM (through its subsidiary Red Hat), Intel, Microsoft, Oracle, Swisscom, Tencent, and VMware. Several already have confidential computing products and services for sale.

Confidential computing uses hardware-based techniques to isolate data, specific functions, or an entire application from the operating system, hypervisor or virtual machine manager, and other privileged processes. Data is stored in the trusted execution environment (TEE), where it’s impossible to view the data or operations performed on it from outside, even with a debugger. The TEE ensures that only authorized code can access the data. If the code is altered or tampered with, the TEE denies the operation.

Many organizations have declined to migrate some of their most sensitive applications to the cloud because of concerns about potential data exposure. Confidential computing makes it possible for different organizations to combine data sets for analysis without accessing each other’s data, said Seth Knox, vice president of marketing at Fortanix and the outreach chair for the Confidential Computing Consortium. For example, a retailer and credit card company could cross-check customer and transaction data for potential fraud without giving the other party access to the original data.

Confidential computing may have other benefits unrelated to security. An image-processing application, for example, could store files in the TEE instead of sending a video stream to the cloud, saving bandwidth and reducing latency. The application may even divide up such tasks on the processor level, with the main CPU handling most of the processing, but relying on a TEE on the network interface card for sensitive computations.

Such techniques can also protect algorithms. A machine-learning algorithm, or an analytics application such as a stock trading platform, can live inside the TEE. “You don’t want me to know what stocks you’re trading, and I don’t want you to know the algorithm,” said Martin Reynolds, a technology analyst at Gartner. “In this case, you wouldn’t get my code, and I wouldn’t get your data.”

Confidential computing requires extensive collaboration between hardware and software vendors so that applications and data can work with TEEs. Most confidential computing performed today runs on Intel servers (like the Xeon line) with Intel Software Guard Extension (SGX), which isolates specific application code and data to run in private regions of memory. However, recent security research has shown that Intel SGX can be vulnerable to side-channel and timing attacks.

Fortunately, TEEs aren’t available only in Intel hardware. OP-TEE is a TEE for nonsecure Linux kernels running on Arm Cortex-A cores. Microsoft’s Virtual Secure Mode is a software-based TEE implemented by Hyper-V (the hypervisor for Windows systems) in Windows 10 and Windows Server 2016.

The Confidential Computing Consortium currently supports a handful of open-source projects, including the Intel SGX SDK for Linux, Microsoft’s Open Enclave SDK, and Red Hat’s Enarx. Projects don’t have to be accepted by the consortium to be considered confidential computing: For example, Google’s Asylo is similar to Enarx, and Microsoft Azure’s confidential computing services support both Intel SGX and Microsoft’s Virtual Secure Mode.

Hardware-based TEEs can supplement other security techniques, Thaler said, including homomorphic encryption and secure element chips such as the Trusted Platform Module. “You can combine these technologies because they are not necessarily competing,” he said. “Are you looking at the cloud or looking at the edge? You can pick which techniques to use.”

This article appears in the June 2020 print issue as “The Rise of Confidential Computing.”

Interactive e-Learning Platform Boosts Performance of New Musicians

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/interactive-elearning-platform-boosts-performance-of-new-musicians

While many budding musicians find joy in playing their instruments, not all are as enthusiastic to learn about music theory or the nuances of sound. To make lessons more engaging for music students, a group of researchers in Slovenia have created a new e-learning platform called Troubadour.

The platform can be adapted to different music curriculums and includes gaming features to support student engagement. A controlled study, published 15 May in IEEE Access, shows that Troubadour is effective at boosting the exam performance of first-year music students.

Matevž Pesek, a researcher at the University of Ljubljana who helped build the platform, began playing the accordion at age eight, and has since taken up the keyboard, guitar, and Hammond organ. When he first started playing music—like many children and older beginners—he struggled with some aspects of the learning curve.

“I never completely liked the fact I needed to practice to become more proficient. Moreover, I perceived music theory as a separate problem, completely unconnected to the instrument practice,” he says. “It was only later in my adulthood when I somehow became aware of the importance of the music theory and its connection to the instrument playing.”

Pesek saw an opportunity to create Troubadour. While several online music learning platforms exist, he points out that these are not adaptable to school curriculums and many are only available in English.

“The lack of flexibility–where teachers cannot adjust the exercises according to their curriculum–and the language barrier motivated us to develop a solution for the Slovenian students,” says Pesek. “We have also made the platform’s source code publicly available for other interested individuals and communities; they can expand the platform’s applications, translate the platform to their native language, and also help us further develop the platform.”

With Troubadour, teachers select what features they want incorporated into a music exercise, and an algorithm automatically generates sound sequences to support the exercise. Students then access the Web-based platform to complete interval dictation exercises, in which melodic sequences are played and, upon recognizing the sequences, students record their answers. To make the exercises more engaging, the researchers added gaming features such as badges and a scoreboard that allows students to see where they rank against their peers.
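A toy version of such an exercise generator, with placeholder interval names, note ranges, and question counts rather than Troubadour’s actual code, might look like this:

```python
import random

# Illustrative only: generate interval-dictation questions from the options a
# teacher has selected for the curriculum.
INTERVALS = {"minor third": 3, "major third": 4, "perfect fifth": 7, "octave": 12}

def make_exercise(allowed, n_questions=5, low=48, high=72):
    """Return (start_midi_note, end_midi_note, expected_answer) questions."""
    questions = []
    for _ in range(n_questions):
        name = random.choice(allowed)
        start = random.randint(low, high - INTERVALS[name])
        questions.append((start, start + INTERVALS[name], name))
    return questions

# A teacher restricts this exercise to thirds, as a curriculum might require.
for start, end, answer in make_exercise(["minor third", "major third"]):
    print(f"play MIDI notes {start} -> {end}; expected answer: {answer}")
```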

In their study, Pesek and his colleagues evaluated the effectiveness of Troubadour as a study tool for students enrolled in a music theory course at the Conservatory of Music and Ballet Ljubljana. The data they captured included platform use and exam scores, as well as student and teacher feedback through surveys.

The results showed that, while there was a minimal benefit for second-year music students, first-year students who used Troubadour achieved average exam scores 9.2 percent better than those of students who didn’t. The teachers attributed this performance increase to better student engagement and to the fact that the level of music experience and proficiency among first-year students varies.

The researchers have since expanded Troubadour to include rhythmic dictation exercises, and are now working on harmonic exercises. “We also plan on including several different tools to aid the in-platform communication between teachers and students, and plan to support online exams within the platform,” says Pesek.

Software Development Environments Move to the Cloud

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/software-development-environments-cloud

As a newly hired software engineer, you may find that setting up your development environment is tedious. If you’re lucky, your company will have a documented, step-by-step process to follow. But this still doesn’t guarantee you’ll be up and running in no time. When you’re tasked with updating your environment, you’ll go through the same time-consuming process. With different platforms, tools, versions, and dependencies to grapple with, you’ll likely encounter bumps along the way.

Austin-based startup Coder aims to ease this process by bringing development environments to the cloud. “We grew up in a time where [Microsoft] Word documents changed to Google Docs. We were curious why this wasn’t happening for software engineers,” says John A. Entwistle, who founded Coder along with Ammar Bandukwala and Kyle Carberry in 2017. “We thought that if you could move the development environment to the cloud, there would be all sorts of cool workflow benefits.”

How Many Qubits Are Needed For Quantum Supremacy?

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/qubit-supremacy

Quantum computers can theoretically prove more powerful than any supercomputer. Now scientists have calculated just what quantum computers need to attain such “quantum supremacy,” and whether or not Google achieved it with its claims last year.

Whereas classical computers switch transistors either on or off to symbolize data as ones and zeroes, quantum computers use quantum bits or qubits that, because of the bizarre nature of quantum physics, can be in a state of superposition where they are both 1 and 0 simultaneously.

Superposition lets one qubit perform two calculations at once, and if two qubits are linked through a quantum effect known as entanglement, they can help perform 2², or four, calculations simultaneously; three qubits, 2³, or eight calculations; and so on. In principle, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the visible universe.
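In more formal terms, an n-qubit register holds a superposition over all 2^n classical bit strings; that exponential count of amplitudes is what the claims above rest on. This is a standard textbook statement, included here only for reference:

```latex
\[
  \lvert \psi \rangle \;=\; \sum_{x \in \{0,1\}^n} \alpha_x \,\lvert x \rangle ,
  \qquad \sum_{x} \lvert \alpha_x \rvert^2 = 1 .
\]
% For n = 300, the sum runs over 2^{300} (roughly 2 x 10^{90}) amplitudes,
% far more than the estimated number of atoms in the visible universe.
```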

It remains controversial how many qubits are needed to achieve quantum supremacy over standard computers. Last year, Google claimed to achieve quantum supremacy with just 53 qubits, performing a calculation in 200 seconds that the company estimated would take the world’s most powerful supercomputer 10,000 years, but IBM researchers argued in a blog post “that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.”

To see what quantum supremacy might actually demand, researchers analyzed three different kinds of quantum circuits that might solve problems conventional computers theoretically find intractable. Instantaneous Quantum Polynomial-Time (IQP) circuits are an especially simple way to connect qubits into quantum circuits. Quantum Approximate Optimization Algorithm (QAOA) circuits are more advanced, using qubits to find good solutions to optimization problems. Finally, boson sampling circuits use photons instead of qubits, analyzing the paths such photons take after interacting with one another.

Assuming these quantum circuits were competing against supercomputers capable of up to a quintillion (10¹⁸) floating-point operations per second (FLOPS), the researchers calculated that quantum supremacy could be reached with 208 qubits with IQP circuits, 420 qubits with QAOA circuits, and 98 photons with boson sampling circuits.

“I’m a little bit surprised that we were ultimately able to produce a number that is not so far from the kinds of numbers we see in devices that already exist,” says study lead author Alexander Dalzell, a quantum physicist at the California Institute of Technology in Pasadena. “The first approach we had suggested 10,000 or more qubits would be necessary, and the second approach still suggested almost 2,000. Finally, on the third approach we were able to eliminate a lot of the overhead in our analysis and reduce the numbers to the mere hundreds of qubits that we quote.”

The scientists add quantum supremacy might be possible with even fewer qubits. “In general, we make a lot of worst-case assumptions that might not be necessary,” Dalzell says.

When it comes to Google, the researchers note the company’s claims are challenging to analyze because Google chose a quantum computing task that was difficult to compare to any known algorithm in classical computation.

“I think the claim that they did something with a quantum device that we don’t know how to do on a classical device, without immense resources, is basically accurate as far as I can tell,” Dalzell says. “I’m less confident that there isn’t some yet-undiscovered classical simulation algorithm that, if we only knew about it, would allow us to replicate Google’s experiment, or even a somewhat larger version of their experiment, on a realistic classical device. To be clear, I’m not saying I think such an algorithm exists. I’m just saying that if it did exist, it wouldn’t be completely and totally surprising.”

In the end, “have we reached quantum computational supremacy when we’ve done something that we don’t know how to do with a classical device? Or do we really want to be confident that it’s impossible even using algorithms we might have not yet discovered?” Dalzell asks. “Google seems to be pretty clearly taking the former position, even acknowledging that they expect algorithmic innovations to bring down the cost of classical simulation, but that they also expect the improvement of quantum devices to be sufficient to maintain a state of quantum computational supremacy. They rely on arguments from complexity theory only to suggest that extreme improvements in classical simulation are unlikely. This is definitely a defensible interpretation.”

Future research can analyze how quantum supremacy estimates deal with noise in quantum circuits. “When there’s no noise, the quantum computational supremacy arguments are on pretty solid footing,” Dalzell says. “But add in noise, and you give something that a classical algorithm might be able to exploit.”

The scientists detailed their findings online April 17 in a study accepted in the journal Quantum.

Tech Volunteers Help Overloaded U.S. Government Agencies

Post Syndicated from Michelle V. Rafter original https://spectrum.ieee.org/tech-talk/computing/software/tech-volunteers-help-overloaded-government-agencies

When U.S. Digital Response launched 16 March, it was four colleagues who wanted to pool their collective experience running public-sector technology programs to help government agencies that were buckling under COVID-19.

Since then, the all-volunteer group has scaled exponentially, placing more than 150 people with a range of digital skills into more than 150 short-term or ongoing assignments at 25 agencies at all levels of government, including with state labor departments struggling to keep up with new claims for unemployment insurance benefits.

As of early May, U.S. Digital Response had amassed a database of more than 4,850 other prospective volunteers who filled out the online application on the group’s website to donate their time. The group continues to accept applications for volunteers with digital, policy, and communications skills, and to encourage public agencies to fill out an online form if they need help.  

MIT Media Lab’s Food Computer Project Permanently Shut Down

Post Syndicated from Harry Goldstein original https://spectrum.ieee.org/tech-talk/computing/hardware/mit-media-lab-food-computer-project-shut-down

MIT Media Lab’s Open Agriculture Initiative led by principal scientist Caleb Harper was permanently shuttered by the university on 30 April 2020.

“Caleb Harper’s last day of employment with the Institute was April 30, and as he led the Open Agriculture Initiative at the MIT Media Lab, it is closed at MIT,” Kimberly Allen, Director of Media Relations, told Spectrum in an email.

As for the fate of OpenAg’s Github repository and the OpenAg Forum (which is no longer reachable), Allen said only, “Any legacy digital properties that may be hosted on Media Lab servers will either be closed or moved in time.”

The OpenAg initiative came under scrutiny following the departure in September 2019 of Media Lab director Joichi Ito after revelations that he had solicited and accepted donations for the Media Lab from convicted child sex offender Jeffrey Epstein. In a report [PDF] commissioned by MIT and released in January, the law firm Goodwin Procter LLP found that Ito had “worked in 2018 to obtain $1.5 million from Epstein to support research by Caleb Harper, a Principal Research Scientist at the Media Lab.” The report states that the donation was never made. The report also notes that Harper, Ito, and Professor Ed Boyden met with Epstein at the Media Lab on 15 April 2017, just days before 15 of 17 employees at Harper’s start-up Fenome were dismissed.

OpenAg and Fenome designed, developed and fabricated personal food computers, enclosed chambers the size of a mini-fridge packed with LEDs, sensors, pumps, fans, control electronics, and a hydroponic tray for growing plants. Harper mesmerized audiences and investors around the globe with a vision of “nerd farmers” growing Tuscan tomatoes in portable boxes with recipes optimized by machine learning algorithms. But the food computers never lived up to the hype, though they did make an appearance at the Cooper Hewitt Museum’s Design Triennial in the fall of 2019, where the photos for this post were taken.

Maria T. Zuber, vice president for research at MIT, led an internal investigation following allegations that Harper told MIT staff to demonstrate food computers with plants not grown in them, and that fertilizer solution used by OpenAg was discharged into a well on the grounds of the Bates Research and Engineering Center in Middleton, Mass., in amounts that exceeded limits permitted by the state of Massachusetts. While that investigation was being conducted, OpenAg’s activities were restricted.

The Massachusetts Department of Environmental Protection (MassDEP) concluded its review of OpenAg activities at Bates on 22 April, according to a letter dated 11 May [PDF] to the Bates community from Boleslaw Wyslouch, director of the Laboratory for Nuclear Science and the Bates Research and Engineering Center. MassDEP, Wyslouch said, “fined MIT for discharging spent plant growing solution and dilute cleaning fluids into an Underground Injection Control (UIC) well in violation of the conditions of the well registration terms.”

MassDEP originally fined MIT $25,125 but according to a post on the Bates website detailing the MassDEP review, upon the permanent closure of the OpenAg Initiative, MassDEP “suspended payment of a $10,125 portion of the fine, leaving MIT responsible for paying $15,000.”

The discharge was brought to light by a scientist formerly associated with OpenAg, Babak Babakinejad, who in addition to blowing the whistle on the chemical discharge at Bates, also alleged, in an email to Ito on 5 May 2018, that Harper had taken credit for the deployment of food computers to schools as well as to “a refugee camp in Amman despite the fact that they have never been validated, tested for functionality and up to now we could never make it work i.e. to grow anything consistently, for an experiment beyond prototyping stage.”

A subsequent investigation by Spectrum substantiated Babakinejad’s claims and found that Harper had lied about the supposed refugee camp deployment to potential investors and in several public appearances between 2017 and 2019.

Harper, who for years had been actively promoting the food computer on social media, has been mostly silent since the MIT investigation started last September. His LinkedIn profile now states that he is Executive Director of the Dairy Scale for Good (DS4G) Initiative “working to help US Dairies pilot and integrate new technology and management practices to reach net zero emissions or better while increasing farmer livelihood.”

How Network Science Surfaced 81 Potential COVID-19 Therapies

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/computing/networks/network-science-potential-covid19-therapies


Researchers have harnessed the computational tools of network science to generate a list of 81 drugs used for other diseases that show promise in treating COVID-19. Some are already familiar—including the malaria and lupus treatments chloroquine and hydroxychloroquine—while many others are new, with no known clinical trials underway.

Since the concept was first proposed in 2007, network medicine has applied the science of interconnected relationships among large groups to networks of genes, proteins, interactions, and other biomedical factors. Both Harvard and MIT OpenCourseWare today offer classes in network medicine, while cancer research in particular has experienced a proliferation of network medicine studies and experimental treatments.

Albert-László Barabási, distinguished university professor at Northeastern University in Boston, is generally considered the founder of both network medicine and modern network science. In a recent interview via email, Barabási said COVID-19 represents a tremendous opportunity for a still fledgling science.

“In many ways, the COVID offers a great test for us to marshal the set of highly predictive tools that we as a community [have developed] in the past two decades,” Barabási said.

Last month, Barabási and ten co-authors from Northeastern, Harvard, and Brigham and Women’s Hospital in Boston published a preprint paper proposing a network medicine-based framework for repurposing drugs as COVID-19 therapies. The paper has not yet been submitted for peer review, says Deisy Morselli Gysi, a postdoctoral researcher at Northeastern’s Network Science Institute.

“The paper is not under review anywhere,” she said. “But we are planning of course to submit it once we have [laboratory] results.”

In other words, the 81 potential COVID-19 drugs their computational pipeline discovered are now being investigated in wet-lab studies.

The number-one COVID-19 drug their network-based models predicted was the AIDS-related protease inhibitor ritonavir. The U.S. National Library of Medicine’s ClinicalTrials.gov website lists 108 active or recruiting trials (as of 6 May) involving ritonavir, a number of them for COVID-19 or related conditions.

However, the second-ranked potential COVID-19 drug their models surfaced was the antibacterial and anti-tuberculosis drug isoniazid. ClinicalTrials.gov, again as of 6 May, listed 65 active or recruiting studies for this drug — none of which were for coronavirus. The third- and fourth-ranked drugs (the antibiotic troleandomycin and cilostazol, a drug for strokes and heart conditions) also have no current coronavirus-related clinical trials, according to ClinicalTrials.gov.

Barabási said the group’s study took its lead from a massively collaborative paper published on 27 March that identified 26 of the 29 proteins that make up the SARS-CoV-2 coronavirus particle. That study then identified 332 human proteins that bind to those 26 coronavirus proteins.

Barabási, Gysi and co-researchers then mapped those 332 proteins to the larger map of all human proteins and their interactions. This “interactome” (a molecular biology concept first proposed in 1999) tracks all possible interactions between proteins.

Of those 332 proteins that interact with the 26 known and studied coronavirus proteins, Barabási’s group found that 208 interact with one another. These 208 proteins form an interactive network, or what the group calls a “large connected component” (LCC). A vast majority of these LCC proteins are expressed in the lung, which would explain why the coronavirus manifests so frequently in the respiratory system: the virus is made up of building blocks that can each chemically latch onto a network of interacting proteins, most of which are found in lung tissue.
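The “large connected component” idea is straightforward to reproduce with standard network-analysis tools. Below is a minimal sketch in Python using the networkx library; the edge list and protein names are hypothetical placeholders (the real human interactome and the 332 virus-interacting proteins are not reproduced here), and this is not the paper’s actual pipeline.

```python
# Minimal sketch: finding the "large connected component" (LCC) formed by
# virus-interacting proteins inside a host protein-protein interaction network.
# All protein names and edges below are hypothetical placeholders.
import networkx as nx

# Toy human interactome: nodes are proteins, edges are known interactions.
interactome = nx.Graph()
interactome.add_edges_from([
    ("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
    ("P4", "P5"), ("P6", "P7"), ("P8", "P9"),
])

# Hypothetical set of host proteins that bind viral proteins.
virus_targets = {"P1", "P2", "P3", "P6", "P7", "P9"}

# Subgraph induced by the virus-interacting proteins only.
module = interactome.subgraph(virus_targets)

# The largest connected component of that subgraph is the "disease module" (LCC).
lcc = max(nx.connected_components(module), key=len)
print(f"LCC size: {len(lcc)} of {len(virus_targets)} virus-interacting proteins")
print("LCC members:", sorted(lcc))
```

On the real interactome, the analogous computation yields the 208-protein component the group describes.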

However, the lung was not the only site in the body where Barabási and co-authors discovered coronavirus network-based activity. They also found several brain regions whose expressed proteins interact in large connected networks with coronavirus proteins, meaning their model predicts the coronavirus could also manifest in brain tissue in some patients.

That’s important, Gysi said, because when their models made this prediction, no substantial reporting had yet emerged about neurological COVID-19 comorbidities. Today, though, it’s well known that some patients experience a neurological loss of taste and smell, while others experience strokes at higher rates.

Brains and lungs aren’t the only possible hosts for the novel coronavirus. The group’s findings also indicate that the virus may manifest in some patients in the reproductive organs, the digestive system (colon, esophagus, pancreas), the kidneys, the skin, and the spleen (which could relate to the immune system dysfunction seen in some patients).

Of course, the first drug the FDA approved for emergency use specifically against COVID-19 is the antiviral remdesivir, a nucleotide analog that targets the virus’s RNA polymerase. However, Barabási and Gysi’s group did not surface that drug at all in their study.

This is for a good reason, Gysi explained. Remdesivir targets the SARS-CoV-2 virus specifically, not any interactions between the virus and the human body. So remdesivir would not have shown up on the map of their network science-based analysis, she said.
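One way to see why a virus-only drug falls outside such an analysis: a common network-medicine ranking measures how close a drug’s human protein targets sit to the disease module in the interactome, so a drug with no human targets in the network has nothing to score. The sketch below illustrates the idea with made-up data; it is one standard proximity measure, not necessarily the exact method used in the paper.

```python
# Illustrative sketch (hypothetical data): ranking drugs by the average
# shortest-path distance from their human protein targets to a disease module.
# A drug that targets only the virus has no host targets in the network,
# so this kind of host-network analysis simply cannot score it.
import networkx as nx

interactome = nx.Graph([
    ("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P4", "P5"), ("P5", "P6"),
])
disease_module = {"P2", "P3"}          # host proteins tied to the virus
drug_targets = {
    "drug_A": {"P1"},                  # targets a protein adjacent to the module
    "drug_B": {"P6"},                  # targets a protein farther away
    "drug_C": set(),                   # targets only viral machinery: no host targets
}

def proximity(targets):
    """Average distance from a drug's targets to the nearest disease-module protein."""
    if not targets:
        return None                    # nothing in the host network to measure
    dists = [
        min(nx.shortest_path_length(interactome, t, m) for m in disease_module)
        for t in targets
    ]
    return sum(dists) / len(dists)

for drug, targets in drug_targets.items():
    print(drug, proximity(targets))    # drug_A: 1.0, drug_B: 3.0, drug_C: None
```

Lower scores suggest a closer network relationship to the viral-interaction module; a drug with no host-protein targets produces no score at all, consistent with why remdesivir did not appear in the group’s list.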

Barabási said his team is also investigating how network science can assist medical teams conducting contact tracing for COVID-19 patients.

“There is no question that the contact tracing algorithms will be network science based,” Barabási said.

Coding for COVID-19: Contest Calls on Developers to Help Fight the Pandemic

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/coding-covid19-ibm-contest-calls-developers-pandemic

Now in its third year, IBM’s Call for Code challenge is a global initiative encouraging developers to create solutions for the world’s most pressing issues. Previous competitions called for apps to mitigate the effects of natural disasters and for technologies that can assist people after catastrophes. This year’s challenge—a partnership with the United Nations’ Human Rights Office and the David Clark Cause—offers two tracks to tackle climate change and the COVID-19 pandemic.

Australia’s Contact-Tracing COVIDSafe App Off to a Fast Start

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/computing/software/australias-contact-tracing-covidsafe-app

The Australian government launched its home-grown COVIDSafe contact-tracing app for the new coronavirus on 26 April. And despite the government’s history of technology failures and misuse of personal data, smartphone users have been eager to download the opt-in software on Apple’s App Store and on Google Play. But if the government is to achieve its target of 10 million downloads, there’s still a ways to go.

How to Implement a Software-Defined Network (SDN) Security Fabric in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how_to_implement_a_software_defined_network_sdn_security_fabric_in_aws


You’re Invited! Join SANS and AWS Marketplace to learn how implementing an SDN can enhance visibility and control across multiple virtual private clouds (VPCs) in your network. With an SDN fabric, you can gain finer-grained control over lateral traffic between VPCs, blocking malicious traffic while maintaining normal traffic flow.

What Are Deepfakes and How Are They Created?

Post Syndicated from Sally Adee original https://spectrum.ieee.org/tech-talk/computing/software/what-are-deepfakes-how-are-they-created

A growing unease has settled around evolving deepfake technologies that make it possible to create evidence of scenes that never happened. Celebrities have found themselves the unwitting stars of pornography, and politicians have turned up in videos appearing to speak words they never really said.

Concerns about deepfakes have led to a proliferation of countermeasures. New laws aim to stop people from making and distributing them. Earlier this year, social media platforms including Facebook and Twitter banned deepfakes from their networks. And computer vision and graphics conferences teem with presentations describing methods to defend against them.

So what exactly are deepfakes, and why are people so worried about them?

Rapid Matchmaking for Terahertz Network Transmitters and Receivers

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/networks/terahertz-linkdiscovery

Scientists may have solved a fundamental problem that threatened to snag efforts to build wireless terahertz links for networks beyond 5G, a new study finds.

Terahertz waves lie between optical waves and microwaves on the electromagnetic spectrum. Ranging in frequency from 0.1 to 10 terahertz, they could be key to future 6G wireless networks that will transmit data at terabits (trillions of bits) per second.

But whereas radio waves can transmit data via omnidirectional broadcasts, higher-frequency waves diffract less, so communications links that use them employ narrow beams. This makes it more challenging to quickly set up links between transmitters and receivers.
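The beam-width problem follows directly from wavelength: the shorter the wavelength, the less a beam of a given aperture spreads out. The back-of-the-envelope comparison below is simple physics, not a figure from the study.

```python
# Back-of-the-envelope wavelength comparison: shorter wavelengths diffract less,
# which is why terahertz links rely on narrow, directional beams.
C = 299_792_458  # speed of light, m/s

for label, freq_hz in [
    ("Wi-Fi (2.4 GHz)", 2.4e9),
    ("low end of THz band (0.1 THz)", 0.1e12),
    ("high end of THz band (10 THz)", 10e12),
]:
    wavelength_m = C / freq_hz
    print(f"{label}: wavelength ≈ {wavelength_m * 1e3:.3f} mm")
```

A 2.4-GHz Wi-Fi signal has a wavelength of roughly 12.5 centimeters; at 0.1 THz that shrinks to about 3 millimeters, and at 10 THz to about 30 micrometers. The energy therefore stays confined to a pencil-like beam that transmitter and receiver must aim at each other before any data can flow.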