Tag Archives: Computing/Hardware

Quantum Entanglement Meets Superconductivity in Novel Experiment

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/nanoclast/computing/hardware/quantum-entanglement-superconductivity-technology-news-novel-experiment

Two mysterious components of quantum technology came together in a lab at Rice University in Houston recently. Quantum entanglement—the key to quantum computing—and quantum criticality—an essential ingredient for high-temperature superconductors—have now been linked in a single experiment.

The preliminary results suggest something approaching the same physics is behind these two essential but previously distinct quantum technologies. The temptation, then, is to imagine a future in which a sort of grand unified theory of entanglement and superconductivity might be developed, where breakthroughs in one field could be translated into the other.

4 Ways to Make Bigger Quantum Computers

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/computing/hardware/4-ways-to-make-bigger-quantum-computers

As researchers strive to boost the capacity of quantum computers, they’ve run into a problem that many people have after a big holiday: There’s just not enough room in the fridge.

Today’s quantum-computer processors must operate inside cryogenic enclosures at near absolute zero, but the electronics needed for readout and control don’t work at such temperatures. So those circuits must reside outside the refrigerator. For today’s sub-100-qubit systems, there’s still enough space for specialized cabling to make the connection. But for future million-qubit systems, there just won’t be enough room. Such systems will need ultralow-power control chips that can operate inside the refrigerator. Engineers unveiled some potential solutions in December during the IEEE International Electron Devices Meeting (IEDM), in San Francisco. They ranged from the familiar to the truly exotic.

  • CryoCMOS

    Perhaps the most straightforward way to make cryogenic controls for quantum computers is to modify CMOS technology. Unsurprisingly, that’s Intel’s solution. The company unveiled a cryogenic CMOS chip called Horse Ridge that translates quantum-computer instructions into basic qubit operations, which it delivers to the processor as microwave signals.

    Horse Ridge is designed to work at 4 kelvins, a slightly higher temperature than the qubit chip itself, but low enough to sit inside the refrigerator with it. The company used its 22-nanometer FinFET manufacturing process to build the chip, but the transistors that make up the control circuitry needed substantial reengineering.

    “If you take a transistor and cool it to 4 K, it’s not a foregone conclusion that it will work,” says Jim Clarke, director of quantum hardware at Intel. “There are a lot of fundamental characteristics of devices that are temperature dependent.”

    Others are working along the same lines. Google presented a cryogenic CMOS control circuit earlier in 2019. In research that was not yet peer-reviewed at press time, Microsoft and its collaborators say they have built a 100,000-transistor CMOS control chip that operates at 100 millikelvins.

  • Microrelays

    In logic circuits, transistors act as switches, but they aren’t the only devices that do so. Engineers in Tsu-Jae King Liu’s laboratory at the University of California, Berkeley, have developed micrometer-scale electromechanical relays as ultralow-power alternatives to transistors. They were surprised to discover that their devices operate better at 4 K than at room temperature.

    At room temperature, the devices suffer from two problems. First, ambient oxygen can react with the relay’s electrode surfaces. Over time, this reaction can form a high-resistance layer, limiting the device’s ability to conduct current. But at cryogenic temperatures, oxygen freezes out of the air, so that problem doesn’t exist.

    Second, the contacts in microscale relays tend to stick together. This shows up as a hysteresis effect: The relay opens at a slightly different voltage than the one at which it closes. But because the adhesive forces are weaker at cryogenic temperatures, the hysteresis is less than 5 percent of what it is at room temperature.

    “We didn’t suspect ahead of time that these devices would operate so well at cryogenic temperatures,” says Liu, who led the research presented at IEDM by her graduate student Xiaoer Hu. “In retrospect, we should have.”

  • Single-flux quantum logic

    Hypres, in Elmsford, N.Y., has been commercializing cryogenic ICs for several years. Seeking to steer its rapid single-flux quantum (RSFQ) logic tech into the realm of quantum computing, the company recently spun out a startup called Seeqc.

    In RSFQ and its quantum version, SFQuClass logic, quantized pulses of voltage are blocked, passed, or routed by Josephson junctions, the same type of superconducting devices that make up most of today’s quantum computer chips. In 2014, physicists at the University of Wisconsin–Madison first suggested that these pulses could be used to program qubits, and Seeqc’s scientists have been collaborating with them and Syracuse University scientists since 2016.

    Seeqc is now designing an entire system using the technology: a digital-control, error-correction, and readout chip designed to work at 3 to 4 K, plus a separate chip, operating at 20 millikelvins, that interfaces with the quantum processor.

  • Weyl semimetals

    Quantum computing is already strange, but it might take some even stranger tech to make it work. Scientists at Lund University, in Sweden, and at IBM Research–Zurich have designed a new device called a Weyl semimetal amplifier that they say could bring readout electronics closer to the qubits. Don’t worry if you don’t know what a Weyl semimetal is. There are things about these materials that even the scientists trying to make devices from them don’t fully understand.

    What they do know is that these materials, such as tungsten diphosphide, exhibit extremely strong, temperature-dependent magnetoresistance when chilled to below about 50 K. The device they simulated has a gate electrode that produces a magnetic field inside the Weyl semimetal, causing its resistance to go from tiny to huge in a matter of picoseconds. Connecting the input from a qubit to the device could make a high-gain amplifier that dissipates a mere 40 microwatts. That could be low enough for the amplifier to live in the part of the fridge close to where the qubits themselves reside.

This article appears in the February 2020 print issue as “4 Ways to Handle More Qubits.”

Will China Attain Exascale Supercomputing in 2020?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/computing/hardware/will-china-attain-exascale-supercomputing-in-2020

To the supercomputer world, what separates “peta” from “exa” is more than just three orders of magnitude.

As measured in floating-point operations per second (a.k.a. FLOPS), one petaflop (10¹⁵ FLOPS) falls in the middle of what might be called commodity high-performance computing (HPC). In this domain, hardware is hardware, and what matters most is increasing processing speed as cost-effectively as possible.

Now the United States, China, Japan, and the European Union are all striving to reach the exaflop (10¹⁸ FLOPS) scale. The Chinese have claimed they will hit that mark in 2020. But they haven’t said so lately: Attempts to contact officials at the National Supercomputer Center, in Guangzhou; Tsinghua University, in Beijing; and Xi’an Jiaotong University yielded either no response or no comment.

It’s a fine question when exactly the exascale barrier is deemed to have been broken: when a computer’s theoretical peak performance exceeds 1 exaflop, or when its maximum real-world compute speed hits that mark. Indeed, the sheer volume of compute power is less important than it used to be.

“Now it’s more about customization, special-purpose systems,” says Bob Sorensen, vice president of research and technology with the HPC consulting firm Hyperion Research. “We’re starting to see almost a trend towards specialization of HPC hardware, as opposed to a drive towards a one-size-fits-all commodity” approach.

The United States’ exascale computing efforts, involving three separate machines, total US $1.8 billion for the hardware alone, says Jack Dongarra, a professor of electrical engineering and computer science at the University of Tennessee. He says exascale algorithms and applications may cost another $1.8 billion to develop.

And as for the electric bill, it’s still unclear exactly how many megawatts one of these machines might gulp down. One recent ballpark estimate puts the power consumption of a projected Chinese exaflop system at 65 megawatts. If the machine ran continuously for one year, the electricity bill alone would come to about $60 million.
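
A quick back-of-the-envelope check of that figure, sketched in Python. The electricity rate below is an assumption for illustration; the article gives only the roughly $60 million total:

```python
# Rough electricity cost of running a 65-MW exascale system for a year.
# The $/kWh rate below is an assumed industrial rate, not from the article.
power_mw = 65
hours_per_year = 24 * 365                        # 8,760 hours
energy_kwh = power_mw * 1_000 * hours_per_year   # MW -> kW, times hours
rate_usd_per_kwh = 0.105                         # assumed rate

print(f"Energy: {energy_kwh / 1e6:.0f} GWh per year")
print(f"Cost:   ${energy_kwh * rate_usd_per_kwh / 1e6:.0f} million per year")
```

At that assumed rate, 65 MW works out to about 569 gigawatt-hours and roughly $60 million per year, matching the estimate above.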

Dongarra says he’s skeptical that any system, in China or anywhere else, will achieve one sustained exaflop anytime before 2021, or possibly even 2022. In the United States, he says, two exascale machines will be used for public research and development, including seismic analysis, weather and climate modeling, and AI research. The third will be reserved for national-security research, such as simulating nuclear weapons.

“The first one that’ll be deployed will be at Argonne [National Laboratory, near Chicago], an open-science lab. That goes by the name Aurora or, sometimes, A21,” Dongarra says. It will have Intel processors, with Cray developing the interconnecting fabric between the more than 200 cabinets projected to house the supercomputer. A21’s architecture will reportedly include Intel’s Optane memory modules, which represent a hybrid of DRAM and flash memory. Peak capacity for the machine should reach 1 exaflop when it’s deployed in 2021.

The other U.S. open-science machine, at Oak Ridge National Laboratory, in Tennessee, will be called Frontier and is projected to launch later in 2021 with a peak capacity in the neighborhood of 1.5 exaflops. Its AMD processors will be dispersed in more than 100 cabinets, with four graphics processing units for each CPU.

The third, El Capitan, will be operated out of Lawrence Livermore National Laboratory, in California. Its peak capacity is also projected to come in at 1.5 exaflops. Launching sometime in 2022, El Capitan will be restricted to users in the national security field.

China’s three announced exascale projects, Dongarra says, also each have their own configurations and hardware. In part because of President Trump’s China trade war, China will be developing its own processors and high-speed interconnects.

“China is very aggressive in high-performance computing,” Dongarra notes. “Back in 2001, the Top 500 list had no Chinese machines. Today they’re dominant.” As of June 2019, China had 219 of the world’s 500 fastest supercomputers, whereas the United States had 116. (Tally together the number of petaflops in each machine and the numbers come out a little different. In terms of performance, the United States has 38 percent of the world’s HPC resources, whereas China has 30 percent.)

China’s three exascale systems are all built around CPUs manufactured in China. They are to be based at the National University of Defense Technology, using a yet-to-be-announced CPU; the National Research Center of Parallel Computer Engineering and Technology, using a nonaccelerated ARM-based CPU; and the Chinese HPC company Sugon, using an AMD-licensed x86 with accelerators from the Chinese company HyGon.

Japan’s future exascale machine, Fugaku, is being jointly developed by Fujitsu and Riken, using ARM architecture. And not to be left out, the EU also has exascale projects in the works, the most interesting of which centers on the European Processor Initiative, which Dongarra speculates may use the open-source RISC-V architecture.

All four of the major players—China, the United States, Japan, and the EU—have gone all-in on building out their own CPU and accelerator technologies, Sorensen says. “It’s a rebirth of interesting architectures,” he says. “There’s lots of innovation out there.”

Building a Quantum Computer From Off-the-Shelf Parts

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/hardware/scalable-qubits-quantum-computer-news-silicon-wafer

A new technique for fabricating quantum bits in silicon carbide wafers could provide a scalable platform for future quantum computers. The quantum bits, to the surprise of the researchers, can even be fabricated from a commercial chip built for conventional computing.

The recipe was surprisingly simple: Buy a commercially available wafer of silicon carbide (a temperature-robust semiconductor used in electric vehicles, LED lights, solar cells, and 5G gear) and shoot an electron beam at it. The beam creates a defect in the wafer that behaves, essentially, as a single electron spin, which can be manipulated electrically, magnetically, or optically.

Intel Unveils Cryogenic Chips to Speed Quantum Computing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/intel-unveils-cryogenic-chips-to-speed-quantum-computing

At the IEEE International Electron Devices Meeting in San Francisco this week, Intel is unveiling a cryogenic chip designed to accelerate the development of the quantum computers it is building with Delft University of Technology’s QuTech research group. The chip, called Horse Ridge for one of the coldest spots in Oregon, uses specially designed transistors to provide microwave control signals to Intel’s quantum computing chips.

The quantum computer chips in development at IBM, Google, Intel, and other firms today operate at fractions of a degree above absolute zero and must be kept inside a dilution refrigerator. However, as companies have managed to increase the number of quantum bits (qubits) in the chips, and therefore the chips’ capacity to compute, they’ve begun to run into a problem. Each qubit needs its own set of wires leading to control and readout systems outside of the cryogenic container. It’s already getting crowded, and as quantum computers continue to scale—Intel’s is up to 49 qubits now—there soon won’t be enough room for the wires.

Why Use Time-Of-Flight for Distance Measurement?

Post Syndicated from Terabee original https://spectrum.ieee.org/computing/hardware/why-use-timeofflight-for-distance-measurement

The most sophisticated of our sensor module technologies are our innovative 3D ToF cameras, which can capture depth data across three spatial dimensions. Depth sensing with Time-of-Flight sensors is discreet, completely eye-safe, and designed to work indoors even in low light or complete darkness.

This technology is a powerful enabler for applications such as people counting, digital stock monitoring, and room occupancy monitoring. With fast refresh rates, our 3D ToF sensor modules are also able to distinguish between simple gestures to assist with innovative human-machine interfaces and next-generation contactless controls.

Shifting to ever smarter and more versatile sensor modules

Terabee has just released a new type of indirect Time-of-Flight smart sensor for industrial and logistics applications. The sensor offers a 12.5-meter detection range using Time-of-Flight technology. Its robust IP65 enclosure makes it dust-proof and water-resistant, in a compact, lightweight (99-gram) form factor.

The sensor provides proximity notification via a classic NO/NC switching output (0-24 V), while also communicating calibrated distance data over an RS485 interface, as the sketch below illustrates.
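
For a sense of how a host system might poll such a sensor, here is a minimal sketch using pyserial. The command byte and reply framing are hypothetical, invented for illustration; the sensor’s actual RS485 protocol is specified in Terabee’s documentation:

```python
# Hypothetical sketch: polling calibrated distance over RS485 with pyserial.
# The "read distance" command byte and 2-byte reply format are invented for
# illustration; consult the sensor datasheet for the real protocol.
# Requires: pip install pyserial
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.5) as port:
    port.write(b"\x01")       # hypothetical "read distance" command
    reply = port.read(2)      # hypothetical reply: distance in mm, big-endian
    if len(reply) == 2:
        print(f"Distance: {int.from_bytes(reply, 'big')} mm")
    else:
        print("No reply; check wiring, baud rate, and protocol settings")
```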

The sensor module features six embedded operating modes with programmable distance thresholds. This makes it easy to set up in the field in a matter of minutes using its teach-in buttons.

Operating modes allow the same sensor module to trigger alarms, detect movements, count objects and check for object alignment.

This versatility means that a single sensor can be purchased in bulk and programmed to automate many different control and monitoring processes. This is especially useful in reconfigurable warehouses and factories to save precious setup time.

Next steps

Terabee plans to build on its deep technology expertise in sensing hardware and to develop cutting-edge applications. In the coming 12 months, we will offer further solutions for many markets, such as mobile robotics, smart farming, smart cities, smart buildings, and industrial automation, in the form of devices, software, and OEM services.

Learn more about Terabee

What is Time-of-Flight (ToF) distance sensing?

Several methods of detection are available for determining the proximity of an object or objects in real time, each differentiated by its underlying hardware. As a result, distance sensing encompasses an extremely broad range of technologies: infrared (IR) triangulation, laser, light-emitting diode Time-of-Flight, ultrasonic, and more.

Various types of signals, or carriers, can be used to apply the Time-of-Flight principle, the most common being sound and light. Sound is the carrier used in ultrasonic sensors and sonar.

Active optical Time-of-Flight is a remote-sensing method that estimates the range between a sensor and a target by illuminating the target with a light source and measuring the light’s travel time from the emitter to the object and back to the receiver.

For light carriers, two technologies are available today: direct ToF, based on pulsed light, and indirect ToF, based on continuous-wave modulation.
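
The arithmetic behind both variants is compact. These are the standard textbook relationships, not code from Terabee: for direct ToF, distance is half the round-trip time multiplied by the speed of light; for indirect ToF, distance is recovered from the phase shift of the modulated wave.

```python
# Distance from Time-of-Flight: direct (pulse timing) and indirect
# (phase shift of a continuous modulated wave). Standard formulas,
# shown here for illustration only.
import math

C = 299_792_458.0  # speed of light, m/s

def direct_tof(round_trip_s):
    """Direct ToF: d = c * t / 2 (halved: light travels out and back)."""
    return C * round_trip_s / 2

def indirect_tof(phase_rad, f_mod_hz):
    """Indirect ToF: d = c * phase / (4 * pi * f_mod).
    Unambiguous only up to c / (2 * f_mod) for a single frequency."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

# A target 12.5 m away returns a pulse after about 83.4 ns...
print(f"direct:   {direct_tof(83.4e-9):.2f} m")
# ...and shifts a 10-MHz modulation by about 5.24 rad (the unambiguous
# range at 10 MHz is c / (2 * 10 MHz), roughly 15 m).
print(f"indirect: {indirect_tof(5.24, 10e6):.2f} m")
```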

Terabee’s unique Time-of-Flight sensing technology

Established in 2012, Terabee has since grown into a diverse organization made up of leading experts in the sensing sector. As a certified CERN Technology partner, we offer an expansive range of sensor modules and solutions for some of the most cutting-edge fields on the market, from robotics to Industry 4.0 and IoT applications.

At Terabee, we use light as the carrier for our sensors, combining higher update rates, longer range, lower weight, and eye safety. By carefully tuning the emitted infrared light to specific wavelengths, we can ensure less signal disturbance and easier distinction from natural ambient light, resulting in the highest-performing distance sensors for their given size and weight.

Terabee ToF sensor modules use infrared LEDs, which are eye-safe and energy-efficient while providing broader fields of view than lasers, offering a larger detection area per pixel. Our single 2D infrared sensors have a 2 to 3° field of view (FoV), which provides a more stable data stream for improved consistency of results in many applications.
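
To get a feel for what a 2 to 3° FoV means in practice, the illuminated spot diameter at a given range follows from simple cone geometry. This is a generic calculation for illustration, not a Terabee specification:

```python
# Spot diameter of a conical field of view: d = 2 * range * tan(FoV / 2).
# Generic geometry for illustration; not a manufacturer specification.
import math

def spot_diameter_m(range_m, fov_deg):
    return 2 * range_m * math.tan(math.radians(fov_deg) / 2)

for fov_deg in (2.0, 3.0):
    spot_cm = spot_diameter_m(12.5, fov_deg) * 100
    print(f"FoV {fov_deg}°: spot at 12.5 m is about {spot_cm:.0f} cm across")
```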

Over the years we have mastered the production of sensor modules using indirect ToF technology. Thanks to our in-house R&D, Product Development, Production, and Logistics departments, we have managed to push the boundaries of this technology for increased precision, longer range, and smaller size, offering great value to customers at competitive prices.

We also offer multidirectional sensor module arrays, combining the functionalities of multiple ToF sensors for simultaneous monitoring of multiple directions in real-time, which is especially useful for short-range anti-collision applications. Our unique Hub comes with different operating modes to avoid crosstalk issues and transmit data from up to 8 sensors to a machine.

Cerebras Unveils First Installation of Its AI Supercomputer at Argonne National Labs

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/cerebras-unveils-ai-supercomputer-argonne-national-lab-first-installation

At Supercomputing 2019 in Denver, Colo., Cerebras Systems unveiled the computer powered by the world’s biggest chip. Cerebras says the computer, the CS-1, has the equivalent machine learning capabilities of hundreds of racks’ worth of GPU-based computers consuming hundreds of kilowatts, but it takes up only one-third of a standard rack and consumes about 17 kW. Argonne National Laboratory, future home of what’s expected to be the United States’ first exascale supercomputer, says it has already deployed a CS-1. Argonne is one of two announced U.S. national laboratory customers for Cerebras; the other is Lawrence Livermore National Laboratory.

Developing Purpose-Built & Turnkey RF Applications

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/developing-purposebuilt-turnkey-rf-applications

This ThinkRF white paper will explore how systems integrators (SIs) can develop a purpose-built, turnkey RF application that lets end users improve their business and understand the spectrum environment.

Save Time with Ready-To-Use Measurements

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/save-time-with-readytouse-measurements

The right measurement applications can increase the functionality of your signal analyzer and reduce your time to insight with ready-to-use measurements, built-in results displays, and standards conformance tests. They can also help ensure consistent measurement results across different teams and your design cycle. This efficiency means you can spend less time setting up measurements and more time evaluating and improving your designs. Learn about general-purpose or application-specific measurements that can help save you time and maintain measurement consistency in this eBook.

Register for Our Application Note “Tips and Tricks on How to Verify Control Loop Stability”

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/register-for-our-application-note-tips-and-tricks-on-how-to-verify-control-loop-stability

The application note explains the main measurement concept, guides the user through the measurements, and covers the main topics in a practical manner. Wherever possible, it highlights points where the user should pay particular attention.

The Latest Techniques in Power Supply Test – Get the App Note

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/the-latest-techniques-in-power-supply-test-get-the-app-note

DC electronic loads are becoming more popular in test systems as more electronic devices convert or store energy. Learn about Keysight’s next-generation electronic loads, which enable a complete DC power conversion solution on the popular N6700 modular power system.

Understanding ADC Bits

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/white-paper-understanding-adc-bits

Did you know your oscilloscope’s effective number of bits (ENOB) is just as important as its number of ADC bits? ADC bits is one of the most widely known specifications, and many engineers rely on it as the sole measure of an oscilloscope’s quality. However, the importance of ADC bits is often exaggerated while other critical indicators of signal integrity get pushed to the background. Learn about the major impacts ENOB has on your measurements in this whitepaper.
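
The standard relationship between ENOB and measured signal quality makes the point concrete. An ideal N-bit ADC has a signal-to-noise-and-distortion ratio (SINAD) of 6.02N + 1.76 dB, so a real converter’s effective bits can fall well below its nominal bit count:

```python
# ENOB from measured SINAD, using the standard definition:
#   ENOB = (SINAD_dB - 1.76) / 6.02
def enob(sinad_db):
    return (sinad_db - 1.76) / 6.02

print(f"{enob(49.9):.1f} bits")  # ~8.0: an ideal 8-bit ADC (SINAD 49.9 dB)
print(f"{enob(40.0):.1f} bits")  # ~6.4: same ADC with real noise/distortion
```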

Google’s Quantum Tech Milestone Excites Scientists and Spurs Rivals

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/googles-quantum-tech-milestone-excites-scientists-and-spurs-rivals

Quantum computing can already seem like the realm of big business these days, with tech giants such as Google, IBM, and Intel developing quantum tech hardware. But even as rivals reacted to Google’s announcement of having shown quantum computing’s advantage over the most powerful supercomputer, scientists have welcomed the demonstration as providing crucial experimental evidence to back up theoretical research in quantum physics.

Nonlinear Magnetic Materials Modeling Webinar

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/nonlinear-magnetic-materials-modeling

In this webinar, you will learn how to model ferromagnetic materials and other nonlinear magnetic materials in the COMSOL® software.

Ferromagnetic materials exhibit saturation and hysteresis. These factors are major challenges in the design of electric motors and transformers because they affect the iron loss. In addition, the loss of ferromagnetic properties at elevated temperatures (Curie temperatures) is an important nonlinear multiphysics effect in, for example, induction heating. It can also cause permanent material degradation in permanent magnet motors.

This webinar will demonstrate how to build high-fidelity finite element models of ferromagnetic devices using COMSOL Multiphysics®. The presentation concludes with a Q&A session.

PRESENTER:

Magnus Olsson, Technology Manager, COMSOL

Magnus Olsson joined COMSOL in 1996 and currently leads development for the electromagnetic design products. He holds an MSc in engineering physics and a PhD in plasma physics and fusion research. Prior to joining COMSOL, he worked as a consulting specialist in electromagnetic computations for the Swedish armed forces.

 

Attendees of this IEEE Spectrum webinar have the opportunity to earn PDHs or Continuing Education Certificates! To request your certificate, you will need to get a code. Once you have registered and viewed the webinar, send a request to [email protected] for a webinar code. Then complete the form here to request your certificate: http://innovationatwork.ieee.org/spectrum/

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.

Dream Your Future with CERN!

Post Syndicated from Cern original https://spectrum.ieee.org/computing/hardware/dream-your-future-with-cern

On 14 and 15 September CERN opened its doors to the public for its Open Days, a unique opportunity to witness the incredible work going on behind the scenes of an organisation whose mission is to answer the fundamental questions of the universe. More than 75,000 visitors of all ages and backgrounds came to CERN’s many visit points, with more than 100 activities, guided by 3,000 dedicated and passionate volunteers eager to share the wonders of this unique place to work.

CERN is the world’s largest particle physics research centre. It is an incredible place, with its myriad accelerators, detectors, computing infrastructure, and experiments that serve to research the origins of our universe. Seeing it for oneself is the only way to understand and realise the sheer enormity of what is going on here. We traditionally have over 110,000 visitors per year coming to CERN, a number that grows all the time. It is a very popular place to visit at any time, as its ranking on Tripadvisor confirms.

Every five years, CERN enters a ‘long shutdown’ phase for essential upgrades and maintenance work lasting several months, and this is the ideal opportunity to open CERN up to the public with its Open Days, for people to see and experience what science on this scale actually looks like. The theme of these Open Days was “Explore the future with us”, with the aim of engaging visitors in how we work at CERN and in the process of science, a human endeavour driven by values of openness, diversity and peaceful collaboration.

You can of course visit CERN at any time, although on a more reduced scale than during the Open Days. While the Large Hadron Collider is in operation, the accelerator and its detectors are inaccessible. In the regular annual shutdown periods, limited underground visits are possible but cannot be guaranteed; however, there are many interesting places to visit above ground at all times, with free-of-charge visits and tours on offer. Furthermore, if coming in person is not feasible, people can take virtual tours, notably of the LHC and the computing centre.

Who works at CERN? A common misconception about CERN is that all employees work in physics. CERN’s mission is to uncover the mysteries of our universe, and it is known as the largest physics laboratory in the world, so in many ways this misconception comes from a logical assumption. What is probably less tangible and less well understood by the public is that to achieve this level of cutting-edge particle physics research, you need the infrastructure and tools to perform it: the accelerators, detectors, technology, computing, and a whole host of other disciplines. CERN employs 2,600 staff members to build, operate, and maintain this infrastructure, which is in turn used by a worldwide community of physicists to perform their world-class research.

Of the 2,600 staff members, only 3 percent are research physicists. CERN’s core hiring needs are for engineers, technicians, and support staff in a wide variety of disciplines, spanning electricity, mechanics, electronics, material science, vacuum, and of course computing. Let’s not forget that CERN is the birthplace of the World Wide Web, and advances in computing are key here. It’s a great place to work as a software or hardware engineer!

Working at CERN is enriching on so many levels; it is a privilege to be a part of this Organization, which has such a noble mission, uniting people from all over the world with values that truly speak to me: diversity, commitment, creativity, integrity, and professionalism. Every day is a new opportunity to learn, discover, and grow. The benefits of working at CERN are plentiful, and the quality of life offered in the Geneva region is remarkable. We often say it’s working in a place like nowhere else on earth! So don’t hesitate to come find out for yourself, on a visit or … by joining us as a student, a graduate or a professional. Apply now and take part! https://careers.cern

Key parameters for selecting RF inductors

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/key-parameters-for-selecting-r-f-inductors

Download this application note to learn the key selection criteria an engineer needs to understand in order to properly evaluate and specify RF inductors, including inductance value, current rating, DC resistance (DCR), self-resonant frequency (SRF) and more.

What Google’s Quantum Supremacy Claim Means for Quantum Computing

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/how-googles-quantum-supremacy-plays-into-quantum-computings-long-game

Google’s claim to have demonstrated quantum supremacy—one of the earliest and most hotly anticipated milestones on the long road toward practical quantum computing—was supposed to make its official debut in a prestigious science journal. Instead, an early leak of the research paper has sparked a frenzy of media coverage and some misinformed speculation about when quantum computers will be ready to crack the world’s computer security algorithms.