GQI has developed a full analysis of the recent technical paper that describes a Harvard-led experiment on error correction. The article below is a summary of our conclusions. To obtain a copy of the full analysis, contact us at info@global-qi.com.

A Harvard-led experiment with up to 48 logical qubits is the most eye-catching demonstration of quantum error correcting codes to date. However, GQI believes this is not yet a ‘Sputnik moment’ for fault tolerant quantum computing.

Sputnik – On 4 October 1957 the Soviet Union launched Sputnik 1 into orbit. This grabbed headlines around the world, radically raised the profile of this important new technology and kicked off the Space Race.

The key point about the Harvard-led experiment, with collaborators from MIT, QuICS, and QuEra, is that it used up to 48 logical qubits to demonstrate quantum error correction codes. Notable results include forming logical qubits from 280 physical qubits and executing up to 200 two-qubit (2Q) logical gates. The implementation leverages a technique of flexible qubit shuttling.

The demonstration builds on a body of work performed since 2017 by many researchers from this team. This includes technologies such as Magneto-Optical Traps (MOT), Spatial Light Modulators (SLM), and Acousto-Optical Deflectors (AOD) that they developed for analog quantum simulators. The team moved on to demonstrate gate-model operations in 2019, a zoned architecture and Raman optical system refinements in 2022, and high-fidelity 2Q gates in 2023. They also created a blueprint for a zoned neutral-atom architecture leveraging high-rate quantum LDPC (qLDPC) codes, which they introduced in 2023.

The architecture leverages neutral atoms’ strengths, including long-lived hyperfine states for memory qubits. This enables high-rate qLDPC codes for memory qubits and planar codes for Rydberg-mediated gates. AOD technology also enables the use of a zoned processor architecture.

Although we regard this demonstration as a great step forward, there are still several challenges that need to be overcome to make this approach truly useful for achieving Fault Tolerant Quantum Computing (FTQC). These include:

Challenge 1: Code scaling for systematically suppressing logical errors

This requires ensuring that as the codes get bigger (i.e. provide greater distance between the codewords), the logical error rates continue to improve. This needs to work across multiple rounds of error correction (so far Harvard has only demonstrated error detection over a single round). It also requires continued improvement in the physical error rate to meet the error correction thresholds and make the error correction circuits more effective. And in neutral-atom platforms, it requires an efficient means of reloading and resetting the atoms to refresh them during long calculations.
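As a rough illustration of what "code scaling" means here (this is a standard sub-threshold heuristic, not a result from the paper; the threshold and prefactor values are illustrative assumptions), the logical error rate of a distance-d code is often approximated as p_L ≈ A·(p/p_th)^((d+1)/2), so increasing d only helps when the physical error rate p sits below the threshold p_th:

```python
# Illustrative sketch: heuristic sub-threshold scaling of the logical
# error rate for a distance-d code, p_L ~ A * (p / p_th)^((d + 1) // 2).
# The threshold p_th and prefactor A are illustrative assumptions,
# not values from the Harvard experiment.

def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Heuristic logical error rate for physical error rate p and distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (0.005, 0.02):  # one rate below, one above the assumed threshold
    rates = [logical_error_rate(p, d) for d in (3, 5, 7)]
    trend = "suppressed" if rates[-1] < rates[0] else "growing"
    print(f"p={p}: d=3,5,7 -> {[f'{r:.2e}' for r in rates]} ({trend})")
```

The toy numbers make the key point: below threshold, larger codes exponentially suppress logical errors; above threshold, larger codes make things worse, which is why continued improvement in physical error rates matters.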

Challenge 2: Universal fault-tolerant circuits with realistic clock times

An error correction architecture that implements only the Clifford gates (H, S, CNOT, and the gates that can be generated by combining these) is not sufficient to achieve quantum advantage, because Clifford-only circuits can be simulated efficiently on classical computers. A universal gate set that can perform any quantum calculation requires an additional gate, such as the T-gate. These are typically created using a process called magic state distillation, which is very resource intensive and often a dominant source of overhead. The team needs to do further work to show how this, or an alternative scheme, can be achieved robustly and efficiently.
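To give a sense of why distillation dominates resource counts, the sketch below uses the well-known 15-to-1 distillation protocol, whose output error scales as roughly 35·p³ for input error p; the input error rate and target error chosen here are illustrative assumptions, not figures from the experiment:

```python
# Illustrative sketch: overhead of repeated 15-to-1 magic state
# distillation, where each round turns 15 noisy T states (error p)
# into one better T state with error ~ 35 * p**3.
# The input and target error rates are illustrative assumptions.

def distillation_rounds(p_in, target):
    """Rounds of 15-to-1 distillation needed to push T-state error below target."""
    p, rounds = p_in, 0
    while p > target:
        p = 35 * p ** 3
        rounds += 1
    return rounds, p

rounds, p_out = distillation_rounds(p_in=1e-3, target=1e-15)
print(f"{rounds} round(s), ~{15 ** rounds} raw states consumed per T state, "
      f"output error ~ {p_out:.1e}")
```

Even in this optimistic toy setting, each distilled T state consumes 15^rounds raw states plus the Clifford circuitry to process them, which is why T-gate counts drive so much of the projected cost of fault-tolerant algorithms.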

Challenge 3: Platform capable of scaling to commercially relevant size

Although an error correction demonstration of 48 logical qubits is the largest we have seen so far, it is still not enough to achieve a useful quantum advantage over classical processors running the best available algorithms. So, the number of logical qubits needs to continue to increase. This will always be a key challenge for hardware designers because they need to do it while still maintaining or even improving the physical error rates. That means not introducing overwhelming crosstalk, unmanageable complexity in qubit calibration, or excessive demands on environmental isolation (e.g. cooling power, or vacuum cycle time).

But scaling also introduces two additional challenges. Larger module sizes can negatively impact overall speed, particularly in architectures that utilize qubit shuttling. Further work is needed to develop ways of increasing module size without incurring excessive run times for a user’s calculations.

And there will still be limits to how large a single module can be made. It might be as large as 10,000 physical qubits, but even that would not be enough to perform the types of large-scale quantum computations that users will want to run. To scale further, many researchers are pursuing multi-module architectures with the modules linked by either optical or microwave connections. This will require some form of transducer to convert a neutral-atom qubit’s state into a photon so that entanglement can be shared with a neighboring module. That research is still at a very early stage and will require much more work.
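To see why a 10,000-physical-qubit module falls short, consider standard surface-code accounting, which needs roughly 2·d² physical qubits per logical qubit at code distance d. The distances and logical-qubit targets below are illustrative assumptions for a back-of-the-envelope sketch, not figures from the paper (and qLDPC codes, as used for memory in this architecture, aim to reduce exactly this overhead):

```python
# Illustrative sketch: physical-qubit overhead under standard
# surface-code accounting (~2 * d**2 physical qubits per logical qubit).
# The code distances and logical-qubit counts are illustrative assumptions.

def physical_qubits(n_logical, d):
    """Approximate physical qubits for n_logical surface-code qubits at distance d."""
    return n_logical * 2 * d ** 2

for n_logical, d in [(48, 7), (100, 15), (1000, 25)]:
    total = physical_qubits(n_logical, d)
    print(f"{n_logical} logical qubits at d={d}: ~{total:,} physical qubits")
```

Under these toy assumptions, even 100 logical qubits at a commercially useful distance already overflow a single 10,000-qubit module, which is what drives the interest in multi-module interconnects and in higher-rate codes.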

Conclusion

Nonetheless, this demonstration is an impressive step forward that builds upon the progress shown by others working on error correction. We look forward to seeing further progress in 2024 and beyond as researchers build upon this effort.

QuEra will be hosting a webinar on January 9, 2024 where they will provide additional details about their roadmap. You can register for it at https://quera.link/roadmap.

December 19, 2023