Google held its Quantum Summer Symposium last week and described a number of new developments that show it is continuing to make progress in its quantum program.
They first announced a free program called the Quantum Virtual Machine (QVM) that runs in a Colab notebook. Although Google has not yet made any of its quantum processors openly available to the public, the QVM allows anyone to emulate one of Google’s processors to test out their programs and see how they would run on one of Google’s quantum machines. Although there are many other quantum simulators out there, the key differentiator of the QVM is that it is programmed with measurements from Google’s Sycamore processors, including metrics such as qubit decay, dephasing, gate errors, readout errors, and connectivity limitations. This capability could be quite helpful to researchers working in areas such as error mitigation algorithms, allowing them to test their programs much faster and more easily than on an actual processor. Google’s testing indicates that the results of running a program on the QVM correlate quite closely with the results of running the same program on the actual Sycamore processor. You can read more about the QVM in a blog posted on Google’s Quantum AI website here, a web page for it here, and the documentation page for the software here.
The next announcement was that they have officially released version 1.0 of Cirq. Cirq is Google’s open source quantum SDK and has been in development since 2017. For the past five years, the release numbers have always been in the form 0.X, indicating that it was still in development and not ready for a full formal release. The promotion to 1.0 indicates the software is now much more mature and stable, and that minor releases such as 1.1, 1.2, etc. will not introduce any breaking changes. Cirq is currently able to work with quantum programs that may use hundreds of qubits and thousands of gates. There are also several libraries available that can work with Cirq, including TensorFlow Quantum, OpenFermion, Pytket, Mitiq, and Qsim. Cirq also has support from numerous backends including AQT, IonQ, Pasqal, Rigetti, IQM, Azure Quantum, and of course Google’s own hardware and its Quantum Virtual Machine. For more about Cirq version 1.0, you can read a blog posted on Google’s website here, a web page for the platform here, and the GitHub repository for the software here.
It’s well known that the goal of Google, and just about everybody else, is to create an error-corrected quantum computer that will be fault tolerant. And there have been a number of interesting research articles from groups working on experiments to show this is possible. (See here, here, and here, for example.) The problem is that the quality of the currently available qubits is not really all that good. So when someone groups multiple physical qubits together to create a logical qubit, the result is often that the logical error rate of the group is worse than the error rate of a single physical qubit, because every additional qubit is another potential source of error. The goal is to improve the physical error rate of a qubit such that when multiple qubits are grouped together, the logical error rate of the group comes out better. The physical error rate at which this crossover occurs is called the error threshold. Below that point, more complex codes can be implemented that will drive the logical error rate down further and further as more qubits are grouped together in the code. Google, in a recent paper titled Suppressing quantum errors by scaling a surface code logical qubit, has demonstrated this empirically for the first time. In particular, they first implemented a distance-3 error correction code on their Sycamore processor by grouping together 17 qubits and achieved a logical error rate of 3.028%±0.023%. They then implemented a distance-5 code using 49 qubits and achieved a logical error rate of 2.914%±0.016%. (See here for a definition of codeword Hamming distance.) Although this is a very modest reduction of only about 4%, it is going in the right direction. Google’s ultimate goal is to develop an error correction architecture that can use 1,000 physical qubits to create one logical qubit with an error rate of 10⁻⁶ or less.
They will need to do this by continuing to improve the physical error rate of the qubits while expanding the number of qubits available for this very complex code. This is the next key milestone they have identified in their research effort. You can read a technical paper describing their results on arXiv here.
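The arithmetic behind the “about 4%” figure is simple enough to check directly from the two logical error rates reported above. The snippet also computes the error-suppression factor (often called Λ in the surface-code literature), the ratio by which each two-step increase in code distance shrinks the logical error rate; a value only slightly above 1 is exactly why the improvement here is so modest.

```python
# Logical error rates Google reported for the two surface-code experiments.
eps_d3 = 0.03028  # distance-3 code, 17 qubits
eps_d5 = 0.02914  # distance-5 code, 49 qubits

# Relative reduction from growing the code distance from 3 to 5.
reduction = (eps_d3 - eps_d5) / eps_d3
print(f"relative reduction: {reduction:.1%}")  # roughly 4%

# Error-suppression factor Lambda = eps(d) / eps(d + 2). Well below the
# threshold, each distance step should divide the logical error by Lambda,
# so reaching very low error rates requires Lambda well above 1.
lam = eps_d3 / eps_d5
print(f"Lambda: {lam:.3f}")
```

With Λ barely above 1, adding more qubits buys almost nothing; pushing the physical error rate further below threshold is what would make Λ grow and let larger codes pay off.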
Finally, if you read the technical paper mentioned above carefully, you will notice a reference to an expanded Sycamore device with 72 transmon qubits. So apparently, although we didn’t see a formal announcement, Google has created a big brother to the original Sycamore device, which had 53 working qubits. Google used this expanded version of Sycamore in the distance-5 experiments mentioned above.
July 23, 2022