GQI is pleased to provide a three-part series of articles exploring the transition from NISQ to FTQC in partnership with Dr. Ish Dhand, CEO and co-founder of QC Design. Executive summaries of each of the three parts will be published here on the Quantum Computing Report, with the full text published on the QC Design website. The content, intelligence and opinions in this report are those of the author and do not necessarily reflect GQI’s findings or conclusions. At GQI we strive to publish a variety of viewpoints on our platform to provide our audience with the broadest range of quality analysis. GQI is distributing this information at no charge and we are grateful to Dr. Dhand for his kind collaboration.

Part II: The right roadmap accelerates the path to fault-tolerant quantum computing 

This post is the second part of a three-part series on the transition from NISQ to fault tolerance. While the first post focussed on the requirements for transformative quantum computing, this second post covers the roadmaps that top hardware players are pursuing to meet those requirements. The full post is available on the QC Design website at https://qc.design/news/roadmaps, and an executive summary follows.

Executive summary 

Building a useful quantum computer will need better qubits and better gates. This is true in the noisy intermediate-scale quantum (NISQ) era, and it is true of building a future transformative fault-tolerant quantum computer (FTQC). That said, building towards FTQC is not only a harder challenge but also fundamentally different in many ways from the effort needed for NISQ. That’s because many aspects of NISQ development are unlikely to be relevant for fault tolerance: error mitigation, variational algorithms, and a larger variety of native gates, to name a few. FTQC instead requires running a small set of gates over and over again with high quality with respect to very specific errors, while other errors matter much less. So a team that knows which aspects of hardware to focus on could have an unfair advantage in progressing faster towards FTQC than teams focussing additionally or exclusively on NISQ.

A fault-tolerance blueprint accomplishes this goal of helping hardware teams focus on the errors that matter in the operations that matter. What exactly is a fault-tolerance blueprint? It is a detailed plan for a hardware manufacturer to build and operate a working FTQC. The blueprint describes everything from how to define and fabricate physical qubits, to how to manipulate those qubits to make them fault tolerant, to how to process the massive amounts of data coming out of the device in order to find and correct errors.

A blueprint is essential for a hardware manufacturer to get to fault tolerance because every modality for building quantum computers comes with different strengths and limitations: constraints on what operations and connectivity are possible, and imperfections in the qubits and gates. Understanding the impact of these constraints and imperfections is as important as it is challenging: the errors and constraints need to be modeled using deep theoretical expertise and simulated using powerful design software. This understanding then informs the development of fault-tolerance blueprints. A great blueprint identifies the constraints and imperfections that are most critical to FTQC and finds ways to circumvent them, and it provides a clear target for hardware development to focus on. In doing so, it provides powerful leverage for the hardware effort.
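To make this kind of analysis concrete, here is a minimal sketch of the sort of simulation involved. It is not QC Design's software; it uses the open-source tools stim and pymatching, and the code distances and physical error rate below are illustrative assumptions only. It estimates the logical error rate of a surface-code memory under a simple circuit-level depolarizing noise model, the kind of number a blueprint uses to judge which imperfections matter most.

```python
# Minimal sketch: logical error rate of a surface-code memory under a
# simple circuit-level noise model, using the open-source stim + pymatching.
# Distances and error rates are illustrative assumptions, not a real design.
import numpy as np
import stim
import pymatching


def logical_error_rate(distance: int, physical_error: float, shots: int = 10_000) -> float:
    # Build a rotated surface-code memory experiment with depolarizing noise
    # after Clifford gates and flip noise on measurement and reset.
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=distance,
        after_clifford_depolarization=physical_error,
        before_measure_flip_probability=physical_error,
        after_reset_flip_probability=physical_error,
    )

    # Sample detector outcomes and the true logical observable flips.
    sampler = circuit.compile_detector_sampler()
    detection_events, observable_flips = sampler.sample(shots, separate_observables=True)

    # Decode with minimum-weight perfect matching and count logical failures.
    matcher = pymatching.Matching.from_detector_error_model(
        circuit.detector_error_model(decompose_errors=True)
    )
    predictions = matcher.decode_batch(detection_events)
    failures = np.sum(np.any(predictions != observable_flips, axis=1))
    return failures / shots


# Example: compare two code distances at an assumed physical error rate.
for d in (3, 5):
    print(f"distance {d}: logical error rate ~ {logical_error_rate(d, physical_error=1e-3):.2e}")
```

Under the simple noise model assumed here, increasing the code distance should suppress the logical error rate as long as the physical error rate sits below threshold. A real blueprint repeats this kind of analysis with hardware-specific error models, connectivity constraints, and decoders.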

Some of the strongest teams building hardware already have their roadmaps for fault tolerance. Among these teams, some have even published detailed blueprints: PsiQuantum, Xanadu, ORCA, Quandela and Photonic Inc. in photonics and optically addressable spins; Google, IBM and AWS in superconducting qubits; QuEra in neutral atoms; and Universal Quantum and eleQtron in trapped ions. Each of these efforts, along with some parallel efforts from academic teams, sheds light on the relevant hardware constraints and imperfections and finds ways to overcome them.

But this is just the beginning in terms of designing for fault tolerance. Every hardware modality being championed in the quantum industry comes with its own imperfections and constraints, which must be addressed in its blueprint. Building these blueprints is still in progress for many of the newer hardware modalities within the major platforms, and for some of the novel and promising hardware platforms as well. And the blueprints that currently exist for the more mature modalities need to become more comprehensive, identifying and addressing all significant limitations on the path towards FTQC. I expect the progress on this front to be swift and impactful for the future of quantum computing.

The full Part II document can be downloaded here.

About the Author

Dr. Ish Dhand is the CEO and co-founder of QC Design, a quantum computing company that offers licenses to fault-tolerance architectures and design software for building useful and scalable quantum computers. Prior to this, he headed the architecture team at Xanadu. Ish has over 10 years of research and leadership experience in quantum computing, including a strong focus on fault tolerance, quantum advantage, and error suppression.

November 18, 2023