Quantum Computing Report

Scaling Quantum Hardware – The Case for Modular Scaling

Today’s quantum platforms need to scale up radically to deliver on quantum computing’s true commercial promise. Innovation to meet this challenge continues to throw focus onto new parts of the quantum stack. Quantum networking technology is closely connected to modularity, and is therefore far more central to this quest than is commonly realized. Forward-thinking quantum players, ecosystem builders and investors are already charting their strategies to maximize their share of this prize.

The best of today’s early quantum systems are already capable of notional ‘beyond classical’ calculations. This starts to be possible with a high-fidelity 50-60 qubit module where environmental noise and crosstalk are sufficiently under control. However, whether such Noisy Intermediate Scale Quantum (NISQ) modules will be capable of broad commercial utility is much less clear. With 200-300 physical qubits and advanced techniques for error suppression and error mitigation, very interesting science experiments certainly seem possible, but commercial applications with a strong, genuinely quantum advantage may be more difficult to achieve.
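
A toy estimate illustrates the difficulty. If errors on two-qubit gates are independent (a strong simplification), a circuit of n such gates completes without any error with probability roughly fⁿ; the gate counts and fidelities below are illustrative assumptions, not measured figures:

```python
# Toy model: chance a NISQ circuit runs with no gate error at all,
# assuming independent two-qubit gate errors (a strong simplification).

def circuit_success(two_qubit_fidelity: float, n_gates: int) -> float:
    """Probability that every one of n_gates two-qubit gates succeeds."""
    return two_qubit_fidelity ** n_gates

# A ~50-qubit circuit of depth ~20 has on the order of 500 two-qubit gates.
for fid in (0.99, 0.995, 0.999):
    print(f"2Q fidelity {fid}: success ≈ {circuit_success(fid, 500):.1%}")
# 0.99 -> ~0.7%, 0.995 -> ~8%, 0.999 -> ~61%
```

Even at 99.9% two-qubit fidelity, a modest 500-gate circuit completes cleanly only about three times in five, which is why error suppression and mitigation carry so much weight in NISQ proposals.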

The applications of quantum computing where our understanding of the likely benefits is most established (cryptanalysis, materials science, quantum chemistry) often envisage large systems of 1000Q – 25,000Q+ high-performing qubits, able to sustain quantum calculations without error over a trillion-plus quantum operations – the teraquop regime.

To reach this level of performance, most believe that techniques for quantum error correction (QEC) will also be required, mapping multiple physical qubits onto higher-performing logical qubits. The catch is that these techniques introduce significant overheads. Though impacted by many factors, the most widely established techniques might require an eye-watering ratio of 1500:1 physical to logical qubits (or more). Popular summaries typically conclude that we will need ‘millions’ of physical qubits to serve large-scale quantum applications.
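
Where do ratios of this order come from? A back-of-envelope sketch using the widely quoted surface-code heuristic p_L ≈ A·(p/p_th)^((d+1)/2), with roughly 2d² physical qubits per distance-d logical qubit, gives a feel for it. The prefactor A and the p/p_th ratio below are illustrative assumptions, not GQI figures:

```python
# Back-of-envelope surface-code overhead for the teraquop regime, using
# the common heuristic p_L ≈ A * (p/p_th)^((d+1)/2) and ~2*d^2 physical
# qubits per distance-d logical qubit. All constants are assumptions.

A = 0.1              # code-dependent prefactor (assumed)
p_over_pth = 0.1     # physical error rate as a fraction of threshold (assumed)
target_quops = 1e12  # teraquop: ~10^12 logical operations without an error

d = 3                # surface-code distances are odd
while A * p_over_pth ** ((d + 1) / 2) > 1 / target_quops:
    d += 2

per_logical = 2 * d * d
print(f"distance {d}: ~{per_logical} physical qubits per logical qubit")
for logical in (1_000, 25_000):
    print(f"{logical:>6} logical qubits -> ~{logical * per_logical / 1e6:.1f}M physical")
```

Under these toy constants the sketch lands at a distance in the low twenties, on the order of a thousand physical qubits per logical qubit, and tens of millions of physical qubits at the 25,000-logical-qubit scale; more conservative constants push the ratio toward the 1500:1 figure quoted above.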

Proposals exist to reduce the overheads of error correction, but they often depend on significantly more demanding qubit properties: not just much higher raw physical fidelity, but also higher and non-local inter-qubit connectivity during the error correction cycle. Even under relatively aggressive assumptions – a module size of 10,000 physical qubits and an error correction overhead of 20:1, giving 500 logical qubits per module – that is still not enough to serve large applications in a single module.
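
The partitioning arithmetic in that scenario is simple but worth making explicit; the application sizes below are taken from the 1000Q – 25,000Q+ range quoted earlier:

```python
# Partitioning arithmetic for the aggressive scenario above.
module_physical = 10_000  # physical qubits per module
qec_overhead = 20         # physical-to-logical ratio

logical_per_module = module_physical // qec_overhead  # 500 logical qubits
for app_logical in (1_000, 25_000):
    modules = -(-app_logical // logical_per_module)   # ceiling division
    print(f"{app_logical} logical qubits -> {modules} interconnected modules")
# 1,000 -> 2 modules; 25,000 -> 50 modules
```

Even the optimistic end of the range already forces at least two modules; the top end demands dozens, all stitched together by interconnects.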

GQI believes that for almost all of today’s proposed quantum computing architectures a modular approach to scaling will also ultimately be required. This will likely entail a distributed rather than a monolithic quantum computing stack. As a bonus, modular scaling comes with significant additional advantages: flexibility, maintainability and redundancy.

Most architectures will ultimately need to leverage interconnects operating at either microwave or optical frequencies. Such strategies are closely connected to the wider fields of quantum communications and quantum networking. If sufficiently performant optical photonic interconnects can be achieved, they hold out the promise of unique synergies and flexibility across these fields.

GQI concludes that it is important for all quantum sector participants to take a more holistic view in appraising quantum computing hardware architectures than has often been the case. 

The fundamental requirement remains:

  • a high-fidelity qubit platform

But we also need to maintain a focus on:

  • practical and optimal module sizes 
  • qubit connectivity and interconnect compatibility
  • the QEC codes and pathways to fault tolerance these configurations enable
  • classical control system performance (particularly real-time decoding; see the sketch after this list)
  • how the system will ultimately fit within a hybrid HPC data center environment.
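
On the real-time decoding point, a rough sketch gives a feel for the data volumes involved. The cycle time, code distance and machine size below are assumed, superconducting-scale figures:

```python
# Rough syndrome data rate a real-time decoder must absorb, assuming a
# surface code in which every stabilizer is measured once per QEC cycle.
cycle_time_s = 1e-6     # ~1 microsecond per cycle (superconducting scale)
distance = 21           # code distance (assumed)
logical_qubits = 1_000  # machine size (assumed)

stabilizers = distance**2 - 1  # syndrome bits per logical qubit per cycle
bits_per_second = logical_qubits * stabilizers / cycle_time_s
print(f"~{bits_per_second / 1e9:.0f} Gbit/s of syndrome data to decode live")
```

Hundreds of gigabits per second of syndrome data, decoded at microsecond-scale latency, is a serious classical engineering problem in its own right.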

Efficiently orchestrating such large, modular quantum hardware systems will itself be a key challenge across multiple disciplines: our understanding of the physics, physical engineering and software engineering. Academic work has often addressed these issues under the labels ‘distributed quantum computing’ or ‘entanglement distribution’. Commercial hardware players refer to systems engineering and modular scaling. Would-be midstack software/firmware providers talk of operating systems and software stack abstraction.

GQI feels these converging discussions have sometimes obscured how central quantum networking technology, at least at short range, is to the delivery of mainstream quantum computing. GQI believes that many early quantum computing players have often not explained this part of their roadmaps in sufficient detail. Major challenges often lurk in these gaps.

GQI believes that challenge also brings opportunity. New startups are enriching the ‘network layers’ of the quantum computing stack with innovation. Opportunities exist to capture important heights in the quantum computing value chain. New business models are available for investors to back.

Developments to watch

  • True roadmaps – do hardware roadmaps clearly explain the technology milestones by which modular scaling will be realized; or if not, how do they escape this bottleneck?
  • Fidelities – particularly 2Q gate fidelities demonstrated in simultaneous operation across a multi-qubit device; even modular scaling is useless if the modules aren’t of sufficiently high quality.
  • Module sizes – innovations that shift the dial on potential module size. Aspirations to do this certainly exist, but we need to see evidence that a new option is realistic.
  • NISQ schemes – whether for frontier science or early niche commercial utility; the use of hybrid classical resources and aggressive parallelism may be another pointer to the importance of modularity and how the network layers of the stack need to operate.
  • FTQC schemes – what connectivity do new high-rate codes require; is the pathway to a universal set of fault-tolerant operations known; we need to understand the coupler and interconnect technologies required to deliver these schemes.
  • Interconnect performance – this matters across multiple dimensions
    • Fidelity – the fidelity with which entanglement can be generated
    • Rate – the rate at which that entanglement is established; rate is set to be as important as fidelity, and it is typically possible to trade one off against the other (see the sketch after this list)
    • Compatibility – the operating frequency window will be a key mark of compatibility
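
One way to make the fidelity-rate trade-off concrete is entanglement distillation, where several low-fidelity pairs are consumed to produce fewer, higher-fidelity ones. Below is a minimal sketch using the textbook BBPSSW recurrence for Werner-state pairs – an idealized model, not a description of any particular interconnect:

```python
# Fidelity-rate trade-off via entanglement distillation: the textbook
# BBPSSW recurrence for Werner-state pairs. Each round consumes two
# pairs and succeeds only probabilistically, so rate falls as fidelity rises.

def bbpssw_round(F: float) -> tuple[float, float]:
    """One distillation round: returns (output fidelity, success probability)."""
    p_success = F**2 + 2 * F * (1 - F) / 3 + 5 * ((1 - F) / 3) ** 2
    F_out = (F**2 + ((1 - F) / 3) ** 2) / p_success
    return F_out, p_success

F, rate = 0.90, 1.0   # raw pair fidelity and normalized generation rate
for round_no in range(1, 4):
    F, p = bbpssw_round(F)
    rate *= p / 2      # two pairs in, at most one (probabilistic) pair out
    print(f"round {round_no}: fidelity {F:.4f}, relative rate {rate:.3f}")
```

Each round lifts fidelity but more than halves the surviving pair rate, which is why raw entanglement rate matters as much as raw fidelity.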

GQI has produced a full 39-page Focus Report titled Scalable Quantum Hardware that describes in much more detail the challenges involved and the strategies companies are taking to develop modular technologies for scaling up quantum computers. Visit the GQI webpage HERE for information on how to obtain it.

March 22, 2024