The Inevitable Shift to Qubit Efficiency & Error Correction Is Finally Happening

by Ray Smets, President & CEO and Andrei Petrenko, Head of Product, Quantum Circuits, Inc.

This year we’ve seen a number of new and updated roadmaps released by some high-profile quantum hardware vendors. The targets are aggressive.

Recently, many providers have unveiled trajectories involving millions of physical qubits. Logical qubit targets, which hinge on solving the challenges of error correction, range as high as tens of thousands. Logical error rates, the standard benchmark of a QPU’s overall computational performance, are slated to fall dramatically, waving the green flag for commercial-grade applications. Companies that have not published new roadmaps are vocally doubling down on their existing plans, sticking to the core tenet that a million or more physical qubits are required to usher in quantum utility.

The history of quantum roadmaps clearly shows a qubit arms race, one that in many cases obscures a deeper, more pragmatic focus on the core technology approaches that will maximize success in the long run.

One of the most important is error correction, together with the key facets of a given modality’s architecture that support it efficiently. Likewise, the near-term capabilities of quantum systems are essential to supporting a robust ecosystem of application exploration. They foster novel algorithm discovery within the quantum community and industry at large, setting up the field to hit the ground running when qubit counts and error rates do reach the levels required for quantum utility.

This arms race therefore merits a pause. We need to say what the market is unable to see and what vendors do not want to hear.

For years, high-volume qubit vendors have been running the wrong race

They’re running an inefficient race that requires excessive “brute force” engineering to produce massive qubit counts, compounding the problem by attempting to scale before solving the challenges of error correction.

In recent years, large slices of the industry have focused on boiling down a QPU to a few technical statistics, oversimplifying the message for a mass market that doesn’t know what it doesn’t know. Yes, the qubit count matters, as does the error rate. And of course, the logical qubit count is important too. But there are many more focal areas, metrics, and questions that enable a well-rounded judgment of effective quantum methodologies. Without them, information on qubit counts alone creates misleading perceptions.

If it takes a football field and a dedicated power plant to build just a single large-scale QPU, is that the best approach? Is it the most efficient method? Is it cost-efficient? What applications matter? How will a QPU be democratized, fit into an HPC ecosystem, and reach developers worldwide?

Quantum is missing an opportunity to tell a bigger story. A more meaningful story. The whole story. Investors, the media, enterprises, governments – they all need to hear it. Most of the quantum industry is also missing another race, one that sidesteps the public fixation on qubit counts. Quantum error correction is the critical race, and it’s essential to get it right.

Run the Right Race

Error correction is one of the capabilities a quantum computer requires to successfully execute algorithms at scale. Its purpose is to correct the errors that corrupt qubits. But the same quantum properties that make quantum computing so powerful also mean qubits cannot simply be copied or read out mid-computation, so most conventional techniques spread each logical qubit across many physical qubits, driving resource requirements high and making error correction difficult.

One promising way to simplify the path to fault-tolerant quantum computing leverages an emerging technique called Dual-Rail Cavity Qubits (DRQs). This approach, which is advancing rapidly, has shown some of the strongest performance metrics across hardware modalities. As a multi-component, all-superconducting unit, the DRQ design uses the properties of microwave-frequency resonators and transmons arranged in a novel layout. This enables high-fidelity quantum gates and measurements at high clock speeds — a combination that remains rare in the industry.
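To make the idea more concrete, here is a rough toy picture (a simplified sketch, not Quantum Circuits’ actual implementation): in the published dual-rail scheme, one logical qubit is encoded in a single microwave photon shared between two cavities, and losing that photon leaves both cavities empty, a state outside the codespace that can be flagged rather than silently flipping the qubit.

```python
import numpy as np

# Toy two-cavity model, each cavity truncated to Fock states |0> and |1>.
# Joint basis ordering: |00>, |01>, |10>, |11>.
def ket(index):
    return np.eye(4)[index]

# Dual-rail logical states: one photon shared between the two cavities.
logical_0 = ket(2)   # |10> : photon in cavity A
logical_1 = ket(1)   # |01> : photon in cavity B

# Photon loss (the dominant cavity error) maps either logical state to |00>,
# which lies outside the codespace, so it can be detected as an erasure
# rather than silently corrupting the qubit.
vacuum = ket(0)      # |00> : leakage state produced by photon loss

def measure_dual_rail(state):
    """Return '0', '1', or '*' (erasure) for a toy dual-rail readout."""
    if np.allclose(state, logical_0):
        return "0"
    if np.allclose(state, logical_1):
        return "1"
    return "*"  # photon lost: the error is flagged, not silent

print(measure_dual_rail(logical_1))  # '1'
print(measure_dual_rail(vacuum))     # '*' -- heralded error
```

The point of the toy model is only that the dominant physical error pushes the system out of the logical subspace, which is what makes hardware-level detection possible.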

DRQs also introduce a new capability that undermines the brute-force qubit-count arms race – error detection that’s built in at the hardware level. Unlike other qubits, which are measured as just 0 or 1, DRQs return three results – 0, 1, and *. It’s this third result, *, that indicates a DRQ experienced an error during an algorithm. What’s unique is that users can learn, for the first time on a qubit-by-qubit basis, where in the algorithm the majority of errors occurred.
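As a hedged illustration of what that third outcome buys in practice (the error rates and function names below are invented for illustration, not measured DRQ figures), shots flagged with * can simply be discarded, trading a fraction of the shot count for higher fidelity on the results that remain:

```python
import random

def run_shot(p_flip=0.02, p_erasure=0.05):
    """Simulate one shot of a circuit whose ideal outcome is 0.

    With probability p_erasure the error is detected and the shot returns '*';
    with probability p_flip an undetected flip returns '1'. Both rates are
    made up purely for illustration.
    """
    r = random.random()
    if r < p_erasure:
        return "*"
    if r < p_erasure + p_flip:
        return "1"
    return "0"

shots = [run_shot() for _ in range(100_000)]

raw_fidelity = shots.count("0") / len(shots)
kept = [s for s in shots if s != "*"]          # discard flagged shots
post_selected_fidelity = kept.count("0") / len(kept)

print(f"raw: {raw_fidelity:.3f}  post-selected: {post_selected_fidelity:.3f}")
print(f"shots discarded: {1 - len(kept) / len(shots):.3f}")
```

On these made-up numbers, the raw success rate sits near 93%, while the post-selected rate climbs to roughly 98% at the cost of discarding about 5% of shots.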

Conventional approaches across ions, atoms, and other superconducting qubits experience “silent errors”: algorithm results simply degrade, and only bulky error correction codes can improve them. With built-in error detection, DRQs provide extra insight into error dynamics, bridging the gap from near-term systems to scalable error correction far more efficiently.

Error detection with DRQs is therefore more than just a mechanism to enhance fidelities for near-term quantum programs. It’s about winning the race to error correction more efficiently. The DRQ approach both reduces the number of qubits required to achieve low error rates and makes it easier to get error correction working to begin with. Conventional approaches do not have these advantages, and that’s what will hold them back.
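A classical analogy shows why knowing where an error occurred lowers the overhead. In coding theory, a distance-3 repetition code corrects only one bit flip at an unknown location, but it survives two flagged erasures. The sketch below is that analogy only, not the DRQ decoder itself:

```python
from collections import Counter

def decode_repetition(received):
    """Decode a 3-bit repetition code where '?' marks a flagged (erased) bit.

    Known-location erasures are simply ignored and the surviving bits are
    majority-voted: a distance-3 code corrects one unknown flip but up to
    two erasures, so knowing *where* errors happened buys extra power.
    """
    surviving = [bit for bit in received if bit != "?"]
    if not surviving:
        return None  # every bit erased: decoding fails
    return Counter(surviving).most_common(1)[0][0]

# The encoded bit is 0 in all three examples below.
print(decode_repetition(["0", "1", "0"]))  # one unknown flip     -> '0' (correct)
print(decode_repetition(["1", "1", "0"]))  # two unknown flips    -> '1' (fooled)
print(decode_repetition(["?", "?", "0"]))  # two flagged erasures -> '0' (correct)
```

The same principle carries over to quantum codes: errors whose locations are flagged are cheaper to correct than silent ones, which is how hardware-level detection can reduce the physical-qubit overhead needed to reach a target logical error rate.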

DRQs keep things simple, saving capex and computational resources while maintaining the low error rates necessary for commercial-grade applications and keeping the overall QPU footprint small. Why build 1 million qubits to do something revolutionary if you can do it with 10X fewer, or fewer still? No football field required. Less capex drain. Lower resource demands. A longer cash runway. Better investment returns. A greater chance of achieving sustainable fault tolerance.

At the same time, new features in quantum hardware and software are emerging that enable researchers and developers to explore uncharted spaces in quantum application development. We believe error detection is a fundamental requirement. DRQs give users access to previously inaccessible sources of high-quality data that no other system provides – error records at the single qubit level. These error records can be leveraged to discover novel applications across a host of high-value spaces.

More data means more classical horsepower is needed for processing, both in real time and offline. That’s where the other two legs of the stool come in – GPUs and CPUs. The QPU is a remarkable force that will further accelerate today’s HPC techniques and maximize the combined benefits of AI and quantum. DRQs will be a key resource.

To be objective, we’re not the only ones with a hardware-efficient approach. Other hardware approaches, such as cat qubits, are showing progress. Their proponents have offered roadmaps targeting low overheads between physical and logical qubit counts and have shown impressive results on error correction. Some providers are targeting promising theoretical error correction codes that require overcoming high qubit-connectivity hurdles at scale. While we root for the success of the quantum industry and look forward to seeing how these methodologies evolve, we are championing the success of our dual-rail architecture and expect it to win out.

A key priority for advancing quantum computing is to deploy increasingly capable systems that deliver larger logical qubit counts and lower logical error rates, which is achieved by making error correction the central strategy. Progress in recent years has demonstrated that practical, efficient architectures can support this path, with a growing emphasis on roadmaps that prioritize system usability and efficiency for end users.

The goal is to develop increasingly capable quantum systems while simplifying the challenge of scaling by exploring innovative qubit architectures. A pragmatic “correct first, then scale” philosophy will provide a faster and more efficient path to fault tolerance that enterprises can rely on for practical applications.

Ultimately, there will be multiple winners in quantum computing, all pursuing paths focused on error detection, error correction, and hardware efficiency. The brute-force, high-volume qubit arms race is the wrong race to run. It’s time to realize that qubit efficiency is the rabbit to chase.

Ray Smets, President and CEO of Quantum Circuits and a board member, brings extensive executive experience across computing, security, and networking, with prior leadership roles at Napatech, Cisco, Motorola, McAfee, and AT&T. He holds engineering and business degrees from the University of Florida and Stanford.

Andrei Petrenko, Ph.D., Head of Product Strategy at Quantum Circuits, is a quantum hardware expert with nearly 15 years of experience. He drives the company’s unique superconducting architecture and customer-focused ecosystem and shares his insights as a frequent industry speaker.

July 19, 2025