By Andre Saraiva, Diraq

A Quantum Computing event on the 3rd of April (see footage here) marked the launch of the first two whitepapers on Quantum Technologies commissioned by Standards Australia, a non-governmental not-for-profit organisation similar to the American ANSI or the international IEC. They cover Quantum Computing (available in full) and Quantum Communications (only the executive summary is available at the moment). These are fantastic, authoritative reads, and two more reports are planned for later release.

At the launch event, a number of very interesting points were raised about the sustainability of quantum computing research and the role of standardisation.

To set the context, it is important to picture the state of quantum technologies in 2023. Australia has had commercial endeavours in quantum communications and quantum sensing for decades now. Moreover, it is home to some of the world's leading quantum computing hardware developers, such as Silicon Quantum Computing, Quantum Brilliance and Diraq. Standards play a very different role in these scenarios – the concept of quantum advantage in sensing and communications is well understood and testable with current technology, while it remains elusive and theoretical in quantum computing.

A natural question was then posed to the panel of experts invited to the event (including yours truly): could it be too early to set standards in such a nascent field, with such a distant horizon for practical commercial applications?

The unanimous view of the experts in the room, both quantum scientists and policy makers, was that the standardisation of terminology in quantum technology should have happened sooner. Standards are an instrument that helps governments and corporations guarantee that their investments are protected by conventions that remove technical ambiguity – an urgent need in the case of the quantum market. For instance, standards will be one of the main tools for surviving a potential quantum winter.

The world is becoming increasingly aware that a quick grab at quantum advantage with small NISQ algorithms might not happen. The only algorithms with mathematically provable advantage developed so far rely on multimillion-qubit processors that can operate fault-tolerantly, or at least on qubits that can tolerate deeper circuits and perform calculations much faster than the current ones. The endeavour to build such a machine is as much a scientific challenge as a financial one. Disillusioned investors who were expecting more immediate returns will abandon the scene, raising the bar for quantum companies to unlock the needed investments from either governments or private investors with deep enough pockets and flexible investment mandates.
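To make the "multimillion qubit" scale concrete, here is a back-of-the-envelope sketch. It assumes a surface-code-style encoding in which each logical qubit costs roughly 2·d² physical qubits at code distance d – a common rule of thumb, not a figure from the whitepapers – and the algorithm size and distance are illustrative assumptions:

```python
# Illustrative only: why fault tolerance implies multimillion-qubit machines.
# Assumes a surface-code-style encoding where one logical qubit costs roughly
# 2 * d**2 physical qubits at code distance d (a common rule of thumb).

def physical_qubits(logical_qubits: int, distance: int) -> int:
    """Rough physical-qubit count for an error-corrected processor."""
    return logical_qubits * 2 * distance ** 2

# A modest 1,000-logical-qubit algorithm at distance d = 25:
print(physical_qubits(1_000, 25))  # → 1250000, already over a million
```

Even under these generous assumptions, a useful error-corrected algorithm lands in the millions of physical qubits.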

In such a world of less abundance, serious companies can only differentiate themselves if well-defined standards of quality are in place. Moreover, taxpayer money will need to be invested with some serious regard to verifiability of claims and standardised validation of quantum operations. Finally, investors will need metrics for gauging progress in the long valley between the initial blueprints that they signed up for and the actual finalised product.

The problem can be as simple as defining the word “qubit”. For most scientists, there is no controversy about what the word means. However, a number of technologies elude the standard paradigm of a two-level system with a set of calibrated operations, initialisation and readout. Examples include adiabatic/annealing quantum computing, continuous variables and quantum simulation. In these cases, the use (and, in some cases, abuse) of the term “qubit” can lead to disparities between what different vendors offer. If a government body starts a tender process for a 100-qubit quantum computer, it is important that those 100 qubits actually do what they are intended to do with a minimum certified fidelity.
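To illustrate what a “minimum certified fidelity” could mean operationally, here is a minimal sketch of randomized benchmarking (RB), a widely used characterisation protocol: the survival probability of random gate sequences decays exponentially with sequence length, and the decay rate maps onto an average gate fidelity. The numbers and the fixed single-qubit offset below are illustrative assumptions, not a prescribed standard:

```python
import math

def average_gate_fidelity(decay_rate: float, dim: int = 2) -> float:
    """Standard RB relation: F_avg = 1 - (1 - r) * (d - 1) / d."""
    return 1.0 - (1.0 - decay_rate) * (dim - 1) / dim

def fit_decay_rate(seq_lengths, survival_probs, offset=0.5):
    """Log-linear least-squares fit of p(m) = 0.5 * r**m + offset."""
    ys = [math.log(p - offset) for p in survival_probs]
    n = len(seq_lengths)
    mean_x = sum(seq_lengths) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(seq_lengths, ys))
    slope /= sum((x - mean_x) ** 2 for x in seq_lengths)
    return math.exp(slope)  # decay rate r per gate sequence step

# Synthetic single-qubit RB data with true decay rate r = 0.998:
lengths = [1, 50, 100, 200, 400]
probs = [0.5 + 0.5 * 0.998 ** m for m in lengths]
r = fit_decay_rate(lengths, probs)
print(round(average_gate_fidelity(r), 5))  # → 0.999
```

A tender could then specify, for example, that every advertised qubit must demonstrate an average gate fidelity above some threshold under an agreed protocol like this one.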

Early efforts to determine such standards were self-organised by academics. The field of Quantum Characterisation, Verification and Validation (QCVV) constitutes a vibrant crowd at any scientific conference. However, counting on individuals to do this job is bound to become a problem in the longer run. These volunteers are only effective gatekeepers if they remain unbiased and agnostic to any particular commercial venture. But in a world with a huge talent gap, the economic pressure to enlist scientists in quantum startups is slowly emptying the rooms of truly independent QCVV experts.

Some efforts at independent certification as a service have been created. Quantum Benchmark, a Canadian startup now part of Keysight’s portfolio, does precisely that. But so far this has mostly been a spontaneous initiative by hardware makers seeking a “seal of approval”, rather than a public policy of requiring independent certification. Ultimately, such certification is only meaningful if national standardisation bodies, such as Standards Australia, have frameworks in place regulating what metrics should be used when discussing a target performance for quantum processors.

Standardisation efforts globally

There is currently one draft International Standard under the Information Technology family of standards, led by the joint technical committee (JTC 1) of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The draft is accepting comments at the moment. This document only scratches the surface, merely attempting to define vocabulary and terminology in the field. Yet even this simple exercise highlights the difficulties and commercial frictions generated by terms as basic as qubits and quantum processors.

Most standards organisations are still at a very early road-mapping/white-paper stage.

Critiques of the current standardisation efforts


With the boom in the quantum industry, finding unbiased experts with enough influence in this field to write a widely respected and adopted set of standards is becoming increasingly difficult. Governments must include in their quantum initiatives funding to sustain an independent group of academics – one that can consult with industry but ultimately remains economically independent and able to provide standardisation driven by science rather than by commercial interests.


There is geopolitical pressure for nations to lead the establishment of standards. However, the field is still nascent, and the quantum market is only sustainable at a global scale. We must therefore make sure that standardisation efforts do their best to consult across countries, including countries with less developed quantum industries but with the potential to mediate unbiased discussions and, ultimately, the capability to represent the views of future consumers of these technologies.

Moreover, it is important that, within each country, input to standards is taken from a balanced representation of industry, academia, stakeholders and the general public.


Perhaps the biggest challenge is assembling a set of experts who hold respected opinions, remain unbiased by commercial interests, have the clout to make bold standards that might not benefit some commercial entities (especially those with “loose” scientific standards) and are willing to spend the significant effort needed to produce such documents. A quick scan of the names involved in quantum standardisation worldwide reveals very few household names, which raises doubts about the willingness of the community to embrace such standards in the longer run. This is a vulnerability for the effort to create a truly unbiased, international set of standards.

So, what is next?

The follow-up really falls on all of us, the people who care about quantum computing.

Firstly, we need a strong sense among providers and consumers of the value of standardisation. A standardisation effort backed by experts and representative of every serious quantum endeavour is key, because standards are only as useful as their adoption among companies and users. Perhaps this could be an early target for the newly established International Council of Quantum Industry Associations. It is clear that the level of international awareness of the need for immediate action is not quite there yet.

Another important step is to make sure that the people responsible for procuring quantum computing services and hardware are aware of these subtleties and capable of finding the appropriate support. Few tenders involve anything like the technical complexity of quantum computing, so it will be rare to find a procurement system ready for it.

Finally, governments need to step in and fund independent bodies that can ensure that some experts remain unbiased and capable of providing the oversight needed to guarantee that narratives do not dominate over scientific facts.

Dr. Saraiva has worked for over a decade providing theoretical solutions to problems in silicon spin quantum computation, as well as other quantum technologies. He is currently Head of Solid-State Theory at Diraq, an Australian start-up developing a scalable quantum processor.

April 21, 2023