Jack Krupansky, a freelance consultant who has written extensively about quantum computing, is interviewed by Yuval Boger. Jack and Yuval talk about why Jack is taking a break from quantum, what would cause him to re-engage, his advice to Fortune 500 CIOs, quantum benchmarking, and much more.


Jack Krupansky occupies a unique place in the quantum community. Since 2018, as a non-affiliated observer, he has written dozens of highly detailed informal papers exploring various aspects of quantum computing. He recently announced that his current stage, a five-year immersion in quantum computing, is coming to an end, and I was curious to find out why. Though Jack shies away from microphones (at least from my microphone), he agreed to a written interview to discuss his thoughts and outlook for quantum computing.


Yuval Boger: Hello, Jack. Who are you and what do you do?
Jack Krupansky: I’m Jack Krupansky, a freelance semi-retired former software developer. I’ve been doing a lot of freelance writing over the past six years, but not for money. My central focus for the past five years has been quantum computing. I formerly did a lot of work in compilers, programming language design, software tools, computer graphics, GUI software tools, electronic CAD and CAE applications, software agents, database systems, and search engines, among other things. Mostly I worked at or with tech startups, and with a few big companies in the mix as well. Now I consider myself a technologist, interested more in the capabilities, limitations, and issues with technologies, such as quantum computing, rather than being involved in applications or hands-on development of the technologies.

Now, after five years of immersion in quantum computing and having written all of the important stuff I could think of writing, it feels like a good time to shift gears and begin a distinct new stage of my quantum journey. I don’t feel the need to write as much now. I also feel that the quantum computing sector needs to catch up with what I have spent so much time writing about, and with the many promises that have been made and expectations that have been set over the past five years. So I’m tempted to step back, reflect a bit, and watch developments in quantum computing play out to a fair degree before renewing my own efforts beyond the more minimal engagement I anticipate over the coming months and maybe a year or two.
Yuval: What event or achievement would convince you to re-engage with quantum computing?
Jack: I’d really like to see a vendor achieve my proposal for a quantum computer with 48 fully-connected near-perfect qubits – full any-to-any connectivity and at least 3.5 nines or 4 nines of qubit fidelity, able to run at least 2,500 gates, and support a million gradations of phase and probability amplitude, sufficient to support a 20-bit quantum Fourier transform. Something that has at least some significant quantum advantage – on the order of 1,000,000X a classical solution. Maybe in two years or so. That would definitely get my attention. Short of that, a more modest achievement might be 3.25 nines of qubit fidelity being common, and full qubit connectivity for 36 or at least 27 qubits. And I really do want to see 24 to 28 or even 32-qubit algorithms being the common case. I really want to see 40-qubit algorithms – as a stepping stone to greater sophistication – but even 32 and 36-qubit algorithms would catch my eye.

Hundreds of qubits with mediocre qubit fidelity and crappy qubit connectivity (only nearest neighbors) just aren’t very appealing to me. And we really can’t get to meaningful practical quantum computing until we achieve thousands or a million or more gradations of phase and probability amplitude.
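The exponential relationship behind these numbers is easy to check. A minimal Python sketch (my illustration, not something from Jack's papers) shows why a 20-bit quantum Fourier transform implies roughly a million gradations of phase:

```python
# An n-bit quantum Fourier transform must resolve 2**n distinct values of
# phase, so the required granularity grows exponentially with transform size.
def gradations_for_qft(bits: int) -> int:
    """Phase gradations an n-bit QFT must distinguish."""
    return 2 ** bits

print(gradations_for_qft(10))  # 1,024 -- the "thousands" of gradations
print(gradations_for_qft(20))  # 1,048,576 -- the "million gradations" figure
```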

Sure, lesser achievements are likely to get at least some of my attention, but we’ve had plenty of time to prototype and experiment with toy-like quantum computers and toy-like quantum algorithms, so we need to up our game and get to quantum systems which are at least a sizable fraction of a truly practical quantum computer.
Yuval: If you were working with a Fortune 500 CIO, what would you advise her to do – if anything – about quantum computing?
Jack: First, the caveat that most very large tech-savvy organizations tend to have advanced technology groups which are continuously evaluating new technologies, so they are most likely already looking at quantum computing, doing their own research, prototyping, and experimentation. Rely on those future-tech gatekeepers to alert you when quantum computing – or any other new and advanced technology – really is ready to consider for full-scale development and production deployment.

That said, I would advise that we are still deep in the pre-commercialization stage of quantum computing, not close to commercialization, with much research, prototyping, and experimentation yet to be done. The essential pair of questions for a CIO to ask their advanced technology group: when will this technology be ready to demonstrate a production-scale quantum solution, and when will it be ready for production deployment? Sure, research, prototyping, and experimentation can be done today, but production at scale is a completely different matter.

Premature commercialization or even thinking about production deployment are out of reach and will remain impractical for at least the next few years. One minor exception is generation of true random numbers, which all current quantum computers can easily do right now.

Whether quantum computing might be ready to begin commercialization in two years is debatable. Definitely not before then. And it could easily be two to four years, or even five years. And even seven years is not out of the question. In the meantime, organizations should selectively – and cautiously – engage only in research, prototyping, and experimentation as new quantum computing technology becomes available, but not expect that full-scale product engineering or even any non-trivial pilot projects will be feasible during the remainder of this pre-commercialization stage. 
Yuval: But given this estimation, would you advise the hypothetical CIO to wait a few years before diving into it, or to assign a small team in her company to learn and experiment?
Jack: Quantum computing as it is conceptualized today is not usable by typical technical staff at even the largest organizations. An extreme technical elite is required to do even basic tasks – what I call The Lunatic Fringe. Generally, there won’t be any need for even large organizations to contemplate development of production quantum applications until The ENIAC Moment has been achieved in the research labs – the first instance of a production-scale quantum application which addresses a practical real-world problem, achieves significant quantum advantage, and delivers very dramatic business value.

Unless you’re leading a truly leading-edge, tech-savvy organization, your initial forays into quantum computing will be better made by using the services of what I am calling configurable packaged quantum solutions, where a truly quantum-savvy application vendor has marshaled the elite quantum talent needed to produce a packaged solution which any organization can then use without assembling its own elite quantum team, something that will be very challenging for all but the most quantum-savvy organizations. Such solutions don’t exist today, but waiting for them will be the most productive route for all but the most elite organizations. Sure, some research, prototyping, and experimentation with quantum technologies is also possible, but it is unlikely to be very productive unless your organization really is one of the top-tier technology leaders.

And if you’re a small or medium-size enterprise, it would be best to stay away from quantum computing until configurable packaged quantum solutions do become available – and mature enough to match your organization’s tolerance for technical risk. Eventually, once The FORTRAN Moment has occurred, it will become feasible for medium-size and even smaller enterprises to develop their own quantum applications, but that’s not likely in the next few to five years. The technology will by then be far more advanced than it is today, so learning about current quantum computers will have negligible value for small to medium-size enterprises, in general, although there could be some rare exceptions.
Yuval: Imagine you’re given “master of the quantum computing universe” powers, and you can control the 2-year work plan of companies in this space. What would you have them do?
Jack: First and foremost, get all of the algorithm people focused on running on classical quantum simulators rather than on real quantum computers, with noise models configured to match the real hardware we expect to see two to three years from now. I want to see lots of 24, 28, 32, 36, 40, and even 44-qubit algorithms on simulators, not mere toy algorithms using 8, 10, 12, or even 16 qubits.
Get the algorithm folks focused on using quantum Fourier transform and quantum phase estimation – not wasting scarce technical talent on variational methods, which will never achieve dramatic quantum advantage. Focus on near-perfect qubits – 3.5 to 4 to 5 nines of qubit fidelity. Full any-to-any qubit connectivity is a must.

Full quantum error correction should be relegated to being a longer-term research project – nothing much practical in two, three, and maybe not even in four years.

How many qubits? Qubit fidelity and qubit connectivity, as well as granularity of phase and probability amplitude, will be the limiting factors, not the qubit count. As I said earlier, I’d really like to see a vendor achieve my proposal for a quantum computer with 48 fully-connected near-perfect qubits, able to run at least 2,500 gates, with a million gradations of phase and probability amplitude, and delivering significant quantum advantage on the order of 1,000,000X a classical solution.

And dramatically ramp up longer-term research for better qubit technologies, higher-level algorithmic building blocks, a higher-level programming model, and even a quantum-native high-level quantum programming language sufficient to achieve what I call The FORTRAN Moment – when non-elite technical staff can finally develop their own custom quantum algorithms and applications – with the greatest of ease.

And if we’re lucky, maybe in the two to three or four-year timeframe a few elite wild-eyed lunatic-fringe technical teams will develop what I call configurable packaged quantum solutions where non-elite developers can configure solutions to their organization’s problems without having to touch or even understand the quantum algorithms or application code which invokes those quantum applications. Such solutions will enable many large organizations to actually deploy production quantum applications without the need for their own large quantum research and development groups, especially when high-caliber quantum technical talent is scarce and expensive.
Yuval: Your technical background is in software development. You previously wrote about “configurable packaged quantum solutions” where users “are able to configure the software without doing any software development.” What is needed so that developers can create these configurable solutions? And do you feel that the current level of software development tools is sufficiently advanced for that ‘lunatic fringe’ to be able to develop these configurable packaged quantum solutions?
Jack: Production-scale quantum algorithms and applications will require high-fidelity qubits with full any-to-any qubit connectivity, as well as fine granularity of phase and probability amplitude, and support for fairly deep quantum circuits of at least a few thousand gates.

Full quantum error correction is unlikely to be available in the next few years, so elite algorithm designers and application developers will need the combination of low-error near-perfect qubits – 3.5 to 4.5 nines of qubit fidelity – as well as some minimal degree of manual error mitigation.

My previously mentioned proposal for a quantum computer with 48 fully-connected near-perfect qubits would be an excellent starting point for configurable packaged quantum solutions.

Only the most elite technical staff, the so-called lunatic fringe, will have the skills needed to develop applications at this stage, let alone the even rarer skills needed to generalize algorithms and applications so that they can be configured by non-elite, non-quantum technical staff – what I am calling configurable packaged quantum solutions. It’s difficult to assess what software development tools the lunatic fringe will require at this stage, but the essential characteristic of the lunatic fringe is that they are able to make do with whatever is available, however primitive it may be. In fact, it is precisely the lunatic fringe who may have the best insight and actually create many of the more advanced software development tools as a side effect of developing the initial configurable packaged quantum solutions.

A good example from current technology is search engine packages such as Elasticsearch and Apache Solr, where a vast amount of configuration and customization can be performed using XML- and JSON-based configuration commands, without any need to directly tinker with or even understand the core code of the search engine itself.

A good target will be quantum computational chemistry, where the user should be able to configure the solution based only on a knowledge of chemistry, and the elite code within the solution will then dynamically generate the quantum algorithms depending on the particular atoms and molecules which have been configured.
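To make that concrete, here is a purely hypothetical sketch of what a chemistry-only configuration might look like; none of these names correspond to a real product, and the "engine" is a stub standing in for the elite-built internals that would generate and run the actual quantum circuits:

```python
# Hypothetical sketch: the user supplies only chemistry-level configuration;
# the packaged solution's internal engine (stubbed here) would translate it
# into quantum phase estimation circuits and execute them.
config = {
    "molecule": "LiH",
    "basis_set": "sto-3g",
    "property": "ground_state_energy",
}

def run_packaged_solution(cfg: dict) -> str:
    # Stand-in for the engine that would dynamically generate the quantum
    # algorithms for the configured atoms and molecules.
    return f"computed {cfg['property']} of {cfg['molecule']} in {cfg['basis_set']}"

print(run_packaged_solution(config))
```

The point of the sketch is the division of labor: everything below the `config` dictionary is the vendor's problem, not the user's.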

Before this stage we’ll see The ENIAC Moment, when just a few production-scale applications can be mustered under very challenging technical conditions. They will be few and very expensive.

Some years after configurable packaged quantum solutions debut, research and hardware will have progressed to The FORTRAN Moment, when non-elite, non-lunatic-fringe, but quantum-aware technical staff can design and develop their own full-custom quantum algorithms and applications – without breaking either a sweat or the bank.
Yuval: In your opinion, is there a significant risk of a “quantum winter” in the next 2-3 years?
Jack: I figure that everybody has a free pass for at least the next two years. There’s that level of patience built into the sector. But after two years or so, managers, executives, and investors will be getting impatient and anxious if many of the blizzard of promises haven’t at least started to become fulfilled in a dramatically meaningful manner.

I figure that people will have a full year after the next two years to pull it all together. But after three years, we need to see practical quantum computing as fairly common, or at least within a relatively few months of being practical. If not, then at that stage investment flows and budgets and staffing growth for quantum computing could begin to rapidly decline or tighten or at least fail to expand rapidly – the onset of a Quantum Winter.

I see the risks rising, even now – too much hype, too many wild promises, and lots more investment money flowing. We are indeed seeing a lot of technical advances, but sad to say that even 20-qubit quantum algorithms are very rare and 24-qubit quantum algorithms for practical real-world problems are absolutely nonexistent. I see it as a coin flip – it could go either way.

The main risk is that so much research is still needed before product engineering teams can begin proper commercialization efforts. Research can be fickle and fail even for the best ideas. Research can take a lot longer than expected. There might be five years, or maybe only four, of research needed to achieve practical quantum computing, but there is the risk that investors, managers, executives, and the folks controlling budgets might be too focused on two to three years rather than four to five years.

The good news is that pure research funding is not contingent on commercial prospects, so the fundamental research needed for practical quantum computing in four to five to seven to ten years should continue flowing, particularly for high-value government projects. And once that research culminates, commercial firms can finally see daylight to a prompt path to commercial quantum products, so a Quantum Spring can commence, ending any Quantum Winter, if that did indeed occur.
Yuval: The classical world has plenty of benchmark tests: from CPU tests (e.g., PassMark, 3DMark) to Machine Learning (e.g., MLPerf). What benchmarks are important for quantum computing?
Jack: I do believe in application-oriented benchmarks, but they won’t be very relevant until we have something approaching practical quantum computing. Until then, functional benchmarks will be more important, such as qubit fidelity (nines of reliability), qubit connectivity, and granularity of phase and probability amplitude. And even IBM’s Quantum Volume (QV), to at least a limited degree – log2(QV) tells you roughly how many qubits you can use in a quantum algorithm.
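The log2(QV) rule of thumb is simple arithmetic; a quick sketch (my illustration) of how it converts a reported Quantum Volume into a rough usable-qubit count:

```python
import math

# IBM's Quantum Volume is defined via the largest "square" random circuit
# (equal width and depth) a machine can run reliably: QV = 2**n for n qubits.
# Inverting that, log2(QV) estimates how many qubits an algorithm can use.
def usable_qubits(quantum_volume: int) -> int:
    return int(math.log2(quantum_volume))

print(usable_qubits(64))   # 6 qubits
print(usable_qubits(512))  # 9 qubits
```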

Benchmarks for quantum Fourier transform would be a good middle ground, approximating a fair amount of what applications will need, such as for quantum phase estimation for quantum computational chemistry. But even here, these will be fairly useless until qubit quality gets to the level of enabling a 10 to 14-bit quantum Fourier transform, as a start, and then we wait and watch as the bit-count slowly rises as the hardware progresses. Whether 20-bit quantum Fourier transform will be sufficient for practical real-world problems remains to be seen. But at least this would be a very useful and practical benchmark. And something the hardware folks can focus their attention on – any hardware improvements are for naught if they don’t move the needle for the quantum Fourier transform benchmark.
Yuval: As we get close to the end of our conversation, I wanted to ask: What quantum pioneer, dead or alive, would you be most interested in having dinner with?
Jack: Although I would leap at the opportunity to ask Peter Shor a lot of questions about his factoring algorithm, I feel that a lot of the technical limitations, primarily hardware, which will prevent a successful implementation for large semiprimes (1024, 2048, and 4096-bit public keys for encryption) are beyond even his area of expertise, so I’m not sure that would be so productive. But if it were a roundtable with a mathematician expert in number theory (Miller?), an elite analog engineer who is expert in the design of DACs (digital-to-analog converters), an elite physicist who is expert on how finely phase and probability amplitude can be controlled, and a few other areas of expertise, then maybe we could get to the bottom of some of the thorny technical issues – or agree that even the experts don’t know enough to confirm whether Shor’s factoring algorithm is even theoretically feasible in a real world constrained by the laws of physics, such as theoretical and physical limits to DACs and phase control, and whether there are Planck-type limits to granularity of phase and probability amplitude.

Feynman would be my top choice. I’d ask what he thinks about how quantum computing has turned out, so far, relative to his original expectations. And ask for his insights as to how to focus it on a path that is more likely to achieve the success he imagined. And his thoughts on what calculations should remain classical, even if only approximate, to avoid wasting too much energy on achieving accuracy beyond what is truly useful. I’d like to hear his thoughts on whether there is a Planck-level unit for phase and probability amplitude and what limits he sees for using phase and probability amplitude as data in quantum computations. I’d also ask him, in hindsight, what he might change from his Nobel prize lecture based on research since then. And what areas he would focus on if he were starting out today as a younger physicist. And what he would focus his research attention on now, if he were still an active researcher.
Yuval: How can people get in touch with you to learn more about your work?
Jack: You can connect with me on LinkedIn. And you can see all of my writing on quantum computing on Medium.
Yuval: Thank you so much for spending time with me today.

Yuval Boger is a quantum computing executive. Known as the “Superposition Guy” as well as the original “Qubit Guy,” he most recently served as Chief Marketing Officer for Classiq. He can be reached on LinkedIn or at this email.

September 30, 2022