Scott Genin, Vice President of Materials Discovery at OTI Lumionics, is interviewed by Yuval Boger. Scott and Yuval spoke about using quantum computing to design new materials for the OLED and display industry, how quantum algorithms can save a lot of time in the process, the path they hope to take from quantum-inspired back to true quantum algorithms, and much more.

Transcript

Yuval Boger: Hello Scott, and thank you for joining me today.

Scott Genin: Thank you, Yuval, for having me on.

Yuval: It’s my pleasure. So who are you, and what do you do?

Scott: So I’m the Vice President of Materials Discovery at OTI Lumionics. I lead the early-stage materials development processes, including the development of new computational tools for simulating materials, along with kind of directing preliminary synthesis.

Yuval: And I’m not terribly familiar with OTI. Could you tell me a little bit about the company, what you guys do, what type of customers you have, how long you’ve been in existence, or anything you’re willing to share?

Scott: Yeah, so OTI was founded in 2011 by Michael Helander and Zhibin Wang. They came out of the University of Toronto labs, particularly the materials science and engineering department, and they basically decided to try to commercialize the inventions they had developed during their PhDs, particularly in the OLED and display industry. So OTI formed around that. We became a specialty chemical company focused on the display industry, where we develop novel materials and engineering applications for OLEDs and displays.

Yuval: Where could I find your products?

Scott: Our product is currently undergoing mass production qualification. It has passed some significant milestones with respect to that, and it’s currently in the hands of some OEMs. At this point, it’s really up to them whether they like the material, whether they like the final display and the final phone. But phones and laptops take a surprisingly long time to develop, so I can’t divulge too much. They’ve got to take the chemical, they’ve got to make a display, they’re currently making phones, and then they’ve got to drop them off some cliffs to see how they explode.

Yuval: How does quantum or quantum computing fit into all of this process?

Scott: So you have to think about the early stages of chemical design to really understand where quantum computing fits in. A lot of the time, we get a list of specifications from a customer, and we have to ask the question: what chemical, or what family of chemicals, are we going to start synthesizing in order to reach these specifications? You can go out and basically just try to synthesize a bunch of random combinations. But quantum chemistry as a field has evolved significantly, to the point where we can use it as a guide. This has been done within pharma, and it’s done within the materials industry. One of the conundrums, though, is that quantum chemistry and molecular dynamics simulations are actually not that precise. So if you have, let’s say, 10,000 candidates you could synthesize, quantum chemistry is really only going to be able to tell you the top 1,000 candidates, but you still have to end up going and synthesizing those 1,000 candidates.

Now, a lot of people will shift and say, well, what about using machine learning? The problem with machine learning is that a lot of machine learning applications in chemistry are trained on quantum chemistry data. So all the bad habits that methods like density functional theory or coupled cluster singles and doubles have, the machine learning algorithm basically inherits and then amplifies, because it is still a statistical engine.

So that’s really where quantum computing fits in: it enables us to do these very fine quantum chemistry simulations at very high accuracy. And that means that instead of having to synthesize a thousand chemicals, we may only end up having to synthesize 100 chemicals. That saves enormous amounts of time and money, and it lets you get into the scale-up process, actually scaling up your chemical, faster. One of the things that I think is a bit underappreciated outside of the chemical industry is that scale-up is where, A, a lot of things go wrong, and B, a lot of time and money gets spent. So the faster you can get into that process, the more you can learn from it and feed it back into your computational models.
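As a rough illustration of the screening funnel Scott describes, here is a minimal Python sketch of ranking a large candidate list against a target specification and keeping only a short list for synthesis. This is an editorial sketch, not OTI’s pipeline; `predict_property` is a hypothetical stand-in for a high-accuracy electronic structure calculation.

```python
from typing import Callable, List, Tuple

def screen(candidates: List[str],
           predict_property: Callable[[str], float],
           target: float,
           keep: int = 100) -> List[Tuple[str, float]]:
    """Rank candidates by how close a predicted property is to the spec."""
    scored = [(c, abs(predict_property(c) - target)) for c in candidates]
    scored.sort(key=lambda pair: pair[1])  # smallest deviation from spec first
    return scored[:keep]                   # e.g. 10,000 in, 100 out for synthesis
```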

Yuval: So just to play back what I think you told me, and I understand these are round numbers: you’re saying, hey, maybe we start with 10,000 candidates, and through quantum computing simulation at the quantum chemistry level, we can narrow them down to 100, to 1% of that, and that saves a ton of time and money in the evaluation and analysis process. Is that about right?

Scott: Yep, that would be about right.

Yuval: Do you actually run it on a quantum computer or is this what’s typically called quantum inspired?

Scott: Yeah, so we run pretty much everything on quantum-inspired algorithms. Through our journey of learning about quantum computers and trying to do quantum chemistry on a quantum computer, we’ve kind of gone through the process of concluding that on the quantum computers of the NISQ era, the gate fidelities and the measurement fidelities are not sufficient at this point in time to really do anything considered serious. So we decided to take a quantum-inspired approach, and we’ve really focused on scaling it up so that we can simulate many qubits. We’ve made a lot of compromises in terms of what our quantum-inspired method can and cannot simulate. At this point, it effectively just does quantum chemistry; it wouldn’t really be able to simulate any other operation that a quantum computer could do.

But it still runs on classical hardware. There are a couple of advantages: we don’t have measurement error, which is one of the aspects that can be quite damaging to quantum chemistry methods on quantum computers, and we can deploy it en masse on classical hardware, either on the cloud or on-prem. One day, we would expect a very good quantum computer to eventually surpass these quantum-inspired algorithms. But our team works very hard at pushing the limits of the algorithms we have developed.

Yuval: How large are these molecules?

Scott: Most OLED molecules range between maybe 40 atoms up to 90 atoms. The main constituent is carbon. However, some of these molecules contain heavy metals like platinum or iridium, and that’s where you get a lot more complexity than if you just had 90 carbon atoms. The thing I always say is that a very famous problem to simulate within the academic community is a chain of hydrogens. So you could in principle say, well, I can simulate a chain of 90 hydrogens, and that’s an impressive feat, but it’s not reflective of reality. Especially when you get into these complex interactions between a heavy metal and conjugated carbon rings, you really need what we classify as a general-purpose quantum chemistry algorithm. Even DFT is classified as that, because it has a lot of flexibility; it can handle a lot of different problems, given that you can pick a correct DFT functional.

Whereas a lot of niche specialty quantum chemistry algorithms, like DMRG or DMET, really focus on solving very tough specific problems, like a chromium dimer, but they’re not considered general-purpose, well-rounded algorithms. What we aim for is general purpose, because we have these fairly complex systems, and in a materials discovery operation you also have to consider that your recommendation algorithm or your molecule generation algorithm could really generate anything, so you basically have to handle all sorts of corner cases. That’s where we see a lot of advantage in our qubit coupled cluster formalism, as implemented in our quantum-inspired methods. It can handle really any type of problem, and it works on these large OLED molecules. But one of the tricky things about these molecules is the complex interaction with their metal centers.

Yuval: What is the path that you went through in developing this? For instance, did you initially mean to actually run it on a quantum computer, saying, well, we’re going to do chemical simulations, here are the qubits, and here’s how we’re going to do it, and then figure out that there is no quantum computer that’s good enough to actually run this, so you went back to quantum-inspired? Or did you go through a different path?

Scott: No, you’ve actually summarized, I would say, the cliff notes of what happened. It’s a bit of a funny story, because when I started at OTI in 2017, I was just brought on to do conventional simulations. Quantum computing was not even on our radar. And then what happened was that Professor Ajay Agrawal expanded the Creative Destruction Lab and opened up this quantum machine learning stream. Michael knows Ajay quite well, and they were asking for people to submit initial ideas. So the question came: is quantum chemistry going to be better on a quantum computer? Back in 2017, there was a lot of hype around it. IBM had recently published their breakthrough Nature publication on simulating hydrogen, lithium hydride, and beryllium hydride. They came to me and said, well, do you think this is going to be better? I said, well, the community thinks it’s going to be better, so let’s find out. And we went all in.

We started with a lot of the open source software, and I remember trying to run a simulation of a water molecule. Not having a formal background in quantum computing, I ran it on a one-terabyte RAM node. I left it running overnight, came back the next day, and it was still running. I scratched my head and said, well, this sounds quite problematic. How can I validate that a quantum computer is actually going to be better? I have to validate these assertions.

So then I went out and asked my former PhD supervisor, Stuart Aitchison, if he knew anyone within quantum chemistry at U of T, and I found Artur Izmaylov, who is a professor at the University of Toronto in theoretical chemistry. We partnered to figure out whether we could evaluate this better than what we could do using all the open source tools. That led us down the path of developing the qubit coupled cluster formalism, and we hired a bunch of very bright, brilliant theoreticians to continue pushing this entire formalism. Initially, it was all developed to run on a quantum computer. That was the whole principle: you develop QCC because it generates the smallest gate depth. That’s the first thing.

Then we developed IQCC because quantum computers don’t have full connectivity. This was pretty much all developed to get around the shortcomings of the Aspen Q16, I believe, at that time, which was made by Rigetti. And eventually, it just spun off into this thing: well, okay, but we need to prep Hamiltonians that could be 54 qubits. And then, now that we can do 54 qubits, could we do 72-qubit simulations? Not quite yet. Well, how do we make sure that we can get there? Eventually, our team just continued to push the boundaries of the number of qubits we could simulate using this quantum-inspired method. We continued to take more and more trade-offs in terms of general purpose versus very specific purpose. Since I don’t have to simulate anything aside from the electronic structure problem, I thought these trade-offs were perfectly reasonable, and now we’re here, simulating over a hundred qubits quite easily on classical hardware.

Yuval: What do you need to go back to quantum, or to real quantum? Is it how many qubits you need? Or what kind of fidelity? Or maybe you’re just happy with where you are and you’ll never go back to a real quantum machine.

Scott: This is a very good question. I don’t have a perfect answer for that. Technically, QCC and IQCC still run on universal quantum computers. We still have the means to run on quantum computers; it’s still part of our full-stack pipeline. So it’s not like we’ve completely abandoned that. As for where universal quantum computers have to get to: one of the big problems is that you need at least 300 logical qubits, not physical qubits. So these things must be effectively almost 100% error corrected. The only way a universal quantum computer is going to beat what we can already do on a classical computer is if you can basically run that entire gate sequence. You’re talking about maybe millions of gate operations to actually get something meaningful.

One of the conundrums is that as the molecules get bigger, you need more and more complex basis sets. That’s a core principle of quantum chemistry: you need very good basis sets, with diffuse or polarization functions, in order to actually model your molecule correctly. When I see simulations of 12 hydrogens or 20 hydrogens, for all I care at this point, in an STO-3G basis, that’s not a meaningful problem. If I see an organic molecule like a benzene ring simulated in an STO-3G basis, it’s an interesting computational problem, but it’s not meaningful, because that’s not an accurate representation of the molecule. From a quantum computing perspective, it’s fine to do those things to demonstrate that a quantum computer works and that it could be meaningful, but it doesn’t really move the needle on the applied computational chemistry side. So we really need to see effectively near-perfect fidelities, and we need to see at least 300 logical, fully interconnected qubits.
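To make the basis-set point concrete, here is a small PySCF sketch (an editorial illustration, not something from the interview) showing how the number of spatial orbitals, and hence the naive qubit count of roughly two qubits per orbital, grows as you move from a minimal STO-3G basis to bases with polarization and diffuse functions.

```python
from pyscf import gto

def hydrogen_chain(n, spacing=0.74):
    # n hydrogen atoms spaced along the z axis (Angstrom)
    return [("H", (0.0, 0.0, i * spacing)) for i in range(n)]

for basis in ("sto-3g", "cc-pvdz", "aug-cc-pvtz"):
    mol = gto.M(atom=hydrogen_chain(10), basis=basis)
    print(f"H10 in {basis:>11}: {mol.nao} spatial orbitals "
          f"-> roughly {2 * mol.nao} qubits before any reduction")
```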

So my opinion is that the qubit count is within reach: manufacturers are building quantum computers close to this size, and we can easily put over 300 qubits on a chip. It’s just that now we really need to improve the fidelity. And the fidelity needs to be essentially perfect, because these gate sequences are not short; they’re incredibly long and quite cumbersome. That’s what we’re waiting for, but in the meantime, we’re very happy with the progress that we’ve made. We’re going to continue to improve our quantum-inspired algorithms, and I’m sure we’re going to continue to disclose more and more interesting results.

Yuval: How specific are these algorithms to OLEDs and other display technologies? Is there an opportunity to go and sell this to pharma companies, or other chemical companies, that need sophisticated simulations?

Scott: Yeah. So the quantum-inspired method that we’ve developed basically just solves the electronic structure problem, and the electronic structure problem effectively underpins almost the entire field of materials discovery. When you’re doing molecular dynamics simulations, especially in pharma, where molecular dynamics and docking really form the core of the simulation work, you often need electronic structure calculations to initially determine what your force field is going to be. And one of the biggest complaints is that once you start getting into esoteric or fairly complicated proteins that may have metal centers, standard force fields start to break down; just generating your force fields using a minimal basis set is not enough anymore.

When you’re talking about transition state calculations for a ruthenium catalyst for carbon capture, you need highly accurate, reliable simulation tools. And that’s something which the current tool set, like density functional theory, is just not really equipped to do. It’s not that DFT is a bad method; DFT is a perfectly fine method. It’s that it’s not consistent or reliable enough. What people often don’t like to admit about density functional theory is that in academic circles, you’ll actually synthesize a lot of molecules and measure their properties, and then you’ll develop a new DFT functional that matches that family of compounds. And it kind of works, but it would be much better to just have a method that treats all kinds of molecules consistently than to constantly switch between DFT functionals. It’s known that within catalytic reactions, to optimize the geometry correctly, you have to switch between different DFT functionals. That’s a pain for an automated discovery platform. That’s an incredibly difficult ask.
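As a concrete illustration of that functional dependence (an editorial sketch, not OTI’s workflow), the same small molecule in PySCF gives noticeably different total energies depending on which exchange-correlation functional you pick:

```python
from pyscf import gto, dft

# Water in a small polarized basis; geometry in Angstrom
mol = gto.M(atom="O 0 0 0; H 0 -0.757 0.587; H 0 0.757 0.587", basis="def2-svp")

for xc in ("pbe", "b3lyp", "pbe0"):
    mf = dft.RKS(mol)        # restricted Kohn-Sham DFT
    mf.xc = xc               # pick the exchange-correlation functional
    energy = mf.kernel()     # converged SCF total energy in Hartree
    print(f"{xc:>6}: {energy:.6f} Ha")
```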

Yuval: You talk about the difficulties in gate-based machines, fidelities, and the length of the circuit. I think that other companies in the quantum chemistry field sometimes use annealers, and sometimes they’re trying to use analog quantum computers. Have you considered other types of quantum computers besides digital gate-based machines?

Scott: We have. We did consider the annealer. The annealer was very good because one of the issues we were experiencing very early on was calculating the expectation value. If you can transform your Hamiltonian into a QUBO, and we’ve published about this, you can effectively do an expectation value calculation almost instantaneously, within milliseconds. Whereas on a standard gate-based digital machine, you have to keep sampling and sampling, and for every additional qubit you approximately have to increase the number of samples you take by five to ten times. Annealers did not have this issue. We’ve collaborated with D-Wave and Fujitsu on this.
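A generic sketch of why the QUBO form is so cheap to evaluate (my illustration with random numbers, not the published OTI mapping): once the problem is expressed as a QUBO matrix Q, the energy of any bit string x is just x^T Q x, a single matrix product rather than repeated circuit sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 54                              # number of binary variables ("qubits")
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                   # symmetric QUBO matrix with illustrative random entries

x = rng.integers(0, 2, size=n)      # one candidate bit string
energy = x @ Q @ x                  # classical evaluation in microseconds, no sampling
print(energy)
```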

The conundrum is that calculating expectation values very quickly is their core advantage. They also provide advantages for a very specific portion of the QCC protocol, but only at the beginning. So we have used them and have developed on them, but I would say they have a niche application in this space. In terms of analog, that’s a little bit more tricky, because there hasn’t been as much research or development on analog quantum computers for chemistry. There’s some interesting work coming out, but we’re still a little bit skeptical of the results at this point in time. But I’m always open to being very pleasantly surprised by any sort of improvements.

Yuval: And last, a hypothetical question, if you could have dinner with one of the quantum greats, dead or alive, who would that person be?

Scott: Oh, that’s a very, very good question. I would have to think it would be Dr. Bravyi or Dr. Kitaev, because their preliminary research really was one of the founding cornerstones of what we’ve developed. A lot of our methodologies and methods have really spun out from their incredible generosity to the field. I think they really stand out as great titans of the conceptual and theoretical quantum space, and their willingness to share and disclose everything is fantastic.

Yuval: Scott, thank you so much for joining me today.

Scott: Thank you so much, Yuval.

Yuval Boger is the chief marketing officer for QuEra, a leader in neutral atom quantum computers. Known as the “Superposition Guy” as well as the original “Qubit Guy,” he can be reached on LinkedIn or at this email.