Quantum Computing Report

Podcast with Professor Misha Lukin (Harvard), Dolev Bluvstein (Harvard), and Harry Zhou (Harvard and QuEra)

Professor Misha Lukin (Harvard), Dolev Bluvstein (Harvard), and Harry Zhou (Harvard and QuEra) are interviewed by Yuval Boger. They discuss their recent Harvard-led work in quantum error correction, published in Nature, highlighting the evolution from physical to logical qubits and the realization of up to 48 logical qubits. The authors emphasize the significance of error correction in maintaining quantum states in large systems. They discuss using neutral atoms and optical tweezers in their experiments, the scalability of their methods, and the potential for practical applications. The conversation touches on future directions and much more.

Transcript

This podcast episode is a follow-up to the paper “Logical quantum processor based on reconfigurable atom arrays” (published in Nature on Dec 6th, 2023, and then posted on arXiv) by Bluvstein et al. The work reported in this paper was performed at Harvard by a Harvard-led group that also included researchers from MIT, QuEra, UMD, and NIST.

Yuval Boger: Hello Misha, hello Dolev, hello Harry, how are you doing today?

Dolev Bluvstein: Good, thank you.

Yuval: Misha, who are you and what do you do?

Prof. Misha Lukin: I’m a professor of physics here at Harvard University, where I do research in quantum optics and quantum information science. I also teach, and I’m in charge of a group that explores various directions in these areas. I’m also a co-founder and board member of QuEra Computing, a startup company spun out of my lab.

Yuval: And Dolev, how about you?

Dolev: I’m a PhD student in Misha’s group, in my fifth year. I do experimental physics, and I work on this atom array experiment. In particular, a big focus of my PhD has been programming quantum circuits with the motion of atoms and its applications to quantum error correction.

Yuval: And Harry, who are you and what do you do?

Harry Zhou: I’m currently a research scientist jointly between QuEra Computing and Harvard University. I collaborate with people on both sides, primarily nowadays thinking about different architectural aspects of using the atom array systems for error-corrected quantum computing.

Yuval: We’re recording this podcast in the context of an error correction paper that you guys wrote. Who wants to tell me about the paper and why it is significant?

Misha: Maybe I can start and provide a little bit of the background, and I’ll let Dolev and Harry talk about the exciting recent developments. For background, we can look a little bit into history. As many of you know, the ideas of quantum computing date back over 40 years, thanks to pioneering theories by famous physicists like Richard Feynman. The real excitement in this area started during the early to mid-90s, when people realized that there were algorithms one could execute on quantum computers that are actually kind of amazing: they give a big speedup compared to classical computers in areas ranging from simulating quantum systems, which is what Feynman originally envisioned, to other applications like, for example, factoring.

I was actually a Ph.D. student during that time. I was not really part of that field yet, but I was learning about it as it was starting and gathering steam. Very early on, many people recognized that building useful, large-scale quantum computers that can execute a lot of operations would be extremely hard. And the reason for that is quite fundamental. All objects around us consist of atoms and molecules, which are, in principle, quantum mechanical. But if you look around, you don’t see objects in quantum superpositions; you don’t see objects that are obviously entangled with objects far away. And that’s for good reason: once you start looking at systems at a large, macroscopic scale, they lose their quantum character. This is because quantum superpositions are very fragile, at least at a large scale, and for this reason they just do not naturally occur in the real world. So when people started thinking about quantum computers, they realized that the power of quantum computers lies precisely in creating superposition states of large quantum registers, and also entangled states of a large number of particles. It was very clear from the very beginning that these big superpositions, which you require for useful quantum computation, would be extremely fragile. What that means operationally is that if you start, for example, building these superpositions or doing quantum logic to execute a quantum computation, each logical operation will have a small probability of error.

Once these errors start to accumulate, this state, which is supposed to be a large quantum superposition, eventually becomes a classical object, a classical probabilistic statistical mixture, and it basically loses its quantum character. And many people, some of them very serious, very well recognized theorists, started pointing out early on that to execute a really deep quantum computation, you need circuits with a large number of qubits and a large number of gates. And if you want to execute it successfully, you need the error in each logical operation to be extraordinarily small: way smaller than what one could imagine at the time in the 1990s, and I would say still much smaller than the error we could ever imagine with any realistic device.

And for this reason, following the initial discovery and the brief excitement, there was also some very healthy skepticism about whether these quantum computers could ever be built at a useful scale. And then some of the same people who came up with these really ingenious algorithms, people like Peter Shor, came up with the ideas of quantum error correction. Already at the theoretical level, I would say, that was an amazing discovery. In principle, quantum error correction uses the idea of redundant encoding, very similar in spirit to classical error correction, where instead of just encoding a bit of information in a physical bit as zero or one, you encode it into a state with, for example, multiple zeros representing zero and multiple ones representing one. Then, if an error occurs, you can use a kind of majority-voting procedure to detect and correct it. Of course, this has been known for many decades. But early on, people pointed out that these ideas cannot be applied directly to quantum devices, because in quantum systems, A) you cannot clone the quantum state, it is fundamentally prohibited, and B) if you measure the quantum state, you basically collapse it.
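To make the classical idea Misha describes concrete, here is a minimal sketch, in Python, of a three-bit repetition code with majority-vote correction (an illustration of the general concept, not code from the paper):

```python
import random

def encode(bit):
    # Repetition code: copy the logical bit into three physical bits.
    return [bit, bit, bit]

def noisy_channel(bits, p):
    # Flip each physical bit independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # Majority vote: the logical bit is whatever most physical bits say.
    return int(sum(bits) >= 2)

# Any single bit flip is corrected; only two or more flips cause a logical error.
received = noisy_channel(encode(1), p=0.1)
print(received, "->", decode(received))
```

As Misha notes, this scheme cannot be carried over literally to qubits: copying is forbidden by the no-cloning theorem, and reading out the bits for the vote would collapse the superposition. The quantum codes discussed next get around both obstacles.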

Against this backdrop, in the late 90s, people started to come up with ideas for how to actually correct errors in quantum systems. These ideas were based on redundant encoding, but it turns out that if you are more clever, if you use entanglement to encode quantum information and distribute it between multiple physical qubits, you can, at least in principle, detect and correct quantum errors. But realizing this quantum error correction unfortunately proved to be very hard, for multiple reasons, some of which we can discuss. I would say that up until very recently, these ideas were entirely in the realm of theory: very beautiful, very ingenious, but it was hard to even start probing the building blocks of this quantum error correction approach. Over the last few years, in a number of systems, some small building blocks, the so-called logical qubits, have been realized, but in a very proof-of-concept way. What is special about this work is that, for the first time, not only can we realize all the building blocks, but we are starting to put them together to execute algorithms that benefit from quantum error correction and fault tolerance.

Yuval: Thank you. And what are the key findings and key demonstrations in this paper? Maybe Dolev and Harry.

Dolev: At a high level, the way I would describe it is that we’re starting to build processors made not out of physical qubits, but out of logical qubits. Whereas we’ve made a lot of progress in the field over the past two decades, and specifically the last decade, testing algorithms with physical-qubit devices, we’re now able to start testing them with logical-qubit devices, and to explore the subtleties and challenges of that, and also the benefits.

In terms of the key technical observations made in the paper, we find a really important property, which is that the operations that we do between logical qubits can improve as we increase the size of our error-correcting code. This demonstration of an entangling operation improving with code size is a pretty important first. We see that we can make lots of logical qubits and program what we call fault-tolerant algorithms between them, algorithms that are robust to having a single error. In particular, we make up to 48 logical qubits and study hundreds of logical operations, whereas previously people had primarily studied two logical qubits with one operation.
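For context, a standard rule of thumb in the error-correction literature (generic, not specific to this paper) is that once physical error rates are below a threshold, the logical error rate is suppressed exponentially in the code distance d. A rough sketch of that scaling, with purely illustrative constants:

```python
# Rule-of-thumb logical error scaling for a distance-d code below threshold:
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) // 2)
# A and p_threshold are code- and hardware-dependent; these values are
# illustrative only, not numbers from the experiment.
def logical_error_rate(p_physical, d, p_threshold=1e-2, A=0.1):
    return A * (p_physical / p_threshold) ** ((d + 1) // 2)

for d in (3, 5, 7):
    print(d, logical_error_rate(p_physical=3e-3, d=d))
# Each increase in distance suppresses the logical error rate further,
# which is the sense in which operations "improve with code size."
```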

What we see from running an algorithm with this many logical qubits and many logical operations is that we can actually do the algorithm much better with our error-corrected qubits than with our physical qubits. Part of that is what we expected just from the error-correcting encoding being better, but there are also additional findings we make as we actually try to build these logical-qubit devices in the lab and program algorithms with them. There are a lot of interesting subtleties there, in terms of both challenges and benefits. And one of the really important aspects of our approach is using these neutral atoms trapped in optical tweezers.

Maybe I’ll go a little bit into that now. This is one of the various approaches to quantum computing, and over the past few years there have been a variety of breakthroughs for our system. One is that we’ve gotten very good at controlling large arrays of these neutral atoms trapped in optical tweezers, and we’ve used this for a variety of quantum simulations in the past few years. Two years ago, we also started to move qubits around in the middle of the quantum computation, and it turns out that for quantum error correction, this is a really important facet of being able to do complex operations, which I’ll say more about in a second. Another really important recent advance is our ability to entangle qubits with a really high success probability, a really high fidelity. As for the key underlying principle of this logical processor: because we have optics that let us control lots of neutral atoms in parallel, and because we have the ability to move qubits around, we can address one of the big challenges that has historically been associated with running error-corrected algorithms. As Misha was saying, to protect the logical qubit, you take it and delocalize it using entanglement across many physical qubits. But now that it’s spread out, it’s pretty hard to operate on. One of the big new advances for us is that we can take logical qubits that are delocalized over many physical qubits, take two such logical blocks, stack them directly on top of each other using the motion of the qubits, and get them to interact directly in that way. That’s a much more efficient way of entangling these logical degrees of freedom. So that’s one of the really important advancements in this work.

Yuval: And does that mean that the operation on the logical qubit is essentially a simultaneous operation on multiple physical qubits?

Dolev: That’s right. Yeah, it’s what’s called a transversal operation. What it means here, which is practically very beneficial, is that to do an operation at the logical-qubit level, we just need to do that same exact operation on each individual physical qubit of the block. And with optics, this naturally multiplexes: we can take two logical qubits, move them right on top of each other, and pulse our one global laser that entangles all of the pairs at the same time. In this one single step, we can do a logical entangling gate with a very similar level of complexity to how we would do a single physical entangling gate. And that’s one of the key advances that has allowed us to build this logical processor.
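As a concrete picture of what “transversal” means, here is a minimal sketch (illustrative Python, not the experiment’s control software): a logical entangling gate between two code blocks decomposes into one physical gate per qubit pair, all of which can be applied in parallel.

```python
def transversal_entangling_gate(block_a, block_b, apply_physical_gate):
    # Logical entangling gate via one physical two-qubit gate per pair:
    # the i-th qubit of block A interacts only with the i-th qubit of
    # block B, so errors cannot spread within a block. In the atom-array
    # experiment this is realized by moving block B on top of block A and
    # firing one global entangling laser pulse across all pairs at once.
    assert len(block_a) == len(block_b)
    for qubit_a, qubit_b in zip(block_a, block_b):
        apply_physical_gate(qubit_a, qubit_b)

# Toy usage with 7-qubit blocks (the size of a Steane-code block):
block_a = [f"A{i}" for i in range(7)]
block_b = [f"B{i}" for i in range(7)]
transversal_entangling_gate(block_a, block_b,
                            lambda a, b: print(f"entangle {a} <-> {b}"))
```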

Yuval: And Harry, it was mentioned that you guys executed a complex algorithm. What algorithm is that? And is that a useful algorithm from a business perspective?

Harry: Yeah, there were a variety of different algorithms explored in this experiment, starting from circuits that, on the scale of our paper, are relatively small, but are already quite a bit larger than the one- or two-logical-qubit demonstrations that had come before. We were able to prepare a 4-qubit GHZ state, which is a highly entangled state between 4 logical qubits. We were also able to take this even further and run an interesting circuit on 48 logical qubits involving over a hundred entangling gates, as well as many non-Clifford operations, which are the types of operations that really allow your quantum computer to start doing classically hard or potentially intractable computations. And we were able to run these very complex circuits on these logical qubits.
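For readers who want to see what preparing a GHZ state involves, here is a minimal statevector sketch in Python: the generic textbook circuit of one Hadamard followed by a CNOT chain, not the logical-level circuit used in the experiment.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)           # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])          # controlled-NOT

def embed(gate, position, n):
    # Embed a gate acting on adjacent qubits starting at `position`
    # into an n-qubit operator via Kronecker products.
    k = int(np.log2(gate.shape[0]))
    mats = [np.eye(2)] * position + [gate] + [np.eye(2)] * (n - position - k)
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4
state = np.zeros(2 ** n)
state[0] = 1.0                          # start in |0000>
state = embed(H, 0, n) @ state          # put qubit 0 in superposition
for i in range(n - 1):
    state = embed(CNOT, i, n) @ state   # propagate along the chain
print(np.round(state, 3))               # (|0000> + |1111>) / sqrt(2)
```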

So from the physics point of view, there are actually potentially interesting interpretations of the experiments we did, in terms of so-called scrambling dynamics and various flavors of complex many-body dynamics. The particular circuit that we ran might not have very clear, direct commercial applications yet, but we are very hopeful that many of the same building blocks we’ve demonstrated here can both motivate and be further developed into routines that are useful. Indeed, a lot of these gadgets actually have a flavor that might look a little bit like the primitive operations you would see in, say, an adder, and there has also been work from others along the lines of viewing these gadgets through that lens. So we’re also very excited to see whether we can take these same results and techniques and bring them into practical applications.

Yuval: And why 48 logical qubits? I mean, couldn’t you push it a little bit more, maybe 58, and be beyond the simulation limit?

Dolev: We intentionally made 48 logical qubits so that we could still simulate our system. It’s true that we can use these error-corrected systems to go to even larger sizes; 48 is not a hard limit at all. In the next few years, we’ll probably continue to do error-corrected circuits that are even larger and even more complex.

Misha: In a way, this is related, of course, to these discussions of quantum supremacy. Actually, even with 48 logical qubits, circuits of the complexity we have made are impossible to simulate directly. If you want, what we did is we broke our own supremacy circuit: we found a very clever shortcut, a computational trick, which allowed us to do these simulations. The goal there was to really benchmark the system, to show that even at this level, these ideas of error correction and error detection are still useful. That’s a key goal of this work.

Yuval: Dolev mentioned that qubits need to be shuttled and moved around. That sounds like it could take some time, certainly more time than quantum operations take. Is that a disadvantage of the neutral atom approach?

Misha: Yeah, this is also an excellent question. It is true that moving atoms is a relatively slow operation. These atom moves typically occur on a timescale of about 100 microseconds, which is to be compared to, let’s say, the clock speed set by our gates, which can easily be a microsecond or even somewhat below. Of course, we are thinking about how one could speed this up. I do not think we are at any fundamental limit, and some of this work is already starting to explore those ideas. But I would say that even at 100 microseconds, this kind of architecture, where you reconfigure connectivity and create non-local connections during the computation, can be exceptionally powerful.

Basically, the idea is that it allows you to do two otherwise impossible things. First, it allows you to do these so-called transversal gates. Transversal gates are very special because they do not result in a massive propagation of errors. For that reason, it is generally agreed that transversal gates can be done in a fault-tolerant way with far fewer rounds of error correction per gate, and for sufficiently large codes, this immediately results in a massive saving. Moreover, by moving atoms, we can enable virtually all-to-all connectivity in our processor without incurring the overhead that is typically associated with, for example, swapping or moving qubits around. In any realistic practical algorithm, that results in another massive saving. So to summarize: while we would of course like to speed things up, to optimize this atom moving and improve our clock speed, we believe that already at the current clock speed, what we have done is quite special, in the sense that in a large-scale algorithm, this kind of non-local connectivity immediately enables massive savings. These things are really a frontier of both science and engineering, and in fact, what we hope to do in the next few months is to really understand, at the architectural level but also at a practical level, what kinds of algorithms can be sped up using this approach, and what kinds of operations benefit most from this non-local character. These are questions that in principle could have been asked in the past, but our work makes it very clear that this kind of non-local connectivity and reconfigurable architecture is remarkably well suited to quantum error-corrected, fault-tolerant devices.

Yuval: Dolev, Misha mentioned 100 microseconds to move a qubit, but isn’t that much longer than the coherence time in the Rydberg state? I mean, how do you make sure you have enough computational cycles before you lose coherence?

Dolev: That’s a great question. Here’s some jargon: we have two qubit states that store coherence for very long times. These are hyperfine qubit states, stored in the clock states of our rubidium atom, the neutral atom we use for storing quantum information. These states have very long coherence times: on the scale of two seconds in our current system, and potentially even on the scale of 100 seconds with additional effort. We only very briefly go through the so-called Rydberg state, which is how we do quantum gates. This was first envisioned back in 2001 by Misha and company. The way it works is that we zap the atom with a very specific laser frequency, and the electron gets excited to a very specific orbital state. The atom grows to a large size and picks up a large effective dipole, which interacts with other atoms. This causes a really strong interaction that allows us to entangle the two hyperfine qubit states. But we only go through this state very briefly. So that’s how we do our quantum gate, at a very high fidelity.

But then we store the information back down in the hyperfine qubit state, where the coherence is very long, and then we move the qubits around. There’s no decoherence, or only a quite negligible level of decoherence, during this motion time. So from the perspective of being able to program early quantum circuits, this 100 microsecond movement time is not really a limitation.
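The quoted numbers make the argument concrete. A rough budget, using only the order-of-magnitude figures mentioned in this episode (2 seconds of hyperfine coherence, about 100 microseconds per move, gates of about a microsecond):

```python
# Back-of-the-envelope cycle budget from the numbers quoted above.
coherence_time_s = 2.0      # hyperfine clock-state coherence (current system)
move_time_s = 100e-6        # one atom move
gate_time_s = 1e-6          # one entangling gate (order of magnitude)

print(int(coherence_time_s / move_time_s))  # ~20,000 moves per coherence time
print(int(coherence_time_s / gate_time_s))  # ~2,000,000 gates per coherence time
```

So even though a move is roughly a hundred times slower than a gate, tens of thousands of moves fit within one coherence window, which is why movement is not the binding constraint for early circuits.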

Yuval: Harry, from your perspective, how well does this scale? I mean, right now, you’re showing wonderful results with hundreds of qubits, but likely we’re going to need thousands or tens of thousands or even more qubits. Does this approach scale to a large qubit count?

Harry: That’s a great question. I think even with the currently demonstrated tools in the lab, it can already go quite far. For example, at the Harvard lab, we’ve been able to show reliable trapping of a thousand or so qubits, which is already quite a lot of qubits to work with. With newer generations of these machines being constructed here at Harvard and in our community, we can probably go to somewhere around 10,000 or so qubits, and with some clever ideas that could maybe be bootstrapped a few factors further. At the same time, there are active efforts in interconnecting multiple modules, so that each module could hold roughly tens of thousands of qubits, but you can still scale further.

One thing to note is that the flexible connectivity Misha mentioned earlier not only provides avenues for reducing the time cost, it also provides avenues for significantly reducing the qubit overhead of error-corrected quantum computation. For example, in a recent paper that we put out, we showed how to use so-called quantum low-density parity-check (qLDPC) codes, which have much better encoding rates, meaning that you can pack a lot more logical information into the same number of physical qubits. With on the order of 50,000 or so physical qubits, you can imagine having a 1,000-logical-qubit processor with a very low logical error rate, on the order of 10^-10. So this really provides routes where, even if we can get to the tens of thousands of qubits fairly easily but it is slightly tough to go much beyond that, we already open up a lot of computational capabilities that we can further explore.
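The arithmetic behind that claim is worth spelling out. A short sketch using the figures Harry quotes; the surface-code comparison is a generic textbook estimate, not from the paper, and the distance chosen is purely illustrative:

```python
# Encoding-rate arithmetic for the qLDPC example quoted above.
physical_qubits = 50_000
logical_qubits = 1_000
print(logical_qubits / physical_qubits)    # 0.02: ~50 physical per logical

# For contrast, a distance-d surface code uses roughly 2*d**2 physical
# qubits (data plus syndrome) per logical qubit, so its encoding rate
# shrinks as the distance grows. Illustrative distance only:
d = 25
per_logical = 2 * d ** 2                   # ~1,250 physical qubits
print(logical_qubits * per_logical)        # ~1.25 million for 1,000 logicals
```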

Misha: Maybe I can add a little bit to this. It’s worth noting that every time we start one of these experiments, we trap some 10 million atoms. Since each atom can encode a physical qubit, we have a very plentiful resource for creating qubits, and the issue that stops us from using all of these millions of atoms is control. A critical innovation in this new work is that we show how we can now very efficiently control logical qubits, rather than just focusing on controlling individual physical qubits. I think this is really a paradigm shift in the community, and it’s the key reason why we are in a unique position to start scaling up these techniques: at a high level, we need a number of controls that scales with the number of logical qubits, which is a lot less prohibitive than controlling each individual atom.

Yuval: AI is everywhere these days, and people talk about the ChatGPT moment as the moment that AI became sort of universally useful, or almost universally useful. Do you think we’re at the ChatGPT moment for quantum? And if not, how far are we?

Misha: I can try to answer this question. To be honest, I don’t think we are at that moment yet. But at the same time, I do think we are coming to an inflection point in quantum. The reason this point is special is that these ideas of quantum error correction and fault tolerance, as I mentioned at the beginning, are very intriguing ideas that people initially thought might not even be possible theoretically. By now these ideas are decades old, and for most of that time they were theoretical ideas: extremely intriguing, extremely promising, but clearly very much out of reach. What we are now doing is entering a new era, I would say, where these ideas all of a sudden become very practical; they are now a laboratory reality. We can now start using these ideas as tools to build, scale, and make quantum processors useful. In that sense, we are at the inflection point, and I really think these advances will greatly accelerate the progress towards the goal of building large-scale useful quantum processors.

Yuval: And your best guess for business usability, how soon are we?

Misha: Well, that’s your job! But okay, my sense is that, using the techniques we have demonstrated, we have a very clear path to building systems with maybe a hundred, or maybe hundreds, of logical qubits within the next few years. At the same time, given the rate of progress in this field, I would be shocked if, within the next one, two, or three years, we did not put ourselves in a position where the path to thousands of logical qubits is similarly in clear sight. At that point, I would say, there will be very clear practical value in these machines, beyond the scientific experiments which we already enjoy.

Yuval: As we come close to the end of our conversation, there is a question I like to ask all my guests, and I’m particularly interested in asking it here; perhaps first Misha, and then Harry and Dolev. If you could have dinner with one of the quantum greats, dead or alive, who would that person be?

Misha: I’m sure that Richard Feynman would enjoy seeing these results. In some ways, what we are doing is implementing his vision: he envisioned encoding qubits in single atoms and turning on interactions between them, and now we can actually do it. I think this is very special.

Yuval: And Dolev and Harry, please don’t give the same answer as Misha; pick someone else.

Dolev: It would be pretty interesting to go all the way back to someone in the era of Schrödinger, and to see from their perspective how much things have changed. Already the classical computing revolution leveraged a lot of their discoveries. And, for example, Schrödinger has a famous quote that interacting with single atoms or single particles is just a thought experiment, and that we never actually do this in practice. Now we’ve gotten so good at controlling single particles that we’ve started entangling them to create controlled logical particles. I think it would be really interesting to see their perspective on all of it. So that’s how I would answer.

Harry: I guess Misha and Dolev took the first two people that came to mind, so I’ll have to find another one. Maybe I’ll go even a little bit further back and say Alan Turing. The reason is that one of the things I find really cool about quantum computing is that, beyond its many potential practical applications, in a sense it gives us a new computational lens on the universe. Things that we might not imagine being able to do efficiently with regular classical machines become possible: you have machines that go beyond the regular classical Turing machine paradigm, with a slightly different set of computational capabilities. I would be very interested in learning from him, or seeing how he would respond to that, because at the time Alan Turing was around, quantum mechanics was known, but people didn’t really think about it through the lens of its computational power. So I think it would be super fascinating to see, given all of these modern advances, what his perspective would be on both kinds of computational models.

Yuval: Excellent.

Dolev: Can I add one thing to what Misha said previously, just in terms of how special a place we are currently in? I think there are also some other groups in the field that really deserve a lot of acknowledgment. We are in a really special place after this advance, and there is a pretty exciting opportunity in the coming years to see how we’re able to advance this and really try to realize this vision of a large-scale quantum computer. If it weren’t for the decades of work that came before, this would obviously not have been possible. For example, we definitely learned a lot from other platforms like the trapped-ion approach. When we first started to do this qubit shuttling, the ion vision of a qubit-shuttling architecture was inspiring for us; there are various reasons why our shuttling approach is easier and works quite a bit better, at least for this error-correction application. And to credit the superconducting-qubit people, who have really spent a lot of effort thinking about how to control their complex systems and get to larger scales: we learned from them how important this parallel control is, and to leverage it and not let it go.

Moreover, on top of that, we’re not the only neutral-atom group, and there’s a lot of very exciting work happening in the field with neutral atoms. We have innovated on various things in terms of the movement of qubits and its application to error correction, but there are lots of other interesting things going on: different atomic species that people are exploring that can have unique benefits and make the error correction much more efficient, different ways of measuring the qubits that can be a lot better, and different ways of controlling the qubits. Something that’s pretty special in our community is that we’ve done a pretty good job of keeping communication open. Looking into the future, while we are starting to develop a plan, we are looking for as many breakthroughs as possible; the more breakthroughs that happen across the field, and that we can simultaneously leverage between the groups, the better.

Misha: Actually, I think this is an excellent point, and maybe I can add a little to it. In any industry, even in the quantum industry, when people make forecasts, they tend to do linear extrapolations. If you do that on a short timescale, one or two years, it totally makes sense, but I think this work shows that sometimes these linear extrapolations completely miss the point. What this means is that, yes, there is a clear plan towards 100 logical qubits, and we are making plans for 1,000 logical qubits now, but what Dolev mentioned, and what is happening right now in this field of neutral-atom quantum computing, is an incredible level of innovation, particularly by some of the groups led by alumni of our communities here at Harvard and MIT, who are exploring very cool new directions. When I attend sessions where these folks give talks, I really feel that the future is in very good hands. I think there are possibilities for multiple breakthroughs ahead, which could accelerate progress way beyond what we can imagine today. We should be very clear-eyed about the challenges ahead, but the things I just mentioned are really inspiring and make me extremely hopeful that useful quantum computers can actually be built in the not-too-distant future.

Yuval: And my last question, I promise. Misha, professionally speaking, what keeps you up at night?

Misha: That’s actually a very good question. In fact, I was asked this question four days ago by Mike Friedman at the conference, and I told him that, believe it or not, I am actually sleeping very well. But look, if we want to realize this vision of making practical, large-scale systems with thousands of logical qubits operating at error rates below one part in 10^10 or so, I do think we need to keep the innovation going. There are certainly things we are working on in the near term, among them making these systems operate continuously rather than in the pulsed mode we have used so far, all the way to the kinds of things that Harry mentioned. Right now, scaling up using conventional approaches implies massive overhead in terms of qubit numbers and in terms of controls, and I cannot imagine that this is the optimal way to scale up. For example, what we have shown here is that the best way to implement a useful algorithm is to think about the application, the algorithm, the quantum error-correcting code, and the implementation all together. The unique opportunity we have in the next few years is to try to apply this approach to a much broader set of problems, including practical problems: problems that matter for industry and problems that matter for science. Our work shows that it’s possible, and I really would like to see this direction flourish within the next year or two. That’s what I’m most excited about.

Yuval: Perfect. Harry, Dolev, and Misha, thank you so much for joining me today. 

Yuval Boger is the chief marketing officer for QuEra, the leader in neutral-atom quantum computers. Known as the “Superposition Guy” as well as the original “Qubit Guy,” he can be reached on LinkedIn or at this email.

December 11, 2023
