The Current State of Quantum Technologies

I’ve been asked by a reader to write about the current state of quantum technologies. There has been a real buzz about them outside the academic physics community for a while now, and all the major companies are investing large amounts of money. There are countless start-ups, not just attempting to build quantum devices but also working on quantum software, on the off chance that sooner or later large-scale quantum computers will be available and ready to use. In addition, governments across the developed world are spending significant fractions of their taxpayers’ money on universities in order to boost the ongoing industrial effort. The future is clearly quantum…

As someone who entered the field in 94/95, I can tell you that it was far from obvious things would end up going this way. Admittedly, there was already some excitement back then, because Peter Shor had just published his quantum algorithm for factorising large numbers (my first paper – with Artur Ekert and Adriano Barenco – presented the full quantum circuit for this algorithm, including subroutines for quantum addition, multiplication and exponentiation). Shor’s algorithm solves, in a polynomial number of steps, a problem believed to be exponentially difficult on a classical computer. Plus, this problem is relevant to cryptography, which arouses interest well beyond academia.
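
For the technically minded, here is a toy Python sketch of the classical scaffolding around Shor’s algorithm (purely illustrative): factoring N reduces to finding the multiplicative order of a random number modulo N. Below, the order is found by brute force, which takes exponentially many steps; it is precisely this subroutine that Shor’s quantum circuit replaces with a polynomial-time one.

```python
# Toy illustration: factoring N reduces to order-finding modulo N.
# The brute-force order-finding below is the exponentially expensive step
# that Shor's quantum subroutine performs in polynomially many operations.
from math import gcd
from random import randrange

def find_order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); brute force, hence exponential."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N):
    while True:
        a = randrange(2, N)
        if gcd(a, N) > 1:
            return gcd(a, N)          # lucky: a already shares a factor with N
        r = find_order(a, N)
        x = pow(a, r // 2, N)
        if r % 2 == 0 and x != N - 1: # need a^(r/2) != -1 (mod N)
            f = gcd(x - 1, N)
            if 1 < f < N:
                return f

print(factor(15))  # prints 3 or 5
```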

On top of that, and roughly at the same time, Artur Ekert gave a (by now) legendary talk introducing an audience of uninitiated physicists to the basics of quantum computers. Here is a quote from Dave Wineland’s Nobel lecture (he won the prize for pioneering the use of trapped ions as qubits): “At the 1994 International Conference on Atomic Physics held in Boulder, Colorado, Artur Ekert presented a lecture outlining the ideas of quantum computation, a subject new to most of the audience. This inspired Ignacio Cirac and Peter Zoller, who attended the conference and were very familiar with the capabilities (and limitations) of trapped-ion experiments, to propose a basic layout for a quantum computer utilizing trapped ions. This seminal paper was the first comprehensive proposal for how a quantum information processor might be realized.” So, from about 95 onwards, many experimental physicists started to take quantum computation seriously.

But the real transformation, I think, took place at the dawn of the new millennium. This is when quantum information entered the domain of many-body systems. People began to show that qubits could be made out of, say, superconductors (and in more than one way!). It is well worth remembering that up until then some prominent physicists (some with Nobel Prizes) believed that superconductors could never be prepared in all the relevant superpositions of states. This led to an explosion of qubit platforms. Round about that time, almost every quantum physicist had their own paper on how to build qubits, and many of these platforms were actively being pursued in laboratories worldwide. Anyone capable of putting two different states of their system into a superposition could join the game.

Then the real heavyweights began to smell profit. Google, Microsoft, IBM and so on, as well as private investors who started funding start-ups, which is how we got to where we are now.

But where exactly are we?

This is not an easy question to answer, for a number of different, albeit possibly related, reasons. First and foremost, industrial efforts are, by their very nature, surrounded by a cloud of secrecy. Secondly, companies like to inflate their claims, as this (usually positively) affects their stock prices. Thirdly, technological progress is impossible to forecast exactly (and probably not even inexactly, until it’s too late). Fourthly, there is a disconnect between scientists in academia and researchers in industry (to be expected, as we academics are still predominantly interested in pursuing fundamental scientific problems and do not consider engineering, even of the quantum kind, worthy of too much attention). The list could go on, but this gives you a rough idea, and acts as an apology for the apparent vagueness of the rest of this blog.

So where are we? There are various online platforms that offer up to 15 or so quantum bits, with varying degrees of fidelity. They are easy to use: one submits whatever sequence of quantum gates one would like to execute, this gets run on an actual machine, and the results are returned to the user.
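
To give a feel for what that looks like, here is a minimal sketch in Qiskit (one of several such toolkits; I am assuming the qiskit and qiskit-aer packages, and the exact API for submitting to real hardware varies between providers and versions). It prepares and measures a two-qubit entangled state, run here on a local simulator:

```python
# A minimal "sequence of quantum gates", executed on a local simulator.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into an equal superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits out

counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)  # ideally only '00' and '11', roughly 50/50
```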

As far as qubit implementations go, superconductors are the dominant platform, but cold atoms and photons are also out there as qubits. Each has advantages and disadvantages, and none is a clear winner for the ultimate technology. A clear winner would probably be a qubit that is – quantumly speaking – stable at room temperature: stable in the sense that, roughly speaking, quantum superpositions of its logical states last for a long time, and that it can quickly be coupled to other qubits of the same kind.
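
A rough figure of merit here is the ratio of the coherence time to the gate time, i.e., how many operations fit in before the superposition degrades. The numbers below are illustrative orders of magnitude I have picked for the sketch, not the specs of any particular device:

```python
# Back-of-the-envelope: how many gates fit inside one coherence time?
# Illustrative orders of magnitude only, not real device specs.
t2 = 100e-6      # assumed qubit coherence time, in seconds
t_gate = 50e-9   # assumed two-qubit gate duration, in seconds
print(f"roughly {t2 / t_gate:.0f} gates per coherence time")  # ~2000
```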

Before evaluating these existing platforms, it is worth remarking that what we are after is universality, meaning that any dynamical transformation one can think of in theory could actually be executed on the machine. In that sense, even the 15-odd-qubit computers that are now freely available are not what one would call universal. We cannot do anything we like with them (such as prepare an arbitrary entangled state) with a high degree of reliability.
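
To make “universality” a little more concrete: the Solovay–Kitaev theorem tells us that a small, fixed gate set such as {H, T} can approximate any single-qubit rotation arbitrarily well. Here is an illustrative numpy sketch (a naive brute-force search, not how real compilers work; the target rotation and search depth are arbitrary choices of mine):

```python
# Brute-force illustration of universality: approximate an arbitrary
# single-qubit rotation using only products of the H and T gates.
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
target = np.diag([1, np.exp(0.3j)])  # some rotation outside the gate set

def dist(U, V):
    """Distance up to a global phase: 0 iff U equals V up to phase."""
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

best = np.inf
for n in range(1, 11):                   # all gate sequences up to length 10
    for seq in product((H, T), repeat=n):
        U = np.eye(2, dtype=complex)
        for g in seq:
            U = g @ U
        best = min(best, dist(U, target))
print(f"closest approximation found: distance {best:.4f}")
```

Longer sequences get closer still, and the theorem guarantees the required sequence length grows only polylogarithmically with the desired accuracy.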

To make matters more difficult, in order to scale things up (say, to a thousand reliable universal qubits), one needs to engage in error correction. This is normally done through redundancy, and current theoretical estimates suggest that we will need hundreds of physical qubits to encode one reliable logical qubit. That means a thousand logical qubits will require a quantum computer with on the order of a million physical qubits.
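
To see why redundancy helps at all, here is the classical toy version of the idea: a three-bit repetition code decoded by majority vote, which corrects any single flip and so pushes the logical error rate from p down to about 3p². (Real quantum codes must also handle phase errors, and must do so without directly measuring the encoded state, which is where the much larger overheads come from.)

```python
# Classical toy of error correction: the three-bit repetition code.
import random

def logical_error_rate(p, trials=100_000):
    """Flip each of 3 copies independently with prob. p; decode by majority."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:   # two or more flips defeat the majority vote
            failures += 1
    return failures / trials

p = 0.05
print(f"physical error rate {p}, logical ~{logical_error_rate(p):.4f}")
# analytically: 3*p**2*(1-p) + p**3 ≈ 0.0073
```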

No such technology exists at present. People do claim to be able to manipulate hundreds, even thousands, of qubits, but this is usually for very specialised, one-off quantum computations. Even though these experiments are no doubt impressive, they are nowhere near being universal and/or scalable. By universal, I mean being able to execute any desired set of transformations, even on a limited number of qubits, with high fidelity; by scalable, I mean that the number of qubits can in principle be increased without compromising on the fidelity. Both properties are, of course, enjoyed by conventional (classical) computers.

Can something useful still be done with the present level of command? Various proof-of-principle computations, but nothing that would impress someone with a real-world application in mind. Don’t get me wrong, current quantum computers are great as educational tools, but they are certainly nothing to write home about as the next generation of personal computers (if they were, they wouldn’t be publicly available to run for free).

The applications we hear about, such as speeding up the travelling salesman problem (one of those ubiquitous hard problems, whose reduction to polynomial time would collapse the whole class of hard problems into an easy one), quantum simulation for drug development, quantum financial modelling, or predicting complex weather patterns, provide good long-term motivations for continuing our quantum endeavour, but none of them is on the horizon yet. There is also quantum AI, the possibility that ordinary “classical” AIs could be improved through quantum physics, and even upgraded to become AGIs (G is for “general”, like we humans think we are).

The good news is that progress is tremendous. One could say that quantum computers follow some kind of Moore’s law, in the sense that the number of qubits we can control has been doubling every three to four years or so from 2000 onwards (hence, still fewer than a hundred qubits at present). However, as I said, it is difficult to extrapolate technological trends accurately. But if we continue at this rate – and admittedly it is a big if – we could have a thousand-logical-qubit quantum computer by 2040.

What excites me the most is that improvements in quantum technology immediately reflect back on fundamental physics. My hope is that, because of this, by 2030 we will already have proved that gravity must be (at least in part) quantum. Also, that we will be able to entangle two or more living systems (most likely only viruses and bacteria, but it’s a start). There is a whole host of other basic physics questions I’d like us to be able to address, such as the universality of quantum superpositions (i.e., can anything be superposed?), the status of the quantum principle of local tomography (i.e., are measurements on individual quantum systems sufficient to reveal entanglement?) and the status of potentials in fundamental interactions (i.e., is the Aharonov–Bohm effect really non-local?).

And maybe, just maybe, we will start getting glimpses of the physics that lies beyond (and some physics must lie beyond). When this happens, I think quantum technologies will have fulfilled their promise and reached their full potential. For there is no better way for a technology to go than to lead to a new understanding of the universe we live in.

Let me know what you think.

Sign up to my Substack if you’d like to have my articles delivered straight to your inbox.
