
in a classical computer memory, this operation has an exponential cost in the number n of qubits. Keeping track of all the operations involved, we arrive at a cost that scales as n·2^n on a classical computer (the cost of a fast Fourier transform), while it scales as n^2 on the quantum computer: this is an exponential gain!
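To make the scaling concrete, here is a minimal sketch (plain Python, not a quantum simulation) that tabulates the two asymptotic costs quoted above, with all constant factors ignored:

```python
# Rough comparison of the asymptotic costs quoted in the text (constants ignored):
# a classical FFT on N = 2^n points costs ~ n * 2^n operations, while the quantum
# Fourier transform circuit contains ~ n^2 gates.
for n in (10, 30, 50, 100):
    classical_ops = n * 2**n   # classical fast Fourier transform
    quantum_gates = n**2       # gates in the quantum Fourier transform circuit
    print(f"n = {n:3d} qubits: classical ~ {classical_ops:.1e} operations, "
          f"quantum ~ {quantum_gates} gates")
```

Already at n = 50 the classical count exceeds 10^16 operations, while the quantum circuit needs only 2500 gates.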
Yet, this impressive gain comes with strings attached: a theoretical limitation and a practical one. Let us first dwell on the theoretical one. Suppose the coefficients α_0, α_1, …, α_{2^n−1} of the Fourier-transformed state |ψ⟩ correspond to the values of the pixels of an image we would like to recover. Given the probabilistic nature of quantum measurements, obtaining these coefficients requires at least 2^n readouts to learn the histogram (which, in fact, is only enough to learn the moduli of the α_i's)… And because of the projective nature of quantum measurements, this means repeating the whole circuit at least 2^n times, causing us to lose the exponential advantage. This rule has only one exception: if, among all the coefficients, only a few (say only one, the k-th one, α_k) are nonzero, then the result of reading out the final state is always the same: it returns k with probability 1. Thus, a single circuit execution and readout suffice. If, moreover, the solution of our problem was to find k, then we have indeed obtained a speedup. In other words, in many quantum algorithms, an acceleration can be reached only when the final distribution of coefficients is highly skewed (peaked) and the solution can be read off the peaks of the distribution. This is typically what Peter Shor's famous factoring algorithm does [1]. This is also what makes it difficult to invent efficient quantum algorithms for machine learning: it usually involves reading out a lot of information (in addition to loading a lot of training data, which is also costly) [2].
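The readout issue can be illustrated numerically. The sketch below (plain Python with NumPy; the state size and probabilities are invented for illustration) samples measurement outcomes from two final states: one whose weight is spread over all 2^n basis states, as for a generic image, and one whose weight is concentrated on a single basis state k.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                      # illustrative size: 2^n = 1024 basis states
N = 2**n

# "Image-like" state: measurement probabilities spread over all basis states.
p_flat = rng.random(N)
p_flat /= p_flat.sum()

# "Peaked" state: essentially all the weight on one basis state k.
k = 417
p_peaked = np.full(N, 1e-9)
p_peaked[k] = 1.0
p_peaked /= p_peaked.sum()

# Each circuit run returns a single basis state, sampled from these probabilities.
shots = 5 * N               # already 5 * 2^n repetitions of the whole circuit
counts = np.bincount(rng.choice(N, size=shots, p=p_flat), minlength=N)
estimate = counts / shots
print("flat state, L1 error on the histogram:", np.abs(estimate - p_flat).sum())

print("peaked state, a single readout returns:", rng.choice(N, p=p_peaked), "(k =", k, ")")
```

Even with five times 2^n shots, the histogram of the spread-out state is still estimated with a sizeable error, whereas a single readout of the peaked state reveals k immediately.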

The second limitation is practical: quantum states are fragile with respect to external influences. This means that the longer a computation runs, the more likely it is to be destroyed by its environment. This deleterious influence, called decoherence, degrades the quality of quantum states, quantified by their fidelity, exponentially with the number of gates. Hence, with current processors, only about 100 to 1000 gates can be applied before too much harm is done. The quantum Fourier transform we mentioned above, with its n^2 gates, is already out of reach: with n = 100 qubits, it would require about 10,000 gates!
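A one-line toy model makes this gate budget concrete. Assuming, purely for illustration, a constant fidelity per gate (the 99.9% figure below is a hypothetical round number, not a measured value for any particular processor), the overall circuit fidelity decays roughly as the per-gate fidelity raised to the number of gates:

```python
# Toy model of decoherence: circuit fidelity ~ (per-gate fidelity) ** (number of gates).
# The 0.999 per-gate fidelity is an assumed, illustrative value.
f_gate = 0.999
for gates in (100, 1_000, 10_000):
    print(f"{gates:6d} gates -> circuit fidelity ~ {f_gate**gates:.2e}")
# Around 100-1000 gates the state is still usable; at ~10,000 gates (the quantum
# Fourier transform on n = 100 qubits) the fidelity collapses to ~5e-5.
```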
The art of quantum algorithmics thus boils down to finding creative ways to extract computational advantage despite these limitations.
Figure 1. Quantum bits and quantum circuits. Top left: a single qubit can be in a superposition of |0⟩ and |1⟩. Bottom left: two qubits can be entangled. Right: quantum circuit representing a Fourier transform on n = 5 qubits, corresponding to a classical discrete Fourier transform on a vector of N = 32 points. The input wave function requires a potentially long preparation circuit. The output wave function (which contains the Fourier spectrum) is not directly accessible: only probabilistic measurements give access to the largest amplitudes.

WHAT USES FOR QUANTUM COMPUTERS?
The first concern of Richard Feynman, when he advocated the use of machines with quantum inside, was, however, not these intrinsically quantum limitations, but those of classical computers [3]. He had in mind a major conundrum of modern physics called the many-body problem. Ubiquitous in materials science, quantum chemistry, and nuclear physics, this problem arises in systems where interactions between particles matter. For instance, in solids, interactions between electrons are suspected to be the main origin of high-temperature superconductivity. Yet, interactions are also precisely the reason why these problems are difficult to tackle with classical computers: so-called mean-field approaches fail, and the more advanced methods developed over the last fifty years all hit an exponential wall in some regime. For instance, tensor-network techniques are sensitive to the amount of entanglement in the problem: their cost scales exponentially with this entanglement. Monte Carlo methods suffer from so-called sign problems, which lead to statistical errors that diverge exponentially with system size or at low temperature [4].
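A rough estimate conveys the scale of this exponential wall in its simplest, brute-force form (the tensor-network and Monte Carlo scalings mentioned above are more subtle): storing the full wave function of n interacting spin-1/2 particles on a classical machine requires 2^n complex amplitudes.

```python
# Memory needed to store the full many-body wave function of n spin-1/2 particles:
# 2^n complex amplitudes, 16 bytes each (complex128).
BYTES_PER_AMPLITUDE = 16
for n in (30, 50, 100):
    memory_gb = 2**n * BYTES_PER_AMPLITUDE / 1e9
    print(f"n = {n:3d} spins: ~{memory_gb:.3g} GB")
# n = 30 fits in a laptop (~17 GB); n = 50 already needs ~18 million GB;
# n = 100 is hopeless on any classical hardware.
```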
Feynman pointed out that quantum computers, on the other hand, would be free from those ills, as information is directly stored in the quantum system itself, and time evolution happens naturally through Schrödinger's equation, not via costly matrix-vector multiplications (as in tensor networks) or high-dimensional integrals (as computed in Monte Carlo algorithms). In a way, by directly simulating the many-body physics one is interested in with an artificial many-body system, one does away with the problems of classical processors [5].
Quantum many-body problems are thus often believed to be among the first applications of quantum computers. As it turns out, outstanding computational problems