Will a quantum computer perform a calculation by 2026 that is impossible for any classical supercomputer?
2026
65% chance

Resolves yes if before 2026 a peer-reviewed publication demonstrates a quantum computer performing a calculation that is proven to be impossible for any existing classical supercomputer.


Maybe this relates to the definition of "proven"? In practical, i.e. experimental, physics there can only be very strong evidence but never a perfect proof from data, similar to security, where practical systems are proven in practice without a perfect theoretical proof.

As there is already a theoretical proof of quantum advantage (assuming the complexity hierarchy does not collapse, i.e. that P ≠ NP), I assume you mean a practical proof?
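
(To make the "theoretical proof" part concrete: the standard results are conditional, not absolute. Stated loosely, and glossing over the exact error model, which differs between the IQP, BosonSampling, and random-circuit-sampling papers, the schematic implication is:)

```latex
% Loose schematic of the usual conditional-hardness argument (hedged; details vary by paper):
\[
  \text{efficient classical sampling from ideal RCS/IQP output distributions}
  \;\Longrightarrow\;
  \mathrm{PH} \subseteq \Sigma_3^{\mathrm{p}}
  \quad\text{(the polynomial hierarchy collapses to its third level).}
\]
```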

Can you share more on your criteria?

I agree that this is not resolved, but I am surprised by the odds, as random circuit sampling experiments regularly outperform state-of-the-art classical supercomputers, and classical simulation is finding it increasingly hard to keep up with the scaling of quantum hardware:

"In light of the discussion above, in this work we primarily focus on providing direct evidence for the ability of the

H2 quantum computer to sample with high fidelity from

circuits that are difficult to classically simulate. Nevertheless, we do compute FXEB for system sizes that we are

able to (N ≤ 40), and confirm that it agrees well with

other fidelity estimates. We also report samples from

circuits for which we are unable (due to the severe classical difficulty of simulating the circuits) to compute FXEB.

5

We are generally unaware of any classical methods to produce FXEB numbers comparable to what we expect from

these circuits without performing essentially exact numerical simulations of the circuit (up to the modest loss

in fidelity observed experimentally, see Sec. IV)"

https://www.quantinuum.com/news/quantinuums-h-series-hits-56-physical-qubits-that-are-all-to-all-connected-and-departs-the-era-of-classical-simulation

https://arxiv.org/abs/2406.02501
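
(For context on the FXEB numbers quoted above: linear cross-entropy benchmarking estimates fidelity from how often the device returns bitstrings to which the ideal circuit assigns high probability. A minimal sketch of the estimator, assuming you already have the ideal output probabilities, which is exactly the part that becomes classically intractable at large qubit counts; the function name and toy numbers are illustrative, not taken from the paper:)

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmarking fidelity: F_XEB = 2**n * <p(x)> - 1.

    ideal_probs: dict mapping bitstring -> ideal output probability of the circuit
    samples:     bitstrings measured on the quantum device
    Returns ~1 for a noiseless device and ~0 for uniform (fully depolarized) output.
    """
    mean_p = np.mean([ideal_probs[s] for s in samples])
    return (2 ** n_qubits) * mean_p - 1.0

# Toy usage with made-up numbers (2 qubits):
probs = {"00": 0.5, "01": 0.1, "10": 0.1, "11": 0.3}
print(linear_xeb(probs, ["00", "00", "11", "01"], n_qubits=2))  # 0.4
```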


Since complexity theorists haven't actually proved P != NP, this raises a question, which I don't see answered below, about what counts as "proof".

For example, there have been claims in the past that BosonSampling-like experiments produced results that could not be simulated by classical computers, which were then countered by other teams devising new algorithms to simulate those experiments. How long does a claim have to go unchallenged to count?

predicts NO

@BoltonBailey And this is why I have so many limit orders on NO. Every quantum supremacy paper immediately becomes a target for computational groups to beat it with more sophisticated methods. When IBM published their most recent supremacy demonstration, it was less than one week before a preprint was put on arXiv which simulated the system using a tensor network approach.
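
(For anyone wondering what "simulated the system using a tensor network approach" means mechanically: a common trick is to store the state as a matrix product state and truncate the bond dimension after every two-qubit gate, trading exactness for tractability. A minimal numpy sketch of that single gate-plus-truncation step; this is my own illustration of the general idea, not the method of any particular preprint:)

```python
import numpy as np

def apply_two_qubit_gate_mps(A1, A2, gate, max_bond):
    """Apply a two-qubit gate to adjacent MPS tensors and truncate the new bond.

    A1: (chi_left, 2, chi_mid)   A2: (chi_mid, 2, chi_right)
    gate: (4, 4) unitary acting on the combined physical index (s1*2 + s2).
    """
    chi_l = A1.shape[0]
    chi_r = A2.shape[2]
    # Contract the two sites into one (chi_l, 2, 2, chi_r) tensor, then apply the gate.
    theta = np.tensordot(A1, A2, axes=(2, 0)).reshape(chi_l, 4, chi_r)
    theta = np.einsum('ab,ibj->iaj', gate, theta)
    # Split back into two sites with an SVD, keeping only the largest singular values.
    theta = theta.reshape(chi_l, 2, 2, chi_r).reshape(chi_l * 2, 2 * chi_r)
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(max_bond, len(S))  # <- the lossy step that keeps the simulation tractable
    A1_new = U[:, :keep].reshape(chi_l, 2, keep)
    A2_new = (np.diag(S[:keep]) @ Vh[:keep]).reshape(keep, 2, chi_r)
    return A1_new, A2_new

# Toy usage: |0>|1> product state (bond dimension 1), apply a CNOT.
A1 = np.zeros((1, 2, 1)); A1[0, 0, 0] = 1.0
A2 = np.zeros((1, 2, 1)); A2[0, 1, 0] = 1.0
cnot = np.eye(4)[[0, 1, 3, 2]]
B1, B2 = apply_two_qubit_gate_mps(A1, A2, cnot, max_bond=2)
```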

"Proven to be impossible" is a bit vague. As of current understanding - any quantum calculation can be performed by any classical computer, but there might be calculations that take an unreasonable time using classical computers - years, decades or even thousands of years. However, such calculations are still theoretically possible, though impractical.

@ProjectVictory I second this question. Additionally, what about approximations? Suppose you stipulate that whatever classical calculation is done must actually complete and return results. Do you then accept algorithms that only approximate the quantum result?

@Pykess What I had in mind is that it performs a calculation that would not be feasible on a classical computer, i.e. it would take longer than the lifetime of the universe to compute.
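
(To put rough numbers on "not feasible", here is my own back-of-the-envelope illustration, not a figure from this thread: brute-force statevector simulation stores 2^n amplitudes, so memory alone rules it out well before runtime does.)

```python
# Memory for a full n-qubit statevector at 16 bytes per complex128 amplitude.
# Illustrative figures only.
for n in (40, 60, 100):
    petabytes = (2 ** n) * 16 / 1e15
    print(f"n = {n:3d}: {petabytes:.1e} PB")
# n =  40: 1.8e-02 PB  (feasible on a large cluster)
# n =  60: 1.8e+04 PB  (beyond any existing machine)
# n = 100: 2.0e+16 PB  (hopeless for direct statevector simulation)
```

Of course this only bounds brute-force statevector simulation; the tensor-network and spoofing approaches discussed above exist precisely to sidestep that wall, which is why "proven impossible" is so hard to pin down.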

@GabeGarboden Okay, so then I re-ask my point about approximations. Do you require that the classical counterpart be exact? Or if it made approximations in order to complete the calculation in a reasonable amount of time, would that suffice?

@Pykess It does not have to be exact, as long as it's shown the classical computer cannot come to a reasonable approximation within a reasonable expected time.

@GabeGarboden Thanks for the clarifications!