Why are quantum computers faster than classical computers?

In summary, the article discusses how contextuality is essential to understanding why quantum computers work faster than classical computers. First, it cites two references which argue that quantum computers rely on this concept to some degree. Second, it provides some explanations for why this might be the case. Finally, it notes that this is not always the case and that classical computers can sometimes be faster.
  • #36
I was not thinking about simulations, whether in polynomial or exponential time. The very definition of a non-deterministic Turing machine is unphysical because it tries out all possibilities at once and, if there is a solution, solves the problem (which, curiously, is a popular misconception of what a quantum computer is capable of doing).
 
  • #37
The very definition of non-deterministic Turing machine is unphysical because it tries out all possibilities at once and if there is a solution, then it solves the problem
Yes, but you can simulate it by serializing all possibilities.

Both deterministic and nondeterministic Turing machines can or cannot be simulated, depending on what you mean by simulation.
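"Serializing all possibilities" can be made concrete with a small sketch. This is a hypothetical illustration, not code from the thread: the machine's transition rule is abstracted as a `step` callback, and every sequence of nondeterministic choices up to a fixed length is tried one at a time, on ordinary deterministic hardware.

```python
import itertools

def accepts(step, start, max_steps, num_choices):
    """Deterministically simulate a nondeterministic machine by trying
    every choice sequence serially instead of "all at once".

    step(state, choice) -> (next_state, accepted) is assumed to encode
    the machine's transition rule; names here are illustrative only.
    """
    for choices in itertools.product(range(num_choices), repeat=max_steps):
        state = start
        for c in choices:
            state, ok = step(state, c)
            if ok:
                return True  # some branch of the computation accepts
    return False  # no branch accepts within max_steps

# Toy machine: accept iff some sequence of 0/1 choices sums to 3.
step = lambda s, c: (s + c, s + c == 3)
```

The time cost is exponential in `max_steps` (there are `num_choices ** max_steps` branches), but the space cost is only one branch's worth of state at a time, which is the point being made above.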
 
  • #38
What I tried to convey when I said that a non-deterministic Turing machine is unphysical is really simple, and probably doesn't contradict or even relate to what you argued at all. It is the same as saying that it is unphysical for me to go to two different places at the same time and then, based on those "experiences," pick the place where I'm happier to be as the one that becomes real.

But now I understand the reason behind your choice of words better.
 
  • #39
Demystifier said:
There are certainly other ways to compute things in quantum physics. Moreover, if you take a look at the literature on quantum computation, path integrals are (almost) never mentioned.

The main mention I've seen of path integrals in quantum computing is in the proof that PSPACE = BQPSPACE.

The contributions to a single output amplitude can be figured out by iterating through all the ways the input amplitudes flow to it through the intermediate states and operations. Each individual flow-path's contribution is cheap to compute in both time and space. There's a huge number of these paths, so it will take a lot of time to add them all up, but you only need to consider one at a time (while keeping track of the running total), so it doesn't take much space.
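The path-sum above can be sketched for a small circuit given as a list of gate matrices. This is a minimal illustration (function and variable names are mine, not from the thread): each "path" fixes one basis state between consecutive gates, its contribution is a product of single matrix entries, and only the running total is kept in memory.

```python
import itertools
import numpy as np

def path_sum_amplitude(gates, x, y):
    """Compute the amplitude <y| G_k ... G_1 |x> by summing over all
    paths of intermediate basis states, one path at a time.

    Time is exponential in the number of gates, but space is not:
    only one path's product and the running total are stored.
    """
    dim = gates[0].shape[0]
    total = 0j
    # a path assigns a basis state to each gap between consecutive gates
    for mids in itertools.product(range(dim), repeat=len(gates) - 1):
        states = (x,) + mids + (y,)
        contrib = 1 + 0j
        for g, (a, b) in zip(gates, zip(states, states[1:])):
            contrib *= g[b, a]  # matrix entry <b| g |a>
        total += contrib
    return total

# Sanity check: two Hadamards compose to the identity, so <0|HH|0> = 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
```

Multiplying the matrices directly would give the same answer faster, but it needs space for a full state vector; the path sum trades that space for time, which is exactly the trick used in the PSPACE argument.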

With the ability to compute the amplitude of individual output states, you can sample one with the correct probability. Choose a random number between 0 and 1, then iterate through all the possible output states while cumulatively subtracting each state's squared amplitude magnitude from that number. Once it hits or goes past zero, return the current output state. Again this may take a lot of time, since there are exponentially many states to go through, but you only need to track the remaining total, so it doesn't take much space.
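The sampling step above is just inverse-transform sampling with a running remainder. A minimal sketch, assuming an `amplitude_of(state)` callback such as the path-sum function (all names here are illustrative):

```python
import random

def sample_output(amplitude_of, num_states, rng=random.random):
    """Sample an output basis state with probability |amplitude|^2.

    Only the remaining probability mass is tracked, so the extra space
    is O(1) even though the loop may visit exponentially many states.
    """
    r = rng()  # uniform in [0, 1)
    for state in range(num_states):
        r -= abs(amplitude_of(state)) ** 2
        if r <= 0:
            return state
    return num_states - 1  # guard against floating-point round-off
```

For example, with amplitudes giving probabilities 0.25 and 0.75, a draw of 0.1 lands in the first state and a draw of 0.9 in the second; each amplitude is recomputed on demand rather than stored, which is what keeps the space small.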
 