1 Molecule Computes Thousands of Times Faster Than a PC
alexhiggins732 writes with this tantalizing PopSci snippet: "A demo of a quantum calculation carried out by Japanese researchers has yielded some pretty mind-blowing results: a single molecule can perform a complex calculation thousands of times faster than a conventional computer. A proof-of-principle test run of a discrete Fourier transform — a common calculation used in spectral analysis and data compression, among other things — performed with a single iodine molecule went very well, putting all the molecules in your PC to shame."
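For reference, the discrete Fourier transform mentioned in the summary can be sketched in a few lines of Python. This is the naive O(n²) definition, not the FFT a real PC would actually run, and the test signal is just an illustrative choice:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: O(n^2) complex multiply-adds."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A constant signal puts all of its energy in the zero-frequency bin.
spectrum = dft([1.0, 1.0, 1.0, 1.0])
print([round(abs(c), 6) for c in spectrum])  # → [4.0, 0.0, 0.0, 0.0]
```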
Quantum computers aren't X times faster. (Score:5, Interesting)
I really hate it when people come up with the simple "Quantum computer 1000 times faster than conventional computer". It's not just overly simplistic, it's wrong.
Quantum computers can turn some problems that require exponential time to solve into problems solvable in polynomial time. So instead of taking 2^n time, it might take n^3 time. That cannot in any realistic way be described as being "X times faster".
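A quick sketch of why a constant speedup factor is the wrong mental model: the ratio between 2^n and n^3 itself grows with the problem size, so no single "X times faster" number applies (the n values below are illustrative):

```python
# The "speedup" of an n^3 algorithm over a 2^n one grows without bound,
# so no single constant factor like "1000x" can describe it.
for n in (10, 20, 30, 40):
    ratio = 2**n / n**3
    print(f"n={n:2d}: 2^n / n^3 = {ratio:,.1f}")
```

At n=10 the two are roughly even; by n=40 the exponential algorithm is over ten million times slower.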
Show me a single molecule quantum device (Score:5, Interesting)
I've never seen a quantum computing device smaller than the size of a small room, so I'm not really sure how fair it is to compare it to a PC.
Really, the PC doesn't even use full atoms for calculations; it uses electrons and electron holes in the atoms, and it's at least 2000 times smaller than any quantum device I've seen.
You don't really get to say it's one molecule when it's a device made up of a fuckton of molecules, and you are comparing it to a PC, which uses subatomic particles to actually do the work.
You have a fast calculator ... the size of a room ... into which I can put 2000 slower and easier-to-make calculators and end up faster.
Sure, eventually, they'll make it smaller and smaller, but your comparison is like saying using an F-16 to deliver mail is faster than using a postal truck to deliver milk. Just because you make two statements that share a verb doesn't mean you've made a comparison that's in any way meaningful.
Re:This could be the breakthrough... (Score:5, Interesting)
Bah! People need to stop complaining when it turns out that an important incremental advance in the field of quantum computing isn't already a commercially viable quantum computer being integrated into a chip for release next week. There won't be commercially viable products for many years to come. What is needed is many, many incremental improvements across a broad variety of disciplines. None of the proof-of-principle experiments around today are attempting to be demonstrations of viable technology. This experiment demonstrates that an arbitrary quantum state can be deterministically written to the vibrational modes of a molecule, allowed to evolve, and read out by projective measurement. It is an important result because it helps open a new avenue of attack: vibrational energy levels in molecules.
The experiment is a beast that requires expensive, ultra-fast lasers, pulse-shaping optics, and a molecular jet. It won't be integrated into a PCI expansion card anytime soon, but the fact that it is possible to coherently prepare superpositions of vibrational modes in molecules is interesting in its own right and is potentially important for quantum computation. Another decade or three of fundamental research and well-funded grad students (ha) are going to be required before we can expect a commercial application.
Re:Thats cheating (Score:5, Interesting)
If you define enough real-world processes as calculations, you can prove that none of our laws of physics are the real ones.
For just one example, Nature can't be storing irrational numbers as infinite series expressions (where would the infinitely large registers to store them be?). Another way to put this is, if some process in Nature counts as a calculation, Nature can't be doing that calculation using numbers such as pi or e, but rather finite approximations of such numbers, that allow results in finite time.
(Otherwise, somewhere 'outside' the observable universe, there is an infinite amount of storage available for each number needed, and some sort of mechanism that handles those calculations in what looks like finite time to any point of view inside the universe - congratulations, you've just proved both the omnipresence and the omnipotence of God - probably not what you were aiming to do).
There are other ways around this, such as claiming real world events are just approximations - but what does it mean to say that nature has approximated what would happen to that apple that just fell on Newton's noggin, if there had been an exact inverse square law of gravity inside our computationally finite universe? This sort of claim sounds suspiciously like Plato's cave. Is there an ideal law of gravity that is somehow more real than the law of gravity actually expressed in the universe?
Alternately, maybe the problem is with claiming that some things are computations just because they can be interpreted as approximate (usually analog) computations by an observer who also has the other knowledge necessary to parse the events as the results of computations. That's probably just as likely to lead to wild implications, but at least they are different wild implications.
Re:This could be the breakthrough... (Score:3, Interesting)
The ultimate improbability bomb...I like it. The advertising slogan could be "yes, God DOES play dice with the world...and you can, too!"
Re:This could be the breakthrough... (Score:4, Interesting)
I think the real question should be how many measurements per second can you do.
This is what standard computers do. To get the next step, you have to measure/read the previous state. So you have just zero or one, because that is the easiest thing to measure. And you measure that rate in gigahertz.
How many measurements per second can quantum computers do?
Re:This could be the breakthrough... (Score:3, Interesting)
This P-value and the P-value you're thinking of aren't the same. Ordinarily, when we think of P-value, we're thinking of errors caused by statistical chance, errors in the data and so on. However, in quantum computing, even purely mathematical computations have a probability of correctness. In other words, when you add 2 + 2 with a quantum computer, you don't get 4. You get 4 (p=.95). When you evaluate the mathematical function, you get the result, plus a probability of that result being the correct result.
As I understand it, there's a trade-off between uncertainty and speed in quantum computing. You can get results faster, but you'll have a higher probability that your machine returns 2+2=5.
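That trade-off can be mimicked with a toy classical model. The `noisy_add` function and the 0.95 figure below are illustrative assumptions, not real quantum behavior; the point is that repeating a probabilistic computation and taking a majority vote drives the error rate down, at the cost of extra runs:

```python
import random
from collections import Counter

def noisy_add(a, b, p_correct=0.95):
    """Toy model of a probabilistic result: right answer with probability p_correct."""
    if random.random() < p_correct:
        return a + b
    return a + b + random.choice([-1, 1])  # a wrong nearby value

def majority_vote(runs=15):
    """Repeat the computation and vote; errors wash out at the cost of extra runs."""
    counts = Counter(noisy_add(2, 2) for _ in range(runs))
    return counts.most_common(1)[0][0]

random.seed(0)
print(majority_vote())  # with overwhelming probability prints 4
```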
Re:This could be the breakthrough... (Score:3, Interesting)
The literature that I've read in the press seems unanimous in stating that quantum computers are going to be better than conventional computers. This is particularly evident with respect to encryption and searching. I am now beginning to wonder if it is even possible to explain it to a layperson like myself.
Good question, though. Sorry I can't answer it.
Re:Thats cheating (Score:1, Interesting)
This is stupid; irrational numbers can absolutely be constructed. Obvious example: construct a circle. The ratio of the circumference to the diameter is pi. You've still "calculated" pi. Now, if you're making the (much) more subtle argument that entropy, in the sense of lacking information, means that the second law of thermodynamics is violated whenever you draw a circle, I'm more impressed, as this is technically true... we never actually know the length of the circumference. I don't see where God comes in.
Maybe my fundamental issue with what you are saying is that, well, nature couldn't give two craps what humans call computation.
Re:This could be the breakthrough... (Score:3, Interesting)
The simplest explanation I can offer is that, at the quantum level, moving bare information (yes, even abstract ones and zeros) from one location to another to perform calculations runs into a bottleneck due to the Heisenberg uncertainty principle. The simple act of measuring (for example, reading a bit out of RAM or out of a CPU register) gets more and more disruptive to increasingly small systems.
Quantum computing is not magic, but it does differ from the classical approach in that you perform a lot of your calculating horsepower inside of closed systems wherein, afterwards, reading the result destroys the system — much like smashing a piggy-bank. You introduce your input data into a system at a certain quantum ground state, and as each input is introduced the system transforms from one wave-function to another, performing your calculation in a manner that might even be considered "analog", as quantization only occurs at the time of measurement. Once all the input is introduced, you then measure the system to obtain your output. This measurement destroys the system, and only provides an "answer", none of the interim calculations survive.
The seeming magic is in the fact that the interim calculations are carried out in a system entirely isolated from outside causality. We are accustomed to measuring the effectiveness of a system component such as an integrated circuit by reading from and writing to it, and combining its efforts in real time with efforts from all across the machine in question. We are accustomed to thinking of information as entirely abstract, and that is a foundation of classical computing. In quantum computing, engineers understand that information is instead bulky, and at smaller scales you reach diminishing returns moving it across your machine. Performing calculations in localized, potentially mind-numbingly tiny closed systems neutralizes this drawback to moving information (in a word, causality) and allows otherwise incalculable gains in the speed and parallelization of information processing.
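As a loose illustration of "evolve in isolation, then destructively read out one answer", here is a toy two-amplitude qubit sketch. This is purely illustrative (real quantum hardware is nothing like a pair of floats); it shows a state evolving through an unobserved superposition and yielding only a single classical bit at readout:

```python
import random

H = [[2**-0.5, 2**-0.5],
     [2**-0.5, -2**-0.5]]  # Hadamard gate as a 2x2 matrix

def apply(gate, state):
    """Unitary evolution of a two-amplitude state (nobody looks inside)."""
    return [gate[0][0]*state[0] + gate[0][1]*state[1],
            gate[1][0]*state[0] + gate[1][1]*state[1]]

def measure(state):
    """Projective readout: yields one classical bit; the superposition is gone."""
    return 0 if random.random() < abs(state[0])**2 else 1

state = [1.0, 0.0]       # start in |0>
state = apply(H, state)  # now a superposition, neither 0 nor 1
state = apply(H, state)  # interference cancels the |1> amplitude
print(measure(state))    # → 0, via interim steps that were never observed
```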
Let me try this from a different angle. If you are comfortable with simple physics concepts such as not being able to communicate faster than the speed of light, then you can easily grok the information-processing bottleneck that this fairly universal physical principle imposes upon computing. For example, if you wired a CPU in New York to a stick of RAM in China, it's just not possible to achieve seek times below about 38 milliseconds. In practical terms you'd never be close; routing and switching and non-geodesic data paths would stymie your efforts, so you might optimize those, but the bare fact of the bad design decision in placing your components murders your ultimate capability. If you became used to that level of computing limitation, you would probably even design your algorithms to make the best of that situation and rely as little upon seek time as possible.
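The 38 ms figure checks out as a back-of-the-envelope one-way light-travel time (the ~11,500 km New York-to-China great-circle distance is an assumed round number):

```python
# Back-of-the-envelope check of the 38 ms figure; the ~11,500 km
# New York-to-China distance is an assumed round number.
c_km_per_s = 299_792.458          # speed of light in vacuum
distance_km = 11_500
one_way_ms = distance_km / c_km_per_s * 1000
print(f"one-way light-travel time: {one_way_ms:.0f} ms")  # → 38 ms
```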
Then, when a friend walks up to you using a relatively poorly constructed laptop whose CPU is located inches from the RAM, running an OS chock full of algorithms that don't fear seek time, its processing power and capabilities would simply knock you out of your chair by comparison. That cheap laptop is obviously not magic, but you are hamstrung by the expectations your New York / China computer has left you with.
Classical vs. quantum computing is very much like that. We are, all of us, hamstrung by the implicit computational limitations of relative causality. We want to fetch data from the RAM and take it to the CPU to be processed. We want to move data from this portion of the CPU to that portion for more processing. The bottleneck we face is closely related to the "speed of light" bottleneck, but it's not strictly the same. It is the bottleneck of causality itself: the Heisenberg uncertainty principle. Information IS causality. Sending a message, be it by yelling across the house, making an example out of a fired employee, or pumping electrons down a copper wire, always involves forcing one thing to cause another.