jd writes: "Professor Rakesh Kumar at the University of Illinois has produced research showing that allowing communication errors between microprocessor components, and then making the software more robust, will actually result in chips that are faster and yet require less power. His argument is that at the current scale errors in transmission occur anyway, and that the efforts of chip manufacturers to hide these to create the illusion of perfect reliability simply introduce a lot of unnecessary expense, demand excessive power and deoptimize the design. He favors a new architecture, which he calls the 'stochastic processor,' designed to gracefully handle data corruption and error recovery. He believes he has shown such a design would work and that it will permit Moore's Law to continue to operate into the foreseeable future. However, this is not the first time someone has tried to fundamentally revolutionize the CPU. The Transputer, the AMULET, the FM8501, the iWARP and the Crusoe were all supposed to be game-changers but died a cold, lonely death instead — and those were far closer to design philosophies programmers are currently familiar with. Modern software simply isn't written with the level of reliability the stochastic processor requires in mind (and many software packages are too big and too complex to port), and the volume of available software frequently makes or breaks new designs. Will this be 'interesting but dead-end' research, or will the Professor pull off a CPU architectural revolution not seen since the microprocessor itself was designed?"
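The "make the software more robust" idea can be illustrated in miniature. The sketch below is purely hypothetical — it is not Kumar's actual design, just a toy model in which an arithmetic operation occasionally returns a corrupted result (standing in for an unreliable hardware unit), and a software layer recovers by running the operation several times and taking a majority vote. The function names, error model, and vote count are all assumptions for illustration.

```python
import random
from collections import Counter

def unreliable_add(a, b, error_rate=0.1):
    # Hypothetical stand-in for an adder on a "stochastic processor":
    # with probability error_rate, a random low bit of the result flips.
    result = a + b
    if random.random() < error_rate:
        result ^= 1 << random.randrange(8)
    return result

def robust_add(a, b, votes=5, error_rate=0.1):
    # Software-level error recovery: repeat the unreliable operation
    # and return the most common (majority) outcome.
    outcomes = Counter(unreliable_add(a, b, error_rate) for _ in range(votes))
    return outcomes.most_common(1)[0][0]
```

The point of the toy model is the trade-off in the summary: each individual operation is allowed to be cheap and occasionally wrong, and correctness is reclaimed in software — here by brute-force redundancy, though a real design would presumably use far cheaper detection and recovery mechanisms.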