2007 Turing Award Winners Announced
The Association for Computing Machinery has announced the winners of the 2007 Turing Award. Edmund M. Clarke, E. Allen Emerson, and Joseph Sifakis received the award for their work on an automated method for finding design errors in computer hardware and software. "Model Checking is a type of 'formal verification' that analyzes the logic underlying a design, much as a mathematician uses a proof to determine that a theorem is correct. Far from hit or miss, Model Checking considers every possible state of a hardware or software design and determines whether it is consistent with the designer's specifications. Clarke and Emerson originated the idea of Model Checking at Harvard in 1981. They developed a theoretical technique for determining whether an abstract model of a hardware or software design satisfies a formal specification, given as a formula in Temporal Logic, a notation for describing possible sequences of events. Moreover, when a system fails its specification, the method can produce a counterexample that points to the source of the problem. Numerous model checking systems have been implemented, such as Spin at Bell Labs."
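For readers who have not seen the technique, here is a minimal, illustrative sketch in Python of explicit-state model checking. It is not the CTL algorithm from the awarded papers and not how Spin works internally; the toy two-process lock model, the invariant, and every name in it are invented for illustration. The checker enumerates every reachable state of the model and, when the "never both in the critical section" invariant fails, it reconstructs a counterexample trace, which is the kind of output the summary describes.

    # Minimal sketch of explicit-state model checking (illustrative only).
    from collections import deque

    # State = (pc of process 0, pc of process 1, lock), with a deliberately
    # broken check-then-act lock: observing the lock and taking it are two
    # separate steps, so both processes can slip into the critical section.
    INITIAL = ('idle', 'idle', 0)

    def _with(state, i, new_pc, new_lock):
        pcs = list(state[:2])
        pcs[i] = new_pc
        return (pcs[0], pcs[1], new_lock)

    def successors(state):
        pc0, pc1, lock = state
        for i, pc in enumerate((pc0, pc1)):
            if pc == 'idle':
                yield _with(state, i, 'trying', lock)
            elif pc == 'trying' and lock == 0:
                yield _with(state, i, 'granted', lock)   # saw the lock free
            elif pc == 'granted':
                yield _with(state, i, 'critical', 1)     # only now takes it
            elif pc == 'critical':
                yield _with(state, i, 'idle', 0)

    def invariant(state):
        # Safety property: the processes are never both in the critical section.
        return not (state[0] == 'critical' and state[1] == 'critical')

    def check():
        # Breadth-first search over every reachable state, remembering parents
        # so a violating state can be expanded into a counterexample trace.
        parent = {INITIAL: None}
        queue = deque([INITIAL])
        while queue:
            state = queue.popleft()
            if not invariant(state):
                trace = []
                while state is not None:
                    trace.append(state)
                    state = parent[state]
                return list(reversed(trace))
            for nxt in successors(state):
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
        return None

    if __name__ == '__main__':
        cex = check()
        if cex is None:
            print('invariant holds in every reachable state')
        else:
            print('counterexample trace:')
            for s in cex:
                print(' ', s)

Running it prints a short interleaving in which both processes observe the lock as free before either one sets it, exactly the sort of corner case that is easy to miss with ad hoc testing.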
Just moves the errors up one level (Score:4, Insightful)
Guarding the guard (Score:3, Insightful)
PS: Good news, guys: psychology has lots of female students, so we might solve two problems in one go.
So.... (Score:3, Insightful)
Seriously, I had the same questions about formal, mathematical specifications when I first learned of them. In my own experience (mostly business software), most rework in software comes from a mismatch with the functional specification, or from things that were left out of the functional spec but should have been in there. There are still genuine programming or logic errors, but improved testing methods and the testing facilities of development frameworks have helped catch those bugs ever earlier in the development cycle.
But perhaps formal methods have their use for certain kinds of software.
A great day (Score:3, Insightful)
Re:Just moves the errors up one level (Score:4, Insightful)
Having a fully proven program like this is like having a test suite that covers 100% of the branches in your program. It can substantially reduce the errors that would otherwise slip by simply because nobody wrote a test for a particular condition.
Yes, every time you find a mismatch, you have to consider whether it is the program or the specification that is wrong. Still, the errors that you miss will be those for which the specification and the program are wrong in THE SAME WAY, which should be very uncommon.
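To make that contrast concrete, here is a small, self-contained sketch (the trace and names are invented, and it reuses the shape of the toy lock model from the earlier sketch): a hand-written test that drives only one interleaving of the two processes passes happily, while exhaustive exploration of all interleavings still finds the race. The bug would escape only if the invariant below were rewritten to contain the same mistake as the model.

    def invariant(state):
        # The requirement, stated independently of the transition relation:
        # the two processes are never both in the critical section.
        return not (state[0] == 'critical' and state[1] == 'critical')

    def scripted_test():
        # One "obvious" interleaving: process 0 finishes before process 1
        # even starts.  The buggy lock looks perfectly fine on this trace.
        trace = [
            ('idle', 'idle', 0),
            ('trying', 'idle', 0),
            ('granted', 'idle', 0),
            ('critical', 'idle', 1),
            ('idle', 'idle', 0),
        ]
        assert all(invariant(s) for s in trace)  # passes; the race is never exercised

    if __name__ == '__main__':
        scripted_test()
        print('scripted test passed, yet exhaustive exploration still finds the race')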
Truly helpful for optimizations (Score:5, Insightful)
Re:Just moves the errors up one level (Score:3, Insightful)
You assume that the code isn't generated directly from the model. Given that there are many tools that do just that (Rational Rose, et al.), I'd say the parent is correct. We've simply moved our correctness requirements up one level of abstraction.
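As a rough illustration of that point (this is only a sketch, not how Rational Rose or other model-driven tools actually generate code, and the door-controller model is made up): if the running implementation interprets the very same transition table that a checker explores, there is no hand-written translation left to drift out of sync, and the correctness question really does move up to the table itself.

    # Hypothetical model of a door controller as a transition table.
    TRANSITIONS = {
        ('closed',  'open_cmd'):     'opening',
        ('opening', 'fully_open'):   'open',
        ('open',    'close_cmd'):    'closing',
        ('closing', 'fully_closed'): 'closed',
        ('closing', 'obstacle'):     'opening',   # reopen on obstruction
    }

    class Controller:
        """Executable 'code generated from the model': it interprets the table."""
        def __init__(self):
            self.state = 'closed'

        def handle(self, event):
            # Unknown events leave the state unchanged.
            self.state = TRANSITIONS.get((self.state, event), self.state)
            return self.state

    def reachable_states():
        # A checker walks the very same table, so whatever it verifies is a
        # statement about the running controller, not about a hand translation.
        seen, frontier = {'closed'}, ['closed']
        while frontier:
            state = frontier.pop()
            for (src, _event), dst in TRANSITIONS.items():
                if src == state and dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
        return seen

    if __name__ == '__main__':
        door = Controller()
        print(door.handle('open_cmd'), door.handle('fully_open'))  # opening open
        print('reachable:', sorted(reachable_states()))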