2008 Turing Award Winners Announced
The Association for Computing Machinery has announced the 2008 Turing Award winners. Edmund M. Clarke, Allen Emerson, and Joseph Sifakis received the award for their work on an automated method for finding design errors in computer hardware and software. "Model Checking is a type of 'formal verification' that analyzes the logic underlying a design, much as a mathematician uses a proof to determine that a theorem is correct. Far from hit or miss, Model Checking considers every possible state of a hardware or software design and determines if it is consistent with the designer's specifications. Clarke and Emerson originated the idea of Model Checking at Harvard in 1981. They developed a theoretical technique for determining whether an abstract model of a hardware or software design satisfies a formal specification, given as a formula in Temporal Logic, a notation for describing possible sequences of events. Moreover, when the system fails the specification, the method can identify a counterexample to show the source of the problem. Numerous model checking systems have been implemented, such as Spin at Bell Labs."
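For the curious, the core idea is small enough to sketch. The following toy Python checker (illustrative only, not any of the laureates' systems; all names are made up) exhaustively explores every reachable state of a transition system and reports a counterexample trace when an invariant fails. Real model checkers handle full temporal logic and vastly larger state spaces; this sketch only checks a simple safety property.

```python
from collections import deque

def model_check(initial, successors, invariant):
    """Breadth-first exploration of every reachable state; returns
    None if the invariant holds everywhere, else a counterexample
    trace from an initial state to the first violating state found."""
    frontier = deque((s, (s,)) for s in initial)
    seen = set(initial)
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace  # the "show the source of the problem" part
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + (nxt,)))
    return None

# Toy model: two processes that "lock" a critical section badly.
# A state is (pc0, pc1), each "idle" or "crit"; no mutual exclusion.
def succ(state):
    pc0, pc1 = state
    moves = []
    if pc0 == "idle":
        moves.append(("crit", pc1))  # P0 enters without checking P1
    if pc1 == "idle":
        moves.append((pc0, "crit"))  # P1 enters without checking P0
    return moves

print(model_check([("idle", "idle")], succ,
                  lambda s: s != ("crit", "crit")))
# -> (('idle', 'idle'), ('crit', 'idle'), ('crit', 'crit'))
```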
A Dubious Prize (Score:2)
Re: (Score:3, Funny)
PARITY ERR
PARITY ERR
PARITY ERR
PARITY ERR
PARITY
HELLO BigJClark, MY NAME IS DOCTOR SBAITSO.
What about those... (Score:5, Funny)
http://science.slashdot.org/article.pl?sid=07/12/09/1356201 [slashdot.org]
That has to be worth some kind of reward.
Primary Source (Score:5, Informative)
"Model Checking" - such a disappointment (Score:1, Offtopic)
Despite the fact that the models we'd much rather be checking are all from Playboy/Hustler etc.
Surely we can get together enough of a crowd to award a prize for that?
Just moves the errors up one level (Score:4, Insightful)
So.... (Score:3, Insightful)
Seriously, I had the same questions about formal, mathematical specifications when I learned of them. In my own experience (mostly business software), most rework in software comes from a mismatch with the functional specification, or because of stuff that was left out of the functional spec but should have been in there. There are still actual programming or logic errors, but improved testing methods and the test functions of development frameworks have helped catch those bugs ever…
Re:Just moves the errors up one level (Score:5, Interesting)
The DoD has funded these efforts heavily since the 1970s, and computer science graduate students have been all over them for as long as I can remember. I've read way too many dull papers on the topic, as one amateur modern algebraist after another discovers the wonders of Hoare and rushes into print with his or her "unique" twist, all to the end of starting yet another unremarkable academic career.
Of course, the illusion of "perfect" software never fails to amuse me, since I remember an Interdata 32 overheating in the lab and making serious fixed-point arithmetic errors. Sort of grounds one in reality, doesn't it, when the machine can't add. Sure glad the program was declared "correct," though.
Re:Just moves the errors up one level (Score:4, Insightful)
Having a fully provable program like this is like having a test suite that checks 100% of the branches in your program. It can substantially reduce the errors that might otherwise slip by because nobody wrote a test for some condition.
Yes, every time you find a mismatch, you have to consider whether it is the program or the specification that is wrong. Still, the errors that you miss will be those for which the specification and the program are wrong in THE SAME WAY, which should be very uncommon.
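As a toy illustration of that point (a hypothetical example, not from the article): write the specification and the program independently, then compare them over the entire bounded input space, the way a model checker compares all states. Any disagreement flags a bug in one or the other; only a mistake shared by both slips through.

```python
def spec_leap(year):
    # Specification, written straight from the calendar rules:
    # divisible by 4, except centuries, except every 400 years.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def impl_leap(year):
    # "Program" with a plausible bug: it forgets the century rule.
    return year % 4 == 0

# Exhaustive comparison over a bounded domain, like covering 100%
# of the branches: every mismatch is either a program bug or a
# spec bug -- here, the program's.
mismatches = [y for y in range(1, 2401) if spec_leap(y) != impl_leap(y)]
print(mismatches[:3])  # -> [100, 200, 300]
```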
Re: (Score:3, Insightful)
You assume that the code isn't generated directly from the model. Given that there are many tools that do just that (Rational Rose, et al.), I'd say that the parent is correct. We've simply moved our correctness requirements up one level of abstraction.
Re:Just moves the errors up one level (Score:5, Interesting)
Once I went to a talk about applications of model checking to the verification of software. A programmer was constantly changing a state-based algorithm for call setup in a telephone switch, and was having trouble keeping it correct. Enter model checking. Two people wrote temporal specifications of call setup, and every night or so, they'd grind the model checker on the latest version of the code. No, that didn't prove the code was correct, but it did catch an enormous number of bugs in a tricky piece of concurrent code.
Oh. The programmer was Ken Thompson. The people applying the model checker were Gerard Holzmann (the designer of SPIN) and Margaret Smith.
I'm not saying the technology is applicable everywhere, but you gotta give Clarke, Emerson, and Sifakis a lot of credit for opening a good door.
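To give a flavor of what those temporal specifications buy you, here is a toy sketch in Python (nothing like SPIN's actual notation; the state machine and property are invented): enumerate every bounded execution of a tiny call-setup model and require that going off-hook is always eventually followed by a dial tone. Note that bounded exploration over-approximates "eventually" by also flagging traces cut off before the dial tone arrives; real checkers handle genuinely infinite behaviors.

```python
# Toy call-setup model: each state maps to its possible successors.
# The "offhook" self-loop models a switch that may stall forever.
STATES = {
    "idle":     ["offhook"],
    "offhook":  ["dialtone", "offhook"],
    "dialtone": ["idle"],
}

def traces(state, depth):
    """Yield every execution of exactly `depth` steps from `state`."""
    if depth == 0:
        yield (state,)
        return
    for nxt in STATES[state]:
        for rest in traces(nxt, depth - 1):
            yield (state,) + rest

def offhook_gets_dialtone(trace):
    # Temporal property: every "offhook" is followed, somewhere
    # later in the trace, by a "dialtone".
    return all("dialtone" in trace[i + 1:]
               for i, s in enumerate(trace) if s == "offhook")

violations = [t for t in traces("idle", 6) if not offhook_gets_dialtone(t)]
print(violations[0])  # a counterexample: the call stalls in "offhook"
```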
Re:Just moves the errors up one level (Score:4, Interesting)
I hope your prof failed you.
Re: (Score:2)
The GP says that some are and he cited an example.
So his prof can be a dweeb and you can be right at the same time.
Truly helpful for optimizations (Score:5, Insightful)
Not only that... (Score:2)
That said, you do have a point that formulating the models is not justified for a lot of routine code. Furthermore, in practice, as I understand…
Re: (Score:3, Informative)
The program became little more than a different expression of the formal specifications, such that it would be possible not only to check that a program had no "errors" and followed the specification exactly, but it would also be possible to have an automated process translate the formal specification into a program directly.
That depends on the problem. I can specify a square-root finding algorithm: the output multiplied by itself must be equal to the input (within some epsilon error tolerance). I doubt you can extract any useful implementation from that. A slightly more complicated example: I can specify a sorting algorithm, where the output must have all the same elements as the input, and for each index i of the output, the ith element must be less than or equal to the (i+1)th element. Trying to translate that (or its more formal equivalent)…
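Both of those specifications are easy to turn into executable checks, which is exactly the point: they can validate an answer without hinting at how to compute one. A quick sketch (function names invented; note the <= so that duplicate elements are allowed):

```python
import math
from collections import Counter

def sqrt_spec(x, y, eps=1e-9):
    """Square-root spec: y times itself equals x, within eps."""
    return abs(y * y - x) <= eps

def sort_spec(xs, ys):
    """Sorting spec: ys has exactly the same elements as xs (as a
    multiset), and each element is <= its successor."""
    return (Counter(xs) == Counter(ys) and
            all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1)))

print(sqrt_spec(2.0, math.sqrt(2.0)))          # True: checks, doesn't compute
print(sort_spec([3, 1, 2, 1], [1, 1, 2, 3]))   # True
print(sort_spec([3, 1, 2, 1], [1, 2, 3]))      # False: an element went missing
```

Neither function says how to find y or ys, which is why extracting an implementation from such a spec is the hard part.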
Nice job, but... (Score:2)
Re: (Score:1)
That was kind of my point, but I guess jokes about the halting problem and undecidability are a little over this crowd's head.
Re: (Score:2)
More importantly, can they determine whether your program is malicious or not?
That's similar to the halting problem, but more relevant to desktop computer users nowadays, given the lack of good program sandboxing done in a user-friendly manner.
Let's call it the "pwning problem". It's amazing, even today, that users are still expected to solve that problem regularly, somehow, and here's the good bit: usually…
Guarding the guard (Score:3, Insightful)
PS: Good news, guys: psychology has lots of female students, so we might solve two problems in one blow.
Re: (Score:1)
not only that, it would provide psychology students with a steady stream of abnormal personalities to study ...
2008? (Score:1)
VIS - An open source model checker (Score:2, Informative)
A great day (Score:3, Insightful)
Congrats to Ed. (Score:1, Interesting)
I dropped Emerson's class (Score:4, Interesting)
An open source model checker for Java (Score:5, Informative)
Just about time (Score:2, Informative)
It took a long time, but a lot of the major des…