2008 Turing Award Winners Announced

The Association for Computing Machinery has announced the 2008 Turing Award winners. Edmund M. Clarke, Allen Emerson, and Joseph Sifakis received the award for their work on an automated method for finding design errors in computer hardware and software. "Model Checking is a type of 'formal verification' that analyzes the logic underlying a design, much as a mathematician uses a proof to determine that a theorem is correct. Far from hit or miss, Model Checking considers every possible state of a hardware or software design and determines whether it is consistent with the designer's specifications. Clarke and Emerson originated the idea of Model Checking at Harvard in 1981. They developed a theoretical technique for determining whether an abstract model of a hardware or software design satisfies a formal specification, given as a formula in Temporal Logic, a notation for describing possible sequences of events. Moreover, when the system fails the specification, the technique can identify a counterexample that shows the source of the problem. Numerous model checking systems have been implemented, such as Spin at Bell Labs."
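The idea in the summary can be sketched in a few lines (a toy illustration with assumed names, not the API of Spin or any other real tool): enumerate every reachable state of a finite model breadth-first, test an invariant in each, and report the shortest counterexample trace when the invariant fails.

```python
from collections import deque

def check_invariant(init, successors, invariant):
    """Breadth-first search over every reachable state; returns None if
    the invariant holds everywhere, else the shortest counterexample
    trace from the initial state to a violating state."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            trace = []                     # rebuild the path for the report
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                            # every reachable state is safe

# Tiny model: a counter mod 8 that can either increment or double.
succ = lambda n: [(n + 1) % 8, (2 * n) % 8]
print(check_invariant(0, succ, lambda n: n != 5))   # → [0, 1, 2, 4, 5]
```

Real model checkers use far cleverer state representations (BDDs, partial-order reduction), but the counterexample trace here is exactly the kind of diagnostic the summary describes.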
  • I think even most slashdotters could get an award for passing the Turing Test! (In all seriousness, congratulations to the winners - I haven't read the details yet, but it's quite an accomplishment.)
    • What is it like in there?
      • Re: (Score:3, Funny)

        by Anonymous Coward
        PARITY ERR ... 85362 ???
        PARITY ERR ... 16376 ???
        PARITY ERR ... 56721 ???
        PARITY ERR ... 23423 ???
        PARITY ERR ... RECOVERED
        PARITY

        HELLO BigJClark, MY NAME IS DOCTOR SBAITSO.
    • Re: (Score:3, Funny)

      by Anonymous Coward
      That's interesting BalorTFL. Why do you think that?
  • by joey_knisch ( 804995 ) on Monday February 04, 2008 @05:44PM (#22299182)
    Russian chat bots that convinced men not only that they were women who were easy to score with, but that a credit card was needed up front.
    http://science.slashdot.org/article.pl?sid=07/12/09/1356201 [slashdot.org]

    That has to be worth some kind of reward.
  • Primary Source (Score:5, Informative)

    by jdschulteis ( 689834 ) on Monday February 04, 2008 @06:04PM (#22299490)
    At least DDJ isn't somebody's blog, but why not link directly to ACM's press release [acm.org]?
  • In The Slashdot Universe, "model checking" is all about logic analysis.

    Despite the fact that the models we'd much rather be checking are all from Playboy/Hustler etc.

    Surely we can get together enough of a crowd to award a prize for that?
  • by hoggoth ( 414195 ) on Monday February 04, 2008 @06:18PM (#22299712) Journal
    I took a computer science course on discrete logic with a professor who was very into "model checking". By the end of the course I finally understood that all we had done was move the logic, and the source of errors, from the computer program to the formal specification. The formal specification was just as rigorous and complex as a computer program. The program became little more than a different expression of the formal specification: not only could one check that a program had no "errors" and followed the specification exactly, one could also have an automated process translate the formal specification directly into a program. The professor proclaimed that we now had a system that could prove programs correct. I pointed out that we had not; we had only changed programming languages, to a mathematical one instead of a more typical computer programming language.

    • So.... (Score:3, Insightful)

      .... Did he flunk you?

      Seriously, I had the same questions about formal, mathematical specifications when I learned of them. In my own experience (mostly business software), most re-work in software comes from a mismatch with the functional specification, or because of stuff that was left out of the functional spec but should have been in there. There are still actual programming or logic errors, but improved testing methods and test functions of development frameworks have helped catch those bugs ever
    • by Bozdune ( 68800 ) on Monday February 04, 2008 @06:40PM (#22300014)
      This is precisely the problem with such ideas. As you said, if a program is sufficiently rigorously specified that an automated proof-of-correctness can be generated, then the specification of the program is obviously complex enough to require that it, too, must undergo testing to ensure that it is correct, and so on. We might end up with 2 = 2, but that doesn't help much if we wanted 3.

      The DoD has funded these efforts heavily since the 1970's, and computer science graduate students have been all over them for as long as I can remember. I've read way too many dull papers on the topic, as one amateur modern algebraist after another discovers the wonders of Hoare and rushes into print with his or her "unique" twist, all to the end of starting yet another unremarkable academic career.

      Of course, the illusion of "perfect" software never fails to amuse me, since I remember an Interdata 32 overheating in the lab and making serious fixed point arithmetic errors. Sort of grounds one in reality, doesn't it, when the machine can't add. Sure glad the program was declared "correct," though.

      • by Picolino ( 1233212 ) on Monday February 04, 2008 @07:03PM (#22300304)
        The purpose of model checking is rarely to specify the whole behavior of the program, but to ensure that some conditions are always true or false. Such a condition can be the absence of buffer overflows ... relatively easy to formulate, hard to discover ...
      • by Deef ( 162646 ) on Monday February 04, 2008 @07:39PM (#22300764)
        Just because this is true (that program correctness proofs are themselves very complex) doesn't mean that the technique is without value. If you have such a formal specification for a program, you now have supposedly identically operating code written in two different languages, which can be checked against each other for errors, hopefully automatically.

        Having a fully provable program like this is like having a test suite that checks 100% of the branches in your program. It can substantially reduce errors that otherwise might slip by due to having failed to write a test for various conditions.

        Yes, every time you find a mismatch, you have to consider whether it is the program or the specification that is wrong. Still, the errors that you miss will be those for which the specification and the program are wrong in THE SAME WAY, which should be very uncommon.
        • Re: (Score:3, Insightful)

          by quanticle ( 843097 )

          Still, the errors that you miss will be those for which the specification and the program are wrong in THE SAME WAY, which should be very uncommon.

          You assume that the code isn't generated directly from the model. Given that there are many tools that do just that (Rational Rose, et al.), I'd say that the parent is correct. We've simply moved our correctness requirements up one level of abstraction.

      • by rayadoCanyon ( 1233260 ) on Monday February 04, 2008 @08:13PM (#22301166)

        Once I went to a talk about applications of model checking to the verification of software. A programmer was constantly changing a state-based algorithm for call setup in a telephone switch, and was having trouble keeping it correct. Enter model checking. Two people wrote temporal specifications of call setup, and every night or so, they'd grind the model checker on the latest version of the code. No, that didn't prove the code was correct, but it did catch an enormous number of bugs in a tricky piece of concurrent code.

        Oh. The programmer was Ken Thompson. The people applying the model checker were Gerard Holzmann (the designer of SPIN) and Margaret Smith.

        I'm not saying the technology is applicable everywhere, but you gotta give Clarke, Emerson, and Sifakis a lot of credit for opening a good door.

    • > The formal specification was just as rigorous and complex as a computer program. The program became little more than a different expression of the formal specifications

      Have you got an example of this? The specification of what you want to do is generally significantly simpler than how you actually do it.
    • by El Cabri ( 13930 ) on Monday February 04, 2008 @07:50PM (#22300906) Journal
      The formal specification for, say, liveness of an interlocking system is a one-liner in a typical temporal logic notation, and you can apply it without significant modification to any number of different implementations, of any number of different applications, whatever their complexity. This is leverage: you put your trust in a very short piece of "code" (the formal spec for your property) and in the tool itself (which is the same kind of trust you put in your compiler), and in return you get trust in a huge, complicated piece of software that you wrote. Then you break down your testing into many independent property checks that each validate one aspect of one big piece of inter-mangled software. That's hugely powerful.

      I hope your prof failed you.
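      For the curious, the parent's one-liner style of property, e.g. []( request -> <> grant ) ("every request is eventually granted"), can be checked on a finite transition graph by hunting for a bad lasso: a reachable request state from which some path avoids grant forever. A toy sketch with assumed names, not any real tool's API:

```python
def violates_response(states, succ, request, grant):
    """Counterexample search for [](request -> <> grant): a request
    state violates the property iff some infinite path from it avoids
    grant forever, i.e. iff it reaches a cycle in the grant-free
    subgraph (a 'bad lasso')."""
    def grant_free(s):
        return [t for t in succ(s) if not grant(t)]

    def can_dodge_grant_forever(s):
        seen, onpath = {s}, {s}
        stack = [(s, iter(grant_free(s)))]
        while stack:                      # iterative DFS; cycle = lasso
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                stack.pop()
                onpath.discard(node)
            elif nxt in onpath:
                return True               # grant-free cycle found
            elif nxt not in seen:
                seen.add(nxt)
                onpath.add(nxt)
                stack.append((nxt, iter(grant_free(nxt))))
        return False

    return any(request(s) and not grant(s) and can_dodge_grant_forever(s)
               for s in states)
```

The specification itself stays one line; only the model changes from system to system, which is the leverage the parent is describing.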
      • by thaig ( 415462 )
        You're just saying that some specifications aren't as complicated as the implementation.

        The GP says that some are and he cited an example.

        So his prof can be a dweeb and you can be right at the same time.

    • by bunratty ( 545641 ) on Monday February 04, 2008 @08:08PM (#22301110)
      Your professor was correct. Yes, the computer can automatically write a program from the specification; on the other hand, that program probably isn't very efficient. You could instead write a deviously clever program that produces the same output, and when others don't trust the tricks you've used, you can prove conclusively that your program is 100% correct. The same technique can prove that the latest processor optimizations don't have bugs (think of the Pentium division problem).
    • You may not be able to prove that your design was what you originally had in mind, but as I understand model checking you can certainly prove certain useful properties - for instance, that eventually it returns to a particular "state" (something akin to not getting stuck in some loop). That's something you can't do just by reading and testing your code.

      That said, you do have a point that formulating the models is not justified for a lot of routine code. Furthermore, in practice, as I understa

    • Re: (Score:3, Informative)

      by Coryoth ( 254751 )

      The program became little more than a different expression of the formal specifications, such that it would be possible not only to check that a program had no "errors" and followed the specification exactly, but it would also be possible to have an automated process translate the formal specification into a program directly.

      That depends on the problem. I can specify a square root finding algorithm: the output multiplied by itself must be equal to the input (within some epsilon error tolerance). I doubt you can extract any useful implementation from that. A slightly more complicated example: I can specify a sorting algorithm: the output must have all the same elements as the input, and for each index i of the output the ith element must be less than or equal to the (i+1)th element. Trying to translate that (or its more formal equivalent) i
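      The parent's sorting specification can be written down executably (using <= between adjacent elements so duplicates are allowed), and the only implementation it yields "for free" is generate-and-test: correct by construction, but factorially slow. A sketch with hypothetical helper names:

```python
from itertools import permutations
from collections import Counter

def is_sorted_spec(inp, out):
    """The sorting specification, made executable: the output is a
    permutation of the input, and adjacent elements are in order."""
    return (Counter(out) == Counter(inp) and
            all(out[i] <= out[i + 1] for i in range(len(out) - 1)))

def sort_from_spec(xs):
    """The only program the spec hands you for free: enumerate
    candidate outputs until one satisfies it.  Correct, but O(n!)."""
    return next(list(p) for p in permutations(xs)
                if is_sorted_spec(xs, list(p)))

print(sort_from_spec([3, 1, 2]))   # → [1, 2, 3]
```

The spec is a fine checker and a terrible algorithm, which is exactly the gap the parent is pointing at.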

  • ...can they determine whether my program will halt or not?
    • Sure! They'll have the answer any day now... just, waiting on the result...
    • by Surt ( 22457 )
      Not them, but MS has software that will do this for all software not designed to defeat it; since I presume yours was not, it will probably work for your program.
    • by El Cabri ( 13930 )
      You can prove that a program halts, or doesn't, provided that you understand the program, and in particular the invariants that will determine that it halts. You just cannot write a program that will answer the question for an arbitrary other program. Typically you will design the proof as you design the program. That is the point made in Dijkstra's "A Discipline of Programming" (published in the 70s). This kind of thing has been forgotten since then, and hackers have since built the software ecosyst
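      A minimal example of what the parent means: you prove that one specific loop halts by exhibiting a variant (here b) that is a non-negative integer and strictly decreases on every iteration, even though no program can decide halting in general. A Dijkstra-style sketch:

```python
def gcd(a, b):
    """Euclid's algorithm on non-negative integers.  Termination
    argument: the variant b is a non-negative integer, and each
    iteration replaces it with a % b, which is strictly smaller than b.
    A value that strictly decreases but cannot go below 0 cannot
    decrease forever, so the loop must halt."""
    assert a >= 0 and b >= 0
    while b:
        a, b = b, a % b        # new b = a % b < old b, and stays >= 0
    return a

print(gcd(48, 18))   # → 6
```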
      • by algae ( 2196 )

        You just cannot write a program that will answer the question on an arbitrary other program.

        That was kind of my point, but I guess jokes about the halting problem and undecidability go a little over this crowd's head.

    • by TheLink ( 130905 )
      In theory who knows. In practice it will - someone or something will stop it eventually.

      More importantly can they determine whether your program is malicious or not?

      That's similar to the halting problem, but more relevant to desktop computer users nowadays, given the lack of good program sandboxing that's done in a user friendly manner.

      Let's call it the "pwning problem". It's amazing that even to this day users are still expected to solve that problem regularly, somehow, and here's the good bit: usually
  • Guarding the guard (Score:3, Insightful)

    by Mr2cents ( 323101 ) on Monday February 04, 2008 @06:23PM (#22299800)
    Model checking surely has its merits, but how are you going to check the model itself for errors? If I have learned one thing writing software, it is that there is a difference between what people want and what people say they want. There is not, and never will be, software that can check whether those two are the same. Actually, I'm beginning to think there might be a lot to be gained if computer scientists teamed up with psychologists to get better specifications out of customers.

    PS: Good news, guys: psychology has lots of female students, so we might solve two problems in one blow.
    • PS: Good news, guys: psychology has lots of female students, so we might solve two problems in one blow.

      not only that, it would provide psychology students with steady stream of abnormal personalities to study ...

      • by amh131 ( 126681 )
        I gotta say that in my fairly modest experience I never noticed a lack of "abnormal" personalities in any psychology class. Made me wonder about the value of all those tests that the grad students made the undergrads take given that everything seemed so darn far away from the median ...
    • by El Cabri ( 13930 ) on Monday February 04, 2008 @07:36PM (#22300722) Journal
      You're not expected to model the full behavior of your program. Model checking is useful for testing individual properties such as bounds on resource allocation, non-deadlock of thread synchronization, or security-related invariants.
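      As a toy sketch of the non-deadlock case (assumed names and encoding, not any real tool's API): model two threads that acquire two locks in opposite order, explore every interleaving, and flag reachable states where an unfinished thread exists but no transition is enabled.

```python
# Thread 0 takes lock A then B; thread 1 takes B then A.  A state is
# (pc0, pc1, holder_A, holder_B); pcs: 0/1 = acquire, 2 = release, 3 = done.
def successors(state):
    pc0, pc1, a, b = state
    out = []
    for tid in (0, 1):
        pc = pc0 if tid == 0 else pc1
        order = ('A', 'B') if tid == 0 else ('B', 'A')
        na, nb = a, b
        if pc in (0, 1):
            lock = order[pc]
            held = a if lock == 'A' else b
            if held is not None:
                continue                   # blocked waiting for the lock
            if lock == 'A':
                na = tid
            else:
                nb = tid
            npc = pc + 1
        elif pc == 2:                      # release everything we hold
            na = None if a == tid else a
            nb = None if b == tid else b
            npc = 3
        else:
            continue                       # thread already finished
        out.append((npc, pc1, na, nb) if tid == 0 else (pc0, npc, na, nb))
    return out

def find_deadlocks(init):
    seen, todo = {init}, [init]
    while todo:                            # explore every interleaving
        s = todo.pop()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return [s for s in seen
            if (s[0] < 3 or s[1] < 3) and not successors(s)]

print(find_deadlocks((0, 0, None, None)))   # → [(1, 1, 0, 1)]
```

The single deadlock found is the classic one: each thread holds its first lock and waits forever for the other's.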
  • Then why is the article titled 2007 Turing Award Winners Announced?
  • Fabio Somenzi at the University of Colorado contributes to an open source tool to perform model checking called VIS (Verification Interacting with Synthesis) available at: http://vlsi.colorado.edu/~vis/ [colorado.edu]. I recommend anyone interested in this to check it out. It can perform model checking on Verilog modules directly.
  • A great day (Score:3, Insightful)

    by El Cabri ( 13930 ) on Monday February 04, 2008 @07:13PM (#22300418) Journal
    I hope this will help push the use of formal methods, particularly in software development. There is no way that software development can be carried out in the massively distributed / multi-core era using the test-and-tweak, black magic approach that has so far dominated software creation and that has led to the big mess that we are in.
    • by PPH ( 736903 )
      True. But as these methods, and more to the point, the tools, become more intelligent, eventually they will pass the Turing Test. In other words, the output will be indistinguishable from that of a human, with all the logic errors, missing parentheses, and unreachable code that we currently expect.
    • by Coryoth ( 254751 )

      I hope this will help push the use of formal methods, particularly in software development.
      Sadly, I doubt it. Just read the comments here, or on any other similar article on Slashdot. There are a great many would-be programmers who proudly reject any sense of formality. In discussions on Slashdot I've had a hard enough time pitching lightweight formal methods with JML or similar. I don't quite fully understand where the resistance comes from, but it is very firmly ingrained.
  • Congrats to Ed. (Score:1, Interesting)

    by Anonymous Coward
    I was a PhD student of Ed's, so this is great news for being at a "Turing award distance" of 1. The model checking technique has indeed come a long way since its inception in 1981. It is more successful in hardware verification than in software verification, so much so that most chip makers use it in one form or another. To those who say that the specification becomes as complex as the original program itself, there is some truth to that. However, one can also start with simple specifications, like After a request i
  • by omnirealm ( 244599 ) on Monday February 04, 2008 @08:12PM (#22301156) Homepage
    I signed up for Emerson's graduate course on model checking and reactive systems a couple of years back. The first day of class, he walked in 15 minutes late and said something like, "Welcome to my class. No homework or tests. Everyone gets an 'A'. Let's see what kind of papers we can come up with." He then dived right into some intense theory as if he were casually picking up a conversation he left off the semester before. I spent the next few hours feeling like total deadweight (several other grad students just sat there silent the whole time, with expressions on their faces like deer caught in headlights). I wound up just dropping the class; it took me another year of grad courses to get all the background theory I needed to just keep pace, and I hate wasting time, even for an easy 'A'. Too bad I graduated just before he taught his class again; I would have given it another shot before leaving UT Austin.
  • by martinde ( 137088 ) on Monday February 04, 2008 @08:22PM (#22301246) Homepage
    NASA released an open source model checker for Java called JPF [sourceforge.net]. It's a JVM implemented in Java that can do model checking on "generic" Java apps, finding deadlocks and things like that.
  • Just about time (Score:2, Informative)

    by leoval ( 827218 )
    I have been hearing about formal verification for hardware design for the last 10 years, and at some point I had the opportunity to do minor work on one of the formal verification products that my company produces. The technology is really interesting and, contrary to the uninformed opinion of other slashdotters in this thread, it delivers tangible results and provides a clear advantage over classic verification techniques (vector testing, testbenches and so forth).

    It took a long time but a lot of the major des
