Hardware

Clockless Computing 342

ender81b writes "Scientific American is carrying a nice article on asynchronous chips. In general, the article advocates that eventually all computer systems will have to move to an asynchronous design. The article focuses on Sun's efforts but gives a nice overview of the general concept of asynchronous chip design." We had another story about this last year.
  • by Mifflesticks ( 473216 ) on Wednesday July 17, 2002 @02:58PM (#3903667)
    For any large project (such as an MPU), using asynchronous logic instead of synchronous for the entire thing means it goes from being "merely" really-really-hard to damn-near-impossible.
  • by Anonymous Coward on Wednesday July 17, 2002 @02:59PM (#3903680)
    Yet another old idea revived. The Amiga's Zorro expansion bus was asynchronous and plug 'n' play in the 80s (although the rest of the machine was clocked).
  • Explanation, sorta (Score:3, Interesting)

    by McCart42 ( 207315 ) on Wednesday July 17, 2002 @02:59PM (#3903681) Homepage
    To clear a few things up: just because a processor/motherboard is "clockless" does not mean it won't be able to tell time. It can still use the 60 Hz AC signal for ticks.

    This is really cool. I was learning a little about asynchronous systems in my Logic Design and Computer Organization class last fall... they seemed pretty cool on a small scale; however, they can get really difficult to work with when you're dealing with something as complex as a processor.
  • Return of the 68000? (Score:2, Interesting)

    by vanyel ( 28049 ) on Wednesday July 17, 2002 @03:01PM (#3903700) Journal
    Wasn't the 68000 asynchronous?
  • by McCart42 ( 207315 ) on Wednesday July 17, 2002 @03:15PM (#3903787) Homepage
    After reading the article, I have to wonder why asynchronous processors (or smaller logic devices, such as ALUs) haven't been considered before. The ideas have certainly been around for a while--and in fact, asynchronous logic is in principle simpler than synchronous logic, in that there's no clock pulse to distribute. The only conclusion I can reach is that while asynchronous designs may be "simpler" in theory, they are much more difficult to work with in practice.

    Here's an example for those of you who have worked with logic design: try creating the logic for a simple vending machine that dispenses the product whenever a combination of coins (triggered by 3 switches: quarter, dime, and nickel) adds up to $0.50. Which would you prefer to use--synchronous or asynchronous logic? When I did this example I got myself stuck by using asynchronous logic, because while it meant fewer states (all states above $0.50 were treated the same), it also meant lots of added complexity that I didn't need for the problem at hand. (A rough sketch of the synchronous version is below.)

    I foresee lots of bugs, but if they can pull this off, more power to them.
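    For the curious, here's roughly what the synchronous version of that exercise looks like as a little Python model. The coin values and the "clamp everything at $0.50" behaviour are just my reading of the exercise, so treat it as a sketch rather than the canonical answer:

        # Toy model of the synchronous vending-machine state machine described above.
        # State = total cents inserted so far, clamped at 50 (all totals at or above
        # $0.50 are treated the same). At most one coin switch fires per clock tick;
        # "dispense" is asserted when the state reaches 50.

        COIN_VALUES = {"quarter": 25, "dime": 10, "nickel": 5}

        def next_state(state, coin):
            """Advance the accumulated total by one clocked coin event."""
            if coin is not None:
                state = min(50, state + COIN_VALUES[coin])
            return state

        def run(coin_sequence):
            state = 0
            for tick, coin in enumerate(coin_sequence):
                state = next_state(state, coin)
                dispense = (state == 50)
                print(f"tick {tick}: coin={coin}, total={state}, dispense={dispense}")
                if dispense:
                    state = 0   # reset after vending

        run(["quarter", "dime", None, "dime", "nickel", "quarter", "quarter"])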
  • by Baki ( 72515 ) on Wednesday July 17, 2002 @03:16PM (#3903799)
    In a way, yes. If I remember correctly, its memory-addressing and I/O bus system was asynchronous (not the clock of the CPU itself), meaning no 'wait states'. It would request a memory location and react as soon as the memory came up with the result. I forget the details, though.

  • by AstroJetson ( 21336 ) <.gmizell. .at. .carpe-noctum.net.> on Wednesday July 17, 2002 @03:31PM (#3903925) Homepage
    It's dead accurate. By law, the number of zero crossings on an AC line must be 10368000 every day (60 Hz x 2 crossings per cycle x 86,400 seconds per day = 10,368,000). If there are too many in the morning, they have to make up for it that afternoon.

    But I think an asynchronous computer would still use an RTC to keep track of calendar time. It has to keep time even when the machine is turned off.
  • by Dielectric ( 266217 ) on Wednesday July 17, 2002 @03:32PM (#3903936)
    A while back I saw a whitepaper on an asynchronous design, but it was being done for low-power applications. Basically, you had two lines for each bit. Condition 00 wasn't allowed and could be used to detect faults; 10 was one, 01 was zero, and 11 was idle. Nothing would happen until one of the lines dropped, so there was no clock but the CPU still knew when it was time to do something. It was a fully static design where no power was being used unless there was some user interaction. You could run it off a few nanoamps, so a piece of citrus fruit would run it until the fruit rotted. Simple chemistry.

    I think this was from Seiko-Epson. I might have the states screwed up but that's the idea.
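    For anyone who wants to play with it, here's a tiny Python sketch of that two-wires-per-bit (dual-rail) scheme exactly as I described it above -- and again, I may have the state assignments backwards:

        # Dual-rail decoding per the convention described above:
        #   (1, 0) -> logical 1, (0, 1) -> logical 0,
        #   (1, 1) -> idle (no new data), (0, 0) -> illegal, treated as a fault.
        # A receiver only acts when one of the lines drops out of the idle state,
        # so no clock is needed to know that data has arrived.

        def decode(hi, lo):
            pair = (hi, lo)
            if pair == (1, 0):
                return "1"
            if pair == (0, 1):
                return "0"
            if pair == (1, 1):
                return "idle"
            return "fault"      # (0, 0): both wires low should never happen

        for wires in [(1, 1), (1, 0), (1, 1), (0, 1), (0, 0)]:
            print(wires, "->", decode(*wires))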
  • by MikeD83 ( 529104 ) on Wednesday July 17, 2002 @03:38PM (#3903991)
    An industry-standard benchmark (the SPEC CPU benchmark, for example) would be used.
    This of course has problems, because a lot of factors go into the speed of a computer. For instance, motherboard chipsets will become increasingly important.
  • by AstroJetson ( 21336 ) <.gmizell. .at. .carpe-noctum.net.> on Wednesday July 17, 2002 @03:45PM (#3904026) Homepage
    Exactly right. Nowadays, most of the Motorola embedded processors (many of which use 68000 or 68020 cores) can generate their own DTACK signals. For example, the 68302 has four CS (chip select) lines that you can internally map to whatever address ranges you want. You specify how many wait states are required and the DTACK and CS signals get generated automagically. This cuts down dramatically on on-board glue logic and address-decoding logic, which is important for (typically small) embedded designs.
  • by neongenesis ( 549334 ) on Wednesday July 17, 2002 @03:50PM (#3904059)

    The famous PDP-6 was asynch logic. It made a very fast machine out of very few transistors, but it was a nightmare to maintain. The follow-on PDP-10 was synchronous logic.

    There must be some history out there somewhere of the problems DEC had with the asynchronous logic. Any old MIT research notes?

  • Clockless issues (Score:2, Interesting)

    by KeggInKenny ( 593779 ) on Wednesday July 17, 2002 @03:52PM (#3904075) Journal
    Despite the marketing problems associated with clockless machines (as a one-time computer retail sales guy, it was easy to talk a first-time buyer into upgrading from the 800MHz system to the otherwise identical 900MHz one for a $50 difference), there are some other asynchronous aspects which may throw a wrench into design.

    First, if the chip is asynchronous, there must be a way to signal that the chip is ready for the next instruction - i.e. a "ready" line (a rough sketch of that kind of handshake is below). Similarly, since everyone is pumped about the chips of the last few years which execute multiple instructions simultaneously in different parts of the chip (think pipelines), there would have to be several of these signal lines. This will require additional logic circuits simply to decode these lines and figure out what instructions can be executed, where they can, when they can, and so on. A related problem occurs with large, time-consuming instructions which require the majority of the chip to execute. It would be difficult to implement a system which is by its nature asynchronous with a system of semaphores and time-estimating logic.

    And how much faster would this type of design actually be than a RISC-based chip with a fast floating point unit? Since flops are typically the operation bottleneck in a new breed of processor, are we really saving time? I realize that the whole premise of an asynchronous chip is that in traditional clocked chips we have to wait a finite amount of time (usually determined by the most complex operation) every cycle, even if the op can be completed in less time. But personally I don't think that we'll see these on the market for at least ten years (well... maybe in a few microcontrollers, but only for really, really specialised tasks). Still, it's good to see a few novel ideas (or in this case, an old idea applied on a completely different scale) in this industry of re-packaging buzzwords.
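    To make the "ready line" idea concrete, here's a very rough Python model of the usual request/acknowledge handshake between two stages. The queue and the random delay are just stand-ins for real wires and real compute latency; the point is that completion signalling replaces the clock:

        # Minimal model of a request/acknowledge handshake between two pipeline
        # stages. The sender presents data and waits for the acknowledge; the
        # receiver takes as long as it needs. No clock: each side simply waits
        # for the other to signal.

        import threading, queue, time, random

        data_bus = queue.Queue(maxsize=1)   # stands in for the bundled data wires

        def sender(items):
            for value in items:
                data_bus.put(value)          # present data ("raise REQ")
                data_bus.join()              # block until the receiver acknowledges
                print(f"sender: got ack for {value}")

        def receiver(count):
            for _ in range(count):
                value = data_bus.get()       # waits as long as it takes (no clock)
                time.sleep(random.uniform(0.01, 0.05))  # variable "compute" latency
                print(f"receiver: consumed {value}")
                data_bus.task_done()         # signal completion ("raise ACK")

        items = list(range(5))
        t = threading.Thread(target=receiver, args=(len(items),))
        t.start()
        sender(items)
        t.join()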
  • by flanagan ( 35585 ) on Wednesday July 17, 2002 @04:47PM (#3904553)
    The problem with slapping active cooling on an asynchronous chip is that the chip will *stop* working if it gets too cold, just like if it gets too hot.

    Here's why:

    There are two main aspects to consider in an asynchronous chip: gate delay (the time it takes a gate's output to respond to a change on its inputs) and propagation delay (the time it takes for a signal to travel from one gate to the next).

    Asynchronous logic works by carefully arranging the length and geometry of the wire traces between gates, so that the signals coming from those traces all hit their target gate (nearly) simultaneously.

    The problem is that gate delays are affected by temperature differently than propagation delays. They both get faster with cooling, and slower with heating, but they do so nonlinearly, and at *different rates*. And asynchronous logic requires those rates to be carefully matched. Change the rates too much, and the chip breaks.

    Synchronous logic doesn't have this problem (as much), because the whole point of latching everything between clock cycles is to give the slower signals time to catch up to the faster ones, and to force them all to wait up until everybody is ready (at which point the clock releases the latch, and the next cycle starts). But this has the downside of the extra wiring, circuitry, and power required to run all the clock lines and latches.
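    A toy calculation of what I mean by the rates drifting apart (the numbers are entirely made up): a matched-delay path with comfortable margin at nominal temperature can lose that margin once gate delay and wire delay stop scaling together.

        # Toy numbers only: suppose the data path is 10 gate delays + 5 wire delays,
        # matched by a delay line built from 14 gate delays (100 ps of margin at
        # nominal). If cooling scales gate delay by 0.6x but wire delay only by
        # 0.9x, the all-gate delay line now finishes before the data does.

        def path_delay(gates, wires, gate_d, wire_d):
            return gates * gate_d + wires * wire_d

        for label, gate_scale, wire_scale in [("nominal", 1.0, 1.0),
                                              ("cold",    0.6, 0.9)]:
            gate_d = 100e-12 * gate_scale    # 100 ps nominal gate delay
            wire_d = 60e-12 * wire_scale     # 60 ps nominal wire delay
            data   = path_delay(10, 5, gate_d, wire_d)
            match  = path_delay(14, 0, gate_d, wire_d)   # delay line is all gates
            print(f"{label}: data path {data*1e12:.0f} ps, "
                  f"matching delay {match*1e12:.0f} ps, "
                  f"margin {(match - data)*1e12:.0f} ps")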

  • Real-Time (Score:3, Interesting)

    by Amazing Quantum Man ( 458715 ) on Wednesday July 17, 2002 @05:28PM (#3904837) Homepage
    How would an asynchronous processor affect determinism requirements, such as those of a hard real-time system?
  • by BitMan ( 15055 ) on Wednesday July 17, 2002 @06:20PM (#3905132)

    Unless I missed it, there was no mention of Theseus Logic's [theseus.com] Null Convention Logic [theseus.com] at all, which is a real disappointment. Theseus has one of the few approaches that doesn't require a PhD-level education to understand and design with.
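    For anyone who hasn't run into it: the basic building block in NCL is the threshold gate with hysteresis, and the simplest one (a TH22 gate, which asserts its output only when both inputs are asserted and deasserts it only when both are deasserted) behaves like a Muller C-element. Here's a rough behavioural sketch in Python, from my own understanding rather than anything in the article:

        # Behavioural sketch of an NCL-style TH22 threshold gate (a.k.a. a Muller
        # C-element): the output switches to 1 only when *all* inputs are 1,
        # switches to 0 only when *all* inputs are 0, and otherwise holds its
        # previous value. That hysteresis is what lets the circuit distinguish a
        # complete DATA wavefront from a complete NULL wavefront without a clock.

        class TH22:
            def __init__(self):
                self.out = 0           # state held between evaluations (hysteresis)

            def eval(self, a, b):
                if a == 1 and b == 1:
                    self.out = 1       # complete DATA wavefront observed
                elif a == 0 and b == 0:
                    self.out = 0       # complete NULL wavefront observed
                # otherwise: hold the previous output
                return self.out

        g = TH22()
        for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0)]:
            print(f"inputs {a},{b} -> output {g.eval(a, b)}")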
