Hardware

Asynchronous Logic: Ready For It? 192

prostoalex writes "For a while academia and R&D labs have explored the possibilities of asynchronous logic. Now Bernard Cole from Embedded.com tells us that asynchronous logic might receive more acceptance than expected in modern designs. The main advantages, as the article states, are 'reduced power consumption, reduced current peaks, and reduced electromagnetic emission', to quote a prominent researcher from Philips Semiconductors. Earlier, Bernard Cole wrote a column on self-timed asynchronous logic."
  • alright, question... (Score:2, Interesting)

    by Anonymous Coward
    so how long do you think this will take before it's implemented on any sort of a large scale?
    • by darn ( 238580 ) on Monday October 21, 2002 @11:20AM (#4496247)
The largest asynchronous project (to my knowledge) is the MiniMIPS [caltech.edu] that was developed at Caltech in 1997 and has 1.5 M transistors. It was modelled after the R3000 MIPS architecture.
      The best-selling large-scale asynchronous circuit seems to be a microcontroller that Philips [philips.com] developed and used in a pager series.
    • The Vax 8600 CPU (Score:3, Informative)

      by Tjp($)pjT ( 266360 )
Is the earliest example I can think of. Roughly 50 ECL 10000 gate arrays. It was only synchronous at the "edges", like bus interfaces. Circa 1983. I was on the simulation tool design team. Loads of fun on the skew analysis portion of the simulation. You have to account for all the "local" varieties of skew (within a cell, within a quadrant of the chip, within the chip overall, and more), and the lead- and trace-generated skew as well.
  • by scott1853 ( 194884 ) on Monday October 21, 2002 @10:20AM (#4495651)
    Isn't that when your boss gives you several conflicting ideas on how he wants a product to be implemented, all at the same time?
  • by sp00nfed ( 518619 ) on Monday October 21, 2002 @10:21AM (#4495665) Homepage
Don't the Pentium 4 and most CPUs nowadays have at least some elements of asynchronous logic? I'm pretty sure that certain circuits in most current CPUs are implemented asynchronously.

    Isn't this the same as having a CPU without a timer? (i.e. no MHz/GHz rating).

    • by Xeger ( 20906 ) <slashdotNO@SPAMtracker.xeger.net> on Monday October 21, 2002 @10:41AM (#4495865) Homepage
      Within the space of a single clock cycle, the Pentium (or other designs) might make use of asynchronous logic, but (and this is the important bit) the asynchronicity only exists within the domain of the CPU. The external interface to the CPU is still governed by a clock: you supply the CPU with inputs, trigger its clock, and a short (fixed) while later it supplies you with outputs. Asynchronous logic removes the clock entirely.
      • Within the space of a single clock cycle, the Pentium (or other designs) might make use of asynchronous logic
        Okay, this is silly. Within a single clock cycle, all logic is asynchronous. Circuits are designed to have an upper bound which is less than whatever the local clock limit is, but the above statement is true by definition.
        • I don't see how your statement is in any way different from mine. I brought up the fact that CPUs make use of asynchronous logic within a single clock cycle, and explained that this does not make them asynchronous machines.

          I think you parsed my sentence incorrectly.

          I could say "The US might be a free country, but we still have laws to protect others' freedoms from impinging on our own."

          Does this mean that I am in doubt, as to whether the US is a free country? No; the US is a free country by definition, the Constitution having defined it as such.

          The "Might...but" construct is frequently used in the English language to introduce a fact, and then qualify it.
But there is a huge leap from single-cycle circuits to the external interface. All synchronous CPUs are asynchronous between clocks (looks like we were both saying this), but there's a lot of room for asynchronous circuits that have a threshold >1 CPU clock, yet small enough for the task. It's very unlikely that we'll get fully asynchronous chips in the near future, simply because the vast majority of the tools, methodologies, etc. are for synchronous designs. But having asynchronous circuits doing some work on a synchronous chip is much more likely. (And I think that is a better path anyway.)
            • Ah; I think I see what you're getting at. IANEE (I Am Not An Electrical Engineer), so forgive me if I'm asking a naive question, but...

              For clarification, can you give a specific example of a situation where an asynchronous circuit would be useful as part of a clocked chip?

              If it's simply a matter of an async circuit helping a more complex synchronous machine do its job, then the machine's output is ultimately still tied to the clock, no?

Perhaps you're saying that it's possible to build async circuits to accomplish discrete computational tasks that, while they take longer than a single clock cycle to complete, still get the job done faster than an equivalent synchronous circuit? In this case, I might issue an asynchronous instruction which "completes" in one clock cycle, but the results only become available several cycles later, in a special register. This would call for the addition of special async instructions to a machine's instruction set, and all the accompanying compiler antics.
  • Kurzweil (Score:3, Insightful)

    by Anonymous Coward on Monday October 21, 2002 @10:22AM (#4495681)
    Brains use async logic elements. Maybe the only way to achieve good artificial intelligence with practical speeds is with async logic. With a cluster of async nodes you can build a physical simulation of neural nets. Consider having a small array of async nodes simulating parts of a neural net at a time. That would be a lot faster than what would be possible with ordinary sequential processing. Async logic might very well bring large neural net research into practicality.
    • Re:Kurzweil (Score:3, Insightful)

      by pclminion ( 145572 )
      Brains use async logic elements.

      First off, there's no proof of this. The brain certainly appears to be asynchronous, but there's no evidence to suggest that there isn't some kind of internal, distributed clocking mechanism that keeps the individual parts working together. There's not enough evidence either way.

      Async logic might very well bring large neural net research into practicality.

      Why does everyone seem to think that ANNs are the way toward "true AI?" ANNs are superb pattern matching machines. They can predict, and are resilient to link damage to some degree. But they do not think. ANNs have nothing to do with what's really going on in a biological brain, except that they are made of many interacting simple processing elements. The biological brain inspired ANN, but that's all.

      • Re:Kurzweil (Score:2, Interesting)

        by LordKane ( 582228 )
        "ANNs have nothing to do with what's really going on in a biological brain, except that they are made of many interacting simple processing elements."

I'm not sure quite how this can be. ANNs are inspired by and based on the biological brain, but they are not related? ANNs are just pattern matchers, and our brains are nothing like that? I beg to differ. ANNs are very similar to our brains. Humans are giant pattern matchers. How do we learn? Does stuff just pop into our heads and !BANG! we know it? No, we discover it or are told first. Science is based on being able to reproduce the results of an experiment, matching a cause->effect pattern. Speech is matching sound to meanings. Not everyone equates a sound to the same meaning, as the patterns that people learned can be different. Animals are great examples of pattern machines. How are animals trained? Most often, by positive and negative reinforcement, which is essentially conditioning. We do the same thing to our ANNs: you get a cookie if it's right, nothing if it's wrong. The close matches are kept, the others are thrown out. So, in what way are ANNs nothing like a biological brain? Our ANNs today are tiny and don't do too much compared to the standard of a brain, which is layers upon layers of interlaced patterns. ANNs use simple structures as the basis for their overall structure; are our brain cells not very similar? To me, it seems that they are incredibly similar, just on different scales.
        • Re:Kurzweil (Score:5, Insightful)

          by pclminion ( 145572 ) on Monday October 21, 2002 @12:13PM (#4496836)
          I've had this argument many times. First, there's lots of evidence that biological brains are heavily chaotic, which ANNs traditionally are not. Second, brains are extremely recurrent in ways that could never be simulated by traditional computers -- there are simply too many links. Third, the human brain is not based merely on reward and punishment. When I sit in a chair at night, pondering whether I agree or not with what Bush has done today, there's no clear source of reward or punishment. Yet, at the end of the day, my brain has changed. ANNs have no ability to self-contemplate and change in this way.

          Fourth, when an ANN is trained, every weight in the network is changed. In a biological brain, particular links form and are destroyed, but learning is not a global process. I'm not a neuroscientist, so if I'm wrong, someone please point that out.

          Fifth, you can ask a human why he/she came to a particular conclusion. You can't ask an ANN why it reached a particular conclusion. Sometimes, analysis is possible on smaller networks. But for multi-layer networks with thousands of hidden units, this becomes impossible. I really don't think it's a question of computational power. I have a deep sense that somehow, biobrains are fundamentally different from their mathematical cousins.

          I won't claim that ANNs have no place in thinking machines. But having worked with them extensively, I feel that, although they are extremely valuable computational tools, they are not a magic wand. Many pattern recognition and data organization tasks can be much better performed by traditional symbolic algorithms.

      • Re:Kurzweil (Score:5, Interesting)

        by imadork ( 226897 ) on Monday October 21, 2002 @12:02PM (#4496667) Homepage
        Why does everyone seem to think that ANNs are the way toward "true AI?" ANNs are superb pattern matching machines. They can predict, and are resilient to link damage to some degree. But they do not think. ANNs have nothing to do with what's really going on in a biological brain, except that they are made of many interacting simple processing elements. The biological brain inspired ANN, but that's all.

        I couldn't agree more. I remember reading a comparison between the current state of AI and the state of early Flight Technology. (it may have even been here, I don't recall. I make no claim to thinking this up myself. Perhaps someone can point me to a link discussing who first thought of this?)

        One of the reasons that early attempts at flight did not do well is because the people designing them merely tried to imitate things that fly naturally, without really understanding why things were built that way in the first place. So, people tried to make devices with wings that flapped fast, and they didn't work. It wasn't until someone (Bernoulli?) figured out how wings work - the scientific principles behind flight - that we were able to make flying machines that actually work.

        Current AI and "thinking machines" are in a similar state as the first attempts to fly were in. We can do a passable job at using our teraflops of computing power to do a brute-force imitation of thought. But until someone understands the basic scientific principles behind thought, we will never make machines that think.

    • Actually, it's the only practical way to create a fairly sophisticated computation unit.

      The problem is, to have a synchronous chip, there has to be synchronicity.

Problem: The more transistors a chip has, the smaller the production process has to be to keep production profitable.
      The smaller the process, the slower the signal travels.

      So you'll get a system where the clock signal can't be synchronous (or it'll be terribly complicated to distribute the signal). Hence, it'll have to be asynchronous.

      Not necessarily a completion-based asynchronous logic, maybe a multi-core like the current IBM PowerPCs.
      But async-logic actually seems to be the easier way as SMP doesn't scale perfectly.

      Furthermore, the smaller the structures and the higher the clock, the larger the clock driver has to be.

IIRC, about 1/3 of a current chip is used to drive the clock alone. But don't cite me on that.
    • I have strong reservations about whether logic alone can achieve consciousness at all. The fact that it is asynchronous adds nothing that wasn't there before.

Well, let's see.
        Average (rather than worst) case performance.
        Lower latency.
        Lower power consumption.
        Zero power consumption in the static state.
        Lower EMI.
        Security, by being immune to clock-glitch attacks and some power attacks.
        What else do you want?
You misunderstand. Asynchronous circuits certainly have different features from synchronous circuits; there is no disputing this. But in terms of input and output, you would expect an async and a sync chip to produce the same results if running the same program, right? The logic is the same. I mention this because Kurzweil is a proponent of strong AI, which claims that consciousness is merely some form of algorithm that any computational device can run. I was merely doubting that consciousness is purely a function of logic, and therefore that consciousness cannot be expressed algorithmically. Consequently, since async logic is still defined by logic, it adds nothing new to the AI equation and will no more produce consciousness on a chip than synchronous circuits do.

  • by phorm ( 591458 ) on Monday October 21, 2002 @10:24AM (#4495693) Journal
    On the flip side, the millions of simultaneous transitions in synchronous logic begs for a better way, and that may well be asynchronous logic

The advantage outlined here seems to be independent functionality between different areas of the PC. It would be nice if the components could work independently and time themselves, but is there really a huge loss in sustained synchronous data transfer?

From what I've understood, in most aspects of computing, synchronous data communication is preferable, e.g. network cards, sound cards, printers, etc. Don't better models support bi-directional synchronous communication?
    • by Hard_Code ( 49548 ) on Monday October 21, 2002 @10:39AM (#4495840)
      "but is there really a huge loss in sustained synchonous data transfer?"

      I'll answer that question, right after I look up the answer in memory...
    • by Junks Jerzey ( 54586 ) on Monday October 21, 2002 @10:46AM (#4495915)
From what I've understood, in most aspects of computing, synchronous data communication is preferable, e.g. network cards, sound cards, printers, etc. Don't better models support bi-directional synchronous communication?

      You're just talking about I/O. Of course I/O has to be synchronous, because it involves handshaking.

      I think there are some general misconceptions about what "asynchronous" means. Seriously, all I'm seeing are comments from people without a clue about chip design, other than what they read about at arstechnica.com or aceshardware.com. And if you don't know anything about the *real* internals of synchronous chips, then how can you blast asynchronous designs?

      So-called asynchronous processors have already been designed and prototyped. Chuck Moore's recent (as in "ten years old") stack processors are mostly asynchronous, for example. Most people are only familiar with the x86 line, and to a lesser extent the PowerPC, and a much, much lesser extent the Alpha and UltraSPARC. Unless you've done some research into a *variety* of processor architectures, please refrain from commenting. Otherwise you come across like some kind of "Linux rules!" weenie who doesn't have a clue what else is out there besides (Windows, MacOS, and UNIX-variants).
      • Actually, most popular communications formats are "asynchronous".

        Don't confuse yourself. Synchronous communications involve a real-time shared clock between points.

Then you have asynchronous communications standards like RS-232. The sender and receiver choose a baud rate, and the receiver waits for a start bit, then starts sampling the stream using its local clock. So long as the clocks are close enough, and the packets are short enough, you'll never get an error.

        Then you have standards like Fast Ethernet, which are also asynchronous. AFAIK, the clock used to decode the Ethernet packet is contained somewhere in the preamble, and a PLL is tuned to the packet's clock rate. This is to avoid the obvious problems of the simple async communications of RS-232.

A SAMPLE OF THE ACTUAL CLOCK used to encode the packet is available to the receiver, but the receiver can only use this to tune its local clock. It has to do the decoding asynchronously.
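          A minimal sketch of that start-bit-plus-local-clock scheme, in Python (timing numbers and function names are invented for illustration, not from any real UART): the receiver never sees the sender's clock, it just waits for the start bit and samples each bit in the middle of its nominal period using its own clock.

          ```python
          # Toy model of asynchronous (RS-232-style) reception: the receiver has no
          # access to the sender's clock, only an agreed nominal baud rate.
          # All names and numbers are illustrative.

          def transmit(byte, baud):
              """Edges for one frame: start bit, 8 data bits (LSB first), stop bit."""
              bit_time = 1.0 / baud
              bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
              return [(i * bit_time, b) for i, b in enumerate(bits)]

          def level_at(edges, t):
              """Line level at time t (idle level is 1)."""
              level = 1
              for edge_t, b in edges:
                  if t >= edge_t:
                      level = b
              return level

          def receive(edges, local_baud):
              """Sample with the *receiver's* clock: find the start bit, then sample mid-bit."""
              bit_time = 1.0 / local_baud
              t = 0.5 * bit_time            # middle of the start bit
              assert level_at(edges, t) == 0, "no start bit seen"
              byte = 0
              for i in range(8):
                  t += bit_time             # drift accumulates here if the clocks disagree
                  byte |= level_at(edges, t) << i
              return byte

          edges = transmit(0x5A, baud=9600)
          print(hex(receive(edges, local_baud=9600 * 1.03)))   # 3% clock mismatch still decodes 0x5a
          ```

          Because the error only accumulates over the ten bits of one frame, a few percent of clock mismatch is tolerable; over a long unframed stream it would not be, which is exactly why the start bit resynchronizes every byte.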
        • Then you have standards like Fast Ethernet, which are also asynchronous. AFAIK, the clock used to decode the Ethernet packet is contained somewhere in the preamble, and a PLL is tuned to the packet's clock rate.

You're right. Metcalfe's prototype and all variants thereafter (that I'm aware of) use a phase-modulated baseband signal. (Cable modems use something very similar to Ethernet, except they use otherwise unused cable channels instead of using baseband.) You have a bunch of leading zeroes to get the PLL locked, then a 1 to signal the beginning of the header. Some of the leading zeros get discarded with every hub/switch the packet goes through as the PLL locks on to the clock. The original Ethernet paper is a good read, one of those things where you sit back afterwards and say to yourself, "that's the right way to do it".

    • by Orne ( 144925 ) on Monday October 21, 2002 @10:57AM (#4496009) Homepage
      The root problem is data transfer within the CPU, not data transfer between I/O devices.

The clock speed (now above 10^9 Hz) is the upper limit of your chip's ability to move a voltage signal around the chip. Modern CPUs are "staged" designs, where processing is basically broken into an opcode "decode" stage, "register load", "operation", and "register unload" stages. For a given stage, you cannot clock the output of the stage faster than the time it takes for the computations to complete, or you're basically outputting garble.

A synchronous design indicates that every flip-flop on the chip is tied to the same clock signal, which can mean one HUGE amount of wiring just to get everything running at the same speed, which raises costs. On top of that, you have charging effects due to the switching between HI and LO, which can cause voltage problems (which is why capacitors are added to CPUs). Then add resistive effects, where current becomes heat, and you run the risk of circuit damage. All of this puts some hard limits on how fast you can make a chip, and for what price.

Asynchronous chip design allows us to throw away the clock circuitry, and every stage boundary becomes status polling (are you done yet, are you done yet, OK, let's transfer the results). With proper design, you can save a lot of material, and you can decouple the dependence of one stage on another, so the max instruction/second speed can now run at the raw rate of the material.
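        To make that "are you done yet" boundary concrete, here is a minimal Python sketch of a request/acknowledge handshake between two stages. It is only the control flow, not a hardware description, and all names are invented: the consumer latches a result only when the producer has signalled completion, however long the data-dependent computation took.

        ```python
        # Minimal sketch of a request/acknowledge handshake between two pipeline
        # stages. Purely illustrative control flow, not a hardware description.
        import threading, queue, time, random

        results = queue.Queue()

        def producer(work_items, req, ack):
            for item in work_items:
                time.sleep(random.uniform(0.01, 0.05))  # data-dependent "computation" delay
                results.put(item * item)                # result is now valid
                req.set()                               # request: "I am done, here you go"
                ack.wait(); ack.clear()                 # wait for the consumer's acknowledge
                req.clear()

        def consumer(n, req, ack):
            for _ in range(n):
                req.wait()                              # block until the data is actually valid
                print("latched", results.get())
                ack.set()                               # acknowledge: "got it, send the next one"

        req, ack = threading.Event(), threading.Event()
        items = list(range(5))
        t1 = threading.Thread(target=producer, args=(items, req, ack))
        t2 = threading.Thread(target=consumer, args=(len(items), req, ack))
        t1.start(); t2.start(); t1.join(); t2.join()
        ```

        The point of the sketch is that no shared clock appears anywhere: the transfer happens whenever the producer finishes, which is the self-timed behaviour the comment above describes.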
AFAIK, modern CPUs are already asynchronous internally to a large extent. This is because at today's clock frequencies the signal runtime difference becomes significant, i.e. by the time the signal has moved across the whole die, several clock cycles would already have passed. So prefetch, ALU, instruction decoding, FPU, etc. all operate independently from each other. I'm no expert on this, though; maybe someone more knowledgeable than me can shed more light on it.
        • Pipelining (Score:5, Informative)

          by Andy Dodd ( 701 ) <atd7@@@cornell...edu> on Monday October 21, 2002 @12:07PM (#4496744) Homepage
          In most modern CPUs, all of those occur independently in different units in the pipeline.

          But they still do their function once per global clock cycle. After that, they pass their results on to the next stage.

          As a result, the clock rate is limited by the longest propagation time across a given pipeline stage. A solution that allows for higher clock speeds is to increase the number of pipeline stages. This means that each stage has to do less. (The P4 one-ups this by having stages that are the equivalent of a NOP just to propagate the signal across the chip. But they're still globally clocked and synchronous.)

The P4 has (I believe) a 20-stage pipeline (it's in that ballpark) - the Athlon is sub-10, as are almost all other CPUs. This is why the P4 can achieve such a high clock rate, but its average performance often suffers. (Once you have a 20-stage pipeline, you have to make guesses when branching as to WHICH branch you're going to take. Mispredict and you have to start over again, paying a clock-cycle penalty.)
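          A quick back-of-the-envelope for that misprediction cost (all numbers invented for illustration, not measurements of any real CPU): effective cycles per instruction grow with branch frequency, misprediction rate, and the pipeline-refill penalty, so a deeper pipeline pays more for the same misprediction rate.

          ```python
          # Effective CPI with branch mispredictions:
          #   CPI = base + branch_freq * mispredict_rate * penalty_cycles
          # All numbers are invented for illustration.
          def effective_cpi(base_cpi, branch_freq, mispredict_rate, penalty_cycles):
              return base_cpi + branch_freq * mispredict_rate * penalty_cycles

          shallow = effective_cpi(base_cpi=1.0, branch_freq=0.2, mispredict_rate=0.08, penalty_cycles=8)
          deep    = effective_cpi(base_cpi=1.0, branch_freq=0.2, mispredict_rate=0.08, penalty_cycles=19)

          print(f"shallow pipeline: {shallow:.2f} CPI, deep pipeline: {deep:.2f} CPI")
          ```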

          Shorter pipelines can get around the branch misprediction issue by simply dictating that certain instruction orders are invalid. (For example, the MIPS architecture states that the instruction in memory after a branch instruction will always be executed, removing the main pipeline dependency issue in MIPS CPUs.)

          With asynch logic, each stage can operate independently. I see a MAJOR boon in ALU performance - Adds/subtracts/etc. take up FAR less propagation time than multiplies/divides - but in synch logic the ALU has to operate at the speed of the slowest instruction.

Most important is the issue of power consumption - CMOS logic consumes almost no power when static (i.e. not changing its state); power consumption is almost exactly a linear function of how often the state changes, i.e. how fast the clock is going. With async logic, if there's no need for a state change (i.e. a portion of the CPU is running idle), almost no power is consumed. It is possible to get some advantages in power consumption simply by changing the clock speed. (E.g. Intel SpeedStep allows you to change between two clock multiplier values dynamically, Transmeta's LongRun gives you FAR more control points and saves even more power, and many Motorola microcontrollers such as the DragonBall series can adjust their clock speed in small steps - one Moto uC can adjust from 32 kHz to 16 MHz with a software command.)
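          The switching-power point can be made concrete with the usual first-order CMOS dynamic power estimate, P ≈ α·C·V²·f (activity factor × switched capacitance × supply voltage squared × frequency). A back-of-the-envelope sketch; every number below is made up for illustration, not a measurement of any real chip:

          ```python
          # First-order CMOS dynamic power: P = alpha * C * V^2 * f.
          # All numbers below are illustrative only.
          def dynamic_power(alpha, c_farads, vdd, freq_hz):
              return alpha * c_farads * vdd**2 * freq_hz

          C, V, F = 1e-9, 1.5, 2.5e9          # 1 nF switched capacitance, 1.5 V, 2.5 GHz

          clocked = dynamic_power(alpha=0.15, c_farads=C, vdd=V, freq_hz=F)
          # An idle clocked block still toggles its clock tree every cycle; an idle
          # asynchronous block simply stops switching, so alpha collapses toward zero.
          idle_async = dynamic_power(alpha=0.001, c_farads=C, vdd=V, freq_hz=F)

          print(f"clocked block: {clocked:.2f} W, idle async block: {idle_async:.3f} W")
          ```

          The voltage-squared term is also why the SpeedStep/LongRun style of voltage-and-frequency scaling mentioned above saves so much power.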
      • The stage boundaries are already essentially status polling, because the stages are, themselves, pipelined, with different lengths depending on the complexity of the particular unit. Due to all of the out-of-order and parallel execution, it would be a pain to keep track outside of the unit what to expect when; instead, the information just goes through with the data. This does, however, avoid the problem of the clock being too fast for some stages.

        The real issue is clock distribution, because, unlike power and ground, you have to distribute clock in wires (rather than plates held at different voltages), and you have to move a lot of current around all over the place, which makes the inductive issues more complicated to simulate and work out: you've put a strangely-shaped antenna broadcasting a tone with harmonics at exactly the frequencies you care about in the middle of your chip.

"I am done now, here you go" is better than "are you done yet?", no? Or am I missing something really big?
    • by anonymous loser ( 58627 ) on Monday October 21, 2002 @11:09AM (#4496121)
The advantage outlined here seems to be independent functionality between different areas of the PC. It would be nice if the components could work independently and time themselves, but is there really a huge loss in sustained synchronous data transfer?


Yes, for many reasons which are somewhat glossed over in the article (I guess the author assumes you are an EE or CPE familiar with the subject). Here's a quick breakdown of the two major issues:


1. Power Distribution & Consumption - In a synchronous system, every single unit has a clock associated with it that runs at some multiple of the global clock frequency. In order to accomplish this you must have millions of little wires running everywhere which connect the global clock to the individual clocks on all the gates (a gate is a single unit of a logic function, such as an AND or OR). Electricity does not run through wires for free except in superconductors. Real wires are like little resistors, in that to push current through them you have to give up some of the power you are distributing (how much is a function of the cross-sectional area of the wire). The power which doesn't make it through the wire turns into heat. One of the reasons you can fry an egg on your P4 is because it's literally throwing away tons of power just trying to synchronize all the gates to the global clock. As stated in the article, in an asynchronous system the clocks are divided up on a modular basis, and only the modules that are running need power at all. This design technique is already used to some degree in synchronous designs as well (sorta like the power-saving feature on your laptop), but does not benefit as much, since a synchronous design must always trigger at the global clock frequency rather than only triggering when necessary.


2. Processor Speed - Much like the speed of an assembly line is limited by the slowest person on the line, so too is the speed of a CPU limited by the slowest unit. The problem with a synchronous design is that *everything* must run at the slower pace, even if it could theoretically move faster. In an asynchronous design, the parts that can go faster will, so the total processing time can be reduced.
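      To put rough numbers on the assembly-line point: a clocked unit has to budget the worst-case delay for every operation, while a self-timed unit finishes each operation when it actually finishes. A toy estimate in Python; the delays and the instruction mix are invented purely for illustration:

      ```python
      # Worst-case (synchronous) vs. average-case (asynchronous) latency for a
      # mixed workload. Delays and instruction mix are invented for illustration.
      delays_ps = {"add": 250, "shift": 200, "multiply": 900}     # per-operation logic delay
      mix       = {"add": 0.60, "shift": 0.25, "multiply": 0.15}  # fraction of executed ops

      sync_cycle = max(delays_ps.values())                        # the clock must fit the slowest op
      sync_avg   = sync_cycle                                     # every op takes one full cycle
      async_avg  = sum(mix[op] * delays_ps[op] for op in mix)     # each op takes its own time

      print(f"synchronous: {sync_avg:.0f} ps/op, asynchronous: {async_avg:.0f} ps/op "
            f"({sync_avg / async_avg:.2f}x)")
      ```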


      Hope that helps.

But that doesn't change the fact that there's still a slowest person on the line. Granted, for some cycles this person might not be needed, but if they were needed for some calculation... it would not speed up the process.

It's not that I disagree with the asynchronous design... I see the benefits, just pointing out a little (IMHO) flaw in your logic. :-)
Is it also possible to get more finely grained parallelism in the processor this way? Say, for instance, one certain stage of the pipe runs at half the speed of the other stages. Could one then just replace that stage with two, and have the pipeline alternate which of the two it uses?

        BlackGriffen
    • Actually, the biggest advantage is in routing.

      On a synchronous design of any complexity, quite a bit of the routing (i.e. where the wires go) is due to clock distribution. The CLK signal is one of the few that needs to go to every corner of the chip. There are various strategies for doing this, but they all have difficulties.

      One method is to lay a big wire across the center of the chip. Think of a bedroom, with the bed's headboard against one wall; you end up with a U-shaped space. Now, suppose you (some data) need to get from one tip of the 'U' (the decoder) to the other (an IO port). Either you have to walk around the entire bed (a long wire), or go over it (a shorter wire). The obvious choice is to go over, but when you have a wire with one voltage crossing a wire with a (potentially different) voltage, you get capacitance, and that limits the clock speed of the entire chip.

      With an asynchronous design (lots of smaller blocks with their own effective clocks), you don't have this. Data can be routed wherever it needs to go, without fear of creating extra capacitance. The downside is that they're very difficult to design. This is partially because there are no tools for this - most of the mainstream hardware simulators slow waaaaaaayyy down once you get more than a few clock signals running around.

      -- Hamster

  • Problem with Async (Score:3, Insightful)

    by adrox ( 206071 ) on Monday October 21, 2002 @10:25AM (#4495705)
The problem with asynchronous logic is that even though it might seem faster in theory, you have to deal with the introduction of many new race conditions. Thus, to prevent the race conditions you need to implement many handshaking methods. In the end it really becomes no faster than synchronous logic due to the handshaking. This is especially true these days with 2.5 GHz CPUs.
Nowadays we have good software which ensures you don't have race conditions. In fact, this is where async becomes great, as your data and clock are one big race condition: the clock must be slower than the data.
      • by taeric ( 204033 )
        Not sure if you were serious or not...

Software will have next to nothing to do with the race conditions in the processor. Instead, the race condition you pointed out will be the difficulty. That is, how can you ensure the "ready" signal is indeed slower than the computations that a module is performing? This is not an easy thing to do, especially if you want it to report as soon as it is done. Most likely, a signal will fire after the longest time a unit could take. You do not get a speed-up for the fast solutions, but you don't have to worry about complex logic on the ready path, either. Another solution would be handshaking, but then you may run into an explosion in the amount of logic.

        Also, something I think would be a problem. Many of the current optimizations in out of order execution are almost custom fit to a clocked design. That is, the processor knows IO will take so many cycles, branches take a certain amount, etc. Now, currently (especially with hyperthreading) the processor is coming closer to keeping the execution units busy at all times. Do people really expect some magical increase when the clock is taken out? The scheduler will have to change dramatically. Right?
This is done using handshaking. It's a method of communication that ensures both parties are happy before moving on to the next piece of data.
          As for logic, there are several methods to ensure that the result is ready before the latch switches. Using matched delays involves races, but is safer as it's more local than a global clock.
          A better method is using things like dual rail and delay insensitivity. This uses two wires to communicate data: wiggle one for a one and the other for a zero. No races. (A rough sketch of the encoding follows below.)
          Asynchronous isn't that weird, you know. Fine, an instruction might take 1 ns or 1.2 ns depending on the data. It still follows the rules of sequencing.

          Read first chapter of this [man.ac.uk] for more details of race free computation.

I even made a method of converting synchronous designs into async ones automatically.
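          The dual-rail idea mentioned above, as a minimal Python sketch (illustrative only): each logical bit uses two wires, one wire raised means 0, the other raised means 1, and "both low" is the spacer between codewords, so the receiver can detect completion without any timing assumption.

          ```python
          # Minimal sketch of dual-rail (1-of-2) encoding with return-to-zero spacers.
          # Illustrative only; real asynchronous datapaths add completion-detection
          # trees and request/acknowledge handshakes around this encoding.

          SPACER = (0, 0)   # "no data yet" between codewords

          def encode(bit):
              """One logical bit -> two wires: raise wire0 for a 0, wire1 for a 1."""
              return (1, 0) if bit == 0 else (0, 1)

          def decode(rails):
              """Return the bit once exactly one rail is high, or None while waiting."""
              if rails == SPACER:
                  return None            # still waiting -- no timing assumption needed
              if rails == (1, 0):
                  return 0
              if rails == (0, 1):
                  return 1
              raise ValueError("illegal codeword (both rails high)")

          def completion(word_rails):
              """A word is 'done' when every bit position has left the spacer state."""
              return all(r != SPACER for r in word_rails)

          word = [encode(b) for b in (1, 0, 1, 1)]
          print(completion(word), [decode(r) for r in word])   # True [1, 0, 1, 1]
          ```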
Oh wait, I see what you're getting at. The software is used in the design process rather than at run time. If you use a tool like "balsa" to design, then you can get race-free implementations.
          • :) Yeah, I got confused by your use of the word software at first. Looking at it now, you were obviously talking about design software. Oops.

            Still, a few more questions. When I think of most async designs, I think of the ready signal as being a designed race condition where the ready signal is guaranteed to fail. The only difference between this and clocked designs is that this condition is determined by the module (not a global clock) and can vary. Is this wrong?

            If not... then that part I get. However, I fail to see how there will not be some logic considerations when you move to a non-clocked system. First, since there is no deterministic way to know how long a module will take, there is no way to know what can be done in the wait period. That is to say, when a module "blocks", would it be advantageous to wait, or to do something else then check back?

            The way I see it, there are four possibilities.
            1) You wait, and the wait time was considerably shorter due to the async design. This is obviously a win on the async side.
            2) You wait, but the wait time was the same as a sync design. I see this as detrimental. (Unless no work could have been performed in the wait time, obviously)
            3) You complete another "ready" task and check back later, but the process was ready well before you checked back. Nothing really lost here, but what was gained?
            4) You complete another "ready" task and check back later, and the process is "ready" shortly before you get back.

            To me, with hyperthreading and lots of other new designs, 4 is the designed for solution today. So, in theory, you can keep the processor occupied at all times.

Now, my main comment: if the processor is busy at all times (in all places), what can be gained by going to an async design? I do agree it is worth looking into, simply because it is there, but I am skeptical of those that think an order-of-magnitude performance increase is possible overnight.

            Also, I am only looking at this from a CPU perspective in this. I can see the obvious advantages of it elsewhere. (Esp. in regards to power.)
The calculation of the race conditions is what you use to get the performance. You eliminate the handshaking and in some cases add extra buffers to get the timing correct. Handshaking is what makes synchronous logic synchronous. Asynchronous runs constantly at race conditions and, if done correctly, delivers the expected output in a calculated window (of time) from the presentation of the inputs. The only way you generate the handshaking signals you refer to is to have them generated in response to the presentation of data, taking into account the skew of the various signals involved. Most folks in logic design would call those handshaking signals clocks, and thus the generation of "many handshaking signals" defies the definition of asynchronous logic and actually makes the design synchronous. It is a "wheel of incarnation" thing. In the early seventies quite a few designs had an overload of one-shots for timing (an asynchronous logic concept, but a bad one); in the late seventies synchronous logic eliminated them and improved reliability; then in the eighties asynchronous logic was refined by the big players. Then synchronous logic overcame some of its problems (like the skew of all those clock signals) and it became predominant in both research and implementation. Now, as the chips grow larger and the systems get more complex, the skew of the clock signals is again a really nasty problem.
      So we turn again to asynchronous logic to solve the problems.

      As a Russian friend of mine is fond of saying, the only thing new is history that has been forgotten.
  • Cyclic History (Score:5, Interesting)

    by nurb432 ( 527695 ) on Monday October 21, 2002 @10:27AM (#4495725) Homepage Journal
    Isn't this where the idea of digital logic really got started? At least its how it was taught when I was in school.

    We even did some design work in async. Cool stuff. Easy to do, fast as hell...

Never did figure out why it never caught on, except for the difficulty in being general-purpose (so easy a job with sync logic). And I guess it does take a certain mind-set to follow it.
    • Re:Cyclic History (Score:4, Interesting)

      by Anonvmous Coward ( 589068 ) on Monday October 21, 2002 @11:29AM (#4496335)
      "Never did figure out why it never caught on."

I think the internet is a good metaphor for this technology. Take Quake 3 for example. Think about what all it takes to get several people playing over the net. They all have to respond within a certain time-out phase; for adequate performance they have to respond in a fraction of the timeout time; and there's a whole lot of code dedicated to making sure that when I fire my gun, 200ms later it hits the right spot and dings the right player for it.

      It works, but the logic to make that work is FAR more complex than the logic it takes to make something like a 'clocked internet' work. The downside, though, is that if you imagine what a clocked internet would be like, you'd understand why Q3 wouldn't work at all. In other words, the benefits would probably be worthwhile, but it's not a simple upgrade.

Certainly some of the early computers had to work asynchronously. They were so physically big that it was quite difficult to synchronize things, so the designs tended to be asynchronous. A good example is the Ferranti Atlas Computer [ukuug.org], a beautiful bit of kit that lacked a central clock. Just in case someone doesn't click on the link and find the information, I'll quote Aspinall from this paper:
      The machine, unlike its predecessors, did not have a clock in the central processor. It was felt that to tie down everything to the slowest operation, which was implied by the use of a clock, would be against the principle of the machine. Instead a single Pre-Pulse wandered round the various elements of the machine where it would initiate an action and wait for the self timing of the action to complete before wandering off to the next element. Occasionally it would initiate an action and move on to another element that could operate concurrently. For example the floating-point arithmetic unit would be completing a division operation whilst the program would be executing several fixed-point operations. Also there was a pipeline between the processor and the main core store.
The system had other minor innovations like paging, a two-level backing store and so on. BCPL, the grandfather of C, was also developed on this system.

      Kudos to Ferranti, Plessey and the University of Manchester who did a lot of the design work.

    • Isn't this where the idea of digital logic really got started?

      Yes.

      At least its how it was taught when I was in school. We even did some design work in async. Cool stuff. Easy to do, fast as hell...

      Never did figure out why it never caught on.


      It DID catch on. But the chips kept getting bigger.

      It's easier to design silicon compilers for synchronous designs than for asynchronous - and when you've got millions of gates per chip you REALLY want compiler assist, rather than to lay out all the circuit details by human effort.

      It's also easier to make automated TEST program generators for synchronous designs, to run the machines that test the chips when they come out of fabrication and reject the ones that are broken. You NEVER get high-90s coverage with human-generated "functional" tests - but a compiler can get there easily:
      - Add muxes in front of the flops to string 'em into "scan chains" - big shift registers connected either to the regular pins or a "JTAG" controller. Then on the tester you'll:
      - "Scan in" a random starting state.
      - Step the chip a few times.
      - "Scan out" the result and see if it matches expected, simultaneously scanning in a different starting condition.

The test generation program becomes essentially a random-number generator, chip simulator, and faults-tested-so-far counter, with a few finesses for things like getting things reset properly, testing gates with big fan-in, making sure busses aren't floating, rejecting patterns that don't test anything new, working around flops that weren't on the scan chain because they were on a critical path, avoiding logic loops that become implied RS flops or ring oscillators (depending on whether the loop has an even or odd number of inversions), identifying logic circuits that have untestable failure modes, and the like.
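      The scan-based flow described above boils down to a loop that the following Python sketch caricatures. Everything in it is invented for illustration (a toy "simulator" and a made-up fault model); the structure is the point: scan in a random state, clock the chip a few times, scan out, and keep only patterns that detect something new.

      ```python
      # Caricature of scan-based test generation: random patterns, a toy "chip",
      # and a coverage counter. All details are invented for illustration.
      import random

      N_FLOPS, N_FAULTS = 16, 200
      random.seed(1)
      # Pretend each fault is observable at some subset of scanned-out bit positions;
      # a real tool gets this from fault simulation of the actual netlist.
      fault_observers = {f: random.sample(range(N_FLOPS), k=3) for f in range(N_FAULTS)}

      def scan_in():
          return [random.randint(0, 1) for _ in range(N_FLOPS)]     # random starting state

      def step_chip(state, cycles=3):
          # Stand-in for the clocked next-state logic of the real design.
          for _ in range(cycles):
              state = [state[(i + 1) % N_FLOPS] ^ state[i] for i in range(N_FLOPS)]
          return state

      detected, patterns_kept = set(), 0
      for _ in range(500):
          out = step_chip(scan_in())                                 # scan in, step, scan out
          newly = {f for f, obs in fault_observers.items()
                   if f not in detected and any(out[i] for i in obs)}
          if newly:                                                  # reject patterns that test nothing new
              detected |= newly
              patterns_kept += 1

      print(f"{patterns_kept} patterns kept, {100 * len(detected) / N_FAULTS:.0f}% fault coverage")
      ```

      The whole scheme leans on every flop being reachable through a clock-controlled scan chain, which is exactly what the comment says breaks down for asynchronous, self-clocked logic.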

      But full-scan and partial-scan don't work if the flops aren't tied to a small number of clock domains that can be tied together or otherwise controlled directly by the tester. Asynchronous logic elements (such as ripple counters or other circuitry where a flop's clock is driven from another flop's output, or other logic that's something other than a clock distribution and switching system) just don't scan well.

      There IS a way to get the same sort of massive observability and controllability over asynchronous designs - the Cross Check array - along with automatic test program generation systems to work with it. (Think of DRAM- or active-matrix-LCD-style cross-point addressing of test-points and signal-injection points - about one for every four regular transistors on the chip.) It tests async designs just fine, and gives better coverage than full scan with about half the silicon overhead.

      But it's patented. The company that made it never got much market penetration in the US fabs. It has since merged and the product may be completely gone at this point. Except for Sony, which had an unlimited license from funding them when they were a startup, their own software, and (as of a few years back at least) used it in all of their consumer chips.
  • by catwh0re ( 540371 ) on Monday October 21, 2002 @10:28AM (#4495739)
A while back I read an article about Intel making clockless P2 chips that performed roughly 3 times faster (in MHz terms, not overall performance).

Intel recognises clockless as the future, and hence the P4 actually has portions that are designed clockless.

Before the know-it-alls follow this up with "but it runs at 2.xx GHz", let them please read an article about how much of your chip is oscillating at that immense speed.

    As it's said in the EE industry, "oh god imagine if that clock speed was let free on the whole system"

  • by Call Me Black Cloud ( 616282 ) on Monday October 21, 2002 @10:29AM (#4495745)
    and it doesn't work all that great.

    It usually goes like this: little head decides to take some action that big head later decides wasn't such a good thing to do.

    Fortunately I've invested in a logic synchronization device, which I like to call "wife". Wife now keeps little head from failing to sync with big head through availability (not use) of tools "alimony", "child support", and "knife" (aka "I'll chop that damn thing off while you sleep!")
  • Doing it already... (Score:2, Informative)

    by Sheetrock ( 152993 )
    Technically speaking, if you're not using a SMP system you're processing logic asynchronously.

But more to the point: while asynchronous logic may appear to offer a simple tradeoff (slower processing time for more efficient battery life), recent advances in microsilic design make the argument for asynchronous components moot. For one thing, while two synchronous ICs take twice the power of one asynchronous IC (not quite, because of the impedance caused by the circuit pathway between two chips, but that's negligible under most circumstances), they will in general arrive at a result twice as quickly as their serial pal. Twice as quick, relatively equal power consumption.

The real reason for the drive towards asynchronicity is to cut down on the costs of an embedded design. Most people don't need their toaster to process the 'Is the bread hot enough' instruction with twice the speed of other people's toasters. But for PDAs (Personal Digital Assistants) or computer peripherals I wouldn't accept an asynchronous design unless it was half as much.

    • Eh... "Asynchronous" means "without synchronization" (ie. "without clock"). It has nothing to do with serial vs. parallell operation.

      HIBT?

These chips are great for battery-powered devices, such as pagers, because they don't have to power a clock. Extends the battery life at least 2x. But even if the advantages are superior to clocked chips for larger markets, how do you market something like this to people who want to see "Pentium XXXVIV 1,000,000 GHz" on the packaging?
  • by renoX ( 11677 ) on Monday October 21, 2002 @10:39AM (#4495838)
I'm wondering how asynchronous logic stands up against transient errors induced by a cosmic ray.

On a synchronous circuit, most of the time such a glitch won't do anything, because it won't occur at the same time the clock "rings", so the incorrect transient value will be ignored.

    As the "drawing size" of circuits gets lower and lower, every circuit must be hardened against radiations, not only circuits which must go on space or in planes..
You could mitigate this problem somewhat by including redundant computation as part of your asynchronous workload. If radiation only causes local transients, then any sensitive operations could be performed by two different units, and their results compared in a third unit.

      The disadvantage is that a glitch in any of the three units would result in a computation being detected as invalid. And, of course, it adds even more complexity to an already staggeringly complex intra-unit communication problem.

      The advantage is that you don't need to spend as much time radiation-hardening your chips. Also, they become more naturally fault tolerant. For the longest time, system design in the space exploration field has been dominated by multiple redundancy; I think they would really dig multiple redundancy within a single chip.
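      A minimal sketch of that duplicate-and-compare idea in Python (entirely illustrative; the "glitch" is injected by hand, and the names are made up): two copies compute, a third unit compares, and a mismatch signals that the operation must be retried rather than silently accepted.

      ```python
      # Duplicate-and-compare as a guard against local transients: two copies
      # compute, a comparator checks them. Entirely illustrative.
      def alu(a, b, glitch=False):
          result = a + b
          return result ^ 0x4 if glitch else result    # a transient flips one bit

      def checked_add(a, b, glitch_in_unit=None):
          r1 = alu(a, b, glitch=(glitch_in_unit == 1))
          r2 = alu(a, b, glitch=(glitch_in_unit == 2))
          if r1 != r2:
              raise RuntimeError("mismatch: retry the operation")   # detection, not correction
          return r1

      print(checked_add(20, 22))                 # 42
      try:
          checked_add(20, 22, glitch_in_unit=2)  # a transient in one copy is caught
      except RuntimeError as e:
          print("caught:", e)
      ```

      Adding a third computing copy and majority voting would turn detection into correction, at the cost the parent comment mentions: yet more inter-unit communication.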
    • There are two factors here.
Firstly, on a glitch the synchronous part will take a certain period to return a wire low/high and resume its operation. By then it would be too late, as the clock has gone. An asynchronous property called delay insensitivity, which some designs have, allows any wire to have any delay to rise or fall. So you can pick off any wire from, let's say, your ALU, reroute it outside the chip through a telephone line to the other side of the world and back into the chip, and the design would still work (maybe at 1 ips, but nevertheless the result would be correct).
Secondly, async releases much less EMI. The inside of your computer is riddled with radiation much nastier than cosmic rays. Most chips are composed of millions of aerials which pick up all these rays and make your chip malfunction. Fine, you can slow down your clock and hope for the best, but it's better not to create them in the first place.
    • I'm wondering how asynchronous logic stand up against transiant errors induced by a cosmic ray?

      What about ECC on each internal bus? It works well for external busses (RAM, etc.).
  • More info: (Score:5, Informative)

    by slamden ( 104718 ) on Monday October 21, 2002 @10:48AM (#4495930)
    There was an article [sciam.com] in Scientific American about this just recently...
  • What if? (Score:5, Insightful)

    by bunyip ( 17018 ) on Monday October 21, 2002 @10:53AM (#4495975)
    I'm sure that many /. readers, like me, are wondering if asynchronous chips get faster if you pour liquid nitrogen on them.

    Seriously though, does the temperature affect the switching time? Or does the liquid nitrogen trick just prevent meltdown of an overclocked chip?
    • Re:What if? (Score:3, Funny)

      by brejc8 ( 223089 )
Yes they do. We had a demonstration board where if you sprayed some CFC spray then it would increase in speed. Only a little, because it was plastic packaging, but it was quite cool.
      When testing it I left it running a Dhrystone test overnight, logging the results, and as the office cooled down at night the chip went a little bit faster, and slowed down by the morning. I think I might have invented the most complex thermometer ever.
    • Re:What if? (Score:3, Informative)

      by twfry ( 266215 )
Yes, temperature does affect switching time, although not nearly as much as voltage for sub-micron channels. Lower temperature translates into slightly faster switching times. But if you really wanted to speed up a path, a slightly higher voltage will do the job better. Also, in the field it's easier to have control over voltage than temperature.

As a result of this, one of the newer hardware design ideas is to provide multiple voltage levels within a chip. Higher-performance logic is driven by a larger voltage difference than logic where performance is not as much of a concern.

    • Re:What if? (Score:2, Informative)

Asynchronous circuits inherently run at the fastest possible speed given the conditions (i.e. temperature, operating voltage), because they are self-timed, not timed by an external source. Transistors change state faster at lower temperatures and higher voltages. Since the transistors trigger each other to change states, that automatically happens as fast as possible. In synchronous logic, the clock period is the assumed worst-case time of the slowest transistor combination.
  • Read the article (Score:5, Informative)

    by Animats ( 122034 ) on Monday October 21, 2002 @10:56AM (#4495994) Homepage
    Read the cited article: "Asynchronous Logic Use -- Provisional, Cautious, and Limited". The applications being considered aren't high-end CPUs. Most of the stuff being discussed involves low-duty-cycle external asynchronous signals. Think networking devices and digital radios, not CPUs.

    In synchronous circuits, there are power spikes as most of the gates transition at the clock edge. It's interesting that this issue is becoming a major one. ICs are starting to draw a zillion amps at a few millivolts and dissipate it in a small space while using a clock rate so high that speed of light lag across the chip is an issue. Off-chip filter capacitors are too far from the action, and on-chip filter capacitors take up too much real estate. Just delivering clean DC to all the gates is getting difficult. But async circuitry is not a panacea here. Just because on average, the load is constant doesn't help if there are occasional spikes that cause errors.

    One of the designers interviewed writes: "I suspect that if the final solution is asynchronous, it will be driven by a well-defined design methodology and by CA tools that enforce the methodology." That's exactly right. Modern digital design tools prevent the accidental creation of race conditions. For synchronous logic, that's not hard. For async logic, the toolset similarly has to enforce rules that eliminate the possibility of race conditions. This requires some formal way of dealing with these issues.

    If only programmers thought that way.

    • For synchronous logic, that's not hard. For async logic, the toolset similarly has to enforce rules that eliminate the possibility of race conditions. This requires some formal way of dealing with these issues.

      aka, Design Patterns.

  • asynchronous logic? (Score:2, Interesting)

    by snatchitup ( 466222 )
    Sounds like my wife.

    But seriously, isn't that an oxymoron?

    At first, I thought it meant that we take a program, break it up into logic elements and scramble them like an egg. That won't work.

But after reading, I see it means that everything isn't pulsed by the same clock. So, if a circuit of 1,000 transistors only needs 3 picoseconds to do its job, while another 3,000 transistors actually need 5 picoseconds, then the entire 4,000 transistors are clocked for 5 picoseconds. So, 3,000 transistors are needlessly powered for 2 picoseconds.

    This adds up when we're talking 4 million transistors and living in the age of the Gigahertz.
    • But if the 2nd circuit takes somewhere between 2 and 5 picoseconds, depending on the operation being executed, then half the time you're more efficient, the other half the same.
Full-chip synchronized clock ticks pin the average operation execution time to the speed of the slowest, every time.
    • Just one thing... (Score:3, Informative)

      by Andy Dodd ( 701 )
      In CMOS logic, power consumption is not related too much to the static state of the chips, i.e. "transistor is on for 5 ps".

      It's related to how often the state change occurs.

      A good example of where async logic might be useful:
ALU multiply operation takes 20 ps, LOTS of transistors.
      ALU add/subtract op takes 5 ps, FAR fewer transistors.
      In current designs, this usually means that add/subtract ops have to run at a clock rate that is slow enough to accommodate that 20 ps operation.
      In an async design, the add/subtract instructions can run 4 times as fast. But since the multiply/divide stage is not clocked, those transistors aren't doing anything, so overall power usage is less. (The add/subtract stage uses 4x the power it did before, but the mult/div stage was probably using 10x or more the power the add/sub stage was using.)
  • am i ready? (Score:4, Funny)

    by cheesyfru ( 99893 ) on Monday October 21, 2002 @10:59AM (#4496022) Homepage
    Am I ready for asynchronous logic? It doesn't really matter -- it can come along whenever it wants, and I'll come use it when I have some spare cycles.
  • by tomhudson ( 43916 ) <barbara.hudson@b ... com minus distro> on Monday October 21, 2002 @11:04AM (#4496063) Journal
    Definitions for the real world: Asynchronous logic: anything you think about before your first cup of coffee...

    Second real-world definition: When someone else (usually of the opposite sex) answers your question with an accusation that's completely off-topic.

    Third real-world definition: Many slashdot posts (sort of including this one :-)

  • by dpbsmith ( 263124 ) on Monday October 21, 2002 @11:06AM (#4496081) Homepage
    Surely Digital Equipment Corporation's PDP-6 [mit.edu] had it in 1963?

Or is this modern "asynchronous" logic some totally different concept?
  • For those who may have missed it (as I did the first time)...the article title itself is a bit playful.
Isn't a UART at least partly an asynchronous chip? So you probably have got one in your PC today...

    And Chuck Moore's description of an asynchronous Forth chip is available in Google's cache [216.239.39.100] (I don't know why he pulled it from the web site).
    • Re:UARTs? (Score:2, Informative)

As I explained above, RS-232 (which uses a UART) is an asynchronous communications standard. The chip itself is clocked.

      There is no clock shared between the two points of communication. Each end "agrees" on a clock speed, but there is no guarantee how accurately each end produces said clock speed.

A receiver detects a new packet when it receives the start bit, and it samples the incoming serial stream using its own local clock. This is the asynchronous part of the communications: the receiver really has no idea if it's sampling right, and clock skew between the sender and receiver can produce errors.
  • Sun has talked quite a bit about async logic in their own designs. I forget if it is in their current generation of chips or not, but they've talked about putting 'islands' of async logic into their chips, with an eventual goal of using it throughout.

The article at embedded.com talks about 'security'. What they really mean here is, for example, the smart access cards in a DirecTV receiver. They say a clockless design makes it harder to figure out what is going on. So, it is a DRM monster, they say.
  • Ready Am I (Score:3, Funny)

    by istartedi ( 132515 ) on Monday October 21, 2002 @11:18AM (#4496225) Journal

    No problem asynchronous logic will be. To program some say difficult but they weak minded people are. Excuse me, I have to post a response to the story on Slashdot about logic asynchronous now.

Aww hell, my SO has been doing this for *years*; I mean, she has been the queen of one-sided logic for ages ;-)
    p.s. Kylie, if you're reading, j/k, love ya!
    ~what was that? I dunno, but you've got its license plate number stamped on your forehead ~*ouch*~
  • All I want to know is:
1. How would this change the appearance of the code that I write?
    2. Would there be any difference at all?
    3. Would I need an entirely new programming language, replete with syntax to leverage asynchronous logic?
    4. Are there (sensible) examples of this for me to gawk at?
    This really sounds interesting but being just a dumb programmer, I'd be interested in seeing this concept in terms I can understand (if it exists...).
1. It doesn't. The processor might be async but the code is just your original code. Amulet have been making asynchronous ARMs and I made an async MIPS with no need to alter the code.
      2. Well, it would use less power, if that's what you want. Or go faster. Or if the chip detects a freeze instruction it will statically sit there waiting for an interrupt.
      3. Nope.
      4. C
  • by mfago ( 514801 ) on Monday October 21, 2002 @11:38AM (#4496414)
    Here at Caltech [caltech.edu] the CS department is into this kind of thing.

    They've even built a nearly complete MIPS 3000 compatible processor [caltech.edu] using async logic.

    Seems pretty cool, but I'm waiting for some company to expend the resources to implement a more current processor (such as the PowerPC 970 perhaps) in this fashion.
Here at Manchester [man.ac.uk] the CS department is into this kind of thing.

      They've even built complete ARM-compatible processors using async logic.

      We did make one with an external company to use in their products.

      I am currently working on making a nice fast MIPS design myself.
    • by Andy Dodd ( 701 )
      One of the guys heavily involved in that project is no longer at Caltech, but is a professor at Cornell University now.

      http://vlsi.cornell.edu/~rajit/

      One of the best (albeit toughest) profs I've ever had. This guy knows his stuff, and is very good at passing the knowledge on. :)

      Happens to be responsible for Cornell's only FreeBSD lab, which the CS students prefer to the CS department's own systems. Many of them continue using the CSL lab long after finishing ECE/CS 314. (Req'd for all ECE and CS majors.)
  • Here's a link [sun.com] to a list of papers & presentations on the topic.

It's not new; companies like Sun have been pursuing this for years. Here's info [sun.com] on the FLEETzero prototype async chip they were showing off at the ASYNC 2001 conference last year.

  • by brejc8 ( 223089 ) on Monday October 21, 2002 @12:39PM (#4497119) Homepage Journal
This [man.ac.uk] is my favorite example of asynchronous logic.
    As the rat speeds up or slows down, the chip compensates for it.
    Not often that you can play with toy mice and call it research.
  • Am I ready for asynchronous logic?
  • Dear god people, do you have any idea how impossible asynchronous circuits are to debug?!!?

    I spend several hours a day with a hair dryer and go through many cans of freeze spray debugging many many stupid little asynchronous designs that engineers think are "cool" or "sweet". Yeah, and they work on the FPGA on their desk and none other.

Please please please, if you don't want to listen to me then go to http://www.chipcenter.com/pld/pldf030.htm
    and read what Xilinx's Director of Applications Engineering has to say.

    quote "Asynchronous design methods may ruin your project, your career and your health"
  • by Neurotensor ( 569035 ) on Monday October 21, 2002 @02:34PM (#4498293)

    I'm studying chip design and my supervisor scoffs at asynchronous logic. I don't have any real input of my own, but his view is that we've been waiting for commercially viable asynchronous designs for as long as cheap fusion, and neither has happened yet despite many loud enthusiasts.

    One of the real problems of asynchronous logic is in testing. With synchronous logic your design is partitioned into registers and combinational logic. The combinational stuff can be tested at production by use of every possible test vector, while registers are rather easy to test. Together these two tests virtually guarantee that the state machine works. Do that for every state machine and you're done.

    Asynchronous state machines, however, have no obvious way to break them down. You have to give them sequences of inputs and check their sequential outputs. Even if you think it's working you can never be sure, and what happens when the temperature changes? Race conditions can result in the state machine breaking under changing temperatures.

    Synchronous design is a very mature field. Nowadays you can be sure that a design works before fabrication (well, almost.. =) and then synthesise it into gates that ought to work first go. If they didn't then AMD and Intel would go under pretty soon!

    Asynchronous design is hard and my hat goes off to the people who do it for a living. But the same amount of effort would result in far more development using standard techniques. I guess you really have to want to do it.

    Yes, synchronous logic has serious issues with clock distribution, but it's still the most commercially viable design technique. The fact that your CPU is fully synchronous is testament to that.

    So, which will come first: cheap fusion or reliable asynchronous logic?

I just happened to check out a textbook on the subject of asynchronous circuit design, and so far it's been pretty good (1st part of chapter 1). Anyway, it gives the benefits of asynchronous design:
    1. Elimination of clock skew problems - the clock is a timing signal, but it takes a certain amount of time for the clock signal to propagate around the chip, so as the clock frequency goes up, this becomes a huge problem.
    2. Average-case performance - synchronous circuits must be timed to the worst-performing elements. Asynchronous circuits have dynamic speeds.
    3. Adaptivity to processing and environmental variations - dynamic speed here again. If temperature goes down, the circuit speeds up. If supply voltage goes up, speed goes up. It adapts to the fastest possible speed for the given conditions.
    4. Component modularity and reuse - interfacing is easier because difficult timing issues are avoided (handshake signals are used instead).
    5. Lower system power requirements - it takes a lot of power to propagate the clock signal, plus spurious transistor transitions are avoided. (MOSFETs only use considerable power when they change states.)
    6. Reduced noise - all activity is locked to a single frequency in synchronous designs, so big current spikes cause large amounts of noise. A good analogy is the noise of 50 marching soldiers vs. the noise of 50 people walking at their own pace. The synchronous nature of the soldiers causes the magnitude of the noise to be much greater.
    Major drawback: not enough designers with experience, and a lack of asynchronous design tools. So far the book is a great read, but pretty technical (good for an EE or comp sci person who's had a basic digital logic class).

    The book is "Asynchronous Circuit Design" by Chris J Myers from the University of Utah.

    Also I wrote a paper about this for my computer architecture class:
    http://ee.okstate.edu/madison/asynch.pdf [okstate.edu]
