Hardware

Clockless Computing 342

ender81b writes "Scientific American is carrying a nice article on asynchronous chips. In general, the article advocates that eventually all computer systems will have to move to an asynchronous design. The article focuses on Sun's efforts but gives a nice overview of the general concept of asynchronous chip design." We had another story about this last year.
This discussion has been archived. No new comments can be posted.

  • by npietraniec ( 519210 ) <npietran.resistive@net> on Wednesday July 17, 2002 @02:58PM (#3903660) Homepage
    You actually could "overclock" one, because such computers would still have a maximum speed... Instead of spinning their wheels the way today's computers do, they would only do work when they needed to. They'd be able to achieve quicker bursts because all that wheel-spinning wouldn't melt the processor.
  • by The Fun Guy ( 21791 ) on Wednesday July 17, 2002 @03:07PM (#3903738) Homepage Journal
    The article talks about one advantage of clockless chips being that you can do away with all the overhead of sending the clock signal out to the various parts of the chip. It also discusses what kind of data processing activities are better suited to clocked vs. clockless chips. To get a best-of-both-worlds chip design, what about farming out various responsibilities on the chip to clockless sub-sections? The analogy I have in mind is dropping my laundry off at the dry cleaners. I am on a tight schedule and have a lot of things to do in a certain sequence, while the dry cleaners collects laundry and does it at various rates during the course of the day. This particular laundry of mine can be done at any point over the next 4 days, and held afterwards, just so long as I have the finished product exactly when I need it: Thursday at 4:15 p.m. Different people assign different limits to the time-sensitivity of their laundry. The clocked sections can drop off their data for processing and pick it up when they need it, and what happens in between is up to the clockless subchip, which runs more-or-less FIFO but can be flexible based on the time-sensitivity of the task.
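The drop-off/pick-up pattern described above can be sketched in software terms as a bounded FIFO between a "clocked" producer and a self-timed worker. This is purely an illustrative model (the names and the doubling "work" are made up, not from any real chip or toolkit):

```python
import queue
import threading

# Hypothetical sketch: a "clocked" producer hands work to a
# "clockless" worker through a bounded FIFO, and collects results
# only when its own schedule demands them.

jobs = queue.Queue(maxsize=8)      # the drop-off window at the "dry cleaners"
done = {}                          # finished laundry, keyed by ticket
done_lock = threading.Lock()

def async_worker():
    """Self-timed: processes items whenever they arrive, at its own pace."""
    while True:
        ticket, payload = jobs.get()
        if ticket is None:         # sentinel: no more work coming
            break
        result = payload * 2       # stand-in for real processing
        with done_lock:
            done[ticket] = result

t = threading.Thread(target=async_worker)
t.start()

# The "clocked" side drops off data on its own schedule...
for ticket in range(4):
    jobs.put((ticket, ticket + 10))

jobs.put((None, None))
t.join()

# ...and picks results up when it needs them (its "Thursday at 4:15").
print(sorted(done.items()))        # [(0, 20), (1, 22), (2, 24), (3, 26)]
```

The bounded queue is what makes the interface work: the clocked side blocks only when the clockless side has genuinely fallen behind, which is exactly the flexibility the analogy describes.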
  • But... (Score:3, Insightful)

    by ZaneMcAuley ( 266747 ) on Wednesday July 17, 2002 @03:15PM (#3903791) Homepage Journal
    ... won't the bus and storage devices still be a bottleneck?

    Bring on the solid state storage.
  • Tools (Score:3, Insightful)

    by loli_poluzi ( 593774 ) on Wednesday July 17, 2002 @03:20PM (#3903844)
    Kevin Nermoyle (Sun VP) advocated asynch at the 2001 uProcessor Forum. The biggest and most daunting objection I heard in response was that tool support would be a killer: there is no tool support for asynch design at the complexity level needed to do a processor. You're left with a bunch of Dr. Crays using the length of their forearm to resolve race conditions with wiring distance.

    Since a large portion of the industry would have to make the leap before the tool guys would invest in development, that kills any realistic possibility of an overnight asynch revolution. Small niche applications will have to get the ball rolling. Even then, designers would need to get a lot smarter to think asynch. Think of how many chip protocols rely on a clock. How do you do even simple flow control in a queue, for example? Pipelining goes to pot -- it's a whole different world. My two cents.. Loli
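For what it's worth, the usual answer to "flow control without a clock" is a request/acknowledge handshake between neighboring stages. Here's a toy software model of the classic four-phase protocol; it's purely illustrative (real asynchronous design does this at the gate level, not with Python objects):

```python
# Toy model of four-phase request/acknowledge handshaking: data moves
# only when both sides have agreed, with no clock anywhere.

class Stage:
    def __init__(self, name):
        self.name = name
        self.data = None
        self.req = False   # sender raises req when its data is valid
        self.ack = False   # receiver raises ack once data is taken

def send(sender, receiver, value, log):
    # Phase 1: sender presents data and raises request.
    sender.data, sender.req = value, True
    log.append(f"{sender.name}: req up ({value})")
    # Phase 2: receiver latches the data and raises acknowledge.
    receiver.data, receiver.ack = sender.data, True
    log.append(f"{receiver.name}: ack up")
    # Phase 3: sender sees the ack and drops request; its data may now change.
    sender.req = False
    log.append(f"{sender.name}: req down")
    # Phase 4: receiver drops acknowledge, ready for the next item.
    receiver.ack = False
    log.append(f"{receiver.name}: ack down")

log = []
a, b = Stage("A"), Stage("B")
for v in (1, 2):
    send(a, b, v, log)

print(b.data)      # 2 -- the last value handed over
print(len(log))    # 8 -- four phases per transfer, two transfers
```

Each transfer is self-paced: a fast sender simply waits at phase 2 until the receiver is ready, which is the queue flow control the parent is asking about.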
  • by mikehoskins ( 177074 ) on Wednesday July 17, 2002 @03:28PM (#3903895)
    Are you just talking about a passive backplane? If so, we're talking about something VERY different here. If it's a passive backplane, you're talking about power and connections, little else.
  • by Alien54 ( 180860 ) on Wednesday July 17, 2002 @03:29PM (#3903904) Journal
    So ...

    if we have clockless computers for the desktop, HOW will Intel and AMD market them?

    After all, the big quick-and-dirty rating they have used for decades is clock speed. Throw that away and what do you have?

    I can see the panic in their faces now...

  • by Mike_K ( 138858 ) on Wednesday July 17, 2002 @03:33PM (#3903946)
    Simple. Crank up the voltage.

    One huge advantage of asynchronous circuits is that you can turn the power down, and the chip simply slows down (up to a point, but you see the point). You turn power up (increase Vcc) and the chip runs faster. Same principles apply in overclocking your desktop chip, except here you don't need to crank voltage AND clock :)

    Of course doing this could ruin your chip.

    m
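The voltage/speed trade described above can be given a rough feel with the first-order "alpha-power law" approximation for CMOS gate delay. All parameter values below are invented for illustration; real numbers depend entirely on the process:

```python
# First-order sketch of how gate delay shrinks as supply voltage rises:
#     delay ~ Vdd / (Vdd - Vth)**alpha   (the "alpha-power law")
# Parameter values here are made up for illustration only.

def gate_delay(vdd, vth=0.4, alpha=1.3, k=1.0):
    """Relative gate delay at supply voltage vdd (volts)."""
    return k * vdd / (vdd - vth) ** alpha

d_nominal = gate_delay(1.0)   # nominal supply
d_cranked = gate_delay(1.2)   # supply "cranked up"

# Raising Vdd makes every gate faster, so a self-timed pipeline simply
# finishes each handshake sooner -- there is no clock to retune.
print(f"speedup: {d_nominal / d_cranked:.2f}x")
```

This is exactly why voltage alone is the "knob" in the parent's scenario: in a clocked chip the faster gates buy you nothing until you also raise the clock, while a handshaking chip exploits them immediately.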
  • by ZipR ( 584654 ) on Wednesday July 17, 2002 @03:36PM (#3903970)
    Give me an asynchronous life!!!
  • by mikehoskins ( 177074 ) on Wednesday July 17, 2002 @03:37PM (#3903976)
    Today, we get to benchmark a system using measurements such as TpM, MIPS, FLOPS, etc. How do you quantify how fast a clockless machine is? Yes, they're supposedly faster with fewer transistors, but how do you sell a clockless computer to somebody who asks you how much faster a system is than the old one it replaces?

    The Pentium IV is supposed to be partially clockless, but to the outside world, all the I/O is clocked, making it easy to benchmark. If the I/O, logic, memory, etc., were ALL clockless, how fast is the machine?

    Government contracts of big systems are really picky about things like this.

    I think marketing will be the most likely problem for this technology. (Interfacing to clocked equipment won't be.)

  • by tlambert ( 566799 ) on Wednesday July 17, 2002 @03:43PM (#3904015)
    If you have looked at the "bucket brigade" graphic in the article, then you will know what I'm talking about...

    Is it just me, or does that picture seem to imply that you get a lower "buckets per unit time" throughput from asynchronous processing?

    I know that this is not the claim of the article... but it's still my gut reaction to the graphic.

    "Gandy Dancers" (railroad manual track laying and repair teams) were so-called because the first part of their name was the Chicago tool maker that made track laying tools, and the second part of their name came from the fact that they worked to a rhythm.

    A better analogy would be a work-content based multipath route, where the amount of time is based on the type of work to be performed.

    This would have implied (correctly) that, in an asynchronous system, you should be able to "make up for" slow elements by doubling them up: i.e., when you are faced with a slow section of pipe, rather than bottle-necking, make it wider instead.

    Or to use their analogy, if you have a slow guy, then get another slow guy to stand next to him so he doesn't bottleneck the brigade.

    Probably a more apt analogy would be nice: it's hard to show throughput increases, except by number of buckets in the hands of the people.

    -- Terry
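The "double up the slow guy" idea above can be checked with a crude steady-state throughput model. The stage delays are invented for illustration; the point is only that widening a stage restores its service rate:

```python
# Crude throughput model of an asynchronous pipeline: steady-state
# throughput is limited by the slowest stage's service rate
# (width / delay), and a slow stage can be widened (duplicated)
# to remove the bottleneck. All numbers invented for illustration.

def throughput(stage_delays, widths):
    """Items per time unit: each stage serves width/delay items."""
    return min(w / d for d, w in zip(stage_delays, widths))

delays = [1.0, 1.0, 2.0, 1.0]               # the third stage is the slow guy

narrow = throughput(delays, [1, 1, 1, 1])   # 0.5 -- slow stage bottlenecks
wide   = throughput(delays, [1, 1, 2, 1])   # 1.0 -- a second slow guy fixes it

print(narrow, wide)
```

Note this trick needs no clock-domain surgery in an asynchronous design: the widened stage just answers handshakes twice as often.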
  • by Alomex ( 148003 ) on Wednesday July 17, 2002 @04:14PM (#3904244) Homepage
    In the past I've mentioned here the role that popular publications like Scientific American have in creating hype. Be it the semantic web, nanotechnology, AI or asynchronous circuits, SciAm seems to focus on pie-in-the-sky ideas with a very small chance of success.

    That would be fine if they acknowledged this in the text, but more often than not they take an extremely bullish approach and echo the wildest promises by the researchers as if they were to happen tomorrow.

    Very smart people have been working for many years on asynchronous circuits, yet the likeliest scenario is hybrid designs mixing synch and asynch circuits (the asynch circuit stops the clock from propagating).

    Why do SciAm and other such publications do this? According to Chomsky, because they are told to by the trilateral commission. Personally, I think they do it because it sells magazines.

  • by Rupert ( 28001 ) on Wednesday July 17, 2002 @04:21PM (#3904320) Homepage Journal
    It's more accurate if you think of the amount of water getting to the other end. If the water supply is irregular, the synchronous bucket chain will sometimes be sending empty buckets. The asynchronous bucket chain only has to send full buckets. If one person is 1% slower than the others, the other people on the synchronous bucket chain have to wait a whole extra cycle, reducing throughput by 50%. Throughput on the asynchronous bucket chain is reduced by just 1%.
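The parent's arithmetic can be made concrete with a toy model, under the same assumption the parent makes: the synchronous chain must round every handoff up to a whole clock period, while the asynchronous chain pays only each person's actual time. Time units are invented:

```python
import math

# Toy model: one person in a four-person bucket chain is 1% slower.
# Synchronous chain: handoffs advance in lockstep, rounded up to whole
# clock periods. Asynchronous chain: the slowest person's actual time
# sets steady-state throughput.

def sync_time_per_bucket(person_times, clock=1.0):
    """Lockstep: the slowest person sets how many whole cycles a handoff takes."""
    return max(math.ceil(t / clock) for t in person_times) * clock

def async_time_per_bucket(person_times):
    """Self-timed: throughput is limited by the slowest person's actual time."""
    return max(person_times)

times = [1.0, 1.0, 1.01, 1.0]          # one person is 1% slower

print(sync_time_per_bucket(times))     # 2.0  -- a whole extra cycle (50% loss)
print(async_time_per_bucket(times))    # 1.01 -- just the 1% penalty
```

(In practice a synchronous designer would stretch the clock period to 1.01 instead of adding a cycle, so the 50% figure is the worst case; but the asynchronous chain never has to make that choice at all.)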
  • by William Tanksley ( 1752 ) on Wednesday July 17, 2002 @04:36PM (#3904455)
    That's not true either. It can take fewer transistors even at a small scale, and it often takes fewer transistors at a large scale, since propagating the clock pulse across a chip requires a surprising amount of circuitry.

    Consider that the Pentium 4 added entire pipeline stages for the sole purpose of getting data from one side of the chip to the other in step with the clock.

    Consider that the x25, a largely asynchronous chip, has about as many gates as a 386 yet contains 25 parallel processors.

    The main problem isn't impossibility or complexity; the problem is that asynchronous design isn't yet understood. We have a LOT of research to do. Once we've done it, engineers will consider asynchrony to be a simple, solved problem.

    -Billy
  • by William Tanksley ( 1752 ) on Wednesday July 17, 2002 @04:47PM (#3904548)
    It's amusing to read the claim that an asynchronous chip couldn't take advantage of pipelining. You see, the thing is that pipelining exists ONLY to control two of the disadvantages of clocked processors.

    First, it allows different instructions to complete in different amounts of time. An asynchronous chip wouldn't have that disadvantage.

    Second, it allows 'idle' portions of the chip to be used by other instructions whose time hasn't come. Asynchronous chips are vulnerable to that as well, but they can be much less vulnerable than even the most pipelined architecture, because dataflow can completely guide the chip: you can hammer in more data as soon as the previous data's been slurped in.

    So far from not taking advantage of pipelining, asynchronous chips naturally have one of the advantages of pipelining, and can be built to have the other.

    -Billy
  • by Weird Dave ( 224717 ) on Wednesday July 17, 2002 @04:57PM (#3904636) Homepage
    Even in "clockless computing" there's still kind of a "clock," in that a new instruction would trigger the operation instead of clock edges.

    If you had a multiply instruction that took maybe 3 clock cycles in a clocked computer, you would still have 3 cycles/events/stages or whatever you wanted to call them in an unclocked computer that the instruction would need to step through. The instruction wouldn't be regulated by the clock, but pushing it through the stages would take a certain amount of time... And there might be ways to "overclock that" if you wanted to call it that... Of course, that's kind of what you said at the end :)

    Man, you lost and you know it. You were caught bullshitting, and if I had any mod points, I'd mod you down. Don't even try to claim that there is some sort of pretend clock in an asynchronous processor if you look at it sideways. You're never going to overclock the asynchronous part of a processor because there's no clock, by definition. You might be able to make it run faster by mucking with other factors like the ambient temperature, but that's not what you said, and you know it! You can save no face because you are totally and completely wrong. Have a nice day.
