Sun Unveils Direct Chip-to-Chip Interconnect

mfago writes "On Tuesday, September 23, Sun researchers R. Drost, R. Hopkins and I. Sutherland will present the paper "Proximity Communication" at the CICC conference in San Jose. According to an article published in the NYTimes, this breakthrough may eventually allow chips arranged in a checkerboard pattern to communicate directly with each other at over a terabit per second, using arrays of capacitively coupled transmitters and receivers located on the chip edges. Perhaps the beginning of a solution to the lag between memory and interconnect speed versus CPU frequency?"
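
A rough back-of-the-envelope sketch of where a terabit-class aggregate figure could come from; every number below is an illustrative assumption, not a figure from Sun's paper:

    # Hypothetical aggregate edge bandwidth for a proximity-coupled chip.
    # All parameters are invented for illustration, not taken from the paper.
    pads_per_edge = 400        # assumed capacitively coupled pads per chip edge
    edges_used = 4             # checkerboard layout: all four edges face a neighbor
    rate_per_pad_gbps = 1.0    # assumed per-pad signaling rate, Gbit/s

    aggregate_tbps = pads_per_edge * edges_used * rate_per_pad_gbps / 1000
    print(f"aggregate: {aggregate_tbps:.1f} Tbit/s")  # 1.6 Tbit/s with these guesses

With guesses in that ballpark, a terabit per second needs nothing exotic per pad; the claim rests on how many pads fit along an edge.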
  • by holzp ( 87423 ) on Monday September 22, 2003 @08:37AM (#7023947)
    therefore the speed increase will be unnoticeable.
    • Great! I am so happy to see that there are some real programmers out there who see the truth. I have seen our Sun E3500 with 8 CPUs feel like a Pentium Pro with Java shit running on it. But it was management's vision ... what can we do? I just procured the servers and pretended that I was doing social work by giving Sun more money.
    • Actually, you might notice a difference: Java might peg all of your CPUs seemingly for no good reason.
      Depending on what patch level your Solaris is at, your JVM might be using one OS thread-handling model or another. One apparently makes Java go nuts on spinlocks, which is less noticeable on slower machines. True story from a Solaris support case at work...

      If you want to sell more hardware, why would you make a framework that scales well down to smaller, slower hardware? The idea is that the more hardwar

  • Timing? (Score:3, Interesting)

    by afidel ( 530433 ) on Monday September 22, 2003 @08:38AM (#7023949)
    I wonder if this release might have been pushed forward a bit to squelch some of the talk about Sun losing its will to innovate after Bill Joy left.
    • Re:Timing? (Score:5, Insightful)

      by Usagi_yo ( 648836 ) on Monday September 22, 2003 @08:43AM (#7024011)
      No. What you don't understand or realize is that Bill Joy actually left two years ago, when he "retired" from CTO into the role of distinguished senior engineer. His latest move, full retirement, is merely a continuation of that. At least that's how I see it.
  • This sounds like a sweet technology. Hopefully we'll see this in a real product in the near future.

    Though the way people talk about Sun, we're more likely to see it licensed to some other company...

    • Isn't that called a trace? Or another fancy name would be a lead? I think that there are people with prior art...
      • I don't get it either. You want to make memory access faster and faster, so you put it closer and closer to the cpu. Eventually the bus length reaches 0, as the two chips are physically adjacent. So what?
        • Isn't that called a trace? Or another fancy name would be a lead? I think that there are people with prior art...

          No, a trace is a flat wire stuck to (or etched from) a printed circuit board. This invention (process, really; see below) obviates the need for PCBs between (at least some of) the chips. A lead is a wire not stuck to a PCB, such as the input connections to most oscilloscopes and test equipment.

          I don't get it either. You want to make memory access faster and faster, so you put it closer
  • terrabit (Score:2, Funny)

    by lanswitch ( 705539 )
    Does "terrabit" mean that it will be made of pieces of the earth?
  • No registration (Score:5, Informative)

    by Anonymous Coward on Monday September 22, 2003 @08:42AM (#7023987)
    Via Google [nytimes.com]
  • by KarmaPolice ( 212543 ) on Monday September 22, 2003 @08:43AM (#7024012) Homepage
    This could prove very interesting, as speed usually drops when "leaving the chip" to communicate. There has been a lot of research into protocols to ease on-chip communication when several ICs are combined on a single chip. If Sun's technology can stand the test, NoC/SoC products could reduce their time-to-market dramatically... smaller and faster devices for everyone!

    BTW: I didn't RTFA since it requires (free) reg.
  • by Thinkit3 ( 671998 ) * on Monday September 22, 2003 @08:43AM (#7024014)
    Or maybe Rambus is already fixing to sue them.
  • by Anonymous Coward on Monday September 22, 2003 @08:44AM (#7024022)
    That is the nature of the beast.

    Remember how excited you were to get your hands
    on a 386 machine?

    The thrill of your first encounter with a 286 screamer?

    Upgrading to 16k from 4k on your TRS-80?

    Your first disk drive for your Apple 2?

    It's all relative.

    So enjoy
    • Spoilsport :p

      Just because you didn't get that 486 you wanted all those years ago doesn't mean we can't enjoy the nice fast computers!

      ;)
    • If you hadn't already been moderated to 5, I would spend a point on your post.

      What you're stating is obvious but true: at this point, speed improvements are meaningless except for a fistful of applications.

      For the common user, a faster chip means nothing: the application is the choking point.
      • ... speed doesn't matter anymore.

        I wonder why these dummies in these corporations spend so much money on high-performance computing research; man, if only they would listen to Carlos.

        After all, it's not like this will make Mine Sweeper or the Windows Calculator program any faster. Speed doesn't matter people, give it up. Let's just dedicate ourselves to farming.

        • You wonder... let me explain to you:

          The dummies are the people who think a 2 GHz CPU will be preferable to the 1.4 GHz they have now, and upgrade.

          The corporations aren't dummies, they are very smart in fleecing the market.

          Maybe you need some super-duper machine, but for 99% of people it just doesn't make sense.

          Cheers,
      • What you're stating is obvious but true: at this point, speed improvements are meaningless except for a fistful of applications.

        You're right of course. Only a fistful of applications need ever increasing cpu speed.

        And that small fistful of applications all begin with one word.
        • Microsoft Word
        • Microsoft Excel
        • Microsoft PowerPoint
        • Microsoft Outlook
        • Microsoft FrontPage
        • etc...

        For the common user, a faster chip means nothing: the application is the choking point.

        I don't believe that I ne

    • Hey, now, don't forget the TI-99/4A! Insert the Extended BASIC cartridge, and, WHAMMO! You just doubled your memory to 32k!

      Cool beans, dude.
  • by Atomizer ( 25193 ) on Monday September 22, 2003 @08:45AM (#7024030)
    Whatever, I think this will end up being the SUV of chip-to-chip connections. ;)
  • by Peridriga ( 308995 ) on Monday September 22, 2003 @08:51AM (#7024084)
    This might be the obvious question but, why hasn't anyone done this before?

    It seems obvious: the edge of a chip has pins, and the chip it will eventually connect to has pins. Instead of running 20 trace lines to the next chip, why not redesign them so the outputs/inputs of both line up, reducing the complexity of the design?

    Anyone wanna fill in my mental gap for me?
    • by Jah-Wren Ryel ( 80510 ) on Monday September 22, 2003 @09:02AM (#7024187)
      It has been done before; probably the most recent incarnation is HyperTransport from AMD. The only difference at the 50,000 ft view is that the speeds and feeds are faster. This is an evolutionary step, not revolutionary or innovative.
      • HyperTransport is more than AMD. In fact, it includes Sun!

        from the HyperTransport FAQ [hypertransport.org]
        "6. What is the current specification release?
        The current HyperTransport Technology Specification is Release 1.05. It is backward compatible to previous releases (1.01, 1.03, and 1.04) and adds 64-bit addressing, defines the HyperTransport switch function, increases the number of outstanding concurrent transactions, and enhances support for PCI-X 2.0 internetworking."
        • This is MISinformative. Perhaps the noble moderator misread the Informative option as Misinformative? Or perhaps the gentle moderator doesn't know the difference between Hypertransport(TM) (a bus standard like, but faster than, ISA, PCI, etc. using plain old PCB traces as chip interconnect) and the new PCB-less chip interconnect discussed in the fine article? If this is the case (and I suspect it is), I must note that the moderator had no business moderating this particular post.

          The original poster is
          • Perhaps the responder misread the post to which he is responding, as did the Anonymous Coward before.

            The HyperTransport consortium has Sun Microsystems as a member. HyperTransport is used in AMD systems. Nothing more, nothing less.
            • Nope, I read it. I guess you misread, or simply ignored, the original post to which you replied, which clearly confused HyperTransport and the new technology, to wit:

              [This new technology] has been done before; probably the most recent incarnation is HyperTransport from AMD. The only difference at the 50,000 ft view is that the speeds and feeds are faster. This is an evolutionary step, not revolutionary or innovative.

              Although you didn't directly restate the poster's false claim, you did continue that
      • IBM has laid out similar plans for modular storage blocks (http://www.google.com/search?sourceid=navclient&ie=UTF-8&oe=UTF-8&q=ibm+storage+bricks) connected by pads on the surfaces. Good luck patenting that, unless the application was made a couple of years ago.
    • Yeah, YOU try to coordinate redesigns for 4 different chip vendors, who are also redesigning their chips for EACH OF THEIR CUSTOMERS.
    • It has been done.

      The DEC PDP-11/03, aka LSI-11, was implemented as a multi-chip (4 + 1 ROM) CPU. The five chips were placed right next to each other.

      This chip set was also set up by others with the UCSD Pascal "p-code" as the instruction set.

      Other CPUs in the series had the MMU and additional instructions in additional chips.
    • Ah, the Inmos Transputer, ideal for parallel applications. Now as dead as a doornail.

      Transputer background [classiccmp.org]

    • A lot of reasons. We'll start with large, complex chips. No, the pins aren't at the edge of the chip; they are underneath it: BGA, CGA or LGA (ball grid, column grid or land grid array).

      Then we'll go to ... well, they sort of did. They just enclosed it all in one big chip.

      Then of course heat, cooling and power requirements.

      Then there's manufacturability and repairability. You don't want a $15k board that has to be thrown away whenever there is a problem with it; you want to be able to repair it.

      N

    • I think Sun is talking about having -no- trace lines; the chips have transmitters and receivers rather than traces between them.
    • by Anonymous Coward on Monday September 22, 2003 @09:18AM (#7024321)
      You can't simply remove the circuit board to achieve better speeds; you need to eliminate the need for the pad that converts internal logic levels to what we currently use externally. That is what Sun is claiming to have done.

      Sun's technology is not simply soldering the pins directly together (as you suggest), which is effectively the same thing as wiring through a circuit board. The high-speed, low-drive-strength, low-voltage internal drivers have to go through pads that convert the internal signal to a slower, high-drive-strength, high-voltage one that will yield a reliable connection to the next chip. I'm not an expert in this area, but physics just gets in the way: there are capacitive issues and interconnect delay issues.

      Sun is claiming to use capacitive coupling (put the pads really close together, but don't physically connect them). This way they don't have to drive the external load of the pin/board connection, and they claim they will be able to scale this down to a pad that can switch faster than existing physically wired pins, which means they believe they can make this technology work with lower drive strengths.

      They still have a ways to go. Notice that the P4 has faster connections using existing technology; Sun did a proof of concept and claims they can speed it up 100x, so they haven't _proved_ that this will operate faster yet. They still have many things to overcome to make this viable, including a mass production/assembly process. It's going to be a few years. At least.
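
      A toy first-order RC model of the drive-load argument above; the component values are invented for illustration, and real pad design is far more involved:

        def settle_time_ns(r_ohms, c_farads, tau_count=5):
            """Approximate settling time, taken as 5 RC time constants, in ns."""
            return tau_count * r_ohms * c_farads * 1e9

        # Conventional pad: the driver sees pin + package + board-trace capacitance.
        conventional = settle_time_ns(r_ohms=50, c_farads=10e-12)   # ~10 pF assumed
        # Proximity pad: the driver only sees a tiny coupling capacitor to the facing chip.
        proximity = settle_time_ns(r_ohms=50, c_farads=0.1e-12)     # ~0.1 pF assumed

        print(f"conventional: {conventional:.2f} ns -> ~{1/conventional:.1f} Gbit/s class")
        print(f"proximity:    {proximity:.3f} ns -> ~{1/proximity:.0f} Gbit/s class")

      With those made-up numbers, the load capacitance alone accounts for a factor of 100 in switching speed, which is the shape of the claim, if not its substance.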
    • Prior art: a four-pin CPU. [purdue.edu]

      --Pat / zippy@cs.brandeis.edu

  • of these, well that's kind of the point actually :-)
  • by BlankStare ( 709795 ) on Monday September 22, 2003 @08:59AM (#7024163)
    I wonder if this hardware computing model could provide the first real base for Neural Network computing? As far as I know, any neural network is currently emulated on linear processing machines.
    • I wonder if this hardware computing model could provide the first real base for Neural Network computing?

      This is not a hardware computing model, it's a new interconnect technology. So no.

      As far as I know, any neural network is currently emulated on linear processing machines

      The neural network group [ed.ac.uk] at Edinburgh University has been developing parallel neural network chips using analog technology for some time now. Because neural networks are very fault tolerant, the errors introduced by analog adder

  • FINALLY! (Score:5, Interesting)

    by JoeLinux ( 20366 ) <joelinux@ g m a i l . c om> on Monday September 22, 2003 @08:59AM (#7024168)
    Someone gets it. As an Electrical Engineer-in-training, I was always frustrated with people who got these big bad processors and wondered why their improvement was minimal.

    They never quite grasped that the biggest bottleneck is between the processor and memory.

    My EE instructor always said that they could improve performance by doing one simple thing: make the interconnects on the motherboard between the CPU and RAM rounded instead of cornered. You could then increase bus speed, as you wouldn't have the magnetic loss at the corners that you do now.

    Fix that, and you can see a SUBSTANTIAL improvement in performance. The only thing that can be done beyond that is to get a Platypus drive (a solid-state "hard drive" from Quikdata, built from DDR RAM). Then you reduce your hard-drive access time from milliseconds to nano/microseconds.
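
    For what the ms-to-us jump is worth, a quick order-of-magnitude comparison (typical 2003-era figures, assumed rather than measured):

      disk_seek_s = 8e-3      # ~8 ms average seek on a contemporary disk (assumed)
      dram_access_s = 1e-6    # ~1 us effective access via a DRAM-backed drive (assumed)
      print(f"speedup: ~{disk_seek_s / dram_access_s:,.0f}x")   # ~8,000x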
    • Obviously an EE in training =)
      Have you ever tried to route a complex multilayer PCB design? If you have, then you know it would be basically impossible to guarantee all straight paths between the CPU and RAM, or any other component. Besides, if you want fast RAM you put it on or near the CPU die; hence processors like the Xeon, Itanium and HP PA-8800, which derive most of their performance gains over their desktop competitors from large L2 and huge L3 caches.
    • Re:FINALLY! (Score:3, Informative)

      by Jeff DeMaagd ( 2015 )
      I think forty-five-degree corners are about as much of a compromise as one can get without routing becoming too expensive.

      Another problem is that the speed of memory itself isn't that great unless you want to spend a _lot_ of money, to the tune of $50-$100 per megabyte as we see in advanced processor caches; and the faster it is, the more power-inefficient it becomes, maybe to a sizeable fraction of a watt per megabyte.
    • Re:FINALLY! (Score:3, Interesting)

      by hackstraw ( 262471 ) *
      Other people "get it". If you go to UVA, you might want to talk with Dr. McCalpin, and take a look at the stream memory benchmark [virginia.edu].

      Memory bandwidth is a bottleneck, but not necessarily the biggest; it depends on the application. Sometimes an app is CPU-bound, disk-bound, network-bound, or memory-bound (or graphics-card-bound, if 130 FPS is too slow for your eyes). Also, chip-to-chip interconnects will not change the memory bandwidth issue, because if the data does not fit on the chips or their cache, then it's going in mem
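
      For the curious, a minimal STREAM-style "triad" in Python/NumPy; a rough stand-in for McCalpin's benchmark, not a substitute for it (the canonical triad counts three array passes, and this two-step version actually moves a bit more, so treat the result as a lower bound):

        # STREAM "triad": a[i] = b[i] + s*c[i], timed to estimate memory bandwidth.
        import time
        import numpy as np

        n = 20_000_000                 # large enough to spill well out of cache
        b, c = np.random.rand(n), np.random.rand(n)
        a = np.empty_like(b)
        s = 3.0

        start = time.perf_counter()
        np.multiply(c, s, out=a)       # a = s*c
        np.add(a, b, out=a)            # a = a + b
        elapsed = time.perf_counter() - start

        # The triad touches three arrays of 8-byte doubles (read b, read c, write a).
        gbytes = 3 * n * 8 / 1e9
        print(f"~{gbytes / elapsed:.1f} GB/s effective memory bandwidth")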
    • by StandardCell ( 589682 ) on Monday September 22, 2003 @10:21AM (#7024901)
      If you look at a modern evaluation board with gigabit SERDES, or SERialization-DESerialization (e.g. 3.125 Gbit/s differential signal pairs per channel), the trace routes are typically rounded, with no square corners. This is done because the effective impedance along the line needs to be carefully controlled. The traces also run in closely routed parallel pairs, because it's typically a differential signal. It actually looks a bit like a set of miniature train tracks without the railroad ties.

      In fact, multichannel SERDES is the next real interconnect technology. It's used in InfiniBand, HyperTransport, PCI Express, Rambus RDRAM and 10 Gb/s Ethernet (usually as 4x3.125 Gbit/s channels in a XAUI interface between the optical module and the switch-fabric silicon, with 8b/10b coding). There are even variants, such as LSI Logic's HyperPHY, deployed specifically for numerous high-bandwidth chip-to-chip interconnections. The problem that is cropping up is that traditional laminate PCBs are becoming the limiting factor in increasing per-channel speed, to the extent that 10 Gbit/s per channel is next to impossible on these boards due to the lack of signal integrity. There has been some experimentation with very short hops on regular boards, as well as with using PTFE resins to manufacture the boards themselves, but it's precarious at best.

      As for Sun's technology, it's interesting but I don't know how much it will catch on or how feasible it will be. It creates packaging issues and requires good thermal modelling and 3-D field modelling to account for expansion and contraction through the operating temperature range and the presence of nearby signals, which could affect the integrity of the signals.
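
      The XAUI arithmetic mentioned above, spelled out (standard figures for 8b/10b-coded lanes):

        lanes = 4
        raw_per_lane_gbps = 3.125
        coding_efficiency = 8 / 10     # 8b/10b: every 10 line bits carry 8 payload bits

        payload_gbps = lanes * raw_per_lane_gbps * coding_efficiency
        print(f"payload: {payload_gbps:.0f} Gbit/s")   # 10 Gbit/s, i.e. 10 GbE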
    • Re:FINALLY! (Score:5, Insightful)

      by chrysrobyn ( 106763 ) * on Monday September 22, 2003 @10:30AM (#7024970)
      Someone gets it. As an Electrical Engineer-in-training, I was always frustrated with people who got these big bad processors and wondered why their improvement was minimal.
      They never quite grasped that the biggest bottleneck is between the processor and memory.

      Don't get too frustrated with this. There will always be people who don't understand something fundamental to your training. That's why you're trained, to understand these non-obvious fundamentals. Now that you understand a CPU has to be fed data in order to process it, it's obvious, but a PHB wouldn't necessarily come to that conclusion on his own.

      My EE instructor always said that they could improve performance by doing one simple thing: make the interconnects on the motherboard between the CPU and RAM rounded instead of cornered. You could then increase bus speed, as you wouldn't have the magnetic loss at the corners that you do now.
      Fix that, and you can see a SUBSTANTIAL improvement in performance. The only thing that can be done beyond that is to get a Platypus drive (a solid-state "hard drive" from Quikdata, built from DDR RAM). Then you reduce your hard-drive access time from milliseconds to nano/microseconds.

      Your EE instructor will tell you lots of things that can help performance. For example, making the L2 cache be the size of main memory. Just because it helps performance doesn't make it worth the price. Rounded edges on the PCB are not easy to accomplish and their benefits may not be outweighed by the added price-- even for exceptionally high end servers. Without looking at the math, I would like to toss "10% performance adder, 50% cost adder" out into the air, and say that most people would rather save the dough. Another factor to consider is reliability. Intuition suggests to me that reliability would go up without sharp edges, but intuition also tells me that modelling board coupling on a 4 layer board would be a real pain in the ass, to say nothing of a server class 6 or 8 (or higher) layer board if you have to model curved structures. You might not find an easy way to capitalize on your wonderful curved wire performance. Not only do you have to worry about your slowest path, but your quickest one can't arrive so quickly that the other chip can't sample the previous output.

      Take care in your classes when you use the word "only". Taking advantage of our wonderful next generation 64 bit processors and multiple gigs of RAM, we could conceivably copy the contents of the hard drive to main memory (especially if we are only concerned with 1-4 gigs of data in a low cost solution). Here, we get the enhanced bandwidth of main memory instead of having to kludge through the southbridge, PCI controller, IDE/SCSI to RAM interface and back.

      There are many things that improve system performance-- and the system is the only thing that matters-- rounded wires and SSDs (solid state drives) are only the beginning. Depending on the application, a larger L3 cache may make more difference, or a wider faster CPU to CPU interface, or a pair of PCI controllers hanging off the southbridge for twice the bandwidth, or integrating the northbridge onto the CPU, or ...

      The best engineering advice I can give you is that the answer is always, "It depends". You'll spend the next 5-30 years of your life learning how to answer the follow-on question, "Depends on what?" Almost everything has advantages and disadvantages, and there are few absolutes.

      The "Someone gets it" and "They never quite grasped" attitude may get you in trouble. Being proactive and explaining and educating instead will likely be more effective.

      • The "Someone gets it" and "They never quite grasped" attitude may get you in trouble. Being proactive and explaining and educating instead will likely be more effective

        Not on Slashdot, alas.

    • Re:FINALLY! (Score:2, Insightful)

      by ingenthr ( 34535 )
      Quite right. Sun gets this quite well. Look for the articles on the Niagara processor. People always look at it as an SMP on a chip and try to compare it to hyperthreading or the stuff that IBM has done. However, what Sun is doing is 'fast switching' between threads based on stalls for memory access. This is another way of solving the problem you mention.

      If the software and programming model are capable of exploiting this (and most software run on Solaris is), you effectively trade off bandwidth (easy
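
      A toy model of that latency-hiding trade-off; the cycle counts are invented, not Niagara's real numbers:

        compute_cycles = 4     # assumed useful cycles between memory accesses
        miss_latency = 100     # assumed cycles to service a memory stall

        def utilization(threads):
            # While one thread waits on memory, the others supply work; the core
            # saturates once the pooled work covers the whole stall period.
            return min(1.0, threads * compute_cycles / (compute_cycles + miss_latency))

        for t in (1, 4, 8, 16, 32):
            print(f"{t:2d} threads -> ~{utilization(t):.0%} core utilization")

      A single thread keeps such a core only a few percent busy; pile on enough hardware threads and the stalls disappear under other threads' work.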
    • by Anonymous Coward
      You need more training. Or less ego.

      Look at a recent P4 motherboard for 45-degree traces. Look at any previous motherboard with RAMBUS (even a Nintendo 64 from November 1999) for curved traces.

      It's not so much a question as knowing about something as it is implementing it. If it isn't affordable, it isn't worth it. Because if it isn't affordable, you might be able to buy two affordable ones for the same price. And you're going to have trouble beating the performance of two systems with one.

      Finally, to m
    • Chuck Moore's 25x MISC stack machine technology.

      See www.colorforth.com and www.ultratechnology.com for more information on this overlooked and underrated stuff.

  • Is this new? (Score:4, Insightful)

    by 4im ( 181450 ) on Monday September 22, 2003 @09:00AM (#7024171)

    Sounds a lot like the ol' Transputer (from INMOS), only faster of course. One could also think of AMD's HyperTransport. So, again, except maybe for the speed, I don't see much innovation here.

    If only people could remember that "terra" has something to do with earth; "tera" is the unit...

    • I wouldn't say it's innovation, but it's a step in the right direction. It's very close to the CSP model as dealt with before (http://www.wotug.org/), which should allow for efficient use of multiple processors.
    • Re:Is this new? (Score:3, Insightful)

      It doesn't sound like the Transputer to me. Sure, they resemble each other in that you can build a 2D array of chips from them by design, but you miss (or inadvertently downplay) that the innovation here is in the fundamental electronic-engineering question of what happens in the bits of circuitry that drive the pins/pads. The transputer used asynchronous links, conventional pins and a nice but conventional memory interface; the Sun chip is doing something new, or if not new, seldom seen and highly promisin
    • Yes, it is. And no, it doesn't. The Transputer has a printed-circuit board. This doesn't. Any more questions?
  • or it might not (Score:4, Insightful)

    by penguin7of9 ( 697383 ) on Monday September 22, 2003 @09:00AM (#7024172)
    Placing large numbers of chips adjacent to one another has obvious problems with heat and power, in particular when running at those speeds. That, rather than interconnect technology, is probably the main reason we still package up chips in large packages.

    This might be useful for placing a small number of chips close together, in particular chips that may require different manufacturing processes.
    • Re:or it might not (Score:2, Insightful)

      by Derivin ( 635919 )
      Heat will definitely be an issue, but much less power will be required. The majority of the power a chip draws is used to push data on and off the chip; it takes a lot of power to drive a signal down a 25-micron PCB trace.

      This technology (if it pans out) will most likely enter the consumer sector in cell phones, DVD players and other small consumer electronics produced in very large numbers.

      Silicon wafer production has always had one major problem: impurities. The ability to use more of the
    • Increase yield? (Score:2, Interesting)

      by mmol_6453 ( 231450 )
      ...in particular chips that may require different manufacturing processes.

      Or at least portions of more complex circuits where part of the circuit may not warrant the added cost of SOI, 90nm, or strained silicon.

      But then, those divisions are already made. AMD, for one, is working on recombining those parts. As an example, consider AMD's putting the memory controller on the CPU die.

      I am curious, however, as to whether you could have more than one silicon die in the same ceramic casing. This would let y
  • What effect will this have on the dip to chip ratio? Will this prevent breakage under load? Has anyone measured the performance benefits of salsa compared to sour cream and onion?

    Most importantly, will I still need my ThinkGeek 'I am teh Chip Haxx0R' bib?

  • Anybody remember the viruses which could travel from floppy to floppy back in the C64 days? You would put an infected floppy next to a clean floppy, and the virus would just hop over! Don't know about the speed though...

    (No kidding, there were people back then who told and believed this nonsense ;)
  • Great potential for grid computing: just keep adding chips.
  • The article is a bit vague as to what the innovation really is.

    The article immediately made me think of multi-chip modules, an idea which never really caught on in the industry (except at IBM), and I'm not sure how Sun's innovation isn't just a take-off on that idea. Multi-chip modules have failed on cost, since a lot has to go right to get a module that works.

    Any practical chip-to-chip connectivity scheme had better have a good rework scheme. If it doesn't, it's
    • Well, one benefit I can see is that you don't have to drive a huge (in silicon terms) chunk of copper over to the next chip. I guess they have the distances and other parameters figured out so that this capacitive coupling is actually an advantage compared to copper traces.

      They could probably do something similar with arrays of laser diodes beaming out the edges of the chips. Then again, maybe the capacitive coupling is better than that in terms of power consumption and speed.
      • They could probably do something similar with arrays of laser diodes beaming out the edges of the chips

        Definitely. That would be electromagnetic coupling. Sun is using capacitive coupling, i.e. only the E field. Last week we saw an article on a company using inductive coupling (magnetism) for short-distance data links (in their first product, a wireless earset).

        EM is long-range (power drops with the inverse square of distance) but very hard to convert to and from an electrical signal.

        Magnetic coupling is short-range (power drops with the inverse sixth power).
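
        Plugging numbers into those falloff exponents makes the contrast vivid (pure arithmetic, no physics beyond the exponents quoted above):

          for d in (2, 5, 10):              # distance as a multiple of some reference
              em = 1 / d**2                 # radiated EM power: inverse square
              inductive = 1 / d**6          # near-field inductive power: inverse sixth
              print(f"{d:2d}x distance: EM {em:.4f}, inductive {inductive:.1e} of reference")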
  • Shall we call it Prime Intellect [planetmirror.com]?

    (actually, by the story naming convention, it would be closer to intellect 1, but oh well)
  • Bad math? (Score:5, Insightful)

    by Quixote ( 154172 ) on Monday September 22, 2003 @09:19AM (#7024332) Homepage Journal
    I hate it when the hype overshadows the technical details. Here's a snippet from the article:

    By comparison, an Intel Pentium 4 processor, the fastest desktop chip, can transmit about 50 billion bits a second. But when the technology is used in complete products, the researchers say, they expect to reach speeds in excess of a trillion bits a second, which would be about 100 times the limits of today's technology.

    If a P4 is already doing 50 Gbps (as they say), and this uber-technology will allow 1 Tbps (which is 20x the P4's 50 Gbps), then how is that "100x the limits of today's technology"?

    <shakes head>
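
    The arithmetic behind the complaint:

      p4_gbps = 50                    # the article's figure for a Pentium 4
      claimed_gbps = 1_000            # "in excess of a trillion bits a second"
      print(claimed_gbps / p4_gbps)   # 20.0 -- a 20x jump, not 100x

    (The "100x" may refer to conventional pad-driver technology rather than the P4's aggregate figure, but the article doesn't say.)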

  • by Mr. Ophidian Jones ( 653797 ) on Monday September 22, 2003 @09:20AM (#7024350)
    Normally I don't pimp Sun, but here's something that makes me think they still have a finger on the pulse of things:
    Read about plans for Sun's "Niagara" core [theregister.co.uk]

    I understand they hope to create blade systems using high densities of these multiscalar cores for incredible throughput.

    There's your parallel/grid computing. ;-)

  • the Transputer [sbu.ac.uk]. It had four hardware links available, and the description of the way the different processors communicate is very similar to what is described in the article.

    Of course, to take maximum advantage of this communication speed in general parallel applications, main memory access would have to be improved. I'd guess these things will have huge on-chip caches.
  • by leery ( 416036 ) on Monday September 22, 2003 @09:21AM (#7024360) Journal
    IANAEE either, but this made a little more sense to me after I read this InfoWorld article [infoworld.com], which talks about two other aspects of Sun's DARPA-funded project: clockless "asynchronous logic," and building processors with interchangeable, upgradable modules. They absolutely need these busless "proximity" interconnects for the processor modules to communicate at close to on-chip speeds, and the clockless architecture lets them get rid of the bus. Or vice versa... or something like that.
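
    A toy software illustration of the clockless idea: sender and receiver pace a transfer with request/acknowledge transitions instead of a shared clock. This is a conceptual sketch only, not how Sun's hardware works:

      class AsyncLink:
          """Two-phase (transition-signaled) handshake over a shared channel."""
          def __init__(self):
              self.req = 0
              self.ack = 0
              self.data = None

          def send(self, value):
              assert self.req == self.ack    # previous transfer fully acknowledged
              self.data = value
              self.req ^= 1                  # a req *transition* signals new data

          def receive(self):
              assert self.req != self.ack    # a request is pending
              value = self.data
              self.ack ^= 1                  # ack transition completes the handshake
              return value

      link = AsyncLink()
      link.send(42)
      print(link.receive())                  # 42 -- no clock anywhere, only handshakes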

    Working prototype computer about six years away, according to the article.
    • [please strike the "absolutely" from above--i'm not qualified to use that adverb] ...Obviously, announcing this kind of concrete breakthrough is also good for PR, stock price, and future DARPA funding.
  • by Anonymous Coward
    As usual with a lot of computer science, this appears to be just an old idea reinvented... the Transputer [smith.edu]... and about time too!
  • Now there's something we EEs know about. (Or not...) [ieee.org] We got it wrong in the upper Northeast with the huge blackout.

    Looks like it's even used in tiny chip-to-chip communications. Basically, to overcome the impotence caused by the little bit of impedance between the chips, we'll add some capacitance (caps). Adding the caps to ground provides reactive power.
  • There is nothing new under the Sun. This concept, along with several others like it, has been around for at least 15 years.
  • Ralston Purina [purina.com] will sue for copyright infringement.
  • by Anonymous Coward
    "Perhaps the beginning of a solution to the lag between memory and interconnect speed versus cpu frequency?"

    You mean the problem that everyone outside the PC world already solved? Please, people, for your own sake go learn about the Alpha architecture, where all the CPUs connect to other CPUs via north, south, east and west links. They can all communicate that way, even routing around failed CPUs. Then you can start crying when you realize Crapaq threw it away.
  • I can't help but wonder just how bad the heat problems could eventually get on a system designed like this. I mean, the northbridge on a typical PC these days can burn your hand...
  • Nothing new here. No?
    I remember seeing the first Transputers on my very first CeBIT visit sometime in the early 90s. The Transputer workstations would crunch full-screen fractal graphics in seconds, which was an amazing feat back then. Just plain *everybody* was convinced they would put the then-ruling Amiga to rest or - also a popular theory back then - would be adopted by Commodore. There is this Transputer programming language, Occam, that, as far as I can tell, makes Java, C# and all the rest look like kiddiecrap. Everyone
    • Firstly, it's transputer and occam: no capitalization. And occam didn't rule; in many ways it sucked. No data structuring (until occam 2.5, by which time it was too late) and numerous painful limitations to ensure appropriate semantics for checking for parallelism mistakes. occam's variant of CSP was cool, but others (Rob Pike) had done it before the Inmos crowd, and in some respects in a better way. The reason the transputer was popular was its floating-point unit, which at the time was the dog's balls, the link
  • Seems to me that not only would such a design cause RF interference, but it would be susceptible to strong RF fields as well. I doubt it could pass part 15 compliance.

  • ...does anyone else remember the MicroJ-11, a PDP-11-on-TWO-chips implementation in a single DIP? Two chips wired together on one carrier. (IIRC the floating-point unit was one chip and everything else was on the other.) It got used in cluster storage controllers (HSC70/90) and all sorts of interesting gadgets.
  • Ivan E. Sutherland has always been a great thinker. An article about asynchronous computers fascinated me last year; you can find more details here [weblogs.com]. And you can count on him for real products to come.
  • With all this talk of Sun Chips, is anyone else hungry? I wonder if they'll produce a ranch version.

  • Is that...

    1,000,000,000,000 bits per second

    or

    2^40 bits per second?

    There's a whole bunch of bits per second of difference there...
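
    The gap between the two readings, for the record:

      decimal_tera = 10**12        # SI terabit: 1,000,000,000,000 bits/s
      binary_tera = 2**40          # binary reading: 1,099,511,627,776 bits/s
      diff = binary_tera - decimal_tera
      print(f"{diff:,} bits/s difference (~{diff / decimal_tera:.0%})")   # ~10%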
  • "Perhaps the beginning of a solution to the lag between memory and interconnect speed versus cpu frequency?"

    Thank God! This was something that used to keep me up at all hours of the night. A solution to this problem will change the world. It may even stop the RIAA from suing young girls and stop global warming. Thank you for letting me sleep at night once again.

    WTF?

    --ken
  • The lab next door to me has an old Evans & Sutherland ESV graphics processor. It's the size of a small dorm fridge, was built in '90 or '91, and is still working... as an end table, after being retired in '98. But I understand that a few are still doing graphics work today, despite (I think) not being Y2K compliant. Anyway, I like knowing that at 65, Dr. Sutherland's still out there working, since back in the day he apparently did a lot of cool work.
