Hardware

10-TFlop Computer Built from Standard PC Parts

OrangeTide writes "Using PCI host adapters and Xeon processors, engineers at Lawrence Livermore National Labs have achieved 10 TFLOPS relatively cheaply. More information can be obtained from this article at EETimes." Lately, Linux seems to be the operating system of choice for new supercomputers, and this one's no different. It's cool to see big iron made cheaply.
  • Imagine... (Score:5, Funny)

    by Anonymous Coward on Tuesday November 12, 2002 @02:50AM (#4649302)
    A commodity supercomputing cluster of these! (There has to be a better name for it, but I'm new here on Slashdot).
  • by jabex ( 320163 ) on Tuesday November 12, 2002 @02:51AM (#4649305) Homepage
    ... which was specifically developed for running Doom III.
  • imagine the future (Score:5, Insightful)

    by ryochiji ( 453715 ) on Tuesday November 12, 2002 @02:56AM (#4649317) Homepage
    From the article:
    >The 1- to 10-teraflops processing range is opening up a revolutionary capability for scientific applications

    In the not too distant future, that kind of processing power could very well be available in home PCs. Imagine what that would do to... well, I mean, dang it, what the heck will we do? Game frame rates can only go so high. Even the realism of 3D graphics may have its limits. Oh sure, we'll find something, but it's difficult for us to imagine now...

    • by Charles Dodgeson ( 248492 ) <jeffrey@goldmark.org> on Tuesday November 12, 2002 @03:00AM (#4649335) Homepage Journal
      In the not too distant future, that kind of processing power could very well be available in home PCs. [...] Oh sure, we'll find something [to do with it], but it's difficult for us to imagine now

      the Search for Extremely Trivial Iterations at home.

    • by jericho4.0 ( 565125 ) on Tuesday November 12, 2002 @03:32AM (#4649429)
      I know exactly what I want to do with this kind of power. I want a massive FPS with a wicked frame rate. I want every object and material in that world to react to me. Newtonian physics down to grains of sand. Nothing short of movie-quality realism. I also want it to be massively multiplayer. That will require some huge bandwidth and I/O. Get on it, Livermore!

      • Actually, it would be pretty cool to see a community that contributed to all the little details - kind of like a co-op world.

        Then you'd have this huge pool of resources from which to draw - no time to model and texture that bookcase you need for a room? Just go and buy it.

        Grin - there could be auctions, just like real life, and since the 'structures' and objects you'd be buying have taken time and effort to create, and will have varying degrees of craftsmanship, there's a chance it would actually turn into a market.

        The value, of course, would be that everybody would have to be able to use the resources, and to resell and develop their own parts of the world.

        We've seen something vaguely similar in the mod community since Doom (the original, kiddies), but it's not really there yet.

        • Actually there is... or was... I don't know. It's called AlphaWorld. It's a huge multiuser VRML-based world where anyone could claim some space and build something. Don't know if it exists anymore; I used it years ago on my 486 with a 14.4k modem :) Google for it if you are interested...
      • ... and then we can give Slartibartfast the award for his lovely work on fjords.
    • yeah, well.. (Score:3, Interesting)

      by radon28 ( 593565 )
      "There is no reason anyone would want a computer in their home." - Ken Olson, President, chairman and founder of Digital Equipment Corporation, 1977.
      • Re:yeah, well.. (Score:2, Insightful)

        by Kibo ( 256105 )
        Interesting observation. So whatever happened to DEC, anyway? Oh yeah... it's not always the sick or weak; sometimes it's the unaware that end up being prey.
      • One of the bigwigs at Xerox said, "We are not going to gamble our future on something called a mouse," and just gave away the precursor of Macintosh to Steve Jobs. Those suits in the boardrooms make village idiots look smart.
    • Oh sure, we'll find something, but it's difficult for us to imagine now...

      Come on, you know what we'll do with this; the same thing we do with every major technological advance: new, more realistic, and more processor-intensive pr0n.

    I'll easily use up all that computing power with my single program that simulates single-cell and multi-cell bacteria colonies and uses mutation to create more specialized and complicated creatures.
  • We use it! (Score:4, Interesting)

    by e8johan ( 605347 ) on Tuesday November 12, 2002 @02:57AM (#4649318) Homepage Journal
    Where I work (I can't say where... I've signed papers...) we have replaced an SGI 'supercomputer' (or minicomputer, whatever; a big number-crunching beast of silicon!) with a Beowulf cluster. This not only gives us great scalability, but also lots of FLOPS per dollar (or rather, krona :^P).
  • by joeflies ( 529536 ) on Tuesday November 12, 2002 @02:57AM (#4649319)
    Then the world will finally see the 4000 Playstation 2's that Saddam used to build a supercomputer
    • by 7-Vodka ( 195504 ) on Tuesday November 12, 2002 @03:39AM (#4649446) Journal
      Or maybe just one Apple Mac, sitting all alone in a big empty room because Saddam bought the marketing spiel.
    • Processing power (Score:5, Interesting)

      by rovingeyes ( 575063 ) on Tuesday November 12, 2002 @03:42AM (#4649454)
      Actually, your statement made me wonder for a while. I remember that until not long ago, the US wouldn't let other countries buy the latest supercomputers because they feared they'd be used for nuclear explosion simulations. I'm not sure if that's still the case.

      Anyway, what I'm trying to point out is that it is becoming very convenient to build a supercomputer out of lots of PCs that just lie idle. I am not sure if Saddam has heard about cheap Linux systems. But what if he could build a supercomputer cluster?

      Boy, this gets interesting and scarier at the same time.

      • Re:Processing power (Score:2, Interesting)

        by Duds ( 100634 )
        Indeed. Beowulfing (is that a new word?) conveniently skips a lot of that. The law was actually changed when the US realised that the PlayStation 2 was technically a supercomputer. Not that they had jurisdiction over the PS2 itself, but it brought home how daft the rule was.
      • by sql*kitten ( 1359 ) on Tuesday November 12, 2002 @04:17AM (#4649524)
        Anyway, what I'm trying to point out is that it is becoming very convenient to build a supercomputer out of lots of PCs that just lie idle. I am not sure if Saddam has heard about cheap Linux systems. But what if he could build a supercomputer cluster?

        Well, it depends. A Linux cluster is a good way to render a movie, because you can easily parallelize that task - send a frame to each node you've got, wait for it to come back, send out the next one, then when you're done composite them into an animation. That's easy, because you can make each task essentially stateless. For example, you don't have to wait for frame 1 to rasterize before you know how to light frame 2.
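
        In code, that stateless farm-out pattern is about as simple as parallelism gets. A minimal Python sketch (render_frame is a hypothetical stand-in for the real renderer, and the process pool plays the role of the cluster's nodes):

            from multiprocessing import Pool

            def render_frame(frame_number):
                # Hypothetical stand-in for the renderer: each frame is
                # computed independently of all others -- no shared state.
                return f"frame-{frame_number:05d}.png"

            if __name__ == "__main__":
                with Pool(processes=8) as pool:  # one worker per node/CPU
                    # The pool can hand frames out in any order and simply
                    # collect the results for final compositing.
                    frames = pool.map(render_frame, range(1000))
                print(f"rendered {len(frames)} frames, ready to composite")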

        But in many scientific computations, there is a limit to how far you can subdivide a task. Say you are modelling the movement of a gas in 3-dimensional space: you cannot just partition your space 3x3x3 and send it to 27 compute nodes, because what happens in each partition both influences and is influenced by what happens in adjacent partitions. If you did try to do something like this on a cluster designed for rendering movies (or brute-forcing a cipher, or serving web pages), performance would be terrible because of the overhead of communication between nodes. For that, a Single System Image machine has a vast advantage.
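
        To see where the communication cost comes from: every timestep, each node must trade boundary ("ghost") cells with its neighbours before it can update anything. A minimal sketch, assuming mpi4py and a 1-D decomposition for brevity:

            # Run under MPI, e.g.: mpiexec -n 4 python gas.py
            import numpy as np
            from mpi4py import MPI

            comm = MPI.COMM_WORLD
            rank, size = comm.Get_rank(), comm.Get_size()
            left = rank - 1 if rank > 0 else MPI.PROC_NULL
            right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

            cells = np.random.rand(1000)  # this node's slab of the gas
            for step in range(100):
                # Exchange boundary values with both neighbours; PROC_NULL
                # makes the exchange a no-op at the edges of the domain.
                ghost_left = comm.sendrecv(cells[-1], dest=right, source=left)
                ghost_right = comm.sendrecv(cells[0], dest=left, source=right)
                # ...update the interior using ghost_left/ghost_right; the
                # update cannot start until this latency-bound exchange ends.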

        So the question is (and I don't know, I didn't study nuclear physics beyond A-level), are the significant computational problems associated with the development of nuclear weapons easy to parallelize, or do they require a real supercomputer [sgi.com]?
        • So the question is (and I don't know, I didn't study nuclear physics beyond A-level), are the significant computational problems associated with the development of nuclear weapons easy to parallelize, or do they require a real supercomputer [sgi.com]?

          Well, a simple pointer to the answer might be this article [zdnet.com.au]. Whether that experiment was successful, or could be reproduced, is another question.

        • by FuzzyDaddy ( 584528 ) on Tuesday November 12, 2002 @08:36AM (#4650233) Journal
          So the question is (and I don't know, I didn't study nuclear physics beyond A-level), are the significant computational problems associated with the development of nuclear weapons easy to parallelize, or do they require a real supercomputer [sgi.com]?

          I believe the calculations needed are massive finite element calculations. And I would imagine that things happen quickly enough in a nuclear explosion that there's a lot of significant stuff going on over a time period much shorter than it takes for any change to move from one side of the simulated device to the other.

          As an analogy, suppose you wanted to simulate a large number of gravitating bodies. You would break the problem up into sections. Even though each body acts on every other, bodies outside a certain distance can be treated by their average force. So you can simulate things near each other on the same node, and have the nodes talk to pass information about the "average" field. It requires some communication between nodes, but a large amount of work can be done on an individual node.
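
          A toy version of that near/far split (illustrative only; the 1-D force law and all names here are stand-ins, not anything LLNL runs):

              import numpy as np

              G = 1.0  # toy units

              def local_accel(pos, mass, far_field):
                  """pos, mass: this node's bodies (1-D toy model).
                  far_field: (total_mass, centre_of_mass) summaries received
                  from the other nodes -- the average field."""
                  acc = np.zeros_like(pos)
                  for i in range(len(pos)):
                      for j in range(len(pos)):  # near field: exact local sum
                          if i != j:
                              dx = pos[j] - pos[i]
                              acc[i] += G * mass[j] * dx / abs(dx) ** 3
                      for m_tot, com in far_field:  # far field: one term per node
                          dx = com - pos[i]
                          acc[i] += G * m_tot * dx / abs(dx) ** 3
                  return acc

              pos = np.array([0.0, 1.0, 2.5])
              mass = np.array([1.0, 1.0, 2.0])
              print(local_accel(pos, mass, far_field=[(50.0, 100.0)]))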

          Or for your gas example, if you broke the problem up into boxes, you would have to "hand off" a particle as it passed from one box to another, and perhaps pass information about forces close to the box boundaries. But if a lot of stuff is happening within a single box (like, say, chemical reactions), you can still get a big benefit out of parallelization.

          Also, if designing nuclear bombs is anything like designing microwave components, you would have several simulations going at the same time, to try different variations on one design. Or you would design several subparts and have them running at the same time.

          In short, I think the problem very much lends itself to parallel computing.

      • Indeed, what if Saddam thought about using a Beowulf cluster to develop nukes?

        Hm. Waitasec. Didn't the USA design nuclear weapons in the 1940s using a dozen nerds with pocket protectors and slide rules?!?

        I honestly don't see the big stink about computing power being linked to weapons research. If our enemies want weapons badly enough, no technology embargo will stop them.
    • Then the world will finally see the 4000 Playstation 2's that Saddam used to build a supercomputer

      I'm glad you got modded up, but it could've been "insightful" instead of "funny".

      Take the Xbox, for example: a neat little box, equipped with a P3-something, and quite cheap. Put Linux on that and build a relatively cheap cluster out of those. Not that far-fetched, if you ask me.

      Had these Linux-capable consoles emerged before the 3D revolution, the price comparison would've been even better.

      And what's best, M$ takes about a $50 loss on each Xbox sold.

      In any case, PC clusters are stepping on the toes of super computers big time. We'll probably either see super computers get cheaper or vanish slowly.
  • zerg (Score:4, Funny)

    by Lord Omlette ( 124579 ) on Tuesday November 12, 2002 @02:57AM (#4649322) Homepage
    Look, all the 'cool' people are doing Linux, right? But BIG IRON is clearly trying to suck you and your money in. Oh, it's so cheap, but they don't tell you about the hidden costs, do they?

    Just say no to BIG IRON!
  • by Wild Bill Hickock ( 618119 ) on Tuesday November 12, 2002 @02:57AM (#4649325) Journal
    can be found here. [top500.org]
  • Parallel computing (Score:5, Interesting)

    by vlad_petric ( 94134 ) on Tuesday November 12, 2002 @02:58AM (#4649329) Homepage
    The difficulty is not in conglomerating processing power... you can do that relatively easily with Benjamins... the real difficulty is in either parallelizing your computations or making a single processor work faster.

    So the teraflops they're mentioning are just a theoretical upper bound; don't get too aroused when you see the number.

    The Raven.

    • by Hawaiian Lion ( 411949 ) on Tuesday November 12, 2002 @03:45AM (#4649460)
      I've done a little parallel programming, working at my college for a professor as well as during an internship at Gemini Observatory in Hawaii.

      From what I've seen, just about any simulation involving large systems of particles can be fairly easily parallelized code-wise. These are mostly the sort of problems that require massive processing power in the first place.

      I can't think of a reason why we shouldn't be getting hyped about these teraflops. We use an 8-node AppleSeed cluster at work and I've seen that thing hump out 4-6 gigaflops of crunching power. It takes as long as a week to run some of our molecular dynamics simulations. If we had 10 teraflops of power in our hands, those simulations could take somewhere on the order of minutes instead of days.
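
      The rough arithmetic behind "minutes instead of days", assuming (unrealistically) perfect scaling:

          peak_now = 5e9            # ~4-6 GFLOPS from the 8-node AppleSeed
          peak_new = 10e12          # 10 TFLOPS
          speedup = peak_new / peak_now     # ~2000x
          week_in_minutes = 7 * 24 * 60     # 10,080
          print(week_in_minutes / speedup)  # ~5 minutes for a one-week run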

      • by sql*kitten ( 1359 ) on Tuesday November 12, 2002 @04:29AM (#4649550)
        I can't think of a reason why we shouldn't be getting hyped about these teraflops. We use an 8-node AppleSeed cluster at work and I've seen that thing hump out 4-6 gigaflops of crunching power. It takes as long as a week to run some of our molecular dynamics simulations. If we had 10 teraflops of power in our hands, those simulations could take somewhere on the order of minutes instead of days.

        As an aside, I have to wonder whether or not that's a good thing. I have noticed in myself and almost everyone I've worked with that having massive amounts of CPU at your disposal makes you sloppy - people tend to take a "shotgun" approach, rather than thinking through a problem, they just "try something" until it works. Of course in some cases, CPU really is cheaper than developer time, but in just as many cases, it's an excuse for laziness. I see this all the time, people will build an over-complex solution using technologies like J2EE and EJBs when something much simpler and more efficient would suffice. For another example, every Slashbot who has complained about bloat in MS Office knows exactly what I mean.

        Roll on the teraflops, but not before developers have the self-discipline to use them well.
        • in case you were interested...

          GROMACS [gromacs.org] is the main simulation program we use. It's very well programmed, optimized, and GPL to boot. I hope that the software I write will have this sort of functionality and optimization.

            GROMACS is the main simulation program we use. It's very well programmed, optimized, and GPL to boot. I hope that the software I write will have this sort of functionality and optimization.

            Indeed, this is one of the cases in which CPU really is useful. I think GROMACS is the core of what Folding@Home does. But you can bet they developed it on much smaller systems, and proved it on trivial cases based on a deep understanding of the theory and algorithms, before making it scale to supercomputers.
        • It IS a good thing. Problems that could never be solved otherwise become easy when you can throw an arbitrary amount of CPU power at them. For instance, we were trying to compact a 5-element antenna array into a smaller space, so one of the engineers cooked up a genetic algorithm to search for a fitter solution, where the fitness parameters favored a smaller antenna with characteristics similar to the original. Well, what do you know: the computer found a 3-element array that would perform within 0.5% of the 5-element array! This is not something a human would ever have found through trial and error or even good design work, but the software found the solution in under a week running on a cluster of about a dozen Suns.
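
          The skeleton of such a genetic algorithm is short. A minimal sketch, with evaluate_antenna as a hypothetical stand-in for the real electromagnetic fitness model:

              import random

              def evaluate_antenna(genome):
                  # Hypothetical stand-in: score designs higher when they are
                  # smaller yet still match the reference characteristics.
                  return -sum(abs(g) for g in genome)

              def mutate(genome, rate=0.1):
                  return [g + random.gauss(0, 1) if random.random() < rate else g
                          for g in genome]

              # Each genome encodes element positions/lengths of one candidate.
              population = [[random.uniform(0, 1) for _ in range(5)]
                            for _ in range(50)]
              for generation in range(200):
                  population.sort(key=evaluate_antenna, reverse=True)
                  survivors = population[:10]  # selection
                  population = survivors + [mutate(random.choice(survivors))
                                            for _ in range(40)]  # reproduction
              print(max(population, key=evaluate_antenna))
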
        • One of the benefits of computers is the ability to solve a problem with iteration rather than trying to come up with a classic "equation" and solve it. When I first entered the job market I had a trusty Pickett N4ES slide rule (and an N600-ES pocket slide rule) and had to first express a problem as an equation and then solve the equation (from the "inside out", which was why HP calculators with RPN were so popular with engineers when they first came out versus the TI models... but I digress).

          With the introduction of the HP-35 calculator (the "electronic slide rule") we could solve problems by just crunching the numbers at our desks. With the availability of programmable calculators (HP-67/97 and HP-41 - both of which I still use... but then I still use the slide rules too) we could program them to iterate through problems.

          Not as elegant, certainly. But lots more efficient. And I'm sure that most of us have lost some of our old abilities to "see" problems in math... and perhaps some students never really learn that. But the jobs still get done and the tools still keep making it easier. I'm thinking about a Beowulf cluster for our office, actually.
    • by tyler_larson ( 558763 ) on Tuesday November 12, 2002 @11:31AM (#4651396) Homepage
      Yes, I saw this computer before it was delivered to LLNL (hardly off-the-shelf parts, BTW). Very sleek looking, though.

      The interesting thing about this setup is that it doesn't work like a traditional supercomputer. It's more like a community of totally independent computers all willing to work on the same problem.

      The system employs a whole lotta control nodes that spend their whole time trying to assign work out to the worker nodes. The problem then becomes not just parallelizing the work but coordinating the workers. Apparently with this cluster design, it's not all as cut-and-dried as with a "real" supercomputer. They have been able to do some really cool stuff, though. Like, for example, any computer in the cluster can address the memory on any other computer.

      The admins I talked to said they weren't really sure just how fast the system could go, because they could never get it to operate at full capacity. They said the fastest they'd gotten it to go was 4 TFLOPS, but they figured that was only 40% of theoretical capacity.

  • Is this a big deal? (Score:4, Interesting)

    by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday November 12, 2002 @03:00AM (#4649334) Homepage
    The important part isn't the number of FLOPS (to get those you can just keep buying more PCs until you reach the desired number) but the performance in applications which are not 'embarrassingly parallel'. In other words, how good is the interconnect between machines? The article talks about a new network to replace Gigabit Ethernet.
    • by dsfd ( 622555 )
      You are right: the network AND the algorithms are the key in many cases, not just FLOPS.

      A good example is solving large linear equation systems, with say 10^7 unknowns or more. This is a central problem in many fields of scientific computing. In our CFD simulations we need to solve 10^6 linear systems, with 3x10^6 unknowns each, to obtain the final answer.

      It is difficult to use a large number of processors to do this efficiently, especially if you use a conventional 100 Mbit/s network with high latency. Currently we are using 36 processors, and the solution of each system takes about 4 seconds. Just multiply to get an idea of the total processing time!
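
      Doing that multiplication:

          systems = 1e6        # linear systems per simulation
          secs_each = 4        # seconds per system on 36 processors
          total_seconds = systems * secs_each   # 4,000,000 s
          print(total_seconds / 86400)          # ~46 days of wall-clock time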

      But without Beowulf clusters (and GNU/Linux is a central part of them), this kind of problem would require conventional, very expensive supercomputers.

  • by jki ( 624756 ) on Tuesday November 12, 2002 @03:01AM (#4649336) Homepage
    Selected clips:

    The system has a few unique features that the lab says will facilitate applications performance, including a fast, custom-made network that taps into an enterprisewide file system.

    "This network approach is nice because we can use a standard PCI slot on each processor node, which gives a 4.5-microsecond latency," he said, as opposed to 90-s latency for Gigabit Ethernet."

    The boards are linked by a network assembled by Linux Networx into a clustered system that will have 960 server nodes.

    The file system, called Lustre, uses a client/server model. Large, fast RAM-based memory systems support a metadata center, and data is represented across the enterprise in the form of object-storage targets. "Being able to share data across the enterprise is an exciting new capability

    I think this is especially interesting because it seems to glue together pieces from traditional clustering and distributed or metacomputing. Is there some site for this project with more details?
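
    For a sense of why that latency figure matters, here is a first-order message-cost model (the 100 MB/s effective bandwidth is illustrative; only the latencies come from the article):

        def msg_time(size_bytes, latency_s, bandwidth_bps):
            # First-order cost model: time = latency + size / bandwidth.
            return latency_s + size_bytes / bandwidth_bps

        for name, latency in [("custom PCI fabric", 4.5e-6), ("GigE", 90e-6)]:
            # 1 KB message at an assumed 100 MB/s on both networks:
            print(name, msg_time(1024, latency, 100e6))
        # Small messages are dominated by latency: ~14.7 us versus ~100 us.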

  • Does that mean... (Score:2, Insightful)

    by haxor.dk ( 463614 )
    ...that Apple's glorious supercomputers are obsolete?

    Damn... :/
    • Re:Does that mean... (Score:4, Interesting)

      by Dr. Spork ( 142693 ) on Tuesday November 12, 2002 @03:18AM (#4649393)
      You know, it wouldn't be stupid of Apple to try to build support for arbitrarily large clusters into Darwin. It would really be a prestige coup if a Mac cluster became a top-500 computer.
      • Imagine an Orchard of them apples!
      • Re:Does that mean... (Score:2, Informative)

        by d^2b ( 34992 )

        Until Apple submits SPECCPU [spec.org] benchmark results, it is hard to escape the conclusion that they are not cost-effective machines for building scientific computing clusters.

        Of course the benchmarks might make that conclusion inescapable.

        Mac fans are welcome to do the benchmarking to prove my suspicions incorrect. Or you could translate this page [u-tokyo.ac.jp] from Japanese. It seems to say that a G4 at 1GHz is about 1/6 the speed of a 2.8GHz P4 on the floating point benchmark.

        Yes, they would be rockin' fast if they used IBM POWER4s. But they don't.

  • by valmont ( 3573 ) on Tuesday November 12, 2002 @03:06AM (#4649352) Homepage Journal
    ... that penguins couldn't do steroids?

  • by Dr. Spork ( 142693 ) on Tuesday November 12, 2002 @03:08AM (#4649358)
    Do I understand correctly that they just wired PCI slots from different motherboards together, instead of running the data around over ethernet (which probably would have been plugged into a PCI slot anyway)? If so, I mean, if there's nothing more to it than that, it seems like this will be a kickass way of clustering. But there must be something more to it than I realize, because if there wasn't, there wouldn't be so many ethernet-based beowulf systems.

    So please explain this. I mean, I have two Linux boxes in my room and each has a free PCI slot. What do I need to do to network them directly over PCI?

    • PCI Null-Modem (Score:5, Interesting)

      by Bios_Hakr ( 68586 ) <xptical.gmail@com> on Tuesday November 12, 2002 @03:46AM (#4649461)
      Uuh, I mean null-card connection. I have never really looked at the PCI spec from an electrical engineer's standpoint, but there are probably power leads, data leads, timing leads, and ground leads on there.

      The data leads should be easy...TX to RX. Although they may use a full-duplex lead where the data shares the bus based on clock pulses.

      The power could be dropped, as both machines already have the proper power requirements. The ground leads could be tied together if you wanted, but dropping them shouldn't have too much impact on the final outcome.

      The tricky part would be the clock pulses. In order to keep data integrity, you need to have both machines on the same clock. The easy way would be to take the crystal from one motherboard and wire it to the other. Same crystal, same clock pulse.

      Then drivers would be needed to make the other computer look like an attached device. Shouldn't be too difficult. Just take a NIC driver and modify it...heavily.

      I think an easier option would be to share data across the IDE bus. Make an IDE driver look like a NIC driver and send IP across IDE. In fact, I remember Linux Journal publishing an article about someone doing IP over SCSI about 2 years ago. Get some SCSI cards and make your own version of a CDDI network ring.
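
      Most of the "IP over some other bus" idea boils down to framing packets over a raw byte pipe. A minimal sketch using SLIP-style framing (RFC 1055), rather than any real IDE or SCSI driver code:

          # SLIP-style framing: delimit packets with END bytes and escape
          # any END/ESC bytes that occur inside the payload (RFC 1055).
          END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

          def frame(packet: bytes) -> bytes:
              out = bytearray([END])  # leading END flushes line noise
              for b in packet:
                  if b == END:
                      out += bytes([ESC, ESC_END])
                  elif b == ESC:
                      out += bytes([ESC, ESC_ESC])
                  else:
                      out.append(b)
              out.append(END)
              return bytes(out)

          def unframe(stream: bytes) -> bytes:
              out, escaped = bytearray(), False
              for b in stream:
                  if escaped:
                      out.append(END if b == ESC_END else ESC)
                      escaped = False
                  elif b == ESC:
                      escaped = True
                  elif b != END:
                      out.append(b)
              return bytes(out)

          assert unframe(frame(b"\xc0 hello \xdb")) == b"\xc0 hello \xdb"
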
      • Cool! The only part that really sounds hard is the clock synching, and the software. So why not really go for it and use the AGP slot? Shouldn't that give you higher throughput and even less latency? This looks like it would be an awesome geek project.
        • Re:PCI Null-Modem (Score:4, Interesting)

          by Bios_Hakr ( 68586 ) <xptical.gmail@com> on Tuesday November 12, 2002 @05:08AM (#4649677)
          AGP is only useful for one-way communication. Great for shoving data to the monitor, but it sucks for pulling data from an outside source. I'd say if you really wanted a challenge, use the memory bus for networking. The problem would be timeouts while waiting for data to be pulled across the wire from the other machine's memory bank.

          Or, go for broke and use the second processor slot in a dual mobo.

          On the cheap end, you could use USB with a null-modem-style cable to link 2 boxes.
      • "The tricky part would be the clock pulses. In order to keep the data integrity, you need to have both bachines on the same clock. The easy way would be to take the crystal from one motherboard and wire it to the other. Same crystal, same clock pulse."

        With the frequencies and distances involved, I doubt that will work. I think the transmission-line delay of the wires will mess it up. I suspect they have some sort of dual-port buffer in the cards to allow multiple clock domains.

        What I'm wondering is: Where can I get those cards and how much do they cost?

  • I need one! (Score:5, Funny)

    by Newer Guy ( 520108 ) on Tuesday November 12, 2002 @03:11AM (#4649371)
    I have a lot of movies to convert to DIVX...
  • by stephenisu ( 580105 ) on Tuesday November 12, 2002 @03:21AM (#4649398)
    I told you my other Boxen was a 1000 node beowulf cluster... But no one believed my sticker...
  • Great, now I can do my spreadsheets and word processing way faster than before... :)
  • It's ten times cheaper, but still roughly a hundred times more expensive than most people can afford.
  • What is most hilarious about all this is that three years ago the same people at the Lab who put together this cluster, Livermore Computing, insisted that Linux was a toy... that it had no future in scientific computing... that it was a hobbyist's OS.

    I sure hope they love the taste of crow....
  • by PerryMason ( 535019 ) on Tuesday November 12, 2002 @04:48AM (#4649595)
    "I was doing my nuclear simulations on the ASCII White and it was like BEEP BEEP BEEP...and like half my work was gone..."
  • Does any /. reader have any info on this? Is this a network / distributed filesystem? Why did they choose to write a new filesystem rather than pick from any of the existing filesystems out there? More importantly, is this code publicly available?

  • (Open)MOSIX? (Score:3, Interesting)

    by Jeppe Salvesen ( 101622 ) on Tuesday November 12, 2002 @06:59AM (#4649922)
    Anyone have any experience using (Open)MOSIX? I have a partially CPU-bound application (the automatic part is IO-bound, the manual part is CPU-bound) built on Perl, Apache and MySQL.

    For those who don't even know what MOSIX is, it is a kernel patch that essentially creates a virtual computer out of several boxes. They claim they will scale your application as long as you have multiple processes (they migrate them as needed) - without any coding on your part.

    Since I'm looking for extra performance with limited resources, this looks like a potentially easy way out :)
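
    One thing worth knowing before trying it: since MOSIX migrates whole processes, the win comes from structuring the CPU-bound part as many independent processes rather than one. A minimal sketch in Python (crunch is a stand-in for the real per-item work; the same idea applies to forked Perl workers):

        from multiprocessing import Process

        def crunch(item):
            # Stand-in for the CPU-bound work; each process is an
            # independent unit that the MOSIX scheduler can migrate.
            _ = sum(i * i for i in range(10_000_000))

        if __name__ == "__main__":
            procs = [Process(target=crunch, args=(n,)) for n in range(16)]
            for p in procs:
                p.start()   # MOSIX can now farm these out across nodes
            for p in procs:
                p.join()
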
  • by sdeath ( 199845 ) on Tuesday November 12, 2002 @07:24AM (#4649985)
    The title says it all. Big Iron is _engineered_. No matter how big or how spiffy a Beowulf cluster is, it's still just a bunch of PC motherboards kludged together with a bunch of network cards. There is a reason Crays are expensive: they are _worth it_ from a performance standpoint, because not every problem lends itself easily to solution on a Beowulf cluster. Some problems require the exchange of a lot of data between a lot of nodes, and a little math will show that it won't take much data interchange to saturate even a GigE switch. Adding more machines is not going to help; craftily designing and overengineering the network _might_, but by the time you get this whole damned thing glued together well enough to approximate a Cray's performance, you'll have spent enough to have just flat-out bought a Cray in the first place.
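
    That "little math", roughly (all figures illustrative):

        bytes_per_step = 10e6      # data each node exchanges per timestep
        port_rate = 125e6          # 1 Gbit/s = 125 MB/s per switch port
        comm_time = bytes_per_step / port_rate   # 0.08 s per step, per node
        # To sustain, say, 100 timesteps/s, each node would need roughly
        # 8x more bandwidth than a GigE port can deliver -- before latency.
        print(comm_time)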

    As others have noted, while this thing may have a theoretical peak performance of 10 TFLOPS, I'm willing to bet that number goes down like Monica Lewinsky on Quaaludes when you feed this magical supercomputer a problem that's _not_ suitable for distributed.net (i.e. one where computations on one node are dependent on computations on another node, like fluid-dynamics problems, turbulence, etc.)

    Yeah, it's interesting as a curiosity, but this is by no means spectacular. Beowulf is good for what it's good for, which is a "poor-man's supercomputer" that works well for coarsely-parallel problems that don't require a lot of internode communication. It's not the Philosopher's Stone, folks.

    -SD
    • The distributed-memory Crays (T3D, T3E) are just the same: boards and network cards. The processors they use are not faster than the latest generation of PC processors. The difference is the NICs, which have about 10 times more bandwidth and 10 times less latency (compared with standard Fast Ethernet cards).

      That is the difference. As you say, for certain problems this means the whole machine is about 10 times faster than a Beowulf.

      However, if/when conventional NICs are fast enough, especially in terms of latency, both systems can become equivalent again. In the meantime, a lot of people are trying to develop parallel algorithms that minimize the number and size of the messages, allowing cheap PCs to be used as supercomputers.
  • Typo in article (Score:2, Informative)

    by Spunk ( 83964 )
    You'd think the EETimes would catch something like this:

    nearly the same performance as the ASCII White system

    No, it's ASCI White [llnl.gov]. Accelerated Strategic Computing Initiative, not the text format.
  • by Anonymous Coward
    Which means there's a 4 GB hard limit on the amount of RAM a process can use, and a big performance hit if a node has more than 4 GB RAM. Of course with 10 TB, there's some room to spare, but if a calculation is not very parallelizable, you're still limited to the speed of one node.
  • There's a distinction to be drawn between big iron and a lot of small iron. People who've never used big iron never draw that distinction, nor do people who're trying to publicise their latest and greatest cluster, but there is a difference. A cluster of fast 32-bit PCs networked with Gigabit Ethernet does not big iron make.
  • by peter303 ( 12292 ) on Tuesday November 12, 2002 @09:16AM (#4650403)
    Two-thirds of the cost over a computer's three-year lifetime is the electricity and cooling system. When TCO is counted, a Transmeta-based cluster or the super-dense SGI cluster announced yesterday is cheaper.
  • It's very impressive what they've built, and I'm not knocking it, but I nearly split my sides at some of the quotes not directly related to the speed or hardware architecture of the thing:

    "We have been using the File Transfer Protocol over Gigabit Ethernet, but now we will be able to read files directly from any available disk,"

    Well - like, wow - NFS/CIFS, anyone? They've been ftp'ing docs to each other? ROFL :)

    "Being able to share data across the enterprise is an exciting new capability. It will allow more collaboration among research projects,"

    Ahh, my sides are splitting: "shock news, scientists discover file sharing" heheh. Don't these guys have a file server? Guys, listen up: you didn't need to design a world-beating clusterbeast with 10 TFLOPS just to share some files! LOL, all that power just to let Larry from the Sub-Atomic Meddling dept. look at a paper from Dave from the Induced Super Novae Working Committee heheheh. These guys need to get out more: imagine their annoyance when they made this big announcement only to discover that not only have Novell, Microsoft/SAMBA, and Unix/NFS done this already - they did it with only one CPU in the server!

    "This network approach is nice because we can use a standard PCI slot on each processor node, "

    Hmm, like any network card you care to mention, really... Heheh "Hey, like... this network stuff is like - cool, man!" What next? They invent a board with a button for each character they need to type? Priceless.

    I'm sure it's great, but I only just stopped laughing at those quotes. I can only imagine (or hope) it's a case of a clueless journo misquoting, or quoting out of context, or just completely missing the point of the project.
  • by grub ( 11606 ) <slashdot@grub.net> on Tuesday November 12, 2002 @09:29AM (#4650471) Homepage Journal

    "We have been using the File Transfer Protocol over Gigabit Ethernet, but now we will be able to read files directly from any available disk."

    translation
    We used to use FTP over Gig-E but came up with something more L337.

  • Ahem!

    Great work with that new supercluster! You guys are doing great, getting the most teraflops for your dollar!

    Ummm...since you don't need it anymore...would you mind letting me have that ASCI White machine?

  • Finally this thing made Slashdot! I've been trying to tell y'all about this for months, but the editors haven't found it newsworthy - see my journal (7/26 and 10/28) for links to additional articles and the home page for the cluster. [slashdot.org]
  • by Conspiracy_Of_Doves ( 236787 ) on Tuesday November 12, 2002 @11:06AM (#4651199)
    a race between engineers trying to make faster and better computers and Microsoft trying to make more bloated and processor-heavy operating systems. So far, Microsoft is winning.
    • Intel has a name for it, "the software spiral". And they are happy as pigs in shit for Microsoft to continue their bloat-ware development at full steam. It just means that people will need more power to run the crap that comes outta Redmond.

      However, it seems maybe M$ is losing ground and thereby helping facilitate the current market slowdown in new consumer system purchases. There's only so much crap they can cram into Office, I guess. And grandpa can't tell the diff between his PIII and that fancy new P4, so why buy one?

      I dunno if I mean this post to be taken as sarcasm or as academic... it could go either way. It's all certainly true, but also somewhat Heller-ian.

  • The term "cluster" has been used in so many contexts that it no longer has any meaning. OpenMOSIX cluster. MPI cluster. Disk cluster. Web server cluster. Is the definition of cluster simply any group of computers either on a LAN or the internet performing vaguely related tasks?
    • ... running in parallel to complete one overall task would be a closer definition.

      A web cluster is a bunch of web servers, all serving the same or slightly different things, working together from the same database/files/etc.

      A "cluster" computer would be a large set of computers that all act as one big computer.
  • I think the interesting part of this story seems to be that instead of standard PCI network cards (they talk about GigE being too slow), they are using Bus Cards - which I am taking to mean some form of SCSI Bus cards.

    In other words, the computers are networked via high-speed SCSI links, to increase bandwidth and throughput. I have always thought this was possible, and that it had probably already been done, but this is the first time I have seen such a thing written up (in other words, it probably has been done in the past, and I just didn't read about it).

    I am thinking the SCSI cards here are being used in a "poor-man's MYRINET" fashion, in order to get past the bottleneck of ethernet NICs and switches. Now, if they only made (or, maybe they do?) a SCSI "switch" (are those called crossbar switches?) for the thing, you could go to a star topology instead...

"If value corrupts then absolute value corrupts absolutely."

Working...