10-TFlop Computer Built from Standard PC Parts
OrangeTide writes "Using PCI host adapters and Xeon processors, engineers at Lawrence Livermore National Laboratory have achieved 10 TFlops relatively cheaply. More information can be obtained from this article at EETimes." Lately, Linux seems to be the operating system of choice for new supercomputers, and this one's no different. It's cool to see big iron made cheaply.
Imagine... (Score:5, Funny)
Howlingly funny? (Score:3, Funny)
Beowulf!
zzziiiippp!!!! (I love the smell of Nomex in the morning!)
Supercomputer developed for... (Score:5, Funny)
Re:Supercomputer developed for... (Score:5, Funny)
Well, considering who's building it and running it, it's reasonable to guess that some form of doom will be simulated on it.
imagine the future (Score:5, Insightful)
>The 1- to 10-teraflops processing range is opening up a revolutionary capability for scientific applications
In the not too distant future, that kind of processing power could very well be available in home PCs. Imagine what that would do to... well, I mean, dang it, what the heck will we do? Game frame rates can only go so high. Even the realism of 3D graphics may have its limits. Oh sure, we'll find something, but it's difficult for us to imagine now...
Re:imagine the future (Score:5, Funny)
the Search for Extremely Trivial Iterations at home.
Re:imagine the future (Score:4, Interesting)
Re:imagine the future (Score:3, Interesting)
Actually, it would be pretty cool to see a community that contributed to all the little details - kind of like a co-op world.
Then you'd have this huge pool of resources from which to draw - no time to model and texture that bookcase you need for a room? Just go and buy it.
Grin, there could be auctions, just like real life, and since the 'structures' and objects you'd be buying have taken time and effort to create, and will have varying degrees of craftsmanship, there's a chance it would actually turn into a market.
The value, of course, would be that everybody would have to be able to use the resources, resell them, and develop their own parts of the world.
We've seen something vaguely similar in the mod community since Doom (the original, kiddies), but it's not really there yet.
Re:imagine the future (Score:2, Informative)
Re:imagine the future (Score:2, Funny)
Re:imagine the future (Score:2)
Good point. Good joke. Yes I'm looking for a total escape from reality, is that so bad?
Re:imagine the future (Score:2)
So, if someone spends time creating a vase for Sims Online v5, it should be available for a scene in Splinter Cell v22.
Given a large enough amount of disk and large enough bandwidth, someone could update their vase every now and then, and the software would check for and automatically update these objects, to either 1) add new realism to the game/system or 2) screw everyone who loaded the new vase with a buggy one that gets fixed moments later.
This could then lead to uniquely crafted virtual objects that might be considered art and sought after like real art.
Re:imagine the future (Score:2)
Good idea, though. This could be hooked up to a voice system so you could request "Baroque Vase number 32" and then place it on a table using some nifty future interface device.
Then again, we'll have to wait for the porn industry to find a way to make money from it before it really takes off.
yeah, well.. (Score:3, Interesting)
Re:yeah, well.. (Score:2, Insightful)
Re:yeah, well.. (Score:2)
Re:imagine the future (Score:3, Funny)
Come on; you know what we'll do with this. The same thing we do with every major technological advance: new, more realistic, and more processor-intensive pr0n.
Re:Image Recognition. (Score:2)
this already exists (Score:2)
Give me hundred million billion trillion teraflops (Score:2)
We use it! (Score:4, Interesting)
Re:We use it! (Score:3, Funny)
Re:We use it! (Score:4, Funny)
Are you referring to this part of your CV [chalmers.se]?
Gardening
Juli 7 - 27 1986 i managed a group of tomato plants.
Re:We use it! (Score:4, Funny)
Re:We use it! -Give him a break! (Score:2)
hej e8johan -
Guys, give him a break! My fiancé graduated with a Master's from Chalmers. It's considered one of the more prestigious engineering schools in Europe. It's the Swedish "MIT". I think a little is being lost in the translation. Don't mistake sarcasm for criticism. Slashdot is truly an international affair; don't push our guests away.
(my Swede and I reside in Indiana, and I'm a homegrown Ohio boy)
Re:We use it! (Score:4, Funny)
Mini computers are the dinosaurs of old.
Hey, wouldn't dinosaurs be the dinosaurs of old? Mini computers would be the dinosaurs of kinda recent. =)
Wait until the weapons inspectors get to Iraq! (Score:5, Funny)
Re:Wait until the weapons inspectors get to Iraq! (Score:4, Funny)
Re:Wait until the weapons inspectors get to Iraq! (Score:2)
Instead of struggling with Windows settings and DLL's, I can blow up countries!
He's certainly not going to save Christmas....
Processing power (Score:5, Interesting)
Anyway, what I'm trying to point out is that it is actually becoming very convenient to build a supercomputer out of lots of PCs that just lie idle. I am not sure if Saddam has heard about cheap Linux systems. But what if he could build a supercomputer cluster?
Boy, this gets more interesting and scarier at the same time.
Re:Processing power (Score:2, Interesting)
Re:Processing power (Score:4, Insightful)
Well, it depends. A Linux cluster is a good way to render a movie, because you can easily parallelize that task - send a frame to each node you've got, wait for it to come back, send out the next one, then when you're done composite them into an animation. That's easy, because you can make each task essentially stateless. For example, you don't have to wait for frame 1 to rasterize before you know how to light frame 2.
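In code, that frame-farm pattern is about as simple as parallelism gets. A toy sketch in Python (purely illustrative; render_frame is a hypothetical stand-in, not anyone's actual renderer):

    from multiprocessing import Pool

    def render_frame(frame_number):
        # a stand-in for a real renderer; each frame needs nothing
        # from any other frame, so there is no shared state at all
        return f"frame_{frame_number:04d}.png"

    if __name__ == "__main__":
        with Pool(processes=8) as pool:   # pretend each process is a node
            frames = pool.map(render_frame, range(240))
        print(f"rendered {len(frames)} frames, ready to composite")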
But in many scientific computations, there is a limit to how you can subdivide a task. Say you are modelling the movement of a gas in 3-dimensional space: you cannot just partition your space 3x3x3 and send it to 27 compute nodes, because what happens in each partition both influences and is influenced by what happens in adjacent partitions. If you did try to do something like this on a cluster designed for rendering movies (or brute-forcing a cipher, or serving web pages), performance would be terrible because of the overhead of communication between nodes. For that, a Single System Image machine has a vast advantage.
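To make that coupling concrete, here is a toy 1-D diffusion step in Python (a hypothetical sketch; a real code would do the same exchange with MPI over the interconnect). Each node owns a slab of the domain, but it cannot advance a single time step without first receiving its neighbours' edge cells:

    import numpy as np

    def diffuse(local, left_ghost, right_ghost, alpha=0.1):
        # one explicit diffusion step on this node's slab; the ghost
        # values are the edge cells received from neighbouring nodes
        padded = np.concatenate(([left_ghost], local, [right_ghost]))
        return local + alpha * (padded[:-2] - 2.0 * local + padded[2:])

    # two "nodes" each own half the domain; they must swap edge cells
    # on every single iteration, unlike the render farm above
    left_slab = np.zeros(8);  left_slab[0] = 1.0
    right_slab = np.zeros(8); right_slab[-1] = 1.0
    for _ in range(100):
        left_slab, right_slab = (diffuse(left_slab, 0.0, right_slab[0]),
                                 diffuse(right_slab, left_slab[-1], 0.0))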
So the question is (and I don't know, I didn't study nuclear physics beyond A-level), are the significant computational problems associated with the development of nuclear weapons easy to parallelize, or do they require a real supercomputer [sgi.com]?
Re:Processing power (Score:2)
Well, a simple pointer to the answer might be this article [zdnet.com.au]. Whether that experiment was successful, and whether it can be reproduced, is another question.
Re:Processing power (Score:5, Insightful)
I believe the calculations needed are massive finite element calculations. And I would imagine that things happen quickly enough in a nuclear explosion that there's a lot of significant stuff going on over a time period much shorter than it takes for any change to move from one side of the simulated device to the other.
As an analogy, suppose you wanted to simulate a large number of gravitating bodies. You would break the problem up into sections. Even though each body acts on every other, bodies outside a certain distance can be treated by their average force. So you can simulate things near each other on the same node, and have the nodes talk to pass the information about the "average" field. It requires some communication between nodes, but a large amount of work can be done on an individual node.
Or for your gas example, if you broke the problem up into boxes, you would have to "hand off" a particle as it passed from one box to another, and perhaps pass information about forces close to the box boundaries. But if a lot of stuff is happening within a single box (like, say, chemical reactions), you can still get a big benefit out of parallelization.
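A toy version of that "average field" trick in Python (illustrative only): bodies in the local box interact exactly, while a distant box is felt only through the total mass and centroid its node sends over the wire.

    import numpy as np

    def accel_local(pos, mass, G=1.0, eps=1e-3):
        # exact pairwise forces among the bodies this node owns
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            d = pos - pos[i]
            r3 = (np.sum(d * d, axis=1) + eps) ** 1.5
            r3[i] = np.inf                  # no self-force
            acc[i] = G * np.sum(mass[:, None] * d / r3[:, None], axis=0)
        return acc

    def accel_far(pos, far_mass, far_centroid, G=1.0):
        # a distant box, summarized by the two quantities its node sent
        d = far_centroid - pos
        r3 = np.sum(d * d, axis=1) ** 1.5
        return G * far_mass * d / r3[:, None]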
Also, if designing nuclear bombs is anything like designing microwave components, you would have several simulations going at the same time, to try different variations on one design. Or you would design several subparts and have them running at the same time.
In short, I think that the problem lends itself very well to parallel computing.
Re:Processing power (Score:5, Insightful)
To make a small portable nuke is harder.
Re:Processing power (Score:2)
Hm. Waitasec. Didn't the USA design nuclear weapons in the 1940s using a dozen nerds with pocket protectors and slide rules?!?
I honestly don't see the big stink about computing power being linked to weapons research. If our enemies want weapons badly enough, no technology embargo will stop them.
Re:The real question is (Score:2)
Answer: Depends on his intent. If he is using it for finding extraterrestrial life, by all means he can go ahead, but if he is using it to test one of his biological weapons, then he is obviously bad.
Re:The real question is (Score:3, Interesting)
What if he finds some ETs who can help him out with some guy, known as GW Bush, who wants to invade his country?
Re:Wait until the weapons inspectors get to Iraq! (Score:2)
I'm glad you got modded up, but it could've been "insightful" instead of "funny".
Take the Xbox, for example: a neat little box, equipped with a P3-something, and quite cheap. Put Linux on that and build a relatively cheap cluster out of those. Not that far fetched, if you ask me.
Had these Linux-capable consoles emerged before the 3D revolution, the price comparison would've been even better.
And what's best, M$ takes about a $50 loss on each Xbox sold.
In any case, PC clusters are stepping on the toes of supercomputers big time. We'll probably either see supercomputers get cheaper or watch them slowly vanish.
zerg (Score:4, Funny)
Just say no to BIG IRON!
the world's fastest machines (Score:3, Informative)
Parallel computing (Score:5, Interesting)
So the teraflops they're mentioning are just a theoretical upper bound; don't get too aroused when you see it.
The Raven.
Re:Parallel computing (Score:4, Interesting)
From what I've seen, just about any simulation involving large systems of particles can be fairly easily parallelized code-wise. These are mostly the sort of problems that require massive processing power in the first place.
I can't think of a reason why we shouldn't be getting hyped about these teraflops. We use an 8-node AppleSeed cluster at work, and I've seen that thing hump out 4-6 gigaflops of crunching power. It takes as long as a week to run some of our molecular dynamics simulations. If we had 10 teraflops of power in our hands, those simulations could take somewhere on the order of minutes instead of days.
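The back-of-envelope math behind that claim, assuming (optimistically) perfect scaling:

    current_flops = 5e9               # middle of the 4-6 gigaflops range
    target_flops = 10e12
    runtime_minutes = 7 * 24 * 60     # a one-week simulation

    speedup = target_flops / current_flops        # 2000x
    print(runtime_minutes / speedup, "minutes")   # about 5 minutes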
Re:Parallel computing (Score:4, Insightful)
As an aside, I have to wonder whether or not that's a good thing. I have noticed in myself and almost everyone I've worked with that having massive amounts of CPU at your disposal makes you sloppy: rather than thinking through a problem, people take a "shotgun" approach and just "try something" until it works. Of course in some cases CPU really is cheaper than developer time, but in just as many cases it's an excuse for laziness. I see this all the time; people will build an over-complex solution using technologies like J2EE and EJBs when something much simpler and more efficient would suffice. For another example, every Slashbot who has complained about bloat in MS Office knows exactly what I mean.
Roll on the teraflops, but not before developers have the self-discipline to use them well.
Re:Parallel computing (Score:2, Informative)
GROMACS [gromacs.org] is the main simulation program we use. It's very well programmed, well optimized, and GPL to boot. I hope that the software I write will have this sort of functionality and optimization.
Re:Parallel computing (Score:2)
Indeed, this is one of the cases in which CPU really is useful. I think GROMACS is the core of what Folding@Home does. But you can bet they developed it on much smaller systems and proved it on trivial cases, based on a deep understanding of the theory and algorithms, before making it scale to supercomputers.
Re:Parallel computing (Score:2)
On the other hand, it beats using a slide rule! (Score:4, Interesting)
With the introduction of the HP-35 calculator (the "electronic slide rule") we could solve problems by just crunching the numbers at our desks. With the availability of programmable calculators (HP-67/97 and HP-41 - both of which I still use... but then I still use the slide rules too) we could program them to iterate through problems.
Not as elegant, certainly. But lots more efficient. And I'm sure that most of us have lost some of our old abilities to "see" problems in math... and perhaps some students never really learn that. But the jobs still get done and the tools still keep making it easier. I'm thinking about a Beowulf cluster for our office, actually.
More theoretical than you think (Score:4, Informative)
The interesting thing about this setup is that it doesn't work like a traditional supercomputer. It's more like a community of totally independent computers all willing to work on the same problem.
The system employs a whole lotta control nodes that spend their whole time trying to assign work out to the worker nodes. The problem then becomes not just parallelizing the work but coordinating the workers. Apparently with this cluster design, it's not all as cut-and-dried as with a "real" supercomputer. They have been able to do some really cool stuff, though. Like, for example, any computer in the cluster can address the memory on any other computer.
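I don't know what mechanism they actually use for that, but the general idea exists in MPI as "one-sided" communication. A minimal sketch with mpi4py (assuming a working MPI install; run it under mpiexec with at least two ranks):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # each rank exposes a small window of its own memory to the others
    local = np.full(4, rank, dtype='i')
    win = MPI.Win.Create(local, comm=comm)

    # rank 0 reads rank 1's memory directly; rank 1 never calls recv()
    if rank == 0 and comm.Get_size() > 1:
        buf = np.empty(4, dtype='i')
        win.Lock(1)
        win.Get(buf, 1)
        win.Unlock(1)
        print("rank 0 read from rank 1:", buf)

    win.Free()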
The admins I talked to said they weren't really sure just how fast the system could go, because they could never get it to operate at full capacity. They said the fastest they'd gotten it to go was 4 TFlops, but they figured that was only about 40% of theoretical capacity.
Is this a big deal? (Score:4, Interesting)
Re:Is this a big deal? (Score:2, Interesting)
A good example is solving large linear equation systems, with, say, 10^7 unknowns or more. This is a central problem in many fields of scientific computing. In our CFD simulations we need to solve 10^6 linear systems with 3x10^6 unknowns each to obtain the final answer.
It is difficult to use a large number of processors to do this efficiently, especially if you use a conventional 100 Mbit/s network with high latency. Currently we are using 36 processors, and the solution of each system takes about 4 seconds. Just multiply to get an idea of the total processing time: 10^6 systems x 4 seconds is about 4x10^6 seconds, or over six weeks of wall-clock time!
But without Beowulf clusters (and GNU/Linux is a central part of them), this kind of problem would require conventional, very expensive supercomputers.
Interesting Approach on Network (Score:5, Interesting)
The system has a few unique features that the lab says will facilitate applications performance, including a fast, custom-made network that taps into an enterprisewide file system.
"This network approach is nice because we can use a standard PCI slot on each processor node, which gives a 4.5-microsecond latency," he said, as opposed to 90-s latency for Gigabit Ethernet."
The boards are linked by a network assembled by Linux Networx into a clustered system that will have 960 server nodes.
The file system, called Lustre, uses a client/server model. Large, fast RAM-based memory systems support a metadata center, and data is represented across the enterprise in the form of object-storage targets. "Being able to share data across the enterprise is an exciting new capability."
I think this is especially interesting, because it seems to glue together pieces from traditional clustering and distributed or metacomputing. Is there some site for this project with more details?
Re:Interesting Approach on Network (Score:3, Informative)
http://www.llnl.gov/linux/mcr/ [llnl.gov].
Re:Interesting Approach on Network (Score:2)
Re:Interesting Approach on Network (Score:2)
Re:Interesting Approach on Network (Score:2)
Does that mean... (Score:2, Insightful)
Damn...
Re:Does that mean... (Score:4, Interesting)
Re:Does that mean... (Score:2)
Re:Does that mean... (Score:2, Informative)
Until Apple submits SPECCPU [spec.org] benchmark results, it is hard to escape the conclusion that they are not cost effective machines for building scientific computing clusters.
Of course the benchmarks might make that conclusion inescapable.
Mac fans are welcome to do the benchmarking to prove my suspicions incorrect. Or you could translate this page [u-tokyo.ac.jp] from Japanese. It seems to say that a G4 at 1GHz is about 1/6 the speed of a 2.8GHz P4 on the floating point benchmark.
Yes, they would be rockin fast if they used IBM Power4s. But they don't.
who ever said ... (Score:5, Funny)
Connections through PCI bus? (Score:5, Interesting)
So please explain this. I mean, I have two Linux boxes in my room and each has a free PCI slot. What do I need to do to network them directly over PCI?
PCI Null-Modem (Score:5, Interesting)
The data leads should be easy...TX to RX. Although they may use a full-duplex lead where the data shares the bus based on clock pulses.
The power could be dropped, as both machines already have the proper power requirements. The ground leads could be tied together if you wanted, but dropping them shouldn't have too much impact on the final outcome.
The tricky part would be the clock pulses. In order to maintain data integrity, you need to have both machines on the same clock. The easy way would be to take the crystal from one motherboard and wire it to the other. Same crystal, same clock pulse.
Then drivers would be needed to make the other computer look like an attached device. Shouldn't be too difficult. Just take a NIC driver and modify it...heavily.
I think an easier option would be to share data across the IDE bus. Make an IDE driver look like a NIC driver and send IP across IDE. In fact, I remember Linux Journal publishing an article about someone doing IP over SCSI about 2 years ago. Get some SCSI cards and make your own version of a CDDI network ring.
Re:PCI Null-Modem (Score:2)
Re:PCI Null-Modem (Score:4, Interesting)
Or, go for broke and use the second processor slot in a dual mobo.
On the cheap end, you could use USB and a null-modem-style cable there to link the two boxes.
Re:PCI Null-Modem (Score:2)
With the frequencies and distances involved, I doubt that will work. I think the transmission-line delay of the wires will mess it up. I suspect they have some sort of dual-port buffer in the cards to allow multiple clock domains.
What I'm wondering is: Where can I get those cards and how much do they cost?
Re:Connections through PCI bus? (Score:3, Insightful)
I don't know much about this type of stuff, but wouldn't it be awesome if the way they made it work is through software? If they did, Linux/Beowulfing would be about to take a huge leap forward.
Anyway, maybe this is not the sort of thing you can solve with software. Whatever they're using to connect the computers sounds very practical though, not just for supercomputers but for general fast networking. When can we do this at home?
Re:Connections through PCI bus? (Score:5, Insightful)
Bus 0, device 12, function 0: PCI bridge: Digital Equipment Corporation DECchip 21152 (rev 3). Master Capable. Latency=64. Min Gnt=4.
But you can't use this to connect a rack of computers. For one thing, the max cable length for connecting two buses would be just a few inches. For putting PCI cards in 1.75-inch-high 1U rackmount cases, there are PCI risers [servercase.com] with a short ribbon cable that connects to the PCI slot. Even these short cables often cause timing problems. For instance, with a riser, cards may work only in the first one or two slots, when they would otherwise work in all the slots.
But even if you could cable all the computers together on one giant PCI bus, it would still be a bad idea. A good 24-port gigabit Ethernet switch (~$2000) has a 480 MB/sec switching fabric, to support full-speed full duplex on each port. 32-bit, 33 MHz PCI is only about 132 MB/sec, not nearly as fast. You'd need a 64-bit, 66 MHz PCI bus to keep up. And there are more expensive Gbit switches with more ports that have 100 Gbit/sec fabric. And this is just gigabit Ethernet, the slowest and cheapest of the high-speed interconnects used in modern Beowulf clusters.
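The bus arithmetic, for the skeptical (peak bandwidth is just width in bytes times clock rate):

    def bus_mb_per_s(width_bits, clock_mhz):
        return width_bits / 8 * clock_mhz

    print(bus_mb_per_s(32, 33))   # ~132 MB/s: classic PCI, slower than the switch fabric
    print(bus_mb_per_s(64, 66))   # ~528 MB/s: enough headroom over the 480 MB/s fabric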
There are faster ways to connect computers than gigabit Ethernet. The EE Times article is very untechnical, but this one [linuxnetworx.com] has some more information. LLNL has used a very fast and very expensive interface called Quadrics [quadrics.com]. This is probably the fastest way to connect computers in a Beowulf. People like Cray/SGI and IBM have faster things still, but they cost real big bucks. Other ways to connect a Beowulf are the above-mentioned gigabit Ethernet (~$100-$250 a node for up to 24 nodes), Myrinet [myri.com] (~$1400-$2000/node up to 128 nodes), and SCI hardware [dolphinics.com] and software [scali.com] (~$1400-$2100/node). Myrinet uses a switch like gigabit Ethernet, and the largest switch they have is 128 ports. SCI is switchless; each card has multiple cables (1-3) and is connected into a ring, or a 2D or 3D torus.
Re:Connections through PCI bus? (Score:2)
Re:Connections through PCI bus? (Score:2)
Did we follow the standard form of not reading the article?
QUOTE:
The advantage of the PCI bus is its low latency.
Re:Connections through PCI bus? (Score:2)
I need one! (Score:5, Funny)
I told you.... (Score:3, Funny)
Need for speed (Score:2)
ten times cheaper (Score:2)
And they said Linux was a toy.... (Score:2, Interesting)
I sure hope they love the taste of crow....
I can see the ads now... (Score:5, Funny)
Filesystem called Lustre... (Score:2)
Does any /. reader have any info on this? Is this a network / distributed filesystem? Why did they choose to write a new filesystem rather than pick from any of the existing filesystems out there? More importantly, is this code publicly available?
(Open)MOSIX? (Score:3, Interesting)
For those who don't even know what MOSIX is: it is a kernel patch that essentially creates a virtual computer out of several boxes. They claim it will scale your application as long as you have multiple processes (which it migrates as needed) - without any coding on your part.
Since I'm looking for extra performance with limited resources, this looks like a potentially easy way out.
This is not "Big Iron" (Score:5, Insightful)
As others have noted, while this thing may have a theoretical peak performance of 10 TFLOPS, I'm willing to bet that number goes down like Monica Lewinsky on Quaaludes when you feed this magical supercomputer a problem that's _not_ suitable for distributed.net (i.e., one where computations on one node depend on computations on another node, like fluid-dynamics problems, turbulence, etc.).
Yeah, it's interesting as a curiosity, but this is by no means spectacular. Beowulf is good for what it's good for, which is a "poor man's supercomputer" that works well for coarsely parallel problems that don't require a lot of internode communication. It's not the Philosopher's Stone, folks.
-SD
Networks and parallel algorithms are the key (Score:3, Insightful)
There is the difference. As you say, for certain problems, this means that the whole machine is about 10 times faster than a Beowulf.
However, if/when conventional NICs are fast enough, especially in terms of latency, the two systems may become equivalent again. In the meantime, a lot of people are trying to develop parallel algorithms that minimize the number and size of the messages, making it possible to use cheap PCs as supercomputers.
Typo in article (Score:2, Informative)
nearly the same performance as the ASCII White system
No, it's ASCI White [llnl.gov]. Accelerated Strategic Computing Initiative, not the text format.
But it's only 32-bit (Score:2, Insightful)
Not big iron. (Score:2)
"total cost of ownership" against off-the-shelf (Score:3, Interesting)
I had to laugh at these bits:- (Score:2, Troll)
"We have been using the File Transfer Protocol over Gigabit Ethernet, but now we will be able to read files directly from any available disk,"
Well - like, wow - NFS/CIFS, anyone? They've been ftp'ing docs to each other? ROFL
"Being able to share data across the enterprise is an exciting new capability. It will allow more collaboration among research projects,"
Ahh, my sides are splitting - "shock news: scientists discover file sharing" heheh. Don't these guys have a file server? Guys, listen up - you didn't need to design a world-beating clusterbeast with 10 TFlops just to share some files! LOL, all that power just to let Larry from the Sub-Atomic Meddling dept. look at a paper from Dave from the Induced Supernovae Working Committee heheheh. These guys need to get out more: imagine their annoyance when they made this big announcement only to discover that not only have Novell, Microsoft/SAMBA, and Unix/NFS done this already - they did it with only one CPU in the server!
"This network approach is nice because we can use a standard PCI slot on each processor node, "
Hmm, like any network card you care to mention, then, really... Heheh. "Hey, like... this network stuff is like - cool, man!" What next? They invent a board with a button for each character they need to type? Priceless.
I'm sure it's great, but I only just stopped laughing at those quotes. I can only imagine (or hope) it's a case of a clueless journo misquoting, or quoting out of context, or just completely missing the point of the project.
Babelfish [Engineer - Geek] (Score:3, Funny)
"We have been using the File Transfer Protocol over Gigabit Ethernet, but now we will be able to read files directly from any available disk."
translation
We used to use FTP over Gig-E but came up with something more L337.
Pick Me! (Score:2)
Ahem!
Great work with that new supercluster! You guys are doing great, getting the most teraflops for your dollar!
Ummm...since you don't need it anymore...would you mind letting me have that ASCI White machine?
More info (Score:2)
Computer engineering is..... (Score:4, Funny)
Re:Computer engineering is..... (Score:2)
However, it seems maybe M$ is losing ground and thereby helping facilitate the current market slowdown for new consumer system purchases. There's only so much crap they can cram into Office, I guess. And grandpa can't tell the diff between his PIII and that fancy new P4, so why buy one?
I dunno if I mean this post to be taken as sarcasm or as academic... it could go either way. It's all certainly true, but also somewhat Heller-ian.
"Cluster" - too many meanings (Score:2)
Re:"Cluster" - too many meanings (Score:2)
A web cluster is a bunch of web servers, all serving the same (or slightly different) content, working together from the same database, files, etc.
A "cluster" computer would be a large set of computers that all act as one big computer.
Interesting... (Score:2)
In other words, the computers are networked via high-speed SCSI links, to increase bandwidth and throughput. I have always thought this was possible, and that it had probably already been done, but this is the first time I have seen such a thing written up (in other words, it probably has been done in the past, and I just didn't read about it).
I am thinking the SCSI cards here are being used in a "poor man's Myrinet" fashion, in order to get past the bottleneck of Ethernet NICs and switches. Now, if only they made (or maybe they do?) a SCSI "switch" (are those called crossbar switches?) for the thing - then you could go to a star topology instead...
Re:whoop de doo (Score:2)
Yeah? Really? So what?
The point is that even though real applications typically achieve only 5-10% of the peak flops it's still a very fast machine. And better bang for the buck than other approaches to achieve the same level of performance.
Re:Why XEONs? (Score:3, Insightful)
Why not use AMD anyway? Because there are Xeon motherboards with chipsets like the Intel E7500 and ServerWorks GC-HE that have greater memory bandwidth and PCI bandwidth than the AMD 760MPX. For many problems in scientific computing, memory bandwidth is what is important, not CPU speed.
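If you want to see that on your own box, here's a crude probe in the spirit of the STREAM benchmark, in Python (illustrative only; the real STREAM benchmark is written in C and is far more careful about timing and caching):

    import numpy as np, time

    n = 20_000_000          # ~160 MB per float64 array, far bigger than any cache
    a = np.zeros(n); b = np.random.rand(n); c = np.random.rand(n)

    t0 = time.perf_counter()
    np.add(b, c, out=a)     # two reads + one write per element, no temporaries
    dt = time.perf_counter() - t0

    print(f"approx. memory bandwidth: {3 * n * 8 / dt / 1e9:.1f} GB/s")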
Re:Why XEONs? (Score:2)
I'd also add that a lot more time has been put into optimizing compilers for Intel processors than for AMD. The differences really come out in computationally intensive situations.
-Isaac
Re:Why XEONs? (Score:3, Informative)
- The AMD SMP chipset is slow (in memory bandwidth) compared to the newer Intel chipsets.
- IIRC, the P4s use less power than the Athlons; this is probably not as important, but it is there.
I'd like to see a comparison of a newer dual Xeon machine vs. a good dual AMD to see the performance difference. I would suspect that the dual Xeon machine would be a bit faster.
Re:Linux OS of choice (Score:2)
And to make your money go further. Considerably further when you consider the possible licence costs of even trying to do something like this with Windows.
Re:No mention of the vendor Linux Networx (Score:2)
The boards are linked by a network assembled by Linux Networx into a clustered system that will have 960 server nodes.
Which part didn't you understand?