FASTRA II Puts 13 GPUs In a Desktop Supercomputer
An anonymous reader writes "Last year, tomography researchers of the ASTRA group at the University of Antwerp developed a desktop supercomputer with four NVIDIA GeForce 9800 GX2 graphics cards. The performance of the FASTRA GPGPU system was amazing; it was slightly faster than the university's 512-core supercomputer and cost less than 4000EUR. Today, the researchers announce FASTRA II, a new 6000EUR GPGPU computing beast with six dual-GPU NVIDIA GeForce GTX 295 graphics cards and one GeForce GTX 275. The development of the new system was more complicated and there are still some stability issues, but tests reveal the 13 GPUs deliver 3.75x more performance than the old system. For the tomography reconstruction calculations these researchers need to do, the compact FASTRA II is four times faster than the university's supercomputer cluster, while being roughly 300 times more energy efficient."
Awesome (Score:5, Funny)
More Awesome (Score:3, Funny)
This was post #2 and already modded -1, Redundant.
Re: (Score:1)
This is sad, since this one was clever.
Re: (Score:3, Funny)
slashdot mods are often, as I observe, sour and pissy skeptics. even if it is humorous to them they will knock it for lack of something else to bash.
Re: (Score:3, Funny)
slashdot mods are often, as I observe, sour and pissy skeptics. even if it is humorous to them they will knock it for lack of something else to bash.
-1 troll
lol. exactly
humor on /. (Score:2)
Re: (Score:2)
Oh yeah, the fact that we get exactly the same comment every time a fast computer + GPU is mentioned shouldn't stop the next moron from posting it.
Re: (Score:2)
Oh yeah, the fact that we get exactly the same comment every time a fast computer + GPU is mentioned shouldn't stop the next moron from posting it.
It's so horrible. Oh god, they must pay. MOD THEM DOWN! MOD THEM DOWN!
hows those lemons?
Too right it's redundant (Score:2)
It's redundant because some smartass mentions Crysis in response to *every fucking article* about someone doing something using powerful GPUs*.
Of course, if it was about CPUs, the post would be about what will be needed to run Windows 8, or 'finally meeting the minimum system requirements for Vista'.
Mostly, you can predict these posts from the title of the article. Doesn't stop crotchety people like me coming to complain about it though...
* Footnote: When someone equally-crotchety complained about t
Re: (Score:2)
Sorry to reply to myself, but I've just noticed two comments:
mgblast's [slashdot.org], which says the same thing as mine more succinctly and bluntly. Embarrassingly, it was in the same god-damn thread.
Further down, we have this comment [slashdot.org] by RandomUsr, who actually does mention Vista. Woo! In fact, he (and the person that responded to him) also mentions antivirus software. Never mind that this is a GPGPU system, just post crap about *something* bloated and wait for the '+1 Funny' mods to roll in.
Gods, reading these two pos
Re: (Score:2)
Gods, reading these two posts made me realise that I need to stop reading and posting to Slashdot when it's late and I'm in a bad mood and feeling misanthropic. *grumbles*
Ya think? Honestly I'm not on Slashdot very often. Maybe it is an overused joke, but I don't post a lot in hardware-related forums, so how would I know? Check my history if you don't believe me. If you think it's overused, not funny or redundant, do what I do when I come across such posts. Roll your eyes, then move on to the next post. Don't get your panties in a twist over it. I'm sorry if not every post on Slashdot conforms to your standards, but if you're that worried about it, go start your own f
Re: (Score:2)
Whoa, whoa. Chill, we're all friends. Don't take it personally, my comments weren't directed solely at you. It isn't a big deal (frankly, are any posts on Slashdot a 'big deal'?), so there's no need to make a mountain out of it. I don't expect it's you that malevolently thinks "Aha! Another hardware article! Just what I need to get a sardonic, lambasting response from BertieBaggio." Equally, I don't look for these jokes just so I can grumble about how people always post them. Generally, the only systematic
Re:Awesome (Score:4, Funny)
Here goes the redundant and offtopic mod.
+1 (Score:2)
Au contraire, I clicked the article link JUST to find this comment. Thank you for maintaining a cherished /. tradition!
Re: (Score:2)
And in Soviet Russia, 13 GPUs supercompute using you!
(Is that the smell of burning Karma?)
Re: (Score:2)
(Is that the smell of burning Karma?)
No, that's petrified grits you're smelling.
Re: (Score:2)
Sadly it doesn't. Why? Because it appears to be running Linux [dvhardware.net].
News Flash (Score:2, Funny)
Re: (Score:1)
How fast is this really? (Score:3, Insightful)
Re:How fast is this really? (Score:5, Informative)
Re: (Score:2, Interesting)
Re: (Score:3, Interesting)
Re: (Score:2, Informative)
you can get absolutely incredible performance out of off-the-shelf GPUs these days.
I had heard this from folks, but didn't really buy it until I read this paper [nasa.gov] today. They get a speed-up (wall clock) using the GPU even though they have to go to a worse algorithm (Jacobi instead of SSOR). Pretty amazing.
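The reason a slower-converging method can still win on wall clock: each Jacobi update reads only the previous iterate, so every update is independent and maps one-to-one onto GPU threads, while SSOR's sweeps consume freshly updated neighbours and are inherently sequential. A minimal CUDA sketch of one Jacobi sweep for a 1-D tridiagonal system A*x = b, purely illustrative and not the solver from the NASA paper:

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // One Jacobi sweep for A*x = b with A = tridiag(-1, 2, -1):
    //   x_new[i] = (b[i] + x_old[i-1] + x_old[i+1]) / 2
    // Each thread reads only the PREVIOUS iterate, so no thread waits on
    // another. An SSOR sweep would read freshly updated neighbours and
    // therefore cannot be parallelised this way.
    __global__ void jacobi_sweep(const float *x_old, const float *b,
                                 float *x_new, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n - 1)
            x_new[i] = 0.5f * (b[i] + x_old[i - 1] + x_old[i + 1]);
    }

    int main()
    {
        const int n = 1 << 20;
        std::vector<float> h_b(n, 1.0f);            // toy right-hand side

        float *d_old, *d_new, *d_b;
        cudaMalloc(&d_old, n * sizeof(float));
        cudaMalloc(&d_new, n * sizeof(float));
        cudaMalloc(&d_b,   n * sizeof(float));
        cudaMemset(d_old, 0, n * sizeof(float));    // zero initial guess and boundaries
        cudaMemset(d_new, 0, n * sizeof(float));
        cudaMemcpy(d_b, h_b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        for (int it = 0; it < 1000; ++it) {         // fixed iteration count for the sketch
            jacobi_sweep<<<(n + 255) / 256, 256>>>(d_old, d_b, d_new, n);
            std::swap(d_old, d_new);                // ping-pong the two buffers
        }
        cudaDeviceSynchronize();
        printf("1000 sweeps: %s\n", cudaGetErrorString(cudaGetLastError()));

        cudaFree(d_old); cudaFree(d_new); cudaFree(d_b);
        return 0;
    }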
Re: (Score:2)
At least a CPU program, when it crashes, does not bring down the whole OS. Memory protection? Pah, who needs such things... After all, you never make coding mistakes. Right?
It is like MS-DOS programming all over again. Except the computer takes longer to reboot.
They use an algorithm with worse complexity in the paper because it actually performs better on the GPU than the other one. This happens on CPUs in several cases as well. When was the last time you saw someone using a Fibonacci heap? Memory
Re: (Score:1)
That would make the performance the same as for the GPU system.
Really? Care to share any results that support that? I'm quite sure the peak flops you can achieve on the GPU are much higher than the limited SIMD capability of the CPU.
Note that I am being generous here and actually ignoring the program setup time when they need to copy the data to the GPU.
Sure there's communications overhead, but that's true of any parallel processing problem; the trick is to find problems that have a big computation-to-communication ratio (which happens to be m
Re: (Score:3, Informative)
Really? Care to share any results that support that? I'm quite sure the peak flops you can achieve on the GPU are much higher than the limited SIMD capability of the CPU.
IIRC they claim 2.5-3x more performance using a Tesla than using the CPUs in their workstation. Ignoring load time.
SSE enables a theoretical peak performance enhancement of 4x for SIMD-amenable codes (e.g., you can do 4 parallel adds using vector SSE in the time it takes to make 1 add using scalar SSE). In practice, however, you u
Re: (Score:2)
Their CPU numbers almost certainly take SIMD into account.
I'm doing cryptography research, and some of my colleagues have been considering building a similar "desktop supercomputer". The speedup there looks more reasonable: a single high-end GPU should be worth maybe 5-10 quad-core CPUs; it costs double and uses double the power, but it's easier to put a dozen of them in a single PC. Th
That's why I have a problem with the comparisons (Score:4, Informative)
Because it only applies to the kind of problems that CUDA is good at solving. Now while there are plenty of those, there are plenty that it isn't good for. Take a problem that is all 64-bit integer math and has a branch every couple hundred instructions and GPUs will do for crap on it. However a supercomputer with general purpose CPUs will do as well on it as basically anything else.
That's why I find these comparisons stupid. "Oh this is so much faster than our supercomputer!" No it isn't. It is so much faster for some things. Now if you are doing those things, wonderful, please use GPUs. However, don't then try to pretend you have a "supercomputer in a desktop." You don't. You have a specialized computer with a bunch of single-precision stream processors. That's great so long as your problem is 32-bit fp, highly parallel, doesn't branch much, and fits within the memory on a GPU. However, not all problems are, hence they are NOT a general replacement for a supercomputer.
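To make the branchy case concrete: NVIDIA GPUs run threads in 32-wide warps, and when threads in one warp take different sides of a data-dependent branch, the warp executes both paths back to back. A toy CUDA sketch of that effect, with made-up data and nothing to do with the tomography code; on GT200-era parts the 64-bit integer ops are also emulated with multiple 32-bit instructions, compounding the penalty:

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    // Threads whose input values differ in parity take different branches.
    // Since adjacent values alternate odd/even, every 32-thread warp is
    // split and effectively pays for both paths.
    __global__ void divergent_step(const long long *in, long long *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        long long v = in[i];
        if (v & 1)
            out[i] = 3 * v + 1;
        else
            out[i] = v / 2;
    }

    int main()
    {
        const int n = 1 << 20;
        std::vector<long long> h(n);
        for (int i = 0; i < n; ++i) h[i] = i;       // adjacent values alternate parity

        long long *d_in, *d_out;
        cudaMalloc(&d_in,  n * sizeof(long long));
        cudaMalloc(&d_out, n * sizeof(long long));
        cudaMemcpy(d_in, h.data(), n * sizeof(long long), cudaMemcpyHostToDevice);

        divergent_step<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
        cudaDeviceSynchronize();
        printf("kernel ran: %s\n", cudaGetErrorString(cudaGetLastError()));

        cudaFree(d_in); cudaFree(d_out);
        return 0;
    }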
Re: (Score:1)
Take a problem that is all 64-bit integer math and has a branch every couple hundred instructions and GPUs will do for crap on it.
So would a Cray; supercomputers and GPUs are made for the same sorts of problems (exploiting data parallelism). Now if by 'supercomputer' you mean 'a cluster of commodity hardware', then OK, you've got a point; that heap of CPUs will handle branches plenty fast.
Re: (Score:2)
Re: (Score:2)
There are also a fair number of Cell-based supercomputers and even one hybrid out there. And even some pure custom solutions used by the NSA. (There is a reason they have their own chip fab.) And, if you include Folding@home-type applications, then GPUs represent a reasonable percentage of the world's supercomputing infrastructure.
Re: (Score:2)
Aside from a few homebrew PS3 clusters, I don't know of any large-scale Cell installations. The Roadrunner is a fairly standard (if very large) Opteron-based cluster, with PowerXCell co-processors. The latest Cray XT5 is a fairly standard (if very large) Opteron-based cluster, with PowerXCell or FPGA co-processors.
The NSA's ASIC systems don't count; by definition, they are not general purpose. A modern 3 GHz quad-core processor will manage an exhaustive DES search in about 600 years. Deep Crack in 1998 c
Re:That's why I have a problem with the comparison (Score:5, Insightful)
That was always true of supercomputers. In fact the stuff that runs well on CUDA now is almost precisely the same stuff that ran well on Cray vector machines - the classic stereotype of "Supercomputer"! Thus I do not see your point. The best computer for any particular task will always be one specialized for that task, and thus compromised for other tasks.
BTW, newer GPUs support double precision [herikstad.net].
Re: (Score:1, Insightful)
E X A C T L Y ! ! ! I always read about how fast the Cell Broadband Processor(tm) is and how anyone is a FOOL for not using it. No. They suck hard when it comes to branch prediction. Their memory access is limited to fast, but very small memory. Out of branch execution performance is awful. You have to rewrite code massively to avoid it. For embarrassingly parallel problems, they are a dream. For problems not parallel, they are quite slow. An old supercomputer isn't as fast as a new one. If ordin
Re: (Score:2)
That's why I find these comparisons stupid. "Oh this is so much faster than our supercomputer!" No it isn't. It is so much faster for some things. Now if you are doing those things, wonderful, please use GPUs. However, don't then try to pretend you have a "supercomputer in a desktop." You don't. You have a specialized computer with a bunch of single-precision stream processors. That's great so long as your problem is 32-bit fp, highly parallel, doesn't branch much, and fits within the memory on a GPU. However, not all problems are, hence they are NOT a general replacement for a supercomputer.
For that matter, which is faster: a two-ton flatbed truck, or a Maserati? Kinda depends on what you are trying to do, doesn't it? Want to move 3,000 pounds of hay? You probably DON'T want the Maserati!
And all machines are like this. Some machines are better at some tasks than others. And presumably, the comparison to the university supercomputer was because of a task that they *needed* to perform, and the pittance cost of the GPGPU-based supercomputer compared very favorably against the cost of leasing university supercomputer time.
Even different people are better at some things than others.... Some people are better at maths than others. Some people can take a bit of vinegar and coffee grounds, and make an artistic masterpiece.
Because I'm a jogger, I can run long distances faster than most people. But I suck at sprints, and I take long showers. I type over 100 WPM.
See?
Re: (Score:2)
Sure, but if you look at it from their perspective - before we needed time on a supercomputer and now we don't. Either you redefine supercomputers to include that or it's another task where we don't need one; even better if you ask me. So it doesn't do everything; well, running an embarrassingly parallel problem on a supercomputer would also give "terrible" performance now compared to this.
That's great so long as your problem is 32-bit fp, highly parallel, doesn't branch much, and fits within the memory on a GPU.
As far as I know the Teslas will be doing double precision, and we certainly could put GPUs on a better backplane for GPU-GPU
Re: (Score:2)
Only for problems that can be described as Massively Multithreaded, Oratorical, Redundant, Periphrastic, and Gratuitous
Like WoW and Second Life.
[Citation: http://thesaurus.reference.com/browse/redundant] [reference.com]
Swordfish (Score:1)
Get Animated [geekonwheels.com]
*Drools*
times less (Score:4, Funny)
...consuming 300 times less power.
*sigh*
Re: (Score:1)
Re: (Score:1)
It makes perfect sense, given appropriate units, such as (1/watt)'s. Okay, maybe not.
Re:times less (Score:5, Insightful)
Re: (Score:3, Funny)
...consuming 300 times less power. *sigh*
Oops. Sorry. 300 times fewer.
Not sure how fast it is, but I know it is hot... (Score:3, Interesting)
I've got a pair of 9800 GX2s in my rig. The cards turn room-temperature air into ~46C air. Without proper ventilation, these things will turn a chassis into an Easy-Bake Oven.
For those not familiar with the 9800 GX2 cards, each is essentially two 8800 GTS video cards linked together to act as a single card - something called SLI on the NVIDIA side of marketing. SLI typically required a mainboard/chipset that would allow you to plug in two cards and link them together. This model allowed any mainboard to have two 'internal' cards linked together, with the option of linking another 9800 GX2 if your board actually supported SLI.
The pictures did not show any SLI bridge, so it looks like they are just taking advantage of multiple GPUs per card.
Re:Not sure how fast it is, but I know it is hot.. (Score:2)
The pictures did not show any SLI bridge, so it looks like they are just taking advantage of multiple GPUs per card.
There's no seven-way SLI anyway. Since the GPUs are being used for processing and not graphics, there's no need for them to work together via SLI or Crossfire or what have you as long as the OS and programs treat 'em like any other multiprocessor setup.
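Roughly what that looks like from the CUDA side (a generic sketch, not the Antwerp group's actual code): the runtime enumerates every GPU in the box as a separate device - each half of a GTX 295 shows up on its own - and the host program hands each one an independent slice of the work:

    #include <cuda_runtime.h>
    #include <cstdio>

    // List every CUDA device and select each in turn. No SLI bridge is
    // involved; subsequent allocations and kernel launches simply target
    // whichever device was last selected.
    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        printf("found %d CUDA device(s)\n", count);

        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("device %d: %s, %d multiprocessors\n",
                   dev, prop.name, prop.multiProcessorCount);

            cudaSetDevice(dev);
            // ...copy this device's slice of the dataset over and launch kernels...
        }
        return 0;
    }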
Re: (Score:3, Funny)
That's a brilliant idea, now people can make snacks without ever leaving the computer.
Re: (Score:1)
I did not try baking anything, but it did turn the top of the computer into a nice coffee cup warmer [multiply.com].
Yeah but... (Score:1, Redundant)
Can it play Crysis with a high frame rate on maximum?
Stability problem solved... (Score:2, Funny)
Re: (Score:2)
It's not even that hard. Just number them starting from 0 so the last one is only 12. Then when you add another make it 14. Problem solved.
lol (Score:1)
Silly (Score:2)
This isn't a huge achievement. Nobody else has done it because it's silly.
There are two major reasons... the first is they use GeForce cards. That's not a good idea, since GeForces are held to much lower quality standards than Teslas and Quadros. They're intended for gaming graphics, where a minor error here or there isn't the end of the world. "Sorry we missed your cancer, since our supercomputer miscalculated that region of the reconstruction." The second problem is, that's one bandwidth-starved machine.
Re: (Score:3, Informative)
The difference between GeForce and Quadro cards is almost always completely driver-based; it is the exact same hardware, different software.
This is basically a roll-your-own Tesla, and considering the Teslas connect to the host system via an 8x or 16x PCI-e add-in card, I'm gonna say you are wrong when it comes to the bandwidth issue as well...
Re: (Score:3, Informative)
The hardware is the same, but the quality control is different. Teslas and Quadros are held to rigorous standards. GeForces have an acceptable error rate. That's fine for gaming, but falls flat in scientific computing.
Re: (Score:1)
Uh... no, you are wrong. Quadros and GeForces have a lot of differences in the internal hardware. Just because they "do the same thing" (they draw triangles really, really fast) doesn't mean they are the same. GeForces, for example, don't have optimizations for drawing points and lines, nor do they assume you are abusing obsolete APIs, like immediate-mode drawing; both are common in CAD applications, and almost useless in games.
Re: (Score:2)
No, the chips are almost exactly the same (except Quadros have 100% unbroken chips). You're thinking driver differences.
Re: (Score:2)
There is NO difference between Quadro and GeForce besides the GeForce basically being a laser-locked defective Quadro with different firmware.
In fact, you can flash most GeForce cards with the equivalent Quadro firmware and in some applications (not gaming) get better performance.
Been tooling around with nVidia cards since NV4. They've pretty much used this same strategy for the past decade+.
Re: (Score:2, Insightful)
It's not silly: (1) this is a research project, not production medical equipment, meaning that the funds to buy Tesla cards were probably not available, and they aren't particularly worried about occasional bit errors. (2) Their particular application doesn't need much inter-GPU communication, if any, so that bandwidth is not an issue. They just need for each GPU to load datasets, chew on them, and spit out the results.
How much does your proposed GPU supercomputer cost for 13 GPUs?
Re: (Score:2)
There are two major reasons... the first is they use GeForce cards. That's not a good idea, since GeForces are held to much lower quality standards than Teslas and Quadros.
Tell that to the Quadro FX1500M that was in my HP/Compaq "professional workstation" laptop, which had a well-known die-bonding problem that caused overheat failures across an entire production line. Neither HP nor nVidia recalled the defective parts and I ended up spending literally days on the phone with HP support before they sent me a new laptop. Higher quality, my ass. Quadro chips are marked differently, period, the end.
Can't be too impressed: Folding@home guys did more. (Score:2)
Folding@home enthusiasts and academic contributors did more than that, and a long time ago, too. Just check this thread at foldingforums [foldingforum.org] for one example.
Re: (Score:1)
Did more what, exactly? None of the Folding setups listed have more than 4 GPU cards per motherboard.
Re: (Score:2)
They have more powerful GPUs, and have had them for a long time.
Re: (Score:2)
OK then. I'm raising an eyebrow in somewhat heightened interest.
Naming Scheme (Score:2)
Wouldn't it be nice if the FASTRA II, which is 3.75 times faster than the FASTRA I, were actually called the FASTRA 375? Then I wouldn't have to ask.
Re: (Score:2)
If it's really 3.75 times faster maybe they could call it the FASTRA System 360 Model 96 (or the Fastra 360/96) for short ;^)
What the hell is up with the clothing? (Score:2)
Generic statements FAIL! (Score:2)
it was slightly faster than the university's 512-core supercomputer and cost less than 4000EUR.
but tests reveal the 13 GPUs deliver 3.75x more performance than the old system.
It is impossible to make such general statements about performance for something that is still very much specialized for long pipelines and streams of repetitive data (vector processing).
They may be much faster for tasks that fit that scheme. But slower for those that don’t.
Re: (Score:2)
The performance of a standard cluster, or even a SIMD machine will vary tremendously depending on your application as well. The only reasonable way is to pick a problem and compare performance on that problem.
They just forgot a phrase at the end of that statement: "it was slightly faster than the university's 512-core supercomputer... in this application."
Why it's 13, not 14 GPUs (Score:2, Interesting)
Apparently, the regular BIOS can't boot with more than 5? graphics cards installed due to the amount of resources (memory & I/O space) that each one requires. So the researchers asked ASUS to make a special BIOS for them which doesn't set up the graphics card resources. However, the BIOS still needs to initialize at least one video card, so they agreed that the boot video card would be the one with only a single GPU. Presumably, they could have also chosen a dual GPU card that happened to be differen
Cramped cases... (Score:1)
Re: (Score:1)
It's known as "market forces". In case you haven't noticed, the computing needs of most people can be crammed into something the size of a paperback book or so. Larger computing devices are available, but the bigger you go, the smaller the market, and thus the larger the price. If you want something big, you might take a look at a computer named "Jaguar". It has a big price, too.
As far as personal computers go, they tend to be designed around CPU strengths & limitations. Intel and AMD have figured ou
Re: (Score:1)
Oh, and by the way, I'm wondering quite the opposite: why do we still see so many oversized full-size ATX cases being offered, when microATX motherboards have everything we (most of us) need? Indeed, even mini-ITX motherboards are often adequate for so many needs, and yet mini-ITX cases still seem to command a premium because they are relatively rare. It's easy (and boring) to design a big rectangular ATX box. It's an engineering challenge to make a good-looking small box that does everything you need a
Re: (Score:1)
Oh, and here's your mini-fridge size case with 10 slots:
http://www.mountainmods.com/computer-cases-c-21.html [mountainmods.com]
I wants? (Score:1)
Would be nice if it was finished (Score:1)
Next time, make the fancy video when it's finished, guys.
Re: (Score:2, Flamebait)
Where do you get a motherboard that can accept 5 graphics cards?
Re: (Score:1, Interesting)
Oh, I read that wrong, it's 7 graphics cards. Who makes such a motherboard?
Re: (Score:1)
ASUS.
I didn't even RTFA, I just WTFV
Re: (Score:2)
ASUS.
I didn't even RTFA, I just WTFV
x2
Let's see the video.
Re:Easy money to be made? (Score:5, Informative)
Um...read the article?
The motherboard is an ASUS P6T7 WS SuperComputer.
Re:Easy money to be made? (Score:4, Funny)
You must be new here... ;)
Re: (Score:2)
umm where in TFA does it say that?!
Re: (Score:2)
In the huge bullet-point list, in bold, by product code, with a further text explanation for each piece.
It's halfway down the page, underneath the photograph of the machine and the boldface, all-caps title "FASTRA II".
Do you need a screenshot also?
Re: (Score:2)
Re: (Score:3, Funny)
Does that come in a picoATX version?
Re: (Score:1)
Hah. Hope they can write BIOS code from scratch... can you imagine trying to get mobo vendor support?
Re: (Score:2)
Hah. Hope they can write BIOS code from scratch... can you imagine trying to get mobo vendor support?
Yet another RTFA (or, in this case, WTFV).
Lets See (Score:1)
Mobo Manufacturer
Let's see: I can help these guys develop a new use for my line of wonky mobos, get favourable mention all over the world on places like Slashdot, and reap the benefit of every geek with excess cash and a yen for a supercomputer; or I can stand back, and maybe they find someone else who has a "better" board, or they develop their own.
Hmm, let's think on this one.
Erm.... They did. (Score:2)
They had support from the mainboard manufacturer.
Read the fucking article.
Re: (Score:2)
I am sure someone else can come up with some goodies.
Re: (Score:2)
Where do you get a motherboard that can accept 5 graphics cards?
MSI 890FX-GD70: 6x PCIe 2.0 x16
Re: (Score:2)
Re: (Score:1)
There were several difficulties. The most obvious was that they fit 7 double-wide cards into 7 single-wide slots. The next was that the motherboard BIOS crashes when more than 5? boards are installed. The next was that in order to allocate enough I/O space, all unnecessary devices had to be disabled, and even then the Linux kernel needed to be hacked to reduce the space allocated to various resources. After all that, it was a piece of cake.
GPU accuracy (Score:1)
Re:GPU accuracy (Score:5, Informative)
Re: (Score:3, Interesting)
Re: (Score:2, Interesting)
Re:GPU accuracy (Score:4, Interesting)
First, a gaming card is going to get fast firmware. A workstation card is going to get accurate firmware. I imagine that supercomputer cards would get specialized firmware. (I only skimmed the summary.)
GPUs are excellent at solving certain types of problems and excel at solving matrices. (That's what your video card is doing while it's rendering.) The best part of that is that most, if not all, mathematical problems can be expressed as a matrix, meaning that your super-fast GPU can solve most math problems super-fast.
Next, GPUs love working together since they don't care about what the OS is doing. All they do is take raw data and respond with an answer. Usually we're putting that answer onto the display, since otherwise wtf are we doing with a GPU? In this case, the results are returned instead of using the flashy display. So what you end up with is a set of really fast, specialized, parallel engines solving broken down matrices.
They're also not subject to the marketing whims of Moore's Law, so you can often get faster cards sooner than faster CPUs. To break down a supercomputer so that you get this kind of performance for 4,000 EUR is a fantastic achievement. It's almost, but not quite, hobby range. (I'd still put money on someone trying to evolve this into a gaming rig...)
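To make the matrix point concrete, here is the textbook dense matrix multiply in CUDA - a generic sketch, not anything from the FASTRA code. One thread computes one element of the result, so an NxN problem becomes N*N independent threads; production code would tile through shared memory or just call cuBLAS, but even the naive version keeps the card busy:

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    // Naive C = A * B for N x N row-major matrices: one thread per output element.
    __global__ void matmul(const float *A, const float *B, float *C, int N)
    {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < N && col < N) {
            float sum = 0.0f;
            for (int k = 0; k < N; ++k)
                sum += A[row * N + k] * B[k * N + col];
            C[row * N + col] = sum;
        }
    }

    int main()
    {
        const int N = 512;
        std::vector<float> h(N * N, 1.0f);          // toy input: matrices of ones

        float *d_A, *d_B, *d_C;
        cudaMalloc(&d_A, N * N * sizeof(float));
        cudaMalloc(&d_B, N * N * sizeof(float));
        cudaMalloc(&d_C, N * N * sizeof(float));
        cudaMemcpy(d_A, h.data(), N * N * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_B, h.data(), N * N * sizeof(float), cudaMemcpyHostToDevice);

        dim3 block(16, 16);
        dim3 grid((N + 15) / 16, (N + 15) / 16);
        matmul<<<grid, block>>>(d_A, d_B, d_C, N);
        cudaDeviceSynchronize();

        float c00 = 0.0f;
        cudaMemcpy(&c00, d_C, sizeof(float), cudaMemcpyDeviceToHost);
        printf("C[0][0] = %.1f (expect %d)\n", c00, N);  // ones * ones -> every entry is N

        cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
        return 0;
    }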
double precision needed for matrix (Score:2)