MIT Artificial Vision Researchers Assemble 16-GPU Machine
lindik writes "As part of their research efforts aimed at building real-time, human-level artificial vision systems inspired by the brain, MIT graduate student Nicolas Pinto and principal investigators David Cox (Rowland Institute at Harvard) and James DiCarlo (McGovern Institute for Brain Research at MIT) recently assembled an impressive 16-GPU 'monster' composed of eight 9800 GX2 cards donated by NVIDIA. The high-throughput method they promote can also use other ubiquitous technologies like IBM's Cell Broadband Engine processor (included in Sony's PlayStation 3) or Amazon's Elastic Compute Cloud (EC2) service. Interestingly, the team is also involved in the PetaVision project on Roadrunner, the world's fastest supercomputer."
Just to get it out of the way... (Score:5, Funny)
"But can it run Crysis?"
*Ducks*
Re: (Score:2)
Re:Just to get it out of the way... (Score:4, Funny)
Now there is a problem.
Re: (Score:1, Informative)
There is hardly a difference [gamespot.com] between Crysis under DX9 and DX10. DX10 "features" are a Microsoft scam to promote Vista, nothing more.
So yes, you can maximise the detail levels on XP.
Re: (Score:1)
I don't see that at all. There is, at least in the second shot, an incredible difference in the mid-foreground detail. The second shot shows it off the best, and the background is really 3D-looking, whereas the other shots look like a Bollywood set. I'm loading and stripping (vLite) Vista next weekend, so I'll have a look at DX10, as well as hacking DX10 to work under XP.
Re: (Score:2)
I'd call it a tie.
Re: (Score:1)
I would say it's a tossup too, because the realism is a give-and-take with the speed. No real-time ray tracing there, but it's more than detail. Call it grokking: looking at the whole picture's realism, the color, depth of field, camera tricks, detail. It was fun grabbing all the shots and slideshowing through them without looking at the source, until I had analysed the pictures. Most of it is quite striking, and I can't wait to get my hands on Crysis now. I have a box running both XP DX9, XP DX10, and Vista DX
DX10 vs DX9 (Score:5, Informative)
There are two main differences between DX9 and DX10:
I - The shaders offered by the two APIs are different (shader model 3 vs 4). None of the DX9 screenshots does self-shadowing. This is especially visible on the rocks (but also in action on the plancks of the fences). So there *are* additional subtleties available under Vista.
II - The driver architecture is much more complex in Vista, because it is built to enable cooperation between several separate processes all using the graphics hardware at the same time. Even if Vista automatically disables Aero when games are running full-screen (and thus the game is the only process accessing the graphics card), the additional layers of abstraction have an impact on performance. It is especially visible at low quality settings, where the software overhead is more noticeable.
Re: (Score:2)
This is especially visible on the rocks (but also in action on the plancks of the fences).
Odd. I heard these were pretty constant.
Re: (Score:2)
It's quite noticeable on Page 2 [gamespot.com]; see the cliffs in the last shot: Vista has shadows where XP has none. Not terribly exciting though, especially given the additional FPS impact; woo, a few shadows ;)
Some things you probably have to see in motion, though. E.g. BioShock uses more dynamic water with DX10 (which has better vertex shaders, or so?), and it responds more to objects moving in it.
Re: (Score:2)
Under Vista 32-bit you're left with only 640K after removing the video memory allocations.
We normal SLI users win with only 3.2GB left.
Re: (Score:2)
I'm sorry, that was uncalled for.
Re: (Score:1)
Isn't it a Beowulf cluster already? It was FREE as in beer... (Probably gave them every one they had!) Now they can run Folding@home!
Re: (Score:1)
What an INCREDIBLE BOX. 15 fans on the front and sides. It must sound like a 747/MacProG5. Nice GPUs though...
Re: (Score:2)
I assume you mean a PowerMac G5, because the MacPros are pretty much silent.
Was wondering about that too (Score:2)
So it's about one fan per GPU? Seems annoying and inefficient. Why not build it more spread apart, or use a "Central Air" system like people use in their homes.
Not using water cooling I understand, 'cause there'd be around 30 tubes snaking in and out of the box - something would fail/leak.
Re: (Score:1)
Those are LARGE FANS. There are probably about 3 fans per actual GPU: one on the card, one on the box, and one on the power supply/etc...
You could just as easily bathe the thing in cooling oil. Although I am not a fan of water cooling, I can't see it being any more unreliable than fans; done well, water cooling will outlast the machine.
Re: (Score:2)
This is just what 3D Realms has been waiting for... it's almost powerful enough to run Duke Nukem Forever!
Re: (Score:2, Informative)
Fascinating (Score:5, Interesting)
I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating. This is a classic moment when tech takes the branch that was unexpected. GPGPU computing [gpgpu.org] will soon reach ubiquity, but right now it's a fledgling being raised in the wild.

Of course I'm not earmarking this one particular project as the starting point, but this year has 'GPU this' and 'GPGPU that' startup events all over it. Some even said in 2007 that it would be a buzzword in 08 [theinquirer.net]. And of course there's nothing like new tech to bring out [intel.com] a naysayer.

Folding@home [stanford.edu] released their second generation [stanford.edu] GPU client in April 08, while retiring the GPU1 core in June of this year. I know I enjoy throwing spare GPU cycles at a distributed cause, and whenever I catch sight of the icon for the GPU [stanford.edu] client it brings back the nostalgia of distributed clients [wikipedia.org] of the past [near the bottom]. I think I was with United Devices [wikipedia.org] the longest. And the Grid [grid.org].

Now we are getting a chance to see GPU supercomputing installations from IBM [eurekalert.org] and this one from MIT. Soon those will be littering the Top 500 list [top500.org]. I also look forward most to the peaceful endeavors the new processing power will be used for... weather analysis [unisys.com], drug creation [wikipedia.org], and disease studies [medicalnewstoday.com].

Oh yes, I realize places like the infamous Sandia will be using the GPU to rev up atom splitting. But maybe if they keep their bombs IN the GPU it'll lessen the chances of seeing rampant proliferation again.

OK, well, enough of my musings over a GPU.

-AI
Re: (Score:2)
Re: (Score:3, Insightful)
"I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating."
Well remembered? Perhaps... but I wouldn't sing their praises just yet. Advances in memory are critically necessary to keep the pace of computational speed up. The big elephants in the room are heat, memory bandwidth, and latency. Part of the reason the GPUs this time round were not as impressive is that increasing memory bandwidth linearly will start not to have the same effects
Re:Fascinating? (Score:1)
Re: (Score:1)
Re: (Score:2)
*clears throat*
Yeah, but will it run Duke Nukem Forever?
*runs like hell*
Re: (Score:2)
Crysis ran "well" for me at Medium settings on an 8800 GTX and a 2.6GHz dual core at my monitor's native resolution of 1680x1050. (Using DirectX 10 on Vista!)
But, it ran everything on "zomg high amazing ponies!" when I connected it to my lower-resolution 720p television.
(I love doing that to Xbox fanboys - "You think Team Fortress 2 looks "amazing" on your little toy? Come over here and see it played at 60fps with more antialiasing than you could fit in the 12 dimensions of a X-hypercube, let alone an X-b
Re: (Score:2, Interesting)
In terms of actually being totally non-proprietary, Nvidia has to worry about ATI stealing their drivers (which they would, or at least "borrow" a lot from them), since Nvidia generally has that as their trump card over ATI no matter who has the better hardware. On the other hand, Nvidia has no interest in "borrowing" from ATI's drivers. ATI knows that, and that's why their
Re: (Score:1, Flamebait)
Nice trolling, a bit of linkies perhaps to back your statements?
Re:Say no to proprietary NVIDIA hardware (Score:5, Informative)
Tom's Hardware [tomshardware.com] did a pretty good job detailing the ups and downs of ATI and Nvidia with many of the major games of last year (BioShock, World in Conflict, etc). Overall, both companies fared well, but they reported quite a few crashes due to the ATI drivers. I've had an ATI card before: the 9800 XT, back in 2003-04 when Nvidia was producing their horrible, totally worthless 5xxx series. The 9800 XT was a good card for everything (gaming, graphics apps, etc). Sorry, I should have cited sources. Wasn't trolling on purpose, though I know that writing anything positive about Nvidia on Slashdot is borderline blasphemy.
Re: (Score:2)
You are claiming ATI will outright steal from Nvidia; whether one driver is better than the other doesn't matter. I want you to back up your claim that they would do something like that.
Re: (Score:2)
Would you like me to call up ATI and ask them?
ATI Customer Service: What can I help you with today.
Me: If Nvidia made their drivers OSS, would you borrow from them?
ATI Customer Service: I'm sorry sir, we cannot answer that at this time. Is there anything else I can help you with?
Me: Nope, thanks.
If someone makes a better
Re: (Score:2)
Re: (Score:2)
So I was right, you are trolling, too bad the mods can't see that.
Re: (Score:1, Insightful)
We should PAY ATI to use nVidia's drivers. I learned this on the Radeon 9800s. Solid, well-performing card with fairly good 3D performance. Drivers: utter and complete garbage. Used more memory, caused random crashes. I had to reinstall XP after I sold the card (and after I had re-installed XP twice before to fix the 'feature') to get rid of .NET 2.0. Got a GeForce 4 Ti to replace it. Was able to put a fan right over the GPU. Computer went MONTHS without crashing, no more blue screens. (AMD 1.6 GHz dual). If I eve
Re:Say no to proprietary NVIDIA hardware (Score:5, Informative)
ATi could conceivably steal parts from the first two from nVidia, but it's doubtful that they could steal anything from the last part since their hardware designs are sufficiently different to make this hard.
The problem nVidia are going to have is that the new Gallium architecture means that the first two parts are abstracted away and reusable, as is the fall-back path (which emulates functionality any specific GPU might be missing). This means that Intel and AMD both get to benefit from the other company (and random hippyware developers and other GPU manufacturers / users) improving the generic components, while nVidia are stuck developing their own entire alternative to DRI, DRM, Gallium, and Mesa. The upshot is that Intel and AMD can spend a tiny fraction of the time (and, thus, money) that nVidia do on developing drivers. In the long run, this means either smaller profits or more expensive cards for nVidia, and more bugs in nVidia drivers (since they don't have the same real-world coverage testing).
Now, if you're talking just about specs, then you're just plain trolling. Intel doesn't lose anything to AMD by releasing the specs for the Core 2 in a 3000 page PDF, because the specs just give you the input-output semantics, they don't give you any implementation details. Anyone with a little bit of VLSI experience could make an x86 chip, but making one that gives good performance and good performance-per-Watt is a lot harder. Similarly, the specs for an nVidia card would let anyone make a clone, but they'd have to spend a lot of time and effort optimising their design to get anywhere close to the performance that nVidia get.
Not about source, but about *Technical Specs*. (Score:2)
Nvidia has to worry about ATI stealing their drivers {...} ATI knows that, and that's why their drivers are open.
We are not speaking about releasing the source code of current drivers. In fact, ATI/AMD's fglrx *IS NOT* open. At all. What is open are two *separate* driver projects, which are built using the *technical data* released by AMD.
You're confusing the situation with Intel. (They paid Tungsten Graphics to write an open source driver for i8xx/i9xx to begin with. There's no such thing as a proprietary Intel driver on Linux. Only an open source driver written by TG.)
What we want is not nVidia releasing the source of their
Re: (Score:1)
His blog here [botchco.com]
If you don't know who Alex Deucher is, just Google [google.com] his name.
Re:Say no to proprietary NVIDIA hardware (Score:4, Insightful)
I want to support ATI and AMD, but nVidia just works.
Their drivers are very nice.
Until that changes I'm a nVidia guy.
A third party open source driver should fix the problems.
Re: (Score:1, Informative)
I upgraded my X800XL to an 8800GT. With Windows, I never had a problem with my X800XL and I still have not seen a problem with the 8800GT. The X800XL just worked and the 8800GT just works.
With Ubuntu, the X800XL was working nicely (open source drivers) and the 8800GT is a piece of crap. NVidia's drivers are horribly slow, and a lot of users are reporting the same thing. I have an old computer with an even older GeForce 4 MX and it displays things faster.
Before I bought my 8800GT I didn't care much about one co
Re: (Score:2)
Erm are you using the open source drivers?
No 3d acceleration.
Use the nVidia drivers which are nearly identical to the Windows ones.
Re: (Score:2)
Maybe they work for you: I find NVidia drivers quite painful, especially for non-Windows operating systems. And a 'third party open source driver' can't get the details of the NVidia API to work from, which means a huge amount of reverse engineering, especially of their proprietary OpenGL libraries, which are at the core of their enhanced features in non-Windows operating systems.
Re: (Score:1)
"I find NVidia drivers quite painful, especially for non-Windows operating sytems."
wait, so your telling me you have troubles with the windows drivers too? it's a single download for the platform your on and next next done.
Granted, the linux ones have a couple more steps than that, but it's still rather trivial for most people, considering it's the most frequently used driver for 3d on linux (besides possibly intel).
Re: (Score:2)
I've had to clean up when someone trying to fix their PC and driver problems went and re-installed drivers from their media when I'd updated from NVidia's site, and monitors became completely unavailable on dual-display cards from the previously working display, and it was impossible to fix without dragging in another monitor with the other connector type and fixing events from the other display. It's compounded on systems with built-in displays and add-on graphics cards.
So yes, I've had real problems wi
Re: (Score:2)
It's coming (Score:1, Funny)
The day when self-modification/upgrade enthusiasts start overclocking themselves and bragging about how many fps their eyes get watching the Super Bowl.
Re: (Score:1)
Well, for the time being I prefer to tinker with things outside my body, thank you.
Re: (Score:2, Funny)
Jesus doesn't approve of you doing that.
Re: (Score:1)
Been there, done that. Volcano beans, fresh ground in an espresso machine.
Self-modification (Score:1)
You laugh, but it seems like my eyes have gotten faster.
I used to not care about a 60 Hz refresh rate, but now I can't stand it. Looking straight ahead at a CRT monitor running 60 Hz, it looks like a rapidly flickering/shimmering mess. 70 Hz is still annoying b/c my peripheral vision picks it up.
I attribute my increased sensitivity to flicker to playing FPS's.
Oh and when a decent brain-computer interface comes out I'll be getting one installed.
Damn it, I just wet myself!! (Score:2)
So that's what happens (Score:5, Funny)
When gamers grow up and go to college... blue LEDs and bling in the server room!
Alright! (Score:4, Funny)
Re: (Score:2)
One more step to the last invention man ever need make... hooker bot. (mine would be a Buffy Bot, but that's just personal preference)
She would have to be cooled with liquid nitrogen, running all those GPUs.
Re: (Score:1)
Re: (Score:2)
One more step to the last invention man ever need make... hooker bot. (mine would be a Buffy Bot, but that's just personal preference)
Here you go: one robotic buffing cell [intec-italy.com]
Re: (Score:2)
I know this is Slashdot, so you have probably not spent much time around females, at least not those of our species, but let me tell you, an angry one is a dangerous creature. All of them do get pissed off some of the time. You can be the greatest guy ever and sooner or later you will make a mistake. The good news is that if you are a good guy they will forgive you, but the period between your screw-up and their forgiveness can be extremely hazardous.
Buffy is a fun show and all but if I were ordering robo girl, I am
Computing power is how nature does it (Score:2)
Do you know the human brain has about 100 billion neurons? Each neuron can be represented as a weighted average of its inputs; a typical human neuron has some 1000 inputs and does around a hundred operations per second.
So, yes, *maybe* there could be some very smart algorithm that mimics human reasoning, but that's not how it's done in the human brain. It's raw computing power all the way.
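Taking those figures at face value (they are rough order-of-magnitude guesses, not measurements), the raw throughput works out to roughly 10^16 operations per second. A quick back-of-the-envelope check, as a host-side C++/CUDA sketch:

// Rough estimate of the brain's raw compute, using the figures quoted above.
// All values are order-of-magnitude assumptions, not measured data.
#include <cstdio>

int main() {
    double neurons     = 1e11;  // ~100 billion neurons
    double inputs_each = 1e3;   // ~1000 synaptic inputs per neuron
    double ops_per_sec = 1e2;   // ~100 operations per second per input
    std::printf("~%.0e ops/s\n", neurons * inputs_each * ops_per_sec);  // ~1e16
    return 0;
}

That is on the order of ten petaops, i.e. several Roadrunner-class machines' worth of throughput, assuming a synaptic operation can be equated to a floating-point operation at all.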
Just how specialized is GPU hardware? (Score:4, Interesting)
I keep seeing all these articles about bringing more types of processing applications to the gpu, since it handles floating point math and parallel problems better. I only have a rudimentary understanding of programming compared to most people on this site, so the following may sound like a dumb question. But how do you determine what types of problems will perform well (or are even possible to be solved) through the use of GPUs, and just how "general purpose" can you get on such specialized hardware?
Thanks in advance.
Re: (Score:2)
It's just a matter of transforming the data into a format the GPU can handle efficiently.
Re:Just how specialized is GPU hardware? (Score:5, Interesting)
Not really. Not every problem gains from a gpu.
As a rule of thumb, if your problem requires solving many instances of one simple subproblem, and those instances are independent of each other, then a GPU helps. A GPU is like a CPU with many, many cores, where each core is not as general-purpose as your Intel; rather, each core is optimized for solving some small problem (without optimizing for the frequent load/store/switching operations etc. that a general CPU handles quite well).
So if you see an easy parallelization of your problem, you might think of using a GPU. There are problems that are believed to not be efficiently parallelizable (Linear Programming is one such problem). Also, even if your problem can easily be made parallel, it might be tricky to benefit from a GPU, as each subroutine might be too complex.
I don't program, but my guess would be that if you can see the solution to your problem consisting of a few lines of code running on many processors and gaining anything, a GPU might be the way to go.
Perhaps someone can explain it better.
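To make the rule of thumb concrete, here is a minimal CUDA sketch of the "many independent instances of one tiny subproblem" shape (written against a modern CUDA toolkit for brevity; the array names and sizes are invented, and this is not code from the MIT project):

// One GPU thread per element; every element is an independent tiny subproblem.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];  // no communication with any other thread
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // ~4096 blocks of 256 threads
    cudaDeviceSynchronize();

    std::printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Each element gets its own thread and never talks to its neighbours, which is exactly the shape of problem that maps well onto hundreds of shader cores.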
Re: (Score:3, Informative)
Many problems, such as weather prediction, use finite element analysis with a "clock tick" to synchronise the results of the sub-problems. The sub-problems themselves are cubes representing X cubic kilometers of the atmosphere/surface; each sub-problem depends on the state of its immediate neighbours. The accuracy of the results depends on the resolution of the clock tick, the volume represented by the sub-pr
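A toy version of that "update every cell from its neighbours, then synchronise on the clock tick" pattern could look like the following CUDA sketch (a 1-D grid with invented names; real weather codes are 3-D and far more involved):

#include <cstdio>
#include <cuda_runtime.h>

// Each cell is updated from its neighbours' values at the previous tick.
__global__ void step(const float *prev, float *next, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        next[i] = (prev[i - 1] + prev[i] + prev[i + 1]) / 3.0f;  // neighbour average
}

int main() {
    const int n = 1 << 16, ticks = 100;
    float *a, *b;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    for (int i = 0; i < n; ++i) a[i] = (i == n / 2) ? 1.0f : 0.0f;  // a single spike

    for (int t = 0; t < ticks; ++t) {
        step<<<(n + 255) / 256, 256>>>(a, b, n);
        cudaDeviceSynchronize();  // the "clock tick": all cells finish before the next step
        float *tmp = a; a = b; b = tmp;  // swap old and new grids
    }
    std::printf("centre after %d ticks: %f\n", ticks, a[n / 2]);
    cudaFree(a);
    cudaFree(b);
    return 0;
}

All cells in a tick read only the previous tick's buffer, so they can all run in parallel; the synchronisation between kernel launches plays the role of the clock tick.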
Re: (Score:3, Informative)
There's no point being able to do calculations really fast if you can't get the results back out or keep feeding the GPU with data.
I think not too long ago graphics cards were fast, but after you added the problem of getting calculation results back, it wasn't really worth it.
Re: (Score:2)
That was due to the asymmetric design of AGP.
PCI-Express is symmetric, so it doesn't have this limitation.
Re:Just how specialized is GPU hardware? (Score:5, Interesting)
Re: (Score:1)
becoming less specialized... (Score:3, Insightful)
The GPU architecture has been progressively moving to a more "general" system with every generation. Originally the processing elements in the GPU could only write to one memory location; now the hardware supports scattered writes, for example.
As such, I think the GPGPU method of casting algorithms into the GPU APIs (CUDA et al.) is going to die a quick death once Larrabee comes out and people can simply run their threaded code on these finely-grained co-processors.
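For what "scattered writes" look like in practice, here is a small CUDA sketch (toy sizes, invented names): each thread writes to a data-dependent location, in this case a histogram bin, rather than to one fixed output position, with atomicAdd keeping concurrent writes to the same bin safe.

#include <cstdio>
#include <cuda_runtime.h>

// Each thread's write location depends on the data it reads: a scattered write.
__global__ void histogram(const unsigned char *data, int n, unsigned int *bins) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);  // bins[data[i]] is not known in advance
}

int main() {
    const int n = 1 << 20;
    unsigned char *data;
    unsigned int *bins;
    cudaMallocManaged(&data, n);
    cudaMallocManaged(&bins, 256 * sizeof(unsigned int));
    for (int i = 0; i < n; ++i) data[i] = (unsigned char)(i % 256);
    for (int i = 0; i < 256; ++i) bins[i] = 0;

    histogram<<<(n + 255) / 256, 256>>>(data, n, bins);
    cudaDeviceSynchronize();

    std::printf("bin[0] = %u (expect %d)\n", bins[0], n / 256);
    cudaFree(data);
    cudaFree(bins);
    return 0;
}

Early shader hardware couldn't express that write pattern at all, which is part of what the parent means by the architecture becoming less specialized.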
Several requirements (Score:2)
1) Your task has to be highly parallel. You really need something that can be made parallel to a more or less infinite level. Current GPUs have hundreds of parallel shader paths (which are what you use for GPGPU). So you have to have a problem that can be broken down into a bunch of small parallel processes.
2) Your task needs to be single precision floating point. The latest nVidia GPUs do support double precision, but they are the only ones, and they take a major, major speed penalty (way over 50%) to do
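To illustrate point 2, here is the same trivial kernel written as a template so it can run in either precision (a toy sketch with invented names): the source barely changes between float and double, but on GPUs of that generation the double-precision path was either missing or dramatically slower.

#include <cstdio>
#include <cuda_runtime.h>

// Identical logic for both precisions; only the hardware cost differs.
template <typename T>
__global__ void scale(T *v, T factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        v[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *vf;
    double *vd;
    cudaMallocManaged(&vf, n * sizeof(float));
    cudaMallocManaged(&vd, n * sizeof(double));
    for (int i = 0; i < n; ++i) { vf[i] = 1.0f; vd[i] = 1.0; }

    scale<<<(n + 255) / 256, 256>>>(vf, 2.0f, n);  // single precision: the fast path
    scale<<<(n + 255) / 256, 256>>>(vd, 2.0, n);   // double precision: heavy penalty back then
    cudaDeviceSynchronize();

    std::printf("%f %f\n", vf[0], vd[0]);
    cudaFree(vf);
    cudaFree(vd);
    return 0;
}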
Yet still can't get PhysX running on 2x8800M GTXs (Score:2)
Apparently it's just the mobile versions
Re: (Score:2)
That's an easy problem to solve! Just wait for the technology to mature before purchas... Oh.
Powered by Nvidia! (Score:1)
I wonder how many BSODFLOPS (Blue screens of death per second) it can generate?
http://byronmiller.typepad.com/byronmiller/2005/10/stupid_windows_.html [typepad.com] http://www.google.com.au/search?q=nvidia+'blue+screen+of+death'+nv4_disp [google.com.au]
Re: (Score:2)
vista or xp?
forget 9x!
Re: (Score:1)
Re: (Score:2)
machine or machines? (Score:5, Insightful)
is it me or do I see two separate mobos...which means it's two machines, 8 per machine in one box....not 16?
now...if it was 16 in one...now that would be amazing....otherwise...it's not...'cuz there was that other group that did 8 in 1 [slashdot.org] (aka...16/2 => 8/1)
Re: (Score:2, Informative)
That's one machine for simulating one eye. That's why they need 2 * 8 for simulating human-level vision, or else you won't get the 3D vision.
Re: (Score:2)
Maybe so, but why not build just two machines? The only reason I can think of is that this sounds cooler. Maybe they save a bit of money by having a single cooling solution/power supply, but I don't see it. Strangely enough, the machine doesn't seem to be symmetric. They've probably put one motherboard upside down; otherwise you would have to split the case. Let's hope the magic doesn't leak out.
Re: (Score:1)
It's symmetric... rotationally symmetric [wikipedia.org]. ;P
Re: (Score:2)
Well, if you use that definition then none of the supercomputers built in the last decade count either, since they are all giant clusters.
Re: (Score:2)
No... supercomputers, especially Beowulf clusters (or even the petaboxes [linuxdevices.com])... they are interlinked in some way.
Besides, supercomputers are usually given the designation "cluster" or something of that nature, and not the singular "machine".
Re: (Score:2)
It appears I should have looked at the article; you are right, there are 2 separate boxes. I assumed that they were connected; I was wrong.
Re: (Score:2)
I don't think it even applies as a "small cluster".
It's like me taking two V8 muscle cars, duct-taping them together, and saying I've got a V16 "car".
Here's what they need : (Score:2)
Is that a helicopter? (Score:2, Funny)
God, they stuck so many fans into that box that I bet it takes off the ground when it boots.
quantum computer? (Score:1)
World's fastest supercomputer, eh? (Score:1)
But does it run Crysis?
Human Vision (Score:2)
One of the things I found intriguing was the note that the bulk (80%) of the neural interconnections going into t
Where them *details* at? Please update w/ a link. (Score:2)
I looked through each of the TFAs linked in the story, and I don't see any technical details on this system. Whereas when the FASTRA people at the Univ. of Antwerp put together their 4x 9800 GX2 system for CUDA, they published all the nitty-gritty, down to specific parts, etc. The pictures are interesting but not enough.
Oh you kooky MIT people... (Score:2)
I had eight Quadro Plex units where I used to work for CAD/CAM/FEA/CFD...a year ago.
16 GPUs a big deal? (Score:2)
CAE's Tropos image generators use 17 GPUs per channel in a commercially available package. Each image channel (there are usually at least 3 in a flight simulator) uses 4 quad-GPU Radeon 8500 cards in addition to the onboard GPU which is only used for the operator interface. I've been working on these things for a couple of years now.
Re: (Score:1)
Re: (Score:2)
It's better than setting off real live nuclear weapons in the desert like they used to do.
Re: (Score:2)
This isn't for gaming, this is for planning how to more cost effectively kill humans.
Let me fix that for ya:
This isn't for gaming, this is for planning how to more cost effectively threaten to kill humans.