MIT Artificial Vision Researchers Assemble 16-GPU Machine 121
lindik writes "As part of their research effort to build real-time, human-level artificial vision systems inspired by the brain, MIT graduate student Nicolas Pinto and principal investigators David Cox (Rowland Institute at Harvard) and James DiCarlo (McGovern Institute for Brain Research at MIT) recently assembled an impressive 16-GPU 'monster' built from eight 9800 GX2 cards donated by NVIDIA. The high-throughput approach they promote can also run on other ubiquitous technologies, such as IBM's Cell Broadband Engine processor (found in Sony's PlayStation 3) or Amazon's Elastic Compute Cloud (EC2) service. Interestingly, the team is also involved in the PetaVision project on Roadrunner, the world's fastest supercomputer."
Re:Say no to proprietary NVIDIA hardware (Score:2, Interesting)
In terms of actually going non-proprietary, Nvidia has to worry about ATI "borrowing" heavily from its drivers, since driver quality is generally Nvidia's trump card over ATI no matter who has the better hardware. Nvidia, on the other hand, has no interest in borrowing from ATI's drivers; ATI knows that, and that's why its drivers are open. Yes, it can be painful to run anything multimedia-, graphics-, or gaming-related on Linux with an Nvidia card (I have an 8800 GT and feel the pain at times on KDE), but in this case I think Nvidia's rationale for not releasing its specs is reasonable. Now, if only they cared more about their Linux drivers, proprietary or not.
Just how specialized is GPU hardware? (Score:4, Interesting)
I keep seeing all these articles about bringing more types of processing applications to the gpu, since it handles floating point math and parallel problems better. I only have a rudimentary understanding of programming compared to most people on this site, so the following may sound like a dumb question. But how do you determine what types of problems will perform well (or are even possible to be solved) through the use of GPUs, and just how "general purpose" can you get on such specialized hardware?
Thanks in advance.
Re:Just how specialized is GPU hardware? (Score:5, Interesting)
Not really — not every problem gains from a GPU.
As a rule of thumb, if your problem requires solving many instances of one simple subproblem, and those instances are independent of each other, then a GPU helps. A GPU is like a CPU with many, many cores, where each core is less general-purpose than your Intel chip: each core is optimized for solving a small problem, without the machinery for frequent load/store/branching operations that a general CPU handles quite well.
So if you see an easy parallelization of your problem, you might consider a GPU. Some problems are believed not to be efficiently parallelizable (Linear Programming is one such problem). And even if your problem is easily made parallel, it can still be tricky to benefit from a GPU if each subroutine is too complex.
I don't program much, but my guess would be: if you can picture the solution to your problem as a few lines of code running on many processors and still gaining something, a GPU might be the way to go.
Perhaps someone can explain it better.
Fascinating (Score:5, Interesting)
I think this part of the computing timeline is going to be one that is well remembered; I know I find it fascinating. This is a classic moment when tech takes the branch that was unexpected. GPGPU computing [gpgpu.org] will soon reach ubiquity, but for right now it's a fledgling being raised in the wild.
Of course I'm not earmarking this one particular project as the starting point, but this year has 'GPU this' and 'GPGPU that' startup events all over it. Some even said in 2007 that it would be a buzzword in '08 [theinquirer.net]. And of course there's nothing like new tech to bring out [intel.com] a naysayer.
Folding@home [stanford.edu] released its second-generation [stanford.edu] GPU client in April '08, while retiring the GPU1 core in June of this year.
I know I enjoy throwing spare GPU cycles at a distributed cause, and whenever I catch sight of the icon for the GPU [stanford.edu] client it brings back the nostalgia of the distributed clients [wikipedia.org] of the past [near the bottom]. I think I was with United Devices [wikipedia.org] the longest. And the Grid [grid.org].
Now we are getting a chance to see GPU supercomputing installations from IBM [eurekalert.org] and this one from MIT. Soon those will be littering the Top 500 list [top500.org].
I also look forward to the peaceful endeavors the new processing power will be used for: weather analysis [unisys.com], drug design [wikipedia.org], and disease studies [medicalnewstoday.com].
Oh yes, I realize places like the infamous Sandia will be using the GPU to rev up atom splitting. But maybe if they keep their bombs IN the GPU, it'll lessen the chances of seeing rampant proliferation again.
OK, well, enough of my musings over a GPU.
-AI