
MIT Artificial Vision Researchers Assemble 16-GPU Machine

lindik writes "As part of their research efforts aimed at building real-time, human-level artificial vision systems inspired by the brain, MIT graduate student Nicolas Pinto and principal investigators David Cox (Rowland Institute at Harvard) and James DiCarlo (McGovern Institute for Brain Research at MIT) recently assembled an impressive 16-GPU 'monster' built from eight NVIDIA GeForce 9800 GX2 cards donated by NVIDIA. The high-throughput method they promote can also use other ubiquitous technologies, such as IBM's Cell Broadband Engine processor (found in Sony's PlayStation 3) or Amazon's Elastic Compute Cloud (EC2) service. Interestingly, the team is also involved in the PetaVision project on Roadrunner, the world's fastest supercomputer."

  • by ya really ( 1257084 ) on Sunday July 27, 2008 @04:48AM (#24356095)

    AMD/ATI have released the specs for their hardware. Why haven't the proprietary NVIDIA engineers done the same? What do they have to hide?

    In terms of actually being totally non-proprietary, Nvidia has to worry about ATI stealing its drivers (or at least "borrowing" a lot from them), since driver quality is generally Nvidia's trump card over ATI no matter who has the better hardware. On the other hand, Nvidia has no interest in "borrowing" from ATI's drivers; ATI knows that, and that's why its drivers are open. Yes, it may suck to run anything multimedia-, graphics-, or gaming-related on Linux if you have an Nvidia card (I have an 8800 GT and feel the pain at times in KDE), but in this case I think Nvidia's rationale for not giving up its specs is reasonable. Now, if only they cared more about their Linux drivers, proprietary or not.

  • by MR.Mic ( 937158 ) on Sunday July 27, 2008 @04:56AM (#24356129)

    I keep seeing all these articles about bringing more types of processing applications to the GPU, since it handles floating-point math and parallel problems better. I only have a rudimentary understanding of programming compared to most people on this site, so the following may sound like a dumb question: how do you determine which types of problems will perform well on (or can even be solved by) GPUs, and just how "general purpose" can you get on such specialized hardware?

    Thanks in advance.

  • by hansraj ( 458504 ) * on Sunday July 27, 2008 @05:37AM (#24356251)

    Not really. Not every problem gains from a GPU.

    As a rule of thumb, if your problem requires solving many instances of one simple subproblem, and those instances are independent of each other, then a GPU helps. A GPU is like a CPU with many, many cores, where each core is not as general-purpose as your Intel chip; rather, each core is optimized for solving some small problem (without being optimized for the frequent load/store/switching operations, etc., that a general-purpose CPU handles quite well).

    So if you see an easy parallelization of your problem, you might think of using a GPU. There are problems that are believed not to be efficiently parallelizable (Linear Programming is one such problem). Also, even if your problem can easily be made parallel, it might be tricky to benefit from a GPU if each subroutine is too complex.

    I don't program, but my guess would be that if you can picture the solution to your problem as a few lines of code running on many processors, and still gaining something, then a GPU might be the way to go (see the sketch below).

    Perhaps someone can explain it better.
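
    To make the "many simple, independent subproblems" idea concrete, here is a minimal CUDA sketch (a generic illustration, not code from the MIT/Harvard project): a SAXPY kernel in which every thread computes a single element of the result, independently of all the others. The kernel body really is just a few lines of code run on many threads at once.

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread solves one tiny, independent subproblem: y[i] = a*x[i] + y[i].
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host data.
        float *hx = new float[n], *hy = new float[n];
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Device copies.
        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);   // expect 4.0

        cudaFree(dx); cudaFree(dy);
        delete[] hx; delete[] hy;
        return 0;
    }
    ```

    If each element instead depended on the previous one, the threads could no longer run independently and the GPU would buy you very little, which is roughly the caveat about problems that don't parallelize well.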

  • by moteyalpha ( 1228680 ) * on Sunday July 27, 2008 @05:38AM (#24356257) Homepage Journal
    I have been doing this very same thing with my own GPU, by automatically converting images to vertex format and using the GPU to scale, shade, etc.; in this way I can do shape recognition simply by measuring the closest match in the frame buffer (roughly the comparison step sketched below). There are more complex ways to use the GPU to do pseudo-computation in parallel, but I still think a commonly available CAM (content-addressable memory), or something near it, would speed up neural-like computations by being an essentially completely parallel process. It would be better to let more people experiment with these methods, because the greatest gain (and cost) is the software itself; specialized single-purpose hardware allows better profit but limits innovation.
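
    For what it's worth, here is a minimal CUDA sketch of just the comparison step described above (a generic illustration, not the poster's actual code). It assumes the rendered candidate frame and the input image are already in GPU memory as 8-bit grayscale buffers and scores them with a sum of absolute differences, so the candidate with the smallest score is the "closest match."

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per pixel: accumulate |frame[i] - image[i]| into a single score.
    // The rendered candidate with the smallest score is the closest match.
    __global__ void sad_kernel(const unsigned char *frame,
                               const unsigned char *image,
                               int n, unsigned int *score)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            int d = (int)frame[i] - (int)image[i];
            atomicAdd(score, (unsigned int)(d < 0 ? -d : d));
        }
    }

    int main()
    {
        const int w = 640, h = 480, n = w * h;

        // Host buffers standing in for a rendered candidate and the input image.
        unsigned char *h_frame = new unsigned char[n];
        unsigned char *h_image = new unsigned char[n];
        for (int i = 0; i < n; ++i) { h_frame[i] = i % 256; h_image[i] = (i + 3) % 256; }

        unsigned char *d_frame, *d_image;
        unsigned int *d_score, h_score = 0;
        cudaMalloc(&d_frame, n);
        cudaMalloc(&d_image, n);
        cudaMalloc(&d_score, sizeof(unsigned int));
        cudaMemcpy(d_frame, h_frame, n, cudaMemcpyHostToDevice);
        cudaMemcpy(d_image, h_image, n, cudaMemcpyHostToDevice);
        cudaMemcpy(d_score, &h_score, sizeof(unsigned int), cudaMemcpyHostToDevice);

        sad_kernel<<<(n + 255) / 256, 256>>>(d_frame, d_image, n, d_score);

        cudaMemcpy(&h_score, d_score, sizeof(unsigned int), cudaMemcpyDeviceToHost);
        printf("SAD score: %u (smaller = closer match)\n", h_score);

        cudaFree(d_frame); cudaFree(d_image); cudaFree(d_score);
        delete[] h_frame; delete[] h_image;
        return 0;
    }
    ```

    A single global atomic is the simplest thing that works; a per-block reduction would scale better, and in a real pipeline the candidate frame would come straight from the render target rather than from host memory.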
  • Fascinating (Score:5, Interesting)

    by AlienIntelligence ( 1184493 ) on Sunday July 27, 2008 @06:17AM (#24356391)

    I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating.

    This is a classic moment when tech takes the branch that was unexpected. GPGPU computing [gpgpu.org] will soon reach ubiquity, but for right now it's a fledgling being raised in the wild.

    Of course I'm not earmarking this one particular project as the starting point, but this year has 'GPU this' and 'GPGPU that' start-up events all over it. Some even said in 2007 that it would be a buzzword in '08 [theinquirer.net].

    And of course there's nothing like new tech to bring out a naysayer [intel.com].

    Folding@home [stanford.edu] released their second-generation GPU client [stanford.edu] in April '08, while retiring the GPU1 core in June of this year.

    I know I enjoy throwing spare GPU cycles at a distributed cause, and whenever I catch sight of the icon for the GPU client [stanford.edu], it brings back the nostalgia of distributed clients of the past [wikipedia.org] [near the bottom].

    I think I was with United Devices [wikipedia.org] the longest. And the Grid [grid.org].

    Now we are getting a chance to see GPU supercomputing installations from IBM [eurekalert.org] and this one from MIT. Soon those will be littering the Top 500 list [top500.org].

    What I look forward to most are the peaceful endeavors the new processing power will be used for: weather analysis [unisys.com], drug creation [wikipedia.org], and disease studies [medicalnewstoday.com].

    Oh yes, I realize places like the infamous Sandia will be using the GPU to rev up atom splitting. But maybe if they keep their bombs IN the GPU, it'll lessen the chances of seeing rampant proliferation again.

    OK, well, enough of my musings over a GPU.

    -AI
