MIT Artificial Vision Researchers Assemble 16-GPU Machine

lindik writes "As part of their research efforts aimed at building real-time human-level artificial vision systems inspired by the brain, MIT graduate student Nicolas Pinto and principal investigators David Cox (Rowland Institute at Harvard) and James DiCarlo (McGovern Institute for Brain Research at MIT) recently assembled an impressive 16-GPU 'monster' composed of 8x9800gx2s donated by NVIDIA. The high-throughput method they promote can also use other ubiquitous technologies like IBM's Cell Broadband Engine processor (included in Sony's PlayStation 3) or Amazon's Elastic Compute Cloud (EC2) services. Interestingly, the team is also involved in the PetaVision project on the Roadrunner, the world's fastest supercomputer."
  • by im_thatoneguy ( 819432 ) on Sunday July 27, 2008 @04:05AM (#24355963)

    "But can it run Crysis?"

    *Ducks*

    • by TheLink ( 130905 )
      Of course not.
    • by dtml-try MyNick ( 453562 ) on Sunday July 27, 2008 @04:22AM (#24356021)
      The graphics cards might be able to handle it. But you'd have to run Vista in order to get maximized settings...
      Now there is a problem.
      • Re: (Score:1, Informative)

        by Anonymous Coward

        There is hardly a difference [gamespot.com] between Crysis under DX9 and DX10. DX10 "features" are a Microsoft scam to promote Vista, nothing more.

        So yes, you can maximise the detail levels on XP.

        • I don't see that at all. There is, at least in the second shot, an incredible difference in the mid-foreground detail. The second shot shows it off the best, and the background is really 3D-looking, whereas the other shots look like a Bollywood set. I'm loading and stripping (vLite) Vista next weekend, so I'll have a look at DX10, as well as hacking DX10 to work under Vista.

          • Hmm... actually, some screenshots show even MORE details on the XP DX9 version than on the Vista DX10. Most screenshots look practically identical. Actually, very disappointing.

            I'd call it a tie.
            • I would say it's a tossup too, because the realism is a give-and-take with the speed. No real-time ray tracing there, but it's more than detail. Call it grokking: looking at the whole picture's realism, the color, depth of field, camera tricks, detail. It was fun grabbing all the shots and slideshowing through them without looking at the source, until I had analysed the pictures. Most of it is quite striking, and I can't wait to get my hands on Crysis now. I have a box running both XP DX9, XP DX10, and Vista DX

        • DX10 vs DX9 (Score:5, Informative)

          by DrYak ( 748999 ) on Sunday July 27, 2008 @07:03AM (#24356603) Homepage

          There are 2 main differences between DX9 and DX10:

          I - The shaders offered by the two APIs are different (shader model 3 vs 4). None of the DX9 screenshots does self-shadowing. This is specially visible on the rocks (but even in action on the plancks of the fences). So there *are* additional subtleties available under Vista.

          II - The driver architecture is much more complex in Vista, because it is built to enable cooperation between several separate processes all using the graphics hardware at the same time. Even though Vista automatically disables Aero when games are running full-screen (and thus the game is the only process accessing the graphics card), the additional layers of abstraction have an impact on performance. This is especially visible at low quality settings, where the software overhead is more noticeable.

          • This is specially visible on the rocks (but even in action on the plancks of the fences).

            Odd. I heard these were pretty constant.

        • by Fweeky ( 41046 )

          It's quite noticeable on Page 2 [gamespot.com]; see the cliffs in the last shot: Vista has shadows where XP has none. Not terribly exciting though, especially given the additional FPS impact; woo, a few shadows ;)

          Some things you probably have to see moving, though. E.g. Bioshock uses more dynamic water with DX10 (which has better vertex shaders or so?), and it responds more to objects moving in it.

      • It'll have to be Vista 64-bit...

        Under Vista 32-bit you're left with only 640K after removing the video memory allocations.

        Us normal SLi users win with only 3.2GB left :P
    • Yeah, and imagine a beowulf cluster of those!

      I'm sorry, that was uncalled for.
      • Isn't it a beowulf cluster already? It was FREE as in beer... (Probably gave them every one they had!) Now they can run Folding@home!

        • What an INCREDIBLE BOX. 15 fans on the front and sides. It must sound like a 747/MacProG5. Nice GPUs though...

          • I assume you mean a PowerMac G5, because the MacPros are pretty much silent.

          • So it's about one fan per GPU? Seems annoying and inefficient. Why not build it more spread apart, or use a "Central Air" system like people use in their homes?

            Not using water cooling I understand, 'cause there'd be around 30 tubes snaking in and out of the box - something would fail/leak.

            • Those are LARGE FANS. There are probably about 3 fans per actual GPU: one on the card, one on the box, and one on the power supply/etc...

              You could just as easily bathe the thing in cooling oil. Although I am not a fan of water cooling, I can't see it as being any more unreliable than fans; done well, water cooling will outlast the machine.

    • This is just what 3D Realms has been waiting for... it's almost powerful enough to run Duke Nukem Forever!
       

      • Re: (Score:2, Informative)

        by kaizokuace ( 1082079 )
        no it isn't. Duke Nukem Forever will be released when a powerful enough computer is assembled. The game will just manifest itself in the machine once powered up. But you have to have downloaded 20TB of porn and covered the internals with a thin layer of cigar smoke first.
    • Fascinating (Score:5, Interesting)

      by AlienIntelligence ( 1184493 ) on Sunday July 27, 2008 @06:17AM (#24356391)

      I think this part of the computing timeline is going to be
      one that is well remembered. I know I find it fascinating.

      This is a classic moment when tech takes the branch that
      was unexpected. GPGPU computing [gpgpu.org] will soon
      reach ubiquity but for right now it's the fledgling that is being
      grown in the wild.

      Of course I'm not earmarking this one particular project
      as the start point but this year has gotten 'GPU this' and
      'GPGPU that' start up events all over it. Some even said
      in 2007 that it would be a buzzword in 08 [theinquirer.net].

      And of course there's nothing like new tech to bring out [intel.com]
      a naysayer.

      Folding@home [stanford.edu] released their second generation [stanford.edu]
      GPU client in April 08, while retiring the GPU1 core in
      June of this year.

      I know I enjoy throwing spare GPU cycles to a distributed
      cause and whenever I catch sight of the icon for the GPU [stanford.edu]
      client it brings back the nostalgia of distributed clients [wikipedia.org]
      of the past. [Near the bottom].

      I think I was with United Devices [wikipedia.org] the longest.
      And the Grid [grid.org].

      Now we are getting a chance to see GPU supercomputing
      installations from IBM [eurekalert.org] and this one from MIT.
      Soon those will be littering the Top 500 list [top500.org].

      I also look forward most to the peaceful endeavors the new
      processing power will be used for... weather analysis [unisys.com],
      drug creation [wikipedia.org], and disease studies [medicalnewstoday.com].

      Oh yes, I realize places like the infamous Sandia will be using
      the GPU to rev up atom splitting. But maybe if they keep their
      bombs IN the GPU it'll lessen the chances of seeing rampant
      proliferation again.

      Ok, well enough of my musings over a GPU.

      -AI

      • Huh? GPGPU was a buzzword in 2005 (at least, that's when I first saw large numbers of books about the subject appearing). Now it's pretty much expected - GPUs are the cheapest stream-vector processors on the market at the moment.
      • Re: (Score:3, Insightful)

        "I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating."

        Well remembered? Perhaps... but I wouldn't sing their praises just yet. Advances in memory are critically necessary to keep up the pace of computational speed. The big elephants in the room are heat, memory bandwidth, and latency. Part of the reason the GPUs this time round were not as impressive is that increasing memory bandwidth linearly will start to not have the same effects

      • I think it's great to see that we can finally start using GPUs to do things beyond gaming, but I also don't see it as the Great Second Coming of high-speed computing. GPUs are designed to tackle only one kind of problem, and a highly parallel problem at that. If you are a researcher and you can see huge gains in performance by using GPUs, then great! But GPUs are hardly general purpose, and will simply not address most of our computing needs. I see the rise of GPUs as similar to computing in the 60's(?)
      • Some interesting GPU projects here: Nvidia Cuda [nvidia.com]
    • You don't have to duck. I'm about to take the heat for you...

      *clears throat*
      Yeah, but will it run Duke Nukem Forever?
      *runs like hell*
    • by Z34107 ( 925136 )

      Crysis ran "well" for me at Medium settings on an 8800 GTX and a 2.6GHz dual core at my monitor's native resolution of 1680x1050. (Using DirectX 10 on Vista!)

      But, it ran everything on "zomg high amazing ponies!" when I connected it to my lower-resolution 720p television.

      (I love doing that to Xbox fanboys - "You think Team Fortress 2 looks "amazing" on your little toy? Come over here and see it played at 60fps with more antialiasing than you could fit in the 12 dimensions of a X-hypercube, let alone an X-b

  • It's coming (Score:1, Funny)

    by Anonymous Coward

    The day when self modification/upgrade enthusiasts start overclocking themselves and bragging about how many fps their eyes get watching the superbowl.

    • Well, for the time being I prefer to tinker with things outside my body, thank you.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Jesus doesn't approve of you doing that.

    • Been there, done that. Volcano beans, fresh ground in an espresso machine.

    • You laugh, but it seems like my eyes have gotten faster.
      I used to not care about 60 Hz refresh rates, but now I can't stand them. Looking straight ahead at a CRT monitor running at 60 hertz looks like a rapidly flickering/shimmering mess. 70 Hz is still annoying b/c my peripheral vision picks it up.

      I attribute my increased sensitivity to flicker to playing FPS's.

      Oh and when a decent brain-computer interface comes out I'll be getting one installed.

  • I noticed they didn't have 8GB of RAM though. Very sad.
  • by chabotc ( 22496 ) <chabotc@@@gmail...com> on Sunday July 27, 2008 @04:45AM (#24356081) Homepage

    When gamers grow up and go to college... blue LEDs and bling in the server room!

  • Alright! (Score:4, Funny)

    by religious freak ( 1005821 ) on Sunday July 27, 2008 @04:48AM (#24356097)
    One more step to the last invention man ever need make... hooker bot. (mine would be a Buffy Bot, but that's just personal preference)
    • One more step to the last invention man ever need make... hooker bot. (mine would be a Buffy Bot, but that's just personal preference)

      She would have to be cooled with liquid nitrogen, running all those GPUs.

    • by Yoozer ( 1055188 )
      It doesn't even need to be able to play blackjack!
    • by Ruie ( 30480 )

      One more step to the last invention man ever need make... hooker bot. (mine would be a Buffy Bot, but that's just personal preference)

      Here you go: one robotic buffing cell [intec-italy.com]

    • by DarkOx ( 621550 )

      I know this is Slashdot so you have probably not spent much time around females, at least not those of our species, but let me tell you an angry one is a dangerous creature. All of them do get pissed off some of the time. You can be the greatest guy ever and sooner or later you will make a mistake. The good news is that if you are a good guy they will forgive you, but the period between your screw-up and their forgiveness can be extremely hazardous.

      Buffy is a fun show and all but if I were ordering robo girl, I am

  • by MR.Mic ( 937158 ) on Sunday July 27, 2008 @04:56AM (#24356129)

    I keep seeing all these articles about bringing more types of processing applications to the gpu, since it handles floating point math and parallel problems better. I only have a rudimentary understanding of programming compared to most people on this site, so the following may sound like a dumb question. But how do you determine what types of problems will perform well (or are even possible to be solved) through the use of GPUs, and just how "general purpose" can you get on such specialized hardware?

    Thanks in advance.

    • It's just a matter of transforming the data into a format the GPU can handle efficiently.

      • by hansraj ( 458504 ) * on Sunday July 27, 2008 @05:37AM (#24356251)

        Not really. Not every problem gains from a GPU.

        As a rule of thumb, if your problem requires solving many instances of one simple subproblem, each independent of the others, then a GPU helps. A GPU is like a CPU with many, many cores, where each core is not as general purpose as your Intel; rather, each core is optimized for solving some small problem (without optimizing for the frequent load/store/switching operations etc. that a general CPU handles quite well).

        So if you see an easy parallelization of your problem, you might think of using a GPU. There are problems that are believed to not be efficiently parallelizable (Linear Programming is one such problem). Also, even if your problem can easily be made parallel, it might be tricky to benefit from a GPU, as each subroutine might be too complex.

        I don't program, but my guess would be that if you can see the solution to your problem as a few lines of code running on many processors and gaining anything, a GPU might be the way to go.

        Perhaps someone can explain it better.
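      For readers who want a concrete picture of that rule of thumb, here is a minimal sketch (assuming NVIDIA's CUDA toolkit; the names and sizes are made up, not anything from the article): every thread handles one array element and no element depends on any other, which is exactly the shape of workload a GPU handles well.

        // Minimal data-parallel sketch: a SAXPY (y = a*x + y), where each
        // thread solves one tiny, independent subproblem.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void saxpy(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        int main() {
            const int n = 1 << 20;
            float *x, *y;
            cudaMallocManaged(&x, n * sizeof(float));   // unified memory, for brevity
            cudaMallocManaged(&y, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

            saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);   // thousands of threads at once
            cudaDeviceSynchronize();

            printf("y[0] = %f\n", y[0]);                      // expect 5.0
            cudaFree(x); cudaFree(y);
            return 0;
        }

      By contrast, a problem where each step needs the result of the previous step would leave most of those threads idle, which is the kind of workload the parent is warning about.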

        • Re: (Score:3, Informative)

          by TapeCutter ( 624760 ) *
          I think you did a good job explaining; one point though. The sub-problems need not be independent.

          Many problems such as weather prediction use finite element analysis with a "clock tick" to synchronise the results of the sub-problems. The sub-problems themselves are cubes representing X cubic kilometers of the atmosphere/surface; each sub-problem depends on the state of its immediate neighbours. The accuracy of the results depends on the resolution of the clock tick, the volume represented by the sub-pr
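      A rough sketch of that "clock tick" pattern (CUDA assumed; a made-up 1-D diffusion stencil, not the weather code being described): each cell is updated from its neighbours' values from the previous tick, and the boundary between kernel launches is the synchronisation point.

        // Each cell's new value depends on its neighbours' values from the
        // previous tick; the kernel-launch boundary acts as the "clock tick".
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void diffuse_step(int n, const float *prev, float *next) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i > 0 && i < n - 1)
                next[i] = 0.25f * prev[i - 1] + 0.5f * prev[i] + 0.25f * prev[i + 1];
        }

        int main() {
            const int n = 1024, ticks = 100;
            float *a, *b;
            cudaMallocManaged(&a, n * sizeof(float));
            cudaMallocManaged(&b, n * sizeof(float));
            for (int i = 0; i < n; ++i) a[i] = b[i] = (i == n / 2) ? 100.0f : 0.0f;

            for (int t = 0; t < ticks; ++t) {            // one launch per tick
                diffuse_step<<<(n + 255) / 256, 256>>>(n, a, b);
                cudaDeviceSynchronize();                 // all cells finish this tick
                float *tmp = a; a = b; b = tmp;          // ping-pong the two buffers
            }
            printf("centre cell after %d ticks: %f\n", ticks, a[n / 2]);
            cudaFree(a); cudaFree(b);
            return 0;
        }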
      • Re: (Score:3, Informative)

        by TheLink ( 130905 )
        You also need to make sure the I/O to/from the GPU is good enough.

        No point being able to do calculations really fast but not be able to get the results or keep feeding the GPU with data.

        I think not too long ago graphics cards were fast, but after you added the problem of getting calculation results back, it wasn't really worth it.
        • That was due to the asymmetric design of AGP.
          PCI-Express is symmetric, so it doesn't have this limitation.
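      A small, hypothetical timing sketch of the transfer-cost point above (CUDA assumed; sizes made up): the host-to-device upload, the kernel, and the download are timed separately, because for small or I/O-heavy jobs the transfers can swamp the compute.

        // Time the host<->device copies separately from the kernel; the copies
        // are the "feeding the GPU" and "getting the results back" costs.
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        __global__ void scale(int n, float *d, float s) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) d[i] *= s;
        }

        int main() {
            const int n = 1 << 24;
            size_t bytes = n * sizeof(float);
            float *h = (float *)malloc(bytes), *d;
            for (int i = 0; i < n; ++i) h[i] = 1.0f;
            cudaMalloc((void **)&d, bytes);

            cudaEvent_t t0, t1, t2, t3;
            cudaEventCreate(&t0); cudaEventCreate(&t1);
            cudaEventCreate(&t2); cudaEventCreate(&t3);

            cudaEventRecord(t0);
            cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // feed the GPU
            cudaEventRecord(t1);
            scale<<<(n + 255) / 256, 256>>>(n, d, 2.0f);       // the actual work
            cudaEventRecord(t2);
            cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // fetch the results
            cudaEventRecord(t3);
            cudaEventSynchronize(t3);

            float up, run, down;
            cudaEventElapsedTime(&up, t0, t1);
            cudaEventElapsedTime(&run, t1, t2);
            cudaEventElapsedTime(&down, t2, t3);
            printf("upload %.2f ms, kernel %.2f ms, download %.2f ms\n", up, run, down);

            cudaFree(d); free(h);
            return 0;
        }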

    • by moteyalpha ( 1228680 ) * on Sunday July 27, 2008 @05:38AM (#24356257) Homepage Journal
      I have been using my own GPU to do this very same thing, by automatically converting images to vertex format and using the GPU to scale, shade, etc.; in this way I can do shape recognition by simply measuring the closest match in the frame buffer. There are more complex ways to use the GPU to do pseudo-computation in parallel. I still think that a commonly available CAM (or near-CAM) would speed up neural-like computations by being an essentially completely parallel process. It would be better to allow more people to experiment with these methods, because the greatest gain and cost is the software itself; specialized hardware for a single purpose allows better profit but limits innovation.
    • A GPU executes shader programs. These are typically kernels - small programs that are run repeatedly on a lot of inputs (e.g. a vertex shader kernel would run on every vertex in a scene, a pixel shader on each pixel). You can typically run several kernels in parallel (up to around 16 I think, but I've not been paying attention to the latest generation, so it might be more or less). Within each kernel, you have a simple instruction set designed to be efficient on pixels and vertexes. These are both four-
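      As a loose illustration of that kernel model (written in CUDA rather than an actual shader language, with made-up names): one small program applied independently to every vertex, here a 4x4 matrix transform on four-component positions.

        // CUDA analogue of a vertex-shader kernel: the same small program runs
        // independently over every vertex in a buffer.
        #include <cstdio>
        #include <cuda_runtime.h>

        __constant__ float mvp[16];   // column-major 4x4 transform matrix

        __global__ void transform_vertices(int n, const float4 *in, float4 *out) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            float4 v = in[i];
            out[i] = make_float4(
                mvp[0] * v.x + mvp[4] * v.y + mvp[8]  * v.z + mvp[12] * v.w,
                mvp[1] * v.x + mvp[5] * v.y + mvp[9]  * v.z + mvp[13] * v.w,
                mvp[2] * v.x + mvp[6] * v.y + mvp[10] * v.z + mvp[14] * v.w,
                mvp[3] * v.x + mvp[7] * v.y + mvp[11] * v.z + mvp[15] * v.w);
        }

        int main() {
            const int n = 4;
            float identity[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
            cudaMemcpyToSymbol(mvp, identity, sizeof(identity));

            float4 *in, *out;
            cudaMallocManaged(&in, n * sizeof(float4));
            cudaMallocManaged(&out, n * sizeof(float4));
            for (int i = 0; i < n; ++i) in[i] = make_float4((float)i, 0.0f, 0.0f, 1.0f);

            transform_vertices<<<1, 64>>>(n, in, out);
            cudaDeviceSynchronize();
            printf("vertex 3 -> (%g, %g, %g, %g)\n", out[3].x, out[3].y, out[3].z, out[3].w);
            cudaFree(in); cudaFree(out);
            return 0;
        }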
    • The GPU architecture has been progressively moving to a more "general" system with every generation. Originally the processing elements in the GPU could only write to one memory location, now the hardware supports scattered writes, for example.

      As such, I think the GPGPU method of casting algorithms into GPU APIs (CUDA et al.) is going to die a quick death once Larrabee comes out and people can simply run their threaded code on these finely-grained co-processors.

    • 1) Your task has to be highly parallel. You really need something that can be made parallel to a more or less infinite level. Current GPUs have hundreds of parallel shader paths (which are what you use for GPGPU). So you have to have a problem that can be broken down in to a bunch of small parallel processes.

      2) Your task needs to be single precision floating point. The latest nVidia GPUs do support double precision, but they are the only ones, and they take a major, major speed penalty (way over 50%) to do
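      A tiny, hypothetical demonstration of why the precision question matters (CUDA assumed; the naive atomicAdd accumulation is deliberately simplistic): summing a million values in single precision on the GPU typically drifts away from a double-precision reference computed on the CPU.

        // Accumulate in float on the GPU and compare against a double-precision
        // sum of the same values on the CPU; the float total usually differs
        // in the low digits.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void sum_float(int n, const float *x, float *total) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) atomicAdd(total, x[i]);   // naive single-precision accumulation
        }

        int main() {
            const int n = 1 << 20;
            float *x, *total;
            cudaMallocManaged(&x, n * sizeof(float));
            cudaMallocManaged(&total, sizeof(float));
            double reference = 0.0;
            for (int i = 0; i < n; ++i) { x[i] = 0.1f; reference += (double)x[i]; }
            *total = 0.0f;

            sum_float<<<(n + 255) / 256, 256>>>(n, x, total);
            cudaDeviceSynchronize();

            printf("float sum on GPU:  %f\ndouble sum on CPU: %f\n", *total, reference);
            cudaFree(x); cudaFree(total);
            return 0;
        }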

  • I'm still eager to see PhysX running on my dual 8800M GTX laptop. I've run all the drivers from 177.35 up and I'm running the 8.06.12 PhysX drivers as required.
    Apparently it's just the mobile versions :(
    • by bmgoau ( 801508 )

      That's an easy problem to solve! Just wait for the technology to mature before purchas...Oh.

  • > 8x9800gx2s donated by NVIDIA.

    I wonder how many BSODFLOPS (Blue screens of death per second) it can generate? ;)

    http://byronmiller.typepad.com/byronmiller/2005/10/stupid_windows_.html [typepad.com] http://www.google.com.au/search?q=nvidia+'blue+screen+of+death'+nv4_disp [google.com.au]
  • by MoFoQ ( 584566 ) on Sunday July 27, 2008 @06:40AM (#24356489)

    Is it me, or do I see two separate mobos... which means it's two machines, 8 per machine in one box... not 16?

    now...if it was 16 in one...now that would be amazing....otherwise...it's not...'cuz there was that other group that did 8 in 1 [slashdot.org] (aka...16/2 => 8/1)

    • Re: (Score:2, Informative)

      by sam0737 ( 648914 )

      That's one machine for simulating one eye. That's why they need 2 * 8 for simulating human-level vision, or else you won't get the 3D vision.

      • Maybe so, but why not build just two machines? The only reason I can think of is that this sounds cooler. Maybe they save a bit of money on having a single cooling solution/power supply, but I don't see it. Strangely enough, the machine doesn't seem to be symmetric. They've probably put one motherboard upside down; otherwise you would have to split the case. Let's hope the magic doesn't leak out.

    • by dave1g ( 680091 )

      Well, if you use that definition, then none of the supercomputers built in the last decade count either, since they are all giant clusters.

      • by MoFoQ ( 584566 )

        no...super computers, especially beowulf clusters (or even the petaboxes [linuxdevices.com])...they are interlinked in some way.

        besides, super computers are usually given the designation "cluster" or something of that nature and not the singular "machine"

        • by dave1g ( 680091 )

          It appears I should have looked at the article; you are right, there are 2 separate boxes. I assumed that they were connected; I was wrong.

  • A system with 16 x 4870X2s. They will draw less energy too.
  • by Anonymous Coward

    God, they stuck so many fans into that box that I bet it takes off the ground when it boots.

  • They should use the quantum computer described a few posts above; it seems to be especially designed for the kind of pattern matching that computer vision might require.
  • the Roadrunner, the world's fastest supercomputer

    But does it run Crysis?

  • On June 30 of this year, The New Yorker magazine published a fascinating, if at moments disturbing, article entitled The Itch [newyorker.com]. The article discusses, among other things, the human mind's perception of the reality of its environment based on the various nervous inputs it has, vision included. Apparently this is an oft-debated topic among the scientific community, but it was new information to me.

    One of the things I found intriguing was the note that the bulk (80%) of the neural interconnections going into t
  • I looked through each of the TFAs linked in the story, and I don't see any technical details on this system. Whereas when the FASTRA people at Univ. of Antwerp put together their 4x 9800-GX2 system for CUDA, they published all the nitty-gritty, down to specific parts, etc. The pictures are interesting but not enough.

  • I had eight Quadro Plex units where I used to work for CAD/CAM/FEA/CFD...a year ago.

  • CAE's Tropos image generators use 17 GPUs per channel in a commercially available package. Each image channel (there are usually at least 3 in a flight simulator) uses 4 quad-GPU Radeon 8500 cards in addition to the onboard GPU which is only used for the operator interface. I've been working on these things for a couple of years now.
