MIT Artificial Vision Researchers Assemble 16-GPU Machine

lindik writes "As part of their research efforts aimed at building real-time, human-level artificial vision systems inspired by the brain, MIT graduate student Nicolas Pinto and principal investigators David Cox (Rowland Institute at Harvard) and James DiCarlo (McGovern Institute for Brain Research at MIT) recently assembled an impressive 16-GPU 'monster' built from eight 9800 GX2 cards donated by NVIDIA. The high-throughput approach they promote can also use other ubiquitous technologies, such as IBM's Cell Broadband Engine processor (found in Sony's PlayStation 3) or Amazon's Elastic Compute Cloud (EC2) service. Interestingly, the team is also involved in the PetaVision project on Roadrunner, the world's fastest supercomputer."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Sunday July 27, 2008 @05:01AM (#24356143)

    There is hardly a difference [gamespot.com] between Crysis under DX9 and DX10. DX10 "features" are a Microsoft scam to promote Vista, nothing more.

    So yes, you can maximise the detail levels on XP.

  • by ya really ( 1257084 ) on Sunday July 27, 2008 @05:18AM (#24356201)

    Tom's Hardware [tomshardware.com] did a pretty good job detailing the ups and downs of ATI and Nvidia with many of the major games of last year (BioShock, World in Conflict, etc.). Overall, both companies fared well, but they reported quite a few crashes due to the ATI drivers. I've had an ATI card before: the 9800 XT, back in 2003-04 when Nvidia was producing its horrible, all-but-worthless 5xxx series. The 9800 XT was a good card for everything (gaming, graphics apps, etc.). Sorry, I should have cited sources. Wasn't trolling on purpose, though I know that writing anything positive about Nvidia on Slashdot is borderline blasphemy.

  • by TheLink ( 130905 ) on Sunday July 27, 2008 @05:40AM (#24356263) Journal
    You also need to make sure the I/O to/from the GPU is good enough.

    There's no point being able to do calculations really fast if you can't get the results back out, or can't keep feeding the GPU with data.

    Not so long ago graphics cards were very fast at the raw maths, but once you added the cost of getting the calculation results back off the card, it often wasn't really worth it. (A rough sketch of how to measure that follows this comment.)
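
    A rough CUDA sketch of this point, timing the host-to-device upload, the kernel, and the download separately so you can see whether the bus transfers dominate the actual compute. The array size and the trivial kernel are invented purely for illustration:

        // Compare bus-transfer time with on-GPU compute time.
        // Array size and the trivial kernel are illustrative only.
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        __global__ void scale(float *d, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) d[i] *= 2.0f;               // trivial per-element work
        }

        int main() {
            const int n = 1 << 24;                 // ~16M floats, ~64 MB
            const size_t bytes = n * sizeof(float);

            float *h = (float *)malloc(bytes);
            for (int i = 0; i < n; ++i) h[i] = 1.0f;

            float *d;
            cudaMalloc(&d, bytes);

            cudaEvent_t t0, t1, t2, t3;
            cudaEventCreate(&t0); cudaEventCreate(&t1);
            cudaEventCreate(&t2); cudaEventCreate(&t3);

            cudaEventRecord(t0);
            cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // feed the GPU
            cudaEventRecord(t1);
            scale<<<(n + 255) / 256, 256>>>(d, n);              // do the work
            cudaEventRecord(t2);
            cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);    // get results back
            cudaEventRecord(t3);
            cudaEventSynchronize(t3);

            float up, run, down;
            cudaEventElapsedTime(&up, t0, t1);
            cudaEventElapsedTime(&run, t1, t2);
            cudaEventElapsedTime(&down, t2, t3);
            printf("upload %.2f ms, kernel %.2f ms, download %.2f ms\n", up, run, down);

            cudaFree(d);
            free(h);
            return 0;
        }

    For a kernel this trivial the two copies usually take far longer than the compute, which is exactly why feeding the card and getting results back can wipe out the raw speed advantage.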
  • by Anonymous Coward on Sunday July 27, 2008 @05:45AM (#24356279)

    I upgraded my X800XL to an 8800GT. Under Windows I never had a problem with the X800XL, and I still haven't seen a problem with the 8800GT. The X800XL just worked and the 8800GT just works.

    With Ubuntu, the X800XL was working nicely (open source drivers) and the 8800GT is a piece of crap. NVidia's drivers are horribly slow and a lot of users are reporting the same thing. I have an old computer with an even older GeForce 4 MX and it displays things faster.

    Before I bought my 8800GT I didn't care much about one company or the other, but unless NVidia can release something that works well, I guess I am pro ATI for now.

  • by kaizokuace ( 1082079 ) on Sunday July 27, 2008 @06:31AM (#24356457)
    No it isn't. Duke Nukem Forever will be released when a powerful enough computer is assembled. The game will simply manifest itself in the machine once it's powered up. But you have to have downloaded 20TB of porn and covered the internals with a thin layer of cigar smoke first.
  • by TheRaven64 ( 641858 ) on Sunday July 27, 2008 @06:54AM (#24356561) Journal
    A video card driver typically has three major components:
    • The parts specific to the windowing system (including context switching / multiplexing).
    • The parts specific to the 3D API.
    • The parts specific to the hardware.

    ATi could conceivably lift the first two parts from nVidia, but it's doubtful they could take anything from the last part, since the two companies' hardware designs are different enough to make that impractical.

    The problem nVidia are going to have is that the new Gallium architecture abstracts the first two parts away and makes them reusable, along with the fallback path (which emulates whatever functionality a specific GPU is missing). That means Intel and AMD both benefit whenever the other company (or random hippyware developers, or other GPU manufacturers and users) improves the generic components, while nVidia are stuck maintaining their own complete alternative to DRI, DRM, Gallium, and Mesa. The upshot is that Intel and AMD can spend a tiny fraction of the time (and thus money) that nVidia does on driver development. In the long run, that means either smaller profits or more expensive cards for nVidia, and more bugs in nVidia's drivers (since they don't get the same real-world coverage testing).

    Now, if you're talking just about specs, then you're just plain trolling. Intel doesn't lose anything to AMD by releasing the specs for the Core 2 in a 3000 page PDF, because the specs just give you the input-output semantics, they don't give you any implementation details. Anyone with a little bit of VLSI experience could make an x86 chip, but making one that gives good performance and good performance-per-Watt is a lot harder. Similarly, the specs for an nVidia card would let anyone make a clone, but they'd have to spend a lot of time and effort optimising their design to get anywhere close to the performance that nVidia get.

  • DX10 vs DX9 (Score:5, Informative)

    by DrYak ( 748999 ) on Sunday July 27, 2008 @07:03AM (#24356603) Homepage

    There are two main differences between DX9 and DX10:

    I - The shaders offered by the two APIs are different (Shader Model 3 vs. 4). None of the DX9 screenshots shows self-shadowing; this is especially visible on the rocks (and even on the planks of the fences). So there *are* additional subtleties available under Vista.

    II - The driver architecture is much more complex under Vista, because it is built to let several separate processes use the graphics hardware at the same time. Even though Vista automatically disables Aero when a game runs full-screen (so the game is the only process touching the card), the extra layers of abstraction still cost performance. This is especially visible at low quality settings, where the software overhead is more noticeable.

  • by TapeCutter ( 624760 ) * on Sunday July 27, 2008 @09:01AM (#24357195) Journal
    I think you did a good job explaining; one point, though: the sub-problems need not be independent.

    Many problems, such as weather prediction, use finite element analysis with a "clock tick" to synchronise the results of the sub-problems. The sub-problems are cubes representing X cubic kilometres of atmosphere/surface, and each one depends on the state of its immediate neighbours. The accuracy of the results depends on the resolution of the clock tick, the volume represented by each sub-problem, and the accuracy of the initial conditions. This is useful in all sorts of simulations, from designing molds to minimise the air pockets that plague metal casting, to shooting Cassini through the rings of Saturn, twice! (A rough sketch of the clock-tick pattern follows this comment.)

    The technique can be thought of as brute-force integration with respect to time, space, matter, energy, etc., across a wide range of physical problems. The more power and raw data you throw at these problems, the more realistic the "physics" in both video games and scientific simulations. IMHO we have only just scratched the surface of what computers can tell us about the real world through these kinds of simulations, and much of that is down to scientists in many fields confusing "computer simulation" with "artist's impression".

    BTW, climate and weather modelling use the same sort of algorithm but get very different results, because weather != climate.
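
    A rough CUDA sketch of the clock-tick pattern described above: a 1D diffusion stencil in which each cell's next value depends only on its immediate neighbours, and the host loop plays the role of the clock. The grid size, diffusion coefficient, and number of ticks are invented for illustration:

        // Synchronous "clock tick" stepping: each cell's next value depends
        // on its immediate neighbours; one kernel launch == one tick.
        // Grid size, coefficient and tick count are illustrative only.
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        __global__ void step(const float *cur, float *next, int n, float k) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i <= 0 || i >= n - 1) {
                if (i < n) next[i] = cur[i];       // fixed boundary cells
            } else {
                next[i] = cur[i] + k * (cur[i - 1] - 2.0f * cur[i] + cur[i + 1]);
            }
        }

        int main() {
            const int n = 1024, ticks = 1000;
            const float k = 0.1f;                  // diffusion coefficient
            const size_t bytes = n * sizeof(float);

            float *h = (float *)calloc(n, sizeof(float));
            h[n / 2] = 100.0f;                     // one hot cell as the initial condition

            float *a, *b;
            cudaMalloc(&a, bytes);
            cudaMalloc(&b, bytes);
            cudaMemcpy(a, h, bytes, cudaMemcpyHostToDevice);

            for (int t = 0; t < ticks; ++t) {      // the host loop is the clock
                step<<<(n + 255) / 256, 256>>>(a, b, n, k);
                float *tmp = a; a = b; b = tmp;    // swap buffers for the next tick
            }
            cudaMemcpy(h, a, bytes, cudaMemcpyDeviceToHost);
            printf("centre cell after %d ticks: %f\n", ticks, h[n / 2]);

            cudaFree(a);
            cudaFree(b);
            free(h);
            return 0;
        }

    Weather-style codes do the same thing in 3D with far more state per cell, but the structure is the same: every sub-problem reads its neighbours' previous state, and nothing moves on to the next tick until the current one is finished.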
  • by sam0737 ( 648914 ) <samNO@SPAMchowchi.com> on Sunday July 27, 2008 @09:39AM (#24357395)

    That's one machine for simulating one eye. That's why they need 2 * 8 GPUs for human-level vision; otherwise you won't get the 3D vision.
