
Nvidia and AMD Hug It Out, SLI Coming To AMD Mobos

MojoKid writes "In a rather surprising turn of events, NVIDIA has just gone on record that, starting with AMD's 990 series chipset, you'll be able to run multiple NVIDIA graphics cards in SLI on AMD-based motherboards, a feature previously only available on Intel or NVIDIA-based motherboards. Nvidia didn't go into many specifics about the license, such as how long it's good for, but did say the license covers 'upcoming motherboards featuring AMD's 990FX, 990X, and 970 chipsets.'"

  • RAM (Score:2, Interesting)

    by im_thatoneguy ( 819432 ) on Friday April 29, 2011 @03:42AM (#35972448)

    I would be more excited if they had announced a new initiative to enable fast memory access between the GPU and system RAM.

    2GB for visualization is just too small. 8GB would be a good start, even if it were DDR3 and not GDDR5. Something like HyperTransport could enable low-latency, high-bandwidth access to expandable system memory on the cheap.

    Either that, or it's high time we got 8GB per core for GPUs.

  • by Tukz ( 664339 ) on Friday April 29, 2011 @04:15AM (#35972556) Journal

    Hacked drivers solve that problem.

  • Re:RAM (Score:5, Interesting)

    by adolf ( 21054 ) <flodadolf@gmail.com> on Friday April 29, 2011 @04:39AM (#35972630) Journal

    I would be more excited if they had announced a new initiative to enable fast memory access between the GPU and system RAM.

    Do you really think so? We've been down this road before and while it's sometimes a nice ride, it always leads to a rather anticlimactic dead-end.

    (Notable examples are VLB, EISA, PCI and AGP, plus some very similar variations on each of these.)

    2GB for visualization is just too small. 8GB would be a good start, even if it were DDR3 and not GDDR5.

    Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

    That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

    Something like HyperTransport could enable low-latency, high-bandwidth access to expandable system memory on the cheap.

    HTX, which is mostly just HyperTransport wrapped around a familiar card-edge connector, has been around for a good while. HTX3 added a decent speed bump to the format in '08. AFAICT, nobody makes graphics cards for such a bus, and no consumer-oriented systems have ever included it. It's still there, though...

    Either that, or it's high time we got 8GB per core for GPUs.

    This. If there is genuinely a need for substantially bigger chunks of RAM to be available to a GPU, then I'd rather see it nearer to the GPU itself. History indicates that this will happen eventually anyway (no matter how well-intentioned the new-fangled bus might be), so it might make sense to just cut to the chase...

  • Ummmm.... How? (Score:4, Interesting)

    by Sycraft-fu ( 314770 ) on Friday April 29, 2011 @05:23AM (#35972748)

    You realize the limiting factor in system RAM access is the PCIe bus, right? It isn't as though that can magically be made faster. I suppose they could start doing 32x slots, which is technically allowed by the spec, but that would mean more cost for both motherboards and graphics cards, with no real benefit except to people like you who want massive amounts of RAM.

    In terms of increasing the bandwidth of the bus without increasing its width, well, Intel is on that. PCIe 3.0 was finalized in November 2010, and both Intel and AMD are working on implementing it in next-gen chipsets. It doubles per-lane bandwidth over 2.0/2.1 by increasing the clock rate and using more efficient (but much more complex) signaling. That would give about 16GB/sec of bandwidth from an x16 slot, which is on par with what you see from DDR3 1333MHz system memory.
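
    (For anyone who wants to sanity-check that 16GB/sec figure, here's a rough back-of-the-envelope sketch. The transfer rates and encoding overheads are the standard PCIe 2.0/3.0 numbers, but the x16 lane count and the framing of the calculation are my own illustrative assumptions, not anything from the announcement.)

```python
# Back-of-the-envelope PCIe link bandwidth (illustrative assumptions only).
def pcie_bandwidth_gb_s(transfer_rate_gt_s, encoding_efficiency, lanes):
    """Peak one-directional bandwidth of a PCIe link, in GB/s."""
    useful_bits_per_s = transfer_rate_gt_s * 1e9 * encoding_efficiency * lanes
    return useful_bits_per_s / 8 / 1e9

# PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b encoding.
print(pcie_bandwidth_gb_s(5.0, 8 / 10, 16))      # ~8 GB/s for an x16 slot
print(pcie_bandwidth_gb_s(8.0, 128 / 130, 16))   # ~15.8 GB/s for an x16 slot
```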

    However, even if you do that, it isn't really that useful; it'll still be slow. See, graphics cards have WAY higher memory bandwidth requirements than CPUs. That's why they use GDDR5 instead of DDR3. While GDDR5 is based on DDR3, it runs at much higher speeds and offers far more bandwidth. With their huge memory controllers, you can see cards with 200GB/sec or more of bandwidth. You just aren't going to get that out of system RAM, even if you had a bus that could transfer it (which you don't).
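
    (Same flavor of rough arithmetic for the card's local memory, to show where figures like 200GB/sec come from. The 4Gbps GDDR5 data rate and 384-bit bus width are illustrative assumptions for a 2011-era high-end card, and dual-channel DDR3-1333 is assumed for the system-memory comparison.)

```python
# Rough memory-interface bandwidth estimate (illustrative assumptions only).
def memory_bandwidth_gb_s(data_rate_gbps_per_pin, bus_width_bits):
    """Peak bandwidth of a memory interface, in GB/s."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(memory_bandwidth_gb_s(4.0, 384))    # ~192 GB/s: 4Gbps GDDR5 on a 384-bit bus
print(memory_bandwidth_gb_s(1.333, 128))  # ~21 GB/s: dual-channel DDR3-1333
```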

    Never mind that you then have to contend with the CPU, which needs to use it too.

    There's no magic to be had here that would let you grab system RAM and use it efficiently. Cards can already use it; it's part of the PCIe spec. Things just slow to a crawl when it gets used, since there are extreme bandwidth limitations from the graphics card's perspective.

  • by Narishma ( 822073 ) on Friday April 29, 2011 @06:59AM (#35973028)

    If you only consider the CPU, then what you say is true, but you also have to take into account that AMD motherboards generally cost less than Intel ones.

  • Re:Uncanny valley (Score:4, Interesting)

    by adolf ( 21054 ) <flodadolf@gmail.com> on Friday April 29, 2011 @08:58AM (#35973602) Journal

    You took my practical argument and made it theoretical, but I'll play. ;)

    I never had an EGA adapter. I did have CGA, and the next step was a Diamond Speedstar 24x, with all kinds of (well, one kind of) 24-bit color that would put your Tseng ET3000 (ET4000?) to shame. And, in any event, it was clearly better than CGA, EGA, VGA, or (bog-standard IBM) XGA.

    The 24x was both awesome (pretty!) and lousy (mostly due to its proprietary nature and lack of software support) at the time. I still keep it in a drawer -- it's the only color ISA video card I still have. (I believe there is also still a monochrome Hercules card kicking around in there somewhere, which I keep because its weird "high-res" mode has rarely been well-supported by anything else.)

    Anyway...porn was never better than when I was a kid with a 24-bit video card, able to view JPEGs without dithering.

    But what I'd like to express to you is that it's all incremental. There was no magic leap between your EGA card and your Tseng SVGA -- you just skipped some steps.

    And there was no magic leap between your 4MB card (whatever it was) and your 32MB Riva TNT2: I also made a similar progression to a TNT2.

    And, yeah: Around that time, model numbers got blurry. Instead of making one chipset at one speed (TNT2), manufacturers started bin-sorting and producing a variety of speeds from the same part (Voodoo3 2000, 3000, 3500TV, all with the same GPU).

    And also around that time, drivers (between OpenGL and DirectX) became consistent, adding to the blur.

    I still have a Voodoo3 3500TV, though I don't have a system that can use it. But I assure you that I would much rather play games (and pay the power bill) with my nVidia 9800GT than that old hunk of (ouch! HOT!) 3dfx metal.

    Fast-forward a bunch: recently I've been playing both Rift and Portal 2. The 9800GT is showing its age, especially with Rift, and it's becoming time to look for an upgrade.

    But, really, neither of these games would be worth the time of day on my laptop's ATI x300. This old Dell probably would've played the first Portal OK, but the second one...meh. And the x300 is (IIRC) listed as Rift's minimum spec, but the game loses its prettiness in a hurry when the quality settings are turned down.

    But, you know: I might just install Rift on this 7-year-old x300 laptop, just to see how it runs. That way, when I play the same game on my desktop with its 3-year-old, not-so-special-at-this-point 9800GT, I'll get the same "wow" factor I had when I first installed a Voodoo3 2000.

    The steps seem smaller these days, but progress marches on. You'll have absolute lifelike perfection eventually, but it'll take some doing to get there.
