AMD Graphics Upgrades Hardware Technology

Nvidia and AMD Hug It Out, SLI Coming To AMD Mobos

MojoKid writes "In a rather surprising turn of events, NVIDIA has just gone on record that, starting with AMD's 990 series chipset, you'll be able to run multiple NVIDIA graphics cards in SLI on AMD-based motherboards, a feature previously only available on Intel or NVIDIA-based motherboards. Nvidia didn't go into many specifics about the license, such as how long it's good for, but did say the license covers 'upcoming motherboards featuring AMD's 990FX, 990X, and 970 chipsets.'"
  • I assume they still won't let you mix AMD and nVidia video cards. Asshats. (think dedicated physx)

    • Re: (Score:3, Informative)

      by Anonymous Coward

      You already can use an Nvidia card for dedicated physx alongside an AMD card, but to avoid creating a bottleneck and actually suffering a performance loss, you need an Nvidia card that is more or less on par with your AMD card. So if you have, say, an AMD 6970, you would need at least something like an Nvidia 460 to get a big enough performance boost for it to even be worth the extra cash instead of just going CrossFire.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        Parent is incorrect and it's therefore no surprise that he provided no evidence or even supportive argument for his assertions.

        'physx' is a marketing term and an API currently only hardware accelerated through nVidia cards. Adding more AMD cards, as the parent suggests, doesn't do squat if what you want is 'physx' on a hardware path. Games typically only have two paths, software or 'physx', so the load either lands on the main CPU (you only have AMD card(s)) or on the GPU (you have an nVidia card with physx enabled).

    • by rhook ( 943951 )

      My motherboard (MSI Fuzion 870a) lets me mix CrossFireX and SLI cards.

      http://www.newegg.com/Product/Product.aspx?Item=N82E16813130297 [newegg.com]

      Powered by the Fuzion technology that offers Non-Identical & Cross-Vendor Multi-GPU processing, the MSI 870A Fuzion allows you to install two different level and brand graphics cards (even ATI and NVIDIA Hybrid) in a single system, providing flexible upgradability and great 3D performance.

  • RAM (Score:2, Interesting)

    I would be more excited if they had announced a new initiative to enable fast memory access between the GPU and system RAM.

    2GB for visualization is just too small. 8GB would be a good start, even if it was DDR3 and not GDDR5. Something like Hypertransport that could enable low latency, high bandwidth memory access for expandable system memory on the cheap.

    Either that, or it's high time we got 8GB per core for GPUs.

    • Re:RAM (Score:5, Interesting)

      by adolf ( 21054 ) <flodadolf@gmail.com> on Friday April 29, 2011 @04:39AM (#35972630) Journal

      I would be more excited if they had announced a new initiative to enable fast memory access between the GPU and system RAM.

      Do you really think so? We've been down this road before and while it's sometimes a nice ride, it always leads to a rather anticlimactic dead-end.

      (Notable examples are VLB, EISA, PCI and AGP, plus some very similar variations on each of these.)

      2GB for visualization is just too small. 8GB would be a good start, even if it was DDR3 and not GDDR5.

      Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

      That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

      Something like Hypertransport that could enable low latency, high bandwidth memory access for expandable system memory on the cheap.

      HTX, which is mostly just Hypertransport wrapped around a familiar card-edge connector, has been around for a good while. HTX3 added a decent speed bump to the format in '08. AFAICT, nobody makes graphics cards for such a bus, and no consumer-oriented systems have ever included it. It's still there, though...

      Either that, or it's high time we got 8GB per core for GPUs.

      This. If there is genuinely a need for substantially bigger chunks of RAM to be available to a GPU, then I'd rather see it nearer to the GPU itself. History indicates that this will happen eventually anyway (no matter how well-intentioned the new-fangled bus might be), so it might make sense to just cut to the chase...

      • Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

        That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

        The most awesome upgrade I ever had was when I went from EGA to a Tseng SVGA card with 1 MB memory. The next awesomest was when I upgraded from a 4 MB card to a Riva TNT2 with 32 MB. Every time I upgrade my video card there's less shock and awe effect. I'm willing to bet that going from 2 GB to 8 GB would be barely perceptible to most people.

        I think the top graphics cards today have gone over the local maximum point of realism. What I have been noticing a lot lately is the "uncanny valley" effect. The only

        • Re:Uncanny valley (Score:4, Interesting)

          by adolf ( 21054 ) <flodadolf@gmail.com> on Friday April 29, 2011 @08:58AM (#35973602) Journal

          You took my practical argument and made it theoretical, but I'll play. ;)

          I never had an EGA adapter. I did have CGA, and the next step was a Diamond Speedstar 24x, with all kinds of (well, one kind of) 24-bit color that would put your Tseng ET3000 (ET4000?) to shame. And, in any event, it was clearly better than CGA, EGA, VGA, or (bog-standard IBM) XGA.

           The 24x was both awesome (pretty!) and lousy (mostly due to its proprietary nature and lack of software support) at the time. I still keep it in a drawer -- it's the only color ISA video card I still have. (I believe there is also still a monochrome Hercules card kicking around in there somewhere, which I keep because its weird "high-res" mode has infrequently been well-supported by anything else.)

          Anyway...porn was never better than when I was a kid with a 24-bit video card, able to view JPEGs without dithering.

          But what I'd like to express to you is that it's all incremental. There was no magic leap between your EGA card and your Tseng SVGA -- you just skipped some steps.

          And there was no magic leap between your 4MB card (whatever it was) and your 32MB Riva TNT2: I also made a similar progression to a TNT2.

           And, yeah: Around that time, model numbers got blurry. Instead of making one chipset at one speed (TNT2), manufacturers started bin-sorting and producing a variety of speeds from the same part (Voodoo3 2000, 3000, 3500TV, all with the same GPU).

          And also around that time, drivers (between OpenGL and DirectX) became consistent, adding to the blur.

          I still have a Voodoo3 3500TV, though I don't have a system that can use it. But I assure you that I would much rather play games (and pay the power bill) with my nVidia 9800GT than that old hunk of (ouch! HOT!) 3dfx metal.

          Fast forward a bunch and recently, I've been playing both Rift and Portal 2. The 9800GT is showing its age, especially with Rift, and it's becoming time to look for an upgrade.

          But, really, neither of these games would be worth the time of day on my laptop's ATI x300. This old Dell probably would've played the first Portal OK, but the second one...meh. And the x300 is (IIRC) listed as Rift's minimum spec, but the game loses its prettiness in a hurry when the quality settings are turned down.

           But, you know: I might just install Rift on this 7-year-old x300 laptop, just to see how it works. Just so I can have the same "wow" factor I had when I first installed a Voodoo3 2000, when I play the same game on my desktop with a 3-year-old, not-so-special-at-this-point 9800GT.

          The steps seem smaller, these days, but progress marches on. You'll have absolute lifelike perfection eventually, but it'll take some doing to get there.

      • by Apocros ( 6119 )

        2GB for visualization is just too small. 8GB would be a good start, even if it was DDR3 and not GDDR5.

        Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

        That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

        I may be misinformed, but I'm pretty certain systems l

        • by adolf ( 21054 )

          Assuming what you say is true (and I believe it can be if the API is properly implemented, for better or worse), then regular PCI Express x16 cards should be perfectly adequate.

           Not to be offensive, but you used all that verbiage, and failed to realize that nobody needs or wants high-performance tab switching in Firefox. It'd be nice if it happened within a single monitor refresh period (60Hz, these days), but nobody notices if it takes a dozen times as long or more due to PCI Express bus transfers from main memory.

          • by Apocros ( 6119 )
            It's anecdotal, to be sure... All I can tell you is that when I have tons of windows open on Win7, then switching to old ones takes a while to repaint (and it's quite noticeable). With few windows open, it's effectively instantaneous (i.e presumably within a few VSYNCs). And, no offense taken, but I absolutely do want high-performance tab/window switching in my desktop applications. If I don't have to wait for contents to be repainted, then I don't want to.

            And yes, I'm quite well aware that transfers
            • by Korin43 ( 881732 )

              It's anecdotal, to be sure... All I can tell you is that when I have tons of windows open on Win7, then switching to old ones takes a while to repaint (and it's quite noticeable). With few windows open, it's effectively instantaneous (i.e presumably within a few VSYNCs). And, no offense taken, but I absolutely do want high-performance tab/window switching in my desktop applications. If I don't have to wait for contents to be repainted, then I don't want to.

              Since video memory transfers are so fast, it seems more likely that you're seeing normal swapping behavior -- Windows sees that you have 30 windows open but you're currently only using a few of them, so the rest get swapped out (even if you have a bunch of free memory). On Linux you can change the "swappiness" to fix this. You could see if there's a similar fix on Windows 7 (back when I used Windows XP I just disabled swap files and got a massive performance improvement).

              • by Apocros ( 6119 )
                Well, yeah, could maybe be swapping. But the system can't use the video card framebuffer as general-purpose memory. So swapping the contents to disk would really only make sense if the usage of the framebuffer is at a very high percentage. Hence my (untested) belief that having a larger framebuffer would help. But, maybe they're using the same memory management/swap code as for the normal system VM, that tends to page out to disk earlier than absolutely necessary. So it could be that the "swapiness" o
                • by Korin43 ( 881732 )

                  I don't mean the video card's memory being swapped, just that the memory for the programs you want to use is being swapped. If the program itself isn't ready, it doesn't matter how fast the video card can display it.

                • by bored ( 40072 )

                  That's generally not how it works. Both X and the old Windows GDI were on-demand painters. Basically they simply had the application repaint the screen as necessary, clipping the non-visible regions. Of course caching a portion of the painting speeds things up, but generally if you're running out of RAM the image is just thrown away. So having 200 windows open doesn't require sufficient RAM/graphics memory to contain 200 maximized windows.

                  • by Apocros ( 6119 )
                    Yeah, but pretty sure Win7 Aero (and OSX quartz extreme, and maybe compiz too) are different. This was the first thing I found along those lines with a simple search:

                    http://www.anandtech.com/show/2760/5 [anandtech.com]

                    So what can you do with WDDM 1.1? For starters, you can significantly curtail memory usage for the Desktop Window Manager when it’s enabled for Aero. With the DWM enabled, every window is an uncompressed texture in order for it to be processed by the video card. The problem with this is that when it comes to windows drawn with Microsoft’s older GDI/GDI+ technology, the DWM needs two copies of the data – one on the video card for rendering purposes, and another copy in main memory for the DWM to work on. Because these textures are uncompressed, the amount of memory a single window takes is the product of its size, specifically: Width X Height x 4 bytes of color information.

                    ...snip...

                     Furthermore while a single window may not be too bad, additional windows compound this problem. In this case Microsoft lists the memory consumption of 15 1600x1200 windows at 109MB. This isn’t a problem for the video card, which has plenty of memory dedicated for the task, but for system memory it’s another issue since it’s eating into memory that could be used for something else. With WDDM 1.1, Microsoft has been able to remove the copy of the texture from system memory and operate solely on the contents in video memory. As a result the memory consumption of Windows is immediately reduced, potentially by hundreds of megabytes.
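
                     As a quick sanity check of the 109MB figure quoted above, here's a back-of-the-envelope sketch using the numbers from the excerpt (1600x1200 windows, 4 bytes of color per pixel, 15 windows):

                     # Sanity check of the DWM figures quoted above: each GDI window is kept
                     # as an uncompressed texture at 4 bytes per pixel, and pre-WDDM-1.1 a
                     # second copy of that texture sits in system memory.
                     width, height, bytes_per_pixel = 1600, 1200, 4
                     windows = 15

                     one_window = width * height * bytes_per_pixel   # bytes for one window
                     all_windows = one_window * windows              # bytes for 15 windows

                     print(f"one 1600x1200 window: {one_window / 2**20:.1f} MiB")   # ~7.3 MiB
                     print(f"15 such windows:      {all_windows / 2**20:.1f} MiB")  # ~109.9 MiB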

            • by Korin43 ( 881732 )

              Also, just to be more specific about how fast PCI express [wikipedia.org] is, a PCI express 3.0 16x slot transfers at roughly 16 GB/s. Your 8 MB texture should be able to get to it in around 500 microseconds [google.com]. To put that into perspective, rendering the screen at 60 fps means one frame every 17 milliseconds, so even if the texture was transferred from main memory every frame, the actual rendering of the frame would take over 30x longer.
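
              For anyone who wants to check that arithmetic, here's a minimal sketch (assuming ~16 GB/s usable for a PCIe 3.0 x16 link in one direction, an 8 MB texture, and a 60 Hz refresh; exact figures will vary):

              # Rough check of the transfer-time-vs-frame-time argument above.
              pcie3_x16_bytes_per_s = 16e9   # ~16 GB/s usable, one direction
              texture_bytes = 8 * 2**20      # one 8 MB window texture
              frame_time_s = 1 / 60          # ~16.7 ms per frame at 60 fps

              transfer_time_s = texture_bytes / pcie3_x16_bytes_per_s
              print(f"transfer time: {transfer_time_s * 1e6:.0f} us")         # ~520 us
              print(f"frame time:    {frame_time_s * 1e3:.1f} ms")            # ~16.7 ms
              print(f"ratio:         {frame_time_s / transfer_time_s:.0f}x")  # ~32x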

            • by adolf ( 21054 )

              Thanks for the additional information.

              I'll be the first to say that I do not fully understand any of this -- all I have are anecdotal observations, educated conjecture, and wit.

              That said: As others have mentioned, I think you're swapping, not experiencing a video issue.

              Who knows why -- maybe you've got a memory leak in some program or other, or are just shy on RAM for what you're asking of the system. More investigation is needed on your part (Resource Manager in Vista/7 does an OK job of this). I've had

      • Once upon a time it used to be common for high-end video cards to have display memory and texture memory at different speeds, and sometimes you even got SIMM or DIMM slots for the texture memory, and you could add more. Are there not still cards like this in existence with a halfway decent GPU?

        • by adolf ( 21054 )

          None that I've seen, though I've been wondering the same since I wrote that reply. (The video cards I remember had DIP sockets for extra RAM, but the concept is the same.....)

          Perhaps the time is now for such things to return. It used to be rather common on all manner of stuff to be able to add RAM to the device -- I even used to have a Soundblaster AWE that accepted SIMMs to expand its hardware wavetable synth capacity.

      • by Zencyde ( 850968 )
        My current resolution is 3240x1920. Though I play games at 3510x1920 with bezel correction on. I'm considering grabbing another two monitors and upping to 5400x1920. When playing GTA 4, which is not a new game, it takes up over 900MB with the settings around modest. When upping to two new monitors, I think having 4 GB would be substantial enough. 8 GB would future-proof you a bit.

        Is that hardcore? Maybe. But I don't consider it to be. I consider it to be enjoyable. And enjoyable requires well over a giga
    • Exactly. Professional engines like mental ray, finalRender and V-Ray are moving towards GPU rendering, but right now there's nothing between high-end gaming cards and professional graphics cards with more RAM (and a shitload of money to shell out for it).

    • Ummmm.... How? (Score:4, Interesting)

      by Sycraft-fu ( 314770 ) on Friday April 29, 2011 @05:23AM (#35972748)

      You realize the limiting factor in system RAM access is the PCIe bus, right? It isn't as though that can magically be made faster. I suppose they could start doing 32x slots, that is technically allowed by the spec but that would mean more cost both for motherboards and graphics cards, with no real benefit except to people like you that want massive amounts of RAM.

      In terms of increasing the bandwidth of the bus without increasing the width, well Intel is on that. PCIe 3.0 was finalized in November 2010 and both Intel and AMD are working on implementing it in next gen chipsets. It doubles per lane bandwidth over 2.0/2.1 by increasing the clock rate, and using more efficient (but much more complex) signaling. That would give 16GB/sec of bandwidth which is on par with what you see from DDR3 1333MHz system memory.

      However even if you do that, it isn't really that useful, it'll still be slow. See, graphics cards have WAY higher memory bandwidth requirements than CPUs. That's why they use GDDR5 instead of DDR3. While GDDR5 is based on DDR3 it is much higher speed and bandwidth. With their huge memory controllers you can see cards with 200GB/sec or more of bandwidth. You just aren't going to get that out of system RAM, even if you had a bus that could transfer it (which you don't have).

      Never mind that you then have to contend with the CPU which needs to use it too.

      There's no magic to be had here to be able to grab system RAM and use it efficiently. Cards can already use it, it is part of the PCIe spec. Things just slow to a crawl when it gets used since there are extreme bandwidth limitations from the graphics card's perspective.
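
      To put rough numbers on that comparison, here's a sketch using typical 2011-era figures (the specific bus widths and clocks below are illustrative assumptions, not vendor specs):

      # Peak bandwidth = transfer rate * bus width, so the gap between system
      # RAM, the PCIe link, and a GDDR5 card's local memory comes down to bus
      # width and clock.  Illustrative numbers only.
      def bandwidth_gb_s(transfers_per_s, bus_bits):
          return transfers_per_s * bus_bits / 8 / 1e9

      ddr3_1333_dual = bandwidth_gb_s(1333e6, 128)     # dual channel, ~21 GB/s peak
      pcie3_x16 = bandwidth_gb_s(8e9 * 128 / 130, 16)  # after encoding, ~15.8 GB/s
      gddr5_card = bandwidth_gb_s(4e9, 384)            # 384-bit bus @ 4 Gbps, ~192 GB/s

      print(f"DDR3-1333, dual channel: {ddr3_1333_dual:6.1f} GB/s")
      print(f"PCIe 3.0 x16:            {pcie3_x16:6.1f} GB/s")
      print(f"GDDR5, 384-bit @ 4Gbps:  {gddr5_card:6.1f} GB/s")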

      • While it doesn't solve the problem of DDR3 being slower than GDDR5, AMD has been pushing their "Torrenza [wikipedia.org]" initiative to have assorted specialized 3rd party processors be able to plug directly into the hypertransport bus(either in an HTX slot, or in one or more of the CPU sockets of a multi-socket system). That would give the hypothetical GPU both fast access to the CPU and as-fast-as-the-CPU access to large amounts of cheap RAM.

        Ludicrously uneconomic for gaming purposes, of course; but there are probably
      • by Twinbee ( 767046 )

        For nice fast RAM access, doesn't the new AMD Fusion GPU share the same silicon with the CPU anyway? Nvidia are planning something similar with their upcoming Kepler/Maxwell GPUs.

        The future is surely where you'll be able to buy a single fully integrated CPU/GPU/RAM module. Not very modular maybe, but speed, programming ease, power efficiency, size and weight would be amazing and more than make up for it.

        • For nice fast RAM access, doesn't the new AMD Fusion GPU share the same silicon with the CPU anyway?

          Indeed, and the Fusion chips are trouncing Atom-based solutions in graphics benchmarks mainly for this very reason.

          The problem though is it can't readily be applied to more mainstream desktop solutions, because then you have the CPU and GPU fighting over precious memory bandwidth. For netbooks and the like, it works well because GPU performance isn't expected to match even midrange cards, so only a fraction of DDR2/DDR3 bandwidth is acceptable. Even midrange desktop graphics cards blow the doors off of DDR3 memory.

      • by Khyber ( 864651 )

        "You realize the limiting factor in system RAM access is the PCIe bus, right?"

        Not even close. NOT BY MILES.

        DDR3 - 2.133GT/s
        PCI-E lane Specification 2.0 - 5GT/s
        PCI-E lane Specification 3.0 - 8GT/s

        Actual RAW bandwidth is in plentiful supply on the PCI-E lanes.

        • Look at effective real world transfers. DDR3 RAM at 1333MHz gets in the realm of 16-20GB/sec when actually measured in the system. It transfers more than 1 bit per transfer, and the modules are interleaved (think RAID-0 if you like) to further increase transfer speeds.

          PCIe does transfer 1-bit per transfer per lane. Hence a 16x PCIe 3 slot gets 16GB/sec of throughput.
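
          In other words, GT/s only tells you transfers per second; you also have to multiply by the bits moved per transfer. A small sketch of that conversion, using the rates listed above (the encoding overheads are the standard 8b/10b and 128b/130b figures):

          # Convert raw transfer rates into usable bandwidth: a DDR3 channel is
          # 64 bits wide, while a PCIe lane is a 1-bit serial link with line
          # encoding overhead.
          def gb_per_s(gigatransfers, bits_per_transfer, efficiency=1.0):
              return gigatransfers * bits_per_transfer * efficiency / 8

          ddr3_2133_channel = gb_per_s(2.133, 64)            # ~17 GB/s
          pcie2_lane = gb_per_s(5, 1, efficiency=8 / 10)     # ~0.5 GB/s
          pcie3_lane = gb_per_s(8, 1, efficiency=128 / 130)  # ~1 GB/s

          print(f"DDR3-2133, one 64-bit channel: {ddr3_2133_channel:.2f} GB/s")
          print(f"PCIe 2.0, per lane:            {pcie2_lane:.2f} GB/s")
          print(f"PCIe 3.0, per lane:            {pcie3_lane:.2f} GB/s")
          print(f"PCIe 3.0 x16:                  {pcie3_lane * 16:.2f} GB/s")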

    • by Khyber ( 864651 )

      "2GB for visualization is just too small"

      Only for a shitty coder. Use a megatexture and/or procedurally generated textures, and you'll only require 8-64MB of video memory, with the rest going to framebuffer and GPU. Did people stop paying attention to Carmack or what?
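
      For a rough sense of why that works, here's a sketch of the virtual-texturing bookkeeping (the texture size, tile size and resident tile count below are made-up illustrative numbers, not anything from id's implementation):

      # A megatexture lives on disk; only the tiles the camera can currently
      # see are kept resident in video memory.  Illustrative numbers only.
      bytes_per_texel = 4                 # uncompressed RGBA

      mega_side = 32768                   # one 32k x 32k megatexture
      full_bytes = mega_side * mega_side * bytes_per_texel

      tile_side = 128                     # 128x128-texel tiles
      resident_tiles = 1024               # tile cache sized for one view
      cache_bytes = resident_tiles * tile_side * tile_side * bytes_per_texel

      print(f"full texture (on disk): {full_bytes / 2**30:.0f} GiB")   # 4 GiB
      print(f"resident tile cache:    {cache_bytes / 2**20:.0f} MiB")  # 64 MiB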

    • by guruevi ( 827432 )

      Why don't you get a pro video card if you need that? For games you don't need much more than 1GB these days (data for a couple of frames of 1080p just aren't that big). If you need to visualize anything better, you usually go with a Quadro or a Tesla (up to 6GB per card, up to 4 per computer), a Quadro Plex (12GB in an external device) or a rack-based GPU solution (however much you can put in your rack).

  • Good to see the industry playing nicely with each other. Props to them.
  • Seems a smart move (Score:5, Insightful)

    by Red_Chaos1 ( 95148 ) on Friday April 29, 2011 @03:57AM (#35972500)

    Since all the exclusion did was hurt nVidia's sales among people who stay loyal to AMD and refuse to go Intel just for SLI, allowing SLI on AMD boards will boost nVidia's sales a bit.

    • Re: (Score:3, Insightful)

      by osu-neko ( 2604 )

      Since all the exclusion did was hurt nVidia's sales among people who stay loyal to AMD and refuse to go Intel just for SLI, allowing SLI on AMD boards will boost nVidia's sales a bit.

      It works both ways. nVidia has loyal customers, too, and with CPUs and mobos so much cheaper than a good GPU these days, there are plenty of people who buy the rest of the system to go with the GPU rather than the other way around.

      In any case, more choices are good for everyone, customers most of all.

    • by Kjella ( 173770 )

      I don't think it's a secret that Intel has the fastest processors if you're willing to pay $$$ for it. And since a dual card solution costs quite a bit of $$$ already, I doubt there are that many that want to pair an AMD CPU with a dual nVidia GPU.

      • For the hardcore gamers who don't have unlimited budgets, it might be logical to buy the cheapest CPU that won't bottleneck your games, and pair it with the fastest graphics cards you can afford. Particularly if your games can use the GPU for the physics engine, you might not need even AMD's high-end CPUs to keep a pair of NVidia cards busy.

  • With Intel the only one making SLI motherboards, NVidia needed to buddy up with AMD as leverage? Just a guess.

    • Intel and AMD are the only games in town for current desktop chipsets, as all the other competitors have dropped out. VIA, gone, SiS, gone (and good riddance), Acer Labs, gone, Nvidia, gone.

      It makes plenty of sense not to cut out half the companies in the market, even if that same company also competes with you.

      • It might be that the name of the game is "Let's all gang up on Intel"... given that Intel has squeezed Nvidia out of motherboards, and AMD has integrated graphics all to herself for now, getting closer is sensible, because the dominating player has a) signaled that it wants to enter your arena, and b) has probably reached, as Microsoft has reached, a performance plateau after which further technology advances are not as valuable to consumers.
        Having said that, the SLI market is, and will remain, marginal
  • by Sycraft-fu ( 314770 ) on Friday April 29, 2011 @05:10AM (#35972714)

    nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

    Well it is in the best interests of both companies to play nice, particularly if Bulldozer ends up being any good (either in terms of being high performance, or good performance for the money). In nVidia's case it would be shooting themselves in the foot to not support AMD boards if those start taking off with enthusiasts. In AMD's case their processor market has badly eroded and they don't need any barriers to wooing people back from Intel.

    My hope is this also signals that Bulldozer is good. That nVidia had a look and said "Ya, this has the kind of performance that enthusiasts will want and we want to be on that platform."

    While I'm an Intel fan myself I've no illusions that the reason they are as cheap as they are and try as hard as they do is because they've got to fight AMD. Well AMD has been badly floundering in the CPU arena. Their products are not near Intel's performance level and not really very good price/performance wise. Intel keeps forging ahead with better and better CPUs (the Sandy Bridge CPUs are just excellent) and AMD keeps rehashing, and it has shown in their sales figures.

    I hope Bulldozer is a great product and revitalizes AMD, which means Intel has to try even harder to compete, and so on.

    • Well AMD has been badly floundering in the CPU arena.

      On the top end, where power and money are no object and you want the fastest single-thread performance, then yes, Intel are the clear winners, which is why I buy Intel for desktop development tasks, where I want really good per-thread performance.

      For number crunching servers, AMD's 4x12 core have a slight edge, though which is faster depends rather heavily on the workload. The edge is bigger when power and price are taken into account.

      • I've never found any case where they are winners performance wise. 4 core Intel CPUs outperform 6 core AMD CPUs in all the multi-threaded tasks I've looked at, rather badly in some cases. In terms of servers, Intel offers 10-core CPUs which I imagine are faster than AMD's 12-core CPUs, much like in the desktop arena, though I will say I've not done any particular research in the server arena.

        Likewise the power consumption thing is well in Intel's court in the desktop arena. A Phenom 1100T has a 125watt TDP, a

        • by Narishma ( 822073 ) on Friday April 29, 2011 @06:59AM (#35973028)

          If you only consider the CPU then what you say is true, but you also have to take into account that AMD motherboards generally cost less than Intel ones.

        • by Khyber ( 864651 ) <techkitsune@gmail.com> on Friday April 29, 2011 @09:58AM (#35974168) Homepage Journal

          "4 core Intel CPUs outperform 6 core AMD CPUs in all the multi-threaded tasks I've looked at, rather badly in some cases."

           Do raw x86 without any specialized instructions (minus multi-core stuff) and you'll find the opposite happening: AMD wins hands-down.

          That's why AMD powers our food production systems. We don't need the specialized instructions like SSE3/4/4a/etc. and AMD's raw x86 performance wins.

          Intel NEEDS those specialized instructions added on to keep pace.

          • Intel NEEDS those specialized instructions added on to keep pace.

             Note that Intel's compilers refuse to use those instructions when their output runs on AMD CPUs and, unfortunately, the popular scientific libraries are all compiled with one of Intel's compilers (ICC or their Fortran compiler) and only use the SIMD paths if they see "GenuineIntel" output from CPUID.

            One of the most renowned software optimization experts studies this in detail in his blog. [agner.org]

            • Nowadays, you can actually force ICC to emit code that will use up to SSSE3 on AMD CPUs, but only if you don't use runtime code-path selection. (More specifically, you have to tell ICC that the least-common-denominator code path should use SSSE3, which defeats the purpose of runtime code-path selection. ICC will always choose the slowest available code path for an AMD CPU, but you can prevent it from including a non-SSE code path.)
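
              A schematic illustration of that runtime dispatch pattern (this is not ICC's actual code, just the shape of the behavior described above, with made-up function names):

              # Vendor-gated dispatch: the fast SIMD path is selected only when
              # CPUID reports "GenuineIntel", even if an AMD CPU advertises the
              # same SSE feature flags.
              def dot_simd(a, b):     # stand-in for a hand-vectorized kernel
                  return sum(x * y for x, y in zip(a, b))

              def dot_generic(a, b):  # stand-in for the plain-x86 fallback
                  return sum(x * y for x, y in zip(a, b))

              def pick_kernel(cpu_vendor, has_sse2):
                  # The contentious part: the gate is the vendor string, not
                  # the feature flag, so SSE2-capable AMD chips fall through.
                  if cpu_vendor == "GenuineIntel" and has_sse2:
                      return dot_simd
                  return dot_generic

              print(pick_kernel("AuthenticAMD", has_sse2=True).__name__)  # dot_generic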

        • I've never found any case where they are winners performance wise

          The rest of your post (mostly) focusses on desktop class processors, where Intel certainly win on the high end. In the lower and mid range, AMD are more competitive, especially in flops/$.

           In the quad server socket market, things are different. AMD's 12 core clock speed has been creeping up, whereas Intel's large multicore processors clock relatively slowly. One reason Intel win the top spots on the desktop is due to faster clocks and doing

          • Indeed, AMD is still crushing Intel's 4-chip solutions in performance [cpubenchmark.net]

            This is certainly due to Intel not really refreshing its server lines at all, focusing mainly on the desktop space, while AMD has steadily updated its server lines.

             Let's not forget that AMD is also about to unveil its Bulldozer cores, while Intel has recently updated its desktop chips. Until this year, Intel had an extremely hard time competing in performance per $ in the desktop space, and expect that even if Bulldozer doesn't match i7
      • by LWATCDR ( 28044 )

        But at each dollar range AMD usually wins. Frankly, at this point the CPU is rarely the bottleneck for most desktop users. They will usually get a lot bigger bang for the buck with more RAM, faster drives, and a better video card than with a faster CPU. If you are a hard-core gamer then yeah, but for 95% of PC users a Core 2 Duo or an X2, X3, or X4 is more than good enough.

    • nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

      During the Athlon XP era, AMD did make a good chipset in the AMD 760. The problem was that all of the mobo manufacturers at the time were using the VIA 686B southbridge instead of the AMD 766; the 686B had a bus mastering bug which tended to cause crashes and eventually hard drive corruption.

      Just about all

      • by rhook ( 943951 )

        It's a shame that this announcement is most likely going to result in the end of nForce chipsets. Nvidia hasn't announced a new chipset for either Intel or AMD in years, Intel supports SLI, and now that AMD supports SLI, it just supports the rumors that Nvidia is killing the chipset division.

        Nvidia left the chipset market nearly 3 years ago.

    • nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

      My desktop has an ASUS A8N-SLI motherboard based on the nForce 4 chipset. Think it's about time for me to upgrade?

  • by RagingMaxx ( 793220 ) on Friday April 29, 2011 @05:19AM (#35972740) Homepage

    Having built my last two gaming rigs to utilize SLI, my opinion is that it's more trouble than it's worth.

    It seems like a great idea: buy the graphics card at the sweet spot in the price / power curve, peg it for all it's worth until two years later when games start to push it to its limit. Then buy a second card, which is now very affordable, throw it in SLI and bump your rig back up to a top end performer.

    The reality is less perfect. Want to go dual monitor? Expect to buy a third graphics card to run that second display. Apparently this has been fixed in Vista / Windows 7, but I'm still using XP and it's a massive pain. I'm relegated to using a single monitor in Windows, which is basically fine since I only use it to game, and booting back into Linux for two-display goodness.

    Rare graphics bugs that only affect SLI users are common. I recently bought The Witcher on Steam for $5, this game is a few years old and has been updated many times. However if you're running SLI, expect to be able to see ALL LIGHT SOURCES, ALL THE TIME, THROUGH EVERY SURFACE. Only affects SLI users, so apparently it's a "will not fix". The workaround doesn't work.

    When Borderlands first came out, it crashed regularly for about the first two months. The culprit? A bug that only affected SLI users.

    Then there's the heat issue! Having two graphics cards going at full tear will heat up your case extremely quickly. Expect to shell out for an after-market cooling solution unless you want your cards to idle at 80C and easily hit 95C during operation. The lifetime of your cards will be drastically shortened.

    This is my experience with SLI anyway. I'm a hardcore gamer who has always built his own rigs, and this is the last machine I will build with SLI, end of story.

    • Has SLI really been so troublesome? My last system warranted a full replacement by the time I was thinking about going SLI, and I ended up going with an AMD system and Crossfire instead. I've yet to have an issue gaming with dual monitors outside of a couple of games force-minimizing everything on the second monitor when activated in full screen mode, but this was fixed by alt-tabbing out and back into the client. It may be worth noting however that I've not tried XP in years.
      • On Windows XP, with the latest nvidia drivers, any card running in SLI mode will only output to one of its ports. The secondary card's outputs don't work at all.

        If you want you can disable SLI every time you exit a game (it only takes about two minutes!), but don't expect Windows to automatically go back to your dual monitor config. It's like you have to set your displays up from scratch every time.

        As annoying as it is though, the dual monitor limitation is really just an annoyance. Having to disable SLI

        • I am running two 460 GTXs in SLI and am not having any heat problems (the cards stay around 40C at idle and 50-55C maxed). I also don't have any problems with my dual monitor config. Finally, turning SLI on and off is as simple as right clicking on the desktop, selecting the NVidia control panel, and going to SLI and hitting disable. I can't see that taking longer than about 10 seconds.
          • So, to paraphrase your entire post, "My computer and operating system are totally different to yours, and I am not experiencing the problems you are having."

            Good talk.

            • by Khyber ( 864651 )

              You have problems reading.

              Also you're doing it wrong.

              http://i.imgur.com/shrBtl.jpg [imgur.com]

              SLI + quad monitor under XP. Yes, I game on it. Go read your manual and figure out what you're doing wrong, because it's guaranteed to be YOU.

              • From the nvidia driver download [nvidia.co.uk] page for their latest driver release:

                *Note: The following SLI features are only supported on Windows Vista and Windows 7: Quad SLI technology using GeForce GTX 590, GeForce 9800 GX2 or GeForce GTX 295, 3-way SLI technology, Hybrid SLI, and SLI multi-monitor support.

                 I've tried every possible configuration available; it does not work. But thanks for your helpful and informative post, which yet again fails to invalidate my experience by way of your (highly questionable and completely unsubstantiated) claims.

                • by Khyber ( 864651 )

                  "(highly questionable and completely unsubstantiated)"

                  Oh, I'm sorry, raw photographic evidence isn't enough for you?

                  "The following SLI features are only supported on Windows Vista and Windows 7: Quad SLI technology using GeForce GTX 590, GeForce 9800 GX2"

                  Try to remember when the 9800GX2 came out. Revert to those drivers.

                  Quit using the newer drivers. Support for XP was present in older driver revisions.

                  • Ok, first of all the nvidia Forceware Release 180 drivers are the first drivers to support multi monitor SLI. From the Tom's Hardware story [tomshardware.com] at the time:

                    Big Bang II is codename for ForceWare Release 180 or R180. The biggest improvement is the introduction of SLI multi-monitor. Yes, you’ve read it correctly, Nvidia has finally allowed more than one monitor to use multiple video cards at once, something it’s been trying to do since SLI’s introduction back in 2004.

                    From the nvidia 180 driver [nvidia.com]

                    • by Khyber ( 864651 )

                      "single display"

                      You don't see the monitor built into the actual computer box, or those other monitors to my left, either? Those are running off the same system.

                      Four monitors, Windows XP, SLI 9800GTX+ GPU inside of a Zalman case.

                      I built the thing, I know what's inside. That's my office.

                    • The only way that is working for you is if SLI is disabled while at your desktop.

                    • by Khyber ( 864651 )

                      http://www.tech-forums.net/pc/f78/sli-dual-monitors-works-168567/ [tech-forums.net]

                      *yawn*

                      Windows XP x64 also doesn't need a workaround as it's based off of Server 2k3, it just WORKS.

                    • That guide refers to using 1 (total) monitor on the SLI cards, and additional monitors on ADDITIONAL cards that aren't part of the SLI set.
                      You're wrong.
                      You got called out.
                      Deal with it.

                    • Reading comprehension fail.

                      As I said in my original post, if you want to run multiple monitors with SLI enabled on your primary display in Windows XP, you have to buy a third card. Which, by the way, is the entire fucking point of the article you posted and is something I know full well because I've done it.

                      Windows XP x64 is hardly an option; it is one of the buggiest, least supported operating systems released in recent memory. "It just WORKS" is probably the single most ironic description of Windows XP

                    • by Khyber ( 864651 )

                      You didn't read the rest of my post, did you, idiot?

                      XP x64 DOESN'T NEED THE WORKAROUND AS IT'S SERVER 2K3/VISTA BASED.

                      Which is what that machine is running.

                      The link was merely posted in case you wanted something to actually read, as I doubt you have any clue as to just how far my nVidia experience goes (GeForce256 was my first nVidia card, FYI, and I've had EVERY generation since.)

                      Also, nVidia uses a UNIFIED DRIVER ARCHITECTURE.

                      One .ini hack is all it takes to re-enable Dual Monitor support in SLI mode.

                      Or yo

                    • by Khyber ( 864651 )

                      "I also enjoy the fact that he's moderated all of his own posts up using a sock puppet."

                      Except I only have one account, which makes your fact a falsehood.

                      Let me guess, you vote Republican and watch Glenn Beck.

                    • by Khyber ( 864651 )

                      I am my employer. No, I don't care. Not when I'm making enough money producing emergency food for Japan since nobody else seems capable of doing it.

                    • by Khyber ( 864651 )

                      Do you even write drivers?

                      No?
                      Then you don't have half a clue, because you haven't stuck your nose into the driver stack.

                      Come back when you've worked very closely with the drivers - hint: I did OmegaDriver dev back when 3dfx was still a viable company. Working on nVidia's unified driver architecture is much easier, as it's a package to work across all NT operating systems. That means you can enable features from one version not present in another version with simple tweaks and hacks.

                      Come back when you can hack

      • by Khyber ( 864651 )

        "Has SLI really been so troublesome"

        Yes. 3Dfx would get pretty much 100% scaling.

        You don't get that with nVidia or AMD/ATi, which IMHO makes it totally fucking useless.

        • by rhook ( 943951 )

           What do you mean by "100% scaling", do you mean both cards get full performance? Because I can assure you that was not the case with 3DFX; you would only get about 3/4 of the full performance the cards were capable of. The SLI being used now only shares the name, which has a different meaning; Nvidia is not using the same technology.

           Note: It can now be seen that the original 3DFX SLI was very effective, more so than even current implementations by Nvidia & ATI, capable of achieving a near 100% performance increase over a single Voodoo2 card. The reason this could not be observed before was that the CPU was the limiting factor.[3]

          How much do you want to bet that the CPU is still the limiting factor?

    • +1. Your experiences basically mimic mine. SLI doesn't even win in terms of bang for buck. People think "oh, I can buy a second video card later on and boost performance"... but you might as well buy a previous gen high end card for the same price, same performance, and lower power requirements (than running 2).
    • > Having built my last two gaming rigs to utilize SLI, my opinion is that it's more trouble than it's worth.
      My current Gaming Rig has a XFire 5770 (XFire = AMD/ATI version of nVidia's SLI). I got it almost a year ago -- I paid ~$125 x 2 and saved at least $50 over the 5870.

      I regularly play L4D, BFBC2, RB6LV2. I too have mixed opinions on XFire/SLI but for different reasons.

      > worth until two years later when
      No one says you have to wait 2 years :-) I waited 6 months.

      Regardless of when you buy, you ARE sav

  • When the SLI/Crossfire war began it was bad for the consumer.
    Fie, fie, fie on your proprietary video bus arrangements!
    I wish the consumers would band together and demand an end to it.

    [I used to usually buy an AMD processor and an Nvidia video card. I missed the chipset updates, so this is good news for me.]

  • Didn't NVIDIA stop making motherboard chipsets? It would make sense that they would attempt to get their tech to work on as many platforms as possible.
