AMD's Radeon R9 290X Launched, Faster Than GeForce GTX 780 For Roughly $100 Less

MojoKid writes "AMD has launched its new top-end Radeon R9 290X graphics card today. The new flagship wasn't ready in time for AMD's October 8th launch of midrange products, but the top-of-the-line model, based on the GPU codenamed Hawaii, is ready now. The R9 290 series GPU (Hawaii) comprises up to 44 compute units with a total of 2,816 IEEE 754-2008 compliant shaders. The GPU has four geometry processors (2x the Radeon HD 7970) and can output 64 pixels per clock. The Radeon R9 290X enables all 2,816 stream processors and runs at an engine clock of up to 1 GHz. The card's 4GB of GDDR5 memory is accessed by the GPU via a wide 512-bit interface, and the R9 290X requires a pair of supplemental PCIe power connectors: one 6-pin and one 8-pin. Save for some minimum frame rate and frame latency issues, the new Radeon R9 290X's performance is impressive overall. AMD still has some obvious driver tuning and optimization to do, but frame rates across the board were very good. And though it wasn't a clean sweep for the Radeon R9 290X versus NVIDIA's flagship GeForce GTX 780 or GeForce GTX Titan cards, AMD's new GPU traded victories depending on the game or application being used, which is to say the cards performed similarly."

Comments Filter:
  • by Suiggy ( 1544213 ) on Thursday October 24, 2013 @08:30PM (#45230161)

    That should have been the real headline.

  • by Entropius ( 188861 ) on Thursday October 24, 2013 @10:03PM (#45230655)

    A lot of compute applications are memory-bandwidth limited, so single precision will only give you twice as many flops/sec as double (a back-of-the-envelope sketch of why follows this comment).

    There's another thing about the Titans, though: reliability.

    I do lattice gauge theory computations on these cards. We've got a cluster of GTX480's that is a disaster: the damn things crash constantly. We're in the process of replacing them with Titans, which have been rock solid so far, as good as the cluster of K20's I also use. (They're also a bit faster than the K20's.) The 480's are especially bad, but I imagine the Titans are better than (say) GTX580's.

    The Titan doesn't make that much sense as a high-end gaming card, but it makes a great deal of sense as a ghetto compute card for people who don't want to buy the K20's/K40's. (We've benchmarked a K40 and the Titan still beats it, but only barely.)
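
    A back-of-the-envelope illustration of the bandwidth point above (an editor's sketch, not the poster's code; the bandwidth and peak-FLOP figures are the published GTX Titan numbers, used only as ballpark assumptions): for a streaming kernel like AXPY, throughput is set by bytes moved, so halving the word size buys exactly twice the flop rate, nowhere near the card's 3x SP:DP peak-compute ratio.

        // Minimal AXPY kernels in single and double precision. Per element, saxpy
        // moves 12 bytes (two 4-byte loads, one 4-byte store) for 2 flops; daxpy
        // moves 24 bytes for the same 2 flops.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void saxpy(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        __global__ void daxpy(int n, double a, const double *x, double *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        int main() {
            const int n = 1 << 24;
            float *xs, *ys;
            double *xd, *yd;
            cudaMallocManaged(&xs, n * sizeof(float));
            cudaMallocManaged(&ys, n * sizeof(float));
            cudaMallocManaged(&xd, n * sizeof(double));
            cudaMallocManaged(&yd, n * sizeof(double));
            for (int i = 0; i < n; ++i) { xs[i] = ys[i] = 1.0f; xd[i] = yd[i] = 1.0; }

            saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, xs, ys);
            daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, xd, yd);
            cudaDeviceSynchronize();

            // At ~288 GB/s of memory bandwidth (GTX Titan class):
            //   saxpy: 288 GB/s / 12 B per element * 2 flops ~= 48 GFLOP/s
            //   daxpy: 288 GB/s / 24 B per element * 2 flops ~= 24 GFLOP/s
            // Both sit far below the ~4.5 TFLOP/s SP and ~1.5 TFLOP/s DP compute
            // peaks, so the single-precision advantage collapses to the 2x
            // data-size factor, exactly as the parent comment says.
            printf("y[0] = %.1f (float), %.1f (double)\n", ys[0], yd[0]);
            return 0;
        }

    Timing the two launches with cudaEvent_t on an actual card is the quickest way to confirm that both run at essentially the same elements-per-second rate.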

  • by Anonymous Coward on Thursday October 24, 2013 @10:38PM (#45230823)

    I hate to be that guy, but if the reviews are to be trusted, AMD overclocked this thing to the very limit of the chip's potential just to beat the competition.
    There's almost no headroom for overclocking, and at stock it's ridiculously hot, loud, and power-hungry.
    For 400 bucks more you could have had, about 8 months ago, something that's still better in most games at a relevant resolution.

  • by Arkh89 ( 2870391 ) on Friday October 25, 2013 @01:12AM (#45231351)

    You're lucky then... We replaced our cluster of 580s with Titans, and these things keep crashing for no apparent reason (about 2/3 of the cards will randomly hang on computations that run fine on the remaining cards)...

  • by TeXMaster ( 593524 ) on Friday October 25, 2013 @02:59AM (#45231683)

    Either you're trolling or you have no frigging idea what you're talking about.

    It is true that the low-end cards are often just crippled versions of the high-end cards, something which, as despicable as it might be, is nothing new in the world of technology. But going from that to saying that there is no competition and no (or slow) progress is a step into ignorance (or trolling).

    I've been dealing with GPUs (for the purpose of computing, not gaming) for over five years, that is to say, almost since the beginning of proper hardware support for GPU computing. And there has been a lot of progress, even with the very little competition there has been so far.

    NVIDIA alone has produced three major architectures, with very significant differences between them. If you compare the capabilities of a Tesla (1st gen) with those of a Fermi (2nd gen) or a Kepler (3rd gen), for example: Fermi introduced an L2 and an L1 cache, which were not present in the Tesla arch, lifting some of the very strict algorithmic restrictions imposed on memory-bound kernels; it also introduced hardware-level support for DP. Kepler is not as big a change over Fermi, but it introduced things such as the ability for stream processors to swizzle private variables among themselves (see the shuffle sketch after this comment), which is a rather revolutionary idea in the GPGPU paradigm. And 6 times more stream processors per compute unit over the previous generation is not exactly something I'd call "not that much different".

    AMD has only had one major overhaul (the introduction of GCN) instead of two, but I won't spend more words on how much of a change it was compared to their previous VLIW architectures. It's a completely different beast, with the most important benefit being that its huge computing power can be harnessed much more straightforwardly. And if you ever had to hand-vectorize your code chasing the pre-GCN sweet spot of workload per wavefront, you'd know what a pain that was (see the vectorization sketch after this comment).

    I would actually hope they stop coming up with new archs and spend some more time refining the software side. AMD has some of the worst drivers ever seen from a major hardware manufacturer (in fact, considering they've consistently had better, cheaper hardware, there isn't really any other explanation for their inability to gain dominance in the GPU market), but NVIDIA isn't exactly problem-free: their support for OpenCL, for example, is ancient and crappy (obviously, since they'd rather have people use CUDA to do compute on their GPUs).

    And hardware-wise, Intel is finally stepping up its game. With the HD 4000 (the IGP in its Ivy Bridge chips) it has finally managed to produce integrated graphics with decent performance (it even supports compute), although AMD's APUs are still top dog. On the HPC side, Intel's Xeon Phi offerings are very interesting competitors to NVIDIA's Tesla cards (the brand name for the HPC-dedicated devices, not the old architecture).
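
    As a concrete illustration of the register "swizzling" mentioned above (an editor's sketch, not the poster's code): Kepler's warp shuffle instructions let the 32 threads of a warp read each other's registers directly, so a warp-wide reduction no longer needs a round trip through shared memory. The sketch uses the modern __shfl_down_sync spelling; the original Kepler-era intrinsic was __shfl_down.

        #include <cstdio>
        #include <cuda_runtime.h>

        // Warp-level sum reduction: each step pulls the partial sum from the lane
        // `offset` positions higher, halving the number of live partials until
        // lane 0 holds the total for the warp.
        __global__ void warpSum(const float *in, float *out) {
            float v = in[threadIdx.x];                       // one element per lane
            for (int offset = 16; offset > 0; offset >>= 1)
                v += __shfl_down_sync(0xffffffff, v, offset);
            if (threadIdx.x == 0) *out = v;                  // lane 0 writes the result
        }

        int main() {
            float *in, *out;
            cudaMallocManaged(&in, 32 * sizeof(float));
            cudaMallocManaged(&out, sizeof(float));
            for (int i = 0; i < 32; ++i) in[i] = 1.0f;       // expected sum: 32
            warpSum<<<1, 32>>>(in, out);
            cudaDeviceSynchronize();
            printf("warp sum = %.1f\n", *out);
            return 0;
        }

    On pre-Kepler hardware the same reduction has to stage its partial sums in shared memory between steps, which is the kind of difference the parent comment is pointing at.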
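
    And for a rough idea of what "hand-vectorizing" for the pre-GCN VLIW parts meant (again an editor's sketch; the real code would have been OpenCL C on the AMD side, shown here in CUDA syntax to match the other examples): you packed several independent operations into each work-item, typically via float4, so the four or five lanes of each VLIW bundle had independent work to fill them. On GCN's scalar SIMT model the plain one-element-per-thread form performs well without this packing.

        // Scalar form: one element per thread. On VLIW4/VLIW5 hardware this left
        // most lanes of an ALU bundle idle unless the compiler could find ILP.
        __global__ void scaleScalar(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i];
        }

        // Hand-vectorized form: each thread handles a float4, handing the compiler
        // four independent multiplies per bundle. (Assumes n is a multiple of 4;
        // n4 = n / 4.)
        __global__ void scaleVec4(int n4, float a, const float4 *x, float4 *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n4) {
                float4 v = x[i];
                v.x *= a; v.y *= a; v.z *= a; v.w *= a;
                y[i] = v;
            }
        }

    Both kernels launch the same way as the AXPY examples earlier in the thread; only the amount of work per thread changes.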

  • Re:Driver openness (Score:4, Informative)

    by Xtifr ( 1323 ) on Friday October 25, 2013 @03:10AM (#45231725) Homepage

    ATI's Linux drivers have traditionally been crappy, but since the company was bought by AMD, they've opened up a lot and have been steadily contributing to the mainline kernel. The kernel drivers (as opposed to the proprietary Linux drivers) have been improving by leaps and bounds lately. Kernel 3.5 saw 3D performance improvements of over 35% with some AMD cards, and 3.12 is supposed to bring a similarly huge boost.

    I don't know how they compare to the closed source drivers from Nvidia *or* ATI, but I'm currently running 3.10, and the in-kernel drivers are definitely working very well for me.

    Phoronix on 3.5 drivers [phoronix.com]

    Phoronix on 3.12 drivers [phoronix.com].
