AMD's Radeon R9 290X Launched, Faster Than GeForce GTX 780 For Roughly $100 Less
MojoKid writes "AMD has launched their new top-end Radeon R9 290X graphics card today. The new flagship wasn't ready in time for AMD's recent October 8th launch of midrange products, but their top-of-the-line model, based on the GPU codenamed Hawaii, is ready now. The R9 290 series GPU (Hawaii) comprises up to 44 compute units with a total of 2,816 IEEE 754-2008 compliant shaders. The GPU has four geometry processors (2x the Radeon HD 7970) and can output 64 pixels per clock. The Radeon R9 290X features 2,816 stream processors and an engine clock of up to 1GHz. The card's 4GB of GDDR5 memory is accessed by the GPU via a wide 512-bit interface, and the R9 290X requires a pair of supplemental PCIe power connectors: one 6-pin and one 8-pin. Save for some minimum frame rate and frame latency issues, the new Radeon R9 290X's performance is impressive overall. AMD still has some obvious driver tuning and optimization to do, but frame rates across the board were very good. And though it wasn't a clean sweep for the Radeon R9 290X versus NVIDIA's flagship GeForce GTX 780 or GeForce GTX Titan cards, AMD's new GPU traded victories depending on the game or application being used, which is to say the cards performed similarly."
Faster than the nVidia GTX TITAN for $400 less (Score:4, Informative)
That should have been the real headline.
Re:Faster than the nVidia GTX TITAN for $400 less (Score:5, Insightful)
Unless you use a Titan to do modelling that requires double precision. The Titan is a super-cheap K20X; it just happens to double as a gaming card.
Re: (Score:3)
Many other consumer/gaming cards from both nVidia and AMD support double-precision floating point, including all of the AMD R9 2xx cards. Double precision hasn't been exclusive to workstation GPUs for a while now.
Re:Faster than the nVidia GTX TITAN for $400 less (Score:5, Interesting)
Re: (Score:2)
There's a difference between "supported" and "actually usable".
The 780 has a theoretical single-precision compute rate of 4.0TFLOPS, comparable to the Titan's 4.5TFLOPS. Go up to double-precision, and the 780 plummets to 165GFLOPS (1/24th the power), while the Titan remains high at 1.5TFLOPS (1/3rd the power).
I'm not sure what the exact reason for that discrepancy in performance is, whether it's an actual hardware difference, some hardware being disabled during binning, or even just a driver change (althoug
Re: (Score:1, Insightful)
Re: (Score:3)
When they get beat on both axes, though - that makes it a fair comparison.
Re: (Score:2, Insightful)
I'm not saying that the
Re:Faster than the nVidia GTX TITAN for $400 less (Score:5, Informative)
A lot of compute applications are memory bandwidth limited, so single precision will give you only twice as many flop/sec as double.
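As a rough illustration of why: for a streaming kernel like AXPY the bytes moved per flop are fixed, so the achievable flop rate is set by memory bandwidth rather than by the chip's peak SP or DP throughput. A minimal back-of-the-envelope sketch, using a made-up ~200 GB/s device bandwidth (plug in your own card's number):

    // Flop rate of a bandwidth-bound AXPY (y = a*x + y) in single vs. double
    // precision. The 200 GB/s figure is a hypothetical, ballpark bandwidth.
    #include <cstdio>

    int main() {
        const double bandwidth = 200e9;        // bytes/sec (hypothetical)
        const double flops_per_elem = 2.0;     // one multiply + one add
        const double accesses_per_elem = 3.0;  // load x, load y, store y

        double sp = bandwidth / (accesses_per_elem * 4.0) * flops_per_elem;  // float  = 4 bytes
        double dp = bandwidth / (accesses_per_elem * 8.0) * flops_per_elem;  // double = 8 bytes

        // The ratio is just the byte-size ratio (2x), regardless of whether the
        // GPU's peak DP rate is 1/3 or 1/24 of its SP rate.
        printf("SP: %.1f GFLOP/s  DP: %.1f GFLOP/s  ratio: %.1fx\n",
               sp / 1e9, dp / 1e9, sp / dp);
        return 0;
    }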
There's another thing about the Titans, though: reliability.
I do lattice gauge theory computations on these cards. We've got a cluster of GTX 480s that is a disaster: the damn things crash constantly. We're in the process of replacing them with Titans, which have been rock solid so far, as good as the cluster of K20s I also use. (They're also a bit faster than the K20s.) The 480s are especially bad, but I imagine the Titans are better than (say) GTX 580s.
The Titan doesn't make that much sense as a high-end gaming card, but it makes a great deal of sense as a ghetto compute card for people who don't want to buy the K20's/K40's. (We've benchmarked a K40 and the Titan still beats it, but only barely.)
Re:Faster than the nVidia GTX TITAN for $400 less (Score:5, Informative)
You're lucky then... We replaced our cluster of 580s with Titans and these things keep crashing for no apparent reason (about 2/3 of the cards will randomly hang on computations that run fine on the remaining cards)...
Re: (Score:2)
Hm, interesting -- if we're going to get Titans as an upgrade this is worth knowing. What are you doing on them? We're doing a computation that uses a lot of single-precision, somewhat less double precision, and occupies about 70% of the 6GB memory on each (they run in pairs).
Oh -- make sure you have the new drivers. I kept getting random crashes and it turns out that the old Linux Nvidia driver was at fault since it didn't really support them. I upgraded the drivers and everything was fine.
Re: (Score:2)
I should be more precise on the context...
We are mainly doing FFTs and linear algebra in both single and double precision. To give a little more detail about the problem: we run the same code on multiple GPUs at the same time (each instance of the program has its own GPU and does not communicate with the other processes). It appears that, after a random number of iterations (it might be 1K, 10K or 100K), a kernel from CuFFT, or CuBLAS, or one of my own gets stuck and the program is killed by the watchdog of
Re: (Score:1)
So you run your program three times on three different hardware setups, get two complete results, and compare them. Are they the same? The other computation did not complete.
Re: (Score:2)
The memory is specialized and very fast -- graphics memory is orders of magnitude faster than an SSD. We're getting 160 GB/sec from the memory on a K20.
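A minimal CUDA sketch of the kind of measurement that figure comes from: time a device-to-device copy and count the bytes moved. The buffer size and repetition count below are arbitrary choices, and the result will land somewhat below the vendor's theoretical peak:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t n = 256 << 20;            // 256 MiB per buffer (arbitrary)
        char *src = nullptr, *dst = nullptr;
        cudaMalloc(&src, n);
        cudaMalloc(&dst, n);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaMemcpy(dst, src, n, cudaMemcpyDeviceToDevice);   // warm-up

        const int reps = 20;
        cudaEventRecord(start);
        for (int i = 0; i < reps; ++i)
            cudaMemcpy(dst, src, n, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);

        // Each copy reads n bytes and writes n bytes.
        double gb = 2.0 * n * reps / 1e9;
        printf("Effective device bandwidth: %.1f GB/s\n", gb / (ms / 1e3));

        cudaFree(src);
        cudaFree(dst);
        return 0;
    }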
Re: (Score:3)
What does that even mean? It's for sale and if you wanted that performance you had to pay $1000 for it. Now you don't.
People who want that performance are sure going to be interested in learning they can pay half as much.
The alternative is to say that Nvidia has nothing with this performance, but then people are going to say "But the Titan does"... so what are you going to do?
Re: (Score:2)
Re: (Score:2)
Irrelevant. The Titan was never meant to be a consumer-level card; it's meant to sit between the consumer (sub-$700) market and the professional ($1,000+) cards.
Is that marketing bullshit that I'm smelling?
an artificial restriction to somewhat justify the cost of professional cards.
I don't think 'justify' is quite the right word here...
Re: (Score:2, Informative)
I hate to be that guy, but if the reviews are to be trusted, AMD overclocked this thing to the very limit of the chip's potential just to beat the competition.
There's almost no headroom for overclocking, and at stock it's ridiculously hot, loud and power-hungry.
For 400 bucks more, you could have gotten something eight months ago that's still better in most games at relevant resolutions.
Re: (Score:1)
Other reviews show the thing can easily be pushed by another 10%, even without lifting the 40% fan speed restriction.
Re: (Score:2)
Re: (Score:2)
AMD designed this chip for maximum performance in minimum die space. They managed to cram Titan-level performance in 435mm^2. That's including a 512-bit memory bus AND 64 ROPs, so they're not exactly cutting corners!
The Titan uses a 551mm^2 die-size, and although some of that is fused-off, the majority of the difference is because Nvidia designed it wide and slow for power first, performance second. This is because the part was targeted first-and-foremost at professionals, where performance/watt and coole
Re:Faster than the nVidia GTX TITAN for $400 less (Score:4, Interesting)
The GTX Titan is a double-precision computing card that happens to do very well at gaming. It's not really a fair comparison. Ever since the GTX 780 was released, pretty much every review site has recommended it over the Titan for gamers on price/performance grounds.
Re: (Score:1)
HD69xx, HD79xx, R9 280 and R9 290 are all double-precision computing cards that happen to do very well at gaming.
So it is a fair comparison.
Re: (Score:2)
Now if only AMD would give us back the driver-only install instead of forcing .NET 4 and Catalyst Control Center down our throats. The driver is good, but I don't need the damn CCC app crashing and restarting the video driver when the driver itself didn't crash.
This is probably the biggest reason I no longer use AMD cards even though they're as good performance-wise as Nvidia. Of course Nvidia is having driver issues again, so I guess I need to stick with Intel only for stable and open source drivers.
Re: (Score:2)
I have a little trouble understanding Mantle. Maybe I just picked the wrong articles to read, but will any 7xxx card be able to use Mantle or only the new ones?
I saw a good deal on an HD7970 but I don't want it if it can't use this new Mantle stuff.
Tomorrow morning, I'm gonna see how my 6850 handles Batman: Arkham Oranges. I was hoping to get a new card by now for this game, but I've been too confused to pull the trigger on a purchase.
Cards from duopoly are artificially crippled (Score:1)
It is a grand shame that we, the consumers (professional and casual/gamer), are left with no choice other than the Nvidia/ATI duopoly.
Most of their consumer-grade cards are artificially crippled in an attempt to force us to dole out even more of our hard-earned cash just to get their GPUs to reach their full potential.
The GPU market is no longer competitive. The duopoly has slowed competition to a crawl.
Every single year they come out of their "new version" of cards which are not that much dif
Re:Cards from duopoly are artificially crippled (Score:5, Informative)
Either you're trolling or you have no frigging idea what you're talking about.
It is true that often the low-end cards are just crippled versions of the high-end cards, something which, as despicable as it might be, is nothing new to the world of technology. But going from this to saying that there is no competition and no (or slow) progress is a step into ignorance (or trolling).
I've been dealing with GPUs (for the purpose of computing, not gaming) for over five years, that is to say almost since the beginning of proper hardware support for computing on GPU. And there has been a lot of progress, even with the very little competition there has been so far.
NVIDIA alone has produced three major architectures, with very significant differences between them. If you compare the capabilities of a Tesla (1st gen) with those of a Fermi (2nd gen) or a Kepler (3rd gen), for example: Fermi introduced L2 and L1 caches, which were not present in the Tesla arch, lifting some of the very strict algorithmic restrictions imposed on memory-bound kernels; it also introduced hardware-level support for DP. Kepler is not as big a change over Fermi, but it has introduced things such as the ability for stream processors to swizzle private variables among them, which is a rather revolutionary idea in the GPGPU paradigm. And 6 times more stream processors per compute unit over the previous generation is not exactly something I'd call "not that much different".
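For the curious, the "swizzle" refers to the warp shuffle instructions Kepler introduced. A minimal sketch of a warp-wide sum built on them (written with the current __shfl_down_sync spelling; Kepler-era CUDA used __shfl_down without the mask argument):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each of the 32 threads in a warp holds one value in a register; the shuffle
    // instruction passes values directly between registers, so the reduction
    // needs no shared or global memory at all.
    __global__ void warpSum(const float *in, float *out) {
        float val = in[threadIdx.x];
        for (int offset = 16; offset > 0; offset >>= 1)
            val += __shfl_down_sync(0xffffffffu, val, offset);
        if (threadIdx.x == 0)
            *out = val;                        // lane 0 ends up with the warp-wide sum
    }

    int main() {
        float h_in[32], h_out = 0.0f;
        for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;   // expected sum: 32

        float *d_in, *d_out;
        cudaMalloc(&d_in, sizeof(h_in));
        cudaMalloc(&d_out, sizeof(float));
        cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);

        warpSum<<<1, 32>>>(d_in, d_out);
        cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("warp sum = %.0f\n", h_out);

        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }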
AMD has only had one major overhaul (the introduction of GCN) instead of two, but I won't spend more words on how much of a change it was compared to the previous VLIW architectures they had. It's a completely different beast, with the most important benefit being that its huge computing power can be harnessed much more straightforwardly. And if you ever had to hand-vectorize your code looking for the pre-GCN sweet spot of workload per wavefront, you'd know what a PITN that was.
I would actually hope they stopped coming up with new archs, and spent some more time refining their software side. AMD has some of the worst drivers ever seen by a major hardware manufacturer (in fact, considering they've consistently had better, cheaper hardware, there isn't really any other explanation for their inability to gain dominance in the GPU market), but NVIDIA isn't exactly problem free: their support for OpenCL, for example, is ancient and crappy (obviously, since they'd rather have people use CUDA to do compute on their GPUs).
And hardware-wise, Intel is finally stepping up their game. With the HD 4000, they've finally managed to produce an IGP with decent performance (it even supports compute), although AMD's APUs are still top dog. On the HPC side, their Xeon Phi offerings are very interesting competitors to the NVIDIA Tesla cards (not the arch, but the brand name for the HPC-dedicated devices).
Re: (Score:1)
I would actually hope they stopped coming up with new archs, and spent some more time refining their software side. AMD has some of the worst drivers ever seen by a major hardware manufacturer (in fact, considering they've consistently had better, cheaper hardware, there isn't really any other explanation for their inability to gain dominance in the GPU market), but NVIDIA isn't exactly problem free: their support for OpenCL, for example, is ancient and crappy (obviously, since they'd rather have people use CUDA to do compute on their GPUs).
What I'd like to see both Nvidia and AMD do is take the damn time to really optimize the hell out of their chips, dropping power demand while improving performance. In regards to the drivers and software side of the damn things, yeah, I'd love to see a driver-only offering again. It doesn't have to be the fucking fastest, just give me stability. I want the rock-solid drivers that used to be the case and forget about all the extra shit that's included with CCC or the NVCPL.
Re: (Score:1)
I have a little trouble understanding Mantle. Maybe I just picked the wrong articles to read, but will any 7xxx card be able to use Mantle or only the new ones?
I saw a good deal on an HD7970 but I don't want it if it can't use this new Mantle stuff.
Tomorrow morning, I'm gonna see how my 6850 handles Batman: Arkham Oranges. I was hoping to get a new card by now for this game, but I've been too confused to pull the trigger on a purchase.
we're in a similar position. I'm on a 6850 atm. Looks like we're SOL when it comes to this mantle stuff!
By the way, are you sure it wasn't "Arkham oranges and lemons"? I'm sure it was called that and that it involves the "mayor of simpleton" somehow.. Poor skeleton.
Re: (Score:2)
I heard that was a complicated game.
Re: (Score:3)
I'm confused. I thought Slashdot was against GMO... or is WayneTech growing their oranges organically?
Either way, mmmm, oranges!
Deferred shading/lighting + sparse voxel DAGs (Score:5, Interesting)
Mantle support, 4GB of VRAM, 512-bit memory bus for fast transfers... we're in heaven.
With that much VRAM, there should be enough for a rich geometry buffer and room to spare for a decent sized scene represented by a sparse voxel DAG. Ray-cast the voxel DAG into the geometry buffer, then do your polygonal rendering pass, followed by your deferred lighting passes, and final composition.
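A purely structural sketch of the per-frame ordering being described. Every name below is a made-up placeholder rather than any engine's real API, and the pass bodies are stubs that just announce themselves:

    #include <cstdio>

    struct GBuffer  { /* depth, normals, albedo, material IDs... */ };
    struct VoxelDAG { /* compressed sparse-voxel hierarchy of the static scene */ };

    void raycastVoxelDAG(const VoxelDAG &, GBuffer &) { puts("1. ray-cast voxel DAG into the geometry buffer"); }
    void rasterizePolygons(GBuffer &)                 { puts("2. polygonal rendering pass"); }
    void deferredLighting(const GBuffer &)            { puts("3. deferred lighting passes"); }
    void composite()                                  { puts("4. final composition"); }

    int main() {
        VoxelDAG scene;
        GBuffer  gbuf;
        raycastVoxelDAG(scene, gbuf);   // static/coarse geometry comes from the DAG
        rasterizePolygons(gbuf);        // dynamic geometry rasterizes into the same buffer
        deferredLighting(gbuf);         // lighting reads the completed geometry buffer
        composite();                    // tone map / present
        return 0;
    }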
Re: (Score:2, Insightful)
I'll be excited about this "mantle" thing when I actually see stuff that benefits from it instead of a bunch of theoretical mumbo jumbo.
Graphics API overhead this.. 10x more draw calls that..
Show me a real game and show me how it's actually better than the alternatives that actually exist at that same time. "Look this thing that doesn't really exist (or isn't in use) is faster than stuff that's actually here now and being used". Everyone can win at that game.
Real numbers on real games. Until then, you ca
Re: (Score:2)
Also redo these benchmarks in Linux ..
Re: (Score:2)
If you're interested, one thing to watch is Star Citizen development. It will be a 64-bit game created for next-gen systems. It will have extremely high poly counts, with super realistic physics and shaders coming out the airlocks. There is already a downloadable, small demonstration of the engine called "The Hangar Module", but I think you have to be a contributor to get it.
Trust me, Crysis is dead. The next big question will be, "Does it run SC?"
Re: (Score:3)
I'm really unimpressed with the demo so far. I don't see any real difference from Eve Online's hangar rendering, for example. Plus, rendering spaceships is probably the easiest thing you can aim for, compared to trees/foliage, water, mossy rocks, a realistic sky, etc.
If SC requires top-end hardware, it will only be because they are lazy, not because it takes that much to render it nicely. High-poly models don't make sense when you end up having 6 polygons for each pixel... and when you can achieve 99% of
Re: (Score:2)
I didn't check out the Hangar Module myself; my PC is not up to it.
But the videos they have floating around do look mighty impressive. Look at the massively detailed cockpits and the realistically moving parts on the ships. Obviously, it's still in development, but as for the features they are promising, like walking around your own spaceship, or walking inside a capital ship and being able to look through a window and watch the battle taking place outside, there is nothing like it at the moment.
And it will take a
Re: (Score:1)
Well if it runs Crysis, it certainly handles Supreme Commander (SC) quite well. Get your Abbreviations right before "Opening your mouth and confirming your a fool" as President Lincoln once said.
Re: (Score:2)
Back off newbie. SC, as everyone knows, officially stands for "Star Control", since 1990 and exactly up to the point at which Star Citizen is released.
However, I came to realize that my claim about Crysis being dead is a little ironic, considering that SC is based on CryEngine 4. :D
Re: (Score:1)
SC is Starcraft.
Re: (Score:3)
>2013
>still browsing 4chan
Looking good so far. (Score:5, Insightful)
Now let's hope to god they have their driver situation hashed out.
AMD/ATI has always put out fairly nice hardware. But, more often than not, they fall on their faces because of shoddy drivers.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Re: (Score:1)
XP and Win7 both had driver problems. One driver version bricked my machine and I had to reinstall the entire system and restore from backup. Regular blue screens on startup in both systems.
Sucked it up and bought nVidia last December. Sure, I had driver problems as well but reverting them to 306 has kept me from experiencing a single driver problem since (other than the incessant notifications that my driver is out of date).
[John]
Re: (Score:1)
...if you could reinstall, you didn't brick your machine.
Re: (Score:1)
They've made major improvements recently with regards to performance consistency in their drivers, especially with multi-GPU and multi-monitor setups.
Re: (Score:1)
No one makes decent graphics drivers. Intel's drivers have so many strange oddities it's not funny (random garbage textures/shader faults), AMD's are generally naff, nVidia's break themselves every so often and need a full reinstall (wiping your configuration out along with it), and Matrox releases updates once every 3 years (if you are lucky).
Re: (Score:2)
Re: (Score:1)
Never had any problems with AMD drivers, and I've used their cards since the Radeon 8500LE. I've also had Nvidia cards in my machine and also didn't have any major problems.
Sure, sometimes drivers will give you problems, but to say that one company has consistently better drivers is nonsense. At the moment I have no problems with my HD 7850 or HD 4870, and even my Nvidia ION HTPC system stopped giving me problems after two years of "fun" (HDMI related).
Re: (Score:2)
Hey, it's 2013, not 2003. I remember there was a time when ATi drivers were shoddy, but I think that they've long since gotten their act together.
I haven't tried an AMD card in five years because last time I did, their drivers were shit. Your decade rant is hyperbole.
Re: (Score:1)
On top of the price rundown.
Re: (Score:2)
yeah, huh? No.
And especially not at high resolutions that you'd actually use a card like this for.
You're not going to run 1080p on something like this. 3x27" is the interesting resolution, and there the 290 hauls ass.
Re: (Score:2)
You're not going to run 1080p on something like this.
That's very true. If I buy this I'll be running it with a 1600x1200 20" monitor.
Re: (Score:2)
Given I have games that can drop below 60fps on a GTX 780 at 2560x1440, I dread to imagine trying to push three times that resolution through it.
Frankly $100 isn't a massive difference (relative to the other $3k my PC cost) and I prefer Nvidia for the driver support so I wouldn't have gone with the ATI card even if it had been available when I purchased, but I'm glad that ATI are still progressing and preventing Nvidia from stagnating too.
Re: (Score:2)
http://www.anandtech.com/show/7457/the-radeon-r9-290x-review/12 [anandtech.com]
has the 290 on top of 780 SLI -- in SLOW mode.
3840x2160 is the kind of resolution you want this card for. You don't need it for 1080p, so comparing there is silly.
Re: (Score:3)
Re: (Score:2)
You assume he's running Ubuntu. Ubuntu != Linux (And /. still doesn't support Unicode)
Re: (Score:1)
That's gonna hurt them.
Re: (Score:2)
Yeah.. they'll sell a few thousand copies less.
But 2014! That'll be the year of Linux on the desktop!!!
Re: (Score:2)
Au contraire, they work quite well once you install them. I have OpenCL running on both a HD 5770 and a 7750.
Re: (Score:2)
I don't know exactly how you define the "thermals" to be "terrible". The card is a beast. Pretend 40% is 100%. It's the best card on the market, price being no object (and it's cheaper than the top two nVidia cards). The fact that it can do more if you ask it to should just be a bonus.
Re: (Score:1)
And you sound like an nDevious fanboy running at 100%.
Re: (Score:2)
Why do you care about thermals? I care about framerate, visual quality, and noise.
The 290 running with a 40% fan is LOADS faster than the 780 -- especially at high resolution.
Who cares if it could go faster, take it for what it is right now and it's a better card.
And hell -- if you REALLY care, hook up water cooling to this and watch it really scream!
Re: (Score:2)
Why do you care about thermals? I care about framerate, visual quality, and noise.
I care about thermals because they directly influence noise and also because I have a lovely warm house and don't want my computer to catch fire.
My computer makes almost the same noise as my TV while I'm gaming. It can only do that because the fans aren't working very hard, which is because... well, thermals matter.
headline doesn't exactly match summary or article (Score:2)
One thing that was disappointing about the article is the SLI/CrossFire benchmarks. They only compared a couple of games that no one plays, and only compared it against the 780 in SLI instead of the Titan, which is the real king of SLI. They didn't do any 4K or multi-display testing.
the way games were meant to be played (Score:5, Insightful)
Re: (Score:1)
Re: (Score:2)
You don't usually even need to download anything extra. Just move the existing file to a backup directory and put an empty text file in its place, renamed to match.
Re: (Score:2)
Nethack doesn't have this problem (Score:2, Insightful)
On the other hand, watching TV directly at abc.com has annoying commercial breaks :-)
Re: (Score:1)
What's with all the suggestions in the replies about downloading replacement bink files or replacing them with empty placeholder files? Just delete or rename the damn things - most games will simply skip to the next video file, or in the absence of them all, go straight to the menu.
I try to avoid doing this, though, and if there's a way to skip them via the config file (such as with Dishonored or Rage), then that's preferable.
yeah, but is the driver jacked up? (Score:2)
Last January I went with Nvidia because the AMD graphics card was buggy.
Re: (Score:2)
Did the same thing last December.
[John]
7XXX series is much better now (Score:3)
Driver Problems? (Score:1)
I had enough problems with my last Radeons. No thanks.
[John]
The card is good, but the stock cooler sucks (Score:2)
AMD pushed the new Hawaii chip pretty hard to get these results. It will usually bump up against the thermal wall (max 95 degrees C) when gaming at full load, and in 'Uber' mode (there's a switch to choose between that and 'Silent'), it's quite loud. Part of the problem is that AMD is using a mediocre blower-style cooler, which can't run at or near 100% fan speed without putting out an unacceptable level of noise, and can't dissipate enough heat to keep the card from running up against the thermal wall. To
I don't buy Nvidia for performance (Score:3)
I miss the color quality from my 1650, but I haven't had any luck with their hardware since then...
Re: (Score:2)
Funny, the last time I bought an Nvidia card (an 8600GT with the infamous G86 chip), it died within months because of internal solder thermal failure.
Re: (Score:2)
Faster than GTX 780 (Score:1)
Great (Score:1)
Now we have a new AMD card that can generate more OpenGL errors per second!
Seriously, working with AMD is hard. Their OpenGL implementation never works properly and we always need workarounds to get the job done.
NVidia, on the other hand, has always worked better for me as a developer.
good for water cooled systems (Score:2)
This will be a good deal when prices drop below the MSRP of $549 if you are going to water cool it. It still uses 50%+ more power at idle and quite a bit more power when gaming though. It also runs hotter and will stress a water cooling system that much more, especially in crossfire mode. Nevertheless it seems like a good card for a water cooling setup.
What bothers me is that you pretty much *have to* water cool it if you don't want it to sound like a vacuum cleaner. The Nvidia cards are usable with stock
Stability Performance (Score:1)
Can't seem to find the original Nvidia vs. ATI render stuttering article, but these will do.
http://www.anandtech.com/show/6857/amd-stuttering-issues-driver-roadmap-fraps [anandtech.com]
http://techreport.com/review/24022/does-the-radeon-hd-7950-stumble-in-windows-8/10 [techreport.com]
I couldn't care less if this is the cheapest/fastest card on the planet.
Until AMD fixes the core stuttering issues in their drivers, instead of just patching them for an AAA game now and then, I'm really not interested.
Frame rate isn't everything; stability and consistent
Re: (Score:2)
You're quoting reviews that are months old. The newer driver updates were designed specifically to fix these problems, and for the most part, they have succeeded. (There are still issues in some specific CrossFire and/or multi-monitor configurations, but these won't affect most users.)
One of the reviews you cited was from The Tech Report, which did a good job of documenting these frame pacing issues with hard numbers a couple of months back. Well, let's see what they have to say about the R9 290X now [techreport.com]:
Re:Driver openness (Score:4, Informative)
ATI Linux drivers have traditionally been crappy, but since they were bought by AMD, they've opened up a lot, and have been steadily contributing to the main kernel. The kernel drivers (as opposed to the proprietary Linux drivers) have been improving by leaps and bounds lately. Kernel 3.5 saw 3D performance improvements of over 35% with some AMD cards, and 3.12 is supposed to have a similar huge boost.
I don't know how they compare to the closed source drivers from Nvidia *or* ATI, but I'm currently running 3.10, and the in-kernel drivers are definitely working very well for me.
Phoronix on 3.5 drivers [phoronix.com]
Phoronix on 3.12 drivers [phoronix.com].
Re: (Score:2)
Power management is one of the big things that's been worked on in the 3.11/3.12 timeframe, FWIW. It'd be lovely to get some hard data on how big the difference is once 3.12 is out.
Re: (Score:2)
Ummm... I wasn't aware there was an issue with multi-display at 70Hz. My GeForce 3 could drive one monitor at 150Hz (1024x768); I would think a modern card could do that on two...