$3000 GeForce GTX TITAN Z Tested, Less Performance Than $1500 R9 295X2

Vigile writes: NVIDIA announced its latest dual-GPU flagship card, the GeForce GTX Titan Z, at the GPU Technology Conference in late March with a staggering price point of $2999. Since that time, AMD announced and released the Radeon R9 295X2, its own dual-GPU card with a price tag of $1499. PC Perspective finally put the GTX Titan Z to the test and found that, from a PC gamer's point of view, the card is way overpriced for the performance it offers. At both 2560x1440 and 3840x2160 (4K), the R9 295X2 offered higher and more consistent frame rates, sometimes by as much as 30%. The AMD card also only takes up two slots (though it does have a water cooling radiator to worry about) while the NVIDIA GTX Titan Z is a three-slot design. The Titan Z is quieter and uses much less power, but gamers considering a $1500 or $3000 graphics card are likely not overly concerned with power efficiency.
  • So glad it's over (Score:5, Interesting)

    by TWX ( 665546 ) on Tuesday June 10, 2014 @07:38PM (#47207531)
    I'm so glad that I got the gaming bug out of my system when a ridiculously-priced video card was $300, and mainstream cards were in the $90-160 range...

    This is ridiculous.
    • by Anonymous Coward on Tuesday June 10, 2014 @07:51PM (#47207625)

      It ain't for gaming but for game development, hence the floating point power of the card.

    • by Anonymous Coward on Tuesday June 10, 2014 @07:51PM (#47207627)

      They still are in that range. These products are like Bugattis or Lambos... I really doubt I'll ever know someone personally who owns one.

    • Re:So glad it's over (Score:5, Informative)

      by crioca ( 1394491 ) on Tuesday June 10, 2014 @08:00PM (#47207687)

      I'm so glad that I got the gaming bug out of my system when a ridiculously-priced video card was $300, and mainstream cards were in the $90-160 range... This is ridiculous.

      That's still pretty much the case; the difference today is that some people make, or try to make, their living off playing & broadcasting their gameplay. This means they need to be able to run the latest games at the highest specs, record and livestream all at the same time without missing a beat.

      • by bemymonkey ( 1244086 ) on Wednesday June 11, 2014 @01:45AM (#47209469)

        Isn't the recording & encoding part mostly CPU-dependent? And even if the graphics card is used to encode the video, isn't there dedicated H264 encoder hardware on these cards (meaning a budget card from the same generation shouldn't be any slower in this aspect)?

        • by Emetophobe ( 878584 ) on Wednesday June 11, 2014 @07:10AM (#47210759)

          You're correct. Newer Intel CPUs can use a technology called Intel Quick Sync [wikipedia.org] to speed up streaming and video encoding. Basically it uses the hardware encoder on Intel CPUs to perform the encoding.

          Streaming software like OBS [obsproject.com] supports Quick Sync. The impact on CPU and GPU usage is much lower since it's using the iGPU (which would normally be disabled when playing games with a discrete video card). It's basically using silicon that would otherwise go to waste, since most people disable the integrated video on Intel CPUs. Here's a guide [youtube.com] that explains how to set up Quick Sync with OBS, and it shows that CPU usage goes from 50-75% with an x264 encoder to 1-5% with Quick Sync.
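
          If you want to see the offload outside OBS, the same idea is easy to try with ffmpeg, which exposes Quick Sync as the h264_qsv encoder. A rough sketch in Python (assumes an ffmpeg build with Quick Sync support and an enabled Intel iGPU; the synthetic test source and output filenames are just placeholders):

            # Encode the same synthetic clip with the CPU (libx264) and with Intel
            # Quick Sync (h264_qsv), then compare wall-clock time. Run "ffmpeg -encoders"
            # first to confirm h264_qsv is actually available on your build.
            import subprocess
            import time

            SOURCE = ["-f", "lavfi", "-i", "testsrc=duration=30:size=1920x1080:rate=60"]

            def encode(encoder, outfile):
                cmd = ["ffmpeg", "-y", *SOURCE, "-c:v", encoder, "-b:v", "6M", outfile]
                start = time.time()
                subprocess.run(cmd, check=True, capture_output=True)
                return time.time() - start

            cpu_time = encode("libx264", "cpu.mp4")    # software encode, burns CPU cycles
            qsv_time = encode("h264_qsv", "qsv.mp4")   # fixed-function encoder on the iGPU
            print(f"libx264: {cpu_time:.1f}s   h264_qsv: {qsv_time:.1f}s")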

      • by tlhIngan ( 30335 ) <slashdot.worf@net> on Wednesday June 11, 2014 @11:14AM (#47212551)

        That's still pretty much the case; the difference today is that some people make, or try to make, their living off playing & broadcasting their gameplay. This means they need to be able to run the latest games at the highest specs, record and livestream all at the same time without missing a beat.

        Most live streams barely do VGA quality, never mind 1080p. And most video cards can do 1080p quite easily, so even if you livestream, 1080p is the max other people are going to see. Gaming on a ten-monitor 4K setup with a honking fast video card? Yeah, you're awesome, but nobody watching you will notice the difference between your awesome setup and someone gaming on a 24" 1080p monitor.

    • I have a $180 gaming card that plays everything very well.
      This is, frankly, stupid. There is no gain, and professional gamers want all the particle effects and distractions turned off.

      • by NemoinSpace ( 1118137 ) on Tuesday June 10, 2014 @08:39PM (#47207901) Journal
        I have a built-in Nvidia 8300 GS that came with the hand-me-down Dell XPS-410 my wife brought home from work.
        I put 6 GB of RAM in even though Crucial and Dell both tell you it won't work. Should have gone for 8 so I could have a bigger RAM drive. - I actually ran out of memory the other night running 64-bit Waterfox! (That was a first.) I put a ragged old OCZ SSD in it that I bought for $20 when OCZ put themselves out of business. Then I put Windows 8 on it for $30. It refuses to update to 8.1. (How bad are the H1Bs over at Microsoft anyway?) I will probably upgrade to CentOS 7 soon because this computer will probably last till 2020.
        Isn't this sad? I can squeeze crap out of a buffalo nickel. But the real challenge is assembling absolute garbage into a fairly usable system.
        I don't knock people that buy $1,000 video cards; they usually pay well for doing things like adjusting their "tiny fonts". Really, get off my lawn.
        • by TWX ( 665546 ) on Tuesday June 10, 2014 @10:54PM (#47208603)
          I'm in the same boat actually. My desktop is a dual-Xeon box that's almost fourteen years old now, still uses AGP, and still plays the few games that I want to play quite well. I am planning on finally migrating to a new box (processors in the current one are only 32 bit so I'm capped at the ~4gb memory limit) but it's served me well for many, many years.

          I'm typing this post on an old Dell Latitude D420, which still works fine for surfing the web, though I have to limit youtube-type video to lower resolutions to keep it smooth.
        • by TapeCutter ( 624760 ) on Tuesday June 10, 2014 @11:58PM (#47208901) Journal
          Video cards are not just for games these days, my $150 GTX 750 maxes out at just over a teraflop, which is significantly faster than any multi-million dollar pre-Y2K super computer ever built. I really can't see how vector processing can help anyone to adjust their fonts, but it can solve all sorts of difficult engineering, logistics, AI, and design problems. The fact you can do calculations on a commodity video card that (even with unlimited military budget) were simply not practical in the 1990's is nothing short of a technological miracle.

          But hey, if you want to install a private sub-station and a 1990's supercomputer in your shed because you're too tight to buy a new PC, who am I to judge?
      • by O('_')O_Bush ( 1162487 ) on Tuesday June 10, 2014 @10:37PM (#47208511)
        But they also want to play at very high resolutions, very high refresh rates (120hz-144hz), and are often recording as well...
        • by ILongForDarkness ( 1134931 ) on Wednesday June 11, 2014 @09:14AM (#47211425)

          People who think they need 60+ Hz are just stupid: your eye can't refresh that quickly, so who cares if your screen can? Often they are pushing high resolutions on screens that max out at less than the frame rate their GPU is pushing, so now not only their eye but their hardware can't use the frames. I get that when the system gets busy (or the game complex) frame rates can drop, but I'm not sure upping the peak frame rate is the best answer. Gaming rigs likely should be configured to have most system processes bound to a subset of the CPUs so some are always free for the game, nothing on the game drive but the games so there is no competing demand for disk access, etc. Anyway, before dropping $3k on a GPU I'd probably drop $250 on the GPU, about $1500 on a dual-socket motherboard and a second quad-core CPU, and the rest on RAM and hard drives. I bet you'd get a much smoother frame rate and better performance when doing other things too.

          • by Immerman ( 2627577 ) on Wednesday June 11, 2014 @12:26PM (#47213297)

            Actually, your eye can detect changes at greater than 60Hz, it simply can't register individual still frames at anywhere near that speed. Much like how it can still detect the existence of detail far smaller than the smallest discrete pixel it can resolve.

            The other place where higher frame rates factor in is latency: at 60Hz there is a ~17ms delay between one frame and the next, and any action taken at the beginning of a frame will not be reflected until the next frame is rendered. Admittedly that's not terribly important for most current gamers, but apparently it can be a significant contributor to nausea in VR, where your eyes fail to see a prompt response to head motion.
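
            (For the curious, the arithmetic behind that ~17ms is just the refresh interval; a quick Python sketch, with nothing specific to any particular card:)

              # Minimum latency contributed by the refresh interval alone:
              # one frame period = 1000 ms divided by the refresh rate.
              for hz in (30, 60, 120, 144):
                  print(f"{hz:>3} Hz -> {1000 / hz:5.1f} ms per frame")
              # 60 Hz -> ~16.7 ms, 120 Hz -> ~8.3 ms, 144 Hz -> ~6.9 ms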

            As for your CPU versus video card suggestion, I suspect you would be disappointed - the average gaming CPU has 4-6 cores these days, and it's a rare game that taxes them all simultaneously. Adding another CPU would simply add more underutilized cores, while potentially also introducing performance penalties from crossing sockets - threads can't communicate between processors nearly as efficiently as between cores on the same die, and unless the programmer specifically manages inter-thread communication for a multi-socket environment, that is likely to cause problems.

            Then again, if you're only playing at 1080p and 60Hz you probably don't need anywhere near a top-of-the-line CPU or GPU to saturate your frame rate in virtually all scenarios. The most common source of low frame rates, though, is having too much detail on screen at once, and the CPU generally won't help you with that. Load down Skyrim with a mountain of "prettification" mods and your CPU will see a negligible increase in demand, but your GPU will start choking on the sheer number of polygons and pixels it has to deal with.

    • by Kjella ( 173770 ) on Tuesday June 10, 2014 @08:13PM (#47207767) Homepage

      I'm so glad that I got the gaming bug out of my system when a ridiculously-priced video card was $300, and mainstream cards were in the $90-160 range...

      These cards exist because they make them for the compute/workstation/enterprise market, so why not rebrand and sell for some insane amount of money? Just like Intel's $999 processors wouldn't exist without the Xeon line. You get plenty of bang for the buck at $150-250, with the "normal" enthusiast cards topping out at $500-700, which I assume is not that much more after inflation. Of course if you insist on playing Crysis in UltraHD with everything dialed up to max, nothing will be enough, but many games in the last few years have been console ports that'll run on any half-decent gaming PC.

      • by Luckyo ( 1726890 ) on Tuesday June 10, 2014 @08:15PM (#47207781)

        Actually a few 780s in SLI will run that just fine.

      • by epyT-R ( 613989 ) on Tuesday June 10, 2014 @10:29PM (#47208479)

        That's what the quadro line is for.

        • by gl4ss ( 559668 ) on Tuesday June 10, 2014 @11:21PM (#47208741) Homepage Journal

          Yeah, but if you've got a line of cards where you flipped a bit for the drivers to read and treat differently, then why not take another swipe at that: take the top of that line and flip a bit to say it's something else...

          Now there are so many YouTube wannabe professionals who can make good money from it, and so many review sites, that they'll easily sell 10,000 units to that crowd alone, justifying a production run. Of course for $3k you can get a fucking laptop that plays every game on the market... or 3 desktop systems that play every game.

          • by epyT-R ( 613989 ) on Tuesday June 10, 2014 @11:54PM (#47208895)

            I agree, it is dumb. There are suckers who'll pay it though.

          • Re:So glad it's over (Score:4, Informative)

            by TapeCutter ( 624760 ) on Wednesday June 11, 2014 @12:20AM (#47209029) Journal
            That's kind of what they do. Not sure about other cards, but Nvidia cards handle compatibility with something called compute capability [slashdot.org]. A developer then makes a trade-off that will land somewhere between....

            Extreme compatibility -- work on all nvidia cards and use none of the new hardware features.
            Extreme performance -- work on only the latest cards and use all of the latest hardware features.
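
            (If you're curious which bucket your own card lands in, the compute capability is easy to query from host code. A minimal sketch, assuming PyCUDA is installed and an NVIDIA GPU is present; the actual compatibility/performance trade-off is made at build time, e.g. via nvcc's -gencode flags:)

              # List each CUDA device and its compute capability, the version number
              # that determines which hardware features compiled kernels may rely on.
              import pycuda.driver as cuda

              cuda.init()
              for i in range(cuda.Device.count()):
                  dev = cuda.Device(i)
                  major, minor = dev.compute_capability()
                  print(f"{dev.name()}: compute capability {major}.{minor}")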

            Nobody is buying $3K cards to play video games; they are using them to solve engineering problems. Video games are just a convenient way to benchmark performance that is easily understood by laymen.
            • by Immerman ( 2627577 ) on Wednesday June 11, 2014 @12:37PM (#47213415)

              >Nobody is buying $3K cards to play video games, they are using them to solve engineering problems,

              Are you sure? I haven't paid much attention lately, but there was a time when CAD applications demanded far more accurate internal computations that a gaming card simply couldn't deliver. The Quadro, IIRC, was far more expensive than any gaming card, and also considerably slower. What it offered to justify the price was far more accurate rendering, especially where the depth buffer is concerned.

              Of course these days CUDA has taken off, so there's a fair chance these cards are being primarily purchased for reasons having nothing to do with graphics at all. I'd bet good money though that there's at least a few gamers out there buying these things for their gaming rigs. And you know what? I'm okay with that. High-end gaming is a comparatively cheap hobby - one of these cards is probably cheaper than a new set of tires for your Porsche, and if you don't have to replace the tires on your sports car on a regular basis you're obviously a poser.

      • by mwvdlee ( 775178 ) on Wednesday June 11, 2014 @03:21AM (#47209823) Homepage

        Standard consumer goods practice; always make sure you have at least one ridiculously expensive version.
        It doesn't need to be any better, just far more expensive.
        There are always people who associate "expensive" with "good", and some can even afford it.
        Same goes for TVs, hi-fi equipment, musical instruments, tools, sports equipment, cars, etc...

    • by perpenso ( 1613749 ) on Tuesday June 10, 2014 @09:04PM (#47208063)
      People don't buy the highest performing video cards for gaming, they buy them for mining virtual currency.

      Keep that in mind when you see that great price for a used high-end card. The card probably ran for an extended period of time overclocked to just under its "melting point" and just got replaced by an ASIC miner.
      • by dutchwhizzman ( 817898 ) on Wednesday June 11, 2014 @01:48AM (#47209497)

        People who mine either mine scrypt-style currencies, which still run better on GPUs, or have already been using ASIC miners for at least a year. Used high-end cards are either Nvidia, sold because the gamer wants something new or is short on cash, or AMD, sold when the owner wants a faster GPU for either gaming or scrypt coin mining. For scrypt coin mining on AMD, overclocking the GPU doesn't work; in general you have to clock down a bit, unless you are lucky and can overclock the memory enough to maximize output with the GPU at standard clock rates.

        The highest performing video cards tend to cost so much more for their performance that miners get slightly slower cards for half the price of the top of the range. You can get the R9 270 and R9 290 cards for much less money than the R9 290X and now the R9 295X2. The number of coins you can mine for the purchase price and power consumption is such that you don't want to buy those "highest performing video cards" if all you do is mine.

        The reason you can buy relatively new video cards from miners is that prices fell after Mt. Gox fell. The get-rich-quick thing didn't work out and they all need money to pay their power bills. These cards aren't burnt out technically; the miner's wallet is empty and he needs some way to recoup part of his loss. Sure, some of those are highest-end cards, because the miner didn't pay attention to the price/performance thing when he bought them, but most will be high in the midrange, especially since we've seen some new high-end cards come out since the prices of crypto coins fell.

        • by PRMan ( 959735 ) on Wednesday June 11, 2014 @09:16AM (#47211441)
          You can't mine bitcoins with a GPU (only altcoins) and Mt. Gox only took bitcoins. And the price of bitcoin has risen since Mt. Gox fell. So I don't see your argument at all.
          • by djlemma ( 1053860 ) on Wednesday June 11, 2014 @12:33PM (#47213375)
            The prices of all the altcoins are tightly linked to the price of bitcoin. People speculate with digital currencies, but in the end they are usually trading altcoins for bitcoins which they can then turn into cash. So when bitcoin went down, all the scrypt coins went down too, and GPU mining became less profitable. Then a bunch of scrypt ASICs started appearing, making GPU mining even less profitable. Bitcoin has been going up, but amid promises of much higher-hashrate scrypt ASICs just over the horizon - so there's a steady stream of people who want to ditch GPUs.
    • by Sir_Sri ( 199544 ) on Tuesday June 10, 2014 @10:31PM (#47208487)

      Except that Titan isn't really a gaming card. The big draw is the double-precision floating point performance. The GTX 780, which is the same part for gaming purposes, is about 700 dollars (they have almost identical single-precision performance, which is what gaming uses), so two 780s would be about 1500 dollars (to compare to the Titan Black dual-GPU monstrosity).

      And you don't need top-end parts unless you're gaming at 4K (which means either a $3500 monitor for a good one, or a ~$500 Seiki TV that is capped at 30fps with crappy colour).

      There has *always* been more expensive hardware than most people need or want. But for people who have money there's nothing particularly wrong with having super expensive stuff. If you made 500k a year, what would you spend it on? What about 5 million? What about 50 million? If nothing else, the power of one of these $3000 or $1500 cards is going to be mainstream for 300 bucks in 3 or 4 years (or sooner if TSMC can get 20nm working); it doesn't do you any harm that someone else can buy it.

      • by timeOday ( 582209 ) on Wednesday June 11, 2014 @01:08AM (#47209297)

        Except that Titan isn't really a gaming card.

        OK, tell that to NVidia [geforce.com]:

        GeForce GTX TITAN Z is a gaming monster, built to power the most extreme gaming rigs on the planet. With a massive 5760 cores and 12 GB of 7 Gbps GDDR5 memory, TITAN Z gives you truly amazing performance - easily making it the fastest graphics card we've ever made.

        This is a serious card built for serious gamers.

        Hard to get more definitive than that.

        OK, you can argue that NVidia is simply lying; that they engineer these for professional applications and then make a rebadged version to score an easy buck by conning ego-driven gamers. But what kind of defense is that?

    • by eddy ( 18759 ) on Wednesday June 11, 2014 @01:21AM (#47209341) Homepage Journal

      The Titan-Z was and is a PR product. It was conceived simply to create buzz around nVidia. They had the misfortune that AMD put out a better card before they could get the darn thing to market though. First they delayed it, then as pressure mounted they finally sneaked it out without much of the ado they were hoping for. I doubt there exists or will ever exist more than a couple of hundred Titan-Zs IN THE WORLD.

      Anyone who tells you that this card "is for X" where X is something else than PR is wrong and/or lying. It doesn't make sense anywhere.

    • by RogueyWon ( 735973 ) on Wednesday June 11, 2014 @02:50AM (#47209719) Journal

      I've owned two "top end" (as opposed to merely "high end") graphics cards, in the days before I had a mortgage and when the top end of the market was still only in the $1,000 range. The first was an Nvidia 7950 GX2 and the second was an Nvidia 590. Both of them, frankly, were cranky, unreliable and difficult. It was also rare that I took them anywhere near their performance limits. This latest trend towards super-priced cards is a combination of R&D and willy waving.

      This wouldn't be slashdot without a car analogy, so...

      A Bugatti Veyron sells for around $1.7 million (according to my hasty Google search). Even compared to previous generations of supercars, that's pretty insane. But it doesn't mean that cars in general are getting more expensive. You can get something good enough for everyday tasks cheaper than ever. If you want something sportier, with a bit of performance, then adjusted for inflation the price range is more or less what it always has been. Plus that "something sportier" will probably be a lot easier to manage and maintain than the Veyron, as well as a lot easier to drive to the shops in.

      I'm on an Nvidia 680 now (the 590 crapped out after less than 2 years), paid a sensible price for it and have a card that can handle almost everything at 1080p with max or near-max detail (the exception being Watch Dogs, the PC port of which is a badly coded piece of shite).

    • by KingMotley ( 944240 ) on Wednesday June 11, 2014 @10:54AM (#47212373) Journal

      High end video cards have always been $600-$1000 ever since the 386 days. Unless you quit your gaming bug before then, you never had a current gen high end video card.

  • Wrong premise (Score:5, Insightful)

    by Anonymous Coward on Tuesday June 10, 2014 @07:40PM (#47207541)

    These cards should have been tested from the perspective of high performance computing or scientific application.

    • Re:Wrong premise (Score:5, Insightful)

      by SpankiMonki ( 3493987 ) on Tuesday June 10, 2014 @07:49PM (#47207611)

      These cards should have been tested from the perspective of high performance computing or scientific application.

      I don't think nVidia would want that.

      • Re:Wrong premise (Score:5, Informative)

        by Nemyst ( 1383049 ) on Tuesday June 10, 2014 @09:42PM (#47208265) Homepage
        Um, no, if Nvidia didn't want that, they wouldn't give the Titans full double-precision performance in the first place. I'm thinking that aside from getting a few sales from overenthusiastic gamers, their main motivation for marketing this as a gaming card is so their compute customers don't stop buying Teslas.
      • by mlw4428 ( 1029576 ) on Wednesday June 11, 2014 @12:28AM (#47209087)
        You're absolutely right on that. They artificially lock out features that their higher-end non-gaming cards have (such as VT-d support, etc.). Nvidia doesn't want YOU to use GTXs for computing or scientific applications; they want you to use cards like the Tesla or Quadro. I bet the biggest difference between the GTX Titan Z and the Tesla K40 is less about price and more about specific features. When I looked, the K40 was a bit pricier but was outranked in sheer performance (CUDA cores, pipelines, etc.), but you can't virtualize a GTX, it doesn't work with GRID computing, and it lacks a few other features.
    • by Anonymous Coward on Tuesday June 10, 2014 @07:57PM (#47207667)

      Gaming graphics cards are optimised for high-end graphics rendering - scientific graphics cards are optimised for crunching numbers/running simulations.

      That's like testing a car by trying to drive it underwater.

    • by msauve ( 701917 ) on Tuesday June 10, 2014 @09:00PM (#47208045)
      You're missing the point (and marketing).

      Overpaying by 20X makes you much cooler than overpaying by 10X. The metric is bragging rights, not actual performance, and definitely not some cost/benefit analysis.
    • by perpenso ( 1613749 ) on Tuesday June 10, 2014 @09:05PM (#47208077)

      These cards should have been tested from the perspective of high performance computing or scientific application.

      Nah, virtual currency mining. :-)

    • by dissy ( 172727 ) on Wednesday June 11, 2014 @12:36PM (#47213409)

      These cards should have been tested from the perspective of high performance computing or scientific application.

      Exactly.

      Using the same base assumption, I have conducted research that finds a two-billion-dollar supercomputer cluster from IBM is way overpriced from grandma's email-and-Facebook-browsing point of view.

      I have also concluded my research showing the NASA space shuttles are way overpriced from a running-to-the-corner-store-for-milk point of view.

      Now where are my millions of research dollars?!

    • by K10W ( 1705114 ) on Wednesday June 11, 2014 @01:43PM (#47214265)

      These cards should have been tested from the perspective of high performance computing or scientific application.

      5 insightful for complete BS? The Nvidia Quadro exists for that reason (or, for some applications, the Tesla). GTX is FOR GAMES and totally unoptimised for crunching other data from 3D modelling/rendering, CAD, video encoding, compositing, scientific models/simulation, etc. In price-for-price comparisons you'll find the likes of a K2000, rather than a GTX 780, would be better for non-game work. GTX cards work better in some non-game applications, but generally workstation cards beat them in most areas and software.

      IIRC Adobe Premiere Pro plays nicely with GTX cards, maybe better than with workstation cards when talking price-to-price at the budget end, but budget workstation cards are not cheap. In fact, the high-end workstation cards make running 3 Titans in SLI look cheap!

      ATI FirePro cards are similar but geared more toward specific 3D and CAD work, hence I never looked at them since the Quadro suits my needs better; they're not meant to be ATI's exact equivalent of the Quadro, although they probably perform similarly in many applications. I'd rather game on a £800 GTX than a £4200 workstation card aimed at DCC/CAD.

  • Quiet is important (Score:5, Insightful)

    by i_ate_god ( 899684 ) on Tuesday June 10, 2014 @07:40PM (#47207543)

    Don't underestimate the beauty of a quiet, powerful computer.

    I won't buy a $3000 GPU any more than I'll buy a $1500 one, but I did buy the GTX 780 over the cheaper but somewhat more powerful R9 290 solely on the basis of it being cooler.

  • by martiniturbide ( 1203660 ) on Tuesday June 10, 2014 @07:45PM (#47207583) Homepage Journal
    ...to raise the price of the R9 295X2. :)
  • by rogoshen1 ( 2922505 ) on Tuesday June 10, 2014 @07:50PM (#47207617)

    Do games these days typically take full advantage of such setups? I haven't really paid too much attention to gaming/hardware in the past few years, but it seemed as if support for dual GPUs was less than stellar.

    I.e., the only true advantage was an increase in the memory available to apps -- computationally, very few games took advantage of the additional GPU.

    Has this changed, or (equally likely) am I completely off base on the state of affairs?

  • $3,000?? (Score:0, Interesting)

    by Anonymous Coward on Tuesday June 10, 2014 @07:56PM (#47207651)

    I'm lost. Why do people need a $3,000 video card to play games like World of Warcraft? I can play it fine on a $50 video card that takes one slot and a 15-inch monitor. The framerate is so fast that I had to turn on V-sync. I must be missing something.

  • 3000? (Score:2, Informative)

    by geekoid ( 135745 ) <dadinportlandNO@SPAMyahoo.com> on Tuesday June 10, 2014 @07:59PM (#47207677) Homepage Journal

    Gamers spending $3000 on a video card aren't overly burdened with intelligence.

  • Wrong tests (Score:5, Insightful)

    by gman003 ( 1693318 ) on Tuesday June 10, 2014 @08:01PM (#47207691)

    The Titan shouldn't be considered a top-end gaming card. It should be treated as a budget Tesla card - even at $3k, it's the cheapest card in Nvidia's lineup with full double-precision floating point performance (which no game uses, but which is common in scientific computing, Tesla's market). And on tests using that, the single-GPU Titan and Titan Black outperform the 295X2 by a large amount [anandtech.com]. AT hasn't gotten to test a Titan Z yet, but you can tell it's going to wipe the floor with the 295X2.

    Yes, Nvidia advertised the original Titan as a super-gaming card, and to be fair it was their top-performing gaming card for a while. But once the 780 Ti came out, that was over, and since everyone expects a 790 dual-GPU gaming card to be announced soon, buying any Titan for gaming is a fool's choice.

    Nvidia seems to still be advertising it as a top-end gaming card, presumably trying to prove the old adage about fools and their money. It just comes off as a scam to me, but anyone willing to spend over a grand without doing some proper research probably deserves to be ripped off.

    • Re:Wrong tests (Score:4, Informative)

      by sshir ( 623215 ) on Tuesday June 10, 2014 @08:31PM (#47207867)
      The result of Nvidia crippling DP floating point performance on mainstream graphics cards is that people started to look for ways around this bullshit.

      Case in point: linear algebra libraries (like 80% of scientific computing). Basically, people are modifying algorithms so that the bulk of the computation is done in single precision and then cleaned up in double. Those mixed-mode algorithms often outperform pure DP ones even on non-crippled cards (the MAGMA library, for example).
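
      The trick looks roughly like this (a toy NumPy sketch of the general idea, not MAGMA's actual code; a real implementation would factor the matrix once and reuse the factors):

        # Do the O(n^3) solve in single precision, then clean up the answer with a
        # few cheap O(n^2) residual corrections computed in double precision.
        import numpy as np

        def mixed_precision_solve(A, b, iterations=3):
            A32 = A.astype(np.float32)
            x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)  # bulk of the work in SP
            for _ in range(iterations):
                r = b - A @ x                                    # residual in DP
                dx = np.linalg.solve(A32, r.astype(np.float32))  # correction, again in SP
                x += dx.astype(np.float64)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((500, 500))
        b = rng.standard_normal(500)
        x = mixed_precision_solve(A, b)
        print(abs(A @ x - b).max())  # residual comparable to a pure-DP solve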

      People don't like to be screwed with...

      • by Nemyst ( 1383049 ) on Tuesday June 10, 2014 @09:43PM (#47208275) Homepage
        Wait, in essence you're saying that by leveraging single-precision (which is still three times faster than double-precision even for Nvidia's compute cards) computations, libraries have been able to increase performance without compromising the quality of the results. How is that a bad thing, or people "getting screwed with"?
    • by EvilSS ( 557649 ) on Tuesday June 10, 2014 @08:48PM (#47207967)

      The Titan shouldn't be considered a top-end gaming card. It should be treated as a budget Tesla card - even at $3k, it's the cheapest card in Nvidia's lineup with full double-precision floating point performance (which no game uses, but is common for scientific computing, Tesla's market).

      This. For gaming there is virtually no difference between a 780 Ti (~$700) and a Titan Black (~$1000). They look identical on gaming benchmarks. I imagine that a pair of 780Ti's in SLI would outperform the Titan Z when it comes to gaming (the Titan Z is underclocked compared to the Titan Black) and for less than half the price.

      The difference is the unlocked floating point capability and the added VRAM. The Titans are for number crunching, and there the Titan Z crushes the AMD R9 295X2. Well, that and gamers looking for e-peen cred.

    • by edxwelch ( 600979 ) on Wednesday June 11, 2014 @08:51AM (#47211237)

      If you do a broader range of benchmarks you'll see the 290x beats the Titan on most compute benchmarks:
      http://www.anandtech.com/show/... [anandtech.com]

      • by gman003 ( 1693318 ) on Wednesday June 11, 2014 @09:49AM (#47211713)

        It's not cherry-picking if benchmarks like that are the primary reason to use a Titan. Particularly when I explicitly said so.

        • by edxwelch ( 600979 ) on Wednesday June 11, 2014 @11:16AM (#47212565)

          So, only the benchmarks that you say are important count. That's pretty much the dictionary definition of cherry-picking.

          • by gman003 ( 1693318 ) on Wednesday June 11, 2014 @12:54PM (#47213653)

            You're arguing with the antecedent. I'm saying "if you care about X, the Titan is good", and you're accusing me of cherry-picking because the Titan is bad at Y and Z, even though I specifically called it out as not being good for anything except X in a performance-per-penny measure.

            I am saying that one of the principal reasons to buy a Titan is if you have a heavy double-precision compute load. I then provided a benchmark showing that a Titan beats the 295X2 in such a load. It would be cherry-picking if I picked the one double-precision benchmark that showed the Titan in a good light, but a single-precision benchmark does not invalidate that.

            If you are accusing me of cherry-picking, please provide a benchmark that shows a 290X beating a Titan in a double-precision workload. AFAIK the only double-precision benchmark Anandtech uses is the F@H benchmark I linked to originally.

            I am not at all arguing that the results in the double-precision benchmark somehow invalidates the single-precision or integer results. If your workload isn't mostly double-precision, the Titan is not for you. But if your workload *is* mostly double-precision, the Titan is a viable card.

    • by anethema ( 99553 ) on Thursday June 12, 2014 @01:07AM (#47219723) Homepage

      To be fair, the upcoming Star Citizen will be using double-precision floats to model its huge universe in proper detail.

      That being said I still of course agree.

  • by Bleek II ( 878455 ) on Tuesday June 10, 2014 @08:02PM (#47207707)
    The Titan line is not a purely gaming GPU! The higher price comes from leveraging its GPGPU CUDA technology. It's like buying server hardware and complaining it doesn't run your games as well as an i7 that costs less. Game enthusiasts always ruin hardware news with their one golden spec, frames per second! "That said it’s clear from NVIDIA’s presentations and discussions with the company that they intend it to be a compute product first and foremost (a fate similar to GTX Titan Black), in which case this is going to be the single most powerful CUDA card NVIDIA has ever released. NVIDIA’s Kepler compute products have been received very well by buyers so far, including the previous Titan cards, so there’s ample evidence that this will continue with GTX Titan Z. At the end of the day the roughly 2.66 TFLOPS of double precision performance on a single card (more than some low-end supercomputers, we hear) is going to be a big deal, especially for users invested in NVIDIA’s CUDA ecosystem." - AnandTech
  • by mysidia ( 191772 ) on Tuesday June 10, 2014 @09:57PM (#47208331)

    The metric that matters is scrypt hashes per second per watt of energy consumed, and scrypt hashes per second per dollar of GPU.
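
    (Back-of-the-envelope, if you want to compare cards on that basis; the numbers below are made-up placeholders, not measured rates for any real card:)

      # kH/s per watt and kH/s per dollar for a couple of hypothetical cards.
      def mining_metrics(khs, watts, price_usd):
          return khs / watts, khs / price_usd

      for name, khs, watts, price in [("mid-range card", 700, 180, 250),
                                      ("dual-GPU flagship", 1600, 500, 1500)]:
          per_watt, per_dollar = mining_metrics(khs, watts, price)
          print(f"{name}: {per_watt:.1f} kH/s/W, {per_dollar:.1f} kH/s/$")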

  • by sshir ( 623215 ) on Tuesday June 10, 2014 @10:46PM (#47208571)
    I know this is not a purely gaming card, blah blah blah. But here's another data point: in this month's graphics card review at Tom's Hardware, AMD totally dominated... in all categories. I mean a clean sweep! What's going on? Or is it just bad timing?
  • by Dan Askme ( 2895283 ) on Wednesday June 11, 2014 @06:46AM (#47210649) Homepage

    the R9 295X2 offered higher and more consistent frame rates

    http://cdn.pcper.com/files/ima... [pcper.com]

    But not "stable", "consistent" or "smooth". This is still a major issue with the core of all AMD cards which hasnt been fixed.
    You get what you pay for. Nvidia might be the "expensive" of the bunch, just wish i forked out a little more instead of getting my HD7770.

    • by ponos ( 122721 ) on Wednesday June 11, 2014 @09:06AM (#47211361)

      the R9 295X2 offered higher and more consistent frame rates

      http://cdn.pcper.com/files/ima... [pcper.com]

      But not "stable", "consistent" or "smooth". This is still a major issue with the core of all AMD cards which hasnt been fixed.
      You get what you pay for. Nvidia might be the "expensive" of the bunch, just wish i forked out a little more instead of getting my HD7770.

      Do you realize that in the graph you linked no card dips below 50fps at any time? In fact, if you count the occasional peaks crossing the (ridiculously low) 15ms/66fps threshold, the Titan Z shows 6 frames slower than 15ms and the 295X2 shows 4 frames at more than 15ms (if I count correctly). You really can't argue that the Titan Z is smoother. All cards are extremely smooth.

  • by psionski ( 1272720 ) on Wednesday June 11, 2014 @08:19AM (#47211035) Homepage
    Working Linux drivers cost $1500 and your soul, apparently...
  • AMD fp64 rate (Score:4, Informative)

    by ponos ( 122721 ) on Wednesday June 11, 2014 @08:56AM (#47211281)

    I would just like to point out that the 295X2 has superior absolute gaming performance and superior fp32 performance but, just like most gaming NVidia products, its fp64 is crippled to 1/8 of the fp32 rate by configuration in order to create a profit margin for the costlier "pro" products. The hardware itself is capable of a 1/2 fp64 rate and would be superior to the Titan Z if AMD decided to offer "pro-level support".

    As proof, consider the fp64 rate of the single-chip AMD W9100, sold at ~$4000, which is 2.6 TFlops (http://www.amd.com/Documents/FirePro_W9100_Data_Sheet.pdf), versus the 2.7 TFlops of the Titan Z (1/3 fp32 rate, see http://en.wikipedia.org/wiki/G... [wikipedia.org]). AMD could unlock the 295X2 to its full potential of 5.2 double-precision TFlops and release it any day if they wanted, crushing the Titan Z.
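
    The arithmetic is straightforward (a quick sketch; the fp32 figures are approximate numbers from public spec sheets, and the fractions are the advertised fp64:fp32 rates):

      # fp64 throughput = fp32 throughput x fp64 rate fraction
      titan_z_fp32 = 8.1   # TFLOPS, dual GK110
      w9100_fp32 = 5.24    # TFLOPS, single Hawaii

      print(f"Titan Z fp64: ~{titan_z_fp32 / 3:.1f} TFLOPS (1/3 rate)")
      print(f"W9100 fp64:   ~{w9100_fp32 / 2:.1f} TFLOPS (1/2 rate)")
      # A 295X2 is two Hawaii chips, so uncrippled it would land around
      # 2 x 2.6 ~= 5.2 TFLOPS fp64, versus roughly a quarter of that at the
      # 1/8 rate the gaming configuration actually ships with.
      print(f"295X2 at 1/2 rate: ~{2 * w9100_fp32 / 2:.1f} TFLOPS")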

    Honestly, instead of the Titan Z, I'd rather buy the AMD W9100 for $4000 and get equivalent double-precision compute rate, better perf/W and, most importantly, certification for pro applications and ECC memory. That is certainly worth the extra $1000 in this product segment.

  • The Titan Z is also overpriced in that it costs significantly more money than TWO Titan Black cards. It really only makes sense if you're planning to buy two of them for quad SLI, and if you've got that kind of crazy money to spend you don't care what it costs. The Titan Z is a statement product meant for people with too much disposable income; it doesn't make much sense for anybody else.
