
$3000 GeForce GTX TITAN Z Tested, Less Performance Than $1500 R9 295X2 151

Vigile writes: NVIDIA announced its latest dual-GPU flagship card, the GeForce GTX Titan Z, at the GPU Technology Conference in late March with a staggering price point of $2999. Since that time, AMD announced and released the Radeon R9 295X2, its own dual-GPU card with a price tag of $1499. PC Perspective finally put the GTX Titan Z to the test and found that from a PC gamer's view, the card is way overpriced for the performance it offers. At both 2560x1440 and 3840x2160 (4K), the R9 295X2 offered higher and more consistent frame rates, sometimes by as much as 30%. The AMD card also only takes up two slots (though it does have a water cooling radiator to worry about) while the NVIDIA GTX Titan Z is a three-slot design. The Titan Z is quieter and uses much less power, but gamers considering a $1500 or $3000 graphics card selection are likely not overly concerned with power efficiency.
  • So glad it's over (Score:5, Interesting)

    by TWX ( 665546 ) on Tuesday June 10, 2014 @06:38PM (#47207531)
    I'm so glad that I got the gaming bug out of my system when a ridiculously-priced video card was $300, and mainstream cards were in the $90-160 range...

    This is ridiculous.
    • Re:So glad it's over (Score:5, Informative)

      by crioca ( 1394491 ) on Tuesday June 10, 2014 @07:00PM (#47207687)

      I'm so glad that I got the gaming bug out of my system when a ridiculously-priced video card was $300, and mainstream cards were in the $90-160 range... This is ridiculous.

      That's still pretty much the case; the difference today is that some people make, or try to make, their living off playing & broadcasting their gameplay. This means they need to be able to run the latest games at the highest specs, record and livestream all at the same time without missing a beat.

      • Isn't the recording & encoding part mostly CPU-dependent? And even if the graphics card is used to encode the video, isn't there dedicated H264 encoder hardware on these cards (meaning a budget card from the same generation shouldn't be any slower in this aspect)?

        • You're correct. Newer Intel CPUs can use a technology called Intel Quick Sync [wikipedia.org] to speed up streaming and video encoding. Basically it uses the hardware encoder on Intel CPUs to perform the encoding.

          Streaming software like OBS [obsproject.com] supports Quick Sync. Impact on CPU and GPU usage is much lower since it's using the iGPU (which would normally be disabled when playing games with a discrete video card). It's basically using silicon which would otherwise go to waste, since most people disable the integrated video on In

      • by tlhIngan ( 30335 )

        That's still pretty much the case; the difference today is that some people make, or try to make, their living off playing & broadcasting their gameplay. This means they need to be able to run the latest games at the highest specs, record and livestream all at the same time without missing a beat.

        Most live streams barely do VGA quality, never mind 1080p. And most video cards can do 1080p quite easily, so even if you live stream, 1080p is the max other people are going to see. Gaming on a 10 4K monitor s

    • by geekoid ( 135745 )

      I have a 180 dollar gaming card that plays everything very well.
      This is, frankly, stupid. There is no gain, and professional gamers want all the particle effects and distractions turned off.

      • I have a built-in Nvidia 8300 GS that came with the hand-me-down Dell XPS-410 my wife brought home from work.
        I put 6 GB of RAM in even though Crucial and Dell both tell you it won't work. Should have gone for 8 so I could have a bigger RAM drive. - I actually ran out of memory the other night running 64-bit Waterfox! (that was a first.) I put a ragged old OCZ SSD in it that I bought for $20 when OCZ put themselves out of business. Then I put Windows 8 on it for $30. It refuses to update to 8.1. (how bad are
        • by TWX ( 665546 )
          I'm in the same boat actually. My desktop is a dual-Xeon box that's almost fourteen years old now, still uses AGP, and still plays the few games that I want to play quite well. I am planning on finally migrating to a new box (processors in the current one are only 32 bit so I'm capped at the ~4gb memory limit) but it's served me well for many, many years.

          I'm typing this post on an old Dell Latitude D420, which still works fine for surfing the web, though I have to limit youtube-type video to lower reso
        • Video cards are not just for games these days; my $150 GTX 750 maxes out at just over a teraflop, which is significantly faster than any multi-million dollar pre-Y2K supercomputer ever built. I really can't see how vector processing can help anyone to adjust their fonts, but it can solve all sorts of difficult engineering, logistics, AI, and design problems. The fact that you can do calculations on a commodity video card that (even with an unlimited military budget) were simply not practical in the 1990's is nothi
      • But they also want to play at very high resolutions, very high refresh rates (120hz-144hz), and are often recording as well...
        • People who think they need 60+ Hz are just stupid: your eye can't refresh that quickly, so who cares if your screen can? Often they are pushing high res on screens that max out at less than the framerate their GPU is pushing, so now not only their eye but their hardware can't use the frames. I get that when the system gets busy (or the game complex) frame rates can drop, but I'm not sure upping the peak framerate is the best answer. Gaming rigs likely should be configured to have most system processes bound

          • Actually, your eye can detect changes at greater than 60Hz; it simply can't register individual still frames at anywhere near that speed. Much like how it can still detect the existence of detail far smaller than the smallest discrete pixel it can resolve.

            The other place where higher frame rates factor in is latency: at 60Hz there is a ~17ms delay between one frame and the next, and any action taken at the beginning of the frame will not be reflected until the next frame is rendered. Admittedly that's not
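The ~17ms figure above is just the frame interval at 60Hz; a quick sketch of the arithmetic (the refresh rates are only examples):

```python
# Time between successive frames at a given refresh rate.
def frame_interval_ms(refresh_hz):
    return 1000.0 / refresh_hz

# At 60 Hz, an input landing just after a frame starts can wait a full
# ~16.7 ms interval before the next frame reflects it; 144 Hz cuts that
# worst case to ~6.9 ms.
for hz in (60, 120, 144):
    print(f"{hz} Hz -> {frame_interval_ms(hz):.1f} ms per frame")
```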

    • by Kjella ( 173770 )

      I'm so glad that I got the gaming bug out of my system when a ridiculously-priced video card was $300, and mainstream cards were in the $90-160 range...

      These cards exist because they make them for the compute/workstation/enterprise market, so why not rebrand and sell for some insane amount of money? Just like Intel's $999 processors wouldn't exist without the Xeon line. You get plenty of bang for the buck at $150-250, with the "normal" enthusiast cards topping out at $500-700, which I assume is not that much more after inflation. Of course if you insist on playing Crysis in UltraHD with everything dialed up to max nothing will be enough, but many games the last

      • by Luckyo ( 1726890 )

        Actually a few 780s in a SLI will run that just fine.

      • by epyT-R ( 613989 )

        That's what the quadro line is for.

        • by gl4ss ( 559668 )

          yeah but if you got a line of cards where you flipped a bit for the drivers to read and treat it differently then why not make another swipe at that and take the top of the line from that line and flip a bit to say it's something else...

          now there's so many youtube wannabe professionals that they can make good money from it and so many review sites that they'll get to selling 10 000 units for that shit only, easily justifying a production run. of course for 3k you can get a fucking laptop to play every game

          • by epyT-R ( 613989 )

            I agree, it is dumb. There are suckers who'll pay it though.

          • Re:So glad it's over (Score:4, Informative)

            by TapeCutter ( 624760 ) on Tuesday June 10, 2014 @11:20PM (#47209029) Journal
            That's kind of what they do. Not sure about other cards but Nvidia cards handle compatibility with something called compute capability [slashdot.org]. A developer then makes the trade-off that will land somewhere between....

            Extreme compatibility -- work on all nvidia cards and use none of the new hardware features.
            Extreme performance -- work on only the latest cards and use all of the latest hardware features.

            Nobody is buying $3K cards to play video games, they are using them to solve engineering problems, video games are just a convenient way to benchmark performance that is easily understood by laymen.
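The compatibility-versus-performance trade-off described above amounts to dispatching on the device's compute capability, a (major, minor) version pair. A minimal sketch, assuming hypothetical thresholds and kernel names (this is not Nvidia's actual feature matrix):

```python
# Sketch: pick a kernel variant from a device's compute capability tuple.
# Higher capability -> newer hardware features -> faster but less portable.
def pick_kernel(compute_capability):
    if compute_capability >= (3, 5):    # newest hardware: latest features
        return "kernel_latest_features"
    elif compute_capability >= (2, 0):  # middle ground
        return "kernel_baseline"
    else:                               # runs on every supported card
        return "kernel_max_compatibility"

# Newer cards get the fast path; older cards still get a working one.
print(pick_kernel((3, 5)))
print(pick_kernel((1, 3)))
```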
            • >Nobody is buying $3K cards to play video games, they are using them to solve engineering problems,

              Are you sure? I haven't paid much attention lately, but there was a time when CAD applications demanded far more accurate internal computations that a gaming card simply couldn't deliver. The Quadro, IIRC, was far more expensive than any gaming card, and also considerably slower. What it offered to justify the price was far more accurate rendering, especially where the depth buffer is concerned.

              Of course

      • by mwvdlee ( 775178 )

        Standard consumer goods practice: always make sure you have at least one ridiculously expensive version.
        It doesn't need to be any better, just far more expensive.
        There are always people who associate "expensive" with "good", and some can even afford it.
        Same goes for TVs, hi-fi equipment, musical instruments, tools, sports equipment, cars, etc...

    • People don't buy the highest performing video cards for gaming, they buy them mining virtual currency.

      Keep that in mind when you see that great price for a used high end card. The card probably ran for an extended period of time over clocked to just under its "melting point" and just got replaced by an ASIC miner.
      • People who mine either mine scrypt-style currencies that still run better on GPUs, or they have been using ASIC miners for at least a year already. Used high-end cards are either Nvidia, which are sold because the gamer wants something new or is short on cash, or AMD, when the owner wants a faster GPU for either gaming or scrypt coin mining. For scrypt coin mining on AMD, overclocking the GPU doesn't work; in general you have to clock down a bit unless you are lucky and you can overclock the memory enough to maximi

        • by PRMan ( 959735 )
          You can't mine bitcoins with GPU (only altcoins) and Mt. Gox only took bitcoins. And the price of bitcoin has risen since Mt. Gox fell. So I don't see your argument at all.
          • The prices of all the altcoins are tightly linked to the price of bitcoin. People speculate with digital currencies, but in the end they are usually trading altcoins for bitcoins which they can then turn into cash. So, when bitcoin went down, all the scrypt coins went down too, so GPU mining was less profitable. Then, a bunch of scrypt ASICs started appearing, making GPU mining less profitable. Bitcoin has been going up, but it's among promises of much higher hashrate Scrypt ASICs just on the horizon- s
    • by Sir_Sri ( 199544 )

      Except that Titan isn't really a gaming card. The big draw is the double precision floating point performance. The GTX 780, which is the same part for gaming purposes, is about 700 dollars (they have almost identical single precision performance, which is what gaming uses), so two 780s would be about 1500 dollars (to compare to the Titan Black dual-GPU monstrosity).

      And you don't need top end parts unless you're gaming on 4k (which is either a 3500 dollar monitor for a good one, or a ~500 dollar Seiki TV tha

      • Except that Titan isn't really a gaming card.

        OK, tell that to NVidia [geforce.com]:

        GeForce GTX TITAN Z is a gaming monster, built to power the most extreme gaming rigs on the planet. With a massive 5760 cores and 12 GB of 7 Gbps GDDR5 memory, TITAN Z gives you truly amazing performance, easily making it the fastest graphics card we've ever made.

        This is a serious card built for serious gamers.

        Hard to get more definitive than that.

        OK, you can argue that NVidia is simply lying; that they engineer the

    • by eddy ( 18759 )

      The Titan-Z was and is a PR product. It was conceived simply to create buzz around nVidia. They had the misfortune that AMD put out a better card before they could get the darn thing to market though. First they delayed it, then as pressure mounted they finally sneaked it out without much of the ado they were hoping for. I doubt there exists or will ever exist more than a couple of hundred Titan-Zs IN THE WORLD.

      Anyone who tells you that this card "is for X" where X is something else than PR is wrong and/or

    • I've owned two "top end" (as opposed to merely "high end") graphics cards, in the days before I had a mortgage and when the top end of the market was still only in the $1,000 range. The first was an Nvidia 7950 GX2 and the second was an Nvidia 590. Both of them, frankly, were cranky, unreliable and difficult. It was also rare that I took them anywhere near their performance limits. This latest trend towards super-priced cards is a combination of R&D and willy-waving.

      This wouldn't be slashdot without a car an

    • High end video cards have always been $600-$1000 ever since the 386 days. Unless you quit your gaming bug before then, you never had a current gen high end video card.

  • Wrong premise (Score:5, Insightful)

    by Anonymous Coward on Tuesday June 10, 2014 @06:40PM (#47207541)

    These cards should have been tested from the perspective of high performance computing or scientific application.

    • Re:Wrong premise (Score:5, Insightful)

      by SpankiMonki ( 3493987 ) on Tuesday June 10, 2014 @06:49PM (#47207611)

      These cards should have been tested from the perspective of high performance computing or scientific application.

      I don't think nVidia would want that.

      • Re:Wrong premise (Score:5, Informative)

        by Nemyst ( 1383049 ) on Tuesday June 10, 2014 @08:42PM (#47208265) Homepage
        Um, no, if Nvidia didn't want that, they wouldn't give the Titans full double-precision performance in the first place. I'm thinking that aside from getting a few sales from overenthusiastic gamers, their main motivation for marketing this as a gaming card is so their compute customers don't stop buying Teslas.
      • You're absolutely right on that. They artificially lock out features that their higher-end non-gaming cards have (such as VT-d support, etc). Nvidia doesn't want YOU to use GTXs for computing or scientific applications... they want you to use cards like Tesla or Quadro. In fact I bet the biggest difference between the GTX Titan Z and Tesla K40 is less price and more specific features. In fact when I looked the K40 was a bit pricier but was outranked in sheer performance (CUDA cores, pipelines, etc), but you
    • by Anonymous Coward

      Gaming graphics cards are optimised for high-end graphics rendering - scientific graphics cards are optimised for crunching numbers/running simulations.

      That's like testing a car by trying to drive it underwater

    • by msauve ( 701917 )
      You're missing the point (and marketing).

      Overpaying by 20X makes you much cooler than overpaying by 10X. The metric is bragging rights, not actual performance, and definitely not some cost/benefit analysis.
    • These cards should have been tested from the perspective of high performance computing or scientific application.

      Nah, virtual currency mining. :-)

    • by dissy ( 172727 )

      These cards should have been tested from the perspective of high performance computing or scientific application.

      Exactly.

      Using the same base assumption, I have conducted research that finds a two billion dollar supercomputer cluster from IBM is way overpriced from grandma's email-and-Facebook-browsing point of view.

      I have also concluded my research showing the NASA space shuttles are way overpriced from a running-to-the-corner-store-for-milk point of view.

      Now where are my millions of research dollars?!

    • by K10W ( 1705114 )

      These cards should have been tested from the perspective of high performance computing or scientific application.

      5 insightful for complete BS? The Nvidia Quadro exists for that reason (or, for some applications, Tesla). GTX is FOR GAMES and totally unoptimised for crunching other data from 3D modelling/rendering, CAD, video encoding, compositing, scientific models/simulations, etc. In price-for-price comparisons, the likes of the K2000 rather than the GTX 780 would be better for non-game work. GTX cards work better in some non-game applications, but generally workstation cards beat them in most areas and software.

      IIRC adobe premiere pro pla

  • Quiet is important (Score:5, Insightful)

    by i_ate_god ( 899684 ) on Tuesday June 10, 2014 @06:40PM (#47207543)

    don't underestimate the beauty of a quiet powerful computer.

    I won't buy a $3000 GPU any more than I'll buy a $1500 one, but I did buy the GTX 780 over the cheaper but somewhat more powerful R9 250 solely on the basis of it being cooler.

    • GTX 780 over the cheaper but somewhat more powerful R9 250

      That's one heckuva typo. (I *hope* that's a typo)

      • yes, it was the 290, not 250, sorry.

        They were competing against each other; the AMD card had slightly better bang for the buck but was reportedly quite hot, and some boards were quite noisy.

    • I really just wish desktops were capable of only turning on the discrete GPU when playing games, and relying on the CPU built-in one the rest of the time. (Or is it possible nowadays and I never found out?)

      • by Qzukk ( 229616 )

        It's a common laptop feature, but it works because both the awesome GPU and the cheap GPU are integrated.

        You can do it on the desktop, you just have to buy a 3dfx Voodoo card :) (it had a passthrough cable so you would plug it into your regular video card then your monitor into the 3dfx card... without that you'd need to plug your monitor into your fancy gaming video card whenever you wanted to use it).

      • It might become possible in the future, or in select integrated desktops; for now, at least, the modern big GPUs have much better power management than before. Showing the desktop or even idling with the screen turned off was a huge power waste when you ran e.g. a Radeon 4870 or GTX 275, but with a GTX 780 or Radeon 7970 it's almost a gentle power bump next to not having the card in the first place. Of note is Radeon "ZeroCore Power", which does shut the card down, but only when the PC's display goes into standby.

        Nv

      • I use Lucid Virtu. My 3770K with Intel graphics runs my desktop and my HD7970 kicks in for games.
      • What's the idle power consumption on one of these bad boys? Many systems with many-hundred-watt TDPs idle under 100W...

    • I did buy the GTX 780 over the cheaper but somewhat more powerful R9 250 solely on the basis of it being cooler.

      Damn hipsters!

  • ...to raise the price of the R9 295X2. :)
  • by rogoshen1 ( 2922505 ) on Tuesday June 10, 2014 @06:50PM (#47207617)

    Do games these days typically take full advantage of such setups? I haven't really paid too much attention to gaming/hardware in the past few years, but it seemed as if support for dual GPUs was less than stellar.

    I.e., the only true advantage was an increase in the memory available to apps -- computationally, very few games took advantage of the additional GPU.

    Has this changed, or (equally likely) am I completely off base on the state of affairs?

    • Most games support SLI/CF but don't always take full advantage of it. Titan was more focused on being a low-cost option, as they said, for people that need the double precision. As you see in that test, 2x 780 Ti is $200 cheaper and got some OC to it.
  • 3000? (Score:2, Informative)

    by geekoid ( 135745 )

    Gamers spending 3000 on a video card aren't overly burdened with intelligence.

    • Man I wish I could be that dumb.

  • Wrong tests (Score:5, Insightful)

    by gman003 ( 1693318 ) on Tuesday June 10, 2014 @07:01PM (#47207691)

    The Titan shouldn't be considered a top-end gaming card. It should be treated as a budget Tesla card - even at $3k, it's the cheapest card in Nvidia's lineup with full double-precision floating point performance (which no game uses, but is common for scientific computing, Tesla's market). And on tests using that, the single-gpu Titan and Titan Black outperform the 295X2 by a large amount [anandtech.com]. AT hasn't gotten to test a Titan Z yet, but you can tell it's going to wipe the floor with the 295X2.

    Yes, Nvidia advertised the original Titan as a super-gaming card, and to be fair it was their top-performing gaming card for a while. But once the 780 Ti came out, that was over, and since everyone expects a 790 dual-GPU gaming card to be announced soon, buying any Titan for gaming is a fool's choice.

    Nvidia seems to still be advertising it as a top-end gaming card, presumably trying to prove the old adage about fools and their money. It just comes off as a scam to me, but anyone willing to spend over a grand without doing some proper research probably deserves to be ripped off.

    • Re:Wrong tests (Score:4, Informative)

      by sshir ( 623215 ) on Tuesday June 10, 2014 @07:31PM (#47207867)
      The result of Nvidia crippling DP floating point performance on mainstream graphics cards is that people started to look for ways around this bullshit.

      Case in point: linear algebra libraries (like 80% of scientific computing). Basically people are modifying algorithms so that the bulk of computation is done in single precision and then cleaned up in double precision. Those mixed-mode algorithms often outperform pure DP ones even on non-crippled cards (for example the MAGMA library).

      People don't like to be screwed with...
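The single-precision-then-cleanup idea above is iterative refinement; a minimal NumPy illustration (not the MAGMA implementation), where the expensive solve runs in float32 and only the residual correction touches float64:

```python
import numpy as np

# Mixed-precision iterative refinement: do the bulk of the work in
# float32, then clean up the answer with float64 residual corrections.
def mixed_precision_solve(A, b, iters=3):
    A32 = A.astype(np.float32)
    # Cheap low-precision solve gives a rough starting point.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in double
        dx = np.linalg.solve(A32, r.astype(np.float32))  # fp32 correction
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)  # well-conditioned system
b = rng.standard_normal(50)
x = mixed_precision_solve(A, b)
print(np.allclose(A @ x, b))  # refined answer matches a full double solve
```

The factorization (the O(n^3) part) happens entirely in the fast precision; each refinement pass costs only a matrix-vector product and a cheap re-solve, which is why the mixed-mode version can beat a pure double-precision solve.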

      • by Nemyst ( 1383049 )
        Wait, in essence you're saying that by leveraging single-precision (which is still three times faster than double-precision even for Nvidia's compute cards) computations, libraries have been able to increase performance without compromising the quality of the results. How is that a bad thing, or people "getting screwed with"?
    • by EvilSS ( 557649 )

      The Titan shouldn't be considered a top-end gaming card. It should be treated as a budget Tesla card - even at $3k, it's the cheapest card in Nvidia's lineup with full double-precision floating point performance (which no game uses, but is common for scientific computing, Tesla's market).

      This. For gaming there is virtually no difference between a 780 Ti (~$700) and a Titan Black (~$1000). They look identical on gaming benchmarks. I imagine that a pair of 780 Tis in SLI would outperform the Titan Z when it comes to gaming (the Titan Z is underclocked compared to the Titan Black) and for less than half the price.

      The difference is the unlocked floating point capability and added VRAM. The Titans are for number crunching. The Titan Z crushes the AMD R9 295X2. Well, that and gamers looki

    • If you do a broader range of benchmarks you'll see the 290x beats the Titan on most compute benchmarks:
      http://www.anandtech.com/show/... [anandtech.com]

      • It's not cherry-picking if benchmarks like that are the primary reason to use a Titan. Particularly when I explicitly said so.

          • So, only the benchmarks that you say are important count. That's pretty much the dictionary definition of cherry-picking.

          • You're arguing with the antecedent. I'm saying "if you care about X, the Titan is good", and you're accusing me of cherry-picking because the Titan is bad at Y and Z, even though I specifically called it out as not being good for anything except X in a performance-per-penny measure.

            I am saying that one of the principal reasons to buy a Titan is if you have a heavy double-precision compute load. I then provided a benchmark showing that a Titan beats the 295X2 in such a load. It would be cherry-picking if I p

    • by anethema ( 99553 )

      To be fair, Upcoming "Star Citizen" will be using double precision floats to model its huge universe in proper detail.

      That being said I still of course agree.

  • by Bleek II ( 878455 ) on Tuesday June 10, 2014 @07:02PM (#47207707)
    The Titan line is not a purely gaming GPU! The higher price comes from leveraging its GPGPU CUDA technology. It's like buying server hardware and complaining it doesn't run your games as well as an i7 which costs less. Game enthusiasts always ruin hardware news with their one golden spec, frames per second! "That said it’s clear from NVIDIA’s presentations and discussions with the company that they intend it to be a compute product first and foremost (a fate similar to GTX Titan Black), in which case this is going to be the single most powerful CUDA card NVIDIA has ever released. NVIDIA’s Kepler compute products have been received very well by buyers so far, including the previous Titan cards, so there’s ample evidence that this will continue with GTX Titan Z. At the end of the day the roughly 2.66 TFLOPS of double precision performance on a single card (more than some low-end supercomputers, we hear) is going to be a big deal, especially for users invested in NVIDIA’s CUDA ecosystem." - AnandTech
    • by Salgat ( 1098063 )
      The Titan line is marketed by NVidia for gaming. Compound this with the fact that the Titan line does not come with workstation drivers and that cheaper alternatives exist for workstation GPUs (dedicated to GPGPU mind you), and it makes no sense to try to argue that the Titan line was not meant for gaming.
        • Could you provide an example of a single-card solution which can provide 2.6 TeraFLOPS at 64-bit precision while also allowing the user to run consumer applications?
  • It's SCrypt Hashes per Second per Watt of energy consumed. And SCrypt Hashes per Second per Dollar of GPU.

    • If you live in a cold climate, don't forget to take into account the fact that the heat generated by your GPU can reduce your heating-bill.
  • I know that this is not a purely gaming card bla bla bla. But here is another ping: in this month's graphics card review at Tom's hardware AMD totally dominated... in all categories. I mean a clean sweep! What's going on? Or is it just bad timing?
    • by Barny ( 103770 )

      Link? The only benchmark lists I could find there only tested FPS. And we all know that such tests are quite silly now as quality and to some extent latency are important these days.

  • the R9 295X2 offered higher and more consistent frame rates

    http://cdn.pcper.com/files/ima... [pcper.com]

    But not "stable", "consistent" or "smooth". This is still a major issue with the core of all AMD cards which hasn't been fixed.
    You get what you pay for. Nvidia might be the "expensive" one of the bunch; I just wish I'd forked out a little more instead of getting my HD7770.

    • by ponos ( 122721 )

      the R9 295X2 offered higher and more consistent frame rates

      http://cdn.pcper.com/files/ima... [pcper.com]

      But not "stable", "consistent" or "smooth". This is still a major issue with the core of all AMD cards which hasn't been fixed.
      You get what you pay for. Nvidia might be the "expensive" one of the bunch; I just wish I'd forked out a little more instead of getting my HD7770.

      Do you realize that in the graph you linked no card dips below 50fps at any time? In fact, if you count the occasional peaks crossing the (ridiculously low) 15ms/66fps threshold, the Titan Z shows 6 frames longer than 15ms and the 295X2 shows 4 frames longer than 15ms (if I count correctly). You really can't argue that the Titan Z is smoother. All cards are extremely smooth.

  • Working Linux drivers cost $1500 and your soul, apparently...
  • AMD fp64 rate (Score:4, Informative)

    by ponos ( 122721 ) on Wednesday June 11, 2014 @07:56AM (#47211281)

    I would just like to point out that the 295X2 has superior absolute gaming performance and superior fp32 performance but, just like most gaming Nvidia products, the fp64 rate is crippled by configuration to 1/8 the fp32 rate in order to create a profit margin for the costlier "pro" products. The hardware itself is capable of a 1/2 fp64 rate and should be superior to the Titan Z if AMD decides to offer "pro-level support".

    As proof, consider the fp64 rate of the single-chip AMD W9100, sold at ~$4000, which is 2.6 TFlops (http://www.amd.com/Documents/FirePro_W9100_Data_Sheet.pdf), versus the 2.7 TFlops of the Titan Z (1/3 fp32 rate, see http://en.wikipedia.org/wiki/G... [wikipedia.org]). AMD could unlock the 295X2 at its full potential 5.2 double precision TFlops and release it any day if they want, crushing the Titan Z.

    Honestly, instead of the Titan Z, I'd rather buy the AMD W9100 for $4000 and get equivalent double precision compute rate, better perf/W and, most importantly, certification for pro applications and ECC memory. That is certainly worth the extra $1000 in this product segment.
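The rate arithmetic in the comment above is just peak fp32 throughput times the fp64:fp32 ratio. A quick check against the Titan Z figure cited there (the ~8.1 TFLOPS fp32 input is an assumption taken to be consistent with the 2.7 TFLOPS / 1/3-rate numbers in the comment):

```python
# Peak fp64 throughput is peak fp32 throughput times the fp64:fp32 rate.
def fp64_tflops(fp32_tflops, ratio):
    return fp32_tflops * ratio

# Titan Z: ~8.1 TFLOPS fp32 at a 1/3 fp64 rate gives the ~2.7 TFLOPS cited.
print(round(fp64_tflops(8.1, 1 / 3), 1))
# The same silicon held to a 1/8 "gaming" rate would manage only ~1.0 TFLOPS.
print(round(fp64_tflops(8.1, 1 / 8), 1))
```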

  • The Titan Z is also overpriced in that it costs significantly more money than TWO Titan Black cards. It really only makes sense if you're planning to buy two of them for quad SLI, and if you've got that kind of crazy money to spend you don't care what it costs. The Titan Z is a statement product meant for people with too much disposable income; it doesn't make much sense for anybody else.
