NVIDIA Quadro M6000 12GB Maxwell Workstation Graphics Tested Showing Solid Gains
MojoKid writes: NVIDIA's Maxwell GPU architecture has been well-received in the gaming world, thanks to cards like the GeForce GTX Titan X and the GeForce GTX 980. NVIDIA recently brought that same Maxwell goodness over to the workstation market as well, and the result is the new Quadro M6000, NVIDIA's new highest-end workstation platform. Like the Titan X, the M6000 is based on the full-fat version of the Maxwell GPU, the GM200. Also like the GeForce GTX Titan X, the Quadro M6000 has 12GB of GDDR5, 3072 GPU cores, 192 texture units (TMUs), and 96 render outputs (ROPs). NVIDIA has said that the M6000 will beat out its previous-gen Quadro K6000 in a significant way in pro workstation applications, as well as in GPGPU, rendering, and encoding applications that can be GPU-accelerated. One thing that's changed with the launch of the M6000 is that AMD no longer trades shots with NVIDIA for the top pro graphics performance spot. Last time around, there were some benchmarks that still favored team red. Now, the NVIDIA Quadro M6000 puts up pretty much a clean sweep.
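A quick back-of-envelope check on that memory spec (a minimal sketch in Python; the 384-bit bus width and 6.6 Gbps effective data rate are the M6000's published figures, taken here as assumptions rather than anything from the article):

```python
# Peak GDDR5 bandwidth from bus width and effective per-pin data rate.
# Assumed figures: 384-bit bus, 6.6 Gbps effective transfer rate.
def gddr5_bandwidth_gbs(bus_width_bits: int, effective_gbps: float) -> float:
    return bus_width_bits / 8 * effective_gbps  # bytes moved per second, in GB/s

print(gddr5_bandwidth_gbs(384, 6.6))  # -> 316.8 GB/s
```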
No use (Score:3, Funny)
Re: (Score:1)
How did you manage a score of two? Is my sarcasm detector on the blink?
It has a DVI-I output, so a DVI-I to D-Sub adapter could be used.
Re: (Score:3, Funny)
I created an account over 10-15 years ago, treated it with utmost care and love, only now and then jesting, but very cautiously and carefully.
With the occasional post, my precious karma went silently up and up, and up and up.
Until today, I felt the longing for OMG frist post, succumbed and finally blew it all on an Nvidia topic.
Perhaps on topic: what useful task would this card's capabilities be insufficient for? (useful in the sense of tasks with video output)
Re: (Score:2)
what useful task would this card's capabilities be insufficient for? (useful in the sense of tasks with video output)
Video editing, image processing, or VFX.
Avid and Premiere, Photoshop, and After Effects can use the CUDA processing power of this card.
Re: (Score:2)
Cables aren't analog or digital. They're cables.
Re: (Score:2)
Re: (Score:2)
BNC is only for analog, you say? SDI disagrees with you, as does CoaXPress. Hell, 4x CXP-6, which runs over four BNC-terminated coax cables, is pretty close bandwidth-wise to DisplayPort, but over a far greater distance.
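Rough numbers behind that bandwidth claim (a sketch; the per-link rates below are from the published CoaXPress and DisplayPort specs as best I recall, so treat them as assumptions):

```python
# Raw link rates, pre-overhead: CXP-6 is 6.25 Gbps per coax link;
# DisplayPort 1.2 (HBR2) is 5.4 Gbps per lane over 4 lanes.
cxp6_gbps = 6.25 * 4   # 4x CXP-6 over four BNC-terminated coax cables
dp12_gbps = 5.4 * 4    # DisplayPort 1.2, four HBR2 lanes
print(f"4x CXP-6:        {cxp6_gbps:.1f} Gbps")
print(f"DisplayPort 1.2: {dp12_gbps:.1f} Gbps")
```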
Re: (Score:2)
tell that to Monster
..but make sure that the wires in your phone are the right way round, or the signal might not be clear enough for them to understand what you're saying.
Re: (Score:2)
I like to make sure that the wires are very slightly tipped when I talk so that the electrons go downhill into my phone. Otherwise the wire gets clogged.
Isn't that why the phone lines run overhead? Otherwise the people on the second floor couldn't use the phone!
rgb
Other way 'round (Score:1)
For a customer service phone drone, asking someone to turn their ethernet cables around the other way is actually a brilliant tactic. Of course we all know the cables are bidirectional, but most lusers don't have a clue. By asking them to flip the ends around you're really asking them to reseat the connections at either end, but it's seemingly strange and arcane enough that they'll actually do it, instead of just making some fake noises into the phone and saying "There, I've done it." You might be surprised.
Re: (Score:2)
Some people do use 2560x1440 over VGA (at a reduced refresh rate); it's a workaround that works on some Intel graphics.
Too bad (Score:5, Interesting)
It's too bad there is no double-precision performance to speak of on these newer cards lately. Good for games, not much else.
Re: (Score:3)
nVidia doesn't care about double precision. The benchmark suites don't, either.
But researchers do, and that (and price/performance) makes them go AMD.
Re: (Score:3, Informative)
Yea, they do go Nvidia; that's the unfortunate reality, but it's somewhat understandable if you look at all the extra capabilities Nvidia's architecture exposes - this becomes clear once you've really soaked in AMD and OpenCL. Also, AMD flubbed double precision after the 7990s. No one gives a crap about double precision for the foreseeable future, except all the researchers and engineers programming the darn things. Haha. Wait, why is no one laughing?
Re: (Score:1)
I think there's only a very small niche of applications for GPGPU cards where double precision is absolutely necessary.
Re:Too bad (Score:4, Informative)
If your algorithm is unstable at single precision floating point, it's going to be unstable at double precision as well.
Do you even know what you're talking about? Error propagation, does that ring a bell? I'll give you a hint: if your computation requires a large number of operations, then the absolute magnitude of machine rounding errors is critical. And, lest you think this is a 'very small niche', any matrix multiplication has O(n) operations per element. Chain a few of those (say, in a Markov-chain type of random walk) and it's exponential growth. Using double instead of single is like being able to do periodic (expensive) full recomputations to control the stability of a fast-updating chain, instead of barely having enough precision when doing the full recomputations. Fast and stable versus extremely slow and perhaps (depending on today's $DEITY's mood) barely stable.
To put it differently, by the time error accumulation in double precision leaves you with a single-precision-worth of valid digits, single precision error accumulation has long ago made the computed value completely meaningless.
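A minimal sketch of that error-accumulation argument, using NumPy (the matrix size, step count, and the row-stochastic Markov-chain setup are all illustrative choices of mine, not anything from the benchmarks above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

for dtype in (np.float32, np.float64):
    A = P.astype(dtype)
    x = np.full(n, 1.0 / n, dtype=dtype)   # uniform starting distribution
    for _ in range(10_000):
        x = x @ A                           # one Markov-chain step
    # In exact arithmetic x always sums to 1, so the drift of the sum
    # is purely accumulated rounding error.
    print(dtype.__name__, "drift:", abs(float(x.sum()) - 1.0))
```

Run it and the float32 drift comes out orders of magnitude larger than the float64 drift, which is the point: the single-precision chain burns through its valid digits far sooner.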
Re: (Score:3)
To paraphrase, you can't be too rich, too thin, or have too many bits of precision in a calculation. With single precision you have to be enormously careful not to drop digits even in comparatively modest loops; with double precision you can many digits before you run out. You can see it in almost any computations involving trig and pi -- single precision pi degrades in series much faster than double precision pi. It isn't just a matter of not using forward recursion to evaluate bessel functions, which
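The point about single-precision pi degrading in series is easy to demonstrate (a sketch; NumPy and the range of k are arbitrary choices of mine):

```python
import numpy as np

k = np.arange(1, 100_001)
for pi in (np.float32(np.pi), np.float64(np.pi)):
    # sin(k*pi) is exactly 0 in real arithmetic; any residue is the
    # rounding error in pi, amplified by the multiplication by k.
    err = np.abs(np.sin(k.astype(pi.dtype) * pi)).max()
    print(pi.dtype, "max |sin(k*pi)|:", err)
```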
Re: (Score:3)
Ha! I see what you did there. Unfortunately, your verb wasn't double-precision.
Re: (Score:3)
High end 3D CAD packages can benefit greatly from accelerated graphics processing, and having gobs of memory on the video card can help store all that data.
Alternatively; imagine you work for Pixar.
=Smidge=
Re: (Score:2)
3D volume rendering and large scale 2D image rendering. Modern biological and medical imaging systems can create gargantuan datasets, and visualising them is computationally expensive. The largest single images I have are ~5GiB from tiled confocal z stacks (large 3D volume e.g. 4096x4096x64 with 4x 16-bit channels is 8GiB). Current lightsheet microscopes can acquire over 1TB *per sample* in just a few minutes, with the final processed/renderable volume still being many tens of GBs. MRI scans can also be
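The sizes quoted there check out with simple arithmetic (a sketch; the dimensions and channel counts are the ones given in the comment above):

```python
# Memory footprint of a multi-channel 3D volume.
def volume_gib(x, y, z, channels, bytes_per_sample):
    return x * y * z * channels * bytes_per_sample / 1024**3

# 4096 x 4096 x 64 voxels, 4 channels of 16-bit (2-byte) samples:
print(volume_gib(4096, 4096, 64, 4, 2), "GiB")  # -> 8.0 GiB
```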
Re: (Score:1)
But that uses 2 slots, suffers from frame jitter due to SLI, and gives less bang per buck.
The cooling system is a tiny bit of a hassle, especially if you also have CPU water cooling, which requires a specialized case. I'd like to see Fiji, the next GPU coming out soon. Nvidia does have the drop on AMD, though. No question about that.
Re: Not THAT surprising (Score:1)
Well, when I was at AMD years ago, the graphics GM basically said that since workstation graphics was so low volume, we (AMD) weren't going to do much of anything to specifically target it. If we got wins by doing nothing, well, great; otherwise, oh well. That mindset still persists, methinks...
Shouldn't Slashdot put up a (Score:3)
"Sponsored Content" banner at the top of this post?
Re: (Score:2)
"Sponsored Content" banner at the top of this post?
Right next to the "Guaranteed to Run Crysis" stamp!
Tired of GeForce / Quadro Marketing Crap (Score:2)
The triangles shouldn't care whether they are being rendered for a game or CAD.
With Nvidia, you have a choice of a workstation card that cannot cool itself, or a gaming card that has been intentionally crippled for CAD.
Lame. (Score:2)
Less space (12GB vs 16GB) than AMD
No DisplayPort 1.3
No Wireless
Lame.