Nvidia Announces 'Nvidia Titan V' Video Card: GV100 for $3000 (anandtech.com)
Nvidia has announced the Titan V, the "world's most powerful PC GPU." It's based on Nvidia's Volta, the same architecture as the Nvidia Tesla V100 GPUs behind Amazon Web Services' recently launched top-end P3 instances, which are dedicated to artificial-intelligence applications. From a report: A mere 7 months after Volta was announced with the Tesla V100 accelerator and the GV100 GPU inside it, Nvidia continues its breakneck pace by releasing the GV100-powered Titan V, available for sale today. Aimed at a decidedly more compute-oriented market than ever before, the 815 mm² behemoth die that is GV100 is now available to the broader public. [...] The Titan V, by extension, sees the Titan lineup finally switch loyalties and start using Nvidia's high-end compute-focused GPUs, in this case the Volta architecture based V100. The end result is that rather than being Nvidia's top prosumer card, the Titan V is decidedly more focused on compute, particularly due to the combination of the price tag and the unique feature set that comes from using the GV100 GPU. Which isn't to say that you can't do graphics on the card -- this is still very much a video card, outputs and all -- but Nvidia is first and foremost promoting it as a workstation-level AI compute card, and by extension focusing on the GV100 GPU's unique tensor cores and the massive neural-networking performance advantages they offer over earlier Nvidia cards.
Why even call this a video card? (Score:1)
But can it pay for itself mining Coin? (Score:3)
Somehow, I seriously doubt it.
Re: (Score:2)
This is a workstation card, or card for a workstation.
Is it? As far as I know, all nVidia's workstation cards are named Quadro and have the ability to run certified drivers tweaked for workstation applications, like CAD/CAM and rendering apps.
I expect that there will be a corresponding workstation card, which will cost far more for similar hardware, but this is to my knowledge not one.
Re: (Score:2)
Re: (Score:2)
EPYC has 128 PCIe lanes, so it can support 16 x8 cards, and it has 2 cores for each of those cards. I doubt there are any motherboards that expose that many slots, but one could be built (the most I've seen is 8 full-length slots).
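The lane math above can be checked with quick back-of-the-envelope arithmetic (a sketch; the 128-lane figure is from AMD's published EPYC specs, and the core count assumes a 32-core part such as the EPYC 7601):

```python
# Sanity-check the EPYC claim: how many x8 GPUs fit, and how many
# CPU cores does each one get?
TOTAL_PCIE_LANES = 128   # single-socket EPYC "Naples" exposes 128 PCIe 3.0 lanes
LANES_PER_CARD = 8       # run each GPU at x8 instead of the usual x16
CORES = 32               # assumes a top-end 32-core part

cards = TOTAL_PCIE_LANES // LANES_PER_CARD
cores_per_card = CORES // cards
print(cards, cores_per_card)  # 16 cards, 2 cores per card
```

Running all cards at x8 instead of x16 halves per-card host bandwidth, which matters less for compute workloads that keep data resident in GPU memory.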
Re: (Score:1)
Re: (Score:1)
Here's one from Supermicro with 8 PCI-E 3.0 x16: https://www.supermicro.com/products/system/4U/4028/SYS-4028GR-TRT.cfm
Re: (Score:1)
Re: (Score:2)
It still comes down to how the PCI-E root complex is setup. There are situations where communicating over Infiniband to a different GPU in the same machine is faster than going through the root complex.
Re: (Score:2)
Look up the numbers for mining Monero.
CUDA ? (Score:2)
Normally, this kind of graphics card is supported by the proprietary closed-source drivers for Linux and the corresponding CUDA SDK.
It wouldn't make much sense for Nvidia NOT to release Linux support for it. As a GPU-based AI accelerator, they would be missing out on the big Linux HPC market segment (come on, is there any supercomputer still relevant nowadays that doesn't run some Unix?)
Though they would definitely be supporting Windows too (not to miss out on the lucrative "extreme gamer enthusiast" market, and pe
But will the drivers work? (Score:1)
It's funny, because I used to think AMD's drivers were crap, but since switching from an AMD A-10 to an NVidia GTX with updated drivers, all kinds of bugs and errors manifest. What good is a $3000 video board if the drivers are acting up?
Re: (Score:3)
Lots of drivers are crap these days...
Why? Which one of you can write C or C++ anymore?
I have a Linksys router that has crappy wireless drivers with memory leaks. Paid a pile of money for it. It locks up about twice a week and requires a full factory reset to fix it. Linksys got the wireless drivers from the chip maker (or so they say) as a blob so they claim to be at their mercy. Who over there at the WiFi chip manufacturer doesn't know how to track down and fix memory leaks? I can see the first re
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Which one of you can write C or C++ anymore?
I do, but I choose to work on legacy VB6 instead. Also, driver optimizations should be done with assembly.
Re: (Score:2)
Lots of drivers are crap these days... Why? Which one of you can write C or C++ anymore?
Lots of people can. They're just busy doing more important things than writing drivers for consumer trinkets.
Drivers.... (Score:2)
What good is a $3000 video board if the drivers are acting up?
The main target segment for this card is scientific computation.
Most of the Titan Vs sold are going to end up on Linux compute nodes in some university or other.
Their output connectors are unlikely ever to get plugged into anything.
Not a problem if their DirectX drivers are acting up a bit.
The CUDA SDK is the thing they most definitely need to get working right.
In terms of volume sold, the few hardcore extreme enthusiast gamers who are going to stuff them into Windows workstations are only icing on the cake.
I'll wait (Score:2)
I'll wait until I can purchase it for $100.
Re: (Score:2)
That's fine... that's my non-shifting price point for a video card. You can always get a "good enough" video card for $100... that logic has held true for 20 years.
Re: (Score:2)
Actually, $100 always gets you a "too slow to play the latest games on high settings" video card, while $200 (nearly) always gets you a more than satisfactory experience. That said, I put a $50 fanless GPU in my primary workstation just for the blessed silence. The gaming machine beside it has a $140 GPU (RX 460) that will easily meet my needs until AMD's next process shrink arrives. What I want in my next card: 3X throughput bump, but fans completely stopped when not cranking 3D. Reasonable to expect give
Imagine a Beowulf cluster of these (Score:3, Funny)
Imagine a Beowulf cluster of these
Re: (Score:2)
Do you want to heat your house?
Because that's how you heat your house.
Built for number crunching (Score:5, Informative)
Before we get the deluge of "What's this used for?" we need to take a look at the specs.
Float64 performance is a full 1/2 of float32 -- WOW! This thing is built for number crunching! (The original Titan had 1/3-rate float64 performance. Gamers screamed bloody murder when it sold at $1,000, but they weren't the target audience.)
Bandwidth has been neutered to only 653 GB/sec due to the 3,072-bit memory bus width, compared to 900 GB/sec on the Tesla V100 with its full 4,096-bit bus.
Compared to spending ~$10,000 on a Tesla V100, at $3,000 this is basically the "poor man's" Tesla V100, specifically designed for AI. It keeps the full 640 tensor cores.
TL;DR: If you are doing number crunching (C's "double") or AI / ML (machine learning), this might be a bargain GPU. Otherwise, it has almost zero practical value from a gamer's POV.
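The headline numbers above can be reconstructed from the published specs (a sketch; the 1.7 Gb/s/pin HBM2 data rate, 1455 MHz boost clock, and 5,120 CUDA cores are Nvidia's stated Titan V figures):

```python
# Reconstruct the Titan V bandwidth and FLOPS figures from first principles.
bus_width_bits = 3072          # 3 active HBM2 stacks x 1024 bits each
data_rate_gbps = 1.7           # per pin, per Nvidia's spec sheet
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gb_s)          # 652.8 -- the ~653 GB/s quoted above

cuda_cores = 5120
boost_mhz = 1455
fp32_tflops = cuda_cores * 2 * boost_mhz / 1e6   # 2 FLOPs/core/clock (FMA)
fp64_tflops = fp32_tflops / 2                    # the 1/2 ratio discussed above
print(round(fp32_tflops, 1), round(fp64_tflops, 2))  # ~14.9 and ~7.45 TFLOPS
```

The V100's 900 GB/s comes from the same formula with a 4,096-bit bus and a slightly higher data rate.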
Re: (Score:3, Interesting)
What's interesting is the difference between the NVidia approach and Google's TensorFlow approach. NVidia is beefing up float64 performance, while Google focused on 8-bit and 16-bit performance (ops/W), which is why Google's newest gaming challenge used a single TPU running at 40 W to run the AI (after training on 5,000 TPUs and thousands of cores).
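The precision trade-off behind that split is easy to demonstrate (a sketch using NumPy's IEEE half-precision `float16` as a stand-in; TPUs use their own low-precision formats):

```python
# Show what 16-bit arithmetic gives up relative to wider types.
import numpy as np

# float16 has a 10-bit mantissa: integers are exact only up to 2048.
assert np.float16(2048) == 2048
assert np.float16(2049) == 2048        # 2049 is not representable; rounds down

# Naively accumulating in fp16 stalls once the running sum gets large,
# because each 0.1 increment falls below the rounding step. Accumulating
# fp16 inputs in fp32 (as Volta's tensor cores do) stays close to the
# true total of ~1000.
xs = np.full(10000, 0.1, dtype=np.float16)
naive = np.float16(0)
for x in xs:
    naive = np.float16(naive + x)
acc32 = np.sum(xs, dtype=np.float32)
print(float(naive), float(acc32))      # naive sum stalls far below 1000
```

For inference workloads the low-precision path is usually accurate enough, which is why it wins so decisively on ops per watt.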
Re: (Score:2)
I caught that too. nVidia, in traditional fashion, is "going narrow and deep", while Google is "going wide but shallow".
You bring up an excellent point. Things are about to get REAL interesting in the ML space. It will be very exciting to see what/where each respective card excels at (pardon the pun) along with the benchmarks.
Re: (Score:3)
Re: (Score:3)
Sure, (Score:2)