Hardware

Nvidia Announces 'Nvidia Titan V' Video Card: GV100 for $3000 (anandtech.com)

Nvidia has announced the Titan V, the "world's most powerful PC GPU." It's based on Nvidia's Volta, the same architecture as the Nvidia Tesla V100 GPUs behind Amazon Web Services' recently launched top-end P3 instances, which are dedicated to artificial-intelligence applications. From a report: A mere 7 months after Volta was announced with the Tesla V100 accelerator and the GV100 GPU inside it, Nvidia continues its breakneck pace by releasing the GV100-powered Titan V, available for sale today. Aimed at a decidedly more compute-oriented market than ever before, the 815 mm2 behemoth die that is GV100 is now available to the broader public. [...] The Titan V, by extension, sees the Titan lineup finally switch loyalties and start using Nvidia's high-end compute-focused GPUs, in this case the Volta-architecture GV100. The end result is that rather than being Nvidia's top prosumer card, the Titan V is decidedly more focused on compute, particularly due to the combination of the price tag and the unique feature set that comes from using the GV100 GPU. Which isn't to say that you can't do graphics on the card -- this is still very much a video card, outputs and all -- but Nvidia is first and foremost promoting it as a workstation-level AI compute card, and by extension focusing on the GV100 GPU's unique tensor cores and the massive neural-networking performance advantages they offer over earlier Nvidia cards.
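For a sense of what those tensor cores actually expose to programmers, here is a minimal sketch using CUDA's warp-level WMMA API (the nvcuda::wmma namespace, available since CUDA 9 on compute capability 7.0 parts like GV100). This is an illustrative fragment under those assumptions, not Nvidia sample code:

    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    // One warp computes a single 16x16 tile of D = A*B + C.
    // Inputs are fp16, the accumulator is fp32 -- the mixed-precision
    // mode the tensor cores are designed around.
    __global__ void wmma_tile(const half *a, const half *b, float *d) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::fill_fragment(acc, 0.0f);      // start from C = 0
        wmma::load_matrix_sync(fa, a, 16);   // leading dimension 16
        wmma::load_matrix_sync(fb, b, 16);
        wmma::mma_sync(acc, fa, fb, acc);    // the tensor-core operation
        wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major);
    }

Launched as wmma_tile<<<1, 32>>>(a, b, d) with device pointers to 16x16 matrices, the single mma_sync call is where the quoted neural-network speedups come from: a full 16x16x16 multiply-accumulate per warp instead of scalar FMAs.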
  • by Anonymous Coward
    It's like a number-crunching daughtercard that can maybe do video too as a secondary feature.
  • by bobbied ( 2522392 ) on Friday December 08, 2017 @10:26AM (#55701527)

    Somehow, I seriously doubt it.

    • It might be more useful in the realm of folding. Folding is a much more complex operation: it tends to need more RAM, benefits from more cores, and wants fast throughput on the bus. The big issue with mining cards when applied to folding is that you can't get a motherboard supporting more than 4, maybe 6, PCIe 8x-16x slots, and then you need a CPU core for each card on top of it. If you can pack 2 GPUs' worth of power into a single card, suddenly you have double the capacity (or rather, nearly on par with two cards).
      • by afidel ( 530433 )

        EPYC has 128 PCIe lanes, so it can support 16 8x cards, and it has 2 CPU cores for each of those cards. I doubt there are any motherboards that expose that many lanes, but one could be built (the most I've seen is 8 full-length slots).

        • What's the name of the motherboard with 8 PCIe 8x or higher slots?
        • by Shinobi ( 19308 )

          It still comes down to how the PCI-E root complex is set up. There are situations where communicating over InfiniBand to a different GPU in the same machine is faster than going through the root complex.
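          For what it's worth, you can see that topology from software: the CUDA runtime will tell you which device pairs support direct peer-to-peer access. A small sketch using the standard runtime calls (illustrative, not tied to any particular board):

              #include <cstdio>
              #include <cuda_runtime.h>

              // Print which GPU pairs can do direct P2P transfers.
              // The answer depends on the PCIe/NVLink topology -- the
              // root-complex layout discussed above.
              int main() {
                  int n = 0;
                  cudaGetDeviceCount(&n);
                  for (int i = 0; i < n; ++i)
                      for (int j = 0; j < n; ++j) {
                          if (i == j) continue;
                          int ok = 0;
                          cudaDeviceCanAccessPeer(&ok, i, j);
                          printf("GPU %d -> GPU %d: P2P %s\n", i, j, ok ? "yes" : "no");
                      }
                  return 0;
              }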

    • Look up the numbers for mining Monero.

  • by Anonymous Coward

    It's funny, because I used to think AMD's drivers were crap, but since switching from an AMD A-10 to an NVidia GTX and updating the drivers, all kinds of bugs and errors have manifested. What good is a $3000 video board if the drivers are acting up?

    • Lots of drivers are crap these days...

      Why? Which one of you can write C or C++ anymore?

      I have a Linksys router that has crappy wireless drivers with memory leaks. Paid a pile of money for it. It locks up about twice a week and requires a full factory reset to fix. Linksys got the wireless drivers from the chip maker (or so they say) as a binary blob, so they claim to be at the chip maker's mercy. Who over there at the WiFi chip manufacturer doesn't know how to track down and fix memory leaks? I can see the first release shipping with a leak, but never fixing it is inexcusable.

      • All the software on your Linksys is made in Asia now.
        • Maybe the chipset stuff is, but under the covers the routers I have from Linksys run OpenWRT with their own UI glued on. I run OpenWRT/LUCI on them myself, once the warranty period is over. In fact, I don't buy residential routers that OpenWRT won't run on anymore...
      • Which one of you can write C or C++ anymore?

        I do, but I choose to work on legacy VB6 instead. Also, driver optimizations should be done with assembly.

      • by Kjella ( 173770 )

        Lots of drivers are crap these days... Why? Which one of you can write C or C++ anymore?

        Lots of people can. They're just busy doing more important things than writing drivers for consumer trinkets.

    • What good is a $3000 video board if the drivers are acting up?

      The main target segment for this card is scientific computation.
      Most of the Titan Vs sold are going to end up on Linux compute nodes in some university or other.
      Their output connectors are unlikely ever to get plugged into anything.
      Not a problem if their DirectX drivers are acting up a bit.
      The CUDA SDK is the thing they most definitely need to get working right.

      In terms of volume sold, the few hardcore extreme-enthusiast gamers who are going to stuff them into Windows workstations are only icing on the cake.
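      And on those headless nodes, "using" the card just means the runtime can see it. A minimal sketch of the usual first sanity check, written against the standard CUDA runtime API (Volta parts report compute capability 7.0):

          #include <cstdio>
          #include <cuda_runtime.h>

          // Enumerate CUDA devices -- no display or output connector needed.
          int main() {
              int n = 0;
              cudaGetDeviceCount(&n);
              for (int i = 0; i < n; ++i) {
                  cudaDeviceProp p;
                  cudaGetDeviceProperties(&p, i);
                  // A Titan V shows up as sm_70 (Volta), tensor cores included.
                  printf("%d: %s, sm_%d%d, %zu MiB\n", i, p.name,
                         p.major, p.minor, p.totalGlobalMem >> 20);
              }
              return 0;
          }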

  • I'll wait until I can purchase it for $100.

  • by Anonymous Coward on Friday December 08, 2017 @10:30AM (#55701571)

    Imagine a Beowulf cluster of these

  • by UnknownSoldier ( 67820 ) on Friday December 08, 2017 @10:37AM (#55701617)

    Before we get the deluge of "What's this used for?" we need to take a look at the specs.

    Float64 performance is 1/2 of float32 -- WOW! This thing is built for number crunching! (The original Titan had 1/3 float64 performance. Gamers screamed bloody murder when it sold at $1,000, but they weren't the target audience.)

    Bandwidth has been neutered to 653 GB/sec due to the 3,072-bit memory bus width, compared to the 900 GB/sec of the Tesla V100.

    Compared to spending $10,000 on a Tesla V100, at $3,000 this is basically the "poor man's" Tesla V100, specifically designed for AI. And it keeps the full 640 tensor cores.

    TL;DR: If you are doing number crunching (C's "double") or AI / ML (machine learning), this might be a bargain GPU. Otherwise, it has almost zero practical value from a gamer's POV.
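    For anyone checking the bandwidth figure, it falls straight out of bus width times HBM2 data rate: 3,072 bits / 8 = 384 bytes per transfer, and 384 B x ~1.7 GT/s ≈ 653 GB/sec. The Tesla V100 gets its ~900 GB/sec from a 4,096-bit bus (four HBM2 stacks instead of three), so the "neutering" is one disabled memory stack, not slower memory.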

    • Re: (Score:3, Interesting)

      by afidel ( 530433 )

      What's interesting is the difference between the NVidia approach and Google's TensorFlow approach. NVidia is beefing up float64 performance while Google focused on 8-bit and 16-bit performance (ops/W), which is why Google's newest game-playing AI ran on a single TPU at 40W (after training on 5,000 TPUs and thousands of cores).

      • I caught that too. nVidia, in traditional fashion, is "going narrow and deep", while Google is "going wide but shallow".

        You bring up an excellent point. Things are about to get REAL interesting in the ML space. It will be very exciting to see what/where each respective card excels at (pardon the pun) along with the benchmarks.

          • Quite the opposite. The focus seems to be on fp16, with the tensor cores massively increasing throughput at that precision. But fp64 is strong as well, which could matter for other professional applications.
    • by enjar ( 249223 )
      Exactly. We had it in the plans to acquire some V100 servers as part of an upgrade this year, as well as update some of our Dev/QE desktops to the very dearly priced GP100. Management had seen the numbers for that and were kind of holding their noses while saying "yes" because 1) they had real-live business reasons for us to do it and 2) those business reasons are considered a high priority but 3) the plan was expensive, no two ways about it. Now that this is an option, the numbers look considerably better.
  • But can it run Crysis?
