NVIDIA Unveils Tesla V100 AI Accelerator Powered By 5120 CUDA Core Volta GPU (hothardware.com)
MojoKid writes: NVIDIA CEO Jen-Hsun Huang just offered the first public unveiling of a product based on the company's next-generation GPU architecture, codenamed Volta. NVIDIA announced its new Tesla V100 accelerator, designed for AI and machine learning applications, and at the heart of the Tesla V100 is NVIDIA's Volta GV100 GPU. The chip features 21.1 billion transistors on a die that measures 815mm2 (compared to 12 billion transistors and 610mm2, respectively, for the previous-gen Pascal GP100). The GV100 is built on a 12nm FinFET manufacturing process by TSMC. It comprises 5,120 CUDA cores with a boost clock of 1455MHz, versus 3,584 CUDA cores for the GeForce GTX 1080 Ti and previous-gen Tesla P100 AI accelerator, for example. The new Volta GPU delivers 15 TFLOPS of FP32 compute performance and 7.5 TFLOPS of FP64 compute performance. Also on board are 16MB of cache and 16GB of second-generation High Bandwidth Memory (HBM2) delivering 900GB/sec of bandwidth via a 4096-bit interface. The GV100 also has dedicated Tensor cores (640 in total) for accelerating AI workloads; NVIDIA notes these allow a 12x uplift in deep learning performance compared to Pascal, which relies solely on its CUDA cores. NVIDIA is targeting a Q3 2017 release for the Tesla V100, but the timetable for a GeForce-derivative family of consumer graphics cards has not been disclosed.
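The headline numbers hang together arithmetically; here's a quick back-of-the-envelope check in Python, assuming the usual 1 fused multiply-add (2 FLOPs) per CUDA core per clock, a 2:1 FP32:FP64 core ratio, and 128 FLOPs per Tensor core per clock as NVIDIA describes them:

```python
# Sanity-check of the quoted V100 peak-throughput figures. These are
# marketing peak numbers (boost clock, every unit busy), not sustained
# performance.
boost_clock_hz = 1455e6

fp32_cores = 5120
fp32_tflops = fp32_cores * 2 * boost_clock_hz / 1e12
print(f"FP32 peak:   {fp32_tflops:.1f} TFLOPS")   # ~14.9, quoted as 15

fp64_cores = 2560  # assumed 2:1 FP32:FP64 core ratio on GV100
fp64_tflops = fp64_cores * 2 * boost_clock_hz / 1e12
print(f"FP64 peak:   {fp64_tflops:.1f} TFLOPS")   # ~7.4, quoted as 7.5

tensor_cores = 640  # each performs a 4x4x4 matrix FMA: 128 FLOPs/clock
tensor_tflops = tensor_cores * 128 * boost_clock_hz / 1e12
print(f"Tensor peak: {tensor_tflops:.0f} TFLOPS") # ~119, quoted as 120
```

The quoted figures appear to be rounded up slightly from these peaks, which is typical for launch-day spec sheets.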
LEAVE TESLA ALONE!! (Score:1)
He's dead. He died crazy, poor and lonely, because he was an unappreciated genius who'd been repeatedly robbed by corporate villains. I swear if the next thing some greedy corporate bastards name after Tesla isn't a cure for mental illness or a solid-state generator that provides unlimited free wireless power I'm going to blow a gasket.
Re:LEAVE TESLA ALONE!! (Score:5, Interesting)
Where you see shame, I see honour and respect. It took generations for the public to learn the truth about his genius and tragedy. What better historical revenge than to slap HIS name on all the best and brightest things mankind creates with electricity? I can't think of a more just legacy. I think if science were to resurrect him, we would see tears of joy as the world lovingly respects his discoveries and hard work.
Re: (Score:3, Insightful)
Having a unit of measurement named after you by the scientific community is quite enough honour and respect, and it sure beats having a corporation trying to make an extra buck by exploiting your name (and posthumous fame) for an ephemeral product line, without your consent.
Besides, do you really think that the marketing oils at Nvidia sat in a conference room asking themselves: “Guys, what deserving hero could we possibly honour with this product?”, rather than: “What name is likely to st
Re: (Score:2)
LEAVE TESLA ALONE!
He died ... lonely
Eh...mission accomplished?
Re: (Score:2)
They are serving an entirely different market. If you want better CUDA performance, especially double precision, you need to get the "Pro" line, because it simply has better double-precision performance; but then you don't get graphics/gaming performance that's as good (which mostly requires single precision).
On the other hand, the GeForce lines don't have any protections against data corruption like ECC memory, though ECC memory is also slower.
You don't play Crysis on a Tesla (it doesn't even have an output port) or on a
they have been doing it for nearly 20 years. (Score:1)
look, why believe NVIDIA now when they have been bullshitting about this for so long? for a long time they were selling the exact same chips, with an off-chip resistor deciding whether the card would accept the pro OpenGL drivers or not. just a flipped switch, nothing more. meaning, for a long time, that if you bought a Quadro you were a sucker. still are. so cut him some slack.
ATI has been doing the exact same thing, though, so there's that.
and you don't really need an output port on the card that's doing the accelerating
Re: (Score:2)
There are various differences; as I pointed out, the resistor hack is mostly an urban myth.
Yes, you can make a GeForce appear to be a Quadro or even a Tesla (and trick some proprietary software), but the GeForce still won't have the same double-precision performance, ECC memory, or thermal management; a Quadro will still be twice as fast as your mod and, more importantly, won't crash. You pay $4k for the card because the performance, stability, and memory increases over the GeForce are worth it, and yes, I have
Re:Born crippled (Score:5, Informative)
There does come a time later in the production cycle, though, when the production line becomes well tuned and yields more pro-level chips than there is demand for; in that case the vendor just sets the core count to what is required and ships to match demand.
No real ill intent here, just good business practice: you are paying for what is promised to you, and if you find a way to re-enable the extra hardware, so be it. This was done with many Quadro/GeForce cards in the past.
Re: (Score:3)
You are actually ascribing a lot of ill intent here, where they are just following a standard business practice among both GPU and CPU companies. The majority of the chips come from the same production line; chips that fail QA on a certain percentage of their CUDA cores are "binned down" to consumer-level chips. This allows them to recoup costs and provide an adequate supply of pro chips while keeping prices relatively low.
Well there's certainly that from the supply side, but they're hardly that innocent. Every company tries to create products that make sure the people who can afford it pick that product and not a cheaper one. The classic quote on this is Dupuit (1849):
It is not because of the few thousand francs which would have to be spent to put a roof over the third-class carriage or to upholster the third-class seats that some company or other has open carriages with wooden benches... What the company is trying to do is prevent the passengers who can pay the second-class fee from travelling third class; it hits the poor, not because it wants to hurt them, but to frighten the rich... And it is again for the same reason that the companies, having proved almost cruel to the third-class passengers and mean to the second-class ones, become lavish in dealing with first-class customers. Having refused the poor what is necessary, they give the rich what is superfluous.
This is how you choose not to include some feature, like Intel's missing consumer ECC support (which apparently AMD can afford to include, so clearly it's not that expensive), simply so the right people pick the "right" product. You can certainly claim som
Earth Simulator (Score:2)
Remember back when the Earth Simulator was new and exciting? That thing apparently pushed ~35 TFLOPS of compute performance. The card just announced pushes 15 TFLOPS. So, what you're saying is that roughly two of these new cards match the Earth Simulator's performance profile? (Different architecture entirely, of course, with less RAM, no storage, etc.)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:3)
When I got my first CUDA card, my Seti@home totals over 7 years doubled in two weeks. My next upgrade redoubled all that in three months.
This would redouble all that in days. I concluded there was little need to sweat working on it all along, because doing nothing all those years and then buying one of these, say, would only put you a week or two behind where you'd otherwise be
Re: (Score:2)
Well, one, the Earth Simulator came online in 2002.
Two, the comparison should use 7.5 TFLOPS, since the V100 does 7.5 TFLOPS at FP64, and the Top500 focuses exclusively on FP64 performance.
Three, we are comparing Rpeak to Rmax (and Rpeak is increasingly not a sensible metric).
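Redoing the grandparent's arithmetic under that correction works out roughly like this (a quick sketch; the Earth Simulator figure is its Top500 Rmax, and holding a single card's theoretical peak against a whole system's sustained Linpack run is already generous to the card):

```python
# How many V100s would it nominally take to match the Earth Simulator's
# Top500 result? ES Rmax was ~35.86 TFLOPS (FP64, sustained Linpack);
# the V100 numbers below are theoretical peaks, not sustained.
es_rmax_tflops = 35.86
v100_fp32_peak = 15.0   # the number the grandparent used
v100_fp64_peak = 7.5    # the number Top500 would actually care about

print(f"vs FP32 peak: {es_rmax_tflops / v100_fp32_peak:.1f} cards")  # ~2.4
print(f"vs FP64 peak: {es_rmax_tflops / v100_fp64_peak:.1f} cards")  # ~4.8
```

So on the apples-to-apples FP64 basis it is closer to five cards than two, before accounting for the peak-versus-sustained gap.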
Of course, all that said, it's still an impressive achievement. Their big headline number of 120 "tensor TFLOPS" is what they seem particularly focused on, though I have no sense of how impressed I should or shouldn't be, since I don't have a feel for tensor performance.
Re: (Score:1)
blurb porn (Score:2)
Long ago, the television spent many years instructing me that "lifts and separates" is the real cigar. Accept no substitutes. That's the key.
won't be affordable for non-corporate buyers (Score:2)
Unfortunately, these cards will probably cost so much that only corporations, aka "full citizens," will be able to afford them. Too bad. I'd love to play around with neural net stuff if the cards weren't more expensive than the regular graphics cards, but obviously that's not going to happen. How is it that these companies used to be able to make a profit at $299 for their high-end cards? Did their costs rise so dramatically?
Re: (Score:2)
Well, at this stage, it won't even physically fit into anything apart from server designs built specifically around this card.
Here there's an issue of volumes: while the enthusiast gaming market is small, the number of units moved of this sort of accelerator makes it look gigantic by comparison. It's interesting, since NVIDIA began coming to prominence when people started figuring out how to use off-the-shelf GPUs to accelerate HPC workloads, because the accelerator market couldn't deliver a viabl
In Laymans terms (Score:1)
Can we please stop calling these GPUs? (Score:3)
These are co-processors: basically entire second computers added alongside the primary one. GPU functions are only a minor part of their capabilities. It's like calling my mobile device a "phone" because it has one app called "Phone" which I use twice a year.
Does no one remember when installing a math co-processor in your PC was the new hotness? This is the same thing!
And now, here are the thousands of cores (Score:2)
My one and only submission to make it to the main page [slashdot.org]. 9 years.
Smoked by NVIDIA. Nobody wants Pentium cores anymore anyway.