AMD Radeon Fury and Fury X Specs Leaked, HBM-Powered Graphics On the Way
MojoKid writes: A fresh alleged leak of next-generation AMD Fiji graphics info has just hit the web, and there's an abundance of supposedly confirmed specifications for what will be AMD's most powerful graphics card to date. Fiji will initially be available in both Pro and XT variants, with the Fiji Pro dubbed "Fury" and the Fiji XT dubbed "Fury X." The garden-variety Fury touts single-precision floating point (SPFP) performance of 7.2 TFLOPS compared to 5.6 TFLOPS for a bone-stock Radeon R9 290X, a roughly 29-percent improvement. The Fury X, with its 4096 stream processors, 64 compute units, and 256 texture mapping units, manages to deliver 8.6 TFLOPS, or a 54-percent increase over a Radeon R9 290X. The star of the show, however, will be AMD's High Bandwidth Memory (HBM) interface. Unlike traditional GDDR5 memory, HBM is stacked vertically, decreasing the PCB footprint required. It's also integrated directly into the same package as the GPU/SoC, leading to further efficiencies, reduced latency, and a blistering 100GB/sec of bandwidth per stack (4 stacks per card). On average, HBM is said to deliver three times the performance-per-watt of GDDR5 memory. That said, the specs listed are by no means confirmed by AMD yet. We shall find out soon enough during AMD's E3 press conference scheduled for June 16.
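As a quick sanity check on the summary's arithmetic, here's a back-of-the-envelope sketch in Python using only the figures quoted above (none of which are confirmed by AMD):

    # Back-of-the-envelope check of the leaked figures quoted above.
    r9_290x_tflops = 5.6   # single-precision, bone-stock R9 290X
    fury_tflops = 7.2      # leaked Fiji Pro ("Fury")
    fury_x_tflops = 8.6    # leaked Fiji XT ("Fury X")

    print(f"Fury vs 290X: {100 * (fury_tflops / r9_290x_tflops - 1):.0f}% faster")      # ~29%
    print(f"Fury X vs 290X: {100 * (fury_x_tflops / r9_290x_tflops - 1):.0f}% faster")  # ~54%

    # Aggregate bandwidth as described: the quoted per-stack figure times 4 stacks.
    print(f"Total HBM bandwidth: {100 * 4} GB/s")  # 400 GB/s at 100 GB/s per stack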
Re: (Score:3)
This card has 400GB/sec throughput on memory. Not that far away, and that's just the first model limited to 4 stacks.
No idea where that imaginary goal of yours came from, though. They've always marketed this as around 100GB/sec per stack. And of course, like all such memory, it runs in parallel, so the more stacks on the die, the more bandwidth.
Re:100GB/sec? (Score:5, Interesting)
HBM1 gives 1GB and 128GB/s per stack, so 4GB and 512GB/s in this model with 4 stacks.
HBM2 will double both performance and capacity, and is expected some time next year.
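Spelling that out (a small sketch assuming the parent's HBM1 per-stack figures, and taking the "doubles both" claim for HBM2 at face value):

    # Per-stack HBM1 figures from the parent comment, scaled to a 4-stack card.
    stacks = 4
    hbm1_gb, hbm1_gbps = 1, 128  # capacity (GB) and bandwidth (GB/s) per stack
    print(stacks * hbm1_gb, "GB,", stacks * hbm1_gbps, "GB/s")          # HBM1: 4 GB, 512 GB/s
    # ASSUMPTION: HBM2 simply doubles both, as the parent claims.
    print(stacks * 2 * hbm1_gb, "GB,", stacks * 2 * hbm1_gbps, "GB/s")  # HBM2: 8 GB, 1024 GB/s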
Suck those watts (Score:2)
YAY for the new chip and memory. I just hope it isn't as power-hungry as the R9 series.
Re: (Score:1)
Yes, but at least the overall efficiency is similar to Nvidia's now.
Personally, I find the Fury too expensive and power-hungry, so I might get a new machine next year or in 2017, when Zen and midrange HBM graphics cards are available.
Re: (Score:1)
Per what's been said so far (grain of salt), the consumption is 10 watts more than the 290's. 60% more performance with 10 more watts is impressive and definitely more efficient than Nvidia is right now.
This might be the first time AMD has better performance, power and price in a long time.
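For what it's worth, here's the perf-per-watt math that claim implies (a sketch; the ~250 W board-power baseline is my assumption, not from the leak):

    # Rough perf-per-watt comparison implied by the parent comment.
    # ASSUMPTION: ~250 W board power for the R9 290; the comment only says "+10 W".
    base_watts, base_perf = 250.0, 1.00  # R9 290 as the baseline
    fury_watts, fury_perf = 260.0, 1.60  # +10 W, +60% performance (per the comment)

    gain = (fury_perf / fury_watts) / (base_perf / base_watts) - 1
    print(f"~{100 * gain:.0f}% better performance per watt")  # ~54%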
Re: (Score:3, Informative)
YAY for the new chip and memory.
BOO for the same buggy drivers, regardless of operating system.
Re: (Score:1)
AMD's suite is called Raptr and it's a steaming pile of shit.
You mean that 100% optional stuff that doesn't hinder the card's performance in any way if you simply don't install it?
Yes, let's bash AMD for your inability to recognize software that isn't required and completely ignore Nvidia's equivalent known as the "GeForce Experience"
Re: (Score:3)
Ehh... this is oft-repeated, but Nvidia has made some pretty shitty drivers in the past, too. I haven't had many problems with mine, and I'm running two 280Xs in CrossFire with 3 monitors, one of the more complex setups they offer.
Re: (Score:2)
YAY for the new chip and memory.
BOO for the same buggy drivers, regardless of operating system.
Actually, there'll be brand new Linux drivers [phoronix.com] for this card in Linux 4.2, which confine the binary blob to user-space. Whether or not they qualify as buggy remains to be seen though...
November 2001 (Score:3, Informative)
In November 2001, one of these Fury X cards would have beaten the world's top supercomputer on raw FLOPS.
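A quick check of that claim (a sketch assuming ASCI White's ~7.2 TFLOPS Linpack score, which topped the November 2001 TOP500; note that figure is double-precision while the Fury X number is single-precision, hence "raw FLOPS" only):

    # ASSUMPTION: ASCI White led the November 2001 TOP500 at ~7.2 TFLOPS (Linpack, FP64).
    # The Fury X figure is single-precision, so this is strictly a raw-FLOPS comparison.
    asci_white_tflops = 7.2
    fury_x_tflops = 8.6
    print(fury_x_tflops > asci_white_tflops)  # True: one card out-FLOPS the 2001 list leader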
7.2 FLOPS ? (Score:2, Funny)
"performance of 7.2 FLOPS compared to 5.6 TFLOPS"
I think I would stick with the faster card...
Re: (Score:2)
But that's working with 32-terabit floating point values.
Pro and XT variants? (Score:2)
Why not XT and XTR?
Re: (Score:2)
Isn't this general of all things in life?
When you can get your hands on it, it's worth researching if you want to buy it, not before.
When you research it, you need to find someone who's used it under conditions similar to your intended use (and here, I do NOT mean benchmarks themselves - you need to see it tested under similar usage, e.g. a particular game with certain settings, etc.).
When you go to buy it, you need to test it before your right to return it runs out.
Believing anything on a spec-sheet or in a review is just asking for disappointment.
Yeah, right... (Score:2)
The specs are "leaked".
AMD has been hyping the card for weeks already.
Re: (Score:2)
If the information is "leaked" by design... is it still leaked?!
Re: (Score:2)
It probably makes things easier to cool than before, since the memory now gets cooling it would not have previously gotten... and memory is really the most important part of the GPU.
Re: (Score:2)
There are some screenshots sitting around on /r/pcmasterrace showing the 980 Ti's RAM chips hitting 100C, so yeah, using HBM to deal with not only heat but transfer speed is going to make a huge difference.
Re:Stacked GPU / RAM = Reduced Heat Transfer Efficiency (Score:4, Informative)
HBM runs at a much lower clock speed than GDDR5 but compensates by using a very, VERY wide bus, so it probably runs a lot cooler.
Re: (Score:2)
And the reason why GDDR5 runs at a much higher clock rate is? You should know the answer: it's because of the distance from the GPU to the chips themselves. HBM is stacked right at the GPU, so clock speed isn't as much of an issue.
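The trade-off is easy to see in numbers. A sketch with typical figures (the 1024-bit/1 Gbps HBM numbers and the 512-bit/5 Gbps GDDR5 numbers are my assumptions, not from the leak):

    # Bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte.
    def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
        return bus_width_bits * pin_rate_gbps / 8

    # ASSUMED typical figures: an HBM1 stack is 1024 bits wide at ~1 Gbps per pin;
    # an R9 290X-style GDDR5 setup is 512 bits wide at ~5 Gbps per pin.
    print(bandwidth_gbs(1024, 1.0))  # 128.0 GB/s per HBM stack, despite the slow clock
    print(bandwidth_gbs(512, 5.0))   # 320.0 GB/s total for GDDR5, needing 5x the pin speed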
Re: (Score:2)
I think the key feature wasn't so much the reduction in heat as the reduction in the amount of power required to run them, which of course has the positive side effect of reducing heat...
Their release process; (Score:2, Funny)
As part of AMD's QA process they first submit their driver and sample hardware to the FSF and debian for rigorous testing.
Once approved it will be submitted to Linus for inclusion in the official kernel.
Only when all parties are satisfied that the product is stable and efficient will all 3 major OSS QA teams sign off on release of the product.
Oh wait, no, that isn't the plan at all.
Re: (Score:2)
Well, there was this: http://www.fsf.org/news/endors... [fsf.org]
But I was being facetious, so ...
Made up (Score:1)
Proofreading Fail (Score:2)
If you only need 7 operations per second, a discrete board seems overkill. Most CPUs can handle that easily.
Wait & see .. compared to 980 Ti ? (Score:2)
I just ordered a nVidia 980 Ti for my man dev box. While I would love to root for the underdog we need to be realistic and compare _actual_ silicon as opposed to theoretical paper specs of AMD hardware.
Should take your own advice (Score:2)
Hey, get a clue, don't start your comment in the subject line. That's not what it's for. If you're going to do that just asdfjkl;.
Wait & see .. compared to 980 Ti ?
I just ordered a nVidia 980 Ti for my man dev box. While I would love to root for the underdog we need to be realistic and compare _actual_ silicon as opposed to theoretical paper specs of AMD hardware.
If you were as clever as you think you are, you would have waited for the next AMD card to hit the streets... and the price of the nVidia cards to drop, let alone for benchmarks to happen.
Re: (Score:2)
I know I shouldn't feed the trolls ... but you just gotta love the clueless internet armchair critic -- a self-proclaimed 'expert' whining about a non-issue when they don't have all the facts, but I digress.
I need a CUDA GPU card **today** -- not in a month+ when they _might_ be available.
I'll be ordering an R290X + FX 8350 in the Fall anyway to have an AMD box for testing/dev, so the Fury will definitely be considered then.
Re: (Score:1)
Dude, if you needed a CUDA card, the 980ti is DOGSHIT compared to a K2 GRID.
Quit fronting and admit you don't know jack about hardware.
How so? (Score:2)
The limitation of the consumer nVidia cards is double precision floating point. He may not need that. There are plenty of problems that need only single precision math, where the extra precision is wasted. In that case, you don't see much benefit going to the pro cards, certainly not enough to justify the price.
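To put rough numbers on that (a sketch; the ~5.6 TFLOPS FP32 figure for the 980 Ti and the 1/32 FP64 ratio for consumer Maxwell are commonly cited approximations, not vendor-confirmed here):

    # ASSUMED figures: GTX 980 Ti peaks around ~5.6 TFLOPS FP32, and consumer
    # Maxwell runs FP64 at roughly 1/32 of the FP32 rate.
    fp32_tflops = 5.6
    fp64_tflops = fp32_tflops / 32
    print(f"FP64: ~{fp64_tflops:.2f} TFLOPS")  # ~0.18 TFLOPS double precision
    # Fine for single-precision workloads; for FP64-heavy work, that gap is
    # where the pro cards (or the original Titan) earn their price.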
Re: (Score:3)
Dude, if you needed a CUDA card, the 980ti is DOGSHIT compared to a K2 GRID.
Only in double precision.
Quit fronting and admit you don't know jack about hardware.
His hardware choice might make perfect sense, depending on his use case. Perhaps you should lighten up a little.
Re: (Score:2)
In fact, the Grid K2 has slow double precision as well, and fewer compute features because it's older.
Re: (Score:2)
> the 980ti is DOGSHIT compared to a K2 GRID.
So you're offering to pay for that? Sweet!
Like others said, I already have a Titan (the original) for when I _need_ FP64 performance. The FP32 performance and 6 GB of VRAM of the 980 Ti are perfectly fine.
Re: (Score:1)
Hey, get a clue, don't start your comment in the subject line. That's not what it's for. If you're going to do that just asdfjkl;.
We'd all be much better off if you would just limit your posts to whining about starting comments in the subject line. Then we wouldn't have to endure your attempts at writing a relevant post.
Re: (Score:2)
I just ordered a nVidia 980 Ti for my man dev box.
Is that something you keep in your man cave?
Re: (Score:2)
Actually yup. :-) The wife got me this awesome gift:
http://www.amazon.com/CODER-MA... [amazon.com]
How long? (Score:2)
This sounds like a great competitor to nVidia's Maxwell architecture, but is there an estimate of when AMD is going to release low-end/low-cost Fury-based cards to compete with the GTX 750, which AFAIK is the best performance-per-watt card at the moment? I haven't followed GPU news for over a year though, so maybe there's already something better in the $100-150 range?
Firepro version could be killer card (Score:1)
So, it's slightly slower than the Titan X in games, but the Maxwell architecture suffers from poor double precision performance, and that could be where AMD makes its money. The FirePro version of this card would smoke anything Nvidia has to offer.
Cool for ShaderToy (Score:2)
Cool, finally a card that can run practically all entries in ShaderToy [shadertoy.com] in real-time :-)
Re: (Score:2)
Did you try Eye of Sauron or Dolphin at 4K resolution? I doubt you reach 60FPS on those, but if you do, I have to check my drivers ;-)