AMD Graphics Hardware

AMD Radeon Fury and Fury X Specs Leaked, HBM-Powered Graphics On the Way

MojoKid writes: A fresh alleged leak of AMD's upcoming Fiji graphics info has just hit the web, and there's an abundance of supposedly confirmed specifications for what will be AMD's most powerful graphics card to date. Fiji will initially be available in both Pro and XT variants, with the Fiji Pro dubbed "Fury" and the Fiji XT dubbed "Fury X." The garden-variety Fury touts single-precision floating point (SPFP) performance of 7.2 TFLOPS compared to 5.6 TFLOPS for a bone-stock Radeon R9 290X. That's roughly a 29-percent performance improvement. The Fury X, with its 4096 stream processors, 64 compute units, and 256 texture mapping units, manages to deliver 8.6 TFLOPS, a 54-percent increase over the Radeon R9 290X. The star of the show, however, will be AMD's High Bandwidth Memory (HBM) interface. Unlike traditional GDDR5 memory, HBM is stacked vertically, decreasing the PCB footprint required. It's also integrated directly into the same package as the GPU/SoC, leading to further efficiencies, reduced latency, and a blistering 100 GB/sec of bandwidth per stack (4 stacks per card). On average, HBM is said to deliver three times the performance-per-watt of GDDR5 memory. That said, the specs listed are by no means confirmed by AMD yet. We shall find out soon enough at AMD's E3 press conference, scheduled for June 16.
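For readers who want to sanity-check the quoted percentages, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures from the (unconfirmed) leak; the variable names and the 400 GB/s aggregate are simple arithmetic on those numbers, not anything AMD has stated.

    # Figures as quoted in the leak (unconfirmed).
    R9_290X_TFLOPS = 5.6      # stock Radeon R9 290X, single precision
    FURY_TFLOPS = 7.2         # alleged Fiji Pro ("Fury")
    FURY_X_TFLOPS = 8.6       # alleged Fiji XT ("Fury X")

    HBM_GBPS_PER_STACK = 100  # claimed bandwidth per HBM stack
    HBM_STACKS = 4            # stacks per card

    def uplift(new, base):
        """Percentage improvement of new over base."""
        return (new / base - 1) * 100

    print(f"Fury vs 290X:   {uplift(FURY_TFLOPS, R9_290X_TFLOPS):.0f}% faster")    # ~29%
    print(f"Fury X vs 290X: {uplift(FURY_X_TFLOPS, R9_290X_TFLOPS):.0f}% faster")  # ~54%
    print(f"Aggregate HBM bandwidth: {HBM_GBPS_PER_STACK * HBM_STACKS} GB/s")      # 400 GB/s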
  • YAY for the new chip and memory. I just hope it isn't as power-hungry as the R9 series.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      YAY for the new chip and memory.

      BOO for the same buggy drivers, regardless of operating system.

      • Ehh... this is oft-repeated, but Nvidia has made some pretty shitty drivers in the past, too. I haven't had many problems with mine, and I'm running two 280Xs in CrossFire with 3 monitors, which is one of the more complex setups they offer.

      • by rdnetto ( 955205 )

        YAY for the new chip and memory.

        BOO for the same buggy drivers, regardless of operating system.

        Actually, there'll be brand-new Linux drivers [phoronix.com] for this card in Linux 4.2, which confine the binary blob to user space. Whether or not they qualify as buggy remains to be seen, though...

  • November 2001 (Score:3, Informative)

    by Anonymous Coward on Thursday June 11, 2015 @05:17AM (#49889645)

    In November 2001, one of these Fury X cards would have beaten the world's top supercomputer on raw FLOPS.
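    For context on the parent's claim: the top system on the November 2001 TOP500 list was IBM's ASCI White, at roughly 7.2 TFLOPS Linpack, so the leaked 8.6 TFLOPS figure would indeed edge past it. A tiny Python sketch of the comparison follows; note that it pits single-precision GPU peak against a double-precision Linpack score, so it is not an apples-to-apples measure.

        # Rough numbers behind the comparison (Linpack Rmax is approximate).
        ASCI_WHITE_TFLOPS = 7.2  # Nov 2001 TOP500 #1 (IBM ASCI White), double precision
        FURY_X_TFLOPS = 8.6      # leaked Fury X single-precision peak

        print(f"Fury X / ASCI White: {FURY_X_TFLOPS / ASCI_WHITE_TFLOPS:.2f}x")  # ~1.19x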

  • 7.2 FLOPS ? (Score:2, Funny)

    by Anonymous Coward

    "performance of 7.2 FLOPS compared to 5.6 TFLOPS"

    I think I would stick with the faster card...

  • Why not XT and XTR?

  • The specs are "leaked".

    AMD has been hyping the card for weeks already.

  • As part of AMD's QA process, they first submit their driver and sample hardware to the FSF and Debian for rigorous testing.
    Once approved, it will be submitted to Linus for inclusion in the official kernel.
    Only when all parties are satisfied that the product is stable and efficient will all three major OSS projects sign off on its release.

    Oh wait, no, that isn't the plan at all.

  • Made-up specs posted days ago and rehashed without even attributing the original made-up source. News?
  • The garden variety Fury touts single-precision floating point (SPFP) performance of 7.2 FLOPS compared to 5.6 TFLOPS for a bone stock Radeon R9 290X.

    If you only need 7 operations per second, a discrete board seems like overkill. Most CPUs can handle that easily.

  • I just ordered a nVidia 980 Ti for my man dev box. While I would love to root for the underdog we need to be realistic and compare _actual_ silicon as opposed to theoretical paper specs of AMD hardware.

    • Hey, get a clue, don't start your comment in the subject line. That's not what it's for. If you're going to do that just asdfjkl;.

      Wait & see .. compared to 980 Ti ?
      I just ordered a nVidia 980 Ti for my man dev box. While I would love to root for the underdog we need to be realistic and compare _actual_ silicon as opposed to theoretical paper specs of AMD hardware.

      If you were as clever as you think you are, you would have waited for the next AMD card to hit the streets... and the price of the nVidia cards to drop, let alone for benchmarks to happen.

      • I know I shouldn't feed the trolls ... but you just gotta love the clueless internet armchair critic -- a self-proclaimed 'expert' whining about a non-issue when they don't have all the facts. But I digress.

        I need a CUDA GPU card **today** -- not in a month+ when they _might_ be available.

        I'll be ordering an R9 290X + FX 8350 in the Fall anyway to have an AMD box for testing/dev, so Fury will definitely be considered then.

        • by Khyber ( 864651 )

          Dude, if you needed a CUDA card, the 980ti is DOGSHIT compared to a K2 GRID.

          Quit fronting and admit you don't know jack about hardware.

          • The limitation of the consumer nVidia cards is double-precision floating point. He may not need that. There are plenty of problems that need only single-precision math; the extra precision is wasted. In that case, you don't see much benefit going to the pro cards, certainly not enough to justify the price.

          • Dude, if you needed a CUDA card, the 980ti is DOGSHIT compared to a K2 GRID.

            Only in double precision.

            Quit fronting and admit you don't know jack about hardware.

            His hardware choice might make perfect sense, depending on his use case. Perhaps you should lighten up a little.

            • In fact, the Grid K2 has slow double precision as well, and fewer compute features because it's older.

          • > the 980ti is DOGSHIT compared to a K2 GRID.

            So you're offering to pay for that? Sweet!

            Like others said, I already have a Titan (the original) for when I _need_ FP64 performance. The FP32 performance and 4 GB of VRAM of the 980 Ti are perfectly fine.

      • by Anonymous Coward

        Hey, get a clue, don't start your comment in the subject line. That's not what it's for. If you're going to do that just asdfjkl;.

        We'd all be much better off if you would just limit your posts to whining about starting comments in the subject line. Then we wouldn't have to endure your attempts at writing a relevant post.

    • I just ordered a nVidia 980 Ti for my man dev box.

      Is that something you keep in your man cave?

  • How many MH/s will this new card mine for Scrypt and other hash-based cryptocurrencies?
  • This sounds like a great competitor to nVidia's Maxwell architecture, but is there any estimate of when AMD will release low-end/low-cost Fury-based cards to compete with the GTX 750, which AFAIK is the best performance-per-watt card at the moment? I haven't followed GPU news for over a year though, so maybe there's already something better in the $100-150 range?

  • So it's slightly slower than the Titan X in games, but the Maxwell architecture suffers from poor double-precision performance, and that could be where AMD makes its money. The FirePro version of this card would smoke anything Nvidia has to offer.

  • Cool, finally a card that can run practically all entries in ShaderToy [shadertoy.com] in real-time :-)
