AMD's Latest Server Compute GPU Packs In 32GB of Memory

Deathspawner writes: Following up on the release of 12GB and 16GB FirePro compute cards last fall, AMD has just announced a brand-new top end: the 32GB FirePro S9170. Targeted at DGEMM computation, the S9170 sets a new record for GPU memory on a single card, and does so without a dual-GPU design. Architecturally, the S9170 is similar to the S9150, but is clocked a bit faster, and is set to cost about the same as well, between $3,000 and $4,000. While AMD's recent desktop Radeon launch might have left a bit to be desired, the company has proven with the S9170 that it's still able to push boundaries.
  • Finally (Score:3, Funny)

    by blackt0wer ( 2714221 ) on Thursday July 09, 2015 @10:38AM (#50075635)
    This will cut my rendering time of Hentai down dramatically.
    • by Anonymous Coward

      Starting at about the time that the "read the X comments" link was removed, I noticed a big drop in the number and quality of comments and moderation. Am I the only one?

      Also, what happened to polls? I've only seen one since they removed the sidebar. Have those been killed off?

      • Slashdot has decided that this website shall be used as a marketing tool to further its own existence rather than a site that is "News for Nerds" as originally intended. Therefore, it's a tragic consequence that some users will feel it's not worth writing insightful messages 100% of the time. Besides, there are only a few production houses where these cards will really be able to be used, and they've already got their orders placed, so for the rest of us, these cards are merely measuring sticks by which
      • Re: (Score:1, Informative)

        by Anonymous Coward
        The quality went down some time ago. All the dumb conspiracy nuts and the women-hating cavemen made this a place that smarter people make fun of rather than participate in.
      • Rather than improving the site and taking on refugees from Reddit, they decided to double down on terrible design and appeal to the worst of them.

        It's like Italy deciding to turn away all the moderates and only deciding to accept

        Meanwhile those of us that have been around Slashdot since 2000 have pretty much thrown our hands up.

        I wish I had enough knowledge and free time to rewrite INN with moderation. Usenet 2.0. It seems that the Eternal September has hit the web.

        • by LWATCDR ( 28044 )

          "Meanwhile those of us that have been around Slashdot since 2000 have pretty much thrown our hands up."
          Imagine how I feel.
          I have to agree with you. I really want a new old Slashdot, but I doubt that the changes I would make would make everyone happy.

  • by Anonymous Coward

    1) You're old... We get it!
    2) 2013 called, they want their fad back
    3) "Render" Hentai? Unless you count encoding DVD rips in some lossy codec to be "rendering": it really won't. Do you have some fucked up 3D animation hentai running on OpenGL or DirectX?

    • Do you have some fucked up 3D animation hentai running on OpenGL or DirectX?

      In a world where we have VR headsets there's a possibility their answer is yes.

  • Excuse me. For what are these used?
    • by faway ( 4112407 )
      I expected Hollywood to be the main user, but I don't know for sure. Really? Hentai makers. By the way, AFAIK all hentai sucks unless you have a counterexample.
    • by AqD ( 1885732 )

      These are just a huge waste of money. You can use the same money to buy several gaming cards - each of them is way faster than a workstation card for many types of operations. Also, since GPU computation is highly parallel, there's no point in having one super card instead of many cheap cards combined.

      • by chuckymonkey ( 1059244 ) <charles@d@burton.gmail@com> on Thursday July 09, 2015 @11:58AM (#50076095) Journal
        Not necessarily; in scientific computing, cards like this are important. The biggest problem with GPU computing in general is the time it takes to copy from main memory to GPU memory and back. It makes GPUs difficult to work with, and the gains from parallelization often don't pay off once you account for those memory copies. Being able to load more into memory and have it stay there is a big deal, as in the sketch below.
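        A minimal sketch of that trade-off in CUDA (hypothetical kernel and sizes, with NVIDIA's API chosen only for familiarity; AMD's HIP mirrors these calls nearly one-for-one): the copies at either end cross the PCIe bus, so the win comes from keeping the data resident on the card across many kernel launches.

        // Hypothetical sketch: the two cudaMemcpy calls are the slow part;
        // the 100 kernel launches in between reuse data already resident
        // in GPU memory, which is where a large framebuffer pays off.
        #include <cuda_runtime.h>
        #include <stdlib.h>

        __global__ void scale(double *x, double a, size_t n) {
            size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) x[i] *= a;
        }

        int main(void) {
            size_t n = 1ULL << 26;                // 64M doubles = 512 MB
            size_t bytes = n * sizeof(double);
            double *h = (double *)calloc(n, sizeof(double));
            double *d;
            cudaMalloc(&d, bytes);

            cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);  // slow: PCIe
            for (int step = 0; step < 100; ++step)            // fast: on-card
                scale<<<(unsigned)((n + 255) / 256), 256>>>(d, 1.0001, n);
            cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);  // slow: PCIe

            cudaFree(d);
            free(h);
            return 0;
        }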
      • by Anonymous Coward

        each of them is way faster than a workstation card for many types of operations

        Many, but not all. There are a wide variety of computational uses for GPUs these days. Some are processing-speed bound, some are bandwidth bound, others are memory bound. Adding a bunch of cards isn't going to help with computations that require large intermediate buffers with a lot of cross-dependence, as you'll just be stuck waiting for the same memory to be moved back and forth between cards, assuming it can even fit in them.

      • You don't deal with large data or make textures in, say, Mari.
    • Virtual desktops, scientific computing, big render farms. There are all kinds of uses; they're not really for home users, though. I mean, you could buy one, but you're unlikely to be able to use it effectively. This is targeted at a market and computation scale that's much, much larger than most people work with. People who buy these don't buy one at a time; they buy them by the dozens.
      • by Anonymous Coward

        I don't believe the Hawaii chip used for this card has ECC protection for any of its shader array SRAMs. That may significantly reduce the usefulness of this product for high-volume installations (MTBF will likely be unacceptably low). Well, maybe one could make do, but computations would have to be double-checked or otherwise qualified. A 32GB framebuffer is nifty, for sure, but definitely a niche product for the foreseeable future.

    • by Anonymous Coward

      Scientific applications that need to process what we in the business call "An unbelievable metric fuckton of linear algebra."

      A (now depressingly standard) billion-cell 3D CFD simulation = solving a minimum of 5 billion simultaneous nonlinear equations per timestep = solving probably some dozens of 5-billion-variable linear algebra problems per timestep since implicit methods are all the rage.
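      Rough arithmetic on why the memory size matters there: 5 billion double-precision unknowns is 5e9 x 8 bytes = 40 GB for a single solution vector, before counting the sparse matrix or any Krylov basis vectors, so even a 32GB card holds only a slice of such a problem.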

  • So how fast does it run SETI@home?
  • by Wargames ( 91725 ) on Thursday July 09, 2015 @11:08AM (#50075789) Journal
    DGEMM:

    D = Double precision (as opposed to S = Single precision, C = Complex single precision, or Z = Complex double precision)

    GE = GEneral (as opposed to, for example, HE = HErmitian)

    M = Matrix

    M = Multiplication
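    For concreteness, a minimal sketch of a DGEMM call on a GPU, here through cuBLAS (illustrative sizes, and the library choice is mine; the routine's semantics, C = alpha*A*B + beta*C in double precision on column-major matrices, are the same in any BLAS):

    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    int main(void) {
        const int n = 1024;                   // square matrices for brevity
        const double alpha = 1.0, beta = 0.0;
        double *A, *B, *C;                    // device buffers, column-major
        cudaMalloc(&A, n * n * sizeof(double));
        cudaMalloc(&B, n * n * sizeof(double));
        cudaMalloc(&C, n * n * sizeof(double));
        // (buffers left uninitialized; a real program would fill A and B first)

        cublasHandle_t handle;
        cublasCreate(&handle);
        // D = double, GE = general, MM = matrix-matrix multiply
        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, A, n, B, n, &beta, C, n);
        cublasDestroy(handle);
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }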

  • An APU with even 16GB of integrated memory would be news, let alone 32. A video card with 32GB of memory is not. A GPU with 32GB of integrated memory would be news, but that is not what this is. This is a video card with 32GB of memory.

    Of course, AMD's own press release gets this wrong, but that's no excuse. It just means this is C&P bullshit.

    • by Anonymous Coward

      Putting 32GB of RAM on a card with a 512-bit memory bus is impressive (that's a lot of traces!) regardless of the fact that it isn't an APU. It is interesting to see this dual direction from AMD, however. While they're releasing HBM and the associated "Fury" products, the hard 4GB limit is obviously impacting competition and makes the newly released product non-competitive. It does seem like once they can improve the memory limit and get the cost down, that type of product would be a great addition to an APU.

    • I'm confused. How would a GPU with 32GB of "integrated memory" be news, while a GPU with 32GB of [non-integrated*] memory is not? I'm not sure what you mean by "integrated memory." This is not the graphics half of an APU; it is a discrete card, and nowhere do the summary or the press release state otherwise. The term "compute GPU" just means it's targeted at computing workloads, not graphics workloads.

      What exactly is your complaint?

      * Not even sure what this means, but you seem to be contrasting

      • My suspicion is the GP meant "on-die," not "integrated." Still a silly post, though; for one thing, as you pointed out, I don't think they know what a compute GPU is.
    • by wbr1 ( 2538558 )
      This is a discrete card with one GPU and non-integrated memory. Not an APU. This is designed for rendering and other parallel compute tasks. Go google: do you see any single-GPU card that addresses this much RAM? I do not.
  • Is it still a "Graphics Processing Unit" [wikipedia.org] if it does not even offer any way to connect a display to it?

    Well, maybe it just means "Ginormous" now...

    • by MrDoh! ( 71235 )
      If I can throw this in a machine with 2 SLI'd Titan X's for another 5 frames per second, it'll still sell.
      • by mi ( 197448 )
        Point is, this hardware has nothing to do with graphics — it is put into servers to aid specialized computations using chips originally developed for graphics. But they aren't used for graphics — not in this application. Thus "GPU" is a misnomer.
  • The real question here is: when did "compute" go from being a verb to an adjective?
