
AMD Unveils Vega GPU Architecture With 512 Terabytes of Memory Address Space (hothardware.com)

MojoKid writes: AMD lifted the veil on its next-generation GPU architecture, codenamed Vega, this morning. One of the driving forces behind Vega's design is that conventional GPU architectures have not been scaling well for diverse data types. Gaming and graphics workloads have shown steady progress, but today's GPUs are used for much more than just graphics. In addition, while the compute capability of GPUs has been increasing at a good pace, memory capacity has not kept up. Vega, however, aims to improve both compute performance and addressable memory capacity through new technologies not available in any previous-gen architecture. First, Vega has the most scalable GPU memory architecture built to date, with 512TB of address space. It also has a new geometry pipeline tuned for more performance and better efficiency, with over 2X peak throughput per clock, a new Compute Unit design, and a revamped pixel engine. The pixel engine features a new draw-stream binning rasterizer (DSBR), which reportedly improves performance and saves power. All told, Vega should offer significant improvements in performance and efficiency when products based on the architecture begin shipping in a few months.
  • Most high-end GPU cards available have 8GB, a large number of budget versions settle for 4GB, and only a few offer 16GB. Marketing this as a standout point is iffy.

    • Most high-end GPU cards available have 8GB, a large number of budget versions settle for 4GB, and only a few offer 16GB. Marketing this as a standout point is iffy.

      What you will find is that most cards have only a fraction of their RAM addressable, so a 16GB card has either 4 or 8 gigs addressable. The increase to 512GB is a godsend to AI researchers and other fields with large datasets.

      • 512GB GPUs? Doubt you'll see those anytime soon. The very largest memory on any commercial GPU is AMD's own FirePro W9100, at 32GB. It's more of a cost issue than a limitation on addressable space; 64GB is right around the corner though.

        • by Anonymous Coward

          Yeah, that's 32 gigs in banks of 4, bank-switching.

          Even if you don't use it all, just the switching time and overhead saved is quite a bit in serious applications (and maybe even a few fps in gamez!)

        • Re: (Score:3, Informative)

          by Anonymous Coward
          They're actually using NVMe drives as the extra "memory". This works out well for huge datasets, where you'd otherwise take a performance hit streaming from the host. Load up the data on one of those 4GiB/s NVMe SSDs. They already have a product out that does this, and it makes certain workloads much faster. Just waiting to see an 8x PCIe 4.0 NVMe XPoint SSD. It will be wicked fast for what they use it for.
        • by Anonymous Coward

          There is already technology available to feed this monster. Things like the EMC DSSD can have 1/2 PB of NVMe flash connected via a PCIe bridge, and presented as a single shared memory-mapped space to an entire rack of servers. I assume that is the use case for these cards, mostly in the supercomputing space.

      • by dfghjk ( 711126 )

        "The increase to 512GB is a godsend to AI researchers and other fields with large datasets."

        References?

        While there's no doubt that there is SOME application that could use that amount of physical addressability, it would seem extraordinarily unlikely that a single GPU would be sufficient for such an application and, even so, it's absurd to refer to such a niche as a "godsend". Meaningless hyperbole, most likely without any supporting insight.

        What you will find is that most cards have only a fraction of their RAM addressable, so a 16GB card has either 4 or 8 gigs addressable. The increase to 512GB is a godsend to AI researchers and other fields with large datasets.

        Nope.

        1: The GPU addresses the whole damn pool.

        2: We're talking about 512 TB, not GB.

        3: They're not planning to release a card with 512 TB of RAM, but they are releasing professional cards with lots of RAM (8 GB, 16 GB, or more) AND onboard connections for flash storage (SSDs). Vega will likely continue and extend this. By having a huge address space, you simply have the ability to keep the entire dataset in your cache on the card. The memory controller then decides what needs to live in the fast HBM2.
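In software terms, what the parent describes is just a cache: the whole dataset lives in a big, slow pool and a controller keeps the hot pages in a small, fast one. Here is a toy sketch of that tiering decision in C; the page size, pool sizes, and direct-mapped eviction policy are invented for illustration, not AMD's actual HBCC design:

```c
#include <stdio.h>
#include <string.h>

/* Toy model: a tiny fast "HBM" pool fronting a big slow "flash" pool.
 * All sizes and the direct-mapped policy are illustrative assumptions. */
#define PAGE_SIZE   4096
#define HBM_PAGES   4
#define FLASH_PAGES 64

static char flash[FLASH_PAGES][PAGE_SIZE]; /* big, slow backing store  */
static char hbm[HBM_PAGES][PAGE_SIZE];     /* small, fast cache        */
static int  resident[HBM_PAGES];           /* flash page held per slot */
static int  valid[HBM_PAGES];

/* Return a pointer into fast memory for the given page, filling the
 * cache slot on a miss (direct-mapped for brevity). */
static char *access_page(int page)
{
    int slot = page % HBM_PAGES;
    if (!valid[slot] || resident[slot] != page) {
        if (valid[slot]) /* write the evicted page back to slow storage */
            memcpy(flash[resident[slot]], hbm[slot], PAGE_SIZE);
        memcpy(hbm[slot], flash[page], PAGE_SIZE); /* fetch on demand */
        resident[slot] = page;
        valid[slot] = 1;
        printf("miss: page %d -> slot %d\n", page, slot);
    }
    return hbm[slot];
}

int main(void)
{
    access_page(3)[0] = 'x'; /* miss: fetched into slot 3 */
    access_page(3);          /* hit: no traffic           */
    access_page(7);          /* miss: evicts page 3       */
    return 0;
}
```

The calling code addresses pages by a single flat index; whether a page happens to be in the fast pool is the controller's problem, which is the whole pitch of the HBCC.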

        • You just spend two hours loading everything into the GPU's memory. Then you start managing it, updating what parts of it change, etc.

          That PCIe bandwidth you mentioned? It's pretty scanty when you're shuttling 512 TB of data through it.

          • You don't understand.

            The architecture can address that much, but the actual product will only address what's available.
            There will be on-package HBM2 and the ability to connect to on-board (but off-package) storage in the form of fast flash.

            512 TB of addressable space is just future-proofing to allow for seamless work with a dataset regardless of whether it's in the 16 GB of ball-smackingly fast HBM2, on the SSD on your RadeonPro card, or in your system memory (or potentially even abstracted out to disk storage).

        • Or maybe custom supercomputers, à la HP's "The Machine", with gobs and gobs of non-volatile system memory (persistent RAM), or some other configuration where the Vega chip can access a lot of other memory over a dedicated fast fabric.
    • by Anonymous Coward on Thursday January 05, 2017 @07:28PM (#53614263)

      Lisandro, you COULD RTFA, you know? It's even an effing meme around here.

      The HBCC gives the GPU access to 512TB (half a petabyte) of virtual address space and gives the GPU fine-grained control, for adaptable and programmable data movement. Often, more memory is allocated for a particular workload than is necessary; the HBCC will allow the GPU to better manage disparities like this for more efficient use of memory. The huge address space will also allow the GPU to better handle datasets that exceed the size of the GPU’s local cache. AMD showed a dataset being rendered in real-time on Vega using its ProRender technology, consisting of hundreds of gigabytes of data. Each frame with this dataset takes hours to render on a CPU, but Vega handled it in real-time.
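The point worth underlining in that quote is that address space is a promise, not physical memory; any 64-bit OS pulls the same trick. A minimal sketch in C, assuming a 64-bit Linux host (the 1 TiB reservation is just an illustrative number):

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Reserve 1 TiB of virtual address space. MAP_NORESERVE asks the
     * kernel not to back it with RAM/swap up front; a page only becomes
     * real when touched. A GPU exposing a 512 TB virtual range backed
     * by a few GB of HBM2 is the same idea in hardware. */
    size_t len = 1ULL << 40; /* 1 TiB */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 1;       /* only these two touched pages */
    p[len - 1] = 1; /* actually consume memory      */

    printf("reserved %zu bytes at %p, touched 2 pages\n", len, (void *)p);
    munmap(p, len);
    return 0;
}
```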

    • Except for AMD cards with massive storage for (semi)fixed datasets... Kind of the reason for this, in fact.
      • by Lisandro ( 799651 ) on Thursday January 05, 2017 @08:12PM (#53614437)

        But this is not new at all. IIRC, Nvidia's CUDA 5 already gives you 49 bits of unified address space. I don't really know the addressing limitations of previous AMD architectures, but I doubt they were substantially lower.

        Realistically, a large address space you can only practically fill to 0.05-0.1% means little for performance. I don't mean this as an attack on AMD, who usually make really good GPU hardware, but this sounds like a marketing gimmick and nothing more. I particularly enjoyed the "hours to real-time" comparison... against a CPU.
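For comparison, the unified addressing the parent mentions looks like this from the host side: one pointer, valid to both CPU and GPU code, with the driver migrating pages on demand. A minimal host-only sketch in C against the CUDA runtime API (the calls are the real API; the buffer size is illustrative, and it needs a CUDA-capable machine, linked with -lcudart):

```c
#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void)
{
    /* One allocation in the unified virtual address space: the same
     * pointer works on the host and the device. */
    size_t n = 1 << 20;
    unsigned char *p;
    if (cudaMallocManaged((void **)&p, n, cudaMemAttachGlobal) != cudaSuccess) {
        fprintf(stderr, "cudaMallocManaged failed\n");
        return 1;
    }

    cudaMemset(p, 0xAB, n);  /* executed by the GPU          */
    cudaDeviceSynchronize(); /* wait before touching on host */

    printf("host reads back: 0x%02X\n", p[0]); /* same pointer, CPU side */
    cudaFree(p);
    return 0;
}
```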

        • Address space is nice; physical cells are even better. Unlike Nvidia's CUDA, APUs at least give you an *actual* unified address space, including memory protection, and SSG Radeons push the cells closer to the chip, on the GPU's side of the PCIe bus.
          • Oh, it is nice, don't get me wrong :) I'm just saying that promoting this as a game-changer is insane.

            • It can be, once you get used to the convenient programming model. The same thing happened a quarter century ago with the segmented memory model.
    • 2^64 is about 1.8x10^19. 1TiB is 2^40 bytes. So any 64-bit addressing scheme should cover at least 2^63 bytes, or 8,388,608TiB, assuming 1 address bit is sacrificed for the firmware or whatever else is needed.

      So 512TiB is nothing compared to what a flat 63-bit address space is capable of. Also, supporting it on the address bus ain't difficult, given that there have been both data-address multiplexed lines as well as address-multiplexed lines.
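For anyone checking the arithmetic at home, a few lines of C print the relevant powers of two:

```c
#include <stdio.h>

int main(void)
{
    unsigned long long tib  = 1ULL << 40; /* 1 TiB in bytes           */
    unsigned long long vega = 1ULL << 49; /* Vega's 512 TiB = 49 bits */
    unsigned long long flat = 1ULL << 63; /* a 63-bit flat space      */

    printf("Vega: %llu TiB\n", vega / tib); /* 512       */
    printf("flat: %llu TiB\n", flat / tib); /* 8,388,608 */
    return 0;
}
```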

  • by ITRambo ( 1467509 ) on Thursday January 05, 2017 @07:23PM (#53614243)
    With Ryzen coming out soon and a new GPU design that looks very advanced, AMD is set to make substantial progress in market share, as long as they don't screw up. I'm rooting for them. I switched all of our shop's new PCs to Intel when they released their 6th-gen Core series, as AMD was just too far behind. The consumer PCs were all AMD for the past five years or so. I wanna go back to AMD, as long as the new stuff performs. Don't let us down, AMD!
    • Re: (Score:3, Informative)

      by fisted ( 2295862 )

      But 2017 is already the year of the Linux desktop...

  • News for nerds? (Score:5, Informative)

    by Orgasmatron ( 8103 ) on Thursday January 05, 2017 @07:30PM (#53614267)

    The "news for nerds" version of this story's headline is "AMD Unveils Vega GPU Architecture With 49 bits of Memory Address Space"

    • by Luthair ( 847766 )
      What we really need to know is if they tested a Beowulf cluster of them.
    • Re: (Score:3, Insightful)

      by Kjella ( 173770 )

      The "news for cynics" version of this story's headline is "AMD unveils yet another set of powerpoints". Where is (Ry)Zen? Where is Vega? Every month is another month Intel and nVidia rule unchallenged on the high end. We need actual product on the shelf, not more tech demos. And I bet so does AMDs financials, you have to actually hard launch it before you get any revenue. I'm a bit hyped out, now it's more like hoping for a miracle.

  • Is that how the saying goes? I am not sure :p

  • The early 2000s was the last time I had an Nvidia GeForce video card (256MB) with more RAM than my PC (192MB).
  • What's the point? By the time we hit that amount of memory on a GPU, we're looking at this architecture being entirely obsolete.

    Should've just said "We're slapping 1TB on this bitch!" and been done with it. No point in fussing about the scalability of the architecture when we're likely never going to see it hit full potential until long after it's deprecated. (AGP slot, anyone? When PCI-E cards came out, we'd barely even thought of saturating a 4X AGP slot.)

    • If you have a small address space then you need to write code that manually pages / caches the working set for an algorithm from storage. If you have a large address space then you use an interface similar to mmap and address the large dataset directly. It makes the code easier to write, and means that the paging / caching can be handled in hardware, where there are opportunities to speed it up.
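A sketch of the difference in C, on a 64-bit POSIX host; dataset.bin is a placeholder name for whatever large file holds the working set. Note there is no explicit read/seek/cache logic anywhere: touching a byte faults the page in, and the kernel evicts cold pages behind your back.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("dataset.bin", O_RDONLY); /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file into the address space in one call. */
    const unsigned char *data =
        mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Address the dataset directly; sample one byte per 4 KiB page. */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sum += data[i];
    printf("sampled sum: %lu\n", sum);

    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}
```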

      • Keep in mind large address spaces were here long before Vega. Hell, AMD's own "Graphics Core Next" architecture already supports flat 64-bit addressing, and that's been out since 2011.

    • Comment removed based on user account deletion
    • AGP? Try VESA Local Bus [wikipedia.org] when talking about a limited-use bus that was deprecated quickly. Add in that VLB cards were a real bitch to get into the slots, or at least that is my most vivid memory of them.
  • by zifn4b ( 1040588 ) on Friday January 06, 2017 @07:35AM (#53616079)

    Someone check me on my logic here. The way I read this article, AMD has created a new architecture with a memory controller that can address 512TB of memory address space. That's great and all, but are we going to see cards with 512TB of GDDR on them any time in the near future? Not likely. How many years away are we? Who knows. It seems to me this is highly theoretical, possibly to put pressure on the memory industry to innovate on even denser memory and push graphics even farther to the limit. It could also be to get some investor interest in the next "big thing".

    Side question: how did AMD validate that their architecture works without actually being able to fabricate a board? Simulation?

      Side question: how did AMD validate that their architecture works without actually being able to fabricate a board? Simulation?

      You don't need to actually hook up memory to see if a memory bus works correctly. I used to test addressing on 8-bit CPUs using a Tektronix logic analyzer back in college.

  • Given the relative sizes of CPUs and GPUs, it makes sense that an "APU" will be a GPU with a bundled CPU, rather than the other way round. Having a large address space is one requirement for doing virtual memory on a card.

  • Then they effectively discontinue a whole range of DX11 4GB GPUs before even writing working drivers. A waste of time, money, and silicon.
  • It's funny how not RTFA on Slashdot is even a meme, yet I can see that 95% of the people here didn't read it. The 512TB number is the amount of ADDRESSABLE memory, which means that you can reserve, for example, 300GB of that space to read a texture file that big. Then, as you start reading it, a secondary controller will transfer data there from main memory, directly from disk, or from wherever. To you it will be as if you were reading a 300GB block from video memory, and thanks to that external controller the transfers happen behind your back.
