Hardware

NVIDIA Cancels GeForce RTX 3080 20GB and RTX 3070 16GB: Report (videocardz.com)

VideoCardz reports: NVIDIA has just told its board partners that it will not launch the GeForce RTX 3080 20GB and RTX 3070 16GB cards as planned, allegedly canceling their December launch. This very fresh information comes from two independent sources. Technically the GeForce RTX 3080 20GB and RTX 3070 16GB could still launch at a later time, but the information we have clearly states that those SKUs have been canceled, not postponed. NVIDIA has already canceled its RTX 3070 Ti model (PG141 SKU 0), so the RTX 3070 16GB (PG141 SKU 5) and RTX 3080 20GB (PG132 SKU 20) will be joining the list. The GeForce RTX 3080 20GB was expected to be a response to the AMD Radeon RX 6900/6800 series featuring the Navi 21 GPU. All three AMD SKUs will feature 16GB of memory, leaving NVIDIA with a smaller frame buffer to compete with. We do not know the official reason for the cancellation. The RTX 3080 20GB might have been scrapped due to low GDDR6X yields, one source claims. The reason behind the RTX 3070 16GB cancellation is unknown (this SKU uses GDDR6 memory). The plans for the GeForce RTX 3060 Ti remain unchanged; the PG190 SKU 10 remains on track for a mid-November launch.

  • I can't find any of those to order!
  • by Artem S. Tashkinov ( 764309 ) on Friday October 23, 2020 @03:22PM (#60641352) Homepage
    You cannot cancel something you haven't even announced. Rumors upon rumors.
    • You cannot cancel something you haven't even announced. Rumors upon rumors.

      Indeed you can't. You just assume that everything needs to be announced to *you*. Don't be so self-centered. Partners know what cards are coming and their specs long before NVIDIA announces them publicly. Hell, the 3000 series launch has been the "leakiest" NVIDIA launch in history. The full specs were known long before the event thanks to subtle leaks of drivers, firmwares, and even finished webpages by partners.

  • by RyanFenton ( 230700 ) on Friday October 23, 2020 @03:24PM (#60641360)

    I was able to get one of the 3080s on launch day (purely by luck, browsing Newegg at just the right time, well after the bulk had sold out).

    It works as advertised - I can comfortably play basically any modern game at 4K at 60fps, max settings - even on a 4790K (8-thread CPU at 4.4GHz, 5 years old).

    I'm a programmer who works in graphics on a frequent basis - so it's been nice for that.

    I can see a reason for a 20gig card - but not for gaming, by and large.

    The reason is console games - almost every major high end gaming project limits its performance expectations based on what game consoles can present to a user. There's not a huge benefit at least for gaming in the next 5 years to having 20gigs compared to 10gigs of video memory.

    Eventually - sure, but in that same timeframe, I don't see why you'd want the jump to be 10 to 20, when you could go to 32 gigs or further... I just don't see the 'float' of usable time you'd get by paying so much extra now for 20 gigs.

    That's largely my own vision being limited though - I'm not claiming a special insight that's future proof - just that I can't see any wisdom in paying extra now for 20gigs, for the next console generation at least.

    Ryan Fenton

    • by Luckyo ( 1726890 )

      >The reason is console games - almost every major high end gaming project limits its performance expectations based on what game consoles can present to a user. There's not a huge benefit at least for gaming in the next 5 years to having 20gigs compared to 10gigs of video memory.

      Serious question: do you think that SSD caching techniques promised in next gen consoles can potentially change this variable to the point where PC will need to have a lot more GPU RAM to compensate for lack of such technologies in current PCs?

      • by RyanFenton ( 230700 ) on Friday October 23, 2020 @03:51PM (#60641448)

        >Serious question: do you think that SSD caching techniques promised in next gen consoles can potentially change this variable to the point where PC will need to have a lot more GPU RAM to compensate for lack of such technologies in current PCs.

        I'm not seeing that, no.

        SSDs are faster than HDDs, to be sure - but not THAT much faster compared to RAM in general.

        Graphics cards are there to rapidly compose just what you see on screen at a moment. Waiting on SSD for that is too slow - that's more of a staging area.

        The whole reason long elevator rides and "mysterious" texture flickering in Assassin's Creed were a thing in their time was to mask the loading of texture data from the HDD on consoles into RAM.

        Once it's in general memory, then it can go to Video memory and shaders, and into the on-screen buffers.

        So - even though that'll be smoother in upcoming console games, you're still going to have a transport tunnel take a bit longer than 'usual' on occasion ... because it's loading textures the game doesn't want to show as an ugly single-color gradient for a second before it's set in video RAM.

        That's why you're still going to see time-consuming animations in spots, before the camera finally swings over to show you the new stuff that it was setting up to show you (there's a rough sketch of that staging below).

        That's also why PC architecture isn't going to be especially challenged by these new console partially combined memory schemes (which aren't a new concept).

        Not that I'm an authority - the last console game I worked on was a Tiger Woods PGA tour game.

        Ryan Fenton
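
        As a rough illustration of the staging described above, here is a minimal, hypothetical sketch (all names and bandwidth numbers below are invented, not taken from any real engine) of the disk -> system RAM -> video RAM pipeline that those elevator rides and doorway animations are hiding:

        ```python
        # Hypothetical sketch of texture staging: disk -> system RAM -> VRAM.
        # Bandwidth figures are rough assumptions, not measurements.
        import time

        DISK_MBPS = 2_000    # assumed NVMe-class read speed
        PCIE_MBPS = 15_800   # assumed PCIe 3.0 x16 host-to-GPU bandwidth

        system_ram_cache = {}  # textures staged in main memory
        video_ram_cache = {}   # textures resident on the GPU

        def load_from_disk(name, size_mb):
            """Stage a texture into system RAM (slow hop, bounded by the disk)."""
            time.sleep(size_mb / DISK_MBPS)          # simulated disk read
            system_ram_cache[name] = size_mb

        def upload_to_vram(name):
            """Copy a staged texture into video RAM (fast hop, bounded by PCIe)."""
            size_mb = system_ram_cache[name]
            time.sleep(size_mb / PCIE_MBPS)          # simulated PCIe transfer
            video_ram_cache[name] = size_mb

        def enter_new_area(textures):
            """The 'long elevator ride': keep the masking animation running until
            every texture for the next area is resident in video RAM."""
            for name, size_mb in textures.items():
                if name not in video_ram_cache:
                    if name not in system_ram_cache:
                        load_from_disk(name, size_mb)  # slowest step, hidden by the ride
                    upload_to_vram(name)
            # only now does the camera swing over to the fully textured scene

        enter_new_area({"castle_walls": 512, "npc_faces": 128, "skybox": 64})
        print("resident in VRAM:", video_ram_cache)
        ```

        A faster console SSD shortens the first hop, but the textures still have to end up in video memory before the camera moves.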

        • by Luckyo ( 1726890 )

          Thanks, that was informative and appears mostly in line with what I heard on the topic from others.

        • by tlhIngan ( 30335 )

          Except consoles are shared memory devices - a lot of delay on the PC comes with loading textures and other material from system memory into GPU memory, or rather pre-loading the content from system RAM into GPU RAM.

          Consoles don't need this - the SSDs are fast (I think 3GB/sec on the Xbox and 5GB/sec on the PS5), and since memory is shared, a copy of the assets is loaded just once into memory and doesn't have to be transferred.

          That's why loading times are dramatically reduced - not having to preload RAM (which costs a couple of seconds or so in data

          • Um, cheap on-board GPUs have had shared RAM for decades.
            It was just not special GPU-optimized RAM.
            Which should be absolutely no problem for AMD to solve on PC mainboards.
            I mean, they freaking made both console processor solutions.

            They could offer putting GDDR6 or whatever into the RAM slots, and tell some RAM makers to sell them and get early dominance.
            And faster RAM and a more direct GPU/APU to general RAM interface would quickly offset any advantages of using the G variant of DDR anyway.

            Because frankly, it

          • by MrL0G1C ( 867445 )

            a lot of delay on the PC comes with loading textures and other material from system memory into GPU memory, or rather pre-loading the content from system RAM into GPU RAM.

            eh? PCIe 3.0 x16 = 15.8GB/s, DDR4 is faster, GDDR is faster. So that's about 1 second to fill the GPU RAM from main RAM. No bottleneck here relative to SSD.
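
            As a quick back-of-the-envelope check of that claim (assumed figures: PCIe 3.0 x16 at ~15.8 GB/s, a hypothetical 10GB card, and a ~5 GB/s NVMe SSD for comparison):

            ```python
            # Rough transfer-time arithmetic; all bandwidth figures are assumptions.
            PCIE3_X16_GBPS = 15.8   # PCIe 3.0 x16 theoretical throughput
            NVME_GBPS = 5.0         # fast PCIe 4.0 NVMe SSD, for comparison
            VRAM_GB = 10.0          # e.g. an RTX 3080

            print(f"fill {VRAM_GB:.0f} GB of VRAM over PCIe 3.0 x16: {VRAM_GB / PCIE3_X16_GBPS:.2f} s")
            print(f"read the same data from a fast NVMe SSD:  {VRAM_GB / NVME_GBPS:.2f} s")
            # ~0.6 s over PCIe vs ~2 s from the SSD: system RAM -> VRAM is not the
            # bottleneck relative to the SSD, which is the point being made here.
            ```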

      • Serious question: do you think that SSD caching techniques promised in next gen consoles can potentially change this variable to the point where PC will need to have a lot more GPU RAM to compensate for lack of such technologies in current PCs.

        No. You misunderstand how it works on the consoles (as well as on the PC since the technology is here too and the APIs to use it are being worked on right now). The GPU memory is still absolutely required and several orders of magnitude faster (they are pushing very close to 1TB/s). The GPU still needs this high speed data co-located.

        What the console is doing is pushing it from the SSD to the GPU memory so it's ready to be processed when needed, without the overhead of going through the CPU and system RAM.

        • by Luckyo ( 1726890 )

          Wait, are you telling me that there's no texture caching on GPU memory side at all for reasons of slow disk streaming speeds? It's always all in system RAM?

          I always assumed there was some caching going on in GPU RAM itself.

      • Serious question: do you think that SSD caching techniques promised in next gen consoles can potentially change this variable to the point where PC will need to have a lot more GPU RAM to compensate for lack of such technologies in current PCs.

        Computational Biology dev here, running code on a giant HPC:
        There's nothing extraordinary in the "caching techniques" promised in "next gen consoles". There's nothing new at all.
        It boils down to PCIe-based SSDs, which are insanely fast because they have way more parallel channels available, wrapped in some marketing speak by Sony to make it sound cool.
        (See here [lenovopress.com] for a random, quickly googled example of such tech in servers that is PCIe 4.0 x8. The 12-channel controller in the PS5 doesn't seem alien in comparison.)
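
        To make the parallel-channel point concrete, a tiny illustration (the per-channel figure below is an assumption, not a published spec):

        ```python
        # Aggregate bandwidth scales with channel count; per-channel rate is assumed.
        CHANNEL_MBPS = 500  # assumed sustained read per NAND channel

        for channels in (4, 8, 12):
            total_gbps = channels * CHANNEL_MBPS / 1000
            print(f"{channels:2d} channels -> ~{total_gbps:.1f} GB/s aggregate")
        # A 12-channel design lands in the same ballpark as the ~5.5 GB/s raw read
        # rate quoted for the PS5's SSD.
        ```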

        • by Luckyo ( 1726890 )

          Yeah, but that was kind of my point. Can this be temporarily mitigated by the game devs using larger GPU memory as an effective texture cache while Microsoft, Intel, AMD and Nvidia work this out at the driver level? Not sure if SSD makers are needed here, since they technically already supply PCIe Gen4 devices that should be able to handle those data transfer speeds to the bus. It's where the data goes from there that's changing.

          Because the usual problem for PC is that there are many makers and countless variants

    • 20GB important for neural network training

      • >20GB important for neural network training

        Oh, certainly - and also large video conversion projects, especially those involving AI code, which are rather fascinating.

        It'll also be rather nice for upcoming emulators...

        I'm just saying that for gaming writ large, it's not a wise choice when it comes to that decision-making logic.

        Ryan Fenton

      • by fazig ( 2909523 )
        And digital artists working with large texture sizes.

        A shame if nVidia scraps these and only leaves the $1500 RTX 3090 as the high-VRAM option, and of course the very expensive workstation cards, where you pay for driver certification that you'll probably never use as an artist.

        This makes me really hope that AMD gets their shit together and offers better support for 3D artists in the future.
      • by MrL0G1C ( 867445 )

        And what's that got to do with general PC games? It pigs me off that we get junk shoved on the GPU that's not for games or gamers and we have to pay for that anyway.

    • by tlhIngan ( 30335 )

      It's cryptocurrency rush 2.0 actually.

      The computation rate and massive memory of a 3090 mean the average payback time is around 8 months or so. A lot of cryptocurrencies try to be host-CPU only, but when you have 20-28GB of RAM to play with on the GPU, it's a computing monster, meaning mining can be done efficiently on the GPU.

    • What game developer wants to sell a game that can only work on the most expensive 5% of cards sold? Games must be capable of running on 4 GB or less of video memory today, and that won't change past 8 for at least ten years. Sure there are advantages to being able to stretch that out for higher capacities but when you're designing for 4 there's a huge drop in additional utility after 8.

      So who is going to buy a new video card with more memory than any game can make decent use of for the next 6 years? By th
      • Because there are never options in games that can be turned on in order to make use of newer hardware than the lowest common denominator? Or probing the GPU to see how much VRAM is present so you can preload and precache more aggressively?

    • "I can see a reason for a 20gig card"

      640GB will be enough for everybody.

      • My Rolodex only holds 250 cards. After that I have to file less often used contacts in secondary storage (filing cabinet).

        With GPU texture memory, it's kind of the same thing. You can render a few large atlas textures, or several smaller textures in multiple iterations. There is a performance penalty, but you can minimize this penalty by organizing your process.
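
        A minimal sketch of that "organize your process" idea (the budget and texture sizes below are made up): group textures into batches that fit the card's memory budget, so each batch is uploaded and rendered before the next is swapped in:

        ```python
        # Greedy packing of textures into VRAM-budget-sized batches (hypothetical sizes).
        def batch_by_vram(textures_mb, budget_mb):
            """Split (name, size_mb) pairs into consecutive batches that fit the budget."""
            batches, current, used = [], [], 0
            for name, size in textures_mb:
                if used + size > budget_mb and current:
                    batches.append(current)   # flush the full batch, start a new one
                    current, used = [], 0
                current.append(name)
                used += size
            if current:
                batches.append(current)
            return batches

        textures = [("terrain", 4096), ("buildings", 3072), ("characters", 2048),
                    ("foliage", 1536), ("effects", 512)]
        for i, batch in enumerate(batch_by_vram(textures, budget_mb=8 * 1024)):
            print(f"pass {i}: {batch}")   # each pass fits within an 8 GB budget
        ```

        Fewer swaps mean less of the penalty mentioned above; a bigger card just means bigger (or fewer) batches.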

    • VR games can use it. I'm not saying everyone can use it, but I can.
    • The reason is console games - almost every major high end gaming project limits its performance expectations based on what game consoles can present to a user.

      Sorry, but that is false and has nothing to do with PC GPU RAM size. In fact, texture size is the single easiest performance differentiator between consoles and PCs, one that developers effectively don't need to put effort into optimising.

      Almost universally PC releases have had sharper looking graphics precisely because the PC releases aren't as RAM limited as their console counterparts. And you don't need some magical remaster to make this work. The studios literally have high resolution textures and dow

    • Alt-coin miners would use larger amounts of VRAM. One of the strategies for designing GPU-friendly, ASIC-unfriendly mining algorithms is to require "lots" of RAM. "Lots" being relative. I think Ethereum has gotten to the point where 4GB of VRAM is not enough.
  • by aaron44126 ( 2631375 ) on Friday October 23, 2020 @03:28PM (#60641376) Homepage
    Wild speculation — There have also been multiple reports that NVIDIA is looking to switch over to TSMC or use them more heavily for Ampere GPU chip production next year and has put in a hefty order to lock in capacity with them. Possibly because of yield issues with the 8nm chips coming out of Samsung. Maybe these cards were cancelled as plans have shifted to launch upgraded cards with TSMC-produced 7nm chips next year instead...?
    • I don't think TSMC has any capacity left, to be frank.

      By the way: Will the M stand for "monopoly" soon?
      I heard rumors of even Intel having had plans for using TSMC's process, just to get any modern competitive chips at all. You know ... in an emergency, even the devil will eat flies. ... And apparently they were turned down due to lack of free capacity. (I presume with a lot of loud laughing.)

      • "I heard rumors of even Intel having had plans for using TSMC's process"

        All except the last version of the Intel cellular modems were fabbed at TSMC. (The Intel Mobile Communications group was purchased from Infineon. This is also why they used ARM cores until the last version as well.)

  • They would have had a customer

    Eh, maybe they'll come around. There's no rush

  • by MtHuurne ( 602934 ) on Friday October 23, 2020 @05:17PM (#60641686) Homepage

    All three AMD SKUs will feature 16GB of memory, leaving NVIDIA with a smaller frame buffer to compete with.

    Eh, even if you have an 8K monitor, you don't need 16GB of video memory for your frame buffer (it's about 126MB, times two or three for double or triple buffering). The memory is useful for high-resolution textures instead. A site specializing in video cards should get their terminology right.
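
    For the record, the arithmetic behind that figure (8K resolution, 32-bit color assumed):

    ```python
    # Frame buffer size at 8K with 32-bit (4-byte) color, compared with 16 GiB of VRAM.
    width, height, bytes_per_pixel = 7680, 4320, 4

    one_buffer_mib = width * height * bytes_per_pixel / 2**20
    print(f"single 8K frame buffer: {one_buffer_mib:.1f} MiB")        # ~126.6 MiB
    print(f"triple buffered:        {3 * one_buffer_mib:.1f} MiB")    # ~380 MiB
    print(f"share of 16 GiB VRAM:   {3 * one_buffer_mib / (16 * 1024):.1%}")
    # Even triple-buffered 8K is only a few percent of 16 GB; the rest of the
    # memory goes to textures and other assets, not the "frame buffer".
    ```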

  • We do not know the official reason for the cancellation.

    Perhaps it's because Nvidia's algos noticed the world is going down the f**king toilet, and there just might be a sales cliff ahead.

  • Maybe Nvidia decided that they should devote their production capabilities to producing the 3000 cards they just released since they've shown themselves completely unable to keep up with demand.

    If Nvidia's cards are still only available largely through resellers at 3 times the MSRP when the AMD cards launch, Nvidia stands to lose some real market share, especially if AMD's launch is even moderately more competent than Nvidia's.

  • They just don't need to. Same reason we were stuck with IE 5.5-6.0 when dumbasses fell for that particular spaghetti code trainwreck, back then. :)

    They will release those exact cards, under bigger model numbers, as soon as AMD becomes enough of a threat again. But just enough to keep the dominance, no more.

    (On a tangent: I just wish capitalism would not always inevitably progress towards monopolism, one buy-up or competitor failure at a time. 'Cause the reason government-run stuff is progressing so slowly

    • The solution you are looking for is progressive company tax. Little companies pay low rates. Big companies pay big rates. This makes monopolies less attractive and helps new competitors to enter a market. The reason you'll never see this in practice is that the big companies have bought all the politicians.

    • They will release those exact cards, under bigger model numbers, as soon as AMD become enough of a threat again.

      You mean the AMD which are releasing 16GB cards? The AMD who in 4 days are announcing their next gen, which early reports think may actually be serious competition?

      You may be right, but that leads to one of 3 possible scenarios:
      a) NVIDIA has engaged in some industrial espionage and is aware that AMD's announcement in 3 days is going to flop.
      b) NVIDIA has balls so big they desperately need to go get them checked by a doctor for cancer.
      c) NVIDIA is grossly incompetent making such a move right before a competito
