
Nvidia RTX 4070 Ti Leak Reveals Specs From 'Unlaunched' RTX 4080 (theverge.com)

A new leak could confirm rumors that Nvidia's planning on releasing the "unlaunched" 12GB RTX 4080 graphics card as the RTX 4070 Ti. From a report: The company briefly posted the specs for its upcoming RTX 4070 Ti GPU on its website, but Twitter user @momomo_us managed to snag a screenshot before Nvidia pulled the page down. So far, the leaked specs look identical to that of the 12GB RTX 4080, with the chip sporting 7,680 CUDA cores, a 2.61 GHz boost clock, and 12GB of memory. It also says the GPU could run 4K at up to 240Hz or 8K at 60Hz with DSC and HDR, while an included chart indicates that the RTX 4070 Ti could outperform the RTX 3080 by about 3.5 times when playing Cyberpunk 2077 with its new Ray-Tracing: Overdrive mode. In October, Nvidia faced criticism over its decision to launch the 12GB RTX 4080 GPU under the RTX 4080 moniker because of how much it differs from its much more powerful 16GB counterpart.
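The leaked core count and boost clock are enough for a back-of-the-envelope estimate of the card's theoretical shader throughput (a sketch, assuming the usual convention of 2 FLOPs per CUDA core per clock via fused multiply-add):

```python
# Theoretical peak FP32 throughput from the leaked specs.
cuda_cores = 7680
boost_clock_hz = 2.61e9          # 2.61 GHz boost clock
flops_per_core_per_clock = 2     # one fused multiply-add counts as 2 FLOPs

peak_tflops = cuda_cores * boost_clock_hz * flops_per_core_per_clock / 1e12
print(f"Peak FP32: {peak_tflops:.1f} TFLOPS")  # prints: Peak FP32: 40.1 TFLOPS
```

That puts the 4070 Ti at roughly 40 TFLOPS of peak FP32 on paper, though real game performance depends on far more than raw shader throughput.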

  • everyone on the planet knew that would be the case since they unlaunched it

  • I play games. I see the prices for the "good" cards, and think that maybe I really don't need to buy a new card. My old games still work fine. I'll just wait for the mid-range ones to go on sale. I look up those prices, and think, how about I just wait 10 years instead. Suddenly, I really couldn't care less about the latest and greatest in gaming video cards. 4080? 5080? Whatever. It's all the same if it costs more than $500.
    • by evanh ( 627108 )

      And anyone that wants a newer card now is buying cheap RTX 3000/RX 6000 ex-mining cards.

      • yep, would not touch these new overpriced pieces of shit. The performance increase doesn't come close to validating the massive price gouging.
    • Turns out that if you have enough fast ram and a fast enough cpu, you don't really need the newer cards anyway.
      • Turns out that if you have enough fast ram and a fast enough cpu, you don't really need the newer cards anyway.

        Well, I maxed out my fast RAM at 4MB, but I still think I'd benefit from a Picasso II

    • This. Exactly.
    • I wish some must-play game would come out so I could justify spending stupid money on a gaming system. I just play old games or low-graphics games (Wurm Unlimited).

      It would be kind of cool if I could play SC2 on my 70" projector, but I'm not going to run out and spend tons of money just to do that. For now I'll stick with my 24" monitor at 1920x1080, even though the projector would be kind of cool.

      I can play movies with VLC on the projector, but gaming just becomes too intensive for my system. Rocking an AMD tri-core at 3.7GHz, 8GB RAM, 750

    • I'm still waiting for a reasonably priced upgrade for a GeForce 2060 Super that's under $400.

      Maybe the 4060 will be better than expected?

      • I doubt there will be a meaningful entry in the sub-$400 space this generation; maybe a 4050, or something barely better than on-die graphics.
    • I bought an RTX 2070 Super for gaming and especially VR a couple years ago. Always go for the mid-range card (in the case of Nvidia, the '70 series) as they have the best performance/price ratio. I was just about to suggest that you get one of those, because unless you want to play at 4K ultra-raytraced, it still plays everything you would want to at high quality.
      But then I looked at the prices, and it's still at 400 - 550 Euros.
      What gives? I think that's about the price I bought it for 2 years ago.

  • I use my video card for Cycles rendering on Rhino 3D. I already have an 8GB GTX 1080, which is getting long in the tooth, but I would be foolish to get something that only has 4GB more RAM, when I've already seen things get tight on a few projects.

    I think it's about time for the video card to use system memory, somehow. That way the slow bus transfer is out of the equation.
    • I think it's about time for the video card to use system memory, somehow.

      Welcome to the Mac M-series!

      Well, it's a good thing you mentioned the slow bus transfer, because that is a factor; my initial reaction was to wonder why you'd want to use SLOWER memory for your processing. Still, they've done quite a bit of work to make the bus faster at transferring information back and forth. For example, the ability to load data directly into the video card from storage devices without involving the CPU or main RAM.

      Now, having a big glob of central memory you use for everything without moving it woul

      • by edwdig ( 47888 )

        What you're describing is how the Xbox and PlayStation are designed. The whole system uses GDDR instead of DDR, and there's just one shared memory pool. You've also got multiple memory busses, with different performance characteristics and cache coherency rules to help with performance.

        It's a great design for gaming, but CPU performance takes a hit in the process. GDDR gives you higher bandwidth but comes with higher latency. Your rendering speeds up, but your random memory accesses on the CPU side slow dow
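The bandwidth-versus-latency tradeoff described above can be illustrated with rough numbers (illustrative figures, not measurements: assume DDR-class memory at ~50 ns latency and ~50 GB/s, GDDR-class at ~100 ns latency and ~500 GB/s):

```python
# Rough model: time to service one access = fixed latency + size / bandwidth.
def access_time_ns(size_bytes, latency_ns, bandwidth_gbps):
    transfer_ns = size_bytes / (bandwidth_gbps * 1e9) * 1e9
    return latency_ns + transfer_ns

ddr  = dict(latency_ns=50.0,  bandwidth_gbps=50.0)    # CPU-style DDR (assumed)
gddr = dict(latency_ns=100.0, bandwidth_gbps=500.0)   # GPU-style GDDR (assumed)

# A 64-byte cache line (random CPU access): latency dominates, DDR wins.
print(access_time_ns(64, **ddr), access_time_ns(64, **gddr))        # ~51 vs ~100 ns
# A 1 MiB streaming read (GPU-style access): bandwidth dominates, GDDR wins.
print(access_time_ns(1 << 20, **ddr), access_time_ns(1 << 20, **gddr))  # ~21000 vs ~2200 ns
```

With these assumed numbers, small random accesses (the CPU's bread and butter) are roughly twice as slow on GDDR, while large streaming reads (the GPU's workload) are roughly 10x faster, which is exactly the tradeoff the consoles accept.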

        • Our current computers are a mix of compromises to enable decent general performance at affordable prices? Shocking...

          As you say, you can favor the graphical system, but at the cost of CPU ability; you can favor the CPU, but at a cost to graphics. You can paper over many things with "more money".

          We use graphics cards for so much today because enough development work has gone into them that a makeshift use of a graphics card is relatively cheap, not because it's particularly good. A

    • If you want memory coherence between pcie devices and system RAM, you should look into CXL and the enterprise hardware that utilizes it.

    • I think it's about time for the video card to use system memory, somehow. That way the slow bus transfer is out of the equation.

      The system memory is on the other side of the bus, so it is the opposite of being out of the equation in that scenario. The GPU can access system memory via DMA, with boundaries controlled by the IOMMU, but it is several times slower than accessing local memory so nobody does this if they can avoid it. This is why they put so much RAM on GPGPU cards, to avoid having to go to the bus.

      Whether a card with only 4GB more would be enough for you when you're running out of 8GB now is obviously application-specific

    • I think it's about time for the video card to use system memory, somehow.

      Wait... what? Tell us you don't know how computer architectures work without telling us you don't know how computer architectures work.

      There's a reason system memory is shared only with the lowest of low end GPUs. I'll let you research why.

  • And they'll focus on bang for the buck.

    Keep their halo lineup of RTX xx90, and xx80, and the salvage parts from them, but let them think about frame/per dollar for the cards underneath those.

    For example, for their next series use the 3nm node for the flagships, but use the existing node for the undercards, and refine it for better yields. And let's not forget that the console generation that dictates the standard for image quality is still just getting its legs underneath it.

    Lots of sales can be made if the

    • Yup. Until new consoles are launched (whether they're named as a new generation or just the "PRO" versions of the current ones does not matter) you could easily do with Nvidia 3xxx class cards.
