Graphics Hardware

NVIDIA Launches GTX 750 Ti With New Maxwell Architecture

Vigile writes "NVIDIA is launching the GeForce GTX 750 Ti today, which would normally just be a passing mention for a new $150 mainstream graphics card. But the company is using this as the starting point for its Maxwell architecture, which is actually pretty interesting. With a new GPU design that reorganizes the compute structure into smaller blocks, Maxwell is able to provide 66% more CUDA cores with a die size that is just 25% bigger than the previous generation, all while continuing to use the same 28nm process technology we have today. Power and area efficiency were the target design points for Maxwell, as it will eventually be integrated into NVIDIA's Tegra line, too. As a result, the GeForce GTX 750 Ti is able to outperform AMD's Radeon R7 260X by 5-10% while using 35 watts less power."

Comments:
  • ... five expansion slots to fit the fans, this time!

    • Not if you pick a lower-power card from the upcoming Maxwell line-up.
  • Maxwell? (Score:4, Funny)

    by ackthpt ( 218170 ) on Tuesday February 18, 2014 @02:43PM (#46278923) Homepage Journal

    That sounds Smart...

    I'll get me coat.

  • Heat and noise.... (Score:4, Interesting)

    by Kenja ( 541830 ) on Tuesday February 18, 2014 @02:45PM (#46278939)
    Am I the only one annoyed that average operating temperature and noise output are not standard graphic card benchmarks?
    • by ustolemyname ( 1301665 ) on Tuesday February 18, 2014 @02:55PM (#46279057)

      The benchmarks on Phoronix [phoronix.com] did temperature, and commented on (though didn't measure) noise. It was actually a fairly comprehensive, well-done benchmark; the only thing missing was frame latency measurements.

    • by gman003 ( 1693318 ) on Tuesday February 18, 2014 @02:59PM (#46279093)

      They are at Anandtech. [anandtech.com] They do noise/temps/power at idle, in a game, or under full synthetic load. They even do an overclock and then re-compare game/synth numbers.

    • They are not, because both of those numbers depend heavily not only on how the device is used, but also on its enclosure. Do you have the card in SLI mode, inside a rack with 100 other cards all running bitcoin miners? Well then, noise and heat will be through the roof. Do you spend most of your time in a terminal emulator with a water-cooled case? Well then, it's going to be pretty low. "Average" is totally subjective, and I think it best to leave those measurements up to an external review.

      • by Rhywden ( 1940872 ) on Tuesday February 18, 2014 @03:42PM (#46279469)

        By your standard, almost anything would be subjective. Let's go through your line of thinking:

        The tester chose an enclosure you probably don't have at home. As such, the card will not demonstrate the same values in your enclosure at home. As a result the tests are "subjective".

        Power consumption? Well, you've probably got a different PSU. Subjective.

        FPS? You've probably got a different CPU, different OS configuration, motherboard, harddisc... Subjective!

        In summary: if the tester uses the same enclosure for every card they test, I don't see how it's subjective. Sones or dB as units of loudness are measurable, as is temperature. Or do you want to tell us that, say, the distance to Betelgeuse is subjective just because you don't happen to have the proper equipment to measure it?

      • by geekoid ( 135745 )

        Forget enclosures. Power up the device with no enclosure, give me the numbers at 1 meter distance.

        Now we have a COMMON bar to use to judge.
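
        A fixed open-bench, 1-meter reading would at least be convertible to other setups: for a roughly point-like source in free field, sound pressure level falls off as 20·log10 of the distance ratio. A minimal sketch of that conversion (a simplification that ignores the room and the case, which is rather the point of an open-bench number):

          import math

          # Free-field, point-source approximation (a deliberate simplification):
          # SPL at distance r = SPL at 1 m - 20*log10(r / 1 m)
          def spl_at(spl_1m_db, distance_m):
              return spl_1m_db - 20 * math.log10(distance_m / 1.0)

          print(spl_at(35.0, 2.0))  # a 35 dB(A) card measured at 1 m reads ~29 dB(A) at 2 m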

      • I doubt miners care all that much about sound, operations per watt being far more relevant to their usage. If you've got a bank of dozens or hundreds of cards being hammered 24/7, it's going to be loud, period.

        For everyone else I would think that the relevant questions would be how loud it is while being hammered by a graphically demanding game, watching a movie, and using a word processor. And these days it sounds like the last two usage cases tend to be comparable. As for the influence of case, etc. I'd say that

    • by geekoid ( 135745 )

      No, I've been saying they should do that for any computer part with a fan.
      What's the noise at 1 meter at mid power and max power?
      TYVM

    • Yes

      Operating temperature and noise output would only be valid measurements for the reference card. Once it starts getting manufactured by PNY or Diamond or eVGA or whoever, they'll be using their own coolers and their own variations of the NVIDIA board.
    • Power consumption is heat generation. If you decrease power consumption, this should also reduce noise since a slower fan can be used.

  • Going from 1 unit to 1.25 units in linear size is 56% bigger in area, so they only gained about 10%?
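
    (The 25% in the summary appears to refer to die area, not a linear dimension: the die sizes given in the reviews, roughly 148 mm² for GM107 versus 118 mm² for GK107, are about a 1.25x area ratio. A quick check of both readings, treating those die sizes as review-reported figures rather than anything official:)

      # If 25% were a linear dimension, area would grow by 1.25^2:
      print(1.25 ** 2)    # 1.5625, the 56% figure above
      # Review-reported die areas suggest 25% is already the area figure:
      print(148 / 118)    # ~1.25 (GM107 vs GK107, in mm^2)
      # Cores per unit area then improves by roughly:
      print(1.66 / 1.25)  # ~1.33, i.e. about 33% more CUDA cores per mm^2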

  • 5-10% better than a cheaper rival card that came out 5 months ago.
    Go nvidia, go!

    • 5-10% better than a cheaper rival card that came out 5 months ago.
      Go nvidia, go!

      I'm no nVidiot, but 5-10% improvement at a substantial power savings in the same price bracket is indeed an impressive feat.
      This being a brand new architecture means that later cards can also reap these benefits.

      AMD and their OEMs are still slowly trotting out 290X cards with decent cooling at inflated prices. The sooner nVidia gets their next architecture out there the sooner we'll see new products / price drops on the AMD side. The sooner that happens, the sooner we see new products / price drops on the

      • Re: (Score:3, Insightful)

        by fsck-beta ( 3539217 )

        Meanwhile, in CPU land we've been stuck for years of Intel charging $BUTT for marginally better

        If you think Haswell, Ivy Bridge or Sandy Bridge were 'marginally better' you aren't paying attention.

        • by Bengie ( 1121981 )
          I forget which synthetic benchmark, but it was non-SIMD and had Intel 2x faster single-threaded and about 50% faster multi-threaded, while consuming about 25% less power at idle and 50% less power at load than AMD's offering. AMD's very recent APU actually is about on par with Haswell clock-for-clock, but at the expense of 100% more power.
      • Re:Wow (Score:4, Insightful)

        by viperidaenz ( 2515578 ) on Tuesday February 18, 2014 @04:50PM (#46280069)

        You say the big advance is in power, then mention the 290X, which has a single precision GFLOPS/W figure of 19.4, between the new GTX 750's 19.0 and the GTX 750 Ti's 21.8.

        The 290X has a double precision GFLOPS/W of 2.6; the GTX 750 Ti gets 0.68. Compared to the 65W TDP Radeon 250's double precision figure of 0.74, it's a loser.

        This is just hype and selective benchmarks for a new architecture that was supposed to be 20nm. They couldn't get it built on 20nm, so they've had to stick with 28nm.
        If it were 20nm, it probably would be better all round.
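
        (For anyone wondering where those GFLOPS/W figures come from: they are just peak shader throughput divided by board TDP. A quick sketch using the commonly quoted specs; treat the exact GFLOPS and TDP values as assumptions, since a different assumed TDP shifts the result a little:)

          # perf-per-watt = peak GFLOPS / board power (TDP)
          def gflops_per_watt(gflops, tdp_w):
              return gflops / tdp_w

          # GTX 750 Ti: 640 cores x 2 ops x ~1.02 GHz ~ 1306 SP GFLOPS, FP64 at 1/32 rate, 60 W TDP
          print(gflops_per_watt(1306, 60))       # ~21.8 SP GFLOPS/W
          print(gflops_per_watt(1306 / 32, 60))  # ~0.68 DP GFLOPS/W

          # R9 290X: 2816 cores x 2 ops x 1.0 GHz = 5632 SP GFLOPS, FP64 at 1/8 rate, ~290 W TDP
          print(gflops_per_watt(5632, 290))      # ~19.4 SP GFLOPS/W
          print(gflops_per_watt(5632 / 8, 290))  # ~2.4 DP GFLOPS/W, give or take the assumed TDP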

          You say the big advance is in power, then mention the 290X, which has a single precision GFLOPS/W figure of 19.4, between the new GTX 750's 19.0 and the GTX 750 Ti's 21.8.

          The 290X has a double precision GFLOPS/W of 2.6; the GTX 750 Ti gets 0.68. Compared to the 65W TDP Radeon 250's double precision figure of 0.74, it's a loser.

          This is just hype and selective benchmarks for a new architecture that was supposed to be 20nm. They couldn't get it built on 20nm, so they've had to stick with 28nm.
          If it were 20nm, it probably would be better all round.

          You can't compare the flagship 290x to the low-end 750, lol.

          Beyond that, nVidia doesn't give a shit about double precision compute - never has, never will. Most other people don't, either, even people doing computing on GPUs.
          nVidia is still very much gamer-focused, despite CUDA still having a huge market advantage over ATI Stream / OpenCL / DirectCompute / AMD APP Acceleration / etc. nVidia's new architecture is promising, and this card's launch is giving us a taste.

          I currently have only AMD cards in my

        • Games don't make use of double precision math on a GPU. Really the only thing that does is some GPGPU apps (plenty of others are SP). So it makes no sense to optimize for it, and nVidia does not in their consumer cards, particularly low end ones like the 750.

          Don't go and try to sniff around to find benchmarks that make your favourite product win, as it is rather silly. Ya, there's a lot the 290X is better at, but that doesn't mean it is relevant. The idea here is for reasonable graphics (as in gaming, multi

      • by Ecuador ( 740021 )

        I'm no nVidiot, but 5-10% improvement at a substantial power savings in the same price bracket is indeed an impressive feat.

        Substantial power savings, but certainly not the same price bracket. It is a $149 card, so most reviews don't pit it against the much cheaper $119 R7 260X. In fact, Newegg right now sells an XFX OC edition of the 260X for $119 and a Sapphire for $114 after rebate. Let's not mention that the Maxwell card gets trounced even by the cheaper 260X in many OpenCL tests - and FP64 is reduced to 1/32 (vs. the previous gen's 1/24), so compute is also out. You have to pay dearly for the efficiency; nVidia as usual demands a price premium. Basically, thank god for AMD being able to keep up, otherwise nVidia would be selling their cards at 2X and 3X the price!

        • I'm no nVidiot, but 5-10% improvement at a substantial power savings in the same price bracket is indeed an impressive feat.

          Substantial power savings, but certainly not the same price bracket. It is a $149 card, so most reviews don't pit it against the much cheaper $119 R7 260X. In fact, Newegg right now sells an XFX OC edition of the 260X for $119 and a Sapphire for $114 after rebate. Let's not mention that the Maxwell card gets trounced even by the cheaper 260X in many OpenCL tests - and FP64 is reduced to 1/32 (vs. the previous gen's 1/24), so compute is also out.
          You have to pay dearly for the efficiency; nVidia as usual demands a price premium. Basically, thank god for AMD being able to keep up, otherwise nVidia would be selling their cards at 2X and 3X the price!

          Nobody buys AMD for compute except Bitcoin miners. The market is priced for gaming. nVidia's 750 is priced exactly where it needs to be for gaming.

    • 5-10% better than a cheaper rival card that came out 5 months ago.

      This is a mobile-first design. Look at the power consumption figures to see why it is a major advance.

  • I think the most drastic thing about this new chipset is the fact that Nvidia bumped the L2 cache up to 2 MB.

    The Radeon R7 260 it is being compared against has only 768 KB, and Kepler units had 256-320 KB.

    The performance improvement could simply be the larger L2, which means it has to go out to its memory less often.
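
    (A toy way to see why the L2 size could matter, using an entirely made-up 4 MiB per-frame reuse working set; the only point is that DRAM traffic for reused data shrinks quickly once the cache approaches the working set:)

      # Crude model: fraction of reuse traffic that spills past the L2 to DRAM,
      # assuming an LRU-ish cache and a hypothetical working-set size.
      def dram_spill_fraction(l2_mib, working_set_mib=4.0):
          return max(0.0, 1.0 - l2_mib / working_set_mib)

      for name, l2_mib in [("Maxwell GM107, 2 MiB", 2.0),
                           ("R7 260 class, 0.75 MiB", 0.75),
                           ("small Kepler, 0.25 MiB", 0.25)]:
          print(name, "-> %.0f%% of reuse traffic goes to DRAM" % (100 * dram_spill_fraction(l2_mib)))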
    • It's also only measuring single precision performance. The AMD GPUs are more power efficient at double precision.

      • by Nemyst ( 1383049 )
        The amusing thing is that it's actually a choice on NVIDIA's part. They're crippling their DP performance on consumer-level cards to push their workstation cards, which have full DP performance but cost in the many thousands of dollars. I have to wonder if it actually works out well for them, though I suppose gamers must be thankful they can at least get one vendor's cards at reasonable prices with the ludicrous mining craze.
  • How does it perform at h.264 encoding? Compared with, let's say, Intel's Quick Sync on the HD 4600?
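
    (No NVENC-versus-Quick-Sync numbers in the linked reviews, as far as I can tell. If you wanted a rough comparison yourself, one way would be to time ffmpeg's two hardware encoders on the same clip; this assumes a build with both h264_nvenc and h264_qsv compiled in, uses a hypothetical input file, and says nothing about output quality, which would need its own comparison:)

      import subprocess, time

      SRC = "sample_1080p.mkv"  # hypothetical test clip

      def time_encode(codec, out):
          # Encode SRC with the given hardware encoder and report wall-clock time.
          start = time.time()
          subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", codec, "-b:v", "8M", out],
                         check=True)
          return time.time() - start

      print("NVENC:      %.1f s" % time_encode("h264_nvenc", "out_nvenc.mp4"))
      print("Quick Sync: %.1f s" % time_encode("h264_qsv", "out_qsv.mp4"))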
  • I thought Maxwell was going to be under the 800 series [wikipedia.org]. For sure, the 700 series [wikipedia.org] already exists and has used the older Kepler architecture. Why confuse customers with ambiguous product naming?
    • by Nemyst ( 1383049 )
      NVIDIA's done it a lot in the past: they'll introduce a tweaked or completely new architecture as a midrange card to test the waters and tweak things. This most likely lets them get actual production cards out, which they can then test for problems and use as a foundation to build their top-end cards. I guess it also lets them gauge reviewer and consumer reactions.
      • by Twinbee ( 767046 )
        Interesting, I might wait for the 8xx cards instead. Is x50 always 'reserved' for 'beta' versions of new architectures, and if so is it just x50 or are others like say x30 or x70 also reserved?

        Unfortunately, Wikipedia doesn't even bother to list the architecture in its giant table [wikipedia.org] - a very important aspect of a graphics card.
  • Please tell me the browser cache is screwing with me. Please tell me that my wife wants to have sex more often (ok, that isn't going to happen; I have a 12- and a 15-year-old). Do we really have Slashdot.org back?
  • It looks like all they may have left is price.
  • I wonder who they want to sell to, when it comes to "mainstream" cards.

    When it comes to graphics I consider myself mainstream. Watching video, running the OS, an occasional photo edit - that's about as heavy as it gets. I rarely play games (and those are not graphics intensive, just online games), and I don't do CAD or anything else that's graphically intensive.

    Motherboards come with graphics built in, and that works just fine for those not into hardcore gaming or hardcore graphic design work. Both relative sm

  • As an owner of several nVidia products, I'm appalled at what nVidia does to its non-Quadro cards!

    So you can choose a crippled gaming card that can't do math well, or a workstation card that can't cool itself and doesn't really know what to do with shaders.

    Tell your marketing department that a loyal customer will seriously give AMD/ATI a close look next time around.
