
First Radeon HD 8000M GPU Benchmarked

J. Dzhugashvili writes "As Slashdot noted earlier this week, AMD has a new line of mid-range Radeon GPUs aimed at notebooks. The chips are based on the Graphics Core Next microarchitecture, and they're slated to show up in systems early next year. While the initial report was limited to specification details, the first review of the Radeon HD 8790M is now out, complete with benchmark data from the latest games. The 8790M is about 35% smaller than its 7690M predecessor but offers substantially better gaming performance across the board. Impressively, the new chip's power draw under load is similar to the outgoing model's, and its idle power consumption is slightly lower. Notebook makers should have no problem making the switch. However, it is worth noting that this new mobile GPU exhibits some of the same frame latency spikes observed on desktop Radeons, including in games that AMD itself has sponsored."
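
For readers unfamiliar with the frame latency methodology the summary refers to: instead of averaging FPS over a run, each frame is timed individually, so a handful of long frames registers as a spike even when the average looks fine. A minimal sketch of that analysis in C, with made-up frame times and an assumed 33.3 ms (30 FPS) spike threshold:

    #include <stdio.h>
    #include <stdlib.h>

    /* Comparison function for qsort: ascending doubles. */
    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        /* Illustrative per-frame render times in milliseconds. */
        double frame_ms[] = { 16.5, 17.1, 16.8, 45.2, 16.9, 17.0, 16.7, 38.6 };
        size_t n = sizeof frame_ms / sizeof frame_ms[0];

        /* Count "spikes": frames slower than 33.3 ms (below 30 FPS). */
        size_t spikes = 0;
        for (size_t i = 0; i < n; i++)
            if (frame_ms[i] > 33.3)
                spikes++;

        /* 99th-percentile frame time: sort, then index near the top. */
        qsort(frame_ms, n, sizeof frame_ms[0], cmp_double);
        double p99 = frame_ms[(size_t)(0.99 * (n - 1))];

        printf("spikes beyond 33.3 ms: %zu of %zu frames\n", spikes, n);
        printf("99th-percentile frame time: %.1f ms\n", p99);
        return 0;
    }

The average of those made-up numbers is close to 60 FPS, yet two frames are badly late; that gap between averages and percentiles is exactly what the review's latency plots expose.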
  • by Guspaz ( 556486 ) on Friday December 21, 2012 @09:50PM (#42366431)

    The subject might look like I'm trying to troll, but... I'm actually referring to TFA. AMD sent the TechReport reviewer a Gigabyte Z77 motherboard with an Intel i7-3770K processor. So it says on the first page of TFA.

    AMD... sent an Intel processor... to review an AMD GPU...

    Talk about lack of faith in your own products.

    • Re: (Score:1, Insightful)

      by zenlessyank ( 748553 )
      I believe AMD licenses its technology to Intel since the Itanium sucked. Intel might know how to fabricate the 'engine' better, but AMD DESIGNS better engines. So AMD is getting paid whether it is an Intel or AMD proc.
      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Actually, they cross-license.

    • Depends on your definition of "suck". Price-for-price, AMD and Intel are fairly comparable right now (each one is better at some things, sometimes embarrassingly so, but in most cases they aren't far apart). However, Intel's line goes a lot higher than AMD's. A top-of-the-line AMD desktop processor is currently around $200 (less on sale, which isn't hard to find this time of year). A top-of-the-line Intel CPU will run you over $1000, and that's on sale. The 3770K isn't top of the line, but it is well over $300 on sale. Note that that's not including the cost of the motherboard either, which also seems to be higher for Intel chipsets.

      For people who want the absolute best performance and for whom money is no object, Intel is the current king. Since the goal of the benchmarking is to test the graphics processor, they wanted to make sure that the performance wouldn't be CPU-bottlenecked.

      I'm saying this by way of giving you the benefit of the doubt, but since anybody who pays attention to current benchmarks and hardware prices knew it already, it really does in fact look like you're trolling.

      • by Rockoon ( 1252108 ) on Friday December 21, 2012 @11:53PM (#42366931)
        Anyone who has spent some time listing their alternatives within any sort of normal budget margin knows that there will be a lot of AMD chips under consideration and very few Intel chips.
        • by gagol ( 583737 )
          Problem is, Joe Sixpack goes to a store where the advice he receives comes from a salesman paid in part by commission.
      • by Guspaz ( 556486 )

        I'm not trolling, and I've owned a few AMD CPUs in my day (four, I think? Does the Geode count?). I was just flabbergasted that AMD would send out Intel CPUs to review their GPUs with. I mean, eating your own dogfood is kind of a fundamental thing, and when you do something like this, it sends the message that your own products aren't good enough for the purpose. I chose an inflammatory title to highlight how ridiculous this is.

        I would actually argue that AMD only holds an advantage at the extremely low-end

        • I would actually argue that AMD only holds an advantage at the extremely low-end, below the $40 pricepoint (you can get a dual-core sandy bridge for about that),

          Why? For multithreaded benchmarks, the latest AMD ones seem to slot in somewhere between the i5 and i7 (usually closer to the i7) and sometimes beat the i7 handily. For single-threaded stuff, they're at around 75% of the i5.

          And they're about the same price as the i5, but support features the i5 lacks, such as ECC memory.

          As far as I can see, unless you're heavily

          • by Guspaz ( 556486 )

            Consumers need some light multithreading, but single-threaded performance is still king. ECC is not something consumers care about (enterprise does, sure, but not consumers).

            Power matters less in desktops, so using more power to do the same thing is not as big a problem there. But in the mobile space, it's a big problem.

            The use cases you're talking about aren't really what a typical consumer or even office machine is used for.

      • A top-of-the-line AMD desktop processor is currently around $200 (less on sale, which isn't hard to find this time of year). A top-of-the-line Intel CPU will run you over $1000

        A top-of-the-line Chevy Suburban is currently around $200. A top-of-the-line Dodge Challenger will run you over $1000.

      • by Kjella ( 173770 )

        Depends on your definition of "suck". Price-for-price, AMD and Intel are fairly comparable right now

        We heard the same when AMD tried to sell their FX-8150 for $245 and the customers didn't agree; now their FX-8350 sells for $195. When you have to sell a considerably better processor for $50 less while your competitor's prices are practically unchanged, it's a good hint that the former wasn't particularly good value. I think AMD's reputation for always providing good value has gotten more than a little tarnished at their high end, sure if you found a good sale but that's usually a case of inventory they can'

        • Vishera is enough of a step up that I think there's still hope for AMD. I only own one AMD processor (and at least 3 Intel ones) but the 8350 looks good enough for its price point, and I guess I have a certain degree of "support the underdog" here. Nobody wants Intel to be even more of a monopoly than they are right now. Granted, the 81xx series was a huge disappointment, but that doesn't mean the company is automatically doomed to never again be relevant. The original P4s from Intel were crap too...

          Also

    • It really depends on what you're doing and what you're spending. If your task can use all 8 cores of a Piledriver CPU, it's very competitive. I have to wonder if a large part of AMD's problem is Intel is at 220nm, while AMD is still stuck at 320nm. It would take an incredible design to be competitive.

      • by Guspaz ( 556486 )

        I think the fact that AMD's server chips need twice as many cores just to keep up with Intel's parts is kind of indicative of the problem, and I really do hope that they have something competitive in the market. Intel's current products are great, but only because AMD kicked them in the ass with the Athlon 64 years ago. If they go too long without a real competitor in the higher end of the market (beyond where ARM can reach), they'll stagnate.

      • Nitpick: 22nm, 32nm. You may be thinking of Angstroms, which are 1/10 of a nanometer. Also, AMD has 28nm in their GPUs; I'm not sure why their CPUs are still using a 32nm process.

    • by Noishe ( 829350 )

      Wouldn't AMD be targeting the 8000M at Intel boards? If you're going to get an AMD CPU and integrated graphics, they want you to go for a Trinity solution.

      • by Guspaz ( 556486 )

        AMD's APUs are faster than Intel's iGPUs, but much slower than discrete chips. Why shouldn't you have an AMD CPU with a discrete GPU in the notebook market?

    • by Anonymous Coward

      If you want the reporters to do fancy graphs comparing performance with each different component, using an Intel chip would be the only way to go. Now the reporter can show the performance difference between the new AMD card, an Nvidia card, and Intel's HD4000, which may as well be shown as the baseline.

      It also shows that AMD hasn't tinkered with their GPU architecture to favor their own CPUs over competitor's.

      Disclaimer: I am using an AMD GPU on an Intel CPU system.

      • by Guspaz ( 556486 )

        Benchmarks of the HD4000 would have been useless; since AMD sent a desktop chip, performance would not be representative of the mobile HD4000's. AMD could just as easily have sent an AMD CPU for apples-to-apples comparisons.

    • The subject might look like I'm trying to troll, but... I'm actually referring to TFA. AMD sent the TechReport reviewer a Gigabyte Z77 motherboard with an Intel i7-3770K processor. So it says on the first page of TFA.

      AMD... sent an Intel processor... to review an AMD GPU...

      Talk about lack of faith in your own products.

      ===
      AMD would surely have their products run on in-house stuff. They would also want to show that if you had an i7 or whatever Intel processor (Atom excluded, I guess), the GPU would run well too.

  • When people are looking to get better performance, they seek the processing power of other processors. Yes, I know "GPUs are optimized for [blah blah blah]" but in the end, they are still a processor and are efficient at what they do. x86 is just not so efficient but we've got all this legacy crap... and why? Because the software business liked to keep the sources to themselves, so we need to keep our x86 processors. If everything was under Linux and we wanted to move to a better performing processor? R

    • by adolf ( 21054 )

      How the 486 was the combination of 386 and 387?

      No. A 486 is not a combination of a 386 and a 387.

    • The 486 was the first x86 CPU that:
      was pipelined
      had on-chip cache (8KB)
      had a built-in FPU (387)

      Basically, they took concepts that were being done in risc processors and used them in the x86 world.

      Following up... The Pentium brought superscalar design and, IIRC, a pipelined FPU. The Pentium MMX brought integer SIMD. The Pentium Pro brought out-of-order design.
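
      To make "integer SIMD" concrete: a single instruction operates on several packed integers at once. A minimal sketch in C, using the SSE2 intrinsics that descended from MMX (an x86 compiler with SSE2 support is assumed; the data is made up):

          #include <emmintrin.h> /* SSE2 integer SIMD intrinsics, MMX's successor */
          #include <stdio.h>

          int main(void)
          {
              /* Eight 16-bit integers per operand, added in one instruction. */
              short a[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
              short b[8] = { 10, 20, 30, 40, 50, 60, 70, 80 };
              short out[8];

              __m128i va = _mm_loadu_si128((const __m128i *)a);
              __m128i vb = _mm_loadu_si128((const __m128i *)b);
              __m128i vc = _mm_add_epi16(va, vb); /* eight adds at once */
              _mm_storeu_si128((__m128i *)out, vc);

              for (int i = 0; i < 8; i++)
                  printf("%d ", out[i]);
              printf("\n");
              return 0;
          }

      A scalar loop would need eight separate add instructions for the same work; that per-instruction parallelism is all "integer SIMD" means here.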

  • by Nimey ( 114278 ) on Friday December 21, 2012 @11:11PM (#42366757) Homepage Journal

    You can't fool me, submitter!

  • My old GeForce 9600 GT and my slightly less old GTS 250 play every (shitty Xbox 360 port) game no problem, at higher resolution than any notebook provides, with little stress. I seriously doubt Facebook, or shit, even SolidWorks (which runs like butter on a mobile Intel chip), gives a shit.

    Software has once again peaked, and stagnated for a half decade, while hardware is running nuts for no real reason

    • by tibman ( 623933 )

      You need some new games : )

    • by fyngyrz ( 762201 )

      while hardware is running nuts for no real reason

      I have project builds (in C) that take many minutes to complete on an 8-core, 3 GHz machine with many gigs of memory available. I have at least one application that consumes all eight cores just to run -- and yes, it's written efficiently. I have others that consume a core or two... and less would be better. I *do* multitask. As far as I'm concerned, neither software (C compiler and linker in this case) nor hardware are anywhere *near* where I'd like them to be. Your assertion of "no reason" strikes me as ludicrous.

      • while hardware is running nuts for no real reason

        I have project builds (in C) that take many minutes to complete on an 8-core, 3 GHz machine with many gigs of memory available. I have at least one application that consumes all eight cores just to run -- and yes, it's written efficiently. I have others that consume a core or two... and less would be better. I *do* multitask. As far as I'm concerned, neither software (C compiler and linker in this case) nor hardware are anywhere *near* where I'd like them to be. Your assertion of "no reason" strikes me as ludicrous.

        It's too bad you couldn't use an example which includes a video card, since that's what we're talking about right now. People out there are buying video cards that consume more power than their entire computer system including the display, and for what? Most of them, for nothing. GPGPU tools for the average user are essentially nonexistent, and you can only perceive so many FPS.

        • by fyngyrz ( 762201 )

          I am using examples that include video cards (4 of them, in my case.) My OS uses video cards to accelerate mainline processes; furthermore, this is done through a standard system mechanism and is relatively easy to incorporate in a considerable range of code; end users see benefits commensurate with the graphics engines they have installed. The processors in graphics cards are specialized for certain types of operations that can be very useful, and in use will significantly outperform general purpose CPU instructions.

          • by gmhowell ( 26755 )

            I am using examples that include video cards (4 of them, in my case.) My OS uses video cards to accelerate mainline processes; furthermore, this is done through a standard system mechanism and is relatively easy to incorporate in a considerable range of code; end users see benefits commensurate with the graphics engines they have installed. The processors in graphics cards are specialized for certain types of operations that can be very useful, and in use will significantly outperform general purpose CPU instructions.

            It really doesn't matter (other than as a market force) if gamers are buying cards that can handle more display activity than they're throwing at them. What matters is if these cards (and CPUs, for that matter) are overpowered in all places they are utilized, and the answer to that is flat-out no.

            I'm sensing that you use a BSD-derived OS...

          • My OS uses video cards to accelerate mainline processes; furthermore, this is done through a standard system mechanism and is relatively easy to incorporate in a considerable range of code; end users see benefits commensurate with the graphics engines they have installed.

            The problem is, it sucks, because you have to send data to the GPU to be processed. If you had more CPU, you wouldn't have to do that. GPGPU is the wrong answer to the problem. Making a CPU that looks more like a GPU is a better one. GPGPU only exists because of a bunch of gamers who keep buying faster and faster parts. If they weren't doing that, people would spend that effort figuring out how to make CPUs faster instead. So instead of just getting faster CPUs that any program can use with a mere recompile
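
            That transfer-cost argument can be made concrete with back-of-the-envelope arithmetic. A minimal sketch in C; the bandwidth and throughput figures are illustrative assumptions, not measurements of any real CPU, bus, or card:

                #include <stdio.h>

                int main(void)
                {
                    /* Illustrative assumptions, not real hardware figures. */
                    double bytes   = 256e6; /* data to process: 256 MB           */
                    double bus_bps = 8e9;   /* PCIe transfer rate, bytes/second  */
                    double cpu_bps = 4e9;   /* CPU processing rate, bytes/second */
                    double gpu_bps = 40e9;  /* GPU processing rate, bytes/second */

                    double cpu_time = bytes / cpu_bps;
                    /* The GPU pays for a round trip over the bus plus the compute. */
                    double gpu_time = 2.0 * (bytes / bus_bps) + bytes / gpu_bps;

                    printf("CPU: %.3f s, GPU incl. transfers: %.3f s\n", cpu_time, gpu_time);
                    printf("offloading %s here\n", gpu_time < cpu_time ? "wins" : "loses");
                    return 0;
                }

            With these numbers the ten-times-faster GPU still loses, because the two bus crossings cost more than the CPU's entire run; offload only pays when the compute dwarfs the transfer.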

    • by Anonymous Coward

      Certainly an ATI 4870 can play all current games well at 1280x1024 if the visual settings are adjusted a little. By adjusted, I certainly don't mean removing the 'eye candy' -- just turning down or off some options that exist purely to burn off the obscene performance available with today's high-end cards.

      Only those who play at much higher resolutions (largely silly, given the games derive from console version assets optimised for much lower resolutions) need much newer cards.

      The situation for older GPU owne
