Casting a Jaundiced Eye On AnTuTu Benchmark Claims Favoring Intel

MojoKid writes "Recently, industry analysts came forward with the dubious claim that Intel's Clover Trail+ low-power processor for mobile devices had somehow seized a massive lead over ARM's products, though there were suspicious discrepancies in the popular AnTuTu benchmark used to showcase performance. It turns out that the situation is far shadier than initially thought. The version of the benchmark used in testing isn't just tilted to favor Intel; it seems to flat-out cheat to accomplish it. The new 3.3 version of AnTuTu was compiled using Intel's C++ Compiler, while GCC was used for the ARM variants. The Intel code was auto-vectorized; the ARM code wasn't (there are no NEON instructions in the ARM version of the application). Granted, GCC isn't currently very good at auto-vectorization, but NEON is now standard on every Cortex-A9 and Cortex-A15 SoC, and these are the parts people will be benchmarking. But compiler optimizations are just the beginning: the Intel code apparently breaks the benchmark's function outright. At a certain point, it runs a loop that's meant to execute 32 times only once, then reports to the benchmark that the task completed successfully. The optimization in question is part of ICC (the Intel C++ Compiler), but was only added recently; it's not the kind of procedure you'd trigger by accident. AnTuTu has released an updated "new" version of the benchmark in which Intel performance drops back down 20-50%. Systems based on high-end ARM devices again win the benchmark overall, as they did previously."
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward

    Of course not.

    Make them do real work loads. With Monkeys.

    • What became of the famous "gaussian blur benchmark"? What could be more universal? My personal favorite is hitting the "x^2" key on the calculator until it took more than two minutes to get a result.

    • by hairyfeet ( 841228 ) <bassbeast1968 AT gmail DOT com> on Saturday July 13, 2013 @05:43PM (#44271775) Journal

      Look up "Intel cripples compiler" and you'll see it's MUCH worse than merely tilting the benchmarks in Intel's favor; this bullshit means that ANY chip that doesn't have a CPUID of "GenuineIntel" gets royally fucked by ALL SOFTWARE that is compiled with the Intel compiler.

      If you look up the above on Google you'll find a researcher who has done studies, and if that doesn't deserve antitrust I don't know what does. He started looking into it when he found that his code would run faster on an old P4 than on a new AMD, and it is soooo nasty that if you take a VIA chip, the only chip that lets you change the CPUID, and change it from "CentaurHauls" to "GenuineIntel", it jumps nearly 30% in the benches!

      So do NOT buy chips based on the benches; they are as rigged as the old "quack.exe", but this is a thousand times worse because ANY program that is compiled with this is crippled and WILL run slower on ANY non-Intel chip. So please, programmers: use GCC, or use AMD's compiler (which is based on GCC and doesn't favor one chip over another). And for those looking for a system, DO NOT buy Intel if you can help it, since you'd be supporting this kind of market-rigging bullshit. After seeing the results and seeing just how badly Intel is rigging things, I went exclusively AMD in my shop and even in my family, with NO regrets; at least this way I'm supporting a company that isn't bribing OEMs and rigging markets.

      Seriously guys, don't take MY word for it; look it up. They have even rigged it in the past to push shittier chips over better ones. The guy doing the tests found that even though the early P4 was a slow-as-hell chip, when you ran a program compiled with ICC on both the P3 and the P4, surprise! The P4 would win. Same program compiled with GCC? The P3 won by over 30%.

      • by Anonymous Coward

        Why would you use an Intel compiler on a non-Intel CPU?

        • by Anonymous Coward

          The real question is why would you use an Intel compiler. At all. Period.

          • by Anonymous Coward

            Because for Intel CPUs it is actually really good?

        • The Intel compiler puts code in the compiled binary that does the checking. It doesn't matter whether you compile on Intel; the resulting binary is crippled and runs slower than necessary on AMD.
          I worked with a guy who wrote a program to patch said binaries to remove the checking; this resulted in a nice speedup on all our boxes, since it was an AMD shop.
          GP is absolutely right.

      • by Macman408 ( 1308925 ) on Saturday July 13, 2013 @08:20PM (#44272533)

        To be fair, any use of a benchmark to judge which system to buy is pretty silly. The best benchmark you can make is something that is identical to your intended workload; e.g. play a game or use an application on several systems, and see which feels better to you.

        Taking some code written in a high-level language and compiling it for a platform is a great benchmark, if that's what you're going to be doing with the system. But you'd better be using the compiler you'll be using on the system. If you need a free compiler, you should test GCC on both. If you are considering buying Intel's compiler (it's not free, is it?), then add it in as another test to see if it's worth the extra outlay of cash. Intel puts a lot of work into making compilers very good on its systems, so if you're going to use the Intel compilers for Intel systems, it's perfectly valid to compare against using GCC on an ARM platform, if that's what you'd be using on ARM.

        But if most of what you're running will be compiled in GCC for either platform, yes, you should absolutely test GCC on both.

        That said, much of what's noted isn't necessarily intentional wrongdoing. For the example of breaking functionality, it's quite possible that the compiler made a perfectly valid optimization to get rid of 31 of the 32 loop iterations. One of my professors once told a story about how he wrote a benchmark, and upon compiling it, found that he was getting some unbelievably fast results. As in literally unbelievable - upon investigation, he discovered that the main loop of the benchmark had been completely optimized away, because the loop was producing no externally visible results. (As an example, if the loop were to do "add r3 = r2, r1" 32 times, a good compiler could certainly optimize that down to a single iteration of the loop; as long as r2 and r1 are unchanging, then you only need to do it once. Similarly, even if r1 and r2 are changing on each iteration, you need to use the result in r3 from each iteration of the loop, otherwise you could optimize it to only perform the final iteration, and the compiler could pre-compute the values that would be in r2 and r1 for that final iteration.)

        So perhaps it's a bad benchmark - but I wouldn't default to calling it malicious, just that the benchmark isn't measuring what you might want it to measure. And quite frankly, most users aren't going to be doing anything that even vaguely resembles a benchmark anyway, so they really have little justification to make a buying decision based on them.

        • The best benchmark you can make is something that is identical to your intended workload; e.g. play a game or use an application on several systems, and see which feels better to you.

          And that's exactly what benchmarks are supposed to approximate. If they aren't doing that, it's because they are bad benchmarks.

          People can't go and get hands-on with every system out there, and even if they could, they can't just install all their own software on it and try it out for a few days... so we need some objective

        • I'm sorry dude, but while you started out well, you quickly ran into bullshit. Look up what I said to look up, "Intel cripples compiler", and there you WILL see the smoking gun... the Pentium 3. If your argument were valid, that it's JUST that Intel knows their own chips well, then the P3 wouldn't get penalized by ICC... but it does. And again, you ONLY change the CPUID; frankly, ANY compiler that uses CPUID to judge what a chip can do instead of the flags? Bullshit. But you switch from CentaurHauls to genu

          • Let me start this by saying I'm no fan of Intel; quite frankly, many of their business practices are a little suspect, and they've had some downright nasty ones before (like selling a bundle of CPU + Northbridge for less than the CPU alone, and then saying it violated the agreement if the OEM buyer decided to toss the Northbridge in the garbage in favor of a different manufacturer's chipset). But I don't see a slam-dunk case for antitrust in this alone.

            The first reason is that there may actually be technic

            • Your final sentence is the case, dude: Intel gives big discounts to major software companies, hence why nearly every benchmark uses ICC. And again, the smoking gun is the P3. If what you were saying were true, the P3 would NOT be penalized, since they know their own chips, but it is. Again, take the same program and run it on both a P3 and a P4 at the same speed: with ICC the P4 will get a 30% speed boost while the P3 gets a boat anchor tied to it. The same code with GCC? The P3 will win by 30%.

              And as o

      • by godrik ( 1287354 )

        Are you talking about the compiler that was checking the processor ID instead of the capabilities of the processor? That's an old story that has been fixed a long time ago.

        In all fairness, compiler optimizations are close to black magic. The only reasonable way to know what is best is to test multiple compilers and see what comes out. Depending on the code, some compilers will be better than others. Even on Intel platforms, depending on benchmarks, sometimes gcc performs much better, sometimes icc performs b

        • by OneAhead ( 1495535 ) on Sunday July 14, 2013 @12:47AM (#44274191)
          If by fixed you mean "Intel put a disclaimer on its compiler saying [ICC] may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors", then yes, it is fixed. Otherwise, not so much. I happen to have tested ICC performance against other compilers not too long ago, and it refuses to generate AVX instructions that are reachable when running on an AMD CPU. The -xO flag didn't help; all it did was turn off AVX altogether. Adding flags that prevent it from generating other execution paths than the AVX one didn't help either: when started, the binary would just print a clean (but false) error message that the processor doesn't support its instructions, and exit immediately. From this, I concluded that after all these years, they still check for "GenuineIntel" instead of looking at the actual capability flags. In the end, we found absolutely no way to make ICC generate AVX instructions that would be executed on an AMD processor.
          • Have you tried the AMD compiler yet? It's free, and according to the person who originally found the "Intel cripple code" it does NOT favor any one chip over another, but actually checks the usage flags like a compiler SHOULD, not this CPUID market-rigging bullshit. While it was originally based on GCC, they have added a bunch of optimizations and updates to make it faster and support the latest and greatest.

            Here is the link, at least I think; it's been a while since I went looking for dev tools, but as you

            • Our benchmarks were done with a scientific workload that is not even representative of scientific workloads in general, so I think they will be all but useless to general users. That said, we did try Open64 (not sure if it's fair to call it "the AMD compiler"), and it came out pretty good. But... so did GCC 4.7.2. To my surprise, it was way faster than the GCC 4.5 we used before, and scored virtually on par with ICC, Open64 and Portland (a.k.a. PGI). One important thing to note is that we gave gcc the -ffast

          • I have a question, as I'm not a programmer: what advantage do you get by using AVX over SSE? Because I took a quick look at AVX and it's only supported on Bulldozer and Sandy Bridge, which would be a very VERY small portion of the chips out there. It seems to me that if you wanted the widest possible support you'd use the SSE flags, since SSE4 has been around since Phenom I and Core 2.
      • I've been all AMD almost forever, for this reason among others. [2010] [2009] [2005]

        I found those three on the first page of my search results, and quit looking. Different search terms and a more determined search will find hits as old as about 1999, maybe even older. Hard to remember, but

        • Glad to see somebody else put their money where their mouth is and support market competition over market rigging. I was a big Intel chip user until the bribery and ICC scandals came out and since then I've not bought a single Intel chip at the shop and even my family and I are 100% AMD, 4 desktops, a laptop, and a notebook and they ALL run great.

          The thing most folks just don't seem to realize is how much better the bang for the buck is on AMD, you can pick up a 1035T or 1045T for around $100, quads for les

      • This is why you should use more than one benchmark when testing newly released hardware, especially if you're going to write an article on your findings.
  • But still... (Score:3, Insightful)

    by sunking2 ( 521698 ) on Saturday July 13, 2013 @04:26PM (#44271391)
    It is the suite of tools, not just the processor. If Intel offers a better processor/compiler package than is available for ARM, why shouldn't they tout it? I'm not saying they are presenting it in the correct way, but I do think they have a valid point they want to make: with Intel you get more than a CPU, you get a heck of a lot of tool expertise. And for some people that is worth something.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      If you use ICC instead of GCC for x86, then you should use the ARMCC compiler, or Keil, or one of the others for ARM.

    • by boorack ( 1345877 ) on Saturday July 13, 2013 @04:36PM (#44271457)
      The compiler was one of many skews in this "honest" benchmark, aside from deliberately "fixing" benchmark code for Intel and deliberately breaking the ARM benchmark by disabling NEON. In my opinion they should run identical code, trying to maximize its performance on both platforms, and in the case of Intel use both compilers and post both results. This would lead potential customers to correct conclusions, as opposed to the bunch of lies and misinterpretations AnTuTu actually posted.
      • by godrik ( 1287354 )

        Well, that is stupid. You NEVER run identical code on different architectures, especially when they are not even binary compatible. You almost always optimize the code for a given architecture in the fractions of the code that are particularly important. Querying cache sizes and checking the number of hardware contexts are common things.

        For instance libcairo has some NEON specialized code path. ffmpeg contains codepath for pretty much every single architecture out there.

        • by dryeo ( 100693 )

          You run configure (with various options, such as --enable-gpl for FFmpeg) && make for each platform. For benchmarking, I guess you could do make check for Cairo, but that is not a very good test, as make check needs exactly the right versions of ghostscript, various fonts, and I don't know what else. For FFmpeg you could run make fate after downloading the samples, and time it. This would be a fairly good C benchmark for various CPUs because, as you stated, there are code paths for a hell of a lot of CPUs. The

        • by gl4ss ( 559668 )

          He probably meant identical in the sense that the input->output behavior is identical, which is what benchmarking two systems should be about anyway.

    • That's true, but in this case it looks like it is simply a broken version of a benchmark that Intel latched on to for marketing purposes: "At a certain point, it runs a loop that's meant to be performed 32x just once, then reports to the benchmark that the task completed successfully."
    • They're rigging results by using parameters and optimizations useful only for the benchmarks in question. In other words, unless the only thing you use processors for is benchmarks, you have learned absolutely nothing about how this processor will work in any real world application.

    • It is the suite of tools, not just the processor. If Intel offers a better processor/compiler package than is available for ARM, why shouldn't they tout it?

      Because you'll be stuck with the architecture for quite some time, while the SW tools may evolve faster than you think (not to mention that there's always the profiler, compiler intrinsics, and inline assembly if you need top performance right here, right now, for a particular piece of code; then only your brain and the piece of silicon come into the equation, not some silly compiler).

    • by jthill ( 303417 )

      If you want to know why they didn't present honest results, it looks like you're going to have to ask them. Until they explain, keep in mind that the usual reason people put their thumb on the scale is that they know they can't win honestly.

    • It is the suite of tools, not just the processor. If Intel offers a better processor/compiler package than is available for ARM, why shouldn't they tout it? I'm not saying they are presenting it in the correct way, but I do think they have a valid point they want to make: with Intel you get more than a CPU, you get a heck of a lot of tool expertise. And for some people that is worth something.

      Absolutely correct: you should judge the combination of processor + commonly used compiler. For example, if Apple built an iPad with an Intel processor, then any iPad app would be built with Clang for ARMv7, Clang for ARMv7s, and Clang for x86_64, and you could directly compare all three versions.

      However, you must be careful. You need to check real-life code. If you run identical code 32 times and an optimising compiler figures out it needs to run only once, that's not real-life. If this is what your benc

    • by bcmm ( 768152 )
      But they aren't using the other compiler properly; their results effectively rely on the lie that they can do SIMD and ARM cannot. And even without the actual dishonesty, it's a synthetic benchmark selected specially to show off their compiler/processor's strong points.
  • by Anonymous Coward

    • by Molochi ( 555357 )

      Just the controversy. The news, buried at the bottom of the article, is that AnTuTu has a newer version that drops Intel performance back to where it was before.

  • Fixed, apparently (Score:3, Informative)

    by edxwelch ( 600979 ) on Saturday July 13, 2013 @04:44PM (#44271493)

    In fairness to AnTuTu, they released a new version which tries to rectify the problem.

    • by gl4ss ( 559668 )

      Yeah, but does the new version remove optimizations from the Intel compile, or (the right way) add those to the ARM version?

      seriously though.. who gives a fuck. the tests should be done with the usual android toolchain... it's not like anyone is going to use _that_ intel processor for scientific computing.

  • by gTsiros ( 205624 ) on Saturday July 13, 2013 @05:12PM (#44271649)

    ...where companies used to rig benchmarks?

    Oh right, we're still not past them.


    Always use real world applications, in actual, real usage. Never benchmarks.

  • I know some ignorant people who will take these benchmarks as gospel in their righteous views.

  • by citizenr ( 871508 ) on Saturday July 13, 2013 @06:17PM (#44271965) Homepage

    ARM looks like a sore loser here.

    >GCC isn't currently very good at auto-vectorization, but NEON is now standard on every Cortex-A9 and Cortex-A15 SoC

    So the conclusion is to remove Intel optimizations instead of improving the ARM ones?

    • by imgod2u ( 812837 )

      Well, no. There are better compilers out there for ARM; Keil, for one. More importantly, real code that cares about performance won't just write a loop and let the compiler take care of it; it'll use optimized libraries (which both Intel and ARM provide).

      Compiler features like auto-vectorization are neat and do improve spaghetti-code performance somewhat, but anyone really concerned with performance will take Intel's optimized libraries over them. So if we're going to compare performa

    • ARM will invest very little in GCC now, because of GPLv3. The question should be why the benchmark used a generic compiler for ARM (GCC) versus a vendor-specific compiler for Intel (ICC). Why was ARM's own compiler not used?
  • Instead of pointing a finger and telling people about it, why not fix it and SHOW people the actual numbers with optimisations in place on both platforms, and without those optimisations on both platforms? Meanwhile you can also say that Intel's C++ compiler is simply better than GCC, as apparently the Intel compiler has all optimisations ON by default and GCC doesn't.
