Hardware

NVIDIA Tegra X1 Performance Exceeds Intel Bay Trail SoCs, AMD AM1 APUs

An anonymous reader writes: An NVIDIA SHIELD Android TV modified to run Ubuntu Linux is providing interesting data on how NVIDIA's latest "Tegra X1" 64-bit ARM big.LITTLE SoC compares to various Intel/AMD/MIPS systems of varying form factors. Tegra X1 benchmarks on Ubuntu show strong performance from the X1 SoC in this $200 Android TV device: it beats out low-power Intel Atom/Celeron Bay Trail SoCs and AMD AM1 APUs, and in some workloads even comes close to an Intel Core i3 "Broadwell" NUC. The Tegra X1 features Maxwell "GM20B" graphics, and total power consumption is under 10 Watts.

  • Not an AMD CPU (Score:5, Interesting)

    by Guspaz ( 556486 ) on Wednesday July 29, 2015 @12:28AM (#50201977)

    The X1 uses a standard ARM Cortex A57 (specifically it's an A57/A53 big.LITTLE 4+4 config), so this says more about ARM's chip than anything nVidia did...

    Now if you compared nVidia's Denver CPU, their in-house processor... The Denver is nearly twice as fast as the A57, but only comes in a dual-core config, so it's probably drawing a good deal more power. When you compare a quad-core A57 to a dual-core Denver, the A57 comes out slightly ahead in multicore benchmarks. Of course, single core performance is important too, so I'd be tempted to take a dual-core part over a quad-core if the dual-core had twice the performance per-core...

    Why the X1 didn't use a variant of Denver isn't something that nVidia has said, but the assumption most make is that it wasn't ready for the die shrink to 20nm that the X1 entailed.

    • by Guspaz ( 556486 )

      Sorry, I meant not an *nVidia* CPU.

    • by mcrbids ( 148650 )

      I'm bully on ARM. With the (almost) collapse of AMD as a "first rate" processor maker, it's good to see Intel get some serious competition in a significant market space.

      My only beef with ARM is that comparing CPUs is harder than comparing video cards! The ARM space is so fragmented with licensed cores and seemingly random numbers indicating the "version" that I have no idea how, for example, a Snapdragon 808 processor compares to a Cortex A9 or an Apple A7.

      Really, I'm lost. But the $40 TV stick with the 4x core A9 w

      • by robi5 ( 1261542 )

        > I'm bully on ARM

        Is that some new slang, or did you mean bullish?

        • Actually, it is old slang, dating back as far as 1609 according to Merriam-Webster. It enjoyed a place in the popular lexicon during the latter part of the 19th century and was a commonly used expression of U.S. President Theodore Roosevelt.

          • by robi5 ( 1261542 )

            Interesting. Even a literal 'bully on' search didn't turn up anything (m-w.com doesn't describe this usage either). I've only heard 'schoolyard bully' and 'bully for you' in the past, and of course somebody being 'bullish on' or 'bearish on' something.

      • I'm feeling rather sad. AMD graphics still have completely open drivers, while Nvidia relies on blobs at a level higher than on-device firmware.
    • No, it says more about the bad benchmark than anything else.

      I'm not impressed that ARM can NOP as fast as an i3, to put it bluntly.

      I say this because of how they ran the benchmarks: they compile the ARM variants in fully optimized mode and the x86 variants as generic x86 code. From that point on, reading further is a waste of time. They might as well have compiled with debugging on, from a benchmarking perspective.

      It's intentionally skewed.

  • by the_humeister ( 922869 ) on Wednesday July 29, 2015 @12:48AM (#50202035)

    Look here at the compiler settings [phoronix.com]. The x86 processors are somewhat hampered by non-optimal settings. For example, the i3 5010U is set to -mtune=generic. In my experience, that basically defaults to AMD K8 tuning with no AVX/AVX2 support. The better option would be -mtune=native, or better yet -march=native, which detects the host CPU and produces a more optimized binary.
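
    To see concretely what a generic build leaves on the table, here is a minimal, hedged sketch (not from the article; GCC-specific, and the file name is made up) that reports which SIMD extensions the host CPU actually exposes. -march=native lets the compiler use everything reported here, while a plain generic x86-64 build stops at SSE2.

        /* cpu_features.c -- report SIMD levels on the host CPU (GCC builtins).
         * Build: gcc -O2 cpu_features.c -o cpu_features
         */
        #include <stdio.h>

        int main(void)
        {
            __builtin_cpu_init();   /* initialize GCC's CPU-detection state */
            printf("SSE4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
            printf("AVX:    %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
            printf("AVX2:   %s\n", __builtin_cpu_supports("avx2")   ? "yes" : "no");
            printf("FMA:    %s\n", __builtin_cpu_supports("fma")    ? "yes" : "no");
            return 0;
        }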

    • it does worse? You just found out that the benchmark is bullshit.

      • This is exactly why the benchmarks include:

        1) a way to repeat them, as described in the article (see page 4: 'phoronix-test-suite benchmark 1507285-BE-POWERLOW159'), and
        2) the compiler options used.

        Armed with those two pieces of information, you can go and "prove" that the benchmark is, as you called it, bullshit. Although rarely, if ever that I am aware of, does anyone respond to an article with those two pieces of information and say, "here, if you run it in thi

        • > If you really want to prove that the benchmark is crap, then by all means make meaningful suggestions to _any_ of the existing machine benchmarks.

          That's a bit facetious. If you've been around the benchmarking world as long as you say you have, you'll know that the compiler settings are *always* a cause of controversy.

          Nobody is happy when compiler settings are made that don't favor their side (whatever it is).

    • by Bert64 ( 520050 )

      Well, with the possible exception of those running Gentoo, 99% of end users will be running precompiled software that has to be compiled for a generic CPU, as the distributor doesn't know exactly what type of processor it's going to end up running on.
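
      (One common way to square that circle, sketched below under the assumption of a reasonably recent GCC and glibc with ifunc support, is function multi-versioning: the distributor ships one generic binary, and the hot function is cloned per ISA level and selected at load time. The file and function names are made up for illustration.)

        /* mv_dot.c -- a generic binary that still uses newer SIMD where present,
         * via GCC's target_clones function multi-versioning.
         * Build: gcc -O3 -march=x86-64 -mtune=generic mv_dot.c -o mv_dot
         */
        #include <stdio.h>

        __attribute__((target_clones("avx2", "sse4.2", "default")))
        void scale(double *dst, const double *src, double k, int n)
        {
            for (int i = 0; i < n; i++)
                dst[i] = src[i] * k;   /* each clone is compiled (and vectorized) for its ISA */
        }

        int main(void)
        {
            double in[256], out[256];
            for (int i = 0; i < 256; i++) in[i] = i;
            /* The dynamic loader picks the best clone for the host CPU at startup. */
            scale(out, in, 0.5, 256);
            printf("out[255] = %f\n", out[255]);
            return 0;
        }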

      • I run Gentoo!!!

        Besides that, I did some very recent Intel CPU benchmarking as I tried to figure out IPC gains across CPU generations. I ran my benchmarks with GCC 4.8/4.9/5.2 and LLVM 3.6 on Nehalem and Ivy Bridge, and I also compared -mtune=generic vs. -march=native. Quick summary: for generic integer/floating-point code, the Intel Core i7 CPUs don't actually benefit much from optimizations for newer architectures, especially on x86-64. The exception here is that 32-bit generic x87 FPU code is slower than SSE2, but t
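
        For anyone wanting to reproduce that kind of comparison, a toy kernel like the sketch below (purely illustrative; not the harness used in the tests above, and the file name is made up) can be built twice with different flags and timed side by side.

          /* flagbench.c -- toy kernel for comparing compiler-flag choices.
           * Build two ways and compare wall time:
           *   gcc -O3 -march=x86-64 -mtune=generic flagbench.c -o bench_generic
           *   gcc -O3 -march=native               flagbench.c -o bench_native
           */
          #include <stdio.h>
          #include <time.h>

          #define LEN   1024
          #define ITERS 500000L

          static double a[LEN], b[LEN], c[LEN];

          int main(void)
          {
              for (int i = 0; i < LEN; i++) { a[i] = i * 0.5; b[i] = i * 0.25; }

              struct timespec t0, t1;
              clock_gettime(CLOCK_MONOTONIC, &t0);

              for (long iter = 0; iter < ITERS; iter++)
                  for (int i = 0; i < LEN; i++)
                      c[i] += a[i] * b[i];   /* vectorizable multiply-accumulate */

              clock_gettime(CLOCK_MONOTONIC, &t1);

              double sum = 0.0;
              for (int i = 0; i < LEN; i++)
                  sum += c[i];               /* keep the result observable */

              double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
              printf("checksum=%g  time=%.3fs\n", sum, secs);
              return 0;
          }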

      • by Junta ( 36770 )

        The problem being that they didn't do the same for ARM. Either that argument applies to both sides or to neither; they need to be held to the same standard.

  • by robi5 ( 1261542 )

    What incompetence led Intel to use a temporally relative name? It's on par with 'new' in a product name. Seems to work OK until it doesn't, and looks idiotic in retrospect.

    • What incompetence led Intel to use a temporally relative name? It's on par with 'new' in a product name. Seems to work OK until it doesn't, and looks idiotic in retrospect.

      What looks idiotic in retrospect is your comment. The name only has to make sense long enough to sell a bunch of units. Then they're on to the next product.

  • by tji ( 74570 ) on Wednesday July 29, 2015 @02:17AM (#50202281)

    Do VDPAU (NVIDIA's video decode hardware acceleration API) drivers exist for this platform? In the past, I believe only the x86 binary blob drivers supported VDPAU.

    If they exist, this would make an excellent MythTV DVR frontend device.
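
    If the driver on this platform does expose VDPAU, a small probe along these lines (a hedged sketch using libvdpau's public X11 entry point; nothing here is SHIELD-specific, and whether the ARM blob provides it is exactly the open question) would confirm whether hardware H.264 decode is reported:

        /* vdpau_probe.c -- ask the VDPAU driver whether it reports H.264 decode.
         * Build: gcc vdpau_probe.c -o vdpau_probe -lvdpau -lX11
         */
        #include <stdio.h>
        #include <stdint.h>
        #include <X11/Xlib.h>
        #include <vdpau/vdpau.h>
        #include <vdpau/vdpau_x11.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy) {
                fprintf(stderr, "cannot open X display\n");
                return 1;
            }

            VdpDevice device;
            VdpGetProcAddress *get_proc_address;
            if (vdp_device_create_x11(dpy, DefaultScreen(dpy), &device,
                                      &get_proc_address) != VDP_STATUS_OK) {
                fprintf(stderr, "no VDPAU device (driver missing?)\n");
                return 1;
            }

            /* Look up the decoder capability query entry point. */
            VdpDecoderQueryCapabilities *query = NULL;
            get_proc_address(device, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES,
                             (void **)&query);

            VdpBool ok = VDP_FALSE;
            uint32_t max_level, max_macroblocks, max_width, max_height;
            if (query &&
                query(device, VDP_DECODER_PROFILE_H264_HIGH, &ok, &max_level,
                      &max_macroblocks, &max_width, &max_height) == VDP_STATUS_OK &&
                ok)
                printf("H.264 High decode: up to %ux%u\n", max_width, max_height);
            else
                printf("H.264 decode not reported\n");
            return 0;
        }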

    • It should run roughly the same driver as a graphics card under a Linux PC (which shares much with the Windows driver, too).
      When NVIDIA first showed off the Tegra K1, it was running Ubuntu 12.04, with a screenshot of nvidia-settings among other things.

  • How "modified" was the Shield to run Ubuntu? Can I buy a Shield and get Ubuntu on it today? Or is this benchmarking an exercise in futility?
  • 10 times out of 10 NVidia non-GPU chip benchmarks are paid for by NVidia and are complete bullshit, designed to get fanboys to buy their latest chip. There have been no exceptions with Tegra to date.

    • Oh, the problem has just always been that the benchmarks were done on prototype hardware, which ran at higher frequencies than the final products, had better cooling, and a power supply. But the Shield Android TV is a final product that has all this, so the benchmarks are accurate.
  • by mtippett ( 110279 ) on Wednesday July 29, 2015 @04:13AM (#50202645) Homepage

    10W is incredibly hot for any sort of passively cooled, enclosed device.

    The machine would be quite warm (almost hot) to the touch unless they use some inventive cooling. The current Gen Apple TV is about 6W, and your typical smartphone is around 2-3 W.

    There is a reason that NV has only really been able to get a foothold in tablets, Android TV, cars, and their own Shield products. Quite simply put, they have historically been fast and hot. Great as an SoC within certain markets.

    • by Anonymous Coward

      You are correct only if you're comparing within the same form factor. For me, this race to 10-20W for plugged-in computing is crazy. Power isn't that expensive; as long as it doesn't consume much at idle, I don't really care how much it uses during operation, provided it isn't >300W or so, or 100W in a mostly enclosed space, or 50-80W in a fully enclosed space.

      Give me the better performance please.

      • A 5W SoC gets you down to passive-cooling territory. With small form factor desktops, replacing whiny or dead fans is a bother, since the fans are special parts you need to hunt down online.
    • by nvm ( 3984313 )
      It doesn't have passive cooling ( http://www.tomshardware.com/ga... [tomshardware.com] ); look at that fan. So your comment is pointless.
    • by tlhIngan ( 30335 )

      10W is incredibly hot for any sort of passively cooled, enclosed device.

      The machine would be quite warm (almost hot) to the touch unless they use some inventive cooling. The current Gen Apple TV is about 6W, and your typical smartphone is around 2-3 W.

      There is a reason that NV has only really been able to get a foothold in tablets, Android TV, cars, and their own Shield products. Quite simply put, they have historically been fast and hot. Great as an SoC within certain markets.

      Actually, it isn't too hot. ARM t

  • by serviscope_minor ( 664417 ) on Wednesday July 29, 2015 @04:27AM (#50202681) Journal

    Interesting take-home from the benchmark: the AMD desktop processors did pretty respectably compared to the i7s. Usually a bit slower, sometimes actually faster, and we know an AMD setup is certainly cheaper.

    Interesting that in the open-source, repeatable, examinable benchmarks the difference between Intel and AMD is a lot less pronounced.

  • NVidia should have spent more money on engineering and less on advertising. All the Tegra chipsets have overpromised and underdelivered. I see no reason why this one should be different.
