Hardware

NVIDIA Releases JTX1 ARM Board That Competes With Intel's Skylake i7-6700K (phoronix.com)

An anonymous reader writes: NVIDIA has unveiled the Jetson TX1 development board powered by their Tegra X1 SoC. The Jetson TX1 has a Maxwell GPU capable of 1 TFLOP/s, four 64-bit ARM Cortex-A57 CPU cores, 4GB of RAM, and 16GB of onboard storage. NVIDIA isn't yet allowing media to publish benchmarks, but the company's reported figures show the graphics and deep learning performance to be comparable to an Intel Core i7-6700K while scoring multiple times better on performance per watt. The development board costs $599 (or $299 for the educational version) and consumes less than 10 watts.
  • by hkultala ( 69204 ) on Wednesday November 11, 2015 @03:21AM (#50906989)

    The "deep learning" benchmark is a GPGPU workload which does practically nothing on the CPU.

    Nvidia has just made an SoC whose iGPU is about as fast as Intel's, at lower energy consumption.

    But in CPU performance, the Skylake is MUCH faster.

    • by KGIII ( 973947 )

      I guess my question is, what could/would I do with one as a layman with a passing (but growing) interest? Would this be a pricey replacement for an RPi, or maybe a controller hub type of thing for a collection of RPis? I do have a project in mind to finally make use of these things - I've even got a half dozen RPis still sitting in their boxes (except for one that I opened and poked at) - but I'm not exactly sure where to begin. Well, I know where I will begin - I'm just not sure that I should begin there

      • Re: (Score:3, Funny)

        by Anonymous Coward

        what could/would I do with one as a layman with a passing (but growing) interest?

        You could buy one and leave it in the box, then post vague questions on Slashdot that don't give any hint as to what your project actually is :p

      • This thing is for when you're doing something that can benefit from GPGPU and an R-Pi isn't providing enough CPU power. The obvious example is machine vision, and I'm pretty sure that's the prime example nVidia actually gave when announcing the thing: robotics. It's got a tiny little power footprint, which is the advantage over something from Intel.

      • by AHuxley ( 892839 )
        Think back to https://en.wikipedia.org/wiki/... [wikipedia.org] like ideas. Can the math be spread over a lot of cores and newer GPUs, and then work out quicker, better, sooner, with less heat?
        If yes, great. If no, buy into a different CPU for the calculations.
        • by KGIII ( 973947 )

          I think I might get one, then. Thanks. This would be an area where there's some maths - I posted as an AC earlier. My VPN is still being screwy so I just logged out.

          It'll give me an excuse to brush up on my C and learn about RFID methods. I've been meaning to do both for a while now. If you're curious or inclined to opine, the AC post is above. I identify myself.

    • Meh, they matched the GPU performance of GT2 for twice the $; now let's compare it to GT4 with 128MB of eDRAM...

      • by Anonymous Coward

        The Tegra X1 is an embedded chip. What NVIDIA claims it is designed for is basically building a self-driving car out of it. For that purpose the GPGPU capability actually matters, and Skylake wouldn't qualify anyway, since Intel likely doesn't offer it in industrial/automotive temperature ranges.
        In reality the best thing it may end up doing is digital signage or a laggy infotainment system, but on that ground it should perform better than its competitors.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      And I'd like to see actual benchmarks, not "We used CUDA-based benchmarks that are designed to run well only on Nvidia GPUs!" as a benchmark. Last I looked, Intel had the best performance-per-watt GPUs around.

      • by WarJolt ( 990309 )

        And I'd like to see actual benchmarks, not "We used CUDA-based benchmarks that are designed to run well only on Nvidia GPUs!" as a benchmark. Last I looked, Intel had the best performance-per-watt GPUs around.

        Of course they use benchmarks that run well on CUDA. Some algorithms can't be parallelized effectively over hundreds of GPU cores, and others take a hit from the branching required. However, there are real-world applications that can be parallelized effectively on CUDA and really do make sense there.

        There's no point in comparing algorithms poorly suited for GPUs; NVidia might as well throw in the towel now for those applications. However, there's a reason why OpenCV contains so many CUDA implementations. (A minimal CUDA sketch of this kind of data-parallel workload follows at the end of this thread.)

    • by bloodhawk ( 813939 ) on Wednesday November 11, 2015 @06:26AM (#50907323)
      Being as fast as Intel's graphics is like crowing about beating a legless man in a foot race.
      • Being as fast as Intel's graphics is like crowing about beating a legless man in a foot race.

        The only ones you'll hear complaining about Intel's built-in graphics are the PC gamers and benchmarking sites. I'm actually quite happy downgrading from a Core i3-3227U to a Pentium N3700.

      • by Anonymous Coward

        In a race to the feet, the legless man always wins. And runs Linux while running Crysis in a Wine while in a Beowulf cluster of itself.

    • The new A9X in the new iPad leaves the X1 in the dust. The A9X scores 80 in the Manhattan test, while the X1 only scores 65.

    • Hold on guys, no benchmarks yet - Nvidia is still paying out kickbacks for good results, lol.
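
To make the parallelization point raised in this thread concrete, here is a minimal, illustrative CUDA sketch (not code from NVIDIA or from any commenter): a SAXPY kernel in which every thread does the same arithmetic on its own array element. This branch-free, data-parallel pattern is the kind of workload that spreads well over hundreds of GPU cores, as opposed to algorithms dominated by branching or serial dependencies.

```cuda
// Illustrative sketch only: SAXPY, the textbook example of GPU-friendly work.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one array element per thread
    if (i < n)                                      // uniform bounds check; effectively no divergence
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));       // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y); // enough 256-thread blocks to cover n
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);                  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Workloads that do not fit this pattern (heavy per-element branching, tight serial dependencies) are the ones the commenters above say are pointless to benchmark on a GPU.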
  • by cachimaster ( 127194 ) on Wednesday November 11, 2015 @04:09AM (#50907043)

    This is just a particular benchmark that happens to run entirely on the GPU.

    Just because it's low power does not mean it has the same performance.

    In performance per watt, Intel and ARM are mostly the same [extremetech.com].

    • Re: (Score:2, Informative)

      by Anonymous Coward

      The referenced article is comparing 14nm Intel to 28nm ARM, so yes, the performance per watt is the same provided the Intel chip is built on a massively superior process.

  • Meh (Score:5, Informative)

    by shione ( 666388 ) on Wednesday November 11, 2015 @04:21AM (#50907055) Journal

    The article is silly. Who would buy an i7-6700K purely for the GPU? If you want that kinda GPU power you can get a dedicated graphics card for much less.

    • by Anonymous Coward

      you can get a dedicated graphics card for much less

      Not from Intel though.

  • Freedom of speech? How can a company "allow" or "disallow" journalists to publish benchmarks? Do they have to sign an NDA?

    • Yes, and if they break the NDA they are uninvited from future events, won't receive demo units for evaluation, and I think they may have to pay some damages. I find this a bad relationship, as the press acts as puppets, but that's how it works...
  • by Required Snark ( 1702878 ) on Wednesday November 11, 2015 @05:56AM (#50907247)
    No, seriously.

    For some parallel tasks it could be cost effective. A TFLOP of GPU at only 10 watts is nothing to sneer at. It might even be lower watts-per-FLOP than an FPGA; FPGAs tend to be power hogs. Of course, the 10 watt figure is for the card-form-factor SoC only, so the power and size are greater once the SoC is plugged into the carrier board. And the cost needs to come down quite a bit for its likely marketplace. Either the price falls by a huge amount or it goes nowhere.

    Even so, this could be interesting for some niche markets.

    • It's 1 TFLOP at 16 bits. At 32 bits it will be 500 GFLOPS. Apples to apples, a GeForce 980 will do 5 TFLOPS, so 10x more compute, at much less than 10x the price - actually almost the *same* price. Per watt you'd have a slight advantage at the GPU level, but once you consider having to buy 10 boards just to compete, that advantage would vaporize too (rough numbers follow at the end of this thread). Then you have to consider memory bandwidth, where the 980 will crush this device natively, and of course even more if you consider having to distribute work via ethernet
    • What's the likely market place?

      I see this doing on-board video processing in autonomous vehicles... I'm not sure there's much cost sensitivity to the GPU module there; power, weight and size matter much more than cost (at this level).

      As for consumer applications that would be cost sensitive, this thing requires far too much fancy stuff around it to make it interesting, and all that stuff is still out of mass-market consumer price range, even if this board were free.

      • What fancy stuff? This question is not a troll, really. As far as I can tell, everything supporting the carrier board for the SoC is vanilla. That's why the price of the unit is so frustrating: it's not chock full of expensive or esoteric components. So why couldn't it be a lot cheaper?

        It's not like the market for visual processing for autonomous vehicles is that big, or will be big enough soon enough to make this SOC a worthwhile effort by NVIDIA. One way or another the price has got to come down, or the

        • Onboard FPGA? Depending on how big that is, that could explain the cost of the whole thing.

          But the fancy stuff I'm talking about is real-world I/O - cameras, servos, things to provide the data to be crunched and act on the crunched data... I'm not sure there's a point to a small, low-power, high(ish)-compute board if all you're going to do is connect it to a keyboard, mouse, monitor and ethernet. Plenty of bigger, more powerful commodity hardware is doing that already.

    • by waTeim ( 2818975 )
      Beowulf? Don't think so. If anything, then maybe Spark with the map or reduce step being executed on the GPU, or better yet TensorFlow [tensorflow.org]. But as pointed out elsewhere, this chip is not even for that, especially next year when the new cards with NVLink [nvidia.com] blow away this year's 980/Titan X stuff. No, this thing is for drones, AR, or image recognition on embedded anything where power consumption and latency are the overwhelming factors. Otherwise graphics cards will outperform, or if latency is not a factor, t
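
Putting rough numbers on the FP16/FP32 comparison made earlier in this thread - a back-of-envelope sketch only, and the ~165 W board power assumed for the GTX 980 is my assumption, not a figure from the thread:

```cuda
// Back-of-envelope arithmetic for the TX1 vs. GTX 980 comparison above.
// The 165 W GTX 980 figure is an assumption, not taken from the thread.
#include <cstdio>

int main() {
    const double tx1_fp16_gflops    = 1000.0;                 // NVIDIA's 1 TFLOP/s figure (FP16)
    const double tx1_fp32_gflops    = tx1_fp16_gflops / 2.0;  // ~500 GFLOP/s at FP32
    const double gtx980_fp32_gflops = 5000.0;                 // ~5 TFLOP/s, per the comment
    const double tx1_watts = 10.0, gtx980_watts = 165.0;      // module power vs. assumed card power

    printf("980 vs TX1, FP32 throughput: %.0fx\n",
           gtx980_fp32_gflops / tx1_fp32_gflops);             // 10x
    printf("TX1:     %.0f GFLOPS per watt\n",
           tx1_fp32_gflops / tx1_watts);                      // 50
    printf("GTX 980: %.0f GFLOPS per watt\n",
           gtx980_fp32_gflops / gtx980_watts);                // ~30
    return 0;
}
```

Under those assumptions the TX1's per-watt edge at FP32 is real but modest, while the 980 holds a 10x lead in raw throughput, consistent with the comment above.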
  • Until you see the $1495 price tag!
  • So it doesn't run a mainline Linux kernel? Or does someone know otherwise? I couldn't find anything on the NVIDIA website. Nor do I see how to buy one at the educational price.
    • by Anonymous Coward
      Why do people want to infect the ARM world with the worst parts of the x86 world, namely UEFI and ACPI? I never got this. Surely something like IEEE OpenBoot would be much nicer and not cruft up the blossoming ARM world?
    • by Predius ( 560344 )

      UEFI isn't required for ARMv8 mainline kernel support. Devicetree is.

  • I'm assuming SteamOS and the games it supports would not run on this unless everything was compiled for ARM, yes/no?

  • The Jetson TK1 sold for $192.

    I was really looking forward to a Tegra X1 version of the Jetson, but not at $599 and not at 6+ months after the chipset started appearing in consumer products at a significantly lower price.

    (The Jetson TK1 was the first K1 device to launch and was priced similar to or below fully assembled consumer products like the SHIELD Tablet.)
