NVIDIA Releases JTX1 ARM Board That Competes With Intel's Skylake i7-6700K (phoronix.com) 84
An anonymous reader writes: NVIDIA has unveiled the Jetson TX1 development board powered by their Tegra X1 SoC. The Jetson TX1 has a Maxwell GPU capable of 1 TFLOP/s, four 64-bit ARM A57 processors, 4GB of RAM, and 16GB of onboard storage. NVIDIA isn't yet allowing media to publish benchmarks, but the company's reported figures show the graphics and deep learning performance to be comparable to an Intel Core i7-6700K while scoring multiple times better on performance-per-Watt. This development board costs $599 (or $299 for the educational version) and consumes less than 10 Watts.
No it does not compete with Skylake, those are GPU (Score:5, Informative)
The "deep learning" benchmark is a GPGPU workload which does practically nothing on CPU.
Nvidia has just made an SoC whose integrated GPU is about as fast as Intel's, at lower energy consumption.
But in CPU performance, the Skylake is MUCH faster.
Re: (Score:2)
I guess my question is, what could/would I do with one as a layman with a passing (but growing) interest? Would this be a pricey replacement for an RPi, or maybe a controller hub type of thing for a collection of RPis? I do have a project in mind to finally make use of these things - I've even got a half dozen of the RPis still sitting in their boxes (except for one that I opened and poked at) - but I'm not exactly sure where to begin. Well, I know where I will begin - I'm just not sure that I should begin there.
Re: (Score:3, Funny)
what could/would I do with one as a layman with a passing (but growing) interest?
You could buy one and leave it in the box, then post vague questions on Slashdot that don't give any hint as to what your project actually is :p
Re: (Score:3)
My virtual 8-bit CPU in my Minecraft world has enough oomph...
Re: (Score:2)
Except for the "Speaking" and "Camera" parts
Re: (Score:2)
That's what I'm thinking. I need something that can push and, maybe, compress video and sound. It's probably also going to have storage attached to record something like snapshots at 3 second intervals or the likes. I don't want to "make do" with something. I want to just make it, learn about it, and forget about it - until I need to repair or update and realize that I should have followed good documentation methods. Then, I'll learn it, fix it, and forget about it! Seems pretty good to me.
Re: (Score:3)
This thing is for when you're doing something that can benefit from GPGPU, and a R-Pi isn't providing enough CPU power. The obvious example is machine vision, and I'm pretty sure that's the prime example that nVidia actually gave when announcing the thing: robotics. It's got a tiny little power footprint, which is the advantage over something from intel.
Re: (Score:2)
If yes, great. If no, buy into a different CPU for the calculations.
Re: (Score:2)
I think I might get one, then. Thanks. This would be an area where there's some maths - I posted as an AC earlier. My VPN is still being screwy, so I just logged out.
It'll give me an excuse to brush up on my C and learn about the whole RFID approach. I've been meaning to do both for a while now. If you're curious or inclined to opine, the AC post is above. I identify myself there.
Re: No it does not compete with Skylake, those are (Score:1)
Meh, they matched the GPU performance of GT2 for twice the $, now let's compare it to gt4 with 128MB of eDRAM...
Re: (Score:1)
Tegra X1 is an embedded chip. What NVIDIA claims it is designed for is basically building a self-driving car around it. For that purpose the GPGPU capability actually matters, and Skylake wouldn't qualify anyway, as Intel likely doesn't offer it in industrial/automotive temperature ranges.
In reality, the best thing it may end up doing is digital signage or a laggy infotainment system, but on that ground it should perform better than its competitors.
Re: (Score:2, Insightful)
And I'd like to see actual benchmarks, not "We used CUDA-based benchmarks that are designed to run well only on Nvidia GPUs!" Last I looked, Intel had the best performance-per-watt GPUs around.
Re: (Score:3)
And I'd like to see actual benchmarks, not "We used CUDA-based benchmarks that are designed to run well only on Nvidia GPUs!" Last I looked, Intel had the best performance-per-watt GPUs around.
Of course they use benchmarks that run well on CUDA. Some algorithms can't be parallelized effectively over hundreds of GPU cores. Other algorithms can take a hit due to the branching required. However, there are some real world applications that can be effectively parallelized on CUDA that really make sense.
There's no point in comparing algorithms poorly suited for GPUs. NVidia might as well throw in the towel now for those applications. However, there's a reason why OpenCV contains so many CUDA implementations.
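To make the parallelizability point concrete, here's a minimal sketch (plain NumPy on the CPU, purely illustrative - the function names are made up, not from OpenCV or CUDA) of the difference between a workload that maps cleanly onto thousands of GPU cores and one that can't:

```python
import numpy as np

# GPU-friendly: element-wise work with no dependence between elements.
# Each multiply-add could run on its own GPU core simultaneously.
def saxpy(a, x, y):
    return a * x + y  # embarrassingly parallel

# GPU-hostile: each step depends on the previous result, so the work
# is inherently serial no matter how many cores are available.
def serial_chain(a, x0, n):
    x = x0
    for _ in range(n):
        x = a * x + 1.0
    return x

print(saxpy(2.0, np.arange(4.0), np.ones(4)))  # [1. 3. 5. 7.]
print(serial_chain(0.5, 0.0, 10))              # 1.998046875
```

The first function is the kind of thing CUDA benchmarks showcase; the second gains nothing from a GPU, which is why benchmark selection matters so much in these comparisons.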
Re: (Score:2)
Who the hell cares? Seriously, it's a graphics card!
Re:No it does not compete with Skylake, those are (Score:4, Funny)
Re: (Score:3)
Being equally fast as Intel's graphics is like crowing about beating a legless man in a foot race.
The only ones you'll hear complaining about Intel's built-in graphics are the PC gamers and benchmarking sites. I'm actually quite happy downgrading from a Core i3-3227U to a Pentium N3700.
Re: (Score:1)
In a race to the feet, the legless man always wins. And runs Linux while running Crysis in a Wine while in a Beowulf cluster of itself.
even the GPU isn't the best anymore (Score:1)
The new A9X in the new iPad leaves the X1 in the dust. The A9X scores 80 in the Manhattan test, while the X1 only scores 65.
Re: No it does not compete with Skylake, those are (Score:1)
Not even close (Score:3)
This is just a particular benchmark that happens to run entirely in the GPU.
Just because it's low power does not mean it has the same performance.
In performance per watt, Intel and ARM are mostly the same [extremetech.com].
Re: (Score:2, Informative)
The referenced article is comparing 14nm Intel to 28nm ARM, so yes, the performance per watt is the same provided the Intel chip is built on a massively superior process.
Re: (Score:2)
Re: (Score:1)
The point obviously being that it's not even close to an apples to apples comparison. How about we compare 14nm ARM to 14nm Intel?
That's an easy one. The 14 nm Intel exists, the 14 nm ARM doesn't, so the Intel one wins any comparison except the performance per Watt. Considering the 14 nm ARM uses 0 W, its performance per Watt should be mathematically interesting.
Meh (Score:5, Informative)
The article is silly. Who would buy an i7-6700K purely for the GPU? If you want that kind of GPU power, you can get a dedicated graphics card for much less.
Re: (Score:1)
you can get a dedicated graphics card for much less
Not from Intel though.
Isn't allowing media..? (Score:2)
Freedom of speech? How can a company "allow" or "disallow" journalists to publish benchmarks? Do they have to sign an NDA?
Re: (Score:2)
How about a Beowulf cluster of these (Score:3)
For some parallel tasks it could be cost effective. A TFLOP of GPU at only 10 watts is nothing to sneer at. It might even be a lower watt/FLOP than an FPGA, which tend to be power hogs. Of course, the 10 watt figure is for the card-form-factor SoC only, so the power and size are greater once the SoC is plugged into the carrier board. And the cost needs to come down quite a bit for its likely marketplace. Either the price falls by a huge amount or it goes nowhere.
Even so, this could be interesting for some niche markets.
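For the back-of-envelope arithmetic (using only the claimed figures from the summary, module alone, not the carrier board):

```python
# Claimed TX1 module figures: ~1 TFLOP/s of GPU throughput at <10 W.
tflops = 1.0   # claimed peak throughput, TFLOP/s
watts = 10.0   # claimed module power draw, W

gflops_per_watt = tflops * 1000.0 / watts
print(gflops_per_watt)  # 100.0 GFLOPS per watt for the bare module
```

Desktop GPUs of this era deliver several TFLOP/s but at well over 100 W, so roughly 20-30 GFLOPS/W; the ratio, not absolute throughput, is what could make a cluster of these interesting.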
Re: (Score:2)
Re: (Score:2)
Not everyone will be using them to mine Bitcoins.
Re: (Score:1)
Re: (Score:2)
He said "if you distribute to a cluster" everything else seems pointed at your average Bitcoin miner.
Re: (Score:2)
What's the likely marketplace?
I see this doing on-board video processing in autonomous vehicles... not sure there's much cost sensitivity there for the GPU module; power, weight, and size matter much more than cost (at this level).
As for consumer applications that would be cost sensitive, this thing requires far too much fancy stuff around it to make it interesting, and all that stuff is still out of mass-market consumer price range, even if this board were free.
Re: (Score:2)
It's not like the market for visual processing for autonomous vehicles is that big, or will be big enough soon enough to make this SOC a worthwhile effort by NVIDIA. One way or another the price has got to come down, or the
Re: (Score:2)
Onboard FPGA? Depending on how big that is, that could explain the cost of the whole thing.
But the fancy stuff I'm talking about is real-world I/O - cameras, servos, things to provide the data to be crunched and to act on the crunched data. I'm not sure there's a point to a small, low-power, high(ish)-compute board if all you're going to do is connect it to a keyboard, mouse, monitor, and ethernet. Plenty of bigger, more powerful commodity hardware is doing that already.
Re: (Score:1)
Looks like a nice Mini-ITX board (Score:2)
Apparently still no UEFI (Score:1)
Re: (Score:1)
Re: (Score:2)
UEFI isn't required for ARMv8 mainline kernel support. Devicetree is.
Question (Score:2)
I'm assuming SteamOS and the games it supports would not run on this unless everything was compiled for ARM, yes/no?
Re: (Score:2)
Correct.
Grumble grumble... (Score:2)
The Jetson TK1 sold for $192.
I was really looking forward to a Tegra X1 version of the Jetson, but not at $599 and not at 6+ months after the chipset started appearing in consumer products at a significantly lower price.
(The Jetson TK1 was the first K1 device to launch and was priced similar to or below fully assembled consumer products like the SHIELD Tablet.)