



NVIDIA Hopes To Sell More Chips By Bringing AI Programming To the Masses
jfruh writes: Artificial intelligence typically requires heavy computing power, which can only help manufacturers of specialized chips like NVIDIA. That's why the company is pushing its DIGITS software, which helps users design and experiment with neural networks. Version 2 of DIGITS moves out of the command line and gains a graphical interface in an attempt to broaden interest beyond the current academic market; it also makes programming for multi-chip configurations possible.
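For readers who haven't touched this stuff: DIGITS is pitched as a tool for designing and experimenting with networks rather than a framework you code against directly. As a rough, hypothetical illustration of the kind of model you'd feed such a tool, here is a tiny two-layer network trained on XOR with plain NumPy; it has nothing to do with DIGITS' actual internals.

# Minimal sketch (not DIGITS itself): a tiny two-layer network trained on XOR
# with plain NumPy, to show the kind of experiment such tools automate.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# Two-layer network: 2 -> 8 -> 1
W1 = rng.normal(0, 1, (2, 8)).astype(np.float32)
b1 = np.zeros(8, dtype=np.float32)
W2 = rng.normal(0, 1, (8, 1)).astype(np.float32)
b2 = np.zeros(1, dtype=np.float32)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss)
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Gradient step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print(np.round(pred, 2))  # should approach [[0], [1], [1], [0]]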
Chips! (Score:1)
Crunch all you want, we'll make more. Sounds artificially intelligent.
Re:Chips! (Score:5, Interesting)
Crunch all you want, we'll make more. Sounds artificially intelligent.
I am not sure if their strategy will work. Training a neural net requires massive compute resources, usually in the form of GPUs. But once the NN is trained, it doesn't require much computing to use it. For instance, a Go-playing NN [arxiv.org] took 5 days to train, running on high-end GPUs, but once trained, could consistently beat GNU Go (which can consistently beat me) while using far less computing time.
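As a back-of-the-envelope illustration of that asymmetry (made-up numbers, not figures from the paper): training repeats a forward and backward pass over millions of examples for many epochs, while serving the trained net is a single forward pass per query.

# Rough, hypothetical arithmetic illustrating why training dwarfs inference.
params = 5_000_000               # weights in a mid-sized convnet
flops_per_example = 2 * params   # ~2 FLOPs per weight for a forward pass
examples = 30_000_000            # training positions
epochs = 10
backward_factor = 2              # backward pass costs roughly 2x the forward pass

training_flops = examples * epochs * flops_per_example * (1 + backward_factor)
inference_flops = flops_per_example  # one forward pass per move/query

print(f"training : {training_flops:.2e} FLOPs")
print(f"inference: {inference_flops:.2e} FLOPs")
print(f"ratio    : {training_flops / inference_flops:.1e}x")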
Re: Chips! (Score:1)
After the latest scams, I trust them about as far as I could move a 1-ton boulder.
Don't buy based on any promises (Score:4, Insightful)
This is Nvidia. Don't buy into this based on any promises of what is to come, no matter how reasonable they seem.
Last summer I bought an Nvidia Tegra Note 7 tablet based on promises that Android 5 (Lollipop) was coming out for it "real soon". They even stated that it was easy to port Lollipop to the Tegra Note 7, since it was basically a stock Android design with little or no deviation from the standard design. That "real soon" slipped to February of 2015, and when February 2015 came and went, Nvidia became strangely mute on the subject, ignoring customers' inquiries.
Someone claiming to be an Nvidia employee even posted here as an AC that it was a shame what happened to the Tegra Note 7 customers, but explained that the U.S. developers wanted to work on the new stuff, so the Tegra Note 7 project was shipped overseas, where no one wanted to work on it either (and apparently no one did).
My Tegra Note 7 tablet is the last thing that Nvidia will ever sell me. If you choose to do business with them, I may not be able to talk you out of it, but do so based on what they deliver today, not on promises of things that will never come.
Re: (Score:2)
Last summer I bought an Nvidia Tegra Note 7 tablet based on promises that Android 5 (Lollipop) was coming out for it "real soon". They even stated that it was easy to port Lollipop to the Tegra Note 7, since it was basically a stock Android design with little or no deviation from the standard design. That "real soon" slipped to February of 2015, and when February 2015 came and went, Nvidia became strangely mute on the subject, ignoring customers' inquiries.
What you describe is basically every tablet seller out there save for Google themselves. They save the new versions for their upcoming products, and only after those get put out do they update the old stuff.
Re: (Score:3)
Last summer I bought an Nvidia Tegra Note 7 tablet based on promises that Android 5 (Lollipop) was coming out for it "real soon". They even stated that it was easy to port Lollipop to the Tegra Note 7, since it was basically a stock Android design with little or no deviation from the standard design. That "real soon" slipped to February of 2015, and when February 2015 came and went, Nvidia became strangely mute on the subject, ignoring customers' inquiries.
What you describe is basically every tablet seller out there save for Google themselves. They save the new versions for their upcoming products, and only after those get put out do they update the old stuff.
What about their GTX 970 design, top-of-the-line generation that just came out, which has 512MB of VRAM that runs 87.5% slower than the main GDDR5 [digitaltrends.com], causing massive hitching and stuttering in any games that use more than 3.5GB of the 4GB onboard?
What's that? You say they patched it?
Yeah, and the company that conceived such a thing to begin with will also be the company to remove the patch in a year's time to get people to upgrade.
Re: (Score:1)
Disappointed with their software support? Just be thankful you're not the proud owner of an HP TouchPad! https://en.wikipedia.org/wiki/HP_TouchPad
Holy Hardware Batman (Score:5, Informative)
From TFA, there was no pic of the UI, nor any mention of tech specs aside from a lot of nebulous details. From nVidia's website ...
* https://developer.nvidia.com/d... [nvidia.com]
They are really trying to get people on board with how much better/faster their GPU solutions are ...
* http://www.nvidia.com/object/m... [nvidia.com]
The problem is that there are a lot of "niche" use cases. If your problem domain maps to the GPU, then yeah, major speedup. If not, well, then you're SOL running on "slow" CPUs.
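To make that concrete (a hypothetical CPU-side sketch, nothing GPU-specific): a problem "maps to the GPU" when it is the same arithmetic applied to many independent elements; a chain of serially dependent steps does not, no matter how many cores you throw at it.

# Hypothetical sketch: the first computation is the same arithmetic on many
# independent elements (the kind of work that spreads across thousands of GPU
# threads); the second carries a dependency from step to step, so more
# parallel hardware doesn't help it at all.
import math
import numpy as np

x = np.random.rand(200_000).astype(np.float32)

# Data-parallel: every output element depends only on its own input element.
y = np.sqrt(x) * 2.0 + 1.0

# Inherently serial: each step needs the previous result before it can start.
acc = 0.0
for v in x:
    acc = math.sin(acc + float(v))

print(y[:3], acc)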
"NVidia Hopes to Sell"... CUDA (Score:5, Insightful)
NV open-sourced CUDA in 2011, but I don't believe there are any other implementations out there. The rest of the world continues adopting OpenCL, and now the whole Khronos supergroup is super hyper for Vulkan (NV even giving a solid thumbs up), with Apple and NV being the two rogue vendors pushing proprietary wares (Metal and CUDA). Even with NVidia doing really *really* well in the GPGPU market, even with a really great dev env, the extreme proprietariness of CUDA makes it really hard to sell to the alpha techies.
CUDA has a lot of traction in academic and applied fields, but the technical industry doesn't take it seriously and isn't comfortable saddling itself to a one-trick-horse offering from NVidia. This is a ridiculously powerful box, and its cool software gives cool visibility into a neat problem, but it's really a pipeline play, to get you into NVidia's world. For some, going all in on NVidia is OK, but I don't think it's unlike going all in as an MS developer or iOS developer: you're putting on the blinders, and all you'll be able to do is sprint towards a fixed, not-too-far-away point.
Re: (Score:2)
To also be fair, there is no legitimate reason for CUDA to continue to exist as anything other than legacy support.
OpenCL exists, and as a broad, open development platform that is not tied to any manufacturer, it is the platform that should be used. CUDA is just Nvidia trying to lock you to them. The rise of Linux has shown the power of platforms and solutions that are manufacturer-agnostic; don't fall for the old proprietary lock-in trick.
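For what it's worth, the vendor-neutral path the parent describes looks roughly like this: a minimal vector-add sketch using PyOpenCL (assuming pyopencl and a working OpenCL driver are installed), which runs unchanged on AMD, Intel, or Nvidia devices.

# Minimal PyOpenCL sketch: the kernel is plain OpenCL C, so the same code
# targets any vendor's device instead of being tied to one chip maker.
import numpy as np
import pyopencl as cl

a = np.random.rand(50_000).astype(np.float32)
b = np.random.rand(50_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)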
Re: "NVidia Hopes to Sell"... CUDA (Score:1)
Re: "NVidia Hopes to Sell"... CUDA (Score:1)
Re: (Score:2)
That's true in theory; the problem is that OpenCL still feels a few years (or more...) behind CUDA. I have used both, and while OpenCL is undoubtedly the future, CUDA is still by far the better choice for GPGPU today.
The worrying thing is that I've been saying that for the past 5 years, and it hasn't shown any signs of changing. AMD's OpenCL implementation (everything from the drivers to the compiler) is a total crapshoot. With each release they fix one bug, but introduce one new one and one regression. (comp
NVidia and programming : Ha (Score:2)
The company that doesn't support open programming hopes to support open programming!
I'm skeptical.
Maybe they should try to respect their customers (Score:2)
If I want to put both an AMD and an NVidia graphics card in my computer, I should be able to use the NVidia card to its full extent. Instead, they disable some of their proprietary technology in this situation (namely PhysX), with the official answer being "you can't have both running at the same time" and then never answering back.
Guess what? Yes I can have
We Need a New Class of Processor for AI NN not GPU (Score:2)
Neural networks do benefit from parallelism, and I'm sure GPUs will help them run a bit faster, but it's not enough...
I'm convinced by Simon Knowles's analysis of learning and inference compute patterns, which, if you accept it, means that focusing on making NNs run faster on CPUs and GPUs is an approach with severely limited potential. This isn't about the basic hardware optimisations gained by turning something into an ASIC; it's that design features of CPUs and GPUs actively work against NN compute patterns.
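One way to make that argument concrete (hypothetical numbers, not taken from Knowles's analysis): inference is dominated by matrix-vector products that do only about two FLOPs per weight fetched from memory, so on a conventional GPU the memory bus, not the ALUs, sets the ceiling.

# Hypothetical roofline-style arithmetic for a fully connected layer during
# inference (batch size 1). Numbers are illustrative, not from any vendor spec.
rows, cols = 4096, 4096          # layer weight matrix
bytes_per_weight = 4             # FP32

flops = 2 * rows * cols          # one multiply + one add per weight
bytes_moved = rows * cols * bytes_per_weight   # every weight is read once

intensity = flops / bytes_moved  # FLOPs per byte
print(f"arithmetic intensity: {intensity:.2f} FLOPs/byte")

# Assumed GPU-ish specs: 6 TFLOP/s compute, 300 GB/s memory bandwidth.
peak_flops = 6e12
peak_bw = 300e9
achievable = min(peak_flops, intensity * peak_bw)
print(f"bandwidth-limited throughput: {achievable / 1e12:.2f} TFLOP/s "
      f"(vs {peak_flops / 1e12:.0f} TFLOP/s peak)")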
Re: (Score:2)
Nvidia uses the concept of neural networks to promote its upcoming Pascal GPU: the new features are massive internal and external bandwidth (the attached stacked memory is a big deal, and there's an interconnect; well, I suppose internal buses are wider/faster) and FP16 (half-precision floats) carried over from mobile GPUs.
The half floats are meant to save bandwidth and power.
So Nvidia goes and says, well, we can use that for NN. It's kind of marketing spin, but at least using a GPU is cheap (next to designing
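The bandwidth saving from half precision is easy to see on the CPU side too; a minimal NumPy sketch (storage only, it says nothing about how Pascal's FP16 arithmetic units actually work):

# Minimal NumPy sketch: half-precision storage moves half the bytes of FP32,
# which is the bandwidth (and power) saving the parent comment refers to.
# This only shows storage size; GPU FP16 arithmetic is a hardware feature.
import numpy as np

n = 10_000_000
w32 = np.random.rand(n).astype(np.float32)
w16 = w32.astype(np.float16)

print(f"FP32: {w32.nbytes / 1e6:.0f} MB")   # ~40 MB
print(f"FP16: {w16.nbytes / 1e6:.0f} MB")   # ~20 MB
print(f"max rounding error: {np.abs(w32 - w16.astype(np.float32)).max():.2e}")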
Thanks, NVIDIA (Score:1)
Thanks a lot for bringing us THAT MUCH CLOSER to the singularity!