
NVIDIA Hopes To Sell More Chips By Bringing AI Programming To the Masses

jfruh writes: Artificial intelligence typically requires heavy computing power, which can only help makers of specialized chips like NVIDIA. That's why the company is pushing its DIGITS software, which helps users design and experiment with neural networks. Version 2 of DIGITS moves beyond the command line, adding a GUI in an attempt to broaden interest past the current academic market; it also makes programming for multi-chip configurations possible.
This discussion has been archived. No new comments can be posted.

  • by khr ( 708262 )

    Crunch all you want, we'll make more. Sounds artificially intelligent.

    • Re:Chips! (Score:5, Interesting)

      by ShanghaiBill ( 739463 ) on Tuesday July 07, 2015 @04:16PM (#50064991)

      Crunch all you want, we'll make more. Sounds artificially intelligent.

      I am not sure their strategy will work. Training a neural net requires massive compute resources, usually in the form of GPUs. But once the NN is trained, it doesn't take much computing to use it. For instance, a Go-playing NN [arxiv.org] took 5 days to train, running on high-end GPUs, but once trained, could consistently beat GNU Go (which can consistently beat me) while using far less computing time.
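
      Back-of-envelope, with numbers invented purely for illustration (none of this is from the paper), the asymmetry looks something like this:

        params = 5000000            # weights in a mid-sized net (made-up figure)
        examples = 1000000          # training-set size (made-up figure)
        epochs = 50                 # passes over the data (made-up figure)

        flops_forward = 2 * params              # ~2 FLOPs per weight per example
        flops_update = 3 * flops_forward        # forward + backward + weight update
        training = flops_update * examples * epochs   # ~1.5e15 FLOPs: GPU territory
        inference = flops_forward                     # ~1e7 FLOPs: trivial per query
        print("training / one inference: %.1e" % (float(training) / inference))

      That's roughly eight orders of magnitude between building the net and asking it a question.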

      • Yes, but the point is to enable people to create their own, which involves the training part. When I was at uni (at the end of the last century), I only got to play around with neural nets comprising 10 to 20 nodes, because our SPARCstations couldn't handle anything bigger. With an appropriate toolkit for NNs on standard GPUs, people will be able to run 1000-node nets at home. It won't be research-grade stuff, but it will give the opportunity to add practical NNs to artificial intelligence MOOCs and even high-school classes.
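
        For scale, a minimal sketch of what "a 1000-node net at home" means: a NumPy two-layer perceptron trained by plain backprop on a toy task (hyperparameters picked arbitrarily); a GPU toolkit essentially just makes the matrix products below run faster:

          import numpy as np

          rng = np.random.RandomState(0)
          X = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)   # toy task: learn sin(x)
          Y = np.sin(X)

          H = 1000                                  # the "1000-node" hidden layer
          W1 = rng.randn(1, H) * 0.5
          b1 = np.zeros(H)
          W2 = rng.randn(H, 1) / np.sqrt(H)
          b2 = np.zeros(1)
          lr = 0.01

          for step in range(2000):
              h = np.tanh(X @ W1 + b1)              # forward pass
              pred = h @ W2 + b2
              err = pred - Y
              g_pred = 2 * err / len(X)             # backward pass, plain chain rule
              g_W2 = h.T @ g_pred
              g_b2 = g_pred.sum(0)
              g_h = g_pred @ W2.T
              g_z = g_h * (1 - h ** 2)              # derivative of tanh
              g_W1 = X.T @ g_z
              g_b1 = g_z.sum(0)
              for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
                  p -= lr * g

          print("final MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)))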
    • If they run it like they're currently running the GPU market, no thank you.
      After the latest scams I trust them about as far as I could move a one-ton boulder.
  • by frovingslosh ( 582462 ) on Tuesday July 07, 2015 @04:26PM (#50065043)

    This is Nvidia. Don't buy into this based on any promises of what is to come, no matter how reasonable they seem.

    Last summer I bought an Nvidia Tegra Note 7 tablet based on promises that Android 5 (Lollipop) was coming to it "real soon". They even stated that it was easy to port Lollipop to the Tegra Note 7, since it was basically a stock Android design with little or no deviation from the standard. That "real soon" slipped to February 2015, and when February 2015 came and went, Nvidia became strangely mute on the subject, ignoring customers' inquiries.

    Someone claiming to be an Nvidia employee even posted here as an AC saying it was a shame what happened to Tegra Note 7 customers, but explained that the U.S. developers wanted to work on the new stuff, so the Tegra Note 7 project was shipped overseas, where no one wanted to work on it either (and apparently no one did).

    My Tegra Note 7 tablet is the last thing that Nvidia will ever sell me. If you choose to do business with them, I may not be able to talk you out of it, but do so based on what they deliver today, not on promises of things that will never come.

    • Last summer I bought an Nvidia Tegra Note 7 tablet based on promises that Android 5 (Lollipop) was coming to it "real soon". They even stated that it was easy to port Lollipop to the Tegra Note 7, since it was basically a stock Android design with little or no deviation from the standard. That "real soon" slipped to February 2015, and when February 2015 came and went, Nvidia became strangely mute on the subject, ignoring customers' inquiries.

      What you describe is basically every tablet seller out there save for Google themselves. They save the new versions for their upcoming products, and only after those get put out do they update the old stuff.

      • Yeah, which is ironic given that almost every Google Nexus 7 owner has screamed "NO GOOGLE, I DO NOT WANT THAT UPDATE" at 5.1, which bricked or permanently bogged down tens of thousands of devices and forced people to manually reflash to 5.0 or older.
      • Last summer I bought an Nvidia Tegra Note 7 tablet based on promises that Android 5 (Lollipop) was coming to it "real soon". They even stated that it was easy to port Lollipop to the Tegra Note 7, since it was basically a stock Android design with little or no deviation from the standard. That "real soon" slipped to February 2015, and when February 2015 came and went, Nvidia became strangely mute on the subject, ignoring customers' inquiries.

        What you describe is basically every tablet seller out there save for Google themselves. They save the new versions for their upcoming products, and only after those get put out do they update the old stuff.

        what about their GTX 970 design, top of the line in the generation that just came out, which has 512MB of VRAM that runs 87.5% slower than the main GDDR5 [digitaltrends.com], causing massive hitching and stuttering in any game that uses more than 3.5GB of the 4GB onboard?

        what's that? you say they patched it?

        yeah, and the company that conceived such a thing to begin with will also be the company to remove the patch in a year's time to get people to upgrade.
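
          for the record, "87.5% slower" means the last 512MB gets about one-eighth of the bandwidth. assuming the card's quoted ~224 GB/s aggregate (treat the figure as approximate):

            full = 224.0                      # GB/s, GTX 970 quoted aggregate (approximate)
            slow = full * (1 - 0.875)         # the crippled 512MB segment
            print(slow)                       # ~28 GB/s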

          • Because they are a scam. The GPU market had been steady until recently... Now everyone "needs" a new Nvidia GPU for new features with marginal benefits, and Nvidia is paying companies to use them to increase sales... They keep them closed source and have caused nothing but trouble for AAA titles. They were confronted about their false specs during a press release and refused to answer or comment at all.
    • by El Barto ( 1116 )

      Disappointed with their software support? Just be thankful you're not the proud owner of an HP TouchPad! https://en.wikipedia.org/wiki/HP_TouchPad

  • Holy Hardware Batman (Score:5, Informative)

    by UnknownSoldier ( 67820 ) on Tuesday July 07, 2015 @04:43PM (#50065153)

    From TFA there was no pic of the UI, nor any real tech specs, just a lot of nebulous details. From nVidia's website ...

    * https://developer.nvidia.com/d... [nvidia.com]

    DIGITS DevBox includes:

    * Four TITAN X GPUs with 12GB of memory per GPU
    * 64GB DDR4
    * Asus X99-E WS workstation class motherboard with 4-way PCI-E Gen3 x16 support
    * Core i7-5930K 6 Core 3.5GHz desktop processor
    * Three 3TB SATA 6Gb 3.5" Enterprise Hard Drives in RAID5
    * 512GB PCI-E M.2 SSD cache for RAID
    * 250GB SATA 6Gb Internal SSD
    * 1600W Power Supply Unit
    * Ubuntu 14.04
    * NVIDIA-qualified driver
    * NVIDIA® CUDA® Toolkit 7.0
    * NVIDIA® DIGITS™ SW
    * Caffe, Theano, Torch, BIDMach

    .. holy crap is that a lot of GPU horsepower "just" for AI. Oh look, they are running Ubuntu :-)

    They are really trying to get people on board about how much better / faster their GPU solutions are ...

    * http://www.nvidia.com/object/m... [nvidia.com]

    The problem is that there are a lot of "niche" use cases. If your problem domain maps to the GPU then yeah, major speedup. If not, well, then you're SOL running on "slow" CPUs.

    • by jandrese ( 485 )
      I'm a little dubious of calling this a consumer technology if they're recommending an $8k build to run it.
  • by LordMyren ( 15499 ) on Tuesday July 07, 2015 @05:16PM (#50065311) Homepage

    NV open sourced CUDA in 2011, but I don't believe there are any other implementations out there. The rest of the world continues adopting OpenCL and now the whole Khronos supergroup is super hyper for Vulkan (NV even giving a solid thumbs up), with Apple and NV being the two rogue vendors pushing proprietary wares (Metal and CUDA). Even with NVidia doing really *really* well in the GPGPU market, even with a really great dev env, the extreme proprietary-ness of CUDA makes it really hard to sell to the alpha techies.

    CUDA has a lot of traction in academic and applied fields, but the technical industry doesn't take it seriously and isn't comfortable saddling itself to a one-trick pony from NVidia. This ridiculously powerful box, with its cool software and cool visibility into a neat problem, is really a pipeline play to get you into NVidia's world. For some, going all in on NVidia is OK, but I don't think it's unlike going all in as an MS developer or iOS developer: you're picking up, putting on the blinders, and all you'll be able to do is sprint toward a fixed, not-too-far-away point.

    • by Sulik ( 1849922 )
      To be fair, virtually all of CUDA is reasonably close to standard C++, so the learning curve is small compared to previous graphics-oriented languages, and it's easier to get more GPU bang per dollar than with OpenCL. NVIDIA is also pretty much the only serious GPGPU hardware available, so you're tied to your GPU vendor no matter what language you use.
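
      To illustrate, here is roughly the canonical PyCUDA vector add; the kernel string is plain CUDA C, and the host side is ordinary Python. (A sketch, assuming a working CUDA install and the pycuda package.)

        import numpy as np
        import pycuda.autoinit                 # creates a CUDA context on import
        import pycuda.driver as drv
        from pycuda.compiler import SourceModule

        mod = SourceModule("""
        __global__ void add(float *dest, const float *a, const float *b)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            dest[i] = a[i] + b[i];
        }
        """)
        add = mod.get_function("add")

        a = np.random.randn(400).astype(np.float32)
        b = np.random.randn(400).astype(np.float32)
        dest = np.zeros_like(a)
        add(drv.Out(dest), drv.In(a), drv.In(b), block=(400, 1, 1), grid=(1, 1))
        assert np.allclose(dest, a + b)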
      • To also be fair there is no legitimate reason for CUDA to continue to exist as anything other than legacy support.

        OpenCL exists, and as a broad, open development platform not tied to any manufacturer, it is the platform that should be used. CUDA is just Nvidia trying to lock you to them. The rise of Linux has shown the power of platforms and solutions that are manufacturer-agnostic; don't fall for the old proprietary lock-in trick.
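
        The vendor-agnostic version is nearly line-for-line the same. A sketch with the pyopencl package; cl.create_some_context() will pick up whatever OpenCL device is present, whether AMD, Intel, or Nvidia:

          import numpy as np
          import pyopencl as cl

          a = np.random.randn(400).astype(np.float32)
          b = np.random.randn(400).astype(np.float32)

          ctx = cl.create_some_context()       # any vendor's device will do
          queue = cl.CommandQueue(ctx)
          mf = cl.mem_flags
          a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
          b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
          dest_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

          prg = cl.Program(ctx, """
          __kernel void add(__global const float *a,
                            __global const float *b,
                            __global float *dest)
          {
              int i = get_global_id(0);
              dest[i] = a[i] + b[i];
          }
          """).build()

          prg.add(queue, a.shape, None, a_buf, b_buf, dest_buf)
          dest = np.empty_like(a)
          cl.enqueue_copy(queue, dest, dest_buf)
          assert np.allclose(dest, a + b)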

        • Nvidia wants a way to force users to need them. That's why they keep creating closed-source technology that no one can look at, instead of contributing to the bigger picture.
        • That's true in theory; the problem is that OpenCL still feels a few years (or more...) behind CUDA. I have used both, and while OpenCL is undoubtedly the future, CUDA is still by far the better choice for GPGPU today.

          The worrying thing is that I've been saying that for the past 5 years, and it hasn't shown any signs of changing. AMD's OpenCL implementation (everything from the drivers to the compiler) is a total crapshoot. With each release they fix one bug, but introduce one new bug and one regression.

  • The company that doesn't support open programming hopes to support open programming!

    I'm skeptical.

  • If Nvidia hopes to sell me anything, they had better start by not voluntarily crippling their own software the moment it detects some competitor's hardware.
    If I want to put both an AMD and an Nvidia graphics card in my computer, I should be able to use the Nvidia card to its full extent. Instead, they disable some of their proprietary technology in this situation (namely PhysX), with the official answer being "you can't have both running at the same time" and then never answering back.
    Guess what? Yes I can have both.
  • Neural networks do benefit from parallelism, and I'm sure GPUs will help them run a bit faster, but it's not enough...

    I'm convinced by Simon Knowles's analysis of learning and inference compute patterns, which, if you accept it, makes focusing on running NNs faster on CPUs and GPUs look like an approach with severely limited potential. This isn't about the basic hardware optimizations gained by turning something into an ASIC; it's that design features of CPUs and GPUs actively work against NN compute patterns.

    • Nvidia is using the concept of neural networks to promote its upcoming Pascal GPU: the new features are massive internal and external bandwidth (the attached stacked memory is a big deal, and there's an interconnect; I suppose the internal buses are wider/faster too) and FP16 (half-precision floats) carried over from mobile GPUs.
      The half floats are meant to save bandwidth and power.

      So Nvidia goes and says, well, we can use that for NNs. It's kind of marketing spin, but at least using a GPU is cheap (next to designing a custom ASIC).
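
      The bandwidth saving from FP16 is easy to see even on the host side; NumPy shows the footprint halving (a trivial sketch; NumPy computes fp16 on the CPU, this is just about bytes moved):

        import numpy as np

        n = 1000000
        print(np.zeros(n, dtype=np.float32).nbytes)   # 4000000 bytes
        print(np.zeros(n, dtype=np.float16).nbytes)   # 2000000 bytes: half the traffic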

  • Thanks a lot for bringing us THAT MUCH CLOSER to the singularity!

  • .. anyone like to share a simple out-of-the-box neural network trainer/demonstrator to play around with neural nets on a novice-to-intermediate level?
