
IBM Researchers Propose Device To Dramatically Speed Up Neural-Net Learning (arxiv.org) 87

skywire writes: We've all followed the recent story of AlphaGo beating a top Go master. Now IBM researchers Tayfun Gokmen and Yurii Vlasov have described what could be a game changer for machine learning — an array of resistive processing units that would use stochastic techniques to dramatically accelerate the backpropagation algorithm, speeding up neural network training by a factor of 30,000. They argue that such an array would be reliable, low in power use, and buildable with current CMOS fabrication technology. "Even Google's AlphaGo still needed thousands of chips to achieve its level of intelligence," adds Tom's Hardware. "IBM researchers are now working to power that level of intelligence with a single chip, which means thousands of them put together could lead to even more breakthroughs in AI capabilities in the future."
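For context, the backpropagation algorithm the proposed RPU array would accelerate boils down to a forward pass, an error-propagation pass, and outer-product weight updates. A minimal sketch in plain NumPy, training a tiny network on XOR — the network size, learning rate, and iteration count here are illustrative choices, not values from the paper:

```python
import numpy as np

# Minimal backpropagation sketch: a two-layer network learning XOR.
# This is the algorithm the proposed RPU array would accelerate in
# hardware; all sizes and hyperparameters here are illustrative.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Weight updates -- the outer-product step an RPU array would
    # perform in parallel, in place, inside the resistive crossbar
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out).ravel())  # typically converges to [0. 1. 1. 0.]
```

The claimed speedup comes from performing those update steps for every weight simultaneously in analog hardware, rather than looping over matrix multiplies on a digital processor.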
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Yes! (Score:5, Funny)

    by GrumpySteen ( 1250194 ) on Saturday March 26, 2016 @03:26PM (#51783009)

    With this technology, chatbots can become neo-Nazi Holocaust deniers in less than two hours!

    • I could be wrong, but something interesting I found about Tay: it wasn't nearly as advanced as Microsoft claimed.

      http://research.microsoft.com/... [microsoft.com]

      Looks like it was mainly a modified language translator: instead of translating language A to language B, it translated a statement or question to a response.

      Disappointing but neat. Worse, it wasn't even a parrot. The possible responses look to be mined from over a million twitter posts.
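      A retrieval-style chatbot of the kind described above can be sketched in a few lines: pick the canned response whose source prompt best overlaps the input. The tiny "mined" corpus and the overlap scoring here are made up for illustration — Tay's actual pipeline was surely more elaborate:

```python
# Retrieval chatbot sketch: respond by matching the input against
# mined prompts and returning the stored reply. Corpus is invented.
corpus = {
    "hello how are you": "doing great, thanks!",
    "what is your name": "they call me TestBot.",
    "do you like music": "I love music, especially jazz.",
}

def respond(prompt):
    # Normalize: lowercase and strip trailing punctuation per word
    words = {w.strip("?!.,") for w in prompt.lower().split()}
    # Score each mined prompt by word overlap; return its reply
    best = max(corpus, key=lambda p: len(words & set(p.split())))
    return corpus[best]

print(respond("hey, what is your name?"))  # -> "they call me TestBot."
```

      No generation happens at all: the system can only ever say something someone else already said, which is exactly why training it on a million unfiltered tweets went the way it did.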

      • "The possible responses look to be mined from over a million twitter posts."

        There's 'yer problem!

      • ...Looks like it was mainly a modified language translator, instead of translating language A to language B it translated a statement or question to a response..... The possible responses look to be mined from over a million twitter posts.

        And that's different from real meatspace human talk in what significant way?

    • Makes me wonder if Watson is a closet Nazi sympathizer too. I wonder if IBM is hurriedly writing code to check for that.
  • by Anonymous Coward

    Knowing that they can possibly speed it up to this extent? I might have to bother thinking about what may come, if true. I never had a concern about AI, since making a strong one has always been in the realm of fantasy, where we are just scratching at the toes of the giant.

    I have always thought that AI techniques lacked elegance, but I never put forth the effort to sort it out and look for a better method; there were other things I wanted to do. This may be part of the answer to the problem that annoyed me at the time.

    • We have been looking at it for only a few decades. Building intelligence in nature took millions of years. One could argue we are well on our way up the evolutionary ladder of AI.

      In response to your question:

      Would this technique be used to bring it closer to making a human-like mind, or simply a better mind?

      I would say a more interesting question is:

      How will we know when we have created something with a human-like mind?
      Our first one will likely be much simpler, but it gets complicated, because all our inputs come filtered through biological senses. The first human-like AI will not receive its inputs through such filters.

    • by HiThere ( 15173 )

      What they're talking about is a way of improving the speed of learning. Nothing else (that I know of). Also nothing less. This is quite important WRT the practicality of using current deep learning approaches, but it doesn't make the end state any more powerful (except that it can continue learning faster).

      This doesn't address motivation, which I see as the current major stumbling block in front of General AI. This doesn't make the AI more human...except that humans learn continually.

      What it does do is

      • What they're talking about is a way of improving the speed of learning. Nothing else (that I know of). Also nothing less. This is quite important WRT the practicality of using current deep learning approaches, but it doesn't make the end state any more powerful (except that it can continue learning faster).

        They're also talking about reducing the size by several orders of magnitude.

    • by gweihir ( 88907 )

      Neural networks are non-linear classifiers. They are not anything intelligent, and speeding up the parametrization process (misleadingly called "learning") does not do much except possibly make them cheaper. They do not gain capabilities from it.
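      The "non-linear classifier" framing can be made concrete: once the parametrization is done, a network is just a fixed function — nested linear maps and non-linearities. Faster training changes how quickly the parameters are found, not what the resulting function can express. A toy example with hand-picked (illustrative) weights that happen to compute XOR:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# A "trained" network is nothing more than fixed parameters:
W1 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
W2 = np.array([[1.0],
               [1.0]])

def classify(x):
    # Non-linear classifier: linear map, non-linearity, linear map, threshold
    return float(relu(x @ W1) @ W2 > 0.5)

print(classify(np.array([0.0, 1.0])))  # 1.0
print(classify(np.array([1.0, 1.0])))  # 0.0
```

      Nothing about running this function faster, or finding these weights faster, adds any capability the fixed function did not already have.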

  • by LetterRip ( 30937 ) on Saturday March 26, 2016 @06:48PM (#51783991)

    'Cause this is how we get Skynet

  • by RandCraw ( 1047302 ) on Saturday March 26, 2016 @08:25PM (#51784481)

    The article abstract suggests that a Resistive Processing Unit would run 30,000 times faster than a cluster of CPUs while using less power. But nobody runs neural nets on CPUs; they use GPUs.

    So then, how does an RPU compare to a GPU?

    • by gweihir ( 88907 )

      Not nearly as well, obviously. That is why they did not make the far more appropriate comparison. It is just like the D-Wave scammers, who compare their machine to a simulation of itself on a single CPU and claim ridiculous speed-ups, when in reality they are slower once that far cheaper single CPU runs an algorithm actually suited to it. It is lying with numbers, and it has gotten very bad indeed because a lot of people fall for it.

  • So far, AI has only been pursued in ways that destroy jobs without replacing them. When, on net, does it start becoming a force that helps humanity without requiring retraining?
