Google's Tensor Processing Unit Could Advance Moore's Law 7 Years Into The Future (pcworld.com)

An anonymous reader writes from a report via PCWorld: Google says its Tensor Processing Unit (TPU) advances machine learning capability by a factor of three generations. "TPUs deliver an order of magnitude higher performance per watt than all commercially available GPUs and FPGA," said Google CEO Sundar Pichai during the company's I/O developer conference on Wednesday. The chips powered the AlphaGo computer that beat Lee Sedol, the world Go champion. "We've been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law)," said Google's blog post. "TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly." The chip is called the Tensor Processing Unit because it underpins TensorFlow, the open-source software engine that powers Google's deep learning services.
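The reduced-precision point in the summary can be sketched in a few lines of NumPy. This is a hypothetical illustration of 8-bit quantization, not Google's actual (unpublished) scheme; the idea is simply that fewer bits per value means fewer transistors per multiply-accumulate:

```python
import numpy as np

# Hypothetical float32 weights from some model layer (values made up).
weights = np.array([0.12, -0.83, 0.45, 0.99], dtype=np.float32)

# Map the float range onto signed 8-bit integers.
scale = float(np.abs(weights).max()) / 127.0
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

# The int8 values approximate the originals to within one quantization
# step; machine learning models tend to tolerate that error well.
print(quantized.tolist())
print(float(np.abs(weights - dequantized).max()) < scale)
```

Doing the multiplies on the 8-bit values instead of the 32-bit ones is where the transistor and power savings come from.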
  • by starless ( 60879 ) on Wednesday May 18, 2016 @07:32PM (#52138745)

    Moore's law relates to the number of components in an integrated circuit.
    I really doubt these things put more transistors onto a piece of silicon.

    • by AchilleTalon ( 540925 ) on Wednesday May 18, 2016 @07:44PM (#52138795) Homepage
      You are right. This is bullshit. He didn't mention anything about the density of transistors, and since this is a specialized chip, its performance claim can't be compared to a general-purpose chip.
    • by Anonymous Coward

      Came here to say the same thing.

      Moore's law isn't about performance, it's about economy. But smaller components do allow higher clock speeds and more functionality, so performance tends to increase too.

    • by dbIII ( 701233 ) on Wednesday May 18, 2016 @08:50PM (#52139021)
      Also from the summary:

      "which means it requires fewer transistors per operation"

      So as you suggest it's definitely a misuse of the term used to roughly describe part of Intel's roadmap when Moore was there.

    • Many people confuse Moore's law and Dennard scaling. Dennard scaling is dead. We can still etch smaller transistors; the trouble is dissipating the heat, so even though we have more transistors, most of them have to stay dark.

      Once you boil it down to the math, they are doing mostly giant matrix multiplies. Optimizing for a particular type of computation is not really related to transistor density or power dissipation. What has evolved is our understanding of the algorithms.
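The "giant matrix multiplies" point is easy to make concrete. A minimal fully-connected layer (sizes here are made up for illustration) is one big matrix product followed by a cheap nonlinearity, which is exactly the shape of work this kind of hardware targets:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal((64, 256))   # a batch of 64 inputs, 256 features each
W = rng.standard_normal((256, 512))  # layer weights
b = np.zeros(512)                    # layer biases

# y = ReLU(x @ W + b): the matrix multiply dominates the cost.
y = np.maximum(x @ W + b, 0.0)

print(y.shape)  # (64, 512)
```

Stacking layers like this just means more of the same multiplies, which is why hardware optimized for dense matrix math pays off across many models.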

  • Yeah, like DSPs... (Score:5, Interesting)

    by RyanFenton ( 230700 ) on Wednesday May 18, 2016 @07:57PM (#52138827)

    Specialized processing chips have been several 'generations' ahead in terms of processing per dollar for many decades. In the 90's at least, DSPs were doing audio/video processing much more cheaply by performing many machine-level steps simultaneously in one 'cycle', with less power than a general processor, by leaving out the features and cost of a general processor. And all you had to do to use them was test them on a hardware emulator, flash them, then pop them into a production test run until you were ready to deploy. Depending on the chip, they could run on a trickle of power, without active cooling, and match a much more costly general chip for pennies.

    I mean, it's how we got cell phones, and LOTS of other things, including most things in a computer that aren't the CPU.

    But isn't Moore's law more about transistors per unit cost, rather than performance per cost? Seems like a fundamental misunderstanding in the headline... which seems about as common as specialized chips in modern technology.

    Ryan Fenton

    • by wjcofkc ( 964165 )
      Moore's law as it stands has always been a wall that any self-respecting nerd knew we would hit sooner than predicted. It's time that it gets redefined.
    • by Kjella ( 173770 )

      But isn't Moore's law more about transistors per unit cost, rather than performance per cost? Seems like a fundamental misunderstanding in the headline... which seems about as common as specialized chips in modern technology.

      Actually just transistor density, not cost at all. But in the popular press there's no invalid use of Moore's law: it can be about performance, cost, size, IPC, battery life or whatever. Anything that's twice as good in two years, or whatever best fits your story, follows Moore's law. In this case it's not even an actual use; it's comparing totally unrelated improvements to imaginary iterations of it.

      • by RyanFenton ( 230700 ) on Wednesday May 18, 2016 @10:02PM (#52139175)

        Linky [wikipedia.org]

        "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years."

        - Gordon E. Moore, April 19, 1965

        It's both cost and density, and continued to be so as it crystallized into the transistor-count-doubling-every-18-months figure. Doubling the density only at exorbitant cost would not really be an increase in accessible technology. It's not just the technology being invented and monopolized, but being rolled out and usable by the entire field. Increasing computational complexity is still the most important part - but cost has always been a part of the idea too.
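For what it's worth, the arithmetic behind the headline's "seven years" is straightforward if you take Google's "order of magnitude" claim and the popular one-doubling-every-two-years reading of Moore's law (both assumptions, not measurements):

```python
import math

doublings = math.log2(10)   # a 10x gain is about 3.32 doublings
years = doublings * 2       # at ~2 years per doubling: about 6.6 years

print(round(doublings, 2), round(years, 1))  # 3.32 6.6
```

Round 6.6 up and you get the "seven years into the future" line from the blog post.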

  • I will trust Intel, AMD and NVidia to sustain Moore's law as it pertains to general purpose computing, not Google. Google gets plaudits for advancing neural net hardware, if indeed they didn't just buy the tech and slap the Google brand on it, which is likely. The hyperbole just erodes credibility; in other words, it makes me wonder how many other exaggerations will turn up in this department. Yes, it's a fact it plays Go well, and no doubt does a lot of other things well. Let's stick to the facts please. It d

  • by Anonymous Coward

    End of message.

  • This is the FIFTH Google story today.

    • Yeah, but OTOH they also have an article about AIDS, so it's all good.....

    • Re: (Score:3, Informative)

      by Anonymous Coward

      Google I/O is happening at the moment.

  • by Shinobi ( 19308 ) on Wednesday May 18, 2016 @08:54PM (#52139039)

    Tensor Processing Units are not new. SGI used to offer that for their Octane, aimed pretty heavily at the satellite image analysis crowd.

    • by Shinobi ( 19308 )

      Now that I check, there was also an XIO board with a TPU for the Origin/Onyx 2/Origin 200 machines

    • by jasnw ( 1913892 )
      I strongly suspect that Google's "Tensor" is not the same as a mathematical tensor, which is what the SGI chips were working with. This use of the word smells more of marketing than mathematics. As in "OMG, Google is using TENSORS!!! to do their AI language processing. TENSORS dammit!!!"
      • by ceoyoyo ( 59147 )

        A tensor is an array, possibly with dimension > 2 if you want to be picky. TensorFlow absolutely does use tensors.

        • Yup, and if you want to be really pedantic even scalars are tensors of rank zero.
        • by jasnw ( 1913892 )
          And anything that does anything using arrays of any kind is "technically" using tensors. However, I suspect that Google is just using it for the pseudo-geek-sexy sound of it. Or maybe they're using an array of old Tensor high-intensity lights?? If so, my humble apologies.
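The rank hierarchy this subthread describes is easy to check in NumPy, which uses the same "tensor = n-dimensional array" convention TensorFlow does:

```python
import numpy as np

scalar = np.array(3.0)           # rank 0: a single number
vector = np.array([1.0, 2.0])    # rank 1
matrix = np.eye(2)               # rank 2
cube = np.zeros((2, 2, 2))       # rank 3

print([t.ndim for t in (scalar, vector, matrix, cube)])  # [0, 1, 2, 3]
```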
  • Perhaps when it isn't something that is likely to take multiple lines of work out in one shot, I'll be for it.

  • TPU sounds familiar (Score:3, Interesting)

    by FreshnFurter ( 599451 ) <{moc.oohay} {ta} {hdv_knarf}> on Thursday May 19, 2016 @04:12AM (#52140141)

    Was that not something introduced about 20 years ago by Silicon Graphics? Or am I getting old?
    http://manx.classiccmp.org/mir... [classiccmp.org]

    • by slew ( 2918 )

      Was that not something introduced about 20 years ago by Silicon Graphics? Or am I getting old?
      http://manx.classiccmp.org/mir... [classiccmp.org]

      If you are old enough, you might remember, some of the googleplex used to be SGI buildings...
      Maybe google did some remodeling and found some antiques ;^)

  • Not related to the wizard Tensor of Tensor's Floating Disc.

  • Am I reading this right? The basic gist of the article is that custom, purpose-built chips are faster at very specific tasks than general all-purpose chips?

    Such amaze, much fast, wow.

  • "Google's Law" states that whenever you need to make something sound cool and innovative, just misuse an existing term like "Moore's Law", because reporters are stupid.

  • The real question on advances over GPUs: Can it mine bitcoin faster and for less electricity?

    • Shirley, you're joking. FPGAs overtook GPUs for Bitcoin mining in 2011 [github.com], and ASICs followed later.
      • Actually, I did not know that.

        Meanwhile, given that this new TPU needs fewer watts per computation, maybe a better question is: can it mine bitcoins more cheaply?
