Google's Tensor Processing Unit Could Advance Moore's Law 7 Years Into The Future (pcworld.com)
An anonymous reader writes from a report via PCWorld: Google says its Tensor Processing Unit (TPU) advances machine learning capability by a factor of three generations. "TPUs deliver an order of magnitude higher performance per watt than all commercially available GPUs and FPGAs," said Google CEO Sundar Pichai during the company's I/O developer conference on Wednesday. The chips powered the AlphaGo computer that beat Lee Sedol, the world Go champion. "We've been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law)," said Google's blog post. "TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly." The chip is called the Tensor Processing Unit because it underpins TensorFlow, the software engine behind Google's deep learning services, which the company has released under an open-source license.
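To make the reduced-precision point concrete, here is a minimal NumPy sketch of 8-bit quantization; it is an illustration of the general idea, not Google's actual TPU arithmetic, and all the names and values are hypothetical:

```python
import numpy as np

# Hypothetical float32 weights from a trained model.
weights = np.random.randn(4, 4).astype(np.float32)

# Linear quantization: map the float range onto signed 8-bit integers.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# Dequantize to see the precision traded away for 4x fewer bits per value.
restored = q_weights.astype(np.float32) * scale
print("max quantization error:", np.abs(weights - restored).max())
```

Fewer bits per value means simpler arithmetic units, which is the sense in which reduced precision "requires fewer transistors per operation."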
Re: (Score:3, Insightful)
It's easy to disparage the efforts of somebody trying big things when they don't go as planned, isn't it? But they are trying, and the next step they take could be that 'big thing'. What have you done lately?
Re: (Score:2)
Re: (Score:1)
Except it's normally done at the algorithm level, not the chip level.
Re: (Score:1)
So, you're saying this is the first time they are making computations on a computer?
Re: (Score:1)
Re: (Score:2)
How many working, deployed hardware implementations have there been?
Re: (Score:3)
"order of magnitude better...performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law)"
Uh, no. Moore's law says nothing at all about performance. It speaks to the number of transistors. It was Dave House who predicted a doubling in performance every 18 months (Moore predicted doubling transistor counts every two years).
Re: (Score:3)
Most people who aren't chip architects don't really care one way or another about transistor density, other than that it was a convenient proxy for performance, the frequent doubling of which ordinary people do care about. Now that transistor density has largely hit a physics wall, perhaps we need a new term for the projected trajectory of performance that would have continued had physics allowed transistors to be infinitely small, which engineers are attempting to satisfy by coming up with novel architectures.
Re: (Score:1)
Nope. People do care about density, which provides the ability to condense useful functions which used to require a desktop-sized computer or a large video camera onto a pocketable device like a smartphone. People don't care about performance. We've got computers a thousand times faster than they were 20 years ago, but they're only slightly faster in everyday use.
Re: (Score:3)
Nope. People do care about density, which [...]
haha, you're cute.
Go ask 50 random people on the street if they care about transistor density and report back to us.
Polling issues (Score:1)
See, this is what's wrong with polls.
"Do you care about transistor density?", will get a lot of "Huh?", which will be interpreted as "no, does not care".
"Transistor density is one of the biggest factors in computer speed. Do you care about making transistors more dense in the future to improve computer speed?" will get a lot of "Yes", because people want the computer speed, even if they don't understand transistor density.
Polling has gotten a bad name because loaded questions skew the results and make them meaningless.
Re: (Score:2)
Wouldn't seven years be between 4 and 5 generations?
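The arithmetic behind the quibble, as a quick back-of-the-envelope check (assuming the popular 18-month and 24-month doubling periods, which the summary never actually specifies):

```python
import math

# "An order of magnitude" better performance per watt is 10x,
# i.e. log2(10) ~ 3.32 doublings.
doublings = math.log2(10)

print(doublings * 1.5)   # ~5.0 years at House's 18-month doubling
print(doublings * 2.0)   # ~6.6 years at a 24-month doubling (close to "seven")

# And the reverse check from the comment above:
print(7 / 1.5)           # ~4.7 generations in 7 years at 18 months each
```

So "seven years" only lines up with "three generations" if a generation is taken to be a bit over two years.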
Probably not advancing Moore's law (Score:5, Insightful)
Moore's law relates to the number of components in an integrated circuit.
I really doubt these things put more transistors onto a piece of silicon.
Re:Probably not advancing Moore's law (Score:4, Insightful)
Re: (Score:1)
Came here to say the same thing.
Moore's law isn't about performance, it's about economy. But smaller components do allow higher clock speeds and more functionality, so performance tends to increase too.
Re:Probably not advancing Moore's law (Score:4, Insightful)
So, as you suggest, it's definitely a misuse of a term that roughly described part of Intel's roadmap back when Moore was there.
Re: Probably not advancing Moore's law (Score:3)
Many people confuse Moore's law and Dennard scaling. Dennard scaling is dead. We can still etch smaller transistors; we just have trouble dissipating the heat, so even if we have more transistors, most of them have to stay dark.
Once you boil it down to the math, they are doing mostly giant matrix multiplies. Optimizing a particular type of computation is not really related to transistor density or power dissipation. What has evolved is our understanding of algorithms.
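To see why "giant matrix multiplies" covers most of the workload, here is a minimal NumPy sketch of a fully connected neural-network layer; the layer shapes are made up for illustration:

```python
import numpy as np

def dense_layer(x, W, b):
    # The expensive part is one big matrix multiply; everything else
    # (bias add, ReLU activation) is cheap by comparison.
    return np.maximum(x @ W + b, 0.0)

x = np.random.randn(64, 1024)    # a batch of 64 inputs, 1024 features each
W = np.random.randn(1024, 512)   # layer weights
b = np.zeros(512)
y = dense_layer(x, W, b)         # shape (64, 512)
```

A chip like the TPU wins by doing exactly that matmul as cheaply as possible, not by packing transistors more densely.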
Yeah, like DSPs... (Score:5, Interesting)
Specialized processing chips have been several 'generations' ahead in terms of processing per dollar for many decades. In the 90's at least, DSPs were doing audio/video processing much more cheaply, performing many machine-level steps simultaneously in one 'cycle' with less power than a general processor, by leaving out the features and cost of a general processor. And all you had to do to use them was test them on a hardware emulator, flash them, then pop them into a production test run until you were good enough to deploy. Depending on the chip, they could run on a trickle of power, without active cooling, and match a much more costly general-purpose chip for pennies.
I mean, it's how we got cell phones, and LOTS of other things, including most things in a computer that aren't the CPU.
But isn't Moore's law more about transistors per unit cost, rather than performance per cost? Seems like a fundamental misunderstanding in the headline... which seems about as common as specialized chips in modern technology.
Ryan Fenton
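The kind of workload the comment above describes boils down to multiply-accumulate chains. A minimal NumPy sketch of a FIR filter, the textbook DSP workload (illustrative only, not any particular chip's code):

```python
import numpy as np

def fir_filter(signal, taps):
    # Each output sample is a chain of multiply-accumulates (MACs),
    # the operation DSPs dedicate single-cycle hardware to.
    n = len(taps)
    out = np.zeros(len(signal))
    for i in range(n, len(signal)):
        out[i] = np.dot(signal[i - n:i], taps[::-1])
    return out

noisy = np.random.randn(1000)
smoothed = fir_filter(noisy, np.ones(8) / 8.0)  # simple moving-average taps
```

A general-purpose CPU spends most of its silicon on things this loop never uses, which is exactly the economy the parent is pointing at.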
Re: (Score:2)
Re: (Score:3)
True, our knowledge of physics has limited moose's law. Once we get into quantum-scale computing, though, it should have another good run.
Re:Yeah, like DSPs... (Score:4, Funny)
Perhaps is time for squirrels law, no?
Re: (Score:2)
*sigh* There's never a mod point around when you need one.
Re: (Score:2)
Re: (Score:2)
But isn't Moore's law more about transistors per unit cost, rather than performance per cost? Seems like a fundamental misunderstanding in the headline... which seems about as common as specialized chips in modern technology.
Actually just transistor density, not cost at all. But in the popular press there's no invalid use of Moore's law: it can be about performance, cost, size, IPC, battery life, or whatever. Anything that's twice as good in two years, or whatever best fits your story, follows Moore's law. In this case it's not even an actual use; it's comparing totally unrelated improvements to imaginary iterations of it.
Re:Yeah, like DSPs... (Score:4, Informative)
Linky [wikipedia.org]
"The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years."
- Gordon E. Moore, April 19, 1965
It's both cost and density, and continued to be so as it crystallized into the figure of transistor counts doubling every 18 months. Doubling the density only at exorbitant cost would not really be an increase in accessible technology. It's not just about the technology being invented and monopolized, but about it being rolled out and usable by the entire field. Increasing computational complexity is still the most important part, but cost has always been a part of the idea too.
Hyperbole (Score:2)
I will trust Intel, AMD and NVidia to sustain Moore's law as it pertains to general-purpose computing, not Google. Google gets plaudits for advancing neural net hardware, if indeed they didn't just buy the tech and slap the Google brand on it, which is likely. The hyperbole just erodes credibility; in other words, it makes me wonder how many other exaggerations will turn up in this department. Yes, it's a fact it plays Go well, and no doubt does a lot of other things well. Let's stick to the facts, please. It doesn't need the hype.
I'm done with slashdot (Score:1)
End of message.
Google! YEY! (Score:2)
This is the FIFTH Google story today.
Re: (Score:1)
Yeah, but OTOH they also have an article about AIDS, so it's all good.....
Re: (Score:3, Informative)
Google I/O is happening at the moment.
Tensor Processing Units not new (Score:5, Informative)
Tensor Processing Units are not new. SGI used to offer that for their Octane, aimed pretty heavily at the satellite image analysis crowd.
Re: (Score:2)
Now that I check, there was also an XIO board with a TPU for the Origin/Onyx2/Origin 200 machines.
Re:Tensor Processing Units not new (Score:4, Informative)
Right, this isn't a general-purpose DSP but a custom ASIC [googleblog.com] designed to run their TensorFlow [tensorflow.org] graphs efficiently.
Re: (Score:2)
The SGI TPU was not merely a rebadged DSP. You had multiple application specific pipelines on it for example.
Re: (Score:3)
Re: (Score:3)
A tensor is an array, possibly with dimension > 2 if you want to be picky. TensorFlow absolutely does use tensors.
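In array terms (a minimal NumPy sketch; TensorFlow's tf.Tensor behaves analogously, with "rank" for the number of dimensions):

```python
import numpy as np

scalar = np.float32(3.0)       # rank 0
vector = np.ones(5)            # rank 1
matrix = np.ones((5, 5))       # rank 2
tensor = np.ones((32, 5, 5))   # rank 3: e.g. a batch of 32 matrices

print(tensor.ndim)             # 3 -- "dimension > 2 if you want to be picky"
```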
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Not if your stuff is massively parallel. For example, on a 100-watt GPU, doubled performance per watt translates to doubled performance, if there's no significant bandwidth bottleneck, etc.
If you're the bad ass dude who wants a 250-watt GPU and nothing else, it's the exact same deal. There are a few cards that eat even more, but nobody will make a kilowatt GPU (single) just for you.
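The point in numbers, as a toy calculation (made-up figures, not any real chip's specs, and assuming power is the fixed budget with no bandwidth bottleneck):

```python
# Toy numbers, not any real chip's specs.
power_budget_watts = 100.0
baseline_ops_per_watt = 1e9      # hypothetical baseline efficiency
improved_ops_per_watt = 2e9      # "doubled performance per watt"

# At a fixed power budget, throughput scales directly with efficiency.
print(power_budget_watts * baseline_ops_per_watt)   # 1e11 ops/s
print(power_budget_watts * improved_ops_per_watt)   # 2e11 ops/s: doubled
```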
Nice, but no thanks for a threat to humanity. (Score:2)
Perhaps when it isn't something that is likely to wipe out multiple lines of work in one shot, I'll be for it.
TPU sounds familiar (Score:3, Interesting)
Was that not something introduced about 20 years ago by Silicon Graphics? Or am I getting old?
http://manx.classiccmp.org/mir... [classiccmp.org]
Re: (Score:2)
Was that not something introduced about 20 years ago by Silicon Graphics? Or am I getting old?
http://manx.classiccmp.org/mir... [classiccmp.org]
If you are old enough, you might remember, some of the googleplex used to be SGI buildings... ;^)
Maybe Google did some remodeling and found some antiques.
Drat (Score:2)
Not related to the wizard Tensor of Tensor's Floating Disc.
Stop the presses (Score:2)
Am I reading this right: the basic gist of the article is that custom, purpose-built chips are faster at very specific tasks than general all-purpose chips?
Such amaze, much fast, wow.
They should just call it "Google's Law" (Score:2)
"Google's Law" states that whenever you need to make something sound cool and innovative, just misuse and existing term like "Moore's Law", because reporters are stupid.
Can it mine bitcoin? (Score:1)
The real question on advances over GPUs: can it mine bitcoin faster and for less electricity?
Re: (Score:2)
Re: (Score:1)
Actually, I did not know that.
Meanwhile, given that this new TPU uses fewer watts per computation, maybe the better question is: can it mine bitcoins more cheaply?