'Huang's Law Is the New Moore's Law' (wsj.com)
As chip makers have reached the limits of atomic-scale circuitry and the physics of electrons, Moore's law has slowed, and some say it's over. But a different law, potentially no less consequential for computing's next half century, has arisen. WSJ: I call it Huang's Law, after Nvidia chief executive and co-founder Jensen Huang. It describes how the silicon chips that power artificial intelligence more than double in performance every two years. While the increase can be attributed to both hardware and software, its steady progress makes it a unique enabler of everything from autonomous cars, trucks and ships to the face, voice and object recognition in our personal gadgets. Between November 2012 and this May, performance of Nvidia's chips increased 317 times for an important class of AI calculations, says Bill Dally, chief scientist and senior vice president of research at Nvidia. On average, in other words, the performance of these chips more than doubled every year, a rate of progress that makes Moore's Law pale in comparison.
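As a back-of-the-envelope check of that figure (my own arithmetic, not from the article): November 2012 to May 2020 is roughly 7.5 years, and a 317x gain over that window works out to slightly more than doubling every year.

```python
# Back-of-the-envelope check of the 317x figure, assuming the roughly
# 7.5-year window (November 2012 to May 2020) given in the summary.
years = 7.5
total_speedup = 317
annual_factor = total_speedup ** (1 / years)
print(f"{annual_factor:.2f}x per year")  # ~2.15x, i.e. more than doubling annually
```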
Nvidia's specialty has long been graphics processing units, or GPUs, which operate efficiently when there are many independent tasks to be done simultaneously. Central processing units, or CPUs, like the kind that Intel specializes in, are by contrast much less efficient at that, but better at executing a single, serial task very quickly. You can't chop up every computing process so that it can be efficiently handled by a GPU, but for the ones you can -- including many AI applications -- you can perform them many times as fast while expending the same power. Intel was a primary driver of Moore's Law, but it was hardly the only one. Perpetuating it required tens of thousands of engineers and billions of dollars in investment across hundreds of companies around the globe. Similarly, Nvidia isn't alone in driving Huang's Law -- and in fact its own type of AI processing might, in some applications, be losing its appeal. That's probably a major reason it moved this month to acquire chip architect Arm Holdings, another company key to the ongoing improvement in the speed of AI, for $40 billion.
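A minimal sketch of that parallel-versus-serial contrast (illustrative only; NumPy on a CPU stands in here for what a GPU does across thousands of execution lanes):

```python
import numpy as np

pixels = np.random.rand(1_000_000)

# Serial, CPU-style: handle one element at a time in a loop.
out_serial = [0.5 * p + 0.1 for p in pixels]

# Data-parallel, GPU-style: a million independent multiply-adds expressed as
# one operation that a parallel backend can spread across many lanes at once.
out_parallel = 0.5 * pixels + 0.1
```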
Transistor count (Score:5, Insightful)
Moore's law says nothing of performance. It is transistor count. Why does everyone get this wrong?
Re:Transistor count (Score:5, Insightful)
Because "transistor count" has too many syllables. Oh, and also because "journalists" are generally morons with English degrees.
Re: (Score:1)
Re: (Score:2)
> You misspelt "Slashdot Editors".
I have yet to see any evidence that Slashdot editors possess English degrees in any of their "editing."
Re: (Score:2)
Re: (Score:2)
Message to "journalists": don't be morans!
Re: (Score:2)
Moore's law says nothing of performance. It is transistor count. Why does everyone get this wrong?
TFA is paywalled, but TFS does not get it wrong. It doesn't say that Moore's Law is about performance. It only says that Huang's Law is about performance, which it is.
Exactly, transistor count favors CISC over RISC (Score:2)
As long as Moore's Law was active, increasing transistor count meant that the route to more speed was more complex CPUs.
But we're entering the age where data bandwidth is more important than single-CPU speed.
If the CPU is never waiting for data to arrive, then it will be faster. That's the key.
The way we have band-aided this problem up until recently is to invent caches, speculative execution, out-of-order execution, and multi-tasking. This gives the CPU something to do with its idle silicon.
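A crude toy benchmark of the "waiting for data" point (my own illustration, not from the post; the random gather also pays for an extra array copy, so it overstates the effect somewhat): the arithmetic below is identical in both cases, but the memory system dominates the time.

```python
import numpy as np
import time

a = np.random.rand(20_000_000)          # far larger than any CPU cache
idx = np.random.permutation(a.size)     # random gather order

t0 = time.perf_counter(); s1 = a.sum(); t1 = time.perf_counter()
t2 = time.perf_counter(); s2 = a[idx].sum(); t3 = time.perf_counter()

print(f"sequential sum: {t1 - t0:.3f}s, random-gather sum: {t3 - t2:.3f}s")
```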
Re: (Score:2)
Wonder if the Connection Machine will make a comeback :)
Intel hasn't been CISC in decades (Score:2)
Re: (Score:2)
So it's CISC. Just think of it as a black box. It looks like CISC from the outside. It uses extra silicon to implement its CISC instructions. So it has the same drawbacks and advantages CISC does. And since you don't send it RISC instructions, it isn't RISC.
Re: (Score:2)
So it's CISC. Just think of it as a black box. It looks like CISC from the outside. It uses extra silicon to implement its CISC instructions. So it has the same drawbacks and advantages CISC does. And since you don't send it RISC instructions, it isn't RISC.
It *is* sent RISC instructions (micro-ops) for scheduling and execution. The vast majority of the transistors on the device are for executing these RISC instructions. The legacy CISC instructions seen by the programmer are not executed directly; they are translated into one or more RISC instructions.
The programmer's instruction set is CISC.
The hardware's instruction set is RISC.
Since we are talking about transistor count here, the hardware's reality seems the more appropriate view. If we were talking about compilers, the programmer's instruction set would be the more relevant one.
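A toy sketch of that cracking step (purely illustrative; real x86 decoders are far more elaborate, and the micro-op names here are made up):

```python
# Purely illustrative: a CISC-style instruction with a memory operand gets
# cracked into simpler load/compute/store micro-ops before execution.
def crack(insn):
    if insn == "add [rbx], rax":           # read-modify-write on memory
        return ["load  t0, [rbx]",
                "add   t0, t0, rax",
                "store [rbx], t0"]
    return [insn]                          # register-only ops often map 1:1

print(crack("add [rbx], rax"))
print(crack("add rcx, rax"))
```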
Re: (Score:1)
GPUs are not optimal for neural nets; ASICs are.
wrong (Score:2)
Neurons are more optimal than ASICs if we're talking general hardware.
But GPUs are more general than ASICs.
Re: (Score:2)
Moore's law says nothing of performance. It is transistor count. Why does everyone get this wrong?
Most people have absolutely no clue about performance, or that transistor count is anything but a rough proxy for it. Especially since complex chips have been interconnect-limited for 20 years or so now.
Re: (Score:2)
Not only that, but Moore's Law—realized after the fact—is exactly what allowed the growth in AI development today.
There's nothing special about the field of AI that allows it to outpace Moore's Law. Rather, the reason that machine learning and other AI applications have seemingly been advancing more rapidly than Moore's Law in the last few years is that there was an "overhang" in the industry: technology had advanced, at the pace of Moore's Law, beyond the point necessary to do X before anyone actually applied it to X, so working through that overhang looks like progress faster than Moore's Law.
Re: (Score:2)
Moore's law says nothing of performance. It is transistor count. Why does everyone get this wrong?
I've always believed Moore's law is really about cost per transistor.
Essentially, how many transistors can you afford to shove into your product in order to provide a level of value at a price people are willing to pay?
In the end you can call it transistor count, I suppose, yet what really powers the underlying feedback loop is a market based on cost.
Re: (Score:2)
rubbish summary (Score:2)
CPUs and graphics chips haven't reached the limits of semiconductors yet; 5 nm and 3 nm node test devices have been demonstrated. Saying that something that doesn't exist, reliable self-driving vehicles, depends on AI, which depends on small chips, is just building a circular track with three stations in the middle of nowhere.
You morons! (Score:2)
We were almost out of reach of Moore's law and immediately you morons had to invent a new one!
If you would just stop making new laws, technology would have fewer limits and could advance so much faster!
IT'S NOT AI (Score:1)
Artificial Intelligence doesn't exist.
Quit calling this kind of stuff "AI".
Re: (Score:2)
Artificial Intelligence doesn't exist.
Quit calling this kind of stuff "AI".
This exactly.
There is nothing now, nor anything on the horizon, that comes anywhere close to an actual AI. All we have now is advanced pattern recognition and primitive machine learning. The best you could possibly call any of the current "AI" would be an "expert system", and even that is a stretch.
Re: (Score:2)
Oh really? Then what do you call this [wikipedia.org]?
Why Huang? (Score:1)
Re: (Score:2)
And if you do why not go with his family name Jensen, instead of his given name?
Re: (Score:2)
Huang is his family name. Jen-Hsun, often written as Jensen for English-speaking audiences, is his given name.
Re: (Score:2)
So they already reversed the name order compared to native?
Re: (Score:2)
And if you do why not go with his family name Jensen, instead of his given name?
His family name is Huang, which means "Yellow". Jensen ("Renxun" in Pinyin) is his given name.
Re: (Score:2)
It's either "U-hang" or "Bob".
Wow! You call it... You're a genius. (Score:1)
Doesn't work. (Score:2)
The law doesn't work.
AI chips more than double in performance because we basically started from 0. So of course the first increase is close to an infinite percentage.
Also, like smartphones, if you make them more expensive every two years, you get a bigger performance increase than in mature segments (such as computers).
Intel could also beat (its own) Moore's Law if they doubled the price of their chips every two years, and people were still buying them.
So of course, that $5 AI chip is now surpassed by a new $10 one.
Re: (Score:2)
AI chips more than double in performance because we basically started from 0.
The field of AI started in the 1950s, long before computers were based on "chips".
The first CPU on an integrated circuit ("chip") was in 1971.
Re: (Score:2)
Of course. But they didn't call it an AI chip. From what I understand, they are talking about dedicated AI chips.
Re: (Score:2)
From what I understand, they are talking about dedicated AI chips.
Huang is mostly talking about GPUs, which are not dedicated AI chips.
Re: (Score:2)
OK, but still more specialized than CPUs. In the past decade, GPUs became useful for tasks other than rendering 3D graphics.
I doubt the transistor count of GPUs has surpassed Moore's Law. And I also doubt AI performance (whatever their metric is) will continue the current trend of doubling every year for long. Therefore this law won't hold.
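Extending the back-of-the-envelope arithmetic from the summary (my own numbers, not from the thread): density doubling alone over that window falls far short of 317x, so most of the claimed gain would have to come from architecture, lower-precision math, and software rather than transistor count.

```python
# Rough comparison: Moore's-Law-style density doubling every two years over
# the same ~7.5-year window versus the 317x performance figure in the summary.
years = 7.5
moore_density_factor = 2 ** (years / 2)         # ~13.5x more transistors
claimed_speedup = 317
print(f"{moore_density_factor:.1f}x from density alone")
print(f"{claimed_speedup / moore_density_factor:.0f}x left over for architecture,"
      " lower precision, and software")
```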
No it's not the new Moore's Law (Score:5, Insightful)
No, it's not. For starters, no one will be referring to Huang's Law because some WSJ author named it after the CEO of some company. That's ridiculous. You might as well call it Trump's Law because he's the president of the country at the moment. That's inconsequential.
Next, Moore's Law has nothing to do with processing power, processor speed, MIPS, or any other higher-level concept. It purely refers to the physical density of transistors (and similar components) on silicon. Moore made that near-future observation based on the progress he'd seen from his company and the industry up to that point. He never claimed it to be a "law" or any such thing. It was an observation.
Finally, AI processing has absolutely nothing to do with this in any way. There is no reason that AI processing has to be done in a single chip, so the physical constraints of chip manufacture are no limitation on what AI can or cannot do, nor do they limit how fast the AI can process. If AI needs more computational power then you throw more cores at it. Heck, most AI processing for things like speech recognition is done off-device in the "cloud" anyway.
This whole thing is just stupid and inconsequential.
Re: (Score:1)
You seem to be assuming social memes must gel with logic. They often don't. A catch phrase that has some truth can live on even if it's full of caveats when carefully dissected. I've learned over the decades that IT is full of buzzwords, hype, and bullshit; and thus one should take such with a grain of salt.
Re: (Score:2)
I've learned over the decades that IT is full of buzzwords, hype, and bullshit; and thus one should take such with a grain of salt.
Very, very true. And some of it refuses to die, no matter how stupid.
You are wrong about "AI" simply needing cores (Score:1)
If AI needs more computational power then you throw more cores at it.
Why is that not true of games as well? Why are we still using a GPU at all then?
Turns out the answer to that is the same as for AI.
The simple fact is that modern AI is almost all neural networks, with very specific mathematical operations, just as game engines have very specific mathematical operations; both need to do a lot of them, as rapidly as possible.
That is why modern AI really took off when people started leaning on the GPU to do that work.
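A tiny illustration of that shared workload (made-up sizes and random data): a fully connected neural-net layer and a batch of vertex transforms both boil down to the same dense matrix math GPUs are built for.

```python
import numpy as np

# A fully connected neural-net layer: activations times weights, then a ReLU.
x = np.random.rand(64, 512)        # batch of inputs
w = np.random.rand(512, 256)       # layer weights
layer_out = np.maximum(x @ w, 0)

# A batch of 3D vertex transforms: vertices times a 4x4 matrix.
verts = np.random.rand(10_000, 4)  # homogeneous coordinates
mvp = np.random.rand(4, 4)         # model-view-projection matrix
transformed = verts @ mvp.T
```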
Re: (Score:2)
For starters, no one will be referring to Huang's Law because some WSJ author named it after the CEO of some company.
You mean like some tech publication picked up Moore's Law based on a statement of a CEO of some company?
Let's see:
Both Moore and Huang are billionaires,
Both Moore and Huang were CEOs of a major tech company specialising in the design of microprocessors at the time they made their claim,
Both Moore and Huang are businessmen,
Both Moore and Huang are electrical engineers.
So let's just come out and address the elephant in the room: One isn't a white American. ... Are you racist?
Re: (Score:2)
You mean like some tech publication picked up Moore's Law based on a statement of a CEO of some company?
Let's see: ...
Both Moore and Huang were CEOs of a major tech company specialising in the design of microprocessors at the time they made their claim
In 1965 when Gordon Moore made his transistor density "claim", he was at Fairchild, and he wasn't the CEO. In fact, he was never the CEO of Fairchild. When he co-founded Intel, he didn't become their CEO until some 7 years later, 10 years after he wrote the paper [intel.com] that gave birth to his "law".
Moore's "Law" posits a doubling of transistor density every two years. "Huang's Law" is supposedly based on statements he made at nVidia's GPU conference in 2018. But Huang's statements, and those of his people for
Which will win? (Score:2)
Interesting. That last article I remember reading on this subject (I don't recall where - anyone?) basically said that development of "AI" would stall because training it had become too expensive. I wonder which will win out?
No it's not, so just say no (Score:2)
Paywall, can't read (Score:2)
Please stop posting links to paywalled sites
Shitty 'algorithms' doubling in shittiness (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
So basically, shitty half-assed excuses for 'AI' doubling in shittiness every 2 years?
The real question is, can it do crap faster, or can it do worse crap?
Re: (Score:2)
Re: (Score:2)
Hehehe, indeed.
Scumbag law? (Score:2)
I name it the scumbag law, after nVidia's founder.