AI Drives Innovators To Build Entirely New Semiconductors (forbes.com) 59
"AI has ushered in a new golden age of semiconductor innovation," reports Forbes:
For most of the history of computing, the prevailing chip architecture has been the CPU, or central processing unit... But while CPUs' key advantage is versatility, today's leading AI techniques demand a very specific — and intensive — set of computations. Deep learning entails the iterative execution of millions or billions of relatively simple multiplication and addition steps... CPUs process computations sequentially, not in parallel. Their computational core and memory are generally located on separate modules and connected via a communication system (a bus) with limited bandwidth. This creates a choke point in data movement known as the "von Neumann bottleneck". The upshot: it is prohibitively inefficient to train a neural network on a CPU...
In the early 2010s, the AI community began to realize that Nvidia's gaming chips were in fact well suited to handle the types of workloads that machine learning algorithms demanded. Through sheer good fortune, the GPU had found a massive new market. Nvidia capitalized on the opportunity, positioning itself as the market-leading provider of AI hardware. The company has reaped incredible gains as a result: Nvidia's market capitalization jumped twenty-fold from 2013 to 2018.
Yet as Gartner analyst Mark Hung put it, "Everyone agrees that GPUs are not optimized for an AI workload." The GPU has been adopted by the AI community, but it was not born for AI. In recent years, a new crop of entrepreneurs and technologists has set out to reimagine the computer chip, optimizing it from the ground up in order to unlock the limitless potential of AI. In the memorable words of Alan Kay: "People who are really serious about software should make their own hardware...."
The race is on to develop the hardware that will power the upcoming era of AI. More innovation is happening in the semiconductor industry today than at any time since Silicon Valley's earliest days. Untold billions of dollars are in play.
Some highlights from the article:
- Google, Amazon, Tesla, Facebook and Alibaba, among other technology giants, all have in-house AI chip programs.
- Groq has announced a chip performing one quadrillion operations per second. "If true, this would make it the fastest single-die chip in history."
- Cerebras' chip "is about 60 times larger than a typical microprocessor. It is the first chip in history to house over one trillion transistors (1.2 trillion, to be exact). It has 18 GB memory on-chip — again, the most ever."
- Lightmatter believes using light instead of electricity "will enable its chip to outperform existing solutions by a factor of ten."
There are interesting chips coming (Score:4)
I did part of a training course on Xilinx's Versal [xilinx.com] product. It's pretty cool. Up to 400 AI engine cores on one chip, plus four ARM cores and a bucketload of FPGA fabric.
Re: (Score:2)
Is it enough for a smart and interactive augmented reality waifu?
Re: (Score:2)
Re: There are interesting chips coming (Score:1)
Al Bundy obviously!
*Jefferson grin*
Re: (Score:2)
I have an older friend who's said on several occasions "you guys think you want a smart wife. You don't really."
Re: There are interesting chips coming (Score:1)
I have a smart wife, and in reality it depends on how smart you yourself are, and whether you are confident and leading or a spineless walking purse.
Don't ever enter a relationship with anyone on a pedestal, unless it's both partners. And keep it that way.
If she happens to be both smarter and more self-confident, and you want it anyway, accept that she's gonna be the leader and that's probably a good thing for your dumb loser ass. ;)
Just make sure that *still* nobody's on a pedestal, and she gives you just
Re: (Score:1)
Re: (Score:2, Insightful)
Are they just reinventing DSPs? (Score:2)
Re: (Score:2)
Re: Are they just reinventing DSPs? (Score:2)
Nah, they are mostly reduced-precision matrix multipliers. Everything else is only just fast enough to keep the multipliers busy for MLPs.
For general purpose signal processing they are lousy.
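For the curious, here is a rough NumPy sketch of what "reduced-precision matrix multiplier" means in practice: int8 inputs, a wider int32 accumulator, then a rescale back to float. All shapes, scales, and names here are made up for illustration.

import numpy as np

# Sketch of the int8 multiply-accumulate pattern these accelerators are built
# around: low-precision inputs, wider accumulator, rescale at the end.
def int8_matmul(a_int8, b_int8, scale_a, scale_b):
    # Accumulate in int32 so the products don't overflow, then scale back to
    # float, the usual quantized-inference recipe.
    acc = a_int8.astype(np.int32) @ b_int8.astype(np.int32)
    return acc.astype(np.float32) * (scale_a * scale_b)

rng = np.random.default_rng(0)
a = rng.integers(-128, 127, size=(4, 8), dtype=np.int8)
b = rng.integers(-128, 127, size=(8, 3), dtype=np.int8)
print(int8_matmul(a, b, scale_a=0.02, scale_b=0.05))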
Re: (Score:2)
Nah, they are mostly reduced-precision matrix multipliers. Everything else is only just fast enough to keep the multipliers busy for MLPs.
This. GPUs have had massive parallelism and matrix math for a long time. OpenGL is all about 4x4 matrix multiplication, and that is within a single shader, not counting how many shaders can operate in parallel. What's new here seems to be much wider matmul units, as well as computing with tensors of higher rank (matrices have rank 2).
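To put the rank point in concrete terms, here is a minimal NumPy sketch contrasting the graphics-style 4x4 case with the batched, higher-rank shapes deep-learning matmul units are fed; every shape below is an arbitrary example, not any particular chip's native size.

import numpy as np

rng = np.random.default_rng(1)

# Classic graphics-style work: one 4x4 matrix times a 4-vector (rank 2 x rank 1).
m4 = rng.standard_normal((4, 4))
v4 = rng.standard_normal(4)
transformed = m4 @ v4

# The kind of shape deep-learning workloads throw at matmul units: a rank-3
# batch of activations against a shared weight matrix, i.e. many matrix
# multiplications at once.
batch = rng.standard_normal((32, 128, 256))   # (batch, rows, features)
weights = rng.standard_normal((256, 512))     # shared weight matrix
out = batch @ weights                         # broadcasts to (32, 128, 512)

print(transformed.shape, out.shape)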
Is AI real now? (Score:2)
My wife was watching "Shark Tank" on video. I think most of the people on it are fools, but maybe they are entertaining. In one episode there were two that claimed AI in their product. Huh?! Sure, I get machine learning, and there are neural net simulations. But is AI real?! I think not, and I said as much. I am open to facts showing that AI is actually real today. Until then, I say AI is an interesting research project, but that's all. Thanks for any new information.
Re:Is AI real now? (Score:4, Funny)
When we have people burning 5G towers because they think it causes COVID-19, it means the bar for "real AI" isn't as high as we might think.
Re: (Score:3)
When we have people burning 5G towers because they think it causes COVID-19, it means the bar for "real AI" isn't as high as we might think.
Or rather, natural intelligence may be present but is not successfully used by a significant part of the human race.
Re: (Score:2)
But is AI real?! I think not, and I said as much. I am open to facts that AI is actually real today
A more precise and accurate way to put this:
We currently have weak AI.
We do not currently have strong AI.
Re: (Score:2)
Ah yes, "weak AI", the "AI" without the "I".
Re: (Score:2)
AI is both incredibly impressive and incredibly stupid at the same time. Kinda like humans ;)
Re: Is AI real now? (Score:1)
There are no actual neural net simulations.
All they are doing is taking an input vector of numbers and multiplying it with static large matrices of weights that map it to an output vector. And the weights are created by optimization cycles. Passing the vector through it, looking for the delta between output and expected output, slightly tweaking the weights in that direction, rinse and repeat until good enough. Which I call a universal function. For when you don't know how to actually code it, or are too
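For anyone who wants to see that loop spelled out, here is a minimal sketch with a single weight matrix and plain squared error; everything in it is illustrative, not anyone's actual training code.

import numpy as np

# The loop the parent describes: multiply an input vector by a weight matrix,
# compare the output to what was expected, nudge the weights in the direction
# that shrinks the error, rinse and repeat.
rng = np.random.default_rng(42)
W_true = rng.standard_normal((3, 5))   # the "unknown function" we want to fit
W = np.zeros((3, 5))                   # the weights we will keep tweaking
lr = 0.01                              # how hard to tweak each time

for step in range(2000):
    x = rng.standard_normal(5)         # input vector
    y_expected = W_true @ x            # what the output should have been
    y = W @ x                          # what our current weights produce
    delta = y - y_expected             # the error on this example
    W -= lr * np.outer(delta, x)       # tweak the weights against the error

print(np.abs(W - W_true).max())        # ends up very close to zero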
Re: (Score:2)
Well, "AI" has been real since the marketeers misappropriated the term for all sorts of non-intelligent cognitive systems and statistical classifiers. There is no intelligence in these things in the sense that a real, intelligent person would define intelligence. And there may never be and certainly not anytime soon.
Re: (Score:2)
Re: (Score:3)
I can't wait until we get a model of Roomba that's so advanced that it gets depressed when it realizes it can't go down the stairs to clean the rest of the house.
Re: (Score:2)
Re: (Score:2)
Roombas have wifi. Bet it could get up again if it really thought about it.
https://xkcd.com/416/ [xkcd.com]
Re: (Score:2)
LOL! That's gold. Haven't seen that one!
Re: (Score:3)
I can't wait until we get a model of Roomba that's so advanced that it gets depressed when it realizes it can't go down the stairs to clean the rest of the house.
Here I am, brain the size of a planet, and they ask me to clean the floor...
Yield. (Score:1)
It's half the reason Intel is failing while AMD is succeeding currently.
AMD realized that if you combine a bunch of chiplets, you get much higher yield, because you can assemble them into a working chip more often and have to throw away otherwise-good large CPUs because of a single defect far less often. ;)
(Intel is catching on though. But then there's that process node problem.)
And yes, I got that it was a joke. :)
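A back-of-the-envelope version of the argument, using the simple Poisson yield model (yield roughly exp(-defect density * die area)); the defect density and areas below are made-up illustration numbers, not real fab data.

import math

defects_per_mm2 = 0.001        # illustrative defect density
big_die_mm2 = 600.0            # one large monolithic die
chiplet_mm2 = 75.0             # eight smaller chiplets covering the same area

# Poisson yield model: probability a die of a given area has zero defects.
yield_big = math.exp(-defects_per_mm2 * big_die_mm2)
yield_chiplet = math.exp(-defects_per_mm2 * chiplet_mm2)

print(f"monolithic 600 mm^2 die yield: {yield_big:.1%}")     # ~55%
print(f"single 75 mm^2 chiplet yield:  {yield_chiplet:.1%}")  # ~93%

# Known-good chiplets can be binned and combined, so the assembled part's
# effective yield tracks the small-die number, not the big-die one.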
Re: (Score:2)
Simple: All other approaches have failed to produce even a glimmer of actual intelligence, so they are (again) trying to brute-force the problem. They will (again) fail, of course. But I suspect that at this time it is more about the headlines and keeping their company in the news than about delivering any actual results.
Don't give Apple ideas (Score:2)
"Cerebras' chip "is about 60 times larger than a typical microprocessor. It is the first chip in history to house over one trillion transistors (1.2 trillion, to be exact). It has 18 GB memory on-chip — again, the most ever."
Don't give Apple any ideas - their devices are barely repairable as it is. If they make the entire Macbook in a single chip - we are done.
Re: Don't give Apple ideas (Score:2)
Can't be (economically) done. The yield will be abysmal. ;)
And if they tried to create an equivalent, I guess Louis Rossmann would have to learn to replace chiplets.
Re: (Score:2)
No worries, there always have been better alternatives than apple hardware. Cheaper too.
The writing on the wall since the Tesla FSD (Score:3)
Tesla has been claiming their FSD chip has a 21-fold performance gain [theverge.com] or a 16-fold MIPS/Watt gain over the Nvidia GPU. The AI engine in the FSD can do little more than 8-bit integer dot products, so it's a very simple thing compared to a GPU with floats and whatnot, and that performance gain is not surprising. You can push it back further - Google's TPU gets its performance advantage for exactly the same reasons. It's also the reason you see IPUs / NPUs appearing in mobile phones, rather than them just using the GPU.
So this is old news. This is really just a puff piece for a CPU chip that's the size of my laptop. But that's also old news. Trilogy Systems [wikipedia.org] tried it in the 1980s. That attempt failed because some of the wafer will have defects, which in Trilogy's case killed the entire wafer. But maybe it is easier this time around, since its AI processing units are a whole pile of identical cores, so perhaps routing around the bad ones is easier.
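A rough sketch of why routing around bad cores can work, again with the simple Poisson model; the defect density and core count are illustrative guesses, not anyone's spec sheet.

import math

defects_per_mm2 = 0.001        # illustrative defect density
core_mm2 = 0.1                 # area of one small compute core
n_cores = 400_000              # cores on a hypothetical wafer-scale die

# Each tiny core is very likely defect-free even though the whole wafer
# almost certainly is not.
p_core_good = math.exp(-defects_per_mm2 * core_mm2)
expected_good = n_cores * p_core_good
p_wafer_clean = p_core_good ** n_cores   # chance *every* core is defect-free

print(f"expected good cores: {expected_good:,.0f} of {n_cores:,}")
print(f"chance of a fully clean wafer: {p_wafer_clean:.2e}")

# Trilogy needed the whole wafer to work; tolerating and routing around the
# roughly 0.01% of bad cores is what makes wafer scale plausible this time.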
Re: (Score:3)
But there are newer techniques to grow wafers now. If the engineers use float-zone silicon instead, the defect rate might be low enough to have CPUs of the size they're talking about.
Not sure, and instead of looking at wafer defect levels compared to what they need and deciding early, I guess as per many Silicon Valley projects the
Re: The writing on the wall since the Tesla FSD (Score:1)
Doesn't help. See: Intel's 10nm node. AMD is currently succeeding because they use a chiplet design that lets them get around the defects.
Re: (Score:2)
> But there are newer techniques to grow wafers now.
nit: they don't grow wafers, they grow ingots, which then get sliced into wafers
Re: The writing on the wall since the Tesla FSD (Score:2)
Prior designs have tried to solve the bottleneck (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
The only thing interesting about this article is the photonics claim.
Pity they're charlatans. The Mach-Zehnder interferometers they use as the building blocks for their tech are a bit over a thousand times bigger than current semiconductor transistor technology. And the fundamental physics of light means that they can't shrink them down. So to match performance with modern AI chips, they'll eventually need silicon wafers 1000 times bigger than everyone else's.
They've kept the VCs happy and the money coming through by demonstrating low node count chips, but they have no reali
Re: Lightmatter (Score:1)
Not really true.
The electrons in electronics also just push each other forward with the EM force aka photons aka "light". So logically, the idea is to take away some electrons (miniaturization) and have those photons travel further (photonics).
Their implementation is just shit, I agree. But let's let them research some more. Nothing starts out perfect.
Re: Lightmatter (Score:2)
The fundamental physics of the situation doesn't allow for shrinking of these building blocks to 5nm to match current transistors. That's about 1/500th of a wave
marketing and vc bullshit (Score:2)
We don't even have AI other than stuff done decades ago. Symbolic AI and inference engines, neural nets, genetic algorithms... all old hat and we just run it on fast hardware. There is no golden age of AI coming.
Re: (Score:2)
Yes, but it was only about a decade ago that we actually got the hardware to put those decades-old techniques to real use, which considerably changed the game, and that is what the article is about: how it can go forward.
Re: marketing and vc bullshit (Score:1)
Wrong. I remember seeing pictures of neural net hardware that was much more realistic in their simulation than any of the recent crap, in books about designing neural nets in the mid-90s, and the chips were already treated as old hat back then.
The rage back then was that by doing it in software, anyone could do it, "even you(TM)".
Of course they did not mention that you would have to choose between slow as hell or extremely unrealistic and losing "intelligence" per "neuron" by a factor of 10-100.
Hell, the en
P.S.: s/ita/their/ (Score:1)
Just to clarify.
Sorry, missed that.
Re: (Score:2)
Not seeing any game changing though, just the same old crap and marketing hype.
Re: (Score:2)
Re: (Score:2)
Very much so. I learned this stuff when I was at university 30 years ago. All they are doing is making things faster. They are not making them smarter in any way. All they can do today they could have done 30 years ago. It would just have taken forever. Just a tool that got a bit more usable but is still pretty bad overall. Maybe some niche applications (e.g. recognizing street-signs) may have some real impact, but that is essentially it. Most present-day uses will be abandoned eventually because they do n
We may go back to analog computing (Score:2)
Neural networks are emulated analog computers, so the fastest way to run them would be on an analog chip. Analog computing might make a return.
Re: (Score:2)
There is no point. "AI" is deplorably dumb, it just gets faster being dumb and there really is no application that needs this.
Re: (Score:2)
Just because AI is overhyped doesn't mean it's useless. Instead of "Artificial Intelligence", try to think of AI as "Artificial Instinct". Neural networks managed to solve things like voice and image recognition that are trivial for humans (and most animals), but used to be impossible for computers. It also turns out that many activities that we used to consider intelligent, such as diagnosing cancer patients or playing Go, rely mostly on gut feeling. Now current AI may not be capable of high level reasoning, b
Re: (Score:2)
Now current AI may not be capable of high level reasoning,
Current "AI" is not capable of any level of reasoning at all. You seem to have fallen for some marketing lies.
Casually saying "AI". (Score:1)
I bet those morons /actually believe/ they are creating artificial intelligence with their shitty universal functions based on matrix multiplication, which are to actual AI what a perfectly spherical horse on a sinusoidal trajectory is to a horse race. (And it's only a matter of time before Slashdotters believe it too, and I will be downmodded. Happened to "IP".)
If they create some damn hardware, they could as well just create a proper simulation of neurons! It would be just as fast, yet perform 10-100 times
Based on the title (Score:2)
I thought it'd be something about AI being used to tweak lithography, processes, gate-level design, materials, etc., which would've been cool.
Is This "Everything Old Becoming New Again"? (Score:2)
This might be a bit of a stretch, but it reminds me of the approach offered by Transmeta Technologies between 1995 and their demise in 2009, with the idea that one could optimize through the use of "code morphing software" that would get you - if I understood it at the time - something
News for nerds, but not from nerds ? (Score:2)
> AI Drives Innovators To Build Entirely New Semiconductors
No.. they are still using Silicon.
Maybe you meant "chips".