
 



AI Hardware

AI Drives Innovators To Build Entirely New Semiconductors (forbes.com) 59

"AI has ushered in a new golden age of semiconductor innovation," reports Forbes: For most of the history of computing, the prevailing chip architecture has been the CPU, or central processing unit... But while CPUs' key advantage is versatility, today's leading AI techniques demand a very specific — and intensive — set of computations. Deep learning entails the iterative execution of millions or billions of relatively simple multiplication and addition steps... CPUs process computations sequentially, not in parallel. Their computational core and memory are generally located on separate modules and connected via a communication system (a bus) with limited bandwidth. This creates a choke point in data movement known as the "von Neumann bottleneck". The upshot: it is prohibitively inefficient to train a neural network on a CPU...

In the early 2010s, the AI community began to realize that Nvidia's gaming chips were in fact well suited to handle the types of workloads that machine learning algorithms demanded. Through sheer good fortune, the GPU had found a massive new market. Nvidia capitalized on the opportunity, positioning itself as the market-leading provider of AI hardware. The company has reaped incredible gains as a result: Nvidia's market capitalization jumped twenty-fold from 2013 to 2018.

Yet as Gartner analyst Mark Hung put it, "Everyone agrees that GPUs are not optimized for an AI workload." The GPU has been adopted by the AI community, but it was not born for AI. In recent years, a new crop of entrepreneurs and technologists has set out to reimagine the computer chip, optimizing it from the ground up in order to unlock the limitless potential of AI. In the memorable words of Alan Kay: "People who are really serious about software should make their own hardware...."

The race is on to develop the hardware that will power the upcoming era of AI. More innovation is happening in the semiconductor industry today than at any time since Silicon Valley's earliest days. Untold billions of dollars are in play.

Some highlights from the article:
  • Google, Amazon, Tesla, Facebook and Alibaba, among other technology giants, all have in-house AI chip programs.
  • Groq has announced a chip performing one quadrillion operations per second. "If true, this would make it the fastest single-die chip in history."
  • Cerebras' chip "is about 60 times larger than a typical microprocessor. It is the first chip in history to house over one trillion transistors (1.2 trillion, to be exact). It has 18 GB memory on-chip — again, the most ever."
  • Lightmatter believes using light instead of electricity "will enable its chip to outperform existing solutions by a factor of ten."


Comments Filter:
  • by dskoll ( 99328 ) on Sunday May 10, 2020 @06:55PM (#60045924) Homepage

    I did part of a training course on Xilinx's Versal [xilinx.com] product. It's pretty cool. Up to 400 AI engine cores on one chip, plus four ARM cores and a bucketload of FPGA fabric.

    • Is it enough for a smart and interactive augmented reality waifu?

      • by b05 ( 6837210 )
        Are we talking Al Gore or the Al from Home Improvement here?
      • by ceoyoyo ( 59147 )

        I have an older friend who's said on several occasions "you guys think you want a smart wife. You don't really."

        • I have a smart wife, and in reality it depends on how smart you yourself are, and whether you are confident and leading or a spineless walking purse.

          Don't ever enter a relationship with anyone on a pedestal, unless it's both partners. And keep it that way.

          If she happens to be both smarter and more self-confident, and you want it anyway, accept that she's gonna be the leader and that's probably a good thing for your dumb loser ass. ;)
          Just make sure that *still* nobody's on a pedestal, and she gives you just

    • You can also do training on our website tacthub.in at a lower price, and there will be live lectures, so you will learn more.
  • Digital Signal Processors, as long as it isn't the IBM Mwave. Ah, good class-action times: they paid for my Sound Blaster replacement.
    • No, there are a lot of different things people are trying (a lot of them aren't new, though). Google's Tensor Processing Units, for example, were basically 8-bit integer arithmetic ASICs.
    • Nah, they are mostly reduced-precision matrix multipliers. Everything else is only just fast enough to keep the multipliers busy for MLPs.

      For general purpose signal processing they are lousy.

      • Nah, they are mostly reduced-precision matrix multipliers. Everything else is only just fast enough to keep the multipliers busy for MLPs.

        This. GPUs have had massive parallelism and matrix math for a long time. OpenGL is all about 4x4 matrix multiplication, and that is within a single shader, not counting how many shaders can operate in parallel. What's new here seems to be much wider matmul units, as well as computing with tensors of higher rank (matrices have rank 2).
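
        For anyone curious what a "reduced-precision matrix multiplier" does in practice, here is a minimal Python/NumPy sketch of int8 quantized matrix multiplication with int32 accumulation; the matrix sizes and the symmetric per-tensor scaling are assumptions made up for the example, not details from the thread or the article.

        import numpy as np

        # Illustrative only: quantize float32 operands to int8, multiply,
        # accumulate in int32, then rescale back to float.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((128, 256)).astype(np.float32)
        B = rng.standard_normal((256, 64)).astype(np.float32)

        def quantize(x):
            scale = np.abs(x).max() / 127.0            # symmetric per-tensor scale
            q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
            return q, scale

        qa, sa = quantize(A)
        qb, sb = quantize(B)

        # int8 x int8 products accumulated in int32: the cheap, dense operation
        # that many of these AI chips are built around.
        acc = qa.astype(np.int32) @ qb.astype(np.int32)
        approx = acc.astype(np.float32) * (sa * sb)    # dequantize

        exact = A @ B
        print("max abs error:", np.abs(approx - exact).max())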

  • My wife was watching "Shark Tank." I think most of them are fools, but maybe they are entertaining. In one episode there were two that claimed AI in their product. Huh?! Sure, I get machine learning, and there are neural net simulations. But is AI real?! I think not, and I said as much. I am open to facts that AI is actually real today. Until then, I say AI is an interesting research project, but that's all. Thanks for the new information.

    • by DontBeAMoran ( 4843879 ) on Sunday May 10, 2020 @07:36PM (#60046044)

      When we have people burning 5G towers because they think it causes COVID-19, it means the bar for "real AI" isn't as high as we might think.

      • by gweihir ( 88907 )

        When we have people burning 5G towers because they think it causes COVID-19, it means the bar for "real AI" isn't as high as we might think.

        Or rather, natural intelligence may be present but is not successfully used by a significant part of the human race.
         

    • But is AI real?! I think not, and I said as much. I am open to facts that AI is actually real today

      A more precise way to state this:

      We currently have weak AI.
      We do not currently have strong AI.

    • by Kjella ( 173770 )

      AI is both incredibly impressive and incredibly stupid at the same time. Kinda like humans ;)

    • There are no actual neural net simulations.

      All they are doing is taking an input vector of numbers and multiplying it with large, static matrices of weights that map it to an output vector. And the weights are created by optimization cycles: passing the vector through, looking at the delta between the output and the expected output, slightly tweaking the weights in that direction, rinse and repeat until it's good enough. Which I call a universal function. For when you don't know how to actually code it, or are too
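
      For what it's worth, the "tweak the weights toward the delta" loop described above looks roughly like this minimal Python/NumPy sketch; the toy linear model, the learning rate, and the data are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Invented toy problem: learn W so that W @ x approximates a target mapping.
      true_W = rng.standard_normal((3, 5))
      X = rng.standard_normal((5, 200))          # 200 input vectors
      Y = true_W @ X                              # expected outputs

      W = np.zeros((3, 5))                        # weights start at zero
      lr = 0.01                                   # step size (learning rate)

      for step in range(500):
          out = W @ X                             # pass the vectors through
          delta = out - Y                         # difference from expected output
          grad = delta @ X.T / X.shape[1]         # direction that reduces the error
          W -= lr * grad                          # slightly tweak the weights
          # rinse and repeat until good enough

      print("mean abs weight error:", np.abs(W - true_W).mean())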

    • by gweihir ( 88907 )

      Well, "AI" has been real since the marketeers misappropriated the term for all sorts of non-intelligent cognitive systems and statistical classifiers. There is no intelligence in these things in the sense that a real, intelligent person would define intelligence. And there may never be and certainly not anytime soon.

  • Comment removed based on user account deletion
    • I can't wait until we get a model of Roomba that's so advanced that it gets depressed when it realizes it can't go down the stairs to clean the rest of the house.

    • It's half the reason Intel is failing while AMD is succeeding currently.

      AMD realized that if you just combine a bunch of chiplets, you get much higher yield, because you can combine good chiplets into a working chip more often and have to throw away an otherwise good large CPU over a single defect far less often (a rough yield sketch follows below).
      (Intel is catching on though. But then there's that process node problem. ;)

      And yes, I got that it was a joke. :)
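
      To put rough numbers on the chiplet point above, here is a back-of-the-envelope Python sketch using a simple Poisson yield model (yield = exp(-defect_density * area)); the defect density and die areas are assumptions for illustration, not real Intel or AMD figures.

      import math

      # Assumed numbers, purely illustrative.
      defect_density = 0.1              # killer defects per cm^2
      big_die_area   = 6.0              # cm^2, one large monolithic die
      chiplet_area   = 0.75             # cm^2, one of eight chiplets

      # Simple Poisson model: probability a die has zero killer defects.
      def die_yield(area_cm2, d=defect_density):
          return math.exp(-d * area_cm2)

      print(f"monolithic die yield: {die_yield(big_die_area):.1%}")   # ~54.9%
      print(f"single chiplet yield: {die_yield(chiplet_area):.1%}")   # ~92.8%

      # Chiplets are tested before packaging, so only the bad ~7% of small
      # dies get discarded, instead of ~45% of very expensive large dies.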

    • by gweihir ( 88907 )

      Simple: All other approaches have failed to produce even a glimmer of actual intelligence, so they are (again) trying to brute-force the problem. They will (again) fail, of course. But I suspect that at this time it is more about the headlines and keeping their company in the news than about delivering any actual results.

  • "Cerebras' chip "is about 60 times larger than a typical microprocessor. It is the first chip in history to house over one trillion transistors (1.2 trillion, to be exact). It has 18 GB memory on-chip — again, the most ever."

    Don't give Apple any ideas - their devices are barely repairable as it is. If they make the entire MacBook a single chip, we are done.

  • Tesla has been claiming their FSD has a 21-fold performance gain [theverge.com] or a 16-fold MIPS/watt gain over the Nvidia GPU. The AI engine in the FSD can do little more than integer 8-bit dot products, so it's a very simple thing compared to a GPU with floats and whatnot, and that performance gain is not surprising. You can push it back further - Google's TPU gets its performance advantage for exactly the same reasons. It's also the reason you see IPUs/NPUs appearing in mobile phones, rather than them just using the GPU.

    So this is old news. This is really just a puff piece for a CPU chip that's the size of my laptop. But that's also old news. Trilogy Systems [wikipedia.org] tried it in the 1980s. That attempt failed because some part of the wafer will always have defects, which in Trilogy's case killed the entire wafer. But maybe it is easier this time around: its AI processing units are a whole pile of identical cores, so perhaps routing around the bad ones is feasible (see the defect sketch at the end of this sub-thread).

    • Good point. However, back in the '80s they used the Czochralski method to grow the wafers, which leaves quite a few defects at the wafer level and so made it impossible.

      But there are newer techniques to grow wafers now. If the engineers use float-zone silicon instead, the defect rate might be low enough to allow CPUs of the size they're talking about.

      Not sure, and instead of looking at wafer defect levels compared to what they need and deciding early, I guess, as per many Silicon Valley projects, the
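
      As a back-of-the-envelope follow-up to the Trilogy comparison above, here is a small Python sketch of why a sea of identical cores can tolerate wafer defects when you can route around the bad ones; the core count, core area, defect density, and the Poisson/binomial model are assumptions for illustration.

      import math

      # Invented numbers for illustration.
      defect_density = 0.1                 # killer defects per cm^2
      core_area      = 0.05                # cm^2 per identical core
      n_cores        = 800                 # cores on the wafer-scale part
      needed_cores   = 700                 # cores that must still work

      p_core_good = math.exp(-defect_density * core_area)   # Poisson, per core
      expected_good = n_cores * p_core_good

      # Probability that at least `needed_cores` cores survive (binomial tail).
      p_enough = sum(
          math.comb(n_cores, k) * p_core_good**k * (1 - p_core_good)**(n_cores - k)
          for k in range(needed_cores, n_cores + 1)
      )

      print(f"per-core survival:   {p_core_good:.1%}")           # ~99.5%
      print(f"expected good cores: {expected_good:.0f}")         # ~796
      print(f"P(>= {needed_cores} good cores): {p_enough:.3f}")  # ~1.000

      # A design with no redundancy would need every core to be good:
      print(f"P(all {n_cores} cores good): {p_core_good**n_cores:.3f}")   # ~0.018
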
  • by Kalendraf ( 830012 ) on Sunday May 10, 2020 @08:28PM (#60046166)
    In the 1970s, Cray's early designs tackled the data-processing bottleneck by maximizing I/O throughput and minimizing path lengths. By the 1980s, Thinking Machines, nCUBE, KSR, and MasPar had developed massively parallel processing designs that tried to tackle the bottleneck by distributing the workload. In the '90s, Cray Research's T3D and T3E architectures, and more recently IBM's Blue Gene, were further examples of massively parallel systems with memory located close to the compute nodes to try to overcome the bottleneck. Thus, finding ways to overcome that bottleneck has been a consistent focus of designs for decades. With ever-shrinking semiconductor scales, this latest effort looks to be more of a natural progression from earlier designs.
    • When von Neumann was working out the design for EDVAC, he declared that he would build it so as to have no bottlenecks.
  • We don't even have AI other than stuff done decades ago. Symbolic AI and inference engines, neural nets, genetic algorithms... all old hat and we just run it on fast hardware. There is no golden age of AI coming.

    • by znrt ( 2424692 )

      Yes, but it's only in the last decade that we actually got the hardware to put those decades-old techniques to real use, which considerably changed the game. That is what the article is about: how it can go forward.

      • Wrong. I remember seeing pictures of neural net hardware, in books about designing neural nets in the mid-'90s, that was much more realistic in its simulation than any of the recent crap, and the chips were already treated as old hat back then.
        The rage back then was that by doing it in software, anyone could do it, "even you(TM)".

        Of course they did not mention that you would have to choose between slow as hell or extremely unrealistic, losing "intelligence" per "neuron" by a factor of 10-100.

        Hell, the en

      • Not seeing any game change, though; just the same old crap and marketing hype.

    • Mod up. What do you think old supercomputers were? Parallel pipelines that worked when you had a parallel solution. Brilliance? No. Compare that with 850-byte 8-bit chess programs. Hell, try to do B+ tree pruning today in that much memory! In 2005 some supercomputing sites determined that old FORTRAN compilers beat the pants off modern offerings. And DDR4 has been pushed so hard that soft errors occur. For video games this is not crucial, but for other stuff it can be.
    • by gweihir ( 88907 )

      Very much so. I learned this stuff when I was at university 30 years ago. All they are doing is making things faster. They are not making them smarter in any way. All they can do today they could have done 30 years ago; it would just have taken forever. Just a tool that got a bit more usable but is still pretty bad overall. Maybe some niche applications (e.g. recognizing street signs) may have some real impact, but that is essentially it. Most present-day uses will be abandoned eventually because they do n

  • Neural networks are emulated analog computers, so the fastest way to run them would be on an analog chip. Analog computing might make a return.

    • by gweihir ( 88907 )

      There is no point. "AI" is deplorably dumb; it just gets faster at being dumb, and there really is no application that needs this.

      • by Hentes ( 2461350 )

        Just because AI is overhyped doesn't mean it's useless. Instead of "Artificial Intelligence", try to think of AI as "Artificial Instinct". Neural networks managed to solve things like voice and image recognition that are trivial for humans (and most animals) but used to be impossible for computers. It also turns out that many activities we used to consider intelligent, such as diagnosing cancer patients or playing Go, rely mostly on gut feeling. Now current AI may not be capable of high-level reasoning, b

        • by gweihir ( 88907 )

          Now current AI may not be capable of high-level reasoning,

          Current "AI" is not capable of any level of reasoning at all. You seem to have fallen for some marketing lies.

  • I bet those morons /actually believe/ they are creating artificial intelligence with their shitty universal functions based on matrix multiplication that are to actual AI what a perfectly spherical horse on a sinusoidal trajectory is to a horse race. (And it's only a matter of time, before Slashdotters believe it too, and I will be downmodded. Happened to "IP".)

    If they create some damn hardware, they might as well just create a proper simulation of neurons! It would be just as fast, yet perform 10-100 times

  • I thought it'd be something about AI being used to tweak lithography, processes, gate-level design, materials, etc., which would've been cool.

  • I'm particularly intrigued by the comment that "everyone who is serious about software designs their own hardware"... It's a reflection on the idea that if you want ultimate performance, then optimization is the way to go.

    This might be a bit of a stretch, but it reminds me of the approach offered by Transmeta between its founding in 1995 and its demise in 2009, with the idea that one could optimize through the use of "code morphing software" that would get you - if I understood it at the time - something
  • > AI Drives Innovators To Build Entirely New Semiconductors

    No... they are still using silicon.

    Maybe you meant "chips".
