
Intel Unveils New AI Chip To Compete With Nvidia and AMD (cnbc.com)

Intel unveiled new computer chips on Thursday, including Gaudi3, an AI chip for generative AI software. Gaudi3 will launch next year and will compete with rival chips from Nvidia and AMD that power big, power-hungry AI models. From a report: The most prominent AI models, like OpenAI's ChatGPT, run on Nvidia GPUs in the cloud. It's one reason Nvidia stock is up nearly 230% year-to-date while Intel shares are up 68%. And it's why companies like AMD and now Intel have announced chips that they hope will draw AI companies away from Nvidia's dominant position in the market.

While the company was light on details, Gaudi3 will compete with Nvidia's H100, the main choice among companies that build huge farms of the chips to power AI applications, and AMD's forthcoming MI300X, when it starts shipping to customers in 2024. Intel has been building Gaudi chips since 2019, when it bought a chip developer called Habana Labs.

  • by account_deleted ( 4530225 ) on Thursday December 14, 2023 @02:33PM (#64082145)
    Comment removed based on user account deletion
    • For home users? Are you sure about that?

      • Re:YAWN! (Score:5, Informative)

        by thegarbz ( 1787294 ) on Thursday December 14, 2023 @05:40PM (#64082551)

        OP is absolutely on point. Not only do several home-user applications depend on CUDA, but AMD and Intel are not even in the right ballpark on a dollar-for-dollar basis. E.g. on Stable Diffusion, AMD's RX 7900 XTX runs at about 65% of the speed of the RTX 4070 Ti, the latter of which has a $200 lower RRP.

        AMD and Intel are currently abysmal in this field, and it shouldn't be a surprise, since NVIDIA has been packing consumer GPUs with dedicated ML hardware since the RTX cards hit the market. RDNA3-based cards' equivalent of Tensor cores is about on par with NVIDIA's Turing architecture. They are literally several generations behind here.

        • Gelsinger has to keep Intel in the "game", but it'll take years for Intel to bind this into its own chipsets, which will be DOA as the pace of change once again eludes them.

          Oh, and these won't be able to be sold to China, either.

          And wait for it-- they're essentially version one-point-oh.

          He should've stayed at VMWare. Oh, wait....

        • It's the CUDA part that I'm wondering about. Will these new chips have seamless integration with existing frameworks like PyTorch, TensorFlow, etc.? If companies have to spend development resources rewriting their code to work with these chips, it's not going to be a clear cost savings.
          • by Targon ( 17348 )
            I know that AMD has that integration with PyTorch. TensorFlow for ROCm is also there. So, while things that are specifically designed for CUDA are limited to NVIDIA, once you get out of that mindset, AMD at least will be a good option with growing support. Intel, on the other hand, is always about future products.
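    To illustrate the framework-integration point above: a minimal PyTorch sketch, assuming a standard PyTorch install. On AMD hardware, PyTorch's ROCm builds expose the same `torch.cuda` API (HIP presents itself as the "cuda" device), so code like this runs unchanged without any CUDA-specific rewrite; on a machine with no supported GPU it falls back to CPU.

    ```python
    import torch

    # Pick the GPU if one is visible; ROCm builds report True from
    # torch.cuda.is_available() on supported AMD GPUs, so the same
    # device string works for both NVIDIA and AMD.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A tiny model and batch, moved to whatever device was found.
    model = torch.nn.Linear(128, 10).to(device)
    x = torch.randn(32, 128, device=device)
    y = model(x)

    print(y.shape)  # torch.Size([32, 10])
    ```

    This is why the "rewrite everything" cost mostly bites code that reaches below the framework (custom CUDA kernels, cuDNN calls), not code written against PyTorch's device-agnostic API.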
  • Who knows where the road may lead us, only a fool would say
    Who knows if we'll meet along the way
    Follow the brightest star as far as the brave may dare
    What will we find when we get there

    La Sagrada Familia - we pray the storm will soon be over
    La Sagrada Familia - for the lion and the lamb

  • One thing seems certain: given the proliferation of specialized CPU instructions, GPUs, and NPUs across multiple vendors, there is growing demand for abstractions to make all this shit work across the board.

  • I built it in my mom's basement with spare parts and such. It will compete about as well as Intel's offering. I'll keep you all posted.
