
Report: Microsoft is Partnering with AMD on Athena AI Chipset

According to Bloomberg (paywalled), Microsoft is helping finance AMD's expansion into AI chips. Meanwhile, AMD is working with Microsoft to create an in-house chipset, codenamed Athena, for the software giant's data centers. Paul Thurrott reports: Athena is designed as a cost-effective replacement for AI chipsets from Nvidia, which currently dominates this market. And it comes with newfound urgency, as Microsoft's ChatGPT-powered Bing chatbot workloads are incredibly expensive to run on third-party chips. With Microsoft planning to expand its use of AI dramatically this year, it needs a cheaper alternative.

Microsoft's secretive hardware efforts also come amid a period of Big Tech layoffs. But the firm's new Microsoft Silicon business, led by former Intel executive Rani Borkar, is growing and now has almost 1,000 employees, several hundred of whom are working on Athena. The software giant has invested about $2 billion in this effort so far, Bloomberg says. (And that's on top of the $11 billion it has invested in ChatGPT maker OpenAI.) Bloomberg also says that Microsoft intends to keep partnering with Nvidia, and that it will continue buying Nvidia chipsets as needed.
  • Nobody knows what will really be left of the current AI craze. It is clear it will be a lot less than many people expect. It does seem very doubtful custom hardware will be a good idea, though.

    • What we will end up with is a bunch of APIs for hardware that isn't being used at all for what it was intended.
    • Re:Far too early (Score:4, Insightful)

      by Tony Isaac ( 1301187 ) on Thursday May 04, 2023 @08:31PM (#63498164) Homepage

      It's true that nobody knows *exactly* what the AI reality will be. But there are some clear indications that ChatGPT is in fact ushering in a new computing reality, similar to how Google shook up the web search landscape in the late '90s.

      ChatGPT's helpfulness with coding alone will be huge. At least a couple of times a day, I've been able to get a boost from ChatGPT with code that I would otherwise have had to look up on Stack Overflow: things like how to formulate a complex XPath query, or how to write a JavaScript function to perform an operation I wasn't familiar with. After asking ChatGPT and getting _exactly_ what I needed, I went back and searched the old way, with Google / Stack Overflow. I could find the same answers all right, but it took me a lot longer and required a lot more effort.
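      For illustration, here's a minimal sketch of the sort of XPath lookup I mean, in Python with lxml; the markup and the query are invented for this example, not something ChatGPT actually gave me:

      ```python
      # A toy example of a "complex" XPath query, using Python's lxml.
      # The HTML and the query are made up purely for illustration.
      from lxml import html

      doc = html.fromstring("""
      <table id="orders">
        <tr><td class="sku">A-100</td><td class="qty">3</td></tr>
        <tr><td class="sku">B-200</td><td class="qty">0</td></tr>
      </table>
      """)

      # Rows whose qty cell is non-zero, returning the sibling SKU text.
      skus = doc.xpath(
          '//table[@id="orders"]//tr[td[@class="qty"] > 0]/td[@class="sku"]/text()'
      )
      print(skus)  # ['A-100']
      ```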

      That kind of convenience goes way beyond coding. I predict that it will be too hard for people to resist, and that Microsoft is absolutely right to start building custom chips.

      • It's changing the job market [vox.com].

      • I think you're likely to get exactly what you needed from GPT if:

        • What you needed isn't particularly niche or weird (i.e., lots of similar examples online)
        • What you needed isn't very complicated (nothing too long or technical)
        • What you needed is code (highly structured information)

        Stray from those conditions and you might find it's considerably less reliable.

        Convenience is great but accuracy is often more important. It'll be interesting the first time someone is injured or dies because a "programmer" didn't san...

        • Your first point about niches is valid. But I find ChatGPT able to give very good answers on highly technical topics. And it's good with a broad variety of domains, not just code. I asked it, for example, to write a short story about a dog wearing a sweater. What it came up with was quite reasonable and readable, though that query is the opposite of "highly structured."

          • Yeah, but that's only because it was able to plagiarize components wholesale from presumably better-written source material that's also publicly available, but that you just didn't know about. The CS101 rule of "garbage in, garbage out" still applies.

    • Nobody knows what will really be left of the current AI craze. It is clear it will be a lot less than many people expect. It does seem very doubtful custom hardware will be a good idea, though.

      I would call this the first step into Phase 2. Phase 1 was the creation of the software.

      Phase 3 could be machines designing machines without any human help.

      Phase 4 could be machines building machines without any human help.

      Phase 5 could be really interesting for humans.

  • by williamyf ( 227051 ) on Thursday May 04, 2023 @08:10PM (#63498124)

    The Xbox One and Xbox Series X come to mind.
    IIRC, the AMD chip in the Surface Laptop is also a custom jobbie...

    Also, I think they have some collaboration on Microsoft's internal cloud and Azure.

    So, no news here; this is probably yet another chip out of AMD's "Semi-Custom" division.

    If anything, this will HELP AMD's chances in the MS ecosystem, because the standard MS AI libraries will be optimized for AMD tech.

    • by brad0 ( 6833410 )
      Nowhere does it say this is the first time they've worked together. What an irrelevant post.
  • ...of course, "Microsoft Silicon".

  • by NimbleSquirrel ( 587564 ) on Friday May 05, 2023 @06:28AM (#63498746)
    I'd much rather see AMD expand ROCm to support more of their existing chipsets, and work on things like improving integration with frameworks like PyTorch. It is one thing to build an expensive tensor GPU, but not everyone has $40,000 per GPU (e.g. an Nvidia H100) to throw at AI. Yes, AMD are well behind the curve, but I think there is an argument to be made that the lower end is valuable too, and an area AMD could really capitalize on; only their software support leaves a lot to be desired.

    They are already touting a VRAM advantage, and as anyone who has played with Stable Diffusion or LLaMA text generation can tell you, VRAM is really important for handling larger and larger models. But their software lets them down. I tried to get an RX 6600 working with Stable Diffusion... eventually I did, but it was a massive pain, as my card was not 'officially' supported by ROCm. It did work, and was far better than CPU alone. But in the end I switched to an Nvidia RTX 3060, mainly because it had 12GB of VRAM (the most I could afford at the time). Getting Nvidia working under PyTorch was super simple compared to the hoops I had to jump through with AMD.
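    For what it's worth, here's a minimal sketch of the kind of setup check involved, assuming a PyTorch wheel built against ROCm (ROCm builds reuse the torch.cuda namespace, so the same calls cover both vendors). The gfx override shown is the common community workaround for unsupported RDNA2 cards like my RX 6600, not anything officially documented:

    ```python
    import os
    import torch

    # Community workaround for cards ROCm doesn't officially support:
    # pose as a supported gfx target. Must be set before the first HIP
    # call; the value here is illustrative, not official.
    # os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

    if torch.cuda.is_available():
        device = torch.device("cuda")  # ROCm builds also use the "cuda" device name
        backend = "ROCm/HIP" if torch.version.hip else "CUDA"
        vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        print(f"{backend}: {torch.cuda.get_device_name(0)}, {vram_gb:.1f} GiB VRAM")
    else:
        device = torch.device("cpu")  # fallback: works, but far slower

    # Trivial matmul just to confirm the device actually works.
    x = torch.randn(1024, 1024, device=device)
    print((x @ x).shape)
    ```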
