AI Hardware

Sam Altman's $7 Trillion Chip Dreams Are Way Off the Mark, Says Nvidia CEO (businessinsider.com)

Jensen Huang took an indirect jab at Sam Altman when he said $7 trillion can buy "apparently all the GPUs." From a report: The Nvidia CEO made the quip at the World Governments Summit in Dubai on Monday when asked how many GPU chips that much money could buy. Altman, the OpenAI chief, is reportedly trying to raise trillions of dollars to boost supplies of the chips needed for AI processing. Huang told the United Arab Emirates' AI minister, Omar Al Olama, that developing AI won't cost nearly that much: because of expected advances in computing, he said, AI infrastructure will come in considerably below the $5 trillion to $7 trillion Altman is reportedly seeking.

"You can't assume just that you will buy more computers. You have to also assume that the computers are going to become faster and therefore the total amount that you need is not as much," Huang said. He also suggested that the cost of building AI data centers globally would amount to $2 trillion by 2029. Huang said: "There's about a trillion dollars' worth of installed base of data centers. Over the course of the next four or five years, we'll have $2 trillion worth of data centers that will be powering software around the world."


Comments Filter:
  • by david.emery ( 127135 ) on Tuesday February 13, 2024 @01:56PM (#64237036)

    And the cooling for that many chips doing AI? Would AI/cloud companies look to relocate to countries with the cheapest power sources? (And what would that do to efforts to control carbon emissions?)

    • Build the compute center in Iceland, which has plenty of cheap hydropower.

      Cool with a seawater heat exchanger. Use the waste heat for residential heating.

  • ...he can pay himself from what is left, so no reason for him to raise any less than the market will bear.

  • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday February 13, 2024 @02:33PM (#64237114) Journal
    It's not a huge surprise that the general response to Altman's scheme would be that it's grandiose puffery (even aside from his "I will create the machine god, but in a responsible way" vibes, a price estimate of $5-7 trillion creates the impression that you've not really nailed down the details when the window of uncertainty is that large relative to both the low and high values, and stupefyingly large in absolute terms); it seems a bit more interesting that Nvidia would be publicly pushing for a markedly smaller figure when they are one of the ones who would seem to stand to benefit.

    Is it disagreement between Altman and Huang over whether 'AI' is the emerging superintelligence or just a tool for churning out 'content' real fast, with correspondingly different estimates of how much people will actually want to spend on it? Is Nvidia perturbed because they think Altman's plan involves expanding fab capacity enough to make taking his pick of second-tier fabless designers, rather than paying Nvidia a premium, the preferred strategy? Or is it fundamentally greater optimism on Nvidia's side, assuming that improved efficiency will actually deliver as much 'AI' as the market wants for $2 trillion or so without huge shakeups in the supply chain, while Altman thinks that only maximum brute force will deliver what the problem requires?
    • by dgatwood ( 11270 )

      it seems a bit more interesting that Nvidia would be publicly pushing for a markedly smaller figure when they are one of the ones who would seem to stand to benefit.

      The bigger the number, the more incentive exists for other competitors to get into the market, so although NVIDIA's stock price might take a bump from bigger numbers causing investors to see dollar signs, publicly saying that the number is smaller means NVIDIA is likely to capture a bigger chunk of the pie.

  • by awwshit ( 6214476 ) on Tuesday February 13, 2024 @03:01PM (#64237156)

    Sam needs $7T because he has already calculated his cut. The real question is: can the world afford Sam? Clearly the answer is no. His plan is to burn the world to the ground while he runs off with all the cash.

    What kind of double-speak is it when you call it OpenAI but it is not open at all?

  • by methano ( 519830 ) on Tuesday February 13, 2024 @03:10PM (#64237182)
    I seem to be missing something here. Seems like there are a lot of better ways to spend $7T than this.
  • ...says the guy who raises prices so that the price-to-performance metric never moves. If better, more advanced devices stayed at the same old price, price-to-performance would improve; NGreedia abandoned that policy (a quick sketch of the arithmetic follows below).

    That said, the money doesn't all go to AI components. You need the rest of the computer, plus racks, data centers, utilities, and software.
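    A toy illustration of the parent's price-to-performance point; the generations, prices, and performance figures below are all made up:

```cpp
// Toy illustration of the price-to-performance complaint above: if each
// GPU generation doubles performance but also doubles price, perf/$
// never moves. All generations, prices, and perf figures are invented.
#include <cstdio>

struct Gpu { const char* name; double perf; double price_usd; };

int main() {
    const Gpu gens[] = {
        {"gen N",   100.0, 1000.0},  // hypothetical baseline
        {"gen N+1", 200.0, 2000.0},  // 2x perf at 2x price: ratio unchanged
        {"gen N+2", 400.0, 2000.0},  // 2x perf at flat price: ratio doubles
    };
    for (const Gpu& g : gens)
        std::printf("%-7s  perf/$ = %.3f\n", g.name, g.perf / g.price_usd);
    return 0;
}
```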

  • Sam Altman is toxic. Get rid of him.

  • by LostMyBeaver ( 1226054 ) on Wednesday February 14, 2024 @12:36AM (#64238074)
    What I think many people are missing is... well, Seymour Cray.

    I make this point because Cray started his supercomputing journey by building highly advanced machines connected to a single centralized memory system.

    As things stand right now, probably the biggest problem we're facing in AI is cache coherence. The bigger the machines we build, the bigger this problem gets. Currently, I'm trying to troubleshoot a fairly small HPC system, about a thousand cores. In its current state, the more cores I add to a single node, the slower the machine gets, because the cost of sharing memory between the cores is just too high. HBM2 and HBM3 don't help at all because it's an operating-system design issue: thrashing memory, which is what AI does, increases the number of CPU spinlocks. Historically, the cheapest form of shared memory has been atomic variables; they live in a single page and are always cache coherent. Right now, every access to such a variable takes a very long time because the kernels initiate spinlocks to wait for coherence. As a result, 128-core or larger processors are generally a lot slower than much smaller processors with memory duplicated in read-only regions.
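    A minimal sketch of that contention effect, independent of any particular system (thread and iteration counts are arbitrary): many cores hammering one cache-coherent atomic often get slower in aggregate as cores are added, because every increment bounces the cache line between cores.

```cpp
// Contention demo: total throughput of increments to a single shared
// atomic typically stagnates or drops as threads are added, because the
// cache line holding the atomic ping-pongs between cores.
// Build with: g++ -O2 -std=c++17 -pthread contention.cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static void bench(int nthreads, long iters_per_thread) {
    std::atomic<long> shared{0};
    const auto start = std::chrono::steady_clock::now();

    std::vector<std::thread> workers;
    for (int t = 0; t < nthreads; ++t)
        workers.emplace_back([&] {
            for (long i = 0; i < iters_per_thread; ++i)
                shared.fetch_add(1, std::memory_order_relaxed);  // contended line
        });
    for (auto& w : workers) w.join();

    const double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    std::printf("%2d threads: %6.1f M increments/s total\n",
                nthreads, nthreads * iters_per_thread / secs / 1e6);
}

int main() {
    for (int n : {1, 2, 4, 8, 16})  // watch aggregate throughput as cores rise
        bench(n, 5'000'000);
    return 0;
}
```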

    We need to see progress in high-performance multi-ported memory systems. I think specialized data lines could help: for example, entirely separate LVDS pairs for reads and writes to synchronized (coherent) memory regions. Any write to memory in a given region would be multicast across a full mesh, and spinlocks on reads could stay local to the core. As part of the multicast LVDS mesh, there could be a "dirty" status line, where a centralized broker identifies writes to a region (as any MMU would) and, with minimal propagation delay, raises a dirty flag at the speed of electricity for all subscribers to notifications on that region.
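    A rough software analogue of that multicast-plus-dirty-flag idea, with hypothetical names throughout; real hardware would use dedicated signal lines rather than atomics, and a real broker would track acknowledgements:

```cpp
// Sketch: a single writer "multicasts" updates by raising a per-reader
// dirty flag; each reader spins only on its own cache-line-padded flag,
// so the spin loop never contends with other cores.
// Build with: g++ -O2 -std=c++17 -pthread dirtyflag.cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kReaders = 4;
constexpr int kUpdates = 1000;

struct alignas(64) DirtyFlag {        // one cache line per reader, so each
    std::atomic<bool> dirty{false};   // spinner touches only its own line
};

std::atomic<int> shared_value{0};     // the "coherent region" (single writer)
DirtyFlag flags[kReaders];

void writer() {
    for (int v = 1; v <= kUpdates; ++v) {
        shared_value.store(v, std::memory_order_relaxed);  // update the region
        for (auto& f : flags)                              // multicast the signal
            f.dirty.store(true, std::memory_order_release);
    }
}

void reader(int id) {
    int seen = 0;
    while (seen < kUpdates) {
        // Local spin: touches only this reader's own flag line.
        while (!flags[id].dirty.exchange(false, std::memory_order_acquire)) {}
        // Notifications coalesce, so intermediate values may be skipped,
        // much like real cache-invalidation traffic.
        seen = shared_value.load(std::memory_order_relaxed);
    }
    std::printf("reader %d caught up at value %d\n", id, seen);
}

int main() {
    std::vector<std::thread> pool;
    for (int r = 0; r < kReaders; ++r) pool.emplace_back(reader, r);
    std::thread w(writer);
    w.join();
    for (auto& t : pool) t.join();
    return 0;
}
```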

    Honestly, Cray would probably come up with something much more useful. But, with optimizations like these, performance can improve drastically enough that substantially fewer cores could achieve the same tasks.

    From what I've been looking at, GPUs are trash for AI. I have racks full of NVidia's and AMD's best systems, and in a few cases I have access to several of the computers ranked in the top 10 on the Top500 list. The obscenely wasted cycles and transistors in AI processing are unforgivable. A chip specifically designed to run transformers should hold at least 100 times the capacity of a single GPU. And by optimizing the data path for AI in combination with smarter cache coherency, we could fit maybe thousands of times more capacity into a single chip.

    Strangely, right now, I think the two most interesting players in the market are GraphCore and Huawei. They both have substantially smarter solutions to this problem than either NVidia or AMD.
  • I suppose that amount might just be enough to license ASML's patents on the latest lithography processes, since AFAIK they've always been the supply bottleneck in the process of actually churning out more cutting-edge chips.

    Of course, if I were the sole supplier of the latest process-node silicon, I wouldn't necessarily be excited about the idea of licensing that technology.

    So... sure, you can build 100 new fabs, but if ASML doesn't have enough EUV machines to put in them, then... what's the point?
