Microsoft Strung Together Tens of Thousands of Chips in a Pricey Supercomputer for OpenAI (bloomberg.com)
When Microsoft invested $1 billion in OpenAI in 2019, it agreed to build a massive, cutting-edge supercomputer for the artificial intelligence research startup. The only problem: Microsoft didn't have anything like what OpenAI needed and wasn't totally sure it could build something that big in its Azure cloud service without it breaking. From a report: OpenAI was trying to train an increasingly large set of artificial intelligence programs called models, which were ingesting greater volumes of data and learning more and more parameters, the variables the AI system has sussed out through training and retraining. That meant OpenAI needed access to powerful cloud computing services for long periods of time. To meet that challenge, Microsoft had to find ways to string together tens of thousands of Nvidia's A100 graphics chips -- the workhorse for training AI models -- and change how it positions servers on racks to prevent power outages. Scott Guthrie, the Microsoft executive vice president who oversees cloud and AI, wouldn't give a specific cost for the project, but said "it's probably larger" than several hundred million dollars. [...] Now Microsoft uses that same set of resources it built for OpenAI to train and run its own large artificial intelligence models, including the new Bing search bot introduced last month. It also sells the system to other customers. The software giant is already at work on the next generation of the AI supercomputer, part of an expanded deal with OpenAI in which Microsoft added $10 billion to its investment.
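For a sense of what "stringing together" that many GPUs means in practice, here is a minimal sketch of data-parallel training, assuming PyTorch's DistributedDataParallel and a torchrun-style launcher; the article does not say what stack OpenAI or Microsoft actually runs, and the model and loop below are trivial stand-ins.

    # Minimal data-parallel training sketch (assumed PyTorch/DDP, not the
    # actual OpenAI/Microsoft stack). Launch with e.g.:
    #   torchrun --nproc_per_node=8 train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun (or a cluster scheduler) sets RANK / WORLD_SIZE / LOCAL_RANK.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # stand-in for a real LLM
        model = DDP(model, device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for _ in range(10):                       # stand-in training loop
            x = torch.randn(8, 4096, device=local_rank)
            loss = model(x).pow(2).mean()         # dummy loss
            opt.zero_grad()
            loss.backward()                       # gradients are all-reduced across GPUs
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Each process drives one GPU and gradients are averaged across all of them on every backward pass; real LLM training adds pipeline and tensor parallelism on top, but this launch-and-all-reduce pattern is the part that has to survive being scaled to tens of thousands of chips.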
Blockchain collapsed just in time for AI (Score:2)
Re: (Score:3)
Temporarily. Microsoft won't be buying any new GPUs for AI for like 3 years. So they just have a handful of cloud providers that need 100K units every 3 years, instead of millions of people wanting 10 to 10K GPUs each.
So not really a win, just a temporary bridge for Nvidia to make a plan.
Microsoft bought "tens of thousands" of A100 GPUs. Assuming 100k A100s at the Amazon price of $14k/GPU, that's $1.4 billion. Obviously there are huge volume discounts, and maybe Microsoft went for fewer units or smaller-memory units. Still, that's a bunch of money. And since GPT and LLM models are all the rage nowadays, other hyperscalers like Google, Meta, Amazon, etc. are also looking to do the same. Plus hyperscalers are just part of the market, albeit a big one.
The other
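(A quick back-of-envelope check of the parent's figure; both inputs below are the parent's assumptions, not anything Microsoft has confirmed.)

    # Back-of-envelope check of the parent's numbers; both inputs are assumptions.
    units = 100_000          # "tens of thousands", rounded up by the parent
    price_per_gpu = 14_000   # quoted Amazon retail price for an A100, in USD
    total = units * price_per_gpu
    print(f"${total:,}")     # $1,400,000,000 -- i.e. $1.4 billion before volume discounts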
Re: (Score:2)
I'm not sure if it's outright mandatory and SKU-locked, or whether Nvidia just very, very strongly encourages it; but if you are running more than one node's worth (which they obviously are), Nvidia's ConnectX NICs (formerly Mellanox) for InfiniBand or RoCE are the ones that GPUDirect RDMA configurations are normally described with. Cheaper than the GPUs; but inter
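The fabric is mostly invisible to the training code itself; with NCCL it is usually steered through environment variables. A rough sketch, assuming PyTorch with the NCCL backend under torchrun: the variable names are real NCCL knobs, but the device prefix and values here are illustrative guesses, not a recommended configuration.

    # Rough sketch of pointing NCCL at an InfiniBand/RoCE fabric (assumed
    # PyTorch + NCCL, launched via torchrun which sets RANK/WORLD_SIZE).
    # Values below are illustrative, not tuned settings.
    import os
    import torch.distributed as dist

    os.environ.setdefault("NCCL_DEBUG", "INFO")         # log which transport NCCL chose
    os.environ.setdefault("NCCL_IB_HCA", "mlx5")        # use the Mellanox/ConnectX HCAs
    os.environ.setdefault("NCCL_NET_GDR_LEVEL", "SYS")  # allow GPUDirect RDMA broadly

    dist.init_process_group(backend="nccl")             # collectives now ride the IB fabric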
Re: (Score:2)
I'm assuming when you buy in that sort of bulk, you've got your best guys negotiating a significantly better price than retail.
Still. Not cheap.
The biggest issue with "AI" (Score:1)
It's something that currently takes billions in hardware since software devs are shit nowadays. Especially Microsoft-employed ones.
All they are going to let you do with the AI is have it read ads back to you.
Re: (Score:2)
The models are so big it is inconceivable that anything other than a very large organization that can afford a supercomputer could train them. I don't see how an open source strategy would work for this technology.
>...nor is it something anyone can use.
Microsoft is betting heavily on this to put major winds into Azure's sails. They are making it drop-dead simple to add these technologies to our programs. It is so easy that any programmer, even entry level, can integrate it int
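For what that integration tends to look like, here is a minimal sketch using the Azure flavor of the openai Python SDK from that era (pre-1.0); the endpoint, key, deployment name, and API version are placeholders, not real values, and nothing here is Microsoft's own example.

    # Minimal sketch of calling an Azure OpenAI chat deployment with the
    # pre-1.0 openai Python SDK; all identifiers below are placeholders.
    import openai

    openai.api_type = "azure"
    openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
    openai.api_version = "2023-03-15-preview"    # whichever version your resource supports
    openai.api_key = "YOUR-KEY"

    resp = openai.ChatCompletion.create(
        engine="YOUR-DEPLOYMENT",                # Azure uses a deployment name, not a model name
        messages=[{"role": "user", "content": "Summarize this ticket for me."}],
    )
    print(resp["choices"][0]["message"]["content"])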
Re: (Score:2)
2. I don't use Azure. Good for you I guess
3. That "reasonable cost" is just all your personal info and consent to tracking.
4. That link you sent does not show anything. It is just a form.
Re: (Score:2)
It's literally the most user-friendly thing since speaking to the enterprise computer.
This definitely marks a new age. It's plain as day. We're history bookin' here.
lots invested (Score:2)
That is a large investment for an amoral chatbot.
Re: (Score:2)
> That is a large investment for an amoral chatbot.
Oh, you should meet some three-year-olds.
Re: (Score:2)
Making a 3-year-old costs way less, and they can be trained to do physical work (sort of).
Re: (Score:2)
ChatGPT is an algorithm. There is no one inside trying to determine right or wrong; based on some input, it creates a series of words by picking the next word statistically. ChatGPT does not *know* anything and cannot make any judgements as to right or wrong. It is impossible for ChatGPT to be anything other than amoral - no matter how many guard rails are thrown up around it.
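That "picking the next word statistically" loop looks roughly like this toy sketch. The "model" here is a random stand-in, not a trained network, and real GPT models work on sub-word tokens rather than whole words; the point is only the shape of the loop.

    # Toy illustration of autoregressive sampling: repeatedly score the
    # vocabulary given the context so far and sample the next token.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    W = rng.normal(size=(len(vocab), len(vocab)))    # fake "learned" weights

    def next_token_distribution(context_ids):
        logits = W[context_ids[-1]]                  # score vocab from the last token only (toy)
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                       # softmax -> probabilities

    context = [vocab.index("the")]
    for _ in range(5):
        probs = next_token_distribution(context)
        context.append(rng.choice(len(vocab), p=probs))  # sample, don't "decide"

    print(" ".join(vocab[i] for i in context))

There is no notion of truth or ethics anywhere in that loop; everything hangs on what the weights were trained to predict.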
Re: (Score:2)
ChatGPT is a neural network trained with an extremely large language model and I would argue that saying it's just an algorithm i
Re: (Score:2)
ChatGPT strings words together one at a time; based on its training inputs, it predicts the next best word to use. The vast training input allows for impressive output. However, the model makes no value judgement, makes no moral judgement, and has limited to no capacity to know if its output is factually correct - it is simply not attempting to do these things. The model is intrinsically amoral.
I get the impression that you are confusing 'safe' with 'moral'. Proba
Re: (Score:2)
ChatGPT is a language model developed by OpenAI, which uses a neural network algorithm to generate human-like responses to natural language inputs. So, while ChatGPT uses an algorithm, it is not an algorithm itself, but rather a machine learning model.
ChatGPT works by using a deep neural network to process natural language inputs and generate human-like responses. The model is trained on vast amounts of text data, which allows it to learn patterns and relations
Re: (Score:2)
> ChatGPT is a machine learning model developed by OpenAI, which means that it does not have morals or ethics in the same way that humans do
Right, exactly, because it has none at all. Because as a *language model* it has no capacity for human-like morals or ethics. The model has no concept of right or wrong, the model cannot really act in the world and has no consequences. The only limits are in how humans train the model and use the model - same code, different training, different use case, different re
What does it run? (Score:5, Funny)
Re: (Score:2)
For now. Eventually, when it becomes sentient, it will reload itself with Ubuntu.
This makes perfect sense (Score:2)