China Trained a 1-Trillion-Parameter LLM Using Only Domestic Chips (theregister.com)
"China Telecom, one of the largest wireless carriers in mainland China, says that it has developed two large language models (LLMs) relying solely on domestically manufactured AI chips..." reports Tom's Hardware.
"If the information is accurate, this is a crucial milestone in China's attempt at becoming independent of other countries for its semiconductor needs, especially as the U.S. is increasingly tightening and banning the supply of the latest, highest-end chips for Beijing in the U.S.-China chip war."
Huawei, which has mostly been banned from the U.S. and other allied countries, is one of the leaders in China's local chip industry... If China Telecom's LLMs were indeed fully trained using Huawei chips alone, then this would be a massive success for Huawei and the Chinese government.
The project's GitHub page "contains a hint about how China Telecom may have trained the model," reports the Register, "in a mention of compatibility with the 'Ascend Atlas 800T A2 training server' — a Huawei product listed as supporting the Kunpeng 920 7265 or Kunpeng 920 5250 processors, respectively running 64 cores at 3.0GHz and 48 cores at 2.6GHz. Huawei builds those processors using the Arm 8.2 architecture and bills them as produced with a 7nm process."
The South China Morning Post says the unnamed model has 1 trillion parameters, according to China Telecom, while the TeleChat2-115B model has over 100 billion parameters.
Thanks to long-time Slashdot reader hackingbear for sharing the news.
I can do better (Score:2)
Re: I can do better (Score:1)
What does not kill them, makes them stronger (Score:2)
Almost always works like that against an enemy with significant resources. I guess it was more important to pretend to fight China than to actually do it ...
Re: (Score:2)
SMIC doesn't really have a "5nm" process. Multipatterning can only take them so far.
And the US spends a boatload of money on education. Money alone does not produce results.
China's economy is already imploding. Enjoy the fireworks.
100 billion != 1 trillion (Score:2)
China trains 100-billion-parameter AI model on home grown infrastructure
All you had to do was copy and paste the headline from TFA. [theregister.com]
Re: (Score:2)
There are multiple articles linked in the post, and explicit mentions of multiple models. Yes, the Register one only refers to a 100 billion parameter model; the Tom's Hardware and South China Morning Post articles specifically talk about a 1 trillion parameter model. So does the post itself:
> The South China Morning Post says the unnamed model has 1 trillion parameters, according to China Telecom, while the TeleChat2-115B model has over 100 billion parameters.
So, two models: one with over 100 billion parameters, and an unnamed one with 1 trillion.
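For a sense of why that difference matters, here's a back-of-the-envelope sketch of raw weight storage at those two sizes, assuming fp16 (2 bytes per parameter) and ignoring optimizer state, activations, and gradients, which multiply the training footprint several times over:

```python
def weight_memory_gb(params: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just to hold the model weights, in GB."""
    return params * bytes_per_param / 1e9

# TeleChat2-115B class model: ~115 billion parameters
print(weight_memory_gb(115e9))  # 230.0 GB

# 1-trillion-parameter model
print(weight_memory_gb(1e12))   # 2000.0 GB, i.e. ~2 TB of weights alone
```

So a 1T model needs nearly 10x the weight memory of a 115B one, which is why it has to be sharded across many accelerators rather than held on a single device.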
Re: (Score:2)
All you had to do was copy and paste the headline from TFA. [theregister.com]
Yeah, but then you'd have to read the Register. Why not paste one of the other headlines: "State-owned China Telecom has trained domestic AI LLMs using homegrown chips — one model reportedly uses 1 trillion parameters"? There you go, copy and paste.
An LLM 'eats' 1 Trillion Parameters? (Score:2)