Hugging Face Is Sharing $10 Million Worth of Compute To Help Beat the Big AI Companies (theverge.com)
Kylie Robison reports via The Verge: Hugging Face, one of the biggest names in machine learning, is committing $10 million in free shared GPUs to help developers create new AI technologies. The goal is to help small developers, academics, and startups counter the centralization of AI advancements. [...] Hugging Face CEO Clem Delangue is concerned about AI startups' ability to compete with the tech giants. Most significant advancements in artificial intelligence -- like GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system -- remain hidden within the confines of major tech companies. Not only are these corporations financially incentivized to keep their models proprietary, but with billions of dollars at their disposal for computational resources, they can compound those gains and race ahead of competitors, making it impossible for startups to keep up. Hugging Face aims to make state-of-the-art AI technologies accessible to everyone, not just the tech giants. [...]
Access to compute poses a significant challenge to building large language models, often favoring companies like OpenAI and Anthropic, which secure deals with cloud providers for substantial computing resources. Hugging Face aims to level the playing field by donating these shared GPUs to the community through a new program called ZeroGPU. The shared GPUs are accessible to multiple users or applications concurrently, eliminating the need for each user or application to have a dedicated GPU. ZeroGPU will be available via Hugging Face's Spaces, a hosting platform for publishing apps, which hosts more than 300,000 AI demos built so far on CPU or paid GPUs, according to the company.
Access to the shared GPUs is determined by usage, so if a portion of the GPU capacity is not actively utilized, that capacity becomes available to someone else. This makes them cost-effective, energy-efficient, and ideal for community-wide utilization. ZeroGPU runs on Nvidia A100 GPUs -- which offer about half the computation speed of the popular and more expensive H100s. "It's very difficult to get enough GPUs from the main cloud providers, and the way to get them -- which is creating a high barrier to entry -- is to commit on very big numbers for long periods of time," Delangue said. Typically, a company would commit to a cloud provider like Amazon Web Services for one or more years to secure GPU resources. This arrangement disadvantages small companies, indie developers, and academics who build on a small scale and can't predict whether their projects will gain traction. Regardless of usage, they still have to pay for the GPUs. "It's also a prediction nightmare to know how many GPUs and what kind of budget you need," Delangue said.
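For developers, the sharing model is per-call rather than per-instance: a Space declares which functions need a GPU, and ZeroGPU attaches a shared A100 only while those functions run. Here is a minimal sketch following the pattern Hugging Face documents for ZeroGPU Spaces, using the `spaces` helper package and its `@spaces.GPU` decorator; the specific model and app wiring are illustrative, not prescriptive:

```python
# Minimal ZeroGPU-style Space: a shared A100 is attached only while
# the decorated function executes, then released for other users.
import gradio as gr
import spaces  # Hugging Face's ZeroGPU helper, available inside Spaces
from diffusers import DiffusionPipeline

# Model choice is illustrative; loading happens once at startup.
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
pipe.to("cuda")  # under ZeroGPU, GPU placement is honored only while a GPU is held

@spaces.GPU  # request a slice of the shared GPU pool for this call only
def generate(prompt):
    # Runs on the borrowed A100; the GPU is freed when the call returns.
    return pipe(prompt).images

gr.Interface(fn=generate, inputs=gr.Text(), outputs=gr.Gallery()).launch()
```

Because the GPU is held only for the duration of each decorated call, an idle demo consumes no GPU at all, which is what makes the usage-based sharing described above workable.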
Re: (Score:1)
So, what happened to that one? Did you retire?
Weaker entities were always disadvantaged (Score:2)
Between the larger stone age man and the smaller one, both fighting over whatever, the bigger dude had an advantage.
Unless the smaller dude was smarter.
Hint, hint.
This is Important (Score:1)
If there is no Linux of AI to counter the growing dominance of the Windows of AI, we are headed for a very dark future.
I'm pretty sure I don't have to go into the gigantic tidal wave of shit we had to clean up after Microsoft ran wild during the dot-com boom, wiping out programming languages, browsers, multimedia companies, and game developers, and fucking up the web for 20 years.
Compute? (Score:3)
When did "compute" become a noun?
Re: Compute? (Score:2)
A serious response to a tongue-in-cheek question, with a link that not only answers my question, but also has cool stuff like graphs on how often words are used!
Thank you! This is why I still love Slashdot.