Nvidia Wants To Speed Up Data Transfer By Connecting Data Center GPUs To SSDs (arstechnica.com)
Microsoft brought DirectStorage to Windows PCs this week. The API promises faster load times and more detailed graphics by letting game developers build apps that load graphical data from the SSD directly to the GPU. Now, Nvidia and IBM have created a similar SSD/GPU technology, but they are aiming it at the massive data sets in data centers. From a report: Instead of targeting console or PC gaming like DirectStorage, Big accelerator Memory (BaM) is meant to give data centers quick access to vast amounts of data in GPU-intensive applications, such as machine-learning training, analytics, and high-performance computing, according to a research paper spotted by The Register this week. Entitled "BaM: A Case for Enabling Fine-grain High Throughput GPU-Orchestrated Access to Storage" (PDF), the paper by researchers at Nvidia, IBM, and a few US universities proposes a more efficient way to run next-generation applications in data centers with massive computing power and memory bandwidth. BaM also differs from DirectStorage in that the creators of the system architecture plan to make it open source.
Seven SSDs + another card on a single x16 link? (Score:2)
Seven SSDs plus another card, all sharing a single x16 link?
Re: (Score:2)
Is the benefit supposed to be bandwidth or improved latency? Both are named "speed" but probably only one is relevant.
Re: (Score:2)
We've gone full circle and arrived back at the Connection Machine. https://en.wikipedia.org/wiki/... [wikipedia.org]
Didn't AMD already do this? (Score:4, Insightful)
Re: (Score:2)
Saw that video the other day, and then saw this news the next day. I'm pretty sure LTT had some advance knowledge that Nvidia was coming out with this and made a video to capitalize on the views when the news broke. The timing was just too perfect.
Re: (Score:2)
No, not really. AMD's solution was a proprietary, hardware-specific product. Nvidia is working on something far more generic, like DirectStorage, allowing its A100s to access I/O via the motherboard rather than bundling a product together in such a limited fashion.
AMD's solution is a bit of a non-starter in a datacentre.
expand splice()? (Score:2)
Last time I looked at the Linux kernel source, splice() did not allow this kind of transfer (I wanted to improve some NAS performance). Conceptually, though, it is very similar: with file descriptors for both the media data and the GPU memory, the CPU could set up the transfer and wait for the completion interrupt.
Of course, "minor details" like security need to be carefully considered, but the GPU memory should not be cacheable in the first place. The media blocks holding the data might have to be
Just great. (Score:2)
How do you like all that web-GPU stuff now?
Just add more security... (Score:2)
[ X ]Allow GPU direct writes to storage devices*.
[ X ]Allow SSD cross-link communications.
[ X ]Allow USB direct writes to storage devices.
*you must opt out then reboot the machine while standing on your head for security selections to take effect.
Evolution of 'PC' Architecture (Score:2)
The PC architecture should finally stop pretending that the GPU is "just a graphics card" and admit that it's a vector co-processor, and start building system boards with two sockets on them: one for the scalar processor (what we call the CPU today), and one for the vector processor (what we call the GPU today). The vector processor socket should get its own memory slots as well. Let's get a nice unified power plane going and finally provide the vector processor with some proper real estate for a real cooling solution.
Errr what? (Score:2)
Microsoft brought DirectStorage to Windows PCs this week.
How is that at all relevant? If you're going to talk about what NVIDIA is doing *now*, why not compare it to RTX IO, which is what NVIDIA introduced two years before Microsoft's DirectStorage.