Nvidia's CUDA Platform Now Supports RISC-V (tomshardware.com)

An anonymous reader quotes a report from Tom's Hardware: At the 2025 RISC-V Summit in China, Nvidia announced that its CUDA software platform will be made compatible with the RISC-V instruction set architecture (ISA) on the CPU side. This is a major step toward enabling RISC-V-based CPUs in performance-demanding applications. The announcement makes it clear that RISC-V can now serve as the main processor for CUDA-based systems, a role traditionally filled by x86 or Arm cores. While few expect RISC-V in hyperscale datacenters any time soon, RISC-V can already be used in CUDA-enabled edge devices, such as Nvidia's Jetson modules. Nvidia, however, does appear to expect RISC-V in the datacenter as well.

Nvidia's profile at the RISC-V Summit China was quite high: the keynote was delivered by Frans Sijsterman, who appears to be Vice President of Hardware Engineering at Nvidia. The presentation outlined how CUDA components will now run on RISC-V. A diagram shown at the session illustrated a typical configuration: the GPU handles parallel workloads, while a RISC-V CPU runs the CUDA system drivers, application logic, and operating system. This setup enables the CPU to orchestrate GPU computations entirely within the CUDA environment. Given Nvidia's current focus, the workloads are presumably AI-related, though the company did not confirm this. However, there is more.
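To make the described split concrete, here is a minimal CUDA C++ sketch (ours, not Nvidia's; names and sizes are illustrative): everything in main() is ordinary host code that would compile for a RISC-V CPU once the port lands, while the __global__ kernel runs on the GPU.

#include <cstdio>
#include <cuda_runtime.h>

// Parallel work: executes on the GPU.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// Orchestration: ordinary host code, the part that would run on a
// RISC-V CPU under the announced port (illustrative sketch).
int main() {
    const int n = 1 << 20;
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));            // CPU allocates GPU memory,
    cudaMemset(d_data, 0, n * sizeof(float));          // initializes it,
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);  // launches the kernel,
    cudaDeviceSynchronize();                           // and waits for the GPU.
    cudaFree(d_data);
    printf("kernel launched and completed from the host CPU\n");
    return 0;
}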

Also featured in the diagram was a DPU handling networking tasks, rounding out a system of GPU compute, CPU orchestration, and data movement. This configuration suggests Nvidia's vision of heterogeneous compute platforms in which a RISC-V CPU manages workloads while Nvidia's GPUs, DPUs, and networking chips handle the rest. And again, there is more. Even with this low-profile announcement, Nvidia is essentially bridging its proprietary CUDA stack to an open architecture, one that is developing quickly in China. Unable to ship its flagship GB200 and GB300 offerings to China, the company has to find other ways to keep CUDA thriving there.

Comments Filter:
  • Showing my age, but obligatory clip [youtube.com].

    • by Targon ( 17348 )

      Other than for some very basic things, a complex program will require so many more RISC instructions that there is no advantage to RISC. CISC vs. RISC was debated back in the 1980s, and it really ends up being a non-issue. RISC only has a true advantage on tasks that are NOT complex.

  • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday July 22, 2025 @08:08PM (#65538034) Journal
    It's definitely interesting that Nvidia thinks RISC-V is big enough to be worth the port; but describing the CPU as 'central' to Nvidia's preferred design is deeply overselling it. The recommended layout is basically a bunch of GPUs chatting with one another over NVLink within the chassis; using GPUDirect RDMA on Nvidia InfiniBand cards located on the same PCIe switch as the GPUs for scale-out; with Nvidia ethernet DPUs handling the remaining high-speed networking; and the CPU doing housekeeping.

    Given that porting and maintaining on another ISA isn't free, the fact that Nvidia bothered is certainly a vote of confidence in at least middling RISC-V options actually being attractive to enough potential buyers to be worth it; but the CPU is not intended to be a major player in a CUDA-oriented system, especially one of the larger ones.
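    For what it's worth, the "GPUs chatting with one another" part is visible from the CUDA runtime itself. A small sketch (standard API calls only; device count assumed) that probes which device pairs can bypass the host:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Sketch: ask the runtime which GPU pairs can access each other's
    // memory directly (e.g. over NVLink or a shared PCIe switch),
    // i.e. without staging traffic through the CPU.
    int main() {
        int n = 0;
        cudaGetDeviceCount(&n);
        for (int a = 0; a < n; ++a) {
            for (int b = 0; b < n; ++b) {
                if (a == b) continue;
                int ok = 0;
                cudaDeviceCanAccessPeer(&ok, a, b);
                printf("GPU %d -> GPU %d: %s\n", a, b,
                       ok ? "direct peer access" : "through the host");
            }
        }
        return 0;
    }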
    • Nvidia wants to free themselves from being dependent on anyone else for their big systems. You still need a CPU to run the OS. This way they can own it top to bottom.

      • I think this is almost certainly their end-goal. I suspect AMD would like to do the same thing.
        99.9% of all difficulty working with the big crunchers is OS+driver bullshit.

        If I were them, I'd be looking to make a product where you're really just talking to a combined system via a mailbox with a defined API, rather than trying to deal with the nightmare of virtual memory management across 8 GPUs.
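        Purely as a sketch of that mailbox idea (nothing here is a real CUDA or Nvidia API; every name is invented): the host posts opaque requests through a small defined interface and polls for completion, instead of touching device memory itself.

        #include <cstdio>
        #include <cstddef>

        // Hypothetical appliance-style interface; all names invented.
        struct Request { int opcode; const void *input; size_t len; void *output; };

        class Mailbox {
            int next = 0, completed = -1;   // mock state, not real hardware
        public:
            int post(const Request &) {     // real system: ring-buffer write + doorbell
                completed = next;           // mock: pretend it finishes instantly
                return next++;
            }
            bool poll(int ticket) const {   // real system: read a completion fence
                return ticket <= completed;
            }
        };

        int main() {
            Mailbox mb;
            Request r{1 /* opcode */, nullptr, 0, nullptr};
            int t = mb.post(r);
            printf("request %d %s\n", t, mb.poll(t) ? "complete" : "pending");
            return 0;
        }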
        • by Targon ( 17348 )

          AMD isn't limited to being a graphics-only or AI-only company, and looks to make all products work with as many different things as possible.

      • Could also just be them making sure they don't get left out. This is why big orgs publicly sign up for every standards group and industry initiative that exists; they don't believe in many of them, but they also don't want to get left out.
      • by Targon ( 17348 )

        NVIDIA doesn't have their own CPU, so they want AI tech to work with any CPU type, no matter what it may be. NVIDIA tried to buy ARM and failed, so now they want to be open on the CPU side while keeping things locked to CUDA on the GPU and AI side (the AI focus is NOT a GPU, it's not a graphics processor, even if AI can be done by a graphics processor).

        • AI's focus is on things that can execute an absurd amount of FMA operations.
          That's GPUs.

          Where you're getting hung up is on the idea that GPUs are "graphics processors". They are not, and have not been, for a long time now.
          Nobody buys datacenter GPUs for graphics, and they long predate the AI boom.

          On the off chance you're talking about things like NPUs- doubly no.
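          To illustrate the FMA point (a sketch, not production code; sizes are illustrative): the hot loop of the dense math behind most neural-net layers is just a chain of fused multiply-adds, which is exactly what GPU hardware is built to retire in bulk.

          #include <cstdio>
          #include <cuda_runtime.h>

          // One output element of C = A*B (the core of most NN layers)
          // is nothing but a chain of fused multiply-adds.
          __global__ void matmul_naive(const float *A, const float *B,
                                       float *C, int n) {
              int row = blockIdx.y * blockDim.y + threadIdx.y;
              int col = blockIdx.x * blockDim.x + threadIdx.x;
              if (row < n && col < n) {
                  float acc = 0.0f;
                  for (int k = 0; k < n; ++k)
                      acc = fmaf(A[row * n + k], B[k * n + col], acc); // one FMA
                  C[row * n + col] = acc;
              }
          }

          int main() {
              const int n = 256;   // inputs left uninitialized; we only count work
              float *A, *B, *C;
              cudaMalloc(&A, n * n * sizeof(float));
              cudaMalloc(&B, n * n * sizeof(float));
              cudaMalloc(&C, n * n * sizeof(float));
              dim3 block(16, 16), grid((n + 15) / 16, (n + 15) / 16);
              matmul_naive<<<grid, block>>>(A, B, C, n);
              cudaDeviceSynchronize();
              printf("n^3 = %d FMAs issued\n", n * n * n);
              cudaFree(A); cudaFree(B); cudaFree(C);
              return 0;
          }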
      • They're a GPU vendor, they just want to sell product.

        If you have slots on your motherboard for a graphics card then it shouldn't matter if the arch is x86, ARM, RISC-V or (historically) PowerPC.

        • They're a GPU vendor, they just want to sell product.
          If you have slots on your motherboard for a graphics card then it shouldn't matter if the arch is x86, ARM, RISC-V or (historically) PowerPC.

          Thinking of them as a "GPU" vendor or a seller of "Graphics Cards" is outdated. They make 90% of their money in the data center now. They DGAF about Crysis.

    • Ehhh, the GPUs don't particularly "chat with one another".
      You're right that they have direct communications- but there's no real way to program them to utilize them.
      For example, you've got a network layer spread across 5 GPUs- the FMAs can be done on all of them, but the product needs to be combined. You can't program your kernel to do this- it has no way to communicate with the other cards. So the commands to the GPU to send the data over the appropriate pipes to a combining kernel and then redistribute must come from the host.
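      A sketch of what that host-side staging looks like (illustrative: device count, sizes, and the elided kernels are assumed; the bytes may still move card-to-card over NVLink, but every copy below is commanded by the CPU):

      #include <cstdio>
      #include <cuda_runtime.h>

      int main() {
          const int n = 1 << 20;
          const int gpus = 5;                 // matches the example above; assumed
          float *part[gpus];
          for (int d = 0; d < gpus; ++d) {
              cudaSetDevice(d);
              cudaMalloc(&part[d], n * sizeof(float));
              // ... launch this card's slice of the layer here ...
          }
          cudaSetDevice(0);
          float *gathered;
          cudaMalloc(&gathered, (size_t)gpus * n * sizeof(float));
          for (int d = 0; d < gpus; ++d)                     // host-issued peer
              cudaMemcpyPeer(gathered + (size_t)d * n, 0,    // copies; data can take
                             part[d], d, n * sizeof(float)); // NVLink, commands can't
          // ... launch the combining kernel on GPU 0, then redistribute ...
          cudaDeviceSynchronize();
          printf("gather staged entirely by the host\n");
          return 0;
      }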
      • by Targon ( 17348 )

        You don't see that it would be very possible to have a direct interconnect between the products? Things like SLI, which used a dedicated bridge to link two video cards together, are a very basic approach. With further advancements, you could have products that actively talk to each other, even to the point of splitting up a given task or workload, with the products deciding between themselves what to do without needing a CPU-based program to control it.

        • You literally have no idea what you're talking about.

          The shaders/compute units do not have the primitives required (or the backing "operating system") to map other cards' address space into their own. The commands for doing that come from the driver controlling the cards (and the graphics library talking to them).

          You cannot, for example, "write an operating system in GLSL".
          SLI was no different in this regard.

          My advice to you is to quit talking about shit you have no fucking knowledge of.
      • That's true; I was speaking a bit too informally: my intended meaning was that, in terms of bandwidth, one of the contemporary Nvidia datacenter systems is very much set up to avoid bottlenecking on the CPU or the PCIe root complex. It's true that a lot of their marching orders have to be delivered from CPU to GPU; but the local NVLink and the placement of RDMA InfiniBand or BlueField ethernet DPUs on the same PCIe switches as the GPUs are very much intended to minimize the amount of traffic where the GPU is directly dependent on the CPU or the root complex.
    • It's not that surprising when you realize that China is using RISC-V to develop their own CPUs to lessen their dependence on foreign technology, and you can bet Nvidia wants a piece of that big ol' pie.
