Open Source Hardware

Libre-RISC-V 3D CPU/GPU Seeks Grants For Ambitious Expansion (google.com)

The NLNet Foundation is a non-profit supporting privacy, security, and the "open internet". Now the open source Libre RISC-V hybrid CPU/GPU is applying for eight additional grants from the NLNet Foundation, according to this update from the project's Luke Kenneth Casson Leighton (Slashdot reader #517,947): Details on each Grant Application are on the newly-opened RISC-V Community Forum.

The general idea is to kick RISC-V into a commercially-viable, mass-volume high gear by putting forward funding proposals for NEON/SSE-style Video Acceleration to be upstreamed for use by ffmpeg, vlc, mplayer and gstreamer; a hardware-assisted Mesa 3D port (a port of the RADV Vulkan Driver to RISC-V); and a hardware-accelerated OpenCL port to RISC-V. All of this is done in a "Hybrid" fashion (a la NEON/SSE), as opposed to the "usual" way that 3D and Video are handled, which hugely complicates both software drivers and application debugging.

In addition, the Libre RISC-V SoC itself is applying for grants to do a gcc port supporting its Vectorisation Engine including auto-vectorisation, and, crucially, to do an entirely Libre-licensed ASIC Layout using LIP6.fr coriolis2, working in tandem with Chips4Makers to create a 180nm commercially-viable single-core dual-issue test ASIC.

The process takes approximately 2-3 months for approval. Once accepted, anyone may be the direct (tax-deductible) recipient of NLNet donations, for sub-tasks completed. Worth noting: Puri.sm is sponsoring the project, and, given NLNet's Charitable Status, donations from Corporations (or individuals) are 100% tax-deductible.
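To make the "Hybrid" (a la NEON/SSE) idea concrete, here is a minimal sketch in plain C; it is not project code, and the function and frame layout are invented for the illustration. On a hybrid CPU/GPU a video filter is just an ordinary function running on the same core, so the compiler's vectoriser can map the loop straight onto the CPU's vector unit, with no kernel driver, command queue, or copy of the frame across a PCIe bus to separate GPU memory:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical video kernel: alpha-blend one 8-bit plane into another.
     * On a hybrid CPU/GPU this is a plain function on the same core; an
     * auto-vectoriser maps the loop onto the CPU's vector/SIMD unit, so no
     * separate driver, command buffer or device-memory transfer is needed. */
    void blend_plane(uint8_t *dst, const uint8_t *src, size_t n, uint8_t alpha)
    {
        for (size_t i = 0; i < n; i++) {
            /* fixed-point blend: dst = (dst*(255 - alpha) + src*alpha) / 255 */
            dst[i] = (uint8_t)((dst[i] * (255 - alpha) + src[i] * alpha + 127) / 255);
        }
    }

This is the kind of loop that the NEON/SSE ports of ffmpeg and friends hand-optimise today; the proposals above aim to get the equivalent RISC-V vector versions written and upstreamed.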

  • by Crashmarik ( 635988 ) on Sunday September 29, 2019 @02:41PM (#59250396)

    Good to see things that are open source in graphics systems. At the very least I can hope this will displace a good portion of the proprietary graphics architectures, which are currently in "screw the market and each other" mode.

    • by lkcl ( 517947 )

      Good to see things that are open source in graphics systems. At the very least I can hope this will displace a good portion of the proprietary graphics architectures, which are currently in "screw the market and each other" mode.

      a meetup in the bay area last month went extremely well, this was for an independent effort by Pixilica to form an Open 3D Graphics Alliance:
      https://groups.google.com/a/gr... [google.com]

      several extremely experienced 3D Graphics Engineers basically echoed your sentiment, Crashmarik. a fused (hybrid) CPU-GPU is just so much simpler it's ridiculous: no more kernel driver nonsense, no CPU-to-GPU built-in home-grown RPC mechanism, no need to transfer huge shader data blocks from main memory over a PCIe bus to GPU memory.

      • But..but... Microsoft/Intel/Nvidia assured us for 2 decades that their CPU/GPU setup delivers blazing speed. =P You simply cannot do ultra-hyper-polygon-realism without the brilliance that is Windows DirectX! Because it is "direct"! They would never lie to us about there not being a much better, cheaper way to do it! (Ha ha ha ha)
      • by Brama ( 80257 )

        So it's both easier because of the architecture, and more reliable because all of the specs are there and don't have to be reverse-engineered? That is so great. Looks like this is the SoC that the purism librem phone would really want to use in a future iteration.

        How about patents? Is there reason to worry that litigation may throw a wrench in the process?

        • by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday September 29, 2019 @03:13PM (#59250526) Homepage

          So it's both easier because of the architecture, and more reliable because all of the specs are there and don't have to be reverse-engineered?

          you got it. after wasting 3 years of my life reverse-engineering microsoft NT domains, and another 2 years reverse-engineering *NINE* HTC smartphones back in 2003-4, it pains me to see younger engineers wasting their time just to get a product working that would have been at the concept phase *TWO YEARS* prior.

          That is so great. Looks like this is the SoC that the purism librem phone would really want to use in a future iteration.

          and / or some other product / products. plenty of options once the base is done

          How about patents? Is there reason to worry that litigation may throw a wrench in the process?

          *sigh* everywhere but the USA it will be fine, due to the "right" of pretty much any idiot to take quite literally a random discussion off the internet - not even one they wrote - and patent it. this was what happened with that first commercial 3D printer company, and boooy was the community pissed.

          one strategy here is to follow what the BBC did, which was to put in a patent application for Dirac after basing it entirely on *expired* patents, get it registered, and then let it lapse. this was done to protect the BBC Backstage Archives from patent trolls: now they can just send a form letter referring the troll to the patent, and it's done.

          regarding other patents from NVIDIA etc: i think we just have to (a) rely on good old-fashioned "david vs goliath" here - it would be so embarrassing for them that they'll steer clear - and (b) at some point use IBM's trick of providing automatic royalty-free patent licenses that are just as automatically revoked should a patent troll attempt to sue.

          • Doesn't Nvidia give source for Tegra drivers? Last I looked that was the case. Nvidia isn't the problem with Nvidia drivers at all. From what I've heard, the agreement necessary to get GeForce into Xbox is the primary blocker to sharing driver source with us.

            But if that agreement requires Nvidia to protect relevant patents, they will do that...

            • by lkcl ( 517947 )

              Doesn't Nvidia give source for Tegra drivers? Last I looked that was the case. Nvidia isn't the problem with Nvidia drivers at all. From what I've heard, the agreement necessary to get GeForce into Xbox is the primary blocker to sharing driver source with us.

              But if that agreement requires Nvidia to protect relevant patents, they will do that...

              NVIDIA provides some source code, but the portion that enables the GPU to be clocked above its default speed is excluded. nouveau will therefore be *permanently* driven at a much lower clock rate than the GPU is actually capable of, because the firmware controlling the clock speed is RSA-signed.

              AMD, on the other hand, has not only released AMDVLK https://github.com/GPUOpen-Dri... [github.com]; they also work with the Mesa RADV developers and are a direct contributor to LLVM.

    • by AHuxley ( 892839 )
      Guess it will be a long wait for good drivers in the USA. Everyone is doing all that history and social science :)
      https://it.slashdot.org/story/... [slashdot.org]
  • Godspeed To Them (Score:4, Interesting)

    by dryriver ( 1010635 ) on Sunday September 29, 2019 @02:55PM (#59250454)
    Intel, Nvidia et cetera have demonstrated again and again and again that as soon as there is no meaningful competition, they simply stop innovating. Back in the 1990s we thought that we'd have completely photorealistic 3D games / VR by 2020. Nvidia's best, most high-end, most expensive realtime 3D game graphics on the 2080 are within about 30% of that goal today, with the hardest part - not being able to tell at all that you are looking at 3D polygons or computer-generated images - having gone completely untouched, technologically speaking.

    We PC geeks spent ourselves silly year after year after year buying the latest Intel/Nvidia gear as soon as it came out, and the gentlemen who took that money from millions of PC enthusiasts couldn't be bothered to deliver anything better than "meh mediocrity" once gaming went mass-market in the 2000s. The next generation of consoles won't deliver anything resembling reality either, nor will the two generations after that.

    So I fully support Libre-RISC-V and hope to have a Linux - not Windows 10 - box 3 years from now that runs on this open source CPU/GPU combo. Microsoft/Intel/Nvidia take the money and then FAIL to deliver, without fail, every year. Intentionally. Bring on the RISC boxes!
    • Intel, Nvidia et cetera have demonstrated again and again and again that as soon as there is no meaningful competition, they simply stop innovating.

      It is definitely interesting work. I think that Huawei is going to adopt this processor if these sanctions against them proceed. That'll probably bring some advancements. However, I did sit through a security presentation by one of the engineers working on RISC-V and they are quite a bit behind when it comes to any sort of security features you may want built into the silicon. I suspect it'll take them 2-3 years to catch up. Maybe not important to everyone, but definitely a requirement for serious data

  • by Misagon ( 1135 ) on Sunday September 29, 2019 @03:18PM (#59250558)

    The design of the vector instructions in RISC-V is a little different from that of mainstream processors. The vector width is not fixed but scalable.
    This means that the same machine code can run on machines with only short vectors as well as on machines with very wide vectors. The ISA also supports predicates for all vector sizes, not just the widest.

    This means that the spec allows a CPU to have a super-wide SIMD implementation ... and super-wide SIMD cores with predicates are pretty much what the computational cores in GPUs such as those from AMD and Nvidia are.

    BTW, there is also a Scalable Vector Extension (SVE) for ARM, and NEC's SX-Aurora takes a similar long-vector approach. There is support for scalable vectors in the LLVM-SVE compiler project. (But I cannot tell whether LLVM-SVE's model is a good match for RISC-V.)
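    To make the scalable-vector point above concrete, here is a minimal sketch in plain C of a Cray-style strip-mined loop. The get_vl() helper is a stand-in for the hardware's set-vector-length instruction, invented for the illustration, not a real API: the same code runs unchanged whether the machine's maximum vector length is 4 elements or 4096, and the per-pass length acts as a predicate, so there is no fixed-width SIMD tail loop to write.

        #include <stddef.h>

        /* Stand-in for the hardware's "set vector length" instruction: the CPU
         * decides how many elements it will process this pass, up to its own
         * maximum vector length, which the software never needs to know. */
        static size_t get_vl(size_t remaining, size_t maxvl)
        {
            return remaining < maxvl ? remaining : maxvl;
        }

        /* daxpy (y = a*x + y), written vector-length-agnostically.  The inner
         * loop models a single predicated vector operation: only lanes below
         * vl are active, so no SSE/NEON-style fixed-width tail code exists. */
        void daxpy(size_t n, double a, const double *x, double *y, size_t maxvl)
        {
            for (size_t i = 0; i < n; ) {
                size_t vl = get_vl(n - i, maxvl);          /* hardware picks the strip */
                for (size_t lane = 0; lane < vl; lane++)   /* "one" vector instruction */
                    y[i + lane] = a * x[i + lane] + y[i + lane];
                i += vl;
            }
        }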

    • by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday September 29, 2019 @03:24PM (#59250582) Homepage

      The design of the vector instructions in RISC-V is a little different from that of mainstream processors. The vector width is not fixed but scalable.

      indeed. it's based on the Cray Vector Architecture. see "SIMD Considered Harmful" for... well... a bit of a wake-up call, really: https://www.sigarch.org/simd-i... [sigarch.org]

      (But I cannot tell whether LLVM-SVE's model is a good match for RISC-V)

      luckily, robin kruppe is doing some excellent work on an opportunistic auto-vectorisation system in LLVM-IR which is being added as part of the RVV LLVM port.

      in addition, because this is software libre, fascinatingly there is open discussion on the best way to improve both LLVM IR and gcc's GIMPLE, between all the architecture vendors, regarding the various different vectorisation systems [and really painfully-long SIMD ones, yes looking at you, intel, what's next - AVX1024??] currently being developed.
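      for anyone who hasn't watched an auto-vectoriser at work, the loop below is the kind of thing GCC and LLVM will already turn into SIMD or vector code on their own at -O3 (clang's -Rpass=loop-vectorize, or gcc's -fopt-info-vec, prints a report of what the vectoriser did); the point of the compiler work described above is that the same source should come out as vector-length-agnostic RISC-V vector code rather than fixed-width SIMD. this snippet is a generic illustration, not project code, and the file name in the comment is made up.

          #include <stddef.h>

          /* A trivially auto-vectorisable loop: restrict rules out aliasing,
           * the trip count is countable, and the body is element-wise.
           * Try:  clang -O3 -Rpass=loop-vectorize -c saxpy.c
           * or:   gcc -O3 -fopt-info-vec -c saxpy.c                        */
          void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
          {
              for (size_t i = 0; i < n; i++)
                  y[i] = a * x[i] + y[i];
          }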

    • by AHuxley ( 892839 )
      Open, free, fast and it exists. A CPU ready for work,
      rather than more work needed to "design" and "make" and "test" and "code" new hardware for "one" task/project.
      Design a new supercomputer for every new project and wait for it to be "made" as a project for that task?
      Or buy a lot of existing CPU hardware and get started, using skills that can be found quickly?
      Sell a lot of easy CPU power and let people do the math around that.
      Try and get a small/average company to describe the new CPU and GPU they need for ev
      • by lkcl ( 517947 )

        Fast math but a long wait to understand the design of the product.

        fascinatingly, we're basing it on the CDC 6600, with help from Mitch Alsup (architect of the Motorola 88000). you can find a book online called "Design of a Computer: The Control Data 6600" by J. Thornton.

        one of the important things about this project is for it to be a "legacy" (in the positive sense of the word) i.e. that it exists as an educational resource.
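        for a flavour of the 6600-style scoreboard that Thornton's book describes, here is a heavily simplified sketch of just the issue check: each functional unit records which register it will write, and a new instruction may only issue when its unit is free and no in-flight instruction already targets the same destination. the real scoreboard also tracks operand readiness (RAW) and write-after-read hazards in later stages; the struct and function names here are invented for the illustration, not taken from the Libre RISC-V source.

            #include <stdbool.h>
            #include <stddef.h>

            /* One functional unit's scoreboard entry (greatly simplified). */
            struct fu_entry {
                bool busy;      /* unit is currently executing an instruction */
                int  dest_reg;  /* register this unit will write (Fi)         */
            };

            /* Issue check: an instruction needing unit `fu` and writing
             * `dest_reg` may issue only if that unit is free (no structural
             * hazard) and no busy unit already targets the same register
             * (no write-after-write hazard). */
            bool can_issue(const struct fu_entry *units, size_t nunits,
                           size_t fu, int dest_reg)
            {
                if (units[fu].busy)
                    return false;                       /* structural hazard */
                for (size_t i = 0; i < nunits; i++)
                    if (units[i].busy && units[i].dest_reg == dest_reg)
                        return false;                   /* WAW hazard        */
                return true;
            }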

  • Are they going to toggle the clock by hand?


"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...