NLNet Funds Development of a Libre RISC-V 3D CPU (crowdsupply.com) 75
The NLNet Foundation is a non-profit supporting privacy, security, and the "open internet". Now the group has approved funding for the hybrid Libre RISC-V CPU/VPU/GPU, which will "pay for full-time engineering work to be carried out over the next year, and to pay for bounty-style tasks."
Long-time Slashdot reader lkcl explains why that's significant: High-security software is irrelevant if the hardware is fundamentally compromised, for example with the Intel spying backdoor co-processor known as the Management Engine. The Libre RISC-V SoC was begun as a way for users to regain trust and ownership of the hardware that they legitimately purchase.
This processor will be the first of its kind, as the first commercial SoC designed to give users the hardware and software source code of the 3D GPU, Video Decoder, main processor, boot process and the OS.
Shockingly, in the year 2019, whilst there are dozens of SoCs with full source code that are missing either a VPU or a GPU (such as the TI OMAP Series and Xilinx ZYNQ7000s), there does not exist a single commercial embedded SoC which has full source code for the bootloader, CPU, VPU and GPU. The iMX6 for example has etnaviv support for its GPU however the VPU is proprietary, and all of Rockchip and Allwinner's offerings use either MALI or PowerVR yet their VPUs have full source (reverse engineered in the case of Allwinner).
This processor, which will be a quad-core, dual-issue, 800MHz RV64GC capable of running full GNU/Linux SMP OSes, with 720p video playback and embedded-level 25fps 3D performance in around 2.5 watts at 28nm, is designed to address that imbalance. Links and details are on the Libre RISC-V SoC wiki.
The real question is: why is this project the only one of its kind, and why has no well-funded existing fabless semiconductor company tried something like this before? The benefits to businesses of having full source code are already well known.
Re: (Score:2)
This thing will cost an arm and a leg and still be slower than any of the $20 Android boards. But hey, principles.
Re: (Score:2)
Snowden is a hero.
Re: (Score:1)
If it's patent-free and widely sourced (which is not sufficiently true for anything in semiconductors right now), the use of similar chips could be legislated as the new minimum bar of technology, even in the construction and agricultural industries.
Re: (Score:2)
99% of manufacturers and customers don't give a crap about open source or whether they have to pay a wee percent royalties. This is yet another great-sounding project that will go nowhere.
Re:Why (Score:4, Insightful)
It will indeed, but I do fully support the principles behind it. While the desktop market at least had common components and the ability to run whatever software you want, all the big tech companies are trying their hardest to limit mobile devices to only their own proprietary software. Governments don't mind this too much either, assuming they can get into bed with the tech companies. If no one cares, we'll soon find ourselves completely at the whim of a few very, very powerful companies or governments (hey, China!).
This, combined with viable cloud alternatives (NextCloud being the closest thing to one), is a spark of bright light in the free world.
Re:Why (Score:5, Insightful)
Re:Why (Score:5, Interesting)
So it's slower, so what? It should be at least fast enough to do what most people (i.e. not developers or content creators) need to do in their daily lives.
For the first version, yes. There's a well-known strategy/phenomenon in which startups rapidly create first-version "entry-level" products, free from the bureaucracy of incumbents.
This gives them the experience to do a Monster Version later, and that's exactly what we are planning. Several potential major performance enhancements have the groundwork already laid yet are *not* being pursued immediately.
By sticking to 2.5 watts we can use a $0.50 PMIC (the AXP209), PCB development for an SBC will be around the $10k mark, and there will be no fans. Also, a 28nm MVP for 50 samples at around 4mm^2 is only $100k.
If we went with a 100 watt monster straight away, those costs would easily be ten times that.
Modern processors are already so fast that most spend most of their time idling, and this trend will only increase. It's fun to have the equivalent of a super computer from twenty years ago in your pocket to play Pokemon on, but that only leaves most of those cycles available to spy on you. I'd rather have a computer in my pocket that I could prove was only doing what I wanted it to do.
The "good enough computing" paradigm is already being so far exceeded that some people are installing compilers on their smartphones and are able to compile large applications in a reasonable time, comparable to desktop machines of 5 years ago.
btw you will be interested to know that I met TL Lim here in Taiwan a couple of days ago; he is developing an open smartphone. No OS (remember the OpenMoko?), the hardware has been specifically picked to be mainline, and all dodgy pieces (WiFi, BT) have internal hard kill switches. The battery is removable and mass-produced (Galaxy 7). Full schematics to be available.
Re: (Score:2)
Re: (Score:2)
Having recently been in a position to make a decision about Apple vs Android for a new phone, I was acutely interested in an open phone. My kid bought me a new device, so I didn't really have to make a decision. But the fact remains: having only two major options - neither of which are "open" - is unacceptable. Do you have a link we can follow to keep tabs / offer assistance on TL Lim's developments?
Google "pinephone" or "pine64 phone". I Want One. I had a trolltech greenphone, if you remember those. I wish buglabs had worked out.
Re:"Libre" aka "not invented here" (Score:5, Interesting)
"Libre" seems to be a new form of "not invented here"
Look, I get it, the OSS religious movement has its zealots as well. Let's not let them dictate the direction of anything.
I am all for new kinds of hardware. What I'm against is the needless forking of hardware and software in the name of "open" when the end result is something that is closer to incredibly expensive vaporware.
In a hardware context, this is especially problematic. It's not like someone can create an Apple A15 on a 5nm fab overnight and then sell millions of them. No, really the problem is that nobody wants to go back to the drawing board and build SoC hardware as though it was 1979.
That is incredibly funny, because the inspiration for the design is the CDC 6600 mainframe. Seymour Cray and James Thornton began the design in the 1950s but actually had to wait for the silicon transistor before going ahead, making the first commercial versions available in the mid-1960s.
Look up the book "Design of a Computer" it is stunning and fascinating. Core memory, as in *actual magnetic rings*! Banks of switches and a button to toggle the clock manually, as a bootstrap loader!
The core design however is absolutely rock solid, and was efficient out of necessity. It was one of the world's first superscalar out of order machines, four times more effective per clock than its competitors.
Mitch Alsup (the designer of the 88000, the 88120, AMD's Opteron series, and a consultant on the recent Samsung GPU) has been educating me on an augmented modernisation of the 6600.
The result is that we will be able to do sustained multi issue execution without stalling and without major bottlenecks. Remember how Opteron had to publish "equivalent" speed numbers compared to Intel? That was down to Mitch's work.
Don't knock old! Sometimes old is screamingly fast when modernised, because it *had* to be good, as there were far fewer resources. Russia used to have the world's best assembly programmers, due to the Iron Curtain...
Re:Why (Score:4, Insightful)
Maybe not larger corporations, but RISC-V is gaining traction due to the desire to know what's going on at the hardware level. Our little user group bought a couple of the SiFive HiFive1 Rev B boards to support RISC-V development--not because they're heavy hitting hardware. And we'll be doing the same if this group produces a chipset and board.
Nobody's going to build an empire off my friends and me buying a handful of parts, but we're putting a little bit of money where our mouths are to indicate what we'd like to buy in the future.
Re: (Score:2, Informative)
You have totally misunderstood what RISC V is.
RISC V is not any kind of open source design for a processor, let alone anything to do with open logic/chip design tools.
RISC V is simply an open standard for an instruction set architecture.
One can implement it however one likes: using open source design tools or not, open sourcing one's final design or not.
In short, you missed the mark by miles.
NDA's and licensing agreements. (Score:3)
It was tried with the Vivaldi tablet, and one of the issues it ran into was IP companies freaking out about, and blacklisting, open source developers. Open source driver developers can't be hired on projects that use the proprietary driver, and licensing agreements for the proprietary driver and supporting libraries are revoked if the company releases an alternative driver with their product. That, and the number of people able and willing to install alternate firmware on an embedded device is low.
Re:NDA's and licensing agreements. (Score:5, Funny)
Plus the Linux kernel dev team is too chickenshit to actually enforce the GPL terms against people distributing binary-only drivers.
If you read further in... (Score:2, Interesting)
They can either use the GC800 core using a one time 250k license, OR produce a whole open source GPU before the deadline capable of a minimum of 6GFLOPS of calculations.
Based on what I'm reading they are targeting GL/GLES support, when I think the real solution is targeting a Vulkan 1.0 core with features designed to provide accelerated software emulation of any legacy features needed. Vulkan-to-OGL capabilities are already being produced for Mesa, so a Vulkan-capable mobile GPU would provide maximal reprogrammability, if the featureset can be fit into the fraction of a watt left for the GPU (it's a dual-core 64-bit RISC + single-core GPU running in 2.5 watts).
Re: (Score:3)
you should contact them and offer some of your expertise/comments. sometimes the nerds who do the low level stuff have no common sense and need a little help early on to see when they are making mistakes with long term consequences.
Been there :) Unlike a lot of people I am quite happy to say "I have no idea what I am doing", and if other people want to do a task, GREAT, it is less work for me.
I care that the goal *is completed*, not that I personally was the one that completed it, you know what I mean?
The libre nature of the project means that I can do things like get on comp.arch and dang me if Mitch Alsup, the designer of the 88000, pops up.
Without his expertise I would in no way be able to tackle this project. 6600 style scoreboards
Re: If you read further in... (Score:4, Interesting)
Best of luck to you all. Make sure you set up a Patreon or Kickstarter or whatever other begging-site link, and I'm pretty sure you'll get some large anonymous donations ;)
:) https://www.crowdsupply.com/li... [crowdsupply.com]
They have a beta mode where donations are accepted, must talk to Joshua about enabling it.
Re: Lol, 2-4 GHz... (Score:2)
No, it's not. I've run Ubuntu 16.04 on a laptop with a 700MHz Pentium III, 512MB RAM and an SSD, and websites like Amazon & Walmart ("one-page webapps") were unbearably slow. Webapps are "lightweight" only because browsers are now the most heavyweight apps in the history of mainstream computing.
Re: (Score:2)
No, it's not. I've run Ubuntu 16.04 on a laptop with a 700MHz Pentium III, 512MB RAM and an SSD, and websites like Amazon & Walmart ("one-page webapps") were unbearably slow. Webapps are "lightweight" only because browsers are now the most heavyweight apps in the history of mainstream computing.
If you go to the "mobile" version, either manually or by using UserAgentSwitcher, it's fine.
Re:If you read further in... (Score:5, Interesting)
They can either use the GC800 core using a one time 250k license, OR produce a whole open source GPU before the deadline capable of a minimum of 6GFLOPS of calculations.
Based on what I'm reading they are targeting GL/GLES support, when I think the real solution is targeting a Vulkan 1.0 core with features designed to provide accelerated software emulation of any legacy features needed. Vulkan-to-OGL capabilities are already being produced for Mesa, so a Vulkan-capable mobile GPU would provide maximal reprogrammability, if the featureset can be fit into the fraction of a watt left for the GPU (it's a dual-core 64-bit RISC + single-core GPU running in 2.5 watts).
It's only limited to dual issue on standard RV64GC.
With Mitch Alsup interested in the project (because I respect his expertise and have been implementing his ideas), he has taught me how to do n-order multi-issue.
I was stunned to find that, if the right infrastructure is in place, multi-issue is easy.
The Vector Engine for the GPU and VPU is basically using multi-issue to throw as many "elements" as possible at the parallel processing engine, reconstructing the order by maintaining a "Dependency Matrix". Six months spent getting educated on that :)
By subdividing the register file into Hi32/Lo32 halves we can DOUBLE the 32-bit FP issue rate compared to 64-bit performance (register ports are the main technical bottleneck and power sink).
So the processor is extremely weird: 64-bit is dual-issue, 32-bit is quad or possibly octa-issue, and 16-bit FP will be octa or even 16-way issue. This is in part because, whilst it is a Vector Frontend, it is a transparent predicated SIMD (aka SIMT) engine at the backend.
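As a rough software analogy for why splitting the register file doubles 32-bit throughput, here is a SWAR (SIMD-within-a-register) sketch in C: one 64-bit value carries two independent 32-bit lanes, so a single 64-bit register "port" services two 32-bit operations at once. This is only an illustration of the idea; the actual register-file design is hardware, and these function names are made up.

```c
/* SWAR sketch: two independent 32-bit lanes riding in one 64-bit word.
 * Illustration only -- NOT the Libre RISC-V hardware implementation. */
#include <stdint.h>

/* Pack two 32-bit lane values into one 64-bit word (Hi32 | Lo32). */
uint64_t pack2x32(uint32_t hi, uint32_t lo) {
    return ((uint64_t)hi << 32) | lo;
}

/* Add the two lanes independently: one 64-bit value in, two 32-bit
 * results out, with no carry leaking between the lanes. */
uint64_t add2x32(uint64_t a, uint64_t b) {
    uint32_t lo = (uint32_t)a + (uint32_t)b;
    uint32_t hi = (uint32_t)(a >> 32) + (uint32_t)(b >> 32);
    return pack2x32(hi, lo);
}
```

One 64-bit "transaction" thus carries two 32-bit results, which is the sense in which halving the operand width can double the issue rate for the same number of register ports.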
All of this will, yes, be behind the Vulkan API, however unlike most GPUs it will be run *from the user app*. Most GPUs have an IPC mechanism, we will be executing GPU instructions *directly* because the GPU *is* the CPU.
The Vulkan driver is written in Rust; however, it is NOT an INTERPRETER (unlike Gallium3D). It is a COMPILER that turns SPIR-V directly into LLVM IR. That LLVM IR is handed over to a JIT compiler, and the 3D shader then runs as ASSEMBLY code, NOT as interpreted 3D commands. Performance should therefore be pretty good.
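To make the interpreter-vs-compiler distinction concrete, here is a toy C sketch (not the project's actual code; all names are made up): the interpreted path dispatches per opcode on every element, while the "compiled" path, a plain C function standing in for JIT output, is straight native code with no dispatch overhead.

```c
/* Toy contrast between an interpreted shader and a "JIT-compiled" one.
 * Hypothetical example -- not the Libre RISC-V Vulkan driver. */
#include <stdint.h>

enum op { OP_MUL, OP_ADD, OP_END };

/* Interpreter: a dispatch loop per opcode, like a command-stream driver. */
float run_interpreted(const enum op *prog, const float *args, float x) {
    float acc = x;
    for (int i = 0; prog[i] != OP_END; i++) {
        switch (prog[i]) {
        case OP_MUL: acc *= args[i]; break;
        case OP_ADD: acc += args[i]; break;
        default: break;
        }
    }
    return acc;
}

/* "JIT output": the same shader (x * 2 + 1) as plain native code. */
float run_compiled(float x) { return x * 2.0f + 1.0f; }
```

Both compute the same result; the compiled form simply skips the per-opcode branching, which is where the performance claim comes from.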
Interestingly our first major milestone for the Vulkan engine is to run on standard x86, using the x86 LLVM IR. That helps us get a stable base so we can move to the SoC port, confident that the Vulkan engine works.
Lots to do!
Re: (Score:2)
I am intrigued, impressed, and terrified, particularly in reference to speculative side-channel attacks. I'll definitely be following the project, though.
Not everybody is out to get rich. (Score:1)
Some of us realize we aren't going to live forever, and we just want to see important stuff being actually made before we die.
scale (Score:2)
Re: (Score:2)
Will they scale it to an 8-thread 4GHz version?
8-core, 8-way multi-issue is not a problem as far as the design is concerned. 4GHz needs to be in 14nm or lower and needs very special attention paid to the number of gates that propagate between flipflops (pipeline stages).
The cost in time and NREs is just far too high... however, for the next version, when we have customers etc. and a stable business, effort can be put into doing that.
Not before however. Walk first, run later.
Too bad.... (Score:3)
Re: (Score:2)
I'm a mouse, and I'm stirring.
Any time you're working on an embedded device that is not designed to run general purpose software, you should consider either not using an OS, or writing a one-off one based on something like FreeRTOS.
The more your use case benefits from security, the more true this is.
It is more financially rewarding not to have security problems that result in returns or recalls than to have those problems. Being a digital-rights-asshole has nothing to do with any of the important use cases fo
Short Answer: (Score:2)
Patents and IP.
But good luck and have an IP lawyer on speed-dial.
Re: (Score:2)
Patents and IP.
But good luck and have an IP lawyer on speed-dial.
Funnily enough, the NLNet Foundation has precisely those resources available. Not on speed dial, on a mailing list.
Re: (Score:2)
If you're worried about patents, having a lawyer handy isn't the only route to take.
Another is to design and build the thing now, and in 20 years you'll be able to use it.
If you wait 5 years and then design and build the same thing, you'd still get sued over patents filed between now and then, because the system is so broken and "obvious" doesn't exist anymore.
For hobbyists, whatever they build now we know we can build in 20 years. If we want to be allowed to have this in the future, somebody has to actuall
Re: (Score:2)
Because what's even the point, if you're still going to get it etched at some Chinese or US fab with close ties to spies? (The NSA has its own fab, by the way. Exactly 1. for this reason, and 2. for this reason!)
This is the one thing most desperately needed: An all-in-one EUV chip fab in a 3D-printer-sized box with a similar price.
THIS is what we need an X Prize for!
Would be nice, huh? There's an article about reverse engineering that came out on IEEE; it requires a synchrotron radiation source, taking multiple X-ray images and doing holographic reconstruction.
So if a fab gets caught putting spyware in a chip, you publish that fact, their trillion-dollar business ends, and nobody will trust them ever again.
No, the real reason why the NSA has its own fab is that they have to keep the design secret.
India on the other hand *is* paranoid about spying being put into fab
Re: (Score:2)
Huh, the fab has the masks at a minimum, and more likely the GDS2. The NSA and DoD are both very worried about this. I'd heard they have had RFPs for how to verify that the chip design you sent to the fab is the one you got back, via test-vector methods.
And there was a very interesting DARPA talk at one of the RISC-V conferences: DARPA has put up $150m for a fully automated chip design suite. Goal: a one-off million-gate ASIC for $500.
They basically want Synopsys and Cadence out of business, replaced entirely with libre tools. I am planning to use Coriolis2/Alliance for the same reasons: no way am I getting this ASIC laid out by proprietary tools, as that is potentially yet another insertion point.
LIP6.fr is doing a tiny RISCV core in 45nm, using FreePDK
Re: (Score:2)
They're almost Woke [libre-riscv.org]; they'll probably go broke.
You know you rhymed there, right? :)
So now we go into overkill mode. WE DO NOT HAVE A FUCKWIT POISONOUS INSANELY TOXIC CODE OF CONDUCT.
I have written about the dangers of such documents several times. You mistook our Charter for one of these incredibly poisonous documents so you probably know exactly what I am talking about.
Most people have absolutely no idea, and I have discovered, sadly, that trying to educate them is pointless. As in, they are LITERALLY incapable of seeing how a toxic proscribed list of
Reasons for not opening the bootloader (Score:2)
Most semiconductor SoC devices have a single piece of silicon supporting multiple SKUs. This means there is a significant amount of configuration of the device that needs to happen without allowing the customer to interfere. This configuration can be disabling parts of the hardware or limiting performance, etc. This is done simply because it is expensive to make and verify a die, and it is better to accept somewhat worse margins on the downrated parts than to make actual cost-down dies to meet those markets.
While it wo
Re:Reasons for not opening the bootloader (Score:5, Informative)
by the customer. The code must be signed and running it must be enforced, otherwise the customer would be able to override the vendor configuration
Sigh. You are unfortunately quite well informed, but not well informed enough, if that makes any sense.
I have done a LOT of reverse engineering and work with embedded systems, and the key area that is a bitch is the DDR3/4 Memory Controller initialisation. This is usually kept proprietary for the incredibly simple and stupid reason that it is 3rd party licensed - no other reason.
In the SiFive bootloader someone reverse engineered the Cadence DRAM initialisation for the Denali PHY. A mere 200 entries in a table was all that needed to be published. Utterly pathetic insanity that a few numbers can be considered proprietary.
Once the DRAM is up (the PLLs need to be programmed beforehand), a few GPIOs can be initialised, and that is enough to be able to talk to e.g. Quad SPI, or NAND, or eMMC, or SD/MMC, to get a larger bootloader into the DRAM; once executed, that can then take care of loading u-boot or even a kexec-capable Linux kernel with an initramfs directly.
There is absolutely nothing here that is either complex or needs to remain proprietary.
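To show how simple that table-driven bring-up really is, here is a minimal C sketch. The register indices and values are placeholders (NOT the real Cadence/Denali settings), and an ordinary array stands in for the memory-mapped PHY registers:

```c
/* Hypothetical sketch of table-driven DDR PHY initialisation as
 * described above.  Indices/values are made up, NOT real Denali PHY
 * settings; a plain register array simulates the MMIO region. */
#include <stdint.h>
#include <stddef.h>

struct reg_init { uint32_t index; uint32_t value; };

/* Placeholder table: the reverse-engineered SiFive sequence is ~200 rows. */
static const struct reg_init ddr_phy_table[] = {
    { 0, 0x00000001u },  /* hypothetical: controller enable   */
    { 1, 0x0A0B0C0Du },  /* hypothetical: timing parameters   */
    { 7, 0x00000100u },  /* hypothetical: PHY calibration bit */
};

/* Walk the table, poking each value into the (simulated) PHY registers. */
void ddr_init(volatile uint32_t *phy_regs) {
    for (size_t i = 0; i < sizeof ddr_phy_table / sizeof ddr_phy_table[0]; i++)
        phy_regs[ddr_phy_table[i].index] = ddr_phy_table[i].value;
}
```

In a real bootloader the pointer would aim at the PHY's MMIO base address rather than an array, but the whole "proprietary" sequence is nothing more than this loop plus the table contents.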
The ONLY reason to use DRM is to chain-lock the user out of their own legitimately purchased device. The worst case of that I ever heard of was the first Surface Tablet, using an NVIDIA ARM core.
Landfill.
Users rejected it because you simply could not install your own apps, and despite running Windows it was an *ARM* version of Windows, using NVIDIA DRM to prevent and prohibit users from installing anything except what MICROSOFT said you can install.
You get how that works? If you don't, look up the home system that Google bought and then remotely shut down.
Great (Score:1)