Linus Torvalds Says RISC-V Will Make the Same Mistakes As ARM and x86 (tomshardware.com) 73
Jowi Morales reports via Tom's Hardware: There's a vast difference between hardware and software developers, which opens up pitfalls for those trying to coordinate the two teams. Arm and x86 researchers encountered it years ago -- and Linus Torvalds, the creator of Linux, fears RISC-V development may fall into the same chasm again. "Even when you do hardware design in a more open manner, hardware people are different enough from software people [that] there's a fairly big gulf between the Verilog and even the kernel, much less higher up the stack where you are working in what [is] so far away from the hardware that you really have no idea how the hardware works," he said (video here). "So, it's really hard to kind of work across this very wide gulf of things and I suspect the hardware designers, some of them have some overlap, but they will learn by doing mistakes -- all the same mistakes that have been done before." [...]
"They'll have all the same issues we have on the Arm side and that x86 had before them," he says. "It will take a few generations for them to say, 'Oh, we didn't think about that,' because they have new people involved." But even if RISC-V development is still expected to make many mistakes, he also said it will be much easier to develop the hardware now. Linus says, "It took a few decades to really get to the point where Arm and x86 are competing on fairly equal ground because there was al this software that was fairly PC-centric and that has passed. That will make it easier for new architectures like RISC-V to then come in."
"They'll have all the same issues we have on the Arm side and that x86 had before them," he says. "It will take a few generations for them to say, 'Oh, we didn't think about that,' because they have new people involved." But even if RISC-V development is still expected to make many mistakes, he also said it will be much easier to develop the hardware now. Linus says, "It took a few decades to really get to the point where Arm and x86 are competing on fairly equal ground because there was al this software that was fairly PC-centric and that has passed. That will make it easier for new architectures like RISC-V to then come in."
You don't need to look at hardware to see this (Score:5, Interesting)
Re: You don't need to look at hardware to see this (Score:2, Interesting)
Or network layers that ignore how high level software expects to operate.
Re: (Score:2)
I think you mean "high level software that ignores unavoidable trade-offs in how network protocols can work". (No, you got chocolate in my peanut butter!)
Re:You don't need to look at hardware to see this (Score:5, Insightful)
Assuming they haven't been laid off or encouraged to retire in favor of younger/cheaper employees ...
Re:You don't need to look at hardware to see this (Score:5, Interesting)
The fundamental problem here is that everyone thinks RISC-V is open
It is, but only the ISA. The actual core is not open. The ISA just tells you what the core will do when you present it with certain bit patterns that we give names to.
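To make that concrete, here's a minimal sketch of what "bit patterns that we give names to" means (plain C, my own illustration, not from the article): the ISA spec fixes the encoding, and any core, open or closed, just has to honor it.

    #include <stdio.h>
    #include <stdint.h>

    /* Decode one RV32I I-type word by hand. 0x00500093 is the standard
     * encoding of "addi x1, x0, 5"; the ISA only promises what this bit
     * pattern means, not how a core implements it. */
    int main(void) {
        uint32_t insn   = 0x00500093;
        uint32_t opcode = insn & 0x7f;          /* bits [6:0]   */
        uint32_t rd     = (insn >> 7)  & 0x1f;  /* bits [11:7]  */
        uint32_t funct3 = (insn >> 12) & 0x07;  /* bits [14:12] */
        uint32_t rs1    = (insn >> 15) & 0x1f;  /* bits [19:15] */
        int32_t  imm    = (int32_t)insn >> 20;  /* bits [31:20], sign-extended */

        if (opcode == 0x13 && funct3 == 0)      /* OP-IMM / ADDI */
            printf("addi x%u, x%u, %d\n", (unsigned)rd, (unsigned)rs1, (int)imm);
        return 0;
    }

Whether the thing executing that word is a TTL board, an FPGA soft core, or a SiFive chip is entirely up to the implementer; that's all an "open ISA" buys you.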
The RISC-V cores are not necessarily open. You can take the ISA and design your own core that implements the RISC-V instruction set. I've seen everything from cores implemented with TTL chips to very fancy cores used in higher-end machines, usually using a SiFive chip.
SiFive has some of the most performant chips out there, but their core isn't open, so it isn't like you can take their core and tweak it to make it better. I think they will license the core to you for an ASIC, but that's about it.
Meanwhile, the open cores are generally so-so - not because of their implementation method (most are on FPGAs, which limits your speeds) but architecture wise.
Of course, what I see happening is a company like SiFive being dominant in the RISC-V space because if you need performance, you go with them, and they'll be like ARM offering the core designs everyone uses.
I recently had a conversation with some architects at my company who were considering RISC-V. I asked them what core they were planning to go with and they looked at me oddly - they assumed with RISC-V, being open, that they could just plop something on an ASIC and be done with it. I told them no, what's open is the ISA. You either need to design your own core, or license a core from someone (like we do with ARM). They all assumed that by adopting RISC-V they'd never have to pay license fees again - you'd get a full design and everything that you tweak, like the Linux kernel.
So yeah, everyone's going to fall into the same traps over and over again. When SiFive implements speculative execution on their cores, I would expect Spectre and Meltdown to apply to those cores. Of course, a key problem is that the people who design ASICs for a living make very good money, and generally their knowledge is so specialized it's not something that's shared other than by working together. After all, ASICs cost millions to spin out, and simulation and design another few million dollars. The closest anyone "normal" can get to doing it is using an FPGA board. But that's around the same scale as going from "hello, world" to an OS kernel.
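For anyone who hasn't seen it, this is the shape of the bounds-check-bypass (Spectre v1) gadget they'd be signing up for; illustrative C only, loosely following the published example (array1/array2 and the cache-line size are placeholders, not a working exploit):

    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 64];   /* probe array: one 64-byte cache line per byte value */

    /* A speculative core may predict the branch taken even when x is out
     * of bounds; the out-of-bounds byte then selects which line of array2
     * gets pulled into the cache, and an attacker recovers it by timing
     * later accesses. */
    uint8_t victim(size_t x) {
        if (x < array1_size)
            return array2[array1[x] * 64];
        return 0;
    }

No amount of ISA openness prevents this; it's a property of the microarchitecture, which is exactly the part SiFive and friends keep closed.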
Re: (Score:2)
An open spec and free license is a step up from the alternatives we have dealt with for decades. And there are open cores available for RISC-V, even if they aren't the fastest, cheapest, or most readily available.
This situation has parallels in the software world. Some applications are closed and some are open source. You can use the GIMP, which is inferior in many ways, or you can use something closed from an abusive company like Adobe to get your work done.
It is theoretically possible to build a completely
Re: (Score:3)
There are reasonably advanced open source RISC-V cores available.
https://github.com/XUANTIE-RV/... [github.com]
That is the T-head Xuantie 910 RISC-V core's Verilog. This is a 12-stage, out of order, dual-issue, 64-bit RISC-V core.
You can also buy chips with it fairly cheap.
Re: (Score:3)
nobody likes to consult with the experienced grumpy old men
Except in this case, the grumpy old man is Turning Award winner David Patterson. In addition to RISC-V, he also led the teams that designed RISC-1, RISC-2, RISC-3, and RISC-4.
Re: You don't need to look at hardware to see this (Score:2)
Re: (Score:2)
And MIPS, which was one of the early RISC architectures and a major competitor for x86 and ARM and SPARC and POWER and Alpha back in the 90s. MIPS was used to render Jurassic Park and Terminator 2 and is still widely used in routers. RISC-V is basically the chip where the experienced grumpy old-timer looked at his design for MIPS, took the lessons learned, and fixed his old mistakes.
It is. It's not like RISC-V is new. It is a redesign of the original RISC
Re: (Score:2)
64-bit mode on ARM is quite a bit more MIPS-like than it is ARM-like. It really blurred the lines for me when I had to switch over my development.
SPARC adopted some interesting aspects of the early RISC projects that were abandoned by some others. I think technically it was a superior architecture. But Sun and eventually Oracle bungled the marketing side of things.
Re: (Score:1)
Was it all the delay slots everywhere? Wait- no. Don't have those.
Re: (Score:3)
AArch64 is more like PowerPC than MIPS, but either way, it's not very ARM-like. It has a bunch o
Re: (Score:2)
It's still *nothing* like MIPS.
I love the "doubled general purpose registers", as if that's a defining ISA characteristic.
x86-64 doesn't have a general-purpose PC, either.
Sure, it shares some features with PPC, some with MIPS (but really, not really), and some with x86-64.
Really, it's its own thing entirely.
Re: (Score:2)
I was criticizing the idea that it's MIPS-like, which is frankly fucking absurd.
I've been writing MIPS and arm assembly for a decade and a half.
aarch64 made arm a lot less fun to play with for me, but that's ok. I understand why they did it. It's time for the ISA to grow up and be a big boy ISA.
MIPS/64 is going to ride that original RISC idea right to the grave where that ISA is buried.
MIPS held a commanding lead in small routing devices like SO
Re: (Score:2)
Turning Award winner David Patterson.
"Turing Award", please.
Re: (Score:2, Troll)
because nobody likes to consult with the experienced grumpy old men, who have seen it all happening before.
The big problem is that it's the experienced grumpy old men causing this. In my professional experience (not in IT) experienced grumpy old men fall into two categories with a virtually 50/50 chance:
1. Experienced grumpy man upset that kids are repeating mistakes, who, if you consult him, can give you a solution that fits with modern ways of working to the betterment of all (modern tech + not repeating mistakes).
2. Experienced grumpy man who hasn't seen the failures of the past and declared the modern world unfit
Re: (Score:3)
So... instead of risking the loss of an hour or two and some disrespect, it's better to reinvent the bicycle? A little humility would pay off.
Re: (Score:2)
I implied nothing of the sort, but yes you're entirely correct. The opposite of experienced grumpy old men are inexperienced young carefree ... women technically... ignore the gender here ;-). Someone young and inexperienced repeating the same mistakes is equally inexperienced with interpersonal communication.
I can apply the 50/50 rule to young people in my career as well, half of them are actively afraid of talking to some people. Humility in a professional setting is a learned skill.
Re: (Score:2)
I agree with you 100%. At my last company some grumpy old man still used unreliable XFS instead of anything newer because that's the way it was in 1995. And then lost customer data.
Re: (Score:2)
How about they talk to FAILED software projects like the microkernel people? It's better to push most everything outside the kernel, but nobody succeeds at this because of the performance hit ... which hardware could address! They added VM acceleration features...
We take speed hits in many other areas which we allow, but not in this area... why not? Me, I like Multics abilities... how about exploring that too? It would be nice if one could just slap CPUs, RAM (and CPU upgrades), etc. together at runtime.
Arguably as long as it's not the same as transmeta (Score:3)
Too bad it didn't work out for transmeta.
We have a different meta in hardware these days, very different
Re: (Score:2)
Right. I'm sure RISC-V folks would be open to suggestions.
But hardware folks /also/ understand some things kernel folks don't. It goes both ways.
Re:Arguably as long as it's not the same as transm (Score:5, Interesting)
But hardware folks /also/ understand some things kernel folks don't. It goes both ways.
Absolutely! Spending time programming assembler, then VHDL, then assembler again, makes one really see things from a different perspective. I for one would argue that designers of new hardware or (low-level-)software architecture should definitely spend some time "on the other side" to understand it better.
Re: And how is Linus qualified to comment? (Score:2, Funny)
Re:And how is Linus qualified to comment? (Score:5, Insightful)
Since when is Linus an expert on processor design? And why does he assume that RISC-V designers do not have experience? And why do "verilog" authors need to consult with kernel developers.
Linus is just showing how fame entitles him to comment on things he does not understand. Sure, mistakes will be made...because people do stupid things and the wheel is constantly re-invented. Linus thinks if he didn't invent it, then it doesn't exist.
Because he worked for a CPU company for years customizing the kernel to take advantage of the hardware. He worked WITH the hardware designers and garnered a better understanding of how it works. That's why. A team that understands other team members' roles and how they work can make for a better work product. Silos breed waste and inefficiency.
Re:And how is Linus qualified to comment? (Score:5, Interesting)
Linus has been telling "experts" that they were doing it wrong since he was in undergraduate school in Finland. The archived USENET debates between him, and Professor Tannenbaum (the guy that wrote the operating system book that everyone was teaching out of back then) are absolutely legendary. If you haven't read them, then you absolutely should.
Let's just say that despite the fact that Professor Tannenbaum was arguably the world's highest authority on operating systems, Linus debated with him, and has been proved by time to be correct. What's more, Linus worked for years for Transmeta, building processors.
Let's just say that if he spouted off about mistakes than RISC is making that he has seen x86 and Arm make, then the folks working on RISC probably should pay attention.
Re:And how is Linus qualified to comment? (Score:5, Insightful)
Don't forget the time when he decided that the version control system experts were doing it wrong.
Re: (Score:2)
Don't forget the time when he decided that the version control system experts were doing it wrong.
I'm trying to figure out what your point is here? : )
Doesn't almost everyone use the version control system he created [wikipedia.org] now?
Re: (Score:2)
We all use it now, but he originally had to create it because most of the experts were wrong, and the ones who weren't wrong were giving him static on software licenses.
Re: (Score:2)
Doesn't almost everyone use the version control system he created [wikipedia.org] now?
That was his point.
That Linus was right.
Re: (Score:2)
What he decided was that the open source version control experts were doing it wrong, and also the experts who weren't doing version control wrong were doing licensing wrong.
Re: (Score:3)
Don't forget the time when he decided that the version control system experts were doing it wrong.
Git out of here with that nonsense. *blink*
Re: (Score:3, Interesting)
Linus has been telling "experts" that they were doing it wrong since he was in undergraduate school in Finland. The archived USENET debates between him, and Professor Tannenbaum (the guy that wrote the operating system book that everyone was teaching out of back then) are absolutely legendary. If you haven't read them, then you absolutely should.
From what I remember, Professor Tannebaum was advocating for the idealized solution of a microkernel. Academically microkernel is the solution to all kernel design. Linus who was actually working on a kernel that people could use was advocating for a monolithic design as that was the fastest and most practical solution as microkernels only work on paper and in academia.
In the real world and 30 years later, most kernels for operating systems are monolithic or hybrid. There are very few microkernels that wo
Re: (Score:3)
Linux originally started with a microkernel called GNU Hurd
No, he never did (nor did Linux). You are confusing this with the GNU userland tools.
Re: (Score:2)
Re: (Score:2)
Re: And how is Linus qualified to comment? (Score:2)
Re: (Score:2)
It's hybrid.
The BSD subsystem doesn't run "on" mach. It's a cancerous tumor grafted onto mach.
The 2 are first-class residents of kernel space.
It's frankly fucking hideous.
Re: (Score:2)
Re: (Score:2)
A user process in macOS has access to FreeBSD syscalls, and mach ports.
Neither is really on top of each other- they're both first-class citizens.
Processes and threads are implemented on mach tasks and threads, but mach isn't really a microkernel in XNU- all of the kernel, including the FreeBSD portion, runs in shared address space.
Apple has definitely modified mach since mach 3- but it's still very much mach, and the BSD portion is still very much FreeBSD.
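A quick userland illustration of that dual personality (my own sketch; it compiles against the macOS SDK, but it's not Apple documentation):

    #include <stdio.h>
    #include <unistd.h>      /* BSD side: classic UNIX syscall wrappers */
    #include <mach/mach.h>   /* Mach side: ports and Mach traps         */

    int main(void) {
        /* BSD personality of XNU */
        printf("BSD getpid():   %d\n", getpid());

        /* Mach personality of XNU: the task's own kernel port */
        mach_port_t task = mach_task_self();
        printf("Mach task port: 0x%x\n", (unsigned)task);
        return 0;
    }

Same process, two interface families, one shared kernel address space underneath.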
And every single TrashKit
Re: (Score:2)
XNU is mach+FreeBSD.
XNU is a hybrid kernel based on Mach. I think one reason for the creation of XNU was Mach stopped development in 1994. XNU is derived from it. One of the main changes is XNU is not a microkernel. It is a hybrid. It does some things like Mach and some things not like Mach. An OS that used Mach would be mostly compatible with using NeXT's version of XNU. In the 30 years since Mach stopped development, I would imagine that compatibility has decreased. However the person stating that "Darwin (the macOS kernel)
Re: (Score:2)
XNU is a hybrid kernel based on Mach.
As much as it is based on FreeBSD.
I think one reason for the creation of XNU was Mach stopped development in 1994.
Heh. If you or anyone can figure out the reason XNU exists, then you're a modern-day Nostradamus.
It gets nothing from mach that it couldn't have gotten from simply modifying FreeBSD to be as they liked.
XNU is derived from it.
As much as it is derived from FreeBSD.
One of the main changes is XNU is not a microkernel.
Correct.
It is a hybrid.
It's more accurate to simply say it's a monolithic kernel. It is a hybrid of mach and FreeBSD, but the microkernel aspects are gone.
It does some things like Mach and some things not like Mach.
The mach portions
Re: (Score:2)
Microkernels are a great idea, but they need better hardware support (literally different architectural decisions) to work optimally. They can be made to work on modern CPUs, but they're never going to be great there. This presents a chicken/egg problem because nobody is going to invest a billion dollars making a new kind of CPU for an OS that doesn't yet exist.
Re: (Score:2)
They have some cool ideas in them.
But the fact remains, that they are- by design- poor performers.
Microkernels are built around what was basically my first attempt at same-process-space IPC when I was 18.
It seems elegant, but it performs like shit.
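To put a rough number on it, here's a toy sketch of my own (not from this thread): a direct function call versus a request that has to round-trip through the kernel, using a POSIX pipe as a crude stand-in for message-passing IPC. A real microkernel send/receive plus task switch costs more still.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static int add_one(int x) { return x + 1; }

    static double now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e9 + ts.tv_nsec;
    }

    int main(void) {
        enum { N = 100000 };
        int fd[2], v = 0;
        if (pipe(fd) != 0) return 1;

        double t0 = now_ns();
        for (int i = 0; i < N; i++)
            v = add_one(v);                 /* direct call, stays in user space */
        double t1 = now_ns();

        double t2 = now_ns();
        for (int i = 0; i < N; i++) {
            write(fd[1], &v, sizeof v);     /* "send" the request            */
            read(fd[0], &v, sizeof v);      /* "server" receives it          */
            v = add_one(v);                 /* ...and finally does the work  */
        }
        double t3 = now_ns();

        printf("direct call:    %.1f ns/op\n", (t1 - t0) / N);
        printf("through kernel: %.1f ns/op\n", (t3 - t2) / N);
        return v == 2 * N ? 0 : 1;
    }

The absolute numbers vary by machine; the orders-of-magnitude gap is the point, and it's the tax a microkernel pays on every service request.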
Re: (Score:2)
The actual debate is worth reading [oreilly.com]. Let's just say that your recollection is more than a little faulty. Professor Tanenbaum boldly stated that in the future we would all be running microkernels. He even went so far as to say that he would give Linus a poor grade for Linux.
Fast forward over 30 years and Professor Tanenbaum could not have been more wrong. Not only have microkernels all but disappeared, but Linux has essentially swallowed the entire OS space. It proved to perform not only better on x8
Re:And how is Linus qualified to comment? (Score:5, Informative)
Since when is Linus an expert on processor design? And why does he assume that RISC-V designers do not have experience? And why do "verilog" authors need to consult with kernel developers.
I do not know if anyone, even Linus himself, would call him an "expert," but he has experience on the hardware side from when he worked for Transmeta [wikipedia.org] for a few years.
Linus is just showing how fame entitles him to comment on things he does not understand.
Let's be clear on what happened. Someone asked him a question and he answered it. This was not Linus ranting about RISC-V unprompted on a blog.
Sure, mistakes will be made...because people do stupid things and the wheel is constantly re-invented.
His exact quote: "My fear is that RISC-V will do all the same mistakes that everyone did before them. . ." His concern is that "Those who cannot remember the past are condemned to repeat it."
Linus thinks if he didn't invent it, then it doesn't exist.
Please cite where Linus said this.
Re: (Score:2)
Well, he's worked on the interface between hardware and software for decades with considerable success.
He is still very actively engaged, down to the minutiae of the processor design elements, so is as authoritative a source as we have, imho.
Follow the discussions in RWT (https://www.realworldtech.com/forum/?roomid=1 ), he participates there regularly.
They cover the interface between hardware and kernel software in depth, better than anything outside of the AMD or Intel workshops.
Re: (Score:2)
And why do "verilog" authors need to consult with kernel developers?
Because otherwise, hardware designers will spend enormous resources building and optimising structures that software developers won't use. The consultation needs to go both ways. Sometimes the software developers are insistent on usage models that the hardware developers can never make efficient.
Re: (Score:2)
And why is “Verilog” in sarcasm quotes?
Re: (Score:2)
Because he has paid work experience at a CPU design company.
Also, "verilog" is the easy language, the one without rigorous type checking. Not sure why VHDL has fallen out of favor, maybe it's too hard?
Re: (Score:2)
Too verbose. VHDL looks like COBOL on steroids.
Re: (Score:2)
VHDL looks more like Ada to me. But yes it's very verbose, too bad nobody pays by the line anymore.
It's easier to whip up something concise and easy to understand in SystemVerilog. But if you need to do formal proofs, VHDL can be a little stronger, or at least a little more math-like with less C-like weirdness. But SystemVerilog has much nicer verification features built in, which has kind of led to it taking over in industries that require a fast time from design to verification-ready to tape-out, such as con
Re: (Score:3)
Linus is just showing how fame entitles him to comment on things he does not understand.
Giving you the benefit of the doubt and assuming that you too are famous - aren't you doing exactly the same thing?
Re: (Score:2)
I'm not famous, so I have no responsibility in this respect and can talk freely on topics that I do not understand. I can feel safe in the knowledge that nobody is going to take me very seriously and I won't alter the course of some business investment or career in the process.
But more seriously. Linus has a great deal of experience being on the receiving end of the chip designs and has to deal with how those decisions affect real world software.
Even if he isn't directly involved in how the sausage is made,
Seems like an argument for Apple's approach (Score:2)
Single owner of hardware, software and the trade-offs between them, viewing the whole system, well, "systemically".
Re: (Score:2)
Why would RISC V have it easier than ARM? (Score:1)
Without ARM, x86 might have completely taken over. But ARM didn't start in an x86-exclusive world, and at no point did it become only x86 and ARM. It started among multiple architectures, and there still are multiple architectures. The mainstream might not know them well, but the hardware and kernel people do. All the problems that ARM faced are still there to trip up RISC-V, and the solutions employed by ARM aren't really special or unique.
Re: (Score:2)
With aarch64 they have abandoned some of the coolest aspects of their ISA.
Tape-out might be easier now... (Score:2)
Making a new CPU is difficult, but with things like Cadence's Palladium and Synopsys's ZeBu, you can get the ISA and the main parts of the CPU coded, then run it in the simulator. It may take days to weeks for it to boot a Linux kernel... but it can. With the tools that Synopsys and Cadence provide, life is a lot easier for CPU design than in the past, where one can let a well-debugged AI handle floor plan, layout, and other items. This doesn't mean it is easy to make a CPU, but it means the development
Re: (Score:1)
Re: (Score:2)
The fact that RISC-V is free and doesn't require payments for CPU cores is likely what will get it in the running. Especially in China, where, from what I see with new SBCs coming out all the time, it is only a matter of time before Chinese companies hammer out some type of standard, with SMIC doing their chip fabrication. In the past, I'm pretty sure they were going to go with Zhaoxin-style x86 CPUs, but with Centaur gone, the goose that laid the magic eggs there is gone, so it is up to the Chinese engineers
Re: (Score:2)
Itanic. Intel's VLIW sinking ship. The failure of that wannabe monopoly is not missed.
Re: (Score:2)
No standardized bus, or even standardized MMU on the parts that had it.
It was absolutely the single largest factor that held it back: every single ARM CPU, while having the same ISA, had basically nothing else in common. They each had a manufacturer-proprietary bus and a manufacturer-proprietary memory and peripheral mapping. I loved it at the time, because it felt so raw, but it quickly became annoying.
How is this even news? (Score:3)
I've just watched the interview. There's very little content past the headline. Linus thinks the RISC-V world will make the same mistakes other ecosystems did before. He doesn't go into details or anything.
However it is obvious that it is making at least some mistakes. For example, unlike x86, OS images are not portable. You cannot just take an SD card with an operating system for a Raspberry Pi and boot it on any other ARM-based computer. The same goes for RISC-V.
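Part of the reason (my aside, not something Linus said): unlike a PC with standardized firmware and enumerable buses, most ARM and RISC-V boards are described by a board-specific device tree that the image has to carry and match. A trivial sketch, assuming a device-tree based Linux system:

    #include <stdio.h>

    /* Print the board model the kernel was booted with. The path exists
     * only on device-tree based systems (most ARM/RISC-V boards); a
     * Raspberry Pi image ships Pi-specific device trees and firmware,
     * which is one reason it won't boot elsewhere as-is. */
    int main(void) {
        char model[256] = {0};
        FILE *f = fopen("/proc/device-tree/model", "r");
        if (!f) {
            puts("no device tree exposed here");
            return 1;
        }
        size_t n = fread(model, 1, sizeof model - 1, f);
        model[n] = '\0';
        fclose(f);
        printf("board model: %s\n", model);
        return 0;
    }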
Re: (Score:2)
I also wanted the article to at least list a couple of the mistakes Linus think will be repeated.
I thought it was the programmers who should code for the hardware and not the hardware constructors who should build for the software that will be run on it.
Re: (Score:2)
I thought it was the programmers who should code for the hardware and not the hardware constructors who should build for the software that will be run on it.
Surely both is best? I remember that during the VAX/VMS design process, hardware and software people worked closely together and iterated design proposals.
That's the main reason it was called VAX/VMS - the name included both the hardware architecture and the operating system.
Re: (Score:2)
Yes, underwhelming. I will go ahead and speculate that Linus is mainly thinking about side channel attacks and the speculative execution designs that facilitate them. Other obvious pitfalls, e.g., broken metavirtualization, I hope will be avoided. It's not like IBM didn't get those sorted back in the 70s.
RISC-V working groups on safety / sidechannels etc (Score:1)
He is right (Score:2)
On the plus side, as soon as the less experienced hardware people find out they have a problem, the smarter ones can look up history to find what works as a fix.
What issues exactly? (Score:2)
Lame that Linus did not mention any specific issues. I can speculate (... virtualized interrupt gating??? ...) but why? Linus could have just rattled off a few. Or was he just fishing? Linus does that from time to time, master of rhetoric that he is.
Re: (Score:2)
...tagging on. He could be thinking about side channel attacks, currently bedeviling every major CPU architecture. And frankly, I'm OK with RISC-V not being perfectly airtight against Spectre in its initial incarnations, if it delivers the performance. Just don't be a whole lot worse than AMD, please, and let's keep the bad guys out by improving defenses at the browser level. And of course, by taking a pass on closed source binaries, particularly from untrusted sources like Microsoft.