Nvidia, Western Digital Turn to Open Source RISC-V Processors (ieee.org)
An anonymous reader quotes IEEE Spectrum:
[W]hat's so compelling about RISC-V isn't the technology -- it's the economics. The instruction set is open source. Anyone can download it and design a chip based on the architecture without paying a fee. If you wanted to do that with ARM, you'd have to pay its developer, Arm Holdings, a few million dollars for a license. If you wanted to use x86, you're out of luck because Intel licenses its instruction set only to Advanced Micro Devices. For manufacturers, the open-source approach could lower the risks associated with building custom chips.
Already, Nvidia and Western Digital Corp. have decided to use RISC-V in their own internally developed silicon. Western Digital's chief technology officer has said that in 2019 or 2020, the company will unveil a new RISC-V processor for the more than 1 billion cores the storage firm ships each year. Likewise, Nvidia is using RISC-V for a governing microcontroller that it places on the board to manage its massively multicore graphics processors.
Um... didn't AMD (Score:3)
Re: (Score:1)
but last I checked that was required by law (that's kind of the point of patents)
No, it's not. You're conflating patents and copyrights.
Re: (Score:1)
No, he's not.
https://www.extremetech.com/co... [extremetech.com]
Re: (Score:1)
Nowhere in that article is there any proof of a law requiring compulsory licensing of patents.
Re: (Score:2)
Try here.
https://en.wikipedia.org/wiki/Compulsory_license [wikipedia.org]
Re: (Score:1)
Yes, as I said, the US has compulsory licensing of certain copyrighted works, but it does not work that way for patents. Did you even read the article?
Re: (Score:2)
but it does not work that way for patents.
Sometimes it does. Intel is compelled to license some of their patents to AMD by the DoJ as part of an anti-trust consent decree.
Re: (Score:2)
but it does not work that way for patents.
Sometimes it does. Intel is compelled to license some of their patents to AMD by the DoJ as part of an anti-trust consent decree.
You didn't understand the word "consent" in "consent decree." That means it is an agreement between the DoJ and Intel. It isn't something they were compelled to do; it is something they agreed to do to head off being compelled to do other, potentially more onerous, things.
Re: (Score:3)
There is a long backstory, with a timeline detailed at https://www.cnet.com/news/inte... [cnet.com]
* in the early 1980s, when IBM brought out the PC, they threw in their standard demand for a "second source". Back then, IBM was YUUUUGE, and if you wanted their business, you complied with their demands.
* as per IBM's demand, Intel licenced 8086/8088 and 80286 tech to AMD
* later, Intel claimed that the licence did not cover 80386 and further cpus. AMD claimed that the licence did cover future X86 cpus. Court battles ens
Re: (Score:1)
BTW, if patent licenses are compulsory, how do you explain Intel not being forced to license x86 patents? If they were, more than just AMD would have a license from Intel. Or how do you explain the Apple v. Samsung lawsuit over patents? If patent licensing were compulsory, there would be no case, since Apple would have been forced by law to license the patents to Samsung.
Re: Um... didn't AMD (Score:1)
AMD really only has an x86 license because back in the day they were an 80286 second source. Their main thing back then was some 'bit slice' ALU processors they sold in 4-bit chunks that you could bolt together. I think I still have some stuck away in a box somewhere.
Re: (Score:1)
AMD really only has an x86 license because back in the day they were an 80286 second source.
Oh, and that other thing called AMD64 that pretty much every x86 CPU (even Intel's) has become.
Re: (Score:1)
Re: Um... didn't AMD (Score:1)
The only reason they had an architecture to extend was because of having been a licensed second source decades ago.
The knob polishing that AMD gets from their fanboys is amazing. As is the presumption some will make that this comment makes me an Intel fanboy.
For pete's sake, they're not football teams. Nothing we say or do matters to the people running Intel or AMD. You're not going to get invited to the party afterwards if you're the most slavish fan in the mosh pit.
Re: (Score:1)
Just as proof from the article:
It’s entirely possible that AMD wants to use the building blocks of Zen or AMD’s SeaMicro fabric and combine them with its own homegrown capabilities in ways that wouldn’t infringe on patents held by Intel or other players in the industry.
If patent licensing were compulsory, why would they care about infringing Intel's patents? Intel would have to give them a license. Oh wait, except there is nothing requiring Intel to license its patents, which is why they are working around that, as backed up by the paragraph above this one:
This suggests that the JV is structured to bypass restrictions in AMD’s x86 license agreement with Intel that would otherwise prevent the company from signing any such agreement.
Re: (Score:2)
The OP said, "I'm sure they have to license patents from Intel, but last I checked that was required by law"
In other words, the law requires that AMD license any Intel patents they use. Not sure how you construed his comment to mean Intel is required to license their patents to AMD.
Re: (Score:2)
no, AMD engineered a financial arrangement that makes it look like a Chinese company is manufacturing the processors, all in an effort to skirt Chinese import taxes.
Re: Why not others? (Score:1)
Well... considering nVidia IS adopting RISC-V, shouldn't that tell those others you mention that they might be missing out?
Re: (Score:1)
Nvidia's RISC-V CPU is intended to replace Nvidia's Falcon (FAst Logic CONtroller) CPU, which is the internal controller of their graphics cards.
Re: (Score:2)
Assuming you meant workloads and not warlords, it still does in terms of processing power per watt in some situations, if you are talking about the T-series.
Personally, I would love more server manufacturers to sell "the most power you can get in a box for 100W/200W", for which I use Sparc, but would be happy to use Arm or Mips or Risc-V. (I use very little floating point and quite a lot of I/O bandwidth).
Granny always told me not to put my eggs in a bas
Re: (Score:2)
I think the most common computer at a Warlord's base is going to be the Toyota ECM. Without that, your technicals won't have good traction control or extreme weather performance.
Re: Why not others? (Score:5, Interesting)
Not necessarily.
The instruction set is far less important than the toolchain in 2018.
In 1999, when I was working with ARM, Ericsson, Redhat, Opera and some others, we were investing very heavily in sorting out Linux on non-x86 processors. It was a disaster because so much of GCC, Linux, glibc and binutils were optimized to death for x86.
We had our biggest challenge trying to make dynamic software (software with indeterminate memory requirements) operate on CPUs lacking an MMU. The web changed everything. Because none of the software vendors involved in the project could dictate the data sets to be consumed on the devices, we had serious memory fragmentation issues. In a multi-process operating system that needed to support HTML in mail and in the web browser, Linux was suffering terribly on systems lacking MMUs.
We had a lot of other problems as well. GCC 2.91 was such a horrible codebase, with spaghetti everywhere in code generation, because Stallman did such a painfully piss-poor job in his design. Academics and companies everywhere had been spamming the codebase for years, inserting AST-reduction-oriented optimizations which would be carried over to code generation. And since GCC didn't really have a maintainer in the sense that Linus maintains the kernel, and CVS was also a nightmare, let's just say that GCC worked almost by accident.
Binutils was ugly too. Even today, binutils is not nearly what it should be. This is still a point of clear superiority for Microsoft, if for no other reason than that Microsoft made it a standard requirement for Windows DLL files to explicitly describe entry points, which permits far more intelligent linkers to be written.
Of course, any language that actually needs a compile-time linker in 2018 is a piece of crap by design. Most C/C++ code would be substantially better if all the source files were included from a single source file, which would then be compiled and linked with clear entry point definitions by GCC or Clang, instead of using a linker which lacks an AST.
So, the year is 2018 and both GCC and LLVM are highly retargetable. Binutils works much better than ever. Most JITs are well designed and easily portable. .NET Core runs on x86, x64, ARM, ARM64, and apparently one of Microsoft's own CPU designs. Java runs everywhere. Oh, and Mono can run pretty much anywhere a C compiler is available.
If you want to make a new CPU design, you need to port code generators and binutils to the new CPU, and then porting Linux is pretty straightforward. Most of the platform-native code in Linux these days is a single directory, and that directory can be very lightweight. The DEC Alpha directory is ridiculously easy to port as it's mostly C code tweaked to produce good code. Alternatively, there's the ZPU project, which worked pretty well and is almost all C.
Last, you need to make a first-stage bootloader, and porting some UEFI code is easier than you'd think.
Once those things are done, compiling Fedora or Ubuntu for the platform is pretty easy.
I've seen an end-to-end new-CPU bootstrap on modern Debian done by 5 developers in a month. It took a small team a year to implement optimizations, since the CPU was an extremely different architecture, but it was done by a rinky-dink $10-a-year company.
Now enter RISC-V or other CPUs which already have a toolchain... there is no value in using ARM anymore, since those toolchains are already stable and the platforms are also stable. They have some disadvantages; for example, they're designed mostly for FPGAs, which means design decisions have been made based on the structure of LUTs. Multipliers are probably based on stacked 9-bit pyramid multipliers, and dividers are probably suboptimal. As NVidia and others get their hands on it, they will contribute better ASIC blocks, because they have the skill set required and also have more than enough of their own IP for those things... like reduced gate d
Good tech (Score:5, Informative)
I think the article should say "[W]hat's so compelling about RISC-V isn't just the technology".
The instruction set is modern and tight, made to be easy to pipeline and scale. There are RISC-V chips that rival ARM in performance / watt at the same manufacturing process.
The ISA is modular so engineers could strip out the parts they don't need and get more power savings that way.
But I would not say that it is mature yet. There are important parts, such as the memory consistency model, that I have not yet seen set in stone.
The Greater the RISC ... (Score:5, Funny)
Re:Whatever happened to step changes? (Score:5, Informative)
RISC really shined during a brief period when there was an extreme premium on getting every part of a CPU on a single die, and memory speeds weren't totally out of whack with CPU speeds. That favored its approach of putting the minimum number of transistors on a chip and using memory a bit more wastefully than older approaches grounded in the days when memory was both slow and very expensive, e.g. during the transition from core to DRAM.
Now, of course, we can put what is, relative to those days, an infinite number of transistors on a die, and memory speeds are again out of whack with CPU speeds. We've got plenty of main memory, but cache is still dear. To the point that pretty much any execution micro-optimization that causes your working set to exceed a level of caching ends up running slower. And Intel's IA-32 macro-architecture didn't make any fatal mistakes like, e.g., the VAX's, so it could be made to run quickly without insane effort.
Re: Whatever happened to step changes? (Score:1)
Intel excels at main memory access performance and cache utilization due to its variable-length instructions. Most ARM processors have relatively poor main memory access performance even when they run at higher core speeds.
Re: (Score:3)
Indeed. The higher transistor count needed to quickly decode Intel's variable length instructions is now trivially affordable, so ARM processors don't gain much there, and tend to lose it when memory hierarchy performance is factored in.
Except that we should note that "lots more" transistors to decode or otherwise make a CPU run quickly is harmful for energy usage, so ARM still wins in mobile (although some of Intel's issues there are due to it just not seriously pursuing that potentially high volume but l
Re: (Score:2)
Re: (Score:2)
The Intel went full fast.
Re: (Score:2)
He said he's drunk, can't you even read?!
Re: (Score:2)
It helped that a lot of the OS was in ROM.
Stealth CPUs (Score:5, Interesting)
So RISC-V's market is going to be mostly in non-exposed, internal processors running secret, unreplaceable firmware doing unknown things in our GPUs and SSDs... Kinda like the Intel ME and AMD PSP. Are we supposed to feel good about that?
I find it ironic that the first thing that comes out of an open CPU design is more of the closed systems that RISC-V was supposedly designed to discourage. I don't think we can blindly apply to open hardware the same approach that was taken for open software; the economics of hardware production are very different from the economics of distributing software on the Internet.
Re: (Score:3)
Comment removed (Score:4, Insightful)
Re: (Score:1)
Not only ARM but many lesser-known ones. For example, ARC has a bit of notoriety for being used in nvidia GPUs before being replaced by a RISC-V core, and in the Intel Management Engine before they put in an x86 instead (their small 486/586 clone that was sold as Quark, I believe).
There's likely MIPS too. PowerPC was used on RAID cards. And probably a ton of vendor-specific architectures and simple microcontrollers.
Re:Stealth CPUs (Score:5, Interesting)
Not being funny - but almost every chip you've ever used could have secret unreplaceable firmware and you'd know nothing about it.
This has been true throughout the history of computing, really. Sure, we know now that the Z80 was okay but we had no way of sensibly telling back then and it was all we could use.
Has anyone ever decapped a 386? What about those old AMI BIOS chips? Sure, we know what firmware can be loaded onto them, but how do we know that's all that's in the chip and there isn't a secret ROM activated under certain conditions? We don't, until the chip is dead and out of the market, and even then we may never know.
Sorry, but "open" hardware of any significant specification is a fallacy... because you cannot verify it without an awful lot of very expensive equipment, even if it operates as if it were a RISC-V processor. Anything could be tapping into that core specification and leaking or acting on data secretly and you'd never know - it would just look and work like RISC-V chips all do to all outside appearances.
Honestly, if you think that nVidia using RISC-V is a bad thing, and isn't going to boost RISC-V adoption, reputation and development, or that your system is somehow going to avoid all such avenues of compromise, you're so wrong that it's laughable.
In fact, if anything, an open design makes it incredibly easy to modify such a thing, use its name AND get away with it, because nobody will ever check and/or ever be able to sue, compared with doing that to some big-name chip manufacturer.
Re: (Score:2)
Well, in the olden days some of us didn't like buying BIOS chips, and updates weren't always user-installable. So some of us went to a local electronics store, and if you bought a blank EEPROM of the correct size and pinout, they'd be happy to copy the firmware off of a real BIOS chip they had lying around. It would work great, but since it wasn't even the same brand of ROM, it wouldn't have had the stock secret circuit. And it wouldn't have been sold in a supply chain that expected it to end up as a BIOS chip.
An
Re: (Score:2)
The portion of people who would have bought their own chips and copied a BIOS onto them was vanishingly small. I was really deep into IT as a kid, and very much considered a "BIOS Saviour" purchase several times, but could never justify it.
But the fact remains that you still wouldn't have known - you may have unwittingly saved yourself from such an attack, but the BIOS could easily program in an innocent-looking presence check and carry on regardless if on a non-compromised device. Hell, it doesn't eve
Re: (Score:2)
The hacks are much older; in x86 it's all based on the 1990s' Pentium Pro, first shipped on a 0.5 µm process, and the basic concept was first developed by IBM in the mid-late 1960s [wikipedia.org] back when they w
Re: (Score:2)
I find it ironic that the first thing that comes out of an open CPU design is more of the closed systems that supposedly RISC-V was designed to discourage.
That is the whole tech industry's goal; you seem to be unaware that the tech industry from the very beginning (games, hardware manufacturers, Hollywood, etc.) has always hated people owning and controlling their own computers and software. With the rise of high-speed internet, they are using the ignorant half of mankind to slowly boil the frog and take away control of our machines, because they know we can't reach these companies and hold them accountable.
The last 20 years in videogames and software has been towards
Open hardware won't make itself... (Score:2)
Re: (Score:2)
Kinda sounds like comments of the early 1990s when Linux was considered a toy that nobody would ever use "in real life".
Why not SPARC? (Score:2)
Re:Why not SPARC? (Score:4, Funny)
Re: (Score:2)
RISC-V got around any issues with the flags register by dispensing with one altogether. They defend this in part by claiming the most popular programming languages don't care about integer overflow and the like, but they also don't provide any other facilities to make overflow quick to discover. The people who need to do big-integer math are not amused, and I take this as a clear sign it's a "worse is better" architecture.
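(A minimal sketch in C of what that means in practice for the big-integer crowd: with no carry flag to read back, each limb addition has to recover the carry from comparisons instead of an add-with-carry instruction. Function and variable names here are purely illustrative, not from any real bignum library.)

```c
#include <stdint.h>
#include <stddef.h>

/* dst = a + b over n 64-bit limbs; returns the final carry.
 * On an ISA with no carry flag, the wrap-around of each add is
 * detected by comparing the sum against one of its operands. */
uint64_t bigint_add(uint64_t *dst, const uint64_t *a,
                    const uint64_t *b, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t s  = a[i] + carry;   /* may wrap */
        uint64_t c1 = (s < carry);    /* carry out of the first add */
        uint64_t t  = s + b[i];       /* may wrap again */
        uint64_t c2 = (t < b[i]);     /* carry out of the second add */
        dst[i] = t;
        carry  = c1 | c2;             /* at most one of them can be set */
    }
    return carry;
}
```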
Why don't cell phones use RISC-V? (Score:2)
Re: (Score:1)
The key word being "yet". It would be more surprising if it weren't being worked on, and given that Android is almost architecture-neutral and Linux support is forthcoming, there are few obstacles. Google is a platinum member of the RISC-V Foundation, and eventually it seems very likely that it will power their servers and more.
As for a compelling reason, a simple and uniform platform would be extremely desirable. ARM is a horror on phones and tablets, basically requiring a custom OS image for every device.
Re: (Score:3)
Yes, RISC-V is low-end for now...
But ARM was always low-end, and so was x86; support from a large number of vendors and huge sales volume push the low end up, while the high-end architectures have all failed or been pushed into tiny, expensive niches.
Re: (Score:2)
Couldn't they have used MIPS, SPARC or POWER? (Score:2)
How the hell is it possible to pay a fee for an ISA? (Score:2)
It's outrageous that you can copyright an instruction set. It's a language, and you shouldn't be able to copyright languages, protocols, etc. Didn't Transmeta and several other companies implement x86 without paying Intel a fee? It seems the fee should only be for licensing Intel's schematics, but if you're going to use all of your own electronic designs it should be fine to support an existing ISA. ISAs are not difficult to implement. In fact you can create one in an hour. This certainly is not worth millions of doll
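(To make the "create one in an hour" claim concrete, here is a toy ISA interpreted in a few lines of C. The opcodes, register count, and encoding are all invented for this sketch and don't correspond to any real instruction set.)

```c
#include <stdint.h>
#include <stdio.h>

/* A made-up four-register ISA with five opcodes. */
enum { OP_HALT, OP_LOADI, OP_ADD, OP_SUB, OP_PRINT };

typedef struct { uint8_t op, dst, src; int32_t imm; } insn_t;

static void run(const insn_t *prog)
{
    int32_t reg[4] = {0};
    for (const insn_t *p = prog; ; p++) {
        switch (p->op) {
        case OP_LOADI: reg[p->dst]  = p->imm;        break;
        case OP_ADD:   reg[p->dst] += reg[p->src];   break;
        case OP_SUB:   reg[p->dst] -= reg[p->src];   break;
        case OP_PRINT: printf("%d\n", reg[p->dst]);  break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    const insn_t prog[] = {
        { OP_LOADI, 0, 0, 40 },   /* r0 = 40      */
        { OP_LOADI, 1, 0, 2  },   /* r1 = 2       */
        { OP_ADD,   0, 1, 0  },   /* r0 += r1     */
        { OP_PRINT, 0, 0, 0  },   /* prints 42    */
        { OP_HALT,  0, 0, 0  },
    };
    run(prog);
    return 0;
}
```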