Startup Offers A Chip Based On The Open Source RISC-V Architecture (computerworld.com.au) 73
angry tapir shared this news from Computerworld:
An open-source chip project is out to break the dominance of proprietary chips offered by Intel, AMD, and ARM... A startup called SiFive is the first to make a business out of the [open source] RISC-V architecture. The company is also the first to convert the RISC-V instruction set architecture into actual silicon. The company on Thursday announced it has created two new chip designs that can be licensed... but the company will not charge royalties. That makes it an attractive alternative to chip designs from ARM and Imagination Technologies, which charge licensing fees and royalties.
One of RISC-V's inventors co-founded the company, and he says that support is growing -- pointing out that there's already a fork of Linux for RISC-V.
You don't have to pay per unit royalties sure... (Score:4, Informative)
But you have to pay a shitton of money to get the license. Well a shitton to a regular person anyway. If you can afford to manufacture one of these chips the license cost is probably a drop in the bucket.
Re: (Score:2, Informative)
I see you've been courted by ARM's marketing department. They've been coming to us as well with exciting charts about how affordable ARM licensing is versus the evil expensive RISC-V. But look closely: it's bollocks.
If all you want to do is make Cortex-M0 chips, then ARM's DesignStart license is cheaper than hiring an engineer to put together a RISC-V ($40K iirc). Of course ARM makes that up if you manage to ship a lot of M0 units. But if you need a wide range of ARM products, the licenses quickly get more
Re: (Score:3)
Re: You don't have to pay per unit royalties sure. (Score:2)
That MUST be a good solution. You know that spending a lot of money on an FPGA makes the best products.
Re: (Score:2, Informative)
This is about security. Cost is second. If I can implement an open source RISC-V processor, don't you get that I could audit every instruction executed, as well as the means to execute those instructions? Most people blindly trust a black box known as Intel or AMD to execute their instructions. There could be a dozen undocumented things hiding in that box that you never knew about, whether planted there unintentionally as bugs, or intentionally by governments and large companies.
Re: (Score:2)
Cost is second. If I can implement an open source RISC-V processor, don't you get that I could audit every instruction executed, as well as the means to execute those instructions?
If cost is second, you can also get a source license for an ARM core, and audit every instruction.
Bigger issue I am still not seeing resolved: (Score:1)
A Pi-like SoC offering low end desktop equivalent memory+i/o capabilities and/or a Desktop compatible chip capable of using standardized bus technologies to interface with expansion cards, peripherals, and memory busses to allow it to compete, even 'unfairly' from a performance/utility point of view, against the modern Wintel desktop PC design, perhaps opening the floodgates for other cpus/systems utilizing standardized busses and expansion cards on a variety of alternative architectures and operating syste
Re: (Score:2)
Exactly. With the RISC-V you just have to fab a more or less orphan design yourself and then invent the entire support infrastructure from top to bottom as you do it. As opposed to buying a universally-supported device for $1 or so (at the performance level of the RISC-V), in units of millions if you really need that many, from any random vendor or supplier you care to name. If you're Microchip (PIC32) or Atmel (AVR32) you can afford to do your own custom architecture (and even those are somewhat niche-m
Re: (Score:2)
The point of RISC-V is not to come out with a standalone chip. The biggest market is replacement of existing cores in SoC designs. If you're already making a SoC, switching out the core is not a huge complicated ordeal.
A perfect application would be something like the ESP8266 WiFi module https://en.wikipedia.org/wiki/... [wikipedia.org]
These modules sell for less than $2 apiece on AliExpress. I'm sure the manufacturer does not want to pay $1 in royalties for the CPU core. There are many more of those kinds of IoT devices.
Re: (Score:2)
They're not paying $1 per IP core, that's why the whole module can sell for under $2. They'll be paying some insignificant percentage that's lost in the noise. It's an irrelevant amount. What they're getting in return is a complete support ecosystem that lets them build a module that sells for under $2. This can never happen with RISC-V because the licensing cost is an infinitesimal fraction of the total cost. Sure, if you state the licensing as being $200K then that sounds a lot, until you spread it a
Re: (Score:3)
the Tensilica Xtensa CPU which is the bit with the ARM license
The Tensilica CPU is not an ARM. It is presumably cheaper, but not free, and it burdens them with the cost of learning a relatively unknown CPU architecture. If you're going to take that cost, you might as well drop in a RISC-V core, and pay nothing, plus you get to benefit from the growing open source infrastructure around it.
Now compare that with the cost of fabbing your own RISC-V
The company that makes the ESP8266 is already fabbing the SoC, so there is no extra cost for the RISC-V.
it's not competing with the hundredth-of-a-cent licensing costs per ARM core shipped,
ARM charges 1.2% of the chip price for a Cortex. So, for a $1 chip, that's 1.2 cents.
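The trade-off the thread keeps circling reduces to simple arithmetic. A sketch, using the $200K one-time figure and the 1.2% royalty quoted in the comments above (the unit volumes are made up for illustration):

```python
# Back-of-the-envelope comparison of a one-time core license fee
# against a per-unit royalty. The $200K and 1.2% figures come from
# the comments above; everything else here is hypothetical.
ONE_TIME_LICENSE = 200_000      # dollars, paid once
CHIP_PRICE = 1.00               # dollars per chip
ROYALTY_RATE = 0.012            # 1.2% of the chip price
royalty_per_unit = CHIP_PRICE * ROYALTY_RATE   # $0.012, i.e. 1.2 cents

# Volume at which the one-time fee and the royalty stream cost the same:
break_even_units = ONE_TIME_LICENSE / royalty_per_unit
print(f"break-even at {break_even_units:,.0f} units")   # ~16.7 million

for units in (1_000_000, 10_000_000, 100_000_000):
    amortized = ONE_TIME_LICENSE / units
    print(f"{units:>11,} units: license {amortized * 100:.2f}c/unit "
          f"vs royalty {royalty_per_unit * 100:.1f}c/unit")
```

Below the break-even volume the royalty model is cheaper; above it, the one-time fee amortizes to noise, which is the crux of both sides' arguments.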
Re: (Score:2)
Re: (Score:3)
unless they can magic the masks and other components out of nothing, the cost of creating RISC-V stuff is going to be considerable
Not really. They are already making ASICs with their own stuff. And those ASICs already have a core of some sort. Taking the HDL of one core out, and inserting the HDL of another core, doesn't really change anything in their process. A CPU core is a relatively simple piece to synthesise, all straight digital CMOS with standard library components.
And if you're starting with a new ASIC, it's even easier to pick a free core from the beginning. And when you're doing a second ASIC based on the same core, it's
Re: (Score:2)
"unless they can magic the masks and other components out of nothing, the cost of creating RISC-V stuff is going to be considerable"
You have to generate masks for your silicon whatever processor you use.
This might not be a bad idea (Score:2)
The component part of the licensing, that is. I imagine if you are mass manufacturing a specific device and need mostly *some* CPU functionality for performance and battery life, you can avoid paying for the parts you don't need, as opposed to buying "bulk" CPU functionality. So this might be a way to pack more processing power into the device for the same cost. The only question is how the mostly theoretical RISC-V design will hold up against the well-baked Intel and ARM architectures that have had so many real
It's not the only question (Score:2)
Re:open source? $25K for every chip feature... (Score:4, Insightful)
Anyone can produce a RISC-V. But you can pay these guys to have a ready-to-synthesize solution.
There is a free open source version, and it looks pretty good. But the free one (E310) is not going to have all of those extra bus interfaces. That makes it difficult to connect to high speed peripherals or operate in a multiprocessor environment (especially SMP).
Ultimately SiFive is operating a business that serves other businesses, not end users. The licensing of IP is the classic (and safest) way to operate a fabless silicon business.
Nothing is preventing you from making your own RISC-V from scratch. Unlike something like ARM where you have to pay them fees to cover patents and trademark even if you don't use ARM's implementation. And honestly it's cheaper to license ARM's implementations than to get permission to make your own. With RISC-V the opposite is true, it's free to make your own or you can pay money to license from multiple companies, who hopefully compete with one another or differentiate in other ways.
Re: (Score:2)
Re:open source? $25K for every chip feature... (Score:5, Informative)
RISC-V can be embedded into your ASIC design, which is not something you can do with an x86-64 from Intel or AMD. Not only because RISC-V tends to be smaller, but because Intel and AMD do not license their designs in such a way as to allow vendors to embed the design. The other aspect of RISC-V that I think is quite interesting is that it has a wide range of configurations, allowing you to tailor an instance of the processor for your particular application.
I expect in the near future you will see RISC-V popping up as embedded processors in cameras, TVs, smart appliances, cars, routers, and more. Places where you might have had a MIPS or ARM in the past could also be serviced by a RISC-V. And the configurations available are quite a bit more flexible than MIPS and quite a bit cheaper than ARM, making it fit a broad range of markets. In a few years you will likely be using a RISC-V in some way, even if you are still stuck on trying to force your comparisons to a narrow market of desktop PCs. (your PC's GPU will probably have one or more RISC-V cores on it to manage power and orchestrate jobs to shaders)
Price and flexibility are the advantage here. Power is basically a solved problem: we know theoretically the best power we can achieve for a particular computation (thanks to the laws of thermodynamics), and at a very low level that is already done by all the low-power architectures, including x86. Theoretically RISC-V is as scalable as any other modern CPU architecture, and maybe someone will make a supercomputer out of the 64-bit variant of it some day.
Re: (Score:1)
"RISC-V can be embedded into your ASIC design, which is not something you can do with an x86-64 from Intel or AMD."
Boy, you must have a shitty understanding of how x86/x64 procs work nowadays. The entire x86 instruction set is practically layered on a RISC-like core nowadays.
See what happens when you fuck around with non-bare-metal languages? You get idiots like this that don't know the fucking architecture and make entirely wrong statements like this.
Re: (Score:2)
Boy, you must have a shitty understanding of how x86/x64 procs work nowadays.
He's 100% correct. Neither AMD nor Intel will license their processors for you to embed in your ASIC design. And frankly you wouldn't want them to either, because on the top end, the processors are closely tied into the fab too.
Just because you could in theory make a small x86 core on an ASIC, doesn't mean there's a decent x86 one out there to license. Plus you still have to pay for that big instruction decoder. That's fine on a d
Re: (Score:3)
That's not needed since the Pentium patents expired.
Great, now where can I get the HDL sources to embed a Pentium in my own ASIC? Or are you suggesting that I can just clone a Pentium myself?
Re: (Score:3)
To embed an x86 into an ASIC without permission from Intel you'd have to choose one that is not patent encumbered, so a 20-year-old architecture would theoretically be possible. Starting this year (2017) you could do a Pentium II or Pentium Pro, which is not too shabby really. If you took the original masks, you could start right away, but the process differences may be difficult to resolve and you'd be saddled with a bus architecture that is incompatible with the rest of your ASIC's IP. And starting from sc
Re: (Score:2)
Nope. Masks are not covered by copyright. They are under a special 10-year law for silicon masks.
Re: (Score:2)
"If you took the original masks,"
The last PII was on a 0.25 micron process. Have you ever done a 10x pure optical shrink and gotten *any* of the polygons to print?
Re: (Score:2)
You could use a 0.25 micron process and not shrink, but that'd basically make your ASIC as expensive as it was for Intel 20 years ago.
I'm only listing ideas, I'm not filtering out the bad ideas
Re: (Score:2)
"RISC-V can be embedded into your ASIC design, which is not something you can do with an x86-64 from Intel or AMD."
Atom is definitely available as a hard IP. A quick search didn't turn up any PR on the soft IP version being available, so I won't claim that Intel has finished it yet.
Re: (Score:2)
Atom is definitely available as a hard IP. A quick search didn't turn up any PR on the soft IP version being available, so I won't claim that Intel has finished it yet.
I wouldn't say "definitely". A lot of these guys make press releases about licensing their IP. NVIDIA made announcements a few years ago they would license GPU IP, but there is nothing on the market today. I'm going to guess the Intels and NVIDIAs of the world want huge per chip royalties. Maybe we'll see some Atoms in aerospace where the royalties are not as significant. Rad-hardened x86 could be fun.
Re: (Score:2)
"I'm going to guess the Intels and NVIDIAs of the world want huge per chip royalties. "
Intel is desperate to make the Custom Foundry a thing. There are at least two sweetheart deals to get other companies to put x86 in their own SoC.
This is a PR blurb (Score:2)
Heavy on the "rah rah", light on the details. None of the things they are saying will matter if the chip they produce isn't good. The current chip makers out there make chips that are VERY good for their given purposes, and they have a lot of R&D going in to that. It isn't as though designing a CPU that is fast, efficient, highly capable, etc, etc is some easy feat.
Now maybe these guys did that... but then let's see some info. What are the specs on the chip(s) and what are they designed to compete with?
Re: (Score:2)
Intel processors are RISC internally, and have a micro-compiler to convert the CISC Intel instructions into the RISC equivalents. It's a small overhead, but given the superscalar architecture and high clock speed, it's not noticeable.
It used to be the case that the 80386 instructions had the REP prefix, which would allow one string instruction to be repeated the number of times held in the CX register. For some reason, they didn't extend this to the floating-point instructions and now seem to have dropped it entirely.
Re: (Score:2)
It's a small overhead, but given the superscalar architecture and high clock speed, it's not noticeable.
It's not actually an overhead, because the translation also does a bunch of stuff to optimize the code along the way.
the REP prefix, which would allow one instruction to be repeated up to 256 times
They don't need REP, because the instruction translator automatically unrolls loops.
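The unrolling mentioned above can be illustrated with a toy software version. Hardware micro-op caches and compilers do this transparently; this sketch just shows the shape of the transformation:

```python
# Hand-unrolled summation: the main body takes one loop branch per
# four elements instead of one per element, at the cost of a small
# remainder loop for the leftover elements.
def sum_unrolled(xs):
    total = 0
    n = len(xs) - len(xs) % 4    # largest multiple of 4 <= len(xs)
    i = 0
    while i < n:                 # unrolled body: 4 adds per branch
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    for x in xs[n:]:             # remainder loop for the last 0-3 items
        total += x
    return total

print(sum_unrolled(list(range(10))))   # 0+1+...+9 = 45
```

The trade-off is classic: fewer branches and more instruction-level parallelism in exchange for larger code, which is exactly the kind of rewrite a micro-op translator can afford to do invisibly.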
RISC V (Score:1)
I have been following the whole RISC V thing for a while now and all I can say is finally!
This is good for security. Finally a CPU that isn't backdoored out of the box. Right now there isn't really a good alternative although there have been many attempts at producing free (as in freedom) hardware.
https://www.crowdsupply.com/raptor-computing-systems/talos-secure-workstation
https://puri.sm/ (not completely free)
Re: (Score:3)
Did you even look at their website or the product manual? It's already designed around a 64 bit core.
https://www.sifive.com/documentation/coreplex/e51-coreplex-manual/ [sifive.com]
Re: (Score:2)
You mean the Amazon Echo:
https://en.wikipedia.org/wiki/... [wikipedia.org]
https://www.ifixit.com/Teardow... [ifixit.com]
With 256MB RAM, 4GB NAND, and a 32-bit chip:
http://www.ti.com/product/DM37... [ti.com]
that's based on an ARM Cortex-A8:
https://en.wikipedia.org/wiki/... [wikipedia.org]
Which is a 32-bit CPU?
Yeah. You're a twat, who thinks that the bigger the number the better and you "can't possibly" do stuff with anything else.
I mean, seriously, did you even spend two fucking seconds thinking about it, given that one Google search and the first couple of Wi
Re: (Score:2)
Because they translate internally to RISC, and then leverage superscalar out-of-order pipelines. On the silicon, RISC won. On the machine code side, CISC gives your compiler a bit of wiggle room in not needing to know the micro-optimizations for every variation of the silicon.
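The internal translation the posts above describe can be caricatured in a few lines. This is a toy model, not how any real decoder works; the instruction spellings and micro-op names are invented for illustration:

```python
# Toy sketch of CISC-to-micro-op "cracking": one complex memory-operand
# instruction becomes several simple internal operations, while a
# register-register instruction maps to a single one.
CRACK_TABLE = {
    # add [mem], eax is a load-modify-store: three micro-ops
    "add [mem], eax": ["load  t0, [mem]",
                       "add   t0, t0, eax",
                       "store [mem], t0"],
    # register-register add is already simple: one micro-op
    "add ebx, eax":   ["add   ebx, ebx, eax"],
}

def crack(insn):
    """Return the internal micro-op sequence for one x86-style instruction."""
    return CRACK_TABLE[insn]

print(len(crack("add [mem], eax")))   # 3
print(len(crack("add ebx, eax")))     # 1
```

The point of the caricature: the RISC-like micro-ops are uniform and easy to schedule out of order, while the variable-complexity CISC encoding only has to be dealt with once, at the decoder.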
Re: (Score:3)
Both CISC and RISC translate to an internal CPU microcode. The difference is that the RISC translation is generally much simpler, requiring a smaller portion of the CPU die. But as CPUs get larger, this difference becomes less relevant when looking at the overall CPU.
There are no real advantages to CISC. One used to be able to argue that the required instructions are more compact but ARM CPUs demonstrated how a similar effect can be accomplished with their Thumb2 instructions. But there are also no lo
Re: (Score:2)
There is a benefit: compatibility. Even Intel, which tried the impossible with HP to kill the x86, failed when AMD made 64-bit standard. Funny how Pentium 4s mysteriously started being 64-bit compatible. My hunch is Intel disabled it to make Itanium look better, then enabled the extra bits with a simple patch.
But x86 today translates the instructions to RISC internally anyway. PowerMacs used more RAM in the 1990s than Windows machines, though, because RISC CPUs generated more code, bloating things on disk and in RAM.
Re: (Score:2)
But x86 today translates the instructions to RISC internally anyway.
No, internally there is no CISC or RISC - just multiple stages of the CPU. The internal microcode is derived from the input CISC/RISC instruction and controls the hardware along the various stages of the pipeline. To suggest the CPU is RISC internally is erroneous because internally it is neither RISC nor CISC.
But PowerMacs used more RAM in the 1990s than Windows machines because RISC CPUs generated more code, bloating things on disk and in RAM.
A bit of a stretch considering the platforms ran different operating systems. Remember that we are talking about an OS (MacOS) that never even had real virtual memory. I would suggest that compil
Intel/AMD background (Score:2)
> There is a benefit: compatibility. Even Intel, which tried the impossible
> with HP to kill the x86, failed when AMD made 64-bit standard. Funny how
> Pentium 4s mysteriously started being 64-bit compatible. My hunch is Intel
> disabled it to make Itanium look better, then enabled the extra bits with a
> simple patch.
Yes, there was a conspiracy, but you've got it wrong.
* The original IBM PC ran on an 8088. This was an 8086, 16-bit real mode CPU, with an 8-bit bus. 16-bit peripherals were scarce back
Re: (Score:2)
That was before DLLs and .so shared object library files. Before then, every application available on the system had to be statically linked with every library that it used. X Windows/Motif applications using sockets, RPC, and multi-threading would be hundreds of megabytes in size. Once it was figured out that these libraries were being duplicated, the use of dynamic linking removed the bloat.
The problems usually don't lie in the CPU cores... (Score:2)
... CPU cores typically are publicly described in minute detail. After all, people need to directly write software for those...
Today the problem lies in proprietary hardware. Hardware for which you cannot write a decent driver, as there is no public documentation available. That's the problem with modern SoCs, and that's why the mobile operating system scene is so dead right now.
Forking the kernel is the wrong approach. (Score:2)
...there's already a fork of Linux for RISC-V.
Wrong approach.
The right approach is to get involved with the upstream kernel community and get the changes they need into the kernel. Forking just means it'll always be on the sideline.
Availability of a low cost SoC – a la the Raspberry Pi 3, Pine64, or EspressoBin – would be good too.
Re: (Score:2, Informative)
...there's already a fork of Linux for RISC-V.
Wrong approach.
The right approach is to take FreeBSD, whose upstream is already mature on RISC-V, so you don't have to go looking for patches.
Re: (Score:2)
Re: (Score:2)
Pretty much everything new starts as a fork, to some degree; it doesn't mean the plan is to maintain a fork - simply that the fork hasn't been merged into the kernel yet.
Getting the code merged into the kernel is a process, and will take time (several months at a minimum -- and probably longer, unless the code is magically perfect out of the gate). Are developers and designers supposed to sit on their thumbs until then?
As far as getting a low cost SBC - that's not even an apples-to-apples comparison.
Re: (Score:2)
Pretty much everything new starts as a fork, to some degree; it doesn't mean the plan is to maintain a fork - simply that the fork hasn't been merged into the kernel yet.
Another yeahthanx. Tell us something we don't know. How long has this fork been around already? How much is already upstream?
Getting the code merged into the kernel is a process, and will take time (several months at a minimum -- and probably longer, unless the code is magically perfect out of the gate). Are developers and designers supposed to sit on their thumbs until then?
Yeah. Guess what part of my job is. I know exactly what's involved and how long it takes. And I've seen plenty of kernel forks that are going nowhere. Ever. Because a) getting into the mainline kernel isn't even on their radar; or b) they want to wait until they're done and think it's perfect; or c) they're too embarrassed to show their work, piecewise, to kernel devs.
But once they've
SiFive (Score:2)
Re: (Score:2)
is this going to be like the $400 bag squeezer?
That was a useless product. This is an embeddable core for SoCs, which is a multi-billion-dollar industry.