Linux Now Has its First Open Source RISC-V Processor (designnews.com) 161
"SiFive has declared that 2018 will be the year of RISC V Linux processors," writes Design News. An anonymous reader quotes their report:
When it released its first open-source system on a chip, the Freedom Everywhere 310, last year, Silicon Valley startup SiFive was aiming to push the RISC-V architecture to transform the hardware industry in the way that Linux transformed the software industry. Now the company has delivered further on that promise with the release of the U54-MC Coreplex, the first RISC-V-based chip that supports Linux, Unix, and FreeBSD... This latest development has RISC-V enthusiasts particularly excited because now it opens up a whole new world of use cases for the architecture and paves the way for RISC-V processors to compete with ARM cores and similar offerings in the enterprise and consumer space...
"The U54 Coreplexes are great for companies looking to build SoC's around RISC-V," Andrew Waterman co-founder and chief engineer at SiFive, as well as the one of the co-creators of RISC-V, told Design News. "The forthcoming silicon is going to enable much better software development for RISC-V." Waterman said that, while SiFive had developed low-level software such as compilers for RISC-V the company really hopes that the open-source community will be taking a much broader role going forward and really pushing the technology forward. "No matter how big of a role we would want to have we can't make a dent," Waterman said. "But what we can do is make sure the army of engineers out there are empowered."
"The U54 Coreplexes are great for companies looking to build SoC's around RISC-V," Andrew Waterman co-founder and chief engineer at SiFive, as well as the one of the co-creators of RISC-V, told Design News. "The forthcoming silicon is going to enable much better software development for RISC-V." Waterman said that, while SiFive had developed low-level software such as compilers for RISC-V the company really hopes that the open-source community will be taking a much broader role going forward and really pushing the technology forward. "No matter how big of a role we would want to have we can't make a dent," Waterman said. "But what we can do is make sure the army of engineers out there are empowered."
For those of us that don't know (Score:1)
Re:For those of us that don't know (Score:5, Insightful)
Licensing costs
Re: For those of us that don't know (Score:2, Insightful)
Re: (Score:3)
So what kind of performance are we talking about? Are they equivalent to the latest and greatest (and thus most expensive licensing) ARMs? Or are we only running them at 25MHz on an FPGA? (And what kind of FPGA? Since there's a range from $10 FPGAs to $100,000 FPGAs).
Also
Re: (Score:3)
I'd guess it'll probably become ubiquitous in devices that are either very small or very large, but won't make much of a dent in the PC market (where x86 is already entrenched) or the tablet/phone market (where ARM is already entrenched). Kind of like how we've got the Linux kernel on supercomputers and servers, tablets, phones, watches, routers, etc., but not so much on PCs, where MS Windows was already entrenched.
I think I remember a video
Re: (Score:2)
Those costs are microscopic compared to the loss of sales from producing a CPU that doesn't run the operating systems and applications people actually want. Maybe once they get this thing running Android it might start to make sense, but I doubt it is competitive in terms of performance per watt.
Re: (Score:2)
Those costs are microscopic compared to the loss of sales from producing a CPU that doesn't run the operating systems and applications people actually want.
The first wave of RISC-V users had no intention of using it as a user-facing component. These days it's common for a SoC or a GPU to have its own orchestration/housekeeping CPU, and manufacturers would prefer to avoid ARM licensing costs for that. Nvidia is probably the highest-profile early user; a talk [youtube.com] by one of their engineers goes into quite some detail.
Re: (Score:2, Insightful)
A smaller instruction set makes assemblers and compilers easier to implement, and also makes it easier for anyone to check for bugs or abusable features (there are people and businesses that do not require or want ARM TrustZone, AMD PSP, or Intel AMT).
Licensing also matters a lot: it is easier to develop further without fear of litigation, and research groups can find and publish better reviews and recommendations without fear of being sued.
Re:For those of us that don't know (Score:5, Informative)
A smaller instruction set makes assemblers and compilers easier to implement
I'll give you assemblers (though assemblers are so trivial that there's little benefit from this), but not compilers. A big motivation for the original RISC revolution was that compilers were only using a tiny fraction of the microcoded instructions added to CISC chips and you could make the hardware a lot faster by throwing away all of the decoder logic required to support them. Compilers can always restrict themselves to a Turing-complete subset of any ISA.
RISC-V is very simple, but that's not always a good thing. For example, most modern architectures have a way of checking the carry flag for integer addition, which is important for things like constant-time crypto (or anything that uses big integer arithmetic) and also for automatic boxing for dynamic languages. RISC-V doesn't, which makes these operations a lot harder to implement. On x86 or ARM, you have direct access to the carry bit as a condition code.
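To make the carry point concrete, here is a minimal C sketch of double-word addition, the building block of big-integer arithmetic. On x86/ARM the second addition can consume the carry flag directly (adc/adcs); on RISC-V the compiler has to materialise the carry with an extra compare:

    /* Minimal sketch: adding two 128-bit values held as 64-bit limbs. */
    #include <stdint.h>

    void add128(uint64_t a_lo, uint64_t a_hi,
                uint64_t b_lo, uint64_t b_hi,
                uint64_t *r_lo, uint64_t *r_hi)
    {
        uint64_t lo = a_lo + b_lo;
        uint64_t carry = lo < a_lo;  /* x86/ARM: free via the flags; RISC-V: an extra sltu */
        *r_lo = lo;
        *r_hi = a_hi + b_hi + carry;
    }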
Similarly, RISC-V lacks a conditional move / select instruction. Krste and I have had some very long arguments about this. Two years ago, I had a student add a conditional move to RISC-V and demonstrate that, for an in-order pipeline, you get around a 20% speedup from an area overhead of under 1%. You can get the same speedup by (roughly) quadrupling the amount of branch predictor state. Krste's objection to conditional move comes from the Alpha, where the conditional move was the only instruction requiring three read ports on the register file. On in-order systems, this is very cheap. On superscalar out-of-order implementations, you effectively get it for free from your register rename engine (executing a conditional move is a register rename operation). On in-order superscalar designs without register renaming, it's a bit painful, but that's a weird space (no ARM chips are in this window anymore, for example). Krste's counter argument is that you can do micro-op fusion on the high-end parts to spot the conditional-branch-move sequence, but that complicates decoder logic (ARM avoids micro-op fusion because of the power cost).
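As a rough illustration of what is being argued about, here is the kind of branchless select that a conditional-move instruction reduces to a single cmov/csel; without one, a compiler must either branch (and risk a misprediction) or synthesise the select from masks, as in this sketch:

    /* Branchless select: one cmov/csel where the ISA has it,
       a mask dance or a branch where it doesn't. */
    #include <stdint.h>

    uint64_t select_u64(int cond, uint64_t a, uint64_t b)
    {
        uint64_t mask = -(uint64_t)(cond != 0);  /* all-ones if cond, else zero */
        return (a & mask) | (b & ~mask);
    }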
Most of the other instructions in modern ISAs are there for a reason. For example, ARMv7 and ARMv8 have a rich set of bitfield insert and extract instructions. These are rarely used, but they are used in a few critical paths that have a big impact on overall performance. The scaled addressing modes on RISC-V initially look like a good way of saving opcode space, but unfortunately they preclude a common optimisation in dynamic languages, where you use the low bit to differentiate pointers from integers. If you set the low bit in valid pointers, then you can fold the -1 into your conventional loads. For example, if you want to load the field at offset 8 in an object, you do a load with an immediate offset 7. In RISC-V, a 32-bit load must have an immediate that's a multiple of 4, so this is not possible and you end up requiring an extra arithmetic instruction (and, often, an extra register) for each object / method pair.
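For readers unfamiliar with the low-bit tagging trick, here is a hedged C sketch of the pattern described above; the names are invented for illustration and not taken from any particular runtime:

    /* Low-bit pointer tagging: integers have the tag bit clear, object
       pointers have it set. With unscaled load immediates the untag (-1)
       folds into the load offset; with scaled immediates it needs an
       explicit subtract (and often an extra register) first. */
    #include <stdint.h>

    typedef uintptr_t value_t;

    static inline int is_object(value_t v) { return (int)(v & 1); }

    static inline uint64_t load_field(value_t obj, int byte_offset)
    {
        /* field at byte offset 8 => a single load at (obj + 7),
           on ISAs whose load immediates allow any byte offset */
        return *(const uint64_t *)((const char *)obj + byte_offset - 1);
    }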
At a higher level, the lack of instruction cache coherency between cores makes JITs very inefficient on multicore RISC-V. Every time you generate code, you must do a system call, the OS must send an IPI to every core, and then run the i-cache invalidate instruction. All other modern instruction sets require this to be piggybacked on the normal cache coherency logic (where it's a few orders of magnitude cheaper). SPARC was the last holdout, but Java running far faster on x86 than SPARC put pressure on them to change.
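In portable code this cost hides behind a single builtin; a minimal sketch of a JIT publishing freshly generated code (assuming buf already points at writable and executable memory, e.g. from mmap) looks like this:

    /* After writing instructions, the i-cache must be synchronised with
       the d-cache. GCC/Clang expose this as __builtin___clear_cache; on
       multicore RISC-V Linux it ends up as a syscall plus IPIs, while on
       x86 it is essentially free. */
    #include <string.h>

    typedef int (*jit_fn)(void);

    jit_fn publish_code(void *buf, const void *code, size_t len)
    {
        memcpy(buf, code, len);                       /* write the instructions */
        __builtin___clear_cache((char *)buf, (char *)buf + len);
        return (jit_fn)buf;
    }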
Licensing also matters a lot
This is true, but not in the way that you think. Companies don't pay an ARM license because they like giving ARM money, they pay an ARM license because it buys them entry into the ARM ecosystem. Apple spends a lot of money developing ARM compilers, but they spend a lot less money developing ARM compilers than the rest of the ARM ecosystem.
Re: (Score:2)
TheRaven, this is the best post I've read in a very long time. Wish I had mod-points.
Re: (Score:2)
Similarly, RISC-V lacks a conditional move / select instruction. [...] On superscalar out-of-order implementations, you effectively get it for free from your register rename engine (executing a conditional move is a register rename operation).
Exactly how do you expect conditional moves to be executed at the renaming stage? They are conditional, which means either one has to have the condition ready at the rename stage (extremely unlikely) or one has to speculate. To speculate one has to have a predictor, a way to roll back the operation (and its dependents), and tracking logic. This isn't free. One would also need to verify the prediction, so some kind of operation has to be executed*.
Just using branches instead would at worst add a few cycles of misprediction penalty.
Re:For those of us that don't know (Score:5, Informative)
Exactly how do you expect conditional moves to be executed at the renaming stage?
The conventional way is to enqueue the operation just as you do any other operation that has not-yet-ready dependencies. When the condition is known, the rename logic collapses the two candidate rename registers into a single one and forwards this to the pipeline. Variations of this technique are used in most mainstream superscalar cores. The rename engine is already one of the most complex bits of logic in your CPU; supporting conditional moves adds very little extra complexity and gives a huge boost to code density.
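A very rough software model of what "collapsing the two candidate rename registers" means (purely illustrative, with invented names; real rename hardware is far more involved):

    /* The conditional move waits with *two* source tags; resolving it is
       just repointing the destination's rename-table entry at the winning
       physical register -- no ALU work at all. */
    #include <stdbool.h>

    typedef struct {
        int dst_arch;       /* architectural destination register */
        int phys_if_true;   /* physical tag if the condition is true  */
        int phys_if_false;  /* physical tag if the condition is false */
    } pending_cmov;

    void resolve_cmov(int rename_table[], pending_cmov op, bool cond)
    {
        rename_table[op.dst_arch] = cond ? op.phys_if_true : op.phys_if_false;
    }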
This is a disadvantage if one expects that all processors are the same and that code optimized for one ISA (and likely one microarchitecture) should run well on other ISAs. Really bad.
If you come along with a new ISA and say 'oh, so you've spent the last 30 years working out how to optimise this category of languages? That's nice, but those techniques won't work with our ISA' then you'd better have a compelling alternative.
That isn't the only way to solve that problem, in fact that sounds like a very bad design.
It is on RISC-V. For the J extension, we'll probably mandate coherent i-caches, because that's the only sane way of solving this problem. Lazy updates or indirection don't help here, unless you want to add a conditional i-cache flush on every branch, and even that would break on-stack replacement (deoptimisation), where there is not always a branch in the old code but there is in the new code, and it is essential for correctness that you run the new code and not the old.
MIPS was killed?
Yes. It's still hanging on a bit at the low end, mostly in routers, where some vendors have ancient licenses and don't care that power and performance both suck in comparison to newer cores. It's dead at the high end - Cavium was the last vendor doing decent new designs and they've moved entirely to ARMv8. ImgTec tried to get people interested in MIPSr6, but the only thing that MIPS had going for it was the ability to run legacy MIPS code, and MIPSr6 wasn't backwards compatible.
Custom instruction support is a requirement for a subset of the market and it doesn't cause any problem
Really? ARM seems to be doing very well without it. And ARM partners seem to do very well being able to put their own specialised cores in SoCs, but have a common ARM ISA driving them. ARM was just bought by Softbank for $32bn; meanwhile, all of the surviving bits of MIPS were just sold off by a dying ImgTec for $65m. Which strategy do you think worked better?
Can't run the code from a microcontroller interfacing a custom LIDAR on the desktop computer? Who the fuck cares? Really?
How much does it cost you to validate the toolchain for that custom LIDAR? If it's the same toolchain that every other vendor's chip uses, not much. If it's a custom one that handles your special magic instructions, that cost goes up. And now your vendor can't upstream the changes to the compiler, because they break other implementations (as happened with almost all of the MIPS vendor GCC forks), so how much is it going to cost you if you have to back-port the library that you want to use in your LIDAR system from C++20 or C2x to the dialect supported by your vendor's compiler? All of these are the reasons that people abandoned MIPS.
Re: (Score:2)
I can tell you that the vendor I work for did add custom instructions to MIPS. Some were not difficult to deal with because MIPS reserved coprocessor 2 for just this reason, others are more complicated. We also have a very sizeable compiler and toolchain team which also has upstreamed most of the changes. With MIPS we were able to do some interesting extensions such as adding a lot more encryption and hashing algorithms, though in many cases these would not be used in most environments. We also added transa
Re: (Score:2)
The common design for a high-performance OoO core today is something like this:
(Warning! Very simplified!)
Fetch - Decode - Rename - Schedule - Execute - Retire
With a common register file for architectural and speculative data.
The Fetch stage requires the predicted next-instruction (chunk) address and produces raw instruction data for the decoders.
Decoders chop up instructions, and identify and extract fields including register specifiers.
The Rename stage allocates registers from the register file for all reg
Re: (Score:2)
I agree with much of what you said. I work at a company that designs its own CPUs from the ground up. We migrated in the last few years from multi-core 64-bit MIPS to ARMv8.x. We actually added a number of instructions to the MIPS standard including insert, extract and a host of atomic instructions and I can tell you that insert/extract are used quite extensively in the compiler once the proper tuning was added. Most of my work has been with the MIPS processors and I can tell you that, especially in embedde
Some notes on branch prediction vs conditional exe (Score:2)
First of all, if all you care about is single-issue non-superscalar with a relatively deep pipeline, conditional execution is probably a good idea in my experience due to the very low implementation cost. Especially if your branch prediction is lousy. However, if you are aiming for high-end systems conditional move may not be that big of a deal. See for example the following analysis from Linus Torvalds regarding cmov: http://yarchive.net/c [yarchive.net]
Re: (Score:2)
However, if you are aiming for high-end systems conditional move may not be that big of a deal. See for example the following analysis from Linus Torvalds regarding cmov
The problem with Torvalds' analysis (which is otherwise pretty good and worth reading) is that it only looks at local effects. The problem with branches is not that they're individually expensive, it's that each one makes all of them slightly more expensive. A toy branch predictor is basically a record of what happened at each branch, to hint what to do next time. Modern predictors use a variety of different strategies (normally in parallel) with local state stored in something like a hash table and glob
Re:Should have added this on your SN post :) (Score:4, Interesting)
How do the J2/3/4 open source SuperH designs compare?
I've not looked at SuperH in detail, so I can't really compare.
I seem to remember there were other pitfalls to their architecture, but getting a processor that is Management Engine (Aka Clipper+Palladium+TPM) free is a huge boon to the future of computer security
I disagree. A TPM, secure enclave, or equivalent, is increasingly vital for computer security. It is absolutely essential that you be able to write encryption keys into a coprocessor that will perform signing / signature verification / encryption / decryption, but which does not allow the keys to be exfiltrated. Anything less than this and a single OS-level compromise means that you need to reset every password and revoke every key that you've used on that machine.
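To make "write-only key storage" concrete, here is a hypothetical interface sketch (every name here is invented for illustration): keys can only go in, and only operations on them come out; there is deliberately no way to read a key back.

    /* Hypothetical secure-element interface: an OS-level compromise can
       ask the coprocessor to sign or decrypt, but can never extract the
       key material itself. */
    #include <stddef.h>
    #include <stdint.h>

    typedef int key_slot;

    key_slot enclave_key_import(const uint8_t *key, size_t key_len);  /* write-only */
    int enclave_sign(key_slot slot, const uint8_t *msg, size_t msg_len,
                     uint8_t *sig, size_t *sig_len);
    int enclave_decrypt(key_slot slot, const uint8_t *ct, size_t ct_len,
                        uint8_t *pt, size_t *pt_len);
    /* note: there is no enclave_key_export() -- that is the whole point */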
Having said all this: Is it perhaps time for a different CPU project, or a fork of RISC-V with these missing features added, at the risk of binary incompatibility, but to the benefit of performance and perhaps security?
There are lots of extensions to RISC-V, but the problem there is fragmentation. You need the A extension if you want to run a real OS. You probably need the C extension, because compilers are starting to default to using it. The M extension is useful, so people will probably start using it soon. Hardware floating point is expensive on low-end parts, so you're going to end up with some having F, some having D, and some having neither (this was a pain for OS support for ARM until recently - now ARM basically mandates floating point on anything that is likely to run an OS), and a few will support Q. L is unlikely to be used outside of COBOL and Java, so isn't too much of an issue (one is niche, the other is typically JIT'd so it doesn't matter too much if only some targets support it). And that's before there's any widely deployed silicon. Expect vendors to add their own special RISC-V instructions, making their own versions of toolchains and operating systems incompatible.
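One place this fragmentation becomes visible is in low-level C, since the GCC/Clang RISC-V ports predefine a macro per extension (macro names as in the upstream ports; exact availability varies by toolchain version). A sketch:

    /* Feature-testing the extension alphabet soup at compile time. */
    #if defined(__riscv)
    #  if !defined(__riscv_atomic)
    #    error "A extension required: no atomics, no real OS support"
    #  endif
    #  if defined(__riscv_flen) && __riscv_flen >= 64
         /* D present: hardware doubles available */
    #  endif
    #  if defined(__riscv_compressed)
         /* C present: 16-bit opcodes in the instruction stream */
    #  endif
    #endif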
RISC-V isn't the first project to try this. OpenRISC has been around for a lot longer, but RISC-V managed to get a lot more momentum. I don't think that a competing project would find it easy to get any of this. It remains to be seen whether this momentum can translate to a viable ecosystem.
Re: (Score:2)
This isn't true. When implementing a compiler, you want to have AVX, SSE instructions, etc., so that you can more easily optimize your code. A simple instruction set would mean fewer ways to optimize the code. The compiler can choose which instructions to use.
Writing a compiler for a Turing tarpit is more difficult: the smaller the instruction set, the more code the compiler has to emit to emulate things not implemented on the CPU.
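A standard example of that emulation cost: population count is a single instruction on x86 (popcnt) and on NEON, but on an ISA without one the compiler has to emit a whole bit-twiddling sequence, roughly:

    /* Classic branch-free popcount a compiler emits when the ISA
       offers no dedicated instruction. */
    #include <stdint.h>

    int popcount64(uint64_t x)
    {
        x = x - ((x >> 1) & 0x5555555555555555ULL);
        x = (x & 0x3333333333333333ULL) + ((x >> 2) & 0x3333333333333333ULL);
        x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
        return (int)((x * 0x0101010101010101ULL) >> 56);
    }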
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Licensing? If you're making your own electronic schematics, you don't need to license anything. As for the instruction set architecture itself, the ISA is basically a language, and it has been independently implemented without licensing, such as in BOCHS, since languages are not copyrightable. Since RISC-V is not using Intel schematics, it could have easily supported x86-64 without any licensing fees, with its own electronics implementation.
Re: (Score:1)
ARM is a RISC chip.
Re: (Score:1)
ARM is a RISC chip.
So is Intel, on the inside. :-)
Re: (Score:2)
Not really. The Pentium Pro could perhaps be called internally RISC as it executed simple 2-in, 1-out operations (though they were more complicated than normal RISC instructions). This is most easily visible in relatively simple instructions that have more than one input, like ADC.
Modern x86 chips execute complicated operations that are designed for x86 execution efficiency. Simplified compared to the (worst-case) x86 instructions? Absolutely - but far from any RISC design.
Re: (Score:2)
Re: (Score:2)
Actually, there really aren't CISC chips any more; the current x86_64, for example, really only emulates that instruction set with RISC and microcode.
Re: (Score:1)
RISC + microcode = CISC. Maybe not in terms of instruction set design; but certainly this is how old CISC-era CPUs worked. What really happens is more of a data flow processor. x86 instructions get rewritten on the fly. Sometimes there is fallback to microcode. It's more like a JVM JIT but at the hardware/CPU level.
Re: (Score:2)
X86 is CISC. Even if x86 chips were internally RISC with a translation layer they would still be CISC as the ISA is CISC. Implementation doesn't matter.
But x86 chips aren't RISC processors, just CISC processors using a simplified internal representation - a representation that is designed to execute efficiently and be a good fit for x86 instructions. They are still far more complicated than RISC instructions.
Re:For those of us that don't know (Score:5, Informative)
What's the big advantage with RISC over ARM or x86?
ARM is a RISC chip. Originally, ARM stood for Acorn RISC Machine.
The news here is not that it is RISC, but that it is open source.
So as long as you have your own fab, or a few million $ to rent one, you can make your own chips ... but the real advantage is that you can look at the design files and see for yourself that there are no backdoors.
Re:For those of us that don't know (Score:4, Insightful)
But you can't verify that the design you're looking at is what the plant actually implemented on the chip.
Re: (Score:2)
Way back in the mists of time I worked for a chip fab company. They bought some competitor chips, whipped the tops off them and examined them under a microscope.
Granted, you're unlikely to see a one-transistor change or something, but it's incredibly unlikely any change that actually does more than introduce some bugs is going to be that small. It's a tedious process though, and the chip you examine is waste afterwards, so you can only check a small subset, and even then you don't know if the one you chose
Re:For those of us that don't know (Score:4, Informative)
Way back in the mists of time
I guess that's the thing.
AFAIK these days dies have too small of a feature size for meaningful optical inspection (feature size way smaller than the wavelength of light), and dozens of layers from which, even if you could, you'd only see the topmost one, and simply way too many features to begin with.
Re: (Score:2)
The feature size today is in the lower range of the visible wavelength spectrum
According to wikipedia, the visible spectrum ends at around 390nm.
Intel is currently doing 10nm and planning to go down to 7nm next year.
Here's an example of what can be done: https://www.bunniestudios.com/... [bunniestudios.com]
Now that's pretty impressive. However, it says the best possible current resolution is 14.6nm, so it can't currently resolve state of the art dies.
Also those pretty pictures? Their scale is in micrometers, as shown on them.
Re: (Score:2)
You can. De-cap the chip, use a microscope to photograph it and computer vision software to compare it to your design files.
People have done it for older chips, e.g. decoding ROMs visually or just trying to figure out how something works. With a modern process you will need more expensive equipment due to everything being smaller, but it's far from impossible to do.
Re: (Score:2)
far from impossible
Are you sure about that? I agree that it is theoretically possible, but in practical terms, I believe it is impossible.
People have done it for older chips
Yep, and older chips in comparison are huge and have something like 2-3 interconnect layers. Modern chips have a tiny feature size and, on top of the silicon, a stack of >10 interconnect layers; your microscope will have a hard time looking through those (that is, provided these things could be optically inspected in the first place -- the wavelength of light currently is two orders of magnitude larger than the features).
Re: (Score:2)
Could you rephrase that in a way that doesn't suggest you just had an OCD attack? Preferably not before cooling down for a few hours. I have honestly no idea what you were even trying to say. You can't verify the design, but that's not a problem because you can't verify other designs either? Maybe?
Re: For those of us that don't know (Score:2)
RISC is generally considered "The Better Architecture" (TM). Of course that statement is super-broad, but truth be told, ARM was initially designed with lots of modern-day improvements in mind whilst x86 was made with a more "make it work and get it to mass market ASAP" approach. Hence the success of x86 despite ARM microcomputers being roughly two decades ahead back in the late '80s/early '90s.
ARM actually is the newer architecture, but the Acorn Archimedes was proprietary and closed, just like the Amiga back in the day.
Re: (Score:2)
There are no technical advantages. RISC is basically dead as a serious idea; most chips today are CISC with complex parallel instruction sets for math. So-called RISC instruction sets such as ARM are quite complex - as complex as x86 is, and certainly far more complex than an 8086.
It has been mentioned that there is no significant overhead in implementing x86 over other CPU ISAs. It's an old myth that doesn't hold water.
Licensing is cited as a reason for another ISA. I think that this if I am not mistaken applies
Re: (Score:2)
So, I don't see any logic in them inventing an incompatible ISA rather than just using x86.
That does not surprise me, as every claim you make in your post is wrong.
RISC is basically dead as a serious idea,
Wrong.
most chips today are CISC
Wrong. Wrong by chip type, and wrong by sold units.
Supports "Unix"? Which fucking one?! (Score:1)
When they say "Unix", which OS are they talking about? Solaris? AIX? HP-UX? macOS? UnixWare? OpenServer? One of the many other variants?
Seriously, how the fuck did that crap end up in the summary? Yes, I realize it's from the article, but EditorDavid should've seen that it's nonsensical and should have fixed up the summary before it ended up on the Slashdot front page!
Even timothy probably wouldn't have screwed up like this!
Re: (Score:2)
EditorDavid should've seen that it's nonsensical
LOLOL
Well Done ! (Score:3)
Quite an achievement !
It always amazes me that governments don't invest at this level; for example, the French military will avoid certain American tech but seem happy to pay an unauditable Intel corporation.
At least the European Space Agency made their own SPARC processor, but I've seen little other investment made with public money that might actually benefit the public and be verifiable by outsiders...
Re: (Score:2)
China has its own line of MIPS CPUs that are pretty competitive.
They are actually one of the few fully open platforms in existence, where everything is fully documented. Well, the masks used to make the silicon are not, but you can at least verify the operation of the CPU yourself to a larger extent, and you don't need binary blob microcode updates.
Re: (Score:2)
You mean the loongson chips? I've not been able to actually buy any of those chips (at least not the newer multi core variant)...
Re: (Score:2)
Yeah, those. They are hard to get hold of outside China. Seems like a trip to Guangzhou is required.
Comment removed (Score:5, Interesting)
Re: (Score:3)
Re: (Score:3)
This and much worse.
The chip that you get from the fab needs to correspond to the RTL that you sent.
The actual chip ROM that they program has to correspond to the ROM that you want.
The firmware programmed* onto any of the peripherals has to correspond to the firmware you want.
The compiler has to be known not to dynamically insert backdoors when compiling. And no, you cannot verify this by inspecting the compiler source [PDF] [cmu.edu].
* No, I'll recompile the open-source firmware and reprogram it. Besides the fact
Re: (Score:3)
Maybe after months of trying to create exactly reproducible builds (hint: it took Debian three frickin years [debian.org]), fighting with the fact that compilers randomize their optimizations [mindfab.net] (for good and unrelated reasons) and all that noise. That's what's required to get the same version of gcc to produce identical binaries of your regular compiler.
Then you'll figure out that you didn't really appreciate the full scope of the problem because the compiler is just one small place in the system. In fact, the original AC
I'd buy in a heartbeat if no IME or UEFI net stack (Score:1)
This company needs to (if they haven't already) get an international, non-government group of silicon and firmware security experts to do a full audit to ensure the architecture and reference designs contain no Intel ME or UEFI stuff and no undocumented instructions; no silicon- or BIOS-level network stack, no DMA memory access, and a fully-open BIOS. They would have a real comfy niche that neither Intel, AMD nor ARM (with their non-TrustZones) are now willing to fill.
Best get those designs hosted and fabbed ou
Re: (Score:1)
TrustZone is basically a masked boot ROM included in many ARM SoCs. As RISC-V isn't compatible with ARM, it isn't really applicable. The problem here is: can you trust that when you get the silicon, the first instruction you execute is really the first instruction? It's hard to say that an evil chip manufacturer doesn't put a small virtualization ROM in the chip at the bootstrap address. So even if you are writing
Re: (Score:2)
This is a rather tricky problem. It would take the likes of an organization like DARPA to solve it.
DARPA's SSITH program is attempting to address this, but I suspect that a complete solution is much bigger than a single DARPA program.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Trusted Foundries??? (Score:3, Interesting)
I am a huge supporter of open hardware projects, especially the ESA- and Oracle-supported OpenSPARC architectures.
https://en.m.wikipedia.org/wik... [wikipedia.org]
However, without a trusted silicon foundry to make chips without hardware backdoors, all of the vetting of the hardware design "source" RTL won't be enough to establish trust. Even running netlists in FPGAs won't be enough if you can't trust the FPGA manufacturer or the foundry that built it.
In the end, we as consumers are stuck without any truly secure hardware options, free of backdoors.
My advice: assume all processors have backdoors and select those designed and made in places that cannot be compelled by the country in which you live to provide backdoor access.
Re: (Score:1)
My advice: assume all processors have backdoors and select those designed and made in places that cannot be compelled by the country in which you live to provide backdoor access.
Here in Sweden the authorities aren't allowed to register your political opinions.
However, I assume Interpol is, and that our authorities co-operate with them... and they are ~everywhere.
Re: (Score:2)
I don't think you realize what Interpol is. It's mainly a system to enable some level of co-operation of police forces between different countries. Criminals don't care about borders after all...
Interpol doesn't really do anything but pass information.
Re: (Score:2)
Yep. But read about the "European Gendarmerie Force"...
Re: (Score:1)
Interpol doesn't really do anything but pass information.
So you mean they won't get information about Swedish residents anyway?
What made me wonder was that in a video, two(?) guys from NMR were stopped by the police, who were very sure they would go to Gothenburg, where NMR would have a demonstration.
That would to me suggest the police knew they had sympathies with them, or even had gathered intelligence about them going there, which is a political event.
In some way that would to me seem to suggest they do care about their political sympathies after all and since I
Re: (Score:1)
Which was illegal in Russia? Even though I was in Sweden?
Not sure Swedish police would arrest me for having an opinion in Sweden which is ok regardless of what Russia thought about it?
Here in Sweden you are free to have your opinion. Just not express it.
Re: (Score:2)
Baby steps, you can't go from completely closed to completely open in one step.
I believe it was this talk by Bunnie that discusses the usefulness and uselessness of where we are now, and how long a way there still is to get to something we trust:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
There are other effective mitigation strategies for potentially compromised hardware. For example, you could mix vendors with carefully controlled cross-domain access so that an exploit in one does not compromise others.
I'd love to see an open source security processor for this reason. It would be extremely valuable to have a crypto engine and secure key storage that you could trust. Unfortunately such things are also very difficult to design and fabricate.
Re: (Score:2)
Can someone please explain? (Score:2)
Both. They are compatible (Score:2)
It goes both ways. A manufacturer isn't going to release a new high-power CPU that can't run any operating system. The CPU needs to "support" (be compatible with) some operating system, and the company making the CPU will likely need to be involved with the first OS port.
The case of AMD64, aka x64, is a good example. Before the CPU was actually produced, AMD made an emulator, then AMD and SuSE ported Linux to the new instruction set. By actually running Linux on the new instruction set they could identify
Skylake is more than x64. Applications will work (Score:2)
> But if Windows 7 is truly incompatible with Skylake CPUs, then doesn't that mean the Skylake CPU is not compatible with x64?
Applications written on / for Windows 7 should work on Skylake. The operating system needs to be concerned with more than just the instruction set. The OS has to support the buses in use (NVMe and USB3), the power management scheme, the boot process (UEFI, not BIOS), hardware interrupts, etc. A few manufacturers have tested some of their Skylake models with Windows 7.
Disables the ability to install and unknown things (Score:2)
> "Running Windows 7 disables or restricts these features:"
They don't know what all features in various updates may not work, and they don't want to figure that out for an officially obsolete operating system. One thing that often may not work is booting. Installing from an OEM disk probably won't work because it won't have the drivers for a USB3 keyboard and mouse. Power management will often not work - once the machine goes to sleep, it may not wake up again.
APPLICATION code generally is tied to a spec
Re: (Score:3)
Can this CPU be implemented on FPGA? (Score:2)
If not, the "open source" part does not mean a lot for most people.
Re: (Score:2)
Re: (Score:2)
Still interesting. Thanks. I think that eventually high-security computing will have to go that way, probably with master-checker pairs implemented in different FPGAs, or something like that on top of it.
Re: (Score:2)
Re: (Score:2)
Do you have a source of open source FPGAs? Most that I know of are very closed.
I'm not sure this is what the GP means. There are already several open-source CPU designs ready for FPGA implementation, for example at opencores [opencores.org].
It's a good point to keep in mind that a closed FPGA toolchain could introduce unintended features in your open-source CPU. However, it's basically the same issue as running Linux on an Intel processor -- the practical implementation is not fully open source, even if the original software is.
Re: (Score:2)
Indeed. Of course, a fully open tool-chain and a fully open FPGA would be better, but there are things you can do to prevent the tool-chain from messing with your design or to make it obvious. And I really doubt FPGA vendors would hide significant extra hardware in there on the off chance they can compromise CPUs.
Has it backdoors? (Score:3)
I'm not optimistic this CPU would be allowed to be mass-produced, since it appears it won't have any of the backdoors the Intel and AMD ones have.
Re: (Score:2)
I'm not optimistic this CPU would be allowed to be mass-produced, since it appears it won't have any of the backdoors the Intel and AMD ones have.
That’s the difference between implementation and specific implementations. Who is even to say there isn’t a secret instruction in a given implementation? The problem will end up being whether the open implementation provides enough value over a licensed technology like ARM.
Sometimes the open solution is more costly than the paid solution, so we will see what happens down the road.
Re: (Score:2)
That was meant to be ‘reference implementation’ vs ‘actual implementation’, but I failed in putting that.
First OpenSource ISA in mainline was LEON (Score:1)
This is not the first. That would have been LEON SPARC. There are a few others also, but the next 'actively maintained' might be J-Core (SH). RISC-V is interesting, I'm a big fan of open computing. But no, it's not a first.
But... (Score:2)
Not Open Source (Score:2)
CISC is, ahem, WAY better than RISC for ASM coders (Score:1)
1995 prophesized Risc will change everything (Score:2)
Risc architecture will change everything!
triple the speed of a pentium!
it even has a pci bus!
https://youtu.be/wPrUmViN_5c [youtu.be]
Re: (Score:1)
And interestingly enough, neither of you two can spell it.
Re: (Score:3)
2 Billion devices running Android, not to mention all the other cheap computing devices (IoT), isn't enough???
Re: (Score:2)
Even Microsoft is shipping Ubuntu with their Windows these days. I'd say it transformed the software industry quite a bit.
Re: (Score:2)
I'd expect someone with such an extreme opinion to at least try to argue for it.
Floats are generally enough. They require less hardware, and the hardware consumes less power per operation. Less memory bandwidth and less memory space are needed. IMO most uses of doubles are either to be safe or to try to compensate for a lack of analysis of the problem at hand.
But I can be even more extreme than you: only 128-bit fixed-point numbers should be allowed. Of course, stored in one's complement.
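A toy illustration of the trade-off being argued here, just a sketch to show the drift, not a benchmark:

    /* Accumulating ten million 0.1s: float drifts visibly, double much less. */
    #include <stdio.h>

    int main(void)
    {
        float  f = 0.0f;
        double d = 0.0;
        for (int i = 0; i < 10000000; i++) { f += 0.1f; d += 0.1; }
        printf("float: %f, double: %f (exact: 1000000.0)\n", (double)f, d);
        return 0;
    }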
Re: (Score:2)