Linux Now Has its First Open Source RISC-V Processor (designnews.com)

"SiFive has declared that 2018 will be the year of RISC V Linux processors," writes Design News. An anonymous reader quotes their report: When it released its first open-source system on a chip, the Freeform Everywhere 310, last year, Silicon Valley startup SiFive was aiming to push the RISC-V architecture to transform the hardware industry in the way that Linux transformed the software industry. Now the company has delivered further on that promise with the release of the U54-MC Coreplex, the first RISC-V-based chip that supports Linux, Unix, and FreeBSD... This latest development has RISC-V enthusiasts particularly excited because now it opens up a whole new world of use cases for the architecture and paves the way for RISC-V processors to compete with ARM cores and similar offerings in the enterprise and consumer space...

"The U54 Coreplexes are great for companies looking to build SoC's around RISC-V," Andrew Waterman co-founder and chief engineer at SiFive, as well as the one of the co-creators of RISC-V, told Design News. "The forthcoming silicon is going to enable much better software development for RISC-V." Waterman said that, while SiFive had developed low-level software such as compilers for RISC-V the company really hopes that the open-source community will be taking a much broader role going forward and really pushing the technology forward. "No matter how big of a role we would want to have we can't make a dent," Waterman said. "But what we can do is make sure the army of engineers out there are empowered."

Comments:

  • by Anonymous Coward
    What's the big advantage with RISC over ARM or x86? I'm especially curious as to the advantages with embedded devices, since that's what this seems to be geared towards.
    • by TheRealMindChild ( 743925 ) on Sunday October 08, 2017 @09:11PM (#55333421) Homepage Journal
      What's the big advantage with RISC over ARM or x86

      Licensing costs
      • Bingo. All the companies involved in making SoCs will be looking to cut out the ARM licensing fee. ARM typically takes $1-10 million up front plus 1-2% per chip, so you can see how their customers would be eager to keep that for themselves.
        • by tlhIngan ( 30335 )

          Bingo. All the companies involved in making SoCs will be looking to cut out the ARM licensing fee. ARM typically takes $1-10 million up front plus 1-2% per chip, so you can see how their customers would be eager to keep that for themselves.

          So what kind of performance are we talking about? Are they equivalent to the latest and greatest (and thus most expensive licensing) ARMs? Or are we only running them at 25MHz on an FPGA? (And what kind of FPGA? Since there's a range from $10 FPGAs to $100,000 FPGAs).

          Also

          • other than niche open-source hardware laptops... is there a market?

            I'd guess it'll probably become ubiquitous in devices that are either very small or very large, but won't make much of a dent in the PC market (where x86 is already entrenched) or the tablet/phone market (where ARM is already entrenched). Kind of like how we've got the Linux kernel on supercomputers and servers, tablets, phones, watches, routers, etc., but not so much on PCs, where MS Windows was already entrenched.

            I think I remember a vide

        • by AmiMoJo ( 196126 )

          Those costs are microscopic compared to the loss of sales from producing a CPU that doesn't run the operating systems and applications people actually want. Maybe once they get this thing running Android it might start to make sense, but I doubt it is competitive in terms of performance per watt.

          • by IAN ( 30 )

            Those costs are microscopic compared to the loss of sales from producing a CPU that doesn't run the operating systems and applications people actually want.

            The first wave of RISC-V users had no intention of making it a user-facing component. These days it's common for a SoC or a GPU to have its own orchestration/housekeeping CPU, and manufacturers would prefer to avoid the ARM licensing cost for that. Nvidia is probably the highest-profile early user; a talk [youtube.com] by one of their engineers goes into quite some detail.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        A smaller instruction set makes assemblers and compilers easier to implement, and it's also easier for anyone to check whether there is a bug or an abusable feature (there are people and businesses that do not require or want ARM TrustZone, AMD PSP, or Intel AMT).
        Licensing also matters a lot: it is easier to develop further without fear of litigation, and research groups can find and publish better reviews and recommendations without fear of being sued.

        • by TheRaven64 ( 641858 ) on Monday October 09, 2017 @04:43AM (#55334521) Journal

          A smaller instruction set makes assemblers and compilers easier to implement

          I'll give you assemblers (though assemblers are so trivial that there's little benefit from this), but not compilers. A big motivation for the original RISC revolution was that compilers were only using a tiny fraction of the microcoded instructions added to CISC chips and you could make the hardware a lot faster by throwing away all of the decoder logic required to support them. Compilers can always restrict themselves to a Turing-complete subset of any ISA.

          RISC-V is very simple, but that's not always a good thing. For example, most modern architectures have a way of checking the carry flag for integer addition, which is important for things like constant-time crypto (or anything that uses big integer arithmetic) and also for automatic boxing for dynamic languages. RISC-V doesn't, which makes these operations a lot harder to implement. On x86 or ARM, you have direct access to the carry bit as a condition code.
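
          To make the carry point concrete, here is a minimal C sketch (function name hypothetical) of one limb of big-integer addition. The carry has to be recovered with explicit comparisons, each of which costs an extra sltu instruction on RISC-V, whereas a flag-based ISA reads the bit straight out of the adder:

              #include <stdint.h>

              /* One limb of multi-word addition.  On x86/ARM the compiler can
                 use adc/adcs and take the carry from the flags; on RISC-V each
                 (x < y) comparison below becomes an extra sltu instruction. */
              static inline uint64_t add_limb(uint64_t a, uint64_t b,
                                              uint64_t carry_in, uint64_t *carry_out)
              {
                  uint64_t s = a + carry_in;
                  uint64_t c = (s < a);          /* carry out of the first add */
                  uint64_t r = s + b;
                  c += (r < s);                  /* carry out of the second add */
                  *carry_out = c;                /* always 0 or 1 */
                  return r;
              }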

          Similarly, RISC-V lacks a conditional move / select instruction. Krste and I have had some very long arguments about this. Two years ago, I had a student add a conditional move to RISC-V and demonstrate that, for an in-order pipeline, you get around a 20% speedup from an area overhead of under 1%. You can get the same speedup by (roughly) quadrupling the amount of branch predictor state. Krste's objection to conditional move comes from the Alpha, where the conditional move was the only instruction requiring three read ports on the register file. On in-order systems, this is very cheap. On superscalar out-of-order implementations, you effectively get it for free from your register rename engine (executing a conditional move is a register rename operation). On in-order superscalar designs without register renaming, it's a bit painful, but that's a weird space (no ARM chips are in this window anymore, for example). Krste's counter argument is that you can do micro-op fusion on the high-end parts to spot the conditional-branch-move sequence, but that complicates decoder logic (ARM avoids micro-op fusion because of the power cost).
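
          To illustrate what is at stake with conditional move: the select below compiles to a single cmov on x86 or csel on AArch64, while without such an instruction the compiler's options are a branch or the mask sequence sketched here (a minimal, hypothetical example):

              #include <stdint.h>

              /* Branchless select.  With a conditional move this is one
                 instruction; without one it is either a branch (predictor
                 pressure) or this multi-instruction mask dance. */
              static inline uint64_t select_u64(int cond, uint64_t a, uint64_t b)
              {
                  uint64_t mask = -(uint64_t)(cond != 0);   /* all ones or all zeros */
                  return (a & mask) | (b & ~mask);
              }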

          Most of the other instructions in modern ISAs are there for a reason. For example, ARMv7 and ARMv8 have a rich set of bitfield insert and extract instructions. These are rarely used, but they are used in a few critical paths that have a big impact on overall performance. The scaled addressing modes on RISC-V initially look like a good way of saving opcode space, but unfortunately they preclude a common optimisation in dynamic languages, where you use the low bit to differentiate pointers from integers. If you set the low bit in valid pointers, then you can fold the -1 into your conventional loads. For example, if you want to load the field at offset 8 in an object, you do a load with an immediate offset 7. In RISC-V, a 32-bit load must have an immediate that's a multiple of 4, so this is not possible and you end up requiring an extra arithmetic instruction (and, often, an extra register) for each object / method pair.
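
          A sketch of the low-bit tagging trick described above (the object layout is hypothetical): with byte-granularity load offsets, the untagging folds into the load as a single access at offset 7; if load immediates must be multiples of the access size, the subtraction stays a separate instruction:

              #include <stdint.h>

              /* Dynamic-language value: bit 0 set marks a heap pointer,
                 bit 0 clear marks a small integer (hypothetical layout). */
              typedef struct Obj { uint64_t header; uint64_t field; } Obj;

              static inline uint64_t load_field(uint64_t v)
              {
                  Obj *p = (Obj *)(v - 1);   /* strip the tag bit */
                  return p->field;           /* field sits at offset 8, so this is
                                                one load at offset 7 on an ISA with
                                                byte-granularity immediates */
              }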

          At a higher level, the lack of instruction cache coherency between cores makes JITs very inefficient on multicore RISC-V. Every time you generate code, you must do a system call, the OS must send an IPI to every core, and then run the i-cache invalidate instruction. All other modern instruction sets require this to be piggybacked on the normal cache coherency logic (where it's a few orders of magnitude cheaper). SPARC was the last holdout, but Java running far faster on x86 than SPARC put pressure on them to change.
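
          For a sense of where that cost lands, every JIT has a publish step along these lines (a sketch using the GCC/Clang builtin): the builtin is a no-op on x86's coherent i-cache and comparatively cheap on ARM, but on multicore RISC-V it implies the syscall-plus-IPI dance described above:

              #include <stddef.h>

              /* After writing freshly generated machine code into buf, make it
                 visible to instruction fetch before jumping to it. */
              static void publish_code(char *buf, size_t len)
              {
                  __builtin___clear_cache(buf, buf + len);
              }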

          Licensing also matters a lot

          This is true, but not in the way that you think. Companies don't pay an ARM license because they like giving ARM money, they pay an ARM license because it buys them entry into the ARM ecosystem. Apple spends a lot of money developing ARM compilers, but they spend a lot less money developing ARM compilers than the rest of the ARM

          • by AmiMoJo ( 196126 )

            Thanks Raven, this is the best post I've read in a very long time. Wish I had mod points.

          • by Megol ( 3135005 )

            Similarly, RISC-V lacks a conditional move / select instruction. Krste and I have had some very long arguments about this. Two years ago, I had a student add a conditional move to RISC-V and demonstrate that, for an in-order pipeline, you get around a 20% speedup from an area overhead of under 1%. You can get the same speedup by (roughly) quadrupling the amount of branch predictor state. Krste's objection to conditional move comes from the Alpha, where the conditional move was the only instruction requiring three read ports on the register file. On in-order systems, this is very cheap. On superscalar out-of-order implementations, you effectively get it for free from your register rename engine (executing a conditional move is a register rename operation). On in-order superscalar designs without register renaming, it's a bit painful, but that's a weird space (no ARM chips are in this window anymore, for example). Krste's counter argument is that you can do micro-op fusion on the high-end parts to spot the conditional-branch-move sequence, but that complicates decoder logic (ARM avoids micro-op fusion because of the power cost).

            Exactly how do you expect conditional moves to be executed at the renaming stage? They are conditional, which means either one has to have the condition ready at the rename stage (extremely unlikely) or one has to speculate. To speculate, one has to have a predictor, a way to roll back the operation (and dependents), and tracking logic. This isn't free. One would also need to verify the prediction, so some kind of operation has to be executed*.

            Just using branches instead would at worst add a few cycles of mi

            • by TheRaven64 ( 641858 ) on Monday October 09, 2017 @10:50AM (#55336061) Journal

              Exactly how do you expect conditional moves to be executed at the renaming stage?

              The conventional way is to enqueue the operation just as you do any other operation that has not-yet-ready dependencies. When the condition is known, the rename logic collapses the two candidate rename registers into a single one and forwards this to the pipeline. Variations of this technique are used in most mainstream superscalar cores. The rename engine is already one of the most complex bits of logic in your CPU; supporting conditional moves adds very little extra complexity and gives a huge boost to code density.

              This is a disadvantage if one expects that all processors are the same and expects that code optimized for one ISA (and likely microarchitecture) should run well on other ISAs. Really bad.

              If you come along with a new ISA and say 'oh, so you've spent the last 30 years working out how to optimise this category of languages? That's nice, but those techniques won't work with our ISA' then you'd better have a compelling alternative.

              That isn't the only way to solve that problem; in fact, that sounds like a very bad design.

              It is on RISC-V. For the J extension, we'll probably mandate coherent i-caches, because that's the only sane way of solving this problem. Lazy updates or indirection don't help here, unless you want to add a conditional i-cache flush on every branch, and even that would break on-stack replacement (deoptimisation), where there is not always a branch in the old code, but there is in the new code, and it is essential for correctness that you run the new code and not the old.

              MIPS was killed?

              Yes. It's still hanging on a bit at the low end, mostly in routers, where some vendors have ancient licenses and don't care that power and performance both suck in comparison to newer cores. It's dead at the high end - Cavium was the last vendor doing decent new designs and they've moved entirely to ARMv8. ImgTec tried to get people interested in MIPSr6, but the only thing that MIPS had going for it was the ability to run legacy MIPS code, and MIPSr6 wasn't backwards compatible.

              Custom instruction support is a requirement for a subset of the market and it doesn't cause any problem

              Really? ARM seems to be doing very well without it. And ARM partners seem to do very well being able to put their own specialised cores in SoCs, but have a common ARM ISA driving them. ARM was just bought by Softbank for $32bn; meanwhile, all of the surviving bits of MIPS were just sold off by a dying ImgTec for $65m. Which strategy do you think worked better?

              Can't run the code from a microcontroller interfacing a custom LIDAR on the desktop computer? Who the fuck cares? Really?

              How much does it cost you to validate the toolchain for that custom LIDAR? If it's the same toolchain that every other vendor's chip uses, not much. If it's a custom one that handles your special magic instructions, that cost goes up. And now your vendor can't upstream the changes to the compiler, because they break other implementations (as happened with almost all of the MIPS vendor GCC forks), so how much is it going to cost you if you have to back-port the library that you want to use in your LIDAR system from C++20 or C21 to the dialect supported by your vendor's compiler? All of these are the reasons that people abandoned MIPS.

              • by AaronW ( 33736 )

                I can tell you that the vendor I work for did add custom instructions to MIPS. Some were not difficult to deal with because MIPS reserved coprocessor 2 for just this reason; others are more complicated. We also have a very sizeable compiler and toolchain team, which has upstreamed most of the changes. With MIPS we were able to do some interesting extensions, such as adding a lot more encryption and hashing algorithms, though in many cases these would not be used in most environments. We also added transa

              • by Megol ( 3135005 )

                The common design for a high-performance OoO core today is something like this:
                (Warning! Very simplified!)

                Fetch - Decode - Rename - Schedule - Execute - Retire

                With a common register file for architectural and speculative data.

                The Fetch stage requires the predicted next instruction (chunk) address and produces raw instruction data for the decoders.
                Decoders chop up instructions, identifying and extracting fields including register specifiers.
                The Rename stage allocates registers from the register file for all reg

          • by AaronW ( 33736 )

            I agree with much of what you said. I work at a company that designs its own CPUs from the ground up. We migrated in the last few years from multi-core 64-bit MIPS to ARMv8.x. We actually added a number of instructions to the MIPS standard, including insert, extract, and a host of atomic instructions, and I can tell you that insert/extract are used quite extensively by the compiler once the proper tuning was added. Most of my work has been with the MIPS processors and I can tell you that, especially in embedde

          • Just wanted to chime in with some notes on conditional execution:

            First of all, if all you care about is a single-issue, non-superscalar design with a relatively deep pipeline, conditional execution is probably a good idea in my experience, due to the very low implementation cost. Especially if your branch prediction is lousy. However, if you are aiming for high-end systems, conditional move may not be that big of a deal. See for example the following analysis from Linus Torvalds regarding cmov: http://yarchive.net/c [yarchive.net]
            • However, if you are aiming for high-end systems conditional move may not be that big of a deal. See for example the following analysis from Linus Torvalds regarding cmov

              The problem with Torvalds' analysis (which is otherwise pretty good and worth reading) is that it only looks at local effects. The problem with branches is not that they're individually expensive, it's that each one makes all of them slightly more expensive. A toy branch predictor is basically a record of what happened at each branch, to hint what to do next time. Modern predictors use a variety of different strategies (normally in parallel) with local state stored in something like a hash table and glob

        • This isn't true. When implementing a compiler, you want to have AVX, SSE instructions, etc., so that you can more easily optimize your code. A simple instruction set would mean fewer ways to optimize the code. The compiler can choose which instructions to use.

          Writing a compiler for a Turing tarpit is more difficult: the smaller the instruction set, the more code the compiler has to emit to emulate things not implemented on the CPU.
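
          For example (a sketch; __mulsi3 is the usual libgcc helper): on a core with no hardware multiply, such as RV32I without the M extension, even this one-liner is lowered to a library call rather than a single instruction:

              #include <stdint.h>

              /* With no multiply instruction available, the compiler emits a
                 call to a support routine (e.g. __mulsi3 in libgcc) instead of
                 one mul instruction. */
              uint32_t scale(uint32_t a, uint32_t b)
              {
                  return a * b;
              }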

      • Someone needs to publish instructions now on how to DIY fab and we're all set. I did this a billion years ago in college, but with much simpler devices.
      • Also, cleaner design, presumably. Good for compiler writers, perhaps?
      • Licensing? If you're making your own electronic schematics, you don't need to license anything. As for the instruction set architecture itself, the ISA is basically a language, and it has been independently implemented without licensing, as in Bochs, since languages are not copyrightable. Since RISC-V is not using Intel schematics, it could have easily supported x86-64 without any licensing fees, with its own electronics implementation.

    • by Anonymous Coward

      ARM is a RISC chip.

      • by Anonymous Coward

        ARM is a RISC chip.

        So is Intel, on the inside. :-)

        • by Megol ( 3135005 )

          Not really. The Pentium Pro could perhaps be called internally RISC, as it executed simple 2-in, 1-out operations (though they were more complicated than normal RISC instructions). This is most easily visible in relatively simple instructions that have more than one input, like ADC.

          Modern x86 chips execute complicated operations that are designed for x86 execution efficiency. Simplified compared to the (worst-case) x86 instructions? Absolutely - but far from any RISC design.

    • by I75BJC ( 4590021 )
      Reduced Instruction Set Computer: a set of attributes that allows a lower cycles-per-instruction (CPI) count than a complex instruction set computer (CISC) (Wikipedia). Intel's Core chips are CISC. RISC chips can do more work at the same clock speed than CISC chips, so RISC chips are cheaper to run. All the older UNIX boxes used RISC chips and were powerhouses.
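
      For context, the standard first-order performance identity behind CPI arguments is:

          time = instruction_count x CPI x clock_period

      The classic RISC bet was that a higher instruction count would be more than repaid by a lower CPI and a shorter clock period.
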
      • Actually, there really aren't CISC chips any more; the current x86_64 chips, for example, really only emulate that instruction set with RISC and microcode.

        • by Anonymous Coward

          RISC + microcode = CISC. Maybe not in terms of instruction set design, but certainly this is how old CISC-era CPUs worked. What really happens is more of a dataflow processor: x86 instructions get rewritten on the fly, with occasional fallback to microcode. It's more like a JVM JIT, but at the hardware/CPU level.

        • by Megol ( 3135005 )

          x86 is CISC. Even if x86 chips were internally RISC with a translation layer, they would still be CISC, as the ISA is CISC. Implementation doesn't matter.

          But x86 chips aren't RISC processors, just CISC processors using a simplified internal representation - a representation that is designed to execute effectively and be a good fit for x86 instructions. The internal operations are still far more complicated than RISC instructions.

    • by ShanghaiBill ( 739463 ) on Sunday October 08, 2017 @10:16PM (#55333593)

      What's the big advantage with RISC over ARM or x86?

      ARM is a RISC chip. Originally, ARM stood for Acorn RISC Machine.

      The news here is not that it is RISC, but that it is open source.

      So as long as you have your own fab, or a few million $ to rent one, you can make your own chips ... but the real advantage is that you can look at the design files and see for yourself that there are no backdoors.

      • by fisted ( 2295862 ) on Monday October 09, 2017 @02:38AM (#55334293)

        But you can't verify that the design you're looking at is what the plant actually implemented on the chip.

        • Way back in the mists of time I worked for a chip fab company. They bought some competitor chips, whipped the tops off them and examined them under a microscope.

          Granted, you're unlikely to see a one-transistor change or something, but it's incredibly unlikely any change that actually does more than introduce some bugs is going to be that small. It's a tedious process though, and the chip you examine is waste afterwards, so you can only check a small subset, and even then you don't know if the one you chose

          • by fisted ( 2295862 ) on Monday October 09, 2017 @08:07AM (#55335063)

            Way back in the mists of time

            I guess that's the thing.

            AFAIK these days dies have too small a feature size for meaningful optical inspection (feature sizes way smaller than the wavelength of light), dozens of layers of which you'd only see the topmost one even if you could, and simply way too many features to begin with.

        • by AmiMoJo ( 196126 )

          You can. De-cap the chip, use a microscope to photograph it and computer vision software to compare it to your design files.

          People have done it for older chips, e.g. decoding ROMs visually or just trying to figure out how something works. With a modern process you will need more expensive equipment due to everything being smaller, but it's far from impossible to do.

          • by fisted ( 2295862 )

            far from impossible

            Are you sure about that? I agree that it is theoretically possible, but in practical terms, I believe it is impossible.

            People have done it for older chips

            Yep, and older chips in comparison are huge and have something like 2-3 interconnect layers. Modern chips have a tiny feature size and, on top of the silicon, a stack of >10 interconnect layers; your microscope will have a hard time looking through those (that is, provided these things could be optically inspected in the first place -- the wavelength of light currently is two orders of magnitude

    • RISC is generally considered "The Better Architecture" (TM). Of course that statement is super-broad, but truth be told, ARM was initially designed with lots of modern-day improvements in mind, whilst x86 was made with a more "make it work and get it to mass market ASAP" approach. Hence the success of x86 despite ARM microcomputers being roughly two decades ahead back in the late '80s/early '90s.
      ARM is actually the newer architecture, but the Acorn Archimedes was proprietary and closed, just like the Amiga bac

    • There are no technical advantages. RISC is basically dead as a serious idea; most chips today are CISC with complex parallel instruction sets for math. So-called RISC instruction sets such as ARM are quite complex - as complex as x86 is, certainly far more complex than an 8086.

      It has been mentioned that there is no significant overhead in implementing x86 over other CPU ISAs. The supposed overhead is an old myth that doesn't hold water.

      Licensing is cited as a reason for another ISA. I think that this, if I am not mistaken, applies

      • So, I don't see any logic in them inventing an incompatible ISA rather than just using x86.
        That does not surprise me, as every claim you make in your post is wrong.

        RISC is basically dead as a serious idea,
        Wrong.
        most chips today are CISC
        Wrong. Wrong by chip type, and wrong by sold units.

  • by Anonymous Coward

    ... supports Linux, Unix, and FreeBSD. ...

    When they say "Unix", which OS are they talking about? Solaris? AIX? HP-UX? macOS? UnixWare? OpenServer? One of the many other variants?

    Seriously, how the fuck did that crap end up in the summary? Yes, I realize it's from the article, but EditorDavid should've seen that it's nonsensical and should have fixed up the summary before it ended up on the Slashdot front page!

    Even timothy probably wouldn't have screwed up like this!

  • by johnjones ( 14274 ) on Sunday October 08, 2017 @09:02PM (#55333399) Homepage Journal

    Quite an achievement!

    It always amazes me that governments don't invest at this level; for example, the French military will avoid certain American tech but seems happy to pay an unauditable Intel corporation.

    At least the European Space Agency made their own SPARC processor, but I've seen few other investments made with public money that might actually benefit the public and be verifiable by outsiders...

    • by AmiMoJo ( 196126 )

      China has its own line of MIPS CPUs that are pretty competitive.

      They are actually one of the few fully open platforms in existence, where everything is fully documented. Well, the masks used to make the silicon are not, but you can at least verify the operation of the CPU yourself to a large extent, and you don't need binary-blob microcode updates.

      • by Bert64 ( 520050 )

        You mean the Loongson chips? I've not been able to actually buy any of those chips (at least not the newer multi-core variants)...

        • by AmiMoJo ( 196126 )

          Yeah, those. They are hard to get hold of outside China. Seems like a trip to Guangzhou is required.

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Sunday October 08, 2017 @09:14PM (#55333431)
    Comment removed based on user account deletion
    • No they can't and the DARPA SSITH program (yes, DARPA sometimes names projects with Star Wars references) is explicitly intended to try to address this problem. At present, unless you not only run your own fab, but also build your own equipment and don't license things from the likes of Cadence, you have no guarantees that the thing that you get back doesn't have secret vulnerabilities introduced. Trying to verify that the chip you get back corresponds to the RTL that you sent to the fab is a very hard pr
      • This and much worse.

        The chip that you get from the fab needs to correspond to the RTL that you sent.

        The actual chip ROM that they program has to correspond to the ROM that you want.

        The firmware programmed* onto any of the peripherals has to correspond to the firmware you want.

        The compiler has to be known not to dynamically insert backdoors when compiling. And no, you cannot verify this by inspecting the compiler source [PDF] [cmu.edu].

        * No, I'll recompile the open-source firmware and reprogram it. Besides the fact

  • This company needs to (if they haven't already) get an international, non-government group of silicon and firmware security experts to do a full audit to ensure the architecture and reference designs contain no Intel ME or UEFI stuff and no undocumented instructions; no silicon- or BIOS-level network stack, no DMA memory access, and a fully open BIOS. They would have a real comfy niche that neither Intel, AMD nor ARM (with their non-TrustZones) are now willing to fill.

    Best get those designs hosted and fabbed ou

    • Comment removed based on user account deletion
      • The guy in the office diagonally across from me has one on his desk, not sure if you count that as 'real evidence'. They were the boards that ARM was selling as early developer systems for ARMv8. I've not seen much evidence of AMD trying to make them into a large-scale product.
  • Trusted Foundries??? (Score:3, Interesting)

    by ad454 ( 325846 ) on Sunday October 08, 2017 @09:21PM (#55333445) Journal

    I am a huge supporter of open hardware projects, especially the ESA- and Oracle-supported OpenSPARC architectures.

    https://en.m.wikipedia.org/wik... [wikipedia.org]

    However without a trusted silicon foundry to make chips without hardware back doors, all of the vetting of the hardware design "source" RTL won't be enough to establish trust. Even running netlists in FPGAs won't be enough if you can't trust the FPGA manufacturer or the foundry that built it.

    In the end, we as consumers are stuck without any truly secure hardware options, free of backdoors.

    My advice: assume all processors have backdoors, and select those designed and made in places that cannot be compelled by the country in which you live to provide backdoor access.

    • by aliquis ( 678370 )

      My advice: assume all processors have backdoors, and select those designed and made in places that cannot be compelled by the country in which you live to provide backdoor access.

      Here in Sweden the authorities aren't allowed to register your political opinions.

      However, I assume Interpol does, and they co-operate with the Swedish authorities... And they are ~everywhere.

      • by Megol ( 3135005 )

        I don't think you realize what Interpol is. It's mainly a system to enable some level of co-operation of police forces between different countries. Criminals don't care about borders after all...

        Interpol doesn't really do anything but pass information.

        • Yep. But read about the "European Gendarmerie Force"...

        • by aliquis ( 678370 )

          Interpol doesn't really do anything but pass information.

          So you mean they won't get information about Swedish residents anyway?

          What made me wonder was a video in which two(?) guys from NMR were stopped by the police, who were very sure they would go to Gothenburg, where NMR would have a demonstration.

          That would suggest to me that the police knew they had sympathies with NMR, or had even gathered intelligence about them going there, and that is a political event.

          In some way that would seem to suggest they do care about political sympathies after all, and since I

    • by Lennie ( 16154 )

      Baby steps: you can't go from completely closed to completely open in one step.

      I believe it was this talk by Bunnie that discusses the usefulness and uselessness of where we are now, and how long a way there still is to something we can trust:

      https://www.youtube.com/watch?... [youtube.com]

    • by AmiMoJo ( 196126 )

      There are other effective mitigation strategies for potentially compromised hardware. For example, you could mix vendors with carefully controlled cross-domain access so that an exploit in one does not compromise others.

      I'd love to see an open source security processor for this reason. It would be extremely valuable to have a crypto engine and secure key storage that you could trust. Unfortunately such things are also very difficult to design and fabricate.

    • by tamyrlin ( 51 )
      If you are really really paranoid you could build your own processor using TTL logic (or perhaps CMOS logic may be better). It is not going to be very fast, but it is unlikely that the TTL chips are backdoored (and even if they are backdoored it is unlikely that the backdoor will be able to harm this system since the design of your processor is unlikely to be known by the vendor). The performance will of course not be good enough for running a web browser for example, but it could be good enough in many emb
  • The article says RISC-V supports Linux. I always assumed an operating system supports the processor, not the other way around.
    • It goes both ways. A manufacturer isn't going to release a new high-power CPU that can't run any operating system. The CPU needs to "support" (be compatible with) some operating system, and the company making the CPU will likely need to be involved with the first OS port.

      The case of AMD64, aka x64, is a good example. Before the CPU was actually produced, AMD made an emulator; then AMD and SuSE ported Linux to the new instruction set. By actually running Linux on the new instruction set they could identify

  • Can it run on an FPGA? If not, the "open source" part does not mean a lot for most people.

    • Not sure about this one, but the RISC-V Rocket cores can run in various FPGAs (as can the Sodor cores, which are more useful if you want to learn about computer architecture, but are a bit out of date in terms of conformance to the RISC-V spec). The lowRISC SoC includes the Rocket core and can also run in FPGA. The FreeBSD RISC-V bringup was done in a mixture of software emulator and lowRISC in FPGA.
      • by gweihir ( 88907 )

        Still interesting, thanks. I think that eventually high-security computing will have to go that way, probably with master-checker pairs implemented in different FPGAs, or something similar on top of that.

  • by UBfusion ( 1303959 ) on Sunday October 08, 2017 @10:47PM (#55333701)

    I'm not optimistic this CPU would be allowed to be mass-produced, since it appears it won't have any of the backdoors the Intel and AMD ones have.

      I'm not optimistic this CPU would be allowed to be mass-produced, since it appears it won't have any of the backdoors the Intel and AMD ones have.

      That's the difference between the open specification and specific implementations. Who is even to say there isn't a secret instruction in a given implementation? The problem will end up being whether the open implementation provides enough value over a licensed technology like ARM.

      Sometimes the open solution is more costly than the paid solution, so we will see what happens down the road.

  • by Anonymous Coward

    This is not the first. That would have been the LEON SPARC. There are a few others also, but the next 'actively maintained' one might be J-Core (SH). RISC-V is interesting, and I'm a big fan of open computing. But no, it's not a first.

  • ...does it run Windows ? (ducks...)
  • While the RISC-V ISA is open, the U54 is a closed design. So while having an open ISA is better than nothing, don't expect to be looking at the intricate details of how processors like the U54 are designed, as that part is strictly closed. It is unfortunate that semiconductor fabrication is so prohibitively expensive, leaving FPGAs as the only viable option for community designs.
  • RISC-V is cool insofar as it's free as in freedom. However, as an ASM programmer, I won't be touching it. As others have already pointed out more eloquently, the RISC idea was to make up for the lack of a decent ISA with compilers. For the most part, it worked out. However, it doesn't change the fact that most RISC processors are freakin' MISERABLE to program. I'm speaking from everyday experience. People might complain about x86 having some stupid addressing modes, but trust me, that's NOTHING compared to how
  • RISC architecture will change everything!

    Triple the speed of a Pentium!

    It even has a PCI bus!

    https://youtu.be/wPrUmViN_5c [youtu.be]
