Open Source Hardware

'Is It Time For Open Processors?' (lwn.net) 179

Linux kernel developer (and LWN.net co-founder) Jonathan Corbet recently posted an essay with a tantalizing title: "Is it time for open processors?" He cited several "serious initiatives", including the OpenPOWER effort, OpenSPARC, and OpenRISC, adding that "much of the momentum" appears to be with the RISC-V architecture. An anonymous reader quotes LWN.net: The [RISC-V] project is primarily focused on the instruction-set architecture, rather than on specific implementations, but free hardware designs do exist. Western Digital recently announced that it will be using RISC-V processors in its storage products, a decision that could lead to the shipment of RISC-V by the billion. There is a development kit available for those who would like to play with this processor and a number of designs for cores are available... RISC-V seems to have quite a bit of commercial support behind it -- the RISC-V Foundation has a long list of members. It seems likely that this architecture will continue to progress for some time.
Here are some of the reasons Corbet argues open source hardware "would certainly offer some benefits, but it would be no panacea."
  • "While compilers can be had for free, the same is not true of chip fabrication facilities, especially the expensive fabs needed to create high-end processors... It will never be as easy or as cheap as typing 'make'..."
  • "Without some way of verifying underlying design of an actual piece of hardware, we'll never really know if a given chip implements the design that we're told it does..."
  • "Even if RISC-V becomes successful in the marketplace, chances are good that the processors we can actually buy will not come with freely licensed designs..."
  • "Finally, even if we end up with entirely open processors, that will not bring an end to vulnerabilities at that level. We have a free kernel, but the kernel vulnerabilities come just the same. Open hardware may give us more confidence in the long term that we can retain control of our systems, but it is certainly not a magic wand that will wave our problems away."

"None of this should prevent us from trying to bring more openness and freedom to the design of our hardware, though. Once upon a time, creating a free operating system seemed like an insurmountably difficult task, but we have done it, multiple times over. Moving away from proprietary hardware designs may be one of our best chances for keeping our freedom; it would be foolish not to try."

Comments:
  • by Anonymous Coward

    It seems doubtful that any person could understand all the complexities involved in a modern high end processor. It takes several teams of designers to design them. An open hardware project is unlikely to get the manpower required.

    • Re: (Score:3, Insightful)

      by OrangeTide ( 124937 )

      There are several dozen teams designing RISC-V implementations. And many ASICs have RISC-V cores buried in them today. With a handful of designs being open.
      The main barrier for ordinary people and software developers to have a proper R5 workstation is for there to be a market for such a chip. Right now the market is driven by the needs of ASICs, and that's not really what people are asking for when they say an "Open" processor.

      • by TechyImmigrant ( 175943 ) on Saturday January 20, 2018 @11:03PM (#55970181) Homepage Journal

        There are several dozen teams designing RISC-V implementations. And many ASICs have RISC-V cores buried in them today. With a handful of designs being open.
        The main barrier for ordinary people and software developers to have a proper R5 workstation is for there to be a market for such a chip. Right now the market is driven by the needs of ASICs, and that's not really what people are asking for when they say an "Open" processor.

        Designing the architecture and logic is a fraction of the engineering effort necessary to design and build a modern high end microprocessor.

        • by religionofpeas ( 4511805 ) on Sunday January 21, 2018 @06:18AM (#55971039)

          Designing the architecture and logic is a fraction of the engineering effort necessary to design and build a modern high end microprocessor.

          In addition, a high end processor needs a complicated motherboard to run it, with high-speed memory and various peripheral I/O systems driven by separate ASICs or integrated into the CPU. A desktop PC motherboard is a very complex design, which is only made affordable by huge volumes.

    • by alvinrod ( 889928 ) on Saturday January 20, 2018 @09:37PM (#55969979)
      It might find some niche even if it never becomes a mainstream product, much like Linux never really took off on the desktop but became insanely important in the server space. I suspect that this could be successful for low-cost devices that need a lightweight processor. As overall device costs decrease, the relative cost of buying a third-party SoC grows, and using an old process node with an open design could yield some potentially significant savings.

      I also think something like this has some value in education even if it doesn't do much commercially.
      • Low-cost devices that need a lightweight processor are well served by the lower-end ARM core chips, which don't do speculative execution and the other things that make them vulnerable to Spectre and Meltdown. The Cortex-M series isn't on the list of vulnerable CPUs [techarp.com], for example.
        • There are two problems with that. The first is that negotiating even a simple ARM license is complex and time consuming. If all you want is an off-the-shelf SoC with an ARM core, then that's fine and you don't need to worry because someone else has done it, but if you want to produce a new SoC that has a generic CPU core and some domain-specific accelerator then that means you need a license and you're looking at two years to negotiate it. The second is that those ARM royalties add up quickly in low-volu
    • The exact same thing was said about Linux in 1991-1992, that it would never compete against "real" operating systems like Solaris, ULTRIX, and others.

      What is needed is to get critical mass. However, this may not be as hard as people think. One can bring up the Intel ME debacle, then show off a chipset that is open from design through the masking process to the fab... and companies will buy those, if only to ensure that the C-level PCs are not compromised, one of the few places where security tends to be valued.


      • Did a lot of people really say that Linux would never compete against "real" operating systems in 1991-1992? But what's the connection anyway?

        First person: "You can't travel faster than the speed of light"
        Second person: "They said the exact same thing about traveling to the moon".

        • Because going to the moon was doubted for being hard and expensive, not for being impossible; a human going faster than the speed of light (and living) is impossible. Not equal.
          • No faster-than-light communication or travel seems to be a very fundamental part of the way the universe works. The laws of physics conspire against our sci-fi dreams.

            • No faster-than-light communication or travel seems to be a very fundamental part of the way the universe works. The laws of physics conspire against our sci-fi dreams.

              Really depends on how you define 'faster than light' and which sets of physics you are using. It seems impossible in Minkowski (flat) space because things like mass, energy and time get imaginary. However, in Riemannian (curved) space, where the general theory of relativity gets used, it depends on the topology. It's already trivial, such as in the case of gravitational lensing, to show that there are two separate lightlike paths between two points, and one gets there in less time, technically 'faster than

        • by sjames ( 1099 )

          Yes. They said that a bunch of unorganized hobbyists could never get it together and manage the complexity that is a modern OS. Very similar to the arguments that an open processor can't happen.

      • The exact same thing was said about Linux in 1991-1992, that it would never compete against "real" operating systems like Solaris, ULTRIX, and others.

        That is not my recollection. There was a demo of X11 running on SLS Linux at the 1992 SUG meeting, and the folks from Sun were giving each other very concerned looks. They clearly saw it as a serious threat.

      • The exact same thing was said about Linux in 1991-1992, that it would never compete against "real" operating systems like Solaris, ULTRIX, and others.

        No sane person ever said any such thing. The whole of Unix was written by Ken and Dennis in their lunch breaks, not just the kernel. Minix was not such a great deal either.

        An OS was smaller then, because
        (a) there were no GUIs
        (b) most of the "utilities" we expect now were not considered part of the system (and that often included the compiler and linker)

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Not even that. There is actual expense involved. You will not be downloading a RISC-V or any other processor core and then going to a 3D printer to print it. That will never happen. The technology for 3D printing right now couldn't even print a tube for an analog computer.

      What people are using right now are FPGAs, which cost 100x more than the chip core they are capable of emulating. Most of the FPGAs that are in the affordable range can barely emulate an 8-bit computer. So unless you want to sacrifice th

      • by gtall ( 79522 )

        I take it you want computers that are collections of discrete components again. The entire System-on-a-Chip world more or less negated that as a design philosophy. It's too complicated and too slow in execution, and is a security nightmare.

    • by sjames ( 1099 )

      But it did. Further, WD has plans [designnews.com] to use RISC-V cores in future products.

  • Yes, but... (Score:4, Informative)

    by DogDude ( 805747 ) on Saturday January 20, 2018 @09:46PM (#55969995)
    ... but it takes a massive amount of money to design and make chips. It's not going to happen "open source" unless some very wealthy individual or organization decides to do so for altruistic reasons.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      That is definitely a way of looking at it. The other way to look at it is that somewhat-wealthy organisations already invest significantly in other open projects [not just/only open-source projects], because it benefits them to do so.

      • Yes, but nearly all of them do so as a means to an end rather than as an end in itself. Designing a CPU is an end. The resource and R&D requirements are many orders of magnitude higher than those of many other open source projects put together.

        • by tepples ( 727027 )

          From the summary:

          Western Digital recently announced that it will be using RISC-V processors in its storage products

          thegarbz wrote:

          nearly all [companies that fund development of free software and similarly licensed tech projects] do so as a means to an end rather than as an end in itself. Designing a CPU is an end.

          Designing an embedded CPU is a means. The product in which it is embedded is an end. See, for example, RISC-V in Western Digital hard drives.

          • The performance requirements and power limitations of an embedded CPU are completely different from those of the primary processor in a general-purpose computing system. Unless you think that general-purpose computing systems are going away (I don't), then the CPUs needed for them will continue to be expensive to design and build.
    • The same could be said for operating systems. So, I think it could be done.

      Design would be a patent and licensing hell, but I think it could be done. In terms of manufacturing, it'd need some sort of Kickstarter approach to pay for runs from TSMC or GlobalFoundries.

      • The same could be said for operating systems. So, I think it could be done.

        No one has created an open source OS from the ground up. The Linux we take for granted is the effort of hundreds of projects maintained and contributed to by thousands of people, just to get a base system going. And there's little motivation to do the same thing with a CPU, given the order-of-magnitude difference in complexity and the requirement for something to be complete on release (rather than, say, some dude creating Linux and some other dude porting some utilities to it).

    • by AHuxley ( 892839 )
      Step 1. Find and get the rights to some new open chip design in the USA.
      Step 2. Go to the UK government and tell them of a new educational chipset design that is 100% Russian/China resistant.
      Step 3. Offer to set up a "production" line with lots of good-paying local jobs in a region of the UK like Northern Ireland or Wales if granted gov funding.
      Agree to terms and get the CPU made in a low-wage nation.
      Step 4. Get the money granted and fab the CPU. Ensure the CPU becomes a part of the UK educational
      • Go to the UK government

        No, No, a thousand times NO!!!!!!

        Any project involving a government committing to spending money is doomed, and any IT project involving the British Government, doubly or even quadruply so.

        Never mind Babbage and Harrison's clocks; look what killed the Transputer - Mrs Thatcher promised £50M the same week United Technologies scrapped their microprocessor project, saying "in the world of computers, $50M is peanuts". The funding was indeed too little, too late, and the pr

        • I would not say it was killed by accident.
          Transputers were strong in military hardware, especially the French ones.
          The main companies involved were state-owned. Besides technical difficulties with the latest generations of Transputers, the state wanted to sell the companies that were involved in producing Transputers.
          In the end the only high bidder was a Japanese consortium.
          Mind: that was the late 1980s / early 1990s. Instead of selling, they feared they would be militarily dependent on a foreign, and even Asian, force. So t

        • look what killed the Transputer

          Advances in "ordinary" multiprocessor computing technology, and the difficulty of developing for the platform in a strange and limited language? How is that relevant here? The transputer died because nobody wanted to use it.

        • by AHuxley ( 892839 )
          But all the new jobs in Northern Ireland and Wales... with computers. Just keep adding more funding this time. It won't be like the 1980s :)
    • by Anonymous Coward

      This is what happened after China acquired an AMD license to produce x64 chips in China, and acquired VIA's x86 license, which VIA got from acquiring Cyrix.

      The CPU license pool has been cracked open. Soon CPUs in China will be 1/4 the price of Intel/AMD but have better performance.

      https://www.reddit.com/r/hardw... [reddit.com]

      Zhaoxin launched KX-5000 quad/octa-core x86 processors on Dec 28, 2017 in Shanghai, China: image [semidata.info], report [eefocus.com], translation [google.com].

      Zhaoxin revealed KX-6000 & KX-7000 roadmap: image [semi.org.cn], report [semi.org.cn], translation [google.com].

      Other reports

      • by AHuxley ( 892839 )
        Yes, AC, consider the average CPU speed and generation.
        http://store.steampowered.com/... [steampowered.com]
        All China has to do is be in that CPU speed range for desktop games at a much lower cost every generation.
    • It's not going to happen "open source" unless some very wealthy individual or organization decides to do so for altruistic reasons.

      That is certainly the Windows buyer PHB perspective.

      In reality, large corporations (you have probably heard of IBM) put money into open source because it is a way to share the cost of creating and supporting infrastructure which their actual product depends on.

      We have to hope/pray that Larry Ellison has a "Road to Damascus" event, and realises that a truly open Sparc system

      • by gtall ( 79522 )

        IBM put money into open source because they couldn't stand Microsoft. And in OSes, they'd already been through the Unix wars, so Linux looked like a good alternative. It had little to do with creating infrastructure their products depended upon; rather it was creating infrastructure that wasn't controlled by others. At the time, they thought of themselves as a hardware company. Now they see themselves as an India company.

      • We have to hope/pray that Larry Ellison has a "Road to Damascus" event, and realises that a truly open Sparc system (not just CPU) might lead to Oracle being in a far stronger position than it already is, and seriously weaken his competition.

        How?

        Currently, Oracle is in the business of preventing access to drivers and microcode for machines over 5 years old and out of support.

        SPARC is already 5 years behind. Fujitsu already makes SPARC processors. How would giving away the IP of 5-year-behind (actually it's more like 10 or even 15 now, the single-thread performance was pathetic even compared to the available competition last time they were selling) processors help Oracle? And how would it help anyone else?

      • The idea that Larry "Lay 5% off every 6 months to keep 'em at each other's throats" Ellison would ever focus on anything but immediate gains is one of the most laughable things I've ever read on Slashdot.

      • We have to hope/pray that Larry Ellison has a "Road to Damascus" event, and realises that a truly open Sparc system (not just CPU) might lead to Oracle being in a far stronger position than it already is, and seriously weaken his competition

        I really hope not. At the moment, SPARC and Itanium are dead architectures and no one has to worry about abominations like register windows coming back. We have been able to kill a load of complexity in operating systems and compilers that had to deal with these things. The last thing that we need is return of the zombie architectures.

    • by lkcl ( 517947 )

      ... but it takes a massive amount of money to design and make chips. It's not going to happen "open source" unless some very wealthy individual or organization decides to do so for altruistic reasons.

      funnily enough this is precisely what has happened, quite recently, in the form of the Indian "Shakti Project". we could, up until a couple of years ago, have dismissed the Indian Government's security "paranoia" as simply... well... "paranoia"... except that it's not paranoia if they *really are* out to Get You. and thanks to the Intel ME fiasco, we know that the NSA really is screwing everybody.

      so the Indian Government has basically given the Shakti Project UNLIMITED resources to, and i quote, "Piss Al

      • The Shakti stuff looks really interesting. The target is essentially an open source hardware version of a Raspberry Pi! That's great! If you want great market acceptance, build it to be physically compatible (board size, connector placement, pinouts) with a Raspberry Pi, and there will be good acceptance. It would be great if appliance makers would adopt the R-Pi hw form-factor as a standard and have appliances that hooked into it (relays and such) so that the main processor board was cheaper, more standard
  • by dsgrntlxmply ( 610492 ) on Saturday January 20, 2018 @10:07PM (#55970035)
    One online article notes 16nm FinFET fab entry cost at $80M, 66 mask steps. You would need a very wealthy patron.
    • by Anonymous Coward

      That's okay. We'll just 3D print them floor to ceiling and run them at a few MHz. Move over 286, we're coming for you! X^D

    • One online article notes 16nm FinFET fab entry cost at $80M, 66 mask steps.

      You don't need to build a sawmill before you can build a house, an apartment complex, or a line of cabinetry. You don't need to build a steel mill to build cars. Why should building your own fab be a prerequisite for building a line of semiconductors?

      Many big-name semiconductor companies have been "fabless", and many more have started that way. Design the chip, commission the masks, rent the fab services, split the swag.

      Let the fa

      • by gman003 ( 1693318 ) on Saturday January 20, 2018 @10:56PM (#55970165)

        That $80M is the cost to use a fab - the cost in setting up the masks to have the fab make your processor. Building a modern fab is on the order of tens of billions of dollars.

        • by Ungrounded Lightning ( 62228 ) on Saturday January 20, 2018 @11:34PM (#55970247) Journal

          That's more than an order of magnitude higher than the NREs we were paying for the ASICs (including sea-of-RISC network processors) the last time I was doing ASICs - about 5 years back.

          Has it gotten that expensive? I sincerely doubt it. But even if it has:

          You can do your prototyping at fabs that combine the prototypes from several customers into one combo wafer, split the NREs among them, and do a small run - then repeat a couple months later, ad infinitum. If your design works you've already got your mask design placed and routed, and it's just a matter of making another set where you step-and-repeat for a whole wafer. (Meanwhile you can do small volumes and proofs-of-concept with the few dozen you got from the prototype run - or even get a few more made from the old masks and just get your piece.)

          • That's more than an order of magnitude higher than the NREs we were paying for the ASICs (including sea-of-RISC network processors) the last time I was doing ASICs - about 5 years back.

            Has it gotten that expensive? I sincerely doubt it. But even if it has:

            It has IF you want the absolute top-end performance fabbing. You can still get fabbed off the bleeding edge much more cheaply. Intel's introduction of FinFETs heralded a massive change in the industry which had not been seen before: it was the first time

            • by AmiMoJo ( 196126 )

              Maybe we don't need bleeding edge performance. Have an untrusted CPU for performance, and a trusted one for important stuff. The trusted one doesn't have to be super high performance.

              That's basically the technique used by most security systems these days. Have a secure, low performance sub-processor just for handling secrets and validating the activity of the high performance main processor.

          • by tlhIngan ( 30335 )

            If you want cutting edge, yes.

            If you can stay back a few nodes, it's not so bad; a few million bucks is needed (masks are expensive at about $100K/each for the older processes), so perhaps a regular 10-metal-layer chip requires a couple million bucks.

            And while most of it is autorouted and autoplaced, you still want to hand-edit the designs. Remember, the reason we're at 10 metal layers is that for most general random logic, the limiting factor is wiring. The vast majority of transistors in any design are used in memory - ca
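
            A minimal C sketch of that arithmetic, reusing the ~$100K/mask figure from the comment above; the mask counts are my own assumptions for an older-node, 10-metal-layer process, not figures from the thread:

              #include <stdio.h>

              /* Back-of-the-envelope mask-set cost.  The per-mask price comes from the
               * comment above; the mask counts are assumptions (front-end layers plus
               * one metal mask and one via mask per routing layer). */
              int main(void)
              {
                  const double cost_per_mask = 100e3;  /* ~$100K each on an older node */
                  const int front_end_masks  = 10;     /* assumed: wells, poly, implants... */
                  const int metal_layers     = 10;
                  const int masks_per_layer  = 2;      /* one metal + one via mask */

                  int    total_masks = front_end_masks + metal_layers * masks_per_layer;
                  double mask_set    = total_masks * cost_per_mask;

                  /* Prints: ~30 masks -> ~$3.0M for the mask set alone */
                  printf("~%d masks -> ~$%.1fM for the mask set alone\n",
                         total_masks, mask_set / 1e6);
                  return 0;
              }

            Which lands in the same low-single-digit-millions ballpark as the "couple million bucks" estimate, before any wafer or packaging costs.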

            • (masks are expensive at about $100K/each for the older processes)

              For sure. It was kind of entertaining when I was onsite at Infineon's fab in Munich many years back with a team installing one of our femtosecond-laser defect repair systems and one of our guys (not me, I swear!) got a little careless and put his thumb through the pellicle on a mask. The customer was not pleased.
          • Has it gotten that expensive?

            Yup. Krste has some interesting slides on this. The take-home summary is that the ROI for newer processes is not currently worth it. It used to be that one generation old was cheap, two was basically free, because the newer processes were so much better than the old and still won on price/performance ratios. Now, the sweet spot is closer to 4-5 generations old. You can spend a lot more on the newer processes, but you don't get very much return and it probably isn't worth it.

  • by Goldsmith ( 561202 ) on Saturday January 20, 2018 @11:16PM (#55970209)

    DARPA had (has?) a program to try and figure out how to ensure the computer hardware the DoD is purchasing is what is actually being delivered. There are more problems with hardware than simply design and the cost of buying fab time. Validation that the design was produced correctly is not trivial in complex hardware. Opening the whole process would help solve that problem, and the DoD may have the deep pockets necessary to pay for actual hardware builds.

    • Sure, the DoD has the money, but even if they did fund a CPU design they never, ever would release it as open source. It would remain a classified component of one of the DoD's weapons systems. In fact, the DoD has funded specialized ASIC development, typically for stuff only they would ever need... stuff like ultra-high-frequency ADCs that can digitize the signal from an enemy radar, or other things they can't buy commercially.
      • by gtall ( 79522 )

        DoD funds a lot of work that is let out to general industry; stop talking like it is a closed shop.

      • Not even slightly true. The DoD has a policy of trying to avoid being responsible for their supply chain. DARPA regards technology transfer as one of their key metrics for success in a project like this: they want companies (ideally US companies, and especially companies that provide critical bits of national infrastructure) to adopt the results of these projects. They are also well aware of both how much open source they depend on and of how good open source is as a route for technology transfer: even i
    • by gtall ( 79522 )

      Opening the process won't solve squat. The problems remain regardless of whether the designs are open or closed. The U.S. Military can already get access to designs, what it and the industry lacks are methods to ensure they are what they say they are. I'm not optimistic they will be successful given one of their approaches which one fellow relayed to me, "we'll just test the products and see that they do what they are supposed to do".

      • I agree that the ways they've approached this so far have been pretty dumb. Opening the design process is trivial. Opening the fab process is not. From an industry point of view (my point of view), opening the process completely is not necessary. I simply need to be able to validate the manufacturing process, which means it needs to be open to me (the customer). Many electronics manufacturers don't understand that "I need to validate" is not the same as "you validate for me." For chip fab, that's going

  • And if sufficiently open out-of-order implementations (resistant to Spectre class exploits) don't show up, we'll emulate it in a JIT runtime that'll eventually pony up better performance than Intel chips with the TLB flush-a-rama patches. Tanenbaum's old argument about users and developers gladly suffering greater than 5% penalties to use languages like Perl and Java, and this making microkernel performance hits palatable, was recently made all too true with the fix for Meltdown turning monolithic kernels i

  • by gabrieltss ( 64078 ) on Sunday January 21, 2018 @12:33AM (#55970351)

    How about an open version of the Motorola 68000 series of CPUs? Those were great in their day. Maybe Motorola would open up the tech on them and let them be advanced. Assembly for them was easy to learn, with a very small instruction set. Learning assembly on the Commodore Amiga was a snap with the Motorola 68000 series of CPUs.

    • by ClosedSource ( 238333 ) on Sunday January 21, 2018 @01:59AM (#55970515)

      I'm not sure how useful this would be today, but clearly the 68000 was far superior to an 8088 (or even an 8086). My guess is that Intel's segmented address approach sucked up about 20% of developer productivity on the PC. All those crazy memory models would have never existed had IBM chosen the 68000. Not to mention Extended Memory and Expanded Memory.
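
      For anyone who never had to deal with it, a minimal C sketch of the real-mode address arithmetic behind those memory models (illustrative only, not from the parent post):

        #include <stdint.h>
        #include <stdio.h>

        /* 8086 real mode: physical address = segment * 16 + offset.  Many different
         * segment:offset pairs alias the same byte, which is why the PC ended up
         * with near/far/huge pointers and a zoo of compiler memory models. */
        static uint32_t phys(uint16_t segment, uint16_t offset)
        {
            return ((uint32_t)segment << 4) + offset;
        }

        int main(void)
        {
            /* Two distinct far pointers, one physical location (0xB8000). */
            printf("%05lX\n", (unsigned long)phys(0xB800, 0x0000));
            printf("%05lX\n", (unsigned long)phys(0xB000, 0x8000));
            return 0;
        }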

      • by Agripa ( 139780 )

        I'm not sure how useful this would be today, but clearly the 68000 was far superior to an 8088 (or even an 8086). My guess is that Intel's segmented address approach sucked up about 20% of developer productivity on the PC. All those crazy memory models would have never existed had IBM chosen the 68000. Not to mention Extended Memory and Expanded Memory.

        The 68K has even worse problems. For instance, unlike segmented addressing, the double-indirect addressing present in the 68K involves the instruction pipeline itself.

    • ARM and other RISC machines have similar simple instruction sets.
      And looking particularly at ARM, there are much more powerful ideas, like every instruction being conditional, and nearly all arithmetic instructions being able to include a shift operation (add and shift at the same time).
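
      A minimal C sketch of what that combination buys (the exact instruction sequence depends on the compiler, but in classic ARM32 mode the body below can map to a single predicated add with a built-in shift):

        /* With ARM32 predication and shifted operands, a compiler can emit
         * something like:
         *     CMP   r0, #0
         *     ADDNE r1, r2, r3, LSL #2    ; conditional add plus shift in one instruction
         */
        unsigned add_shifted_if(unsigned flag, unsigned acc,
                                unsigned base, unsigned idx)
        {
            if (flag != 0)
                acc = base + (idx << 2);   /* the add and the shift share an instruction */
            return acc;
        }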

      • by joib ( 70841 )

        looking particularly at ARM, there are much more powerful ideas, like every instruction being conditional.

        arm64 got rid of this.

      • ARM and other RISC machines have similar simple instruction sets.

        This hasn't been true in years. The ARM instruction set that your smartphone processor supports is big, complicated, and includes almost as much crufty legacy support as a modern x86.

        • Nothing can compare with the x86 for cruftiness. Legacy support is one of the reasons why x86 has never been able to compete in the low power space with ARM.

          ARM32 is my favorite assembly language to program in. It's simple, easy to understand, and because it lacks a lot of mal-features like doing operations directly in memory, it's not loaded with as many traps for the unwary.

        • The old instruction set did not change.
          Adding new stuff does not necessarily make it crufty.
          Do you have an example of cruftiness on ARM?

          I have only programmed modern ARMs in C++, but looking at the assembly code, I noticed nothing strange.

          • Aside from Apple, the rest of the ARM smartphones have to support ARMv7, which requires supporting Thumb/Thumb2. ARMv7 has predication on most instructions, which is a pain for an out-of-order machine, as well as a stupid FP architectural register addressing scheme. Thumb/Thumb2 is a form of instruction compression and requires the CPU to decode 2-byte instructions, which means that the whole "every instruction is 4 bytes in length and on a 4-byte boundary" rule is thrown out the window. ARMv8 is pretty clean, but in order

            • Yeah, thanks for the info, I just read up a bit about AArch64 ... The amount of 'architectures' is quite confusing :)

    • There really isn't any 68000 tech to be "opened up" per se - other manufacturers made 68000-compatible chips, and they're still being made today. 68000 implementations using FPGA hardware are also quite common, and often available for free. The main problem is that the 68K architecture isn't really comparable in performance to today's general-purpose offerings by Intel and AMD and there's no financial incentive to try to make them so. They're still great for embedded stuff, and as you said, they're reall
    • You mean like this one [opencores.org]? The problem with emulating a 68k is that the best kind of performance you can hope for is enough to make a fast amiga [apollo-core.com]. That's pretty poky even by embedded standards, these days. And it's all well and good to say "but imagine how many of them you could have on a chip!" but then you have to figure out some sensible way to glue them all together. You'd basically end up with an inferior version of the latest SPARC chips, which weren't really competitive anyway. They're only capable of ac

    • by Misagon ( 1135 )

      I love the 680x0. It had the most modern ISA back in its day. But it was a CISC design, not developed past the 68060 at 66MHz, which was comparable to somewhere in between an Intel '486-DX2 and a Pentium at that clock.
      Motorola dropped the 68K in favour of the 88000, and then dropped that for the PowerPC, supposedly after it had been revealed that the 88K architecture had serious design flaws.
      You can today run a 68040 on an FPGA at 100 MHz though (the Vampire card for the Amiga).

      There is an open-hardware 32-bit RISC archit

    • Not sure whether you want more Amiga nostalgia or 68000-family processing, but either way, your cake's already baked.

      Here's a top-quality Amiga hardware emulator. https://www.armigaproject.com/ [armigaproject.com]

      If you want to get more hobbyist than that, here's an open high-performance 68000 processor core you can load onto an FPGA - possibly along with MiniMig or some other FPGA implementation of the Amiga. http://www.apollo-core.com/ [apollo-core.com]

  • We need OpenARM not only because ARM is ubiquitous but so there can be a company called "JOint United Research for National Exascale deliverY" making OpenARM processors. OpenARM by JOURNEY.

  • by Voyager529 ( 1363959 ) <[moc.oohay] [ta] [925regayov]> on Sunday January 21, 2018 @03:33AM (#55970719)

    Let's assume for a moment we had a rousing speech from the ghost of John F. Kennedy saying that this community should commit itself, to achieving the goal, before this decade is out, of creating an open processor, and installing it safely in a computer. And Jeff Bezos thought it was a good idea and committed to writing a blank check to make it happen.

    And enough of the few thousand people in the world who can ground-up design a processor have willingly donated their time to the effort, and have made a perfect, error-free processor with very little physical testing, and one or two of the few-dozen-at-best CPU fab plants in the world have committed their time to retool their assembly lines to decrease the output of Intel and AMD and ARM and Qualcomm chips to make a few hundred thousand of this OpenProc. Also, we're assuming that all of this is done such that there are zero patent infringements from the existing guys, and thus at no point are there any lawsuits from Intel or AMD.

    We're already comfortably in 'not happening' territory, but let's keep going.

    These CPUs need to fit into motherboards somewhere, right? I mean, the implication here is that we're looking for desktop and server chips. They're not going to work in standard Intel or AMD sockets, I'm assuming...so on the heels of designing an open processor, we need an open motherboard to fit it (which again, avoids any and all litigation as it's being designed). Somewhere in that process, we also end up with an OpenNIC and an OpenSoundBlaster and OpenSATA and FreeUSB and FreePCIe et al. Also, someone codes a ground-up open UEFI or BIOS or something that interacts with all of this hardware properly and without issue or conflict, because any issue faced in this scenario becomes the biggest possible nightmare to test. Also, Foxconn agrees to produce this MagicMobo alongside standard, more profitable units.

    Now, we've got all that hardware and can get to a boot device. What are we booting? Linux successfully compiled for this barely tested hardware using a compiler that assumes all the specs are, in fact, working as intended? Okay, great! Now let's get some more software on it, because a full Linux distro, even something as relatively simple as DSL or Puppy, is going to require all of its software to be recompiled, so it's yet another race to start porting over applications, with some applications never leaving x86 due to a lack of developer interest.

    Everyone, everywhere, ever, has willingly done their part to support this new architecture. Now, to convince people to use it. Who, exactly, would that be? Some software developers and hobbyists, I mean sure, but then who? End users, even tech savvy ones, are going to be wary of an architecture where the best case scenario is a subset of standard Linux software, to say nothing about the countless Windows and OSX titles, niche hardware, and lack of laptop iterations.

    Maybe if it were heavily optimized for database loads it might have a bit of a niche there, but now you have to have someone's name on it. Who is going to be the OEM to sell these machines? Companies aren't buying motherboards and rackmount cases to start using these as servers; Dell or HP or Lenovo will have to get on board, which is rough when Intel has been their poison of choice for so long.

    So, in summary, even if everyone volunteers everything they need at every step of the way, what is the expectation? A niche market at best, which will always be treated as a second class citizen, and whose selling point is the great sacrifice made to bring it into existence.

    • I think this narrative both misses a lot of niches where open processing could make a difference, and overestimates the barriers to entry.

      First, RISC-V is already being put into silicon. It's great wherever there's a need for a small, efficient core, and this means that embedded systems, microcontrollers, all that are up for grabs. Think Raspberry Pi and smaller. Think an upcoming generation of smartphones and wearables. Think more of competing with ARM than Intel and AMD.

      Second, we need this to replace

  • by cardpuncher ( 713057 ) on Sunday January 21, 2018 @05:59AM (#55970997)
    RISC-V is just an instruction set architecture - and one that simply bundles up some well-established practice into a neater package. It offers nothing that a current processor cannot provide - with the exception of having an IPR-free instruction set. Which would be at best a marginal gain, because the first thing any of the mainstream chip vendors would do would be to "enhance" it with a bunch of proprietary instructions so they had a distinctive product.

    There's nothing in the spec about implementation - you're free to recreate Meltdown and Spectre and be fully compliant as far as I can tell - so I can see no benefit.

    What we are going to need going forwards - if we're serious about battling malicious software - are things like more protection rings (or similar) and hence faster mode-switching, better memory protection, container-oriented virtualisation (including better support for DMA), and possibly realising that we now have sufficient memory to run kernels mostly without address translation. That will probably involve some sort of Virtual Memory system in which an Address Space ID is part of the address for both cache efficiency and protection purposes. I don't think we'll get them, because it would involve significant changes, not to the silicon, but to the mindset of Operating System developers, most of whom seem to have been desperately reinventing Multics for the last 50 years.
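
    As a purely illustrative sketch of the "Address Space ID as part of the address" idea (my own assumed layout, not any real ISA), in C:

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical tagged address: a 16-bit address-space ID packed into the top
       * bits of a 64-bit value, so caches and protection checks can key on
       * (ASID, address) instead of flushing translations on every context switch. */
      #define ASID_SHIFT 48
      #define ADDR_MASK  ((UINT64_C(1) << ASID_SHIFT) - 1)

      typedef uint64_t tagged_addr;

      static tagged_addr tag_make(uint16_t asid, uint64_t vaddr)
      {
          return ((tagged_addr)asid << ASID_SHIFT) | (vaddr & ADDR_MASK);
      }

      static uint16_t tag_asid(tagged_addr t) { return (uint16_t)(t >> ASID_SHIFT); }
      static uint64_t tag_addr(tagged_addr t) { return t & ADDR_MASK; }

      int main(void)
      {
          tagged_addr t = tag_make(7, 0x7fffdeadbeefULL);
          printf("asid=%u addr=%llx\n", (unsigned)tag_asid(t),
                 (unsigned long long)tag_addr(t));
          return 0;
      }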

    • by Misagon ( 1135 )

      I like the way that The Mill [slashdot.org] architecture (proprietary, not yet in silicon) is going with regard to security features.

      It does some things in hardware fast that a microkernel would otherwise do with a penalty compared to current monolithic kernels. For instance, IPC is done through cross-domain calls in one clock cycle.
      This means that many libraries could be moved into separate protected domains ("services") without loss of performance.

      It also has fine-grained memory protection (separate from paging), akin

    • Slashdot largely seems to be missing the point of RISC-V. It isn't so much about having an open source processor, as an open specification that anyone can easily and freely implement and extend. The basic open designs are implemented in a high level design language and may be readily composed with a rich and growing selection of peripheral hardware in a flourishing ecosystem. The ISA itself is just a simple and elegant RISC, but the offer of escape from vendor lock-in or maintaining custom designs and toolc

  • Something neat and simple, like a raspberry SoC or something ... .. (Listen to me in 2018 ..."neat and simple, like a raspberry SoC" ... Isn't progress awesome!?)

    Back on track:
    we need this, like, now. When the 3D printers for electronics come about, it should be trivial to print your own type of FOSS smartphone model. IMHO.

    • by gtall ( 79522 )

      Hmmm...I hear you. And as soon as we discover a pink unicorn, there will be world peace. We just need to find one. Should be a piece of cake and then there will be peace in our time.

  • Here are some of the reasons Corbet argues open source hardware "would certainly offer some benefits, but it would be no panacea."

    Well, if it doesn't solve every problem 100%, then we're NOT FUCKING INTERESTED!!!
    We ONLY WANT PANACEAS.
