Hardware

Design Your Very Own Microprocessor 231

LightJockey writes: "CircuitCellar has a great article on designing and building your own microprocessor using FPGAs and openly available processor designs, ranging from ARM- and MIPS-based to custom designs, and even a couple of SPARC-based chips. There's also a really cool 'processor toaster': start with a base processor design, use a webpage to select upgraded components, and it spits out the VHDL file you need to create it. Brings garage hackerdom up to a whole new level!"
This discussion has been archived. No new comments can be posted.

  • by AlaskanUnderachiever ( 561294 ) on Saturday May 11, 2002 @11:24AM (#3502139) Homepage
    That's nothing new. I've been toasting processors for years now. All you need is any AMD chip, a failed heat sink, and 30 seconds of Half Life.
    • Well, since this one seems to be going off-topic from the very start...

      You are one UNLUCKY dude! I have never heard of a heatsink failing, but you have seen this multiple times? What is the most common failure mode of aluminum heatsinks? :)
      • I actually had a 'heatsink failure' once, thankfully it happened before I had finished setting up the machine so nothing died. The mode of failure was that one of the plastic clips that the retention mechanism connects to bent. It was on a very old MOBO so I just tossed it rather than try toasting chips. Theoretically I could see this cheap bit of plastic failing during operation.
        • by TNT_JR ( 460277 )
          Nope. That was a failure of the 'retention mechanism', *not* the heatsink? ;)
          • One would assume that the retention mechanism of the heatsink would be a part of the unit as a whole, therefore, the heatsink failed.
      • What is the most common failure mode of aluminum heatsinks?

        Temporary suspension of the black body radiation law.
      • OK. I help maintain a lab that is full of AMDs of various flavors. Now take a large, heavy heat sink. Be a cheap-ass admin and make sure everything is secured with inexpensive plastic bolts and cooled with cheap sleeve-bearing fans. Remember kids, even though these are going to get daily use from students and be suspended in an odd manner, there's no reason to buy anything other than the cheap clips that came with the heat sinks. It's not terribly uncommon for someone to accidentally (yeah right) whack or move a machine with enough violence to weaken the clips holding the heatsink on. (OK, I'm assuming that's what does it and we don't have an active army of super-intel-powered gremlins on premises.) Heatsink slowly pulls away from CPU, eventually separates one day, tada, rapid failure. Also the fans can fail. Either way, within a few minutes (or seconds if it's actually the heat sink and not just the fan) you have nice crispy chips. It doesn't happen very often, but take a lab of 40 computers x 18 hours a day x freshmen and you've got a nice equation for failure.
    • In my experience, Half Life is not necessary.
    • Not only is it not new, it's no longer a hobby. Myself and my friends hire out our processor-toasting services to disgruntled computer convention attendees pissed at the people sucking precious bandwidth with their Counter-Strike server.

      One subtly toasted processor + failed server for $20. And since they're all rent-a-sys type computers, no one really cares
    • It takes you 30 seconds? I'm much more efficient than that, it takes me only 15 seconds. Then again, I use my Whole Life, so it balances out...
  • by linzeal ( 197905 ) on Saturday May 11, 2002 @11:28AM (#3502157) Journal
    The damn thing would incorporate circuitry for a garage door opener, a missile guidance system, and would have all 20 megs of emacs stored in microcode.
  • by MrHat ( 102062 )
    1. Create VHDL from 'processor toaster', and take it into the garage on a CD-R.
    2. ...
    3. Profit!

    Hell, someone had to do it. If you guys have some spare chip fabrication equipment in your garage, can I borrow it?

    • Aren't you supposed to say this in haiku or limerick form?

      Besides, if AMD discovers that you've stolen their trade secrets, they'll sue you into the ground.
  • by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Saturday May 11, 2002 @11:30AM (#3502165) Homepage Journal
    Without training and experience in hardware design at the college level, it is doubtful that any amateur could come up with a design that improves on existing chip designs or create a fundamentally new design that would be of interest to chip companies.

    The hope springs eternal, though.
    • Yes, but could thousands of amateurs, many with college degrees, work on a distributed project to design a microprocessor that improves on existing commercial models, a la Linux? (Maybe the article talks about this, but it's slashdotted.)
      • This is already there...

        The Free CPU project http://www.f-cpu.org has this purpose
    • I just think that it is interesting how this wouldn't have been possible until recently. The expense of chip-making plants is turning chip manufacturers into different companies from the designers. The designers then outsource the manufacturing to the plants.

      Why can't an individual outsource too? The beauty of all this is that an individual can. Even though the plant may have large sunk costs involved, the cost of making an individual chip may get to the point where people really can design their own advanced chips.

      Hey, perhaps we'll have Open Source chip designs that'll be traded online. Then we'll have an entire machine based on completely open standards.

      That'll be hot.
      • by tius ( 455341 )
        Actually, this has been possible for a long time (i.e. 10 years or so). The only differences now are that there are single FPGA chips that can accommodate the entire design, and that there are also some fairly mature Open Source cores available.
      • by cyr ( 571397 )
        Silicon is only cheap if you make *many* copies of the same chip.

        However, you can design something in VHDL and put it into a CPLD or FPGA chip (programmable logic).

        BTW, check out www.opencores.org and similar sites. There are already a number of open source "chip designs" available, in the form of VHDL or Verilog source code.
      • Sure you can outsource. Practically all design houses outsource fabbing (current estimates indicate that you need a turnover of approx $7 billion to justify your own fab).

        As a small-time operator, however, no fab is going to talk to you. You are going to go through a middleman (just as well, since these often supply design services like P&R (place and route) and synthesis; without these services you'll be looking at an investment of approx $200,000 in tools).

        For a pure digital design you can then get away with a tool investment of $2,000-5,000 for simulation.

        For fabbing you should expect $20,000-50,000 in expenses to ready your design for tape-out. The cost of the manufacture will depend on whether you are going for an engineering run or an MPW (Multi-Project Wafer). An MPW will cost you $10,000-100,000 depending on process sophistication and size, and yield 10-200 chips. An engineering run requires a dedicated mask set, which will cost $100,000-500,000. The engineering run itself is considerably cheaper, and the masks may be reused for manufacture.

        If you are going to do any leading-edge design you will however need to do your own synthesis and P&R. If you target 0.18um or better you probably are going to need some degree of physical synthesis capability ($100,000 and up). For manufacture you will also need to prepare a test procedure (ATPG (Automatic Test Pattern Generation) tools check in at approx $100,000).

        Also remember that all tools will usually require an annual maintenance fee of 10-20% of the purchase price (pays for upgrades and support).

        Lastly, don't forget computer hardware to run your tools on. Linux suffices for most tools, but some will only run on Sun/HP workstations.
        • by imnoteddy ( 568836 ) on Saturday May 11, 2002 @12:47PM (#3502446)
          Prototyping can be done much cheaper through MOSIS [mosis.org]. If you just want to play with a simple processor (say an 8 bit processor in the 0.5 micron process) you can get in the game for $5,900 US [mosis.org]. If you want to play in a 32-bit world, but don't need the hottest process, big onboard cache, etc., consider $15,500 US [mosis.org] for 40 parts in a 0.25 micron TSMC process.
          In any case, the real advantage of a roll-your-own processor is not to build a general-purpose processor better than P4/SPARC/ARM/MIPS/PPC, but to create a special-purpose processor that does the one thing you care most about very well.
          • I was assuming at least 0.35um.

            These prices are however just for the fabrication, no?

            If so you will still need to do synthesis and P&R.

            Of course your point about just dropping in a processor being uninteresting is well taken. Indeed a CPU is a very inefficient piece of logic. Dropping CPUs into FPGAs seems to me a particularly stupid thing to do.

            The whole reason many want a CPU in an SoC product is that they want the flexibility to reconfigure the chip and update its algorithms without having to fabricate a new device. On an FPGA you already have the flexibility to update at any time at practically no cost, so there you will want to forgo the CPU entirely and implement the algorithm completely in hardware.

            Also, since FPGAs generally are unable to reach high clock speeds, the designer really needs to parallelize his algorithm to achieve any kind of performance.

            (We recently did an SoC project which was to be prototyped in an FPGA and included a 16-bit single-issue unpipelined RISC core. On a Virtex-II 3000-4 this achieved a speed of 18MHz max.)
    • It is not about designing a chip that you could sell to another company. It is about the home electronic hobbyist. I remember reading Circuit Cellar in Byte magazine a decade ago. It was always cool to sit down at your bench with a bread board and a collection of parts from Radio Shack and do it yourself. I'm glad Circuit Cellar is still alive and kicking.
    • by mrm677 ( 456727 ) on Saturday May 11, 2002 @11:51AM (#3502243)
      Designing a modern microprocessor cannot be done by amateurs or a group of people with B.S. degrees in electrical engineering. Sure, many of us have taken undergraduate architecture classes and maybe have designed a simple pipelined microprocessor in Mentor Graphics or VHDL/Verilog. Some of us maybe even implemented it with FPGAs.

      However, anything close to being as complex as Intel/AMD chips requires an army of highly experienced architects/engineers, many of them with PhDs. Even the software design tools, such as Mentor, cost well over $100,000.

      Then building the chip is another beast, requiring a fab facility on the order of $1 billion for any process with feature sizes smaller than 0.5 micron.

      Microprocessors are becoming so complex to design and build, that only a few companies are surviving. Sort of like the aircraft industry. There are only 2 remaining companies in this world that design and build 300+ passenger commercial aircraft (Boeing and Airbus). It is infeasible for a new competitor to arise because of the capital involved (unless of course it is nationally sponsored).
      • by Bobzibub ( 20561 ) on Saturday May 11, 2002 @12:18PM (#3502334)
        Replace
        'Microprocessor' with 'Operating System'
        'Intel' with 'Microsoft'
        'AMD' with 'Sun'
        ....
        Read the above comment again. ; )


        Building a chip in a fab would have to be a traditional commercial endeavour. Agreed. Aren't Boeing and Airbus the only two aircraft manufacturers because they are subsidized, and therefore others cannot compete? Cheers!
        • Replace
          'Microprocessor' with 'Operating System'
          'Intel' with 'Microsoft'
          'AMD' with 'Sun'
          ....
          Read the above comment again. ; )

          Claiming that other complex products have been created by people with fewer resources does not invalidate the original post. The cost of entry into the software market is HUGELY less than entry into hardware. Within hardware design there are many fields where the bar to entry is very low (simple data acquisition/control interfaces come to mind) and many amateurs are selling commercial products. But most of the high-end stuff requiring expensive tools is beyond the reach of the guy in the garage. I guess my point is that, to take Linux as an example, you can write a kernel and have it be immediately useful. Heck, I've done this by writing a small preemptive RTOS kernel for my own use. But simply building, say, a pipelined arithmetic processing unit gets you nothing without the rest of the CPU around it.
          There are many areas in electronics where those of us struggling in the basement can build high-performance equipment, but CPU design is not one of them.

          Apples and onions, dude.
        • Wrong assumption.
          The fact is that there is a limited market for large commercial transports, as most planes are either flying (which only costs the airlines fuel and pilots) or parked at an airport gate (which costs much more).
          Thus there need only exist as many planes as there are gates, plus the number that can be in the air, plus the ones undergoing checks, with the total fleet having the capacity to carry every paying passenger.
          The only new planes required are replacements (with the old ones either being scrapped or stored in some desert).
          Thus only a limited production of planes is required.
          Furthermore, with the development of a plane costing billions before seeing any returns, I invite you to start your own commercial plane-building company and see in how many seconds you sell the shirt off your back...
      • I could swear I read something about this being the reaction Steve & Steve got from people about home computers. Too complex to build a useful computer, never be able to compete with the big guys. Guess they were wrong too.

        Besides, I wouldn't be aiming to build a computer processor. I'd just want to build a processor that could process something.
      • I agree with most of your points but the ARM is an interesting counter-example. It was designed by four or five guys at Acorn Computers in the UK. They had just been told to sod off by Intel when they wanted to license the 8086 as a base design. It took about five man years of work - five guys working for just under a year - everything worked first time when plugged in (including all the IO and the peripherals), and they got around the manufacturing problem by licensing the design to OEMs who wanted to embed it. It was (and is) a joy to program for, has very low power consumption and is easily extensible.

        In a supreme irony, Intel ended up licensing the ARM from Acorn RISC Machines in the early 90s. Right now ARMs are everywhere - PDAs, cellphones, routers and switches. Now of course a 200MHz ARM running in an iPAQ is a little less complex than a modern P4 with SSE2 and all its other bells and whistles, but it's close. I think it's encouraging that designing a successful microprocessor has been shown to be not solely the domain of giant corporations with billions of dollars in fabs and armies of PhD-wielding staff.
        • ARM is really an instruction set specification. Most ARM implementations I know of are rather simple (except for the engineering expended to make them low-power).

          I can't think of an ARM implementation that is superscalar, speculative, and performs out-of-order execution.
          • I can't think of an ARM implementation that is superscalar, speculative, and performs out-of-order execution.
            But that's not the point. Not all processors were designed for general PC use. A good majority of the processors out there are embedded processors. The ARM/Thumb spec was designed specifically for this. Its other key feature is low power usage (which is very important for battery-operated devices).

      • It is infeasible for a new competitor to arise because of the capital involved.

        That's not true, actually. The costs of design (not manufacture) are coming way down as simulation and development technologies streamline the design process. There is heavy competition in the microprocessor space for servers and networking chips. Intel and AMD's stranglehold on the PC and general server market is due to the overwhelming developer support for the x86 platform, not the costs of developing new chips. Look at the Itanium. If it wasn't for x86-64 they wouldn't be able to sell one of them. If you look at the embedded space, where developer support is less relevant, you'll see a wide spectrum of chip makers and healthy competition.

        • Very true about the embedded space...lots of competition. Embedded processors are actually very simple cores. They are usually in-order and have simple pipelines. Plus there exists a large design space of customization that stimulates competition and offers many different companies the ability to offer something different. Just look at all the different "systems on a chip" out there.

          And yes, standard tools exist that can automatically transform high-level specifications into mask-level designs. However, for anything that will come close to the price/performance ratio that Intel and AMD have achieved for general-purpose microprocessors, full custom design is usually required.
        • The problem isn't really the amount of capital (you can get capital if you have a good business model). The problem is that there is a huge barrier for a new company to enter the market. I mean, just look at Transmeta. They were said to be able to take over Intel's position as the number one processor producer. Where are they now?

          The main problem is that the industry has narrowed down to these two giants. They have brand recognition. I mean, if a CompUSA started selling PCs or Laptops with Transmeta chips next to a PC or laptop with a "Pentium" chip, which is the consumer going to pick?
          • The main problem is that the industry has narrowed down to these two giants. They have brand recognition. I mean, if a CompUSA started selling PCs or Laptops with Transmeta chips next to a PC or laptop with a "Pentium" chip, which is the consumer going to pick?

            The one without the fan.
      • Designing a modern microprocessor cannot be done by amateurs or a group of people with B.S. degrees in electrical engineering. Sure, many of us have taken undergraduate architecture classes and maybe have designed a simple pipelined microprocessor in Mentor Graphics or VHDL/Verilog. Some of us maybe even implemented it with FPGAs.

        This is either irrelevant or just stupid, depending on how you look at it.

        It is true that no amateur is going to build his own 747 either, but there is no lack of people who build their own planes and gliders. Using FPGAs of modest cost, amateurs can implement processors which are perhaps 8 years back on the power curve. I don't know about you, but I found the computer that I owned 8 years ago to be quite a useful gadget. The ability to reprogram the core of your microprocessor to (say) add new instructions, peripherals and capabilities seems to be a cool one. As the FPGA industry moves forward, experimenters in this technology will also track Moore's law improvements. Yes, they will always be behind what billion-dollar fabs can produce, but I fail to see why this is a problem for amateur chip designers.

        Microprocessors are becoming so complex to design and build, that only a few companies are surviving. Sort of like the aircraft industry. There are only 2 remaining companies in this world that design and build 300+ passenger commercial aircraft (Boeing and Airbus). It is infeasible for a new competitor to arise because of the capital involved (unless of course it is nationally sponsored).

        Again, so what? We were talking about amateur designs, not going into competition with Intel and AMD. I imagine that Linus heard similar arguments about the infeasibility of writing his own operating system.

        Linus took the wide availability of inexpensive PC computers and leveraged those to create a new operating system. Amateur FPGA designers could try to leverage the availability of inexpensive FPGA chips to design their own processors. If you asked me the likelihood that anyone would be using them in a commercial environment a year from now, I'd say it was pretty low, but in a ten year time span....

      • Designing a modern microprocessor cannot be done by amateurs or a group of people with B.S. degrees in electrical engineering. Sure, many of us have taken undergraduate architecture classes and maybe have designed a simple pipelined microprocessor in Mentor Graphics or VHDL/Verilog. Some of us maybe even implemented it with FPGAs.

        That's a very misleading statement, as you define MODERN as meaning something bloated and complex like AMD/Intel chips. The problem isn't that making a MODERN processor is very difficult, it's that implementing a poorly designed ISA is. If you insist on using x86 (I would *NEVER* make an x86 processor in my life) then of course you'll never get anywhere. I do not think, however, that it would be that difficult to make a full MIPS R2000 chip, a la Nintendo 64. Most MIPS instructions are very simple to implement; it becomes mostly an issue of pipeline control, and then caching/memory interfacing. I will grant that designing an FPU from scratch would be somewhat difficult (the MIPS processor I designed lacked an FPU, so I have not done that). MIPS, or heck, even a PowerPC chip would not be prohibitively complex. There's no future in CISC, and I do not see why you choose to use that as your metric for determining feasibility. RISC chips are really not all that complex (it depends on how many execution units you want, etc.), especially if you use a simple and well-thought-out ISA.

        I don't think many people honestly WANT to implement x86, because it's so difficult to do, and it is difficult to add cool features to, whereas RISC ISAs are usually rather simple to extend in many ways.

        Just because an ISA is created by Intel does not make it modern (in fact, the x86 ISA is the LEAST modern ISA still in wide use).

        Just a thought. (disclaimer: I'm not a computer engineer, and never will be. I'm a CS/Physics major, and I've taken one class in computer architecture)
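The post's claim that a MIPS-style ISA is mostly simple to implement can be illustrated with a toy interpreter. This is a hypothetical Python sketch of a few R2000-like instructions (the instruction encoding and register count are invented for illustration); a real implementation adds pipelining, hazards, caches and an MMU, which is where the actual work lies:

```python
# Toy interpreter for a tiny MIPS-like register machine (hypothetical sketch).
# The point is only that the core instruction semantics are simple.

def run(program, regs=None):
    regs = regs or [0] * 8          # 8 general-purpose registers; r0 reads as 0
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "addi":            # rd <- rs + immediate
            rd, rs, imm = args
            regs[rd] = regs[rs] + imm
        elif op == "add":           # rd <- rs + rt
            rd, rs, rt = args
            regs[rd] = regs[rs] + regs[rt]
        elif op == "beq":           # branch to target if rs == rt
            rs, rt, target = args
            if regs[rs] == regs[rt]:
                pc = target
                continue
        regs[0] = 0                 # r0 is hardwired to zero, as on MIPS
        pc += 1
    return regs

# Sum 1..5 into r1, counting down in r2.
prog = [
    ("addi", 2, 0, 5),    # r2 = 5
    ("beq", 2, 0, 5),     # exit loop when r2 == 0
    ("add", 1, 1, 2),     #   r1 += r2
    ("addi", 2, 2, -1),   #   r2 -= 1
    ("beq", 0, 0, 1),     # unconditional branch back (r0 == r0)
]
```

Running `run(prog)` leaves 15 in r1, which is about as much "architecture" as a first FPGA soft core needs before pipeline control enters the picture.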
        • Take a few more advanced architecture classes.

          ISAs are mostly irrelevant in terms of performance potential (except for IA-64, which I will get to). Both AMD and Intel devote a (small) portion of their transistor budget to dynamically converting the CISC instructions into RISC-like "micro-ops". Thus the actual execution cores of the AMD K7 and Intel P6 micro-architectures are very similar to, say, the MIPS R10000 core. Now if Intel and AMD had a decent ISA to begin with, they could devote those transistors (used to convert CISC to RISC) to things like bigger caches. Thus the performance penalty of using a lousy ISA is really not that much, as evidenced by the success of Intel and AMD in raw computational power.

          Your comment about "RISC chips are really not all that complex" is extremely ignorant and uneducated. Please tell me again that the MIPS R12000 core is "not all that complex" after studying superscalar speculative out-of-order execution.

          The IA-64 ISA really is different because it takes a radical approach to achieving instruction-level parallelism. It is very VLIW-like and contains many advanced features like "poison bits", register windows (not SPARC windows), software pipelining support, etc. Thus the parallelism is discovered by the compiler and can be expressed to the architecture, unlike RISC and CISC ISAs, which rely on the hardware to discover and provide parallelism (through OOO execution).
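The CISC-to-micro-op conversion described above can be sketched in miniature. This hypothetical Python example (the instruction tuples and "tmp0" scratch register are invented) splits an x86-style memory-operand add into RISC-like load/add/store micro-ops, roughly the kind of decomposition the K7 and P6 front ends perform in hardware:

```python
# Hypothetical sketch of "cracking" a CISC instruction into RISC-like
# micro-ops, in the spirit of the K7/P6 decoders. Formats are invented.

def crack(insn):
    op, dst, src = insn
    uops = []
    if op == "add" and dst.startswith("["):       # add [mem], reg: read-modify-write
        addr = dst.strip("[]")
        uops.append(("load",  "tmp0", addr))      # tmp0 <- mem[addr]
        uops.append(("add",   "tmp0", src))       # tmp0 <- tmp0 + src
        uops.append(("store", addr,  "tmp0"))     # mem[addr] <- tmp0
    else:                                         # register-register ops pass through
        uops.append((op, dst, src))
    return uops
```

The execution core downstream only ever sees the simple micro-ops, which is why the K7/P6 cores can resemble a MIPS R10000 despite the x86 front end.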
          • ISAs are mostly irrelevant in terms of performance potential (except for IA-64, which I will get to). Both AMD and Intel devote a (small) portion of their transistor budget to dynamically converting the CISC instructions into RISC-like "micro-ops". Thus the actual execution cores of the AMD K7 and Intel P6 micro-architectures are very similar to, say, the MIPS R10000 core. Now if Intel and AMD had a decent ISA to begin with, they could devote those transistors (used to convert CISC to RISC) to things like bigger caches. Thus the performance penalty of using a lousy ISA is really not that much, as evidenced by the success of Intel and AMD in raw computational power.

            I don't know about that one... I've read much about various architectures, even some about the IA-64 architecture (I'm rather excited about it, because a group of scientists here used it to achieve a threefold increase in their finite-element simulation performance using Itanium processors). I would say that having a lousy ISA dramatically constricts what compile-time optimisations can be done. Look at things such as compile-time branch prediction/prefetch instructions, predication bits, register rotation (granted, that need not be part of the ISA, however to take full advantage of it you have to know it's going to be there), speculative loads, cache hints, etc... You just can't do most of that stuff if your ISA doesn't support it! Implicit parallelism is often not good enough for intense applications. Also you must balance this against design issues with things like multiple instruction lengths, and worst of all with x86, the lack of registers! I'm quite aware that modern x86 implementations have many more internal registers, however that CANNOT POSSIBLY BE AS GOOD as having more visible, usable registers.

            Oftentimes it's the compiler (or the programmer) that knows best; it sees the big picture. Having an ISA that doesn't allow you to express parallelism, that doesn't allow you to save cycles in critical parts of loops or preload things, and that makes you do a lot of unnecessary branching, that CERTAINLY has a lot to do with performance. And let's not forget about SIMD instructions, or vector-based register operations (okay, I know that hasn't been popular for a long time, but when you have a really slow processor it's actually rather attractive).

            Your comment about "RISC chips are really not all that complex" is extremely ignorant and uneducated.

            I actually meant to say that "RISC chips do not really need to be all that complex." If you don't do branch prediction or register renaming/rotation, if you don't do multiple parallel instruction units, they are actually not that bad. I argue that a bunch of guys with bachelor's degrees from a decent school CAN design something reasonably modern, just not as fancy and overly complex as an x86 CPU. I don't think the point of rolling your own CPU is to make something better than what you can buy for $100 at a computer show, but rather to explore something that's quite explorable.

            Also notice I didn't mention Itanium anywhere... that would be a nightmare, trying to design something even remotely similar. There's nothing too bad about doing explicit parallelism; in fact I would think that it's actually easier than implicit parallelism, however some of the features they include are just wacky!

            Then again, what do I know?
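The predication point in the thread above relies on if-conversion: turning a short branch into straight-line code where both arms execute and a predicate selects the result. A hypothetical Python sketch of the idea, contrasting a branchy absolute value with a predicated one (the "predicate register" is just an int here), roughly as an IA-64 compiler would arrange it:

```python
# Hypothetical sketch of if-conversion: replacing a branch with predicated
# execution, in the spirit of IA-64. Both arms run; a predicate selects.

def abs_branchy(x):
    if x < 0:            # hardware must predict this branch
        return -x
    return x

def abs_predicated(x):
    p = int(x < 0)       # predicate "register": 1 if x is negative
    neg = -x             # both arms computed unconditionally
    pos = x
    return p * neg + (1 - p) * pos   # predicate selects the live result
```

The two functions agree on every input; the predicated form trades a hard-to-predict branch for a little extra computation, which is exactly the bargain predication bits offer the compiler.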
      • It is infeasible for a new competitor to arise because of the capital involved

        Rule #1 of an econ class I'm taking right now: capital is never a constraint for entering a market. It's out there; you just need to get it.

        When the government is looking at antitrust cases, they get concerned when entry into a market isn't easy. And they don't consider capital an issue.
      • Then building the chip is another beast requiring a fab facility in the order of $1 billion for any process with feature sizes smaller than 0.5.

        You don't need to build your own fab; there are fabs out there that will gladly build your IC for you, the most popular being TSMC [tsmc.com]. Many companies use external fabs (so-called "fabless" semiconductor companies), including household names like Nvidia or ATI.

        Mind you, it's still expensive as hell ($0.25-1 million US for your own mask set for an advanced process), which is why many amateurs use FPGAs instead.

  • FPGAs (Score:3, Interesting)

    by zephc ( 225327 ) on Saturday May 11, 2002 @11:39AM (#3502199)
    I attended an IEEE meeting at my school [cogswell.edu] recently, and a guy from Xilinx [xilinx.com] presented and demoed FPGAs (their brand, of course) and told us why we should use FPGAs for our signal processing needs. Being an SE student, there were quite a few things that were over my head, but the talk about the massive parallelism clicked with me, and hearing that one client of theirs had OC-768 signal processing within one FPGA chip, well, that was pretty damn cool. Also, being able to design your circuits with a nice GUI interface, rather than in VHDL or Verilog or whatever, looked pretty damn cool.
    • Re:FPGAs (Score:3, Insightful)

      by svirre ( 39068 )
      Right. You do _not_ want to depend on the supplied Xilinx software for synthesis. It's pretty much crap. Use Synplicity or Leonardo Spectrum (to be replaced with Precision Synthesis this summer) instead.

      Also, you most definitely do not want to design your circuit graphically. In the time you take to draw a single state machine graphically, I will have designed the whole circuit. Graphical design tools are OK for the structural design phase (this is however a minuscule part of the whole process); otherwise they are pretty much toys. The best digital hardware design tool available is Emacs.

      Once you have learnt hardware design, and understood the difference between a programming language and a hardware description language, VHDL is quite easy to deal with. (I don't know much Verilog, and from what I have seen I don't want to deal with it. It's verification hell.)
  • /.ed already (Score:5, Interesting)

    by Kizzle ( 555439 ) on Saturday May 11, 2002 @11:50AM (#3502240)
    Since most articles are /.ed as soon as they are posted, I think a great feature for subscribers would be a mirror of each article hosted on Slashdot.

    • NOTE TO MODERATORS: Yeah, this is off-topic, but comes up often enough that I thought I'd take a stab at it anyhow. Thanks.

      This would probably make a lot of people angry. Your motives are great; you want the subscriber base of /. to enjoy the articles, without having to brutally flood some guy's server(s).

      Trouble is, a lot of sites look to ad revenue to pay for at least some of the cost of hosting and bandwidth. If you mirror the article, most ad systems are "cut out of the equation." Now, this is sounding better and better for /. readers, but not so hot for the site operators' bottom lines. Even if the server goes down, the revenue from our traffic may be well worth the downtime (depending on the site, of course).

      Maybe mirroring of academic articles (without ads or other profit-generation methods) would be appropriate, though. Or, maybe /. could try to contact a site owner prior to posting an article. Say, give the owner a couple of hours advance notice, and let the guy decide for himself if he'd rather be mirrored or /.'ed.

      Just a few thoughts. :)

      • I disagree (Score:2, Informative)

        by hendridm ( 302246 )
        > Trouble is, a lot of sites look to ad revenue to pay for at least some of the cost of hosting and bandwidth.

        First of all, this would be no different from what's in the Google cache, which is often posted with Slashdot articles.

        Second, if a site is Slashdotted, it has the maximum amount of viewers the site owners intended to visit at any given time, all exposed to their ads. Since they did not purchase the infrastructure to allow any more visitors to view their ads+content (by choice), it seems that they were not targeting anyone above that amount. So is it really a big deal if the rest of us see the content cached without ads?
  • by brejc8 ( 223089 ) on Saturday May 11, 2002 @11:53AM (#3502252) Homepage Journal
    The whole point of having an FPGA implementation is to allow you to get the latest version of the processor with a patch, debug fix, or improvement. Imagine compiling the latest distribution down to your processor and off you go. If you want it to do something special then hack the code.
    www.opencores.org has many processors already. I made a MIPS R3000 with a cache and MMU etc. with minimal knowledge of hardware design.
  • I took a class in college where we learned how to do this, with the last assignment ending with implementing a processor with 12 or so instructions.

    The one thing I think I came away with is that you can build just about anything with FPGAs, whether that means CPUs, or just controllers for large LEDs, garage door openers, mp3 players or whatever...

    There is a huge gap to fill in terms of geeks designing neat household or hobby chips that just do something that you need to implement in hardware (or firmware, I guess). These devices don't need to be as fast as Intel's, but they can certainly do something Intel's never done. I've always wondered why there aren't more open source projects built on this idea... anyone know? Anyone know where to look for these projects?

    I guess a reality to recognize is that miniaturization and faster processors with more features will eventually drive almost everything into the software arena (arguably already happened), so you might as well just write your cool device in software and run it on your Linux iPaq or whatever replaces it...

    Of course, I'm far from being an expert in this arena so this is just amateur speculation...
  • by Anonymous Coward on Saturday May 11, 2002 @11:57AM (#3502263)
    Only properly government licensed and monitored programmers and technical people should be allowed to work on such technology as the potential for using this technology to violate the DMCA exists. Anyone who disagrees with this is a terrorist.

    GOD BLESS AMERICA
  • When I made my MIPS clone, MIPS got straight on my back, sending me many threatening letters. Firstly they wanted to make sure I wasn't breaking any of their IP. Then they wanted me to place a massive blurb stating I don't have anything to do with their company. Then they went down to the level of requesting that my report on the building of this processor use MIPS as an adjective rather than a noun. Each time they recommended that it would be much easier if I just gave up and took it off the web.
    • Then they went down to the level of requesting that my report on the building of this processor use MIPS as an adjective rather than a noun.

      That's just common practice with trademarks. For instance, you'll never hear a commercial for the "Pentium" unless Pentium is followed by "processor". Further examples: SPAM is an adjective [spam.com] and should be followed with "luncheon meat." Java is an adjective [sun.com] and should be followed with "technology," "platform," or "language." Macintosh is an adjective [apple.com] and should be followed with "computer."

      MIPS as a noun does not refer to a processor architecture. It refers to an easily-fudgeable benchmark.

  • Ack! I just turned in my senior design project yesterday, after spending almost two all-nighters in a row getting everything to work. And guess what it was? A CPU designed from the ground up, implemented on a Xilinx XC4010E FPGA!

    Course I did mine completely in schematic entry -- VHDL code is for wimps ;)
    • I open sourced my third year project which was made in schematics, and I keep getting emails from people saying things like "I found the schematics on your website but I can't find the VHDL source code".
    • Actually it's the other way around: VHDL is better, and schematic entry is for wimps.

      The reason is that what you draw in the graphic editor is not what actually gets put in the FPGA. I don't know much in detail about how FPGAs work internally, but basically, the compiler can look at a VHDL description and produce the most efficient gate-array implementation for it. Given a schematic design, it doesn't have a high-level sense of what the logic is supposed to do, so it's harder to produce an optimal FPGA implementation.

      Your schematic design would be more efficient if it were implemented as you actually drew it, but not on an FPGA.

    • Schematic entry is for people who do not know VHDL. There is hardly any other reason to use schematic entry when doing CPLD or FPGA programming, because schematic entry hardly gives you more control over the PAR process.

      Cleverly done VHDL can also give you close control over the actual logic. Just look at this CPU: 8 Bit CPU in CPLD [tu-harburg.de]. Even though it is done in VHDL it is optimized to fit just into the smallest CPLDs available.

      Btw. I found above link on http://www.fpgacpu.org [fpgacpu.org] which is another good starting point for FPGA based cpus.

  • Designing a processor completely in schematic? I'd say that's reasonably impressive. I have trouble doing friggin' pong in a combination of schematic and VHDL =P. Then again, I'm not exactly well versed in either. And Max+Plus 2, the student version, could be easier to work with.
  • by johnjones ( 14274 ) on Saturday May 11, 2002 @12:19PM (#3502335) Homepage Journal
    ok people wake UP

    you can't just go out and write an ARM clone or a MIPS clone ....

    because their implementation is covered by patents ... if you think that's bull, many companies have tried to get around it and failed

    there is nothing stopping you from doing a SPARC [sparc.org] clone as it's an IEEE standard

    the European Space Agency made a SPARC clone and the source is LGPL; it's called LEON [estec.esa.nl]

    seriously, if you want a micro then design a new ISA, don't clone an existing one

    INVENT, don't clone

    regards

    john jones

    p.s. http://www.opencores.org [opencores.org] is also a good starting place
    • MIPS has only one patent left after Lexra went through them, proving each one was anticipated by IBM in the sixties, among other things. This patent is on unaligned loads and stores, which are going to be scrapped in later MIPS ISAs. So, not many people know this, but the MIPS32 ISA is virtually open. MIPS will make grunting noises but they can't do very much. ARM is very heavily patented and they are just as aggressive, if not more so. The point of using these ISAs is that they are very well supported: compilers, debuggers, and OSs are already made for them.
    • you cant just go out and write an ARM clone or a MIPS clone ....

      Dude, this is for HOBBY USE. No one is going to be selling these for less money or with more performance than the real thing.

  • by Anonymous Coward
    Be sure to research all patents and all intellectual property laws, including the DMCA, and all software licenses very well before proceeding. If you don't follow and obey the above steps before doing anything with your computer and your licensed software, then you are a criminal and a terrorist and will have to be tried before a military tribunal and executed as such.

    GOD BLESS AMERICA
  • How about a whole series of One Instruction Processors which can directly run one or more of the one-instruction computer languages out there?

    -Rusty
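    For reference, the usual one-instruction language meant here is subleq ("subtract and branch if less than or equal to zero"); a hypothetical machine for it can be simulated in a few lines of Python (memory layout and the halt-on-negative-address convention are illustrative choices, not a fixed standard):

```python
def subleq(mem, pc=0):
    """One instruction: mem[b] -= mem[a]; jump to c if the result <= 0,
    otherwise fall through to the next triple. A negative pc halts."""
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3

# Tiny program: clear cell 6 by subtracting it from itself, then halt
# (the second triple jumps to address -1).
prog = [6, 6, 3, 0, 0, -1, 41]
subleq(prog)
print(prog[6])  # prints 0
```

    Everything else (addition, copying, unconditional jumps) is built out of sequences of that one triple, which is why the synthesizable hardware for such a core is so small.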
  • That's one of the nice things about patents -- they expire. And even before they do that, they can become unenforceable for a wide variety of abuses that companies like Intel and AMD are quite likely to commit at some point judging from their past legal shenanigans.
    Of course the real tasty thing would be a nano FPGA churned out by viruses or RNA or some such thing combined with some patent fumble on the latest generation CPUs, yowzza. Can you imagine a. . . nevermind.
    • Even if they expire, there is still a great cost barrier for small companies (or individuals) to produce chips. Secondly, the life span of a computer chip is significantly shorter than the lifespan of a patent. I mean, fifteen years from now, would you be willing to buy a computer with a P4 processor? (If you are, let me know, I'll sell you mine in 15 years)

      The only applicable use that comes out of a patent ending is when you have a load of legacy systems that need their parts replaced. (Just look at the other article from today... about how NASA is buying 8086 chips from eBay.)
  • by Dr. Awktagon ( 233360 ) on Saturday May 11, 2002 @12:54PM (#3502481) Homepage

    It's probably a good idea to start getting into this as a hobby, because when a weakened version of the CBDTPA gets passed (and you know it will), we'll be left on the leashes of the entertainment companies, even if we don't buy their products.

    So all you young'uns better ask mommy and daddy for an FPGA programmer this Christmas!

    I think it would be incredibly cool to have a machine entirely made of open hardware and open software. Don't need that FPU? Take it out and re-burn your CPU, use those extra gates for something else! Need some kind of custom operand in your assembly code? No problem!

  • I want to design my first baby processor, but can anyone tell me, for a novice, what is the best place to start? I saw the list of cores on OpenCores but I don't understand which one to begin with.
  • by Colonel Panic ( 15235 ) on Saturday May 11, 2002 @01:07PM (#3502529)
    For those who know C++ and don't want to learn a specialized HDL (like VHDL or Verilog) you might want to check out System-C (which happens to be available under an open source license - you can download the whole thing including a simulator).

    System-C promises to allow you to develop hardware models at a higher level of abstraction than either VHDL or Verilog - and you won't have to learn a new language.

    You can find out more at: http://www.systemc.org
    • Having spent some time using SystemC this past semester (implementing a MIPS R10000-like instruction dispatch/issue with branch misspeculation and exception recovery), I have a little insight into this.

      It's definitely great to have the C preprocessor, because you can (ab)use macros to your heart's content. If you enjoy debugging output that contains many printf's or cout's, it's great. It also has VCD output for your waveform viewer.

      Its error messages come from two sources, however. First, there are C++ compiler errors. It uses templates all over the place, so if you have a slight type error, you will commonly see 100+ lines of error output when you try to build. Making sense of this isn't fun. Second, there are runtime errors, generated by the SystemC library. SystemC has a long way to go in improving the text of the messages and pointing to where your design is breaking (saying "port 126 is not connected" is not helpful; a port name would be extremely nice).

      It's extremely tempting to design hardware as if you were writing a C/C++ program. This is likely to produce impossibly huge hardware constructs (think about unrolling all of your loops). Breaking this temptation is tough, even for hardware people. Wires and arrays of wires are simple in Verilog. They are horrendous in SystemC (if you follow their suggestion and don't use bit vectors for types 64 bits wide).

      I don't understand why it has to require combinational and sequential logic to be separated into separate functions (aside from syntactic reasons, since we really deal with C++). To make it worse, the designation of whether a function is combinational or sequential is in a separate file. This sort of thing should be obvious in an HDL.

      Of course, all of this is opinion. You might really love SystemC. More power to you! I'm only lukewarm to it.
  • by MrChuck ( 14227 )
    I used to read him in Byte in the 80s. Learned how Microcomputers worked by staring at and deconstructing wiring diagrams of his simple single board computers - okay these address lines go to this chip selector (multiplexor) and that line enables the RAM and the R/W line causes it to read in whatever is on the DATA bus.


    Bitching.


    He turned me on to X10 in 1986 (predating many of you! :)


    Good magazine, tho sometimes too Windows oriented.


    So what if all the /.ers who just hammered the site into submission, and who have actual hardware interest, subscribed for a year? Perhaps wrote an article based on (Free) Unix-likes?



    Hardware needn't be "design a CPU" - I just finished a serial based hack to a Kensington power master so my computer can turn off other peripherals and the other computers remotely - with a tad more reliability than X10 can provide.



    So support the little mag, learn how to solder, or at least learn how your machines work inside. Being a hardware expert is more than assembling the MoBo you got with that nice RAM and the overclocked CPU.


    If you don't have solder burns on your fingers, you're just a poser!

  • I have been researching the possibility of creating my own transputer [geocities.com] for a while now using FPGAs to experiment with parallel processing [aggregate.org]. I have many empty PCI slots and would love to populate them with interesting gizmos.

    As for the difficulty of designing one's own core, take a look at the F-CPU Homepage [f-cpu.org] where the developers have gotten pretty far along with an interesting "Son-of-MIPS" core with 128 64-bit combination integer/floating registers and a superpipelined architecture. The project is maybe 50% complete, but is interesting nonetheless. Also take a look at OpenCores.Org [opencores.org] which has a bunch of cores for free download. Now if only somebody would donate a chip fab to GNU or Debian :-)

    Still, I believe it is possible to use FPGA boards as reconfigurable daughtercards. I wish somebody could post some more information about how this is done, or how to make a PCI FPGA experimenter card.

  • FPGA Fun (Score:4, Interesting)

    by CajunArson ( 465943 ) on Saturday May 11, 2002 @01:46PM (#3502686) Journal
    OK, you can reimplement a modern processor core in an FPGA if you really want to (I can guarantee you that the FPGA will NEVER run anywhere near as fast as the regular chip) or you can do what I did for our senior design project [purdue.edu].

    We used a Xilinx Spartan II to run the main board on a model helicopter control. The idea was that several sensors, including a 2-axis tilt sensor, accelerometers, an RF controller and an ultrasonic sonar, could be easily integrated into the VHDL core, and then the chip would calculate 4 PWM outputs that drove the 4 motors. While the thing unfortunately didn't fly (weight problems, but hey, we're CompE's, not aeros!) the board itself worked great and the software UART outputted all sorts of fun data about what was going on.

    Here's the interesting kicker: the entire system was clocked at a grand total of 1MHz (that's right folks, 1MHz) and even that was too fast for most of the onboard operations, which we internally clock divided. This thing operated all of the components completely in parallel, so there were no interrupts needed at all. The reconfigurability of the FPGA means you can quickly adapt it to solve a whole bunch of specialized problems very efficiently and quickly. This thing definitely met the criterion for a hard realtime system (motor updates within 1ms of a sensor or RF input) and it did it all via VHDL code, no OS or any high-level software needed.

    Now obviously this is a very embedded solution and is not extremely flexible, but sometimes you need to step back and look at the true advantages that the hardware provides for you, and use it for something other than reimplementing someone else's CPU core (of course, that can be a hell of a lot of fun too.... mmm... 21st century overclocked Trash 80).

    PS--> use my spam address, foxcm2000@hotmail.com, and I'll be more than happy to send you all the VHDL we used to implement the project, since I just graduated yesterday! :)
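    As an aside, the internal clock dividing mentioned above is typically just a free-running counter that toggles an output every N input edges. A software sketch of the idea (parameters are illustrative, not the actual project code):

```python
def clock_divider(divisor, cycles):
    """Simulate a counter-based clock divider: the output toggles
    once every `divisor` rising edges of the input clock."""
    out, counter, state = [], 0, 0
    for _ in range(cycles):
        counter += 1
        if counter == divisor:   # terminal count reached
            counter = 0
            state ^= 1           # toggle the divided clock
        out.append(state)
    return out

# Divide by 4: the output completes one full period every 8 input cycles,
# so a 1MHz input yields a 125kHz output.
print(clock_divider(4, 16))
# prints [0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0]
```

    In VHDL the same structure is a counter process plus a toggle register, and each peripheral can carry its own divider, which is part of why everything runs in parallel with no interrupts.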
  • by Niscenus ( 267969 ) <ericzen@ez-n[ ]com ['et.' in gap]> on Saturday May 11, 2002 @02:00PM (#3502749) Homepage Journal

    I had an article on this a while back (toasted like AlaskanUnderachiever's previous four AMDs), but with the site now gone, I can't seem to find it in either Google or the Wayback Machine.

    Anyhow, I think it is important that even hardware move over to the open source world. There are three requirements for this to kick off:

    An inexpensive system for creating them

    Knowledge and understanding of the standards involved

    A central repository for updating and dissemination

    If a common public utility for creating wafers could come out at fair cost (say, at least equal to a computer, estimate $800 or so) that would be a major step for the first part. If the group [ieee.org] involved at the IEEE [ieee.org] for processor standards could freely distribute some or all of the necessary information, similar to what PASC [pasc.org] did with POSIX, that would assist in the second. Finally, we would need a FreshMeat [freshmeat.net] equivalent for hardware designs.

    Processors are only a beginning... solid state technology, drives and cards would come fast thereafter. Is it an emerging field or something that will remain in the hands of the elite few who actually know the difference between a PSU and an FPU? I can wait you people out... I've been waiting for the creation of massively distributed Open Source Software since before many of you were born!

    • An inexpensive system for creating them

      Good luck... as a fab engineer I can attest that the last thing companies want to do is to make a die for every Joe Schmoe that comes along. The name of the game in fab is yield, yield, yield. Like a recipe for a different cake, each chip design has its own recipe that must have the kinks worked out of it. There is an enormous amount of overhead that goes into starting a process and an enormous amount of money that goes into improving the yield of a process. In short, the companies care about the bottom line, and unless people had millions to pony up for their custom designs it isn't going to happen anytime soon. A company isn't going to let hundreds of millions of dollars of equipment run Joe Schmoe's home-grown microprocessor when it could be churning out far more profitable Pentiums, etc. It's just like Boeing or Airbus, as someone mentioned earlier. Boeing can't afford to build a plane from scratch (i.e., VHDL) for everyone who has $800; it's just not feasible.
  • MMIX (Score:2, Funny)

    by Merlin42 ( 148225 )
    So when can we expect to see actual MMIX hardware?
  • I did this in college, (some 4 years ago). It was a lot of fun.

    I designed a processor that had a stack for a register file. It worked like a charm. It was a pretty serious design too, with a pipeline of 4 or 5 stages and instruction forwarding etc.

    It would have actually been useful as an embedded processor dedicated to running a stack-based language, like Java.

    Of course the next step is to design the whole thing at the transistor level. And that is kind of a pain. Then you have to worry about having enough space to put everything, sizing all the transistors just right, etc. Also you can't put that on an FPGA; you have to be content with SPICE simulations.

    But the gate level design is fun.
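    The appeal of a stack in place of a register file shows up even in a toy software model. Here's a sketch of a hypothetical four-op stack ISA (mnemonics and semantics invented for illustration, not the poster's actual design): no instruction ever names a register, so instruction encodings stay tiny.

```python
def run(program):
    """Interpret a tiny made-up stack ISA: PUSH n, ADD, MUL, DUP.
    Operands are implicit -- everything works on the top of the stack."""
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(arg[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "DUP":
            stack.append(stack[-1])
    return stack[-1]

# Compute (2 + 3) * (2 + 3) without any named registers:
print(run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("DUP",), ("MUL",)]))  # prints 25
```

    In hardware the stack becomes a small RAM plus a top-of-stack pointer, which is exactly why it maps so nicely onto a pipelined FPGA design.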
  • See my company's FPGA CPU News [fpgacpu.org] site, and my three part (March-May 2000) Circuit Cellar series, Building a RISC CPU and a System-on-a-Chip in an FPGA [fpgacpu.org] and the accompanying XSOC/xr16 Kit [fpgacpu.org], which includes schematic and Verilog versions of the processor, SoC, as well as C compiler (based upon lcc), assembler, simulator, specs, docs, test suites, demos, etc.

    There's also an FPGA CPU [yahoo.com] mailing list, with almost 500 subscribers. Send mail to fpga-cpu-subscribe@yahoogroups.com to subscribe.

    Many of us FPGA CPU hackers also frequent comp.arch.fpga on Usenet.

    "I used to envy CPU designers, those lucky engineers with access to expensive tools and fabs. Now field-programmable gate arrays make custom processor and integrated system design accessible to everyone. These days I design my own systems-on-a-chip, and it's great fun."
    You can too.

    Jan Gray, Gray Research LLC

  • Anyone got the VHDL to a Stella-compatible video chip core? Add a 6502 core, and you've got 2/3 of an Atari 2600!
  • testing (Score:2, Interesting)

    by lingqi ( 577227 )
    I am kinda late in replying -- so no karma for me -- but for the record: I wonder how they test these suckers as they come off the toaster (haha, toaster)?

    Usually any chip would require a custom program to be run on a (very expensive, I might add) tester; writing that program is not cheap, and I wonder how they factor in those costs. I wonder if anybody besides me on Slashdot thought of this as a serious challenge?
  • Just for kicks, I have been experimenting with my own processor design using TkGate [cmu.edu] for the past few weeks. TkGate is a great digital circuit simulator with lots of neat features.

    I built a working LCD display simulator out of the built-in LED outputs, connected to some video memory. I also built a data bus that is partially working. I am currently playing with connecting the ALU. I even built an assembler and a cheap assembly language for it :-) Once I add block device support, I want to write a simple OS with a built-in shell for it :-)
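    A "cheap" assembler like that can be surprisingly small. A sketch of a two-pass assembler for a made-up ISA with 4-bit opcodes and 4-bit operands (the mnemonics and encodings are invented for illustration):

```python
OPCODES = {"LDA": 0x1, "STA": 0x2, "ADD": 0x3, "JMP": 0x4, "HLT": 0xF}

def assemble(source):
    """Two-pass assembler: pass 1 records label addresses,
    pass 2 emits one (opcode << 4 | operand) byte per instruction."""
    lines = [l.split(";")[0].split() for l in source.splitlines()]
    lines = [l for l in lines if l]          # drop blank/comment-only lines
    labels, addr = {}, 0
    for toks in lines:                       # pass 1: collect labels
        if toks[0].endswith(":"):
            labels[toks[0][:-1]] = addr
            toks = toks[1:]
        if toks:
            addr += 1
    out = []
    for toks in lines:                       # pass 2: emit bytes
        if toks[0].endswith(":"):
            toks = toks[1:]
        if not toks:
            continue
        operand = 0
        if len(toks) > 1:
            t = toks[1]
            operand = labels[t] if t in labels else int(t, 0)
        out.append((OPCODES[toks[0]] << 4) | (operand & 0xF))
    return out

code = assemble("start: LDA 3\nADD 2\nJMP start\nHLT")
print([hex(b) for b in code])  # prints ['0x13', '0x32', '0x40', '0xf0']
```

    Two passes are the classic trick for forward references: a label can be used before the line that defines it, because all addresses are known before any code is emitted.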
  • My partner and I designed and implemented (on a large FPGA) a VLIW microprocessor. Our processor had 2 pipelines and a 16-bit register file.

    The trouble is that you can't even come close to the number of pipelines or complexity required for a *real* modern processor using an FPGA. For example, in order to save space, we had to eliminate some of the more complex operations (e.g. divide and floating point instructions, on-chip cache management, etc.). And of course we were limited to only 2 pipelines, the minimum necessary to demonstrate parallel execution, which was kinda the point of our project. This was using the largest FPGA available at the time (250k gates, although there are bigger ones out now). Also, the clock speed of our processor was only 1-2 MHz depending upon how we tweaked it. FPGA designs are nowhere near what you could get with a design laid out and etched into silicon. A typical modern processor uses gate counts in the millions, easily 10-20 times what's available in a large FPGA.

    While FPGAs are useful for simulation and experimentation, in the current day and age they just aren't fast and big enough to replace modern processors. If you're into making a small 8-bit RISC processor, or maybe implementing your own 6800 (or maybe a 6502 for you non-embedded folks) design, you can probably do pretty well, however.

    • Hmm. I have a 32-bit processor, an i- & d-cache, an SDRAM controller, some h/w image processing, VGA output, and an audio/video interface on a single FPGA, using about 50% of a Spartan 300E.

      I'm currently working on adding a JPEG encoder and an ethernet MAC to the same, single, device. An S2300E has ~300k "marketing" gates on it, which isn't immensely larger than your own. Perhaps your design is more complex than my own.

      The CPU runs at ~25MHz using just the synthesis tool's PAR option set to max (takes about 10 mins to synthesize). I think I should be able to just about double that (I've had a similar CPU running at 48MHz on its own after applying a lot of RLOCs to the code).

      The real advantage of this is that I don't have to have a computer - ultimately this will be a nice *small* device that will cost a lot less than even the cheapest PC + video capture + network card. An S2300E development board is only $140 from www.fpgacpu.com... I don't know how much the chip itself is, but they must be factoring in *some* profit :-)

      Simon
      • I would guess my design is/was more complex. Is your proc a single pipelined RISC?

        As soon as you add parallel execution, the amount of silicon required goes up dramatically. A 2-pipeline proc will take up much more than 2X the space of a single-pipeline proc. Also keep in mind the 250k were "marketing" gates (effectively we had more like 180k-200k, of which a large portion were used for the writeback register file), and we also built a fairly advanced run-time debugger into the silicon as well.

        It also sounds like there have been dramatic improvements in FPGAs since I did the project about 4 years ago, as one might expect given Moore's law.
