DIY 1980s "Non-Von" Supercomputer

Brietech writes "Ever wanted to own your own supercomputer? This guy recreated a 31-processor SIMD supercomputer from the early 1980s, called the 'Non-Von 1,' in an FPGA. It uses a 'Non-Von Neumann' architecture and was intended for extremely fast database searches and artificial intelligence applications. Full-scale models were intended to have more than a million processors. It's a cool project for anyone interested in 'alternative' computer architectures, and yes, full source code (Verilog) is available, along with a Python library for programming it." Hope the WIPO patent has expired.
  • Neat... (Score:2, Insightful)

    by jetsci ( 1470207 )
    So, that's neat and all, but did I misunderstand something? His model doesn't seem that powerful unless he was using modern processors.
    • by jetsci ( 1470207 )
      I can't help but wonder if this couldn't be emulated for a fraction of the price. Are there any virtualization systems out there that could accomplish what this guy did? I imagine something along the lines of GNS3 might work...
      • Re:Neat... (Score:5, Interesting)

        by mea37 ( 1201159 ) on Friday February 20, 2009 @11:03AM (#26930233)

        Of course a modern computer can simulate a 1980s computer. It would probably take about a day to write a functional simulation in Java.

        For that matter, it's not like this computer can do anything that a modern computer can't do in spite of the different architecture. It was designed to do certain things fast, but anything off the shelf today could run circles around these relics regardless of such optimization. (To GP's point -- since the article indicates that he was building to the functional design of the original, it's probably not powerful by today's standards. He may have used faster components than they had back then -- and he obviously used smaller components than they had back then -- but we're not looking at a modern billions-and-billions-of-transistors-on-a-chip optimized-in-ways-you-cannot-comprehend heat-sink-needing CPU.) So once you talk about "what can it do" at a useful level of abstraction, the answer is "nothing all that practical".

        But that's not the point, is it? This kind of stuff is a hobby and a fascination to some people. I'm interested enough that I might write a software simulation of the machine, but not interested enough to build one. This guy was interested enough to build one.
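
        For the curious, here's a rough sketch of what such a lockstep simulation might look like, in Python rather than Java, with a made-up three-instruction ISA that has nothing to do with the real NON-VON instruction set:

        # Minimal lockstep SIMD simulator: one controller broadcasts each
        # instruction, and every processing element applies it to its own
        # local data. The "ISA" here is invented purely for illustration.
        class PE:
            def __init__(self, value):
                self.acc = 0          # accumulator
                self.mem = value      # one word of local memory
                self.flag = False     # result/match flag

            def execute(self, op, operand=None):
                if op == "LOAD":      # acc <- local memory
                    self.acc = self.mem
                elif op == "ADDI":    # acc <- acc + immediate
                    self.acc += operand
                elif op == "CMPI":    # flag <- (acc == immediate)
                    self.flag = (self.acc == operand)

        def run(program, data):
            pes = [PE(v) for v in data]
            for op, operand in program:   # fetched once by the controller...
                for pe in pes:            # ...executed by every PE in lockstep
                    pe.execute(op, operand)
            return pes

        pes = run([("LOAD", None), ("ADDI", 1), ("CMPI", 42)], [10, 41, 42, 7])
        print([pe.flag for pe in pes])    # [False, True, False, False]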

        It's not like stamp collectors are saving up for a big letter-writing campaign...

        • Re: (Score:2, Funny)

          by jetsci ( 1470207 )

          It's not like stamp collectors are saving up for a big letter-writing campaign...

          That's what they want us to think....all the while, draining the stamp supply!

          But seriously, I understand your point and I can respect that. I just wanted to know if it could actually do anything useful. If not, cool; he's got a pretty neat toy now and that's well worth it.

        • Re: (Score:3, Interesting)

          by hardburn ( 141468 )

          An architecture like this is useful for massively parallel algorithms. It could theoretically outperform modern desktop or server systems within that domain.

          In fact, a rebuild of COLOSSUS [wikipedia.org] was estimated to be 240 times slower than a modern desktop at decoding old German cryptographic signals. That might not sound that good, but if you run Moore's Law in reverse over 60 years, you should get a factor a lot higher than 240.
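
          To put a rough number on "a lot higher than 240" (assuming the textbook 18-month doubling period; purely back-of-the-envelope, and the real figure depends on what you count):

          # Naive Moore's-Law-in-reverse estimate over the ~60 years between
          # Colossus and a mid-2000s desktop.
          years = 60
          doubling_period_years = 1.5
          factor = 2 ** (years / doubling_period_years)
          print(f"expected gap: {factor:.2e}")   # ~1.1e12, versus the reported ~240x
          # i.e. the special-purpose parallel machine holds up far better than
          # transistor counts alone would predict.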

        • Re: (Score:3, Insightful)

          by yttrstein ( 891553 )
          "But that's not the point, is it? This kind of stuff is a hobby and a fascination to some people. I'm interested enough that I might write a software simulation of the machine, but not interested enough to build one."

          I'd be very interested to hear your opinions of these monsters after you've actually attempted to convert one into a real hardware emulator.

          Something tells me if you're really serious about doing it in Java though, I'll be waiting a bit more than "about a day".
          • by mea37 ( 1201159 )

            I'm curious what "monster" you think I'm talking about simulating. Having RTFA (before my initial post, no less), I'd say modeling the nodes of the NON-VON will be pretty simple.

            As to how long you'll be waiting... especially since I said I might do it, it could indeed be a while in terms of calendar time. I didn't say "I'm going to do this by tomorrow"; I said I expect the programming to take about a day's effort. If/when that effort might start, or how spread out it might be even then, nobody knows.

        • Re: (Score:3, Informative)

          by e2d2 ( 115622 )

          Agreed. Every time something is posted on Slashdot, there inevitably comes a "yeah, but this isn't useful to me" post. OK, we get it. It won't solve _your_ problem. In fact it might not solve any problem at all. It may just be cool.

          It baffles me that some people don't get that. It's like they just tossed the "right brain" out the window because it wasn't relevant to the logical problem at the forefront of their mind. Think outside that box you call a head for once.

          I am always surprised where I find inspiration.

    • Re:Neat... (Score:5, Informative)

      by kdawson (3715) ( 1344097 ) on Friday February 20, 2009 @10:26AM (#26929667)
      Check this out...

      Article at /dev/null [dnull.com].

      Everything that you wanted to know and more. An interesting read.
    • A Field Programmable Gate Array (FPGA) is a fundamentally different type of architecture than what you think of as "modern processors." Instead of serial instruction stream execution engines you are provided with an array of programmable logic blocks in a sea of programmable routing. The programming model is akin to programming a spreadsheet in which each cell updates in parallel. Traditionally we think of this as "reconfigurable hardware" since the languages we use for designing physical hardware and FPGA
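
      Not HDL, but that "spreadsheet" mental model can be sketched in a few lines of Python: every cell's next value is computed from the previous snapshot of all cells, and the whole state commits at once on the "clock edge" (names here are invented for illustration):

      # Toy model of clocked, everything-updates-at-once semantics.
      def clock_tick(state, rules):
          """Compute every cell's next value from the old state, then commit."""
          return {name: rule(state) for name, rule in rules.items()}

      # Example: a toggling bit feeding a 3-stage shift register.
      rules = {
          "toggle": lambda s: 1 - s["toggle"],
          "stage0": lambda s: s["toggle"],
          "stage1": lambda s: s["stage0"],
          "stage2": lambda s: s["stage1"],
      }
      state = {"toggle": 0, "stage0": 0, "stage1": 0, "stage2": 0}
      for cycle in range(6):
          state = clock_tick(state, rules)
          print(cycle, state)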
    • by sjames ( 1099 )

      For one, it's just a model. For two, though it's easy to forget, we've come a LONG way in a short time in computing. Your cellphone is vastly more powerful than the flight computers on Apollo. Your desktop machine is way faster than the Crays of the '80s.

      The same general concept done today would necessarily look different simply because CPUs have gotten a lot faster relative to communications channels. The more modern CPUs would spend too long idling in that sort of setup.

      I have seen related designs on PCI c

  • Obligatory (Score:1, Redundant)

    by Big Nothing ( 229456 )

    Does it run Linux?

    • Re: (Score:1, Redundant)

      by Lord Lode ( 1290856 )
      Maybe if you make a beowulf cluster of them.
      • I was just thinking the same thing! Maybe we can use it to display pictures of Natalie Portman, naked and petrified and covered in hot grits. I just bought the goatse.cx domain, we can use it!

        • by wcb4 ( 75520 )
          In Soviet Russia, the goatse.cx domain buys you. But it's a lie, you don't own it... all your goatse.cx domains are belong to us.... next.
    • Re: (Score:1, Redundant)

      by Big Nothing ( 229456 )

      To he who modded me down:

      All your memes are belong to us!!

  • Call me... (Score:4, Funny)

    by K8Fan ( 37875 ) on Friday February 20, 2009 @10:29AM (#26929749) Journal
    ...when he's emulated a Connection Machine [wikipedia.org].
    • Re: (Score:3, Insightful)

      Who the hell modded the parent insightful? To the parent: call me when you ever do anything except post here.

    • by clary ( 141424 ) on Friday February 20, 2009 @12:42PM (#26931813)

      I got a chance to use a Connection Machine (real, not emulated) in the late 1980s, just a couple of years out of college. It was an internal R&D project for a defense contractor, porting a computational fluid dynamics program I didn't understand from Cray vectorized Fortran to the CM's *Lisp. Fun stuff.

      I even got a chance to visit Thinking Machines headquarters in Boston and hear Danny Hillis speak. There he was, dressed in jeans, sneakers, and a T-shirt, speaking to a room full of suits. I remember thinking at the time that being able to do that was quite an indicator of success.

      Yeah, yeah, I know...offtopic, overrated, etc. So mod me down if you must. (Or is that just reverse psychology on you moderators? Muhahaha!)

      • by agilen ( 410830 )

        You mean this Thinking Machines? http://www.inc.com/magazine/19950915/2622.html

        I wouldn't exactly call that "success"...he did cool stuff, but his company was one of the biggest failures in tech history...

  • Holy CRAP! (Score:3, Interesting)

    by zappepcs ( 820751 ) on Friday February 20, 2009 @10:32AM (#26929797) Journal

    FTFA:

    What if I want to build my own?

    Yay open-source! The code isn't exactly polished, but in the interest of promoting weird retro computer architectures, I've provided the python library I wrote for it and the verilog code for the Processing Elements. Wire together as many as you'd like! Use it to catalog all of your WhiteSnake and Duran Duran tapes!

    How the hell did he know about my music collection?

    This is pretty cool. 32 core non-von computing architecture on an FPGA. This is more or less the ARM process... license the IP and put it in an ASIC, except this is free. I've often wondered what might be done with the millions of 30xx series FPGAs that are out there in the world. I could lay my hands on probably 40-50 free. If there were some way to do something like this with them, that would be awesome. I like hobby robotics so it's tempting even though they would not be very power efficient. Still, that's a lot of potential processing for free. Now I'm going to have to look for free/open source code for them.

    • You know, I always wanted to implement a VAX processor on an FPGA. Or better yet (but impossibly expensive), a real, 45nm, full-out printed die.

      I used to collect VAXen. They were powerful boxes for their day. And yes, I /did/ have a Beowulf cluster of them - 8 VAXStation 3100s, a VAXStation 4000/60, and a MicroVAX 4000/200. NetBSD ran great on them. But with modern, fast processors with tons of memory... muahahaha.....

      But seriously - take all those poor legacy folks out there who still have VAXen in busines

      • That's awesome. I worked with a company once whose lifeblood was a single redundant PDP-11... yikes.

        I've got a handful of Motorola MVME systems in the garage that I'm either going to eBay or play with... I can't decide what to do with them, really. Guess I need to get one up and running. Only have AUI network cards for them, so could be awkward. I've been called a bit weird at times too... sigh

        • by ogdenk ( 712300 )

          So get an AUI twisted pair transceiver and plug the machine into just about any hub or switch. It plugs into your AUI port, has some blinkenlights and an RJ-45 jack. They can be had for cheap on eBay. I have a few in my closet. Works great for my old VAXstation 3100.

          AUI ports don't have to be used with ThickNet. I don't think I've seen a functional 10Base5 installation in a LONG time.

      • Re: (Score:2, Interesting)

        I'm currently wrapping up a PDP-11 emulator on FPGA. I'm writing this post while waiting for the simulator to run test code that was written before I was born.... our contract also has us replacing a fixed-head disk with magneto-RAM.
    • Re: (Score:3, Informative)

      by mmkkbb ( 816035 )

      This is more or less the ARM process... license the IP and put it in an ASIC, except this is free.

      Yes: ARM ships soft cores, ST Microelectronics uses soft cores (at least internally), and so do Altera, Xilinx, and OpenCores. Here's [wikipedia.org] a list so you don't search for "soft core" and find something totally different.

      • I hope you know that you just gained karma points by passing up the opportunity for a goatse link. I have to say that I've been rather impressed by some of the ARM processors that are out there. Some tweaked for multimedia apps, others for motion control etc. I've not done detailed comparison between the TI Omap and ARM, but I have an ARM-7 proto board I've yet to play with. May hack an older phone for the Omap complete with screen for a small robot. Thanks for the links

    • by e2d2 ( 115622 )

      The hurdle for FPGAs seems to be the learning curve for programming using VHDL. It's a bit daunting coming from programming on a Von Neumann architecture. That's just my experience so far, being new at FPGA programming. But they do intrigue me now that their prices are coming down.

      • Verilog and HDL et al are very different, though there are scripting setups that look/feel similar... ugh, switching between things like C, Perl, javascript, shell, JAL (PIC mcu), NQC (Lego), Python, HTML, and so on, then go home and try to learn VHDL.... you'll find me in a fetal position in the corner of my home office, under a foldup table, whimpering and praying for sudden savant skills. The shallow smears of blood on the backspace key and gibberish on the screen will be the only clues as to how I got t

  • Transputer? (Score:5, Interesting)

    by Muad'Dave ( 255648 ) on Friday February 20, 2009 @10:34AM (#26929827) Homepage
    Wasn't the transputer [wikipedia.org] an example of this architecture? I'm old enough to be able to say "Get off my lawn!" and remember when the transputer came out; it caused quite a stir.
    • by Anonymous Coward

      Still not.

       

    • by N Monkey ( 313423 ) on Friday February 20, 2009 @10:53AM (#26930079)

      Wasn't the transputer [wikipedia.org] an example of this architecture? I'm old enough to be able to say "Get off my lawn!" and remember when the transputer came out; it caused quite a stir.

      The transputer was a RISC-ish CPU with 4 high speed DMA/serial links allowing it to be easily connected to other Transputers (each with its own local memory) to form a network. As such, it could be used to build a large MIMD system - not a SIMD one.

      Transputers (+ the Occam language) supported multi-threaded programming with very fast context switches and, for its time, they also had very good FP performance when compared to the contemporary x86+float coprocessor.

    • Re:Transputer? (Score:4, Interesting)

      by AtrN ( 87501 ) * on Friday February 20, 2009 @12:57PM (#26932069) Homepage
      The transputer architecture was quite different. It wasn't SIMD, but just a processor with communications links, some on-chip RAM, and h/w support for CSP: a scheduler for threads (called a "process" in occam/transputer-land) and comms via synchronous, uni-directional channels. The scheduler and stack machine architecture made context switches very fast and communications easy. The h/w was notable in that you just needed some power and a clock to get a transputer machine up, and building multi-processor systems wasn't too difficult.
  • by Anonymous Coward on Friday February 20, 2009 @10:44AM (#26929935)

    Folks just don't understand what FPGAs do.

    "So, that's neat and all but did I misunderstand something. His model doesn't seem that powerful unless he was using modern processors?"

    It's implemented in HARDWARE. Everything runs in parallel. To do the same on a "modern" processor would require 300-400 MHz. An FPGA running at a [modest] 25 MHz could get the same or better performance.

    "I can't help but wonder if this couldn't be emulated for a fraction of the price. Are there any virtualization systems out there that could accomplish what this guy did? I imagine something along the lines of GNS3 might work..."

    FPGAs are cheap. A Spartan-3 board can be had for $100-200, and could probably hold 2-3 of these 32-node CPUs.

    Programmers just don't understand the difference between, say, Verilog and C/C++/Java.

    Verilog is the basic building block of CPUs. Everything is done in PARALLEL by default, while in C/C++/Java everything is done SERIALLY.
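
    A crude back-of-the-envelope for that clock-rate comparison (assuming one useful operation per PE per cycle, which is generous in both directions):

    # Why a slow-clocked FPGA can keep up with a much faster serial CPU:
    # every processing element fires on every cycle.
    pes = 31
    fpga_clock_hz = 25e6
    parallel_ops_per_sec = pes * fpga_clock_hz
    print(f"{parallel_ops_per_sec/1e6:.0f} million PE-ops/s at 25 MHz")
    # ~775 million PE-ops/s; a serial CPU emulating the array one PE at a time
    # would need a clock well into the hundreds of MHz to keep pace.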

    • Interestingly, though, while FPGAs are cheap, they're still not as cheap as MIPS on a modern, mass-market Intel or AMD chip.

      There's a fundamental natural limit -- some kind of physical constant that nobody has named yet -- that governs how much computing work can be done on a given amount of silicon, using a given amount of power. It's far from clear that massive parallelism is the way to get closest to that limit.

      The demise of Moore's Law tells us that at least some parallelism will have to be involved, b

    I know that FPGAs do get better performance than a generalized processor, but is it really to the level that you've described?

      I may have to screw around with Verilog again.
      I was referring more to 25 MHz being equivalent to 400 MHz.
        FPGAs are really slow for some applications. This means a narrow window exists between an application that can be done on a micro-controller in software, and the equivalent application being done in an FPGA.

          Many of my applications require long chains of counters and magnitude comparators. FPGAs seem to be particularly bad at implementing them. The estimates I use are:
          A 50 MHz Schottky TTL counter can count at 25 MHz fairly easily.
          A 100 MHz FPGA can count at 2-4 MHz, and
          A 25 MHz micro-controller can

      • The speedup is not always of that magnitude. If what you do is embarrassingly parallel, then yes, it will be a good candidate for an FPGA. However these days, processing cycles are cheap and getting cheaper, so it does not always make sense to do things in hardware.

    • by virtue3 ( 888450 )
      Oh, I'm a programmer, and I understand the difference. I'm just terrified of trying to write a massively parallel program :D
    • by nurb432 ( 527695 )

      It's not true hardware; the FPGA is processing a really low-level application, technically.

      Until you put it into silicon it's not really hardware.

      But it is pretty damned close to hardware and still really cool, and a good example of the power of programmable logic.

      • So, Intel processors use microcode patches. Does that make them "not really hardware"?

        • by nurb432 ( 527695 )

          In my view of the world, microcode does not qualify as true hardware.

          MicroCODE qualifies as software. Sure it's really, really, really low level (I have done my share of microcoding on DEC processors), but it's still software.

  • but it ain't a supercomputer (at least not any more, by today's standards.)

  • I managed to catch this one before the site went down.

    Cool stuff. The author says that these were originally designed to have each processor operate on a record in a database. All concurrently.

    I imagine the speed of such a system would be staggering... though tough to implement for large data sets. Still pretty cool.

    The Python library apparently implements machine code functions so he can debug in real time from the command line. Not my cup of tea, but cool for people that like to fiddle with machine code.
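
    The "one PE per record" idea boils down to a broadcast-and-compare pattern. A toy Python illustration (this is not the project's actual library API, just the concept):

    # NON-VON-style associative search: every processing element holds one
    # record and answers the same broadcast query at the same time.
    records = [
        {"artist": "Whitesnake",  "year": 1987},
        {"artist": "Duran Duran", "year": 1982},
        {"artist": "Whitesnake",  "year": 1984},
    ]

    def broadcast_query(records, field, value):
        # Conceptually each comparison happens in its own PE, all at once;
        # plain Python just models the per-PE result flags.
        return [record[field] == value for record in records]

    flags = broadcast_query(records, "artist", "Whitesnake")
    print(flags)                  # [True, False, True]
    print(sum(flags), "matches")  # the controller then reads back the flags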

    • by mea37 ( 1201159 )

      I found this original intent of 1 processor = 1 record interesting as well, but more from a "so that's why this isn't around today" standpoint.

      In the day it may have sounded like a promising approach. 1M records may have sounded like a big dataset. But today there are two types of dataset -- those so small that a conventional computer can handle them just fine, and those so big that this architecture cannot scale to them. I work in a data warehouse that loads several million records per day.

      To be fair, I

  • Seriously, nice post, nice work by the engineer. Inspiring, and learned something new. FPGA... who wouldn't want to try one himself or herself?

  • IANAL, but it appears 20 years have elapsed internationally, and that US patents 5,021,945 and 4,847,755 are beyond their 17-year life. This is assuming these are the only live applications or continuations.
  • by Consul ( 119169 )

    I wonder if something like this would be good for DSP (not necessarily real-time) work. There is a DSP method called "lumped modeling" which uses binary trees to attach together small algorithms derived from bilinear transforms of electronic components (resistors, inductors, capacitors). The networks go together in a way that looks almost exactly like electronic circuits.

  • MasPar (Score:4, Interesting)

    by mdegerne ( 1482827 ) on Friday February 20, 2009 @11:39AM (#26930801)
    For several years I worked on a SIMD system called MasPar. The system had 8192 processors. It was installed in 1991, and it was not until about 1998 that conventional computers running Oracle could even come close to its performance for data warehouse applications. Sure, it's slow by today's standards, but I bet a custom-built modern version would be an awesome code-breaking and data analysis system.
    BTW: the system was used to help with the Human Genome Project and to search Medical Services Plan data for the Province of BC. It was finally decommissioned in 2000 (or early 2001).
    • Re: (Score:1, Interesting)

      by Anonymous Coward

      I helped install the MasPar at our Uni (as a student employee in computer services) and helped with its upgrade to 8k processors. It got used for a few simulations and some password cracking :)

      It was too bad that it never got more use for any of the tasks it'd have been really good at. Eventually the vendor failed, the software stopped being updated, and the MasPar was recycled... almost.

      At that time the U sold older computers in a biannual surplus auction, so students could get a 386 or whatever for thei

      • A customer of mine moved to Windows servers and away from the DEC Microvax they'd used for several years. My partner talked me into trading them two mice (logitech mice) for it and we loaded it up in my truck along with cables, workstations, software, manuals, etc. We figured we could load up a version of Unix on it and make it do something fun. Unfortunately it had no FP capability and we couldn't come up with any free unix to run on it. The thing spent a few years in a storage closet until we finally thre

    • One of my classmates was a MasPar founder. In the 1980s it was readily doable for a 2-to-5-person team to design a custom CPU with the new Mead-Conway-style circuit compilers and silicon fab factories out there. Lots of clever ideas too. Plus UNIX (before Linux) was a low-cost way of porting an operating system that customer scientists were familiar with. They all claimed C compilers that made porting code easy. NOT! I put energy industry code on a half-dozen of them.

      The problem was the second generatio
    • Such a beautiful machine! I loved working on the MasPar back in the 90's. I remember the eureka moment when my brain clicked, and I stopped thinking Von Neumannly. To this day, those experiences shape how I approach Clojure and Scala.

    • It was slow for a single process. But when you started to use all the processors, that thing sped up. With well-designed code you can drop it by one Big-O step: O(n) can be done in O(1), and O(n^2) can be done in O(n).

  • It had been a wonderful evening and what I needed now, to give it the perfect ending, was a little of the Non-Van.
  • Hope the WIPO patent has expired.

    Don't worry, it's probably public domain by now.

    Unless someone put it to music, that is. [royaltyfreemusic.com]

  • "...a python library..." Hopefully a MONTY Python library? Using all that technology to house all the Flying Circus episodes seems like a good idea.
