Hardware

End of The Von Neumann Computing Age?

olafo writes "Three recent Forbes articles: Chipping Away, Flexible Flyers and Super-Cheap Supercomputers cite attractive alternatives to traditional Von Neumann computers and microprocessors. One even mentions we're approaching the end of the Von Neumann age and the beginning of a new Reconfigurable computing age. Are we ready?"
  • by B3ryllium ( 571199 ) on Wednesday April 09, 2003 @02:35PM (#5694430) Homepage
    "Neumann!"
  • by chimpo13 ( 471212 ) <slashdot@nokilli.com> on Wednesday April 09, 2003 @02:35PM (#5694441) Homepage Journal
    More like an Alfred E. Neuman computing age. I can't wait to see Dave Berg's take.

    Roger Kaputnik, where art thou?
  • by Soft ( 266615 )
    IANAReal Computer Scientist, but aren't all current microprocessors and computers Turing machines? Aren't Von Neumann machines self-replicating devices, which AFAIK we don't have?
    • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Wednesday April 09, 2003 @02:42PM (#5694516) Homepage Journal
      Von Neumann means a processor hooked up to a single memory that contains both the program and the data, executing instructions one at a time in a sequence.

      Compare this to the Harvard architecture used on some embedded processors: a processor hooked up to two separate memories, one containing the program, and the other containing the data. This is useful when you have your program in an EEPROM and your data in a little static RAM. Two types of memories naturally fit into a Harvard architecture, though it's simple enough to do the same thing with some memory mapping circuits.
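
      As a rough illustration (my sketch, not from the post; the memory sizes are arbitrary), the two layouts can be caricatured in C like this:

      #include <stdint.h>

      /* Von Neumann: one flat memory holds both instructions and data,
       * reached over a single bus; the PC indexes the same array that
       * loads and stores use. */
      struct von_neumann_machine {
          uint8_t  memory[65536];   /* code AND data live here */
          uint16_t pc;
      };

      /* Harvard: program and data sit in physically separate memories,
       * e.g. code in EEPROM/flash and variables in a small static RAM. */
      struct harvard_machine {
          uint8_t  program_rom[8192];  /* instruction fetches only */
          uint8_t  data_ram[512];      /* loads and stores only */
          uint16_t pc;                 /* indexes program_rom */
      };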
      • OK, I'm not too hot on this, but does any CPU with a cache do this, such as holding the program in one set of registers and the data in another? I have a vague memory of Itaniums doing this but could be horribly wrong.

        Rus
    • A terrible Karma Whore opportunity, but from FOLDOC..

      John von Neumann /jon von noy'mahn/ Born 1903-12-28, died 1957-02-08.

      A Hungarian-born mathematician who did pioneering work in
      quantum physics and computer science.

      While serving on the BRL Scientific Advisory Committee, von
      Neumann joined the developers of {ENIAC} and made some
      critical contributions. In 1947, while working on the design
      for the successor machine, {EDVAC}, von Neumann realized that
      ENIAC's lack of
      • The PIC microcontrollers are one line of non-von Neumann controllers I use regularly. They're Harvard architecture. I would assume most MCU's are not Von Neumann for the same reason as the PICs, but I only have experience with PICs.

        Your definition of von Neumann architecture is wrong; a von Neumann machine has one data bus for connecting with memory. That means it has to share the bus for program and data memory.

        As far as I know, a VLIW chip that uses one memory bus is still von Neumann architectur
    • by sketerpot ( 454020 ) <sketerpot&gmail,com> on Wednesday April 09, 2003 @02:44PM (#5694549)
      I'm not a real computer scientist either, but I think that Von Neumann came up with the basic model of computers that we take for granted today. For example, a processor that accesses memory and executes instructions in a linear sequence, or something like that.

      The implication is that we are approaching a transition to some seriously wacked out computer designs. I look forward to seeing what these people are coming up with. DNA computers, for example, have a different model of computation.

    • No computer is a Turing machine in implementation (icky tape heads wandering around on an infinite tape - or a finite tape if you knew in advance what algorithm you were about to run), and von Neumann machine refers to implementation.

      A von Neumann architecture treats memory as one big serially addressable hunk of unlabeled "stuff". There's no way to look at the memory and know what anything is (instruction or data? what type of data? what's the meaning of this data?) until you try and execute the memory an
      • Previewing is only helpful if you read the preview. I, of course, meant self-replicating where I put the second self-modifying.

        La la la ... are two minutes up yet?

      • icky tape heads wandering around on an infinite tape - or a finite tape if you knew in advance what algorithm you were about to run


        An implementation of a Turing machine doesn't have to be tape, etc. It's a mathematical abstraction; the tape description is just a metaphor for visualization.

    • by Mr. Slippery ( 47854 ) <.tms. .at. .infamous.net.> on Wednesday April 09, 2003 @02:52PM (#5694610) Homepage
      Aren't Von Neumann machines self-replicating devices, which AFAIK we don't have?

      Von Neumann was smart enough that there is more than one thing named after him. A Von Neumann machine is a self-replicator. A Von Neumann architecture is a computer architecture where programs and data are stored in the same manner.

      Sometimes the latter is also referred to as a Von Neumann machine.

    • Heh. I actually know the difference between von Neumann and Harvard architectures, but I still made this mistake for a few seconds. I guess it's because I read "too much" science fiction, and the concept of vN-machines occurs from time to time. They're good as a sort of "final nemesis" very-bad-concept, and the construction of such machines is often outlawed in whatever social system is being described. Just last week, I read The Wellstone [sff.net] by Wil McCarthy [wilmccarthy.com], which mentioned von Neumann machines. :)
    • You're confusing "Von Neumann device" with "Von Neumann {computer,architecture}", which is an easy mistake to make.

      VN devices are what you said they are, and no, they don't exist yet.

      A VN architecture (or "stored-program architecture") is one where the code for the program gets loaded into the same memory as the data for the program, i.e., essentially everything that you use today. This was in contrast to earlier architectures where the memory was used to store only runtime data, and the code was read

    • As other replies mention, a Von Neumann machine is a conceptual computer which is somewhat more realistic than a Turing machine (although equivalent in the problems it can solve). But why is a relentless science-fiction monster named after a computational theorist?

      The distinguishing characteristic of a Von Neumann machine is that code and data are treated the same. Both are stored in the same memory, which seems natural to a modern user, but was revolutionary back when it was introduced.

      One might say th
  • Well, (Score:5, Insightful)

    by 0x00000dcc ( 614432 ) on Wednesday April 09, 2003 @02:38PM (#5694477) Journal
    Paradigms in science are not meant to last forever; they are usually broken, and computer science is no stranger to this candy.

    • No stranger to this candy? [google.com] What on Earth does that mean?

      (Only "semi" off-topic because technically it's a clarification request about someone's on-topic post, which is therefore itself on topic; "off-topic" because the clarification request is a thinly veiled slam based on one of the oddest turns of phrase I've seen in a while.)
    • Paradigms don't last forever, but some ideas do. (Where forever is a relative term.)

      The Von Neumann architecture presented us with a model for the conventional computer, where instructions are stored as data, which helped us to think of computing and programming in an abstract manner. Even as researchers are trying to advance into new computing architectures, such as FPGA's or quantum computing, the idea of storing instructions as data is permanently plastered into our heads. Universal quantum computers
  • Three articles (Score:4, Insightful)

    by stratjakt ( 596332 ) on Wednesday April 09, 2003 @02:42PM (#5694523) Journal
    Two requiring a subscription, and one a goofy PR piece about wingnut FPGA "computers" that cost 200Gs and up.

    Anyways. The FPGA machines sound intriguing, but really aren't as 'all powerful' as the non-techie Forbes piece makes them out to be. Not everything is parallelizable, and not everything is conducive to dynamically altering the instruction set as you run it.

    The traditional von Neumann architecture is the best solution for many processing tasks; lots of stuff is just conducive to a sequentially operating processor. It's probably the best for all-around general computing.

    And 200 grand is probably better spent on a Beowulf cluster or something than one of these boxes, but I'm sure they have a niche of usefulness somewhere.

    I don't expect to see the traditional computer go anywhere anytime soon.
    • Hear hear. It sounds like a neat idea, and a great machine to be an idiot savant's idiot savant. But I don't think it will have as big an impact as the sensationalistic articles make it out to be. It's cool though.
    • Re:Three articles (Score:2, Informative)

      by olafo ( 155551 )
      All 3 articles are FREE! Try thousands, not 200Gs.
      An interesting quote regarding a FPGA web server application [forbes.com] (in case you didn't get your free login ID just like /.): "The result is that a WinCom server with a few $2,000 FPGAs can blow the doors off a Sun or an Intel-based machine. "We're 50 to 300 times faster."

    • ..is where this sort of stuff really belongs.
      A family member is working here [qstech.com], and the biggest markets they have lined up for their new design are the mobile-phone vendors and image processing [olympus.co.jp]. They aren't interested at all in pitching it toward general-purpose computing.

      Interestingly enough though, the software-defined-radio teams have been eyeing the product with drool in their mouth ever since it was demonstrated [eedesign.com]. Said family member remembers trade conventions the company's been to, where the SDR teams sh
    • Actually, Field Programmable Gate Arrays have the potential, combined with evolutionary programming, to be very powerful. They've already booked some damn surprising and usable results.

      The only problem being that they sometimes come up with solutions which are irreproducible, due to the fact that they've solved the problem using the unique imperfections in the FPGA itself.
    • But if you used FPGAs with asynchronous logic and base3 transistors then you would be cooking with gas! All we need to do is reverse the polarity and we've got an inexpensive supercomputer!
  • by TerryAtWork ( 598364 ) <research@aceretail.com> on Wednesday April 09, 2003 @02:43PM (#5694538)
    I'm sure these articles mention the 'Von Neumann Bottleneck', which is a power-law distribution in instruction execution: 10% of the instructions get executed 90% of the time.

    But *I* say the REAL VNBN is that 90% of all computer scientists are only 10% as smart as Von Neumann.

  • not a hoax... (Score:5, Informative)

    by Anonymous Coward on Wednesday April 09, 2003 @02:46PM (#5694561)
    For those of you skeptics (like myself when I first saw the articles) and for those that didn't RTFA:

    Allan Snavely, a computer scientist at the University of California at San Diego Supercomputer Center, has been using a Star Bridge machine for about a year. He says he originally contacted Star Bridge because he suspected the company was pulling a hoax. "I thought I might expose some fraud," he says.

    But after meeting with Gilson and seeing a machine run, he changed his mind. "They're not hoaxers," he says. "As I came to understand the technical side I thought it had a lot of potential. After talking to Kent Gilson I found he was very technically savvy."


    Silicon Graphics has also asked Star Bridge to send along a copy of its hardware and software. The $1.3 billion (fiscal-year 2002 sales) supercomputer maker wants to explore ways to make a Star Bridge system work with a Silicon Graphics machine.

    Over the past two years Star Bridge has sold about a dozen prototype machines based on an earlier design to the Air Force, the National Security Agency and the National Aeronautics and Space Administration, among others. It has also sold seven of the new models.

    Olaf Storaasli, a senior research scientist at NASA's Langley Research Center in Hampton, Va., has been using Star Bridge machines for two years and says they are very fast but not yet ready to handle production work at NASA. "It's really a far-out research machine," he says. "It's more about what's coming in the future. I would not consider it a production machine."

    One problem, Storaasli says, is that you can't take programs that run on NASA's Cray (Nasdaq: CRAY) supercomputers and make them run on a Star Bridge machine. Still, he says, "This is a real breakthrough."


  • ...Well, that's what the article says. I guess they haven't heard about pipelining, multiple execution units, SIMD etc etc.

    • by IWannaBeAnAC ( 653701 ) on Wednesday April 09, 2003 @02:58PM (#5694660)
      Sure, but they are only variations on the theme of single threaded execution. There is still only one Instruction Pointer, even if it is not always exactly defined due to out-of-order execution or other trickery. Logically, there is still only a single instruction sequence that appears as if it is executed in order. It is nothing like the concurrent processing of, say, the brain, or even a transputer.

      Even hyperthreading is only a minor improvement in parallelism, exchanging one instruction pointer for a small number (2? 4?). Hardly a different architecture.

      • There is still only one Instruction Pointer, even if it is not always exactly defined due to out-of-order execution or other trickery.
        It's a quantum instruction pointer! The shorter the chunk of time being considered, the less you know about which instruction is being executed!
        • Well, just to be picky (I am a physicist ;) it isn't _quantum_ as such because that implies there are non-commuting observables, i.e. that it is NOT possible, even in principle, to make measurements on the system without affecting the results of other measurements.

          But even on an out-of-order CPU, you CAN completely describe the state of all gates exactly at all times (at least, assuming the behaviour is that of an 'ideal' digital circuit). This is not true for a quantum circuit.

          But you have a point, a p

  • The article mentions that Star Bridge has a custom programming language and OS, which is sure to slow adoption to a crawl. Assuming, of course, this thing isn't vaporware to begin with.

    Another point the article makes is that it has been traditionally very difficult to build general purpose FPGA based machines. This got me thinking, anyone else remember a slashdot article from a couple of years ago where a fellow used genetic programming to produce an FPGA instruction set that could differentiate between tw
  • Futureware (Score:4, Insightful)

    by Ghoser777 ( 113623 ) <fahrenba@@@mac...com> on Wednesday April 09, 2003 @02:52PM (#5694611) Homepage
    "Gilson has not subjected his machines to industry benchmark tests."

    Yeah, I have a computer doing 1 trillion gigaflops a second powered by my pet hamster. No test results can disprove me yet!

    "I live in the future."

    Clearly.

    "'It's really a far-out research machine,' he says. 'It's more about what's coming in the future.'"

    Yep. So the title is kind of misleading. This is all stuff in the future, like flying cars and such. We could make flying cars if we wanted to, but we really don't want to yet (economic and regulatory reasons). This technology still has the impediments of being under active exploration and of uncertain economic feasibility.

    It'll rock when they're ready, but it's nothing to go nuts over yet.

    F-bacher
    • We could make flying cars if we wanted to, but we really don't want to yet (economic and regulatory reasons).

      Uh, no, we can't make flying cars. We can make small airplanes, but they can't stop at an intersection like a car can. We can make helicopters, but rotors have a much bigger footprint than a car. We can make vehicles with small rocket thrusters, but probably not with the range of a car.

      • We can make small airplanes, but they can't stop at an intersection like a car can.

        No amount of air traffic will ever require an intersection. It's a three-dimensional world out there.
  • by Traa ( 158207 ) on Wednesday April 09, 2003 @02:52PM (#5694612) Homepage Journal
    A parallel computer can't actually do anything that a serial computer can't do, other than doing things more efficiently. Any von Neumann based computer can simulate a parallel computer and thus achieve the same computed results.

    The hyped 'we are on the eve of the next generation of computing era' seems added by the startup companies' marketing departments and eagerly taken over by the reporters.

    Not to say that the new generation of reconfigurable computers (FPGAs are what... 30 years old now?) aren't a cool thing to have.
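
    (To make the simulation argument concrete, here is a minimal sketch in C, with a made-up per-processor step: a single sequential loop interleaves one step of each simulated processor per round, so it reaches exactly the states a genuinely parallel machine would, only more slowly.)

    #include <stdio.h>

    #define NPROC 4
    #define STEPS 10

    /* Toy per-processor program: each "processor" just accumulates its id. */
    static void step(int id, long *state) {
        *state += id + 1;
    }

    int main(void) {
        long state[NPROC] = {0};

        /* Round-robin interleaving: one sequential CPU advances every
         * simulated processor by one step per round. */
        for (int round = 0; round < STEPS; round++)
            for (int p = 0; p < NPROC; p++)
                step(p, &state[p]);

        for (int p = 0; p < NPROC; p++)
            printf("proc %d: %ld\n", p, state[p]);
        return 0;
    }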
    • Nit Picking (Score:2, Insightful)

      I disagree, but respectfully.

      You are correct in the general case, BUT there are cases where this is not correct. Let's suppose that we've got a task which, using a von Neumann architecture, will take an amount of time that exceeds the expected lifetime of Earth. Now, a parallel computer will, at least in the theoretical sense, see this task take a reduced amount of time. Ignoring the possibility that the von Neumann based computer is shuttled to a safe environment before the destruction of Earth, the task will nev

      • Nitpicking is fine :-)

        while we are doing that... your argument doesn't hold. The sequential (I incorrectly used the word 'serial' earlier) computer runs on a clock speed, and all it needs to do EXACTLY what the parallel computer is doing is run at a higher clock speed. FPGAs are known for their slowness, so it is not trivial to claim that the parallel implementation of an algorithm on an FPGA can be done faster than with an equivalent ($$) amount of sequential processors.

    • A Parallel computer can't actually do anything that a serial computer can't do, other then doing things more efficiently.

      I'd like to know what you mean exactly. I think there may be some limitations to that assertion.

      I agree that an SMP computer or a Beowulf cluster can't do anything that a serial computer can't do. The main reason being that in some ways (e.g. memory access) these devices are somewhat serial. Any given bit of data can only be accessed (e.g. written to) by one process at a time.

      On the
  • by leek ( 579908 ) on Wednesday April 09, 2003 @02:54PM (#5694623)
    BOPS [bops.com] tried the same thing with FPGAs, and look what happened to them [eetimes.com].

    Also see this thread [google.com].

  • by Fnkmaster ( 89084 ) on Wednesday April 09, 2003 @02:54PM (#5694628)
    Much like custom vertex shaders and reconfigurable GPUs have greatly increased the capability of modern graphics cards and greatly reduced the amount of CPU cycles required for very complex real-time 3D graphics, I think that a reconfigurable logic coprocessor model has real potential to take certain computationally intensive repetitive tasks off the hands of a dedicated CPU. The problem of course is that the technology doesn't currently exist to, say, compile an arbitrary chunk of C code into a program that can run on an FPGA computer - the compiler technology is mentioned in the article as the current limiting reagent. A common understanding of how this should work needs to be developed, and rules for when it's useful, and the relationship between I/O constraints and processing speedups needs to be taken into consideration.


    In general this "partitioning" process seems to be somewhat domain-specific and difficult. If you could integrate into a JIT environment something that identified computationally intensive, repetitive, small-sized chunks that aren't I/O constrained, and could generate FPGA code on the fly, that would be tres cool.


    Can anybody really explain why it's so hard to make a somewhat higher level language that can be compiled down to VHDL and combined with various chunks of library code into a specific FPGA configuration?

    • by Obsequious ( 28966 ) on Wednesday April 09, 2003 @03:19PM (#5694810) Homepage
      It depends on what you're trying to do.

      Usually when you are trying to compile something down to logic gates, you have to handle instruction scheduling. For example, in any conceivable situation, division always takes longer than addition. So, you have to make sure that while you're waiting for a division to complete all the rest of your data doesn't evaporate.

      This isn't like a general purpose processor -- there are no persistent registers here. Use it or lose it. So you have to stick in tons of shift registers everywhere, as pipeline delayers.

      So it's not as simple as just saying res = (a + b) /r + (q * p); or something, because you have to synchronize all the data. All this, of course, is just for a calculation: imagine the difficulty when you are waiting for signals on off-chip pins, when you don't even know how long you're going to be waiting. Also consider how you handle cases where you have to talk to memory: you usually have to write your own memory CONTROLLER, or at least use someone else's, meaning you actually have to worry about row and column strobes, whether it's DDR or not, and so on.

      If you've done multithreading programming and understand those difficulties, then take that and multiply the difficulty by a couple times, and you just about have it.

      All that said, though, you're right: it shouldn't be that hard. If all you want to do is use C to express a calculation, that is fairly easy to boil down to a Verilog or VHDL module.

      The problem is that most of the 3GL-to-HDL vendors try and boil the whole ocean. They want you to use nothing but their tool, and never have to look at Verilog. That is where things really start to break down.

      An example of this done mostly RIGHT is a company whose name I can't remember. (AccelChip?) They make a product that takes Matlab code and reduces it to hardware. That's easier in a lot of ways, because Matlab is really all about simply allowing you to easily express a mathematical system or problem. There aren't all these control flow, I/O, and other random effects. My understanding is that this Matlab-to-VHDL tool works quite well.

      So, it all depends on what you want to do with the FPGA. :)
      • by Fnkmaster ( 89084 ) on Wednesday April 09, 2003 @03:35PM (#5694943)
        That makes a lot of sense. I have done simple VHDL programming before, so I do have some sense of the complexity involved in synchronizing data flow between different circuits and logic paths (and god knows, I know all about the complexity of multithreaded programming - I'd say only about 10%-20% of programmers have a proper sense of how to write safe multithreaded code).


        I think you're right - handling arbitrary control flow, branching and so forth is a complex part of modern compilers, and of modern CPU hardware - and it is only possible because the CPU hardware handles all of the crazy stuff like ordering instructions, managing register contents (especially with all the voodoo that goes on behind the scenes in a modern CPU) and so forth. If you tried to do all of that in the compiler (which is effectively what you are talking about here), the compiler seems like it would have to do a lot more work than standard compilers generating machine code.


        The instruction set of a modern CPU serves as the API, the contract between software land and hardware land, and that is what allows the CPU designers to go behind the scenes and do all sorts of optimization, only incrementally versioning the instruction set for large changes (like SIMD). When you eliminate that contract with the generalized computing hardware, and basically are compiling down to arbitrary HDL and gate configurations, it seems like too many degrees of freedom to manage the complexity, without additional constraints (like only trying to solve matrix or other mathematical problems, like the interesting product you point out).

      • For example, in any conceivable situation, division always takes longer than addition.

        What about division by a power of two? For those non-CS people out there, multiplication and division by powers of two can be implemented by shifting bits. Shifts are commonly faster than addition/subtraction.
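
        In C, for unsigned values (signed shifts need care with rounding), that looks like:

        #include <stdint.h>

        /* Dividing or multiplying an unsigned integer by a power of two is a
         * shift, which in hardware is essentially free (just wiring). */
        static inline uint32_t div_by_8(uint32_t x)  { return x >> 3; }  /* x / 8  */
        static inline uint32_t mul_by_16(uint32_t x) { return x << 4; }  /* x * 16 */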

        Anyway, interesting point.
    • Because VHDL looks suspiciously like Pascal ;).
  • by Nyarly ( 104096 ) <nyarly.redfivellc@com> on Wednesday April 09, 2003 @02:59PM (#5694671) Homepage Journal
    The basis of almost every digital computer is a basic cycle, viz:

    1. Load the next instruction from the memory location indicated by the program counter.
    2. Decode the instruction.
    3. Execute the instruction.

    Some implementations add a step between 1 and 2 that says "increment the program counter" and leave jumps up to specific instructions. Others associate program counter changes with every instruction (i.e. jumps go to somewhere specific, every other instruction also implies PC++.)
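
    A minimal sketch of that cycle in C (the opcode set and memory layout here are invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_JUMP = 3 };

    int main(void) {
        /* One flat memory holds both the program and its data. */
        uint8_t mem[256] = {
            OP_LOAD, 100,      /* acc  = mem[100] */
            OP_ADD,  101,      /* acc += mem[101] */
            OP_HALT
        };
        mem[100] = 2;
        mem[101] = 40;

        uint8_t pc = 0, acc = 0, running = 1;
        while (running) {
            uint8_t op = mem[pc++];      /* 1. fetch (and bump the PC) */
            switch (op) {                /* 2. decode                  */
            case OP_LOAD: acc  = mem[mem[pc++]]; break;   /* 3. execute */
            case OP_ADD:  acc += mem[mem[pc++]]; break;
            case OP_JUMP: pc   = mem[pc];        break;   /* jumps overwrite the PC */
            case OP_HALT: running = 0;           break;
            }
        }
        printf("acc = %d\n", acc);   /* prints 42 */
        return 0;
    }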

    There's nothing more to Von Neumann machines. They are unrelated to finite state machines or Turing machines, except that every Von Neumann machine can be modelled as a Turing machine. The difference is that a Turing machine is a mathematical abstraction, whereas Von Neumann machines are an architecture for implementing them.

    Whoo hoo. And yes, I am a computer scientist. Or maybe a cogigrex.

    • by Anonymous Coward
      You don't even mention why a Von Neumann machine was so important to the advancement of computers. Von Neumann came up with the idea of storing computer instructions as data. Previous computers were programmed by changing jumpers, boards, etc. They had to be reprogrammed between tasks. Von Neumann let computers reprogram themselves, an amazing advance.

      The program counter stuff and the instruction cycle are just an implementation detail. They're not the important part.

  • Can anyone really see the end of CPU + RAM architecture on the desktop, laptop, or handheld in the next 10 years? No? I didn't think so.
  • by Obsequious ( 28966 ) on Wednesday April 09, 2003 @03:09PM (#5694738) Homepage
    Okay, no. FPGAs are NOT going to completely change computing.

    First, you have to understand what they are: basically an FPGA is an SRAM core arranged in a grid, with a layer of logic cells (Configurable Logic Blocks, in Xilinx's parlance) layered on top. These logic cells consist of basically function generators that use the data in the underlying SRAM to configure their outputs. Typically they are used as look-up tables (LUTs) -- basically truth tables that can represent arbitrary logic functions -- or as shift registers, or as memories. On top of THAT layer is an interconnection layer used for connecting CLBs in useful ways. The FPGA is re-configured by loading the underlying SRAM with a fresh bitmap image, and rebuilding connections in the routing fabric layer.
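
    (To make the LUT idea concrete, here is a hedged C model of a 4-input LUT; a real CLB has more plumbing, but the heart of it is a truth table indexed by the input bits, and reprogramming the FPGA amounts to rewriting that table in SRAM.)

    #include <stdint.h>
    #include <stdio.h>

    /* A 4-input LUT is a 16-bit truth table: bit i of `config` is the output
     * for input pattern i. */
    static int lut4(uint16_t config, int a, int b, int c, int d) {
        int index = (d << 3) | (c << 2) | (b << 1) | a;
        return (config >> index) & 1;
    }

    int main(void) {
        /* Example configuration: output = a AND b (inputs c and d ignored).
         * Bit i is set exactly when the low two bits of i are both set. */
        uint16_t and_ab = 0x8888;
        printf("%d %d %d\n",
               lut4(and_ab, 0, 1, 0, 0),   /* 0 */
               lut4(and_ab, 1, 1, 0, 0),   /* 1 */
               lut4(and_ab, 1, 1, 1, 1));  /* 1 */
        return 0;
    }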

    You write for FPGAs the same way you build ASICs. You use the same languages (Verilog, VHDL) and sometimes the same toolchain. The point being: this is HARD. Trust me, I've been doing it. Verilog is damn cool, but remember that you're still building this stuff almost gate-by-gate.

    There are a number of tools out there that do things like translate 3GL languages (such as Xilinx's Forge tool for Java, or Celoxica's DK1 suite for Handel-C) to an HDL like Verilog. Other tools like BYU's JHDL are essentially scripting frameworks for generating parameterized designs that can be dumped directly into a netlist (roughly equivalent to a .o file).

    My job for the past several months has been to obtain and evaluate these tools. I can tell you that these tools are not there yet.

    So what do you use FPGAs for? Well, for the next 5 years, likely one of two things: either really cheap supercomputers (which is what we are working on) or as a "3D graphics card play." The supercomputing play is obvious; the other one bears explanation.

    Anything you can think of goes faster if you implement it in hardware. 3D graphics is a great example: most cards today consist of a bunch of matrix multipliers plus some memory for the framebuffer, and a bunch of convenience operations that you do in hardware as well (like textures and lighting and so on.) Because it's in hardware, it's way faster than anything you could do on a general purpose processor.

    Now, the problem is that hardware means ASICs (until recently.) ASICs are only cheap in large volumes. Thus, for applications that are not mass-market (like graphics cards are) it is not practical to build out an industry building hardware accelerators for them.

    That's where FPGAs come in. FPGAs cost more per chip than ASICs at volume, but less than ASICs in small volumes. This suddenly makes it practical to make custom hardware accelerators for almost anything you can think of.

    This is also true of supercomputing: supercomputers are still general-purpose, just not THAT general-purpose. Your algorithm still benefits when you can just reduce it to logic and load it onto a chip. You might only be running at 200MHz, but when you get a full answer every clock cycle, you suddenly do a lot better than when you get an answer every 2000 cycles on your 2GHz processor.
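
    (Rough arithmetic on those example numbers: 200 MHz at one result per cycle is 2 x 10^8 results/s, versus 2 GHz divided by 2000 cycles per result = 10^6 results/s -- about 200 times the throughput despite the 10x slower clock.)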

    So to get back on topic, where will we see FPGAs? Well, you might expect to see an FPGA appear alongside the CPU on every desktop made in a few years; programs that have a routine that needs hardware acceleration can just make use of it. (Think PlayStation 4, here.)

    You might also see things like PDAs come with FPGA chips: if your car's engine dies, you can just download (off your wireless net, which will be ubiquitous *cough*) the diagnostic routine for your car and load it into that FPGA and have your car tell you what's wrong.

    Aerospace companies will love them, too. Whoops, didn't catch that unit conversion bug in your satellite firmware before launch? Well, just reprogram the FPGA! No need to send up an astronaut to swap out an ASIC or a board.

    What you're NOT going to see is every application ported to FPGAs willy-nilly, because like I said, this stuff is not easy. I'm coming a
    • You can probably answer better than anyone else here...

      How fast are modern FPGAs? Can you actually run data through and get the result back in a clock cycle? If not, can you pipeline?

      Are these clocked as fast as modern CPUs?

      • by Obsequious ( 28966 ) on Wednesday April 09, 2003 @03:43PM (#5695069) Homepage
        I am mostly familiar with Xilinx's parts, but my understanding is that really the only other maker is Altera and they are a couple years behind.

        The Virtex II (Xilinx's latest) clocks at up to 200MHz, though the more complicated your circuitry, the lower it gets. 200MHz is a theoretical max -- like Ethernet; you never quite reach it in practice.

        It includes a number of on-chip resources, such as block memories (which are more like cache SRAM than DRAM DIMMs you are probably used to) and 18-bit-wide hardware multipliers. The Virtex II Pro line is a Virtex II plus an actual processor core -- PowerPC, ARM, or their own MicroBlaze I believe. (That alone is proof enough that von Neumann machines aren't dead -- Xilinx INCLUDES one in some of their FPGA parts!)

        You can get them in various sizes, which basically means how many CLBs they have. Xilinx measures these in "logic gates" though that is really a somewhat sketchy metric (like bogomips, sort of.)

        And yes, you can actually run data through and get results back one per cycle. To accomplish this, you usually HAVE to pipeline the design. Typically you end up with a scenario where you fill up the chip's logic with your design, and start feeding it data at some clock speed. Then a few hundred cycles later, you start getting results back. Once you do, they come at one per cycle.

        We have an application where we are actually clocking the thing at 166MHz -- which is the speed of a memory bus, not coincidentally. Given this config, we are basically clocking the chip as fast as the memory can feed us data. The idea is that we read from one bank at 166MHz, and write to another at 166MHz.

        One way to think of this is as a memory copy operation, with an "invisible" calculation wedged in between. When you consider what a Pentium 4 would have to do (fetch instructions from cache/memory, fetch data from cache/memory, populate registers, perform assembly operations, store data back, not to mention task switching on the OS, checking for pointer validity, and so on) you begin to see the advantage of FPGAs.
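
        As a software analogy (my sketch; the transform is arbitrary), the pattern being described is a memory copy with the computation folded into the loop body -- on the FPGA that body becomes a pipelined circuit emitting one result per memory-bus cycle:

        #include <stddef.h>
        #include <stdint.h>

        /* Stand-in for whatever logic fills the FPGA: an arbitrary per-word
         * transform chosen purely for illustration. */
        static inline uint32_t transform(uint32_t x) {
            return (x * 2654435761u) ^ (x >> 16);
        }

        /* Stream words from one memory bank to another, transforming as we
         * go. A CPU pays fetch/decode/load/store overhead per word; the
         * pipelined FPGA version is limited only by how fast the two banks
         * can be read and written. */
        void stream_transform(const uint32_t *src, uint32_t *dst, size_t n) {
            for (size_t i = 0; i < n; i++)
                dst[i] = transform(src[i]);
        }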
      • I'll tackle this answer, since I'm fairly qualified to answer. FPGAs very typically run at clock rates of 100 to 300 MHz. You might see 100 MHz rates done by beginners or sloppy ASIC coders, and 300 MHz by experienced designers who spend extra time making sure there are enough pipeline registers. The *highest* speeds I've seen are around 650 MHz, and these only occur in extremely specialized circuits designed at a very low level by guys with Godlike powers of intuition and creativity (& patience
    • You might also see things like PDAs come with FPGA chips:

      The IBM PDA reference design using a PowerPC chip also contained an FPGA. I haven't seen any reports on what it would be used for.

    • I've had the pleasure of doodling with an XSA-100 board for some time now. This has a nice little SDRAM (8 MB), some flash RAM, a CPLD connected to your parallel port, plus, every geek's favorite, a VGA port connected to it with a simple two-bit-per-channel resistor-based DA converter for your RGB. Add the free (beer) Xilinx Foundation kit, and you've got yourself a hip VHDL (the language) setup.

      Hw vs. Sw - which is more difficult to "doodle" with?

      Me also having a software background allowed me to relate

      • One of the things I do is that they'll hand me Yet Another Board(TM) and tell me to make it work. This basically means making the pretty LEDs blink, generate square waves on pins to view on the o-scope, etc. This is always fairly easy, and fun.

        The next step up is useful things, like the recent colored globe thingy. That's mostly electronics, with a little bit of hardware thrown in for good measure. Replace the PIC with an FPGA or CPLD and away you go. I once wrote a framebuffer that talked to the RAMDAC -
    • Another thing you didn't mention is that despite the cost tradeoff, FPGAs are MUCH slower than ASICs for the same logic. They do not run as fast (which you touch on in a subsequent reply), but they also cannot do even close to the same amount of work at the same speed, often due to wiring congestion/routing issues.
      You usually have to do more than a linear amount of pipelining, put it that way.

      As far as aerospace applications go, I doubt this very much. Being that they are a vast sea of SRAM, charged particles and
  • I can't help but think that these would be great for writing emulators. Instead of interpreting the code, hop!, just reconfigure the chips. Instant full speed. :-)

  • by studboy ( 64792 ) on Wednesday April 09, 2003 @03:16PM (#5694787) Homepage
    I've programmed on the old bit-sliced Connection Machines, which are vaguely similar. Two points to ponder:

    - it was a *tremendous* pain in the ass. This Star Bridge machine isn't a general-purpose solution; it's only for applications that can stand writing 100% custom software in a custom language.

    - the data has to come from somewhere. So you can do 1G operations per second. What's the I/O like? Do they use a PC for a host or an SGI or ...? Is there a bunch of DRAM somewhere or do you carve memory out of the (expensive) FPGA?

  • Wouldn't it be damned smart to start standardizing some sort of FPGA add-on card? There's plenty of obvious applications: crypto, 3D acceleration.

    Hardware would just be a PCI-X card with a bunch of FPGAs thrown on, and a microcontroller to handle programming of them and PCI arbitration.

    The real trick isn't the hardware, it's standardizing the software to make it readily accessible to anyone and everyone. When Quake can start using your FPGA, it'll be a happy day in the neighborhood (RIP).

    To he who gets r
  • I remember when they first announced this on Slashdot quite some time ago that the original specs called for a backup P3 processor and they ran Windows 98.

    I thought for certain they were vaporware at that point. Not sure now.
  • 1. This article is worthwhile reading:

    "The future of computing-new architectures and new technologies" [iee.org]
    By Paul Warren (04-Dec-2002)
    The worlds of biology and physics both provide massive parallelism that can be exploited to speed up lengthy computations -- with profound consequences for both everyday computing and cryptography.

    2. Yes, it's been apparent for the last few years that computing is entering a new phase, with diversity of computing 'substrates' as one key theme. Amoeba, Java, .NET, CORBA an

  • Who cares? (Score:3, Insightful)

    by WolfWithoutAClause ( 162946 ) on Wednesday April 09, 2003 @03:41PM (#5695050) Homepage
    My laptop has more processing power than a Cray-1; I don't even know what to do with my 750 MHz desktop half the time.

    What do I need more processing power for exactly? Seriously?

    Most applications that need more grunt probably already have ASICs designed for them (e.g. graphics cards), and ASICs are much more efficient anyway; and in quantity, cheaper.

    So you're looking for an application that doesn't already have any hardware for it, and can't be attacked by a bunch of cheap Athlons or Intels or other supercomputers. What exactly?

    • Re:Who cares? (Score:3, Insightful)

      Alan Kay said (and I paraphrase): A big enough quantitative difference becomes a qualitative difference.

      After a point, you're not just running the same software slightly faster; you're running whole new classes of software. Just think, ripping and playing audio MP3s wasn't possible on home computers of 10 years ago. Even if you'd had the software, it would have taken forever to rip stuff, and you wouldn't get realtime playback of good enough quality. Now you can rip, mix and burn on almost anything you b
  • It's the beginning of the A. E. Newman age!
  • by Salamander ( 33735 ) <jeff@ p l . a t y p.us> on Wednesday April 09, 2003 @04:06PM (#5695393) Homepage Journal

    I can only wonder what sort of favors Daniel Lyons is receiving from Star Bridge. The only news here is that Forbes is being so blatant about whoring themselves out as a PR machine for a troubled company. No wait, that's not news either.

  • by Skjellifetti ( 561341 ) on Wednesday April 09, 2003 @04:51PM (#5696028) Journal
    About three years ago, Forbes ran an article on 64 bit computing in which they claimed that a 64 bit computer could address 64! bytes of memory. That same article called Unix a programming language and had several other silly inaccuracies. Be wary, your PHB will soon be asking for a demo.
  • Troubleshooting a broken Flexible Flyer [tripod.com] is pretty simple. Everything's very accessible on one. Thing is, they're not really a year-round device.
  • Hypercomputing. Gilson is a salesman. What I want to know is who is the technical designer on his team? Note that Gilson's machine is based on a paper published by Mark Oskin, Fred Chong, and Tim Sherwood. (This paper was about something called "Active Pages" and has a lot to do with Processing-In-Memory, research that we are also working on). I would think of Chong as being the lead investigator. Here's his homepage: Active Pages [ucdavis.edu] This article is chock full of no-namers, but one name does have weight.
  • As John Backus asked in his now-famous Turing Award lecture: "Can Programming Be Liberated from the von Neumann Style?"
  • http://dol.uni-leipzig.de/pub/showDoc.Fulltext?lang=en&doc=1996-24&format=text&compression=

    Sample Code

    AGENT a % elevator "a"
    EXTENSIONAL PREDICATES
    CREATE TABLE at( Floor INTEGER UNIQUE);
    CREATE TABLE up( Floor INTEGER UNIQUE);
    CREATE TABLE down( Floor INTEGER UNIQUE);
    CREATE TABLE req( Floor INTEGER UNIQUE);
    INTEGRITY CONSTRAINTS
    exists X: at(X) | up(X) | down(X);
    DEFINE PERCEPTION EVENT reqTo( Floor INTEGER);
    DEFINE PERCEPTION EVENT arrAt( Floor INTEGER);
    DEFINE ACTION mvup() REALIZE
  • This might lead to a resurgence of Forth. It can be retooled to be object oriented (anybody remember Neon?)

    Smalltalk might be another IDE for FPGAs as objects can be defined which represent gates and ...

    I think I'll shut up now and find a Xilinx manual on the web somewhere.

  • Already there (Score:4, Informative)

    by Tim Sweeney ( 59452 ) on Thursday April 10, 2003 @03:37AM (#5699615)
    If you're running a 3D-accelerated PC game or modelling application, the majority of your computer's FLOPS are already consumed by a non Von Neumann computing device.

    For better or worse, most of the PlayStation2's computing power is locked up in a non Von Neumann architecture.

    So the evolution of computing to non Von Neumann architectures isn't so much news as a gradual shift that began about 5 years ago with 3dfx, and is really starting to happen large-scale right now.

    The justification for FPGA's in consumer computing devices could be seen as a generalization of the rationale behind 3D accelerators: they bring you the ability to get a 10X-100X speedup in certain key pieces of code that are inherently very parallel and have very predictable memory access patterns.

    I think the timeframe for mainstream FPGA style devices is quite far off, though. They need to evolve a lot before they'll be able to beat the combination of a Von Neumann CPU augmented with several usage-specific non Von Neumann coprocessors (the GPU, hardware TCP/IP acceleration, hardware sound...)

    Here are the major issues:

    - You'll need a lot more local memory than these devices have now -- there is a very limited set of useful stuff you can compute given a 32K buffer (a la PS2) and significant setup overhead.

    - The big lesson from CPU's (and I expect from GPU's in the next few years) is that things REALLY flourish once you have virtualization of all resources, with a cache hierarchy extending from registers to L1 to L2 to DRAM to hard disk. For virtualization to make sense with FPGA's, Star Bridge's quoted reprogram times (40 msec) would need to improve by about 10,000X. Without this, you can really only run one task at a time, and that task can only have a fixed number of modules that use the FPGA.

    Even then, it's not clear whether the FPGA's will be able to compete with massively parallel CPU's. In 3 more process generations, you should be able to put 8 Pentium 4 class CPU's on a chip, each running at over 10 GHz, at the same cost as a current .13 micron CPU. Such a system would be VERY easy to program, a couple orders of magnitude more so than an FPGA. So even though it wouldn't have as much theoretical computing power as an FPGA, massively parallel CPU's are likely to win out because they have the best cost/performance when you factor in development cost.
  • Generality (Score:3, Insightful)

    by AlecC ( 512609 ) <aleccawley@gmail.com> on Thursday April 10, 2003 @08:13AM (#5700326)
    All three articles are talking about highly specialised, basically single function, machines. As other posters have correctly pointed out, programming such machines is very, very difficult. When you manage to do so, they can be very powerful indeed. But they do only one job, even though they do it very, very well. Saying that they are likely to replace general purpose CPUs is like saying that F1 cars or Indy racers are about to replace pickups or family cars. They may do a job worth doing in their specialist area, and they may make money, but they are never going to replace the VN machine in 90% of the places it is used.

    One of them is a specialised web server. Fine, there are a lot of web pages out there that need serving. I can well believe that you can build an FPGA-based static-page web server which will beat the pants off a Sun/Intel server doing the same thing. But what about dynamic content? Is their DBMS as good as the latest Oracle or MySQL? Will it, say, handle the internationalisation issues that those systems will? Bet it won't. Will it run PHP or Python natively? I doubt it - I bet it hands that over to a traditional back-end processor.

    As has also been said elsewhere, this kind of hype is a repeated event. A specialist machine outperforms a generalist machine at its specialist task, and journalists claim that the world has turned upside down. Connection Machine, Deep Blue, GAPP, transputer... Just a few I can call to mind.
