Start-up Could Kick Opteron into Overdrive 127

An anonymous reader writes "The Register is reporting that a new start-up, DRC Computer, has created a reprogrammable co-processor that can slot directly into Opteron sockets. This new product has the potential to boost the Opteron chips well ahead of their Xeon-based competition. From the article: 'Customers can then offload a wide variety of software jobs to the co-processor running in a standard server, instead of buying unique, more expensive types of accelerators from third parties as they have in the past.'"
This discussion has been archived. No new comments can be posted.

Start-up Could Kick Opteron into Overdrive

Comments Filter:
  • Berkeley (Score:2, Interesting)

    by 2.7182 ( 819680 )
    I thought they had done this out at Berkeley a while back. Is it really a new thing?
    • Well, I think Bajcsy and Sastry at EECS Berkeley had done something like this, but I don't know if that is the same gadget as this to tell you the truth.
    • Re:Berkeley (Score:2, Insightful)

      by dingDaShan ( 818817 )
      I'm sorry, but $5k for a little chip that makes my Opteron a little faster? I could just buy another Opteron for that price: http://www.pricewatch.com/cpu/419325-1.htm The price is supposed to drop to $3k next year. How does this affect cooling?
      • Re:Berkeley (Score:5, Interesting)

        by Whiney Mac Fanboy ( 963289 ) * <whineymacfanboy@gmail.com> on Monday April 24, 2006 @08:00AM (#15188983) Homepage Journal
        I'm sorry, but $5k for a little chip that makes my Opteron a little faster? I could just buy another Opteron for that price: http://www.pricewatch.com/cpu/419325-1.htm [pricewatch.com] The price is supposed to drop to $3k next year.

        You're quite right that these are not for you - they're for running highly specialised calculations (the oil & gas industries are mentioned in TFA).

        They make some operations much faster (think of a hardware MPEG decoder: useless for most things, but much more efficient at the single thing it can do than a general-purpose CPU).

        How does this affect cooling?

        These things consume 10-20 watts compared to an Opteron's 80, so their effect on cooling is minimal (far less than adding the second Opteron that you propose).
        • they should optimize this for gcc & C code compilation.

          If that were the case, perhaps my Gentoo machine would be complete before Christmas :)

        • Re:Berkeley (Score:3, Interesting)

          by drgonzo59 ( 747139 )
          A fast FFT processor, for example, would make life easier for a lot of Photoshop filter users (with the help of special drivers and plugins); it would also help GNU Radio [gnu.org] quite a bit, as well as other multimedia/signal/data-processing applications.
          • But according to another /. article http://politics.slashdot.org/article.pl?sid=06/04/24/0358210 [slashdot.org],
            this will fund terrorism by allowing us to transcode media files at an absolutely astounding rate*.
            -nB

            * Actually this looks great for the likes of LAL, Pixar, and other video shops. I'm a die hard Intel fanboi (last used AMD on my 386sx33) and this has me looking to buy a platform....
            Didn't someone try this on the memory bus once? Someone by the name of neuron? Whatever happened with that?
            -nB
            • Even though you mentioned terrorism as a (half?) joke, a fast FFT processor would probably be regulated by the govt. What that would mean is that people could just program in software a sophisticated and fast signal decoder that would normally cost tens or hundreds of thousands of dollars to buy as hardware. In a second it could all be reprogrammed into something else. So imagine having a police scanner, an HDTV, FM radio, etc. etc. all in just a laptop with some kind of a simple RF antenna input and amplifie…
          • Re:Berkeley (Score:3, Informative)

            by hackstraw ( 262471 ) *
            A fast FFT processor, for example, would make life easier for a lot of Photoshop filter users (with the help of special drivers and plugins); it would also help GNU Radio quite a bit, as well as other multimedia/signal/data-processing applications.

            There have been tons of add-on cards that do FFTs, TCP-offloading NICs, physics engines, or whatever you want. The problem is twofold. 1) These cards are expensive, or at the least non-free and non-standard compared to the rest of the computer, and need software sup…
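To make the add-on-card discussion concrete, here is a minimal pure-Python sketch of the FFT kernel such cards implement in silicon. This is illustrative only (real accelerators work on continuous sample streams); the Parseval check at the end is just a sanity test on the output.

```python
import cmath

# Pure-Python radix-2 FFT -- the kind of kernel the add-on cards discussed
# above implement in hardware; here only to illustrate the workload shape.
def fft(x):
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddle[k] for k in range(n // 2)] + \
           [even[k] - twiddle[k] for k in range(n // 2)]

signal = [1, 2, 3, 4, 0, 0, 0, 0]  # zero-padded to a power-of-two length
spectrum = fft([complex(v) for v in signal])

# Parseval's check: time-domain energy equals frequency-domain energy / N.
energy_time = sum(abs(v) ** 2 for v in signal)
energy_freq = sum(abs(v) ** 2 for v in spectrum) / len(signal)
print(round(energy_time, 6), round(energy_freq, 6))
```

Every butterfly stage here is a batch of independent multiply-adds, which is exactly what a card full of hardware multipliers can do in parallel.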
      • RTFA

        They claim 10-20x the performance of an Opteron for specific tasks. They also claim 3x the price/performance of an Opteron.
        Since it costs about 3x the price of an Opteron, and performs at least 10x better, their 3x price/performance claim seems pretty valid.
        Of course, it needs to be programmed for highly specific tasks. But chances are that, if you're in the Opteron-buying market, you need it for highly specific tasks.
      • I've seen research done on speeding up XML queries with Xilinx FPGAs. An Opteron 8xx costs about $2000 apiece, so if one of those little suckers can give you at least 3x the performance of an Opteron doing SQL queries, I say we have a good contender in database applications!
  • So... (Score:2, Insightful)

    by Morosoph ( 693565 )
    What do folks here really want to optimise?

    Rendering comes to mind, but I'm biased [slashdot.org]. But I'm sure that a glorified graphics card isn't the most interesting use...

    If these become popular enough, will we be seeing a back-end to GCC for this FPGA?

    • Re:So... (Score:5, Interesting)

      by BenjyD ( 316700 ) on Monday April 24, 2006 @07:53AM (#15188965)
      The article mentions applications in gas and oil companies. I would guess that means things like:

      - MINLP/MILP [wikipedia.org] (Wikipedia article is a bit weak) and Branch and Bound optimisation for things like pipeline routing, well selection etc.
      - fluid mechanics for pipeline design
      - geological data-mining for finding reservoirs
      Those kind of jobs can have runtimes measured in days and weeks, so an accelerator could make a real difference.
    • Re:So... (Score:3, Interesting)

      by bhima ( 46039 )
      I would dearly love a cryptoprocessor, and looking at the specs it doesn't seem all that far away.
    • Seti stats... (Score:1, Offtopic)

      by way2trivial ( 601132 )
      previously I'd decided I was willing to pay a hunnert bucks per free slot on my machine for boards that could process boinc faster..

      (they'd need fans though)

      I'm in the top 3% worldwide.. and so are the 18,055 people above me.
      And I don't believe I'll ever see top 1%
      • I'm doing Folding@Home [stanford.edu], myself. I don't mind spare capacity being used, but I was thinking more selfishly, I'll have to admit :o)

        Flight simulation might be fun. Not just the graphics: air turbulence, AI for other aircraft, birds, etcetera...
      • > I'm in the top 3% worldwide.. and so are the 18,055 people above me.
        > And I don't believe I'll ever see top 1%

        I also doubt that you'll ever see the top 1%.

        If there are six billion (6,000,000,000) people in this world, then the top 3% is one hundred and eighty million (180,000,000).

        Andy Out!
    • The sweet spot for a plug-in like this, IMO, would be similar to what you see a few board manufacturers doing now -- digital signal processing routines like Fourier transforms and other general calculus functions that are used in all kinds of data analysis where raw data comes in as analog variations, or where the moment-by-moment changes in state need to be modeled for engineering applications like fluid dynamics and harmonics.

      I'd imagine you'll need to have the application compiled in such a way that it is aware of the additional processing capability, so it's not likely to be a plug-n-pray solution to your general game player's graphical wet dreams.
    • Optimize Audio (Score:2, Interesting)

      New polyphonic software instruments rely on CPU cycles. More cycles sound not only better but much different. Musicians are at a tipping point at this moment in time. Old-fashioned instruments which are standards on stage and tour are becoming brittle and expensive. Collectors are snapping up the old instruments at prices north of $5K USD, reducing the availability of instruments for playing professionals. Hammond B3s are going as high as $16K; Selmer Mk6 saxes, $6K.

      Software instruments are a neces…
    • will we be seeing a back-end to GCC for this FPGA?

      Hardly. The languages for which gcc has front-ends (C, Fortran, C++, Ada, etc.) are heavily biased toward CPUs that process a stream of instructions: load, store, add, and, branch, compare, etc. The highly parallelized and pipelined designs that make FPGAs so much faster than microprocessors can't be expressed in software languages, and producing a good hardware design that is equivalent to a given C program is, well, a much harder problem than creating…
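A toy sketch of the mismatch being described, using a dot product: the same arithmetic written as the instruction stream a CPU executes, versus the flattened dataflow an FPGA would evaluate in a couple of clock cycles. Python stands in for both views; the "cycle" counts in the comments are the conceptual point, not measurements.

```python
# The mismatch the comment describes: C-family code expresses one instruction
# stream, while an FPGA evaluates a whole dataflow graph each cycle.
a = [1, 2, 3, 4]
b = [5, 6, 7, 8]

# Sequential view: one multiply-accumulate per step, as a CPU executes it.
acc = 0
for x, y in zip(a, b):
    acc += x * y            # 4 dependent steps, one after another

# Hardware view: all multiplies happen at once in parallel multipliers,
# then an adder tree reduces them -- two "cycles" regardless of length.
products = [x * y for x, y in zip(a, b)]   # "cycle" 1: 4 parallel multipliers
total = sum(products)                       # "cycle" 2: adder tree

print(acc, total)  # both 70
```

Extracting that second view automatically from the first is the hard compiler problem the comment alludes to.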

  • Kick ass synth? (Score:4, Interesting)

    by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Monday April 24, 2006 @07:43AM (#15188921) Homepage
    This could really be an interesting way to boost real-time soft synths... Even with top-of-the-line processors, the more complex ones will bring a CPU to its knees. Seems like a more sensible option compared to a DSP-filled expansion card. Too bad this thing is still a little on the expensive side for a viable market on the music software side.
    • Re:Kick ass synth? (Score:2, Informative)

      by alienw ( 585907 )
      An FPGA does not make a very good DSP for the price. I suppose if it's one of the nicer ones from the Virtex series, you can get it to do DSP, but it won't be as good as the processor already in the PC. I'd say your best bet would be hacking a video card to do the synth stuff -- it's optimized for the kind of parallel computation that DSP requires.
      • Re:Kick ass synth? (Score:4, Interesting)

        by Pulzar ( 81031 ) on Monday April 24, 2006 @10:06AM (#15189552)
        An FPGA does not make a very good DSP for the price. I suppose if it's one of the nicer ones from the Virtex series, you can get it to do DSP, but it won't be as good as the processor already in the PC.

        That's not true, at all. An FPGA will not be as good a general-purpose DSP as a custom-made DSP, but it will still be better than a CPU -- even the low-cost Cyclone II comes with 150 dedicated multipliers coupled with embedded memory, so they can do parallel multiply/accumulate at 700+ MHz. And these are the low-end FPGAs...

        Now, if you're actually programming the FPGA using custom-designed circuitry optimized for the task you're working on, the FPGA will work a lot better than a general-purpose DSP, and be way ahead of an even more general-purpose CPU. That's why you don't see generic DSPs being used in heavy DSP work (say, in telcos), but custom and semi-custom ASICs, and FPGAs in smaller environments.
        • Have you ever used one? It's tough to get the timing much above 100MHz. Even a 66 MHz PCI interface is extremely difficult to implement. No way those multipliers in a Cyclone will work at 700MHz. Maybe in a $150 Virtex 4 chip, but still unlikely. Besides, how the hell are you going to get that much data into the chip? There's a reason they don't put 150 parallel multipliers in a CPU, and it's not because they can't.

          That's why you don't see generic DSPs being used in heavy DSP work (say, in telcos), bu…
    • Too bad this thing is still a little on the expensive side for a viable market on the music software side.
      ...as compared to a ProTools DSP card?
  • by Threni ( 635302 ) on Monday April 24, 2006 @07:46AM (#15188937)
    when you can just read about it on the company's website?

    http://www.drccomputer.com/pages/products.html [drccomputer.com]
  • Quality? (Score:1, Offtopic)

    by wetfeetl33t ( 935949 )
    Yup, seems like a pretty neat piece of hardware. The only thing I'd be worried about is quality. All of these alternative processors usually seem too good to be true, until you use them. At work, we ended up buying 15 computers with a similar item, and they have been nothing but trouble. They underperform, they break, etc. Granted, this may be a high-quality product, but I sure won't buy one right away.
  • by subreality ( 157447 ) on Monday April 24, 2006 @07:54AM (#15188970)
    They basically made an FPGA (field-programmable gate array) that can plug directly into HyperTransport (the Opteron CPU bus). FPGAs let you efficiently solve many problems that a general-purpose processor can't. This has been done with PCI cards before, but PCI is too slow for many uses. Giving it direct access to HT solves that problem.

    That's a pretty cool niche.
    • The bigger question: what to do with all those NAND gates?
      • That's an easy one : write a VHDL app, generate the corresponding firmware, build a Linux driver for it, there you go.

        Additional question : are there any generic driver templates for Hypertransport-based devices ?

    • There are Hypertransport add-in connectors (HTX connectors) on some server motherboards that would be much better suited for this sort of application.
      • I don't know about that; very few motherboards have HTX slots, but lots of motherboards have multiple processor sockets.
        • Ah, but unless you've rewritten the BIOS to expect a (presumably non-coherent) HT device in that socket, you're SOL. Any motherboard with an HTX slot should have a BIOS designed to expect a random non-coherent device in that slot. Most Opteron motherboard BIOSes that I've had experience with have a fixed topology that basically says that the HT link between CPUs is either not connected (no CPU present) or fully coherent (CPU present). In fact, most BIOSes don't even allow different speed and rev levels of Opt…
    • So, for all of what you've said, this is basically a piece of hardware acceleration for a CPU? First we had sound acceleration, then 3d video acceleration, now we have CPU acceleration? If this is basically what it boils down to - why not just build the damned thing onto the die, like we did with the math co-processors?
      • Because it is an expensive piece of very niche hardware. The average user doesn't need extremely fast FFT, DSP, or whatever operations on the cpu.

        The reason math coprocessors (like the 487) got built into the die was that as more and more people used things like Photoshop, floating-point performance came out of the realm of 'niche' and into the mainstream. All in all, most people would be better off with a second Opteron in the server to hand out web pages and e-mail, rather than this, which is more s…
    • could open graphics [duskglow.com] run on it?
  • by sfraggle ( 212671 ) on Monday April 24, 2006 @07:56AM (#15188972)
    From the article:
    "We have taken the approach that we must deliver three times the price-performance of a standard blade."
    Isn't this BAD? Three times the Price/performance ratio [wikipedia.org] would imply a higher price, or worse performance.
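For what it's worth, the quote almost certainly means performance per unit price. A quick sketch using the rough numbers claimed elsewhere in the thread (about 3x the cost and 10x the speed of an Opteron; both are vendor claims, not measurements, and the normalization here is purely illustrative):

```python
# The vendor presumably means performance-per-dollar, not literal price/performance.
# Sketch with the thread's rough numbers: ~3x the cost, ~10x the speed.
opteron_price, opteron_perf = 1.0, 1.0   # normalized baseline
drc_price, drc_perf = 3.0, 10.0          # claims quoted in the discussion

baseline = opteron_perf / opteron_price
drc = drc_perf / drc_price
print(drc / baseline)   # ~3.3x better performance per dollar
```

Read that way, "three times the price-performance" is a boast rather than a blunder.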
  • Er.... question (Score:1, Interesting)

    by brunes69 ( 86786 )

    "DRC's flagship product is the DRC Coprocessor Module that plugs directly into an open processor socket in a multi-way Opteron system," the company notes on its web site.

    If you have an open Opteron socket on your multi-way box, wouldn't you probably achieve better performance by shoving another Opteron into there?

    I mean, sure, I can see the benefit of having a co-processor customized to handle your specific workload. But another Opteron would likely run at multiples of the clock speed of that thing, and it…

    • Re:Er.... question (Score:4, Informative)

      by kinnell ( 607819 ) on Monday April 24, 2006 @08:23AM (#15189063)
      another Opteron would likely run at multiples of the clock speed of that thing, and it would also be able to offload work from the *other* Opterons, such as disk I/O etc., giving your overall application more performance.

      Clock speed is not a measurement of performance unless you are comparing similar architectures. With FPGAs you can do everything in parallel, whereas microprocessors are inherently sequential. In effect, you can potentially complete hundreds of instructions per clock cycle, whereas a microprocessor will complete 2 or 3.

      In practical terms, this product lends itself to compute intensive tasks such as signal processing, not data serving.
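A hypothetical back-of-envelope along the same lines. All four numbers below are illustrative assumptions (a plausible 2006-era Opteron clock and a modest FPGA design), not figures from TFA:

```python
# Back-of-envelope for the point above: raw clock is not throughput.
cpu_hz, cpu_ops_per_cycle = 2.4e9, 3       # assumed 2006-era Opteron figures
fpga_hz, fpga_ops_per_cycle = 100e6, 200   # assumed modest FPGA clock, wide datapath

cpu_throughput = cpu_hz * cpu_ops_per_cycle      # ops/s on the CPU
fpga_throughput = fpga_hz * fpga_ops_per_cycle   # ops/s on the FPGA
print(fpga_throughput / cpu_throughput)  # FPGA ahead despite a 24x slower clock
```

With these numbers the FPGA comes out roughly 2-3x ahead, and widening the datapath scales that further, which is exactly the parallelism argument.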

      • Re:Er.... question (Score:3, Informative)

        by brunes69 ( 86786 )

        With FPGAs you can do everything in parallel, whereas microprocessors are inherently sequential. In effect, you can potentially complete hundreds of instructions per clock cycle, whereas a microprocessor will complete 2 or 3.

        True, but if the microprocessor's clock speed is hundreds of thousands of times faster than the FPGA, then you are even again. There's no clock speed for this device in the article, so we can't really compare.

        • Re:Er.... question (Score:4, Interesting)

          by andrewmc ( 88496 ) on Monday April 24, 2006 @09:43AM (#15189433)
          True, but if the microprocessor's clock speed is hundreds of thousands of times faster than the FPGA, then you are even again. There's no clock speed for this device in the article, so we can't really compare.

          Clock speed often depends on the circuit design put onto the FPGA. If you got your FPGA design running at even 100MHz (not unrealistic), you're maybe 30 times behind a general-purpose CPU. But not only are you running hundreds of operations per cycle, those operations are specific to the application and probably many times more efficient.

          It's probably not useful for making short-lived applications faster, but for seriously repetitive number-crunchy work like weather predictions, oil drilling, etc, where there are trillions of small-scale computations, the highly-parallel nature of the FPGA has great potential.

          Also, if those small-scale computations need to interact for any reason, on-chip communication is far faster than any chip-to-chip could be. And that's happening in parallel, too.

        • Re:Er.... question (Score:4, Informative)

          by kinnell ( 607819 ) on Monday April 24, 2006 @09:46AM (#15189455)
          The Virtex 4 [xilinx.com] FPGAs can be clocked at up to 500MHz, so we are talking about ~10-15 times slower than the processor, depending on the application. Even a simple digital filter would be faster when implemented in the FPGA, and this would only take a small fraction of the FPGA resources.
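For readers wondering what such a "simple digital filter" looks like, here is a software reference model of a basic FIR filter. On an FPGA each tap would get its own dedicated multiplier, so the whole inner sum is produced every clock; the taps and input below are arbitrary illustrative values.

```python
# Software reference model of a FIR filter. On an FPGA, the shift register
# and the per-tap multiplies all run in parallel, one output per clock;
# this loop is what the hardware flattens out.
def fir(samples, taps):
    out = []
    history = [0.0] * len(taps)
    for s in samples:
        history = [s] + history[:-1]                       # shift register
        out.append(sum(h * t for h, t in zip(history, taps)))  # MAC tree
    return out

# 2-tap moving average smoothing a step input.
print(fir([0, 0, 4, 4, 4], [0.5, 0.5]))  # [0.0, 0.0, 2.0, 4.0, 4.0]
```

A filter with dozens of taps still costs one output per clock in hardware, while the software loop's cost grows with the tap count.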
    • "If you have an open Opteron socket on your multi-way box, wouldn't you probably achieve better performance by shoving another Opteron into there?" No. Let's assume you are computing an FFT on some data and the Opteron CPU can do (say) 100 per second. Adding a second CPU would at best get you up to 200 per second. But hardware FFTs can go maybe 10X faster than software FFTs, so with the new chip you can do 1100 per second. With a general-purpose CPU there needs to be balance, but if you know you will be co…
  • by nietsch ( 112711 ) on Monday April 24, 2006 @08:02AM (#15188996) Homepage Journal
    There are plenty of others that have tried this, and plenty of them failed. An FPGA has a significantly slower clock speed, and you need fairly sophisticated software to make the most of the flexible design. Before this thing came out, in most instances it turned out to be cheaper to buy more horsepower and stay on a regular hardware-programming path than to risk it with special hardware and software.
    These guys claim their stuff is cheaper than more horsepower and that you get the extra speed boost from HyperTransport (over PCI).
    It clearly is a PR release that has been regurgitated by a lazy journalist, as I found few or no critical notes, something this product might deserve. For one thing, I don't see how they have solved the special software & programmers problem, or how they have really tackled the economies of scale: this thing costs a couple of grand, vs. a couple of hundred for a top-notch AMD processor. The regular processor has two cores and runs an order of magnitude faster than the FPGA. The scarcity of programmers who can write software for this thing adds another order of magnitude to the wrong side of the equation.
    Roughly, the FPGA solution must be a thousand times quicker/better than the regular-proc-with-lots-of-horsepower solution. I don't see that happening soon.

    OTOH, the rosy image of a computer that can render a Pixar animation in a few minutes, the next minute be used as a realtime sound-processing thing, or simulate a neural net with as many neurons as the human brain -- that makes the geek in me drool. Computer, tell me it isn't so!
  • High end gaming? (Score:4, Interesting)

    by Barny ( 103770 ) on Monday April 24, 2006 @08:03AM (#15188999) Journal
    Even though I only know of 3 people that use 940-socket machines for gaming (2 of them dual-CPU rigs), I believe an Ageia PhysX processor modded to the socket would be a good idea. The combination of an extremely fast CPU-PPU bus with being able to use stock (well, reg ECC RAM is kinda stock) RAM to feed it would help make multi-socket Opterons a very viable gaming platform, although as those 3 peeps (and me, after seeing the BoM) know, it would not be a cheap one.
  • by Janek Kozicki ( 722688 ) on Monday April 24, 2006 @08:06AM (#15189004) Journal
    I'm not a fan of java, but imagine a JVM programmed into such a co-processor at the hardware level (just as it is capable of). I bet it would be a very interesting option for some people. Servers running on java, anyone?

    But I'm a fan of neural networks, and I imagine that if such coprocessor was programmed exactly to perform NN tasks it could bring "brain simulation" a few steps closer - especially if many such coprocessors were put into the system.
    • by Anonymous Coward on Monday April 24, 2006 @08:23AM (#15189062)
      Java co-processor: it has been tried before, without success. Main reason: it turns out that compiling byte-code to CISC CPU assembly and running the native code gives more speed than executing byte-code directly.
      In the late '90s, I got burned in precisely such a start-up. We built an ASIC Java piggy-back byte-code CPU. It worked... as a proof of an idea. It didn't give much of a performance boost -- at best in the 20-30% range. No one wanted it.
    • "...Servers running on java, anyone?"

      Why not? This sounds perfectly complementary to me since most Sys Admins also run on java.
    • java, anyone?

      Azul does that, but it is fully specialized hardware. No idea if you can take their core unit and transplant it into an Opteron socket.

  • I had never thought about using a HyperTransport connection to get an FPGA connected to a CPU, but I had often wondered about fitting an FPGA into an SDRAM socket. You just write your block of data out to memory and read the results back.
    • by Flying pig ( 925874 ) on Monday April 24, 2006 @08:41AM (#15189134)
      Worked fine in the days of embedded systems when all memory was static (and usually only 16 bits wide), and when it was easy to wire an interrupt line so that when the add-on had finished you could read the results. Nowadays it's much more difficult because of the need to integrate with DRAM controllers and timing, and the absence of convenient interrupts (so you need to poll a location to see when it completed). Whereas HyperTransport is designed to do the job and do it efficiently.

      Another nice approach was the "swinging gate" RAM method in which you had two blocks of physical RAM in the same memory space. The main CPU filled one block with data, then flicked the switch so the co-processor could read that data while the CPU read the results from the other block, then put in new data for processing in the next cycle. Very easy to implement, much cheaper than FIFOs. It meant you could use a cheap DSP (from TI) in a system using a cheaper 8086 series processor for which you could get cheap tools and an embedded OS.
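The "swinging gate" scheme described above can be sketched in software as a ping-pong buffer. This is a structural illustration only: real implementations swap physical address decoding between two RAM banks, not Python lists, and the `pingpong` helper is hypothetical.

```python
# Sketch of the "swinging gate" / ping-pong scheme: two buffers swap roles
# each cycle, so the producer (CPU) and consumer (co-processor) never touch
# the same block at the same time.
def pingpong(batches, process):
    buffers = [None, None]
    fill, drain = 0, 1
    results = []
    for batch in batches:
        buffers[fill] = batch                         # CPU fills one block...
        if buffers[drain] is not None:
            results.append(process(buffers[drain]))   # ...co-processor reads the other
        fill, drain = drain, fill                     # flick the switch
    if buffers[drain] is not None:                    # drain the final block
        results.append(process(buffers[drain]))
    return results

print(pingpong([[1, 2], [3, 4], [5, 6]], sum))  # [3, 7, 11]
```

The appeal is the same as in the original hardware trick: both sides run at full speed with no FIFOs and no contention, at the cost of one buffer of latency.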

    • There is actually a lot of active research going on in this field right now. It is called "processor-in-memory" architecture, and it's best for handling things like array-based calculations, which need a number of off-chip memory calls to complete. Staying completely on-chip makes it much faster, and it allows the embedded proc to take advantage of the internally wide (~256-bit) data path of modern memory. Look up Project DIVA and Project MONARCH; it is all DARPA-sponsored research, but the univ…
    • It was done several times, for example, Nuron AcB, Pilchard, and SmartDIMM.

  • How about programming it as an x86 processor and then booting from it? That would be pretty interesting.
  • by maxwell demon ( 590494 ) on Monday April 24, 2006 @08:19AM (#15189047) Journal
    I think the most important sentence in the article is this:
    AMD's decision to open Hypertransport could end up being a key factor in Opteron's future success.

  • About Time! (Score:5, Interesting)

    by evilviper ( 135110 ) on Monday April 24, 2006 @08:39AM (#15189130) Journal
    I have to say, I'm surprised it has taken so long. Seems a few years past-due, IMHO.

    One of the first signs that PCs needed an FPGA or similar was hardware MPEG capture cards... They could do the job so much faster, and so much cheaper, than your primary CPU that the alternative is disappearing.

    High-end graphics cards have been the most telling development. It's not that OpenGL is something magical; it's just that an ASIC can do many things so much better than a CPU that transferring much, much more raw data over the bus was still cheaper than actually processing it (despite the fact that interrupts are rather costly themselves).

    PS2 clusters, crypto cards, hardware-accelerated NICs, SLI -- all are symptoms of almost exactly the same problem...

    The rising popularity of GPU programming made it extremely clear that there is a vacuum here. Using the video card isn't a very good method to accomplish this, just a stop-gap necessity. I thought from the beginning that FPGAs would become like the old math coprocessors and have their own motherboard socket, but neither AMD nor Intel was stepping up to fill this clear need. Installing it into a normal CPU socket, to get around this apathy, is a very clever hack I hadn't thought of.

    I expect, with popularity, it will be cheaper to put a custom FPGA socket on motherboards, rather than building a full-fledged SMP motherboard for the purpose. After that, who knows... Maybe FPGAs will go the way of the math coprocessors and get integrated into future CPUs.

    I know if I was running ATI or NVidia (or Hauppauge, or Level5), I'd be very worried about this thing eating the most profitable segment of my market.
    • Re:About Time! (Score:5, Interesting)

      by TheRaven64 ( 641858 ) on Monday April 24, 2006 @08:55AM (#15189188) Journal
      I was at a talk by Bob Colwell a few weeks ago. One of the points he made was that within the next ten years we will be able to (economically) fit far more transistors on a chip than we realistically know what to do with. His example was using all of that space to have a vast array of P6 cores. If you did this, then:
      1. You would not be able to get enough power to the chip to make it work.
      2. You would not be able to dissipate the heat that it would draw.
      3. You would not be able to get enough data to it for more than about 10% (on a good day) of the chip real-estate to be actually doing anything.
      One possible solution is to have a hundred or so general-purpose cores, and fill the rest up with simple algorithmic accelerators (e.g. FFT, crypto, DCT/DWT, etc.). These would spend most of their time turned off (not using power), but when a workload hit the chip that needed them, they could be turned on to give a significant performance boost.
      • How do you tackle leakage current? Even if part of a chip is not in use it still uses power in current designs, and AFAIK not even clockless designs completely eliminate parasitic current loss in unused components.
      • Comment removed based on user account deletion
          MMX, SSE, 3DNow! and AltiVec are all still very general-purpose instructions. They are just like the standard instructions that the chip can execute, except that they work on several inputs at once (e.g. do 4 adds in parallel instead of just doing one at a time). It's possible to build things like FFT quite efficiently out of these, but you are still using general-purpose hardware. An FPGA running at 100MHz or so can still outperform a high-end general-purpose CPU doing several of these tasks, because it…
  • On Opteron motherboards each processor manages its own bank of memory and makes it available to the other processors via HyperTransport. Since this FPGA replaces one of the processors, how does it manage the associated memory bank?
    • Just a guess, but after RTFA: the included photo shows an Opteron CPU with populated memory banks and the DRC product's memory banks empty. Perhaps this is a hint?
  • Yes, this has been done before (different socket, for sure). Most of them have failed. But this is getting picked up by others lately and seems to have legs (technologically speaking).

    http://www.cray.com/products/xd1/index.html [cray.com]

    Oh, BTW, a single 3U is around $45k. For certain memory-bound calculations and some sequential algorithms, HF FPGAs (high-frequency FPGAs) work well.

  • 386 DX? (Score:1, Interesting)

    by Anonymous Coward
    Does anyone remember the good ol' days?

    I still have my 386/40MHz + coprocessor.

    And yes, AMD called me about a year ago to come into their lab with my ancient relic.
  • Weitek? (Score:1, Interesting)

    by Anonymous Coward
    Do you remember the Weitek math co-processor (i386-era stuff). It disappeared quite completely.

    Also, there is a big fear of specialized hardware accelerators, because they could be rigged in silicon, which you would never find out about. With the functionality implemented in software on a general-purpose CPU, you at least have a chance to audit the code to find out if the SSL handling has some NSA backdoor added or so. You buy a Chrysalis Luna VPN booster PCI card and assuredly know Mossad reads whatever you transactio…
  • I think the reason they implement it as an Opteron module is that, besides the HyperTransport thing, it gets its own exclusive set of memory. The co-processor doesn't have to share this with the system, making algorithms a lot easier (a big continuous chunk of memory for matrix operations!) and the design much simpler -- no MMU whatsoever.
  • by DeadCatX2 ( 950953 ) on Monday April 24, 2006 @10:38AM (#15189750) Journal
    Lots of other comments have made clear the point that it's not easy to program this kind of hardware. Typical software programs run in a very sequential manner. In fact, trying to get cooperative parallel execution of threads is known to be a major sticking point in the average programmer's education.

    Hardware, on the other hand, is massively parallel. All the "gates" (*) are all running all the time. It's like multi-threading a program, taken to the limit of infinity. However, if designed correctly, this thing can scale beyond belief, since it's all parallel.

    It's also important to note that it's a Virtex4 [xilinx.com] on that card. That's one hell of an FPGA, they sure aren't cutting any corners. I'm not sure which one they're using, but some Virtex4 chips have PowerPC processors at 450 MHz.

    This is definitely a niche product for now, due mainly to the lack of people who can write code in Hardware Description Languages (HDLs). But if you can figure it out, and you have an application that works on a massive scale, this may be for you.

    Oh, and for all you detractors who are saying "that thing only runs at 500 MHz! How is it supposed to be faster than my 2 GHz AMD chip?" You're forgetting one very important factor. Your AMD chip executes one instruction at a time, and the important instructions are surrounded by instructions whose sole purpose is to control program flow or move data back and forth. By contrast, each XtremeDSP slice of a Virtex4 can execute a multiply and an add in a single cycle, there are up to 512 of them in the most hardcore Virtex4 chip, and other logic running in parallel can control the "program flow" and ferry data back and forth across the bus.

    *: Modern FPGAs are actually built out of SRAMs that can implement arbitrary logic functions. They're no longer arrays of gates, so to speak.
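    To make that throughput argument concrete, here's a back-of-the-envelope sketch. The numbers (one multiply-add per cycle on the CPU, 512 DSP slices on the FPGA, and the two clock rates) are illustrative assumptions, not benchmarks:

    ```python
    # Rough throughput comparison: sequential CPU vs. parallel FPGA DSP slices.
    # All figures are assumed for illustration, not measured.

    CPU_CLOCK_HZ = 2.0e9    # assumed 2 GHz CPU retiring one multiply-add per cycle
    FPGA_CLOCK_HZ = 0.5e9   # 500 MHz fabric clock
    DSP_SLICES = 512        # parallel multiply-add units (top-end Virtex4)

    cpu_macs_per_sec = CPU_CLOCK_HZ                     # one MAC per cycle
    fpga_macs_per_sec = FPGA_CLOCK_HZ * DSP_SLICES      # all slices fire every cycle

    print(f"CPU:  {cpu_macs_per_sec:.2e} MACs/s")
    print(f"FPGA: {fpga_macs_per_sec:.2e} MACs/s")
    print(f"Speedup: {fpga_macs_per_sec / cpu_macs_per_sec:.0f}x")  # 128x
    ```

    Of course this only holds for workloads you can actually keep all 512 slices busy with, which is exactly why it's a niche product.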
    • *: Modern FPGAs are actually built out of SRAMs that can implement arbitrary logic functions. They're no longer arrays of gates, so to speak.

      What a relief for the Linux crowd! We no longer have to imagine a Beowulf cluster of Vistas.

  • by Dr. Spork ( 142693 ) on Monday April 24, 2006 @11:06AM (#15189972)
    The fact that this is practical has made me wonder how well it would work to use a motherboard socket for a GPU. With Hypertransport it would have absolutely direct access to system ram and could help itself to as much as it needed. I would love to be able to use standard CPU heatsinks on a GPU.

    But what I find really exciting about this idea is that once the GPU is in the motherboard, I'm sure programmers would find an easy way to use all that logic to do calculations - say, media encoding. Heck, I know they are trying to do this with GPU's on cards, but this would be a much lower latency connection.

    I wonder how this would affect total system cost. I mean, I know multi-socket mobos will always cost more, but then again, when the GPU is a chip instead of a card, that should bring costs down. Also, they could ditch all that PCI-e logic and those slots. Upgrading would definitely be cheaper, and can you imagine two socketed GPUs on the mobo running a Hypertransport version of SLi? That might be the fastest, quietest gaming rig ever!

    • Another requirement then would probably be a second bank of RAM slots for "video" RAM ... though this could be a great thing if the GPU becomes generally available as a co-processor, as presumably your faster video RAM could be used for another level of caching? It would also mean you could upgrade your GPU and video RAM separately. The only downside I can see is that your video outputs are likely to be tied to your motherboard, though perhaps that's where pci-* steps back in (so you can
    • Probably not, because only expensive server processors like the Opteron and Xeon can be used in multi-CPU systems. While there are a few gamers who use the Opteron (probably they just like to spend money), they don't have the volume to justify producing such a thing. It would cost $5000 like the FPGA in the article, and probably not be updated as often as regular GPUs.

      Now, a programmable co-processor on a PCIe x16 card... I'd like to be able to encode a movie in five minutes.
    • But what I find really exciting about this idea is that once the GPU is in the motherboard, I'm sure programmers would find an easy way to use all that logic to do calculations - say, media encoding.

      Now I'm confused. This sounds about like someone saying: "Now that they've got hybrid technology in cars, they should put it in trucks. Then we can take the trucks and make them smaller by removing the truck bed, and put more seats in. Maybe even put a car body on it..."

      What do you think this FPGA is for, exa

      • I think the parent was suggesting that CPU-socket devices could be produced that were marketed as GPUs but could be used to assist other CPU-bound processes. Whether or not said devices are designed as graphics chips or general-purpose logic devices is another question.

        WRT your vehicular analogy, there are people who buy cars and want to use them as trucks occasionally, and people who buy trucks but sometimes just use them as cars. It's no big deal.
    • The advantage of PCIe is that the specification will likely stick around for 6-7 more years, and will be used on both Intel and AMD systems, as well as some of the more obscure architectures. You can design your graphics card and know that just about anyone will be able to throw it into their box. It gives you a very large addressable market. Putting the GPU into a processor socket cuts your addressable market down to only AMD boards.

      Furthermore, GPUs do require a fair amount of bandwidth, but are a lot mo
  • Who needs PCI-X and the like, when you could just plug your graphics coprocessor (and other things like that) into Hypertransport? Maybe some day we'll all be using motherboards with lots of socket 940s instead of traditional expansion slots.
  • by toybuilder ( 161045 ) on Monday April 24, 2006 @12:28PM (#15190606)
    A dedicated co-processor with enough registers to perform a complex calculation without having to constantly ferry register values between memory and the processor, combined with the ability to run several calculations simultaneously, will blow the socks off a general-purpose CPU for *very specifically designed algorithms*.

    There's a market for GPUs on video cards running $1,200+... People who buy them won't be satisfied with standard GPUs no matter how fast their main processors run... The custom acceleration of graphics calculations makes it worthwhile.

    Now, imagine doing massive calculations (think three blackboards filled with quantum physics equations) -- and you can see how some scientific/industrial applications would go ga-ga over this stuff...
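    A toy cost model shows why keeping intermediates out of memory matters so much. Every number here (op cost, DRAM round-trip cost, operation count) is an assumption picked for illustration:

    ```python
    # Toy cost model: a long calculation done (a) with every intermediate
    # spilled to memory vs. (b) with all intermediates held on-chip.
    # All cycle counts below are assumed, not measured.

    OPS = 1000          # arithmetic operations in the calculation
    OP_CYCLES = 1       # cost of one ALU op
    MEM_CYCLES = 100    # assumed cost of one trip to DRAM

    # (a) load + store surrounding every operation
    spilled_cycles = OPS * (OP_CYCLES + 2 * MEM_CYCLES)
    # (b) intermediates never leave the chip
    on_chip_cycles = OPS * OP_CYCLES

    print(f"Spilled:  {spilled_cycles} cycles")
    print(f"On-chip:  {on_chip_cycles} cycles")
    print(f"Penalty:  {spilled_cycles / on_chip_cycles:.0f}x")  # 201x
    ```

    Caches blunt this penalty on a real CPU, but the direction of the effect is the same: hardware with enough local storage to hold the whole working set wins.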
  • by Jan ( 7105 ) on Monday April 24, 2006 @12:28PM (#15190610)
    See earlier postings and blog entries on this concept:

    http://www.fpgacpu.org/usenet/fpgas_as_pc_coprocessors.html [fpgacpu.org]
    http://www.fpgacpu.org/log/aug01.html#010821-dimm [fpgacpu.org]

    The latency to the FPGA fabric largely determines what kinds of coprocessing workloads are feasible.

    When HyperTransport came out, we (FCCM'ers) knew an HT-based lower-latency interconnect should be possible. (Though I wouldn't call 75 ns +/- "low" latency -- that's a couple of hundred instruction issue slots, or a bit more than one full cache miss.) But DRC has gone and done it. I love the way it (apparently) just drops in and can even use that socket's DRAM DIMMs. Congrats to Steve Casselman and co.
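    To put that "couple of hundred issue slots" figure in perspective, a quick sketch (the 2.6 GHz clock and 3-wide issue are assumed numbers for a contemporary Opteron, not from the article):

    ```python
    # Convert a 75 ns round-trip latency into forgone instruction-issue
    # opportunities on a hypothetical 2.6 GHz, 3-wide-issue Opteron.

    LATENCY_S = 75e-9    # round trip to the FPGA fabric
    CLOCK_HZ = 2.6e9     # assumed core clock
    ISSUE_WIDTH = 3      # assumed macro-ops issued per cycle

    cycles = LATENCY_S * CLOCK_HZ      # ~195 cycles spent waiting
    slots = cycles * ISSUE_WIDTH       # ~585 issue slots forgone

    print(f"~{cycles:.0f} cycles, ~{slots:.0f} issue slots per round trip")
    ```

    So an offloaded job has to save well over a hundred cycles of CPU work per round trip before the trip even pays for itself; batching work to amortize the latency is the usual answer.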
  • Look! Data Flow! (Score:3, Insightful)

    by JumpingBull ( 551722 ) on Monday April 24, 2006 @01:07PM (#15190915)
    As a cranky engineer, I find this ... sweet.
    The best phrase to help the system design effort is data flow.
    How does the machine chop up the task for the most performance?
    The major problem in design is finding where to place the dotted line that says "cut here". Software mavens know this as refactoring, or partitioning.
    The gotcha in development would be to ignore the internal architecture of the FPGA.
    As a word of advice to the beginner, look carefully at the FPGA data flow, and try to decompose the algorithm ( or find a similar one) so that the data manipulation and movement fits the part as best as possible.
    Just having an HDL is not enough, the neophyte hardware designer can easily write code that cannot be synthesised to work, let alone fit the part. A sensitivity to the underlying hardware is needed.
    As an example of this, using hand crafted hardware design, Chuck Moore wrung several times the expected clock performance for a hardware Forth engine. A starting point for reading might be:
    http://www.ultratechnology.com/cowboys.html
    Using hand-crafting, you can get enormous processing gains, but the hardware and system designs have to be well understood.
    Perhaps the GNU uber-geeks could handle the translation efforts to make a tool for the average application programmer, but until then the brave soul who tackles these efforts should be prepared to learn a lot of the edges of computer science, hardware, and system design. It's not a horrible job, just long. And the problem should be worthy of the efforts needed.
  • Might as well make it an HTX card rather than suck up a CPU socket that could otherwise be driving more memory (the picture implies no attached memory). Most upcoming 940 platforms now have HTX slots with equivalent performance/latency, using a leftover HyperTransport link.
