AMD Demonstrates "Teraflop In a Box"

UncleFluffy writes "AMD gave a sneak preview of their upcoming R600 GPU. The demo system was a single PC with two R600 cards running streaming computing tasks at just over 1 Teraflop. Though a prototype, this beats Intel to ubiquitous Teraflop machines by approximately 5 years." Ars has an article exploring why it's hard to program such GPUs for anything other than graphics applications.
This discussion has been archived. No new comments can be posted.

  • by jimstapleton ( 999106 ) on Thursday March 01, 2007 @11:12AM (#18194966) Journal
    It shouldn't be a TERAble FLOP at the stores anyway. Nice performance...

    OK, yes, bad pun, bad spelling, you can "-1 get a real sense of humor" me now.
  • Even if Nvidia's CUDA is as hard as the Ars Technica article suggests, I still hope AMD either makes their chips binary compatible, or makes a compiler that works for CUDA code.

    • Re:Compatibility (Score:5, Interesting)

      by level_headed_midwest ( 888889 ) on Thursday March 01, 2007 @11:29AM (#18195210)
      The chips use very different ISAs, so there's no way binaries that run on G80 hardware will run on an R600. Heck, even the ATi R400 series (x700, x8x0) is not binary-compatible with the current R500 x1000 units. Maybe ATi will make a CUDA compiler, but since folks have already gotten going using the R500 hardware (see http://folding.stanford.edu/ [stanford.edu]), I doubt that AMD/ATi will make a big effort to adopt a competitor's technology. Please correct me if I'm wrong, but I am not aware of any groups or programs that use NVIDIA hardware as number-crunchers yet.
      • by MrHanky ( 141717 )
        That seems likely, but it should be possible to make an API like OpenGL for more general processing as well, shouldn't it? Then all you need is a driver, and your code won't be obsolete every time a new generation GPU comes out.
    • Re:Compatibility (Score:5, Informative)

      by UncleFluffy ( 164860 ) on Thursday March 01, 2007 @12:55PM (#18196408)

      Even if Nvidia's CUDA is as hard as the Ars Technica article suggests, I still hope AMD either makes their chips binary compatible, or makes a compiler that works for CUDA code.

      From what I saw at the demo, the AMD stuff was running under Brook [stanford.edu]. As far as I've been able to make out from nVidia's documentation, CUDA is basically a derivative of Brook that has had a few syntax tweaks and some vendor-specific shiny things added to lock you in to nVidia hardware.
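      For anyone curious what that kernel/stream style actually looks like, here's a rough sketch of an elementwise kernel in CUDA-flavoured C (the names and sizes are invented for illustration and aren't from the demo or either vendor's SDK; Brook expresses the same idea with stream arguments instead of raw pointers):

      ```
      #include <cuda_runtime.h>

      // A "kernel" is applied independently to every element of an array --
      // the stream-processing model that Brook and CUDA share.
      __global__ void saxpy(float a, const float *x, const float *y, float *out, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
          if (i < n)
              out[i] = a * x[i] + y[i];                   // same operation everywhere
      }

      // Host side: launch enough threads to cover the whole (device-resident) array.
      void run_saxpy(float a, const float *d_x, const float *d_y, float *d_out, int n)
      {
          int threads = 256;
          int blocks  = (n + threads - 1) / threads;
          saxpy<<<blocks, threads>>>(a, d_x, d_y, d_out, n);
      }
      ```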

      • Hey, thanks -- I was wondering if something like that existed! I'm actually about to start working on a computer vision-related research project that might be well-suited to running on a GPU, and was trying to figure out what technology to use to write it. I think Brook might be it.

        • No problem. I'd advise you to grab the latest CVS and look in the forums for any required build tweaks - the tarball on sourceforge often lags by quite a bit.
  • ubiquitous (Score:5, Insightful)

    by Speare ( 84249 ) on Thursday March 01, 2007 @11:16AM (#18195034) Homepage Journal

    Look up 'ubiquitous' before you whine about how far behind Intel might seem to be.

    Though having one demonstration will help spur the demand, and the demand will spur production, I still think it'll be five years before everybody's grandmother will have a Tf lying around on their checkbook-balancing credenza, and every PHB will have one under their desk warming their feet during long conference calls.

    • Look up 'ubiquitous' before you whine about how far behind Intel might seem to be.

      Sorry, late night submission. I'll claim an error of verb tense rather than adjective usage: "this will beat" rather than "this beats". This silicon is shipping high-end in a couple of weeks, so it'll be mid-range this time next year and integrated on the motherboard the year after that (or thereabouts). Another year or two for the regular PC replacement cycle to churn that through, and it should be widespread by the time

    • A working prototype is nice, but it's only viable if they can manufacture the chips with a high yield. New processes have low yield, meaning a high percentage of the chips don't work.
  • Oh no.

    I mean, the PS3 does 2 Teraflops! OMG, they're like 20 years ahead of Intel, who are so RUBBISH.

    And what would be the theoretical floppage of, say, an Intel Core 2 Extreme with 2 x nVidia GTXs in a dual SLI arrangement using CUDA? I'm willing to bet it would be somewhat higher than this setup.
    • by sumdumass ( 711423 ) on Thursday March 01, 2007 @11:20AM (#18195078) Journal
      Isn't the reason this is so interesting because you cannot have an Intel Core 2 Extreme with 2 x nVidia GTXs in a dual SLI arrangement using CUDA pushing a teraflop right now?

      Maybe soon, but I don't think it's possible _now_!
      • by ArcherB ( 796902 ) *
        Isn't the reason this is so interesting because you cannot have an Intel Core 2 Extreme with 2 x nVidia GTXs in a dual SLI arrangement using CUDA pushing a teraflop right now?

        Excellent point! Expect to see a nVidia/Intel partnership in 5, 4, 3, 2...

        • by BobPaul ( 710574 ) * on Thursday March 01, 2007 @11:56AM (#18195592) Journal

          Excellent point! Expect to see a nVidia/Intel partnership in 5, 4, 3, 2...
          Good call! That must be why nVidia has decided to enter the x86 chip market and Intel has significantly improved their GPU offerings, as well as indicated that they may include vector units in future chips, because these companies plan to work together in the future! It's so obvious! I wish I hadn't paid attention these past 6 months, as it's clearly confused me!
          • Re: (Score:3, Informative)

            by ArcherB ( 796902 ) *
            That must be why nVidia has decided to enter the x86 chip market and Intel has significantly improved their GPU offerings, as well as indicated that they may include vector units in future chips, because these companies plan to work together in the future! It's so obvious! I wish I hadn't paid attention these past 6 months, as it's clearly confused me!

            Sarcasm suits you well.

            While Intel and nVidia may both be independently reinventing the wheel right now, neither seems to be getting very far very fast. Intel's vid
    • Re: (Score:3, Insightful)

      Well, as I see it, advertising "[some amazing benchmark] in a box" is reasonably foolish, because I could produce a system with amazing theoretical performance that doesn't really perform that much better than a system that is a fraction of the cost ... It wasn't that long ago that you could (easily) buy motherboards that supported 2 or 4 separate processors, and people have built quad-SLI setups; what this means is you could create a 4-processor Core 2 Duo system with a quad-SLI GeForce 8800 GTX, which
  • Step 1 (Score:3, Funny)

    by Anonymous Coward on Thursday March 01, 2007 @11:16AM (#18195040)
    Step 1: Put your chip in the box.
    • Step 2 (Score:5, Funny)

      by Saikik ( 1018772 ) on Thursday March 01, 2007 @12:00PM (#18195648)
      Step 2: Don't leave your box in Boston.
    • Re: (Score:3, Informative)

      by Anonymous Coward

      Step 1: Put your chip in the box.
      Dude. You have to cut a hole in the box first, otherwise you will pinch your junk...err...your chip under the lid.

  • by TheCreeep ( 794716 ) on Thursday March 01, 2007 @11:17AM (#18195050)
    How much is that in BogoMIPS?
  • by arlo5724 ( 172574 ) <jacobw56&gmail,com> on Thursday March 01, 2007 @11:19AM (#18195070)
    I might be (read: am mostly) retarded, but I never thought of using a graphics processor for anything else. With the super cards around the corner, though, it makes sense that some normal processing jobs could be farmed out to the GPU when it's not occupied with graphics duties. Does anyone know where I can find some extra info on this, or to what extent this is being implemented? My curiosity is piqued!
  • OOOoooo (Score:5, Interesting)

    by fyngyrz ( 762201 ) * on Thursday March 01, 2007 @11:22AM (#18195104) Homepage Journal
    it's hard to program such GPUs for anything other than graphics applications

    It might be hard, but then again, it might be worthwhile. For instance (I'm a ham radio operator) I ran into a sampling shortwave radio receiver the other day. Thing samples from the antenna at 60+ MHz, thereby producing a stream of 14-bit data that can resolve everything happening below 30 MHz, or in other words, the entire shortwave spectrum and longwave and so on basically down to DC.

    Now, a radio like this requires that the signal be processed: first you separate it from the rest, then you demodulate it, then you apply things like notch filters (or you can do that prior to demodulation, which is very nice); you build an automatic gain control to handle amplitude swings, provide a way to vary the bandwidth and move the filter skirts (low and high) independently... you might like to produce a "panadapter" display of the spectrum around the signal of interest, where there is a graph that lays out signal strengths for a defined distance up and down the spectrum... you might want to demodulate more than one signal at once (say, a FAX transmission into a map on the one hand, and a voice transmission of the weather on the other). And so on - I could really go on for a while.

    The thing is, as with all signal processing, the more you try to do with a real-time signal, the more resources you have to dedicate. And this isn't audio, or at least not at the early stages; a 60+ MHz stream of data requires quite a bit more, in terms of how fast you have to do things to it, than does an audio stream at, say, 44 kHz.

    But signal processing typically uses fairly simple math; a lot of it, but you can do a lot without having to resort to real craziness. A teraflop of processing that isn't even happening on the CPU is pretty attractive. You'd have to get the data to it, and I'm thinking that would be pretty resource intensive, but between the main CPU and the GPU you should have enough "ooomph" left over to make a beautiful and functional radio interface.

    There is an interesting set of tasks in the signal processing space; forming an image of what is going on under water from sound (not sonar... I'm talking about real imaging) requires lots and lots of signal processing. Be a kick to have it in a relatively standard box, with easily replaceable components. Maybe you could do the same thing above-ground; after all, it's still sound and there are still reflections that can tell you a lot (just observe a bat.)

    The cool thing about signal processing is that a lot of it is like graphics, in a way; generally, you set up some horrible sequence of things to do to your data, and then thrash each sample just like you did the last one.

    Anyway, it just struck me that no matter how hard it is to program, it could certainly be useful for some of these really resource intensive tasks.
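    Just to make that concrete, here's roughly the kind of "do the same thing to every sample" kernel I have in mind, sketched CUDA-style (the filter length, names, and setup are invented for illustration; a real receiver would add decimation, demodulation, AGC, and so on):

    ```
    #include <cuda_runtime.h>

    #define NTAPS 64                        // hypothetical FIR filter length

    __constant__ float d_taps[NTAPS];       // coefficients, shared by every thread
                                            // (host fills this with cudaMemcpyToSymbol)

    // One thread per output sample: the same multiply-accumulate window is
    // "thrashed" across the whole block of samples in parallel.
    __global__ void fir_filter(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i <= n - NTAPS)
        {
            float acc = 0.0f;
            for (int t = 0; t < NTAPS; ++t)
                acc += d_taps[t] * in[i + t];   // multiply-accumulate over the taps
            out[i] = acc;
        }
    }
    ```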

    • Re:OOOoooo (Score:5, Insightful)

      by sitturat ( 550687 ) on Thursday March 01, 2007 @11:29AM (#18195208) Homepage
      Or you could just use the correct tool for the job - a DSP. I don't know why people insist on solving all kinds of problems with PC hardware when much more efficient solutions (in terms of performance and developer effort) are available.
      • Re:OOOoooo (Score:5, Insightful)

        by fyngyrz ( 762201 ) * on Thursday March 01, 2007 @11:32AM (#18195252) Homepage Journal
        I don't know why people insist on solving all kinds of problems with PC hardware when much more efficient solutions (in terms of performance and developer effort) are available.

        Simple: they aren't available. PCs don't typically come with DSPs. But they do come with graphics, and if you can use the GPU for things like this, it's a nice dovetail. For someone like that radio manufacturer, there's no need to force the consumer to buy more hardware. It's already there.

        • Re: (Score:3, Interesting)

          You can buy a decent FPGA development board and turn it into a DSP for the price of a high-end graphics card. It isn't a trivial project to get started with, but it might be easier than using a GPU. Plus, the skills and hardware from this project will take you much farther than GPU skills.

          Get started here [fpga4fun.com] and find some example DSP cores here [opencores.org].

          • by fyngyrz ( 762201 ) *

            If you were going to go to that kind of trouble, why not buy a chip (or entire board) designed to be a DSP? Why go the FPGA route? Not trying to be nasty, I assume you have a reason for suggesting this, I just don't know what it is.

            • The original poster seems to want a lot of control and the possibility of tinkering with different configurations -- "Be a kick to have it in a relatively standard box, with easily replaceable components." Working with FPGAs gives you that software-like ability to create or download new components and rearrange them to fit your needs. A DSP board gives you one fixed layout of components. Plus, you can have fun turning the FPGA into anything else you want.
          • by julesh ( 229690 )
            You can buy a decent FPGA development board and turn it into a DSP for the price of a high-end graphics card. It isn't a trivial project to get started with, but it might be easier than using a GPU. Plus, the skills and hardware from this project will take you much farther than GPU skills.

            Really? I haven't seen PC-insertable FPGA dev boards that are capable of clocking anything like as high as a modern GPU (i.e. typically ~800MHz) for sub-$1000. If you can point me in the direction of a reasonably-priced
            • The other response said it all, but here's another way of looking at it:

              For a processor, the minimum clock speed required is

              (rate of incoming data) * (# of instructions to process a unit of data) / (average number of instructions per clock cycle, aka IPC)

              For a nicely pipelined hardware design, you could theoretically get away with a clock rate equal to the rate of incoming data, or even less, if you can process more than one unit of data per clock and have a separate, higher-clocked piece capturing the inpu
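              Plugging hypothetical numbers into that formula, just to show the shape of it (a 60 MS/s front end, a guess of 50 instructions per sample, and an average IPC of 2 - all made up):

              ```
              #include <cstdio>

              int main()
              {
                  double sample_rate      = 60e6; // samples per second (hypothetical SDR front end)
                  double insns_per_sample = 50;   // instructions to filter/demodulate one sample (a guess)
                  double ipc              = 2.0;  // average instructions retired per clock (a guess)

                  double min_clock_hz = sample_rate * insns_per_sample / ipc;
                  printf("minimum clock ~ %.2f GHz\n", min_clock_hz / 1e9);  // prints ~1.50 GHz
                  return 0;
              }
              ```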
      • The graphics processor is basically a DSP now.

        We use computers to do things they really aren't the best at, but we use them because they are so flexible at doing so many things, and cheaply; whereas a DSP in a specialized box may be better at a specific single task, the PC's economies of scale come into play.
      • Re:OOOoooo (Score:5, Insightful)

        by maird ( 699535 ) on Thursday March 01, 2007 @11:51AM (#18195538) Homepage
        A DSP probably is more efficient for that task, but you can't go down to your local WalMart and buy one. Besides, even if you could, the IC isn't much use on its own. Don't forget that you need at least a 60MHz (yes, sixty megahertz) ADC and DSP pair to do what was suggested. The cost of building useful supporting electronics around a DSP capable of implementing a direct sampling receiver at 60MHz would be prohibitive in the range $ridiculous-$ludicrous. Add to that the cost of getting any code written for it and the idea becomes suitable for military applications only.

        OTOH, the PC has a huge and varied user base, so it is priced as a mere commodity. It is general purpose and can be adapted to a large variety of tasks. It is relatively cheap to write code for and has a huge base of capable special-interest programmers. If there is a 60+MHz ADC out there somewhere for a reasonable price, then it isn't just a matter of whether a DSP is a better tool; a PC is a trivially cheap tool by comparison. You'd still need a decent UI to use an all-band direct sampling HF receiver. A PC would be good for that too, so keep it all in the same box.

        You can buy non-direct-sampling receivers with DSPs in them at prices ranging from $1000 to over $10000. The DSP is probably no faster than about 100kHz, so the signal has to be passed through one or more analogue IF stages to get the signal you want into the 50kHz that can be decoded. You can probably buy a PC with greater digital signal processing potential for less than $500. A 30MHz direct sampling receiver will receive and service 30MHz worth of bandwidth simultaneously. Not long after general availability, the graphics card configuration in question will probably cost less than $1000. With the processing capability it has, you (the human) will probably run out of ability to interpret simultaneously decoded signals before the PC runs out of ability to decode more (it's really hard to listen to two conversations at the same time on an HF radio).
        • Re:OOOoooo (Score:5, Informative)

          by End Program ( 963207 ) on Thursday March 01, 2007 @01:05PM (#18196522)
          Don't forget that you need at least a 60MHz (yes, sixty megahertz) ADC and DSP pair to do what was suggested. The cost of building useful supporting electronics around a DSP capable of implementing a direct sampling receiver at 60MHz would be prohibitive in the range $ridiculous-$ludicrous.

          Maybe there aren't any DSPs available at low cost... if you aren't a hardware designer:

          400 MHz DSP $10.00 http://www.analog.com/en/epProd/0,,ADSP-BF532,00.html [analog.com]
          14-bit, 65 MSPS ADC $30.00 http://www.analog.com/en/prod/0,,AD6644,00.html [analog.com]
          Catching non-designers talking smack ...priceless
          • Damn I wish I had modpoints... -gus
          • Re: (Score:2, Insightful)

            by MrNaz ( 730548 )

            NOTE:

            The cost of building useful supporting electronics around a DSP capable of implementing a direct sampling receiver at 60MHz would be prohibitive

            Not the cost of the units, but the cost of doing anything useful with them. For a person NOT integrating the parts into mass-produced items, it's only suitable for people doing something simple as a hobby, or for learning. I would *guess* that building anything to solve a problem in practice would take an incredibly large amount of time and skill, both of whi

  • by Duncan3 ( 10537 ) on Thursday March 01, 2007 @11:34AM (#18195274) Homepage
    Don't mention the wattage...

    And the second rule of teraflop club...

    Don't mention the wattage...

    Back here in the real world where we PAY FOR ELECTRICITY, we're waiting for some nice FLOPS/Watt, keep trying guys.

    And they announced this some time ago didn't they?
    • Also (Score:3, Interesting)

      by Sycraft-fu ( 314770 )
      There's a real difference between getting something to happen on a quasi-DSP like a GPU and on a real, general purpose processor like a CPU. If GPUs were full out CPU replacements, well then we wouldn't have CPUs any more, would we? The problem is that they are very very fast, but only at some things. Now that's fine, because that's what they were designed for. They are made to push pixels really fast and if they can do anything else, well bonus. However it does mean that they aren't a general purpose compu
    • by dlapine ( 131282 ) <lapine.illinois@edu> on Thursday March 01, 2007 @12:02PM (#18195662) Homepage
      LOL - you're complaining about wattage for 1 TF when they did it on a pair of friggin' video cards?? That's gotta be what, 500 watts total for the whole PC?


      We've run several PC clusters and IBM mainframes that didn't have 1 TF of capacity. You don't want to know how much power went into them. Yes, our modern blade-based clusters are more condensed, but they're still power hogs for dual- and quad-core systems.

      Blue Gene is considered a power-efficient cluster and the fastest [top500.org], but it still draws 7 kW per rack of 1024 CPUs [ibm.com]. At 4.71 TF per rack, even Blue Gene pulls about 1.5 kW per teraflop.

      Yes, it's a pair of video cards and not a general purpose CPU, but your average user doesn't have the ability to program and use a Blue Gene style solution either. They just might get some real use out of this with a game physics engine that taps into this computing power.

      This is cool.

      • Re: (Score:3, Informative)

        by Duncan3 ( 10537 )
        Count real, usable FLOPS. GPUs don't win.

        But for ~$500, it's what's going to be used.
    • by julesh ( 229690 )
      About 230W [engadget.com] per card.
  • by Assmasher ( 456699 ) on Thursday March 01, 2007 @11:35AM (#18195304) Journal
    ...generic purposes; it's that GPUs are better suited to certain types of operations. Image processing, as an example, is very well suited to working on a GPU because the GPU excels at addressing and operating on elements of arrays (textures, basically). I've used it as a proof of concept at work for processing large numbers of video feeds simultaneously for things like photometric normalization, image stabilization, et cetera, and the things are awesome. They work well in this scenario because the problem I'm trying to solve fits the caveats of using the GPU well: slow upload of data, miraculously fast action upon that data, slow download of the data. Now, "slow" is relative, and getting more and more relative as new chipsets are released.

    The actual framework for doing this is relatively simple, although it certainly did help that I have a background in OpenGL and DirectX Graphics (so I've done shader work before); however, again, progress is removing those caveats as well. Generic GPU programming toolsets are imminent, the only problem being that ATI has no interest in their toolsets working with nVidia, and nVidia has even less interest in their toolset(s) running on ATI hardware. Something we'll just have to learn to deal with.

    BTW, DirectX10 will make this a little easier as well with changes to how you have to pipeline data in order to operate on it in a particular fashion.
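    A rough sketch of that slow-upload / fast-compute / slow-download shape in CUDA-style host code (buffer sizes, kernel names, and constants are invented; the point is just that the two bus transfers are amortized over as much GPU work as possible):

    ```
    #include <cuda_runtime.h>

    // Toy stand-in for one per-pixel processing pass (real photometric
    // normalization or stabilization would obviously be more involved).
    __global__ void scale_pixels(float *frame, float gain, float bias, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            frame[i] = gain * frame[i] + bias;
    }

    void process_frame(const float *host_frame, float *host_result, int n)
    {
        float *d_frame = nullptr;
        size_t bytes = (size_t)n * sizeof(float);
        cudaMalloc(&d_frame, bytes);

        // Slow: push the frame across the bus once.
        cudaMemcpy(d_frame, host_frame, bytes, cudaMemcpyHostToDevice);

        // Fast: chain as many passes as you like while the data stays in video memory.
        int threads = 256, blocks = (n + threads - 1) / threads;
        scale_pixels<<<blocks, threads>>>(d_frame, 1.0f / 255.0f, 0.0f, n); // normalize
        scale_pixels<<<blocks, threads>>>(d_frame, 1.2f, -0.1f, n);         // adjust levels

        // Slow: pull the result back once.
        cudaMemcpy(host_result, d_frame, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_frame);
    }
    ```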
  • Nitpick (Score:4, Informative)

    by 91degrees ( 207121 ) on Thursday March 01, 2007 @11:36AM (#18195314) Journal
    That should be Teraflops. Flops is Floating-point operations per second, so it always has an s on the end, even if singular.
    • That should be Teraflops. Flops is Floating-point operations per second, so it always has an s on the end, even if singular.

      I don't think so. You can either use 1 teraFLOPS, 2 teraFLOPS, 3 teraFLOPS (in the same way you say 1 MHz, 2 MHz, 3 MHz), where I am not using capitals for emphasis but as the way the letters should be written, or you can use 1 teraflop, 2 teraflops, 3 teraflops (in the same way you say 1 snafu, 2 snafus, 3 snafus). The thing is that "FLOPS" is an acronym (i.e. an abbreviation formed fro

  • Worthless Preview (Score:3, Insightful)

    by jandrese ( 485 ) <kensama@vt.edu> on Thursday March 01, 2007 @11:39AM (#18195364) Homepage Journal
    So the preview could be boiled down to: Card still in development will be faster than cards currently available for sale.

    It also included some pictures of the cooling solution that will completely dominate the card. Not that a picture of a microchip with "R600" written on it would be a lot better I guess. Although the pictures are fuzzy and hard to see, it looks like it might require two separate molex connections just like the 8800s.
    • Keep in mind that this is still a prototype, and from what I've heard, the cooling apparatus in the pictures is for OEMs like HP, Gateway, etc. Once it's released to retail, the fan will move on-board, and the total card won't be all that remarkably large.
  • I thought the dual CPU G5 machines were rated at 1 teraflop. Certainly PowerPC AltiVec processors are super floating-point engines (but I don't know exactly how they rank at flops/mhz....)

    But then maybe the issue depends on the notion of what is "ubiquitous" and Macs don't qualify. I dunno, but I'm sure someone on /. will correct me :-)

            dave
    • by bnenning ( 58349 )
      I thought the dual CPU G5 machines were rated at 1 teraflop.

      IIRC the best case for Altivec is 8 flops/cycle (fused multiply/add of 4 32-bit floats), so a quad G5 at 2.5GHz would have a maximum of 80 GFlops. With perfectly scheduled code you could get some additional ops out of the integer and FP units, but not close to a teraflop.
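      Spelling that back-of-the-envelope figure out (the 8 flops/cycle is the AltiVec best case described above; the rest is just arithmetic):

      ```
      #include <cstdio>

      int main()
      {
          double flops_per_cycle = 8;     // fused multiply-add on a 4-wide single-precision vector
          double clock_hz        = 2.5e9; // 2.5 GHz G5
          int    cores           = 4;     // quad G5

          double peak = flops_per_cycle * clock_hz * cores;
          printf("theoretical AltiVec peak: %.0f GFLOPS\n", peak / 1e9);  // prints 80 GFLOPS
          return 0;
      }
      ```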
  • How long before they put it on the HT bus using an HTX slot?
    • by *weasel ( 174362 )
      Or more appropriately:
      How long until AMD starts releasing multi-core chips with multiple/mixed CPU/GPU cores, joined by a virtual inter-core HT bus and all wired into main memory? (and optionally a bank of GDDR)
  • which is fully connected to the Internet so that I can put my toast down or pop it up remotely.

    Wait... from some of the other comments about electricity usage, I might be able to do away with the heating coils and use the circuits themselves to toast. That would really be an environmental plus. Wonder how it would affect the taste of the bread?
    • So would the heat sinks leave 'scorch marks'? Would this lead to a redesign of heatsinks to provide branding/corporate logos on toast?

      It might be kinda cool to get "Intel Inside" burnt onto a panini sandwich... :-)

              dave
    • Well, given your bread will be so "close to the metal", I'm guessing, not good ;)
  • by Doc Ruby ( 173196 ) on Thursday March 01, 2007 @11:47AM (#18195478) Homepage Journal

    it's hard to program such GPUs for anything other than graphics applications.


    "Anything other" is "general purpose", which they cover at GPGPU.org [gpgpu.org]. But the general community of global developers hasn't gotten hooked on the cheap performance yet. Maybe if someone got an MP3 encoder working on one of these hot new chips, the more general purpose programmers would be delivering supercomputing to the desktop on these chips.
    • MP3 is trivial. No more than 5 or 10 minutes to do an entire album. Or maybe 3 minutes. Video is where it's at. Turning home movies into h.264 video takes a ton of computing power and time. Get a GPU assisting a CPU encoding an hour of DV into h.264 in only fifteen or thirty minutes and the video scene will be all over it.
        MP3 encoding at a server isn't a trivial load for a thousand simultaneous streams on a P4. Your 3 minutes per 45-minute album is only 15x realtime for about $1000, while a 1 TFLOPS GPU card might encode 16 thousand times for $300.

        There are many more people coding multistream MP3 servers, but still no port to GPGPU.

        Video servers follow the same logic. But video decoders at the client will get better economics from many thousands/millions of ASICs in the mass market, rather than the few thousand servers a year that the market will
        • Could you clarify what situation needs streaming mp3 recompressed for multiple bitrates on the fly? Wouldn't it make more sense to do that ahead of time? Or are you talking about the music channels over digital cable and satellite? Do those channels get compressed on the fly like the rest of the video streams?
  • by Animats ( 122034 ) on Thursday March 01, 2007 @11:59AM (#18195638) Homepage

    Ars has an article exploring why it's hard to program such GPUs for anything other than graphics applications.

    No, Ars has an article blithering that it's hard to program such GPUs for anything other than graphics applications. It doesn't say anything constructive about why.

    Here's a reasonably readable tutorial on doing number-crunching in a GPU [uni-dortmund.de]. The basic concepts are that "Arrays = textures", "Kernels = shaders", and "Computing = drawing". Yes, you do number-crunching by building "textures" and running shaders on them. If your problem can be expressed as parallel multiply-accumulate operations, which covers much classic supercomputer work, there's a good chance it can be done fast on a GPU. There's a broad class of problems that work well on a GPU, but they're generally limited to problems where the outputs from a step have little or no dependency on each other, allowing full parallelism of the computations within a single step. If your problem doesn't map well to that model, don't expect much.
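    As a hypothetical example of the "independent outputs" shape that maps well: a dense matrix-vector multiply, where every output element is its own multiply-accumulate loop and never reads another thread's result. Sketched here in CUDA-style C rather than the texture/shader formulation the tutorial uses:

    ```
    #include <cuda_runtime.h>

    // y = A * x for a dense rows-by-cols matrix stored row-major.
    // Each thread owns one output element, so the outputs of the step have
    // no dependency on each other -- exactly the structure that parallelizes well.
    __global__ void matvec(const float *A, const float *x, float *y, int rows, int cols)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < rows)
        {
            float acc = 0.0f;
            for (int c = 0; c < cols; ++c)
                acc += A[row * cols + c] * x[c];   // parallel multiply-accumulate
            y[row] = acc;
        }
    }
    ```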

    • You don't need greater than 32-bit precision for any of the MAC ops. Usually that kind of limitation can be overcome by rethinking the algorithm, and doing some accumulation or error analysis outside of the GPU.
    • Maybe you should do more than just skim the article and post an ill-informed flame. In the article, I blame the problems specifically on the complexity of dealing with programmer-managed memory hierarchy, and I give some of my reasoning.

      As for your specific comments about the classes of problems that do or don't map well onto a GPU, I've covered those issues in previous posts on the topic. The post you're trying to criticize wasn't about the kinds of problems that you can and can't solve efficiently with GP
    • Re: (Score:3, Informative)

      Yes, you used to have to do everything in a graphical environment, but not any more. With nVidia's CUDA you program in C/C++, have a general memory model (you can access texture memory if it's efficient for what you need, but you also have general device memory and several other types of memory to choose from), and run on fully capable stream processors. As far as the programmer is concerned, the GPU is just a stream-processor add-in card. You do have to manually transfer to and from device memory, but on
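      A rough sketch of what that looks like in practice (hypothetical names and sizes, error handling omitted): ordinary global device memory you copy into by hand, plus on-chip __shared__ and __constant__ spaces you opt into where they help.

      ```
      #include <cuda_runtime.h>

      __constant__ float d_gain;   // small read-only parameter, cached on-chip

      // Assumes it is launched with 256 threads per block.
      __global__ void scale(const float *in, float *out, int n)
      {
          __shared__ float tile[256];                  // fast per-block scratch memory
          int i = blockIdx.x * blockDim.x + threadIdx.x;

          tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;  // stage global memory into shared
          __syncthreads();

          if (i < n)
              out[i] = d_gain * tile[threadIdx.x];     // trivial "processing" step
      }

      // Host side: constant memory has its own copy call; ordinary device buffers
      // still need explicit cudaMemcpy, as described above.
      void setup_gain(float gain)
      {
          cudaMemcpyToSymbol(d_gain, &gain, sizeof(float));
      }
      ```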
      • Mod this guy up - CUDA and CTM are Nvidia's and ATI's APIs for direct access to the GPU, completely bypassing all the DirectX/OpenGL layers that were previously the only way to shoehorn in computational workloads. The GP is obsolete and does not deserve a 5 at all.
  • by natrius ( 642724 ) <niran@@@niran...org> on Thursday March 01, 2007 @12:04PM (#18195702) Homepage
    To all the fellas out there with geek friends to impress
    It's easy to do, just follow these steps:
    One: Cut a hole in a box
    Two: Stick your chip in that box
    Three: Make her open the box
    And that's the way you do it
    It's my chip in a box
  • Could this be the start of some really good open-source drivers for ATI cards?
    Just how much of X and OpenGL could they offload to this card?
    What about Theora, Ogg, Speex, or DivX encoding and decoding?
    I know it is a radical idea, but since they are optimized for graphics and graphics-like operations, why not use them for that?
    • ATI has been offloading codec work (for at least certain codecs) to the graphics card since the 9XXX series. H.264, for instance, is offered through a codec interface that is 'accelerated' by the card on X19XX series cards... I'd assume it's not done with dedicated hardware, but by offloading the work of processing to the card's GPU.
  • SuperCell (Score:3, Informative)

    by Doc Ruby ( 173196 ) on Thursday March 01, 2007 @12:14PM (#18195854) Homepage Journal
    The Playstation 3 [wikipedia.org] is reported to harness 2 TFLOPS [wikipedia.org]. But "only" 204 GFLOPS of that, about 10%, runs on the Cell CPU. The other 1.8 TFLOPS runs on the nVidia G70 GPU, and the G70 runs shaders, which have very limited application to anything but actually rendering graphics.

    The Cell itself is notoriously hard to code for. If just some extra effort can target the nVidia GPU, that's TWO TeraFLOPS in a $500 box. A huge leap past both AMD and Intel.
    • by Halo- ( 175936 )
      disclosure: I do not speak for my employer.

      The problem is that you can't just say, "I can multiply two floating-point numbers in time X, and therefore my speed is 1/X." You have to actually get that data to and from some sort of useful location. High performance computing is bounded by memory bandwidth these days, not clock speed. The article summary mentions streaming, but I can find no reference to that in the actual article itself.
      Consider digital SLR cameras: a decent dSLR can take a picture in 1/160
      • Re: (Score:3, Informative)

        by Doc Ruby ( 173196 )
        PCI-Express offers 64 "lanes" pumping up to 500MBps each (since January; 250MBps in actual shipping HW), for 256Gbps total in a switched hub. The Cell's EIB is probably its most interesting feature: a 200Gbps token ring that transparently connects off-chip. So the new IBM Cells, with 4 cores (a Power970 + 8 SPEs each) on one die (or SoC), have 32x 25.6GFLOPS + 4x 970s, all moving at 200Gbps. Or just a single Cell at 204GFLOPS feeds 200Gbps to a PCIe stuffed with 20x 10Gig ethernet cards (10 double-10GigE PCIe cards
  • Well...duh (Score:5, Insightful)

    by Anonymous Coward on Thursday March 01, 2007 @12:35PM (#18196154)
    GPGPU is hard because we're still in the very early days of this particular revolution. As I think about it, and from what we know of AMD's plans in particular, I think this is kind of like the evolution of FPU.

    See, in the early days the FPU was a separate chip (anyone remember buying an 80387 to plug into their mobo?). Writing code to use the FPU was also a complete pain in the ass, because you had to use assembly, with all the memory management and interrupt handling headaches inherent in that. FPUs from different vendors weren't guaranteed to have completely compatible instruction sets. Because it was such a pain in the ass, only highly special-purpose applications made use of FPU code. (And it's not that computer scientists hadn't thought up appropriate abstractions to make writing floating point easy. Compilers just weren't spitting out FPU code.)

    Then, things began to improve. The FPU was brought on-die, but as an optional component (think 486SX vs 486DX). Languages evolved to support FPUs, hiding all the difficulty under suitable abstractions so programmers could write code that just worked. More applications began to make use of floating-point capabilities, but very few required an FPU to work.

    Finally, the FPU was brought on-die as a bog-standard part of the CPU. At that point, FPU capabilities could be taken for granted, and an explosion of applications requiring an FPU to achieve decent performance ensued (see, for instance, most games). And writing FPU code is now no longer any more difficult than declaring type float. The compiler handles all the tricky parts.

    I think GPGPU will follow a similar trajectory. Right now, we're in phase one. Using a GPU for general purpose computation is such an incredible pain that only the most specialized applications are going to use GPGPU capabilities. High-level languages haven't really evolved to take advantage of these capabilities yet. And yes, it's not as though computer scientists don't have appropriate abstractions that would make coding for GPGPU vastly easier. Eventually, GPGPU will become an optional part of the CPU. Eventually high-level languages (in addition to the C family, perhaps FORTRAN or Matlab or other languages used in scientific computing) will be extended to use GPGPU capabilities. Standards will emerge, or where hardware manufacturers fail to standardize, high-level abstraction will sweep the details under the rug. When this happens, many more applications will begin to take advantage of GPGPU capabilities. Even further down the road, GPGPU capabilities will become bog standard, at which point we'll see an explosion of applications that need these capabilities for decent performance.

    Granted, the curve for GPGPU is steeper because this isn't just a matter of different instructions, but a change in memory management as well. But I think this kind of transition can and will eventually happen.
  • by UnknowingFool ( 672806 ) on Thursday March 01, 2007 @01:19PM (#18196734)

    Though a prototype, this beats Intel to ubiquitous Teraflop machines by approximately 5 years."

    So I take it that AMD will be ready for Vista's successor?

  • I do not think it means what you think it means.
  • I laugh every day at the tags people assign to articles, but today I laughed the hardest with the tag "dickinabox" ...
  • this teraFLOP is on 64-bit doubles. Single-precision teraFLOPs are close to useless for anything that requires a teraFLOP.
