AMD Demonstrates "Teraflop In a Box"

UncleFluffy writes "AMD gave a sneak preview of their upcoming R600 GPU. The demo system was a single PC with two R600 cards running streaming computing tasks at just over 1 Teraflop. Though a prototype, this beats Intel to ubiquitous Teraflop machines by approximately 5 years." Ars has an article exploring why it's hard to program such GPUs for anything other than graphics applications.
  • ubiquitous (Score:5, Insightful)

    by Speare ( 84249 ) on Thursday March 01, 2007 @12:16PM (#18195034) Homepage Journal

    Look up 'ubiquitous' before you whine about how far behind Intel might seem to be.

    Though having one demonstration will help spur the demand, and the demand will spur production, I still think it'll be five years before everybody's grandmother will have a teraflop machine lying around on their checkbook-balancing credenza, and every PHB will have one under their desk warming their feet during long conference calls.

  • Re:OOOoooo (Score:5, Insightful)

    by sitturat ( 550687 ) on Thursday March 01, 2007 @12:29PM (#18195208) Homepage
    Or you could just use the correct tool for the job - a DSP. I don't know why people insist on solving all kinds of problems with PC hardware when much more efficient solutions (in terms of performance and developer effort) are available.
  • Re:OOOoooo (Score:5, Insightful)

    by fyngyrz ( 762201 ) * on Thursday March 01, 2007 @12:32PM (#18195252) Homepage Journal
    I don't know why people insist on solving all kinds of problems with PC hardware when much more efficient solutions (in terms of performance and developer effort) are available.

    Simple: they aren't available. PCs don't typically come with DSPs. But they do come with graphics, and if you can use the GPU for things like this, it's a nice dovetail. For someone like that radio manufacturer, there's no need to force the consumer to buy more hardware. It's already there.

  • by Duncan3 ( 10537 ) on Thursday March 01, 2007 @12:34PM (#18195274) Homepage
    The first rule of teraflop club...

    Don't mention the wattage...

    And the second rule of teraflop club...

    Don't mention the wattage...

    Back here in the real world where we PAY FOR ELECTRICITY, we're waiting for some nice FLOPS/Watt (see the worked numbers below). Keep trying, guys.

    And they announced this some time ago, didn't they?
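    For scale, here is the metric the poster is asking for, worked through with an invented power figure, since AMD quoted no draw for the demo box (a sketch, not a measurement):

        /* FLOPS/Watt for the demo, using an assumed 500 W draw for the
         * two-card box -- AMD published no power figure, so this number
         * is purely illustrative. */
        #include <stdio.h>

        int main(void) {
            double flops = 1.0e12;  /* the claimed 1 teraflop */
            double watts = 500.0;   /* assumed, not measured */
            printf("%.1f GFLOPS/W\n", flops / watts / 1.0e9);  /* prints 2.0 */
            return 0;
        }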
  • Worthless Preview (Score:3, Insightful)

    by jandrese ( 485 ) <kensama@vt.edu> on Thursday March 01, 2007 @12:39PM (#18195364) Homepage Journal
    So the preview could be boiled down to: a card still in development will be faster than cards currently available for sale.

    It also included some pictures of the cooling solution, which completely dominates the card. Not that a picture of a microchip with "R600" written on it would be much better, I guess. The pictures are fuzzy and hard to make out, but it looks like the card might require two separate Molex connections, just like the 8800s.
  • Re:OOOoooo (Score:5, Insightful)

    by maird ( 699535 ) on Thursday March 01, 2007 @12:51PM (#18195538) Homepage
    A DSP probably is more efficient for that task, but you can't go down to your local WalMart and buy one. Besides, even if you could, the IC isn't much use to anyone on its own. Don't forget that you need at least a 60MHz (yes, sixty megahertz) ADC and DSP pair to do what was suggested. The cost of building useful supporting electronics around a DSP capable of implementing a direct sampling receiver at 60MHz would be prohibitive, in the range $ridiculous-$ludicrous. Add to that the cost of getting any code written for it, and the idea becomes suitable for military applications only.

    OTOH, the PC has a huge and varied user base, so it is priced like the commodity it is. It is general purpose and can be adapted to a large variety of tasks. It is relatively cheap to write code for and has a huge base of capable special-interest programmers. If there is a 60+MHz ADC out there somewhere for a reasonable price, then it isn't just a matter of whether a DSP is a better tool; a PC is a trivially cheap tool by comparison. You'd still need a decent UI to use an all-band direct sampling HF receiver. A PC would be good for that too, so keep it all in the same box.

    You can buy non-direct-sampling receivers with DSPs in them at prices ranging from $1000 to over $10000. The DSP is probably no faster than about 100kHz, so the signal has to be passed through one or more analogue IF stages to get the signal you want into the 50kHz that can be decoded. You can probably buy a PC with greater digital signal processing potential for less than $500. A 30MHz direct sampling receiver will receive and service 30MHz worth of bandwidth simultaneously. Not long after general availability, the graphics card configuration in question will probably cost less than $1000. With the processing capability it has, you (the human) will probably run out of ability to interpret simultaneously decoded signals before the PC runs out of ability to decode more (it's really hard to listen to two conversations at the same time on an HF radio). (A toy sketch of the signal processing involved follows below.)
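    To make the "PC as receiver" idea concrete, here is a minimal sketch of what the software side of a direct sampling receiver does: mix the raw ADC stream against a local oscillator, then low-pass and decimate down to one narrow channel. The 60 MHz rate echoes the comment above; the tuning frequency, decimation factor, and the synthetic test tone are all made up for illustration.

        /* Toy direct-sampling downconverter: tune one 50 kHz channel out
         * of a 60 MHz ADC stream. Input is a synthetic tone so the
         * program runs stand-alone; every constant here is illustrative. */
        #include <math.h>
        #include <stdio.h>

        static const double PI = 3.14159265358979323846;

        #define SAMPLE_RATE 60.0e6   /* 60 MHz ADC, as in the comment */
        #define TUNE_HZ      7.1e6   /* hypothetical HF signal to tune */
        #define DECIM        1200    /* 60 MHz / 1200 = 50 kHz output */
        #define NSAMP        60000   /* 1 ms of input */

        int main(void) {
            static double adc[NSAMP];
            /* fake ADC data: a tone at the frequency we will tune to */
            for (int n = 0; n < NSAMP; n++)
                adc[n] = cos(2.0 * PI * TUNE_HZ * n / SAMPLE_RATE);

            double i_acc = 0.0, q_acc = 0.0;
            for (int n = 0; n < NSAMP; n++) {
                double ph = 2.0 * PI * TUNE_HZ * n / SAMPLE_RATE;
                i_acc += adc[n] *  cos(ph);   /* mix down: I arm */
                q_acc += adc[n] * -sin(ph);   /* mix down: Q arm */
                if ((n + 1) % DECIM == 0) {   /* boxcar average = crude
                                                 low-pass, then decimate */
                    printf("%f %f\n", i_acc / DECIM, q_acc / DECIM);
                    i_acc = q_acc = 0.0;
                }
            }
            return 0;
        }

    Every output sample is a handful of independent multiply-accumulates, repeated per channel, which is why this kind of work maps so naturally onto GPU-style stream hardware.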
  • Re:Not sonar? (Score:4, Insightful)

    by fyngyrz ( 762201 ) * on Thursday March 01, 2007 @12:52PM (#18195544) Homepage Journal

    You use ambient sound instead of radiating a signal yourself, and you try to resolve the entire environment, rather than just the sound-emitting elements in it. This makes you a lot harder to detect; it also makes resolving what is going on a lot more difficult. Hence the need for lots of CPU power, in the water or in the air.

    Passive sonar - at least typically - is intended to resolve (for instance) a ship or a weapon that is emitting noise. But the sea is emitting noise all the time - waves, fish burping, whale calls, shrimp clicking - all kinds of noise, really. Using that noise as the detecting signal is the trick, and it isn't very similar to normal sonar in terms of what kind of computations or results are required. Classic sonar gives you a range and a bearing; this kind of thing is aimed at giving you an actual picture of the environment. It's a lot harder to do, but man, is it cool. (A toy sketch of the core operation follows.)
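    The basic operation this kind of passive processing leans on is cross-correlation: find the time offset at which two sensors heard the same noise. A toy sketch, with made-up array sizes and a simulated 25-sample delay; real systems do this in the frequency domain, across many sensor pairs and candidate directions at once, which is where the teraflops go.

        /* Estimate time-difference-of-arrival between two sensors by
         * brute-force cross-correlation. All sizes and the 25-sample
         * delay are invented for the example. */
        #include <stdio.h>
        #include <stdlib.h>

        #define N      4096   /* samples per sensor */
        #define MAXLAG  100   /* lag search window */

        int main(void) {
            static double a[N], b[N];
            /* fake ambient noise; sensor b hears it 25 samples later */
            for (int n = 0; n < N; n++)
                a[n] = (double)rand() / RAND_MAX - 0.5;
            for (int n = 0; n < N; n++)
                b[n] = (n >= 25) ? a[n - 25] : 0.0;

            int best_lag = 0;
            double best = -1.0e30;
            for (int lag = -MAXLAG; lag <= MAXLAG; lag++) {
                double sum = 0.0;
                for (int n = 0; n < N; n++) {
                    int m = n + lag;
                    if (m >= 0 && m < N)
                        sum += a[n] * b[m];
                }
                if (sum > best) { best = sum; best_lag = lag; }
            }
            /* the peak lag, times the sample period, gives the arrival
               delay and hence a bearing; expect 25 here */
            printf("estimated delay: %d samples\n", best_lag);
            return 0;
        }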

  • by BobPaul ( 710574 ) * on Thursday March 01, 2007 @12:56PM (#18195592) Journal

    Excellent point! Expect to see a nVidia/Intel partnership in 5, 4, 3, 2...
    Good call! That must be why nVidia has decided to enter the x86 chip market, and why Intel has significantly improved their GPU offerings and indicated that they may include vector units in future chips: because these companies plan to work together in the future! It's so obvious! I wish I hadn't paid attention these past 6 months, as it's clearly confused me!
  • by HappySqurriel ( 1010623 ) on Thursday March 01, 2007 @01:11PM (#18195806)
    Well, as I see it, advertising "[some amazing benchmark] in a box" is reasonably foolish, because I could produce a system with amazing theoretical performance that doesn't really perform much better than a system costing a fraction as much. It wasn't that long ago that you could (easily) buy motherboards that supported 2 or 4 separate processors, and people have built quad-SLI setups; that means you could create a 4-processor Core 2 Duo system with quad-SLI GeForce 8800 GTXs which (in most applications) would not perform much better than a single-processor Core 2 Duo system with a single GeForce 8800 GTX. (See the sketch below for the arithmetic.)
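    The diminishing-returns argument can be made precise with Amdahl's law (my framing, not the poster's): if only a fraction p of the work can use the extra processors, n of them speed you up by 1 / ((1 - p) + p/n). The p values below are invented for illustration.

        /* Amdahl's law: even generous parallel fractions give modest
         * speedups on quad hardware. The fractions are illustrative. */
        #include <stdio.h>

        int main(void) {
            const double fractions[] = { 0.5, 0.9, 0.99 };
            for (int i = 0; i < 3; i++)
                for (int n = 1; n <= 8; n *= 2)
                    printf("p=%.2f  n=%d  speedup=%.2fx\n",
                           fractions[i], n,
                           1.0 / ((1.0 - fractions[i]) + fractions[i] / n));
            return 0;
        }

    At p = 0.5, quadrupling the hardware only gets you 1.6x: the "not much better than a single processor" effect.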
  • Well...duh (Score:5, Insightful)

    by Anonymous Coward on Thursday March 01, 2007 @01:35PM (#18196154)
    GPGPU is hard because we're still in the very early days of this particular revolution. As I think about it, and from what we know of AMD's plans in particular, I think this is kind of like the evolution of the FPU.

    See, in the early days the FPU was a separate chip (anyone remember buying an 80387 to plug into their mobo?). Writing code to use the FPU was also a complete pain in the ass, because you had to use assembly, with all the inherent memory management and interrupt handling headaches. FPUs from different vendors weren't guaranteed to have completely compatible instruction sets. Because it was such a pain in the ass, only highly special-purpose applications made use of FPU code. (And it's not that computer scientists hadn't thought up appropriate abstractions to make writing floating point easy; compilers just weren't spitting out FPU code.)

    Then, things began to improve. The FPU was brought on die, but as an optional component (think 486SX vs. 486DX). Languages evolved to support FPUs, hiding all the difficulty under suitable abstractions so programmers could write code that just worked. More applications began to make use of floating point capabilities, but very few required an FPU to work.

    Finally, the FPU was brought on die as a bog-standard part of the CPU. At that point, FPU capabilities could be taken for granted, and an explosion of applications requiring an FPU for decent performance ensued (see, for instance, most games). And writing FPU code is now no longer any more difficult than declaring type float. The compiler handles all the tricky parts.

    I think GPGPU will follow a similar trajectory. Right now, we're in phase one. Using a GPU for general purpose computation is such an incredible pain that only the most specialized applications are going to use GPGPU capabilities. High level languages haven't really evolved to take advantage of these capabilities yet. And yes, it's not as though computer scientists don't have appropriate abstractions that would make coding for GPGPU vastly easier (see the sketch after this comment). Eventually, GPGPU will become an optional part of the CPU. Eventually, high level languages (in addition to the C family, perhaps FORTRAN or Matlab or other languages used in scientific computing) will be extended to use GPGPU capabilities. Standards will emerge, or where hardware manufacturers fail to standardize, high level abstractions will sweep the details under the rug. When this happens, many more applications will begin to take advantage of GPGPU capabilities. Even further down the road, GPGPU capabilities will become bog standard, at which point we'll see an explosion of applications that need these capabilities for decent performance.

    Granted, the curve for GPGPU is steeper because this isn't just a matter of different instructions, but a change in memory management as well. But I think this kind of transition can and will eventually happen.
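    A minimal illustration of the stream model the parent is describing (a sketch of the programming style, not any vendor's actual API): the "kernel" computes each output element from its input with no dependence on its neighbours, which is exactly the property that lets a GPU run thousands of copies at once. Today you write the loop and the data movement by hand; the hoped-for abstraction is a compiler that maps code like this to the GPU the way compilers now emit FPU instructions for float.

        /* Stream-style computation: an element-wise kernel. On a CPU this
         * is a loop; on a GPU each iteration would be an independent
         * thread or fragment. The kernel itself is a made-up example. */
        #include <stdio.h>

        /* one output element from one input element, no shared state */
        static float kernel(float x) {
            return 2.0f * x + 1.0f;
        }

        int main(void) {
            float in[8] = { 0, 1, 2, 3, 4, 5, 6, 7 }, out[8];
            for (int i = 0; i < 8; i++)   /* the part a GPU parallelizes */
                out[i] = kernel(in[i]);
            for (int i = 0; i < 8; i++)
                printf("%g ", out[i]);
            printf("\n");
            return 0;
        }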
  • by pk69 ( 541206 ) on Thursday March 01, 2007 @07:26PM (#18200802) Homepage
    I laugh every day at the tags people assign to articles, but today I laughed the hardest with the tag "dickinabox" ...
  • Re:OOOoooo (Score:2, Insightful)

    by MrNaz ( 730548 ) on Thursday March 01, 2007 @07:49PM (#18201042) Homepage

    NOTE:

    The cost of building useful supporting electronics around a DSP capable of implementing a direct sampling receiver at 60MHz would be prohibitive

    Not the cost of the units, but the cost of doing anything useful with them. For a person NOT integrating the parts into mass-produced items, it's only suitable for doing something simple as a hobby, or for learning. I would *guess* that building anything to solve a practical problem would take an incredibly large amount of time and skill, both of which are valuable resources even if they are your own. The cost of parts is only the total cost if you consider your time to be worthless. Making a DSP output a nice spectrograph of the airwaves wandering past your house is one thing; making one that can perform underwater imaging is a different kettle of fish. Building something that can do that, and then writing the code for it, would not be a one-man job, and it would not be cheap.

    Lunch money for public high school over 10 years: $10,000

    College education: $100,000

    Ability to read: Priceless.
