Overclocked Radeon Card Breaks 1 GHz
dacaldar writes "According to Yahoo Finance, noted Finnish over-clockers Sampsa Kurri and Ville Suvanto have made world history by over-clocking a graphics processor to engine clock levels above 1 GHz. The record was set on the recently-announced Radeon® X1800 XT graphics processor from ATI Technologies Inc."
A bit presumptuous? (Score:5, Insightful)
I think that's going a bit far. Good for them and everything, but world history? V-E day, Einstein's 1905, Rosa Parks refusing to give up her seat on the bus- these events impact world history (sorry for the all-Western examples); making a chip oscillate faster than an arbitrary threshold does not.
World history? (Score:3, Insightful)
Uh, it's cool and all, but not likely to be in the history books. (Easy on that hyperbole, will ya?)
Re:One wonders... (Score:5, Insightful)
Since their product is still mostly vapor (you can't buy it yet), and nVidia currently owns them in the high-end market because ATI's product is so late, one has to grasp at straws to try to look l33t in the eyes of potential purchasers.
Wish they'd spend less time yapping and more time actually putting product on the shelves.
Nice overclock in any case, but ATI putting out a press release about it is kinda silly.
Historical? (Score:1, Insightful)
What's the point of these tests? (Score:4, Insightful)
And as for being the first people in the world to do this... the chances of that are small. I'm sure there are people at ATI (and other companies) who have done far more bizarre things, but didn't announce it to the world.
Not for the weak (Score:5, Insightful)
It seems we may have a ways to go before it can be done with standard air cooling. I actually didn't think that operating temperatures for these processors went down to -80C.
Re:What's the point of these tests? (Score:3, Insightful)
Really, I don't think it's interesting whatsoever. It's like testing the strength of various bulletproof glass samples at a temperature of -100 C. The fact is, bulletproof glass is not used in such environments so the test gives no useful information.
Besides, since when are geeky pursuits practical?
I can't believe you're being serious. My geeky pursuits pay for my house.
Re:GPU vs. CPU Speed (Score:5, Insightful)
To answer both questions: graphics are trivial to parallelize. You know from the start that you'll be running essentially the same code for every pixel, and each pixel is essentially independent of its neighbours. So doing one or twenty at the same time is mostly the same, and since all you need is to make sure the whole screen gets rendered, each pipeline just grabs the next unhandled pixel. No synchronization difficulties, no nothing. Since pixel pipelines don't stall each other syncing, you effectively have a 24 GHz processor in this beast.
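The pixel-independence point above can be sketched in a few lines. This is a toy illustration, not how a real GPU pipeline works; the `shade` function, the framebuffer size, and the worker count are all made up for the example:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 8  # toy framebuffer size (illustrative only)

def shade(pixel):
    # Toy per-pixel "shader": its result depends only on this pixel's
    # own coordinates, never on a neighbour's output, so any pixel can
    # be handled by any pipeline in any order.
    x, y = pixel
    return (x * 31 + y * 17) % 256

pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]

# Serial reference rendering.
serial = [shade(p) for p in pixels]

# "Pipelines" grab pixels with no locks and no ordering constraints;
# the only requirement is that every pixel eventually gets done.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, pixels))

assert parallel == serial  # same image regardless of scheduling
```

The assertion holds precisely because no pixel reads another pixel's result; that independence is what lets a GPU scale almost linearly with pipeline count.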
On the other hand, you have an Athlon 64 X2 4800+ (damn, that's a needlessly big, numbery name). It has two cores, each running at 2.4 GHz (the 4800 is AMD's performance rating, though it does happen to match 2.4 GHz x 2 cores). However, using two processors safely for general-purpose computing means handling lots of timing trouble. Even if you do have those two processors, a lot of time has to be spent making sure they stay coherent, and the effective performance is well below twice that of a single processor at twice the clock speed.
So, if raising the clock speed is easier than adding another core, and gives enough of a performance benefit to justify it without the added programming complexity and errors (there was at least one privilege-escalation exploit in Linux that involved race conditions in kernel calls, IIRC), why go multiprocessor earlier than needed? Of course, for some easily parallelized problems people have been using multiprocessing for quite a while, and actually doing two things at the same time is also a possibility, but not quite as directly useful as in the graphics-card scenario.
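The coordination cost described above can be made concrete with a shared counter. This is a minimal sketch (the `bump` function and the counts are made up): unlike independent pixels, a read-modify-write on shared state must be locked, and that locking is exactly the overhead that keeps two cores from delivering twice the performance:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    # Each increment is a read-modify-write on shared state, so it has
    # to hold the lock -- the coordination the pixel case never needs,
    # since pixels share nothing with each other.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 20_000  # correct only because every update was locked
```

Drop the lock and the two threads can interleave their read-modify-write steps, losing updates; that class of interleaving bug is the same kind of race condition the kernel exploit mentioned above relied on.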
Re:O RLY? (Score:3, Insightful)
A Fraud! A Sham! A Scandal! (Score:2, Insightful)