AMD Breaks 1GHz GPU Barrier With Radeon HD 4890
MojoKid writes "AMD announced today that they can lay claim to the
world's first 1GHz graphics processor with their ATI Radeon HD 4890 GPU. There's been no formal announcement yet of which partners will be selling the 1GHz variant, but AMD does note that Asus, Club 3D, Diamond Multimedia, Force3D, GECUBE,
Gigabyte, HIS, MSI, Palit Multimedia, PowerColor, SAPPHIRE, XFX and others are all aligning to release higher performance cards." The new card, says AMD, delivers 1.6 TeraFLOPs of compute power.
It Was Epic (Score:5, Funny)
AMD Breaks 1GHz GPU Barrier
I was diligently working at XYZ Corp a few buildings down when Incident One happened in their lab. At first, I was just sitting in my cubicle when suddenly we felt a severe shuddering of space & time around us. Then a few seconds later everyone heard a loud "Ka-BOOM" and everyone stood up to see what was going on outside. The buildings directly adjacent to the AMD lab had all their windows blown out and every car alarm within a square mile was going off. Some scientists with their hair blown straight back and carbon scoring randomly on their faces and white lab coats were seen climbing out of the rubble of AMD's R&D building. They immediately began dusting themselves off, high-fiving each other and patting each other on the back, laughing and ecstatic. Then they headed towards the liquor store down the street to pick up some champagne. Shortly after, it was discovered that 1GHz is the frequency at which æther vibrates when it is at rest, so once you pass it, you leave a wake of æther behind your time cone. Roger Penrose and Stephen Hawking are due to give a speech at "GPU Ground Zero" this week; I hope to make it.
If I were working marketing for AMD, I would be pointing out how switching from base ten to base eleven, twelve, thirteen, etc. provides a theoretically unlimited supply of newsworthy advertisements in broken barriers. "We just need to make it to 2,357,947,691 hertz and we'll be the first to claim we've broken the 1 GHz (base-11) barrier! Where the hell was the report that we broke the base-9 barrier last year?!"
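For what it's worth, the arithmetic in the joke checks out. A quick sketch (plain Python, purely illustrative) of where these redefined "barriers" would land:

    # "1,000,000,000" interpreted as a base-N numeral equals N**9 in plain old base ten.
    for base in range(9, 14):
        print(f"base-{base} 'GHz barrier': {base ** 9:,} Hz")

    # base-9 works out to ~387 MHz (broken long ago), base-11 to 2,357,947,691 Hz.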
Re: (Score:2)
Re: (Score:2)
Shortly after, it was discovered that 1GHz is the frequency at which æther vibrates when it is at rest, so once you pass it, you leave a wake of æther behind your time cone.
Wow! And here I thought it was 1.21GHz at 88 MPH.
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2, Funny)
jiga what?
Re: (Score:2)
"Giga" in some countries is actually pronounced "jiga". (History says that is how "Giga" is pronounced everywhere except the US, but that's debatable). Thus, 1.21GHz would be an accurate figure in this article.
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:3, Funny)
It doesn't matter what base you use, AMD owns that achievement.
According to AMD top researchers, whether it was base-9, base-10, or base-11 doesn't matter. According to AMD,
"All your base are belong to us."
AMD CPU too (Score:2)
Didn't AMD break the 1GHz desktop CPU "barrier" too? ;)
Re:AMD CPU too (Score:5, Informative)
Digital broke that with the DEC Alpha (was it DEC at that time?). It wasn't popular, but it was a desktop CPU for high-end workstations.
Re:AMD CPU too (Score:4, Interesting)
Sorry. It was Compaq who owned the Alpha at that time. It was still DEC who designed it though.
Comment removed (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
No, AMD don't have a bad track record, you're right; however, ATI do. I can't comment on them now though.
I was in tech. support for 7 years from 2000 to 2007 and the fact is that ATI drivers were consistently crap. Our team supported over 10,000 systems, ranging from low end desktops, to high end workstations and big high end laptops to small netbook type systems.
Time and time again the only cards that gave us driver issues were ATI's; it was absolutely horrendous how bad ATI's drivers were, they've simply nev
Re: (Score:2)
Re: (Score:2)
I would say that your customers need to go elsewhere since you aren't staying up to date (and it appears that's what you should be doing for your line of business!). What you're describing is no longer true. Not only is NVidia having a hard time with their drivers, their cards that are a generation or two behind are dying left and right. If you had bothered to take notice, which you clearly haven't, ATI's drivers have greatly improved since AMD became their new overlords.
The only thing you have correct is t
Re: (Score:2)
Re: (Score:2)
Uuuhhh....you DO realize you read my post EXACTLY ass backwards, right?
Nope, just merely pointing out that you aren't up to date on what's what.
Getting NVidia brand new is fine, if you want to replace your card in two years, which a lot of people do not want to do. Look at the failure rates now on the 8 series and 9 series: if they're using a certain chip, they have a high failure rate. I see people on gaming forums complaining about it all the time, going, "this game killed my graphics card!"
And to go one step back, in Vista's early life time, NVidia drivers took up a
Re: (Score:2)
Re: (Score:2)
I always troll the ATI forums
Heh heh.. I assume you meant trawl but it kinda works with either word ;)
Re: (Score:2)
I have bought several ATI cards over the years, and not once could I get the drivers that came with the card to install. I always had to download newer ones and install parts of those, reboot, then try to install the ones that came with it and get a little more to install, reboot, then try the new ones again, and they would bitch that I needed to install Y to be able to install X, but when I tried to install Y it required X, until I manually installed the card as a generic VGA card and then manually instal
So this means... (Score:5, Funny)
one will finally have a graphics card capable of playing Duke Nukem Forever.
Oh wait...
Re: (Score:2, Funny)
Why is it harder on GPUs than CPUs? (Score:5, Interesting)
Why is it harder to raise the clock frequencies on GPUs than CPUs? Is more code in use at the same time per unit area, or?
Re: (Score:2)
Re: (Score:3, Insightful)
Err... It's not that black and white, you can't just say that GHz != performance. If you take a card and raise its clock, you'll usually get more performance. If you raise memory speed you'll usually get more performance. The only time you won't is when the one is bottlenecking the other.
All we've learned from the CPU wars is that between two different architectures, the faster one isn't necessarily the one with more GHz.
Re:Why is it harder on GPUs than CPUs? (Score:5, Informative)
Heat. Because of the form factor, you can't put a massive heatsink on a graphics card, certainly not the kind that you see on high end desktop CPUs.
GPUs are also generally a completely different architecture than a CPU... they're usually massively parallel and optimized for working with enormous matrices, whereas a CPU is significantly more linear in its operation, and generally prefers single variables.
Re: (Score:2, Interesting)
Yeah, you can't put the exact same heatsink on them, but take a look at the Accelero S1 Rev. 2 at http://www.arctic-cooling.com/catalog/product_info.php?cPath=2_&mID=105&language=en [arctic-cooling.com]
Even putting a 120mm fan on it doesn't cover the entire fin area. http://www.silentpcreview.com/article793-page5.html [silentpcreview.com]
Yeah, with the fan it'll be a 3-slot solution, and yeah, it only weighs half as much as a high-end CPU heatsink, but then again that is not their biggest GPU heatsink.
The heaviest solution on AC's site is the
Re:Why is it harder on GPUs than CPUs? (Score:4, Interesting)
Pity there isn't a GPU socket on the motherboard the same as the CPU socket. Then we COULD use those big honking CPU cooling solutions (or some derivative of them), provided the case were designed to accommodate the board. You could also get high speed runs between memory (perhaps it could have its own bank), and the CPU.
Pity some CPU maker couldn't come along, buy a GPU maker, and make something like this.
(Of course, existing GPU solutions in slots are MUCH easier to upgrade, which is something against this sort of solution, unless they come out with a form factor that combines chip + cooling solution, similar to the old Slot 1/Slot A.)
should be in an HTX slot not a PCI-e one (Score:2)
should be in an HTX slot not a PCI-e one
Re: (Score:2)
of course existing GPU solutions in slots are MUCH easier to upgrade, which is something against this sort of solution, unless they come out with a form factor that combines Chip+Cooling solution (similar to the old Slot1/A)
You're not gonna believe this dude, but someone beat you to the idea of a slotted GPU. Sorry. =[
http://www.legitreviews.com/images/reviews/378/ati_radeon_x1950.jpg [legitreviews.com]
They put all the pins on the bottom in such a way that it fits into a modular slot on the motherboard and even comes with built in cooling. =]
Re: (Score:2)
Pity there isn't a GPU socket on the motherboard the same as the CPU socket. Then we COULD use those big honking CPU cooling solutions (or some derivative of them), provided the case were designed to accommodate the board. You could also get high speed runs between memory (perhaps it could have its own bank), and the CPU.
Not a bad idea, though discrete cards today have dedicated memory for the GPU, with a bus custom designed for that card. Not expandable, but high performance. The connection to main memor
Re: (Score:2)
(that was his point)
Re: (Score:2)
Something that I don't see other posters mention is that the design of many parts of a CPU is hand-tweaked down to the transistor level exactly for this purpose: low heat, high frequency. GPUs are designed at a larger scale, which is logical if you remember that, excluding the cache, a CPU is much simpler (in transistor count) than a GPU, while GPU generations come much more often and differ more from one to the next. So, you have a fraction of the CPU design cycle to incorporate a radically d
Re: (Score:2)
I've actually had larger heatsinks on my GPUs than my CPUs in recent years, with those double-height graphics cards taking up two back-plate slots, and that was with a 2.83GHz quad core when it was high end! That said, the heatsinks often seem to be on the wrong side of the card in most motherboards/cases; that is, they're on the bottom, and of course heat rises, so presumably it's more that than the actual size of the heatsinks? They seem to get round this by creating those heat tunnels that try to lead the h
Re: (Score:2)
GPUs are a little more CISCy.. Since the cycle time is constrained to be as slow as the slowest operation that must complete in one cycle, it means that it's a bit harder to cut down on cycle time.
Re: (Score:3, Interesting)
Re: (Score:2)
Right- it's a design choice. Rather than incredibly simple micro-ops that the real instructions get translated to, or instructions spanning multiple clock cycles, they've chosen to keep the per-core implementation much simpler. That lets them pack more of the simple cores on the chip, getting additional parallel processing at the cost of per-core optimal performance- which is fine, because these are things running on massively parallel problems.
CISCy was perhaps the wrong choice, but it's valid in a sense-
Re: (Score:2)
> "Why is it harder to raise the clock frequencies on GPUs than CPUs?"
Speed costs money...how fast 'ya want to go?
Re: (Score:3, Insightful)
GPUs have recently become massively parallel -- not as much need to go too fast in overall clock speed.
Re: (Score:2, Informative)
Re: (Score:2)
I think it has to do with the massively parallel operations. You can't pipeline stuff as far. Of course, I'm just guessing.
Basically, due to the parallelization it's more efficient to add more streams/'processors' than to ramp up the overall speed of the system - for example, the referenced 4890 has 800 of them.
In order to have all the stream processors work, you might have to be a bit more conservative with your timing.
Re: (Score:1)
That can't be it. Graphics cards can have vast pipelines. Pipelines' main problems are with branches, and graphics cards don't need to be able to branch.
Re: (Score:2)
Re: (Score:2)
This site [codinghorror.com] suggests a couple possibilities.
A: A GPU had, until fairly recently, only a single slot's worth of height for its cooler. Even with 2 slots, it has less room for cooling than the CPU, where weight actually matters more than size.
B: Transistors. The site dates from 2006, but mentions that my Core 2 Duo has ~291 million transistors. An 8800 GTX has 680M, and my research shows that the 4890 this review is about has 959 million. Even a Core 2 Quad is 582M, and we know they cost a bit more for a given speed rating. A GT200 is liste
Re:Why is it harder on GPUs than CPUs? (Score:5, Informative)
You have so much data being churned around. The high end GPUs have 240+ stream processors, compared to a handful for a mobile phone. Then there is the constant punting of video data from the VRAM chips to the LCD screen (width x height x RGB channels x bits/channel x refresh rate in Hz). VRAM is like standard RAM except that there is a special read channel which allows whole rows of memory to be read out for the display at the same time the GPU is reading/writing it. It would be possible to
raise the clock frequency, but they would need a larger heatsink. If you visit the overclocking websites, you will see some of the custom water cooling systems that they have. Early supercomputers like Cray used Fluorinert [wikipedia.org].
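As a rough back-of-the-envelope illustration of that scan-out traffic (a quick Python sketch; the 1920x1200 at 60Hz display figures are assumptions for illustration, not numbers from the comment):

    # Approximate scan-out bandwidth: width * height * channels * bytes per channel * refresh rate.
    width, height = 1920, 1200          # assumed display resolution
    channels, bytes_per_channel = 3, 1  # RGB at 8 bits per channel
    refresh_hz = 60

    bytes_per_second = width * height * channels * bytes_per_channel * refresh_hz
    print(f"~{bytes_per_second / 1e6:.0f} MB/s just to keep the screen refreshed")

And that is on top of whatever the GPU itself is reading and writing while rendering.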
apples to apples (Score:2)
I have an Intel quad core 2 duo, a Q6600 I think.
How many TeraFLOPS is that?
Re: (Score:2, Informative)
Re:apples to apples (Score:5, Funny)
Re: (Score:2)
Even quad-core x86 CPUs are in the 10s of GigaFLOPS.
CPUs have to do a lot of integer ops, and have to be good at everything. GPUs simply have to crunch a lot of Floating Point numbers,
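To put a rough number on the Q6600 question above, here is a peak-throughput estimate (a Python sketch; the per-cycle FLOP counts are assumptions about SSE issue width and the GPU's multiply-add rate, not measured figures):

    # Peak single-precision GFLOPS ~= cores * clock (GHz) * FLOPs issued per core per cycle.
    def peak_gflops(cores, clock_ghz, flops_per_core_per_cycle):
        return cores * clock_ghz * flops_per_core_per_cycle

    # Q6600: 4 cores at 2.4 GHz, assuming 8 SP FLOPs/cycle (4-wide SSE add + 4-wide SSE mul).
    print(f"Q6600   ~ {peak_gflops(4, 2.4, 8):.0f} GFLOPS")

    # HD 4890: 800 stream processors at 1 GHz, 2 FLOPs/cycle each (one multiply-add).
    print(f"HD 4890 ~ {peak_gflops(800, 1.0, 2):.0f} GFLOPS (= 1.6 TFLOPS)")

So tens of GFLOPS for the CPU versus a theoretical 1.6 TFLOPS for the card, which is exactly the gap the parent is describing.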
Re: (Score:2, Insightful)
Modern GPUs including every single Nvidia GPU since the G80 series has had a full integer instruction set capable of doing integer arithmetic and bit operations.
CPUs aren't designed to be good at everything, they're designed to be exceedingly good at executing bad code, which is the vast majority of code written by poor programmers or in high level languages.
You can write code for a CPU without worrying specifically about the cache line size, cache coherency, register usage, memory access address patterns a
Dear AMD, intel, nVidia, etc (Score:4, Insightful)
As you may have seen from the sales of netbooks and low-power computers, the future is... wait for it... low-power devices!
Where are the 5W GPUs? Does the nVidia 9400M require more than 5W?
Re: (Score:2)
Even for desktops, I'd like to see more of those. Let's say below 20 W, so a not-too-massive passive heatsink will do.
I'm quite happy with the performance of my NVidia 6800 GT, and it needs about 50W at full usage. With the latest chip technology (40 nm anyone?), the same performance should be possible with much less power consumption.
Re: (Score:2)
Radeon HD 4350 (55nm) is ~20 W and I think should be somewhat better than a 6800.
Re: (Score:2)
Re: (Score:2, Flamebait)
Does the nVidia 9400M require more than 5W?
Google is your friend [justfuckinggoogleit.com]
The GeForce 9400M claims a TDP of only 12 W. [tomshardware.com]
Re: (Score:2)
So "only" about as much power as a hard drive [ixbtlabs.com].
Re: (Score:2)
It's a big step in the right direction. I had been hoping to answer the other question but it looked like it was going to be too hard to find information on an embedded GPU core (like for cellphones and stuff.) I wonder what's in the GP2x Wiz [dcemu.co.uk]
Re: (Score:2)
uhhh....wikipedia?
Specifications
* Chipset: MagicEyes Pollux System-on-a-Chip
* CPU: 533MHz ARM9 3D Accelerator
* NAND Flash Memory: 1 GB
* RAM: SDRAM 64 MB
* Operating System: GNU/Linux-based OS
Re: (Score:2)
>Where are the 5W GPUs?
Intel integrated graphics
Re: (Score:2)
Yes, when you offload the entire thing to the CPU and ignore even the hardware T&L feature we've had since the GeForce 2 era, it goes down to 5 watts.
Even Apple couldn't stand their junk and switched back to real GPUs, right down to the "non pro" laptops.
Re: (Score:2)
I wouldn't call it junk. My X3100 plays ioquake3 just fine.
Re: (Score:2)
So in the next few months, we'll be seeing mobile chipsets from both companies (Nvidia's Tegra and Qualcomm's Snapdragon) that will have scaled-down tech capable of handling HD video and impressive 3D graphics on embedded devices.
Re: (Score:2)
Re: (Score:2)
As you can see from the pictures of the massive heatsink (it covers the entire board), this is NOT a low power device,
and until there is a market for laptop gamers wanting 60fps and millions of polygons, specialized cards/chips like this will be found only on render farms, gamer desktop rigs and graphics workstations - which is their intended market anyway.
You generally do not get high performance with an economical product, so, for my car analogy, I will say that a Pontiac Vibe that gets 35 miles to the gallon is n
Re: (Score:2)
Soon, not just gamers but ordinary users may need far higher "FPS" than today. 3D stuff (200Hz), artificial 3D, massive amounts of transcoding, 12-bit-per-channel video, and 2K (or even 4K) are all making their way to the average home user. Slowly but surely. These things were all pro high-end studio stuff just a few years ago.
For example, Apple is still testing a technology which scales the desktop to arbitrary levels of DPI. It is there, embedded in the core of the OS, but not stable or complete yet. To display such a desktop on a
Re: (Score:2)
Power consumption? (Score:3, Interesting)
No mention of power consumption or heat dissipation. My PC is already a radiator and in the summer fights with my AC.
I am interested in the computing power; 1.6 teraflops is no small number, even if it is single precision.
Re: (Score:3, Insightful)
If you want TFLOPS, try the 4870x2 at 2.4 TFLOPS, or Nvidia's Tesla (http://en.wikipedia.org/wiki/NVIDIA_Tesla) series, made just for GPGPU, which reaches over 4 TFLOPS.
Re: (Score:2)
This is why I am going to literally make my next PC a hot water heater.
Re:Power consumption? (Score:4, Funny)
Personally, I'd recommend you make it a cold water heater, and get more bang for your buck!
how's ATI driver quality and performance on linux? (Score:1)
I'm a long-time Nvidia user because of good driver support on Windoze and Linux. I would love to give ATI a try but i've read a lot of negative things about driver quality in Linux. Granted, that was some time ago and things may have changed today. I'd be interested to hear about other slashdotters' experiences using today's ATI hardware + drivers under Linux/X.
Re: (Score:1)
Driver is fine (finally).
Re: (Score:2)
ATI drivers are great in Linux
*Punches fist in air* (Score:2)
Re: (Score:1)
Re: (Score:2)
I think that was the part that was meant to be funny, but my 8800 has gotten a 5.9 on that test for over a year now. Isn't it time we moved past the 'Vista is slow' thing?
Re: (Score:2)
FLOPs/Hz (Score:2)
And.... (Score:2, Informative)
Re: (Score:3, Interesting)
I'm also interested in your "slower than a GTX 285" assertion. I just looked at some benchmarks, and Xbit labs has an overclocked 4890@1GHz [xbitlabs.com] beating the tar out of the 285.
uhhh (Score:5, Funny)
Wait, let me get this straight. Graphics card manufacturers are actually attempting to make their graphics cards perform better? Why was I not informed of this before???
"Barrier" (Score:3, Insightful)
You keep using that word. I do not think it means what you think it means.
Disclaimer .... (Score:2)
Note: Damage caused by overclocking AMD's GPUs above factory-set overclocking is not covered by AMD's product warranty, even when such overclocking is enabled via AMD software.
What about a real revolution? (Score:2)
Offer the card at the same price, down to the cent, along with a well-written driver for Mac Pros and, even more miraculously, for last-generation G5s (Quad/Dual Core).
Open Firmware, endianness, Altivec, non-standard interface (???), all excuses gone. If anyone wonders what I'm talking about, just watch this card's price when (if!) it ships for Macs. You will understand the comedy going on. In PowerPC times, we had some sort of excuse like "Firmware is hard to code", "drivers man, they can't code for PowerPC" etc. Now all e
Asymptotic, my ass (Score:4, Funny)
Re: (Score:2)
I may not completely understand graphics cards, but
I think in this case clock cycles actually *DO* mean something.
Re: (Score:2)
A 3GHz P4 is faster than a 2.6GHz P4
A 3GHz core 2 is faster than a 2.6GHz core 2
A 1GHz R700 is faster than an 800MHz R700
Anyway, the R700 (Radeon 4xxx) series has been very good, mostly equaling or beating Nvidia's current lineup at similar prices.
Re: (Score:3, Informative)
How about the fact that it runs each instruction on 800 pieces of data at once? This isn't a 1 GHz one, two, four, or even 16-way chip. It's processing up to 800 pieces of data at once, and its clock for doing that ticks every billionth of a second. You're absolutely right, the clock speed by itself means nothing. The clock speed times the amount of work done per clock does mean something. If you raise either without lowering the other, you raise the overall amount of work the chip can do.
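A quick sketch of that "clock times work per clock" point (Python; the 850 MHz figure is my recollection of the reference 4890's stock core clock, so treat it as an assumption):

    # Peak throughput scales with both the clock and the amount of work done per clock.
    def tflops(stream_processors, flops_per_sp_per_clock, clock_ghz):
        return stream_processors * flops_per_sp_per_clock * clock_ghz / 1000.0

    print(tflops(800, 2, 0.85))  # ~1.36 TFLOPS at an assumed 850 MHz stock clock
    print(tflops(800, 2, 1.00))  # 1.6 TFLOPS at the 1 GHz "barrier" clock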
Re: (Score:1)
There's nothing stopping you from putting 6 of these in a single system. Even better, go with 4 4870x2s for around 12TFlops
Re: (Score:3, Informative)
Re: (Score:2)
Well, up to 42 anyway...
Re: (Score:2)
Great for ASCI Red back in 1996. Now, can I buy ASCI Red in a size that fits on a single PCI-e card and costs less than $300?
Re: (Score:2)
Well, if you've got the money, you can have 180 GigaFLOPS (32-bit) or 90 GigaFLOPS (64-bit) right now, on a PCI-e card.
http://us.fixstars.com/products/gigaaccel/ [fixstars.com]
It is Cell powered, as you may guess. There is also mention of "720 GF computing power", which I don't even dare to think about. I guess that's when you combine 4 of them. Oh, just $6100 each :)
Re: (Score:2, Informative)
Re: (Score:2, Insightful)
Also, 1 GHz is the core speed without overclocking.
False. It's overclocked all right, it just doesn't have to be overclocked by users or third-party manufacturers to run at 1 GHz. From their press release:
Nine years after launching the world's first 1 GHz CPU, AMD is again first to break the gigahertz barrier with the factory overclocked, air-cooled ATI Radeon(TM) HD 4890 GPU -
Re: (Score:3, Interesting)
I'm pretty sure that word doesn't mean what you think it means. "Overclocked" means "our reliability people don't think this is smart, but it might work for you." In this case, you get a part that may or may not die before you expect it to, it might not last much beyond the warranty, it might have non-standard cooling to enable an operating window that Reliability can't assume (say they model frequency shiftin
Re: (Score:3, Interesting)