GDDR2 Emerging As A Real Standard
An anonymous reader writes "I noticed here that EE Times is reporting that the GDDR2 standard is finally becoming a reality. Both NVIDIA and ATI's latest chips offer support. ATI helped spearhead the initiative to develop the standard. The significance of this is great, since it may very well mean that every 18 months or so a new graphics memory standard will be released."
I'm sorry... (Score:5, Insightful)
GDDR2 SDRAM? What the hell is that supposed to mean? Sheesh. Why can't you just call it something like DDR3 or GDRAM or something simple like that?
Re:I'm sorry... (Score:1)
Re:I'm sorry... (Score:5, Informative)
GDDR2 SDRAM really means -> Graphics Double Data Rate 2 Synchronous Dynamic Random Access Memory
Re:I'm sorry... (Score:1)
Re:I'm sorry... (Score:2)
You must be a marketing tit. Please go away.
Regards,
YaG
Comment removed (Score:5, Insightful)
Re:What? (Score:5, Interesting)
No, because the low end is the bulk of the market, and there every penny counts.
"A new standard means the old one isn't..."
No, it just means that they get the benefits of a new standard for high-end, high-margin devices while still reaping the benefits of the prior standard as well.
"Or am i missing something?"
You are missing the fact that the bulk of graphics chips sold are at the low end. This low-end bulk is good for 18+ months, which is an eternity in the graphics business due to the rate of change (which still seems to be a doubling of performance every six to nine months). Standardization on this low end will allow lower prices while meeting the need for faster and more specialized RAM than is required in the more stable CPU markets. In addition, the standards will insulate all parties from lawsuits or patent claims, lending more stability to their ventures. Finally, it may herald a change from the bad old days where a great deal of R&D had to go into reinventing the wheel for memory, or into relying on exclusive vendors who might not even have the capacity when the need came. I'm thinking in particular of the year with the semiconductor fire that ratcheted up prices on certain graphics card vendors' high-end cards.
Even in this market standards are good.
Re:What? (Score:2)
according to the article (Score:5, Funny)
it says that GDDR3 is going to be the standard, not GDDR2 - which sounds like it has multiple different implementations.
(I know it is against the spirit of
Re:What? (Score:2)
PS. slashcode is a lame lump of shite - without this PS, it wouldn't let me post...! :-p
Number of memory suppliers (Score:5, Interesting)
last I remembered (Score:5, Informative)
On that point, why aren't they doing a QDR architecture? QDR is basically DDR but with separate, dedicated input and output pins that allow this kind of simultaneous read/write.
Granted, pin count is higher, but I think it would be better suited to the graphics people.
That, or I am not quite clear on the GDDR-n specs. heh. Or I am thinking about frame-buffer memory instead of texture memory (AFAIK the latter only needs to be continuously read, really fast) hmm...
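For a rough feel of what separate read and write buses would buy you, here's a back-of-the-envelope sketch; the 500 MHz clock and 256-bit width are made-up example numbers, not actual GDDR specs.

```python
# Rough peak-bandwidth comparison: a shared DDR data bus vs. a QDR-style
# part with separate read and write buses.  Illustrative numbers only.

CLOCK_MHZ = 500    # assumed interface clock
BUS_BITS = 256     # assumed data width per direction

def ddr_peak_gb_s(clock_mhz: float, bus_bits: int) -> float:
    """Shared bidirectional bus: 2 transfers per clock, read OR write."""
    return clock_mhz * 1e6 * 2 * bus_bits / 8 / 1e9

def qdr_peak_gb_s(clock_mhz: float, bus_bits: int) -> float:
    """Separate read and write buses (twice the data pins): reads and
    writes can run at the same time, so peak is simply doubled."""
    return 2 * ddr_peak_gb_s(clock_mhz, bus_bits)

print(f"DDR, shared bus : {ddr_peak_gb_s(CLOCK_MHZ, BUS_BITS):.0f} GB/s peak")
print(f"QDR, split buses: {qdr_peak_gb_s(CLOCK_MHZ, BUS_BITS):.0f} GB/s peak (read + write)")
```

Of course, the extra pins aren't free, which is exactly the cost tradeoff discussed below.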
Re:last I remembered (Score:5, Informative)
However, with the bus widths being used by GPUs today (128 and 256 bits), they really don't fit anymore. GPUs now manage the RAM accesses so that frame buffer access is shared with drawing etc. This means that the most important thing is RAM speed - with accesses for the frame buffer being sequential, the less time that takes, the more memory bandwidth is left for drawing.
This will become even more important once we have the very high resolution LCD monitors on the horizon - 3K x 2K pixel displays will require a LOT of memory bandwidth to keep them refreshed.
VRAM wasn't just dual port RAM (Score:1, Informative)
Also note that latency is a relati
Re:last I remembered (Score:1)
Since those times, video memory bandwidth has increased enormously but DAC requirements have not. A 32-bit 3k x 2k LCD at 60Hz will only consume 1.4GB/s. Even maxing out a 400MHz DAC, you top out at 1.6GB/s with 32-bit colour. This is well under 10% of a modern graphics memory subsystem's total bandwidth.
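A quick sanity check of those numbers (taking "3k x 2k" literally as 3000 x 2000 pixels at 32-bit colour):

```python
# Scanout (refresh) bandwidth, using the figures quoted above.

BYTES_PER_PIXEL = 4          # 32-bit colour

lcd_3k2k = 3000 * 2000 * BYTES_PER_PIXEL * 60    # 3k x 2k at 60 Hz, bytes/s
dac_max  = 400e6 * BYTES_PER_PIXEL               # maxed-out 400 MHz DAC, bytes/s

print(f"3k x 2k @ 60 Hz : {lcd_3k2k / 1e9:.2f} GB/s")
print(f"400 MHz DAC max : {dac_max / 1e9:.2f} GB/s")
```

That comes out to roughly 1.44 GB/s and 1.60 GB/s, matching the figures above.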
Re:last I remembered (Score:2)
The pin count is one of the biggest cost factors. Doubling the number of pins to separate input and output would be very cost-ineffective -- if you can afford to add that many pins, using twice as many I/O ports instead would be a much better solution, since it would double your peak read bandwidth.
Re:last I remembered (Score:1)
Many of the other pins are:
power pins
address pins
control pins
i/o timing sync pins
ground pins
no-connect pins (for improving signal quality)
so adding a few dozen pins (yes, a few dozen - the 256-bit data width is achieved with multiple chips) to a 460-pin BGA is hardly difficult.
Re:last I remembered (Score:2)
(And, BTW, a standard memory channel is 64 bits wide, data-wise. A 256-bit bus is achieved with 4 of those channels.)
Re:last I remembered (Score:1)
Re:last I remembered (Score:2)
This is only really important for the framebuffer. It simplifies scanning out to the screen if you don't have to contend with the other parts of the GPU. But most of the memory today is used for textures and streamable data (vertex lists, etc.), where VRAM wouldn't be as useful. nVidia is very proud these days that it uses the same memory for all three functions, but I wou
Re:last I remembered (Score:1)
Maybe... (Score:4, Insightful)
Maybe graphics card makers should hold out on GDDR2 and wait for GDDR3. People who buy high-end graphics cards want quality. Look at the GeForce FX. It's going to kill NVIDIA. I think NVIDIA and others (ATI) are going to really learn from the FX and make extra sure that what they come out with is real innovation, not a quick way to get back on top at the expense of their customers.
Re:Maybe... (Score:1, Informative)
Uhm read the article:
"Lee said Micron's GDDR3 chips are made with 0.11- micron processing, allowing speeds that could reach 700 MHz for a 1.5Gbit/s data rate. The device will sample next quarter, he said"
GDDR2 reaches 500MHz now; GDDR3 _could_ reach 700MHz sometime in the future. By that time GDDR2 will reach at least the same speeds. AFAIK the frequency goal for GDDR3 is 500MHz in Q3.
"Look at the GeForce FX. It's going to kill NVI
Re:Maybe... (Score:4, Interesting)
Why do you say that?
So the FX didn't exactly blow away the Radeon 9700 Pro like it was supposed to; it's still a very fast, very good card, capable of rendering anything a game throws at it for the next couple of years.
nVidia is very smart. They don't make very much money off the highest of the high-end market. Where they make most of their money is in that lower-mid range market, where they've traditionally marketed their "MX" products.
At the GDC nVidia was talking about implementing the full DX9 feature set in a card for $79. That's where they're going to make a killing.
I honestly don't think nVidia cares THAT MUCH if they don't have the absolute fastest card in every benchmark. Like any other company, they want to stick around for awhile and make some money.
Price-performance is VERY important in the market. That's why AMD is still around, despite the fact that P4's are undoubtedly faster now. I think people just see nVidia as being the king of the hill for awhile, and would like to see them taken down a notch.
The video-card market is very healthy, we have good competition, and the FX is definitely not going to kill nVidia. I think their strategy is right on.
Has anybody tried... (Score:5, Interesting)
I've always wondered this, since those two patterns are the ones I've fallen in and out of for the past few years.
I still think this is why console gaming is more mainstream, either way. With a console, you might not get the best quality in graphics, but hell, you pay $200-300 and the machine lasts 5 years, and you get quite a nice selection of quality games (that's really a bias, I started out on the NES...).
Re:Has anybody tried... (Score:2)
My rule is "upgrade your vid card when you can get double the performance for $100."
Doesn't keep me anywhere near the latest and greatest, but is good enough for me, and conserves those valuable beer tokens for the use that God intended 'em for.
Re:Has anybody tried... (Score:1)
Re:Has anybody tried... (Score:2)
GeForce4 MX400 for about $50 at Newegg.
Re:Has anybody tried... (Score:1)
My 21" Trinitron spoils me, I can't live with any less than that.
Re:Has anybody tried... (Score:1)
A person might value a newly released high-end gfx card more (in order to impress his friends for instance) than a person that only uses his/hers computer to write e-mails.
Re:Has anybody tried... (Score:3, Interesting)
"
Who's to say games ever 'expire'?
I still play a good game of nethack every now and then, and enjoy a lot of snes games. Of course, my main addiction is counterstrike (not the newest game, but not exactly old either), so ymmv.
Console games sell mostly because they 'just work'. no co
Re:Has anybody tried... (Score:1)
My SNES is still in working condition, albeit a tad yellow thanks to some weird effect of the plastic case aging. Super Metroid is in it right now. Punch Out is on my NES emulator a lot. Also procured a Genesis just for Gunstar Heroes. So yeah, old games aren't dead.
I just meant that these game systems go through a 5 year cycle before any pressure is put on the consumer to even consider upgrading.
I'm glad you point out the expire part. (Score:2)
OTOH, I can still play SMB3 off of my SNES Mario Allstars cart, and that's older than Doom by a couple of years. In PC gaming, games may not expire, but targeted architectures do. This classic interview [archive.org] contains some insight into this (Glide/Verite vs. OpenGL targeting).
Re:I'm glad you point out the expire part. (Score:2)
Really? (Score:2)
I don't.
Re:Really? (Score:2)
Oh. (Score:2)
Or your Windows XP has SB emulation and VGA emulation, etc?
I can't play DOOM any more than I can play Genecyst and get my Shining Force state files. Luckily my Sega Smash Pack on Dreamcast isn't on a platform that's a moving target.
Re:Oh. (Score:2)
It would be due to the fact that you completely forgot to mention you're running Linux. Good job.
I guess WinXP just blows then; I'm still running Win98 on my Windows box, and DOOM/DOOM II work perfectly there.
Uh-huh, that's part of the point. (Score:2)
The only way you can still play titles is to use Win98, which is 5 years old. At that point I might as well buy a new computer every 5 years and live with each one as its own gaming console. Even old DirectX games don't work on new DirectX.
Re:Uh-huh, that's part of the point. (Score:2)
This is a total non-sequitur. You said that DOOM won't run on modern hardware/software. I proved you wrong. You brought up Linux, to wit, the reason you couldn't run DOOM was because you were using Linux. I had assumed you were using some version of Windows up until that point.
Now I'm beginning to thi
Games die (Score:1)
Who's to say games ever 'expire'?
The internal battery used to power the SRAM that saves the state of the game, that's what. It will eventually run out of charge.
Re:Games die (Score:2)
Re:Games die (Score:1)
You don't need an actual physical Legend of Zelda cartridge and NES console to play the game
But if you don't have the cartridge within five meters of the computer you're emulating it on, the IDSA will come and kill you. Besides, most computers don't have TV out; a 27" living room TV is much nicer for multi-player split-screen video gaming than a 17" VGA monitor.
Re:Has anybody tried... (Score:3, Insightful)
if you are going to do that, you should also factor in the grief of trying to install a bleeding edge card with bleeding edge drivers.
Having wasted a lot of time and multiple re-installs, I now stick to "not quite bleeding, but still a bit bloodstained" edge products, where at least the drivers are mature.
GDDR2? (Score:4, Insightful)
Can anyone answer me that? What makes it special?
Re:GDDR2? (Score:4, Informative)
But the key that makes it worth the extra bucks is the fact that DDR-II delivers twice the external bandwidth of a standard DDR solution for the same internal frequency. The 1.8-volt device features a high-speed data transfer rate of 533Mbps that can be extended to 667Mbps for networks and special system environments.
Over the last year chip makers have released different DDR chips with increasing frequencies, like DDR266, DDR333 and DDR400. But there's a limit to how much higher it's possible to go, so instead they are trying to add another sort of "bus" inside the chip.
The reason they started producing DDR (vs. SDR) is that it's much easier to implement a double data rate (DDR) bus than it is to actually double the clock rate of a bus. So DDR lets you instantly double a bus's peak bandwidth without all the hassle and expense of a higher-frequency bus.
DDR-II is designed along the same lines.
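To put rough numbers on that, here is a minimal sketch assuming the usual n-bit-prefetch model (SDR fetches 1 word per core clock, DDR 2, DDR-II 4, with the external pins clocked correspondingly faster); the 200 MHz core clock and 64-bit channel width are example figures, not numbers from the article.

```python
# Peak per-channel data rate under the n-bit-prefetch model: the DRAM core
# runs at the same frequency, but each generation transfers more words per
# core clock on the external pins.  Example figures only.

CORE_CLOCK_MHZ = 200   # assumed DRAM core frequency
CHANNEL_BITS = 64      # one standard memory channel

def peak_gb_s(prefetch: int) -> float:
    """prefetch words per core clock -> prefetch transfers per clock on the pins."""
    return CORE_CLOCK_MHZ * 1e6 * prefetch * CHANNEL_BITS / 8 / 1e9

for name, prefetch in (("SDR", 1), ("DDR", 2), ("DDR-II", 4)):
    print(f"{name:6s}: {peak_gb_s(prefetch):.1f} GB/s per channel")
```

Same core frequency, twice the external bandwidth of plain DDR - which is the whole selling point.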
Re:GDDR2? (Score:1)
So i
Processors falling behind (Score:4, Insightful)
This really goes to show how humans are visual animals above all. I wonder how much more power could be squeezed out of processors if we were to use memory like this and wider buses...
Re:Processors falling behind (Score:3, Interesting)
I used to think that this was indeed interesting or even surprising, but when you look at how CPUs and GPUs (thanks, Nvidia, for making them equal via naming) have developed, it is not surprising, because evolution in the graphics market was slower for so many years while CPUs kept chugging along. Consider the long period where the most compelling feature of a video card was it could
Re:Processors falling behind (Score:3, Insightful)
Not really that interesting, quite trivial really; that's where the pressure is at the moment.
The bottleneck in 3D is still the graphics accelerators. There's not nearly as big a drive towards more power in the CPU market, simply because SMP and cluster solutions provide more bang for your buck.
I am a little surprised at the moment though, how come we haven't seen m
Re:Processors falling behind (Score:3, Interesting)
Until someone comes up with a radically new scheme of processing, these physical limitations will always be with us. That's why the Voodoo (3Dfx) scheme
Processors vs. GPU bus (Score:5, Insightful)
CPUs and GPUs have very different requirements from memory.
A CPU cares a lot about latency, because typical code has "random" accesses scattered with calculations in between. The same data and code areas are often accessed many times, and data are small (e.g. a Word document is small) while code may be quite large. That's why CPUs don't have enormous 256-bit buses (which have the same latency as a 64-bit bus).
A GPU performs "multimedia" calculations which typically involve serial access to memory, where caching can be of very little help. You cannot "cache" a whole texture set, and the code is of really trivial size (until now; maybe PixelShader 2.0+++ will change all that). Therefore a GPU needs serial access to huge areas of memory, involving items of similar size at regular intervals. That's why a GPU needs BANDWIDTH (not necessarily low latency, because once the calculation starts, latency is hidden inside the calculation loop).
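You can see the flavour of this on any CPU with a toy experiment: touching the same data sequentially is bandwidth-friendly, while touching it in a random order is latency-bound. A quick sketch (Python; the array size is arbitrary, and interpreter overhead narrows the gap, but the random walk still pays for its cache misses):

```python
# Toy demonstration: sum the same array in sequential vs. random order.
# Sizes are arbitrary example values.
import random
import time

N = 1 << 22                      # ~4 million elements
data = list(range(N))

seq_order = list(range(N))
rnd_order = list(range(N))
random.shuffle(rnd_order)        # random visiting order -> cache misses

def walk(order):
    """Sum the array elements visited in the given order."""
    total = 0
    for i in order:
        total += data[i]
    return total

for name, order in (("sequential", seq_order), ("random", rnd_order)):
    t0 = time.perf_counter()
    walk(order)
    print(f"{name:10s}: {time.perf_counter() - t0:.2f} s")
```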
Considering the above, the P4 is a "multimedia" design (much more like a GPU); that's why it was originally made to work with a very high FSB and RAMBUS (high bandwidth). Contrary to this, the AMD Athlon is a "generic" design which does not depend on huge bandwidth but on very low latency (hence the HUGE L1 cache). That's why the P4 needs HyperThreading: its long pipelines do not care a lot about latency, but they can cause a big bottleneck if they stall. Intel feeds them continuously by drawing instructions from 2 processes at once (so that the pipeline does not remain empty if one process is stalled on the front side bus or something...).
Anyway, I expect GPUs to drift slowly towards the generic CPU design, because the pixel shader language has become quite complicated, with long loops etc. Gradually this means that GPUs (esp. with DirectX 9) will start being compute-limited and not texture-fill-rate limited (anything over 2 GTexel/s is really absurd for typical screen sizes). This will probably become apparent with DOOM III.
P.
Re:Processors vs. GPU bus (Score:4, Interesting)
"anything over 2 GTexel/s is really absurd for
typical screen sizes"
Let's say the screen has 1 million pixels for simplicity (that's somewhere in between 1024x768 and 1280x1024). Let's say you really want smooth motion and target a framerate of 100fps. That means you need to produce 100 MPixels/s. At 2GTexel/s, that's 20 texels per resulting pixel. Now add a 2x overdraw (which is quite low I think) and you're left with 10 texels per resulting pixel.
Many additional effects, esp. refraction and reflection, need render-to-texture, i.e. you basically render (parts of) the scene twice, which obviously uses a lot of additional performance.
2GTexel/s doesn't sound so absurd anymore, does it?
WTF? (Score:3, Interesting)
I'm making a law right now (Score:5, Funny)
Re:I'm making a law right now (Score:2)
GDDR3! (Score:3, Informative)
Re:GDDR3! (Score:3, Insightful)
I think they only say that because GDDR3 is farther off into the future.
I've noticed once these things get closer to an actual release date, these people tend to take off their rose colored glasses. My money says there won't be much of a difference between the two different memory types when they're actually released. Not enough to
GDDR2 (Score:5, Funny)
Re:GDDR2 (Score:2)
Strange the way these things work out...
Doubling the data rate of Dance Dance Revolution (Score:1)
OK, so I started on typical 150 bpm songs such as "Hot Limit". I then tried increasing the bus speed to 200 BPM with the "Paranoia" songs. But why am I having so much trouble passing songs once I've doubled the data rate from 150 bpm to 300 bpm with "Max 300"?
Why? (Score:1)
bears repeating (Score:1)
But the key that makes it worth the extra bucks is the fact that DDR-II delivers twice the external bandwidth of a standard DDR solution for the same internal frequency. The 1.8-volt device features a high-speed data transfer rate of 533Mbps that can be extended to 667Mbps for networks and special system environments.
Over the last year chip makers have released different DDR chips with increasing frequencies, like DDR