The Impact of Memory Latency Explored
EconolineCrush writes "Memory module manufacturers have been pushing high-end DIMMs for a while now, complete with fancy heat spreaders and claims of better performance through lower memory latencies. Lowering memory latencies is a good thing, of course, but low-latency modules typically cost twice as much as standard DIMMs. The Tech Report has explored the performance benefits of low-latency memory modules, and the results are enlightening. They could even save you some money."
Ask a builder (Score:3, Insightful)
The difference between, say, Corsair Value Select memory and Corsair 1337 Ultra X2000 - the memory equipped with LCDs, heat spreaders, and a spoiler with metal-flake yellow paint that adds at least 10 horsepower - is going to be absolutely unnoticeable in the real world. Even benchmark scores will show little to no improvement.
Ricer RAM - you know, the PC equivalent of this crap [hsubaru.com] - is for overclocking. If you're not planning on overclocking it, you're paying too damned much.
Re:Insightful article (Score:3, Insightful)
Perhaps. That particular benchmark was for Far Cry at 800x600 with medium settings; the lowest fps was around 168 and the highest was 188, so a 20 fps difference.
They were using this video card: NVIDIA GeForce 6800 GT with ForceWare 77.77 drivers
However, if you look at the opportunity cost of buying this RAM when you have a weak video card and play at those resolutions, it would still be more worthwhile to just get a better video card. Even if you want to upgrade your RAM, it would be wiser to save that extra money and put it toward a 6600GT or something.
It would have been interesting if they did the test with an older video card as well, like a GeForce3 series.
The underestimated impact of latency. (Score:1, Insightful)
In the short-run, these tests help a person decide whether to buy low-latency RAM. But they provide little long-term insight into how much faster the entire system could be if low-latency were the norm and compilers, libraries, operating systems, and applications were re-optimized for low-latency, not high-latency, architectures.
Re:Ask a builder (Score:4, Insightful)
These products are not for people who want to achieve a usable level of performance, and as such they are not marketed at those crowds. They are for people who already have fast equipment but want more. I won't say this is a good or bad thing, as it is simply a hobby for most of these people. Just like import tuners: they may drive funny-looking cars, but it's their choice of hobby.
Re:Just stick a few blue LEDs on it... (Score:4, Insightful)
Guess what? That wicked dual-core CPU actually runs games slower than its single-core cousin. That brand-spankin' new video card that cost you $400 (or more)? I pay that much once every several years on my video card. The difference is that I don't care about squeezing out my maximum frames per second, because most people couldn't even detect the difference if the game didn't have an option to show the number in the corner of the screen like some veritable rating of their manhood (sorry for my gender bias on that). And that super ultra OHMYFUCKINGGODITMAKESMYHEADEXPLODEITSSOFAST low-latency RAM is giving you a performance boost of 2% over what I've got now.
I find it educational to read these reports so I can make informed purchasing choices. For that, I'm quite grateful. However, I find it kind of sad that the parent post is unsettlingly accurate in that the 'hardcore PC gamers' will shove this to the side for the ATI SXL 10G Super Elite XTRME Pro card next week. Witness what happens when PC gaming meets MTV-esque marketing.
Re:What about cache? (Score:3, Insightful)
Ok, you say, so move it off the chip. Well, the problem is that part of the reason the cache can be so fast and low-latency is that it's located on-die. If you move it off, you start to run into much harder speed limitations. Intel discovered that with their Celerons back in the PII days.
Real PIIs had 512KB of cache, but on separate chips. Because it was off-die, half chip speed was the best they could do. The Celerons only had 128KB of cache, but it was on the chip die and thus ran at full speed. Now, what you found was that if you overclocked a Celeron to the same bus and clock speed as a PII (for example, a 300 MHz Celeron ran on a 66 MHz bus; cranking that to 100 MHz made it run at 450 MHz, the same as a PII), it ran at least as fast, sometimes faster, despite the smaller cache.
Thus these days, on-die cache is what it's all about. Generally, the value they pick is where diminishing returns start to seriously kick in. You discover that throwing more cache at things generally doesn't result in that big a speed increase (servers are a little different).
So, unless someone figures out a better kind of RAM to use, we are stuck. DRAM is what we use for main memory already, and SRAM is too expensive to use very much of.
(can't have a subject that starts with $) $ (Score:3, Insightful)
Cache isn't some magical thing. It's simply RAM: SRAM, usually, which is why it's so fast (it doesn't have to waste power/time refreshing its contents). At the end of the day, it's just some very fast RAM. It sits between your CPU and the rest of your RAM, and uses its speed to "trick" the CPU into performing as if your main RAM were much faster than it is.
In my computer architecture course a while back, someone asked why, if cache is so fast, we don't just build computers with 100% SRAM memory. Our professor did some back-of-the-napkin calculations for fun: major $, and you have to include the extra space and cooling requirements, of course.
The other thing, of course, is the good old law of diminishing returns. Cache actually solves the problem VERY nicely. For most people/computers/applications, cache misses aren't that great of a problem, because most computer code lends itself to cache hits (a phenomenon called "locality"). Locality is WHY we have cache in the first place. In general most computing works very well with a tiny amount of very fast cache and a small amount of fast cache. Adding more eventually gets you to the point where you're not seeing much if any improvement. On most modern systems, we're at that point - at least as far as the market will bear.
Oh, and outside of the HPC world, there's no NEED for programmers to worry about memory caching issues. This isn't where most bottlenecks show up, and again, most general-purpose code lends itself very nicely to small amounts of cache. Compilers often help here, too. Most average programmers would make better use of their time analyzing the data structures and algorithms they use.