Hardware Technology

The Impact of Memory Latency Explored

EconolineCrush writes "Memory module manufacturers have been pushing high-end DIMMs for a while now, complete with fancy heat spreaders and claims of better performance through lower memory latencies. Lowering memory latencies is a good thing, of course, but low-latency modules typically cost twice as much as standard DIMMs. The Tech Report has explored the performance benefits of low-latency memory modules, and the results are enlightening. They could even save you some money."
This discussion has been archived. No new comments can be posted.

  • Ask a builder (Score:3, Insightful)

    by Dragoon412 ( 648209 ) on Wednesday November 02, 2005 @12:30PM (#13932901)
    Seriously, this has been well known amongst the gaming PC builder crowd for a long time. Most of them, anyway; unfortunately there's still that level at which people know enough to put the PC together, but not enough to tell you what any of the numbers mean.

    The difference between, say, Corsair Value Select memory, and Corsair 1337 Ultra X2000 - the memory equipped with LCDs, heat spreaders, and a spoiler with metal-flake yellow paint that add at least 10 horsepower - is going to be absolutely unnoticeable in the real world. Even benchmark scores will show little to no improvement.

    Ricer RAM - you know, the PC equivalent of this crap [hsubaru.com] - is for overclocking. If you're not planning on overclocking it, you're paying too damned much.
  • by mindaktiviti ( 630001 ) on Wednesday November 02, 2005 @12:32PM (#13932926)
    800x600? Won't you already be getting >100 FPS in most games anyway?

    Perhaps. That particular benchmark was for Far Cry at 800x600 with medium settings; the lowest fps was around 168 and the highest 188, so about a 20 fps (roughly 12%) difference.

    They were using this video card: NVIDIA GeForce 6800 GT with ForceWare 77.77 drivers.

    However, if you look at the opportunity cost of buying this RAM when you have a bad video card and play at those resolutions, you'd still be better off just getting a better video card. Even if you want to upgrade your RAM, it would be wiser to save that extra money and put it toward a 6600GT or something.

    It would have been interesting if they did the test with an older video card as well, like a GeForce3 series.

  • by G4from128k ( 686170 ) on Wednesday November 02, 2005 @12:33PM (#13932931)
    These tests underestimate the performance impact of latency because they are conducted using software optimized over the years for the high-latency realities of current-day memory architectures. CPU clock speeds have been outstripping RAM clock speeds for about 15 years. Software developers have spent years optimizing their code to mitigate the impacts of latency.

    In the short-run, these tests help a person decide whether to buy low-latency RAM. But they provide little long-term insight into how much faster the entire system could be if low-latency were the norm and compilers, libraries, operating systems, and applications were re-optimized for low-latency, not high-latency, architectures.
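
    A rough, purely illustrative sketch (not from the article or the comment) of the kind of latency-hiding optimization described above: loop blocking/tiling in C, with arbitrary matrix and tile sizes.

        #include <stdio.h>

        #define N     1024
        #define BLOCK 64   /* picked so one tile fits comfortably in cache */

        static double src[N][N], dst[N][N];

        /* Walk the matrix one cache-sized tile at a time, so every line
           fetched from slow DRAM is reused before it gets evicted. */
        static void transpose_tiled(void)
        {
            for (int ii = 0; ii < N; ii += BLOCK)
                for (int jj = 0; jj < N; jj += BLOCK)
                    for (int i = ii; i < ii + BLOCK; i++)
                        for (int j = jj; j < jj + BLOCK; j++)
                            dst[j][i] = src[i][j];
        }

        int main(void)
        {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    src[i][j] = i * N + j;

            transpose_tiled();
            printf("%f\n", dst[N - 1][0]);  /* keep the work from being optimized away */
            return 0;
        }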

  • The real issue ... (Score:4, Insightful)

    by TheCrig ( 3178 ) on Wednesday November 02, 2005 @12:48PM (#13933083) Homepage
    ... is not memory performance as such, but system performance. If a 5 percent increase in system performance raises the cost of your system by 10 percent, you have to want it pretty badly, be on the edge of required performance, or just be in a schoolyard comparison. But if it's reversed, and a 10 percent increase in system performance can be had for a 5 percent increase in system price, then if you can afford the 5 percent (say $100 on a $2000 system), go for it.
  • Re:Ask a builder (Score:4, Insightful)

    by theantipop ( 803016 ) on Wednesday November 02, 2005 @12:49PM (#13933102)
    What most people don't realize is that the only way to improve performance at the top end of the spectrum is through a combination of small tweaks like this. Sure, spending twice the money for 103% of the performance sounds dumb, but when you combine it with small tweaks to your processor and graphics card, plus a 10,000 RPM hard drive, they add up (see the rough arithmetic after this comment).

    These products are not for people who want to achieve a usable level of performance, and as such are not marketed at those crowds. They are for people who already have fast equipment but want more. I won't say this is a good or bad thing, as it is simply a hobby for most of these people. Just like import tuners: they may drive funny-looking cars, but it's their choice of hobby.
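
    A very rough back-of-the-envelope of the "small tweaks add up" idea, assuming (optimistically) that the individual gains are independent and compound multiplicatively; real systems rarely behave that cleanly, and only the 3% figure comes from the memory discussion, the rest are made-up placeholders.

        #include <stdio.h>

        int main(void)
        {
            /* hypothetical per-component gains: RAM, CPU, GPU, disk */
            double gains[] = { 0.03, 0.05, 0.10, 0.08 };
            double combined = 1.0;

            for (int i = 0; i < 4; i++)
                combined *= 1.0 + gains[i];

            /* prints roughly 28% under these assumed numbers */
            printf("combined speedup: %.1f%%\n", (combined - 1.0) * 100.0);
            return 0;
        }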

  • by timeOday ( 582209 ) on Wednesday November 02, 2005 @12:50PM (#13933103)
    I wouldn't buy a $500 card either, but sheesh, at least they're faster than the cheap ones. This low-latency memory is twice the price for a ~3% boost... I think not.
  • by Iriel ( 810009 ) on Wednesday November 02, 2005 @12:52PM (#13933128) Homepage
    This isn't all that funny. I mean, it does make me laugh, but it's far more true than humorous. I constantly get berated by the 'hardcore' gamers for not having the fastest CPU/RAM/GPU/HD when I can still run a lot of games just as well as anyone else. The problem with hardcore gaming equipment is that it has become something like MTV selling you 'cool'.

    Guess what? That wicked dual-core CPU actually runs games slower than its single-core cousin. That brand-spankin' new video card that cost you $400 (or more)? I pay that much only once every several years on my video card. The difference is that I don't care about squeezing out my maximum frames per second, because most people couldn't even detect the difference if the game didn't have an option to show the number in the corner of the screen like some veritable rating of their manhood (sorry for my gender bias on that). And that super ultra OHMYFUCKINGGODITMAKESMYEXPLODEITSSOFAST low-latency RAM is giving you a performance boost of maybe 2% over what I've got now.

    I find it educational to read these reports so I can make informed purchasing choices. For that, I'm quite grateful. However, I find it kind of sad that the parent post is unsettlingly accurate in that the 'hardcore PC gamers' will shove this aside for the ATI SXL 10G Super Elite XTRME Pro card next week. Witness what happens when PC gaming meets MTV-esque marketing.
  • by Iriel ( 810009 ) on Wednesday November 02, 2005 @01:12PM (#13933307) Homepage
    You seem to have missed the point that 'a lot of games' does not mean 'all games', 'any games', or any derivative thereof. And honestly, the point of my post is that I'm willing to sacrifice some detail and put my settings at 75-80% instead of maxed out if it'll save me from spending close to a thousand dollars a year on upgrades.
  • by Sycraft-fu ( 314770 ) on Wednesday November 02, 2005 @02:45PM (#13934192)
    Cache is SRAM, since SRAM is much faster. Ok, except that SRAM takes 6 transistors per bit to make. So for 1 megabyte of cache, that's 48 million transistors to implement (the arithmetic is sketched after this comment). That's a major budget of silicon. As transistor count goes up, so does die size, heat, cost, failure rate, etc. So putting large caches on just isn't feasible. An 8MB cache would use more transistors than most processors today do in total between core and cache.

    Ok, you say, so move it off the chip. Well, the problem is that part of the reason the cache can be so fast and low latency is that it's located on die. If you move it off, you start to run into much harder speed limitations. Intel discovered that with their Celerons back in the PII days.

    Real PIIs had 512k of cache, but on separate chips. Because it was off die, half chip speed was the best they could do. The Celerons only had 128k of cache, but it was on the chip die and thus ran at full speed. What you found was that if you overclocked a Celeron to the same bus and core speed as a PII (for example, a 300MHz Celeron ran on a 66MHz bus with a 4.5x multiplier; cranking the bus to 100MHz made it run at 450MHz, the same as a PII), it ran at least as fast, sometimes faster, despite the smaller cache.

    Thus these days, on-die cache is what it's all about. Generally the value they pick is where diminishing returns start to seriously kick in. You discover that throwing more cache at things generally doesn't result in that big a speed increase (servers are a little different).

    So, unless someone figures out a better kind of RAM to use, we are stuck. DRAM is what we use for main memory already, and SRAM is too expensive to use very much of.
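
    A back-of-the-envelope sketch of the transistor arithmetic above (storage cells only; tag arrays, decoders, and sense amplifiers would add more).

        #include <stdio.h>

        int main(void)
        {
            const long long bits_per_byte       = 8;
            const long long transistors_per_bit = 6;   /* classic 6T SRAM cell */

            for (long long mb = 1; mb <= 8; mb *= 2) {
                long long t = mb * 1000000LL * bits_per_byte * transistors_per_bit;
                printf("%lld MB of SRAM cache: ~%lld million transistors\n",
                       mb, t / 1000000LL);
            }
            return 0;
        }
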
  • by freeweed ( 309734 ) on Wednesday November 02, 2005 @03:29PM (#13934610)
    Price. Well, price and size, but mostly price.

    Cache isn't some magical thing. It's simply RAM. SRAM, usually, which is why it's so fast (it doesn't have to waste power/time refreshing its contents). At the end of the day, it's just some very fast RAM. It sits between your CPU and the rest of your RAM, and uses its increased speed to "trick" the CPU into performing as if your main RAM were much faster than it is.

    In my computer arch course a while back, someone asked why, if cache is so fast, we don't just build computers with 100% SRAM memory. Our professor did some back-of-the-napkin calculations for fun: major $. You have to include the extra space and cooling requirements too, of course :)

    The other thing, of course, is the good old law of diminishing returns. Cache actually solves the problem VERY nicely. For most people/computers/applications, cache misses aren't that great of a problem, because most computer code lends itself to cache hits (a phenomenon called "locality"). Locality is WHY we have cache in the first place. In general most computing works very well with a tiny amount of very fast cache and a small amount of fast cache. Adding more eventually gets you to the point where you're not seeing much if any improvement. On most modern systems, we're at that point - at least as far as the market will bear.

    Oh, and outside of the HPC world, there's no NEED for programmers to worry about memory caching issues. This isn't where most bottlenecks show up, and again, most general-purpose code lends itself very nicely to small amounts of cache. Compilers often help here, too. Most average programmers would make better use of their time analyzing the data structures and algorithms they use.
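
    An illustration of the "locality" point above: both functions below do the same work, but one loop order walks memory sequentially (cache-friendly) and the other strides across it. The array size is an arbitrary illustrative choice, picked to be much larger than any cache.

        #include <stdio.h>

        #define ROWS 4096
        #define COLS 4096

        static int grid[ROWS][COLS];

        long long sum_row_major(void)   /* good locality: mostly cache hits */
        {
            long long sum = 0;
            for (int r = 0; r < ROWS; r++)
                for (int c = 0; c < COLS; c++)
                    sum += grid[r][c];
            return sum;
        }

        long long sum_col_major(void)   /* poor locality: mostly cache misses */
        {
            long long sum = 0;
            for (int c = 0; c < COLS; c++)
                for (int r = 0; r < ROWS; r++)
                    sum += grid[r][c];
            return sum;
        }

        int main(void)
        {
            printf("%lld %lld\n", sum_row_major(), sum_col_major());
            return 0;
        }

    Timing the two on a real machine shows the column-major version running noticeably slower, even though both touch exactly the same data.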
