Intel Hardware

DDR3 Isn't Worth The Money - Yet

An anonymous reader writes "With Intel's motherboard chipsets supporting both DDR2 and DDR3 memory, the question now is whether DDR3 is worth all that extra cash. TrustedReviews has a lengthy article on the topic, and it looks like (for the moment) the answer is no: 'Not to be too gloomy about this, but the bottom line is that it can only be advised to steer clear of DDR3 at present, as in terms of performance, which is what it's all about, it's a waste of money. Even fast DDR2 is, as we have demonstrated clearly, only worthwhile if you are actually overclocking, as it enables you to raise the front-side bus without your memory causing a bottleneck. DDR3 will of course come into its own as speeds increase still further, enabling even higher front-side bus speeds to be achieved. For now though, DDR2 does its job just fine.'"
This discussion has been archived. No new comments can be posted.

  • by Gilatrout ( 694977 ) on Friday September 14, 2007 @08:50AM (#20602487)
    I read Intel supports Dance Dance Revolution 3.
  • I agree (Score:5, Funny)

    by TheRealFixer ( 552803 ) * on Friday September 14, 2007 @08:52AM (#20602501)
    I would have to agree. I see all these kids pumping quarters into these machines and pretending to dance. Seems like a complete waste of money to me.
    • Re: (Score:2, Offtopic)

      by Applekid ( 993327 )
      While the DDR joke is fighting hard for its place amongst Soviet Russia and Welcoming Overlords...

      You'd be hard pressed to find one player of that genre of game who'd confess that they actually consider it dancing, any more than whack-a-mole is a simulator of pest control.
        • You'd be hard pressed to find one player of that genre of game who'd confess that they actually consider it dancing, any more than whack-a-mole is a simulator of pest control.

        That's because you're not playing it on double. Once you have 8 panels over a 66-inch-wide platform, Dance Dance Revolution begins to look a lot more like dancing.

        ObTopic: For people in the USA, DDR 3rd Mix isn't worth the money to import a Japanese PS1 and a Japanese game. Stick with Konamix.

  • Duh? (Score:5, Insightful)

    by ynososiduts ( 1064782 ) on Friday September 14, 2007 @08:53AM (#20602515)
    Who in their right mind would pay so much for RAM? The only people I can think of are middle-to-upper-class teenagers with lots of money. The ones who run 8800 Ultras in SLI thinking that two cards = twice the performance, when it's more like a 30-50% increase. Most educated system builders won't spend more money than they have to, and DDR3 is just overpriced.
    • Exactly - DDR3 is something like 3 or 4x the price of DDR2. Who really expects it to offer the performance to match that kind of price step-up?
      • by Gr8Apes ( 679165 )
        Heck, DDR2 is only now worth it, since it's cheaper than DDR. I have a DDR 400 system at home that's more than 20% OC'd on the memory bus and rock stable. When I bought the 2GB that's in that machine, DDR was about half the price of DDR2.

        I'll jump on the DDR2 bandwagon with my next system, unless DDR3 drops to the same or less than DDR2 prices.
    • What's more, you're spending all that money on a system that will be obsolete in a few months (obsolete for what you want it for, anyway), and on top of that, there aren't any games that require that much speed! By the time the new games that do require it come out, you will have gotten the new latest and greatest. Who needs 140 FPS anyway? Are there even screens that can display it?
      • If there are, I doubt they are available to the general public. Few of the aforementioned young people realize that if they have a 60-70 Hz LCD monitor, it doesn't matter what FPS you get over 70. Some claim to be able to tell the difference between 60 and 90 FPS. Like they can even see it.
        • by cnettel ( 836611 )
          The interesting point is the absolute minimum FPS, measured as 1/(max time between two frames). Even with an average of 70, you can sometimes go above 16 ms between frames. And we're quite sensitive to these things; as an example, I notice the somewhat uncanny effect of having two TFTs with different panels with video scaled across both. They have different response times and possibly different processing, but they should still be at most a (60 Hz) frame or so apart. The effect is nonetheless very visible.
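To make the parent's formula concrete, here's a minimal Python sketch (the frame timestamps are invented for illustration): the average FPS can look healthy while the worst single gap between frames dips well below it.

```python
# Hypothetical frame timestamps in seconds; one 24 ms stall hides in an
# otherwise smooth 12 ms cadence.
timestamps = [0.000, 0.012, 0.024, 0.036, 0.060, 0.072]

gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]

avg_fps = len(gaps) / (timestamps[-1] - timestamps[0])
min_fps = 1 / max(gaps)  # the parent's 1/(max time between two frames)

print(f"average: {avg_fps:.0f} FPS, worst case: {min_fps:.0f} FPS")
# average: 69 FPS, worst case: 42 FPS
```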
        • by plover ( 150551 ) *
          There's a frequency, somewhere between 50 and 72 Hz, at which the perception of flicker ends for most people. I know that on a CRT a 60 Hz refresh rate is quite bothersome to me but 72 Hz is not, while on an LCD screen a 60 Hz refresh rate doesn't bother me at all. This makes me believe my perception is related to the overall luminance of the screen (which is evened out on an LCD by the backlight) rather than the display rate of the bits themselves.

          Of course, I'm not playing

          • That's because LCDs don't flicker, while CRTs do... an LCD wouldn't show flicker at 1 Hz; it would just refresh slowly.
          • That's because LCDs don't refresh. They don't have a beam that scans the screen 60 times a second, like CRTs do. Instead, their pixels remain at that value until they are given the signal to change, and the faster that change happens, the faster the screen is. That's why an LCD could be at 1 Hz and you wouldn't notice anything (until the picture changed, anyway).
            • by tlhIngan ( 30335 )

              That's because LCDs don't refresh. They don't have a beam that scans the screen 60 times a second, like CRTs do. Instead, their pixels remain at that value until they are given the signal to change, and the faster that change happens, the faster the screen is. That's why an LCD could be at 1 Hz and you wouldn't notice anything (until the picture changed, anyway).

              False, actually.

              LCDs are refreshed much the same way as CRTs are. You start at the upper left, write the pixel's data, then the next pixel, until y

              • Pull the LCD data cable out and you'll see it fade

                Not quite: most monitors will display a [pick your color, usually a shade of black] screen for a few seconds upon detecting loss of sync before powering down to standby, thereby clearing whatever happened to be displayed within 20ms of the VGA/DVI/HDMI/whatever cable plug being pulled.

                If you really want to see Active-Matrix TFT persistence, you would fare better with yanking the power plug out of the wall socket and shining a very bright light on the LCD. Un

        • I can tell the difference between 60 and 90 FPS easily, but the visuals are just the start of it. Most games sync the gameplay to the FPS: higher FPS = moving faster, jumping higher, potentially shooting faster, regenerating things faster. I could cite sources if needed, but basically anything based off Quake 1 (incl. later versions of Quake and their derivatives) functions this way. If you play competitively, it's worth it. Of course, if you're the type to really care this much you probably don't use a 70 Hz LCD,
      • by Mprx ( 82435 )
        My monitor can display 200FPS. (Iiyama Vision Master Pro 454). Some CRTs are even faster.
        • My monitor can display 200FPS. (Iiyama Vision Master Pro 454).
          The 200 Hz refresh rate likely only applies to 640x480 resolution, which you won't find many hardcore gamers using these days!

          According to its manual, the 454 does support a refresh rate of 102 Hz at 1600x1200, which is still pretty damn good though.
      • Um, I run my monitors at SXGA and 100 FPS with VSync on and it looks nice. I think the VGA standard will allow something like 160-200 FPS at 640x480, but until DVI CRTs come out we won't be able to get more than 100 FPS on a CRT at SXGA. Also, in games, 200+ FPS will help tremendously with shot registration, but lag is usually the biggest factor in registration once you are over about 100 FPS.
      • by zokum ( 650994 )
        I play World of Warcraft at 120 Hz and many other games at 160 Hz on my Eizo F930. With that kind of refresh rate, you can turn off vsync without any noticeable effect in most games. Personally I am very sensitive to refresh rates, and it doesn't get comfortable until I get at least 90 Hz on a CRT. (I run 1600x1200@100Hz for desktop usage.) You can definitely feel/see the difference, even in games; it depends on what game it is and how bright it is, of course. For a comparison, try scrolling a page like Slashdot o
      • Who needs 140 FPS anyway? Are there even screens that can display it?

        Take two machines. Machine A renders a low-complexity scene at 140 fps. Machine B renders the same scene at 70 fps. Which is more likely to render a high-complexity scene at >= 60 fps?

        Even if your machine is rendering the highest complexity scenes at twice your monitor's refresh rate, programs can still put excess GPU horsepower to work with full-scene motion blur. Draw a scene with objects positioned midway between the last frame and this frame, and blend it 50-50 with this frame, and things start
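A rough sketch of the 50-50 blend described above, assuming a hypothetical render_scene(t) that can draw the scene with objects at interpolated positions (t = 0.5 is midway between the last frame and this one):

```python
# Full-scene motion blur as described: draw the in-between frame, then
# average it with the current frame, value by value.
def render_with_motion_blur(render_scene):
    mid = render_scene(0.5)  # objects midway between last frame and this one
    cur = render_scene(1.0)  # objects at this frame's positions
    return [(a + b) / 2 for a, b in zip(mid, cur)]  # 50-50 blend

# Toy "scene": one pixel whose brightness tracks position, just to run it.
print(render_with_motion_blur(lambda t: [t * 255]))  # [191.25]
```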

    • A little off topic, but I read on Wikipedia [wikipedia.org] that with the R600 series/drivers the CrossFire solution is approaching its theoretical maximum of twice the performance of a single card.
    • Actually, with the 8800s, the benchmarks I've seen reflect more like a 60-80% increase. The new DX10 hardware design makes SLI much more powerful. I've got two 8800 GTSs in SLI and I've seen improvements similar to the 60-80% I mentioned. With older cards, that's obviously not the case, but the newer stuff is pretty impressive.
      • That's really only useful if you are using super high resolutions or multiple monitors. My 8800 GTS (320 MB version) can play any game at my monitor's native 1680x1050 with everything maxed out at 40 FPS, which is the minimum I'd accept.
        • by mogwai7 ( 704419 )
          SLI can only be enabled while using a single monitor.
          • SLI can only be enabled while using a single monitor.

            Well, with two monitors I can't think of a reason to do SLI since you can have each card powering one whole monitor.

    • by Moraelin ( 679338 ) on Friday September 14, 2007 @10:16AM (#20603461) Journal
      Some, oh, I think 6-7 years ago, I happened to be at the local computer store, to buy some stuff. (In the meantime I buy most components online, so that's not to say it hasn't happened ever since, just that I wasn't there to see it.)

      So an older guy came in and said he wanted them to build him a system. He was pretty explicit that he really didn't want much more than to read email and send digital photos to his kids. You'd think entry-level system, right? Well, the guy behind the counter talked him into buying a system that was vastly more powerful than my gaming rig. (And bear in mind that at the time I was upgrading so often to stay high end that the guys at the computer hardware store were greeting me happily on the street. Sad, but true.) They sold him the absolute top-end Intel CPU, IIRC some two gigabytes of RAM (which at the time was enterprise-server class), the absolute top-end NVidia card (apparently you really, really need that for graphical stuff, like, say, digital photos), etc.

      So basically, don't underestimate what lack of knowledge can do. There are a bunch of people who will be just easy prey to the nice man at the store telling them that DDR3 is 50% better than DDR2, 'cause, see, 3 is a whole 50% bigger than 2.

      And then there'll be a lot who'll make that inference on their own, or based on some ads. DDR3 is obviously newer than DDR2, so, hmm, it must be better, right?

      Basically, at least those teenagers you mention read benchmarks religiously, with the desperation of someone whose penis size depends (physically) on his 3DMark score and how many MHz he's overclocked. If, God forbid, his score falls 100 points short of the pack leader, he might as well have "IMPOTENT, PLEASE KILL ME" tattooed on his forehead. At 1000 points less, someone will come to his door with rusty garden scissors and revoke his right to pee standing. So they'll be informed at least roughly what difference it makes, or at least whether the guys with the biggest e-penis are on DDR2 or DDR3.

      I worry more about moms and pops who don't know their arse from their elbow when it comes to computers. Now _normally_ those won't go for the highest-end machine, but I can see them swindled out of an extra 100 bucks just because something's newer and might hopefully make their new computer less quick to go obsolete.
    • I agree, but it's no trivial task becoming an educated system builder. Back when Computer Shopper could kill small pets if dropped and I read it monthly, I could build a system no problem.

      Fast forward a decade, my career is different and I'm not as well informed. But I need to build a specialized system. For FEA on large problems (100,000+ nodes) you need masses of fast RAM. Fast everything. But I own a big chunk of my business and have to pinch pennies or it comes out of my pocket. Even if I go to someone
    • Does anyone recall when FPM DRAM cost a second mortgage for 64K? Memory has ALWAYS been expensive. Most of that cost is hype and nonsense, but it still comes out of my pocket...
  • Ad-free! (Score:4, Insightful)

    by InvisblePinkUnicorn ( 1126837 ) on Friday September 14, 2007 @08:53AM (#20602523)
    I'm so used to crap like c|net that I immediately went searching for a "printer-friendly" (aka, ad-free) version of the article, but lo and behold, that's not necessary. To think, I could actually read an article online without having to navigate through the usual nightmare... what an intriguing concept!
  • by eknagy ( 1056622 )
    Yawn.
    Well then, we have to wait for AM2+ to become available, and with the new AM2+ Barcelonas, it will be worth the money.
    Reminds me of RDRAM...
  • by downix ( 84795 ) on Friday September 14, 2007 @08:58AM (#20602587) Homepage
    Every time I see "the need isn't there" or "there's more than enough memory bandwidth" I check their figures; they're only measuring the CPU's memory needs. Well, hate to break it to you, but there's more to a computer than just the CPU. Having that extra bandwidth means that those lovely PCI bus-mastering devices (such as my SCSI-3 controller and quad FireWire card) aren't fighting with the CPU for memory access. Frankly, add in a game accelerator like the PhysX and a high-end GPU fetching data from main memory for its local cache, and even DDR3 starts looking a bit narrow....
    • Did you just say PhysX? Don't you know that those things are only good for working on your trash-can hook shot from your desk chair? At least... I haven't read any reviews that say otherwise.
    • by Slashcrap ( 869349 ) on Friday September 14, 2007 @10:19AM (#20603521)
      Having that extra bandwidth means that those lovely PCI bus-mastering devices (such as my SCSI-3 controller and quad FireWire card) aren't fighting with the CPU for memory access.

      With a SCSI-3 card and 4-port FireWire you'd be looking at about 360MB/s of bandwidth, assuming they reach their max theoretical speed (and of course PC hardware always reaches its maximum theoretical speed). Unless they're both on the PCI bus, in which case it's 133MB/s max for both. Which is fairly minor compared to the 6GB/s of memory bandwidth that I get with shitty DDR2 on a shitty motherboard.

      Unless you can provide evidence to the contrary, I am going to go out on a limb and suggest that the performance increases you are expecting do not actually exist. Unless your primary workloads involve running memory benchmarks and Prime95 in which case I would point out that you accidentally posted to Slashdot instead of the Xtremesystems forums.
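Redoing that comparison as a back-of-envelope sketch (the per-device peaks below are my assumptions, not the parent's exact figures):

```python
# Theoretical peaks, which real PC hardware rarely reaches.
scsi3_mb_s = 320      # Ultra320 SCSI bus, assumed
firewire_mb_s = 50    # FireWire 400 is ~50 MB/s per bus, assumed shared
devices = scsi3_mb_s + firewire_mb_s

ddr2_mb_s = 6000      # the ~6 GB/s dual-channel DDR2 figure quoted above

print(f"devices: {devices} MB/s = {devices / ddr2_mb_s:.0%} of memory bandwidth")
# devices: 370 MB/s = 6% of memory bandwidth
```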
    • Re: (Score:3, Informative)

      by julesh ( 229690 )
      Every time I see "the need isn't there" or "there's more than enough memory bandwidth" I check their figures; they're only measuring the CPU's memory needs.

      The reason they're only measuring the CPU memory needs is because the CPU memory needs dwarf all others.

      Max CPU memory access rate (Intel Core 2 @ 1333 FSB) = 10.7 GB/s
      Max PCIe memory access rate (16 lanes @ 2.5 GT/s) = 4 GB/s

      Total 14.7 GB/s over two channels of memory = 7.35 GB/s per channel, roughly DDR2-900 (~1800 MT/s if it were a single channel). So, if both your CPU and your I/O devices are running at 100% capacity o
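For anyone checking the arithmetic, a quick sketch using the same peak figures (all theoretical maxima):

```python
# Peak demand on memory from CPU + PCIe, per the figures above.
fsb = 1333e6 * 8          # 1333 MT/s FSB x 8-byte bus  ~ 10.7 GB/s
pcie = 16 * 250e6         # 16 PCIe 1.x lanes x 250 MB/s = 4.0 GB/s
total = fsb + pcie        # ~14.7 GB/s

per_channel = total / 2   # split across dual-channel memory
mt_s = per_channel / 8    # each channel is 64 bits (8 bytes) wide
print(f"{total / 1e9:.1f} GB/s total -> {mt_s / 1e6:.0f} MT/s per channel")
# 14.7 GB/s total -> 916 MT/s per channel, i.e. roughly DDR2-900
```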
      • by unfunk ( 804468 )

        I don't see why the faster memory is worth paying enough extra that I could buy an entire extra computer instead, when I will only use it in the rare case I'm maxing out both I/O bandwidth and CPU bandwidth.

        Agreed.

        When I buy memory, it's always the best value stuff that I get. Do I get 1GB DDR2-533 (no-name brand) for $55au, or 1GB DDR2-800 (Corsair brand) for $140au?
        Gee... it's a tough choice, but I think the no-name stuff is the goer here...
      • Pretty much all new memory technologies have historically been ridiculously overpriced for the first several months following their initial introduction.

        It takes a while for people to adopt new memory technologies because they do not want to pay the full introductory price. It takes a while for manufacturers to ramp up production because they do not want to end up with excessive inventory caused by slow initial uptake. It takes a while for new technologies to become mainstream but it will happen in due time as
        • by julesh ( 229690 )
          Pretty much all new memory technologies have historically been ridiculously overpriced for the first several months following their initial introduction.

          Well, yes. But as the title of this article is "DDR3 Isn't Worth The Money - Yet", I don't see anyone disagreeing with this. The point is that it isn't worth it, for the vast majority of people, to buy this technology if they're upgrading their computers right now.
          • The point is that it isn't worth it, for the vast majority of people, to buy this technology if they're upgrading their computers right now.

            The vast majority of people use mainstream systems built with mainstream components. DDR3 is still quite early on its ramp-up and about one year away from becoming mainstream technology - most dramurais are waiting for DDR3 support on AMD's side before pushing volumes.

            The currently ridiculously large premiums combined with marginal performance gains (3-4X the price, 5-1

  • 60ns SIMMs ought to be fast enough for anybody.

    In a year's time, DDR3 will have totally supplanted DDR2.
    • When the price comes down. I think that's the key point being made. Of course faster is better, just not if it costs a stupid amount of money. So when DDR3 costs the same or not much more than DDR2, it will indeed become an attractive proposition.
    • 60ns SIMMs ought to be fast enough for anybody.

      People used to say the same thing about 64K.
    • There's a huge difference between "you don't need anything faster" vs. "DDR3 is not faster."
    • In a year's time, DDR3 will have totally supplanted DDR2.
      Which is exactly what the post said:

      DDR3 will of course come into its own as speeds increase still further
    • by m50d ( 797211 )
      In a year's time, DDR3 will have totally supplanted DDR2.

      Well yes, but who cares right now? My system still uses AGP - even though I knew it was "obsolete" when I built the system (2 years ago), it had the right price/performance at the time, and by the time I need a better video card I will also need a new CPU, a new motherboard to accommodate it, and much more memory - so it's going to be easier to build a new system. It was the right decision at the time, and I don't regret it one bit. It's the same sit

  • Anyone remember when DDR2 was rolled out and was actually *slower* than the standard of the day, regular DDR? It took about a year, IIRC, for the speed of the newer RAM to catch up and overtake the older RAM, and even then it was still pricey. I expect with the current glut of DDR2 in the market that it will take quite a while for DDR3 to be considered a worthy upgrade.
    • by Zephiris ( 788562 ) on Friday September 14, 2007 @09:32AM (#20602999)
      Part of the reason that DDR2 was so much slower at most clock speeds is the added latency. Lower-speed DDR2 can have more than twice the tested latency of DDR-400. The problem is that apparently JEDEC, or whoever standardizes memory now, isn't thinking about what the best direction for DDR to take is. They're going in the same direction as the manufacturers, trying to sell higher "megahertz" and "gigabytes per second" ratings, even when those are effectively meaningless now.

      Does it really matter if your computer can do 6 GB/s, or 12 GB/s? 14 GB/s? Where does it stop? And even then, that's mostly theoretical, particularly in the case of DDR2. But a very important distinction is that a great many memory accesses are small. On basically all of those accesses, the transfer itself takes far less time than the latency spent waiting for the command to complete and allow another request.

      Way back when, Intel motherboards tried out RDRAM for 'higher end' boards, and the Nintendo 64 also used it. Both were fairly large fiascos in that sense, with more or less all technical reviews noting that the increased latency more than cancelled out the improved bandwidth. Now we're looking at DDR3, with far higher latencies than classic RDRAM, for a relatively minor bandwidth improvement that only helps extremely large memory requests (such as those from theoretically extremely large-scale databases and scientific research).

      It reminds me acutely of the early Pentium 4s. A 600 MHz Pentium 3 could beat up to a 1.7 GHz Pentium 4 in most applications and benchmarks, and the (rare and expensive) 1.4 GHz Pentium 3s were real monsters. But people kept trying to tailor benchmarks to hide that, so people would buy more product.

      Overclocking has also generally demonstrated that regular 'old' DDR1, while a bit pricier (mostly due to the virtual elimination of production nowadays), scales better and posts far better numbers than DDR2 and the like. The DDR-600 equivalent is extraordinarily zippy, and (of course) real-world latency is also absurdly low.

      It makes me feel like the 'governing bodies' here have really let people down. Instead of trying to standardize on and promote what's best for general computing, they're trying to push a greater volume of merchandise that has no meaningful improvement, and in fact usually a notable decline, over what we've already had for years. The bottom line for them is money, and it's just wrong to put their own pocketbooks over the long-term well-being of computing technology and the needs of the consumer.
      • by imgod2u ( 812837 )
        The increased latency is a real penalty, but the argument is that the aggregate improvement over time is better. That is, there was no further way to improve standard DDR other than to start dual- or quad-channeling it (making 512-bit buses on the motherboard). There is a clear frequency hit unless you start increasing latency and pipelining memory accesses. There is a penalty, yes, and with latency-sensitive applications that do a lot of pointer-hopping, it can mean that the application will actual
      • by Agripa ( 139780 )

        Part of the reason that DDR2 was so much slower at most clock speeds is the added latency. Lower-speed DDR2 can have more than twice the tested latency of DDR-400.

        It is not quite that simple.

        The latency is ultimately limited by the characteristics of the DRAM array which has a specific access time after the row and column addresses are provided. When you compare the latencies of DDR to DDR2 or DDR2 to DDR3, you need to take into account the interface clock speed. Internally, DDR-400, DDR2-800
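Presumably this is heading toward the point that the DRAM array's access time stays roughly constant across generations. A quick sketch of the cycles-to-nanoseconds conversion (the CL values are typical figures assumed for illustration, not taken from the comment):

```python
# CAS latency in wall-clock time = cycles / interface clock.
# The interface clock is half the "effective" DDR transfer rate.
modules = {
    "DDR-400 CL2":   (400e6 / 2, 2),   # 200 MHz clock, 2 cycles
    "DDR2-800 CL5":  (800e6 / 2, 5),   # 400 MHz clock, 5 cycles
    "DDR3-1600 CL9": (1600e6 / 2, 9),  # 800 MHz clock, 9 cycles
}
for name, (clock_hz, cycles) in modules.items():
    print(f"{name}: {cycles / clock_hz * 1e9:.1f} ns")
# All three land in the 10-13 ns range: more cycles, but faster cycles.
```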

  • That's just because it hasn't come out for the Wii yet.
  • ...isn't worth it when it's brand new. Give it a while for the price to come down.
  • by A Friendly Troll ( 1017492 ) on Friday September 14, 2007 @09:11AM (#20602775)
    Intel's C2Ds love their memory bandwidth. Even the extreme low end, such as the E4xxx, can profit from something like DDR2-800 and an asynchronous 1:2 FSB:RAM. The E6xxx with their 266 MHz FSB can run at 2:3 with DDR2-800 and perform better than with 1:1 and slightly lower latencies.

    Besides, the price difference between DDR2-533 and DDR2-800 is really small. You might as well go for it, if only for futureproofing your system.
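For reference, the ratios mentioned work out like this (a small sketch; note the "266 MHz" FSB is exactly 800/3 MHz, i.e. a 1066 MT/s quad-pumped bus divided by four):

```python
from fractions import Fraction

# DDR2-800 transfers twice per clock, so its clock is 400 MHz.
ddr2_800_clock = Fraction(400)

for cpu, fsb_mhz in [("E4xxx", Fraction(200)), ("E6xxx", Fraction(800, 3))]:
    ratio = fsb_mhz / ddr2_800_clock
    print(f"{cpu}: FSB:RAM = {ratio.numerator}:{ratio.denominator}")
# E4xxx: FSB:RAM = 1:2
# E6xxx: FSB:RAM = 2:3
```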
    • Re: (Score:3, Insightful)

      There is no such thing as "futureproofing" a computer. I thought that once too, and spent ridiculous amounts of money on computers that were supposed to last a very long time. They did, but while I could run most future programs well and fast, the people I knew bought new computers, for much less, that did the same stuff faster than my futureproofed machine. In the end they bought more PCs for less money. While they had 3 machines over that time and I had only one, they always had the faster machines except for the first 6

      • Well, "futureproofing" in this context means replacing your E4400 @ 200 MHz FSB with a new quad-core Penryn that has an FSB of 333 MHz. It would be a noticeable upgrade for gaming, development, video encoding, etc. I agree with you that trying to buy the latest and greatest is a bad idea; my old PC had lasted since 2000, and it was only replaced this year with mid-range components.

        Anyway, the point is that if you buy that Penryn, your "good enough" DDR2-533 (266 MHz FSB) you bought with the E4400 isn't guara
        • Anyway, the point is that if you buy that Penryn, your "good enough" DDR2-533 (266 MHz FSB) you bought with the E4400 isn't guaranteed to work as DDR2-667 needed for the new CPU. If you have an "overkill" DDR2-667, it'll feel right at home with the Penryn...

          Well, tell that to the people who bought an AMD64 Socket 754 or even 939.... I know, the article is about Intel, but that's one way one loses faith in futureproofing. The machine I used in my example was a PPro 200. It was a great machine, but Int

  • by Applekid ( 993327 ) on Friday September 14, 2007 @09:17AM (#20602839)
    I remember the same discussion when DDR2 was hitting stores.
    • I remember the same discussion when DDR2 was hitting stores.

      ...or when PC-133 SDRAM first came out. Or when 72-pin SIMMs first came out. Or when you could stuff 4MB into a 286 instead of just 1 or 2.

      Each step was nice, but hampered by the tech that used those parts (e.g. DOS and its apps were still fighting each other between EMS and XMS for using anything over 640k, back when boxes started coming out with 1, then 2MB of RAM on 'em).

      ...and don't get me started on how frickin' worthless that 512k RAM cartridge turned out to be on my old Commodore 64. It took

    • It's similar, but not the same. When DDR2 came out, we were comparing low-latency DDR-400 to high-latency DDR2-533; the latency wiped out the marginal bandwidth improvement. With DDR3, we have higher latencies again, but the bandwidth boost is much bigger (some of the best modules are rated for 2x the bandwidth of JEDEC-spec DDR2), yet it's still not producing much of a performance boost. At this point the problem is what's using the RAM: Intel's P35 memory controller + C2D design can't properly use a
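The bandwidth half of that comparison is easy to spell out (peak ratings for standard 64-bit modules; the module names are common examples, not taken from the comment):

```python
# Peak module bandwidth = transfers per second x 8 bytes (64-bit module).
for name, mt_s in [("DDR-400", 400), ("DDR2-533", 533),
                   ("DDR2-800", 800), ("DDR3-1600", 1600)]:
    print(f"{name}: {mt_s * 8 / 1000:.1f} GB/s peak")
# 3.2, 4.3, 6.4, and 12.8 GB/s -- DDR3-1600 is the "2x JEDEC-spec DDR2"
# rating mentioned above.
```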
  • Really, memory and CPU bottlenecks are not the biggest issue right now. The problem is and has been storage speed. It doesn't matter if we can crunch bits faster on the mainboard if we can't get them in and out to begin with. Memory and CPU speeds are skyrocketing and hard disk performance has stayed rather flat for years. Until drive performance catches up we'll still be waiting forever for the OS to boot up or apps to load.
    • by imgod2u ( 812837 )
      That may be true for things like application boot-up and OS boot-up time, but I don't think those things are a priority for speed-up. Most applications nowadays can run almost entirely out of RAM (and store their data sets in RAM); 2GB of memory is not uncommon. This makes memory speed the predominant limit on a computer's speed in most applications.

      Having photoshop filters run faster or your iTunes transcode your "collection" of Simpsons episodes so you can play it on your iPod are all things that are c
      • by ypps ( 1106881 )
        Photoshop filters and video transcoding are predictable interruptions that come in big chunks. You can then use your other core(s) to do other tasks, for example to load some files from HDD to RAM...

        Opening photos for editing in Photoshop and opening video files for transcoding are tasks that are limited by HDD-performance. This HDD lag comes in tiny bits all the time. You can't avoid it. It's also ANNOYING and pisses you off (some call it "micro stress"). Let's say you lose 10 seconds every two minutes tha
        • by imgod2u ( 812837 )
          Considering I'm writing this from work, I don't think computer speed is the limit on my productivity.

          And while HD lag is annoying, the concern for most computational limits, IMO, has been with processing-heavy workloads (simulation time, gaming, processing filters, etc.). The actual time it takes to load a picture from the HD is quite trivial compared to waiting 5 minutes for a black-and-white filter.
      • You keep telling that to yourself.

        I want my Half-Life 2 levels to load faster.
      • I run 4 GB now with memory prices being what they are. 64-bit operating systems (I run Vista Ultimate 64) can take advantage of it, and high-end apps like Photoshop and Premiere seem to love the extra headroom. Add in a high-end graphics card (8800 GTS) and it's all good.
  • Nearly every incremental step in technology is met with a barrage of "it's too expensive, it doesn't work right, it's not worth it, nobody will go there..." at which point it goes on to become the norm.
  • Or does anyone else have trouble taking a site called "TrustedReviews" seriously?
  • Would DDR3 be worthwhile in a system with two quad-core processors installed? I'm sure that'd load down the bus pretty heavily...
    • Re: (Score:3, Interesting)

      On a dual-processor Intel machine, you have to move to FB-DIMMs. I'm not sure if there are currently DDR3 FB-DIMMs, but I don't think so. If there were DDR3 FB-DIMMs, they'd also be quad-channel.

      On a dual-processor AMD machine, you have NUMA (non-uniform memory access), so each processor (processor, not core) has its own set of memory and its own bus, meaning you have 2 dual-channel buses.
  • Is it just me, or does it seem like every new memory technology disappoints? I've built systems since before EDO DRAM was all the rage, and we've seen lots of advances since... Burst EDO, SDRAM, RDRAM, DDR, DDR2, DDR3... but every time one of these supposed breakthroughs debuts, the review sites quickly go to work and reveal (at most) 5-10% performance increases over the previous generation. Often it's in the 1-2% benefit range. It seems like it's very difficult to squeeze extra performance out of memory with
    • Re: (Score:3, Interesting)

      by TheRaven64 ( 641858 )
      Intel, when they are prototyping a new CPU, run it in a simulator. This simulates an entire computer, and is very tweakable. A few years ago, they did an experiment; they made every CPU operation take no simulated time. Effectively, this meant that the CPU was infinitely fast. In their standard benchmark suite, they showed a 2-5x performance improvement overall. After doing this, however, increasing the speed of RAM and the disk gave significant improvements.

      A given generation of RAM may only make y

  • by Anonymous Coward
    This, is a, great article, and I will, read, it again and, again.
  • by InvisblePinkUnicorn ( 1126837 ) on Friday September 14, 2007 @10:13AM (#20603423)
    I'm looking for a motherboard that has DDR2 and DDR3 slots, but also a firewire port (and eSATA would be a plus), necessary for video editing. Any takers? I could only find one by Gigabyte on newegg but the reviews are mixed.
    • I'm looking for a motherboard that has DDR2 and DDR3 slots, but also a firewire port (and eSATA would be a plus), necessary for video editing.

      Check again in nine days. There should be at least a few more boards with both DDR2 and DDR3 slots when Intel's X38 chipset is "officially" launched on September 23 [xbitlabs.com] (early X38 boards are starting to appear in stores). Since X38 will be Intel's "performance" chipset, most motherboards should have FireWire and eSATA ports (in addition to PCI Express 2.0).

      Foxconn and MSI showed "hybrid" DDR2/DDR3 boards [techreport.com] based on this chipset at June's Computex.

  • I haven't even mastered Dance Dance Revolution #1 yet. There's already a 3?
  • Question (Score:3, Interesting)

    by rehtonAesoohC ( 954490 ) on Friday September 14, 2007 @10:21AM (#20603555) Journal
    The real question I have is whether or not DDR2 is worth upgrading over DDR1. I have 2 gigabytes of DDR RAM in my computer, and I recently started thinking that upgrading might be a good idea. But would I notice a performance increase by upgrading to DDR2? I don't want to spend $150 on a new motherboard and RAM only to get a marginal speed boost.

    Does anyone have any insight?
    • by Kjella ( 173770 )
      If you have a DDR1-era system, chances are the performance gain is minimal. Save your money for when it's time to get a new CPU as well.
    • by r3m0t ( 626466 )
      It depends on your current CPU and hard disk. If you have a very old CPU and a slow hard disk, then no. If you have more recent hardware (which seems a bit unlikely on a DDR motherboard) then your RAM may be holding you back.
      • My video card is a Geforce 7900 GTX (512mb) and my CPU is an AMD Athlon X2 4600+. With that setup, would my RAM be holding me back?
        • My video card is a Geforce 7900 GTX (512mb) and my CPU is an AMD Athlon X2 4600+. With that setup, would my RAM be holding me back?

          No, not really [anandtech.com]. The difference in performance is typically 5% or less. Games are not usually memory-limited.
        • As default luser said, the difference would be marginal. I probably wouldn't bother upgrading your RAM unless you were also upgrading your CPU to a Core 2 Duo (for which no motherboards use regular DDR, IIRC). Here's a benchmark. [anandtech.com] I have an E6600 which is stable and quiet when overclocked to 3.474 GHz with a Scythe Infinity... and runs at 31 degrees C.
        • My video card is a Geforce 7900 GTX (512mb) and my CPU is an AMD Athlon X2 4600+.

          Athlon X2 CPUs have on-die memory controllers, so you'd also have to upgrade your CPU (in addition to the motherboard and memory). That seems like a waste (to me), since the X2 4600+ is still a pretty sweet CPU. If you're currently using DDR, then your CPU and motherboard use Socket 939. To use DDR2, you would need a Socket AM2 CPU and motherboard.

          With that setup, would my RAM be holding me back?

          Not by much, if at all. Since your next memory upgrade will require a CPU upgrade, your next upgrade should probably have quad-core CPUs in mind (or octo

    • I'm no expert but I wouldn't expect a big performance boost from upgrading from DDR to DDR2. Memory performance in general isn't the bottleneck in a typical desktop system; memory CAPACITY might be, but if you have 2GB already that's not the issue.

      If you're looking for an easy speed boost, a new motherboard plus a new CPU would be the way to go; CPU performance has been increasing dramatically lately. Here's a chart from THG [tomshardware.com] that illustrates the progress; even the mid-range Core 2 Duos benchmark at 2-3 tim

    • by archen ( 447353 )
      Because of latency, DDR2 is only faster than DDR if you have a CPU over 2 GHz clock speed. And pretty much all speed boosts are marginal nowadays. The only way you really notice the difference is through aggregated marginal increases: CPU + mainboard + hard drive + RAM, etc. Typically you can't see much of a difference from changing out one part anymore.
  • before we move to DDR3?
  • I appreciate that some users make heavy use of graphics software and/or games etc., but for regular office use I am willing to bet that 90% of people have an absolute overkill of a system. I'm using a 1.6 GHz Pentium 4 with 640 MB of RAM (oblig: it should be enough for everybody ;) ), and currently about 258 MB of that is used (when accounting for buffers and cache) to run my desktop environment and most of the software I ever use. Essentially, I expect that in perhaps 2-3 years' time I might actually consider to
    • I do use graphics software and games.

      And I have 2GB of DDR memory, not DDR2. It was the right choice for me a couple of months ago, and it has proven to be a great buy, in a performance/cost benefit point of view.

      No Xfce or other weird software here, normal XP SP2 and lots of games.
  • I hate these freaking articles that tell ME, a hard-working, well-paid computer consumer that something is "too expensive". That is a relative term, and relative to most everyone else's income, maybe it ISN'T too expensive for me. A product is worth what somebody will pay for it, everyone else who isn't buying it can STFU and butt out. Just because YOU can't afford it, doesn't mean it is "too expensive" (whatever that means).
