Hardware Technology

DDR4 RAM To Hit Devices Next Year

angry tapir writes "Micron has said that DDR4 memory — the successor to DDR3 DRAM — will reach computers next year, and that the company has started shipping samples of the upcoming DDR memory type. DDR4 is more power-efficient and faster than DDR3. New forms of DDR memory first make it into servers and desktops, and then into laptops. Micron said it hopes that DDR4 memory will also reach portable devices like tablets, which currently use forms of low-power DDR3 and DDR2 memory."

  • ... I'm still stuck on good ole DDR2

    Realistically, while there are benefits for "faster", it's no substitute for reducing inefficient bloatware.

    • by Dorkmaster Flek ( 1013045 ) on Tuesday May 08, 2012 @11:02AM (#39928971)
      True, but I'm actually more interested in the supposed power savings. These days, I think reducing power consumption is a higher priority than increasing speed, or at least it should be.
      • Re: (Score:3, Insightful)

        Well, one way we can reduce power consumption is to go to operating systems that aren't as bloated. If you've tried the Windows 8 Consumer Preview, you already know that Windows 8 isn't just the worst product Microsoft has ever made - it's also bloatware. Microsoft would be better off making an XP 2014 release and selling it.

        The same with LXDE as opposed to bloatware like KDE.

        Another thing is screen savers - not only not needed, but a total waste of energy. Just have the OS turn the stupid screens off

        • by cpu6502 ( 1960974 ) on Tuesday May 08, 2012 @11:18AM (#39929221)

          LXDE == Lubuntu Linux?
          Good release.

          >>>capable of executing 1,000 times more instructions per second than the original pc

          Heh. More than that. The IBM PC was 4 megahertz? And now we have double-clocking where CPUs execute instructions on both rising & falling edges. And dual-core CPUs are now standard, so 3000*2*2/4 == 3000 times faster. And yet as you pointed out we still have to deal with annoying "wait" states while the PC thinks or redraws a screen. Bloat.

          • I'm currently posting from LXDE+Knoppix (boot off the dvd image, load the actual runtime + persistent data off a hard drive image on /dev/sda1 because linux distros have a nasty habit of breaking stuff on updates). Set up with zero swap, and the only real problem is the memory leaks in Iceweasel, same as in firefox under every other distro.

            Instead of competing on features, why not have a 6-month moratorium where people just fix current bugs? It would make everyone more conscious of bad practices that lead to bugs in the first place, hopefully reducing future breakage (and slow/fugly code to work around buggy cruft).

            • Instead of competing on features, why not have a 6-month moratorium where people just fix current bugs?

              Because, to paraphrase many WONTFIX bugs on the openoffice project (under Sun's watch): It's less fun to fix bugs than to focus on new features.

              • This is *so* true of so many projects. It's a pervasive problem - nobody likes to do a bughunt - you're basically seen as the maid cleaning up after other people's messes.

                Maybe there should be a "you can only add 1 feature for every x number of bugs you've removed - and your 'x' reverts to zero every time someone else finds a bug you created."

                Similar to "you can only add n bytes of code for new features if you first remove n+1 bytes of code w/o losing any existing features or introducing any bugs". Even

          • by Aryden ( 1872756 )
            More often than not, we confuse the speed of the OS with other hardware-related latency. I swapped out to a Samsung SSD last week for my boot drive, and now Windows as well as most applications pretty much start instantly. It's less than 4 seconds between my password entry and Windows loaded with applications and widgets ready.
        • by CajunArson ( 465943 ) on Tuesday May 08, 2012 @11:41AM (#39929569) Journal

          How does crap like this get modded insightful? Oh wait.. it's because it plays up to the bigoted prejudices that prevail on this site.

          1. I've actually used the Windows 8 preview on a 4 year old PC and it is more responsive than Linux for desktop use. I don't like Metro, but everything under the hood in Windows 8 is in very good shape and some changes to the UI could make it a good successor to Windows 7.
                People on this website who brag about being Linux "experts" because they got Ubuntu to boot one time should know the difference between the UI presentation layer and the underlying OS services. Unfortunately a bunch of self-proclaimed "experts" who troll this site are anything but.

          2. I also use KDE on the desktop and I've used LXDE. Guess what? KDE is faster for my use because of the ability to reconfigure its setup. I don't want or need a taskbar to switch between apps, and because of KDE's flexibility I have a very efficient keyboard shortcut system in place to handle window management. Additionally, yakuake gives KDE a big edge for handling the konsole in a smart way and guake (which cloned yakuake) is still not as good.

              Firefox under KDE starts up in the same amount of time as on LXDE.. and so does every other application I try. Windows don't move faster across the screen on LXDE either and they resize at the same speed on both desktops!

          • I also use KDE on the desktop and I've used LXDE. Guess what? KDE is faster for my use because of the ability to reconfigure its setup. I don't want or need a taskbar to switch between apps,

            Guess what - you don't need a taskbar to either launch or switch between apps in LXDE. It makes me wonder if you even tried it, or are just repeating someone else's BS.

            Firefox under KDE starts up in the same amount of time as on LXDE.. and so does every other application I try. Windows don't move faster across the sc

        • by Kongming ( 448396 ) on Tuesday May 08, 2012 @11:45AM (#39929641)

          Actually, I am running the Windows 8 Consumer Preview on the same hardware that I was previously running a clean XP installation, and Windows 8 is definitely snappier, plus has better search/launch functionality. I can't say that I am particularly fond of the Metro UI (I mostly use the Explorer-style interface), and I preferred the search UI in Windows 7 to the one in Windows 8. But saying that Windows 8 is a worse OS than such champions as Vista, 98, and ME is quite a stretch.

        • by Bengie ( 1121981 )

          Windows 8 isn't just the worst product Microsoft has ever made - it's also bloatware

          Funny how using less memory and CPU while booting faster turns into "bloatware". I would love to see your definition of the word.

          • by Intropy ( 2009018 ) on Tuesday May 08, 2012 @12:18PM (#39930117)

            Bloatware: software I dislike and wish to deride but for which I am unwilling or unable to give reasons why.

      • by eviljolly ( 411836 ) on Tuesday May 08, 2012 @11:12AM (#39929117) Journal

        Luckily we're getting both. I just purchased a video card that's twice as powerful as my current one, and only uses 2/3 the power. I'm upgrading from a CPU using up to 130W to just 77W, but still gaining 20-25% performance.

        Those are some good jumps in performance, but great leaps in efficiency. Total power consumption is a big factor moving forward in trying to reduce what we need from the grid.

        • Upgrading from a P4 machine to a rig equipped with an AMD Socket AM2+ era CPU noticeably dropped the electric bill at the house, and the old machine wasn't even a Prescott! Those P4 and Socket A era machines were real power hogs due to Intel and AMD one-upping each other in CPU speed without much regard to power use.
        • Luckily we're getting both. I just purchased a video card that's twice as powerful as my current one, and only uses 2/3 the power. I'm upgrading from a CPU using up to 130W to just 77W, but still gaining 20-25% performance.

          Those are some good jumps in performance, but great leaps in efficiency. Total power consumption is a big factor moving forward in trying to reduce what we need from the grid.

          7950 and 2500k?

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Tuesday May 08, 2012 @11:03AM (#39928993)
      Comment removed based on user account deletion
      • A fashionable opinion on Slashdot, no doubt, but go back and actually try out an older piece of hardware. I bet it will seem absolutely bog-slow. I remember the days not so long ago when I would shut down everything to fire up a browser (Netscape), and really think hard before opening a new window (no tabs, of course). Now I sit here with two browsers, each with dozens of tabs, mp3s playing in the background, big-ticket software like Photoshop and Illustrator running, and a disk-scan going, without the slig

        • by 0123456 ( 636235 )

          A fashionable opinion on Slashdot, no doubt, but go back and actually try out an older piece of hardware. I bet it will seem absolutely bog-slow. I remember the days not so long ago when I would shut down everything to fire up a browser (Netscape), and really think hard before opening a new window (no tabs, of course).

          I remember those days. It was 1996, when I had 4MB of RAM on my laptop and had to run both Apache and Netscape for web development. I was really glad when I managed to get another 4MB and eliminated the perpetual swapping.

          Otherwise, unless you had an insanely low amount of RAM or were running Vista, I can't see why you'd have had that problem 'not so long ago'.

          • What's a long time ago to you? I'm talking 2001ish I guess. I had a G3 250mhz PowerBook with 64mb of RAM. Classic Mac OS sucked at swapping--that's my whole point: software improved along with hardware.

        • I used to bitch about Windows, and to a lesser extent OS X, but my introduction to SSDs caused me a major re-evaluation as to where the real anchor on my time was
          • Me too, my 2008 machine has a relatively cheap SSD in it, and I'm not going to need to upgrade until it breaks (despite doing computationally expensive stuff like motion graphics).

        • This. While software vendors certainly deserve some part of the blame for eating more cycles, much of that is not bloat, and any realistic analysis of the problem must also take into account usage patterns. Does Photoshop use more cycles and more RAM than it used to? Yes, for certain. It's also able to do many more things than it used to, and is regularly run on huge images by relative standards. I also think nothing of having a browser with 20-30 tabs open, while listening to MP3s, editing a photo, a

      • Well, that's the funny thing. If you optimize too much, the code will only run on that one machine. With increasing levels of abstraction, we run into increasing costs.

        You have the engineer's dilemma -> write one program that eats several cores & 40 GBs of RAM, but runs on every machine, or you write one program that uses 5 processor cycles & 1 KB of RAM, but runs only on one machine.

    • My primary computer is a 7 year old laptop with 512M of RAM and it works great.
      After you turn off all the crap, XP takes up like 50M of RAM plus 100M of "System Cache", whatever that is. I haven't fully tweaked it out of sheer laziness: some guy built a "Micro XP" distro that can get it to run in 64M.
      Thing is, any stupid browser takes up more memory than the entire operating system, and leaks *heavily* due to the insanity that is JavaScript. The browser alone will easily eat all the RAM available. Don't get

    • I just made the switch from DDR2 to DDR3 in March and only did it because it's cheaper and easier to get a DDR3 motherboard and 16GB DDR3. Current mobo supports up to 32GB RAM, so I'll probably be good until DDR5 comes out. I still have a number of PCs and Servers on DDR and DDR2 and foresee it staying that way for a while.

      • What do you think it would take to convince motherboard manufacturers to put a few more DIMM slots on those boards?

        I want 256GB on my main machine, but I don't want to use a server motherboard (not enough expansion slots).

    • More importantly, what does "faster" mean?

      Higher bandwidth or lower latency? It's supposed to be both, but my guess is that latency is mostly not affected.

  • Great (Score:5, Funny)

    by demonbug ( 309515 ) on Tuesday May 08, 2012 @11:00AM (#39928935) Journal

    I predict a 33% performance increase going from DDR3 to DDR4 based on my own super-secret analysis of the press release.

  • I'll be impressed when they finally get around to changing DDR to TDR or QDR.
    • Re:DrrDrrArr (Score:5, Informative)

      by tlhIngan ( 30335 ) on Tuesday May 08, 2012 @11:26AM (#39929349)

      I'll be impressed when they finally get around to changing DDR to TDR or QDR.

      QDR's already around. In fact, a popular console already uses it. It's still heavily patented though, so it's not very appealing.

      The Playstation 3 has 256MB of XDR-DRAM by RAMBUS (yes, that RAMBUS). It does QDR - two bits on falling edge, two bits on rising edge (using multi-level signalling).

      It's tricky for memory because the bus speed is high, the signalling voltages are low, and the motherboard traces are bad enough that the eye window is very small, so a lot of (patented) tricks are needed to "open up" the eye and recover the bits from it. Impedance mismatches are a killer (and they happen at connectors especially).

      • Do you know if XDR(aside from having...historically unfortunate... friends) is considered theoretically viable for general-purpose use?

        I know that the PS3's RAM is soldered directly onto the mainboard; but that is normal for consoles. Does RAMBUS' secret sauce allow them to handle less controlled environments(in servers, say, if you can't do at least 8 DIMMs per socket you might as well go home) or are there technical reasons, as well as legal togetherness issues, that drove them to pursue specialty embe
    • Re:DrrDrrArr (Score:5, Informative)

      by beelsebob ( 529313 ) on Tuesday May 08, 2012 @11:41AM (#39929579)

      DDR2 effectively *is* QDR – it transfers 4 words per clock cycle... It just doesn't do it in quite the same way that true QDR RAM would. DDR3 effectively is ODR (octa-data-rate) RAM. DDR4 will effectively be HDDR (hexa-deca-data-rate) RAM.
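
      Rough sketch of the arithmetic behind that (Python; the prefetch sizes are the standard JEDEC ones for DDR/DDR2/DDR3, and the speed grades are just example parts):

          # How prefetch depth relates the internal array clock to the transfer rate.
          # Assumed prefetch sizes: DDR = 2n, DDR2 = 4n, DDR3 = 8n; speed grades are examples.
          generations = [
              ("DDR-400",   400e6,  2),
              ("DDR2-800",  800e6,  4),
              ("DDR3-1600", 1600e6, 8),
          ]
          for name, transfers_per_s, prefetch in generations:
              io_clock = transfers_per_s / 2 / 1e6            # two transfers per I/O clock (the "DDR" part)
              core_clock = transfers_per_s / prefetch / 1e6   # internal memory-array clock
              print(f"{name}: core {core_clock:.0f} MHz, I/O {io_clock:.0f} MHz, "
                    f"{prefetch} words per core clock")

      All three end up with a ~200 MHz core; the extra transfer rate comes entirely from fetching more words per array access, which is the sense in which DDR2/DDR3 behave like QDR/ODR relative to the core clock.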

  • Latency? (Score:5, Interesting)

    by ruiner13 ( 527499 ) on Tuesday May 08, 2012 @11:02AM (#39928973) Homepage
    What is the expected latency of this new RAM? I've noticed that as RAM technology has progressed, it has favored pure throughput over latency, but this is not always ideal. Is DDR4 going to help with this, or is this yet another advance that comes at the expense of added lag? Just curious on this. I didn't think RAM bandwidth was a problem, but latency could starve these current ultra-fast processors.
    • Re:Latency? (Score:5, Informative)

      by demonbug ( 309515 ) on Tuesday May 08, 2012 @11:10AM (#39929097) Journal

      13 clock cycles according to the all-knowing Wikipedia [wikipedia.org], so similar to the latency increase going from DDR2->DDR3; theoretically it will be made up for by increasing clock frequency, I guess, with DDR4 starting at 2133 MT/s (unfortunately I'm not clear on how transfers/s translates to MHz for DDR4 - is it the same two transfers per quad-pumped cycle?).
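
      For a feel of what those cycle counts mean in wall-clock time, here's a rough conversion (Python; it assumes DDR4 keeps the usual two transfers per I/O clock, and the DDR3 CL values are just typical ones for comparison):

          # CAS latency in nanoseconds = CAS cycles / I/O clock frequency
          def cas_ns(transfer_rate_mts, cas_cycles):
              io_clock_mhz = transfer_rate_mts / 2.0   # assumed: two transfers per I/O clock
              return cas_cycles / io_clock_mhz * 1000.0

          for label, mts, cl in [("DDR3-1333 CL9", 1333, 9),
                                 ("DDR3-1600 CL11", 1600, 11),
                                 ("DDR4-2133 CL13", 2133, 13)]:
              print(f"{label}: ~{cas_ns(mts, cl):.1f} ns")

      That works out to roughly 13.5, 13.8 and 12.2 ns respectively - more cycles, but the absolute latency stays about flat.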

    • Re:Latency? (Score:5, Informative)

      by rgbrenner ( 317308 ) on Tuesday May 08, 2012 @11:19AM (#39929241)

      Latency has significantly decreased, thanks to higher clock frequencies. See the chart on this page: http://en.wikipedia.org/wiki/CAS_latency [wikipedia.org]

      But RAM will always be slower than L1 and L2, simply because of the size of the memory.

      • "But RAM will always be slower than L1 and L2, simply because of the size of the memory"

        Actually, it is the proximity to the CPU core that is the primary factor here. A 512MB on-die cache will be faster than one off-chip (assuming competent designers) because you can clock the RAM much faster when the CLK (clock) signal has to travel microns rather than inches.

        • Actually, the size is quite important. The complexity of the address decoding logic and the capacitance of the bit lines on the memory array scale with the capacity. You can mitigate it somewhat by making the array wider (say, each access grabs 128 bytes instead of 64), but that increases power consumption and the complexity of the logic required to select the desired word from the big chunk you just pulled down.
      • It's both type and location. DRAM has worse access times than SRAM for various reasons. Also there is simply the distance from the processor. When you start wanting super low access times, distance matters. That's why L2 and L3 are on CPU dies these days. For L1, even that isn't enough; it has to be near the core to get the kind of speeds you want there.

        The good news is with judicious use of caching, you can have your cake and eat it too for the most part. You can use cheap DRAM for most of your memory, but get ove

    • by Twinbee ( 767046 )

      has favored pure throughput over latency

      Hey, sounds [apcmag.com] like [skytopia.com] the Linux OS.

      • It is a well known fact that Con Kolivas has inhaled too much anesthetic. The second link is to some page by a clueless guy who wouldn't know how to handle a benchmark if it involved a park bench and some paint.

        My Linux box turns on in under 10 seconds (from sleep mode - didn't have that in the IBM PC/XT days) and I get right to work. All of my apps are already open and ready to go, and Internet Connectivity is up and running (You remember the Internet and WiFi from the 80's right?). Try booting an IBM
        • by Twinbee ( 767046 )
          In the second link, they already said the measurements were rough. Even if they're only approximately right, it shows an indication of how horribly laggy the GUI in Linux is (or at least was). And sure they're not rigorous, but that's obvious anyway, as it's an experiment to show what a new typical user in the real world might experience, not to get numbers down to the last microsecond as what some fake benchmark might produce. Also remember the latest Ubuntu may have improved since then.

          If you try say,
        • by Twinbee ( 767046 )
          Do you have any reliable or semi-reliable sources which discredit what Con Kolivas has said, particularly in the 3rd page of the article I gave?
          • "Do you have any reliable or semi-reliable sources which discredit what Con Kolivas has said ... ?"

            Yes. Most of what he said is ridiculous. ... and I blockquote:

            " Had the innovative hardware driven development and operating system competition continued, there is no way it would have attracted as much following, developers and time to evolve into what it has become. The hardware has barely changed in all that time. PCs are ludicrously powerful compared to what they were when Linux first booted in 1991, but

            • by Twinbee ( 767046 )
              I was hoping for some links from independent parties, even if it's from a disgruntled post or two on a random forum (a debate would be nicer though). Not that I won't at least listen to you, but it would be nice to see other people feel the same way. That blockquote you gave could be interpreted slightly differently in that 'barely' is a relative term, making him not so mistaken as you would originally think. Anyway, let's give some choice quotes from page 3:

              Quote1:

              The main problem was that there simply was not a convincing way to prove that staircase was better on the desktop. User reports were not enough. There was no benchmark. There was no way to prove it was better, and the user reports if anything just angered the kernel maintainers further for their lack of objectivity.

              Quote2:

              And there are all the obvious bug reports. They're afraid to mention these. How scary do you think it is to say 'my Firefox tabs open slowly since the last CPU scheduler upgrade'? To top it all off, the enterprise users are the opposite. Just watch each kernel release and see how quickly some $bullshit_benchmark degraded by .1% with patch $Y gets reported. See also how quickly it gets attended to.

              Quote 3:

              Then I hit an impasse. One very vocal user found that the unfair behaviour in the mainline scheduler was something he came to expect. A flamewar of sorts erupted at the time, because to fix 100% of the problems with the CPU scheduler we had to sacrifice interactivity on some workloads. It wasn't a dramatic loss of interactivity, but it was definitely there. Rather than use 'nice' to proportion CPU according to where the user told the operating system it should be, the user believed it was the kernel's responsibility to guess. As it turns out, it is the fact that guessing means that no matter how hard and how smart you make the CPU scheduler, it will get it wrong some of the time. The more it tries to guess, the worse will be the corner cases of misbehaving. The option is to throttle the guessing, or not guess at all. The former option means you have a CPU scheduler which is difficult to model, and the behaviour is right 95% of the time and ebbs and flows in its metering out of CPU and latency. The latter option means there is no guessing and the behaviour is correct 100% of the time... it only gives what you tell it to give. It seemed so absurdly clear to me, given that interactivity mostly was better anyway with the fair approach, yet the maintainers demanded I address this as a problem with the new design. I refused. I insisted that we had to compromise a small amount to gain a heck of a great deal more. A scheduler that was deterministic and predictable and still interactive is a much better option long term than the hack after hack approach we were maintaining.

              Disclaimer: I'm not sure ho

              • "I was hoping for some links from independent parties"

                I can't take you seriously anymore.

                • by Twinbee ( 767046 )
                  Let me rephrase for you: I was hoping for other sources BESIDES yourself. At least refute quote 2 if you can.
                  • "And there are all the obvious bug reports. They're afraid to mention these."

                    So you want links to the part of the brain in the developers that shows that they are not afraid to mention these? Is Kolivas claiming that the developers are somehow removing reports of performance issues from lkml?

                    "How scary do you think it is to say 'my Firefox tabs open slowly since the last CPU scheduler upgrade'?"

                    How do you propose I refute a rhetorical question?

                    "To top it all off, the enterprise users are the opposite. Just

                    • by Twinbee ( 767046 )
                      I think we're agreed that money will hold more sway about what gets put into Linux.

                      That doesn't take anything away from the fact that as far as average desktop users are concerned, latency is given (to put it politely) second priority. In any case, a 0.1% increase in bandwidth performance at the cost of a 2-4x increase in GUI latency is pretty short-sighted in my opinion, no matter which way you look at it, even if he was exaggerating somewhat. Especially if the desktop is a goal for Linux (which
                    • "In any case, a 0.1% increase in bandwidth performance at a cost of 2-4x latency drop in GUI responsiveness is pretty short-sighted in my opinion, no matter which way you look at it, even if he was exaggerating somewhat. Especially if the desktop is a goal for Linux (which it appears to be)."

                      There is no problem. Any problem encountered amounts to improperly configured kernels. If I select Voluntary Preemption rather than Preemption, like Ubuntu does, then I too will get a much slower GUI response. Kern

    • Re:Latency? (Score:5, Interesting)

      by beelsebob ( 529313 ) on Tuesday May 08, 2012 @11:46AM (#39929659)

      Actually, RAM latencies have improved slightly over time; they just haven't improved as fast as transfer rates, so the units (number of transfer cycles waited) make it look like latency is getting a lot worse. The main reason RAM latencies haven't improved much is that they're not that important in the grand scheme of things.

      In reality, it takes around 200 transfers to get from the CPU asking for something to getting it; of that, only about 7-9 are the RAM. An improvement of one transfer makes that 199 transfers, instead of 200 – yay, we gained 0.5%. Except that in reality, the gain is not 0.5%, because in reality, most of the CPU's requests are in level 1 cache... Make that 0.005%. Except that in reality, the gain is not 0.005%, because in reality, most of the CPU's requests that are not in level 1 cache are in level 2 cache... Make that 0.00005%... You get the idea.

      The real way to sort out the latency issue is via tighter integration of things onto the CPU (hence why we've seen memory controllers move on board, and more levels of faster cache), not in skimming one or two cycles off how quickly the RAM responds.
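
      To put rough numbers on that argument, a toy average-memory-access-time calculation (Python; the hit rates and cycle counts are made-up but plausible, purely for illustration):

          # AMAT with two cache levels in front of DRAM; every number here is an assumption.
          l1_hit, l1_cycles = 0.95, 4     # 95% of accesses hit L1
          l2_hit, l2_cycles = 0.80, 12    # 80% of the remainder hit L2

          def amat(dram_cycles):
              return l1_hit * l1_cycles + (1 - l1_hit) * (
                  l2_hit * l2_cycles + (1 - l2_hit) * dram_cycles)

          base, shaved = amat(200), amat(199)   # shave one cycle off the DRAM round trip
          print(f"{base:.2f} -> {shaved:.2f} cycles, "
                f"a {(base - shaved) / base * 100:.2f}% improvement")

      With those assumptions the one-cycle saving is worth a bit over 0.1%; the exact figure depends entirely on the hit rates, but the point stands that the caches, not the DRAM cycle count, dominate.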

      • If your application does cache misses 99% of the time, then latency still matters.

        Also, lower latency could allow removing some cache levels, which would make the CPU faster and free up some transistors.

    • by 0123456 ( 636235 )

      From the benchmarks I've seen on DDR3, the increased clock speed does seem to increase performance up to around 1.6GHz. What I haven't seen is a comparison between max clock speed on DDR2 and DDR3.

    • by Bengie ( 1121981 )
      It's not that absolute latency has gone up between RAM generations, it's that the relative latency has gone up. CAS 3 latency on DDR1 is the same as CAS 6 latency on DDR2 because DDR2 has twice as high an external clock, but the same internal clock.

      There have been a few reviews involving modern DDR3 1066-1600, and the difference between 7-7-7 (CAS-CasToRas-RAS) and 11-11-11 is less than 1% performance across nearly every benchmark. Multiple cores coupled with huge amounts of cache with advanced pre-fetch units have all but nul
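
      The DDR1-vs-DDR2 comparison above works out like this (Python; standard speed grades assumed):

          # Same wall-clock CAS latency despite a doubled cycle count
          for label, io_clock_mhz, cl in [("DDR-400 CL3", 200, 3),
                                          ("DDR2-800 CL6", 400, 6)]:
              print(f"{label}: {cl / io_clock_mhz * 1000:.0f} ns")   # both come out to ~15 ns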
  • Slightly lower power consumption. Slightly faster memory. Sorry, but it's looking to me like just another way of obsoleting my portable faster, without significant performance improvement.

    • Like the car industry of the 1950s, the computer industry has now reached the point of incremental tiny improvements rather than revolutionary improvements (like jumping from 8 bit to 32 bit in one decade). I've had the same PC for 10 years and it still runs everything just fine (except the latest Flash update). It would have been impossible to run Windows 95 and the latest software on a 1985 PC.

      • This is exactly how I've felt about computers for the past decade. Unless you play a lot of games or do heavy editing work that justifies upgrading your video card, etc., there's no /true/ incentive to upgrade.
        • As someone above pointed out, the biggest gains these days are in power efficiency, not performance. You can upgrade your CPU/motherboard and video card and probably get perhaps 25% more performance, but with less than half the power usage, which over a year will probably pay for itself in reduced electric bills (even more so in southern climates with A/C).
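
          For a rough sense of the electric-bill arithmetic (Python; the wattage saved, hours of use and price per kWh are all assumptions, and any A/C savings come on top):

              # Illustrative yearly savings from a lower-power build; every input is assumed.
              watts_saved   = 100      # old box draws ~100 W more than the new one
              hours_per_day = 8
              usd_per_kwh   = 0.12
              kwh_per_year = watts_saved * hours_per_day * 365 / 1000.0
              print(f"~{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * usd_per_kwh:.0f}/year saved")

          At 24/7 usage the same 100 W works out to roughly three times that, which is where the "pays for itself" argument starts to look plausible.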

      • No, you just can't figure out your numbers... Remember, the jump from 32 bit to 33 bit is as big as the jump from 0 bit to 32 bit ;).

    • I was at the store the other day and they still had DDR, DDR2, and DDR3 on the shelf, DIMM and SODIMM sizes, in a variety of capacities...

      The price/GB sweet spot does seem to migrate to the 'current' flavor, after a period of new-hotness pricing; but the RAM industry doesn't seem to be pursuing its sinister forced upgrade strategy very aggressively...
    • "Sorry, but it's looking to me like just another way of obsoleting my portable faster, ..."

      Too late.

    • by gman003 ( 1693318 ) on Tuesday May 08, 2012 @12:08PM (#39929975)

      The initial DDR4 models will be only marginal increases over DDR3, true. But remember how the original DDR3 models were only marginally better than DDR2, or even how some initial DDR2 modules were *worse* than DDR?

      DDR3 is hitting a wall, where increasing the frequency any further is causing exponentially higher power usage and heat. I can't find any air-cooled DDR3-1866 or DDR3-2133 - every module I can find is water-cooled, because that's the only way to dissipate the heat. DDR4 begins at DDR4-2133, apparently without even needing a heat sink. And it's expected to scale to double those speeds, over time. And *those* you *can* upgrade - if you buy a DDR4-2133 device now, you can upgrade to DDR4-3200 or DDR4-4266 whenever you wish, if your memory controller supports it.

      DDR4 is also making a rather significant shift in architecture, going from a dual/triple/quad-channel-memory paradigm to a point-to-point system. So better scalability with multiple modules.

      Oh, and one quote cited a 40% decrease in power usage compared to an equivalent DDR3 module. That's hardly "slightly" lower.
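
      For context, the theoretical peak bandwidth those speed grades imply is easy to work out (Python; a standard 64-bit, i.e. 8-byte, channel is assumed):

          # Peak bandwidth per channel = transfers/s * 8 bytes
          for label, mts in [("DDR3-1600", 1600), ("DDR4-2133", 2133),
                             ("DDR4-3200", 3200), ("DDR4-4266", 4266)]:
              print(f"{label}: {mts * 8 / 1000:.1f} GB/s per channel")

      That is 12.8, 17.1, 25.6 and 34.1 GB/s per channel, respectively.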

      • Almost all processors today barely saturate ddr3-1600, and gain very little from ddr3-1866. (Source: http://www.tomshardware.com/reviews/ivy-bridge-benchmark-core-i7-3770k,3181-10.html)

        Secondly, there are no ddr4 devices now because, as the article summary said, ddr4 won't be out till next year. It will be pin-incompatible with ddr3 (to protect it from wrong voltages and different signalling methods). Also, ddr3-1866 and higher ram is available (I just bought some); it comes with air-cooled heatsinks, just t
  • by eln ( 21727 ) on Tuesday May 08, 2012 @11:13AM (#39929141)
    I just bought a new computer with DDR3 in it yesterday.
  • What a surprise that they neglect to inform us of the cost to the average geek of these Thuper-Duper improvements. What's so hard about saying the MSRP is projected to be $$$/GB? I can do the street-price discount on my own.
  • Obligatory Onion [theonion.com].
  • Intel has already confirmed that the 2013 "tock", Haswell, will still use DDR3.
    Not sure about AMD's position, but this sounds like DDR4 won't reach desktops and laptops until 2014 or 2015.

    • by Mashiki ( 184564 )

      Doubtful. You can bet you'll see Asus and Gigabyte with boards out late this year for testing, with full releases probably early or mid next year, in time for the new CPU cycle.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...