Hardware

DDR4 May Replace Mobile Memory For Less 145

Posted by Soulskill
from the not-dance-dance-revolution dept.
Lucas123 writes "The upcoming shift from Double Data Rate 3 (DDR3) RAM to its successor, DDR4, will herald a significant boost in both memory performance and capacity for data center hardware and consumer products alike. Because of the greater density, 2X performance and lower cost, the upcoming specification and products will for the first time mean DDR may be used in mobile devices instead of LPDDR. Today, mobile devices use low-power DDR (LPDDR) memory, the current iteration of which uses 1.2v of power. While the next generation of mobile memory, LPDDR3, will further reduce that power consumption (probably by 35% to 40%), it will also likely cost 40% more than DDR4 memory."

  • Excellent (Score:4, Funny)

    by DWMorse (1816016) on Tuesday May 15, 2012 @10:20PM (#40012619) Homepage

    With RAM that fast and cheap, 640kB ought to be enough for anyone!

    Whoops, I mean 6.40 x 10^7 kB. THAT ought to do it.

    • Re:Excellent (Score:4, Insightful)

      by arth1 (260657) on Tuesday May 15, 2012 @10:35PM (#40012687) Homepage Journal

      Fast and cheap are well enough, but cool and reliable are important factors too.

      From TFS, it looks like it may run cool, but I'll wait with the hallelujah until I've seen something about reliability. Especially because with die shrinks for flash, reliability has gone way down from the last generation - I hope that won't be the case with RAM too.

      • With Micron purchasing Elpida, Micron gonna get to make DDR4 DRAM with cell area of 4F2.

        On the other hand, Samsung's DRAM is still occupying cell area of 6F2.

      • by iamhassi (659463)
        Reliability is the reason I havent gone SSD yet. Every time I'm about to upgrade I read the reviews on newegg of some guy losing all his data. Guess if it's only for the OS and you clone it nightly that's not a big deal.... actually, that's not a bad idea, a small cheap ssd for the OS so it boots fast, then keep files on reliable hd and make clone of ssd to backup drive so u can still boot if the ssd dies....
        • by AvitarX (172628)

          That's what I did for my htpc (I Wanted to keep the inside as solid state as possible, as It's so small).

          I use a pair of 2.5 inch USB drives for reliability, and lack of cords for /home and backup.

        • Re:Excellent (Score:5, Insightful)

          by TheRaven64 (641858) on Wednesday May 16, 2012 @03:51AM (#40013961) Journal

          Reliability is the reason I havent gone SSD yet. Every time I'm about to upgrade I read the reviews on newegg of some guy losing all his data

          If 'some guy losing all of his data' is your reason for not buying an SSD, does it also stop you from buying a hard disk?

          • Ya no shit (Score:5, Insightful)

            by Sycraft-fu (314770) on Wednesday May 16, 2012 @05:02AM (#40014211)

            Also there's the fact that the people who post things like that are the whiny ones who had problems. I've never posted my SSD experiences before, because I'm happy, but here they are:

            I have 3 256GB SSDs, one in my laptop, two in my desktop. All have worked without flaw since their purchase 11 months ago. Thus I never felt the need to go whine online about them. I've suffered no failures, no data loss. They just work.

            Now, do SSDs die? Sure. So do HDDs. In terms of personal HDDs I've had about 5 fail on me over the course of my 20ish years using computers. At work, I've seen hundreds fail. Some are dead on arrival, some fail within hours of install, some fail after months or a year, some are still going strong 10+ years later.

            SSDs are fine. You need to back up your data, but then that is true of anything. If you don't back up your data and have never lost anything to HDD failure that is luck, not because HDDs don't fail.

            If you want an SSD the only issue should be cost. They are expensive, about $1/GB at best and as much as $3/GB for some of the really high performance/lots of write cycles stuff. HDDs are more like $0.08/GB. However if the price is acceptable, then get one. Back up the data on it to a HDD (since HDDs are cheaper, and a different technology) and you are fine. Could it die? Sure, if it does, RMA it, get a new one, and go back to what you were doing.

            • by bungo (50628)

              Hmmm.... insightful?

              Can you say "Selection bias"?

              I've just jumped out of a 20 story building, after 10 floors, everything is fine, so I don't expect it to be any different after another 10......

              Now, if you could point to a study showing that the failure rates of SSDs vs HDDs are no different.

              • Here's the problem: You are mad about the selection bias of my story, but ok with the selection bias of some dude whining on Newegg? I've never seen a test showing SSDs are more reliable or unreliable, and I suspect neither have you (since I suspect none have been done).

                Given the lack of such evidence, you simply have to go on other things you use normally to buy components like brand reliability and warranty. Well, in the warranty department, SSDs are fine, 3 years is the norm, 5 years is available from so

                • by arth1 (260657)

                  Here's the problem: You are mad about the selection bias of my story, but ok with the selection bias of some dude whining on Newegg? I've never seen a test showing SSDs are more reliable or unreliable, and I suspect neither have you (since I suspect none have been done).

                  Many have been done. Google is your friend. No, not just for searching, but Google has quite a bit of experience with both HDDs and SSDs in their data centres.

                  Given the lack of such evidence, you simply have to go on other things you use normally to buy components like brand reliability and warranty. Well, in the warranty department, SSDs are fine, 3 years is the norm, 5 years is available from some manufacturers. That's fairly similar to HDDs. Also, good chance they figure most SSDs will live at least 3 years if they warranty them for that long, as they don't want to be paying for replacements.

                  Except that the warranty covers defects in manufacturing; it doesn't cover wear and tear. If your drive wears out, you won't get a new one.

                  I just don't get where the mistrust for SSDs comes from. Near as I can tell, it is just from whiny people. Some dude gets an SSD, it fails, he gets crybaby and posts bad reviews on Newegg, Amazon, his blog, and so on. This happens a few times and it is "common knowledge" that SSDs are unreliable despite no actual evidence of that.

                  They have a finite number of write cycles[*]. And the new generation (TLC) are even worse than the last (MLC). SLC drives aren't available anymore because they cost too much.

                  [*] According to Centon [centon.com], SLC gives

        • I've been using HDDs for the last 20+ years and I think every single one I have owned has survived past the day I retire them (typically 3 - 5 years). I'm aware that statistically I've been rather lucky but the tipping point for me is whether I can buy an SSD, of decent size, that stands a good chance of surviving for at least 3 years, preferably a bit more. I've been watching this space for months / years and am still left with enough uncertainty to have prevented me from going SSD, until now anyway. I'm g

        • Reliability is the reason I havent gone SSD yet.

          My experience is that SSDs fail more often than hard drives, more catastrophically, and without warning, especially on the low end.

          But I still use them because they're super fast. Put them in a mirror. Use SLC where it's really important to not have a mirror half fail.

        • by tedgyz (515156) *

          There is a decent hybrid solution available for SSDs. With the Intel Z68 chipset, you can set up an SSD as a cache for the primary hard drive. It is configured similarly to RAID. This allows me to get the capacity of an HD with the performance gains of an SSD. However, I have not been able to easily quantify the performance gains.

      • by rrohbeck (944847)

        That's why there is ECC.
        Why anybody runs without it is beyond me.
        With EDAC I see occasional ECC errors on many systems.

        • Below please find a table listing DDR / DDR2 / DDR3

          http://chipdesignmag.com/images/idesign/misc/defazio_table1.gif [chipdesignmag.com]

          Does anyone have the numbers for DDR4 ?

        • Re:Excellent (Score:4, Informative)

          by arth1 (260657) on Wednesday May 16, 2012 @06:43AM (#40014591) Homepage Journal

          That's why there is ECC.
          Why anybody runs without it is beyond me.

          In the case of HHC, which TFS mentions, likely because you need to buy, fit, and power the extra circuitry. Added development costs, production costs, size requirements, and larger power drain are a hard sell.

          On a PC, the main reason is that Intel only supports it on Xeon CPUs. A secondary reason is consumerism, where people pick the cheapest system that has "comparable" specs, without understanding minute differences, or caring about longevity.

          For servers, can you even buy them without ECC? Every single IBM or Dell system I've purchased over the last few years always came with ECC RAM. But the mind boggles when some expensive RAID controllers come with non-ECC RAM!

  • Awesome. (Score:5, Insightful)

    by bistromath007 (1253428) on Tuesday May 15, 2012 @10:25PM (#40012651)
    Now we just have to wait for Intel to give a goddamn about it. Quick, somebody tell AMD to be competitive again for a few months.
    • Re:Awesome. (Score:4, Interesting)

      by Baloroth (2370816) on Tuesday May 15, 2012 @10:37PM (#40012703)

      If DDR4 is really as power-saving as they say, AMD will be competitive simply by adopting it (more than they already are). At the low-power end, especially low-cost low-power, AMD is pretty competitive with Intel already. If they can push out DDR4-compatible server products first, they could stand to gain quite a lot (Intel isn't planning on offering DDR4 till 2014, so AMD has a year and a half).

      • by rrohbeck (944847)

        Mmmm, 4 way DDR4 - I'd buy a new motherboard for that.

      • by TheEyes (1686556)

        Even more interesting is that AMD's APUs are severely memory-constrained; even Llano really needed DDR-1866 or higher (if it existed) to really show what it could do, and Trinity is probably even more constrained. If AMD goes the same route as they did with Phenom II and includes both a DDR3 and DDR4 controller (and makes their chip compatible with both old sockets and new DDR4-compatible ones) they might be able to pull off some interesting design wins in the low power gaming-capable market.

        • by hairyfeet (841228)

          But gamers are, for the most part, gonna use a discrete rather than an APU and the consumers frankly don't know DDR 4 from a grilled cheese sandwich so other than a few benches I just don't see it making that big a deal. Sure the higher scores will make a nice bullet point but I have one of the E350 APUs and with 1200Mb of DDR 3 1333MHz RAM dedicated to video movies are buttery smooth and the games I play don't jerk and isn't that what matters?

          I've been selling AMDs in my shop pretty exclusively since the I

      • by jd (1658)

        AMD now has low-power cores due out later in the year. I imagine their timing is designed to make it possible for manufacturers to design motherboards with the new memory in mind, rather than have them aim for DDR3 and only develop a DDR4 board later.

    • Re:Awesome. (Score:4, Interesting)

      by hairyfeet (841228) <bassbeast1968@NOsPAM.gmail.com> on Wednesday May 16, 2012 @01:19AM (#40013483) Journal

      Ya know I just have to laugh at statements like yours because while the reviewers haven't liked AMD that hasn't stopped the OEMs from gobbling up every chip they can handle. Hell, AMD had to slow down their desktop lines just to give more room to mobile because they kept running out! Take the Bobcats, they haven't had an update in awhile yet the OEMs are slapping them into everything from netbooks to laptops to all-in-ones as fast as they can get them, same with Llano. Even the lower end Bulldozers have been selling quite briskly and the reviewers couldn't stand that chip!

      One thing AMD does seem to get is, to coin a phrase, "It's the GPU, stupid". I mean what does the average person DO with their machine? FB, webmail, YouTube videos and movies. Is there a single job on that list that even the lowest Bobcat can't do? Nope. Hell, I recommended to my dad to get his GF a little Acer with the C60 chip in it which is just a 1Ghz dual core with turbo and she can't stop gushing about the thing! It plays her FB games, lets her chat anywhere without being tied to the cords, she is just tickled to death with it.

      While DDR 4 may give the integrated GPUs in AMD chips a little speed boost frankly they haven't been having too much trouble selling them or having them run pretty much anything you want. Don't take my word for it, look up the Youtube videos on chips like the E350 where they are playing Crysis on it. I have 8Gb of DDR 3 in mine which gives the GPU 1200Mb of system memory and while i don't often game on it i can say that the little E350 stays cool to the touch even after hours of HD video or office work.

      So I don't think you have to worry about AMD, friend. Once they have gotten rid of the last of their stake in GloFo they should be doing quite well with Piledriver and Bobcat II. The RAM in TFA will give it a free speed boost but the only ones who seem to care about such things are the reviewers. For everybody else, as long as it does the tasks they have with good battery life they'll be quite happy.

    • by dbIII (701233)
      They are VERY competitive in some markets:
      64 core 2GHz AMD system 128GB memory - $9000
      80 core 2GHz Intel system 128GB memory - $66000
      There may be some Intel systems around the 60 core mark (six sockets, ten cores), but I've got no idea how much it would be and it would still max out at 2GHz for now. To get speed you've got to go for less cores :(
    • by Sycraft-fu (314770) on Wednesday May 16, 2012 @04:49AM (#40014167)

      You think all Intel has to do is say "Hey! We'd like to support DDR4," and it just happens?

      Not so much, actually. First off it has to actually, you know, be a real specification. The spec isn't final and released yet. They can't really start to use something that isn't final and subject to change.

      Once it is actually out comes the harder part. They have to redesign the memory controller, which is on the chip now, to accommodate it. DDR4 isn't "DDR3 but faster," it is a different spec that works differently. Big difference is no more RAM channels with multiple sticks. It is a point-to-point memory interface. So that is going to require a different setup, particularly to support large numbers of memory sticks. Along with that, the motherboards will have to be redesigned to accommodate the new RAM. Again given the point-to-point nature, the wiring would be different even if all the connectors were the same (which they aren't).

      Then of course those new chips have to be fabbed, tested and made ready for sale, and those boards have to be rolled out. After all that, they still need memory. The memory manufacturers will have to retool their lines and get DDR4 chips and sticks produced in quantity to be sold.

      When all that is done, then DDR4 can hit the market and go in your computer (if you purchase a new board, and processor).

      So, maybe give it 6-12 months, rather than just bitching at Intel for not "giving a damn"? Just because you don't understand how something works, doesn't mean it is easy to do. Implementing a new RAM spec isn't something you just snap your fingers on, it isn't a tiny patch for software. It is a pretty major thing.

      You'll probably see it in systems next year. Intel's roadmap says it will be coming to Haswell-EX server chips first, I haven't seen what AMD's plans are.

    • by jd (1658)

      I would imagine DDR4's higher performance will be important to Intel now that they've designed an even faster Itanium chip (8-core, multithreaded). The power requirement probably wouldn't be much of a factor, though.

  • 1.2V of power? (Score:5, Interesting)

    by mvdw (613057) on Tuesday May 15, 2012 @10:33PM (#40012679) Homepage
    1.2V of power??!! This is supposed to be news for nerds. Nerds should know the difference between voltage and power.
    • Re:1.2V of power? (Score:4, Informative)

      by Bengie (1121981) on Wednesday May 16, 2012 @12:07AM (#40013163)
      Power consumption for computer chips

      C=Capacitance
      F=Frequency
      V=Voltage
      P=Power

      P=VC^2F

      Capacitance is static, so there are only two variables, F and V. As you can tell, amperage doesn't even play into the equation.

      A chip may draw amperage, but that is just a function of C and F.
      • by Anonymous Coward

        Plus leakage which is dependent on voltage, temperature and process (fast corner parts have a lot more leakage than slow corner parts).

      • by mvdw (613057)
        That is all true, but the units for power are "watts", not "volts". You wouldn't say your car goes 33mpg fast, would you?
        • Again, because as the grandparent said, the equation is one of capacitance, frequency, and voltage. So when you produce something of equal frequency, with lower voltage, it'll use lower power. Watts isn't normally specified because that depends on the specific frequency you are using, how many chips, etc. As a designer you can compare the voltage differences to tell you what kind of power savings you can expect. The specifics of that translated to watts is based on your design.

          It may not be technically corr

          • by mvdw (613057)

            No, it still doesn't make sense. The outcome may be the same, but the terminology used is incorrect. From the summary:

            "Today, mobile devices use low-power DDR (LPDDR) memory, the current iteration of which uses 1.2v of power. While the next generation of mobile memory, LPDDR3, will further reduce that power consumption (probably by 35% to 40%)"

            Which would be much better stated as:

            "Today, mobile devices use low-power DDR (LPDDR) memory, the current iteration of which runs at 1.2V. The next generation o

        • That is all true, but the units for power are "watts", not "volts".

          The thing is, the EE post-docs who are designing chips find the terminology useful. So, being pedantic here just makes things worse, not better.

          If it helps, we often talk about a 3GHz chip as being faster than a 2GHz chip. Everybody knows that's not always true, but we all know the assumptions, so it's still a useful conversation.

      • P=VC^2F

        That equation is plain wrong.

        The equation you are looking for is P=(V^2)CF, which is derived from combining the equations P=IV and I=VCF, and provides a reasonable approximation for how digital CMOS power consumption will behave.

        But it's only an approximation for a couple of reasons

        1: I=VCF gives an average current. P=IV is true for instantaneous voltage and instantaneous current. It is only correct for average current if voltage is constant. Assuming that voltage is constant is an approximation of reality.
        2:
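The corrected relation can be sanity-checked with a quick numeric sketch (the capacitance and frequency figures below are made up for illustration; only the voltage ratio matters for the comparison):

```python
# Approximate dynamic CMOS power: P = C * V^2 * f
# (ignores leakage, which the thread notes is voltage- and temperature-dependent)
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    return capacitance_f * voltage_v**2 * frequency_hz

# Hypothetical part run at 1.5 V vs 1.2 V, same clock and load
p_15 = dynamic_power(1e-9, 1.5, 800e6)
p_12 = dynamic_power(1e-9, 1.2, 800e6)

# Power scales with V^2: (1.2 / 1.5)^2 = 0.64, i.e. a 36% reduction
print(f"relative power at 1.2 V: {p_12 / p_15:.2f}")  # → 0.64
```

This is why voltage is the headline number in RAM specs: for a fixed frequency and design, the voltage ratio alone tells you the approximate dynamic-power savings.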

    • by Anonymous Coward

      Also, the voltage was written 1.2v, while the correct format (in the SI system) is 1.2 V, with a space before the unit.

    • by pahles (701275)
      You quoted the post wrong. It said "1.2v of power". Clearly the OP is talking about "the 1.2 version of power". Everybody knows LPDDR uses the 1.2 version of power. Don't you? Please read first and comment later!
      • We are still working on version 2. According to my pointy-haired boss it will be a synergetic enhanced power package for the most advanced enabled robust business solutions. :)
    • by Twinbee (767046)
      It was only a couple of days ago when I moaned about this very subject.

      Let's standardize on watts (for power), or joules / watt-hours (for energy). Yes, amps or volts are useful in some situations, but for the average consumer, we only want to hear about the raw energy/power something consumes/supplies. Batteries make this mistake too and it only leads to confusion when comparing technologies.
  • Volts are not a measure of power. Watts are.

    More importantly, energy to accomplish a particular task is what really matters. Though usually, we're just given average or typical power numbers. But your mobile device's battery stores energy, not power or potential.

    At least the voltage is proportional to power and energy...

    • by rrohbeck (944847)

      At least the voltage is proportional to power and energy...

      Fail.
      P=V^2/R.
      E=P*T=V^2*T/R.

      • OK, I was playing fast and loose with my English. I should have said that "there is a positive correlation between voltage and power and energy" or something to similar effect. As opposed to no correlation (or a negative one).

        In any case, at least for a lot of signaling types, P=V^2/R doesn't matter, because R is approximately infinity. What matters more is charging and discharging the capacitance of the traces on the PCB, etc., so E=1/2*C*V^2 is the quantity of interest. With memory, this can vary, as ther
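The switching-energy point above can be sketched with illustrative numbers (the 10 pF trace capacitance and 800 MHz toggle rate are made-up values, not from any datasheet):

```python
# Energy to charge/discharge a capacitive load: E = 1/2 * C * V^2
# For near-ideal CMOS signaling, R barely matters; the load capacitance does.
def switching_energy_j(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v**2

# Hypothetical 10 pF PCB trace driven at 1.2 V
e = switching_energy_j(10e-12, 1.2)   # joules per transition
p = e * 800e6                         # average watts if it toggles at 800 MHz
print(f"{e:.2e} J/transition, {p * 1e3:.2f} mW")  # → 7.20e-12 J/transition, 5.76 mW
```

Note the V^2 term again: halving the signaling voltage would cut per-transition energy to a quarter, which is why both LPDDR and DDR4 chase lower voltages.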

    • by Junta (36770)

      More importantly, energy to accomplish a particular task is what really matters

      Actually, power also matters in many scenarios. If you need more than ~10 amps from a 110V circuit, that's not real practical for home use regardless of the kWh-efficiency the solution gets. Also, for interactive tasks idle power matters critically because most of the time is not bound by performance of your computer bits, but by the human considering what's on the screen. Cell phones mostly fall into the latter category.

  • This may be the most confusing article summary I've ever read. I read it 5 times before I gave up trying to understand it.

    Headline: DDR4 May Replace Mobile Memory For Less
    Summary: LPDDR3, will further reduce that power consumption (probably by 35% to 40%), it will also likely cost 40% more than DDR4 memory."

    • by Anonymous Coward

      Uh, it makes sense to me.

      DDR4 may be used as opposed to LPDDR3 simply because the power savings may not be considered worthwhile when the RAM is 40% more expensive. What the summary fails to explicitly point out is that DDR4's power envelope roughly matches LPDDR, which I guess is why it's considered "good enough" all of a sudden. Presumably this means cheaper devices may go DDR4 to save a bit of cash, whilst premium devices may opt for the more expensive solution to squeeze a bit of battery life out.

      Eithe

    • If I understand correctly, LPDDR2 draws significantly less power than DDR3.

      DDR4 will be competitive with LPDDR2.

      But in turn, LPDDR3 will draw significantly less power than DDR4.

      So manufacturers will have the choice of preserving today's mobile power levels by going with DDR4. Or they can use a more expensive LPDDR3 with lower power but, presumably, lower performance.

  • by Anonymous Coward
    ...a significant boost in software bloat to nullify all that great hardware progress. I'm still stunned as to why anything takes perceptible time on a modern computer. Things should just pop and wink into existence at the merest click.
  • What I hate about "New tech XYZ increases throughput by 2x!" is:

    Why didn't they simply specify the high transfer rate in the original spec (as in USB)?

    Why didn't they simply specify a lower voltage in the first place (for memory)?

    • by pushing-robot (1037830) on Tuesday May 15, 2012 @10:59PM (#40012821)

      Yep, it makes you wonder why we bothered with old technology at all. Why didn't we start using today's computers fifty years ago? Think of all the time and effort it would have saved!

    • by blueg3 (192743) on Tuesday May 15, 2012 @11:01PM (#40012849)

      In order for a spec to be useful, you need to be able to actually build the specified system. The reason they don't encompass things that they can't currently build in the specs is that they want the specs to be useful.

    • by batkiwi (137781)

      Technology to make it work at X speed and Y voltage didn't exist at the time, and for something like a memory module you don't design it to take a range of voltages or speeds.

    • by jd (1658)

      As others have noted, the tech wasn't there.

      However, in the more abstract sense, you can only extrapolate models so far beyond the furthest point for which you have data before the models break down. But you don't know when that will happen; it depends on how good the model is, and you can't know that in advance.

      Specs are therefore reasonably conservative. They'll go a little bit beyond what's feasible right now, but only a little. Just enough to give wiggle-room and space for progress, but not to the point

  • by magarity (164372) on Tuesday May 15, 2012 @11:07PM (#40012879)

    I'm glad they keep it reasonably simple with DDR(1), 2, 3, and now 4. I dread the arrival of RAM2015 or somesuch nonsense one day.

  • by Anonymous Coward

    I don't care if it costs less. What are they selling it for?

  • I'm waiting for DDR MAX 2 and DDR Extreme. The difficulty level in the early ones is just- ... Ohhh we're talking about memory here. Carry on, folks. Ignore me.

  • by Anonymous Coward

    Some graphics cards have GDDR5 in them, so why not use that?

    • I hope that was a joke. That's like asking why not use an airplane engine in a car, just because the version number is 1 greater than the current car engine versioning system.

      If you really want to compare, GDDR5 loosely compares with DDR3, and DDR4 is better than that.

  • Will DDR4 have the same slot pinout, or a different one from DDR3? If I have a PC w/ DDR3, can a DDR4 substitute it, or are we looking @ a change in a whole lot of things, from chipsets to the RAM itself?
