
Overclocked Memory Breaks Core i7 CPUs

arcticstoat writes "Overclockers looking to bolster their new Nehalem CPUs with overclocked memory may be disappointed. Intel is telling motherboard manufacturers not to encourage people to push the voltage of their DIMMs beyond 1.65V, as anything higher could damage the CPU. This will come as a blow to owners of enthusiast memory, such as Corsair's 2,133MHz DDR3 Dominator RAM, which needs 2V to run at its full speed with 9-9-9-24 timings."
  • by Freeside1 ( 1140901 ) on Tuesday October 07, 2008 @04:47PM (#25291653)
    Agreed. I overclock, but I accept the risk, and do a little research first.
  • by Anonymous Coward on Tuesday October 07, 2008 @04:50PM (#25291683)

    Push what limits?

    You're not pushing a CPU; it was designed to run faster! Just binned lower.

    You're not overclocking RAM at 2V. It's designed to run at that voltage!

    This isn't an overclocking issue; it's a design flaw by Intel. Not our fault you can't see the forest for the trees.

  • by VEGETA_GT ( 255721 ) on Tuesday October 07, 2008 @04:56PM (#25291797)

    You are missing a point here. There are RAM chips out there that are designed to run at more than 1.65V, so you do not even need to overclock for this to happen.

    For example, OCZ Platinum 2GB (2 x 1GB) 240-Pin DDR3 SDRAM DDR3 1333 is rated at 1.8V as standard. That's NOT overclocking.

    I agree that if you overclock and break something, it's your own problem. But this product can't even use some decent RAM the way it's advertised to be used without blowing the CPU. At that point I would want my CPU replaced, thank you.

  • by gEvil (beta) ( 945888 ) on Tuesday October 07, 2008 @04:59PM (#25291837)
    So you don't buy that memory to use with your new chip--that memory is out of spec.
  • by Piranhaa ( 672441 ) on Tuesday October 07, 2008 @05:02PM (#25291889)

    Yes and no. The JEDEC specifications say that DDR2 must be able to handle UP TO 2.3 volts before incurring any PERMANENT damage. However, 1.9V is considered the max when stability is of concern, and anything over that is not guaranteed to work (properly).

    DDR3 is specified to work at 1.575V, but able to withstand up to 1.975V. Again, no guarantees it will function properly, but (according to the standard) that shouldn't fry it. Other factors do come into play, such as shorter life, more heat generated, more power used, etc.

    The JEDEC specification is for memory modules. What Intel is saying is that its processor will (likely) get damaged by anything more than 1.65V.
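
    As a sanity check on those numbers, here's a minimal Python sketch (the check_dimm_voltage function is a hypothetical illustration, not a real tool; the limits are the figures quoted above):

        # JEDEC figures quoted above: DDR3 nominal 1.575V, absolute max 1.975V.
        # Intel's guidance for Core i7/Nehalem: keep DIMMs at or below 1.65V.
        DDR3_NOMINAL_V = 1.575
        DDR3_ABS_MAX_V = 1.975
        INTEL_I7_LIMIT_V = 1.65

        def check_dimm_voltage(volts):
            """Classify a DIMM voltage against the JEDEC and Intel limits."""
            if volts > DDR3_ABS_MAX_V:
                return "over JEDEC absolute max: risks permanent DIMM damage"
            if volts > INTEL_I7_LIMIT_V:
                return "within JEDEC tolerance, but over Intel's 1.65V CPU limit"
            return "within both JEDEC and Intel guidance"

        print(check_dimm_voltage(2.0))  # the 2V Dominator kit from the summary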

  • by Soko ( 17987 ) on Tuesday October 07, 2008 @05:04PM (#25291907) Homepage

    Push what limits?

    You're not pushing a CPU; it was designed to run faster! Just binned lower.

    You're not overclocking RAM at 2V. It's designed to run at that voltage!

    This isn't an overclocking issue; it's a design flaw by Intel. Not our fault you can't see the forest for the trees.

    Run a CRC on your brain, sparky, you dropped a bit or two.

    The Nehalem CPU is designed to run at the JEDEC spec of 1.5V, but can handle 1.65V without being binned. Yes, the RAM is designed for 2V, but the CPU wasn't - use the RAM and you take a chance on killing the CPU and voiding your warranty.

    60nm parts have 25% more area in which to absorb electrons and 25% more dielectric between elements than a 45nm part, so of course they could handle more voltage without damage. It's a design flaw in material physics, not the processor.
     

  • by Anonymous Coward on Tuesday October 07, 2008 @05:13PM (#25291997)

    1.8 volts for DDR3 memory is severely out of spec.

    The nominal voltage is 1.5V. Chips that nominally operate at higher voltages are of *LOWER QUALITY* than chips operating at the proper 1.5V.

    The ability to increase voltage to offset more aggressive timings than the memory supports is the real issue. At that point you are getting no real performance improvement, plus the real possibility of random bit flips and additional wear on the memory/northbridge/CPU.

    DDR3 and CPU caches are all about bulk data transfers and have nothing to do with latency. Whatever silly gains you think you are getting by playing with timings are hidden by the nature of the hardware.

  • by lagfest ( 959022 ) on Tuesday October 07, 2008 @05:19PM (#25292055)

    By adjusting the RAM voltage, you are also raising the voltage on the input pins of the processor. Overvolting an I/O pin can cause latch-up, which is basically a short circuit.

  • by MrFlibbs ( 945469 ) on Tuesday October 07, 2008 @05:25PM (#25292123)
    Looks like there are enough missed points to go around. The JEDEC DDR3 specification (see JEDEC Standard No. 79-3B) explicitly defines VDD as 1.5 V +/- 0.075 V for DDR3-compliant memory modules. Furthermore, the max supported frequency is 1600 MHz.

    What OCZ and other like-minded manufacturers are doing is intentionally violating the DDR3 spec to enable overclockers. Higher frequencies can only be reached with higher voltages, so they screen the DRAM chips to find the ones that can be pushed the farthest. These are then sold to enthusiasts to enable them to "push the envelope" on their gaming monster.

    Specifications exist to enable interoperability between different manufacturers. Intel is supporting the spec. OCZ is not. It's hard to blame Intel for not supporting OCZ's non-compliant parts.
  • by mr_mischief ( 456295 ) on Tuesday October 07, 2008 @05:31PM (#25292191) Journal

    No, you're right. In rare cases an overclocked Celeron performed better than the standard-clocked Pentium 3 of the same nominal speed on most benchmarks. It's been a long time since the Pentium 3 and that generation of Celerons, though, and it usually wasn't worth doing even then.

  • Re:dominator (Score:3, Informative)

    by DavidKlemke ( 1048264 ) on Tuesday October 07, 2008 @05:33PM (#25292225) Homepage

    Back in the day of DDR1 you'd be right, but these days the timing numbers on the RAM are much larger, and this isn't necessarily a bad thing. DDR3 runs much faster than its older brothers, so the actual latency times are quite comparable.

    The bigger numbers in timings mean a whole lot less when the clock is ticking that much faster :)
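
    To make that arithmetic concrete, here's a minimal Python sketch (the parts and timings are illustrative assumptions, not from the thread):

        # True CAS latency in nanoseconds = CAS cycles / I/O clock frequency.
        # For DDR memory, the I/O clock is half the effective "DDR3-xxxx" rate.
        def cas_latency_ns(cas_cycles, data_rate_mt_s):
            io_clock_mhz = data_rate_mt_s / 2  # two transfers per clock
            return cas_cycles / io_clock_mhz * 1000  # cycles/MHz -> ns

        print(cas_latency_ns(2, 400))   # DDR-400 CL2   -> 10.0 ns
        print(cas_latency_ns(9, 1600))  # DDR3-1600 CL9 -> 11.25 ns
        print(cas_latency_ns(9, 2133))  # DDR3-2133 CL9 -> ~8.4 ns

    So even at CL9, the much faster clock keeps DDR3's absolute latency in the same ballpark as old low-CL DDR1.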

  • by stun ( 782073 ) on Tuesday October 07, 2008 @05:40PM (#25292335)
    DDR3 specs

    DDR3 modules can transfer data at the effective clock rate of 800-1600 MHz (see here) [wikipedia.org]

    That means DDR3-1600 is the max speed as a standard.
    Anything faster than DDR3-1600 is memory already overclocked by the memory manufacturer.


    However, Nehalem supports up to DDR3-1333 only.

    Other features discussed include support for DDR3-800, 1066, and 1333 memory. (see here) [intel.com]

    As a hardware enthusiast (but not an overclocker), I would rather be using DDR3-1600 memory.
    Understandably, the overclocking community would want to use DDR3-2000 or faster (if any).

    Personally, I would not be buying Nehalem until a newer one comes out
    with at least DDR3-1600 or faster support.
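
    Putting those figures together, a small Python sketch (the support list just restates the Intel document cited above; peak bandwidth assumes the standard 64-bit module bus):

        # JEDEC DDR3 speed grades (effective rate in MT/s) versus the rates
        # the Intel document above lists as supported on Nehalem.
        JEDEC_DDR3 = {"DDR3-800": 800, "DDR3-1066": 1066,
                      "DDR3-1333": 1333, "DDR3-1600": 1600}
        NEHALEM_SUPPORTED = {"DDR3-800", "DDR3-1066", "DDR3-1333"}

        for grade, mt_s in JEDEC_DDR3.items():
            peak_mb_s = mt_s * 8  # 64-bit bus = 8 bytes per transfer
            note = "supported" if grade in NEHALEM_SUPPORTED else "not officially supported"
            print(f"{grade}: {peak_mb_s} MB/s peak, {note}")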

  • by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Tuesday October 07, 2008 @05:55PM (#25292569) Homepage

    The data and address lines are connected. No amount of design can change that.

  • by Hells Ranger ( 305981 ) on Tuesday October 07, 2008 @06:53PM (#25293291)

    Probably because the I/O voltage rating of Intel's transistor technology is lower than AMD's. The Intel CPU is on a 45nm process and AMD on a 65nm process, and bigger processes are usually more tolerant. If Intel's I/O runs at 1.5V, we can suppose there are two reasons for the limit on the RAM.

    First, if the I/O goes beyond 1.5V, you can either break the protection diode on the CPU pin or inject current into the chip's I/O power rail. That is bad because it forces the power supply to compensate and try to keep the same voltage on the power pin, while the higher voltage coming in from the digital pins creates a differential on the internal power supply lines, which then carry more current than they were designed for. That causes the lines to heat up and dissipate a lot of power, eventually breaking them.

    The second option is that a voltage higher than the transistors are made to support causes more electrons to leak through the gate, eventually breaking down the insulating layer. If the insulator becomes cracked enough, a pinhole can form, creating a contact between the gate and the substrate. Transistor gates are in reality small capacitors, so with a contact between the two sides a gate becomes a wire. That would cause the transistor to stop working. It would also inject a changing voltage onto the power lines inside the chip, which is worse than the previous problem: once a pinhole is created, you can inject either a positive voltage or ground onto both power rails, at different rates, everywhere. That would effectively assure the destruction of the I/O bank.

  • by sexconker ( 1179573 ) on Tuesday October 07, 2008 @07:23PM (#25293591)

    Because Intel and other chip fabricators can run lower-level tests on the actual electronics of the chip than a nerd on the internet can.

    They can physically inspect the chips from a given batch.

    The most that 99% of overclockers do is run a program to calculate Pi to a hojillion places overnight.

    Intel and other chip fabricators have set tolerances for the electronics. If a part falls within the tolerances, it is deemed good; if it doesn't, it is deemed bad.

    For Intel and other fabricators, if a chip passes physical inspection, and a batch of them meets or beats the MTBF, they are considered good. If they pass physical inspection, but are statistically deviant from the MTBF (in a bad way), the batch is bad.

    In a processor, logical failure is often the end result of physical failure, but physical failure usually does NOT end in logical failure.

    You CAN prove that any given processor is logically reliable if you run all possible valid input sequences on it. This is beyond astronomical (but not infinite, since we're talking about the logical level, and there are a finite number of logical states in any processor, along with a finite number of valid inputs).

    You cannot prove that a processor is physically reliable, since the processor physically changes as you use it. This is why we have tolerances. Unfortunately, we want more performance, which means smaller fabrication processes, which means tighter tolerances, which means lower yields.
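
    As a back-of-the-envelope illustration of "beyond astronomical but finite", here's a Python one-off (the 1 KiB of architectural state is a made-up assumption, far smaller than any real CPU):

        # A toy processor with just 1 KiB of architectural state has
        # 2**8192 distinct logical states: finite, but far beyond anything
        # you could enumerate, let alone cross with every input sequence.
        state_bits = 1024 * 8            # hypothetical 1 KiB of state
        num_states = 2 ** state_bits
        print(len(str(num_states)))      # ~2467 decimal digits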

  • by bjourne ( 1034822 ) on Tuesday October 07, 2008 @07:38PM (#25293731) Homepage Journal

    60nm parts have 25% more area in which to absorb electrons and 25% more dielectric between elements than a 45nm part, so of course they could handle more voltage without damage. It's a design flaw in material physics, not the processor.

    And that looks like a fault in your calculation. 45^2 = 2025, 60^2 = 3600. 3600/2025 = 1.78. So 60 nm parts have 78% more area.
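
    Spelling that correction out in Python (assuming feature area scales with the square of the process size):

        linear = 60 / 45
        print(linear)       # ~1.333 -> 33% more linear dimension
        print(linear ** 2)  # ~1.778 -> 78% more area, not 25%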

  • by sexconker ( 1179573 ) on Tuesday October 07, 2008 @08:13PM (#25294041)

    A properly written software test is what, exactly?

    Something that would cause the chip to fail physical inspection may not show up on any software test, especially if it only caused the part to be rebinned to a slower speed.

    A CPU can be operating incorrectly in countless ways. Whether it shows up on one specific logical test under certain physical conditions, or whether it continues to show up or not after a certain amount of time is another issue entirely.

    Go to school, or go back, or major in something other than retardism.

  • by frieko ( 855745 ) on Tuesday October 07, 2008 @09:29PM (#25294591)
    Although they are related measurements, process names refer to the RAM cell pitch, not the size of the transistors.
  • by aliquis ( 678370 ) on Tuesday October 07, 2008 @10:14PM (#25294919)

    If it's porn rendered in 0.5 fps on the Dell maybe it will.

  • by warrior ( 15708 ) on Wednesday October 08, 2008 @12:05AM (#25295745) Homepage

    A high-speed clock and data recovery system like that used to implement the memory controller and RAM won't be fixed with additional mobo components. Put anything in that path and it will very likely break. 2.0V is likely well above the Vmax of the FETs used in Intel's controller. They needed to take care of the voltage conversion at the pads to avoid issues like this. Instead I'm guessing they run the whole controller at the same voltage as the pads. That might allow them to run the controller logic and FFE/DFE faster but it's bad for power and then causes problems like this one. This looks like a bad design on the part of Intel.

  • by frieko ( 855745 ) on Wednesday October 08, 2008 @01:08AM (#25296101)
    Bzzt, nope. Process names are half the distance between two adjacent DRAM cells. I know you're thinking CPUs don't even have any DRAM cells, but it is what it is. See: MOSFET and Front-end Process Integration by Zeitoff, Hutchby and Huff.
