Google Finds DRAM Errors More Common Than Believed
An anonymous reader writes "A Google study of DRAM errors in their data centers found that they are hundreds to thousands of times more common than has been previously believed. Hard errors may be the most common failure type. The DIMMs themselves appear to be of good quality, and bad mobo design may be the biggest problem." Here is the study (PDF), which Google engineers published with a researcher from the University of Toronto.
Re:Percentage? (Score:5, Informative)
"We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a surprisingly small effect on error behavior in the field, when taking all other factors into account."
Re:Percentage? (Score:5, Informative)
No, I don't believe so. They use server boards, custom made to their specs. And, I'm pretty sure that those specs include ECC memory - that is the standard for servers, after all. http://news.cnet.com/8301-1001_3-10209580-92.html [cnet.com] If you're really interested, that story gives you a starting point to google from.
Bus errors! (Score:5, Informative)
What I have seen (and generated) is the occasional (2-3/day) bus error with specific (nasty) data patterns, usually at a few addresses. I write that off to mobo trace design and crosstalk between the signals; failing to round the corners sufficiently, or leaving spurs, is the likely problem. I think HyperTransport is a balanced design (push-pull differential, like Ethernet) and should be less susceptible.
Re:ZFS (Score:3, Informative)
Obviously, not having RAM errors would be even nicer; but if you can at least detect trouble when it arises rather than well afterward, you can keep it from propagating further, and get away with using cheap redundancy instead of expensive perfection.
Re:Bus errors! (Score:3, Informative)
I had a RAM stick (256MB DDR I think) with a stuck bit once. At first I just noticed a few odd kernel panics, but then I got a syntax error in a system Perl script. One letter had changed from lowercase to uppercase. That's when I ran memtest86 and found the culprit.
At the time, a "mark pages of memory bad" patch for the kernel did the trick and I happily used that borked stick for a year or so.
Re:Percentage? (Score:5, Informative)
Good news
The study had several findings that are good news for consumers:
* Temperature plays little role in errors - just as Google found with disk drives - so heroic cooling isn't necessary.
* The problem isn't getting worse. The latest, most dense generations of DRAM perform as well, error-wise, as previous generations.
* Heavily used systems have more errors - meaning casual users have less to worry about.
* No significant differences between vendors or DIMM types (DDR1, DDR2 or FB-DIMM). You can buy on price - at least for the ECC-type DIMMS they investigated.
* Only 8% of DIMMs had errors per year on average. Fewer DIMMs = fewer error problems - good news for users of smaller systems.
Re:ECC on a home system? (Score:3, Informative)
ECC is slower by something like 1%, which is completely unnoticeable since RAM contributes relatively little to the overall system performance. 2x faster RAM won't make things run twice as fast, because normally CPU caches get a > 90% hit ratio. Otherwise things would be incredibly slow, as the fastest RAM still is horribly slow and has a horrible latency compared to the cache.
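As a rough illustration of why faster RAM buys less than you'd expect, here is a back-of-the-envelope average-memory-access-time calculation (the latencies and hit ratio below are assumed round numbers for illustration, not measurements from any particular CPU):

```python
# Back-of-the-envelope average memory access time (AMAT).
# All latency figures are illustrative assumptions.
CACHE_HIT_NS = 1.0   # assumed cache hit latency
RAM_NS = 60.0        # assumed DRAM access latency
HIT_RATIO = 0.90     # the ~90% cache hit ratio mentioned above

def amat(hit_ns, miss_ns, hit_ratio):
    """Average access time: hits served by cache, misses go to RAM."""
    return hit_ratio * hit_ns + (1 - hit_ratio) * miss_ns

base = amat(CACHE_HIT_NS, RAM_NS, HIT_RATIO)          # 0.9*1 + 0.1*60 = 6.9 ns
fast_ram = amat(CACHE_HIT_NS, RAM_NS / 2, HIT_RATIO)  # 2x faster RAM: 3.9 ns
print(base, fast_ram, base / fast_ram)  # speedup ~1.77x, not 2x
```

With these numbers, halving RAM latency only speeds up the average access by about 1.8x, and a ~1% ECC penalty on the RAM portion alone disappears into the noise.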
Re:Percentage? (Score:4, Informative)
Uh, the article showed that temperature has surprisingly little to do with it.
The rest is accurate.
Re:Percentage? (Score:3, Informative)
... Running ECC performs a basic parity check, nothing more...
Not [wikipedia.org] exactly [wikipedia.org]...
Re:Want to confirm? Look at your bittorrent log. (Score:5, Informative)
The checksum used by TCP is several orders of magnitude more likely to match a corrupted packet than the checksum used by bittorrent. (citation [psu.edu])
More than likely these are transmission errors where the TCP checksum matched but the bittorrent checksum did not.
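For the curious, here is a minimal sketch of why the two checks differ so much in strength: TCP's 16-bit ones'-complement checksum is insensitive to the order of 16-bit words, so some corruptions slip past it that a cryptographic hash like BitTorrent's SHA-1 piece hashes will catch (the payload bytes here are arbitrary examples):

```python
import hashlib

def internet_checksum(data: bytes) -> int:
    """RFC 1071-style 16-bit ones'-complement checksum, as used by TCP."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

a = b"\x12\x34\xab\xcd"
b = b"\xab\xcd\x12\x34"  # the same 16-bit words, reordered in transit
assert internet_checksum(a) == internet_checksum(b)          # TCP can't tell these apart
assert hashlib.sha1(a).digest() != hashlib.sha1(b).digest()  # SHA-1 can
```

Since the ones'-complement sum is commutative, any reordering (and many other corruptions) keeps the TCP checksum valid while the SHA-1 piece hash fails, which is exactly the pattern a bittorrent log would show.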
Re:Percentage? (Score:3, Informative)
UPS - Uninterruptible Power Supply
Now many UPSs also include a Power Conditioner, but a UPS is not a power conditioner.
Re:"RAID"-style system for RAM... (Score:2, Informative)
No, not really.
RAID-5 allows for disk failure via distributed block parity. ECC recovers only single-bit errors.
The "Memory RAID" design should prevent a larger issue (multi-bit/DIMM failure/etc. that ECC cannot prevent) from taking the whole system out.
I would imagine that ECC memory would be used in conjunction with higher-level striping or mirroring to prevent and recover from both failures.
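A toy sketch of the mirroring idea, using a hypothetical MirroredWord wrapper (the CRC here stands in for per-DIMM ECC; none of this is a real memory-controller API):

```python
import zlib

class MirroredWord:
    """Toy model of memory mirroring: two copies, each with a checksum
    (standing in for per-DIMM ECC). A whole-copy failure that ECC alone
    couldn't correct is recovered from the surviving mirror."""
    def __init__(self, value: bytes):
        self.copies = [(value, zlib.crc32(value)), (value, zlib.crc32(value))]

    def corrupt(self, idx: int, bad: bytes):
        # Simulate a multi-bit/whole-DIMM failure in one mirror
        # (the stored checksum is left stale, so the failure is detectable).
        self.copies[idx] = (bad, self.copies[idx][1])

    def read(self) -> bytes:
        for value, crc in self.copies:
            if zlib.crc32(value) == crc:
                return value  # first copy that still verifies
        raise RuntimeError("both mirrors corrupted")

w = MirroredWord(b"payload")
w.corrupt(0, b"garbage")
print(w.read())  # recovered from the second mirror: b'payload'
```

The design trade-off is the same as disk mirroring: you pay double the capacity to survive failures larger than the per-copy code can correct.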
Re:"RAID"-style system for RAM... (Score:2, Informative)
You can do this. My IBM x3550 servers (which are ancient) have this option. It's set by jumpers on the motherboard.
Re:Percentage? (Score:5, Informative)
I work on server design, specifically motherboards. ECC is a feature: it helps prevent bit errors from passing through undetected. It is not a method for preventing errors from happening in the first place, nor does it influence the number of bit errors. That is a property of the motherboard design, the chipset, the DIMM PCB and the DRAM. Second, just because you provide a spec for a mobo does not mean that it is all-inclusive. Generally people specify form factor, power, and features. They don't specify quality, and in most cases don't give criteria for what it means for a feature to "work". In fact, most customers I've talked to don't really understand what quality means for hardware (and sometimes in general). Hardware development, much like software, is driven by the same impact/effort principles: if customers don't care, we don't test. In other words, if it ain't listed on the box, or the salesman won't write it down, just assume it wasn't done.
Even though computer motherboards are digital electronics, there is in fact anything but a binary determination of "work" and "not work". Digital signals are an engineering approximation, one which falls apart at high speeds, with dense routing, and in inexpensive designs. Well-designed and well-tested motherboards have a well-known bit error rate, and reliable companies will not ship a new design until they meet their target. I do this on the systems I design, but they aren't cheap, not by a lot. It is a very expensive, time-consuming process, one which most companies would really like to get rid of. Not all systems are so thoroughly tested; in fact the vast majority of boards out there, server or otherwise, aren't tested much at all.
Forking over money for ECC is a lot like paying the mob to protect you. Yes, it will give you more peace of mind, but what you really want is not to be having these problems in the first place. If you care about data integrity, you should be asking what the bit error rate is and how the vendor knows it. If they don't know, then you don't want the board, ECC or no ECC. Don't assume "the industry" is uniform, and don't assume that because a vendor's product X is really good, their product Y is really good too: you WILL be wrong, particularly with computers.
How to enable ECC after booting... (Score:2, Informative)
The commands to do this are:
You can watch the scrub address register incrementing using
setpci -d 1022:1203 60.L 5C.L
Similar commands work on the K8 (single-core Athlon 64), but the device is :1103, and leave the msbyte of 58.L alone (there is no L3 cache scrubber).
Re:Percentage? (Score:5, Informative)
"Regular RAM" has neither parity nor ECC.
The original PC added a 9th bit to each byte, creating parity RAM. It was unique among personal computers at the time: none (or nearly none) of the original PC's contemporaries did this. But since IBM did, many clones followed suit in the PC space. Macs, notably, didn't support parity for many, many years, but if you pop open a Columbia Data Products PC [textfiles.com], you'll see parity RAM. (Note "128K RAM with parity" in that scan.) IBM went with byte parity in part because bytes were the smallest memory unit the CPU read from or wrote to memory. With byte parity, every memory access could be protected.
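The 9th-bit scheme is easy to sketch. This is an illustrative model of even byte parity, not actual PC memory-controller behavior:

```python
def parity_bit(byte: int) -> int:
    """Even parity: the stored 9th bit makes the total number of 1s
    (data bits plus parity bit) even, as on the original PC's memory."""
    return bin(byte).count("1") & 1

def check(byte: int, stored_parity: int) -> bool:
    """Recompute parity on read and compare against the stored bit."""
    return parity_bit(byte) == stored_parity

b = 0b1011_0010            # 4 one-bits -> parity bit is 0
p = parity_bit(b)
flipped = b ^ 0b0000_1000  # a single stuck/flipped bit
print(check(b, p), check(flipped, p))  # True False
```

Note the limitation: a single flipped bit is detected but cannot be located or corrected, and two flips in the same byte cancel out and pass unnoticed; that is what the wider ECC scheme below fixes.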
This ratio of 9/8 stuck with the PC's memory system over the years, following it to ever wider interfaces. That includes the 16-bit buses of the 286 and 386SX, the 32-bit buses of the 386DX and 486, and the 64-bit bus of the original Pentium. While many manufacturers made the byte parity optional as a cost saver, it was still rather common.
Once you get to 64 bits, you have 8 extra parity bits for a total memory width of 72 bits. This is enough bits to implement a single-error correct, double-error detect Hamming code [wikipedia.org] on the 64-bit data. As long as you always read or write in multiples of 64 bits, you can also generate the Hamming code on writes and check it on reads.
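A toy version of the same idea at small scale: Hamming(7,4) plus an overall parity bit gives single-error correction and double-error detection on 4 data bits, just as the Hamming(72,64) code does on 64. This is a sketch of the coding math, not how a real memory controller is wired:

```python
def hamming74_encode(d):
    """d is 4 data bits; returns 7 code bits (parity at positions 1, 2, 4)."""
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def syndrome(c):
    """Returns the 1-based position of a single flipped bit, or 0 if none."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return s1 + 2 * s2 + 4 * s3

def secded_encode(d):
    c = hamming74_encode(d)
    overall = 0
    for bit in c:
        overall ^= bit        # extra parity bit over the whole codeword
    return c + [overall]

def secded_decode(word):
    c, overall = list(word[:7]), word[7]
    s = syndrome(c)
    par = overall
    for bit in c:
        par ^= bit            # 0 if overall parity is still consistent
    if s == 0 and par == 0:
        status = "ok"
    elif par == 1:            # odd number of flips: correct the single bit
        if s:
            c[s - 1] ^= 1
        status = "corrected"
    else:                     # even flips but nonzero syndrome: two errors
        status = "uncorrectable"
    return status, [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = secded_encode(data)
one = word[:]; one[4] ^= 1            # single bit error
print(secded_decode(one))             # ('corrected', [1, 0, 1, 1])
two = word[:]; two[1] ^= 1; two[5] ^= 1  # double bit error
print(secded_decode(two)[0])          # 'uncorrectable'
```

The 72-bit version works the same way, just with 8 check bits spread over 64 data bits, which is why the 9/8 parity ratio could be reused for ECC unchanged.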
Note that caveat: "As long as you always read or write in multiples of 64 bits." By the time you get to the 486 era, on-board L1 caches started to become standard equipment. Caches can turn a single byte read or write into a multiple byte line-fill (assuming they do read-allocate and write-allocate). They can also make writes wider. In write-back mode, they tend to write back the entire cache line if any portion was updated. In write-through mode, they could theoretically package additional bytes from the cache line to go with whatever bytes the CPU wrote to get to a minimum data size. (I don't know if the 486 or Pentium actually did this, FWIW. I'm speaking of general principles of operation.)
The combination of caches and wider buses made ECC practical for PC hardware starting with the Pentium. That's why you started to see it in that time frame and not before.
BTW, the error rate for individual DRAM bit flips should increase as the bits get smaller. It doesn't surprise me that your Pentium Pro's bits never flipped. It was probably built around 16 megabit DRAM chips, or maybe 64 megabit. If you compare a 16 megabit DRAM chip to a 1 gigabit DRAM chip of the same physical size, the bit cells on the gigabit chip are 1/64th the size. That means far fewer electrons holding the bit. As you can imagine, that might increase the likelihood of error per bit. Google's study didn't show an increase in error rate across memory technologies, but its window of memory technologies didn't stretch back 15 years to the Pentium Pro era.
There's also just the total quantity of memory. Your Pentium Pro system probably had at most 128MB. Compare that to a modern system with 4GB. A 4GB system has 32x the memory of a 128MB system. Even if the per-bit error rate remained constant, there are 32x as many bits, so 32x as many errors. Modern systems also implement scrubbing, meaning they actively read all of memory in the background looking for errors. Older systems just waited for the CPU to access a word with a bad bit to raise an error. This also makes the observed error rate drastically different, since many errors would go by unnoticed in a system without scrubbing, but would get proactively noticed (and fixed) in a system with scrubbing.
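A toy model of why scrubbing matters. The error schedule and the "two hits per word is fatal" rule are invented for illustration (SECDED ECC corrects one bad bit per word; a second hit in the same word before a scrub makes it uncorrectable):

```python
def run(inject, scrub_every=None, steps=10):
    """inject maps time step -> list of word indices hit by a bit error.
    Returns the number of uncorrectable (double-bit) events."""
    errors = {}            # word index -> accumulated bit errors
    uncorrectable = 0
    for t in range(steps):
        for word in inject.get(t, []):
            errors[word] = errors.get(word, 0) + 1
            if errors[word] >= 2:
                uncorrectable += 1
        if scrub_every and t % scrub_every == 0:
            errors = {}    # scrubber reads every word; ECC fixes single errors
    return uncorrectable

schedule = {1: [7], 5: [7]}  # two single-bit errors hit word 7, steps apart
print(run(schedule))                 # 1: the second hit is uncorrectable
print(run(schedule, scrub_every=2))  # 0: the scrubber cleared word 7 in between
```

Without scrubbing, the two correctable single-bit errors silently accumulate into an uncorrectable double-bit error; with periodic scrubbing, each is found and fixed while still correctable, which also explains why scrubbed systems report more (observed) errors.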
FWIW, I run my systems these days with ChipKill ECC enabled and scrubbing enabled. Not taking chances. I'll give up 3-5% on performance since most of the time I won't notice it.