Google Finds DRAM Errors More Common Than Believed 333
An anonymous reader writes "A Google study of DRAM errors in their data centers found that they are hundreds to thousands of times more common than has been previously believed. Hard errors may be the most common failure type. The DIMMs themselves appear to be of good quality, and bad mobo design may be the biggest problem." Here is the study (PDF), which Google engineers published with a researcher from the University of Toronto.
Percentage? (Score:4, Interesting)
"a mean of 3,751 correctable errors per DIMM per year."
I'm much too lazy to do the math. Let's round up - 4k errors per year has to be a vanishingly small percentage for a system that is up 24/7/365, or 5 nines. The fact that these DIMMs were "stressed" makes me wonder about the validity of the test. Heat stress, among other things, will multiply errors far beyond what you will see in normal service.
Re:Percentage? (Score:5, Informative)
"We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a suprisingly small effect on error behavior in the field, when taking all other factors into account."
Re: (Score:2)
"We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a suprisingly small effect on error behavior in the field, when taking all other factors into account."
What temperature range does "the field" encompass, as opposed to "lab conditions"?
They found a similar result with hard disks, but their data pretty much ends at around 40 degrees, roughly where the typical desktop PC's drive starts.
Re: (Score:2)
Add to that the fact that Google (apparently) tends to run their data centers "hot" compared to what is commonly accepted, and use significantly cheaper components, and you've got a good explanation for why their error count is as high as it is.
Re:Percentage? (Score:5, Insightful)
Add to that the fact that Google (apparently) tends to run their data centers "hot" compared to what is commonly accepted, and use significantly cheaper components, and you've got a good explanation for why their error count is as high as it is.
Yeah, but let's look at the more common situation - a home. Variable temperatures, most likely QUITE variable power quality, low-quality PSU, and almost certainly no UPS to make up for it. Add that to low-quality commodity components (mobo & RAM).
I'd not be surprised to find the problem much more prevalent in non-datacenter environments.
Switching to high-quality memory, PSU & UPS has made my systems unbelievably reliable the last several years. YMMV, but I doubt by much.
Re:Percentage? (Score:5, Informative)
Good news
The study had several findings that are good news for consumers:
* Temperature plays little role in errors - just as Google found with disk drives - so heroic cooling isn't necessary.
* The problem isn't getting worse. The latest, most dense generations of DRAM perform as well, error-wise, as previous generations.
* Heavily used systems have more errors - meaning casual users have less to worry about.
* No significant differences between vendors or DIMM types (DDR1, DDR2 or FB-DIMM). You can buy on price - at least for the ECC-type DIMMs they investigated.
* Only 8% of DIMMs had errors per year on average. Fewer DIMMs = fewer error problems - good news for users of smaller systems.
Re: (Score:3, Interesting)
IIRC, ECC RAM has extra bits and hardware to fix any single-bit error and record that it happened.
Regular RAM only has parity, which can tell the motherboard the data is suspect but not which bit flipped. Kernel panic, Blue Screen, Guru Meditation #, whatever.
It's the same RAM, just arranged differently on the DIMM.
I once had a dual Pentium Pro that required ECC RAM. The BIOS recorded 0 ECC errors in the three years or so that it was my primary machine. Which is what the Google study would lead me to expect.
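As a toy illustration of the parity case the parent describes (a Python sketch, not any chipset's actual logic):

```python
# Byte parity as on old parity RAM: one extra bit per byte. It flags
# any single-bit flip but cannot locate it, so the machine can only
# report the error (panic/Blue Screen), not repair it.
def parity_bit(byte: int) -> int:
    return bin(byte).count("1") % 2

stored = 0b10110010
tag = parity_bit(stored)             # the 9th bit, written with the byte
corrupted = stored ^ 0b00001000      # one bit flips in storage
print(parity_bit(corrupted) != tag)  # -> True: detected, but not fixable
```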
Re: (Score:2)
***Regular ram only has parity***
Commodity DRAM hasn't had parity since the early 1990s, when DRAM was selling for $100 a megabyte. Microsoft -- which was trying to sell its memory-hungry Windows OS -- pushed for the removal of parity in order to reduce DRAM prices, claiming (probably incorrectly) that DRAM failures were no longer a significant problem. I wished at the time, and still wish, they hadn't done that. Up to that point, Microsoft's record was actually pretty consumer friendly. No more regretta
Re:Percentage? (Score:5, Informative)
"Regular RAM" has neither parity nor ECC.
The original PC added a 9th bit to each byte, creating parity RAM. It was unique among personal computers at the time. None (or nearly none) of the original PC's contemporaries did this. But, since IBM did, many clones followed suit in the PC space. Macs, notably, didn't support parity for many, many years, but if you pop open a Columbia Data Products PC [textfiles.com], you'll see parity RAM. (Note "128K RAM with parity" in that scan.) IBM went with byte parity in part because bytes were the smallest memory unit the CPU read or wrote to memory. With byte parity, every memory access could be protected.
This ratio of 9/8 stuck with the PC's memory system over the years, following it to ever wider interfaces. That includes the 16-bit buses of the 286 and 386SX, the 32-bit buses of the 386DX and 486, and the 64-bit bus of the original Pentium. While many manufacturers made the byte parity optional as a cost saver, it was still rather common.
Once you get to 64 bits, you have 8 extra parity bits for a total memory width of 72 bits. This is enough bits to implement a single-error correct, double-error detect Hamming code [wikipedia.org] on the 64-bit data. As long as you always read or write in multiples of 64 bits, you can also generate the Hamming code on writes and check it on reads.
Note that caveat: "As long as you always read or write in multiples of 64 bits." By the time you get to the 486 era, on-board L1 caches started to become standard equipment. Caches can turn a single byte read or write into a multiple byte line-fill (assuming they do read-allocate and write-allocate). They can also make writes wider. In write-back mode, they tend to write back the entire cache line if any portion was updated. In write-through mode, they could theoretically package additional bytes from the cache line to go with whatever bytes the CPU wrote to get to a minimum data size. (I don't know if the 486 or Pentium actually did this, FWIW. I'm speaking of general principles of operation.)
The combination of caches and wider buses made ECC practical for PC hardware starting with the Pentium. That's why you started to see it in that time frame and not before.
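If you want to play with the idea, here's a toy SECDED (single-error-correct, double-error-detect) Hamming code in Python, shown on 8 data bits for brevity rather than the 64-data/8-check layout of a real 72-bit ECC DIMM - same construction, smaller word, and purely a sketch, not a memory controller's actual circuit:

```python
# Toy SECDED Hamming code: parity bits sit at power-of-two positions
# 1,2,4,8; data bits fill the remaining positions; an overall parity
# bit at index 0 upgrades single-error-correct to double-error-detect.
DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]

def encode(data8):
    code = [0] * 13                       # index 0 holds overall parity
    for i, pos in enumerate(DATA_POS):
        code[pos] = (data8 >> i) & 1
    for p in (1, 2, 4, 8):                # parity bit p covers positions
        for pos in range(1, 13):          # whose index has bit p set
            if pos != p and (pos & p):
                code[p] ^= code[pos]
    code[0] = sum(code[1:]) % 2           # overall parity enables DED
    return code

def decode(code):
    syndrome = 0
    for p in (1, 2, 4, 8):
        s = 0
        for pos in range(1, 13):
            if pos & p:
                s ^= code[pos]
        if s:
            syndrome |= p
    overall = sum(code) % 2               # 1 => odd number of flipped bits
    if overall:                           # single-bit error: fix in place
        code[syndrome] ^= 1               # syndrome 0 = the parity bit itself
        return "corrected"
    if syndrome:                          # even flips, nonzero syndrome
        return "uncorrectable (double-bit)"
    return "clean"

word = encode(0b10110010)
word[6] ^= 1                              # a cosmic ray flips one bit
print(decode(word))                       # -> corrected
```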
BTW, the error rate for individual DRAM bit flips should increase as the bits get smaller. It doesn't surprise me that your Pentium Pro's bits never flipped. It was probably built around 16 megabit DRAM chips, or maybe 64 megabit. If you compare a 16 megabit DRAM chip to a 1 gigabit DRAM chip of the same physical size, the bit cells on the gigabit chip are 1/64th the size. That means far fewer electrons holding the bit. As you can imagine, that might increase the likelihood of error per bit. Google's study didn't show an increase in error rate across memory technologies, but its window of memory technologies didn't stretch back 15 years to the Pentium Pro era.
There's also just the total quantity of memory. Your Pentium Pro system probably had at most 128MB. Compare that to a modern system with 4GB. A 4GB system has 32x the memory of a 128MB system. Even if the per-bit error rate remained constant, there are 32x as many bits, so 32x as many errors. Modern systems also implement scrubbing, meaning they actively read all of memory in the background looking for errors. Older systems just waited for the CPU to access a word with a bad bit to raise an error. This also makes the observed error rate drastically different, since many errors would go by unnoticed in a system without scrubbing, but would get proactively noticed (and fixed) in a system with scrubbing.
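The quantity half of that argument is easy to check (a back-of-the-envelope sketch assuming a constant per-bit error rate, which, per the shrinking-cell point above, is the optimistic case):

```python
# 32x the bits at the same per-bit error rate => 32x the observed errors.
bits_128mb = 128 * 2**20 * 8
bits_4gb = 4 * 2**30 * 8
print(bits_4gb // bits_128mb)  # -> 32
```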
FWIW, I run my systems these days with ChipKill ECC enabled and scrubbing enabled. Not taking chances. I'll give up 3-5% on performance since most of the time I won't notice it.
Re: (Score:2)
Eh? I was following this thread, and I misread and followed the route of digishaman as well. I'm not defending him; just sometimes people fail to read properly when multitasking, myself included.
Re: (Score:2)
Re: (Score:2)
Talk about a misunderstanding.
First, the paper on hard drives did show that temperature was important. It did show, though, that too cold is worse than too hot. Also, the data wasn't perfect. Google doesn't have a whole lot of drives running at strange temperatures, since they're a datacenter. A consumer, though, might well run a drive at 60C in a badly cooled desktop or laptop, and there's n
Re: (Score:2)
At 3,751 errors per DIMM per year, a system with 2 sticks (very common for dual channel) is getting about 20 bits flipped per day. The question then is how long it will take for that to screw up something important.
Since a modern machine has plenty of RAM for disk cache, and in many workloads most memory would be dedicated to that, this would easily mean that every day some software operates on data that's not exactly what was on disk, and if you write any significant amount of data back, it's quite possibl
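For what it's worth, the parent's arithmetic holds up (a quick check against the study's mean figure):

```python
# 3,751 correctable errors per DIMM per year, two DIMMs installed:
print(2 * 3751 / 365)  # -> ~20.6 flipped bits per day
```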
Re: (Score:2)
Temperature plays little role in errors - just as Google found with disk drives [...]
That's not what Google found at all. They found that in the temperature range typically seen in an air-conditioned datacentre, temperature is not a major influence on failure rates. Their data shows that once the temperature rises above about 40 degrees C, failure rates start to increase. 40 degrees is pretty typical for the average home PC, and downright cool in cramped cases like iMacs.
Re: (Score:2)
I'll second this. Once or twice I skimped on the mobo or memory in a pinch, and those have been the only machines of mine to have stability issues post-Windows 98. (Even in Windows 98 I could get about 3 weeks of uptime before needing a reboot. It sucked, but it wasn't as bad as what some people had to deal with.)
Re: (Score:3, Insightful)
Yeah, but let's look at the more common situation - a home. Variable temperatures, most likely QUITE variable power quality, low-quality PSU, and almost certainly no UPS to make up for it. Add that to low-quality commodity components (mobo & RAM).
The vast majority of people have laptops now, which come with a built-in UPS.
Re: (Score:2)
The vast majority of people have laptops now, which come with a built-in UPS.
I doubt the battery system of a laptop does any undervoltage or power spike protection. A UPS is more than a battery.
Re: (Score:3, Informative)
UPS - Uninterruptible Power Supply
Now many UPSs also include a Power Conditioner, but a UPS is not a power conditioner.
Re: (Score:2)
Now many UPSs also include a Power Conditioner, but a UPS is not a power conditioner.
True, but the power conditioning is what's going to improve the life of your system, most likely, not the battery backup.
Re: (Score:2)
Seconded - my private PC runs very reliably with a quality PSU and ECC RAM. It does not have a UPS but the power grid is quite stable here in Germany.
Re: (Score:2)
It varies from town to town here in the U.S. I've always been fortunate to live in good power areas (and Los Angeles used to give us 90 p.s.i. water pressure!). But when we move to our retirement house, I'm gonna need a power conditioner. The lights dimmed several times when I was re-painting it recently, and went off once for a few minutes; the tenants said it happens ALL the time. I always get high-end RAM and PSUs; I've seen others suffer for the lack.
Re: (Score:3, Interesting)
Your post:
Post before yours:
Re: (Score:2)
I'm much too lazy to do the math. Let's round up - 4k errors per year has to be a vanishingly small percentage for a system that is up 24/7/365, or 5 nines. The fact that these DIMMs were "stressed" makes me wonder about the validity of the test. Heat stress, among other things, will multiply errors far beyond what you will see in normal service.
Except it depends on how the modules were originally tested. The study is saying that they break more than previously thought, rather than that they break a lot. If they were originally tested in a stressed system similar to Google's, and Google is finding that they have far more errors than they should, then their study is still valid.
Re:Percentage? (Score:4, Insightful)
"a mean of 3,751 correctable errors per DIMM per year."
Hey, the ECC did its job! Let's all go home.
I'm much too lazy to do the math.
I tried, based on the abstract. Wound up getting a figure of 8% of 2-gigabyte systems having 10 RAM failures per hour and the other 92% being just peachy. While a few bits going south is AFAIK the most common failure state for RAM, some of those RAM sticks must be complete no-POST duds and some are errors-up-the-wazoo massive swaths of RAM corrupted, so that throws my back-of-the-envelope math WAY off...
In other words, big numbers make Gronk head hurt. Gronk go make fire. Gronk go make boat. Gronk go make fire-in-a-boat. Gronk no happy with fire-in-a-boat. Boat no work, and fire no work, all at same time.
Sorry, lost my thread there. So yeah, complex numbers, hard math, random assumptions that bugger our conclusions and maybe bugger theirs.
The fact that these DIMMs were "stressed" makes me wonder about the validity of the test. Heat stress, among other things, will multiply errors far beyond what you will see in normal service.
The problem with something like this is the assumption that Google world == real world.
This RAM is all running on custom Google boards that no one else has access to, with custom power supplies in custom cases in custom storage units. To the researchers' credit, they split things by platform later on, but that just means Google-custom-jobbie-1 and Google-custom-jobbie-2, not Intel board/Asus board/Gigabyte board. Without listing the platforms down to chipsets and CPU types (not gonna happen), it's hard to compare data and check methodology.
While Google is the only place you're going to find literal metric tons of RAM to play with, the common factor that it's all Google might be throwing the numbers off. At least some confirmation that these numbers hold at someone else's data center would be nice.
But then, I didn't RTWholeFA, so maybe I missed something.
Re: (Score:2)
Re: (Score:2)
Yes, I saw that, and it was also pointed out earlier in this discussion. I, for one, am not willing to accept that statement. It should be noted that a lot of "assumptions" were made in this study, and that those assumptions are referred to throughout the TFA and the PDF. Of all the hardware errors I've ever dealt with, heat was the most common problem.
Re: (Score:2)
That's more than 10 errors per day... That is excessive, no matter the load they put on their servers or how many DIMMs there are. And their memory loads aren't all that excessive in the day of 1U boxes holding 128GB of RAM for virtual machines...
Re:Percentage? (Score:5, Informative)
No, I don't believe so. They use server boards, custom made to their specs. And, I'm pretty sure that those specs include ECC memory - that is the standard for servers, after all. http://news.cnet.com/8301-1001_3-10209580-92.html [cnet.com] If you're really interested, that story gives you a starting point to google from.
Re: (Score:3, Insightful)
No, I don't believe so. They use server boards, custom made to their specs.
I suppose it depends on how you define "server board". Room for tons of ECC RAM and two CPUs is server or serious-workstation class (or maybe I-just-use-Notepad-and-my-sales-guy-is-on-commission class), but I think once you're on to custom boards that only use certain voltages of electricity, you've moved into a class by yourself.
And, I'm pretty sure that those specs include ECC memory - that is the standard for servers, after all.
Section 7: "All DIMMs were equipped with error correcting logic (ECC) to correct at least single bit errors."
So, yes, it's ECC.
Re: (Score:2)
Room for tons of ECC RAM and two CPUs is server or serious-workstation class (or maybe I-just-use-Notepad-and-my-sales-guy-is-on-commission class), but I think once you're on to custom boards that only use certain voltages of electricity, you've moved into a class by yourself.
He probably means that the boxes are made to spec. Google isn't stupid enough to go with custom mobos for what amounts to generic grunt clusters.
Re:Percentage? (Score:4, Interesting)
Re:Percentage? (Score:5, Funny)
Re: (Score:2)
You know, maybe googling it isn't the best idea in this case. Memory errors and all...
I was going to debunk that but I forgot what was on my mind. Damn dimms!
Re: (Score:2, Interesting)
Re:Percentage? (Score:4, Informative)
Uh, the article showed that temperature has nothing to do with it.
The rest is accurate.
Re: (Score:3, Informative)
... Running ECC performs a basic parity check, nothing more...
Not [wikipedia.org] exactly [wikipedia.org]...
Re: (Score:2)
I've actually been looking for a 12V power supply for a while. I wonder if they use power supplies off the shelf or if they are custom-manufactured just for Google?
Re: (Score:2)
www.logicsupply.com [logicsupply.com]
Re: (Score:2)
Scratch that, they're not doing what I thought. I went and RTFA now that I have a few minutes and see there is no separate power supply outside of the 12VDC feed. Darn.
In that case, are these results useful at all? (Score:2)
Which really makes me question whether these results have any validity outside of Google. The study found that the majority of errors appeared to be related to the motherboard, but didn't list any information about the motherboards in use. If they are all custom-built for Google, then there is absolutely no way for any of us to know whether the error rate they exhibited is representative of what you'd get from average COTS server-grade motherboards currently on the market. Thus these results are meaningless
Re: (Score:2)
Oh, so it's $FOO, but in a server.
Running computers on batteries? It got a patent?
I think [dell.com] there is [apple.com] a good bit [hp.com] of prior art [asus.com] if only [sagernotebook.com] one knows where to look [wikipedia.org].
I mean, really. This is a good idea, and it's about darn time a large-form-factor motherboard running on low voltage is available, but IMHO this should not be patentable. It's simply designing around a low-voltage input.
Re:Percentage? (Score:5, Informative)
I work on server design, specifically motherboards. ECC is a feature: it helps prevent bit errors from passing through undetected. It is not a method for preventing errors from happening in the first place, nor does it influence the number of bit errors. That is a property of the motherboard design, the chipset, the DIMM PCB and the DRAM.

Second, just because you provide a spec for a mobo does not mean that it is all-inclusive. Generally people specify form factor, power, features. They don't specify quality, and in most cases don't give a criterion for what it means for a feature to "work". In fact most customers I've talked to don't really understand what quality means for hardware (and sometimes in general). Hardware, much like software, is managed on the same impact/effort principles: if customers don't care, we don't test. In other words, if it ain't listed on the box, or the salesman won't write it down, just assume it wasn't done.
In spite of the fact that computer motherboards are digital electronics, there is in fact anything but a binary determination of "work" and "not work". Digital signals are an engineering approximation, one which falls apart at high speeds, with dense routing and with inexpensive design. Well-designed and well-tested motherboards have a well-known bit error rate, and reliable companies will not ship a new design until they meet their target. I do this on systems I design, but they aren't cheap, not by a lot. It is a very expensive, time-consuming process, one which most companies really want to get rid of. Not all systems are so thoroughly tested; in fact the vast majority of boards out there, server or otherwise, aren't tested much at all.
Forking over money for ECC is very similar to paying the mob to protect you. Yes, it will give you more peace of mind, but what you really want is not to be having these problems to begin with. For people who care about data integrity: you should be asking what the bit error rate is and how they know. If they don't know, then you don't want it, ECC or no ECC. Don't assume "the industry" is equal, and don't assume that because a vendor's product X is really good, their product Y is really good too: you WILL be wrong, particularly with computers.
Re: (Score:3, Insightful)
Comparing ECC to mob protection is not a very good analogy. ECC lets you detect and in some cases fix memory errors. The key is the detection part.
If you get a single bit error which results in corrupt data, unless you verify that data some other way you won't know about it unless you have ECC. Verifying data multiple times is computationally expensive and degrades performance, and most server OSs and software don't do it anyway.
As well as error detection the fact that you know it was the memory which corru
Re: (Score:3, Insightful)
Then, for leaping gods sake, tell us who you work for!
Re:Percentage? (Score:4, Informative)
Re: (Score:2)
No, Google has always used servers. The trademark of Google, which you're misquoting, is the fact that they use clusters of x86 hardware, rather than big iron (mainframes).
Compared to proprietary hardware, x86 servers are dirt cheap.
Re: (Score:2)
Actually, I thought I had heard that they build their clusters using SuperMicro boxes (which are integrated and sold by a variety of distributors), but I can't find anything to back that up now. But yeah, black box com
Bus errors! (Score:5, Informative)
What I have seen (and generated) is the occasional (2-3/day) bus error with specific (nasty) data patterns, usually at a few addresses. I write that off to mobo trace design and crosstalk between the signals. Failing to round the corners sufficiently, or leaving spurs, is the likely problem. I think HyperTransport is a balanced design (push-pull differential, like Ethernet) and should be less susceptible.
Re: (Score:3, Informative)
I had a RAM stick (256MB DDR I think) with a stuck bit once. At first I just noticed a few odd kernel panics, but then I got a syntax error in a system Perl script. One letter had changed from lowercase to uppercase. That's when I ran memtest86 and found the culprit.
At the time, a "mark pages of memory bad" patch for the kernel did the trick and I happily used that borked stick for a year or so.
Re: (Score:3, Insightful)
Re: (Score:2)
Yup, I'm really disappointed that consumer PCs still lack ECC RAM. The support for it is in all the chipsets, but it adds $5 to the cost of the machines. Oh well.
Re: (Score:2, Interesting)
This machine compiled a lot of source (it was a Gentoo box), so surely if errors like these had bee
Re: (Score:2)
Re: (Score:2)
Some hard errors occur because of natural alpha-decay - even one alpha particle can flip a bit. Also, energetic cosmic rays can cause problems.
ECC on a home system? (Score:5, Interesting)
I've always thought it would be a nice-to-have feature for my home system to have ECC - perhaps it might degrade over time and misbehave less if it could detect and fix some errors. But my normal sources don't seem to stock many choices. E.g., Newegg appears to have 2 motherboards to choose from, both for AMD CPUs, nothing for Intel. Fry's appears to have one - same thing, AMD only. Is this just the way things are, or do I need to be looking somewhere else? Would picking one of these motherboards end up not working out well for my gaming rig?
Re: (Score:2)
ECC is slightly slower.
Re: (Score:3, Informative)
ECC is slower by something like 1%, which is completely unnoticeable, since RAM contributes relatively little to overall system performance. 2x faster RAM won't make things run twice as fast, because normally the CPU caches get a >90% hit ratio. Otherwise things would be incredibly slow, as even the fastest RAM is still horribly slow and has horrible latency compared to the cache.
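A worked example of why the hit ratio dominates (the cycle counts below are illustrative assumptions, not measurements):

```python
# Average access time = hit_ratio * cache_latency + miss_ratio * RAM_latency
hit_ratio, cache_cyc, ram_cyc = 0.95, 4, 200
base = hit_ratio * cache_cyc + (1 - hit_ratio) * ram_cyc
fast = hit_ratio * cache_cyc + (1 - hit_ratio) * (ram_cyc / 2)
print(base, fast, round(base / fast, 2))  # -> 13.8 vs 8.8 cycles: 2x faster
                                          #    RAM speeds you up only ~1.57x
```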
Re: (Score:3, Informative)
Re: (Score:2)
You'd probably have to look at server boards rather than desktop boards.
http://bit.ly/16EUiC [bit.ly]
Link to Newegg with filtered set of ECC compatible server boards.
But you'll pay a lot more and probably need a larger case and a bunch of other BS, although it looks like there are some ATX form-factor boards.
Re: (Score:2)
Because AMD Athlon/Phenom CPUs have the memory controller integrated into the CPU, the CPU (not the motherboard) actually dictates what type of RAM you can use.
For all the desktop-class AMD Athlon/Phenom CPUs, you can use unbuffered ECC memory. Just make sure it's not buffered or registered. You need an Opteron to use buffered or registered memory.
If you want an Intel processor, you have to use a Xeon (and the right mobo) to use ECC memory.
Re: (Score:2)
Re: (Score:2)
ECC is a server-targeted feature. Newegg has 18 mainboards that support ECC listed in the Dual LGA 1366 category alone, and I'd imagine plenty more scattered throughout their server board categories.
As you've already discovered, though, it's not terribly common on home-targeted boards. You're welcome to use one of those boards for gaming, but you'll probably have to use a pricier Xeon or Opteron processor and more expensive ECC RAM, and suffer with slower PCI-E links for your video cards. Higher prices and simi
Re:ECC on a home system? (Score:5, Informative)
Re: (Score:2)
I guess it's gettin' pretty long in the tooth, but my favorite home board is a one-socket Opteron. It's only got four gigs of RAM, though (and two empty slots).
What I learned from TFA is I didn't do anything but piss everyone off with the "heroic" cooling I've been doing all these years. I've never lost a HDD, and I've always blamed the wind tunnel factor. Live and learn, eh?
Re: (Score:2)
You're right, the i7 does not support ECC. You need to instead run a Lynnfield or Bloomfield Xeon processor, which, like the i7, are based on Nehalem.
Dell (Score:5, Interesting)
In my experience at work ordering Dell desktops and laptops, by far the most common defect is 1-3% of machines with bad RAM. Typically it's made by Hynix, occasionally Hyundai, and I've never seen other brands fail. On many occasions, though, I've predicted Hynix, pulled it, and sure enough theirs was the piece causing the errors in Memtest86+...
Re:Dell (Score:5, Interesting)
Hyundai is Hynix, and they are the second-largest DRAM manufacturer by market share (roughly 20%, second to Samsung's 30%).
It's no surprise that you've only seen the Hynix brand fail in Dells; chances are they are in 90%+ of Dell (and HP and Apple) boxes because they primarily buy from Hynix in the first place. It's selection bias.
Re: (Score:2)
I've had the worst luck with Hynix sticks. Usually when I rebuild systems, the sticks that are bad are Hynix or even Hyundai. Mushkin and Kingston have always been pretty good to me, though, and are usually rock solid. Hell, Mushkin even has a lifetime warranty. How many other manufacturers offer that?
Re: (Score:2)
Hynix is the former Hyundai Electronics.
I thought that an inability to recall events (Score:4, Funny)
Re: (Score:2)
Misleading, to say the very least. (Score:5, Interesting)
Read the article and remember they are talking averages here.
They give it away with this line:
Only 8% of DIMMs had errors per year on average. Fewer DIMMs = fewer error problems - good news for users of smaller systems
Essentially, only 8% of their ECC DIMMs reported ANY errors in a given year.
Also this was pretty telling:
Besides error rates much higher than expected - which is plenty bad - the study found that error rates were motherboard, not DIMM type or vendor, dependent.
And this:
For all platforms they found that 20% of the machines with errors make up more than 90% of all observed errors on that platform.
So essentially, they are saying that only 8% of DIMMs reported errors, 90% of which were on 20% of the machines that had errors, mostly because of motherboard issues... yet DIMMs are less reliable than previously thought.
I would imagine that if you removed all of the bad motherboards, power supplies, environmental issues, and other problems, DIMMs would actually be more reliable than I previously thought, not less! I wonder what percentage of CPU operations yield incorrect results. With billions of instructions per second, even an astronomically low rate of undetected CPU errors would guarantee an error at least as often as failed DIMMs.
What I did take from the article was that without ECC ram, you have no way of knowing that your RAM has errors. I guess I should rethink my belief that ECC was a waste of money.
Re: (Score:2)
I do remember reading an article where I was surprised that Google used such low-quality cheap hardware...
That being said, this isn't really that surprising. Like another poster said, once I started buying quality motherboards (Asus) and quality RAM brands, I really haven't had any problems.
Re: (Score:2)
The quality of the hardware matters little when you have so much built-in redundancy. Who cares if a server fails when you've got three to back the failed one up? They were smart in realizing that for the cost of a Sun server you could buy like 10 PCs and basically achieve a lot more with a great deal more redundancy.
Re: (Score:2)
``What I did take from the article was that without ECC ram, you have no way of knowing that your RAM has errors.''
But that's not actually true. Parity allows you to detect errors, but not correct them. Thus, parity RAM is not ECC RAM, but it will detect memory errors.
"RAID"-style system for RAM... (Score:4, Interesting)
RAM is dirt cheap, and most server systems support significantly more RAM than most people bother to install. For critical systems, ECC works, but it doesn't prevent everything (double-bit errors, etc.). Is it time for a Redundant Array of Inexpensive DIMMs? Many HA servers now support Memory Mirroring (aka RAID-1: http://www.rackaid.com/resources/rackaid-blog/server-dysfunction/memory_mirroring_to_the_rescue/ [rackaid.com]), but should there be more research into different RAID levels for memory (RAID 5-6, 10, etc.)?
Re: (Score:3, Insightful)
ECC IS Raid5 for RAM....
Re: (Score:3, Interesting)
I think OP's point was, say you have 4G of non-ECC RAM. It would be neat if you could turn that into, say, 2G of "RAID RAM".
Re: (Score:2, Informative)
No, not really.
RAID-5 allows for disk failure via distributed block parity. ECC recovers single-bit errors.
The "Memory RAID" design should prevent a larger issue (multi-bit errors, whole-DIMM failure, etc., which ECC cannot handle) from taking the whole system out.
I would imagine that ECC memory would be used in conjunction with higher-level striping or mirroring to prevent and recover from both failures.
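A minimal sketch of what XOR parity across modules would buy, using hypothetical three-DIMM contents (RAID-5-style and purely illustrative - no shipping memory controller is claimed to work this way):

```python
# XOR parity across three hypothetical DIMMs: lose any one whole module
# and its contents can be rebuilt from the survivors plus the parity -
# exactly the whole-DIMM failure case single-bit ECC can't cover.
dimm = [bytes([1, 2, 3]), bytes([40, 50, 60]), bytes([7, 8, 9])]
parity = bytes(a ^ b ^ c for a, b, c in zip(*dimm))

# DIMM 1 fails outright; reconstruct it from DIMMs 0, 2 and the parity.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(dimm[0], dimm[2], parity))
assert rebuilt == dimm[1]
print(list(rebuilt))  # -> [40, 50, 60]
```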
Want to confirm? Look at your bittorrent log. (Score:5, Interesting)
Especially if the torrent is old. Some of them may be sabotage activity, but I doubt that, considering the kind of files.
They are not transmission errors: TCP/IP checks for that. Not hard drive errors - again, checksums. They can be intra-system transmission errors, though.
I remember folks who ran full-file checks wrote that they had a lot of them too.
Re:Want to confirm? Look at your bittorrent log. (Score:5, Interesting)
The TCP/IP checksums are really weak - only 16 bits, and rather a poor algorithm anyway. So more than one in 65 thousand errors will go undetected by a TCP/IP checksum. And that's not including buggy network adaptors and drivers that 'fix' or ignore the checksums.
If you're transferring gigabytes of data you really need something a lot better.
Still, that's probably not the most common source of errors. You see, the same problem exists when data is transferred across an IDE or SCSI bus: if there's a checksum at all, it's very weak, and the amounts of data transferred across a disk bus are scary.
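To make the weakness concrete, here's the checksum TCP uses (RFC 1071, a 16-bit one's-complement sum) in Python, along with a pair of offsetting corruptions it cannot see - an illustrative sketch:

```python
# RFC 1071 Internet checksum: a 16-bit one's-complement sum over the data.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                   # pad odd-length data
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)      # fold the carry back in
    return ~s & 0xFFFF

good = bytes(b"some packet payload!")
bad = bytearray(good)
bad[0] += 1                               # +1 in one 16-bit word...
bad[2] -= 1                               # ...-1 in another: the sums cancel
print(inet_checksum(good) == inet_checksum(bytes(bad)))  # -> True: undetected
```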
Re: (Score:2)
That's interesting. If you were checking with a newer version of uTorrent, you may have been using UDP, and not TCP. They added UDP capability about a year ago, and I assume others have as well. I don't know if they do error correction on a per-packet basis or rely on block checksums.
Re:Want to confirm? Look at your bittorrent log. (Score:5, Informative)
The checksum used by TCP is several orders of magnitude more likely to match a corrupted packet than the checksum used by bittorrent. (citation [psu.edu])
More than likely these are transmission errors where the TCP checksum matched but the bittorrent checksum did not.
Radiation Effects (Score:5, Interesting)
Re: (Score:2)
Well.
Bullshit.
Sorry, but true. Look up alpha radiation if you want to know why.
clearly not a radiation engineer (Score:5, Insightful)
That window looked out to a pile of coal, so the culprit was assumed to be low level alpha radiation.
Alpha radiation is stopped by a sheet of office paper. It certainly wouldn't make it through the window, through the machine case, electromagnetic shield, circuit board, chip case, and into the silicon. Even beta radiation would be unlikely to make it that far.
What is much more likely: thermal effects, i.e., infrared from the sun heating up machines near the window.
Re: (Score:2)
Beta would be believable, though (as opposed to alpha).
I tend to agree thermal might be the culprit - specifically the delta-T, not the absolute T. It is the act of changing temperature that harms PCs the most, not the temperature they settle at. As the temperature changes, different materials (FR4, lead/tin solder, copper, plastic) expand/contract at different rates. This change causes poor signal connections, and as RAM is likely the most sensitive (socketed rather than soldered), this would explain the bit errors.
Mainboards (Score:2)
Alrighty then, which mainboards have the lowest error rates? TFA seems to have obfuscated that. That's MS's job; I thought Google was supposed to Do No Evile?
Re: (Score:2)
Re: (Score:2)
I rely on /. for all my free research. Thanks.
Lessons learned from *Non* ECC RAM (Score:4, Insightful)
My takeaway from this paper is that maybe Google should hire more technicians who are experienced with non-ECC RAM systems. They even believed, prior to this study, that soft errors were the most common error type. I could have told you from the start that was bunk. In over 15 years of burn-in tests as part of PC maintenance, the number of soft errors I've observed is... 0. Either the hardware can make it through the test with no errors, or there is a DIMM that will produce several errors over a 24-hour test. This doesn't mean that random soft errors never happen when I'm not looking/testing, but the 'conventional wisdom' that soft errors are the predominant memory error doesn't even pass the laugh test.
From looking at the numbers in this report, I get the feeling that hardware vendors are using ECC as an excuse to overlook flaws in flaky hardware. I would now be really interested in a study that compares the real-world reliability of ECC vs non-ECC hardware that has been properly QC'd. I'll wager the results would be very interesting, even if ECC still proves itself worth the extra money.
Difficult to find parts that support ECC (Score:5, Interesting)
When I was building the computer I'm typing this on, I had the grand idea of building it with so much RAM that I could basically work from RAM. Meaning, for example, that all my running programs and the project I was working on would have to fit in RAM.
Of course, with such a dream, I was concerned about the reliability of my memory. So I wanted ECC. I found out that having ECC memory is not just a matter of buying ECC memory. There are different kinds of ECC memory, and you need to find a combination of memory, motherboard, and CPU that works together. Many sites that offer CPUs and/or motherboards don't list support for ECC among the specifications. Searching for it is difficult, because searching for "ECC" also returns hits for things like "non-ECC" and "ECC: no".
Finally, I found a combination of motherboard and CPU that would support unbuffered ECC DDR2, and a matching pair of memory modules to go with it. And then, when I got all the parts, the RAM didn't fit in the motherboard. Turns out the RAM was FB-DIMM, which had not been listed in the advertisement. I gave up and just bought 2GB of non-ECC RAM to just get the system working. The FB-DIMM (all 8GB of it) is still sitting here, because I haven't found anyone who wants to buy it from me.
Lessons learned: 1. The saying "the nice thing about standards is that there are so many to choose from" is still relevant. I don't know why there have to be so many hardware interfaces to memory chips, but there are, so be careful. 2. Apparently, nobody really cares about ECC RAM, otherwise information would be easier to find. 3. Apparently, AMD CPUs and matching motherboards support ECC RAM more often than Intel parts and matching motherboards do.
Re:ZFS (Score:5, Insightful)
Changing your file system solves RAM errors how?
Re: (Score:3, Informative)
Obviously, not having RAM errors would be even nicer; but if you can at least detect trouble when it arises, rather than well afterwards, you can avoid having it propagate further and get away with using cheap redundancy instead of expensive perfection.
Re: (Score:3, Interesting)
Adding checksumming adds another place for errors to occur, though -- if data is written correctly but the checksum is miscalculated, either before it is stored or when the data is being verified -- you'll end up throwing out perfectly good data. If you also have redundancy, you're probably willing to live with that, but if you're running on a single disk, ZFS is just adding more opportunities for data corruption in RAM.
Re: (Score:3, Funny)
Re: (Score:2)
it reduces the effects of universal entropy, obviously.
Sorry, you're looking for the thread two doors over, "Universe Has 100x More Entropy Than We Thought"
Re:Gentoo?? (Score:5, Funny)
I would suspect that it has no bearing on you at all. Simply chanting "Gentoo Gentoo Gentoo" should cure any and all hardware errors. You're safe, AC.
I'll keep this fool occupied, someone go call the guys in white coats for me.
Re: (Score:3, Funny)
If you use Gentoo, you'll have to make your own DRAM from the schematics.
Re: (Score:2)