Data Storage

Google Finds DRAM Errors More Common Than Believed

An anonymous reader writes "A Google study of DRAM errors in their data centers found that they are hundreds to thousands of times more common than has been previously believed. Hard errors may be the most common failure type. The DIMMs themselves appear to be of good quality, and bad mobo design may be the biggest problem." Here is the study (PDF), which Google engineers published with a researcher from the University of Toronto.
  • Percentage? (Score:4, Interesting)

    by Runaway1956 ( 1322357 ) * on Tuesday October 06, 2009 @02:57PM (#29661065) Homepage Journal

    "a mean of 3,751 correctable errors per DIMM per year."

    I'm much too lazy to do the math. Let's round up - 4k errors per year has to be a vanishingly small percentage for a system that is up 24/7/365, or 5 nines. The fact that these DIMMs were "stressed" makes me wonder about the validity of the test. Heat stress, among other things, will multiply errors far beyond what you will see in normal service.
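    For reference, here's a rough back-of-the-envelope version of that math (Python; the 2 GB DIMM size is an assumption for illustration, since the study covers a mix of capacities):

      # Rough math for "3,751 correctable errors per DIMM per year".
      errors_per_year = 3751
      errors_per_day = errors_per_year / 365              # ~10.3 per day
      errors_per_hour = errors_per_year / (365 * 24)      # ~0.43 per hour

      # Assume a 2 GB DIMM purely for illustration.
      dimm_bits = 2 * 1024**3 * 8                          # ~1.7e10 bits
      # If every error were a distinct single-bit flip, the fraction of bits
      # affected per year would be on the order of 2e-7 - vanishingly small,
      # which is roughly the point being made here.
      fraction_per_year = errors_per_year / dimm_bits
      print(errors_per_day, errors_per_hour, fraction_per_year)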

    • Re:Percentage? (Score:5, Informative)

      by gspear ( 1166721 ) on Tuesday October 06, 2009 @03:05PM (#29661171)
      From the study's abstract:

      "We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a suprisingly small effect on error behavior in the field, when taking all other factors into account."

      • by drsmithy ( 35869 )

        "We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a suprisingly small effect on error behavior in the field, when taking all other factors into account."

        What temperature range does "the field" encompass, as opposed to "lab conditions" ?

        They found a similar result with hard disks, but their data pretty much ends at around 40 degrees, roughly where the typical desktop PC's drive starts.

    • by CAIMLAS ( 41445 )

      Add to that the fact that Google (apparently) tends to run their data centers "hot" compared to what is commonly accepted, and use significantly cheaper components, and you've got a good explanation for why their error count is as high as it is.

      • Re:Percentage? (Score:5, Insightful)

        by Tumbleweed ( 3706 ) on Tuesday October 06, 2009 @03:08PM (#29661219)

        Add to that the fact that Google (apparently) tends to run their data centers "hot" compared to what is commonly accepted, and use significantly cheaper components, and you've got a good explanation for why their error count is as high as it is.

        Yeah, but let's look at the more common situation - a home. Variable temperatures, most likely QUITE variable power quality, a low-quality PSU, and almost certainly no UPS to make up for it. Add to that low-quality commodity components (mobo & RAM).

        I'd not be surprised to find the problem much more prevalent in non-datacenter environments.

        Switching to high-quality memory, PSU & UPS has made my systems unbelievably reliable the last several years. YMMV, but I doubt by much.

        • Re:Percentage? (Score:5, Informative)

          by jasonwc ( 939262 ) on Tuesday October 06, 2009 @03:18PM (#29661345)
          The article suggests that errors are less likely on systems with few DIMMs and on systems that are less heavily used, and that there was no significant difference among RAM types or vendors, at least with regard to ECC RAM. Thus, laptop and desktop users, who likely have only 2 or 3 DIMMs and make only casual use of their systems, are at lower risk of errors. ECC RAM may in general be of much higher quality than non-ECC RAM, so non-ECC RAM may be more prone to error, but its usage is also less mission-critical. In addition, ECC RAM is usually used in systems with many DIMMs that run 24/7/365.

          Good news
          The study had several findings that are good news for consumers:

                  * Temperature plays little role in errors - just as Google found with disk drives - so heroic cooling isn't necessary.
                  * The problem isn't getting worse. The latest, most dense generations of DRAM perform as well, error wise, as previous generations.
                  * Heavily used systems have more errors - meaning casual users have less to worry about.
                  * No significant differences between vendors or DIMM types (DDR1, DDR2 or FB-DIMM). You can buy on price - at least for the ECC-type DIMMs they investigated.
                  * Only 8% of DIMMs had errors per year on average. Fewer DIMMs = fewer error problems - good news for users of smaller systems.
          • Re: (Score:3, Interesting)

            by HornWumpus ( 783565 )

            IIRC ECC RAM has extra bits and hardware to fix any single-bit error and record that it happened.

            Regular RAM only has parity, which can tell the MB the data is suspect but not which bit flipped. Kernel panic, Blue Screen, Guru Meditation #, whatever.

            It's the same RAM, just arranged differently on the DIMM.

            I once had a dual Pentium Pro that required ECC RAM. The BIOS recorded 0 ECC errors in the three years or so that it was my primary machine. Which is what the Google study would lead me to expect.

            • ***Regular ram only has parity***

              Commodity DRAM hasn't had parity since the early 1990s when DRAM was selling for $100 a Megabyte. Microsoft -- which was trying to sell its memory hungry Windows OS -- pushed for the removal of parity in order to reduce DRAM prices, claiming (probably incorrectly) that DRAM failures were no longer a significant problem. I wished at the time, and still wish, they hadn't done that. Up to that point, Microsoft's record was actually pretty consumer friendly. No more regretta

            • Re:Percentage? (Score:5, Informative)

              by Mr Z ( 6791 ) on Wednesday October 07, 2009 @10:30AM (#29669689) Homepage Journal

              "Regular RAM" has neither parity nor ECC.

              The original PC added a 9th bit to each byte, creating parity RAM. It was unique among personal computers at the time. None (or nearly none) of the original PC's contemporaries did this. But, since IBM did, many clones followed suit in the PC space. Macs, notably, didn't support ECC for many, many years, but if you pop open a Columbia Data Products PC [textfiles.com], you'll see parity RAM. (Note "128K RAM with parity" in that scan.) IBM went with byte parity in part because bytes were the smallest memory unit the CPU read or wrote to the memory. With byte parity, every memory access could be protected.
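              As a tiny illustration of that 9-bits-per-byte scheme (Python, purely illustrative): the extra bit just makes the total number of 1s even, which flags any single-bit flip but gives no clue which bit it was.

                def parity_bit(byte):
                    # Even-parity bit: chosen so (data bits + parity bit) has an even number of 1s.
                    return bin(byte).count("1") & 1

                def check_byte(byte, stored_parity):
                    # True if the byte plus its stored parity bit still has even parity.
                    return (bin(byte).count("1") + stored_parity) % 2 == 0

                b = 0b10110010              # four 1 bits
                p = parity_bit(b)           # 0: already an even number of 1s
                assert check_byte(b, p)
                assert not check_byte(b ^ 0b00000100, p)   # single flip detected, location unknown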

              This ratio of 9/8 stuck with the PC's memory system over the years, following it to ever wider interfaces. That includes the 16 bit buses of the 286 and 386SX, the 32-bit buses of the 386DX and 486, and the 64 bit bus of the original Pentium. While many manufacturers made the byte parity optional as a cost saver, it was still rather common.

              Once you get to 64 bits, you have 8 extra parity bits for a total memory width of 72 bits. This is enough bits to implement a single-error correct, double-error detect Hamming code [wikipedia.org] on the 64-bit data. As long as you always read or write in multiples of 64 bits, you can also generate the Hamming code on writes and check it on reads.
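              Here's a minimal sketch of that single-error-correct, double-error-detect construction (Python; this is the textbook layout, not any specific memory controller's 72/64 arrangement, and it's shown on a small word for readability, though the same code works for 64 data bits):

                def encode(data):
                    # data: list of 0/1 bits. Returns a SEC-DED codeword as a bit list.
                    m = len(data)
                    r = 0
                    while (1 << r) < m + r + 1:   # number of Hamming check bits needed
                        r += 1
                    n = m + r
                    code = [0] * (n + 1)          # 1-indexed; index 0 unused
                    j = 0
                    for pos in range(1, n + 1):   # data bits go in non-power-of-two slots
                        if pos & (pos - 1):
                            code[pos] = data[j]
                            j += 1
                    for i in range(r):            # check bit at 2**i covers positions with bit i set
                        p = 1 << i
                        for pos in range(1, n + 1):
                            if pos & p and pos != p:
                                code[p] ^= code[pos]
                    overall = 0                   # extra parity bit turns SEC into SEC-DED
                    for pos in range(1, n + 1):
                        overall ^= code[pos]
                    return code[1:] + [overall]

                def check(codeword):
                    # Returns ('ok'|'corrected'|'double-error', flipped 1-based position or None).
                    overall_rx = codeword[-1]
                    code = [0] + codeword[:-1]
                    n = len(code) - 1
                    syndrome = 0
                    overall = overall_rx
                    for pos in range(1, n + 1):
                        if code[pos]:
                            syndrome ^= pos
                        overall ^= code[pos]
                    if syndrome == 0 and overall == 0:
                        return ("ok", None)
                    if overall == 1:              # odd number of flips: treat as a single, correctable error
                        return ("corrected", syndrome or None)
                    return ("double-error", None) # even flips, nonzero syndrome: detect but don't correct

                word = [1, 0, 1, 1, 0, 0, 1, 0]   # toy 8-bit word; memory uses 64
                cw = encode(word)
                cw[3] ^= 1                        # flip one bit (1-based position 4)
                assert check(cw) == ("corrected", 4)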

              Note that caveat: "As long as you always read or write in multiples of 64 bits." By the time you get to the 486 era, on-board L1 caches started to become standard equipment. Caches can turn a single byte read or write into a multiple byte line-fill (assuming they do read-allocate and write-allocate). They can also make writes wider. In write-back mode, they tend to write back the entire cache line if any portion was updated. In write-through mode, they could theoretically package additional bytes from the cache line to go with whatever bytes the CPU wrote to get to a minimum data size. (I don't know if the 486 or Pentium actually did this, FWIW. I'm speaking of general principles of operation.)

              The combination of caches and wider buses made ECC practical for PC hardware starting with the Pentium. That's why you started to see it in that time frame and not before.

              BTW, the error rate for individual DRAM bit flips should increase as the bits get smaller. It doesn't surprise me that your Pentium Pro's bits never flipped. It was probably built around 16 megabit DRAM chips, or maybe 64 megabit. If you compare a 16 megabit DRAM chip to a 1 gigabit DRAM chip of the same physical size, the bit cells on the gigabit chip are 1/64th the size. That means far fewer electrons holding the bit. As you can imagine, that might increase the likelihood of error per bit. Google's study didn't show an increase in error rate across memory technologies, but its window of memory technologies didn't stretch back 15 years to the Pentium Pro era.

              There's also just the total quantity of memory. Your Pentium Pro system probably had at most 128MB. Compare that to a modern system with 4GB. A 4GB system has 32x the memory of a 128MB system. Even if the per-bit error rate remained constant, there are 32x as many bits, so 32x as many errors. Modern systems also implement scrubbing, meaning they actively read all of memory in the background looking for errors. Older systems just waited for the CPU to access a word with a bad bit to raise an error. This also makes the observed error rate drastically different, since many errors would go by unnoticed in a system without scrubbing, but would get proactively noticed (and fixed) in a system with scrubbing.

              FWIW, I run my systems these days with ChipKill ECC enabled and scrubbing enabled. Not taking chances. I'll give up 3-5% on performance since most of the time I won't notice it.

          • by vadim_t ( 324782 )

            * Temperature plays little role in errors - just as Google found with disk drives - so heroic cooling isn't necessary.

            Talk about a misunderstanding.

            First, the paper on hard drives did show that temperature was important. It did show though that too cold is worse than too hot. Also, the data wasn't perfect. Google doesn't have a whole lot of drives running at strange temperatures, since they're a datacenter. A consumer though, might well run a drive at 60C in a badly cooled desktop or laptop, and there's n

          • by drsmithy ( 35869 )

            Temperature plays little role in errors - just as Google found with disk drives [...]

            That's not what Google found at all. They found that in the temperature range typically seen in an air-conditioned datacentre, temperature is not a major influence on failure rates. Their data shows that once the temperature rises above about 40 degrees C, failure rates start to increase. 40 degrees is pretty typical for the average home PC, and downright cool in cramped cases like iMacs.

        • Switching to high-quality memory, PSU & UPS has made my systems unbelievably reliable the last several years. YMMV, but I doubt by much.

          I'll second this. Once or twice I skimped on mobo or memory in a pinch, and those have been the only machines of mine to have stability issues post Windows 98. (Even in Windows 98 I could get about 3 weeks of uptime before needing a reboot. It sucked, but it wasn't as bad as some people had to deal with).

        • Re: (Score:3, Insightful)

          Yeah, but let's look at the more common situation - a home. Variable temperatures, most likely QUITE variable power quality, a low-quality PSU, and almost certainly no UPS to make up for it. Add to that low-quality commodity components (mobo & RAM).

          The vast majority of people have laptops now, which come with a built-in UPS.

          • The vast majority of people have laptops now, which come with a built-in UPS.

            I doubt the battery system of a laptop does any undervoltage or power spike protection. A UPS is more than a battery.

            • Re: (Score:3, Informative)

              UPS - Uninterruptible Power Supply

              Now many UPSs also include a Power Conditioner, but a UPS is not a power conditioner.

              • Now many UPSs also include a Power Conditioner, but a UPS is not a power conditioner.

                True, but the power conditioning is what's going to improve the life of your system, most likely, not the battery backup.

        • Seconded - my private PC runs very reliably with a quality PSU and ECC RAM. It does not have a UPS but the power grid is quite stable here in Germany.

          • It varies from town to town here in the U.S. I've always been fortunate to live in good power areas (and Los Angeles used to give us 90 p.s.i. water pressure!). But when we move to our retirement house, I'm gonna need a power conditioner. The lights dimmed several times when I was re-painting it recently, and went off once for a few minutes; the tenants said it happens ALL the time. I always get high-end RAM and PSUs - I've seen others suffer for the lack.

      • Re: (Score:3, Interesting)

        by Red Flayer ( 890720 )
        Humorous ordering of replies to this article.

        Your post:

        Add to that the fact that Google (apparently) tends to run their data centers "hot" compared to what is commonly accepted, and use significantly cheaper components, and you've got a good explanation for why their error count is as high as it is.

        Post before yours:

        From the study's abstract:
        "We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a suprisingly small effect on error behavior in the field, when taking

    • I'm much too lazy to do the math. Let's round up - 4k errors per year has to be a vanishingly small percentage for a system that is up 24/7/365, or 5 nines. The fact that these DIMMs were "stressed" makes me wonder about the validity of the test. Heat stress, among other things, will multiply errors far beyond what you will see in normal service.

      Except it depends on how the modules were originally tested. The study is saying that they break more than previously thought, rather than that they break a lot. If they were originally tested in a stressed system similar to Google's, and Google is finding that they have far more errors than they should, then their study is still valid.

    • Re:Percentage? (Score:4, Insightful)

      by The Archon V2.0 ( 782634 ) on Tuesday October 06, 2009 @03:54PM (#29661921)

      "a mean of 3,751 correctable errors per DIMM per year."

      Hey, the ECC did its job! Let's all go home.

      I'm much too lazy to do the math.

      I tried, based on the abstract. Wound up getting a figure of 8% of 2 gigabyte systems having 10 RAM failures per hour and the other 92% being just peachy. While a few bits going south is AFAIK the most common failure state for RAM, some of those RAM sticks must be complete no-POST duds and some are errors-up-the-wazoo massive swaths of RAM corrupted, so that throws my back of the envelope math WAY off....

      In other words, big numbers make Gronk head hurt. Gronk go make fire. Gronk go make boat. Gronk go make fire-in-a-boat. Gronk no happy with fire-in-a-boat. Boat no work, and fire no work, all at same time.

      Sorry, lost my thread there. So yeah, complex numbers, hard math, random assumptions that bugger our conclusions and maybe bugger theirs.

      The fact that these DIMMs were "stressed" makes me wonder about the validity of the test. Heat stress, among other things, will multiply errors far beyond what you will see in normal service.

      The problem with something like this is the assumption that Google world == real world.

      This RAM is all running on custom Google boards that no one else has access to, with custom power supplies in custom cases in custom storage units. To the researchers' credit, they split things by platform later on, but that just means Google-custom-jobbie-1 and Google-custom-jobbie-2, not Intel board/Asus board/Gigabyte board. Without listing the platforms down to chipsets and CPU types (not gonna happen), it's hard to compare data and check methodology.

      While Google is the only place you're going to find literal metric tons of RAM to play with, the common factor that it's all Google might be throwing the numbers off. At least some confirmation that these numbers hold at someone else's data center would be nice.

      But then, I didn't RTWholeFA, so maybe I missed something.

    • It says in the article that the study found temperature not to be a factor.
      • Yes, I saw that, and it was also pointed out earlier in this discussion. I, for one, am not willing to accept that statement. It should be noted that a lot of "assumptions" were made in this study, and that those assumptions are referred to throughout the TFA and the PDF. Of all the hardware errors I've ever dealt with, heat was the most common problem.

    • That's more than 10 errors per day... That is excessive, no matter the load they put on their servers or how many DIMMs there are. And their memory loads aren't all that excessive in the day of 1U boxes holding 128GB of RAM for virtual machines...

  • Bus errors! (Score:5, Informative)

    by redelm ( 54142 ) on Tuesday October 06, 2009 @03:11PM (#29661251) Homepage
    Hard DRAM errors are rather hard to explain if the cells are good -- maybe a bad write. After much DRAM testing (I run memtest86+ for a week at a time), I've yet to find bad cells.

    What I have seen (and generated) is the occasional (2-3/day) bus error with specific (nasty) data patterns. Usually at a few addresses. I write that off to mobo trace design and crosstalk between the signals. Failing to round the corners sufficiently, or leaving spurs, is the likely problem. I think HyperTransport is a balanced design (push-pull differential, like Ethernet) and should be less susceptible.

    • Re: (Score:3, Informative)

      by marcansoft ( 727665 )

      I had a RAM stick (256MB DDR I think) with a stuck bit once. At first I just noticed a few odd kernel panics, but then I got a syntax error in a system Perl script. One letter had changed from lowercase to uppercase. That's when I ran memtest86 and found the culprit.

      At the time, a "mark pages of memory bad" patch for the kernel did the trick and I happily used that borked stick for a year or so.

      • Re: (Score:3, Insightful)

        by CastrTroy ( 595695 )
        I find that more often than not, when people get blue screens or frequent crashes, it's due to a bad RAM chip. I think it's kind of a bad thing that most motherboards don't really test the RAM when you boot up. Usually running the full RAM test will pick up on most memory errors. You don't even need to run memtest. Sure, you save a few seconds on boot up, but it's often better to know there is a problem with your memory than to go on for months thinking there is some other problem.
        • by SuperQ ( 431 ) *

          Yup, I'm really disappointed that consumer PCs are still lacking ECC RAM. The support for it is in all the chipsets, but it adds $5 to the cost of the machines. Oh well.

      • Re: (Score:2, Interesting)

        by dotgain ( 630123 )
        I had one mobo, can't remember the brand/model exactly but the CPU was an AMD K6-2 450MHz, and back then we ran XFree86, which came as seven gzipped tarballs (if you compile from source). I think it was file number three that would never gunzip on my PC, "invalid compressed data - CRC error", but the MD5 checked out, so I tried it on another machine and it was indeed fine. (And this is back when MD5 was thought secure.)

        This machine compiled a lot of source (it was a Gentoo box), so surely if errors like these had bee

    • by afidel ( 530433 )
      That's why Nehalem server boards have ECC on the busses just like real servers have had since forever.
    • by Cyberax ( 705495 )

      Some hard errors occur because of natural alpha-decay - even one alpha particle can flip a bit. Also, energetic cosmic rays can cause problems.

  • by eison ( 56778 ) <pkteison&hotmail,com> on Tuesday October 06, 2009 @03:14PM (#29661301) Homepage

    I've always thought it would be a nice-to-have feature for my home system to have ECC - the RAM might degrade over time, and the system would misbehave less if it could detect and fix some errors. But my normal sources don't seem to stock many choices. E.g. Newegg appears to have 2 motherboards to choose from, both for AMD CPUs, nothing for Intel. Frys appears to have one, same, AMD only. Is this just the way things are, or do I need to be looking somewhere else? Would picking one of these motherboards end up not working out well for my gaming rig?

    • ECC is slightly slower.

      • Re: (Score:3, Informative)

        by vadim_t ( 324782 )

        ECC is slower by something like 1%, which is completely unnoticeable since RAM contributes relatively little to overall system performance. 2x faster RAM won't make things run twice as fast, because CPU caches normally get a >90% hit ratio. Otherwise things would be incredibly slow, as even the fastest RAM is horribly slow and has horrible latency compared to the cache.
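        A quick, hedged illustration of why a small DRAM penalty barely registers overall (the latencies, hit ratio, and the 2% ECC overhead below are made-up round numbers, not measurements):

          # Back-of-the-envelope average memory access time (AMAT).
          cache_hit_ns = 2.0
          dram_ns      = 60.0
          dram_ecc_ns  = 60.0 * 1.02          # assume ECC adds ~2% to a DRAM access
          hit_rate     = 0.95

          amat_plain = cache_hit_ns + (1 - hit_rate) * dram_ns      # 5.00 ns
          amat_ecc   = cache_hit_ns + (1 - hit_rate) * dram_ecc_ns  # 5.06 ns
          print(amat_ecc / amat_plain - 1)    # ~0.012, i.e. about a 1% overall slowdown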

    • by swb ( 14022 )

      You'd probably have to look at server boards rather than desktop boards.

      http://bit.ly/16EUiC [bit.ly]

      Link to Newegg with filtered set of ECC compatible server boards.

      But you'll pay a lot more and probably need a larger case and a bunch of other BS, although it looks like there are some ATX form factor boards.

    • by Spoke ( 6112 )

      Because AMD Athlon/Phenom CPUs have the memory controller integrated into the CPU, the CPU (not the motherboard) actually dictates what type of RAM you can use.

      For all the desktop class AMD Athlon/Phenom CPUs, you can use un-buffered ECC memory. Just make sure it's not buffered or registered. You need an Opteron to use buffered or registered memory.

      If you want an Intel processor, you have to use a Xeon (and the right mobo) to use ECC memory.

      • If the motherboard doesn't have the traces for that extra chip, though, then it won't support it. Or if it does, but the BIOS doesn't have the option to enable it, same deal. Don't assume a motherboard supports ECC memory unless the manufacturer says so (and the manufacturer isn't Abit). Asus boards for the Athlon 64 and up all support ECC, for instance.
    • ECC is a server-targeted feature. Newegg has 18 mainboards that support ECC listed in the Dual LGA 1366 category alone, and I'd imagine plenty more scattered throughout their server board categories.

      As you've already discovered, though, it's not terribly common on home-targeted boards. You're welcome to use one of those boards for gaming, but you'll probably have to use a pricier Xeon or Opteron processor, more expensive ECC RAM, and suffer with slower PCI-E links for your videocards. Higher prices and simi

    • by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Tuesday October 06, 2009 @03:40PM (#29661659) Journal
      A lot of the AMD boards support ECC RAM but newegg doesn't show it. Most every AM2 motherboard supports it. My main workstation at home is a Phenom II with 8GB ECC RAM mainly for that reason.
  • Dell (Score:5, Interesting)

    by ^_^x ( 178540 ) on Tuesday October 06, 2009 @03:20PM (#29661389)

    In my experience at work ordering Dell desktops and laptops, by far the most common defect is 1-3% of machines with bad RAM. Typically it's made by Hynix, occasionally Hyundai, and I've never seen other brands fail. On many occasions though, I've predicted Hynix, pulled it, and sure enough theirs was the piece causing the errors in Memtest86+...

    • Re:Dell (Score:5, Interesting)

      by Jah-Wren Ryel ( 80510 ) on Tuesday October 06, 2009 @03:43PM (#29661723)

      Hyundai is Hynix, and they are the second-largest DRAM manufacturer by market share (roughly 20%, second to Samsung's 30%).

      It's no surprise that you've only seen the Hynix brand fail in Dells; chances are they are in 90%+ of Dell (and HP and Apple) boxes because those vendors primarily buy from Hynix in the first place. It's selection bias.

      • by ZosX ( 517789 )

        I've had the worst luck with Hynix sticks. When I rebuild systems, the sticks that are bad are usually Hynix or even Hyundai. Mushkin and Kingston have always been pretty good to me, though, and are usually rock solid. Hell, Mushkin even has a lifetime warranty. How many other manufacturers offer that?

    • Hynix is the former Hyundai Electronics.

  • by bugs2squash ( 1132591 ) on Tuesday October 06, 2009 @03:21PM (#29661409)
    was only a problem for government computers.
  • by jhfry ( 829244 ) on Tuesday October 06, 2009 @03:25PM (#29661453)

    Read the article and remember they are talking averages here.

    They give it away with this line:

    Only 8% of DIMMs had errors per year on average. Fewer DIMMs = fewer error problems - good news for users of smaller systems

    Essentially, only 8% of their ECC DIMMs reported ANY errors in a given year.

    Also this was pretty telling:

    Besides error rates much higher than expected - which is plenty bad - the study found that error rates were motherboard, not DIMM type or vendor, dependent.

    And this:

    For all platforms they found that 20% of the machines with errors make up more than 90% of all observed errors on that platform.

    So essentially, they are saying that only 8% of DIMMs reported errors, 90% of which were on 20% of the machines that had errors, mostly because of motherboard issues... yet DIMMs are less reliable than previously thought.

    I would imagine that if you removed all of the bad motherboards, power supplies, environmental problems, and other issues, DIMMs would actually turn out to be more reliable than I previously thought, not less! I wonder what percentage of CPU operations yield incorrect results. With billions of instructions per second, even an astronomically low average rate of undetected CPU errors would guarantee an error at least as often as failed DIMMs.
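    As a rough, hedged version of that comparison (the per-instruction error probability below is a made-up figure purely for illustration, not a measured one):

      # Expected undetected CPU errors per year at a hypothetical error rate.
      instructions_per_second = 3e9
      seconds_per_year = 365 * 24 * 3600            # ~3.15e7
      p_error = 1e-18                               # hypothetical, for illustration only
      expected_per_year = instructions_per_second * seconds_per_year * p_error
      print(expected_per_year)                      # ~0.09 per year at these assumptions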

    What I did take from the article was that without ECC ram, you have no way of knowing that your RAM has errors. I guess I should rethink my belief that ECC was a waste of money.

    • by PRMan ( 959735 )

      I do remember reading an article where I was surprised that Google used such low-quality cheap hardware...

      That being said, this isn't really that surprising. Like another poster said, once I started buying quality motherboards (Asus) and quality RAM brands, I really haven't had any problems.

      • by ZosX ( 517789 )

        The quality of the hardware matters little when you have so much built-in redundancy. Who cares if a server fails when you've got three to back the failed one up? They were smart in realizing that for the cost of a Sun server you could buy something like 10 PCs and basically achieve a lot more, with a great deal more redundancy.

    • ``What I did take from the article was that without ECC ram, you have no way of knowing that your RAM has errors.''

      But that's not actually true. Parity allows you to detect errors, but not correct them. Thus, parity RAM is not ECC RAM, but it will detect memory errors.

  • by MattRog ( 527508 ) on Tuesday October 06, 2009 @03:28PM (#29661483)

    RAM is dirt cheap, and most server systems support significantly more RAM than most people bother to install. For critical systems, ECC works, but it doesn't prevent everything (double-bit errors, etc.). Is it time for a Redundant Array of Inexpensive DIMMs? Many HA servers now support memory mirroring (aka RAID-1 http://www.rackaid.com/resources/rackaid-blog/server-dysfunction/memory_mirroring_to_the_rescue/ [rackaid.com]), but should there be more research into different RAID levels for memory (RAID 5-6, 10, etc.)?
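    A toy sketch of what mirroring (the RAID-1 analogue) buys you over ECC alone: two copies of every word, each with its own checksum, so a wholly corrupted copy can still be served from its mirror. Purely illustrative - real mirroring lives in the memory controller, not in software:

      import zlib

      class MirroredMemory:
          # Toy RAID-1-style mirroring: each word stored twice, with a CRC per copy.
          def __init__(self, words):
              self.banks = [[(w, zlib.crc32(w)) for w in words],   # primary copy
                            [(w, zlib.crc32(w)) for w in words]]   # mirror copy

          def read(self, index):
              for bank in self.banks:
                  word, crc = bank[index]
                  if zlib.crc32(word) == crc:        # this copy still looks intact
                      return word
              raise IOError("both copies corrupted at index %d" % index)

      mem = MirroredMemory([b"hello", b"world"])
      mem.banks[0][1] = (b"w0rld", zlib.crc32(b"world"))   # corrupt the primary copy
      assert mem.read(1) == b"world"                       # served from the mirror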

    • Re: (Score:3, Insightful)

      by imsabbel ( 611519 )

      ECC IS Raid5 for RAM....

      • Re: (Score:3, Interesting)

        by TJamieson ( 218336 )

        I think OP's point was, say you have 4G of non-ECC RAM. It would be neat if you could turn that into, say, 2G of "RAID RAM".

      • Re: (Score:2, Informative)

        by MattRog ( 527508 )

        No, not really.

        RAID-5 allows for disk failure via distributed block parity. ECC recovers a single-bit error.

        The "Memory RAID" design should prevent a larger issue (multi-bit/DIMM failure/etc. that ECC cannot prevent) from taking the whole system out.

        I would imagine that ECC memory would be used in conjunction with higher-level striping or mirroring to prevent and recover from both failures.

  • by sshir ( 623215 ) on Tuesday October 06, 2009 @03:38PM (#29661627)
    Seriously. If you download a lot, and I do, you see quite a few checksum mismatches in the log.
    Especially if the torrent is old. Some of them may be sabotage activity, but I doubt that, considering the kind of files.

    They are not transmission errors: TCP/IP checks for that. Not hard drive errors - again, checksums. They can be intra-system transmission errors, though.

    I remember folks who ran complete file checkers wrote that they saw a lot of them too.
    • by rdebath ( 884132 ) on Tuesday October 06, 2009 @04:30PM (#29662421)

      The TCP/IP checksum is really weak: only 16 bits, and a rather poor algorithm at that. So roughly one error in 65,536 will slip past a TCP/IP checksum undetected - more in practice, given the algorithm's weaknesses. And that's not counting buggy network adaptors and drivers that 'fix' or ignore the checksums.

      If you're transferring gigabytes of data you really need something a lot better.

      Still, that's probably not the most common source of errors. The same problem exists when data is transferred across an IDE or SCSI bus: if there's a checksum at all, it's very weak, and the amounts of data transferred across a disk bus are scary.
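      For the curious, this is essentially the whole 16-bit Internet checksum (RFC 1071 style; a minimal sketch, not production code). One easy way to see how little it protects: swapping any two aligned 16-bit words leaves the sum unchanged, so that kind of corruption sails straight through.

        def inet_checksum(data: bytes) -> int:
            # 16-bit ones'-complement Internet checksum, as used by the TCP/IP family.
            if len(data) % 2:
                data += b"\x00"
            total = 0
            for i in range(0, len(data), 2):
                total += (data[i] << 8) | data[i + 1]
                total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
            return ~total & 0xFFFF

        # Two different payloads, one checksum: ones'-complement addition is
        # commutative, so reordering 16-bit words is invisible to the check.
        assert b"ABCD" != b"CDAB"
        assert inet_checksum(b"ABCD") == inet_checksum(b"CDAB")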

    • by pavon ( 30274 )

      That's interesting. If you were checking with a newer version of uTorrent, you may have been using UDP, and not TCP. They added UDP capability about a year ago, and I assume others have as well. I don't know if they do error correction on a per-packet basis or rely on block checksums.

    • by phantomcircuit ( 938963 ) on Tuesday October 06, 2009 @04:39PM (#29662547) Homepage

      The checksum used by TCP is several orders of magnitude more likely to match a corrupted packet than the checksum used by bittorrent. (citation [psu.edu])

      More than likely these are transmission errors where the TCP checksum matched but the bittorrent checksum did not.

  • Radiation Effects (Score:5, Interesting)

    by Maximum Prophet ( 716608 ) on Tuesday October 06, 2009 @04:01PM (#29662023)
    At Purdue, many years ago, one of the engineers mapped the ECC RAM errors in a room with hundreds of SPARCstations and found that they fell mostly in a cone shape pointed toward the window. That window looked out on a pile of coal, so the culprit was assumed to be low-level alpha radiation.
    • Well.
      Bullshit.

      Sorry, but true. Look up alpha radiation if you want to know why.

    • by SuperBanana ( 662181 ) on Tuesday October 06, 2009 @04:12PM (#29662191)

      That window looked out to a pile of coal, so the culprit was assumed to be low level alpha radiation.

      Alpha radiation is stopped by a sheet of office paper. It certainly wouldn't make it through the window, through the machine case, electromagnetic shield, circuit board, chip case, and into the silicon. Even beta radiation would be unlikely to make it that far.

      What is much more likely: thermal effects, i.e. infrared from the sun heating up machines near the window.

      • beta would be believable though (as opposed to alpha).

        I tend to agree thermal might be the culprit, specifically the delta T, not the absolute T. It is the act of changing temperature that harms PCs the most, not the temperature they settle at. As the temperature changes, different materials (FR4, lead/tin solder, copper, plastic) expand/contract at different rates. This causes poor signal connections, and as RAM is likely the most sensitive (socketed rather than soldered) this would explain the bit e

  • Alrighty then, which mainboards have the lowest error rates? TFA seems to have obfuscated that. That's MS's job; I thought Google was supposed to Do No Evil?

    • Wouldn't be very useful info to anyone buying consumer-level products, as the boards in question are server-grade. Also, Google saying that company X's boards are more failure-prone could get them into trouble. Furthermore, if you have a server farm/data center you should do your own research, and barring that, you shouldn't expect others to do it for you for free.
  • by Rashkae ( 59673 ) on Tuesday October 06, 2009 @04:29PM (#29662395) Homepage

    My takeaway from this paper is that maybe Google should hire more technicians who are experienced with non-ECC RAM systems. They even believed, prior to this study, that soft errors were the most common error type. I could have told you from the start that was bunk. In over 15 years of burn-in tests as part of PC maintenance, the number of soft errors I have observed is... 0. Either the hardware can make it through the test with no error, or there is a DIMM that will produce several errors over a 24-hour test. This doesn't mean that random soft errors never happen when I'm not looking/testing, but the 'conventional wisdom' that soft errors are the predominant memory error doesn't even pass the laugh test.

    From looking at the numbers in this report, I get the feeling that hardware vendors are using ECC as an excuse to overlook flaws in flaky hardware. I would now be really interested in a study that compares the real-world reliability of ECC vs non-ECC hardware that has been properly QC'd. I'll wager the results would be very interesting, even if ECC still proves itself worth the extra money.

  • by RAMMS+EIN ( 578166 ) on Tuesday October 06, 2009 @04:46PM (#29662659) Homepage Journal

    When I was building the computer I'm typing this on, I had the grand idea of building it with so much RAM that I could basically work from RAM. Meaning, for example, that all my running programs and the project I was working on would have to fit in RAM.

    Of course, with such a dream, I was concerned about the reliability of my memory. So I wanted ECC. I found out that having ECC memory is not just a matter of buying ECC memory. There are different kinds of ECC memory, and you need to find a combination of memory, motherboard, and CPU that works together. Many sites that offer CPUs and/or motherboards don't list support for ECC among the specifications. Searching for it is difficult, because searching for "ECC" also returns hits for things like "non-ECC" and "ECC: no".

    Finally, I found a combination of motherboard and CPU that would support unbuffered ECC DDR2, and a matching pair of memory modules to go with it. And then, when I got all the parts, the RAM didn't fit in the motherboard. Turns out the RAM was FB-DIMM, which had not been listed in the advertisement. I gave up and just bought 2GB of non-ECC RAM to just get the system working. The FB-DIMM (all 8GB of it) is still sitting here, because I haven't found anyone who wants to buy it from me.

    Lessons learned: 1. The saying "the nice thing about standards is that there are so many to choose from" is still relevant. I don't know why there have to be so many hardware interfaces to memory chips, but there are, so be careful. 2. Apparently, nobody really cares about ECC RAM, otherwise information would be easier to find. 3. Apparently, AMD CPUs and matching motherboards support ECC RAM more often than Intel parts and matching motherboards do.
