Reliability of Computer Memory?
olddoc writes "In the days of 512MB systems, I remember reading about cosmic rays causing memory errors and how errors become more frequent with more RAM. Now, home PCs are stuffed with 6GB or 8GB and no one uses ECC memory in them. Recently I had consistent BSODs with Vista64 on a PC with 4GB; I tried memtest86 and it always failed within hours. Yet when I ran 64-bit Ubuntu at 100% load and using all memory, it ran fine for days. I have two questions: 1) Do people trust a memtest86 error to mean a bad memory module or motherboard or CPU? 2) When I check my email on my desktop 16GB PC next year, should I be running ECC memory?"
Surprise? (Score:4, Funny)
Recently I had consistent BSODs with Vista64 on a PC with 4GB...
This was a surprise?
Re:Surprise? (Score:5, Informative)
Re: (Score:3, Insightful)
Re:Surprise? (Score:5, Funny)
... Vista is way too slow for my taste, and many others'.
Now you got what he meant with "rock solid"....
Re:Surprise? (Score:5, Informative)
Re:Surprise? (Score:5, Insightful)
Agreed. People who will sit and tell me with a straight face that Vista, in their experience, is unstable are either very unlucky, or liars. Windows stopped being generally unstable years ago. Get with the times.
I'm not convinced. I have a fairly old desktop at work that I keep for Outlook use only. After a few days Outlook's toolbar becomes unresponsive, and whenever I shut it down it stalls and requires a power-off. Task Manager doesn't say I'm using that much memory (still got cached files in physical RAM).
I don't use Windows much, so I'm not used to the tricks that keep it running, whereas I probably use those tricks subconsciously to keep my Linux workstation and laptop running.
I wonder if Windows' continued increase in stability is, at least partly, people subconsciously learning how to adapt to it.
Re:Surprise? (Score:5, Interesting)
Vista can easily hose its user profiles, and affected users get the white-screen loading bug, which causes lots of problems and can even make networking fail for that user.
It's a profile problem that can be fixed easily by creating a new profile and deleting the old one, but that is way out of the ability of most users.
This happens a LOT with home users. Out of the last 30 Vista support calls I got, 6 were this problem of corrupt user profiles.
Honestly, user profiles under Windows have sucked since the 2000 days.
Re: (Score:3, Insightful)
A guy at work got his laptop with Vista on it. Explorer would hang often (Explorer, not IE), and if he tried to arrange his second monitor to the left of his laptop screen, the system would BSoD. (Pretty funny: due to physical desk constraints he had his monitor on the left, but he had to move his mouse off the right side of his laptop screen for it to appear on the left-hand second monitor...) We installed all the latest drivers from HP, but to no avail.
Since putting Vista SP1 on though it has bee
Re:Surprise? (Score:5, Insightful)
People who will sit and tell me with a straight face that Vista, in their experience, is stable are either very lucky, or Microsoft shills.
See? I can say the opposite and provide just as much evidence. Do I get modded to 5 as well? Where are your statistics on the stability of Vista? Did it work well for you, and therefore it works well for everyone else?
I worked for a company that bought a laptop of every brand, so that when the higher-ups went into meetings with Dell, HP, Apple, etc. they had laptops that weren't made by a competitor. They have had problems like laptops not starting up the first time due to incompatible software. That was as recent as 6 months ago. My mother-in-law bought a machine that has plenty of Vista-related problems (audio cutting out, USB devices not working, random crashes in Explorer) on new mid-range hardware that came with Vista. But I have a neighbor who found it fixed lots of problems with gaming under XP.
There's plenty of issues. Vista's problems weren't just made-up because you didn't experience them.
Everybody's experience is different. Quit making blanket statements based on nothing.
Re: (Score:3, Insightful)
Looks like he posted his opinion based on his experience, and you posted your opinion based on your experience. So you should quit making blanket statements based on nothing too.
Neither of you posted statistics. Where are yours?
Re:Surprise? (Score:5, Insightful)
I worked for a company that bought a laptop of every brand, so that when the higher-ups went into meetings with Dell, HP, Apple, etc. they had laptops that weren't made by a competitor. They have had problems like laptops not starting up the first time due to incompatible software. That was as recent as 6 months ago. My mother-in-law bought a machine that has plenty of Vista-related problems (audio cutting out, USB devices not working, random crashes in Explorer) on new mid-range hardware that came with Vista. But I have a neighbor who found it fixed lots of problems with gaming under XP.
On the other hand, my Linux server freezes up and needs to be reset (sometimes even reboot -f doesn't work) every few days due to a kernel bug, probably some unfortunate interaction with the hardware or BIOS. (I'm using no third-party drivers, only stock Ubuntu 8.04.) And hey, in the ext4 discussions that popped up recently, it emerged that some people had their Linux box freeze every time they quit their game of World of Goo. Just yesterday I had to kill X via SSH on my desktop because the GUI became totally unresponsive, and even the magic SysRq keys didn't seem to work. Computers screw up sometimes.
What's definitely true is that Windows 9x was drastically less stable than any Unix. Nobody could use it and claim otherwise with a straight face. Blue screens were a regular experience for everyone, and even Bill Gates once blue-screened Windows during a freaking tech demo.
This is just not true of NT. I don't know if it's quite as stable as Linux, but reasonably stable, sure. Nowhere near the hell of 9x. I used XP for several years and now Linux for about two years, and in my experience, they're comparable in stability. The only unexpected reboots I had on a regular basis in XP were Windows Update forcing a reboot without permission. Of course there were some random screwups, as with Linux. And of course some configurations showed particularly nasty behavior, as with Linux (see above). But they weren't common.
Of course, you're right that none of us have statistics on any of this, but we all have a pretty decent amount of personal experience. Add together enough personal experience and you get something approaching reality, with any luck.
Re: (Score:3, Interesting)
Agreed. They have moved away from generalization to specialization now, and Vista is much more specific about how, when, and where it is unstable. Essentially, they pushed the crashes out of the kernel, and all the applications now act funny or crash instead of crashing the kernel.
Saying they are unlucky, when they are unfortuna
Re: (Score:3, Interesting)
What?
Vista is not 100% stable, never has been, and obviously never will be. Do you think it's magically immune to its own BSODs? I run 64-bit Vista myself, and it's "better than XP", but not stable. Apps still get random errors, etc.
Windows is as stable as it will ever be; at least with Ubuntu you can have a month's uptime and be fine. Now if only Wine was 100% there for gaming (it's getting there).
Re: (Score:3, Informative)
Re: (Score:3, Informative)
I get uptimes of 4-5 weeks on Vista. I have to reboot on the Wednesday after the second Tuesday every month for updates.
I have an uptime of about 6 months on Ubuntu since the last time I rebooted to put an extra hard drive in. I don't have to reboot for updates.
Re:Surprise? (Score:5, Insightful)
You are right and you are wrong. Yes, it's true that Vista, XP or even Windows 2k are rock solid, but only as long as you don't add third-party hardware drivers of dubious quality. Unfortunately many hardware vendors don't spend as much effort as they should to develop good drivers. Just using the drivers that come with Windows leaves you with a rather small set of supported hardware, so people install whatever drivers come with the hardware they buy, and as a result they get BSODs if they are unlucky, and then they blame Microsoft.
Re:Surprise? (Score:5, Insightful)
The OS running on the cheapest hardware with the most clueless user base has the highest failure rate? You don't say!
Re:Surprise? (Score:5, Funny)
Re:Surprise? (Score:5, Insightful)
To all the posters who think the parent is a bad mechanic, I will tell you my anecdote: I have never had a hard drive fail. Never. Not on a fresh computer and not on a decade-old one.
Either I have magic hands, hard drives don't fail that often, or /.ers can't handle hard drives.
Or people can beat the odds. It's chance; sometimes you win in a casino.
Re:Surprise? (Score:5, Funny)
I have never had a hard drive fail. Never. Not on a fresh computer and not on a decade-old one.
Can I hire you as admin for our raid-0 disk server?
Re: (Score:3, Interesting)
Re:Surprise? (Score:5, Informative)
Well, if it takes your corporate IT staff that long to rebuild a computer, they're probably doing it by hand while putting out other fires, which is foolish. Better IT departments have standard images that have been made for and tested upon the computer models that they've standardized upon. Barring hardware failure, the result is a stable Windows environment with few software problems that aren't user-inflicted. In addition, rebuilding a system takes less than an hour: Gigabit Ethernet drops to the benches make backing up a system and restoring a clean image to it go very quickly. Rebuilds for purely remote users are a priority as well. They have access to their email and calendar via OWA, but not to any corporate systems that require VPN access, so getting their laptop repaired and back to them quickly is important: We try to get them repaired and sent out the day we receive them, and have been known to work Saturdays as well to get a system back to someone by the next Monday. We also maintain a hot spare pool: One laptop of every model that we support is on hand to overnight to someone whose laptop is broken. So, in all cases except where the hard drive is broken or the software on it borked, we can have a person up and running the next day. They then send us their computer, and we handle the warranty issues and return it to them.
We also don't permit anyone (ourselves included), to run Windows as Administrator or equivalent except for purposes of installing software or patching. While the computers are joined to our domain, remote/traveling users also have a local user account that is Administrator-equivalent whose name is "[their domain login name].local". They are given the password to it (which is different than their domain password) and told not to use it except to install software or in emergencies (but if they get to that point, they're expected to call: We have a person whose main job is to support remote/traveling users, and she's very good - not only is she an intelligent person, she's a skilled technician and knows our systems inside and out).
It sounds to me as though there are a number of things going on: First, you're getting poor Windows installations. Secondly, there's probably a degree of PEBKAC going on as well. You say that you use Macs at home, so there's almost certainly more than a little resistance to using Windows stemming from attitude: "Macs are better and so I don't have to/won't learn how to use Windows". I've seen this more than once in our company: People who have Macs at home tend to be smug about them and pounce upon every problem (whether real or perceived) with their Windows computers at work. That's OK: After a while you learn which people are your "problem children", and accommodate them as best you can.
In any event, I am sorry for your difficulties, and hope that they are remedied soon.
Re:Surprise? (Score:5, Insightful)
Or could it be that they have a queue full of machines waiting for reinstalls, etc? No. It couldn't be that, since we all know that the thousands of people saying they have had major problems are liars, and we have as evidence a few people who claim that they haven't had major problems, or don't know that they have problems ;-)
Re:Surprise? (Score:5, Insightful)
Dude! Take a chill pill. This is not FUD. The gp is just relating his experience, and here's a shock, YMMV! So just sit back and have another beer.
BTW, I've also had major hassles with windows - mostly related to viruses. As it happens this forced me to switch 100% to linux and I'm happy here, but not everyone who switches is. Personally I like the bandwidth I save from not constantly downloading AV updates, and the speed increase from not running AV. But hey, where you are computing power and bandwidth are probably cheap. Again, YMMV.
Re: (Score:3, Funny)
Re:Surprise? (Score:5, Insightful)
And for those who will go to the security well here, we call it a trade-off. For many systems uptime is more important. It generally isn't a very big risk to run an older Linux kernel, though it is riskier than keeping up with updates. In a world of blind men, the one-eyed man is king. We can sacrifice a modicum of security, exchanging our plate mail for chain mail, and still feel confident because we are surrounded by weaponless peasants.
Re:Surprise? (Score:5, Insightful)
New Microsoft ad slogan (Score:5, Funny)
You must be unlucky or the cause.
This would make a great slogan for Microsoft's new ad campaign:
Re:New Microsoft ad slogan (Score:4, Insightful)
This is truly a sign that Windows has caught up with Linux: It used to be only Linux users saying that, but now Windows users are, too!
Re:Surprise? (Score:4, Informative)
I fail to see how the parent is a troll, regardless of whether he is right or not.
Nevertheless my experience with Vista is the same, I run home premium on a newish laptop I use for music production and haven't had a glitch on it for months. My first intention was to wipe out the drive and install XP, but I abandoned the idea some time ago.
Re:Surprise? (Score:4, Informative)
I fail to see how the parent is a troll, regardless of whether he is right or not.
That's because I wasn't trolling. Yes, I do know people here on slashdot don't like to hear positive opinions on Vista, but in fact Vista isn't all that bad.
I use Linux exclusively on my desktop PC at home and at work. I've been using Linux for over a decade. When I bought a laptop a year and a half ago, it came with Vista. Vista is IMHO a great improvement over XP. It's not even slow on decent hardware. I have yet to receive my first BSOD since SP1 was released. SP0 gave me a few BSODs, maybe 5 in total.
That being said, I use Linux for work and Vista for play. So the comparison may not be entirely fair.
Re:Surprise? (Score:5, Funny)
In reference to the parent, gp, ggp, etc. Either I'm reading the alternate-reality edition of Slashdot, or y'all are warming up for Wednesday.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
It's slower than XP in any case and requires more memory.
Not true. It uses more memory than XP, but it doesn't require it. In exactly the same way that linux uses more memory than XP, but doesn't require it (it's used for system cache if you bother to check). If you actually install the 64bit version, you'll see where MS's development budget has been spent (The 32bit version of vista feels a bit like Win ME in comparison). In every test I've done, 64bit vista has crapped all over XP from quite a big height.
The problem is that I don't consider what an IT'er would buy to be decent hardware:
Dual-core machine + 2GB RAM + integrated ATI/Nvidia/Intel graphics
Re: (Score:3, Insightful)
It's slower than XP in any case and requires more memory.
Not true. It uses more memory than XP, but it doesn't require it. In exactly the same way that linux uses more memory than XP, but doesn't require it (it's used for system cache if you bother to check).
Umm, yes, it is true. Many benchmarks were done of XP SP3 vs Vista SP1, and XP SP3 is definitely faster than Vista SP1, and it definitely _requires_ less memory. I can run an XP machine with 512MB of RAM, and it will be OK. Not great, but OK. Put Vista on the exact same machine (or even on a more modern, faster machine, but still with only 512MB of RAM), and it will be a total dog. Vista really needs a bare minimum of 1GB of RAM to be usable, whereas XP will run acceptably on 512MB... you could probabl
Re:Surprise? (Score:5, Insightful)
I find that with a Windows machine, from Windows 2000 on up, if care is taken not to install too many programs and/or immature or junk-ware, then Windows remains quite stable and usable. The trouble with Windows is the culture. It seems everything wants to install and run a background process or a quick-launcher or a taskbar icon. It seems many don't care about loading old DLLs over newer ones. There is a lot of software misbehavior in the Windows world. (To be fair, there is software misbehavior in MacOS and Linux as well, but I see it far less often.) But Windows by itself is typically just fine.
Since the problem is Windows culture and not Windows itself, one has to educate oneself in order to avoid the pitfalls that people tend to associate with Windows itself.
Re: (Score:3, Interesting)
I can definitely attest to this fact! The family computer dual-boots Vista (it shipped with the 64-bit machine, and is 32-bit of course) and Mandriva Linux 2009 x86_64. Vista has been used to view Oprah's website with its proprietary garbage, but other than that is unused and unmolested. It is a stock install. No third-party stuff has been added other than iTunes. I recently had to install iTunes to restore my iPod after trashing the filesystem, and I can tell y
Re: (Score:3, Funny)
Re:Surprise? (Score:4, Insightful)
And some of us actually expect an OS with a certification logo program to send lawyer letters to Marvell telling them to recall that driver. Sheesh, get with the program, badly written, certified drivers make Microsoft look bad, deservedly.
Memtest not perfect. (Score:5, Informative)
My experience with memtest is that you can trust the results if it says the memory is bad; however, if the memory passed, it could still be bad. Troubleshooting your scenario should involve replacing the DIMMs in question with known good modules while running Windows.
Re:Memtest not perfect. (Score:5, Funny)
I bet Windows will love you replacing the DIMMs while running.
Re:Memtest not perfect. (Score:5, Funny)
I bet Windows will love you replacing the DIMMs while running.
Yeah, wait until it goes to sleep first, or even better, catch it while it's hibernating.
One eye open! (Score:3, Informative)
Be careful. Vista hibernates with one eye open. It can wake itself up from hibernation to do updates. I dual-boot my laptop with Linux Mint (an Ubuntu variant). Every week, I'd go to turn on my computer only to find that the battery was dead. Checking the startup logs showed that Linux was starting up at about 3:00 in the morning. After googling, I found out that many people were having that problem. The suggested solution was to turn off Vista automatic updates
Re:Memtest not perfect. (Score:4, Informative)
I've yet to see memtest86 find an error, even when replacing the RAM fixed the problem. This has happened on several builds.
Re:Memtest not perfect. (Score:5, Funny)
I've often had it pick up bad ram, usually within the first five minutes. One time, the memory in question had been through a number of unprotected power surges. The motherboard and power supply were dead too.
You can reliably replicate my results by removing the ram, snapping it in half and putting it back in. No need to wait for a power surge to see memtest86 shine.
Re: (Score:3, Funny)
That's impressive. Most memory tester software I've tried requires a working power supply and motherboard.
Re:Memtest not perfect. (Score:5, Informative)
I've seen memtest find an error, and yes, the RAM was bad.
There is a bit of a difference between RAM use on Linux and Windows desktops: Linux tends to require less RAM than a Windows system to run, and Windows is far more likely to use all your RAM and page out. In day-to-day use my Linux systems rarely need to touch the swap file. If some of your RAM is faulty and never gets used, then you will not see crashes. I'm sure most of us have juggled RAM about and found that swapping slots cures the problem, although reseating RAM can fix problems anyway. If memtest is showing problems then the RAM has problems; bear in mind that some tests can pass while later tests fail.
Memtest is there to prove RAM bad, not good. At higher temperatures than those the testing was performed at, the RAM may become unreliable. It might be the case that the RAM is OK in some systems but not in others; I've seen that too.
Re:Memtest not perfect. (Score:5, Interesting)
I've had a lot more success with Microsoft's RAM tester, free download here: http://oca.microsoft.com/en/windiag.asp [microsoft.com]
See, good things do come out of Redmond!
Re: (Score:3, Interesting)
I run memtest86 overnight (12+ hrs) as a routine part of the initial evaluation of a sick machine. Occasionally it finds errors after several hours that were not present on a single pass test. The last instance was a few months ago: a single stuck bit in one of the progressive pattern memory tests that only showed up after 4+ hours of repetitive testing. Replacing that mem module cured WinXP of a lot of weird flakey behavior involving IEv7 and Word.
The overnight memtest86 runs have only kicked out errors
Re:Memtest not perfect. (Score:5, Interesting)
Another nice tool is prime95. I've used it when doing memory overclocking and it seemed to find the threshold fairly quickly. Of course your comment still stands - even if a software tool says the memory is good, it might not necessarily be true.
Re:Memtest not perfect. (Score:5, Funny)
Memtestx86 is bögus. My machine alwayS generated errors when I run the test but it works fOne otherwise ÿ
Re:Memtest not perfect. (Score:5, Informative)
+1. I once had a pair of DIMMs which would intermittently throw errors in whichever machine they were placed, but Memtest would never detect anything wrong with them - even if used for weeks.
I called Micron, and they said "Yes, we do see sticks that go bad and Memtest won't detect it." They replaced them for free, the problem went away, and I was happy.
Re: (Score:3, Interesting)
Actually it's worth noting that several motherboards on the market automatically overclock the memory timings under high-load situations to improve performance. These same situations may not occur while simply running memtest86[+].
I've often thought that throwing in a copy of Folding@Home or Distributed.NET running in the background would be fun while memory testing, to juice the CPU and test the system under a heavier load.
Unfortunately, isolating the memory to run said software and relocating it p
Re:Memtest not perfect. (Score:5, Interesting)
I wonder how strongly RAM stability depends on power fluctuations. While you're testing memory using Memtest, the GPU is not used at all, for example. When playing a game and/or running some heavy compile-jobs, on the other hand, overall power usage will be much higher. I wonder if this may reflect on RAM stability, especially if the power supply is not really up to par?
If so, you might never find out about such a problem by using (only) memtest.
Re:Memtest not perfect. (Score:5, Informative)
A lot. When AM2 boards were new I went through a bunch of bad RAM (memory manufacturers hadn't quite gotten their act together yet) and RAM voltage would significantly change the number of bits that were 'bad'. 1.9 V and there were a few bits bad, 1.85, some more, 1.8 and memtest would light up all over.
So certainly, if any component is subpar, even a slight power fluctuation could trigger a borderline bad bit.
Re:Memtest not perfect. (Score:5, Interesting)
While you're testing memory using Memtest, the GPU is not used at all, for example. When playing a game and/or running some heavy compile-jobs, on the other hand, overall power usage will be much higher.
I think memtest is a good first-level test - it will pinpoint gross errors in memory, but it probably won't detect more subtle problems. For me the best extended test is to enable all the OpenGL screen savers and let the system run overnight cycling through each of them. If the system doesn't crash with this, it will probably be solid under a normal load. For me this has been the best test of overall system stability. Unfortunately, if it fails you won't know exactly what is wrong.
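For what it's worth, here's a minimal Python sketch of that kind of application-level pattern test; the buffer size and bit patterns are arbitrary choices, it only exercises whatever physical pages the OS happens to hand it, and it is nowhere near as thorough as memtest86's targeted patterns:

# Crude userspace RAM pattern check (illustrative only): fill a buffer with a
# pattern, read it back, and report any mismatches. Sizes/patterns are arbitrary.
import array

def pattern_test(megabytes=64, passes=4):
    words = megabytes * 1024 * 1024 // 8          # number of 64-bit words
    buf = array.array('Q', [0]) * words
    patterns = [0x0000000000000000, 0xFFFFFFFFFFFFFFFF,
                0xAAAAAAAAAAAAAAAA, 0x5555555555555555]
    for p in range(passes):
        pat = patterns[p % len(patterns)]
        for i in range(words):
            buf[i] = pat                          # write pass
        for i in range(words):
            if buf[i] != pat:                     # read-back/verify pass
                print("mismatch at word %d: got %016x, expected %016x"
                      % (i, buf[i], pat))
    print("done: %d MB x %d passes" % (megabytes, passes))

if __name__ == "__main__":
    pattern_test()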
tinfoil is the answer (Score:5, Funny)
Wrap your _whole_ computer in tinfoil to deflect those pesky cosmic rays. It works to keep them out of your head, too.
Re:tinfoil is the answer (Score:5, Funny)
Re: (Score:3, Funny)
Re:tinfoil is the answer (Score:4, Funny)
The tin foil hat works. We can't read your mind. Feel safe wearing the tin foil hat. You've protected yourself against our evil plot to control your mind. :)
metal armour is the answer (Score:5, Funny)
I usually wear medieval armour. Not only does that work as efficiently as tinfoil, it's also very fashionable.
Re: (Score:3, Funny)
Meanwhile, aren't some people wrapping their WiFi antennas with tin foil to boost reception?
Error response (Score:5, Informative)
If a system gives memtest86 errors, I break it down and swap components until it doesn't. The test patterns it uses can find subtle errors you're unlikely to run into with any application-based testing, even when run for a few days. Any failures it reports should be taken seriously. Also: you should pay attention to the memory speed value it reports; that's a surprisingly effective simple benchmark for figuring out whether you've set up your RAM optimally. For the last system I built, I ended up purchasing 4 different sets of RAM, and there was about a 30% delta between how well the best and worst performed on the memtest86 results--which correlated extremely well with other benchmarks I ran too.
At the same time, I've had memory that memtest86 said was fine, but the system itself still crashed under a heavy Linux-based test. I consider both a full memtest86 test and a moderate workload Linux test to be necessary before I consider a new system to have baseline usable reliability.
There are a few separate problems here that are worthwhile to distinguish among. A significant amount of RAM doesn't work reliably when tested fully. Once you've culled those out, only using the good stuff, some of that will degrade over time to where it will no longer pass a repeat of the initial tests; I recently had a perfectly good set of RAM degrade to useless in only 3 months here. After you take out those two problematic sources for bad RAM, is the remainder likely enough to have problems that it's worth upgrading to ECC RAM? I don't think it is for my home systems, because I'm OK with initial and periodic culling to kick out borderline modules. And things like power reliability cause me more downtime than RAM issues do. If you don't know how or have the time to do that sort of thing yourself though, you could easily be better off buying more redundant RAM.
Re: (Score:3, Informative)
Anyone else have RAM modules degrade over time? I've never seen this.
I've seen a few known good modules fail later on but it's pretty rare. I'd say I've seen fewer than 5 in 15 years. Most times when a previously good module suddenly appears bad there's something else going on such as a failing power supply etc.
Re: (Score:3, Funny)
Anyone else have RAM modules degrade over time? I've never seen this.
I don't know if this is from degraded RAM, or rats pissing on the motherboard, but an olde IBM PC running DOS (upgraded to 3?) started having little blips on-screen and other strange characters appear in the output of programs and the shell itself, and in addition to this it would randomly lock up occasionally displaying a stack error.
I know the floppy is all right, because other machines boot from it fine without any of these symptoms occurring. The video-cardish component appears fine to t
Re: (Score:3, Insightful)
Anyone know why PC133 memory would have an issue on a bus overclocked from 100MHz to 133MHz? It should be able to handle it just fine, or so I'd like to think :-/
It's probably not the RAM as such; the 440BX on the P2B is only officially rated for 100MHz. Overclocking the chipset can have any number of side-effects.
Answers (Score:5, Interesting)
1) Yes
2) No
Now to be serious. Home PCs do not yet come with 6GB or 8GB. Most new home PCs still seem to have between 1GB and 4GB, and the 4GB variety is rare because most home PCs still come with a 32-bit operating system. 3GB seems to be the sweet spot for higher-end home PCs. Your home PC will most likely not have 16GB next year. Your workstation at work, perhaps, but even then only perhaps.
At the risk of sounding like "640KByte is enough for everyone", I have to ask why you think you need 16GB to check your email next year. I'm typing this on a 6-year-old computer, I'm running quite a few applications at the same time, and I know a second user is logged in. Current memory usage: 764Meg RAM. As a general rule, I know that Windows XP runs fine on 512Meg RAM and is comfortable with 1GB RAM. The same is true for GNU/Linux running Gnome.
Now, at work with Eclipse loaded, a couple of application servers, a database and a few VMs... Yeah, there indeed you get memory-starved quickly. You have to keep in mind that such a usage pattern is not that of a typical office worker. I can imagine that a heavy Photoshop user would want every bit of RAM he can get too. The Word-wielding office worker? I don't think so.
Now, I can't speak for Vista. I heard it runs well on 2GB systems, but I can't say. I got a new work laptop last week and booted briefly in Vista. It felt extremely sluggish and my machine does have 4Gig RAM. Anyway, I didn't bother and put Debian Lenny/amd64 on it and didn't look back.
In my opinion, you have quite a twisted sense of reality regarding the computers people actually use.
Oh, and frankly... if cosmic rays were a big issue by now with huge memories, don't you think more people would be complaining? I can't say why Ubuntu/amd64 ran fine on your machine. Perhaps GNU/Linux has built-in error correction and marks bad RAM as "bad".
Re:Answers (Score:5, Informative)
... 3GB seems to be the sweet spot for higher-end home PCs.
3GB is not so much a "sweet spot" as it is a limitation based on a 32 bit OS.
You can address 4GB max using 32 bits. Now take out the address space needed for your video card and any other cards you may put on the bus, and you are looking at roughly 3GB of usable memory.
So instead of "sweet spot" you really mean "the maximum that can be used by 32-bit Windows XP" (the most commonly used OS today).
Re:Answers (Score:5, Informative)
Just FYI, 32-bit Intel processors from the Pentium Pro generation onward (with the exception of most, if not all, of the Pentium-Ms) have 36 physical address pins or more.
Many, but not all, chipsets have a facility for breaking the physical address presentation of the system RAM into a configurably-sized contiguous block below the 4GB limit and then making the rest available above the 4GB limit. If you're curious, the register (in intel parlance) is often called TOLUD (Top of Low Useable DRAM).
Yes. Furthermore, given modern OS designs on the x86 architecture, a process cannot utilize more than 2GB (Windows without the /3GB boot option) or 3GB (Linux, most BSDs, or Windows with /3GB and apps specifically built to use the 3/1 instead of the 2/2 split).
However, that limitation does not preclude you from having a machine running eight processes using 2GB of physical memory each.
The processor feature is called PAE (Physical Address Extension). It works, basically, by adding an extra level of processor pagetable indirection.
Incidentally, I have a quad P3-700 (It's a Dell PowerEdge 6450) propping a door open that could support 8GB of RAM if you had enough registered, ECC PC-133 SDRAM to populate the sixteen dimm slots.
Anyway, here's a snippet from the boot log of a 32-bit machine running Linux which has 4GB of RAM:
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: 0000000000000000 - 0000000000097c00 (usable)
[ 0.000000] BIOS-e820: 0000000000097c00 - 00000000000a0000 (reserved)
[ 0.000000] BIOS-e820: 00000000000e8000 - 0000000000100000 (reserved)
[ 0.000000] BIOS-e820: 0000000000100000 - 00000000defafe00 (usable)
[ 0.000000] BIOS-e820: 00000000defb1e00 - 00000000defb1ea0 (ACPI NVS)
[ 0.000000] BIOS-e820: 00000000defb1ea0 - 00000000e0000000 (reserved)
[ 0.000000] BIOS-e820: 00000000f4000000 - 00000000f8000000 (reserved)
[ 0.000000] BIOS-e820: 00000000fec00000 - 00000000fed40000 (reserved)
[ 0.000000] BIOS-e820: 00000000fed45000 - 0000000100000000 (reserved)
[ 0.000000] BIOS-e820: 0000000100000000 - 000000011c000000 (usable)
The title of that list should really be "Physical Address Space map." Either way, notice that the majority of the RAM is available up until 0xDEFAFE00 and the rest is available from 0x100000000 to 0x11c000000 - a range that's clearly above the 4GB limit.
Yes, it's running a bigmem kernel... But that's what bigmem kernels are for.
Oh, incidentally, even Windows 2000 supported PAE. The bigger problem is the chipset. Not all of them support remapping a portion of RAM above 4GB.
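If you want to see the same thing on your own box, here's a rough sketch that sums the "usable" ranges from an e820 dump in the format shown above and reports how much RAM sits below versus above the 4GB boundary (it assumes that exact dmesg format):

# Sum "usable" BIOS-e820 ranges from dmesg output (pipe it in on stdin) and
# split the total at the 4GB boundary. Assumes the log format shown above.
import re
import sys

FOUR_GB = 1 << 32
pattern = re.compile(r'BIOS-e820: ([0-9a-f]+) - ([0-9a-f]+) \(usable\)')

below = above = 0
for line in sys.stdin:                   # e.g.  dmesg | python e820sum.py
    m = pattern.search(line)
    if not m:
        continue
    start, end = int(m.group(1), 16), int(m.group(2), 16)
    below += max(0, min(end, FOUR_GB) - start)
    above += max(0, end - max(start, FOUR_GB))

print("usable below 4GB: %.2f GB" % (below / float(1 << 30)))
print("usable above 4GB: %.2f GB" % (above / float(1 << 30)))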
The truth (Score:5, Insightful)
My first computer was an 80286 with 1 MB of RAM. That RAM was all parity memory. Cheaper than ECC, but still good enough to positively identify a genuine bit flip with great accuracy. My 80386SX had parity RAM, and so did my 486DX4-120. I ran a computer shop for some years, so I went through at least a dozen machines ranging from the 386 era through the Pentium II era, at which point I sold the shop and settled on an AMD K6-2 450. And right about the time that the Pentium was giving way to the Pentium II, non-parity memory started to take hold.
What protection did parity memory provide, anyway? Not much, really. It would detect, with 99.99...% accuracy, when a memory bit had flipped, but provided no answer as to which one. The result was that if parity failed, you'd see a generic "MEMORY FAILURE" message and the system would instantly lock up.
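To make the detects-but-can't-locate point concrete, here's a toy even-parity example (just the arithmetic, not how a parity DIMM is actually wired): a single flipped bit is noticed but not located, and a double flip sails through undetected.

# Toy even parity over one byte: detection without location.
def parity(byte):
    return bin(byte).count('1') & 1        # 1 if an odd number of bits are set

stored = 0b10110010
p = parity(stored)                         # parity bit stored alongside the data

one_flip = stored ^ 0b00000100             # single bit flip
print(parity(one_flip) != p)               # True  -> error detected, but which bit?

two_flips = stored ^ 0b00100100            # double bit flip
print(parity(two_flips) != p)              # False -> error goes completely unnoticed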
I saw this message perhaps three times - it didn't really help much. I had other problems, but when I've had problems with memory, it's usually been due to mismatched sticks, or sticks that are strangely incompatible with a specific motherboard, etc. none of which caused a parity error. So, if it matters, spend the money and get ECC RAM to eliminate the small risk of parity error. If it doesn't, don't bother, at least not now.
Note: having more memory increases your error rate assuming a constant rate of error (per megabyte) in the memory. However, if the error rate drops as technology advances, adding more memory does not necessarily result in a higher system error rate. And based on what I've seen, this most definitely seems to be the case.
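Put as arithmetic: expected soft errors are roughly (per-megabyte rate) x (megabytes installed), so a falling per-megabyte rate can offset growing capacity. A back-of-the-envelope sketch with made-up rates, not measured figures:

# Expected errors scale linearly with RAM for a fixed per-MB rate. The rates
# below are placeholders for illustration, not real measurements.
def expected_errors_per_year(ram_mb, errors_per_mb_per_year):
    return ram_mb * errors_per_mb_per_year

print(expected_errors_per_year(512, 0.002))     # older, smaller system: ~1.0/year
print(expected_errors_per_year(8192, 0.0001))   # newer, larger system:  ~0.8/year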
Remember this blog article about the end of RAID 5 in 2009? [zdnet.com] Come on... are you really going to think that Western Digital is going to be OK with near 100% failure of their drives in a RAID 5 array? They'll do whatever it takes to keep it working because they have to - if the error rate became anywhere near that high, their good name would be trashed because some other company (Seagate, Hitachi, etc) would do the research and pwn3rz the marketplace.
Re:The truth (Score:5, Interesting)
Actually, error rates per bit are increasing, because bits are getting smaller and fewer electrons are holding the value for your bit. An alpha particle whizzing through your RAM will take out several bits if it hits the memory array at the right angle. Previously, the bits were so large that there was a good chance the bit wouldn't flip. Now they're small enough that multiple bits might flip.
This is why I run my systems with ECC memory and background scrubbing enabled. Scrubbing is where the system actively picks up lines and proactively fixes bit-flips as a background activity. I've actually had a bitflip translate into persistent corruption on the hard drive. I don't want that again.
FWIW, I work in the embedded space architecting chips with large amounts of on-chip RAM. These chips go into various infrastructure pieces, such as cell phone towers. These days we can't sell such a part without ECC, and customers are always wanting more. We actually characterize our chip's RAM's bit-flip behavior by actively trying to cause bit-flips in a radiation-filled environment. Serious business.
Now, other errors that parity/ECC used to catch, such as signal integrity issues from mismatched components or devices pushed beyond their margins... Yeah, I can see improved technology helping that.
Re:The truth (Score:5, Insightful)
Yes. A higher energy particle hits something in the RAM, and alpha/beta particles scatter from the impact point... which is inside the memory cell.
That's why higher energy radiation is dangerous. It doesn't cause the damage itself, the products of the collision do. Radiation shrapnel, if you will.
RAID(?) for RAM (Score:5, Interesting)
With memory becoming so plentiful these days (I haven't seen many home PCs with 6 or 8GB, granted, but we're getting there), it seems that a single error on a large-capacity chip is getting more and more trivial. Isn't it a waste to throw away a whole DIMM? Why isn't it possible to "remap" this known-bad address, or allocate some amount of RAM for parity the way software like PAR2 works? Hard drive manufacturers already remap bad blocks on new drives. Also it seems to me that, being a solid-state device, small failures in RAM aren't necessarily indicative of a failing component the way bad sectors on a hard drive are. Am I missing something really obvious here, or is it really just easier/cheaper to throw it away?
Re: (Score:3, Interesting)
You just described ECC scrubbing [wikipedia.org] and Chipkill [wikipedia.org]. The technology's been around for a while, but it costs >$0 to implement so most people don't bother. As with most RAS [wikipedia.org] features most people don't know anything about it, so would rather pay $50 less than have a strange feature that could end up saving them hours of downtime. At the same time if you actually know what these features are and you need them, you're probably going to be willing to shell out the money to pay for them.
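As an aside, the PAR2/RAID-5 analogy from the question looks like this in its simplest possible form: one XOR parity block per group of data blocks lets you rebuild any single lost block. This is only a conceptual toy; real ECC, scrubbing and Chipkill use stronger codes (Hamming/Reed-Solomon style), not plain XOR across blocks.

# Toy XOR-parity recovery in the spirit of RAID-5/PAR2 (conceptual only).
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b'\x11\x22\x33\x44', b'\xa0\xb0\xc0\xd0', b'\x05\x06\x07\x08']
parity = xor_blocks(data)                       # stored alongside the data blocks

# Pretend data[1] was lost: rebuild it from the surviving blocks plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])                       # True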
Joking aside... (Score:5, Informative)
First, it was not cosmic rays; memory was tested in a lead vault and showed the same error rate. Turns out to have been alpha particles emitted by the epoxy / ceramic that the memory chips were encapsulated in.
That said: Quite clearly given your experience, Vista and Ubuntu load the memory subsystem quite differently. It is possible that Vista, with its all-over-the-map program flow, is missing cache a lot more often and so is hitting DRAM harder; I don't have the background to really know. I believe that Memtest86, in order to put the most strain on memory and thus test it in the most pessimal conditions, tries to access memory in patterns that equally hit physical memory hardest. But, what I have found is that some OSs, apparently including Ubuntu, will run on memory that is marginal, memory that Memtest86 picks up as bad.
As for ECC in memory... The problem is that ECC carries a heavy performance hit on write. If you only want to write 1 byte, you still have to read in the whole QWord, change the byte, and write it back to get the ECC to recalculate correctly. It is because of that performance hit that ECC was deprecated. The problem goes away to a large extent if your cache is write-back rather than write-through; though there will be still a significant number of cases where you have to write a set of bytes that has not yet been read into cache and does not comprise a whole ECC word.
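A rough sketch of that read-modify-write cycle, with a placeholder check() standing in for the real SECDED code (the point is the extra read and full-word write-back, not the code itself):

# Model RAM as 8-byte words, each stored with its check bits. Writing one byte
# forces: read the whole word, modify the byte, recompute checks, write it all back.
def check(word8):
    return sum(word8) & 0xFF                 # stand-in for real ECC check bits

memory = {0x1000: (bytes(8), check(bytes(8)))}   # addr -> (8-byte word, check bits)

def write_byte(addr, offset, value):
    data, _ = memory[addr]                                     # 1. read full word (+ ECC)
    data = data[:offset] + bytes([value]) + data[offset + 1:]  # 2. change just one byte
    memory[addr] = (data, check(data))                         # 3. recompute, write back whole word

write_byte(0x1000, 3, 0xAB)
print(memory[0x1000])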
That said, it is still used on servers...
But I don't expect it will reappear on desktops any time soon. Apparently they have managed to control the alpha radiation to a great extent, and so the actual radiation-caused errors are now occurring at a much lower rate, significantly lower than software-induced BSODs.
Re: (Score:3, Informative)
Perhaps that's another "feature" of Windows - no need for Memtest86 ... just leave Windows running for a few days with some applications running ... and if nothing crashes, the RAM is probably good.
Re:Joking aside... (Score:5, Insightful)
As for ECC in memory... The problem is that ECC carries a heavy performance hit on write. If you only want to write 1 byte, you still have to read in the whole QWord, change the byte, and write it back to get the ECC to recalculate correctly. It is because of that performance hit that ECC was deprecated. The problem goes away to a large extent if your cache is write-back rather than write-through; though there will be still a significant number of cases where you have to write a set of bytes that has not yet been read into cache and does not comprise a whole ECC word.
AFAIK, on modern computer systems all memory is always written in chunks larger than a byte. I seriously doubt there's any system out there that can perform single-bit writes either in the instruction set, or physically down the bus. ECC is most certainly not "deprecated" -- all standard server memory is always ECC; I've certainly never seen anything else in practice from any major vendor.
The real issue is that ECC costs a little bit more than standard memory, including additional traces and logic in the motherboard and memory controller. The differential cost of the memory is some fixed percentage (it needs extra storage for the check bits), but the additional cost in the motherboard is some tiny fixed $ amount. Apparently for most desktop motherboards and memory controllers that few $ extra is far too much, so consumers don't really have a choice. Even if you want to pay the premium for ECC memory, you can't plug it into your desktop, because virtually none of them support it. This results in a situation where the "next step up" is a server-class system, which is usually at least 2x the cost of the equivalent-speed desktop part for reasons unrelated to the memory controller. Also, because no desktop manufacturers are buying ECC memory in bulk, it's a "rare" part, so instead of, say, 20% more expensive, it's 150% more expensive.
I've asked around for ECC motherboards before, and the answer I got was: "ECC memory is too expensive for end-users, it's an 'enterprise' part, that's why we don't support it." - Of course, it's an expensive 'enterprise' part BECAUSE the desktop manufacturers don't support it. If they did, it'd be only 20% more expensive. This is the kind of circular marketing logic that makes my brain hurt.
Re: (Score:3, Insightful)
Depends (Score:5, Interesting)
My experience with a server that recorded about 15TB of data is something like 6 bit errors per year that could not be traced to any source. This was a server with ECC RAM, so the problem likely occurred in buses, network cards, and the like, not in RAM.
For non-ECC memory, I would strongly suggest running memtest86+ for at least a day before using the system, and if it gives you errors, replace the memory. I had one very persistent bit error in a PC in a cluster that actually required 2 days of memtest86+ to show up once, but did occur about once per hour for some computations. I also had one other bit error that memtest86+ did not find, but the Linux command-line memory tester found after about 12 hours.
The problem here is that different testing/usage patterns result in different occurrence probabilities for weak bits, i.e. bits that only sometimes fail. Any failure in memtest86+ or any other RAM tester indicates a serious problem. The absence of errors in a RAM test does not indicate the memory is necessarily fine.
That said, I do not believe memory errors have become more common on a per-computer basis. RAM has become larger, but also more reliable. Of course, people participating in the stupidity called "overclocking" will see a lot more memory errors and other errors as well. But a well-designed system with quality hardware and a thorough initial test should typically not have memory issues.
However, there is "quality" hardware that gets it wrong. My ASUS board sets the timing for 2 and 4 memory modules to the values for 1 module. This resulted in stable 1- and 2-module operation, but got flaky with 4 modules. I finally moved to ECC memory before I figured out that I had to manually set the correct timings. (No BIOS upgrade available that fixed this...) This board has "professional" in its name, but apparently "professional" does not include use of generic (Kingston, no less) memory modules. Other people have memory issues with this board as well that they could not fix this way; it seems that sometimes a design is just bad, or even reputable manufacturers do not spend a lot of effort to fix issues in some cases. I can only advise you to do a thorough forum search before buying a specific mainboard.
If it was really a cosmic ray (Score:5, Funny)
Then it would proba%ly alter not just one byte, b%t a chain of them. The cha%n of modified bytes would be stru%g out, in a regular patter%. Now if only there were so%e way to read memory in%a chain of bytes, as if it w%re a string, to visu%lize the cosmic ray mod%fication. hmmm...
Settings matter too (Score:5, Informative)
Not all memory is created equal. Memory can be bad if Memtest detects errors, or you can simply be running it at the wrong settings. Usually there are both "normal" and "performance" settings for memory on higher end motherboards, or sometimes you can tweak all sorts of cycle-level stuff manually (CAS latency etc.).
Try running your memory with the most conservative settings before you assume it's bad.
Workaround bad memory howto (linux only) (Score:5, Informative)
Depending on where it fails (if it fails in the same spot) you can relatively easily work around it and not throw out the remaining good portion of the stick. I wrote a howto:
http://gquigs.blogspot.com/2009/01/bad-memory-howto.html [blogspot.com]
I've been running on Option 3 for quite some time now. No, it's not as good as ECC, but it doesn't cost you anything.
Re:Workaround bad memory howto (linux only) (Score:4, Informative)
On vista you can do the same thing using bcdedit:
bcdedit /set badmemorylist 0x12345 0x23456
Parameters are page frame numbers.
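Assuming what you have are physical byte addresses (the form memtest86 reports) and 4KB pages, converting them to the page frame numbers badmemorylist wants is just a divide by the page size; a quick sketch:

# Physical byte address -> page frame number, assuming 4KB pages.
PAGE_SIZE = 4096

def addr_to_pfn(phys_addr):
    return phys_addr // PAGE_SIZE

bad_addrs = [0x12345678, 0x23456000]            # hypothetical failing addresses
print(" ".join(hex(addr_to_pfn(a)) for a in bad_addrs))
# -> 0x12345 0x23456   (the values you'd hand to bcdedit above)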
Trust Memtest86 (Score:5, Informative)
1) Do people trust a memtest86 error to mean a bad memory module or motherboard or CPU?
Well, I'd add some other possibilities such as:
Bad power supply.
Memory isn't seated properly in its socket.
Incorrect timings set in the BIOS.
Memory is incompatible with your motherboard.
etc.
But yeah, if memtest86 says there's a problem then there really is something wrong.
Was it cosmic rays, or...? (Score:3, Informative)
OK (Score:3, Insightful)
Yes. I do, anyway; I've never had it report a false-positive, and it's always been one of the three (and even if it was cosmic rays, it wouldn't consistently come up bad, then, would it?). Then again, it could also mean that you could be using RAM requiring a higher voltage than what your motherboard is giving it. If it's brand-name RAM, you should look up the model number and see what voltage the RAM requires. Things like Crucial Ballistix and Corsair Dominator usually require around 2.1v.
Depends. If you're doing really important stuff then sure. ECC memory is quite a boon in that case. If you're just using your desktop for word processing and web browsing, it's a waste of money.
Here's the article I remember RE alpha particles. (Score:5, Informative)
http://www.ida.liu.se/~abdmo/SNDFT/docs/ram-soft.html [ida.liu.se]
This references an IBM study, which is what I think I actually remember but could not find quickly this morning.
"In a study by IBM, it was noted that errors in cache memory were twice as common above an altitude of 2600 feet as at sea level. The soft error rate of cache memory above 2600 feet was five times the rate at sea level, and the soft error rate in Denver (5280 feet) was ten times the rate at sea level."
Re: (Score:3, Informative)
With today's wide buses, parity RAM is ECC RAM. It's worth paying the extra couple dollars.
Several years back I experienced disk corruption that seemed to be due to a bitflip that had happened in RAM and got committed to disk. That machine didn't have ECC RAM. I went to ECC for everything after that. That was back in the 128MB days, and no I don't overclock.
(Well, not aggressively. My machine is overclocked by about 1%.)
Re: (Score:3, Informative)
I think it occurs quite a bit more often than once every few days. It is however rare that you'll notice since the data corrupted is often not in code that would lead to a crash, actual program code being such a small percentage of RAM usage these days. A flip in graphical data, sound or text data will very likely go unnoticed. Same goes for flips in code-paths that are rarely used.
Re:(Sensible) People do use ECC RAM (Score:5, Funny)
I see you've never experienced the joys of J2EE.
Re:Paranoia? (Score:5, Interesting)
The probability of a cosmic ray at precisely the right angle and speed to cause a single bit error and cause an app to crash is somewhere on the same order as your chances of getting hit by a car, getting struck by lightning, getting torn apart by rabid wolves, and having sex in the back of a red 1948 Buick convertible at a drive-in movie theater on Tuesday night, Feb. 29th under a blue moon... all at the same time.... Sure, given enough bits, it's bound to happen sooner or later, but it isn't something I'd worry about. :-)
The probability of RAM just plain being defective---failing to operate correctly due to bugs in handling of certain low power states, having actual bad bits, having insufficient decoupling capacitance to work correctly in the presence of power supply rail noise, etc---is probably several hundred thousand orders of magnitude greater (probably on the order of a one in several thousand chance of a given part being bad versus happening to a given part a few times before the heat death of the universe).
Memory test failures (other than mapping errors) are pretty much always caused by hardware failing. If memtest86 fails but running Linux works correctly for days, this probably means one of three things:
I couldn't tell you which of these is the case without swapping out parts, of course. You should definitely take the time to replace whatever is bad even if it seems to be "working" in Linux. In the worst case, you have a few bad bits of RAM, they're somewhere in the middle of your disk cache in Linux, and you are slowly and silently corrupting data periodically on its way out to disk.... You definitely need to figure out what's wrong with the hardware and why it is only failing in Windows, and it sounds like the only way to do that is to swap out parts, boot into Windows, and see if the problem is still reproducible in under a couple of days, repeating with different part swaps until the problem goes away. Don't forget to try a different power supply.
Re:Paranoia? (Score:4, Insightful)
several hundred thousand orders of magnitude
We've crossed beyond the realm of the astronomical and into something else entirely. Surely you meant several orders of magnitude, aka, hundreds of thousands of times? Let's keep things on this side of the googol.
Re:Paranoia? (Score:5, Funny)
and having sex in the back of a red 1948 Buick convertible at a drive-in movie theater on Tuesday night, Feb. 29th under a blue moon... all at the same time....
Mom?
Re: (Score:3)
When rudely swiping at other people, at least stop dribbling nonsense like "several hundred thousand orders of magnitude greater". I don't think you know what you are talking about. >>10^100000?
So I discount the rest of your "contribution" accordingly. Actually, several other parts of your answer are independently rubbish too: have you considered a career in tabloid journalism? Wish I had mod points...
Rgds
Damon
Re: (Score:3, Interesting)
The real issue with memory cells flipping is not cosmic rays -- at least not with terrestrially deployed memory, it's alpha particle emissions from radioactive decay of the plastics in the memory package. Yes, the plastics surrounding the silicon.
A lot of work has been done to reduce the radioactivity of plastics used in IC packaging from normal background levels that you don't worry about in day-to-day life, to as quiet as possible, by carefully selecting source materials that have few naturally occurring
Re:Paranoia? (Score:5, Informative)
Re:Paranoia? (Score:5, Informative)
My bet is that it is Cerenkov radiation, as a high-speed charged particle exceeds the speed of light in the fluid of the eyeball.
Indeed, these flashes have pretty much already been identified [sciencemag.org] as the result of Cerenkov radiation.
Re: (Score:3, Interesting)
You're right that I've never run memtest86 at all. I hadn't regularly worked with any hardware based on an Intel architecture until about two years ago, and haven't experienced any RAM problems in that relatively short period. That is the sole valid criticism in your post, and even that was redundant. The rest of your post consists of you putting words in my mouth that I did not say.
Regarding point A., many Linux systems do perform at least rudimentary RAM checks. What I said was that it is remotely pos
My experience dictates it... (Score:4, Insightful)
If you go with non-ECC, I would suggest running memtest86+. If you get errors, swap the memory. If swapping the memory still doesn't take care of it, swap motherboards! I recently had a memory problem in one of my customers' racks, and running memtest86+ got nothing until I had it running on my bench for over a week. There may be some problems with memtest86+... I even had another bit error that memtest86+ did not find, but a Linux command-line memory tester found a problem almost immediately.
The problem here is that different testing/usage patterns result in different probabilities of finding potentially bad words, e.g. words that may only be bad if you read from them a hundred cycles consecutively. But if you do see a failure in memtest86+ or the CLI tester, you've got yourself a serious problem. The point to take from this is that if you don't see errors, that doesn't mean you don't have errors!
Having said this, I still don't think memory errors among PCs are that common. We have more RAM in machines these days, but at the same time the manufacturing processes have become better. I'm personally convinced that although the likelihood of a word error has increased with the increased amount of words in memory, the RAM itself has become so much more "solid" that the effect of the increase is negligible. Now, if you do dumb things with your computer like running it without a case or not giving it ventilation (learned this the hard way) or overclocking it, you *WILL* still run into problems. But if you design a system with quality and integrity, you typically shouldn't have these issues with memory!
One last thing to point out: there is quality hardware, and there is cheap hardware. My PC-Chips motherboard ran for three months and two days, and I didn't have a problem. Two days out of warranty. Now, take my MSI motherboard. It sets the timing for all memory modules to the values for a single module. This resulted in stable single-module operation, but got flaky with all four modules. I finally moved to ECC before I figured out that I had to manually set the correct timings. This board is an "ultra" board, but apparently that does not include support for generic (Micron, Corsair, etc!! - tried 'em all) memory modules. People on the Newegg review board have memory issues with this board as well that they could not fix with a BIOS update, and it appears that sometimes a design is just bad! Even the "good" manufacturers do not spend a lot of effort to fix issues in some cases.
My words of advice: Do your homework. Read through the reviews. AND DON'T BUY HARDWARE AS SOON AS IT COMES OUT!
Re:Mod Parent Up (Score:5, Insightful)
If you look at the username, it's not him at all; it's someone with ID 1344097 pretending to be him. Still, what he says is sensible, and what's wrong with this piece? If it doesn't interest you, why are you reading the comments?