Data Center Study Reveals Top 5 SMART Stats That Correlate To Drive Failures
Lucas123 writes: Backblaze, which has taken to publishing data on hard drive failure rates in its data center, has just released data from a new study of nearly 40,000 spindles revealing what it says are the top 5 SMART (Self-Monitoring, Analysis and Reporting Technology) values that correlate most closely with impending drive failures. The study also revealed that many SMART values one would intuitively consider related to drive failures actually don't relate to them at all. Gleb Budman, CEO of Backblaze, said the problem is that the industry has created vendor-specific values, so a stat that relates to one drive and manufacturer may not relate to another. "SMART 1 might seem correlated to drive failure rates, but actually it's more of an indication that different drive vendors are using it themselves for different things," Budman said. "Seagate wants to track something, but only they know what that is. Western Digital uses SMART for something else — neither will tell you what it is."
Skip the blogspam, here's the real link (Score:5, Informative)
https://www.backblaze.com/blog/hard-drive-smart-stats/
Goes into a lot more detail too.
Uncorrected reads (Score:3)
Uncorrected reads do not indicate a drive will fail. They indicate the drive has _already_ failed.
The number one predictor is probably power-on time, they go into that in an earlier post.
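If you want to check that on your own drive, smartmontools reports it as attribute 9 (/dev/sdX is a placeholder):
$ sudo smartctl -A /dev/sdX | grep -i power_on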
Re:Uncorrected reads (Score:5, Interesting)
I have had drives fail. I took them offline and wrote 0s and 1s to them with dd until Reallocated_Sector_Ct stopped rising and Current_Pending_Sector went to zero, then ran e2fsck -c -c on them 2 or 3 times, and then I put them back online!!!
Most people would say this is crazy, but in my opinion the surface of a drive often has bad spots while the rest is perfectly OK. Some of those drives are still online without reporting any new errors after more than 5 years, some almost 10 years. Those are server drives with very low Start_Stop_Count, Power_Cycle_Count and Power-Off_Retract_Count, all lower than 250 after 10 years. Those drives are spinning all the time.
Newer drives will relocate bad sectors to free reserved space they keep for that purpose. As long as you don't run out of free spare space, IMHO, it is worth a try.
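For the curious, a rough sketch of that kind of offline scrub, assuming the drive is /dev/sdX with a single data partition /dev/sdX1 (placeholders), is already out of service, and you accept that everything on it gets destroyed:
$ sudo dd if=/dev/zero of=/dev/sdX1 bs=1M oflag=direct status=progress    # force writes so the firmware can remap weak sectors
$ sudo smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'    # repeat the write pass until these stop moving and pending returns to 0
$ sudo mkfs.ext4 /dev/sdX1 && sudo e2fsck -f -c -c /dev/sdX1    # rebuild the filesystem, then a non-destructive read-write badblocks pass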
Re: (Score:2)
Newer drives will relocate bad sectors to free reserved space they keep for that purpose.
IBM Mainframe drives did that back in the 1960s.
From what I've seen of hard drives, they're a lot like silicon wafers. Rarely perfect, but as long as they're "good enough", the controller maps around the bad spots that they came with as well as a certain number of ones that form over the operating life.
Re: (Score:2)
I know that. I run e2fsck -c -c (a write+read test) to generate random-pattern writes on the drive and then read the data back to make sure it is the same. By the time I put a drive back online, e2fsck -c -c reports 0 bad blocks and no timeouts have occurred. I also check the logs for timeouts.
Failed reads on a drive that is part of a RAID array will usually cause the drive to be kicked out of the array after a timeout, slowing down the machine. The strategy I suggested allows the drive hardware to indeed rel
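For the log check mentioned above, something like this works on most Linux boxes; the exact message text varies by kernel and driver:
$ dmesg | grep -iE 'ata[0-9]+.*(timeout|failed command|exception)'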
Re: (Score:2)
Spring the extra money to buy WD Reds, which do have TLER, IIRC.
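For the record, a hedged way to check or set that error-recovery timeout (SCT ERC, the knob TLER exposes) with smartctl; the drive has to support SCT commands, the values are tenths of a second, and /dev/sdX is a placeholder:
$ sudo smartctl -l scterc /dev/sdX          # show the current read/write recovery limits, if the drive reports them
$ sudo smartctl -l scterc,70,70 /dev/sdX    # ask for 7-second limits; many desktop drives refuse, and most forget it after a power cycle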
Re: (Score:2)
I suggest you do a little more research. If a sector was successfully written to and then 2 months later the drive hardware can't read from it, there is no way for the drive hardware to automagically correct the error and recover the data. The drive hardware then just increments the Current_Pending_Sector count. You could start by reading your own link, but then again, you seem to have problems reading my own posts, so your mileage may vary ;-)
Re: (Score:2)
Run the cost/risk assessment and apply accordingly.
Exactly. Use ZFS, which does just that, if you can afford the extra memory. Use a fancy hardware RAID controller that does it if you wish. I just use cheap drives and Linux MD. Do your research before commenting on a setup you don't seem to know about. You don't have to brag about your hardware here and try to convince others to do as you do.
Didn't I mention in my first post: "Most people would say this is crazy but in my opinion,..."?
I do not see what your point was in replying to my posts anyway, other th
Re: (Score:2)
Who says I don't ALSO work for others and don't know about more expensive solutions? I just don't brag about it, mister Shaman ;-)
I know enough to know about people covering their arses, it is pretty common you know...
Yet, I never lost any data on the cheaper setup I run on the side.
Take care man!
Re: (Score:2)
more so when an update rolls around and potentially throws a wrinkle in the mix.
You are right about this. Once, a Linux kernel update (or was it mdtools?) was screwed: you would add a new partition to a Linux MD RAID array and it wouldn't sync the partition before putting it online ;-) This is where a good backup strategy comes into play.
Anyways, toying around with Linux MD and cheap solutions makes you more creative in the long run, IMHO.
Just keep your mind open please. There are plenty of approaches and trade-offs available and just as you said:
Run the cost/risk assessment and apply accordingly.
Furthermore, it depends on SLAs and su
Re: (Score:2)
The problem is you have no idea how many free reallocated sectors are available. It isn't even consistent between drives, as some will have been used at the factory before the count was reset to zero.
Your strategy is reasonable if the drives are part of a redundant array or just used for backed-up data, but for most people, once the reallocated sector count starts to climb, it's best to just return the drive as a SMART failure and get it replaced.
Re: (Score:2)
Good one! mke2fs -c -c
Thanks for pointing this out!
Re: (Score:2)
Still a valid approach today for surface defects. And if you had run regular full surface scans, you would probably not have had to do anything yourself.
Re: (Score:2)
Wrong. Uncorrected reads indicate surface defects. The rest of the surface may be entirely fine. All disks have surface defects, and not all are obvious in manufacturer testing.
They also indicate faulty drive care. Usually, data goes bad over a longer time. If you run your long SMART self-tests every 7-14 days, you are very unlikely to be hit by this and will get reallocated sectors with no data loss instead. Not doing these tests is like never pumping your bicycle tires and complaining when they eventually g
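A minimal cron sketch of that habit, assuming smartmontools is installed; the file name, day and time are arbitrary and /dev/sdX is a placeholder:
# /etc/cron.d/smart-longtest: kick off a long self-test every Sunday at 03:00
0 3 * * 0  root  /usr/sbin/smartctl -t long /dev/sdX > /dev/null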
The measurements in question: (Score:5, Informative)
for those who are only passingly curious and don't want to read the article.
SMART 5 - Reallocated_Sector_Count.
SMART 187 - Reported_Uncorrectable_Errors.
SMART 188 - Command_Timeout.
SMART 197 - Current_Pending_Sector_Count.
SMART 198 - Offline_Uncorrectable.
Re:The measurements in question: (Score:4, Insightful)
Those 5 SMART stats match up exactly with what I habitually look at on the job monitoring lots of RAID arrays' drives. In my experience, those are the stats that most often tell you a drive is going bad.
Re: (Score:3)
Yes. This article isn't exactly news as it pretty much confirms what the global peanut gallery has already said about this stuff.
Re:The measurements in question: (Score:5, Insightful)
Yes. This article isn't exactly news as it pretty much confirms what the global peanut gallery has already said about this stuff.
Still, data is better than emergent collective perceptions from distributed anecdotes.
Re: (Score:2)
Those 5 SMART stats match up exactly with what I habitually look at on the job monitoring lots of RAID arrays' drives
Really? At my job I get notified that the array is ejecting a drive based on whatever parameters the OEM uses, it's already started the rebuild to spare space on the remaining drives, and a ticket has been dispatched to have a technician bring a replacement drive. If it's a predictive fail it generally doesn't notify until the rebuild has completed as it can generally use the "failing" drive
Re: (Score:2)
We just look at the flashing lights every once in a while. Though we've got drives the RAID controller has been telling us are failing for the best part of a year now, and haven't got around to replacing them.
Re: (Score:3)
I never worry about going home, my array has plenty of spare capacity to handle rebuilds, we schedule the technician when it's convenient to us, not when it's convenient for them or the array. When you have guard space for at least 4 disk failures (out of a few hundred) you deal with replacements in a less urgent manner than a traditional small RAID5 array in a standalone server. Within ~30 minutes of a failure or a predictive failure my arrays are back to 100% resiliency with slightly less guard space. It'
Re: (Score:2)
Your later comments about ignoring RAID controller warnings for a *year* strike me as callous. But we all have our standards, and standards vary greatly from place to place, as the needs that drive the standards also vary greatly. (Financial institutions care much more about transactional correctness than Reddit.)
After months of testing, our organization has wholeheartedly adopted ZFS and has been finding that not only is it technically far superior to other storage technologies, it's significantly faster in
Re: (Score:2)
You can't compare real filesystems to EXT. EXT4 is a backport of some of what is possible in modern filesystems to a brand name that makes people comfortable. Like most filesystems it's sufficient for many uses, but it's not particularly good at anything and it's really bad at a whole slew of fairly common uses. It's not even a good compromise for backwards compatibility, like EXT3 was, as volumes formatted as EXT4 can't be mounted as EXT2/3.
I'm not saying EXT4 is bad, just that it isn't a terribly useful base
Re: (Score:3)
I tend to think a drive has failed once it has any uncorrectable errors... I lost some data, it couldn't be read back. Drive gets returned to the manufacturer under warranty. Don't wait around for it to fail further.
I agree with the reallocated sector count though. The moment that starts to rise I usually make sure the data is fully backed up and then do a full surface scan. The full scan almost always causes the drive to find more failed sectors and die, so it gets sent back under warranty too.
Re: (Score:3)
And to list these for your own drive:
$ sudo smartctl -A /dev/sda | egrep '^\s*(ID|5|1[89][78])'
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 253 000 Old_age Always - 0
197 Current_Pending_Sector 0x0012 100 1
Re:The measurements in question: (Score:5, Informative)
And I can confirm. Reallocated Sector Count rarely goes above zero when the drive is fine. It's possible to have a few sectors go bad and get reallocated, but it's usually part of a bigger problem when it happens (this number is reset to zero at the factory, after all initially bad sectors have been remapped). If the Current Pending Sector Count is non-zero, it's likely over.
I always clone a drive immediately with ddrescue when it gets to this point, while the drive is still working.
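Roughly what that looks like with GNU ddrescue, assuming /dev/sdX is the failing source, /dev/sdY is the replacement, and you have triple-checked which is which (device names here are placeholders):
$ sudo ddrescue -f -n /dev/sdX /dev/sdY rescue.map     # first pass: copy everything that reads cleanly, skip the slow scraping
$ sudo ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map    # second pass: retry the bad areas a few times
The rescue.map file is also what lets you stop, unplug the drive, and later resume from where you left off.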
Re: (Score:2)
If it comes to you having to clone the drive, it's too late. That's going to bite you in the ass sooner or later.
Re: (Score:2)
At the first sign of trouble? How much earlier should I do it? I'm not saying in place of a backup. Just as a quicker way to get a new drive up and running.
Re:The measurements in question: (Score:4, Informative)
Also, generally you don't need to panic over this attribute. You should panic when it increases steadily.
True, I've had a few drives hold steady at 1 sector reallocated. But if Current Pending Sector count remains non-zero for very long, it's a headache at the very least and probably a failure. Generally, it seems like as soon as you crest zero, it's over. I've had the next symptom be a totally unresponsive drive. But doing the backup when you hit 1 (admittedly overly cautious) will force the drive to read off all the sectors and you'll at least get your backup while you verify the rest of the drive still reads OK.
Re: (Score:2)
just take the drive off-line and try this:
http://slashdot.org/comments.p... [slashdot.org]
Current_Pending_Sector will go back to zero if the drive is still usable.
Re: (Score:2)
I realize that. But I always make a clone first, because it's a lot of wear and runtime if the drive is actually failing.
Re: (Score:2)
bad sectors = pull it, clone it, bin it
Re: (Score:2)
Current_Pending_Sector > 0 means you most likely already have unrecoverable errors on your disk, because otherwise the sectors would already have been remapped (and thus not pending). So if your CPS = 3, expect there to be at least three sectors which will return an uncorrectable error when read. Writing to these sectors will allow them to be remapped, which will decrease your CPS.
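For what it's worth, a sketch of forcing one of those remaps by hand, assuming the failing LBA came out of a long self-test log, the drive uses 512-byte logical sectors, and you accept losing whatever was in that sector (<LBA> and /dev/sdX are placeholders):
$ sudo smartctl -l selftest /dev/sdX    # the LBA_of_first_error column shows where a long test choked
$ sudo dd if=/dev/zero of=/dev/sdX bs=512 count=1 seek=<LBA> oflag=direct    # overwrite it so the firmware can remap; CPS should drop by one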
Re: (Score:2)
That's true. Writing to the sector will remap it. But if you get a bad sector, it's very rare for it to remain an isolated incident. And it may not be the sector, but rather the head that's actually failing. I usually consider the drive a likely loss by this point. After doing a full backup, I'll run the drive manufacturer's utility to scan the disk and remap sectors and then write zeroes to the drive for good measure. If all is OK after that, I can always clone my backup back onto the drive.
Re: (Score:2)
Did some idiot mod you DOWN?
This is information that bears frequent repetition.
Re: (Score:2)
In other words: nothing new and people have been tracking these values for decades anyway.
Re: (Score:2)
They needed a study to arrive at that conclusion?
Reallocated, uncorrectable and pending sectors are all obvious indicators of impending drive failure.
Command Timeouts, depending on definition, could be timeouts after failing a read, so nothing unusual there.
Re: (Score:2)
For SSDs, that's what I would like to know.
Re: (Score:2)
I don't know. I believe though that, unlike hard drives, SSDs are designed on the presumption that cells will gradually fail as part of normal operations, and hence any such statistics would mean something very different than they would for a hard drive.
Re: (Score:2)
Any results on that yet? Should we expect them, or will they vary even more by manufacturer/model, making a "top 5" list impossible?
Re: (Score:2)
Well, these are exactly the ones every knowledgeable person was watching anyways. 188 can also be controller or cable problems though.
Cool data but... (Score:2)
Ever find it odd that most PC manufacturers (at least the variety I've seen over the years) disable S.M.A.R.T. in BIOS by default? Never understood the reasoning behind that...
Re: (Score:2)
I could never imagine why it is even POSSIBLE to disable it. If you don't want to read it, just freakin don't read it.
Re: (Score:2)
I could never imagine why it is even POSSIBLE to disable it. If you don't want to read it, just freakin don't read it.
I think there's some routine testing going on that adds overhead unless you disable it.
Re: (Score:3)
If the PC has less than optimal cooling, it's possible, even likely, the drive temperature will exceed operating specs at some point. Even if there is no ill effect or any long-term problem, the BIOS will forever after report "Imminent Drive Failure" on every boot if BIOS SMART is enabled.
Re: (Score:2)
I would like to see SMART tools built into Windows and other OS's (maybe there are some I don't know about). Especially since some of my computers are up for 6 months or more a
Re: (Score:2)
I would like to see SMART tools built into Windows and other OS's (maybe there are some I don't know about). Especially since some of my computers are up for 6 months or more at a time, a drive could be fine 4 or 5 months ago when it was last booted, but I won't get a SMART message until the next reboot, maybe a month or two from now, after it's too late.
The Linux smartmontools package has smartd, the "SMART Disk Monitoring Daemon", which will monitor SMART-capable drives, log problems, and send email alerts. Can be handy. Don't know about Windows.
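A minimal /etc/smartd.conf line along those lines (the device, schedule and address are placeholders; see smartd.conf(5) for the directives):
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com -M test
That monitors all attributes, runs a short self-test daily at 02:00 and a long one Saturdays at 03:00, and mails the given address on trouble (-M test sends one test mail at startup so you know the alerting works).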
Re: (Score:2)
I generally use HD Tune (www.hdtune.com), which is free unless you want to buy the Pro version with a bunch of features that are irrelevant if all you want is SMART reporting. If I were going to spend actual money on a checker, though, I would tend toward the LSoft Hard Disk Monitor (www.lsoft.net).
Re: (Score:2)
Less warranty replacement.
Re: (Score:2)
I have used utilities to view the SMART info on drives where this BIOS option is disabled, can't recall any systems where it flat-out didn't work. I won't say that this information couldn't be blocked in some cases, but I believe that this option is for whether the BIOS checks SMART status during POST. It has made the difference between a system merrily proceeding to boot with a SMART failure versus reporting that the drive's SMART indicates failure and
Thanks, Backblaze! (Score:2)
Re: (Score:2)
The list of parameters that are closely correlated with failure is pretty bloody obvious.
Re: (Score:2)
And yet they aren't, even by Backblaze's admission. SMART values they expected to be an indication of drive wear showed no correlation with failure.
Re: (Score:3)
> SMART values they expected to be an indication of drive wear showed no correlation with failure
Exactly. Also, some people want more than "approximately correlates"; they want to see the actual data on exactly how correlated it is.
Windows app that displays these meaningfully? (Score:2)
I've used Crystal Disk Info and while it reports SMART info, I can't make much out of the info.
Many attributes on Samsung spinning rust just show Current and Worst values of 100 and either a raw value of 0 or some insanely huge number.
Re: (Score:2)
A few of them aren't accounted for very well (and some of Samsung's stats are not cumulative). Crystal Disk Info makes it idiot-proof: if the square is blue, the drive is fine; yellow means the drive is probably failing soon; and red is a definite failure.
Raw value of zero is good. If Current Pending Sector Count or Reallocated Sector Count go above zero, you're likely dealing with a failing drive.
Most of the numbers are not important.
Re: (Score:2)
We use (currently) PartedMagic Linux distribution on a boot USB. The "Disk Health" tool happily reports on failing drives and gives reasons.
An added bonus is that Linux is better than Windows at allowing data to be copied from a failing drive (and it doesn't care about NTFS file permissions).
Re: (Score:2)
On Linux, I just use smartmontools. Gives the same grid of data (mostly) as Crystal Disk Info. But when copying a failing drive, always use ddrescue. It will allow you to unplug the drive (to do some mysterious temporary fix like putting it back in the freezer) and plug it back in and restart from where you left off. Unless you only need a small amount of data (I prefer to just clone the entire system to a new drive to boot from).
Just my personal take (Score:2)
Top #1 Indicator That Correlates To Drive Failure (Score:2)
Let's be real here. You almost never get advanced warning from SMART. Maybe one in twenty. Almost without fail you'll go from a drive running properly to a drive that won't rotate the spindle or the heads smash against the casing or you've suddenly got so many bad sectors that it's effectively unusable. Failure prediction is almost (but not quite) valueless compared to the reality of how drives fail.
Re: (Score:2)
Let's be real here. You almost never get advanced warning from SMART. Maybe one in twenty. Almost without fail you'll go from a drive running properly to a drive that won't rotate the spindle or the heads smash against the casing or you've suddenly got so many bad sectors that it's effectively unusable. Failure prediction is almost (but not quite) valueless compared to the reality of how drives fail.
Yeah, I did mention smartd in an earlier post, and I said it "can be handy", but I suppose I must agree with you based on my own life as it's been lived until now. We never put a server into service without at least software RAID, usually with just two disks, with some exceptions. A lot of our equipment is tiny Supermicro 1Us that can only hold two. But after many years we have yet to have two go at once (knock on wood), so the warning of a RAID out of sync has saved us.
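On the software-RAID side, that out-of-sync warning typically comes from something like this (the address is a placeholder; most distros ship the equivalent as an mdmonitor service):
$ cat /proc/mdstat    # quick look at array state and any rebuild in progress
$ sudo mdadm --monitor --scan --daemonise --mail=admin@example.com    # mail on Fail/DegradedArray and similar events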
Re: (Score:2)
If you go by Google's definition of failing (the raw value of any of Reallocated_Sector_Ct, Current_Pending_Sector, or Offline_Uncorrectable goes non-zero) rather than the SMART definition of failing (any scaled value goes below the "failure threshold" value defined in the drive's firmware), about 40% of drive failures can be predicted with an acceptably low false-positive rate. You're correct, though, that the "SMART health assessment" is useless as a predictor of failure.
They did a study on this [google.com] a few ye
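A quick way to apply that definition to your own drives; column 10 of smartctl -A is the raw value, /dev/sdX is a placeholder, and keep in mind some vendors pack several counters into one raw field, so treat it as a hint rather than gospel:
$ sudo smartctl -A /dev/sdX | awk '$1==5 || $1==197 || $1==198 { if ($10+0 > 0) print "WARN:", $2, "raw =", $10 }'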
Re: (Score:2)
I'd disagree. As an MSP we see occasional SMART errors and they're logged and tickets created.
So far we've cloned / backed up / moved everything of note off all 27 of them, but the three we left in and just spinning have all died within a month or so.
Sure, it's not scientifically representative, but I'll not take that chance with clients data...
Re: (Score:2)
I'd disagree. As an MSP we see occasional SMART errors and they're logged and tickets created. So far we've cloned / backed up / moved everything of note off all 27 of them, but the three we left in and just spinning have all died within a month or so.
Sure, it's not scientifically representative, but I'll not take that chance with clients data...
Yeah, I won't dispute your experience because it happened. On the other hand, the only SMART warnings I've seen in our fleet of... four-digits worth of spindles... have ended up false-positives. As in, I contact DELL / IBM / HP / Lenovo and report the issue, they instruct me to flash some controller firmwares, reboot, and go away. If those drives ever fail, it's years later, well beyond any correlation with the SMART events.
Re: (Score:2)
As an MSP, false positives are not always a negative. There, I said it... and most MSPs will agree, begrudgingly, when off the record.
That said, our support prices alter when the device is no longer under warranty, so the device usually gets moved to a location covered under a different support structure like only 8x5 or have a longer response time to compensate.
Put the SMART stats to the test (Score:2)
Re:Put the SMART stats to the test (Score:4, Informative)
Re: (Score:2)
Google did this [google.com] about seven years ago. Of the stats, a drive with a non-zero scan error count has a 70% chance of surviving eight months, one with a non-zero reallocated sector count has an 85% chance of survival, and one with a non-zero pending sector count has a 75% chance of survival. For comparison, a drive with no error indications has a better than 99% chance of surviving eight months.
Overall, 44% of failures can be predicted with a low false-positive rate, while 64% can be predicted with an unaccept
Re: (Score:3)
I've had drives fail in the ~3 years range from a few different manufacturers. I think with a sample size of 3 drives you can't really draw any conclusions.
Re: (Score:2)
We learned in statistics class that unless you have special circumstances a sample of size 100 is what you need at the very least.
Re: Seagate OEM? (Score:4, Informative)
I buy whatever is cheapest.
I know it's a toss-up no matter what or when you buy hard drives, so the only things I have left to gauge are price, capacity, and speed (RPM), depending on the intended use.
About a year ago I took a gamble on an SSD for my primary workstation. I bought an ADATA SX900 64GB drive. I had never heard of the brand before. It was ~$120 at the time, and the cheapest for that capacity. I've been looking at getting a 128GB (or so) SSD for my laptop. Prices right now look like I will be getting another ADATA... but I am holding out for Black Friday/Cyber Monday deals to decide.
Oddly enough, over the past 10 years, I've never had a hard drive die in any of my computers while in use. I have a stack of 4 or 5 drives, ranging in capacity from 100GB to 500GB, from 3 different brands, that I'm not using right now. A while back, I plugged one in just to see if it still worked and it didn't. I recently found out it was the hotswap bay that quit working, so as far as I know it still works.
Conversely, I have some servers in a datacenter. Had a drive fail on reboot after a kernel upgrade the other night. Sent a ticket to the DC and they plugged a new one in. Good to go again. In case you're wondering, it has 4x600GB SAS drives in RAID-10.
TL;DR: Buy whatever is cheapest, the odds are always the same.
Re: (Score:2)
You got lucky. I had 8 out of 10 ADATA 64GB mSATA drives fail at my workplace over the last year. ADATA is crap.
SSDs are a whole different ballgame. Comparing their quirks to rotating hard drives is akin to comparing a car to a train. They do not work the same, nor fail the same.
SSDs are by far not all created equal and you must do research before buying them. I like Samsung, in
Re: Seagate OEM? (Score:5, Insightful)
Disclaimer: I work at Backblaze. I'm going to completely agree with you wholeheartedly, and say in addition you must have a backup. You don't have to use us, I'm just saying if a drive has a 1 percent chance or a 30 percent chance of failing, the actionable item is the same - keep a backup and buy the cheaper drive and restore from backup when it happens.
> over the past 10 years, I've never had a hard drive die in any of my computers while in use.
Professionally we lose something like 10 (?) drives every single day at Backblaze, but *PERSONALLY* I had a LOT of luck for a number of years; then about 3 years ago I finally lost one drive. I'm more backed up than most people, so it was a completely relaxed event. Not a bit of stress. Replace the drive, re-install the OS, and restore the data. Yet something like 95 percent of people never back up their data. IT professionals back up their family computers, but once you are out there in "normal computer user" land, it's a horror show.
Re: (Score:2)
What are the odds I would have one of the employees from the article comment on my little ol' post?
I keep local backups. I've been browsing online, looking for an online backup service that I like; so far, not a whole lot of luck. I exclusively run Funtoo Linux on all of my personal and office computers (workstation at home, workstation at the office, and laptop). From what I understand, you don't support Linux (yet).
My basic requirements are:
- support Linux (one of ssh/scp, rsync, webdav)
- preferably data loc
Re: (Score:3)
http://smartmontools.sourceforge.net [sourceforge.net]
Re: (Score:3)
He hasn't given up, he's just acknowledged the reality that the variance among drives of any particular model is large enough that he can't statistically pick a winner, even given reliable statistics about the past performance of similar drives (which is definitely not available) and assuming the drives never change over their manufacturing life (which is definitely not true).
If you're buying 1000 hard drives their average reliability is meaningful to you (though even then it's only *a* factor, not *the* fact
Re: (Score:2)
Yes, even with lots of data you'd probably have a hard time showing that some manufacturers are significantly more reliable, due to lots of factors that will create a large deviation within each manufacturer.
I heard a story of someone seeing a shipping container full of hard drives get dropped accidentally, and they just hooked it back up, put it on the ship, and sent them on their way. That probably generated a lot of the "I bought 3 SuchBrand drives, and they all failed in the first month".
Re: Seagate OEM? (Score:2)
I pay for a business account with an online retailer. Said business account provides me with a 2 year exchange on all hard drives (and a bunch of other benefits).
So if the drive fails within 2 years, I send it back to them and they replace it with a similar model, and they pay for the shipping.
If it happens out of the two year scope, I'm better off just buying a new drive than dealing with the hassle of sending it to the manufacturer.
I don't own a shop, nor do I provide IT services. I used to buy A LOT of s
Re: (Score:2)
Please mod up. Seagate drives fail much sooner than all other brands.
Common cause (Score:2)
Re:Correlation != causation (Score:4, Insightful)
Nope. When looking for warning signs you don't care about causation, it's enough to know that the presence of A indicates an increased probability of imminent B.
Re: (Score:2)
Perhaps it has been too long - you've clearly forgotten the even longer history of the deadpan response to spam.
That said, I can't actually think of many cases of such spam in response to articles/discussions that never mentioned causation at all - but maybe that's just because causally irrelevant mentions of correlation are relatively rare. Or because the spamming was so bland that it just disappeared into the background.
Also: why oh why would you want to try to resurrect such an old and worthless meme? It
Re: (Score:2)
As this is about indicators, it is correlation all right and it is meaningful. Of course, SMART attributes do not _cause_ drive fails.
Re: (Score:2)
I've not done anything special with the two that I have in a media server at home. This stat is at 5 on the older drive and 4 on the newer drive. By comparison, a Seagate Barracuda LP in the same box is at 128 (it's quite a bit older than the WD drives), and the boot dri
Re: (Score:3)
Yes, exactly, why are you calling this stupid? It is interesting because it might affect your behavior - if you power cycle the drives every day, maybe you should consider leaving them powered up, if electricity is cheaper than replacing the drive. It's just an observation, leaving it out seems.... irresponsible? Disclaimer: I work at Backblaze.
Re: (Score:2)
IIRC, the greens are the "energy efficient" drives, and I think they power themselves down when idle, and up when they come back into use, so the numbers can grow even if the machine hasn't been rebooted since the drive was first installed.
Re: (Score:2)
Load Cycle Count and Power Cycle Count aren't the same thing.
Re: (Score:2)
I'm calling it stupid because if you don't know anything about the time between the power cycles, you can at best assume that the power cycle count is a low-quality proxy for powered hours.
For any claim that the number of power cycles itself is a predictor of failure, you'd need to, you know, power cycle a bunch of drives at various rates until they die, and see if merely power cycling it more often makes it fail faster. Only in such conditions would the power cycle mean anything. Otherwise it's stupid and
Re: (Score:2)
Pray tell, what has a firmware bug got to do with the meaning of a power cycle counter, other than that in this particular case you can't rely on a faulty counter? Let's not deflect attention to strawmen.
Re: (Score:2)
Don't worry about it. For reads, the disks try to start reading very early after positioning, so the heads may not be perfectly aligned yet. This leads to some retries and some ECC-recovered errors.