Intel Stomps Into Flash Memory
jcatcw writes "Intel's first NAND flash memory product, the Z-U130 Value Solid-State Drive, is a challenge to other hardware vendors. Intel claims read rates of 28 MB/sec, write speeds of 20 MB/sec, and capacities of 1GB to 8GB, which is much smaller than products from SanDisk. 'But Intel also touts extreme reliability numbers, saying the Z-U130 has an average mean time between failure of 5 million hours compared with SanDisk, which touts an MTBF of 2 million hours.'"
WTF? (Score:3, Insightful)
2 million hours? (Score:3, Insightful)
Re:WTF? (Score:5, Insightful)
Mean time between failures is not a hard prediction of when things will break. http://en.wikipedia.org/wiki/MTBF [wikipedia.org]
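To put numbers on that: vendor MTBF figures usually assume a constant hazard rate (an exponential failure model), in which case a multi-million-hour MTBF translates into a small probability of failure per year, not a promise that any unit lasts millions of hours. A rough sketch, using the MTBF figures from the summary and assuming the exponential model:

```python
import math

def annual_failure_rate(mtbf_hours, hours_per_year=8766):
    """Probability that a unit fails within one year, assuming a
    constant hazard rate (exponential failure model) -- the model
    vendor MTBF figures typically imply."""
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# The claimed 5M-hour (Intel) vs. 2M-hour (SanDisk) MTBF figures
print(f"Intel:   {annual_failure_rate(5_000_000):.3%} per year")
print(f"SanDisk: {annual_failure_rate(2_000_000):.3%} per year")
```

Under those assumptions the claimed MTBFs work out to annual failure probabilities of roughly 0.18% and 0.44% respectively -- which is why MTBF says something about fleet-wide failure rates, not individual lifetimes.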
Re:MTBF (Score:4, Insightful)
From the Wikipedia article [wikipedia.org]
Apple would lose all its value over time (Score:3, Insightful)
Unless Intel can keep Jobs and gives him free rein, Apple would soon go rotten under the mediocre vision of someone who just doesn't get the Apple culture -- someone who looks at the spreadsheets when designing products and releases "Me Too!" items that look and act like everyone else's. Just look at the stagnation of Apple throughout the late '80s and '90s. Intel certainly isn't that company.
And I think Jobs is too much of a control freak to voluntarily hand himself over to some corporate masters just for a few dollars' better margin on a few components.
5 million hours MTBF (Score:3, Insightful)
Re:MEAN time between failures, what does that MEAN (Score:2, Insightful)
Re:Why? what does it matter (Score:3, Insightful)
Re:MEAN time between failures, what does that MEAN (Score:3, Insightful)
Or, depending on how you look at it, they are both equally invalid if, in fact, the products have a thermal failure in which a trace on the board melts after 2 hours +/- 1 hour and you've just started hitting the failures when testing concludes. The shorter the testing time, the more thoroughly meaningless the results, because in the real world, most products do not fail randomly; they fail because of a flaw. And where there is a flaw, failures tend to cluster at a particular age or level of use. For example, I find that the MTBF for cars and hard drives tends to be the duration of the warranty period plus 1-4 weeks. :-)
MTBF is approximately useless unless product failures are distributed in a Gaussian distribution around the mean. You could have a long tail with a few units lasting a decade and most of them dying after a week, and still have an MTBF figure measured in years, depending on how the testing was done -- specifically, on whether the units reached the magic cluster-death point during the testing period or not. The odds of accidentally hitting such a degenerate case on a single drive are small, but they add up quickly when you're talking about an entire industry's worth of drive models. Were that not the case, a whole lot of really awful hard drive models would never have made it out of testing, IMHO.
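The long-tail scenario above is easy to simulate. A sketch with a hypothetical bimodal population (the 90/10 split and lifetimes below are made-up numbers, not anything from Intel's or SanDisk's testing): most units die from a common flaw around one week of use, a minority escape the flaw and run for about a decade, and the mean comes out near a year even though the typical unit barely survives seven days.

```python
import random

random.seed(42)

WEEK = 24 * 7            # hours
DECADE = 24 * 365 * 10   # hours

# Hypothetical population: 90% of units hit a flaw around one week
# of use; the remaining 10% last roughly ten years.
lifetimes = [
    random.gauss(WEEK, 24) if random.random() < 0.9
    else random.gauss(DECADE, 2000)
    for _ in range(100_000)
]

mean_life = sum(lifetimes) / len(lifetimes)          # the "MTBF" number
median_life = sorted(lifetimes)[len(lifetimes) // 2]  # the typical unit

print(f"mean lifetime (MTBF-style): {mean_life / 8766:.2f} years")
print(f"median lifetime:            {median_life / 24:.1f} days")
```

The mean lands around a year while the median sits near a week, so a single "MTBF in years" figure tells you almost nothing about when the typical unit actually dies.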
I wish manufacturers would be more transparent about their testing methodologies. My gut feeling, though, is that many of them have poor practices and don't want the world to know. This is one of the rare cases where the "if you have nothing to hide, you shouldn't keep this information private" argument actually holds some weight, IMHO---this and crypto research. :-)