Intel Stomps Into Flash Memory 130
jcatcw writes "Intel's first NAND flash memory product, the Z-U130 Value Solid-State Drive, is a challenge to other hardware vendors. Intel claims read rates of 28 MB/sec., write speeds of 20 MB/sec., and capacities of 1GB to 8GB, a much smaller range than SanDisk's products. 'But Intel also touts extreme reliability numbers, saying the Z-U130 has a mean time between failures of 5 million hours, compared with SanDisk, which touts an MTBF of 2 million hours.'"
MTBF (Score:5, Interesting)
Re: (Score:1)
And why wouldn't you want your pen drive to last 2 1/2 times longer?
Would it be that you're an AMD "fan" and are rooting against your home team's rival?
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
Doubtful. (Score:1)
Well, given that 5 million hours is equal to 570.39 years, I'm going to guess that no, they didn't actually test them for that long.
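For anyone checking the arithmetic, the conversion is trivial (a throwaway sketch; 8766 is the number of hours in an average year, leap days included):

    # Convert a claimed MTBF in hours to years.
    HOURS_PER_YEAR = 24 * 365.25   # = 8766, leap-adjusted average year

    mtbf_hours = 5000000
    print(mtbf_hours / HOURS_PER_YEAR)   # ~570.4 years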
MEAN time between failures, what does that MEAN (Score:4, Informative)
Re: (Score:2, Insightful)
Re: (Score:1)
Re: (Score:2)
It would be mathematically equal, but I'm not sure it'd be equally _valid_. Given initial defects and the possibility of a misdesign causing heat-related losses or the like, some stretch of time is really necessary. Testing 5 million units for one hour proves little more than that the expected life is longer than one hour. Testing 200,000 for 25 hours would likely, despite the smaller but still sizable sample size, mean much more. Testing 20,000 at 250 hours would likely mean more still.
5,000 un
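For what it's worth, here's how vendors get numbers like 5 million hours without testing anything for 570 years: pool the unit-hours across a big sample and divide by the failures observed. A minimal sketch, assuming a constant failure rate, which is exactly the assumption being questioned here:

    # Naive MTBF estimate: total accumulated device-hours / observed failures.
    # Only valid if the failure rate is constant over time (the big "if").
    def mtbf_estimate(units, hours_each, failures):
        return units * hours_each / failures

    # 20,000 units tested for 250 hours with a single failure:
    print(mtbf_estimate(20000, 250, 1))   # 5,000,000 hours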
Re: (Score:2)
As such an unreliable measure, the 'M' in 'MTBF' might as well stand for 'misleading'.
I didn't just say it, Carnegie Me
Re: (Score:2)
I'd like to see the industry do it without getting government involved. A simple law that clearly states that the manufacturers must describe the testing procedure in order to use the number for marketing would be great if the industry doesn't d
Re: (Score:3, Insightful)
Or, depending on how you look at it, they are both equally invalid if, in fact, the products have a thermal failure in which a trace on the board melts with a period of 2 hours +/- 1 hour and you've just started hitting the failures when testing concludes. The shorter the testing time, the more thoroughly meaningless the results, because in the real world, most products do not fail randomly; they fail because of a flaw. And in cases where you have a flaw, failures tend to show clusters of failures at a pa
Re: (Score:1)
Re: (Score:2)
I am not a product tester, so I can only go with what I've read on the subject, but what you describe just doesn't sound valid to me in general electronics testing.
First, according to the Google study's results, thermal considerations had no statistically significant impact on failure rate. Yes, thermal failures can shorten life expectancy (particularly of hard drives), but in a real-world environment, there are far more things besides heat that can cause drive failures, including metal fatigue, bearing fluid le
Re: (Score:2)
Certain parts for agricultural and earth-moving vehicles (possibly ordinary cars, too, but we were a bit specialised) have to go through a "burn-in" test. This involves loading special test firmware, wh
Re:MTBF (Score:5, Funny)
Re: (Score:1)
Re:MTBF (Score:4, Insightful)
From the wikipedia article [wikipedia.org]
Re: (Score:2)
http://www.faqs.org/faqs/arch-storage/part2/secti
There is significant evidence that, in the mechanical area "thing-time" is much more related to activity rate than it is to clock time.
Why? what does it matter (Score:1)
Re: (Score:2)
It matters a lot if you're using 200 of them at your company...
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:Why? what does it matter (Score:5, Funny)
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
MTBF matters because failure is random. They're not saying that every drive will last that long; they're saying that the average drive will.
False advertising is illegal in many countries. This 5 million hours figure (and SanDisk's 2 million) seems to be based on much shorter tests of large numbers of devices, extrapolated on the assumption that failures are evenly distributed over time. They MUST know that this assumption is wrong. As taught in basic engineering courses, failure distribution
Re: (Score:2)
So with a 5,000,000-hour MTBF, the chance of any one drive failing in your lifetime is incredibly minuscule.
I have a box full of dead hard drives that would disagree with you, and I didn't typically use lots of drives at once until fairly recently, so most of those failures were consecutive single drive failures....
The numbers are utterly meaningless for individual consumers. They are only really useful at a corporate IT level with dozens or hundreds of drives to figure out how many spares you should keep o
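To make that concrete: under the constant-failure-rate (exponential) model behind these MTBF figures, expected annual failures scale linearly with fleet size. A back-of-the-envelope sketch, with hypothetical fleet sizes not taken from the article:

    # Expected drive failures per year for a fleet, assuming the
    # exponential model that MTBF figures imply (hazard rate = 1/MTBF).
    HOURS_PER_YEAR = 8766

    def expected_annual_failures(fleet_size, mtbf_hours):
        return fleet_size * HOURS_PER_YEAR / mtbf_hours

    print(expected_annual_failures(200, 5000000))   # ~0.35 failures/year
    print(expected_annual_failures(200, 2000000))   # ~0.88 failures/year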
Warning (Score:2)
In most cases the part that fails is the software, not the hardware. For example, FAT is a terrible way to store data you love. To get reliability you need to use a flash file system that is designed to cope with NAND.
Better than FAT. (Score:3, Interesting)
Any suggestions of possible candidate filesystems?
Right now, most people I know of use flash drives to move data from one computer to another, in many cases across operating systems or even architectures, so FAT is used less for technical reasons than because it's probably the most widely understood filesystem: you can read and write it on Windows, Macintosh, Linux, BSD, and most commercial UNIXes.
However, a disk
Re: (Score:2)
Re: (Score:2)
Wear leveling in hardware (Score:2)
For example, FAT is a terrible way to store data you love. To get reliability you need to use a flash file system that is designed to cope with NAND.
Or you could create a FAT partition inside a file, stick that file on a flash file system, and mount the FAT partition on loopback. The microcontrollers built into common CF and SD memory cards do exactly this, and this is why you only get 256 million bytes out of your 256 MiB flash card: the extra 4.8% is used for wear leveling, especially of sectors containing the FAT and directories.
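If you're curious what that remapping layer does, the core idea fits in a few lines. A toy sketch only; real card firmware adds erase-block handling, garbage collection, and bad-block management on top of this:

    # Toy wear leveler: every write goes to the least-worn free physical
    # block, so hot logical sectors (like the FAT) get spread around.
    class WearLeveler:
        def __init__(self, physical_blocks):
            self.erase_counts = [0] * physical_blocks
            self.mapping = {}                 # logical sector -> physical block
            self.free = set(range(physical_blocks))

        def write(self, logical_sector, data):
            old = self.mapping.get(logical_sector)
            target = min(self.free, key=lambda b: self.erase_counts[b])
            self.free.remove(target)
            self.erase_counts[target] += 1
            self.mapping[logical_sector] = target
            if old is not None:
                self.free.add(old)            # the stale copy becomes reusable
            # (actually storing `data` is omitted from this sketch)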
Re:Wear leveling in hardware (Score:4, Interesting)
Putting a FAT partition onto such a device, or into a file via loop mounting, only gives you wear levelling. It does not buy you integrity. If you eject a FAT file system before unmounting it, you are likely to damage the file system (potentially killing all the files in the partition). This might be correctable via fsck.
Proper flash file systems are designed to be safe from bad unmounts. These tend to be log structured (eg. YAFFS and JFFS2). Sure, you might lose the data that was in flight, but you should not lose other files. That's why most embedded systems don't use FAT for critical files and only use it where FAT-ness is important (eg. data transfer to a PC).
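The crash safety falls out of never overwriting live data in place: updates are appended to a log with a checksum, and the state is rebuilt on mount, discarding any torn final record. A grossly simplified sketch of the idea; this is not actual YAFFS or JFFS2 code:

    # Log-structured idea in miniature: append-only records with CRCs.
    # A power cut mid-write can only corrupt the last, unreplayed record.
    import zlib

    log = []   # stands in for sequentially written flash pages

    def append_record(path, data):
        # data: bytes
        log.append((path, data, zlib.crc32(data)))

    def mount():
        """Rebuild file system state; records with bad CRCs are dropped."""
        state = {}
        for path, data, crc in log:
            if zlib.crc32(data) == crc:
                state[path] = data
        return state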
i'll say... (Score:2)
Re: (Score:2)
Info. (Score:2, Informative)
I want to see how valid the claims are that you can keep writing data on a flash disk for as long as you'll ever need it. Depending on the particular wear-levelling algorithm and the write pattern, this might not be true at all.
Re:Info. (Score:4, Informative)
Different file systems and block managers do different things to cope with wear levelling etc. For some file systems (eg. FAT) wear levelling is very important. For some other file systems - particularly those designed to work with NAND flash - wear levelling is not important.
hmm (Score:2)
Shouldn't a solid state device be able to be read faster than a spinning disc?
Spinning states (Score:3, Informative)
At first thought I agree, though. Maybe there's something inherent in the nature of the conducting materials which creates an asymptote, for conventional technologies, closing in around 30 MB/sec.
Re: (Score:1, Funny)
No. That's crazy hobo talk.
Re: (Score:1, Informative)
Yes and no.
With random access, the performance is going to be superb - random reads are going to be far faster than on any mechanical drive, where waiting for the platters and heads to move is a real problem.
With sustained transfers, speeds are going to depend on the interface - in this case USB 2.0, which has a maximum practical transfer rate of... about 30MB/s.
What's needed are large flash drives with SATA 3 interfaces.
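That 30MB/s figure lines up with the raw signalling rate once you knock off protocol overhead (rough arithmetic; the 55% efficiency factor is an assumption for illustration, not a measured figure):

    # USB 2.0 high speed signals at 480 Mbit/s.
    raw_mbyte_per_s = 480 / 8          # 60 MB/s theoretical ceiling
    assumed_efficiency = 0.55          # ballpark after protocol overhead
    print(raw_mbyte_per_s * assumed_efficiency)   # ~33 MB/s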
Re: (Score:3, Informative)
The place where you make up time with solid state is in seek time...There is no hardware to have to move, so finding non-contiguous data is quicker.
Hard drive heads aren't used in parallel (Score:2)
Re: (Score:2)
That's true, but a seek to read the same track on the next platter should be very quick, as IIRC, a lot of drive mechanisms do short seeks in a way that significantly reduces the settle time needed compared with long seeks.
Re: (Score:2)
On some disks the track-to-track seek time for a single platter is shorter than the time to switch to the next platter. Switching platter means you need to find the track again, and you don't know how far you're off to start with. Switching track on the same platter is sometimes easier, because you know exactly how far you are going.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Yeah, if you RAID them (Score:2)
Re: (Score:2)
Re: (Score:2)
Reason to switch #341 (Score:2)
We know Apple commands a great deal of pricing advantage with their current supplier(s) (Samsung, if memory serves). But, could this be another reason to switch, by picking up Intel CPUs and Intel flash memory chips? Cringely could be getting closer to actually being right - if Intel buys Apple, suddenly iPod, iPhone, Mac, etc. production could go in-house for a huge chunk of the parts.
Just had to throw an Apple reference in there. It's /. law or something.
Apple would lose all its value over time (Score:3, Insightful)
Unless Intel can keep Jobs and gives him free rein, Apple would soon go rotten from the mediocre vision of someone who just doesn't get the Apple culture and is looking at the
Intel will never buy Apple (Score:2)
It's fun to ponder and an interesting combination, but it will never happen unless the management of both Apple and Intel suffer severe brain aneurysms. Why? Culture and the difficulties of vertical integration. If you want to see the dangers of vertical integration, look no further than Sun and SGI. If you are really big like IBM, it's possible to be a soup-to-nuts vendor, but even then it is rare. IBM, after all, just got out of the PC business, which is Apple's core market.
Re: (Score:1)
in another story, "Microsoft buys AMD"
Ah, good, more competition (Score:2)
Need to check out how Intel is actually backing up its reliability claim. If they just replace the drive when it stops working, that may be a cheap proposition for them (if it fails a year or two later, even a currently high-end drive will be small relative to the capacities of the day, and they can replace it with a cheap one). I'd hate for this to become a war over who can fiddle with the numbers
For how long? (Score:3, Interesting)
Intel bought the StrongARM off Digital, then sold it, presumably to focus on the "core business" of x86 etc. They've made similar moves with their 8051 and USB parts. It is hard to see what would attract them to NAND flash, which has very low margins. NAND flash now costs less than 1 cent per MByte, about a fif
Re: (Score:2)
A quick note: Intel is not new to flash memory production. Intel pioneered flash memory production back in the 1980s, and it has been hugely profitable. The new thing here is NAND flash production.
Both AMD (now Spansion) and Intel jumped on the NOR flash train bec
verification (Score:1)
WTF? (Score:3, Insightful)
Re:WTF? (Score:5, Insightful)
Mean time between failures is not a hard prediction of when things will break. http://en.wikipedia.org/wiki/MTBF [wikipedia.org]
Re: (Score:1)
True, but since it is supposed to be the average time between failures, it had better be closer to 228 years than, say, 5 most of the time, or the use of the statistic as a selling point is utterly bogus (some would say fraudulent). It would help to know what the (guesstimated) standard deviation is. The implication of an MTBF of 2x10^6 hours is that it will easily outlast you.
Re: (Score:2)
True, but even if the drive lasts half as long as the manufacturer's MTBF claim, your data will still outlive you.
Re: (Score:2)
Re: (Score:2)
MTBF doesn't work like that. You can, however, directly translate it to a likelihood of failure over a year; that is, if a 1 million hour MTBF corresponds to a 1% chance of failure over the course of a year, then a 5 million hour MTBF corresponds to an even lower likelihood of failure over the course of a year.
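Spelling that conversion out, since it's the only reading of MTBF that means anything to an individual buyer. A sketch of the standard exponential-model formula, not anything from either vendor's datasheet:

    # P(failure within time t) = 1 - exp(-t / MTBF), exponential model.
    import math

    HOURS_PER_YEAR = 8766

    def annual_failure_probability(mtbf_hours):
        return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

    print(annual_failure_probability(1000000))   # ~0.87%, the "about 1%" above
    print(annual_failure_probability(5000000))   # ~0.18%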
Re: (Score:2)
The higher the number, the less likely, statistically, you are to get hit with a drive failure.
Think of it like getting in a car accident on a country road versus in a busy city. You might go your entire life in both places without ever getting in an accident, but in both places there is always the possibility that you will wreck on your first day of driving.
However, you fare much better on
Sounds familiar (Score:1)
Re: (Score:2)
2 million hours? (Score:3, Insightful)
Failures (Score:2)
Yes, because I should be concerned that my pr0n collection isn't making it all the way to my laptop for traveling purposes.
5 million hours MTBF (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
Was that supposed to be a pun? (Score:2)
Useful Size (Score:1)
This would be an ideal boot and OS drive for me: / and most of its directories, along with a decent-sized swap (2-3 GiB). Put
I've thought about doing this for a while, in fact... but every time I research it out I either come to dead ends with no price info
Re: (Score:1)
http://www.newegg.com/Product/Product.asp?Item=N8
USB2 isn't all that odd.
Re: (Score:1)
Re: (Score:2)
-nB
Re: (Score:2)
Re: (Score:2)
-nB
Re: (Score:2)
Re: (Score:2)
-nB
Re: (Score:2)
On the other hand, given that a CF card is smaller than a laptop hard drive, and many laptop PATA controllers seem fully functional in the sense that they'll support both a master and a slave drive, I wonder if you could hack two CF cards to fit into a regular laptop where the hard drive would usually fit, and then use software RAID? (though I imagine you woul
Has anyone actually done the math on this? (Score:1)
So Intel upping the rating to 5 million hours is meaningless. Somehow I suspect that the people at Intel know this...
Wait a minute.. (Score:3, Informative)
Didn't we just recently learn that they're pulling these numbers out of their arse, and that they're essentially useless?
Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you? [usenix.org]
This was covered on Slashdot [slashdot.org] already.
If you're going to read Slashdot, at least fucking read it.
Aero
Re: (Score:2)
Maybe they were waiting until that story was accepted to Slashdot a second time before reading it.
MTTF != MTBF (Score:1)
Re: (Score:2)
They are essentially the same for many pieces of computer hardware, since things like a disk drive or a flash chip generally aren't repaired when they fail. Which means that the MTTF is the same as the MTBF, as the first failure is the only failure of the device, as it is
Seemed Inevitable... (Score:2)
The generation-old fabs they abandon for CPU making are still a generation newer than what almost anyone else has available. Repurposing those fabs to produce something like flash chips, chipsets, etc. seems a pretty straightforward and inexpensive way to keep making money on otherwise largely worthless facilities, even after the cost of retooling is taken into account.
Though they obviously haven't done it yet, companies like In
Re: (Score:1)
You do know that 65nm FPGAs were on the market before 65nm processors. The reason is obvious: while Intel has to tool and tune a very complicated CPU to get decent yields, all a RAM/flash/FPGA manufacturer has to do is tune a small amount of cookie-cutter design and ramp up production. As RAM/flash/FPGA chips
Re: (Score:2)
No, actually what I know is that you're absolutely wrong.
Intel's 65nm Core CPUs were released January 2006, while Xilinx was turning out press releases at the end of May 2006, claiming to have produced the first 65nm FPGAs.
What appears to be "obvious" to you, is utterly and completely wrong to the rest of the world...
Intel vs AMD (Score:2)
I know that they spun off the division to Spansion, which was a joint venture with Fujitsu, but if memory serves me correctly they still own a good section (40% or similar) of the company and make a lot of money out of it.
Conspiracy theories'R'us I guess. It could just be that Intel turned around and said "What do you mean AMD is making a heap of cash out of something that isn't as hard to make as CPUs and we aren't?"
What does it "mean" anyway? (Score:2)
The mean just tells us what you get if you take a sample and divide the sum of the values by the sample size. It's one of the three meaningful "averages" you can get in statistics. In this case I'd be at least as interested in seeing the mode and median.
You can "screw up" a mean by adding one or two samples that are extreme. These disks, say they have a 5 million MTBF as the figure you want, but they all really fail after 5 minutes of use. Problem, right? Wro
Re: (Score:2)
Downloading the content is not the only aspect of browsing the web, the machine must parse and render that content as well.
Incremental layout and web accelerator (Score:3, Informative)
Remember their old Pentium ad which claimed surfing the 'net would be sooooo much faster with their new Pentium, 'cause it's not like it's actually limited by the speed of your network connection?
It wasn't entirely false advertising. A web browser on a faster computer can run more iterations of the incremental layout code, so that the data looks like it's coming in faster. A faster computer can run more complex text and mark-up compression in human-acceptable time, allowing for "web accelerator" software that became especially popular during the wane of dial-up.