Intel Hardware

Intel Stomps Into Flash Memory

jcatcw writes "Intel's first NAND flash memory product, the Z-U130 Value Solid-State Drive, is a challenge to other hardware vendors. Intel claims read rates of 28 MB/sec, write speeds of 20 MB/sec, and capacities of 1GB to 8GB, which is much smaller than products from SanDisk. 'But Intel also touts extreme reliability numbers, saying the Z-U130 has an average mean time between failure of 5 million hours compared with SanDisk, which touts an MTBF of 2 million hours.'"
  • MTBF (Score:5, Interesting)

    by Eternauta3k ( 680157 ) on Monday March 12, 2007 @04:17PM (#18322697) Homepage Journal

    'But Intel also touts extreme reliability numbers, saying the Z-U130 has an average mean time between failure of 5 million hours compared with SanDisk, which touts an MTBF of 2 million hours.'
    Is this hours of use or "real time" hours? I don't know about other people but my pendrives spend most of their time disconnected.
    • Think of the caching flash in a hybrid drive.

      And why wouldn't you want your pen drive to last 2 1/2 times longer?

      Would it be that you're an AMD "fan" and are rooting against your home team's rival?
      • Where did you get the idea that he didn't want his pen drive to last 2 1/2 times longer? We get lied to so much that it's reasonable to be skeptical. Why are you trying to attribute ulterior motives to his skepticism?
      • by Lehk228 ( 705449 )
        because in 2.5 million hours I, my grandkids, and likely my great-grandkids will be dead
    • Did they really test these for 5 million hours or are they just pulling the number out of their ass?
      • Any statisticians on slashdot?
      • Did they really test these for 5 million hours or are they just pulling the number out of their ass?

        Well, given that 5 million hours is equal to 570.39 years, I'm going to guess that no, they didn't actually test them for that long.
      • Did they really test these for 5 million hours or are they just pulling the number out of their ass?
        It's a mean time between failures. An MTBF figure of 5 million hours means they tested 500,000 of them for 300 hours, and 30 of them failed. A rate of 150 million unit hours per 30 failures equals 5 million unit hours per failure.
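
        As a rough sketch of that arithmetic in Python (the 500,000-unit / 300-hour / 30-failure figures are just the example above, not Intel's actual test plan):

            # MTBF = total unit-hours observed / failures observed
            def mtbf_hours(units_tested, hours_each, failures):
                return (units_tested * hours_each) / failures

            print(mtbf_hours(500000, 300, 30))  # 5000000.0 unit-hours per failure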
        • Re: (Score:2, Insightful)

          by hackwrench ( 573697 )
          That makes about as much sense as declaring that they tested 5 million of them for 1 hour and only one of them failed.
          • by tepples ( 727027 )

            That makes about as much sense as declaring that they tested 5 million of them for 1 hour and only one of them failed.
            Which is, in fact, equally valid. The MTBF for such a test session would be the same: 5 million unit hours.
            • IANA product tester.

              It would be mathematically equal, but I'm not sure it'd be equally _valid_. Given the initial defects and the possibility of misdesign causing heat-related losses or such, some stretch of time is really necessary. Testing 5 million for one hour proves little more than that the expected life is longer than one hour. Testing 200,000 for 25 hours would likely, despite the smaller but still sizable sample size, mean much more. Testing 20,000 at 250 hours would likely mean more still.

              5,000 un
              • by jelle ( 14827 )
                The main problem is that manufacturers loudly blare the MTBF/MTTF values without telling people how long the test was done, hence they can use whatever time they like. You have to agree, at the very least, that even for comparison between products from different manufacturers the MTBF is useless, for the simple fact that one manufacturer might test for 100 hours and the other for 1000 hours...

                As such an unreliable measure, the first letter of 'MTBF' might as well stand for 'misleading'.

                I didn't just say it, Carnegie Me
                • Oh, there's no doubt there are some serious issues with the numbers and how they're calculated. An industry standard for the minimum number of units tested and the minimum number of hours tested would be nice. At the very least, disclosure of the testing conditions should be required.

                  I'd like to see the industry do it without getting government involved. A simple law that clearly states that the manufacturers must describe the testing procedure in order to use the number for marketing would be great if the industry doesn't d
            • Re: (Score:3, Insightful)

              by dgatwood ( 11270 )

              Or, depending on how you look at it, they are both equally invalid if, in fact, the products have a thermal failure in which a trace on the board melts with a period of 2 hours +/- 1 hour and you've just started hitting the failures when testing concludes. The shorter the testing time, the more thoroughly meaningless the results, because in the real world, most products do not fail randomly; they fail because of a flaw. And in cases where you have a flaw, failures tend to show clusters of failures at a pa

      • When you heat electronic devices they have been proven to fail at a higher rate. The increase in temperature and the increase in failure rate have a known relationship. Therefore you can heat up the equipment when you test it, and that will simulate it being used for a longer period of time. So, for example, you can heat up the flash disks by 50 degrees, then test 100 of them over 2 weeks, and then extrapolate from that what the failure rate would be at room temperature. Hence the ability to state values t
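
        A minimal sketch of that kind of extrapolation, assuming the common Arrhenius acceleration model (the 0.7 eV activation energy and both temperatures are made-up illustration values, not figures from Intel's or SanDisk's testing):

            import math

            BOLTZMANN_EV = 8.617e-5  # eV per kelvin

            # Acceleration factor: how much faster failures accumulate at the
            # stress temperature than at the normal use temperature.
            def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
                t_use = t_use_c + 273.15
                t_stress = t_stress_c + 273.15
                return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

            af = acceleration_factor(25, 75)   # stressed 50 degrees above room temperature
            print(af)                          # ~50x acceleration under these assumptions
            print(2 * 7 * 24 * af)             # two weeks of stress ~ 16,800 room-temp hours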
        • by dgatwood ( 11270 )

          I am not a product tester, so I can only go with what I've read on the subject, but what you describe just doesn't sound valid to me in general electronics testing.

          First, according to the Google results, thermal considerations had no statistically significant impact on failure rate. Yes, thermal failures can shorten life expectancy (particularly of hard drives), but in a real-world environment, there are far more things besides heat that can cause drive failures, including metal fatigue, bearing fluid le

          • by ajs318 ( 655362 )
            I have been a product tester. And electronic components are more likely to fail at elevated temperatures: 165 degrees is the Kiss of Death for silicon. In practice, if you can't bear to keep your finger on a device (so about 60 degrees, but this is a person-to-person variable), it's probably too hot.

            Certain parts for agricultural and earth-moving vehicles (possibly ordinary cars, too, but we were a bit specialised) have to go through a "burn-in" test. This involves loading special test firmware, wh
      • Re:MTBF (Score:5, Funny)

        by smallfries ( 601545 ) on Monday March 12, 2007 @07:35PM (#18325553) Homepage
        Yes of course they tested them for 5 million hours, after all it's only 570 years. Don't you know your ancient history? The legend of Intelia and their flashious memerious from 1437AD?
    • 5 000 000 hours = 570.397764 years. I don't know how Intel came up with those numbers, but I'd be happy if I lived to see my SanDisk flash keep working at only 2 000 000 hours.
      • Re:MTBF (Score:4, Insightful)

        by Target Drone ( 546651 ) on Monday March 12, 2007 @04:43PM (#18323133)

        5 000 000 hours = 570.397764 years. I don't know how Intel came up with those numbers

        From the wikipedia article [wikipedia.org]

        Many manufacturers seem to exaggerate the numbers (e.g. for hard drives) to accomplish one of two goals: sell more product or sell at a higher price. A common way that this is done is to define the MTBF as counting only those failures that occur before the expected "wear-out" time of the device. Continuing with the example of hard drives, these devices have a definite wear-out mechanism as their spindle bearings wear down, perhaps limiting the life of the drive to five or ten years (say fifty to a hundred thousand hours). But the stated MTBF is often many hundreds of thousands of hours and only considers those other failures that occur before the expected wear-out of the spindle bearings.
    • by omeomi ( 675045 )
      This FAQ seems to suggest that MTBF would imply actual hours of active use:

      http://www.faqs.org/faqs/arch-storage/part2/section-151.html [faqs.org]

      There is significant evidence that, in the mechanical area "thing-time" is much more related to activity rate than it is to clock time.
    • 2 million hours vs 5 million hours. There are ~10K hours in a year, so 2 million hours is more than 200 years. If you are still using the same computer in 200 years, I will be either impressed or scared.
      • by 26199 ( 577806 ) *

        It matters a lot if you're using 200 of them at your company...

      • Re: (Score:3, Informative)

        MTBF matters because it's random. They're not saying that every drive will last that long, they're saying that the average drive will. Therefore the chance of any drive failing within a reasonable amount of time drops as the mean time rises. So with a 5,000,000-hour MTBF the chance of any one drive failing in your lifetime is incredibly minuscule.
          • A 2 million hour MTBF means the time to a failure is a lot longer than my lifetime too. Overkill isn't always better.
            • No, more hours can be good - just think, you can pass down your Family Photo Thumbdrive to your kids, who might be able to pass it on to their grandkids, if USB is still available...
              • By then it'll be like keeping a 5.25" disk. Sure, I still have 3 drives lying around, but I wouldn't dream of using them.
        • by Reason58 ( 775044 ) on Monday March 12, 2007 @05:09PM (#18323499)

          MTBF matters because it's random. They're not saying that every drive will last that long, they're saying that the average drive will. Therefore the chance of any drive failing within a reasonable amount of time drops as the mean time rises. So with a 5,000,000-hour MTBF the chance of any one drive failing in your lifetime is incredibly minuscule.
          In 20 years from now, when hard drive capacity is measured in yottabytes, will you really be carrying around a 512MB thumbdrive you bought for $20 back before the Great War of 2010?
          • Re: (Score:3, Insightful)

            by LoudMusic ( 199347 )

            In 20 years from now, when hard drive capacity is measured in yottabytes, will you really be carrying around a 512MB thumbdrive you bought for $20 back before the Great War of 2010?
            How do you know it's going to happen in 2010? Are you SURE it's going to happen in 2010? That only gives me 3 years to prepare the shelter ...
        • by jrumney ( 197329 )

          MTBF matters because it's random. They're not saying that every drive will last that long, they're saying that the average drive will.

          False advertising is illegal in many countries. This 5 million hours figure (and SanDisk's 2 million) seems to be based on much shorter tests of large numbers of devices, with the results extrapolated on the assumption that this randomness is evenly distributed. They MUST know that this assumption is wrong. As taught in basic engineering courses, failure distribution

        • by dgatwood ( 11270 )

          So with a 5,000,000-hour MTBF the chance of any one drive failing in your lifetime is incredibly minuscule.

          I have a box full of dead hard drives that would disagree with you, and I didn't typically use lots of drives at once until fairly recently, so most of those failures were consecutive single drive failures....

          The numbers are utterly meaningless for individual consumers. They are only really useful at a corporate IT level with dozens or hundreds of drives to figure out how many spares you should keep o

    • The MTBF only applies to failures at the NAND level, not the software level.

      In most cases the part that fails is the software, not the hardware. For example, FAT is a terrible way to store data you love. To get reliability you need to use a flash file system that is designed to cope with NAND.

      • Better than FAT. (Score:3, Interesting)

        by Kadin2048 ( 468275 )
        To get reliability you need to use a flash file system that is designed to cope with NAND.

        Any suggestions of possible candidate filesystems?

        Right now, most people I know use flash drives to move data from one computer to another, in many cases across operating systems or even architectures, so FAT is used less for technical reasons than because it's probably the most widely understood filesystem: you can read and write it on Windows, Macintosh, Linux, BSD, and most commercial UNIXes.

        However, a disk
      • For example, FAT is a terrible way to store data you love. To get reliability you need to use a flash file system that is designed to cope with NAND.

        Or you could create a FAT partition inside a file, stick that file on a flash file system, and mount the FAT partition on loopback. The microcontrollers built into common CF and SD memory cards do exactly this, and this is why you only get 256 million bytes out of your 256 MiB flash card: the extra 4.8% is used for wear leveling, especially of sectors containing the FAT and directories.
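
        As a quick check of that 4.8% figure, assuming it is simply the gap between 256 MiB of raw flash and the 256 million bytes exposed to the host:

            raw_bytes = 256 * 2**20        # 268,435,456 bytes of actual flash
            exposed_bytes = 256 * 10**6    # 256,000,000 bytes visible to the host
            print((raw_bytes - exposed_bytes) / exposed_bytes)  # ~0.0486, i.e. about 4.8%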

        • by EmbeddedJanitor ( 597831 ) on Monday March 12, 2007 @05:02PM (#18323415)
          The cards with internal controllers do something like what you describe, and you can read the SD or SmartMedia specs for details. They manage a "free pool" primarily as a way to address bad blocks, but this also provides a degree of wear levelling.

          Putting a FAT partition onto such a device, or into a file via loop mounting, only gives you wear levelling. It does not buy you integrity. If you eject a FAT file system before unmounting it then you are likely to damage the file system (potentially killing all the files in the partition). This might be correctable via an fsck.

          Proper flash file systems are designed to be safe from bad unmounts. These tend to be log structured (eg. YAFFS and JFFS2). Sure, you might lose the data that was in flight, but you should not lose other files. That's why most embedded systems don't use FAT for critical files and only use it where FAT-ness is important (eg. data transfer to a PC).

    • I would assume hours of use. If you run Windows, when you go idle, something's accessing that hard drive (as evidenced by the little blinking HDD activity light), slowly killing away your read/write cycles. OT: If anyone knows how to stop XP from doing that, please let me know. When everything's gone, only 21 processes are running. What in the hell is accessing my hard drive, I don't know. BOT: As it stands, I'll not really expect this to last very long. If they used the PRAM techn
      • by x2A ( 858210 )
        Disable event logging, and disable swap (you'll need enough real memory for this). OSs will tend to write the least often accessed pages to disk, even if there's no memory shortage, so that if memory is needed at some point, it can just quickly free that page and use it, without having to wait to swap it out to disk first. There's also last-accessed-time updates to files/folders that are read, even if they are cached; the writeback of the new time has to occur at some point (I believe this can be disabled in w
  • Info. (Score:2, Informative)

    by Anonymous Coward
    Wear-levelling algorithms. Is there a resource for finding out which algorithms are used by various vendors' flash devices? And links to real algorithms? Hint: not some flimsy pamphlet of a "white paper" by sandisk.

    I want to see how valid the claims are that you can keep writing data on a flash disk for as long as you'll ever need it. Depending on the particular wear-levelling algorithm and the write pattern, this might not be true at all.
    • Re:Info. (Score:4, Informative)

      by EmbeddedJanitor ( 597831 ) on Monday March 12, 2007 @04:41PM (#18323095)
      These claims will be made at the flash level (ie. ignoring what the block managers and file systems do).

      Different file systems and block managers do different things to cope with wear levelling etc. For some file systems (eg. FAT) wear levelling is very important. For some other file systems - particularly those designed to work with NAND flash - wear levelling is not important.
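
      To make the idea concrete, a deliberately naive wear-levelling sketch: always map a write to the least-erased physical block. Real controllers and log-structured flash file systems (YAFFS, JFFS2, the firmware inside SD/CF cards) are far more involved, and nothing below reflects any vendor's actual algorithm.

          class NaiveWearLeveler:
              def __init__(self, num_blocks):
                  self.erase_counts = [0] * num_blocks   # erases per physical block
                  self.mapping = {}                      # logical block -> physical block

              def write(self, logical_block):
                  # Always pick the least-worn physical block for the new copy.
                  physical = min(range(len(self.erase_counts)),
                                 key=lambda b: self.erase_counts[b])
                  self.erase_counts[physical] += 1
                  self.mapping[logical_block] = physical
                  return physical

          wl = NaiveWearLeveler(num_blocks=8)
          for _ in range(100):
              wl.write(0)                # hammer a single logical block (think: the FAT)
          print(wl.erase_counts)         # wear is spread across all eight blocks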

  • read rates of 28 MB/sec

    Shouldn't a solid state device be able to be read faster than a spinning disc?
    • These days the platters spin so fast and the data density is so high that the math just might work out about the same for a solid-state device and a spinning disc - i.e. the spinning disc may, in raw transfer rate, come close to the solid-state device.

      At first thought I agree, though. Maybe there's something inherent in the nature of the conducting materials which creates an asymptote, for conventional technologies, closing in around 30 MB/sec.
      • Re: (Score:1, Funny)

        by Anonymous Coward
        > Maybe there's something inherent in the nature of the conducting materials which creates an asymptote, for conventional technologies, closing in around 30 MB/sec.

        No. That's crazy hobo talk.
    • Re: (Score:1, Informative)

      by Anonymous Coward
      Shouldn't a solid state device be able to be read faster than a spinning disc?

      Yes and no.

      With random access, performance is going to be superb - random reads are going to be far faster than on any mechanical drive (where waiting for the platters and heads to move is a real problem).

      With sustained transfers, speeds are going to depend on the interface - which in this case is USB 2.0 - which has a maximum practical transfer rate of... about 30MB/s.

      What's needed are large flash drives with SATA 3 interfaces.
    • Re: (Score:3, Informative)

      Not necessarily...Three platters spinning at 7200rpm is a lot of data.

      The place where you make up time with solid state is in seek time...There is no hardware to have to move, so finding non-contiguous data is quicker.
      • Not necessarily...Three platters spinning at 7200rpm is a lot of data.
        Due to limitations in the accuracy at which a servo can position a hard drive's read and write heads, a hard drive reads and writes only one platter at a time. But you're still right that 7200 RPM at modern data densities is a buttload of data flying under a head at once.
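
        Back-of-the-envelope, assuming a round 1,500 sectors on an outer track (an illustrative guess, not any particular drive's geometry):

            SECTORS_PER_TRACK = 1500       # assumed outer-track figure, circa 2007
            BYTES_PER_SECTOR = 512
            RPM = 7200

            bytes_per_second = SECTORS_PER_TRACK * BYTES_PER_SECTOR * (RPM / 60)
            print(bytes_per_second / 1e6)  # ~92 MB/s raw off the outer track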
        • by dgatwood ( 11270 )

          That's true, but a seek to read the same track on the next platter should be very quick, as IIRC, a lot of drive mechanisms do short seeks in a way that significantly reduces the settle time needed compared with long seeks.

          • by amorsen ( 7485 )
            That's true, but a seek to read the same track on the next platter should be very quick

            On some disks the track-to-track seek time for a single platter is shorter than the time to switch to the next platter. Switching platter means you need to find the track again, and you don't know how far you're off to start with. Switching track on the same platter is sometimes easier, because you know exactly how far you are going.
        • by Lehk228 ( 705449 )
          Is that a metric or English buttload? Also, is it base 2 or base 10?
    • by Kenja ( 541830 )
      Not really; the advantage of solid state vs. magnetic media is in the seek time, not the transfer rate.
    • Shouldn't a solid state device be able to be read faster than a spinning disc?
      Yes. You could fit a RAID of twenty miniSD cards into an enclosure smaller than a laptop hard drive. Panasonic P2 memory cards [wikipedia.org] work this way. However, Intel sells flash chips and must quote the specifications for individual chips.
    • The USB bus slows it down; FireWire 400 is faster.
    • by dbIII ( 701233 )
      Unfortunately not - which is why the MS virtual memory on flash should be renamed StupidFetch. Seek times are better and fragmentation is not an issue, so it may be better than a filesystem on a really full disk that has got into a mess over time - but otherwise virtual memory on disk will be dramatically faster.
  • We know Apple commands a great deal of pricing advantage with their current supplier(s) (Samsung, if memory serves). But, could this be another reason to switch, by picking up Intel CPUs and Intel flash memory chips? Cringely could be getting closer to actually being right - if Intel buys Apple, suddenly iPod, iPhone, Mac, etc. production could go in-house for a huge chunk of the parts.

    Just had to throw an Apple reference in there. It's /. law or something.

    • Right now, Apple has 90% of its value due to the vision of Steve Jobs and the products he helps create. This is not to say that there aren't many people involved in Apple's success, nor that he even thinks up most of the products like the iPod - but he does a great job of realizing those products and positioning them in the marketplace.

      Unless Intel can keep Jobs and give him free rein, Apple would soon go rotten from the mediocre vision of someone who just doesn't get the Apple culture and is looking at the
    • if Intel buys Apple

      It's fun to ponder and an interesting combination, but it will never happen unless the management of both Apple and Intel suffer severe brain aneurysms. Why? Culture and the difficulties of vertical integration. Also, if you want to see the dangers of vertical integration, look no further than Sun and SGI. If you are really big like IBM it's possible to be a soup-to-nuts vendor, but even then it is rare. IBM, after all, just got out of the PC business, which is Apple's core market.

    • This just in: "Intel buys Apple"
      in another story, "Microsoft buys AMD"
  • Maybe in the next few generations, we'll get the best of both worlds: much higher capacities and reliability.

    Need to check out how Intel is actually backing up its reliability claim - if they just replace the drive when it stops working, that may be a cheap proposition for them (if it fails a year or two later, even a currently high-end drive will by then be small relative to current capacities, and they can replace it with a cheap one). Hate for this to become a war over who can fiddle with the numbers
    • For how long? (Score:3, Interesting)

      Intel is a weird company when it comes to the way they do business, and I am surprised they are stepping into the NAND flash space. The writing was on the wall since they are members of ONFI http://www.onfi.org/ [onfi.org]

      Intel bought the StrongARM off Digital, then sold it, presumably to focus on the "core business" of x86 etc. They've done similar moves with their 8051 and USB parts. It is hard to see what would attract them to NAND flash, which has very low margins. NAND flash now costs less than 1 cent per MByte, about a fif

      • Intel has been trying to diversify over the past two decades. Some of their attempts have been fruitful (their move into NOR flash in the late 80s, the move into networking products), whereas others have been mistakes (StrongARM / XScale, LCoS).

        A quick note: Intel is not new to flash memory production. Intel pioneered flash memory production back in the 1980s, and it has been hugely profitable. The new thing here is NAND flash production.

        Both AMD (now Spansion) and Intel jumped on the NOR flash train bec
  • I believe I will wait for third-party verification of those numbers. Specifications from the producers tend to have somewhat... generous fine print.
  • WTF? (Score:3, Insightful)

    by xantho ( 14741 ) on Monday March 12, 2007 @04:30PM (#18322909)
    2,000,000 hours = 228 years and 4 months or so. Who the hell cares if you make it to 5,000,000?
    • Re:WTF? (Score:5, Insightful)

      by Kenja ( 541830 ) on Monday March 12, 2007 @04:37PM (#18323029)
      "2,000,000 hours = 228 years and 4 months or so. Who the hell cares if you make it to 5,000,000?"

      Mean time between failures is not a hard prediction of when things will break. http://en.wikipedia.org/wiki/MTBF [wikipedia.org]

      • Mean time between failures is not a hard prediction of when things will break.

        True, but since it is supposed to be the average time between failures, it had better be closer to 228 than, say, 5 most of the time, or the use of the statistic as a selling point is utterly bogus (some would say fraudulent). It would help to know what the (guesstimated) standard deviation is. The implication of an MTBF of 2x10^6 hours is that it will easily outlast you.
      • by SeaFox ( 739806 )

        Mean time between failures is not a hard prediction of when things will break.

        True, but even if the drive lasts half as long as the manufacturer's MTBF claim, your data will still outlive you.
        .
        .
        . ...wow, why do I feel the urge to say that with a Russian accent.

    • by biocute ( 936687 )
      Well, for those who care about the difference between 250 FPS and 251 FPS in a graphics card.
    • 2,000,000 hours = 228 years and 4 months or so. Who the hell cares if you make it to 5,000,000?


      MTBF doesn't work like that. You can, however, directly translate it to a likelihood of failure over a year; that is, if a 1 million hour MTBF corresponds to a 1% chance of failure over the course of a year, then a 5 million hour MTBF corresponds to an even lower likelihood of failure over the course of a year.
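
      Under the usual constant-failure-rate assumption (the exponential model, which is exactly what other comments in this thread dispute), the conversion looks roughly like this:

          import math

          HOURS_PER_YEAR = 8760

          def annual_failure_probability(mtbf_hours):
              # Exponential model: P(fail within a year) = 1 - exp(-hours / MTBF)
              return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

          print(annual_failure_probability(1_000_000))  # ~0.0087, just under 1% per year
          print(annual_failure_probability(2_000_000))  # ~0.0044 for SanDisk's figure
          print(annual_failure_probability(5_000_000))  # ~0.0018 for Intel's figure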
    • MTBF is an average. So things don't automatically break after 2 million hours, nor do all of them last that long.

      The higher the number, the less likely, statistically, you are to get hit with a drive failure.

      Think of it like getting in a car accident in the country road versus getting in a car accident in the busy city. You might go your entire life in both places never getting in an accident, but in both places you always have the possibility you will wreck on your first day of driving.

      However, you fare much better on
  • 8 GB ought to be enough for anybody...

  • 2 million hours? (Score:3, Insightful)

    by jgoemat ( 565882 ) on Monday March 12, 2007 @04:35PM (#18322977)
    So on average, it will last 570 years instead of 228?
  • But Intel also touts extreme reliability numbers, saying the Z-U130 has an average mean time between failure of 5 million hours compared with SanDisk, which touts an MTBF of 2 million hours.'

    Yes, because I should be concerned that my pr0n collection isn't making it all the way to my laptop for traveling purposes.
  • by Dachannien ( 617929 ) on Monday March 12, 2007 @04:56PM (#18323315)
    That figure doesn't tell me jack. What I want to know is if I order 100 of these things, how many of them will fail just after the warranty expires?

    • Half as many as if you had bought from SanDisk?
      • Who can tell? Maybe half of them fail five minutes after you first plug it in, and the other half fail ten million hours later. Maybe only a very few fail within the first five years, and the failures start picking up after that. Nobody can tell from this figure, which is pure marketroid-speak without any practical application.

  • Sheesh, I read the headline and thought Intel had developed some buggy chip that somehow stomps on flash memory. Nice, well, at least it got my attention.
  • I'd like to see a semi-affordable (around $250) solid state storage device in a standard form factor and connection (3.5" SATA), at a decent size (15GiB).

    This would be an ideal boot and OS drive for me: / and most of its directories, along with a decent-sized swap (2-3 GiB). Put /home and /tmp on a 'normal' large drive (standard SATA drive of decent speed, RAID array, etc.).

    I've thought about doing this for a while, in fact... but every time I research it out I either come to dead ends with no price info
    • by maxume ( 22995 )
      Too slow?

      http://www.newegg.com/Product/Product.asp?Item=N82E16820233042 [newegg.com]

      USB2 isn't all that odd.
    • by b1scuit ( 795301 )
      It would be wiser to put the swap partition on the conventional disks. And for the love of god, buy some RAM. (or quit wasting all that disk space) 2-3GB of swap is silly, and if you actually find yourself using that much swap, you really need more RAM.
    • I'm looking for a Compact Flash to 2.5" style IDE connector myself for basically the same use. I figure I could deal with an 8 Gig / partition, and would just double face tape the CF card to the drive sled.
      -nB
      • Why not RAID0 two 8GB compact flash cards? You would end up with 16GB of fast flash storage with a convenient interface, and I don't think it would be any less reliable than a single mechanical HDD.
        • Find me a notebook with space inside for 2 hdds (without taking away the cd-rom drive) and with RAID ability...
          -nB
  • Let's see now - 2 million hours works out to about 228 years. Seems like a safe claim to make...

    So Intel upping the rating to 5 million hours is meaningless. Somehow I suspect that the people at Intel know this...

  • Wait a minute.. (Score:3, Informative)

    by aero2600-5 ( 797736 ) on Monday March 12, 2007 @05:22PM (#18323719)
    "mean time between failure of 5 million hours"

    Didn't we just recently learn that they're pulling these numbers out of their arse, and that they're essentially useless?

    Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you? [usenix.org]

    This was covered on Slashdot [slashdot.org] already.

    If you're going to read Slashdot, at least fucking read it.

    Aero
    • This was covered on Slashdot already. If you're going to read Slashdot, at least fucking read it.

      Maybe they were waiting until that story was accepted to Slashdot a second time before reading it.
    • MTTF is not MTBF. In the world of metrics, they're different. While they both measure failures, time to fail and time between failures are different measurements for a reason; they tell us different things about the product we're testing.
      • MTTF is not MTBF. In the world of metrics, they're different. While they both measure failures, time to fail and time between failures are different measurements for a reason; they tell us different things about the product we're testing.

        They are essentially the same for many pieces of computer hardware, since things like a disk drive or a flash chip generally aren't repaired when they fail. Which means that the MTTF is the same as the MTBF, as the first failure is the only failure of the device, as it is
  • It seemed pretty inevitable to me, that the Intel/IBM/AMDs of the world would branch out.

    The generation-old fabs they abandon for CPU-making are still a generation newer than what most anyone else has available. Repurposing those fabs to produce something like Flash chips, chipsets, etc. seems a pretty straight-forward and inexpensive way to keep making money on largely worthless facilities, even after the cost of retooling is taken into account.

    Though they obviously haven't done it yet, companies like In
    • Intel have been making Flash for years (decades?). And their fabs aren't much if any better (in terms of scale) than those of Altera and Xilinx and probably Kingston, Samsung, Motorola and the rest.

      You do know that 65nm FPGAs were on the market before 65nm processors. The reason is obvious: while Intel has to tool and tune a very complicated CPU to get decent yields, all a RAM/Flash/FPGA manufacturer has to do is tune a small amount of cookie-cutter design and ramp up production. As RAM/Flash/FPGA chips
      • You do know that 65nm FPGAs were on the market before 65nm processors.

        No, actually what I know is that you're absolutely wrong.

        Intel's 65nm Core CPUs were released January 2006, while Xilinx was turning out press releases at the end of May 2006, claiming to have produced the first 65nm FPGAs.

        The reason is obvious,

        What appears to be "obvious" to you, is utterly and completely wrong to the rest of the world...

  • Just wondering, doesn't AMD make a whole bunch of money on Flash memory?

    I know that they spun off the division to Spansion, which was a joint venture with Fujitsu, but if memory serves me correctly they still own a good section (40% or similar) of the company and make a lot of money out of it.

    Conspiracy theories'R'us I guess. It could just be that Intel turned around and said "What do you mean AMD is making a heap of cash out of something that isn't as hard to make as CPUs and we aren't?"
  • Rudimentary statistics (IANAS)

    The mean just tells us what you get if you take a sample and divide the sum of the values in the sample by the sample size. It's one of the three most common "averages" you can get in statistics. I'd be at least as interested in this case in seeing the mode and median.

    You can "screw up" a mean by adding one or two samples that are extreme. These disks, say they have a 5 million MTBF as the figure you want, but they all really fail after 5 minutes of use. Problem, right? Wro
