Your Hard Drive Lies to You

fenderdb writes "Brad Fitzpatrick of LiveJournal fame has written a utility and a quick article on how all hard drives, from the consumer level to the highest-grade 'enterprise' SCSI and SATA drives, do not obey the fsync() function. Manufacturers are blatantly sacrificing integrity in favor of scoring higher on 'pure speed' performance benchmarking."
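The test tool itself isn't reproduced here, but the idea behind it is easy to sketch. The snippet below is an illustrative example only (not Brad's utility; the file name and iteration count are arbitrary): it times a loop of small fsync()ed writes to the same sector. A 7200 RPM disk can physically commit a given sector at most about 120 times per second (one revolution per write), so a much higher reported rate means the "completed" writes are still sitting in a volatile cache somewhere.

/* fsync_rate.c -- illustrative sketch, not the article's utility.
 * Repeatedly overwrite one 512-byte sector and fsync() after each write.
 * A 7200 RPM drive can commit the same sector at most ~120 times/sec,
 * so a much higher syncs/sec figure means the "synced" data is really
 * being acknowledged from a volatile write cache. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    const int iterations = 1000;
    char block[512];
    memset(block, 'x', sizeof block);

    int fd = open("fsync_test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timeval start, end;
    gettimeofday(&start, NULL);

    for (int i = 0; i < iterations; i++) {
        if (pwrite(fd, block, sizeof block, 0) != (ssize_t)sizeof block) {
            perror("pwrite"); return 1;
        }
        if (fsync(fd) != 0) {   /* "please put this on stable storage" */
            perror("fsync"); return 1;
        }
    }

    gettimeofday(&end, NULL);
    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d fsync()ed writes in %.2f s = %.0f syncs/sec\n",
           iterations, secs, iterations / secs);

    close(fd);
    unlink("fsync_test.dat");
    return 0;
}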
  • by |>>? ( 157144 ) on Friday May 13, 2005 @02:19AM (#12517212) Homepage
    Since when do computers do what you mean?
  • by Tetard ( 202140 ) on Friday May 13, 2005 @02:24AM (#12517243)
    Write caching is enabled by default on most IDE/ATA drives. Most SCSI drives don't enable it. If you don't like it, turn it off. There's no "lying", and I'm sure the fsync() function doesn't know diddly squat about the cache of your disk. Maybe the ATA/device abstraction layer does, and I'm sure there's a configurable registry/sysctl/frob you can twiddle to make it DTRT (like FreeBSD has).

    Move along, nothing to see...
    • by ewhac ( 5844 ) on Friday May 13, 2005 @02:35AM (#12517281) Homepage Journal
      Yes, except there is a 'sync' command packet that is supposed to make the drive commit outstanding buffers to the platters, and not signal completion until those writes are done. It would appear, at first blush, that the drives are mis-handling this command when write-caching is enabled.

      There is historical precedent for this. There were recorded incidents of drives corrupting themselves when the OS, during shutdown, tried to flush buffers to the disk just before killing power. The drive said, "I'm done," when it really wasn't, and the OS said Okay, and killed power. This was relatively common on systems with older, slower disks that had been retrofitted with faster CPUs.

      However, once these incidents started occurring, the issue was supposed to have been fixed. Clearly, closer study is needed here to discover what's really going on.

      Schwab

      • by frinkazoid ( 880013 ) on Friday May 13, 2005 @06:25AM (#12518122)
        This is true. Installing a fresh Windows 98 SE on a fairly new PC and then running Windows Update, there is an update with this description:

        The Windows IDE Hard Drive Cache Package provides a workaround to a recently identified issue with computers that have the combination of Integrated Drive Electronics (IDE) hard disk drives with large caches and newer/faster processors. Computers with this combination may risk losing data if the hard disk shuts down before it can preserve the data in its cache.

        This update introduces a slight delay in the shutdown process. The delay of two seconds allows the hard drive's onboard cache to write any data to the hard drive.

        I found it nice to see how M$ worked around it, just waiting 2 seconds. How ingenious!
        Link to the M$ update site: http://www.microsoft.com/windows98/downloads/contents/WUCritical/q273017/Default.asp [microsoft.com]
        • >I found it nice to see how M$ worked around it,
          >just waiting 2 seconds. How ingenious!

          What would you have done? Verifying all data would probably take longer than 2 seconds, and you can't trust the disk to tell you when it's written the data.

          So you'd either have to figure out all the data that was in the cache, and verify that against the disk surface and only write when all that is done, or wait a bit. Making some assumptions about buffer size and transfer speed, then adding a safety factor, is p
      • by jonwil ( 467024 ) on Friday May 13, 2005 @06:51AM (#12518227)
        The right answer is for the drive not to respond to the "Sync" command with "Done" until it really is done (however long it takes), and for the OS to not continue until it sees the "done" response from the drive.
    • by Yokaze ( 70883 ) on Friday May 13, 2005 @02:47AM (#12517333)
      No. If you had no cache, there would be no need for a flush command. The flush command exists purely to flush the buffers and caches on the hard disk. ATA-5 specifies the command as E7h (and as mandatory).

      The command is specified in practically all storage interfaces for exactly the reason the author cited: integrity. Otherwise, you can't assure integrity without sacrificing a lot of performance.
    • by Everleet ( 785889 ) on Friday May 13, 2005 @04:13AM (#12517596)
      fsync() is pretty clearly documented to cause a flush of the kernel buffers, not the disk buffers. This shouldn't come as a surprise to anyone.

      From Mac OS X --

      DESCRIPTION
      Fsync() causes all modified data and attributes of fd to be moved to a
      permanent storage device. This normally results in all in-core modified
      copies of buffers for the associated file to be written to a disk.

      Note that while fsync() will flush all data from the host to the drive
      (i.e. the "permanent storage device"), the drive itself may not physically
      write the data to the platters for quite some time and it may be
      written in an out-of-order sequence.

      Specifically, if the drive loses power or the OS crashes, the application
      may find that only some or none of their data was written. The disk
      drive may also re-order the data so that later writes may be present
      while earlier writes are not.

      This is not a theoretical edge case. This scenario is easily reproduced
      with real world workloads and drive power failures.

      For applications that require tighter guarantees about the integrity of
      their data, MacOS X provides the F_FULLFSYNC fcntl. The F_FULLFSYNC
      fcntl asks the drive to flush all buffered data to permanent storage.
      Applications such as databases that require a strict ordering of writes
      should use F_FULLFSYNC to ensure their data is written in the order they
      expect. Please see fcntl(2) for more detail.

      From Linux --

      NOTES
      In case the hard disk has write cache enabled, the data may not really
      be on permanent storage when fsync/fdatasync return.

      From FreeBSD's tuning(7) --

      IDE WRITE CACHING
      FreeBSD 4.3 flirted with turning off IDE write caching. This reduced
      write bandwidth to IDE disks but was considered necessary due to serious
      data consistency issues introduced by hard drive vendors. Basically the
      problem is that IDE drives lie about when a write completes. With IDE
      write caching turned on, IDE hard drives will not only write data to disk
      out of order, they will sometimes delay some of the blocks indefinitely
      under heavy disk load. A crash or power failure can result in serious
      file system corruption. So our default was changed to be safe. Unfortunately,
      the result was such a huge loss in performance that we caved in
      and changed the default back to on after the release. You should check
      the default on your system by observing the hw.ata.wc sysctl variable.
      If IDE write caching is turned off, you can turn it back on by setting
      the hw.ata.wc loader tunable to 1. More information on tuning the ATA
      driver system may be found in the ata(4) man page.

      There is a new experimental feature for IDE hard drives called
      hw.ata.tags (you also set this in the boot loader) which allows write
      caching to be safely turned on. This brings SCSI tagging features to IDE
      drives. As of this writing only IBM DPTA and DTLA drives support the
      feature. Warning! These drives apparently have quality control problems
      and I do not recommend purchasing them at this time. If you need performance,
      go with SCSI.
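      To make the F_FULLFSYNC advice above concrete, here is a minimal sketch (the file name and error handling are illustrative, and the fallback to plain fsync() where the fcntl is unavailable is this example's assumption, not something the man pages mandate):

      /* full_sync.c -- sketch of the F_FULLFSYNC fcntl quoted above.
       * On Mac OS X, F_FULLFSYNC asks the drive itself to flush its write
       * cache; plain fsync() only guarantees the data has left the kernel. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      static int flush_to_platters(int fd)
      {
      #ifdef F_FULLFSYNC
          if (fcntl(fd, F_FULLFSYNC) == 0)   /* drive-level flush (Mac OS X) */
              return 0;
      #endif
          return fsync(fd);                  /* kernel-level flush only */
      }

      int main(void)
      {
          int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
          if (fd < 0) { perror("open"); return 1; }

          const char record[] = "commit record\n";
          if (write(fd, record, strlen(record)) < 0) { perror("write"); return 1; }

          if (flush_to_platters(fd) != 0) { perror("flush"); return 1; }

          close(fd);
          return 0;
      }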
      • Exactly - the author of this "test" made a bad assumption: that fsync() (or rather the Windows equivalent) means it's on the disk. Understandable, and once upon a time it was true in Unix. fsync() doesn't (that I know of) issue ATA sync commands, though.

        I used to beta-test SCSI drives, and write SCSI and IDE drivers (for the Amiga). Write-caching is (except for very specific applications) mandatory for speed reasons.

        If you want some performance and total write-safety, tagged queuing (SCSI or ATA) could prov
  • by binaryspiral ( 784263 ) on Friday May 13, 2005 @02:24AM (#12517244)
    Hard drive manufacturers screwing over customers? Why, who would have thought?

    1 billion bytes equals 1 gigabyte - since when?

    Dropped MTBF right after reducing the standard 3-year warranty to 1 year - good timing.

    Now this?

    Wow what a track record of consumer loving...

    • Re:What's this? (Score:2, Informative)

      by Anonymous Coward
      1 billion bytes equals 1 gigabyte - since when?

      Since 1960 [wikipedia.org]. Since 1998 [wikipedia.org], 2^30 bytes = 1 gibibyte.

      • by pyrrhonist ( 701154 ) on Friday May 13, 2005 @02:57AM (#12517366)
        2^30 bytes = 1 gibibyte.

        AaARaaGGgHHhh! I simply loathe the IEC binary prefix names.

        Kibibits sounds like dog food [kibblesnbits.com].

        "Kibibits, Kibibits, I'm gonna get me some Kibibits..."

      • Re:What's this? (Score:3, Insightful)

        by KiloByte ( 825081 )
        No, the gibi crap is a new invention, going against established practice. And, it sounds awful.
    • by fo0bar ( 261207 ) on Friday May 13, 2005 @02:50AM (#12517348)
      using the wrong definitions to make their products seem bigger. I bought a P4 2.4GHz CPU the other day, and was shocked to find it wasn't 2,576,980,377.6Hz like it should be! Lying thieves...
    • Re:What's this? (Score:3, Insightful)

      by Shinobi ( 19308 )
      Ever since they started using the Giga prefix. Giga is explicitly defined as 10^9 base-10, ever since 1873 when the kilo, Mega, Giga etc prefixes were standardized.

      Ergo, 1 GigaByte=1 000 000 000 Bytes.

      Anything else is a result of comp sci people fucking up their standards compliance.
  • by Kaenneth ( 82978 ) on Friday May 13, 2005 @02:25AM (#12517246) Journal
    So, do you think someone typed "Nuclear weapons are being developed by the government of Iraq.^H^Hn." just before the power went out?
  • Why do we need it? (Score:4, Interesting)

    by Godman ( 767682 ) on Friday May 13, 2005 @02:25AM (#12517247) Homepage Journal
    If we are just now figuring out that fsync's don't work, then the question is, why do we care? Have we been using them, and they just haven't been working or something?

    If we've made it this far without it, why do we need it now?

    I'm just curious...
    • by Erik Hensema ( 12898 ) on Friday May 13, 2005 @02:50AM (#12517345) Homepage

      We need it because of journalling filesystems. A JFS needs to be sure the journal has been flushed out to disk (and resides safely on the platters) before continuing to write the actual (meta)data. Afterwards, it needs to be sure the (meta)data is written properly to disk in order to start writing the journal again.

      When both the journal and the data are in the write cache of the drive, the data on the platters is in an undefined state. Loss of power means filesystem corruption -- just the thing a JFS is supposed to avoid.

      Also, switching off the machine the regular way is a hazard. As an OS you simply don't know when you can safely signal the PSU to switch itself off.

      • by spectecjr ( 31235 )
        When both the journal and the data are in the write cache of the drive, the data on the platters is in an undefined state. Loss of power means filesystem corruption -- just the thing a JFS is supposed to avoid.

        ... except most drives use the angular momentum of the drive, the power left in the PSU and any spare voltage in the on-board capacitors to provide the power to finish writing and park the drive heads.

        At least, that was the state of the art in the early 90s.
        • by pe1chl ( 90186 )
          But since then, the angular momentum of drives has decreased, and cache size has increased.
          Of course write speed has increased as well, but typical cache size of 8MB and write speed of 50MB/s would mean 160ms of continuous writing when the head already is positioned correctly.
          Assuming the cache can contain blocks scattered over the entire disk, it does not seem realistic to write everything back on power failure.
      • by bgog ( 564818 ) * on Friday May 13, 2005 @04:08AM (#12517584) Journal
        The author is specifically talking about the fsync function, not the ATA sync command. fsync is an OS call notifying the system to flush its write caches to the physical device. This writes to the disk's write cache, but I don't believe it actually issues the sync command to the drive.

        In the case of a journaling file system they issue the sync command to the drive to flush the data out.

        I work on a block-level transactional system that requires blocks to be synced to the platters. There were two options: modify the kernel to issue syncs to the ATA drives on all writes (to the disk in question), or just disable the physical write cache on the drive. It turned out to be a touch faster to just disable the cache, but the two are effectively equal.

        However drives operate fine under normal conditions, applications write to file systems which take care of forcing the disks to sync. fsync (which the author is talking about) is an OS command and not directly related to the disk sync command.
        • by swmccracken ( 106576 ) on Friday May 13, 2005 @05:50AM (#12517983) Homepage
          This writes to the disk's write cache, but I don't believe it actually issues the sync command to the drive.

          Yeah - that's the point of this thing - what's supposed to happen with fsync? From memory, sometimes it will guarantee the data is all the way to the platters and sometimes it will not, depending on what storage system you're using and how easy such a guarantee is to make.

          Linus in 2001 [iu.edu] discussing this issue - it's not new. That whole thread was about comparing SCSI against IDE drives, and it seemed that the IDE drives were either breaking the laws of physics, or lying, but the SCSI drives were being honest.

          From hazy memory, one problem is that without tagged command queuing or native command queuing, one process issuing a sync will cause the hard drive and related software to wait until all i/o "in flight" has fully synced, holding up any other i/o tasks for other processes!

          That's why fsync often lies: it's not practical for programs that fsync all the time to flush buffers to tie up the whole i/o subsystem, and apparently some programs were overzealous about calling fsync when they shouldn't.

          However, with TCQ, commands that are synced overlap with other commands, so it's not that big a deal (other i/o tasks are not impacted any more than they would be by other, unsynchronised, i/o). (Thus, with TCQ, fsync might go all the way to the platters, but without it, it might just go to the IDE bus.) SCSI has had TCQ from day one, which is why a SCSI system is more likely to sync all the way than IDE.

          If I'm wrong, somebody correct me please.

          Brad's program certainly points out an issue - it should be possible for a database engine to write to disk and guarantee that it gets written; perhaps fsync() isn't good enough - be this a fault in the drives, the IDE spec, the IDE drivers or the OS.
          • Actually, it's a flaw in the ATA specification: ATA drives can do a disconnected read, but there is no way to do a disconnected write.

            Because of this, you can have a tagged command queue for read operations, but there is no way to provide a corresponding one for write operations.

            SCSI does not have this limitation, but the bus implementation is much more heavyweight, and therefore more expensive.

            The problem is exacerbated in that ATA does not permit new disconnected read requests to be issued while the n
        • I work on a block-level transactional system that requires blocks to be synced to the platters. There were two options: modify the kernel to issue syncs to the ATA drives on all writes (to the disk in question), or just disable the physical write cache on the drive. It turned out to be a touch faster to just disable the cache, but the two are effectively equal.

          Just to clarify - use hdparm -W to fiddle with the write cache on the drive. I've built linux-based network appliances that go out in the field,

    • If we've made it this far without it, why do we need it now?


      Maybe you've made it this far, but I'm sure there are other people who have mysteriously lost data, or had it corrupted. They probably blamed the OS, faulty hardware, drivers, whatever.

      Data security is based on assumptions (a contract if you will). If you assume the contract hasn't been broken, you look elsewhere for blame when something goes wrong. Up until now I'm sure no one questioned whether fsync() was doing what it was supposed to (a
  • Of course it does! (Score:5, Interesting)

    by grahamsz ( 150076 ) on Friday May 13, 2005 @02:26AM (#12517253) Homepage Journal
    Having written some diagnostic tools for a smaller hard disk maker (whom I'll refrain from naming), it's amazing to me that disks work at all.

    Most systems can identify and patch out bad sectors so that they aren't used. What surprised me is that the manufacturers have their own bad sector table, so when you get the disk it's fairly likely that there are already bad areas which have been mapped out.

    Secondly, the raw error rate was astoundingly high. It's been quite a few years, but it was somewhere between one error in every 10^5 to 10^6 bits. So it's not unusual to find a mistake in every megabyte read. Of course, CRC picks up this error and hides that from you too.

    Granted this was a few years ago, but I wouldn't be surprised if it's as bad (or even worse) now.
    • It's been quite a few years, but it was somewhere between one error in every 10^5 to 10^6 bits. So it's not unusual to find a mistake in every megabyte read.

      I'm surprised, but not that surprised.

      Areal densities are so high these days, the r/w heads are so small, and prices are so low, that I also am truly amazed that modern HDDs are made to work.

      But then, I remember 13" removable 5MB platters, and 8" floppy drives.
    • "What surprised me is that the manufacturers have their own bad sector table, so when you get the disk it's fairly likely that there are already bad areas which have been mapped out."

      Can't you get the count with SMART?
      • by cowbutt ( 21077 ) on Friday May 13, 2005 @04:00AM (#12517560) Journal
        Sort of, yes:
        # smartctl -a /dev/hde | grep 'Reallocated_Sector_Ct'
        5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0
        This indicates that /dev/hde is far from exhausting its supply of reserved blocks (the first 100) and never has been (the second 100, which is 'worst'). When it crosses the threshold (36) (or the threshold of any of the other 'Pre-fail' attributes for that matter), failure is imminent.
    • As anybody who's ever used (or had to use :-( ) SpinRite [grc.com] will tell you, your HDD not only lies to you, it cheats and steals as well. To wit: it makes it seem there are no bad sectors, when in fact the surface is riddled with them; the manufacturer just hides this fact from you with a bad sector table. Also, errors are corrected on the fly by some CRC checking. You can ask SMART for the stats, but you can do very little about the results it gives you, other than maybe buying a new disk (which most li
      • by enosys ( 705759 )
        IMHO having the drive hide bad sectors is a good idea. That way you don't have to enter any bad sector lists, you don't have to scan for them when formatting, and the OS doesn't have to worry about them.

        What would you do if you had full control over bad sectors? You're still able to keep trying to read a new bad sector that contains data. The drive will try to repair it when you write to it and if it can't then it will remap it. It seems to me the only thing you can't do is force the drive to try to re

  • Manufacturers are blatantly sacrificing integrity in favor of scoring higher on 'pure speed' performance benchmarking."

    Corporate integrity, not data integrity. I've read through the article and don't see how you can lose data integrity unless you disable all caching, from the OS to the disk itself. In this day and age, nobody does that. Sure, something's broken. But I fail to see how it's very useful these days anyway. Maybe someone with a better grasp of why you would need fsync() could help out?

    • by Dorsai65 ( 804760 ) <[dkmerriman] [at] [gmail.com]> on Friday May 13, 2005 @02:42AM (#12517320) Homepage Journal
      What the article is saying is that the drive (or sometimes the RAID card and/or OS) is lying (with fsync) when it answers that it wrote the data: it didn't; so when you lose power, the data that was in cache (and should have been written) gets lost. It isn't a question of whether caching is turned on or not, but of whether the drive truthfully says the data was actually written.
    • Here's how (Score:5, Informative)

      by Moraelin ( 679338 ) on Friday May 13, 2005 @02:44AM (#12517322) Journal
      Don't think "home user losing the last porn pic"; think, for example, "corporate databases using XA transactions".

      The semantics of XA transactions say that at the end of the "prepare" step, the data is already on the disc (or whatever other medium), just not yet made visible. That is, basically everything that could possibly fail has in fact had its chance to fail. And if you got an OK, then it didn't.

      Introducing a time window (likely extending not just past "prepare", but also past "commit") where the data is still in some cache and God knows when it'll actually get flushed, throws those whole semantics out the window. If, say, power fails (e.g., PSU blows a fuse) or shit otherwise hits the fan in that time window, you have fucked up the data.

      The whole idea of transactions is ACID: Atomicity, Consistency, Isolation, and Durability:

      - Atomicity - The entire sequence of actions must be either completed or aborted. The transaction cannot be partially successful.

      - Consistency - The transaction takes the resources from one consistent state to another.

      - Isolation - A transaction's effect is not visible to other transactions until the transaction is committed.

      - Durability - Changes made by the committed transaction are permanent and must survive system failure.

      That time window we introduced makes it at least possible to screw 3 out of 4 there. An update that involves more than one hard drive may not be Atomically executed in that case: only one change was really persisted. (E.g., if you booked a flight online, maybe the money got taken from your account, but not given to the airline.) It hasn't left the data in a Consistent state. (In the above example, some money has disappeared into nowhere.) And it's all because it wasn't Durable. (An update we thought we committed hasn't, in fact, survived a system failure.)
      • Re:Here's how (Score:3, Informative)

        by arivanov ( 12034 )
        And this is the exact reason why any good SQL-based system must have means of integrity checking.

        As someone who has been writing database stuff for 10+ years now, I get really pissed off when I see lunatics raving on Acid about ACID. ACID in itself is not enough.

        You must have reference checking, offline integrity tests, as well as ongoing online integrity tests. To repeat your example, a transaction for buying tickets for a holiday must insert a record in the Requests table, Tickets table, Holidays table, e
        • And your point is? (Score:4, Informative)

          by Moraelin ( 679338 ) on Friday May 13, 2005 @05:02AM (#12517804) Journal
          Yes, nothing by itself is enough, not even XA transactions, but it can make your life a _lot_ easier. Especially if not all records are under your control to start with.

          E.g., the bank doesn't even know that the money is going to reserve a ticket on flight 705 of Elbonian United Airlines. It just knows it must transfer $100 from account A to account B.

          E.g., the travel agency doesn't even have access to the bank's records to check that the money has been withdrawn from your account. And it shouldn't ever have.

          So you propose... what? That the bank gets full access to the airline's business data, and that the airline can read all bank accounts, for those integrity checks to even work? I'm sure you can see how that wouldn't work.

          Yes, if you have a single database and it's all under your control, life is damn easy. It starts getting complicated when you have to deal with 7 databases, out of which 5 are in 3 different departments, and 2 aren't even in the same company. And where not everything is a database either: e.g., where one of the things which must also happen atomically is sending messages on a queue.

          _Then_ XA and ACID become a lot more useful. It becomes one helluva lot easier to _not_ send, for example, a JMS message to the other systems at all when a transaction rolls back, than to try to bring the client's database back in a consistent state with yours.

          It also becomes a lot more expensive to screw up. We're talking stuff that has all the strength of a signed contract, not "oops, we'll give you a seat on the next flight".

          Yes, your tools discovered that you sent the order for, say, 20 trucks in duplicate. Very good. Then what? It's as good as a signed contract the instant it was sent. It'll take many hours of some manager's time to negotiate a way out of that fuck-up. That is _if_ the other side doesn't want to play hardball and remind you that a contract is a contract.

          Wouldn't it be easier to _not_ have an inconsistency to start with, than to detect it later?

          Basically, yes, please do write all the integrity tests you can think of. Very good and insightful that. But don't assume that it suddenly makes XA transactions useless. _Anything_ that can reduce the probability of a failure in a distributed system is very much needed. Because it may be disproportionately more expensive to fix a screw-up, even if detected, than not to do it in the first place.
  • by ToraUma ( 883708 ) on Friday May 13, 2005 @02:31AM (#12517269)
    96% of Livejournal users replied, "What's a hard drive? Is that like a modem?"
  • ... "Swear to you there's no pr0n there !!"
  • by rice_burners_suck ( 243660 ) on Friday May 13, 2005 @02:36AM (#12517287)
    Why am I not surprised at this? First, they decide that a kilobyte = 1000 bytes, rather than the correct value of 1024. This leads the megabyte to be 1000 kilobytes, again, rather than 1024. The gig is likewise 1000 megabytes. You might think, ok, big deal, right?

    Yeah. In the days when the biggest hard drive you could get was 2 gigs, you would get 147,483,648 bytes less storage than advertised, unless you read the fine print located somewhere. This is only about 140 megs less than advertised. Today, when you can get 200 gig hard drives, the difference is much larger: 14,748,364,800 bytes less storage than advertised. This means that now, you get almost FOURTEEN GIGABYTES less storage than advertised. That's bigger than any hard drive that existed in 1995. That is a big deal.

    I'm bringing up the size issue in a thread on fsync() because it is only one more area where hard drive manufacturers are cheating to get "better" performance numbers, instead of being honest and producing a good product. As a result, journaling filesystems and the like cannot be guaranteed to work properly.

    If the hard drive mfgs really want good performance numbers, this is what they should do: Hard drives already have a small amount of memory (cache) in the drive electronics. Unfortunately, when the power goes away, the data therein becomes incoherent within nanoseconds. So, embed a flash chip on the hard drive electronics, along with a small rechargeable battery. If the battery is dead or the flash is fscked up, both of which can easily be tested today, the hard drive obeys every fsync() more religiously than the pope and works slightly more slowly. If the battery is alive and the flash works, then in the event of a power-off with data remaining in the cache (now backed by battery), that data will be written to the flash chip. Upon the next powerup, the hard drive will initialize as normal, but before it accepts any incoming read or write commands, it will first record the information from flash to the platter. This is a good enough guarantee that data will not be lost, as the reliability of flash memory exceeds that of the magnetic platter, provided the flash is not written too many times, which it won't be under this kind of design; and as I said, nothing will be written to flash if the flash doesn't work anymore.

    • kilo = 10^3 = 1,000
      mega = 10^6 = 1,000,000
      giga = 10^9 = 1,000,000,000

      kibi = 2^10 = 1,024
      mebi = 2^20 = 1,048,576
      gibi = 2^30 = 1,073,741,824

      So it's not the hard drive manufacturers that are wrong. You get 1 gigabyte of hard disk space for every gigabyte advertised. When you're buying 1 gigabyte of memory, you get about 74 megabytes for free (because you actually get 1 gibibyte).
      • Ok, fair enough. Now step into any of the 99% of all computer shops out there and ask for a hard drive, 160 gibibyte in size.

        If they don't laugh until you exit the store, I'll pay for your disk. Please make sure you record the event and share it on the net.

    • by Sparr0 ( 451780 ) <sparr0@gmail.com> on Friday May 13, 2005 @02:54AM (#12517359) Homepage Journal
      You have no grasp of what 'kilo', 'mega', and 'giga' mean. They have meant the same thing for 45 years, computers did not change that. There is a standard for binary powers, you simply refuse to use it.
      • Ah, so now we know your 3GB of space and 100GB of transfer advertised in your sig aren't binary gigabytes, but decimal, just like the hard drive manufacturers :-)
      • by hyfe ( 641811 ) on Friday May 13, 2005 @06:23AM (#12518110)
        You have no grasp of what 'kilo', 'mega', and 'giga' mean. They have meant the same thing for 45 years, computers did not change that. There is a standard for binary powers, you simply refuse to use it.

        Being able to keep two thoughts in your head simultaneously is a nice skill.

        Sure, the scientific meanings of kilo, mega and giga never changed, but in computer science kilo, mega and giga started out as the binary values. They are still in use: when reporting free space left on your hard drive, both Windows and Linux use binary thousands. Saying this is a clear-cut case is just ignoring reality, as using 1024 really does simplify a lot of the math.

        Secondly, if the manufacturers had actually come out and said 'we have decided to adhere to scientific standards and use regular 1000s' and clearly marked their products as such, we wouldn't have any problems now. The problem is, they didn't. They just silently changed it, causing shitloads of confusion along the way. Of all the alternatives in this mess, they chose the one which could ruin an engineer's day, only for the purpose of having your drive look a few % larger.

        Some fool let the marketers in on the engineering meetings and we all lived to rue that day.

    • Why am I not surprised at this? First, they decide that a kilobyte = 1000 bytes, rather than the correct value of 1024. This leads the megabyte to be 1000 kilobytes, again, rather than 1024. The gig is likewise 1000 megabytes. You might think, ok, big deal, right?

      Wrong. If you start ranting, get your FACTS STRAIGHT. It was solved back in 1998 already.

      The Standards

      Although computer data is normally measured in binary code, the prefixes for the multiples are based on the metric system. The nearest

  • More information (Score:5, Interesting)

    by Halo1 ( 136547 ) on Friday May 13, 2005 @02:39AM (#12517305)
    There was an interesting discussion [apple.com] on this topic a while ago on Apple's Darwin development list.
  • by Anonymous Coward on Friday May 13, 2005 @02:42AM (#12517319)
    The author lied when he implied that DRIVES are the issue.

    ATA-IDE, SCSI, and S-ATA drives from all major manufacturers will accept commands to flush the write buffer, including the track cache buffer, completely.

    These commands are critical before cutting power and "sleeping" in machines that can perform a complete "deep sleep" (no power at all sent to the ATA-IDE drive).

    Such OSes include Apple's OS 9 on a G4 tower, and some versions of OS X on machines not supplied with certain naughty video cards.

    Laptops, for example, need to flush drives... AND THEY DO.

    All drives conform.

    As for DRIVER AUTHORS not heeding the special calls sent to them... he is correct.

    Many driver writers (other than me) are loser shits that do not follow standards.

    As for LSI RAID cards, he is right, and other RAID cards... that is because the products are defective. But the drives are not, and the drivers COULD be written to honor a true flush.

    As for his "discovery" of sync not working... DUH!!!!!

    The REAL sync is usually a privileged operation, sent from the OS, and not highly documented.

    For example, on a Mac the REAL sync in OS 9 is a jhook trap and not the documented normal OS call, which has a governor on it.

    Mainframes such as PRIMOS and other old systems, including even Unix, typically faked the sync command and ONLY allowed it if the user was at the actual physical system console and furthermore logged in as root or as a backup operator.

    This cheating always sickened me. But all OSes do this because so many people who think they know what they are doing try to sync all the time for idiotic self-rolled journalling file systems and journalled databases.

    But DRIVES, except a couple of S-ATA Seagates from 2004 with bad firmware, ALWAYS will flush.

    This author should have explained that it's not the hard drives.

    They perform as documented.

    Admittedly, Linux used to corrupt and not flush several years ago... but it was not the IDE drives. They never got the commands.

    It's all a mess... but setting a DRIVE to not cache is NOT the solution! It's retarded to do so, and all the comments in this thread talking of setting the cache off are foolish.

    As for caching device topics, there are many options.

    1> SCSI WCE permanent option

    2> ATA Seagate Set Features command, subcommand 82h: disable write cache (see the sketch after this list)

    3> ATA config commands sent over a SCSI (RAID card) device using a SCSI CDB in passthrough. It uses a 16-byte CDB with 8h, or a 12-byte CDB with Ah, for sending the tunneled command.

    4> ATA ATAPI commands for the WCE bit, as if it was SCSI

    Fibre Channel drives of course honor SCSI commands.

    As for mere flushing, a variety of low level calls all have the same desired effect and are documented in respective standards manuals.
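    For what it's worth, option 2 above can be exercised from userspace on Linux through the HDIO_DRIVE_CMD ioctl (the same path hdparm -W 0 takes). The following is only a sketch under that assumption; the device path is an example and the opcodes are spelled out for illustration:

    /* wcache_off.c -- sketch of option 2 above on Linux: issue ATA
     * SET FEATURES (EFh) with subcommand 82h ("disable write cache")
     * through the HDIO_DRIVE_CMD ioctl.  Needs root and an IDE/ATA disk. */
    #include <fcntl.h>
    #include <linux/hdreg.h>   /* HDIO_DRIVE_CMD */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define ATA_SET_FEATURES    0xEF  /* ATA SET FEATURES opcode             */
    #define ATA_DISABLE_WCACHE  0x82  /* subcommand 82h: disable write cache */

    int main(void)
    {
        int fd = open("/dev/hda", O_RDONLY);   /* example device */
        if (fd < 0) { perror("open"); return 1; }

        /* HDIO_DRIVE_CMD args layout: command, nsector, feature, count */
        unsigned char args[4] = { ATA_SET_FEATURES, 0, ATA_DISABLE_WCACHE, 0 };

        if (ioctl(fd, HDIO_DRIVE_CMD, args) != 0) {
            perror("HDIO_DRIVE_CMD(SET FEATURES, disable write cache)");
            close(fd);
            return 1;
        }

        printf("write cache disabled on /dev/hda\n");
        close(fd);
        return 0;
    }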

    • Parent either doesn't know what he's talking about, or is a troll. Pity there isn't an "incoherent rant" moderation option, or we could avoid the ambiguity.
  • Not really a Lie (Score:3, Informative)

    by bgog ( 564818 ) * on Friday May 13, 2005 @02:52AM (#12517355) Journal
    It's not a lie. fsync syncs to a device. The device is a hard drive with a cache.

    You'd expect an fsync to complete only when the data is physically written to disk. However, usually this is not the case: it completes as soon as the data is fully written to the cache on the physical disk.

    The downside of this is that it's possible to lose data if you pull the power plug (usually not just by hitting the power switch). However, if the disks were to actually commit fully to the physical media on every fsync, you would see a very, very dramatic performance degradation. Not just a little slower so you look bad in a magazine article, but incredibly slow, especially if you are running a database or similar application that fsyncs often.

    Server class machines solve this problem by providing battery-backed cache on their controllers. This allows full-speed operation by fsyncing only to cache, but if power is lost the data is still safe because of the battery.

    This doesn't matter too much for the average joe, for a number of reasons. First, when the power switch is hit, the disks tend to finish writing their caches before spinning down. In the case of a power failure, journaled file systems will usually keep you safe (but not always).

    This is a big issue however if you are trying to implement an enterprise class database server on everyday hardware.

    So turn off the write cache if you don't want it on but don't complain when your system starts to crawl.
    • Re:Not really a Lie (Score:4, Informative)

      by ravenspear ( 756059 ) on Friday May 13, 2005 @03:09AM (#12517407)
      However, if the disks were to actually commit fully to the physical media on every fsync, you would see a very, very dramatic performance degradation. Not just a little slower so you look bad in a magazine article, but incredibly slow, especially if you are running a database or similar application that fsyncs often.

      I think you are confusing write caching with fsyncing. Having no write cache to the disk would indeed slow things down quite a bit. I don't see how fsync fits the same description though. Simply honoring fsync (actually flushing the data to disk) would not slow things down anywhere near the same level as long as software makes intelligent use of it. Fsync is not designed to be used with every write to the disk, just for the occasional time when an application needs to guarantee certain data gets written.
  • by Dahan ( 130247 ) <khym@azeotrope.org> on Friday May 13, 2005 @03:07AM (#12517401)
    According to SUSv3 [opengroup.org]:
    The fsync() function shall request that all data for the open file descriptor named by fildes is to be transferred to the storage device associated with the file described by fildes. The nature of the transfer is implementation-defined.
    If _POSIX_SYNCHRONIZED_IO is not defined, the wording relies heavily on the conformance document to tell the user what can be expected from the system. It is explicitly intended that a null implementation is permitted. This could be valid in the case where the system cannot assure non-volatile storage under any circumstances or when the system is highly fault-tolerant and the functionality is not required. In the middle ground between these extremes, fsync() might or might not actually cause data to be written where it is safe from a power failure.
    (Emphasis added). If you don't want your hard drive to cache writes, send it a command to turn off the write cache. Don't rely on fsync(). Either that, or hack your kernel so that fsync() will send a SYNCHRONIZE CACHE command to the drive. That'll sync the entire drive cache though, not just the blocks associated with the file descriptor you passed to fsync().
  • fsync semantics are needed whenever you want to implement ACID transactions. This lies at the core of database systems and journaling file systems, for example. No fsync, no data integrity.
  • RTFM (Score:2, Informative)

    by BigYawn ( 842342 )
    From the fsync man page (section "NOTES"):

    In case the hard disk has write cache enabled, the data may not really be on permanent storage when fsync/fdatasync return.
    When an ext2 file system is mounted with the sync option, directory entries are also implicitly synced by fsync.
    On kernels before 2.4, fsync on big files can be inefficient. An alternative might be to use the O_SYNC flag to open(2).
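    As a small illustration of the O_SYNC alternative mentioned in those notes (the file name is arbitrary, and the same drive-cache caveat still applies):

    /* osync_write.c -- sketch of the O_SYNC alternative: every write()
     * returns only after the kernel has pushed the data to the device,
     * so no separate fsync() call is needed.  The drive's own write
     * cache may, as discussed above, still hold the data. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("synced.log", O_WRONLY | O_CREAT | O_APPEND | O_SYNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char line[] = "synchronous write\n";
        if (write(fd, line, strlen(line)) < 0) { perror("write"); return 1; }

        close(fd);   /* no fsync() needed: each write was synchronous */
        return 0;
    }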

  • by cahiha ( 873942 ) on Friday May 13, 2005 @03:55AM (#12517541)
    Well, it's unlikely this is going to change. The real solution is to give the disk drive power for long enough to let it complete its writes no matter what, and/or to add non-volatile or flash memory to the disk drive so that it can complete its writes after coming back up.

    There is a fairly simple external solution for that: a UPS. They're good. Get one.

    And even then it is not guaranteed that just because you wrote a block you can read it again; nothing can guarantee that. So, file systems need to deal, one way or another, with the possibility that this case occurs.
  • by stereoroid ( 234317 ) on Friday May 13, 2005 @04:56AM (#12517769) Homepage Journal
    Microsoft have had a few problems in this area - see KB281672 [microsoft.com] for example.

    Then they released Windows 2000 Service Pack 3, which fixed some previous caching bugs, as documented in KB332023 [microsoft.com]. The article tells you how to set up the "Power Protected" write cache option, which is your way of saying "yes, my storage has a UPS or battery-backed cache, give me the performance and let me worry about the data integrity".

    I work for a major storage hardware vendor: to cut a long story short, we knew fsync() (a.k.a. "write-through" or "synchronize cache") was working on our hardware when performance started sucking after customers installed W2K SP3, and we had to refer customers to the latter article.

    The same storage systems have battery-backed cache, and every write from cache to disks is made write-through (because drive cache is not battery-backed). In other words, in these and other enterprise-class systems, the burden of honouring fsync() / write-through commands from the OS has shifted to the storage controller(s); the drives might as well have no cache for all we care. But it still matters that the drives do honour the fsync() we send to them from cache, and not signal "clear" when they haven't - if they lie, the cache drops that data, and no battery will get it back!

  • by jgarzik ( 11218 ) on Friday May 13, 2005 @08:50AM (#12519114) Homepage
    All it would have taken is ten minutes of searching on Google to discover what is going on.


    You need a vaguely recent 2.6.x kernel to support fsync(2) and fdatasync(2) flushing your disk's write cache. Previous 2.4.x and 2.6.x kernels would only flush the write cache upon reboot, or if you used a custom app to issue the 'flush cache' command directly to your disk.


    Very recent 2.6.x kernels include write barrier support, which flushes the write cache when the ext3 journal gets flushed to disk.


    If your kernel doesn't flush the write cache, then obviously there is a window where you can lose data. Welcome to the world of write-back caching, circa 1990.


    If you are stuck without a kernel that issues the FLUSH CACHE (IDE) or SYNCHRONIZE CACHE (SCSI) command, it is trivial to write a userspace utility that issues the command.



    Jeff, the Linux SATA driver guy
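    For the curious, here is a minimal sketch of such a userspace utility, assuming Linux's HDIO_DRIVE_CMD ioctl and the IDE FLUSH CACHE opcode (E7h) mentioned earlier in the thread; a SCSI disk would need SYNCHRONIZE CACHE sent through the sg interface instead:

    /* flushcache.c -- the kind of trivial userspace utility described
     * above: issue the IDE FLUSH CACHE command (E7h) directly to a drive
     * via Linux's HDIO_DRIVE_CMD ioctl.  Device path defaults to an
     * example value; needs root. */
    #include <fcntl.h>
    #include <linux/hdreg.h>   /* HDIO_DRIVE_CMD */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define ATA_FLUSH_CACHE 0xE7   /* mandatory since ATA-5, as noted above */

    int main(int argc, char **argv)
    {
        const char *dev = (argc > 1) ? argv[1] : "/dev/hda";   /* example */

        int fd = open(dev, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* non-data command: args = { command, nsector, feature, count } */
        unsigned char args[4] = { ATA_FLUSH_CACHE, 0, 0, 0 };

        if (ioctl(fd, HDIO_DRIVE_CMD, args) != 0) {
            perror("HDIO_DRIVE_CMD(FLUSH CACHE)");
            close(fd);
            return 1;
        }

        printf("%s: drive write cache flushed\n", dev);
        close(fd);
        return 0;
    }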

  • by kublikhan ( 838265 ) on Friday May 13, 2005 @12:03PM (#12521277)
    Couldn't they just stick a large capacitor or small battery on the hard drive that is only used for flushing the write cache to the platters in the event of a power failure? It should be a simple enough matter; we only need a few seconds here, and it would solve this whole mess.
