This discussion has been archived. No new comments can be posted.

Linux Filesystems Benchmarked

  • 'Tis a dupe (Score:5, Informative)

    by Quattro Vezina (714892) on Tuesday May 11, 2004 @10:04AM (#9116141) Journal
    The original article [slashdot.org]
    • by vwjeff (709903) on Tuesday May 11, 2004 @10:15AM (#9116262)
      here is the winner. FAT 16
    • by Anonymous Coward
      ...making Linux just a little more fun!

      Benchmarking Filesystems
      By Justin Piszcz

      INTRO
      I recently purchased a Western Digital 250GB/8M/7200RPM drive and wondered which journaling file system I should use. I currently use ext2 on my other, smaller hard drives. Upon reboot or unclean shutdown, e2fsck takes a while on drives only 40 and 60 gigabytes. Therefore I knew using a journaling file system would be my best bet. The question is: which is the best? In order to determine this I used common operations that
      • All of the file systems fared fairly well when finding 10,000 files in a single directory, the only exception being XFS, which took twice as long.

        I'm surprised that XFS did so poorly here. I do know they had a bug at one point in time, which might explain such a score; however, I thought it had long been addressed. Worse, I thought I remembered reading that XFS uses a btree to track file and directory names. Please correct as needed. If this is true, it would appear to be a bug rather than normal performa
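For reference, the 10,000-file lookup test the article describes can be approximated with a quick shell loop (paths here are made up; point TESTDIR at a mount of the filesystem you want to measure):

```shell
# Create 10,000 empty files in one directory, then time a lookup pass.
TESTDIR=/tmp/findtest
mkdir -p "$TESTDIR"
cd "$TESTDIR"
for i in $(seq 1 10000); do : > "file$i"; done
time find . -type f | wc -l    # counts 10000 files
```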
  • -1 Offtopic (Score:5, Funny)

    by numbski (515011) * <{numbski} {at} {hksilver.net}> on Tuesday May 11, 2004 @10:05AM (#9116147) Homepage Journal
    tests on popular Linux filesystes

    So did the tests come back positive or negative? Systes are nasty things, and early detection is important to increase chances of survivability.

    Remember kids. Test early, and test often. Your files will thank you.
  • Not a clear winner (Score:5, Interesting)

    by stecoop (759508) on Tuesday May 11, 2004 @10:05AM (#9116155) Journal
    These charts make the choice of which file system to use clear as mud. What are the charts really saying? From what I gather, it appears that:

    EXT2 has better throughput

    EXT3 has better file handling capabilities

    Reiser has better search ability

    XFS has better delete capabilities

    JFS may be a better choice in regards to file manipulation. Subject to debate, of course...

    • by Coryoth (254751) on Tuesday May 11, 2004 @10:12AM (#9116223) Homepage Journal
      Not quite what I got from it. Ext2 was certainly faster for a lot of operations, but is, of course, not journalled. XFS and JFS were fast, but most importantly, when it came to large files, these two tended to really take the lead. XFS was particularly good at handling large files. Overall Ext3 was disappointingly slow surprisingly often.

      Jedidiah.
      • by Anonymous Coward on Tuesday May 11, 2004 @10:20AM (#9116313)
        Ext3 met Dr. Tweedie's engineering goals. The idea was to develop a journaling file system which was seamlessly compatible with Ext2. Ext3 is really an engineering marvel. You can instantly convert it back and forth between Ext2 and Ext3.

        Ext3 provides a safe low-pain entry into the world of journaled file systems. No need to re-partition or reformat. It offers reasonably good performance plus the benefits of journalling.

        • by ValourX (677178) on Tuesday May 11, 2004 @10:58AM (#9116693) Homepage

          Also, it's mountable from FreeBSD. Reiser, XFS and JFS are not.

          This may seem trivial until you have a dual boot system with FreeBSD and GNU/Linux and you want to transfer your /home dir or whatever.

          -Jem
          • by ImpTech (549794) on Tuesday May 11, 2004 @11:50AM (#9117179)
            Not to mention mountable under Windows... for those of us that still need that. Or rather, EXT2 is mountable under Windows (with a third-party driver), and thus EXT3 can be read as well.
            • by mindriot (96208) on Tuesday May 11, 2004 @01:49PM (#9118665)

              Reiserfs can at least be accessed [p-nand-q.com] under Windows.

              My personal peeve with ReiserFS, though, is that I once had the main ReiserFS partition on my laptop completely destroyed by a simple power failure. Many files ended up in lost+found afterwards, but some had their contents destroyed. (And restoring ~1000 files by looking at their contents is not fun...) So I've kinda lost trust in it. ext3 may be slow, but I've never had a single problem with it. Seems very robust to me. So for me, reliability is more important than speed (whoever needs performant servers is of course entitled to a different opinion). XFS and JFS seemingly can't be accessed from Windows, so that is a minus for some.

      • by lecca (84194)

        "Overall Ext3 was disappointingly slow surprisingly often."

        I disagree. Plus, this test is obsolete; why did he use a 2.4 kernel?!

        from: http://freshmeat.net/projects/linux/?branch_id=46339&release_id=160407 [freshmeat.net]

        "Linux 2.6.6
        [...]
        Changes:
        ...ext2 and ext3 filesystem performance was significantly improved. "

        And that's just from today's kernel release. What about all the changes between 2.4 and now?

        Considering the convenience of backward compatibility, and the fact that ext3 wasn't the worst in

    • by kfg (145172)
      Tests such as these will always make things clear as mud. Engineering is always a matter of compromise. Trade offs must be made.

      Do you want a car that goes really, really fast, or do you want a car that gets good mileage and has a really big back seat? (You can always lie about having run out of gas.)

      Neither car is "best" until you define its intended use, and they both make lousy hammers. I canna change the laws of physics.

      Different engineers have different ideas, different goals and different ways of g
    • by Lussarn (105276) on Tuesday May 11, 2004 @11:47AM (#9117157)
      I can't see the slashdotted page, but if you have a mailserver and are using maildirs (qmail), ReiserFS is a good choice because you will have a lot of small files. Reiser can use the space on your disk fully and is not limited to "at least as big as the blocksize".

      I have a mailserver which has about 20GB of mail on Reiser. With Ext3 it would be over 30GB.
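The savings the parent describes come from tail packing: on a 4 KB-block filesystem, every 100-byte file still burns a full block. A quick sketch makes the overhead visible (paths are arbitrary; exact numbers depend on your filesystem's block size):

```shell
# 1000 files of 100 bytes each: ~100 KB of actual data, but ~4 MB of
# allocated blocks on a 4 KB-block ext2/ext3 filesystem. ReiserFS
# packs such tails together and avoids most of this overhead.
DIR=/tmp/smallfiles
mkdir -p "$DIR"
for i in $(seq 1 1000); do head -c 100 /dev/zero > "$DIR/f$i"; done
du -sh --apparent-size "$DIR"   # actual data, ~100K
du -sh "$DIR"                   # allocated space (block-size dependent)
```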
    • So if I want to trash my files XFS would be the faster, right? :-)
    • by perlchild (582235) on Tuesday May 11, 2004 @12:12PM (#9117433)
      No mention of data=writeback, or any other optimisation tweaks, however. Kinda sad. The article is nice; the graphs are... err, too much of a good thing?
      And basically the results just reiterate the design imperatives of each filesystem (how unsurprising!):

      - ext2 predates them all
      - ext3 is a low-impact, let's reuse what we know as much as possible, kinda file system
      - reiser's b-trees reflect its "once we put the data in, how do we find it again" orientation
      - XFS was, at least at one point, designed for "media" files (think renderfarms), aka LARGE files; much of the benchmarks it won were on such files, although its design was also influenced by large-scale server needs (a renderfarm is a large-scale server cluster, right?)
      - JFS was influenced by large-scale server needs (databases), but tempered by OS/2's needs and other systems, resulting in a filesystem that's a bit more nimble than XFS, but less handy with huge files (normal, since databases use raw I/O on huge files when necessary, unlike render clusters)

      I think this demonstrates the implications of early design imperatives on long-term software trends. XFS and JFS were developed for other platforms and ported to Linux, yet notice how they can't really change their strengths (good thing too!).

      Anyone try the same benchmark on the 2.6 kernel just to contrast it? Wouldn't the new IO-system help to mitigate those weird ext2/ext3 slowdowns the article mentions, but doesn't explain?
  • Slightly OT (Score:3, Interesting)

    by iReflect (215501) on Tuesday May 11, 2004 @10:09AM (#9116188) Homepage
    This is good information to know, but for a project I'm working on I need to know which filesystem can take the most abuse. I'm talking about power outages and hard resets, mostly. I know I should go journalled, but which one? What else should I keep in mind?
    • Re:Slightly OT (Score:5, Informative)

      by Malc (1751) on Tuesday May 11, 2004 @10:21AM (#9116315)
      Obviously (as you point out) a journaling filesystem is what you need. I went for Ext3 on my Debian servers; I/O throughput wasn't so important. The good thing about Ext3 is its backwards compatibility with Ext2. If there's a problem and you don't have all the kernel modules or tools, you're still pretty much guaranteed access to the file system by mounting it as Ext2, as support for that system is almost universal under Linux.
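The fallback described here is a one-liner; the device and mount point below are placeholders:

```shell
# Mount an ext3 volume with the plain ext2 driver; the journal is
# simply ignored and gets replayed the next time it is mounted as ext3.
mount -t ext2 /dev/hda5 /mnt/rescue
```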
    • Re:Slightly OT (Score:4, Insightful)

      by Rik van Riel (4968) on Tuesday May 11, 2004 @11:39AM (#9117088) Homepage
      Ext3 has most of its metadata (inodes, block group descriptors, etc) in fixed places on disk and e2fsck has a decade of testing in cleaning up the non-journaled ext2, so it's probably better tested than any of the fscks for journaling filesystems.
  • by Bronster (13157) <slashdot@brong.net> on Tuesday May 11, 2004 @10:09AM (#9116193) Homepage
    Maybe slashdot needs a filesystem update to one with more powerful meta-data support like something that can detect when the same URL has been used in more than one post within a certain time. Sheesh.
  • by SquareOfS (578820) on Tuesday May 11, 2004 @10:09AM (#9116195)
    . . . is which file system linuxgazette is running.

    That is, before it melted.

  • by lorcha (464930) on Tuesday May 11, 2004 @10:11AM (#9116211)
    the results may surprise you.
    The server at the link provided is not responding!

    You're right, that was a total surprise.

  • by Trailer Trash (60756) on Tuesday May 11, 2004 @10:12AM (#9116232) Homepage
    And the reason is that you used jpegs. jpegs are for photographs; use gif for images such as this. The text won't end up unreadably blurry and you'll save tons of disk space/bandwidth.
  • reiserfs? (Score:5, Funny)

    by emc (19333) on Tuesday May 11, 2004 @10:13AM (#9116244)
    I like Paul Reiser [imdb.com] as much as the next guy... but naming a filesystem after him?


    I mean, really. "Mad about You" was a fine TV show... but this good?

    How long until we see McKellenFS [imdb.com]?

  • by Anonymous Coward on Tuesday May 11, 2004 @10:14AM (#9116251)
    http://209.81.41.149/~jpiszcz/index.html

    it's not slashdotted yet :)
  • Hrmmm (Score:4, Interesting)

    by nizo (81281) on Tuesday May 11, 2004 @10:16AM (#9116276) Homepage Journal
    See the whole article and a full range of hideously colored full sized graphs here [209.81.41.149] before it gets slashdotted too. Speaking of which, there has got to be better graph making software out there in Linuxland......
    • Re:Hrmmm (Score:3, Informative)

      by Bob Uhl (30977)
      Speaking of which, there has got to be better graph making software out there in Linuxland...

      There is: gnuplot [gnuplot.info], an utterly wonderful little program. I use it all the time.
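For instance, a minimal gnuplot script for this kind of comparison might look like the following (the timings are invented placeholders, not the article's numbers):

```shell
# Write a gnuplot script that renders a labeled bar chart to a PNG.
cat > /tmp/fsbench.gp <<'EOF'
set terminal png size 800,400
set output 'fsbench.png'
set style data histograms
set style fill solid 0.8
set ylabel 'seconds (lower is better)'
plot '-' using 2:xtic(1) title 'untar kernel source'
ext2     24.1
ext3     31.7
reiserfs 28.4
xfs      26.9
jfs      34.2
e
EOF
# Render with:  gnuplot /tmp/fsbench.gp   (writes fsbench.png)
```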

  • by dduardo (592868) on Tuesday May 11, 2004 @10:16AM (#9116277)
    Right now I'm running reiserfs under gentoo, and I recently lost some rather important data, which has made me a little skeptical about using it on a production system. So I'm asking you guys: which filesystem do you think is good for a webserver that will be handling a medium-sized database and a significant number of transactions each day?
    • It works for mine! (Score:5, Interesting)

      by MarcQuadra (129430) * on Tuesday May 11, 2004 @10:26AM (#9116369)
      I've been using ReiserFS _EXCLUSIVELY_ since about 2.4.11 and I've never had a single problem. It's important to format with the defaults and not specify 'special' arguments to mkreiserfs or you can run into trouble.

      I've got three systems currently running reiser on Gentoo, from my PowerPC/SCSI/NFS/Samba file/print server to the ancient Compaq laptop with a 4GB drive. I've never had as much as a hiccup from ReiserFS.

      Under what circumstances did you lose data?
      • by Erwos (553607)
        "I've been using ReiserFS _EXCLUSIVELY_ since about 2.4.11 and I've never had a single problem. It's important to format with the defaults and not specify 'special' arguments to mkreiserfs or you can run into trouble."

        I've personally had several friends tell me of their data loss with ReiserFS. No one's arguing that it's a horrible file system, only that it's still experimental.

        The typical data loss situation is a power loss in the middle of a write. ReiserFS might be atomic in operation, but it still can't
      • by jcupitt65 (68879) on Tuesday May 11, 2004 @11:13AM (#9116834)
        I've lost stuff with reiser too. About a year ago I was fiddling with an nvidia driver and locked my machine up. When I rebooted, the directory tree that had a compile going on had vanished forever.

        My understanding is that journalled means the FS can't get into an inconsistent state and so does not need fsck-ing. It does not mean your data is safe from having the power pulled halfway through a write. If you want a super-safe home area you want some sort of logging FS, and these are all far slower than journalled (I think).

    • by molarmass192 (608071) on Tuesday May 11, 2004 @10:33AM (#9116427) Homepage Journal
      I use JFS and it's been pretty good thus far. It's been around for a long time and it's backed by IBM, so that makes it a pretty safe bet for production use in my mind. I used to use Ext3 before that and experienced a few data losses that caused me to make a switch. I can't comment on Reiser or XFS since I haven't used 'em.
    • by Mr Smidge (668120) on Tuesday May 11, 2004 @10:42AM (#9116522) Homepage
      I have run reiserfs on my fileserver ever since it existed, and like you I had it corrupt once and lost data.

      However, I pinned it down to a faulty drive (Quantum Fireball, hehe, which never acted up under NTFS/Win2k.. oh well). I was close to blaming reiserfs, but once I put in a quality hard drive and reinstalled, it's run like clockwork. Perfectly.

      There sure haven't been too many stability issues with reiserfs in my experience. Try another drive as a test and see if the same happens.
    • by Sxooter (29722) on Tuesday May 11, 2004 @11:47AM (#9117158)
      If you are using IDE drives with the write cache enabled, then NO journaling file system is safe on your system. This is because IDE drives with write cache enabled acknowledge fsync requests immediately, whether they've actually written the data out or not.

      If your data is important, either turn off the cache on your IDE drive or buy a SCSI drive, which won't lie about fsync. This is a problem for both linux and BSD.

      Later model IDE drivers and drives may be able to work properly with cache enabled, but not now. There are ongoing discussions on BOTH kernel hackers lists, BSD and Linux, about what to do, and no solution in sight.
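The cache toggle described above is an hdparm flag (device name is a placeholder; run as root, and expect a noticeable write-throughput hit):

```shell
hdparm -W0 /dev/hda    # disable the drive's on-board write cache
hdparm -W /dev/hda     # query the current write-cache setting
```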
      • Please read this [oracle.com]

        Just to show that it depends upon the application you need to run.

        Now, you will not hear me say that you should not use ReiserFS, for a desktop it is probably the best choice, but for servers, please think again.

        In addition to that, my own experiences with ReiserFS are mostly positive, especially on my 233 Mhz laptop, but I have also a big system with a Promise SX6000 raid controller, where I had a partition with ReiserFS, ext3 and JFS using Red Hat 9. Everytime I did file operations u

        • Nice article, thanks for the link.

          It looks like Oracle has gotten the same basic results as the PostgreSQL Global Development Group has. JFS and ext3 are generally the fastest under a database, while XFS and Reiser seem to be pretty slow.

          The odd thing here is that most other tests show XFS and Reiser as the kings. But the kind of random access databases do seems to be better handled by JFS / ext3.

          The problems with your RAID consistency, were these file system problems, or RAID level problems? If they
  • by Ignorant Aardvark (632408) <cydeweys.gmail@com> on Tuesday May 11, 2004 @10:22AM (#9116326) Homepage Journal
    While they do measure stuff like access times in ms, they don't mention recovery times (fsck), which are measured in ms for reiserfs and minutes for ext2. And they don't mention reinstallation times (measured in hours), which occur for ext2 a lot more often than for the journalling filesystems :-)
    • And they don't mention reinstallation times (measured in hours), which occur for ext2 a lot more often than for the journalling filesystems :-)

      What the fsck are you talking about? When the filesystem has problems you just need to run fsck, like on any unix system from the last decade and more, instead of doing the Windows thing of reformatting and re-installing. Certainly it's a scary thing, but you don't have to throw everything away just because you get a few easily fixable filesystem errors.

  • by codepunk (167897) on Tuesday May 11, 2004 @10:23AM (#9116331)
    Personally I couldn't care less which file system is fastest. The most important aspect of a file system is how stable it is with my important data. All the speed in the world means absolutely nothing if the file system is not stable. Ext3 has never ever let me down, so I intend to stick with it, at least until RedHat releases their version of GFS.
    • I agree, and after having bad blocks on a reiserfs system, I don't want to touch reiserfs again. reiserfs has no way to deal with bad blocks, only a hack that you have to implement every time your system does a fsck: you basically fake it so those bad blocks look like they belong to a file.
      Reiserfs +IBM HD's = hair loss
  • by Anonymous Coward on Tuesday May 11, 2004 @10:23AM (#9116333)
    It would be nice to see an unbiased comparison with NTFS (though it would be difficult seeing as how you can't get it to run natively on *nix afaik)
    • by Lxy (80823) on Tuesday May 11, 2004 @10:44AM (#9116552) Journal
      I agree, it would be an interesting test. If I'm not mistaken, NTFS is a journaling filesystem as well. Its databasing design is really interesting to me.. is there something similar to this in linux journaling filesystems?

      WinFS is designed to utilize the database feature, I'd be really curious about the results of searching for a file in NTFS/WinFS versus a linux file system. Hopefully NTFS linux support improves to the point where we can safely use it as a linux filesystem.

      Data recovery is my biggest issue right now with linux. It's damn near impossible to rescue data off a failed linux disk. Even just deleting a file is tough to recover from. NTFS has a nice selection of tools (albeit non-free) to safely and reliably recover data.
  • ext3 slowness (Score:5, Informative)

    by ReignStorm (247561) on Tuesday May 11, 2004 @10:23AM (#9116338)
    from Linux ext3 FAQ [sapienti-sat.org]
    Q: How can I recover (undelete) deleted files from my ext3 partition?

    Actually, you can't! This is what one of the developers, Andreas Dilger, said about it: In order to ensure that ext3 can safely resume an unlink after a crash, it actually zeros out the block pointers in the inode, whereas ext2 just marks these blocks as unused in the block bitmaps and marks the inode as "deleted" and leaves the block pointers alone. Your only hope is to "grep" for parts of your files that have been deleted and hope for the best.
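A toy illustration of that "grep and hope" approach, using an ordinary file in place of the raw partition device (on a real system you would grep /dev/hdaN directly, as root):

```shell
# Fake "partition image" containing remnants of a deleted file.
printf 'junk...BEGIN my important notes END...junk' > /tmp/disk.img
# -a treats binary data as text, -b prints byte offsets, -o prints only matches.
grep -abo 'important notes' /tmp/disk.img   # prints "16:important notes"
# With a real device you would then dd out the region around that
# offset and reassemble the pieces by hand.
```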
    • Re:ext3 slowness (Score:3, Insightful)

      by Speare (84249)

      Personally, I see this as a mild security benefit. If I delete something, I want it GONE. It's not as good as an idle-time thread that 11-pass nukes unallocated sectors at random, but it'll do for a start.

  • by jaylee7877 (665673) on Tuesday May 11, 2004 @10:27AM (#9116376) Homepage
    I've never understood why they don't move to ReiserFS, at least for new installations. With Fedora you have to use a kernel option to enable ReiserFS installation and with RHEL you can't install to a ReiserFS root, even the reiserfs kernel module is in their kernel-unsupported RPM which means don't call for help. I love RH but they need to get the ball rolling on this one!
    • I've never understood why they don't move to ReiserFS, at least for new installations.

      Because for most uses, it's not the best option. So, if you're going to junk ext2 compatibility, why would you go to Reiser?
    • by vorwerk (543034) on Tuesday May 11, 2004 @10:58AM (#9116699)
      Redhat 9 and Fedora Core 1 both ship with JFS support -- the graphical installer, however, does not offer it as an option.

      So, what I usually do when installing a new copy of Fedora or Redhat is to drop to a console, and use fdisk + mkfs.jfs manually. Then, when I get to the right page in the installer, I can simply set it to not reformat the partition but to use it as the "/" mount point, and voila -- my computer has JFS.
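The procedure described above boils down to a couple of console commands before reaching the installer's partitioning screen (device names are examples only):

```shell
fdisk /dev/hda       # create the target partition interactively
mkfs.jfs /dev/hda3   # format it as JFS
# Back in the graphical installer: assign /dev/hda3 to "/" and leave
# "format" unchecked so the JFS filesystem survives.
```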
    • I use ReiserFS, because on average it is a faster filesystem than EXT3 for most desktop purposes. I personally feel that EXT3, however, is a more reliable FS when it comes to recovering bad data on the hard disks. I recently had some failures due to a failing motherboard, which corrupted some data on my drive, but the reiserfsck tools have cryptic descriptions for failures and don't always seem to do the job right when it comes to recovering bad data. I've had reiserfsck work properly, but the few times
    • by flaming-opus (8186) on Tuesday May 11, 2004 @11:15AM (#9116858)
      Well, the simplest answer is that Stephen Tweedie is their filesystem guru, so why not use his baby in their OS. However, that's not the real answer. SCT is a clever guy, and mature enough to not let pride get in the way of the best possible system. (A similar question: why does Sun still use UFS?) For Redhat, EXT3 is probably the best general purpose filesystem, particularly for the root drive. Redhat is interested in selling on servers, where the root filesystem is not the bottleneck. You install the OS onto EXT3, which has decent performance and is very mature. Then you install your database / exported directories / mail spool / whatever onto the filesystem that is best for that job.

      Ext3 is a very close cousin to Ext2, which has been around for a very long time, and changes very slowly. Reiser has grown and changed a LOT in the last three years, including some metadata changes that affect on-disk structures. Though it has stabilized lately, Redhat is correct to be cautious. XFS and JFS, though very mature filesystems on other OSes, have only recently become tightly integrated with the Linux kernel. Though technically controlled by the linux kernel community, all three of these other filesystems are really controlled by little cabals of people within IBM, SGI, and then Hans Reiser. While these groups try to be transparent in their development process, Ext3 is very transparent in its development and direction.

      One other tremendous advantage that Ext3 inherits from Ext2 is a fast, versatile, and effective fsck program. Journals are great in the event of power failures. However, they do not protect against Windows, or a faulty fibre channel driver, or an uninformed sysadmin who accidentally writes over the first 1 MB of the disk. Fsck.Ext2 is one of the best around.
      • Sun uses UFS because it is still the best filesystem for a root filesystem.

        • It supports software mirroring of the root FS in solaris.
        • It's backwards and forwards compatible.
        • It does not require any license fees, since it's been worked on in-house.
        • It already supports logging, which provides the benefits of journalling and a substantial performance boost.
        • UFS also has alternate superblocks, like all modern filesystems. (Including JFS and XFS!)

        A more interesting question is: Why do Linux zealots incessantly rag on UFS?

    • You know - if it works, don't fix it!

      Ext3 is stable and there are a lot of useful tools available for it.

      If, for the end user, the difference is marginal, why bother to make things more difficult than necessary for yourself?

      Or maybe they've received an unusually high number of bug reports for ReiserFS and thus concluded it's not stable enough for them to push. After all, they want to be associated with (amongst other things) reliability.
  • by foolip (588195) on Tuesday May 11, 2004 @10:31AM (#9116410) Homepage
    What I'd like is a file system for which there is actually a defrag tool. Sure, ext2/3 may try to reduce fragmentation as much as possible, but when it happens (as is likely when you have a near-full disk) you've got little or no way of fixing it. Actually, there is a defrag tool for ext2 (not ext3), but my experiences with it are not the best -- it took forever and I don't know that it actually did anything to the disk (fsck didn't report a 0% fragmentation level afterwards, anyway).
    • XFS supports defragmentation
      http://oss.sgi.com/projects/xfs/manpages/xfs_fsr.h tml [sgi.com]
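If memory serves, the relevant tools ship with the XFS userspace utilities (device and mount point below are placeholders):

```shell
xfs_db -r -c frag /dev/hdb1   # read-only report of the fragmentation factor
xfs_fsr -v /data              # reorganize (defragment) files on the mounted fs
```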
    • Most people don't have their /home directory tied into their root partition. Often they are even on separate drives. Even if a defrag program were used, there would be very little benefit: you're not constantly writing new files in and out of the same space as the root partition in that respect.

      And even if someone does put their /home directory on the root partition, the modern Linux filesystems practically negate the need for defragmentation, due to their designs (as well as OS and drive design).
  • Mirror (be kind) (Score:5, Informative)

    by Helmholtz (2715) on Tuesday May 11, 2004 @10:35AM (#9116448) Homepage
    Here's a mirror of the article:

    http://www.gutenpress.org/links/LG/102/piszcz.html [gutenpress.org]
  • by mi (197448) <slashdot-2012@virtual-estates.net> on Tuesday May 11, 2004 @10:49AM (#9116608) Homepage
    From FreeBSD, that is... Would be nice to see that compared to Linux' FS-es. As in this earlier benchmark [usenix.org] (PDF).
  • ext3 options (Score:5, Informative)

    by kardar (636122) on Tuesday May 11, 2004 @11:01AM (#9116736)
    There are options, or settings, that you can choose for ext3; the default is slower, but it saves your data. Ext3 not only journals metadata, like XFS etc., but it can also journal data, making it the only filesystem that does that, if I understand this correctly.

    "data=writeback" mode does no data journaling, only metadata journaling, and you would probably see better performance here. Although, you could lose data in the event of a power outage (no fun). Same thing applies to XFS, JFS - you could lose data because only metadata is being journaled, not real data.

    "data=ordered" mode - inbetween, still no data journalling, but there are provisions that make it less likely to lose data in the case of a power problem. It has something to do with the way it journals the metadata and the way the filesystem interacts with the disk that makes is a little slower than data=writeback but also a little more secure than data=writeback if you get a power outage.

    "data=journal" mode - this journals data and metadata, and with the exception of a few situations, is the slowest. The least likely to lose your data, but also much slower.

    I am assuming, or at least it looks like, these tests were run with the default data=journal - so to be fair, they should have been run in data=writeback, or maybe even all three modes. Again, all you have to do is specify it in /etc/fstab and reboot, no big deal.

    It would probably be better to compare the ext3 in data=writeback mode.
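For reference, selecting a mode is just an /etc/fstab mount option (devices and mount points below are examples only):

```
/dev/hda2  /      ext3  defaults,data=ordered    1 1
/dev/hda3  /fast  ext3  defaults,data=writeback  1 2
/dev/hda4  /safe  ext3  defaults,data=journal    1 2
```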

    • Re:ext3 options (Score:4, Informative)

      by CmdrTHAC0 (229186) on Tuesday May 11, 2004 @11:35AM (#9117062)

      "I am assuming, or at least it looks like, these tests were run with the default data=journal - so to be fair, they should have been run in data=writeback, or maybe even all three modes. Again, all you have to is specify in /etc/fstab and reboot, no big deal."

      And where do you get the idea that this is the default? According to mount(8):

      ordered

      This is the default mode.

      What I really would have liked to see on this benchmark is ext3 on 2.6 with dir_index enabled. (Maybe this would also gain the benefit of the Orlov allocator? I haven't been paying attention to what's been backported.) ...In fact, I would have liked to see this whole thing on 2.6.
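For what it's worth, dir_index can be switched on after the fact with tune2fs (device is a placeholder; assumes a 2.6 kernel and a recent e2fsprogs):

```shell
tune2fs -O dir_index /dev/hda2   # enable hashed b-tree directory lookups
e2fsck -fD /dev/hda2             # -D optimizes/indexes existing directories
```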

  • HFS+ (Score:3, Interesting)

    by mbbac (568880) on Tuesday May 11, 2004 @11:01AM (#9116738)
    Does anyone have any statistics for how HFS+ [apple.com] (testable with Darwin [apple.com]) stacks up against these other filesystems?
  • by Gribflex (177733) on Tuesday May 11, 2004 @11:07AM (#9116781) Homepage
    Why is it that every benchmarking article contains the words "The results may surprise you?"

    Have any of you ever been surprised?
  • I did some too (Score:5, Informative)

    by Rufus211 (221883) <(rufus-slashdot) (at) (hackish.org)> on Tuesday May 11, 2004 @11:30AM (#9117007) Homepage
    I did a bunch of tests like this, but in 2.6 instead of 2.4. My conclusions:

    * Ext2 is still overall the fastest but I think the margin is small enough that a journal is well worth it
    * Ext3, ReiserFS, and XFS all perform similarly and almost up to ext2 except:
    o XFS takes an abnormally long time to do a large rm even though it is very fast at a kernel `make clean`
    o ReiserFS is significantly slower at the second make (from ccache)
    * JFS is fairly slow overall
    * Reiser4 is exceptionally fast at synthetic benchmarks like copying the system and untarring, but is very slow at the real-world debootstrap and kernel compiles.
    * Though I didn't benchmark it, ReiserFS sometimes takes a second or two to mount and Reiser4 sometimes takes a second or two to unmount, while all other filesystems are instantaneous.

    Whole thing available here [cmu.edu]
  • by Cthefuture (665326) on Tuesday May 11, 2004 @11:31AM (#9117017)
    Seriously, all the time we see benchmarks like this that are done on just one machine with one setup. Who knows if there is some unseen problem or bottleneck (in this particular case, the CPU is weak)?

    We need a large sample base: different types of chipsets, CPUs, hard drives, etc. Then we can better see the big picture, or at least see how the filesystems might perform on a system similar to your own.

    So I'm calling for a "filesystem benchmark" page where people can post their results from a standard set of benchmarks. Something where they can include their system specs/setup and everything.

    Then maybe we'll get useful information.
  • by hansreiser (6963) on Tuesday May 11, 2004 @12:20PM (#9117527) Homepage
    ReiserFS V3 is being obsoleted by V4, which is 2-5x faster.

    You can see benchmarks of it at www.namesys.com/benchmarks.html [namesys.com]

  • by harlows_monkeys (106428) on Tuesday May 11, 2004 @01:30PM (#9118463) Homepage
    Gah! The charts are shrunk so that the labels are hard to read, and the order of the results and color assigned to each FS seems to have been picked randomly for each chart, so once you squint and decipher one of them...you have to start over on the next.
  • by sudog (101964) on Tuesday May 11, 2004 @01:44PM (#9118610) Homepage
    A superior string of tests would be to simulate, to as close a degree as possible, a real, live high-use environment such as a scaled-up Perforce server, a supremely busy mail server, a giant busy database, or a massive web server.

    A single process running through 10,000 files isn't particularly realistic: since when does a scaled-up server sit there and hammer the filesystem with just a single process? What about contention? Caching?

    And what about recovery from errors? I didn't once see what happens if something blorts over random parts of the filesystems.. how does Reiser handle this? Ext3? XFS? Are there recovery tools in case of catastrophe?

    What about these file systems stuffed into RAID volumes of various stripe sizes and configurations?

    Straight deletes, creates, or modifications are useless because the only time you're going to be doing something like that is when you rm -rf * or build a new environment for.. something. Daily use, however, which eats up far more time (and thus would save the most user time if improved) is something which should have been better accounted for.
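    To make the point concrete, here's a minimal sketch (mine, not from the article) of the kind of multi-process churn described above: several workers creating, appending to, and deleting files in the same directory at once, instead of a single sequential pass.

```python
import multiprocessing
import os
import tempfile

def worker(dirname, worker_id, n_files=100):
    # create / append / delete, the way a busy server churns files
    for i in range(n_files):
        path = os.path.join(dirname, "w%d-%d.dat" % (worker_id, i))
        with open(path, "wb") as f:
            f.write(b"x" * 4096)   # create
        with open(path, "ab") as f:
            f.write(b"y" * 4096)   # modify
        os.unlink(path)            # delete

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        procs = [multiprocessing.Process(target=worker, args=(d, w))
                 for w in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        assert os.listdir(d) == []  # all workers cleaned up
```

    A real benchmark would time this under varying worker counts to expose directory-lock and journal contention, which a single-process test never touches.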
  • by WoodstockJeff (568111) on Tuesday May 11, 2004 @01:49PM (#9118670) Homepage
    Having read the article, it would have been nice if the bar graphs had been consistent... but, that's not the problem. As mentioned by others, a very important criterion for non-home users is damage tolerance, and, to an equal extent, the lack of any tendency for the driver to damage the file system (aka "stability"). And, in this day of databases, the ability to handle large files is increasingly important.

    I'm rapidly approaching the point where I need support for file sizes greater than 2GB. Quite frankly, most of what I've found about file sizes and file systems is 2 to 4 years old... Everyone's too concerned with speed!

    • large file support (Score:4, Informative)

      by David Jao (2759) * <djao@dominia.org> on Tuesday May 11, 2004 @03:30PM (#9119642) Homepage
      I'm rapidly approaching the point where I need support for file sizes greater than 2GB. Quite frankly, most of what I've found about file sizes and file systems is 2 to 4 years old... Everyone's too concerned with speed!

      Here's some real world information on the state of large file support in 2004. Filesystem driver support is the least of your worries -- almost any linux filesystem you can think of (except for maybe umsdos) supports over 2GB files at the kernel level. The Linux LFS [www.suse.de] page, dated April 2004, contains reasonably updated information on large file support in linux.

      The bigger problem is that many userspace applications cannot yet read or write files larger than 2GB. This failure arises from non-use of the LFS API, combined with plain unfortunate use of a signed 32-bit int in the wrong place. For instance, mkisofs will reject all input files larger than 2GB, and cdrdao will simply abort at 2GB if you try to rip a DVD larger than that. In some extreme cases there are programs that can't even handle large files coming off the network rather than the disk; one example is

      wget http://mirror.linux.duke.edu/pub/fedora/linux/core/test/1.92/i386/iso/FC2-test3-i386-DVD.iso

      which fails spectacularly on any x86 linux system (hint: the DVD is not really 84MB in size). In general, the "core" system utilities such as dd, cp, mv, cat are fully compatible with large files whereas third party applications are much more hit-or-miss.

      Even today, by far the most practical solution to large file woes is to migrate to a 64-bit system, the most affordable of which is AMD64 by a long shot. I've been using an Athlon 64 system for the past few weeks and it has handled large files perfectly in all respects so far.
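      The signed 32-bit truncation described above is easy to demonstrate. This small Python sketch (mine, not from the parent post) reinterprets a 3 GiB file size the way a program storing it in a signed 32-bit int would see it:

```python
import struct

real_size = 3 * 1024**3  # a 3 GiB file, e.g. a DVD image

# Reinterpret the low 32 bits as a signed 32-bit int -- what a
# non-LFS-aware program effectively does with this size.
truncated, = struct.unpack("<i", struct.pack("<I", real_size & 0xFFFFFFFF))

print(real_size)  # 3221225472
print(truncated)  # -1073741824 -- a "negative" size, so checks like size > 0 fail
```

      A size that comes back negative (or wrapped to something tiny, like the "84MB" wget reports above) is the classic symptom of this bug.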

  • by aggieben (620937) <aggieben@ g m a i l . com> on Tuesday May 11, 2004 @01:54PM (#9118724) Homepage Journal
    I'd like to see a set of benchmarks regarding stability and fidelity of the various filesystems. Which ones are the most durable? Which ones get corruption the most, and what are their corruption/data-loss ratios? Performance isn't the end of the world for me....but losing data *is*.
  • jackass article (Score:5, Insightful)

    by jusdisgi (617863) on Tuesday May 11, 2004 @02:34PM (#9119109)

    Wow...I'm really surprised that I don't see anyone else around here bashing this "benchmarking" as totally ridiculous. Get it together, people! I mean, how does a group of folks that typically pride themselves on shredding the foolish articles that come by miss these beauties:

    1) This guy goes out with the stated goal of evaluating real-world performance...so he starts by throwing out all real benchmarks. Of course, those tools are designed by experts to try to represent real-world performance, but who cares, right? Instead, our jackass throws together a bundle of random operations and times them. No thought is apparent in the choices of the operations, and no discussion is given as to why the choices were made.

    2) The conclusions are drawn by simply adding the times of all the tests together. If you haven't figured out why this is dumb as a rock, let me explain: test #1 took 23-40 seconds, while test #2 took .02-.04 seconds. So, in his conclusion, test #1 was weighted 1000 times as heavily as test #2. I don't know about you all, but I for one don't feel that touching speed is 1000 times as important as finding speed.

    3) Even if he had normalized all the times so that the mean in each test was the same and then added those, he would still be wrong...various tests ought to be weighted differently, because real-world usage doesn't do all of these things the same amount. That said, the weight given to the tests needs to be well thought out and planned, rather than arbitrarily assigned (accidentally) without paying any attention. Interestingly enough, this sort of purposeful weighting of tests is exactly the sort of thing done by the real benchmarking tools that this idiot threw away.

    4) Perhaps this one isn't as important...but this guy can't make a graph to save his life. Half the bar graphs put time on X and the other half put time on Y. Graphs that obviously should be bar graphs are made into dot-line ones. The text is blurry and you can't tell the colors in the key.

    Anyway, I still don't get why everybody around here seems to have missed all this...it was painfully obvious to me when I just took a cursory glance at it.
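    The normalization argued for in points 2 and 3 can be sketched in a few lines. The timing numbers here are hypothetical (chosen only to match the ~1000:1 magnitude gap between tests in the article), not the article's actual data:

```python
# Hypothetical timings (seconds): one test in the tens of seconds,
# another in hundredths of a second, as in the article.
times = {
    "ext3":     {"touch_10000": 33.0, "find_10000": 0.04},
    "reiserfs": {"touch_10000": 23.0, "find_10000": 0.02},
    "xfs":      {"touch_10000": 40.0, "find_10000": 0.03},
}

def naive_score(fs):
    # What the article does: sum raw times, so the slow test
    # dominates the result roughly 1000:1.
    return sum(times[fs].values())

def normalized_score(fs, weights=None):
    # Divide each time by that test's mean across filesystems so every
    # test contributes on the same scale, then apply explicit weights
    # (equal here; ideally chosen to reflect a real workload mix).
    weights = weights or {t: 1.0 for t in times[fs]}
    score = 0.0
    for t in times[fs]:
        mean = sum(times[f][t] for f in times) / len(times)
        score += weights[t] * times[fs][t] / mean
    return score

# With raw sums xfs looks worst; after normalization ext3 does,
# because its find time is far above the mean.
print(normalized_score("ext3") > normalized_score("xfs"))  # True
```

    Note how the ranking flips once the tiny-magnitude test is allowed to count: exactly the effect the raw-sum conclusion hides.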
