Data Storage | Software | Linux

On the State of Linux File Systems

kev009 writes to recommend his editorial overview of the past, present and future of Linux file systems: ext2, ext3, ReiserFS, XFS, JFS, Reiser4, ext4, Btrfs, and Tux3. "In hindsight it seems somewhat tragic that JFS or even XFS didn't gain the traction that ext3 did to pull us through the 'classic' era, but ext3 has proven very reliable and has received consistent care and feeding to keep it performing decently. ... With ext4 coming out in kernel 2.6.28, we should have a nice holdover until Btrfs or Tux3 begin to stabilize. The Btrfs developers have been working on a development sprint and it is likely that the code will be merged into Linus's kernel within the next cycle or two."
This discussion has been archived. No new comments can be posted.

  • ZFS!! (Score:3, Interesting)

    by Anonymous Coward on Saturday November 29, 2008 @05:04PM (#25927639)

    What Sun needs to do is release ZFS under a proper license so we can finally have one unified filesystem. Yes, we can use it under FUSE, but that brings unnecessary overhead and problems. It would be nice to be able to transport disks around, the way we can with FAT(32), without having to worry about whether another OS will be able to read them. On top of that: block checksumming, high performance, integrated SMB/NFS/iSCSI support, and a volume AND partition manager in one.

    Come on Sun! Are you listening??
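
    For the curious, that integration looks roughly like this in practice. A minimal sketch using the standard ZFS tools (the pool name "tank" and the device names are placeholders):

        zpool create tank mirror sda sdb   # volume manager and filesystem in one step: a mirrored pool
        zfs create tank/home               # a new filesystem inside the pool, no partitioning needed
        zfs set checksum=sha256 tank       # per-block checksums (fletcher is the default algorithm)
        zfs set sharenfs=on tank/home      # the integrated NFS export mentioned above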

  • by bboxman ( 1342573 ) on Saturday November 29, 2008 @05:18PM (#25927725)

    Just my 2 bits. As a user of Linux in a software/algorithm context, my personal beefs with ext3 / the current kernel line are:

    1) IO priority isn't linked to process priority, or at least not in a decent manner. It is all too easy to lock up the system with one IO-heavy process (or several of them), hurting even high-priority processes. Because the IO call is handled at the system level (buffering, etc.), it garners a relatively high priority (possibly falling under the RT scheduler), and as a result IO-heavy processes can choke other processes.

    2) ext3+NFS simply sucks with very large numbers of files. I used to routinely have directories with 500,000 files (it's very easy to reach such counts via a Cartesian product of options). The result is simply downright appalling performance.
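
    A common mitigation for huge directories on ext3 is hashed directory indexes. A minimal sketch, assuming /dev/sdX is the unmounted ext3 volume:

        tune2fs -O dir_index /dev/sdX   # enable hashed b-tree directory lookups
        e2fsck -fD /dev/sdX             # rebuild and optimize existing directories

    This helps lookups within a directory; the NFS side of the pain needs separate tuning.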

  • by r00t ( 33219 ) on Saturday November 29, 2008 @05:27PM (#25927779) Journal

    We're checksumming free disk space. That's dumb. It makes RAID rebuilds needlessly slow.

    We're unable to adjust redundancy according to the value that we place on our data. Everything from the root directory to the access time stamps gets the same level of redundancy.

    The on-disk structure of RAID (the lack of it!) prevents reasonable recovery. We can handle a disk that disappears, but not one that gets some blocks corrupted. We can't even detect that in normal use; it requires reading all disks. We have extremely limited transactional ability; all we get for transactions is a write barrier. And there is no way to map from RAID troubles (not that we'd detect them) to higher-level structures.

    With an integrated system, we could do so much better. Sadly, it's blocked by an odd sort of kernel politics. Radical change is hard. Giving up the simplicity of a layered approach is hard, even when it is obviously inferior. There is this idea that every new kernel component has to fit into the existing mold, even if the mold is defective.
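
    To make the detection point concrete: with Linux md RAID today, finding silently corrupted blocks means explicitly reading every member disk. A sketch, assuming an array at /dev/md0:

        echo check > /sys/block/md0/md/sync_action   # scrub: read all members and compare
        cat /sys/block/md0/md/mismatch_cnt           # count of inconsistencies found

    Even then, a nonzero count tells you nothing about which higher-level structures were hit.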

  • by tytso ( 63275 ) * on Saturday November 29, 2008 @05:28PM (#25927791) Homepage

    NFS semantics require that data be stably written on disk before the client's RPC request can be acknowledged. This can cause some very nasty performance problems. One of the things that can help is to use a second hard drive to store an external journal. Since the journal is only written during normal operation (you only read it when recovering after a system crash), and the writes are contiguous on disk, this eliminates nearly all of the seek delays associated with the journal. If you use data journalling, so that data blocks are written to the journal, the fact that no seeks are required means that the data can be committed to stable storage very quickly, and thus will accelerate your NFS clients. If you want things to go _really_ fast, use battery-backed NVRAM for your external journal device.
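
    A minimal sketch of that setup (device names are placeholders; /dev/sdb1 stands in for the second disk or NVRAM device):

        mke2fs -O journal_dev /dev/sdb1           # format the dedicated external journal device
        mkfs.ext3 -J device=/dev/sdb1 /dev/sda1   # create the filesystem pointing at it
        mount -o data=journal /dev/sda1 /export   # full data journalling, as described above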

  • by Bandman ( 86149 ) <bandman.gmail@com> on Saturday November 29, 2008 @05:29PM (#25927797) Homepage

    You seem very knowledgeable regarding filesystems in general. I'm interested in learning more about filesystems and how they work. To give you an idea of where I am, I believe I know what blocksize is, but I don't know what an extent is, and how it relates to performance (or why the grandparent would like extents several megabytes large).

    What resources would you suggest to people who are looking to learn more?

  • Re:ZFS!! (Score:0, Interesting)

    by Anonymous Coward on Saturday November 29, 2008 @05:56PM (#25927965)
    Malicious licensing at its best; help anyone with a corporate interest, hurt anyone with a Free Software interest. Sun pulled off the perfect fuck you to the existing Free Software community when it released ZFS and DTrace.

    They didn't even do that to Java's licensing, which while not being technically as advanced and unique as the above mentioned software, is probably worth a whole hell of a lot more.
  • Re:Choose ONE! (Score:2, Interesting)

    by Anonymous Coward on Saturday November 29, 2008 @05:56PM (#25927967)

    because "one size does not fit all." Some file systems handle better in say a database enviroment handling large number of small files while other handle better in something else. If you want a standard fs for general use, that's what ext2 is for as well as ext3(which is backwards compatable with ext2). Can you use another fs other then what most distro decided upon, sure, that's what freedom is about. New implementations are created because they are created with different goals in mind.

    Windows is not without it's own choices mind you either, fat(fat16), fat32, ntfs, WinFS(cancelled). As time pass, even microsoft attempts to create a new and improved fs (key-word: attempt). Sure they tend to force the latest fs on you but that's microsoft way VS linux way of choice.

  • Re:ZFS!! (Score:5, Interesting)

    by dokebi ( 624663 ) on Saturday November 29, 2008 @06:00PM (#25927991)

    UFS (from the BSDs) is under the most liberal license possible, yet it's definitely not the most widely used. FAT32 is patented by MS, and it is the most widely used. So, do you still think the problem is the GPL?

  • by Britz ( 170620 ) on Saturday November 29, 2008 @06:02PM (#25928007)

    Maybe not for a desktop machine, but for servers I like to use XFS. That started back when XFS was the first (and then only, AFAIR) fs that supported running on softraid. That was not so long ago, and CPU cycles were already cheap enough on x86 that softraid was a pretty nice solution for small servers.

    For small servers I have not changed that setup (XFS on softraid level one on two cheap drives) ever since.

    I guess for the big machines it might be very different. I am pretty happy with XFS as it is.
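
    For reference, the small-server setup described above is only a few commands. A sketch with placeholder device names and mount point:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mkfs.xfs /dev/md0    # XFS directly on the software RAID1 device
        mount /dev/md0 /srv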

  • by diegocgteleline.es ( 653730 ) on Saturday November 29, 2008 @06:04PM (#25928017)

    The CFQ IO scheduler has been able to link IO priority with process priority for ages. But there's a performance issue in the ext3 journaling code that has been affecting many people for some time now.
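
    Concretely, with CFQ active, ionice can demote an IO-heavy job. A sketch (the tar command and the PID are just examples):

        cat /sys/block/sda/queue/scheduler           # confirm cfq is the selected scheduler
        ionice -c3 tar czf /backup/home.tgz /home    # idle class: runs only when the disk is otherwise idle
        ionice -c2 -n7 -p 1234                       # or demote an already-running PID to lowest best-effort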

  • Re:ZFS!! (Score:5, Interesting)

    by atrus ( 73476 ) <atrus.atrustrivalie@org> on Saturday November 29, 2008 @06:26PM (#25928141) Homepage
    Because a block-based filesystem with no notion of the underlying storage is "dumb". ZFS fixes those problems.

    Want to create a new filesystem in ZFS? Sure, no problem. You don't even need to specify a size; it will use whatever space the storage pool has available, no pre-allocation needed. How about removing one? OK, it's removed. Yes, it only took a second to do that. A traditional LVM + FS system can't do that - you need to resize, move, and tweak filesystems when doing any of the above operations - time consuming and limited.

    And if you're asking why you'd want to create and remove filesystems on the fly, there is one word for that: snapshots. It's quite feasible to generate snapshots many times per day for a ZFS-backed fileserver (or even database server). Someone created a file at 9am and then accidentally nuked it before lunch? Don't worry, it's still present in the 10am and 11am snapshots. All online, instantly available.
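
    The hourly-snapshot workflow is just a couple of commands. A sketch, with tank/home as a placeholder dataset and a made-up file name:

        zfs snapshot tank/home@0900                  # instant, near-zero cost
        zfs snapshot tank/home@1000
        ls /tank/home/.zfs/snapshot/1000/            # the "nuked" file is still visible here
        cp /tank/home/.zfs/snapshot/1000/report.txt /tank/home/   # restore just that one file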

  • Reiser4 (Score:5, Interesting)

    by Enderandrew ( 866215 ) <enderandrew&gmail,com> on Saturday November 29, 2008 @06:28PM (#25928149) Homepage Journal

    Hans was a jerk who was difficult to work with, and now he is a convicted murderer. That doesn't change the fact that Reiser4, as is, may be the best desktop file system for Linux users, even with plenty of room for improvement.

    There are filesystems in development like Btrfs and Tux3 that look promising, but why should Reiser4 be abandoned? It is GPL. Anyone can pick it up and maintain it, or fork it.

    Does anyone know anything about the future of Reiser4?

  • by Piranhaa ( 672441 ) on Saturday November 29, 2008 @06:29PM (#25928153)

    That's the goal of ZFS. Each block is verified against a 256-bit checksum on every access. It incorporates a volume manager and partition manager in one tool, and knows where data is actually written. On rebuilds it only repairs data that is actually there, which saves significant time. You should also set up weekly or bi-weekly scrubs (once a month for enterprise-grade drives), which read EVERY written block and verify it. This ensures that each block is still good, that none are suffering from flipped bits, and that your disk isn't slowly failing on you.
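
    A scrub is a single command, easy to drop into cron. A sketch, assuming a pool named tank:

        zpool scrub tank      # read and verify every allocated block in the background
        zpool status tank     # shows scrub progress plus any repaired or unrecoverable errors

    A crontab line like "0 3 1,15 * * /sbin/zpool scrub tank" would run it twice a month.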

  • Re:ZFS!! (Score:5, Interesting)

    by ArbitraryConstant ( 763964 ) on Saturday November 29, 2008 @06:43PM (#25928233) Homepage

    > But hey maybe I'm missing something, why not improve or create a replacement for LVM instead of including volume management in the filesystem?

    Maybe. But it would be a lot harder.

    Think about LVM snapshots for example. LVM allocates a chunk of the disk for your filesystem, and then a chunk of disk for your snapshot. When something changes in the parent filesystem, the original contents of that block are first copied to the snapshot. But if you've got two snapshots, it has to be copied to two places, and each snapshot needs its own space to store the original data. Because ZFS/BTRFS/etc are unified, they can keep the original data for any number of snapshots by the simple expedient of leaving it alone and writing the new data someplace new.

    LVM can grow/shrink filesystems, but filesystems deal with this somewhat grudgingly. LVM lacks a way to allocate blocks to filesystems dynamically such that they can be given back when no longer in use; ZFS/BTRFS/etc can do this completely on the fly. LVM relies on an underlying RAID layer to handle data integrity, but most RAID doesn't do this very well. BTRFS is getting a feature that allows it to handle seeky metadata differently from data (e.g., use an SSD as a fast index into slow but large disks).

    It is conceivable that an advanced volume manager could be created that does all these things and all the rest (eg checksumming) just as well... but I think the key point is that this isn't something you can do without a *much* richer API for filesystems talking to block devices. They'd need to be able to free up blocks they don't need anymore, and have a way to handle fragmentation when both the filesystem and the volume manager would try to allocate blocks efficiently. They'd need substantially improved RAID implementations, or they'd need to bring the checksumming into the volume manager. I'm not saying it can't be done, but doing it as well as ZFS/BTRFS/etc when you're trying to preserve layering would be very tough. At a minimum you'd need new or substantially updated filesystems and a new volume manager of comparable complexity to ZFS/BTRFS/etc. I understand the preference for a layered approach, but I just don't think it's competitive here.
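
    The contrast shows up right in the commands. An LVM snapshot must reserve its copy-on-write area up front, while a ZFS snapshot just pins existing blocks. A sketch with placeholder volume and pool names:

        lvcreate -s -L 2G -n homesnap /dev/vg0/home   # reserve 2G; if changes overflow it, the snapshot dies
        zfs snapshot tank/home@now                    # no reservation, any number of snapshots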

  • by millosh ( 890186 ) <millosh@millosh.org> on Saturday November 29, 2008 @06:52PM (#25928269) Homepage
    Whenever I have to install some server, I have a metaphysical question: ext3 or reiserfs?

    Ext3 has a lot of advantages, including the possibility of fast file recovery (undelete). While it is not needed often, at least once per year I get such a demand. On the other hand, undelete methods for reiserfs are very problematic.

    Then again, my servers usually stay up for a year or more. That means most of the company's employees may as well take a day of vacation whenever I want to reboot a machine with a 4TB file system, because of the forced fsck.

    Any good idea to solve those two issues with one file system?
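
    Worth noting: much of that reboot downtime is ext3's periodic forced fsck, which can be disabled on a filesystem whose journal you trust. A sketch, with /dev/sdX as a placeholder:

        tune2fs -c 0 -i 0 /dev/sdX   # disable mount-count and time-interval forced checks
        tune2fs -l /dev/sdX          # verify: maximum mount count becomes -1, check interval 0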
  • Re:Reiser4 (Score:5, Interesting)

    by Ant P. ( 974313 ) on Saturday November 29, 2008 @06:55PM (#25928281)

    Reiser4 is still being maintained, by one ex-Namesys person IIRC.
    The main problem is the Linux kernel devs - they were too busy finding reasons to keep it out of the kernel (I can agree with their complaints about code formatting, but beyond that they descend deep into BS-land) to actually improve it. From the outside it sounds a lot like the story of the RSDL scheduler: completely snubbed because it stepped on the toes of one kernel dev and his pet project.

  • by grimJester ( 890090 ) on Saturday November 29, 2008 @07:03PM (#25928329)
    As a result, many benchmarking attempts are very misleading, because they are often done by a filesystem developer who, consciously or unconsciously, wants their filesystem to come out on top, and there are many ways of manipulating the choice of benchmark or benchmark configuration to make sure that happens.

    Wouldn't it be logical to assume a filesystem developer has an idea of what the workload and hardware will be like _before_ writing his filesystem, and then picks a benchmark that suits his ideas of what a filesystem is supposed to do? No manipulation necessary, intentional or otherwise.
  • by Anonymous Coward on Saturday November 29, 2008 @07:32PM (#25928491)

    Sure. If you're a fan of catastrophic data loss, turn on async.

    The best NFS (v3) client mount options for a linux server with linux clients vary heavily with what you're actually doing and what you have, but the following is a good start:

    sync,hard,intr,nfsvers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=10

    sync: don't use async! Ever!
    hard: better than soft!
    intr: allow interruptions despite hard! handy for failure situations!
    nfsvers=3: force nfsv3. You DO NOT want to use nfsv2!
    rsize,wsize: increase block size to something decent.
    acregmin/max: Don't cache regular files. Just don't.
    acdirmin/max: Cache directories for a _small_ time. Necessary for adequate performance, really, but the smaller you can keep it while things stay bearable, the better.

    Yes, this will make server load pretty high. Get a beefy server. They're cheap nowadays.
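
    In /etc/fstab form, the above comes out looking something like this (server name and mount point are placeholders):

        server:/export  /mnt/export  nfs  sync,hard,intr,nfsvers=3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=10  0  0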

  • Re:ZFS!! (Score:5, Interesting)

    by Kent Recal ( 714863 ) on Saturday November 29, 2008 @07:59PM (#25928641)

    I hear you and I'm sure the filesystem developers have the same ideas in their heads.
    The problem is that there are some really hard problems involved with these things.

    In the end everybody wants basically the same thing: A volume that we can write files to.
    This volume should live on a pool of physical disks to which we can add and remove disks at will and during runtime.

    The unused space should always be used for redundancy, so when our volume is 50% full then we'd expect that 50% of the disks (no matter which) can safely fail at any time without data loss.

    Furthermore we don't really want to care about any of these things. We just want to push physical disks into our server, or pull them, and the pool should grow/shrink automagically.
    And of course we want to always be able to split a pool into more volumes, as long as there's free space in the pool we're splitting from. Ideally without losing redundancy in the process.

    We want all these things, and on top we want maximum IOPS and maximum linear read/write performance in any situation. Oh, and we won't really be happy until a pool can span multiple physical machines (auto re-syncing after a network split and working as expected over really slow and unreliable networks), too.

    ZFS is a huge step forward in many of these regards and there's a whole industry built solely around these problems.
    Only time will tell which of these goals (and the ones that I omitted here) can really be achieved and how many of them can be addressed in a single filesystem.

  • by pedantic bore ( 740196 ) on Saturday November 29, 2008 @07:59PM (#25928645)

    NFS semantics require that data be stably written on disk before the client's RPC request can be acknowledged.

    This hasn't been true since NFSv2. We're at NFSv4 now...

  • by lysergic.acid ( 845423 ) on Saturday November 29, 2008 @11:08PM (#25929785) Homepage

    On Windows I can see the file extension of every file on my hard drive. I determine the file type based on the same attribute that my shell does. If I get a file attachment or am browsing a directory, I can immediately distinguish executables from non-executables. If I'm looking for a PNG image, I just look for the appropriate icon and the .png extension, and I can double-click the icon and open the image without the possibility of accidentally running a malicious executable.

    However, a lot of people's Windows systems have Explorer configured to hide known extensions. The shell still uses file extensions to determine file format, but those users are relying solely on the file icon to indirectly determine file type. And since executable files can have embedded icons, it's very easy for an attacker to give a file a deceptive name and icon, disguising a virus or trojan as an image or text document.

    Sure, the user could right-click on the file and select "Properties" to look at the "Type of file:" field. But doing that for every single file you want to examine is very tedious and time-consuming; most people simply aren't going to go through that kind of hassle. Imagine you have to examine a directory with 100 images in it. Are you going to open the properties dialog 100 times, once for each and every file?

    Using metadata or magic numbers to determine file format would have the same drawback. How would you determine the format of a file at a glance using metadata? You wouldn't have a safe/accurate and intuitive means of determining file type.
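
    For reference, magic-number detection is what the Unix file(1) command does: it reads the first bytes of the file instead of trusting the name. A sketch (file names made up, output approximate):

        file holiday.png
        # holiday.png: PNG image data, 800 x 600, 8-bit/color RGB, non-interlaced
        file totally_a_picture.png
        # totally_a_picture.png: MS-DOS executable PE for MS Windows (GUI)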

  • Using metadata or magic numbers to determine file format would have the same drawback. How would you determine the format of a file at a glance using metadata? You wouldn't have a safe/accurate and intuitive means of determining file type.

    I don't think that there is a 100% "safe and accurate" way to display the file type, assuming you are depending on a possibly-hostile file to supply the information in the first place. There are, however, a few things that an operating system can do to make life safer for users:

    1) Clearly mark executable files. Have some visual indication whether a file is set to be executable (this, of course, assumes that your operating system has an execute bit; if it doesn't, that's a bigger problem). This indication should be consistent, universal, and impossible to override with metadata or custom icons. It should apply both to CLI shells and GUIs. (Although not necessarily in the exact same way; however my personal preference for such an indicator, which is putting the file name in bold, would work both in a GUI and CLI environment.)

    2) Don't use the same action to execute as to open. Using the same action (the double-click) both to "run" and to "open" -- which are two very different actions -- is probably responsible for the vast majority of user-propagated malware today. I would love to see an operating system rigorously enforce a separate 'run' action, so that a user clicking on what appears or claims to be a data file (intending to open an application and read that file) could not accidentally execute it.

    3) Break the filesystem into 'data' and 'executable' sections, and bar files on the 'data' sections from being marked as executable under any circumstances. I don't think this would be as effective as #2, but it would probably involve less user retraining. In order for content to be executed, it would have to be copied or installed onto the executable partition (which in normal operation could even be mounted read-only).

    You could do all of this with the data-type indicator as part of the file name, or as a separate piece of metadata; it doesn't really matter. There's no 'safety' advantage either way; it's just that keeping it in the file name is considered very ugly by a lot of people (myself included). I'm personally a fan of the way the Mac used to do it, with a two-part code (one for the file's actual type, the other for the application that either created it or should be used to open it), except that unlike on the Mac, it should be easily editable by the user, and a lot of standardization and interoperability challenges would have to be solved. I'll be surprised if I see the filename.ext thing die in my lifetime, honestly. It's just too entrenched.
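
    Point #3 above exists in embryonic form on Unix already: mount flags can bar an entire data partition from executing anything. A sketch with placeholder devices and mount points:

        mount -o noexec,nosuid,nodev /dev/sdb1 /data   # nothing under /data can be executed
        mount -o remount,ro /usr                       # and the executable area can be kept read-only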

  • Re:Reiser4 (Score:3, Interesting)

    by Enderandrew ( 866215 ) <enderandrew&gmail,com> on Sunday November 30, 2008 @01:09PM (#25934017) Homepage Journal

    Honestly, I have never lost data with Reiser4, and I have a toddler who loves pushing power buttons. Yet every single time I have tried ext3 or ext4, I have lost data within weeks.

    Reiser4 recovers from "dirty" shutdowns better than ext3.

    One time after a dirty shutdown, e2fsck decided to wipe my /etc directory and put all the files in lost+found.
