Data Storage Software Linux

The Hairy State of Linux Filesystems

RazvanM writes "Do OSes really shrink? Perhaps the user space (MySQL, CUPS) is getting slimmer, but what about the internals? Using the number of external calls between the filesystem modules and the rest of the Linux kernel as a metric, I argue that this is not the case. The evidence is a graph that shows the evolution of 15 filesystems from 2.6.11 to 2.6.28, along with the current state (2.6.28) of 24 filesystems. Some filesystems that stand out: nfs, for leading in both number of calls and speed of growth; ext4 and fuse, for their above-average speed of growth; and 9p, for its roller-coaster path."
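
As a rough sketch of one way such a count could be gathered (not necessarily the author's actual methodology), a filesystem module's undefined symbols can serve as a crude proxy for its external calls into the rest of the kernel. The example below assumes a built kernel tree with the filesystem compiled as a module and binutils' nm on the PATH; the path fs/ext4/ext4.ko is only illustrative.

    /* count_extcalls.c: a rough sketch, not the article's actual methodology.
     * Counts undefined symbols in a built filesystem module as a crude proxy
     * for "external calls into the rest of the kernel". */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        /* The default path is only an example; pass any .ko file as argv[1]. */
        const char *module = (argc > 1) ? argv[1] : "fs/ext4/ext4.ko";
        char cmd[512];
        char line[512];
        long count = 0;
        FILE *p;

        /* nm --undefined-only lists symbols the module references but does
         * not define itself, i.e. everything it resolves against the rest of
         * the kernel (functions and data alike, so this overcounts calls). */
        snprintf(cmd, sizeof(cmd), "nm --undefined-only %s", module);
        p = popen(cmd, "r");
        if (p == NULL) {
            perror("popen");
            return EXIT_FAILURE;
        }
        while (fgets(line, sizeof(line), p) != NULL)
            count++;
        pclose(p);

        printf("%s: %ld undefined (external) symbols\n", module, count);
        return 0;
    }
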
  • by dbIII ( 701233 ) on Wednesday February 11, 2009 @05:13PM (#26818955)
    In the case of NFS for instance, hasn't there been a performance improvement? Isn't that the thing that matters?
  • What? (Score:5, Interesting)

    by svnt ( 697929 ) on Wednesday February 11, 2009 @05:17PM (#26819047)

    While OSes may be "slimming down" as the article says, what does the removal of standard db packages from Ubuntu have to do with filesystem-related kernel calls?

    The article doesn't seem to mention the possibility that more functionality may be pushed into the kernel from userspace, which might make sense in other situations, but I don't think that argument would hold up here.

    I am struggling to make the connection between the summary and the so-called article. The fact that they are not stripping/locking fs functionality means that OSes aren't shrinking? That's the hypothesis?

  • by Smidge207 ( 1278042 ) on Wednesday February 11, 2009 @05:23PM (#26819137) Journal

    Yes, that sounds like "slimming down" to me. At least, I can understand what the article is trying to get at. It seems like we went through a period of early operating system development over the past few decades where the stress was on throwing everything in, including the kitchen sink. It's at least interesting that Linux distros are putting some effort into pulling excess functionality out of the default installation while computers continue to become bigger, faster, stronger.

    And I think it is pointing at something similar to what is going on with OSX, and it is a trend. We've hit some kind of a milestone, I think, where most of our computer functionality is "good enough" for most of what we actually use them for. Something about the development of computer systems right now reminds me of... whenever it was... 10 years ago?... when people were using their computers mostly for word-processing, and their computers were good enough for that, so there wasn't a huge drive to accomplish a particular thing. Then people discovered that they could rip CDs into MP3s and share them, and there grew this whole new focus on multimedia and the Internet.

    Now we have those things handled, and it seems like the answer to "what's next?" is making both hardware and software smaller and less bloated. We're getting smart phones that are becoming something more like a real portable computer, and we're getting things like netbooks. I predict you're also going to start seeing better use of embedded systems, like maybe DVRs are just going to be built into TVs soon. Not sure on that one, but I think you're going to see things shrinking, devices being consolidated, and a renewed focus on making things more efficient and refined.

    Meh. It's rambling time...

    =Smidge=

  • by Kjella ( 173770 ) on Wednesday February 11, 2009 @05:29PM (#26819265) Homepage

    Ever since.... well, the first abstraction, there's been a holy flamewar of abstractions versus spaghetti code. One side of the war claims that by building enough layers, each layer is simple and well understood, with well-defined interactions, and thus fairly bug-free. The other side claims that abstractions wrap things in so many layers that the whole code is like an onion without substance, separating cause from effect so it's difficult to grasp, and that these layers seriously hurt performance. The answer is usually to keep it simple if possible, complex if necessary. If calls went up and performance went up, it's probably necessary, but in isolation an increase in cross calls would be a bad thing.

  • Re:Where's NTFS ? (Score:5, Interesting)

    by Kjella ( 173770 ) on Wednesday February 11, 2009 @05:52PM (#26819551) Homepage

    Sometimes I wish there was a way to make my own meta-mod, like "don't include mods from the people that modded this up ever again". The same copy-paste has been in tons of stories now, and it's not funny anymore because it's the EXACT same thing. I'd even rather hear one more variation on our insensitive clod overlords from Soviet Russia.

  • Re:Thoughts (Score:4, Interesting)

    by morgan_greywolf ( 835522 ) on Wednesday February 11, 2009 @05:54PM (#26819583) Homepage Journal

    In fact, if you think about it, the greater the number of different functions a filesystem driver uses, the less functionality it needs to have within itself. I also don't think the number of external calls is a significant measure of anything related to the size or performance, really. It all depends on what calls are being made and for what purpose.

    If anything, as you imply, it's a measure of complexity. But even that might not really be the case if you stop and think about it. As more stuff is abstracted out, less code goes into the filesystem itself, and the simpler, really, not more complex, that filesystem driver becomes.

    I think this was a really poor choice of metric and that almost renders this entire article moot.

  • Re:Yes/no (Score:5, Interesting)

    by Yokaze ( 70883 ) on Wednesday February 11, 2009 @05:55PM (#26819599)

    > The number of calls in the interface do matter because they increase complexity.

    That is only true if similar functionality is provided and the function calls are of similar complexity (e.g. number of parameters, complexity of arguments).

    To my limited knowledge, over time work has been done to extract more common functionality from file-systems. Should that be the case, it would increase the number of function calls, but reduce the overall complexity.
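
    As a toy userspace sketch of that trade-off (made-up names, not actual kernel code): factoring shared logic into one generic helper adds an extra cross-boundary call per operation, but leaves far less code inside each individual filesystem.

        /* toyfs.c: a userspace toy, not kernel code. Two "filesystems" share
         * one generic helper: one extra call across the boundary per
         * operation, but far less code inside each filesystem. */
        #include <stdio.h>
        #include <string.h>

        /* Shared helper, standing in for the kind of generic routine the VFS
         * layer provides so individual filesystems need not reimplement it. */
        static size_t generic_fill(char *dst, size_t dst_size, const char *src)
        {
            size_t n;

            if (dst_size == 0)
                return 0;
            n = strlen(src);
            if (n >= dst_size)
                n = dst_size - 1;
            memcpy(dst, src, n);
            dst[n] = '\0';
            return n;
        }

        /* Each toy filesystem is now a thin wrapper around the helper. */
        static size_t toyfs_a_read(char *buf, size_t size)
        {
            return generic_fill(buf, size, "contents served by toyfs_a");
        }

        static size_t toyfs_b_read(char *buf, size_t size)
        {
            return generic_fill(buf, size, "contents served by toyfs_b");
        }

        int main(void)
        {
            char buf[64];

            toyfs_a_read(buf, sizeof(buf));
            printf("%s\n", buf);
            toyfs_b_read(buf, sizeof(buf));
            printf("%s\n", buf);
            return 0;
        }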

  • by renoX ( 11677 ) on Wednesday February 11, 2009 @06:34PM (#26820087)

    I think that you can compile only the filesystems you want into the kernel...
    So the only complexity that matters to a user is that of the filesystems they choose to compile into the kernel!

  • Re:wrong data (Score:3, Interesting)

    by jedidiah ( 1196 ) on Wednesday February 11, 2009 @06:41PM (#26820185) Homepage

    Nevermind COMPILING stuff. You can just plain choose not to USE stuff.

    Don't want the "bloat" of NFS or ext4, then don't bloody use them.

    Yeah, the spiffy new things or inherently complex things might show that complexity in the code. Imagine that. The source for Halo looks bigger than the source for Pacman.

    There is no news here.

    As an nvidia user, ATI can make their Linux drivers as bad and as bloated as they want; I don't care. It really doesn't affect me.

  • Re:Yes/no (Score:4, Interesting)

    by ckaminski ( 82854 ) <slashdot-nospam.darthcoder@com> on Wednesday February 11, 2009 @10:00PM (#26822215) Homepage
    Nevermind the fact that modern processors can cache the entirety of the Linux kernel.

    Simplicity of code is nearly always better than premature and not necessarily useful optimizations.
  • Re:Yes/no (Score:4, Interesting)

    by Zan Lynx ( 87672 ) on Wednesday February 11, 2009 @10:21PM (#26822403) Homepage

    I believe I read somewhere or other that branch predictors need a certain number of instructions between the branch instruction and the branch target in order to do a good job. If the only instruction in the loop is a single increment, that might explain the problem. Unrolling the loop so it has more instructions might fix it.
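
    As a small sketch of that unrolling idea (giving the loop body more work per taken branch), the second version below does four additions per loop branch. Whether this actually helps is very CPU- and compiler-dependent, since modern compilers often unroll on their own, so it is something to measure rather than assume.

        /* unroll.c: sketch of manual loop unrolling. */
        #include <stdio.h>
        #include <stddef.h>

        /* One addition per loop branch. */
        static long sum_simple(const unsigned char *buf, size_t len)
        {
            long sum = 0;
            for (size_t i = 0; i < len; i++)
                sum += buf[i];
            return sum;
        }

        /* Unrolled by four: four additions per loop branch, plus a tail loop. */
        static long sum_unrolled(const unsigned char *buf, size_t len)
        {
            long sum = 0;
            size_t i = 0;
            for (; i + 4 <= len; i += 4)
                sum += buf[i] + buf[i + 1] + buf[i + 2] + buf[i + 3];
            for (; i < len; i++)
                sum += buf[i];
            return sum;
        }

        int main(void)
        {
            unsigned char buf[1024];
            for (size_t i = 0; i < sizeof(buf); i++)
                buf[i] = (unsigned char)i;

            printf("simple:   %ld\n", sum_simple(buf, sizeof(buf)));
            printf("unrolled: %ld\n", sum_unrolled(buf, sizeof(buf)));
            return 0;
        }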

  • Re:Yes/no (Score:4, Interesting)

    by Z34107 ( 925136 ) on Wednesday February 11, 2009 @10:31PM (#26822485)

    > So... rather surprisingly, the cost of these function calls is as close as doesn't matter, to exactly zero.

    If the compiler knows the relative address of the function ahead of time, they are really fast.

    Try replacing your direct function call with a function pointer instead. Assign the function pointer the address of your function during runtime. It will be many orders of magnitude slower.

    Not sure why this is; just something I discovered the hard way.

  • Re:Yes/no (Score:3, Interesting)

    by Daniel Phillips ( 238627 ) on Thursday February 12, 2009 @12:46AM (#26823349)

    > Try replacing your direct function call with a function pointer instead. Assign the function pointer the address of your function during runtime. It will be many orders of magnitude slower.

    It goes faster as an indirect function call if anything. Go figure.

    Anyway... orders of magnitude difference? Under some other rules of physics maybe. It would probably be a good idea to compile and time your program, as I did.
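
    In the spirit of "compile and time it", here is a minimal microbenchmark sketch comparing a direct call with a call through a function pointer. Note that at -O2 the compiler may inline the direct call entirely, so any gap it reports says more about inlining than about raw call overhead; the numbers are illustrative only.

        /* bench.c: a minimal sketch in the spirit of "compile and time it".
         * Build with something like: cc -O2 bench.c && ./a.out
         * Numbers vary with CPU, compiler and flags. */
        #include <stdio.h>
        #include <time.h>

        #define ITERATIONS 100000000L

        static long add_one(long x)
        {
            return x + 1;
        }

        int main(void)
        {
            volatile long acc = 0;               /* volatile: keep the loops from being optimised away */
            long (*volatile fp)(long) = add_one; /* volatile: force a genuine indirect call */
            clock_t t0;

            t0 = clock();
            for (long i = 0; i < ITERATIONS; i++)
                acc = add_one(acc);              /* direct call: target known at compile/link time */
            printf("direct:   %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

            t0 = clock();
            for (long i = 0; i < ITERATIONS; i++)
                acc = fp(acc);                   /* indirect call through the function pointer */
            printf("indirect: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

            return 0;
        }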

  • Re:At least Reiser (Score:4, Interesting)

    by mollymoo ( 202721 ) on Thursday February 12, 2009 @02:32AM (#26823897) Journal

    Being dead doesn't sound too bad to me. The process of dying almost always sucks and I don't want to be dead, but once I am dead I can guarantee you I won't give a shit about it.
