ZFS, the Last Word in File Systems?

guigouz writes "Sun is carrying a feature story about its new ZFS File System - ZFS, the dynamic new file system in Sun's Solaris 10 Operating System (Solaris OS), will make you forget everything you thought you knew about file systems. ZFS will be available on all Solaris 10 OS-supported platforms, and all existing applications will run with it. Moreover, ZFS complements Sun's storage management portfolio, including the Sun StorEdge QFS software, which is ideal for sharing business data."
  • Two things... (Score:5, Insightful)

    by rincebrain ( 776480 ) on Thursday September 16, 2004 @12:08PM (#10267148) Homepage
    1) Even Sun has succumbed to recursive acronyms, now.

    2) Is it just me, or is the post surprisingly bereft of unique details? I mean, integration with all existing applications is rather assumed, given that it's a file system and all...
  • Hmf. (Score:5, Insightful)

    by BJH ( 11355 ) on Thursday September 16, 2004 @12:09PM (#10267155)
    Logically, the next question is if ZFS' 128 bits is enough. According to Bonwick, it has to be. "Populating 128-bit file systems would exceed the quantum limits of earth-based storage. You couldn't fill a 128-bit storage pool without boiling the oceans."

    So, what was the point of creating a 128-bit filesystem?

    -1, Marketing Hype.

    *Yawn*
  • by Anonymous Coward on Thursday September 16, 2004 @12:10PM (#10267162)
    Unlimited scalability
    As the world's first 128-bit file system, ZFS offers 16 billion billion times the capacity of 32- or 64-bit systems.
    But the last time I checked, 16 billion billion is still less than infinity.
  • by Paulrothrock ( 685079 ) on Thursday September 16, 2004 @12:10PM (#10267163) Homepage Journal
    Billion billion is a perfectly valid number. Or would you rather they say 6.0 × 10^18? Most people can't imagine that. But people can (kind of) visualize a billion, and then multiply that by a billion, and see it's really, really big.
  • by Ewan ( 5533 ) on Thursday September 16, 2004 @12:16PM (#10267252) Homepage Journal
    Reading the article, all I see is Sun saying how bad their old stuff was, e.g.:

    Consider this case: To create a pool, to create three file systems, and then to grow the pool--5 logical steps--5 simple ZFS commands are required, as opposed to 28 steps with a traditional file system and volume manager.

    and
    Moreover, these commands are all constant-time and complete in just a few seconds. Traditional file systems and volumes often take hours to configure. In the case above, ZFS reduces the time required to complete the tasks from 40 minutes to under 10 seconds.


    Compared to AIX or HP-UX, 28 steps is shockingly bad; both have had much simpler logical volume management for several versions now (AIX for five years or more, certainly as long as I have used it). The existing Solaris 9 logical volume infrastructure is years behind the competition; this brings it up to date, but doesn't put it far ahead.

    Ewan
  • by Anonymous Coward on Thursday September 16, 2004 @12:18PM (#10267284)
    Actually, what's the big deal about supporting such massive amounts of data?

    OK, I am saying it now, and it won't come back to curse me.

    64 bits should be enough for everybody.

    Now, here's the deal: how are you going to ORGANIZE a 128-bit file system? Oh, I see, folders? So if you're going to use folders, why not have multiple drives or partitioning?

    Ah yes, I can hear people saying "what about large file data sets?" Well, what about them? Look, if each data set is so massive that the only way to address it is with 128 bits, how the hell do you process that much data in one pass anyway? Show me a CPU (not a parallel system) that does operations on billions of trillions of gigabytes of data simultaneously.

    Reminds me of the Gillette Mach3 versus Schick Quattro lawsuit: Gillette decided to have 3 blades, so Schick put in 4 and claimed to be superior. Now why not add 5? What about 6?

    This 128-bit file system only serves marketing purposes. I want to see clearer advantages, not the "breakthrough" of "hmm, we used 16 bits, 32 bits, 64 bits... why not 128 bits!" When they have a system capable of actually processing that much data, I'll be the first to cheer.
  • by AsciiNaut ( 630729 ) on Thursday September 16, 2004 @12:18PM (#10267287)
    I broke the habit of a lunchtime and RTFA. According to Jeff Bonwick, the chief architect of ZFS, "populating 128-bit file systems would exceed the quantum limits of earth-based storage. You couldn't fill a 128-bit storage pool without boiling the oceans."

    Who else instantly thought of, "640 K ought to be enough for anybody", uttered by the chief architect of twenty years of chaos?

  • Re:Open source (Score:4, Insightful)

    by tolan-b ( 230077 ) on Thursday September 16, 2004 @12:18PM (#10267288)
    I suspect that whatever open source license Sun releases Solaris under, they'll be careful to make sure it's incompatible with the GPL.
  • fileless systems (Score:2, Insightful)

    by Doc Ruby ( 173196 ) on Thursday September 16, 2004 @12:19PM (#10267304) Homepage Journal
    I don't know about the "last" word in file systems, but they won't be anything but klugey simulations of antiquated paper cabinets until their first word is "SELECT". Will someone finally replace the hierarchical inode database with relational tables, and a SQL API? Throw in a traditional file/directory API mapped to SQL statements, and the world will beat a path/filespec to your door.
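    As a rough illustration of that idea (nothing Sun actually ships; the table layout and helper below are invented for the sketch), a path-keyed table in SQLite with a thin read wrapper is enough to show how an open()/read() call could map onto a SELECT:

    import sqlite3

    # Toy SQL-backed "filesystem": one row per file, keyed by path.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE files (
            path  TEXT PRIMARY KEY,   -- e.g. '/home/doc/notes.txt'
            mode  INTEGER,            -- permission bits
            mtime REAL,               -- modification time
            data  BLOB                -- file contents
        )
    """)
    conn.execute(
        "INSERT INTO files (path, mode, mtime, data) VALUES (?, ?, ?, ?)",
        ("/home/doc/notes.txt", 0o644, 0.0, b"hello"),
    )

    def read_file(path):
        # The traditional open()/read() pair expressed as a SELECT.
        row = conn.execute("SELECT data FROM files WHERE path = ?", (path,)).fetchone()
        if row is None:
            raise FileNotFoundError(path)
        return row[0]

    print(read_file("/home/doc/notes.txt"))  # b'hello'

    A directory listing then becomes something like SELECT path FROM files WHERE path LIKE '/home/doc/%', which is exactly the "first word is SELECT" point being made above.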
  • by kcbrown ( 7426 ) <slashdot@sysexperts.com> on Thursday September 16, 2004 @12:19PM (#10267312)
    ...and that I haven't seen in any file system announced to date, is a way of bundling multiple filesystem operations into a single atomic transaction that can be rolled back. This would require adding four system calls (one to begin a transaction, one to commit it, one to roll it back, and one to set the default action, commit or rollback, on process exit).

    Such a feature would rock, because it would be possible to make things like installers completely atomic: interrupt the installer process and the whole thing rolls back.
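    Kernel support is what would make this real, but the intended semantics are easy to sketch in userspace: stage every write, then publish or discard the whole batch. A minimal Python sketch (class and file names invented; each rename is atomic per file, not across the batch, so this only approximates what true transactional syscalls would give you):

    import os
    import tempfile

    class FileTransaction:
        """Stage whole-file writes, then commit (publish) or roll back (discard)."""

        def __init__(self):
            self.staged = {}  # target path -> temp file holding the new contents

        def write(self, path, data):
            directory = os.path.dirname(os.path.abspath(path))
            fd, tmp = tempfile.mkstemp(dir=directory)
            with os.fdopen(fd, "wb") as f:
                f.write(data)
            self.staged[path] = tmp

        def commit(self):
            for path, tmp in self.staged.items():
                os.replace(tmp, path)  # atomic rename, per file
            self.staged.clear()

        def rollback(self):
            for tmp in self.staged.values():
                os.unlink(tmp)         # nothing ever becomes visible
            self.staged.clear()

    # An "installer" that can be aborted cleanly:
    tx = FileTransaction()
    tx.write("example.conf", b"setting = 1\n")
    tx.commit()   # or tx.rollback() to discard everything staged so far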

  • by escher ( 3402 ) <the.mind.walrus@[ ]il.com ['gma' in gap]> on Thursday September 16, 2004 @12:25PM (#10267376) Journal
    You don't do much video editing, do you? ;)
  • by yeremein ( 678037 ) on Thursday September 16, 2004 @12:25PM (#10267387)
    ZFS is supported on both SPARC and x86 platforms. More important, ZFS is endian-neutral. You can easily move disks from a SPARC server to an x86 server. Neither architecture pays a byte-swapping tax due to Sun's patent-pending "adaptive endian-ness" technology, which is unique to ZFS.
    Bleh. How expensive is byte-swapping, really, compared with checking whether the number you're looking at is already in the right byte order? Just store everything big-endian; x86 can swap it in a single instruction. And it's not as though all data needs to be byte-swapped, just the metadata. I can't imagine the penalty would come anywhere close to the time spent computing their integrity checksums.

    Looks to me like nothing more than an excuse to put up a patent tollbooth for anyone who wants to implement ZFS.
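    For what it's worth, decoding a fixed big-endian on-disk field is the same single call on every host, whatever its native byte order; a tiny Python illustration (the field value here is made up):

    import struct
    import sys

    on_disk = struct.pack(">Q", 0x0123456789ABCDEF)  # metadata stored big-endian
    value = struct.unpack(">Q", on_disk)[0]          # identical call on SPARC or x86

    print(hex(value), "host byte order:", sys.byteorder)
    # Any swap happens per metadata field, which is the point above:
    # it's tiny next to checksumming every block.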
  • by poot_rootbeer ( 188613 ) on Thursday September 16, 2004 @12:25PM (#10267388)
    As the world's first 128-bit file system, ZFS offers 16 billion billion times the capacity of 32- or 64-bit systems.

    A 64-bit (unsigned) binary number can already store values up to 16 billion billion (actually, closer to 18, but who's counting). That's roughly 2.5 billion individually addressable locations for every man, woman, and child living on Earth.

    Shouldn't that be enough to hold us for a few generations at least?
  • huh? (Score:1, Insightful)

    by helmespc ( 807573 ) on Thursday September 16, 2004 @12:26PM (#10267412)
    Never need more than a 128 bit filesystem? My arse... and I'll never need more than 640k of system memory. Just because 128 bit filesystems allow an utter crapload of data doesn't negate the fact that 256 bit filesystems would allow a super utter crapload of data...
  • Silly AC (Score:3, Insightful)

    by 2nd Post! ( 213333 ) <gundbear&pacbell,net> on Thursday September 16, 2004 @12:37PM (#10267532) Homepage
    You organize a 128bit file system with a database.

    Why bother with folders as a root? You can create a folder hierarchy *with* a database too.
  • by jellomizer ( 103300 ) * on Thursday September 16, 2004 @12:38PM (#10267544)
    64 bits should be enough for everybody.
    Well, 128-bit is more about coming up with something without a limit, or at least a limit that nobody will use up any time soon. The difference between 64-bit and 128-bit is the difference between a number that we can handle and comprehend and a number that is much too big for our minds to properly comprehend.
    How could someone fill a 64-bit file system? Well, a large company or government organization that stores all of its people's files on one file system. Or a program that writes its logs to separate files. Or storing uncompressed movies frame by frame. Or keeping an archive of data spanning hundreds of years. Yes, there are ways around it now, but sometimes a file system that doesn't have those limits comes in handy, not necessarily for now, but to extend into the future.

  • by Anonymous Coward on Thursday September 16, 2004 @12:42PM (#10267616)
    People can visualize it. "Billion" is much more common than you think: a billion hertz is 1 GHz, and a billion bytes of RAM is (roughly) a gigabyte.
  • by Jeff DeMaagd ( 2015 ) on Thursday September 16, 2004 @12:47PM (#10267677) Homepage Journal
    It would take over 500 years to fill a 64 bit filesystem written at 1GB/sec (and of course 500 years to read it back again).

    One product can already transfer a terabyte per second, so that would cut the transfer down to about half a year. And I imagine transfer rates will continue to increase.

    I don't see how one would argue against such a thing for products aimed at cluster and supercomputer use. I say you might as well get the bugs out now, so that once the 65th bit is needed, the supercomputer suppliers are ready.

    http://www.sc-conference.org/sc2004/storcloud.html
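    For anyone who wants to check those two figures, the back-of-the-envelope arithmetic (rates taken from the comments above):

    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    capacity = 2**64  # bytes in a fully used 64-bit byte-addressed space

    for label, rate in [("1 GB/s", 1e9), ("1 TB/s", 1e12)]:
        years = capacity / rate / SECONDS_PER_YEAR
        print(f"{label}: {years:.1f} years")  # ~584.5 years and ~0.6 years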
  • by pslam ( 97660 ) on Thursday September 16, 2004 @12:54PM (#10267764) Homepage Journal
    Yeah, its probably marketing hype now, but in 5 years, what about 10? Just because we can't do it now doesn't mean that we should stop progress.

    No, precisely because we can't do it now, and for the very predictable future, we shouldn't be wasting all that disk space, access and CPU time for a boundary that no production system is likely to ever reach before they get upgraded. That's just practicality.

    Seagate apparently sold 18.3 million desktop drives last year. Assuming they're all about 120GB (which is generous of me), that would be about 17.6*10^18 bits. Guess what, that's 2^64 bits. Yes, you would have to buy every single desktop hard drive Seagate shipped in the last year to have the capacity to fill a 64 bit filesystem. And find space for 18 million drives. And a power station to deliver the several hundred megawatts you'd need.

    Even at 2x drive capacity growth per year, that's still a ridiculously unattainable figure. In 14 years' time you'd only need to buy 1000 drives (which would by then be 2000TB each). But 14 years is a geological time scale when it comes to computers. You'd have wasted 14 years of CPU time and disk space on those extra 64 bits.

    If you still think 64 bits isn't enough, how about 96 bits? It would take 46 years before hard disks were big and cheap enough so you could fill the filesystem by buying 1000 of them. But no, they chose 128 bits because it sounded good.
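    The drive arithmetic above, spelled out (all figures taken from the comment itself):

    drives = 18.3e6           # desktop drives Seagate reportedly sold in a year
    bytes_per_drive = 120e9   # generous 120 GB per drive

    total_bits = drives * bytes_per_drive * 8
    print(f"{total_bits:.2e} bits vs 2^64 = {float(2**64):.2e} bits")
    # ~1.76e19 bits, i.e. roughly 2^64 bits of raw capacity in a year's worth of drives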

  • by jfinke ( 68409 ) on Thursday September 16, 2004 @01:13PM (#10267996) Homepage
    I have always maintained that the only reason Veritas exists is to make up for shortcomings in Sun's volume management and file systems.

    IBM has had an LVM since the early-to-mid '90s.

    Linux has one now.

    If Sun had bothered to keep up with the Joneses on these little things, Veritas might never have become what they are.

    Last I heard, they were going to start offering VxVM and VVM on AIX. My AIX admins did not care. They figured, why spend the money on the product when they already have a usable system supported by the OEM?

  • Re:Out of letters. (Score:3, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday September 16, 2004 @01:15PM (#10268025) Homepage Journal
    To those who don't know, no amount of explanation can make a joke funny. In fact, if you have to explain the joke, it's pretty much guaranteed not to be funny. I found it kind of amusing - I didn't know that [ was the next character but I was able to guess that it was simply by what was said. Consequently, I found it amusing. The response from someone who doesn't think about that stuff is going to be similar to "Ah. That's funny." Followed by a shaking of the head as they walk off toward the water cooler to tell everyone what an insufferable nerd you are.
  • Re:Hmf. (Score:3, Insightful)

    by PaSTE ( 88128 ) <paste@mpsCOW.ohi ... minus herbivore> on Thursday September 16, 2004 @01:53PM (#10268541) Homepage
    Let's do the math, shall we?

    2^128 is about 3.4e38. Now, let's be generous and assume we can control the spin of every electron we come across and incorporate it into a quantum storage device, such that each electron represents a bit of information (either left- or right-spin). Because I'm still being generous, I'm going to say the Earth's oceans contain 2e9 km^3, or 2e18 m^3 (compare here [hypertextbook.com]). Assuming all this water is liquid, its density is about 1000 kg/m^3, so we have about 2e24 g of water.

    2e24 g of water contains about 1e23 moles of water molecules, or about 7e46 individual water molecules. With 10 electrons per molecule, that's roughly 7e47 electrons. So if we indeed "boil the oceans" in order to harvest the electrons to feed into our massive quantum storage system, we would still have about two billion electrons left over for every bit we actually need, for things like hydrogen fuel cells.

    But this does not exceed the quantum limits of earth-based storage, not even by a long shot. Bonwick even admits it: "You couldn't fill a 128-bit storage pool without boiling the oceans." Boiling the oceans is definitely an earth-based option for quantum storage, as we wouldn't have to import the material from space. We also have other ways of harvesting electrons, like boiling humans and evacuating the atmosphere. To give you an idea, there's something like 10^54 electrons on Earth, give or take a few hundred trillion. We'd need something like a 180-bit (call it 192-bit) system to approach Earth's quantum (electron-based) limits.
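    The same estimate as a quick script, using the (deliberately generous) figures from the comment above:

    AVOGADRO = 6.022e23

    ocean_volume_m3 = 2e18                       # ~2e9 km^3 of ocean
    water_grams = ocean_volume_m3 * 1000 * 1000  # 1000 kg/m^3, 1000 g/kg
    molecules = water_grams / 18.0 * AVOGADRO    # ~18 g per mole of H2O
    electrons = molecules * 10                   # 10 electrons per molecule

    bits_needed = float(2**128)
    print(f"electrons ~ {electrons:.1e}, bits in a full 2^128 pool ~ {bits_needed:.1e}")
    print(f"headroom  ~ {electrons / bits_needed:.1e}x")  # roughly 2e9: about two billion to one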

  • by Anonymous Coward on Thursday September 16, 2004 @01:57PM (#10268589)
    the funny thing about reading the article is that you get the details. you should try it sometime.

    here are some more details, but nowhere near as long a list as you'll get from reading the article (since the full list would mean quoting the article, which i suggest reading).
    - data checksums eliminate the need for fsck
    - easy to add disks to the pool
    - seems to support raid 0 and at least one real raid
    - data rollbacks (sounds like netapp snapshots)
    - can mount the same filesystem on sparc or x86

    while not necessarily amazing, when you start adding all of it together it makes for a large improvement over ufs or vxvm. it's interesting, to say the least. i consider this a big announcement for the solaris platform (and, as more than one person pointed out, possibly linux and bsd since the code for it will ultimately be open source).

    as far as greater technical detail goes, how are people even going to know it exists in order to, say, run independent performance benchmarks if there's no announcement? should everyone just discover the feature accidentally?
  • by Jugalator ( 259273 ) on Thursday September 16, 2004 @02:03PM (#10268676) Journal
    And how many have a clue of how much that is?
  • by vrt3 ( 62368 ) on Thursday September 16, 2004 @03:03PM (#10269412) Homepage
    I'm a bit of a skeptic, but in your list I would like to see (as necessary features of a modern file system):
    - online defrag (you don't take the volume offline to do a defrag)

    IMO a necessary feature of a modern file system is that it doesn't need to be defragged.
  • Re:Hmf. (Score:1, Insightful)

    by Anonymous Coward on Thursday September 16, 2004 @03:03PM (#10269414)

    That freaking 640k quote is overused!

    It would have been ridiculous AT THE TIME to address more data... CPUs and software weren't there yet.


    Discounting the whole backstory about how this quote was a myth and Bill Gates never said it: even if the quote had been made, it was wrong at the time. The 640k limit was imposed by the craptacular CPU of choice, the 8088. There were numerous processors without this limitation, or without a design shoddy enough to make such a limit necessary.

    Remember that even if Gates (or anyone) had said this back in the early '80s, it should have been quickly forgotten. The reason it wasn't is that people using PCs had to contend with that real-mode addressing limit WELL into the mid-'90s, which was completely insane. That's why people kept dredging the quote up: even if the quote wasn't true, Microsoft's actions into the '90s made it seem as though they really did think 640k was enough. Granted, it's not all MS's fault; they had to contend with dyed-in-the-wool DOS programmers, other systems like Novell, and preserving backward compatibility. At the same time, they were awfully slow to the forefront with a usable, true 32-bit operating system, a title that probably goes to Windows 95. And the first viable home OS that nixed real-mode operation and confined it to virtualization (VM86) didn't arrive until Windows 2000, or, some might say, Windows XP. So some would say the 640k limit had its influence up until 2001.
  • by mcrbids ( 148650 ) on Thursday September 16, 2004 @03:29PM (#10269743) Journal
    1) Adding more address space bits doesn't significantly slow down performance.

    2) Migrating from one address space to another is painful. Why make it more frequent by aiming low? Do you think migration would be any less painful in 14 years?

    3) New applications: Broadband didn't just result in really fast web-page downloads; the entire online music industry stems from it. The original creators of TCP/IP had no idea they were developing media on demand; they were just making it possible to transfer bits from one archaic machine to another.

    Building flexible, capable systems creates an environment where development isn't as constrained by limitations - resulting in new, unpredictable developments.
  • by stonecypher ( 118140 ) * <stonecypher@@@gmail...com> on Thursday September 16, 2004 @04:11PM (#10270395) Homepage Journal
    There's a big difference between visualizing the space containing a billion elements and visualizing the elements themselves. Try imagining all the little plastic millimeter chips that fill that half mile.

    Then, since it's actually a billion billion at stake, try to imagine that half by half mile square full of tiny plastic chips.

    Finally, put them in an oversized bathtub, surround the tub with video games, a bad pizza parlor and tired parents, and wham! You're Chuck E Cheese. Therefore, we can state firmly:

    1) Visualize Billion Billion.
    2) ??? [Which adequately describes setting up a chuck e cheese]
    3) Profit.

    In soviet slashdot, billion billion profits you.

    Pardon me; I have to find a way to convince myself that my hot grits cluster joke isn't outdated.
  • by julesh ( 229690 ) on Thursday September 16, 2004 @04:13PM (#10270437)
    once you're going to expand past a 64-bit filesystem, there's not much point in going smaller than a 128-bit filesystem.

    Why expand past a 64-bit filesystem at all? 64 bits with 1k blocks as your smallest addressable unit (which is more than reasonable for a filesystem this size) gives you 2^74 bytes to play with. For reference, that's 16 * 2^70 bytes = 16 * 2^30 terabytes, or "one hell of a lot of data".
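    Spelled out, in case the powers of two are hard to follow:

    blocks = 2**64        # block addresses available
    block_size = 1024     # 1 KiB smallest addressable unit

    total_bytes = blocks * block_size
    print(total_bytes == 2**74)         # True
    print(total_bytes // 2**40, "TiB")  # 17,179,869,184 TiB, i.e. 16 * 2^30 TiB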
  • by identity0 ( 77976 ) on Thursday September 16, 2004 @06:20PM (#10271899) Journal
    You've pointed out just why we need this. The problem is, you're still thinking in terms of individual hard drives in individual computers that can only be accessed by the local machine.

    What are you going to do when you access all of your data through a network, and the whole world has their storage on the internet, using a global filesystem? You said yourself that one manufacturer makes 2^64 bits of HD space every year, so 64-bit is obviously not enough. We need 128 bits if we want to be able to make use of all the HD space that is going to waste on networked computers today.

    Hell, we could do that today, if we had - wait for it - the right filesystem.

    The fact that it's Sun that came up with this suggests they're thinking along the same lines. They would benefit greatly if people started using a massively networked filesystem, especially if they own the code to it.
  • by Mornelithe ( 83633 ) on Thursday September 16, 2004 @08:25PM (#10272928)
    Actually, it's only 18.3 million desktop drives if you address every single byte of the filesystem. Most don't do this; they allocate space in blocks. 1k is a reasonable block size if you're talking many terabyte systems.

    With a 1k block size, you'd be addressing 16 billion terabytes of storage. Let us know as soon as every single person on earth has more than 2 terabytes to donate to your distributed filesystem project.
  • by mcrbids ( 148650 ) on Thursday September 16, 2004 @09:01PM (#10273148) Journal
    If you could suggest to me a new application that needs over 8 billion times more storage capacity than top-of-the-range current systems, please, go ahead and introduce it. Just don't ask me for financing.
    Please read what I wrote! Or is the word "unpredictable" beyond your comprehension? Try reading this, word for word, and see if your response does anything but make you sound like an idiot:

    "3) New applications: Broadband didn't just result in really fast web-page downloads - the entire online music industry stems from that. The original creators of TCP/IP had no idea that they were developing media on-demand, they were making it so that you could transfer bits from one archaic machine to another."

    How could they predict iTunes? Why would you think it reasonable to predict the usage of such a filesystem?
