
ZFS, the Last Word in File Systems? 564

Posted by CmdrTaco
from the no-point-discussing-I-guess dept.
guigouz writes "Sun is carrying a feature story about its new ZFS File System - ZFS, the dynamic new file system in Sun's Solaris 10 Operating System (Solaris OS), will make you forget everything you thought you knew about file systems. ZFS will be available on all Solaris 10 OS-supported platforms, and all existing applications will run with it. Moreover, ZFS complements Sun's storage management portfolio, including the Sun StorEdge QFS software, which is ideal for sharing business data."
This discussion has been archived. No new comments can be posted.

ZFS, the Last Word in File Systems?

Comments Filter:
  • Open source (Score:4, Informative)

    by Splinton (528692) * on Thursday September 16, 2004 @11:07AM (#10267137) Homepage

    And it looks like it's going to be opensourced along with most of Solaris 10!

    Presumably a 32-bit machine will be able to handle a 128-bit file system, in the same way that Solaris 10 is currently destined for (at most) 64-bit hardware.

  • Re:Hmf. (Score:5, Informative)

    by Kenja (541830) on Thursday September 16, 2004 @11:10AM (#10267168)
    "So, what was the point of creating a 128-bit filesystem?

    Getting rid of file/drive size limitations for the foreseeable future?

  • by grunt107 (739510) on Thursday September 16, 2004 @11:12AM (#10267196)
    Having a global pool does lessen maintenance/support, but what method are they using to place data on the disks?

    Frequently accessed data needs to be spread out on all the disks for the fastest access, so does that mean Sun has FS files/tables that track usage and repositions data based on that?
  • by TheLoneGundam (615596) on Thursday September 16, 2004 @11:13AM (#10267203) Journal
    IBM has ZFS on their z/OS Unix Systems Services (POSIX interfaces on z/OS) component. ZFS was developed to provide improvements over the HFS (Hierarchical File System) that they ship with the OS.
  • Re:billion billion? (Score:5, Informative)

    by michael path (94586) * on Thursday September 16, 2004 @11:15AM (#10267244) Homepage Journal
    How about quintillion [wikipedia.org]?
  • Re:Oh wow! (Score:2, Informative)

    by elmegil (12001) on Thursday September 16, 2004 @11:24AM (#10267367) Homepage Journal
    Until Veritas makes their product free, there's going to have to be SOMETHING that operates in that space that is under Sun's control, don't you think? Not to mention VxVM has plenty of warts all its own.
  • Sounds really nice (Score:5, Informative)

    by mveloso (325617) on Thursday September 16, 2004 @11:25AM (#10267381)
    Looks like Sun went out and redid their filesystem based on the performance characteristics of machines today, instead of machines of yesteryear.

    Some highlights, for those who don't (or won't) RTA:

    * Data integrity. Apparently it uses file checksums to error-correct files, so files will never be corrupted. About time someone did this.

    * Snapshots, like netapp?

    * Transactional nature/copy-on-write

    * Auto-striping

    * Really, Really Large volume support

    All of this leads to speed and reliability. There's a lot of other stuff (varying block sizes, write queueing, stride stuff which I haven't heard about in years), but all of it leads to the above.

    Oh, and they simplified their admin too.

    It's hard to make a filesystem look exciting. Most of the time it just works, until it fails. The data checksum stuff looks interesting, in that they built error correction into the FS (like CDs and RAID but better hopefully).

    It might also do away with the idea of "space free on a volume," since the marketing implies that each FS grows/shrinks dynamically, pulling storage out of the pool as needed.

    Any users want to chime in?
  • by FullMetalAlchemist (811118) on Thursday September 16, 2004 @11:30AM (#10267453)
    There are several FSes like this, but you don't know of them because they require a completely new FS API to work with.
    With UFS2/SU we have snapshots [freebsd.org], which are a compromise: they don't require any changes to the original UNIX API, so all current apps work. On the other hand, it either requires a daemon or a competent user.

    So, either you have UNIX or you have something else. Plan9 has many advantages, still, we use BSD, Solaris or whatever.
  • by pslam (97660) on Thursday September 16, 2004 @11:30AM (#10267456) Homepage Journal
    Getting rid of file/drive size limitations for the foreseeable future?

    It would take over 500 years to fill a 64 bit filesystem written at 1GB/sec (and of course 500 years to read it back again). 64 bits is already an impossibly large figure. There's absolutely nothing special or clever whatsoever about doubling the size of your pointers aside from using up more disk space for all the metadata.

    64 bits is enough for today's filesystems in much the same way that 256 bit AES is enough for today's encryption - there are far bigger things that will require complete system changes than that so called "limit". I suspect a better filesystem will come along well before those 500 years are up... I agree with grandparent:

    -1, Marketing Hype.
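For anyone who wants to check the parent's arithmetic, here is a quick back-of-envelope sketch (assuming a sustained 1 GB/s write rate, as the parent does):

```python
# Back-of-envelope check: how long to fill a 2^64-byte address space
# when writing at a sustained 1 GB/s?
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_fill(address_bits, bytes_per_second=10**9):
    """Years needed to write 2**address_bits bytes at the given rate."""
    return (2 ** address_bits) / bytes_per_second / SECONDS_PER_YEAR

print(years_to_fill(64))   # roughly 585 years at 1 GB/s
print(years_to_fill(128))  # astronomically longer for a 128-bit space
```

So "over 500 years" is right for 64 bits at that rate, which is the heart of the parent's argument either way.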

  • Re:Out of letters. (Score:5, Informative)

    by badriram (699489) on Thursday September 16, 2004 @11:34AM (#10267502)
    I just wonder how many people on slashdot would even understand that....

    To those who don't know: '[' comes after 'Z' in ASCII and Unicode Latin.
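A one-liner confirms the ordering:

```python
# '[' immediately follows 'Z' in ASCII (and in Unicode's Latin block,
# which shares the same first 128 code points).
print(ord('Z'))  # 90
print(ord('['))  # 91
assert ord('[') == ord('Z') + 1
```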
  • by dominator (61418) on Thursday September 16, 2004 @11:39AM (#10267559) Homepage
    Reiserfs will apparently soon have what you're looking for. Already, all primitive operations are atomic, but they plan on exporting a user-space transaction interface soon.

    http://www.namesys.com/benchmarks.html

    "V4 is a fully atomic filesystem, keep in mind that these performance numbers are with every FS operation performed as a fully atomic transaction. We are the first to make that performance effective to do. Look for a user space transactions interface to come out soon....

    Finally, remember that reiser4 is more space efficient than V3, the df measurements are there for looking at....;-) "
  • by ChrisRijk (1818) on Thursday September 16, 2004 @11:41AM (#10267596)
    ZFS achieves its impressive performance through a number of techniques:
    * Dynamic striping across all devices to maximize throughput
    * Copy-on-write design makes most disk writes sequential
    * Multiple block sizes, automatically chosen to match workload
    * Explicit I/O priority with deadline scheduling
    * Globally optimal I/O sorting and aggregation
    * Multiple independent prefetch streams with automatic length and stride detection
    * Unlimited, instantaneous read/write snapshots
    * Parallel, constant-time directory operations


    ZFS has some similarities to NetApp's WAFL in that it uses "copy on write".

    One of the fun things with ZFS is that it automatically stripes across all the storage in your pool. Disk size doesn't matter - it's all used. This even works across SCSI and IDE.
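Sun hasn't published the allocator's details, so here is a purely illustrative toy of what capacity-aware striping across unequal disks could look like (not ZFS's actual algorithm):

```python
# Toy sketch of capacity-aware dynamic striping: each new block goes to
# the device with the most free space, so differently sized disks all
# get used. Illustrative only -- not ZFS's real allocator.
def place_blocks(disk_capacities, n_blocks):
    free = list(disk_capacities)
    placement = [0] * len(free)
    for _ in range(n_blocks):
        target = max(range(len(free)), key=lambda i: free[i])
        placement[target] += 1
        free[target] -= 1
    return placement

# A 400-block pool over 100/200/300-block disks spreads writes so that
# all three disks end up with roughly equal free space.
print(place_blocks([100, 200, 300], 400))
```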

    One of the important things is that volume management isn't a separate feature. Effectively, all the current limitations of volume managers are blown away:

    Just as it dramatically eases the suffering of system administrators, ZFS offers relief for your company's bottom line. Because ZFS is built on top of virtual storage pools (unlike traditional file systems that require a separate volume manager), creating and deleting file systems is much less complex. Not only does this eliminate the need to pay for volume manager licenses and allow for single support contracts, it lowers administration costs and increases storage utilization.

    ZFS appears to applications as a standard POSIX file system--no porting is required. But to administrators, it presents a pooled storage model that eliminates the antique concept of volumes, as well as all of the related partition management, provisioning, and file system sizing problems. Thousands--even millions--of file systems can all draw from ZFS' common storage pool, each one consuming only as much space as it needs. The combined I/O bandwidth of all of the devices in that storage pool is always available to each file system.


    This is also part of the stuff making admin and configuration far far simpler. The thing I like is that it should be far harder to go wrong with ZFS (not available in Solaris Express yet so I haven't seen this for myself).

    The very high degree of reliability as standard is very welcome too:

    Data can be corrupted in a number of ways, such as a system error or an unexpected power outage, but ZFS removes this fear of the unknown. ZFS prevents data corruption by keeping data self-consistent at all times. All operations are transactional. This not only maintains consistency but also removes almost all of the constraints on I/O order and allows changes to succeed or fail as a whole.

    All operations are also copy-on-write. Live data is never overwritten. ZFS writes data to a new block before changing the data pointers and committing the write. Copy-on-write provides several benefits:

    * Always-valid on-disk state
    * Consistent, reliable backups
    * Data rollback to known point in time
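The copy-on-write discipline described above can be modeled in a few lines: write the new checksummed block first, flip the pointer last, so an interruption at any point leaves the old state intact (a toy model, not ZFS's on-disk format):

```python
import zlib

# Toy copy-on-write store: an update writes a new checksummed block
# first, then atomically repoints. A crash before the repoint leaves
# the old, still-valid block in place. Illustrative model only.
class CowStore:
    def __init__(self):
        self.blocks = {}      # block id -> (data, checksum)
        self.root = None      # "pointer" committed last
        self.next_id = 0

    def write(self, data: bytes):
        new_id = self.next_id
        self.next_id += 1
        self.blocks[new_id] = (data, zlib.crc32(data))  # step 1: new block
        self.root = new_id                              # step 2: flip pointer
        return new_id

    def read(self):
        data, stored = self.blocks[self.root]
        # Checksum verification on read, in the spirit of ZFS's
        # end-to-end data integrity checks.
        assert zlib.crc32(data) == stored, "checksum mismatch: corruption"
        return data

store = CowStore()
store.write(b"version 1")
store.write(b"version 2")   # old block untouched until the pointer flips
print(store.read())         # b'version 2'
```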

    "We validate the entire I/O stack, start to finish, no guesswork involved. It's all provable data integrity," says Bonwick.

    Administrators will never again have to run laborious recovery procedures, such as fsck, even if the system is shut down in an unclean fashion. In fact, Solaris Kernel engineers Bill Moore and Matt Ahrens have subjected ZFS to more than a million forced, violent crashes in the course of their testing. Not once has ZFS lost data integrity or leaked a single block.


    For more technical info see Matt Ahrens's [sun.com] and Val Henson's [sun.com] blogs - since they're among the engineers who worked on it.
  • by dynamo (6127) on Thursday September 16, 2004 @11:44AM (#10267647) Journal
    Just because not all worlds are inhabited doesn't mean there aren't an infinite number. If you allow yourself to presume infinite space and infinite worlds, suppose 9% of them turn out to be inhabited, no matter how many you keep examining.

    Infinity is relative.
  • by thehunger (549253) on Thursday September 16, 2004 @11:45AM (#10267661)
    The codename for the first generation of Novell's current filesystem was ZFS. Why? Because it was supposed to be "the last, or final, word" in file systems.

    Novell now calls it Novell Storage Services (I think it used to be NetWare Storage Services).

    Apart from the obvious fact that Sun didn't manage to be very original in naming their filesystem, it's noteworthy that Novell is porting their ZFS (now NSS) to Linux. It'll be part of Novell Open Enterprise Server, on both Linux and NetWare kernels.

    Off the top of my head, here are some features of NSS that Sun needs to exceed to qualify for a new "final word":

    - Background compression
    - Fast on-demand decompression
    - Transactions
    - Pluggable Name spaces
    - Pluggable protocols (i.e. HTTP, NFS, etc.)
    - Advanced Access control model with inheritance, rights filters, etc. integrated with directory service (duh!)
    - Quotas on user, group, directory level
    - 64-bit (ok, SUN obviously got that one)
    - mini-volumes
    - journaled
    - etc.

    Oh well, I won't bother continuing, but it's worth looking out for NSS. Hopefully Novell will open source it and not make it exclusive to their distros.
  • by anzha (138288) on Thursday September 16, 2004 @11:50AM (#10267710) Homepage Journal

    Right now there are a lot of file systems that do something not all that different from what Sun is proposing. The project [nersc.gov] I am on is evaluating them as we speak for a center-wide filesystem. I've had the fun (no sarcasm, honestly) of setting up a number of different ones and helping to run benchmarks and tests against each. All of them have strengths. Every single one of them has some nasty weaknesses.

    If you are looking for an open source based cluster file system, Lustre [lustre.org] is what you want. It's supported by LLNL, PNNL, and the main writers at ClusterFS Inc [clusterfs.com]. It's a network-based cluster FS. We've been using it over GigE. However, we've found that there needs to be about a 3:1 data-server-to-client ratio. We have only used one metadata server. Failover isn't the greatest. Quotas don't exist. It also makes kernel mods (some good, some bad), effectively a mild fork of the Linux kernel (they get merged into newer kernels every so often). It only runs on Linux. Getting it to run on anything else looks... scary.

    GPFS [ibm.com] runs on AIX and Linux, even sharing the same storage. It runs and is pretty stable. It has the option to run in SAN mode or as a network-based FS. In the latter form, it even does local discovery of disks via labels, so that if a client can see the disks locally it will read and write to them via FC rather than through the server. It is, however, a balkanized mess. It requires a lot more work to bring up and run: there is an awful lot of software to configure to get it going (re: RSCT. If you haven't had the joys of HATS and HAGS, count yourself very, very lucky).

    ADIC's StorNext [adic.com] software is another option. This one is good if you are interested in ease of installation, maintenance, and very, very fast speeds (damn near line speed on Fibre Channel). I have set this one up for sharing disks in less than two hours from first install to getting numerous assorted nodes of different OSes to play together (Solaris, AIX, Linux). It runs on freakin' everything from Crays to Linux to Windows. Its issues seem to be scaling (right now it doesn't go past 256 clients), and it has some nontrivial locking issues (writing to the same block from multiple clients, and parallel I/O to the same file from multiple clients if you change the file size).

    There are some others that are not as mature. Among them are Ibrix [ibrix.com], Panasas [panasas.com], GFS [redhat.com], and IBM's SANFS [ibm.com]. All of them are interesting or promising. Only SANFS looks like it runs on more than Linux at this point, though. Our requirements for the project I am on are to share the same FS and storage instance among disparate client OSes simultaneously. This might not be the same for others, and these might be worth a look. Lustre dodges this requirement because it's open source and they're interested in porting.

  • by Zapman (2662) on Thursday September 16, 2004 @11:53AM (#10267751)
    Well, I'm not 100% sure that's fair. AIX and HP still have their old school 'format -> mkfs' path, and that is what Sun is comparing their 'new world order' to. Now, if you want to do cool things like Raid, then you need to either do the hardware based stuff, or you play with Disksuite or Veritas Volume Manager[1].

    Both have more interesting and prettier ways of playing with volumes. Disksuite is a free add-on package, and Veritas charges an arm and a leg for their Volume Manager.

    In addition to the other cool features, ZFS is just a way to deepen the abstraction away from physical volumes.

    As to its inherent coolness, or lack thereof, I'll let y'all know when I've actually been able to play with it.

    [1]Had Sun been wise years ago, they would have just bought Veritas, and the world would be very different. Now however, Veritas is one of the largest software companies in the world.
  • by melted (227442) on Thursday September 16, 2004 @11:55AM (#10267777) Homepage
    As someone who's been involved with performance/stress optimizations I can tell you that for each situation you can carefully put together two types of tests: one which proves that there's a problem, another that proves the problem doesn't exist.

    The proof is in the pudding. Let Sun release it and administrators use it for a year or two, then we'll see if it's good enough. Right now I'm having doubts it's as good as they want you to believe.
  • by drinkypoo (153816) <martin.espinoza@gmail.com> on Thursday September 16, 2004 @12:01PM (#10267855) Homepage Journal
    How is this actually different from JFS on top of a LVM? Either way it's made up of blocks, which can be added to the filesystem later, located on any physical medium available, using RAID... The only measurable difference seems to be the 128-bitness, which as described elsewhere seems like a big fat waste of time for the next hundred years or so.
  • ZFS (Score:2, Informative)

    by BJH (11355) on Thursday September 16, 2004 @12:05PM (#10267897)
    Two words:

    "Patent burdened"
  • by sysadmn (29788) <sysadmn@gm[ ].com ['ail' in gap]> on Thursday September 16, 2004 @12:13PM (#10267993) Homepage
    With AIX and HP-UX, there's still 28 steps. It's just that the manuals say: 1) Run smit (IBM version) or 1) Run SAM (HP-UX version). and you're supposed to read the menus to figure out the other 27 steps.
  • Re:Oh wow! (Score:3, Informative)

    by Wakko Warner (324) * on Thursday September 16, 2004 @12:17PM (#10268048) Homepage Journal
    Oh, I have no problem with Sun offering a VM of its own. It's the lack of functionality that's always concerned me. It always seemed silly to pay $25k for the kind of volume management on Solaris that you get for free in AIX and HP-UX.

    Also, I'm tired of running a volume manager simply to mirror root, and a separate, expensive volume manager (with a different level of support from a different vendor) simply to manage my data volumes, and I'm distressed that this is the "standard" way to do it in Solaris.

    Hopefully, this changes things significantly.

    - A.P.
  • White Papers (Score:2, Informative)

    by dTb (304368) on Thursday September 16, 2004 @12:22PM (#10268129)
    If anyone wants to read more details on the "Zettabyte File System" they can view the white papers on ZFS self-tuning [hp.com] and QOS [hp.com] as they contain far more detail than the marketing article given.
  • by mdmarkus (522132) on Thursday September 16, 2004 @12:23PM (#10268130)
    From Bruce Schneier in Applied Cryptography, on thermodynamic limitations:

    "One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)

    Given that k = 1.38x10^-16 erg/Kelvin, and that the ambient temperature of the universe is 3.2K, an ideal computer running at 3.2K would consume 4.4x10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

    Now, the annual energy output of our sun is about 1.21x10^41 ergs. This is enough to power about 2.7x10^56 single bit changes on our ideal computer; enough changes to put a 187-bit counter through all of its values. If we built a Dyson sphere around the sun and captured all of its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.

    But that's just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

    These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space."
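The quoted arithmetic is easy to reproduce with Schneier's own constants:

```python
import math

# Reproduce Schneier's thermodynamic counter argument, using the
# constants as he gives them in Applied Cryptography.
k = 1.38e-16          # Boltzmann constant, erg/K
T = 3.2               # cosmic background temperature, K
per_bit = k * T       # minimum energy per bit change: ~4.4e-16 erg

sun_year = 1.21e41                      # annual solar output, erg
flips = sun_year / per_bit              # ~2.7e56 bit changes
print(math.log2(flips))                 # ~187 -> a 187-bit counter
print(math.log2(flips * 32))            # ~192 -> count to 2^192 in 32 years
print(math.log2(1e51 / per_bit))        # ~220 for a supernova (he rounds to 219)
```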
  • by dTb (304368) on Thursday September 16, 2004 @12:29PM (#10268211)
    According to the information given in this blog [sun.com] it is possible to "show how much space is used in each disk. If you want to reduce the amount of space in a pool by removing a disk, you could use this to choose the least-full disk, thus minimizing the time it will take to migrate that data to other disks".
  • by dTb (304368) on Thursday September 16, 2004 @12:45PM (#10268440)
    The filesystem has compression built in as an option to make storage more efficient. They currently use LZJB (fast, but little reduction) compression but plan to add more powerful but slower compression at a later date.
  • by Insightfill (554828) on Thursday September 16, 2004 @12:50PM (#10268505) Homepage
    Here's a good source. [skeptic.com]

    "Johnny Carson, America's popular talk-show host, loved to affectionately mimic Carl - one of his favorite guests - by saying "billions and billions," until everyone associated it with Carl. Yet Carl never said that precise phrase in public until years later.

    He grew quite tired of it. I remember a concert for Planetfest, a Planetary Society celebration of space exploration in 1981. He spoke about space exploration while accompanied by music conducted by John Williams, and inevitably had to use the word "billions." As soon as he did, tittering broke out in the audience. He glared at the offenders and continued."

    Seriously, I would LOVE to use "Sagan" as a unit of counting "billions" or something.

  • Re:billion billion? (Score:5, Informative)

    by Dazza (2865) on Thursday September 16, 2004 @12:58PM (#10268608)
    Hmm... another one who doesn't know that there's a fair amount of land outside the US borders.

    Nope. He said he'd never been outside the UK, so I'd be fairly certain he's aware of land outside the US.

    Also living in the UK, I can attest that whenever you hear '1 billion', '1000 million' is meant. The UK converted to this for accounting purposes during the '70s.

    The same I suspect is true for most of previously Europe-dominated countries (say India for example).

    India, in particular, is totally different. They don't rely on millions and billions but on 'crore' and 'lakh', which are 10 million and 100,000 respectively.
  • Re:billion billion? (Score:2, Informative)

    by escher (3402) <(moc.liamg) (ta) (surlaw.dnim.eht)> on Thursday September 16, 2004 @01:17PM (#10268840) Homepage Journal
    No, nobody can really visualize a billion (seriously, try!)

    Okay!

    Let's see, let's define a millimeter as 1,000. That means a million is one meter and a billion is one kilometer. I, for one, can visualize a little over half a mile quite easily.
  • by the melon (89066) on Thursday September 16, 2004 @01:21PM (#10268890)
    All I can really say is that if you have ever used a volume manager before, you will rejoice at the ease of ZFS.

    I have been using it on my main NFS server in my Solaris lab at Sun for quite a while now, and it is great.

    I have a 1.6TB disk array that is allocated to a single zpool on the system. I can add/subtract drives/arrays to this pool at any time to increase or decrease the amount of storage available to the pool.

    I can then create, format, and mount a ZFS filesystem with one single command to the zpool. The filesystem will only consume as much of the zpool as it is actually using.

    It really is a great system.
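For reference, the workflow the parent describes maps onto the zpool/zfs commands shown in Sun's previews; the device names here are placeholders, and the exact syntax may change before release:

```shell
# Sketch of the pooled-storage workflow described above, per Sun's
# ZFS previews. Device names (c0t0d0 etc.) are placeholders.
zpool create tank c0t0d0 c0t1d0   # build a pool from whole disks
zpool add tank c0t2d0             # grow the pool online, any time
zfs create tank/home              # create and mount a filesystem in one step
zfs create tank/home/alice        # filesystems are cheap; make many
zpool list                        # pool-wide capacity and usage
```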
  • Re:billion billion? (Score:3, Informative)

    by mikael (484) on Thursday September 16, 2004 @01:25PM (#10268928)
    Our newspapers regularly like to have front page headlines like "Chancellor raids nine billion pounds from company pension schemes". In this sense it means 9 thousand million pounds. At the same time we frequently have news reports from the USA, especially with regard to budget deficits in states like California.
  • Re:Oh wow! (Score:3, Informative)

    by elmegil (12001) on Thursday September 16, 2004 @01:50PM (#10269232) Homepage Journal
    If someone from Sun has convinced you that this is "standard" or "necessary", you need to talk to their management. While many people do it that way, there's absolutely no reason, since you're already paying for Veritas, not to just use Veritas and be done with it.

    You're right, it'd be nice to see some regularization.

  • by Plugh (27537) on Thursday September 16, 2004 @01:56PM (#10269313) Homepage
    You forgot to mention the GPLed Cluster Filesystem [oracle.com] that Oracle [oracle.com] released some time ago.

    You also may want to check out the ASM [oracle.com] (Automated Storage Manager). It only works for disks that Oracle manages, but it does some pretty cool automatic load-balancing and RAIDing.

    Disclaimer:
    Yes, I do work for ORCL.
    No, I do not work on either OCFS or ASM (but I have partied with those guys :-)

  • Re:billion billion? (Score:3, Informative)

    by Just Some Guy (3352) <kirk+slashdot@strauser.com> on Thursday September 16, 2004 @01:59PM (#10269361) Homepage Journal
    I'm pretty film-ignorant, but let's say that you're talking about the equivalent of a 10000x10000 image with 64 bits of color (because you clearly want to maintain all of the information possible). That's 800,000,000 bytes (10000*10000*8) per image. Impressive, but at 24 frames per second a 64-bit filesystem will still yield 960,767,920 seconds (30.4 years) of uncompressed footage.

    Again, what exactly are you planning to film? :)
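The parent's arithmetic checks out:

```python
# Check the parent's figures: uncompressed 10000x10000 frames at
# 64 bits (8 bytes) per pixel, 24 fps, on a 64-bit-addressable volume.
bytes_per_frame = 10000 * 10000 * 8          # 800,000,000 bytes
frames = 2 ** 64 // bytes_per_frame          # ~2.3e10 frames
seconds = frames // 24
print(seconds)                               # 960,767,920 seconds
print(seconds / (365.25 * 24 * 3600))        # ~30.4 years
```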

  • by gimpboy (34912) <john...m...harrold@@@gmail...com> on Thursday September 16, 2004 @03:39PM (#10270816) Homepage
    There are many opensource licenses [opensource.org]. All opensource means is that the code is available for inspection and modification. Opensource is more a copyright issue and has nothing to do with patents. The gpl --- which is not the same as opensource --- addresses both copyright and patent issues.
  • Re:billion billion? (Score:4, Informative)

    by david.given (6740) <dgNO@SPAMcowlark.com> on Thursday September 16, 2004 @04:11PM (#10271211) Homepage Journal
    I dunno, man. I've got a lot of porn...

    Hmm.

    If you had a filesystem 2^64 bytes wide, and your average porn jpeg was 100kB, then this means that you could store 1x10^14 images on it. That's 100'000'000'000'000 of them.

    Assuming you're male and heterosexual, this means that every woman on the planet would have to take 30'000 compromising pictures of herself to fill it up; or about 60'000 assuming you're not into the weird stuff.

    You're right, that's a lot of porn.

  • Re:billion billion? (Score:3, Informative)

    by lee7guy (659916) on Thursday September 16, 2004 @07:21PM (#10272907)
    What part of "lets define a millimeter as 1000" don't you get?
  • by ahrens (814221) on Friday September 17, 2004 @12:52AM (#10274456) Homepage
    You can find some more technical information about ZFS in my weblog [sun.com]. Check out the comments [sun.com] to my first entry about ZFS, there are a few juicy details there and I'll do my best to answer any questions posted to my blog.

    Disclaimer: I work on ZFS at Sun.
  • Re:billion billion? (Score:3, Informative)

    by lee7guy (659916) on Wednesday September 22, 2004 @06:12PM (#10324435)
    define: 1 mm = 1000.

    1 m = 1000 mm, per definition.

    1000x1000 = ?

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...