OpenZFS Project Launches, Uniting ZFS Developers 297

Posted by Soulskill
from the putting-the-band-together dept.
Damek writes "The OpenZFS project launched today, the truly open source successor to the ZFS project. ZFS is an advanced filesystem in active development for over a decade. Recent development has continued in the open, and OpenZFS is the new formal name for this community of developers, users, and companies improving, using, and building on ZFS. Founded by members of the Linux, FreeBSD, Mac OS X, and illumos communities, including Matt Ahrens, one of the two original authors of ZFS, the OpenZFS community brings together over a hundred software developers from these platforms."
This discussion has been archived. No new comments can be posted.

  • I'm addicted (Score:5, Interesting)

    by MightyYar (622222) on Tuesday September 17, 2013 @07:09PM (#44879395)

    I love ZFS, if one can love a file system. Even for home use. It requires a little bit nicer hardware than a typical NAS, but the data integrity is worth it. I'm old enough to have been burned by random disk corruption, flaky disk controllers, and bad cables.

    • by Anonymous Coward on Tuesday September 17, 2013 @07:18PM (#44879469)

      I love ZFS too, but I'd fucking kill for an open ReiserFS...

  • by Anonymous Coward on Tuesday September 17, 2013 @07:13PM (#44879423)

    If this gets us BP-rewrite, the holy grail of ZFS, I'll be a happy man.

    For those who don't know what it is - BP-rewrite is block pointer rewrite, a feature that has been promised for many years but has never arrived. It's a lot like cold fusion in that it's always X years away from us.

    BP-rewrite would allow implementation of the following features:
    - Defrag
    - Shrinking vdevs
    - Removing vdevs from pools
    - Evacuating data from a vdev (say you wanted to destroy your old 10-disk vdev and add its disks back to the pool as a vdev with a different number of disks)
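As an illustration of why BP-rewrite is so hard, here is a toy Python sketch (not ZFS code; the names and the simplified copy-on-write model are my own assumptions): every block's on-disk address is embedded in its parent's block pointer, and because ZFS is copy-on-write, relocating one block forces a rewrite of every ancestor up to the uberblock. Snapshots and clones can share those blocks across many trees, multiplying the pointers that must be fixed.

```python
# Toy model of why block-pointer rewrite cascades: each parent block embeds
# the on-disk addresses of its children, and ZFS is copy-on-write, so moving
# one leaf forces a new copy of every ancestor up to the uberblock.

class Block:
    _next_addr = 0

    def __init__(self, children=()):
        self.children = list(children)
        self.addr = Block.alloc()  # simplified stand-in for a DVA

    @staticmethod
    def alloc():
        Block._next_addr += 1
        return Block._next_addr

def relocate(path):
    """path = [root, ..., leaf]; move the leaf, then rewrite ancestors (CoW)."""
    rewritten = 0
    for block in reversed(path):      # leaf first, then each parent in turn
        block.addr = Block.alloc()    # CoW: every touched block gets a new home
        rewritten += 1
    return rewritten

leaf = Block()
mid = Block([leaf])
root = Block([mid])
print(relocate([root, mid, leaf]))  # 3: the leaf plus both ancestors
```

With real pools the "path" fans out: deduped or snapshotted data means one physical block can be referenced from many trees, which is part of why nobody has shipped this.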

    • by Bengie (1121981)
      Re-balance vdevs, ftw! But yeah.. shrinking, defrag, blah blah blah.
    • This will have little to no effect on the bp-rewrite situation. The only people with the skill and intimate knowledge of ZFS to do the bp-rewrite coding have stated both that it's extremely difficult, and that the companies they work for/with have no interest in implementing the feature/paying them to work on the problem. I haven't heard any of them volunteering their free time to focus on it either. This is more or less a marketing campaign IMO.
    • And, of course, very importantly, the ability to add drives to a RAID-Z array [superuser.com] after it has been created.

  • Still CDDL... (Score:5, Informative)

    by volkerdi (9854) on Tuesday September 17, 2013 @07:15PM (#44879437)

    Oh well. I'd somehow hoped "truly open source" meant BSD license, or LGPL.

    • Re: (Score:3, Informative)

      by larry bagina (561269)
      CDDL is basically LGPL on a per-file basis.
      • by volkerdi (9854)

        CDDL is basically LGPL on a per-file basis.

        Perhaps the intent of the licenses is similar, but there's more to a license than that. Unfortunately, being licensed under the CDDL causes a lot more license incompatibility restrictions than either the LGPL or BSD license do. If it were under one of those, there'd be hope for seeing it as an included filesystem in the Linux kernel. But since it's under the CDDL, that can't happen.

        The developers are, of course, welcome to use whatever license they like. Just pointing out that the CDDL is *not* basicall

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          The GPL is the problem here, not the CDDL.

          It's funny how you cite license incompatibility restrictions, but Linux is the only one having those problems.

          OS X, FreeBSD and others don't seem to be having any problems with the CDDL.

          Gee, I wonder why.

          • You clearly have not been paying attention to the news, have you?

            After Snowden's leaks about security agencies' general malfeasance against the encryption standards that we rely on to communicate safely and securely (with your bank, just saying), you can't trust any software that you can't build from scratch (or that you know other, more capable people can build from scratch).

            The GPL guarantees that no stupid institution or individual has free rein to corrupt the computational resources you are using.

            Other lic

        • by devman (1163205)

          In fairness, it's the GPL that has the incompatibility problem, not the CDDL.

          CDDL is compatible with BSD, Apache2, LGPL, etc.

          GPLv2 is incompatible with CDDL, Apache2, GPLv3, LGPLv3, etc.

          Even if the license were not CDDL, it would have to be released under a license that came with a patent clause, which means GPLv3, LGPLv3, Apache2, or similar, all of which are incompatible with the GPLv2 that Linux is licensed under.

          CDDL isn't the problem.

    • Which would require a from-scratch cleanroom rewrite, probably.

      They could probably work on that, but if the current license isn't causing too much trouble, they probably have more important things to work on.

  • Patents? (Score:4, Insightful)

    by Danathar (267989) on Tuesday September 17, 2013 @07:19PM (#44879475) Journal

    Not to rain on anybody's parade, but will the commercial holders of ZFS allow this? Or will they unleash some unholy patent suit to keep it from happening?

  • by YesIAmAScript (886271) on Tuesday September 17, 2013 @07:26PM (#44879521)

    As long as Oracle's patents are valid, can anyone seriously believe this will go anywhere?

    His fleet of boats isn't going to pay for itself.

    • by Virtucon (127420)

      You mean that fleet of losing boats? Last time I checked it was 7-1 NZ with first to 9 winning.

      • Not quite as bad as you make it out to be considering Team Oracle started out at -2, and they also lost 3 of their key crew members from that incident. Bringing in that many new people at the last minute destroyed the team training that existed, which was a huge setback. The New Zealand ship though does seem to be faster.
    • by Bengie (1121981) on Tuesday September 17, 2013 @08:15PM (#44879837)
      Oracle released ZFS under a BSD-compatible license. Anyone is allowed to do whatever they like with the open-source code. Going forward, Oracle has not opened any code after v28, which is the last open-source version to be compatible with Oracle ZFS.
      • Oracle released ZFS under a BSD-compatible license. Anyone is allowed to do whatever they like with the open-source code.

        GP was talking about patents. If they had released it under (L)GPLv3 or Apache2, users would be safe from patents suits.

        • It's released under the CDDL, which explicitly grants patent rights. If they had licensed it under GPLv2, then they would have been able to sue people (clause 7 allows them to say 'oh, we've just noticed that we have patents on this. Everyone stop distributing it!') and if they'd released it under Apache2 or GPLv3 then it would still be GPLv2-incompatible, so still wouldn't have been useable in Linux.
  • by TheGoodNamesWereGone (1844118) on Tuesday September 17, 2013 @07:43PM (#44879643)
    I'm sure I'll be corrected if I'm wrong, but does it offer any advantage over BTRFS? I'm not trying to start a flame war; I'm honestly asking.
    • btrfs is still considered experimental by the devs; zfs is used in production.

      Past that, btrfs does not seem to support any sort of SSD caching, which is really a requirement for any modern fs.

      • I'm using it right now on an SSD, but then I've turned off the swapfile and 'discard' in FSTAB, with no trouble. I'll admit I was put off initially by its experimental nature. This is the first time I've used it; prior to now I always used ext2, 3, or 4. Thanks to everyone who commented.
    • by Vesvvi (1501135) on Tuesday September 17, 2013 @07:59PM (#44879755)

      I don't have any practical experience with BTRFS, but I use ZFS heavily at work.

      The advantage of ZFS is that it's tested, and it just works. When I started with our first ZFS testbed, I abused that thing in scary ways trying to get it to fail: hotplugging RAID controller cards, etc. Nothing really scratched it. Over the years I've made additional bad decisions such as upgrading filesystem versions while in a degraded state, missing logs, etc, but nothing has ever caused me to lose data, ever.

      The one negative to ZFS (if you can call it that) is that it makes you aware of inevitable failures (scrubs catch them). I'll lose about 1 or 2 files per year (out of many, many terabytes) just due to lousy luck, unless I store redundant high-level copies of data and/or metadata. Right now I use stripes over many sets of mirrored drives, but it's not enough when you read or write huge quantities of data. I've run the numbers and our losses are reasonable, but it's sobering to see the harsh reality that "good enough" efforts just aren't good enough for 100% at scale.
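For readers unfamiliar with how a scrub catches these failures, here is a minimal Python sketch of the idea (illustrative only, not ZFS code; the data and function names are made up): the checksum of every block is stored in its parent block pointer, separate from the data it covers, so a scrub can detect a silently corrupted mirror copy and heal it from a surviving good copy.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Two mirror copies of the same block, plus the checksum that lives in the
# parent block pointer (kept apart from the data it protects).
expected = checksum(b"important data")
mirror = [bytearray(b"important data"), bytearray(b"important data")]

mirror[0][0] ^= 0xFF  # silent corruption: a flipped bit on one disk

def scrub(copies, expected):
    """Verify every copy; repair bad copies from a good one if any survives."""
    good = next((bytes(c) for c in copies if checksum(bytes(c)) == expected), None)
    repaired = 0
    if good is not None:
        for i, c in enumerate(copies):
            if checksum(bytes(c)) != expected:
                copies[i] = bytearray(good)  # self-healing from the mirror
                repaired += 1
    return repaired

print(scrub(mirror, expected))  # 1 (one copy repaired)
print(all(checksum(bytes(c)) == expected for c in mirror))  # True
```

This is also why the parent sees the losses at all: a plain RAID without per-block checksums would happily return the corrupted copy without noticing.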

      • by mysidia (191772)

        The one negative to ZFS (if you can call it that) is that it makes you aware of inevitable failures (scrubs catch them). I'll lose about 1 or 2 files per year (out of many, many terabytes) just due to lousy luck

        What? Interesting.... I never lost a file on ZFS... ever; and I was doing 12TB arrays, for VMDK storage; these were generally RAIDZ2 with 5 SATA disks, running ~50 VMs. Then in ~2011, concatenated mirrored sets of drives; large number of Ultra320 SCSI spindles in a direct attach SCSI

        • by Bengie (1121981)
          It sounds like he disabled/reduced ZFS's default to keep extra copies of meta-data.
          • by mysidia (191772)

            It sounds like he disabled/reduced ZFS's default to keep extra copies of meta-data.

            That would seem to require altering the source code. At least in the Solaris X86 ZFS implementation; there is no zpool or zfs dataset option to turn off metadata redundancy.... of course it would be a bad idea.

        • by BitZtream (692029)

          I corrupted some files by the following:

          This is a home setup, all parts are generic cheapo desktop grade components, except slightly upgraded rocket raid cards in dumb mode for additional sata ports:

          4 HDDs, 2 vdevs that are 2-drive mirrors (RAID 1+0 with 4 drives essentially)

          1 drive in a 2 drive mirror fails, no hot spare.
          When inserting a replacement drive for the failed drive, the SATA cable to the remaining drive in the mirror was jiggled and the controller considered it disconnected.

          The pool instantly went

          • by mysidia (191772)

            You can't expect much better than what it did considering an entire vdev (both drives in the mirror) went offline as data was being written to them.

            I do expect better, because ZFS is supposed to handle this situation, where a volume goes down with in-flight operations; the filesystem by design is supposed to be able to re-import the pool after system restart and recover cleanly....

            That shouldn't have happened; it sounds like either the hard drive acknowledged a cache FLUSH, before data had been w

            • by cheetah (9485)

              Doesn't look like he had a ZIL from the description of the hardware. So it's totally understandable that he might have corruption.

        • by Vesvvi (1501135)

          I had an upgrade path similar to yours, starting with RAIDZ and moving to a group of mirrors. I try not to let any pool get too big, so there are maybe 20 drives/pool. It's always the small files that are lost (see post above). I think each server does about 6 PB/year in each direction on these highly-accessed files, so I think it's reasonable to drop ~1MB of non-critical files (they basically store notes of data analysis).

          So far I've never had a problem with VM images, but now we're mitigating that by addin

    • by mysidia (191772)

      I'm sure I'll be corrected if I'm wrong, but does it offer any advantage over BTRFS? I'm not trying to start a flame war; I'm honestly asking.

      BTRFS is still highly experimental. I had production ZFS systems back in 2008. A mature ZFS implementation is a lot less likely to lose your data with filesystem code at fault (assuming you choose appropriate hardware and appropriate RAIDZ levels with redundancy).

    • by sl3xd (111641)

      BTRFS has a large number of features that are still in the "being implemented", or "planning" stages. In contrast, those features are already present, well tested, and in production for half a decade on ZFS. Many touted "future" features (such as encryption) of BTRFS are documented as "maybe in the future, if the planets are right, we'll implement this. But not anytime soon"

      Comparing the two is like making up an imaginary timeline where ReiserFS 3 was 4-5 years old and in wide deployment while ext2 was bein

    • by jhol13 (1087781)

      ZFS is tested and has been used in a huge number of different environments with very positive feedback for well over a decade. I do not know of any catastrophic failures (though there must be some).
      BTRFS requires the latest versions of the Linux kernel and of itself to work. I have no clue about its testing (removing disks on the fly, etc.) and it is definitely not widely deployed, not yet proven to work (a few anecdotes do not count).
      BTRFS seems to be only slightly more robust than it was five years ago - during this time I have lost t

  • by the_B0fh (208483) on Tuesday September 17, 2013 @09:37PM (#44880315) Homepage

    I wish they had encryption... *sigh*

    No, I don't want workarounds, I want it to be built in to ZFS like in Solaris 11.

    • There are no satisfactory workarounds, and never will be. The crypto needs to be handled within ZFS or it becomes an over complicated and inefficient mess. (As you are probably aware.) Consider a ZFS mirror on top of two disks encrypted by the OS; even though the data is identical, it now needs to be encrypted twice on write, and decrypted twice on scrub. For ditto blocks, multiply the amount of crypto work by another two or three. There are now (at least) two keys to manage and still no fine granularity
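The layering argument above can be put in rough numbers. A back-of-the-envelope sketch, assuming a 2-way mirror and ditto blocks with copies=2 (the counts are symmetric-cipher operations per logical block, not benchmark results):

```python
# Illustrative arithmetic for encrypting below ZFS versus inside ZFS.
# Assumptions: a 2-way mirror, and ditto blocks with copies=2.

mirror_ways = 2   # each write lands on both sides of the mirror
ditto_copies = 2  # ZFS may also store redundant copies of a block

# Encryption below ZFS (e.g. whole-disk crypto under each mirror member):
# every physical copy is encrypted separately on write.
below_zfs = mirror_ways * ditto_copies

# Encryption inside ZFS: encrypt the block once, then replicate ciphertext.
inside_zfs = 1

print(below_zfs, inside_zfs)  # 4 1
```

Scrubs pay the same multiplier on the read side, since every physical copy has to be decrypted before its checksum can be verified.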

  • How does ZFS compare to btrfs? Several intentionally unnamed and unlinked commentaries on ZFS's apparent current omission from Mac OS X refer to btrfs as the more GPL-compliant alternative to ZFS. I need more information, as I do not think btrfs has the same aggressive checksumming and automatic volume sizing features that ZFS does.
    Thanks.

    • by dbIII (701233)
      ZFS is for multiple disks and btrfs is not necessarily for multiple disks, so there are differences, single-SSD behaviour being one of them. Where they overlap, ZFS shines, mostly due to the amount of work that has been put in. However, up until now ZFS has been progressing very differently on different platforms. Having ZFS on a production Linux server is currently still a bad idea in terms of performance and portability if that server could instead be reinstalled as BSD, so on some platforms btrfs may behave better
