Data Storage / Open Source

OpenZFS Project Launches, Uniting ZFS Developers

Damek writes "The OpenZFS project launched today, the truly open source successor to the ZFS project. ZFS is an advanced filesystem that has been in active development for over a decade. Recent development has continued in the open, and OpenZFS is the new formal name for this community of developers, users, and companies improving, using, and building on ZFS. Founded by members of the Linux, FreeBSD, Mac OS X, and illumos communities, including Matt Ahrens, one of the two original authors of ZFS, the OpenZFS community brings together over a hundred software developers from these platforms."
This discussion has been archived. No new comments can be posted.

  • Patents? (Score:4, Insightful)

    by Danathar ( 267989 ) on Tuesday September 17, 2013 @08:19PM (#44879475) Journal

    Not to rain on anybody's parade, but will the commercial holders of ZFS allow this? Or will they unleash some unholy patent suit to keep it from happening?

  • by TheGoodNamesWereGone ( 1844118 ) on Tuesday September 17, 2013 @08:43PM (#44879643)
    I'm sure I'll be corrected if I'm wrong, but does it offer any advantage over BTRFS? I'm not trying to start a flame war; I'm honestly asking.
  • Re:Cool, but.. (Score:5, Insightful)

    by Bengie ( 1121981 ) on Tuesday September 17, 2013 @09:17PM (#44879861)

    "Everything else is already handled with LVM and software RAID."

    You have a great sense of humor, keep it up.

  • Re: FINALLY. (Score:1, Insightful)

    by Anonymous Coward on Tuesday September 17, 2013 @09:39PM (#44879971)

    He did say home use, you dumb obnoxious cunt.

  • by batkiwi ( 137781 ) on Wednesday September 18, 2013 @12:36AM (#44880883)

    Nice FUD there. You picked the btrfs-progs, which are the userspace tools, not the actual btrfs filesystem driver.

    http://git.kernel.org/cgit/linux/kernel/git/josef/btrfs-next.git/log/ [kernel.org]

  • Re:Still CDDL... (Score:2, Insightful)

    by Anonymous Coward on Wednesday September 18, 2013 @01:29AM (#44881059)

    The GPL is the problem here, not the CDDL.

    It's funny how you cite license incompatibility restrictions, but Linux is the only one having those problems.

    OS X, FreeBSD and others don't seem to be having any problems with the CDDL.

    Gee, I wonder why.

  • Re: Data integrity (Score:5, Insightful)

    by TheRaven64 ( 641858 ) on Wednesday September 18, 2013 @05:06AM (#44881731) Journal

    ZFS doesn't have ECC, but it does checksum each block, so it can detect per-block errors. If you have valuable data, you can set the copies property to a value greater than 1 for that dataset; each block is then stored more than once on the disk, so if one copy fails its checksum the other is used to recover. If you have three disks, you can use RAID-Z, which costs you 1/3 of the space (not 1/2) and allows any single-disk failure to be recovered. Running a scrub (zpool scrub) makes it validate all of the data, and any block that fails its checksum is reconstructed from the other disks.
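
    As a minimal sketch of those knobs, the standard zpool/zfs commands can be driven from Python like this. The pool name "tank", the dataset "tank/important", and the device names da0/da1/da2 are placeholders for illustration; this assumes root privileges on a system with the ZFS CLI tools installed.

        # Hypothetical pool/dataset/device names; requires root and the ZFS CLI tools.
        import subprocess

        def run(*cmd):
            print("#", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Three-disk RAID-Z vdev: one disk's worth of space goes to parity,
        # so any single-disk failure is recoverable.
        run("zpool", "create", "tank", "raidz", "da0", "da1", "da2")

        # Store two copies of every block in this dataset, on top of RAID-Z.
        run("zfs", "create", "tank/important")
        run("zfs", "set", "copies=2", "tank/important")

        # Walk all the data, verify checksums, and repair failures from redundancy.
        run("zpool", "scrub", "tank")
        run("zpool", "status", "-v", "tank")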

    The reason it doesn't use ECC is that ECC doesn't mesh well with the failure modes of disks. ECC is used in RAM because when it gets hot, is hit by a cosmic ray, or whatever, it is common for a single bit to flip (in a single direction, which makes the error correction easier). In a disk, you typically have an entire block fail, not a single bit. Modern disks use multiple levels, so the smallest failure that is even theoretically possible might be a single byte (or nibble) in a block. And since the failure isn't biased, you'd need a fairly large amount of space for ECC to be useful. A better approach would be for the filesystem to generate something like Reed–Solomon code blocks for every n blocks that are written. This would allow single-block errors to be recovered, as long as the other blocks are okay. The downside is that the error-correcting block would need to be rewritten whenever any of the other blocks is modified, so a single-block write would end up triggering a lot of reads, which would hurt performance. For ZFS this might actually be relatively easy to implement: it uses a CoW structure, so block overwrites are rare (although erasing a lot of data would require a lot of checksums to be recalculated), and blocks are written out in transaction groups, so including an error-correction block at the end of each group might be a fairly simple modification.
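
    A toy sketch of that idea, using a single XOR parity block per group of data blocks rather than a real Reed–Solomon code: the per-block checksums identify which block went bad, and the parity block plus the surviving blocks rebuild it. The block contents and group size here are made up, and this scheme only survives one bad block per group.

        # Toy model: checksum every block, keep one XOR parity block per group.
        import hashlib
        from functools import reduce

        BLOCK_SIZE = 4096

        def parity(blocks):
            # XOR all blocks together to form the recovery block.
            return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

        def checksum(block):
            return hashlib.sha256(block).digest()

        # "Write" a group of four data blocks plus checksums and a parity block.
        data = [bytes([i]) * BLOCK_SIZE for i in range(4)]
        sums = [checksum(b) for b in data]
        p = parity(data)

        # "Read" the group back with block 2 corrupted on disk.
        damaged = list(data)
        damaged[2] = b"\xff" * BLOCK_SIZE

        for i in range(len(damaged)):
            if checksum(damaged[i]) != sums[i]:
                # Rebuild the bad block from the parity block and the good blocks.
                survivors = [blk for j, blk in enumerate(damaged) if j != i]
                damaged[i] = parity(survivors + [p])
                assert checksum(damaged[i]) == sums[i]
                print("recovered block", i)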

  • Re:I'm addicted (Score:4, Insightful)

    by The Last Gunslinger ( 827632 ) on Wednesday September 18, 2013 @05:19AM (#44881789)
    I'm sure most readers here "got" it. It just wasn't funny.
  • by jotaeleemeese ( 303437 ) on Wednesday September 18, 2013 @05:26AM (#44881813) Homepage Journal

    You clearly haven't been paying attention to the news, have you?

    After the Snowden leaks about security agencies undermining the encryption standards we rely on to communicate safely and securely (with your bank, just saying), you can't trust any software that you, or people more capable than you, can't build from scratch.

    The GPL guarantees that no stupid institution or individual has free rein to corrupt the computational resources you are using.

    Other licenses make no such guarantee, and the consequence is that your precious Apple OS and applications are now tainted, because you have no way of knowing whether they have backdoors or not.

    What does this have to do with ZFS you ask?

    Well, encryption. ZFS has the capability to encrypt the datasets you are using, but unfortunately its license does not make it suitable for truly secure encryption in the cases where the company or individual implementing it (Oracle, ahem, ahem) chooses not to make the source code available.

    At that point you have no way to know if backdoors have been added to your implementation of ZFS.

    So again, how is the GPL, a license that protects your security, the problem?

  • by Above ( 100351 ) on Wednesday September 18, 2013 @09:02AM (#44882659)

    You are correct that the disk will become fragmented, but the implication is that fragmentation is a problem, and that's simply not true. One of the prime causes of the misunderstanding is that fragmentation in Unix file systems is night-and-day different from fragmentation in a FAT file system, which is where most people got their experience defragging Windows drives. Unix file systems use much better algorithms to control fragmentation, so there is (generally) a lot less of it on a per-file basis. They also defragment automatically: in some cases, when a fragmented file is written to, the file system will defragment part of that file and rewrite it.

    The Berkeley FFS was the first to "solve" this problem, reserving 10% of the disk space primarily to avoid fragmentation. Decades of experience show that, for all but the most extreme corner cases, that is enough to prevent any significant fragmentation or performance degradation.

    * http://www.eecs.harvard.edu/~keith/research/tr94.html
    * http://www.cs.berkeley.edu/~brewer/cs262/FFS.pdf
    * http://www.cs.rutgers.edu/~pxk/416/notes/12-fs-studies.html
    * http://pages.cs.wisc.edu/~remzi/OSTEP/file-ffs.pdf

    The result is that for most applications fragmentation is a complete non-issue. After 25 years of playing with various file systems, I've seen it be an issue exactly once, on an NNTP server that reached 20% fragmentation. Most user desktops and general-purpose servers have under 1% fragmentation at all times (a rough way to measure this is sketched at the end of this comment). Generally, if you have a fragmentation problem it's because the storage is too full and you need to add storage anyway (the aforementioned NNTP server was a good example); adding the storage makes the problem go away.

    Most users of Unix file systems will never need to give fragmentation a second thought.
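
    If you want to check that claim on your own machine, here is a rough sketch that counts extents per file with e2fsprogs' filefrag. The starting path is a placeholder, and this assumes a Linux filesystem that filefrag understands; a file reported with more than one extent is counted as fragmented.

        # Rough sketch: count extents per regular file under a directory using
        # e2fsprogs' filefrag. The path is hypothetical; adjust for your system.
        import os
        import re
        import subprocess

        root = "/home"
        total = fragmented = 0

        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if not os.path.isfile(path):
                    continue
                out = subprocess.run(["filefrag", path],
                                     capture_output=True, text=True).stdout
                m = re.search(r"(\d+) extents? found", out)
                if m:
                    total += 1
                    if int(m.group(1)) > 1:
                        fragmented += 1

        if total:
            print(f"{fragmented}/{total} files fragmented "
                  f"({100.0 * fragmented / total:.1f}%)")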

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...