
ZFS Confirmed In Mac OS X Server Snow Leopard

number655321 writes "Apple has confirmed the inclusion of ZFS in the forthcoming OS X Server Snow Leopard. From Apple's site: 'For business-critical server deployments, Snow Leopard Server adds read and write support for the high-performance, 128-bit ZFS file system, which includes advanced features such as storage pooling, data redundancy, automatic error correction, dynamic volume expansion, and snapshots.' CTO of Storage Technologies at Sun Microsystems, Jeff Bonwick, is hosting a discussion on his blog. What does this mean for the 'client' version of OS X Snow Leopard?"
This discussion has been archived. No new comments can be posted.

  • It's noted at the bottom of Apple's page.
    I was under the impression that they had initially hoped to include it in Leopard.

    However, it isn't just Apple: Microsoft has been working on various structured file systems (WinFS, OFS, and Storage+) for nearly 20 years with no shipped product.
  • Re:Finally (Score:3, Informative)

    by Jellybob ( 597204 ) on Wednesday June 11, 2008 @05:17PM (#23754907) Journal
    Our mail stores at work can fill 8TB quite happily (although they're on big network attached storage boxes, not ZFS).
  • by Cyberax ( 705495 ) on Wednesday June 11, 2008 @05:31PM (#23755083)
    It probably won't be faster (and may even be slower), but it will definitely be more reliable.

    ZFS uses super-paranoid checksumming which can detect drive problems in advance.
  • by cblack ( 4342 ) on Wednesday June 11, 2008 @05:35PM (#23755137) Homepage
    RAM settings can be tuned down (see ARC cache sizing). If you've just lurked on a list and not run it or read the tuning docs, you don't know, and your vague sense of it being "scary" should hold little weight. I will say that the defaults for ZFS on Solaris are geared towards large-memory machines, where you can afford to give a gig to the filesystem layer for caching and such. I don't know the absolute minimum RAM requirements, but I doubt they are inflexible and "scary".
    I've been running ZFS on Solaris Oracle servers for a while, and it is REALLY NICE in my opinion. Sun has also continually improved the auto-tuning, so you no longer have to worry about some of the settings that were commonly tuned even two releases ago (10u2 vs. 10u4).
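    As a concrete sketch of the ARC tuning mentioned above (the 1 GB cap is illustrative, not a recommendation): on Solaris 10 the ARC ceiling can be lowered by adding a tunable to /etc/system and rebooting.

    ```shell
    # Illustrative only: cap the ZFS ARC at 1 GB (0x40000000 bytes) on Solaris 10.
    # Requires root; the new ceiling takes effect after the next reboot.
    echo "set zfs:zfs_arc_max = 0x40000000" >> /etc/system
    ```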
  • by ApproachingLinux ( 756909 ) on Wednesday June 11, 2008 @05:42PM (#23755215)
    A good place to start is probably the ZFS Best Practices [solarisinternals.com] page; the Google text cache of that page is here [64.233.167.104]. Beyond that, try googling "zfs ram requirements".
  • by Lally Singh ( 3427 ) on Wednesday June 11, 2008 @05:48PM (#23755289) Journal
  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday June 11, 2008 @06:14PM (#23755629) Homepage

    ZFS uses super-paranoid checksumming which can detect drive problems in advance.
    No, checksumming cannot detect drive problems in advance; for that you need SMART. Once your drive has been corrupted, ZFS will kick in and prevent you from accessing any corrupt data.
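    The distinction can be sketched with plain sha256sum (file names are made up): a checksum recorded at write time flags corruption only when the data is read back, which is detection after the fact, not prediction of a failing drive.

    ```shell
    # Record a checksum at write time, then simulate silent bit rot and
    # show that the mismatch is only caught on the next read/verify.
    echo "important data" > block.dat
    sha256sum block.dat > block.dat.sum      # checksum recorded at write time
    echo "important dat4" > block.dat        # simulate silent corruption
    if sha256sum -c --quiet block.dat.sum 2>/dev/null; then
        echo "data intact"
    else
        echo "corruption detected on read"
    fi
    ```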
  • Re:Indeed. (Score:3, Informative)

    by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday June 11, 2008 @06:16PM (#23755669) Homepage

    Anyone know how many drives can fail at once in a RAID-Z2 before you are 100% SOL?
    RAID-Z2 can survive two drive failures; three failures will kill the pool.
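    A minimal sketch with hypothetical device names: RAID-Z2 stores two parity blocks per stripe, so any two of the member disks can fail before the third failure kills the pool.

    ```shell
    # Hypothetical device names; six disks with double parity.
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zpool status tank   # shows the raidz2 vdev and per-disk state
    ```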
  • by MSG ( 12810 ) on Wednesday June 11, 2008 @06:49PM (#23756095)
    I'd much rather have volume or block level snapshots ... All that without tying you to a single file system

    It is not possible to make consistent block-level snapshots without filesystem support. If your filesystem doesn't support snapshotting, it must be remounted read-only in order to take a consistent snapshot. This is true for all filesystems. When they are mounted read-write, there may be changes that are only partially written to disk, and creating a snapshot will save the filesystem in an inconsistent state. If you want to mount that filesystem, you'll need to repair it first.
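    For contrast, a sketch of the ZFS side with a hypothetical pool/dataset name: because ZFS is copy-on-write, a snapshot is atomic and consistent with no remount.

    ```shell
    zfs snapshot tank/mail@before-upgrade   # instantaneous, consistent
    zfs list -t snapshot                    # list existing snapshots
    zfs rollback tank/mail@before-upgrade   # revert the dataset if needed
    ```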
  • by Anonymous Coward on Wednesday June 11, 2008 @07:09PM (#23756339)

    Sun is not recommending we use it for zone root file systems until they hit update 6 of Solaris 10


    For that to work, you need a boot loader that supports zfs. This will come first in Solaris 10 x86 because they already have grub there. It's easier. For SPARC machines, it'll require new OpenBoot firmware that understands zfs.
  • by The Blue Meanie ( 223473 ) on Wednesday June 11, 2008 @08:06PM (#23756979)
    For that to work, you need a boot loader that supports zfs. This will come first in Solaris 10 x86 because they already have grub there. It's easier.

    Actually, GP was talking about ZONE root filesystems, which have absolutely nothing to do with the bootloader, since the zone runs on top of the underlying global zone. You CAN put a zone root on ZFS at the moment, but Sun neither recommends nor supports that setup.

    For SPARC machines, it'll require new OpenBoot firmware that understands zfs.

    And this is simply untrue, period, even for non-zone ZFS root filesystems. OpenBoot loads the next stage of boot code by reading raw data from blocks 1-8 of the chosen slice of the boot disk, and THAT is the code that needs to be able to understand the filesystem that will be mounted as root (UFS, ZFS, or whatever). OpenBoot only needs to understand the disk label/partitioning and to be able to read the disk blocks. It already does that, so non-zone ZFS root will NOT require any modifications or upgrades to OpenBoot, just updates to the bootloader code that is written to the disk in blocks 1-8.
  • by Anonymous Coward on Wednesday June 11, 2008 @08:22PM (#23757189)
    The fact that no one has refuted it can be seen as proof enough that the claim is so preposterous as to render such preposterousness self-evident and therefore unworthy of refutation. Additionally, your ability to receive intellectual "hand-outs" is stymied by said lack of refutation. Ergo, your desire for more information will go unfulfilled. However, being the bleeding-heart that I am: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Swap_Space [solarisinternals.com] and, for future reference, http://www.google.com/ [google.com]
  • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Wednesday June 11, 2008 @08:32PM (#23757301) Journal
    From what I understand, ZFS is fast, not memory-efficient. The minimum recommended system memory is 1GB, and more is definitely better.

    I'm no expert on ZFS; I just did a Google search on 'zfs benchmark' and then on 'zfs memory usage' and pulled information from the first few results. Maybe someone who actually knows something can chime in?
  • by Anonymous Coward on Wednesday June 11, 2008 @08:35PM (#23757339)
    Funny, I have it managing over a TB of disk space on a server with only 2GB of RAM and plenty of applications, and I don't have any performance issues whatsoever. You might want to try killing the talkoutofmyass daemon; I think it's using too many resources.
  • by the_B0fh ( 208483 ) on Wednesday June 11, 2008 @10:12PM (#23758179) Homepage
    The moon is made of green cheese. Until someone refutes that, you can continue to think so.

    Seriously though, ZFS for OS X is already available to be checked out and played with. Additionally, Apple hired one of the key ZFS people and has her working on ZFS for OS X now.

    I highly doubt it will suck, since, IIRC, she was one of the people who worked on the test suites that SUNW^H^H^H^HJAVA runs nightly.
  • by Anonymous Coward on Wednesday June 11, 2008 @11:56PM (#23759047)

    I was under the impression that they had initially hoped to include such in Leopard.
    Nope, not full support. Leopard has read-only command line ZFS support. There were rampant rumors about it (including a Slashdot item), some suggesting that the 2007 WWDC keynote would reveal not only R/W support for ZFS, but that the default disk format was going to change to ZFS in Leopard. However, Apple's position was always to have read-only support in Leopard (allowing for some functionality when 10.5 is considered a legacy OS, and ZFS is common), knowing that read-write support would be coming in a future version. There has also been a beta ZFS read-write driver for Leopard since just after last year's WWDC when ZFS support was announced.
  • No, license issues (Score:3, Informative)

    by bill_mcgonigle ( 4333 ) * on Thursday June 12, 2008 @01:27AM (#23759603) Homepage Journal
    will it be available on Debian(Ubuntu) soon?

    Not until OpenSolaris and Linux are both GPLv3.

    ZFS is patented, and patent protection is only conferred through use of the CDDL'ed code, which isn't compatible with the GPLv2. A cleanroom implementation of ZFS, besides being redundant, would have no license to use ZFS's patented technology. Whether Sun would actually sue a Linux dev over this is a separate issue.

    BSD implemented a Solaris compatibility layer to use the CDDL code directly; their license, unlike the GPLv2, isn't incompatible with the CDDL.

    Jeff and Linus have visited lately - I think Jeff was just helping him hook up a new gas grill, but maybe something work-related was discussed. :)
  • by MrMickS ( 568778 ) on Thursday June 12, 2008 @05:35AM (#23761179) Homepage Journal
    Solaris has used the idea of "unused memory is wasted memory" for a long time now. If memory isn't being used by applications then why not use it for file system buffering and cache? As long as it gets reaped by your memory manager when you need it for applications it seems like a good thing to do performance wise.
