ZFS Confirmed In Mac OS X Server Snow Leopard

number655321 writes "Apple has confirmed the inclusion of ZFS in the forthcoming OS X Server Snow Leopard. From Apple's site: 'For business-critical server deployments, Snow Leopard Server adds read and write support for the high-performance, 128-bit ZFS file system, which includes advanced features such as storage pooling, data redundancy, automatic error correction, dynamic volume expansion, and snapshots.' CTO of Storage Technologies at Sun Microsystems, Jeff Bonwick, is hosting a discussion on his blog. What does this mean for the 'client' version of OS X Snow Leopard?"
This discussion has been archived. No new comments can be posted.

  • by daveschroeder ( 516195 ) * on Wednesday June 11, 2008 @05:12PM (#23754819)
    Nothing in particular. It means that ZFS isn't going to be officially supported and/or promoted on the client. But Mac OS X and Mac OS X Server are essentially the same OS, with some different/additional pieces on top in Server. Like other filesystems that were exposed via the GUI tools and supported on Mac OS X Server but not on Mac OS X in the past -- such as Mac OS Extended (Journaled, Case-Sensitive) -- it will likely be available via the command-line tools, and usable by people savvy enough to work with other boot devices to format the volume in the desired fashion, etc.
    • 10.5 client includes read-only ZFS support. The Mac ZFS development is available here [macosforge.org]
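
      Assuming Apple ships the same zpool/zfs command-line tools as Solaris, formatting a spare external disk from Terminal might look roughly like this (device, pool, and mountpoint names are made up for illustration):

      zpool create tank /dev/disk1                      # turn the spare disk into a pool
      zfs create tank/stuff                             # carve a filesystem out of the pool
      zfs set mountpoint=/Volumes/stuff tank/stuff      # put it somewhere Finder can see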
  • It should be noted at the bottom of the page.
    I was under the impression that they had initially hoped to include it in Leopard.

    However, it isn't just Apple; Microsoft has been working on various structured file systems (OFS, Storage+, and WinFS) for nearly 20 years with no shipped product.
    • by QuantumRiff ( 120817 ) on Wednesday June 11, 2008 @06:32PM (#23755885)
      WinFS is almost ready... It's going to be here any day now. I heard it's the base storage layer for Duke Nukem Forever!
    • by toby ( 759 ) *

      The killer features for ZFS have nothing to do with "structured" filesystems; ZFS is essentially POSIX.

      Everybody has their own favourites:

      • uncompromising data integrity through checksums and transactional copy-on-write; high durability using scrubs and redundancy
      • cheap snapshots
      • manage filesystems as simply as directories
      • pooled storage for manageability
      • very high throughput for certain workloads
      • etc.

      While snapshotting and copy-on-write are not entirely new, many of ZFS's features are not available anywhere else.
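
      On the data-integrity point above, the usual way to exercise those checksums is a scrub; a minimal sketch (the pool name is illustrative):

      zpool scrub tank          # walk every block in the pool and verify its checksum
      zpool status -v tank      # report any checksum errors found (and repaired, given redundancy)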

  • How will I benefit? (Score:4, Interesting)

    by The Ancients ( 626689 ) on Wednesday June 11, 2008 @05:15PM (#23754865) Homepage

    OK, I'm reasonably technical, but not savvy with the intimate workings of a file system. What will this mean for the average user with an iMac or MacBook Pro, when ZFS finally appears as the default FS of OS X? Will it be faster, more error-resistant, or...?

    • Re: (Score:3, Informative)

      by Cyberax ( 705495 )
      It probably won't be faster (and may even be slower), but it definitely will be more reliable.

      ZFS uses super-paranoid checksumming which can detect drive problems in advance.
      • Re: (Score:3, Informative)

        ZFS uses super-paranoid checksumming which can detect drive problems in advance.
        No, checksumming cannot detect drive problems in advance; for that you need SMART. Once your drive has been corrupted, ZFS will kick in and prevent you from accessing any corrupt data.
        • by Cyberax ( 705495 ) on Wednesday June 11, 2008 @06:21PM (#23755729)
          SMART sucks. That's just a fact: very often it only kicks in once your drive has already failed.

          Also, there are a lot of real cases where a malfunctioning drive silently writes incorrect data. ZFS will help you in those cases.
          • Re: (Score:3, Insightful)

            by MrMacman2u ( 831102 )

            Actually S.M.A.R.T. is an amazing tool and is utterly invaluable in monitoring drive health... IF and ONLY IF you have the appropriate software (Windows: Google it; *nix: smartmontools) AND know how to read the resulting output.

            The reason many people think SMART sucks, and why I say to check SMART manually, is that 95% of drive manufacturers set the threshold or "fail" values WAAAAY too high or low!

            I use SMART constantly (about once every other week) to "check in" on how healthy my drives are and knowing how
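
            For instance, a manual check-in with smartmontools might look like this (the device name is illustrative):

            smartctl -H /dev/sda      # the drive's overall health self-assessment
            smartctl -A /dev/sda      # raw attribute values, to judge against your own thresholds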

            • Re: (Score:3, Insightful)

              by Cyberax ( 705495 )
              Maybe I'm unlucky, but I've had three notebook HDs die on me without any warning, even though I'm using the 'SmartMon' program, which should warn me about a worsening drive condition.

              Also, Google's hard drive survey seems to come to the same conclusion: "One of those we thought was most intriguing was that drives often needed replacement for issues that SMART drive status polling didn't or couldn't determine, and 56% of failed drives did not raise any significant SMART flags"

              http://www.engadget.com/2007/02/18/mass [engadget.com]
              • Again, as I mentioned at the beginning of my post, most drive manufacturers set the threshold fail values to such an extreme that, if they are reached, the drive has already failed for all intents and purposes. Your software cannot warn you if the drive does not report a failure in progress because a threshold is set at an extreme value or '0'.

                This is why I advocate learning what the "appropriate" SMART value ranges are for your drive and manually checking the values. Because of this, I have only lost one drive EVER due t

            • Re: (Score:3, Interesting)

              by Fweeky ( 41046 )
              SMART sometimes works; very often it doesn't. Manufacturers have been progressively crippling it, to the point where some barely even monitor anything, because they perceive it as bad for business.

              e.g. Seagate are one of the few vendors who are honest about ECC correction and seek error rates, and their SMART counters are correspondingly huge and read rather poorly (50-60/100 is a common value); you can even graph them and see the rates sweep up and down as the drive moves the heads
          • SMART sucks

            There have been numerous studies showing that SMART failure predictions are frequently incorrect, saying that a drive is not going to fail when it is, or is late in reporting a failure.

            "most intriguing was that drives often needed replacement for issues that SMART drive status polling didn't or couldn't determine, and 56% of failed drives did not raise any significant SMART flags (and that's interesting, of course, because SMART exists solely to survey hard drive health)"

            source:

            http://www.engadge [engadget.com]
        • Once your drive has been corrupted ZFS will kick in and prevent you from accessing any corrupt data.

          If you have redundancy (such as mirror or RAIDZ), ZFS will also repair the data.

      • ZFS uses super-paranoid checksumming which can detect drive problems in advance.
        It won't detect them in advance. But, used appropriately, you won't care when they happen.

        I'm not sure that's relevant to single-drive Macs though. It needs a mirror with a clean copy of the data to correct from.
    • by cblack ( 4342 )
      Probably for an end-user workstation with a single hard drive, the main benefit will be resistance to errors. ZFS also has optional transparent compression, so that could be useful as well, I suppose.
      • Compression! It is the one feature that I'm jealous of on Windows machines. I constantly run Linux on older machines and wish for a default file system that offered compression. I know hard drives are cheap, but not free. The same thing comes up on slightly older Mac hard drives all the time. 10GB was a standard laptop drive on older PowerBooks and iBooks; these machines still run Leopard fine, but it takes half their hard drive!
        • The oldest laptop Apple supports running Leopard is the November 2002 TiBook, and that shipped with a 60GB drive.
          • Perhaps the oldest, but it only requires a G4 866 or better, which includes a lot of late-model iBooks with 30GB drives. I'm just saying I always run out of space on laptops (even with a newer 80GB MacBook). Lots of stuff can't be compressed, but I'd like those few extra gigs when it can be. I have a 1TB external drive at home, but it defeats the point of a laptop to lug it around.
    • by countSudoku() ( 1047544 ) on Wednesday June 11, 2008 @05:40PM (#23755193) Homepage
      You'll also be able to create a pool of drives that acts as a single drive, like you can with the RAID setup now, but far faster to set up. Growing your pools is a breeze, and if they can tie Time Machine into the ZFS snapshots, my god, what can't we do?! Seriously, this will be a nice advanced file system for Mac OS X. We've been using it on Solaris for a year now for zone root/usr file systems, and ZFS is AWESOME!!! Except that even Sun is not recommending we use it for zone root file systems until they hit update 6 of Solaris 10. Whoops! That's in November, so we're just sitting tight until they support ZFS for the Solaris root/OS file systems. Then we upgrade. Then? Then we profit!

      Ob. Apple joke referencing an earlier /. article:
      Of course, consumer OS X support of ZFS will have to wait until they code in skipping backups of your iTunes library! ;)
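
      For the curious, the pooling and growth described above is roughly this on Solaris today (disk names are illustrative; whatever Apple ends up exposing may differ):

      zpool create tank mirror c0t0d0 c0t1d0      # a pool made of one mirrored pair
      zpool add tank mirror c0t2d0 c0t3d0         # grow the pool with a second pair, online
      zfs list tank                               # the extra capacity shows up immediately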
      • Faster to set up, but real RAID cards and even fake-RAID setups will likely run faster.
    • by Lally Singh ( 3427 ) on Wednesday June 11, 2008 @05:42PM (#23755213) Journal
      Wow, it's such a major leap it's hard to describe.

      Imagine having an external HDD on your Mac. Whenever you plug it in, it automatically starts mirroring the internal drive.

      Take atomic snapshots of your entire filesystem, send it over scp to back up your drive as a single file. Or, send over the difference between two snapshots as an incremental backup.

      Have more than one drive, want mirroring? 2 steps on the command line.

      Have a directory you really care about? Make it a sub-filesystem (this doesn't involve partitioning, etc, just a command that's almost identical in syntax and performance to mkdir) and tell ZFS to store 2 or 3 copies of it.

      Have a directory you'd like auto-compressed? Tell ZFS to compress it. New data written to it is automatically and transparently compressed, completely invisible to the user and to applications.

      And I'm just getting started. Trust me on this: Google it.
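
      For reference, here is roughly what those examples look like with today's Solaris/OpenSolaris commands; names are illustrative, and whether Apple exposes identical tools is an open question:

      zpool attach tank disk0 disk1               # turn a single-disk pool into a mirror
      zfs snapshot -r tank@tuesday                # atomic snapshot of every filesystem in the pool
      zfs send tank@tuesday | ssh backuphost 'cat > tank-tuesday.zfs'              # whole backup as one file
      zfs send -i tank@monday tank@tuesday | ssh backuphost 'cat > tank-incr.zfs'  # incremental between snapshots
      zfs create tank/thesis                      # sub-filesystem, about as cheap as mkdir
      zfs set copies=2 tank/thesis                # keep two copies of everything in it
      zfs set compression=on tank/docs            # new data is transparently compressed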
      • by Lally Singh ( 3427 ) on Wednesday June 11, 2008 @05:48PM (#23755289) Journal
      • by profplump ( 309017 ) <zach-slashjunk@kotlarek.com> on Wednesday June 11, 2008 @06:15PM (#23755653)
        What is it with you people and filesystem-level snapshots?

        I'd much rather have volume or block level snapshots, like with LVM and other similar systems. Those systems provide RO and RW snapshots, dynamic partitioning, drive spanning, etc., and can be easily layered with other block-level components to provide compression, encryption, remote storage, etc. as well. All that without tying you to a single file system (though that may be a moot point on OS X, as it will only boot from HFS/HFS+ AFAIK).

        If you really wanted to, you could even write a script that takes no arguments other than a path name and automatically creates a series of volumes of an appropriate size for the folder you selected, sets up software RAID to mirror them into a single device, mounts the device with a compression filter, formats it (with any file system), mounts it normally, moves the data over, drops the old data, rebinds the mount point to the old path name, and updates fstab. The only thing you miss here that ZFS may be able to do (I didn't check) is avoiding closing the files that are moved.

        I'm not saying the features ZFS has are useless -- I think they are great -- they just aren't all that new and exciting. They might be new to OS X, or repackaged in a way that's easy to consume, but they are things that anyone with big disks has been doing for years.
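
        For comparison, the LVM snapshot being described is indeed short (volume group, sizes, and names are illustrative):

        lvcreate --size 5G --snapshot --name home_snap /dev/vg0/home    # copy-on-write snapshot volume
        mount -o ro /dev/vg0/home_snap /mnt/snap                        # mount it read-only for backup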
        • by MSG ( 12810 ) on Wednesday June 11, 2008 @06:49PM (#23756095)
          I'd much rather have volume or block level snapshots ... All that without tying you to a single file system

          It is not possible to make consistent block-level snapshots without filesystem support. If your filesystem doesn't support snapshotting, it must be remounted read-only in order to take a consistent snapshot. This is true for all filesystems. When they are mounted read-write, there may be changes that are only partially written to disk, and creating a snapshot will save the filesystem in an inconsistent state. If you want to mount that filesystem, you'll need to repair it first.
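
          The "remount read-only" step is usually accomplished by freezing the filesystem for the instant of the snapshot; a rough sketch with XFS and LVM (names are illustrative):

          xfs_freeze -f /home                             # flush and block new writes
          lvcreate -s -L 5G -n home_snap /dev/vg0/home    # take the block-level snapshot
          xfs_freeze -u /home                             # resume writes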
        • by ArbitraryConstant ( 763964 ) on Wednesday June 11, 2008 @08:23PM (#23757203) Homepage

          I'd much rather have volume or block level snapshots, like with LVM and other similar systems. Those systems provide RO and RW snapshots, dynamic partitioning, drive spanning, etc., and can be easily layered with other block-level components to provide compression, encryption, remote storage, etc. as well. All that without tying you to a single file system (though that may be a moot point on OS X, as it will only boot from HFS/HFS+ AFAIK).
          ZFS shits all over LVM:

          -Say I want to take hourly snapshots and retain them for a month. When the parent data for a ZFS snapshot changes, ZFS merely has to leave the old data alone. OTOH, LVM must copy the block into every snapshot before it can change it in the parent. With LVM, my hourly snapshots will quickly cause the disk to thrash to a halt and eat much more space, while ZFS incurs a negligible penalty (see the sketch at the end of this post).

          -LVM allows dynamic partitioning, but it can't share capacity on the fly. If I delete a file on an LVM-hosted filesystem, that space becomes available to that filesystem but not to all the others, unless I shrink the filesystem, which generally requires taking it offline for a while.

          -Another layer could potentially handle checksums on LVMs, but in practice Linux can't do this properly by itself.

          -ZFS can use other layers, there's just a substantial benefit to letting it run the show.

          The only reason this won't turn out to be a huge disadvantage for Linux is that BTRFS [kernel.org] will provide most of the same features. Layering can be a very helpful design tool, but there are times it becomes a hindrance. It's important to be flexible when there are benefits to integrating stuff into a single layer.
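
          A sketch of the hourly-snapshot case from the first point, as a cron job (pool and dataset names are illustrative):

          zfs snapshot tank/data@$(date +%Y%m%d-%H00)       # hourly snapshot, named by timestamp
          zfs list -t snapshot -o name,used,referenced      # snapshots stay nearly free until the parent data diverges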
          • One of the major points of the ZFS checksums is that the checksum for block X is stored in the block that points to X. In addition to ensuring that X is written properly, it also ensures that writes actually go to the correct blocks.
        • What other filesystems do end to end checksumming and self healing, on any operating system today?
    • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Wednesday June 11, 2008 @05:42PM (#23755217) Homepage

      For one thing it would make the implementation of Time Machine much simpler. No more directory tree full of hard links and such. If they put it on other boxes (like Time Capsule) they could unify the format (it uses a different storage method). Then you could pull the Time Capsule drive, stick it in your Mac, and be all set.

      For servers, it has all the standard ZFS benefits (easy storage adding, redundancy, performance, etc).

      For home users, it would let you simply plug a new drive into your Mac, press a button, and have it just add space to your main drive. You wouldn't need to specifically set up a RAID. No resizing. No "external drive" if you don't want it that way. Just buy a drive, plug it in, and it's all handled for you.

      • by Lars512 ( 957723 ) on Wednesday June 11, 2008 @06:48PM (#23756087)

        For home users, it would let you simply plug a new drive into your Mac, press a button, and have it just add space to your main drive. You wouldn't need to specifically set up a RAID. No resizing. No "external drive" if you don't want it that way. Just buy a drive, plug it in, and it's all handled for you.

        I'm not sure you'd want it to work this way for external drives. Will they be available at crucial parts of boot time when some important files are striped across them? Even if they are, you're basically unable to ever remove the external drive again. If there's a problem with the drive, all your data is lost. Probably the way these drives work now is better. Maybe mirroring onto an external drive would work OK, but it would then be an undesirable write bottleneck.

        • As Apple has been (over)selling for years: Your Mac is your Digital Life.

          Home users should be much more concerned about data durability and integrity than performance (not that performance would ever be bad).

          A physical photo album will last a couple of centuries. Your hard disk may not last 2 years. And there's only one of them in any consumer Mac.

          People are going to lose their digital snapshots, their unfinished novels, their emails and love letters, because nobody understands the inherent fragility of

          • by Lars512 ( 957723 )

            ...and Apple has really come to the table with Time Machine, making backup happen for the masses. Perhaps it's reasonable to assume that if our computer crashes, we lose one hour's worth of non-recoverable work. That's the promise if you use Time Machine with default settings.

            They could do better by making mirroring standard in desktops, but for laptops they're probably on the money. They could do better by perhaps using ZFS for block-level backups rather than file-level, but what they have now is far better

    • by fishdan ( 569872 ) on Wednesday June 11, 2008 @05:44PM (#23755241) Homepage Journal
      • Imagine a future version of Time Machine that has multiple drives. One drive bites the big one. No worries! You just go to Fried or Office Despot or wherever and get a replacement. You plug the little sucker in and BAM! The drive gets "resilvered" and your data is safe. If two of the drives go TU, same thing. Anyone know how many drives can fail at once in a RAID-Z2 before you are 100% SOL?
        • Re: (Score:3, Informative)

          Anyone know how many drives can fail at once in a RAID-Z2 before you are 100% SOL?
          RAID-Z2 can survive two drive failures; three failures will kill the pool.
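
          Setting one up and replacing a dead member is short (disk names are illustrative):

          zpool create tank raidz2 disk0 disk1 disk2 disk3 disk4    # double parity: any two disks can fail
          zpool replace tank disk2 disk5                            # swap in the replacement; resilvering starts on its own
          zpool status tank                                         # watch the resilver progress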
          • by hacker ( 14635 )

            RAID-Z2 can survive two drive failures; three failures will kill the pool.

            That's interesting, because the Linux implementations do not suffer these flaws. Look at the Drobo [drobo.com] for a hardware device that implements exactly this at runtime.

      • It's interesting to note that the ZFS monitors don't seem to recover until the gentleman unplugs the failed drive. Is this a bug with ZFS, and has it been fixed?

        RAID-6 [wikipedia.org] & RAID-DP [wikipedia.org] can also survive a dual-drive sledgehammer failure. The Linux MD Driver supports RAID-6 [die.net].

        How does Sun's RAID-Z2 distinguish itself from these existing implementations?
    • by pavon ( 30274 ) on Wednesday June 11, 2008 @06:09PM (#23755565)
      One feature of ZFS is copy-on-write file snapshots, which allow you to "copy" a file while the common portions of the file are shared between the two copies, reducing disk usage.

      This is great for backing up large files containing frequent but small changes: for example, encrypted disk images, Parallels Windows disk images, database files, the Entourage mailbox, or home videos you are in the process of editing.

      Right now Time Machine creates an entire copy of the file each time it changes, making it unsuitable for backing up these types of files, so you are encouraged to exclude them from backup. ZFS could fix that.

      It could also make adding disk space more seamless, if desired. Slap on an external FireWire drive or even an AirPort disk, click the "Add to storage pool" button, and suddenly it just acts like part of your system drive. You don't have to worry about what is stored where.
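
      Today the dataset-level version of this is snapshot plus clone; a sketch for a disk image kept in its own filesystem (names are illustrative):

      zfs create tank/vmimages                                 # give the disk image its own filesystem
      zfs snapshot tank/vmimages@pre-update                    # instant, and nearly free until blocks diverge
      zfs clone tank/vmimages@pre-update tank/vmimages-test    # writable "copy" that shares unchanged blocks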
      • (If this question isn't relevant to ZFS, please just yell at me.)

        I've been using CrashPlan lately for backups, which, AFAICT, is effectively the same as a copy-on-write system; they store some type of "rolling delta" so that only changed data is rewritten to disk.

        And that seems really cool. But one thing keeps nagging at me: Don't copy-on-write/delta based systems prevent you from safely growing a logical volume beyond the limits of a physical volume, short of massive duplication?

        Let's say I had a 500GB ph
    • by tknd ( 979052 )

      For end-user usability, one of the nicest features of ZFS is that things like fstab go away. On a FreeBSD box with ZFS, I set up a RAID-Z pool across 4 disks. One of my controllers was giving me issues, so I tried flipping the disks around in the various Serial ATA ports I had across 3 different Serial ATA controllers. ZFS comes back and detects the array correctly regardless of how the OS assigns the device names. Normally you would cause a serious headache if you swapped the drives around.
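
      The pool identifies its member disks by on-disk labels rather than device paths, so reshuffling cables and re-importing is just this (pool name is illustrative):

      zpool export tank      # cleanly detach the pool before moving the disks around
      zpool import           # scan devices and list importable pools, whatever their new names
      zpool import tank      # bring it back online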

  • by boxless ( 35756 ) on Wednesday June 11, 2008 @05:15PM (#23754869)
    I've lurked a bit in the OpenSolaris forums, and there are a whole bunch of scary things with this FS. Like the RAM requirements, for starters.
    • by cblack ( 4342 ) on Wednesday June 11, 2008 @05:35PM (#23755137) Homepage
      RAM settings can be tuned down (see ARC cache sizing). If you've just lurked on a list and not run it or read the tuning docs, you don't know, and your vague sense of it being "scary" should hold little weight. I will say that the defaults for ZFS on Solaris are geared towards large-memory machines, where you can afford to give a gig to the filesystem layer for caching and such. I don't know the absolute minimum RAM requirements, but I doubt they are inflexible and "scary".
      I've been running ZFS on Solaris Oracle servers for a bit and it is REALLY NICE in my opinion. They have also continually improved the auto-tuning aspects, so you don't even have to worry about some of the settings that were often tuned even two releases ago (10u2 vs 10u4).
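
      For what it's worth, the usual knob on Solaris is capping the ARC in /etc/system; a sketch (the value is just an example):

      * /etc/system: cap the ZFS ARC at 512 MB on a small-memory box
      set zfs:zfs_arc_max = 0x20000000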
      • Re: (Score:3, Informative)

        by MrMickS ( 568778 )
        Solaris has used the idea of "unused memory is wasted memory" for a long time now. If memory isn't being used by applications then why not use it for file system buffering and cache? As long as it gets reaped by your memory manager when you need it for applications it seems like a good thing to do performance wise.
    • Agreed; however, it does seem (currently) to be directed at servers, which tend to have 4GB of RAM or more and don't really start and stop processes randomly like a personal/home computer.

      As far as RAM requirements go, I've seen various opinions ranging from 1GB to 2GB as a "sufficient" amount, with 4GB+ being ideal... and a minimum of 768MB... if that's true, and if that also includes general RAM use for other things, that's not so bad...

      The only major limitation (according to Wikipedia http://en.wiki [wikipedia.org]
    • There are some edge cases in ZFS, but compared with HFS+ - well, "Invalid leaf node".
  • by MonoSynth ( 323007 ) on Wednesday June 11, 2008 @05:22PM (#23754979) Homepage
    The ability to hibernate your Mac with 16TB of RAM [apple.com] :)
    • Is it me, or does 16TB of RAM not seem so large?
    • The ability to hibernate your Mac with 16TB of RAM [apple.com] :)
      No one can afford 16TB of FB-DIMMs, there's not that much money in circulation.
      • Systems with a terabyte of RAM are not unusual in the government installations here in the DC metro area. Putting 16TB of RAM in a server will cost millions of dollars right now, but it's by no means out of the question. See this SGI press release [sgi.com] for a sample with 28TB of RAM.
        • by TheGreek ( 2403 )

          Putting 16TB of RAM in a server will cost millions of dollars right now
          Nah. Just $1,228,800 [newegg.com].
      • by Lars T. ( 470328 )

        The ability to hibernate your Mac with 16TB of RAM [apple.com] :)
        No one can afford 16TB of FB-DIMMs, there's not that much money in circulation.
        Even at Apple's RAM prices it will cost less than an hour of non-war in Iraq. Far less.
  • by failedlogic ( 627314 ) on Wednesday June 11, 2008 @06:32PM (#23755897)
    I don't know a whole heck of a lot about the technical details of ZFS. From what I have read and understood, it sounds like ZFS offers something that every OS should include in its file system. Since, as I understand it, the BSDs and many Linux distros are starting to include (albeit limited/beta/alpha) ZFS support, and the long-rumored OS X inclusion has been confirmed, could this become a universal file system for operating systems? I would definitely like to see ZFS as a bootable Windows file system.

    Say I have a portable USB hard drive, or a dead motherboard in one system and I want to retrieve the data off its hard drive. One computer has Windows and the other is *nix or OS X. Generally, the file system one could use that *should* work between Windows, Mac and *nix was FAT32. There are some issues with FAT32, not the least of which is the lack of support for large hard drives. The only other ways I can think of to transfer the data are via the network or using an OS hook to read the data.

    I just switched from Apple to Windows. I've been using an app to read my HFS+ file system on Windows to get data off the hard drive. It works well, but it's not built in. Nor is read/write NTFS access in other OSes. In any case, getting the data has been a bit of a pain. A standard file system where I could just plug in a drive, no problem, would be awesome.
    • Re: (Score:3, Insightful)

      1. The GPL prevents ZFS integration (without a complete and total reimplementation of all the code... which won't happen since everybody prefers to write their own filesystem)
      2. MS DOS/FAT is the universal file system for operating systems.
      • Um, MS DOS/FAT may be universally read/write supported, but I don't think you can boot much other than Windows on it.
    • by shani ( 1674 )
      I have a few Linux computers, running either Ubuntu or Debian, a Windows XP box I use for games, and a MacBook Pro. My girlfriend has a Vista laptop.

      I was tired of not being able to save files bigger than 2 Gbyte on USB drives, so I recently looked at the possibilities:
      • ext3 [wikipedia.org] is nice and reliable, and there are ext2 [wikipedia.org] drivers for Windows and OS X.
      • HFS+ [wikipedia.org] comes with OS X, and it works in Linux, but the standard way to run it under Windows seems to be via a $50 piece of software.
      • NTFS [wikipedia.org] comes with Windows, and it wo
    • Get a copy of VMware or Parallels and create a virtual disk from your Windows machine... then use that on your Mac as a virtual machine... voila. Now you can drag and drop files between Windows and Mac without any fuss at all. Bonus: if you have files that only open in Windows or need to be exported to a generic format for import into a Mac application... you can do it all on 1 PC (the Apple PC, of course).

      • Oh wait... just re-read and saw that you switched from Apple to Windows... whoops.

        I suppose you could go the other way: create a Windows VM on the Mac... just a new install would work fine... then copy your files over to the VM, copy the VM to your Windows PC... then mount it as a drive and copy to the native Windows OS.

    • by toby ( 759 ) *
      I just switched from Apple to Windows.
    • by Fweeky ( 41046 )
      With the CDDL covering ZFS, I know some people are hopeful that HAMMER [wikipedia.org] will become ubiquitous.
