ZFS Confirmed In Mac OS X Server Snow Leopard 178
number655321 writes "Apple has confirmed the inclusion of ZFS in the forthcoming OS X Server Snow Leopard. From Apple's site: 'For business-critical server deployments, Snow Leopard Server adds read and write support for the high-performance, 128-bit ZFS file system, which includes advanced features such as storage pooling, data redundancy, automatic error correction, dynamic volume expansion, and snapshots.' CTO of Storage Technologies at Sun Microsystems, Jeff Bonwick, is hosting a discussion on his blog. What does this mean for the 'client' version of OS X Snow Leopard?"
What does this mean for 'client'? (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2, Informative)
Re:What does this mean for 'client'? (Score:5, Informative)
I'm no expert on ZFS; I just did a Google search on 'zfs benchmark' and then on 'zfs memory usage' and pulled information from the first few results. Maybe someone who actually knows something can chime in?
Re: (Score:3, Insightful)
Nitpick: it definitely likes a lot of RAM, but it doesn't necessarily use it inefficiently. Car analogy: a semi truck is fuel efficient even though its gas mileage is a lot lower than a sedan's -- per ton of cargo moved, the truck wins.
Re: (Score:2)
ZFS is fast, but not lightweight (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
Even XFS is on par with ZFS (with lower CPU utilization) for non-clustered performance
Re: (Score:2, Informative)
Seriously though, ZFS for OS X is already available to be checked out and played with. Additionally, Apple hired one of the key ZFS people and has her working on ZFS for OS X now.
I highly doubt it will suck, since, IIRC, she was one of the people who worked on the test suites that SUNW^H^H^H^HJAVA runs nightly.
Re: (Score:2)
quad:~ ilgaz$ lsof | grep 'timemac'
Path F      185 ilgaz   15r  DIR  14,14       68  9068665
AdobeRead   629 ilgaz   25r  REG  14,14  4083738  5061210
The core Unix/OS layer of course provides that information. For some reason, the Apple Finder doesn't ask the core OS for it. Apple is almost fanatically conservative about the Finder
"All features on this page are subject to change" (Score:5, Informative)
I was under the impression that they had initially hoped to include such in Leopard.
However, it isn't just Apple: Microsoft has been working on various structured file systems (OFS, Storage+, and WinFS) for nearly 20 years with no shipped product.
Re:"All features on this page are subject to chang (Score:5, Funny)
Re: (Score:2)
The killer features for ZFS have nothing to do with "structured" filesystems; ZFS is essentially POSIX.
Everybody has their own favourites:
While snapshotting and copy-on-write are not entirely new, though, many of ZFS's features are not available elsewhere.
How will I benefit? (Score:4, Interesting)
Ok, I'm reasonably technical, but not savvy with the intimate workings of a file system. What will this mean for the average user with an iMac or MacbookPro, when ZFS finally appears as the default FS of OS X? Will it be faster, more error-resistant, or...?
Re: (Score:3, Informative)
ZFS uses paranoid end-to-end checksumming, which can detect drive problems early.
Re: (Score:3, Informative)
Re:How will I benefit? (Score:4, Insightful)
Also, there are lots of real-world cases where a malfunctioning drive silently writes incorrect data. ZFS will help you in that case.
Re: (Score:3, Insightful)
Actually, S.M.A.R.T. is an amazing tool and utterly invaluable for monitoring drive health... IF and ONLY IF you have the appropriate software (Windows: google it; *nix: smartmontools) AND know how to read the resulting output.
The reason many people think SMART sucks, and why I say to check SMART manually, is that 95% of drive manufacturers set the threshold or "fail" values WAAAAY too high or too low!
I use SMART constantly (about once every other week) to "check in" on how healthy my drives are, and knowing how
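The kind of manual check-in described above can be sketched with smartmontools (the device path is an example; substitute your own drive):

```shell
# Overall health self-assessment (the vendor-thresholded verdict).
smartctl -H /dev/sda

# The raw attribute table -- this is what you learn to read manually.
# Attributes worth watching: Reallocated_Sector_Ct,
# Current_Pending_Sector, UDMA_CRC_Error_Count.
smartctl -A /dev/sda

# Kick off a short self-test; read results later with 'smartctl -l selftest'.
smartctl -t short /dev/sda
```

The point of reading `-A` output yourself is exactly the one made above: the `-H` verdict only trips when the vendor threshold is crossed, which may be far too late.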
Re: (Score:3, Insightful)
Also, Google's hard drive survey seems to come to the same conclusion: "One of those we thought was most intriguing was that drives often needed replacement for issues that SMART drive status polling didn't or couldn't determine, and 56% of failed drives did not raise any significant SMART flags"
http://www.engadget.com/2007/02/18/mass [engadget.com]
Re: (Score:2)
Again, as I mentioned at the beginning of my post, most drive manufacturers set the threshold fail values so extreme that, if they are reached, the drive has already failed for all intents and purposes. Your software cannot warn you of a failure in progress when the threshold is set at an extreme value or '0'.
This is why I advocated learning the "appropriate" SMART value ranges for your drive and checking the values manually. Because of this, I have only lost one drive EVER due t
Re: (Score:3, Interesting)
e.g. Seagate are one of the few vendors who are honest about ECC correction and seek error rates, so their SMART counters are correspondingly huge and read rather poorly (50-60 out of 100 is a common value); you can even graph them and watch the rates sweep up and down as the drive moves its heads
Re: (Score:2)
There have been numerous studies showing that SMART failure predictions are frequently incorrect, saying that a drive is not going to fail when it is, or is late in reporting a failure.
"most intriguing was that drives often needed replacement for issues that SMART drive status polling didn't or couldn't determine, and 56% of failed drives did not raise any significant SMART flags (and that's interesting, of course, because SMART exists solely to survey hard drive health)"
source:
http://www.engadge [engadget.com]
self-healing (Score:2)
Once your drive has been corrupted ZFS will kick in and prevent you from accessing any corrupt data.
If you have redundancy (such as mirror or RAIDZ), ZFS will also repair the data.
Re: (Score:2)
I'm not sure that's relevant to single-drive Macs though. It needs a mirror with a clean copy of the data to correct from.
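For the redundant case, the detect-and-repair cycle looks roughly like this (pool and dataset names here are made up for illustration):

```shell
# Walk every block in the pool, verify checksums, and repair
# any bad copies from the mirror/RAID-Z redundancy.
zpool scrub tank

# The CKSUM column reports silently-corrupted blocks that were
# detected (and, with redundancy, fixed).
zpool status -v tank

# On a single-disk Mac there is still a partial answer: ZFS can
# keep multiple copies of each block on the one device, which
# guards against localized bad sectors (not whole-disk failure).
zfs set copies=2 tank/important
```

The `copies` property is a sketch of how single-drive machines could still get some self-healing, at the cost of doubled space for that dataset.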
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:How will I benefit? (Score:5, Interesting)
Ob. Apple Joke referencing earlier
Of course, consumer OS X support for ZFS will have to wait until they code in skipping backups of your iTunes library!
Re: (Score:2)
Re:How will I benefit? (Score:5, Informative)
Actually, GP was talking about ZONE root filesystems, which have absolutely nothing to do with the bootloader, since the zone runs on top of the underlying global zone. You CAN put a zone root on ZFS at the moment, but Sun neither recommends nor supports that setup.
For SPARC machines, it'll require new OpenBoot firmware that understands zfs.
And this is simply untrue, period, even for non-zone ZFS root filesystems. OpenBoot loads the next stage of boot code by reading raw data from blocks 1-8 of the chosen slice of the boot disk, and THAT is the code that needs to be able to understand the filesystem that will be mounted as root (UFS, ZFS, or whatever). OpenBoot only needs to understand the disk label/partitioning and to be able to read the disk blocks. It already does that, so non-zone ZFS root will NOT require any modifications or upgrades to OpenBoot, just updates to the bootloader code that is written to the disk in blocks 1-8.
Re:How will I benefit? (Score:5, Interesting)
Imagine having an external HDD on your mac. Whenever you plug it in, it automatically starts mirroring the internal drive.
Take atomic snapshots of your entire filesystem, send it over scp to back up your drive as a single file. Or, send over the difference between two snapshots as an incremental backup.
Have more than one drive, want mirroring? 2 steps on the command line.
Have a directory you really care about? Make it a sub-filesystem (this doesn't involve partitioning, etc, just a command that's almost identical in syntax and performance to mkdir) and tell ZFS to store 2 or 3 copies of it.
Have a directory you'd like auto-compressed? Tell ZFS to compress it. New data written to it is compressed automatically, completely transparently to the user and to applications.
And I'm just getting started. Trust me on this, google it.
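For the curious, here is a rough command-for-command sketch of the features listed above (pool, dataset, and host names are illustrative, not from any real setup):

```shell
# Atomic snapshot of an entire filesystem.
zfs snapshot tank/home@monday

# Ship the snapshot over ssh as a single stream (full backup)...
zfs send tank/home@monday | ssh backuphost 'cat > home-monday.zfs'

# ...or just the difference between two snapshots (incremental).
zfs send -i tank/home@monday tank/home@tuesday | \
    ssh backuphost 'cat > home-tuesday.zfsdiff'

# Mirroring in a couple of commands.
zpool create tank mirror disk1 disk2

# A sub-filesystem is about as cheap to create as a directory...
zfs create tank/home/thesis
# ...and can be told to keep extra copies of every block.
zfs set copies=3 tank/home/thesis

# Transparent compression for all new writes to a dataset.
zfs set compression=on tank/home/logs
```

None of these require partitioning, reformatting, or unmounting; they are properties and operations on live filesystems.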
Re:How will I benefit? (Score:5, Informative)
ZFS: The last word on filesystems [sun.com]
Why ZFS for home [blogspot.com]
Why ZFS Rocks [acu.edu]
ZFS: what "the ultimate file system" really means for your desktop -- in plain English! [apcmag.com]
Re:How will I benefit? (Score:5, Insightful)
I'd much rather have volume or block level snapshots, like with LVM and other similar systems. Those systems provide RO and RW snapshots, dynamic partitioning, drive spanning, etc., and can be easily layered with other block-level components to provide compression, encryption, remote storage, etc. as well. All that without tying you to a single file system (though that may be a moot point on OS X, as it will only boot from HFS/HFS+ AFAIK).
If you really wanted to, you could even write a script that takes nothing but a path name and automatically: creates a series of volumes of an appropriate size for the folder you selected, sets up software RAID to mirror them into a single device, mounts the device with a compression filter, formats it (with any file system), mounts it normally, moves the data over, drops the old data, rebinds the mount point to the old path name, and updates fstab. The only thing you miss here that ZFS may be able to do (I didn't check) is avoid closing the files that are moved.
I'm not saying the features ZFS has are useless -- I think they are great -- they just aren't all that new and exciting. They might be new to OS X, or repackaged in a way that's easy to consume, but they are things that anyone with big disks has been doing for years.
Re:How will I benefit? (Score:5, Informative)
It is not possible to make consistent block-level snapshots without filesystem support. If your filesystem doesn't support snapshotting, it must be remounted read-only in order to take a consistent snapshot. This is true for all filesystems. When they are mounted read-write, there may be changes that are only partially written to disk, and creating a snapshot will save the filesystem in an inconsistent state. If you want to mount that filesystem, you'll need to repair it first.
Re:How will I benefit? (Score:5, Interesting)
-Say I want to take hourly snapshots and retain them for a month. When the parent data for a ZFS snapshot changes, ZFS merely has to leave the old data alone. OTOH, LVM must copy each block into every snapshot before it can change it in the parent. With LVM my hourly snapshots will quickly thrash the disk to a halt while using much more space; ZFS incurs a negligible penalty.
-LVM allows dynamic partitioning, but its filesystems can't share capacity on the fly. If I delete a file on an LVM-hosted filesystem, that space becomes available to that filesystem but not to all the others -- unless I shrink the filesystem, which generally requires taking it offline for a while.
-Another layer could potentially handle checksums on LVMs, but in practice Linux can't do this properly by itself.
-ZFS can use other layers, there's just a substantial benefit to letting it run the show.
The only reason this won't turn out to be a huge disadvantage for Linux is that BTRFS [kernel.org] will provide most of the same features. Layering can be a very helpful design tool, but there are times it becomes a hindrance. It's important to be flexible when there are benefits to integrating stuff into a single layer.
Regarding checksums (Score:2)
Nothing new, eh? (Score:2)
Re:How will I benefit? (Score:5, Interesting)
For one thing it would make the implementation of Time Machine much simpler. No more directory tree full of hard links and such. If they put it on other boxes (like Time Capsule) they could unify the format (it uses a different storage method). Then you could pull the Time Capsule drive, stick it in your Mac, and be all set.
For servers, it has all the standard ZFS benefits (easy storage adding, redundancy, performance, etc).
For home users, it would let you simply plug a new drive in your Mac, press a button, and have it just add space to your main drive. You wouldn't need to specifically setup a RAID. No resizing. No "external drive" if you don't want it that way. Just buy a drive, plug it in, and it's all handled for you.
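Under the hood, "press a button and add space" could map onto a single pool operation (pool name and device path are made up for illustration):

```shell
# Add a new device to an existing pool; the pool's capacity
# grows immediately -- no resizing, no reformatting, and
# existing data stays where it is.
zpool add tank /dev/disk2
```

The GUI button would just be a friendly wrapper over something like this; the caveat (discussed in the replies) is that once a device is added this way, the pool depends on it.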
Re:How will I benefit? (Score:4, Insightful)
For home users, it would let you simply plug a new drive in your Mac, press a button, and have it just add space to your main drive. You wouldn't need to specifically setup a RAID. No resizing. No "external drive" if you don't want it that way. Just buy a drive, plug it in, and it's all handled for you.
I'm not sure you'd want it to work this way for external drives. Will they be available at crucial parts of boot time when some important files are striped across them? Even if they are, you're basically unable to ever remove the external drive again. If there's a problem with the drive, all your data is lost. Probably the way these drives work now is better. Maybe mirroring onto an external drive would work ok, but it would then be an undesirable write bottleneck.
Home Users = Digital Hub (Score:2)
As Apple has been (over)selling for years: Your Mac is your Digital Life.
Home users should be much more concerned about data durability and integrity than performance (not that performance would ever be bad).
A physical photo album will last a couple of centuries. Your hard disk may not last 2 years. And there's only one of them in any consumer Mac.
People are going to lose their digital snapshots, their unfinished novels, their emails and love letters, because nobody understands the inherent fragility of
Re: (Score:2)
...and Apple has really come to the table with Time Machine, making backup happen for the masses. Perhaps it's reasonable to assume that if our computer crashes, we lose at most one hour's worth of non-recoverable work. That's the promise if you use Time Machine with default settings.
They could do better by making mirroring standard in desktops, but for laptops they're probably on the money. They could do better by using ZFS for block-level backups rather than file-level ones, but what they have now is far better
Re:How will I benefit? (Score:4, Insightful)
Indeed. (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
That's interesting, because the Linux implementations do not suffer these flaws. Look at the Drobo [drobo.com] for a hardware device that implements exactly this, runtime.
RAID-6 vs. RAID-Z2 (Score:2)
RAID-6 [wikipedia.org] & RAID-DP [wikipedia.org] can also survive a dual-drive sledgehammer failure. The Linux MD Driver supports RAID-6 [die.net].
How does Sun's RAID-Z2 distinguish itself from these existing implementations?
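In terms of setup, the two are comparably simple (a sketch; device names are illustrative):

```shell
# Linux software RAID-6: dual parity across four devices.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[bcde]

# ZFS RAID-Z2: also dual parity across four devices, but the
# filesystem and the RAID layer are integrated.
zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0
```

The practical distinction is that RAID-Z2 always does full-stripe, variable-width writes, so it avoids the RAID-5/6 read-modify-write "write hole," and its end-to-end checksums let a resilver reconstruct only live, verified data rather than blindly copying every block.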
More efficient backups. (Score:4, Interesting)
This is great for backing up large files containing frequent but small changes: encrypted disk images, Parallels Windows disk images, database files, the Entourage mail database, or home videos you are in the middle of editing, etc.
Right now Time Machine creates an entire copy of a file each time it changes, making it unsuitable for backing up these types of files, so you are encouraged to exclude them from backup. ZFS could fix that.
It could also make adding disk space more seamless, if desired. Slap on an external FireWire drive, or even an AirPort disk, click the "Add to storage pool" button, and suddenly it just acts like part of your system drive. You don't have to worry about what is stored where.
Delta-based backups? (Score:2)
I've been using CrashPlan lately for backups, which, AFAICT, is effectively the same as a copy-on-write system; they store some type of "rolling delta" so that only changed data is rewritten to disk.
And that seems really cool. But one thing keeps nagging at me: Don't copy-on-write/delta based systems prevent you from safely growing a logical volume beyond the limits of a physical volume, short of massive duplication?
Let's say I had a 500GB ph
Re: (Score:2)
For end-user usability, one of the nicest features of ZFS is that things like fstab go away. On a FreeBSD box with ZFS, I set up a raidz pool across 4 disks. One of my controllers was giving me issues, so I tried flipping the disks around among the various SATA ports I had across 3 different SATA controllers. ZFS came back and detected the array correctly regardless of how the OS assigned the device names. Normally, swapping the drives around like that would cause serious headaches.
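The scenario above might look like this on FreeBSD (device and pool names are illustrative):

```shell
# Build a single-parity raidz pool across four disks.
zpool create tank raidz ada0 ada1 ada2 ada3

# Cleanly detach the pool before recabling...
zpool export tank

# ...move the drives to any ports, in any order, then:
zpool import tank
# ZFS identifies pool members by the labels written on the
# disks themselves, not by the OS-assigned device names.
```

This is why no fstab bookkeeping is needed: the pool configuration lives on the disks, not in a config file keyed to device paths.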
I dunno if I trust it yet. (Score:3, Interesting)
Re:I dunno if I trust it yet. (Score:5, Informative)
I've been running zfs on solaris oracle servers for a bit and it is REALLY NICE in my opinion. They have also continually improved the auto-tuning aspects so you don't even have to worry about some of the settings that were often tuned even two releases ago (10u2 vs 10u4).
Re: (Score:3, Informative)
Re: (Score:2)
As far as RAM requirements go, I've seen opinions ranging from 1GB to 2GB as a "sufficient" amount, with 4GB+ being ideal and 768MB the minimum... if that's true, and if it also includes general RAM use for other things, that's not so bad...
The only major limitation (according to Wikipedia http://en.wiki [wikipedia.org]
Relative Metric. (Score:2)
Re:I dunno if I trust it yet. (Score:4, Informative)
possible use (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
What does it mean for the client? (Score:2)
http://www.apple.com/macosx/snowleopard/ [apple.com]
Standardizing file systems (Score:4, Interesting)
Say I have a portable USB hard drive, or a dead motherboard in one system and want to retrieve the data off its hard drive. One computer has Windows and the other is *nix or OS X. Generally, the one file system that *should* work between Windows, Mac, and *nix was FAT32. There are some issues with FAT32, not the least of which is its lack of support for large hard drives. The only other ways I can think of transferring the data are over the network or using an OS hook to read the data.
I just switched from Apple to Windows. I've been using an app to read my HFS+ file system on Windows to get data off the hard drive. It works well, but it's not built-in. Nor is read/write NTFS access in other OSes. In any case, getting the data has been a bit of a pain. A standard file system where I could just plug in a drive, no problem, would be awesome.
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
I was tired of not being able to save files bigger than 2 Gbyte on USB drives, so I recently looked at the possibilities:
Re: (Score:2)
Re: (Score:2)
I suppose you could go the other way: create a Windows VM on the Mac (a fresh install would work fine), copy your files into the VM, copy the VM to your Windows PC, then mount it as a drive and copy the files to the native Windows OS.
sux 2 b u (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Re:Finaly (Score:5, Insightful)
Re: (Score:2)
Frankly until ZFS gets cluster and HSM capabilities it is rather uninteresting.
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
With clustered Samba nearly production ready, shared disk filesystems such as GPFS, CXFS, GFS etc. will become much more important. Imagine being able to yank the power cord on one of your Samba servers and watch as the clients just keep trucking and the load is transparently taken by the other servers in the cluster.
File systems th
Re: (Score:2)
[For the record, I've used both ZFS and HDFS. Our largest HDFS--and we have multiple--has around 3-4PB, counting the space required by the 1:3 repl
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Funny)
We can finally fill up more than 8 TB on this FS. Anyone up to try? (With what?)
Rookie. My swap space is 8 TB.
Re:Finaly (Score:4, Funny)
Re:Finaly (Score:4, Funny)
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
No, license issues (Score:3, Informative)
Not until OpenSolaris and Linux are both GPLv3.
ZFS is patented, and patent protection is only conferred through use of CDDL'ed code, which isn't compatible with GPLv2. A cleanroom implementation of ZFS, besides being redundant, would have no license to use ZFS's patented technology. Whether Sun would sue a Linux dev over this is a separate issue.
BSD implemented a Solaris compatibility layer to use the CDDL code directly, since their license isn't incompatible with the CDDL.
Jeff and Lin
you find HFS+ slow?! (Score:2)
THIS: (Score:2)
Excuse me, but that is NOT even remotely a "native pluggable filesystem"! That is an add-on from a third party (parties) that "can be compiled and made to work with OS X".
There is a pretty BIG difference, dude.
No, I am understanding just fine. (Score:2)
However, I went back and looked at your first post to see what you meant. And indeed, there was a link there that
Re:No, I am understanding just fine. (Score:4, Insightful)
http://developer.apple.com/qa/qa2001/qa1242.html [apple.com]
In fact, every filesystem OS X supports is written using this mechanism, out of the box:
[gutro:~/] gutter% ls -1
AppleShare
URLMount
afpfs.fs
cd9660.fs
cddafs.fs
ftp.fs
hfs.fs
msdos.fs
nfs.fs
ntfs.fs
smbfs.fs
udf.fs
ufs.fs
webdav.fs
zfs.fs
Your most recent tirade seems to be a complaint about the lack of available filesystems, which I guess is a reasonable complaint, but that's not what you originally asked for. Then you asked for a simple package you could download and install, and again, the original reply contained one (MacFUSE). Granted, that's a poor example, because it hides OS X's native pluggable FS support behind the FUSE pluggable FS support, but that doesn't mean that the AC was wrong. You can go and download the MacFUSE package and the sshfs package, install them using the standard installer, and begin using a filesystem that works over SSH, no compiling necessary. (Incidentally, that one is super handy.)
In short, the original reply by the AC was 100% correct, and you were 100% wrong, (and seemingly unable to comprehend his reasonable explanations) and somehow by sheer bluster, you seem to have convinced everyone of the opposite.