Data Storage Businesses OS X Operating Systems Apple

Apple Removes Nearly All Reference To ZFS 361

Roskolnikov writes "Apple has apparently decided that ZFS isn't really ready for prime time. We've been discussing Apple/ZFS rumors, denials, and sightings for some years now. Currently a search on Apple's site for ZFS yields only two hits, one of them probably an oversight in the ZFS-cleansing program and the other a reference to open source. Contrast this with an item from the Google cache regarding ZFS and Snow Leopard. Apple has done this kind of disappearing act in the past, but I was really hoping that this was one feature promise they would keep. I certainly hope this isn't the first foot in the grave for ZFS on OS X."
This discussion has been archived. No new comments can be posted.

  • Integration issues (Score:5, Informative)

    by henrikba ( 516486 ) * on Wednesday June 10, 2009 @02:42AM (#28276119)
    The Known Issues and Features in the Works [macosforge.org] page for ZFS on MacOSforge [macosforge.org] explains the situation pretty well. Integrating ZFS into Mac OS X isn't just a matter of writing a device driver. Time Machine, Finder, Spotlight and other core OS components need to support ZFS features explicitly, since ZFS behaves very differently from HFS+.
  • by beelsebob ( 529313 ) on Wednesday June 10, 2009 @02:44AM (#28276129)

    I'm fairly confident of what it is, having actually used zfs on OS X.

    1. The implementation still has some major bugs -- I managed to get a kernel panic just by writing to a raid-z.
    2. There are some unresolved issues with the way zfs behaves. For example, pulling a USB device with a zfs volume on it *must* cause zfs to shit its pants, because it guarantees that writes to it will succeed.
  • by 0x000000 ( 841725 ) on Wednesday June 10, 2009 @02:59AM (#28276217)

    Okay, first and foremost it is well known that if you are running a database engine on top of ZFS you have to tune it to that specific database engine. This is well documented, and well described in the ZFS manuals, including steps to be taken to resolve these issues.

    As for the performance degradation when the disks are close to full: that is being worked on, and while it can cause issues (especially if you have a lot of snapshots), any IT admin worth their salt would not let a disk get close enough to full for it to matter. (I've seen this error once on my production servers, at 95% capacity, when I was brought in as a contractor.) Replacing and upgrading disk capacity is as simple as pulling one drive from the RAID-Z, inserting a new one, and letting it resilver, then pulling the next, until you have replaced all of them. Once every drive has been replaced (say, going from 1 TB to 1.5 TB drives), the pool grows to the full space the new disks provide.
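    A sketch of that drive-by-drive upgrade, with a hypothetical pool named tank and made-up Solaris-style device names; wait for each resilver to finish before pulling the next disk:

    ```shell
    # Hypothetical pool "tank" built as a raidz of 1 TB disks.
    # Swap one member at a time for a 1.5 TB disk and let it resilver.
    zpool replace tank c1t1d0 c1t5d0   # swap in the first new drive
    zpool status tank                  # wait for "resilver completed"

    # Repeat for each remaining disk; once every member has been
    # replaced, the pool exposes the extra capacity.
    zpool list tank                    # SIZE now reflects the larger disks
    ```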

    As for 1, ZFS is extremely simple to use -- gvinum on FreeBSD and Linux's LVM are unnecessarily complicated -- and as for 2, ZFS has so far proven far more reliable. It has been extremely fast, and it has already saved us a whole lot of trouble: when a disk started failing, ZFS warned us that reads were failing and let us replace the disk before disaster struck. Since we started using it in the last year we have not yet had to dig out the backup tapes for a server; previously, a disk went bad under Linux's LVM, bad data was written to the other disks, and files were lost.
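    To illustrate the simplicity point (pool and device names are made up): one ZFS command yields a redundant, mounted, growable filesystem, where the LVM route needs several distinct layers:

    ```shell
    # One step with ZFS: a mirrored pool, created and mounted at /tank.
    zpool create tank mirror c1t0d0 c1t1d0

    # The rough LVM equivalent takes four separate stages:
    #   pvcreate /dev/sdb /dev/sdc
    #   vgcreate vg0 /dev/sdb /dev/sdc
    #   lvcreate -L 100G -m1 -n data vg0   # mirrored logical volume
    #   mkfs.ext3 /dev/vg0/data && mount /dev/vg0/data /mnt/data
    ```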

    I don't believe the issue is that ZFS is not ready; rather, Apple has not had the time to make sure everything fits in with the way the rest of the OS has to work. Certain features that HFS+ offers are not possible on ZFS yet, and certain tools (Time Machine, for example) rely on very specific HFS+ mechanics, which would complicate replicating them on ZFS.

    While I was looking forward to seeing ZFS in Mac OS X, I doubted it would happen anytime soon. It is a large undertaking to make sure the various parts of the system are all tuned for ZFS -- the way the OS caches, the amount of memory available for the ZFS ARC, and things along those lines. FreeBSD has slowly been working through those exact issues.

  • by MichaelSmith ( 789609 ) on Wednesday June 10, 2009 @03:27AM (#28276369) Homepage Journal
    It's an nis + automount + zfs + nfs problem. I am a user of the system (not an administrator), so my information is incomplete. Basically we lost the ability to export deep subdirectories (say /path/to/user) from our zfs file system (which physically lives on a raid array) via nfs.

    We retained the ability to export /path/to as a workaround. The way nis (yp) works in our setup is that it exports individual user directories as required.
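    The workaround described above might look something like this (the pool and path names are invented for illustration):

    ```shell
    # Exporting each deep directory individually (e.g. /path/to/user)
    # stopped working over NFS, so export the parent instead:
    zfs set sharenfs=on tank/path/to

    # Clients, driven by NIS/automount maps, then mount individual
    # user directories beneath that single export as needed.
    zfs get sharenfs tank/path/to   # confirm the share is active
    ```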
  • by Anonymous Coward on Wednesday June 10, 2009 @03:43AM (#28276457)

    Mac OS X Server has a few features that are hard to replicate well on other servers, basically coming down to specific Mac management (Open Directory, NetBoot, Software Update), and in particular AFP file services. There are a lot of design/production companies out there with a lot of Macs who need a reasonable amount of storage, and AFP still tends to work better for Mac clients than things like SMB. We've got a few clients with a few hundred Macs, and ZFS would have been a good additional option to have for backend storage. The snapshot and scrub features alone would be a big benefit.

    Xsan is great for certain situations but Apple's tools tend to target that towards video production, and not everyone needs or can afford a full SAN.

  • by prockcore ( 543967 ) on Wednesday June 10, 2009 @04:33AM (#28276707)

    The iTunes store uses Akamai. So it uses Linux, not OSX at all.

  • Re:Death knell (Score:2, Informative)

    by KonoWatakushi ( 910213 ) on Wednesday June 10, 2009 @05:25AM (#28276999)

    This is almost entirely nonsense. I have been following the zfs-discuss list for years, and almost no one has lost data. There have been a few bugs which could in rare cases render your data inaccessible, but they almost always have workarounds, and do get fixed.

    The data loss and corruption that the parent is talking about is the fault of crap hardware. In almost every case, USB is involved, or more rarely the lack of ECC ram. It is true that ZFS is less tolerant of bad hardware. Note, faulty good hardware is not considered bad; that is reserved for garbage which (for instance) lies to the OS about flushing the disk cache. With such hardware, it is impossible for any filesystem to function reliably.

    USB and Firewire bridges are notorious for this. If you care about your data, you should run the other way if you happen upon one. ZFS works great on good hardware though. With directly attached disks and ECC ram, there is no cause for concern.

  • We do (Score:5, Informative)

    by theolein ( 316044 ) on Wednesday June 10, 2009 @05:40AM (#28277091) Journal

    We use Mac OSX Server for our infrastructure. It's a royal PITA and I now wish we hadn't done it, but there have been a number of media companies in recent years that have moved to Mac OSX Server because all their clients are OSX.

    My view is that Apple is just jealous of Microsoft and said to itself that if Microsoft can drop promised new features in Vista like the DB based file system, then why can't Apple drop ZFS? ;-)

  • by LKM ( 227954 ) on Wednesday June 10, 2009 @05:44AM (#28277111)
    Apple uses Akamai for mirroring some of their stuff. They use Xserves as the main source.
  • by totally bogus dude ( 1040246 ) on Wednesday June 10, 2009 @06:03AM (#28277197)

    I wouldn't think Akamai would be doing any of the actual work behind the iTunes store. I seem to recall they do have that capability, but it would be really hard to take advantage of unless you designed for it from the start, and even then I doubt anyone, especially a company as large as Apple, would be happy to give their content distribution network access to any of their actual user data.

    Our website is served by Akamai as well, but nearly all the content is served by Windows web servers. If you do a simple GET and the page is in the cache of the Akamai server you're using, then you could maybe say it was served by Linux or whatever. If you do a search or anything that requires actual work, your request will be getting funneled back to our Windows servers.

    I would say it's extremely unlikely iTunes works any differently.

  • by Anonymous Coward on Wednesday June 10, 2009 @06:30AM (#28277341)
    Wikipedia [wikipedia.org]

    Wikipedia is a free,[5] multilingual encyclopedia project supported by the non-profit Wikimedia Foundation. Its name is a portmanteau of the words wiki (a technology for creating collaborative websites, from the Hawaiian word wiki, meaning "quick") and encyclopedia. Wikipedia's 13 million articles (2.9 million in the English Wikipedia) have been written collaboratively by volunteers around the world, and almost all of its articles can be edited by anyone who can access the Wikipedia website.[6] Launched in January 2001 by Jimmy Wales and Larry Sanger,[7] it is currently the most popular general reference work on the Internet.

    (for those who got freaked out and wondered where they were after they clicked your link).

  • by beelsebob ( 529313 ) on Wednesday June 10, 2009 @06:53AM (#28277437)

    Sure they have: http://zfs.macosforge.org/trac/wiki#ZFSDocumentation [macosforge.org] -- you just don't know where to find the current head of Apple's zfs implementation.

  • by ThePhilips ( 752041 ) on Wednesday June 10, 2009 @07:01AM (#28277471) Homepage Journal

    It's actually more interesting than that.

    Oracle is currently sponsoring development of Btrfs for Linux, which is roughly equivalent to its recently acquired ZFS. I wonder if that played any role.

    P.S. Though my personal opinion is more pragmatic. ZFS doesn't seem to be extensible, and it obviously doesn't support all the features of HFS/HFS+ (streams, aliases, case insensitivity) that Mac OS and its applications require. I guess that would have been a major bummer. Meaning that even if Apple did add ZFS support, it would very likely have a different name and be incompatible with Sun's ZFS.

  • by Anonymous Coward on Wednesday June 10, 2009 @07:06AM (#28277493)

    From the ntfs-tools package:
    ntfsfix /dev/sdXY :) no need for chkdsk

  • by AttilaSz ( 707951 ) on Wednesday June 10, 2009 @07:09AM (#28277505) Homepage Journal

    It's widely known that Steve Jobs and Larry Ellison are good friends, see this from http://en.wikipedia.org/wiki/Larry_Ellison [wikipedia.org]

    "On 18 December 2003, Ellison married Melanie Craft, a romance novelist, at his Woodside estate. His friend Steve Jobs, Apple, Inc's CEO, was the official wedding photographer."

    So, no, Larry's company becoming ZFS owner ain't the reason Steve's company would drop it.

  • Re:Death knell (Score:3, Informative)

    by ThePhilips ( 752041 ) on Wednesday June 10, 2009 @07:18AM (#28277551) Homepage Journal

    Of course, the other person answering your flawed arguments about 'crap hardware' is right to the point: What good is a fault tolerant file system if it isn't tolerant of faults?

    Or in layman's terms: if shit happens, the system shouldn't help spread it; it should localize it.

    (The mods opting for 'informative' of your post obviously don't read the ZFS mailing list, and nobody blames them.)

    What you describe is a standard problem faced by all journaling and/or distributed file systems -- or, for that matter, any distributed ("shared data") system. You simply cannot guarantee anything (efficiently) when many asynchronous agents are involved. It all depends on where the designers cut the compromise with inefficiency (forcing a sync of all the agents).

    Judging by the fact that it took Veritas a decade to build such a system (at least, that's the impression I get from reading their 5.0 papers), I think ZFS needs much, much more time to mature.

  • by R.Mo_Robert ( 737913 ) on Wednesday June 10, 2009 @08:19AM (#28277921)

    If you read the linked page (from Google cache), you'll see that this feature was slated for Snow Leopard Server, not the consumer version. I do not recall Apple ever advertising full ZFS support as a feature for the consumer version of 10.6, and neither does Wikipedia [wikipedia.org].

    (Yes, consumer 10.5 does have read-only support for ZFS from the command line; I imagine this will still be present in 10.6. In any case, it's not like this project is a secret, as Apple has released it [macosforge.org] as open source.)

  • Re:Death knell (Score:5, Informative)

    by toby ( 759 ) * on Wednesday June 10, 2009 @09:14AM (#28278391) Homepage Journal

    What ZFS does have that typical Apple Consumers would like to see it on desktops

    Pretty much all of it applies equally to consumer systems.

    ZFS is not miracle what is not possible to gain already with other kind setup with RAID and other filesystem

    You need to study ZFS more [opensolaris.org], as you clearly know little about it. Almost no RAID systems can do what ZFS does. Hints: end-to-end checksumming; self-healing; copy-on-write; ...

    Hint: the extra capability largely comes from integrating the "filesystem" and "volume manager" layers, which are separate modules in traditional configurations. Calling ZFS a "filesystem" seems to mislead a lot of people into thinking it can be compared to other "filesystems"; and the fact that ZFS implements RAID-like redundancy leads people to think it can be compared to other "RAID" systems. Sure, it can be compared, but conventional systems will generally lose (notably in data integrity, but also in performance, manageability, etc.).
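    A quick sketch of what that integration buys in practice (pool name hypothetical): a scrub walks every block, verifies its checksum, and repairs damage from a redundant copy -- something a filesystem sitting blindly on top of a separate RAID layer cannot do:

    ```shell
    zpool scrub tank        # read and verify every block against its checksum
    zpool status -v tank    # reports scrub progress, repairs, and any damaged files
    ```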

  • Parent Wrong. (Score:5, Informative)

    by toby ( 759 ) * on Wednesday June 10, 2009 @09:17AM (#28278409) Homepage Journal

    Shame I just blew my mod points by posting.

    But parent is completely wrong. ZFS root/boot is fully supported by Sun, and ZFS itself is used in production in thousands of installations.

  • by Anonymous Coward on Wednesday June 10, 2009 @09:35AM (#28278621)
    OS X 10.5 has an API to monitor file changes. Time Machine is built on top of that. The backup is built from new files plus hard links to previous files and directories. Taking a zfs snapshot and sending it to another disk would do the same thing, but Time Machine lets you exclude directories.

    Using zfs on the destination disk would be easier (just take a snapshot and sync up the deletes, additions, and changes).
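    The hard-link mechanism described above can be demonstrated with plain shell (the paths here are just a demo): each new backup generation re-links unchanged files from the previous one, so both names share a single inode and cost no extra space:

    ```shell
    # Simulate two Time Machine-style backup generations in /tmp.
    mkdir -p /tmp/tm-demo/backup.1 /tmp/tm-demo/backup.2
    echo "unchanged data" > /tmp/tm-demo/backup.1/file.txt

    # Second backup: the file did not change, so hard-link it
    # instead of copying -- both names now point at one inode.
    ln /tmp/tm-demo/backup.1/file.txt /tmp/tm-demo/backup.2/file.txt

    # The link count (second column) is now 2 for both entries.
    ls -li /tmp/tm-demo/backup.1/file.txt /tmp/tm-demo/backup.2/file.txt
    ```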

  • by kriston ( 7886 ) on Wednesday June 10, 2009 @09:44AM (#28278741) Homepage Journal

    No, sorry, iTunes is not mirrored. It really is all on Akamai.

  • Re:Death knell (Score:2, Informative)

    by wereHamster ( 696088 ) on Wednesday June 10, 2009 @10:50AM (#28279753)

    In the link you posted, the admin found three uberblocks (there are supposed to be four). ZFS correctly made multiple uberblocks, per design. It appears that all three were corrupt.

    ZFS keeps a history of the last 256 uberblocks in four different places in the pool. So even if all copies of the most recent uberblock got corrupted, it could still fall back to one of the older ones. You'd maybe lose the last few minutes of work, but that's not nearly as catastrophic as losing the whole pool. It could fall back, but it doesn't; it panics the kernel instead. This is where a userspace fsck would help, for example by giving you the choice to safely invalidate the last uberblocks. I was not feeling very comfortable when I wrote the dd/ud script that automated that task, but I had nothing to lose at that point.

    The silent corruption was just an example. It doesn't matter what causes the corruption. But _if_ you end up in a situation like the admin or me, you have to resort to such ugly tricks to recover your pool. And that is something I'm not willing to accept on a production system - or any system at all for that matter.
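    For the curious: before resorting to dd, the on-disk labels (which carry the uberblock ring) can at least be inspected with stock tools. A sketch, not a recovery procedure -- the device path is hypothetical:

    ```shell
    # Each vdev stores four copies of its label; zdb can dump them,
    # including the pool GUID and the transaction group numbers.
    zdb -l /dev/dsk/c1t0d0s0

    # On an importable pool, -u asks zdb for the active uberblock.
    zdb -u tank
    ```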

  • by Anonymous Coward on Wednesday June 10, 2009 @12:53PM (#28281505)

    I'm a different AC and want to offer a more detailed point to that:

    Some of the Finder integration issues (eject, gajillions of volumes appearing) had been fixed in earlier builds with the Carbon Finder. Then in a later build the GUI stuff for zfs disappeared from Disk Utility.app; that was a warning sign. Just now those of us outside Apple got our hands on the newest builds: the Finder is a new Cocoa version, and all the zfs command-line stuff is missing. The macosforge zfs site has no 10.6 code on it yet. So until someone has the time to pull the drivers and tools from a previous Snow Leopard build and see how well a zfs volume integrates with the new Cocoa Finder, we won't really know whether those fixed Finder issues stay fixed.

    Also, in the earlier builds Time Machine was not able to use the zfs features. Nobody has had time to test the latest build, and again, with all the drivers and command-line tools missing, it's not trivial to check.

  • by raddan ( 519638 ) on Wednesday June 10, 2009 @01:01PM (#28281637)
    The backend, for the vast majority of their customers, is up to the customer to decide. I had the pleasure of taking a grad CS course with an Akamai engineer, and I specifically asked him about Apple, which is one of the customers he works with. He said Apple provides their own backend.
  • by HumanEmulator ( 1062440 ) on Wednesday June 10, 2009 @01:02PM (#28281657)

    Just playing devil's advocate: All the posts here seem to be trying to figure out what's wrong with ZFS to cause Apple to yank it out, but what if ZFS is fine and there's some big feature they're working on for HFS+ that they couldn't duplicate in ZFS?

    I admit it's much more likely they just don't want to maintain full support for multiple filesystems, which is what they'd have to do because there's no way they're putting ZFS on iPhones and iPod Touches anytime soon.

    Either way, the really telling thing is they aren't talking about ZFS in Mac OS X Server. If they had any plan for a ZFS future, it would start there much like the way HFS+ Journaling was initially a Mac OS X Server feature. (Introduced in OS X Server 10.2.2 and rolled out to non-server OS X in 10.3.)

  • Re:ZFS not ready? (Score:4, Informative)

    by ggendel ( 1061214 ) on Wednesday June 10, 2009 @02:52PM (#28283249)

    You're right on the button. I created a sparse file for each machine image using diskutil so I could fix the maximum size (I'd hate for it to take over my entire 2.5 TB pool). The trick is figuring out the name each machine wants, but if worst comes to worst, you quickly cancel the first sync if it's wrong, rename the file, and start again.

    Then I used the native CIFS service that comes with OpenSolaris for the connection. I started with Samba, but the native CIFS service had 1000X better throughput.

    There is an option that enables mounting "foreign" disks for time machine. This may explain it better:

    http://www.macosxhints.com/article.php?story=20080420211034137 [macosxhints.com]
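    The option the linked hint describes is a defaults key (such volumes are unsupported by Apple, so use at your own risk); the size-capped image can be made with hdiutil -- the size and the sparse-bundle name below are only examples of the hostname_MAC-address convention:

    ```shell
    # Let Time Machine offer "unsupported" network volumes as destinations:
    defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

    # Create a size-capped sparse bundle for one machine's backups:
    hdiutil create -size 200g -type SPARSEBUNDLE -fs HFS+J \
        -volname "Backup of mymac" mymac_001122334455.sparsebundle
    ```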
