OpenZFS Project Launches, Uniting ZFS Developers
Damek writes "The OpenZFS project launched today, the truly open source successor to the ZFS project. ZFS is an advanced filesystem in active development for over a decade. Recent development has continued in the open, and OpenZFS is the new formal name for this community of developers, users, and companies improving, using, and building on ZFS. Founded by members of the Linux, FreeBSD, Mac OS X, and illumos communities, including Matt Ahrens, one of the two original authors of ZFS, the OpenZFS community brings together over a hundred software developers from these platforms."
I'm addicted (Score:5, Interesting)
I love ZFS, if one can love a file system. Even for home use. It requires a little bit nicer hardware than a typical NAS, but the data integrity is worth it. I'm old enough to have been burned by random disk corruption, flaky disk controllers, and bad cables.
Re:I'm addicted (Score:5, Funny)
I love ZFS too, but I'd fucking kill for an open ReiserFS...
Re:I'm addicted (Score:5, Funny)
I think that anything having to do with ReiserFS is a dead end.
Re:I'm addicted (Score:4, Funny)
OK stop already, you guys are driving this joke into the woods.
Re: (Score:3)
Re: (Score:2)
I love ZFS too, but I'd fucking kill for an open ReiserFS...
I heard that the act of using ReiserFS might be a criminal offense.
Something about making oneself an accomplice after the fact... I don't know; it's a bit murky
Re: (Score:3)
Re: (Score:3)
I guess nobody got the joke.
Re:I'm addicted (Score:4, Insightful)
Re: (Score:2)
FreeBSD. I'm sure that makes me more retarded. Or retardeder in your people's language.
Re: Data integrity (Score:5, Informative)
Not sure what you mean. You certainly can set up a mirrored pair (or triplet or quadruplet), but you can also set up what's referred to as raidz, where it stripes the redundancy across multiple disks. You can configure how much redundancy... 1, 2, or more disks if you like. You can also tell ZFS to keep multiple copies of blocks, and it will spread those copies out among the disks. You can set that policy per sub-volume (file system in zfs-speak), so that if you decide that some of your data deserves more redundancy, you can set up a folder that will keep 2 copies of everything, but leave all the other folders at 1 copy. It's super geeky. I've had it detect (and correct) corruption in a failing disk, detect corruption because of a flaky disk controller that would otherwise pretend to work fine, and detect corruption when a SATA cable came loose. Combined with the ECC RAM in the server, I feel more comfortable about the integrity of my data than I ever have. I've lost family photos before to random drive corruption, so I'm sensitive to this stuff :)
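For anyone curious, the setup described above boils down to a couple of commands. This is only a minimal sketch; the pool, dataset, and device names are hypothetical:

    zpool create tank raidz2 da0 da1 da2 da3 da4   # striped data plus two disks' worth of redundancy
    zfs create tank/photos
    zfs set copies=2 tank/photos                   # this one dataset keeps 2 copies of every block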
Re: Data integrity (Score:5, Informative)
Re: Data integrity (Score:5, Interesting)
ECC RAM is an important part here, due to how scrubbing works in ZFS. The background disk scrubbing can check every block on the filesystem to see if it still matches its checksum, and it tries to repair issues found too. But if your memory is prone to flipping a bit, that can result in scrubbing actually destroying data that was perfectly fine until then. The worst case impact could even destroy the whole pool like that. It's a controversial issue; the odds of a massive pool failure and associated doom and gloom are seen as overblown by many people too. There's a quick summary of a community opinion survey at ZFS and ECC RAM [mikemccandless.com], but sadly the mailing list links are broken and only lead to Oracle's crap now.
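For reference, the scrub being discussed is a one-liner. A minimal sketch, assuming a pool named "tank" (hypothetical):

    zpool scrub tank       # walk every block, verify checksums, and repair from redundancy where possible
    zpool status -v tank   # shows scrub progress and lists any files with unrecoverable errors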
Re: (Score:2)
What are the chances of the exact same sector being corrupt on at least three disks in a raidz2 vdev? This doesn't seem like a plausible scenario.
Re: Data integrity (Score:4, Informative)
That's what you have backups for.
Re: (Score:2)
That depends on the reason for the failure. If it's because there's a little bit of dust on the platter, or a manufacturing defect in the substrate, then it's very unlikely. If it's because of a bug in the controller or a poor design on the head manipulation arm, then it's very likely.
This is why the recommendation if you care about reliability more than performance is to use drives from different manufacturers in the array. It's also why it costs a lot more if you buy disks from NetApp than if you buy
Re: Data integrity (Score:5, Insightful)
ZFS doesn't have ECC, but it does checksum each block, so it can detect per-block errors. If you have valuable data, you can set the copies property to some value greater than 1 for that data set and it will ensure that each block is duplicated on the disk, so if one copy fails a checksum then the other will be used to recover. If you have three disks, you can use RAID-Z, which loses you 1/3 of the space (not 1/2) and allows any single-disk failure to be recovered. Running a zpool scrub will make it validate all of the data and recover from the other disks whenever a read fails its checksum.
The reason it doesn't use ECC is that ECC doesn't mesh well with the failure modes of disks. ECC is used in RAM because when it gets hot, hit by a cosmic ray, or whatever, it is common for a single bit to flip (in a single direction, which makes the error correction easier). In a disk, you typically have an entire block fail, not a single bit. Modern disks use multiple levels, so the smallest failure that is even theoretically possible might be a single byte (or nibble) in a block. And since the failure isn't biased, you'd need a fairly large amount of space. A better approach would be for the filesystem to generate something like Reed–Solomon code blocks for every n blocks that are written. This would allow single-block errors to be recovered, as long as the other blocks are okay. The downside of this approach is that the error-correcting block would need to be rewritten whenever any of the other blocks is modified, so a single-block write would end up triggering a lot of reads, and that would hurt performance. This might be relatively easy to add to ZFS, though, as it uses a CoW structure, so block overwrites are relatively rare (although erasing a lot of data would require a lot of checksums to be recalculated), and since blocks are written out in transaction groups, including an error-correction block at the end of each group might be a fairly simple modification.
all i want is BP-rewrite (Score:5, Informative)
If this gets us BP-rewrite, the holy grail of ZFS, I'll be a happy man.
For those who don't know what it is - BP-rewrite is block pointer rewrite, a feature that has been promised for many years now but has never come. It's a lot like cold fusion in that it's always X years away from us.
BP-rewrite would allow implementation of the following features
- Defrag
- Shrinking vdevs
- Removing vdevs from pools
- Evacuating data from a vdev (say you wanted to destroy your old 10-disk vdev and add it back to the pool as a vdev with a different number of disks)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
And, of course, very importantly, the ability to add drives to a RAID-Z array [superuser.com] after it has been created.
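Right now the closest you can get is adding a whole new vdev alongside the existing one, rather than widening it. A rough sketch, with hypothetical pool and device names:

    zpool add tank raidz1 da4 da5 da6   # adds a second raidz vdev; the original vdev itself cannot be widened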
Re: (Score:2)
Why would ZFS need defrag support? UFS never had defrag support, and the only time that ever became a problem was when the disk was running out of room, which is bad for performance anyway.
Re:all i want is BP-rewrite (Score:5, Informative)
Re: (Score:3)
So you propose that we kill array performance for a bit to defragment? Do you have any idea how long it takes to defragment multiple terabytes of data? On a multi-user multitasking OS access is more random anyhow, so it's not like your contiguous files are likely to be read sequentially.
No, for a mission-critical system that actually has a workload, it's probably much easier/better to just maintain free space.
Re: (Score:3)
Ideally, in something like ZFS you'd want background defragmentation. When you pull a file that hadn't been modified for a while into the ARC, you'd make a note. When it's about to be flushed unmodified, if there is some spare write capacity you'd write the entire file out contiguously and then update the block pointers to use the new version.
That said, defragmentation is intrinsically incompatible with deduplication, as it is not possible to have multiple files that all refer to some of the same blocks all being con
Re:all i want is BP-rewrite (Score:4, Insightful)
You are correct that the disk will become fragmented, but the implication is that fragmentation is a problem, and that's simply not true. One of the prime causes of the misunderstanding is that fragmentation in Unix file systems is night-and-day different from fragmentation in a FAT file system, which is where most people got used to defragging Windows drives. Unix file systems use much better algorithms to control fragmentation, so there is (generally) a lot less of it on a per-file basis. They also automatically defragment: there are cases where, when a fragmented file is written to, the file system will defragment part of that file and rewrite it.
The Berkeley FFS was the first to "solve" this problem, reserving 10% of the disk space primarily to avoid fragmentation (see the sketch at the end of this comment for how to inspect that reserve). Decades of experience show that for all but the most corner of corner cases, that is enough, causing no significant fragmentation or performance degradation.
* http://www.eecs.harvard.edu/~keith/research/tr94.html
* http://www.cs.berkeley.edu/~brewer/cs262/FFS.pdf
* http://www.cs.rutgers.edu/~pxk/416/notes/12-fs-studies.html
* http://pages.cs.wisc.edu/~remzi/OSTEP/file-ffs.pdf
The result is that for most applications fragmentation is a complete non-issue. After 25 years of playing with various file systems I've only seen it be an issue once, on an NNTP server that reached 20% fragmentation. Most user desktops and general purpose servers have under 1% fragmentation at all times. Generally, if you have a fragmentation problem it's because the storage is too full, and you need to add storage anyway (the aforementioned NNTP server was a good example). Adding the storage makes the problem go away.
Most users of Unix file systems will never need to give fragmentation a second thought.
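For what it's worth, the FFS reserve mentioned above is still visible and tunable on a BSD box. A sketch with a hypothetical device name (the filesystem should be unmounted first):

    tunefs -p /dev/ada0p2     # print current parameters, including the minimum free space percentage
    tunefs -m 8 /dev/ada0p2   # adjust the reserve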
Still CDDL... (Score:5, Informative)
Oh well. I'd somehow hoped "truly open source" meant BSD license, or LGPL.
Re: (Score:3, Informative)
Re: (Score:3)
CDDL is basically LGPL on a per-file basis.
Perhaps the intent of the licenses is similar, but there's more to a license than that. Unfortunately, being licensed under the CDDL causes a lot more license incompatibility restrictions than either the LGPL or BSD license do. If it were under one of those, there'd be hope for seeing it as an included filesystem in the Linux kernel. But since it's under the CDDL, that can't happen.
The developers are, of course, welcome to use whatever license they like. Just pointing out that the CDDL is *not* basicall
Re: (Score:2, Insightful)
The GPL is the problem here, not the CDDL.
It's funny how you cite license incompatibility restrictions, but Linux is the only one having those problems.
OS X, FreeBSD and others don't seem to be having any problems with the CDDL.
Gee, I wonder why.
The problem for whom? (Score:2, Insightful)
You clearly have not been paying attention to the news, have you?
After the Snowden leaks regarding general malfeasance by security agencies against the encryption standards that we require to communicate safely and securely (like with your bank, just saying), you can't trust any software that you can't build from scratch (or that you know other, more capable people can build from scratch).
The GPL guarantees that no stupid institution or individual has free rein to corrupt the computational resources you are using.
Other lic
Re: (Score:3)
In fairness, it's the GPL that has the incompatibility problem, not the CDDL.
CDDL is compatible with BSD, Apache2, LGPL, etc.
GPLv2 is incompatible with CDDL, Apache2, GPLv3, LGPLv3, etc.
Even if the license were not CDDL, it would have to be released under a license that came with a patent clause, which means GPLv3, LGPLv3, Apache2 or similar, all of which are incompatible with GPLv2, which Linux is licensed under.
CDDL isn't the problem.
Re: (Score:2)
Which would require a from-scratch cleanroom rewrite, probably.
They could probably work on that, but if the current license isn't causing too much trouble, they probably have more important things to work on.
Patents? (Score:4, Insightful)
Not to rain on anybody's parade, but will the commercial holders of ZFS allow this? Or will they unleash some unholy patent suit to keep it from happening?
Re: (Score:3)
Re:Patents? (Score:5, Informative)
If you're successful, Larry will come a callin' (Score:3, Funny)
As long as Oracle's patents are valid, can anyone seriously believe this will go anywhere?
His fleet of boats isn't going to pay for itself.
Re: (Score:3)
You mean that fleet of losing boats? Last time I checked it was 7-1 NZ with first to 9 winning.
Re: (Score:2)
Re:If you're successful, Larry will come a callin' (Score:5, Informative)
Re: (Score:2)
Oracle released ZFS under a BSD compatible license. Anyone is allowed to do whatever to the opensource code.
GP was talking about patents. If they had released it under (L)GPLv3 or Apache2, users would be safe from patents suits.
Re: (Score:3)
Re:If you're successful, Larry will come a callin' (Score:5, Funny)
Collecting money from open source companies? Daryl McBride will turn in his grave if Larry is even stupid enough to try it...
Eh? I don't think that the Mormons bury their living, no matter how ghoulish are the corporations that they helm.
I'm afraid Daryl McBride will be quite operational when your friends' commits arrive...
Re: (Score:2)
Eh? You mean Darl McBride and not Daryl McBride? I usually do not nitpick on small stuff like this but this pig vomit should be remembered by his correct name. We don't want to assign blame for what he did to some other innocent person.
Advantages of ZFS over BTRFS? (Score:3, Insightful)
Re: (Score:2)
btrfs is still considered experimental by the devs; zfs is used in production.
Past that, btrfs does not seem to support any sort of SSD caching, which is really a requirement for any modern fs.
Re: (Score:2)
aka bcache + any filesystem you want (Score:4, Informative)
Using a small, fast SSD as a cache for large, slow disks can be awesome for some workloads, mostly servers with many concurrent users.
To do that with ANY filesystem, bcache is now part of the mainline kernel. dm-cache does the same thing, and there is another one that Facebook uses.
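A rough sketch of a bcache setup, with purely hypothetical device names (sdb as the slow backing disk, sdc as the SSD):

    make-bcache -B /dev/sdb -C /dev/sdc   # create the backing and cache devices in one step, attached together
    mkfs.ext4 /dev/bcache0                # then put whatever filesystem you like on the combined device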
Re: (Score:2)
Re: Advantages of ZFS over BTRFS? (Score:2)
It is called the ZIL - the ZFS intent log, IIRC
Re: (Score:2)
Re: (Score:2)
There are two uses for SSDs in a ZFS pool. The first is L2ARC. The ARC (adaptive replacement cache) is a combination of LRU / LFU cache that keeps blocks in memory (and also does some prefetching). With L2ARC, you have a second layer of cache in an SSD. This speeds up reads a lot. Data that is either recently or frequently used will end up in the L2ARC and so these reads will be satisfied from the flash without touching the disk. The bigger the L2ARC, the better, although practically if it's close to
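The two roles map onto two commands. A minimal sketch, assuming a pool named "tank" and hypothetical SSD device names:

    zpool add tank cache ada1   # SSD as L2ARC, a second-level read cache behind the in-RAM ARC
    zpool add tank log ada2     # SSD as a separate intent log (SLOG), which speeds up synchronous writes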
Re:Advantages of ZFS over BTRFS? (Score:5, Informative)
I don't have any practical experience with BTRFS, but I use ZFS heavily at work.
The advantage of ZFS is that it's tested, and it just works. When I started with our first ZFS testbed, I abused that thing in scary ways trying to get it to fail: hotplugging RAID controller cards, etc. Nothing really scratched it. Over the years I've made additional bad decisions such as upgrading filesystem versions while in a degraded state, missing logs, etc, but nothing has ever caused me to lose data, ever.
The one negative to ZFS (if you can call it that) is that it makes you aware of inevitable failures (scrubs catch them). I'll lose about 1 or 2 files per year (out of many, many terabytes) just due to lousy luck, unless I store redundant high-level copies of data and/or metadata. Right now I use stripes over many sets of mirrored drives, but it's not enough when you read or write huge quantities of data. I've run the numbers and our losses are reasonable, but it's sobering to see the harsh reality that "good enough" efforts just aren't good enough for 100% at scale.
Re: (Score:2)
The one negative to ZFS (if you can call it that) is that it makes you aware of inevitable failures (scrubs catch them). I'll lose about 1 or 2 files per year (out of many, many terabytes) just due to lousy luck
What? Interesting.... I never lost a file on ZFS... ever; and I was doing 12TB arrays, for VMDK storage; these were generally RAIDZ2 with 5 SATA disks, running ~50 VMs. Then in ~2011, concatenated mirrored sets of drives; large number of Ultra320 SCSI spindles in a direct attach SCSI
Re: (Score:2)
Re: (Score:2)
It sounds like he disabled/reduced ZFS's default to keep extra copies of meta-data.
That would seem to require altering the source code. At least in the Solaris X86 ZFS implementation; there is no zpool or zfs dataset option to turn off metadata redundancy.... of course it would be a bad idea.
Re: (Score:2)
I corrupted some files by the following:
This is a home setup, all parts are generic cheapo desktop grade components, except slightly upgraded rocket raid cards in dumb mode for additional sata ports:
4 HDDs, 2 vdevs that are 2-drive mirrors (essentially RAID 1+0 with 4 drives)
1 drive in a 2 drive mirror fails, no hot spare.
When inserting a replacement drive for the failed drive, the SATA cable to the remaining drive in the mirror was jiggled and the controller considered it disconnected.
The pool instantly went
Re: (Score:3)
You can't expect much better than what it did considering an entire vdev (both drives in the mirror) went offline as data was being written to them.
I do expect better, because ZFS is supposed to handle this situation, where a volume goes down with in-flight operations; the filesystem by design is supposed to be able to re-Import the pool after system restart and recover cleanly....
That shouldn't have happened; it sounds like either the hard drive acknowledged a cache FLUSH, before data had been w
Re: (Score:2)
Doesn't look like he had a ZIL from the description of the hardware. So it's totally understandable that he might have corruption.
Re: (Score:3)
I had an upgrade path similar to yours, starting with RAIDZ and moving to a group of mirrors. I try not to let any pool get too big, so there are maybe 20 drives/pool. It's always the small files that are lost (see post above). I think each server does about 6 PB/year each direction on these highly-accessed files, so I think it's reasonable to drop ~1MB of non-critical files (they basically store notes of data analysis).
So far I've never had a problem with VM images, but now we're mitigating that by addin
Re: (Score:2)
Apparently "never lost data" must mean never lost an entire filesystem -- that's not my definition. Usually file loss is user error.
ZFS does support snapshots, and Nexenta / FreeNAS / etc have snapshot options, and replication options (zfs send | zfs recv) available, for sure.
It's a highly resilient filesystem, but owning and using a highly resilient filesystem is not a replacement for having the proper backups.
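The replication mentioned above is pleasantly simple. A minimal sketch with hypothetical dataset and host names:

    zfs snapshot tank/data@nightly
    zfs send tank/data@nightly | ssh backuphost zfs recv -F backup/data   # -F rolls the target back to its latest snapshot before receiving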
Re: (Score:3, Informative)
You don't understand. ZFS didn't lose that data -- ZFS detected that the underlying disk drives lost that data. You can run ZFS in highly redundant modes that allow it to reconstruct lost data, but it sounds like OP's redundancy is such that enough drives may lose bytes to cause lost files.
Re:Advantages of ZFS over BTRFS? (Score:5, Interesting)
This is correct.
It is statistically assured that you will lose some data with anything less than obscene redundancy. I've run the numbers and we've settled on what's acceptable to us: we have offline backups far more frequently than 2 times/year for everything, so dropping about 2 files/year that are completely unrecoverable without backups isn't a big deal.
These systems are running a moderate number of very large static files, mixed with a very large number of very small files. The small files are SQLite-style records, and we churn through them very rapidly. I don't know exactly why, but it is always these small files that we lose: there is clearly a bias towards things that are written frequently. The analyst in me is quick to point out that implies failures in ZFS itself, beyond just the disks and "bit rot", but the accelerated failure isn't enough to worry about. So our non-failure rate is easily 6-nines or better per year on the live storage system, but it's still a bit uncomfortable to know that some data is going to be gone, despite that.
With a minimal amount of effort you can get hardware and software which is no longer the biggest threat to your data. I am personally the most likely source of a catastrophic failure: operator error is more likely than an obscure hardware failure. ZFS has allowed me to reduce that operator error (snapshots, piping filesystems, nested datasets with inheritance), and simultaneously it's outperforming other options on both speed and security. Overall, I'm extremely pleased.
Re: (Score:2)
It means the files were lost from the filesystem, and he was notified and recovered them from the backups. Which is a hell of a lot more than what other filesystems would do for you. One of the benefits of ZFS is that it makes it a lot easier to monitor for bit rot.
Re: (Score:3)
Re: (Score:3)
I'm sure I'll be corrected if I'm wrong, but does it offer any advantage over BTRFS? I'm not trying to start a flame war; I'm honestly asking.
BTRFS is still highly experimental. I had production ZFS systems back in 2008. A mature ZFS implementation is a lot less likely to lose your data with filesystem code at fault (assuming you choose appropriate hardware and appropriate RAIDZ levels with redundancy).
Re: (Score:2)
Re:Advantages of ZFS over BTRFS? (Score:5, Insightful)
Nice FUD there. You picked the btrfs-progs, which are the userspace tools, not the actual btrfs filesystem driver.
http://git.kernel.org/cgit/linux/kernel/git/josef/btrfs-next.git/log/ [kernel.org]
Re: (Score:3)
BTRFS has a large number of features that are still in the "being implemented", or "planning" stages. In contrast, those features are already present, well tested, and in production for half a decade on ZFS. Many touted "future" features (such as encryption) of BTRFS are documented as "maybe in the future, if the planets are right, we'll implement this. But not anytime soon"
Comparing the two is like making up an imaginary timeline where ReiserFS 3 was 4-5 years old and in wide deployment while ext2 was bein
Re: (Score:2)
ZFS is tested and has been used in a huge number of different environments with very positive feedback for well over a decade. I do not know of any catastrophic failures (though there must be some).
BTRFS requires the latest version of the Linux kernel and of itself to work. I have no clue about testing (removing disks on the fly, etc.) and it is definitely not widely deployed or proven to work yet (a few anecdotes do not count).
BTRFS seems to be only slightly more robust than it was five years ago - during this time I have lost t
Re: (Score:3)
I'm playing around with btrfs at the moment, and I've spotted some inconsistencies in the document you mentioned.
* Subvolumes can be moved and renamed under btrfs. I do this on a daily basis.
* btrfs can do read-only snapshots. Mind you, it does have to be specified.
* As far as I can tell, "df" does work fine with btrfs. The document implies it does not.
I am still quite new to btrfs, so I'm learning much at the moment. There may be more points that I've missed.
It seems, though, your document is a bit out
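For anyone wanting to check those points themselves, the commands are short (paths are hypothetical):

    btrfs subvolume snapshot -r /data /snapshots/data-ro   # -r creates a read-only snapshot
    btrfs filesystem df /data                              # btrfs-aware space reporting
    mv /snapshots/data-ro /snapshots/data-renamed          # subvolumes can be renamed and moved like directories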
Re: (Score:3)
Gotcha. So btrfs and df play up only under a raid1 situation. That explains why I didn't notice any problem.
As for snapshots, I've set up an automated snapshot system using btrfs. Main volume is mounted to /snapshots. One subvolume is created in there, and is then separately mounted to /data . Snapshots are created under the /snapshot directory, while /data is the path used by applications. I've created a nightly script which renames all previous snapshots, and then creates a new snapshot. It all wor
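A nightly rotation like the one described might look roughly like this; it's a sketch with hypothetical paths and names, not the poster's actual script:

    #!/bin/sh
    btrfs subvolume delete /snapshots/nightly.2 2>/dev/null   # drop the oldest snapshot
    mv /snapshots/nightly.1 /snapshots/nightly.2 2>/dev/null  # shuffle the others down
    mv /snapshots/nightly.0 /snapshots/nightly.1 2>/dev/null
    btrfs subvolume snapshot -r /data /snapshots/nightly.0    # take tonight's read-only snapshot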
Re: (Score:3)
That's never been true; you always had the option of detaching or outright deleting just one disk, you just had to do it in a careful manner so as not to delete things you didn't want to delete.
Also resizing a volume on a disk is a risky operation to engage in. If it's something that you really need to do, the correct way is to back up the data to a separate disk and restore it to a new volume. Resizing volumes is not exactly in keeping with the philosophy that led to ZFS being created.
Re: (Score:2)
Re: (Score:2)
Once a zfs filesystem is created that's it. No resize support
Minor correction: Once a ZFS pool is created, that's it. Filesystems are dynamically sized. You can also add disks to a pool, but not to a RAID set. You can also replace disks in a RAID set with larger ones and have the pool grow to fill them. You can't, however, replace them with smaller ones.
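For example, growing a mirrored pool by swapping in bigger disks goes roughly like this (pool and device names hypothetical):

    zpool set autoexpand=on tank
    zpool replace tank da1 da3   # replace one old disk, wait for the resilver to finish
    zpool replace tank da2 da4   # once every disk in the vdev is larger, the extra capacity appears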
Re: (Score:3)
Limited number of drive slots + moving to a smaller, but faster platter in one or more of those slots.
Re: (Score:2)
Still no encryption... *sigh* (Score:3)
I wish they had encryption... *sigh*
No, I don't want workarounds, I want it to be built in to ZFS like in Solaris 11.
Re: (Score:2)
There are no satisfactory workarounds, and never will be. The crypto needs to be handled within ZFS or it becomes an over complicated and inefficient mess. (As you are probably aware.) Consider a ZFS mirror on top of two disks encrypted by the OS; even though the data is identical, it now needs to be encrypted twice on write, and decrypted twice on scrub. For ditto blocks, multiply the amount of crypto work by another two or three. There are now (at least) two keys to manage and still no fine granularity
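For concreteness, the layered workaround being criticized looks something like this on FreeBSD with GELI (device names hypothetical); every block really is encrypted once per mirror leg:

    geli init -s 4096 /dev/ada1 && geli attach /dev/ada1
    geli init -s 4096 /dev/ada2 && geli attach /dev/ada2
    zpool create tank mirror ada1.eli ada2.eli   # ZFS mirrors the two already-encrypted providers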
How does ZFS compare to btrfs? (Score:2)
How does ZFS compare to btrfs? Several intentionally unnamed and unlinked commentaries on ZFS's apparent omission from current Mac OS X refer to btrfs as the more GPL-compliant alternative to ZFS. I need more information, as I do not think btrfs has the same aggressive checksumming and automatic volume sizing features that ZFS does.
Thanks.
Re: (Score:2)
Re:FINALLY. (Score:4, Informative)
licensing or patent issues [sys-con.com]?
What you also forget is that Oracle was the leading proponent of BTRFS and yes it had to do with licensing and patents from Sun. Once they acquired Sun that all went out the window. If I were the CEO at Oracle I'd ask "Why two file systems that essentially do the same thing? One's mature and the other, not so much" That's why BTRFS still survives but now with less Oracle support. Wait, is that a bad thing?
Re: (Score:2)
Re: (Score:3, Informative)
Been using btrfs for several non-essential file systems. Working great so far, and have even done several successful bedup runs. Has worked great for minimizing disk usage on some Maven repositories with lots of duplicate files between Jenkins and Nexus. Maybe not tested enough for your server that you need to stay up all the time, but great for the home desktop (provided you're sane and are keeping backups, which you should be doing already anyway). The more testing it gets, the sooner it becomes "tested e
Re: (Score:3, Funny)
You don't have a multi-petabyte array with mission critical data at home?
Re: (Score:2)
Re: (Score:3)
It doesn't have to be POSIX compliant to have it ported to it and it doesn't require somebody to pay for licensing. With the Features of ZFS one could argue that a port to at least Windows Server would be great and it would garnish quite a following from those who've had to put up with the way NTFS views disk volumes and storage. There are applications that run well on Windows, especially on the Server side of things so I wouldn't call it dead quite yet. Besides, with Server 2012 we now have Storage Spac
Re:ZFS for Windows? (Score:5, Informative)
It doesn't have to be POSIX compliant to have it ported to it and it doesn't require somebody to pay for licensing. With the Features of ZFS one could argue that a port to at least Windows Server would be great and it would garnish quite a following from those who've had to put up with the way NTFS views disk volumes and storage.
Windows isn't a very friendly development platform for Open Source, starting with the licensing requirements for tools and distribution restrictions on binaries derived from those tools when using header files containing substantial code, or runtime libraries. Part of this is an intentional legal defense against WINE and CrossOver Office, and part of it is just scale management by limiting the support community requirements to "serious developers".
In addition, a lot of the installable filesystem and similar code, as well as a lot of the necessary VM internals (memory-mapped files and paging/swapping from filesystems), are not adequately explained (i.e. they involve locking text regions with level 0 locks, which require a level 3 lock then a level 0 lock, just to get the offsets on the physical media for the blocks in question). This used to not work on removable media in NT as of 4.0.1; not sure if it's supported yet, but it was the reason you couldn't install it on Jaz drives or even regular hard drives in removable carriers.
Having developed a filesystem for Windows95 IFSMgr, and reverse engineered all this crap, and having done it again for NT3.51, I would not look forward to having to repeat the process for Windows 7 or Windows 8, which are the only useful versions to target for by the time the code ends up functional.
So unless someone wanted to seriously underwrite the effort (read: it'd have to be done by Oracle, or by a startup with a monetization strategy that Microsoft wouldn't preempt, like they did when my team, at a previous employer, ported UFS + Soft Updates to Windows 95, and they announced Longhorn-which-never-happened and then put together a lawsuit about "deep reverse engineering" which would have precluded using it as a bootable FS)... no thanks.
Re: (Score:2)
Windows isn't a very friendly development platform for Open Source, starting with the licensing requirements for tools and distribution restrictions on binaries derived from those tools when using header files containing substantial code, or runtime libraries.
Well, the tools are free and there isn't a redistribution problem, never has been.
Now, you could argue that ZFS and Windows won't work unless MS does it because ZFS is the whole disk I/O stack rolled into one, and no driver is going to work with the kernel to allow the ZFS system to work in Windows, but that's another story entirely. There's no way to bypass the disk cache for instance, not in a way ZFS would be compatible with. ZFS must use its own cache, and directly access the raw devices, and provide th
Re: (Score:3)
Windows isn't a very friendly development platform for Open Source, starting with the licensing requirements for tools and distribution restrictions on binaries derived from those tools when using header files containing substantial code, or runtime libraries.
Well, the tools are free and there isn't a redistribution problem, never has been.
Not according to this document; the runtime components are not redistributable. This is an Anti-WINE license measure:
http://msdn.microsoft.com/en-us/library/ms235299(v=vs.90).aspx [microsoft.com]
Now, you could argue that ZFS and Windows won't work unless MS does it because ZFS is the whole disk I/O stack rolled into one, and no driver is going to work with the kernel to allow the ZFS system to work in Windows, but that's another story entirely. There's no way to bypass the disk cache for instance, not in a way ZFS would be compatible with. ZFS must use its own cache, and directly access the raw devices, and provide the filesystem driver all rolled into one ... but spread all across the kernel, in order to get proper performance.
Could get pretty close with some good hacks though, such as FUSE.
This is actually reverse-engineerable. FUSE isn't an option, since pages which get memory mapped and dirtied are not propagated up via invalidation events. This is the same problem the Heidemann stacking framework has if you stack FS A on top of FS B, and then expose both of them as visible in the mount hierarchy namespace. Yo
Re: (Score:3)
(You seem to write well so you'll probably appreciate being reminded it's "garner" not "garnish")
Re: (Score:2)
WTF are you smoking. POSIX compatibility is easy to achieve, and you can get it on Windows by installing the optional SFU package. Too bad POSIX says nothing about file system driver interfaces - that's entirely kernel-dependent, and even varies between BSDs.
Re:Cool, but.. (Score:5, Insightful)
Everything else is already handled with LVM and software RAID.
You have a great sense of humor, keep it up.
Re:Cool, but.. (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Selective encryption means that you have to be incredibly careful that sensitive data never hits a non-encrypted portion of the disk. So, I'd say that the full disk encryption is the cleaner option.
Re: (Score:2)
Full disk encryption isn't a fad; it's the only way to do this job well. Selective encryption makes people relax because it gives a false sense of security, but there are so many holes that you're still quite vulnerable. In some ways it's worse than no encryption at all, because people at least know they have to be careful about their system then.
The first giant issue is that operating systems and programs like editors will write work in progress data to disk outside of the encrypted area, such as temporary
Re: (Score:3)
Temporary files and swap aren't a problem...
Swap can and should be stored on a separate partition, and encrypted using a randomly generated key so it's completely lost after a reboot.
On a properly configured system, only a very small number of locations will be writable by the user, typically the user's home directory and a temporary area... The temporary area can be stored in ram/swap since it doesn't matter if its contents are lost, and home can be encrypted.
It's trivial to add a hardware key logger to virt
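The throwaway-key swap described above is a one-line change on FreeBSD, for example (partition name hypothetical):

    # /etc/fstab: the .eli suffix makes GELI attach the swap device with a random one-time key at boot
    /dev/ada0p3.eli   none   swap   sw   0   0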
Re: (Score:2)
If someone has physical access to a system for long enough, of course any security can be bypassed and the system must be considered tampered. But a fully encrypted system cannot be compromised in only a minute or two of access; one with an unencrypted boot drive certainly can be. And time to exploit impacts how vulnerable you are in very common real world situations.
A regular full disk encryption candidate is a laptop you leave home with. I will sometimes leave my laptop sitting at a table with someone