ZFS, the Last Word in File Systems?
guigouz writes "Sun is carrying a feature story about its new ZFS File System - ZFS, the dynamic new file system in Sun's Solaris 10 Operating System (Solaris OS), will make you forget everything you thought you knew about file systems. ZFS will be available on all Solaris 10 OS-supported platforms, and all existing applications will run with it. Moreover, ZFS complements Sun's storage management portfolio, including the Sun StorEdge QFS software, which is ideal for sharing business data."
Cool but.... (Score:3, Interesting)
Perhaps they had to rewrite an LVM from scratch in order to open-source it?
Will it be open source? (Score:2, Interesting)
I wonder if that means that this filesystem can be included in other kernels.
UFS2/SU (Score:3, Interesting)
But with ZFS, maybe we've finally found an FS worth replacing it with. I sure look forward to trying Solaris 10, though I'm sure I'll find that SunOS has a better feel to it, as always.
Maybe DragonFlyBSD will be the one to do this; FreeBSD is generally more resistant to radical changes, and for good reason: you don't get that kind of stability otherwise.
What sort of crap is this? (Score:4, Interesting)
Can someone please provide a link to some technical details other than it being 128-bit? What does this file system actually do that is even remotely special? What's under the covers? And, more importantly, does it actually work as described?
-1, Uninformative
Re:Open source (Score:5, Interesting)
Assuming Solaris 10 will be truly open source (not like Microsoft's "shared source") as well as GPL-compatible, would I be able to use ZFS on my GNU/Linux desktop? Would ZFS be a viable alternative to ext3 and ReiserFS? Or is the overhead too big?
Different architecture, same functionality? (Score:4, Interesting)
So far it sounds just like RAID, but:
I guess I just don't get it. I know they are talking about logical corruption and not physical failure, but isn't this basically RAID combined with something like SMART?
And what kinds of corruption can there be? Journaling filesystems already work well for write errors and such, or so I thought.
I know the architecture seems innovative and different (at least for me), but is there really new functionality?
Sorry if I seem ignorant this time; I'm not sure I got my point across. The things this filesystem does, wouldn't they be better handled at a different layer?
Re:Hmf. (Score:5, Interesting)
It's your density, Luke.
Curious points (Score:4, Interesting)
OK, that aside. First 128-bit file system, and get this: a transactional object model.
I think this means it's optimistic, but they figure it has blazing-fast performance, and who am I to argue? I'm fed up with killing this indexing garbage on the work machine. Bloody Microsoft: I disabled it and everything, yet every full moon it seems to come out and graze on my HDD platter.
From the MS article: This perfect storm is comprised of three forces joining together: hardware advancements, leaps in the amount of digitally born data, and the explosion of schemas and standards in information management.
Then I started to suspect they would rant about Moore's law, and sure e-bloody-nough:
Everyone knows Moore's law--the number of transistors on a chip doubles every 18 months. What a lot of people forget is that network bandwidth and storage technologies are growing at an even faster pace than Moore's law would suggest.
That is like saying: everyone knows the number 9 bus comes at half three on Wednesdays, but no one expects three taxis sat there doing nothing at half past three on a Tuesday.
Can we put this madness to rest? OK, back to the articles.
erm... lost track now....
Patents and other Bad Signs. (Score:5, Interesting)
This article is shocking. I'm used to much less hype and far more technical details from Sun. Software patents and bullshit are not what I expect when I follow a link to them.
I don't like any of this.
Re:64 bits is awfully big already (Score:4, Interesting)
Somehow, an alternate history where the 80286 was 64-bit instead of 16-bit (with everything else staying the same) comes to mind when reading Sun's marketing on this.
Re:Hmf. (Score:2, Interesting)
The highest-speed systems currently available can (maybe) transfer data at 300 MB/s or so. At that rate, transferring a dataset of 2^40 bytes (a mere 40-bit address space) takes approximately an hour. A 64-bit dataset is more than 16 million times as large, which means it would take nearly two millennia to transfer on today's best systems.
Even if transfer rates increase by two orders of magnitude (effectively unthinkable for the foreseeable future without entirely new and currently unknown technologies), you've still only reduced that time from 2,000 years to 20 years.
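For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch in Python (the 300 MB/s rate is the poster's assumption):

```python
# Sanity check of the transfer-time estimates above.
rate = 300e6                          # assumed transfer rate: ~300 MB/s
sec_40bit = 2**40 / rate              # time to move 2^40 bytes
sec_64bit = 2**64 / rate              # 2^24 (~16.7 million) times longer

print(sec_40bit / 3600)               # ~1.0 hour
print(sec_64bit / (3600 * 24 * 365))  # ~1,950 years: "nearly two millennia"
```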
British or American? (Score:3, Interesting)
True. However, it is more ambiguous than "million million million", as absent-minded Brits might interpret it as a "million million million million".
Or would you rather they say 6.0 × 10^18?
Yes.
Most people can't imagine that.
Most people can't imagine it anyway, whether you call it "six billion billion", "6.0 x 10^18", "6 x 2^60", or "1.27 x e^43". Or understand any number higher than the number of dollars they carry in their wallet, for that matter. Anyone who needs to make any decisions in life based on this ZFS number ought to be able to understand it in any of those ways (although getting help from a calculator for the last one or even two is understandable). Of course, many people manage things they can't understand. This is life.
Re:billion billion? (Score:4, Interesting)
Whenever I see or hear the word "billion", the first thing I ask is: is that a US billion or a British billion?
"Six times ten raised to the power of eighteen" seems much clearer and more precise.
Re:Hmf. (Score:2, Interesting)
It would have been ridiculous AT THE TIME to address more data. CPUs and software weren't there yet.
Look, there are limits to the amount of stuff people need! Yeah, 640K wasn't enough, but that doesn't mean 6 billion terabytes isn't going to be enough for you tomorrow.
You know what
Re:What I really want to see in a file system... (Score:2, Interesting)
Why is that? There's nothing inherently impossible about having the OS remember, via a transaction log, the changes a process has made to a set of files, and then either committing them all or rolling them all back at process exit time (or whenever the process calls commit() or rollback()). The file operations themselves can be identical, so all you really need are those four additional operations I mentioned previously.
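As a rough illustration of the idea (not any real OS API; the FileTransaction class and its write/commit/rollback names are made up here), a process-level transaction log could look something like this:

```python
import os
import shutil
import tempfile

class FileTransaction:
    """Toy sketch of transactional file updates via a log.

    Hypothetical API; no real OS exposes exactly this. Writes are
    staged as copies and only renamed into place on commit(), so a
    rollback() (or a crash before commit) leaves the originals intact.
    """

    def __init__(self, workdir="."):
        # Stage on the same filesystem so the final rename is atomic.
        self.staging = tempfile.mkdtemp(dir=workdir)
        self.log = []  # list of (staged_path, final_path)

    def write(self, path, data):
        staged = os.path.join(self.staging, str(len(self.log)))
        with open(staged, "wb") as f:
            f.write(data)
        self.log.append((staged, path))

    def commit(self):
        for staged, path in self.log:
            os.replace(staged, path)  # atomic rename on POSIX
        self._cleanup()

    def rollback(self):
        self._cleanup()  # discard staged copies; originals untouched

    def _cleanup(self):
        shutil.rmtree(self.staging, ignore_errors=True)

# Usage: all-or-nothing update of two files.
txn = FileTransaction()
txn.write("a.conf", b"new a")
txn.write("b.conf", b"new b")
txn.commit()   # or txn.rollback() to abandon both changes
```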
Re:What is their disk allocation scheme? (Score:5, Interesting)
This is a good thing - queueing theory shows a single unified pool has better performance than several smaller ones. People who try to tune databases by dedicating drives to redo logs don't usually realize what they are doing is counterproductive - they optimize locally for one area, at the expense of global throughput for the entire system.
ZFS uses copy-on-write: a modified block is written to a fresh location, not over the old one. This means writes can be batched into large sequential runs (much as in log-structured filesystems), and since the old block is still on disk (until it is garbage collected), you get the ability to take snapshots, something that is vital for making coherent backups now that nightly maintenance windows are mostly history. The downside is file fragmentation, so enough RAM for a good buffer cache helps.
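To make the copy-on-write/snapshot connection concrete, here's a toy sketch (illustrative only; nothing like ZFS's actual on-disk layout):

```python
class CowStore:
    """Toy copy-on-write block store.

    A write never overwrites a live block; it appends a new block and
    repoints the block map. A snapshot is just a copy of the map, and
    it stays coherent because the old blocks are never touched.
    """

    def __init__(self):
        self.blocks = []     # append-only "disk"
        self.block_map = {}  # logical block number -> physical index

    def write(self, lbn, data):
        self.blocks.append(data)
        self.block_map[lbn] = len(self.blocks) - 1

    def read(self, lbn, block_map=None):
        bm = self.block_map if block_map is None else block_map
        return self.blocks[bm[lbn]]

    def snapshot(self):
        return dict(self.block_map)  # cheap: copies the map, not the data

store = CowStore()
store.write(0, b"v1")
snap = store.snapshot()
store.write(0, b"v2")                # old block left in place
assert store.read(0) == b"v2"        # live view sees the new data
assert store.read(0, snap) == b"v1"  # snapshot still reads the old block
```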
Because the scheduler works best when it has full visibility of every physical disk, rather than dealing with an abstract LUN on a hardware RAID controller, they actually recommend hosting ZFS on a JBOD array ("just a bunch of disks", no RAID) and doing the RAID in software within ZFS. Since the RAID is integrated with the filesystem, there is scope for optimizations that aren't available when a filesystem is trying to optimize on one side and a RAID controller (or separate LVM software) is working independently on the other. Network Appliance does something similar with its WAFL filesystem to offer decent performance despite the overhead of NFS.
With modern, fast CPUs, software RAID can easily outperform hardware RAID. It is quite common for optimizations like hardware RAID, made at a certain point in time, to become counterproductive as technology advances and the assumptions behind the premature optimization no longer hold. A long time ago, IBM offloaded some database access code into its mainframe disk controllers. It was a speed boost at first, but as mainframe CPU speeds improved (and the feature was retained for backward compatibility), it ended up being 10 times slower than the alternative approach.
Re:Sounds really nice (Score:3, Interesting)
So, I take it that back in the days of DOS, you never got a CRC error trying to copy an important file off a floppy?
Re:Patents and other Bad Signs. (Score:4, Interesting)
Open source is useless when it's patent-encumbered.
The GPL [gnu.org] states the following...
I thought that if the patent holder distributes patented material under the GPL, it is a declaration that the holder has relinquished control over the patented material for as long as it is applied under the GPL.
Easy upgrades (Score:2, Interesting)
"We're absolutely trying to make disk storage more like memory, and often use that analogy in our presentations. For example, when you add DIMMS to your computer, you don't run some 'dimmconfig' program or worry about how the new memory will be allocated to various applications; the computer just does the right thing. Applications don't have to worry about where their memory comes from. Likewise with ZFS, when you add new disks to the system, their space is available to any ZFS filesystems, without the need for any further configuration. In most scenarios it's fairly straightforward for the software to make the unequivocably best choices about how to use the storage. If you want to tell the system more about how you want the storage used, you'll be able to do that too (eg. this data should be mirrored but that not; it's more important for this data to be accessed quickly but that can be slower). We hope that with relatively modern hardware, all but the most complicated and demanding configurations will be handled adequately without any administrator intervention." read more [sun.com]
Re:Another quote to cherish (Score:3, Interesting)
Yep. I think they might be right on this one.
~D
Re:fileless systems (Score:5, Interesting)
Convenient, and flawed.
XML isn't designed to handle changing data. It's designed to be a data markup language, which indicates it's used for presenting data, not managing data.
So far, the relational model is the best mathematically rigorous method of managing sets of data. There are many advantages to hierarchical data representation, but for manipulation, the relational model still trumps.
Do I want to use SQL to access my files? Not if I don't have to. There are perhaps better methods, even some transparent methods.
But, do I want to continue to self-organize my data? Hell, no! There's just too much information stored on my computer, and on my network, these days. And, considering that much of my data has multiple relationships, the hierarchical model is growing a bit long in the tooth. Many of my documents belong in multiple hierarchies.
But, there might be a real solution soon:
Gnome Storage [gnome.org] looks to be a good first step.
Re:64 bits is awfully big already (Score:4, Interesting)
This is about the same argument as IPv6 addressing: it's expensive to change the size of the address space, so make it absurdly large. Bits of address space are cheap, you enable some interesting unforeseen applications, and you put off a forced migration.
While I agree that 128-bit block addressing is overkill for a single computer, once you're going to expand past a 64-bit filesystem, there's not much point in going smaller than a 128-bit filesystem. It's not like you'd save money making it an 80-bit filesystem.
As to your point about the speed of a hard drive vs. the addressable space in the filesystem, keep in mind that filesystems can be much larger than disks. For example, it's not that unusual (in cooler UNIX environments) for everyone in a company to work in one large distributed filesystem, which may run across hundreds or thousands of hard drives. Now imagine a building full of people working with very large files (e.g. video production), where you could easily accumulate terabytes of data. Wouldn't it be nice to manage your online, nearline, and offline storage as a single, extremely large filesystem? Or, for real blue-sky thinking, imagine that everyone on the planet uses a single shared, distributed filesystem for everything. Wouldn't it be cool to address _everything_ using a single, consistent scheme no matter where you are? Cool, eh?
Think logarithmic scale (Score:3, Interesting)
And c'mon, don't be so down on the hype. Not all big numbers are evil, and the overhead of processing some extra bits is minuscule. The space and time required grow logarithmically with the size of the address space: 128-bit is some billion billion times the size of 64-bit, but takes only twice as much to store and process. That extra cost is already small compared to the actual I/O time, and the space is small compared to the combined storage.
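The point is easy to check: the ratio between the two address spaces is astronomical, but the cost of storing a pointer merely doubles. A two-line demonstration:

```python
# 2**128 addresses ~1.8e19 ("billion billion") times as many blocks
# as 2**64, yet a block pointer only grows from 8 bytes to 16.
print(2**128 // 2**64)    # 18446744073709551616
print(64 // 8, 128 // 8)  # 8 bytes vs. 16 bytes per pointer
```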
Re:64 bits is awfully big already (Score:3, Interesting)
So you don't need anything larger than a 64-bit filesystem unless you're going to have a single volume (real or virtual) that holds more than 16 billion terabytes of data. That's 64 billion 250-gig hard drives. What's the population of China these days? 1.3 billion or thereabouts? If you gave everyone in China fifty 250-gigabyte hard drives, you'd just about fill up a 64-bit filesystem.
And that's only if everyone in China pools those 64 billion hard drives into a single, giant RAID array.
Or everyone on the planet gets ten such hard drives. That's 2.5 terabytes for every single human being alive right now, and we're still within the limits of a 64-bit filesystem.
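A quick sanity check of those figures (taking the 16-billion-terabyte capacity above as given, and assuming roughly 1.3 billion people in China and 6.4 billion worldwide):

```python
# Back-of-the-envelope check of the drive counts above.
capacity_tb = 16 * 10**9  # "16 billion terabytes"
drive_tb = 0.25           # one 250 GB drive

drives = capacity_tb / drive_tb
print(drives)             # 6.4e10 -> "64 billion drives"
print(drives / 1.3e9)     # ~49 drives per person in China
print(drives / 6.4e9)     # 10 drives per person worldwide
print(10 * drive_tb)      # = 2.5 TB for every human being
```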
Your video editing analogy doesn't even come close, and the idea of a whole country using a single, centralized volume (let alone the whole planet) doesn't really make any sense. Addressing all the data in the entire world on every computer at the filesystem level seems like a very bad idea, to me.
Maybe in 10 to 15 years we'll have individual disks large enough that large clusters can exceed the bounds of a 64-bit filesystem, but you'll still have to buy entirely new hardware to take advantage of that capability, so a 128-bit filesystem on today's hardware offers no advantages over a 64-bit one, and in fact only makes things slower. Not really very cool at all, if you ask me (although the other features of the filesystem likely have merit).