ZFS, the Last Word in File Systems?
guigouz writes "Sun is carrying a feature story about its new ZFS File System - ZFS, the dynamic new file system in Sun's Solaris 10 Operating System (Solaris OS), will make you forget everything you thought you knew about file systems. ZFS will be available on all Solaris 10 OS-supported platforms, and all existing applications will run with it. Moreover, ZFS complements Sun's storage management portfolio, including the Sun StorEdge QFS software, which is ideal for sharing business data."
Two things... (Score:5, Insightful)
2) Is it just me, or is the post surprisingly bereft of unique details? I mean, integration with all existing applications is rather assumed, given that it's a file system and all...
Hmf. (Score:5, Insightful)
So, what was the point of creating a 128-bit filesystem?
-1, Marketing Hype.
*Yawn*
Unlimited scalability (Score:3, Insightful)
Re:billion billion? (Score:5, Insightful)
Just better than the old stuff from Sun (Score:5, Insightful)
and
Compared to AIX or HP-UX, 28 steps is shockingly bad, both have had much simpler logical volume management for several versions now (AIX for 5 years or more? certainly as long as I have used it). The existing Solaris 9 logical volume infrastructure is years behind the competition, this is bringing it up to date, but not putting it far ahead.
Ewan
Re:billion billion? (Score:1, Insightful)
OK, I am saying it now, and it won't come back to curse me:
64 bits should be enough for everybody.
Now, here's the deal
Ah yes I can hear people saying "what about large file data sets?"
Reminds me of the Gillette Mach3 versus Schick Quattro lawsuit.
This 128-bit file system only serves marketing purposes. I want to see clearer advantages.
Another quote to cherish (Score:4, Insightful)
Who else instantly thought of, "640 K ought to be enough for anybody", uttered by the chief architect of twenty years of chaos?
Re:Open source (Score:4, Insightful)
fileless systems (Score:2, Insightful)
What I really want to see in a file system... (Score:5, Insightful)
Such a feature would rock, because it would be possible to make things like installers completely atomic: interrupt the installer process and the whole thing rolls back.
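This isn't ZFS, but you can fake the same all-or-nothing effect today with a staging directory and a single atomic rename; a minimal sketch (the function name and layout here are hypothetical):

```python
import os
import shutil
import tempfile

def atomic_install(files, target_dir):
    """Write every file into a scratch directory, then publish the whole
    set with one rename. Kill the process at any point before the rename
    and the target is untouched: the 'rollback' is free."""
    parent = os.path.dirname(os.path.abspath(target_dir))
    staging = tempfile.mkdtemp(dir=parent)
    try:
        for name, data in files.items():
            with open(os.path.join(staging, name), "wb") as f:
                f.write(data)
        os.rename(staging, target_dir)   # atomic on POSIX, same filesystem
    except BaseException:
        shutil.rmtree(staging, ignore_errors=True)
        raise
```

A transactional filesystem generalizes this: the rename becomes a snapshot rollback, and it works for in-place upgrades of existing files too, not just fresh installs.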
Re:billion billion? (Score:5, Insightful)
Patent-pending adaptive endianness? (Score:4, Insightful)
Looks to me like nothing more than an excuse to put up a patent tollbooth for anyone who wants to implement ZFS.
Re:billion billion? (Score:5, Insightful)
A 64-bit (unsigned) binary number can already store values up to 16 billion billion (actually, closer to 18, but who's counting). That's roughly 2.5 billion individually addressable locations for every man, woman, and child living on Earth.
Shouldn't that be enough to hold us for a few generations at least?
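A quick sanity check of those numbers in Python (the ~6.4 billion world-population figure, circa 2004, is an assumption):

```python
# How many 64-bit addressable locations per person?
locations = 2 ** 64          # 18_446_744_073_709_551_616, i.e. ~18 billion billion
population = 6_400_000_000   # assumed world population, circa 2004
per_person = locations // population
print(per_person)            # ~2.9 billion locations for every person on Earth
```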
huh? (Score:1, Insightful)
Silly AC (Score:3, Insightful)
Why bother with folders as a root? You can create a folder hierarchy *with* a database too.
Re:billion billion? (Score:4, Insightful)
Well, 128-bit is more an issue of coming up with something without a limit, or at least with a limit that nobody will use up any time soon. The difference between 64-bit and 128-bit is the difference between a number we can handle and comprehend and a number much too big for our minds to properly comprehend.
How could someone fill a 64-bit file system? Well, a large company or government organization that stores everyone's files on one file system. Or a program that writes its logs to separate files. Or storing uncompiled movies frame by frame. Or keeping an archive of data spanning hundreds of years. Yes, there are ways around it now, but sometimes a file system that doesn't have those limits comes in handy: not necessarily for now, but to expand into the future.
Re:billion billion? (Score:1, Insightful)
Re:64 bits is awfully big already (Score:5, Insightful)
One product can already transfer a terabyte per second, so that would cut the transfer down to half a year. And I imagine transfer rates will continue to increase.
I don't see how one could argue against such a thing for products aimed at cluster and supercomputer use. I say: might as well get the bugs out now, so that once the 65th bit is needed, the supercomputer suppliers are ready.
http://www.sc-conference.org/sc2004/storcloud.h
Re:64 bits is awfully big already (Score:5, Insightful)
No, precisely because we can't do it now, and won't for the foreseeable future, we shouldn't be wasting all that disk space, access time, and CPU time on a boundary that no production system is likely to reach before it gets upgraded. That's just practicality.
Seagate apparently sold 18.3 million desktop drives last year. Assuming they're all about 120GB (which is generous of me), that would be about 17.6*10^18 bits. Guess what, that's 2^64 bits. Yes, you would have to buy every single desktop hard drive Seagate shipped in the last year to have the capacity to fill a 64 bit filesystem. And find space for 18 million drives. And a power station to deliver the several hundred megawatts you'd need.
Even at 2 times drive capacity growth per year that's still a ridiculously unattainable figure. In 14 years time you'd only need to buy 1000 drives (which are now 2000TB each). But 14 years is a geological time scale when it comes to computers. You'd have wasted 14 years of CPU time and disk space devoted to those extra 64 bits.
If you still think 64 bits isn't enough, how about 96 bits? It would take 46 years before hard disks were big and cheap enough so you could fill the filesystem by buying 1000 of them. But no, they chose 128 bits because it sounded good.
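The parent's drive arithmetic checks out, in Python (figures taken from the comment above):

```python
# Check the Seagate back-of-the-envelope from the comment above.
drives = 18_300_000                  # desktop drives shipped in a year
bytes_per_drive = 120 * 10 ** 9      # generous 120 GB each
total_bits = drives * bytes_per_drive * 8
print(total_bits / 2 ** 64)          # ~0.95: one year's output just about fills 2^64 bits
```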
Re:Just better than the old stuff from Sun (Score:4, Insightful)
IBM has had a LVM since the early to mid 90s.
Linux has one now.
If Sun had bothered to keep up with the Joneses on these little things, Veritas might never have become what they are.
Last I heard, Veritas was going to start offering VxVM and VVM on AIX. My AIX admins did not care. They figured: why spend the money on the product when they already have a usable system that is supported by the OEM?
Re:Out of letters. (Score:3, Insightful)
Re:Hmf. (Score:3, Insightful)
2^128 is about 3.4e38. Now, let's be generous and assume we can control the spin of every electron we come across and incorporate it into a quantum storage device, such that each electron represents one bit of information (either left- or right-spin). And, because I'm still being generous, I'll say the Earth's oceans contain 2e9 km^3, or 2e18 m^3, of water (compare here [hypertextbook.com]). Assuming all this water is liquid, its density is about 1000 kg/m^3, so we have 2e24 g of water.
2e24 g of water contains about 1e23 moles of water molecules, or about 7e46 individual water molecules. With 10 electrons per molecule, that's roughly 7e47 electrons. So if we did "boil the oceans" to harvest electrons for our massive quantum storage system, we would have about a billion times more electrons than the 3.4e38 bits we need, with plenty left over for things like hydrogen fuel cells.
But this does not even approach the quantum limits of Earth-based storage. Bonwick admits it himself: you couldn't fill a 128-bit storage pool without boiling the oceans. Boiling the oceans is definitely an Earth-based option for quantum storage, since we wouldn't have to import the material from space. We also have other ways of harvesting electrons, like boiling humans and evacuating the atmosphere. To give you an idea, there are something like 10^54 electrons on Earth, give or take a few hundred trillion. We'd need at least a 192-bit system to approach Earth's quantum (electron-based) limits.
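Redoing the boil-the-oceans arithmetic in Python (the 2e9 km^3 ocean volume is the comment's own generous figure; real estimates are closer to 1.3e9 km^3):

```python
# One electron spin per bit: how far do the oceans' electrons go?
ocean_m3 = 2e18                 # 2e9 km^3 of ocean water, in m^3
grams = ocean_m3 * 1e6          # ~1e6 g of water per m^3
moles = grams / 18.0            # ~18 g per mole of H2O
molecules = moles * 6.022e23    # Avogadro's number
electrons = molecules * 10      # 10 electrons per H2O molecule
bits = 2 ** 128                 # bits needed to fill the address space
print(electrons)                # ~7e47 electrons
print(electrons / bits)         # ~2e9: a billion-odd times more than needed
```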
Re:What sort of crap is this? (Score:3, Insightful)
here are some more details, but nowhere near as long a list as you'll get from reading the article (since the full list would mean quoting the article, which i suggest reading).
- data checksums eliminate the need for fsck
- easy to add disks to the pool
- seems to support raid 0 and at least one real raid
- data rollbacks (sound like netapp snapshots)
- can mount the same filesystem on sparc or x86
while not necessarily amazing, when you start adding all of it together it makes for a large improvement over ufs or vxvm. it's interesting, to say the least. i consider this a big announcement for the solaris platform (and, as more than one person pointed out, possibly linux and bsd since the code for it will ultimately be open source).
as far as greater technical details, how are people even going to know it exists in order to, say, make independent performance benchmarks if there's no announcement? should everyone just discover the feature accidentally?
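On the checksum point: the idea is that each block's checksum lives away from the block itself (ZFS keeps it in the parent block pointer), so bad data can't vouch for itself and corruption is caught on every read rather than by an offline fsck pass. A toy sketch, with SHA-256 standing in for whatever checksum the filesystem actually uses:

```python
import hashlib

def checksum(block: bytes) -> bytes:
    # SHA-256 is an illustrative stand-in for the filesystem's checksum.
    return hashlib.sha256(block).digest()

# The checksum is stored separately from the data it covers.
block = b"important data"
stored_sum = checksum(block)

corrupted = b"importent data"                # silent bit rot on disk
assert checksum(block) == stored_sum         # clean read verifies
assert checksum(corrupted) != stored_sum     # corruption detected at read time
```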
Re:billion billion? (Score:3, Insightful)
Re:Sounds really nice (Score:2, Insightful)
IMO a necessary feature of a modern file system is that it doesn't need to be defragged.
Re:Hmf. (Score:1, Insightful)
That freaking 640k quote is overused!
It would have been ridiculous AT THE TIME to address more data. CPUs and software weren't there yet.
Discounting the whole backstory about how this quote is a myth and Bill Gates never said it: even if the quote was made, it was wrong at the time. The 640k limit was imposed by the craptacular CPU of choice, the 8088. There were numerous processors without this limitation, or at least without such a shitty design making the limit what it was.
Remember, even if Gates (or anyone) made this quote back in the early 80's, it should have been completely forgotten. The reason it wasn't is that people using PCs had to contend with that real-mode addressing limit WELL into the mid-90s, which was completely insane. That's why people kept dredging the quote up: even if the quote wasn't true, it was almost as if MS was saying 640k was enough, judging by their actions into the 90s. Granted, it's not all MS's fault; they had to contend with dyed-in-the-wool DOS programmers, other systems like Novell, and preserving backward compatibility. At the same time, they were awfully slow getting to the forefront with a usable, true 32-bit operating system, a title that probably goes to Windows 95. Of course, the first viable home OS that nixed real-mode operation and placed it in a virtualization layer (VM86) didn't arrive until Windows 2000, and some might say Windows XP. So some would say the 640k limit had its influence up until 2001.
Re:64 bits is awfully big already (Score:4, Insightful)
2) Migrating from one address space to another is painful. Why make it more frequent by aiming low? Do you think migration would be any less painful in 14 years?
3) New applications: Broadband didn't just result in really fast web-page downloads - the entire online music industry stems from that. The original creators of TCP/IP had no idea that they were developing media on-demand, they were making it so that you could transfer bits from one archaic machine to another.
Building flexible, capable systems creates an environment where development isn't as constrained by limitations - resulting in new, unpredictable developments.
Re:billion billion? (Score:3, Insightful)
Then, since it's actually a billion billion at stake, try to imagine that half-mile by half-mile square full of tiny plastic chips.
Finally, put them in an oversized bathtub, surround the tub with video games, a bad pizza parlor and tired parents, and wham! You're Chuck E Cheese. Therefore, we can state firmly:
1) Visualize Billion Billion.
2) ??? [Which adequately describes setting up a chuck e cheese]
3) Profit.
In soviet slashdot, billion billion profits you.
Pardon me; I have to find a way to convince myself that my hot grits cluster joke isn't outdated.
Re:64 bits is awfully big already (Score:3, Insightful)
Why expand past a 64-bit filesystem? 64 bits with 1k blocks as your smallest addressable unit (which is more than reasonable for a filesystem this size) gives you 2^74 bytes to play with. For reference, that's 16 * 2^70 bytes = 16 * 2^30 terabytes, or "one hell of a lot of data".
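The block math, spelled out (1 KiB blocks assumed, as in the comment):

```python
# 64-bit block addresses with 1 KiB blocks.
block_size = 2 ** 10                 # 1 KiB smallest addressable unit
addresses = 2 ** 64                  # one address per block
total_bytes = addresses * block_size
assert total_bytes == 2 ** 74        # 16 * 2^70 bytes
print(total_bytes // 2 ** 40)        # 2^34 = 16 * 2^30 terabytes
```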
Re:64 bits is awfully big already (Score:3, Insightful)
What are you going to do when you access all of your data through a network, and the whole world has their storage on the internet, using a global filesystem? You said yourself that one manufacturer makes 2^64 bits of HD space every year, so 64-bit is obviously not enough. We need 128 bits if we want to be able to make use of all the HD space that is going to waste on networked computers today.
Hell, we could do that today, if we had - wait for it - the right filesystem.
The fact that it's Sun that came up with this suggests they're thinking along the same lines. They would benefit greatly if people started using a massively networked filesystem, especially if they own the code to it.
Re:think logarithmatic scale (Score:3, Insightful)
With a 1k block size, you'd be addressing 16 billion terabytes of storage. Let us know as soon as every single person on earth has more than 2 terabytes to donate to your distributed filesystem project.
Re:64 bits is awfully big already (Score:3, Insightful)
"3) New applications: Broadband didn't just result in really fast web-page downloads - the entire online music industry stems from that. The original creators of TCP/IP had no idea that they were developing media on-demand, they were making it so that you could transfer bits from one archaic machine to another."
How could they predict iTunes? Why would you think it reasonable to predict the usage of such a filesystem?