Ask Slashdot: Smarter Disk Space Monitoring In the Age of Cheap Storage?
relliker writes In the olden days, when monitoring a file system of a few hundred MB, we would be alerted when it topped 90% or more, with 95% often considered quite critical. Today, however, with many file systems in the terabyte range, a 90-95% full file system can still have a considerable amount of free space, yet we still mostly get bugged by the same alerts as in the days of yore, when there really isn't cause for immediate concern. Apart from raising thresholds and/or monitoring actual free space left instead of a percentage, isn't it time for monitoring systems to become a bit more intelligent, taking space usage trends and heuristics into account and only warning when projected thresholds will be exceeded? I'd like my system to warn me with something like, 'Hey! You'll be running out of space in a couple of months if you go on like this!' Or is this already the norm and I'm still living in a digital cave? What do you use, and on what operating system?
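For what it's worth, the trend-based warning the submitter is asking for is easy to sketch yourself if your monitoring tool won't do it. Here is a minimal Python illustration (the mount point, history file, and 60-day threshold are made-up values, not anything standard): it samples usage with os.statvfs, appends to a small history file, fits a straight line to the growth, and only complains when the projected time-to-full drops below the window.

```python
#!/usr/bin/env python3
"""Minimal sketch of trend-based disk alerting; paths and thresholds are illustrative."""
import csv
import os
import time

MOUNT = "/srv/data"                    # hypothetical mount point to watch
HISTORY = "/var/tmp/df_history.csv"    # hypothetical file holding (timestamp, used, total) samples
WARN_DAYS = 60                         # warn if projected to fill within ~2 months

def sample(mount):
    """Return (timestamp, used_bytes, total_bytes) for a mount point."""
    st = os.statvfs(mount)
    total = st.f_blocks * st.f_frsize
    used = (st.f_blocks - st.f_bfree) * st.f_frsize
    return time.time(), used, total

def days_to_full(history):
    """Least-squares growth rate of used bytes; None if usage is flat or shrinking."""
    if len(history) < 2:
        return None
    n = len(history)
    mt = sum(t for t, _, _ in history) / n
    mu = sum(u for _, u, _ in history) / n
    denom = sum((t - mt) ** 2 for t, _, _ in history)
    if denom == 0:
        return None
    slope = sum((t - mt) * (u - mu) for t, u, _ in history) / denom  # bytes per second
    if slope <= 0:
        return None
    _, used, total = history[-1]
    return (total - used) / slope / 86400.0

if __name__ == "__main__":
    row = sample(MOUNT)
    with open(HISTORY, "a", newline="") as f:
        csv.writer(f).writerow(row)
    with open(HISTORY, newline="") as f:
        history = [(float(t), float(u), float(tot)) for t, u, tot in csv.reader(f)]
    eta = days_to_full(history)
    if eta is not None and eta < WARN_DAYS:
        print(f"Hey! {MOUNT} will be full in about {eta:.0f} days if you go on like this.")
```

Run from cron once a day, something like this gives exactly the "out of space in a couple of months" message; the tools mentioned further down in the thread (Check_MK trends, Graphite graphs) do the same thing with more polish.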
I delete things when I'm done using them (Score:5, Funny)
I never run out of disk space.
Re:I delete things when I'm done using them (Score:5, Interesting)
I'll bet that's not true...
Seems to me that the stuff I work on keeps getting bigger and bigger, as does my collection of digital pictures and videos. While I attempt to pare down what I keep, some of it stays around...
I expect that most users do the same things and thus data keeps piling up. I don't think it matters how good you are at deleting stuff you don't need anymore.
Re: (Score:2)
I never run out of space. As disks get larger and larger, the risk of running out of space seems like the single least significant thing possible. The real issue is corruption.
Based on the headline, I would have expected this to be about content verification, with all of the ZFS fanboys coming out of the woodwork to extol its virtues.
Re: (Score:2)
Re: (Score:2)
Don't be a ZFS hater.
Re: (Score:2)
I suppose you're much like me.
Both of us are a "Being Digital".
Re: (Score:2)
Generally, if you are the type of organization that needs to monitor 1TB of filespace, you're the kind of company that can fill that 1TB of filespace.
Where I currently work, it is not unusual to fill 1TB in about the same amount of time it took to fill a 100MB drive back in the day.
Re:I delete things when I'm done using them (Score:5, Interesting)
I delete things when I'm done using them
1) Many of my things I either desire to use for many years to come (a video download I paid for), or am required to keep to cover my ass (taxes, logs, most data at work due to policies, etc)
2a) The cost of more storage space is almost always less than the cost of the time to clean up files that could be deleted. In the context of work this does depend heavily on exactly who made the data and their rate of pay / workload, but I've noted that the higher-up execs and managers tend to be the worst hoarders as well as, of course, having the highest rates of pay. Most of the lower techs on the shop floor don't even have access above read-only to the network storage here, though that is far from universal elsewhere.
2b) Yes, there are other people whose time is not as expensive, but no one other than the data's owner/creator can know 100% what needs to stay vs. what can go (and sometimes even the owner/creator chooses wrong).
3) After deleting/archiving data, the chances of you needing it in the future are typically higher to much higher than the chances you are really done with it.
4) For the small number of times you really are done with it (like, totally and for sure), the amount of data that gets deleted is generally such a small percentage of the whole that, while deleting it is still a good thing to do, it doesn't really help much with the problem at hand: freeing up a lot of space for future needs.
I never run out of disk space.
You either have too much free storage space, not enough data, or possibly both :P
Performance issues? (Score:3, Insightful)
How does performance change as the big disks approach full? That was always one reason for the rule of thumb about keeping at least 10% free space on UNIX.
Re:Performance issues? (Score:5, Informative)
Re: (Score:3)
Re:Performance issues? (Score:5, Insightful)
You want to keep the hard drive at 50% or less to maximize performance. If the hard drive is more than 50% full, the read/write head takes longer to reach the data. If the hard drive is 90% full, most OSes will have performance issues.
Actually, any OS will have performance issues, because the transfer rate (MB/sec) drops from the outside tracks to the inside tracks. That's why for home use, you just buy the biggest hard drive that you can easily afford (if you need 1TB, you buy 3TB), because that way you use only the parts of the drive with the highest transfer speed, and the average head movement time is also a lot less.
Re:Performance issues? (Score:4, Interesting)
Re:Performance issues? (Score:5, Insightful)
That's an interesting idea for the budget-minded, but personally I think if performance is actually an issue I'd use SSDs for things that need to be performant, and store everything else on regular drives.
Re:Performance issues? (Score:5, Insightful)
Inner tracks have better seek times, which is why high-performance applications often "short stroke" drives (i.e., artificially restrict the percentage of the drive used so that only the inner tracks are utilized, though with modern drives and transparent sector remapping it's unlikely this practice actually works). Outer tracks have better streaming performance because more sectors move under the head in a given timeframe.
Re: (Score:2)
The inner partitions with awful performance are where my media goes (movies, music, photos).
Hmm. I keep my media (movies, music, & photos) on an external USB drive. It's probably the slowest of all my storage devices and it works just fine. I'm sure the latencies are higher than with your setup, but I've certainly never noticed them.
Re: (Score:2, Insightful)
Re: (Score:2)
You want to keep the hard drive at 50% or less to maximize performance.
You're talking about short-stroking the drive, which is fundamentally a different question from what percentage of space usage is best for performance.
For the sake of argument: Let's assume you create a single partition on your hard drive that only uses the first 30% of the disk drive, AND your partition's starting cylinder is carefully chosen to be in alignment with your allocation units / stripes down all RAID levels
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
If budget is a problem, then yes, get the couple of percent improvement from only using part of the drive instead of doubling the speed or more with mirrors.
Re: (Score:3)
Hmmm ... if the goal is to keep all of my disks under 50% to maximize performance ... don't I effectively need twice as much disk? And if it's under RAID I'd need at least 4x as much disk?
Which kind of defeats the purpose of both having cheaper disk, as well as having monitoring to let me know when it's filling.
Sorry, but who has the luxury of buying twice as much disk so we can keep them all under 50%??
What you say might get you a performance boost, but otherwise it doesn't make a lot of sense to me.
Re: (Score:2)
Sorry, but who has the luxury of buying twice as much disk so we can keep them all under 50%??
I'm planning to replace the 3x80GB hard drives in my FreeNAS file server at home with 3x1TB hard drives, as Newegg has 1TB drives on sale for $50. That will give me 80% free space in a RAID5 configuration for $150.
Re: (Score:2)
Sure, great ... and those of us in the real world who manage 10s or 100s (or in some cases 1000s) of terabytes?
We're talking an entirely different price point and quantity.
I seriously doubt people with NetApp servers and other large storage could even consider keeping 50% of their disk space empty just to make it slightly faster.
My user account on my personal machine has over 1TB of stuff in it, which gets mirrored to two other drives. That adds up after a while when you're staying under 50%.
So I'd be look
Re: (Score:2)
And, from the very little I know about RAID 5 ... if you only have 3 drives in it, you're not really getting a whole lot of added security, are you?
RAID5 requires a minimum of three drives. If one drive fails, the other two drives can continue functioning in degraded mode. The entire RAID would be lost if you have more than one hard drive failure. You could designate one or more extra hard drives as spares to automatically replace a failed hard drive. For extra security, each hard drive needs to be on a separate controller (which is what I have in my FreeNAS box). I typically have a hard drive crash every five years, which is why I replace my hard drives e
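For anyone double-checking the capacity math in the posts above, the usable-space rules for the common RAID levels are simple; here's a tiny illustrative helper (equal-sized drives, ignoring hot spares and filesystem overhead):

```python
def raid_usable(level, n_drives, drive_tb):
    """Rough usable capacity in TB for equal-sized drives (ignores spares and FS overhead)."""
    if level == 0:
        return n_drives * drive_tb
    if level == 1:
        return drive_tb                    # everything mirrored
    if level == 5:
        return (n_drives - 1) * drive_tb   # one drive's worth of parity
    if level == 6:
        return (n_drives - 2) * drive_tb   # two drives' worth of parity
    raise ValueError("unsupported RAID level")

print(raid_usable(5, 3, 1.0))   # 2.0 TB usable from 3 x 1 TB
```

So the 3x1TB RAID5 box above ends up with 2TB usable and survives any single-drive failure, but not two.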
They've reset that date from 2005? (Score:2)
It didn't because disks and controllers got much faster as well as dealing with more capacity, while the premise assumed nothing but a change in capacity.
So now we have arrays 10x larger that rebuild in less than half the time of the old ones. We also have stuff like ZFS that acts like RAID6 in many ways (with raidz2) but can have much shorter rebuild (resilver) times because it only copies data instead of rebuilding
Re: (Score:2)
I'd expect someone running FreeNAS to know more than a journalist rewarming an old article that was a poor prediction in the first place, but I suppose seeing it in magazine format does make it look more credible.
RAID6 was something I heard about five or six years ago, but never seen in action or in the field. Supposedly it was the next great thing. I'm still figuring out ZFS on my FreeNAS box. Damn 8GB flash drives keep zapping out every six months, forcing me to install the current version of FreeNAS.
Re: (Score:3)
Anyway, the "raid only has five more years" article keeps on getting warmed up, and keeps getting disproved by the very reasons given for the RAID use by date. Increasing capac
Re: (Score:2)
Sorry, but who has the luxury of buying twice as much disk so we can keep them all under 50%??
I just had a look: if you need 1TB in a desktop, I can buy 1TB for £46 and 2TB for £54.
Re: (Score:2)
Depends on your level of mirroring, doesn't it?
I know people who do storage for a living, and some places use RAID x+y, where you have levels of RAID giving mirroring, combined with striping and parity to get additional redundancy. In those situations, the amount of raw space you need is at least 2x the amount of usable space you want to end up with.
And, a lot of those places replicate the entire storage to another instanc
The actual number you are looking for is 85%. (Score:2)
The actual number you are looking for is 85%.
Straight out of Donald Knuth volume 3: Sorting and Searching; at 85% fill, a perfect hash starts degrading in performance.
The basis of the Berkeley Fast File System warn level was an 85% fill on the disk, which the filesystem effectively hashed data allocations onto. As people started getting larger and larger disks, they began to be concerned about "wasted space" in the free reserve, and moved the warnings down to 10%, then 8%, and so on.
This is what the OP is
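To put rough numbers on that 85% figure: the standard linear-probing approximations from Knuth show how quickly the expected search cost climbs as a hash table (or a filesystem allocator that behaves like one) fills up. A quick illustration in Python:

```python
def probes_hit(alpha):
    """Approximate expected probes for a successful linear-probing lookup."""
    return 0.5 * (1 + 1 / (1 - alpha))

def probes_free_slot(alpha):
    """Approximate expected probes to find an empty slot (unsuccessful search)."""
    return 0.5 * (1 + 1 / (1 - alpha) ** 2)

for alpha in (0.50, 0.85, 0.95):
    print(f"{alpha:.0%} full: lookup ~ {probes_hit(alpha):.1f}, "
          f"free slot ~ {probes_free_slot(alpha):.1f}")
# 50% full: lookup ~ 1.5, free slot ~ 2.5
# 85% full: lookup ~ 3.8, free slot ~ 22.7
# 95% full: lookup ~ 10.5, free slot ~ 200.5
```

Going from 50% to 85% full makes finding a free slot roughly an order of magnitude more expensive, and 95% is another order of magnitude on top of that, which is why the old warning thresholds sat where they did.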
Re: (Score:2)
Did I mention the lawn?
Did I ever mention that I was running Unix and/or a server? Re-read my comment. I'm talking about hard drives in general.
Re: (Score:2)
Wow, that is quite possibly one of the stupidest things I've seen in a while.
"Yarg! All servers must run teh unix, because all software runs teh unix, and if it doesn't run teh unix it must be crap".
Do people actually put you in charge of servers? For real?
I'm no Microsoft fanboi, but it simply is not possible to run all software a large organization needs on unix.
And believing otherwise is the sign of someone who ei
Re: (Score:2)
many large organizations putter along with only *nix-based servers and a few windows desktops just fine (CERN, Munich, many scientific orgs, etc.).
I think you might be suffering from confirmation bias or maybe just denial - CERN does use Microsoft server based solutions.
Here is just one example... CERN Using Microsoft Lync for Collaboration and Mobility [lyncmigration.com]
For different values of server (Score:2)
The discussion is a bit closer to the metal here than something in a virtual machine dealing with data on a SAN even though that technically is also a server. It's just not a file server.
Re: (Score:2, Insightful)
That was pretty caustic, wasn't it!
Anyway, in today's virtualized world, none of what you ranted about really matters anymore. If disk I/O is important to your application, you're using SSD. If your filesystem needs more space, you just grow it using your platform's volume manager. And yes, real work gets done on Windows servers now. It's not my personal cup of tea, but you might as well just acknowledge it.
And you don't plan 2 years ahead because who knows what your requirements will be in 2 years?
Re:Performance issues? (Score:4, Insightful)
If you use Unix on a server, you should have multiple partitions.
I use LVM, you insensitive clod!
Juggling physical partitions is a royal pain.
Re: (Score:2)
So not only is your FS fragmented, but your partitions are, too.
Re: (Score:2)
Re: (Score:2)
Too bad my mod points expired, because that's exactly what I was thinking. Although, 20% was my rule of thumb.
It probably has a lot to do with usage patterns: is your multi-TB volume used as an IMAP server, and thus chock full of 5-250KB files (so that the FS can easily find contiguous holes), or is it a video server full of 1-5GB files, so that contiguous holes are much harder to find when the disk is "only" around 70%? Or a DB server whose files are even huger, and so contiguous holes impossible to
Re: (Score:2)
Even then, circumstances can alter the situation, since if you create a bunch of *huge* tablespaces on a virgin FS and they never extend, then you can get up to a high usage percentage without fragmentation.
This.
To explain it to users who have no clue what a tablespace is, but may know what a partition is, imagine:
* setting up a separate partition for every set of similar sized files
* for very large files, give each its own partition
* pad every file out to a fixed size
For example, for mp3s, pad every one of them to 6MB (I'm guessing; do some stats on your archive to determine the optimal size).
Every time you write one, it's 6MB to that one partition.
If you delete one, there will be a hole in the filesystem, bu
Re: (Score:2)
Then you are just wasting the space that would usually be in fragments.
Unless I am terribly misinformed, modern filesystems figure that out themselves and try to prevent fragmentation in a much better way. Old filesystems didn't do that properly, so back in the '90s (and early '00s for Windows) that probably was a good way to work.
Re: (Score:2)
You're pulling into the parking lot at work, and you know that there are only 5% of the spaces free. How long will you have to drive around before you find a place to park?
Now picture pulling into the parking lot at Disney World, and you know that 5% of the spaces are free. Now how long will you have to drive around?
Re: (Score:3)
That isn't how First Fit works. Ever.
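For readers who haven't met the term: a first-fit allocator doesn't drive around the lot at all; it walks a free list and takes the first extent big enough for the request. A toy sketch (purely illustrative, not any real filesystem's allocator):

```python
def first_fit(free_extents, size):
    """Allocate `size` blocks from the first free extent that can hold them.

    free_extents: list of (start, length) tuples, sorted by start block.
    Returns the chosen start block, or None if nothing fits.
    """
    for i, (start, length) in enumerate(free_extents):
        if length >= size:
            if length == size:
                free_extents.pop(i)                               # hole used up entirely
            else:
                free_extents[i] = (start + size, length - size)   # shrink the hole
            return start
    return None

free = [(0, 4), (10, 2), (20, 8)]
print(first_fit(free, 6))   # 20 -- takes the first hole big enough, no circling the lot
print(free)                 # [(0, 4), (10, 2), (26, 2)]
```

The real cost of a nearly-full disk isn't finding a hole at all; it's that the remaining holes are small, so large files end up fragmented.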
Re: (Score:2)
Hey! No "your mom" jokes here!
Re: (Score:2)
With perfect conductivity, and with no mass ...
Re: (Score:2)
Re: (Score:2)
Not entirely - it's an easier problem to solve when you're parking a car that takes up a single spot. Now imagine you have a trailer that is between 0 and 1024 parking spots wide, and breaking your trailer up into pieces (and then reassembling it!) is feasible but takes time to do.
That's why it's not that simple.
Re: (Score:2)
With Windows, and NTFS, the MFT (Master File Table) occupies 12.5% of the disk space. Once all other sectors on the disk are full, it will actually store files IN the MFT reserved space, and you run the risk of fragmenting the MFT itself and decreasing performance.
Also, the defrag tool (automatically scheduled or not) requires 15% free space to run.
Nagios XI (Score:2, Informative)
Re: (Score:2)
Re: (Score:2)
Bigger question (Score:2)
The bigger question is how to reserve less than 1% for the superuser?
Re:Bigger question (Score:5, Informative)
It's a configuration option when you newfs a file system. Man newfs or mkfs.
[John]
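If you just want to see how big the reserve currently is rather than change it, the gap between "free" and "available" in statvfs is the root-reserved space on most Unix filesystems. A quick Python check (the mount point is just an example):

```python
import os

st = os.statvfs("/")                       # any mount point; "/" is just an example
frsize = st.f_frsize
free_incl_reserve = st.f_bfree * frsize    # free space, counting the root-only reserve
free_for_users = st.f_bavail * frsize      # free space ordinary users can actually use
reserved = free_incl_reserve - free_for_users

print(f"reserved for root: {reserved / 2**30:.1f} GiB "
      f"({100.0 * (st.f_bfree - st.f_bavail) / st.f_blocks:.1f}% of the filesystem)")
```

On ext2/3/4 the percentage can also be changed after the fact with tune2fs -m, no reformat needed.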
Re: (Score:3)
I don't know; the default 5% might be excessive for really big volumes, but keeping at least 1% free seems 'smart' pretty much no matter how many orders of magnitude the typical volume grows. The typical file size has grown with volume size: we now keep all kinds of large media files on online storage that previously would have been shipped off to some other sort of media in short order.
The entire point of the reservation is so that, in the event of calamity, the superuser retains a little free space to
Re: (Score:3)
Re: (Score:2)
We have more but we USE more. (Score:5, Insightful)
When we had drives in the 100s of MB range, we used a few MB at a time. Now that we have drives in the multi-TB range, we tend to use tens of GB at a time. In my experiences, a 90 percent full drive has as much time left before running out as it did a decade ago.
Perhaps more importantly, running at 90% of capacity kills your performance if you still use spinning glass platters as your primary storage medium (not so much when talking about a SAN of SSDs). In general, when you hit 90% full, you have problems other than just how long you can last before reaching 100%.
Re: (Score:2)
In my experiences, a 90 percent full drive has as much time left before running out as it did a decade ago.
In your experience maybe. Not in mine.
I don't use 10s of GB at a time. If I start a new torrent, dump my phone's camera onto my computer, or install a new game, that eats several GB. But everything else is pretty steady state with very slow, steady growth. I don't download a lot of torrents on this particular PC, and sometimes remove old ones; I install a few new games a year and sometimes uninstall ol
Re: (Score:3)
YOU don't use 10s of GB at a time, but I bet your organization does. My company has expanded their storage by 50% per year compounded for at least the last 10 years (I've been here 8 and I have 2 years of backup reports from before I started), and I don't think we're that unusual if you look at the industry reports for GB shipped per year.
Re: (Score:3)
But you are four years past the safe lifespan of your disk, and it could fail right when you need it.
Hence... backups.
Hoarding capacity for a decade is as foolish as running out of space tomorrow.
Hoarding capacity? I don't even really know what that is supposed to mean.
Re: (Score:2)
How did you get to 90% if you're only using 2.5-5% per year?
a) I'm not at 90%; as it happens I'm at around 50%. I said when I reach 90% it will take a year or 2 to reach 95%
b) I didn't start at 0% and then average a couple percent a year. I was at 30-40% within a week of setting up the new home PC.
I copied my 10,000 track music library. So 50GB or so right there. And another several thousand digital images, scans, and so forth. I have a small library of ISOs I keep on the drive worth another 20-30GB. A hand
Re:We have more but we USE more. (Score:5, Informative)
Exactly. The question is strange (and the attitude of the poster is odd too... 20 years ago is "days of yore", and "olden days"?) Methinks dusting off the word "whippersnapper" might be appropriate here.
Oddly enough, a similar question fell through a wormhole in the space time continuum from Usenet, circa 1994. "Now that we have massive HDs of 100s of megabytes, and not the dinky little ones of several megabytes from the Reagan era, do we still have to worry about having 95% usage alarms?"
The truth being, if you got to 95% usage somehow, what makes you think that you're not going to get to 100% sometime soon? Maybe you won't, but you can't know unless you understand how and why your usage increases. That's not going to be solved by a magic algorithm alone, it involves understanding where your data comes from, and who or what is adding to it. This isn't new. The heuristics and usage question, and estimating when action needs to be taken is just as relevant now as it was 20 years ago.
Re: (Score:2)
In my experiences, a 90 percent full drive has as much time left before running out as it did a decade ago.
Not in mine. Granted, we're both going off of anecdotal evidence, but in my favor, my experience is based off of managing a few hundred servers and a couple thousand desktops.
It seems like most workstations/servers that I manage, if they're taking up massive amounts of space, it's very often because they're storing lots of old stuff. Several years ago, when we only had 30 GB drives, people would go back and clear out, delete, and archive old data. Now they just store it, because why not? Storage is c
Re: (Score:2)
running at 90% of capacity kills your performance if you still use spinning glass platters
A decent SAN will show practically no performance degradation right up to the point it hits 100% full.
Re: (Score:2)
not so much when talking about a SAN of SSDs
You mean an array of SSDs.
Just as you wouldn't call a PC on a local network a "LAN", you don't refer to an array on a storage network as a SAN. The SAN is the network.
Sorry, but this really bugs me...
Re: (Score:2)
With today's 4-8 TB drives, it's easy to keep billions of files on a single disk, so you could potentially keep data for many thousands of customers on a single disk. But if you do that, you quickly run into an entirely new type of constraint: IOPS.
The dirty secret of the HD industry is that while disks have become far bigger, they haven't really become any faster. 7200 RPM is still par for the course for a "high performance" desktop or NAS drive, and you can only queue up about 150 requests per second a
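That ballpark follows from a simple latency calculation; a quick back-of-the-envelope sketch (the seek figure is a typical published spec, assumed here for illustration):

```python
rpm = 7200
avg_seek_ms = 8.5                        # typical published average seek for a 7200 RPM drive
avg_rotation_ms = (60_000 / rpm) / 2     # wait half a revolution on average ~ 4.2 ms

service_time_ms = avg_seek_ms + avg_rotation_ms
print(f"~{1000 / service_time_ms:.0f} random IOPS")   # ~79; queueing and short seeks push it higher
```

Fully random workloads land around 75-100 IOPS per spindle; short seeks and command queueing are what push real-world numbers up toward the ~150 mentioned above, and none of that improves as the platters get denser.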
Re: (Score:2)
With today's 4-8 TB drives, it's easy to keep billions of files on a single disk...
Uhhmmm, no, not quite ;-)
Re: (Score:2)
I have a 1TB drive with 5.5 million files on it (don't ask). Even scaling to 8TB, that'd still only be 44 million table entries. NTFS on a GPT volume can scale to 2^32-1 files, but I'd hate to think how big that'd end up being with 64KB clusters... 274TB? Grow it for larger files.
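Back-of-the-envelope check on that last figure (cluster math only, ignoring NTFS metadata overhead):

```python
max_files = 2**32 - 1          # NTFS per-volume file limit cited above
cluster_bytes = 64 * 1024      # 64 KB clusters

print(max_files * cluster_bytes / 10**12)   # ~281 TB decimal (256 TiB), in the same ballpark as 274TB
```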
Re: (Score:2)
Perhaps more importantly, running at 90% of capacity kills your performance if you still use spinning glass platters as your primary storage medium (not so much when talking about a SAN of SSDs). In general, when you hit 90% full, you have problems other than just how long you can last before reaching 100%.
Do you have actual experience or data to back up that claim? Because my verified, benchmarked experience is the opposite: 90% does NOT "kill" performance. Of course you're using inner tracks and getting lower transfer speeds, but nothing really dramatic like what you'd see with extreme fragmentation.
I will admit however, that when you get to 0.15% free (on a 4TB disk), performance really sux rox ;-)
It's a block size vs available space issue (Score:2)
So although I'm not the poster above, I've had experience of both - the percent-full number is only a rough guide and falls down when the block size is very small compared with the available space.
Re: (Score:2)
It's a block size vs available space issue so 90% full kills performance on small drives with big blocks (eg. SSDs from a couple of years back)...
OK, while I've not experienced that myself (no SSDs deployed), it certainly makes sense--much more so than the "blanket 90%" claim that people repeat mindlessly.
Re: (Score:2)
Recommend: Hard Drive Sentinel (Score:5, Informative)
Their support has been very responsive and courteous, and their product can work through (i.e., see the drives behind) most RAID controllers.
And no, I don't have any affiliation with HDS.
Re: (Score:2)
And temporarily free from http://giveawayradar.weebly.co... [weebly.com] :
http://www.buzz99.com/hard-dis... [buzz99.com]
https://topwaresale.com/produc... [topwaresale.com]
Re: (Score:2)
I got the full version of that a while ago, it's surprisingly useful - it even does SMART monitoring.
Whatever is measured is optimized. (Score:5, Insightful)
...when there really isn't a cause for immediate concern.
It all depends on what one is concerned about. Is maximizing disk space usage down to the last possible byte important to you? Or is performance in accessing random data important to you? Or is keeping to artificial limits imposed by monitoring systems important to you?
Once you determine what is actually important to you, then you monitor for that parameter.
Whatever is measured is optimized.
Monitoring Sucks (Score:2)
The problem is the monitoring group is reluctant to make "custom" changes due to the size of the environment. OS and hardware level alerts are a pretty minor part of the overall monitoring environment in terms of the number of configuration changes required. With mirroring and system/geographic redundancy, we can wait until the morning status reports to identify systems before they get to critical.
[John]
It's all about the data production rate (Score:4, Insightful)
Re: (Score:2)
Unless you are still producing KBs of data.
Well yeah, lots of people are. An awful lot of work is still done in Microsoft Word and Microsoft Excel. No need to embed a 5 GB video just because you have the space.
Re: (Score:2)
An awful lot of work is still done in Microsoft Word and Microsoft Excel. No need to embed a 5 GB video just because you have the space.
*noob voice enable*
Well no, I take a screenshot of the video, which then gets embedded, unscalable, in an Excel file, which I paste into a Word document, which I then send in a MIME-encoded email to the entire company directory.
I mean, this is the internet after all, it's not like some form of file transfer protocol exists or anything!
Re: (Score:2)
Some of that kind of nonsense happens in Powerpoint presentations-- embedding images that might be a couple hundred megabytes each. I see that in marketing companies often enough, but it's still been a pretty steady rate of growth for the past few years.
However, I still don't see multi-gigabyte Word or Excel documents, at least not often enough that I recall it.
Re: (Score:2)
I don't think this analysis is right (Score:2)
While "only 5% of my disk" is now many times larger than it used to be, so are the things I'm moving around, so "95% full" is just as bad now as it used to be.
Basically, once we got past quotas measured in single or double-digit numbers of kilobytes, this stopped changing for me. 95% full on a 100MB disk and 95% full on a 500GB disk work the same for me.
Synology (Score:3, Interesting)
Don't worry, I was too until recently...
Always mucked with fast external storage as the "main" solution -- firewire, thunderbolt, etc. This system is the main and had a few externals hooked up, that system had another, another over there for something else. It was a mess all around. How to back it all up??
Gave them all away -- bought a Synology [synology.com]
Then bought another (back it up
180-200M/sec throughput is the norm. On the network. Beats out most external drives I've ever come across. Everything ties into / backs up to the array. Home and work now too.
I use everything but Microsoft products. They're shit.
My filesystem is 60T w/ under 10T used today. I'll consider plugging in more drives or changing them out in the Synology somewhere between 2017 and 2020...
Re: (Score:3)
180-200M/sec throughput is the norm. On the network.
You have a 10 gigabit network? I ask because a 1 gigabit network can only provide 125MB/sec throughput. I know that some of the Synology units offer link aggregation support, but that also usually requires support in the switch and multiple network cards in each client.
That said, even 200MB/sec isn't particularly good if you can only provide that total to one client at a time, especially for the cost of a Synology enclosure that can hold enough drives for 60TB of storage.
No point nitpicking about no "b" (Score:2)
Check_MK (Score:4, Informative)
The default disk monitoring allows alerting based on trends (full in 24hours, etc.) or thresholds based on a "magic factor." Basically it scales the thresholds so that larger disks alert at a higher percentage, adjustable in quite a few different ways to suit your tastes.
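I can't vouch for Check_MK's exact "magic factor" formula, but the general idea of letting the warning percentage drift toward 100% as volumes get bigger is easy to sketch; the reference size and exponent below are made-up knobs, not Check_MK defaults:

```python
def scaled_warn_pct(size_gb, base_pct=90.0, ref_gb=20.0, magic=0.8):
    """Relax a percent-used threshold as the volume grows past a reference size.

    magic=1.0 keeps the threshold at base_pct for every size; smaller values push
    it closer to 100% on big volumes. Purely illustrative, not Check_MK's formula.
    """
    free_pct = 100.0 - base_pct
    scale = (ref_gb / size_gb) ** (1.0 - magic)   # shrinks the required free % on big volumes
    return min(100.0 - free_pct * scale, 99.0)

for gb in (20, 500, 4_000, 40_000):
    print(f"{gb:>6} GB volume -> warn at {scaled_warn_pct(gb):.1f}% used")
# 20 GB -> 90.0%, 500 GB -> 94.7%, 4 TB -> 96.5%, 40 TB -> 97.8%
```

The appeal is that one rule covers both the 20 GB root volume and the 40 TB data volume without hand-tuning each one.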
Re: (Score:2)
We're switching to check_mk too. Honestly though, anything with a graph will do - periodically stick something into Graphite or just stick another line onto a CSV. Then draw a graph, draw a rough trend line and there's your answer. Getting a nice email/text message with that information takes a bit more work (where check_mk might help), but so long as you can see it with enough advanced warning, checking the disk graphs weekly (or even monthly) is probably enough.
90% is still a good rule (Score:2)
If you are an enterprise shop, you likely have so many disks spread across so many servers that you probably have an admin team responsible for projecting utilization for the next 12 months, so that procurement and installation costs can be budgeted.
For the home user, or a small business, 90% is still a good rule of thumb. I would hate to see some additional process running in the background constantly projecting when the disk will be full. Just throw a warning for the user when you reach 80-90% capacity, an
Re: (Score:2)
If you're doing your sizing projections correctly (very tricky, I'll give you that), space is not an issue, but time is. If you lease, then you need a replacement installed before the lease is up; the leasing company wants their hardware back. There is no stretching it out. If you purchase, you are bound by the length of your support contract, as nobody sane is going
FreeNAS with ZFS (Score:2)
I would always shoot for more disk, but then issues arise from managing such large disks in the 1+ TB range, which we tend to fill up fast.
For a laptop, I am targeting two large drives in a RaidZ mirror on Linux. I would do the same for a desktop.
For more data and centralization for my house or office, I would choose an iXSystems FreeNAS Mini. It has all the features that you need for your data and can be easily configured to send out warning messages on various measurements like disk space, SMART m
Does it matter anymore? (Score:2)
I think most individual server filesystem monitoring for free space is kind of a waste of time anymore, or at least low priority.
SANs and virtualized storage and modern operating systems can extend filesystems easily. Thin provisioning means you can allocate surpluses to filesystems without actually consuming real disk until you use it. Size your filesystem with surpluses and you won't run out.
Now you only have to monitor your SAN's actual consumption, and hopefully you bought enough SAN to cover your grow
Performance monitoring (Score:2)
Interesting things to monitor are I/O rates and read/write latency. More esoteric things might be stats about most active files and directories or percentage of recently accessed data -vs- inactive data. But these are more analysis than monitoring. What other parameters would a sysadmin want to look at?
RLH
I turn the alerts off. (Score:2)
I don't need the computer to tell me when a big disk is nearly full. That would be something I'd been aware of for some time.
In an enterprise setting where there could be many disks... one would assume the sysadmin has set reasonable alert levels rather than leaving everything on default.
So... I guess this is relevant to non-power users in residential contexts? But then how is a non-power user filling a terabyte hard drive? I mean... seriously.
It's still useful (Score:2)
The disk fills up at the same relative speed.
Okay, the OS doesn't have a big problem with a 99% full disk, but your media collection does. You still need to upgrade your storage when it's getting full, because you will still get new big files.
Re: (Score:2)
Adding 10% space AND notifying the sysadmin that autogrowth has happened is probably the best way IMHO, because it keeps things from crashing/locking up (most apps aren't happy to get an out-of-space notification) while allowing the intelligent person to investigate the root cause if they suspect an unusual one (e.g., if my database server is growing its disk it's likely to be a bad query filling tempdb; I don't want the database to halt, but I also want to figure out what the bad query is; but if a file serv
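A bare-bones sketch of that grow-and-notify policy, in Python for illustration; the mount point, admin address, and thresholds are placeholders, and the actual grow step is deliberately left as a stub since it depends on whether you're on LVM, a SAN thin pool, or something else:

```python
import os
import smtplib
from email.message import EmailMessage

MOUNT = "/srv/data"            # illustrative mount point
ADMIN = "ops@example.com"      # illustrative address
GROW_TRIGGER_PCT = 90.0        # start growing here
GROW_STEP_PCT = 10             # grow by ~10% each time

def used_pct(mount):
    st = os.statvfs(mount)
    return 100.0 * (st.f_blocks - st.f_bavail) / st.f_blocks

def grow_volume(mount, step_pct):
    # Stub: call your platform's grow mechanism here (LVM extend plus a filesystem
    # resize, a SAN/thin-pool API, a cloud volume resize, ...). Intentionally not
    # implemented because it is entirely environment-specific.
    print(f"[would grow {mount} by {step_pct}%]")

def notify(subject, body):
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, ADMIN, ADMIN
    msg.set_content(body)
    with smtplib.SMTP("localhost") as s:   # assumes a local MTA is listening
        s.send_message(msg)

pct = used_pct(MOUNT)
if pct >= GROW_TRIGGER_PCT:
    grow_volume(MOUNT, GROW_STEP_PCT)
    notify(f"autogrow: {MOUNT}",
           f"{MOUNT} hit {pct:.1f}% used and was grown by {GROW_STEP_PCT}%. "
           "Please check whether that growth was expected (runaway query, log flood, etc.).")
```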
Re: (Score:2)
You need both. Sysadmins are adaptive, but (relatively) slow to respond. Automation is (should be?) much faster, but not usually all that adaptive.
Automation is used, first and foremost, to trigger anything that you need to do to save your whole application or system which must run faster than a human reaction time. In that case, we consider a disk space alarm to be a signal to automation to step in before it is too late. But how do we know when it is too late?
My answer to the poster's original question
Re: (Score:2)
And for my part, I think that a sustained growth velocity metric is very useful out of the box. You know, or can easily calculate, how big your filesystem is, so you can calculate a "time to full" which becomes your "window of opportunity" to fix the issue within. Any time the rate means that the remaining free space will be consumed in under a certain "safety" interval, you know you need to act. You then set an alarm threshold which makes sense given your reaction time.
If you have automation to deal wit
Re: (Score:2)
You can build advanced, predictive analytics with Splunk. It can do exactly what you asked for.
But that will generate GB of data, worsening the problem, won't it?