Data Storage

Ask Slashdot: Smarter Disk Space Monitoring In the Age of Cheap Storage?

relliker writes In the olden days, when monitoring a file system of a few hundred MB, we would be alerted when it topped 90% or more, with 95% often considered quite critical. Today, however, with a lot of file systems in the terabyte range, a 90-95% full file system can still have a considerable amount of free space, but we still mostly get bugged by the same alerts as in the days of yore, when there really isn't a cause for immediate concern. Apart from increasing thresholds and/or monitoring actual free space left instead of a percentage, isn't it time for monitoring systems to become a bit more intelligent, taking space usage trends and heuristics into account and only warning when projected thresholds will be exceeded? I'd like my system to warn me with something like, 'Hey, you'll be running out of space in a couple of months if you go on like this!' Or is this already the norm and I'm still living in a digital cave? What do you use, and on what operating system?
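For what it's worth, the projection the submitter is asking for takes only a handful of lines once you are sampling usage somewhere. A minimal sketch in Python, using a plain least-squares fit over made-up samples; the volume size, growth rate and 60-day cutoff below are placeholders, not recommendations:

    #!/usr/bin/env python3
    """Rough sketch of trend-based disk alerting (not a finished tool).

    Assumes you already collect (timestamp, bytes_used) samples somewhere,
    e.g. from a cron job; the sample data below is made up for illustration.
    """
    import time

    def days_until_full(samples, capacity_bytes):
        """Least-squares fit of usage over time; returns projected days left,
        or None if usage is flat or shrinking."""
        n = len(samples)
        if n < 2:
            return None
        xs = [t for t, _ in samples]
        ys = [u for _, u in samples]
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        var_x = sum((x - mean_x) ** 2 for x in xs)
        if var_x == 0:
            return None
        slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / var_x  # bytes per second
        if slope <= 0:
            return None
        intercept = mean_y - slope * mean_x
        t_full = (capacity_bytes - intercept) / slope  # epoch seconds when usage hits capacity
        return (t_full - time.time()) / 86400.0

    if __name__ == "__main__":
        now = time.time()
        day = 86400
        capacity = 4 * 1024**4  # pretend 4 TiB volume
        # one made-up sample per day for the last week, growing ~30 GiB/day
        samples = [(now - (7 - i) * day, int(3.2 * 1024**4 + i * 30 * 1024**3))
                   for i in range(8)]
        left = days_until_full(samples, capacity)
        if left is not None and left < 60:
            print(f"Warning: volume projected to fill in ~{left:.0f} days")

In practice you would feed this from whatever already collects free-space numbers (a monitoring agent, a cron job appending to a CSV) rather than from synthetic samples.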
  • by Anonymous Coward on Thursday October 23, 2014 @01:21PM (#48214229)

    I never run out of disk space.

    • by bobbied ( 2522392 ) on Thursday October 23, 2014 @01:49PM (#48214477)

      I'll bet that's not true...

Seems to me that the stuff I work on keeps getting bigger and bigger, as does my collection of digital pictures and videos. And while I attempt to pare down what I keep, some of it stays around...

I expect that most users do the same things and thus data keeps piling up. I don't think it matters how good you are at deleting stuff you don't need anymore.

      • by jedidiah ( 1196 )

        I never run out of space. As disks get larger and larger, the risk of running out of space seems like the single least significant thing possible. The real issue is corruption.

Based on the headline, I would have expected this to be about content verification, with all of the ZFS fanboys coming out of the woodwork to extol its virtues.

      • I suppose you're much like me.
        Both of us are a "Being Digital".

Generally, if you are the type of organization that needs to monitor a 1TB volume for filespace, you're the kind of company that can fill that 1TB of filespace.

Where I currently work, it is not unusual to fill 1TB in about the same amount of time it took to fill a 100MB drive back in the day.

    • by dissy ( 172727 ) on Thursday October 23, 2014 @03:06PM (#48215225)

      I delete things when I'm done using them

      1) Many of my things I either desire to use for many years to come (a video download I paid for), or am required to keep to cover my ass (taxes, logs, most data at work due to policies, etc)

2a) The cost of more storage space is almost always less than the cost of the time to clean up files that could be deleted. In the context of work this depends heavily on exactly who made the data and their rate of pay / workload - but I've noticed that the higher-up execs and managers tend to be the worst hoarders and, of course, have the highest rates of pay. Most of the lower techs on the shop floor don't even have more than read-only access to the network storage here, though that is far from universal.

2b) Yes, there are other people whose time is not as expensive, but no one other than the data's owner/creator can know 100% what needs to stay vs. what can go (and sometimes even the owner/creator chooses wrong).

3) After deleting/archiving data, the chances of you needing it in the future are typically higher (often much higher) than the chances you were really done with it.

4) For the small number of times you really are done with it (like, totally and for sure), the amount of data that gets deleted is generally such a small percentage of the whole that, while still a good thing to do, it doesn't really help much with the problem at hand - freeing up a lot of space for future needs.

      I never run out of disk space.

      You either have too much free storage space, not enough data, or possibly both :P

  • by brausch ( 51013 ) on Thursday October 23, 2014 @01:22PM (#48214233)

    How does performance change as the big disks approach full? That was always one reason for the rule of thumb about keeping at least 10% free space on UNIX.

    • by Anonymous Coward on Thursday October 23, 2014 @01:29PM (#48214289)
Well, ext4 strives to scatter files around the disk to avoid fragmentation. Once the disk begins to approach full, it has to use smaller and smaller holes to place data into, which causes some fragmentation.
    • You want to keep the hard drive at 50% or less to maximize performance. If the hard drive is more than 50% full, the read/write head takes longer to reach the data. If the hard drive is 90% full, most OSes will have performance issues.
      • by gnasher719 ( 869701 ) on Thursday October 23, 2014 @01:35PM (#48214335)

You want to keep the hard drive at 50% or less to maximize performance. If the hard drive is more than 50% full, the read/write head takes longer to reach the data. If the hard drive is 90% full, most OSes will have performance issues.

        Actually, any OS will have performance issues, because the transfer rate (MB/sec) drops from the outside tracks to the inside tracks. That's why for home use, you just buy the biggest hard drive that you can easily afford (if you need 1TB, you buy 3TB), because that way you use only the parts of the drive with the highest transfer speed, and the average head movement time is also a lot less.

      • by RenderSeven ( 938535 ) on Thursday October 23, 2014 @01:42PM (#48214413)
        I typically partition the drive into two logical drives. The inner partitions with awful performance are where my media goes (movies, music, photos). The performance falloff is non-linear. Also, performance degradation over time is worse for the inner tracks, so inner tracks are where you put data that is more or less static, or at least written sequentially.
        • by kuzb ( 724081 ) on Thursday October 23, 2014 @01:58PM (#48214551)

          That's an interesting idea for the budget-minded, but personally I think if performance is actually an issue I'd use SSDs for things that need to be performant, and store everything else on regular drives.

        • by afidel ( 530433 ) on Thursday October 23, 2014 @02:22PM (#48214791)

Inner tracks have better seek times, which is why high-performance applications often "short stroke" drives (i.e., artificially restrict the percentage of the drive used so that only the inner tracks are utilized, though with modern drives and transparent sector remapping it's unlikely this practice still works); outer tracks have better streaming performance because more sectors pass under the head in a given timeframe.

        • The inner partitions with awful performance are where my media goes (movies, music, photos).

          Hmm. I keep my media (movies, music, & photos) on an external USB drive. It's probably the slowest of all my storage devices and it works just fine. I'm sure there are higher latencies than your setup but I certainly never noticed them.

      • Re: (Score:2, Insightful)

        Unless you are the sort of disconcertingly disciplined and organized person who sets up a monitoring and alerting system for their dinky little desktop, you probably aren't talking about 'the hard drive'. At a minimum, you are probably dealing with some flavor of RAID, or ZFS, or an iSCSI LUN farmed out by some SAN that does its own mysterious thing behind the expensive logo, or some other additional complexity. Flash SSDs are also increasingly likely to be involved, quite possibly along with some RAM cache
      • by mysidia ( 191772 )

        You want to keep the hard drive at 50% or less to maximize performance.

You're talking about short-stroking the drive, which is a fundamentally different question from what percentage of space usage is best for performance.

        For the sake of argument: Let's assume you create a single partition on your hard drive that only uses the first 30% of the disk drive, AND your partition's starting cylinder is carefully chosen to be in alignment with your allocation units / stripes down all RAID levels

        • When my vintage MacBook (2006) started slowing down last year, I read that keeping the hard drive to less than 50% full would improve performance. As it was, my hard drive was 60% full and I was able to reduce it down to 40%. Performance improved noticeably. Replacing the hard drive with an SSD improved the overall performance some more.
        • by dbIII ( 701233 )
It's 2014 - just get a shitload of cheap small drives and stripe across a lot of mirrors if you can't put up with the speed of one drive. Even if they are old and slow laptop (or "green") drives, if you have enough of them it's still going to be faster than short-stroking a single drive, even if that drive is 10k RPM and SAS.
If budget is a problem, then yes, get the couple of percent improvement from only using part of the drive instead of doubling the speed or more with mirrors.
      • Hmmm ... if the goal is to keep all of my disks under 50% to maximize performance ... don't I effectively need twice as much disk? And if it's under RAID I'd need at least 4x as much disk?

        Which kind of defeats the purpose of both having cheaper disk, as well as having monitoring to let me know when it's filling.

        Sorry, but who has the luxury of buying twice as much disk so we can keep them all under 50%??

        What you say might get you a performance boost, but otherwise it doesn't make a lot of sense to me.

        • Sorry, but who has the luxury of buying twice as much disk so we can keep them all under 50%??

          I'm planning to replace the 3x80GB hard drives in my FreeNAS file server at home with 3x1TB hard drives, as Newegg has 1TB drives on sale for $50. That will give me 80% free space in a RAID5 configuration for $150.

          • Sure, great ... and those of us in the real world who manage 10s or 100s (or in some cases 1000s) of terabytes?

            We're talking an entirely different price point and quantity.

            I seriously doubt people with NetApp servers and other large storage could even consider keeping 50% of their disk space empty just to make it slightly faster.

            My user account on my personal machine has over 1TB of stuff in it, which gets mirrored to two other drives. That adds up after a while when you're staying under 50%.

            So I'd be look

            • And, from the very little I know about RAID 5 ... if you only have 3 drives in it, you're not really getting a whole lot of added security, are you?

RAID5 requires a minimum of three drives. If one drive fails, the other two drives can continue to function in degraded mode. The entire RAID would be lost if you have more than one hard drive failure. You could designate one or more extra hard drives as spares to automatically replace a failed hard drive. For extra security, each hard drive needs to be on a separate controller (which is what I have in my FreeNAS box). I typically have a hard drive crash every five years, which is why I replace my hard drives e
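For readers wondering how "the other two drives can continue to function" actually works, here is a toy illustration of the XOR parity idea behind RAID5. It isn't tied to FreeNAS or any real controller, and real arrays rotate parity across all members; this only shows the reconstruction arithmetic:

    # Toy demonstration of RAID5-style XOR parity with three "drives":
    # two hold data, one holds parity; any single lost drive can be rebuilt
    # by XOR-ing the surviving two.

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    data0 = b"hello world....."   # block on drive 0 (16 bytes)
    data1 = b"disk monitoring!"   # block on drive 1 (16 bytes)
    parity = xor_blocks(data0, data1)  # block on drive 2

    # Simulate losing drive 1: rebuild its block from drive 0 plus parity.
    rebuilt = xor_blocks(data0, parity)
    assert rebuilt == data1
    print("rebuilt block:", rebuilt)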

              • The linked article used to be about how RAID was going to stop working in 2005 or similar.
                It didn't because disks and controllers got much faster as well as dealing with more capacity, while the premise assumed nothing but a change in capacity.
                So now we have arrays 10x larger that rebuild in less than half the time of the old ones. We also have stuff like ZFS that acts like RAID6 in many ways (with raidz2) but can have much shorter rebuild (resilver) times because it only copies data instead of rebuilding
                • I'd expect someone running FreeNAS to know more than a journalist rewarming an old article that was a poor prediction in the first place, but I suppose seeing it in magazine format does make it look more credible.

RAID6 was something I heard about five or six years ago, but I've never seen it in action or in the field. Supposedly it was the next great thing. I'm still figuring out ZFS on my FreeNAS box. Damn 8GB flash drives keep zapping out every six months, forcing me to install the current version of FreeNAS.

                  • by dbIII ( 701233 )
                    ZFS raidz2 is pretty well RAID6 with an awareness of what is going on with the files in the array giving a variety of improvements (eg. resilver time normally being vastly shorter than a RAID6 rebuild time). A few years of seeing RAID6 in action was ultimately what drove me to ZFS on hardware that's perfectly capable of doing RAID6.
                    Anyway, the "raid only has five more years" article keeps on getting warmed up, and keeps getting disproved by the very reasons given for the RAID use by date. Increasing capac
        • Sorry, but who has the luxury of buying twice as much disk so we can keep them all under 50%??

I just had a look: if you need 1TB in a desktop, you can buy 1TB for £46 and 2TB for £54.

      • The actual number you are looking for is 85%.

        Straight out of Donald Knuth volume 3: Sorting and Searching; at 85% fill, a perfect hash starts degrading in performance.

        The basis of the Berkeley Fast File System warn level was an 85% fill on the disk, which the filesystem effectively hashed data allocations onto. As people started getting larger and larger disks, they began to be concerned about "wasted space" in the free reserve, and moved the warnings down to 10%, then 8%, and so on.

        This is what the OP is
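For the curious, the textbook linear-probing estimates from that same Knuth volume make the 85% figure easy to see. The numbers below are just the standard formulas plugged in, not a measurement of any particular filesystem:

    # Expected probe counts for linear probing at load factor a (Knuth, TAOCP vol. 3):
    #   successful search   ~ 0.5 * (1 + 1/(1-a))
    #   unsuccessful search ~ 0.5 * (1 + 1/(1-a)**2)
    for a in (0.50, 0.70, 0.85, 0.95):
        hit = 0.5 * (1 + 1 / (1 - a))
        miss = 0.5 * (1 + 1 / (1 - a) ** 2)
        print(f"{a:.0%} full: ~{hit:.1f} probes per hit, ~{miss:.1f} per miss")

At 85% the cost of an unsuccessful probe sequence is already around 20x the half-empty case, and it roughly an order of magnitude worse again by 95%, which is the cliff the free reserve was meant to keep you away from.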

    • by Nutria ( 679911 )

      Too bad my mod points expired, because that's exactly what I was thinking. Although, 20% was my rule of thumb.

It probably has a lot to do with usage patterns: is your multi-TB volume used as an IMAP server, and thus chock full of 5-250KB files -- so that the FS can easily find contiguous holes -- or is it a video server full of 1-5GB files, so that contiguous holes are much harder to find when the disk is "only" around 70% full? Or a DB server whose files are even huger, and so contiguous holes impossible to

      • by unrtst ( 777550 )

        Even then, circumstances can alter the situation, since if you create a bunch of *huge* tablespaces on a virgin FS and they never extend, then you can get up to a high usage percentage without fragmentation.

        This.
To explain it to users that have no clue what a tablespace is, but may know what a partition is, imagine:

        * setting up a separate partition for every set of similar sized files
        * for very large files, give each its own partition
* pad every file out to a fixed size

For example, for MP3s, pad every one of them to 6MB (I'm guessing; do some stats on your archive to determine the optimal size).
Every time you write one, it's 6MB to that one partition.
        If you delete one, there will be a hole in the filesystem, bu

Then you are just wasting the space that would usually be in fragments.
Unless I am terribly misinformed, modern filesystems figure that out themselves and try to prevent fragmentation in a much better way. Old file systems didn't do that properly, so back in the '90s (and early '00s for Windows) that probably was a good way to work.

    • Picture this:

      You're pulling into the parking lot at work, and you know that there are only 5% of the spaces free. How long will you have to drive around before you find a place to park?
      Now picture pulling into the parking lot at Disney World, and you know that 5% of the spaces are free. Now how long will you have to drive around?
With Windows and NTFS, the MFT (Master File Table) reserves 12.5% of the disk space. Once all other sectors on the disk are full, files will actually be stored IN the MFT reserved space, and you run the risk of fragmenting the MFT itself and decreasing performance.

Also, the defrag tool (automatically scheduled or not) requires 15% free space to run.
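If you schedule your own defrag (or any maintenance that needs headroom), the pre-flight check is trivial; the 15% figure below simply echoes the requirement quoted above, and the volume path is just an example:

    import shutil

    def has_headroom(path: str, needed_fraction: float = 0.15) -> bool:
        """True if the volume containing `path` has at least `needed_fraction` of its space free."""
        usage = shutil.disk_usage(path)
        return usage.free / usage.total >= needed_fraction

    if __name__ == "__main__":
        # "/" here; on Windows you'd point this at the volume root, e.g. "C:\\"
        volume = "/"
        if not has_headroom(volume):
            print(f"Less than 15% free on {volume}; the built-in defragmenter may refuse to run.")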

  • Nagios XI (Score:2, Informative)

    by Jawnn ( 445279 )
Isn't smart enough to track trends, but it does do graphs, so you can easily see where you're headed and how fast.
The bigger question is: how do you reserve less than 1% for the superuser?

    • Re:Bigger question (Score:5, Informative)

      by Bigbutt ( 65939 ) on Thursday October 23, 2014 @01:39PM (#48214377) Homepage Journal

      It's a configuration option when you newfs a file system. Man newfs or mkfs.

      [John]

    • by DarkOx ( 621550 )

I don't know; the default 5% might be excessive for really big volumes, but keeping at least 1% free seems 'smart' pretty much no matter how many orders of magnitude the typical volume grows. The typical file size has grown with volume size. We now keep all kinds of large media files on online storage that previously would have been moved off to some other sort of media in short order.

The entire point of the reservation is so that in the event of calamity the superuser retains a little free space to

Create a large file that the superuser can then delete when they need space to fix issues.
    • -m0
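For ext2/3/4, the knob being discussed in this sub-thread is the reserved-blocks percentage (the -m option to mke2fs and tune2fs). A rough sketch of checking and lowering it from a script, assuming e2fsprogs is installed and you run as root; /dev/sdb1 is only an example device:

    # Equivalent one-liners: `tune2fs -l /dev/sdb1 | grep -i reserved` and `tune2fs -m 1 /dev/sdb1`.
    # Don't try this first on a filesystem you care about.
    import subprocess

    DEVICE = "/dev/sdb1"  # example device only

    def reserved_block_count(device: str) -> int:
        out = subprocess.run(["tune2fs", "-l", device], check=True,
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.startswith("Reserved block count:"):
                return int(line.split(":", 1)[1].strip())
        raise RuntimeError("no reserved block count reported")

    def set_reserved_percent(device: str, percent: int) -> None:
        subprocess.run(["tune2fs", "-m", str(percent), device], check=True)

    if __name__ == "__main__":
        print("reserved blocks before:", reserved_block_count(DEVICE))
        set_reserved_percent(DEVICE, 1)   # keep 1% rather than the default 5%
        print("reserved blocks after:", reserved_block_count(DEVICE))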
  • by pla ( 258480 ) on Thursday October 23, 2014 @01:35PM (#48214329) Journal
Today, however, with a lot of file systems in the terabyte range, a 90-95% full file system can still have a considerable amount of free space, but we still mostly get bugged by the same alerts as in the days of yore, when there really isn't a cause for immediate concern.

When we had drives in the 100s of MB range, we used a few MB at a time. Now that we have drives in the multi-TB range, we tend to use tens of GB at a time. In my experience, a 90 percent full drive has as much time left before running out as it did a decade ago.

    Perhaps more importantly, running at 90% of capacity kills your performance if you still use spinning glass platters as your primary storage medium (not so much when talking about a SAN of SSDs). In general, when you hit 90% full, you have problems other than just how long you can last before reaching 100%.
    • by vux984 ( 928602 )

In my experience, a 90 percent full drive has as much time left before running out as it did a decade ago.

      In your experience maybe. Not in mine.

I don't use 10s of GB at a time. If I start a new torrent, dump my phone's camera onto my computer, or install a new game, that eats several GB. But everything else is pretty steady state with very slow, steady growth. I don't download a lot of torrents on this particular PC, and sometimes remove old ones; I install a few new games a year and sometimes uninstall ol

      • by afidel ( 530433 )

YOU don't use 10s of GB at a time, but I bet your organization does. My company has expanded its storage by 50% per year, compounded, for at least the last 10 years (I've been here 8, and I have 2 years of backup reports from before I started), and I don't think we're that unusual if you look at the industry reports for GB shipped per year.

    • by Vellmont ( 569020 ) on Thursday October 23, 2014 @01:55PM (#48214529) Homepage

      Exactly. The question is strange (and the attitude of the poster is odd too... 20 years ago is "days of yore", and "olden days"?) Methinks dusting off the word "whippersnapper" might be appropriate here.

      Oddly enough, a similar question fell through a wormhole in the space time continuum from Usenet, circa 1994. "Now that we have massive HDs of 100s of megabytes, and not the dinky little ones of several megabytes from the Reagan era, do we still have to worry about having 95% usage alarms?"

The truth is, if you got to 95% usage somehow, what makes you think that you're not going to get to 100% sometime soon? Maybe you won't, but you can't know unless you understand how and why your usage increases. That's not going to be solved by a magic algorithm alone; it involves understanding where your data comes from, and who or what is adding to it. This isn't new. The heuristics-and-usage question, and estimating when action needs to be taken, is just as relevant now as it was 20 years ago.

In my experience, a 90 percent full drive has as much time left before running out as it did a decade ago.

Not in mine. Granted, we're both going off of anecdotal evidence, but in my favor, my experience is based on managing a few hundred servers and a couple thousand desktops.

With most workstations/servers that I manage, if they're taking up massive amounts of space, it's very often because they're storing lots of old stuff. Several years ago, when we only had 30 GB drives, people would go back and clear out, delete, and archive old data. Now they just store it, because why not? Storage is c

    • running at 90% of capacity kills your performance if you still use spinning glass platters

A decent SAN will show practically no performance degradation right up to the point it hits 100% full.

    • not so much when talking about a SAN of SSDs

      You mean an array of SSDs.

      Just as you wouldn't call a PC on a local network a "LAN", you don't refer to an array on a storage network as a SAN. The SAN is the network.

      Sorry, but this really bugs me...

    • by mcrbids ( 148650 )

With today's 4-8 TB drives, it's easy to keep billions of files on a single disk, so you could potentially keep data for many thousands of customers on a single disk. But if you do that, you quickly run into an entirely new type of constraint: IOPS.

      The dirty secret of the HD industry is that while disks have become far bigger, they haven't really become any faster. 7200 RPM is still par for the course for a "high performance" desktop or NAS drive, and you can only queue up about 150 requests per second a
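A quick back-of-envelope for the IOPS point above; the 8.5 ms average seek is an assumed spec-sheet figure for a 7200 RPM desktop drive, not a measurement:

    # Rough random-IOPS estimate for a spinning disk:
    # one random I/O ~ average seek + half a rotation (rotational latency).
    rpm = 7200
    avg_seek_ms = 8.5                          # typical spec-sheet figure, assumed
    rotational_latency_ms = 60_000 / rpm / 2   # half a revolution, in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    print(f"~{1000 / service_time_ms:.0f} random IOPS per spindle")
    # Prints roughly 80; command queueing and short seeks push real drives
    # toward the ~150/sec figure mentioned above, but it's the same order of magnitude.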

      • by sribe ( 304414 )

With today's 4-8 TB drives, it's easy to keep billions of files on a single disk...

        Uhhmmm, no, not quite ;-)

      • by ihtoit ( 3393327 )

        I have a 1TB drive with 5.5 million files on it (don't ask). Even scaling to 8TB, that'd still only be 44 million table entries. NTFS on a GPT volume can scale to 2^32-1 files, but I'd hate to think how big that'd end up being with 64KB clusters... 274TB? Grow it for larger files.

    • by sribe ( 304414 )

      Perhaps more importantly, running at 90% of capacity kills your performance if you still use spinning glass platters as your primary storage medium (not so much when talking about a SAN of SSDs). In general, when you hit 90% full, you have problems other than just how long you can last before reaching 100%.

Do you have actual experience or data to back up that claim? Because my verified, benchmarked experience is the opposite: 90% does NOT "kill" performance. Of course you're using inner tracks and getting lower transfer speeds, but nothing really dramatic like what you'd see with extreme fragmentation.

      I will admit however, that when you get to 0.15% free (on a 4TB disk), performance really sux rox ;-)

It's a block size vs available space issue, so 90% full kills performance on small drives with big blocks (eg. SSDs from a couple of years back), but at 90% of 4TB you've still got a vast quantity of available blocks, so it still performs very well.
So although I'm not the poster above, I've had experience of both - the percent-full number is only a rough guide and falls down when the block size is very small compared with the available space.
        • by sribe ( 304414 )

          It's a block size vs available space issue so 90% full kills performance on small drives with big blocks (eg. SSDs from a couple of years back)...

          OK, while I've not experienced that myself (no SSDs deployed), it certainly makes sense--much more so than the "blanket 90%" claim that people repeat mindlessly.

Stacker and DoubleDisk certainly helped back then, but nowadays most of our media is already compressed as much as it will go, unless you want to lose resolution or bitrate...
  • by Bomarc ( 306716 ) on Thursday October 23, 2014 @01:38PM (#48214363) Homepage
I install the shareware version of Hard Drive Sentinel [hdsentinel.com] on all my Windows systems. Not only will it warn you about hard drive usage (%); it will also warn you about errors on the drive -- and in my case I was able to predict that two drives were going to fail (saving the data) before they actually failed.

Their support has been very responsive and courteous, and their product can work through (i.e. see drives behind) most RAID controllers.

    And no, I don't have any affiliation with HDS.
  • by QuietLagoon ( 813062 ) on Thursday October 23, 2014 @01:39PM (#48214373)

    ...when there really isn't a cause for immediate concern.

    It all depends what one is concerned about. Is maximizing disk space down to the last possible byte important to you? Or is performance in accessing random data important to you? Or is wanting to keep artificial limits imposed by monitoring systems important to you?

    Once you determine what is actually important to you, then you monitor for that parameter.

    Whatever is measured is optimized.

  • The problem is the monitoring group is reluctant to make "custom" changes due to the size of the environment. OS and hardware level alerts are a pretty minor part of the overall monitoring environment in terms of the number of configuration changes required. With mirroring and system/geographic redundancy, we can wait until the morning status reports to identify systems before they get to critical.

    [John]

  • by aglider ( 2435074 ) on Thursday October 23, 2014 @01:40PM (#48214385) Homepage
You insensitive clod! In the age of MBs, we were producing KBs of data. In the age of GBs we were producing MBs of data. And in the age of TBs we are producing GBs of data. And so on. Thus a 90% full filesystem is as bad as 10 years ago. Unless you are still producing KBs of data.
    • Unless you are still producing KBs of data.

      Well yeah, lots of people are. An awful lot of work is still done in Microsoft Word and Microsoft Excel. No need to embed a 5 GB video just because you have the space.

      • by dissy ( 172727 )

        An awful lot of work is still done in Microsoft Word and Microsoft Excel. No need to embed a 5 GB video just because you have the space.

        *noob voice enable*

Well no, I take a screenshot of the video, which is then embedded, unscalable, in an Excel file, which I paste into a Word document, which I then send in a MIME-encoded email to the entire company directory.
        I mean, this is the internet after all, it's not like some form of file transfer protocol exists or anything!

Some of that kind of nonsense happens in PowerPoint presentations -- embedding images that might be a couple hundred megabytes each. I see that in marketing companies often enough, but it's still been a pretty steady rate of growth for the past few years.

          However, I still don't see multi-gigabyte Word or Excel documents, at least not often enough that I recall it.

          • by dbIII ( 701233 )
            I've seen PDFs almost that big that were made by printing out large MS Word documents and then scanning them at 600dpi, 24 bit color. For added fun they used full sentences, including punctuation and variable whitespace, as their filenames. Various problems associated with making and opening such things I have been assured are due to a slow gigabit network and "crappy ten year old" i7 machines and not whoever decided to not just save as PDF. A few versions of files done that way and you've got GB an
  • While "only 5% of my disk" is now many times larger than it used to be, so are the things I'm moving around, so "95% full" is just as bad now as it used to be.

    Basically, once we got past quotas measured in single or double-digit numbers of kilobytes, this stopped changing for me. 95% full on a 100MB disk and 95% full on a 500GB disk work the same for me.

  • Synology (Score:3, Interesting)

    by krray ( 605395 ) on Thursday October 23, 2014 @01:51PM (#48214497)
    You're living in a digital cave IMHO.
    Don't worry, I was too until recently...

Always mucked with fast external storage as the "main" solution -- FireWire, Thunderbolt, etc. This system was the main one and had a few externals hooked up, that system had another, another over there for something else. It was a mess all around. How to back it all up??

    Gave them all away -- bought a Synology [synology.com]

    Then bought another (back it up :).

    180-200M/sec throughput is the norm. On the network. Beats out most external drives I've ever come across. Everything ties into / backs up to the array. Home and work now too.

    I use everything but Microsoft products. They're shit.

    My filesystem is 60T w/ under 10T used today. I'll consider plugging in more drives or changing them out in the Synology somewhere between 2017 and 2020...
    • 180-200M/sec throughput is the norm. On the network.

      You have a 10 gigabit network? I ask because a 1 gigabit network can only provide 125MB/sec throughput. I know that some of the Synology units offer link aggregation support, but that also usually requires support in the switch and multiple network cards in each client.

      That said, even 200MB/sec isn't particularly good if you can only provide that total to one client at a time, especially for the cost of a Synology enclosure that can hold enough drives for 60TB of storage.

      • No point nitpicking just because the "b" denoting Megabits was forgotten. A speed of 200Mb/s is not huge but it's not too bad either, even though a fairly old machine (6 years) with a few disks in an array can get close to five times that and saturate gigabit (or even twice over if a second connection is going somewhere else).
  • Check_MK (Score:4, Informative)

    by tweak13 ( 1171627 ) on Thursday October 23, 2014 @02:00PM (#48214575)
    We switched to Check_MK [mathias-kettner.com] for monitoring. It's basically a collection of software that sits on top of Nagios.

The default disk monitoring allows alerting based on trends (full in 24 hours, etc.) or thresholds based on a "magic factor." Basically it scales the thresholds so that larger disks alert at a higher percentage, adjustable in quite a few different ways to suit your tastes.
We're switching to check_mk too. Honestly though, anything with a graph will do - periodically stick something into Graphite or just add another line onto a CSV. Then draw a graph, draw a rough trend line, and there's your answer. Getting a nice email/text message with that information takes a bit more work (which is where check_mk might help), but as long as you can see it with enough advance warning, checking the disk graphs weekly (or even monthly) is probably enough.
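If you want the flavor of size-aware thresholds without deploying Check_MK, something like the following captures the idea. This is an illustrative scaling rule of my own, not Check_MK's actual "magic factor" formula, and all the numbers are arbitrary:

    import math

    def scaled_threshold(size_gb: float,
                         base_pct: float = 90.0,
                         reference_gb: float = 100.0,
                         max_pct: float = 98.0) -> float:
        """Alert threshold (% used) that creeps toward max_pct as volumes grow past reference_gb."""
        if size_gb <= reference_gb:
            return base_pct
        # shrink the required *free* percentage by a log factor of how much bigger the volume is
        factor = 1 + math.log10(size_gb / reference_gb)
        free_needed_pct = (100.0 - base_pct) / factor
        return min(max_pct, 100.0 - free_needed_pct)

    for gb in (100, 1_000, 10_000, 100_000):
        print(f"{gb:>7} GB volume -> alert at {scaled_threshold(gb):.1f}% used")

With these defaults a 100 GB volume alerts at 90% used, a 1 TB volume at 95%, and a 100 TB volume at 97.5%, which is roughly the behaviour described above: bigger disks get to run fuller before anyone is paged.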

If you are an enterprise shop, you likely have so many disks spread across so many servers that you probably have an admin team responsible for projecting utilization for the next 12 months, so that procurement and installation costs can be budgeted.

    For the home user, or a small business, 90% is still a good rule of thumb. I would hate to see some additional process running in the background constantly projecting when the disk will be full. Just throw a warning for the user when you reach 80-90% capacity, an

I would always shoot for more disk, but then issues arise from managing such large disks in the 1+ TB range, which we tend to fill up fast.

For a laptop, I am targeting two large drives in a RAID-Z mirror on Linux. I would do the same for a desktop.

    For more data and centralization for my house or office, I would choose an iXSystems FreeNAS Mini. It has all the features that you need for your data and can be easily configured to send out warning messages on various measurements like disk space, SMART m

I think most individual server filesystem monitoring for free space is kind of a waste of time these days, or at least low priority.

    SANs and virtualized storage and modern operating systems can extend filesystems easily. Thin provisioning means you can allocate surpluses to filesystems without actually consuming real disk until you use it. Size your filesystem with surpluses and you won't run out.

    Now you only have to monitor your SAN's actual consumption, and hopefully you bought enough SAN to cover your grow

  • Interesting things to monitor are I/O rates and read/write latency. More esoteric things might be stats about most active files and directories or percentage of recently accessed data -vs- inactive data. But these are more analysis than monitoring. What other parameters would a sysadmin want to look at?

    RLH
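On Linux, the I/O rate and latency figures mentioned above are all sitting in /proc/diskstats; a minimal sampler might look like this (field layout per the kernel's Documentation/iostats.txt; "sda" is just an example device name):

    # Minimal /proc/diskstats sampler: I/Os per second and average time per I/O
    # over a short interval. Linux-only.
    import time

    FIELDS = ("reads", "read_ms", "writes", "write_ms")

    def snapshot(dev: str) -> dict:
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] == dev:
                    # parts[3]=reads completed, parts[6]=ms reading,
                    # parts[7]=writes completed, parts[10]=ms writing
                    return {"reads": int(parts[3]), "read_ms": int(parts[6]),
                            "writes": int(parts[7]), "write_ms": int(parts[10])}
        raise ValueError(f"device {dev!r} not found")

    def sample(dev: str = "sda", interval: float = 5.0) -> None:
        before = snapshot(dev)
        time.sleep(interval)
        after = snapshot(dev)
        d = {k: after[k] - before[k] for k in FIELDS}
        ios = d["reads"] + d["writes"]
        print(f"{ios / interval:.1f} IOPS", end="")
        if ios:
            print(f", ~{(d['read_ms'] + d['write_ms']) / ios:.1f} ms average per I/O")
        else:
            print(", idle")

    if __name__ == "__main__":
        sample()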

I don't need the computer to tell me when a big disk is nearly full. That would be something I'd been aware of for some time.

In an enterprise setting where there could be many disks... one would assume the sysadmin has set reasonable alert levels rather than leaving everything on default.

So... I guess this is relevant to non-power users in residential contexts? But then how is a non-power user filling a terabyte hard drive? I mean... seriously.

The disk fills up at the same relative speed.
Okay, the OS doesn't have a big problem with a 99% full disk, but your media collection does. You still need to upgrade your storage when it's getting full, because you will still get new big files.
