Data Storage Technology

SATA RAID Enclosure w/ Temperature Monitoring?

vanyel asks: "Yesterday, my external USB 2.0 drive enclosure finished cooking a 3/4-full 200GB drive after its fan quit working who knows how long ago. In the time-honored tradition of closing the barn door *after* the horse has wandered away, I'm accelerating my quest for a RAID solution. In particular, I want something that will support 4 SATA drives and has temperature monitoring that doesn't require a particular vendor's RAID card or Windows. Better yet, is there a RAID-5 NAS that isn't in the $4,000-5,000 USD price range? Anyone with a better barn door to close this problem with?"
  • Use SMART? (Score:5, Informative)

    by crow ( 16139 ) on Wednesday December 22, 2004 @10:37AM (#11158742) Homepage Journal
    At least with ATA drives, you can usually use smartd to monitor your drives. This includes temperature and various failure indicators. Usually when a drive fails, there is plenty of warning from small failures that the drive recovers from. When you run smartd, you can receive these warnings.
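    For instance (a rough sketch; the device name and thresholds are placeholders, not from the poster), a smartd.conf entry can mail you when attributes degrade or the temperature crosses a limit, assuming your smartmontools build supports the -W directive:

      # /usr/local/etc/smartd.conf (or /etc/smartd.conf on most Linux distros)
      # -a: monitor all attributes; -W 4,40,45: report temp jumps of 4C,
      # log at 40C, warn at 45C; -m: mail warnings to root
      /dev/ad0 -a -W 4,40,45 -m root

    You can also spot-check a drive by hand with something like "smartctl -a /dev/ad0" and read the Temperature_Celsius attribute.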
    • Re:Use SMART? (Score:3, Informative)

      by Sepper ( 524857 )
      I second that. You can have SMART run a script when things fail, like mailing the details of the failure and then shutting down the computer.

      For other computer parts, lm_sensors might do the trick.
      For example: http://gentoo-wiki.com/HOWTO_Monitor_your_hard_disk(s)_with_smartmontools [gentoo-wiki.com]

      If it's an external device, the best thing would be to get a controllable UPS and turn it off (again with a small script).

      Just think RAID, UPS [apcc.com], SMART monitors [santools.com] and daemons [apcupsd.com], and with a bit of imagination you can come up with a
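
      A rough sketch of that idea (the script path and device below are invented for illustration): smartd can run a handler instead of, or as well as, sending mail:

        # in smartd.conf: run our own handler when smartd would otherwise send mail
        /dev/hda -a -m root -M exec /usr/local/sbin/disk-panic.sh

      and disk-panic.sh might just mail the details and power the box down (smartd exports a few SMARTD_* variables to the handler; check the man page for the exact names on your version):

        #!/bin/sh
        # hypothetical handler invoked by smartd via -M exec
        echo "$SMARTD_MESSAGE" | mail -s "smartd: $SMARTD_FAILTYPE on $SMARTD_DEVICE" root
        shutdown -h now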
    • Yeah, and don't rely on Windows SMART detection. After my 99% full 250 GB drive failed last week, I was able to resurrect some of the data. Included in this were the event logs, with about 500 warnings over two weeks that my drive was on the way out. Okay, so in order to figure out if my drive is going south I need to check the event log every day? WTF?
      • Re:Use SMART? (Score:3, Informative)

        by isorox ( 205688 )
        Every week would have done. Simply have a daily cron job that greps your logs for fatal messages and emails them to you.
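
        Something along these lines would do (paths and the address are placeholders; most mail(1) implementations will still send an empty message when grep finds nothing, so wrap it in a small script if that bothers you):

          # /etc/crontab -- mail anything that looks fatal, daily at 06:30
          30 6 * * * root grep -iE 'error|fail|fatal' /var/log/messages | mail -s "log check" you@example.com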
      • Re:Use SMART? (Score:3, Insightful)

        by hackstraw ( 262471 ) *
        Okay, so in order to figure out if my drive is going south I need to check the event log every day? WTF?

        Yes you do. It's no big deal, at least for operating systems that log in plain text (I don't know about Windows). What I do is a nightly grep for unusual stuff, and I take 30 seconds to a minute reviewing it for all of my systems. I admin about a hundred, so one machine should take about 5 seconds or so a day. That is much easier than trying to recreate 250 gigs of data from a dead drive. Right?
      • Okay, so in order to figure out if my drive is going south I need to check the event log every day?

        The moment I figured that one out the hard way was also the exact moment I decided to convert the servers from Windows to Linux. Real servers log in plain text.

        • Windows does log to plain text.
          Finally, a /.'r admitting that Win2k server is a "real server"... and wait, it's been about 0 degrees F here in N. Central KY (otherwise known as hell) for the last few days....
          Ragnarok is coming!
          • Windows does log to plain text.

            Really? Do you claim C:\WINNT\security\edb.log is plain text just because it doesn't contain any Unicode characters? C:\WINNT\system32\LogFiles would be a nice place for the logs, but it only contains the IIS logfiles. C:\WINNT\Debug has a few other logfiles, but nothing consistent and no system.log like you'll see in the Event Viewer.

            OK, I'll end the suspense. The system log is at C:\WINNT\system32\config\SysEvent.Evt . First place to look for a log file that is, the co

            • I just read the file in notepad.
              Next bet?
              (um, it's not EASY to read it in notepad; that's not the tool I would use by choice, as it's not formatted to be read in notepad).
              I used to keep tabs on a 70 sub-office WAN network by occasionally checking the event logs.

              • You must have a weird eye-brain parser.

                Now, let's see what Slashcode does to this snippet from my logfile (actually, one of my customer's):

                †晌敌 † †&# 49324; 쁄 四 &#19585 ; † ​†㪀 & #137;†††Ԡ‒†   ⁐⁒&#8265

                • OK, let's try it raw (the previous was saved in notepad and read back in, because it contained characters that Slash couldn't handle). This little bit seems to work. And no, I don't call this plain text, even though parts of it are plain enough. No datestamps, nothing except parts of messages. Is there a "Create legible logfiles" option in the registry that you have found and turned on?

                  0 LfLe À DÀ ÛV L : 0 _ _ P R I N T S E R V E R _ P 1 / L J U S D A L / S e s s i o n 7

                • Odd, in mine it comes out like this:

                  E v e n t L o g S H A D O W 1 4 : 3 6 : 0 4 P M 1 2 / 1 4 / 2 0 0 4 Ô $ mÔ $ m LfLe bAbAy

                  Maybe you aren't opening the right file? I had to take out enough to get by the junk filter, but aside from some non-standard characters and spacing between letters, it's completely readable.
                  it's SysEvent.Evt; there are other system files there, and if you have your file extensions turned off (shudder) it's easy to click on the wrong one.
                  • Nope, it's the right file. In fact, none of the files in that directory seem to be readable. Maybe it actually IS a registry setting somewhere. :-)

                    aside from some non-standard characters and spacing between letters, it's completely readable.

                    So, can you tell me what it actually says in your example, apart from the datestamp? What does a disk read error look like, for example? That's the one I would have wanted to know about before going home for the weekend while the server happily thrashed a few hundre

    • Re:Use SMART? (Score:3, Informative)

      by endx7 ( 706884 )
      And best of all, SMART works not just with ATA, but with SATA too, which is what the asker seems to be asking for.
    • Re:Use SMART? (Score:2, Informative)

      by raxhonp ( 136733 )
      And then use hddtemp [guzu.net] to make it easier to monitor your drive temperatures.
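
      A quick sketch (the device name is whatever your drive is, e.g. /dev/hda on Linux, and the port number is from memory):

        # one-off reading
        hddtemp /dev/hda

        # or run it as a daemon and poll it over TCP (default port 7634, IIRC)
        hddtemp -d /dev/hda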
    • If you build a storage box (NAS, USB, FireWire, whatever), why not put it in a larger box, like a small ATX case? If the fans fail, it's not going to cook anything. The only reason the OP had the failure is that he was using a drive in a small case that didn't allow heat dissipation...
  • the nStor 4700 SATA series. See http://www.nstor.com for more info.

    MTW
      • nStor 4700: ~$4-6,000, depending on specific model.. no idea how many drives it comes with.. at least, that's the only price I could find after looking around the web for the last 20 minutes (since the parent thread was first posted). I think the poster was looking for a NON-$3,000-to-$5,000 solution... which I must agree, I am too... 4 SATA 250GB HDDs cost less than $1,000.. a RAID 'card' & case shouldn't cost $4,000 for non-business use.. I have over 500GB in work already, but I can't really back
        • Yes, I searched for a price in vain for 10 minutes, too. But I found many articles saying something about the nStor 4700 being really cheap, so one could assume that $4-6k is the price complete with twelve 400 GB drives.

          What the market completely misses is an easy and comfortable storage solution for the SOHO or advanced home user at a reasonable price. Not those stupidly small 1- or 2-drive boxes that cost even more than a complete PC including a drive of that size, and not those absolutely insane-priced "profe
  • I just recently set up a vinum test on FreeBSD. I ganged four 200 GB drives and got this:

    # df -h
    Filesystem         Size  Used  Avail  Capacity  Mounted on
    ...
    /dev/vinum/raid    734G  578G    97G       86%  /vinum

    (yes, 4*200=800, yet it's really only 734)

    anyway, what's neat is that I used a variety of controllers and it all worked. I first started off with a $15 IDE controller with 4 ports (2 channels, master/slave). used a SiL chipset. worked fi
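
    For anyone curious, a vinum concat setup like that is just a small config file handed to "vinum create" (the drive names and partitions below are invented, not the poster's actual layout):

      # vinum.conf -- rough sketch of a 4-drive concat volume
      drive d0 device /dev/ad4s1e
      drive d1 device /dev/ad5s1e
      drive d2 device /dev/ad6s1e
      drive d3 device /dev/ad7s1e
      volume raid
        plex org concat
          sd length 0 drive d0
          sd length 0 drive d1
          sd length 0 drive d2
          sd length 0 drive d3

    Then "vinum create vinum.conf", newfs /dev/vinum/raid, and mount it; "length 0" means "use whatever is left on the drive".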

    • In a RAID setup where you're hoping to have some kind of redundancy, shouldn't 4 x 200 equal 600 or 400?

      losing a single 200GB disk in that setup means that you most likely say goodbye to 734GB of data, rather than giving said disk a proper burial and replacing it.

      • I use true hardware mirroring for the o/s drive. 3ware /dev/twed0 is great.

        for my data pack, I use concat (linear) mode. it's not striping, since it doesn't paint across all disks for each next block. it fills up one disk at a time, then moves on to the next. if I stay within that disk, no others need be accessed. this allows the mobo to spin the others down and save heat, power and the life of the drive.

        but what if one of those fails? it will probably bring the whole pack down, like you say.

        in that case
        • (sorry for the quick followup)

          forgot to mention: the main reason I went with software RAID for the data-pack is that it's GROWABLE non-destructively. I asked 3ware and some others (Highpoint, Promise) if their controllers supported hot disk-adds to add more storage to a pack. none had it. that really surprised me.

          I didn't want to be a slave to any one vendor for my almost-terabyte media collection. if I need to grow this array (and I will, periodically) then freebsd vinum on 4.10-stable seemed to be th
          • The Raidcore series controllers absolutely ROCK, and they can do online live growing and RAID level migration (say I have a mirror set and add a drive, I can change it to a RAID level 5, all live and without losing any data). Broadcom recently bought them, so they have great support. It will also let you add multiple cards, so when you run out of ports (mine has 8 ports) you can just add another card and span across cards transparently.
            • is there solid stable BSD support?

              one of the things that impressed me about 3ware was that there are kernel drivers that are production quality. seriously.

              I am always interested in good hardware, but it MUST be freebsd-solid. else, it's usually a good comment on how poor the hardware is, or how closed the drivers are.

              finally, since I've never come across that brand before (and I'm not so junior in pc's, raid, etc) - then I do worry if I got one of these controllers - how long they'll be around and how e
              • from their faq:

                Q: Do you have plans to support Novell NetWare, FreeBSD, MAC/OSX, or other OS's?

                A: The BC4000 Series SATA RAID Controllers currently support various Windows versions and several Linux versions. Support for other operating systems is planned but not yet scheduled. Any information you wish to provide regarding other operating system needs, the criticality and the timeframe, would be welcome.

                BEEP! wrong answer. no bsd drivers.

                3ware has 'em.

                I have no idea why there's no bsd driver
        • Interesting... you get none of the redundancy or performance benefits of, say, RAID-5, but growability seems to be your only real concern right now.

          You might consider RAID-5 for the physical disks in the data pack with concat on top of that (if it's possible). Then you have hot-swappable redundant data storage that you can grow as you please (albeit 3+ disks at a time). Still hybrid, but trading some space for better availability. Done this way (if it's even possible), dropping a physical disk won't kill your

          • right, I could layer this logical topology over some other abstract phys topo. raid 5 or any other.

            for storage that DOES grow (my media/music collection) growability is prime. rebuilding is NOT an option just to add more logical space.

            it seemed that going vinum (or gvinum for bsd5) is the right logical layering.

            and it's nice to isolate dealing with failed spindles from dealing with growing space. my space problem is solved by vinum. the spindle problem is solved by having backups via friends. no, its
    • Did you try gvinum? I gather the original vinum's been getting rusty in 5, and has been removed entirely in favour of gvinum in HEAD at least.

      gmirror and gstripe are also well worth looking at.
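
      Roughly (a sketch based on the gmirror/gstripe man pages; device names are invented): mirror the drives in pairs, then stripe across the mirrors, which gets you the RAID 1+0 layout discussed elsewhere in this thread entirely in software:

        # load the GEOM modules first if they're not compiled in
        kldload geom_mirror geom_stripe
        gmirror label -v gm0 /dev/ad4 /dev/ad6
        gmirror label -v gm1 /dev/ad5 /dev/ad7
        gstripe label -v st0 /dev/mirror/gm0 /dev/mirror/gm1
        newfs /dev/stripe/st0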
  • both are 7200 rpm drives and are about a year old... should I worry?
    • for IMPORTANT stuff, I buy a new drive each year.

      not only is it faster (and lately, quieter), but you don't have to worry about spindle locking (seizing) or crashes, and you can rest easier.

      rule: if you can re-download it, don't worry.
      if you write it yourself, BACK IT UP on multiple brands/types of drives.
      • for IMPORTANT stuff, I buy a new drive each year.

        And for an additional $30, you can turn the old drives into a portable USB drive. Each month, grab one and copy all of your important stuff to it. Then, tote it over to a friend's house for safekeeping. Then, even if your house burns down, you still have all of your old data, not more than a month out of date, for only an additional $30.

        For best results, use two separate USB drives, and do not have both in your house at the same time. It would suck i

        • while you're at it, get a proper usb2 AND firewire enclosure.

          fw still seems more robust and reliable. not sure why, but it is.

          oxford 911 chipset is your friend.

          then again, serial-ata is becoming an EXTERNAL standard and so that's even better than fw or usb2.
  • I've used 3ware's cards in hard-core production machines for years, and I just can't say enough nice things about them. Even bashing them with bonnie (disk throughput benchmark), they give better performance than Dell's PERC SCSI/RAID controllers. Now that I've had a need to use them at home, I'm still ecstatic with them. Smartd has had the capability to monitor drives on these controllers for a *long* time, and 3ware's own monitoring software (although a little clunky in a couple places) offers a bunch
    • NOTE- I have 3ware cards (7000 and 8000 series) on my freebsd box (/dev/twed0).

      works great - BUT - one big big catch. no online extends! ;(

      I had 2 drives in concat raid (striped) and wanted to add more storage. NO WAY TO DO IT unless you tear it down, break it down and totally rebuild. which means I have to move that data OFF the pack and then reformat the new pack (new drives added) then copy back.

      sucks!!!

      for a hardware card that costs a lot - it sucks in terms of online extends.

      for mirroring, its
      • 9500S supports OCE.
        • I called tech support just a few weeks ago, in fact, asking if any of their controllers, now or in the near future, will support on demand expansion.

          they said no, but it is being worked on.

          JUST being worked on NOW?? wow...

          my ancient mylex card (long format pci, scsi) could extend, I'm pretty sure.

          but things are not as advanced in ide-raid, it appears. even from the 'big guys'.
  • by Anonymous Coward

    PCPowerCooling.com sells an overheating alarm for $10. I put it in all the systems I build.

    Alarm Available Here [pcpowercooling.com]

    • According to the ad copy [pcpowercooling.com] it'll even work in your 486DX system!

      I just found it funny to see the picture demonstrating the device in a computer that needs no special CPU cooling.

      -Adam
      • I dunno, I left my 486sx33 by the window once and the sun got it. Nothing was damaged, but it wouldn't boot up for hours, until it cooled down.

        The original poster should just spend the $12k and get an Apple XServe.
    • if you need measurable readings, you could always buy a DVM that has temperature probe ability - and a serial port (a lot of voltmeters have that these days) and then talk to the DVM via the serial port and poll it for temp.

      not all that hard, really. I bet a dvm to do that might be $50 or less.

      if you have an older mobo: old Asus boards (before the built-in temp probes for CPUs) had these 2-pin headers where you could plug in a thermistor and fit it near the CPU to get the reading. that reading could be
  • Avoid RAID5 (Score:3, Interesting)

    by photon317 ( 208409 ) on Wednesday December 22, 2004 @10:48AM (#11158871)

    There is really only one good reason to ever use RAID5, and that is that you're too tight on money to be able to afford to RAID1 (Mirror) the storage you need (If you need 400G of space, RAID1 is gonna cost you 800G of storage, whereas RAID5 might only cost you 500G of storage). RAID1 is both faster (For writes and especially reads) and more resilient than RAID5. Assuming you can afford it (and storage itself is pretty cheap today, especially if you don't get a fancy RAID5 controller), just go with RAID1.

    If you want really nice performance and you're buying 4+ drives, do RAID1+0 - mirror the drives up in pairs (where the pairs are as diverse as your setup allows: separate controllers and/or chassis and/or power, etc...), then stripe the data volume on top of the sets of mirror-pairs.
    • raid 5 isn't fast, usually.

      but sometimes you HAVE little choice. if you have massive storage, you can't very easily full mirror it.

      what if your case is already filled with drives? what if your power supply can't handle 2x the drives?

      raid 5 is fine. slower, but not a bad setup if your controller does xor in hw.
      • Re:Avoid RAID5 (Score:1, Insightful)

        by Anonymous Coward
        raid 5 is fine. slower, but not a bad setup if your controller does xor in hw.

        Until you have two drives fail. Then, you're fucked. Don't act like that never happens, because it does. I've had it happen, as I'm sure others have. No more RAID5 for me. . .
        • Re:Avoid RAID5 (Score:3, Informative)

          true.

          no array is ever completely fault tolerant.

          you STILL need backups.

          but raid helps get you by during the 3am disk failure when you don't want to drive 50 miles to replace a failed disk.

          in the AM, when you get to work, THEN you replace it.

          raid is not a substitute for backups. but it helps get you thru the single-spindle failures.

          it's better than NOT having it.
          • Re:Avoid RAID5 (Score:3, Insightful)

            by hackstraw ( 262471 ) *
            no array is ever completely fault tolerant.

            you STILL need backups.


            I'm not sure what the target use is, but it seems like it's personal, and given that the previous external drive was only USB, performance does not seem to be a concern.

            With that in mind, I would suggest poor man's RAID1 over real RAID1. By that I mean buy two disks and cron an rsync command every night. This would take care of backups and redundancy, although it's not realtime, so a disk failure after a disk write but before the rsync wou
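
            A sketch of that (paths are placeholders; --delete makes the copy an exact mirror, so be sure the source side is the one you trust):

              # root's crontab: mirror the data disk onto the second disk nightly at 03:00
              0 3 * * * rsync -a --delete /data/ /backup/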
        • PP - raid 5 isn't fast, usually.
          Depends on the controller. If it supports parallel reads/writes it can be the fastest of all RAID configurations. (Otherwise it's the slowest :).

          P - Until you have two drives fail.
          Well duh! That's true of any RAID solution. And while it does happen occasionally, I can say that it's pretty goddamn rare! I've been consulting for over ten years and out of many thousands of RAID configurations across numerous controllers/drives/etc I have seen exactly one case of honest-t
          • "P - Until you have two drives fail.
            Well duh! That's true of any RAID solution."

            Really? I kinda hope that if I have two drives fail on my 3 drive RAID-1, the array's not going to mysteriously disappear.. and I'd hope my RAID-10's can survive at least some combinations of multiple drive failure :)
            • Me - "Well duh! That's true of any RAID solution."
              Fweeky - "Really?"


              True enough, maybe I should have been a little less emphatic, eh? :)

              Out of curiosity (not argument), since there are no "standard" RAID definitions above RAID-5, which particular manufacturer's RAID-10 were you referring to?
              • Hmm? How is layering RAID-0 on top of RAID-1 not "standard"? We're using Adaptec 2210S' if that makes any difference.. if it's not layering basic RAID levels like it says it is wtf is it doing? ;)
          • Depends on the controller. If it supports parallel reads/writes it can be the fastest of all RAID configurations. (Otherwise it's the slowest :).

            RAID5 is always going to be slowest of the "common" RAIDs (0,1,5) for disk writes (discounting seriously broken hardware and/or software) simply by virtue of the way it works.

            RAID5 reads should be as fast as anything else.

        • Until you have two drives fail. Then, you're fucked. Don't act like that never happens, because it does. I've had it happen, as I'm sure others have. No more RAID5 for me. . .

          That's what hot spares are for. Even if you aren't monitoring your arrays.. you are, aren't you? One global hot spare per enclosure and you need 3 failed drives before you lose data.

      • raid 5 is fine. slower, but not a bad setup if your controller does xor in hw.

        The RAID5 slowdown has nothing to do with parity calculations (not these days, anyway).

    • There is really only one good reason to ever use RAID5, and that is that you're too tight on money to be able to afford to RAID1 (Mirror) the storage you need (If you need 400G of space, RAID1 is gonna cost you 800G of storage, whereas RAID5 might only cost you 500G of storage).

      Well, if you're going to be accessing the data in such a way that RAID5's dismal write performance isn't an issue (eg: 90% reads, over a 100Mb network, etc) then RAID5 is by _far_ the better bang/$ solution.

    • RAID1 is both faster (For writes and especially reads) and more resilient than RAID5.

      If you've a proper controller and a sensible number of drives, RAID 5 is by far the fastest in reading. Try getting a sustained 400MB/s out of RAID 1 and get back to me.
    • There is a second good reason: you need multiple terabytes of storage and don't want to have double that many drives lying around.
  • depending upon your data situation, how about RAID 1+0 or RAID 0+1 rather than RAID 5? RAID 5 will bite you in the ass, sooner or later.

    I've got the teeth marks to prove it.
    • ALWAYS put RAID 1 on the lower level and then use RAID 0 to bunch these two devices together. Reason being if you do that your array has a 2/3 chance of surviving two HDD crashes at the same time.

      For a lot of data this seems like overkill, IMHO. There is a difference between data you can recreate but don't want to lose (ripped CDs etc.) and data you can't recreate (documents, photos). Depending on your storage needs it may be overkill to put it all on the same type of array.
      • Uh I don't see the difference in redundancy and performance.

        Between

        AB (AB)' (striped first)
        which is the same as
        AB A'B' (striped first)

        vs
        A A' B B' (mirrored first)

        (' = mirror)

        Whether you have RAID0 first or RAID1 first. As long as two As or two Bs don't fail you're OK.

        In practice I suppose it's better to have the mirrors in physically different bay areas and attached to different controllers (some RAID controllers seem to have a nasty habit of failing before the drives fail).
        • I find your notation a bit hard to grasp. I tried to invent my own but it seems like drawing pictures of it just makes it harder than just text. (I probably just haven't been clever enough to think of a picture though.)

          If you first stripe and then mirror and one drive fails you have one partially failed subarray and one fully working subarray. If one of the drives in the fully working subarray fail the entire array fails as that subarray has the only working copy of the array. If the still working drive in
        • by adb ( 31105 ) on Wednesday December 22, 2004 @03:38PM (#11162082)

          Here's a clear and concise explanation, with pictures. [ofb.net]

          With a striped pair of mirrors, a total failure happens only if both drives in one of the mirrors fail; there are two ways this can happen.

          With a mirrored pair of stripes, a total failure happens whenever any two drives in different stripes fail; there are four ways this can happen.

          In both cases, there are (4 choose 2) = 6 pairs of drives that can fail. Given that two drives have failed, there's a 2/6 ≈ 33% chance that the RAID 1+0 will fail, but a 4/6 ≈ 67% chance that the RAID 0+1 will fail.

        • Uh I don't see the difference in redundancy and performance.

          If you stripe then mirror, then losing a single drive makes your entire array non-redundant, effectively turning it into a RAID0 (and greatly increases your rebuild time when you replace the failed disk as half of the entire array needs to be rebuilt). To lose the entire array in this scenario you only need to lose a single disk from the other side of the mirror.

          OTOH, if you mirror then stripe, then losing a single disk only affects a fraction

  • Supermicro makes the SATA drive cages I use; they have an alarm that goes off if the fan quits or if they overheat, and it's loud enough that you'll do something about it just to shut it up. Take a look at http://www.supermicro.com/products/accessories/mobilerack/CSE-M35S.cfm [supermicro.com]

    I got mine from http://www.newegg.com/ [newegg.com] for around $150 once you get shipping and tax involved, and they work well.

  • Get a good-sized case with enough slots for fans, a S-ATA RAID controller (from the Highpoint 1820 at ~180 Euro to the 3ware Escalade 9500 at ~550 Euro) and a decent fan/temp controller which is supported by your OS or ships with drivers for your OS.

    Complete this with a board, CPU and all the rest fitting your needs and you'll have the best S-ATA fileserver you can get for your money! :-)

    I'm just doing the same for myself, except for the fan/temp controller.
  • Isn't the saying 'close the stable door after the horse has bolted'?

    Horses live in stables, not barns AFAIK, so it would make more sense.

  • Most computer fans have the decency to make a racket for a while before their bearings are so damaged that the fan stops, or doesn't spin fast enough to keep its component cool. I actually use the USB/1394 box I reviewed here [dansdata.com], and its fan started crapping out after not a whole lot of months of service. A squirt of oil [dansdata.com] silenced it, and it's been fine for quite a while now.

    Once you've ignored one noisy fan bearing and lost hardware and/or system reliability as a result, you become pretty good at picking the

  • http://www.mini-itx.com/projects/tera-itx/ [mini-itx.com] A Terabyte capable server for under $300 plus the cost of drives.

    Just need to get an old drive enclosure somewhere (eBay?)

    It could even run the drive health scripts itself....

  • by bhima ( 46039 ) <Bhima,Pandava&gmail,com> on Wednesday December 22, 2004 @02:53PM (#11161670) Journal
    Let me summarize all of these comments: No we Don't Know, Not Really.

    Which really disappoints me as I will soon run out of room in my PowerMac with my extra drive bracket.

    Is it just me or is Slashdot becoming exponentially more useless?

    • I have just found this... http://www.engadget.com/entry/1234000570024663/ [engadget.com]
    • bhima,

      Check out macgurus.com [macgurus.com]. They have some good solutions that seem pretty inexpensive.

      • I have seen this...

        Plus: It's in my price range, It's in my size range.

        Minus: That whole external wiring thing (I have one of those water-cooled PowerMacs and they don't like having holes in them), I don't think they ship to the EU, and it also seems like a hack

        But hey what can I expect for the money! So I'll probably wind up with a variation of this

        • I have a few of these on Windows desktops, which I modified as described here [slashdot.org]. It is a hack, but then, if you don't want external FireWire, fibre, USB, or SCSI, there's not really any other choice. I haven't found anything cheaper than this, and while it certainly would be an aesthetically non-pleasing solution, it is a solution that seems to fit what the OP requested.

          Meanwhile, one click at the macgurus website gave me this:

          "International orders are shipped Fedex International Priority or UPS Worldwi

  • rackmountpro! They are the best place to get server stuff, and they treat their customers well. I have purchased a decent amount of stuff from them over the years, and it's been a joy.
    here is the exact link for what you want:

    http://www.rackmountpro.com/productpage.php?prodid=2135

  • There are fans that have embedded Hall effect sensors that generate a pulse once per revolution. I've used them on embedded systems. Wire the sensor leads to a parallel port and write some software for a task that counts the pulses from the sensor and calculates the fan's RPM. It can generate an alarm if the fan slows down or stops.
  • I am running a Promise FastTrak 100 TX2 Pro; the setup includes the RAID 0/1/JBOD card and 2 SuperSwap 1000 enclosures. The card is PATA, and the SuperSwap enclosures allow you to hot-swap the drives. Link to Card [promise.com]

    I also have the RAID on the motherboard, but it does not support hot swap.

    Running FreeBSD, I have Samba, NFS, and Appletalk, BIND9, Apache (for some testing), Postfix (for relaying the mail), and other goodies.
  • We have recently purchased some hardware like this to expand both network-attached storage and desktop solutions. We have a GIS department that regularly fills up 400GB drives and was not well backed up, so we needed to get them a terabyte or so of raw storage (500GB mirrored) for their desktops, and multiple terabytes on the network for archiving.

    We ended up with a server like these rackmounts [ioncomputer.com], with 24 hot-swap drive bays, a Windows 2000 server license, 4 hot-swap power supplies (3 live, one redundant), tw

  • http://www.accusys.com.tw/Acuta/Acuta_web.html

    USB 2.0 and FireWire or external SATA connection for 4 drives in RAID 0, 1, 1+0 or 5 setups.
