Data Storage

Building a Budget Storage Server (433 comments)

An anonymous reader noted an article running over at FiringSquad about building a budget storage server. It covers cooling, power, RAID, expandability, etc. A good overview-type article with practical application.
  • Comment removed (Score:3, Informative)

    by account_deleted ( 4530225 ) * on Tuesday November 11, 2003 @01:20PM (#7445442)
    Comment removed based on user account deletion
    • Re:a tip (Score:3, Interesting)

      by supersmike ( 563905 )
      Interesting tip. I didn't know that. At the same time, 3Ware recommends using identical drives if you want maximum performance for reads on RAID 1.
      • Re:a tip (Score:3, Informative)

        by kfg ( 145172 )
        Performance and data security are often at odds. So much so and so often that I think we can nearly take it as a given that increasing one decreases the other. It is the nature of the beast.

        One must ask one's self, "How fast do I want my corrupted data delivered?"

        KFG
    • Re:a tip (Score:5, Informative)

      by pmz ( 462998 ) on Tuesday November 11, 2003 @01:38PM (#7445658) Homepage
      You don't want your complete RAID array failing because too many drives fail due to a common problem in their hardware/firmware.

      Also, you don't want failures caused by unmatched drives interacting in unpredictable ways.

      If there were truly a statistical benefit to mixing drives like you say, I would have thought the analysts at Sun, EMC, and IBM would have adopted this strategy by now. Or have they?

      Why is it that Sun's drive model numbers are also specific to a firmware revision? Why are arrays sold with matched drives, and why are patches offered to upgrade firmwares to known revisions?

      How is it even possible to integration-test sets of unmatched drives and have any notion of the long-term MTBF of drives with firmwares that have never met before?
    • Re:a tip (Score:3, Insightful)

      by bconway ( 63464 )
      This is extremely poor advice to give, and I hope no one takes your word on this and jumps in the deep end. There are a host of unforeseen problems that can arise from using unmatched drives. This is not what the array manufacturers designed for, and they even warn against it in their documentation.
      • Re:a tip (Score:3, Insightful)

        by Macgruder ( 127971 )
        Man, is it the day for everyone to switch off their brains?

        He said different batches and different vendors. Not different models.

        Use the same model all around, but buy them from different vendors (such as CDW, NewEgg, etc.). That way the chances of a batch failure are minimized.
      • Re:a tip (Score:5, Informative)

        by karnal ( 22275 ) on Tuesday November 11, 2003 @02:24PM (#7446197)
        I can actually back you up on this with real world experience.

        Just for grins (since my older motherboard supported it), I had a 7200rpm 30GB Maxtor. Thought, hmmm, I can do RAID 0 and get better performance.

        Bought a 7200rpm Seagate -- performance dropped through the floor. Why? Well, depending on where the data was, the Seagate would have to reposition its head while the Maxtor was still reading the same track....

        Finally bought a similar maxtor to replace the seagate, and my performance did increase. Not by any amazing amount above the norm, but it wasn't dog slow anymore on reads and writes.
    • Re:a tip (Score:5, Insightful)

      by k12linux ( 627320 ) on Tuesday November 11, 2003 @02:34PM (#7446291)
      While we're handing out tips, here is one I learned the hard way: create your RAID partitions at the low side of the "specified" drive capacity. In other words, if your new 180GB drives actually have 180.5GB available, DON'T use the extra .5GB!

      I had to replace a failed 180.4GB drive on a 1TB server and the replacement was exactly 180GB. I had to back up 400+GB of data, re-create the RAID array with 180GB partitions, and then restore. If you think backing up 60GB is slow... ha!

      Unfortunately, the 3ware utilities don't seem to allow you to specify the partition size; they just use the whole drive. Mixing one 180GB drive in with the 180.4GB drives made it use 180GB for all of them. That isn't very practical when you are creating a RAID array on a batch of brand-new drives. (You'd have to find one slightly smaller drive.)
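      To put a number on this tip, here's a minimal sketch of the sizing arithmetic (a hypothetical Python helper, not part of any 3ware utility): round each RAID member down to the drive's nominal decimal capacity so that any same-label replacement will fit.

        def raid_member_sectors(nominal_gb, sector_bytes=512, margin=0.001):
            """Sectors to use for a RAID member of a drive sold as nominal_gb GB.

            Drives are marketed in decimal gigabytes (10^9 bytes) but often
            ship with a little extra usable space; a replacement may ship with
            less. Rounding down to the nominal size minus a small margin means
            any same-label replacement should be big enough.
            """
            usable_bytes = nominal_gb * 10**9 * (1 - margin)
            return int(usable_bytes // sector_bytes)

        # A drive sold as "180GB": use this many sectors, ignore any bonus space.
        print(raid_member_sectors(180))  # 351210937 sectors, ~179.8 decimal GB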

  • Whoa... (Score:5, Funny)

    by eurleif ( 613257 ) on Tuesday November 11, 2003 @01:22PM (#7445477)
    Their definition of "budget" is $3,140? Someone give me their budget right now!
    • he's right... (Score:5, Insightful)

      by snooo53 ( 663796 ) on Tuesday November 11, 2003 @01:37PM (#7445644) Journal
      For a large business, $3000 may be dirt cheap.... for the rest of us it is WAY too expensive. I could either build a kick-ass entertainment center for $3000 or their "budget" server.... I'll give you one guess to figure out which one I'd choose.

      I've learned to be very skeptical of any of these articles on "budget" this or that, because they rarely are. To me, a budget server means less than $500. How about an article on how to build and configure a home network server for that price?

      • by billstewart ( 78916 ) on Tuesday November 11, 2003 @04:24PM (#7447374) Journal
        They spent about $1000 of that cost on disks, and were too cheap to spend an extra $250 for RAID, but they spent $100+60 on a really cool keyboard and mouse and $100 for a really cute front-panel display.

        They spent $300 for a Pentium 4 and $200 for a high-end motherboard and $350 for the fastest, most expensive memory they could find, when a "budget server" could do just fine with a ~$100-150 2GHz CPU+motherboard and $200 for 1GB of average-speed memory. (Their motherboard does sound good, though.) After all, the bottleneck here is the disk drives and network, not the CPU, though even on a budget server it's probably worth having the 1GB of RAM for caching and for staging CD or DVD burns.

        The $190 power supply seems expensive, but that may be realistic for a system that can expand to 8 drives. If you've got a UPS, you may not need as high-end a power supply, and a "budget" system might get away without it, but since they were too cheap to buy a 5th drive for RAID they're probably much more in need of highly reliable power. And their 3GHz P4 CPU and overpowered-for-a-server video card use too much power and put out too much heat -- you can easily save 50-75 watts by making better choices, and probably 100. You could save even more by using a motherboard with built-in 2D video, but most of those don't have the high-performance networking support yet.

        Also, they didn't have a price for an operating system :-). That means that they're planning to use Linux, which is another reason not to waste power or cooling or money on a gamerz video card...
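        For a sense of where the watts go, here's a rough budget in Python (every per-component figure below is my own ballpark for 2003-era parts, not a number from the article):

          # Illustrative per-component draws -- ballpark guesses, not measurements.
          draw_watts = {
              "3GHz P4 CPU": 80,
              "gamer video card": 70,      # vs. ~10W for a basic 2D card
              "motherboard + 1GB RAM": 50,
              "five disk drives": 60,      # ~12W each while spinning
              "fans and misc": 20,
          }

          total = sum(draw_watts.values())
          print("estimated draw: %d W" % total)             # ~280 W
          print("headroom on a 450W supply: %d W" % (450 - total))

        Swapping the gamer card for basic 2D and picking a cooler CPU is exactly where the 50-100 watt savings would come from.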

  • Self? (Score:4, Funny)

    by jargoone ( 166102 ) on Tuesday November 11, 2003 @01:22PM (#7445480)
    CDs may self-destruct at sustained speeds of greater than 56x

    The author (or the person who wrote the sidebar comment) needs to learn the meaning of self-destruct...
    • Of course, most people aren't quite that anal and know exactly what he was saying.
    • Re:Self? (Score:5, Informative)

      by swordboy ( 472941 ) on Tuesday November 11, 2003 @02:02PM (#7445934) Journal
      As a side note, engineers *never* use the term "self-destruct" in a technical report. The same goes for "explode" and other synonyms. The correct term is:

      "Spontaneously Disassemble"

      If you're laughing, make note that I am being completely serious. I've seen this term on paper too many times.
  • Last time I checked (Score:5, Interesting)

    by bigjnsa500 ( 575392 ) <bigjnsa500@yPERIODahoo.com minus punct> on Tuesday November 11, 2003 @01:24PM (#7445508) Homepage Journal
    Last time I checked, motherboards only come with 2 IDE channels. According to the article, they are using four 250GB IDE drives in 'standard' configuration (i.e., no RAID or SATA) plus a system drive. Uh, 4+1 doesn't add up to 4.

    Since they couldn't afford RAID, what about software RAID? Way faster than normal IDE operations.

    • by mindstrm ( 20013 )
      Lots of motherboards come with 4 IDE channels now, and onboard IDE raid. Very common, not expensive.

    • by Space cowboy ( 13680 ) on Tuesday November 11, 2003 @01:30PM (#7445586) Journal
      I've never trusted software RAID on a multi-CPU system (i.e., one of our typical servers). I've had the RAID screw up far too often for coincidence. I'm pretty sure there's a race condition in there somewhere. I've had a server run for 5 months with no problems, then suddenly I get an SMS that a node is down. Bike over to the co-lo, and the filesystem's completely screwed... Never again. Hardware RAID all the time :-)

      Simon.
      • And hardware RAID isn't that expensive, especially IDE raid. Under $400 to eliminate all the worrying and guesswork is money well spent.

        Plus, I'm not aware of any software RAID solutions you can install the OS itself onto; it's always just the data drives.
      • by Zapman ( 2662 )
        In principle, I agree with you. HW Raid (if you can get/afford it) is the way to go. Dead drive? Yank it, and pop in another. Automatic resync. No down time.

        SW RAID, however, isn't (always) as bad as you make it out to be, at least under Solaris. Their DiskSuite product has been very reliable here. Every server has it, and we've yet to see any real problems with it. Great for mirroring the OS drives.

        Before version 4, disksuite had some teething problems, but the 4.2.1 release rocks.
        • I should qualify what I said -- I've not used it under Solaris. My bad experience was with Linux, and that was about a year ago, the last time I "was tried" by it.

          I would also say that the Irix xlv system is fantastically reliable - there's a 20 Terabyte "disk" at one of our clients (Post-houses have a lot of disk for all those film-frames :-) which has never had a problem...

          Hardware RAID is surprisingly cheap for commodity PCs. Certainly it's worth the peace of mind for me :-)

          Simon.
      • by antifun ( 648481 ) on Tuesday November 11, 2003 @02:48PM (#7446422)

        Been using Linux SW raid in the 2.4 kernel series for a year+ now and it has worked like a champ, with both IDE and SCSI devices. All disk servers were SMP (overkill but management wanted it that way). Dunno what you screwed up.

        If your criteria for an adequate disk server include either (a) high performance or (b) long-term maintainability, then you should choose SW raid.

        Most HW raid systems, especially cheapo PCI cards, but even expensive Fibre Channel-SCSI3 rackmount monsters, offer either extremely primitive performance metrics or none at all. With SW raid, you normally get the full performance-monitoring and tuning capabilities of the host OS. Big win. You will also get better performance from a SW raid, given the same drive layout, and as long as you do not use the box for anything else at the same time. It should be obvious but some people don't believe this.

        The other big win is more important when you spend more money than $3000 (a pittance in this market): there's no hardware manufacturer to get bought, go out of business, or change product lines. No multi-thousand-dollar support contract or custom software to configure the RAID or any of that other crap. Trust me, when your dedicated RAID box's motherboard flakes on you and you discover the manufacturer has gone out of business, you'll be cursing yourself for choosing HW raid every time you search Ebay for a replacement part.

        Not to mention that commodity, general-purpose HW is always cheaper to replace, and its performance/price ratio grows much faster than special-purpose HW. The HW RAID system with the 200MHz i960 and 64MB RAM might have looked great in 2000, but now you're stuck with the proprietary on-disk format of an out-of-business vendor with no way out except to build a new system of the same capacity and copy everything over. (In the case of large data warehouses, "full backups" don't usually happen.)

        HW raid was compelling in the past. Now, with commodity hardware so cheap, and open, stable SW raid systems floating around, you'd be a fool not to prefer them in many situations. If you want a fire-and-forget dedicated box, go for it. But be ready for the "forget" part in a year or two.

      • Never again. Hardware RAID all the time :-)

        Riiiiight, because that hardware RAID doesn't have any of that untrustworthy software in it. No bugs there. Move along.
    • -1 didn't RTFA (Score:3, Informative)

      by TubeSteak ( 669689 )
      4+1 = 5 (and add in the DVD+/-R/RW for a total of 6)

      ATA-133 controllers
      We added a PCI Promise ATA-133 controller so we can run our four Maxlines as all master drives. This will improve simultaneous access performance and allows for an easy upgrade to eight storage drives.

      Cost: $30

      Seagate 120GB Barracuda 7200.7 SATA - 78%
      A good hard drive but nothing to brag about, or at least nobody will care if you brag about it. It is a little noisy during operation.

      Cost: $110

      Maxtor Maxline II Plus 250GB PATA x

    • by Lumpy ( 12016 ) on Tuesday November 11, 2003 @03:58PM (#7447096) Homepage
      And the other funny part is that they say reliability is important, yet dismiss SCSI right away.

      Please find me IDE drives with a 5-year warranty. Or server-class IDE drives.

      I can't. I tried. And I decided that for our "cheap" server we use U160 drives off a 29160 SCSI card and software RAID 5 on a Linux box running Samba.

      I came in under their price. Yes, I have less storage, but I know that my drives will still be spinning and running happily in 2007; you can't say that for today's IDE drives. I also added a DLT7 drive (anyone who specs a server WITHOUT a backup solution is a hack) to back things up daily.

      IDE is great for consumer class stuff. I would NEVER EVER trust critical business data to any server running IDE drives and without a good backup system like a DLT drive.
  • by tmark ( 230091 ) on Tuesday November 11, 2003 @01:24PM (#7445509)
    In an article about building your own storage server, why are they spending so much time talking about irrelevant things like the *video card's 3D performance* (128MB in a storage server??) and mouse and keyboard choice, yet fail to even so much as mention (as far as I could tell) OS choice or software?
    • by 23_Elders ( 147014 ) on Tuesday November 11, 2003 @01:46PM (#7445768)
      I agree, that seemed much more like, "Watch us build an expensive PC with a lot of hard disks" than "Watch us build something useful for reliable network storage."

      I am currently trying to put together a RAID 5 file server and they do not cover any topic of use to me in that article. For example, a practical backup solution? They chose a DVD burner; why that over similar tape solutions? I would guess price, but it would be nice if they at least mentioned some of their considerations. Especially since it would take 112 DVD-Rs to back up a terabyte?

      Also, aside from their DVD backups, they seem to have no data recovery plan in case a hard drive fails. I guess they aren't storing anything important on these drives?
      • by mindstrm ( 20013 ) on Tuesday November 11, 2003 @02:06PM (#7446003)
        I can't figure out why these guys think a DVD-R is a backup solution:
        a) Likely to fail
        b) Look how much time, and how many discs, it will take to back up 1TB.

        The realistic backup solution for stuff like this is: more stuff like this.

        Back up to a set of hard drives. Seriously. The cost/MB is still the cheapest out there, and it's more flexible, and heck, way faster than tape.
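        The arithmetic backs this up. A rough estimate (assuming 4.7GB single-layer DVD-R blanks and an optimistic 8 minutes per burn-and-swap cycle):

          import math

          data_gb = 1000            # one terabyte, in decimal GB
          disc_gb = 4.7             # single-layer DVD-R capacity
          minutes_per_disc = 8      # optimistic: burn + verify + swap

          discs = math.ceil(data_gb / disc_gb)
          hours = discs * minutes_per_disc / 60.0
          print("%d discs, about %.0f hours at the burner" % (discs, hours))
          # -> 213 discs, about 28 hours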

      • by guacamolefoo ( 577448 ) on Tuesday November 11, 2003 @03:42PM (#7446932) Homepage Journal
        I agree, that seemed much more like, "Watch us build an expensive PC with a lot of hard disks" than "Watch us build something useful for reliable network storage."

        My solution to building a cheap storage system was the following:

        1. Buy old Netfinity 5000 on eBay.
        2. Order 5 x 9GB SCSI drives from my trusty IBM parts guy (csaunders at itexchange.com) for $70 each.
        3. Order basic RAID card for said box.
        4. Install RedHat 7.1 from a CD in a book under my couch.
        5. Install SAMBA
        6. Run a cron job to back up user data and relevant config files to an external USB hard drive attached to a Windows box on the LAN. (A sketch of such a job follows this list.)
        7. Take the external hard drive to a safe deposit box weekly. Get the second USB drive out of the safe deposit box and attach it to the Windows box at the office to await the next update. FWIW, I've been thinking about putting the USB drive that is in the office in a safe when the backup is not taking place. This is not for fear of fire or catastrophe -- I just don't want it to walk out the door.
        8. The Netfinity server has the RAID 5 array configured for a hot spare drive so that there is failover operation if a drive quits.
        9. Installed PowerChute software with a UPS to shutdown the box gracefully if power quits.
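        A minimal sketch of what the step 6 job might look like, in Python (paths are hypothetical -- in the setup above the USB drive actually hangs off a Windows box on the LAN, so the real job copies over the network):

          #!/usr/bin/env python
          """Nightly backup: tar user data and config files to a mounted drive.

          Illustrative only. Run from cron, e.g.:
              30 2 * * * /usr/local/bin/nightly_backup.py
          """
          import tarfile
          import time

          SOURCES = ["/home", "/etc/samba", "/etc/raidtab"]  # data + configs
          DEST = "/mnt/backup"                               # mounted target

          archive = "%s/backup-%s.tar.gz" % (DEST, time.strftime("%Y%m%d"))
          tar = tarfile.open(archive, "w:gz")
          for path in SOURCES:
              tar.add(path)                                  # recursive by default
          tar.close()
          print("wrote", archive)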

        External USB -- $100 each (2) = $200 (got enclosures and cheap-o spare IDE hard drives from scavenged boxen)
        SCSI Drives -- $70 each (5) = $350
        Netfinity box = $300
        UPS = $200 (I think)
        Redhat 7.1 on CD in book under sofa = priceless

        Total: $1,050.

        Project results:
        RAID-5 with regular offsite storage. Logical disk size is only 27 GB, but you can fatten this by using bigger SCSI drives. I didn't need mondo storage, so I saw no need to go with 36 GB drives, though you certainly could if you had more money.

        I am currently trying to put together a RAID 5 file server and they do not cover any topic of use to me in that article. For example, a practical backup solution?

        External USB drives worked for me. Depends on how heavy-duty you need and how your office works. Perhaps simply connecting up two servers in different offices and doing mutual backups nightly for changed files might suffice. DVDs and CDs are an option, and tape is still useful.

        Also, aside from their DVD backups, they seem to have no data recovery plan in case a hard drive fails. I guess they aren't storing anything important on these drives?

        My data recovery plan (if everything pukes) is to buy a new chassis and drives, reinstall RH 7.1, connect it to the LAN, and download the old config files and user data. I think it would take a couple of days (mostly waiting on delivery of the drives and box). That time could be slashed if I were truly paranoid and simply kept spare parts off-site. I'm just not that worried, however.

        FWIW, our office is a small lawyer's office with about 10 people on our LAN. The data we need to store is not huge.

        GF.
    • There seem to be many similar problems in this article.
      • A file server would be better off without floppy or DVD drives. A CD-ROM drive would suffice and provide a small security benefit as well.
      • A second Ethernet port is definitely needed in a production machine such as this, but why connect it to the Internet? That seems unnecessary and possibly even dangerous, firewall or no.
      • His suggestions for memory and cpu are probably way too high. The gig of memory I guess I can understand, but even
    • They did mention OS choice briefly:

      This is also the reason Linux was not a good choice for our system -- it doesn't make sense to put XFS/ext3/ReiserFS drives into a USB2.0/Firewire external box.

      After skimming the article, I have some questions:
      • Why does it not make sense to put a journaling filesystem in an external box?
      • Why not just use ext2 if they don't want a journaling filesystem?
      • Do they mention their choice of OS anywhere else in the article?

      To me, this read more like an advertisement for

    • Yup.
      And they built a storage server without ECC memory. We had that -- by chance we noticed that a memory chip generated single-bit corruptions. We were busy for days comparing every file with older backups. And that was around 50GB of data, not 1TB.
    • by Jeff DeMaagd ( 2015 ) on Tuesday November 11, 2003 @01:53PM (#7445847) Homepage Journal
      You are right. They are probably unbalanced gaming twits. Ignore them.

      Few to no real servers actually need 3D, and 8MB is often judged plenty if you look at the boards designed for server use.

      The only exception is if these people are making their everyday system into a server, which may not be advisable at all.
    • by kiwimate ( 458274 ) on Tuesday November 11, 2003 @02:46PM (#7446413) Journal
      In an article about building your own storage server, why are they spending so much time talking about irrelevant things like the *video card's 3D performance* (128MB in a storage server??) and mouse and keyboard choice, yet fail to even so much as mention (as far as I could tell) OS choice or software?

      Look at the banner on the pages -- "Home of the Hardcore Gamer". It's because they're gamers and know everything about tuning a system for games, but don't know the first thing about building a server. What they've ended up with is a mish-mash that won't serve any particular purpose well, except possibly as a rather decent PC for a secretary (except no secretary would want something that big at her desk).

      As one reads through the article, what leaps out is that they're most comfortable when debating relative merits of 3D video cards and building uber-fancy custom machines designed for gaming excellence. Good for them, but this is far removed from building a server.

      It's got a terabyte of utterly unsafe storage. No RAID, no nothing.

      It's got a video card which is overkill for a server but which they disdain as a low-end 3D graphics card.

      They've got one hard drive for the system and everything else as data, so they're not building a "high performance" system or else they'd have a separate drive for paging.

      They haven't discussed the types of files they'll be storing at all -- will they be tiny text files, medium sized spreadsheets and documents, or massively large presentations and CAD files? This affects how you configure your system.

      Their approach to planning for hardware failure is "we bought the better quality stuff so we don't have to worry so much about MTBF". No need for RAID or redundant power supplies. (Although oddly enough they've chucked in two NICs.)

      Did I mention no RAID? Yet they've bought a 3D graphics card (overkill), a nice mouse (in case they want to do graphics editing or perform fast wrist actions on their storage server), a wireless keyboard, and a fun little LED display to tell them how fast the CPU fan is spinning.

      Look at how they're future-proofing the system, by the way. They anticipate going through 2 TB of data every year. So every six months they're going to pull out the existing 1 TB of storage, plop it into an external array, and put in a new set of disks. I wonder how long this system is supposed to last...

      All in all a very odd system indeed. In fact, a pseudo-server built by gamers with no understanding of how to build a server.

  • by rf0 ( 159958 ) <rghf@fsck.me.uk> on Tuesday November 11, 2003 @01:26PM (#7445526) Homepage
    Finally a place I can store all my p0rn/warez/dvds. But seriously, why did they put a 3D graphics card in a server? Surely any cheap AGP card, even without 3D, will do. Some basic ATIs are just $20.

    Rus
    • Interesting? (Score:2, Informative)

      by jargoone ( 166102 )
      Somebody didn't RTFA...

      At the same time, we wanted this server to act as a workstation with as much capability as the other systems attached to the storage server.

  • by digitalgimpus ( 468277 ) on Tuesday November 11, 2003 @01:26PM (#7445529) Homepage
    I took a Beige G3/266MHz that I got for $50... put in a 120GB WD drive, an ACARD IDE controller, and Mac OS X.

    An extra fan, to keep it nice and cool, and a 10/100 NIC.

    Runs rather well. Smooth, reliable, and fast. For a very low cost. Mac OS X 10.2 comes with AppleShare for Macs and Samba for Windows file sharing. Apache for a web server, and PHP, Perl... MySQL.

    You've got whatever you really need.

    I added Webmin for remote control. Makes it a bit easier.

    • Interesting... May I ask what was the total cost after you added everything?
    • by The Tyro ( 247333 )
      but I admittedly went a little over the top...

      - Raid rack-mount server chassis (space for 8 drives)
      - 3ware RAID controller (great linux support)
      - multiple 120gb drives in RAID-5
      - dual-athlon MB, bunch of RAM
      - CrystalFontz LCD running LCD4Linux
      - Samba, Postfix, etc.

      It has enough extra horsepower that I can run a counterstrike server along with providing network services, primarily huge storage, for all my other machines. It's full of high-bitrate oggs (reripped everything; it took weeks, even using Grip's
      • 3ware RAID controller (great linux support)

        We have several 3ware RAID controllers, of multiple generations. On several occasions, they have dropped a drive and reported a degraded raidpack. The linux tools don't successfully fix it, so you have to go into the BIOS and start a rebuild. Slowly, the rebuild happens (dragging down the machine considerably while that goes) and then everything is fine for another month or so.

        So, the drive is OK, the card works 30 days out of the month, as do the cables.

        Has
    • OS X 10.1 users are still waiting for a patched SSH.

      While Apple includes server software in OS X, Apple is not excited about you actually making use of this software (they would rather that you buy OS X Server), so it will constantly be a thorn in your side.

      I've thought about OS X server applications, but...

      • I don't care for their support policies
      • Apple likes to use support to cut OS functionality (iTunes)
      • /etc/passwd is a rump, and the c library pulls from an Apple licensing daemon, which makes me un
  • Lame (Score:5, Informative)

    by sardonic2 ( 576701 ) on Tuesday November 11, 2003 @01:26PM (#7445533) Homepage
    The article is lame when it comes to the important stuff. It's great he gave us the hardware to do it, but that's not the important part, now is it? Software.... something that can do backups to hard drive and then take those backups and archive them on tape. We went with Tapeware [tapeware.com] because of price, but we cannot archive a current backup to tape, so that means we have 4 weeks online and no real archive (bad). Are there any open source solutions? I saw a couple but they look hard to set up and manage. Tapeware gives a powerful interface and makes it easy to back up multiple machines... plus Linux boxes don't need a special server license (unless they have a tape drive), whereas any Windows 2000 Server box needs a server license.
    • Re:Lame (Score:3, Informative)

      by JamesD_UK ( 721413 )
      I currently use Bacula [bacula.org] as my open source backup solution. Clients are available for Windows, Linux, and Unix, although I believe the server works best with Linux or Unix. It supports most hardware, including some tape robots (something that would be useful for 1TB of data!) and appears to be extremely flexible. It's done everything I've asked of it and more without complaint. Best of all, the support from the author via the project mailing list is second to none. The interface is through a console applicati
    • Re:Lame (Score:2, Informative)

      by syzygy_001 ( 671384 )
      If you're a Windows shop, take a look at Retrospect [dantz.com].

      Our Windows guy here has used it at the last few places he's worked, and it provides good backup and disaster recovery. You can burn a recovery CD which will reinstall the OS, then connect to the backup server and restore the box automatically to its last (or whichever you tell it to) backup.

  • by Space cowboy ( 13680 ) on Tuesday November 11, 2003 @01:28PM (#7445552) Journal
    I built a similar system for the web rack (disks are bulky, compared to 1U motherboards). Gave me 1.5 TB of SATA hardware RAID-5 in 2U. All the other machines boot off it - much better use of space :-)

    Simon
    • I've always been wary of network booting. How does it do speed-wise (for disk access and such) in a server environment (say, a web server)?
      • My attitude has always been that the bottleneck is where you have to pay attention. In our case, the bottleneck isn't the internal-to-the-rack (gigabit) network, it's the to-the-web network, which can be 100mbit, but rarely exceeds 30mbit. We can serve data internally far far faster than anyone outside can receive it over the web-network interface, so I have no qualms about separating the disk and the server.

        All the machines have a lot of RAM as well, which helps with the cacheable stuff :-)

        Simon.
  • Completely Stupid (Score:5, Insightful)

    by Anonymous Coward on Tuesday November 11, 2003 @01:29PM (#7445565)
    No RAID? Going to rely on the drive's MTBF? WTF. A RAID controller is like $80 MAX and one additional drive is like $250 or so. Spend the damn money. While you're at it, invest in a tape drive. Your data is more valuable than the drives.
    • Well, this is where I stopped reading:

      Another possibility was RAID 5, which allows 5 drives to act as 4 drives. An additional parity track is written on each drive, so if one fails, then the other drives can recover the lost data. This is available through software or hardware. This is a great solution if you do not plan to upgrade your maximum server capacity. When the time comes to replace a drive with a higher capacity drive, you will be forced to replace the entire array.

      Right. The thing reads more
  • Today's News: (Score:5, Interesting)

    by JamesD_UK ( 721413 ) on Tuesday November 11, 2003 @01:30PM (#7445579) Homepage

    ... it's possible to buy a large PC case and fill it with a number of drives that add up to a volume of storage that was considered large several years ago. What's new here?

    The article could have covered a little more than just the hardware needed to run such a setup, perhaps covering some sort of remote management interface for the storage. It would also have been nice to hear whether they solved the problem of backing up this data on a budget too (ignoring the possibility of burning the data to DVD).

  • Mini-itx (Score:5, Interesting)

    by herrvinny ( 698679 ) on Tuesday November 11, 2003 @01:30PM (#7445583)
    Wouldn't a mini-ITX system make more sense here? You're building a simple storage server; it doesn't need to be massively huge. A 533MHz processor (the low end with mini-ITX boards, I think) is plenty fast enough to run a simple storage server.

    Video card? Why on earth would you need a $70 video card for a storage server? He should have gotten a motherboard with integrated graphics, so even if he needed to attach a monitor, integrated graphics would be more than enough to handle anything. What is he building, a storage server or a full-fledged PC?
    • Re:Mini-itx (Score:2, Insightful)

      by wilper ( 103281 )
      Well, in the specs he said he wanted it to be a workstation AND a storage server. However, he didn't want to run Linux on it because of a lack of drivers (which I presume rules out the BSDs as well), so he'll probably end up running some form of Windows on it.

      So he basically built a Windows workstation with lots of disks. Guess the other users on the network will learn to hate the poor man who uses it, when he reboots after every change in the server's configuration, depriving them of access to the fil
    • Re:Mini-itx (Score:5, Insightful)

      by Zak3056 ( 69287 ) on Tuesday November 11, 2003 @02:45PM (#7446400) Journal
      He should have gotten a motherboard with integrated graphics, so even if he needed to attach a monitor, integrated graphics would be more than enough to handle anything.

      Because if he wasn't blowing $70 on a video card, and $160 on his keyboard and mouse, he wouldn't be able to complain about how RAID would blow the budget.

      His calculations for the power supply have SEVENTY WATTS budgeted for the video card, which, of course, forces him to spend $190 on the 450 watt power supply.

      His motherboard has dual gigabit LAN, because "an extra NIC is essential for a server." Note, he doesn't say WHY he needs that extra gigabit NIC (fault tolerance? Performance? It looks cool?) only that he considers it "essential."

      He has a hundred dollar add-on that "displays the latest stock-quotes and surf reports."

      I feel dumber for having read this article.

    • Re:Mini-itx (Score:3, Informative)

      by crucini ( 98210 )
      The mini-ITX spec only calls for 55W of power, which isn't enough for a bunch of disks. Of course you could use a mini-ITX mainboard with an ATX power supply in some custom or semi-custom case.

      As for the rest, I agree the author's goals are unclear.
    • I have a 533MHz Via Mini-ITX motherboard driving my file server. Here's what I built:
      MB: Via 533MHz Mini-ITX
      Video: Built into MB, crap, but who cares?
      NIC: 100 Base-TX built into motherboard
      RAM: 1x 512MB DIMM
      Storage:
      - 1x 20GB Maxtor hard drive for the OS
      - 2x Maxtor 120GB drives plugged into a Promise Ultra 66 PCI IDE controller, mirrored
      Case: Some old piece of crap mid-tower ATX case
      PSU: PC Power and Cooling 300W

      It's not uber-leet, but it gets the job done. The system also has a minimum of fans: on fo
  • stupid. (Score:4, Informative)

    by Polo ( 30659 ) * on Tuesday November 11, 2003 @01:31PM (#7445591) Homepage
    I don't think they used RAID. Drives aren't as reliable as they've been spec'd out to be.

    I guess if they have everything important backed up on DVD and/or their data wasn't worth much, it'd just be a hassle... But when the system fails you end up with a big panic: running out to buy a new drive, then trying to get everything back up and running again.

    I've built similar configurations and lost a drive (twice now!) and it's a big mess. At least with a separate system drive they eliminated one problem... if they lose the main drive they can reinstall and if they lose a data drive, they can at least reboot.

    I would recommend RAID -- at least RAID 5, which would give them 3/4 of a terabyte and fewer headaches.
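    For reference, the capacity math on four 250GB data drives (a quick Python sanity check, not from the article):

      def usable_gb(drives, size_gb, level):
          """Usable capacity for common RAID levels (whole-drive members)."""
          if level == 0:                  # striping: all capacity, no safety
              return drives * size_gb
          if level == 1:                  # mirrored pairs: half the capacity
              return drives * size_gb // 2
          if level == 5:                  # one drive's worth of parity
              return (drives - 1) * size_gb
          raise ValueError("unhandled RAID level")

      for level in (0, 1, 5):
          print("RAID %d: %d GB" % (level, usable_gb(4, 250, level)))
      # RAID 0: 1000 GB / RAID 1: 500 GB / RAID 5: 750 GB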
  • I really don't understand this, RAID costs $0 when done in software. Just because you get a high MTBF doesn't mean a freak accident won't trash your data; RAID is really worth losing even 1/4 to 1/2 of your space in exchange. It may not be perfect, but it is a great first line of defense against failure.
    • I really don't understand this, RAID costs $0 when done in software.

      But then you wouldn't be able to recommend Windows for that... And who would pay for writing an article that says "Windows is out of the question"?
  • by _LORAX_ ( 4790 ) on Tuesday November 11, 2003 @01:32PM (#7445598) Homepage
    If you want reliability, you cannot rely on just ONE server anymore. Get the cheapest boxes that meet the requirement and get *2* of them. Use DRBD and Heartbeat to make the failover seamless. With these two cheap boxes you get 24x7 reliability at a 7-11 price. RAID, cooling, etc. will all help delay system failure in the one-box scenario, but that box *WILL* fail. Two boxes help not only with outages but with upgrades as well, since the primary can be taken offline for upgrades without upsetting the system at all.

    The latest issue has redundancy and scalability articles that go from 2 boxen to as many as you want.

    http://www.linuxmagazine.com/
  • My solution (Score:3, Interesting)

    by olympus_coder ( 471587 ) * on Tuesday November 11, 2003 @01:32PM (#7445609) Homepage
    I salvaged a derelict dual P3/450, dug up enough 256MB sticks to give it a gig of RAM, and added a salvaged video card.

    For drives, I watch and wait until I need more space, then I add a drive, usually whatever Fry's has cheap. I use LVM to add it to my partitions. Of course, I can only add a total of 4 drives this way before I'm forced to buy an off-board controller (I'm at that point now).

    The other downside is that there is no redundancy, but oh well. Redundancy is expensive.

    Performance stinks, as I violate the rules about one device per controller. Of course, I don't care, because I'm accessing it over a 10mbit network (via the phone lines in my apartment). It is sufficient to stream video to 2 or more machines, so no worries.

    Total cost: ~$500 worth of hard drives. Everything else was "free".

    Andrew
  • by EmagGeek ( 574360 ) on Tuesday November 11, 2003 @01:32PM (#7445613) Journal
    ECS Fully-Integrated motherboard
    Athlon 1800XP, 256MB Ram
    4x 40GB IDE Hard Disks
    Promise SX4000 Raid-5 Controller
    All in a micro-ATX chassis

    Can't get much cheaper than $700 for a 120GB storage server with at least some measure of redundancy.
  • Morons! (Score:3, Insightful)

    by phlyingpenguin ( 466669 ) <[phlyingpenguin] ... yingpenguin.net]> on Tuesday November 11, 2003 @01:33PM (#7445622) Homepage
    Since when do you need a 3GHz processor and a gig of RAM, let alone a GeForce FX (yes, he noted it's slow; not slow enough, mind you), for a file server?

    And why is he putting a keyboard/mouse in the picture? Oh, he's putting Windows on it... he forgot to buy a license for that! I'm not sure I understand the comment about it not being smart to put XFS/JFS/ReiserFS/Ext3 on a FireWire drive... can somebody explain why that's not smart?

    $3,100 is REALLY steep for a machine that shouldn't cost anything more than the drives it serves data from.
  • by StandardDeviant ( 122674 ) on Tuesday November 11, 2003 @01:34PM (#7445633) Homepage Journal
    Just yesterday I brought up a server here at work to serve as a 1.0TB-range backup server, using 8x200GB WD 8MB-cache drives strung off a 3ware Escalade controller (RAID 5, two hot spares). The build process was surprisingly painless (I used an Athlon-based solution, but that's relatively unimportant; you'll want 64-bit/66MHz PCI slots for things like the 3ware storage card, a SCSI card to drive a tape drive, etc. The cheapest board I found that could do this was, ironically, a dual-CPU MPX chipset board from Gigabyte, sub-$200), with a total cost for a total beast of a machine coming in at about 3400 USD with shipping and such.

    I'd heartily recommend the 3ware controller cards if you want to try something like this; they're worth every penny of their ~$200-300 cost simply for the increased performance and reliability they bring to the table, as well as the reduced hassle (the array just shows up as a single huuuuuge SCSI drive to Linux... always nice when /dev/sda is reported to contain something like two billion 512-byte sectors ;)).

    I went with a black aluminum Lian-Li case because it has enough 3.5" drive bays to hold all those drives and comes with lots of fans by default (as well as cooling a bit better than your average plastic/steel case due to the thermal properties of the material), plus a monster 550W "Vantec Stealth" power supply for reliability and the ability to sustain all the devices in the system.

    Debian stable installed with zero hassle and now I'm just left with the pain of fighting with backup software. ;) True, I'd trust something from Sun or similar more than this homebrew thing, but this is also a mere fraction of the cost of something from the commercial Unix vendors, so for the same total cost I could have multiple redundant servers... or more ale-and-whores money in the departmental budget. ;)
  • by ducomputergeek ( 595742 ) on Tuesday November 11, 2003 @01:39PM (#7445678)
    First off, you don't need a 128MB vid card for a server. Most have 2MB integrated ATI Rage cards. Old as dirt, stable, and they work.

    Next, what are your uses? I mean, most small business workgroups I have seen might store larger PowerPoint, Excel, and other files. It takes them a while to fill up dual 160GB HDs in a RAID 1.

    Still, for our company we purchased 1.6TB Xserve RAIDs from Apple with Fibre Channel cards. Why? Well, we are doing a lot of work with FCP and need the quick access times that come with Fibre Channel vs. Ethernet.

  • My config... (Score:5, Interesting)

    by tinrobot ( 314936 ) on Tuesday November 11, 2003 @01:41PM (#7445707)
    Built a storage server two years ago, it's run like a tank since I put it online.

    Dual 800MHz PIII in a Supermicro Motherboard.
    Cheap-O video card
    Gigabit card
    40 GB system drive.
    6x80GB Maxtor drives (5400 rpm)
    Escalade RAID-5 card.

    I chose 5400 rpm drives for several reasons:

    A) A little bit cheaper
    B) They use half the power of the 7200rpm drives
    C) They run a lot cooler
    D) Higher MTBF

    Every drive that has ever failed on me has been because of heat. I put several fans in the case to make sure the drives don't overheat. So far so good (knocks wood)
  • I was surprised to see video cards, mice and keyboards covered at all. Then the author spends literally two sentences on the IDE controller.

    This seems more like a general-purpose machine that happens to have a lot of storage. Why's that a big deal? Maybe I should have written up my dual-AMD, SCSI RAID development/gaming box -- nah. Why spend time on something that's really not that interesting?
  • How Funny (Score:3, Interesting)

    by WndrBr3d ( 219963 ) on Tuesday November 11, 2003 @01:43PM (#7445722) Homepage Journal
    My company just recently invested in a mass storage solution, since it's obvious that mass, redundant storage on SCSI (>300GB) isn't a cost-effective option for a small office environment. We took the easy way out and purchased the following:

    Dell PowerEdge 1600SC Server:
    Xeon 2.0Ghz
    512MB RAM
    18GB U320 15k RPM (OS Drive)
    32x CD-RW/DVD Drive

    We chose this server because it has PCI33, PCI66, AND PCI-X slots on the bus, supports up to SIX internal hard drives, and has two 5.25" drive bays.

    For the mass file store we chose Maxtor 300GB 5,400RPM 2MB-cache drives. You have to remember this is not going to be an active file server but more of a file repository and source control/backup server for a small office (10 clients).

    Our Mass Storage Solution Is:
    3Ware 7506-8 RAID Controller
    4x Maxtor 300GB Drives

    We're going to put the Maxtor drives in a RAID 5, and since the 3Ware is a switching HARDWARE 64-bit/66MHz PCI RAID card for IDE drives, performance should be stellar.

    I think all in all the entire solution ended up costing us around $4,000 for parts and systems, BUT we also got the OS (Win2k, 5 CALs) and a 3-year Dell warranty on parts.

    I think $4,000 for a 900GB hardware RAID 5 on a Xeon server ain't too shabby :-)
  • Budget (Score:5, Interesting)

    by herrvinny ( 698679 ) on Tuesday November 11, 2003 @01:43PM (#7445723)
    Total $3,140

    Okay, I just looked at the article again. $3,000? Damn. I wouldn't mind having that budget...

    Seriously folks, if you think you need $3,000 to build a server, then you're out of your minds. I don't want to be modded as Flamebait, but anyone here at /. (including me) could build a server for less than half that, and I would bet that for storage activities, it would be equivalent to or faster than this moron's PC.

    Video card? Keyboard? Mouse? No. Shouldn't even be there. Yeah, sure, during initial setup, connect a secondhand monitor, mouse, etc. (who doesn't have a spare monitor lying around? I have one 10 years old lying around somewhere and it should still work). But after initial setup, after you install and configure your Linux/Apache, Windows/IIS, FreeBSD/whatever combos, forget it. After that, you should be able to telnet or remote-admin the server.

    I'm going to issue a challenge. Alexis Dang (the author of this piece), if you're listening, here's a challenge. Give me $1500 and I'll build you a server that can beat your server in storage related activities. Not video games, not music, not Paintshop testing.... just pure storage. Hell, give anyone on this board $1500, and they can beat your "server" upside down.
    • Re:Budget (Score:3, Funny)

      by Idarubicin ( 579475 )
      Hell, give anyone on this board $1500, and they can beat your "server" upside down.

      Indeed. If you want to spend the full $3000, build two of your $1500 boxen, and then you have a complete backup.

  • Okay, so external drives aren't as cheap as internal drives, but they are a lot easier to cool (with a 4cm fan, for instance), easier to swap if need be, and easier to expand (just plug yet another drive into your FireWire bus).

    Not sure how easy it is to RAID those, though...
  • Linux!!!!
    Feel free to mod me up now.

  • Rename the article (Score:5, Informative)

    by bigjnsa500 ( 575392 ) <bigjnsa500@yPERIODahoo.com minus punct> on Tuesday November 11, 2003 @01:46PM (#7445763) Homepage Journal
    They should rename this article:

    "How to build a budget file server without knowing what we're talking about"

    3 grand is a budget? What happened to raising an old AMD K5-166 from the grave and throwing in some big IDE drives? Now that's a budget file server.

  • From the article:
    This is also the reason Linux was not a good choice for our system -- it doesn't make sense to put XFS/ext3/ReiserFS drives into a USB2.0/Firewire external box. Since we anticipate going through 2 TB of data every year, this setup allows for that flexibility without a significant cost penalty

    I just flat out don't understand this statement. Can someone shed some light on this?
  • That looks like a fun home project, but I would start here when looking for a small office server: Penguin Computing Relion Servers [penguincomputing.com]
  • by PSC ( 107496 ) on Tuesday November 11, 2003 @02:00PM (#7445917)
    If you believe the numbers, running a drive in RAID mirror will double the effective MTBF, we have done that by choosing the Maxline series vs a standard consumer IDE hard drive.

    (Shakes head and bangs it violently against concrete wall)

    MTBF and RAID are about entirely different things. The R in RAID stands for REDUNDANT. You can have an MTBF approaching infinity and still have no redundancy.

    Mirroring does NOT just double MTBF; it folds two probability functions together. With RAID 1, not only do both disks have to die for data loss, they have to die at the same time! (Or, in fact, within the recovery window.) With an MTBF of 1.2 million hours and a recovery window of maybe 5 hours, this makes a real difference.

    Using non-RAID IDE disks, especially on a server, no matter how small the budget, is just playing russian roulette with your data. With at least 5 chambers loaded. It's wantonly negligent. It's unprofessional. Don't do it.

    (As a side note, the MTBF is an utterly useless bit of information. It is determined by, e.g., running 10,000 disks for 10 hours, with one disk failing. That is one dead disk in 100,000 disk-hours of operation, so the MTBF is 100,000 hours. It's a bit like saying that if one woman can make a child in 9 months, 9 women can make a child in 1 month. Reality just doesn't work like that.)
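    A back-of-the-envelope version of that folding, in Python, assuming independent exponentially distributed failures (which real drives only approximate -- the correlated batch failures discussed above break this model):

      MTBF_HOURS = 1.2e6       # per-drive figure quoted for the Maxline drives
      REBUILD_HOURS = 5.0      # window in which the survivor must not die
      HOURS_PER_YEAR = 24 * 365

      # Chance a given drive fails within a year (rate * time, valid when small):
      p_drive_year = HOURS_PER_YEAR / MTBF_HOURS

      # A mirror loses data only if the second drive dies during the rebuild;
      # the factor of 2 is because either drive can be the first to fail.
      p_mirror_year = 2 * p_drive_year * (REBUILD_HOURS / MTBF_HOURS)

      print("single drive: %.2f%% per year" % (100 * p_drive_year))      # 0.73%
      print("mirrored pair: %.8f%% per year" % (100 * p_mirror_year))    # ~0.0000061%

    The exact numbers are as dubious as the MTBF they start from, but the shape of the result is the point: the pair's exposure is orders of magnitude smaller, and none of that shows up in a single-drive MTBF figure.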
  • by westyvw ( 653833 ) on Tuesday November 11, 2003 @02:01PM (#7445929)
    I have many different storage servers at different locations. These guys have no clue how to build on the cheap. They mention no Linux (or BSD), and that's just plain stupid. They put a video card in a server? WTF? You don't need one at all; all admin access should be via an X window on a main computer.

    Slashdot, News for Nobody. This was the lamest article I have read in a while.
  • by chunkwhite86 ( 593696 ) on Tuesday November 11, 2003 @02:10PM (#7446047)
    If it's a server, isn't data integrity a higher priority than sheer performance? Why aren't they using ECC memory modules? Price is not an issue - I have a dual Athlon MP system which supports ECC and I'm running 1.5 GB of PC2100. The 512MB ECC modules were only like $112 each.

    Plus they complained about not having front-panel FireWire and USB! WTF? This is supposed to be a server, isn't it? Not an iMac!

    And my final rant -- an NVIDIA FX video card? Are they smoking crack? A Matrox Millennium PCI card is all you need in a server. A GeForce FX is the last thing I would ever imagine finding in a budget storage server.
  • Dangerous (Score:4, Insightful)

    by SomeOtherGuy ( 179082 ) on Tuesday November 11, 2003 @02:13PM (#7446091) Journal
    Talk about working without a net. I mean, why call it a file server? Sure, it will serve files... but it will not do anything about redundancy or recovery. Thus it is just a desktop with lots of standalone drive space. The whole file server moniker should be reserved for machines that not only collect and serve your data but also protect and back up your data. No RAID, no mirrors, no tape backups -- no nothin'. And what good will the 3D graphics card or MTBF do you when one of the drives goes south, taking your data with it?.....(Well, at least you may be able to replace it under warranty with a new EMPTY hard drive and play a mean game of Unreal Tournament or something....:)
  • My 1TB Media Server (Score:3, Interesting)

    by meehawl ( 73285 ) <{moc.liamg} {ta} {todhsals+maps.lwaheem}> on Tuesday November 11, 2003 @06:12PM (#7448472) Homepage Journal
    I built this out of cannibalized parts in January 2003. I suppose by now you could probably double the media storage for the same cost -- there are a lot of rebates on PATA drives around.

    Supermicro P6DBE (1997 vintage)
    2xP3 600MHz

    Adaptec 2940UW SCSI
    Software RAID 1
    x2 36GB Seagate SCSI drives
    (web server)

    1GB ECC PC100 RAM

    x1 WD1600JB PATA drive
    (apps)

    Promise SX6000
    Hardware RAID 5
    x6 WD1600JB PATA drives
    (media server)

    ATI Rage Pro
    (it's a server!)
    Antec 1040SX Case

    Antec True480 - 480 Watt PSU

    Basically, all I bought new were the drives, the case, and the PSU. Total cost: below $1300. It serves several thousand visitors a day, and peaked at 30K hits for a while following a Slashdotting. CPU usage peaks around 20%. Using J River's Media Center [musicex.com], I've tested it serving 6 simultaneous 720x480 DivX streams to clients over LAN and WAN with no problems.

    These chumps spent 3 times what I did, and they don't even have disk redundancy. Who let the dogs out?
