Data Storage | Hardware | IT

Entry-Level NAS Storage Servers Compared

snydeq writes "InfoWorld's Desmond Fuller provides an in-depth comparison of five entry-level NAS storage servers, including cabinets from Iomega, Netgear, QNAP, Synology, and Thecus. 'With so many use cases and potential buyers, the vendors too often try to be everything to everyone. The result is a class of products that suffers from an identity crisis — so-called business storage solutions that are overloaded with consumer features and missing the ease and simplicity that business users require,' Fuller writes. 'Filled with 10TB or 12TB of raw storage, my test systems ranged in price from $1,699 to $3,799. Despite that gap, they all had a great deal in common, from core storage services to performance. However, I found the richest sets of business features — straightforward setup, easy remote access, plentiful backup options — at the higher end of the scale.'"
This discussion has been archived. No new comments can be posted.

  • one-page version (Score:5, Informative)

    by larry bagina ( 561269 ) on Wednesday October 19, 2011 @06:39PM (#37769098) Journal
    here [infoworld.com]
    • Re:one-page version (Score:4, Informative)

      by beelsebob ( 529313 ) on Thursday October 20, 2011 @02:43AM (#37771528)

      This is one segment where build-your-own is still *way* cheaper than any of these crazy setups:

      Intel Pentium G620T: $83
      Intel DB65AL: $85
      8GB DDR3: $50
      Hyper 212+ with fans removed: $20
      Fractal Design Mini: $100
      Corsair CX430: $40
      FreeBSD: $0
      Total without disks: $378

      5 * Hitachi 5k3000: $700

      Stick the disks in a raid-z, and wham bam, there's $1078 for 12TB of RAIDed NAS.
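
      As a rough sanity check on those figures (a sketch only; it assumes the 3TB model of the 5K3000 and a single raidz1 vdev, which gives up one drive's worth of space to parity):

      ```python
      # Back-of-the-envelope check of the build above; drive size and raidz level are assumptions.
      drives, drive_tb = 5, 3                      # 5 x Hitachi 5K3000, assuming the 3TB model
      parts_usd, disks_usd = 378, 700              # totals quoted in the parent post
      usable_tb = (drives - 1) * drive_tb          # raidz1 keeps one drive's worth of parity
      total_usd = parts_usd + disks_usd
      print(f"{usable_tb} TB usable for ${total_usd} (~${total_usd / usable_tb:.0f}/TB)")
      # -> 12 TB usable for $1078 (~$90/TB)
      ```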

      • by gmack ( 197796 )

        You don't have a hot-plug enclosure in there, and pretty much all of these will hot-plug drives.

        • You're technically correct (the best kind of correct). But... while I agree a hot-plug bay may be a nice idea, really, on a home NAS, what do you want to hot-plug all the drives for? If you're trying to fail a drive and rebuild the array, you probably aren't in a position where you want to be using it continuously through the rebuild.

          What's the use case for hot plugging the drives?

          • by gmack ( 197796 )

            And yes I have used a cheap dual-drive NAS while it was rebuilding the array. It was slower but still functional.

            Hot-swap cases make drive management much easier. Drive 3 of 5 needs replacing? Forget tracing cables back to the motherboard; just pop out drive 3 in the array and replace it. This also means I can get someone else to do it, even if it means walking them through it over the phone.

            These things need to be dead easy since I have been going out of my way to tell all of my non techie friends that U

            • by gmack ( 197796 )

              Don't know how that first "And" got there even though I went out of my way to proofread that.

            • by BLKMGK ( 34057 )

              I use drive bays in my unRAID. While not hot swappable there's no cable tracing to be done and it's WAAAAY cheaper than the crap these companies sell as "NAS".

  • Synology is nice (Score:5, Informative)

    by SirMasterboy ( 872152 ) on Wednesday October 19, 2011 @06:51PM (#37769192)

    I have a DS1010+ 5-bay model and absolutely love it. It's got 10TB in it right now but I may replace the drives with 3TB models eventually. With a dual-core 1.6GHz atom and 1GB DDR2 ram it easily reads and writes at 100+MB/s via a RAID5 array on my simple home gigabit network.

    Also, the new NASes that are Intel-based can run most CLI Linux servers and programs, which is great. You may need to add more RAM if you run lots of heavy servers or have lots of concurrent users, but most have spare RAM slots.

    The best thing I find about Synology is their ever-updating, cutting-edge Web GUI. They are already using HTML5 features to support things like dragging and dropping files right into your web browser to upload them to the NAS remotely.

    • Re:Synology is nice (Score:4, Interesting)

      by sortius_nod ( 1080919 ) on Wednesday October 19, 2011 @08:00PM (#37769656) Homepage

      I did have a NAS a while ago, but I got rid of it in favour of building up a Linux server. I found that NAS performance is slow at best, abysmal at worst, even with 1Gbps networking & a decent controller. Unless you go corporate style you're always going to suffer from speed problems.

      Having 3 network cards and enough space for 15 drives makes up for the few hundred extra dollars you pay for a DIY NAS. Plus, a DIY NAS has a lot more flexibility than a consumer-grade NAS.

      • by afidel ( 530433 )
        And the homebrew NAS also costs more in time to set up, has no support, and uses probably anywhere from a few dozen to a few hundred watts more power. Basically the majority of the people interested in the tested solutions would not consider a Linux box with some storage to be a viable alternative to those boxes.

        Btw I'm very surprised at the performance of the StorCenter px6, considering before the device launched they used it at EMC World to boot 100 VDI machines in like a minute and a half with SSD's I h
        • Basically the majority of the people interested in the tested solutions would not consider a Linux box with some storage to be a viable alternative to those boxes.

          Basically the majority of cheap NAS units are a Linux box with some storage, and anyone who doesn't consider them a viable alternative doesn't have the chops to build one.

          I have a Geode GX development system here I use as a NAS (with an IEEE1394 card in.) Under 5W not counting the storage itself. Perhaps you are wrong.

        • by Kjella ( 173770 )

          And the homebrew NAS also costs more in time to set up, has no support, and uses probably anywhere from a few dozen to a few hundred watts more power

          If you manage to build a storage server that pulls a few hundred more watts of power, you must be doing something very wrong. Even a gaming system with a 2600k and a HD6870 draws 75 watts at idle from the wall socket, and that's roughly the worst possible setup for a storage server you can get.

          If you just want 10TB of storage capacity to go with your laptop, setting up a Linux box and sharing it out is dead easy. If you want a full server then the NAS boxes don't deliver that. But I agree, there's a pretty

        • by DarkOx ( 621550 )

          There are plenty of ARM boards out there now that draw VERY little power. You can get these in formats like Mini-ITX now that will fit in standard cases, which you probably want so you can fit enough drives. They use regular DDR memory in most cases now as well, or have the memory integrated on an SoC-type controller. Your favorite Linux distro is most likely available for ARM now as well. There are even four-core ARM chips to choose from that are inexpensive.

          There is no good reason to be building somet

      • by lucm ( 889690 )

        I did have a NAS a while ago, but I got rid of it in favour of building up a Linux server. I found that NAS performance is slow at best, abysmal at worst

        I would agree with that. However, the best setup I've tried with a Linux machine is software RAID (or LVM) on a bunch of disks with an iSCSI target on top, which is especially convenient in a virtualized environment. Network cards are cheap, so it's easy to add custom multipath.
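
        For reference, a minimal sketch of that kind of export, assuming the tgt (scsi-target-utils) iSCSI target sitting on top of an md or LVM device; the IQN and device path are placeholders:

        ```
        # /etc/tgt/targets.conf -- export the software RAID (or LVM) device as one iSCSI LUN
        <target iqn.2011-10.lan.example:nas.md0>
            backing-store /dev/md0
        </target>
        ```

        Reload tgtd and the initiators can then log in over as many NICs as you care to multipath.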

    • by rikkards ( 98006 )

      Totally agree. I originally picked up a Seagate BlackArmor 400 as the price seemed good. It sucked. Performance was crap, it took 12 hours to build the array, and it ended up bricking after the latest firmware update. Took it back (this was only a day or two after buying it) and got a Synology DS411, which blew me away. I am getting 50MB/s up and 100MB/s down on a single NIC. I could have built my own but decided I didn't want another computer I have to manage. I wanted something relatively turnkey.

      • Agree on the Synology recommendation. These are very nice boxes which are quite Linux-friendly, with even their initial setup running on Linux (unlike certain NAS units which use Linux internally but seem to go out of their way to make things awkward for Linux clients).

        I have a DS207 which performs admirably as web server, file server, backup server, and media server (it replaced a DS101 some years ago). It will soon be accompanied by a DS211 which will be used as our main home server (files, backup, med

  • The prices of the two best ones, the Netgear and the QNAP, on Newegg for the diskless versions are about $230 apart - about a quarter of the price. I think I'd go with the Netgear based on that.

    The problem with these things is that Thunderbolt is almost here for everyone else (not just Macs), and with SSDs getting less expensive all the time, I think I'd rather wait for a Thunderbolt-connected version for the sake of future-proofing. Plus a version intended only for 2.5" drives would be sized better for

    • Re:price (Score:5, Funny)

      by failedlogic ( 627314 ) on Wednesday October 19, 2011 @07:14PM (#37769352)

      Quick and easy tip to increase storage space on a budget: buy the 3.5" model and punch a hole in the top corner. When the first side is full flip over the disk and use the other side. You will need to periodically flip the disk over and make note of what side contains the data you want.

      • Quick and easy tip to increase storage space on a budget: buy the 3.5" model and punch a hole in the top corner. When the first side is full flip over the disk and use the other side. You will need to periodically flip the disk over and make note of what side contains the data you want.

        Ha. I'm old enough to remember doing that to 5.25" diskettes for my Apple ][.

    • The problem with these things is that Thunderbolt is almost here for everyone else (not just Macs), and with SSDs getting less expensive all the time, I think I'd rather wait for a Thunderbolt-connected version for the sake of future-proofing

      How is Thunderbolt going to provide a N[etwork]AS?

      • The PCIe spec is flexible enough that, in theory, you could probably network with it (directly, that is, not just by hanging a gig-E chipset off each host, which would be the sane thing to do). PCIe switches are supposed to be used for fanout of a limited number of host lanes to support more peripherals; but you could likely put one in a separate box, with Thunderbolt bridges for communication off-board.

        It'd be damned expensive, and I'm sure all sorts of horrible things would happen, given that host-host
        • I made a network link using SATA and a SAS HDD.
          Two PCs, each with a single eSATA link to the SAS HDD.
          Turn one link on and the other off, dump data on the drive, turn the first link off and the other on, read data from the HDD.
          Did it just for giggles. It was actually faster than my Ethernet connections, but "temperamental" is inadequate to describe the setup.
          -nB

          • Given that, in the context of ethernet, "Jumbo frame" usually implies a whole 9000 bytes, I'd say that the HDD-based system does have the clear upper hand in potential frame size...

            Pity about the latency and the being half-duplex...
          • If you'd been a decade earlier, you could have done it with a SCSI drive and two host controllers, all assigned to different IDs. Then you could have had both hosts able to access the drive at the same time. I have no idea how you would have avoided trashing the file system or poisoning the file cache, but I'm sure there's a way.

            • Haha. Great minds think alike.
              Since we're only talking block transfers for IP the file system is not a problem at all.
              Actually the disk would thrash from the bidirectional transfers so two drives would work better. Then you could use each as a FIFO and have all the goodness of its streaming transfer rate.

          • With parallel SCSI you don't even switch. Just access the HD from both hosts at the same time.
            I did that between a PC and a MicroVAX once :P

  • by Trongy ( 64652 ) on Wednesday October 19, 2011 @07:04PM (#37769278)

    The newer SMB2 protocol in post-Vista versions of Windows is much more efficient in its network usage. Samba 3.6 now has SMB2 support, but the article doesn't say which (if any) of these devices support the newer protocol.
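
    For what it's worth, on a box running Samba 3.6 the newer protocol is off by default and has to be switched on in smb.conf, roughly like this:

    ```
    [global]
        # Samba 3.6: SMB2 is only negotiated if the maximum protocol is raised
        max protocol = SMB2
    ```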

  • Wow. (Score:3, Informative)

    by Anonymous Coward on Wednesday October 19, 2011 @07:06PM (#37769308)

    Holy cow! $1,699 to $3,799 for "10TB or 12TB" of storage?

    Case with 8 internal bays: $40
    600 Watt Power supply: $35
    MB with 8 SATA3 ports: $115
    2.5gig dual core processor: $73
    8 2TB drives: $800
    1 Gig of RAM: $30

    Total: $1093, for 16TB of storage. Yeah, yeah, you need one of them as a spare drive for redundancy, and you need an OS. You also need a few minutes to assemble and install. But for that price? Why pay twice as much? Hell yeah, roll my own, baby!

    • by Joe_Dragon ( 2206452 ) on Wednesday October 19, 2011 @07:19PM (#37769386)

      That PSU is too cheap; at least get a $50+ one, and don't just go for high wattage.

      Get 2-4GB of RAM minimum; it should only be about $50-$60 for good 8GB DDR3, and you want at least dual-channel RAM.

      For 8 SATA ports you may want to get a PCIe RAID card / SATA card. Maybe even SAS.

      For redundancy you may want RAID 6 on a RAID card and not onboard fake RAID, and most southbridges only have 6 ports anyway.

      Also, some low-end motherboards only have 10/100 Ethernet.

      • That PSU is too cheap; at least get a $50+ one, and don't just go for high wattage.

        Uh, what? I can understand criticizing a specific PSU brand as being too unreliable or low-quality, but come on! Just saying "any PSU less than $__ is crap, you need to spend at least $__" makes you sound like a classic Conspicuous Consumer.

        Get 2-4GB of RAM minimum; it should only be about $50-$60 for good 8GB DDR3, and you want at least dual-channel RAM.

        This is a NAS, not a server. Half a gig would be sufficient, honestly - I've run some with 256MB. One gig is plenty, unless you want to keep files on a RAMdisk.

        For 8 SATA ports you may want to get a PCIe RAID card / SATA card. Maybe even SAS.

        When you're just building a home/small office NAS, you don't need a high-performance RAID card - software RAID is more than enough. Especially considering the price of those things.

        • That PSU is too cheap; at least get a $50+ one, and don't just go for high wattage.

          Uh, what? I can understand criticizing a specific PSU brand as being too unreliable or low-quality, but come on! Just saying "any PSU less than $__ is crap, you need to spend at least $__" makes you sound like a classic Conspicuous Consumer.

          OK, but don't cheap out.

          Get 2-4GB of RAM minimum; it should only be about $50-$60 for good 8GB DDR3, and you want at least dual-channel RAM.

          This is a NAS, not a server. Half a gig would be sufficient, honestly - I've run some with 256MB. One gig is plenty, unless you want to keep files on a RAMdisk.

          OK, but for $30 you can get 2GB of RAM.

          For 8 SATA ports you may want to get a PCIe RAID card / SATA card. Maybe even SAS.

          When you're just building a home/small office NAS, you don't need a high-performance RAID card - software RAID is more than enough. Especially considering the price of those things.

          Maybe, but not all boards have 8 ports, and on some that's 6 from the chipset and the others from an add-on SATA chip; the built-in software/fake RAID likely will not work across 2 different chips like that. And even with 8 ports you still need 1 for the OS disk, or you have to mix the OS in with the data drives.

          For redundancy you may want RAID 6 on a RAID card and not onboard fake RAID, and most southbridges only have 6 ports anyway.

          8 hard drives is not enough to justify RAID 6, unless they're EXTREMELY unreliable drives. Especially since that cuts your storage capacity down to 12TB - not that good.

          RAID 6 is only needed when it's possible for a drive to fail, and then for another to fail while the array is still recovering. There's no point in doing it with only 8 drives.

          8 drives in RAID 0 is a major risk. RAID 5 uses less space.

          Also, some low-end motherboards only have 10/100 Ethernet.

          True. But then again, how many switches and computers are still only 10/100? Maybe you don't, but I still work daily with stuff that maxes out at Fast Ethernet.

          Plus, a $115 mobo isn't "low-end", at least by my definition. It's a fair assumption that if it has 8 SATA ports, you're going to have 10/100/1000 Ethernet.

          The case needs to have room for 8 HDDs + an OS disk, and good cooling.

          • OK, but for $30 you can get 2GB of RAM.

            Yeah? For $30 I can also add a nice SD/MicroSD card reader. And it would be just as beneficial to the system. Just because RAM is cheap doesn't mean you need to cram absolutely everything full of it.

            Maybe, but not all boards have 8 ports, and on some that's 6 from the chipset and the others from an add-on SATA chip; the built-in software/fake RAID likely will not work across 2 different chips like that. And even with 8 ports you still need 1 for the OS disk, or you have to mix the OS in with the data drives.

            Once X79 comes out, you'll have 10 ports, naturally. In any case, software RAID, at least under Linux, can handle disks spread across wildly different chipsets. It can also live alongside the OS on a partition of just one drive.

            8 drives in RAID 0 is a major risk. RAID 5 uses less space.

            That would be relevant, if we were talking about RAID 0. RAID 5 and 6 are ident

            • X79 is the high-end chipset that needs an i7 CPU, likely $280-$300, plus a $200-$250 motherboard. Also, the CPU has quad-channel RAM, so you may want at least 2 RAM sticks, maybe even 4, and you may need a low-end PCIe video card since X79 has no built-in video, so the system may or may not boot up without one. Versus, say, a lower-cost CPU and motherboard + a hardware RAID card at about $300, it's about the same (you do not need an i7 for that, and with the lower-end board onboard video is OK), and hardware RAID makes it so you don't need

              • by BLKMGK ( 34057 )

                Software RAID does NOT require anything resembling a high end CPU. The LOWEST end Intel CPU undervolted and underclocked could do it. Boot the OS from a USB stick into a RAM disk. You will NOT need more than a gig of RAM if you do it right. Do NOT use hardware RAID, when it pukes and you try to replace the hardware you'll have all sorts of "fun".

                You have actually DONE this, right, not just read about it and postulated?

            • by BLKMGK ( 34057 )

              unRAID. Boot from USB, uses a standard albeit not common FS (ReiserFS), only loses one disk to parity, and losing multiple disks doesn't kill the entire storage array. Can host a max of something like 16 disks although I've never gone past 11 on either of my systems. No OS maintenance although if you add on lots of stuff you can get into murky territory. Needs no more than maybe a gig of memory and a SLOW CPU. CPU will NOT be your bottleneck and Celeron or single core whatevers work just fine. A case that c

              • Celeron or single-core + boot from USB + software RAID is not a good idea. At least boot from a SATA HDD or maybe FireWire.

                • by BLKMGK ( 34057 )

                  And WHY pray tell is that? Been doing it for well over 5 years and have gone through more than one USB stick without issue - my current stick is 2 years old. Boot takes about 2 minutes and only ever occurs when I upgrade software or a drive. The USB stick isn't written to during that time and only stores the OS to boot to RAM disk and a single config file. The image on the stick is standard from my vendor and the only thing unique I need backup anywhere is the config file - it's maybe 10K or I can print out

                  • by pnutjam ( 523990 )
                    My NAS runs NX, so I can pull up a published Firefox, or do BitTorrent. Anything I surf in my published Firefox leaves no trace on the PC or the DNS servers of the site I am at. They only see an encrypted tunnel to my home PC. A full desktop gives me a lot of flexibility at little cost.
        • RAID 6 is only needed when it's possible for a drive to fail, and then for another to fail while the array is still recovering. There's no point in doing it with only 8 drives.

          It's also extremely useful if you run into an unrecoverable read error while trying to rebuild the array.

          A lot of standard mechanical drives have an unrecoverable read error rate of about 1-in-10^14 bits (or 1-in-~12TB), meaning you're getting into some pretty nasty chances of hitting an URE on at least one of your disks when you're trying to rebuild the array after a disk failure with a decently-large array. This issue is alleviated when you have storage with an URE rate of 1-in-10^15 or higher (such as so
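
          A rough illustration of those odds (a sketch only, assuming the 1-in-10^14 rate quoted above and a RAID 5 rebuild on the 8 x 2TB build discussed elsewhere in the thread, so seven surviving disks get read end to end):

          ```python
          # Probability of hitting at least one URE during a RAID 5 rebuild; all figures are assumptions.
          ure_per_bit = 1e-14                     # quoted consumer-drive error rate
          surviving_disks, disk_tb = 7, 2         # 8-drive array with one failed member
          bits_read = surviving_disks * disk_tb * 1e12 * 8
          p_at_least_one = 1 - (1 - ure_per_bit) ** bits_read
          print(f"{p_at_least_one:.0%}")          # roughly two chances in three
          ```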

          • by swalve ( 1980968 )
            That's why you have cron do a raidcheck once a week. You'll know if you have a drive starting to go TU before it completely fails.
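
            A minimal sketch of that weekly check for a Linux md array (the device name and script path are assumptions; schedule it with a cron entry along the lines of 0 3 * * 0 root /usr/local/sbin/md-check.py):

            ```python
            #!/usr/bin/env python3
            # Report the mismatch count left over from the last verify pass, then start a new one.
            # Must run as root; writing "check" triggers a non-destructive read of the whole array.
            from pathlib import Path

            md = Path("/sys/block/md0/md")
            print("mismatch_cnt from last pass:", (md / "mismatch_cnt").read_text().strip())
            (md / "sync_action").write_text("check\n")
            ```
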
        • by afidel ( 530433 )
          No, RAID6 is the only useful level of RAID for any 7200 RPM drive over ~1TB other than RAID10. The bit error rate and time to rebuild are too high for anyone who cares about their data to use anything else.
        • by HuguesT ( 84078 )

          1- I don't know of any good-quality power supply below about $60. Good quality means Japanese capacitors, low ripple, good resistance to micro-cuts, no lead, good current on the 12V rail, at least bronze-level efficiency, silence, and so on. Cheap no-name PSUs eventually fail, sometimes taking the whole PC with them. Most people dismiss the PSU, but it is an essential investment in a piece of equipment that runs all the time.

          Read this [tomshardware.com] for instance.

          2- On a homebrew NAS you want to run ZFS, you really do. In fact

          • by BLKMGK ( 34057 )

            Look at unRAID. One drive supports the parity and the FS is standard ReiserFS. Lose a disk and a rebuild is no problem. Lose TWO disks, and be completely unable to recover with standard tools, and you lose.... two disks of data NOT the entire damned thing. In 5++ years of using this system and having gone through multiple drive failures I've never once lost data. Never once had two disks die at once either and my systems run 24X7X365. I had one machine up to over 11 disks once but with larger disks have bro

      • The only real advantage "real RAID" has over "fake RAID" is the battery-backed cache, so if it doesn't have that, you're probably better off with "fake RAID". Your system CPU is faster than the CPU on board (plenty fast for parity calculations), and with "real RAID" you have yet another OS (the board's firmware) to keep updated and hope doesn't crash and take out your file system.

        I'd rather have the OS handle the disks so there's no mystery disk format and I have complete control from the OS level. ZFS an

      • you may want RAID 6 on a RAID card

        You just added hundreds or thousands of dollars to a $1000 NAS and locked yourself to a specific piece of esoteric hardware in one swoop. BIOS-based RAID is a little too pedestrian for serious storage, yes, but there's nothing to be ashamed of in a true software RAID setup (i.e., mdadm), even if it means adding SATA ports through a card.
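
        A minimal sketch of what that looks like (the device names are assumptions, and --create is destructive to whatever is already on those disks):

        ```python
        # Build an 8-disk RAID 6 array with mdadm; adjust the device list to match the add-on card's ports.
        import subprocess

        devices = [f"/dev/sd{c}" for c in "bcdefghi"]
        subprocess.run(
            ["mdadm", "--create", "/dev/md0", "--level=6",
             f"--raid-devices={len(devices)}", *devices],
            check=True,
        )
        # From here it's an ordinary block device: mkfs it, mount it, and share it over Samba or NFS.
        ```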

      • Get a cheap HP ML110 server with a few GB of RAM and load it up with disks. Get a bigger housing if the case is too small. Benefits: remote management (very basic iLO), server-grade chipset/CPU if you get the Xeon-specced model. I got one of these in a special offer and it runs my Linux server very well. 1.6TB RAID 1 (mdraid), off-the-shelf disks, bought half a year apart so I don't get bitten by some bug that's in one firmware and not the other. Enough CPU/RAM/disk overhead to run the occasional test VMs. I a
    • You obviously haven't been involved in enterprise-level purchases before. It may seem silly to your average techie, but the people buying this equipment need someone to blame when it fails. If you're the head of IS in your company and the little server you're suggesting goes down for 24hrs because of some obscure hardware incompatibility, what are you going to say? You built it, you maintained it, and now the company has 50 people that sat at their desks pointlessly for 8hrs while you dicked around with drivers. Yo
      • by swalve ( 1980968 )
        That's why when you home-brew, you have redundant equipment. If you have extras of everything, on the shelf and ready to go, no need for any of that BS.
    • A high-quality and quiet home NAS with 4GB of RAM and 15TB of RAID5:

      Fujitsu server $299.99 (http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=6939649), sold out now.
      6x 3TB drives, $959.94, less if you buy 5400rpm drives.
      ----------
      $1259.93 + 2 spare 250GB drives

    • Yeah, I figured that out a long time ago. I figure the only people that buy that crap are people that are lazy, want a really simple solution, or do not have the expertise to do it themselves. That is to say, you take one of these pre-assembled NAS units, plug it into the network, do a wizard, done. For a small biz with no tech support, maybe an option. Also most of the bigger NAS units support hot-swappable drives, which is nice... though if you spent about $120 rather than $40 on your case you can get hot-swappable drives a

  • by bigdady92 ( 635263 ) on Wednesday October 19, 2011 @07:12PM (#37769344) Homepage
    There is no mention of speed, performance, file copy replication, or the ins and outs of each solution, just a list of features they all share and how the author went about determining them at his whim. Without metrics this article is just a sales blurb for links. Other websites do it better: StorageReview for one, SmallNetBuilder for another.

    Another wretched sales brochure disguised as a review by InfoWorld.
    • Well, there were some metrics.
      But you're right, when I went to the review, Ghostery popped up 15 or so trackers. Don't think I ever saw that many on one page.

  • unRAID (Score:4, Interesting)

    by jtownatpunk.net ( 245670 ) on Wednesday October 19, 2011 @07:43PM (#37769568)

    It's definitely more work to set up than a pre-built appliance and I wouldn't use it in a production environment but it has some advantages and works well as my media server. I particularly like that multiple drives developing a few bad sectors won't render the entire array unrecoverable. That's a bit of a concern when combining multi-terabyte consumer level drives. I currently have 20tb of fault-tolerant storage with room for another 6tb before I run out of ports. With more ports and a larger case, I could go up to 40tb.

    • Big fan of unRAID as well.

      I set up a box for home this summer. 20-drive max capacity, currently running on 6.

      The extensibility of the system was the biggest selling point for me.

    • Count me another huge unRAID fan. ZFS has its pluses but the one thing it does not have is the dead simple ease with which to add storage capacity. Yes yes yes I've seen how it is done with ZFS, even played with it myself, but it is NOT the same, it is NOT as seamless, it is NOT as simple. With unRAID I add a drive and that is basically it. It just starts saving data to it as if it were a JBOD.

      DROBO came out and I thought that was my solution, then I saw the price tag, the speed, the proprietary FS and

  • I built a $500 Atom NAS over 2 years ago and it had better performance than that shown in the charts of that article. And these rigs are over $1000 today? WTF?

    • Same here. Intel Desktop Board with integrated Atom. I installed XP with most services disabled, no AV, just some IP cam software. It only has LAN access (no Internet). Runs fast and has never gone down. My favorite part is the power consumption: measured at the wall with activity, it was 18W.

  • The metrics were using different RAID types from one solution to the next; some say RAID10, some RAID2, etc. The "Intel file copy" test was basically unexplained, and it doesn't make sense that a file copy (a sequential write/read operation) would have less throughput than the random reads/writes the other test claims to be (and wtf does he talk about 256k block size in the legend instead of how big the reads/writes are?). Also, the author calls RAID-10 and RAID-6 as modes for someone with more technical knowl
  • Something with some CPU power to take requests and get them out there plus a card that can do RAID6 and still saturate a gigabit network connection (with enough drives) doesn't really cost a lot more than some of those underpowered things.
  • Use GlusterFS http://www.gluster.org/ [gluster.org] for redundancy spanned across one or more JBOD machines, for a much easier hardware and data upgrade path. Use oVirt for easy setup http://www.gluster.com/community/documentation/index.php/GlusterFS_oVirt_Setup_Guide [gluster.com]. Mount GlusterFS directly on your clients or export via iSCSI target, Fibre Channel target, FCoE, NBD, or traditional NFS for a more advanced shared storage solution. And you can still run more of a NAS-type setup with CIFS, WebDAV, or the like.

  • I stopped reading the slide show when they not only spelled out a definition for iSCSI, but got it wrong. Horrible article. Zero details, all fluff.

  • by Lifix ( 791281 ) on Thursday October 20, 2011 @12:34AM (#37771060) Homepage
    Dear Slashdotters. I know that you can build a better, faster, cheaper NAS that will perform fellatio over SSH and wipe your ass for you. But I don't care... at all. According to you, I overpaid for my two NAS devices, a Drobo FS (serving media) and a Synology DS211+ (photo backups (profoto)). But I'm exceedingly happy with them. Transfer speed is sufficient on the Drobo to serve 1080p content to 2 TVs and an iPad simultaneously, and the Synology keeps up with my image editing software just fine. I've upgraded the drives in the Drobo once so far, and just like their videos claim, everything just worked. The Drobo survived a drive failure last year, in the middle of 'movie night,' and video playback from the Drobo was unaffected. I'm glad that these NAS devices were reviewed, but I can't imagine why so many have come to this thread to post their server builds. The people, like myself, buying these NAS devices are buying them so we don't have to build our own servers.
    • by Lifix ( 791281 )
      Went back through and actually read the OP. The comparison is absolute shit, there's no mention of input/output speeds on any of the devices, and no clear methodology for handing out scores... advertisement disguised as reporting.
    • Well said - seriously - I'm so over hacking up my own hardware - I just want to buy something, plug it in, and have it work. Maybe if I were a teenager with lots of time on my hands and no money - OK, I'll spend a few days hacking up a NAS - but I'm not. I have a job, make good money, and have a life. I'm willing to pay for convenience.

    • Indeed - I have had a Synology NAS since 2007 (first a DS207, upgraded to a DS211j this year). There are tons of features right out of the box - I have been living in "the cloud" for years now. When I think of all the time I would have to spend setting up software packages for all of the features Syno provides... it makes me want to cry. I have already spent way too much time getting Serviio up and running to replace the standard crappy DLNA implementation.

      Buying pre-built means it works for you, and you aren

    • I think the only problem I have with Drobo is the horrifying warranty options. For a device that has roughly 80% markup, I think it's atrocious that it has a hardware warranty period (1 year) lower than a cheap USB hard drive from just about anyone else (2-3 years). Especially given that once you're out of warranty, a hardware failure ANYWHERE in that device isn't really recoverable from without going out and buying a new Drobo. You can argue that it's unlikely to fail, but I think that, if they really sta

    • I'm entirely, completely in love with Drobo as a NAS device.

      The ability to pop out a smaller drive and replace it with a larger drive is amazing - that is simply how technology is supposed to work. I have the Drobo FS at home and the DroboPro FS at work. Having used them for about a year and having tried to make them fail before I moved them into production, I'm very happy with their reliability and performance. (More on performance in a second.)

      At the high end, I have used EMC and IBM solutions. At the low

  • by gjh ( 231652 )

    My own AFP experience with QNAP was terrible, due to the dodgy FOSS stack - I forget which one - that was included. There was no useful way to authenticate (no OpenDirectory, no Kerberos, no way to automate user import). I ended up with iSCSI between the QNAP and the Mac OS Server (ATTO iSCSI) and serving AFP from there, with a 5x speed improvement.

    Was I doing something wrong? It doesn't seem to match the AFP figures in the article. Anyone else have similar awful real-world AFP performance?

  • ...for my needs anyway, so hopefully I can add something to the discussion. I'm one of those traitors who traded a homemade linux NAS for an off-the-shelf model and went through quite a few models before I found one I was happy with.

    My initial file server was built into a cheap 4U rackmount and a couple of 3ware cards and provided sterling service. However, it was exceptionally loud and very heavy, and sucked up a fair amount of power. When you've moved house as often as I have, you start to think about whe

  • See here: Backblaze Blog: Petabytes on a budget [backblaze.com].

    They use JFS on Debian. You can easily add the fileserver software of your choice (Samba, Netatalk, NFS, etc.).

    On the hardware side they use an Intel mainboard with an Intel Core 2 CPU, a PCIe SATA 2 controller, and 45 SATA 2 disks (1.5 TB each). They put it in a custom enclosure; the 3D model is available here (25 MB ZIP archive) [backblaze.com]. This all costs less than €8,000 for 67 TB (disks included!).

    There is also an update [backblaze.com], where they get 135 TB for less than $8000.

  • QNAP comes out as best in nearly all tests, yet they still recommend one of the brands that performed way worse? (disclaimer: I run a QNAPclub community forum)
  • I'm no expert. We've got a NetApp where I work now, and had a Netgear ReadyNAS at my previous job. But I will say that some ability to upgrade is going to be key.

    We got the ReadyNAS up and running with just a couple TB of storage because we really didn't think we'd need more than that. Within a year we were full and looking for some way to expand it.

    The NetApp here was installed with a good pile of storage... But we've grown our environment so much that we've exceeded the capabilities of the chassis, an

"Yes, and I feel bad about rendering their useless carci into dogfood..." -- Badger comics

Working...