Data Storage, Virtualization, Hardware

Making Best Use of Data Center Space: Density Vs. Isolation

jfruh writes: The ability to cram multiple virtual servers onto a single physical computer is tempting -- so tempting that many shops overlook the downsides of having so many important systems subject to a single point of physical failure. But how can you isolate your servers physically while still taking up less room? Matthew Mobrea takes a look at the options, including new server platforms that offer what he calls "dense isolation."
This discussion has been archived. No new comments can be posted.

  • TL;DR

    by Anonymous Coward

    Man buys 1/3 rack and fills it. Looks for faster servers.

    • Re: TL;DR (Score:4, Interesting)

      by saloomy ( 2817221 ) on Friday October 17, 2014 @08:00AM (#48167551)
      He should consider using virtualization to increase his uptime, since he is worried about multiple important systems on a single server. Virtualization gives you such good yields in consolidation that you can come out ahead while still using redundancy features like VMware Fault Tolerance: your VM runs in lockstep on two hosts and will survive even if either host fails. It just requires 2X the memory. That's only needed for the most extreme cases, though, like databases; most servers should be able to survive a reboot (which is what happens when your host dies and there is capacity left in your cluster: the VM powers back up on another host).
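
      Here's a rough back-of-the-envelope sketch of what that memory cost looks like; the host count, host RAM, and per-VM footprint below are made-up illustration numbers, not figures from TFA or anyone's real cluster:

        # Back-of-the-envelope memory math for HA restart vs. FT lockstep.
        # Every number here is a made-up example, not a measured figure.

        HOST_RAM_GB = 256   # hypothetical RAM per host
        HOSTS = 4           # hypothetical cluster size
        VM_RAM_GB = 16      # hypothetical per-VM footprint

        total_ram = HOST_RAM_GB * HOSTS

        # HA restart: keep one host's worth of RAM free so every VM can power
        # back up elsewhere after a single host failure (N+1 headroom).
        ha_usable = total_ram - HOST_RAM_GB
        ha_vms = ha_usable // VM_RAM_GB

        # Fault tolerance: the VM runs in lockstep on two hosts, so each
        # protected VM costs roughly twice its memory across the cluster.
        ft_vms = ha_usable // (VM_RAM_GB * 2)

        print(f"HA-protected VMs that fit: {ha_vms}")   # 48
        print(f"FT-protected VMs that fit: {ft_vms}")   # 24
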
  • I read the blog post, and he's just comparing a beefy server running multiple VMs with a bunch of blade servers. How is this new? Heck, 13 years ago at a Canadian federal government job we swapped our web servers for blades.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Heck, 13 years ago at a Canadian federal government job we swapped our web servers for blades.

      Which was pretty bleeding-edge at the time, since the first blade servers shipped in 2001. So I'm not sure what your point about the government is - they weren't late to the party, far from it.

      • Heck, 13 years ago at a Canadian federal government job we swapped our web servers for blades.

        Which was pretty bleeding-edge at the time, since the first blade servers shipped in 2001. So I'm not sure what your point about the government is - they weren't late to the party, far from it.

        If I hadn't posted on this story, I would mod the above interesting. I just assumed we were at least a couple of years behind the curve. We were buying off the shelf hardware, nothing custom.

    • Nor are they "isolated". All of the blades connect to the same backplane.

      And moving VMs between individual blades is a hassle unless you use some form of shared storage, which makes them even less "isolated" but more redundant.

      This reads more like he just wanted to show off that he calls blade servers "dense isolation".

      So is it better to have a bunch of isolated servers, which reduces the VM domino effect in exchange for increased hardware maintenance? Or just a few massive servers and be ready for the 4 am

    • Blade servers blow (Score:4, Interesting)

      by swb ( 14022 ) on Friday October 17, 2014 @08:06AM (#48167591)

      I'll accept the idea that somewhere somebody has so many servers and so little space that a blade center was the only way they could achieve the density they needed.

      Except I've never seen it -- all the blade centers I've ever seen have been partially full, and the equivalent 1U and 2U servers probably would have fit in the same or less space than the blade chassis was occupying.

      And almost always there's a Mongolian clusterfuck when they decide to add blades to the chassis -- which they inevitably do, because they have so much money sunk into the blades that there's no way out from under it.

      The Mongolian clusterfuck is the result of the Byzantine configuration rules each vendor has for determining a blade's NIC or FC mapping with the blade center's (overpriced) internal switch bays. Half or full height? LoM or mezzanine slot? Which mezzanine slot? Which blade slot? Oh, you want an extra NIC on that blade? Sorry, the mapping requires an additional switching module which will cost you more than any decent 48-port gigabit L3 switch.

      Whatever the savings from the blade center (and maybe in some metered situations there's a power savings of a couple hundred watts), it's easily lost in hours of troubleshooting when trying to do something different.

      Blade centers always look like some kind of pre-virtualization version of server consolidation that became obsolete once 24U of servers could easily be run on 8U or less of VM host and SAN. They would be a lot more interesting if their mapping regimes weren't hard-wired -- but blade advocates just give me blah-blah about a switchable/configurable backplane being a "point of failure".

      • that was awesome...thanks.

      • by Jaime2 ( 824950 )

        HP Blades put a 2U server in 1.6U or a 1U server in 0.8U. The only downside is that there is very little room for local storage. If you are virtualizing, SAN storage is inevitable anyway. The power backplane is just a hunk of copper, and all the intelligent stuff is duplicated, so there isn't really a single point of failure - but I wouldn't go blades unless I was at the scale of needing at least three blade chassis, so it would be possible to shut one down and not interrupt production. The most legitimate

        • by swb ( 14022 )

          I don't doubt there are density scenarios where they make sense, especially in some kind of purpose-built farm where you can get major benefits from reduced cabling (although 10G Ethernet works to the advantage of standalone servers, too).

          But in so many installs I've seen there hasn't been a full-density buildout, ever. It's always piecemeal and always leads to head scratching and downtime to sort out port mapping and other issues.

          I had a client with older HP blade center not getting the port mapping he wa

      • I'll accept the idea that somewhere somebody has so many servers and so little space that a blade center was the only way they could achieve the density they needed.

        [...]

        The HP c-class isn't that bad. It's been pretty set-it-and-forget-it. ESX runs off an SD card (or maybe it's just a boot image; there's a VM team that deals with that stuff), and all the datastores are hosted on a SAN. The blades themselves are just compute and memory.

        Of course your original argument still stands: I've never seen a case where real estate is at such a premium that blades are the only way to go. Usually I see racks and racks of storage taking up room instead of servers, but for me the

      • We're actually moving away from blade servers to standalone servers. You can get so many cores on a chip now that each set of 8 blades can be replaced by one 2U server. Each server has a number of RAIDed 2.5" drives that we use with GlusterFS for redundancy between servers, which we then re-export over Fibre Channel so that we can get rid of our dedicated FC arrays yet continue to use the FC infrastructure we have in place.
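
        For anyone curious what a layout like that nets you in usable space, here's a rough sketch; the drive size, drive count, RAID level, and replica factor are all assumptions for illustration, not the parent poster's actual configuration:

          # Usable-capacity sketch for per-server RAID plus a replicated
          # GlusterFS volume. All numbers below are illustrative assumptions.

          DRIVE_TB = 1.2          # assumed 2.5" drive size
          DRIVES_PER_SERVER = 8   # assumed drives per 2U server
          SERVERS = 4
          GLUSTER_REPLICA = 2     # assumed replica count between servers

          # Assume RAID 10 inside each server: half the raw capacity is usable.
          raid_usable_per_server = (DRIVES_PER_SERVER * DRIVE_TB) / 2

          # Gluster replication between servers divides capacity again.
          cluster_usable = raid_usable_per_server * SERVERS / GLUSTER_REPLICA

          raw = DRIVE_TB * DRIVES_PER_SERVER * SERVERS
          print(f"Raw capacity:    {raw:.1f} TB")             # 38.4 TB
          print(f"Usable capacity: {cluster_usable:.1f} TB")  # 9.6 TB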

      • The Mongolian clusterfuck is the result of the Byzantine configuration rules each vendor has for determining a blade's NIC or FC mapping with the blade center's (overpriced) internal switch bays.

        Cisco's blades do all of this through software... you can add and delete NICs and Fibre Channel cards with a couple of mouse clicks in the Java applet that runs in the browser.

      • by Monoman ( 8745 )

        Blade servers should only be considered when you have space, power, or cooling constraints. They are more expensive and create their own specific issues, like you said.

        As for scaling up vs. scaling out in a VM cluster: I prefer more, smaller nodes over fewer, larger nodes. In a two-node VM cluster you can't exceed 50% of any resource unless you are willing to deal with oversubscribed resources during a node failure, and that isn't a very efficient use of equipment. As you add nodes you can increase
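
        To put numbers on that: assuming you want to survive one node failure without oversubscribing, the usable fraction of the cluster is (N - 1) / N, so small clusters waste a lot more headroom:

          # Maximum safe utilization of an N-node cluster that must absorb
          # one node failure without oversubscription: (N - 1) / N.
          for nodes in (2, 3, 4, 6, 8, 16):
              print(f"{nodes:2d} nodes -> {(nodes - 1) / nodes:.0%} usable")
          # 2 -> 50%, 3 -> 67%, 4 -> 75%, 6 -> 83%, 8 -> 88%, 16 -> 94%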

    • It's turtles all the way down I tell ya

    • by mlts ( 1038732 )

      It really depends on the blades and the 1U machines. Without exact machines it can be a tossup, as a blade chassis takes up a ton of rack units. Comparing HP G8 blades to HP G8 1Us, the blades will edge out if they are just being used as compute nodes, with the onboard storage used to load the hypervisor and the SAN handling everything else. However, stacking a bunch of 1U machines can be just as good, and the advantage of 1U boxes is that you don't have to worry about the server maker discontinuing
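
      The rack-unit math, as a sketch -- the chassis height and blades-per-chassis figures below are generic assumptions for illustration, not specific HP G8 numbers:

        # Rack-density comparison sketch: blade chassis vs. 1U pizza boxes.
        # Chassis size and blade count are assumptions for illustration.

        RACK_U = 42
        CHASSIS_U = 10            # assumed rack units per blade chassis
        BLADES_PER_CHASSIS = 16   # assumed half-height blades per chassis

        blade_nodes = (RACK_U // CHASSIS_U) * BLADES_PER_CHASSIS  # 4 chassis -> 64 nodes
        oneu_nodes = RACK_U                                       # one node per U

        print(f"Blade nodes per {RACK_U}U rack: {blade_nodes}")
        print(f"1U nodes per {RACK_U}U rack:   {oneu_nodes}")
        print(f"Effective U per blade:        {CHASSIS_U / BLADES_PER_CHASSIS:.2f}")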

  • Personally I keep eyeballing the SuperMicro TWIN line. Extremely dense configurations of multiple servers per unit. Spread the workload across multiple physical boxes. Use something like vCenter Server to manage the networking and other resource configurations, to simplify making them all the same and to ease migrating VMs from one physical host to another.

    • Judging from their figures, "extremely dense" ("375 GFLOPS/kW") is actually fairly mediocre, considering that the Blue Gene/Q figure with its POWER chips is several times better. That's obviously not exactly relevant for generic server workloads, but I wonder what the figures on those are.
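
      Roughly how "several times" works out, assuming the often-cited Green500 figure of about 2.1 GFLOPS/W for Blue Gene/Q (treat that as a ballpark, not a quoted spec):

        # Ratio behind the "several times better" remark. 375 GFLOPS/kW comes
        # from the parent comment; 2100 GFLOPS/kW (~2.1 GFLOPS/W) is an
        # approximate Blue Gene/Q efficiency figure, used here as a ballpark.
        twin = 375     # GFLOPS/kW, from the parent comment
        bgq = 2100     # GFLOPS/kW, approximate
        print(f"Blue Gene/Q advantage: ~{bgq / twin:.1f}x")   # ~5.6x
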
    • by enjar ( 249223 )

      We use Twins extensively in our data center and have several racks full of them. We've been using them for several generations and are pretty pleased with how they have evolved over time. We now use the Twin2 units pretty much exclusively. We like the shared, hot-swappable power supplies and 4 systems in 2U layout -- which is certainly dense enough for our needs. We also have a great local VAR (greater Boston area) who is awesome in terms of RMAs, warranty service, and no-nonsense quoting when we need new s

  • You should have your VM images on some storage system like a NetApp; this lets you transfer the entire VM to another blade if one fails. So you have two blade racks, both connected to the NetApp, with software set up to fail over all the VMs from a failed blade to a blade in the second rack. You would probably run all the blades active at half load, and on failure transfer to the alternate blade in the second rack, which then runs at full load. This protects you from a rack failure as well as an
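
    A tiny sketch of that pairing rule -- the host names and load numbers are hypothetical, just to show the half-load headroom check:

      # Active/active pairing across two racks: each blade stays at or below
      # 50% so its partner in the other rack can absorb its load on failure.
      # Hosts and loads below are hypothetical.

      pairs = {"a-blade1": "b-blade1", "a-blade2": "b-blade2"}  # rack A -> rack B
      load = {"a-blade1": 0.45, "b-blade1": 0.40,
              "a-blade2": 0.50, "b-blade2": 0.35}

      def partner_of(host: str) -> str:
          """Return the failover partner in the other rack."""
          return pairs.get(host) or next(a for a, b in pairs.items() if b == host)

      for host in load:
          ok = load[host] + load[partner_of(host)] <= 1.0
          print(f"{host}: failover {'OK' if ok else 'OVERSUBSCRIBED'}")
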
    • You should have your VM images on some storage system like a NetApp

      Nope. That's a single point of failure. You need two of those, too.

      Basically you need to have two racks in different DCs with replication between their filers

      Or you need to accept that you can't guarantee uptime

      • The SAN is usually less of a single point of failure because it usually has a lot of redundancy built in: redundant storage processors, multiple backplanes, etc. You're right that off-site replication is still important, but usually more for whole-site loss than storage loss.
        • NetApp became famous by stuffing a PC in a box, adding their software, and calling it a filer. They've advanced since then, but lots of people are still cheaping out on filers, so the point still stands.

        • by dnavid ( 2842431 )

          The SAN is usually less of a single point of failure because it usually has a lot of redundancy built in: redundant storage processors, multiple backplanes, etc. You're right that off-site replication is still important, but usually more for whole-site loss than storage loss.

          People assume the biggest source of SAN failures is hardware failure, and believe hardware redundancy makes SANs less likely to fail. In my experience, that's false. The biggest source of SAN failures is (usually human-)induced problems from the outside. Plug in the wrong FC card with the wrong firmware, knock out the switching layer. Upgrade a controller incorrectly, bring down the SAN. Perform maintenance incorrectly, wipe the array. SANs go down all the time, and often for very difficult to predict reas

      • by hawguy ( 1600213 )

        You should have your VM images on some storage system like a NetApp

        Nope. That's a single point of failure. You need two of those, too.

        Basically you need to have two racks in different DCs with replication between their filers

        Or you need to accept that you can't guarantee uptime

        You don't really need two separate filers, you just need a two headed filer to prevent any single point of failure. You can lose anything (even a controller on a disk shelf) and not even notice until the replacement is fedexed to you tomorrow.

        • by Zondar ( 32904 )

          Til you lose the site...

          • by hawguy ( 1600213 )

            You don't really need two separate filers, you just need a two headed filer to prevent any single point of failure

            Til you lose the site...

            I'm pretty sure it goes without saying that single-site redundancy means that if you lose the site, you lose everything. Though if you have a segmented datacenter, NetApp will let you separate the heads by up to 500 meters. Likewise, you can separate the disk trays, so you can lose an entire datacenter segment without losing data.

            If you want replication across sites, NetApp will be more than happy to help you out with a variety of synchronous and asynchronous replication options. For a price, of course.

  • by Idimmu Xul ( 204345 ) on Friday October 17, 2014 @06:16AM (#48167199) Homepage Journal

    is not good.

  • Simple (Score:5, Insightful)

    by ledow ( 319597 ) on Friday October 17, 2014 @06:29AM (#48167239) Homepage

    Put all your eggs in one basket.
    Then make sure you have copies of that basket.

    If you're really worried, put half the eggs in one basket and half in another.

    We need an article for this?

    Hyper-V High Availability Cluster. It's right there in Windows Server. Other OSes have similar capabilities.

    Virtualise everything (there are a lot more advantages than mere consolidation - you have to LOVE the boot-time on a VM server as it doesn't have to mess about in the BIOS or spin up the disks from their BIOS-compatible modes, etc.), then make sure you replicate that to your failover sites / hardware.

  • by Pegasus ( 13291 ) on Friday October 17, 2014 @06:43AM (#48167275) Homepage

    Look at the likes of HP Moonshot and AMD Seamicro. Those are some nice toys to play with ...

  • At most places I've worked, the situation described was requested / imposed by the PHBs and bean counters to save costs, reliability and isolation be damned.

    Lack of physical room was NEVER, I repeat, NEVER an issue. This ain't Tokyo we're talking about. If your office building doesn't have a spare room, you'll have other major issues down the road when your company expands its business.
  • He mentions just one product, while ignoring a host of other offerings.

  • In the current US situation at least, you have room to move servers from figurative jails to real ones.

  • SYS SM MicroCloud 3U SYS-5038ML-H8TRF (8 Intel Xeon E3 v3 nodes)
    CPU Intel Xeon E3-1270v3 Quad-Core 3.50GHz (3.90GHz Turbo) 8MB 80W
    MEM DDR3 1600 8GB ECC Unbuffered (32GB Per Node)
    SSD Intel 530 Series SSDSC2BW240A401 2.5 inch 240GB SATA3 Solid State Drive
    KIT SM Black Hotswap Gen 6 3.5" to 2.5" Hard Disk Drive Tray (MCP-220-93801-0B)

    $1,259 per node; $10,072 for 8 nodes.

    Please call for more info.
    rackmount server specialist 408-736-8590
    www.kingstarusa.com
