Making Best Use of Data Center Space: Density Vs. Isolation
jfruh writes: The ability to cram multiple virtual servers on a single physical computer is tempting — so tempting that many shops overlook the downsides of having so many important systems subject to a single point of physical failure. But how can you isolate your servers physically but still take up less room? Matthew Mobrea takes a look at the options, including new server platforms that offer what he calls "dense isolation."
TL;DR (Score:1)
Man buys 1/3 rack and fills it. Looks for faster servers.
Re: TL;DR (Score:4, Interesting)
Re: (Score:2)
Of course, there is the fact that a VM running with VMware's Fault Tolerance can only have one vCPU... so you can't really use it for high-availability database apps. Even a Splunk instance will set off high-CPU alarms.
There are other restrictions as well. VMware's High Availability is somewhat useful (lose a running VM and it will restart the instance)... but there is the downtime spent waiting for the VM to come up, load its stuff, and start taking requests.
All in all, it is better than nothing.
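For what it's worth, here's a rough pyVmomi sketch (the vCenter address and credentials are made up) that lists which VMs the single-vCPU FT limit would rule out:

import ssl

from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect to vCenter (hypothetical host/credentials; unverified SSL is lab-only).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk every VM and flag the ones with more than one vCPU, which the
# single-vCPU Fault Tolerance described above could not protect.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config and vm.config.hardware.numCPU > 1:
        print(f"{vm.name}: {vm.config.hardware.numCPU} vCPUs -- not eligible for FT")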
Blade Servers aren't "new server platforms" (Score:2)
Re: (Score:2, Interesting)
Heck, 13 years ago at a Canadian federal government job we swapped our web servers for blades.
Which was pretty bleeding-edge at the time, since the first blade servers shipped in 2001. So I'm not sure what your point about the government is - they weren't late to the party, far from it.
Re: (Score:2)
Heck, 13 years ago at a Canadian federal government job we swapped our web servers for blades.
Which was pretty bleeding-edge at the time, since the first blade servers shipped in 2001. So I'm not sure what your point about the government is - they weren't late to the party, far from it.
If I hadn't posted on this story, I would mod the above interesting. I just assumed we were at least a couple of years behind the curve. We were buying off the shelf hardware, nothing custom.
Mod parent up. (Score:3)
Nor are they "isolated". All of the blades connect to the same backplane.
And moving VMs between individual blades is a hassle unless you use some form of shared storage, which makes them even less "isolated" but more redundant.
This reads more like he just wanted to show off his term for blade servers, "dense isolation."
Blade servers blow (Score:4, Interesting)
I'll accept the idea that somewhere somebody has so many servers and so little space that a blade center was the only way they could achieve the density they needed.
Except I've never seen it -- all the blade centers I've ever seen have been partially full, and the equivalent 1U and 2U servers probably would have fit in the same or less space than the blade chassis was occupying.
And almost always there's a Mongolian clusterfuck when they decide to add blades to the chassis -- which they inevitably do, because they have so much money sunk into the blades that there's no way out from under it.
The Mongolian clusterfuck is the result of the byzantine configuration rules each vendor has for determining a blade's NIC or FC mapping to the blade center's (overpriced) internal switch bays. Half or full height? LOM or mezzanine slot? Which mezzanine slot? Which blade slot? Oh, you want an extra NIC on that blade? Sorry, the mapping requires an additional switching module which will cost you more than any decent 48-port gigabit L3 switch.
Whatever savings the blade center offers (and maybe in some metered situation there is a power savings of a couple hundred watts) are easily lost in hours of troubleshooting when trying to do something different.
Blade centers always look like some kind of pre-virtualization version of server consolidation that became obsolete once 24U of servers could easily be run on 8U or less of VM host and SAN. They would be a lot more interesting if their mapping regimes weren't hard-wired -- blade advocates just give me blah-blah about the "point of failure" a switchable/configurable backplane would be.
Re: (Score:2)
that was awesome...thanks.
Re: (Score:3)
HP Blades put a 2U server in 1.6U or a 1U server in 0.8U. The only downside is that there is very little room for local storage. If you are virtualizing, SAN storage is inevitable anyways. The power backplane is just a hunk of copper, and all the intelligent stuff is duplicated, so there isn't really a single point of failure - but I wouldn't go blades unless I was at the scale of needing at least three blade chassis so it would be possible to shut one down and not interrupt production. The most legitimate
Re: (Score:2)
I don't doubt there are density scenarios where they make sense, especially in some kind of purpose-built farm where you can get major benefits from reduced cabling (although 10G Ethernet works to the advantage of standalone servers, too).
But in so many installs I've seen there hasn't been a full-density buildout, ever. It's always piecemeal and always leads to head scratching and downtime to sort out port mapping and other issues.
I had a client with older HP blade center not getting the port mapping he wa
Re: (Score:2)
I'll accept the idea that somewhere somebody has so many servers and so little space that a blade center was the only way they could achieve the density they needed.
Except I've never seen it -- all the blade centers I've ever seen have been partially full, and the equivalent 1U and 2U servers probably would have fit in the same or less space than the blade chassis was occupying.
And almost always there's a Mongolian clusterfuck when they decide to add blades to the chassis -- which they inevitably do, because they have so much money sunk into the blades that there's no way out from under it.
The Mongolian clusterfuck is the result of the byzantine configuration rules each vendor has for determining a blade's NIC or FC mapping to the blade center's (overpriced) internal switch bays. Half or full height? LOM or mezzanine slot? Which mezzanine slot? Which blade slot? Oh, you want an extra NIC on that blade? Sorry, the mapping requires an additional switching module which will cost you more than any decent 48-port gigabit L3 switch.
Whatever savings the blade center offers (and maybe in some metered situation there is a power savings of a couple hundred watts) are easily lost in hours of troubleshooting when trying to do something different.
Blade centers always look like some kind of pre-virtualization version of server consolidation that became obsolete once 24U of servers could easily be run on 8U or less of VM host and SAN. They would be a lot more interesting if their mapping regimes weren't hard-wired -- blade advocates just give me blah-blah about the "point of failure" a switchable/configurable backplane would be.
The HP c-Class isn't that bad. It's been pretty much set-it-and-forget-it. ESX runs off an SD card (or maybe it's just a boot image; there's a VM team that deals with that stuff), then all the datastores are hosted on a SAN. The blades themselves are just compute and memory.
Of course, your original argument still stands; I've never seen a case where real estate is at such a premium that blades are the only way to go. Usually I see racks and racks of storage taking up room instead of servers, but for me the
Re: (Score:2)
We're actually moving away from blade servers to standalone servers. You can get so many cores on a chip now that each set of 8 blades can be replaced by one 2U server. Each server has a number of RAIDed 2.5" drives that we use with GlusterFS for redundancy between servers, which we then re-export over Fibre Channel so that we can get rid of our dedicated FC arrays yet continue to use the FC infrastructure we have in place.
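The Gluster half of that setup looks roughly like this, as a sketch (server names, brick paths, and the volume name are made up; the Fibre Channel re-export is a separate step not shown):

import subprocess

def gluster(*args):
    """Run a gluster CLI command, answering 'y' to any confirmation prompt."""
    cmd = ["gluster", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, input="y\n", text=True, check=True)

# Two-way replicated volume across two hypothetical servers; each brick lives on
# that node's local RAIDed disks. Newer Gluster releases warn that replica-2
# volumes can split-brain, hence the 'y' above (replica 3 avoids the warning).
gluster("volume", "create", "vmstore", "replica", "2",
        "server1:/data/brick1/vmstore",
        "server2:/data/brick1/vmstore")
gluster("volume", "start", "vmstore")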
Re: (Score:2)
The Mongolian clusterfuck is the result of the byzantine configuration rules each vendor has for determining a blade's NIC or FC mapping to the blade center's (overpriced) internal switch bays.
Cisco's blades do all of this through software... you can add and delete NICs and Fibre Channel cards with a couple of mouse clicks in the Java applet that runs in the browser.
Re: (Score:2)
Blade servers should only be considered when you have space, power, or cooling constraints. They are more expensive and create their own specific issues, like you said.
As far as scaling up vs. scaling out in a VM cluster goes, I prefer more, smaller nodes over fewer, larger nodes. In a two-node VM cluster you can't exceed 50% of any resource unless you are willing to deal with oversubscribed resources during a node failure. That isn't a very efficient use of equipment. As you add nodes you can increase
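The arithmetic behind that, as a quick illustrative sketch:

# Usable fraction of cluster resources if you want to ride out one node failure
# without oversubscribing: (N - 1) / N for N equally sized nodes.
for n in range(2, 9):
    usable = (n - 1) / n
    print(f"{n} nodes: keep utilization under {usable:.0%}")
# 2 nodes: 50%, 3 nodes: 67%, 4 nodes: 75% ... 8 nodes: 88%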
Re: Blade Servers aren't "new server platforms" (Score:1)
It's turtles all the way down I tell ya
Re: (Score:2)
It really depends on the blades and 1U machines. Without exact machines, it can be a tossup, as a blade chassis takes up a ton of rack units. If comparing HP G8 blades to HP G8 1Us, the blades will edge out if they are just being used as compute nodes with the onboard storage used to load the hypervisor, then they hit the SAN for everything else. However, stacking a bunch of 1U machines can be just as good, and the advantage of 1U boxes is that you don't have to worry about the server maker discontinuing
Twins (Score:2)
Personally I keep eyeballing the SuperMicro TWIN line. Extremely dense configurations of multiple servers per unit. Spread the workload across multiple physical boxes. Use something like vCenter Server to manage the networking and other resource configuration, to keep them all the same and make it easy to migrate VMs from one physical host to another.
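That last part, as a rough pyVmomi sketch (the vCenter address, VM name, and host name are made up; shared storage is assumed so only the compute side moves):

import ssl

from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab convenience only
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Hypothetical helper: first managed object of the given type with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find_by_name(vim.VirtualMachine, "app01")
target_host = find_by_name(vim.HostSystem, "twin-node-2.example.com")

# Live-migrate the VM's compute onto the other physical node; the datastore stays put.
vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(host=target_host))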
Re: (Score:3)
Re: (Score:2)
We use Twins extensively in our data center and have several racks full of them. We've been using them for several generations and are pretty pleased with how they have evolved over time. We now use the Twin2 units pretty much exclusively. We like the shared, hot-swappable power supplies and 4 systems in 2U layout -- which is certainly dense enough for our needs. We also have a great local VAR (greater Boston area) who is awesome in terms of RMAs, warranty service, and no-nonsense quoting when we need new s
Re: (Score:2)
Thinkmate
Blades (Score:2)
Re: (Score:2)
You should have your VM images on some storage system like a NetApp
Nope. That's a single point of failure. You need two of those, too.
Basically you need to have two racks in different DCs with replication between their filers
Or you need to accept that you can't guarantee uptime
Re: (Score:2)
Re: (Score:2)
NetApp became famous by stuffing a PC in a box, adding their software, and calling it a filer. They've advanced since, but lots of people are still cheaping out on filers so the point still stands
Re: (Score:2)
SANs are usually less of a single point of failure because they usually have a lot of redundancy built in: redundant storage processors, multiple backplanes, etc. You're right that off-site replication is still important, but usually more for whole-site loss than storage loss.
People assume the biggest source of SAN failures is a hardware failure, and believe hardware redundancy makes SANs less likely to fail. In my experience, that's false. The biggest source of SAN failures is problems induced from the outside, usually by humans. Plug in the wrong FC card with the wrong firmware, knock out the switching layer. Upgrade a controller incorrectly, bring down the SAN. Perform maintenance incorrectly, wipe the array. SANs go down all the time, and often for very difficult to predict reas
Re: (Score:2)
You should have your VM images on some storage system like a NetApp
Nope. That's a single point of failure. You need two of those, too.
Basically you need to have two racks in different DCs with replication between their filers
Or you need to accept that you can't guarantee uptime
You don't really need two separate filers, you just need a two-headed filer to prevent any single point of failure. You can lose anything (even a controller on a disk shelf) and not even notice until the replacement is FedExed to you tomorrow.
Re: (Score:2)
Til you lose the site...
Re: (Score:2)
You don't really need two separate filers, you just need a two-headed filer to prevent any single point of failure
Til you lose the site...
I'm pretty sure it goes without saying that single-site redundancy means that if you lose the site, you lose everything. Though if you have a segmented datacenter, NetApp will let you separate the heads by up to 500 meters. Likewise, you can separate the disk trays so you can lose an entire datacenter segment without losing data.
If you want replication across sites, NetApp will be more than happy to help you out with a variety of synchronous and asynchronous replication options. For a price, of course
this article (Score:3)
is not good.
Simple (Score:5, Insightful)
Put all your eggs in one basket.
Then make sure you have copies of that basket.
If you're really worried, put half the eggs in one basket and half in another.
We need an article for this?
Hyper-V High Availability Cluster. It's right there in Windows Server. Other OSes have similar capabilities.
Virtualise everything (there are a lot more advantages than mere consolidation - you have to LOVE the boot-time on a VM server as it doesn't have to mess about in the BIOS or spin up the disks from their BIOS-compatible modes, etc.), then make sure you replicate that to your failover sites / hardware.
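For the replication half, something along these lines (hypothetical VM and replica-server names; this drives the stock Hyper-V Replica cmdlets from the primary host rather than setting up the full failover cluster):

import subprocess

def ps(command):
    """Run a PowerShell command on this Hyper-V host."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# Point the VM at the replica server, then kick off the initial copy.
ps('Enable-VMReplication -VMName "web01" '
   '-ReplicaServerName "dr-host.example.com" '
   '-ReplicaServerPort 80 -AuthenticationType Kerberos')
ps('Start-VMInitialReplication -VMName "web01"')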
Re:Simple (Score:4, Interesting)
I have just put in a Blade / VM configuration at a school (don't ask what they were running before, you don't want to know).
Our DR plan is that we have an identical rack at another location with blades / storage / VMs / etc. on hot standby.
Our DDR (double-disaster recovery!) plan is to restore the VMs we have to somewhere else, e.g. a cloud provider, if something prevents us from operating on that plan.
The worries I have are that storage is integrated into the blade server (a SPOF on its own, but at least we have multiple blade servers mirroring that data), and that we are relying on a single network to join them.
The DDR plan is literally there for "we can't get on site" scenarios, and involves spinning up copies of instances on an entirely separate network, including external numbering. It's not a big deal for us, as we are merely a tiny school, but if even we're thinking about this and seeing those SPOFs, you'd think someone whose article makes it onto Slashdot would see it too.
All the hardware in the world is useless if that fibre going into the IT office breaks, or a "single" RAID card falls over (or the RAID even degrades, affecting performance). It seems pretty obvious. Two of everything, minimum. And thus two ways to get to everything, minimum.
If you can't count two or more of everything, then you can't (in theory) safely smash one of anything and continue. Whether that's a blade server, power cord, network switch, wall socket, building generator, or whatever, it's the same. And it's blindingly obvious why that is.
Physicalisation (Score:3)
Look at the likes of HP Moonshot and AMD SeaMicro. Those are some nice toys to play with ...
It's all about cost, not physical space. (Score:2)
Lack of physical room was NEVER, I repeat, NEVER an issue. This ain't Tokyo we're talking about. If your office building doesn't have a spare room, you'll have other major issues down the road when your company expands its business.
Reads like an ad... (Score:2)
He mentions just one product, while ignoring a host of other offerings.
Jails Are a Steal (Score:2)
In the current US situation at least, you have room to move servers from figurative jails to real ones.
Cost Calculation- 3U, 8 nodes microcloud: $10,072 (Score:1)
SYS SM MicroCloud 3U SYS-5038ML-H8TRF (8 Nodes Intel Xeon E3 v3 Nodes)
CPU Intel Xeon E3-1270v3 Quad-Core 3.50GHz (3.90GHz Turbo) 8MB 80W
MEM DDR3 1600 8GB ECC Unbuffered (32GB Per Node)
SSD Intel 530 Series SSDSC2BW240A401 2.5 inch 240GB SATA3 Solid State Drive
KIT SM Black Hotswap Gen 6 3.5" to 2.5" Hard Disk Drive Tray (MCP-220-93801-0B)
$1259 per node. $10,072 for 8 nodes
Please call for more info.
rackmount server specialist 408-736-8590
www.kingstarusa.com