Low Redundancy Data Centers? Providers Adapt As Tenants Seek Options (datacenterfrontier.com) 57
1sockchuck writes: Data center providers are offering space with less power infrastructure than traditional mission-critical facilities, citing demand from customers looking to forgo extra UPS systems and generators in return for more affordable pricing. The demand for "variable resiliency" space reflects a growing emphasis on controlling data center costs, along with a focus on application-level requirements for workloads like HPC and bitcoin mining. Data center experts differed on whether this flexible approach to design is a niche or a long-term trend. "In the next 12 months, data center operators will be challenged to deliver power to support both an HPC environment as well as traditional storage all under one roof," said Tate Cantrell, CTO at Iceland's Verne Global. "HPC will continue the trend to low resiliency options." But some requirements don't change. "Even when they say they're OK with lower reliability, they still want uptime," noted one executive.
Variable resiliency? (Score:5, Insightful)
Re: (Score:3)
Yeah, planning in advance for quantum computing, where everything is relativistic (including uptime), I just provisioned 8 identically configured cloud nodes with different variable-resiliency providers, every provider guaranteeing 75% uptime.
This gives me roughly 99.99847% uptime, given the probability that all nodes are down at the same time.
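For what it's worth, that figure is easy to sanity-check, assuming the eight nodes fail independently (the generous assumption here); a minimal sketch in Python:

    # Combined uptime of N independent nodes, each up 75% of the time.
    # The service is "down" only when every node is down simultaneously.
    per_node_uptime = 0.75
    nodes = 8

    p_all_down = (1 - per_node_uptime) ** nodes   # 0.25 ** 8, about 1.53e-5
    combined_uptime = 1 - p_all_down

    print(f"{combined_uptime:.7%}")               # ~99.9984741%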
Re: (Score:2)
You're building one? I'm only thinking about building it. Well, more like sort of thinking about it.
Re:"OK with lower reliability,they still want upti (Score:4, Insightful)
Why? Failover keeps high uptime even if you have less reliable hardware.
Unless the unreliable hardware is your power source.
Re: (Score:2)
Unless the unreliable hardware is your power source.
That's why you put your failovers in different data centers. If you spread the load over 5 different data centers, with 20% extra capacity, losing one isn't a big deal.
Especially if you go with some of the more modern plans that let you scale within moments.
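Worth spelling out the headroom math: to fully absorb the loss of one of N sites, each survivor needs 1/(N-1) spare capacity, which for five sites is 25% rather than 20%. A small sketch, with purely illustrative site counts and load figures:

    # N+1-style capacity check: can the surviving sites carry the full load
    # if one data center drops out? All numbers are illustrative.
    def survives_one_loss(total_load, sites, headroom):
        per_site_capacity = (total_load / sites) * (1 + headroom)
        return per_site_capacity * (sites - 1) >= total_load

    print(survives_one_loss(100.0, 5, 0.20))  # False: 4 * 24 = 96 < 100
    print(survives_one_loss(100.0, 5, 0.25))  # True:  4 * 25 = 100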
Wait, what? (Score:4, Insightful)
Re: (Score:3)
Again, a particular service is not mission critical, so we don't need to spend the money to make it so. Things like test servers and the like could fall into this category. After all, you really don't need 99.999997% uptime for this.
At least until the power goes out at the data center and you've got a room full of testers and developers sitting around waiting for it to come back on and there's a deadline looming. Then you don't look so smart for saving an extra few bucks a month.
Re: (Score:1)
Again, a particular service is not mission critical, so we don't need to spend the money to make it so. Things like test servers and the like could fall into this category. After all, you really don't need 99.999997% uptime for this.
At least until the power goes out at the data center and you've got a room full of testers and developers sitting around waiting for it to come back on and there's a deadline looming. Then you don't look so smart for saving an extra few bucks a month.
I wonder how dynamic services like that are. I could see, say, going with the less reliable service for 10 months and, as I approach crunch time, telling them "reliability is key" and paying the extra money for those couple of months.
It's a bit more than that. First of all, back in the old days when you had your dev and test using on-site equipment, you took those outages in stride and didn't pay for redundancy. Has there been a change in mindset? If so, why? This article suggests that some companies are still willing to give up some redundancy, so apparently the mindset hasn't changed everywhere.
Second, when it comes to dev and test, how much redundancy do you really need? You've got your source code in a git repo, right? If you'
Re: (Score:2)
... they're talking about saving a few bucks by not having redundant power supplies, not because a hard disk isn't redundant.
Your devs aren't doing shit with their git repo if your CI machine and environment are down because the power went out, and that outage killed interop and silenced the big-data clones of the production machines you use for integration testing.
You don't need 3 copies of them because you can rebuild if need be, but you still need the thing to work. The likelihood of a power failure is cons
Re: (Score:2)
In this day and age, how hard is it to spin up a new VM instance "in the cloud" and just run with it as a functional equivalent of a different network share?
Re: (Score:2)
You can't just have one sitting idle at AWS waiting for you or have some sort of CDN-esque creature where you just failover to a new DC where you've already got everything setup and spin up a VM as you need with data already synced from a main image somewhere? This seems like it'd be pretty easy to have ready and waiting and you should probably be relying on multiple data centers now if you're using the network to do your computing. Maybe I'm missing something?
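What the parent is describing is essentially a warm standby: a pre-baked machine image in another region plus a health check that launches an instance from it when the primary stops answering. A rough sketch of that pattern with boto3; the AMI ID, region, instance type, and health-check URL are placeholders, and a real setup would also need DNS failover and ongoing data sync:

    import boto3
    import urllib.request

    PRIMARY_HEALTH_URL = "https://primary.example.com/health"   # placeholder
    STANDBY_REGION = "us-west-2"                                 # placeholder
    STANDBY_AMI = "ami-0123456789abcdef0"                        # pre-baked image (placeholder)

    def primary_is_up(timeout=5):
        """Very naive health check against the primary data center."""
        try:
            with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def launch_standby():
        """Spin up one instance from the pre-synced image in the standby region."""
        ec2 = boto3.client("ec2", region_name=STANDBY_REGION)
        result = ec2.run_instances(
            ImageId=STANDBY_AMI,
            InstanceType="t3.large",
            MinCount=1,
            MaxCount=1,
        )
        return result["Instances"][0]["InstanceId"]

    if not primary_is_up():
        print("Primary unreachable, launching standby:", launch_standby())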
Re: (Score:2)
That sounds like a lot of footwork and configuration and synchronization projects required up-front - above and beyond the cost savings (meaning getting rid of your IT staff) of going to a cloud service in the first place. Hardly automatic "ready and waiting".
Re: (Score:2)
LOL I think I get it now. So, instead of this being something that, you know, doesn't really much matter, it's likely to be a bundle of crap because people are unwilling to pay? I mean, if I were already hosting in a variety of data centers (which I would be, if I were to move stuff "to the cloud") then I'd not care if one fell over - the rest can pick up the slack, and that was kind of the point of having multiples.
I am really glad that I'm retired. It sometimes, literally (true sense of the word), makes me
Re: (Score:2)
I can't quite envision exactly why you would need this, but let's say you have a compute cluster spread out across many locations. Additionally, assume that you are using this for something internal and on-demand rather than automated or in response to customer/user interaction. Things you might use AWS for, like ad-hoc queries and machine learning exercises being run by your analytics team instead of whatever it is
Re: (Score:2)
We're slowly working towards building out a genetic algorithm to plan factory resources. It works on some small tasks, but scaling it up to plan our whole operation is going to require more compute power than a single server can provide, and the problem is also well suited to being processed in a distributed environment.
The nature of the task is that there's no "right" answer, so I can see us doing something where we have 10 servers that run for 4 hours at night making a plan for the next few days, then after 4 hrs is ove
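For anyone unfamiliar with the approach, a genetic algorithm for this kind of planning boils down to: encode a candidate plan, score it with a cost function, and repeatedly breed and mutate the best candidates. A stripped-down sketch; the job/machine encoding and makespan cost function are invented stand-ins, not the parent poster's actual model:

    import random

    # Toy encoding: a "plan" assigns each of N jobs to one of M machines.
    NUM_JOBS, NUM_MACHINES = 20, 4
    JOB_HOURS = [random.randint(1, 8) for _ in range(NUM_JOBS)]

    def random_plan():
        return [random.randrange(NUM_MACHINES) for _ in range(NUM_JOBS)]

    def cost(plan):
        """Makespan: total hours on the busiest machine (lower is better)."""
        load = [0] * NUM_MACHINES
        for job, machine in enumerate(plan):
            load[machine] += JOB_HOURS[job]
        return max(load)

    def crossover(a, b):
        cut = random.randrange(1, NUM_JOBS)
        return a[:cut] + b[cut:]

    def mutate(plan, rate=0.05):
        return [random.randrange(NUM_MACHINES) if random.random() < rate else g
                for g in plan]

    def evolve(pop_size=100, generations=200):
        population = [random_plan() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=cost)
            survivors = population[: pop_size // 4]   # keep the best quarter
            children = [mutate(crossover(random.choice(survivors),
                                         random.choice(survivors)))
                        for _ in range(pop_size - len(survivors))]
            population = survivors + children
        return min(population, key=cost)

    best_plan = evolve()
    print("best makespan:", cost(best_plan))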
Windows proves the demand (Score:2)
Some people still run Windows on -servers-. This proves that reliability isn't always the most important thing.
Re:Wait, what? (Score:4, Insightful)
If you're using ONE datacenter, then yes, you need to take a good hard look at trying to save a few bucks.
But if you've got multiple data centers spread out geographically, does every one of them need 99.999% uptime? If you implement your switching and load balancing correctly, then the failure of one datacenter means you shift to another one and go on. Maybe a bit of extra latency, but if you're geographically distributed, paying for five nines at every site really doesn't make sense.
Sure, maybe your primary datacenter is 99.999% reliable. But your auxiliary ones, which exist to provide faster service to local clients, don't have to be - at worst, those clients wait a few extra milliseconds to hit your primary.
There's plenty of opportunity for non-highly-redundant services as well - perhaps you have a personal website - save a couple of bucks a month to host it on a less reliable hosting service, because you don't necessarily need it up 24/7.
So it's good for operations that are already redundant and operations that can tolerate downtime.
Maybe you have a data center and use Amazon AWS to handle overload. Well, you can downgrade the reliability of the data center knowing you can spin up more AWS instances if the primary goes down. You're already paying for both services, and they can back each other up.
It's basically RAID - redundant array of independent datacenters.
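In practice the "RAID of datacenters" idea often starts as nothing fancier than a client (or a load balancer health check) that walks an ordered list of sites until one answers. A bare-bones sketch, with made-up endpoint URLs:

    import urllib.request
    import urllib.error

    # Ordered by preference: primary first, cheaper low-redundancy sites after it.
    ENDPOINTS = [
        "https://dc-primary.example.com/api/status",
        "https://dc-east.example.com/api/status",
        "https://dc-west.example.com/api/status",
    ]

    def fetch_from_first_available(urls, timeout=3):
        """Try each site in order; return the first successful response body."""
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        return url, resp.read()
            except OSError:
                continue  # site down or unreachable, fall through to the next one
        raise RuntimeError("all datacenters unreachable")

    site, body = fetch_from_first_available(ENDPOINTS)
    print("served by", site)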
Re: (Score:2)
I can see having a redundant data center at a lower uptime rate, positioned for a few tasks:
1: For a smaller company, it can be "good enough" as a place to stash machines.
2: It could be useful for a disaster recovery center, especially if it had compute nodes ready to go, a SAN that took async replications from the main site, and other items. This helps provide geographic redundancy, although the diminished uptime might be an issue if the site becomes the primary.
3: This might be ideal for relative low
Re: (Score:2)
The real problem is illustrated in the last paragraph:
"Even when they say they're OK with lower reliability, they still want uptime,"
In other words, no matter how many times a customer says they will accept lower reliability in exchange for a lower price, the first time things go down, they are going to be screaming at you for not keeping things running perfectly all the time.
Re: Wait, what? (Score:2)
I build systems that reside in data closets and centers. With modern clustering and virtualization systems you don't need to worry too much about local redundancy. Local redundancy (dual power, network, storage, everything) is expensive for the off chance something might happen.
Then you still need a backup elsewhere for when actual physics intervenes (a lightning bolt or other natural disaster hitting the building). And usually doubling everything doesn't help much with user errors, load issues, DDoS, in fact
Disaster Recovery (Score:2)
We spend big bucks at a tier 1 data center to host our data. Our application cluster is fully redundant, and we serve large amounts of high-value data. (think: business intelligence and workflow information, not video streams)
We have a DR cluster, a back-stop for when all else has failed. Although it is a redundant copy of our production cluster, it itself is, by design, non-redundant. Where our production cluster has at least two physical machines (and often more) providing any specific service, the DR clu
Bitcoin mining? (Score:2)
Bitcoin mining is still an in-demand data center application?
Makes you wonder ... just how much fossil fuel will have been burned by the time we've "created" all of the Bitcoin there is to make?
Re: (Score:3)
My bitcoin mining rig is on a satellite orbiting mercury and is powered exclusively by the sun. I am currently the top world wide (galaxy wide?) producer of bitcoins.
Re: (Score:2)
My bitcoin mining rig is on a satellite orbiting mercury and is powered exclusively by the sun. I am currently the top world wide (galaxy wide?) producer of bitcoins.
I'm planning a dyson sphere and diverting all the energy to mining bitcoins. That should give me enough money to invest in a Kardashev type III [wikipedia.org] level bit-mining operation. It might result in the unfortunate extinction of a few planetary systems, but this is capitalism.
Maturing service tends toward commoditization. (Score:2)
As with a lot of other services (in fact, all other services that I can think of) that reach a certain level of maturity and ubiquity in the marketplace, one of two things seems to happen: large-scale consolidation that reduces the number of competitors until a small number of actors, or a single monolithic entity, remain; or a reduced perceived value of aspects of the service, leading to a bare-bones offering because customers decide they are less willing to pay for services they are not going to use very often (
Re: (Score:2)
You mean, like the fact that we now have to fill up our cars ourselves at the gas station? Next, we will go to the data center ourselves to turn the power back on...
Re: (Score:2)
You have obviously never been to New Jersey or Oregon. Or at least not gone to a gas station in either of those two states.
Re: (Score:2)
You have obviously never been to New Jersey or Oregon. Or at least not gone to a gas station in either of those two states.
I went to New Jersey once... it smelled funny. ok, I lied and spread terribly bad geographical stereotypes. Never been to NJ other than as a layover stop at Newark when flying between Europe and San Francisco.
If NJ and Oregon are still using Gas Station pump attendants, I suspect that is more about padding employment figures than service levels. Because if an attendant were adding value, they would be common in other states too.
Hmm, just heard a whooshing sound. Was that the point of the comment, flying ove
Re: (Score:2)
It's the law in those two states. I have no idea why it is the law in Oregon but I suspect it's mob-related in NJ.
Re: (Score:2)
You mean, like the fact that we now have to fill up our cars ourselves at the gas station? Next, we will go to the data center ourselves to turn the power back on...
To a large extent, yes. Although the example I initially had in mind was the airline industry - flying in the 1950s was a little bit dangerous, but also very glamorous. As it grew and became more of a mass-market thing through the '70s and '80s, competition on price became the norm, while it had previously been competition based on added-value services and the prestige of travel. The current state, with low-cost carriers and "cattle class" in every sense of the phrase, is not where we are with data centers
Re: (Score:1)
That's cachet, you imbecile. Unless you mean it's hidden.
Cost? (Score:2)
Because a few generators, transfer switches and UPSes are such a large portion of data center costs ... It's not like they are outweighed by just a single month's power bill or anything in most cases.
Increase infrastructure costs 200%-400% (Score:3)
The cost is two to four times higher. The basic idea is you can choose:
___Non-redundant A-only power____
1 router (1 amp) + 1 switch (1 amp) = 2 amps.
You need one power plant capable of providing 2 amps.
___Redundant with A/B power___
2 routers (2 amps) + 2 switches (2 amps) = 4 amps
You need two sets of power, each capable of providing the full 4 amps, so 8 amps of provisioned capacity in total.
Note that you need not twice as much power capacity, but FOUR TIMES as much, in order to have full A/B redundancy. Plus the more complex (expensive) desig
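The same comparison in a few lines, using the poster's illustrative 1-amp-per-device figures:

    # Provisioned power capacity: non-redundant vs. fully duplicated A/B design.
    # Uses the poster's illustrative figure of 1 amp per device.
    AMPS_PER_DEVICE = 1
    DEVICES = 2                                      # 1 router + 1 switch

    non_redundant = DEVICES * AMPS_PER_DEVICE        # one feed sized for the gear: 2 amps

    duplicated_load = 2 * DEVICES * AMPS_PER_DEVICE  # 2 routers + 2 switches: 4 amps of load
    ab_redundant = 2 * duplicated_load               # A and B feeds, each sized for all of it: 8 amps

    print(non_redundant, ab_redundant, ab_redundant / non_redundant)  # 2 8 4.0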
Actually they are (Score:2)
Re: (Score:2)
Don't forget the transfer switches. If those hose up, the generator may start on time to pick up the load... but if the ATS doesn't hand those kilo-amps over to the genset PDQ, it can still cause an outage, as the UPS thinks the load has moved and goes back into bypass mode.
Of course, a transfer switch can also fail outright, so it may not even shunt the load over come the time it is needed. Learned that the hard way, and you can't even get near them unless you are wearing full arc-flash gear.
Generators are not cheap. They need
Re: (Score:2)
Test that theory by doing a "DR Test." Turn everything off for two days and see what happens.
What would happen is the bill for fuel would go through the roof and a lot of people would complain about that noise - and that's if everything works perfectly. Running for a few hours is a different situation.
On the point of tests, there's a 20MW generator I saw (an old jet engine) which was tested every month for about 20 years without many problems. On the one day in its life that it was needed (to get coal conveyors and sootblowers running on a small city-based power station unit when the local grid had
I can see a real business need for this (Score:2)
Maybe it's a new business venture from the same guys who run Jiffy Express [imdb.com]?
advertising vs reality (Score:3)
Why would you want to pay for 99.99% uptime when it's rarely provided?
Depends (Score:1)
Depends on the service. Many things can easily be distributed over a few datacenters. 10 cheap servers with so-so uptime may be better than one with a very high uptime from a cost point of view.
Still better than the mop closet? (Score:2)
This seems like the kind of thing that would benefit some SMBs I've worked with.
I've worked with several that have awkwardly architected homegrown applications or very low-quality vertical market applications that they're highly dependent on. The net result is applications which don't translate well to cloud-hosted scenarios for various reasons, or at least not at cost levels that make any sense.
Colocation would work, but datacenters' relentless focus on being super duper redundant makes them too expensive