Low Redundancy Data Centers? Providers Adapt As Tenants Seek Options (datacenterfrontier.com)

1sockchuck writes: Data center providers are offering space with less power infrastructure than traditional mission-critical facilities, citing demand from customers looking to forgo extra UPSes and generators in return for more affordable pricing. The demand for "variable resiliency" space reflects a growing emphasis on controlling data center costs, along with a focus on application-level requirements like HPC and bitcoin mining. Data center experts differed on whether this trend toward flexible design was a niche or a long-term trend. "In the next 12 months, data center operators will be challenged to deliver power to support both an HPC environment as well as traditional storage all under one roof," said Tate Cantrell, CTO at Iceland's Verne Global. "HPC will continue the trend to low resiliency options." But some requirements don't change. "Even when they say they're OK with lower reliability, they still want uptime," noted one executive.
  • by sinij ( 911942 ) on Thursday December 10, 2015 @04:18PM (#51097035)
    Now that variable resiliency data centers are finally available, I can run my sometimes available services in the partially secure cloud space I am building.
    • by ls671 ( 1122017 )

      Yeah, planning in advance for quantum computing where everything is relativistic, including uptime, I just provisioned 8 identically configured cloud nodes with different variable resiliency providers, each guaranteeing 75% uptime.

      This allows me to have 99.9984741% uptime, given the probability that all nodes are down at the same time.
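
      For anyone checking the arithmetic, here's a minimal Python sketch (it assumes the eight nodes fail independently, which no provider actually guarantees):

        # Combined uptime of N independent nodes, each up 75% of the time.
        per_node_uptime = 0.75
        nodes = 8
        p_all_down = (1 - per_node_uptime) ** nodes   # 0.25 ** 8
        combined_uptime = 1 - p_all_down
        print(f"{combined_uptime:.7%}")               # -> 99.9984741%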

    • You're building one? I'm only thinking about building it. Well, more like sort of thinking about it.

  • Wait, what? (Score:4, Insightful)

    by HideyoshiJP ( 1392619 ) on Thursday December 10, 2015 @04:25PM (#51097081)
    I'm reading this as: "Well, we need to have redundancy, and we're already ponying up this much money, but how can we spend less and still say we're 'redundant'?" I'm not faulting the datacenters for offering such a service, but the customers should really take a hard look in the mirror.
    • Again, a particular service is not mission critical, so we don't need to spend the money to make it so. Things like test servers and the like could fall into this category. After all, you really don't need 99.999997% uptime for this.

      At least until the power goes out at the data center and you've got a room full of testers and developers sitting around waiting for it to come back on and there's a deadline looming. Then you don't look so smart for saving an extra few bucks a month.

      I wonder how dynamic the services like that are. I could see, say, going with the less reliable service for 10 months and, as I approach crunch time, telling them "reliability is key" and paying the extra money for those couple months.

      • by Anonymous Coward

        Again, a particular service is not mission critical, so we don't need to spend the money to make it so. Things like test servers and the like could fall into this category. After all, you really don't need 99.999997% uptime for this.

        At least until the power goes out at the data center and you've got a room full of testers and developers sitting around waiting for it to come back on and there's a deadline looming. Then you don't look so smart for saving an extra few bucks a month.

        I wonder how dynamic the services like that are. I could see, say, going with the less reliable service for 10 months and, as I approach crunch time, telling them "reliability is key" and paying the extra money for those couple months.

        It's a bit more than that. First of all, back in the old days when you had your dev and test using on-site equipment, you took those outages in stride and didn't pay for redundancy. Has there been a change in mindset? If so, why? This article suggests that some companies are still willing to give up some redundancy, so apparently the mindset hasn't changed everywhere.

        Second, when it comes to dev and test, how much redundancy do you really need? You've got your source code in a git repo, right? If you'

        • ... they're talking about saving a few bucks by not having redundant power supplies, not because a hard disk isn't redundant.

          Your devs aren't doing shit with their git repo if your CI machine and environment are down because the power went out, your interop environment is dead, and the big data clones of the production machines you use for integration testing are silent.

          You don't need 3 copies of them because you can rebuild if need be, but you still need the thing to work. The likelihood of a power failure is cons

          • by KGIII ( 973947 )

            In this day and age, how hard is it to spin up a new VM instance "in the cloud" and just run with it as a functional equivalent of a different network share?

      • I see this being used for more distributed tasks and for tasks not tied to the core business.

        I can't quite envision exactly why you would need this, but let's say you have a compute cluster spread out across many locations. Additionally, assume that you are using it for something internal and on-demand rather than automated or in response to customer/user interaction. Things you might use AWS for, like ad-hoc queries and machine learning exercises run by your analytics team, instead of whatever it is

        • We're slowly working towards building out a genetic algorithm to plan factory resources. It works on some small tasks but scaling it up to plan our whole operation is going to require more compute power than a single server can provide and is also well suited to being processed in a distributed environment.

          The nature of the task is that there's no "right" answer so I can see us doing something where we have 10 servers that run for 4 hours at night making a plan for the next few days, then after 4 hrs is ove
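
          A toy sketch of that "independent runs, best plan wins" scheme in Python; the plan encoding and fitness function below are made-up stand-ins, not an actual factory planner:

            import random
            import time

            def evolve(seed, budget_secs=2.0, pop_size=50, genes=20):
                """One 'server': evolve plans for a fixed wall-clock budget."""
                rng = random.Random(seed)
                # A plan is just a vector here; the objective is a dummy stand-in.
                fitness = lambda plan: -sum((x - 0.5) ** 2 for x in plan)
                pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
                deadline = time.time() + budget_secs
                while time.time() < deadline:
                    pop.sort(key=fitness, reverse=True)
                    survivors = pop[:pop_size // 2]
                    # Refill the population with mutated copies of random survivors.
                    pop = survivors + [
                        [x + rng.gauss(0, 0.05) for x in rng.choice(survivors)]
                        for _ in range(pop_size - len(survivors))
                    ]
                best = max(pop, key=fitness)
                return fitness(best), best

            # Ten independent "servers" run overnight; the highest-scoring plan wins.
            best_score, best_plan = max(evolve(seed) for seed in range(10))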

    • Some people still run Windows on -servers-. This proves that reliability isn't always the most important thing.

    • Re:Wait, what? (Score:4, Insightful)

      by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Thursday December 10, 2015 @05:58PM (#51097541)

      I'm reading this as: "Well, we need to have redundancy, and we're already ponying up this much money, but how can we spend less and still say we're 'redundant'?" I'm not faulting the datacenters for offering such a service, but the customers should really take a hard look in the mirror.

      If you're using ONE datacenter, then yes, you need to take a good hard look at trying to save a few bucks.

      But if you've got datacenters geographically spread out, or even just multiple data centers, do you need 99.999% uptime at each one? If you implement your switching and load balancing correctly, then the failure of one datacenter means you shift to another one and go on. You might eat a bit of extra latency, but if you're geographically distributed, that barely matters.

      Sure, maybe one of your datacenters, your primary one, is 99.999% reliable. But your auxiliary ones, which serve to provide faster service to local clients, don't have to be - at worst, those clients wait a few more milliseconds to hit your primary.

      There's plenty of opportunity for non-highly-redundant services as well - perhaps you have a personal website; save a couple of bucks a month by hosting it on a less reliable hosting service, because you don't necessarily need it up 24/7.

      So it's good for operations that are already redundant and operations that can tolerate downtime.

      Maybe you have a data center and use Amazon AWS to handle overload. Well, you can downgrade the reliability of the data center knowing you can spin up more AWS instances if the primary goes down. You're already paying for both services, and each can back up the other.

      It's basically RAID - redundant array of independent datacenters.
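
      A minimal client-side sketch of that failover idea in Python (the hostnames are invented; in practice you'd usually push this down into DNS or a load balancer rather than application code):

        import urllib.request

        # Hypothetical endpoints, ordered cheapest/closest first.
        ENDPOINTS = [
            "https://cheap-dc.example.com/",
            "https://primary-dc.example.com/",
        ]

        def fetch():
            """Return a response from the first datacenter that answers."""
            last_err = None
            for url in ENDPOINTS:
                try:
                    with urllib.request.urlopen(url, timeout=2) as resp:
                        return resp.read()
                except OSError as err:  # URLError and timeouts subclass OSError
                    last_err = err
            raise RuntimeError("all datacenters are down") from last_err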

      • by mlts ( 1038732 )

        I can see having a redundant data center at a lower uptime rate, positioned for a few tasks:

        1: For a smaller company, it can be "good enough" as a place to stash machines.

        2: It could be useful for a disaster recovery center, especially if it had compute nodes ready to go, a SAN that took async replications from the main site, and other items. This helps provide geographic redundancy, although the diminished uptime might be an issue if the site becomes the primary.

        3: This might be ideal for relatively low

    • The real problem is illustrated in the last paragraph:

      "Even when they say they're OK with lower reliability, they still want uptime,"

      In other words, no matter how many times a customer says they will accept lower reliability in exchange for a lower price, the first time things go down, they are going to be screaming at you for not keeping things running perfectly all the time.

    • I build systems that reside in data closets and centers. With modern clustering and virtualization systems you don't need to worry too much about local redundancy. Local redundancy (dual power, network, storage, everything) is expensive for the off chance something might happen.

      Then you still need a backup elsewhere for when actual physics intervenes (a lightning bolt or other natural disaster hitting the building). And usually doubling everything doesn't help much with user errors, load issues, DDoS, in fact

    • We spend big bucks at a tier 1 data center to host our data. Our application cluster is fully redundant, and we serve large amounts of high-value data. (think: business intelligence and workflow information, not video streams)

      We have a DR cluster, a back-stop for when all else has failed. Although it is a redundant copy of our production cluster, it itself is, by design, non-redundant. Where our production cluster has at least two physical machines (and often more) providing any specific service, the DR clu

  • Bitcoin mining is still an in-demand data center application?

    Makes you wonder... just how much fossil fuel will have been burned by the time we've "created" all of the Bitcoin there is to make?

    • by ls671 ( 1122017 )

      My bitcoin mining rig is on a satellite orbiting mercury and is powered exclusively by the sun. I am currently the top world wide (galaxy wide?) producer of bitcoins.

      • by Chrisq ( 894406 )

        My bitcoin mining rig is on a satellite orbiting mercury and is powered exclusively by the sun. I am currently the top world wide (galaxy wide?) producer of bitcoins.

        I'm planning a Dyson sphere and diverting all the energy to mining bitcoins. That should give me enough money to invest in a Kardashev type III [wikipedia.org] level bitcoin-mining operation. It might result in the unfortunate extinction of a few planetary systems, but this is capitalism.

  • As with a lot of other services (in fact, all other services that I can think of) that reach a certain level of maturity and ubiquity in the marketplace, one of two things seems to happen: large-scale consolidation reduces the number of competitors until a small number of actors, or a single monolithic entity, remains; or the perceived value of aspects of the service drops, leading to bare-bones offerings, because customers decide they are less willing to pay for services they are not going to use very often (

    • by ls671 ( 1122017 )

      You mean, like the fact that we now have to fill up our cars ourselves at the gas station? Next, we will go to the data center ourselves to turn the power back on...

      • by KGIII ( 973947 )

        You have obviously never been to New Jersey or Oregon. Or at least not gone to a gas station in either of those two states.

        • You have obviously never been to New Jersey or Oregon. Or at least not gone to a gas station in either of those two states.

          I went to New Jersey once... it smelled funny. OK, I lied and spread terribly bad geographical stereotypes. I've never been to NJ other than as a layover at Newark when flying between Europe and San Francisco.
          If NJ and Oregon are still using gas station pump attendants, I suspect that is more about padding employment figures than about service levels. If an attendant were adding value, they would be common in other states too.
          Hmm, just heard a whooshing sound. Was that the point of the comment, flying ove

          • by KGIII ( 973947 )

            It's the law in those two states. I have no idea why it is the law in Oregon but I suspect it's mob-related in NJ.

      • You mean, like the fact that we now have to fill up our cars ourselves at the gas station? Next, we will go to the data center ourselves to turn the power back on...

        To a large extent, yes. Although the example I initially had in mind was the airline industry - flying in the 1950s was a little bit dangerous, but also very glamorous. As it grew and became more of a mass-market thing through the '70s and '80s, competition on price became the norm, where it had previously been competition on added-value services and the prestige of travel. The current state, with low-cost carriers and "cattle class" in every sense of the phrase, is not where we are with data centers

        • by Anonymous Coward

          That's cachet, you imbecile. Unless you mean it's hidden.

  • Because a few generators, transfer switches and UPSes are such a large portion of data center costs... It's not like they are outweighed by just a single month's power bill or anything in most cases.

    • The cost is two to four times higher. The basic idea is you can choose:

      ___Non-redundant A-only power____
      1 router (1 amp) + 1 switch (1 amp) = 2 amps.
      You need one power plant capable of providing 2 amps.

      ___Redundant with A/B power___
      2 routers (2 amps) + 2 switches (2 amps) = 4 amps
      You need two sets of power, each capable of providing the 4 amps, so 8 amps total.

      Note you need not twice as much power capacity, but FOUR TIMES as much in order to have full A/B redundancy. Plus the more complex (expensive) desig
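
      The same arithmetic, generalized in a few lines of Python (the amp figures are the illustrative numbers above, not real equipment specs):

        def capacity_amps(device_amps, redundant):
            """Total power-plant capacity needed for a list of device loads."""
            if not redundant:
                return sum(device_amps)              # one feed, one set of gear
            doubled = [a * 2 for a in device_amps]   # duplicate every device
            return 2 * sum(doubled)                  # two feeds, each full load

        print(capacity_amps([1, 1], redundant=False))  # 2 amps
        print(capacity_amps([1, 1], redundant=True))   # 8 amps -> four times as much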

    • I looked into it a while ago, when I had unreliable power and outages lasting more than an hour, well beyond UPS capacity. Once you get beyond just having to fill in for a few minutes, the generators get expensive, and then you need a plant mechanic to keep the generators in working order, fuel storage, a test regime, heaps of electrical work, etc. A purpose-built large-scale data centre can be expected to have such things and absorb the cost, but adding it in after the fact with something that is not h
      • by mlts ( 1038732 )

        Don't forget the transfer switches. If those hose up, the generator can start on time to pick up the load... but if the ATS doesn't hand those kilo-amps over to the genset PDQ, it can still cause an outage, because the UPS thinks the load has moved and goes back into bypass mode.

        Of course, transfer switches can fail too, so the load may not even get shunted over come the time it's needed. Learned that the hard way - and you can't even get near them unless you're wearing full arc-flash gear.

        Generators are not cheap. They need

  • Maybe it's a new business venture from the same guys who run Jiffy Express [imdb.com]?

  • by Gravis Zero ( 934156 ) on Thursday December 10, 2015 @05:12PM (#51097305)

    Why would you want to pay for 99.99% uptime when it's rarely provided?

  • by Anonymous Coward

    Depends on the service. Many things can easily be distributed over a few datacenters. Ten cheap servers with so-so uptime may, from a cost point of view, be better than one server with very high uptime.

  • This seems like the kind of thing that would benefit some SMBs I've worked with.

    I've worked with several that have awkwardly architected homegrown applications or very low-quality vertical market applications that they're highly dependent on. The net result is applications which don't translate well to cloud-hosted scenarios for various reasons, or at least not at cost levels that make any sense.

    Colocation would work, but datacenters' relentless focus on being super duper redundant makes them too expensive
