
Can Maintenance Make Data Centers Less Reliable?

miller60 writes "Is preventive maintenance on data center equipment not really that preventive after all? With human error cited as a leading cause of downtime, a vigorous maintenance schedule can actually make a data center less reliable, according to some industry experts. 'The most common threat to reliability is excessive maintenance,' said Steve Fairfax of 'science risk' consultant MTechnology. 'We get the perception that lots of testing improves component reliability. It does not.' In some cases, poorly documented maintenance can lead to conflicts with automated systems, he warned. Other speakers at the recent 7x24 Exchange conference urged data center operators to focus on understanding their own facilities, and then evaluating which maintenance programs are essential, including offerings from equipment vendors."
  • In between maybe? (Score:5, Insightful)

    by anarcat ( 306985 ) on Sunday November 27, 2011 @02:20PM (#38182956) Homepage

    Maybe there's a sweet spot between "no testing at all" and "replacing everything every three months"? In my experience, there is a lot of work to do in most places to make sure that proper testing is done, or at least that emergency procedures are known and people are well trained in them. Very often documentation is lacking and the onsite support staff have no clue where that circuit breaker is. That is the most common scenario in my experience, not overzealous maintenance.

    • Re:In between maybe? (Score:5, Interesting)

      by Elbereth ( 58257 ) on Sunday November 27, 2011 @02:40PM (#38183084) Journal

      I suppose that I'd agree. Back in the early 90s, I inherited from a friend a fear of rebooting, turning off, or performing maintenance on a computer. Half the time he opened the case, the computer would become unbootable or never turn back on. Luckily, as a talented engineer, he could usually fix whatever the problem was, but it was a huge pain in the ass. Of course, back then, commodity computer hardware was hugely unreliable, with vast gaps in quality between price ranges, and we were working with pretty cheap stuff.

      Still, to this day, I dread the thought of turning off a computer that has been working reliably. You never know when some piece of crap component is nearing the end of its life, and the stress of a power cycle could be what pushes it over the edge into oblivion (or highly unreliable behavior). I used to be fond of constantly messing with everything, fixing it until it broke, but his influence moderated that impulse in me, to the point where I usually freak out when anyone suggests unnecessarily rebooting a computer.

      Sure, there's something to be said for preventive maintenance, and I'd rather be caught with an unbootable PC during regularly scheduled maintenance than suddenly experience catastrophic failure at random, but there's also something to be said for just leaving the thing alone and not messing with it. Every time you touch that computer, there's a slight chance that you'll accidentally delete a critical file or directory, pull out a cable, or knock loose a power connector. The fewer times you come into contact with the thing, the better. If I could build a force field around every PC, I probably would.

      • by mehrotra.akash ( 1539473 ) on Sunday November 27, 2011 @03:02PM (#38183210)

        fixing it until it broke

        That's the spirit!!

        • Re:In between maybe? (Score:5, Interesting)

          by greenfruitsalad ( 2008354 ) on Sunday November 27, 2011 @08:25PM (#38185330)

          I can't agree. I used to, but now I can't afford to.

          We recently experienced two catastrophes (datacentre-wide downtimes, you know, the things that NEVER happen) and the results were unbelievable. GRUB failed to load the OS on some machines, others were left without a bootloader (due to emergency disk hotswaps), some simply didn't turn on, services didn't autostart, a few virtual servers autostarted on multiple hosts (instead of just one), fsck on some of our volumes took hours to finish, 30% of our Supermicro IPMI cards were unresponsive, etc. It revealed that almost nobody had followed procedures properly.

          After that, every single service we have was rebuilt in a clustered manner with nodes spread across multiple datacentres. I now restart machines and pull cables at regular intervals to test BGP/OSPF, clustering and recoveries, to check filesystems, and so on. I am now also ABLE TO SLEEP.
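
          A minimal sketch of that kind of scheduled failure drill, assuming hypothetical host names, passwordless SSH, and a per-node checklist of TCP services; a real drill would of course use the site's own inventory and orchestration tooling:

            #!/usr/bin/env python3
            # Failure-drill sketch: reboot one node at a time and verify that the
            # services it hosts come back (or fail over) within a deadline.
            # Host names and ports below are hypothetical placeholders.
            import socket
            import subprocess
            import time

            DRILL_PLAN = {  # node -> (host, port) checks that must answer afterwards
                "node-a.example.net": [("vip.example.net", 443), ("node-a.example.net", 22)],
                "node-b.example.net": [("vip.example.net", 443), ("node-b.example.net", 22)],
            }

            def port_open(host, port, timeout=5.0):
                """True if a TCP connection to host:port succeeds."""
                try:
                    with socket.create_connection((host, port), timeout=timeout):
                        return True
                except OSError:
                    return False

            def drill(node, checks, deadline_s=900):
                """Reboot one node over SSH, then poll its checks until the deadline."""
                subprocess.run(["ssh", node, "sudo", "reboot"], check=False)
                start = time.time()
                while time.time() - start < deadline_s:
                    if all(port_open(h, p) for h, p in checks):
                        print(f"{node}: all checks green after {int(time.time() - start)}s")
                        return True
                    time.sleep(30)
                print(f"{node}: FAILED drill, checks still red after {deadline_s}s")
                return False

            if __name__ == "__main__":
                for node, checks in DRILL_PLAN.items():
                    drill(node, checks)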

      • Re:In between maybe? (Score:4, Informative)

        by sphealey ( 2855 ) on Sunday November 27, 2011 @03:02PM (#38183212)

        ===
        Back in the early 90s, I inherited from a friend a fear of rebooting, turning off, or performing maintenance on a computer. Half the time he opened the case, the computer would become unbootable or never turn back on.
        ===

        Neither you nor your friend are alone in thinking that:

        AD-A066579, RELIABILITY-CENTERED MAINTENANCE, Nowlan & Heap, (DEC 1978) [this used to be available for download from the US Dept of Commerce web site; now appears to be behind a US government paywall (!)]

        A more recent summary:

        http://reliabilityweb.com/index.php/articles/maintenance_management_a_new_paradigm/ [reliabilityweb.com]

        sPh

      • by 9jack9 ( 607686 )
        Are you sure you're on the right web site?
      • Sometimes software/OSes need at least a soft reboot from time to time to clean up stuck software and remove memory leaks.

        Now, some stuff like firmware updates may need a hard reboot.

        As for power cycling, sometimes you need to do it to get back into a crashed system.

        • Sometimes software/OSes need at least a soft reboot from time to time to clean up stuck software and remove memory leaks.

          What operating system are you using, Windows 98? The worst case I've had with Linux (CentOS 5.4) is NFS locking up and taking one of the CPUs for a ride, jumping the load to around 8 or 9. Even then, a little patience killed the process after a while. There have been times when it was FASTER to hard boot (but the risks suck), but most modern applications and operating systems ON THE S

            • Well, some Windows updates still need reboots; it's less than it was in the past, but still more than with Linux.

              Also, a lot of non-OS software updates/installers at least say they need a reboot.

          • by Bigbutt ( 65939 ) on Sunday November 27, 2011 @04:21PM (#38183714) Homepage Journal

            You must not deal with any Oracle database servers. They leak like a sieve.

            [John]

            • by afidel ( 530433 )
              Or Java; we run all the big enterprise application servers, and they all run considerably better if they're rebooted on a regular basis.
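
              One lighter-weight alternative to rebooting whole boxes on a calendar is a memory watchdog that bounces only the leaky service. A sketch only: it assumes systemd, the third-party psutil package, and a hypothetical unit name and threshold; a real deployment would also rate-limit restarts and page someone.

                #!/usr/bin/env python3
                # Restart a leaky service when its resident memory crosses a limit.
                # Sketch: the unit name, process name and threshold are placeholders.
                import subprocess
                import psutil  # third-party: pip install psutil

                SERVICE = "example-appserver.service"   # hypothetical systemd unit
                PROCESS_NAME = "java"                   # process whose memory we watch
                LIMIT_BYTES = 6 * 1024**3               # restart above ~6 GiB total RSS

                def total_rss(name):
                    """Sum resident memory of all processes with the given name."""
                    return sum(p.info["memory_info"].rss
                               for p in psutil.process_iter(["name", "memory_info"])
                               if p.info["name"] == name and p.info["memory_info"])

                if __name__ == "__main__":
                    rss = total_rss(PROCESS_NAME)
                    if rss > LIMIT_BYTES:
                        print(f"{PROCESS_NAME} RSS {rss / 1024**3:.1f} GiB over limit; restarting {SERVICE}")
                        subprocess.run(["systemctl", "restart", SERVICE], check=True)
                    else:
                        print(f"{PROCESS_NAME} RSS {rss / 1024**3:.1f} GiB, within limit")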
          • NFS locking up is ultimately a part of the spec. It was originally a stateless filesystem that operated over UDP. Unless you're using a more recent revision of the protocol and have it configured as such, you're going to have issues with it locking up regularly.
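
            Along those lines, a small check (a sketch; it assumes a Linux host that exposes mount options in /proc/mounts) to flag NFS mounts still on NFSv3 or UDP, the configurations most prone to the lock-ups described above:

              #!/usr/bin/env python3
              # Flag NFS mounts still using NFSv3 or UDP. Sketch: assumes a Linux host
              # where /proc/mounts lists the negotiated mount options.
              def risky_nfs_mounts(mounts_path="/proc/mounts"):
                  risky = []
                  with open(mounts_path) as f:
                      for line in f:
                          _, mountpoint, fstype, options = line.split()[:4]
                          if not fstype.startswith("nfs"):
                              continue
                          opts = dict(o.split("=", 1) if "=" in o else (o, "")
                                      for o in options.split(","))
                          vers = opts.get("vers", opts.get("nfsvers", "?"))
                          proto = opts.get("proto", "?")
                          if proto == "udp" or vers.startswith("3"):
                              risky.append((mountpoint, vers, proto))
                  return risky

              if __name__ == "__main__":
                  for mountpoint, vers, proto in risky_nfs_mounts():
                      print(f"{mountpoint}: vers={vers} proto={proto} -- consider vers=4,proto=tcp")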

          • by PCM2 ( 4486 )

            Desktop PCs and servers seem to have largely overcome the need to reboot regularly, but other segments of the industry seem to be moving backwards. My Android handset actually says in the manual that you should power cycle it regularly. With a firmware upgrade, it even started giving me a warning from time to time, telling me I had not power cycled the phone in X amount of time and that I should do that now or risk instability. (Am I crazy for assuming that a phone OS is a markedly less complex environmen

      • Re:In between maybe? (Score:5, Interesting)

        by mspohr ( 589790 ) on Sunday November 27, 2011 @03:57PM (#38183564)
        Do you know why satellites last so long in a hostile environment?... because nobody touches them.

        "If it's not broken, don't fix it."

        • by CyprusBlue113 ( 1294000 ) on Sunday November 27, 2011 @04:26PM (#38183742)

          Do you know why satellites last so long in a hostile environment?... because nobody touches them.

          "If it's not broken, don't fix it."

          Actually, I'm pretty sure it's the millions that are spent engineering each individual one so that it specifically can survive many years in said hostile environment.

          If we spent anywhere near that much time and money on proper engineering, everyday crap would be pretty damn reliable too, just not nearly as cost effective.

          • Actually, the components would be pretty close to as cost effective. The executives just wouldn't get their uber-fat rewards.

        • by jaymz666 ( 34050 )

          How many are hooked into the Internet, running an OS that gets attacked?

      • by dave562 ( 969951 )

        I am still that way with firmware upgrades. I think it probably has something to do with our generation. In the 90s, computer hardware was touchy and was expensive to replace. If you're like me, you probably grew up blowing into Nintendo game cartridges when they did not work. But back to firmware, I only upgrade it when necessary. Over the last fifteen years I have seen too many firmware upgrades bork hardware that was working just fine. With security patches I do them monthly, but not firmware. And

      • One bad automated update can leave your system hosed, or cause obscure reliability problems that may not show up for a while; the worst ones again leave you with little option but to rebuild the system.

        So I turn off auto update on everything I can, and manually update periodically. I consider the security risk smaller this way. I get it stable, and let it run that way for a few months at least. Then update security fixes etc.

        • Absolutely correct. My servers run for many years, till the hardware eventually fails, with zero updates and zero restarts. Don't fix it if it ain't broke.
      • by Lorens ( 597774 )
        I've seen for myself that hard disks that run for a long time (years) have problems starting up again after a power off. I've long supposed that it had to do with some bearings wearing out or oil getting used up. RAID is of course the correct answer to that, but even if I have to offline a service for some reason, I've gotten into the habit of not powering off the second side of a HA pair until the first one is safely back up.
      • by mabhatter654 ( 561290 ) on Sunday November 27, 2011 @06:21PM (#38184530)

        If that's the case, you don't have CONTROL over your equipment.

        That was acceptable for Windows 95 but not even for desktop PCs anymore, let alone server equipment. My opinion is that your equipment isn't stable UNTIL you can turn it off and on again reliably. And yes... that is an ENORMOUS amount of work.

        If you can't reliably replace individual pieces then you don't have control for maintenance... sure, you can stick your head in the sand and just not touch anything... but that's just piling up all the things you didn't take time to figure out until some critical time later.

      • Comment removed based on user account deletion
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        If the new or refurbished electronics you're buying are THAT unreliable, why the !%!@#$!@%! are you using them?

        If a router fails to come up because a cap is ready to blow, what happens when it blows WHILE IT'S RUNNING?

        I had that happen with 2 Cisco ASA firewalls. One was 5 years old, the other was a few months old. They were using HSRP and decided that fighting among each other for control was a great idea, because one of the ports was going out. We took the old one offline; it wouldn't turn on anymore. The new one? Wo

      • I have seen the same: my long-running hard drives wouldn't start up again. When I opened one of these dead drives, the head was stuck to the boot sector. I assumed that since the boot sector is never used during the 3-5 years of operation, it collects some debris that the read head sticks to when the drive is stopped and cools down.

        Sometimes moving the case is enough to make bad connections appear (or the opposite: when a computer is brought in for maintenance, it starts to work), but pressing down all the cards and c

  • by sandytaru ( 1158959 ) on Sunday November 27, 2011 @02:23PM (#38182982) Journal
    I believe the article is referring to major hardware replacements, stress testing, etc. But there is other preventative or even detective work that needs to be done in data centers large and small that has nothing to do with the equipment itself. You can't just blithely assume that things are always going to work as they are supposed to work. One time, we discovered that the camera server for one of our clients had stopped recording for no good reason, and upon closer inspection discovered that the hard drive had failed and we had no alert system in place, since it wasn't a "real" server but just a heavy-duty XP machine. After that blunder, I was asked to check on all the camera servers once a week and make sure I could actually open up and view recordings from days past. This is a preventative action, but not really a maintenance one.
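
    That weekly manual check is easy to approximate with a small script, sketched here under the assumption that recordings land as files under a known directory; the path and age limit are hypothetical placeholders:

      #!/usr/bin/env python3
      # Check that a recording server produced fresh files recently; meant to be
      # run from cron or a monitoring system. The directory path and age limit
      # are hypothetical placeholders.
      import sys
      import time
      from pathlib import Path

      RECORDING_DIR = Path("/srv/camera/recordings")  # hypothetical path
      MAX_AGE_HOURS = 24

      def newest_file_age_hours(directory):
          """Age in hours of the newest file under directory (inf if none)."""
          mtimes = [p.stat().st_mtime for p in directory.rglob("*") if p.is_file()]
          return (time.time() - max(mtimes)) / 3600 if mtimes else float("inf")

      if __name__ == "__main__":
          if not RECORDING_DIR.is_dir():
              print(f"ALERT: {RECORDING_DIR} is missing")
              sys.exit(1)
          age = newest_file_age_hours(RECORDING_DIR)
          if age > MAX_AGE_HOURS:
              print(f"ALERT: newest recording is {age:.1f}h old (or none found)")
              sys.exit(1)  # non-zero exit lets cron/monitoring raise an alert
          print(f"OK: newest recording is {age:.1f}h old")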
    • by belrick ( 31159 )

      After that blunder, I was asked to check on all the camera servers once a week and make sure I could actually open up and view recordings from days past. This is a preventative action, but not really a maintenance one.

      No, it's not preventative. It does nothing to prevent the problem. It detects the problem earlier (before, say, a business user does). That's monitoring. It's proactive, not reactive - perhaps that's what you mean?

      • by sphealey ( 2855 )

        ===
        No, it's not preventative. It does nothing to prevent the problem. It detects the problem earlier (before, say, a business user does). That's monitoring. It's proactive, not reactive - perhaps that's what you mean?
        ===

        It is deeply unclear whether what is traditionally termed "preventative maintenance" (intrusive work involving disassembly, eyeballing, software probing, etc.) actually improves reliability over condition monitoring tests followed by break-fix work as described by the parent post. More

        • https://en.wikipedia.org/wiki/Preventive_Maintenance_Checks_and_Services [wikipedia.org]

          Or to use a more common example, think about changing the oil in your car every (time interval) or (distance interval). Will it stop failures? Maybe. Maybe not.

          On the other hand, every time you "work" on a system you introduce entropy.

          As long as you remove more entropy than you introduce, you should have a more reliable system (than if you hadn't worked on it at all). But that gets into the training/knowledge of the person performing th

    • by bussdriver ( 620565 ) on Sunday November 27, 2011 @02:57PM (#38183174)

      Planned obsolescence has been promoted in all aspects of life since the end of WW2, and now it is hard to imagine the world without it. That line of thinking has been creeping into everything, even into areas where it doesn't seem to apply.

      Does this play a role in the perception of preventative maintenance, or in how often it is applied? I think it probably does in at least a couple of ways, don't you?

    • Internal monitoring of components is a lot better now than it used to be. We used to go around and check all of the supply fans once a month, because they were the highest-failure-rate component on the desktops we were using and there was no indication of trouble until the machine started crashing from overheating.
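
      As an illustration of what that internal monitoring can replace: on a Linux box with lm-sensors-style hwmon drivers, fan speeds are exposed under /sys/class/hwmon and can be polled instead of eyeballed. A sketch; sensor availability and sane RPM thresholds vary by hardware.

        #!/usr/bin/env python3
        # Poll chassis fan speeds from Linux hwmon sysfs instead of checking them
        # by hand. Sketch: assumes hwmon fan sensors exist; the RPM threshold is
        # a placeholder that varies by hardware.
        from pathlib import Path

        HWMON = Path("/sys/class/hwmon")
        MIN_RPM = 500  # hypothetical "fan is probably dying" threshold

        def fan_speeds():
            """Yield (sensor path, rpm) for every fan*_input the kernel exposes."""
            if not HWMON.is_dir():
                return
            for fan_input in HWMON.glob("hwmon*/fan*_input"):
                try:
                    yield str(fan_input), int(fan_input.read_text().strip())
                except (OSError, ValueError):
                    continue

        if __name__ == "__main__":
            for sensor, rpm in fan_speeds():
                print(f"{sensor}: {rpm} RPM {'LOW!' if rpm < MIN_RPM else 'ok'}")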

  • Security updates (Score:5, Informative)

    by bjb_admin ( 1204494 ) on Sunday November 27, 2011 @02:26PM (#38182990)
    Sometimes I get the feeling that, in most cases, security updates cause more problems than the issues they fix.

    I can think of many occasions that a security update has broken a server/router/etc. Obviously the lack of a security update can lead to a bigger headache in the future. But the typical user doesn't understand and has the attitude "IT broke the server again".

    If a virus or hacker causes an issue the attitude is "I hope they fix that soon. I hate viruses/hackers" (obviously this is a huge generalization).
    • by kasperd ( 592156 )

      Sometimes I get the feeling that, in most cases, security updates cause more problems than the issues they fix.

      Some vendors will push security updates and other updates through the same channel, sometimes without the user even knowing whether a particular update fixes a security problem. Occasionally the update will even be installed without the user's consent.

      If it was just a matter of only installing updates to fix known security problems and no other changes were made to the software, I think cases

  • by sphealey ( 2855 ) on Sunday November 27, 2011 @02:26PM (#38182994)

    ===
    "Is preventive maintenance on data center equipment not really that preventive after all? With human error cited as a leading cause of downtime, a vigorous maintenance schedule can actually make a data center less reliable, according to some industry experts.'
    ===

    It isn't just human error: the very act of performing intrusive tasks under the theory of "preventative maintenance" can greatly reduce the reliability of systems built of reasonably reliable components. This was studied extensively by the US airlines, the US FAA, and later the USAF in the 1970s, when the concept of reliability-centered maintenance was developed for turbine engines and eventually full airliners. Look up the classic report by Nowlan & Heap. Very much counter-intuitive if one has been trained to believe in the classics of "preventative teardowns" and fully known failure probability distribution functions, but it matches up well with what experienced field mechanics have been saying since the days of pyramid construction.

    sPh

    Of course, today there is a huge "RCM" consulting industry, 7-step programs, etc that bears little resemblance to the original research and theories; don't confuse that with the core work.

  • by ExtremeSupreme ( 2480708 ) on Sunday November 27, 2011 @02:27PM (#38183000)
    That being said, it was because their procedures were shit, not because they were doing maintenance.
    • by crankyspice ( 63953 ) on Sunday November 27, 2011 @02:51PM (#38183156)

      That being said, it was because their procedures were shit, not because they were doing maintenance.

      Actually, no, the Chernobyl disaster was sparked by a 'live' test of a new, untested mechanism for powering reactor cooling systems in the event of a disaster that brought down the power grid. http://en.wikipedia.org/wiki/Chernobyl_disaster#The_attempted_experiment [wikipedia.org] (And even that test was delayed several hours, into a shift of workers who weren't properly prepared to conduct it.)

      • by jbengt ( 874751 )

        Actually, no, the Chernobyl disaster was sparked by a 'live' test of a new, untested mechanism for powering reactor cooling systems in the event of a disaster that brought down the power grid.

        Like the GP said, it was because their procedures were shit, not because they were doing maintenance.

      • Actually, it was pretty much everything. Flawed design, untested technology, substandard construction, cost cutting, politics galore, arrogant and overconfident leaders, nuclear cowboys with something to prove, unqualified staff, etc.

        The disaster itself was huge, but I'm surprised that nuclear power has been as safe as it is, given how bad humans are at coping with such enormous responsibilities. Looking back at the Communist years, having a culture driven by financial profit doesn't seem as bad as a

    • Unfortunately, that is where real world data and experience comes into play.

      GIVEN that their procedures were shit, maintenance actually made things worse and thus caused Chernobyl.

      Now theoretically, you just need better procedures to make maintenance be a net positive. However, that doesn't change the advice that you shouldn't do such maintenance... GIVEN that your procedures are bad.

      Given that humans are error prone and the IT procedures are shit, it is probably good advice to not do the maintenance.

      • by vlm ( 69642 )

        GIVEN that their procedures were shit, maintenance actually made things worse and thus caused Chernobyl.

        I'm guessing you were going for the sarcasm points, but for those who don't know about nuke eng as much as myself and presumably scamper: they had perfectly good procedures for engineering evaluation of experiments that they mysteriously chose not to follow, and there was no maintenance involvement at all. It's the opposite of what he was claiming.

        The quickie one-liner of what happened is that an RBMK has an extremely sensitive control loop by the very nature of what it means to be an RBMK, and the engineers who know

        • by vlm ( 69642 )

          I thought about it a bit, and it would have worked on a PWR, but it would have worked REALLY WELL on a BWR, if you can survive the pressure fluctuations, which it probably could have. Of course if they're not running the reactivity control loop past engineering, they're not going to run the hydrodynamic control loop past engineering, so they might have industriously found a way to blow themselves up that way too.

          PWRs are dead stable, not terribly sexy, quite heavy and bulky, and have more moving parts. If

  • by Anonymous Coward on Sunday November 27, 2011 @02:33PM (#38183040)

    The guy at the garage always recommends I do an $80 transmission flush.

  • by Anonymous Coward on Sunday November 27, 2011 @02:37PM (#38183072)

    Seriously...I sometimes think the average IQ is dropping on a daily basis (and, yes, I get the irony)...Both with what I read, and my own experiences working in IT, I become more and more convinced that society will eventually collapse under the weight of bad advice from consultants (and, no, I don't own a fallout shelter)...and I spend more and more time thinking about ways that I can profit off of the stupidity of leadership.

    • Seriously...I sometimes think the average IQ is dropping on a daily basis (and, yes, I get the irony)...Both with what I read, and my own experiences working in IT, I become more and more convinced that society will eventually collapse under the weight of bad advice from consultants (and, no, I don't own a fallout shelter)...and I spend more and more time thinking about ways that I can profit off of the stupidity of leadership.

      Read it [google.com] and grab your tin foil and duct tape. You're gonna need lots.

  • In days of old, running "big iron" from Control Data and Cray, the worst days of system instability were those following "preventive maintenance". Plus ça change....
  • by DragonHawk ( 21256 ) on Sunday November 27, 2011 @02:44PM (#38183110) Homepage Journal

    From TFS:

    "... poorly documented maintenance can lead to conflicts with automated systems ..."

    That doesn't mean maintenance makes datacenters less reliable. It means cluelessness makes datacenters less reliable.

    Sheesh.

  • by Animats ( 122034 ) on Sunday November 27, 2011 @02:46PM (#38183120) Homepage

    There's something to be said for this. Back when Tandem was the gold standard of uptime (they ran 10 years between crashes, and had a plan to get to 50), they reported that about half of failures were maintenance-induced. That's also military experience.

    The future of data centers may be "no user serviceable parts inside". The unit of replacement may be the shipping container. When 10% or so of units have failed, the entire container is replaced. Inktomi ran that way at one time.

    You need the ability to cut power off of units remotely, very good inlet air filters to prevent dust buildup, and power supplies which meet all UL requirements for not catching fire when they fail. Once you have that, why should a homogeneous cluster ever need to be entered during its life?

    • by DarthBart ( 640519 ) on Sunday November 27, 2011 @03:13PM (#38183274)

      There's also been a shift in the mentality of how well computers operate. It went from not tolerating any kind of downtime to the Windows mentality of crashing and "That's just how computers are".

      • by brusk ( 135896 )
        I think that predates Windows. Crashes of various kinds were frequent on Apple IIs, Commodores, etc. You just got used to various reboot/retry routines.
        • If you read the Unix Haters Handbook, you'll see that they note, many times, that "Unix boots fast".

          I wasn't using computers (much less "real" computers) at that time, so I don't know how accurate their description is. But they describe exactly that shift in mentality, and well before Windows' time. Then the free Unixes came, and everything changed...

    • The future of data centers may be "no user serviceable parts inside". The unit of replacement may be the shipping container. When 10% or so of units have failed, the entire container is replaced. Inktomi ran that way at one time.

      I saw a while back (probably a year or two ago) that this is the way Microsoft will run (runs?) their Azure systems.

      By the time 10% of the units are not working, it may be time to upgrade to the latest technology anyway. If you exclude disks, then I am certain you could run such a container for more than 10 years that way.

    • What if ten out of 100 units have failed only because a $200 hard drive died in each one? Does that mean that the whole $100,000 cluster needs to be replaced? Spending $100,000 instead of $2,000 is not a great decision.

      • by Lennie ( 16154 )

        My guess is the logic here is:

        If, after a couple of years of running a bunch of systems, 10% have already failed (within a fairly short window), the others will probably follow soon.

        • That logic applies to mechanical parts and electronic circuits that operate way above their current limit. It is not valid for electronic circuits operating with reasonable currents.

          Try "If after a couple of years 10% has already failed, we still have a couple of years until the next 9% (10% of 90%) fail".

  • Any maintenance done wrong or in excess can be more damaging to a system than no maintenance.
  • If it isn't broken, don't fix it.

  • by zensonic ( 82242 ) on Sunday November 27, 2011 @02:50PM (#38183148) Homepage

    ... is actually quite simple: You keep your hands off the systems. Period.

    In detail, you plan, install and _test_ your setup before it enters production. You make sure that you can survive whatever you throw at it with respect to errors and incidents. You then figure out how much downtime you are allowed to have according to the SLA. You then divide this number into equal-sized maintenance windows together with the customer. And then you adhere to those windows! No manager should ever be allowed to demand downtime out of band. Period. In between, you basically minimize your involvement with the systems and plan your activities for the next scheduled closing window.

    And of course you only deploy stable, tried and tested versions of software and operating systems. And even though your OS supports online capacity expansion on the fly, you really shouldn't use that capability unless you absolutely have to. Instead, you plan ahead in your capacity management procedure and add capacity in the closing windows. And you do not test and rehearse failures! It only introduces risk... besides, you have already tested and documented them. And as you haven't changed the configuration, there is no need to test again.

    So in essence: common sense will easily yield 99.9%. Careful planning and execution will yield 99.99%. The really hard part is 99.999%... /zensonic
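
    For reference, those availability figures translate into surprisingly small annual downtime budgets, which is what makes the planning above hard. A quick sketch; the number of windows per year is an arbitrary example:

      #!/usr/bin/env python3
      # Translate availability targets into an annual downtime budget and into
      # evenly sized maintenance windows. Window count is an arbitrary example.
      MINUTES_PER_YEAR = 365.25 * 24 * 60
      WINDOWS_PER_YEAR = 4  # e.g. one planned window per quarter

      for availability in (0.999, 0.9999, 0.99999):
          budget_min = (1.0 - availability) * MINUTES_PER_YEAR
          per_window = budget_min / WINDOWS_PER_YEAR
          print(f"{availability * 100:.3f}% -> {budget_min:7.1f} min/year "
                f"({per_window:6.1f} min per window x {WINDOWS_PER_YEAR})")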

    • by Smallpond ( 221300 ) on Sunday November 27, 2011 @03:03PM (#38183222) Homepage Journal

      Which means that for every online server you need an offline test machine -- and a way to simulate the operating environment in order to test. Not many companies have the skill or cash to do that.

      • You only need a server for each item that is different. So if you standardize on hardware / OS then you only need 1 server to test hardware drivers and OS updates and so forth.

        Beyond that, you really should have a test database system and a test app system. You never want to deploy updates into a production environment without going through a test system first (which is NOT the same as a development environment).

        Virtual systems can help a lot with the server requirements. But you still need to understand th

    • And I would have my own personal unicorn that craps Skittles on demand. Also, I could eat candy and poop diamonds.

      Meanwhile, here in the real world... systems experience unexpected failures that will require them to be patched/rebooted/etc at the most inconvenient of times.

  • Obviously not doing maintenance is much worse than the risk incurred by doing maintenance. However, in the two years of using the datacenter my company relies on, I can say the only three major outages have been due to non-routine maintenance. One was during a power upgrade: the datacenter supposedly has redundant power company connections, but the plug was somehow pulled during the upgrade anyhow. Another was during network maintenance: our dual redundant internet connections turned out not to be so redundant wh

    • ===
      Obviously not doing maintenance is much worse than the risk incurred by doing maintenance.
      ===

      That's far from obvious, actually, and is demonstrably wrong for many types of systems and installations.

      sPh

    • by dave562 ( 969951 )

      It is time to switch data centers. We're with AT&T (and AT&T is far from the best) and they do quarterly power tests without a single problem. They've done core infrastructure router upgrades with zero down time. All in all, I'm very happy with the service. Any competent co-lo provider should be able to handle the issues you've had without any hiccups.

  • by vlm ( 69642 ) on Sunday November 27, 2011 @02:52PM (#38183158)

    Check your transfer switch ratings. I guarantee it will be spec'd for far fewer operations than you think. The electricians think it'll only be switched a couple of times in its life. The diesel service provider thinks you're running it twice a week. Whoops. If you run it once a week, it'll only survive a couple of years, and then you'll get a facility-wide multi-hour outage. I've personally seen it over and over again over the past two decades. The best part is "we have a procedure", so it'll only be run during maintenance hours, and the desk jockeys 200 miles away will run it rain or shine, so it's guaranteed that the transfer switch destroys itself at 2 am during a blizzard and takes half a day to repair.

    Very few transfer switches are more reliable than commercial utility power. Installing a UPS actually lowers reliability in almost all professional situations.

    My favorite power outage was caused by a gas leak a couple of blocks away, where the utility co shut down the AC power and then threatened to take an axe to the generator/UPS if it was not also shut off. This was not in the official written report, just word of mouth.
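
    The claim that extra gear in the power path can lower availability is just series-availability arithmetic: everything the power must flow through (or switch across) adds its own failure probability. A toy calculation with made-up numbers, which deliberately ignores the credit a working generator path earns back during utility outages:

      #!/usr/bin/env python3
      # Toy series-availability calculation: a transfer switch and UPS sit in the
      # power path, so their own unavailability counts against you. All figures
      # are made up for illustration; a full model would also credit the backup
      # path with covering utility outages.
      def series(*avail):
          """Availability of components that must all work (series chain)."""
          total = 1.0
          for a in avail:
              total *= a
          return total

      UTILITY = 0.9995       # hypothetical utility feed
      XFER_SWITCH = 0.999    # hypothetical transfer switch
      UPS = 0.999            # hypothetical UPS

      print(f"utility alone:             {UTILITY:.6f}")
      print(f"utility + transfer switch: {series(UTILITY, XFER_SWITCH):.6f}")
      print(f"utility + switch + UPS:    {series(UTILITY, XFER_SWITCH, UPS):.6f}")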

  • by Vellmont ( 569020 ) on Sunday November 27, 2011 @03:00PM (#38183196) Homepage

    I read through the entire article, and saw zero data to support his assertion. I'm sure he has the data, but the article didn't reference a single piece of it. Without any data to support the theory all we have is a fluff opinion piece. Shame on Data Center Knowledge for writing an article about a scientific investigation, and not presenting a single piece of scientific evidence.

  • by igb ( 28052 ) on Sunday November 27, 2011 @03:04PM (#38183228)
    Some years ago, the F1 rules were changed so that cars were in parc ferme conditions, with strict limits on what can be done to them, from the start of qualifying on Saturday lunchtime until the race finishes on Sunday afternoon.

    The purpose was partly to stop qualifying being its own arms race, with cars in completely different specification than for the race, and partly to reduce costs and the number of travelling staff. At the same time, "T Cars" --- a third car, available as a spare --- were banned, so that if a driver destroys a car in practice the team either have to rebuild it or not race. They're allowed to travel with a spare monocoque, but it cannot be built-up and it does not get pit space.

    There were endless howlings from the teams, claiming that without a complete strip-down after qualifying, with a large crew working overnight to check everything on the car, reliability would go through the floor and races would finish with only a handful of stragglers fighting a durability battle (our US viewers may find this ironic in light of a certain US Grand Prix, of course).

    The same argument was advanced, mutatis mutandis, over limitations on engines and gearboxes, limitations on the number of gear clusters available, limitations on certain forms of telemetry and a wide variety of "the cars can't just be left to run themselves, you know" interventions.

    In fact, reliability is now far greater than ten years ago. It's not uncommon for there to be no mechanical retirements, certainly not from the longer-standing teams, and the days of engines imploding on the track are long gone. A front-running driver will probably only have one, if even that, mechanical DNF per season. The teams deliver a functioning car when the pit lane opens at 1pm Saturday, and that car then runs twenty or thirty laps in qualifying and sixty or seventy in the race, a total of perhaps 250 miles, without much maintenance work beyond tyres, fluids and batteries (section 34.1 on page 18 of the sporting regulations [fia.com]).

    So again, we see that "preventative maintenance" turns out to really be "provocative maintenance", and leaving working machines alone is the best medicine for them.

    • by scattol ( 577179 ) on Sunday November 27, 2011 @04:23PM (#38183716)
      Those cars, to be competitive, were engineered to fall apart just past the finish line. Without maintenance they would have failed. They are now engineered to last a few races instead of just one. Odds are they are slightly slower in one way or another, but since it's a level playing field, it doesn't matter.
      • by vlm ( 69642 )

        Also don't forget driving style is modified. "F it, you guys are going to tear the thing down anyway so I'm gonna lean it out to sneak in a couple more laps per tank" is replaced with babying the car.

    • So they removed the maintenance between qualifying and race. That does not mean that the race team does not do "maintenance" between races. Yes there is "less maintenance" but not "no maintenance".

    • Another way of looking at it, is that it doesn't matter what the rules are, so long as everyone follows the same rules.

      Shorter races allow people to run closer to the edge, longer races require conservative builds. In the end, all the teams have the same problems to overcome, so in theory, nobody has the competitive advantage.

      This is a different situation than, say, international trade, where one country is forced to meet safety regulations and another is not. Now that's when you end up with "wrecks and b

  • by John Hasler ( 414242 ) on Sunday November 27, 2011 @03:06PM (#38183242) Homepage

    ...is the one farthest from the nearest engineer."

    Consider the Pioneer and Voyager spacecraft and the Mars landers.

  • Battery maintenance / changing out old batteries is needed, as a dying battery can fail to work when needed or, at worst, can explode.

  • http://www.theatlantic.com/magazine/archive/1998/03/the-lessons-of-valujet-592/6534/ [theatlantic.com]

    William Langewiesche, Atlantic Monthly, "The Lessons of ValuJet 592". The flight was basically done in because it was transporting safety equipment itself, which was vulnerable to a hard-to-predict failure. The more complex we make air travel, with its multiple checks and layers of protection, the more opportunities for failure. Adding another check to avoid another 592, as they did, creates yet another opportunity.

    It is, as they say, a H

  • by __aajwxe560 ( 779189 ) on Sunday November 27, 2011 @03:35PM (#38183390)
    Having been involved in Technical Ops at both large and small companies for many years, I have seen DR exercises and designs that run the gamut. The key thing I have found for the success of any organization, exercise, or philosophy is the underlying process that drives execution. The larger the team/org, the more change points, which in turn leads to more variables between tests. This creates complexity, as a test that ran fine a few months ago may not run the same today. However, ensuring that change does not outrun the process of understanding and applying it to the greater design is key to ensuring each test improves upon the last, until the process converges.

    For example, when working for one of the big 401k providers, the first DR exercise evaluated the data center being completely leveled and re-locating both technical services and the ~300 on-site employees to another location. Long story short, the first run of this exercise was scheduled for 2 days, and while it worked, we identified dozens of issues. We scheduled the next test 6 months later and addressed what we believed were all of the issues; on the next test, we ran into perhaps ~10 issues. The next test we scheduled 3 months out and ran into ~2 issues. All the while, things continue to change and innovation keeps occurring; change control ensures that new things are factored into the continual DR process/exercise. For a small telecom I worked for, the same type of testing was accomplished with a ~2-3 week turnaround time (smaller team, fewer change points, more dynamic response), but with the same underlying principles.

    Documentation of such things is critical, and employee turnover is often one of the greatest risk points. Having a diversified staff with overlapping knowledge should minimize the latter risk to some degree, and if implemented fully, risk should be diminished.

    So how does all this tie back into maintenance? Well, it is anticipated that if any system runs long enough, there will be opportunity for failure. With preparation for when such failure occurs, one can balance the capability of providing a measured window of downtime (if any) with some degree of predictability (i.e., I test once a quarter). The counter to this can certainly be overzealous maintenance, so there is a point to being reasonable. For example, consider what many of us go through with our cars - the dealer wants us to come in every 3k miles for an oil change, whereas realistically most manufacturers' guidance and my own experience dictate that ~5k (if not longer, depending on circumstance) is much more cost effective. Either way, this provides some degree of confidence that it should prolong engine life.
    • by vlm ( 69642 )

      For example, consider what many of us go through with our cars - the dealer wants us to come in every 3k miles for an oil change, whereas realistically most manufacturers' guidance and my own experience dictate that ~5k (if not longer, depending on circumstance) is much more cost effective.

      LOL, old Saturn cars were famous for a valve issue where the engine suddenly starts to burn about a quart of oil per 1k miles after about 125k miles of service... engine capacity is 4 quarts of oil... "most" owners don't even know what a dipstick is, much less how to read it... lots of Saturn engines died with an empty oil pan...

      You'd be surprised how often the manufacturers actually know what they're doing with stuff like that.

      Something I never understood about that whole mentality.. pay the bank $40000 in pa

  • If "maintenance" means doing a forklift upgrade of all the computer and networking equipment every year or two then of course your reliability is going to suck, especially the human error factor with all of that new, unfamiliar equipment.

    On the other side of things if someone thinks that never changing the oil in the generator is going to make it more reliable then they're in for a surprise. When I think about datacenter "maintenance" I think: changing the CRAC air filters, cleaning any outdoor coils, chang

  • The Nowlan & Heap report is a bit heavy to read, but there is an illuminating web-page here: http://www.mutualconsultants.co.uk/rcm.html [mutualconsultants.co.uk] that conveys the essence.

    See especially the sections "How equipment fails" and "Operating Context and Functions"

    • by sphealey ( 2855 )

      Add some really heavy-duty math to that with "Mathematical Aspects of Reliability-Centered Maintenance" by H. L. Resnikoff! Way over my head, but the basic idea is simple: in a reasonably well-designed system with reasonably reliable components, you have the least information about the thing that interests you most, failure rates. That makes standard probability-distribution failure analysis virtually impossible (even if one discards the questionable "everything has a bathtub failure distribution" assumpt

  • by petes_PoV ( 912422 ) on Sunday November 27, 2011 @04:08PM (#38183622)
    Although everyone makes mistakes, some people make hundreds of times more errors than others. Whether that's due to an inherent lack of ability, poor training, lack of oversight, laziness, time pressure or just a slapdash attitude varies with each person. One place I was involved with (as an external consultant) made over 12,000 changes to their production systems every year. It turned out that well over half of those were backing out earlier changes, correcting mistakes/bugs from earlier "fixes", or other activities that should not have happened and could have been prevented (a lot of which resulted in downtime, far too much of it unscheduled or emergency downtime).
    • by gweihir ( 88907 )

      Was just writing my posting while you did yours. I could not agree more. Additional aspects are
      - Engineers and managers that try to justify their existence by performing a lot of maintenance
      - Incompetence due to bad training, arrogance and inexperience

      Example: I recently pulled an Ethernet cable with a broken connector out of a mission-critical server (it was not in production; we were reviewing cabling correctness). It turns out that some brain-dead person had done the cabling with old, used cables. One minute of downtime

    • This sounds very much like a phenomenon I called "al dente programming"; throw code at a problem until something sticks. There is very little thought to the consequences of actions and assumptions that any issues will be fixed later. If people would just slow down a bit, do some research and think about the action maybe they would be starting fewer fires that need to be put out. It is circular logic; I don't have time to think about something because it is a fire but not thinking about something creates fir

  • If you have rushed, underqualified people do the maintenance, then sure, it decreases reliability. If you have careful, non-rushed and competent people doing it, I doubt very much that the same is true. These people tend to be a bit more expensive, but cutting costs in the wrong places is a traditional occupation of managers in IT.

    • by sphealey ( 2855 )

      ===
      If you have rushed, underqualified people do the maintenance, then sure, it decreases reliability. If you have careful, non-rushed and competent people doing it, I doubt very much that the same is true.
      ===

      Go read some of the original references on Reliability Centered Maintenance, particularly the Nowlan & Heap report referenced upthread by multiple posters. Your basic assumption has been shown to be very often incorrect in practice.

      sPh

      • by gweihir ( 88907 )

        I think what is more likely is that competence is overstated in practice. Nobody will admit they use low-competence people for difficult jobs. Also, competent people will take a mostly hands-off approach to maintenance. Of course, if maintenance always means "change something", the assumption is wrong. But that approach to maintenance is the wrong one in the first place.

  • All I want are systems with interchangeable fan/air inlet filters on the outside of the case that do not require a tool to remove and replace - let alone a power cycle. Is that so much to ask?

    It's funny...I have cases where that sort of filter exists for a bottom-mounted power supply, but the case's own fans? Have to take 'em apart to properly clean the filters. And please don't say "Just lug a vacuum cleaner around." - they rarely do a good job if they actually are "luggable"; they (in a rare phrase) d
  • ... a very long time ago, we had a saying about this.

    "It's called preventive maintenance because it prevents the computer from working".

  • I remember they found that reducing the bank-cleaning frequency increased the reliability of Strowger exchanges (old-school mechanical phone exchanges).
  • Whenever I hear this meme touted (and I've heard it a *lot* over the last 40 years) I immediately think -- someone wants to shave a few maintenance dollars, trading short-term gain for long-term pain.

    Your money, your choice.

