
Software-Defined Data Centers Might Cost Companies More Than They Save 173

Posted by timothy
from the bottlenecks-and-gradients dept.
storagedude writes "As more and more companies move to virtualized, or software-defined, data centers, cost savings might not be one of the benefits. Sure, utilization rates might go up as resources are pooled, but if the end result is that IT resources become easier for end users to access and provision, they might end up using more resources, not less. That's the view of Peder Ulander of Citrix, who cites the Jevons Paradox, a 150-year-old economic theory that arose from an observation about the relationship between coal efficiency and consumption. Making a resource easier to use leads to greater consumption, not less, says Ulander. As users can do more for themselves and don't have to wait for IT, they do more, so more gets used. The real gain, then, might be that more gets accomplished as IT becomes less of a bottleneck. It won't mean cost savings, but it could mean higher revenues."
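Ulander's Jevons Paradox argument can be sketched with a toy constant-elasticity demand model (all numbers are illustrative assumptions, not from the article — Jevons' original observation concerned coal, not IT):

```python
def demand(price, k=100.0, elasticity=1.5):
    """Constant-elasticity demand: quantity consumed rises as price falls."""
    return k * price ** (-elasticity)

def total_spend(price):
    """Total expenditure on the resource at a given effective price."""
    return price * demand(price)

# Suppose virtualization halves the effective cost of provisioning a "server".
before = total_spend(1.0)   # 100.0
after = total_spend(0.5)    # ~141.4

# With elasticity > 1, consumption grows faster than the price falls,
# so total spend goes UP despite the efficiency gain: Jevons' point.
```

With an elasticity below 1 the effect reverses and total spend falls, which is why the paradox holds only when demand for the resource is strongly responsive to its cost.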
  • by cosm (1072588) <.moc.liamg. .ta. .3msoceht.> on Sunday July 28, 2013 @11:12PM (#44409675)
    GTFO.
    • by bhcompy (1877290)
      Well, it's not the IT people, rather the Information Technology part.
      • Re: (Score:3, Insightful)

        by Anonymous Coward
        "Well, it's not the IT people, rather the Information Technology part."

        Cost savings are irrelevant when the data centre operators are outright price-gouging.

        The world’s largest tech companies have failed to justify their Australian pricing regimes, with a 12-month government inquiry into the matter finding that Australians pay more for products for little to no legitimate reason. In a report, the committee found that Australians pay anywhere between 50 to 100 per cent more for IT-related goods than our overseas counterparts.

        http://www.businessspectator.com.au/news/2013/7/29/technology/it-price-inquiry-spells-out-australia-tax [businessspectator.com.au]

      • by MrMickS (568778)

        Well, it's not the IT people, rather the Information Technology part.

        Sorry, but frequently it's the people.

        - It's those people that are in IT because it's a career that will earn them a living, rather than because they have a gift for it.
        - It's those people that blindly follow rules because they only know how, they don't know why.
        - It's those people who only have a round peg and try to use it to fill every hole, whatever the shape.
        - It's those people that decide to implement things from scratch rather than build on experience gathered elsewhere.
        - It's those people that are more concerne

    • by LordLucless (582312) on Sunday July 28, 2013 @11:23PM (#44409723)

      Yes. That doesn't mean that it's IT's fault. At my current workplace, we have 150+ people, and 2 IT people. Getting stuff through IT is slow. However, the problem isn't with IT - they don't get to set their own budget.

      • Re: (Score:3, Funny)

        At my current workplace, we have 150+ people, and 2 IT people

        As time marches on, people are becoming more IT literate and IT is becoming more people literate. In 20 years, those 2 IT people will be sitting in the basement playing Halo 16 justifying their existence by requiring a backup person to hold the passwords for the network infrastructure.

        • by Zaelath (2588189) on Sunday July 28, 2013 @11:56PM (#44409847)

          As time marches on, people are becoming more IT literate

          Hahahaha

          http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect [wikipedia.org]

        • by theshowmecanuck (703852) on Monday July 29, 2013 @01:37AM (#44410147) Journal
          We allowed a senior developer to manage his own AWS EC2 instance for development of a server. Then we noticed a few hundred gigs of data moving through the server when he wasn't around. We shut it down and audited it. There were ports and security vulnerabilities exposed that just shouldn't have been, because he had set it up to be easy for development, etc. The IT bottleneck was removed. So was a good chunk of money from the company, paying for some people at a number of Chinese IP addresses to move data through our servers (that costs on cloud services, ya know). Just because people know how to make things work doesn't mean they know how to do so safely. Nor does it mean they have the inclination to learn to do it safely. I don't like data Nazis any more than the rest. But they serve a purpose that no one else is willing to take on. Let the average computer users control things, and your company will be fucked.
          • by Anonymous Coward on Monday July 29, 2013 @03:23AM (#44410367)

            I can give you a counter example. We had a Cloud Ops team whose task was exactly to prevent the stuff you described.
            Great. Except they didn't even know how to set up an EC2 instance with EBS. They also couldn't provide the EC2 instance types that were needed.
            So in the end, we just worked around them. Instead of taking *days* to explain to them what we needed, we had our EC2 instance running in 5 minutes, exactly how we needed it.

            • by Jawnn (445279)

              I can give you a counter example. We had a Cloud Ops team whose task was exactly to prevent the stuff you described. Great. Except they didn't even know how to set up an EC2 instance with EBS. They also couldn't provide the EC2 instance types that were needed. So in the end, we just worked around them. Instead of taking *days* to explain to them what we needed, we had our EC2 instance running in 5 minutes, exactly how we needed it.

              Whoever picked/assigned that team failed, because the "team" obviously lacked the skill set required to do the job, just like the developer in GP's example lacked those skills. That's an argument for better training, but hardly an argument for placing the management of IT resources in the hands of end users.

          • by mjwalshe (1680392)
            Well obviously your company has a strange view of what a senior developer is if he made those trivial errors.

            When I set up an AWS-based system for part of Reed Elsevier, I made damn sure that I was only running the services I needed, and I locked it down so that only people coming from our IP could access the web side of the system. And both I and my manager kept a strict eye on what we were spending on AWS.

            Ideally I would have liked to lock it down further with SSL certs to secure the link between my s
        • by SuricouRaven (1897204) on Monday July 29, 2013 @04:58AM (#44410553)

          They are becoming just IT literate enough to be a problem.

          Any idiot can set up their own department server now. But that idiot won't know how to configure firewall rules, stop unneeded services or make sure patches are up-to-date.

          Any idiot can move their data around on USB sticks and dropbox - and this will greatly increase productivity, as they subvert the frustrating demands of IT to keep all confidential data within the office and start catching-up at home and on the commute too. Until someone loses the stick or has their laptop stolen, leaving half your customer database floating around the street somewhere.

          • by MickLinux (579158)

            Our company moved to Microsoft Cloud, and usage rates plummeted. Everything became uselessly slow, the good spreadsheets can't even be opened by the cloud services, the app forces you into the online Outlook (which is better handled by the local app), data limits are sometimes a bottleneck, and basically we ended up using it for end-of-the-week backup.

            To get things done, we depend on email and thumbdrives.

            Oh, and we fired our IT guy, and then started paying him as a contractor.

            So we are experiencing the opposite o

        • by Bert64 (520050)

          The problem is that the IT dept is becoming less IT literate...
          The industry has expanded very rapidly, and demand for skilled people has massively outpaced supply. This is then coupled with vendors who try to claim their products don't require highly skilled staff to manage them.

          Also as you point out, people are becoming more IT literate but this can be dangerous, as these people often think they know a lot more than they really do and are prone to breaking things. These are also the kind of people who try

        • by Rakishi (759894)

          I can set up a server. I don't want to. I don't want to manage a server. To keep up with updates. To deal with issues. To read up on standards. To set up firewall rules.

          Also, it'd take me five times as long to create a server that is half as reliable as one done by a proper sys admin who does nothing but that all day. That's time I'm not doing my job and that's time I'm basically being overpaid a lot to be a shitty sys admin.

          Fuck that.

      • Yes. That doesn't mean that it's IT's fault. At my current workplace, we have 150+ people, and 2 IT people. Getting stuff through IT is slow. However, the problem isn't with IT - they don't get to set their own budget.

        What do they do all day? Browse slashdot?!

        I just left a company where we had 2.5 IT people (one was part time) doing IT for 1,300 users! I left because users sometimes had to wait 10+ days to get a response, and they always yelled at me while I worked for free off the clock to avoid getting fired.

        If we could handle that with an average response time of 7 business days, then why can't your company do so with 1/10 the demands? ... Maybe I should ask where you work, as I could be relieved not to have ulcers.

        • by JanneM (7445) on Monday July 29, 2013 @12:59AM (#44410067) Homepage

          If you had a response time of a week for issues, and you had to work enough unpaid overtime that you left rather than face an intolerable work situation, then quite obviously you were not able to handle it with the staffing at hand.

        • by satsuke (263225)

          Probably depends on the nature of the business.

          Some types of business are inherently more IT-dependent than others.

          At least, once upon a time I contracted to/in an industrial factory making insulation.

          The permanent IT staff was 7 for a workforce of around 1,200 .. and most of those were dedicated to the production control systems in the plant (ancient Honeywell machines).

          Most of the workers were union tradesmen .. whose response to the new email system training materials was to circular-file them before I

        • by LordLucless (582312) on Monday July 29, 2013 @01:09AM (#44410085)

          If we can handle this with an average response time of 7 business days then why can't your company do so with 1/10 the demands? ... maybe I should ask where you work as I could be relieved not to have ulcers?

          Yeah, waiting a week to get an expired password reset is precisely what I mean when I say going through IT is slow.

          • by gagol (583737)
            At that rate, using Ophcrack is actually faster! Even with Win7.
          • by Jawnn (445279)

            Yeah, waiting a week to get an expired password reset is precisely what I mean when I say going through IT is slow.

            So the obvious solution then, is to allow users to reset their own passwords. Right? Who needs an "administrator" for the access control system? Just give all the users the admin passwords. Right?

            Obviously, that's an absurd suggestion, but you've offered nothing in the way of a solution to the problem you describe. Hell, you have not even offered a half-assed analysis of why it takes so long for a password reset. I have not seen things so bad that it took a week to get that done, but I have seen it take

            • Obviously, that's an absurd suggestion, but you've offered nothing in the way of a solution to the problem you describe. Hell, you have not even offered a half-assed analysis of why it takes so long for a password reset. I have not seen things so bad that it took a week to get that done, but I have seen it take hours.

              How the hell would I know? I'm not in the IT department - I'm a user on that system. Moreover, it's a windows environment, which I have no knowledge nor interest in. I have no idea, nor do I care. If you expect your users to perform a systemic analysis of your IT department to determine why the turn around time is so long, methinks you're expecting too much.

    • by cosm (1072588)
      And to reply to myself in bad form: it is rarely the network that is the limiting factor in the corporate environment. How many users out there are continually saturating their 1G links patched in to some top-of-rack switch from their cube? Not many, that's how many. Compute resources, maybe. Large dev shops with build farms, OK, that I understand if you're trying to get a bunch of builds kicked off before everybody else, but build servers' compute power is usually limited by production server budget, not core switch
    • by plover (150551) on Monday July 29, 2013 @12:33AM (#44409965) Homepage Journal

      Sure, IT's the bottleneck. Why? Because users who roll their own solution without understanding what they're creating will create a fragile business model.

      Andy Accountant decides to do General Ledger on Quickbooks, while Polly Payables decides to do billing on the bank's web server. How does one update the other? It starts out as a manual process. But let's say Polly is clever and signs up for IFTTT.com to automate the integration. She also hands the task of entering the bills over to Carlos Clerical. Later, when Polly is on vacation, Andy downloads an upgrade to Quickbooks - but IFTTT was set up only to modify the original Quickbooks. Now Polly's billing doesn't work, and Carlos has no idea what's going on. Polly is the only support person, but she's on vacation. Andy only knows about the manual processes, so he can't help Carlos. So the bills don't get paid.

      And the IT guy only knows about the PCs, the printers, the network, and the file server. He doesn't know about the apps, because the users got tired of waiting for him and rolled their own.

      Repeat this scene for each and every system, service, and person involved with computers in the organization. It starts out easy and fast, but the dependencies quickly crust over every activity the company performs. Support becomes a nightmare, and changes go from "difficult" to "impossible".

      If the IT guy put the pieces together, he (should) document the connections, provide troubleshooting knowledge, and at least know who to call for support. At least that's the theory.

  • by mysidia (191772) on Sunday July 28, 2013 @11:40PM (#44409799)

    Because when people read the label and see that the food is lower-calorie or "more healthy", they eat a larger amount of the food because they feel less guilty, and the additional consumption more than offsets the decrease in calorie count of the "healthier" food.

    So eating lower calorie foods makes you less healthy....

    • by khasim (1285)

      As users can do more for themselves and don't have to wait for IT, they do more, so more gets used. The real gain, then, might be that more gets accomplished as IT becomes less of a bottleneck.

      As with your calorie example, you won't end up with more work being "accomplished".

      You'll end up with more fat.

      Look! I can record HD video and upload it to the data center and then embed it in my Power Point presentation and then email it to everyone as an attachment. With just a few clicks. Instantly.

      Right now most o

    • I'll take a triple bacon and cheese burger with super sized fries annnnnnnd... a diet coke.
  • In the early 1950s, there were only a few computers (mainframes). The idea that we would now have only a few dozen computers in the world, which would each cost a fraction of a cent due to Moore's Law, sounds pretty dumb, doesn't it? Obviously the ability to do more is at least as important as declining cost for a fixed capability. Nothing new at all.
  • Not a 1:1 ratio (Score:5, Insightful)

    by Tony Isaac (1301187) on Sunday July 28, 2013 @11:44PM (#44409817) Homepage

    Virtualization makes it easier to stand up a new "server." True.
    This simplicity will lead to using more "servers." Granted.
    But those virtual servers require far less hardware than the old physical servers. Many of these virtual servers are used only a small percentage of the time. Depending on the load, 10, 20, or even more servers can run on one physical piece of hardware.

    So even if we use, say, five times more "servers" with virtualization, we will be using fewer physical units--fewer "resources."

    In short, the math is not so simple.
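The parent's consolidation arithmetic can be made concrete (a toy sketch using the parent's own illustrative figures of 5x server growth and a 20:1 consolidation ratio):

```python
import math

physical_before = 50      # one app per physical box, pre-virtualization
vm_growth = 5             # easier provisioning: five times as many "servers"
consolidation = 20        # VMs packed per physical host

vms = physical_before * vm_growth              # 250 virtual servers
hosts_after = math.ceil(vms / consolidation)   # 13 physical hosts

# Five times the "servers", yet roughly a quarter of the physical units.
```

So even under Jevons-style demand growth, the physical resource count can still fall; the paradox bites only if demand grows faster than the consolidation ratio.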

    • by mlts (1038732) *

      There are other variables as well. If the servers have disk images stored on deduplicated backend filesystems that have autotiering, and the hypervisor is able to swap to a dedicated fast disk or SSD and swap the VM out if unused, then adding another VM might take very little in physical resources.

      What is happening is that because VMs are easier to create, modify and archive, it allows developers to spin up new boxes as opposed to adding more tasks to existing hardware or VMs. Is this good? Possibly. It is

      • by ArsonSmith (13997)

        VMs will eventually provide what Java always promised: write once, run anywhere, because the entire OS is encapsulated within the VM and not just the development environment. Java may well even be a big part of this.

    • by hawguy (1600213)

      Virtualization makes it easier to stand up a new "server." True.
      This simplicity will lead to using more "servers." Granted.
      But those virtual servers require far less hardware than the old physical servers. Many of these virtual servers are used only a small percentage of the time. Depending on the load, 10, 20, or even more servers can run on one physical piece of hardware.

      So even if we use, say, five times more "servers" with virtualization, we will be using fewer physical units--fewer "resources."

      In short, the math is not so simple.

      Even if the resource cost to stand up and run a new server (with automation to patch and maintain the operating system) is zero, there's still a support cost in maintaining the application. Someone still has to patch (and test) the application to keep it up to date. Someone has to test the application after operating system patches to make sure nothing broke. Someone has to set up automated monitoring of an application that may not have been designed for any automated monitoring. Someone has to track down

      • by MightyYar (622222)

        Someone still has to patch (and test) the application to keep it up to date.

        A lot of the virtualization that my IT department is doing involves moving legacy boxes running ancient applications over to new servers. They are taking these 10 year old boxes running Windows 2000 and moving them into VMs as the hardware starts to die. In other words, the applications and OSes weren't being maintained before, and the VMs won't be maintained either. I'm not in IT, so don't flame me :)

    • You still need to hire someone to install and configure the virtualization apps. Not much different than running all your apps on the one piece of hardware. Virtualization: A solution in search of a problem, in a saturated market ...
      • >Not much different than running all your apps on the one piece of hardware

        Do you even sysadmin?

        > Virtualization: A solution in search of a problem

        I'll take that as a no.

  • by NotSoHeavyD3 (1400425) on Sunday July 28, 2013 @11:44PM (#44409819)
    Engineers have always worked on efficiency, so pretty much everything you use these days is more efficient than the equivalent item from 30 years ago. However, people in the US use more energy per capita than they did 30 years ago. (So, for example, instead of 4 people in a family watching the same 25" TV during prime time, each one of them has their own, and/or they watch it more. The end result is that the amount of energy used to watch TV is greater even though the actual TV uses less power.)
    • by xvan (2935999)
      That's not true.
      30 years ago, you had no LCD/plasma TVs.
      So with a ratio of 3:1 TVs, you're saving power.

      With 4:1 you'd spend a little more power during prime time, but if there was an "always on" TV in the house, you'd still be saving power.
      • Well, you have to remember that 30 years ago we watched less TV than now. (Yes, I'm old enough to remember how much TV I watched in the late 70s-80s.) Not only are there more TVs in my house now than 30 years ago (when I was a kid), they're on a lot more, so the end result is more electricity usage on TVs in my house than 30 years ago.
    • No. I think most code is generally nowhere near as efficient as 30 years ago. 30 years ago you were severely limited in processing speed, memory, and storage. To run a major enterprise business system, you needed to code things as efficiently as possible. As computer systems became more advanced, businesses had coders write stuff faster, with less efficiency in the code, but overall more cheaply because they didn't have to hire such experts to create highly optimized code. It didn't need to be as highly optimiz
      • >Ever notice that games take as long or longer to load now than before, even though computer systems are orders of magnitude more powerful now?

        Storage latency. Spinning hard disks are not orders of magnitude faster at loading gigabytes of random data than the 64k off of whatever medium 20+ years ago. Load that same huge game off a fast SSD or RAM disk and it's pretty much instant.
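The latency point above is easy to check with back-of-envelope numbers (the throughput and IOPS figures below are rough assumed values, not benchmarks):

```python
# Sequential streaming: HDDs aren't that far behind SSDs.
game_mb = 2000
hdd_seq_mbps, ssd_seq_mbps = 100, 500
hdd_stream = game_mb / hdd_seq_mbps   # 20.0 seconds
ssd_stream = game_mb / ssd_seq_mbps   # 4.0 seconds

# Random small reads: this is where spinning disks fall off a cliff,
# because every 4K read pays a full seek on an HDD.
reads = 50_000
hdd_iops, ssd_iops = 100, 100_000     # seek-bound vs. flash
hdd_random = reads / hdd_iops         # 500.0 seconds
ssd_random = reads / ssd_iops         # 0.5 seconds
```

A ~5x gap on sequential reads becomes a ~1000x gap on random reads, which is why load times barely improved until SSDs arrived.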

  • Refreshing that old quote: you implement Cloud, the new network structure, where they pay dollars an hour, have no health care, no retirement, no pollution controls, etc., and you're going to hear a giant sucking sound of US admin jobs being pulled out of this country. We have great telco agreements across the world.
    Do US admins, technical staff, CS graduates, and staff with double degrees really think US multinationals will let you work overtime at Seattle-civilian-aircraft-engineer-like wages for generati
    • by iggymanz (596061)

      Give sensitive data to a poor country's IT workers? They've already proven themselves untrustworthy; the horror stories from India alone boggle the mind. And good luck with any legal venue.

      • Sounds like an MBA's wet dream.
      • Give sensitive data to poor country's IT workers

        An accountant in my company is often bitching about the slow speed of internet banking, and the problem is there are a lot of bottlenecks between here and where the local bank holds their sensitive data on servers in India. It's the same with a lot of data entry of medical records.

        I'm not suggesting it's a good idea but merely pointing out it's an idea that seagull management (makes noise, shits on everything then flies out) implemented in a lot of places at

  • Jevons Paradox (Score:5, Insightful)

    by Okian Warrior (537106) on Monday July 29, 2013 @12:00AM (#44409859) Homepage Journal

    Jevons paradox is valid, but only under specific economic assumptions.

    It's only true so long as there is more demand for the resource, and it's only a problem when the resource has a cost attached. Essentially, it's true in a "scarcity" economy, but not true under "post scarcity".

    We've achieved "post scarcity" for several resources already; for example, phone calls and computer time.

    Phone calls used to be expensive and billed by the minute, but nowadays they're virtually free. Similarly, computer time used to be metered and charged - in college, the CPU time for each program run was deducted from your account. Nowadays people can have as much un-metered computer time as they want.

    CPU time and phone service aren't literally free, but the cost is so small as to be negligible.

    Despite this, we do not see infinite consumption. People have a certain level of need [wikipedia.org] for a resource, and when that need is met they stop consuming more. Coupled with a declining population, there is no reason to expect infinite consumption.

    Your company may be using more resources than it needs... but so what? Computer resources are remarkably cheap - so cheap, in fact, that it may be more effective to ignore the problem. Optimize the biggest expenses first: if that turns out to be IT resources, then take a closer look. Otherwise, just ignore it.

    (For another example of post-scarcity, consider the Chinese "dollar stores" that have cropped up. The cost of goods is so small that the time and expense of price tags makes a big difference. This is almost post-scarcity of tangible goods.)

    • CPU time and phone service aren't literally free, but the cost is so small as to be negligible.

      Despite this, we do not see infinite consumption.

      Well, yes. Infinite consumption would require zero cost. There is no such thing as negligible when it comes to infinite use.

      The thing is we always want more. Bigger supercomputers, faster desktops, a phone as fast as a desktop and at the bottom end, more power than the teeny low power embedded 8051 running at 32.768kHz gives.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      CPU time and phone service aren't literally free, but the cost is so small as to be negligible.

      Despite this, we do not see infinite consumption.

      CPU time and electronic communication (and data storage) are excellent examples of the Jevons paradox. The cost has become negligible, and in response consumption has increased to the point where almost all use of these resources is either completely pointless, or of benefit to society so marginal that it is difficult to measure.

      It's almost too easy to start citing examples:
      - High-frequency trading - astronomical use of low-latency comms and CPU power to perform arbitrages whose payoff rapidly converges to the

    • The cost of using a computer or phone includes things other than just money. The speaker's time is valuable too. That is what limits infinite consumption. But in corporations such things can take a completely surreal turn. There was the apocryphal story about the build group of a major software vendor that was doing daily builds of products (both release and debug, mind you) on branches well past the products' end of life. The product teams had been disbanded but none of them had issued a "stop build" request and the
  • by Chas (5144) on Monday July 29, 2013 @02:37AM (#44410275) Homepage Journal

    Work expands to fill the space given to it.
    Give it no definable boundaries?

    SpaceMonster: OOOOOOOH!!! *Wiggles fingers acquisitively*

  • IT is really important and users need IT services even though they don't think they do, says IT services company

    • by Jawnn (445279)

      IT is really important and users need IT services even though they don't think they do, says IT services company which, by definition, understands IT services far better, from an operational as well as a strategic point of view, far, far better than most users.

      TFTFY.

  • Using more resources is exactly what happens... As hardware gets faster, software gets slower. While some of the slowness can be attributed to additional features and larger data sets, much of it is down to using higher-level languages. Very few people bother writing efficient code anymore, on the basis that they can always throw more hardware at it.
    I have personal experience with a few games that were deemed too slow and rather than try to improve the code, they were simply shelved for a couple of years until t

  • The whole idea of SDDC and Cloud Computing is to basically end up with "IT as a Service". The rest are just marketing words. The goal is to have a service pretty much like electricity: you don't necessarily care where it comes from or how it's delivered to your premises. All you care about is that it's there, it's reliable, it's consistent, and you know exactly how much you are paying for it.

    The problem I've seen in the 10 years I've been in this particular industry, is that very few large companies are doing char
  • "Might"

    "Might" cost more than they save based on data gleaned from coal burning plants. I was going to call this an apples-to-oranges comparison, but those two things are actually fairly similar. This is more of an apples-to-hemidemisemiquaver comparison.

  • As users can do more for themselves and don't have to wait for IT, they do more, so more gets used.

    Sticking stuff in teh cl0ud makes accounting clerks and order pickers transmogrify into software engineers?

    The bogosity meter just pegged.
