Enterprise Datacenter Hardware Assumptions May Be In For a Shakeup

conner_bw writes: For the entire careers of most practicing computer scientists, a fundamental observation has consistently held true: CPUs are significantly more performant and more expensive than I/O devices. The fact that CPUs can process data at extremely high rates, while simultaneously servicing multiple I/O devices, has had a sweeping impact on the design of both hardware and software for systems of all sizes, for pretty much as long as we've been building them. This assumption, however, is in the process of being completely invalidated.
  • by turbidostato ( 878842 ) on Thursday January 07, 2016 @12:54PM (#51256259)


    For the entire careers of most practicing computer scientists, a fundamental observation has consistently held true... and you won't believe what happens next!!!

    • LOL, yeah ... my thinking was "it was a true fact, not an assumption".

      Throw in some fundamentally new pieces (which, as I gather, is that suddenly everything has its own damned CPU) ... and, yes, the rules will change.

      Hundreds of CPUs spread across devices will cumulatively have more CPU power than the single CPU which has always been at the top of the food chain. All that really means is everything now has a ton of embedded compute power which previously wasn't there.

      Things which used to be classed as sup

      • Re: (Score:2, Informative)

        by Penguinisto ( 415985 )

        Same here.

        Now if they found a way to practically combine RAM and disk into one unified whole, and made the two faster than frig (and able to reallocate on-the-fly w/ minimal disruption as the workload changed, maybe on a curve or as load >= n )? That would be news.

        TFA... TFA has a lot of stuff to sift through to get anything of note out of it at all, and it wasn't much.

      • You're not kidding about the supercomputer in a Cracker Jack box. The average iPhone now has as much compute power as a Cray Y-MP from the early 90s, or more.

        • LOL, maybe a little with the Cracker Jack box ... but, no, I really wasn't kidding.

          For those of us old enough to remember when a gigabyte was a theoretical number nobody would ever encounter ... you can buy what used to be astronomical amounts of storage as an afterthought in the express checkout at Wal Mart for a couple of bucks.

          I'm afraid these days to know how cheap, small, and ubiquitous a 1GHz chip is ... because there was a time that was considered munitions grade hardware which was covered under expo

    • Re: (Score:3, Interesting)

      by jellomizer ( 103300 )

      Technology changes, and so does how best to use it.

      Old long-term storage was very slow, so we used the CPU to compute a lot of data instead. Think of Mario and Luigi on the original NES: they were one bitmap and you just swapped the palettes; likewise, many of the creatures shared the same expensive bitmap image, and the CPU cheaply gave them different colors. As time goes on, storage gets cheaper and faster, so we have independent bitmaps for Mario and Luigi and they differ in appearance, luigi being taller and

      • > But lets get away from games and onto serious computing.

        And games are not serious computing how? There are grand challenges that have been overcome in the technologies that power video games. Path finding, latency reduction, 3D computation, and so on.

        Not to mention budgets that exceed that of some "enterprise" firms.

    • by jedidiah ( 1196 )

      Meh. This assumption wasn't even true before. Those of us that are actually in the trenches already know this. Some magical new technology change doesn't really alter things.

      You actually have to pay attention to your workload and how your application is handling it.

      It's nice that some academic or journalist has finally caught up.

    • Actually that observation wasn't even true in the past. A lot of mainframes had very fast and complex I/O processors with central CPUs that weren't necessarily faster (depending upon what models you bought).

    • by mikael ( 484 )

      A "paradigm shift" from "disruptive technology" that goes beyond "24/7"

  • by petes_PoV ( 912422 ) on Thursday January 07, 2016 @12:54PM (#51256263)
    ... would be the pundits.

    This piece is citing articles written in 2005 as "ye olde world" and saying "OMG! something amaaaazing has happened".

    Well, those 10 years represent 2 or 3 generations of datacentre hardware, depending on how you amortise your assets. So if the author has only just woken up to SSDs or SCMs then what have they been doing for the past decade?

    In practice, the biggest bottleneck in the datacentre has been the network for a longish time. And the biggest bottleneck in most systems is the user's think-time. It is that last aspect which lies at the heart of multi-user systems.

    However, the guy does have a point: the need for "olde worlde" performance management, designing the bottlenecks out of a system and diagnosing where the choke-points are (ans. the network) when things slow down, has largely disappeared. But as for the rest of his stuff? Yes, we know all that.

    • While it's mostly true that the answer is usually (or often) the network, there are some high data loads where actual database performance might still be a bottleneck, and others where the actual calculations or other manipulations (= CPU) are the slow link, so there is still a need to look at the numbers.

      • by Anonymous Coward

        In some cases the RAM speed may be the bottleneck, and that one sure is a hard one to analyze.

        First you run 2 tasks concurrently on a 4-core (real cores, not hyperthreaded) machine, and get 2 tasks per second done. Then you run 4 of the same tasks concurrently, and get 3 tasks per second done.

        All the CPUs show 100% utilization, yet you see only 50% gains instead of the expected 100%.
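The scaling loss described above works out to a parallel-efficiency figure; a quick sketch (the throughput numbers are the ones quoted in the comment, and the helper function is illustrative):

```python
# Parallel efficiency: observed throughput vs. ideal linear scaling.
# Numbers are the ones from the comment: 2 workers -> 2 tasks/s,
# 4 workers -> 3 tasks/s on a 4-core machine.

def efficiency(base_workers, base_rate, workers, rate):
    """Fraction of ideal linear scaling actually achieved."""
    ideal = base_rate * (workers / base_workers)  # perfect scaling
    return rate / ideal

print(f"{efficiency(2, 2.0, 4, 3.0):.0%}")  # prints 75%: doubling cores added only 50% throughput
```

A result well under 100% with all cores pegged is the classic signature of a shared-resource bottleneck such as memory bandwidth.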

  • Never seen the word "performant" until today. Must be an obscure five-dollar word that scientists love to toss around. Meanwhile, I'll stick with cheap performance as my word of choice. []

    • Re: (Score:1, Flamebait)

      "Performant" is an invaluable word. It instantly identifies those who use it seriously as people who may be safely ignored.
      • Language is a tool. Just because you're not versed in its intricacies doesn't mean that someone who is is inferior to you.
        • by Anonymous Coward

          Language is a tool. Just because you're not versed in its intricacies doesn't mean that someone who is is inferior to you.

          People who use buzzwords to hide the fact that they aren't really saying anything are tools. It's been a long time since I've read an academic article so full of bullshit.

      • by Anonymous Coward

        "Performant" is an invaluable word. It instantly identifies those who use it seriously as people who may be safely ignored.

        Speaking of invaluable, I have found that those who spew the most buzzwords in their vernacular also happen to control the budget.

        In other words, tread lightly. The "PHB" wasn't born from pure fiction...

    • by jonnythan ( 79727 ) on Thursday January 07, 2016 @01:00PM (#51256291)

      Performance is a noun. Performant is an adjective. I guess he could have said "faster"

      • by SuperKendall ( 25149 ) on Thursday January 07, 2016 @01:21PM (#51256445)

        Performant is actually a pretty useful word in place of "real" ones like "faster", because "performance" is a word that can change meaning depending on what you consider to be good (or desired) performance.

        Maybe good performance means that it's using all of the cores on a CPU well. Maybe it means that it's not using much of the system at all but is using the network very well, or that work is spread out across a cluster in an extremely balanced fashion. "Faster" may be a by-product, but it may not, because people using the word "performant" often value stability over absolute speed.

        I guess the closest concept "performant" comes to is being well-balanced, or perhaps meeting some goal you had set during design.

        So don't be too dismissive of a new word; it can be the case that a new word was made because the old ones wouldn't really fit without a lot of verbosity.

      • by tsqr ( 808554 )

        Performance is a noun. Performant is an adjective. I guess he could have said "faster"

        performant []

        a performer

        Word Origin
        based on informant, etc.

    • I have a paper-based unabridged dictionary and performant ain't in it. (ain't is, btw)

      So either the word is relatively new, or in niche use.

      • sorry, but "unabridged" means nothing with respect to the comprehensiveness of a dictionary -- except that if there is an abridged edition of the same dictionary the abridged version will have fewer words (or smaller definitions, or something).

        Your conclusion is based on a flawed assumption. I've been using a small dictionary for around thirty five years now. I can't check it (the dictionary is at home), but based on frequency of word use over time and the quality of that dictionary I expect the word would

        • Sorry, but performant isn't in the Webster online dictionary, and even Chrome thinks it's a misspelled word. Google's Ngram viewer also shows that up until the last few decades it was a rarely used word. In the books the Ngram viewer references, it's not being used to indicate performance in at least one case in 1812 []. Heck, even the usage in the 70's and 80's [] seems to reference it as an actor in something, having nothing to do with performance or efficiency as this article wants it to.
      • Concur. I am the proud possessor of a paper copy of the 20-volume Oxford English Dictionary. "Performant" isn't in the OED. "Performancer", as in "he / she who performs", however, is....

    • A system which has good performance is said to be performant.

      Your own link says this has been in use since at least the 70s.

      It's hardly a new term. It may only come up in specific contexts related to computing performance, but it ain't new.

  • This issue has been known to anyone using SSDs. The CPUs are still fast enough, but the bandwidth between clients and servers (10 Gbit/s is the average these days within a datacenter) no longer covers the full capacity of the disk subsystem (which is now connected at >10 Gbit/s per drive). Even with multiple disks in a single subsystem you can no longer use the full capacity, not because of CPU issues but because of bandwidth issues between the CPU and the PCIe bus. That's why we're moving away from large disk arrays and using 1U or 2U servers with 4-12 SSDs, hooking them together with 'object storage' or other distributed storage mechanisms. That way you don't have a single point of failure or resource contention slowing you down.

    But that's not the point. The point the article is making is that CPUs are getting too slow, and that's not true. The CPUs are plenty fast, and using any sort of off-loading mechanism would result in RAID controllers whose CPUs have to be just as powerful; if they aren't, you get the issues you have with current RAID controllers: they are slow and expensive (a single link to a 12 Gbit/s chip is a bottleneck for an entire array of 12 Gbit/s drives). You also lose the scheduling, checksumming, hardware monitoring and all the other fancy things software-based solutions do these days.

    Using CPUs as glorified RAID controllers is just fine, and I don't foresee another solution as long as your software is fast and concise (e.g. ZFS). If you start handing anything off to dedicated CPUs then you're just losing the control and customization a software-based solution allows you to have.
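The single-uplink bottleneck described above is easy to put numbers on; a rough sketch (the drive counts are illustrative, and link encoding overhead is ignored since it affects both sides equally):

```python
# One controller uplink vs. the aggregate bandwidth of the drives behind it,
# using the 12 Gbit/s figures from the comment.
LINK_GBPS = 12     # single link to the controller chip
DRIVE_GBPS = 12    # per-drive interface rate

def fraction_of_array_bandwidth(n_drives):
    """Share of the drives' combined bandwidth one uplink can actually carry."""
    return min(1.0, LINK_GBPS / (n_drives * DRIVE_GBPS))

for n in (1, 4, 8):
    print(f"{n} drives: {fraction_of_array_bandwidth(n):.0%} of array bandwidth usable")
```

With more than one drive behind the link, the uplink fraction falls as 1/n, which is the contention the comment is describing.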

    • Ceph is cool, and you just want non-RAID cards to link the backplanes to the system board. Hardware RAID was good in the past, but nowadays multi-node software is better, without the hardware-RAID lock-in or the "losing 1-2 disks = data lost" failure mode.

    • Well, I think the authors do have some points, although at least some of them have existed in embedded systems (which execute directly out of flash) for a long time:
      * CPU-cycle-hungry, most-efficient disk-caching algorithms are not that efficient anymore once "disk" (or rather flash) access manages to catch up to the CPU. Less efficient but also less resource-hungry algorithms might be advantageous then.
      * Issuing lots of read accesses in advance to keep your worker threads busy might only help in occupying R

  • The reason for the shakeup, according to TFA:

    The arrival of high-speed, non-volatile storage devices, typically referred to as Storage Class Memories (SCM), is likely the most significant architectural change that datacenter and software designers will face in the foreseeable future. SCMs are increasingly part of server systems, and they constitute a massive change: the cost of an SCM, at $3-5k, easily exceeds that of a many-core CPU ($1-2k), and the performance of an SCM (hundreds of thousands of I/O opera

  • Storage is slower than processors until you consider caching things in RAM, in which case it's magically faster.
    Other points mentioned:
    Balanced Systems: you can have lots of RAM, but make sure you have the network to serve it. CPUs were unavailable for comment.
    Contention-Free I/O-centric Scheduling: uh, this has been around for nearly 15 years, since the invention of x86_64 at least... formally in the domain of commodity hardware. CPUs could not be reached for comment.
    Workload-aware Storage Tiering: remember all that crap we mentioned about memory caches for everything? Well, now we're drifting into the realm of object stores, so sit tight. Tiered storage has existed for 15 years, so we're a bit late to the party on this one.

    The Future: RAM + Acronym + expensive support contract = Storage Class Memory! Learn it, embrace it, and most importantly, make sure it's on the fucking purchase order this year*

    *not applicable if you're using redis, memcached, ceph, couch, hadoop, hypercube, or any one of about 30 other commodity-hardware-centric distributed data frameworks designed to purge the vendors from the budget as Jesus purged the money changers from the temple.
  • Now tell me something I don't already know..

    OK, OK, so CPU speeds are not trending up at quite the same pace as nonvolatile storage. But it's not like this has gone unnoticed, or like we haven't been making hardware changes in the data center to take advantage of it over the last decade. Just like we've adjusted to new power, network and virtualization technologies in the data center.

    The real story is that CPU speeds are not trending up as steeply as they were 10 years ago, but we've been seeing huge leap

    • by jandrese ( 485 )
      Really what we're seeing is storage finally starting to catch up with CPU after lagging behind for nearly 30 years. The author is freaking out that ZOMG the disk isn't always the slowest thing on the system anymore, but this is not really news at this point. The exciting part for me is that in some cases you may be able to eliminate one cache from the system. Caches are a necessary evil that introduce big headaches into system design, so being able to eliminate one can greatly simplify parts of your syst
      • I did find it odd that the use case he kept going back to was someone buying some crazy expensive RAM storage array, then sticking a single commodity server on the thing and being shocked that the server was CPU-bound. The point about the Linux I/O subsystem not being up to the task is interesting, but not having looked into it myself I can't help but wonder if there isn't some kernel tuning or optional module support he could have enabled to improve the situation.

        Well there is new kernel tech for i

        • by MrKaos ( 858439 )

          Well there is new kernel tech for it (

          That is really interesting, thanks for pointing it out. I missed your question when you posted it:

          When do you decide to have a system managed service (for example apache) or a /etc/init.d initscript ?

          If it is a process I want to stay up, then I use inittab. Apache is a pretty good choice for an init service; other examples are databases or messaging systems. However, if it is someone else's system I just do it how they do it, to fit in.

          For example, apache could be set up in inittab with a 'respawn' directive, so if the process is terminated it restarts automatically; if there is a problem with the ser
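A respawn entry of the kind described might look like this in /etc/inittab (the "ap" id and the httpd path are illustrative; the daemon must run in the foreground so init can track the child directly):

```
# /etc/inittab fragment (SysV init). The "respawn" action restarts the
# process automatically whenever it terminates.
# Fields: id : runlevels : action : command
# "ap" and the httpd path are illustrative; adjust for your distribution.
ap:2345:respawn:/usr/sbin/httpd -DFOREGROUND
```

If the command daemonizes instead of staying in the foreground, init sees the parent exit immediately and will respawn it in a tight loop, so the foreground flag matters.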

  • by Anonymous Coward

    What does it mean for a CPU to be "more performant" than an I/O device? They do totally separate things. You can't even measure them with the same units.

    Is a drill "more performant" than a hammer?

    • by Anonymous Coward

      Your autism is showing! I'm sure you know exactly what was meant, but you're just being a pedantic dickweed by choice.

      The measurements in this case are in terms of operations per unit of time, or if you prefer, the amount of time required per operation.

      Typically, a CPU can perform one of its operations (executing an instruction) much, much, much faster than a spinning platter hard drive could perform one of its operations (reading or writing a sector of data).

      So when both a CPU and a spinning platter hard d

    • Is a drill "more performant" than a hammer?

      Obviously you missed the boat. Both the hammer and the drill are OBSOLETE! Introducing, the Hammer Drill: []

  • by m.dillon ( 147925 ) on Thursday January 07, 2016 @01:28PM (#51256501) Homepage

    At least, not totally correct. Memory-bus non-volatile storage such as Intel's X-Point stuff still requires significant cache management by the operating system. Why? Because it doesn't have nearly enough durability to just be mapped as general-purpose memory. A DRAM cell goes through trillions of cycles in its lifetime. Something like X-Point might be 1000x more durable than standard flash, but it is still 6 orders of magnitude LESS durable than DRAM. So you can't just let user programs write to it however they like.

    Secondly, in terms of data-center machines becoming obsolete: also not correct. SSDs make a fine bridge between traditional HDD or networked storage and something like X-Point, for two reasons. First, all data-center machines have multiple SATA buses running at 6 Gbit/s; gang them all together and you have a few gigabytes/sec worth of standard storage bandwidth. Second, you can pop NVMe flash (PCIe-based flash controllers) into a server, and each one has in excess of 1 GByte/sec of bandwidth (and usually much more).

    Third, in terms of memory management, paging to/from SSD or nVME 'swap' space, or using it as a front-end cache for slower remote storage or spinny disks, already provides servers with a fresh new life that means they won't be obsolete for years to come.

    And finally there is the cost. These specialized memory-bus non-volatile memories are going to be expensive. VERY expensive. To the point where existing configurations still have a pretty solid niche to play in. Not all workloads are storage-intensive and these new memory-bus non-volatile memories don't have the density to be able to replace the storage required for large databases (or anywhere near it).

    So, the article is basically a bit too pie-in-the-sky and ignores a lot of issues.
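The bandwidth figures above are easy to sanity-check; a rough sketch (the port count is an assumption, and the per-lane and per-port usable rates are approximate figures after link encoding overhead):

```python
# Rough aggregate bandwidth for the configurations mentioned above.
SATA_USABLE_MBPS = 600   # ~usable MB/s per 6 Gbit/s SATA port (8b/10b encoding)
PORTS = 6                # assumed port count for illustration

sata_total_gbs = PORTS * SATA_USABLE_MBPS / 1000
print(f"{PORTS} ganged SATA ports: ~{sata_total_gbs:.1f} GB/s")  # a few GB/s

# One NVMe device on PCIe 3.0 x4 (~0.985 GB/s usable per lane):
nvme_gbs = 4 * 0.985
print(f"one NVMe x4 device: ~{nvme_gbs:.1f} GB/s")  # well over 1 GB/s
```

Both back-of-the-envelope numbers line up with the comment's claims: ganged SATA gives a few GB/s, and a single NVMe device comfortably exceeds 1 GB/s.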


    • A DRAM cell goes through trillions of cycles in its lifetime.

      Typical DDR4 (2133) runs at a little over 1 GHz, or a billion cycles per second of operation.

      We're in the quadrillions scale, not trillions.
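The arithmetic behind that claim is straightforward; a sketch (the five-year service life is an assumed figure):

```python
# Order-of-magnitude check on DRAM cycles over a device lifetime.
clock_hz = 1.066e9             # DDR4-2133 I/O clock, "a little over 1 GHz" as noted above
seconds = 5 * 365 * 24 * 3600  # assumed five-year service life
cycles = clock_hz * seconds
print(f"{cycles:.1e}")         # ~1.7e17 cycles: hundreds of quadrillions, not trillions
```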

      • by subk ( 551165 )

        A DRAM cell goes through trillions of cycles in its lifetime.

        Typical DDR4 (2133) runs at a little over 1 GHz, or a billion cycles per second of operation.

        We're in the quadrillions scale, not trillions.

        Maybe he meant "cycles during which its own state is changed". They get less wear coasting for a few quadrillion laps than they do altering the value a few trillion times.

    • by LDAPMAN ( 930041 )

      SATA, even 6 or 12 Gbit/s SATA, will be going away for HPC applications, and it may go away for even more moderate-performance systems. Direct PCIe is even faster. That's what the article is about. The speeds of storage and memory are converging. That will change how we build systems.

  • The basic point of the article is dead on. The major assumption that I/O is extremely slow has driven the organization of computer architecture from the beginning. But, as the article notes, in the last few years that equation has changed drastically. The memory hierarchy is going to get more complicated: DRAM, NVDIMM, NVM, SSD, HD, optical/tape, and making the best use of that hierarchy means there are changes that need to be made.

    For one, I think there will be a lot of research in this area. Just like modern
