
ARM In the Datacenter Isn't Dead Yet (theregister.co.uk) 147

prpplague writes: Despite Linus Torvalds's recent claim that ARM won't win in the server space, there are very specific use cases where ARM is making advances into the datacenter. One of those is software-defined storage with open-source projects like Ceph. In a recent article in The Register, SoftIron CTO Phil Straw says of the company's ARM-based Ceph appliances: "It's a totally shitty computer, but what we are trying to do here is storage, and not compute, so when you look at the IO, when you look at the buffering, when you look at the data paths, there's amazing performance -- we can approach something like a quarter of a petabyte, at 200Gbps wireline throughput." Straw claimed that, on average, SoftIron servers run 25C cooler than a comparable system powered by Xeons. So... ARM in the datacenter might be saying, "I'm not quite dead yet!"
  • The FUD is strong in this submission ...

    • by e3m4n ( 947977 )

      Hey, it's in datacenters. I bet those smart temperature/environmental monitoring units are running ARM processors. ;-)

      Possibly even the wireless APs.

      • A lot of the RAID controllers as well (I think that's the purpose Intel held onto its last bit of ARM for when it sold the rest to Marvell; not sure if they've fully divested at this point or if it remains the case).

        That's what makes the report of these SoftIron storage nodes (and the fact that the storage nodes are accompanied by management nodes whose architecture the article doesn't specify and 'router' nodes that expose iSCSI, NFS, and SMB; architecture also unspecified) unsurprising (if you
        • by Junta ( 36770 )

          There have been vendors chasing the 'high' end, but all but Marvell have bowed out. AMD has cancelled future ARM product. Qualcomm seems to have failed to bring their offering to market. Cavium got bought by Marvell. Broadcom's rumored Xeon competitor is nowhere to be seen.

          So Cavium ThunderX is the only platform left that gets you the PCIe lanes and a bunch of DIMM slots. In fact the *one* benchmark Marvell can show as compelling is the memory bandwidth compared to Xeon. It also can be seen with fairly normal 'pc-like' firmwa

          • I'm not so surprised by the failure to take on Xeons (possibly barring the 'it's an i3 we didn't rip ECC out of' ones, which aren't too punchy); Intel can be obnoxious about their pricing and blatant market segmentation, but they know a few things about high performance cores, and people who don't need x86 but do need performance also have options like Power.

            What surprises me more is that the niche that the existence of the Avoton (C2000 series) and Denverton (C3000 series) parts suggests exists doesn't have m
    • Let's not forget he did take that job with Transmeta. He's a really smart guy, but I don't think he's necessarily a perfect prognosticator. A lot of this hardware is in a much more usable state than it seems just because the patches are still floating around back channels and haven't been mainlined yet.

    • Horseshit...and furthermore, I should add: Linus doesn't opine; he observes. It's certainly not his fault if you lack his brainpower or fail to grasp his perspective.
    • by Spazmania ( 174582 ) on Thursday March 28, 2019 @09:41AM (#58347446) Homepage

      it reverses the traditional storage server layout by moving CPUs to the front of the PCB and storage drives to the back. This means cool air from the fans blows over the drives first, and then the CPUs -- which wouldn't make any sense in a compute server.

      Um... What?

      Modern servers cool front to back. They place the drives in front where they are cooled first. Some place the CPUs behind the drives. Others place the CPUs in parallel with the drives so that they're also cooled directly from ambient air.

      No one in their right mind places the drives after the CPUs. Losing a CPU is just money. Losing the drive is everything.

      Were you maybe trying to say they put the fans in front of the drives instead of behind them, pushing air instead of pulling it? That doesn't make any real difference to the cooling, but it makes hot-swap harder.

      If these machines actually cool back to front, that's a bad thing. Modern data centers are laid out with a hot-aisle cold-aisle design. The wiring and exhaust side (the back) faces the hot aisle. Equipment which reverses that flow effs everything up. Cisco is especially bad about this, but servers mostly get it right.

      • by Anonymous Coward

        If these machines actually cool back to front, that's a bad thing. Modern data centers are laid out with a hot-aisle cold-aisle design. The wiring and exhaust side (the back) faces the hot aisle. Equipment which reverses that flow effs everything up. Cisco is especially bad about this, but servers mostly get it right.

        They cool back to front, but they install them in reverse.

        • by green1 ( 322787 )
          That does happen with some equipment, and it makes the guys doing the wiring want to kill the idiot who designed it. The bays are all set up with the expectation that wiring will be in a certain place; when it's on the other side, it can be awkward to work around, depending on how full the bay is.
  • by e70838 ( 976799 ) on Thursday March 28, 2019 @06:41AM (#58346678)
    Linus's claim is that "as long as everybody does cross-development, the platform won't be all that stable". I have my web hosting on ARM and I compile on ARM. I cannot find a good, cheap ARM laptop with Ubuntu. If that does not happen soon, ARM in servers will die off quickly, like any other hype. IMHO the future is not decided yet, and what Linus says is a good indicator for analysing where it is heading. If this summer we have many ARM laptops that sell reasonably well on the market, I'll continue hosting on ARM. Otherwise I'll go back to Intel.
    • by e3m4n ( 947977 )

      Sometimes the best engineering, the best design, or the best science still fails to gain market share because of marketing, financials, and deliberate partnerships. Betamax vs. VHS, HD DVD vs. Blu-ray, or even X2 vs. K56flex. It's not always the best design that wins. There are patents which result in recurring revenue from licensing at stake. This usually results in a whole lot of finagling and deal-cutting in board rooms regardless of which technology was superior.

    • by AmiMoJo ( 196126 )

      Since when do you need a dev machine with the same architecture as the server? I write ARM code all day on an x86 laptop.

      The solution is obviously .NET Core. Then the CPU architecture doesn't matter. Only half kidding.
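
      For example (purely a sketch, assuming the Debian/Ubuntu packages gcc-aarch64-linux-gnu and qemu-user are installed; exact package names vary by distro, and the file name is just for illustration), cross-building and smoke-testing an AArch64 binary on an x86 box looks something like this:

      /* hello_arm.c -- trivial program to illustrate cross-building for ARM.
       *
       * Build on the x86 host with the cross toolchain:
       *   aarch64-linux-gnu-gcc -static -O2 -o hello_arm hello_arm.c
       *
       * Smoke-test the AArch64 binary on the same host via user-mode emulation:
       *   qemu-aarch64 ./hello_arm
       */
      #include <stdio.h>

      int main(void)
      {
      /* __aarch64__ is predefined by the compiler when targeting 64-bit ARM. */
      #if defined(__aarch64__)
          puts("built for AArch64");
      #else
          puts("built for the host architecture");
      #endif
          return 0;
      }

      Real deployments still need on-target testing, of course, which is presumably part of the instability Linus is worried about.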

    • Comment removed based on user account deletion
    • Is cross-compilation not working out for you? Or something else?

      I've been curious about setting up arm-based servers so I'd love to know what pitfalls you've encountered.

    • Linus makes the mistake, though, of assuming that "compile" is a step. Lots of serverless applications are just JavaScript snippets. There are no cross-platform compile concerns; you just specify a .js block of code and execute it. The platform is irrelevant. Serverless cloud providers don't even tell you what underlying CPU is executing your command.

      There's plenty of serverless web tasks to justify ARM if it is cheaper/more efficient/has lower startup times.

  • by Anonymous Coward

    ARM adoption will increase because AWS offers the A1 instance family now. You can easily fire up servers with ARM hardware to work on your software solutions. For many applications it will be a viable solution with substantial cost savings. Watch the stories and statistics that you start seeing at the summits and re:Invent from customers in 2019.

  • And that's got Intel scared.

    Anyone can license the cores. Apple is doing it now for laptops, and workstations won't be far behind once the laptops prove they have some serious oomph.

    And you can dump the legacy x86 craphole shithacks that plague that old CPU way of thinking, and end up with something that is actually quite good and easy to work with.

    ARM will win in the datacenter for the simple reasons such as:

    1) Cost of CPU is a fraction of Intel/AMD offerings.
    2) Cost of running the CPU in heat and power is *significantly* less than Intel/AMD

    • by Anonymous Coward

      Lack of the Intel Management Engine or other built-in spyware features that can't be removed without a high risk of permanently damaging the hardware.

      • What they are used for varies by implementation, since ARM is all kinds of things to various people; but the 'TrustZone' extensions are specifically designed to provide analogous capabilities (at lower cost -- the invisible super-privilege enclave is logically separated but runs on the same CPU rather than being a separate processor), and they tend to be used for similar purposes in cases where conditional access enforcement or 'platform integrity' are design goals. ARM SoCs commonly also implement all the features on
        • I have not understood why the ARM TrustZone "worlds" aren't used with a hypervisor. It would provide a well-armored isolation boundary, preventing malware in one VM from trying to jump to another. It would also be useful for stuff the OS wants to protect (e.g. user credentials, to guard against pass-the-hash attacks).

          • There are some hypervisors that use TrustZone for various things; mostly commercial and relatively low profile (Sierraware has one, as does Mentor Graphics, and there are a number of other projects and research papers; no personal experience with them). What's less common is a hypervisor used as we are accustomed to (just carving a big system up into a bunch of smaller VMs for resource efficiency and abstraction purposes); the prevailing use seems to be adding features that stock TrustZone doesn't have,
    • ARM can easily be scaled to hundreds of cores, maybe more, without an astronomical price and without requiring a nuclear power station sitting on the desk right behind a [gaming] PC :)

      • by Miamicanes ( 730264 ) on Thursday March 28, 2019 @09:41AM (#58347444)

        > ARM can be easily scaled to hundreds of cores

        And yet, an Android phone with 8+ cores and a nominal clock speed of 2GHz+ still can't render a JavaScript-heavy web site (like Amazon, Walmart, or Sears) as well as a 15-year-old 700MHz Pentium III.

        > without having an astronomical price

        Scale an ARM-based solution up to the point where it's capable of genuinely matching the performance of an i9, and you'll find that the ARM-based solution is probably quite a bit MORE expensive.

        > without requiring a nuclear power station sitting on the desk

        Compared to the power and cooling requirements of a Pentium IV with 15kRPM hard drive, an i9 with RTX and SSD is practically a laptop watt-wise. 20 years ago, I literally cut a hole in the wall between my computer room and the hallway so I could put my computer in the hall & pass the cables through the wall to get the heat and noise out of my face.

        • I dunno, my 8-core Galaxy S9+ seems fine with rendering websites. After all, that's what I'm posting from and do 98% of my browsing from.

          As to the Pentium IV, definitely! Perhaps 7 years ago, I was given a Pentium IV tower, and I threw it in a corner as a headless media server. It only lasted the first month, because of the $40 spike in my electric bill.

          Whereas my bench at home has 20 Cortex-A53 cores on it, and the kill-a-watt doesn't creep past 65 W, including 1 TB external RAID, 2 USB hubs (why not?), el

          • The real problem is that ARM is currently nothing even approaching competitive on a per-core performance metric with Intel.
            You need 20 A53 cores to match the performance of 4 Xeon cores.
            In some workloads, this doesn't matter, because they can be scaled well.
            In a lot of workloads, it simply does matter.

            I administrate around 150 servers, and we run 7 datacenters.
            I already avoid the slower clocked Xeons. Aggregate performance is simply not comparable in most workloads with highly disparate per-core performance. I know a lot of armchair computing experts like to claim that it is, but I'm sorry, the reality on the ground is that it is not. That's why we're not using AMD, and we're not using ARM. Though I promise you -- we look forward to being able to some day.
        • still can't render a Javascript-heavy web site (like Amazon, Walmart, or Sears) as well as a 15 year old 700MHz Pentium III.

          When was the last time you used a 15 year old 700MHz Pentium III? Eight years ago?

          • About 4 years ago, I dusted off an old Compaq Armada laptop (700MHz Pentium III, 512MB RAM) and tried using it with a minimalist Linux distro. The performance of Chrome or Firefox with Amazon.com and Walmart.com was SLIGHTLY better than the performance of the same two web sites with my then-new Galaxy Note 4 (all using wifi, so mobile network quality never entered into the equation).

            The Pentium III is a great reference point, because its zenith (the 1.4GHz Pentium III Xeon) marked the point when Intel t

      • by Bengie ( 1121981 )
        It can only be scaled so well because it trades throughput for latency when it comes to cross-core communications. This makes any concurrent workload where there is lots of shared mutable data very slow, or requires a completely different design. Message passing could be done, but it uses more memory, which means more cache is used and more memory bandwidth is used. Every design has its pros and cons.
    • > 2) Cost of running the CPU in heat and power is *significantly* less than Intel/AMD

      ONLY true when the ARM's performance is significantly less than Intel/AMD as well. Beef an ARM up to i9 specs, and it's going to burn as much power and throw off as much total heat AS an i9 with identical raw performance.

      It's like LED lighting. A single LED might throw off light with just milliwatts of power... but crank it up so it's throwing off EXACTLY the same amount of light as a 100-watt halogen lightbulb (measured

      • by bluefoxlucid ( 723572 ) on Thursday March 28, 2019 @10:15AM (#58347650) Homepage Journal

        ONLY true when the ARM's performance is significantly less than Intel/AMD as well.

        ARM has historically had more performance per clock than x86 and x86-64; and modern ARM chips run like 2.4GHz at a watt of peak TDP on four cores.

        Think about linear character matching ("abc" in "aaabc" -> "a=a, b!=a" -> "a=a, b!=a" -> "a=a, b=b, c=c" -> match) versus Boyer-Moore ("abc" in "aaabc" -> the text character under the needle's final 'c' is 'a', so skip ahead two positions -> "c=c, b=b, a=a" -> match). Boyer-Moore finds a string -- faster with longer search strings -- in large amounts of text with few comparisons, and thus issues fewer CPU instructions.
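
        As a rough sketch of the skip idea (this is the simplified Boyer-Moore-Horspool variant rather than the full algorithm, and the function name is just for illustration, but the principle -- skipping ahead instead of comparing every position -- is the same):

        #include <stddef.h>
        #include <string.h>

        /* Boyer-Moore-Horspool: find needle in haystack.
         * Returns a pointer to the first match, or NULL if there is none. */
        const char *bmh_search(const char *haystack, const char *needle)
        {
            size_t hlen = strlen(haystack), nlen = strlen(needle);
            size_t skip[256];

            if (nlen == 0)
                return haystack;
            if (nlen > hlen)
                return NULL;

            /* Bad-character table: how far we may safely shift when the
             * haystack character aligned with the end of the needle is c. */
            for (size_t c = 0; c < 256; c++)
                skip[c] = nlen;
            for (size_t i = 0; i + 1 < nlen; i++)
                skip[(unsigned char)needle[i]] = nlen - 1 - i;

            for (size_t pos = 0; pos + nlen <= hlen; ) {
                size_t i = nlen - 1;
                /* Compare right to left, as described above. */
                while (haystack[pos + i] == needle[i]) {
                    if (i == 0)
                        return haystack + pos;
                    i--;
                }
                /* Mismatch: jump ahead based on the last aligned character. */
                pos += skip[(unsigned char)haystack[pos + nlen - 1]];
            }
            return NULL;
        }

        Longer needles, and text that rarely contains the needle's characters, give bigger skips -- which is why the win grows with the length of the search string.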

        CPUs can implement ALUs, decoders, and pipelines to execute the same instruction code in fewer clock cycles. Just like using a different software algorithm, you can use a different hardware approach.

        Predicated (conditionally executed) instructions and a fixed-length instruction set are core to classic ARM. Literally every instruction carries a condition field. That means where you might compare for one cycle and then jump or not jump on the next cycle, ARM simply executes or skips the instruction itself. One fewer cycle, and one fewer branch.

        The decoder doesn't have to figure out instruction size, and it barely has to look at the content of an instruction predicated on the Z flag: if a SUBS r2, r2, r1 sets Z, the next instruction predicated on Z being clear is simply skipped and the decoder moves on.

        Because the CPU will read ahead and cache (preload) the next several instructions (fetches from RAM are slow!), it's technically possible to block out the next e.g. 10 instructions as IFZ [INSN], and have an ARM CPU internally identify that the next several instructions are predicated on Z and just skip the instruction pointer ahead that many. Remember: every instruction is exactly one word wide; you don't need to know what the next instruction is to know where the following instruction starts. You don't have to decode instructions that won't be executed.

        This feature frequently eliminates a large number of comparisons and jumps, trimming down the size of the code body (you'd think variable-length insns would do that, but it usually doesn't work out). More instructions fit into cache, and branch prediction becomes simpler (less power) and more effective.
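
        To make that concrete, here's a hand-rolled sketch (the assembly in the comments is illustrative, not actual compiler output) of the kind of compare-and-branch sequence that classic 32-bit ARM can replace with conditionally executed instructions:

        /* abs_diff.c -- the sort of two-armed conditional where ARM's
         * predicated instructions remove the branch entirely. */
        int abs_diff(int a, int b)
        {
            /* Generic compare-and-branch shape (pseudo-assembly):
             *       cmp   a, b
             *       jge   else_arm      ; a branch the predictor must guess
             *       ...                 ; then-arm
             *       jmp   done
             *   else_arm:
             *       ...                 ; else-arm
             *   done:
             *
             * Classic 32-bit ARM can predicate both arms on the flags set
             * by a single compare, so there is no branch at all:
             *       cmp   r0, r1
             *       suble r2, r1, r0    ; r2 = b - a  if a <= b
             *       subgt r2, r0, r1    ; r2 = a - b  if a >  b
             */
            return (a > b) ? a - b : b - a;
        }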

        ARM also has plenty of registers: AArch64 gives you 31 GPRs, versus 16 on x86-64 (several of which -- the source/destination/base/stack pointers -- carry legacy roles even though they're basically GPRs). A lot happens without using RAM as an intermediate scratch pad.

        It's like LED lighting. A single LED might throw off light with just milliwatts of power... but crank it up so it's throwing off EXACTLY the same amount of light as a 100-watt halogen lightbulb (measured from every direction), with color fidelity that's at least as good as that 100-watt halogen bulb (none of this "80+ CRI" shit, or even "92+ CRI with weak R9"), and it's going to CONSUME at least 70-80 watts and throw off almost as much heat AS the original incandescent bulb

        Halogen and incandescent bulbs are black-body emitters: much of their light is in the infrared range. LEDs are narrow emitters and use combinations of materials to emit in multiple ranges when providing white light. That means an LED operating on 100 watts of power emits about 80 watts of visible light, while a halogen operating at 100 watts emits about 20 watts of visible light, and an incandescent tungsten-coil bulb emits about 10 watts of visible light.

        An LED emitting the same broad-spectrum visible light as a 100-watt halogen would consume about 25 watts of power: roughly 20 watts of visible output divided by the ~80% efficiency assumed above.

      • by Bengie ( 1121981 )

        ONLY true when the ARM's performance is significantly less than Intel/AMD as well. Beef an ARM up to i9 specs, and it's going to burn as much power and throw off as much total heat AS an i9 with identical raw performance.

        It's actually worse. You can't really have high performance and low power usage at the same time; it's a trade-off. CPUs have gotten better, but the transistors themselves become more leaky when you design for higher performance, among other architectural trade-offs made to increase performance. And pushing transistors optimized for low power to run faster will consume more power than using transistors optimized for high performance in the first place. The gap is shrinking (pun intended), but it's still a practical difference.

    • by bluefoxlucid ( 723572 ) on Thursday March 28, 2019 @09:52AM (#58347500) Homepage Journal

      AMD created the x86-64 architecture, and is making inroads with Epyc. AMD also has some RISC-V work in the pipeline. I'm predicting RISC-V will be big: Intel may try to capitalize on ARM thanks to mobile space, and AMD will start shoving RISC-V (no license fees) into processors for Chromebooks and the like, then into servers running Linux for RISC-V or something.

      The next Raspberry Pi might be RISC-V; it's been mentioned, though nobody is proposing it -- or taking it -- seriously yet.

      AMD beat Intel once doing this. They invented a whole new architecture and killed IA-64.

    • by Bengie ( 1121981 )
      Intel/AMD use less power than ARM when it comes to compute loads. ARM is great when a box sits mostly idle, and here they're claiming I/O loads. I wouldn't mind a many-core ARM file server or router, but not an app server where the server is under high load.
  • by sad_ ( 7868 ) on Thursday March 28, 2019 @07:56AM (#58346946) Homepage

    remember that time when everybody said intel x86 would never make it in the data center...

    • Comment removed based on user account deletion
    • by dfghjk ( 711126 )

      I don't remember that time nor does anyone who modded you insightful.

    • by jwhyche ( 6192 ) on Thursday March 28, 2019 @09:56AM (#58347530) Homepage

      Can't seem to recall anyone saying that. What I do recall is that there was a significant effort for everyone to have their own custom processors -- PowerPC, SPARC, PA-RISC, Clipper, etc. etc. All of them eventually gave way to the x86.

      • I can. Of course the people saying it were the people pushing "PowerPC, SPARC, PA-RISC, Clipper, etc. etc.", but yeah, I remember the notion that 80x86 wasn't proper server hardware being expressed from time to time back in the 1980s and maybe even early 1990s.

        I don't believe anyone used the term "data center", though. Was the term even invented back then? "Mainframe" and "server", more like.

    • by Anonymous Coward

      remember that time when everybody said intel x86 would never make it in the data center...

      Hell, just use the same stupid YotLD "Linux is everywhere" argument: ARM is everywhere inside a DC already. Practically every out-of-band management card and HBA is an embedded ARM chip. They probably outnumber Intel chips in many DCs.

      If Linux running on a bunch of cell phones is some big win for Desktop Linux then I'm sure a bunch of ARM SoCs running busybox is a win for ARM in the DC right?

    • remember that time when everybody said intel x86 would never make it in the data center...

      Ooh, I member!

  • I remember when ARM was the cool kid. Now I guess it's just some old geezer yellin', "Get offa my greenboard!"

  • As I said in one of my weekly News Bits livestreams, IMHO it was always about cost, and x86 was just so much cheaper than stuff from Sun, SGI, IBM, etc., you name it: https://www.youtube.com/watch?... [youtube.com]
  • AMD Epyc is good for PCIe storage nodes, with its 128 PCIe lanes.

    Also, Ceph/ZFS like lots of RAM as well.

  • by Freischutz ( 4776131 ) on Thursday March 28, 2019 @07:59AM (#58346970)
    There's a disembodied zombie ARM in the datacenter! Oh God, it's not dead yet and, ... and it's coming for us!!!!!! AAAAAAAAAAAAAAAAAAAAAAAAAAAAH THE DOOR IS LOCKED!!!
  • by Anonymous Coward

    Another advantage is that some ARM processors aren't affected by the speculative execution vulnerabilities. In particular the ARM Cortex-A53 [raspberrypi.org], which is used in this server

    http://www.socionextus.com/products/synquacer-edge-server/ [socionextus.com]

    is immune to speculative execution vulnerabilities.

    • The A72 does, however, do speculative execution. And ARM chips aren't invulnerable to cache attacks, either. That said, I really like the A53 (as implemented by the BCM2837). But what got me was this, about Ceph:

      The ceph-deploy utility must login to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

      Hard pass, thanks.

  • Make sure your new CPU has:
    RAM.
    Power supply.
    Cooling.
    Networking.
    That can be found, that is on sale, and that people like.
    A CPU is wonderful.
    Now make all the other parts that support the CPU. That work for servers 24/7. At a low price.
    An OS would be good too. Software?
  • by aaronb1138 ( 2035478 ) on Thursday March 28, 2019 @10:38AM (#58347796)
    Both Juniper and Palo Alto use Cavium ARM processors in their hardware, usually for management-plane tasks (FPGAs and ASICs do the heavy traffic processing on high-end units). And ARM SoCs are popular for switches and routers where raw compute power isn't necessary. Certainly Cisco is the only one willing to stick with low-end, neglected Intel Atom offerings even after the Nexus 9k, ISR 4k, and ASA 55x6 series got bit by defective Atom C2000s (sorry bro, your $55k switch just died because of a $41 CPU).

    So ARM is great anytime you don't care about CPU processing power but still want to move data -- storage appliances and networking. Which is odd given that in the mobile space the few Atom x86 Android phones to reach the market posted lower raw CPU benchmark scores than their ARM contemporaries, yet in actual usage felt much smoother because of wider/faster buses and superior throttling (I had a Zenfone 2 with the Atom and it's still smoother than a lot of Snapdragon 6xx midrange phones).
  • I need suggestions for commercially made ARM systems that will work in temperature ranges from -35F to 140F (-37C to 60C) for an engineering project. These things are going to be in metal boxes on the side of Texas Highways.

    Right now we've got some very impressive Intel systems, but those are in air-conditioned boxes; I'm looking for something that can survive a non-air-conditioned box. When I look for ARM stuff I find a lot of industrial boards, but not a lot of pre-made industrial systems, especially in

  • The real answer is RISC.

    SPARC, POWER and older brother RS/6000, MIPS*, and ARM's granddaddy DEC Alpha dominated the data center space for decades. It was the cost/performance ratio of the far less efficient Intel architectures that let them win in this space.

    We could easily reduce data center footprint by 1/3 by using RISC, but that's not how a free market works.

    *I have installed huge SGI servers
