
HP Announces ARM-Based Server Line

Posted by Soulskill
from the go-small-or-go-home dept.
sammcj writes with news that HP is developing servers based on 32-bit ARM processors from Calxeda. Their current model is only a test setup, but they plan to roll out a finalized design by the middle of next year. "HP's server design packs 288 Calxeda chips into a 4U rack-mount server, or 2,800 in a full rack, with a shared power, cooling, and management infrastructure. By eliminating much of the cabling and switching devices used in traditional servers and using the low-power ARM processors, HP says it can reduce both power and space requirements dramatically. The Redstone platform uses a 4U (7-inch) rack-mount server chassis. Inside, HP has put 72 small server boards, each with four Calxeda processors, 4GB of RAM and 4MB of L2 cache. Each processor, based on the ARM Cortex-A9 design, runs at 1.4GHz and has its own 80 gigabit cross-bar switch built into the chip"

  • SATA?! (Score:1, Insightful)

    by Anonymous Coward on Wednesday November 02, 2011 @05:33AM (#37917518)

    Come on, guys, it's 2011. We're talking servers here. Forget SATA; throw in native iSCSI support (or Fibre Channel, but iSCSI would probably be significantly easier - if only because it uses standard Ethernet ports rather than needing extra protocol support), and you'll have something that's a serious contender in that space.

    Think about it: with SATA, you have a bunch of hard disks, probably mostly under-used, almost all of them performing atrociously (SATA is notorious for only being good at large sequential I/O). With iSCSI, you can hook up any disk array you damn well want, whatever its performance characteristics. Throw 10Gb Ethernet into the mix and you have a winner (an expensive winner once you factor in the switch ports, but at least it gives the architect the option).

    • by Junta (36770) on Wednesday November 02, 2011 @06:59AM (#37917962)

      FC/FCoE/iSCSI all deliver much, much lower aggregate I/O performance than coordinated use of direct-attached storage. Google, Hadoop, GPFS and Lustre all facilitate that sort of usage. With any of those remote-disk architectures you will have an I/O bottleneck somewhere along the line.

      That said, I would presume netboot at least would be there, and from there you can certainly do iSCSI in software. FCoE tends to be a bit pickier, so they may not be able to do that in the network fabric provided.

      On the whole, I'm still skeptical. So far ARM has proved itself where low power is critical, not performance. I'm not sure the performance per watt is going to be impressive (e.g. if it hypothetically takes 10% of the power of a competitor but gives 9% of the performance, that can work well for places like cell phones but perhaps not so much for a datacenter). ARMv8 may make things very interesting, though...

      • by postbigbang (761081) on Wednesday November 02, 2011 @07:59AM (#37918272)

        You can argue, successfully, that via virtualization and multi-core relationships the ARM power argument is goofy, as the number of threads per process and virtualization both favor the CISC architectures. The ARM infrastructure, however, could be the foundation for a couple of decent server product lines. The architecture cited is very much like ganging a bunch of ARM CPUs together to do what more power-hungry quad/multi-core Intel and AMD chips are doing today. Remember: the ARM is 32-bit, and the number of threads is limited both by the inherent architecture and by the memory ceiling.

        What's scary to me is that someone wrote that it has a crossbar switch on it without understanding what that implies in terms of inter-CPU communications, cache, cache sync/coherence, etc. A well-designed system will perform almost as well with iSCSI (on a non-blocking, switched backplane) as it will with SAS, so I/O isn't quite the issue; the power claim versus the thread-density-per-watt-expended claim has yet to be proven.

      • by Lumpy (12016) on Wednesday November 02, 2011 @09:24AM (#37919070) Homepage

        Bah, why? Build a metric buttload of RAM onto it and have it simply snapshot the ramdisks to rotating media when changes are made, using a coprocessor so the main processor can scream along (a rough sketch of the idea is below). You get insane speeds, and RAM is dirt cheap. If each processor had 64 gigs of RAM, each could run 4 website VMs with plenty of memory and storage and still outperform the quad bonded OC-48 connections into the server farm.

        This is how Comcast's video-on-demand system runs: the main spinning-storage servers spool out to ramdisk-only servers at the local headends.
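
        A minimal sketch of what that snapshotting could look like in software, assuming a tmpfs mount as the "ramdisk" and a directory on rotating media as the target (the paths and the 30-second interval are invented for illustration; real coprocessor offload is obviously beyond a toy script):

          # Hypothetical background snapshotter: copy files that changed on the
          # ramdisk (tmpfs) out to rotating media, so the main workload keeps
          # serving from RAM while this process handles persistence.
          import os
          import shutil
          import time

          RAMDISK = "/mnt/ramdisk"         # assumed tmpfs mount
          SNAPSHOT = "/mnt/spinning/snap"  # assumed directory on rotating media
          INTERVAL = 30                    # seconds between sweeps (arbitrary)

          def sweep():
              for root, _dirs, files in os.walk(RAMDISK):
                  for name in files:
                      src = os.path.join(root, name)
                      rel = os.path.relpath(src, RAMDISK)
                      dst = os.path.join(SNAPSHOT, rel)
                      os.makedirs(os.path.dirname(dst), exist_ok=True)
                      # Copy only files that are new or changed since the last sweep.
                      if (not os.path.exists(dst)
                              or os.path.getmtime(src) > os.path.getmtime(dst)):
                          shutil.copy2(src, dst)

          if __name__ == "__main__":
              while True:
                  sweep()
                  time.sleep(INTERVAL)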

      • by OneMadMuppet (1329291) on Wednesday November 02, 2011 @03:14PM (#37924178) Homepage
        Not where I work, they don't. I/O on VMs (ESX, etc.) is generally woeful, and it's significantly faster to pass through an FC card and access LUNs on a DMX or VMAX than to use local storage. Hadoop uses local storage for a completely different reason.
        • by Junta (36770) on Wednesday November 02, 2011 @05:03PM (#37925552)

          Ignoring virtualization overhead (which is a factor): if the storage is underutilized then yes, a massive amount of cache and a large number of spindles one FC hop away can, in certain scenarios, blow away one or two local spindles. The problem is that as you push utilization up, the equation tips the other way. If you have low utilization, or an insane number of disks behind the FC relative to the number of hosts in the SAN, the SAN can do better. Most places I see are heavily utilized on a relatively small amount of storage relative to the number of systems, due to pricing, and there I/O to dedicated disks reigns supreme.

          I would say the 'Hadoop-like' use case covers the likely set of customers ready to entertain something as exotic as an ARM server anyway, so local disk is very appropriate.

    • by JoeMerchant (803320) on Wednesday November 02, 2011 @08:15AM (#37918372) Homepage

      Has anybody seen the Googleplex "server" spec? From what little I've read, I'd assume they're on SATA.

    • by fuzzyfuzzyfungus (1223518) on Wednesday November 02, 2011 @09:23AM (#37919064) Journal
      The fact that they've special-magic-backplane-fabric-ed away all the other buses, while leaving each card bristling with SATA connectors, seems rather weird: that's a lot of headers to bring out if nobody is going to use them, and it'll be a hell of a rat's nest if you actually try (could they really not have stretched their backplane fabric a little bit further, to include allocating direct-attached storage to nodes across it?).

      The use of SATA, though, seems reasonable enough given the low-performance, low-cost, low-energy focus of the design. It just seems really weird that the connectors are on the cards, rather than there being a few high-density SAS connectors on the back, letting you either use an iSCSI device over the 10GbE ports or directly cable up a big SATA/SAS cage, with disks farmed out over the backplane rather than via internal SATA cabling...
  • by unixisc (2429386) on Wednesday November 02, 2011 @05:37AM (#37917542)

    Let's count - they have Xeon/Opteron, Itanium, and among their dead platforms, they have PA-RISC, Alpha (DEC/Compaq) and MIPS (Tandem/Compaq). What made them pick this for servers?

    Would one be right in guessing that their Itanium based Integrity servers have been a disaster?

  • by Viol8 (599362) on Wednesday November 02, 2011 @05:49AM (#37917600)

    With the world moving to 64 bits to accommodate huge databases in memory and on disk, they must be aiming for the low-hanging fruit here. Still, I'd like to get hold of one IF they ever convert it into a desktop version - it would be nice to have a Linux installation at home that doesn't pay homage to Wintel in any way.

    • by unixisc (2429386) on Wednesday November 02, 2011 @06:21AM (#37917784)
      Not just that - what does ARM have that HP's other processors don't? Even if one doesn't count PA-RISC and Alpha, which are dead, HP could still use MIPS processors in their platforms. And how would Xeons be any worse?
    • by janoc (699997) on Wednesday November 02, 2011 @06:34AM (#37917854)

      Easy - ARM doesn't yet have 64-bit cores available; they were only recently announced. It will take a while until the manufacturers license them and integrate them into their products, and only then can HP buy them and build a server around them.

      From the looks of it, this prototype machine is unlikely to be built for databases (4GB of RAM per chip is not a lot for something like Oracle), so the 32-bit limit is not really an issue. On the other hand, this screams HPC cluster/supercomputing or some other well-parallelizable load, such as web serving. A 32-bit CPU is plenty for that; 64-bit on a server buys you only more RAM, not much else.

      It would be *very* interesting to see a performance comparison between this solution and the traditional Intel one. Even if it is only 50% as fast, it should give Intel a lot to worry about - the higher installation density and the power savings will easily outweigh the raw performance advantage Intel may have.

      • by unixisc (2429386) on Wednesday November 02, 2011 @06:45AM (#37917912)
        All very good - but what about the software? What software are they going to offer on ARM that's not already on Xeon (which itself comes in both 32-bit and 64-bit flavors)? And what performance advantage will ARM bring? If it's power consumption, how compelling is the argument to switch to a completely new platform w/ little supported software (no, Android apps don't count) and no performance advantage, just to lower the electric bills? HP might as well have worked w/ either Intel or AMD to get lower-powered Xeons or Opterons to market.
        • by Anonymous Coward on Wednesday November 02, 2011 @07:47AM (#37918202)

          The Register had the best analysis of the sales pitch for one of these:

          "The sales pitch for the Redstone systems [the HP hyperscale offering with the EnergyCore ARM boards], says Santeler, is that a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.

          A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m. The big, big caveat is, of course, that you need a workload that can scale well on a modestly clocked (1.1GHz or 1.4GHz), four-core server chip that only thinks in 32-bits and only has 4GB of memory."

          That makes economic sense (rough arithmetic on those figures below).

          As to software - what is the problem? I run Ubuntu on an always-on ARM box at home. Pretty much anything written for Linux can be compiled for ARM instead of x86.
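
          Working the quoted figures through (a quick sketch in Python; the inputs are just the numbers from the Register quote above, nothing independently measured):

            # Back-of-the-envelope comparison of the quoted half-rack Redstone
            # setup (1,600 ARM nodes) vs. 400 two-socket Xeon servers doing,
            # per the quote, the same amount of work.
            arm_kw, arm_cost, arm_cables, arm_racks = 9.9, 1_200_000, 41, 0.5
            x86_kw, x86_cost, x86_cables, x86_racks = 91.0, 3_300_000, 1_600, 10

            print(f"Power ratio (x86/ARM):   {x86_kw / arm_kw:.1f}x")         # ~9.2x
            print(f"Cost ratio (x86/ARM):    {x86_cost / arm_cost:.2f}x")     # 2.75x
            print(f"Cabling ratio:           {x86_cables / arm_cables:.0f}x") # ~39x
            print(f"Rack-space ratio:        {x86_racks / arm_racks:.0f}x")   # 20x

          Per node that works out to roughly 6 W and $750 on the ARM side versus about 230 W and $8,250 per Xeon server - with the big workload caveat the quote itself spells out.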

          • by Afell001 (961697) on Wednesday November 02, 2011 @09:23AM (#37919058)
            If... if... if... you have access to the source code, have software vendors working (or willing to work) on a recompile, or have an in-house development team that is familiar with the ARM architecture, including the best practices for getting the highest performance out of it. This is the Achilles' heel, really. Toss a stone and you'll hit a halfway-competent developer who understands x86; not so easy with any of the RISC architectures, and to find efficient coders working with ARM processors you'll have to go shopping in the mobile development market. Most businesses are conservative anyway, and won't take the extra effort or spend the extra money to switch platforms, especially if the ARM architecture only offers lukewarm benefits compared to staying with tried-and-true x86.
            • by janoc (699997) on Wednesday November 02, 2011 @12:01PM (#37921350)
              That's a red herring. For the majority of Linux applications you *do have* the source code, thanks to OSS licensing. And you won't even have to recompile it yourself; there are distros targeting ARM already. The only exceptions are proprietary applications like Oracle, SAP or Exchange, but this machine isn't designed for such workloads (Oracle needs more memory; SAP and Exchange are Windows-only).

              Regarding development - development for Linux on ARM is exactly the same as development for Linux on x86, and very similar to any other Unix. Most people do not write in assembler anymore, and from the point of view of a business application writer the platform differences are negligible.

            • by RocketRabbit (830691) on Wednesday November 02, 2011 @06:09PM (#37926466)

              Remember the early days of Linux? Silly people trying to run a wanna-be Unix on their little piddly home computers using the ridiculous Intel architecture. What a bunch of tards. Those little pissant boxes didn't even have SCSI*, and certainly didn't sport the massive RAM expansion that a real computer like a VAX could boast.

              Of course, the naysayers from back then are all retired now, and those piddly X86 machines run practically all the servers on the planet, and that OS has turned into a multi-billion dollar operation.

              The other point I'd make is that in this day, developers emphatically do NOT sit there hand optimizing all their code. This is the job of the compiler suite and it has been since probably the late 80s.

              If you have an in-house developer team making an in-house product which they control the source of, they can probably have it running on your new ARM box in a few hours.

              With the amount of data processing that truly huge operations do, and figuring that an Intel solution costs you at least 10x as much just for the electricity bill, trust me - vendors and in-house developers both will be either learning all they can about ARM or looking for a job in a different field. Intel is in serious trouble and this is the first real crack in the wall that shows through to the other side.

              * - recalling a debate I had in 1995 with a senior IT guy at an unnamed corporation, explaining why Linux and X86 would never win. He always gravitated back to the SCSI in his arguments.

        • by Pieroxy (222434) on Wednesday November 02, 2011 @07:56AM (#37918248) Homepage

          Linux provides good server software. Ubuntu has even released Ubuntu Server for ARM.

          As for performance per watt, that's the key point, and it is missing from the article. A pity.

          That said, what makes an architecture successful? I think it's the amount of R&D that everyone puts into it. x86 has seen obscene amounts of R&D compared with other platforms. ARM is getting a fair share with all the smartphones and tablets nowadays. So in my view, it is much, much better to bet on ARM for the future than to unearth a dead platform.

        • by janoc (699997) on Wednesday November 02, 2011 @11:55AM (#37921258)
          FYI - ARM has been well supported by Linux since ages ago, not only via Android. These CPUs have been around for a very long time, probably longer than Intel's Xeon. So while you probably won't run Exchange or IIS on such a machine in the near future, it will do just fine for everything else. There are plenty of uses for non-Windows servers...
    • by Imbrondir (2367812) on Wednesday November 02, 2011 @06:42AM (#37917894)

      In 2010 ARM announced a 40-bit physical addressing extension (LPAE) for 32-bit ARMv7 - that's 2^40 bytes, or 1 terabyte of addressable RAM. Which should be enough for everybody :)

      On the other hand, ARM announced the 64-bit ARMv8 a couple of days ago. But you probably can't buy one of those for another 6-12 months or so. Perhaps HP is simply using the ARM chips available now as a pilot for when the knight in shining 64-bit address space comes along.

    • by raddan (519638) * on Wednesday November 02, 2011 @07:33AM (#37918110)
      There are plenty of applications that don't need to be able to address 64 bits worth of memory. Think webapps. Lots of cores with fast I/O are what you want. Core speed itself is less important since you're usually I/O bound.
    • by Alioth (221270) <no@spam> on Wednesday November 02, 2011 @09:42AM (#37919350) Journal

      Not all servers accommodate huge databases. There are plenty of servers that have to service large numbers of users for tasks which are not computationally or memory intensive. 32-bit is likely to be better for these kinds of tasks.

    • by White Flame (1074973) on Wednesday November 02, 2011 @03:38PM (#37924510)

      Each thread/process deals with a 32-bit slice of a larger processing domain. Even when working with huge databases, there's no reason each processing node can't work well within 1GB of RAM. (It seems there are 4 cores per 4GB of RAM.)

      In the "many low-power CPU" strategy, saddling each CPU to work with 64-bit by default could be a real waste of memory bandwidth compared to the actual slice of the workload that it will get. But I expect this line to get full 64-bit just for ease & transparency in not too long. The full 64-bit ARM stuff has been announced already, but is still a few years out.

    • by RocketRabbit (830691) on Wednesday November 02, 2011 @05:54PM (#37926270)

      The low hanging fruit is probably 95% of the server market. Most servers sit around all day doling out a few files and maybe handling email. This could all have been done on a PDP-11 with plenty of juice left over.

      Whatever fantasy land you are living in sounds very hot and noisy. Take a look at how many machines in a typical corporate datacenter are running under any significant load sometime - it's usually only a few, if any.

  • by CrazyBusError (530694) on Wednesday November 02, 2011 @06:02AM (#37917672) Homepage
    Are we going back to transputers again, then?
  • Like most DSLAMs (Score:5, Informative)

    by La Gris (531858) <lea,gris&noiraude,net> on Wednesday November 02, 2011 @06:02AM (#37917676) Homepage

    This type of setup is already used in most DSLAMs: a full rack, two PSUs, cooling, 24- or 48-port (x)DSL cards with ARM CPUs acting as independent servers, an internal management card and a network switch. Think of blade server racks.

  • by bertok (226922) on Wednesday November 02, 2011 @06:24AM (#37917806)

    Those processors run at only about 1.1 GHz, and ARM isn't quite as snappy on a "per GHz" basis as a typical Intel core because of the power-vs-speed tradeoff, so I figure that a 1.1 GHz quad-core ARM chip has about the same compute power as a single ~3GHz latest-generation Intel Xeon core.

    They say they can pack 288 quad-core ARM processors into 4 rack units (with no disks). For comparison, HP sells blade systems that let you pack 16 dual-socket blades into 10 rack units; populate each socket with a 10-core Intel Xeon and we're talking 320 cores. On that basis, ARM gives you the equivalent of 72 Xeon-class cores per rack unit (one per chip, by my estimate above) versus 32 with Intel. The memory density is the other way around, with 288 GB per rack unit for ARM and 614 GB with Intel (worked numbers below).

    So, if you have an embarrassingly parallel problem to solve that can fit into 4GB of memory per node, doesn't use much I/O, and can run on Linux, this might be a pretty good idea.
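
    Working those densities through (a quick sketch; the "one quad-core ARM chip roughly equals one fast Xeon core" factor is just the estimate from this comment, not a benchmark, and the Intel memory figure is taken as quoted):

      # Per-rack-unit density comparison using the figures in the comment above.
      arm_chips, arm_ru, arm_gb_per_chip = 288, 4, 4
      intel_blades, intel_ru, intel_cores_per_blade = 16, 10, 2 * 10  # dual 10-core Xeons

      arm_equiv_cores_per_ru = arm_chips / arm_ru            # one Xeon-class core per
                                                             # chip, per the estimate above
      arm_gb_per_ru = arm_chips * arm_gb_per_chip / arm_ru
      intel_cores_per_ru = intel_blades * intel_cores_per_blade / intel_ru
      intel_gb_per_ru = 614                                  # GB/RU, as quoted

      print(arm_equiv_cores_per_ru, arm_gb_per_ru)   # 72.0 288.0
      print(intel_cores_per_ru, intel_gb_per_ru)     # 32.0 614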

  • This looks to me to be similar to the Bluegene supercomputers. A Bluegene essentially consists of packaged PowerPC processors with a scalable high-performance switch interface on board. The first two Bluegene generations used 32-bit CPUs as well.

    Markus

  • by drfishy (634081) on Wednesday November 02, 2011 @07:22AM (#37918060)
    Make a Minecraft themed one and I will find a reason to need it.
  • by Sez Zero (586611) on Wednesday November 02, 2011 @08:45AM (#37918626) Journal

    So, HP, are you really going to do this, or should I just wait a few weeks for the cancellation announcement?

    'Cause recently you guys have been a little wishy-washy...

  • by bberens (965711) on Wednesday November 02, 2011 @09:34AM (#37919234)
    Where would this fit in the market? My first thought is things with a high number of threads but low compute complexity, like web servers or something, but Oracle essentially flopped in that arena with their ultrasparc or whatever it was with a bunch of threads. It's possible ARM is very fast, but I'm only accustomed to seeing it in set-top boxes, phones, and such. My understanding is they're great on power consumption but not so great on compute speed...
    • by Amouth (879122) on Wednesday November 02, 2011 @10:06AM (#37919716)

      Oracle essentially flopped in that arena with their ultrasparc or whatever it was with a bunch of threads

      It was Sun who did it, before Oracle bought them - it was the Niagara CPU line. It didn't flop: for the people who needed it and were Sun customers it was wonderful, but outside of that ecosystem it had nearly zero application. Then Oracle bought Sun, and, well, everything seems to have flopped after that.

  • by strangel (110237) <strangel AT antitime DOT net> on Wednesday November 02, 2011 @10:30AM (#37920020) Homepage

    ...does it run Android?

  • by FunkyELF (609131) on Wednesday November 02, 2011 @11:20AM (#37920768)

    What kind of applications would this be used for? The only thing I can think of would be web hosting. Does KVM / Xen even work on ARM?

    There wouldn't be any serious enterprise applications that would run on ARM (right now), would there? Java?
