HP Launches Moonshot

New submitter linatux writes "HP has announced their 'Moonshot 1500 server' — up to 1,800 servers per 47U rack are supported. The tech certainly seems to be an advance on what is currently available — will it be enough to revive HP's server business?" From Phoronix: "Moonshot began with Calxeda-based ARM SoCs, but in the end HP settled on Intel Atom processors. Released today was HP's Moonshot system based on the Intel Atom S1200. Hewlett-Packard claims that their Moonshot System uses 89% less energy, 80% less space, 77% less cost, and 97% less complexity than traditional servers."
  • Does it compute? (Score:5, Insightful)

    by Solid StaTe_1 ( 446406 ) on Monday April 08, 2013 @11:14PM (#43398087)

    Low power and massive numbers of parallel cores are all right, but does it compute? How do these low-power servers benchmark against EC2 or equivalent? The article didn't talk benchmarks. Maybe you get all these gains in consumed power, cost, space, etc. because it is 90% less powerful than the competition.

    • Re:Does it compute? (Score:5, Informative)

      by symbolset ( 646467 ) * on Monday April 08, 2013 @11:38PM (#43398213) Journal
      This is 1,800 servers per rack, each a dual-core 2.0GHz 64-bit Atom processor with 8GB RAM. It has a custom low-latency redundant mesh network between the nodes. For workloads like Hadoop it should be outstanding. If they're priced right, I could see them including this as a type of machine in their cloud. 3,600 cores per rack, vs. Xeon at 768 cores per rack (blades). This could be interesting.
      • Re: (Score:3, Interesting)

        No, this type of node is not appropriate for Hadoop. First of all, Hadoop is all about data locality when you run it on physical hardware (if you really need performance), and that is not the case here. Moreover, 8GB of RAM can be quite a limitation for many Hadoop-related tasks (an HBase node will require more). Today you can have a blade system with 2,000 cores per rack with AMD, so if cores matter, why would you limit yourself to Intel CPUs?
          Today you can have a blade system with 2,000 cores per rack with AMD, so if cores matter, why would you limit yourself to Intel CPUs?

          I imagine that the power draw and corresponding cooling requirements of that rack stuffed with AMD cores will be significantly higher than the Intel one.

          • I imagine that the power draw and corresponding cooling requirements of that rack stuffed with AMD cores will be significantly higher than the Intel one.

            You're imagining a workload where the CPUs are busy all the time, which is the opposite of pretty much everything but scientific computing.

            • What sort of workload is concerned with stuffing a rack with 3000+ cores only to have those cores idle?

              Besides, you don't need scientific computing workloads to keep the CPU busy. Isn't that what virtualization and over-provisioning are about?

        • You can have 2,000 cores per rack with Intel as well. Dell will sell you a 10U blade chassis you can fill with quarter-height blades, for a total of 32 servers per 10U. Each server can have up to two 8-core CPUs, which works out to 512 cores per chassis and, with four chassis per rack, 2,048 cores per rack (a quick check of that arithmetic is sketched below).
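          A back-of-the-envelope check of the core counts in the comment above; all figures come from the parent comments, not from a Dell spec sheet, so treat this as a sketch:

          ```python
          # Blade-density arithmetic as claimed in the comments above.
          servers_per_chassis = 32    # quarter-height blades in a 10U chassis
          cores_per_server = 2 * 8    # two 8-core CPUs per blade
          chassis_per_rack = 4        # four 10U chassis in a standard rack

          cores_per_chassis = servers_per_chassis * cores_per_server
          cores_per_rack = cores_per_chassis * chassis_per_rack
          print(cores_per_chassis, cores_per_rack)  # 512 2048
          ```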
      • Except for a few problems:

        Hadoop tops out around 4,000-6,000 nodes; past that you run into serious scalability issues in the JobTracker and in HDFS. Granted, with HDFS federation and YARN these should improve, but today you can't build this wider than a few racks without spending a good chunk of time doing some significant Hadoop engineering.

        Second, disk. Where's the disk? Hadoop needs disk. Hadoop likes disk. Disk likes Hadoop. Hadoop likes lots and lots of disk. Nice, you've built a 6-watt SoC. N

  • Too little, too lame (Score:2, Informative)

    by Gothmolly ( 148874 )

    HP tried this with Transmeta a while back and produced blades that completely sucked: WAY too slow. Individual machines on blades are dead unless you need HPC-type power, and Atom ain't that. If you need to squeeze 1,800 limp servers into a rack, VMware and its children are already there.

    Sorry HP, you suck. Go back to making shitty printers, and then get out of the way. Hopefully your corpse will provide the fertilizer for some new market leader to grow from.

    • Re: (Score:2, Funny)

      by gagol ( 583737 )

      Hopefully your corpse will provide the fertilizer for some new market leader to grow from.

      I hope this is a reference to the Ender's Game trilogy. Let me know!

    • by jon3k ( 691256 )
      The whole point of this is to run a lot of web and web-application servers. I'll wait until I see the benchmarks, considering the new Intel Atom processors are probably far faster than the Transmeta chips HP tried last time. They also let you mix in not only x86 but also DSP, GPU and FPGA parts depending on your workloads. I'm certainly not sold on the whole concept, but I'm interested to see the benchmarks at least.

      And yes HP, go back to making those shitty NonStop printers [hp.com].
  • Whenever Slashdot asks "Will it be enough?", what do we say, everybody? NO! We say N-O. No.

    HP has been attracting fail like it's a government project with unlimited funding and no congressional oversight. I mean seriously, we may be breaking into new physics here with the strong attractive force that all things HP have to all things Fail. And no technology is going to fix that, because the ultimate source of the bogon radiation is (wait for it) HP senior management. They'll figure out a way to screw this up,

    • by rekoil ( 168689 ) on Monday April 08, 2013 @11:35PM (#43398193)

      So...former HP customer, or former employee?

    • by Anonymous Coward on Monday April 08, 2013 @11:35PM (#43398195)

      I'm guessing you haven't actually used HP servers or compared them to the competition. In my experience they completely kick Dell's butt, and give IBM a real run for their money, at much lower cost. I evaluated a ProLiant Gen8 and the manageability features were pretty impressive. The thing can update its firmware and send SNMP traps, etc., from bare metal, without an OS.

      Granted, HP had some crappy CEOs, and on the low-end consumer stuff they race to the bottom with everyone else, but their servers are serious and arguably industry-leading. They also sell more PCs than anyone anywhere, unless you start counting every iPod touch as a "computer."

      • by FuegoFuerte ( 247200 ) on Tuesday April 09, 2013 @01:16AM (#43398597)

        I'll absolutely second this - HP's servers kick ass, quite frankly. They've had a few pretty major problems in recent years (P400 and P800 array controllers were absolute pieces of shit from a reliability standpoint, and the P410 STILL doesn't work quite right with SATA drives, though it rocks with SAS disks), but overall the engineering that goes into HP servers puts them well ahead of their competition, from what I've experienced. I've used Dell, IBM, white box, and HP, on the scale of "hundreds to thousands" of each brand, stretching back 10+ years.

        The HPs have been more reliable, more configurable, more robust (yes, this is different from reliable), more manageable, and FAR better supported. There's a reason companies pay a premium for HP hardware, and it's because it pays for itself many times over during the life of the hardware.

        There are companies and applications that don't need that kind of reliability and run on shoddy white-box hardware... think Google, Facebook, etc. There are others, particularly stateful services like telephony and conferencing, that depend on reliable hardware. For those, servers like what HP provides will always be in demand. So long as HP maintains its focus on engineering in the server space, it won't be going anywhere soon.

        • more robust (yes, this is different from reliable)

          What is the difference between robust and reliable?

          • by swalve ( 1980968 )
            They break in predictable ways. All equipment will have failures. Do you want to spend the time swapping power supplies and hard drives that tell you when they have failed, or figuring out what the fuck is the matter with this broken box?

            A good example is something I've experienced multiple times. A hard drive in an array fails hard. In a Proliant, you get a red light and the machine keeps running. In a Dell or IBM, it takes down the whole disk bus and you have to take time pulling individual drives a
      • by AK Marc ( 707885 )
        Yeah, HP servers weren't bad after they bought Compaq and pretty much abandoned the old line of HP servers.
    • I dunno.

      The HP P4000 SANs are pretty nice compared with similar equipment.

      Of course, they got them by buying LeftHand.

      But yeah, long gone are the days of the solid LaserJet 4250, with millions of prints that made them worth refurbishing.

  • The Money Shot?

  • by zbobet2012 ( 1025836 ) on Monday April 08, 2013 @11:28PM (#43398151)
    From the HP Site: "The HP Moonshot 1500 Chassis has 45 hot-pluggable servers installed and fits into 4.3U. The density comes in part from the low-energy, efficient processors. The innovative chassis design supports 45 servers, 2 network switches, and supporting components."

    Each pluggable unit supports one 2GHz Intel Atom S1200-series CPU (2 cores, 4 threads), up to one DIMM at 8GB, and one SFF SATA drive. That gives you 90 cores / 180 threads and 360GB of RAM in 4.3U.

    For comparison, a 6RU Cisco UCS chassis can put down up to 160 cores / 320 threads and 4TB of memory. Those are high-performance Xeon cores. Not sure on the $$$ per compute/memory between the two.

    The really big question is whether there are enough use cases for that many "thin" servers. At 2 cores and 8GB of RAM you are very thin by modern standards, and there is zero opportunity for vertical growth. (Rough per-rack-unit numbers are sketched below.)
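    A rough per-rack-unit comparison, computed only from the figures quoted in this comment; a sketch, not vendor-verified numbers:

    ```python
    # Cores and RAM per rack unit for the two chassis described above,
    # using the figures from this comment (assumptions, not spec sheets).
    chassis = {
        "Moonshot 1500": {"ru": 4.3, "cores": 45 * 2, "ram_gb": 45 * 8},
        "Cisco UCS":     {"ru": 6.0, "cores": 160,    "ram_gb": 4096},
    }

    for name, c in chassis.items():
        print(f"{name}: {c['cores'] / c['ru']:.1f} cores/RU, "
              f"{c['ram_gb'] / c['ru']:.0f} GB RAM/RU")
    # Moonshot 1500: 20.9 cores/RU, 84 GB RAM/RU
    # Cisco UCS: 26.7 cores/RU, 683 GB RAM/RU
    ```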
    • It looks like a decent option for hosting companies to sell dedicated web servers to people, or internally for a company that is running virtual desktops.
      • by jon3k ( 691256 )
        Better in what regard than virtualizing a host with big, beefy Xeon processors? I'd like to see the workload where this comes out ahead. Honestly, I'm interested in following this; it very well may be better, I just don't know yet.
        • One of the services you can buy from a hosting company is your own dedicated physical server; this will allow them to sell that service cheaper than a Xeon server, but still marked up higher than a virtual one.
          • by jon3k ( 691256 )
            That's definitely true, but it seems like an awfully small market for so much engineering. That can't possibly be the only intended use, right?
            • The press releases aren't particularly clear, but it seems the next iteration will have 4 processor nodes per cartridge, which should make it more feasible for HPC.
    • I don't really understand the market for something like this either. When the S1200 was launched, Intel was careful to point out that if you try to scale it up as a cheap alternative to E5/E7 Xeons, the economics and power consumption of the S1200 (let alone the complexity of an order of magnitude more servers to manage) are not favourable. Totally understandable, as Intel would be foolish to cannibalize its own Xeon market.

      Having said that, I do like the S1200, but more for something like a low traffic VP

      • What I find curious about HP's design is how half-hearted it is about being a heavily-integrated blade box:

        For $60k you'd expect the chassis to handle more than just power and cooling (and it does apparently handle networking between the server modules and between the server modules and the switch modules, and I assume that HP's chassis-management software is baked in in various places); but every single node still has its own dinky little hard drive, just waiting to die, and RAM is also per-node and cannot

    • For comparison, a 6RU Cisco UCS chassis can put down up to 160 cores / 320 threads and 4TB of memory. Those are high-performance Xeon cores. Not sure on the $$$ per compute/memory between the two.

      With cheap commodity 1U boxes, you can get 64 (ahem) modules in 1U, or if you prefer, 256 in 4U, along with 2TB of RAM (512GB/U is cheaply achievable). Not as good as Xeon cores, but they completely thrash Atom clock for clock, and they clock higher. You can also fit in 64TB of disc.

      It will set you back about $40k for that lot, as

    • by jon3k ( 691256 )
      HP can do 512 threads in a C7000 (16 half-height blades, 2 sockets per blade, 8c/16t per socket: 16 threads * 2 sockets * 16 blades = 512 threads) in 16U, which comes out to 32 threads per rack unit; ironically that's the same as a 1RU server with 2x 8c/16t Xeons. Cisco UCS comes out to a little over 50 threads per rack unit, but IIRC Cisco can still do more memory per socket as well. The ironic part about all this amazing density is that no one can fill a rack with these things because no one can provi
  • by thedarknite ( 1031380 ) on Monday April 08, 2013 @11:41PM (#43398239) Homepage
    Someone at Phoronix really needs to learn basic math. The chassis is 4.3U and holds 45 of these Moonshot servers, so a 47U rack could fit 10 chassis for a total of 450 servers.
    • by hawguy ( 1600213 )

      Someone at Phoronix really needs to learn basic math. The chassis is 4.3U and holds 45 of these Moonshot servers, so a 47U rack could fit 10 chassis for a total of 450 servers.

      Yeah, I think they got confused between threads and servers. Each server has one 2-core CPU, and each core can "handle" 2 hyperthreads. So 450 servers * 2 cores * 2 threads = 1800 (the arithmetic is sketched below).

      Not nearly as impressive.
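      A quick check of that theory, using only the rack math from the parent comments (a sketch; another comment above notes a future 4-nodes-per-cartridge iteration that would also yield 1,800):

      ```python
      # How the "1,800 per rack" headline can fall out of counting threads
      # rather than servers; figures taken from the parent comments.
      rack_u, chassis_u, servers_per_chassis = 47, 4.3, 45

      chassis_per_rack = int(rack_u // chassis_u)                 # 10
      servers_per_rack = chassis_per_rack * servers_per_chassis   # 450
      threads_per_rack = servers_per_rack * 2 * 2                 # 2 cores x 2 threads
      print(servers_per_rack, threads_per_rack)                   # 450 1800
      ```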

  • "[...]and 97% less complexity than traditional servers."
    Wait, what? How in the world did they measure this? I'm seriously curious how they arrived at such a dubious number.
    • by dkf ( 304284 )

      "[...]and 97% less complexity than traditional servers."

      Wait, what? How in the world did they measure this? I'm seriously curious as to this dubious number.

      "Now with 67.3% more dubious numbers than traditional advertising copy!"

    • "Hi, yeah, could I get a number 2 with a coke? Oh, and large fries. And can you reduce the complexity on that? By how much? I don't know, 100%? Oh, you can only do 97? Ok fine, I'll take that. Oh, and a chocolate shake."

  • Come on, I call shenanigans on this one. Seriously: a site where the majority of submissions seem to take at least a day or more to propagate to actually being posted has a post about a random new HP product, where the only really informative link is to what basically amounts to a press release hosted on HP's own site?

    I understand things are tough all over and you gotta make money to survive, but do they really think their readers are that stupid?

    • by Maudib ( 223520 )

      I doubt they would pay to have a post go live at midnight.

    • by linatux ( 63153 )

      I don't know about Slashdot's priorities when it comes to deciding what makes the cover, but I submitted in good faith.

      I'm not a fan of HP by any means & we have truck-loads of their servers at work. The concept for this sounded interesting & maybe there is a place for it in the 'cloud'.
      What we really need at work is big kick-ass servers like IBM's new POWER7 machines (IBM - please direct debit my account ASAP)

      • by Dahamma ( 304068 )

        Ok, mod me into oblivion if I'm wrong. I guess once in a blue moon there is someone with a 5-digit ID as a "first time submitter" ;) I was just rather surprised they posted a story with a link to their own article that was posted a few hours before the referencing one... the last couple of times I had a story posted it was at least a day after I submitted it. And of course the HP AD I SAW next to it didn't help the situation one bit...

        • by linatux ( 63153 )

          :-) 1st time accepted, 2nd or 3rd submission I think. It's sad how the quality of stories seems to have declined over the last few years, but the quality of the posts & the turns they take still sometimes surprise me. Without their seasoned contributors, this site would fade away in no time. I hope the people running it recognise that.

  • Considering that only a small percentage of even IT people understand just how much server horsepower is required for the typical tasks environments demand of them, I don't expect huge demand for these Moonshot servers. The specs, however, are very well suited to many applications used in small to medium-sized businesses. And for those who would see an appropriate use for these, the price of the chassis is very ugly.
