
ARM-Based Servers Coming In 2011

Posted by timothy
from the leg-and-torso-on-the-way dept.
markass530 writes with this from the EE Times: "Arm Holdings chief executive officer Warren East told EE Times Wednesday that servers based on ARM multicore processors should arrive within the next twelve months. The news confirms previous speculation stemming from Google's acquisition of Agnilux and a recent job advertisement posted by Microsoft. East said that the current architecture, designed for client-side computing, can also be used in server applications."
  • by wvmarle (1070040) on Friday April 30, 2010 @05:55AM (#32042214)

    And how about small businesses?

    I bet those millions of servers handling an office of five people can happily do with half the horsepower and 10% of the power use.

    And I'm not just thinking of my own business... with a 1.8 GHz or so Intel-based computer idling most of the time while handling the e-mail and files of my staff and me.

  • by devent (1627873) on Friday April 30, 2010 @05:57AM (#32042220) Homepage
    I favour anyone who can build and deliver a laptop with 12 hours of battery life. In addition, a low-power ARM server for office work (small and medium enterprises) would be nice to have, too. I think most users don't care whether it's x86 or ARM, as long as their applications run and it's a good deal. I, for one, am really glad to finally see some innovation in desktop CPUs. I had thought that in 20 years we would still be using x86-compatible CPUs.
  • by gzipped_tar (1151931) on Friday April 30, 2010 @06:09AM (#32042256) Journal

    I've always thought that the x86 architecture is a dead horse beaten to the speed of light. It is the 21st century and we need something slightly better than rocks and sticks and x86 to throw at the old monstrosity known as computation. If we're still going to depend on x86 in 20 years I'd rather kill myself by banging my head against an x86 chip.

  • Re:To serve what (Score:3, Insightful)

    by ByOhTek (1181381) on Friday April 30, 2010 @06:23AM (#32042306) Journal

    You don't need multiple fast cores necessarily - it depends on the server.

    You do need good I/O on most servers. The early benchmarks of the Sun T1 were a nice example of this. IIRC it had 8 cores, each with four hardware threads, back when x86 was single- and dual-core. The cores were wimpy, but on many server applications (web, file, and I believe database) it beat x86.

    You need a lot of cores, yes, but they don't need to be powerful for most server applications - since most are parallel.

  • by petermgreen (876956) <plugwashNO@SPAMp10link.net> on Friday April 30, 2010 @06:53AM (#32042418) Homepage

    The problem is more the apps: Windows itself could probably be ported without too much trouble, but most Windows apps are likely to have code that makes x86-specific assumptions, and they are closed source, so only the vendors can fix them.

    Emulation is an option, but unless ARM cores start performing a LOT better than Intel cores in a similar power envelope, that won't help much.

  • by TheRaven64 (641858) on Friday April 30, 2010 @07:06AM (#32042468) Journal

    Another advantage might be lowering the number of components. A Beagleboard would make a great low-volume server, except that it lacks any way other than USB for connecting disks and network adaptors. The same ARM core with the GPU removed and a couple of SATA and GigE controllers added would be a great SMB server platform. You could pop the OS and most apps in the flash and connect an external disk for served files. With the disk spun down, you'd be using under 2W for the rest of the system.

    Performance per Watt is a useful metric, but performance-that-you-actually-use per Watt is a better one. There's no advantage to making the machine take 10W and be 100 times as fast if it's already powerful enough for your needs.

  • by squizzar (1031726) on Friday April 30, 2010 @07:24AM (#32042570)
    But in this case that's a good thing. It suggests that they have designed portable code (it was one of the goals of the NT architecture) so they should be able to move to another platform.
  • by Anonymous Coward on Friday April 30, 2010 @07:34AM (#32042624)

    No, benchmarks really aren't BS. Ultimately, if you want to compare two systems you have to run a test workload and compare based on that. Otherwise it's all just theoretical performance.

    The problem with benchmarks is that they sometimes don't represent real use cases, so sometimes you don't get realistic results. This was the problem with the notorious Transmeta Crusoe benchmarks, for example.

    Theoretical ARM vs x86 comparisons omit considerations such as the difficulty of making a true superscalar out-of-order ARM with similar issue width to recent x86s. When you go superscalar out-of-order with ARM, all of the RISC-like benefits basically cease to apply, because you're dealing with instructions that read three registers and update two more, and instructions that can do stupid things like address the PC as if it were a GPR. At that level of performance the ARM benefits are gone, and you have something very like an x86 in terms of performance per watt and chip area. The dirty secret about the ARM ISA is that it's massively braindamaged, which is why they'd like everyone to be using Thumb2 now please.

  • by Joce640k (829181) on Friday April 30, 2010 @07:48AM (#32042696) Homepage

    MS provides email, Outlook, SQL and web server applications. Why would you need anything more?

  • by IGnatius T Foobar (4328) on Friday April 30, 2010 @08:03AM (#32042758) Homepage Journal

    As the cost of energy continues to rise (due to purely political reasons rather than any actual scarcity, which is sad) there's going to be more and more demand for computing equipment with low power consumption. ARM fits that requirement nicely ... and it's all going to be running Linux, even if Microsoft enters the game.

    Why?

    Windows running on ARM would suffer from the same (imho perceived) problem that desktop Linux on x86 has: it wouldn't be able to run Windows x86 binaries. In fact, for Microsoft it would actually be worse because they'd have to deal with irate customers who thought they'd be able to pop in that CD and install some application they already own.

    Linux has been playing this one well by establishing a large base of open source software that can be built on any platform. Combine this with your favorite APT or YUM repository and what do you get? The equivalent of an "app store" which is something the world is now quite familiar with. Linux for the win!

  • by DrDitto (962751) on Friday April 30, 2010 @08:35AM (#32042936)
    ARM currently supports 4 GB of memory, since the ISA is 32-bit. Full 64-bit addressing support is years away. Interim "PAE"-style extensions will be just as ugly and unused as x86 PAE.
  • by Alastair (3224) on Friday April 30, 2010 @09:28AM (#32043350) Homepage

    I'm already running an Arm based server. It's called a QNAP NAS and the TS419P runs a Marvell Feroceon CPU "Feroceon 88FR131 rev 1 (v5l)" (cpuinfo).

    It's running Debian Lenny (2.6.30-2-kirkwood) and thanks go to the Debian Arm team and Martin Michlmayr. Runs great.

    Alastair

  • by wvmarle (1070040) on Friday April 30, 2010 @09:33AM (#32043398)

    Why Windows? I thought we were talking about servers here.

  • by Anonymous Coward on Friday April 30, 2010 @10:08AM (#32043750)

    Still waiting for my ARM-based, Linux-running smartbook.

  • by Anonymous Coward on Friday April 30, 2010 @10:49AM (#32044218)

    I think the word "server" confuses the fuck out of some people. They think it's code for "heavy workload" when in 90% of the cases, it's exactly the opposite.

  • by Anonymous Coward on Friday April 30, 2010 @10:58AM (#32044308)

    If I see another idiot claiming that LLP64 is a "hack" for the sake of endian compatibility, I'm going to smash something. Yes, Windows uses LLP64 most of the time. That's because too many developers used things like DWORD in their structure definitions, which would be broken if DWORD was suddenly 64 bits wide.

    And anyone who has ever made the "assumption" that sizeof(void*) == sizeof(long) is an idiot. Sorry, but if you rely on something the standard doesn't guarantee, you accept the results. You shouldn't ever be putting pointers into longs anyway.

  • by ckaminski (82854) <[ckaminski] [at] [pobox.com]> on Friday April 30, 2010 @12:23PM (#32045656) Homepage
    The MIPS/Alpha/PowerPC failure of Windows was caused by one thing only:

        The disgustingly cheap price of the Pentium Pro.

    For $10,000 you could have the same (two-socket) performance as a $40,000 Netpower or a $30,000 DEC Alpha.

    Intel's volume and engineering skill are what made porting to anything except Intel a waste of time, except for some very special applications.

    The fact that MIPS/Alpha/PowerPC were all 64-bit CPU platforms back in 1996 should incense anyone who bought into the Itanium myth. Thank GOD we had AMD around to force Intel to move to x64.
  • by Anonymous Coward on Friday April 30, 2010 @12:39PM (#32045918)

    Correct me if I'm wrong, but wouldn't the Nano make a better choice for a low power server chip, with its hardware based encryption support?

    In the vast majority of situations where low-power server chips make sense, no. If you are serious about low power, the Nano doesn't even begin to compare to ARM, and if you are willing to accept something with the Nano's power envelope, you already have enough CPU that encryption won't even be noticed; this thing is going to be doing light duty and is limited to network speeds anyway.

    Furthermore, if the machine reboots for some reason and you don't notice for a while, how is it going to do its job if it can't get to its own data until you decrypt it? What if you're gone for a week or two with something important running on your server, and it's sitting at a prompt waiting for a password before you can even log in? You're stuck until you get back. At the very least, encryption on a server is a hassle. Anything that really requires encryption can be done on an as-needed basis, and hardware acceleration is unlikely to be much benefit.

    The Nano will run existing x86 software, so backwards compatibility is no problem

    Oh, I get it, you're a Windows troll. For people doing real work, backwards compatibility between architectures on servers ceased [wikipedia.org] being a problem long ago.

  • by shutdown -p now (807394) on Friday April 30, 2010 @02:27PM (#32047388) Journal

    If I see another idiot claiming that LLP64 is a "hack" for the sake of endian compatibility, I'm going to smash something. Yes, Windows uses LLP64 most of the time. That's because too many developers used things like DWORD in their structure definitions, which would be broken if DWORD was suddenly 64 bits wide.

    Presumably, if Windows went LP64 tomorrow, this wouldn't mean that DWORD suddenly became 64 bits wide. It would just mean that DWORD would become a typedef for unsigned int, rather than unsigned long.
