Hardware

ARM Chips Designed For 480-Core Servers

angry tapir writes "Calxeda revealed initial details about its first ARM-based server chip, designed to let companies build low-power servers with up to 480 cores. The Calxeda chip is built on a quad-core ARM processor, and low-power servers could have 120 ARM processing nodes in a 2U box. The chips will be based on ARM's Cortex-A9 processor architecture."
  • by ikarys ( 865465 ) on Monday March 14, 2011 @04:27AM (#35477390)
    It'll likely cost an ARM and a leg.
    • Have a beowulf cluster of cell phones.

    • No.
    • by lwsimon ( 724555 )

      Nice. I was thinking "My God... It's full of cores!"

    • Mmm, reminds me of the prototype card for Acorn computers that had 32 x 600MHz ARM processors. They never released an estimated price, though. This was back in the early 2000s, so it would have been incredibly expensive. Cortex-A9s are now in mass production, not made in the hundreds or low thousands of units the way Acorn's hardware was, so they might be cheaper than you think.

      • I suspect that cost will largely boil down to the "fabric", type unspecified, and whatever the "because we can" premium for this device happens to be.

        Since the A9s are in mass production, and have some vendor competition, they should be reasonably cheap, and of basically knowable price; but, depending on what sort of interconnect this thing has, you could end up paying handsomely for that. "Basically ethernet; but cut down to handle short signal paths over known PCBs" shouldn't be too bad; but if it is s
        • The second they try something smart, someone is gonna pull an HT-interconnected chip and screw them over. Though the cache coherency protocol may be an issue. OTOH, 4x1Gbps backplane Ethernet is standard and wouldn't be too expensive to slap between chips, letting Xen in cluster mode handle the cache coherency. Beowulf SSI in a box. Sounds nice.
    • I am dying.. you have killed me. Way too funny for a Monday morning. Now I am at work literally laughing out loud and I can't explain what is funny to anyone who will get it... I am dead inside, killed by your humorous post...
  • by metalmaster ( 1005171 ) on Monday March 14, 2011 @04:36AM (#35477424)

    When you start piling all you can onto a chip, the power consumption is naturally going to creep up. Once you reach a certain threshold of x chips, don't you lose the benefit of ARM being "low-power"? Am I wrong?

    • Re: (Score:3, Insightful)

      by swalve ( 1980968 )
      It's low power in that the cores that aren't being used can (I assume) be shut down, like a switch-mode power supply versus a linear one, so you are always using the least amount of power possible.
      • by SlashV ( 1069110 )
        The analogy with a switch-mode power supply is completely b0rked. It doesn't contain any cores. (Furthermore, switching off cores in a multicore server is completely unlike the 'switching' in a switch-mode power supply.)
    • Re:is it worth it? (Score:5, Interesting)

      by L4t3r4lu5 ( 1216702 ) on Monday March 14, 2011 @04:44AM (#35477444)
      Cortex A9 is 250mW per core at 1GHz [wikipedia.org]

      You're looking at, for a 240 core 2U node, 60W for CPUs. Pretty impressive.
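
      (A quick sketch of that arithmetic in Python, using the 250mW-per-core figure above and the article's 120-node count; the 240-core total in the parent and the ~5W-per-node figure quoted below are the commenters' own assumptions.)

      cores_per_node = 4
      nodes_per_2u = 120                       # per the article summary
      mw_per_core = 250                        # Cortex-A9 at 1GHz, per the comment above

      cpu_watts = cores_per_node * nodes_per_2u * mw_per_core / 1000
      print(cpu_watts)                         # 120.0 W for 480 cores (CPU cores only)
      print(nodes_per_2u * 5)                  # ~600 W if each node (cores + RAM) draws 5 W
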
      • by arivanov ( 12034 )

        5W average, so let's assume up to 10W per CPU according to the article.

        Not bad. In fact, good enough to completely replace a commercial non-metered hosted VM offering of the kind Memset (http://www.memset.co.uk/) offers at present.

        The interesting question here is what is the interconnect between them. After all, who cares that you have 480 cores in 2U if 90% of the time they are twiddling their thumbs waiting for data to be delivered to them.

        • TFA said 5W per node, meaning per 4 cores + RAM. That's 600W for the entire system, which is fine for a 2U enclosure.

          Aside from the interconnect, the other important question is how much RAM are they going to have? They're using the Cortex A9, not the A15, so they just have a 32-bit physical address space. In theory, this lets them have 4GB of RAM per node (1GB per core), but some of that needs to be used for memory-mapped I/O, so I'd be surprised if they got more than 3GB, maybe only 2GB. That would m

          • 512MB per core really isn't bad at all, when you consider that the core has about the same performance as a 10-year-old Pentium 3.
            • Benchmark to back the claim up?
              Besides, ARM doesn't suffer from some of the insane x86 problems.

              • The comment wasn't intended to be derogatory against the ARM. The ARM was just designed from the ground up with low power consumption in mind, not performance. The Cortex A9 has an 8-stage pipeline, 2.5 instructions per clock, around 13M transistors per core, runs at 800MHz to 1.5GHz, and has up to 512KB of L2 cache. The Pentium 3 has a 10-stage pipeline, 2.5 instructions per clock, around 10M transistors, runs at 500MHz-1.4GHz, and has up to 512KB of L2 cache. They're fairly comparable processors, with

                • They're fairly comparable processors, with the ARM probably having a better instruction dispatcher and branch predictor, and the P3 having better floating point performance.

                  The ARM chip probably doesn't have a better branch predictor. The Pentium 4 had a very good one, which was back-ported to the Pentium-M. The Pentium 3 one was pretty good. ARM chips didn't have one at all until very recently, because branch prediction is much less important with the ARM ISA.

                  A lot of ARM instructions are predicated, meaning that they are evaluated, but their results are only retired if a specific condition register is set. Branch prediction on x86 is very important, because short if se

          • 600W per 2U server is possible but very impractical -- a full rack will require 12kW to power it and another 12kW of cooling.

            I also don't believe they've thought through how to stuff 120 processors and at least 120 DIMMs into a 2U case and cool them efficiently -- one ARM CPU requires no forced-air cooling, and one DIMM can be cooled by whatever air blows around for other reasons, but 120 of them need airflow, and plenty of it. If they don't use separate DIMMs and have fixed RAM (I hope, it's ECC and enough to run a d

            • They almost certainly aren't using DIMMs. To get the power consumption that they talk about, they'll be using MobileDDR in a package-on-package (PoP) configuration. This means that the ARM SoC and the memory are cooled as a single unit.
              • Not with the power density they are trying to achieve -- it will mean that 10-20% of all power dissipation happens on chips with nearly perfect thermal insulation around them (board, a layer of air and another chip). It will probably be the first device ever to overheat an ARM with the heat it produces. Even if the air gap is eliminated, RAM chips are not good at conducting heat from bottom to top.

                It will make sense to place RAM on the opposite side of the board, and have airflow on both sides, but again, it

    • by Bert64 ( 520050 )

      That is the benefit of ARM: the threshold for how many chips you can have is much higher, because each individual chip uses less power.

    • Yes, you are wrong.
    • Re:is it worth it? (Score:5, Interesting)

      by fuzzyfuzzyfungus ( 1223518 ) on Monday March 14, 2011 @05:30AM (#35477588) Journal
      It really depends on how much (and what kind of) support hardware ends up being involved in having lots and lots of them together in some useful way. That, and what inefficiencies, if any, are present because your workload was really expecting a smaller number of higher-performance cores.

      The power/performance of the core itself remains the same whether you have 1 or 1 million. The power demands of the memory may or may not change: phones and the like usually use a fairly small amount of low-power RAM in a package-on-package stack with the CPU. For server applications, something that takes DIMMs or SODIMMs might be more attractive, because PoP usually limits you in terms of quantity.

      The big server-specific question is going to be the nature of the "fabric" across which 120 nodes in a 2U are communicating. Because 120 ports' worth of 10/100 or GigE would occupy 3U and draw nonzero power themselves, I'm assuming that this fabric is either not Ethernet at all, or some sort of cut-down "we don't need to care about the standards because the signal only has to travel 6 inches over boards we designed, with our hardware at both ends" pseudo-Ethernet that looks like an Ethernet connection for compatibility purposes but is electrically more frugal. Whatever that costs, in terms of energy, will have to be added on to the effective energy cost of the CPUs themselves.

      Then you get perhaps the most annoying variable: many tasks are (either fundamentally, or because nobody bothered to program them to support it) basically dependent on access to a single very fast core, or to a modest number of cores with very fast access to one another's memory. For such applications, the performance of 400+ slow cores is going to be way worse than a naive addition of their individual powers would suggest. Sharing time on a fast core is both fundamentally easier, and enjoys a much longer history of development, than dividing a task among small ones. With some workloads, that will make this box nearly useless (especially if the interconnect is slow and/or doesn't do memory access). For others, performance might be nearly as good as a naive prediction would suggest.
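
      (An illustrative sketch of that last point in Python, using a plain Amdahl's-law model; the 8-core comparison and the assumption that each fast core is 10x a slow one are illustrative guesses, not figures from the article.)

      def speedup(parallel_fraction, cores):
          # Amdahl's law: speedup relative to one core of the same type.
          serial = 1.0 - parallel_fraction
          return 1.0 / (serial + parallel_fraction / cores)

      # 480 slow cores vs. 8 cores that are each ~10x faster than a slow one,
      # both measured relative to a single slow core.
      for p in (0.50, 0.90, 0.99):
          many_slow = speedup(p, 480)
          few_fast = 10 * speedup(p, 8)
          print(f"parallel={p:.2f}  480 slow: {many_slow:6.1f}x  8 fast: {few_fast:6.1f}x")
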
      • Most servers do not do heavy computing work: they serve up (dynamic) web pages, handle SQL queries, process e-mail, serve files. That sounds to me like lots and lots of threads that each have relatively little work to do.

        For example, /.: the serving of a single page to a single visitor will take a few dozen SQL queries and the running of a Perl script to stitch it all together. This takes, say, 0.001 seconds of time on an x86 core - a wild guess, may be an order of magnitude off, good enough for the sake of

        • But web pages won't even need you to do any floating point arithmetic.

          Provided your application is written in a language that supports non-floating-point arithmetic. In PHP, for example, any division returns a floating-point result, as does any computation with numbers over 2 billion (such as the UNIX timestamps of dates past 2038).
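
          (A rough Python 3 parallel of the PHP behaviour described above, since Python's / operator also promotes to float; the over-2-billion overflow part is specific to PHP's 32-bit integers and doesn't carry over.)

          ts = 2_200_000_000       # a Unix timestamp past 2038, larger than 2**31
          print(type(ts / 1000))   # <class 'float'> -- '/' silently goes floating point
          print(type(ts // 1000))  # <class 'int'>   -- '//' keeps the arithmetic integral
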

      • by npsimons ( 32752 ) *

        It really depends on how much (and what kind of) support hardware ends up being involved in having lots and lots of them together in some useful way. That, and what inefficiencies, if any, are present because your workload was really expecting a smaller number of higher-performance cores.

        I've been saying for years that people should make their chunks of code smaller (e.g., smaller functions) so it's easier to understand and maintain. The old argument has always been that the compiler will inline it even

    • There are two arguments for hardware in the enterprise. 1: Performance per watt. This is substantially more capable than just about anything out there for x86 right now, short of supercomputers.

  • Right now I'm running an Intel D510 rack server with dual 2.5" drives. It's great and does a lovely job, even running Ubuntu 10.04 server + VirtualBox (with an Ubuntu 8.04 LTS guest). However, I'd dearly love to shift over to something even more low-power/compact/SoC; so long as it has SATA, Ethernet and USB and runs a Debian-based distro, I'd be happy.

    Something like a dual-core ARM machine would be ample for the server loads I'm seeing.

    So, has anyone seen anything like that yet? Or even just a motherboard in Mini-ITX?

    (btw, why

    • I want one too (probably three). But I want to run OpenBSD on mine.

    • Take a look at the PandaBoard [pandaboard.org], if you want a low-power, dual-core ARM server, although you'd have to use CF + USB for storage, not SATA. Note, however, that VirtualBox is x86-only. If you want virtualisation, you're currently pretty limited on ARM. There is a Xen port, but it's not really packaged for end users yet.
      • by fnj ( 64210 )

        Why does the spec page omit the single most important spec: power consumption?

      • Good luck getting one of those in your hands. My coworker right across the aisle ordered one in January. Still not sure when it will ship.

    • While not in a 1U format, a lot of off-the-shelf NAS boxes use ARM. My LG N2R1 NAS has an 800MHz Marvell 88F6192 and runs Lenny. I won't be surprised to see some Nano-ITX boards out running similar hardware. Plus, I've been very impressed with how many Debian packages are available for armel. While not perfect, it's the most useful Linux server I've ever had.

      • by inflex ( 123318 )

        That's a good point about the NAS systems, they're comparatively cheap too!

        • by Nursie ( 632944 )

          You need to watch out with them, though. The WD ShareSpace I have uses a 500MHz chip which is totally inadequate for decent throughput between the 4-disk array and the GigE interface.

          And I had to write my own device support into the kernel to get it running a modern OS! It came with 2.6.12!

          • by inflex ( 123318 )

            Thanks - I've seen some Netgear MS-2000 ones on sale recently for about $130 AUD, and then the RND-2000 for $250.

            Meh, maybe I'll just wait for AMD to bring out their "low power" options in Mini-ITX :sigh:

            • I bought an RND-2000 and 2 fairly slow 2TB drives (5900 rpm for less noise) since it was to be installed in my bedroom. I got the whole thing shipped with 2 drives for around $430

              Software-wise it's fairly nice, with support for Time Machine, AFP, CIFS etc and works great for any single task. But ask it to do more than 1 task and it just doesn't have the horsepower -- for instance copying a large file and trying to play a song causes the song playback to be delayed. If you're using an iPad to stream music o
              • by Nursie ( 632944 )

                Wow, that is *awesome* compared to the max transfer of around 24MB (bytes at least, not bits) I get out of the sharespace.

                That's over vanilla ftp and the processor is max'd at that point. Not the drives or the network interface, the processor. Dammit so much...

          • I've had miserable performance with mine. Start moving data to it and the interface comes back with "Too Busy!" for 2 weeks. Then it slowed down and needed to be rebooted.
            • by Nursie ( 632944 )

              It's pretty damned poor, yup. I figured the onboard software was probably crap so I hacked mine to hell:

              Managed to find the onboard serial pins and solder on a line-levelling serial adaptor, downloaded the WD GPL source, translated the needed Orion/Marvell code tree settings to modern/mainline kernel initialisation code, built a whole bunch of custom kernels, figured out the internal flash layout and how to create u-boot kernel images and initramfs images, and eventually got it to boot Debian Squeeze.

              And it

    • Comment removed based on user account deletion
      • by inflex ( 123318 )

        A shame, even with 50% off on some, they're as expensive as something like a FitPC2 :(

        I'm hoping at some point we can see a $99 personal server option, maybe cram 4~6 into a 1U rack.

        • by fatphil ( 181876 )
          Do any of the later offerings that followed the SheevaPlug, such as the GuruPlug, do what you want?
  • ARM _still_ has no real 64-bit support (only something resembling x86's PAE). So building a single-image server beyond 2-4 way is not really feasible.

    It's fun that we're having all the past x86 problems with ARM.

    • by jabjoe ( 1042100 )
      Do many websites need a 64bit memory range? I don't think so. Big database servers and the like, yes, but I doubt many website servers.
      • by Cyberax ( 705495 )

        Yes, they do. First, if you're hosting a single web site on a single server then you'll probably want to install more than 4GB just because RAM is so cheap now. And you'll inevitably use it (for databases, file cache, etc.). If you're hosting multiple sites on a single server, then you DEFINITELY need more than 4GB of RAM per server (as it's going to be the limiting component).

        Maybe ARM is justified for large Google-style server farms doing specialized work which does not require great amounts of RAM.

        • by GeLeTo ( 527660 )
          ARM's Large Physical Address Extension (LPAE) allows access to up to 1TB of memory. While I doubt individual applications will use this, it will allow each virtualized host on the server to use 4GB of memory.
          • by Cyberax ( 705495 )

            PAE-like schemes always have a lot of problems. Just read Linus' rants about it :)

            • by TheRaven64 ( 641858 ) on Monday March 14, 2011 @06:41AM (#35477782) Journal

              How about a link to this rant, if you want us to read it? And, if you've got a problem with PAE-like extensions, then I presume you're aware that both Intel's and AMD's virtualisation extensions use PAE-like addressing?

              All that PAE and LPAE do is decouple the size of the physical and virtual address spaces. This is a fairly trivial extension to existing virtual memory schemes. On any modern system, there is some mechanism for mapping from virtual to physical pages, so each application sees a 4GB private address space (on a 32-bit system) and the pages that it uses are mapped to pages of physical memory. With PAE / LPAE, the only difference is that this mapping now lets you map to a larger physical address space - for example, 32-bit virtual to 36-bit physical. You see exactly the opposite of this on almost all 64-bit platforms, where you have a 64-bit virtual address space but only a 40- or 48-bit physical address space.

              The big problem with PAE was that most machines that supported it came with 32-bit peripherals and no IOMMU. This meant that the peripherals could do DMA transfers to and from the low 4GB, but not anywhere else in memory. This dramatically complicated the work that the kernel had to do, because it needed to either remap memory pages from the low 4GB and copy their contents or use bounce buffers, neither of which was good for performance (which, generally, is something that people who need more than 4GB of RAM care about).

              The advantage is that you can add more physical memory without changing the ABI. Pointers remain 32 bits, and applications are each limited to 4GB of virtual address space, but you can have multiple applications all using 4GB without needing to swap. Oh, and you also get better cache usage than with a pure 64-bit ABI, because you're not using 8 bytes to store a pointer into an address space that's much smaller than 4GB.

              By the way, I just did a quick check on a few 64-bit machines that I have accounts on. Out of about 700 processes running on these systems (one laptop, two servers, one compute node), none were using more than 4GB of virtual address space.
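
              (A toy Python sketch of the decoupling described above: 32-bit virtual addresses translated through a table whose frame numbers can point above 4GB. The single flat table and 4KB pages are simplifications for illustration, not the real LPAE page-table format.)

              PAGE_SHIFT = 12                        # 4 KiB pages
              PAGE_MASK = (1 << PAGE_SHIFT) - 1

              # Virtual page number -> physical frame number. Frame numbers may exceed
              # 2**20, so physical memory can exceed 4 GiB even though every pointer an
              # application sees is still 32 bits wide.
              page_table = {
                  0x00001: 0x0000002,                # lands in the low 4 GiB
                  0x00002: 0x1234567,                # lands above 4 GiB
              }

              def translate(vaddr):
                  vpn = vaddr >> PAGE_SHIFT          # a real MMU walks multi-level tables
                  return (page_table[vpn] << PAGE_SHIFT) | (vaddr & PAGE_MASK)

              print(hex(translate(0x00001abc)))      # 0x2abc
              print(hex(translate(0x00002abc)))      # 0x1234567abc, beyond the 32-bit limit
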

              • by pmontra ( 738736 )

                How about a link to this rant

                http://blog.linuxolution.org/archives/117 [linuxolution.org]

                • His complaint basically boils down to the fact that the kernel needs to be able to map all of physical memory, and have some address space left over for memory-mapped I/O. This is a valid complaint for a kernel developer (although Linus' 'everyone who disagrees with me is an idiot' style is quite irritating), but it is largely irrelevant to the issue at hand. There is nothing stopping a kernel on ARM with LPAE from using 64-bit pointers internally. You still need to translate userspace pointers, but you nee
                  • by Cyberax ( 705495 )

                    No, the problem is:
                    1) Kernel is starved for _address_ _space_ for its internal structures.
                    2) Userspace is starved for address space, because it has to view all the RAM through a small aperture (think EMS in 80286).
                    3) Constant address space remapping is costly.

                    And it doesn't matter that you use 64-bit pointers internally, because you can't address data directly.

                    • 1) Kernel is starved for _address_ _space_ for its internal structures.

                      This is addressed by using physical addresses in the kernel, as I said. It can use 64-bit pointers, and the compiler emits direct loads and stores that bypass the MMU.

                      Userspace is starved for address space, because it has to view all the RAM through a small aperture (think EMS in 80286).

                      Which is only relevant if the process actually wants more than 4GB of address space, i.e. not very often (yet).

                      Constant address space remapping is costly

                      True, but this is only required on x86 because the kernel is using its own virtual address space. This is not an issue on ARM.

                    • AFAIK, most OSes shut down the MMU in kernel mode - Linux for instance. Address space remaps are costly because of a lot of explicit, non-cached memory accesses. Though I don't see why some more PAE bits can't replace 64-bit mode - you just need an IOMMU. And possibly hardware virtualization with a simple hypervisor. Though that might actually be faster, considering all the savings you make from pointers, not to mention that if the MMU and wide load/store instructions trap to the hypervisor directly - the c
                    • AFAIK, most OSes shut down the MMU in kernel mode - Linux for instance

                      Linux certainly doesn't do this on x86. It uses the segmentation mechanism. The kernel's memory is in a segment, marked as only visible to ring 0 code. When you make a system call, the current process's segment(s) remain visible to the OS, as does the kernel's segment. This means that you typically have 1GB of address space reserved for the kernel, and 3GB for each userspace process. RedHat used to ship a kernel that used an entirely separate address space, so you got 4GB for the kernel and

                    • Paging is shut down. The MMU does paging. The segmentation mechanism is separate.
                    • There are so many things wrong with that, that I don't even know where to start. The MMU on x86 handles both paging and segmentation. Segments map from virtual addresses to linear addresses. Paging maps from linear addresses to physical addresses. Both are part of the virtual memory mapping handled by the MMU, which first walks the LDT / GDT, then the page tables, to translate from a virtual address to the physical.

                      It sounds like you're repeating something that you heard and didn't understand. What yo

              • by Theovon ( 109752 )

                I do scientific computing where we regularly use virtual address spaces larger than 4GB. Not all of that is in the working set, of course, but it's often necessary to have that much mapped. One recent example is my leakage power and delay models for near-threshold circuits. I implemented the Markovic formulas and found them to be too slow. My simulations would take days. So, I figured out the granularities I needed for voltage, power, and temperature, and I implemented those models as giant look-up tab

                • If you are doing scientific computing, then you are not in the target market for a system like this. The virtual address space size is the least of your problems - the relatively anaemic floating point performance is going to cripple your performance.
                • You're more the target market for a nice G34 AMD system - 24 cores in 2 sockets, 64GB of RAM. This is more about serving lots of PHP.
            • by GeLeTo ( 527660 )
              Linus' rant is about using PAE in a desktop environment, which I agree with (that's why I said I doubt any applications will use PAE). It says nothing about virtualisation. LPAE will work just fine for VMs.
        • by Anonymous Coward

          Utter bollocks. I work for a data centre, and there is no way 4GB is *required* for multiple sites or anything like that. How about one server, running 20-odd Linux jails, each with between 20 and 32 sites, all in 2GB?

        • Instead of virtualising ten servers on a single physical box, you could of course consider running a single server on a single piece of hardware again. And still win power/flexibility wise if you can get your "low-power" ARM board to cost much less than your souped up x86 board. If only because if a single board fails, just one server goes down. Not all ten.

      • Even programs that you wouldn't expect to need much memory often benefit heavily, as any modern desktop or server OS uses free RAM for disk caching. Adding more memory means fewer slow, slow disk reads are needed.
      • by Bengie ( 1121981 )
        64-bit memory range? Each node is going to have its own memory slot(s). 120 cores, 4 cores per node = 30 nodes. If you plan to have less than 4GB of memory in this system, how small does each stick have to be when you plug 30 in? ~128MB. Good luck finding a bunch of DDR2/3 128MB sticks to plug into your 4GB 120-core web server. Anyway, each node needs its own local copy of the data it needs to serve up. If your web page needs ~256MB, each node is going to need the same 256MB of data duplicated, plus any ext
    • by JackDW ( 904211 )

      It couldn't be an SMP machine though, not with so many cores.

      My bet would be that each of the 120 nodes actually is a complete computer with 4 cores and its own memory - linked to the other 119 only via Ethernet. In this arrangement the 32-bit memory limit is not such a big issue. Each individual machine will not be particularly powerful anyway.

      • This kind of arrangement gets brought up over and over - one of the more recent examples is SiCortex, and it sucked. Having a Single System Image is always preferable to a "cluster in a box."
  • 160 more (Score:2, Funny)

    by Hognoxious ( 631665 )

    Another 160 and that should be enough for anybody!

  • by Anonymous Coward

    The real question is, can anyone afford to install an Oracle database on that server?

  • Now with 480 cores....2x as fast and with 9x better graphics than the iPad 19.
