ARM Chips Designed For 480-Core Servers 132
angry tapir writes "Calxeda revealed initial details about its first ARM-based server chip, designed to let companies build low-power servers with up to 480 cores. The Calxeda chip is built on a quad-core ARM processor, and low-power servers could have 120 ARM processing nodes in a 2U box. The chips will be based on ARM's Cortex-A9 processor architecture."
Going to be expensive! (Score:5, Funny)
Cheaper way (Score:2)
Have a Beowulf cluster of cell phones.
Re: (Score:1)
the service contracts or ETF charges would cost way more than the server would.
Re:Cheaper way (Score:5, Funny)
Re: (Score:1)
Don't be a CISCy
Re: (Score:1)
Nice.
Re: (Score:1)
Re: (Score:2)
Nice. I was thinking "My God... It's full of cores!"
Re: (Score:2)
Re: (Score:2)
Your approval is enough for me. Consider me modded appropriately.
Re: (Score:2)
Mmm, reminds me of the prototype card for Acorn computers that had 32 600MHz ARM processors. They never released an estimated price, though. This was back in the early 2000s, so it would have been incredibly expensive. Cortex A9s are now in mass production, not made in the hundreds or low thousands like Acorn's chips were, so it might be cheaper than you actually think.
Re: (Score:3)
Since the A9s are in mass production, and have some vendor competition, they should be reasonably cheap, and of basically knowable price; but, depending on what sort of interconnect this thing has, you could end up paying handsomely for that. "Basically ethernet; but cut down to handle short signal paths over known PCBs" shouldn't be too bad; but if it is s
Re: (Score:1)
Re: (Score:1)
Re: (Score:2, Funny)
Right now my system doesn't even have 480 live processes on it, let alone ones contending for execution time.
You're obviously not running Gentoo.
Re: (Score:1)
is it worth it? (Score:3)
When you start piling all you can onto a chip, the power consumption is naturally going to creep up. Once you reach a certain threshold of x chips, you lose the benefit of ARM being "low-power." Am I wrong?
Re: (Score:3, Insightful)
Re: (Score:1)
Re:is it worth it? (Score:5, Interesting)
You're looking at, for a 240 core 2U node, 60W for CPUs. Pretty impressive.
Re: (Score:2)
5W average, so let's assume up to 10W per CPU according to the article.
Not bad. In fact, good enough to completely replace a commercial non-metered hosted VM offering of the kind memset (http://www.memset.co.uk/) offers at present.
The interesting question here is what is the interconnect between them. After all, who cares that you have 480 cores in 2U if 90% of the time they are twiddling their thumbs waiting for data to be delivered to them.
Re: (Score:2)
TFA said 5W per node, meaning per 4 cores + RAM. That's 600W for the entire system, which is fine for a 2U enclosure.
Aside from the interconnect, the other important question is how much RAM are they going to have? They're using the Cortex A9, not the A15, so they just have a 32-bit physical address space. In theory, this lets them have 4GB of RAM per node (1GB per core), but some of that needs to be used for memory-mapped I/O, so I'd be surprised if they got more than 3GB, maybe only 2GB. That would m
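The figures being tossed around above are easy to sanity-check. A quick Python sketch (the 5W-per-node and 120-node numbers are from TFA; the 1GB memory-mapped I/O reservation is an assumption, as the parent notes):

```python
# Back-of-the-envelope check of the article's figures.
NODES = 120          # quad-core nodes per 2U box (per TFA)
WATTS_PER_NODE = 5   # per TFA: CPU + RAM, per node
CORES_PER_NODE = 4   # Cortex-A9 quad-core

total_cores = NODES * CORES_PER_NODE
total_watts = NODES * WATTS_PER_NODE

# 32-bit physical addressing on the Cortex A9 caps each node at 4GB,
# minus whatever is reserved for memory-mapped I/O (1GB assumed here).
ADDR_SPACE_GB = 2 ** 32 / 2 ** 30
MMIO_RESERVED_GB = 1  # assumption, not from the article
usable_ram_gb = ADDR_SPACE_GB - MMIO_RESERVED_GB

print(total_cores, total_watts, usable_ram_gb)  # 480 600 3.0
```

Which matches the 480-core, 600W, roughly-3GB-per-node picture painted in the thread.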
Re: (Score:2)
Re: (Score:1)
Benchmark to back the claim up?
Besides, ARM doesn't suffer from some of the insane x86 problems.
Re: (Score:3)
The comment wasn't intended to be derogatory against the ARM. The ARM was just designed from the ground up with low power consumption in mind, not performance. The Cortex A9 has an 8-stage pipeline, 2.5 instructions per clock, around 13M transistors per core, runs at 800MHz to 1.5GHz, and has up to 512KB of L2 cache. The Pentium 3 has a 10-stage pipeline, 2.5 instructions per clock, around 10M transistors, runs at 500MHz-1.4GHz, and has up to 512KB of L2 cache. They're fairly comparable processors, with
Re: (Score:3)
They're fairly comparable processors, with the ARM probably having a better instruction dispatcher and branch predictor, and the P3 having better floating point performance.
The ARM chip probably doesn't have a better branch predictor. The Pentium 4 had a very good one, which was back-ported to the Pentium-M. The Pentium 3 one was pretty good. ARM chips didn't have one at all until very recently, because branch prediction is much less important with the ARM ISA.
A lot of ARM instructions are predicated, meaning that they are evaluated, but their results are only retired if a specific condition register is set. Branch prediction on x86 is very important, because short if se
Re: (Score:2)
600W per 2U server is possible but very impractical -- a full rack would require 12kW to power it, plus another 12kW of cooling.
I also don't believe they thought through how to stuff 120 processors and at least 120 DIMMs into a 2U case and cool them efficiently -- one ARM CPU requires no forced-air cooling, and one DIMM can be cooled by whatever air blows around for other reasons, but 120 of them need airflow, and plenty of it. If they don't use separate DIMMs and have fixed RAM (I hope, it's ECC and enough to run a d
Re: (Score:2)
Re: (Score:2)
Not with the power density they are trying to achieve -- it will mean that 10-20% of all power dissipation happens on chips with nearly perfect thermal insulation around them (board, a layer of air, and another chip). It will probably be the first device ever to overheat an ARM with the heat it produces. Even if the air gap is eliminated, RAM chips are not good at conducting heat from bottom to top.
It will make sense to place RAM on the opposite side of the board, and have airflow on both sides, but again, it
Re: (Score:2)
A lot of servers are idling for most of the day, but you need them to be able to scale up quickly at certain peak times.
Re: (Score:2)
A lot of servers are idling for most of the day, but you need them to be able to scale up quickly at certain peak times.
Do you mean power up quickly?
Re:is it worth it? (Score:4, Interesting)
Not really, the server could stay powered up the whole time (unless you really get 0% usage at non-peak times, and those times are predictable, in which case it makes sense to just power down completely at those times). By scaling up I mean enabling more cores, thus improving the processing capacity of the server. Then you'd get the best of both worlds, with the server being fine for anything from small to massive workloads, while still using less power than the equivalent x86 setup. Like modern engines which can enable or disable cylinders at will to conserve fuel when not much power is needed.
Re: (Score:1)
Re: (Score:2)
That is the benefit of arm, the threshold for how many chips you can have is much higher because each individual chip uses less power.
Re: (Score:1)
Re:is it worth it? (Score:5, Interesting)
The power/performance of the core itself remains the same whether you have 1 or 1 million. The power demands of the memory may or may not change: phones and the like usually use a fairly small amount of low-power RAM in a package-on-package stack with the CPU. For server applications, something that takes DIMMS or SODIMMs might be more attractive, because PoP usually limits you in terms of quantity.
The big server-specific questions are going to be the nature of the "fabric" across which 120 nodes in a 2U are communicating. Because 120 ports worth of 10/100 or GigE would occupy 3U and draw nonzero power themselves, I'm assuming that this fabric is either not ethernet at all, or some sort of cut-down "we don't need to care about the standards because the signal only has to travel 6 inches over boards we designed, with our hardware at both ends" pseudo-ethernet that looks like an ethernet connection for compatibility purposes; but is electrically more frugal. Whatever that costs, in terms of energy, will have to be added on to the effective energy cost of the CPUs themselves.
Then you get perhaps the most annoying variable: Many tasks are (either fundamentally, or because nobody bothered to program them to support it) basically dependent on access to a single very fast core, or to a modest number of cores with very fast access to one another's memory. For such applications, the performance of 400+ slow cores is going to be way worse than a naive addition of their individual powers would suggest. Sharing time on a fast core is both fundamentally easier, and enjoys a much longer history of development, than does dividing a task among small ones. With some workloads, that will make this box nearly useless (especially if the interconnect is slow and/or doesn't do memory access). For others, performance might be nearly as good as a naive prediction would suggest.
Re: (Score:3)
Most servers do not do heavy computing work: they serve up (dynamic) web pages, handle SQL queries, process e-mail, serve files. That sounds to me like lots and lots of threads that each have relatively little work to do.
For example, /.: serving a single page to a single visitor takes a few dozen SQL queries and the running of a Perl script to stitch it all together. This takes, say, 0.001 seconds of an x86 core's time - a wild guess, may be an order of magnitude off, good enough for the sake of
Language-imposed gratuitous use of floating point (Score:2)
But web pages won't even need you to do any floating point arithmetic.
Provided your application is written in a language that supports not-floating-point arithmetic. In PHP, for example, any division returns a floating-point result, as does any computation with numbers over 2 billion (such as the UNIX timestamps of dates past 2038).
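The same trap is easy to show in Python, which (like PHP) makes plain `/` a floating-point division; you have to reach for an explicit integer operator to stay in integer arithmetic. (A Python sketch of the pitfall, not PHP itself; unlike PHP, Python's ints don't overflow past 2 billion, so only the division half of the complaint applies.)

```python
ts = 2_200_000_000       # a UNIX timestamp past 2038; fine as an int in Python

# Plain division always produces a float, even when it divides evenly:
print(type(ts / 86400))  # <class 'float'>

# Floor division keeps the result integral:
days = ts // 86400
print(type(days), days)  # <class 'int'> 25462
```

In PHP there is no `//` operator, which is exactly the poster's point: any `/` silently promotes the computation to floating point.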
Re: (Score:2)
I've been saying for years that people should make their chunks of code smaller (eg, smaller functions, et al) so it's easier to understand and maintain. The old argument has always been that the compiler will inline it even
Re: (Score:2)
There are two arguments for hardware in the enterprise. 1: Performance-per-watt. This is substantially more capable than just about anything out there for x86 right now, shy of supercomputers.
WANTED: 1U low-power rack server (Score:2)
Right now I'm running an Intel D510 rack server with dual 2.5" drives, it's great, does a lovely job even with it running Ubuntu 10.04 server + VirtualBox ( Ubuntu 8.04 LTS ), however, I'd dearly love to shift over to something even more low-power/compact/SOC, so long as it has SATA, Ethernet, USB and runs a debian-based distro I'd be happy.
Something like a dual-core ARM machine would run ample for the server loads I'm seeing.
So, anyone seen anything like that yet? Or even just a MB in Mini-ITX ?
(btw, why
Re: (Score:2)
I want one too (probably three). But I want to run OpenBSD on mine.
Re: (Score:3)
Re: (Score:2)
Why does the spec page omit the single most important spec: power consumption?
Re: (Score:2)
Hence the question, why would they omit that most salient fact?
Re: (Score:2)
Good luck getting one of those in your hands. My coworker right across the aisle ordered one in January. Still not sure when it will ship.
Re: (Score:3)
While not in 1U format, a lot of off-the-shelf NAS boxes use ARM. My LG N2R1 NAS has an 800MHz Marvell 88F6192 and runs Lenny. I won't be surprised to see some NanoITX boards come out running similar hardware. Plus, I've been very impressed with how many Debian packages are available for ARMEL. While not perfect, it's the most useful Linux server I've ever had.
Re: (Score:2)
That's a good point about the NAS systems, they're comparatively cheap too!
Re: (Score:3)
You need to watch out with them also though. The WD Sharespace I have uses a 500MHz chip which is totally inadequate for decent throughput between the 4-disk array and the GigE interface.
And I had to write my own device support into the kernel to get it running a modern OS! It came with 2.6.12!
Re: (Score:2)
Thanks - I've seen some Netgear MS-2000 ones on sale recently for about $130 AUD, and then the RND-2000 for $250.
Meh, maybe I'll just wait for AMD to bring out their "low power" options in Mini-ITX :sigh:
Re: (Score:2)
Software-wise it's fairly nice, with support for Time Machine, AFP, CIFS etc and works great for any single task. But ask it to do more than 1 task and it just doesn't have the horsepower -- for instance copying a large file and trying to play a song causes the song playback to be delayed. If you're using an iPad to stream music o
Re: (Score:2)
Wow, that is *awesome* compared to the max transfer rate of around 24MB/s (megabytes at least, not megabits) I get out of the sharespace.
That's over vanilla FTP, and the processor is maxed out at that point. Not the drives or the network interface, the processor. Dammit so much...
Re: (Score:1)
Re: (Score:2)
It's pretty damned poor, yup. I figured the onboard software was probably crap, so I hacked mine to hell:
Managed to find the onboard serial pins and solder on a line-levelling serial adaptor, downloaded the WD GPL source, translated the needed Orion/Marvell code-tree settings into modern/mainline kernel initialisation code, built a whole bunch of custom kernels, figured out the internal flash layout and how to create u-boot kernel images and initramfs images, and eventually got it to boot Debian squeeze.
And it
Re: (Score:2)
Re: (Score:2)
A shame, even with 50% off on some, they're as expensive as something like a FitPC2 :(
I'm hoping at some point we can see a $99 personal server option, maybe cram 4~6 into a 1U rack.
Re: (Score:1)
Re: (Score:2)
How's the dual-drive support on the SheevaPlug? Looks like the Pogo also uses USB as its "drive interface".
Something like a Soekris board/case that handles two SATA drives in a RAID mirror would be nice.
The best bet for the original poster is to ask the mythtv guys for low power / fanless options, and stuff it all into a 1U case (assuming rackmount is mandatory)
Re: (Score:2)
The MythTV guys have completely different needs than an underutilized server operator. We have to deal with a very complex scheduler, which if it takes too long to run can cause problems, and with HD video that typically can only be decoded single threaded. Single threaded performance, and a lot of it, is a must, meaning our minimum recommendation is 2.5GHz Core 2 or Athlon II, or better.
That's not to say you can't be low power while you're at it. Tom's Hardware did an article last year where with not co
Re: (Score:2)
And it's useless. No 64-bit support. (Score:1)
ARM _still_ has no real 64-bit support (only something resembling x86's PAE). So building a single-image server beyond 2-4 way is not really feasible.
It's funny that we're re-living all the past x86 problems with ARM.
Re: (Score:2)
Re: (Score:2)
Yes, they do. First, if you're hosting a single web site on a single server, then you'll probably want to install more than 4GB just because RAM is so cheap now. And you'll inevitably use it (for databases, file cache, etc.). If you're hosting multiple sites on a single server, then you DEFINITELY need more than 4GB of RAM per server (as it's going to be the limiting component).
Maybe ARM is justified for large Google-style server farms doing specialized work that does not require great amounts of RAM.
Re: (Score:2)
Re: (Score:1)
PAE-like schemes always have a lot of problems. Just read Linus' rants about it :)
Re:And it's useless. No 64-bit support. (Score:5, Informative)
How about a link to this rant, if you want us to read it? And, if you've got a problem with PAE-like extensions, then I presume you're aware that both Intel's and AMD's virtualisation extensions use PAE-like addressing?
All that PAE and LPAE do is decouple the size of the physical and virtual address spaces. This is a fairly trivial extension to existing virtual memory schemes. On any modern system, there is some mechanism for mapping from virtual to physical pages, so each application sees a 4GB private address space (on a 32-bit system) and the pages that it uses are mapped to frames in physical memory. With PAE / LPAE, the only difference is that this mapping now lets you map to a larger physical address space - for example, 32-bit virtual to 36-bit physical. You see exactly the opposite of this on almost all 64-bit platforms, where you have a 64-bit virtual address space but only a 40- or 48-bit physical address space.
The big problem with PAE was that most machines that supported it came with 32-bit peripherals and no IOMMU. This meant that the peripherals could do DMA transfers to and from the low 4GB, but not anywhere else in memory. This dramatically complicated the work that the kernel had to do, because it needed to either remap memory pages from the low 4GB and copy their contents or use bounce buffers, neither of which was good for performance (which, generally, is something that people who need more than 4GB of RAM care about).
The advantage is that you can add more physical memory without changing the ABI. Pointers remain 32 bits, and applications are each limited to 4GB of virtual address space, but you can have multiple applications all using 4GB without needing to swap. Oh, and you also get better cache usage than with a pure 64-bit ABI, because you're not using 8 bytes to store a pointer into an address space that's much smaller than 4GB.
By the way, I just did a quick check on a few 64-bit machines that I have accounts on. Out of about 700 processes running on these systems (one laptop, two servers, one compute node), none were using more than 4GB of virtual address space.
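The decoupling described above is easy to see numerically. A quick sketch (the 36- and 48-bit widths are the common examples from the comment, not anything specific to Calxeda's part):

```python
def addr_space_gib(bits):
    """Size in GiB of an address space with the given width in bits."""
    return 2 ** bits / 2 ** 30

print(addr_space_gib(32))  # 4.0      -- the 32-bit virtual space each process sees
print(addr_space_gib(36))  # 64.0     -- a PAE-style 36-bit physical space
print(addr_space_gib(48))  # 262144.0 -- a typical 64-bit platform's 48-bit space

# The cache-usage point: a 64-bit ABI spends 8 bytes on every pointer
# even when the working set fits comfortably inside the 4GiB space.
POINTER_32, POINTER_64 = 4, 8
print(POINTER_64 / POINTER_32)  # 2.0 -- twice the cache footprint per pointer
```

So PAE-style schemes let the machine hold 64GiB of RAM while each process keeps its cheap 4-byte pointers, at the cost of no single process seeing more than 4GiB at once.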
Re: (Score:3)
How about a link to this rant
http://blog.linuxolution.org/archives/117 [linuxolution.org]
Re: (Score:3)
Re: (Score:2)
No, the problem is:
1) Kernel is starved for _address_ _space_ for its internal structures.
2) Userspace is starved for address space, because it has to view all the RAM through a small aperture (think EMS in 80286).
3) Constant address space remapping is costly.
And it doesn't matter that you use 64-bit pointers internally, because you can't address data directly.
Re: (Score:2)
1) Kernel is starved for _address_ _space_ for its internal structures.
This is addressed by using physical addresses in the kernel, as I said. It can use 64-bit pointers, and the compiler emits direct loads and stores that bypass the MMU.
Userspace is starved for address space, because it has to view all the RAM through a small aperture (think EMS in 80286).
Which is only relevant if the process actually wants more than 4GB of address space, i.e. not very often (yet).
Constant address space remapping is costly
True, but this is only required on x86 because the kernel is using its own virtual address space. This is not an issue on ARM.
Re: (Score:1)
Re: (Score:2)
AFAIK, most OSes shut down the MMU in kernel mode - linux for instance
Linux certainly doesn't do this on x86. It uses the segmentation mechanism. The kernel's memory is in a segment marked as visible only to ring 0 code. When you make a system call, the current process's segment(s) remain visible to the OS, as does the kernel's segment. This means that you typically have 1GB of address space reserved for the kernel, and 3GB for each userspace process. RedHat used to ship a kernel that used an entirely separate address space, so you got 4GB for the kernel and
Re: (Score:1)
Re: (Score:2)
There are so many things wrong with that, that I don't even know where to start. The MMU on x86 handles both paging and segmentation. Segments map from virtual addresses to linear addresses. Paging maps from linear addresses to physical addresses. Both are part of the virtual memory mapping handled by the MMU, which first walks the LDT / GDT, then the page tables, to translate from a virtual address to the physical.
It sounds like you're repeating something that you heard and didn't understand. What yo
Re: (Score:2)
A database server, if it's highly used, is largely stuck on the slowest part (disk I/O) when it has to do full table scans. You solve this by building proper indexes
Until you have to use a DBMS that ignores your indexes. For example, MySQL appears unable to make efficient use of an index on a subquery that uses GROUP BY. From the manual [mysql.com]: "A subquery in the FROM clause is evaluated by materializing the result into a temporary table, and this table does not use indexes. This does not allow the use of indexes in comparison with other tables in the query, although that might be useful." The only reason I haven't already rewritten it as a join is that the subquery uses GROU
Re: (Score:1)
Drop MySQL in favor of what? (Score:2)
I can't imagine a better workaround than dropping MySQL.
In favor of what? PostgreSQL, or something one has to pay for? Either way, dropping MySQL support in the next version would require a lot of clients to drop their current hosting provider and switch from (cheap) shared hosting to a (more expensive) VPS.
Re: (Score:1)
Re: (Score:2)
A proper webserver only needs one thread per core. Each socket/connection should consume only a few KB of RAM at most. A webserver shouldn't use more than a couple dozen MB of RAM at most, not including the OS file system cache. Look into Nginx or Lighttpd.
Re: (Score:2)
I do scientific computing where we regularly use virtual address spaces larger than 4GB. Not all of that is in the working set, of course, but it's often necessary to have that much mapped. One recent example is my leakage power and delay models for near-threshold circuits. I implemented the Markovic formulas and found them to be too slow. My simulations would take days. So, I figured out the granularities I needed for voltage, power, and temperature, and I implemented those models as giant look-up tab
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
Utter bollocks. I work for a data centre, and there is no way 4GB is *required* for multiple sites or anything like that. How about one server, running 20-odd Linux Jails, each with between 20-32 sites, all in 2GB.
Re: (Score:2)
Instead of virtualising ten servers on a single physical box, you could of course consider running a single server on a single piece of hardware again. And still win power- and flexibility-wise if you can get your "low-power" ARM board to cost much less than your souped-up x86 board. If only because when a single board fails, just one server goes down. Not all ten.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
It couldn't be an SMP machine, though, not with so many cores.
My bet would be that each of the 120 nodes actually is a complete computer with 4 cores and its own memory - linked to the other 119 only via Ethernet. In this arrangement the 32-bit memory limit is not such a big issue. Each individual machine will not be particularly powerful anyway.
Re: (Score:3)
Re: (Score:1)
160 more (Score:2, Funny)
Another 160 and that should be enough for anybody!
Re: (Score:2)
Dammit, second-to-last post currently and you beat me to that joke!
the real question (Score:1)
The real question is, can anyone afford to install an oracle database on that server?
iPad 20...now with 480 cores (Score:1)
Now with 480 cores....2x as fast and with 9x better graphics than the iPad 19.
Re: (Score:1)
I think you would have more luck over at ExpertSexchange.
Try titling your post 'Urgent: I password-protected my 1TB porn collection and I forgot my p/w'.
Re: (Score:2)
And you're posting on Slashdot, instead of flying your private jet to Japan to personally pick up debris and rescue people.
Oh right, only rich people have private jets, a lot of planes won't fly to Japan now, and even if you get a flight, unless you are currently in Japan with a car (most public transportation is down where help would be needed, and most Japanese people don't own cars), you'd have to walk to the disaster areas. You can't do anything except donate money and hope.
Grow up and learn that shit hap
Re: (Score:1)
So basically you want Slashdot to turn into every news outlet on earth right now?
If I want to hear more about any of the current natural disasters, the state of Libya or even what lipgloss Jooolia is wearing this week - I'll turn on the Television or read a news-corporation owned website.
This is Slashdot, News for Nerds - just because a disaster happened doesn't mean we stop wanting to know about anything else.
Jeez.
Re: (Score:1)
leave britney alone! (Score:2)
The worst natural disaster in recorded history occurred less than a week ago, and you people are discussing Calxeda's first ARM-based server chip, designed to let companies build low-power servers with up to 480 cores; as the chip is built on a quad-core ARM processor, and low-power servers could have 120 ARM processing nodes in a 2U box; chips will be based on ARM's Cortex-A9 processor architecture???? My *god*, people, GET SOME PRIORITIES!
The bodies of nearly 10,000 dead people could give a good god damn about the advent of LAN parties, your childish Lego models, your nerf toys and lack of a "fun" workplace, your Everquest/Diablo/D&D addiction, or any of the other ways you are "getting on with your life".
I have in-laws and friends in Japan, and thank God they are all fine. But even if something had happened to them, what would you expect me, a /. reader, or anyone, to do? To cut my veins and pour ash on my head? What about the rest of the readers? You are just an attention whore looking for a cause celebre to be upset about. Nothing more, as your little rant does nothing constructive.
You don't know if people reading this donated for the cause. You do not know anything about anyone here, about what they