Upgrades Hardware IT Technology

ARM Readies Cores For 64-Bit Computing

snydeq writes "ARM Holdings will unveil new plans for processing cores that support 64-bit computing within the next few weeks, and has already shown samples at private viewings, InfoWorld reports. ARM's move to put out a 64-bit processing core will give its partners more options to design products for more markets, including servers, the source said. The next ARM Cortex processor to be unveiled will support 64-bit computing. An announcement of the processor could come as early as next week, and may provide further evidence of a collision course with Intel."
  • by MarcQuadra ( 129430 ) on Friday November 19, 2010 @07:14PM (#34286952)

    You don't see the use?

    Low-latency, bare-metal file servers that consume only a few watts but can natively handle huge filesystems and live encryption? It's a lot easier to handle a multi-TB storage array when you're 64-bit native, and the same goes for encryption (a small illustration of the 32-bit limits follows this comment). Look at Linux benchmarks for 32 vs 64-bit filesystem and OpenSSH performance.

    Do you have any idea how many $4,000 Intel Xeon boxes basically sit and do nothing all day at the average enterprise? If you can put Linux on these beasties, you could have a cheap place for projects to start; if load ever kills the 2GHz ARM blade, you can migrate the app over to an Intel VM or bare metal. I'll bet 80% of projects never leave the ARM boxes, though.

    My whole department (currently seven bare-metal Intel servers and five VMs) could run entirely off a few ARM boxes running Linux. It would probably save an employee's worth of power, cooling, upkeep, and upgrade costs every year.
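
    A minimal sketch (not from the comment above) of the 32-bit limits in question, assuming a Linux/glibc build; compare a gcc -m32 build against a plain 64-bit one. A 32-bit process can be given a 64-bit off_t, so file offsets past 4GB work, but size_t and pointers stay 4 bytes, which is what makes multi-TB arrays and large mappings awkward:

        #define _FILE_OFFSET_BITS 64   /* request a 64-bit off_t even on a 32-bit build */
        #include <stdio.h>
        #include <sys/types.h>

        int main(void)
        {
            /* File offsets beyond 4GB become representable... */
            printf("sizeof(off_t)  = %zu\n", sizeof(off_t));
            /* ...but the process still cannot map or index a multi-TB
             * array directly: size_t and pointers stay 4 bytes wide. */
            printf("sizeof(size_t) = %zu\n", sizeof(size_t));
            printf("sizeof(void*)  = %zu\n", sizeof(void *));
            return 0;
        }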

  • by bradgoodman ( 964302 ) on Friday November 19, 2010 @07:34PM (#34287188) Homepage
    MOD UP!
  • by KiloByte ( 825081 ) on Friday November 19, 2010 @07:39PM (#34287238)

    The N900 may be a nice device otherwise, but only 256MB of RAM is totally crippling; most recent smartphones come with 512MB these days. So even just for RAM, having merely "plans" for migrating to 64-bit today is not overkill, it's long overdue.

    About your idea of just mmapping everything: the speed difference between memory and disk/flash is so big that the current split is pretty vital to a non-toy OS. I'd limit mmap to specific tasks, for which it is indeed underused.
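
    For what it's worth, a minimal sketch of the kind of task-specific mmap use being suggested, with a made-up input file name; it maps a whole file read-only and walks it as ordinary memory, which on a 32-bit system only works while the file fits in the address space:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("data.bin", O_RDONLY);   /* hypothetical input file */
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
            if (st.st_size == 0) { close(fd); return 0; }   /* nothing to map */

            unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            unsigned long sum = 0;
            for (off_t i = 0; i < st.st_size; i++)
                sum += p[i];                       /* treat the file as memory */
            printf("checksum: %lu\n", sum);

            munmap(p, st.st_size);
            close(fd);
            return 0;
        }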

  • by KiloByte ( 825081 ) on Friday November 19, 2010 @07:58PM (#34287466)

    Also, the idea of persistent programs has been thought of before. Heck, I once came up with it myself when I was studying (>12 years ago) and talked about it with a professor (Janina Mincer). She immediately pointed out a number of flaws:
    * you'll lose all your data the moment your program crashes. Trying to continue from a badly inconsistent state just ensures further corruption. COWing it from a snapshot is not a solution either, since you don't know whether the snapshot already had some hidden corruption.
    * there is no way to make an upgrade -- even for a trivial bugfix
    * config files are human-editable in sane systems for a reason; having the setup only in internal variables would destroy that

  • by TheRaven64 ( 641858 ) on Friday November 19, 2010 @08:10PM (#34287558) Journal

    Look at Linux benchmarks for 32 vs 64-bit filesystem and OpenSSH performance

    What benchmarks are you looking at? If you're comparing x86 to x86-64, then you are going to get some very misleading numbers. In addition to the increased address space, x86-64 gives:

    • A lot more registers (if you're doing 64-bit operations, x86-32 only has two usable registers, meaning a load and a store practically every other instruction).
    • The guarantee of SSE, meaning you don't need to use (slow) x87 instructions for floating point.
    • Addressing modes that make position-independent code (i.e. anything in a .so under Linux) much faster.
    • Shorter instruction sequences for some common instructions, replacing some short-but-rarely-used sequences.

    Offsetting this is the fact that all pointers are now twice as big, which means that you use more instruction cache. On a more sane architecture, such as SPARC, PowerPC, or MIPS, you get none of these advantages (or, rather, removal of disadvantages), so 64-bit code generally runs slightly slower. The only reason to compile in 64-bit mode on these architectures is if you want more than 4GB of virtual address space in a process.

    The ARM Cortex A15 supports 40-bit physical addresses, allowing up to 1TB of physical memory to be addressed. Probably not going to be enough for everyone forever, but definitely a lot more than you'll find in a typical server for the next couple of years. It only supports 32-bit virtual addresses, so you are limited to 4GB per process, but that's not a serious limitation for most people.

    ARM already has 16 GPRs, so you can use them in pairs and have 8 registers for 64-bit operations. Not quite as many as x86-64, but four times as many as x86, so even that isn't much of an advantage. All of the other advantages that x86-64 has over x86, ARM has already.
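
    A small sketch (mine, not the poster's) of those last two points: a plain C 64-bit addition is what a 32-bit ARM compiler turns into an add/add-with-carry pair operating on register pairs, and 40 physical address bits work out to 1TB:

        #include <inttypes.h>
        #include <stdio.h>

        /* On 32-bit ARM this typically compiles to ADDS/ADC on two register
         * pairs; the C source itself is the same on every target. */
        uint64_t add64(uint64_t a, uint64_t b)
        {
            return a + b;
        }

        int main(void)
        {
            printf("add64 -> %" PRIu64 "\n", add64(3, 1ULL << 40));

            /* Cortex-A15-style 40-bit physical addressing: 2^40 bytes = 1 TiB. */
            printf("2^40 bytes = %" PRIu64 "\n", (uint64_t)1 << 40);
            return 0;
        }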

  • Re:lol (Score:3, Insightful)

    by shogarth ( 668598 ) on Friday November 19, 2010 @08:28PM (#34287744)

    One of the more amusing blog entries from Sun engineers was a discussion of the amount of energy needed to completely fill a ZFS file system. A 128-bit address space isn't just optimistically big, it's "freaking huge!"
    http://blogs.sun.com/bonwick/entry/128_bit_storage_are_you [sun.com]
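
    The linked post makes that point thermodynamically; even as bare arithmetic the number is absurd. A rough sketch, with the "roughly a zettabyte of data in the world" figure being an assumption of mine rather than anything from the post:

        #include <stdio.h>

        int main(void)
        {
            /* 2^128 bytes is too big for any integer type, so use a double. */
            double zfs_bytes = 1.0;
            for (int i = 0; i < 128; i++)
                zfs_bytes *= 2.0;                  /* ~3.4e38 bytes */

            double zettabyte = 1e21;               /* 10^21 bytes   */

            printf("2^128 bytes               : %.3e\n", zfs_bytes);
            printf("  ...in zettabytes        : %.3e\n", zfs_bytes / zettabyte);
            printf("  ...vs ~1 ZB world total : %.3e times larger\n",
                   zfs_bytes / zettabyte);
            return 0;
        }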

  • by xswl0931 ( 562013 ) on Friday November 19, 2010 @08:38PM (#34287808)

    In large datacenters, power and cooling costs have become a significant part of the TCO. For smaller server rooms x86 compatibility is probably more important.

  • by Anonymous Coward on Friday November 19, 2010 @11:05PM (#34288756)

    >Giant research institution with some parallelisable code trying to figure out how molecules do something naughty during supernovas?

    ARM is competitive with x86 in terms of FLOPS per anything? I don't think so, Tim.

    That leaves only the ultra low end for ARM.

  • by Pentium100 ( 1240090 ) on Saturday November 20, 2010 @12:07AM (#34288984)

    Yes, emulation is an option, but I don't think an ARM chip running an x86 emulation layer will be competitive with native x86 CPUs. Didn't this happen to Itanium? Slow x86 performance and AMD's x86-64 resulted in virtually zero market for Itanium.

  • by imroy ( 755 ) <imroykun@gmail.com> on Saturday November 20, 2010 @12:28AM (#34289054) Homepage Journal

    ...considering how much software, both for Windows AND Linux, that isn't for ARM based CPUs...

    CPU architecture doesn't really matter with FOSS - once you have a working compiler, you just compile everything from source (a trivial illustration follows at the end of this comment). Alright, you need some arch-specific work in the kernel and a few other places too, but by the time you get to end-user applications, all of that is long gone. So I would reply with "almost all Linux software already is for ARM-based CPUs". Or MIPS. Or POWER/PowerPC. Or whatever architecture you want.

    And one advantage that ARM's low power and heat could bring is high density. Take a look at the Gumstix [gumstix.com] boards. Now imagine a "blade server" board with 16 or more processors crammed onto one board. You could easily get at least a few hundred CPUs in a 19-inch rack, with each CPU drawing less than a watt of power. Now I'm not really sure what could be done with such a system - either do everything over the network (NFS or ATAoE), or equip each CPU with a good lump of flash storage for data and programs. But it would draw very little power and is something to think about.
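
    A trivial illustration of the "just recompile it" point, assuming a stock GCC; the only arch-specific thing here is the predefined macro used for display, and the same source builds unchanged with, say, a cross toolchain such as arm-linux-gnueabi-gcc (named purely as an example):

        #include <stdio.h>

        int main(void)
        {
        #if defined(__aarch64__)
            const char *arch = "ARM (64-bit)";
        #elif defined(__arm__)
            const char *arch = "ARM (32-bit)";
        #elif defined(__x86_64__)
            const char *arch = "x86-64";
        #elif defined(__i386__)
            const char *arch = "x86 (32-bit)";
        #elif defined(__mips__)
            const char *arch = "MIPS";
        #elif defined(__powerpc__)
            const char *arch = "POWER/PowerPC";
        #else
            const char *arch = "something else";
        #endif
            printf("Built for %s; same source either way.\n", arch);
            return 0;
        }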

  • by Confusador ( 1783468 ) on Saturday November 20, 2010 @12:54AM (#34289178)

    There are a lot of boxes out there doing nothing but serving files and printers; if ARM did start to become popular, you can be sure that MS would make certain not to lose that business. And then, once you have the things installed, it suddenly makes sense to write some of your new programs to run on them...

  • by jensend ( 71114 ) on Saturday November 20, 2010 @01:01AM (#34289202)

    PAE _is_ frequently used: whenever an x86-64 processor is in long mode, it's using PAE. PAE has been around for a lot longer than long mode, but few people had much reason to use it before long mode came around, not because it didn't accomplish anything but because memory was too expensive and people had little reason to use that much. On a processor where long mode is available, there's little reason to use PAE without long mode, since long mode gives you all those extra registers, etc.

    What I and my homeboy Knuth are talking about for x86 has more to do with the ABI than with the hardware. As Knuth says, some of the first places where work would need to be done are the compiler and the libc; some OS support would also be required.

    Yes, current 64-bit processors can't use more than 40 bits of physical memory or 48 bits of virtual. But the pointers are still a full 64 bits wide, and at no point does the processor store them in anything less than 64 bits. Limiting things to 48 bits of address space just simplifies MMUs and the like; it doesn't save space. Trying to store other things in the unused bits of a register holding a 48-bit pointer would be more hassle than anybody wants to deal with. I mean, sure, you can do a bunch of bit twiddling to put junk in those other bits when you're storing a pointer (a sketch of what that looks like follows at the end of this comment), but it's going to be more expensive than it's worth.

    I don't think there's any game out there which uses more than 4GB of address space in a single process, regardless of the settings you're using. If you can find concrete evidence of one, let me know.

    Even finding situations where games really benefit from more than 4GB of total system memory is rare. I haven't seen too much in the way of benchmarks comparing differing amounts of RAM for this year's DX11 games, but I know that practically no games released before this year benefit from more than 3GB of system memory (of the benchmarks I saw the one which really contested that was published by Corsair, and they can't be accused of being indifferent to how much memory people buy). For games that do appear to benefit at their very highest detail settings at extreme resolutions, I'd still like to see evidence that the visual quality is noticeably different from what you get when you bump the settings down a notch and save a gig of RAM.

    It's true that people working on films in ridiculously high resolutions and some 3d modeling/rendering/CAD folks may want more than 4GB of RAM available to a single process. But those and the other uses for >4GB in one process are a tiny portion of the overall market and have nothing at all to do with ARM and mobile. And you've vastly overstated the effort it takes to be able to support smaller pointers and the simplifications available if you stick with 64-bit.
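
    To make that "bit twiddling" concrete, a minimal sketch (mine, not the poster's) of stuffing a 16-bit tag into the unused high bits of a 48-bit x86-64 user-space pointer. It relies on the kernel only handing out addresses with the top bits clear, and it pays for masking on every access, which is exactly the hassle being described:

        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define TAG_SHIFT 48   /* user-space pointers fit in the low 48 bits */
        #define PTR_MASK  ((UINT64_C(1) << TAG_SHIFT) - 1)

        static void *tag_ptr(void *p, uint16_t tag)
        {
            /* Assumes the top 16 bits of p are zero -- true for canonical
             * x86-64 user-space addresses, guaranteed by no standard. */
            return (void *)(uintptr_t)(((uint64_t)tag << TAG_SHIFT) | (uintptr_t)p);
        }

        static void *untag_ptr(void *tagged, uint16_t *tag_out)
        {
            uint64_t v = (uintptr_t)tagged;
            *tag_out = (uint16_t)(v >> TAG_SHIFT);
            return (void *)(uintptr_t)(v & PTR_MASK);  /* extra work on every use */
        }

        int main(void)
        {
            int *x = malloc(sizeof *x);
            if (!x) return 1;
            *x = 42;

            uint16_t tag = 0;
            int *back = untag_ptr(tag_ptr(x, 0xBEEF), &tag);
            assert(back == x && tag == 0xBEEF);
            printf("value=%d tag=0x%X\n", *back, (unsigned)tag);

            free(x);
            return 0;
        }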
