
ARM Readies Cores For 64-Bit Computing

snydeq writes "ARM Holdings will unveil new plans for processing cores that support 64-bit computing within the next few weeks, and has already shown samples at private viewings, InfoWorld reports. ARM's move to put out a 64-bit processing core will give its partners more options to design products for more markets, including servers, the source said. The next ARM Cortex processor to be unveiled will support 64-bit computing. An announcement of the processor could come as early as next week, and may provide further evidence of a collision course with Intel."
  • by MarcQuadra ( 129430 ) on Friday November 19, 2010 @07:05PM (#34286862)

    I know folks think it's 'overkill' to have 64-bit CPUs in portable devices, but consider that the -entirety- of storage and RAM can be mmapped in the 64-bit address space... That opens up a lot of options for stuff like putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet, and other cool futuristic stuff.

    I'm wondering when the first server/desktop OS is going to come out that realizes this and starts to merge the 'RAM' and 'Storage' into one 64-bit long field of 'fast' and 'slow' storage. Say goodbye to Swap, and antiquated concepts like 'booting up' and 'partitions'.
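
    A minimal sketch of what mapping "all of storage" looks like in practice, assuming a POSIX system and a hypothetical multi-gigabyte image file named disk.img (illustrative only, not production code):

        /* Map a large file straight into a 64-bit address space and touch a
         * byte far beyond the 4 GB mark -- something a 32-bit process cannot
         * address at all. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("disk.img", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

            /* On a 32-bit machine this fails once the file nears 2-3 GB;
             * with 64-bit pointers a whole disk fits comfortably. */
            unsigned char *base = mmap(NULL, st.st_size, PROT_READ,
                                       MAP_PRIVATE, fd, 0);
            if (base == MAP_FAILED) { perror("mmap"); return 1; }

            /* Read a byte 6 GB into the image, via an ordinary pointer. */
            if ((off_t)6 * 1024 * 1024 * 1024 < st.st_size)
                printf("byte at 6 GB: %d\n",
                       base[(size_t)6 * 1024 * 1024 * 1024]);

            munmap(base, st.st_size);
            close(fd);
            return 0;
        }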

  • by newcastlejon ( 1483695 ) on Friday November 19, 2010 @07:40PM (#34287254)

    You can't map a complete 2TB disk into a 32-bit address space.

    That I can understand.

    ...putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet...

    This stuff, however, defies comprehension.

  • by moxsam ( 917470 ) on Friday November 19, 2010 @07:46PM (#34287310)
    This would be the most exciting revolution to watch. Since ARM has a totally different design, it changes the parameters of how hardware end products can be built.

    As ARM cores are so simple and ARM Holdings does not have its own fabs, anyone could come up with their own optimized ARM-compatible CPUs. It's one of those moments when the right economics and the right technology could fuse together and change things.
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Friday November 19, 2010 @08:03PM (#34287504)
    Comment removed based on user account deletion
  • by del_diablo ( 1747634 ) on Friday November 19, 2010 @08:09PM (#34287544)

    Well, considering that somewhere between 60-90% of the desktop market in reality does not care what their computer is running, as long as they've got access to a browser and Facebook, and at worst an office suite on the side for minor work, it would not really have mattered.
    The only real problem is not Windows, it is getting the computers into mainstream stores to be sold alongside the MacBooks and the various normal Windows OEM machines. Just getting them there would mean instant market share overnight, because only a minority is really application-bound.

  • by del_diablo ( 1747634 ) on Friday November 19, 2010 @08:13PM (#34287600)

    It should in theory scale better than x86-64 anyhow, and the performance per watt is quite superior, so yes, it has a major place in the server room.

  • by Anonymous Coward on Friday November 19, 2010 @09:04PM (#34288018)

    Sure, but they will lose market share on the initial wave when those markets start appearing. When it finally comes to "5% of desktop (desktop+laptop, etc.) sales and rising?!", then Windows will pull out a version.
    Before that, Linux will most likely gain market share, unless they mess up their attempts at marketing again.

    Are you retarded?

  • by jensend ( 71114 ) on Friday November 19, 2010 @09:12PM (#34288080)

    This isn't like the 16->32 bit transition where it quickly became apparent that the benefits were large enough and the costs both small enough and rapidly decreasing that all but the smallest microcontrollers could benefit from both the switch and the economies of scale. 64-bit pointers help only in select situations, they come at a large cost, and as fabs start reaching the atomic scale we're much less confident that Moore's Law will decrease those costs to the level of irrelevance anytime soon.

    Most uses don't need >4 gigabytes of RAM, and it takes extra memory to compensate for huge pointers. Cache pressure increases, causing a performance drop. Sure, x86-64 code often beats 32-bit x86 code, but that's mostly because x86-64 adds registers to a very register-constrained architecture, and partly because of wider integer and FP units. 64-bit addressing is usually a drag, and it's the addressing that makes a CPU "64-bit". ARM doesn't have a similar register-constraint problem, and the cost of 64-bit pointers would be especially obvious in the mobile space, where cache is more constrained; one of the most important things ARM has done to increase performance in recent years was Thumb mode, i.e. 16-bit instructions, which decreases cache pressure.

    Most of those who do need more than 4GB don't need more than 4G of virtual address space for a single process, in which case having the OS use 64-bit addressing while apps use 32-bit pointers is a performance boon. The ideal for x86 (which nobody seems to have tried) would be to have x86-64 instructions and registers available to programs but have the programs use 32-bit pointers, as noted by no less than Don Knuth [stanford.edu]:

    It is absolutely idiotic to have 64-bit pointers when I compile a program that uses less than 4 gigabytes of RAM. When such pointer values appear inside a struct, they not only waste half the memory, they effectively throw away half of the cache.

    The gcc manpage advertises an option "-mlong32" that sounds like what I want. Namely, I think it would compile code for my x86-64 architecture, taking advantage of the extra registers etc., but it would also know that my program is going to live inside a 32-bit virtual address space.

    Unfortunately, the -mlong32 option was introduced only for MIPS computers, years ago. Nobody has yet adopted such conventions for today's most popular architecture. Probably that happens because programs compiled with this convention will need to be loaded with a special version of libc.

    Please, somebody, make that possible.

    It's funny to continually hear people clamoring for native 64-bit versions of their applications when that often will just slow things down. One notable instance: Sun/Oracle have told people all along not to use a 64-bit JVM unless they really need a single JVM instance to use more than 4GB of memory, and the pointer compression scheme they use for the 64-bit JVM is vital to keeping a reasonable level of performance with today's systems.
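
    A quick way to see the struct-size cost Knuth describes above: the same pointer-heavy struct compiled for 32-bit and for 64-bit pointers (a small sketch; the exact numbers depend on ABI and padding):

        /* Pointer width roughly doubles the size of pointer-heavy structs.
         * Build twice and compare (sizes assume a typical x86 ABI):
         *   gcc -m32 node_size.c && ./a.out   ->  sizeof(struct node) == 12
         *   gcc -m64 node_size.c && ./a.out   ->  sizeof(struct node) == 24
         */
        #include <stdio.h>

        struct node {
            struct node *next;   /* 4 bytes on ILP32, 8 on LP64 */
            struct node *child;  /* ditto */
            int value;           /* 4 bytes either way, plus padding on LP64 */
        };

        int main(void)
        {
            printf("sizeof(struct node) = %zu\n", sizeof(struct node));
            return 0;
        }

    This is essentially the arithmetic behind the JVM pointer-compression scheme mentioned above: object references are stored as 32-bit offsets from a heap base rather than full 64-bit pointers, so more of them fit in each cache line.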

  • by LWATCDR ( 28044 ) on Friday November 19, 2010 @09:13PM (#34288084) Homepage Journal

    Funny, but in 1990 I bet they said the same thing about Intel.
    In any office of, say, 50 or so people, a 64-bit ARM would probably do just fine. NAS and SAN boxes in bigger installations would probably also run very well on a 64-bit ARM. And then one has to wonder just how many ARM cores might fit on a die?
    ARM is a much more modern ISA than x86, so it will be interesting to see just where it goes. Trust me, if you had told anyone in 1982 that someday there would be an x86 that was faster per clock cycle than a Cray-1, ran with a multi-GHz clock, and had a 64-bit address space, they would have locked you in a rubber room.

  • by CODiNE ( 27417 ) on Friday November 19, 2010 @09:16PM (#34288118) Homepage

    Rumor is that's what Apple is working towards with Lion and the iOS APIs being added to the desktop OS.

    With built-in suspend and resume on all apps, it becomes trivial to move a running process over to another device. I suppose they'll sell it to end users as a desktop in the cloud, probably a Me.com service of some kind.

  • by VortexCortex ( 1117377 ) <VortexCortex AT ... trograde DOT com> on Friday November 19, 2010 @09:41PM (#34288298)

    ...That opens up a lot of options for stuff like putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet, and other cool futuristic stuff.

    You can do this "futuristic stuff" on both 32 bit and 64 bit platforms. I had to write my own C++ memory manager to make it easy to store & restore application state.

    To do real-time syncing of applications (esp. games) over the Internet, I implemented another layer to provide support for mutual exclusion, client/server privileges, and variables with value prediction and smoothing -- which I needed anyhow for my multi-threaded JavaScript-like scripting language (screw coding for each core separately, I wanted a smarter language).

    I've also achieved distributing "applications that are already running" (I hear Smalltalk has this feature, as do other languages that support VM images).

    It would be nice if these features were supported by the OS, but I'm not waiting around for something I can do right now.

    Also: I'm not sure I want all of these features built in to the OS (complexity = potential security holes), esp. when I can achieve them via cross platform libraries and/or an even higher level programming language on our current OSes & hardware.
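
    A common trick for this kind of freeze-and-thaw (not necessarily what the parent poster did) is to keep all application state in one arena mapped at a fixed address, so the raw bytes can be dumped to disk and mapped back in with every internal pointer still valid. A bare-bones POSIX sketch; FIXED_BASE, the arena size, and "state.img" are made up for the example:

        /* Toy snapshot/restore of application state via a fixed-address arena.
         * Purely illustrative: real code must handle ASLR, collisions with
         * existing mappings, versioning, and errors. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define ARENA_SIZE (1 << 20)                 /* 1 MB of app state */
        #define FIXED_BASE ((void *)0x200000000000)  /* arbitrary 64-bit address */

        static void *arena_create(void)
        {
            /* All objects live inside this region, so pointers into it stay
             * valid after a restore at the same address. */
            return mmap(FIXED_BASE, ARENA_SIZE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        }

        static int arena_save(const void *arena)
        {
            int fd = open("state.img", O_CREAT | O_TRUNC | O_WRONLY, 0644);
            if (fd < 0) return -1;
            ssize_t n = write(fd, arena, ARENA_SIZE);  /* dump the whole arena */
            close(fd);
            return n == ARENA_SIZE ? 0 : -1;
        }

        static void *arena_restore(void)
        {
            int fd = open("state.img", O_RDONLY);
            if (fd < 0) return NULL;
            /* Map the saved image back at the address it was created at. */
            void *p = mmap(FIXED_BASE, ARENA_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_FIXED, fd, 0);
            close(fd);
            return p == MAP_FAILED ? NULL : p;
        }

        int main(int argc, char **argv)
        {
            if (argc > 1 && strcmp(argv[1], "restore") == 0) {
                char *state = arena_restore();
                if (state) printf("restored state: %s\n", state);
            } else {
                char *state = arena_create();
                if (state == MAP_FAILED) return 1;
                strcpy(state, "hello from a suspended app");
                arena_save(state);
            }
            return 0;
        }

    Running it once creates and saves the state; running it again with "restore" maps the saved image back and prints the string.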

  • by Anonymous Coward on Friday November 19, 2010 @09:56PM (#34288402)

    Well, that is interesting and all, I'm just wondering about something a bit more modern. We have 1 GHz ARM chips in cellphones now, and larger ones coming, which with sufficient RAM is enough to work as a modern desktop for most uses. I currently still run an old, slow single-core machine and it works fine, but if I could get comparable performance at only 1/10th the electricity use and eliminate all the fans... see what I mean? Way back in 2008 Canonical announced serious ARM support and so on, but there are still no machines to buy from anyplace. I contemplated using a high-end cellphone, but none of them have full keyboard and mouse support, they are beastly expensive, and you can't get any with at least two gigs of RAM, which is de facto about the tipping point today between a desktop that "works" and one that makes you tear your hair out.

    I mean really, the chips themselves are wicked cheap compared to Intel or AMD, so where is a plain vanilla ARM-based, normal-form-factor desktop, ATX or mini-ITX or the like? Seems like they could be making a good-enough desktop with some serious cost reductions and hitting the niche that fits. I have an old VIA mini-ITX board now, but dang, it requires super-expensive RAM (VIA-specific; generic PC133 sticks do NOT work) just to get it to one full gig, and the 256 megs that I have just don't cut it. "Good enough," quiet, cheap to run, and cheap to buy is what I am after, and it sure seems like an ARM solution would fit; I just can't find one, and I have looked for two years now, off and on. I don't want a teeny netbook, I want a bog-standard cheap desktop machine, just with ARM instead of AMD or Intel.

  • by ld a,b ( 1207022 ) on Saturday November 20, 2010 @01:10AM (#34289236) Journal

    To be fair, that doesn't counter his argument: amd64 has more registers than i386, and they do make a big difference. Repeat the tests with 32-bit pointers and 64-bit registers and then get back to us.

    As of today, the method he mentions would probably provide a bit better performance, assuming the processor optimizations didn't break when their expectations weren't met.

    However, I think it is very short-sighted to miss the fact that about the only thing still growing these days is memory, and that apps tend to grab all the address space they can get. By 2050 I can see machines with 1 TB of RAM, but I can't see apps keeping themselves under 0xFFFFFFFF.

    Furthermore, thanks to ASLR, which is now available on most OSes, address space fragmentation is a problem today even for programs well under the 4GB mark. The future is 64:64. 32-bit architectures are already dead; they just haven't realized it yet.
