
ARM Readies Cores For 64-Bit Computing

snydeq writes "ARM Holdings will unveil new plans for processing cores that support 64-bit computing within the next few weeks, and has already shown samples at private viewings, InfoWorld reports. ARM's move to put out a 64-bit processing core will give its partners more options to design products for more markets, including servers, the source said. The next ARM Cortex processor to be unveiled will support 64-bit computing. An announcement of the processor could come as early as next week, and may provide further evidence of a collision course with Intel."
  • ARM has to work its way up the power scale. I don't see how 64-bit computing alone would let them snatch server-oriented clients. Similarly, I doubt Intel would be wise to deliver chips for the wristwatch market without first having something more compelling for the smartphone.

    • by MarcQuadra ( 129430 ) on Friday November 19, 2010 @06:14PM (#34286952)

      You don't see the use?

      Low-latency bare-metal fileservers that consume only a few watts, but can natively handle huge filesystems and live encryption? It's a lot easier to handle a multi-TB storage array when you're 64-bit native, and the same goes for encryption. Look at Linux benchmarks for 32- vs. 64-bit filesystem and OpenSSH performance.

      Do you have any idea how many $4,000 Intel Xeon boxes basically sit and do nothing all day at the average enterprise? If you can put Linux on these beasties, you could have an inexpensive place for projects to start, and if load ever kills the 2GHz ARM blade, you can migrate the app over to an Intel VM or bare metal. I'll bet 80% of projects never leave the ARM boxes, though.

      My whole department (currently seven bare-metal Intel servers and five VMs) could run entirely off of a few ARM boxes running Linux. It would probably save an employee's worth of power, cooling, upkeep, and upgrade costs every year.

      • by PCM2 ( 4486 )

        I'm seeing 64-bit ARM-powered NAS boxes, too, dontchathink?

      • by TheRaven64 ( 641858 ) on Friday November 19, 2010 @07:10PM (#34287558) Journal

        Look at Linux benchmarks for 32 vs 64-bit filesystem and OpenSSH performance

        What benchmarks are you looking at? If you're comparing x86 to x86-64, then you are going to get some very misleading numbers. In addition to the increased address space, x86-64 gives:

        • A lot more registers (if you're doing 64-bit operations, x86-32 only has two usable registers, meaning a load and a store practically every other instruction).
        • The guarantee of SSE, meaning you don't need to use (slow) x87 instructions for floating point.
        • Addressing modes that make position-independent code (i.e. anything in a .so under Linux) much faster.
        • Shorter encodings for some common instructions, at the cost of some short-but-rarely-used ones.

        Offsetting this is the fact that all pointers are now twice as big, which means that you use more instruction cache. On a more sane architecture, such as SPARC, PowerPC, or MIPS, you get none of these advantages (or, rather, removal of disadvantages), so 64-bit code generally runs slightly slower. The only reason to compile in 64-bit mode on these architectures is if you want more than 4GB of virtual address space in a process.
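
        A minimal sketch of that register-pressure point, as plain C with standard GCC flags (the file name is made up); comparing the two assembly listings shows the add/adc register-pair juggling the 32-bit compiler has to emit:

            /* sum64.c -- compile both ways and diff the output:
             *   gcc -m32 -O2 -S sum64.c   (x86-32: register pairs, add/adc)
             *   gcc -m64 -O2 -S sum64.c   (x86-64: one register per value)
             */
            #include <stdint.h>
            #include <stddef.h>

            uint64_t sum64(const uint64_t *a, size_t n)
            {
                uint64_t total = 0;
                for (size_t i = 0; i < n; i++)
                    total += a[i];   /* one add on x86-64; an add/adc pair on x86-32 */
                return total;
            }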

        The ARM Cortex A15 supports 40-bit physical addresses, allowing up to 1TB of physical memory to be addressed. Probably not going to be enough for everyone forever, but definitely a lot more than you'll find in a typical server for the next couple of years. It only supports 32-bit virtual addresses, so you are limited to 4GB per process, but that's not a serious limitation for most people.

        ARM already has 16 GPRs, so you can use them in pairs and have 8 registers for 64-bit operations. Not quite as many as x86-64, but four times as many as x86, so even that isn't much of an advantage. All of the other advantages that x86-64 has over x86, ARM has already.

        • ARM already has 16 GPRs, so you can use them in pairs and have 8 registers for 64-bit operations. Not quite as many as x86-64, but four times as many as x86, so even that isn't much of an advantage. All of the other advantages that x86-64 has over x86, ARM has already.

          The amd64 architecture gets away with so few registers only because it can operate on memory directly. With a load-store architecture, eight registers would be extremely constraining. If anything, ARM should expand the register set for 64-bit mode.

          • Kind of. Compilers try very hard to avoid using the memory-register instructions, because their performance is hard to predict. If the operand is in the top couple of stack slots, the address will be aliased with a register, so you will get the same performance as the register-register version. If it is not, then you will have to fetch the value from the cache, which can cause a brief stall in the pipeline (if it's some random heap address, it will cause a much longer stall if it's not in the cache, but the top of the stack almost always is).
        • No floating point is ever required for filesystems or encryption.

          • No, for those two applications the reduced register churn is the big improvement. For other benchmarks, it's usually the SSE. This doesn't make a difference on OS X, where the oldest supported x86 chips have SSE, but for Windows and *NIX there's a big difference between compiling for i686 (includes PPro and P2) and i686+SSE. It's really hard to generate good code for x87 (a weird hybrid of stack and register architecture), and even if you're not using the vector capabilities of SSE you typically get a 20-50% improvement.
      • Makes sense, but most enterprises are moving towards high-density virtualization. This seems to be going the other direction, towards specialized appliances rather than general-purpose computing. I could see workstations/terminals going the ARM route, as well as highly customized and code-optimized app servers. But I don't think you'll see many enterprises switching over just yet.
    • by vadim_t ( 324782 )

      Cell phones with ARM CPUs and 512MB RAM already exist. That's a pretty big chunk of the 32-bit address space, so it seems to make a lot of sense to be ready for when it's exhausted.

    • Re:What's the point? (Score:5, Informative)

      by forkazoo ( 138186 ) <wrosecrans AT gmail DOT com> on Friday November 19, 2010 @07:12PM (#34287582) Homepage

      ARM servers make sense in two places: the small and the giant. They fall down in the medium and large space.

      In other words, my personal server currently runs a "low power" AMD Sempron. The CPU uses something like 40 Watts, and it is plenty fast enough for my needs. It makes my RAID work, and it serves stuff over NFS and Samba. There are only ever a few clients, and the CPU spends most days nearly idle. It's a small box with a small workload, and it would work just fine with an ARM CPU instead of an x86. (Assuming the hypothetical ARM system could physically connect my external RAID enclosure.) More CPU wouldn't hurt, and it would occasionally make a few things faster, but mostly putting a Xeon in this box would just make it louder.

      In the realm of giant workloads, you have jobs that can't possibly be done by a single machine, no matter the budget. You are looking at needing many hundreds of even the biggest machines you can get. If you have a job that parallelizes that well, doing it with 1000 x86 boxes or 4000 ARM boxes isn't that big of a difference. If the ARM boxes are enough smaller, cheaper, and lower-power to outweigh the fact that you need more of them, then it would be crazy to go with whizzy Xeon boxes instead of ARM. Buzzword enthusiasts will throw labels like "cloud scale computing" at this sort of thing.

      Where ARM falls down on the job is anything that can be done by a 4-core Xeon, up to a handful of 32-core Xeons. That's a big chunk of what we normally think of as the server market, and ARM doesn't compete very well in this space. When people say that ARM is a ridiculous idea for servers, this middle segment of the market is generally what they are thinking of. A cluster of a dozen little ARM boxes competes rather poorly with a single machine with four Xeon sockets in terms of management overhead, the effort required to parallelise workloads, and the bandwidth between distant cores. If you have an application with an expensive per-machine license, that speaks in favor of a single big machine, etc.

      So, a small office that needs a little NAS server to stash under the secretary's desk? ARM can pwn the market. A giant research institution with some parallelisable code trying to figure out how molecules do something naughty during supernovas? ARM can pwn the market. "Enterprise"-level IT in a smallish but uncrowded data center with adequate, already-provisioned power and cooling... ARM may well be suitable in some cases, but it's certainly not an easy sell.

      And, relatively common cell phones have 1 GB of RAM. In two years or so, a cell phone with 4 GB of RAM will seem perfectly reasonable. At that point, 64 bit ARM stops being a data center/desktop issue, and is simply required to hold onto the existing ARM core market.

      • by rsborg ( 111459 )

        ARM servers make sense in two places: the small and the giant. They fall down in the medium and large space.

        That is only because of the WinTel duopoly of the past decade and a half. Given a decent enough operating system (ChromeOS, an OSX-iOS hybrid, Ubuntu Unity) and either a standards-based information access model (html/http) or native app stores, the requirement for x86(-64) disappears and we can liberate ourselves from the Intel processor hegemony... and the world will be a better place for it. (Note: Intel isn't going away anytime soon, and neither is Windows... but they won't exist as we have known them for much longer.)

      • by laffer1 ( 701823 )

        I see a huge difference in 4000 ARM vs. 1000 x86... someone has to manage four times the machines! System administrators cost money too.

        ARM chips make sense for home servers or small-business devices in some cases. Maybe something rather custom with many cores could work for higher-end products. I guess I'm skeptical after I watched x86 take over the server space. x86 always seems to win, and this time Intel and AMD want to keep it that way.

        • I see a huge difference in 4000 ARM vs. 1000 x86... someone has to manage four times the machines! System administrators cost money too.
          They do, but once you've got that many machines working on the same problem, you are likely to put in place a management system that lets you manage them without working on each one individually.

  • by MarcQuadra ( 129430 ) on Friday November 19, 2010 @06:05PM (#34286862)

    I know folks think it's 'overkill' to have 64-bit CPUs in portable devices, but consider that the -entirety- of storage and RAM can be mmapped in the 64-bit address space... That opens up a lot of options for stuff like putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet, and other cool futuristic stuff.

    I'm wondering when the first server/desktop OS is going to come out that realizes this and starts to merge 'RAM' and 'storage' into one flat 64-bit field of 'fast' and 'slow' storage. Say goodbye to swap, and to antiquated concepts like 'booting up' and 'partitions'.
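
    As a concrete taste of that idea with today's tools, here's a hedged C sketch using mmap(), which already lets a 64-bit process treat an entire large file as ordinary memory. The file path is made up and error handling is minimal:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/srv/bigdata.bin", O_RDONLY);   /* hypothetical file */
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

            /* In a 32-bit process this fails once the file outgrows the
             * few GB of usable virtual address space; in a 64-bit process
             * mapping a huge file is routine. */
            const unsigned char *p =
                mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            printf("first byte: %d\n", p[0]);   /* kernel pages data in on demand */
            munmap((void *)p, st.st_size);
            close(fd);
            return 0;
        }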

    • I know folks think it's 'overkill' to have 64-bit CPUs in portable devices

      I don't think of it that way - I think of it as laying foundations for the future. I would much rather be prepared for when 64-bit CPUs in mobile devices are a necessity than try to play catch-up when that day arrives. We have the technology, so why not? Like the whole limited IPv4 address space - wouldn't it have been sweet if we had switched to IPv6 BEFORE it became an issue?

    • Re: (Score:2, Funny)

      by noidentity ( 188756 )
      You can do all this with a 32-bit address space as well. The only thing that must be swapped is the data. All the code can have its own addresses, unless you plan on having more than 4GB of application code on your mobile device. 4GB should be enough for anybody...
    • Don't you count laptops as portable? I realise that you were talking about smartphones, but I'm skeptical that wider use of 64-bit processors will bring about all the cool future stuff you name.

      Having decayed into a near-layman when it comes to CS I'm also curious as to why we need those extra bits for said stuff in the first place. It seems that there ought to be a reason why fast and slow storage is separated logically, and I would also say that - on first glance - there's no reason why needing to b

      • by 0123456 ( 636235 )

        Having decayed into a near-layman when it comes to CS I'm also curious as to why we need those extra bits for said stuff in the first place.

        You can't map a complete 2TB disk into a 32-bit address space.

        However, I think the idea is kind of bogus: you don't want every application having access to all blocks on the disk, you don't want every application having to deal with filesystem layout (you can't just write to byte 42 on the disk without ensuring no-one else is going to), and you do want to keep applications' memory separate. And at some point you have to reboot, if only because you upgraded your OS kernel.

        • Re: (Score:3, Interesting)

          You can't map a complete 2TB disk into a 32-bit address space.

          That I can understand.

          ...putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet...

          This stuff, however, defies comprehension.

        • Just because the distinction between RAM and disk would go away doesn't mean that all access control goes with it; I don't see how that's implied. An application would just be running in a sandbox that's really just a 'file' sitting in the portion of the address space that's hosted by RAM. If the app doesn't get used, or the kernel needs to 'swap' it out to free up resources closer to the 'hot' side of the stack, or it gets put to sleep for any other reason, it migrates out of RAM and back to disk.

          Same with user s

          • by 0123456 ( 636235 )

            An application would just be running in a sandbox that's really just a 'file' sitting in the portion of the address space that's hosted by RAM. If the app doesn't get used, or the kernel needs to 'swap' it to free up resources closer to the 'hot' side of the stack, or gets put to sleep for any other reason, it migrates out of RAM and back to disk.

            And you don't need a 64-bit address space to do that, if it was really important we could have done it long ago.

            • The limits were always right around the corner with 8-to-32-bit computing. Everyone knew that 4GB hard drives were coming when the 386 came out. With 64 bits of address space, there are 16 exabytes to play with. That's not coming to a PC or a business near you anytime soon.

              • Again, the ability to access more than x bytes of storage isn't the issue. What I asked - and what you've yet to answer - is how the more widespread adoption of 64-bit processors is going to mean we can do the "cool" stuff you mention*

                *I should point out that you still haven't defined what, for example, a one-time-use application is supposed to be, because frankly it sounds like just another meaningless marketing term. After that perhaps you might explain why it needs 64-bits' worth of address space to pull

          • That's called checkpoint/restart, and it's been around on 32-bit machines for decades. It's not commonly used; maybe Internet ubiquity might change that, but 64-bitness isn't even close to necessary.

        • However, I think the idea is kind of bogus, because you don't want every application having access to all blocks on the disk

          Just because you have hardware that allows for something to be done doesn't mean that the OS has to allow "every application" to make full unsupervised use of that capability.

          OTOH, if the hardware doesn't support it well, then no application -- and no part of the operating system -- can effectively leverage the capacity.

          And at some point you have to reboot even if just because you upgr

      • "there ought to be a reason why fast and slow storage is separated logically"

        The entire computing paradigm that we're familiar with is based on the idea that address space is very limited, RAM is expensive, and disk is cheap. That's not true anymore; with 64 bits of address space, you could have access to a 'field' of 16 exabytes. That's more than the entire storage available at the research university I work at. You could literally have common address pointers and direct, unbrokered access to vast

        • RAM is still expensive, and storage is still cheap. The number of addresses you can, well, address is irrelevant to the problems of making RAM cheaper or storage faster.

          You say that you can address a gajillion bytes with a 64-bit CPU. So what? That won't make the spindles spin any faster, or multiply the RAM cells you have and magically do away with NUMA.

          Correct me if I'm wrong, but a narrow address bus isn't the reason memory and storage are separate. Even if they are, what's so special about moving from 3

          • by h4rr4r ( 612664 )

            RAM is cheap as hell. $50 will get you as much RAM as 32 bits can address. So for $100 you are talking twice as much as it could address. Kids these days.

    • by KiloByte ( 825081 ) on Friday November 19, 2010 @06:39PM (#34287238)

      The N900 may be a nice device otherwise, but only 256MB is totally crippling. Most recent smartphones come with 512MB these days. So even on RAM alone, having mere "plans" about migrating to 64-bit today is not overkill; it's long overdue.

      About your idea of just mmapping everything: the speed difference between memory and disk/flash is so big that the current split is pretty vital to a non-toy OS. I'd limit mmap to specific tasks, for which it is indeed underused.

      • I'm pretty sure you don't need 64 bits to get above 512MB. Once we're seeing phones with 2GB, we'll be on our way to problems.

        Then again, I agree that a bit more memory would be nice for heavy(ish) web use. (I tried to check Apple's tech specs for the amount of memory in the iPad for comparison, but curiously this detail is omitted. Wikipedia says it has the same 256MB.)

      • by dbIII ( 701233 )

        The N900 may be a nice device otherwise, but only 256MB is totally crippling

        Which applications running out of memory have crippled yours?
        Did it really happen?
        You are just making an ungrounded assumption here, aren't you?
        It's not as if the things are used for complex photo editing of multiple images at once.

      • by dwater ( 72834 )

        > The N900 may be a nice device otherwise, but only 256MB is totally crippling

        Well, it *is* quite old now...but I do wonder why you think that. I can't say I've had any problem with its supposed lack of memory. No, the most serious problem with the n900, in my opinion, is the battery life, and that certainly is a problem with all smart phones these days, to a varying degree. I "manage" by carrying several batteries "just in case" and a smaller charger...and plugging it in is pretty much the first thing I do w

    • Re: (Score:3, Insightful)

      by KiloByte ( 825081 )

      Also, the idea of persistent programs has been thought of before. Heck, I once came up with it myself when I was studying (>12 years ago), and talked about it with a professor (Janina Mincer). She immediately pointed out a number of flaws:
      * you'll lose all your data the moment your program crashes. Trying to continue from a badly inconsistent state just ensures further corruption. COWing it from a snapshot is not a solution since you don't know if the original snapshot didn't already have some hidden corruption.
      * there is no way to make an upgrade -- even for a trivial bugfix
      * config files are human-editable in sane systems for a reason; having the setup only in internal variables would destroy that

      • Also, the idea of persistent programs has been thought of before. Heck, I once came up with it myself when I was studying (>12 years ago), and talked about it with a professor (Janina Mincer). She immediately pointed out a number of flaws:
        * you'll lose all your data the moment your program crashes. Trying to continue from a badly inconsistent state just ensures further corruption. COWing it from a snapshot is not a solution since you don't know if the original snapshot didn't already have some hidden corruption

      • That's because your professor is locked into the model we've been using, one that evolved when resources were scarce. That's no longer the case.

        -Losing data: Obviously, we're not talking about getting rid of files, just pre-packaging apps in a running state between sessions. Your word processor is going to save the data to a 'file' that, by requirement, must live on 'cold' storage (disk-backed or network) or in the cloud. The session of the app would have a little check when it is restored to verify if the

      • Also, the idea of persistent programs has been thought of before. Heck, I once came up with it myself when I was studying (>12 years ago), and talked about it with a professor (Janina Mincer). She immediately pointed out a number of flaws:
        * you'll lose all your data the moment your program crashes. Trying to continue from a badly inconsistent state just ensures further corruption. COWing it from a snapshot is not a solution since you don't know if the original snapshot didn't already have some hidden corruption.
        * there is no way to make an upgrade -- even for a trivial bugfix
        * config files are human-editable in sane systems for a reason; having the setup only in internal variables would destroy that

        Your professor wasn't very smart then. All of those problems are easily addressed, as other posters have pointed out.

        You know what they say. Those who can do, do. Those who can't, teach.

        • You know what they say. Those who can do, do. Those who can't, teach.

          Ha!! My father, who works in education, always told me this one: "Teaching is what you do if you can't get a real job." Guess the phrase was popular back in the 60s.

          But the point should be that the real world and the theoretical are not always the same. I took an electronics program at BCIT (the British Columbia Institute of Technology), and they made a point of hiring professors from industry. So when you were taught about generating power from hydroelectric dams, it was from people who designed / built s

    • by CODiNE ( 27417 ) on Friday November 19, 2010 @08:16PM (#34288118) Homepage

      Rumor is that's what Apple is working towards with Lion and the iOS APIs being added to the desktop OS.

      With built-in suspend and resume on all apps, it becomes trivial to move a running process over to another device. I suppose they'll sell it to end users as a desktop in the cloud, probably a Me.com service of some kind.

    • Re: (Score:3, Interesting)

      ...That opens up a lot of options for stuff like putting entire applications to sleep and instantly getting them back, distributing one-time-use applications that are already running, sharing a running app with another person and syncing the whole instance (not just a data file) over the Internet, and other cool futuristic stuff.

      You can do this "futuristic stuff" on both 32 bit and 64 bit platforms. I had to write my own C++ memory manager to make it easy to store & restore application state.

      To do real-time syncing of applications (esp. games) over the Internet, I implemented another layer to provide support for mutual exclusion, client/server privileges, and variables with value prediction and smoothing -- which I needed anyhow for my Multi-Threaded JavaScript-like scripting language (Screw coding for each core separately, I wanted a

    • 64-bit CPUs are a godsend for chess, checkers, and reversi programs making use of 64-bit bitboards [wikipedia.org]. See the sketch below.
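
      A minimal C sketch of the trick (the board layout and helper name are just illustrative: bit 0 = a1 through bit 63 = h8). An 8x8 board is exactly one 64-bit word, so computing every square a king attacks is a handful of shifts and masks rather than a loop over squares:

          #include <stdint.h>

          #define FILE_A 0x0101010101010101ULL   /* the a-file: every 8th bit */
          #define FILE_H 0x8080808080808080ULL   /* the h-file */

          /* Squares attacked by kings on the set bits of 'kings'. */
          uint64_t king_attacks(uint64_t kings)
          {
              /* east/west, masking off bits that wrap around the board edge */
              uint64_t horiz = ((kings << 1) & ~FILE_A) | ((kings >> 1) & ~FILE_H);
              uint64_t row   = kings | horiz;          /* king plus side neighbours */
              return horiz | (row << 8) | (row >> 8);  /* plus ranks above and below */
          }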

  • by moxsam ( 917470 ) on Friday November 19, 2010 @06:46PM (#34287310)
    This would be the most exciting revolution to watch. Since ARM has a totally different design, it changes the parameters of how hardware end products can be built.

    As ARM cores are so simple and ARM Holdings does not have its own fabs, anyone could come up with their own optimized ARM-compatible CPUs. It's one of those moments when the right economics and the right technology could fuse together and change stuff.
    • by 0123456 ( 636235 )

      As ARM cores are so simple and ARM Holdings does not have its own fabs, anyone could come up with their own optimized ARM-compatible CPUs. It's one of those moments when the right economics and the right technology could fuse together and change stuff.

      The problem is... Windows. More precisely, proprietary closed-source software which can't just be recompiled for a new architecture.

      The huge amount of installed Windows software out there won't run on ARM, so it won't change the mainstream laptop/desktop market any time soon.

      • Re: (Score:2, Interesting)

        by del_diablo ( 1747634 )

        Well, considering that somewhere between 60-90% of the desktop market in reality does not care what their computer is running, so long as they've got access to a browser and Facebook, and in the worst case an office suite on the side for minor work, it would not really matter.
        The only real problem is not Windows; it is getting the computers into the mainstream stores to be sold alongside the MacBooks and the various normal Windows OEM solutions. Just getting it there would mean instant market share overnight, b

        • by slapys ( 993739 )

          The only real problem is not Windows; it is getting the computers into the mainstream stores to be sold alongside the MacBooks

          What makes you assume Apple won't switch to ARM sometime in the next couple of years? They dumped PPC for x86 due to the more favorable power/performance ratio. It's only natural to assume that when high-powered ARM processors appear, Apple will switch to that architecture without a moment's hesitation.

      • The problem is... Windows. More precisely, proprietary closed-source software which can't just be recompiled for a new architecture.

        Much less of a problem than it used to be. Aside from games, how many closed-source software packages do you run that are CPU-limited? In typical usage, the CPU monitor on my laptop rarely goes over 20%. Even emulating everything, it wouldn't be too slow to use. Modern emulators don't emulate everything, though; they thunk out to native libraries for things like drawing. That's how Rosetta works on OS X, for example: OS X ships with stub versions of all of the native frameworks for PowerPC, which call the native x86 implementations.

        • I was going to mention a few, but then I realized that almost all of them are .NET-based. MS already has a .NET implementation on ARM (for their mobile devices), and I believe Mono also works on ARM.

          The remaining ones are MS Office (ported to x64 and PPC), Visual Studio (partially .NET and hopefully somewhat portable), Opera (portable), Foxit (there are other PDF apps even if it's not portable), and probably a few more.

          Of course, you can't just ignore games. Relatively few of those are portable, and I happen

      • Apple, Google, and Canonical have seen the writing on the wall: make the apps independent of the ISA, and your platform can go anywhere.

        The best way to do this is to provide the storefront and handle distribution integrated with the OS.

        I think the App Store is the biggest software revolution of the '00s... and it has yet to play out completely.

        • by h4rr4r ( 612664 )

          I think you have no idea what a repository is; otherwise app stores would not impress you at all.

      • by LWATCDR ( 28044 )

        Not that big of an issue in the server space. SPARC and POWER5 don't run Windows, and almost all the big server apps already run under Linux, so they can be recompiled without much effort.

      • Apple managed to make the switch from PowerPC to Intel almost seamlessly, thanks to a well-written emulator. Microsoft might be able to do the same.
      • The huge amount of installed Windows software out there won't run on ARM

        All the software for Pocket PC aka Windows Mobile (based on Windows CE) already runs on ARM.

      • Comment removed based on user account deletion
  • by jensend ( 71114 ) on Friday November 19, 2010 @08:12PM (#34288080)

    This isn't like the 16->32 bit transition where it quickly became apparent that the benefits were large enough and the costs both small enough and rapidly decreasing that all but the smallest microcontrollers could benefit from both the switch and the economies of scale. 64-bit pointers help only in select situations, they come at a large cost, and as fabs start reaching the atomic scale we're much less confident that Moore's Law will decrease those costs to the level of irrelevance anytime soon.

    Most uses don't need >4 gigabytes of RAM, and it takes extra memory to compensate for huge pointers. Cache pressure increases, causing a performance drop. Sure, x86-64 code often beats 32-bit x86 code, but that's mostly because x86-64 adds registers to a very register-constrained architecture, and partly because of wider integer and FP units. 64-bit addressing is usually a drag, and it's the addressing that makes a CPU "64-bit". ARM doesn't have a similar register-constraint problem, and the cost of 64-bit pointers would be especially obvious in the mobile space, where cache is more constrained: one of the most important things ARM has done to increase performance in recent years was Thumb mode, i.e. 16-bit instructions, decreasing cache pressure.
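
    The Thumb effect is easy to check for yourself, assuming an ARM cross-compiler is on hand; the target triple and file name below are illustrative. Build the same C file with GCC's -marm and -mthumb flags and compare section sizes; the .text segment typically shrinks noticeably under Thumb:

        /* density.c -- compare code density:
         *   arm-linux-gnueabi-gcc -O2 -marm   -c density.c && size density.o
         *   arm-linux-gnueabi-gcc -O2 -mthumb -c density.c && size density.o
         */
        int clamp(int x, int lo, int hi)
        {
            if (x < lo) return lo;
            if (x > hi) return hi;
            return x;
        }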

    Most of those who do need more than 4GB don't need more than 4GB of virtual address space for a single process, in which case having the OS use 64-bit addressing while apps use 32-bit pointers is a performance boon. The ideal for x86 (which nobody seems to have tried) would be to have x86-64 instructions and registers available to programs, but have the programs use 32-bit pointers, as noted by no less than Don Knuth [stanford.edu]:

    It is absolutely idiotic to have 64-bit pointers when I compile a program that uses less than 4 gigabytes of RAM. When such pointer values appear inside a struct, they not only waste half the memory, they effectively throw away half of the cache.

    The gcc manpage advertises an option "-mlong32" that sounds like what I want. Namely, I think it would compile code for my x86-64 architecture, taking advantage of the extra registers etc., but it would also know that my program is going to live inside a 32-bit virtual address space.

    Unfortunately, the -mlong32 option was introduced only for MIPS computers, years ago. Nobody has yet adopted such conventions for today's most popular architecture. Probably that happens because programs compiled with this convention will need to be loaded with a special version of libc.

    Please, somebody, make that possible.

    It's funny to continually hear people clamoring for native 64-bit versions of their applications when that will often just slow things down. One notable instance: Sun/Oracle have told people all along not to use a 64-bit JVM unless they really need a single JVM instance to use more than 4GB of memory, and the pointer-compression scheme they use in the 64-bit JVM is vital to keeping a reasonable level of performance on today's systems.
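
    Knuth's cache complaint is easy to make concrete. A quick sketch, assuming typical x86 ABIs: the same pointer-heavy node doubles in size between gcc -m32 and gcc -m64, so half as many nodes fit in each cache line.

        #include <stdio.h>

        struct node {
            struct node *left;    /* 4 bytes under -m32, 8 under -m64 */
            struct node *right;
            int key;
        };

        int main(void)
        {
            /* Typically prints 12 with gcc -m32 and 24 with gcc -m64
             * (two 8-byte pointers plus 4 bytes of tail padding). */
            printf("sizeof(struct node) = %zu\n", sizeof(struct node));
            return 0;
        }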

    • Several points here:
      • There actually is a method for letting programs use 32-bit addresses while the OS addresses more physical memory: PAE, which has been around since the Pentium Pro (36-bit physical addresses there; the page-table format allows up to 52 bits). It's almost always enabled on Linux and Mac OS X, but isn't available for non-server versions of Windows. So it's been tried, but isn't well-known or used.
      • Currently, 64-bit processors only use a 48-bit address space, precisely for some of the reasons you listed. The architecture is designed to scale up to full 64-bit addressing.
      • Re: (Score:3, Insightful)

        by jensend ( 71114 )

        PAE _is_ frequently used: whenever an x86-64 processor is in long mode, it's using PAE. PAE has been around for a lot longer than long mode, but few people had much reason to use it before long mode came along - not because it didn't accomplish anything, but because memory was too expensive and people had little reason to use that much. On a processor where long mode is available there's little reason to use PAE without long mode - long mode gives you all those extra registers etc.

        What I and my homeboy Knu

    • He's ignoring at least two benefits that help no matter the architecture.

      First, address space allocation is a hashing problem. Calls to malloc (mmap) need to find free space; this gets slow as the address space fills up.

      Second, we care about security, and we realize that programs may have bugs. ASLR benefits greatly from extra address space. This can be the difference between an attacker having a 1-in-256 chance and having about a 1-in-a-trillion chance.
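
      The arithmetic behind those odds, sketched with assumed entropy figures (real ASLR entropy varies by OS and memory region): roughly 8 random bits in a 32-bit mapping base gives 2^8 = 256 possible positions, while around 40 bits on a 64-bit system gives 2^40, about 1.1 trillion.

          #include <stdio.h>

          int main(void)
          {
              /* Assumed entropy: ~8 bits for a 32-bit base, ~40 bits for 64-bit. */
              printf("32-bit: 1 guess in %llu\n", 1ULL << 8);    /* 256 */
              printf("64-bit: 1 guess in %llu\n", 1ULL << 40);   /* ~1.1e12 */
              return 0;
          }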

      He's also focusing on the cache-related downside while ign

  • So, now I can really get WinCE Jamming! lol J/K of course...
  • When will it run Android?

  • Bah (Score:3, Funny)

    by Yvan256 ( 722131 ) on Saturday November 20, 2010 @12:06PM (#34291800) Homepage Journal

    I've got eight 8-bit AVRs and duct tape right here. That's almost the same thing.
