Upgrades Power Hardware

ARM Unveils Next-Gen Processor, Claims 5x Speedup 283

unts writes "UK chip designer ARM [Note: check out this short history of ARM chips in mobile devices contributed by an anonymous reader] today released the first details of its latest project, codenamed 'Eagle.' It has branded the new design Cortex-A15, which ARM reckons demonstrates the jump in performance from its predecessors, the A8 and A9. ARM's new chip design can scale to 16 cores, clock up to 2.5GHz, and, the company claims, deliver a 5x performance increase over the A8: 'It's like taking a desktop and putting it in your pocket,' said [VP of processor marketing — Eric Schorn], and it was clear that he considers this new design to be a pretty major shot across the bows of Intel and AMD. In case we were in any doubt, he turned the knife further: 'The exciting place for software developer graduates to go and hunt for work is no longer the desktop.'"
This discussion has been archived. No new comments can be posted.


  • Give ARM a chance. (Score:2, Insightful)

    by Anonymous Coward

    I for one certainly hope that ARM gets a chance in the more mainstream market; the more competition for Intel, the better!

    • by PCM2 ( 4486 ) on Friday September 10, 2010 @05:04AM (#33531778) Homepage

      How much more mainstream can it get? ARM is everywhere. It's in your iPhone -- probably every single phone out there, actually -- in tablets, in NAS boxes, in DVD players... countless applications. If you mean it should compete with Intel CPUs for PC processors, on the other hand, one impediment may be that ARM is (at least at present) a 32-bit architecture.

      • The best bit: ARM chips are everywhere, and they are presumably very friendly to implementing Acorn Archimedes emulators. Archimedes on your fridge, yeah!

        • by jimicus ( 737525 )

          There are already perfectly good emulators that run quite happily on x86, but getting hold of RISC OS is rather trickier unless you've bought and paid for a license (which is surprisingly expensive considering it's got very little modern software available, and these days is only really of any interest as an exercise in "how to design a very small OS for a 1980's version of a chip without many of the things we take for granted these days such as multi-user security or protected memory")

        • by JackDW ( 904211 ) on Friday September 10, 2010 @08:07AM (#33532474) Homepage

          Surprisingly, no. The Archimedes actually used an initial version of the ARM architecture with 26-bit addressing. The high bits of the program counter register were used to store the CPU status and condition flags, giving an easy way to save/restore those flags across function calls. A clever trick, but unfortunately 64MB of code address space wasn't enough for everyone, and so ARM moved to the fully 32-bit architecture in current use. For a transitional period, ARM CPUs supported both architectures, but that time is long gone now.

          Sadly, this means that modern ARMs can only run Archimedes software through software emulation. I understand that a newer version of RISC OS does exist for the 32-bit architecture, but it's not compatible with older binaries. Programs have to be recompiled for it, and if written in assembly, partially rewritten! So, no "Sibelius 7" or "Lander"...
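A quick sketch of that combined PC-and-flags register (field positions as documented for the 26-bit ARM2; the helper function and test values here are mine):

```python
# Decode the 26-bit-era ARM R15, which packed the program counter and
# the processor status into a single 32-bit register.
# Bits 31-28: N, Z, C, V condition flags; bit 27: IRQ disable;
# bit 26: FIQ disable; bits 25-2: word-aligned PC; bits 1-0: mode.

def decode_r15(r15: int) -> dict:
    return {
        "N": (r15 >> 31) & 1,
        "Z": (r15 >> 30) & 1,
        "C": (r15 >> 29) & 1,
        "V": (r15 >> 28) & 1,
        "irq_disabled": (r15 >> 27) & 1,
        "fiq_disabled": (r15 >> 26) & 1,
        "pc": r15 & 0x03FFFFFC,   # 26-bit byte address, word-aligned
        "mode": r15 & 0x3,        # e.g. supervisor mode is 0b11
    }

# Saving R15 to R14 on a branch-and-link saves the flags "for free",
# which is the clever trick (and the 64MB ceiling) described above.
state = decode_r15(0x8C000013)
assert state["N"] == 1 and state["irq_disabled"] == 1
assert state["pc"] == 0x10 and state["mode"] == 3
```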

      • Re: (Score:3, Informative)

        The Cortex-A15 line extends the physical address range to 40 bits, which ought to be enough for the next few years.

        • by Cyberax ( 705495 )
          Interesting. Do they have direct 64-bit addressing or do they use page mapping tricks?
          • Think of it in terms of virtualization. The hypervisor has access to up to 1TB of memory, but the individual OS instances are 32-bit and can only address up to 4GB.

        • by Sycraft-fu ( 314770 ) on Friday September 10, 2010 @08:47AM (#33532680)

          That means going back to segmentation. That isn't a killer problem, but it is significant. In terms of how that works in modern computers, you can see it on Windows systems running on Intel processors with PAE. Basically the OS gets access to all the memory in the system, but it has to be divided up to be used. In the case of the Windows implementation, the kernel can get only 2GB and each application can get only 2GB. You can have multiple 2GB apps running, but they can't have more.

          For an app to get more, it has to implement memory management internally. Basically it talks to Windows and gets a range of memory set up that will be paged, it then gets more RAM allocated and specifies how to page through it. Called AWE and used by a couple apps, like MSSQL. Of course that is complex on the part of the app and would be problematic if you had multiple ones running.
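The windowing idea behind AWE can be illustrated with a toy model. This is a conceptual sketch only, not the actual Win32 API (the real calls are along the lines of AllocateUserPhysicalPages/MapUserPhysicalPages); the class and method names here are made up:

```python
# Toy model of windowed ("banked") memory: a small virtual window is
# remapped over a much larger physical pool, one page slot at a time.

PAGE = 4096

class WindowedMemory:
    def __init__(self, physical_pages: int, window_pages: int):
        self.physical = [bytearray(PAGE) for _ in range(physical_pages)]
        self.window = [None] * window_pages   # window slot -> physical page

    def map(self, slot: int, phys_page: int):
        """Point a window slot at a physical page (the 'remap' step)."""
        self.window[slot] = phys_page

    def write(self, slot: int, offset: int, data: bytes):
        self.physical[self.window[slot]][offset:offset + len(data)] = data

    def read(self, slot: int, offset: int, length: int) -> bytes:
        return bytes(self.physical[self.window[slot]][offset:offset + length])

# 1024 physical pages (4MB) accessed through a 4-page (16KB) window:
mem = WindowedMemory(physical_pages=1024, window_pages=4)
mem.map(0, 900)                  # slot 0 now views physical page 900
mem.write(0, 0, b"hello")
mem.map(0, 7)                    # remap; page 900 keeps its contents
mem.map(1, 900)
assert mem.read(1, 0, 5) == b"hello"
```

The app-side complexity the parent mentions is exactly this: the program, not the OS, must decide when to remap which slot.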

          Also it makes task switching hit the system harder over all, because of the segmentation.

          So, I mean, it works, don't get me wrong; I have seen servers doing it. However, 64-bit is a much, much cleaner solution, both OS-wise and software-wise. It really is a hack when you get down to it.

          I like current desktop CPUs, which have larger virtual address spaces than physical. You are right, 40-bits is fine for now. As far as I know the top end Intel CPUs only have 48-bits of address lines currently. No reason to implement all 64-bits, you wouldn't use it. However having a flat virtual memory space is something that is extremely useful. There's a reason everyone wanted to move to that with 32-bit CPUs as soon as it became feasible. We don't really want to go back to segmentation.

      • by node 3 ( 115640 ) on Friday September 10, 2010 @06:13AM (#33532054)

        How much more mainstream can it get?

        I think he means in terms of being something consumers are aware of, like they are with Intel and AMD. Yeah, I think the contrast is being exaggerated more than a little bit here, as a lot of people don't really know about Intel or AMD, and vice versa it's not like nobody knows about ARM, but there definitely is a difference in mindshare here.

        If you mean it should compete with Intel CPUs for PC processors, on the other hand, one impediment may be that ARM is (at least at present) a 32-bit architecture.

        I can't speak for AC, but I think ARM netbooks would do the trick. Unfortunately, the longevity of the netbook market isn't exactly clear, and ARM netbooks imply Linux, which is an even more uncertain consumer market than Windows netbooks.

        But yeah, phones and tablets, ARM is where it's at for now.

      • by devent ( 1627873 )
        Who cares if it's a 32-bit or a 48- or a 17-bit architecture? 64-bit architecture is 20 years old on the desktop, but right now nobody is using it anyway. If I can get a notebook with an ARM which can run OpenOffice, email, Firefox, and maybe Flash, for half the price and with a battery life of 8 hours or more, I really don't care what architecture it has.
        • by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Friday September 10, 2010 @06:26AM (#33532110) Homepage

          64bit architecture is 20 years old on the desktop but right now nobody is using it anyway.

          They're certainly using more memory than is practically addressable on 32-bit. Ordinary people do need that memory. They do work with large images. They do handle lots of data. They do have many things open at once. They do run large games. Not everyone needs it for everything, but being stuck with only 4GB of address space would really suck. (Luckily ARM isn't limited this way; the Cortex-A15 can address 1TB of memory directly, which is rather a lot more than anyone currently puts in a single machine.)

          If I can get a notebook with an ARM which can run OpenOffice, email, Firefox, and maybe Flash, for half the price and with a battery life of 8 hours or more, I really don't care what architecture it has.

          The apps are what people care about, yes. But many apps like to have lots of memory because they work with lots of data. (Funny, that...)

          • by devent ( 1627873 )
            Like I said, 64bit architecture is 20 years old but nobody (or a small fraction) is using 64bit Linux or 64bit Windows.

            Anyway, it would be a great start if I can finally buy any ARM notebooks. So far I couldn't see any of them.

            • Nearly all PCs and laptops sold these days are 64-bit and ship with 64-bit Windows. Have a look at dell.com and try to find a 32-bit PC.
      • by Kjella ( 173770 )

        If you mean it should compete with Intel CPUs for PC processors, on the other hand, one impediment may be that ARM is (at least at present) a 32-bit architecture.

        I don't see that as a huge drawback for at least taking on the netbook market, or possibly extending the netbook market to an even lower price point. There hasn't been any significant push for more memory in recent years; in fact, 4x4GB DDR2 has mostly been replaced by 6x2GB DDR3 as the "top of the line" at mortal prices.

        By far the greatest challenge is software, with no Windows or Mac support you'd be pushing Linux. A linux with no option to run Windows software through WINE or virtualbox for those occasional

        • Unless Windows or Mac get their heads out from their asses and go cross platform. I'd prefer Linux myself, but either way it wouldn't be a bad thing. Wishful thinking all around.

          • Re: (Score:3, Informative)

            by mr_mischief ( 456295 )

            Well, Mac has been 68xxx series, PPC, and Intel Xeon. OS X has worked on both PPC and Intel with AMD's 64-bit extensions. I wouldn't be terribly surprised if they changed platforms again someday if it was evident they could get a good deal and be competitive. They're already using ARM in several products and hosting the devel environments for those on OS X.

            Windows has actually been on IA32, Alpha, MIPS, PowerPC, IA64, and AMD64. The Alpha, MIPS, and PowerPC versions were short-lived. The IA-64 version is b

      • by imakemusic ( 1164993 ) on Friday September 10, 2010 @06:32AM (#33532134)

        ARM is everywhere. It's in your iPhone [...] in tablets, in NAS boxes, in DVD players... countless applications.

        Sorry, I wasn't listening. I was looking at the woman in the red dress.

      • one impediment may be that ARM is (at least at present) a 32-bit architecture.

        Not all desktop applications need dozens of GB of RAM. A 32bit architecture is more than enough for the browsing/mail/chatting crowd.
        I think the main impediment is that ARM runs a different instruction set.

        Which means that a big proportion of users won't even be able to run their x86-only favourite OS on it. And even in the unlikely case of Microsoft finally delivering the long-promised ARM port of Windows 7 (which it is still failing to produce), the software to which said users are addic

      • Until then, could there be a 64b emulation with multi-core 32b ARM processors?

      • Comment removed (Score:5, Insightful)

        by account_deleted ( 4530225 ) on Friday September 10, 2010 @11:01AM (#33533984)
        Comment removed based on user account deletion
  • Docks (Score:5, Interesting)

    by ozmanjusri ( 601766 ) <aussie_bob@hoMOSCOWtmail.com minus city> on Friday September 10, 2010 @05:01AM (#33531764) Journal
    It would be a great time to develop a standards-based dock/charger platform so we could drop our phones/tablets into an adaptor and have them display on a large monitor and accept standard USB peripherals.

    That would really shake up the Wintel alliance.

    • by Twinbee ( 767046 )

      Can't you connect any of the portables via HDMI to a monitor already?

      • by arivanov ( 12034 )

        You can, however you have to play with cables. Cables != dock. Geek != consumer.

        • Funny, most consumers I know don't have any problem connecting chargers and/or audio cables. An HDMI cable isn't any different...

      • by Cwix ( 1671282 )

        I know the Droid X does HDMI. It's the only one I've heard of having it.

      • which is not quite the same as a standard dock/charger with keyboard, mouse, LAN, sound, charger... connectivity.

        • HDMI, charger, and a USB hub for everything else. Meh. Docks always either break too quickly or far outlast the single device they were designed to take.

          No, I'm not missing that you said "standard dock", but that's pretty much an oxymoron. Other than Palm and Handspring, it's been nearly impossible to get one manufacturer to standardize a dock for their multiple devices. Good luck getting several to agree on one.

          • Docks always either break too quickly or far outlast the single device they were designed to take.

            So on average, they are just right?

            • Yes. Let's find the Goldilocks dock. ;-)

              Averages can be deceiving, can't they? Sometimes it's better to average the customer's happiness, which I think for docked computer devices as a whole would be below 50% satisfied customers, with most customers somewhere between 2 and 5 on a scale of 1 to 10, with 10 being "totally satisfied".

          • A "standard dock" design would be a cradle with a USB3 port in the center, and a USB3 port on the device. USB3 ought to offer enough power to charge most anything you want to carry in your pocket.

            I wouldn't mind seeing an HDMI connector at a specified distance so that docks could have HDMI support as well. That would be logical. Thus it will never happen.

      • Re: (Score:3, Informative)

        by bytta ( 904762 )

        Can't you connect any of the portables via HDMI to a monitor already?

        GSMArena lists 13 different phones with an HDMI port, and the trend seems to be increasing. http://www.gsmarena.com/results.php3?sFreeText=HDMI [gsmarena.com]

    • Re:Docks (Score:4, Interesting)

      by bgarcia ( 33222 ) on Friday September 10, 2010 @05:39AM (#33531914) Homepage Journal

      It would be a great time to develop a standards-based dock/charger platform so we could drop our phones/tablets into an adaptor and have them display on a large monitor and accept standard USB peripherals.

      Not USB. I want a BlueTooth keyboard & mouse.

      I'll accept an HDMI monitor connection for now (some phones have HDMI already), but eventually that should be wireless as well.

      When that happens, I'll have no need for a laptop.

      • Re:Docks (Score:4, Insightful)

        by nbharatvarma ( 784546 ) on Friday September 10, 2010 @05:59AM (#33531992)
        Once you start getting consumers used to no-buttons-no-wires sort of a thing, there's no stopping.

        I think we will see monitors / tv displays coming with an in-built wireless adapter, streaming content from the mobile which is lying on a charging pad.
        The flip side is that we will get more and more locked on to proprietary content platforms.
        • by Sycraft-fu ( 314770 ) on Friday September 10, 2010 @08:39AM (#33532636)

          While people love the idea of wireless, it just isn't going to happen for everything. In terms of power, it is basically impossible. You can do inductive charging, which is technically wireless, I suppose, but it doesn't really fix anything. Your device has to sit directly on the charger, which of course has a wire back to the outlet. It's been around forever; electric toothbrushes use it because having a waterproof system is important, but it just isn't that useful overall. Better to just use a wire, or have exposed connectors in a dock. Cheaper and more efficient.

          You'll never see actual wireless, longer range, power until we discover some way of getting around that pesky inverse square law thing.

          As for communications, well bandwidth is just an ironclad bitch, and one with no easy solution. The very best wireless technology can, in the best circumstances, compare favorably with old ass wired technology. Have a look at Wireless N as an example. If you have a good multi-antenna transmitter and receiver and you aren't too far away and there's no interference you can get 300mbps raw data rate. That works out to 100mbps of throughput. Oh yay. A whole 100mbps, you know, what the cheapest of the cheap wired ethernet can handle.

          The real problem starts with video. HDMI needs 2.8Gbps to support 1920x1080 @ 60Hz. That is just the video, no audio. If we start to want things like higher resolutions, higher refresh rates/3D, more than 8bpp, and so on, it takes even more. You can't do that with any cheap wireless tech these days.

          Also, when trying to make ultra-high-bandwidth wireless you run into the problem that is Shannon's Law. Bits per second is related to bandwidth and SNR. Well, SNR is something you can't do much about with wireless. The noise level is what it is, so you have to increase bandwidth to increase throughput. That means increasing frequency. Here there's a problem: the higher the frequency, the less ideal the transmission characteristics. The high-GHz stuff, what you need for big-bandwidth links, gets rather directional, is quite short range (even air attenuates it), and hardly passes through any barriers, even walls. This is all aside from the general difficulty of making cheap signalling hardware at those frequencies.
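For reference, the relation being invoked is the Shannon-Hartley capacity, C = B·log2(1 + S/N). A quick check with illustrative numbers (the channel widths and SNR are assumptions, not measurements):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 40 MHz channel (802.11n-style) at a healthy 20 dB SNR:
snr = 10 ** (20 / 10)            # 20 dB -> linear ratio of 100
c = shannon_capacity_bps(40e6, snr)
print(f"{c / 1e6:.0f} Mbit/s")   # ~266 Mbit/s theoretical upper bound

# To reach multi-gigabit video rates at the same SNR you need far more
# spectrum, which in practice pushes you into the high-GHz bands:
c_wide = shannon_capacity_bps(2e9, snr)   # a hypothetical 2 GHz-wide channel
print(f"{c_wide / 1e9:.1f} Gbit/s")
```

The numbers make the parent's point: at fixed SNR, the only lever left is bandwidth, and bandwidth on that scale only exists at frequencies with ugly propagation.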

          You also get the additional problem of needing even more bandwidth to avoid contention. With wires, there's no interference. I can HDMI to three displays side by side, and there's no problem. With wireless, each needs its own channel, which just further increases the amount of RF bandwidth you need to make things work.

          Wireless is useful, don't get me wrong, but I don't see this "All wireless, all the time" future you do. You could spend a lot of money trying to do wireless video from your Blu-ray player to your TV, or you could just get a cheap cable. Given that both devices are going to be plugged in anyhow, is it really such an issue?

          • Re: (Score:3, Insightful)

            by dave420 ( 699308 )
            My Touchstone charger for my Palm is awesome. I prefer that to fiddling around with wires. You just place it on the stone, and it starts charging. Yeah, the stone is plugged in to the wall, but that's not a problem. If something is moving around constantly, wires suck. If it's practically permanently stationary, wires are fine.
      • Not USB. I want a BlueTooth keyboard & mouse.

        So how do you recharge it? Lemme guess: a proprietary AC adapter? Induction? Tiny power gnomes?

    • by hitmark ( 640295 )

      https://secure.wikimedia.org/wikipedia/en/wiki/PDMI [wikimedia.org]

      You will find this at the bottom of the Dell streak, and most likely the Samsung Galaxy Tab as well. And i suspect the Toshiba Folio 100 may also sport such a connector.

      • Off-topic...

        But did you really just link an SSL wikimedia page to slashdot? Not very kind of you.

        I'd also be curious as to why you'd browse something like wikipedia via SSL, but I tunnel traffic via SSH so I don't have much to talk here either :P

        • by bsDaemon ( 87307 )

          If you install the HTTPS-Everywhere plugin from the EFF, Wikipedia is one of the sites on the default encryption list. I suspect that's what he did.

    • Oh stop (Score:5, Insightful)

      by Sycraft-fu ( 314770 ) on Friday September 10, 2010 @06:13AM (#33532052)

      Not with the idea of standards-based chargers, but with this "Wintel alliance" crap. There is no such thing. x86 chips are used for desktop computers because they are the only things that have been cheap, common, and powerful. MS has no special interest in pushing Intel. DOS, and thus earlier Windows versions, were tied to x86. When NT came out, they abstracted it, and indeed you could get NT4 for x86, PowerPC, and Alpha. Let me give you a hint how well those other versions sold. As such, they were discontinued.

      Also when it came to 64-bit for the desktop time, MS cast in with AMD. Intel was pushing Itanium, which MS does support on their server OSes, but AMD's 64-bit extensions, called amd64 internally by the Windows tools, were what was used for the desktop. So you can get Windows 7 in x86 and x64 variants, and Server 2008R2 in x64 and IA64 variants.

      Now for Windows CE (also the basis for Windows Mobile), their mobile/embedded OS, well then that runs on all sorts of things. x86, MIPS, ARM, and SuperH. Again, more could be added, this is just what is supported as that is what there is currently a market for.

      What it comes down to is that they support the architectures used in the markets their OSes work in. There is no ARM version of Windows 7 because there are no ARM desktops that demand it. Porting an OS to a new architecture and maintaining it is not a zero-effort task, so it isn't done unless it is worth it (unless it is NetBSD :D).

      Also, the reason x86/x64 continues so strong on the desktop is that it works so well. It provides binary compatibility with all your old apps, and the CPUs that use it are fast and cheap. Thus far, I've seen nobody who can beat Intel and AMD in that market. Sure, there are higher-end CPUs that cost more and use tons more power, like Itanium and Power7. There are also chips that use less power and are cheaper, like ARM. However, I've yet to see the chip that does better in their market, as in, can do more operations with the same or less power and costs less.

      So you want ARM desktops? Well first an ARM CPU that is competitive in that market has to come out. Competitive, please note, doesn't mean "Barely can compete with the low end." I'm talking something that makes you say "Wow, that is faster than my i5, and for less money." Then maybe there's interest. Should ARM desktops start to become popular, you can be pretty confident MS would move Windows over to them.

      But please, stop pretending like there's some sinister conspiracy to keep alternate architectures down. There are only two reasons for the x86 dominance:

      1) Compatibility. It is far nicer to have a chip that works with your old stuff. People will default to what's compatible unless given a good reason. I'm not going to pay the same amount for a CPU with the same performance that doesn't run my apps as for one that does. So whoever wants to break into the market has to offer a good reason: less cost, more performance, etc. You'd probably still need a good emulator to support older apps.

      2) Intel is really, really good. Everyone likes to hate on Intel because they are big and there's automatic underdog love on Slashdot, but they are good at what they do. They spend a ton on R&D, and the result is they are almost always ahead in terms of fabs, and their CPUs tend to offer great performance for the money. Yes, they've had problems; Netburst (P4) was an example. But currently it is impossible to touch the Core i series. They are fast, do a lot given their power budget, and have a good price.

      • Re: (Score:3, Funny)

        by Muad'Dave ( 255648 )

        AMD's 64-bit extensions, called amd64 internally by the Windows tools...

        There's no need to insult Microsoft's programmers by calling them 'tools'. They have enough vitriol hurled at them already from all the users that experience BSODs and viruses.

    • All "data enabled" phones in the EU are going to be chargeable through a micro-USB based standard socket.

      • by PhilHibbs ( 4537 )

        Maybe - Apple agreed to this last summer, but still brought out the iPhone 4 with only their proprietary connector.

    • That would really shake up the Wintel alliance.

      As soon as smartphones have enough power to run Excel, Word, and Outlook at (current - 5 years) office speeds, office Wintel will be replaced on a phone price basis.

      We'll still have desktops for photo/video editing, gaming, and many other things that can still burn any amount of computing power.

      And, some years later, someone will invent the direct brain connection and we'll go back to needing massive hardware beasts to process our home virtual worlds.

  • by Nursie ( 632944 ) on Friday September 10, 2010 @05:07AM (#33531794)

    I thought most of the interesting stuff took place on the server?

    Well either way, I wish them luck. Having competition and diversity in the processor market is a very good thing and forces everyone to step up to the mark, benefiting everyone.

    And if they've managed to keep the power envelope down then even better.

    • Re: (Score:3, Interesting)

      by dbIII ( 701233 )
      If it has 16 cores and doesn't use a lot of power it will be on the server or at least in RAID cards.
    • This is a laptop / server chip design. It fits into ARM's product line above the A9 and provides features that are more interesting on the server than anywhere else. It is also likely to have a larger power envelope than the A9.
      • This is a laptop / server chip design. It fits into ARM's product line above the A9 and provides features that are more interesting on the server than anywhere else. It is also likely to have a larger power envelope than the A9.

        No, it's a general purpose chip design like Intel has with the Core series, like AMD has with the Hammer series. The cores stay pretty much the same, but the support hardware and the number of cores will change from implementation to implementation. We'll likely see up to four cores in actual portable devices.

    • That's exactly what I was wondering too. I'm pretty sure I'm not going to see a company putting its WebSphere application server or Oracle or DB/2 database on a cell phone or netbook any time soon. Nor (I hope to the elder gods) will they make their personnel enter the data or program those servers on cell phones instead of some kind of desktop.

      Granted, some of those servers may or may not have ARM CPUs, but then that's not what he's implying there. And a lot are running on PowerPC already. The server w

    • Virtualization of server farms/Infrastructure

  • by Tapewolf ( 1639955 ) on Friday September 10, 2010 @05:07AM (#33531796)
    32-bit addressing was seriously impressive in 1987, compared to Acorn's then-current machine with 32KB, including video memory. But now even smartphones are starting to come with 512MB, 1GB of memory. Does ARM have a strategy for getting past 4GB?
    • by forkazoo ( 138186 ) <<wrosecrans> <at> <gmail.com>> on Friday September 10, 2010 @05:22AM (#33531860) Homepage

      32-bit addressing was seriously impressive in 1987, compared to Acorn's then-current machine with 32KB, including video memory. But now even smartphones are starting to come with 512MB, 1GB of memory. Does ARM have a strategy for getting past 4GB?

      From what I understand, the A15 will support 40 bit physical addressing. So far, I'm not certain if that's segmented, or sane. I heard a claim that in a multicore setup, different cores might be configured with distinct memory controllers so that the various cores need not address strictly the same 40b worth of memory, enabling some sort of NUMA setup. Dunno if that will ever happen in practice. 1 TB RAM is likely to be sufficient for the commercially relevant life of the CPU.

      • by hitmark ( 640295 ) on Friday September 10, 2010 @05:59AM (#33531996) Journal

        Combined with the virtualization support, I suspect one could allocate the different cores to different OS images and use the address space to slice up the RAM as needed. Consider having a rack of these in a web hotel, with each core running its own server instance. Hell, given that one can fit an ARM SoC on a DIMM, one could make such a rack very easily expandable with the correct mother/logic-board.

      • by MemoryDragon ( 544441 ) on Friday September 10, 2010 @06:09AM (#33532042)

        It will come down to something like the old Intel addressing modes with segments: you get so-called segments of at most 4 gigs that you have to juggle around. At the assembly level this system was quite evil, because you had to shuffle segments around for code, data, stack, and so on.

        The plus side is that it offered another layer of code-injection protection. But it was very unpopular because of the complexity, and once segment spaces became big enough most compilers just rolled one huge segment and placed code and data there.

        For a processor designer this approach is very elegant, however, because they can increase the memory range ad infinitum while keeping the register size the same, and thus keep backwards compatibility.

        From a programmer's point of view segments are hell, because you never know when you'll run into the boundary set by the segment, and then the shuffling begins. Also, if you have data bigger than the segment, you have to press it into multiple ones.

        I am not sure I like the way ARM is going here just to keep backwards compatibility. At some point they will have to break it to keep power consumption low (Intel just added the next layer of fluff on top of everything), and I guess, given their current success in the mobile phone market, they shy away a little from rolling out the next break in backwards compatibility like they have done in the past.

        • Actually, it's not _that_ bad for most applications.

          I have actually programmed assembly back in ye goode olde days of 16 bit CPUs and segment registers, and the reason it was evil was that you ran into that limit all the time. Even the most trivial operations had to juggle registers. You couldn't even process a 640x480 pixel image in 16 colours without running into segment maths. (Incidentally that aforementioned image would need about twice the memory you could address with 16 bits without segment maths.)
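For anyone who missed that era, the segment maths in question looks like this (real-mode x86: linear address = segment×16 + offset, giving 64KB of offsets per segment):

```python
# Real-mode x86 address formation: the 16-bit segment register is
# shifted left 4 bits and added to a 16-bit offset, yielding a 20-bit
# linear address. Each segment therefore spans only 64KB of offsets.

def linear_address(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF   # 20-bit wrap, as on an 8086

assert linear_address(0x1000, 0x0000) == 0x10000
assert linear_address(0x1000, 0xFFFF) == 0x1FFFF   # last byte of this segment

# The 640x480, 4-bits-per-pixel image mentioned above cannot fit in one
# segment, so the code had to step the segment register as it walked
# the buffer:
image_bytes = 640 * 480 // 2           # 153,600 bytes
segments_needed = -(-image_bytes // 0x10000)   # ceil(153600 / 65536)
assert segments_needed == 3
```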

      • by thue ( 121682 )

        But what if you want to memory-map your 2TB Hard disk as virtual memory?

    • by romiz ( 757548 ) on Friday September 10, 2010 @05:33AM (#33531896)
      According to ARM's web site [arm.com], there are 'Long Physical Address Extensions (LPAE)' that allow addressing 1 TiB (40 bits). The marketing schematics for the processor mention a "Virtual 40b PA" for each CPU.

      Unfortunately, the detailed A15 documentation is not available yet, so we're left to speculate over what this means. But at the same time, the supported architecture remains ARMv7 and there is no hint of any major changes on the instruction side. An easy implementation would use a MMU with 40-bit physical addresses to map this amount of memory, but the process size would remain at 4 GiB to avoid any drastic change to the programming model.
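A minimal sketch of that speculated model: each process keeps a 32-bit virtual space while page-table entries carry 40-bit physical frames. This is guesswork in the same spirit as the comment above; the structure (a toy single-level table) is illustrative, not ARM's actual LPAE format:

```python
# Toy single-level page table mapping 32-bit virtual addresses onto a
# 40-bit (1 TiB) physical space. A real MMU uses multi-level tables.

PAGE_SHIFT = 12                       # 4 KiB pages
PHYS_MASK = (1 << 40) - 1

class Process:
    def __init__(self):
        self.page_table = {}          # virtual page number -> physical frame

    def map_page(self, vpn: int, phys_frame: int):
        assert vpn < (1 << (32 - PAGE_SHIFT))        # VA space stays 4 GiB
        assert phys_frame < (1 << (40 - PAGE_SHIFT)) # PA space is 1 TiB
        self.page_table[vpn] = phys_frame

    def translate(self, vaddr: int) -> int:
        frame = self.page_table[vaddr >> PAGE_SHIFT]
        return ((frame << PAGE_SHIFT) | (vaddr & 0xFFF)) & PHYS_MASK

# Two processes can map the same 32-bit virtual address to frames that
# live far apart in the 1 TiB physical space:
a, b = Process(), Process()
a.map_page(0x10000, 0x100000)         # frame at physical 4 GiB
b.map_page(0x10000, 0xF000000)        # frame at physical 960 GiB
assert a.translate(0x10000123) == 0x100000123
assert a.translate(0x10000123) != b.translate(0x10000123)
```

Under this model the per-process programming model is untouched; only the OS (or hypervisor) sees the extra 8 bits.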
      • Unfortunately, the detailed A15 documentation is not available yet, so we're left to speculate over what this means. But at the same time, the supported architecture remains ARMv7 and there is no hint of any major changes on the instruction side. An easy implementation would use a MMU with 40-bit physical addresses to map this amount of memory, but the process size would remain at 4 GiB to avoid any drastic change to the programming model.

        Yeah, that's the picture I'm getting from the collection of links provided to my query. A 64-bit address register would have been nice, but it looks more like they're aiming this at virtualisation, e.g. to provide multiple 'instances' of a 4GB address space to several VMs.

    • by jimicus ( 737525 )

      Bit of a shame, then, that in 1987 it didn't support full 32-bit addressing (IIRC instructions were 32 bits wide but the address space was only 26 bits), and even if it had, it relied on a separate memory controller.

      The MEMC1 in the early Archimedes models supported.... oooh, 1MB of RAM. If you upgraded the memory (no SIMM sockets then; it had to be soldered on), you also had to upgrade the chip to a MEMC1a, which supported 4MB.

      (Note: much of this is a hazy recollection - constructive correction welcomed!)

    • They extended the addressing to 40 bits... but only for physical memory; register addressing is still 32-bit for backwards-compatibility reasons.

    • by miffo.swe ( 547642 ) <daniel@hedblom.gmail@com> on Friday September 10, 2010 @06:40AM (#33532154) Homepage Journal

      The 4 GB barrier was overcome a long time ago on 32-bit systems. The reason people still think it's a problem is that Microsoft decided you, as a customer, shouldn't be able to use more than 4 GB of memory on 32-bit Windows, ever since Windows 2000. The limitation on 32-bit Windows is purely artificial today, while Linux gladly handles any memory you toss at it.

      Excellent article explaining the issue:
      http://www.geoffchappell.com/viewer.htm?doc=notes/windows/license/memory.htm [geoffchappell.com]

      I have also yet to see a benchmark where 64-bit in itself gives a significant advantage outside large calculations and simulations.

      • The 4 GB barrier was overcome a long time ago on 32 bit systems. The reason people still think its a problem is because Microsoft decided you as a customer shouldnt be able to use more than 4 GB memory on 32-bit since Windows 2000 .

        Er, ARM is not an x86 derivative. This new revision does seem to have added some flavour of PAE, but AFAIK 4GB is an absolute limit for all currently-manufactured ARM microprocessors.

      • by pstorry ( 47673 ) *

        I think you'll find it's giving significant advantage.

        To the bank accounts of Intel and AMD, as it's giving people (often gamers) a "reason to upgrade"... ;-)

        Generally, though, I'd agree with you.

        When I last bought a machine, it was before the time of Windows 7. 64-bit was an option, but not a good one. So I went with 32-bit and 4Gb of RAM, mostly because of reasons I suspect you'd agree with:
        a) For playing games under Windows, I lose nothing. A 768Mb graphics card means I lose 768Mb of RAM under Windows

        • Re: (Score:3, Interesting)

          Problem is the memory mapped IO, add 2 gigs of graphic card data mapped into memory and you have a problem...

  • by Eternal Vigilance ( 573501 ) on Friday September 10, 2010 @05:09AM (#33531808)

    "It's like taking a desktop and putting it in your pocket," said Schorn.

    That's gotta be one of the most uncomfortable marketing images ever.

    "Is that an ARM in your pocket or are you just glad to see me?"

  • Right now my Samsung 5000-series LED TV runs an ARM with BusyBox Linux as its firmware. It is only a matter of time before TVs become fully internet-capable and use USB 3 for storage. I have also seen demos of touch-screen remotes with QWERTY capability for your TV. So the only thing missing is a simple cursor system and presto, you have it all. Seeing that ARM processors are becoming this powerful, the market for all-in-one home entertainment devices is there. If Microsoft does not see this coming and c
  • by udippel ( 562132 ) on Friday September 10, 2010 @06:11AM (#33532048)

    'The exciting place for software developer graduates to go and hunt for work is no longer the desktop.'

    Why, actually, why??
    I am really, really looking forward to a desktop with a low power footprint. There is no need here to run MS-crapware; no Crysis or other high-resource gaming.
    Gimme a nice desktop, low-low power, that boots to Debian on ARM, and I'll throw mine out of the window. And I already have an 80+ PSU, a single row of RAM, and a dual-core EE AMD. It still has a 45W TDP; plus AMD no longer sells the Energy Efficient (EE) parts except to OEMs, at least in this country.
    Throw out the 24-pin plus 12 V power supply, let's do everything on 12 V, and give it 6 USBs, SATA, HDMI/DVI, Ethernet and WiFi. A mini ARM.
    And, yes, I want to be able to add a hard disk of my own, maybe a DVD or Blu-ray drive, so add some space.

    • Re: (Score:3, Interesting)

      by gbjbaanb ( 229885 )

      I'm thinking of the marketplace these would be targeted at.

      Sure, hard-core gamerz will not want one if it doesn't run the absolute latest super-graphics games that require 2 PSUs and 4 Gfx cards for their neon-light equipped gaming rigz. but, ignoring them....

      My account manager always has his (old) smartphone glued to his ear when i see him. And he uses his PC for email and the odd word document. That's easily replaced with a smartphone, one that could connect to a big monitor and keyboard while still bein

      • by udippel ( 562132 )

        I'm thinking of the marketplace these would be targeted at.
        Me too. All those machines that I see being run for our receptionists, point-of-sale terminals, kiosks.
        No, I don't think everyone will bring their HDMI-enabled mobile from home and plug it into a docking station. As a security-conscious person, I wouldn't even want that.
        Though, just think about the saving in energy when 1 billion PCs can be replaced by boxen consuming a fraction of the current power-suckers! Let the other 1 billion still sit on WIntel, if need be.

    • by gmarsh ( 839707 ) on Friday September 10, 2010 @07:25AM (#33532326)

      Marvell OpenRD-client:

      http://www.globalscaletechnologies.com/t-openrdcdetails.aspx [globalscal...logies.com]

      Has an ARM9 at 1.2GHz, half a gig of RAM, sound, VGA video, lots of USB, SD card reader, 2 GbE ports, eSATA and a spot for a 2.5" hard drive in it. Mine draws 10W from the wall. And it happily runs Debian.

      My only beef is the video (XGI Z11) has absolutely horrible driver support, so don't expect the thing to play Blu-ray.

      • by udippel ( 562132 )

        Of course, that's what I can foresee. But 1280x1024 is already far beyond its scope, never mind the lack of hardware acceleration at those specifications. A bit of Compiz is quite okay for me; it doesn't have to be rotating cubes.
        Give it an extra 5.25" slot, and HDMI. Otherwise Marvell won't sell all too many.
        And ping me when that machine comes to the market, please.

    • plus AMD does not sell the Energy Efficient (EE) any longer except to OEMs; at least in this country.

      What is "this country"?
      In Germany, you can still get some "Energy Efficient" models, although they have a small "e" instead of the "EE" now. For instance the Athlon_II_X2_240e:
      http://www.alternate.de/html/product/CPU/AMD/Athlon_II_X2_240e/137074/?tn=HARDWARE&l1=Prozessoren+(CPU)&l2=Desktop&l3=Sockel+AM3 [alternate.de]
      2 x 2.8GHz at 45W TDP, for 65 Euros. That is pretty nice.

    • by gatkinso ( 15975 )

      A Mac Mini, while Intel-based, almost fits these specs. It is fairly low-power.

  • by Chrisq ( 894406 ) on Friday September 10, 2010 @06:39AM (#33532150)
    I know many CS graduates who have thought that the most interesting stuff to play with is in the pocket.
  • by Anonymous Coward on Friday September 10, 2010 @07:29AM (#33532348)

    I'm currently working with several concurrency development groups within the SUNY system; we are partnered with Oracle, Google, and IBM, as well as a few others. Upon mention of ARM, not a single co-worker has been able to resist going into rant mode about the lack of reasonably quick CAS and LL/SC implementations. Further, barriers and fences apparently take so long to establish that to fake a CAS you are looking at three to six hundred cycles, compared to about a dozen for current-generation i7s and SPARCs (optimistic CASing). Can anyone speak to the implementation of these features on the new chip?

    • Re: (Score:3, Informative)

      by xianthax ( 963773 )

      What I assume he means is:

      CAS - Compare and Swap

      LL/SC - Load-Link/Store Conditional

      Without getting into too much detail, both are design concepts/operations that are critical components of any system that requires atomic operations. For example, implementing semaphores/mutexes, which are in turn critical components of most symmetric multi-processing systems such as the Linux kernel (when so configured) or Windows. While these operations are most critical in multi-core systems, single core systems also have

  • by GuyFawkes ( 729054 ) on Friday September 10, 2010 @07:35AM (#33532366) Homepage Journal

    Have run all of these, in anger, in production, at one point or another.

    I still have an extremely soft spot for the RAQ2, 64 bit MIPS processor.

    Image link - http://dev.gentoo.org/~vapier/pics/mipsel-raq2/inside-main-board.jpg [gentoo.org]

    Nota Bene, NO HEAT-SINKS OF ANY KIND, and yet these puppies could saturate a 10 Mbit connection (of course this was the days before flash and stuff) and the whole mainboard used about 10 watts, most of which was the RAM, the biggest power eater was the IDE HD.

    Downside was it was MIPS, which is a lot like the downside of the Acorn ARM based A series and Risc-PC series, eg not x86 compatible, ergo not mainstream.

    Now that ARM is used in zillions of other devices, ARM is no longer a backwater; it's everywhere except in "a computer", eg desktop or server.

    Which means ARM on the desktop or ARM on the server won't suffer so badly for not being x86... it will still suffer, but not so badly.

    RAQ3 went away from MIPS to x86, IMHO because of this accessibility and availability of x86 code, not because it was technically superior to MIPS... one RAQ3 wasn't more powerful than two RAQ2 in any sense except power consumption and thermal rejection.

    In practical terms x86 has gone nearly as far as it can go, both in terms of light speed and die size, and thermal dissipation per cubic mm, so the alternatives are catching up, not so much because of sheer lifting power, but because of thermal dissipation per cubic mm they still have "development room" left to play around in.

    The next 5 years or so are going to be interesting, as this "development room" is explored and used up, and especially so if anyone comes out with a robust cross architecture compiler / translator.

    • Like so many others, I have an SGI Indy; it has a 64-bit MIPS processor. I have the R4400SC, which is faster than the R5000, which didn't come with secondary cache. It has and needs a heat sink (but no fan). But it's 200MHz and, by today's standards, not fit to be a game console.
