Apple Developing Custom ARM-Based Mac Chip That Would Lessen Intel Role (bloomberg.com) 267

According to Bloomberg, Apple is designing a new chip for future Mac laptops that would take on more of the functionality currently handled by Intel processors. The chip is a variant of the T1 SoC Apple used in the latest MacBook Pro to power the keyboard's Touch Bar feature. The updated part, internally codenamed T310, is built using ARM technology and would reportedly handle some of the computer's low-power mode functionality. From the report: The development of a more advanced Apple-designed chipset for use within Mac laptops is another step in the company's long-term exploration of becoming independent of Intel for its Mac processors. Apple has used its own A-Series processors inside iPhones and iPads since 2010, and its chip business has become one of the Cupertino, California-based company's most critical long-term investments. Apple engineers are planning to offload the Mac's low-power mode, a feature marketed as "Power Nap," to the next-generation ARM-based chip. This function allows Mac laptops to retrieve e-mails, install software updates, and synchronize calendar appointments with the display shut and not in use. The feature currently uses little battery life while run on the Intel chip, but the move to ARM would conserve even more power, according to one of the people. The current ARM-based chip for Macs is independent from the computer's other components, focusing on the Touch Bar's functionality itself. The new version in development would go further by connecting to other parts of a Mac's system, including storage and wireless components, in order to take on the additional responsibilities. Given that a low-power mode already exists, Apple may choose to not highlight the advancement, much like it has not marketed the significance of its current Mac chip, one of the people said. Building its own chips allows Apple to more tightly integrate its hardware and software functions. It also, crucially, allows it more of a say in the cost of components for its devices. However, Apple has no near-term plans to completely abandon Intel chips for use in its laptops and desktops, the people said.

  • Walk before you run (Score:4, Interesting)

    by Anonymous Coward on Wednesday February 01, 2017 @11:51PM (#53786143)

    ARM has only been doing 64-bit out-of-order execution and branch prediction for two generations, the first of which (A57) seemingly had worse IPC than Intel's NetBurst architecture. They may catch up one day, but for now they are no closer to besting Intel than Transmeta was back in the days of Crusoe. Let's just hope their revenue stream lasts long enough for that to happen.

    • by cerberusss ( 660701 ) on Thursday February 02, 2017 @02:25AM (#53786497) Journal

      ARM has only been doing 64-bit out-of-order execution and branch prediction for two generations

      In a single-core benchmark, Apple's A9X @ 2.25 GHz already defeats Intel's 1.3 GHz Core M7 CPU.

      The idea is not to compete with a desktop Xeon but instead to nibble at Intel's feet at the bottom end. Check out this 2016 benchmark between the 12" MacBook (Intel @ 1.3 GHz) and the 12.9" iPad Pro: http://barefeats.com/macbook20... [barefeats.com]

      GeekBench 3 single-core, higher is better:
      MacBook Intel @ 1.3 GHz: 3194
      iPad Pro: 3249

      GeekBench 3 multi-core, higher is better:
      MacBook Intel @ 1.3 GHz: 6784
      iPad Pro: 5482

      GFXBench Metal, more FPS is better:
      MacBook Intel @ 1.3 GHz: 26.1 FPS
      iPad Pro: 55.3 FPS

      JetStream javascript benchmark, higher is better:
      MacBook Intel @ 1.3 GHz: 175.68
      iPad Pro: 143.41
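
      For a quick read on the relative gaps, here is a small C sketch that just does the arithmetic on the scores quoted above, printing the iPad Pro's result as a percentage of the 1.3 GHz MacBook's for each benchmark (the numbers are copied verbatim from the barefeats comparison cited above; nothing else is assumed):

      /* Relative comparison of the benchmark scores quoted above. */
      #include <stdio.h>

      int main(void) {
          struct { const char *name; double macbook, ipad; } results[] = {
              { "GeekBench 3 single-core", 3194.00, 3249.00 },
              { "GeekBench 3 multi-core",  6784.00, 5482.00 },
              { "GFXBench Metal (FPS)",      26.10,   55.30 },
              { "JetStream (JavaScript)",   175.68,  143.41 },
          };

          for (int i = 0; i < 4; i++) {
              printf("%-26s iPad Pro at %6.1f%% of the MacBook\n",
                     results[i].name,
                     100.0 * results[i].ipad / results[i].macbook);
          }
          return 0;
      }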

      • In a single-core benchmark, Apple's A9X @ 2.25 GHz already defeats Intel's 1.3 GHz Core M7 CPU.

        Wow, that's closer than I thought. For Intel, that must be much too close for comfort. But there's a benchmark that gives Intel much bigger headaches: ARM Holdings only charges a couple of percent per chip, so high-end ARM SoCs come in at just a sliver over fab cost, well under what Intel has to charge for its parts if it wants to keep living in the manner to which it has become accustomed.

      • by marcansoft ( 727665 ) <hector@TOKYOmarcansoft.com minus city> on Thursday February 02, 2017 @03:54AM (#53786691) Homepage

        Except the A9X doesn't use an ARM-designed core, which is what the parent was talking about. It's an Apple-designed chip that implements the ARM instruction set. Big difference.

        IP cores from ARM Holdings Inc, today, do not compete with Intel. Nor do any of the other ARM cores around (e.g. Qualcomm's, Nvidia's). But it seems Apple right now has better engineers than all of those and is actually managing to design ARM-compatible cores that are starting to be comparable to Intel chips.

      • Geekbench *sigh* (Score:2, Interesting)

        by Anonymous Coward

        While I honestly would like to see more of these comparisons (and the A9X IS a beast, esp. re. IPC and Perf/W) - could everyone please stop using Geekbench scores for cross-arch comparisons, especially 3 or older.

        The code paths and compilation flags are wildly arbitrary, and the author has shown time and again his lack of understanding of cross-platform benchmark caveats and pitfalls. GB3 in particular has been shown to be useless in that regard, among others by Linus Torvalds himself, no less (just look up his f

    • by gtall ( 79522 ) on Thursday February 02, 2017 @06:00AM (#53786923)

      ARM is owned by SoftBank; they are not a standalone company any longer. SoftBank can afford to take the long view. I was sorry to see them bought out.

    • ARM has only been doing 64-bit out-of-order execution and branch prediction for two generations

      That's a big combination of features. ARM has been doing branch prediction for a couple of decades. The Cortex A9 was their first out-of-order design. The A8 was two-way superscalar, but in-order. These were introduced in 2005 and (I think) 2010. 64-bit is newer, but in microarchitectural terms not nearly as big a jump as the others - data paths are wider, but that's basically it (and a load of the difficult things, like store multiple and the PC as a target for arbitrary instructions, went away with AArch64).

  • by Snotnose ( 212196 ) on Thursday February 02, 2017 @12:11AM (#53786187)
    My understanding is that a significant percentage of an Intel die is devoted to supporting ancient x86 instructions.

    Apple doesn't care about backward compatibility. If they can deliver a next gen chip with zero support of existing apps, they may have the money to pull it off.

    If Intel could write off the x86 instruction set, I'm guessing its benchmarks would at least double.
    • by Z80a ( 971949 )

      The translation layer is actually quite tiny, with the more arcane instructions being handled by a ROM.

      • The translation layer is actually quite tiny

        The translation layer is not actually tiny, it's just that the rest of the chip is gigantic.

        • by Z80a ( 971949 )

          Given current manufacturing processes etc., it's most likely a lot smaller than a 1980-ish 6502, for example.

        • The translation layer is not actually tiny, it's just that the rest of the chip is gigantic.

          Big steak makes the potatoes look smaller. Or something.

          Perhaps I should have just gone for "to-MAY-toes, to-MAH-toes"

    • by Jeremi ( 14640 )

      Apple doesn't care about backward compatibility. If they can deliver a next gen chip with zero support of existing apps, they may have the money to pull it off.

      It sounds like Apple does care about backwards compatibility, which is why the ARM is only designed to work as a co-processor to offload a few specific tasks from the Intel CPU, rather than as a general-purpose CPU for developers to target.

      That said, Apple has changed CPU architectures before without (significantly) breaking compatibility; you may recall Rosetta, which allowed x86-based Macs to run PowerPC MacOS/X executables for a number of years (until Apple made it an optional install, and then later dropped it entirely).

      • The real problem is Windows support. Apple's sales doubled once they switched to Intel, once you could have full-performance Windows and macOS on the same machine and no longer had to choose Mac or Windows but could have both.

        Virtualization works well today since they don't have to emulate an instruction set architecture. Recompiling a binary from one ISA to another could help, but may still feel sluggish; it's not quite the same as starting from the source code. And of course there is Boot Camp, whic
      • Apple has changed CPU architectures before without (significantly) breaking compatibility; you may recall Rosetta, which allowed x86-based Macs to run PowerPC MacOS/X executables for a number of years (until Apple made it an optional install, and then later dropped it entirely).

        Yes, they dropped it entirely long before the machines in question were obsolete. Shit, I got the last dome iMac (for five bucks, on a whim) and it's snappy enough to do pretty much everything on if you're not in a big rush. But you can't get Chrome for it, and you can't run x86 binaries, so it's landfill. (If it didn't have scratches on the display, I'd try to figure out how to use the display, and wedge some tiny PC or a R-Pi into the case. Alas.) This is precisely why Windows/Intel is a smarter move than

    • by vux984 ( 928602 )

      My understanding is that a significant percentage of an Intel die is devoted to supporting ancient x86 instructions.

      Nope. You understand wrong. The ancient x86 instructions are a tiny, insignificant slice of the die.

      The CPU cores have been RISC-like internally forever now. The x86 instruction set is all converted to RISC-style micro-ops in the decoder. The decoder itself is a pretty tiny part of the core, and the 'ancient obsolete instructions' account for a dozen or so bytes of "RISC lookup" in a table in the decoder on each core.

      Cache, GPU, and the memory controller are what dominate the die of a modern i5 or i7.

      It's like those old HSP modems that

      • If you want to worry about legacy stupidity bloating Intel chips, look at their cache model, not their instruction set. Their legacy "everything is coherent everywhere" requirement means they need snooping/invalidation logic around every single little cache block (e.g. the branch predictor). ISAs where, for example, you are not allowed to execute dynamic code without first flushing it from D cache and invalidating that range from I cache don't have this problem.
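
        To make that concrete, here is a minimal sketch (assuming Linux on AArch64 and GCC or Clang; macOS would additionally need MAP_JIT handling) of the explicit cache maintenance such ISAs require after generating code at runtime. On x86, the coherency hardware described above makes the __builtin___clear_cache step a no-op in practice:

        /* Minimal JIT-style example: write two AArch64 instructions into an
         * executable buffer, then clean the D-cache and invalidate the I-cache
         * over that range before jumping to it. On x86 the caches are kept
         * coherent by hardware, so no equivalent step is needed. */
        #define _DEFAULT_SOURCE
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
            /* AArch64 encodings for: mov w0, #42 ; ret */
            uint32_t code[] = { 0x52800540u, 0xd65f03c0u };

            void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) return 1;

            memcpy(buf, code, sizeof code);

            /* The explicit software cache maintenance the comment above refers to. */
            __builtin___clear_cache((char *)buf, (char *)buf + sizeof code);

            int (*fn)(void) = (int (*)(void))buf;
            printf("%d\n", fn());   /* prints 42 */
            return 0;
        }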

    • And the gate count would decrease, which increases yields.
    • Apple always included emulators for the older binaries, and new software was usually delivered as a "fat binary" that included the code for all supported CPUs. In other words, there is no "next gen chip with zero support of existing apps".

      I would not be surprised if future CPUs have cores with different instruction sets anyway, or if we go back to multiple CPUs where one CPU is an ARM or whatever exotic CPU might be interesting.
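
      As a concrete illustration of the fat binaries mentioned above, here is a small C sketch that checks whether a file starts with the multi-architecture ("fat") Mach-O header and reports how many architecture slices it carries. The magic bytes and big-endian slice count mirror the layout in Apple's <mach-o/fat.h>, but the values are checked by hand here so the sketch compiles anywhere; treat it as an illustration, not a full Mach-O parser (Java class files share the same magic, so it's only a heuristic, and on a real Mac "lipo -info <binary>" gives the same answer):

      /* Report whether a file is a multi-architecture ("fat") Mach-O binary.
       * A fat file starts with the bytes 0xCA 0xFE 0xBA 0xBE followed by a
       * big-endian count of architecture slices. */
      #include <stdio.h>
      #include <stdint.h>

      int main(int argc, char **argv) {
          if (argc != 2) {
              fprintf(stderr, "usage: %s <binary>\n", argv[0]);
              return 1;
          }
          FILE *f = fopen(argv[1], "rb");
          if (!f) { perror("fopen"); return 1; }

          unsigned char hdr[8];
          if (fread(hdr, 1, sizeof hdr, f) != sizeof hdr) {
              fprintf(stderr, "file too short\n");
              fclose(f);
              return 1;
          }
          fclose(f);

          if (hdr[0] == 0xCA && hdr[1] == 0xFE && hdr[2] == 0xBA && hdr[3] == 0xBE) {
              uint32_t nslices = ((uint32_t)hdr[4] << 24) | ((uint32_t)hdr[5] << 16) |
                                 ((uint32_t)hdr[6] << 8)  |  (uint32_t)hdr[7];
              printf("fat binary with %u architecture slice(s)\n", nslices);
          } else {
              printf("not a fat binary (single-architecture or not Mach-O)\n");
          }
          return 0;
      }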

  • by caseih ( 160668 ) on Thursday February 02, 2017 @12:33AM (#53786245)

    Sounds like a great way to lock OS X, or macOS or whatever they call it these days, solidly back to Apple hardware and preclude any possibility of running it on stock x86 hardware. Though there's less and less reason to run a hackintosh these days (it was always a maintenance nightmare), virtualization might be a way of getting around that. I've often thought Apple should sell a complete OS X (excuse me, macOS) VM for Windows users, as it would provide an easy way to woo users to the platform. However, the VM on your average Windows machine would probably outperform the Mac Pro, given Apple's commitment to high-end users these days.

  • by Solandri ( 704621 ) on Thursday February 02, 2017 @01:02AM (#53786337)
    They moved to Intel because the Mac doesn't have enough sales volume to drive its own CPU R&D. The Macs started on Motorola, but switched to PowerPC when Motorola started to fall behind Intel. Unfortunately, the Macs (home and office PCs) accounted for something like 1% of PowerPC sales, so IBM didn't give a damn what Apple wanted. Their meat and potatoes was in the server market, so that's what they tuned the PowerPC CPUs for, even as the PC market was clearly moving towards low-power-consumption laptops. That's what drove Apple to Intel in the first place.

    They're gambling that ARM CPUs (SoCs) will become powerful enough to accomplish the tasks people ask of Macs, while revenue from phone, tablet, and other small device sales (e.g. Apple TV) will be enough to sustain R&D to keep it progressing as rapidly as Intel CPUs. That could happen, but I'm not convinced it will. The tablet market is already floundering after reaching saturation. I'm guessing phones will soon join them once 5G arrives (5G data will be fast enough there will be no compelling reason to upgrade your phone for 5-10 years). In a saturated marketplace, the Mac commands so little of the PC market that it wasn't able to keep Motorola competitive or sway IBM. And this battle - CISC (Intel) vs RISC (Alpha, MIPS, Sparc, Power, ARM) - has been fought before. Every time, CISC has come out the winner.

    Intel (and Microsoft) is successful because they managed to find a market with consistently large annual sales (and profit margins) even after reaching saturation. So far Apple has been riding a growing mobile market to success - basically coasting downhill. It remains to be seen whether they can continue that momentum once the hill levels out, people stop upgrading every 2 years, and they're forced to really, truly innovate to create demand to sustain their sales.
    • by Fallen Kell ( 165468 ) on Thursday February 02, 2017 @01:50AM (#53786429)

      They're gambling that ARM CPUs (SoCs) will become powerful enough to accomplish the tasks people ask of Macs, while revenue from phone, tablet, and other small device sales (e.g. Apple TV) will be enough to sustain R&D to keep it progressing as rapidly as Intel CPUs.

      It won't happen, and mainly for the exact reasons you stated. Phones and tablets have already taken over the "I don't do much other than browse the internet/watch youtube/update facebook/snapchat/twitter/email" jobs that low performance CPUs can handle. The only reason someone has a need to purchase a real computer now is because they have a real need for processing power (gaming, photo/video editing, developing software, running simulations). Everything else is already being done by the lightweight CPUs.

      • Or they are business people who want to answer email and create other content using a real keyboard. Those users don't need an incredible amount of CPU power, but the laptop form factor is pretty ideal. 2 in 1s are good in economy class, but if you fly in business class, a real laptop is much nicer on the plane. A low power mode used to be really good on a long flight although most long-haul flights have power outlets now so it's less of an issue. Admittedly the new MacBooks with the touch bar are suppo
    • by erice ( 13380 ) on Thursday February 02, 2017 @02:11AM (#53786467) Homepage

      And this battle - CISC (Intel) vs RISC (Alpha, MIPS, Sparc, Power, ARM) - has been fought before. Every time, CISC has come out the winner.

      It wasn't really a battle of RISC vs CISC. It was a battle between incumbents and upstarts.

      In the workstation arena, the CISC incumbent was Motorola with their 68k series. Despite being a better CISC architecture than Intel's, 68k lost to the RISC upstarts. Motorola had more resources than MIPS and Sun, but not enough more, and their customers were nimble enough to take advantage of the performance advantages the RISC upstarts offered.

      Intel had a much larger customer base and those customers were much more dependent on binary compatibility. It took a little while. Neither the 386 nor the 486 was a match for their RISC competitors. But Intel was able to outspend their RISC competitors on R&D, holding their ground until chips became complex enough that process and ISA independent features dominated. If Intel's architecture were also RISC, they would still have won, even sooner if the upstarts were CISC. Actually, with Intel RISC and CISC upstarts, there would not even have been a battle. Without a short-term advantage to exploit, the upstarts would not have gotten off the ground.

      I can't see an Apple-only processor winning over Intel, either. At minimum, Intel's process advantage would have to be nullified, and I can't see that happening until scaling comes to a full stop.

      • But Intel was able to outspend their RISC competitors on R&D, holding their ground until chips became complex enough that process and ISA independent features dominated.

        Don't forget getting DEC Alpha at bargain bin discount prices.

        • Don't forget getting DEC Alpha at bargain bin discount prices.

          You mean, once it was shown that there was no more headroom in it and it wouldn't scale past about 400 MHz? What a bargain! Meanwhile, AMD also got the only interesting part of Alpha, the bus. That was almost as good a buy as when Intel bought an ARM core (XScale) and then ironically couldn't get it to "scale" down; it was the fastest ARM core, but it was also the most power-hungry by far.

      • by AmiMoJo ( 196126 )

        What really screwed RISC was that CISC processors stole all their great ideas anyway. x86 is basically an intermediate language at this stage, with modern x86 CPUs being RISC internally and translating x86 CISC instructions as part of the execution pipeline.

        For performance applications that works really well, because the CPU designer can optimize higher level instructions to each CPU's specific architecture in a way that RISC makes more difficult because RISC instructions are much more atomic.

        For example, I

        • It's not quite that clear cut. The big win for RISC was that the decoder was a lot smaller (and didn't rely on microcode). That gave them a lot more area to add ALUs for the same transistor budget. As chip sizes increased, the decoder area went from being a dominant part of the core to a fairly small one, and a denser CISC instruction encoding meant that the extra i-cache that RISC chips needed used more area overall. Add to that, CISC had more headroom for microarchitectural improvements. For example,

      • If Intel's architecture were also RISC, they would still have won, even sooner if the upstarts were CISC.

        Intel has been internally RISC since the Pentium (and AMD since the Am586). They didn't go to a RISC instruction set because there are actually numerous advantages to the x86 set once you work around its worst deficiency (that the x86 ISA has only one general purpose register since all the other ones are used for specific things) with register renaming.

        Actually, with Intel RISC and CISC upstarts, there would not even have been a battle.

        In short, you are wrong [wikipedia.org]. Intel tried to make a full-RISC chip and failed. Well, they didn't fail to make one, but they did fail to sell them. That's not thei

        • You are both wrong.

          There never was a battle.

          Like there never was a battle between gasoline and diesel engines or fuel.

          It is just two different approaches for designing CPU instruction sets and hence designing the CPU.

          Why people now try to call research and development "a battle between" is beyond me.

          • There never was a battle.

            I disagree. I think there really was a battle between CISC and RISC, with the last real competitors being the 486 and... I forget, honestly, exactly what the competition was. I want to say at that time it was SuperSPARC on Sun's side, HP actually had their own architecture still, and IBM was just inventing POWER for RS6ks. (Wikipedia... yep, and Ross HyperSPARC, too. We had a SS10 quad-HyperSPARC at SEI, IIRC. Or maybe we had a dual-HyperSPARC SS10 and a quad SS20. That was a while back.) The end result is

    • They moved to Intel because the Mac doesn't have enough sales volume to drive its own CPU R&D.

      Back then, though, a leading-edge CPU required a leading-edge chip fab, which is a huge (majority?) part of the cost. That's not the case these days.

    • I'm guessing phones will soon join them once 5G arrives (5G data will be fast enough there will be no compelling reason to upgrade your phone for 5-10 years).

      No, that's easy. RAM will get cheaper, too. So you just add more RAM, make iOS more memory hungry, update the API so that some new apps won't run on the old iOS, and bingo! Everyone upgrades whether they need a new phone or not. And this ain't a conspiracy theory, this is exactly what Apple has done so far, consistently. I say this because it is not what Google has done; several releases of Android have actually improved performance on older devices. The problem there is whether the vendor will bother to d

      • Erm ... new iOS versions happily run on old devices.
        My iPhone 4S is at minimum 5 years old, btw.

        People upgrade because they find the new phone more shiny. There is rarely a "software compatibility" reason.

        • Erm ... new iOS versions happily run on old devices.

          Everyone but you has complained about the performance impact of new iOS on old iDevices. I don't think that you're a genius and they're all idiots.

  • Look. A 4-year-old HP has awesome battery life with an i5. Cook does not have a big-picture view of the market or product demand, which is easy to see in all the product design decisions made after Jobs' death. Investing in ARM development is not a sound investment, but they can probably weather the loss.
  • by Anonymous Coward on Thursday February 02, 2017 @01:52AM (#53786433)

    Posting as AC for a damned good reason.

    Apple already has several ARM-powered laptops drifting around internally. I've seen several of them with my own eyes. There are at least five different prototypes, all constructed in plastic cases with varying degrees of complexity (some are literally just a clear acrylic box, others look more like 3D-printed or milled parts designed to look like a chunky MBA or iBook). There are a few that literally recycled the chassis and case from an MBA, just with a different logic board (which was coloured red for some reason), and others sporting a radically different design from anything Apple currently sells (not going anywhere near the details on those because of NDA).

    All of them boot encrypted and signed OS images, which are fully recoverable over the internet so long as you've got WiFi access (similar to how their Intel-powered systems do it). You cannot choose a version of the OS to load; you get whatever the latest, greatest one is and that's it. They've completely ported OS X to ARM (including all of Cocoa and Aqua); however, a ton of utilities that normally come with OS X are missing (there's no Disk Utility, Terminal, ColorSync, Grapher, X11, Audio/MIDI Setup, etc.). A lot of that functionality has been merged into a new app called "Settings" (presumably to match the iOS counterpart), which takes the place of System Preferences.

    Likewise, App Store distribution appeared to be mandatory. I didn't see any mention of Gatekeeper or any way to side load (unsigned) binaries, presumably because Gatekeeper is simply part of the system now. The systems I saw could all access an internal version of the MAS that was specifically designed for the ARM systems (and under heavy WIP, judging by the broken page formatting and placeholder elements). The filesystem seemed a bit... peculiar, to say the least. Everything was stored in the root of the disk drive- that is to say, the OS didn't support multiple users at all, and everything that you'd normally see in your home directory was presented as / instead. I don't think the physical filesystem was actually laid out like this, it's just that the Finder and everything else had been modified to make you believe that's the way the computer worked. There was no /Applications folder anymore, your only option for launching and deleting apps was through Launchpad. Drivers (now called "System Extensions") were handled 100% automatically by the OS. If you plugged anything into the computer that it didn't support, it would automatically launch the MAS and take you to a page where you could download and install the relevant stuff. Those things would show up in Settings.app where you could manage them by way of customized preference panels or uninstall them completely. The rest of it more or less looked like a modern day version of 10.12 without some of the historical features accumulated over the years (for example, Dashboard was nowhere to be found).

    From what I was told, there's a huge push to get this stuff out the door as soon as they think the market will accept it. That might be in a year, or two years, or three or four, but that's where Apple is inevitably heading. Custom hardware, custom software, total vendor and user lock-in. They want to own everything, everywhere, at all times, and ARM is going to let them do exactly that. They're not stupid, though, and they're not going to commit suicide by releasing this stuff tomorrow, but they will sometime in the future. I guess in that regard the summary is correct - they don't have any "near term" plans to abandon Intel, but they've sure as shit got some long-term ones, and I'm assuming Intel knows about it since a lot of the chips on the transparent prototypes had Intel markings on them.

    • I'm saying this as someone who's used and enjoyed Apple products for over 10 years and brought family, friends, and colleagues over from Apple's competitors simply by being enthusiastic about the products I enjoyed using:

      If at any time what you say comes to pass and these devices replace the pre-Cook era functional and usable devices that I've found so enjoyable to use, I will take my business elsewhere.
    • by AmiMoJo ( 196126 )

      So basically it's a Chromebook, only more locked down.

    • by MobyDisk ( 75490 )

      The parent's claim is totally consistent with Apple's recent move to stop supporting 32-bit applications. [slashdot.org] They probably don't want to bother emulating 32-bit code, and they can only guarantee that the cross-compiler can target 64-bit applications.

      When they moved from PowerPC to x86, they did so with emulation. That was possible because they were moving to a faster, more powerful processor. But in this case, they are actually moving to a slower, less powerful architecture. So emulation is probably not an opt

  • by speedplane ( 552872 ) on Thursday February 02, 2017 @02:50AM (#53786553) Homepage
    Device makers choosing to roll their own chips is a direct effect of the end of Moore's law. If Intel could keep up with their original promise of doubling transistor count (or performance, or power savings, or whatever metric) every 18 months, then Apple would not need to invest in their own chips. I fear that for Intel, the death of Moore's law means the death of independent chip makers, and to get the most performance, you'll have to go the custom ASIC route.
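
      For a sense of scale, the "doubling every 18 months" figure above is just the multiplier 2^(months/18); a tiny C sketch of that arithmetic:

      /* Projected multiplier under "doubling every 18 months": 2^(months/18). */
      #include <stdio.h>
      #include <math.h>

      int main(void) {
          int years[] = { 3, 6, 10 };
          for (int i = 0; i < 3; i++) {
              double months = 12.0 * years[i];
              printf("%2d years -> roughly %.0fx\n", years[i], pow(2.0, months / 18.0));
          }
          return 0;
      }

      Three years of that cadence works out to a 4x improvement and six years to 16x; that is the kind of headroom the comment above is saying Intel can no longer deliver on its own.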
    • by AmiMoJo ( 196126 )

      Now is a golden time for independent chip makers. The tools to design chips are more accessible and cheaper than ever. You can prototype on an FPGA and have the exact same code turned into a higher performance ASIC. Well, it's a bit more complicated, but not much.

      High end fabrication used to only be available to big companies with their own facilities too.

      And performance-wise, there is as much focus on custom chips now as there ever was, because as CPU performance increases slow, that's the only way to get m

  • When you own Intellectual Property that others depend upon, you're enjoying a sunny day. When you depend upon someone else's IP, you worry. Those with an abundance of Intellectual Property can bargain with their peers and exclude certain potential competitors.

    Our world is interdependent in amazingly complex ways. You and I are at the mercy of the producers of the software and operating systems we use, and the evolving hardware platforms. Even mighty Apple is at the mercy of Intel and many other unique suppl

  • Is there a story actually confirming this (whatever it is) from an outlet that we can be a little more certain has reporters who know the difference between a chipset and a CPU?

  • Much like a GPU, the Intel CPU will be a co-processor that will run resource-intensive tasks.
    Macs will basically be iPads, but with an Intel CPU to help with complex calculations.

  • For years, there was a shift towards avoiding expensive coprocessors and the like by having more and more work done by the CPU. The massive growth in single-core speeds in e.g. Intel chips made this sensible. Now that single-core speeds are not getting faster, we are having to go multi-core, and power consumption is becoming more of an issue, a rethink is becoming more pertinent. Way back when, mainframes would have things like I/O done by independent hardware subsystems, to avoid using expensive time on the main CPUs, and now it seems this is being rediscovered.

    Firstly, especially in something like MacOS, there has been progress towards offloading more and more of Quartz to the GPU. Many GUI things could quite happily be handled by a low-power ARM chip on the GPU itself. Already with programmable shaders, and now Vulkan, we are getting to the place where, for graphics, things are accomplished by sending programs, requests, and data buffers over a high-speed interconnect (usually the PCIe bus). To some degree, network-transparent graphics are being reinvented, though here the 'network' is the PCIe bus rather than 10baseT. Having something like an ARM core, with a few specialised bits, for most drawing operations, and having much of the windowing and drawing live largely at the GPU end of the bus, is one step towards a more efficient architecture: for most of what your PC does, using an Intel Core for it is overkill and wasteful of power. Getting to a point where the main CPUs can be switched off when idling will save a lot of power. In addition, one can look to mainframe architecture of old for inspiration.

    Another part of that inspiration is to do something similar with I/O. Moving mounting/unmounting and filesystems off to another subsystem run by a small ARM (or similar) core makes a lot of sense. To the main CPU it has the appearance of a programmable DMA system, to which you merely need to send requests (a rough sketch of what such a request might look like follows this comment). The small I/O core doing this could be little different from the kind of few-dollar SoC we find in cheap smartphones. Moreover, it does not need the capacity to run arbitrary software (nor should it have it: since its job is more limited, it is more straightforward to lock it down).

    This puts you at a point where, especially if you do the 'big-core/little-core' thing with the GPU architecture itself, the system can start up to the point where there is a useable GUI and command line interface before the 'main processors' have even booted up. Essentially you have something a bit like a Chromebook with the traditional 'Central Processing Unit' becoming a coprocessor for handling user tasks.

    I'd also go so far as to suggest moving what are traditionally the kernel's duties 'out-of-band': namely, on a multi-core CPU, have a small RISC core handle kernel duties and, as far as hyperthreading is concerned, have this 'out-of-band kernel' able to save/load state from the inactive thread on a hyperthreading core. (Essentially, if you have a 2-thread core, the chip then has a state cache for these threads, where it can move them, and from there save/load thread state to main memory; importantly, much of the CPU overhead for a context switch is removed.)
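
    As a rough illustration of the "programmable DMA system" idea in the I/O paragraph above, here is a hypothetical C sketch of the kind of request descriptor a main CPU might hand to a small I/O offload core. Nothing here corresponds to any real Apple (or other) interface; the struct layout, field names, and opcodes are invented purely to make the idea concrete:

    /* Hypothetical descriptors for a small I/O offload core: the main CPU
     * fills in a request, places it on a shared ring, and rings a doorbell;
     * the offload core does the filesystem/block work and posts a completion.
     * All names, fields, and opcodes are invented for illustration only. */
    #include <stdio.h>
    #include <stdint.h>

    enum io_opcode {
        IO_OP_READ  = 1,  /* read 'length' bytes at 'offset' into 'buffer_phys' */
        IO_OP_WRITE = 2,  /* write 'length' bytes at 'offset' from 'buffer_phys' */
        IO_OP_MOUNT = 3,  /* mount the volume identified by 'volume_id' */
        IO_OP_SYNC  = 4,  /* flush dirty data for 'volume_id' to stable storage */
    };

    struct io_request {
        uint32_t opcode;       /* one of enum io_opcode */
        uint32_t volume_id;    /* which volume/filesystem the request targets */
        uint64_t offset;       /* byte offset within the file or volume */
        uint64_t length;       /* transfer size in bytes */
        uint64_t buffer_phys;  /* physical address of the DMA buffer */
        uint64_t cookie;       /* opaque tag echoed back in the completion */
    };

    struct io_completion {
        uint64_t cookie;       /* matches the request's cookie */
        int32_t  status;       /* 0 on success, negative error code otherwise */
        uint32_t bytes_done;   /* how much of 'length' actually transferred */
    };

    int main(void) {
        /* Example: the main CPU prepares a 4 KiB read for the offload core. */
        struct io_request req = {
            .opcode      = IO_OP_READ,
            .volume_id   = 1,
            .offset      = 0,
            .length      = 4096,
            .buffer_phys = 0x80000000ull,  /* placeholder physical address */
            .cookie      = 42,
        };
        printf("request: op=%u len=%llu cookie=%llu\n", req.opcode,
               (unsigned long long)req.length, (unsigned long long)req.cookie);
        return 0;
    }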
