Apple Developing Custom ARM-Based Mac Chip That Would Lessen Intel Role (bloomberg.com)
According to Bloomberg, Apple is designing a new chip for future Mac laptops that would take on more of the functionality currently handled by Intel processors. The chip is a variant of the T1 SoC Apple used in the latest MacBook Pro to power the keyboard's Touch Bar feature. The updated part, internally codenamed T310, is built using ARM technology and would reportedly handle some of the computer's low-power mode functionality. From the report: The development of a more advanced Apple-designed chipset for use within Mac laptops is another step in the company's long-term exploration of becoming independent of Intel for its Mac processors. Apple has used its own A-Series processors inside iPhones and iPads since 2010, and its chip business has become one of the Cupertino, California-based company's most critical long-term investments. Apple engineers are planning to offload the Mac's low-power mode, a feature marketed as "Power Nap," to the next-generation ARM-based chip. This function allows Mac laptops to retrieve e-mails, install software updates, and synchronize calendar appointments with the display shut and not in use. The feature currently uses little battery life while running on the Intel chip, but the move to ARM would conserve even more power, according to one of the people. The current ARM-based chip for Macs is independent from the computer's other components, focusing on the Touch Bar's functionality itself. The new version in development would go further by connecting to other parts of a Mac's system, including storage and wireless components, in order to take on the additional responsibilities. Given that a low-power mode already exists, Apple may choose not to highlight the advancement, much like it has not marketed the significance of its current Mac chip, one of the people said. Building its own chips allows Apple to more tightly integrate its hardware and software functions. It also, crucially, gives it more of a say in the cost of components for its devices. However, Apple has no near-term plans to completely abandon Intel chips for use in its laptops and desktops, the people said.
Walk before you run (Score:4, Interesting)
ARM has only been doing 64-bit out-of-order execution and branch prediction for two generations, the first of which (A57) seemingly had worse IPC than Intel's Netburst architecture. They may catch up one day but for now they are no closer to besting Intel than Transmeta was back in the days of Crusoe. Let's just hope their revenue stream lasts long enough for that to happen.
Re:Walk before you run (Score:5, Interesting)
ARM has only been doing 64-bit out-of-order execution and branch prediction for two generations
In a single-core benchmark, Apple's A9X @ 2.25 GHz already defeats Intel's 1.3 GHz Core M7 CPU.
The idea is not to compete with a desktop Xeon but instead to nibble at Intel's feet at the bottom end. Check out this 2016 benchmark between the 12" MacBook (Intel @ 1.3 GHz) and the 12.9" iPad Pro: http://barefeats.com/macbook20... [barefeats.com]
GeekBench 3 single-core, higher is better:
MacBook Intel @ 1.3 GHz: 3194
iPad Pro: 3249
GeekBench 3 multi-core, higher is better:
MacBook Intel @ 1.3 GHz: 6784
iPad Pro: 5482
GFXBench Metal, more FPS is better:
MacBook Intel @ 1.3 GHz: 26.1 FPS
iPad Pro: 55.3 FPS
JetStream javascript benchmark, higher is better:
MacBook Intel @ 1.3 GHz: 175.68
iPad Pro: 143.41
Re: (Score:2)
In a single-core benchmark, Apple's A9X @ 2.25 GHz already defeats Intel's 1.3 GHz Core M7 CPU.
Wow, that's closer than I thought. For Intel, that must be much too close for comfort. But there's a benchmark that's giving Intel much bigger headaches: Arm Holdings only charges a couple of percent per chip, so high-end ARM SoCs come in at just a sliver over fab cost, well under what Intel can sell their parts for and continue to live in the manner to which they have become accustomed.
Re:Walk before you run (Score:5, Interesting)
Except the A9X doesn't have an ARM core, which is what the parent was talking about. It's a chip that implements the ARM instruction set. Big difference.
IP cores from ARM Holdings Inc, today, do not compete with Intel. Nor do any of the other ARM cores around (e.g. Qualcomm's, Nvidia's). But it seems Apple right now has better engineers than all of those and is actually managing to design ARM-compatible cores that are starting to be comparable to Intel chips.
Re: Walk before you run (Score:3, Funny)
Sooo... what you're saying is that it's an Apple core? ;-)
Geekbench *sigh* (Score:2, Interesting)
While I honestly would like to see more of these comparisons (and the A9X IS a beast, esp. re. IPC and Perf/W) - could everyone please stop using Geekbench scores for cross-arch comparisons, especially version 3 or older.
The codepaths and compilation flags are wildly arbitrary, and the author has shown time and again his lack of understanding of cross-platform benchmark caveats and pitfalls. GB3 especially has been shown to be useless in that regard, among others by Linus Torvalds himself, no less. (just look up his f
Re: (Score:3)
A CPU that runs at 2x the rate beats another CPU, but not in all cases? Do go on.
:) Assuming you're not trolling, here's an explanation.
That Intel is a 1.3 GHz dual-core Intel Core m5-6Y54 Skylake processor. It has Turbo Boost up to 2.7 GHz.
Re: (Score:2)
A CPU that runs at 2x the rate beats another CPU, but not in all cases? Do go on.
:) Assuming you're not trolling, here's an explanation.
That Intel is a 1.3 GHz dual-core Intel Core m5-6Y54 Skylake processor. It has Turbo Boost up to 2.7 GHz.
And it is an m-processor that has been intentionally crippled to be slow and use little power.
Re:Walk before you run (Score:5, Interesting)
A CPU that runs at 2x the rate beats another CPU, but not in all cases? Do go on.
:) Assuming you're not trolling, here's an explanation.
That Intel is a 1.3 GHz dual-core Intel Core m5-6Y54 Skylake processor. It has Turbo Boost up to 2.7 GHz.
And it is an m-processor that has been intentionally crippled to be slow and use little power.
And the A9X is a processor that has been intentionally designed to be fast and use even less power. So we can finally see that this is an obviously biased comparison - else the Intel chip wouldn't lose.
Re: (Score:2)
Re: (Score:2)
A CPU that runs at 2x the rate but only uses one-third the power beats another CPU, but not in all cases? Do go on.
FTFY
Re:Walk before you run (Score:4, Informative)
ARM is owned by SoftBank; they are not a standalone company any longer. SoftBank can afford to take the long view. I was sorry to see them bought out.
Re: (Score:3)
ARM has only been doing 64-bit out-of-order execution and branch prediction for two generations
That's a big combination of features. ARM has been doing branch prediction for a couple of decades. The Cortex A9 was their first out-of-order design. The A8 was two-way superscalar, but in-order. These were introduced in 2005 and (I think) 2010. 64-bit is newer, but in microarchitectural terms not nearly as big a jump as the others - data paths are wider, but that's basically it (and a load of the difficult things, like store multiple and the PC as a target for arbitrary instructions, went away with AArch64
This could get interesting (Score:3)
Apple doesn't care about backward compatibility. If they can deliver a next-gen chip with zero support for existing apps, they may have the money to pull it off.
If Intel could write off the x86 instruction set, I'm guessing its benchmarks would at least double.
Re: (Score:3)
The translation layer is actually quite tiny, with the more arcane instructions being handled by a ROM.
Re: (Score:3)
The translation layer is actually quite tiny
The translation layer is not actually tiny, it's just that the rest of the chip is gigantic.
Re: (Score:3)
Given the current manufacturing processes etc., it's most likely a lot smaller than a 1980-ish 6502, for example.
Re: (Score:2)
The translation layer is not actually tiny, it's just that the rest of the chip is gigantic.
Big steak makes the potatoes look smaller. Or something.
Perhaps I should have just gone for "to-MAY-toes, to-MAH-toes"
Re: (Score:2)
Apple doesn't care about backward compatibility. If they can deliver a next-gen chip with zero support for existing apps, they may have the money to pull it off.
It sounds like Apple does care about backwards compatibility, which is why the ARM is only designed to work as a co-processor to offload a few specific tasks from the Intel CPU, rather than as a general-purpose CPU for developers to target.
That said, Apple has changed CPU architectures before without (significantly) breaking compatibility; you may recall Rosetta, which allowed x86-based Macs to run PowerPC MacOS/X executables for a number of years (until Apple made it an optional install, and then later dropped it entirely).
It's Windows compatibility, not backwards compat (Score:2)
Emulation works well today since they don't have to emulate an instruction set architecture. Recompilation of the binary from one ISA to another could help but may still feel sluggish; it's not quite the same as starting from the source code. And of course there is Boot Camp whic
Re: (Score:2)
Apple has changed CPU architectures before without (significantly) breaking compatibility; you may recall Rosetta, which allowed x86-based Macs to run PowerPC MacOS/X executables for a number of years (until Apple made it an optional install, and then later dropped it entirely).
Yes, they dropped it entirely long before the machines in question were obsolete. Shit, I got the last dome iMac (for five bucks, on a whim) and it's snappy enough to do pretty much everything on if you're not in a big rush. But you can't get Chrome for it, and you can't run x86 binaries, so it's landfill. (If it didn't have scratches on the display, I'd try to figure out how to use the display, and wedge some tiny PC or a R-Pi into the case. Alas.) This is precisely why Windows/Intel is a smarter move than
Re: (Score:2)
My understanding is a significant percentage of Intel dies are supporting ancient x86 instructions.
Nope. You understand wrong. The ancient x86 instructions are a tiny, insignificant slice of the die.
The CPU cores are RISC and have been forever now. The x86 instruction set is all converted to RISC in the decoder. The decoder itself is a pretty tiny part of the core, and the 'ancient obsolete instructions' amount to a dozen or so bytes of "RISC lookup" in a table in the decoder on each core.
Cache, GPU, and the memory controller are what dominate the die of a modern i5 or i7.
It's like those old HSP modems that
Re: (Score:2)
If you want to worry about legacy stupidity bloating Intel chips, look at their cache model, not their instruction set. Their legacy "everything is coherent everywhere" requirement means they need snooping/invalidation logic around every single little cache block (e.g. the branch predictor). ISAs where, for example, you are not allowed to execute dynamic code without first flushing it from D cache and invalidating that range from I cache don't have this problem.
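To make that concrete: on a weakly-ordered ISA like ARM, software that generates code at runtime has to do the cache maintenance itself. A minimal sketch in C, using the real GCC/Clang builtin for the flush (the mmap-based buffer setup is just illustrative):

```c
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

/* Illustrative JIT stub: copy freshly generated machine code into an
 * executable buffer, then synchronize caches before jumping to it.
 * On x86 the flush is effectively free: the coherent I/D caches (and
 * the snooping logic described above) make new code visible
 * automatically. On ARM, skipping it risks executing stale bytes. */
typedef int (*jit_fn)(void);

int run_jitted(const uint8_t *code, size_t len)
{
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return -1;
    memcpy(buf, code, len);
    /* GCC/Clang builtin: flush D-cache and invalidate I-cache over
     * [buf, buf + len). Required on ARM before executing the buffer. */
    __builtin___clear_cache((char *)buf, (char *)buf + len);
    int result = ((jit_fn)buf)();
    munmap(buf, len);
    return result;
}
```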
Re: (Score:2)
Re: (Score:2)
Apple always included emulators for the older binaries, and new software usually was delivered as a "fat binary" that included the code for all supported CPUs. In other words, there is no "next gen chip with zero support of existing apps".
I would not wonder if future CPUs have cores with different instruction sets anyway, or if we go back to multiple CPUs where one of them is an ARM or whatever exotic CPU might be interesting.
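For reference, a minimal sketch of how a single source file serves every slice of such a fat binary: each architecture is compiled separately, and standard compiler predefines let the code adapt (the printed strings are just illustrative).

```c
#include <stdio.h>

/* Each slice of a fat (multi-architecture) binary is compiled
 * separately; predefined macros let one source file adapt per
 * architecture at build time. */
int main(void)
{
#if defined(__x86_64__)
    printf("built as the x86-64 slice\n");
#elif defined(__aarch64__) || defined(__arm64__)
    printf("built as the arm64 slice\n");
#else
    printf("built for some other architecture\n");
#endif
    return 0;
}
```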
Ultimate hardware dongle for killing Hackintosh (Score:3)
Sounds like a great way to lock OS X, or macOS or whatever they call it these days, solidly back to Apple hardware and preclude any possibility of running on stock x86 hardware. Though there's less and less reason to run a Hackintosh all the time (it was always a maintenance nightmare). Though virtualization might be a way of getting around that. I've often thought Apple should sell a complete OS X (excuse me, macOS) VM for Windows users, as it would provide an easy way to woo users to the platform. However, the VM on your average Windows machine would probably outperform the Mac Pro, given Apple's commitment to high-end users these days.
Re:Ultimate hardware dongle for killing Hackintosh (Score:5, Funny)
Re: (Score:2)
Dunno how serious you are, but I think it stopped being "Mac OS Ten" after Mac OS X 10.7 Lion. I think the one after that was OS X 10.8 Mountain Lion.
Re: (Score:2)
Re: (Score:2)
No, Apple has gone back to calling the latest release macOS Sierra. OS X "version name" is no more.
We recently added documentation macros that warned if we used the term OS X, and advised using macOS instead. Once we did that, we triggered several-year-old macros warning us not to use MacOS but to use OS X instead ;D
Consider why they moved to Intel in the first place (Score:5, Informative)
They're gambling that ARM CPUs (SoCs) will become powerful enough to accomplish the tasks people ask of Macs, while revenue from phone, tablet, and other small device sales (e.g. Apple TV) will be enough to sustain R&D to keep it progressing as rapidly as Intel CPUs. That could happen, but I'm not convinced it will. The tablet market is already floundering after reaching saturation. I'm guessing phones will soon join them once 5G arrives (5G data will be fast enough there will be no compelling reason to upgrade your phone for 5-10 years). In a saturated marketplace, the Mac commands so little of the PC market it wasn't able to keep Motorola competitive nor sway IBM. And this battle - CISC (Intel) vs RISC (Alpha, MIPS, Sparc, Power, ARM) - has been fought before. Every time, CISC has come out the winner.
Intel (and Microsoft) is successful because they managed to find a market with consistently large annual sales (and profit margins) even after reaching saturation. So far Apple has been riding a growing mobile market to success - basically coasting downhill. It remains to be seen whether they can continue that momentum once the hill levels out, people stop upgrading every 2 years, and they're forced to really, truly innovate to create demand to sustain their sales.
Re:Consider why they moved to Intel in th first pl (Score:4, Insightful)
They're gambling that ARM CPUs (SoCs) will become powerful enough to accomplish the tasks people ask of Macs, while revenue from phone, tablet, and other small device sales (e.g. Apple TV) will be enough to sustain R&D to keep it progressing as rapidly as Intel CPUs.
It won't happen, and mainly for the exact reasons you stated. Phones and tablets have already taken over the "I don't do much other than browse the internet/watch youtube/update facebook/snapchat/twitter/email" jobs that low performance CPUs can handle. The only reason someone has a need to purchase a real computer now is because they have a real need for processing power (gaming, photo/video editing, developing software, running simulations). Everything else is already being done by the lightweight CPUs.
Re: (Score:2)
Re:Consider why they moved to Intel in th first pl (Score:5, Insightful)
And this battle - CISC (Intel) vs RISC (Alpha, MIPS, Sparc, Power, ARM) - has been fought before. Every time, CISC has come out the winner.
It wasn't really a battle of RISC vs CISC. It was a battle between incumbents and upstarts.
In the workstation arena, the CISC incumbent was Motorola with their 68k series. Despite being a better CISC architecture than Intel's, the 68k lost to the RISC upstarts. Motorola had more resources than MIPS and Sun, but not enough more, and their customers were nimble enough to take advantage of the performance advantages the RISC upstarts offered.
Intel had a much larger customer base, and those customers were much more dependent on binary compatibility. It took a little while. Neither the 386 nor the 486 were a match for their RISC competitors. But Intel was able to outspend their RISC competitors on R&D, holding their ground until chips became complex enough that process- and ISA-independent features dominated. If Intel's architecture were also RISC, they would still have won, even sooner if the upstarts were CISC. Actually, with a RISC Intel and CISC upstarts, there would not even have been a battle. Without a short-term advantage to exploit, the upstarts would not have gotten off the ground.
I can't see an Apple-only processor winning out over Intel, either. At minimum, Intel's process advantage would have to be nullified, and I can't see that happening until scaling comes to a full stop.
Re: (Score:2)
But Intel was able to outspend their RISC competitors on R&D, holding their ground until chips became complex enough that process- and ISA-independent features dominated.
Don't forget getting DEC Alpha at bargain bin discount prices.
Re: (Score:2)
Don't forget getting DEC Alpha at bargain bin discount prices.
You mean, once it was shown that there was no more headroom in it and it wouldn't scale past about 400 MHz? What a bargain! Meanwhile, AMD also got the only interesting part of Alpha, the bus. That was almost as good a buy as when Intel bought an ARM core (XScale) and then ironically couldn't get it to "scale" down; it was the fastest ARM core, but it was also the most power-hungry by far.
Re: (Score:2)
What really screwed RISC was that CISC processors stole all their great ideas anyway. x86 is basically an intermediate language at this stage, with modern x86 CPUs being RISC internally and translating x86 CISC instructions as part of the execution pipeline.
For performance applications that works really well, because the CPU designer can optimize higher level instructions to each CPU's specific architecture in a way that RISC makes more difficult because RISC instructions are much more atomic.
For example, I
Re: (Score:2)
It's not quite that clear cut. The big win for RISC was that the decoder was a lot smaller (and didn't rely on microcode). That gave them a lot more area to add ALUs for the same transistor budget. As chip sizes increased, the decoder area went from being a dominant part of the core to a fairly small one, and a denser CISC instruction encoding meant that the extra i-cache that RISC chips needed used more area overall. Add to that, CISC had more headroom for microarchitectural improvements. For example,
Re: (Score:2)
If Intel's architecture were also RISC, they would still have won, even sooner if the upstarts were CISC.
Intel has been internally RISC since the Pentium (and AMD since the Am586). They didn't go to a RISC instruction set because there are actually numerous advantages to the x86 set once you work around its worst deficiency (that the x86 ISA has only one general purpose register since all the other ones are used for specific things) with register renaming.
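A toy sketch of what renaming buys (purely illustrative, nothing like real hardware): every architectural write is handed a fresh physical register, so instructions that reuse the same architectural name stop creating false dependencies.

```c
#include <stdint.h>

#define ARCH_REGS 16   /* register names the ISA exposes */
#define PHYS_REGS 128  /* registers the core actually has */

/* Toy rename table: maps each architectural register to the physical
 * register holding its newest value. */
typedef struct {
    uint8_t map[ARCH_REGS];
    uint8_t next_free;  /* simplistic allocator; real hardware recycles
                           physical registers as instructions retire */
} rename_table;

/* Writing an architectural register allocates a fresh physical one,
 * eliminating write-after-write hazards on a reused name. */
uint8_t rename_dest(rename_table *rt, uint8_t arch_reg)
{
    uint8_t phys = rt->next_free;
    rt->next_free = (uint8_t)((rt->next_free + 1) % PHYS_REGS);
    rt->map[arch_reg] = phys;
    return phys;
}

/* Reading a source operand just consults the current mapping. */
uint8_t rename_src(const rename_table *rt, uint8_t arch_reg)
{
    return rt->map[arch_reg];
}
```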
Actually, with a RISC Intel and CISC upstarts, there would not even have been a battle.
In short, you are wrong [wikipedia.org]. Intel tried to make a full-RISC chip and failed. Well, they didn't fail to make one, but they did fail to sell them. That's not thei
Re: (Score:2)
You are both wrong.
There never was a battle.
Like there never was a battle between gasoline and diesel engines or fuel.
It is just two different approaches for designing CPU instructions sets and hence designing the CPU.
Why people now try to call research and development "a battle between" is beyond me.
Re: (Score:2)
There never was a battle.
I disagree. I think there really was a battle between CISC and RISC, with the last real competitors being the 486 and... I forget, honestly, exactly what the competition was. I want to say at that time it was SuperSPARC on Sun's side, HP actually had their own architecture still, and IBM was just inventing POWER for RS6ks. (Wikipedia... yep, and Ross HyperSPARC, too. We had a SS10 quad-HyperSPARC at SEI, IIRC. Or maybe we had a dual-HyperSPARC SS10 and a quad SS20. That was a while back.) The end result is
Re: (Score:2)
Apple only went from PowerPC to Intel because IBM told them to naff off when they wanted more control.
That is bollocks.
IBM could not and did not want to provide the mobile PowerPCs in the quantities Apple demanded, and did not really put R&D into mobile PowerPCs.
IBM didn't need them, as they were producing chips for the Wii, Xbox and PS at the time. Apple was a tiny fish pretending it was a shark. No skin off IBM's nose. These are all desktop/workstation PowerPCs ...
Now Apple are moving hardware vendors
Re: (Score:2)
[asshole alert: I am making fun of your simple, understandable brainfart.]
Yeah, last I heard, Intel still hasn't produced their first one. Somewhere along the way, they got all distracted by their existing and future x86 products.
It seems like this incompetence and lack of commitment has infected all sorts of industries. Ford still can't deliver enough Accords an
Re: (Score:2)
They moved to Intel because the Mac doesn't have enough sales volume to drive its own CPU R&D.
Back then, though, a leading-edge CPU required a leading-edge chip fab, which is a huge (majority?) part of the cost. That's not the case these days.
Re: (Score:2)
I'm guessing phones will soon join them once 5G arrives (5G data will be fast enough there will be no compelling reason to upgrade your phone for 5-10 years).
No, that's easy. RAM will get cheaper, too. So you just add more RAM, make iOS more memory hungry, update the API so that some new apps won't run on the old iOS, and bingo! Everyone upgrades whether they need a new phone or not. And this ain't a conspiracy theory; this is exactly what Apple has done so far, consistently. I say this because it is not what Google has done; several releases of Android have actually improved performance on older devices. The problem there is whether the vendor will bother to d
Re: (Score:2)
Erm ... new iOS versions happily run on old devices.
My iPhone 4S is at minimum 5 years old, btw.
People upgrade because they find the new phone more shiny. There is rarely a "software compatibility" reason.
Re: (Score:2)
Erm ... new iOS versions happily run on old devices.
Everyone but you has complained about the performance impact of new iOS on old iDevices. I don't think that you're a genius and they're all idiots.
Re: (Score:2)
"Their meat and potatoes was in the server market"
I'm actually curious to what degree IBM's PowerPC engineering focus is/was on the server market, even at the time. Clearly the custom embedded stuff accounts for a lot more shipped units these days. With that said, I really have no idea who is using IBM PowerPC workstations/servers, or for what, so it's hard to guess what portion of the dollars are involved. IBM always seems to have a bunch of capacity-on-demand type offerings available, and doing almost all of the design in-house is a way to make those cost-effective to provide.
I think it is more fair to say that IBM's meat and potatoes was not the laptop market. Apple was getting killed in the laptop market. They needed lower-power processors, but no one else was making PowerPC laptops, and IBM was not inclined to make a special low-power processor just for Apple. I think even embedded PowerPCs were generally hooked up to mains power, not batteries. (Bear in mind that this was before power and heat became a significant problem for desktop PCs and servers.)
Feh. (Score:2)
Last sentence is (almost) BS. (Score:5, Interesting)
Posting as AC for a damned good reason.
Apple already has several ARM powered laptops drifting around internally. I've seen several of them with my own eyes. There's at least five different prototypes, all constructed in plastic cases with varying degrees of complexity (some are literally just a clear acrylic box, others look more like 3D printed or milled parts designed to look like a chunky MBA or iBook). There's a few that literally recycled the chassis and case from an MBA, just with a different logic board (which was coloured red for some reason), and others sporting a radically different design than anything Apple currently sells (not going anywhere near the details on those because of NDA).
All of them boot encrypted and signed OS images, which are fully recoverable over the internet so long as you've got WiFi access (similar to how their Intel powered systems do it). You cannot choose a version of the OS to load; you get whatever the latest, greatest one is and that's it. They've completely ported OS X to ARM (including all of Cocoa and Aqua), however a ton of utilities that normally come with OS X are missing (there's no Disk Utility, Terminal, ColorSync, Grapher, X11, Audio/MIDI setup, etc). A lot of that functionality has been merged into a new app called "Settings" (presumably to match the iOS counterpart), which takes the place of System Preferences.
Likewise, App Store distribution appeared to be mandatory. I didn't see any mention of Gatekeeper or any way to side load (unsigned) binaries, presumably because Gatekeeper is simply part of the system now. The systems I saw could all access an internal version of the MAS that was specifically designed for the ARM systems (and under heavy WIP, judging by the broken page formatting and placeholder elements). The filesystem seemed a bit... peculiar, to say the least. Everything was stored in the root of the disk drive- that is to say, the OS didn't support multiple users at all, and everything that you'd normally see in your home directory was presented as / instead. I don't think the physical filesystem was actually laid out like this, it's just that the Finder and everything else had been modified to make you believe that's the way the computer worked. There was no /Applications folder anymore, your only option for launching and deleting apps was through Launchpad. Drivers (now called "System Extensions") were handled 100% automatically by the OS. If you plugged anything into the computer that it didn't support, it would automatically launch the MAS and take you to a page where you could download and install the relevant stuff. Those things would show up in Settings.app where you could manage them by way of customized preference panels or uninstall them completely. The rest of it more or less looked like a modern day version of 10.12 without some of the historical features accumulated over the years (for example, Dashboard was nowhere to be found).
From what I was told, there's a huge push to get this stuff out the door as soon as they think the market will accept it. That might be in a year, or two years, or three or four, but that's where Apple is inevitably heading. Custom hardware, custom software, total vendor and user lock-in. They want to own everything, everywhere, at all times, and ARM is going to let them do exactly that. They're not stupid though and they're not going to commit suicide by releasing this stuff tomorrow, but they will sometime in the future. I guess in that regard the summary is correct - they don't have any "near term" plans to abandon Intel, but they've sure as shit got some long term ones, and I'm assuming Intel knows about it since a lot of the chips on the transparent prototypes had Intel markings on them.
Re: (Score:2)
If at any time what you say comes to pass and these devices replace the pre-Cook era functional and usable devices that I've found so enjoyable to use, I will take my business elsewhere.
Re: (Score:2)
So basically it's a Chromebook, only more locked down.
Re: (Score:2)
The parent's claim is totally consistent with Apple's recent move to stop supporting 32-bit applications. [slashdot.org] They probably don't want to bother emulating 32-bit code, and they can only guarantee the cross-compiler can target 64-bit applications.
When they moved from PowerPC to x86, they did so with emulation. That was possible because they were moving to a faster, more powerful processor. But in this case, they are actually moving to a slower, less powerful architecture. So emulation is probably not an option.
Re:Last sentence is (almost) BS. (Score:4, Informative)
For me this would mean I would drop Apple entirely. I dropped their iPhone line already after I had to deal with shoddy quality in my old iPhone 5. I dropped the Mac Minis because of disinterest from Apple in providing decent machines (the NUCs are better nowadays).
I dropped the AirPort line after Apple kicked it off their products list. I am a happy Fritzbox customer now; way better in every regard.
The iPad 3 was replaced by a Sony tablet after Apple patched the performance out of the thing with their third annual software update.
The last remaining piece of Apple hardware I still use is the MacBook Pro, but Apple makes it harder every year to stay with those as a customer.
Re: (Score:2)
I like Apple products, but this would turn my desktop into essentially a locked-down iOS box. No development, no UNIX tools, no "vagrant up"; what I would have is something less functional than a Chromebook for 20-50 times the cost.
Lenovo and Dell are running rings around Apple. I can buy a 13" laptop from Dell that is a better 13" MBP than the MBP. It has USB-C... but it also has USB, an SD card slot, and everything else one would need or use on a daily basis. To boot, it is a fraction of the price. If I were gag
Death of Moore's Law (Score:3)
Re: (Score:2)
Now is a golden time for independent chip makers. The tools to design chips are more accessible and cheaper than ever. You can prototype on an FPGA and have the exact same code turned into a higher-performance ASIC. Well, it's a bit more complicated than that, but not much.
High end fabrication used to only be available to big companies with their own facilities too.
And performance-wise, there is as much focus on custom chips now as there ever was, because as CPU performance increases slow, that's the only way to get m
IP is the name of the game (Score:2)
When you own Intellectual Property that others depend upon, you're enjoying a sunny day. When you depend upon someone else's IP, you worry. Those with an abundance of Intellectual Property can bargain with their peers and exclude certain potential competitors.
Our world is interdependent in amazingly complex ways. You and I are at the mercy of the producers of the software and operating systems we use, and the evolving hardware platforms. Even mighty Apple is at the mercy of Intel and many other unique suppl
Confirmation elsewhere? (Score:2)
Is there a story actually confirming this (whatever it is) from an outlet that we can be a little more certain has reporters who know the difference between a chipset and a CPU?
Intel CPU = copro (Score:2)
Much like a GPU, the Intel CPU will be a co-processor that runs resource-intensive tasks.
Macs will basically be iPads, but with an Intel CPU to help with complex calculations.
From the 'why not earlier' department... (Score:3)
For years, there was a shift towards avoiding expensive coprocessors and related by having more and more work done by the CPU. The massive growth in single core speeds in e.g. Intel chips made this sensible. Now that single core speeds are not getting faster, and we are having to go multi-core, and now that power consumption is becoming more of an issue, rethinking is becoming more pertinent. Way back when, mainframes would have things like I/O done by independent hardware subsystems, to avoid using expensive time on the main CPUs, and now it seems this is being rediscovered.
Firstly, especially in something like MacOS, there has been progress towards offloading more and more of Quartz to the GPU. Many GUI things could quite happily be handled by a low-power ARM chip on the GPU itself. Already with programmable shaders, and now Vulkan, we are getting to the place where, for graphics, things are accomplished by sending programs, requests and data buffers over a high-speed interconnect (usually the PCIe bus). To some degree, network-transparent graphics are being reinvented, though here the 'network' is the PCIe bus, rather than 10baseT. Having something like an ARM core, with a few specialised bits, for most drawing operations, and having much of the windowing and drawing exist largely at the GPU end of the bus, is one step towards a more efficient architecture: for most of what your PC does, using an Intel core for it is overkill and wasteful of power. Getting to a point where the main CPUs can be switched off when idling will save a lot of power. In addition, one can look to mainframe architecture of old for inspiration.
Another part of that inspiration is to do something similar with I/O. Moving mounting/unmounting and filesystems off to another subsystem run by a small ARM (or similar) core makes a lot of sense. To the main CPU you have the appearance of a programmable DMA system, to which you merely need to send requests. The small I/O core doing this could be little different from the kind of few-dollar SoC we find in cheap smartphones. Moreover, it does not need the capacity for running arbitrary software (nor should it have it: since its job is more limited, it is more straightforward to lock down).
This puts you at a point where, especially if you do the 'big-core/little-core' thing with the GPU architecture itself, the system can start up to the point where there is a usable GUI and command-line interface before the 'main processors' have even booted up. Essentially you have something a bit like a Chromebook, with the traditional 'Central Processing Unit' becoming a coprocessor for handling user tasks.
I'd also go so far as to suggest moving what are traditionally the kernel's duties 'out-of-band': namely, on a multi-core CPU, have a small RISC core handle kernel duties and, so far as hyperthreading is concerned, have this 'out-of-band kernel' able to save/load state from the inactive thread on a hyperthreading core. (Essentially, if you have a 2-thread core, the chip then has a state cache for these threads, where it can move them, and from there save/load thread state to main memory: importantly, much of the CPU overhead for a context switch is removed.)
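A minimal sketch of what that 'programmable DMA' request interface could look like from the main CPU's side, with all names hypothetical (loosely modelled on how mainframe channel programs decoupled I/O from the CPU): requests go into a ring in shared memory and the small I/O core drains it, so the big cores never run filesystem code themselves.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical descriptor ring shared between the main CPU (producer)
 * and a small I/O core (consumer). The main CPU only fills in
 * requests; the I/O core owns all filesystem and device logic. */
enum io_op { IO_READ, IO_WRITE, IO_MOUNT, IO_UNMOUNT };

struct io_req {
    enum io_op op;
    uint64_t   object_id;  /* file/volume handle, not a raw block */
    uint64_t   offset;
    uint64_t   length;
    void      *buffer;     /* DMA target in main memory */
};

#define RING_SIZE 256

struct io_ring {
    struct io_req    slots[RING_SIZE];
    _Atomic uint32_t head;  /* advanced by the main CPU */
    _Atomic uint32_t tail;  /* advanced by the I/O core */
};

/* Main-CPU side: queue a request without running any filesystem code
 * locally. Returns 0 on success, -1 if the ring is currently full. */
int io_submit(struct io_ring *ring, const struct io_req *req)
{
    uint32_t head = atomic_load_explicit(&ring->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&ring->tail, memory_order_acquire);

    if (head - tail == RING_SIZE)
        return -1;                      /* full; caller retries later */
    ring->slots[head % RING_SIZE] = *req;
    /* Release ordering publishes the slot before the new head value. */
    atomic_store_explicit(&ring->head, head + 1, memory_order_release);
    return 0;
}
```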
Comment removed (Score:5, Interesting)
Re:Why not buy Intel? (Score:5, Insightful)
I doubt most people know what it actually takes to design and manufacture a CPU like an i7. There are huge investments in R&D, and then even bigger investments in the foundries to make said chip. It would significantly increase the cost of a CPU, to something like $4000 a pop, if the only customers were about 20,000,000 Macs a year. Even if Apple managed to double their sales as being the only "Intel computer" available, their margins would topple and the stock would crash.
Re: (Score:2)
If you ignore the fact that AMD would now have a total monopoly, sure, it would be much the same world as before.
Re:Why not buy Intel? (Score:5, Funny)
Re: (Score:2)
Re: (Score:3)
Face it, they suck.
They really do. They just suck less than the alternatives.
Re: (Score:2)
No one beats Intel for performance/density/dollar, not even close. In time that could change, but not for a while yet. Why do you think datacenters use Intel chips?
Re: (Score:2)
It isn't, but ARM is better at the low-power scale in absolute terms, and less complex chips have lower leakage. It's hard to build a single chip that can scale from high to low power, and Intel doesn't know how to build small chips. But yes, at desktop/server scale, Intel still smokes ARM. High-end POWER does better than ARM but Intel still wins.
Re:Why not buy Intel? (Score:5, Informative)
There are some real costs to x86. It's more fair to call the decoder a parser for x86 - instructions are between one and 15 bytes long, and they map to between one and a few dozen micro-ops. You need to keep the decoder powered almost all of the time that you're executing instructions (and when it's unpowered, you need to have the trace cache, which contains decoded micro-ops, powered). The ARM (AArch64 and Thumb-2) instruction sets are tuned to give good cache usage, so the typical win of CISC over RISC in i-cache usage doesn't really apply.
That said, when you get up to desktop or server power consumption levels, the power consumption is dominated by the register rename engine and the ALUs. Here, Intel has an advantage over ARM because they control their process and integrate their chip design very closely with the fab technology. This lets them put analogue components for monitoring power consumption and power / clock gating throughout the chip. Dark Silicon (i.e. the end of Dennard Scaling) means that you keep getting more transistors to put in the IC, but you can't power more of them at a time. Being able to switch off parts of the chip faster than the competition means that Intel still has some advantages. Some of the ARM partners who design their own cores and control their own fabs could do this, but ARM licenses IP cores that are produced by multiple vendors with different processes. Apple is in a similar situation, as they're careful to have a second source for fabbing their ARM cores.
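To see the 'parser' point in action, here's a small sketch using the Capstone disassembly library (a third-party library, nothing Intel ships): three adjacent x86-64 instructions come out at 1, 5, and 10 bytes, which is exactly the variable-length boundary problem the hardware decoder has to solve before it can even start cracking instructions into micro-ops.

```c
/* build: cc x86len.c -lcapstone */
#include <capstone/capstone.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Three x86-64 instructions of very different lengths:
     *  1 byte:  push rax
     *  5 bytes: mov eax, 1
     * 10 bytes: movabs rax, 0xdeadbeef (REX.W + 8-byte immediate) */
    const uint8_t code[] = {
        0x50,
        0xB8, 0x01, 0x00, 0x00, 0x00,
        0x48, 0xB8, 0xEF, 0xBE, 0xAD, 0xDE, 0x00, 0x00, 0x00, 0x00
    };
    csh handle;
    cs_insn *insn;

    if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK)
        return 1;
    size_t count = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
    for (size_t i = 0; i < count; i++)
        printf("%2d bytes: %s %s\n", (int)insn[i].size,
               insn[i].mnemonic, insn[i].op_str);
    cs_free(insn, count);
    cs_close(&handle);
    return 0;
}
```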
Re:Why not buy Intel? (Score:5, Interesting)
What really blew my mind was reading that Apple's biggest desktop customer is now IBM. That should tell you something, when Big Blue is distancing themselves from Microsoft.
Re:Why not buy Intel? (Score:4, Interesting)
What really blew my mind was reading that Apple's biggest desktop customer is now IBM.
And according to this 3 year old article [techeye.net] it was Google before them: "Google staff now can use Windows PCs only with a business case making the company the world’s biggest Apple shop with 43,000 devices."
Re:Why not buy Intel? (Score:5, Interesting)
Re: (Score:2)
Apple has more cash on hand than all Intel stock is worth. They could divest Intel of its side businesses and focus on building high performance, low power chips at lower cost.
Patents, talent, institutional knowledge and incredible upfront costs might get in the way of that.
Re: (Score:2)
..... low power chips at lower cost.
Oh, so close. Well, actually, lower cost to them maybe.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
no one has good proven commercial grade software for arm, and its even a grey area for mac
Commercial grade? More or less everything open source runs on ARM just fine. There's plenty of commercial-grade software, depending on your industry.
x86 has been competing with the power and power of arm for a bit now, and when it comes down to it what am I supposta use? hacked up libre office offered by zen ding dong jacnoff for arm that needs root access and web add's or a a 80$ copy of MS office on X86 that majority
Re: (Score:2)
Yeah, but will commercial software vendors follow Apple down the garden path to an ARM future when the rest of the world (Linux and Windows) is still on x86?
My guess is it's more complicated than just telling the compiler to target ARM CPUs, and will an ARM Mac generate enough sales to make it worthwhile for vendors to do the extra work on their code base?
I'm assuming that most of the added work for Mac support now is fairly small scale UI stuff, and that the functional parts of applications and optimization
Re: (Score:2)
My guess is it's more complicated than just telling the compiler to target ARM CPUs, and will an ARM Mac generate enough sales to make it worthwhile for vendors to do the extra work on their code base?
It was in the olden days, when compilers were slow and people favoured all sorts of non-portable hacks. People fixed their stuff a fair bit during the 32-to-64-bit transition. Unless you use embedded ASM, the move to ARM is probably easier than 32-bit to 64-bit x86: the endianness matches, so most of the hacks would
Re: (Score:2)
I guess maybe more conventional code would be pretty easily portable then, perhaps only really performance sensitive code might be affected.
Re: (Score:2)
I guess maybe more conventional code would be pretty easily portable then, perhaps only really performance sensitive code might be affected.
Yep. If you stick to normal C++ code, you'll be fine. These days it's actually easier to do things the proper way than it is not to. If you're very performance sensitive, you might be using SSE/AVX intrinsics, in which case they're non-portable. The compilers can now do basic vectorisation themselves, so the space needed for that is shrinking. It's not that common to wr
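For illustration, a minimal sketch of that divide (the function names are made up): the plain loop compiles unchanged for x86 or ARM, and modern compilers will auto-vectorise it, while the intrinsics version is welded to SSE and would need a NEON rewrite for an ARM Mac.

```c
#include <stddef.h>
#include <xmmintrin.h>  /* SSE intrinsics: x86-only; fails to build on ARM */

/* Portable: plain code that gcc/clang can auto-vectorise on any target. */
void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}

/* Non-portable: hand-written SSE. An ARM port needs an equivalent
 * NEON version, or can simply fall back to the loop above. */
void add_arrays_sse(float *dst, const float *a, const float *b, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++)  /* scalar tail for leftover elements */
        dst[i] = a[i] + b[i];
}
```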
The vast majority of Linux installs are ARM (Score:3)
> when the rest of the world (Linux and Windows) is still on x86?
Linux has supported ARM since 1994. Today, the vast majority of Linux kernels running are running on ARM processors.
Re: (Score:2)
Yeah, but will commercial software vendors follow Apple down the garden path to an ARM future when the rest of the world (Linux and Windows) is still on x86?
Windows runs on x86 and ARM, though not many people run it on ARM. Linux is installed on at least one order of magnitude more ARM devices than x86. If anything, x86 is now a niche architecture for Linux.
Re: (Score:2)
I thought MS was looking to launch an ARM version of Win 10 this year that supported Win32 apps on a Snapdragon 835.
Re:except (Score:4, Funny)
Put down the bong. It'll make it much easier to type.
Re: (Score:2)
hacked up libre office offered by zen ding dong jacnoff for arm that needs root access and web add's or a a 80$ copy of MS office on X86 that majority of the universe uses at this point
You realise that Microsoft ships Office for three different operating systems on ARM, I presume?
Re: (Score:2)
I buy Macs because I enjoy using MacOS/X more than using Linux (and much more than using Windows).
And believe me, I don't look hip.
Because Windows & Linux are terrible? (Score:5, Insightful)
Windows is terrible because it's Windows.
Linux is a great server/workstation OS--but it's a pain on a consumer device. I'm long past the point in my life when I'm okay with recompiling a kernel to fix my sound. My intra-family IT work has gone down by about 95% since I've moved family members over to Macs.
So, yeah, if you want to use an un-terrible OS where everything basically works--then OSX is a pretty good choice. If you'd rather spend your life reading stackoverflow to figure out how to print to a wireless printer--then please feel free to use Linux. And if what makes you happiest is installing anti-virus software while Microsoft logs your every keystroke--then please, by all means, install Windows 10. Actually--just leave your Windows 8 computer plugged in and Microsoft will install it for you.
Re: (Score:2)
Linux is good on the server side, maybe the workstation, but there are pro apps that are Windows-only (some are also on Mac).
But no AutoCAD, Adobe CC, QuarkXPress, others.
Re:Because Windows & Linux are terrible? (Score:5, Informative)
I'm long past the point in my life when I'm okay with recompiling a kernel to fix my sound.
Does anyone do that any more? I've not compiled a custom kernel in a very long time. I dunno, I guess if you wander into pea-sea world and pick up the latest shitbox that got released yesterday for $200, then you might be in for some pain, but I don't know. I expect my laptops to last a long time (my 8-year-old eee900 says hi), so I've generally stuck to decent brands like Thinkpad and Asus. I can't recall ever having to recompile a kernel to deal with hardware issues.
On my work laptop (runs Ubuntu LTS), I can't remember anything ever breaking either. I basically set it up (installed a bunch of packages, program preferences, my bashrc) and it ran more or less maintenance-free until I upgraded it to the next LTS, after which it continued the same maintenance-free running to the present day.
My home laptop runs Arch, so it takes a bit more fiddling, but that's me choosing a distro known to be explicitly fiddly, just for fun. Remarkably stable though, considering. The funniest, though, was when xorg split from one monolithic distribution into a package tree. Arch cheerfully upgraded X to the newer X, which meant replacing the old monolithic package with just the minimal X server. No keyboard or mouse drivers included. Thankfully, you can browse the Arch website in a terminal-based browser...
Anyway, TL;DR: if you're recompiling kernels to fix hardware in this day and age, you're doing it wrong.
Re: (Score:2)
I haven't been using my Ubuntu box directly (occasionally remoting in) because one day I updated it and it stopped outputting graphics. I get the text stuff at boot and then a black screen thereafter. Let's not pretend that it's all competence and roses in Linux-land. I've tried dpkg-reconfiguring some things without success, but my Windows 7 machine just keeps working so I just keep doing graphics things there because it's easier than actually figuring out what's wrong this week on my Linux box.
Linux is st
Re: (Score:3)
Windows is terrible because it's Windows.
Hmmm, Windows is working fine on all of my devices. It also runs all of my games from the last 30 years.
Linux is a great server/workstation OS--but it's a pain on a consumer device. I'm long past the point in my life when I'm okay with recompiling a kernel to fix my sound.
I have no issues with Linux, either on the desktop or the server. I haven't had to recompile a driver in 10-odd years (seriously, it was when I got my first media centre PC in 2005; since then... nothing). I've run Ubuntu, and then Linux Mint when Ubuntu Macified the UI, without a single problem on any hardware I've put them on. It sounds like you've never used these operating systems... ever, and are just li
Re: (Score:2)
Windows is terrible because it's Windows.
I see that the age old Slashdot tradition that Windows is bad is still in full effect. I bet you can't even tell me what's terrible about it without just going "well it's closed source so it's BAD!" or "Micro$oft".
Re: (Score:2)
Full Unix on the corporate network, MS Office (Score:2)
My last couple of work computers have been Macs because I'm a technical person, someone who likes Linux/Unix, working in an organization that uses Active Directory and other "Windows" stuff. Mac bridges those two.
MacOS plays nicely with Active Directory and all the other corporate stuff, runs Microsoft Office, etc. It's a perfectly good company computer.
Also, it's certified Unix, and runs all the open source applications used on Linux. It's a good OS for technical people. You can pretty much open a termina
Re: (Score:2)
Of course, you save yourself the hassle of supporting a Hackintosh.
Or how do you want to run OS X / macOS?
If you want to run Windows, your question was kinda silly ...
Re: (Score:2)
m.2 pci-e storage? and not that apple only stuff?
You seem to misunderstand the direction Apple is going in.
They want to give you fewer options, not more of them. Because thinking is hard. Solder all the things to the mainboard.