
Why Apple and Microsoft Are Using Last Year's Skylake Processors In Their New Computers (gizmodo.com)

Apple released new MacBook Pros yesterday that feature Intel's year-old Skylake microarchitecture, as opposed to the newer Kaby Lake architecture. Two days earlier, Microsoft did the same thing when it released the Surface Studio. Given the improvements Kaby Lake processors have over Skylake processors, one would think they would be included in the latest and greatest products from Microsoft and Apple. Gizmodo explains why that's not the case: In the case of the new 15-inch MacBook Pro the answer is simple. "The Kaby Lake chip doesn't exist yet," an Apple rep told Gizmodo. Kaby Lake is being rolled out relatively slowly, and it's only available in a few forms and wattages. The 15-inch MacBook Pro uses a quad-core processor that has no Kaby Lake equivalent currently. That particular laptop really does have the fastest processor available. The same goes for the Microsoft Surface Studio and updated Surface Book -- both also use a quad-core Skylake processor with no Kaby Lake counterpart. But the Studio and Surface Book are also using much older video cards from the Nvidia 900 series. Nvidia has much faster and less power-hungry chips (the 1000 series) available, based on the Pascal architecture. Microsoft's reasoning for going with older video cards is nearly identical to Apple's for going with a slower processor in its 13-inch MacBook Pro: the Nvidia 1000 series came out too late. The major intimation was that Kaby Lake and Pascal came so late in the design process that it would have delayed the final products if they'd chosen to use them. New technology, no matter how amazing an upgrade it might be, still requires considerable testing before it can be shipped to consumers. One minor bug, particularly in a system as tightly engineered as the Surface Studio or MacBook Pro, can turn catastrophic if engineers aren't careful. In the case of Microsoft, it's frustrating, because that old GPU is significantly slower than the Pascal GPUs available. It's a little less frustrating in Apple's case, largely because of the old processor microarchitecture that Apple elected to shove into its new 13-inch MacBook Pro. Apple went with a new Skylake dual-core processor that draws a lot of power -- more so than any Kaby Lake processor available. It then uses all that extra power to ramp up the speeds of the processor, which means it is capable of pulling off speeds that can actually match those of the fastest Kaby Lake processor out there. The only downside to this decision is battery life.
  • by See Attached ( 1269764 ) on Friday October 28, 2016 @10:45PM (#53173429)
    The dark side of this relationship between manufacturer and user is that the provider might want to sell both product lines rather than just the first one. "Consumers on both sides of the tracks will have the unquenchable desire to have the latest flangle." In both cases, there may already be plenty of CPU horsepower, so that even last year's model works fine. Sorta feels like the cable industry letting go of the triple play. Sometimes we users just don't need a new version. Or will they down-spec the initial release to make the next rev required?
  • FOMO? (Score:3, Interesting)

    by Anonymous Coward on Friday October 28, 2016 @10:46PM (#53173431)

    No, FOTU. Fear Of The Unknown

    Current chipsets have enough power to make any device seem very quick to the average user. Only the super high-end buyers would even be able to name the latest. Why risk using a brand new chip?

    How many incremental units do you ship because you used the latest new chipset v. downside risk of potential issues with a chip that has not been tested in a full market release?

    It's math. Nothing complicated about it.
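
    A back-of-the-envelope way to frame that trade-off, with purely made-up numbers (none of these figures come from Apple, Microsoft, or Intel), might look like this:

    # Illustrative only: expected payoff of shipping a brand-new chipset vs. a proven one.
    def expected_profit(extra_units, margin_per_unit, p_serious_bug, bug_cost):
        upside = extra_units * margin_per_unit    # incremental sales from having the latest chip
        downside = p_serious_bug * bug_cost       # expected cost of a field failure or recall
        return upside - downside

    # Hypothetical inputs: 20k extra units at $150 margin, 5% chance of a $100M-class issue
    print(expected_profit(20_000, 150, 0.05, 100_000_000))   # -> -2000000.0 (the upside doesn't cover the risk)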

    • Why risk using a brand new chip?

      If computer manufacturers had taken your advice, we'd still all be running machines with 8086 processors.

      How many incremental units do you ship because you used the latest new chipset v. downside risk of potential issues with a chip that has not been tested in a full market release?

      So how does a chip get a "full-market release" unless the company that is supposed to be the super-premium level S-Rank of personal computing actually uses it?

    • Re:FOMO? (Score:5, Informative)

      by janoc ( 699997 ) on Saturday October 29, 2016 @07:26AM (#53174549)

      Sorry, but only a person who has absolutely no clue about how a hardware product is developed (and how long it actually takes!) could say nonsense like this.

      A new product like the Surface computer or MacBook is in development for more than a year, often even 2-3 years. And in the latter stages you actually need a stable and working system so that things like drivers can be developed, the OS adapted, demo units produced, CE/FCC testing done, etc.

      So if a new CPU/chipset combo shows up in the last 9-12 months of the cycle, it is simply too late - it would delay the release of the product by at least that much. This is *not* about just swapping a motherboard/CPU/GPU - the board for the chips needs to actually be *developed* first, before you can even start thinking about integrating it into a product.

      The risk mitigation is also important, but that comes into play only after everything above is sorted out already. If there is nothing new to put in your product, you have no "unknown" to fear in the first place.

      • by Anonymous Coward

        Then how did the other manufacturers that released laptops with Kaby Lake do it?

        • by Herve5 ( 879674 )

          Indeed. The German Tuxedo, ahead of System76 and others, shows among many other configs one with a Kaby Lake i7-7500U, 32GB RAM, all possible ports (incl. USB 3.1), up to 3 SSDs incl. 2048GB fast ones, a removable battery (yes), 2kg, the preloaded Linux you want, all of this at roughly the same cost.

        • by janoc ( 699997 )

          Because these guys are shipping what is basically the reference design from Intel packaged into a case, sans custom OS and with very little to nothing to develop?

          That's not quite an apples-to-apples comparison. Apple has pretty much everything custom - the motherboard, the OS, the peripherals on the MB, a ton of tuning and tweaking so that the system doesn't only boot but actually runs well, etc.

          If you want to compare, then look at major manufacturers that are using custom motherboards - e.g. DELL or Lenovo.

  • by tlambert ( 566799 ) on Friday October 28, 2016 @10:59PM (#53173487)

    Perhaps if nVidia would quit changing BGA pinouts, companies would be more likely to substitute their newer processors.

    Of course, if they did that, companies might also substitute a competitor's part instead. Then nVidia would end up having to compete on price/performance. And no one wants that.

  • I don't care. (Score:4, Insightful)

    by Anonymous Coward on Friday October 28, 2016 @11:01PM (#53173495)

    This doesn't matter to me at all.

    What matters to me is:

    1) Moderately powerful discrete GPU options
    2) Anti-glare LCD panels
    3) Ports (you know, things like USB 2.0/3.0, Ethernet, headphone/microphone jacks, DisplayPort, etc)
    4) More than 16GB of RAM
    5) User replaceable batteries, OR a built-in battery of sufficient capacity that this doesn't matter
    6) Keyboards with a reasonable amount of key travel (0.5mm or whatever it is on the nMBP is hardly sufficient)
    7) Apparently, I can add "keyboards with a reasonable amount of physical keys" to this list as well

    A quad core CPU would be nice. Beyond that, I don't really care because anything "i7" is already fast enough for me. I don't need the latest greatest CPU the moment it comes out. It would be nice if the rest of the machine were kept up to date though, in terms of GPU options and other stuff, so that when I do decide to purchase a machine I'm actually getting something indicative of modern day technology (even if the CPU is a generation behind). Situations like the MBP (where everyone waited for this "major update") and nMP are pretty much inexcusable for a company with $200B in the bank.

    • 1) Moderately powerful discrete GPU options

      15" MBP is a Pascal based GPU, not the most powerful but fairly powerful. 4GB at max.

      2) Anti-glare LCD panels

      They have been since forever. My 15" from 2013 has anti-glare stuff on the screen.

      3) Ports (you know, things like USB 2.0/3.0, Ethernet, headphone/microphone jacks, DisplayPort, etc)

      It has four ports that are any of those things you want plus more, with a very high rate of transfer.

      4) More than 16GB of RAM

      Not impossible you know. [stackexchange.com] It will just cost a lot

  • Text (Score:5, Insightful)

    by dohzer ( 867770 ) on Friday October 28, 2016 @11:37PM (#53173607)

    Did they really need that much text to explain the situation? I feel like that paragraph contained a lot of words, but said very little.

    • And even then, it doesn't explain the whole situation.

      Apple typically uses the Intel quad-cores with the high-end integrated graphics (Iris Pro, or whatever it's called now). And although they were published on Intel's Ark database, they didn't have a price and were not used in hardware until June or so.

      Thus, the new 13" MacBook Pros use 4 month old chips. That's not my definition of old.

      Please someone correct me here. Intel's release schedule has gotten so complicated that I can't keep up.

      • Re:Text (Score:5, Informative)

        by cfalcon ( 779563 ) on Saturday October 29, 2016 @02:39AM (#53173985)

        From what I can tell (I don't think Apple has given us the chip numbers), it goes like this:

        (remember that "i7" and "i5" don't have meanings- they are just marketing garble, and don't, for instance specify the difference between hyperthreading and non-hyperthreading, or two and four cores: all of these chips have hyperthreading)

        https://en.wikipedia.org/wiki/... [wikipedia.org]
        (I could have messed up something in transcription)

        13" Cheap Model, with TDP 15W:
        base: 2 core i5-6360U @ 2.0GHz single core boost to 3.1GHz with Iris 540, (listed as unreleased on wikipedia)
        high end: 2 core i7-6660G @ 2.4GHz single core boost to 3.4GHz with Iris 540 (listed as unreleased on wikipedia)
        13" Spensy Model, with TDP 28W:
        base: 2 core i5-6267U @ 2.9GHz single core boost to 3.3GHz with Iris 550 (listed as unreleased on wikipedia)
        midline: 2 core i5-6287U @ 3.1GHz single core boost to 3.5GHz with Iris 550 (listed as unreleased on wikipedia)
        high end: 2 core i7-6567U @ 3.3GHz single core boost to 3.6GHz with Iris 550 (listed as unreleased on wikipedia)

        In this case, all of the high end models have Iris 550, and all of the low end models have Iris 540. Intel's actual highest listed Iris models are Iris Pro 580, but all of those are on chips that are either pretty expensive, have a higher TDP, or both.

        Meanwhile, the 15" laptops all have Radeon graphics cards in them. These have chips that offer more processing power, but less graphics power (with the obvious assumption that the Radeon graphics will be used for that purpose).

        15" models all have TDP 35W chips.
        15" 256 GB model base: 4 core i7-6700HQ @ 2.6GHz single core boost to 3.5GHz with HD 530 (listed as Sep 1 on wikipedia)
        15" 512 GB model base: 4 core i7-6820HQ @ 2.7GHz single core boost up to 3.6GHz with HD 530 (listed as Sep 1 on wikipedia)
        Both model high end: 4 core i7-6920HQ @ 2.9GHz single core boost up to 3.8GHz with HD 530 (listed as Sep 1 on wikipedia)

        These models are all generally more capable than similar models released earlier. It is likely that Intel and Apple actually reached an agreement via branding and capability on these: it is likely not a coincidence that Intel happened to have highly compatible i5/i7 branding for each step of Apple's needs, for instance.

        Regardless, I've seen folks pointing out on social media that Apple really IS using the best Intel chips available, including doing it myself some, as people were all 'muh kaybee layke?' over the last day. These chips are a mix of hyperthreaded 2 core chips with Iris 540 or Iris 550 (on the 13 inch) and hyperthreaded 4 core chips with the lesser HD 530 (on the 15 inch). Meanwhile, the only Kabylake that looks like it could show up to this party at all is the 7500U, a 15W chip with 2 cores, going from 2.7GHz base to 3.5GHz single threaded boost with HD 620. This chip could maybe have slotted in on the cheap model (it costs more though), would require just that one model to be designed and tested around Kabylake stuff, and wouldn't have the Iris graphics (and that model doesn't have a graphics card). Intel certainly doesn't have the Kabylakes needed to fit their intended build case.
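
        Pulling the single-core boost clocks from the rundown above into one place (numbers as quoted in this thread, not independently verified; a minimal Python sketch):

        # Boost clocks (GHz) as quoted above; 6660G is corrected to 6660U in a follow-up below.
        skylake_13in = {
            "i5-6360U (15W)": 3.1,
            "i7-6660U (15W)": 3.4,
            "i5-6267U (28W)": 3.3,
            "i5-6287U (28W)": 3.5,
            "i7-6567U (28W)": 3.6,
        }
        kaby_lake_option = {"i7-7500U (15W)": 3.5}   # the only plausible Kaby Lake candidate named above

        print(max(skylake_13in.values()), max(kaby_lake_option.values()))
        # -> 3.6 3.5: the 28W Skylake parts already match or beat the lone Kaby Lake option on boost clock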

        • Excellent post, should be modded up!

        • You're almost right about the best possible CPUs. Apple could have released low-end (2 core/4 thread) MacBook Pros with Kaby Lake CPUs, but I don't think marketing would allow it.

          Apple only really cares about the immensely profitable iOS devices. The iPad Pro is too close to the price of the abandoned MacBook Air and the forgotten MacBook, which explains why there's no touchscreen or Apple Pencil on new Macs.

          About release dates, according to ark.intel.com all these CPUs were released in Q3 2015, except the i7-6

          • by cfalcon ( 779563 )

            Yea, it's the 6660U. The G was a typo. G and U are nowhere close to each other on my keyboard, so no clue how that happened.

            I mention the possibility of shoving a Kabylake into the low end, along with some theories as to why they would not (it could require a different chipset and testing, it lacks the top end graphics option), and yea, marketing could be a part as well. But if there are valid technical reasons that we can see, there's probably more that Apple can see.

            > Apple only really cares about the

        • It still is a crappy product and a lemon. I mean that sincerely and I'm not trolling.

          First off, the AMD graphics is an RX 450! A 450, you know, the GPU that has about 1 teraflop, or about the speed of a 2011 era card and probably close to your cell phone??! Even the consoles of 2013 have the same quality graphics.

          Where is the "PRO" in this? Apple's current pro has 4 year old hardware while its non pro version is more modern. Only a dual core skylae? You're kidding for a $2700 system? GOD Almight!

          Even if they had to st

          • by cfalcon ( 779563 )

            > First off the AMD graphis is an RX 450!

            The low end 15" costs 2400 bucks and has a "Radeon Pro 450". The high end 15" costs 2800 bucks and has a "Radeon Pro 455". Both can be upgraded to the "Radeon Pro 460". You are correct about the "1 teraflop" in the low end one. I don't *think* you can straight compare to the desktop RX cards, and I don't *think* that teraflops is the best metric (especially when comparing to consoles). That being said, it is absolutely clear that these are not super powerful

            • Oh please, the current Macbook Pros are 2012 era hardware. They have never been current or fast since Steve Jobs passed.

              If I am paying a premium I want professional grade and up to date components. Who gives a shit about USB-C when the CPU chokes as soon as you compile code, do video editing, or run virtual machines? I own a dual-core hyperthreaded chip and know first hand!

              Sorry, admit Apple lost. This is a great MacBook Air.

              • by cfalcon ( 779563 )

                You aren't backing up your absurd statements with data. Do you really want me to refute your "2012" claims? You have multiple generations of processors between now and then, the RAM in question wasn't used in 2012, you couldn't even get this level of graphics processing in a laptop, USB-C was years away still, etc.

                You own a "dual core hyperthreaded" that appears to be a Haswell i3. The lowest end macbook pro is faster than that by a lot, and costs 1500 bucks. That's a lot, but you are constantly compari

                • Fine here is my source on older hardware [theverge.com].

                  And here is the CPU, which is a glorified i3 with hyperthreading [wikipedia.org], also called the i5. It is not the quad-core model.

                  Apple has a lot of explaining to do. If this were a normal company this product would bomb unless priced appropriately, sub-$1000. Like I said, this is a 2016 MacBook Air. Not a power anything.

                  • by cfalcon ( 779563 )

                    Oh, by "current Macbook", I thought you meant the ones that came out (which are *technically* current, in that you can purchase them). Yes, the ones based on the 2012 refresh are quite fairly characterized as four year old hardware, even though that does ignore hardware such as the CPU and other parts that get refreshed yearly. Apple does that with a lot of their hardware.

                    The link to wikipedia is interesting, but calling it a "glorified i3" doesn't make too much sense. The "i3/i5/i7" don't have any actua

            • by tlhIngan ( 30335 )

              The escape key is present unless the application overrides it. Why would vim and emacs override the default escape key? The bar defaults to normal keyboard buttons, and changes based on the application. It should run console stuff A-ok. I'd be more concerned about it not behaving properly in Linux or Windows, but it is very likely that it either has sensible hardware defaults or drivers - but that's still a risk if you wanted to dual boot, until someone checks it out.

              On boot, it's a standard top row function

              • by cfalcon ( 779563 )

                > Now, Vim/Emacs would be well poised to use that touch bar.

                In insert mode, it could say:
                ESCAPE
                Then when you press it and switch to command mode it could dynamically change to
                MAKEBEEP

    • Basically MS and Apple selected the CPUs and GPUs in their latest computers based on practical problems of release dates. These decisions were not to screw you as the consumer over. Film at 11.
      • Basically MS and Apple selected the CPUs and GPUs in their latest computers based on practical problems of release dates. These decisions were not to screw you as the consumer over. Film at 11.

        Nonono!

        Microsoft used intelligent and astute marketing decisions that are already showing how smart they are, and apple is a bunch of goddamn hipsters selling overpriced shit to stupid people that like shiny things

      • Got to get those Christmas sales I guess.

        Both MS and Apple should have waited 3 months. Much much better graphics could have been on their high end products. I mean, why pay $2700 for a MS Surface Studio with a slow 960 GPU? Also it is unforgivable that Apple included just a dual core with hyper-threading on their so-called professional line.

        I own the Haswell version of this chip and it is not anything like a real quad core. When you add the loads that professionals use, it starts to break down quickly in Visual Studio.

        • by Uberbah ( 647458 )

          Both MS and Apple should have waited 3 months. Much much better graphics could have been on their high end products.

          Except then you'd never release anything, ever, because there's always "something better" coming in 3 months.

            Well, they released them knowing the components would be out of date at launch, at ultra expensive prices. I mean, who pays $2700 for a PC anymore? Both of these are insane, and yes, AMD Fusion is 2 months out and Kaby Lake is practically here now. If they waited 3 to 6 months they would have ultra new components in their ultra expensive products at launch, for a longer product cycle.

            Just as an example, the Nvidia 10xx series is dramatically faster, as even their low-end 1060 performs like last year's high-end 970.

            • First of all, the components are not "out of date". We're not talking about Core 2 Duo CPUs and Radeon 6000 GPUs. The components simply are not leading edge and the newest generation. Second, it's not like Apple and MS targeting new products for the holiday season is a big surprise to anyone, including Intel, AMD, and Nvidia. According to you they should delay release of a product that works fine and forgo the busiest retail season for the vast number of consumers just so that a few geeks can boast they ha
              • These are pros. Not airs for consumers. Yes the newer GPUs use Samsung 14 nm processes compared to the 28 nm from the previous generation. Big boost in performance.

                These are supposedly for professionals. Not capable of any real work dealing with VMware fusion, Adobe premiere, compiling code, or anything else a professional would use.

                • These are pros. Not airs for consumers

                  [sarcasm]And pros never buy during the Christmas season. And consumers never buy the MacBook Pro.[/sarcasm]

                  Yes the newer GPUs use Samsung 14 nm processes compared to the 28 nm from the previous generation. Big boost in performance.

                  [Citation Needed]. My information says that the Radeon Pro 455 is Radeon Arctic Islands architecture and is manufactured on a TSMC 16nm FinFET process, not a 28nm process. [wccftech.com]

                  These are supposedly for professionals. Not capable of any real work dealing with VMware fusion, Adobe premiere, compiling code, or anything else a professional would use.

                  Let's look at this argument. You are saying that pros would benefit greatly enough from using Kaby Lake over Skylake. The fastest mobile Kaby Lake is the Core i7 7500u [cpubenchmark.net](cpu score: 5381) vs a Core i7 6700T [cpubenchmark.net] (cpu score:8971) which A

        • by cfalcon ( 779563 )

          Please post the cpuinfo of the Haswell chip you own. I suspect that we will find a discrepancy with this claim, based on the other post you made. The low-end Surface Pro 3 you mention has a 1.5 GHz chip branded as i3, and none of the MacBook Pros have that.

  • And it was a disaster. Drivers just plain weren't ready. BSODs for months, epic fail. Thankfully, Apple is smarter than that.

    • by cfalcon ( 779563 )

      Apple also released a Skylake last year, in the iMac. I haven't heard of such problems (though possible they exist, I think I'd have seen people flipping their shit). These are new Skylakes, of course, and were not available last year.

    • Skylake STILL isn't ready on most Linux distributions. On Ubuntu 16.04 LTS, the kernel is missing Skylake support for several features that cause issues, from a black screen upon boot for 10+ minutes (monitor shows no signal shortly after the boot log messages stop (ie when it gets to the console login screen), and stays that way for 10+ minutes. The IPMI KVM console also shows no signal, but the IPMI serial console works), to IOMMU isolation issues. Don't plan to use the stock kernel if you intend to use I

      • Skylake STILL isn't ready on most Linux distributions. On Ubuntu 16.04 LTS, the kernel is missing Skylake support for several features, which causes issues ranging from a black screen upon boot for 10+ minutes (the monitor shows no signal shortly after the boot log messages stop, i.e. when it gets to the console login screen, and stays that way for 10+ minutes; the IPMI KVM console also shows no signal, but the IPMI serial console works) to IOMMU isolation issues. Don't plan to use the stock kernel if you intend to use IOMMU isolation on a Skylake Xeon for PCI passthrough or SR-IOV; everything gets lumped together in the same IOMMU group, making it impossible. It doesn't have the Skylake patches yet, which only recently came out.

        I have to manually add in a set of Skylake patches from newer kernels and recompile to get it to work each time there's a kernel update. Hopefully that'll be fixed when the 16.10 kernel is backported to 16.04, and I can switch to that. But if Skylake support is still iffy on current LTS Linux distributions, then forget about Kaby Lake. It's just not ready yet and will frustrate end users.

        Shoot it isn't ready on Windows either :-)

        Especially if you own a Windows 7 box, as Intel only seems to care about Windows 10. NVMe on 7 is quite buggy from what I hear. I am glad I am still on Haswell as it serves my purposes fine. I am just irritated I can't do GPU passthrough on my K series i7, but it is stable and works.

        Hopefully AMD Fusion, which is about to come out, can provide some competition. Intel could use it.
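
        For anyone who wants to check whether their own Skylake box hits the IOMMU grouping problem described above, a minimal sketch (Linux only; it just walks the sysfs layout the kernel exposes):

        #!/usr/bin/env python3
        # Minimal sketch: list IOMMU groups and the PCI devices in each.
        # If unrelated devices share a group, PCI passthrough / SR-IOV isolation
        # (as discussed above) won't work without a patched kernel.
        import os

        GROUPS_DIR = "/sys/kernel/iommu_groups"

        if not os.path.isdir(GROUPS_DIR):
            print("No IOMMU groups found - is the IOMMU enabled (e.g. intel_iommu=on)?")
        else:
            for group in sorted(os.listdir(GROUPS_DIR), key=int):
                devices = os.listdir(os.path.join(GROUPS_DIR, group, "devices"))
                print("group %s: %s" % (group, ", ".join(sorted(devices))))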

  • by thegarbz ( 1787294 ) on Saturday October 29, 2016 @03:44AM (#53174115)

    "Apple and Microsoft use Skylake processors in new computers due to Kerby Lake unavailability"

    There, a headline which explains everything you need to know about the summary and the article, and is one word shorter to boot.

  • Yeah, about 4W at full load. Is that "a lot" these days?

    Also, from what (little) I've read about Kaby Lake the improvements from Skylake are minor - power consumption is in fact expected to be identical.
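
    A rough back-of-the-envelope on what an extra ~4W at full load means for battery life (the battery capacity and baseline draw here are hypothetical round numbers, not specs from the article):

    # Hypothetical: 50 Wh battery, 20 W full-load system draw, plus the ~4 W delta mentioned above.
    battery_wh  = 50.0
    base_load_w = 20.0
    extra_w     = 4.0

    print(battery_wh / base_load_w)               # ~2.5 h of full-load runtime
    print(battery_wh / (base_load_w + extra_w))   # ~2.1 h with the extra 4 W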

  • Somewhere in the bowels of 1 Infinite Loop I'll bet there's a mockup of a MacBook with an A10 processor. Or multiple A10 processors. Running a crude port of macOS. But because that would mean another round of porting legacy software over to the new chips, it won't happen until they can get a good emulator experience. Seems to me that's where things should be headed, just based on what's come up over the last few years.

    • by dgatwood ( 11270 )

      I think it's safe to say that they've played with such a configuration. However, unless their R&D is way ahead of what they're shipping, such a device won't actually ship for a long time, if ever.

      The biggest problem is that the A10 is still a two-core chip. It has four cores, technically, but two of those are slow cores for reducing power consumption. I don't know if it is possible to use all four cores, but even if it is, it would still be nowhere near as fast as a modern four-core Intel chip. The

      • The biggest problem is that the A10 is still a two-core chip.

        It is 2+2 because it was designed for phones. It does not have to be 2+2 if designed for a laptop and could be quad-core. Remember, Apple ships variants of their Ax processors all the time. The AppleTV used a single core A5 which was only ever made for the AppleTV. Still, I don't see it being as powerful as an Intel chip.

        The second biggest problem is that the GPU in the A10 is designed to drive a 1920×1080 screen. Even the iPad Pro's GPU is designed to drive only a 2732 x 2048 screen. That's less than half the pixels on an iMac screen, and the iMac's GPU has to routinely drive up to two 3840x2160 UHD screens on top of that. So basically, the GPU performance would probably need to go up by an order of magnitude to replace what's there now, or else they would have to use an external GPU.

        The 6-core GPU drives a 1920x1080 display. Again, for a laptop there would be more room to squeeze in more cores to handle a bigger display. Also there is no reason that Apple has to use a PowerV
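
        A quick arithmetic check on the pixel-count claim quoted above (the 5120x2880 iMac panel is an assumption, since it isn't stated in the thread):

        # Pixel counts behind the "less than half the pixels" claim above.
        ipad_pro = 2732 * 2048          # ~5.6 MP (iPad Pro, per the parent post)
        imac_5k  = 5120 * 2880          # ~14.7 MP (assumed 27" 5K iMac panel)
        two_uhd  = 2 * 3840 * 2160      # ~16.6 MP of external UHD displays

        print(ipad_pro / imac_5k)              # ~0.38 -> indeed less than half
        print((imac_5k + two_uhd) / ipad_pro)  # ~5.6x the pixels the A10-class GPU targets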

        • by dgatwood ( 11270 )

          It is 2+2 because it was designed for phones. It does not have to be 2+2 if designed for a laptop and could be quad-core. Remember, Apple ships variants of their Ax processors all the time.

          Yeah, but adding cores isn't free. The more cores you add, the more challenging it is to keep their caches in sync. Two cores are relatively easy. Four cores are considerably harder. Six or more cores to match the multicore performance of modern Intel chips are harder still. Obviously it can be done (because it has been d

          • Yeah, but adding cores isn't free. The more cores you add, the more challenging it is to keep their caches in sync. Two cores are relatively easy. Four cores are considerably harder. Six or more cores to match the multicore performance of modern Intel chips are harder still. Obviously it can be done (because it has been done many times), but the point is that cranking up the core count is a non-trivial piece of engineering.

            Which is a problem for all multi-core CPUs and not just Apple. I would argue, though, that optimizing two different sets of cores (2+2) might be harder than 4 of the same core. My point still is that Apple has optimized the number of cores for each device, sometimes removing a core. Apple could design a quad-core Ax laptop CPU if it wanted.

            The reason it makes sense for Apple to build their own chips for cell phones is because they turn around and build ten million of each model. It makes a lot less sense to spread higher R&D expenses (for a much more complex chip) across a tenth as many devices (or less).

            Not really. They design their own chips because their requirements for each device can be optimized as opposed to accepting whatever design Samsung or even Qualcomm had made to wo

            • by dgatwood ( 11270 )

              Not really. They design their own chips because their requirements for each device can be optimized as opposed to accepting whatever design Samsung or even Qualcomm had made to work with a vast array of different devices and customers. There's no reason they could not extend that to laptops.

              Of course they could. My point is that they would end up needing to modify the cores themselves significantly to ramp up the core count, and that those sorts of changes would, I suspect, be too significant to pay off wh

              • Of course they could. My point is that they would end up needing to modify the cores themselves significantly to ramp up the core count, and that those sorts of changes would, I suspect, be too significant to pay off when you're talking about a laptop.

                Well, ramping up the core count is one way of boosting performance; however, I don't expect an Ax MacBook to be a powerhouse. Again, I expect Apple would replace the MacBook Air with it if they do it.

                And no, the reason they design their own chips is that they can blow the doors off of what the other companies achieve in terms of power consumption by hand-optimizing the heck out of the core designs. On cell phones, that makes a big difference, and in quantities of tens of millions, the R&D cost per unit is small. On laptops, that makes a much smaller difference (because the batteries are huge by comparison) and the R&D cost per unit is relatively large.

                But Apple is not doing this from scratch. They have a design already. Whether it is enough to power a laptop is a different question.

                And yet every time I open up my Xcode project at work, Xcode sits there with a single CPU core pegged at 100% for a couple of minutes just to load the project, and several more minutes before it stops SPODing long enough to be usable, and basically the CPU is pegged at 100% for about an hour before indexing finishes. Real-world code doesn't always parallelize easily.

                I'm not understanding your argument. First you are saying it's hard to do multicore CPUs and get it right. But you're also saying multicore usage in the real world does not work well.

    • by AHuxley ( 892839 )
      Why spend on new chip designs when the real design team is really busy with ARM?
  • Yup. The only downside. In a portable device. Idiots.
