Intel Hardware Technology

Intel's Upcoming Coffee Lake CPUs Won't Work With Today's Motherboards (pcworld.com) 240

Intel's upcoming Coffee Lake CPUs won't work with existing 200-series motherboards that support Kaby Lake, a manufacturer confirmed on Wednesday. In a Twitter post by Asrock last Saturday, the company confirmed the news when asked if "the Z270 Supercarrier [will] get support for the upcoming @intel Coffee Lake CPUs." Their response: "No, Coffee Lake CPU is not compatible with 200-series motherboards." PCWorld reports: According to at least one reliable source outside of Intel, the new Coffee Lake CPU will indeed not be compatible with Z270 boards, even though the chipsets with the upcoming Z370 appear to be the same, PCWorld was told. The source added that there are hopes in the industry that Intel will change its mind on compatibility. Tomshardware.com said it had independently confirmed the news with Asrock officials as well.

Why this matters: The vast majority of new CPU sales are in new systems, and they likely won't be impacted by the incompatibility. However, there's also a very large and very vocal crowd of builders and upgraders who still swap out older, slower CPUs for newer, faster CPUs to maximize their investment. An upgrade-in-place doesn't sell an Intel chipset, but it at least keeps them on the Intel platform. If consumers are forced to dump an existing Z270 motherboard for a newer Z370 to get a six-core Coffee Lake CPU, Intel risks driving them into the arms of AMD and its Ryzen CPUs.

  • Why is this news? (Score:5, Insightful)

    by sinij ( 911942 ) on Thursday August 03, 2017 @09:36PM (#54937703)
    Intel, for as long as I can remember, has needlessly changed sockets.
    • by cheesybagel ( 670288 ) on Thursday August 03, 2017 @09:48PM (#54937759)

      No shit. They do this *all* the time.

    • that, and to do things like introduce the Core "i" series with GPU (775 -> 1156), shits and giggles (1156 -> 1155), add integrated voltage regulators into chips (1155 -> 1150), switch from DDR3 to DDR4 (1150 -> 1151)

      • by gfxguy ( 98788 )
        I don't see the big deal.... everybody whines and complains when compatibility is broken. People still think Windows 3 programs should run under Windows 10, and that MS made it that way to force people to upgrade other software. It's not like they added any new features to the OS that forced them to finally let go of backwards compatibility. People here are right - those most affected here are a tiny fraction of the population, and most of them will just upgrade their MBs next time they want to upg
    • by Misagon ( 1135 )

      Because it is a "bait and switch" situation.

      Intel had previously announced that Coffee Lake would use the 1151 socket, and so people assumed that that meant compatibility with previous motherboards that have that socket.
      I would bet that there are quite a few consumers out there who bought 1151-socket motherboards and a CPU with 4 cores or fewer (which is all that is available) with the intention of upgrading to a Coffee Lake 6-core CPU in the future.

      Right now, even a 2-core CPU with high clock is considered a good

      • It's not bait-and-switch if new information invalidates people's incorrect assumptions that are based on zero relevant information from the manufacturers. Anyone who purchased a motherboard that didn't explicitly state compatibility with the upcoming "Coffee Lake" and expected compatibility was taking a gamble, and is personally liable for the result of that gamble.

        It's bait-and-switch if Intel advertised that the new CPU would work in Kaby Lake motherboards, and then it didn't work. They didn't do that,

    • Was going to say - anyone remember the "Slot 1" debacle? And then after that turned out to suck and cause significant challenges, back to ZIF sockets halfway through the Pentium-3 products. They didn't even wait for a new architecture.

      How is this news? They've literally been doing this for longer than 15 years.

      • Take socket 2011. There are actually several electrically incompatible versions of that for no good reason.

    • Intel, for as long as I can remember, has needlessly changed sockets.

      Goes back to the 90s when Pentium was first introduced: Pentium sockets were nothing like Pentium Pro or II or III or 4. AMD would try and leverage the existing infrastructure by designing CPUs that were drop-ins for Pentium sockets, but Intel created everything from scratch.

      Having said that, though, today's CPUs - if one wants to upgrade them, one has to closely match them w/ the chipset & everything else, or suffer a performance hit. Anyone who wants to upgrade won't notice a performance boost: th

  • by rahvin112 ( 446269 ) on Thursday August 03, 2017 @09:39PM (#54937713)

    For the last 10 years, since Intel gained complete monopoly control over chipsets for Intel CPUs, they have gone out of their way to make minor changes that force new motherboard purchases to feed their income from chipsets. They add a pin or two or make some other minor change that makes it impossible to use new CPUs with older motherboards, even if the chipset is identical in features.

    This is SOP at Intel these days: use that monopoly power to extract maximum revenue. Hell, the new Platinum Xeon chips have MSRPs of up to $13,000, something that would not be possible with legitimate competition.

  • Upgrading CPUs? (Score:5, Insightful)

    by kugeln ( 680574 ) on Thursday August 03, 2017 @09:46PM (#54937743)
    Do people really "upgrade" processors? I mean, I've been building computers for almost 20 years now and I think I got over the whole idea of upgrading the processor after the first time, circa 1997.

    Outside of the gimmicky super-shredder-killer-fps-man-slayer motherboards, it's not like they have been the most expensive part of a computer build for a long time. Introducing a new video card incompatibility like the transition from PCI -> AGP -> PCI Express would be a whole different story.

    • Re:Upgrading CPUs? (Score:5, Interesting)

      by glitch! ( 57276 ) on Thursday August 03, 2017 @10:18PM (#54937895)

      No. When I upgrade, I get a new motherboard and CPU. And often, new memory for the MB. I have built systems for maybe 25 years, and I don't remember doing a simple CPU upgrade. But I did swap out a Cyrix CPU because it kept crashing Win95.

      • Re: (Score:3, Interesting)

        I've built systems for about that long and I did a simple CPU upgrade about 2 years ago. About 6 years ago I built a dual Xeon E5645 workstation for myself ($500 per CPU at build time) and two years ago I upgraded them to Xeon X5690 CPUs. The X5690 CPUs were about $2000 each when I built the machine but only $200 each used on eBay 4 years later. I've also piecemeal upgraded a bunch of other parts like RAM, disks, etc.

        The end result is a 6 year old workstation with shockingly good performance when compared t

      • Builders that lean towards AMD upgrade their CPUs... builders that lean towards Intel never could.
        • Builders that lean towards AMD upgrade their CPUs

          Have you? I had a Phenom II on AM3 and kind of wanted an upgrade. No option for that since the FX line used AM3+. But hey, if I bought a brand new system with an AM3+ mobo, at least I could downgrade to a shitty Phenom! This is just the nature of technology.

    • I finally did this for the first time recently. My almost 5 year old build with a Sandy Bridge i3 and Radeon 6850 was barely acceptable for Overwatch...playable at the lowest settings. Got an Ivy Bridge i5 off eBay (dual to quad core was huge but also better clock) and a new 1060 and now I've got even Doom and Gears 4 running max settings at 1080p with 60fps.

    • Do people really "upgrade" processors?

      Sure... on AMD systems. On Intel systems? No, they really don't, because Intel changes their CPU socket at the drop of a hat. Any hat.

      My desktop PC started as a Phenom II X3 720, then it was a Phenom II X6 1045T, and now it's a FX-8350. All in the same socket. I built other PCs to take the hand-me-down processors, and do other jobs. (One Linux box, one test bench.)

    • I'm one of those guys who likes to have a top-end gaming PC. Spend quite a lot of money on it. Put a lot of thought into when to do part-swaps and when to go for a whole new system.

      I'm not really sure this is going to matter all that much for most people, even for people like me. Even for high-end gaming (i.e. trying to hold 60fps with high/ultra graphics settings at 4k), the CPU upgrade cycle isn't particularly intense. If you're using a decent Skylake or Kaby Lake (e.g. a 6700K or 7700K) you should be goo

      • by gfxguy ( 98788 )

        That's the thing - we're not talking about people who buy a complete system, or even who hack together a system because they simply prefer picking and choosing components (like me) over getting whatever vendors decided to put in. I build my own system so that I can do things like getting quiet fans and power supplies; I also know the graphics card installed on most systems (usually some built in crap) is not going to be good enough. I'm not a big gamer, but I do play, and I do use some graphics apps profe

    • by Ramze ( 640788 )

      I upgraded an entire engineering computer lab's worth of CPUs once... in 1996. lol. Those Pentiums swapped out pretty easily and Cyrix P166+ processors fit nicely into the same slots as old Pentium 60s (though even then, the motherboards were a bit of a bottleneck at times). A roughly 2.5x jump in CPU speed back then meant a lot for NT4 boxes running AutoCAD and other engineering software.

      Today... I can't imagine why anyone would bother (though maybe it's my lack of imagination's fault).

      CPUs aren't us

    • by fnj ( 64210 )

      Do people really "upgrade" processors?

      Yes. Duh.

  • I keep my systems at least 3 years. Although the theory is that you can swap to a better CPU, I've only done this one time. Most of the time Intel deliberately continues evolving the sockets, not for any real technical reason AFAICT, but to keep you buying those motherboards. This is one of the reasons that I don't upgrade processors very often (I skip a few generations), as the gains are small enough that it's just not worth it for the cost and hassle.

    • by jonwil ( 467024 )

      The last time I upgraded my CPU without replacing the motherboard would have been when I upgraded from a Pentium 166 MMX to a 300MHz Cyrix part and even then I ended up regretting it and wishing I had moved to a Pentium II or something instead.

  • by deathguppie ( 768263 ) on Thursday August 03, 2017 @09:48PM (#54937765)
    It's been a long time since AMD has released a competitive product. Intel, in a show of appreciation and friendship, has decided that the best way to help them along is to ensure that, unlike the new series of Ryzen processors coming out, theirs will not be backwards compatible with the hardware you buy. Why else would they restrict the PCIe lanes in their top-of-the-line chips by price and lock out features unless they were trying to help AMD along?
  • by xxxJonBoyxxx ( 565205 ) on Thursday August 03, 2017 @09:50PM (#54937777)
    I've already switched to AMD Ryzen CPUs for new systems because they're fast, cheap and stable. Not sure why I'd use Intel for anything from here on out; instead I can spend more on video cards and larger SSD storage.
    • Intel rocks for gaming. They still make faster CPUs per core by a good margin, as not everything can be run in parallel.

      • by dbIII ( 701233 )
        Gaming is one of the classic examples of embarrassingly parallel as shown by all those processing units in video cards.
        Of course some developers can't wrap their heads around more than one thread - even ones born after multiple CPUs were in desktop computers!
        • by Kjella ( 173770 )

          Gaming is one of the classic examples of embarrassingly parallel as shown by all those processing units in video cards.

          Graphics, yes. Gaming? No. From what I've understood, most games have divided threads by task: this thread does AI, this thread does rendering, and so on. Which is why so many games still do well on dual cores; there's one core running the main loop and one running everything else. Not even Civilization VI, the kind of game that possibly could use lots of cores for the computer's AI, manages to use 8 cores.
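
          As a rough, hedged sketch only (C++, with made-up subsystem names rather than anything from a real engine), that task-per-thread layout looks something like this - and note that no matter how many cores the machine has, only a couple of threads ever stay busy:

          #include <atomic>
          #include <chrono>
          #include <thread>

          std::atomic<bool> running{true};

          // Hypothetical subsystem loops; a real engine does far more per tick.
          void ai_loop() {
              while (running) {
                  // update AI / pathfinding state here
                  std::this_thread::sleep_for(std::chrono::milliseconds(16));
              }
          }

          void audio_loop() {
              while (running) {
                  // mix and submit audio here
                  std::this_thread::sleep_for(std::chrono::milliseconds(16));
              }
          }

          int main() {
              std::thread ai(ai_loop);        // one core mostly busy with AI
              std::thread audio(audio_loop);  // another mostly busy with audio
              for (int frame = 0; frame < 600; ++frame) {
                  // main thread: input, world update, render submission
                  std::this_thread::sleep_for(std::chrono::milliseconds(16));
              }
              running = false;
              ai.join();
              audio.join();
              return 0;
          }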

          • by dbIII ( 701233 )

            Not even Civilization VI, the kind of game that possibly could use lots of cores for the computer's AI, manages to use 8 cores.

            That's a good illustration of that problem with developers, isn't it?
            The number of times I've seen something struggling on one core when there are seven free is maddening.
            Games, especially very graphical ones with a simulated 3D environment and sound sources located in 3D, have a lot of things they could be doing at once. The sensible thing is to divide threads by task (as said above)

              Not even Civilization VI, the kind of game that possibly could use lots of cores for the computer's AI, manages to use 8 cores.

              That's a good illustration of that problem with developers, isn't it?
              The number of times I've seen something struggling on one core when there are seven free is maddening.
              Games, especially very graphical ones with a simulated 3D environment and sound sources located in 3D, have a lot of things they could be doing at once. The sensible thing is to divide threads by task (as said above) but there isn't a lot of that going on.

              It's all very well offering armchair advice but the task of actually thinking through and making reliable logic in a parallel environment is beyond most humans; we just don't think that way and can't think that way. Until we come up with new programming paradigms that make this stuff easier & more reliable, we're always going to have this problem for general computing. Our languages and methods are designed around a single-threaded world.

              • by dbIII ( 701233 )

                It's all very well offering armchair advice

                That is insulting.

                making reliable logic in a parallel environment is beyond most humans

                We are not supposed to be "most humans"; we are supposed to be the ones who get those collections of silicon, copper, etc. running well. We're supposed to use all those years since high school to pick up the tricky stuff instead of stagnating.

                Our languages and methods are designed around a single-threaded world.

                Stuff was most likely written in FORTRAN to run in parallel before yo

                • It's all very well offering armchair advice

                  That is insulting.

                  Wow, you're really easy to insult! I would say it's insulting to just assume that everybody is lazy rather than try to think of why the task might be just too hard. When software is still released buggy, when as an industry we still haven't worked out how to 'over-engineer' for safety without still throwing up catastrophic security holes or system crashes in important systems, when games are written by over-worked programmers in crunch mode for months on end... the whole industry is still horribly immature.

                  • It's all very well offering armchair advice

                    That is insulting.

                    Wow, you're really easy to insult! I would say it's insulting to just assume that everybody is lazy rather than try to think of why the task might be just too hard. When software is still released buggy, when as an industry we still haven't worked out how to 'over-engineer' for safety without still throwing up catastrophic security holes or system crashes in important systems, when games are written by over-worked programmers in crunch mode for months on end... the whole industry is still horribly immature. But sure, it's laziness.

                    The world needs hundreds of thousands of programmers, and they're not all going to be at your level. The process of making software needs to be resilient enough to handle them; you can't simply wish everyone was as proficient, because they're never going to be.

                    It was a little belittling. But back to you: why should a game developer, when the CEO of Rockstar Games wants the unfinished piece of crap shipped before Christmas whether it is finished or not, waste time optimizing for what 1% of gamers have?

                    According to Steam, the majority still had 2-core CPUs until 18 months ago, thanks to the Core 2 Duos and cheap crappy laptops parents bought their kids. Today it is only just approaching 4 cores with no hyper-threading. Sure, Linus Tech Tips and Gamers Nexus on youtube a

                    • You are both retarded.

                      The reason Civ6 performs so poorly and doesn't use a lot of threads is that the entire fucking game is an interpreted script built on top of shitty XML garbage.

                      It's why those loading times are enormous even on an SSD.

                      These Civ games have not gotten more advanced since Civ3. They have just been pushed into easier to maintain scripting so that their lead developers can hand off the maintenance to junior developers sooner, freeing up the lead developers for another title.
                    • by bsDaemon ( 87307 )

                      Civ6 on Steam is the only game I have. My laptop has a Xeon w/ 64GB of ECC RAM. The first disk is 256GB of NVMe, then I have a 2TB disk I use for holding VMWare images, basically.

                      After about 200 turns, it starts crawling. The other day, I finished a full 500 turn game after a few days poking at it here and there, and when I was done I tried to "exit to desktop" and the whole thing fucking crashed. Of course, Windows 10 Pro wouldn't let me start task manager on the monitor hooked in over the DVI, and I coul

                    • My biggest issue w/ Civ6 is how constrained it has been. In previous versions of the game - aside from I & II - you could name your leader, your civilization, your cities. Civ 4 was the best - they had a scenario editor where you could start all the players you wanted in certain spots, preload them w/ whatever units, money, cities & resources you wanted, including renaming anything right from the base game, and then play. In Civ V, there never was a scenario editor: the closest to it was a mod cal

              • by Alioth ( 221270 )

                Well - no - /imperative/ languages and methods are designed around a single-threaded world and have been graunched to fit the multithreaded world.

                However, there are languages and methods that work very well in a parallel world. Erlang to give one example - the only problem is no one's going to be writing games in Erlang any time soon, even though it's quite easy to write a highly reliable Erlang application that spawns tens of thousands of threads.

              • It's all very well offering armchair advice but the task of actually thinking through and making reliable logic in a parallel environment is beyond most humans

                Having done this many times, it's not nearly as hard as you portray.

                It does require more thought than single-threaded, and it does require letting go of the mindset that you know exactly what every single bit is doing at any moment.

                But it's well within the grasp of the vast majority of programmers. Especially with modern languages making it easier and easier to do.
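
                To give one deliberately toy example of what "modern languages making it easier" can mean - this is C++17's standard parallel algorithms, an illustration rather than a claim about how any particular game is written:

                #include <algorithm>
                #include <execution>
                #include <iostream>
                #include <vector>

                int main() {
                    std::vector<double> samples(1'000'000, 1.0);
                    // The standard library decides how to spread this loop across cores.
                    std::for_each(std::execution::par, samples.begin(), samples.end(),
                                  [](double& s) { s = s * 0.5 + 0.25; });
                    std::cout << samples.front() << "\n";  // prints 0.75
                    return 0;
                }

                Whether the interesting parts of a game map cleanly onto independent chunks like this is, of course, exactly the hard part being argued about above.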

        • by Dutch Gun ( 899105 ) on Friday August 04, 2017 @03:34AM (#54938691)

          As a professional game developer, I can assure you that videogames are the exact opposite of an "embarrassingly parallel" problem. There's a LOT more to videogames than graphical processing: AI, pathfinding, physics, audio, resource and memory management, animation, world updates / occlusion systems, etc. These are all CPU-intensive subsystems, and many of them are inherently bound to global data (a virtual world simulation), which makes it extremely tricky to split off into independent threads (probably the only exception being audio).

          That means that each subsystem in the game must be carefully and painstakingly optimized for threaded performance. It's not possible to trivially split up all these subsystems by thread either. Many of these systems tend to interact with each other and the global world database, and that means the gains tend to be smallish and non-scalable in nature.

          Generally speaking, it's an extremely difficult problem, and one which I don't think the industry has really cracked yet. Believe me, if it were trivial to do, we'd have done it a long time ago.
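
          For readers wondering what even the simplest case of that careful optimization looks like, here is a bare-bones sketch (in C++) of one widely used idea: double-buffering the world state so parallel entity updates read last frame and write next frame without locks. It is an illustrative assumption, not a description of this poster's engine, and it ignores everything that makes the real problem hard - subsystems that must see each other's writes mid-frame, cross-entity interactions, and so on:

          #include <algorithm>
          #include <cstddef>
          #include <functional>
          #include <thread>
          #include <vector>

          struct Entity { float x = 0.0f, vx = 1.0f; };

          // Each worker reads only the previous frame and writes only its own
          // slice of the next frame, so no locking is needed during the update.
          void update_range(const std::vector<Entity>& prev, std::vector<Entity>& next,
                            std::size_t begin, std::size_t end, float dt) {
              for (std::size_t i = begin; i < end; ++i) {
                  next[i].vx = prev[i].vx;
                  next[i].x  = prev[i].x + prev[i].vx * dt;
              }
          }

          int main() {
              std::vector<Entity> prev(10000), next(10000);
              const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
              for (int frame = 0; frame < 60; ++frame) {
                  std::vector<std::thread> pool;
                  const std::size_t chunk = prev.size() / workers + 1;
                  for (unsigned w = 0; w < workers; ++w) {
                      const std::size_t b = w * chunk;
                      const std::size_t e = std::min(prev.size(), b + chunk);
                      if (b < e)
                          pool.emplace_back(update_range, std::cref(prev), std::ref(next),
                                            b, e, 1.0f / 60.0f);
                  }
                  for (auto& t : pool) t.join();
                  prev.swap(next);  // last frame's output becomes the new read-only snapshot
              }
              return 0;
          }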

          • by dbIII ( 701233 )
            I seem to have had this discussion about just about everything in computing since the mid-1990s, the difference being most people found a situation where more than a single thread worked. You've given some examples yourself about multi-threading.
            I know that games are pretty well all about state but that doesn't mean some tasks can't be done in parallel. You mentioned physics - consider how that's simulated in the numerical computing world and how some game engines are dealing with it in a similar way
            • That probably came off as teaching an old dog - you know all of that stuff - my post was really just addressing the situation of a single fast core running a single thread versus multiple cores - less relevant than it used to be but some stuff still pegs a CPU at 100% leaving the user to wait around and doesn't have another thread when it can.
              Now that the average software developer has finally grasped 64 bit and is starting to get a feel for multiple threads that difficult problem will be chipped away at a
              I seem to have had this discussion about just about everything in computing since the mid-1990s, the difference being most people found a situation where more than a single thread worked. You've given some examples yourself about multi-threading.
              I know that games are pretty well all about state but that doesn't mean some tasks can't be done in parallel. You mentioned physics - consider how that's simulated in the numerical computing world and how some game engines are dealing with it in a similar way. Many problems in a simulation do seem to be highly parallel if more than a tiny area is modelled.

              Did you even read his post? He is a game developer. What I wonder, and perhaps the grandparent could answer, is whether the problem is not just difficulty but rather that most gamers still used 2-core CPUs just 18 months ago per the Steam survey? If some still have Core 2 Duos and cheap AMD dual-core Walmart-special laptops, up to i5s, then is it economical to painstakingly parallelize code anyway?

              Until Ryzen takes over and 6 to 8 cores and hyper-threaded goodness become standard, why bother optimizing?

                Until Ryzen takes over and 6 to 8 cores and hyper-threaded goodness become standard, why bother optimizing?

                It's important not to conflate average numbers on Steam with what your game's particular demographic tends to use. When you say "Steam gamers", for instance, that's pretty much meaningless, as it's just an average across a *very* wide swath of gamers and gaming hardware.

                The latest AAA games will simply not run well on a dual-core laptop with 4GB of RAM and an integrated Intel GPU. The target audience for those types of games tends to be someone who has a moderately powerful desktop machine or a *ver

                  Until Ryzen takes over and 6 to 8 cores and hyper-threaded goodness become standard, why bother optimizing?

                  It's important not to conflate average numbers on Steam with what your game's particular demographic tends to use. When you say "Steam gamers", for instance, that's pretty much meaningless, as it's just an average across a *very* wide swath of gamers and gaming hardware.

                  The latest AAA games will simply not run well on a dual-core laptop with 4GB of RAM and an integrated Intel GPU. The target audience for those types of games tends to be someone who has a moderately powerful desktop machine or a *very* beefy laptop. Those machines are likely to have 4 cores and 8 hardware threads at *minimum*. So, threading optimization is obviously very much worthwhile for these types of games.

                  Now that I've gone indie (I left my professional job about four years ago), my own game's target hardware is much more modest. So, in that case, I don't have to put in the same degree of optimization effort in my own engine as large studios do, since I'm a one man studio and can't put in the resources they do. There are plenty of older or lightweight indie games that those WalMart laptops will run just fine.

                  In short, it really all depends on your game's minimum requirements and intended target audience.

                  Cool, thanks for answering my question. Curious if you own an AMD Ryzen for your work or plan to, if you do not mind me asking :-)

                  I want AMD to succeed and wish they had better per-core gaming performance (IPC). But hey, for running VMs at my job they are great lol.

    • Ehh, depending on your needs, Intel can still make a lot of sense. If single or low-threaded performance is more important to you (e.g. a lot of gaming, sadly), Intel still has the lead in terms of per-core performance. And from what I've gathered (admittedly, I haven't investigated either side much), AMD's integrated video performance lags behind Intel's, so if you're forgoing the video card, Intel may make more sense (again, take that with a massive grain of salt). But if you're going to be using it for w

      • And from what I've gathered (admittedly, I haven't investigated either side much), AMD's integrated video performance lags behind Intel's

        AMD has destroyed Intel on integrated graphics since the first AMD APUs, which came out over half a decade ago. It's not even close. The only parts Intel has that can compete are the "Iris Pro" chips, which have extremely expensive eDRAM bolted onto them, such that a similar-performing (in every way: CPU, GPU, I/O, etc.) AMD APU is literally 50% of the cost of Intel's "Iris Pro" solutions.

        So "what you gather" is complete crap. "what you gather" is apparently so sketchy that you need to question the sources you "

        • I was thinking specifically of Iris Pro, but thanks for calling me on that. I always welcome corrections when I get things wrong.

  • Not a big deal? (Score:5, Insightful)

    by Kjella ( 173770 ) on Thursday August 03, 2017 @09:52PM (#54937793) Homepage

    Intel has hardly ever had usable CPU upgrades on the same motherboard; generally they have kept compatibility for two consecutive generations. It's only like one year in between, and it has probably been for the OEMs' sake, not the consumers'. Maybe that's up to two years now that they've switched from tick-tock to process-architecture-optimization, but in any case the year-over-year improvements have been minimal, so why bother? If you so desperately want to replace last year's Z270+CPU, sell them as a package deal and buy a new Z370+CPU combo. Though if you're doing it for the six-core, do yourself a favor and buy a Ryzen, or if you must buy Intel, then an X299. Doing it just for the two extra cores is stupid. Except for the fanbois who'll take any chance to trash-talk the opposing team, is there anyone here who'll stand up and say they'll miss this upgrade path? I expect crickets...

    • Intel has hardly ever had usable CPU upgrades on the same motherboard

      Single core > multiple core > generation upgrade.

      There have been plenty of "usable" CPU upgrades on the same motherboard, especially on the lower end of the spectrum where the motherboard upgrade triples the cost of the total upgrade.

  • I needed to upgrade, and LGA 2011 came out, so I figured it would be a good platform to go with. Then Intel moved to V3 and no more CPU upgrades for V1.

    So I'm stuck with 2x 2011-V1 systems; I don't trust Intel, and this just proves it.

  • Coffee Lake got shaken up by AMD Ryzen. Intel, freaking out, is loading more cores onto their CPUs, as the newer i7s will go from 4 cores / 4 threads to 8 cores / 16 threads. The newer i5s are rumored to go from 4 cores to anywhere from 4 to 8 cores, with no hyperthreading.

    My guess is Intel quickly glued 2 CPUs together like they did with the i9 and now the socket has doubled in size :-)

    • The plan is apparently that i3 goes from 2 to 4 cores, i5 goes from 4 to 6, and mainstream i7 goes from 4 to 6 with HT

      • The K series of both the i5 and i7 will have 2 more cores, so the i5-8670K will have 6 cores and the i7-8770K will have 8 to match Ryzen. Another rumor is that the i5 series will now have hyperthreading, so the i5 will still have 4 cores (with the exception of the K, with 6) but will be hyperthreaded.

        These are just rumors. But it would make sense, if Intel had to quickly double the cores, that the die and pins would change. Of course, wouldn't this double the cost of CPU production, with 2 die sizes?

    • If they added more PCIe in a new socket then it's not so bad; it's more like it's about time they moved off of LGA 1151 / 1150, which are just about the same in number of PCIe lanes / RAM channels.

  • Given that Intel has abused its industry dominance to first create and then abandon de facto socket standards perhaps two dozen times - who's keeping count now? - over its history, this is hardly a shocking maneuver. Rather it is entirely expected. They like to force people to buy all new hardware sooner rather than later, considering they're collecting royalties for much of it that doesn't have its brand name on it. Back in the Good Olde Days when there were actually other manufacturers competing to populate those same de facto standard sockets, Intel would abandon sockets just to shake up those little guys and drain their resources trying to retool and keep up. Having fully succeeded in eliminating ALL competition for their own de facto socket standards, they now do it just for grins and giggles (and perhaps for those licensing fees).

  • by AbRASiON ( 589899 ) * on Thursday August 03, 2017 @10:22PM (#54937913) Journal

    We are in a bad, bad timeline for hardcore and even regular PC enthusiasts. The technological leaps have stagnated significantly, to the point where people with 7-year-old PCs need only double their memory and add an SSD (if they didn't already have one) and almost all tasks are fast enough.

    The delayed shift from 14nm to 10nm has been pretty bad across the industry. In fact, considering the performance improvements for processors and GPUs over the past 7 years, it seems quite apparent that the manufacturing process still plays a very heavy part in the performance boost between generations, just as much as the architectural design of the processor.

    I have a fairly specific use case, similar to but not quite the same as gamers' (I want a ridiculously fast PC for general use; I'm an extreme browser, exceeding 100-400 tabs at a time, but I don't game anymore, so I like mid to small ITX, quiet, professional-looking machines).
    I almost always have from 8 to 25 applications of varying kinds open. I really like a very responsive system at sub-$5000 expense (a 64GB, quad-channel, DDR4-4000 machine with 12 cores, liquid cooled, would be great, but the cost would be insane; honestly, a complete top-of-the-line but not HEDT machine would likely do what I need at easily 30 to 50% savings).

    Unfortunately Intel is all over the place with product varieties; when you look around the Intel ARK site (the new one is awful, great job web developers, great job, another unnecessary redesign) you can see just how many processors they make, from 6W to 150W across all kinds of segments.
    Sadly the days of a "preemo desktop" CPU being their primary bread and butter are over, and that's why we see ridiculous things like what this article is describing; they are diversified everywhere and the complexity seems beneficial to their bottom line.

    The rumor is the Coffee Lake 6-core desktop processor won't work with the existing Z170/Z270 chipsets, despite the fact it's basically the same family as the last 2 CPUs for those boards (i7-6700 / i7-7700, etc.), just with 2 more cores 'glued on'.
    We also don't know if this new processor was ever intended to come out at 14nm or whether it was originally 10nm.
    There's talk that the new chipset, the Z370, isn't even any more than a re-badge of the Z270! Which makes forcing people to use it even more ridiculous.
    There's a "Z390" (?) - a Cannon Lake chipset or "PCH" - coming out next year, but that chipset is only for Cannon Lake processors, except there are (apparently) none of those planned for desktop.

    So, do you buy an i7-8700K now and put it on a Z370, knowing that you might be missing out on some new features in 2018, like Bluetooth 5 and WiFi ac being built into the chipset itself?

    The whole thing is messy and awkward to follow, it's only gotten worse the past few years.
    Honestly, I think the best thing to do, if you're capable, is to stop reading the news about this stuff and just buy what's best when you need a new machine. It's endlessly time-consuming and confusing to be an educated consumer with PC stuff. (I should know, I've wasted possibly years of my life googling / reading this rubbish since I first started building my own machines 20 years ago)

    But the long and short of it is, stuff just isn't improving at a fantastic rate anymore. Even if you're silly rich, you can't buy a machine that utterly decimates other machines easily. People can get 60 to 80% of your performance for 1/4 of the cost or less.

    • The delayed shift from 14nm to 10nm has been pretty bad across the industry. In fact, considering the performance improvements for processors and GPUs over the past 7 years, it seems quite apparent that the manufacturing process still plays a very heavy part in the performance boost between generations, just as much as the architectural design of the processor.

      AMD went from 22nm everything, to both 14nm and 10nm ... this year.

      The only company delayed on 10nm is Intel. Their specialized 3D "Tri-Gate" transistors apparently can't be produced economically at 10nm. Intel has just back-pedaled to working with regular old FinFETs like the rest of the industry. They are late to the 14nm FinFET game, so it's still going to be another year before Intel figures out how to do 10nm FinFETs like the 3 other companies that have already beaten them to 10nm production.

    • It seems improving means better phones with better battery life. PCs are boring, like how kids in the 1980s turned up their noses at mainframes and DECs, even though nerds at that time debated big iron, and mainframes were the wave of the future and cool, etc.

      It seems crappy no-name phones can last for many days without a recharge. Not the same with heavy smartphones.

      More transistors do jack with the x86, which is why Intel and AMD are trying more cores. Intel gave up on this as they are afraid once people upgrade to a

    • So, do you buy an i7-8700K now and put it on a Z370, knowing that you might be missing out on some new features in 2018, like Bluetooth 5 and WiFi ac being built into the chipset itself?

      Yes. Dongles are cheap and effective.

      Honestly, I think the best thing to do, if you're capable, is to stop reading the news about this stuff and just buy what's best when you need a new machine.

      I agree completely.

  • I'm still waiting for quad-core Arduino ATmega328P.

  • by WittyName ( 615844 ) on Thursday August 03, 2017 @11:12PM (#54938073)

    I upgrade for each new generation of memory. I will soon upgrade to a DDR4-based system.

    Wait for the new standard to hit price parity, then grab whatever CPU is at the best price/performance point. New faster PCI or whatever, sure. Give me the new fast RAM!

    All computing comes down to bandwidth. Memory bandwidth is always the first roadblock. Then disk, and later network.

    Yes, about every 4-5 years. Shrug, works for me!

    SSD was my only upgrade in about 4 years!

  • Why the same socket then?

    They need to add more PCIe lanes / boost the DMI link speed. Just going to 6 cores at the top end seems like another Kaby Lake-X joke.

    AMD is killing them, and AMD has more PCIe lanes at all levels (other than maybe a 4-CPU Intel system, whose cost will be way higher, at least 2x or more than a good AMD server system).

  • Intel is thrashing around. They've moved from "you want our stuff because it's the best" to, "you're going to buy our stuff because we'll make deals with people you buy computers from". That's not well described, and I'm not an expert (my computing needs are modest) but I've seen this happen before with other tech and non-tech companies. They get big and powerful, and they forget it was willing buyers who made them that way. As far as I'm concerned, you can put me down as a default AMD customer for my n

    • by Dunbal ( 464142 ) *

      They get big and powerful, and they forget it was willing buyers who made them that way.

      This is typical of most large businesses. They get big and powerful, and most of the people who got them there cash out and move on to other things or retire. A bunch of MBAs who have never learned how to actually make money are hired. Nepotism and cronyism take hold, and management goes from a staff of competent people to a staff of sycophants, brown-nosers and yes men/women. And eventually the golden goose is dead and butchered

      Large corporations are part of the death cycle of business. Usually some of t

  • Intel just keeps giving me more and more reasons to make sure my next CPU purchase is AMD. Add another one to the pile. Well done, Intel.
  • Intel has always been doing this. However, I'm not so sure the upgrade problem is that much of a problem in real life.

    I always build my own PCs and typically go for the best performance-per-euro solution. I have often looked into upgrades, but hardly ever were such impossible-due-to-socket-changes upgrades really worth it from a performance-per-euro point of view. It's almost always a better idea to save your money and buy a new CPU+RAM+mobo combo a year later than to upgrade now.

    Upgrading might be interest

  • Intel is likely looking at this in terms of market size, potential income, etc. That's like using only technical analysis (looking exclusively at the random charts trying to find patterns in the noise) for your stock trades.

    The people they are talking about are the more technically proficient users among consumers; their perceptions of the superiority of one platform vs another are what drive the decisions everyone else makes and repeats to those who think they are the "technical guy", and it spreads from th