

Intel Hardware

Intel Launches Its First 10-Core Desktop CPU With Broadwell-E

Two years after the release of Intel's Haswell-E platform, which popularized 8-core processors, the chipmaker on Tuesday unveiled the Broadwell-E family, which includes an "Extreme Edition" Core i7 chip with 10 cores and 20 threads. (Do note that Intel is intentionally not calling it deca-core.) Intel says the Extreme Edition is designed for gamers, content creators, and overclockers. From an NDTV report: The new Intel Core processors are built on the 14nm fabrication process and are part of a 'semi-Tock' release, fitting neither the Tick nor the Tock of Intel's cycle. They come with Turbo Boost Max Technology 3.0 for more efficient core allocation in single-threaded workloads, giving up to 15 percent better performance than the previous Haswell-E generation. All four new Intel Core i7 Enthusiast processors, codenamed Broadwell-E, support 40 PCIe lanes and quad-channel memory, and bear a TDP of 140W. Give Intel $1,723 and the Extreme Edition is yours.


  • by Anonymous Coward

    "Intel says the Extreme Edition is designed for games, content creators, and overclockers."

    Also known as people too dumb to realize they're paying a thousand percent markup for commodity hardware.

  • by funwithBSD ( 245349 ) on Tuesday May 31, 2016 @11:30AM (#52217349)

    but does it go to 11?

  • deca-core (Score:4, Interesting)

    by LichtSpektren ( 4201985 ) on Tuesday May 31, 2016 @11:31AM (#52217361)
    "Do note that Intel is intentionally not calling it deca-core."

    Perhaps somebody could elaborate on this?
    • by U2xhc2hkb3QgU3Vja3M ( 4212163 ) on Tuesday May 31, 2016 @11:39AM (#52217437)

      deca is one letter away from decay?

    • by Pascoea ( 968200 )

      I was thinking the exact same thing. Perhaps this? link [wccftech.com]

      That really is a weird comment though.

    • deca-core = dick a core

      Intel doesn't want to be confused with AMD.

    • Re:deca-core (Score:5, Informative)

      by ShooterNeo ( 555040 ) on Tuesday May 31, 2016 @11:58AM (#52217571)

      One possibility is that "deca" is slang for "Deca Durabolin". This anabolic steroid is infamous for causing loss of sexual function as a side effect - which is ironically amusing, since one would imagine most steroid users are trying to make their bodies more attractive to mates of whichever gender they prefer.

      The loss of sexual function is called "deca dick".

      Still, one would imagine that having your monstrously powerful new CPU named after a steroid isn't the worst thing. This chip certainly is "juiced up", it's the most powerful CPU Intel has ever released.

    • by MobyDisk ( 75490 )

      Maybe it's just time to end that practice. Once we get to 17 cores, I don't want to hear "Intel launches its first septendecacore CPU..."

      • by cfalcon ( 779563 )

        Oh, it is FAR too late for that...

        https://www.amazon.com/Intel-X... [amazon.com]


        Deca-core is the correct name, whether Intel calls it that or not. Everyone knows that deca means ten, and the strange stuff I see proposed in this thread (among bodybuilders it is slang for a certain anabolic...) is just not the reason they aren't going with it. They probably found that "ten core" markets better than "deca-core", and they may not even have a reason beyond that.

    • by AmiMoJo ( 196126 )

      Most normal people probably don't know what "deca" means, so Intel wants to call it a 10 core chip to avoid confusion. I think outside of computer and maths enthusiasts quad is about as far as most people go.

  • by DidgetMaster ( 2739009 ) on Tuesday May 31, 2016 @11:55AM (#52217551) Homepage
    People who might actually need something like this are those who are running a lot of different applications simultaneously or have individual apps that were programmed to do lots of processing in parallel. I am currently building a data management system that uses lots of threads to greatly speed up processing. The more cores are available, the faster I can process large data sets. With column based relational tables, I can assign a different thread to process each column separately. If there are 100 columns in a table, lots of threads are needed. The more threads that can run at the same time in the CPU, the faster the query will complete. These processors are not just for gaming.
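    The column-per-thread scheme described above can be sketched roughly like this (Python used for illustration; the table data and the per-column aggregate are hypothetical stand-ins for real query work):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical column-oriented table: each column is stored as its own
# array, so a per-column scan can run on its own worker.
table = {
    "price": [9.99, 19.99, 4.50, 7.25],
    "qty":   [3, 1, 10, 2],
    "tax":   [0.80, 1.60, 0.36, 0.58],
}

def scan_column(item):
    # Stand-in for real per-column query work (filter, aggregate, etc.)
    name, values = item
    return name, sum(values)

# One task per column; with enough cores, columns are scanned in parallel.
with ThreadPoolExecutor(max_workers=len(table)) as pool:
    results = dict(pool.map(scan_column, table.items()))
```

    (Caveat for this sketch: in CPython, threads only run pure-Python work one at a time because of the GIL; a real engine would do the heavy lifting in native code or use ProcessPoolExecutor to actually occupy all 10 cores.)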
    • by Junta ( 36770 )

      One could even argue that they aren't even that good for gaming. In fact, if I were building for gaming, I'd go for the 6-core version with the highest single-core performance, as that tends to matter a lot more to games; as a rule, games don't have a lot of CPU threads.

      However, for what you describe, Intel would steer you toward 'Xeon' server or workstation.

    • Yeah, I'm going to save my money until Intel can release a deca-core CPU designed for web browsing enthusiasts.

    • by ( 4475953 )

      I would like to have one for audio processing, but I can't afford it. :-/

    • by AmiMoJo ( 196126 )

      I wish more IDEs had good support for multi-threaded compilation. Atmel Studio lets you use it, but the errors and warnings all get mixed up between modules with no easy way to sort them. The abomination known as MPLAB is even worse... I don't think it even supports it, with Microchip's custom (commercial!) version of GCC.

      • Edit the make script to pipe to separate logs.

        The autogenerated make isn't multithread aware enough, but there should be a project setting to use a custom make. Of course you will have to futz with each one. Once you get it down to a routine, you will have a really good bug report for the devs.

        The OS is doing backflips to mix those errors up like that, I bet fixing it speeds up compiles a tiny bit.
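        The "pipe each module to its own log" workaround might look something like this (a Python stand-in for a custom make recipe; the module names are hypothetical, and `echo` stands in for the real compiler command):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Hypothetical module list; in a real project these come from the makefile.
modules = ["main", "uart", "adc"]

def compile_module(name):
    # Each job captures its own output and writes it to a per-module log,
    # so errors from parallel jobs never interleave.
    proc = subprocess.run(
        ["echo", f"compiling {name}.c"],  # stand-in for: gcc -c {name}.c
        capture_output=True, text=True,
    )
    Path(f"{name}.log").write_text(proc.stdout + proc.stderr)
    return proc.returncode

with ThreadPoolExecutor(max_workers=4) as pool:
    codes = list(pool.map(compile_module, modules))
```

        (If the toolchain uses GNU make 4.0 or later, `make -j --output-sync=target` gets the same effect without custom logs.)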

    • I could use it for 3D rendering. Lots of threads mean way faster results, means I can push up the quality that much more. It helps to have a beefy video card (or two), but depending on your software and the content itself, vast improvements in rendering time/quality can still be had with more CPU cores. I definitely want this.
    Except each of those threads is going to pollute the L3 cache of the others, which happens precisely in data-intensive workloads like your DB case. That means you're not going to see anywhere near 100% scaling for each additional core.
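    The sub-linear scaling point can be ballparked with Amdahl's law, treating cache/memory contention as an effective serial fraction (the 10% figure below is an illustrative assumption, not a measurement):

```python
# Amdahl's-law estimate of parallel speedup when some fraction of the
# work is effectively serial (e.g. shared-L3 and memory-bandwidth
# contention behaving like serialization).
def speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a 10% effective serial fraction caps a 10-core chip
# at roughly 5.3x, well below the ideal 10x.
ten_core = speedup(10, 0.10)
```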
  • by Anonymous Coward on Tuesday May 31, 2016 @12:02PM (#52217607)

    With AMD Zen right around the corner (October-ish) I believe Intel is milking their performance monopoly as much as they can with their $1700 CPUs.

    Zen should give us roughly Skylake IPC (some predict a little better, some a little worse). This being AMD, they'll have to undercut Intel's prices if they want market share. If the arch is good, this will lead to a price war, which should drive Intel down to AMD price levels.

    With any luck, high end Zen launch will be a 16-core with Skylake level single thread performance for $999. Sign me up for one of those!

    • by Joe_Dragon ( 2206452 ) on Tuesday May 31, 2016 @12:27PM (#52217865)

      AMD Zen has more PCIe lanes than Skylake. With all the PCIe-based storage around, Intel's Skylake can't even power 2 M.2 PCIe cards + 1 video card at full speed.

      With AMD Zen it looks like 2 video cards at full speed, plus lots left over for storage / network / USB / TB 3.0 and more.

    I like AMD, but it's not going to happen. AMD had their chance when Intel didn't do 64-bit with the Pentium D; they weren't able to capitalize on it because Intel played dirty and prevented OEMs from using them.

      Now that Intel is actually competing even if AMD can produce an equally good processor design Intel's process lead will give Intel a 15% advantage. AMD will never catch up to that. The process advantage is substantial, it allows Intel to have significantly more transistors for the same power usage.

      • The process advantage is substantial, it allows Intel to have significantly more transistors for the same power usage.

        ..except that the process advantage not only ends this year - Intel is falling behind this year as TSMC opens its doors to full production in its 10nm fab.

        This doesn't mean AMD gets that fab time (contracts with GloFo may prevent it), but AMD isn't really Intel's competitor. Only in imaginary simpleton land is a CPU design company like AMD a competitor to a semiconductor manufacturer like Intel.

        The competitors to Intel are TSMC, GloFo, Samsung, and Toshiba/SanDisk. At this moment all of these have 14nm, 15nm

      • by Pulzar ( 81031 )

        The process advantage is substantial, it allows Intel to have significantly more transistors for the same power usage. Intel is significantly ahead in the process game, almost 2 whole nodes. To beat Intel, AMD would need a processor design that's 20% better than Intel's. Everything is IPC/watt these days so they need to squeeze more IPC per watt than Intel and they just can't do it with the process node problem.

        Don't worry though, Intel has to keep AMD around for anti-trust reasons, so they won't ever go ou

  • by M0j0_j0j0 ( 1250800 ) on Tuesday May 31, 2016 @12:05PM (#52217627)

    Do the coils also whine 10 times more? It has been a nightmare with this new Skylake.

  • I suspect that they avoided the "dec" prefix because it is too close to "decimate". This was the old Roman military discipline in which soldiers were divided into groups of 10. Each group would draw lots, and the soldier who drew the unlucky lot was killed by his 9 fellow soldiers. Probably not so great from a marketing perspective. Sort of like an early form of stacked ranking.
    • Beaten to death with the bare hands of his fellow soldiers.

      Drawing lots would likely produce a better outcome vs. letting the PHB pick (stacked ranking). The Roman legions couldn't have worked with 50% of effort devoted to gaming rank and rate.

    • by gtall ( 79522 )

      It was a military punishment, meant for soldiers who were mutinous in one form or another en masse, and it wasn't universally used. It was felt (not by the soldiers involved) that whacking them all meant you had no soldiers, but you could blow some non-mutinous air into the lot if only 1/10 got whacked.

      Modern corporations do it all the time, though not adhering to the 1/10 rule. Instead it is felt by the MBAs that when bonuses are reduced, this must be because of inattention to their bonuses by the rank and file m

    • by swb ( 14022 )

      From what I understand, it was largely a forgotten practice even among Romans. I think it was Marcus Crassus who revived it after his personally funded army lost an initial battle with Spartacus' slave army.

  • by rsilvergun ( 571051 ) on Tuesday May 31, 2016 @12:06PM (#52217641)
    Besides maybe Ashes of the Singularity, does any game use more than 2 cores (not counting crap like Far Cry 3, which binds to core 3 for some inexplicable reason)?
    • I don't think this is hardware for gamers. But if you can type make -j 20 happily you really will like this CPU.

    • ArcheAge uses the CryEngine and uses however many cores you have IME.
    • not counting crap like Far Cry 3 where it binds to core 3 for some inexplicable reason

      Far Cry 3...core 3...sounds like an easter egg :-P

  • While a few companies are rewriting code to multi-thread, many are not. Intel's single-thread champ is still the venerable two-year-old i7-4790K, which will smoke any "enthusiast" chip out there for most things enthusiasts care about, like gaming, rendering, etc.
    • All their effort has been on power savings, which for desktop PCs is a pretty darn low priority. I'd much rather they dropped the crappy integrated GPU in the higher-end chips and filled that area with a couple more cores. Instead, to go beyond 4 cores you have to buy the higher-end motherboard and pay Extreme pricing.

      Who would buy a 6600k or 6700k and not buy a separate GPU?! It boggles my mind. A good 40% of the die area will never get used by the majority of buyers, there is enough room to fit 8 core

      • I suppose the integrated GPU is useful when your discrete GPU burns out (it happens). But on a "K" chip I'd prefer the most barest minimest iGPU just-functional-to-get-a-desktop-running, and use the saved space for a core or more cache.

  • ...in bases of two, four or eight.
  • The value isn't in the chips, but in the fact that you are less likely to need a cantankerous dual-CPU motherboard for workstation chores such as CAD, design, Photoshop, video editing, and moderate raytracing.

  • Few average people will be running these LGA2011 boards/processors. The important news is that the mainstream i7 line now reaches 6 cores. It doesn't affect much else, as workstations have been built with Xeon processors for many years now, and those have had more than 4 cores for quite a few years.

    Only gamers and people with an OC fetish buy "Extreme" processors; everyone else just buys Xeons.
