
Intel Launches Core i7-4960X Flagship CPU

MojoKid writes "Low-power parts for hand-held devices may be all the rage right now, but today Intel is taking the wraps off a new high-end desktop processor with the official unveiling of its Ivy Bridge-E microarchitecture. The Core i7-4960X Extreme Edition processor is the flagship product in Intel's initial line-up of Ivy Bridge-E based CPUs. The chip is manufactured using Intel's 22nm process node and features roughly 1.86 billion transistors, with a die size of approximately 257mm square. That's about 410 million fewer transistors and a 41 percent smaller die than Intel's previous gen Sandy Bridge-E CPU. The Ivy Bridge-E microarchitecture features up to 6 active execution cores that can each process two threads simultaneously, for support of a total of 12 threads, and they're designed for Intel's LGA 2011 socket. Intel's Core i7-4960X Extreme Edition processor has a base clock frequency of 3.6GHz with a maximum Turbo frequency of 4GHz. It is easily the fastest desktop processor Intel has released to date when tasked with highly-threaded workloads or when its massive amount of cache comes into play in applications like 3D rendering, ray tracing, and gaming. However, assuming similar clock speeds, Intel's newer Haswell microarchitecture employed in the recently released Core i7-4770K (and other 4th Gen Core processors) offers somewhat better single-core performance."
  • Die size? (Score:5, Informative)

    by msauve ( 701917 ) on Tuesday September 03, 2013 @08:32AM (#44745779)
    "a die size of approximately 257mm square."

    I suspect that should be 257 square mm. A 257 mm square die couldn't even be covered by a standard sheet of paper (US: letter, EU: A4).
    • TFA says "15 mm x 17.1 mm", i.e. about 257 square mm.
    • It also wouldn't fit on a 300mm (diameter) wafer, since the diagonal of a 257mm square is about 363mm... 400mm would work, and even leave some room around the edges; but I probably don't even want to know what a CPU so large that you get only 1 die per 400mm wafer would cost.
      • It also wouldn't fit on a 300mm (diameter) wafer...

        Well... perhaps if you cut the ingot lengthwise instead of normal to the axis?

        • Wouldn't be compatible with any of the other processing equipment; but you could do it.

          My impression (as a layman) is that getting fairly substantial amounts of silicon isn't a big deal, with difficulty increasing as your demands concerning purity, mono-crystallinity, and dimensional accuracy go up; but that the cost of the entire chip fabrication process gets very big, very fast, if you want to work with larger wafers.
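
As a back-of-the-envelope check on the die-size subthread above (an editor's sketch; the formula and all inputs are assumptions, not from TFA), a common gross-dies-per-wafer approximation shows why ~257 square mm is routine on a 300 mm wafer while a literal 257 mm x 257 mm die is geometrically impossible on one:

```c
/* Editor's sketch: gross dies per wafer via the common approximation
 *   DPW ~= pi * (d/2)^2 / S  -  pi * d / sqrt(2 * S)
 * where d = wafer diameter (mm) and S = die area (mm^2).
 * Values are illustrative; edge exclusion, scribe lines, and yield are ignored.
 * Build with: gcc -O2 dies.c -lm */
#include <math.h>
#include <stdio.h>

static double dies_per_wafer(double wafer_mm, double die_mm2)
{
    const double pi = 3.14159265358979;
    double r = wafer_mm / 2.0;
    return pi * r * r / die_mm2 - pi * wafer_mm / sqrt(2.0 * die_mm2);
}

int main(void)
{
    /* ~257 mm^2 (the intended figure) yields a couple hundred gross dies. */
    printf("257 mm^2 on a 300 mm wafer: ~%.0f gross dies\n",
           dies_per_wafer(300.0, 257.0));

    /* A literal 257 mm x 257 mm square has a ~363 mm diagonal,
     * so it cannot fit on a 300 mm wafer at all. */
    printf("Diagonal of a 257 mm square: %.0f mm\n", 257.0 * sqrt(2.0));
    return 0;
}
```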
    • Skip the die size. What's the SPECint and SPECfp? Do processor makers submit these numbers anymore?

      Any other metrics are secondary.

      • I believe that those numbers went away when RISC went away
      • by amorsen ( 7485 )

        SPECint and SPECfp are a bit useless; they only test a single core, and with modern CPUs you cannot just multiply that number by the number of cores and get a meaningful result.

        SPEC has attempted to fix that simply by running multiple copies of the benchmark and aggregating the result as "SPECrate". Whether that measures anything which is useful for actual workloads is debatable. It certainly does not reflect a modern multithreaded workload.

  • by K. S. Kyosuke ( 729550 ) on Tuesday September 03, 2013 @08:32AM (#44745781)

    Low-power parts for hand-held devices may be all the rage right now, but today Intel is taking the wraps off a new high-end desktop processor

    Actually, I think that useful computation per joule is all the rage all over the device size scale. See? This one works everywhere.

    • That's not necessarily true. Someone running a photo editing app on their Galaxy and saying it's slower than their PC is one thing, but that's wrong on so many levels. My not-so-smart phone runs Brew, and its 1000mAh battery has a realistic idle time rating of 27 days and screen-off talk time of something like 16 hours. If someone basically wants a 4-ounce laptop with a 4" screen that runs for 48 hours, they're dreaming. More reasonable people just want absurd battery life and realize that a phone can'
  • by CajunArson ( 465943 ) on Tuesday September 03, 2013 @08:37AM (#44745805) Journal

    These chips are slightly faster (given equal core counts) than their predecessors but not in any interesting way.

    However, you have to remember that these are really server chips that are repurposed for high-end desktop use. The one vital metric where these chips shine is in their power consumption (or lack thereof): Techreport did a test where the 6-core 4960X running full-bore uses about the same amount of power as a desktop A10-6800K part ( http://techreport.com/review/25293/intel-core-i7-4960x-processor-reviewed/9 [techreport.com] )

    That level of power efficiency will do wonders in the server world and these chips (and their 12-core bigger brothers) should do quite well in servers.

    • Considering that AMD is a gen or two behind, and their chips aren't currently known for their efficiency, I don't know how impressive that is.

    • I doubt anyone who is serious about cutting their server power costs would go with this new chip in the first place. The Xeon E5 T-series parts are purposely underclocked but have a high single-core turbo, so they benchmark (on single operations) at a somewhat close speed while drawing immensely less power; something like 50% less on most chips, if I remember correctly.
    • That level of power efficiency will do wonders in the server world and these chips (and their 12-core bigger brothers) should do quite well in servers.

      And later this year, when Atom goes to 22nm, it may also do quite well in mobile phones, given they've already developed a quality ARM emulator.

    • This chip looks like it would be fantastic in engineering workstations - particularly ones running the Linuxes or BSDs. Whereas HDL CAD applications of old would run on Sun or HP workstations, the current ones would do well on one of these running either Windows 7 or Scientific Linux, and then the cad apps in question
  • by JoeyRox ( 2711699 ) on Tuesday September 03, 2013 @08:37AM (#44745807)
    It's laughable how small the performance gains are between recent generations of Core processors. I realize there are other improvements like power consumption and integrated GPU performance but the desktop gamer isn't going to drop another grand to save watts or get better performance on an IGPU he never will use anyway.
    • by gweihir ( 88907 ) on Tuesday September 03, 2013 @08:49AM (#44745881)

      There are two reasons:
      1) AMD is really behind after they reworked their architecture, hence no pressure on Intel.
      2) Moore's Law ended some time ago on a per-core basis and nobody noticed.

      • Everyone says AMD is behind but that's based on a ridiculous comparison. Just do the #1 most important benchmark, speed vs price, and AMD is winning. Yeah, power vs performance comes into play but at least in bang for your buck, they're crushing Intel. It's just like Roundy's with food. If you're almost as good, just price it lower to compensate and everyone will buy your product instead. If Intel wanted to put AMD in some real trouble, they wouldn't have kept the i3-2100 at the same price for 2 years
        • Speed vs Price is important when comparing similar speeds. Price doesn't matter if the speed isn't good enough, which is where Intel is winning.

          • "Not good enough for what?" is the question though. We're talking about desktop CPUs here.
            • That's a fair question. I can think of many things that I do and new features in programs that I love that would probably easily run on a very old computer. I used a 2003-era laptop until 2011 that met the vast majority of my needs. That's why I chose an i3 for my new desktop. It had excellent bang-for-the-buck and was so much faster.

        • by Hadlock ( 143607 )

          And yet nobody is buying AMD products in the desktop or server space. AMD has consistently been below 10% market share for over a decade, I believe.
           
          Price/performance doesn't matter a whole lot when the difference in price on the chips is less than $100. If you're buying i3/i5/i7-class chips you're already looking at real world performance rather than budget.

      • There are two reasons:
        1) AMD is really behind after they reworked their architecture, hence no pressure on Intel.

        That's a really stupid thing to say, as if thousands of highly skilled engineers at Intel turn up every morning and just don't give a shit. If you've been paying attention, you'd know that if there's anything lacking in the desktop/server chips, it's probably due to Intel going all out to take ARM's business in the mobile and tablet space.

      • I'm as ignorant about hardware as they come on Slashdot, so this is an honest question. By 2) are you saying the demand for per-core performance decreased, or that the capabilities bottomed out? I remember hearing we were at some point going to break Moore's Law because there are finite limits to how much performance you can get out of silicon, and I've also not been blown away by increases in graphics or power requirements of games or applications. Which is it?
    • Lower watts in = lower watts out = more thermal room for overclocking.

      Tell me again how gamers aren't interested in how much power a stock CPU uses.
      • by JoeyRox ( 2711699 ) on Tuesday September 03, 2013 @08:56AM (#44745941)
        Sure, I'll tell you again. Even though the power consumption drops with each new process shrink, the heat drop isn't commensurate because the transistors are packed more tightly together. Do a search online about how poorly Ivy Bridge overclocks vs. Sandy Bridge on a relative CPU frequency basis.
        • Comment removed (Score:4, Informative)

          by account_deleted ( 4530225 ) on Tuesday September 03, 2013 @09:30AM (#44746233)
          Comment removed based on user account deletion
        • While you are correct, that wasn't the reason in that case. The Sandy vs. Ivy overclocking difference was due to the way they applied the thermal compound inside the CPU itself. They did it differently, and it jacked up the average core temp on every Ivy chip by as much as 10C in some cases.

          Anyway, in response to the original post, lower power means cheaper power components that can't handle as many watts so it actually limits the amount of power the CPU can use. They don't make a chip twice as efficient and then leave the s
          • Anyway, in response to the original post, lower power means cheaper power components that can't handle as many watts so it actually limits the amount of power the CPU can use.

            Do you have any evidence of this? That sounds like pure conjecture to me.

    • Who said anything about a gain? I looked at 3 respectable benchmarks and overall, this chip is slightly slower than a 4770K.
    • AVX vs. SSE4.2: 8 floats per instruction vs. 4 floats per instruction (Sandy Bridge vs. Nehalem).
      AVX2 vs. AVX: 8 x 32-bit integers per instruction vs. 4 x 32-bit integers per instruction (Haswell vs. Ivy Bridge).

      The performance gains certainly are there. As per usual, meaningless benchmarks are meaningless.
      • by bored ( 40072 )

        It's not that clear-cut. The throughput of the instructions counts as well. In many cases AVX isn't faster than SSE because the core can retire 2x the SSE instructions per cycle. Furthermore, it can be harder to fill an 8-wide vector than a 4-wide one.

        Think how useful 4x4 matrix operations are for 3D graphics. Then consider how to write optimal code using an 8-wide vector.

        Now all that said, AVX(/2) can really win in some cases
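
To make the width difference in the two comments above concrete (an editor's sketch, not code from either poster; the function names and toy workload are invented): an SSE register holds 4 single-precision floats while an AVX register holds 8, which is also why a 4x4 float matrix maps neatly onto 4-wide vectors but needs two-rows-per-register shuffling at 8 wide.

```c
/* Editor's illustration of 4-wide (SSE) vs. 8-wide (AVX) float adds.
 * Computes c[i] = a[i] + b[i]; n is assumed to be a multiple of 8.
 * Build with: gcc -O2 -mavx simd_widths.c */
#include <immintrin.h>

void add_sse(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i += 4) {          /* one __m128 = 4 floats */
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
    }
}

void add_avx(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i += 8) {          /* one __m256 = 8 floats */
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(c + i, _mm256_add_ps(va, vb));
    }
}
```

Half the loop iterations on paper, but as the parent notes, real gains depend on instruction throughput, memory bandwidth, and whether the data actually arrives in 8-wide chunks.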

    • by Cajun Hell ( 725246 ) on Tuesday September 03, 2013 @11:13AM (#44747191) Homepage Journal

      It's laughable how small the performance gains are between recent generations of Core processors. I realize there are other improvements like power consumption and integrated GPU performance but the desktop gamer isn't going to drop another grand to save watts or get better performance on an IGPU he never will use anyway.

      The only thing that's laughable is that the desktop gamer thinks everything is about him and that his concerns add up to even 1% of the market.

      • by amorsen ( 7485 )

        If you aren't gaming, why buy a desktop? I suppose there is still AutoCAD and compiling, but that market seems even smaller than the gaming market.

  • 3.6GHz base clock is the fastest we've had since the last-generation P4s, and with the obviously superior IPC of the IB this thing's going to be a monster for certain workloads where the code doesn't scale well to multiple cores. The only downside is it's not 8 cores/16 threads at those speeds, which is a bummer for virtualization hosts. Oh well, the E5-2670s at 2.6GHz do a pretty good job =)

  • by Lumpy ( 12016 ) on Tuesday September 03, 2013 @09:58AM (#44746471) Homepage

    Because the only multi-chip processors are still 4 years behind this. Why don't they just enable the ability for me to drop 4 of these on a single motherboard so I can have my 24-core monster for editing and rendering 4K video?

    • I still have an old Abit BP6 system sitting next to my desk gathering dust if you want it. I even have 4 extra celeron processors for it!

      Back when men were men, and dual core meant two processors!

      Sadly, other than specialized software, most programs are still designed for a single core anyway, making the performance gains negligible for most people. So other than as an expensive marketing ploy to a small enthusiast market, there's not much market advantage for any company to do so...

  • That's an important question for me, as I write the base-level concurrency libraries for our company.

    I wanted to get a 4770K but Intel disabled TSX (Transactional Synchronization Extensions) on that CPU.
    • by adisakp ( 705706 )
      For what it's worth, none of the Haswell 'K' line supports TSX. You actually have to buy a cheaper CPU to get this feature, which is odd... maybe it didn't validate well with overclocking? The new 'HQ' line seems to support it but the new 'R' line does not.

      Anyhow, I'm wondering if the 'X' line supports TSX or not. I can't find docs or specs that answer one way or another right now.
    • It's almost an oxymoron if you are talking about a single-socket Intel CPU. You don't actually need the transactional extensions to make things go fast. It's only when you get to multi-socket that the cache-management bandwidth (which is what the transactional extensions are able to avoid) becomes a big deal.

      If the purpose is to test code performance then it is better to test without transaction support anyway since transaction support is not a replacement for proper algorithmic design. Or to put it ano

      • by adisakp ( 705706 )

        It's almost an oxymoron if you are talking about a single-socket Intel CPU. You don't actually need the transactional extensions to make things go fast

        Not true... I've written an entire concurrency system including a lock-free library and a multicore memory manager [gdcvault.com]. There are a number of places where TSX offers a large speed improvement even on a single core.

        If the purpose is to test code performance then it is better to test without transaction support anyway since transaction support is not a replacement for proper algorithmic design. Or to put it another way... if you code SPECIFICALLY for one of the two intel transactional models that means you will probably wind up with very sloppy code (such as using global spinlocks more than you need to and assuming that the underlying transaction just won't conflict as much). The code might run fine on an Intel cpu but its performance value will not be portable.

        Are you even familiar with how TSX works? Hardware Lock Elision is a very simple replacement for atomic locking. You can write a very simple user level mutex using atomic operations that has a fallback to an OS yielding construct. In fact that's what we do in my concurrency library. Uncontested
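
For readers unfamiliar with the mechanism being described, here is a minimal sketch (an editor's illustration, not the poster's library) of the kind of user-level lock HLE can elide, using GCC's __ATOMIC_HLE_ACQUIRE/__ATOMIC_HLE_RELEASE flags. The XACQUIRE/XRELEASE prefixes they emit are ignored on CPUs without TSX, so the same binary degrades to an ordinary spinlock there.

```c
/* Editor's sketch of an HLE-elided user-space spinlock.
 * Build with: gcc -O2 -mhle hle_lock.c */
#include <immintrin.h>   /* _mm_pause */

static int lockvar = 0;  /* 0 = free, 1 = held */

static void hle_lock(void)
{
    /* Try to elide the lock: the store is done transactionally and the
     * critical section runs speculatively; on a data conflict the hardware
     * aborts and re-executes this exchange as a real atomic RMW. */
    while (__atomic_exchange_n(&lockvar, 1,
                               __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE)) {
        /* Someone really holds the lock: spin politely.  A production
         * version would fall back to an OS wait (e.g. a futex) after a
         * while, as the comment above describes. */
        while (__atomic_load_n(&lockvar, __ATOMIC_RELAXED))
            _mm_pause();
    }
}

static void hle_unlock(void)
{
    __atomic_store_n(&lockvar, 0,
                     __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
}
```

When elision succeeds, uncontended acquisitions never dirty the lock's cache line, which is where the "atomic operations become free" claim further down comes from.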

        • Well, for video games (or anything you sell to the consumer), you clearly do not want to rely on Intel's transactional extensions because doing so could significantly reduce or destroy performance on any customer systems that don't have them.

          Basically the way the basic (the prefixed) transactional extension works is to avoid dirtying the cache line(s) associated with the spin lock or unlock operations, with the assumption that the operations which are run within the locked section are less likely to conflic

          • by adisakp ( 705706 )
            Nope... it's very valuable. Basically what you do is you write code that makes use of fine-grained user space locking that has a fallback to OS locks on contention. This runs very fast on multicore systems.

            Then you add HLE extensions and it runs even faster on CPUs that support TSX, and you get a rather large performance bonus for free, as at that point a majority of your atomic operations become free.

            It also allows you to substitute simple lock-free and non-blocking algorithms that rely on multiadd
            • Not to put too fine a point on it, but I've written hundreds of thousands of lines of SMP code on modern systems (and, frankly, I was doing SMP code with paired 8-bit CPUs over 28 years ago), so if you think you are somehow stating something in regards to my knowledge base, I would humbly suggest that you take your opinions and shove them down a toilet somewhere because you clearly have no clue whatsoever as to what I've been doing the last 20 years.

              -Matt

          • by adisakp ( 705706 )
            Also, I don't see why you keep referencing global locks and spin locking as the only things that would benefit. Did you get a chance to read the presentation I linked to [gdcvault.com]? Mind you, it's based on work I did 4-5 years ago and presented almost 4 years ago, but even back then we were well ahead of the starting point you seem to feel developers are using as a base.

            We are already using fine-grained locking, striped locking, reader/writer locks, lock-free atomic SList, lock free allocators, etc. I am interes
          • by adisakp ( 705706 )

            So as far as game design goes, the transaction stuff is worse than worthless.

            I want to feel you're not just trolling me because apparently you've been developing software since at least the Amiga days (we have that in common). However, I feel you are quite misguided on some of your assumptions here.

            Not to say I have a more informed opinion than you, since I don't know your personal experience in game development, but I certainly feel that TSX isn't worthless for games, and I've been writing performance code full-time for games for over 20 years.

            • I don't think you actually bothered to read and understand what I wrote. Try again. This time read my responses (or at least the first two) a bit more carefully.

              I'm not in the least saying that transactional hardware support is bad. I am saying that programming to Intel's transactional interface FIRST, as your primary programming model, particularly for consumer applications, can lead to very undesirable results on hardware that doesn't support it.

              Intel tends to implement first-run features with very wei

              • by adisakp ( 705706 )
                I felt the choice of wording was a bit prematurely dismissive (i.e. saying it shouldn't apply to single-socket CPUs or to game programming -- especially since that is the primary target of my concurrency research).

                Also, we are not trying to write specifically to HLE. We are trying to write stuff that runs well on multicore systems and then layer HLE on top of it for an added performance benefit for when we do have lock conflicts.

                I agree that well written applications don't have nearly as many locking
    • It's safe to say it does not, as TSX is a Haswell feature and the 4960X is an Ivy Bridge CPU.
      What you would need is the 4960X's successor, which is Haswell-E on a new socket called LGA 2011-3 with DDR4, and its server counterparts. Or get a vanilla 4770 or 4771.
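
If in doubt, the support bits can be checked at runtime; below is a minimal sketch (editor's illustration, GCC/Clang-specific) reading CPUID leaf 7, where EBX bit 4 reports HLE and bit 11 reports RTM, the two halves of TSX.

```c
/* Editor's sketch: report whether the running CPU advertises TSX.
 * CPUID leaf 7, sub-leaf 0: EBX bit 4 = HLE, bit 11 = RTM. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int a, b, c, d;

    /* Confirm leaf 7 exists before querying it. */
    if (!__get_cpuid(0, &a, &b, &c, &d) || a < 7) {
        puts("CPUID leaf 7 not available");
        return 1;
    }
    __cpuid_count(7, 0, a, b, c, d);
    printf("HLE: %s\n", (b & (1u << 4))  ? "yes" : "no");
    printf("RTM: %s\n", (b & (1u << 11)) ? "yes" : "no");
    return 0;
}
```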

  • by stanlyb ( 1839382 ) on Tuesday September 03, 2013 @10:52AM (#44747011)
    Since their devotion to TPM, my answer to Intel was, is, and will be: GO F**** yourself.
  • So for $1000 I can get 1.5x the peak multithreaded performance over the $300 processor released three months ago. And if you run lightly threaded apps, the processor from earlier in the summer may still be faster. Wow... what a bargain. I'd say sign me up for two but, alas, Intel won't let you run multiple processors without paying the Xeon tax.

  • This CPU has very low, if not the lowest, performance per price of current models, so in one category it is the worst possible buy you can make; it is incredibly over-priced.

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...