Intel Upgrades Hardware

Intel Core i7-4790K Devil's Canyon Increases Clocks By 500 MHz, Lowers Temps

Posted by timothy
from the free-lunch dept.
Vigile (99919) writes "Since the introduction of Intel's Ivy Bridge processors, a subset of users has complained about the company's change of thermal interface material between the die and the heat spreader. With the release of the Core i7-4790K, Intel is moving to a polymer thermal interface material that it claims improves cooling on the Haswell architecture, helped along by some added capacitors on the back of the CPU. Code-named Devil's Canyon, this processor boosts stock clocks by 500 MHz over the i7-4770K for the same price ($339), and lowers load temperatures as well. Unfortunately, in this first review at PC Perspective, overclocking doesn't appear to be much improved."
  • by 140Mandak262Jamuna (970587) on Saturday June 07, 2014 @05:41AM (#47185623) Journal
    Not everyone is limited by CPU speed. In fact, the vast majority of users are limited by network latency and bandwidth, and they won't be helped much by overclocking. Even people running heavy-duty local executables mostly farm the graphics out to the GPU. So non-GPU, heavy-duty local apps limited by clock speed make up a small subset of users and applications.

    For common people, video re-rendering is probably the most CPU-intensive task. Even that could be farmed out to the GPU, if it isn't already, pretty soon.

    There are some power users in the accounting and finance departments who commit crimes against software with atrociously written Excel macros. Their spreadsheet update time scales as the square or cube of the number of cells. They blame the computer for being slow and demand faster computers. Even this group doesn't benefit from overclocking, because Excel is such bloatware that it triggers endless page faults and long fetches (out of the L1, L2, and L3 caches).

    So who might benefit? Maybe people like me, doing finite element analysis, mesh generation, or other such physics simulations.

    For the vast majority of users, taking the lower temperature and spending it on more reliable, longer-lasting, less power-hungry chips would give better bang for the buck. But that is difficult to test, doesn't garner press reports, and, more importantly, cuts into future sales. So they obsess over overclocking gimmicks instead.
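The quadratic blow-up in badly written spreadsheet macros described above comes from doing a full-sheet recalculation after every single-cell write. A toy Python sketch of the pattern (the function names are made up for illustration):

```python
def naive_update(n):
    """Write one cell, then recalculate the whole sheet - repeated n
    times. Each pass over n cells after each of n writes is O(n^2)."""
    cells = [0.0] * n
    total = 0.0
    for i in range(n):
        cells[i] = i * 2.0       # one cell written...
        total = sum(cells)       # ...followed by a full-sheet recalc
    return total

def batched_update(n):
    """Write all the cells first, recalculate once: O(n)."""
    cells = [i * 2.0 for i in range(n)]
    return sum(cells)
```

Doubling the sheet size roughly quadruples the naive version's runtime while only doubling the batched version's, which is why no plausible overclock rescues the naive macro.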

    • Who might benefit? Gamers, I imagine.

      • Rendering the scene is the most computationally intensive task, and it has already been farmed out to the GPU. The rest of the gaming software doesn't benefit as much from CPU speed. Many game algorithms are embarrassingly parallel and scale nicely across multi-core chips, so most threads in a gaming executable idle much of the time. They don't benefit much from overclocking.

        Secondly, gamers form a very small segment of computer users. The mobile phone gaming market is bringing in so many non-traditional gamers.
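The "embarrassingly parallel" claim above can be illustrated with a toy per-entity update step; the names are hypothetical, a sketch rather than a real engine loop:

```python
from concurrent.futures import ProcessPoolExecutor

def update_entity(state):
    """Advance one entity by its velocity. Each update depends only on
    that entity's own state, so the work splits cleanly across cores."""
    position, velocity = state
    return (position + velocity, velocity)

def tick(entities, workers=4):
    """One simulation step spread over `workers` processes. Throughput
    scales with core count rather than with any single core's clock."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(update_entity, entities, chunksize=64))
```

With work shaped like this, adding cores helps more than adding megahertz, which is the parent's point about overclocking.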

        • by Anonymous Coward on Saturday June 07, 2014 @09:06AM (#47185995)

          As a gamer, I can tell you there are a lot of games that are CPU bound. ArmA 3 and Skyrim are great examples; basically any game with a complicated physics engine, or one dependent on a lot of AI (gamer "AI," meaning NPC) calculations. A faster CPU can offer much better performance than a slower one with the same video hardware.

        • Not true at all! My Phenom II with an ATI 7850 slows to a crawl with the polygon counts in BF4, and in SWTOR on the Imperial Fleet. Guild members with i7s and lower-end cards get double the FPS.

          The card needs polygons fed to it before it can render, and waits on DirectX to redraw. A faster CPU is required.

        • by Anonymous Coward

          Supposedly CryEngine 4 is CPU bound. It's one of those engines where you really want to throw cores at it.

      • by Krneki (1192201)

        Yep, Star Citizen's Arena Commander just came out, and you need a properly overclocked CPU to reach 60 FPS. My i5 2500K @ 4.5 GHz is only able to give me 48 FPS, while stock CPUs won't give you more than 40 FPS.

    • by Kjella (173770)

      Those things mostly go hand in hand: you can either increase performance or reduce power usage. Things get a little more complicated as you approach SoC power levels, but in general the company with the highest-performing chips can also scale them down into the lowest-consuming chips. There's a reason Intel can sell $500-1000 mobile chips: in that power envelope AMD doesn't have a match on performance, so Intel is free to set the price at will.

    • Maybe people like me, doing finite element analysis, mesh generation, or other such physics simulations.

      Probably not even you. Such tasks demand a huge amount of memory, and the bottleneck is often the number of memory channels per core on the CPU. If you scale the benchmark result by core count, a CPU with 4 cores and 4 channels will outperform one with, say, 6 cores and 3 channels, even if the 6-core CPU is clocked higher. Given software that scales nicely, it is better to add more CPUs to the cluster than to increase the clock speed. Also, if CUDA takes off, the clock speed of the CPU will be
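The memory-channel argument above is essentially the STREAM triad: once the arrays spill out of cache, each iteration waits on memory traffic, not on the core clock. A minimal Python sketch of the kernel (illustrative only; real measurements use the C STREAM benchmark):

```python
import array
import time

def triad(n, scalar=3.0):
    """STREAM-style triad: a[i] = b[i] + scalar * c[i].
    For large n the three arrays no longer fit in cache, so the loop is
    paced by how fast the memory channels can move ~24 bytes per
    element (three doubles), not by how fast the core is clocked."""
    b = array.array('d', (float(i) for i in range(n)))
    c = array.array('d', (float(i) for i in range(n)))
    a = array.array('d', bytes(8 * n))  # n zeroed doubles
    start = time.perf_counter()
    for i in range(n):
        a[i] = b[i] + scalar * c[i]
    elapsed = time.perf_counter() - start
    return a, (24.0 * n) / elapsed  # rough bytes/second moved
```

On a bandwidth-bound kernel like this, more channels raise the bytes/second figure; a higher clock mostly does not.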

      • That, my friend, is the difference between an i5 and an i7. For having many apps and tabs open, an i5 is fine. For compiling code, the extra RAM channels on an i7 help.

    • by jonwil (467024)

      It seems to me that this CPU would be the perfect choice for a MAME setup, since MAME is one of the few things out there these days that is genuinely CPU bound.

    • But the users of i7s are not the friendly receptionist at work or Grandma checking her FB. They are almost all gamers, mixed with software developers and a few niches.

      So yes, CPU speed is important to them. If it weren't, the rest of the normal users would buy i3s, and maybe an i5 if they have more cash and heavier workflows. Unless you make 1 TB video edits or play Wolfenstein, the average user won't see any difference between a 4-core i5 and an i7.

    • So who might benefit? Maybe people like me, doing finite element analysis, mesh generation, or other such physics simulations.

      Actually, finite element analysis is one of the fields that stands to benefit most from the use of GPUs. Although it does depend on just what it is you are doing.

      I remember back in the early 90s when our office got a brand-new 25 MHz 486. We had great expectations for it. We set up a groundwater flow model (I forget how many cells it was) on the Friday it came in. The simulation ran the rest of Friday... we went home. Monday when we came back it was still running...

    • Agreed. It's completely irrelevant to most use cases. But not all. For instance, pro audio, which is a part of what I do, still benefits greatly from increased CPU speed as well as reduced cache latency. The tools I use have not been architected to take advantage of the immense power of modern GPUs. Eventually they likely will be, but, for now, every couple years' worth of CPU improvements does make a significant difference for what I do.
  • Seymour Cray material?
  • Repost (Score:4, Informative)

    by Zanadou (1043400) on Saturday June 07, 2014 @06:17AM (#47185689)
    • by Anonymous Coward

      ...a polymer thermal interface material that claims to improve cooling on the Haswell architecture,...

      Captain! The polymer thermal interface is failing along the Haswell architecture! I dunno how long I can keep 'er go'in!

    • Re:Repost (Score:5, Informative)

      by Vigile (99919) * on Saturday June 07, 2014 @09:00AM (#47185969)

      That was a paper launch announcement. This post is a review with benchmarks and overclocking.

  • Intel bumped the stock clock on a K-designated unlocked part that never runs at stock. Pointless gesture.

    And it appears to be OC-limited to boot. WTF, Intel?

    • by Xenx (2211586)
      One sample doesn't show anything; they even mention that in TFA. Also, Intel reportedly achieved 5.5 GHz on air cooling during its OC challenge at Computex. I wouldn't be the least bit surprised if they cherry-picked the chips for the challenge, but it shows they're at least capable of higher.
  • How much one can overclock has always been a roll of the dice, down to luck. That said, it is well known in the water-cooling community that the CPU thermal interface on Sandy/Ivy/original Haswell was a limiting factor for cooling efficiency on water-cooled rigs, with no spectacular results even on super-lucky draws. I am very excited to see how this might change with the new thermal interface material.
  • Yawn (Score:5, Informative)

    by BrendaEM (871664) on Saturday June 07, 2014 @09:43AM (#47186127) Homepage

    30 Percent Faster in 3 Years:
    http://www.pcper.com/reviews/P... [pcper.com]

    Overclocking issues?
    http://linustechtips.com/main/... [linustechtips.com]

  • Apart from games and video encoding, we've kind of reached a plateau of user requirements, such that a platform like AMD's Kabini would be plenty for the average user. Since the average Joe can live with only a tablet, I'm guessing even Kabini is overkill by comparison.

    • by Bryan Ischo (893) * on Saturday June 07, 2014 @11:02AM (#47186415) Homepage

      Agreed. I've said it before and I'll say it again: significant performance increases in the x86 world are a thing of the past.

      There simply isn't enough money in the market chasing higher performance to make the development cost of faster chips worth the investment.

      This is actually an opportunity for AMD. I expect it costs AMD less to catch up to Intel than it costs Intel to push to faster speeds, and since Intel isn't being paid anymore to get faster, AMD can, like the slow and steady tortoise, gradually catch up to Intel. I believe it will take a couple more years, but if AMD survives that long, I believe that it will have achieved near performance parity with Intel by then.

      And then neither company's offerings will get much faster, forever thereafter, until there is some new kind of 'killer app' that demands increased CPU speeds that people are willing to pay for (could happen anytime; but the way things are going, with everyone moving to mobile phones and pads, I think we're in for a relatively long haul of form factor and power usage dominating the marketable characteristics of CPUs).

      I believe Intel will continue to hold a power advantage over AMD for a long time though, but AMD will gradually narrow that gap as well.

      The thing is, AMD will be fighting Intel for a stagnating/shrinking CPU market, and more than likely AMD won't increase its margins significantly during this process, it will just reduce Intel's margins. Not really good news for either company, but worse for Intel.

  • by Anonymous Coward

    Any young tykes in here remember when processor speeds used to increase by 500 MHz in 18 months, instead of being stuck in the 3-4 GHz range for almost a decade?

    • by Anonymous Coward

      Yes, good times... anything seemed possible by just waiting a few months until the hardware could take it.
      On the other hand, now and going forward, good software developers will be valued highly, because efficient CPU resource usage and SMP are difficult but increasingly important.

    • by 15Bit (940730) on Saturday June 07, 2014 @03:02PM (#47187257)
      Yeah, but there comes a point where it is technologically easier to increase the amount of work done with each clock tick than to make logic that can switch faster. We reached that point about 10 years ago...
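The "more work per clock tick" direction described above is what SIMD units and wider pipelines deliver. A toy model of the idea in Python (hypothetical 4-wide "lanes", not real vector code):

```python
def scalar_add(a, b):
    """One addition per step: the only way to go faster is a higher clock."""
    return [x + y for x, y in zip(a, b)]

def fourwide_add(a, b):
    """Four additions per step, mimicking a 4-wide SIMD unit:
    same clock rate, roughly four times the arithmetic per tick."""
    out = []
    for i in range(0, len(a), 4):
        out.extend(x + y for x, y in zip(a[i:i + 4], b[i:i + 4]))
    return out
```

Both produce identical results; the second simply does more of the work in each "tick" of its outer loop, which is the trade-off hardware made once raw switching speed stalled.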
    • by Anonymous Coward

      Yeah, I miss those days in the late 1980s... it wasn't just the clock speeds that were revving upwards. Hard disk drives were getting larger, doubling every six months (10, 20, 40, 80, 160 megabytes... we're up to 2 terabytes now), and screen resolutions kept growing (320x200, 640x480, 1024x768, 1280x1024, 1600x1200), as did pixel depths (8-bit, 16-bit, 24-bit). Now we're into quad monitors with HD resolutions, so maybe that is still going upwards.

      But CPU core counts are increasing, and caching is improving.

  • Speed and Temp (Score:2, Insightful)

    by Anonymous Coward

    So, Intel lowers the temps, increases the speed by 500 MHz, and there are issues with overclockability? Man, just go invent and build your own proc. Want your cake and eat it too?

  • Why, on a modern machine running a modern flavor of Windows, does a heavy RPG only use 2 cores?

  • by Anonymous Coward

    I'm still running first-generation 1366 and I haven't even begun to find a reason to upgrade anything but the video card, and that's just to stay on top of gaming.
    It would be nice to get USB 3, I suppose, but truthfully that's just a PCIe add-in card away.
