Intel's Skylake Architecture Reviewed

Vigile writes: The Intel Skylake architecture has been on our radar for quite a long time as Intel's next big step in CPU design. We know at least a handful of details: DDR4 memory support, 14nm process technology, modest IPC gains and impressive GPU improvements. But how the "tock" of Skylake on the 14nm process will differ from Broadwell and Haswell has remained a mystery. That changes today with the official release of the "K" SKUs of Skylake — the unlocked, enthusiast-class parts for DIY PC builders. PC Perspective has a full review of the Core i7-6700K with benchmarks as well as discrete GPU and gaming testing that shows Skylake is an impressive part. IPC gains on Skylake over Haswell are modest but noticeable, and IGP performance is as much as 50% higher than Devil's Canyon's. Based on that discrete GPU testing, all those users still on Nehalem and Sandy Bridge might finally have a reason to upgrade to Skylake. Other reviews are available at Anandtech, Hot Hardware, [H]ard|OCP, and TechSpot.
  • by Anonymous Coward

    The performance increase is going to be negligible until the "new instructions" on Skylake are utilized more in daily software use. Buy today, pay a premium for basically no bump.

  • I Wish (Score:3, Insightful)

    by Traciatim ( 1856872 ) on Wednesday August 05, 2015 @09:58AM (#50255915)
    I just wish that Intel would make a version with 8 cores and lots of cache instead, rather than wasting the space and power on the integrated graphics. If you are gaming with it then you would have a dedicated card, since all IGPs pretty much suck, and if you aren't gaming on it there isn't much point in improving it, since even the most basic IGP can run video and 2D applications just fine.
    • by Bengie ( 1121981 )
      They do, and they sell them for $600+ under their Xeon brand.
      • And again, they don't have consumer-level pricing, and don't have unlocked multipliers. What I'm wishing for is essentially an i5 and i7 'max core' edition that removes the IGP but has a mirror of the existing cores in its place so they each are essentially like 2 K edition chips stuck together.

        That would be a huge leap in progress for the CPU side of things for people who run dedicated graphics cards already. In gaming benchmarks the amount of difference at usable resolutions like 1080P and higher there
        • by Anonymous Coward

          Because AMD is in a heap of trouble. Why should Intel push the performance envelope when you need to buy an 8-core, 200W TDP AMD CPU to beat a 90W quad-core Haswell i5 in terms of performance?

          Until AMD returns to the performance race with a good, competitive CPU lineup, you won't see major jumps from Intel. There is just no money to be had.

        • Yeah, I would love to see a real high end enthusiast processor with 8 cores, hyperthreading, a 4+ GHz clock speed, and no integrated graphics.

          I'm thinking THAT processor would have more than a puny 30% performance increase over a 4 year old Sandy Bridge part.

          Maybe they could brand it with something new like "Core i9 Extreme Edition" to make it sound even more badass to the l33t gamer types.

          • Yeah, I would love to see a real high end enthusiast processor with 8 cores, hyperthreading, a 4+ GHz clock speed, and no integrated graphics.

            So would I... but they aren't doing that to avoid hurting Xeon sales...

            And in fairness, they DO have such a CPU... It is the Haswell-E line of chips, but it'll cost you a thousand bucks...

        • by Bengie ( 1121981 )
          The new graphics APIs will allow IGPs to be faster for some workloads. Discrete GPUs have high throughput but also high communication latency, about 100x that of an IGP that is on-chip. The IGP can communicate with the CPU directly via the L2/L3 cache, but a discrete GPU has to go over a high-latency PCIe link with all kinds of encoding latency overhead.

          Let the IGP do physics, AI, or whatever, and leave the heavy crunching to the big boy. Newer game engines are looking into better ways to allow the scre
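
          A minimal sketch in Python of the split described above: small, latency-sensitive jobs go to the on-die IGP, big throughput jobs go to the discrete card. The Device class, the latency and throughput numbers, and the job sizes are all invented for illustration; this is not a real graphics or compute API.

              # Toy model: route latency-sensitive jobs to the on-die IGP and
              # throughput-heavy jobs to the discrete GPU. All numbers are invented.
              from dataclasses import dataclass

              @dataclass
              class Device:
                  name: str
                  dispatch_latency_us: float   # round-trip cost to start a job
                  throughput_gflops: float     # sustained compute rate

              IGP = Device("integrated GPU", dispatch_latency_us=5.0, throughput_gflops=400.0)
              DGPU = Device("discrete GPU", dispatch_latency_us=500.0, throughput_gflops=5000.0)

              def estimated_time_us(dev: Device, work_gflop: float) -> float:
                  return dev.dispatch_latency_us + work_gflop / dev.throughput_gflops * 1e6

              def pick_device(work_gflop: float) -> Device:
                  # Whichever device finishes sooner: tiny physics/AI kernels win on the
                  # IGP's low latency, big batches win on the discrete GPU's throughput.
                  return min((IGP, DGPU), key=lambda d: estimated_time_us(d, work_gflop))

              for job, gflop in [("physics step", 0.05), ("AI pathfinding", 0.2), ("frame shading", 50.0)]:
                  dev = pick_device(gflop)
                  print(f"{job}: {dev.name} (~{estimated_time_us(dev, gflop):.0f} us)")
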
          • You do make a pretty good point, but I doubt that programmers will take much advantage of an IGP to offload physics, for example, unless it can be completely transparent to the program that it's happening, since you can never rely on the performance or even the availability. So if you spend time optimizing for that, you end up taking time away from other things that could be done that target a broader group of people (figuring out which parts of your program could be threaded out to better support more cores, for example).
          • by Kjella ( 173770 )

            The new graphics APIs will allow IGPs to be faster for some workloads.

            The question is how much extra work it takes. As I understand it, from the DirectX/OpenGL side you don't really "notice" SLI/CF; the cards just take turns. That is why you effectively get only half the memory: they must mirror all the assets. With the low-level APIs all the details are exposed, but if you want to take advantage of the special cases, you have to write special-case code. And the bulk of your market will not have any fancy new feature you introduced, so in the world with limited time and resour

            • by Bengie ( 1121981 )
              DX12 and Vulkan are based around treating a GPU as a compute engine and not a "graphics" anything. You compute the work, then write it out to a buffer to be displayed. At that point you just have a bunch of compute tasks and a bunch of compute engines. This new way of doing things scales nearly perfectly linearly with the total amount of compute in your computer. There have been some tech demos of a system with CrossFire ATI cards paired with an Nvidia card and the Intel IGP, and the engine made near perfect
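
              A rough back-of-the-envelope sketch in Python of that scaling claim, treating every GPU purely as a compute engine. The throughput numbers are invented for illustration, and no real DX12 or Vulkan calls appear here.

                  # Split a frame's worth of independent compute work across all devices in
                  # proportion to their throughput. The frame finishes when the slowest device
                  # finishes, so a proportional split approaches the sum of all compute.
                  devices = {"dGPU 0": 5000.0, "dGPU 1": 5000.0, "IGP": 400.0}  # GFLOP/s, invented
                  frame_work = 50.0  # GFLOP per frame, invented

                  total = sum(devices.values())
                  shares = {name: frame_work * rate / total for name, rate in devices.items()}
                  finish = {name: shares[name] / rate for name, rate in devices.items()}

                  single_gpu_time = frame_work / 5000.0
                  multi_time = max(finish.values())
                  print(f"speedup over one dGPU: {single_gpu_time / multi_time:.2f}x "
                        f"(ideal {total / 5000.0:.2f}x)")
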
        • Re:I Wish (Score:5, Informative)

          by Kjella ( 173770 ) on Wednesday August 05, 2015 @01:31PM (#50257705) Homepage

          What I'm wishing for is essentially an i5 and i7 'max core' edition that removes the IGP but has a mirror of the existing cores in its place so they each are essentially like 2 K edition chips stuck together.

          It already exists, it's called the i7-5960x and costs $999 and needs an expensive X99 motherboard. Eight cores, no IGP, fully unlocked and uses standard DDR4 UDIMMs which are now almost at price parity with DDR3. You just don't like the price.

    • My wish is for more low power Intel CPUs. An Ivy Bridge Core i3-class (ca 2012) CPU in a sub-5W envelope would be nice so I can run a fanless setup with room for doing kernel and Android ROM compilation.
    • by Retron ( 577778 )

      They do: the i7-5960X is a consumer, unlocked i7 chip. Loads of cache, no integrated graphics, and a whacking great price because there's zero competition.

      (Of course, the X99 chips are only Haswell, but as the IPC improvements with Skylake are minimal they're still worth considering - especially the 6-core i7-5820K, which is actually cheaper than the new quad-core Skylake i7 here in the UK. The X99 chips are essentially Xeons with some bits turned off and overclocking enabled. They have VT-d enabled, amongst

      • I do have a habit of forgetting about the socket 2011 based chips due to the price of the whole platform. Maybe I'll have to take a look at whatever sits in that space when it comes time to replace my now-aging 3570K, which didn't want to overclock much.
      • They're eventually going to come out with a Skylake part at that price point and power level as well, probably in Q2-Q4 2016, so I'd just wait for that. It is unfortunate that there isn't any competition at that level; if there was, I bet the chip's actual market price would be closer to $599-699.
    • Speaking as someone who primarily uses his computer for gaming, I actually like having the IGP around; that way, if my graphics card takes a dump, my computer is still viable until I can get a replacement. Also, at this point there aren't a lot of consumer-level tasks that would benefit much from having 8 cores. Lastly, Intel DOES have enthusiast-level parts with 6 cores and no IGP.
      • The best part of the IGP that I've found: when I lucked into a free 3770K, I was able to repurpose the 2600K into my Linux box, and suddenly that IGP that I was "never ever" going to use was a well-supported, perfectly adequate solution for my Linux desktop without having to deal with binary blobs and whatnot.

        But seriously, what gamer doesn't have a stack of old GPUs around? I'm not even a gamer and the number of old graphics cards I have is impressive.

        • I have one computer with discrete graphics, and I don't upgrade often. My last card, I think, ended up going to a local computer recycler. I might have been able to get $30 for it, but I was employed then and didn't want the hassle. Also, I purged my inventory of excess equipment when I moved last.
  • by Anonymous Coward

    Still a deal breaker for me.

  • My next CPU is sooooo going to be an AMD.
    • My next CPU is sooooo going to be an AMD.

      I just built a system with an AMD CPU, but the latest Haswell i5 with four cores is faster than it is. Skylake should beat it into a corner.

    • by armanox ( 826486 )
      I really wish AMD could put out something competitive.
      • Next year's Zen should be competitive; it's said to have 40% better IPC than Excavator, and it will be on 14nm FinFET. Being stuck on 28nm has really limited their ability to compete.

    • I still have two Core i7-920 systems running as well, and I won't replace them with this either.

      But I'm not going AMD; too much power use. Over a five-year lifespan, the difference in power use and AC to cool the rooms adds up.
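
      A rough sketch in Python of that "adds up" claim. The wattage delta, duty cycle, electricity price, and cooling overhead below are all assumptions for illustration, not measured figures.

          # Extra energy cost of a higher-power CPU over a five-year lifespan.
          extra_watts = 100        # assumed average extra draw under load
          hours_per_day = 8        # assumed usage
          years = 5
          price_per_kwh = 0.12     # assumed USD per kWh
          cooling_overhead = 1.3   # assumed: extra AC work to remove the added heat

          kwh = extra_watts * hours_per_day * 365 * years / 1000
          cost = kwh * price_per_kwh * cooling_overhead
          print(f"{kwh:.0f} kWh extra, roughly ${cost:.0f} over {years} years")
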

  • 30% faster, perhaps, with the wind behind it, and if I don't overclock my rig.
    Cinebench multicore: 931 vs. 694.

    No, to be fair--it's only 25.4564983888292% Faster.
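
    For what it's worth, the two ways of reading those scores give different percentages; a quick check using only the 931 and 694 numbers quoted above:

        new, old = 931, 694
        print(f"gain over the old score: {(new - old) / old:.2%}")       # ~34.15% faster
        print(f"gap as a share of the new score: {(new - old) / new:.2%}")  # ~25.46%, the figure above
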

  • by FlyHelicopters ( 1540845 ) on Wednesday August 05, 2015 @11:04AM (#50256439)

    If you're on Sandy Bridge or newer, don't bother unless you really need the new chipset features.

    Benchmarks of course show a small gain, but in the real world I suspect that in a blind test of Sandy Bridge next to Skylake you couldn't tell the difference.

    Anyone who needs the performance difference shouldn't be on either chip; if you do serious image/video editing for a living, you should be on a Xeon with 8+ cores anyway. The cost of such a system is trivial compared to the cost of the employee doing such work.

    I have several systems in my office, ranging from a single Q6600 machine and two Core i7-920 machines all the way up to a Haswell Refresh i7-4790K. The difference in general Windows performance between all those machines is minor. Games play, more or less, the same on anything Sandy Bridge or newer, and we don't do anything so intensive as to require more power.

    Come on, AMD, get back in the game so Intel has some real competition. Since Core 2 Duo came out, you haven't been coming to the party.

  • by Anonymous Coward

    It's 2015. I want ECC support on all chipsets. Don't make me buy a Xeon just to get ECC.

  • https://01.org/linuxgraphics/i... [01.org]

    "No reverse engineering, decompilation, or disassembly of this software is permitted."

  • by FlyHelicopters ( 1540845 ) on Wednesday August 05, 2015 @12:55PM (#50257431)

    Reading AnandTech's review, they make a bold statement at the end:

    "Sandy Bridge, Your Time Is Up."

    That is an interesting thought, but is it really?

    If you need USB 3, if you want some of the other newer chipset features, perhaps. But for performance?

    In benchmarks, Skylake appears to be about 25% faster than Sandy Bridge. Sure, if you're doing video encoding all day or other CPU-intensive applications, it is... (and if you ARE doing that stuff, why aren't you on Xeon?)

    But for most desktop computer uses, you likely won't see any difference between the two. What is worse is that most of the above gains came from Haswell, not Skylake.

    http://www.anandtech.com/show/... [anandtech.com]

    Look at the "Gains over Sandy Bridge" chart on that page. Look at the red lines, then the purple lines. The red lines are the Haswell gain over Sandy Bridge, then the purple lines are the Skylake gains over Sandy Bridge.

    • by Anonymous Coward

      I am currently on a Sandy Bridge CPU. I am considering a newer computer, probably of the Skylake generation, maybe mid next year.

      Mostly because of the power reductions achieved in Haswell, plus the 50 other things that have improved in the ~3 years since I bought it, such as more laptops having better mini PCIe slots, built-in video being better, etc.

  • 640K ought to be enough for anybody.

  • I'm still gaming on a Lynnfield. So, yeah, just about due for an upgrade.
