
Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

crookedvulture writes: Intel has updated its high-end desktop platform with a new CPU-and-chipset combo. The Haswell-E processor has up to eight cores, 20MB of cache, and 40 lanes of PCI Express 3.0. It also sports a quad-channel memory controller primed for next-gen DDR4 modules. The companion X99 chipset adds a boatload of I/O, including 10 SATA ports, native USB 3.0 support, and provisions for M.2 and SATA Express storage devices. Thanks to the extra CPU cores, performance is much improved in multithreaded applications. Legacy comparisons, which include dozens of CPUs dating back to 2011, provide some interesting context for just how fast the new Core i7-5960X really is. Intel had to dial back the chip's clock speeds to accommodate the extra cores, though, and that concession can translate to slower gaming performance than Haswell CPUs with fewer, faster cores. Haswell-E looks like a clear win for applications that can exploit its prodigious CPU horsepower and I/O bandwidth, but it's not the best CPU for everything. Reviews are also available from Hot Hardware, PC Perspective, AnandTech, Tom's Hardware, and HardOCP.

  • just wait (Score:5, Interesting)

    by hypergreatthing ( 254983 ) on Friday August 29, 2014 @02:54PM (#47786281)

    until next year. 14nm shrink should be a huge boost in both efficiency and performance.
    The X99 is an "enthusiast" platform and has pricing along those lines.
    DDR4 is also extremely new. Expect it to get faster/better timing specs as time progresses.

  • by CajunArson ( 465943 ) on Friday August 29, 2014 @02:55PM (#47786293) Journal

    The 5820K is packing 6 cores and an unlocked multiplier for less than $400. If you don't absolutely need the full 8-core 5960X, then the 5820K is going to be a very powerful part at a reasonable price for the level of performance it delivers.

  • Re:Price (Score:5, Interesting)

    by SirMasterboy ( 872152 ) on Friday August 29, 2014 @03:23PM (#47786497)

    Though the lower-end model is only $300 for a 6-core, 12-thread part!

    http://www.microcenter.com/pro... [microcenter.com]

  • Re:DDR2/3/4 (Score:4, Interesting)

    by pjrc ( 134994 ) <paul@pjrc.com> on Friday August 29, 2014 @04:18PM (#47786861) Homepage Journal

    Just to put "some time now" the time frame into perspective, the last mainstream PC memory form-factor to use asynchronous DRAM was 72 pin SIMMs.

    When PCs went from 72 pin SIMMs to the first 168 pin DIMMs, in the mid-1990s, the interface changed to (non-DDR) synchronous clocking.

  • Image processing (Score:5, Interesting)

    by fyngyrz ( 762201 ) on Friday August 29, 2014 @07:06PM (#47787737) Homepage Journal

    I use -- and write -- image processing software. Correct use of multiple cores results in *significant* increases in performance, far more than single digits. I have a dual 4-core, 3 GHz Mac Pro, and I can control the threading of my algorithms on a per-core basis; every core adds more speed when the algorithms are designed so that a region stays with one core and thus remains in-cache for the duration of the hard work.

    The key there is to keep main memory from becoming the bottleneck, which it immediately will do if you just sweep through your data from top to bottom (presuming your data is bigger than the cache, which is typically the case with DSLR images today). Now, if they ever get main memory to us that runs as fast as the actual CPU, that'll be a different matter, but we're not even close at this point in time.

    So it really depends on what you're doing, and how *well* you're doing it. Understanding the limitations of memory and cache is critical to effective use of multicore resources. You're not going to find a lot of code that does that sort of thing outside of very large data processing, and many individuals don't do that kind of data processing at all, or do it so rarely that speed isn't the key issue and only the results matter. But there are certainly common use cases where keeping a machine for ten years would use up valuable time in an unacceptable manner. As a user, I am constantly editing my own images with global effects, so multiple fast cores make a real difference for me. A single-core machine is crippled by comparison.
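    A minimal sketch of the cache-blocking approach described above (not the poster's actual code; the image size, the two per-pixel effects, and the band-per-core split are illustrative assumptions):

        #include <cstdint>
        #include <cstdio>
        #include <functional>
        #include <thread>
        #include <vector>

        // Two made-up "global effects" applied to one band of the image. Running
        // both passes over the same band, back to back, is what keeps that band
        // in cache instead of streaming the whole image through memory twice.
        static void process_band(std::vector<uint8_t>& pixels, size_t begin, size_t end) {
            for (size_t i = begin; i < end; ++i) {                 // pass 1: brighten
                int v = pixels[i] + 16;
                pixels[i] = static_cast<uint8_t>(v > 255 ? 255 : v);
            }
            for (size_t i = begin; i < end; ++i)                   // pass 2: invert
                pixels[i] = static_cast<uint8_t>(255 - pixels[i]);
        }

        int main() {
            const size_t width = 6000, height = 4000;              // DSLR-sized grayscale frame (assumed)
            std::vector<uint8_t> pixels(width * height, 100);

            unsigned cores = std::thread::hardware_concurrency();
            if (cores == 0) cores = 1;

            // One contiguous band per core; each thread owns its region for the
            // duration of the work, as the comment above describes.
            std::vector<std::thread> workers;
            const size_t band = pixels.size() / cores;
            for (unsigned c = 0; c < cores; ++c) {
                size_t begin = c * band;
                size_t end = (c + 1 == cores) ? pixels.size() : begin + band;
                workers.emplace_back(process_band, std::ref(pixels), begin, end);
            }
            for (auto& t : workers) t.join();

            std::printf("processed %zu pixels on %u threads\n", pixels.size(), cores);
            return 0;
        }

    The point of doing both passes inside process_band, rather than as two whole-image sweeps, is that each band is still warm in that core's cache when the second pass starts.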

  • Re:*drool* (Score:4, Interesting)

    by TheRaven64 ( 641858 ) on Saturday August 30, 2014 @05:52AM (#47790089) Journal
    For building big C++ projects, as long as the disk (yay SSDs!) can keep up, you can throw as many cores as you can get at the compile step and get a speedup, but then you're left depending on single-thread performance for the link. I got a huge speedup going from a Core 2 Duo to a Sandy Bridge quad i7, then another noticeable speedup going to a Haswell i7 in my laptop. The laptop is now sufficiently fast that I do a lot more locally - previously I'd mostly work on a remote server with 32 cores, 256GB of RAM (and a 3TB mirrored ZFS array with a 512GB SSD for ZIL and L2ARC), but now the laptop is only about a factor of 2 slower in terms of build times, so for developing individual components (e.g. LLVM+Clang) I'll use the laptop and only build the complete system on the server.
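    The parallel-compile / single-threaded-link split above is essentially Amdahl's law at work. A rough sketch of what that implies for build-time scaling (the 10% serial share standing in for the link step is an assumed figure, not something from the comment):

        #include <cstdio>
        #include <initializer_list>

        // Amdahl's law: speedup(N) = 1 / (serial + parallel / N).
        // The "serial" share models the link step, which stays bound by
        // single-thread performance no matter how many cores you add.
        int main() {
            const double serial = 0.10;   // assumed serial (link) fraction, for illustration only
            for (int cores : {1, 2, 4, 8, 16, 32}) {
                double speedup = 1.0 / (serial + (1.0 - serial) / cores);
                std::printf("%2d cores -> %.2fx faster build\n", cores, speedup);
            }
            return 0;
        }

    Under that assumption the jump from 4 to 8 cores still pays off, but the serial link step caps the benefit of the 32-core server, which matches the "only about a factor of 2 slower" observation above in spirit.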
