Intel's 128MB L4 Cache May Be Coming To Broadwell and Other Future CPUs 110

MojoKid writes "When Intel debuted Haswell this year, it launched its first mobile processor with a massive 128MB L4 cache. Dubbed "Crystal Well," this on-package (not on-die) pool of memory wasn't just a graphics frame buffer, but a giant pool of RAM for the entire CPU to utilize. The performance impact is significant, though the Haswell processors that utilize the L4 cache don't appear to account for very much of Intel's total CPU volume. Right now, the L4 cache pool is only available on mobile parts, but Broadwell-K apparently will change that next year. The 14nm desktop chips aren't due until the tail end of next year, but we should see a desktop refresh in the spring with a second-generation Haswell part. Still, it's a sign that Intel intends to make the large L4 standard on a wider range of parts. Using eDRAM instead of SRAM lets Intel's architecture dedicate just one transistor per cell instead of the 6T configurations commonly used for L1 or L2 cache. That means the memory isn't quite as fast, but it saves an enormous amount of die space. At 1.6GHz, L4 latencies are 50-60ns which is significantly higher than the L3 but just half the speed of main memory."
This discussion has been archived. No new comments can be posted.

  • Re:first post (Score:4, Informative)

    by lorinc ( 2470890 ) on Saturday November 23, 2013 @08:15AM (#45500097) Homepage Journal

    Seems other users have a bigger cache than yours...

  • by GiantRobotMonster ( 1159813 ) on Saturday November 23, 2013 @08:18AM (#45500105)

    At 1.6GHz, L4 latencies are 50-60ns which is significantly higher than the L3 but just half the speed of main memory.

    Hmmm. L4 cache runs at half the speed of main memory? That doesn't seem right. Why bother reading these summaries? The people posting them certainly don't.

  • by SimonTheSoundMan ( 1012395 ) on Saturday November 23, 2013 @08:19AM (#45500111)

    The only benchmarks I have found are from SiSoftware. http://www.sisoftware.co.uk/?d=qa&f=mem_hsw [sisoftware.co.uk]

    But how is this going to affect Firefox, Photoshop, or video conversion?

    Does it have an effect on battery life?
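    For anyone wanting to generate numbers like SiSoftware's, the standard trick is pointer chasing: walk a randomly linked chain through a working set of a given size, so each load depends on the previous one and the prefetchers can't hide the latency. A minimal sketch, with an assumed 64MB working set (to land inside a 128MB L4) and POSIX timing; this is illustrative, not SiSoftware's actual code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void)
        {
            size_t bytes = 64u << 20;            /* 64MB: should sit inside a 128MB L4 */
            size_t n = bytes / sizeof(void *);
            void **cells = malloc(bytes);
            size_t *idx = malloc(n * sizeof *idx);
            if (!cells || !idx) return 1;

            /* Shuffle the visit order so hardware prefetchers can't predict it. */
            for (size_t i = 0; i < n; i++) idx[i] = i;
            srand(12345);
            for (size_t i = n - 1; i > 0; i--) {
                size_t j = (size_t)rand() % (i + 1);
                size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
            }

            /* Link each cell to the next in shuffled order, forming one big cycle. */
            for (size_t i = 0; i + 1 < n; i++) cells[idx[i]] = &cells[idx[i + 1]];
            cells[idx[n - 1]] = &cells[idx[0]];

            /* Chase the chain; every load depends on the previous one. */
            size_t hops = 20 * 1000 * 1000;
            void **p = &cells[idx[0]];
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (size_t i = 0; i < hops; i++) p = (void **)*p;
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            printf("%.1f ns per load (%p)\n", ns / hops, (void *)p);
            free(idx); free(cells);
            return 0;
        }

    Sweep the working set from a few hundred KB up past 128MB and the per-load time should step up at each cache boundary; that is essentially what the SiSoftware latency curves plot.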

  • by fuzzyfuzzyfungus ( 1223518 ) on Saturday November 23, 2013 @08:38AM (#45500187) Journal
    At least as marketed, the main advantage is giving the GPU some RAM that isn't DDR3 stolen from the main system a couple of hops away (which has traditionally been one of the things that makes integrated graphics really suck, and makes cheap discrete parts that use DDR instead of GDDR, and/or an excessively narrow or slow memory bus, kind of suck).

    Even Intel's marketing optimists don't say much about CPU performance. It's also a mobile-only feature: you can't even buy a non-BGA part expensive enough to have it, which would be unusual if it actually improved CPU performance enough to get enthusiasts worked up, but is downright sensible if the target market is laptops sufficiently size/power constrained not to have discrete GPUs, where pure shared memory was dragging GPU performance down.
  • by muridae ( 966931 ) on Saturday November 23, 2013 @09:23AM (#45500301)

    Photoshop? Considering Adobe RGB and other color spaces combined with the file sizes of some of the larger images coming out of cameras, your gains in latency would really depend on Photoshop and the OS being able to handle the L4 cache and keep the right part of the image in it. Video editing, with file sizes in the gigabyte range, would probably see no gains at all. Video conversion, with a program that keeps a reasonably sized buffer, should see a good performance gain, but it would require code that knows the L4 is available, or the OS to predict that L4 is a good place to put a 10-50-100MB buffer (a sketch of what that might look like follows below). The real gain will be in common things: playing a video, browsing the web (seen how much memory a bit of JavaScript or the JRE can eat up lately? Or Silverlight/Flash?), and email clients (cache all your email in L4 for faster searching).
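    Something like this hypothetical chunked two-pass loop, where the buffer size is an assumption chosen to leave headroom in a 128MB L4 (the passes are stand-ins for real decode/encode stages):

        #include <stdio.h>
        #include <stdlib.h>

        /* Assumed chunk size: small enough to stay resident in a 128MB L4. */
        #define CHUNK_BYTES (64u << 20)

        static void pass_one(unsigned char *buf, size_t n)   /* stand-in: decode/filter */
        {
            for (size_t i = 0; i < n; i++) buf[i] ^= 0x5a;
        }

        static void pass_two(unsigned char *buf, size_t n)   /* stand-in: encode */
        {
            for (size_t i = 0; i < n; i++) buf[i] += 1;
        }

        int main(void)
        {
            unsigned char *buf = malloc(CHUNK_BYTES);
            if (!buf) return 1;
            size_t n;
            while ((n = fread(buf, 1, CHUNK_BYTES, stdin)) > 0) {
                pass_one(buf, n);   /* first pass pulls the chunk into the L4 */
                pass_two(buf, n);   /* second pass can hit cache instead of DRAM */
                fwrite(buf, 1, n, stdout);
            }
            free(buf);
            return 0;
        }

    The same stream processed in, say, 512MB chunks would push every byte back out to DRAM between passes.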

    As for battery life, I have no idea. It might use more power, since DRAM requires constant refreshing to retain data, whereas SRAM is pretty stable; but the lower leakage of using a single transistor instead of six might prove to be a benefit. It would take a good bit of time and some pretty good test code to figure out the difference, I suspect.

  • not on die (Score:5, Informative)

    by Gravis Zero ( 934156 ) on Saturday November 23, 2013 @10:00AM (#45500467)

    128MB L4 cache. [...] on-package (not on-die) pool of memory

    What this means is the memory is not on the same piece of silicon as the CPU, just stuffed into the same chip package. They have to be connected by a lot of tiny wires instead of being integrated directly; the downside is that the bandwidth between the L4 memory and the CPU is limited and it uses more power. Like AMD's first APUs, which were just two ICs in the same package, I don't think this will result in a drastic performance improvement, but I'm unsure of the power savings. If AMD gets wise, they will beat Intel to the punch; though if AMD is really smart, they would put out ARMv8 chips not just for servers (/desktops?) but for smartphones/tablets and laptops.

  • Re:Why only 128 MB? (Score:5, Informative)

    by Kjella ( 173770 ) on Saturday November 23, 2013 @10:32AM (#45500591) Homepage

    Broadwell represents a miniaturization step from 22 to 14 nm structures. Why do they keep the capacity of the Crystalwell L4 cache at 128 MB? They could put twice that memory onto a die with the same area as the 22 nm Crystalwell version. Is the Crystalwell die for the Haswell CPUs so large and expensive that they have to reduce its size?

    From Anandtech's article on Crystalwell [anandtech.com]:

    There's only a single size of eDRAM offered this generation: 128MB. Since it's a cache and not a buffer (and a giant one at that), Intel found that hit rate rarely dropped below 95%. It turns out that for current workloads, Intel didn't see much benefit beyond a 32MB eDRAM; however, it wanted the design to be future proof. Intel doubled the size to deal with any increases in game complexity, and doubled it again just to be sure. I believe the exact wording Intel's Tom Piazza used during his explanation of why it went with 128MB was "go big or go home". It's very rare that we see Intel be so liberal with die area, which makes me think this 128MB design is going to stick around for a while.

    I get the impression that the plan might be to keep the eDRAM on an n-1 process going forward. When Intel moves to 14nm with Broadwell, it's entirely possible that Crystalwell will remain at 22nm. Doing so would help Intel put older fabs to use, especially if there's no need for a near term increase in eDRAM size. I asked about the potential to integrate eDRAM on-die, but was told that it's far too early for that discussion. Given the size of the 128MB eDRAM on 22nm (~84mm^2), I can understand why. Intel did float an interesting idea by me though. In the future it could integrate 16 - 32MB of eDRAM on-die for specific use cases (e.g. storing the frame buffer).

  • Re:not on die (Score:5, Informative)

    by lenski ( 96498 ) on Saturday November 23, 2013 @11:20AM (#45500783)

    what this means is the memory is not on the same piece of silicon as the CPU, just stuffed in the same chip package.

    Which allows the designers to count on carefully controlled impedances, timings, seriously optimized bus widths and state machines, and all the other goodies that come with access to internal structures not otherwise available.

    Such a resource could, if used properly, be a significant contributor to performance competitiveness.

  • Eh, it's been done. (Score:2, Informative)

    by Anonymous Coward on Saturday November 23, 2013 @01:17PM (#45501301)

    POWER8, anyone? With actual SMT instead of flaky HT, and lots more threads, and so on, and so forth.

    Too bad they're unobtainium, and where they're not, they cost too much. But otherwise... anything Intel does has basically been done better before. Except process; that is the only thing they really lead with. The rest isn't half as interesting as most of the world makes it out to be.

  • by windwalkr ( 883202 ) on Saturday November 23, 2013 @07:48PM (#45504109)

    Yes and no. Applications can't typically "put things into the cache", but algorithms can be (and, when it comes to image processing, often are) tuned to suit a particular cache size. Processing the image in an appropriate order, breaking the image into cache-sized chunks, and so on can all be effective strategies which pay off big-time in terms of performance.
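    A minimal sketch of that blocking idea, using an image transpose (notoriously cache-hostile when done naively, since the writes stride down columns); the tile size is an assumption you would tune to the cache level you are targeting:

        #include <stddef.h>
        #include <stdint.h>

        /* 64x64 tiles of 4-byte pixels = 16KB each; a source and a destination
         * tile together fit comfortably in a typical 256KB L2. */
        #define TILE 64

        void transpose(uint32_t *dst, const uint32_t *src, size_t w, size_t h)
        {
            for (size_t by = 0; by < h; by += TILE)
                for (size_t bx = 0; bx < w; bx += TILE) {
                    size_t ymax = by + TILE < h ? by + TILE : h;
                    size_t xmax = bx + TILE < w ? bx + TILE : w;
                    /* Both tiles stay cache-resident while we work on them. */
                    for (size_t y = by; y < ymax; y++)
                        for (size_t x = bx; x < xmax; x++)
                            dst[x * h + y] = src[y * w + x];
                }
        }

    With a 128MB L4 in the hierarchy, the same reasoning just gains another level: chunks that miss L2/L3 can still be served from eDRAM instead of main memory.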

  • by MrKaos ( 858439 ) on Saturday November 23, 2013 @09:22PM (#45504507) Journal

    I have a Retina MacBook Pro with this Crystal Well processor. What advantages does it really bring?

    I'm unsure of any real-world benchmarks compared to standard Haswell processors.

    I've written papers on the effect; however, I am unable to share them here. The bottom line is that the application should see reduced minor page faulting and, if all goes well, improved context switching, all dependent on the way the CPU scheduler is configured, of course.

    IMHO an L4 cache will alleviate the miss penalty when the CPU looks for data in L1-3; however, any increase in the penalty due to a cache miss will be highly dependent on the application and the way the CPU scheduler is configured.

    The idea is to try and keep the L1-3 caches as hot as possible. Really, it's because, as programmers, many of us still have a long way to go toward writing code that scales well to parallel processing (in the 21st century!!!), plus there is a lot of code already out there.

    For Linux and Apple based systems (I can examine the code of these CPU schedulers - just not Microsoft's, as it is proprietary) this should mean that the amount of time the CPU spends on application tasks, as opposed to OS tasks, is increased, essentially boiling down to reduced application latency and improved "responsiveness". I don't mean to use such wishy-washy terms; however, at this level CPU instructions complete in the nanosecond range or below, and the duration imposed by a cache-miss penalty and a context switch will also depend on the RAM installed - which is another factor in the duration of a minor page fault.

    Assuming the schedulers are in a "fair and balanced" configuration, I expect the following. Code that scales to parallelism should see improvements, because a task will spread across multiple cores well and not incur penalties for hogging a CPU, so the L1-3 caches stay hot with application data longer (ideally, with threads running on multiple cores). Code that doesn't, I expect to hog a core, get pushed back to RAM by the scheduler, and be exposed to all of the performance penalties that come as a result.

    Personally, I have always thought it's a contest between cycles and cache - not a direct effect on battery life or power consumption. However, if the CPU is spending more time on the application than on the OS, then you are closer to what the original Amdahl's law [wikipedia.org] sought to show - if your application allows it.
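    For reference, the Amdahl bound alluded to above is easy to compute; a quick sketch (the fractions are made-up examples):

        #include <stdio.h>

        /* Amdahl's law: with fraction p of the work parallelizable across
         * n cores, speedup = 1 / ((1 - p) + p / n).  The serial fraction
         * caps the gain: p = 0.9 can never beat 10x, however many cores. */
        static double amdahl(double p, int n)
        {
            return 1.0 / ((1.0 - p) + p / n);
        }

        int main(void)
        {
            printf("p=0.9, n=4:  %.2fx\n", amdahl(0.9, 4));    /* ~3.08x */
            printf("p=0.9, n=64: %.2fx\n", amdahl(0.9, 64));   /* ~8.77x */
            return 0;
        }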
