Not All Cores Are Created Equal
joabj writes "Virginia Tech researchers have found that the performance of programs running on multicore processors can vary from server to server, and even from core to core. Factors such as which core handles interrupts, or which cache holds the needed data can change from run to run. Such resources tend to be allocated arbitrarily now. As a result, program execution times can vary up to 10 percent. The good news is that the VT researchers are working on a library that will recognize inefficient behavior and rearrange things in a more timely fashion." Here is the paper, Asymmetric Interactions in Symmetric Multicore Systems: Analysis, Enhancements and Evaluation (PDF).
unsurprising. (Score:5, Interesting)
Anyone who thinks computers are predictably deterministic hasn't used a computer. There are so many bugs in hardware and software that cause them to behave differently than expected, documented, or designed. Add to that inevitable manufacturing defects, no matter how microscopic, and it's hard to imagine finding anything else.
It's like discovering "no two toasters toast the same. Researchers found some toasters browned toast up to 10% faster than others."
Re:Linux schedules better than this (Score:3, Interesting)
Helpful reading list :)
http://www.google.com/search?q=linux+%22scheduler+domain%22+%22multi+core%22 [google.com]
Re:Linux schedules better than this (Score:5, Interesting)
Possibly... but it appears an SMP kernel treats each core as a separate physical processor.
Take an Intel Core 2 Quad machine and start a process that uses 100% of one CPU, then watch it in top/htop/gnome-system-monitor/etc: the process hops around all four cores. It would make sense for it to hop between the two cores that share an L2 cache, but hopping across all four doesn't make sense to me. The shared L2 is wasted every time the process migrates between the two dual-core dies in the package.
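If you want to take the migrations out of the picture, Linux will let you pin a process to one core, either with taskset -c 0 <command> from the shell or from inside the program. A minimal sketch in C, assuming a Linux box (core 0 is an arbitrary choice):

    /* Pin the calling process to core 0 so the scheduler cannot
     * migrate it away from its warm L2 cache. Linux-specific. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);  /* allow this process to run on core 0 only */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* ... CPU-bound work now stays on core 0 ... */
        return 0;
    }

With the process pinned, the hopping you see in top disappears, which also makes it easy to measure whether the migrations were actually costing anything.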
Re:unsurprising. (Score:5, Interesting)
Actually (sorry, no link), there was a researcher using FPGAs and AI code to create simple circuits, but the goal was to have the AI design them. What he found was that, due to minor manufacturing defects, the designs the AI built were dependent on the particular FPGA they were tested on and would not work on just any FPGA of that specification. After 600 iterations you'd think the result would be solid. One experiment ran for a long time, and when he analyzed the AI-generated design at the end, there were five paths/circuits inside that did nothing. Yet if he disabled any or all of the five, the overall design failed. Somehow the AI had found that these do-nothing circuits caused favorable behavior in other parts of the FPGA that made for overall success. Naturally, that design would not work on any other FPGA of the specified type. It was an interesting read; sorry I don't have a link.
Re:not a surprise (Score:5, Interesting)
We have this problem at work.
We have a render farm of 16 machines. Twelve of them are effectively identical, but despite all of our coaxing, one of them always runs about 30% slower. It's maddening, but what can you do? The hardware is the same, and we Ghost the systems so the boot data is exactly the same... and yet... slowness. It's just a handicapped system.
Close (Score:3, Interesting)
But you have to think about it too much.
How about:
Things.ParallelEach(function(thing){
Console.Write("{0} is cool, but in parallel", thing);
// serious business goes here
});
There are lots of stupid loop structures in desktop apps that are just begging to be run in parallel, but the current crop of languages don't make it braindead easy to do so. Make it so every loop structure has a trivial, non-ugly (unlike OpenMP pragmas) way of doing it.
Also, IMHO, not enough languages do stuff like the JavaScript Array.Each(function(element){}). Am I blind, or is this construct missing from C#?
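For comparison, here's what that loop looks like under OpenMP in C — a single pragma, ugly or not. A minimal sketch (compile with gcc -fopenmp -std=c99; the array is just a stand-in):

    #include <stdio.h>

    int main(void)
    {
        int things[8] = {0, 1, 2, 3, 4, 5, 6, 7};

        /* the pragma splits the iterations across the available cores */
        #pragma omp parallel for
        for (int i = 0; i < 8; i++) {
            /* serious business goes here */
            printf("%d is cool, but in parallel\n", things[i]);
        }
        return 0;
    }

And on the C# question: List<T> does have a ForEach(Action<T>) method, and the Task Parallel Library's Parallel.ForEach is about as close to the ParallelEach proposed above as you can get.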
Re:unsurprising. (Score:3, Interesting)
There is a very interesting channel on YouTube called googletechtalks. There you can find a lecture called "We have it easy, but do we have it right" about performance measurement that really made me worry. Basically, you can't just compare performance by measuring CPU time, because a lot of factors determine performance. E.g., adding an environment variable before running a program can cause page alignments to change (even if the variable isn't used by the program), changing performance dramatically in some cases. The same goes for changing the link order: performance can change by 20%.
So much for determinism.
http://www.youtube.com/watch?v=DKVRkfXrBpg [youtube.com]
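That environment-variable effect is easy to see for yourself: on Linux the environment strings sit at the top of the stack, so growing the environment shifts every stack address below it. A small sketch in C (a hypothetical probe program; run it with address-space randomization disabled, or the addresses will move for unrelated reasons):

    #include <stdio.h>

    int main(void)
    {
        int probe;
        /* this address shifts as the environment grows */
        printf("stack variable at %p\n", (void *)&probe);
        return 0;
    }

Run it twice, e.g.:

    setarch $(uname -m) -R ./probe
    FOO=$(head -c 1000 /dev/zero | tr '\0' x) setarch $(uname -m) -R ./probe

and the second run prints a lower address: the same kind of shift that can move hot data across cache-line and page boundaries.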
Re:unsurprising. (Score:3, Interesting)
Processor affinity is even harder on modern CPUs. You often have 2 or so contexts sharing execution units and L1 cache in a core, then a few cores sharing L2 cache in a chip. Deciding whether to move a process is tricky. There's a penalty for moving, because you increase the cache misses proportionally to the distance you move it (if you move it to a context that shares the same L1 cache, it's not as bad as if you move it to one that shares only the L2 cache, for example), but there's also a cost for not moving it if a lot of processes on a single context are suddenly doing a lot of work while those on another core are idle.
Cache isn't the only problem, though. With something like the AMD architecture, each socket has its own local memory, so if you allocate memory on one node and the process later migrates to another, its memory accesses get slower (and accesses on the first node slow down too, since that node's memory controller has to interleave the remote requests with local ones).
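On Linux, libnuma gives you enough control to sidestep that trap: pin the process to a node first, then allocate, so the pages land in that node's local memory. A minimal sketch, assuming libnuma is installed (link with -lnuma; node 0 is an arbitrary choice):

    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this machine\n");
            return 1;
        }

        numa_run_on_node(0);                    /* keep this process on node 0 */
        char *buf = numa_alloc_local(1 << 20);  /* 1 MB from node 0's own RAM  */

        memset(buf, 1, 1 << 20);  /* touch the pages so they actually get placed */

        /* ... work on buf without crossing to a remote node ... */
        numa_free(buf, 1 << 20);
        return 0;
    }

Pinning before allocating means the only way to end up with remote accesses is a later migration, which is exactly the case the scheduler should be (but often isn't) avoiding.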
Reminds me of an issue... (Score:3, Interesting)
...I had with an Asterisk VoIP server. Under certain conditions, calls transferred from either of two receptionists' phones were bouncing back and ending up at the wrong voicemail. Since only two phones had the problem, I suspected something specific to those phones, but after checking their configuration and even their hardware, I moved on to the server.

I narrowed the problem down to one macro (a macro in Asterisk is basically a user-defined function) that lets a "fallback line" ring if the first is busy: it seemed to be getting an argument for the fallback line when there should have been none. Soon it became evident that the variable was changing mid-macro, apparently out of nowhere (variables with special names receive a macro's arguments, nowhere was this variable assigned, and the macro is less than 30 lines long).

I eventually got so frustrated that I put debugging lines between every single line of the macro to print the variables to the output log. That narrowed it down to one line: the one where Dial() is executed (the function that actually places the call, which isn't supposed to be able to change anything in the macro that called it, and there are no other problems like this). That had me totally stumped. I could demonstrate exactly what was happening, but I couldn't figure out why. Stranger still, the results changed slightly with the debugging lines in place, as if it were a race condition of some sort.
The problem still exists to this day :(