Cliff Click's Crash Course In Modern Hardware 249
Lord Straxus writes "In this presentation (video) from the JVM Languages Summit 2009, Cliff Click talks about why it's almost impossible to tell what an x86 chip is really doing to your code, due to all of the crazy kung-fu and ninjutsu it performs on your code while it's running. This talk is an excellent drill-down into the internals of the x86, and it's a great way to get an understanding of what really goes on down at the hardware level and why certain types of applications run so much faster than others. Dr. Cliff really knows his stuff!"
Fast forward... (Score:5, Informative)
I can't say I've WTFV like I usually RTFA before you get to see it... but I can tell you this: The first four minutes of the video are spent asking which topic the room wants to see. No need to watch that part. Then it gets more interesting.
Re:Code in high-level (Score:5, Informative)
Also either start with the assembly the compiler generates, or at the very least make sure to bench your own against what it makes. The Intel Compiler in particular is extremely good at what it does. As such, it is worth your while to see what its solution to your problem is, and then see if you can improve, rather than assuming you are smarter and can do everything better on your own.
Of course all that is predicated on using a profiler first to find out where the actual problem is. Abrash accurately pointed out years ago that programmers suck at that. They'll spend hours making a nice optimized function that ends up making no noticeable difference in execution time.
Re:Code in high-level (Score:4, Informative)
Coding in x86 ASM is never fun. Weird and odd and masochistically pleasurable for some, maybe, but not fun. Other architectures, on the other hand (like ARM), can be fun. x86-64 manages to increase the "funness" value somewhat, but I still wouldn't quite qualify it as "fun".
On the other hand, it's very true that knowing some ASM can help you write code that the compiler will translate into better assembly code, without going through all of the trouble yourself.
Re:Premature optimization is evil... and stupid (Score:4, Informative)
Using shift to multiply is often a great idea on most CPUs.
Which CPUs are those? The fastest way to multiply today on AMD and Intel chips is to use the multiply instructions.
Didn't know that? Yeah... it seems like only assembly language programmers know this.
Re:Code in high-level (Score:3, Informative)
It also depends on the compiler. GCC, for example, sucks at auto-vectorization, so it's easy to get 30% or more on loopy scientific code just by using SSE instructions properly.
In contrast, PGI or ICC is much harder to beat with hand-written assembly.
It's not just x86 (Score:4, Informative)
Features like out-of-order execution, caches, and branch prediction/speculation are commonplace on many architectures, including the next-generation ARM Cortex-A9 and many POWER, SPARC, and other RISC architectures. Even in-order designs like Atom, Cortex-A8, or POWER6 have branch prediction and multi-level caches.
The most important thing for performance is to understand the memory hierarchy. Out-of-order execution lets you get away with a lot of stupid things, since many of the pipeline stalls you would otherwise create can be re-ordered around. In contrast, the memory subsystem can do relatively little for you if your working set is too large and you don't access memory in an efficient pattern.
Re:Code in high-level (Score:2, Informative)
Or you could get NASM, which is open source :)
Re:Premature optimization is evil... and stupid (Score:5, Informative)
Those with a barrel shifter.
Then someone needs to beat the GCC developers with a cluestick. ...
$ cat test.c
int main(int argc, char **argv) {
    return 4*(unsigned int)argc;
}
$ gcc -march=core2 test.c -o test
$ objdump -d test
00000000004004ec <main>:
4004ec: 55 push %rbp
4004ed: 48 89 e5 mov %rsp,%rbp
4004f0: 89 7d fc mov %edi,-0x4(%rbp)
4004f3: 48 89 75 f0 mov %rsi,-0x10(%rbp)
4004f7: 8b 45 fc mov -0x4(%rbp),%eax
4004fa: c1 e0 02 shl $0x2,%eax
4004fd: c9 leaveq
4004fe: c3 retq
4004ff: 90 nop
I program in assembly language, but not for x86. I usually program in ARM, which always has a barrel shifter. I guarantee shifts are faster than multiplies there.
Re:Fast forward... (Score:5, Informative)
A little javascript-fu reveals that the video player points to a file (at http://flv.thruhere.net/presentations/09-sep-JVMperformance.flv [thruhere.net]) on some poor guy's machine through a dynamic DNS service! I hope somebody grabbed a copy before he (or slashdot) took his server down.
It's just outdated knowledge (Score:3, Informative)
People learn a trick way back when, or hear about the trick years later, and assume it is still valid. Not the case. Architectures change a lot and what used to be the best way might not be anymore.
Michael Abrash, one of the all-time greats of optimization, talks about this in relation to some of the old tricks he used to use. One was using XOR to clear a register on x86. XORing a register with itself gives 0, of course, and turned out to be faster than writing an immediate value of zero into the register. The reason is that loading a value was slower than the XOR op, and the old CPUs had no special clear logic; zero was just another number.
Ok well that's changed now. Our more complex modern CPUs have special logic for clears, and doing a move to the register with 0 is faster. So it was a time limited trick, useful back when he started doing it, but no longer something worth trying.
However, you'll still hear people say it is a great trick because they haven't updated their knowledge.
Re:Premature optimization is evil... and stupid (Score:3, Informative)
Re:It's just outdated knowledge (Score:4, Informative)
You are correct. XOR reg,reg was such a common instruction on the x86 that it essentially became the special-case CLR instruction. If you see a CLR instruction on an x86 assembly printout, it is the XOR instruction in disguise; the x86 has no CLR instruction.
Essentially, all current "simple" CPU instructions execute with the same speed. However, the XOR instruction is still faster than the MOV instruction because of instruction bandwidth and cache effects. Most code today is limited by cache and bandwidth limits, like the need to load instructions into the instruction decode pipeline immediately after a jump instruction. For a 32-bit register, MOV reg, 0 is a five-byte instruction (opcode plus a 32-bit immediate), while XOR reg, reg is a two-byte instruction. As such, in real code, the XOR instruction is usually slightly faster, because it results in smaller code.
Additionally, all of the modern x86 CPU implementations special-case the XOR reg,reg instruction, recognizing it as a register clear inside the instruction decode stage anyway. As such, no significant functional difference exists. The only case where a move instruction is quicker is when the condition codes are propagating a side-effect via the condition code registers. Thus, in theory:
ADD AL, AH
MOV CL, 0
JC somewhere
should execute quicker with a MOV instruction as opposed to a XOR instruction. However, in practice, this piece of code:
XOR CL, CL
ADD AL, AH
JC somewhere
executes with exactly the same speed, because the out-of-order execution units inside the x86 automatically optimize the code and make it equivalent. As such, you are best with the "short small" code, which means that the XOR reg, reg instruction is still the fastest way to do a register clear.
rule of the code (Score:3, Informative)
Just write good clean code that works properly first. The only time to optimize is after profiling has identified the troublesome spots. Given the way CPUs run and how compilers are designed, there is very little need for hand optimization. Unless you have taken some serious courses on how current CPUs work, your efforts will mostly result in bad code that gains you nothing in speed. Your time is better spent writing CORRECT code.
The compilers are very intelligent about proper loop unrolling, rearranging branches, and moving instructions around to keep the CPU pipeline full. They will also look for unnecessary/redundant instructions within a loop and move them to a better spot.
One of the courses I took was programming for parallelism. For extra credit, the instructor assigned a 27K x 27K matrix multiply; the person with the best time got a few extra points. A lot of the class worked hard in trying to optimize their code to get better times, I got the best time by playing with the compiler flags.
Re:Code in high-level (Score:5, Informative)
One of the biggest drawbacks of a language like C (and even more C++, and even more Java) is that they don't give you a whole lot of control over how stuff is arranged in memory.
I'd say this is more of a C/C++ problem than a Java problem. Or, rather, they are different problems. The problem with C and C++ is that they do give the programmer a whole lot of control about how things are arranged in memory. They don't, on the other hand, give the compiler a lot of freedom to rearrange things.
Java, on the other hand, uses the Smalltalk memory model and so the compiler (and/or JVM) is free to rearrange things in memory as much as it wants to (whether it does, of course, is a matter for the compiler writer). For example, a Java compiler that notices that you are doing the same operation on three instance variables is free to put them next to each other aligned on a 128-bit boundary with some padding at the end so that you can easily use vector instructions on them, even if they were originally declared in different classes. A C compiler can not do this with structure fields.
If you really care about alignment in C, you are free to use valloc() to align on a page boundary and then subdivide the memory yourself. Most of the time, however, it's not worth the effort.
Re:Code in high-level (Score:4, Informative)
Re:Premature optimization is evil... and stupid (Score:3, Informative)
Re:It's just outdated knowledge (Score:3, Informative)
Actually the reason us old fogies normally used XOR A, A rather than LD A, 0 wasn't because it was faster but rather because it was smaller - 1 byte rather than two bytes (instruction + immediate operand). On the old memory constrained 8-bitters, these assembly "tricks" were all about saving a byte here, another byte there...
Re:Could someone give me a crash course (Score:3, Informative)
I hear they have nice 0xC0FFEE
Re:Fast forward... (Score:4, Informative)
some poor guy's machine through a dynamic DNS service!
Some poor guy? It's on an Amazon EC2 server!
Re:Fast forward... (Score:5, Informative)
You've done it! Interested slashdotters can download the video file at this link:
http://67.202.36.223/presentations/09-sep-JVMperformance.flv [67.202.36.223].
Good detective work, partner!
Re:Code in high-level (Score:1, Informative)
One of the biggest *advantages* of C/C++ as a systems language is that it gives you lots of control over how you arrange memory. You can write your own allocators. You get a guaranteed layout of your structs, and using the extensions the C/C++ compilers implement you *can* force alignments. You go a bit outside of the standard by using the extensions, but you can encapsulate their use with the preprocessor, and porting is less of a hassle than if you write your stuff in straight assembly.
What the compiler can't do is rearrange the data automagically so that it runs faster, so it is the programmer who has to think about that... just as in assembly (but in a more comfortable way).
Re:Fast forward... (Score:4, Informative)
And please stop hitting the server linked in GP's post. The poor server can hardly sustain 30 KB/s.
Re:Fast forward... (Score:4, Informative)
And change the number at the end to change slides.
PS: I am no good at JavaScript, but the above, in FF 3.5/Linux, just displayed a page saying "true".
Re:rule of the code (Score:3, Informative)
Re:rule of the code (Score:2, Informative)
Re:rule of the code (Score:3, Informative)
One of the courses I took was programming for parallelism. For extra credit, the instructor assigned a 27K x 27K matrix multiply; the person with the best time got a few extra points. A lot of the class worked hard in trying to optimize their code to get better times, I got the best time by playing with the compiler flags.
Really? Because I had a similar assignment (make Strassen's algorithm as fast as possible, in the 5-10k range) in my algorithms class a while back. I found that the key to a blazing fast program was careful memory layout: divide the matrix into tiles that fit into L1, transpose the matrix to avoid striding problems. Vectorizing the inner loops got another large factor. Compiling with -msse3 -march=native -O3 helped, but the other two were critical and took a fair amount of effort.
Re:Code in high-level (Score:3, Informative)
Intel compilers have options to optimize for more than one target, and the runtime dispatcher then uses the code path generated for the CPU it is running on. Sure, your binary is larger, but everyone is happy.
Re:Code in high-level (Score:5, Informative)
This will give you a 64-bit vector type, so you can fit one in an MMX register, or two in an SSE or AltiVec register. You can then create these and do simple operations on them. For example, if you wanted to add two together, you could do this:
In this case, the add is constant so it will be evaluated at compile time, but in the case where a and b have unknown values GCC will emit either four scalar add operations or one 64-bit vector add.
You can also pass them as arguments to vector intrinsics, which are listed in the manual under target-specific builtins. These correspond directly to a single underlying vector instruction, so if you look in the assembly language reference for the target CPU then you will find a detailed explanation of what each one does.
Rather than declare vector types directly, it's often a good idea to declare unions of vector and array types. This lets you use the same value as both an array and a vector.
I wrote a longer explanation a while ago [informit.com].
Re:Premature optimization is evil... and stupid (Score:5, Informative)
This is one of the reasons that GCC sucks compared to ICC and VC++.
Let me give you the facts as they are today. In isolation, both the shift instructions and the multiply instructions have the same latency and throughput, and are also performed on the same execution units.
If this were the entire story, then they would be equal. But it's not the entire story.
The shift instructions only modify some of the flags in the flags register; essentially, the shift instructions must do a read/modify/write on the flags. The multiplication instructions, however, alter the entire flags register, so they only perform a write.
"But Rockoon.. they are the same latency anyways, right?"
Essentially, one of the inputs to the shift instruction is the flags register, so all prior operations that modify the flags register must complete first, and no instruction following the shift that also partially modifies the flags register can complete until the shift is done.
In some code it won't make any discernible difference, but in other code it will make a big difference.
As far as that GCC compiler output goes... that code is horrible, and not just because it's AT&T syntax.
There are two alternatives for multiplying by 4 that should be in competition here, and neither uses a shift.
One is a straight multiplication (MASM syntax, CDECL):
main:
mov edx, [esp + 4] ; 32-bit version, so +4 skips the return address
imul eax, edx, 4
ret
The other is leveraging the LEA instruction (MASM syntax, CDECL):
main:
mov eax, [esp + 4] ; 32-bit version, so +4 skips the return address
lea eax, [eax * 4]
ret
The alternative LEA version, in isolation, is slower on some processors (the P4, for example).
GCC is great at folding constants and such, and even calculates constant loops at compile time... but it's big-time-fail at code generation. GCC is one of the compilers that one optimization expert struggled with: he was trying to turn a series of shifts and adds into a single, far more efficient multiplication, and the compiler converted it back into a series of shifts and adds on him. Fucking fail.
Re:Premature optimization is evil... and stupid (Score:2, Informative)
Two books to look at: Cormen et al.'s Introduction to Algorithms, and Bentley's Programming Pearls. The latter is more practical; the former is used in many CS algorithms courses.