Understanding Pipelining and Superscalar Execution
Zebulon Prime writes "Hannibal over at Ars has just posted a new article on processor technology. The article uses loads of analogies and diagrams to explain the basics behind pipelining and superscalar execution, and it's actually kind of funny (for a tech article). It's billed as a basic introduction to the concepts, but as a CS student and programmer I found it really helpful. I think this article is a sequel to a previous one that was linked here a while ago."
Bad Timing (Score:1)
Re:Bad Timing (Score:1, Funny)
My Dad said I'd figure out what college was for some day. I'm kinda disappointed.
Re:Ouch! (Score:3, Funny)
Re:Ouch! (Score:2, Funny)
I'll have you know I have rolling credit, thank you. And I'm in VERY good standing.
Linux RULZ! MS SUCKS! Down with RIAA! Hilary Rosen is a dummy-head!
See?
Not exactly a dupe (Score:1, Redundant)
Re:Optimizing (Score:5, Insightful)
Re:Optimizing (Score:3, Informative)
I posted the link earlier, but here it is again: Alan Cox recently gave an IRC talk here [linux.org.uk].
P&H - Pipelining (Score:5, Interesting)
I just finished a CS course [berkeley.edu] co-taught by Professor Patterson, and our primary text this semester was Patterson and Hennessy's Computer Organization and Design [berkeley.edu].
When we discussed pipelining this semester, the analogy used was the four stages of doing laundry: washing, drying, folding, and stashing. Here are the lecture [berkeley.edu] notes [berkeley.edu] (both PDF). The notes spend a good deal of time going over the hazards of pipelines and how to avoid them.
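If the analogy helps, here's a trivial sketch of the laundry arithmetic in Python (the 30-minute stage time is an assumption, not from the lecture notes):

```python
# Laundry pipelining: with 4 stages (wash, dry, fold, stash) and a new
# load started every stage time, n loads take (stages + n - 1) steps
# instead of stages * n steps.
STAGES = 4
STAGE_MIN = 30  # assumed stage time in minutes

def sequential(loads):
    """Finish each load completely before starting the next."""
    return loads * STAGES * STAGE_MIN

def pipelined(loads):
    """Start a new load as soon as the washer is free."""
    return (STAGES + loads - 1) * STAGE_MIN

for n in (1, 4, 8):
    print(f"{n} loads: {sequential(n)} min sequential, {pipelined(n)} min pipelined")
```

With 8 loads that's 960 minutes versus 330, and the speedup keeps approaching 4x (the number of stages) as you add more loads.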
Re:P&H - Pipelining (Score:1)
How clever. It's not really a difficult concept to understand, but that analogy makes it even easier.
Re:P&H - Pipelining (Score:3, Funny)
Re:P&H - Pipelining (Score:2, Funny)
Re:P&H - Pipelining (Score:2)
Re:P&H - Pipelining (Score:5, Interesting)
I just finished an engineering course with the exact same book, and we did that exact same pipelining example. I must say that the book is really very good at explaining the workings of the CPU.
The best teacher, though, is design. As part of the course, we all formed groups and actually designed and implemented simple non-pipelined CPUs in VHDL and, if they fit, we implemented them on FPGA (field programmable gate array) boards. And by 'simple' CPUs, I mean that the CPUs had maybe 4K of RAM and 8 registers, ran at ~3 cycles per instruction, and had only about 12 instructions total. But it was REALLY informative because it forced us to learn the exact purpose of everything in the CPU.
Re:P&H - Pipelining (Score:1)
Re:P&H - Pipelining (Score:1)
What the heck? Were we in the same class? My grades were just released a few minutes ago as well! Were your prof's initials M.M.?
Re:P&H - Pipelining (Score:1)
(I don't know when my grades were actually released, but I just got back from Vegas an hour ago and found they were online when I checked.)
Re:P&H - Pipelining (Score:1)
OK, it was just a coincidence then. I was taking an embedded systems architecture course at a University in Canada (not Waterloo.)
Re:P&H - Pipelining (Score:2)
Re:P&H - Pipelining (Score:2)
Re:P&H - Pipelining (Score:3, Insightful)
Re:P&H - Pipelining (Score:4, Informative)
Actually, there is a lot of research on pipeline depths, and here [colorado.edu] is a paper that calculates the optimal pipeline depth for x86 to be around 50 stages. In fact, the authors theorize that you could see up to a 90% increase in performance in the P4 by making the pipeline even deeper. So not everybody thinks that the P4 pipeline is "a bit excessive."
think rambus (bad idea... memory width is a good thing!)
I'm a little confused here: until a few months ago, Rambus still offered superior memory bandwidth. It wasn't until DDR333 and higher that SDRAM started to catch up. Rambus didn't lose in the market because of performance.
Itanium (scrap an entire architecture for one that allows you to disable instructions, so that it is guaranteed that part of the processor won't be used at that point)
That is a pretty strange complaint about Itanium. Honestly, I find it odd that you consider that a problem at all.
Re:P&H - Pipelining (Score:3, Informative)
Rambus is a bad idea. It has significantly worse latency, and that latency increases with each memory module you add. The processor waits every time you ask for RAM; those are wasted cycles. There is a reason the P4 has an internal bus that is VERY wide compared to other processors. DDR's memory speed is in no way tied to the bus being inferior: the faster you make the chips, the faster the processor gets the data. With RAMBUS, you have additional latencies for the same speed of RAM. They are all made from the same DRAM design, and with the same speed chips, a wider bus will be faster.
I'm not upset that the Itanium changes instruction sets. I just don't think that any processor that disables part of itself will ever be optimal. I don't think lugging around old instruction sets is a good idea either. It's a waste of space that could have been used for some more full multipliers.
Don't take my word for it... go clock a P4 against another CPU and see how well it performs in sorting with RAMBUS memory. The bulk of the P4's gains over any other CPU come from SSE2, the 400 MHz FSB, and CPU clock speed. Take those away, and it will be slower per clock cycle than any other CPU (including the P3), especially if it has RAMBUS memory.
Re:P&H - Pipelining (Score:2)
It's pretty apparent from comparing the P3 to the P4 that longer pipelines are only better if you can manage to crank up the clock speed by a factor greater than the factor by which the pipeline length increased.
Look at the clock speeds of the P3 (architecture maxed out around 1.5 GHz) and P4 (3 GHz with lots of room to grow). They have "cranked up the speed" by more than enough to compensate for the lower IPC. If you are comparing a 1.5 GHz P3 to a 1.5 GHz P4 then you have missed the whole point entirely.
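As a back-of-the-envelope check (the IPC ratio below is an invented illustration, not a benchmark number):

```python
# Rough throughput comparison: performance scales with IPC * clock.
# The IPC figures below are illustrative placeholders, not measurements.
p3_ipc, p3_clock_ghz = 1.0, 1.5   # normalize the P3's IPC to 1.0
p4_ipc, p4_clock_ghz = 0.7, 3.0   # assume the P4 does ~70% of the work per cycle

p3_perf = p3_ipc * p3_clock_ghz
p4_perf = p4_ipc * p4_clock_ghz
print(f"P4/P3 throughput ratio: {p4_perf / p3_perf:.2f}x")  # ~1.40x
```

Even with a substantial per-clock handicap, the doubled clock rate comes out ahead, which is the whole design bet behind the deeper pipeline.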
Longer pipelines need better branch prediction.
Branch prediction is already very good. Are you suggesting that you should slow down the rest of the chip so that the fewer than 5% of branches that are mispredicted won't be as expensive? That's a pretty dumb trade-off.
Don't take my word for it... go clock a P4 against another CPU and see how well it performs in sorting with RAMBUS memory
OK, how about this [tomshardware.com]: even with the higher latencies (which are becoming less significant), the P4 has always performed best with Rambus, and it still does.
The bulk of the P4's gains over any other CPU come from SSE2, the 400 MHz FSB, and CPU clock speed. Take those away, and it will be slower per clock cycle than any other CPU (including the P3)
Once again, nobody disputes that the P4 is "slower" per clock cycle than other CPUs. That is why it is clocked 1 GHz higher than its competition, and it still has room to grow (current Athlon designs can't go much faster). Your statement is analogous to saying "Take away these extra french fries and the larger drink from my super-sized value meal, and it's just the same as a regular value meal".
Re:P&H - Pipelining (Score:2)
If they catch branches early on (by the 5th stage), then they will not lose much performance. But because there are 10 instructions still not complete, it's unlikely they could catch the branch until all the instructions are complete and ready to write back to RAM. Consider also the superscalar nature of the CPU. Now we are sitting around the 13th stage of the pipeline before we know which way to branch. If we were wrong, we lose 12*2=24 clock cycles. If branch prediction is 90% accurate (a stretch, I would say) and code is about 10% branch statements, then 1 in every 100 instructions will cause you to lose 24 clock cycles. That's only branching!

Now say that 20% of the time, the next instruction references the result of the current instruction. That usually causes the processor to insert NOPs into the pipe just after decode until there is enough spacing between the two instructions that the data can be forwarded when the second one is ready for its operands. In a normal pipe, that's only about 2 clock cycles; in a P4 pipe, it has to be at least 6. That is a factor of 3 slower about 20% of the time, and branches cost about the same, so a good 1/3 of the CPU's time is 3x slower on a P4. Sure, you can compile around some of it, but that is just silly. That's why the P4 introduced hyper-threading, which nothing is using yet: instead of inserting NOPs, it can insert instructions from other programs that can't possibly reference the same data. That would be great, but it can be applied to shorter pipes just as well.
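To make that arithmetic easy to check, here's the same estimate as a few lines of Python (all figures are the assumptions above, not measurements):

```python
# Back-of-the-envelope CPI penalty estimate using the numbers assumed above.
branch_fraction = 0.10      # ~10% of instructions are branches
mispredict_rate = 0.10      # 90% prediction accuracy
flush_penalty = 24          # cycles lost per misprediction (12 stages * 2-wide)

dependent_fraction = 0.20   # next instruction needs the current result
stall_deep_pipe = 6         # forwarding-delay bubbles on a deep (P4-like) pipe
stall_short_pipe = 2        # forwarding-delay bubbles on a short pipe

branch_penalty = branch_fraction * mispredict_rate * flush_penalty
deep_stalls = dependent_fraction * stall_deep_pipe
short_stalls = dependent_fraction * stall_short_pipe

print(f"extra cycles/instruction from mispredicts:       {branch_penalty:.2f}")
print(f"extra cycles/instruction from stalls (deep pipe): {deep_stalls:.2f}")
print(f"extra cycles/instruction from stalls (short pipe):{short_stalls:.2f}")
```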
The majority of the performance increase of the P4 has nothing to do with the pipeline. If the P4 had been built on a normal pipeline, you could have expected to see a 2.2 GHz machine right now that blew AMD away.
Intel had a good chance with their Itanium to make a good RISC processor: a good load-store, 4/8-way superscalar design with a short pipeline, and it could still do 3 GHz easily. Instead, they decided to graft on the kitchen sink and a million other transistors that will lie dormant half the time. Most of the CPU will be sleeping even when you need it. I'm more excited about the new IBM processors for the Mac.
Re:P&H - Pipelining (Score:2)
Am I the only person who therefore uses the word "clock" for what you do when a step has finished?
Branch Prediction (Score:5, Informative)
For those of you who didn't major in CS...
Imagine that we finish the first stage of building our SUV (building the engine) and commence with stage 2 (putting the engine in the chassis). While we are doing that, we are building another engine for SUV #2. However, what if the next customer didn't want an SUV, but instead wanted a compact car? We have to throw away our engine for SUV #2 and start over. We wasted an entire stage!!
This analogy doesn't work so well, it seems, so we'll stick with computers. Suppose you have 5 instructions in your pipeline and one of them is a conditional branch (think: if the user hits ENTER, print a message to the screen; if they hit ESCAPE, BSOD).
If the conditional instruction is high up in the pipeline, then every instruction behind it could be wasted. Obviously, if the processor could predict which path the branch would follow, it would waste fewer instructions.
Branch prediction algorithms are extremely interesting. The early ones were very simple:
Prediction: Never take the branch
OR
Prediction: Always take the branch
People soon realized that most branches were in loops, so they came up with a new algorithm:
Prediction: If the last time we were here we took the branch, take it again, otherwise don't take it. Basically, repeat what we did the last time we ran this instruction.
IIRC there are lots of branch prediction algorithms, some of which are eerily accurate (above 90%). Unfortunately, branch prediction requires cache, which takes away from the cache your programs need.
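To see how that last scheme behaves, here's a minimal one-bit predictor sketch in Python (the branch pattern is a made-up loop that is taken nine times and then falls through, repeated):

```python
# Minimal 1-bit "repeat what we did last time" branch predictor.
# Made-up pattern: a loop branch taken 9 times, then not taken, x100.
pattern = ([True] * 9 + [False]) * 100

last_outcome = False  # 1 bit of state: what the branch did last time
correct = 0
for taken in pattern:
    if last_outcome == taken:
        correct += 1
    last_outcome = taken  # remember the actual outcome

print(f"accuracy: {correct / len(pattern):.0%}")  # ~80% on this pattern
```

Note that the one-bit scheme mispredicts twice per loop (once entering, once exiting), which is what motivated the two-bit saturating counters used in later predictors.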
Re:Branch Prediction (Score:5, Informative)
Unfortunately, branch prediction requires cache, which takes away from the cache your programs need.
This [arstechnica.com] notes that the branch prediction unit has some cache that is separate from the other cache. It also notes that the PIII BPU has the "eerily accurate" prediction success you describe.
Re:Branch Prediction (Score:4, Informative)
I won't take the time to dig up references to see if any CPU architecture currently does all of these tricks, but consider all the things that can be done to minimize the impact of branches:
When a branch occurs, fetch instructions from *both* possible branch locations. Begin executing both sets of instructions in parallel, keeping the CPU's back-end busy. Flag these instructions with a "left branch" or "right branch" tag. When the branch test completes, toss out the wrong instructions and keep the good ones. Both branches will execute more slowly than a correctly-guessed branch that executes only one, but in the case of a mispredict, there is no pipeline flush and no delay waiting for the PC to update and new instructions to flow in. Also, it's hard on the caches and RAM subsystem, since two sets of instructions rather than one need to be fetched.
Build a better predictor. By analyzing the type of branch and surrounding code, the branch predictor can get eerily accurate. Way better than 90%. A K6-2 could get 90% accuracy with no sweat, and that's a pretty old chip.
Prefetch. Grab the branch test and run it way ahead of the branch itself (when possible). If the outcome of the branch can be determined before the branch is reached (using instruction reordering trickery), there is effectively no branch at all. This can be "stacked" with other techniques as well.
I'm sure you can come up with several more. It's an interesting problem to think about, with most techniques having a good mix of benefits and drawbacks.
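To get a feel for the trade-off between the first two approaches, here is a toy cost model in Python (every number below is invented for illustration, not taken from any real CPU):

```python
# Toy cost model: "predict one path" vs "fetch both paths".
def predict_one_path(accuracy, flush_penalty, base_cost=1.0):
    """Expected cost per branch when guessing a single path."""
    return base_cost + (1 - accuracy) * flush_penalty

def fetch_both_paths(dual_issue_overhead, base_cost=1.0):
    """Expected cost per branch when executing both paths in parallel.
    No flush is ever needed, but issue bandwidth is split between paths."""
    return base_cost + dual_issue_overhead

for acc in (0.90, 0.95, 0.99):
    cost = predict_one_path(acc, flush_penalty=20)
    print(f"predictor @ {acc:.0%}: {cost:.2f} cycles/branch")
print(f"both paths:       {fetch_both_paths(dual_issue_overhead=1.0):.2f} cycles/branch")
```

With these made-up numbers, fetching both paths only wins when the predictor is poor, which is consistent with designs leaning on better predictors instead.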
Re:Branch Prediction in PowerPC 970 (Score:3, Interesting)
(Disclaimer: recalling all of this from memory based on the paper I wrote a few weeks ago on the PPC 970. Forgive me if I over-simplify or mis-state something.)
The PPC 970 has three branch history tables (BHTs). Each one has 16K (2^14) entries of one bit each. One BHT follows the more-or-less traditional method of tracking whether or not the branch prediction from a previous execution of the instruction was successful. The second BHT associates each entry with an 11-bit vector tracking the last 11 instructions executed by the CPU, and uses that history to make its prediction. The third and final BHT is used to determine which of the other two has been more successful for the corresponding instructions: for each individual branch instruction, the third BHT indicates which method has had better success in the past, and that BHT is used as the branch prediction method for this execution of the instruction.
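A minimal sketch of that chooser idea in Python (this is a generic "tournament" predictor for illustration; the table sizes, indexing, and two-bit chooser below are assumptions, not the 970's actual design):

```python
# Generic tournament predictor: two simple predictors plus a chooser
# table that learns which predictor works better for each branch.
TABLE_BITS = 10
SIZE = 1 << TABLE_BITS

local = [False] * SIZE   # predictor A: last outcome, indexed by address
gshare = [False] * SIZE  # predictor B: last outcome, indexed by addr ^ history
chooser = [1] * SIZE     # 2-bit counters: 0-1 favor A, 2-3 favor B
history = 0              # global history of recent branch outcomes

def predict(pc):
    a = local[pc % SIZE]
    b = gshare[(pc ^ history) % SIZE]
    return b if chooser[pc % SIZE] >= 2 else a

def update(pc, taken):
    global history
    a = local[pc % SIZE]
    b = gshare[(pc ^ history) % SIZE]
    if a != b:  # train the chooser toward whichever predictor was right
        idx = pc % SIZE
        if b == taken:
            chooser[idx] = min(3, chooser[idx] + 1)
        else:
            chooser[idx] = max(0, chooser[idx] - 1)
    local[pc % SIZE] = taken
    gshare[(pc ^ history) % SIZE] = taken
    history = ((history << 1) | int(taken)) % SIZE

# Toy usage: an alternating branch; the history-based side learns it.
for i in range(10):
    outcome = (i % 2 == 0)
    guess = predict(0x40)
    update(0x40, outcome)
    print(f"predicted {guess}, actual {outcome}")
```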
CyberDave
Re:Branch Prediction (Score:1)
Holy Crap! (Score:5, Funny)
Re:Holy Crap! (Score:2, Informative)
Make sure you can identify RAW, WAW, and WAR hazards. (R == read, A == after, W == write)
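For self-testing, here's a tiny sketch in Python (the instruction encoding as (writes, reads) register sets is made up for illustration) that classifies the hazards between two instructions in program order:

```python
# Classify data hazards between two instructions in program order.
# Each instruction is a made-up (writes, reads) pair of register sets.
def hazards(first, second):
    w1, r1 = first
    w2, r2 = second
    found = []
    if w1 & r2:
        found.append("RAW")  # second reads what first writes
    if w1 & w2:
        found.append("WAW")  # both write the same register
    if r1 & w2:
        found.append("WAR")  # second writes what first reads
    return found or ["none"]

add = ({"r1"}, {"r2", "r3"})   # add r1, r2, r3
sub = ({"r4"}, {"r1", "r5"})   # sub r4, r1, r5  -> RAW on r1
print(hazards(add, sub))        # ['RAW']
```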
Love those tongue-in-cheek jokes... (Score:1, Offtopic)
-Cyc
huh? (Score:1)
Soviet sad man is saying: (Score:1, Funny)
Great (Score:3, Funny)
And you're a CS student?!!! (Score:5, Interesting)
In fact, Alan Cox gave a talk on this recently: UMeet2002 [uninet.edu].
Alpha Platform (Score:2, Funny)
perks (Score:1)
A request for a future Ars Technica story (Score:2, Funny)
Re:A request for a future Ars Technica story (Score:3, Interesting)
That'll teach me for trying to make a snide comment
Re:A request for a future Ars Technica story (Score:2)
I don't understand this. Black on white is a historical remnant of ink-on-paper technology, and it has no basis in CRTs or LCDs. In fact I find black on white quite annoying, and irritating to my eyes, particularly on a CRT.
In a reflective LCD (no backlight), black on white might make more sense. In general I think the background (dominant color) should be the "neutral state", which is black on CRTs and white/gray on reflective LCDs.
Re:A request for a future Ars Technica story (Score:2)
Not that this has anything to do with the Ars site. They use white on black, which is just as bad.
Re:A request for a future Ars Technica story (Score:2)
Enough with the bad analogies! (Score:3, Interesting)
Let's hope (Score:1)
At last! (Score:4, Funny)
Tomasulo's Algorithm... (Score:1)
Here [ed.ac.uk] is a great link if you want to visualize how this works.
Cut the bs (Score:1)
Last Post! (Score:1)
joey: what's so funny?
shh, joey is losing all sanity from lack of sleep
'yes joey, very funny'
Humor him
-- Seen on #Debian
- this post brought to you by the Automated Last Post Generator...