

Intel to Increase Stages in Prescott
Alizarin Erythrosin writes "Further contributing to the MHz Myth, The Register and ZDNet are reporting that the new P4 core, codenamed Prescott, will have a longer pipeline than Northwood. No official numbers have been released, but The Reg says an Intel spokesman called 30 stages a reasonable estimate. As most of us know, a longer pipeline can lead to slowdowns in the form of branch mispredictions and pipeline stalls. 'And just as the PIII proved faster than the early P4s in some applications, it's likely that Northwood will similarly prove faster than Prescott, which has clearly been designed for speeds of the order of 4GHz.'"
Holy pipelines (Score:3, Funny)
Re:Holy pipelines (Score:5, Funny)
Re:Holy pipelines (Score:3, Interesting)
Re:Holy pipelines (Score:3, Offtopic)
A trans-Afghan pipeline has been encouraged by the US for years preceding the latest invasion of the country. It may never be built, but it is still being pushed by the US [yahoo.com]. There has been news trickling in fairly steadily in the past two months about this, e.g. from the Times of India, Jan 12 [indiatimes.com].
The Kazakhs HAVE a good deal of oil/gas - it needs to get south and west. Maybe you're referring to the BTC pipeline project that replaced the first trans-Afghan pipeline plan.
the idea put forth b
Bang for your buck (Score:5, Funny)
2 stars.
Re:Bang for your buck (Score:3, Funny)
Size of pipeline (Score:4, Funny)
Re:Size of pipeline (Score:3, Funny)
Re:Size of pipeline (Score:5, Interesting)
A 6-stage pipeline with terrible branch prediction and all sorts of holes in it isn't going to do any good at all, while a 30 stage pipeline with great branch prediction (and the P4 does have great branch prediction) and few bubbles or holes (improved SMT, aka hyperthreading, is supposed to help here) will do wonders.
Of course, the real question is not how long the total pipeline is, but the branch mispredict penalty. It should be noted that the "Northwood" P4 has a 28-stage pipeline, but only a 20-stage mispredict penalty. If the "Prescott" has a 30-stage pipeline with a 22-stage mispredict penalty, it isn't exactly a huge change.
I guess the home market rules... (Score:5, Interesting)
-ghostis
Re:I guess the home market rules... (Score:2)
One-off number crunching... (Score:4, Interesting)
Programmer time is much more expensive than faster machines.
Re:I guess the home market rules... (Score:4, Interesting)
Back in my days of internship at the Canadian Space Agency, I'd program multiple custom apps to pre-process the data before it was fed to the mainframes of a contractor for finite element analysis. Matlab is the tool to use for anybody involved in scientific projects. Yes, your code in C will run much faster, but it'll take significantly longer to get it up and running.
If you run a lot of loops and they're really bogging the performance down, you can program just those sections of code in C and compile them against the Matlab libraries so you can call them from Matlab like the native commands. I did one piece of code that took a finite element file and created the 3D model in Matlab. It took 20 minutes to run the pure Matlab code, and 3.45 seconds once I had compiled the tough part in C.
In the end it's all about using the right tool, and for this kind of engineering work, Matlab is excellent.
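For anyone wondering what "compile the tough part in C" looks like in practice, here's a minimal sketch of a C MEX extension. The function name and the trivial sum-a-vector job are made up for illustration; the point is just that the hot loop runs as compiled C while Matlab keeps calling it like a built-in.

#include "mex.h"

/* Gateway routine: Matlab calls this as, e.g., s = fastsum(x).
   nlhs/plhs are the outputs, nrhs/prhs are the inputs. */
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    const double *x;
    mwSize i, n;
    double s = 0.0;

    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("Expected a single double-precision array.");

    x = mxGetPr(prhs[0]);
    n = mxGetNumberOfElements(prhs[0]);

    for (i = 0; i < n; i++)   /* the loop that is slow when interpreted */
        s += x[i];

    plhs[0] = mxCreateDoubleScalar(s);
}

Build it with "mex fastsum.c" and call it from Matlab as s = fastsum(x).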
Re:I guess the home market rules... (Score:3, Interesting)
Matlab, Schmatlab, I want to write some code! (Score:5, Informative)
Your EE or ME or ChemE full professor, back when he was a grad student, could have written a FORTRAN program to compute some stuff and write output to a numeric text file or perhaps draw some plots using a subroutine library. You are probably thinking that anyone who can't sling together C programs using vi to draw graphics straight to X is a luser, but I am talking about pretty technically savvy people who don't have time to spend on this stuff and who employ armies of Engineering majors from foreign lands who are not up on this stuff either.
My own take is that if a particular numerical calculation can be easily programmed by some package, it must not be on the cutting edge of research, because someone has already done it. Besides, if your software package is really deep, most of the effort goes into the architecture and the data flows and into graphics, and the RAD bit only simplifies a tiny part of what you are spending your time on. A high-powered scientific data visualization is really a video game, and how many video games are implemented in Matlab?
But what Perl is to text processing, Python is to collections, and VB is to slinging together a GUI, Matlab is to numerics (what used to be FORTRAN libraries) -- it may not have the best algorithms, but it has a lot of algorithms -- it has a semi-decent scripting language, and it has some facility with producing plots from your computations and other data.
Now that's the thing -- if you are doing matrix operations or using some canned function (most likely C under the hood), Matlab is as fast as fast can be. The minute you start looping in Matlab, it is interpreted and the speeds are in the Python range.
Before you knock it completely, it has very good integration with Java modules -- more seamless than with C modules. While Java may be pokey for its GUI, for tight numeric loops the JIT is almost as fast as C -- no joke, a person should consider writing numeric extensions to Matlab in Java of all things, especially on Windows where they tweaked up Java 1.4.2_03. And how many scripting languages (OK, Jython) have this level of Java integration?
But as a scripting language, Matlab has its shortcomings. It started out as a matrix calculator and has had features grafted on in a hodge-podge Visual Basic 6.0 kind of way. In terms of its data type restrictions and fubar scoping rules and brain-dead object extensions, I don't think, as they say, it scales very well.
My other peeve is that it is proprietary, and while Math Works is not Microsoft, I worry if engineering schools, emphasizing use of "commercial packages students will use in the real world when they graduate" (as opposed to professors dinking around with their homebrew software for use in instruction), are becoming trade schools shilling for the big software houses. I don't have a lot of experience with it, but in place of Matlab we should be using stuff like Python and the Python NumPy extension -- Open Source alternative, comparable performance, C extensions for speed, but much more Turing complete, consistent, and scalable.
And where is Matlab 6.5 using Java internally? Try doing a Files Open to start editing a Matlab script (M-file) with the Matlab editor window. One potato, two potato, three potato, and the window comes up. Now what language has that kind of GUI lag, I wonder what it could be?
Re:Matlab, Schmatlab, I want to write some code! (Score:3, Insightful)
Really, the New window should be made once, the optimizations saved in the assembly cache, and the same window used to subseq
Re:Matlab, Schmatlab, I want to write some code! (Score:3, Insightful)
Unfortunately, all of 'em (including MatLab) suck if you're working with chunks of data that are bigger than your cache, because you end
Nice plug how about.... (Score:3, Informative)
Re:I guess the home market rules... (Score:4, Insightful)
However, Intel rates their chips by clockspeed, and with the less-efficient pipeline, a 3 GHz P4 is not three times as fast as a 1GHz P3.
Thus, as chips get faster, AMD's chips will get better performance, not only cycle-for-cycle, but even rating-for-rating!
Re:I guess the home market rules... (Score:5, Informative)
Re:I guess the home market rules... (Score:3, Insightful)
I don't have hard data on this, but doesn't the impact of the pipeline depend on how the software it runs is compiled? If the object code is compiled to reduce branches, the longer pipeline should drastically speed up processing. That would theoretically make a 3GHz P4 MORE than three times as fast as a 1GHz P3.
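To make that concrete, here's a hedged sketch (in C, names invented for the example) of what "compiled to reduce branches" can mean: the same reduction written with a data-dependent if, and rewritten so the condition turns into arithmetic that compilers typically lower to a conditional move instead of a jump the predictor can get wrong.

#include <stddef.h>

/* Branchy version: the branch predictor must guess the comparison
   every iteration, and each wrong guess costs a pipeline refill. */
long sum_positive_branchy(const int *a, size_t n)
{
    long s = 0;
    size_t i;
    for (i = 0; i < n; i++) {
        if (a[i] > 0)
            s += a[i];
    }
    return s;
}

/* Branch-reduced version: the comparison becomes a 0/1 multiplier,
   which compilers usually turn into setcc/cmov rather than a jump. */
long sum_positive_branchless(const int *a, size_t n)
{
    long s = 0;
    size_t i;
    for (i = 0; i < n; i++)
        s += (long)(a[i] > 0) * a[i];
    return s;
}

Whether this actually wins depends on the data and the chip, which is exactly why "MORE than three times as fast" is only a theoretical best case.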
Re:I guess the home market rules... (Score:5, Interesting)
MUL EAX,EBX [DIMMMM]
ADD ECX,EAX [_D___IE]
So in total it takes seven cycles.
The same code on the P4 would take at least 15 cycles. What's worse is, consider:
MUL EAX,EBX [DIMMMM_]
ADD ECX,EBX [_DIE___]
INC ESI [_DIE___]
DEC EBP [__DIE__]
ADD EBX,EDX [__D__IE]
Again this takes seven cycles, especially since instructions 1 and 2 can start in cycle two in pipes 1/2.
Compare that to the P4 which only has two ALU pipes [one of which is now stalled for 14 cycles for the MUL to finish].
Tom
Re:I guess the home market rules... (Score:3)
MUL EAX,EBX [DIMMMM__]
ADD ECX,EBX [_DIE____]
INC ESI [_DIE____]
DEC EBP [____DIE_]
ADD EBX,EDX [____D_IE]
[use a fixed-width font to read that...] That's eight cycles, not seven.
[Where D = decode, I = issue, E = execute]
Tom
Re:I guess the home market rules... (Score:5, Funny)
"DIMMMM / DIE / DIE / DIE / D_IE" ... You aren't an employee of Rambus Inc. by any chance?
Re:I guess the home market rules... (Score:3, Informative)
No. You are clearly wrong. The PR rating is relative to an AMD Thunderbird Core. If you don't know what you are talking about, you should just shut up. Here is a link [pureoc.com] and here is another. [ezresult.com]
Intel are shouting about megahertz because it's all they have. For most real world applications (ie. Not en
Pipelines != Math Performance (Score:4, Interesting)
How come your computer takes seconds to multiply two 400 digit #s, but ages to factor them?
Re:Pipelines != Math Performance (Score:5, Interesting)
The decoder can send up to three instructions into the pipeline per cycle. Actually that's only for DirectPath instructions [e.g. simple ALU/FP]. Vector instructions stall all three decoders.
The ALU scheduler is fairly strong, but it does have several weaknesses. From the manual I can't see that it can resolve dependencies from other pipelines. For instance,
ADD EAX,EBX [DIE ]
ADD EBX,EAX [D IE ]
ADD ECX,EBX [D IE] - critical path
INC ESI [ DIE ]
D == decode, I == issue, E == execute [p. 227 of the Athlon opt manual].
So the fourth instruction will always start on the second cycle despite the fact that ALU1/2 are blocked.
Similarly the Athlon memory ports are a bit weak. There are read/write buffers but you still can only issue two reads or one write per cycle which is annoying.
However, the strength of the Athlon ALU over the P4 ALU is that for the most part it can keep all three pipelines busy even if they are blocked at some stage [e.g. it can decode/issue even if blocked]. It doesn't say in the documentation but I could swear the Athlon can cross-pipe things too. Cuz sometimes I can mess with the order of ops [e.g. create a dependency] and it executes in the same time regardless.
Anyways, yeah it's all about the 3 ALUs and a decent scheduler. Something the P4 does not have.
Tom
Re:Pipelines != Math Performance (Score:3, Interesting)
Yup. E.g. splitting movps -> movlps+movhps does indeed make a performance gain.
I meant VectorPath instructions like DIV, LGDT, etc...
They stall all three decoders. As for alignment, the trick is to pack as many instructions as possible into 8-byte aligned windows. According to the manual it fetches 24-byte windows and performs one [or two, I forget... the PDF is so far away] pass of scan/early decoding.
So the trick is to organize your code so that each 8-byte segment has as
Re:Pipelines != Math Performance (Score:3, Insightful)
Please tell me you have at least the 2 brain cells required to know that this benchmark is far from accurate.
Anyone who does ANY form of editing on a Mac won't touch Premiere 6 with a 100-foot pole. Why? Because Final Cut Pro smashes it to little tiny pieces you could use to flavor your coffee.
Microsoft Word? Tell me you're kidding. The benchmark was doing search-and-replaces. This is dependent on so many things ranging from hard di
Do you know what you're talking about ? (Score:5, Interesting)
It's much more likely the size of the L2 cache is affecting you (i.e. your working set does not fit into P4's L2 cache but it does in Barton's).
If you don't believe me, try the demo version of the Intel VTune performance analyzer on Matlab running one of your programs.
How well your caches perform is probably the most important thing for a processor today, as the speed of the main memory is a couple of orders of magnitude under the speed of the processor. It takes a couple of hundred cycles to service an L2 miss, while a long FP operation takes at most 20 cycles.
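For illustration (a minimal sketch, with arbitrary sizes), here's the kind of working-set effect being described. Both loops below do identical arithmetic over a row-major C array; the second one strides across rows, so once the matrix is bigger than L2 nearly every access is a cache miss costing those hundreds of cycles, and it runs many times slower even though the FP work is the same.

#include <stddef.h>

#define N 2048   /* 2048 x 2048 doubles = 32 MB, far larger than a typical L2 cache */

/* Row-major traversal: consecutive accesses hit adjacent memory, so
   every cache line pulled in from DRAM gets fully used. */
double sum_by_rows(const double (*m)[N])
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Column-major traversal of the same array: each access jumps N*8 bytes,
   so almost every access misses L2 once the matrix exceeds the cache. */
double sum_by_cols(const double (*m)[N])
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += m[i][j];
    return s;
}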
Re:Do you? (Score:3, Insightful)
Re:I guess the home market rules... (Score:3, Interesting)
Disclaimer: I am not an EE, so I could very well be full of shit.
Re:I guess the home market rules... (Score:5, Insightful)
If you were to use SSE2 you would see an incredible performance boost.
I doubt it, I really do. Present-day x86 chips aren't limited by their FP processing speed, the real problem is memory latency and bandwidth. For instance, my 1.8 GHz P4 regularly performs in excess of 1 Gflops when running benchmark tests for the ATLAS [sourceforge.net] BLAS. However, these benchmarks are specifically designed to fit in cache, to have predictable branching, etc etc.
Unfortunately, in real-world situations cache thrashing is difficult to avoid, and accurate branch prediction is a highly non-trivial affair. When a prediction turns out to be wrong, the cost of refilling a stalled pipeline increases in proportion to the pipeline length. The ever-lengthening pipelines of P4 chips means that, although its FP performance may r0x0r, the overhead of stalls makes production code run like treacle.
Re:I guess the home market rules... (Score:3, Interesting)
intel pentium IV, 3.2 GHz: 5.0 minutes
athlon XP, 1.533 GHz: 5.7 minutes
intel pentium III 733 MHz: 8.1 minutes
From the PIII to the PIV, a 340% increase in processor speed, I get a 60% increase in performance...
what is "processor speed"? (Score:4, Informative)
No processor, barring a complete architecture change (in which case it's a different processor entirely) will double its performance simply by doubling the clock speed.
It really depends on how you define performance too and what your software is doing. Doing heavy I/O? Processor has little to nothing to do with I/O - it just hands it off to the bus and I/O controllers to take care of and then does something else while waiting for the interrupt.
Re:I guess the home market rules... (Score:3, Informative)
Re:I guess the home market rules... (Score:3, Interesting)
Re:I guess the home market rules... (Score:3, Informative)
Re:I guess the home market rules... (Score:4, Informative)
Both SSE and 3DNOW use formats the normal FPU can read so I'd say it's standard [hint: you can assign an array of two well aligned floats to a 3dnow 64-bit word and use it].
SSE supports both double/float precision [as another poster pointed out]. Heck, even the Athlon supports SSE [though I wouldn't use it. Hint: an SSE reg == 128 bits and the Athlon CPU can only perform up to 64 bits of read per cycle...]
Tom
Re:I guess the home market rules... (Score:4, Informative)
It performs precise math by default. You can only use 32 or 64 bit floats; the "long double" 80 bit floats are not supported. But this often isn't a problem. You can also turn off denormals, along with interrupts on bad math (divide-by-zero type stuff). Turning those off hasn't given me any performance boost, but I still consider these things features, not bugs. There are some low precision operations available, but no compiler I know of uses them unless you ask for 'em. I do in some cases, but then I know what I'm getting.
A math person may give you a better answer than me. I'm a graphics person, a field where SSE2 is a godsend compared to the stack-based floating point units that came before.
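For the curious, a rough sketch of the sort of thing that's meant (the function name and the array-add example are invented, not any particular graphics code): SSE2 intrinsics work on two packed doubles per instruction in flat XMM registers instead of the old x87 register stack.

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stddef.h>

/* c[i] = a[i] + b[i], two doubles per instruction. Unaligned loads and
   stores keep the example simple; aligned data would be faster still. */
void add_arrays_sse2(const double *a, const double *b, double *c, size_t n)
{
    size_t i;
    for (i = 0; i + 2 <= n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(c + i, _mm_add_pd(va, vb));
    }
    for (; i < n; i++)       /* scalar tail for odd-length arrays */
        c[i] = a[i] + b[i];
}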
History repeats itself..... (Score:5, Interesting)
I suspect AMD and even Apple are going to shrink Intel's bragging rights in that same time frame unless Intel gets their act together. From AMD's recent earnings report it sure seems somebody is buying Athlon 64's.
Intel blew it when they made the decision to let 32 bits ride for another 2 to 3 years. They look like old fuddy-duddies now. It's AMD and Apple via IBM that has the cool shit.
Re:History repeats itself..... (Score:5, Insightful)
In the meantime, Intel has the one-two bait and switch with P4-Celeron and the true P4. If they didn't have a TON of money and market clout, they'd be in big doo-doo right about now. As it is, AMD is the one in big doo-doo, not because they have the lesser product, but because of Intel's clout.
Listen to any computer commercial, and they pretty much all have those 5 co-advertising tones at the end. That's monopoly power, that's market clout. (If I were in charge, the antitrust penalty would ratchet up every time those tones sounded.)
Maybe Intel blew it, but they'll survive.
Re:History repeats itself..... (Score:5, Insightful)
We don't want them to die. We want them to pass through it and come out an older and wiser company, less inclined to pull shit it has learned the hard way it can't get away with, no matter how big it is.
Compare the IBM of 2004 to the IBM of 1984.
If Intel were to "die", the resulting market would have lost the wisdom that Intel is likely to learn over the next couple of years, barring some technical miracle.
Re:History repeats itself..... (Score:5, Insightful)
Same with the P3, the P2, the Pentium, the 486, 386, 286 (Even though no one adapted to this shit) and the 086. So yes, history repeats itself, and it is for good (at least on this one).
So What ? (Score:4, Interesting)
Re:So What ? (Score:2, Insightful)
Re:So What ? (Score:5, Insightful)
And of course, Intel's motivations are entirely performance, or at least price/performance, not marketing.
The fact that every other company has chosen a different design decision and has made better chips as a result is just an illusion foisted on us by those who think their own thoughts.
Re:So What ? (Score:4, Informative)
Intel P4 and Xeon beat 4 of the 5 you name on SPEC.
Re:So What ? (Score:5, Insightful)
I'm not wishing to knock Intel, but it seems that these days it's whoever has the newest fabrication plant that wins. Intel brings out a new line of chips: they're faster. So AMD brings out a new line of chips later on: bang! they're faster still. And so the merry dance goes on.
Of course, this is all to the consumer's good as it means there's far more competition. But as far as the consumer is really concerned it doesn't matter so much who currently has the fastest chip as whose chip currently offers the best value while still being "fast enough". For my money that's been AMD for a while now.
Re:So What ? (Score:4, Insightful)
Re:So What ? (Score:5, Funny)
Oh. Yeah... LINUX.
Nevermind-- go back to writing the best OS there is.
Re:So What ? (Score:3, Interesting)
Intel has the fastest chips (by a fine RCH), but AMD has
It doesn't take much experience to notice flaws. (Score:3, Insightful)
When the tire of my car explodes on an open road, it would not take much expertise on my part to diagnose it as a problem with my tire (they really aren't supposed to explode). And, when it happens to many other people with the same tire, it wouldn't take any e
Re:So What ? (Score:3, Insightful)
The company I work for invented the first 16-bit microprocessor EVER, the CP1600 (ok, to be fair, it was a joint effort between us and a partner company), which was released in late 1974, when Intel was a scant 6 years old and PC meant "Pissing Clear." Intel was still a long 4 years away from introducing the 8086, which was only an 8-bit CPU anyway.
Nobody ever talks about the CP1600 because it was not oriented toward "personal" computers. After all, why the he
Re:So What ? (Score:3, Informative)
Pipeline stalls (Score:4, Interesting)
They could minimize this by creating two different conditional branch instructions for each condition: one for cases where the programmer expects the branch to be taken most of the time, and one for where the branch is rarely taken. They could then optimize the pipeline behavior for each case. If it's a 'likely branch' instruction, it could start fetching instructions from the branch target. If it's an 'unlikely branch' instruction, it could prefetch the next instructions after the branch.
This would work well in loops where every time but the last, the processor branches back to the top.
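That mechanism already exists in software form on some toolchains: GCC's __builtin_expect lets the programmer mark a branch as likely or unlikely so the compiler can arrange the code for the expected path (and emit hints where the CPU supports them). A minimal sketch, assuming GCC or a compatible compiler:

#include <stddef.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* A loop that branches back to the top every time but the last:
   marking the condition as likely optimizes layout for the taken case. */
double sum(const double *a, size_t n)
{
    double s = 0.0;
    size_t i = 0;
    while (likely(i < n)) {
        s += a[i];
        i++;
    }
    return s;
}

/* Error paths are the classic 'unlikely branch'. */
int check(const char *buf)
{
    if (unlikely(buf == NULL))
        return -1;
    return 0;
}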
that's branch prediction... (Score:3, Informative)
You can also go back and "fix" instructions to an extent (and not in all cases) while in the pipeline in case of incorrect branching. x86 sort of sucks for this though because of the variable length instructions.
A lot of computer science is based on that kind of statistics. You see it
It's not that it'll be slower... (Score:5, Informative)
What this means is that it will take a higher clock speed (4GHz, for instance) to do the same amount of processing as the Northwood core. However, increasing the pipeline should allow Intel engineers to achieve higher clock speeds, as the longest transistor path will likely be shorter (faster switching times).
In essence, Intel is attempting to increase the speed of their CPUs by focusing on increasing the clock speed (P4), while AMD is focusing on increasing the amount of calculations per clock cycle (Hammer).
Of course, there are a lot of more complex tradeoffs that factor in (e.g. branch prediction). I highly recommend reading a computer architecture book if you're at all interested. It's really fascinating stuff.
Re:It's not that it'll be slower... (Score:5, Funny)
dude, i don't even read the articles.
Re:It's not that it'll be slower... (Score:3, Interesting)
It'll most likely be slower per clock cycle.
Yes, I agree. My guess is that they're trying to achieve higher absolute performance. What surprises me is that this is still considered a P4 core, since adding pipeline stages (even 1 stage) is a very non-trivial task.
This'll also kill the benefits of reduced power consumption of 90 nm technology (increase in area from the additional pipeline registers, increase in frequency), which is important in server design. An argument about the benefits of having a t [wisc.edu]
hmmm... (Score:3, Informative)
Re:hmmm... (Score:5, Informative)
yep (Score:5, Insightful)
Stay away from x86 if you're just starting out...
Intel bit by their own tricks? (Score:5, Interesting)
Re:Intel bit by their own tricks? (Score:3, Informative)
Pentium 4-M: Redesigned to run cooler, yet still a P4
Pentium M (Centrino): Redesigned Pentium III to take advantage of modern technology (400MHz bus, SSE2, etc.) and be cooler yet.
Celeron M: Pentium M failure/economic bin. Half the cache.
Re-read the article: The Reg is GUESSING 30 (Score:5, Informative)
Slower than Northwood? (Score:4, Interesting)
Low-power consumption devices (Score:5, Interesting)
Word has it that VIA is readying a new x86 processor to their line that supposedly has P3-class FPU performance while maintaining the same levels of poser consumption as its predecessors. It is expected that this processor may actually have a big win in front of it for DirecTV boxes. With the extra CPU horsepower, it should be exciting to see what nifty features come out of this, especially considering most set-top CPUs generally just act as "traffic cops" for the data moving between ASICs. If they're really making the move to this class of processor, perhaps they've got more in mind.
--JT
Re:Low-power consumption devices (Score:4, Insightful)
Re:Low-power consumption devices (Score:3, Funny)
Think what that would do for the world! Poser-powered PCs? They'd absolutely *FLY* off the shelves. e=mc^2 says I could stop worrying about the electric bills and heat the house with computers. One poser a decade would more than do it.
Utility computing my arse! What we really want is computing *without* using utilities, and this is it, folks, the real deal. Buy your poserPC today!
compilers (Score:4, Informative)
Sounds Like Marketing (Score:4, Interesting)
I've been working with Dual Opterons for a few months now, and have been very impressed by their speed, heat dissipation, and bang for the buck.
A large data transformation job (really doing a scrape of a mainframe report for data) on the order of 1.1GB processed much faster on an IBM E325 Dual Opteron 2.0GHz running 32-bit Windows (ack) than my Dual 2.4GHz Xeon (w/HT) running Windows (double ack)....
Yeah- it's not a benchmark, but it is real world performance.
Prescott vs. Northwood - Insides exposed (Score:3, Informative)
(Credit: Got it off The Register from this article [theregister.co.uk])
Myth? (Score:5, Funny)
Let me guess - 'Alizarin Erythrosin' is Cupertinus Elvish for 'Mac User', right?
ummm... (Score:3, Funny)
no, i didn't know that
Re:ummm... (Score:5, Funny)
Re:ummm... (Score:3, Funny)
Doesn't matter to me... (Score:4, Insightful)
Personally I'm tired of trying to keep up with the GHz war between AMD and Intel. With our current technology, the only areas really pushing processing speeds are gaming and video/image applications (that I'm aware of). My grandmother doesn't need a P5 at 4GHz to check her email, and neither do I if I simply want to write a paper.
Scientific work on optimal pipeline depth (Score:5, Informative)
A. Hartstein and Thomas R. Puzak (IBM): The Optimum Pipeline Depth for a Microprocessor [colorado.edu], ISCA 2002.
M.S. Hrishikesh, Norman P. Jouppi, Keith I. Farkas, Doug Burger, Stephen W. Keckler, Premkishore Shivakumar (UT Austin, Compaq): The Optimal Logic Depth Per Pipeline Stage is 6 to 8 FO4 Inverter Delays [utexas.edu], ISCA 2002.
Eric Sprangle , Doug Carmean (Intel): Increasing Processor Performance by Implementing Deeper Pipelines [colorado.edu], ISCA 2002.
A. Hartstein and Thomas R. Puzak (IBM): Optimum Power/Performance Pipeline Depth [microarch.org], MICRO 2003.
What all these papers have in common is that they find that increasing the pipeline depth past 20 stages increases performance.
Summary of article (Score:3, Funny)
Let me guess...42?
Re:Scientific work on optimal pipeline depth (Score:4, Interesting)
> What all these papers have in common is that they find that increasing the pipeline depth past 20 stages increases performance.
Is that a typo, or am I misinterpreting the papers you linked above?
In all but the Intel paper, it looked to me like they were saying the optimal pipeline depth was somewhere between 6 and 20 (depending on workload).
In the introduction of the Intel paper, it says "Focusing on single stream performance". So, basically they are focusing on artificial benchmark performance.
Most of us know (Score:3, Funny)
4-stage pipeline (Score:3, Funny)
Is this the right move? (Score:5, Interesting)
Most real world tests point to AMD chips being faster. The Int and Floating Point tests still belong to the P4 3.2, but the P4 is having to pass the 1st place trophy to AMD when it comes to games and office productivity.
And then there is price. For $320 you can get $700 worth of Intel performance. Mind you this is the AMD64 running in 32-bit mode.
It would appear that all that is really needed to justify mass market adoption is a consumer OS, that would be Windows XP 64-Bit extended. Currently in Beta. The only delay there is that the
After that - we just need to see some AMD adoption in the mainstream pc builders.
Effective pipeline (Score:4, Interesting)
Technical discussion (Score:5, Informative)
what, are you an expert? (Score:5, Insightful)
Get off your high horse. Intel architects aren't dummies. Itanium benchmarks are starting to whoop some serious ass and the P4 and Athlon have been neck-and-neck for years. I'm sure Prescott will perform very well.
I can get into all kinds of architecture speak as to why your simplistic notions of mispredictions and pipeline stalls might not be so terrible. Who knows? Maybe Intel will execute both paths of a branch? They've already got partial instruction replay to make squashes much less expensive. With deep speculation, a big instruction window, good bypassing capabilities, and effective non-blocking caches, "pipeline stalls" are not an issue due to branch mispredictions. The bigger issue is memory latency/bandwidth and Intel has always done well with that. A branch misprediction can be easily tolerated...an L2 cache miss can't.
More misinformation -- for "MHz Myth" fans (Score:4, Informative)
It's nothing personal, but articles like this one, as well as posts like this, drive me absolutely batty with the amount of incorrect ideas propagated. It's not that one particular person is misinformed -- it's just that the amount of generally bogus information is silly.
First off, at some point, as far as I can tell, a bunch of people read Maximum PC or somesuch consumer "PC enthusiast" magazines, and read about "The Megahertz Myth". Maybe Ars Technica ran the story that started all this. Heck if I know. All that the original author was trying to do was point out that people shouldn't judge processors strictly by clock speed.
Boy, did they ever create a monster. Somehow, a bunch of folks managed to get the idea that Intel was pulling this as some sort of PR job to deliberately trick people into buying their processors. For Chrissake, this is such an incredibly stupid idea. The OEMs have purchasers that know what they're buying. Not only are they not going to just sit down and look at benchmarks, they're going to have a bunch of test machines built when deciding what to go with. That and business considerations outweigh any "MHz rating". The OEM market just plain doesn't care. The only people getting excited about the "MHz Myth" are the "PC enthusiasts", a tiny, tiny sliver of a group when it comes to dollar value. If the sort of "PC enthusiast"
riffraff really think that they constitute any kind of a significant market to Intel -- enough for Intel to *redesign their entire processor*, using a longer pipeline and higher clock rate, around getting them to purchase a computer, they are vastly overestimating their own importance in the universe.
When Intel makes the decision about a new processor, it's a pretty safe bet that they don't run out and say "Gee, how would Joe Assmunch in Marketing like us to structure this thing?" They have many, many PhDs in chip and circuit design who have many competing ideas about what the best designs would be. They run many, many simulations before even thinking about deciding on major design decisions.
The "PC enthusiast" folks who think that Intel has taken this path to trick those people that buy from Dell, and that, ho ho ho, *they* are smart enough to see through the trick are ridiculous. If Intel wanted a high clock rate to put on stickers, they could jack the thing through the sky, run at 10GHz, then demux data and only accept data at a lower rate into the various units. Some of the units would move to even more instructions per cycle.
The *current* poster is talking about *keyboard* and *mouse* events? "USB chatter"? Those don't even show up on the *radar*. You roll that mouse, send your 200 Hz interrupts, and you worry about 200 measly mispredictions per second? Just blowing away the page table cache during process switches (which runs at 100 Hz on Linux 2.4 x86 by default) already dwarfs any misprediction performance hit from the said devices, and folks frequently bump it up by an order of magnitude or so and don't see any measurable performance hit -- on Pentium IIs.
As for DMA, the entire point of DMA is so that the processor *isn't* running code from the host. It can continue on in its own happy little world while a co-processor pokes at the memory bus.
You might see significant branch misprediction issues with an inner loop with a branch statement that flicks back and forth just about every loop or so to screw over the branch caching. And "significant" is still pretty minor. The compilers hint to the CPU whether a branch is likely to be taken...it's not as if there's this massive, awful mistake that all the chip designers in the world are making that Joe I-Built-My-Own-Computer-
Re:More misinformation -- for "MHz Myth" fans (Score:3, Informative)
More details on Intel's processor (Score:5, Funny)
The Quantium has the following new features:
Re:Hmmm. (Score:2)
Re:Why? (Score:2)
and + more clock speed headroom == faster again later.
Re:Why? (Score:5, Interesting)
In reality the CPU will be somewhat faster than current ones due to the higher clock, but much less efficient.
Why not just dump MHz as a rating altogether? Wouldn't a FLOPS-based rating (floating-point operations per second) or something similar be a better measurement? Maybe how far a simple program can compute pi in a second? We should really be looking at an operation-based measurement rather than a clock-based one.
Re:Why? (Score:5, Interesting)
Didn't AMD try to organise this and recently concede it wasn't going to happen [theregister.co.uk]?
As long as any metric favours one particular manufacturer, the rest will try to replace it with a new one. The result will be more FUD and more confused users ("I've finally worked out what GHz are and you tell me I have to look at the number of flops?!?")
</Pessimist>
Re:Why? (Score:5, Interesting)
The MHz myth is the belief that the One True measure of CPU performance is clockspeed: a 2GHz CPU is twice as fast as a 1GHz CPU, a 4GHz CPU is twice as fast as a 2GHz CPU.
While it may not seem common to many of us, if you speak with a large number of average people about computer performance, you will quickly want to kill yourself. Or them. Or both.
This isn't the fault of the general public, as Intel's marketing machine takes advantage of this common belief. Intel Pentium IV processors are some of the highest clocked processors in the world, and they benefit from everyone that thinks this somehow matters.
Re:Why? (Score:5, Interesting)
Yes, high clockspeed "speed demon" chips can and often do outperform high-IPC "brainiac" chips. Whether the final performance of the fastest Pentium IVs ends up being as high as or even higher than the fastest competitor does not change the fact that Intel has made no effort to dispel the MHz myth--and it IS a myth--and has in fact encouraged it.
I said nothing of final performance figures. I was stating that the marketing gimmick is that MHz is an accurate measure of speed, which it is not--even between different revisions of Intel's own Pentium IV core, let alone in comparison to their competitors.
"Until the Athlon64/Opterons AMD had no answer to the P4. They just couldn't quite keep up. And you people harped on the same thing "Ooh, it's a marketing gimmick!"."
Athlons and Pentium IVs have been leapfrogging each other for years. If you believe that 32-bit Athlons were never competitive with Pentium IVs, you are quite mistaken. I would be happy to help you research the issue.
You want a marketing gimmick? How about selling a 64-bit CPU to people who have like 512M of memory. There's your gimmick.
You may not be aware of this, but it is actually an intelligent idea to fix problems before they become problems.
--LBA-48 was introduced before more than a tiny fraction of people had hard drives that were larger than the 128GB limit. Is it a marketing gimmick that LBA-48 supports multi-petabyte drives? (2^48-1 512-byte sectors).
--Serial ATA, and even ATA100, were introduced long before any hard disk drive could possibly approach a 100MB/sec sustained transfer rate. Even today's world's fastest hard drive, the Fujitsu MAS3735, cannot quite reach 80MB/sec. Did you know, however, that the same situation occurred with ATA66, ATA33, ATA16, etc.? Perhaps engineers should have waited until the performance barriers were making drive upgrades pointless before introducing faster means of communication? After all, "no hard drive could possibly even approach 33MB/sec" --1995.
The same applies to 64-bit processors.
The average Dell comes with what, 256MB RAM? Probably 512MB now? That is 1/8 of the "4 GB barrier" of 32-bit pointers. Actually, that barrier is either 1.5GB, 2GB, or 3GB depending on your operating system.
Now, let's think: Have you ever seen the average amount of RAM in a system double? I seem to remember 4MB being "plenty" and 16MB being "wasteful and ridiculous". I seem to remember 32MB being the standard, and anything over 128MB was an unwise waste of money.
Do you think that maybe, possibly, that pattern might repeat? Perhaps--since it has happened every few years for decades--the average amount of RAM in a system might increase? Applications might want more than 4GB of address space? Quake 5 may require 6GB RAM minimum (16GB recommended)?
In case you were not aware, the 64-bit mode of the Athlon64 provides real performance benefits, whether software cares about the extra address space or not. Many algorithms, particularly encryption, data management, HL math, high precision math, media en/decoding, and compression can make use of the larger register size.
The fact that there are double the number of GPRs (that stands for "General Purpose Register" Ohhh, ahhh) and that the amount of data that one can fit into those GPRs has quadrupled, helps ALL software that is more than a 20-line assembly language experiment. Hell, even having 16GPRs (twice as many as previous x86 chips), the AMD64 architecture is still considered register-starved. Look at the PowerPC, the IA64, the AXP, the UltraSPARC, and just about any other mainstream high-performance processor architecture.
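As a purely illustrative sketch (not from any real codec or cipher) of how wider registers pay off even with modest RAM: the two loops below compute the same XOR fold, but the 64-bit version does half the loads, half the XORs, and half the loop iterations, simply because each GPR holds twice the data.

#include <stdint.h>
#include <stddef.h>

/* 32-bit-at-a-time fold: one load, one XOR, one branch per 4 bytes. */
uint32_t xor_fold32(const uint32_t *p, size_t nwords)
{
    uint32_t acc = 0;
    for (size_t i = 0; i < nwords; i++)
        acc ^= p[i];
    return acc;
}

/* 64-bit-at-a-time fold: the same work per iteration covers 8 bytes,
   so a 64-bit CPU issues roughly half the instructions for the buffer. */
uint64_t xor_fold64(const uint64_t *p, size_t nwords)
{
    uint64_t acc = 0;
    for (size_t i = 0; i < nwords; i++)
        acc ^= p[i];
    return acc;
}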
You may want to look at the reviews from reputable publications showing substantial performance gains from 64-bit Opteron software, including software that could not care less if you have >4GB of memory. Hint: Tom's Hardware is not on that list.
Is a 10%-30% performance boost a gimmick?
Re:Why? (Score:4, Informative)
Imperfect a measure though it may be, it's a hell of a lot easier to relate to and compare than "how many FPS of Quake3 can I get?" or "how quickly can it compile the 2.6 kernel?"
That very question has long been a topic of heated debate. Years ago, AMD launched an initiative to create a nonbiased (so they say), general purpose universal benchmark. It never went anywhere as far as I know.
Overall, Winbench 'XX is a good benchmark because it shows actual performance in real-world applications (albeit somewhat old ones). For games, the only reliable means of benchmarking is to test those individual games, or at least assume similar performance across many games that use the same game engine. The game industry is converging because of the extreme difficulty of developing truly sophisticated 3D graphics engines. I predict that within 5 years, there will be at most 3-5 major game engines used by 90% of high-budget games. A general benchmark of these 3-5 engines (or however many there turn out to be) could be used, either taking their average and giving an overall "gaming score", or predicting the performance of the many games based on each engine based on extensive benchmarking of a few titles using each.
Server benchmarking is not an issue, because those involved in the tests often know what they are doing.
As far as Unix benchmarking, well, that is a major pain in the ass. That certainly does not mean that we should rely on clockspeed, or god forbid on BogoMIPS. A standard benchmark based on the compilation time of a certain version of Bash was proposed not too long ago. Because many Unix geeks are developers, this would not be a bad start. As for pure CPU tests, perhaps a mix of BZip2, large-scale encryption, and
Benchmarking is a science, an art, and a rather large pain in the ass.
Your point is well taken though.
Re:Why bother with x86... (Score:3, Insightful)
Re:Silly intel (Score:3, Informative)
Re:Silly intel (Score:4, Insightful)
WTF? Please, just have a look at some IA-64 assembly code! It's NOT pretty, especially if you want it to go fast. You've got to do the whole explicitly parallel thing, manually packing together independent instructions according to what pipelines you want to run them in.
Itanium is NOT a RISC machine like Sparc, not in the least. Sparc is much more closely related to x86 than it is to IA-64. The Itanium is a VLIW chip, or EPIC in Intel-speak. It's a whole different animal altogether.
FWIW, here's a brief article [intel.com] where Intel talks about implementing a bubble-sort in IA-64 assembly vs. the original C. In particular, they start with the code that the Intel C compiler generates and optimize it. Their final, optimized version of the algorithm is on page 5, and it's anything but easy.