ArsTechnica Compares the P4 and G4e: Part II
Deffexor writes "It looks like Hannibal of ArsTechnica fame has put up Part 2 of his comparison article between Intel's P4 and the Apple/Motorola G4e. In a nutshell, this second article covers the execution core, the AltiVec unit and SSE2, and a myriad of other interesting factoids. An interesting read, if a little technically intense for those of us without a CE/EE degree. Have at it, boys!"
As Always an interesting read (Score:1)
Re:As Always an interesting read (Score:1)
His CPU Articles are like Tootsie Pops (Score:2)
Like with a Tootsie Pop, you start licking, and finally get too impatient and just bite the damn thing.
Mmm...chocolatey goodness...
Alright! (Score:2, Interesting)
Re:Alright! (Score:1)
Re:Alright! (Score:2)
Doubling the clock speed *should* provide noticeable differences in performance for *all* applications, not just new games. If these new games are all that benefit, then the hardware has been designed wrong for a general-purpose PC.
Case in point: here at work my old P2-266 was replaced with a P3-500. I noticed absolutely NO difference in performance. My Linux box (K6-2/400) at home seemed slow with only 64 MB of RAM. Adding 128MB to it (192 MB total) did more for performance than doubling the clock speed of the machine at work!
Re:Alright! (Score:1)
Re:Alright! (Score:1)
Re:Alright! (Score:2)
Re:Alright! (Score:2, Informative)
Anyone who has ever upgraded or put together their own PC should know that performance depends on all the parts, not just the CPU or memory. Maybe your applications don't speed up because you are still using that old hard drive, which is the bottleneck? Yes, it would be nice to purchase a magic chip that you throw into a computer to speed everything up dramatically, but the reality is that no computer (that I know of) works that way. I'd love it if I could install a new CPU and get 20% more bandwidth, but it just ain't gonna happen. The reason your applications do not seem faster is that they are probably already as fast as any mere mortal can detect.
Depends what you use your computer for (Score:1)
Another view (Score:5, Funny)
Very, very Off Topic (Score:3, Informative)
That's a very amusing comic, much more so than UF. Any idea why it's never mentioned on SD?
technically intense.. (Score:4, Funny)
Tell me about it. I do have more than CE, two letters even, namely MCSE, and even I had to stop when they started throwing around the heavy stuff. I mean, A = A + B is supposed to make sense even if B isn't equal to zero.
Re:technically intense.. (Score:1)
Wow...
Re:technically intense.. (Score:1, Offtopic)
That doesn't apply. The important formula here is that A>=A+B if and only if B is non-negative. MCSE-CE = MS, and since you didn't understand the full article, this implies that the addition of MS to a title actually reduces knowledge, not increases it.
:)
Re:technically intense.. (Score:1)
Re:Who let this one out of his cage? (Score:2)
As for the math part (yes, I knew it was about computer instructions), there isn't a single problem with A = A + B with B not equal to 0. Modular arithmetic, for example. But then again, I am not sure what grade math level you need for that.
Re:Who let this one out of his cage? (Score:1)
What? You just defined 0 in a mathematical sense. What's your example?
Re:Who let this one out of his cage? (Score:1)
First of all, I am interested in which number would be defined as 0, since I assumed B to not be equal to 0 (unless you are counting modulo B, of course).
Furthermore, any mathematician who would accept a definition of 0 in a mathematical sense without a uniqueness clause should be fired.
And if you still do not get it, think clock times, where 1 am + 24 hours equals 1 am.
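Here's a minimal C sketch of that clock example (the variable names and the 24-hour representation are mine, purely for illustration):

    #include <stdio.h>

    int main(void) {
        int a = 1;   /* 1 am */
        int b = 24;  /* add 24 hours */
        /* working modulo 24, a + b lands right back on a, even though b != 0 */
        printf("%d am + %d hours = %d am\n", a, b, (a + b) % 24);
        return 0;
    }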
Re:Who let this one out of his cage? (Score:1)
But this was not it.
~jeff
Re:Who let this one out of his cage? (Score:1)
OT: Potential Immortals (Score:1)
From one potential immortal to another, I'm not dead yet!
Steve M
Re:Who let this one out of his cage? (Score:1)
nuff said.
t.
conclusion (Score:2, Redundant)
Although he does confirm Steve Jobs's words of wisdom: MHz aren't everything
(I, on the other hand, am picking sides)
G5 is coming soon (Score:3, Interesting)
A more interesting comparison will be to pit the P4 against the coming G5. According to the Register, Apple has begun seeding early G5's at up to 1.6 GHz [theregister.co.uk] to key developers. Other sources [macosrumors.com] are claiming limited yields in the 2.4 GHz range already.
There are still bugs to be worked out before production ramps up for release early next year, and supposedly AltiVec will not be as strong on the G5 as it is in the G4. But at 2.4 GHz on an already-superior FPU, who needs it?
Re:G5 is coming soon (Score:1, Funny)
Oh yes, and we all trust MOSR... especially me, as I type this on my flat-panel iMac. Oh, wait, that's not a product (yet), is it? :-)
Seriously.. rumour sites aren't credible.. don't try to make them be.
Re:G5 is coming soon (Score:1)
They also seem to be more adept at spotting a bad photoshop job than some sites [slashdot.org] :)
Re:G5 is coming soon (Score:1)
ie: Will I be able to just buy a chip upgrade and keep my pretty tower?
Re:G5 is coming soon (Score:1)
Re:Um... (Score:3, Insightful)
The whole market for motherboard upgrades comes from this situation. Apple does not support motherboards that it did not manufacture, and the OS used to check for the presence of genuine ROMs, so third parties could not build replacement boards. By upgrading only the CPU subsystem, the rest of the motherboard remains genuine Apple and therefore runs the system without problems.
Also remember that Mac hardware tends to be more expensive and to last longer than PCs. While the performance boost you get by upgrading only the CPU subsystem is lower, the impact on the workstation is also lower. Changing a motherboard means changing the system and dealing with new drivers - basically more maintenance work.
This situation might change with Darwin: theoretically, nothing prevents some company from producing PPC motherboards, recompiling Darwin for them, and then building an installer that installs OS X on top of Darwin. Old machines that Apple does not support could run OS X this way.
Re:Um... (Score:1)
And you're right -- with the entire bottom half of the OS being open source, anyone can wangle around with it and try and get it working on their hardware.
MOSR is more reliable than /.! (Score:1, Funny)
Re:G5 is coming soon (Score:3, Insightful)
Re:G5 is coming soon (Score:1)
Re:G5 is coming soon (Score:1)
For example, the K6 was designed to compete with the Pentium MMX and early Pentium II chips. There should be few surprises in a study that compares the performance of the K6 to that of the Pentium 4.
Apples to apples (Score:1)
Re:Apples to apples (Score:1)
come on, who are they [apple] kidding?
Re:G5 is coming soon (Score:2)
It is fair to compare whatever happens to be on the market at any point in time.
If Motorola is reacting instead of acting, that's just too bad for them. It doesn't make the comparison unfair.
Yes, that will be a good and interesting comparison when I can buy a machine with a G5 in it.
ppc power (Score:4, Insightful)
the high end ppc desktops are topping out around 900MHz, while the p4's are hitting 2GHz. there has to be another explanation besides the complaint that jobs is ignorantly sitting on his thumbs. i think he knows what he's doing.
note: i am not a mac zealot.. i don't even own a mac - only 4 x86 pc's (1 athlon, 2 p133, 1 p120). i simply can appreciate the speed of the ppc.
Re:ppc power (Score:3, Insightful)
What a strange statement, considering that neither Jobs nor Apple create the PPC chips!
Re:ppc power (Score:1, Troll)
Re:ppc power (Score:5, Interesting)
Two factors come into play here.
The first is that, if I remember correctly, PPC and x86 chips use a different clocking scheme. This means that clock rates between them aren't even directly comparable (what a "clock" is depends on the clocking scheme).
The second is that it's perfectly possible that the PPC architecture is limited to lower clock rates than the x86 architecture. Signal propagation through gates takes time. If one architecture expects signals to propagate through logic three gates deep per clock, and another expects signals to propagate through logic five gates deep, then of *course* one will have a faster maximum clock rate than the other. They would hopefully still be doing the same amount of work per unit of real time.
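To put toy numbers on that (the gate delay and depths below are made up for illustration, not real figures for either chip):

    #include <stdio.h>

    int main(void) {
        double gate_delay_ns = 0.1;  /* assume a hypothetical 100 ps per gate */
        int depths[] = { 3, 5 };     /* logic depth per pipeline stage */
        for (int i = 0; i < 2; i++)
            printf("%d gates deep -> max clock ~%.2f GHz\n",
                   depths[i], 1.0 / (depths[i] * gate_delay_ns));
        return 0;
    }

The 3-deep design tops out around 3.33 GHz and the 5-deep design around 2 GHz, yet per unit of real time they may be doing the same amount of work.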
You should already be familiar with this from the Athlon/P4 spin war. A 0.18 micron Athlon core simply cannot be clocked as quickly as a 0.18 micron P4 core - no matter what you do. Does this make the Athlon automatically a poor performer? No, because it can do more per clock. Does this make the Athlon automatically kill the P4, because it "can do more per clock"? No, because the P4 can be clocked faster. Only real benchmarks will tell.
A point against Apple is that Apple has been allergic to publishing SPECmarks for its processors for the past couple of years (the only PPC-ish benchmarks are IBM's benchmarks of the Power series of chips, which forked after the G3 IIRC). This removes a very consistent (if somewhat flawed) means of comparison.
Re:ppc power (Score:1)
Someone should create a site debunking the Photoshop myth. I would if I weren't so lazy...
The truth, as usual, probably lies between the megahertz and Photoshop camps, and is about what a reasonable person would expect: the G4 is roughly equivalent in performance to the P4 and Athlon, code can be highly optimized for any of those processors, and most users don't need that much computational power anyway.
Re:ppc power (Score:3, Insightful)
Actually, the Photoshop benchmark is completely valid. Apple's largest market for their G4s is the designer community. The most popular application for those people, and the application they spend the most time waiting for, is Photoshop. Therefore, when they're out to buy a new computer, the most important thing to them is how fast it runs Photoshop.
Additionally Apple has been using two benchmarks lately: Photoshop and movie compression. High-end video is Apple's second-largest market for their G4s, and this market spends most of their time waiting for video effects, such as compression. This is also a valid benchmark.
Thirdly, microprocessors are incredibly complex, and end-user speed is also dependent on many factors, including software and OS optimizations. It is absolutely useless to compare the speed of processors by some numerical benchmark. What's important to anybody who wants to be productive on a computer is how quickly your key applications run. And lo and behold, this is what Apple is benchmarking.
Remind me again why this benchmark is invalid? If anything I would suggest that any benchmark besides end-application speed is useless.
- j
Re:ppc power (Score:2)
-AP
Re:ppc power (Score:3, Interesting)
Furthermore, Motorola isn't sitting on their collective thumbs -- they're simply targeting a market whose requirements are different from Apple's: the embedded market.
This is all hugely ironic, because the RISC architecture was s'posed to result in chips that could cycle faster but did less per cycle. Instead, it's the CISC chips that are cycling faster and doing less per cycle.
Re:ppc power (Score:2)
Sorry, try again:
Of course I know that current x86 CPUs have a RISC-like core. They're still the heirs of the 8086, and that's what's ironic.
Re:ppc power (Score:2)
The Intel chips are really fast at "hurry up and wait", but not nearly as fast at "hurry up and do something".
Nice. (Score:5, Interesting)
Re:Nice. (Score:2, Interesting)
Plus, the reports are from Adobe, who make Photoshop... an extremely Mac-optimised piece of software usually held up by the Mac brigade [techtv.com] whenever they attempt comparative benchmarks.
Look, I *want* to believe that the G5 makes great coffee, gives fantastic backrubs, cures cancer and runs faster than every P4. I do. I've just heard all these lines before, with the G4.
Re:Nice. (Score:2, Insightful)
I'm pretty sure Adobe doesn't care whether you spend your $900 on the Windows or the Mac version of Photoshop.
Re:Nice. (Score:1)
Re:Nice. (Score:3, Informative)
I just got back from a seminar with Motorola on the architecture of their new and upcoming chips. Some nice stuff on the horizon. They did mention some of the less-than-stellar performance of prior chips, and explained that it was due to software not taking advantage of the chip features. An operating system or user application that must be backwards compatible will not be able to utilize the chips to full advantage.
You don't judge high-end chips based on mass-market consumer applications.
Maybe pointlessly detailed (Score:5, Interesting)
When I read articles like this, there's so much detail that I find myself--even willingly--losing sight of the big picture. Sure, you could read a detailed write-up about Toyota's new engine, but those details don't really matter much unless you've made a hobby of knowing about engines. Realistically, you'll have a hard time connecting those details to your driving experience. Heck, someone could put in a different engine, tell you that it's a Toyota, and you'd be saying things like "Oh, yes, this feels just like a Toyota, I can tell that the designers did blah and blah."
After the Pentium II generation of CPUs, things have gotten very, very muddled. Amazing features that are supposed to increase performance don't always do so. Sometimes they make things worse. Little compiler tweaks can make one program be twice as fast as another, given the same hardware. Chips with higher clock rates can be significantly slower than chips with 20% slower clocks. Certain applications run much faster than on previous chips, but there are others that show no increase.
It's all very chaotic and confusing, even for people in the know. I suspect that if you took a program that people claimed to need a P4 or Athlon for--something very performance sensitive--and set yourself the task of making it run faster on a PII than an Athlon, you could do it. But that doesn't matter, as everyone seems to be clamoring for newer chips.
Re:Maybe pointlessly detailed (Score:1, Insightful)
> have gotten very, very muddled.
To continue with your engine analogy: after the fifties, things have gotten more complex. Variable valve timing is common. BMW is working on technology (if it's not in production already) that opens/closes valves using electromagnets - no more camshaft! All electrical. Probably both more reliable and more efficient.
Turbocharging is now common - the combination of a small displacement engine and turbo is commonly found and provides a good compromise of power and efficiency/mileage.
As the technology increases, designs become more complex. This isn't always a bad thing.
Re:Maybe pointlessly detailed (Score:2)
Feel free to debunk it. Explain why it's better for developers (and the user experience) to have to work out how to optimise for a new pipeline every couple of years, rather than to squeeze every last drop of speed out of one design before moving on to the next on (e.g.) a five-year cycle.
Ever looked at the specification of a Playstation [e-scapegames.co.uk] and wondered how on earth developers got it to do what they had it doing by the end of its lifecycle? Ever wondered why early Playstation 2 games bit the weenie?
I'm not claiming that the original poster was right, or you are wrong (if that's indeed your position), I'm saying that it's debatable. How about debating rather than sneering?
Re:Maybe pointlessly detailed (Score:3, Insightful)
Feel free to debunk it. Explain why it's better for developers (and the user experience) to have to work out how to optimise for a new pipeline every couple of years, rather than to squeeze every last drop of speed out of one design before moving on to the next on (e.g.) a five-year cycle.
The whole point of modern processor architectures is for the work of pipeline optimisation to be done by either the processor or the compiler. The goal is less work for the application developer. This coincides with the trend towards higher-level programming languages - I can't think of any large application architected such that the code needs to be hand-optimised at the instruction level. Sure, after profiling some parts may be tuned, but applications today are just too large to design with that scope in view. A huge amount of the transistor budget these days is taken up by logic that performs these optimisations on the fly, though with architectures such as Itanium you're seeing a move towards compile-time ordering. But now I'm getting sidetracked...
Ever looked at the specification of a Playstation [e-scapegames.co.uk] and wondered how on earth developers got it to do what they had it doing by the end of its lifecycle? This is a time-honoured trend in closed-hardware systems. Look at the last generation of games on the SNES. Look at the scene demos being put out for the Amiga in the mid-90s, essentially a decade-old hardware platform at that point.
Ever wondered why early Playstation 2 games bit the weenie?
No first-generation title harnesses the full capacity of a machine; I think this is your point. But on open systems developers don't have to optimise anything, thanks to Moore's Law. Maybe it doesn't strictly live up to the "small-is-beautiful" aesthetic, but software development is about optimising results. Time will be spent where the greatest payoff is, and since performance boosts are a natural consequence of progress, more resources are devoted to development.
Re:Maybe pointlessly detailed (Score:2)
How long has Microsoft Visual Studio 6.0 been out for? How many new pipelines have come out in that period? Thanks for making my point for me. ;-)
Re:Maybe pointlessly detailed (Score:2)
to use their compiler, and have a pluggable interface into the IDE so you can put the compiler of your choice into it... but MS would never do that, would they?
Re:Maybe pointlessly detailed (Score:1)
Visual Studio without any difficulty at all.
Re:Maybe pointlessly detailed (Score:2)
Re:Maybe pointlessly detailed (Score:2)
Guess who does the porting and profiling where I work...
Re:Maybe pointlessly detailed (Score:1, Offtopic)
Re:Maybe pointlessly detailed (Score:1)
Maybe you, or someone else read more into his comments than I did. That's why I asked to be clued in.
OT : Non Apple G4 boards? (Score:1, Interesting)
Re:OT : Non Apple G4 boards? (Score:1, Informative)
Will cost the user $1000 with graphics card, audio, FireWire, processor, Ethernet, and memory, apparently.
Great Article! (Score:5, Insightful)
The preceding discussion should make it clear that the overall design approaches I outlined in the first article can be seen in the execution cores of each processor. The G4e continues its "wide and shallow" approach to performance, counting on instruction-level parallelism to allow it to squeeze the most performance out of code. The P4's "narrow and deep" approach, on the other hand, uses fewer execution units, eschewing ILP and betting instead on increases in clock speed to increase performance.
This is exactly the case. Unfortunately the popular masses don't understand all of this wide-vs-narrow stuff, so they go for the higher clock speeds. In reality, Intel is pulling one over on us: charging more money when all we're getting is a higher clock rate, not a whole lot of performance gain. PPC has proven itself time and time again to be the better processor, but unfortunately PPC chips aren't used in very popular machines (mostly Macs), so we don't get to reap the benefits.
On a related note, this article touches on one of the many reasons why the GameCube will run circles around the Xbox. The GameCube's processor is a 485 MHz PPC designed specifically for video games, while the Xbox just uses a common Pentium running at 733 MHz.
This all brings up a good question: why haven't Macintosh or GameCube marketers come up with a benchmark to put next to the processor speed? Maybe I missed it, but I've never seen a Macintosh commercial saying "comes with a G4 800 MHz, comparable to a P4 1.5 GHz." There might be too many legalities involved in doing something like that, but it seems like they need to educate people somehow about the non-one-to-one relationship between the clock speeds of P4s and PPCs.
Re:Great Article! (Score:3, Interesting)
The problem is that you can compare processors in far too many ways. Apple likes to use Adobe's AltiVec-enabled applications when it compares speed (big surprise), but the reality is that comparing two processors is only marginally more useful than comparing two human beings. Computers are far too multi-purpose to allow a single useful comparison.
Processors are even worse... It's like trying to compare two brains without letting education or experience skew the comparison.
In the end, manufacturers will choose from the dozens of possible benchmarks to make their processor look the best (or make up their own if none of the others will do).
Re:Great Article! (Score:1)
That's now easy
Maybe because it doesn't really matter (Score:2, Insightful)
This all brings up a good question: why haven't Macintosh or GameCube marketers come up with a benchmark to put next to the processor speed? Maybe I missed it, but I've never seen a Macintosh commercial saying "comes with a G4 800 MHz, comparable to a P4 1.5 GHz." There might be too many legalities involved in doing something like that, but it seems like they need to educate people somehow about the non-one-to-one relationship between the clock speeds of P4s and PPCs.
Cyrix used to sell PR-rated parts: a PR133 might have been a 116 MHz chip, but it was as fast as a 133 MHz Pentium. So there's precedent, and it's probably legally OK, but I suspect the reason is that it doesn't really matter.
What really matters is that the CPU is fast enough for what you want to do. I run OS9, OSX and Linux on my machines. My home machine is a G3/350, and it's plenty fast for running OS9 for everything but compressing MPEG1 video. It's not fast enough for running OSX. My work machine is a G4/400 and it's just fast enough for running OSX. But it's not fast enough for compressing MPEG1 video. If I had a dual-800 G4 it would be more than fast enough for OSX, but it would still be too slow for compressing MPEG1 video. My Linux machine is a dual-800 P3. It's just fast enough for running Linux with all the crap I have running. It's still too slow for compressing MPEG1 video, though. I also use a 1.2 GHz Athlon machine occasionally, and I consider that just fast enough to run Windows 2000. I assume XP is similar. But it would still take a long time to compress MPEG1 video.
So, how would you structure a comparison benchmark? SPECint? BYTEmark? Photoshop duels? I think the answer is that you don't. It doesn't matter, as long as the computer is fast enough to do what you want it to do. The semi-annual Macworld Photoshop duels are interesting, since they actually show that the computer is too slow for designers but the Windows machines aren't any better. Perhaps they need to enunciate more, but I think their current stand of "it's fast enough" is the mature one.
Be more specific (Score:1)
What do you mean by "too slow?!"
Do you mean in real time? If so, say it.
Saying it's "too slow to compress MPEG1 video" is bollocks. Look at the recent iDVD2 demo on a dual 800: that's MPEG2, which is harder to compress.
Re:Be more specific (Score:2)
But MPEG2 implementations typically do *way* less compression than MPEG1. MPEG2's bitrate is typically at least twice as high as MPEG1's, sometimes 10x higher, and when you're doing discrete cosine transforms, that greatly increased bit budget, along with motion vectors and variable-bitrate encoding, lets you do a lot less work during encoding. That's why it's so much faster. You also need a lot more space to store the resulting file (DVD vs. CD). Just because 2 > 1 doesn't mean it's compressing more; it just means it came after.
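Some rough numbers to make the point (typical ballpark rates, not measurements):

    #include <stdio.h>

    int main(void) {
        /* raw 720x480 4:2:0 video at 30 fps, 12 bits per pixel */
        double raw_mbps   = 720.0 * 480.0 * 30.0 * 12.0 / 1e6;  /* ~124 Mbit/s */
        double mpeg1_mbps = 1.5;  /* typical MPEG1 (VideoCD) rate */
        double mpeg2_mbps = 6.0;  /* typical MPEG2 (DVD) rate */
        printf("MPEG1 squeezes ~%.0f:1, MPEG2 only ~%.0f:1\n",
               raw_mbps / mpeg1_mbps, raw_mbps / mpeg2_mbps);
        return 0;
    }

Roughly 83:1 versus 21:1 - the MPEG2 encoder has about four times the bit budget to play with, which is why it can get away with doing less work.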
Re:Great Article! (Score:4, Informative)
In reality, Intel is really pulling one over on us, charging more money and all we're getting is a higher clock rate, not a whole lot of performance gain
This is a debatable point. I think it is wrong to conclude Intel is "pulling one over on us". It has been demonstrated that as more EUs are added, the effectiveness and utilization of each EU goes down. The quest for ILP comes to a crashing, screeching halt before you even get to 4 EUs. IIRC, only one processor-scheduled CPU has been designed with more than 4 EUs.
The necessity for the chip to extract ILP in real time is what leads us to these big hairy controllers and limited clock speeds. Controller shrink was what led to RISC in the first place, and now that we've had to add in superscalar "goo" there's hardly a difference between the CISC philosophy and the RISC one. Never mind that Intel chips have been rewriting CISC instructions as multi-EU uops forever.
The point is, adding additional EUs has been demonstrated to be of dubious merit. Right NOW, the P4's speed improvements come from SSE2, just like the G4's speed improvements come from AltiVec. Both do essentially the same thing, although I've read more about AltiVec and it seems "cooler".
The difference is this: when the P4 core hits 3 GHz, its retire rate will just destroy anything a G4 or Athlon can do. Intel took the pipeline-length hit NOW and will reap the benefits later.
They also spent the time to get their prediction units as top-notch as possible because, IIRC, statistically there will be more than 3 conditional branches in progress in those ridiculous 20-stage pipes.
So, the problem with Intel's approach: a single instruction takes longer to complete, and the fill/drain penalty for mispredictions is high.
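A back-of-the-envelope illustration of that fill/drain penalty (every number below is an assumption I picked for the example, not a measured P4 figure):

    #include <stdio.h>

    int main(void) {
        double base_cpi     = 0.5;   /* assumed ideal cycles per instruction */
        double branch_freq  = 0.2;   /* assume 1 instruction in 5 is a branch */
        double miss_rate    = 0.05;  /* assume 95% prediction accuracy */
        double flush_cycles = 20.0;  /* refilling a 20-stage pipe on a miss */
        printf("effective CPI = %.2f (ideal was %.2f)\n",
               base_cpi + branch_freq * miss_rate * flush_cycles, base_cpi);
        return 0;
    }

Even at 95% accuracy, the deep pipe costs 40% extra cycles per instruction here, which is exactly why the prediction units have to be top notch.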
The retire rate, however, is amazing, and the clock-ramping ability is similarly amazing.
Your assertion that Motorola's approach _relies_ on adding additional EUs is surely incorrect, because "everyone" knows that controller complexity is again dominating CPUs, and much of that complexity is dedicated to extracting and managing ILP on 4 or fewer EUs (and the ILP just isn't there beyond 4... I think the POWER4 was supposed to have 6 EUs, and the Alpha 21364 or 21464 was going to have 8?).
Intel has already "side-stepped" the superscalar RISC EU problem with IA-64 - that's what LIW does. LIW is interesting again now because of the realization that controller-extracted ILP was too expensive and not good enough for the performance increases needed.
Re:Great Article! (Score:2)
On a related note, this article touches on one of the many reasons why the GameCube will run circles around the Xbox. The GameCube's processor is a 485 MHz PPC designed specifically for video games, while the Xbox just uses a common Pentium running at 733 MHz.
What horseshit. All console CPUs are outright or adapted cores of commodity outdated CPUs.
The GameCube's CPU is not any more specifically designed for the GameCube than the R4300 was "specifically designed" for the Nintendo 64.
Does the fact that the Xbox has a different memory system, "northbridge", cache size, etc., mean its 733 MHz proc was "specially designed" for the Xbox? You won't find any of the Xbox core components in anything besides an Xbox... it must all be CUSTOM ENGINEERED FOR GAMING!
I try not to be a fanboy; I only ask that others do the same. Have you played an Xbox? What about a GameCube? Have you done performance testing on their respective processors? What about their GPUs?
You can speculate all you want to, but don't write some article about how "I've taken CE classes, I can tell that the GameCube whoops the Xbox". It just smears your credibility all over the place.
Re:Great Article! (Score:2)
Secondly, they could be telling an outright lie, but in a press release by Nintendo which I unfortunately can't find right now, I read that the processor was a modified design that originated from a general-purpose PPC and was tailored specifically for the GameCube. Now, you may not believe that, but that's not the only time I've heard it, and based on the performance of the thing, I don't doubt it.
Re:Great Article! (Score:2, Informative)
At least compared to that of the X-Box.
Re:Great Article! (Score:1)
He misses one important difference... (Score:4, Informative)
The two-operand Intel architecture does not allow a fused multiply-add, so the latency of such an operation is the latency of a multiply plus the latency of an add (and the destination register has to be one of the operands, although the other operand can be in memory, saving you a load). There are plenty of practical algorithms which benefit greatly from the fused multiply-add - polynomial evaluations and matrix multiplications, for example - a feature pioneered by IBM in the RS/6000 series and one that Intel is using in Itanium.
And people who claim that you can do loop unrolling to hide the latencies should check their math: with only 8 registers, there is no way to hide the latencies of a multiply plus an add on a P4, while it is almost trivial on a G4 (32 registers and shorter latencies between accumulates). Furthermore, many transcendental functions are evaluated in libraries through polynomial approximations, which cannot be unrolled or easily sped up: the number of coefficients is usually large enough to make the routine limited by the latency of back-to-back floating-point operations, but not large enough to allow a divide-and-conquer approach.
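For the curious, here's a minimal C sketch of that polynomial case. C99's fma() compiles to a single fused multiply-add on hardware that has one (like the PPC); on x87 each step pays a multiply latency plus an add latency, back to back:

    #include <math.h>
    #include <stdio.h>

    /* Horner's rule: one multiply-add per coefficient, and every step
       depends on the previous one, so the latency cannot be hidden. */
    static double horner(const double *c, int degree, double x)
    {
        double r = c[degree];
        for (int i = degree - 1; i >= 0; i--)
            r = fma(r, x, c[i]);  /* r = r*x + c[i], one fused operation */
        return r;
    }

    int main(void)
    {
        double c[] = { 1.0, 0.5, 0.25 };   /* 1 + 0.5x + 0.25x^2 */
        printf("%g\n", horner(c, 2, 2.0)); /* prints 3 */
        return 0;
    }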
While the G4 is clearly the better architecture (not having double-precision AltiVec is not that important - I consider vector processing worthwhile only if you can do more than 4 elements per vector), the memory subsystem of the P4 is far superior. Hopefully the G5 will be comparable in this area (and I can't buy a desktop Power4 system :-().
Masking loop latency. (Score:2)
And people who claim that you can do loop unrolling to hide the latencies should check their math: with only 8 registers, there is no way to hide the latencies of a multiply plus an add on a P4, while it is almost trivial on a G4
Actually, it turns out that you can still mask the loop latency with a limited register set.
First, you can use "software pipelining" to mask quite a bit of the loop latency without having to unroll (it's a clever reshuffling of the loop instructions; for brevity, I won't describe it here). This requires one extra FP register over the straightforward implementation of an x86 dot-product loop (four instead of three, because I can no longer re-use scratch registers between steps).
Second, the out-of-order hardware will, to a limited extent, perform unrolling for you. While the architectural register file has only 8 registers, there are many more internal registers on the chip. Register renaming allows the processor to run several iterations of the loop in parallel without having to worry about namespace conflicts (though true dependencies remain intact). This works as long as the total number of iterations being unrolled fits within the scheduler's window (usually 8-16 instructions; I don't know how big the P4's window is).
In summary, for something as straightforward as a dot product, it's certainly possible to write x86 code that will avoid the penalty of having separate add and multiply instructions.
[You'll really be bound by the memory subsystem on both chips, but that's a moot point for this discussion.]
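As an aside, here's a minimal C sketch of the simplest form of this idea - two independent accumulator chains, so the multiply-add latencies overlap (the renaming hardware described above effectively manufactures extra chains like this on its own; n is assumed even, purely for brevity):

    #include <stdio.h>

    /* s0 and s1 never depend on each other, so while one chain's
       multiply-add is still in flight, the other can issue. */
    static double dot(const double *a, const double *b, int n)
    {
        double s0 = 0.0, s1 = 0.0;
        for (int i = 0; i < n; i += 2) {
            s0 += a[i]     * b[i];
            s1 += a[i + 1] * b[i + 1];
        }
        return s0 + s1;
    }

    int main(void)
    {
        double a[] = { 1, 2, 3, 4 }, b[] = { 5, 6, 7, 8 };
        printf("%g\n", dot(a, b, 4)); /* 5 + 12 + 21 + 32 = 70 */
        return 0;
    }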
Re:He misses one important difference... (Score:1)
Error in the article (Score:3, Funny)
add A, B
mov C, A
The first command adds the two numbers, and the second command moves the result from A to C. Of course, you still have the potential problem that the original value of A was erased by the add command, so if you wanted to preserve A's value then you'd have to insert even more instructions to store A in a temporary register and then restore its value once the addition has been performed.
----snip----
Not quite. I'm sure even people who _don't_ know x86 assembly language will realise you don't need any extra instructions at all. Simply reorder them:
mov C, A
add C, B
Obviously, the example was being used to show how much nicer it would be to have three or more operands in your instructions, but it was a lousy example.
On a sidenote, we've been able to specify more than two operands with certain instructions since the 80386. Look up the syntax for the "imul" instruction.
Re:Error in the article (Score:2)
mov C, A
add C, B
True, but that's going to create some stalls, and is still two commands rather than one:
add C, A, B
as with the PPC chip.
Re:Error in the article (Score:1)
Oops, correction (Score:1)
Re:Error in the article (Score:1)
And I agree - the article basically sucked.
PS: moderators - the above should be marked "Insightful". He makes a good point.
Re:Error in the article (Score:1)
Re:Error in the article (Score:1)
Re:Error in the article (Score:1)
WTF article did you read? The article I read said:
mov C, A
add C, B
The first command moves A into C in order to preserve the value of A, and the second command ands the two numbers.
With a three- or more operand format, like many of the instructions in the PPC ISA, you get a little more flexibility and control. For instance, the PPC ISA has a three-operand add instruction of the format add destination, source 1, source 2, so if you wanted to add A to B and store the result in C without erasing the values in either A or B (i.e. "C = A + B") then you could just do:
add C, A, B
First actually read the article and make sure your dyslexia isn't acting up before you post something that makes you look like a complete idiot.
And yes the article does say that the second instruction ands the two numbers when it should say adds.
Re:Error in the article (Score:1)
Re:Error in the article (Score:1)
Re:Error in the article (Score:2)
I don't know either, I don't understand. ;)
Summation for the "Shiny Object" Crowd. (Score:1)
While the G4e has fairly standard, fairly unremarkable floating-point hardware, the PPC ISA does things the way they're supposed to be done
[snip]
The P4, on the other hand, has slightly better hardware but is hobbled by the legacy x87 ISA.
I could have sworn tomshardware stated it best as "Essentially we have a P4(86) 2Ghz".
I'm paraphrasing, mind you, and possibly taking it out of context, *but* instead of increasing the caches (instruction/data/registers), they combined things and dropped it down to 8K of instruction and data cache.
Oh, and on the P4 vs AMD's XP chip, how would this analogy [slashdot.org] be changed or overhauled as it stands with the P4 vs G4e?
I'd really like to know. Or have a better "real world" analogy geared for the newbie user who usually winds up asking me, and I have to be able to explain complex things in simple terms to myself first.
Thanks.
GISboy
Overall picture isn't quite that easy to see... (Score:1, Interesting)
What I wanted to convey, though, for people who may not deal with this stuff at a hardware level, is that it is very hard to really get a good understanding of the whole processor. These projects are IMMENSE. Trying to keep track of millions of transistors, lay them out, etc., is a nightmare. I know. So while it is good to talk about the higher-level concepts of narrow vs. wide on a conceptual level, that all falls away when you start looking at transistors. More than anything, these projects are all about coordination. You can have a team of engineers working on a specific part who have NO IDEA what the other bits and pieces look like unless they have to interface with them. So just keep that in mind when we're judging these companies. I just think we lose sight of the massive scale of these projects sometimes.
Resources (Score:2, Informative)
Re:4 is 4, right? (Score:1)
It'd be easy to use this in the video card realm as well. 2 is 2, right? Voodoo2 and a GeForce2... I won't touch the Microsoft jab.
Re:Huh? (Score:1)
me and my P166 MMX had a good laugh over that (Score:1)
not pointless (Score:2, Insightful)
BUT, for the audience the article is intended for - geeks, technophiles, nerds & propeller-heads - it was not pointless at all. On this forum in particular there are a lot of people who use neither Windows nor MacOS but other operating systems which run happily on either processor. Even if there is no *practical* point, there is always sheer geek curiosity - a lot of us find such articles entertaining.
Might as well have Car and Driver running a comparison of a Jaguar S-type and a 10-ton dumptruck.
I don't think that the difference between a P4 and a G4 is quite as wide as that - and they are being marketed by both sides as roughly equivalent products. Most techie people may know which is the "dumptruck" and which the "Jaguar", but it is still interesting to see a technical explanation of WHY and precisely HOW they are so different.
Re:What Good is the G4? You're mistaken (Score:2, Informative)
On the G4 you can run OSX, OS9, XP and rootless XWindows all at the same time. The only problem is you have to reboot to run Linux. But then you can run the MacOS from within Linux.
Flexibility is one of the Mac's strong suits. Check out the GNU-Darwin, Darwin, and XonX sites. That is where the action is.
Yes I am running BSD, you still running Windows?