Hardware

ArsTechnica Compares the P4 and G4e: Part II 192

Deffexor writes "It looks like Hannibal of ArsTechnica fame has put up Part 2 of his original comparison article between Intel's P4 and the Apple/Motorola G4e. In a nutshell, this second article covers the execution core, the AltiVec unit and SSE2, as well as a myriad of other interesting factoids. An interesting read, if a little technically intense for those of us with less than a CE/EE degree. Have at it boys!"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • gotta love AT, they don't publish often but when they do it's fantastic stuff.
  • Allright! (Score:2, Interesting)

    by Ashcrow ( 469400 )
    This is exactly what I've been trying to find out for some time now. I've been increasingly upset with the x86 line of chips since it seems that there is hardly any difference between 600MHz and 1.2GHz.
    • But.. for the next generation of apps (games, especially), a 600MHz box will not cut it. A 1.2GHz one may, and 1.5-2GHz probably will.
      • I disagree completely.

        Doubling the clock speed *should* provide noticeable differences in performance for *all* applications, not just new games. If these new games are all that benefit, then the hardware has been designed wrong for a general-purpose PC.

        Case in point: here at work my old P2-266 was replaced with a P3-500. I noticed absolutely NO difference in performance. My Linux box (K6-2/400) at home seemed slow with only 64 MB of RAM. Adding 128MB to it (192 MB total) did more for performance than doubling the clock speed of the machine at work!
          • Check the boot times. I upgraded my Celeron 333 to a PIII 500 and it booted in about half the time. Upgraded that PIII to a P4 1.6 and it boots in about 3/4 of the time that the PIII did. In the end, if you aren't going to USE the power, then don't bother upgrading. Don't think that a 2GHz P4 is going to make your word processor run faster.
            • Boot times have less to do with processor power and more to do with OS implementation. Case in point: my Slackware system (PII-350) boots much faster than my Win2k PIII-500.
        • Not exactly. I'm sure you know clock speed is meaningless for determining how fast a CPU really is. Higher model numbers should mean huge advancements, and often they do. One reason all applications are not sped up is that the bottleneck is in different places for different applications. If you tried to play those same games that sped up on a newer CPU, but with an older graphics card, it would be exactly the same. The word processor would then seem faster than the games. Another thing: many games take advantage of special CPU instructions which many applications do not need.

          Anyone who has ever upgraded or put together their own PC should know that performance depends on all parts, not just the CPU or memory. Maybe your applications don't speed up because you are still using that old hard drive, which is the bottleneck? Yes, it would be nice to purchase a magic chip that you throw into a computer which speeds everything up dramatically, but the reality is no computer (that I know of) works that way. I'd love it if I could install a new CPU and get 20% more bandwidth, but it just ain't gonna happen. The reason your applications do not seem faster is that they are probably already as fast as possible (as fast as any mere mortal can detect, that is).
        • I have a bunch of boxes I use for dev/test purposes. My primary is a 2GHz box, and I also have an 800MHz P-III next to it. Both with 512MB of RAM... My apps sure as heck compile MUUUUUCH faster on the P4 than the P-III. When I run some of my framework apps, you should see the performance. I have a Celery 800 as well. That thing slows to a crawl when I crank up the number of clients, but the P4 continues to fly. On the P-III, the animations will slow down a bit. The P4 runs full steam through it...
  • by Mik!tAAt ( 217976 ) on Wednesday November 07, 2001 @09:44AM (#2532505) Homepage
    Here's another comparison: Joy Of Tech [joyoftech.com] (and the next 6 pages as well)
  • by smaughster ( 227985 ) on Wednesday November 07, 2001 @09:45AM (#2532509)
    >An interesting read, if not a little technically intense for those of us with less than a CE/EE degree.

    Tell me about it. I do have more than CE, two letters even, namely MCSE, and even I had to stop when they started throwing around the heavy stuff. I mean, A = A + B is supposed to make sense even if B isn't equal to zero.
  • conclusion (Score:2, Redundant)

    Big article, only had time to glance over it (and I'm not technically qualified to understand it in its full detailed glory), but as far as I can see, the dude isn't picking sides. Which is a rare treat.

    Although he does confirm Steve Jobs' words of wisdom: MHz aren't everything :)

    (I, on the other hand, am picking sides)
  • G5 is coming soon (Score:3, Interesting)

    by Lemur catta ( 459575 ) on Wednesday November 07, 2001 @09:48AM (#2532520) Homepage
    It's not really fair to compare the G4 and the P4, since the G4 was aimed at competing with the P3.

    A more interesting comparison will be to pit the P4 against the coming G5. According to the Register, Apple has begun seeding early G5s at up to 1.6GHz [theregister.co.uk] to key developers. Other sources [macosrumors.com] are claiming limited yields in the 2.4GHz range already.

    There are still bugs to be worked out before production ramps up for release early next year, and supposedly AltiVec will not be as strong on the G5 as it is in the G4. But at 2.4GHz on an already-superior FPU, who needs it?

    • by Anonymous Coward
      Other sources [macosrumors.com] are claiming limited yields in the 2.4GHz range already.

      Oh yes, and we all trust MOSR.. especially me, as I type this on my flat-panel iMac. Oh, wait, that's not a product (yet), is it? :-)

      Seriously.. rumour sites aren't credible.. don't try to make them be.

      • Actually, MOSR is one of the better rumour sites. Their next-gen iMac pieces are worded more like 'what prototypes Apple has in-house at the moment'. You'd be stupid if you didn't think they have at least several designs for a flat-panel iMac; they're just sticking with the current one because it's not yet economical to release an LCD-based consumer machine.

        They also seem to be more adept at spotting a bad photoshop job than some sites [slashdot.org] :)

    • Does anybody know if the fabled G5 will be compatible with the existing motherboards of the G4 machines?

      ie: Will I be able to just buy a chip upgrade and keep my pretty tower?
      • You could, if somebody made an upgrade, which is possible, but since none of the upgrade makers have made a G4e/7450 card yet, it's unlikely they will make a G5 card in the near future (and I'm not talking Internet time).
    • by Anonymous Coward
      Last month they said the G5 was going to be released with that iPad wireless web tablet (not the iPod, the tablet photoshop job). It scales perfectly to 16GHz, costs $0.13 each, and does 543TFlops.
    • by tmark ( 230091 )
      It *is* fair to make these comparisons since the G4 and the P4 are the best Motorola and Intel have to offer us, right now. It's irrelevant what the G4 was "aimed at competing with"; what is relevant is what we have in our hands now.
      • thank you! isn't that obvious?
      • "Fair" is an interesting word. If I make a product that is specifically designed to compete with my competitor's Product X, it's not accurate to say that my product is a failure if my competitor's Product X+2 is better than my product. Is it "fair" to judge something using criteria that it was never intended to consider?

        For example, the K6 was designed to compete with Pentium MMX and early Pentium II chips. There should be few surprises presented by a study that compares the performance of the K6 to the performance of the Pentium IV.
        • It would be fair to compare the K6 to the Pentium 4 if the K6 were the best chip AMD had to offer, as is the case with the G4 vs P4. If Motorola had released the G5 before the P4 came out, a comparison of the G5 vs P3 would be fair, because it would be the best thing each company had to offer.
          • this reminds me of how Microsoft always tries to avoid judgement by saying, 'don't look here, everything will be fixed and more in the _next_ release!'

            come on, who are they [apple] kidding?
    • Its not really fair to compare the G4 and the P4, since the G4 was aimed at competing with the P3.

      It is fair to compare whatever happens to be on the market at any point in time.

      If Motorola is reacting instead of acting, that's just too bad for them. It doesn't make the comparison unfair.

      A more interesting comparison will be to pit the P4 against the coming G5.

      Yes, that will be a good and interesting comparison when I can buy a machine with a G5 in it.

  • ppc power (Score:4, Insightful)

    by peachboy ( 313367 ) <slimindie.gmail@com> on Wednesday November 07, 2001 @09:52AM (#2532532) Homepage Journal
    i personally believe that flexibility of the assembly instructions as well as the number of instructions executed per cycle contribute greatly to the dominant speed (at any given MHz/GHz) of the ppc processor. compare any intel/amd processor to a ppc at the same clock speed, and the ppc will kick its x86 ass.

    the high end ppc desktops are topping out around 900MHz, while the p4's are hitting 2GHz. there has to be another explanation besides the complaint that jobs is ignorantly sitting on his thumbs. i think he knows what he's doing.

    note: i am not a mac zealot.. i don't even own a mac - only 4 x86 pc's (1 athlon, 2 p133, 1 p120). i simply can appreciate the speed of the ppc.
    • Re:ppc power (Score:3, Insightful)

      by tswinzig ( 210999 )
      the high end ppc desktops are topping out around 900MHz, while the p4's are hitting 2GHz. there has to be another explanation besides the complaint that jobs is ignorantly sitting on his thumbs. i think he knows what he's doing.

      What a strange statement, considering that neither Jobs nor Apple create the PPC chips!
    • Well, it's convenient that we don't have to compare PPCs and x86s at the same clock speed.
    • Re:ppc power (Score:5, Interesting)

      by Christopher Thomas ( 11717 ) on Wednesday November 07, 2001 @10:52AM (#2532789)
      the high end ppc desktops are topping out around 900MHz, while the p4's are hitting 2GHz. there has to be another explanation besides the complaint that jobs is ignorantly sitting on his thumbs.

      Two factors come into play here.

      The first is that, if I remember correctly, PPC and x86 chips use a different clocking scheme. This means that clock rates between them aren't even directly comparable (what a "clock" is depends on the clocking scheme).

      The second is that it's perfectly possible that the PPC architecture is limited to lower clock rates than the x86 architecture. Signal propagation through gates takes time. If one architecture expects signals to propagate through logic three gates deep per clock, and another architecture expects signals to propagate through logic five gates deep, then of *course* one will have a faster maximum clock rate than the other. They would hopefully still be doing the same amount of work per unit of real time.
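The gate-depth argument above can be put as a toy calculation. This is a sketch only: the per-gate delay figure is invented for illustration and is not a real process parameter.

```python
# Toy model of the gate-depth argument: the clock period must cover the
# slowest logic path in a pipeline stage, so fewer gates of logic per stage
# allows a higher maximum clock rate.
GATE_DELAY_NS = 0.1  # assumed propagation delay per gate (illustrative only)

def max_clock_ghz(gates_per_stage: int, gate_delay_ns: float = GATE_DELAY_NS) -> float:
    """f_max = 1 / cycle time, where the cycle must span the deepest logic path."""
    return 1.0 / (gates_per_stage * gate_delay_ns)

print(max_clock_ghz(3))  # shallower logic per stage: clocks faster
print(max_clock_ghz(5))  # deeper logic per stage: clocks slower
```

The shallower design clocks faster, but each of its clocks accomplishes less, which is exactly why the raw clock numbers aren't directly comparable.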

      You should already be familiar with this from the Athlon/P4 spin war. A 0.18 micron Athlon core simply cannot be clocked as quickly as a 0.18 micron P4 core - no matter what you do. Does this make the Athlon automatically a poor performer? No, because it can do more per clock. Does this make the Athlon automatically kill the P4, because it "can do more per clock"? No, because the P4 can be clocked faster. Only real benchmarks will tell.
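The "only real benchmarks will tell" point can be sketched numerically: throughput is roughly instructions-per-clock times clock rate, so neither figure alone decides the winner. All numbers below are made up for illustration, not measured values for either chip.

```python
# Toy throughput model: work per second = IPC x clock rate.
def gips(ipc: float, clock_ghz: float) -> float:
    """Billions of instructions retired per second."""
    return ipc * clock_ghz

wide_and_slow = gips(ipc=2.0, clock_ghz=1.4)    # Athlon-style: more per clock
narrow_and_fast = gips(ipc=1.5, clock_ghz=2.0)  # P4-style: faster clock
print(wide_and_slow, narrow_and_fast)  # close enough that only benchmarks settle it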

      A point against Apple is that Apple has been allergic to publishing SPECmarks for its processors for the past couple of years (the only PPC-ish benchmarks are IBM's benchmarks of the Power series of chips, which forked after the G3 IIRC). This removes a very consistent (if somewhat flawed) means of comparison.
      • A point against Apple is that Apple has been allergic to publishing SPECmarks for its processors for the past couple of years (the only PPC-ish benchmarks are IBM's benchmarks of the Power series of chips, which forked after the G3 IIRC). This removes a very consistent (if somewhat flawed) means of comparison.
        That, and Apple seems to have this delusion that Photoshop (highly optimized for the Macintosh) is everything. That's akin to arguing that the P4 is infinitely faster than the G4 just because something written for x86 doesn't run on the G4.

        Someone should create a site debunking the Photoshop myth. I would if I weren't so lazy...

        The truth, as usual, probably lies between the megahertz and Photoshop camps and is about what a reasonable person would expect: the G4 is roughly equivalent in performance to the P4 and Athlon, code can be highly optimized for any of those processors, and most users don't need that much computational power anyway.
        • Re:ppc power (Score:3, Insightful)

          by iso ( 87585 )
          Someone should create a site debunking the Photoshop myth.

          Actually the Photoshop benchmark is completely valid. Apple's largest market for their G4s is the designer community. The most popular application for those people, and the application they spend the most time waiting for, is Photoshop. Therefore, when they're out to buy a new computer, the most important thing to them is ... the speed of Photoshop!

          Additionally Apple has been using two benchmarks lately: Photoshop and movie compression. High-end video is Apple's second-largest market for their G4s, and this market spends most of their time waiting for video effects, such as compression. This is also a valid benchmark.

          Thirdly, microprocessors are incredibly complex, and the end-user speed is also dependent on many factors including software and OS optimizations. It is absolutely useless to compare the speed of processors by some numerical benchmark. What's important to anybody who wants to be productive on a computer is how quickly your key applications run. And lo and behold, this is what Apple is benchmarking.

          Remind me again why this benchmark is invalid? If anything I would suggest that any benchmark besides end-application speed is useless.

          - j
          • Remind me again why this benchmark is invalid? If anything I would suggest that any benchmark besides end-application speed is useless.

            Well, that is of course unless you are an applications developer: in which case you would like to know some lower-level benchmarks, so you can take advantage of the things that the hardware is good at and conversely stay away from the stuff that stinks.

            -AP

    • Re:ppc power (Score:3, Interesting)

      by Webmonger ( 24302 )
      Of course there's another explanation-- namely, it's not Jobs who's sitting on his thumbs-- it's Motorola. (Ouch-- can you imagine Motorola sitting on Jobs' thumbs?)

      Furthermore, Motorola isn't sitting on their collective thumbs-- they're simply targeting a market whose requirements are different from Apple's: the embedded market.

      This is all hugely ironic, because the RISC architecture was s'posed to result in chips that could cycle faster but did less per cycle. Instead, it's the CISC chips that are cycling faster and doing less per cycle.
    • It's mainly the internal architecture of the chips. Is the memory address you need in cache or will you have to spend 50 clock cycles to fetch it? What if you want to read an address before a previous instruction finished writing it?

      The Intel chips are really fast at "hurry up and wait", but not nearly as fast at "hurry up and do something".
  • Nice. (Score:5, Interesting)

    by tcc ( 140386 ) on Wednesday November 07, 2001 @09:53AM (#2532533) Homepage Journal
    But with the G5 around the corner, I think THAT will be THE interesting comparison, especially since Intel plans on keeping the P4 for a while, ramping it up in speed. When you read Adobe saying the G5 is significantly faster than the P4 [theregister.co.uk] (and if you go read the article, the same people do say that the P4 is faster than a G4, except for AltiVec stuff), then if they say the G5 is faster than the P4, it probably will be :)... it should be really nice to see something other than AMD that kills the P4 in raw performance.
    • Re:Nice. (Score:2, Interesting)

      by shut_up_man ( 450725 )
      Hang on... that article says "we'd caution against taking them as gospel", and that's coming from The Register, a site that has been... uh... less than correct on some issues in the past.

      Plus, the reports are from Adobe, who make Photoshop... an extremely Mac-optimised piece of software usually held up by the Mac brigade [techtv.com] whenever they attempt comparative benchmarks.

      Look, I *want* to believe that the G5 makes great coffee, gives fantastic backrubs, cures cancer and runs faster than every P4. I do. I've just heard all these lines before, with the G4.
      • Re:Nice. (Score:2, Insightful)

        You don't think Adobe optimizes any of their code for x86? Of course they do, the reason why the G4 so soundly whips the P4 in Photoshop is because of the nature of the AltiVec units. If you had read the article, you would know this.

        I'm pretty sure Adobe doesn't care whether you spend your $900 on the Windows or the Mac version of Photoshop.

      • Photoshop is at least as optimized for x86 (by Intel engineers no less) as for PPC.
      • Re:Nice. (Score:3, Informative)

        by Arandir ( 19206 )
        Look, I *want* to believe that the G5 makes great coffee, gives fantastic backrubs, cures cancer and runs faster than every P4. I do. I've just heard all these lines before, with the G4.

        I just got back from a seminar with Motorola on the architecture of their new and upcoming chips. Some nice stuff on the horizon. They did mention some of the less-than-stellar performance on prior chipsets, and explained that it was due to not taking advantage of the chip features. An operating system or user application that must be backwards compatible will not be able to utilize the chipsets to full advantage.

        You don't judge high end chipsets based on mass market consumer applications.
  • by Junks Jerzey ( 54586 ) on Wednesday November 07, 2001 @09:53AM (#2532535)
    Note: I have a B.S. in computer science, a solid understanding of hardware issues, and have been programming for 19 years.

    When I read articles like this, there's so much detail that I find myself--even willingly--losing sight of the big picture. Sure, you could read a detailed write-up about Toyota's new engine, but those details don't really matter much unless you've made a hobby of knowing about engines. Realistically, you'll have a hard time connecting those details to your driving experience. Heck, someone could put in a different engine, tell you that it's a Toyota, and you'd be saying things like "Oh, yes, this feels just like a Toyota, I can tell that the designers did blah and blah."

    After the Pentium II generation of CPUs, things have gotten very, very muddled. Amazing features that are supposed to increase performance don't always do so. Sometimes they make things worse. Little compiler tweaks can make one program be twice as fast as another, given the same hardware. Chips with higher clock rates can be significantly slower than chips with 20% slower clocks. Certain applications run much faster than on previous chips, but there are others that show no increase.

    It's all very chaotic and confusing, even for people in the know. I suspect that if you took a program that people claimed to need a P4 or Athlon for--something very performance sensitive--and set yourself the task of making it run faster on a PII than an Athlon, you could do it. But that doesn't matter, as everyone seems to be clamoring for newer chips.
    • by Anonymous Coward
      > After the Pentium II generation of CPUs, things
      > have gotten very, very muddled.

      To continue with your engine analogy, after the fifties things have gotten more complex. Variable valve timing is common. BMW is working on technology (if it's not in production already) that opens/closes valves using electromagnets - no more camshaft! All electrical. Probably both more reliable and more efficient.

      Turbocharging is now common - the combination of a small displacement engine and turbo is commonly found and provides a good compromise of power and efficiency/mileage.

      As the technology increases, designs become more complex. This isn't always a bad thing.
  • by Anonymous Coward
    Does anyone know if an ATX board with a G4 exists? I just started developing my own little OS and, frankly, x86 assembly stinks, I hadn't touched it for 4 years and didn't remember how crap that ISA indeed is. The 68k series were such a nice development platform, the PPC ISA looks quite cool as well.
    • by Anonymous Coward
      bPlan have one called the Pegasos coming out soon (next year) with dual G4 sockets.

      Will cost the user $1000 with graphics card, audio, FireWire, processor, Ethernet, and memory, apparently.

  • Great Article! (Score:5, Insightful)

    by Uttles ( 324447 ) <uttles@[ ]il.com ['gma' in gap]> on Wednesday November 07, 2001 @10:00AM (#2532566) Homepage Journal
    This article is extremely informative and gives you a good insight into how these processors are designed, as well as how they compare. I disagree with the poster though, you don't need a CE or EE degree to get the idea of what's going on. I'm a CE and I had classes on this sort of thing so yes I could follow all the gritty details, but I think the author did a good job of explaining things so that most people could understand. Also, I thought the author summed things up perfectly saying:

    The preceding discussion should make it clear that the overall design approaches I outlined in the first article can be seen in the execution cores of each processor. The G4e continues its "wide and shallow" approach to performance, counting on instruction-level parallelism to allow it to squeeze the most performance out of code. The P4's "narrow and deep" approach, on the other hand, uses fewer execution units, eschewing ILP and betting instead on increases in clock speed to increase performance.

    This is exactly the case. Unfortunately the popular masses don't understand all of this wide vs narrow stuff, so they go for the higher clock speeds. In reality, Intel is really pulling one over on us: charging more money when all we're getting is a higher clock rate, not a whole lot of performance gain. PPC has proven itself time and time again to be the better processor, but unfortunately those chips aren't used in very popular machines (mostly Macs), so we don't get to reap the benefits.

    On a related note, this article touches on one of the many reasons why the GameCube will run circles around the Xbox. GameCube's processor is a 485MHz PPC designed specifically for video games, while the Xbox just uses a common Pentium running at 733MHz.

    This all brings up a good question: why haven't Macintosh or GameCube marketers come up with a benchmark to put next to the processor speed? Maybe I missed it, but I've never seen a Macintosh commercial saying "comes with a G4 800MHz, comparable to a P4 1.5GHz." There might be too many legalities involved in doing something like that, but it seems like they need to educate people somehow about the non-1-to-1 relationship between the clock speeds of P4s and PPCs.
    • Re:Great Article! (Score:3, Interesting)

      by west ( 39918 )
      This all brings up a good question: why haven't Macintosh or GameCube marketers come up with a benchmark to put next to the processor speed? Maybe I missed it, but I've never seen a Macintosh commercial saying "comes with a G4 800MHz, comparable to a P4 1.5GHz."

      The problem is that you can compare processors in far too many ways. Apple likes to use Adobe AltiVec-enabled applications when it compares speed (big surprise), but the reality is that comparing two processors is only marginally more useful than comparing two human beings. Computers are far too multi-purpose to be able to come up with a useful comparison.

      Processors are even worse... It's like trying to compare two brains without letting education or experience skew the comparison.

      In the end, manufacturers will choose from the dozens of possible benchmarks to make their processor look the best (or make up their own if none of the others will do).
    • This all brings up a good question: why haven't Macintosh or GameCube marketers come up with a benchmark to put next to the processor speed? Maybe I missed it, but I've never seen a Macintosh commercial saying "comes with a G4 800MHz, comparable to a P4 1.5GHz." There might be too many legalities involved in doing something like that, but it seems like they need to educate people somehow about the non-1-to-1 relationship between the clock speeds of P4s and PPCs.

      Cyrix used to sell PR parts; a PR133 might have been a 116MHz chip, but it was as fast as a 133MHz Pentium. So there's precedent, and it's probably legally OK, but I suspect the reason is it doesn't really matter.

      What really matters is that the CPU is fast enough for what you want to do. I run OS9, OSX and linux on my machines. My home machine is a G3/350, and it's plenty fast for running OS9 for everything but compressing MPEG1 video. It's not fast enough for running OSX. My work machine is a G4/400 and it's just fast enough for running OSX. But it's not fast enough for compressing MPEG1 video. If I had a dual-800 G4 it would be more than fast enough for OSX, but it would still be too slow for compressing MPEG1 video. My linux machine is a dual-800 P3. It's just fast enough for running linux with all the crap I have running. It's still too slow for compressing MPEG1 video, though. I also use a 1.2GHz Athlon machine occasionally, and I consider that just fast enough to run Windows 2000. I assume XP is similar. But it would still take a long time to compress MPEG1 video.

      So, how would you structure a comparison benchmark? SPECint? BYTEmark? Photoshop duels? I think the answer is that you don't. It doesn't matter, as long as the computer is fast enough to do what you want it to do. The semi-annual MacWorld Photoshop duels are interesting since they actually show that the computer is too slow for designers but the Windows machines aren't any better. Perhaps they need to enunciate more, but I think their current stand of "it's fast enough" is the mature one.

      • If I had a dual-800 G4 it would be more than fast enough for OSX, but it would still be too slow for compressing MPEG1 video

        What do you mean by "too slow"?!
        Do you mean in real time? If so, say it.
        Saying it's "too slow to compress MPEG1 video" is bollocks. Look at the recent iDVD2 demo on a dual 800; that's MPEG2, which is harder to compress.

        • Yes, if you have to wait for a computer it's too slow. Realtime is too slow. If anything takes more than 10 seconds it's too slow.

          But MPEG2 implementations typically do *way* less compression than MPEG1. MPEG2's bitrate is typically at least twice as high as MPEG1's, sometimes 10x more, and when you're doing discrete cosine transforms, that greatly increased size along with motion vectors and variable bitrate encoding allows you to do a lot less work during encoding. That's why it's so much faster. You also need a lot more space to store the resultant file (DVD vs. CD). Just because 2 > 1 doesn't mean it's compressing more; it just means it came after.
    • Re:Great Article! (Score:4, Informative)

      by bmajik ( 96670 ) <matt@mattevans.org> on Wednesday November 07, 2001 @12:16PM (#2533208) Homepage Journal

      In reality, Intel is really pulling one over on us, charging more money and all we're getting is a higher clock rate, not a whole lot of performance gain


      This is a debatable point. I think it is wrong to conclude Intel is "pulling one over on us". It has been demonstrated that as more EUs are added, the effectiveness and utilization of EUs goes down. The quest for ILP comes to a crashing, screeching halt before you even get to 4 EUs. IIRC, only one processor-scheduled CPU is designed with more than 4 EUs.

      The necessity for the chip to extract ILP in realtime is what leads us to these big hairy controllers and limited clock speeds. Controller shrink was what led to RISC in the first place, and now that we've had to add in superscalar "goo" there's hardly a difference between the CISC philosophy and the RISC one. Never mind that Intel chips have been re-writing CISC instructions as multi-EU uops forever.

      The point is, adding additional EUs has been demonstrated to be of dubious merit. Right NOW, the P4's speed improvements come from SSE2, just like the G4's speed improvements come from AltiVec. Both do essentially the same thing, although I've read more about AltiVec and it seems "cooler" :)

      The difference is this: when the P4 core hits 3GHz, its retire rate will just destroy anything a G4 or Athlon will do. Intel took the pipeline-length hit NOW and will reap the benefits later.

      They also spent the time to get their prediction units as top-notch as possible, because IIRC statistically there will be > 3 conditional branches in progress in those ridiculous 20-stage pipes :)

      So - the problem with Intel's approach: a single instruction takes longer to complete, and the fill/drain penalty for mispredictions is high.
      The retire rate, however, is amazing, and the clock-rate ramping ability is similarly amazing.

      Your assertion that Motorola's approach _relies_ on adding additional EUs is surely incorrect, because "everyone" knows that controller complexity is again dominating CPUs, and much of that is dedicated to extracting and managing ILP on 4 or fewer EUs (and that it just isn't there beyond 4.. I think the Power4 was supposed to have 6 EUs, and the Alpha 364 or 464 was going to have 8?)

      Intel has already "side-stepped" the superscalar RISC EU problem with IA64 - that's what LIW does. LIW is interesting again now because of the realization that controller-extracted ILP was too expensive and not good enough for the performance increases needed.
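The diminishing-returns point above can be sketched with a toy model: if the code stream only exposes a few independent instructions per cycle (its ILP), execution units beyond that number sit idle. The ILP figure used here is an assumed illustrative value, not a measurement of any real workload.

```python
# Toy model: execution-unit utilization collapses once the number of EUs
# exceeds the ILP the instruction stream actually exposes.
def eu_utilization(available_ilp: float, execution_units: int) -> float:
    """Fraction of EUs kept busy when the code offers `available_ilp` per cycle."""
    issued = min(available_ilp, float(execution_units))
    return issued / execution_units

for eus in (2, 4, 8):
    print(eus, eu_utilization(2.0, eus))  # utilization falls as EUs outgrow the ILP
```

Under this assumption, doubling the EUs past the available ILP halves their utilization, which is the economic argument for betting on clock rate instead.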
    • Oh, I forgot to address this in my other reply:


      On a related note, this article touches on one of the many reasons why the GameCube will run circles around the Xbox. GameCube's processor is a 485MHz PPC designed specifically for video games, while the Xbox just uses a common Pentium running at 733MHz.


      What horseshit. All console CPUs are outright or adapted cores of commodity outdated CPUs.

      The GameCube's CPU is not any more specifically designed for the GameCube than the R4300 was "specifically designed" for the Nintendo 64.

      Does the fact that the Xbox has a different memory system, "northbridge", cache size, etc., mean its 733MHz proc was "specially designed" for the Xbox? You won't find any of the Xbox core components in anything besides an Xbox... it must all be CUSTOM ENGINEERED FOR GAMING!

      I try not to be a fanboy; I only ask that others do the same. Have you played an Xbox? What about a GameCube? Have you done performance testing on their respective processors? What about their GPUs?

      You can speculate all you want to, but don't write some article about how "I've taken CE classes, I can tell that the GameCube whoops the Xbox". It just smears your credibility all over the place.
      • OK, well, first of all, I've played both. Like I said, the GameCube runs circles around the Xbox. The animation is smoother, the control is more responsive, the load times are shorter, etc., etc.

        Secondly, they could be telling an outright lie, but in a press release by Nintendo, which I unfortunately can't find right now, I read that the processor was a modified design that originated from a general-purpose PPC and was designed specifically for the GameCube. Now, you may not believe that, but that's not the only time I've heard it, and based on the performance of the thing, I don't doubt it.
      • The GameCube's CPU is a modified PPC 750 with its FPU split in two and very fast custom busses connecting it to the rest of the console, so yes, I'd say it's "custom engineered for gaming".

        At least compared to that of the X-Box.
  • by Anonymous Coward on Wednesday November 07, 2001 @10:19AM (#2532634)
    About floating-point instructions: on the PPC, both the classical FPU and the AltiVec unit have fused multiply-add instructions, i.e. a single machine instruction computes RA = RB*RC + RD, where RA, RB, RC and RD are arbitrary floating-point registers. This takes the same time as a multiply; basically, the add (which can also be a subtract) step is free.

    The two-operand Intel architecture does not allow the fused multiply-add, so the latency of such an operation is the latency of a multiply plus the latency of an add (and the destination register has to be one of the operands, although the other operand can be in memory, saving you a load). There are plenty of practical algorithms which benefit greatly from the fused multiply-add, for example polynomial evaluations, matrix multiplications, etc. It's a feature pioneered by IBM in the RS/6000 series and one that Intel is using in Itanium.

    And people who claim that you can do loop unrolling to hide the latencies should check their math: with only 8 registers, there is no way to hide the latencies of a multiply plus an add on a P4, while it is almost trivial on a G4 (32 registers and shorter latencies between accumulates). Furthermore, many transcendental functions are evaluated in libraries through polynomial approximations, which cannot be unrolled nor easily sped up: the number of coefficients is usually large enough to make the routine limited by the latency of back-to-back floating-point operations, but not large enough to take a divide-and-conquer approach.

    While the G4 is clearly the better architecture (not having double-precision AltiVec is not that important; I consider vector processing only worthwhile if you can do more than 4 elements per vector), the memory subsystem of the P4 is far superior. Hopefully the G5 will be comparable in this area (and I can't buy a desktop Power4 system :-().
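    As a concrete illustration of the fused multiply-add the parent describes, here's a sketch in C using the C99 `fma()` library function (the function names and coefficients here are mine, not from the article; on PPC-class FPUs a good compiler maps `fma()` to the single fused instruction, while on two-operand x87 it must become a multiply followed by a dependent add). Horner's rule for polynomial evaluation, mentioned by the parent, is exactly a chain of RA = RB*RC + RD steps:

    ```c
    #include <math.h>
    #include <stdio.h>

    /* Horner's rule: each step is r = r*x + c[i], exactly the
     * RA = RB*RC + RD shape of a fused multiply-add.  With fma() the
     * whole step is one operation; the plain version is a multiply
     * followed by a dependent add, so its latency is the sum of both. */
    static double poly_fma(const double *c, int n, double x) {
        double r = c[n - 1];
        for (int i = n - 2; i >= 0; i--)
            r = fma(r, x, c[i]);    /* one fused op per coefficient */
        return r;
    }

    static double poly_plain(const double *c, int n, double x) {
        double r = c[n - 1];
        for (int i = n - 2; i >= 0; i--)
            r = r * x + c[i];       /* separate multiply, then add */
        return r;
    }

    int main(void) {
        /* 3 + 2x + x^2 evaluated at x = 2 -> 11 */
        double c[] = { 3.0, 2.0, 1.0 };
        printf("%g %g\n", poly_fma(c, 3, 2.0), poly_plain(c, 3, 2.0));
        return 0;
    }
    ```

    Note the serial dependence: each `r` feeds the next step, which is why the parent says such routines are limited by back-to-back FP latency and can't be unrolled away.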

    • The two operand Intel architecture does not allow the fused multiply add, so that the latency of such an operation is the latency of a multiply plus the latency of an add (and the destination register has to be one of the operands, although the other operand can be in memory, saving you a load). There are plenty of practical algorithms which benefit greatly from the fused multiply-add, for example polynomial evaluations, matrix multiplications, etc, a feature pioneered by IBM in the RS6000 series and that Intel is using in Inanium.

      And people who claim that you can do loop unrolling to hide the latencies should check their math: with only 8 registers, there is no way to hide the latencies of a multiply plus an add on a P4, while it is almost trivial on a G4


      Actually, it turns out that you can still mask the loop latency with a limited register set.

      First, you can use "software pipelining" to mask quite a bit of the loop latency without having to unroll (it's a clever reshuffling of the loop instructions; for brevity, I won't describe it here). This requires one extra FP register over the straightforward implementation of an x86 dot-product loop (four instead of three, because I can no longer re-use scratch registers between steps).

      Second, branch prediction will, to a limited extent, perform unrolling for you. While the architectural register file has only 8 registers, there are many more physical registers on the chip. Register renaming allows the processor to run several iterations of the loop in parallel without having to worry about name conflicts (though true dependencies remain intact). This works as long as the total number of iterations being unrolled fits within the scheduler's window (usually 8-16 instructions; I don't know how big the P4's window is).

      In summary, for something as straightforward as a dot product, it's certainly possible to write x86 code that will avoid the penalty of having separate add and multiply instructions.

      [You'll really be bound by the memory subsystem on both chips, but that's a moot point for this discussion.]
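      A rough sketch in C of what the parent means by software pipelining (the function and variable names are mine; in hand-written x86 you'd do this shuffling at the register level). Instead of multiplying and immediately accumulating the same product, you accumulate the product from the *previous* iteration while the current multiply is still in flight, so the add no longer waits on the multiply:

      ```c
      #include <stdio.h>

      /* Software-pipelined dot product: the multiply for element i and the
       * accumulate of element i-1 share a loop body, so the add consumes a
       * product started one iteration earlier and stays off the multiply's
       * critical path. */
      static double dot_pipelined(const double *a, const double *b, int n) {
          if (n == 0) return 0.0;
          double sum  = 0.0;
          double prod = a[0] * b[0];      /* prologue: first multiply */
          for (int i = 1; i < n; i++) {
              double next = a[i] * b[i];  /* start this iteration's multiply */
              sum += prod;                /* retire last iteration's product */
              prod = next;
          }
          return sum + prod;              /* epilogue: last product */
      }

      int main(void) {
          double a[] = { 1, 2, 3, 4 };
          double b[] = { 5, 6, 7, 8 };
          /* 5 + 12 + 21 + 32 = 70 */
          printf("%g\n", dot_pipelined(a, b, 4));
          return 0;
      }
      ```

      This is the extra register the parent mentions: `next` holds the in-flight product alongside `prod` and `sum`, one more live value than the naive loop needs.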
  • by LSD-OBS ( 183415 ) on Wednesday November 07, 2001 @10:21AM (#2532647)
    ----snip----
    add A, B
    mov C, A

    The first command adds the two numbers, and the second command moves the result from A to C. Of course, you still have the potential problem that the original value of A was erased by the add command, so if you wanted preserve A's value then you'd have to insert even more instructions to store A in a temporary register and then restore its value once the addition has been performed.
    ----snip----

    Not quite. I'm sure even people who _don't_ know x86 assembly language will realise that you don't need any extra instructions at all. Simply reorder them:
    mov C, A
    add C, B

    Obviously, the example was being used to show how much nicer it would be to have three or more operands in your instructions, but it was a lousy example.

    On a sidenote, we've been able to specify more than two operands with certain instructions since the 80386. Look up the syntax for the "imul" instruction.
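    The mov-then-add pattern being debated can be sketched in C with a bit of inline x86 assembly (the function name is mine, and the inline-asm path is a GCC-style illustration, not from the article; non-x86 builds fall back to plain C):

    ```c
    #include <stdio.h>

    /* Compute a + b without clobbering either input, the way a two-operand
     * ISA forces you to: copy one operand into the destination register,
     * then add the other operand into it. */
    static int add_noclobber(int a, int b) {
        int c;
    #if defined(__x86_64__) || defined(__i386__)
        __asm__("movl %1, %0\n\t"   /* mov c, a  -- preserves a            */
                "addl %2, %0"       /* add c, b  -- two-operand add        */
                : "=&r"(c)          /* early clobber: c is written before
                                       the input for b is read             */
                : "r"(a), "r"(b));
    #else
        c = a + b;                  /* a three-operand ISA (e.g. PPC's
                                       add C, A, B) does this in one op    */
    #endif
        return c;
    }

    int main(void) {
        printf("%d\n", add_noclobber(5, 7)); /* 12 */
        return 0;
    }
    ```

    Either way it's two x86 instructions versus one PPC instruction, which is the point both sides of this subthread end up agreeing on.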
    • Not quite. I'm sure even people who _don't_ know x86 assembly language will realise that you don't need any extra instructions at all. Simply reorder them:
      mov C, A
      add C, B

      True, but that's going to create some stalls, and it's still two instructions rather than one:
      add C, A, B
      as with the PPC chip.
      • It was going to create a stall _anyway_, and it was already two instructions _anyway_. The point is, you don't need any extra instructions....
        --
        • We're still thinking according to the Pentium I mindset. Stalls like that one aren't really an issue anymore thanks to the out-of-order instruction execution scheme that has been evolving since the P-II.
          --
    • Well then couldn't the value of C be erased by the add command? You need extra instructions to retain the values of C and A.
    • WTF article did you read? The article I read said:

      mov C, A

      add C, B
      The first command moves A into C in order to preserve the value of A, and the second command ands the two numbers.


      With a three- or more operand format, like many of the instructions in the PPC ISA, you get a little more flexibility and control. For instance, the PPC ISA has a three-operand add instruction of the format add destination, source 1, source 2, so if you wanted to add A to B and store the result in C without erasing the values in either A or B (i.e. "C = A + B") then you could just do:

      add C, A, B

      First actually read the article and make sure your dyslexia isn't acting up before you post something that makes you look like a complete idiot.


      And yes, the article does say that the second instruction "ands" the two numbers when it should say "adds".

      • Actually, don't be so sure of yourself. The author changed the article after realising his error - what you see in my post was the original, duplicated with nice, healthy cut and paste. Don't call me an idiot just because you're slow on the trigger.
        • Idiot or not, that doesn't really matter, mainly because if the author changed it then it's not your fault that it appeared wrong to me. What I think we can both agree on is that the author really needs someone to proofread his articles for him, since he obviously cannot do it himself. I mean, correcting one mistake and "anding" another, come on.
  • Of which, I "R" in that group.

    While the G4e has fairly standard, fairly unremarkable floating-point hardware, the PPC ISA does things the way they're supposed to be done
    [snip]
    The P4, on the other hand, has slightly better hardware but is hobbled by the legacy x87 ISA.


    I could have sworn Tom's Hardware stated it best as "Essentially we have a P4(86) 2 GHz".
    I'm paraphrasing, mind you, and possibly taking it out of context, *but* instead of increasing the caches (instruction/data/registers), they combined them and dropped them down to 8K of instruction and data.

    Oh, and on the P4 vs AMD's XP chip, how would this analogy [slashdot.org] be changed or overhauled as it stands with the P4 vs G4e?

    I'd really like to know. Or have a better "real world" analogy geared toward the newbie user who usually winds up asking me; I have to be able to explain complex things in simple terms to myself first.

    Thanks.

    GISboy
  • by Anonymous Coward
    I'm a CE as well, and I absolutely loved the article. It's nice to have someone fight the technical battles for you and get companies to release minute details about their procs. Trying to get information like this out of some companies is usually like pulling teeth.

    What I wanted to convey, though, for people who may not deal with this stuff at a hardware level, is that it is very hard to really get a good understanding of the whole processor. These projects are IMMENSE. Trying to keep track of millions of transistors, lay them out, etc., is a nightmare. I know. So while it is good to talk about the higher-level concepts of narrow vs. wide on a conceptual level, all that falls away when you start looking at transistors. More than anything, these projects are about coordination. You can have a team of engineers working on a specific part who have NO IDEA what the other bits and pieces look like unless they have to interface with them. So just keep that in mind when we're judging these companies. I just think we lose sight of the massive scale of these projects sometimes.
  • Resources (Score:2, Informative)

    by rusti999 ( 167057 )
    A good general resource for this kind of advanced computer architecture is the book Computer Architecture [amazon.com] by David Patterson and John Hennessy. It's quite dense. For the latest in processor architecture, the IEEE Micro [computer.org] magazine is useful.
