Hardware

IBM PowerPC 970 Architecture 268

riclewis writes "Hannibal from Ars Technica offers an explanation of some of the internals of the new IBM chip. It's certainly more powerful than anything on the desktop now, but by the time it's released a year from now, it looks to be middle-of-the-pack (which could still be a step up for Apple...). This excitement over the early release of hardware specs kinda reminds me of all the hype surrounding Sony's Emotion Engine when it was introduced a couple of years ago. In fact, some are suggesting the PPC 970 chip might be closely related to the PS3's 'Cell' processor..."
This discussion has been archived. No new comments can be posted.

  • PS3 apple? (Score:5, Insightful)

    by barfarf ( 544609 ) on Wednesday October 16, 2002 @11:22AM (#4462445)
    "In fact, some are suggesting the PPC 970 chip might be closely related to the PS3's 'Cell' processor..."

    Even though it's really doubtful, it'd be extremely cool to see a PS3 emulator on the mac if the processors are that closely related.

    I remember running Mac OS 6.0.5 on my Atari ST. Because it had the same processor, it didn't need much to make it run.

    Oh well, I can at least dream, can't I?
    • It has to be incredibly fast! How could a chip called '970' not be really quick and powerful?

      Expect to see a slightly cheaper version called the 940, and a low-end chip called the 880. Well in fact there won't be any real difference between those two chips, but you pay for the extra prestige the extra 60 gives you.
    • Re:PS3 apple? (Score:5, Interesting)

      by Arcturax ( 454188 ) on Wednesday October 16, 2002 @11:45AM (#4462641)
      Oh wow, now that brings back some memories! I remember a friend of mine bringing over this ST he got at a garage sale; it had a Mac emulator disk on it. Since I was a Mac person and he wasn't, he asked me to help him get it running.

      We managed to get System 7 running on it and even managed to coax AOL 2.x to run on it via the modem, getting it online! It was slow and it was AOL running on a Mac emulator on an Atari, but hey, it was geeky and it was fun to do.

      But back on topic, if they do use the same or similar chip it could possibly work though Sony would DMCA it straight to hell.
    • Re:PS3 apple? (Score:5, Interesting)

      by Tim12s ( 209786 ) on Wednesday October 16, 2002 @01:21PM (#4463326)
      Ouch. I just had a very nasty thought that you might be very Very wrong. Assuming Jobs is that insidious, I'd give it a slim chance. IBM brokers the deal between Sony and Apple to bring economies of scale to the 970 and...

      Sony wants to sell less hardware and sell more games with a higher per title margin. Selling hardware at a loss is typical of the console market.

      IBM supplies the goods, Sony & Co supplies the reference PS3 platform and games backing.

      Apple continues building its brand name computing with the ability to run PS3 games. That gives Apple a MAJOR supplier of video games. People who own an Apple typically will be able to afford the various PS3 games. ... now realise that Sony is able to ship a number of exclusive titles, without making a loss on a console AND maintaining their premium on their titles.

      -Tim

      • by Frobozz0 ( 247160 )
        "Selling hardware at a loss is typical of the console market."

        No, actually it's not. Only Microsoft loses money on its boxes. No matter what Sony is selling their boxes for, they make a profit on every one. It's the difference between a profit-oriented, well-thought-out plan and a slapped-together Microsoft 1.0.
        • I believe you're mistaken on this point. While it is true that Sony NOW makes a profit on the PS2 consoles it sells, that was not initially the case. All major console vendors sell their initial units at a loss, in order to build market presence. The price to produce each console eventually goes down once the development costs are recovered and as the production is optimized. Over the product's life cycle the consoles slowly begin to become profitable. The PS2 reached this point a while ago, while MS will get there eventually. The key advantage that MS has is its massive cash reserves ($45 billion or something sick like that) which it can dip into, so it never really has to worry about making a profit on the consoles. A company like Sega doesn't have this option, and is forced to withdraw its console offering if it doesn't begin to turn a profit, as happened with the Dreamcast. So MS's advantage comes in its ability to stick around longer than most companies in this market.
  • Apple Chips (Score:2, Informative)

    by cryptorella ( 611118 )
    Middle of the pack is not a step up for Apple... The G4 chips outperform Intel's, whose x86 instructions are interpreted into RISC-like microinstructions.... A lot more goes into a processor than its MHz... Take a read of Hennessy and Patterson's book, Computer Architecture: A Quantitative Approach.
    • by Anonymous Coward
      Apple chips? Are those anything like banana chips? As a child I didn't like banana chips, but I slowly grew accustomed to their taste. However, I still hate them. I hope apple chips taste better.
    • Re:Apple Chips (Score:5, Informative)

      by WittyName ( 615844 ) on Wednesday October 16, 2002 @11:42AM (#4462622)
      The PowerPC 970 triples the length of the PowerPC pipeline [extremetech.com]

      This will give it the same issues the P4 has. Namely a large penalty for branch mispredicts, etc. Instructions per clock will decrease.

      OTOH, they should be able to crank the speed!
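The tradeoff described above can be seen with a toy model (the numbers below are illustrative assumptions, not actual G4/970/P4 figures): each mispredicted branch flushes roughly a pipeline's worth of work, so the same mispredict rate costs more on a deeper pipe.

```python
# Toy model of effective IPC under branch mispredictions.
# All numbers are illustrative, not measured G4/970/P4 figures.
def effective_ipc(base_ipc, branch_freq, mispredict_rate, pipeline_depth):
    # Each mispredict wastes roughly pipeline_depth cycles of work.
    penalty_per_insn = branch_freq * mispredict_rate * pipeline_depth
    return base_ipc / (1 + base_ipc * penalty_per_insn)

short_pipe = effective_ipc(2.0, 0.2, 0.05, 7)   # G4-like 7-stage pipe
long_pipe  = effective_ipc(2.0, 0.2, 0.05, 21)  # tripled depth
print(round(short_pipe, 2), round(long_pipe, 2))  # 1.75 1.41
```

Higher clocks can more than make up the lost per-clock throughput, which is the bet a deep-pipeline design makes.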
      • Re:Apple Chips (Score:5, Informative)

        by Visigothe ( 3176 ) on Wednesday October 16, 2002 @12:00PM (#4462733) Homepage
        > Instructions per clock will decrease.

        Actually, IPC is *increased* from the current G4. It will now fetch 8 instructions per clock, and retire 5 per clock.

        The current G4 IIRC fetches either 3 or 4 per clock. I have no idea how many it can retire at once.

        This coupled with a quick move to a .09 process shows me that this 970 chip has legs. Another thing... IBM has *always* been conservative about what not-quite-ready chips will do as far as clock, and benchmarks. I expect "Real World" [no relation to Peter Gabriel] performance to be quite good. [although I expect Peter Gabriel's performances to be fantastic =)]
        • It's not as simple as that. The P4 takes a huge hit for its long pipeline. Even if IPC increases due to extra execution units (which won't help non-parallelizable code at all), it will decrease due to worse penalties for pipeline flushes.
          • Re:Apple Chips (Score:3, Interesting)

            by Spyky ( 58290 )
            The POWER4, and presumably the 970, will also have a very, very nice branch prediction scheme. The POWER4 uses a total of 3 branch predictors to the Intel P4's one. The 3rd table weighs the comparative performance of the first two tables to achieve the highest possible rate of correct branch prediction.

            In addition, the PowerPC architecture includes a static branch prediction bit for branching instructions, which allows the compiler to "hint" to the processor the likely branch; the x86 architecture has no equivalent feature.

            In short, branch misprediction occurs less often with the POWER4 (and hopefully the 970) for the above reasons. In addition, the "tripling" of the G4 pipeline in the 970 is still shorter than Intel's 20 stage P4.

            Spyky
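The "third table weighing the first two" scheme described above is a tournament predictor. A minimal sketch with 2-bit saturating counters (greatly simplified; the real POWER4 tables are large hardware structures, and all names here are made up for illustration):

```python
# Minimal tournament branch predictor sketch: a per-branch chooser
# counter selects between two component predictors. Greatly
# simplified relative to real POWER4 hardware tables.
class Tournament:
    def __init__(self):
        self.p1 = {}      # stand-in for a local-history predictor
        self.p2 = {}      # stand-in for a global-history predictor
        self.choice = {}  # 0-3: low favors p1, high favors p2

    def predict(self, addr):
        use_p2 = self.choice.get(addr, 2) >= 2
        table = self.p2 if use_p2 else self.p1
        return table.get(addr, 2) >= 2  # counter >= 2 means "taken"

    def update(self, addr, taken):
        g1 = (self.p1.get(addr, 2) >= 2) == taken  # was p1 right?
        g2 = (self.p2.get(addr, 2) >= 2) == taken  # was p2 right?
        c = self.choice.get(addr, 2)
        if g2 and not g1:
            self.choice[addr] = min(3, c + 1)  # reward p2
        elif g1 and not g2:
            self.choice[addr] = max(0, c - 1)  # reward p1
        for t in (self.p1, self.p2):           # train both predictors
            v = t.get(addr, 2)
            t[addr] = min(3, v + 1) if taken else max(0, v - 1)
```

The chooser learns, per branch, which component predictor has been more accurate, which is what lets the combined scheme beat either table alone.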
        • Re:Apple Chips (Score:2, Informative)

          by Mocenigo ( 534548 )
          > Actually, IPC is *increased* from the current G4. It will now fetch 8 instructions per clock,
          > and retire 5 per clock.

          This is for the branch/integer/fp core only, which is borrowed from the POWER4. This does not count AltiVec, which is a separate unit on the same chip. Further, the two fp units of the core can work in parallel with the AltiVec unit, which the P4 cannot do, because its vector unit uses the normal fpu pipelines...

          >The current G4 IIRC fetches either 3 or 4 per clock. I have no idea how
          > many it can retire at once.

          fetches 3, retires 2 (IIRC; recent iterations may also retire 3)
      • Re:Apple Chips (Score:5, Insightful)

        by Don Negro ( 1069 ) on Wednesday October 16, 2002 @12:09PM (#4462801)
        Does it?

        The EE Times story linked at the top says it's an 8-stage pipe. That doesn't mean any more or less than the ExtremeTech statement that the new pipe is triple the length (which would be 21; the current pipe is 7), since we haven't seen any actual reference docs from IBM.

        Can anybody who was at the Microprocessor Forum give us more info?
    • Re:Apple Chips (Score:2, Informative)

      by jcupitt65 ( 68879 )
      For general int code, the 800MHz G4 in my Mac is about twice as fast as the 450MHz PII in my old work machine... it only gets faster if you AltiVec stuff, which no one does (except some clever peeps at Apple)
      • Re:Apple Chips (Score:2, Interesting)

        by Clock Nova ( 549733 )
        Is it me, or does saying that an 800MHz G4 is about twice as fast as a 450MHz PII not sound like much of a statement?

        Perhaps if you said your 800MHz G4 was twice as fast as your 1.2Ghz P4, I would be impressed.

        Personally, I think you must have made a typo.
    • So says Apple's PR department...

      Testing in popular applications like Photoshop and Illustrator show that the "Mhz doesn't matter" argument just doesn't hold water.

      • Re:Apple Chips (Score:4, Interesting)

        by Shanep ( 68243 ) on Wednesday October 16, 2002 @02:12PM (#4463698) Homepage
        "Mhz doesn't matter"

        The MHz Myth that Apple talks about is not about trying to say that "Mhz doesn't matter", it's about the fact that MHz cannot be used as a direct comparison between architectures.

        Of course MHz (brute force) matters. But what also matters is smart design.

        I think showing a 333MHz G3 running faster than a 500MHz Pentium III [ucla.edu] kinda proves the MHz Myth is just that. Bear in mind that the G3 is not AltiVec equipped! So it's not getting a huge vectorized benefit here.

        If you think that's impressive, look at the G4! I can't wait to see what CPU Apple actually unleashes next.

        I'm astonished that there are actually people who think MHz is THE sole number to go by.

    • Re:Apple Chips (Score:3, Interesting)

      by hawkbug ( 94280 )
      Alright, I'm sooo tired of this argument. First of all, just because it is a RISC chip doesn't mean that a 1.0GHz Motorola chip in a Mac could even come close to outperforming a 3.06 or even 2.80GHz Pentium 4 combined with a 533MHz FSB and RDRAM. Apple just recently adopted DDR RAM, but get this - the little PPC chip you have isn't even natively able to support it at DDR speed; the current batch of PPC chips can only work on one swing of the computing "cycle", not on the up and down like an Athlon can, for example. Meaning, the Motorola chips are not double-pumped, so Apple is years behind AMD and Intel right now. Your argument doesn't hold water.
      • Re:Apple Chips (Score:4, Informative)

        by be-fan ( 61476 ) on Wednesday October 16, 2002 @12:46PM (#4463064)
        Actually, the DDR thing is a little misguided. The real reason DDR had no effect was because 2.1 GB/sec of memory bandwidth was feeding into 1.3 GB/sec of processor bus bandwidth.
      • Re:Apple Chips (Score:5, Informative)

        by Shanep ( 68243 ) on Wednesday October 16, 2002 @02:26PM (#4463812) Homepage
        PPC chips can only work on one swing of the computing "cycle", not on the up and down like an Athlon can for example

        It's called positive- and negative-edge triggering. It's not a new technology either; I was dealing with it in the '80s at the discrete logic level.

        AGP 2x uses this and 4x uses positive, negative, high and low triggering. Certain UDMA modes make use of this clocking technique also.

        Your argument doesn't hold water.

        His argument DOES hold water. PPC CPUs DO outperform Intel x86 CPUs by a good margin when compared clock for clock (showing the MHz Myth for what it is), especially the G4 - and boy, when AltiVec can be and is exploited... wow. There IS more to CPU design than a smaller die and deeper pipelining for higher MHz.

        As far as I can tell, Apple seems to be in a position where they have to make the best of what they can get, due to Motorola dropping the ball pretty badly.

        I hope IBM comes to their rescue. How ironic.

    • Re:Apple Chips (Score:3, Informative)

      by Junks Jerzey ( 54586 )
      Middle of the pack is not a step up for Apple... The G4 chips outperform Intel's, whose x86 instructions are interpreted into RISC-like microinstructions.... A lot more goes into a processor than its MHz... Take a read of Hennessy and Patterson's book, Computer Architecture: A Quantitative Approach.

      True, but there's still no denying that current Pentium 4s are faster. For the sake of argument, let's say that an 800MHz G4 is roughly equivalent to a 1.4GHz Pentium 4. Well, now a bottom-end $500 Dell is shipping with a 1.8GHz processor, the norm is 2-2.4GHz, and you can buy up to 2.8GHz, if you really want to throw your money away.

      Bottom line: Yes, the G4 is faster than most people claim, but it is still measurably slower than what Intel is currently offering.
    • Re:Apple Chips (Score:3, Insightful)

      by CTho9305 ( 264265 )
      Everything you said is correct... BUT...
      It doesn't matter how much work a processor does per clock if you can scale an "inferior" design (according to your definition of inferior) to a MUCH higher clock.

      This may not even be an architectural flaw as much as the result of an inferior manufacturing process. If Motorola's fabs aren't as good as Intel's (I don't think they are), then the fact that the G4 is a "better" processor on paper is completely irrelevant - for all the consumer cares, the FASTEST G4 available is slower than the fastest P4. (Currently, according to benchmarks not done by Apple, it seems that you don't even need the absolute fastest P4s to beat the fastest Macs.)
      • (Currently, according to benchmarks not done by Apple, it seems that you don't even need the absolute fastest P4s to beat the fastest Macs)

        Agreed and a sad state of affairs.

        Seems to me, with the DDR hack and Apple's reliance on SMP now, that they are really trying to hold out until some new CPUs with greater performance and memory bandwidth are available.

        I shall be avoiding the first lot, in case Apple tries to rush them to market too quickly.

        I hope they survive, I rather love OSX and I would HATE to see them move to x86 (which I highly doubt).

    • The G4 chips outperform Intel and...

      Are there any benchmarks to prove this claim? It would be interesting to see a comparison - especially if made by an independent party.

      Tor
      • Are there any benchmarks to prove this claim? It would be interesting to see a comparison - especially if made by an independent party.

        Clock for clock (proving that MHz is not an absolute comparable measure), here [ucla.edu] you can see both G4s and G3s of lower MHz beating Intel Pentium IIIs clocked 50% higher!

        In fact, when the code is RISC optimized, a 450MHz G3 manages to run 74% faster than a 450MHz PII.

        Now imagine a program optimized for SMP and AltiVec on Dual 1.25GHz G4's. I know the cheapest Dell can most likely beat the most expensive Apple, but the current situation leaves Apple with little it can do.

    • Re:Apple Chips (Score:3, Interesting)

      by jerkychew ( 80913 )
      Good lord, not this argument again.

      First of all, whoever modded this as interesting should be poked in the eye. This statement is full of FUD to the max.

      Perhaps a G4 will outperform an x86 of the same caliber, but the high-end P4 CPUs absolutely smoke the high-end G4s. The G4 architecture is so maxed out that Apple had to resort to adding a second CPU, because they just couldn't scale the G4 chips any higher.

      When the G4 came out, it kicked the arse of all the x86 chips out there. But that was a couple years ago. As things currently stand, the best Apples are barely middle of the pack, performance-wise. Don't worry though, you'll still pay more for a Mac than a fully loaded Dell machine.
    • Show me the money (Score:4, Informative)

      by Andy Dodd ( 701 ) <atd7@@@cornell...edu> on Wednesday October 16, 2002 @02:44PM (#4463935) Homepage
      http://www.heise.de/ct/english/02/05/182/

      SPEC benchmarks for the G4 processors. (Not a synthetic benchmark issued by Apple, but by an unbiased third party, SPEC)

      G4 1 GHz SPECs at 306 integer, 187 floating point.
      Interestingly, the 1 GHz G4 was almost neck-and-neck with a 1 GHz PIII (http://www.heise.de/ct/english/02/05/182/qpic02.jpg)

      http://www.spec.org/osg/cpu2000/results/cpu2000.html
      A large archive of SPEC results for many CPUs, including x86.

      A few choice results:
      1.2 GHz Athlon (ancient by today's standards) - 443 integer, 387 FP
      Athlon XP 1700+ on an Epox EP-8KHA (happens to be my mobo; the slowest Athlon XP listed for this mobo) - 633 integer, 561 FP
      Dell Precision Workstation 330, 1.3 GHz P4 - 474 integer, 502 FP (the P4 doesn't seem to be taking too much of a branch misprediction hit here)

      So in the case of the G4s: while they may be a bit more efficient MHz for MHz (and the P3 vs. G4 benchmarks show that this isn't even necessarily the case), the fact that they're so far behind on the clock speed curve hurts them badly.

      If you want to see a good example of MHz not being everything, check out the benchmarks of Alpha systems - The 750 MHz ones chew even 1.2 GHz Athlons for lunch. But don't look at Apple...

      Also interesting in the case of the SPEC benchmarks run by Heise: MS C pays a 10-15% performance hit relative to GCC in the SPEC benchmarks.
  • by WittyName ( 615844 ) on Wednesday October 16, 2002 @11:23AM (#4462456)
    Not that hot NOW! They will have a lot of competition in that space with Opteron/Clawhammer, and the new Sparcs.

    Still, glad to see something other than incremental progress.
    • by Visigothe ( 3176 ) on Wednesday October 16, 2002 @11:49AM (#4462663) Homepage
      Also keep in mind that SPEC marks are *highly* manipulable. Here you have a benchmark that is supposed to test the CPU. The problem with this is that it is both compiler dependent *and* OS dependent. As has been stated many times before, the current G4 machines score in the low 300s in SPEC marks. Does this mean that the G4 is 3 to 5 times slower than the P4? In practice, it isn't. Yes, the P4 2.8 is much faster than the current G4 in most day-to-day activities, but not by *that* much. Anyone can "cook" SPEC marks.

      What you *really* want to do is use the machine, *then* consider whether or not the machine is fast enough for your purposes. Personally, I think that machines with the 970 in them will be quite competitive with the machines that are available at launch.

      • Also keep in mind that SPEC marks are *highly* manipulable. Here you have a benchmark that is supposed to test the CPU. The problem with this is that it is both compiler dependent *and* OS dependent.

        Indeed. SpecFP has almost been reduced to a memory throughput test. What kind of bandwidth will the (hypothetical) Apple chipset deliver? Also, are these numbers base or peak?

        Not to mention that published SPEC numbers must use a production system.
      • by Anonymous Coward
        The problem with this is that it is both compiler dependent *and* OS dependent.

        Well, that of course depends on what you're interested in.

        Since we use computers to run and compile our own code, this is EXACTLY what we want. The vendor can do anything but touch the code. If they release a compiler that produces twice as fast code they'll get better benchmarks, and that will show up in our performance too.

        As has stated many times before, the current G4 machines score in the low 300s in SPEC marks. Does this mean that the G4 is 3 to 5 times slower than the P4? In practice, it isn't.

        In some cases it definitely is. Have you compared compile times with gcc on a G4 and an x86? It is horribly slow on the Apple box, and this is reflected perfectly in the gcc benchmark of SPECint.

        Of course SPEC benchmarks aren't 100% accurate, but a lot of people seem to believe that they are unfair to Motorola. They aren't - but they don't make any claim whatsoever to measure performance of code that has been hand-tuned with AltiVec or SSE. This means you can get excellent Photoshop performance on a G4, but it still sucks on general-purpose compiled code as long as there isn't any compiler that can generate AltiVec code automatically.

        Now, if you only run a small number of AltiVec-accelerated applications (as many Mac users do), that's perfectly OK. But for the scientific stuff we do, SPEC is a very good and impartial indicator of performance.
  • This could help push Apple back to a respectable market share over a couple of years. A *nix box with a decent processor and lots of commercial software? Of course, Apple has proved to be just as fierce in protecting their proprietary code as Microsoft, so I wouldn't expect the price to drop significantly for every million sold. But still, alternatives (especially of this caliber) are good.
  • Chunks of five (Score:5, Interesting)

    by Faggot ( 614416 ) <choads@g[ ]com ['ay.' in gap]> on Wednesday October 16, 2002 @11:24AM (#4462465) Homepage
    Unlike the P4, the 970 does one more trick after it has cracked the PPC instructions down into iops. The 970 divides up the iop stream into "groups" of five iops apiece. So first it cracks the PPC instructions down into iops, then it collects the iops back together into groups. The iops are placed in the group's five slots in program order, with the stipulation that all branch instructions must go in slot 4 (the last slot). Furthermore, slot 4 can hold only branch instructions and nothing else. It is these groups of five iops that are dispatched in-order to the issue queues. (I haven't yet seen a functional diagram of the 970's core, so I'm not sure how many issue queues there are.)

    computing in chunks... sounds a lot like a Cray [cray.com]. Together with the 900MHz-effective (jesus... that's a lot) FSB, Apple really will be selling supercomputers in the next few years.
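The group-formation rule described above (five slots, program order, branches only in slot 4) can be sketched like this. Purely an illustration of the stated constraint, with empty slots padded by None standing in for no-ops; not IBM's actual logic:

```python
# Sketch of the described 970 group formation: pack iops into
# 5-slot groups in program order; branches may only occupy slot 4.
# Empty slots are padded with None (standing in for no-ops).
def form_groups(iops):
    groups, current = [], []
    for op in iops:
        is_branch = op.startswith("b")
        if is_branch:
            current += [None] * (4 - len(current))  # pad up to slot 4
            current.append(op)                      # branch goes in slot 4
            groups.append(current)
            current = []
        else:
            current.append(op)
            if len(current) == 4:   # slot 4 is reserved for branches
                current.append(None)
                groups.append(current)
                current = []
    if current:                     # flush a partial final group
        current += [None] * (5 - len(current))
        groups.append(current)
    return groups

print(form_groups(["add", "lwz", "bc", "mul"]))
# [['add', 'lwz', None, None, 'bc'], ['mul', None, None, None, None]]
```

Note how a branch mid-stream forces padding, which is one reason grouped dispatch trades some density for simpler in-order group tracking.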
    • The 970 divides up the iop stream into "groups" of five iops apiece. So first it cracks the PPC instructions down into iops, then it collects the iops back together into groups. The iops are placed in the group's five slots in program order, with the stipulation that all branch instructions must go in slot 4 (the last slot). Furthermore, slot 4 can hold only branch instructions and nothing else.

      This sounds like a trace processor (a processor that groups segments of instructions known to execute in sequence - i.e. containing at most one branch instruction at the end, and having no entry points from other branches [a fragment of a basic block]). Traces are rescheduled, cached in decoded form, etc. The P4 *does* use trace processing, contrary to the poster's original statement, if I understand correctly. Trace processors have been studied for quite a while, and there are many interesting papers about them.
    • Re:Chunks of five (Score:3, Insightful)

      by Courageous ( 228506 )
      computing in chunks... sounds a lot like a Cray

      This chunking is described in great detail in the original POWER4 public design documents. It's referred to in passing as a redeeming feature borrowed from VLIW concepts. The suggestion is that it's a part of traditional VLIW that could be leveraged into a non-VLIW design.

      C//
  • will it scale? (Score:5, Interesting)

    by vicarina22 ( 577401 ) on Wednesday October 16, 2002 @11:29AM (#4462514)
    When the P4 debuted it was sort of average too... the P4's power has come from its ability to scale to higher MHz ratings pretty quickly. What kind of life are they going to get out of this chip? If it's going to top off at 2GHz then it doesn't really seem worth it, but if the chip can get up to 3GHz or so within a year of its release...
    • Supposedly the issue with Apple's chips over the last few years was Moto's manufacturing process. Rumors say that IBM was always able to make more chips at higher speeds than Moto. The story is that because of the contract between the three, IBM chips did not go in Apple boxes (upgrades and whatnot), and they could not outclock Moto.

      Yes, that's from the rumor mill, but everyone knows Moto has been going through a lot of corporate restructuring, and who knows where they will be focusing in the next 5 years. IBM is going to make these chips (wherever they end up being used) at a brand new plant in NY state. They have a great rep for quality control.

      A kind of creepy thing is that the articles say they will probably debut in the 2nd half of next year (Macworld NYC? One last hurrah before MW moves back to Boston?) or not till January 2004. The articles also imply that they will debut at 1.4GHz. Apple is now selling 2 x 1.25GHz G4 chips.

      Will Apple stall at or below 1.4GHz till these new chips come out? The general upgrade cycle for Apple machines is 5 or 6 months right now. That leaves 2 possible revisions to the G4 towers before these babies are set. Now I know that these chips will come with a super motherboard and 64 bit vs 32 and bla bla bla, but Apple fights the megahertz myth even to somewhat educated consumers. How will they be able to spin it when they have to explain it in terms of Apples vs Apples?

      I guess it's a minor problem if these chips are as zippy as they say... a few benchmark tests and bar graphs should convey some message? Maybe instead of having a 12-year-old kid set up his iMac and go online in 5 minutes, they'll have a 12-year-old kid clone his dog or something. I would be impressed.
      • How will they be able to spin it when they have to explain it in terms of Apples vs Apples?

        Just tell the truth about the new technology - in marketing speak, of course. The truth should be able to confuse 98% of end users into belief and convince the geeky 2% anyway. Impressive bar graphs that don't start at zero ought to finish it off. : )

        There is no spin in the megahertz myth, because what they're saying - that an Intel MHz != a PPC MHz in computing power - is true.

        If they try to say now that their machines are computationally faster than the fastest P4s, then they may be shooting themselves in the foot when all the magazines publish benchmark results that show the opposite. This may cause people to distrust Apple and avoid the new machines, which hopefully really will be speed demons.

  • by bluemilker ( 264421 ) on Wednesday October 16, 2002 @11:29AM (#4462516) Homepage
    I mean... it's great news that Apple won't have to rely on Motorola's decidedly passive desktop chip development strategy anymore...

    But man. First off, this kills any possibility of a big surprise hit. Second, this dooms Apple sales for the next year or so... who wants to buy a stagnating desktop model when the next edition has so much promise?

    Then again, Apple's desktop offerings have been a little stagnant anyway... most people probably won't want to play the waiting game for as long as it'll take for these to come out.

    I just hope that by the time they do, they're worth it.
    • by Arcturax ( 454188 ) on Wednesday October 16, 2002 @11:54AM (#4462693)
      You are right, most of us can't wait, at least those of us with really old Macs (I'm about to retire my Beige G3, in fact). I just ordered one of the dual 1.25GHz machines, and that should be more than enough power for me for some time. I'll move up to a 64-bit Mac in 3-5 years, when they've worked out the kinks and about the time most people quit making 32-bit apps.

      I did at least learn my lesson with the Beige G3 when it comes to jumping onto the latest thing just as it first comes out. While my old Beige G3 Rev A box has been a fairly solid machine for the past 5 years, it does have some serious shortcomings (possible voltage regulator blowout if upgraded to a G4, a 66MHz bus (ick), and a Rev A ROM, which means no IDE slave support!).

      I feel fairly confident this possibly last-of-the-line G4 should be fairly solid, other than the chips not fully utilizing DDR (at least DMA operations will take advantage of it) and the silly idea of making the second IDE channel only ATA/66.

      Once the issues of moving to a 64 bit chip and the new Hypertransport bus and such are worked out and my machine starts to look as slow as my Beige G3 is now compared to the latest machines, then I will start itching to move up.
  • by mbourgon ( 186257 ) on Wednesday October 16, 2002 @11:34AM (#4462558) Homepage
    1) Why all the hype over a chip that will be slow when it's released? I'll admit, the specs look damn impressive - a 1.6GHz single-core Power4 has the SPECfp/int numbers of a 2.5GHz P4 (500MHz bus) - but they're not due out for a year, and the 1.6 is expected to be the high end

    2) Why only a single-core?

    3) Where's the G5? It looked similarly impressive, a year ago. It still does, according to the Register's leaked spec numbers

    4) What's the advantage again of a 64 bit processor? Sure, more RAM. Is it faster? Does it do more? Anyone?

    • 1) Don't know.

      2) Because multi-core is damn expensive and damn power hungry.

      3) DOA.

      4) 64-bit programming is more efficient; you can crunch more numbers per cycle, which will speed up some applications... don't fall into the "consumers don't need 64-bit" crowd.
    • Back in the "good old days", a primary benefit of the "newer", larger-"bit" processors was the larger instructions. An 8-bit processor had small 8-bit instructions, with maybe some double-"word" instructions that were much slower to execute, along with an 8-bit integer math unit. Floating point, when you had it, was also constrained by the 8-bit size, though a bit less tightly. Thus, moving up in size meant increases in performance on many fronts, but instruction width, integer math width, and addressing were the big ones.

      I am wondering how this applies to these latest 64-bit processors. In the days of RISC, one would think that a reduced instruction set would easily fit in 32-bit instructions (those are rather huge and comfy compared to the old 8-bit days), though I would guess that a 64-bit instruction could include an opcode, a register specification AND 32 bits of memory address, which would mean fewer multi-word instructions, which by old measures means faster execution. A 64-bit integer unit would have some real benefit. I find more and more cases where 32-bit integers are not sufficiently large to cover the range of values needed for problems, and that is without addressing over 32 bits of data.

      I am curious if someone can compare these attributes of the current Pentium 4/Athlon XP processors with this PowerPC 970, the current SPARC from Sun (UltraSPARC, is it?), and the current HP PA-RISC processor (though isn't that being dropped in favor of Itanium?).
      • The 64 bit PPC uses 32 bit instructions.

        Basically the only real difference is in the details of some instructions, and the 64bit registers.

        Since you're using 64 bit integer registers, you can now use 64 bit addressing (pointers), which means you can calculate addresses for 64bit address spaces, which yes, means more RAM.

        Macs are currently limited to below 4GB of RAM, which is actually a limit... I think the most significant reason to move to 64-bit PPC is to go beyond 4GB of physical RAM.

        The other benefit will be the ability to handle 64bit integers fast. As used by databases ;)

        Another benefit will be 64bit load/stores which can happen in 1 cycle, rather than 2.

        Of course, the Altivec unit has allowed 128bit load/stores for a while now (and the fpu allowed 64bit load/stores before)

        Anywho, the big points of PPC64 are increased integer size and larger address space.

        PPC does not use segment hacks like x86
    • by Inoshiro ( 71693 ) on Wednesday October 16, 2002 @12:23PM (#4462895) Homepage
      Once you move beyond 4.3 billion, into the realm of 18.4 quintillion (about six orders of magnitude past a trillion), you can address anything for the foreseeable future (you could count each year until the heat death of the universe this way, for example).

      For vector operations, 64bit words make for some fast math operations, since you can pack more 32-bit integer components into each bus transfer.

      For floating point, it means you have greater precision in hardware (allowing things like real physics and shapes to be modelled without noticeable issues caused by subtle number creep). Since most systems use IEEE 754 (64-bit double-precision floating point), it means a speedup to that software since you're not working with it as 2 32-bit operations.

      In terms of storage space, it means you can address more than 2,199,023,255,552 bytes (~2 terabytes) of disk space (assuming a 512-byte sector). This is important for people with big RAID arrays today, and people with ludicrously big Maxtor drives 3-4 years from now.

      For RAM, it means you don't have to worry about your server topping out at 4 gigabytes of RAM. It also means that your VM space has no effective limitation for the foreseeable future (very useful for people working on large projects, trying memory-intensive algorithmic approaches to traditionally NP-hard problems, or distributed computing problems).

      I'm sure I missed a lot of the benefits even with this list. As you can see, 64-bit is not just a number game. 2^64 is 2^32 times (over nine decimal orders of magnitude) larger than 2^32, meaning our grandchildren will probably still be using 64-bit machines with no limitations apparent (unlike 16-bit to 32-bit, which only moved from 65K to 4.3 billion in terms of addressable amounts of something).
      • For floating point, it means you have greater
        precision in hardware (allowing things like real physics and shapes to be modelled without noticeable issues caused by subtle number creep). Since most systems use IEEE 754 (64-bit double-precision floating point), it means a speedup to that software since you're not working with it as 2 32-bit operations.


        Actually, most CPUs today (including G4 and P4) do double-precision in hardware. The G4 does 64-bit FP multiply-add with a throughput of one operation per cycle (I'm pretty sure the P4 does too). Even the loads and stores are operating on 64-bit chunks. Going to a 64-bit processor won't change any of that. The only thing different for FP operations will be (1) you can hold a heck of a lot more numbers in memory! and (2) it might be possible for extended precision (128-bit) to be done easily in hardware.

        • It's been a while since I worked so directly with the processor that I'd know that :) Another bonus of 64-bit is that a 100MHz 64-bit bus will do the memory-related work for those operands as quickly as a 200MHz 32-bit bus.
    • 4) What's the advantage again of a 64 bit processor? Sure, more RAM. Is it faster? Does it do more? Anyone?

      Larger memory space, and fewer levels to the page table, which means faster RAM access even for smaller memory spaces.

      4-16 gigs may seem like a lot now, but remember when 4 megs was a lavish amount and 16 unheard-of?

      Re. calculations, some will speed up, but FP registers are already 64 bits (so FP math won't benefit from 64-bit integer registers), and 64-bit integer calculations are done relatively rarely (they're used for a few things commonly, but 32-bit math is much *more* common on a cycle-per-cycle basis).

      The memory data path itself is already 64 bits or wider for all of the recent chips I've heard of, so there's no speedup there.
  • by bsharitt ( 580506 ) <bridget@NoSpAM.sharitt.com> on Wednesday October 16, 2002 @11:35AM (#4462574) Journal
    Years ago Apple was the king of PCs. In an effort to combat them, IBM launched an army of clones to destroy Apple. While the clones were largely successful against Apple, they also brought down IBM (as king of PCs). Now IBM is arming Apple so they can fight the clone army. Kind of ironic, isn't it?

  • by thatguywhoiam ( 524290 ) on Wednesday October 16, 2002 @11:35AM (#4462575)
    Is it really so hard to imagine that the Cell chip and the PPC970 are probably somewhat related?

    INT. NIGHTTIME - HIGH ABOVE CITYSCAPE

    A small, immaculately dressed Japanese man sits on the floor; a trickle of incense wafts before him. Across the room, an aged, bearded man in a plain blue suit watches him.

    SONY MAN
    The upstarts think they can trifle with us. Their insolence will not be tolerated.

    IBM MAN
    What do you propose?

    SONY MAN
    You have plants. You have research and design. Let us crush them.

    IBM MAN
    So mote it be.

  • by Anonvmous Coward ( 589068 ) on Wednesday October 16, 2002 @11:36AM (#4462586)
    " In fact, some are suggesting the PPC 970 chip might be closely related to the PS3's 'Cell' processor...""

    Ah, so it runs on vapor instead of smoke?

    *wonders if anybody'll get that.*
  • by Anonymous Coward on Wednesday October 16, 2002 @11:37AM (#4462587)
    OK, so its SPECint and SPECfp numbers are 937 and 1051 respectively. From www.spec.org, 2002 Q3: Dell Precision WorkStation 340 (2.8 GHz P4), SPECint base is 970, peak 1010; SPECfp base is 938, peak 947. When it's actually released, if they make 2003 Q2, it won't be particularly impressive. But the current Apple G4 specmarks are about 35% of the 970's, so it'll look good compared to that.
    • yeah, but a) it's running 1GHz slower than the P4, and b) i doubt it's dealing with ultra-long ints (or is that the point of FP? i dunno..)
    • No Apple systems are listed in the SpecCPU95 or CPU2000 results at www.spec.org

      Where did you get your specmarks?

      (Methinks Apple has something to hide...)
    • Two advantages to this processor are the bus (900MHz with 6.2 GB/sec) and the power usage.

      "At 1.8GHz, the PowerPC 970 will consume 1.3-volts and dissipate 42-Watts. At 1.2 GHz, the PowerPC 970 will consume 1.1-volts and dissipate only 19-Watts. For comparison, a 1GHz G4 consumes 1.6-volts and dissipates 21.3-Watts."

      It seems that the PowerBook potential is there. And in Apple's market, data throughput counts heavily, maybe more than absolute processor speed. Look at SGI. The IBM proprietary memory is a bit confusing, however.
  • by mfago ( 514801 )
    Apple starts shipping these in January. Hey, I can hope damn you! ;-)

    At least IBM is pretty good at manufacturing microprocessors, while Moto is certainly not. IBM already has a 0.10 micron (not 0.09) fab in testing, so perhaps the 970 will get to >2GHz "soon."

    In a related story: Moto is supposedly selling their chip business. I guess they finally realized they have no idea what they are doing.
  • by Anonvmous Coward ( 589068 ) on Wednesday October 16, 2002 @11:43AM (#4462625)
    ... so could somebody who understands this processor tell me this:

    Would a 3D rendering app such as Lightwave potentially see a huge benefit to this processor? I understand that it's up to the developer to tune it, yadda yadda yadda, I'm concerned with potential not real world numbers.

    I'm trying to get an image in my mind about how the various processor descriptions (32-bit, 64-bit, Altivec, SimD, etc...) can radically change how an app like that would work.

    Us vertex pushers have a substantial interest in machines that excel at that type of work...
    • by Visigothe ( 3176 ) on Wednesday October 16, 2002 @12:22PM (#4462884) Homepage
      Well, I'll try.

      rendering apps like Lightwave, Maya, etc will benefit from this for several reasons:

      The 64bit architecture:
      Lightwave [if rewritten to be 64bit] will be able to use bigger numbers, and use more memory. Bigger numbers means that calculations that would involve making a 64bit word out of 2 32bit words [as it currently stands] needn't be done. Being able to address more memory is *always* a good thing.

      Really good Floating Point Performance:
      3D rendering apps love FP. bigger/faster/more Fp units are a good thing.

      Memory Bandwidth:
      The 900MHz bus will allow a *huge* amount of memory to be shuttled back and forth from the processor *very* quickly. This means your huge scenes will be rendered faster.

      Altivec/Vector Processing unit:
      Because the VPU doesn't do double precision FP, it doesn't help in the final rendering [much]. It *will* help in things like realtime previews, where the math is simplified. Imagine *big* previews of scenes in realtime.

      Multiprocessing:
      This chip is [as implied] MERSI compliant. This means that it is a perfect candidate for multiprocessing, like the current G4.... but the 970 can go many more "ways" than the G4 [the G4 was in an "optimal" multiprocessing stage with 2 procs]. The 970 can go up to 16, IIRC.

      This seems like it'll be a winner.

      .
      • The 64bit architecture:
        Lightwave [if rewritten to be 64bit] will be able to use bigger numbers, and use more memory. Bigger numbers means that calculations that would involve making a 64bit word out of 2 32bit words [as it currently stands] needn't be done. Being able to address more memory is *always* a good thing.
        >>>>>>>>
        More memory, maybe, but the 64-bit integers are nearly useless. I doubt lightwave is dealing with any integers in performance intensive code, much less 64-bit ones. What's more important is 128-bit floating point SIMD, and everyone already has that.

        The 900MHz bus will allow a *huge* amount of memory to be shuttled back and forth from the processor *very* quickly. This means your huge scenes will be rendered faster.
        >>>>>>>>>>>
        Yep, very much so.

        Altivec/Vector Processing unit:
        Because the VPU doesn't do double precision FP, it doesn't help in the final rendering [much]. It *will* help in things like realtime previews, where the math is simplified. Imagine *big* previews of scenes in realtime.
        >>>>>>>>>>
        Hah hah, SSE2 does.
    • by Anonymous Coward

      My reading is that there are four significant differences between IBM's chip and the G4.

      • Clock speed. This one is straightforward. While comparing clock speeds between different types of processors causes confusion (a Pentium clock isn't the same as an AMD clock isn't the same as a PowerPC clock), higher clock speeds in the same processor family are always better. When it arrives, the PowerPC 970 will max out at 1.8 GHz compared to the top speed of 1.25 GHz of the G4 today.
      • Instructions per cycle. Here's where things get tricky. G4s execute a maximum of 3 instructions per clock cycle. The PowerPC 970 will be able to execute a maximum of 8 instructions per cycle. Before you get too excited, consider that those are maximums; in practice, dependencies between instructions mean the chip will rarely execute that many at once. Still, this should result in a speed improvement.
      • 64 bit addressing. G4s are limited to 4 GB of RAM due to their 32 bit architecture. The PowerPC 970 has a 64 bit architecture, so they can support up to 4 TB of RAM. (I think. I'm a bit hazy on what actually controls the upper limit.) This means that graphics applications that require absurd amounts of memory will have more room to grow. The 64 bit architecture also means more computational precision, but it's unclear to me how useful that would be outside of scientific computations.
      • Bus speed. The PowerPC 970 supports a 900MHz bus, which is much faster than the G4. This controls the rate at which the processor can access memory. For memory intensive applications, bus speed can be more important than processor speed, because the processor ends up having to wait for data from memory much of the time.

      My expectation is that the bus will make the biggest difference for end users, followed by the improvement in instructions per cycle, at least in the short term. Then again, I'm far from an expert, so someone else might have better understanding of the potential performance gains.

      Matthew

      Friends don't let friends Slashdot.

      • G4s are limited to 4 GB of RAM due to their 32 bit architecture.

        Minor rant: the limitation is due to address bus size. Address bus size doesn't have to be tied to overall architecture size. Of processors I've known, the MOS 6502 was an 8-bit processor with a 16-bit address bus. The upgrade, the 65816, was a selectable 8/16-bit architecture and had a 24-bit bus, besides a fully compatible 650x mode. The original 68000 had only a 24-bit address bus. The original MacOS stored some flags in the unused bits and folks played with those (even though Apple told you not to), and that played havoc when real 32-bit address-bus chips came out. The 68000 was almost a hybrid as well: it had 32-bit registers, but IIRC you could only branch with 16-bit signed offsets, and the 68020 had true 32-bit addressing. The original Intel 8086s had essentially a 20-bit bus that was accessible with the evil segment/offset addressing.

        I'm not a chip designer, but I don't think there's a technological reason why they can't put 64-bit addressing on a 32-bit chip. You'd have to have new addressing modes and opcodes (like the 68000 => 68020, or 8086 => 80286 => 80386), and you'd probably say that with all that work it may be simpler to go to 64-bit across the board. In a deeply pipelined architecture you probably wouldn't want it anyway: not having standard sizes (32-bit opcodes and data vs. mixed 32-bit and 64-bit chunks) just makes it harder to see where the instruction boundaries are.
  • by scharkalvin ( 72228 ) on Wednesday October 16, 2002 @11:49AM (#4462665) Homepage
    Since I won't be buying a pc from Apple with this chip on it, I hope some third party such as Tyan decides to make an ATX format MB for us pc builders. There WILL be Linux ports for this chip, and existing PPC ports would probably work with it in 32 bit mode at first. Even IBM might offer an ATX evaluation board, though it would probably cost too much.
    • Well a third party motherboard will require a non-Apple chipset.

      MAI currently make reasonably up to date (compared with the old IBM 710 northbridge anyway) G3/G4 chipsets such as the Artica-S. These are used on the bPlan Pegasos (MATX), Eyetech AmigaONE (ATX) and Birdie (ATX server) motherboards.

      Hopefully IBM or MAI will make a reasonably priced chipset to go with the PPC970 processor... hence allowing generic motherboards using that processor to be made.
  • Um, wrong (Score:3, Interesting)

    by be-fan ( 61476 ) on Wednesday October 16, 2002 @12:26PM (#4462917)
    The 7.2 GB/sec of bandwidth is less than double that of existing P4s (P4 = 4.2 GB/sec), and since Hammer will have 6.4 GB/sec in early 2003, it should be essentially the same as competing x86 chips.
  • The idea that this will be a strictly middle-of-the-road processor is to ignore some important facts. AMD's and Intel's 64-bit options are primarily geared towards servers and workstations. Meanwhile, IBM claims that their GPUL was engineered primarily for personal computers, NOT servers. Thus Apple could become the first computer manufacturer to put 64-bit processing power in the hands of the general population. If the average Joe realizes that the wave of the future (64-bit) is inevitable, he'll probably want to get on early. Plus, don't forget the Altivec support built into the chips, as well as the new super-bus that they are working on with nVidia. Not only will Apple get a powerful processor, but they'll also get a pipeline capable of feeding it.
  • think about it... (Score:2, Interesting)

    by zugedneb ( 601299 )
    Well, a bit off topic, but...

    I think that we should be thinking more and more about the power consumption of things in general... On the environment, you know...

    I wonder for how long "The_American_Way" will hold...

    It would be interesting if some law turned up (I am from Europe, Sweden) that would make some serious "restrictions" on the power/performance phenomenon...

    It would be the rebirth of elegant engineering... :-)

    /zugedneb
  • by Sloppy ( 14984 ) on Wednesday October 16, 2002 @12:55PM (#4463120) Homepage Journal
    with the 970 looming on the horizon and the G4 apparently stuck again around the 1GHz mark, nobody in their right mind would shell out for a new PowerMac any time after mid-2003.

    There is never a good time to buy a computer, and nobody in their right mind will ever buy one at all. There is always something faster coming up.

    Once you get over how ludicrous that is, I say buy a computer whenever the hell you want one. And yes, your machine will be obsolete, according to all the charts and graphs and tables of benchmark numbers, almost immediately. It doesn't matter if you buy a G4 in 2003, or a 970 in 2004. It will still happen. Get over it.

    • It does with Mac (Score:3, Insightful)

      by Quila ( 201335 )
      There's a specific talent to buying a Mac at the right time, as performance increases happen in large steps in a few distinct instances in the year.

      PCs just keep getting gradually better and better. But with a Mac you can buy a single processor machine one day only to find you could have had a dual for the same price on the next day.
  • MP? (Score:2, Insightful)

    by muchmusic ( 45065 )
    I may have missed a comment here, but I'm not that familiar with this subject. Apple uses multiprocessor schemes in all of its pro desktops now - I know that it's mainly to make up for other speed deficiencies - but is it possible (probable) that we will see dual processor versions of desktops with this chip as well?
  • by fastpathguru ( 617905 ) on Wednesday October 16, 2002 @01:35PM (#4463436)
    Decodes/breaks down the native ISA, repackages them in bundles, then issues them to the execution units... A point-to-point FSB... Will have higher IPC than Athlon, but has all the same scalability limits. Hammer has the integrated memory controller and multiple hypertransport interfaces for fast IO and glueless MP. In short, PPC is similar to 7th generation x86 along with P4 and Athlon. Hammer is much more like Power4, but more highly integrated/cost-reduced. fpg
  • by Erich ( 151 ) on Wednesday October 16, 2002 @01:43PM (#4463504) Homepage Journal
    I see people here on Slashdot a lot who dislike the x86 processors because they do translation from the x86 ISA into internal opcodes.

    Note that your new IBM chip is doing exactly that.

    Intel and AMD have repeatedly shown that they can do whatever they like to implement top-notch internal architectures, and tacking on a translation unit only adds 10-20% die area and typically a very small performance hit over a traditional sequential RISC architecture. And they're free to change the internal architecture between revisions. And both Intel and AMD sell enough chips that they can spend a lot of money on designs and make them very good and still turn a profit.

  • by AHumbleOpinion ( 546848 ) on Wednesday October 16, 2002 @02:05PM (#4463650) Homepage
    Contrary to some of the opinions presented recently it is just fine for Apple to use the 970 and be behind the curve with respect to typical performance. Sure there are specialized apps that can leverage a RISC architecture to outperform x86 or leverage Altivec to outperform SSE, but that is a small minority. Typical performance lags behind PC a little but we are in a situation where PCs and Macs have more performance than most people actually use. Most folks out there in the real world will get along very nicely with a 1GHz PC or a 800MHz Mac. Very few people need 2.xGHz machines, and only a few more have enough disposable income to buy those machines for Quake FPS pissing contests :).

    The real Apple problem is that the gap between typical PC and typical Mac performance is starting to grow beyond the range that has historically been shown to be viable. Not a problem today (standard dual CPUs counter this to a degree), but it's likely to be a problem in a year or two. While the 970 may only perform like a 3GHz P4 (SPEC) and lag behind whatever Intel/AMD has in a year or two, it will be close enough. Apple will be back to a point where the typical performance gap is small enough. Apple has sold tens of millions of Macs that lagged their PC counterparts in performance. They know that their customers are more interested in ease of use. Performance-wise, close enough is all they need.
  • by lweinmunson ( 91267 ) on Wednesday October 16, 2002 @03:44PM (#4464347)
    I can't find the link anymore, but last night I saw an article by Frank Soltis, the chief scientist over the AS/400 unit. He basically laid out the evolution of the POWER architecture (not the PowerPC architecture) and how it relates to the new 970 CPU. The first POWER CPU used by IBM was derived from their work with Moto and Apple, but it couldn't be used in the AS/400 line because of limitations in the chip. So IBM came up with Power2 (PowerPC AS). This extended the functionality of the chip to where it could be used in an AS/400 environment, but was no longer compatible with the PowerPC that Apple and Moto were selling. Then they added the POWER64 instruction set, which made the chip faster for business and HPC applications but drove it further away from the PowerPC platform.

    The POWER4 chip actually includes 4 separate instruction sets: POWER64, POWER32, PowerPC64 and PowerPC32. Adding Altivec and cutting out the second CPU core is what the 970 is. He didn't mention that there was really any overlap between it and the PS3 chip. POWER4 design was started in '96, so there may be some shared philosophy, but probably no real instruction matching between the two.

    He also said that the POWER5 (late next year) and POWER6 architectures would have some OS-dependent accelerations put in them. He specifically mentioned that the chip would have an instruction for handling TCP streams instead of having to send several instructions to the CPU at once, and that these will be fully documented so that Linux/OSS can use them. POWER6 will extend that to specific DB2 and Domino calls to accelerate those apps.
