A History of PowerPC

A reader writes: "There's an article about chipmaking at IBM up at DeveloperWorks. While IBM-centric, it talks a lot about the PowerPC, but really dwells on the common ancestry of the IBM 801." Interesting article, especially for people interested in chips and chip design.
This discussion has been archived. No new comments can be posted.

  • by Erect Horsecock ( 655858 ) on Wednesday March 31, 2004 @02:47PM (#8728195) Homepage Journal
    IBM also announced [tmcnet.com] a ton of new PPC information and tech today at an event in New York, opening up the ISA to third parties, including Sony.
    • Big Endian (Score:5, Funny)

      by nycsubway ( 79012 ) on Wednesday March 31, 2004 @02:54PM (#8728310) Homepage
      I'm not a fan of big endian... or is it little endian... I don't remember, but I do know that if it's backwards, it's backwards because it's the reverse of what I'm used to.

      • Re:Big Endian (Score:5, Informative)

        by Mattintosh ( 758112 ) on Wednesday March 31, 2004 @03:00PM (#8728403)
        PPC is big endian, which is normal.

        X86 is little endian, which is chunked-up and backwards.

        Example:
        View the stored number 0x12345678.

        Big endian: 12 34 56 78
        Little endian: 78 56 34 12

        Clear as mud?
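        If it's still muddy, here's a small standalone C sketch (an illustration of the above, not from the article) that stores that same 0x12345678 and prints the bytes in memory order, so you can see which layout your own machine uses:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint32_t value = 0x12345678;
            unsigned char *bytes = (unsigned char *)&value;

            /* Print the four bytes in the order they sit in memory. */
            for (int i = 0; i < 4; i++)
                printf("%02x ", bytes[i]);
            printf("\n");

            /* Big-endian machines put the most significant byte first. */
            printf("%s endian\n", bytes[0] == 0x12 ? "big" : "little");
            return 0;
        }

        On a PPC this prints "12 34 56 78"; on x86 it prints "78 56 34 12".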
        • Re:Big Endian (Score:3, Interesting)

          Isn't Motorola's PPC implementation both big and little endian (I think it's called bit flipping), which is what made Virtual PC possible on Macs? I seem to remember an article somewhere saying that's why VPC 6 wouldn't run on the G5, since it lacked the dual modes....

          Then again I could be completely wrong.
        • Re:Big Endian (Score:5, Informative)

          by Anonymous Coward on Wednesday March 31, 2004 @03:20PM (#8728678)
          Big-endian appeals to people because they learned to do their base-10 arithmetic in big-endian fashion. The most significant digit is the first one encountered. It's habit.

          Little-endian has some nice hardware properties, because it isn't necessary to change the address due to the size of the operand.

          Big Endian:
          uint32 src = 0x00001234; // at address 1000, say
          uint32 dst1 = src; // fetch from 1000 to get 00001234
          uint16 dst2 = src; // fetch from 1000 + 2 to get 1234

          Little Endian:
          uint32 src = 0x00001234; // at address 1000, say
          uint32 dst1 = src; // fetch from 1000
          uint16 dst2 = src; // fetch from 1000

          The processor doesn't have to modify register values and funk around with shifting the data bus to perform different read and write sizes with a little-endian design. Expanding the data to 64 bits has no effect on existing code, whereas the big-endian case will have to change all the pointer values.

          To me, this seems less "chunked up" than big endian storage, where you have to jump back and forth to pick out pieces.

          In any event, it seems unnecessary to use prejudicial language like "normal" and "chunked up". It's just another way of writing digits in an integer. Any competent programmer should be able to deal with both representations with equal facility.

          Being unable to deal with little-endian representation is like being unable to read hexadecimal and insisting all numbers be in base-10 only. (Dotted-decimal IP numbers, anyone?)

          Big-endian has one big practical advantage other than casual programmer convenience. Many major network protocols (TCP/IP, Ethernet) define the network byte order as big-endian.
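          For what it's worth, here's a minimal sketch (not from the article; it assumes a POSIX-ish system with <arpa/inet.h>) of how portable code usually handles the wire format: htonl() is a no-op on big-endian hosts and a byte swap on little-endian ones, so the value always leaves the machine as 12 34 56 78.

          #include <stdio.h>
          #include <stdint.h>
          #include <arpa/inet.h>   /* htonl()/ntohl() */

          int main(void)
          {
              uint32_t host_value = 0x12345678;

              /* Convert to network (big-endian) byte order for the wire. */
              uint32_t wire_value = htonl(host_value);

              unsigned char *p = (unsigned char *)&wire_value;
              printf("on the wire: %02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);

              /* The receiver converts back to its own host order. */
              printf("host order again: 0x%08lx\n", (unsigned long)ntohl(wire_value));
              return 0;
          }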

          • by Dog135 ( 700389 ) <dog135@gmail.com> on Wednesday March 31, 2004 @03:59PM (#8729170)
            Expanding the data to 64 bits has no effect on existing code, whereas the big-endian case will have to change all the pointer values

            So, you're reading in an array of integers, which are now 64 bit vs 32 bit and no code change is needed?

            Programs NEED to know the size of the data they're working with. Simply pulling data from an address without caring about its size is a recipe for disaster!
          • Re:Big Endian (Score:5, Informative)

            by karlm ( 158591 ) on Wednesday March 31, 2004 @07:04PM (#8731539) Homepage
            What kind of strange CPU implementation modifies register values when addressing sub-word values? This is done most commonly by the programmer at write time (or maybe by some strange compiler or assembler at compile time). This is not a hardware advantage in any architecture I'm aware of. Are you perhaps talking about the extra hardware burden associated with unaligned memory access? Unaligned memory access is not a consequence of byte ordering.

            One more big advantage of the big-endian byte order is that 64-bit big-endian CPUs can do string comparisons 8 bytes at a time. This is a big advantage where the length of the strings is known (Java strings, Pascal strings, the Burrows-Wheeler transform for data compression) and still an advantage for null-terminated strings.

            I'm not aware of any such performance advantages for the little-endian byte order.

            The main advantage of the little-endian byte order is the ease of modifying code written in assembly or raw opcodes if you later decide to change your design and go with larger or smaller data fields. The main uses for assembly programming are very low-level kernel programming (generally the most stable part of the kernel code base) and performance enhancement of small snippets of code that have been well tested and profiled and are unlikely to change a lot.

            I agree that a decent programmer should be able to deal with either endianness, but the advantages of the little-endian byte order seem to be becoming less and less relevant.
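            To illustrate the 8-bytes-at-a-time point, here's a rough C sketch (mine, with a made-up function name, assuming 8-byte-aligned buffers whose length is a multiple of 8). On a big-endian machine the unsigned 64-bit compare gives the same answer as a byte-by-byte lexicographic compare, because the first byte of the string is the most significant byte of the word; on a little-endian machine you'd have to byte-swap each word first.

            #include <stddef.h>
            #include <stdint.h>

            int compare_big_endian_words(const void *a, const void *b, size_t len)
            {
                const uint64_t *wa = a;   /* assumes 8-byte alignment */
                const uint64_t *wb = b;
                for (size_t i = 0; i < len / 8; i++) {
                    if (wa[i] != wb[i])
                        return (wa[i] < wb[i]) ? -1 : 1;  /* lexicographic on big-endian */
                }
                return 0;
            }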

        • It depends on your viewpoint. One of my projects at work is a network packet capturer. It's much more natural to see the least significant bytes at lower addresses, since it makes decoding much easier (especially bitfields: on PPC, the most significant part of a bitfield goes in the lower byte and the least significant part in the next byte, because the damn thing had to match the way we read: we read from left to right, but computer memory is presented from right to left).
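          One way packet decoders dodge the compiler-bitfield mess entirely is to work on raw bytes with explicit shifts and masks, which behaves the same on big- and little-endian hosts. A hypothetical sketch (not the parent's actual code), using the IPv4 version/IHL byte as an example:

          #include <stdint.h>

          struct ipv4_first_byte {
              unsigned version;       /* high 4 bits of byte 0 */
              unsigned header_words;  /* low 4 bits of byte 0  */
          };

          static struct ipv4_first_byte decode_first_byte(const uint8_t *packet)
          {
              struct ipv4_first_byte f;
              f.version      = (packet[0] >> 4) & 0x0F;  /* same result on any host */
              f.header_words =  packet[0]       & 0x0F;
              return f;
          }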
      • Both Endians (Score:2, Informative)

        by bsd4me ( 759597 )

        The PPC ISA has support for both big- and little-endian modes. However, the little-endian mode is a bit screwy. There are some appnotes on the Motorola website on using little-endian mode.

    • by Anonymous Coward on Wednesday March 31, 2004 @02:54PM (#8728312)
      Sony?

      Does this mean that ALL next-generation consoles (the next GameCube, PS3, and Xbox 2) will use an IBM chip?
    • Self-modifying silicon? Geez. And I thought self-modifying code was complicated.*

      The Sony connection is nothing surprising, as it has already been announced that Sony is creating silicon with IBM for their next-gen chipset. I wouldn't be the slightest bit surprised to see a PS3 running on a cluster of rebranded (and possibly modified) PPC chips.

      P.S. Does anyone know why Windows has never been adapted to run under PPC? While the transition for Apple from PPC to x86 may be without technical merit, why h
    • Intel has also shown virtual micropartitions, rebooting Windows XP while running a DVD without a hitch. The SMT being added to the Power5 is called Hyperthreading by Intel PR. I hope IBM, Intel, AMD and others keep competing.
      • The SMT being added to the Power5 is called Hyperthreading by Intel PR.

        Actually, IBM claims that their version of SMT is much superior to HT, with a 30-60% improvement over the gains Intel got with HT. Specifically, they say they expect to see 35-40%+ improvements using SMT under heavy usage.

        If those numbers are right then it would be significantly better than HT. Although, in fairness to Intel, they are comparing Power5 server chips with PC-roots Xeon processors, so there's probably a lot more headro
  • by Anonymous Coward on Wednesday March 31, 2004 @02:48PM (#8728224)
    Cut up a russet potato into thin strips or wedges.

    Fry in oil or bake in oven.

    Salt.

    Enjoy!
  • *sigh* (Score:3, Insightful)

    by nocomment ( 239368 ) on Wednesday March 31, 2004 @02:54PM (#8728298) Homepage Journal
    I still want a PPC ATX board. Pegasos was supposed to deliver, but their boards are still so expensive. :-(
    • At one time I shared your dream, but I've since let go. There would have been a great synergy with BeOS.

      LK
    • Re:*sigh* (Score:4, Interesting)

      by Homology ( 639438 ) on Wednesday March 31, 2004 @03:14PM (#8728620)
      I still want a PPC ATX board. Pegasos was supposed to deliver, but their boards are still so expensive. :-(

      Supposed to deliver? OpenBSD people thought that as well, and got the OS running on it. Now OpenBSD considers Pegasos a scam operation and has pulled support for Pegasos from CVS:

      R.I.P. OpenBSD/Pegasos - All the story [deadly.org]

      • Yeah, I've been following that story since it was posted on misc@openbsd.org last week sometime. It really is a shame. OBSD on a Pegasos board would have made the price _almost_ worth it. Maybe if I can find a board on eBay I might try it with Yellow Dog or something. I won't give money to that company [Genesi].
    • Re:*sigh* (Score:3, Interesting)

      by niko9 ( 315647 ) *
      You might just get what you want [theinquirer.net]

      Wouldn't it be great to be able to pick up an ASUS or Epox PowerPC motherboard and run it with a PowerPC 970FX?

      One can dream.
  • by Anonymous Coward on Wednesday March 31, 2004 @02:56PM (#8728338)
    They also have a very good article about the PowerPC's three instruction levels and how to use implementation-specific deviations while keeping code compatible. This introduction to the PowerPC application-level programming model [ibm.com] will give you an overview of the instruction set, important registers, and other details necessary for developing reliable, high-performing PowerPC applications and maintaining code compatibility among processors.
    • I haven't used it in years, but I remember MetroWerks Codewarrior having options to optimize for specific PPC chips under the Mac OS. At the time I was using a 603ev so any time I coded anything that was math intensive I used to select that chip.

      To be honest, I'm not sure how much of a benefit it provided, but I used it anyway.

      LK
      • The 604 had better floating point performance than the 601, so a number of audio apps I used to use had different specific versions that were installed when the installer ran.

        You'd go into its folder and see "Peak (604)" or "Deck II (604)" to let you know that it was going to use your particular processor to its best performance.
  • From TFA:
    "Today's IBM mainframes still maintain backwards-compatibility with that revolutionary 1962 instruction set."
    Good plan then, Intel, on that whole Itanium mess.

    John.

  • Buried in the middle of a section talking about CMOS, we find this:

    Thus, in the days when computing was still so primitive that people thought that digital watches were a neat idea, it was CMOS chips that powered them.

    You find Douglas Adams fans all over, don't you?
    • Yes. I saw a slightly earlier line under Family Inheritance that reminded me of Adams too.

      All of this complexity meant that by the 1970s, computer chips could do really amazing things (like power increasingly complex digital watches).
  • by crumbz ( 41803 ) <<remove_spam>jus ... am>gmail DOT com> on Wednesday March 31, 2004 @02:58PM (#8728372) Homepage
    "Finally, the Fishkill operation is so hip that the server room runs exclusively on Linux."

    I didn't think it was possible to use the words "Fishkill" and "hip" in the same sentence with a straight face.
  • by MrIrwin ( 761231 ) on Wednesday March 31, 2004 @02:59PM (#8728390) Journal
    Not that it was necessarily a bad thing, but with the PowerPC came a whole new generation of workstation.

    Gone were the intelligent disk and network subsystems. No more die-cast aluminium chassis.

    Whilst I can understand the incessant drive in some sectors for the highest MIPS per $, is there not also a place for bullet-proof, proven technology?

  • Yeah, I remember (Score:4, Interesting)

    by Anonymous Coward on Wednesday March 31, 2004 @03:01PM (#8728412)
    back in '94 or so, when AIM were predicting that they were going to completely obliterate the x86 in a few years. Anyone still have those neat graphs that showed exactly where Intel would hopelessly fall behind while PPC would accelerate exponentially into the atmosphere?
    • Clearly, they mislabeled their price graph as their performance graph...
    • Re:Yeah, I remember (Score:5, Informative)

      by Billly Gates ( 198444 ) on Wednesday March 31, 2004 @03:44PM (#8728972) Journal
      Yes

      What Intel did was wrap a RISC core around the x86 instruction set to create the Pentium Pro, Pentium II, III, etc. Otherwise they would have been killed.

      In fact, IBM was correct. CISC was dying. The Pentium 1 could not compete against the PowerPC unless it had a very high clock speed. All chips today are either pure RISC or a hybrid CISC/RISC like today's Athlons/Pentiums. The exception is the nasty Itanium, which is not doing too well
    • by Bert64 ( 520050 )
      Well, Motorola hoped the PPC would be the successor to the M68k, a very successful processor that was very widely used, easy to program for, and very good for learning assembly on.
  • Nice PowerPC Roadmap (Score:5, Informative)

    by bcolflesh ( 710514 ) on Wednesday March 31, 2004 @03:01PM (#8728419) Homepage
    Motorola has a nice overview graphic [motorola.com] - you can also check out a more generalized article at The Star Online [star-techcentral.com].
  • ...the PowerPC core is really fast and really tiny (leaving lots of room on the chip for customization), and also because the PowerPC architecture is amenable to being coupled with more than one additional coprocessor. This explains its success in highly specialized environments like set-top boxes or the GameCube and Playstation2 video game consoles.

    Correct me if I'm wrong, but isn't the PlayStation 2's Emotion Engine processor built around a proprietary MIPS-derived ISA?

  • by exp(pi*sqrt(163)) ( 613870 ) on Wednesday March 31, 2004 @03:03PM (#8728438) Journal
    VHDL, Verilog, something else entirely?
  • by Anonymous Coward on Wednesday March 31, 2004 @03:05PM (#8728467)
    Is its revolutionary three level cache architecture, utilising a 3-way 7 set-transitive cache structure, which gives performance equivalent to a 2-level traditional x86 style cache for more content addressable memory. Each processor has a direct triple-beat burstless fly-by cache gate interface capable of fourteen sequential memory write cycles, including read/write-back and speculative write-thru on both the instruction and data caches. Instruction post-fetch, get-post, roll-forward and cipher3 registers further enhance instruction cache design, and integrated bus snooping guarantees cache coherency on all power PC devices with software intervention. Special cache control and instructions were necessary to control this revolutionary design, such as 'sync', which flushes the cache, and the ever-popular 'exeio' memory fence-case instruction, named after the line in the popular nursery rhyme.
  • by Random BedHead Ed ( 602081 ) on Wednesday March 31, 2004 @03:07PM (#8728511) Homepage Journal
    I don't see how computer history that goes back to the 1960s can fail to be "IBM-centric." Remember, these were the big guys Microsoft was afraid of pissing off in the 1970s and 1980s. No one ever got fired for buying IBM, because they pretty much wrote the book on chip design before Intel hit it big.
    • OTOH it would be difficult to write computer history pre-late-'60s **with** IBM. Apart from sponsoring the Harvard Mark I, they were pretty oblivious to what computers would do to their market.

      It was the Lyons Tea Shop Company, of all unlikely contenders, who married "electronic programmable devices" to IT.

      Of course, when they realised their mistake they went hell for leather to redress the balance. But... amazingly... they were totally off the ball **again** with microcomputer technology.

  • You should check out: Momentum Computer [970eval.com]

    Sure it's pricey, but I suppose if you're interested in such a board, price isn't the key issue.

    Sunny Dubey
  • IBM's John Cocke was no stranger to the battle against complexity. He had already worked on the IBM Stretch computer

    Not sure I'd want that on my resume. It wasn't IBM's greatest success -- even given their unmatched marketing department.

      even given their unmatched marketing department

      I think you can say a lot of stuff about IBM, but "unmatched marketing department"? *ahem*

      How does the old joke go? "How does the US solve its drug problem?" "They legalize drugs and leave the marketing to IBM."

      Most of the print campaigns IBM has run over here in Germany in the last few years sucked mightily, at least IMO. There were a really funny few, and those were great, but most of them were either hard to understand or just boring.
      • I think you can say a lot of stuff about IBM, but "unmatched marketing department"? *ahem*

        The two comments that stick with me from the earlier days of IBM are:

        1. The guy who gets the rights to put the IBM logo on an office trashcan will make a fortune selling them.

        2. Nobody ever got fired for buying IBM.

        • by Wudbaer ( 48473 )
          Points taken, but I think they owe(d) this more to their absolutely overwhelming market presence and domination (as well as doing things like calling your boss to make sure you get fired for not buying IBM) than to their supreme marketing. For a long time, for people, computers was IBM. IBM was always there, and everyone thought they would stick around as they always had, unchanged, untouched, invincible. Their style of selling apparently was more something like shock and awe with sales people, threats and pr
    • Well, given that Stretch was one of the most successful research efforts in computer architecture ever, I have no clue why you would consider it a bad thing to put on one's resume.

      About half of the modern architectural concepts that we take for granted in current microarchitectures were first introduced in Stretch. It was that important.
      • Well, given that Stretch was one of the most successful research efforts in computer architecture ever, I have no clue why would you consider it to be a bad thing to put on one's resume.

        Could it be because they sold about one of them, and that was to the government, which buys all kinds of stuff it doesn't really get its money's worth out of afterwards? (The San Diego Supercomputer Center is another example of having bought a dud or two -- research projects that have never worked as promised.)

        You may ar

  • by Phs2501 ( 559902 ) on Wednesday March 31, 2004 @03:14PM (#8728614)
    I think it's quite imprecise writing for the article to state (several times, for POWER4 and the PowerPC 970) that they "can process 200 instructions at once at speeds of up to 2 GHz." That makes it sound like they can finish 200 instructions at once, which is silly. I imagine what they really mean is that there can be up to 200 instructions in flight in the pipeline at a time.

    (Which is great until you mispredict a branch, of course. :-)

    • by Abcd1234 ( 188840 ) on Wednesday March 31, 2004 @04:05PM (#8729253) Homepage
      Yeah. It's a good thing that the processors in the POWER line have unbelievable branch prediction logic. For example, the branch prediction rate for the POWER4 is in the mid-to-high 90 percent range for most workloads (as high as 98%, IIRC). In fact, quite a large number of transistors are dedicated to this, which allows the processor to do a pretty good job of achieving something close to its theoretical IPC.

      Although, it should be noted that the pipeline depth for the POWER4 is just 15 stages (as opposed to the P4, which has, IIRC, 28 stages), so while a branch misprediction is quite bad, it's not as bad as on some architectures. My understanding is that, in order to achieve that 200-instructions-in-flight number, the POWER4 is just a very wide superscalar architecture, so it simply reorders and executes a lot of instructions at once. Plus, that number may in fact be 200 micro-ops, as opposed to real "instructions" (although that's just speculation on my part... it's been quite a while since I read up on the POWER4), as the POWER4 has what they term a "cracking" stage, similar to most Intel processors, where the opcodes are broken down into smaller micro-ops for execution.
  • Quotable! (Score:2, Offtopic)

    by MrEd ( 60684 )
    [The RS64's] qualities make it ideal for things like on-line transaction processing (OLTP), business intelligence, enterprise resource planning (ERP), and other large and hyphenated, function-rich, database-enabled, multi-user, multi-tasking jobs


    Large and hyphenated! It's nice when technical writers get to slip a little something in on the side.

  • Are you really saying that the POWER3 was built with the same 15M transistors as the POWER2?

    Also, when you say that POWER4/PPC970 can process 200 instructions at once, you need to explain a bit better what having "instructions in flight" really means. It's not that it can do 200 instructions every clock cycle.

    I submitted this on the feedback form at the bottom of the article as well. The above just doesn't ring right as expressed.

  • by geoswan ( 316494 ) on Wednesday March 31, 2004 @03:33PM (#8728812) Journal

    ...Even x86 chip manufacturers, which continued for quite a time to produce CISC chips, have based their 5th- and 6th-generation chips on RISC architectures and translate x86 opcodes into RISC operations to make them backwards-compatible...

    Maybe this is a sign that it has been too long since I learned about computer architecture, but is it really fair to call a CPU that has a deep pipeline a crypto-RISC CPU?

    When my buddy first told me about this exciting new RISC idea, one of the design goals was that each instruction should take a single cycle to execute. Isn't this completely contrary to a deep pipeline? The Pentium 4 has a 20-stage pipeline, IIRC.

    Was I wrong to laugh when I heard hardware manufacturers claim, "sure, we make a CISC, but it has RISC-like elements"?

    What I am reminded of is the change in how musicians are classified. When I grew up, rock music was just about all that young people listened to. Rap and punk music had never been heard of. And country music was considered incredibly uncool. Now country music's coolness factor has grown considerably. And a strange thing has happened: lots of artists who were unquestionably considered in the rock camp back then, like Neil Young or Creedence Clearwater, are now classified as country music, as if they had never been anything else.

    It has been a long time, but I remember learning in my computer architecture course about wide microcode instruction words and narrow microcode instruction words. Wide microcode instruction words allowed the CPU to do more operations in parallel, i.e. the opposite of a RISC. So, I ask in perfect ignorance -- how wide are the Pentium 4 and Athlon microcode?

    If I am not mistaken the Transmeta was a very wide instruction word. And if I am not mistaken, doesn't that make it the opposite of a RISC?

    • by Zo0ok ( 209803 )
      The concept of RISC (that each instruction takes one cycle) is what makes pipelining possible in the first place. If you have instructions that take 2-35 cycles to execute, it's very hard to produce an efficient pipeline.

      Also, things like out-of-order execution and branch prediction make more sense for a RISC instruction set (or so I was told ;).

      But I more or less agree with you that a long pipeline is somewhat contradictory to the idea of RISC.
    • When my buddy first told me about this exciting new RISC idea one of the design goals was each instruction was to take a single instruction cycle to execute. Isn't this completely contrary to a deep pipeline? The Pentium 4 has a 20-stage pipeline IIRC.
      Not really, the idea is to make every instruction simple.
      Reduced Instruction Set Computer
      The side effects of this are that every instruction can be the same length, thus simplifying the complex decoding process of a CPU.
      x86 can be multiple bytes in length, whi
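      To illustrate the fixed-length point: every PowerPC instruction is exactly 32 bits with the primary opcode in the top 6 bits, so a decoder is just fixed shifts and masks. A rough sketch (mine, not from the thread) that pulls apart a D-form addi instruction:

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
          /* Hand-assembled addi r3, r4, 0x100 (primary opcode 14). */
          uint32_t insn = (14u << 26) | (3u << 21) | (4u << 16) | 0x0100u;

          unsigned opcode = (insn >> 26) & 0x3F;      /* bits 0-5   */
          unsigned rd     = (insn >> 21) & 0x1F;      /* bits 6-10  */
          unsigned ra     = (insn >> 16) & 0x1F;      /* bits 11-15 */
          int16_t  simm   = (int16_t)(insn & 0xFFFF); /* bits 16-31 */

          printf("opcode=%u rd=%u ra=%u simm=%d\n", opcode, rd, ra, simm);
          return 0;
      }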
    • BTW, just FYI, the P4 pipeline is, in fact, 28 to 31 stages! Truly mind-boggling... although some of those are, apparently, just filler "driver" stages to allow the clock rate to be ramped up.
    • by Zathrus ( 232140 ) on Wednesday March 31, 2004 @04:28PM (#8729612) Homepage
      When my buddy first told me about this exciting new RISC idea one of the design goals was each instruction was to take a single instruction cycle to execute. Isn't this completely contrary to a deep pipeline?

      No, in fact pipelining is central to the entire concept of RISC.

      In traditional CISC there was no pipelining and operations could take anywhere from 2-n cycles to complete -- at the very least you would have to fetch the instruction (1 cycle) and decode the instruction (1 cycle; no, you can't decode it at the same time you fetch it -- you must wait 1 cycle for the address lines to settle, otherwise you cannot be sure of what you're actually reading). If it's a NOOP, there's no operation, but otherwise it takes 1+ cycles to actually execute -- not all operators ran in the same amount of time. If it needs data then you'd need to decode the address (1 cycle) and fetch (1 cycle -- if you're lucky). Given that some operators took multiple operands you can rinse and repeat the decode/fetch several times. Oh, and don't forget about the decode/store for the result. So, add all that up and you could expect an average instruction to run in no less than 7-9 cycles (fetch, decode, fetch, decode, execute, decode, store). And that's all presuming that you have a memory architecture that can actually produce instructions or data in a single clock cycle.

      In RISC you pipeline all of that stuff and reduce the complexity of the instructions so that (optimally) you are executing 1 instruction/cycle as long as the pipelines are full. You have separate modules doing the decodes, fetches, stores, etc. (and in deep-pipeline architectures, like the P4, these steps are broken up even more). This lets you pump the hell out of the clockrate since there's less for each stage of the pipeline to actually do.

      Modern CPUs have multiple everything -- multiple decoders, fetchers, execution units, etc. -- so it's actually possible to execute more than one instruction per cycle. Of course, the danger with pipelining is that if you branch (like when a loop runs out, or in an if-then-else case) then all those instructions you've been decoding go out the window and you have to start all over from wherever the program is now executing (this is called a pipeline stall and is very costly; once you consider the memory delays it can cost hundreds of cycles). Branch prediction is used to try to mitigate this risk -- generally by executing both branches at the same time and only keeping the one that turns out to be valid.
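      A back-of-the-envelope comparison (my own made-up numbers, not from this post) of why keeping the pipeline full matters so much: an unpipelined CPU that needs ~8 cycles per instruction versus a 5-stage pipeline that retires one instruction per cycle once it has filled.

      #include <stdio.h>

      int main(void)
      {
          const long n = 1000;             /* instructions to execute      */
          const long cycles_per_insn = 8;  /* unpipelined fetch/decode/... */
          const long stages = 5;           /* pipeline depth               */

          long unpipelined = cycles_per_insn * n;
          long pipelined   = stages + (n - 1);  /* fill once, then 1/cycle */

          printf("unpipelined: %ld cycles\n", unpipelined); /* 8000 */
          printf("pipelined:   %ld cycles\n", pipelined);   /* 1004 */
          return 0;
      }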

      Was I wrong to laugh when I heard hardware manufacturers claim, "sure, we make a CISC, but it has RISC-like elements"?

      Yes, because neither one exists anymore. CISC absorbed useful bits from RISC (like cache and pipelining) and RISC realized there was more to life than ADD/MUL/SHIFT/ROTATE (oversimplification of course). The PowerPC is allegedly a RISC chip, but go check on how many operators it actually has. And note that not all of them execute in one cycle. x86 is allegedly CISC, but, well... read on.

      how wide are the Pentium 4 and Athlon microcode?

      The x86 ISA has varying width. It's one of the many black marks against it. Of course, in reality, the word "microcode" isn't really applicable to most CPUs nowadays -- at least not for commonly used instructions. And to further muddy the picture both AMD and Intel don't actually execute x86 ISA. Instead there's a translation layer that converts x86 into a much more RISC-y internal ISA that's conducive to running at more than a few megahertz. AFAIK, the internal language is highly guarded by both companies.

      If I am not mistaken the Transmeta was a very wide instruction word. And if I am not mistaken, doesn't that make it the opposite of a RISC?

      Transmeta and Intel's Itanium use VLIW (very large instruction word) computing, which is supposed to make the hardware capable of executing multiple dependent or independent operations in one cycle. It does so by putting the onus on the compiler
      • and RISC realized there was more to life than ADD/MUL/SHIFT/ROTATE (oversimplification of course)

        Early RISC (for example early SPARC) didn't even have integer multiply!

        VLIW seems to have some life in high-throughput DSP. Texas Instruments makes some 8-instructions-at-once DSPs.

      • In the DSP range VLIW gets more attention. Take the TI C6000 series, for example: pure VLIW (8 instructions/cycle for the 8 execution units), RISC (dedicated load-store architecture, etc.) with no pipeline interlock and very short pipelines, so you get impressive performance at low cycles/s. In addition, you have the advantage of compiling once and getting deterministic behavior at runtime. Unlike CISC CPUs, which have to rearrange the instructions at runtime, you can (if you want) literally move at compile time any of the assembler instr
  • I like this quote (Score:5, Insightful)

    by Zo0ok ( 209803 ) on Wednesday March 31, 2004 @03:35PM (#8728840) Homepage
    The 64-bit PowerPC 970, a single-core version of the POWER4, can process 200 instructions at once at speeds of up to 2 GHz and beyond -- all while consuming just tens of watts of power. Its low power consumption makes it a favorite with notebooks and other portable applications on the one hand, and with large server and storage farms on the other.

    Can anyone tell me where I can buy a G5 laptop?

  • Macosrumors.com has an article suggesting an IBM chip known as the PPC 975 may be used for future Apple Macintosh computers at speeds of up to 3GHz.

    http://www.macosrumors.com/33004M.html

    Please note that MOSR has a long history of being completely and utterly wrong in their predictions, so don't get your hopes too high...
  • On a similar note... (Score:2, Interesting)

    by kyoko21 ( 198413 )
    IBM announced today that they will be offering more information on the architecture of its PowerPC and Power server chips to device makers and software developers. [com.com] First software with Linux, and now hardware with their own Power line. If only Intel could do this for the Centrino line. :-/
  • I figured /. would have a lot more discussion of the Terminator-like aspects of today's announcement.

    Did you read this? [businesswire.com] Look at the second-to-last paragraph:

    "...IBM is working on future Power chips that can physically reconfigure themselves -- adding memory or accelerators, for example -- to optimize performance or power utilization for a specific application."

    That is the first step in self-evolving machines.

    Yes, it is a minor step, but it is a friggin first step, OK? If they can pull this off, the

  • I found this site a couple of years ago, and I'm sure everyone has heard of it, but just in case: apple-history.com [apple-history.com]
  • Seriously, check these quotes from the IBM site: "Thus, in the days when computing was still so primitive that people thought that digital watches were a neat idea, it was CMOS chips that powered them." "Figure 1. It's wafer-thin" "One of the reasons for that is IBM's new top-of-the-line fab in Fishkill, New York. The Fishkill fab is so up-to-date that it is capable of producing chips with all of the latest acronyms, from copper CMOS XS to Silicon

  • Help me out here. A quote from the article is:

    POWER3
    Released in 1998: 15 million transistors per chip
    The first 64-bit symmetric multiprocessor (SMP)


    Didn't several companies have 64-bit multiprocessor machines out back then? Unless I'm mistaken, Sun's Starfire was before then, having up to 64 UltraSPARC IIs -- which, as I recall, were 64-bit chips. And that's just Sun, ignoring the other players.

    So, it is just that they used "SMP", as opposed to other forms of multiprocessing, or is my memor
  • Impressive (Score:3, Informative)

    by 1000101 ( 584896 ) on Wednesday March 31, 2004 @05:39PM (#8730584)
    "What do the Nintendo GameCube's Gekko, Transmeta's first Crusoe chips, Cray's X1 supercomputer chips, Xilinx Virtex-II Pro processors, Agilent Tachyon chips, and the next-generation Microsoft XBox processors-which-have-yet-to-be-named all have in common? All of them were or will be manufactured by IBM."

    That's quite impressive. Throw the 970 in that mix and it's even more impressive. The bottom line is that Intel isn't alone at the top of the mountain when it comes to producing high quality, fast, and reliable chips. On a side note, as a soon-to-be-graduating CS major, I dream about working at a place like IBM.
