A History of PowerPC 193
A reader writes: "There's an article about chipmaking at IBM up at DeveloperWorks. While IBM-centric, it talks a lot about the PowerPC, but really dwells on the common ancestry of the IBM 801." Interesting article, especially for people interested in chips and chip design.
IBM also says Screw you to intel (Score:4, Informative)
Guide to the PowerPC architecture (Score:5, Informative)
Re:IBM also says Screw you to intel (Score:4, Informative)
There's a pantload of info here [ibm.com].
Re:Motorola (Score:4, Informative)
They gave up on desktop PPC. They still do a lot of new PPCs, just working on improving MIPS/watt instead of pure MIPS. The embedded space is a lot higher volume and bigger profit than Apple.
Re:IBM also says Screw you to intel (Score:2, Informative)
This is really cool stuff. IBM is a little late to the game in some regards; SGI has been doing this stuff for years in IRIX on their MIPS machines. But hey, better late than never...
Re:Big Endian (Score:5, Informative)
X86 is little endian, which is chunked-up and backwards.
Example:
View the stored number 0x12345678.
Big endian: 12 34 56 78
Little endian: 78 56 34 12
Clear as mud?
Nice PowerPC Roadmap (Score:5, Informative)
Both Endians (Score:2, Informative)
The PPC ISA has support for both big- and little-endian modes. However, the little-endian mode is a bit screwy. There are some appnotes on the Motorola website on using little-endian mode.
Re:IBM also says Screw you to intel (Score:5, Informative)
Re:PowerPC in PlayStation 2? Huh? (Score:4, Informative)
link [uiuc.edu]
So yes, it is in a way MIPS-derived, but the MIPS core does very little of the actual processing; it's more of a bootloader and I/O coprocessor.
Re:"Chips May Physically Reconfigure Themselves" (Score:5, Informative)
Errm, actually, it WAS. See for instance
http://home1.gte.net/res008nh/nt/ppc/default.htm [gte.net]
Re:PowerPC in PlayStation 2? Huh? (Score:1, Informative)
Re:IBM also says Screw you to intel (Score:3, Informative)
Re:"Chips May Physically Reconfigure Themselves" (Score:3, Informative)
Re:Motorola (Score:3, Informative)
So, you could say Motorola is giving up on semiconductors... but the division that worked on the G4 will continue to work on PPC. Just under a different name.
Re:Computer history IS IBM-centric (Score:3, Informative)
It was Lyons Tea Shop Company, of all unlikely contenders, who married "electronic programmable devices" to IT.
Of course, when they realised their mistake they went hell for leather to redress the balance. But...amazingly.....they were totally off the ball **again** with microcomputer technology.
Re:IBM also says Screw you to intel (Score:3, Informative)
Well sort of (Score:4, Informative)
Example: Windows is running on slice 1, BSD on slice 2, and Linux on slice 3.
BSD gets a kernel panic and crashes; the slice is restarted without affecting the remaining running OSes. It's, for lack of a better term, Hyperthreading for the whole computer.
Re:Big Endian (Score:4, Informative)
Re:For those who want PPC970 without getting a Mac (Score:5, Informative)
RS/6000 [ibm.com]
Or, a Power-based IBM workstation,
Workstation [ibm.com]
Re:So what HDL do they use? (Score:4, Informative)
Re:Big Endian (Score:5, Informative)
Little-endian has some nice hardware properties, because the address of a value doesn't change with the size of the operand.
Big Endian:
uint32 src = 0x00001234; // stored in memory as 00 00 12 34
uint32 dst1 = src; // 4-byte read at src's address
uint16 dst2 = src; // 0x1234 lives at address+2, so the address must be adjusted
Little Endian:
uint32 src = 0x00001234; // stored in memory as 34 12 00 00
uint32 dst1 = src; // 4-byte read at src's address
uint16 dst2 = src; // 0x1234 lives at the same base address, no adjustment needed
The processor doesn't have to modify register values and funk around with shifting the data bus to perform different read and write sizes with a little-endian design. Expanding the data to 64 bits has no effect on existing code, whereas the big-endian case will have to change all the pointer values.
To me, this seems less "chunked up" than big endian storage, where you have to jump back and forth to pick out pieces.
In any event, it seems unnecessary to use prejudicial language like "normal" and "chunked up". It's just another way of writing digits in an integer. Any competent programmer should be able to deal with both representations with equal facility.
Being unable to deal with little-endian representation is like being unable to read hexadecimal and insisting all numbers be in base-10 only. (Dotted-decimal IP numbers, anyone?)
Big-endian has one big practical advantage other than casual programmer convenience. Many major network protocols (TCP/IP, Ethernet) define the network byte order as big-endian.
Re:For those who want PPC970 without getting a Mac (Score:1, Informative)
Re:"Chips May Physically Reconfigure Themselves" (Score:3, Informative)
MVS... (Score:2, Informative)
Damn them! Damn them to HELL!!!!
Re:Yeah, I remember (Score:5, Informative)
What Intel did was wrap RISC architecture around the x86 instruction set to create the Pentium Pro, Pentium II, III, etc. Otherwise they would have been killed.
In fact IBM was correct: CISC was dying. The Pentium 1 could not compete against the PowerPC unless it had a very high clock speed. All chips today are either pure RISC or a hybrid CISC/RISC like today's Athlons and Pentiums. The exception is the nasty Itanium, which is not doing too well.
Re:Sounds fishy to me... (Score:3, Informative)
Also, things like out-of-order execution and branch prediction make more sense for a RISC instruction set (or so I was told).
But I more or less agree with you that a long pipeline is somewhat contradictory to the idea of RISC.
Re:Sounds fishy to me... (Score:3, Informative)
Not really, the idea is to make every instruction simple.
Reduced Instruction Set Computer
The side effect of this is that every instruction can be the same length, which simplifies the complex decoding stage of a CPU.
x86 instructions can be multiple bytes in length, while all PPC (and most RISC) instructions are 32 bits long (yes, even the PPC-64 instructions).
The simplified instruction set allows more instructions to be processed in fewer cycles, but generally you need more instructions to do the same thing. Since it's easier to decode the PPC instructions, it's also easier to pipeline them and easier to build superscalar cores (since fewer transistors are required to do the same thing).
This doesn't always translate into more performance, since RISC computers generally need more memory (the code is less dense) and thus sometimes more bandwidth to achieve the same performance. While some x86 instructions are hard for the decoder to crack, the savings in memory to store them can make it worthwhile.
If I am not mistaken the Transmeta was a very wide instruction word. And if I am not mistaken, doesn't that make it the opposite of a RISC?
Yep, but the problem is that you're asking the compiler to extract the parallelism from the instruction stream, which is not always possible. Usually there is more thread-level parallelism than instruction-level parallelism.
Re:200 instructions at once? (Score:5, Informative)
Although, it should be noted that the pipeline depth of the POWER4 is just 15 stages (as opposed to the P4, which has, IIRC, 28 stages), so while a branch misprediction is quite bad, it's not as bad as on some architectures. My understanding is that, in order to achieve that 200-instruction number, the POWER4 is just a very wide superscalar architecture, so it simply reorders and executes a lot of instructions at once. Plus, that number may in fact be 200 micro-ops in flight, as opposed to real "instructions" (although that's just speculation on my part... it's been quite a while since I read up on the POWER4), as the POWER4 has what they term a "cracking" stage, similar to most Intel processors, where the opcodes are broken down into smaller micro-ops for execution.
Re:Sounds fishy to me... (Score:5, Informative)
No, in fact pipelining is central to the entire concept of RISC.
In traditional CISC there was no pipelining and operations could take anywhere from 2-n cycles to complete -- at the very least you would have to fetch the instruction (1 cycle) and decode the instruction (1 cycle; no, you can't decode it at the same time you fetch it -- you must wait 1 cycle for the address lines to settle, otherwise you cannot be sure of what you're actually reading). If it's a NOOP, there's no operation, but otherwise it takes 1+ cycles to actually execute -- not all operators ran in the same amount of time. If it needs data then you'd need to decode the address (1 cycle) and fetch (1 cycle -- if you're lucky). Given that some operators took multiple operands you can rinse and repeat the decode/fetch several times. Oh, and don't forget about the decode/store for the result. So, add all that up and you could expect an average instruction to run in no less than 7-9 cycles (fetch, decode, fetch, decode, execute, decode, store). And that's all presuming that you have a memory architecture that can actually produce instructions or data in a single clock cycle.
In RISC you pipeline all of that stuff and reduce the complexity of the instructions so that (optimally) you are executing 1 instruction/cycle as long as the pipelines are full. You have separate modules doing the decodes, fetches, stores, etc. (and in deep-pipeline architectures, like the P4, these steps are broken up even more). This lets you pump the hell out of the clockrate since there's less for each stage of the pipeline to actually do.
Modern CPUs have multiple everything -- multiple decoders, fetchers, execution units, etc. -- so it's actually possible to execute more than 1 instruction/cycle. Of course, the danger of pipelining is that if you branch (like when a loop runs out, or an if-then-else case) then all those instructions you've been decoding go out the window and you have to start all over from wherever the program is now executing (this is called a pipeline flush and is very costly; once you consider the memory delays it can cost hundreds of cycles). Branch prediction is used to try and mitigate this risk -- the CPU guesses which way the branch will go and speculatively executes down that path, keeping the results only if the guess turns out to be valid.
Was I wrong to laugh when I heard hardware manufacturers claim, "sure, we make a CISC, but it has RISC-like elements
Yes, because neither one exists anymore. CISC absorbed useful bits from RISC (like cache and pipelining) and RISC realized there was more to life than ADD/MUL/SHIFT/ROTATE (oversimplification of course). The PowerPC is allegedly a RISC chip, but go check on how many operators it actually has. And note that not all of them execute in one cycle. x86 is allegedly CISC, but, well... read on.
how wide are the Pentium 4 and Athlon microcode?
The x86 ISA has varying width. It's one of the many black marks against it. Of course, in reality, the word "microcode" isn't really applicable to most CPUs nowadays -- at least not for commonly used instructions. And to further muddy the picture both AMD and Intel don't actually execute x86 ISA. Instead there's a translation layer that converts x86 into a much more RISC-y internal ISA that's conducive to running at more than a few megahertz. AFAIK, the internal language is highly guarded by both companies.
If I am not mistaken the Transmeta was a very wide instruction word. And if I am not mistaken, doesn't that make it the opposite of a RISC?
Transmeta and Intel's Itanium use VLIW (very long instruction word) computing, which is supposed to make the hardware capable of executing multiple dependent or independent operations in one cycle. It does so by putting the onus on the compiler.
Re:Guide to the PowerPC architecture (Score:2, Informative)
You'd go into its folder and see "Peak (604)" or "Deck II (604)" to let you know that it was going to use your particular processor to its best performance.
Re:About My Resume... (Score:3, Informative)
Then came the PC, Unix, the fiascos with OS/2 (the OS/2 marketing especially was pretty bad) and Microchannel, and IBM changed. They certainly are still one of the largest (if not the largest), but they are only a shadow of their former might and of the terror they could inflict on people daring not to choose IBM.
VLIW is very impressive. (Score:2, Informative)
Re:Too scary! (Score:2, Informative)
Mostly IBM-developed schematic capture, simulation, and physical design tools. I also did some work on test structure verification using an IBM-designed tool.
Tools available [ibm.com] in the current ASIC methodology are on the IBM website. Some of these would have been used back then, too.
Impressive (Score:3, Informative)
That's quite impressive. Throw the 970 in that mix and it's even more impressive. The bottom line is that Intel isn't alone at the top of the mountain when it comes to producing high quality, fast, and reliable chips. On a side note, as a soon-to-be-graduating CS major, I dream about working at a place like IBM.
Re:Big Endian (Score:5, Informative)
One more big advantage of the big-endian byte order is that 64-bit big-endian CPUs can do string comparisons 8 bytes at a time. This is a big advantage where the length of the strings is known (Java strings, Pascal strings, the Burrows-Wheeler transform for data compression) and still an advantage for null-terminated strings.
I'm not aware of any such performance advantages for the little-endian byte order.
The main advantage of little-endian byte order is ease of modifying code written in assembly or raw opcodes if you later decide to change your design and go with larger or smaller data fields. The main uses for assembly programming are very low-level kernel programming (generally the most stable part of the kernel code base) and performance enhancement of small snippets of code that have been well tested and profiled and are unlikely to change a lot.
I agree that a decent programmer should be able to deal with either endianness, but the advantages of the little-endian byte order seem to be becoming less and less relevant.
Re:Motorola (Score:2, Informative)
- Just last year they reached core speeds they promised back in 2000 (or was it 1999?).
- PCI support was two years late (or was it three?).
- Power dissipation has been higher than expected.
- Some clock speeds require you to run a different voltage, while other clock speeds don't work at all (if you use certain clock multipliers).
We still actively design in their parts because they are a perfect fit, but we don't trust them to deliver their next feature on time (last Oct they promised the 8270 and related devices would be in production by December... here we are in March and now they are promising May). I hope they can get their act together, because when they finally release a product, it works like a hose.