A Brief History of Chip Hype and Flops

On CNET.com, Brooke Crothers has a review of some flops in the chip-making world — from IBM, Intel, and AMD — and the hype that surrounded them, which is arguably as interesting as the chips' failures. "First, I have to revisit Intel's Itanium. Simply because it's still around and still missing production target dates. The hype: 'This design philosophy will one day replace RISC and CISC. It is a gateway into the 64-bit future.' ... The reality: Yes, Itanium is still warm, still breathing in the rarefied very-high-end server market — where it does have a limited role. But... it certainly hasn't remade the computer industry."
  • by hannson ( 1369413 ) <hannson@gmail.com> on Monday February 16, 2009 @04:03AM (#26869947)

    I don't know enough about the architectures to say which one is better (x86-64 vs IA-64) but backwards compatibility with x86 is a big win for x86-64.

  • by Anonymous Coward on Monday February 16, 2009 @04:09AM (#26869975)

    How could the writer blatantly ignore the 486SX, the WinChip, or the original (cacheless) Celeron??

    I've always contended that TI's 486DLC (which fit in a 386 socket) was one of the worst chips I ever used: it overheated, lacked full 486 compatibility, and froze the system with random halts whenever I needed to get something done on it!
     

  • What about ACE? (Score:4, Insightful)

    by Hal_Porter ( 817932 ) on Monday February 16, 2009 @04:10AM (#26869981)

    Back in 1991 the ACE Consortium had Compaq, Microsoft, MIPS Computer Systems, DEC, SCO, and a bunch of others [wikipedia.org].

    The plan was to launch a MIPS-based open architecture system running Windows NT or Unix. Back then the MIPS CEO said MIPS would become "the most pervasive architecture in the world". The whole thing fell apart as Compaq defected, MIPS ran out of cash and got bought by SGI, and DEC obviously moved to supporting Alpha instead. Microsoft shipped NT for MIPS, Alpha, and PPC for another few releases and then gave up the ghost.

  • That's it? (Score:5, Insightful)

    by Anonymous Coward on Monday February 16, 2009 @04:18AM (#26870011)

    A short paragraph about Itanium (or, as the Register likes to call it, Itanic)? A few brief paragraphs about PowerPC? A few brief paragraphs about Puma?

    Come on. There's a lot more scope for this sort of article. What about Rock [wikipedia.org], promised three years ago, with tape out two years ago, and yet we're still waiting for systems? What about the iAPX 432 [wikipedia.org]?

    You've got the basis for a good article, but dear $DEITY, flesh it out! There's more meat on Kate Moss than on this article!

  • by TheLink ( 130905 ) on Monday February 16, 2009 @04:18AM (#26870017) Journal
    The Itanium is not superior at all.

    Even before AMD64, the Itanium was only good at mainly contrived FPU benchmarks; its integer performance was dismal.

    When you didn't care about x86 compatibility and wanted to spend lots of money for the usual reasons, it was better to go with IBM's offerings like POWER (which is still a decent contender in performance).

    Intel couldn't offer you much else besides the CPU. They had to rely on HP, who just left their Tandem and VMS lines to rot. Yes, there were other big names pretending to do Itanium servers, but in practice it was HP.

    The Itanic was an EPIC failure.
  • by Cprossu ( 736997 ) <cprossu2@@@gmail...com> on Monday February 16, 2009 @04:20AM (#26870033)

    AC, how could you have forgotten to mention the Socket 4 Pentiums, or the K5 on AMD's side, the Transmeta Crusoe, the Cyrix MII, or the Slot 1 PIII 1.13?? From the extraordinary cost alone, you could have also called most of the Intel OverDrives flops too.

    And the WinChip (shudders)... I hope no one was unlucky enough to have to depend on a box running one of those.

  • Re:FTA: (Score:5, Insightful)

    by snowgirl ( 978879 ) * on Monday February 16, 2009 @04:20AM (#26870035) Journal

    The PowerPC architecture was dumped by Apple and failed to challenge Intel in the PC market in a big way.

    You missed the proper order. The PowerPC architecture didn't have the money behind it that the x86 architecture did. Take a crappier design but spend a ton more money on it, and you can easily make it faster than a better design.

    The PowerPC failed to compete effectively against the Intel/AMD competition, and thus, Apple was pretty much forced to switch because of simple economics.

  • by ausoleil ( 322752 ) on Monday February 16, 2009 @04:22AM (#26870045) Homepage

    Reading through the article, it seems that, other than AMD's Puma, most of these failures have one thing in common: they were not backward compatible with the chips they replaced.

    People are loath to buy a new computer and all-new versions of software to run on it. Look at the 64-bit Windows architectures. How many folks are running 32-bit software on those?

    Bottom line is that the software IS the computer, and the chips ultimately are sexy only to EEs and gearheads.

  • by snowgirl ( 978879 ) * on Monday February 16, 2009 @04:22AM (#26870055) Journal

    Well, I guess having better compilers for IA64 would have helped greatly, considering that the architecture's performance depends critically on the compiler detecting instructions that are not interdependent.

    That's pretty much right on the head there. Intel made the IA64 under the assumption "make a better chip, and the compiler will follow"; unfortunately, they didn't realize how much inertia was behind x86. AMD exploited it and POOF, Itanium goes down in flames. :(
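    To make that concrete, here's a minimal C sketch (invented names and values, not from TFA or the parent) of the independence analysis an IA64 compiler had to get right:

        /* Hypothetical illustration of compiler-visible instruction-level
         * parallelism; everything here is invented for the example. */
        #include <stdio.h>

        int main(void)
        {
            int a = 1, b = 2, c = 3, d = 4;

            /* These two statements share no operands, so a compiler
             * targeting IA64 could place them in one instruction bundle
             * and issue them in the same cycle. */
            int x = a + b;
            int y = c + d;

            /* This statement reads x and y, so it depends on both results
             * above and must go in a later bundle.  An out-of-order x86
             * discovers the same fact in hardware at run time. */
            int z = x * y;

            printf("%d\n", z);
            return 0;
        }

    IA64 bet everything on the compiler proving that kind of independence statically; real code is rarely this obliging.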

  • Nice title... (Score:2, Insightful)

    by V!NCENT ( 1105021 ) on Monday February 16, 2009 @04:27AM (#26870069)
    I just got out of bed 2 minutes ago, and from vaguely reading the word FLOP I thought of Floating-point Operations Per Second...
  • by m.dillon ( 147925 ) on Monday February 16, 2009 @04:31AM (#26870081) Homepage

    It turns out that the cost of a translation layer has become irrelevant as chips have gotten faster. It's not even really considered a pipeline stage any more. That is, it is no longer a bottleneck to have a layer of essentially combinational logic that converts the CISC instruction set into a mostly RISC/VLIW one internally. This saving grace is also why the fairly badly bloated Intel instruction set no longer has any real impact on the performance they can squeeze out of the chips.

    -Matt

  • Transmeta Crusoe? (Score:5, Insightful)

    by Jeppe Salvesen ( 101622 ) on Monday February 16, 2009 @04:47AM (#26870127)

    That definitely belongs in there. Sorry, Linus.

  • by Hal_Porter ( 817932 ) on Monday February 16, 2009 @04:57AM (#26870155)

    The idea with Itanium was to make a CPU that could perform on the level of RISC and CISC CPUs with a relatively simple front end. In essence, the Itanium executes a fixed number of instructions each cycle and leaves it to the compiler to select which instructions are to be executed in parallel and to make sure they don't read and write the same registers and such (instead of having logic in the CPU figure this stuff out).

    Actually you could see that Itanium was in deep trouble when it launched at a lower clock rate than x86. The whole idea behind EPIC ("explicitly parallel instruction computing") was that you move instruction scheduling to the compiler, which lets you essentially out-RISC RISC, i.e. build a dumber chip that can be clocked faster. I think you're right about technology too. Back in the CISC vs RISC days an R4000, for example, could be clocked faster than a 486 thanks to its ultra-streamlined pipeline - MIPS originally meant "Microprocessor without Interlocked Pipeline Stages". Itaniums, for a variety of reasons, ended up clocked slower than x86. Partly I think too much stuff got added to the architecture, and partly I think x86 chips were already very close to the process frequency limit, so a simpler architecture wouldn't run any faster.

    I sort of wonder if .Net might have been part of the sketchy Itanium strategy too. The big thing about .Net is that it is a VM designed to be JITted rather than interpreted. Part of EPIC was that chips would be binary compatible, at least for user code, but that old binaries would not necessarily run optimally. It's easy to see why - a binary compiled for an old chip with n functional units would have fewer instructions scheduled to run in parallel than one compiled for a new chip with 2n units, assuming the scheduling was done at compile time.

    Of course with .Net the applications are compiled for a VM and then JITted. If you had a new chip, the .Net JITter could detect this and schedule optimally.
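    A rough C sketch of why static scheduling bakes the machine width into the binary (the unroll factor stands in for the issue width; all names here are invented):

        /* Loop unrolled by 2 for a hypothetical 2-wide EPIC machine: the
         * two statements in the body are independent (restrict rules out
         * aliasing) and can issue together.  The factor is frozen at
         * compile time, so a newer 4-wide chip gets no extra parallelism
         * from this binary.  A JIT (as with .Net bytecode) could
         * re-unroll for the actual chip at load time. */
        #define N 1024

        void scale(float * restrict dst, const float * restrict src, float k)
        {
            for (int i = 0; i < N; i += 2) {
                dst[i]     = src[i]     * k;
                dst[i + 1] = src[i + 1] * k;
            }
        }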

  • by paul248 ( 536459 ) on Monday February 16, 2009 @05:18AM (#26870259) Homepage

    From what I've heard, Transmeta was creating some pretty remarkable CPU technology; they just made a series of awful business decisions.

  • Re:That's it? (Score:4, Insightful)

    by Hal_Porter ( 817932 ) on Monday February 16, 2009 @05:26AM (#26870281)

    He'd be better off structuring the article as quiche eaters (computer scientists) vs hardware designers.

    Hardware designers try to build something which can be clocked fast. They don't care if it's aesthetically pleasing and so on.

    Quiche eaters moan about how limited von Neumann architectures are. They try to do CISCy things like reducing the abstraction level between the programmer and the instruction set with lots of hard-to-implement features, and designing ISAs where it is impossible, newspeak-style, to write incorrect code (e.g. segmentation or capability-based addressing [wikipedia.org]). The hardware engineer's way to do this is a TLB and a page table.

    x86 has had input from both camps, but back compatibility has limited the damage the quiche eaters can do. In the end most of the quiche-eater features end up unused (e.g. segmentation and the complex instructions) and you end up running ugly, primitive, but very fast instructions translated to run on a RISC core. It kicked the ass of the quiche-eater-designed iAPX432 and Itanium.

    Of course the dequichification of the x86 was to some extent triggered by competition from the very-low-quiche RISC chips. In fact MIPS did memory protection by implementing only a TLB in hardware; TLB refills and the rest of paging were done in software. Then again, sometimes RISC designs are so fundamentally anti-quiche that the very fundamentalism is a form of quiche eating, like SPARC's multiply-step and divide-step instructions, which ended up slower than the 68K's full multiply and divide instructions.

  • Very little code is written in x86 assembly; the vast majority is written in higher-level languages and then compiled or interpreted... When you have the source code, porting it to IA64 is relatively easy. Look at Linux: it runs on a variety of architectures, as do a huge number of applications. Many of the original authors of those apps would never have considered that they might be running on IA64, Alpha, ARM, MIPS, or SPARC someday...

    The problem is software being delivered as binaries. Binary software distribution is holding back progress, making it necessary to continue supporting old kludgy architectures instead of making a clean break to something new and modern.
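    For what it's worth, here's a small example (a sketch, not from TFA) of the kind of source that recompiles cleanly on any of those architectures: it fixes its integer widths with <stdint.h> instead of assuming the size of 'long', which differs between IA64 or Alpha and the 32-bit chips.

        /* Portable checksum: fixed-width types and no assumptions about
         * word size, so the same source builds on x86, IA64, Alpha, ARM,
         * MIPS, or SPARC. */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t checksum(const uint8_t *buf, size_t len)
        {
            uint32_t sum = 0;
            for (size_t i = 0; i < len; i++)
                sum = (sum << 1 | sum >> 31) ^ buf[i];  /* rotate, then xor */
            return sum;
        }

        int main(void)
        {
            const uint8_t data[] = { 1, 2, 3, 4 };
            printf("%08x\n", (unsigned)checksum(data, sizeof data));
            return 0;
        }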

  • by JakiChan ( 141719 ) on Monday February 16, 2009 @05:56AM (#26870413)

    Itanium did one thing well... it killed a lot of other chips. The threat of it killed MIPS's post-R12K plans - and the Alpha and PA-RISC architectures as well.

    I remember how SGI kept around the team that was going to work on their next-gen processor while they were negotiating with Intel. These guys had no work - they just played a lot of foosball in good old Building 40 (yeah, Google, you weren't nearly cool enough to build that campus). Then once SGI had sold its soul, they axed the project (and the team). That was a sad day...

  • POWER and PowerPC? (Score:5, Insightful)

    by dlundh ( 158421 ) on Monday February 16, 2009 @06:06AM (#26870441) Homepage

    Why is that even in there? It "only" powers all three current game consoles and IBM's Power Systems server lines (i and p).

    If that's a failure, I hope IBM has many more failures in the future.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday February 16, 2009 @08:49AM (#26871137) Homepage Journal

    "Bloat" is not the problem with x86. The problem is that there are zero general-purpose registers - many instructions require that the operands be in specific registers, which blows the whole idea of general-purpose registers right out of the water. This is compounded by the fact that there are only four registers which you could even call general-purpose with a straight face. You can sometimes use some of the others (if you're not using them for anything else, and sometimes you have to have pointers in the pointers) to stash something but they're not useful for computation. Just taking an existing program and recompiling it from x86 to x86_64 with any kind of competent compiler will result in a significant performance improvement, often pegged around 10-15% just due to avoiding register starvation issues. While register renaming somewhat mitigates the issues with the "general" purpose registers in x86, it does not eliminate them entirely.

    On the flip side, x86's variable instruction lengths result in smaller code which can improve execution time on massively superscalar processors simply by virtue of getting the instructions into the processor faster.
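    A quick C sketch of the register-starvation effect (the arithmetic is arbitrary, chosen only to keep many values live at once):

        /* Roughly ten integer values are live inside this loop.  Compiled
         * for 32-bit x86 (about seven usable GPRs), some of them spill to
         * the stack every iteration; compiled for x86_64 (sixteen GPRs),
         * they can all stay in registers - the recompile-and-win effect
         * described above. */
        unsigned mix(const unsigned *p, int n)
        {
            unsigned a = 1, b = 2, c = 3, d = 5, e = 7, f = 11, g = 13, h = 17;
            for (int i = 0; i < n; i++) {
                unsigned v = p[i];
                a += v; b ^= a; c += b; d ^= c;
                e += d; f ^= e; g += f; h ^= g;
            }
            return a ^ b ^ c ^ d ^ e ^ f ^ g ^ h;
        }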

  • by TheRaven64 ( 641858 ) on Monday February 16, 2009 @09:33AM (#26871375) Journal
    If you make a better chip, the users will follow. But if you make a chip that is marginally better than x86, slower than most of its RISC competitors, and more expensive than anything with similar performance, no one will follow. This is especially true when you release an incredibly power-hungry server chip just as the market is starting to care about performance per Watt.
  • by k.a.f. ( 168896 ) on Monday February 16, 2009 @10:06AM (#26871627)
    The Itanium might have had a chance if optimizing compilers had been available that would actually exploit its hardware... but see the following sound bite:

    the "Itanium" approach that was supposed to be so terrific - until it turned out that the wished-for compilers were basically impossible to write.

    (http://www.informit.com/articles/article.aspx?p=1193856)

    When Don Knuth says your chip is impossible to program for, you're in deep, deep trouble.

  • Re:FTA: (Score:4, Insightful)

    by DurendalMac ( 736637 ) on Monday February 16, 2009 @11:27AM (#26872557)
    I find it interesting that the article failed to mention that variants of the PowerPC are not only in the PS3, but also in the Wii and the Xbox 360. Yes, the PowerPC failed in the desktop arena, but it's been very successful in others. The PowerPC also has plenty of success in embedded markets. Does the auto industry still use 603-based chips in car computers?
  • Alpha was scaling quite nicely, thank you, all the way through EV7, and was completely on track with the original plan. Alpha didn't "run out of steam"; Alpha was deliberately killed. The EV8 program had its funds and manpower cut, and then that was used as an excuse to kill it.

    And little of what made Alpha good ended up in Hammer. Alpha wasn't an implementation, it was an architecture, and what made that architecture effective was an instruction set and memory model that let the implementation change without ending up in the dead ends that claimed MIPS and SPARC... and IA64.

  • Well, it could have been worse. It could have been SPARC.
    What's wrong with SPARC? I took SPARC assembler and x86 assembler during the same semester my first time through college (i.e., before I dropped out), and it made the x86 class pretty unpleasant. SPARC actually left you with the impression that somebody put some thought into it before they started making them.

  • by Sloppy ( 14984 ) on Monday February 16, 2009 @02:44PM (#26875209) Homepage Journal

    Intel made the IA64 under the assumption "make a better chip, and the compiler will follow"; unfortunately, they didn't realize how much inertia was behind x86. AMD exploited it and POOF, Itanium goes down in flames.

    That was pretty hilarious, considering that while other chipmakers were making chips (68k, PPC, Alpha, etc.) that didn't take over the personal computer market, Intel made the 80286, which could work as a fast 8086. Then the 80386, which could work as a fast 80286. Intel proved backwards compatibility was the most important feature in personal computer processors, overriding any other concern.

    They forgot, and then AMD extended x86 exactly the way Intel had previously done it twice: by making something that could work as a fast 80386.
