Hardware Technology

A Brief History of Chip Hype and Flops

Posted by kdawson
from the open-mouth-insert-marketing dept.
On CNET, Brooke Crothers has a review of some flops in the chip-making world — from IBM, Intel, and AMD — and the hype that surrounded them, which is arguably as interesting as the chips' failures themselves. "First, I have to revisit Intel's Itanium. Simply because it's still around and still missing production target dates. The hype: 'This design philosophy will one day replace RISC and CISC. It is a gateway into the 64-bit future.' ... The reality: Yes, Itanium is still warm, still breathing in the rarefied very-high-end server market — where it does have a limited role. But... it certainly hasn't remade the computer industry."


  • by Jurily (900488) <jurily@NOSPam.gmail.com> on Monday February 16, 2009 @04:09AM (#26869973)

    I don't think so. x86-64 is fully backwards-compatible with x86. Itanium is not.

    Wanna guess why they're not that popular?

  • by snowgirl (978879) * on Monday February 16, 2009 @04:15AM (#26870001) Journal

    I don't think so. x86-64 is fully backwards-compatible with x86. Itanium is not.

    Wanna guess why they're not that popular?

    You don't know the architecture? The first Itaniums had hardware x86 execution units. The only reason they don't now is that it turned out to be faster to emulate x86 in software than to run it on the cut-down hardware.

  • by anss123 (985305) on Monday February 16, 2009 @04:28AM (#26870073)

    If AMD hadn't rushed with their 64 bit version of the x86, about now, Itanium would be getting popular and hence cheap. Market forces have so much to do with technology advancement. A lot of times, superior technology has to take a back seat ...

    Perhaps, but how superior is that superior technology?

    The idea with Itanium was to make a CPU that could perform on the level of RISC and CISC CPUs with a relatively simple front end. In essence the Itanium executes a fixed number of instructions each cycle, then leaves it to the compiler to select which instructions are to be executed in parallel and make sure they don't read and write to the same registers and such (instead of having logic in the CPU figuring this stuff out).

    It was a neat idea, but advances in manufacturing technology favored CPUs with more complicated front ends. The Itanium advantage never materialized on the desktop, so had this "superior" technology taken off, we might have had faster computers at the cost of making our software run on this bling architecture.

    Making big ISA changes for a mere speed boost is not worth it, and it's not certain you'd get even that as the Itanium does not always outperform the x86.
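    The compiler-side scheduling described above boils down to a dependency check: two instructions may share a bundle only if neither reads or writes a register the other writes. A minimal sketch of that check (the toy `Insn` encoding and `can_bundle` helper are invented for illustration, not Itanium's actual bundle format):

    ```c
    #include <stdio.h>

    /* Toy instruction: one destination register, two source registers. */
    typedef struct {
        int dst;
        int src1, src2;
    } Insn;

    /* Two instructions can issue in the same bundle only if there is no
     * RAW (read-after-write), WAR, or WAW hazard between them. This is
     * the work an EPIC compiler does statically, instead of the CPU
     * doing it dynamically with out-of-order logic. */
    static int can_bundle(Insn a, Insn b)
    {
        if (b.src1 == a.dst || b.src2 == a.dst) return 0; /* RAW */
        if (a.src1 == b.dst || a.src2 == b.dst) return 0; /* WAR */
        if (a.dst == b.dst)                     return 0; /* WAW */
        return 1;
    }

    int main(void)
    {
        Insn add = { .dst = 1, .src1 = 2, .src2 = 3 }; /* r1 = r2 + r3 */
        Insn mul = { .dst = 4, .src1 = 5, .src2 = 6 }; /* r4 = r5 * r6 */
        Insn use = { .dst = 7, .src1 = 1, .src2 = 5 }; /* r7 = r1 + r5 */

        printf("add|mul: %d\n", can_bundle(add, mul)); /* independent: 1 */
        printf("add|use: %d\n", can_bundle(add, use)); /* RAW on r1:   0 */
        return 0;
    }
    ```

    An out-of-order x86 performs the same hazard checks in hardware every cycle; Itanium's bet was that doing them once at compile time would pay off.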

  • Re:What about ACE? (Score:3, Informative)

    by anss123 (985305) on Monday February 16, 2009 @04:59AM (#26870163)

    Back in 1999

    Back in 1991, you mean?

    I only know about that since it was mentioned in an article describing boot.ini. It was from an age before the web so I guess only those who bought certain dead tree magazines ever heard of it.

  • by Rockoon (1252108) on Monday February 16, 2009 @05:41AM (#26870345)
    You are uninformed. The AMD multi-core "problem" is a software problem.

    People who programmed for single-core systems assumed that the processor's internal tick count, called the timestamp counter (read with the RDTSC instruction), would be monotonically increasing. The fact is that each core can have its own timestamp counter, and if a process is migrated to another core by the OS scheduler, then the monotonically-increasing assumption falls flat (time can appear to run backwards). This is true for AMD multi-core processors as well as ALL (AMD and Intel) multi-processor setups.

    The AMD patch does several things, one of which is to instruct Windows not to use the timestamp counter for its own time-keeping. Windows XP defaulted to the timestamp counter for timing because both dual-core and multi-CPU systems essentially didn't exist when it was released. The fix is a simple alteration to boot.ini telling Windows to use PMTIMER instead of the default.

    Any modern games that are not fixed by the above patch were programmed by stupid people. That's right... stupid. They are accessing hardware directly rather than going through a standardized time-keeping layer. Their assumptions about time are wrong when using RDTSC, because it isn't a time-keeper. It's a tick counter specifically associated with a CPU (Intel/AMD) or core (AMD).
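    The "standardized time-keeping layer" point can be illustrated in portable C: instead of reading the per-core tick counter with RDTSC, ask the OS for a clock it guarantees never runs backwards, even across core migrations (POSIX shown here as one example; Windows offers QueryPerformanceCounter for the same purpose):

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Return elapsed nanoseconds on a clock the OS guarantees is
     * monotonic, even if the scheduler migrates the process between
     * cores. Contrast with RDTSC, which is a raw per-core tick counter,
     * not a clock. */
    static long long monotonic_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
        long long t0 = monotonic_ns();
        long long t1 = monotonic_ns();
        /* Unlike raw RDTSC reads taken on different cores,
         * this ordering always holds. */
        printf("monotonic: %s\n", t1 >= t0 ? "yes" : "no");
        return 0;
    }
    ```

    A game that times frames through an interface like this is immune to the per-core-counter problem the patch works around.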
  • by thbb (200684) on Monday February 16, 2009 @06:06AM (#26870447) Homepage

    Commenters seem very young today. No one remembers the failures of Intel's and Motorola's first attempts at RISC designs? Both the Motorola 88000 [wikipedia.org] and the Intel i860 [wikipedia.org] were great designs that failed.

  • IA-64, IIRC, was slower than x86 when compiled with primitive compilers (read: gcc).

    A lot of the advancements were in floating point, which is still meaningless to most people except gamers (who wouldn't be using the platform) and special-interest companies. Namely, branch predictions were more accurate for instructions which take 32 to 64 clock ticks (64-bit sqrt, divide, etc.).

    The advancements of the VLIW were negated by very-large prefetched instructions with cached pre-compiled op-codes.

    The branch predictions only gave you a theoretical advantage over the very large branch-prediction buffers, and in some circumstances the branch predictors gave you better decisions (hot code could pre-guess a direction before a predicate register was even populated - the Itanium would have to execute both paths, while the branch predictor chose the hottest path). Further, Alpha's designers laughed at Itanium, saying they'd had branch-prediction-hint op-code variants for years (without predicates), and they showed many synthetic algorithms which would produce better Alpha code than Itanium code - basically saying that for most algorithms, predicate registers produce less efficient code than the alternatives (it just happens that x86 didn't have any such instructions - but that would have been easy to correct with yet another op-code prefix). Note that i686 did introduce the conditional move instruction, which goes a long way toward avoiding small but common branches.

    Register windows (à la SPARC) are a neat idea, but with register renaming on the x86, a tight function-call loop can be just as fast. Plus, the spilling of registers into memory is often mitigated by the high-speed cache. Further, many arguments against the small register set of x86-32 are avoided by x86-64's much larger register pool. Lastly, if you only have 16 registers (of which only 8 are hot), then you can efficiently utilize a pool of 256 rename registers. If, however, you have explicit 128-register addressability (most of which is statistically cold), it's difficult and inefficient to remap them in future versions of the architecture. Note that every power-of-two increase in register-file size slows down the register interconnects - to say nothing of the real estate and power drain.

    Then there's the fact that a program that only needs 2 GB of RAM will generally work better in 32-bit than 64-bit due to the half-sized memory pointers: more fits into your cache. Other unrelated optimizations in x86-64 may counterbalance this (though those are independent of the 64-bit design), so YMMV. I know that Sun's 32-bit JDK runs faster and more smoothly than the 64-bit version for most of our apps. Yes, there are some CPU instruction optimizations for x86-64, but memory in Java tends to be the limiting factor. The same server app will consume 400 MB in the 32-bit version and sometimes breaks a gigabyte in the 64-bit version. The GC times are measurably slower.

    The explicit register rotation used in tight loops - allowing a 6-op-code loop to execute every op in every stage of the loop, thus reducing the loop to at most n clock cycles - is nice in theory. But what if your loop is more complex than a trivial inc/dec? And what if you need one more op-code than the architecture supports? Moreover, there's no technical reason why a CPU can't detect such a loop after k iterations and use register renaming to do the exact same thing. With a hot-spot detector (part of branch prediction), a subsequent access could fire up the loop immediately. But more importantly, future versions of the CPU could support even larger loop lengths, as you're not explicitly limited by the bit lengths or the originally statically compiled code.

    The sad part is that the explicit compilation of CPU hints and the minimization of register spilling that are the hallmarks of the Itanium should theoretically lead to a slimmer, lower-power, higher-clocked CPU. But due to other engineering decisions, the exact opposite is true: lower clock, bigger silicon, higher power.

    Basically th
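    The conditional-move point above is easy to see in C. With optimization enabled, mainstream compilers typically turn a simple ternary like the one below into a branchless CMOV on i686 and later, so an unpredictable comparison costs no branch misprediction (the `branchless_min` name is ours, chosen for illustration):

    ```c
    #include <stdio.h>

    /* Compilers commonly lower this ternary to a CMOV on x86 rather
     * than a conditional jump, which is exactly the "small but common
     * branch avoidance" the i686 instruction enables. */
    static int branchless_min(int a, int b)
    {
        return (a < b) ? a : b;
    }

    int main(void)
    {
        printf("%d\n", branchless_min(3, 7));  /* 3 */
        printf("%d\n", branchless_min(9, -2)); /* -2 */
        return 0;
    }
    ```

    Predication on Itanium generalizes this idea to whole guarded instructions, but as the comment notes, a good branch predictor often wins anyway because it only executes one path.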

  • by swamp boy (151038) on Monday February 16, 2009 @11:42AM (#26872733)

    True, but that's not how it was marketed. There was lots of marketing hype about it being the first "supercomputer on a chip" (or something to that effect). If Intel's goal was only to sell it for specialized uses, why would they have bothered to generate all the hype?

    Intel even made a "supercomputer" using the i860 (their iPSC/860 hypercube machine used it). All in all, I'd call it a flop. It seems that Intel thought they would own the scientific, engineering, and academic markets with the i860.

  • Re:FTA: (Score:3, Informative)

    by David Gerard (12369) <slashdot@davidge ... k ['co.' in gap]> on Monday February 16, 2009 @11:53AM (#26872885) Homepage
    The Sun X4600 AMD64 servers at work each have a PowerPC in the LOM processor. Running Linux, no less :-D
  • by fjanss (897687) on Monday February 16, 2009 @11:59AM (#26872973)
    > then they ran out of steam (don't know why)

    "The Alpha architecture was sold, along with most parts of DEC, to Compaq in 1998. Compaq, already an Intel customer, decided to phase out Alpha in favor of the forthcoming Intel IA-64 "Itanium" architecture, and sold all Alpha intellectual property to Intel in 2001, effectively "killing" the product."

    from

    http://en.wikipedia.org/wiki/DEC_Alpha [wikipedia.org]

  • http://www.littlelinuxlaptop.com/ [littlelinuxlaptop.com] - these things are widely available in the UK. They're basically toys as yet (locked down, user-hacked firmware is a hideously rough alpha), but very interesting for their potential.

