Hardware Technology

A Brief History of Chip Hype and Flops

On CNet.com, Brooke Crothers has a review of some flops in the chip-making world — from IBM, Intel, and AMD — and the hype that surrounded them, which is arguably as interesting as the chips' failures. "First, I have to revisit Intel's Itanium. Simply because it's still around and still missing production target dates. The hype: 'This design philosophy will one day replace RISC and CISC. It is a gateway into the 64-bit future.' ... The reality: Yes, Itanium is still warm, still breathing in the rarefied very-high-end server market — where it does have a limited role. But... it certainly hasn't remade the computer industry."
This discussion has been archived. No new comments can be posted.

  • by Prodigy Savant ( 543565 ) on Monday February 16, 2009 @03:57AM (#26869929) Homepage Journal
    If AMD hadn't rushed out their 64-bit version of x86, Itanium would be getting popular, and hence cheap, right about now.
    Market forces have so much to do with technology advancement. A lot of times, superior technology has to take a back seat ...
  • by learningtree ( 1117339 ) on Monday February 16, 2009 @04:16AM (#26870009)
    The biggest advantage of AMD x64 over Itanium is the ability to run x86 32-bit code natively without any performance penalty.
    The comparison is not just about better technology. Think of the trillions of lines of x86 32-bit code that have been written.
    Would you render all this code unusable just because you want to move to a better architecture?
  • by anss123 ( 985305 ) on Monday February 16, 2009 @04:46AM (#26870121)

    TI's 486DLC (which fit in a 386 socket) was one of the worst chips I ever used: it overheated and lacked full 486 compatibility.

    What app did you run that needed full 486 compatibility? Being able to plug a 486 into an old 386 mobo seems like a neat idea, and any software that ran on that 386 would of course run on the nerfed 486 right?

    Too bad about the overheating though.

  • unpublished disaster (Score:2, Interesting)

    by ILuvRamen ( 1026668 ) on Monday February 16, 2009 @04:48AM (#26870133)
    AMD still won't openly admit this, but there's a timing problem with all or at least most of their Athlon X2s where the cores' clocks get out of sync with each other. That causes major graphics problems in games that rely on them, like Runescape and Halo 2. It also causes really strange side effects where basically the computer gets slower and less responsive over time until you restart it. I never knew what was wrong with my computer and assumed it was inefficient software, but then I heard about this and OMFG was I mad! They even have a program on their website that fixes some mysterious, unnamed problem with X2s and graphics, and as soon as I installed it, it worked - and yet they still won't admit to the public how badly they screwed up! I didn't even see the story on Slashdot, but it's all over the web.
    Also, they should add to the list of major screw-ups the entire naming system used by Intel. Centrino sounds like Celeron, and they brought back Pentiums, but the Pentium Ds and Pentium Dual Cores are different, and then there was Core Duo and Core 2 Duo, which are easy to overlook. Ugh, it's just stupid!
  • by hyc ( 241590 ) on Monday February 16, 2009 @05:13AM (#26870227) Homepage Journal

    But we can't stay at that 100x level, and in reality we don't need to be there all the time. Intel Atom proves that - you can get *enough* useful work done with a simpler design, and fewer transistors. Unfortunately, when you get down to the number of transistors that Atom uses, suddenly the frontend decoder *is* a significant proportion of your die real estate again. Inefficiency *always* costs you, and it's stupid to pretend that it doesn't. Atom may try to challenge ARM but it will fail, as long as it keeps the x86 ISA baggage. Efficiency *matters*.

  • by Anonymous Coward on Monday February 16, 2009 @05:35AM (#26870323)

    I worked on Itanium/Merced. Keep in mind I was mid-level (not high enough to see the good political fights first hand, only getting the after effects). Below is my opinion from what information I saw or collected at the time. Take it or leave it as you will.

    Itanium (or I-Tanic) was supposed to be the P7, back when Intel still used P-numbers for chips. The Pentium 4 was never supposed to exist. Basically, Itanium was so bad, the Portland design teams came in and ate the Santa Clara team's lunch.

    The biggest problem for I-Tanic was management, on many levels.
    1) No good top guy
    The main and original project lead was more focused on marketing and "the platform" than actually making the chip. So, there was no top leadership at the CPU design level. This allowed the "lieutenants" to squabble among themselves (more later).
    They finally got a good guy in (whose name, I hate to say, I forget. It was a long time ago). I believe he had done Klamath. The project was in a never-ending re-design spin at this point. When he was there you knew there was a Captain of the ship. You weren't 100% sure he was sailing in the right direction, but felt things were moving ... finally. He lasted about 3 months, until his wife (supposedly) gave him the "me or CPU design" ultimatum. He then moved up to start the Intel DuPont site (which was supposed to be as big as the Portland site). That didn't work out so well for him.
    His hand-picked successor lasted about 1 week before "family reasons" caused his resignation. I assume he looked at the state of the now 2 year delayed chip and ran.

    2) Dot.com boom & Silicon Valley
    The "lieutenants" didn't give a rat's ass about the project. It was mostly a "pump and dump". Being the Dot.com boom and in Silicon Valley, their main concerns were taking over ownership of a "cluster" (State sized chuck of the chip), getting the ownership on their resume, finding a new non-Intel job, and splitting.
    So, every part of the chip got a new guy every 9-12 months who blamed everything on the previous owner, forced a re-design on the part (which may have been needed, but seemed to be needed an awful lot), and then left (forcing the cycle to repeat).

    3) Constant Re-Design
    Look, I know re-design is part of engineering. But perpetual, hamster-wheel-like re-design is not good. Nothing got finished!!!! No specification was stable (let alone written specs; I mean even verbal specs). You'd ask people (and this was years and years into the project) about your interface to their part of the chip and they wouldn't have coded it up yet. So, who knows what the Hell the timing issues would be. "Can I move a flip-flop to your unit?" "Go fish. I haven't coded that."
    Let us also remember that back then (I doubt they still do this) you coded in iHDL (not VHDL or Verilog) using macros for AND & OR gates. So, you're basically doing stencil EE work using a programming language. You want an IF-THEN construct? Well, break out the K-maps, because you'll need them. (A toy illustration follows at the end of this comment.)

    4) Morale
    After the chip had slipped 2+years, no one wanted to work on this thing anymore. They had to freeze internal transfers. You had to threaten to quit to get out. "I am leaving Itanium. Are you going to make me leave Intel to do it?"
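
    Referring to point 3 above, a toy illustration (plain C, not actual iHDL; everything here is invented for illustration) of why an IF-THEN is painful when all you have are AND/OR gate macros: at the gate level it turns into a hand-built mux.

        #include <stdio.h>

        /* A 1-bit "if (sel) out = a; else out = b;" expressed purely with
         * AND/OR/XOR, the way you would with gate macros instead of an
         * IF-THEN construct in the HDL. */
        static unsigned mux1(unsigned sel, unsigned a, unsigned b)
        {
            return (sel & a) | ((sel ^ 1u) & b);   /* sel, a, b assumed 0 or 1 */
        }

        int main(void)
        {
            printf("%u %u\n", mux1(1u, 1u, 0u), mux1(0u, 1u, 0u));   /* prints "1 0" */
            return 0;
        }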

  • The first Itaniums were pretty much a dismal failure...
    They ran at around 800MHz, so clocked lower than x86 systems of the time, which were around 1.4GHz if I remember (and the MHz myth was still very much alive, with Intel fuelling it using the P4)... Their x86 support was roughly the speed of a P90 and therefore of little use beyond running one or two small legacy apps.
    In terms of outright performance they were behind Alpha and POWER at the time; so much for the new architecture. And when it came to price and power consumption they were behind everyone else.

    When Itanium 2 came around it performed a lot better, but still guzzled power, and they realised that software emulation of x86 was faster than the hardware support; other than that, the chips were still too expensive for what they were.

    Now, Itanium is pretty much relegated to the high end niche that Alpha occupied before it was canned.

    Itanium suffered from end users being locked in to proprietary, binary-only software - which only the original vendor could port... Some were unwilling, some didn't see the business case, some demanded that HP/Intel fund the porting, but they couldn't fund everything, so Itanium is left with a very limited set of apps...
    OSS support was better, but it suffered from the high cost and rarity of the hardware, in that hobbyists had little chance of getting hold of the hardware to play with.

    Personally I think HP/Intel would have been better off putting the effort into continued development of the Alpha... It already had a software and user base, it already had x86 emulation which performed reasonably well, and it had a legacy of old hardware behind it that was cheaply available to OSS developers. Even today, the Alpha versions of Linux seem far more active than the IA64 versions... Plus any customers already using Alpha would not have needed to migrate (and many of them migrated to Sun or IBM instead).

  • by anss123 ( 985305 ) on Monday February 16, 2009 @06:14AM (#26870477)

    Itanium did one thing well...it killed a lot of other chips. The threat of it killed MIPS post-R12K plans - and the Alpha, and PA-RISC architectures as well.

    Here's an idea: let's throw out years of proven engineering in favor of an architecture that has yet to hit silicon. That way we can fire our engineers and pocket the change. What could possibly go wrong?

    I feel a big bonus is coming up, and just to be safe let's add a parachute too.

  • by Anonymous Coward on Monday February 16, 2009 @06:26AM (#26870527)

    The 88000 was weird, and more annoying than the SPARC and the MIPS.

    The problem: exposing the pipeline explicitly to the system. In the SPARC and MIPS case this is done partially through the existence of branch delay slots (the SPARC is even more annoying as it has register windows; they make assembly easy to read, but are the source of numerous extremely-difficult-to-find bugs). The 88000 exposed the pipeline even more, and when doing context switches, not only did the program counters and register state need to be stored away, but also other pipeline state.

    The problem here is that pipeline state or specifics should not be exposed to the programmer; it is better to handle it in hardware with out-of-order execution. The pipeline is very likely to need a redesign later on, and exposing it means either maintaining difficult software layered on top of the hardware, or presenting the original pipeline model to the programmer even after it has been completely redesigned. (A rough sketch follows at the end of this comment.)

    Anyway, back to work, need to prepare a lecture about something similar.
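
    To make the context-switch point concrete, here is a rough sketch in C (the struct layouts and field names are hypothetical, not actual 88000 definitions): a conventional CPU lets the kernel save only architectural state, while an exposed-pipeline design forces it to save pipeline internals as well, tying the OS to one particular pipeline.

        /* Hypothetical illustration only; all fields are invented. */
        struct ctx_conventional {
            unsigned long pc;         /* program counter */
            unsigned long gpr[32];    /* architectural registers */
            unsigned long status;     /* condition/status word */
        };

        struct ctx_exposed_pipeline {
            struct ctx_conventional arch;      /* everything above, plus... */
            unsigned long inflight_result[4];  /* results still moving down the pipe */
            unsigned long scoreboard;          /* registers with pending writes */
        };

        /* Redesign the pipeline and every OS that saves/restores the second
         * struct has to change with it - the maintenance problem described above. */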

  • MAJC missing? (Score:2, Interesting)

    by inkhorn ( 650877 ) on Monday February 16, 2009 @06:44AM (#26870595)

    And what sort of thorough article would this be, missing out Sun Microsystems' MAJC chip from the 1990s?

    Promised to accelerate JAVA instructions, the chip was a multithreading multicore design (can you say Niagara?) but Sun couldn't get it to market fast enough and advances in general purpose CPUs left it for dead.

    Sadly, MAJC only made it into two models of Sun's own-brand graphics cards before it was dropped, though its design principles live on in Niagara and Rock.

  • by BikeHelmet ( 1437881 ) on Monday February 16, 2009 @06:46AM (#26870617) Journal

    Didn't the Power6 have insane FPU performance? Double that of its contenders?

    I think it still beats every CPU out there. (FPU only)

    I remember seeing benchmarks where a 4-core POWER6 beat 8 Xeon cores and 8 Opteron cores by a safe margin.

    But those things are so huge... at the time of release, they were bigger than all GPUs. :P Lots and lots of transistors, and lots of GHz.

  • by TheThiefMaster ( 992038 ) on Monday February 16, 2009 @06:55AM (#26870657)

    A slight correction: Multi-processor systems had existed for a while, but dynamic clock speed scaling was new, and it was THAT that threw out the use of RDTSC as a timer. The problem just got more obvious when multi-socket chips were introduced that could change speed independently.

    With a single chip that could adjust clock speed dynamically (based on load), the problem with using rdtsc wasn't too bad, because most games were (and still are) written to thrash a CPU (core) to 100% load anyway. However, with two CPUs (or cores) in a system, one core could slow down while the other was running full-tilt. When this happened the tick counts would get out of sync. If the program using rdtsc then got scheduled onto the other CPU, it would see time as having jumped forwards or backwards. (A minimal timer sketch follows at the end of this comment.)

    It's worth noting that running different-speed CPUs in a dual-socket board was possible before dynamic frequency scaling, as long as the FSBs matched. I accidentally had a 2GHz and a 600MHz CPU (133MHz FSB IIRC) in a dual socket-A board at the same time once, and aside from horrifically confusing the dedicated server I was running on it, it ran fine. Not only were the rdtsc readings out of sync, causing it to keep thinking it had jumped into the past or future, but they were running at significantly different rates, causing it to keep switching between real-time and slo-mo or super-speed!
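
    A minimal sketch of the two timer choices discussed above, assuming x86 with GCC or Clang: __rdtsc reads the per-core timestamp counter (which, on the CPUs discussed here, also varied with clock speed), while CLOCK_MONOTONIC is an OS-wide clock that doesn't jump when a thread migrates or a core changes frequency.

        #include <stdint.h>
        #include <time.h>
        #include <x86intrin.h>

        /* Per-core cycle counter: cheap, but deltas only make sense if the
         * thread stays on one core and that core never changes frequency. */
        static uint64_t cycles_now(void)
        {
            return __rdtsc();
        }

        /* OS monotonic clock: slower to read, but consistent across cores and
         * frequency changes - what a game timer actually needs. */
        static uint64_t nanos_now(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
        }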

  • by Anonymous Coward on Monday February 16, 2009 @06:56AM (#26870661)

    It's probably not too important in the grand scheme of things, but the Alpha worked fine in workstations, something Itanium doesn't seem to be suitable for. Having workstations on the same architecture is likely one of the reasons Linux/Alpha is doing well, and it might be convenient for the other developers as well.

  • Re:What about ACE? (Score:3, Interesting)

    by anss123 ( 985305 ) on Monday February 16, 2009 @06:56AM (#26870665)

    They could have extended m68k like Intel has done with x86, the result would still have been messy but not as bad.

    Don't be too sure about that. The good old m68k had some instructions that gave CPU designers headaches at a glance :-) On the 68060 they literally dropped a number of commonly used instructions outright (I don't think Intel ever did that), and with the ColdFire descendant they dropped so much that it's not possible to write a "coldfire.library" the way Amiga users did for the 68060.

    By luck or by wisdom, x86 avoids the hardest problems normally associated with CISC.

  • by wisty ( 1335733 ) on Monday February 16, 2009 @07:07AM (#26870699)

    It didn't help that part of the advantage of IA-64 was that it let programmers write their own branch prediction. Which they didn't want to do.
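
    Not IA-64 itself, but a familiar analogue of programmer-supplied branch information: GCC/Clang's __builtin_expect lets you tell the compiler which way a branch usually goes, and it's exactly the kind of annotation most programmers never bother to write.

        #include <stddef.h>

        /* The hint says "buf is almost never NULL", so the compiler can lay
         * out the common path without a taken branch. */
        int checksum(const unsigned char *buf, size_t len)
        {
            if (__builtin_expect(buf == NULL, 0))
                return -1;

            int sum = 0;
            for (size_t i = 0; i < len; i++)
                sum += buf[i];
            return sum;
        }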

  • by OrangeTide ( 124937 ) on Monday February 16, 2009 @07:24AM (#26870761) Homepage Journal

    We probably have as many PowerPC chips in our homes as x86 these days. How many people own two of the following game consoles but only have one PC in their home? GameCube, Wii, Xbox 360, PS3?

    It's true that Apple killed PowerPC on the desktop and it will probably never come back. And ARM and Atom will fight over the mobile and netbook market.

    The article doesn't mention POWER, so I think we can technically assume it only considers PowerPC a failure (which is wrong of course). Even though POWER and PowerPC are almost the same thing, they aren't the same thing. But governments and corporations are still ordering iSeries systems, and IBM is still making plenty of money off them. (although I bet they sell less than 100 of them a year).

  • by anothy ( 83176 ) on Monday February 16, 2009 @08:40AM (#26871095) Homepage
    HP/Intel would have done better, technically, to work on Alpha, but they couldn't sufficiently dominate the market for their tastes in that case. half the point was to have something that they controlled, and Alpha, while technically great, was already too widespread for that.
    which, really, is the most important response to the original parent's point. what was AMD supposed to do, sit around while Intel dictated what the terms of the next stage of the market would be? what gives Intel some inherent right to that sort of dominance? AMD did exactly the right thing, from a business perspective: they saw what they believed to be a strategic mistake that left a market hole open, and produced a product to fill it. turns out they were right.
    turns out it was the right thing to do technically, too. when Itanium hype was at its peak, i remember lots of actual engineers i knew (and even some subset of the tech press) pointing out that EPIC was really just tweaked VLIW, and that had been tried and failed a few times. amd64 has consistently outperformed IA64.

    even the quote in the summary is misleading. yes, IA64 is still plodding along in the high-end server market, but it's even an also-ran there. POWER and amd64, in particular, continue to trounce it, both for your normal "server" market and for the really high end scientific cluster stuff (it's got, what, one spot on the top500 list?). it's a pretty substantial failure, really all around.
  • by gnalre ( 323830 ) on Monday February 16, 2009 @08:43AM (#26871113)

    Intel's i960 was a nice chip for embedded development. One of its nicest features was the large number of individual interrupt vectors, which is really useful when you want to hang a large number of I/O devices off it. Compare that to the x86, where they have to share interrupt vectors. For some reason, however, Intel decided to drop the whole line and move to the ARM architecture instead.

    However, the second one is a what-might-have-been. During the 80's we did a lot of development using INMOS T2 and T8 transputers. They were a joy to use and made parallel programming at the software and hardware level so easy and natural. The next iteration was to be the T9000. It promised a lot: much improved execution speed and faster, more flexible processor interconnects. It looked so good we had even sold our next project based on it. However, when we started getting the first samples there was obviously something wrong. Bits of the chip did not work or would fail. At the end of the day it looked like INMOS just could not deliver. The T9000 never became a reality, but anyone who used transputers knows how good they were, and that, done right and with enough finance, they could have fundamentally changed the computer industry.

  • ... to be precise, by Intel's bankroll and investment in process.

    PowerPC and Alpha were outcompeted by the fundamentally inferior x86 family not because of flaws in their designs, but because Intel spent more on improving their process than anyone else.

    Both the PowerPC and the Pentium turned into furnaces: the Pentium 4 and the G5 were both following the "megahertz myth" into long pipelines to let the clock speed ramp up. Neither got the clock speeds they were hoping for. Both were too hot for mobile processors. In both cases the solution was going to be shorter pipelines, slower but more clock-efficient cores, and faster busses. The Freescale e700 was torpedoed when Apple went with Intel's Core Duo... because Intel had the resources to get their respin of the PIII out quicker than Freescale could get their respin of the G4 online.

    So now we're still using hacks upon hacks on the truly horrible x86 architecture.

    Well, it could have been worse. It could have been SPARC.

  • Re:FTA: (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Monday February 16, 2009 @09:45AM (#26871459) Journal

    In the PC market, that's right. In the CPU market as a whole, I believe PowerPC is still outselling x86. Every new console contains a PowerPC chip. Quite a few handheld devices contain PowerPC chips. A lot of modern cars contain 20-50 PowerPC chips (take a look at a BMW from the last few years - it will have at least 40 PowerPCs).

    PowerPC isn't the market leader though. It still lags behind ARM by a fair way. x86 is a niche player, and that niche is gradually shrinking.

  • Re:That's it? (Score:4, Interesting)

    by TheRaven64 ( 641858 ) on Monday February 16, 2009 @09:54AM (#26871535) Journal

    The iAPX was a beautiful design, and so typical of Intel. That, the i860, and the Itanium all have the feel of chips designed by theorists. Gorgeous on paper, horrendous on silicon (although the i860 did quite well as a GPU. High-end NeXT stations used them to run the Display PostScript engine).

    A former Intel Chief Architect told me a story a couple of years ago about a chip that Intel was making when he went for his interview. Apparently they'd heard about object-orientation and thought it would take over the world, so they started designing a chip for pure OO languages. This chip supported boxed integer values in hardware so everything really was an object. The problem came when they started to work on the compiler. Most operations required shifting pointer values right by four. Unfortunately, no one had thought to make a fast way of producing constant number objects. You needed a 200-cycle sequence to do this, which made the whole system so slow it was unusable for code written in high-level languages.
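
    A hedged sketch of the boxing scheme that story describes (the tag width and helper names are guesses for illustration, not the actual Intel design): every integer is carried as a tagged word, so even a literal constant has to be shifted and tagged before use, and with no fast way to do that in hardware, every constant becomes expensive.

        #include <assert.h>
        #include <stdint.h>

        #define TAG_BITS 4    /* matches the "shift right by four" in the story */
        #define TAG_INT  0x1u

        static uintptr_t box_int(intptr_t v)    { return ((uintptr_t)v << TAG_BITS) | TAG_INT; }
        static intptr_t  unbox_int(uintptr_t o) { return (intptr_t)o >> TAG_BITS; }

        int main(void)
        {
            uintptr_t forty_two = box_int(42);   /* "producing a constant number object" */
            assert(unbox_int(forty_two) == 42);
            return 0;
        }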

  • by Vellmont ( 569020 ) on Monday February 16, 2009 @09:55AM (#26871541) Homepage


    I worked on Itanium/Merced. Keep in mind I was mid-level (not high enough to see the good political fights first hand, only getting the after effects).

    I have to believe that there were forces inside Intel that wanted Itanium to fail. It's hard for me to believe that if the project was this important they wouldn't have pulled some Top Guy that Gets Things Done on the project.

    After the chip had slipped 2+years, no one wanted to work on this thing anymore.

    Back in 2000 or 2001 I went to JavaOne and attended a talk by some Intel engineers about how cool Itanium was going to be. They had to be the least enthused people about any project I'd ever seen. The features sounded pretty cool on paper, but you'd talk to them and you could just tell they thought the thing was a total piece of garbage. They didn't say it outright, of course, but the sound of their voices and the expressions on their faces told a very different story.

  • by TheRaven64 ( 641858 ) on Monday February 16, 2009 @10:03AM (#26871601) Journal

    The answer is emulation and a much better architecture. Emulation can run applications at 50% of the host speed in most cases now. For tight, mathematically-intensive loops it's more than this; for things containing a lot of branches it tends to be lower.

    When I replaced my 1.5GHz PowerPC Mac with a 2.16GHz Core 2 Duo, I didn't notice the speed difference on legacy code. I forgot to replace VLC with an Intel build for a while (they do universal binaries now, but they didn't then), and even the PowerPC version in the emulator could play H.264, although the CPU load spiked to around 80% on both cores. Switching to the native version dropped this down to around 20%.

    When people talk about backwards compatibility, what they really want is two things:

    • The ability to run new software fast.
    • The ability to run old software.

    If you can only run DOS software at the speed equivalent to a 200MHz Pentium, do you think anyone will care? It was most likely written for a 16MHz 386, so you're still running it fast enough. I can play all of the old DOS games I own, the ones that used to make my machine struggle when they were new, in DOSBox on a PowerPC machine, and they're fast enough.

    Backwards compatibility isn't nearly as much of a problem as persuading developers to support your architecture for new programs. Any new chip can emulate a three to six year old chip from another architecture at a reasonable speed.

  • Your comment is stupid, but I'm going to reply to it anyway. I have no idea what CSUS and UCD are running, but Yuba was offered no choice about their upgrade path - it was either completely change student records applications or buy the 8-way iTanic. Which yes, is going mostly underutilized given that the 4-way Alpha was overkill. And given that I personally set up the HP-UX to Windows ipsec integration for the system (the HP docs on IPSEC are backwards BTW, their examples are exactly backwards and do not work) and in fact I did the prior, ssh-tunnel based encryption that they were using on the Alpha server, I am entirely sure of what they are running.

    I have no idea what other JCs are running, but Yuba is running Colleague under HP-UX 11i on an 8-way iTanic. And thanks to me, student information is actually encrypted between the server and the client :P (Actually, thanks to the district's money going into my pocket... it's not like I would have done it for free.) HP-UX is the great Satan of the Unix world. I'd walk a mile for AIX after fucking around with HP-SUX for days.

  • by itsdapead ( 734413 ) on Monday February 16, 2009 @10:41AM (#26871993)

    The problem is software being delivered as binaries.

    I think that's chicken-and-egg: if you have a single, dominant, binary-compatible architecture then the most efficient way to distribute commercial software is as pre-compiled binaries.

    Linux runs on so many architectures for the simple reason that many Linux developers take pride in their work and actually care about interoperability: all that cross-platform support doesn't happen magically just because it's written in C! A substantial app will be riddled with "#ifdef IA32/#ifdef MIPS"-type conditionals, and if you compile from a tarball, someone, somewhere has spent a lot of time preparing that configure script. (A tiny example follows at the end of this comment.)

    Given the existence of a ubiquitous single platform with 90% of the market, there's no short-term commercial case for that sort of attention to detail (...maybe posterity will show that there's a long-term case, but since when did that amount to a hill of beans?)

    Interestingly, though, most Linux distros have gone for binary packages as their main form of distribution...

    My bet is, long term, "bare metal binary" software will naturally disappear in favour of scripting languages, JIT compilation and/or virtual machine bytecode. Compatibility will be determined by the API, not the hardware and, by definition, any software that still needs "raw hardware" performance will need to be hand-tailored for the hardware anyway.
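
    For the "#ifdef IA32/#ifdef MIPS" point above, a tiny example of the per-architecture conditionals that end up sprinkled through a portable C codebase (the macros are the usual GCC/Clang predefines; the function itself is made up for illustration):

        #include <stdio.h>

        static const char *arch_name(void)
        {
        #if defined(__i386__)
            return "IA32";
        #elif defined(__x86_64__)
            return "x86-64";
        #elif defined(__mips__)
            return "MIPS";
        #elif defined(__powerpc__) || defined(__ppc__)
            return "PowerPC";
        #else
            return "unknown";
        #endif
        }

        int main(void)
        {
            printf("built for: %s\n", arch_name());
            return 0;
        }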

  • by dpilot ( 134227 ) on Monday February 16, 2009 @11:30AM (#26872613) Homepage Journal

    Outside perception: it started even earlier than you say, but it's really rooted in your reason #1.

    From what I could see IA64 wasn't really started for reasons of pushing technical performance, the problem being solved was the existence of clone designs. All of the IA64 IP was held by a third company, and then licensed back to Intel and HP. That way, none of the IP would be covered by existing Intel or HP cross-licensing agreements. Then the architecture had to be sufficiently different that it would be fully covered by that IP, and none of the essentials covered by anything else.

    So the initial design point was driven by legal and marketing concerns, and technical considerations were a distant third place, if that high.

    That's the impression from one well versed in chip design who watched from outside.

  • Itanium II story (Score:2, Interesting)

    by Anonymous Coward on Monday February 16, 2009 @11:32AM (#26872629)

    To complete the history, this is the Itanium II story.
    Itanium (Merced) was 80%/20% Intel/HP driven. Itanium II (McKinley) was the opposite, 80% HP driven. The HP guys considered Merced garbage and did not leverage much from it. But it was probably already too late for Itanium II. Just like in the early days of Microsoft and Apple, the market had already spoken and placed a huge premium on backward compatibility. Both Merced and McKinley performed about 6 years behind the performance curves when running x86 code. Intel's bet was that AMD was not going to be taken seriously with the 64-bit Athlon, simply because AMD was too small to create a trend by itself, and customers would be forced to go to Itanium. Of course Intel was wrong, but it did not hurt Intel too much. Intel always had a backup plan, codenamed Yamhill. Intel was only a few months behind AMD if x86-64 was to take off. McKinley also suffered from political turmoil at HP:
    1) HP placed a high value on seniority and balked at hiring 1-3 year job hoppers. Even industry-recognized people with PhDs and 20+ years of experience at other companies were usually told they would have to start as MTS but in a year or two would get promoted to TC. And the few people who took this bait usually ended up getting screwed: at their yearly review they would be told maybe next year, they simply had not been around long enough. This meant HP had a hard time attracting top-of-the-line talent since, while it would match salary, it balked at matching titles and power.
    2) The Carly Fiorina factor was a huge blow to HP employees' expectations. She did not realise how many engineers were at HP simply for the work/life balance and the promise of no layoffs in recessions. HP paid about 15% less for engineering talent because of the HP Way. When Fiorina basically destroyed the HP Way, a lot of 10+ year HP'ers started to look around the industry, realised they could get paid a lot more by simply jumping ship, and so they did. She also made it very clear she wanted to exit the semiconductor industry, and the Fort Collins, CO Itanium II team bore the brunt of her complete indifference to what they were working on: no raises, inadequate funding, etc. They were basically being set up to be sold to Intel, which did eventually occur.
    3) The Intel buyout was also a sore point. The HP employees were given no way out: either take it or unemployment. It was pretty obvious at this point that Itanium was a failure.
    4) AMD moved into Fort Collins, CO about a year later and stole the best of the Itanium II team. Even the Captain of Itanium II jumped ship!
    5) What remains now is just a shell of the former team. They can't hire, since Intel isn't exactly enthusiastic about Itanium's future, and what good engineer wants Itanium on their resume at this point?

  • Re:FTA: (Score:3, Interesting)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday February 16, 2009 @11:46AM (#26872781) Homepage Journal

    There is a bit of confusion in some posts here. Motorola made very good PPC chips.

    Very good for embedded purposes. The PPC601, the only POWER-compatible PPC chip, was outdated when it hit the market. The 603 was maybe the fastest, certainly competitive, for all of about a month before Intel jumped ahead again. The G5, same (plus the most expensive Macs EVAR).

    As a desktop chip, PowerPC is a failure on all levels. The first ones weren't very fast and cost way too much. The later ones were cheaper, but were even slower compared to the competition. The latest ones were fast again, but too expensive again.

    PPC followed its own course and never had to wait for Power first.

    You are just. plain. wrong. about this. The G5 was derived from POWER4 [wikipedia.org]. Wikipedia is not perfect; they refer to PowerPC processors as POWER architecture processors, in spite of the fact that they do not implement the full POWER instruction sets. Since the ISA is the interface, this is a misnomer. But in all other regards they have the story right: all PowerPC processors are stripped-down versions of a POWER processor.

    PowerPC is an epic failure on the desktop and is proceeding to fail in the embedded market (it is losing market share even as I type this.) Its primary purpose has been to funnel money to IBM via Apple. Its primary effect on Apple computer was to hold them back by years. It's x86 that has given Apple a new lease on life. Trying to stick with PowerPC when it became clear that x86 was going to beat the living shit out of it was one of Apple's biggest mistakes, but you can add it to a long list of other big ones.

  • by level_headed_midwest ( 888889 ) on Monday February 16, 2009 @12:18PM (#26873231)

    I am surprised that they knocked the AMD Puma in the article while leaving the following piles of crap unmentioned:

    1. The original Covington Celerons with no L2 cache
    2. Original Pentiums with the FDIV bug
    3. The Pentium III Coppermine 1.13 GHz that was infamously unstable
    4. Socket 423 Pentium 4
    5. The Pentium 4 Prescott 3.6 and 3.8 that overheated and throttled at stock speeds on the stock heatsink

    All of those chips were bigger duds or had bigger errors than even the TLB error in the BA/B2 Barcelona Opterons they mentioned in the "Part 1" article.

  • by TheLink ( 130905 ) on Tuesday February 17, 2009 @08:51AM (#26885043) Journal

    OK. Seems my assumption that the CFP2006 benchmarks are single threaded was wrong (it can be multithreaded and use multiple cores if "auto parallel"=yes).

    Even so, the POWER6 still doesn't seem that much faster - certainly not 100% faster.

    Back in 2007, the 2400MHz Intel Xeon 3060 still got a score of about 15 with only 1 core enabled. The 4.7GHz POWER6 score of 18.7 is not 50% or 100% more/faster than 15.

    See:
    http://www.spec.org/cpu2006/results/res2007q2/cpu2006-20070329-00693.html [spec.org]
    http://www.spec.org/cpu2006/results/res2007q2/cpu2006-20070611-01218.html [spec.org]

    You have to understand it's a bit hard to do apples to apples comparisons because:

    1) Though IBM did post a 5GHz POWER6 score of 20.1 last year (2008), I don't see "cores=1" submissions for Intel chips last year.
    2) There are no cores > 1 scores for POWER6.

    As it is, I'm inclined to think that the x86 has caught up with the POWER6's CFP2006 performance, if not surpassed it already.

    My reasoning is the POWER6 has not got much faster since 2007 (4.7GHz -> 5GHz, with no change in architecture).

    Whereas the 3733MHz Intel Core i7-965 Extreme Edition is definitely a lot faster than the 2400MHz Intel Xeon 3060 - (which got a score of 15 with one core).

    The i7 is a new architecture that is maybe 10-15% faster per clock, and 3733MHz is a fair bit faster than 2400MHz (BTW the performance/watt is very competitive too).

    So it's near certain that a 3.733GHz i7 would beat a 5GHz POWER6 in single core performance in both CFP2006 and CINT2006.

    It's impressive how fast the x86 can go :p.

    I don't think it's going to be easy for the POWER/SPARC/Itanium teams to beat the x86 in performance, or even performance/watt (for high performance computing).
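
    Quick arithmetic on the scores quoted above (no new data, just the ratios):

        #include <stdio.h>

        int main(void)
        {
            const double xeon_3060_1core = 15.0;  /* 2.4GHz Xeon 3060, 1 core, 2007 result */
            const double power6_4_7ghz   = 18.7;  /* 4.7GHz POWER6 */
            const double power6_5_0ghz   = 20.1;  /* 5GHz POWER6, 2008 result */

            printf("POWER6 4.7GHz vs Xeon 3060: +%.0f%%\n",
                   (power6_4_7ghz / xeon_3060_1core - 1.0) * 100.0);  /* about +25% */
            printf("POWER6 5.0GHz vs Xeon 3060: +%.0f%%\n",
                   (power6_5_0ghz / xeon_3060_1core - 1.0) * 100.0);  /* about +34% */
            return 0;
        }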
