Microsoft Announces End of the Line For Itanium Support 227

WrongSizeGlass writes "Ars Technica is reporting that Microsoft has announced on its Windows Server blog the end of its support for Itanium. 'Windows Server 2008 R2, SQL Server 2008 R2, and Visual Studio 2010 will represent the last versions to support Intel's Itanium architecture.' Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?"
This discussion has been archived. No new comments can be posted.


  • by John Hasler ( 414242 ) on Monday April 05, 2010 @04:39PM (#31741468) Homepage

    How could anyone possibly have any use for servers that don't run Windows?

    • by C0vardeAn0nim0 ( 232451 ) on Monday April 05, 2010 @04:51PM (#31741682) Journal

      yeah, servers with windows are like women playing soccer on high heels. nice to look at, until one of them falls and breaks an ankle.

    • by the linux geek ( 799780 ) on Monday April 05, 2010 @04:54PM (#31741712)
      Exactly. Approximately 85% of Itanium servers are running HP-UX or OpenVMS. Windows and Linux roughly split the remaining 15%. Itanium faces challenges, but they're from POWER and SPARC, not from Microsoft killing Windows.
      • Re: (Score:3, Insightful)

        Indeed. The ultimate fate of Itanium is to wind up as HP's upgrade to PA-RISC. You have to wonder how much further interest Intel is going to have in its development. I suspect it will end up getting tossed back into HP's lap.

        • by Anpheus ( 908711 )

          HP has fabs and/or competent CPU designers?

          I doubt Intel really cares who they sell it to, as long as someone keeps buying. When HP moves on from Itanium, it's done for.

          • by jmauro ( 32523 ) on Monday April 05, 2010 @07:15PM (#31743316)

            Competent CPU designers, yes. It's the only reason Itanium has lasted this long. Intel's early solo designs were less than successful. HP's designers came in, redid the whole thing, and lo and behold, it worked. HP really needs Intel to fab the chip, not design it.

            • Re: (Score:2, Interesting)

              I don't know, Itanium seems pretty impressive. This presentation [infoq.com] appeared on Slashdot a while ago and does a good job of giving a face to the name Itanium instead of just reading "Failed processor line that was really expensive."

              The huge amount of instruction-level parallelism (dependent on a very good compiler) really seems like the best way to do things. It's too bad it doesn't work out in practice.

              • by Jurily ( 900488 )

                (dependent on a very good compiler)

                Did anyone manage to write one of those? Last I heard it was extremely hard to write even a decent one for 128 registers.

              • The huge amount of instruction-level parallelism (dependent on a very good compiler) really seems like the best way to do things. It's too bad it doesn't work out in practice.

                The problem is that Intel didn't come up with this concept (ILP through compiler-scheduled instructions), nor were they the first to try it.

                VLIW designs have *always* looked great on paper and *always* sucked in practice. Intel did make a bunch of improvements to VLIW with Itanium, but they should have known that you can't just dump th

          • Funny thing, I was casually talking to some folks about 64-bit platforms. We didn't bother to look online, but we were all pretty sure Itanium was already dead. :) I'm sure this isn't a nail in the coffin, just another piece of software that's no longer supporting it. As most have said, most people are using better OSes with them anyway.

            I know WinXP x86_64 was like running WinNT for DEC Alpha. It was there, but it was terrible and didn't do everything you needed. Oh god

        • or upgrade from Alpha (for VMS shops) or upgrade from MIPS for NonStop shops

      • Red Hat will not support Itanium in RHEL 6. So that 85% will be 100% in the future.

        • yes, but at least Red Hat will support Enterprise Linux 5 on Itanium 2 until 2014. I work for an HP VAR and I've *never* seen any HP Integrity run any Linux but Red Hat, though there are a few other distros out there.

          • by Macka ( 9388 )

            In reality that's only going to be of use to customers who are already running Red Hat on Itanium. No one making a decision today is going to commit to a solution that only has a 4 year shelf life. If they want Red Hat today and they're in that enterprise space they'll go Nehalem-EX for the best combination of RAS + performance + price.

    • by A12m0v ( 1315511 ) on Monday April 05, 2010 @05:31PM (#31742212) Journal

      No one can stop the x86 train, not even Intel.

      • by Anonymous Coward on Monday April 05, 2010 @06:25PM (#31742878)

        No one can stop the x86 train, not even Intel.

        Maybe not. But certainly some people are trying to strong-ARM the situation.

  • Were many Itanium users running Windows? My impression was that most Itanium users were running some sort of *nix. I don't think it's a huge deal for Itanium.

    I also don't see Itanium going anywhere any time soon. As much as people like to talk about its demise, its numbers do grow every year. Or at least they were growing up until a couple years ago; I assume they're still growing. They're not growing very quickly, but they're still going.

    It's a shame. It's a remarkably beautifully designed architecture,

    • Re: (Score:3, Interesting)

      by _merlin ( 160982 )

      Were many Itanium users running Windows? My impression was that most Itanium users were running some sort of *nix. I don't think it's a huge deal for Itanium.

      The only Itanium servers I encounter regularly run OpenVMS in order to host the popular OM stock exchange platform. OM-based stock exchanges (ASX, HKFE, OMX, SGX, IDEM) all seem to be a hell of a lot more stable than the .NET-based Tradelect/Infolect system used on LSE for the last few years. I don't know why anyone would actually want to run Window

    • I've racked a bunch of Itanium servers running Windows Server 2003 and supporting SAP installs.

      It is not unheard of. And I suspect these will migrate over to a much more desirable platform - in fact, I expect they will decommission these bad boys and I will be in line to scarf up some interesting hardware cheap.

      I will not have to try and flim-flam them into a hardware swap. It's the only way they can actually do this. And I don't sell them any hardware. I'm just one of the few around here that seem to b

    • Re:Probably not (Score:5, Interesting)

      by lgw ( 121541 ) on Monday April 05, 2010 @06:15PM (#31742752) Journal

      Microsoft has had a strict policy since the dawn of Windows that Windows be built for at least two processor architectures at all times. They really worried about i386-isms creeping into the kernel. It pretty much doesn't matter which two you choose; as long as it's more than one (and they're somewhat different), it keeps the kernel devs honest. I wonder what they're doing now: perhaps they just decided that i386 and "amd64" are different enough to serve the purpose.

      • Re:Probably not (Score:4, Interesting)

        by bhtooefr ( 649901 ) <bhtooefr@@@bhtooefr...org> on Monday April 05, 2010 @06:23PM (#31742848) Homepage Journal

        The other thing is, keep a full build internally.

        The rumor mill says that Microsoft has current versions of Windows built for ARM internally... sorta like how Apple kept x86 builds of Mac OS X internally the whole time.

      • ARM would be the most logical choice, if they decided that i386 and amd64 weren't enough.
      • i'm not an expert on this, but according to this [hoffmanlabs.com], windows so far has been built only for little-endian architectures, or chips that can change endianness at boot or on-the-fly. this limits MS's choice of target architectures somewhat.

        i'd like to see if they're capable of building a version for big-endian chips like SPARC or latest PPCs.

      • by dokebi ( 624663 )
        If what you are saying is true, I would speculate that they would be porting it to ARM.

        That should be interesting.
      • Re:Probably not (Score:4, Informative)

        by cbhacking ( 979169 ) <been_out_cruising-slashdot@@@yahoo...com> on Monday April 05, 2010 @10:58PM (#31744658) Homepage Journal

        Just to keep this clear: you're talking about NT (which wasn't even called "Windows NT" initially, internally). NT is almost entirely written in C, and the few architecture-specific parts are abstracted from the core codebase, typically as assembly modules maintained for multiple architectures, with the build automatically selecting the appropriate one. There's some use of inline assembly or x86 specifics, but it's all behind #if blocks, with equivalent checks for other CPU architectures. Overall, NT has been ported to at least 5 architectures that I know of - x86 (32-bit), x64, ia64 (Itanium), PPC, and DEC Alpha. If MS wanted to, it would be possible to port it to ARM, MIPS, SPARC, or almost any other reasonably modern architecture of at least 32 bits.

        By comparison, Win9x has a ton of assembly code that enabled it to run fast even on low-end machines, keeping the system requirements down (and making it attractive to home users back in the days before consumer hardware caught up with the demands of NT). Of course, use of assembly like this has downsides - 9x was badly unstable, and completely non-portable. It only ever ran on x86, and I'm not even sure it made much use of the features found in any version after the i386.

    • "I also don't see Itanium going anywhere any time soon. "

      I should hope not considering a new Itanium processor was just released in February. [wikipedia.org]

      I'm a bit surprised to hear M$ is dropping support for a 2-month-old processor.
  • Oh Noes! (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Monday April 05, 2010 @04:42PM (#31741518) Journal
    It would appear that the good ship Itanic has struck an MS Iceberg 2010 Datacenter Edition R2!

    Seriously, though: is this an admission by Microsoft that HP-UX is (somehow) hanging on at the high end, despite HP's every attempt to mismanage it, or (more likely) is this a consequence of the fact that, at this point, there is nothing Itanium can do that Intel couldn't do better and cheaper just by bolting some extra cache and a few extra Itanium features onto Xeons?
    • by vadim_t ( 324782 )

      is this an admission by Microsoft that HP-UX is (somehow) hanging on at the high end, despite HP's every attempt to mismanage it

      Doubt it. I don't think Microsoft would give up if there was competition to drive out. They'd do like with the Xbox and keep throwing money at it until it worked. I take it this means that even if they had the marketshare, there would be no (or not enough) profit in it.

      • by Matheus ( 586080 )

        That should be exactly right... their portion of that 15% market share was probably not justifying the resources needed to support the additional architecture.

        I'm guessing they get to lay off some really expensive Itanium knowledge base from their core dev teams as well as all the other baggage necessary for release/support of the ports. Those guys are really hoping there's room for hire on the HP-UX team now :)

      • Doubt it. I don't think Microsoft would give up if there was competition to drive out.

        The difficulty of driving out the competition probably matters also. I wonder how many of the non-Windows Itanium systems are running application software for which there is no drop-in replacement available for Windows? So MS would have to convince the owners that not only is Windows a better/cheaper/whatever OS, but enough so that it's worth replacing application software as well.

    • by dave562 ( 969951 )

      Intel couldn't do better and cheaper just by bolting some extra cache and a few extra Itanium features onto Xeons

      That is exactly what Intel is doing. They are rolling some core Itanium features into the next generation Xeon processors. There was an article on it in the Wall Street Journal last week. It came across as a marketing piece from Intel where they were attempting to reassure Itanium owners that they weren't going to be abandoned.

      • won't do any good for the Itanium 2 owners if the Xeons can't run the IA-64 instruction set. The features Intel just brought to Xeon from Itanium include MCA (machine check architecture - recovery from failures), security, and virtual machine migration. But not binary compatibility. But maybe HP will port VMS, NonStop, and HP/UX to the new, improved bullet-proof x86-64 (only with the appropriate supporting chipsets, of course)

  • by overshoot ( 39700 ) on Monday April 05, 2010 @04:44PM (#31741542)
    With Alpha finally gone for good, its job is done and it can now sail off into the West.
  • Intel no longer supports Itanium in some of their own projects on Windows. For example, Intel Threading Building Blocks has x86 and x86-64 support, but lacks Itanium support on Windows. It does, however, support Itanium on Linux.
  • Itanium has not been worth it in terms of price/performance for a while; this just confirms the inevitable. However, people will still be running this hardware for some time, and I expect HP-UX and Linux to continue to support it for the foreseeable future. Hell, Debian supports the Alpha, and the m68k was only removed from official support in the previous revision of Debian (etch), and then only because it took too long to compile and would slow down updates of the archives.

    -molo

    • Re: (Score:3, Informative)

      by rubycodez ( 864176 )

      Itanium has not been worth it in terms of price/performance for a while
       
        Actually, in many categories it is. It depends on the work to be done. For example, HP Integrity Superdome with HP/UX leads in price/performance running TPC-H on a 10 or 30TB Oracle database. Same on numerical benchmarks that are heavily SMP.

      I don't like the Itanium, but on certain database and numerical workloads it still kicks everyone else's butt.

      • by Macka ( 9388 ) on Monday April 05, 2010 @07:37PM (#31743528)

        Oh come on. It's really disingenuous to be quoting that kind of shit. Have you ever taken a really close look at the kind of hardware the vendors use to get these benchmark numbers? Database app benchmarks are almost always very sensitive to I/O, and these kinds of numbers are usually generated by systems that have their I/O card slots max'd out, with several hundred (if not thousands) of small high speed disks behind them. The cost of these solutions in real life would be crippling. Vendor quoted benchmarks should usually be taken with a generous pinch of salt.

  • Nah, the real "end" would be if HP finally bows to the inevitable and ports HPUX to x64. Don't hold your breath though....

  • DEC Alpha? (Score:4, Insightful)

    by Jeff- ( 95113 ) on Monday April 05, 2010 @04:48PM (#31741614) Homepage

    I am incredibly offended that you would compare this bloated, brute-force, abomination of a chip to the incredibly well designed, elegant, and efficient Alpha (may it rest in peace).

    • Might I ask what about Itanium makes it bloated, brute-force or an abomination? Its circuitry is not hand-designed like the Alpha's was, but its design is really beautiful, a testament to the later Berkeley RISC philosophy. It's everything SPARC should have been, really.
  • The DEC Alpha was a much better chip than the Intel Itanium; and not just in the way that Johnny Mathis is way better than Diet Pepsi [wikipedia.org].

    The DEC Alpha was a brilliant RISC processor that could outrun a closet full of x86 chips of the same era (or even the era after). The DEC Alpha was sold by a hardware company that distributed their own Unix-derived OS for it that had the proper compilers ready to go as soon as the system was booted. The Itanium, on the other hand, was an odd attempt by Intel to make a 6
    • by _merlin ( 160982 ) on Monday April 05, 2010 @05:06PM (#31741872) Homepage Journal

      Having used Alpha workstations, I beg to differ. The Alpha was a design that managed to do the absolute minimum per clock cycle in each pipeline stage. This allowed very high clock speeds, and high theoretical peak performance with very deep pipelines. In reality, the deep pipelines' branch misprediction penalty was so bad you never got close to the theoretical peak performance, and the high clock speeds made them hot and unreliable - poor reliability was the main driving factor for switching to SPARC. Everyone should've been able to see the problems with the Pentium 4 well in advance - it was basically an Alpha with an x86 recompiler frontend, so it suffered from all the same problems.

      DEC Tru64 had a lot going for it - lots of good ideas in there. When DEC and HP merged, they should have taken what was worthwhile from HP-UX and integrated it into Tru64, then ported the result to HP-PA. That would've produced a system that people wanted. (HP-UX is horrible - nothing behaves quite how it should. I'd be surprised if the thing really passed POSIX conformance without some money under the table.)

      • by damn_registrars ( 1103043 ) <damn.registrars@gmail.com> on Monday April 05, 2010 @05:29PM (#31742196) Homepage Journal

        The Alpha was a design that managed to do the absolute minimum per clock cycle in each pipeline stage

        That is pretty much what RISC was about, in a nutshell.

        and the high clock speeds made them hot and unreliable

        I don't know what system you were running. I was using an AlphaServer ES40: four 667MHz Alphas with 8GB RAM. It was one of the most reliable systems I've ever used for HPC. There was a rack of Intel x86 systems of the same era right next to it - something like 32 Intel Xeon CPUs - and the Alpha made the rack look silly and wasteful. On BLAST, the Alpha ran circles around the Intel rack, and it became even more embarrassing for the Intel rack when the data sets got larger. That was only one example, though; on pretty much anything we could get source code for, the Alpha ran better. And that was going up against 1.8GHz Xeons.

        By comparison, the Itanium wants to run native 32-bit code (though it certainly doesn't do it well). The compilers aren't easy to set up (even on Linux) and it's hard to find a Linux distro that runs on one. I have an SGI cluster with Itanium 2 CPUs in it; I know the care and feeding of this system well.

        • by ZosX ( 517789 )

          You might be able to help me. I know this is totally off topic, but I have this old peugeot sound data systems alpha. When I turn it on I get only a blue screen. The case has a lock and I haven't been able to break the lock yet. I wanted to drill it out, but didn't want to get metal shavings all inside. I'm thinking that I have to though. Do you know anything about a blue screen on alphas? Couldn't tell you what OS it was running or anything. Just thought it would be a lot of fun to get a non x86 box runnin

          • You might be able to help me.

            I'm really more of an experienced Alpha user than an experienced Alpha engineer or support guru. I knew how to make it kick ass on the applications that were important to me, and that was about it.

            I have this old peugeot sound data systems alpha

            I've heard of a number of third-party vendors that sold Alpha systems, although I've never worked with one of those systems myself.

            The case has a lock and I haven't been able to break the lock yet. I wanted to drill it out, but didn't want to get metal shavings all inside. I'm thinking that I have to though.

            As much as the architecture of the Alpha is unique amongst most systems you'll see today, it isn't magic. If you decide that you need to drill it out, I doubt that a new level of h

          • by Panaflex ( 13191 )

            That's the ARC console - it's probably freezing while trying to netboot or init a missing piece of hardware. Hit ESC and you should get to the console.

            Check here: http://www.compaq.com/AlphaServer/technology/literature/srmcons.pdf [compaq.com]

        • Re: (Score:3, Insightful)

          I don't know what system you were running. I was using an AlphaServer ES40; four 667 Alphas with 8gb RAM. It was one of the most reliable systems I've ever used for HPC. There was a rack of intel x86 systems of the same era right next to it - something like 32 Intel Xeon CPUs - and the Alpha made the rack look silly and wasteful. On BLAST, the Alpha ran circles around the intel rack, and it became even more embarrasing for the intel rack when the data sets got larger. That was only one example, though; we f

      • by Bert64 ( 520050 ) <bert@NOSpaM.slashdot.firenzee.com> on Monday April 05, 2010 @05:32PM (#31742226) Homepage

        The alpha didn't even attempt to do out of order execution until the EV6 chip...
        The EV4 and EV5 chips were strict in-order processors.

        The difference with the P4 is that the P4 was expected to run code originally optimized for a 386, whereas the original Alpha had code that specifically targeted it... In-order execution works very well when you can specifically target a particular processor (see game consoles), since you can tune the code to the available resources of the processor... The compiler for the Alpha was also pretty good; it could beat gcc hands down at floating-point code, for instance.

        In terms of Alphas getting hot, the only workstation I remember having heat problems was the rather poorly designed Multia (which used a cut-down Alpha chip anyway)... other Alpha systems I used were rock-solid reliable and I still have several in the loft somewhere - one of which ran for 6 months after the fans failed before I noticed and shut it down...

        Clock for clock the Alpha was pretty quick too, unlike the P4, which was considerably slower than a P3 at the same clock...
        http://forum.pcvsconsole.com/viewthread.php?tid=11606 [pcvsconsole.com] shows alphas getting specfp2000 scores higher than x86 chips running at 3x the clock rate.

        A lot of people, myself included, think itanium should never have existed, and that the development effort should have been put into alpha instead - an architecture that already had a good software and user base...

        • by epine ( 68316 ) on Monday April 05, 2010 @07:21PM (#31743368)

          If the 1.8GHz Xeon was based on the Netburst architecture, first you have to multiply by 2/3rds to correct for diet-Pepsi clock cycles; then, if your code base is scientific, you have to divide by two for the known x86 floating-point catastrophe; and finally, if your scientific application is especially friendly to a large register set, there's another factor of 0.75. So on that particular code base, a 1.8GHz Netburst is about equal to a 400MHz Alpha (I only ever worked with the in-order edition). Netburst usually had some stinking-fast benchmarks to show for itself if it happened to have exactly the right SSE instructions for the task at hand. And it gained a lot of relative performance on pure integer code. BTW, were you running the Xeon in 64-bit mode? That could be another factor of 0.75.

          A lot of people, myself included, think itanium should never have existed, and that the development effort should have been put into alpha instead - an architecture that already had a good software and user base

          Yeah, you and a lot of clear headed people with insight into the visible half of the problem space. Not good enough.

          Alpha was a nice little miracle, but it fundamentally cheated in its fabrication tactics. This is a long time ago, but as I recall, in order to get single-cycle 64-bit carry propagation, they added extra metal layers for look-ahead carry generation. For a chip intended for Intel-scale mass production, this kind of thing probably makes an Intel engineer's eyebrows pop off. That chip was tuned like a Ferrari. I'm sure the Alpha was designed to scale, but almost certainly not at a cost of production that generates the fat margins Intel is accustomed to.

          Around the time Itanium was first announced, I spent a week poking into transport triggered architectures. There was some kind of TTA tool download, from HP I think, and I poked my nose into a lot of the rationale and sundry documentation.

          TTA actually contains a lot of valid insight into the design problem. The problem is that Intel muffed the translation, through a combination of monopolistic sugar cravings, management hubris, and cart-before-the-horse engineering objectives. I'm sure many of the Intel engineers would like to take a mulligan on some of the original design decisions. There might have been a decent chip in there somewhere trying to get out. Itanium was never that chip.

          I pretty much threw in the towel on Itanium becoming the next standard platform for scientific computing when I discovered that the instruction bundles contained three *independent* instructions. They went the wrong way right there. They could have defined the bundles to contain up to seven highly dependent instructions, something like complex number multiplication: four operands, seven operations, two results. It should have been possible to encode that in a single bundle. Either the whole bundle retires, or not at all.

          Dependencies *internal* to a bundle are easy to make explicit with a clever instruction encoding format. You wouldn't need a lot of circuitry to track these local dependencies. What you gain is that you only have to perform four reads from the register file and two writes to the register file to complete up to, in this example, seven ALU operations. Ports on the register file is one of the primary bottlenecks in TTA theory.

          What you lose is that these bundles have a very long flight time before final retirement. Using P6 latencies, it's about ten clock cycles for the complex multiplication mul/add tree in this example (not assuming a fused mul-add). This means you have to keep a lot of the complexity of the P6 on the ROB (re-order buffer) side. But that also functions as a shock absorber for non-determinism, and takes a huge burden off the shoulders of the compiler writers. This was apparent to me long before the dust settled on the failure of the Itanium compiler initiative.

          In my intuitively preferred approach, instructions within bundles would be tightly bound and s

          • by epine ( 68316 ) on Monday April 05, 2010 @08:27PM (#31743912)

            This is a response to my own post. Sometimes after uncorking a minor screed, I note to myself "that was more obnoxious than normal" and then my subconscious goes "ding!" and I get what's grinding me.

            The secret of x86 longevity is to have been so coyote-ugly that it turns into pablum the brain of any x86-hater who tries to make a chip to rid the planet of the scourge once and for all.

            For three decades right-thinking chip designers have *wanted* x86 to prove as bad in reality as ugliness ought to dictate.

            Instead of having a balanced perspective on beauty, the x86-haters succumb to the rule of thumb that the less like x86, the better. And almost always, that led to a mistake, because x86 was never in fact rotten to the core. You need a big design team, and it bleeds heat, but in all other respects, it proved salvageable over and over and over again.

            On the empirical evidence, high standards of beauty in CPU design are overrated. Instead, we should have been employing high standards of pragmatic compromise.

            If any design team had aimed merely for "a hell of lot less ugly", instead of becoming mired in some beauty-driven conceptual over-reaction, maybe x86 might have died already.

            Maybe instruction sets aren't meant to be beautiful. Of course, viewed that way, this is an age-old debate.

            The Rise of ``Worse is Better'' [mit.edu]

            Empirically, x86 won.

            The lingering question is this: is less worse less better, or was there a way out, and all the beauty mongers failed to find it?

            • by evilviper ( 135110 ) on Monday April 05, 2010 @09:09PM (#31744118) Journal

              x86 isn't a passable architecture at all. What it has going for it is MONEY. Intel, AMD, and others have dumped tons of money into it to keep it moving along, against all odds. This is because the whole world is tied to, and fixated on, x86, which itself came about way back when because IBM wanted a second supplier, so x86 was the only chip out there with competition, and therefore no proprietary lock-in. Other companies like DEC, MIPS, ARM, etc. have patents on their tech, with no license agreements, so no real attempt to one-up them. x86's competition out of the gate made it a healthy ecosystem, which then precluded all others, which then became self-sustaining.

              • Re: (Score:3, Insightful)

                by drinkypoo ( 153816 )

                x86 isn't a passable architecture at all.

                Why does it in fact perform better than supposedly superior architectures for so many workloads? If these other architectures are inherently superior, why don't they run rings around x86 in spite of the difference in dollars spent?

              • Re: (Score:3, Insightful)

                by bartwol ( 117819 )

                x86 isn't a passable architecture at all.

                I'm sorry...I thought you said x86 isn't a passable architecture...at all.
                Just last week I found a good word for this: hyperbole.
                You'll have to ratchet it down at least a couple of notches to get close to truth. See the parent's reference to "coyote-ugly" (x86) and x86-haters (you).

              • by RzUpAnmsCwrds ( 262647 ) on Tuesday April 06, 2010 @12:33AM (#31745052)

                x86 isn't a passable architecture at all. What it has going for it, is MONEY. Intel, AMD, and others have dumped tons of money into it to keep it moving along, against all odds.

                It's fundamentally irrelevant whether anyone thinks that x86 is "passable" - it's a proven fact. We have 15 years of out-of-order x86 implementations that prove that.

                Yeah, you have to handle the brain-dead instruction encodings in the decoder, and you need to emit micro-ops for a bunch of obscure instructions that no one ever uses (to maintain compatibility). You also have to handle the multiple obscure and obsolete memory addressing modes.

                But the reality is that no one but engineers gives a crap about this. In a world of 300M+ transistor cores, there just isn't that much overhead to making the CPU compatible. Most of the die space is cache anyway nowadays.

                We can't compare what x86 is to what POWER or MIPS or SPARC "would have been" in some speculative world where Intel wasn't the dominant desktop/server CPU manufacturer. There's no magic bullet that can make load-store architectures amazingly fast but that doesn't apply to x86. Almost all of the technology out there can apply equally to a modern x86 CPU.

                What sells CPUs is not having a clean and simple ISA. What sells CPUs is performance, power consumption, and, in many cases, compatibility. If having a clean ISA accomplishes those objectives, so much the better. But Intel and AMD have shown that you can make a fast, low-power, compatible x86 CPU and sell it at a very low price. That's what matters.

      • by juuri ( 7678 )

        Well, it is easy to bag on things in hindsight, but in 01/02? If you were doing something like running thousands of Monte Carlo simulations, the Alpha was untouchable for commodity hardware. I won a bittersweet war when I swore up and down, with any data I could muster, that fully populated Sun E420s couldn't even remotely touch a lowly ol' DS20 at about 1/3 the cost. Ended up with a lot of underutilized Sun boxen.

        • by _merlin ( 160982 )

          It was definitely an ambitious design, and something that needed to be tried. It did what it promised for the first generation, but sadly it was a dead end. You're right - no-one could have known in advance that the Alpha would end up hitting insurmountable roadblocks; but Intel should have seen what was coming when they used the concept in the P4's NetBurst. Hopefully, the lessons learned have influenced today's processor designs.

    • Like we are going to see happen with SPARC too, I'm afraid.

      • by ZosX ( 517789 )

        I don't see SPARC dying anytime soon. It is manufactured by a variety of chip companies, and I'm sure any of them would license the tech to keep producing their own. If anything, it might go the way of ARM and fragment, but retain the same basic instruction set. I don't see Oracle giving up such lucrative hardware sales anytime in the future either. They might gobble up all the good parts and leave the rest of Sun to slowly bleed to death, but I think it's going to be a slow death. Oracle is going to do to

    • by Bert64 ( 520050 )

      Itanium was killed primarily by closed-source software...
      A few years ago, an Itanium box made a very good but expensive Linux box, as did Alpha for that matter...

      However, while Windows was ported to Itanium, most of the apps people wanted to run weren't, so Windows was effectively useless on Itanium because it had no applications... Very few commercial software companies would write software for it because of the small number of users, and the number of users won't increase because of the lack of software.

    • Re: (Score:2, Informative)

      by gertam ( 1019200 )

      Compaq's upper level management's arguments about Itanium's inevitability in the marketplace and economies of scale are a prime example of how you should never let management make decisions of real consequence. I listened to meetings at Compaq where not a single engineer in the crowd agreed with management, but there was nothing they could do. Everyone knew that the game was over simply because a bunch of morons with MBAs thought Intel was unbeatable and they wanted to give up.

      We couldn't understand it unti

      • Given how x86 has utterly destroyed the low-end RISC market, I'm going with the MBAs on this one. Had Compaq's engineers had their way, apparently they would have spent billions of dollars to be the last-place player in a dead-end market. No matter how great Alpha was, the marketing problems were probably insurmountable.

  • Doubt it. (Score:5, Interesting)

    by Jah-Wren Ryel ( 80510 ) on Monday April 05, 2010 @04:51PM (#31741680)

    Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?

    Kinda funny to make that comparison, since the Alpha was killed to enable the Itanium. (Long story involving HP making a deal with Intel to hand over the last of PA-RISC/Itanium processor development to Intel, and Compaq killing Alpha at the same time to clear out the market, since HP was in the process of purchasing DEC/Compaq, although the acquisition was not yet public at the time of the cpucide.)

    But I doubt it's the end of Itanium. Itanium models have things that even the latest Xeons don't in terms of RAS. [wikipedia.org] Most customers don't care about that level of fault tolerance and reliability, but the ones who can't migrate to Linux (or Windows) because they are dependent on features of more proprietary OSes like Tandem (now HP) NonStop [wikipedia.org] do need Itanium, and their software is unlikely to be ported to x86 anytime soon (it took roughly 4 years to get NonStop ported to Itanium to begin with).

    • I thought Intel had partnered with DEC to make the Alpha chip. Also Intel held the patents on it. Intel finally decided to tell DEC sorry but we (Intel) do not want to use these (the Alpha chip designs) anymore. Or something like that anyway. Intel forced DEC to stop making the CPU which left DEC screwed. DEC's value dropped enough for HP to buy it.

      Wasn't the Pentium II more like a RISC CPU with a CISC interpreter so it could run Windows and the rest of the 32-bit CISC stuff? So Intel needed the Alpha to go

      • Re:Doubt it. (Score:5, Interesting)

        by stevel ( 64802 ) * on Monday April 05, 2010 @06:32PM (#31742940) Homepage

        I thought Intel had partnered with DEC to make the Alpha chip. Also Intel held the patents on it. Intel finally decided to tell DEC sorry but we (Intel) do not want to use these (the Alpha chip designs) anymore. Or something like that anyway. Intel forced DEC to stop making the CPU which left DEC screwed.

        Sorry, that is not even close. DEC sued Intel over infringements of the Alpha patents in Pentium processors. One of the results of the settlement was that Intel acquired DEC's Hudson, MA fab (which still operates today). In no way were DEC and Intel partners in Alpha, though ironically, Intel ended up making Alpha chips in the Hudson fab for several years under contract to DEC. What killed Alpha was years of neglect by Bob Palmer (DEC CEO) followed by Compaq's cluelessness. HP ended up with both Alpha and Itanium and bet the farm on the latter, but by that time it probably didn't matter.

        • by Macka ( 9388 )

          That's how I remember it as well. But it wasn't just the Alpha chip that Intel were forced to manufacture (after being forced to buy the Hudson, MA fab) but StrongARM as well. Remember that? Ultimately it was this that set the stage for the death of Alpha. After suffering years of neglect at the hands of Intel in fabrication technology advancements, and missing out on many planned die shrinks that would have kept it ahead, it finally got the axe before the EV8 variant had a chance to see the light of day.

    • Re:Doubt it. (Score:5, Informative)

      by dave562 ( 969951 ) on Monday April 05, 2010 @05:46PM (#31742430) Journal

      The WSJ mentioned that Intel was porting a lot of the Itanium-specific fault-tolerance features over to the Xeons.

    • by fm6 ( 162816 )

      This may or may not count as irony, but VMS (DEC's main OS) survives solely as an OS for HP's Itanium based systems. Further weirdness: a major app for this platform is RDB, a DBMS that Oracle bought from DEC over a decade ago. It's interesting that two companies whose mainstay is competing tech (x86 servers for HP, Oracle DBMS and now x86 and SPARC Sun servers for Oracle) work so hard to keep this particular legacy stack alive.

  • by idontgno ( 624372 ) on Monday April 05, 2010 @05:13PM (#31741974) Journal
    Windows on IA-64 can't be dying until Netcraft confirms it!
  • They all get outmoded.

  • Now I won't have to decline all those useless Itanium updates in WSUS console every month.

