Microsoft Announces End of the Line For Itanium Support

WrongSizeGlass writes "Ars Technica is reporting that Microsoft has announced on its Windows Server blog the end of its support for Itanium. 'Windows Server 2008 R2, SQL Server 2008 R2, and Visual Studio 2010 will represent the last versions to support Intel's Itanium architecture.' Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?"
  • Doubt it. (Score:5, Interesting)

    by Jah-Wren Ryel ( 80510 ) on Monday April 05, 2010 @05:51PM (#31741680)

    Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?

    Kinda funny to make that comparison, since the Alpha was killed to make way for the Itanium. (Long story involving HP making a deal to hand the last of its PA-RISC/Itanium processor development over to Intel, and Compaq killing Alpha at the same time to clear out the market, since HP was in the process of buying Compaq (which had already swallowed DEC) - although the acquisition was not yet public at the time of the cpucide.)

    But I doubt it's the end of Itanium. Itanium models have things that even the latest Xeons don't in terms of RAS. [wikipedia.org] Most customers don't care about that level of fault tolerance and reliability, but the ones who can't migrate to Linux (or Windows) because they depend on features of more proprietary OSes like Tandem (now HP) NonStop [wikipedia.org] do need Itanium, and their software is unlikely to be ported to x86 anytime soon (it took roughly 4 years to get NonStop ported to Itanium in the first place).

  • Re:Probably not (Score:3, Interesting)

    by _merlin ( 160982 ) on Monday April 05, 2010 @05:57PM (#31741756) Homepage Journal

    Were many Itanium users running Windows? My impression was that most Itanium users were running some sort of *nix. I don't think it's a huge deal for Itanium.

    The only Itanium servers I encounter regularly run OpenVMS in order to host the popular OM stock exchange platform. OM-based stock exchanges (ASX, HKFE, OMX, SGX, IDEM) all seem to be a hell of a lot more stable than the .NET-based Tradelect/Infolect system used on LSE for the last few years. I don't know why anyone would actually want to run Windows on Itanium.

  • by _merlin ( 160982 ) on Monday April 05, 2010 @06:06PM (#31741872) Homepage Journal

    Having used Alpha workstations, I beg to differ. The Alpha was a design that managed to do the absolute minimum per clock cycle in each pipeline stage. This allowed very high clock speeds, and high theoretical peak performance with very deep pipelines. In reality, the deep pipelines' branch misprediction penalty was so bad you never got close to the theoretical peak performance, and the high clock speeds made them hot and unreliable - poor reliability was the main driving factor for switching to SPARC. Everyone should've been able to see the problems with the Pentium 4 well in advance - it was basically an Alpha with an x86 recompiler frontend, so it suffered from all the same problems.

    DEC Tru64 had a lot going for it - lots of good ideas in there. When Compaq (which had absorbed DEC) and HP merged, they should have taken what was worthwhile from HP-UX and integrated it into Tru64, then ported the result to HP-PA. That would've produced a system people actually wanted. (HP-UX is horrible - nothing behaves quite how it should. I'd be surprised if the thing really passed POSIX conformance without some money under the table.)
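
    The deep-pipeline point above can be made concrete with a rough back-of-the-envelope model. This is only an illustrative sketch - the peak IPC, branch frequency, and misprediction rate below are assumed numbers, not measurements of any actual Alpha or Pentium 4:

        # Toy model: effective IPC when every mispredicted branch flushes the pipeline.
        def effective_ipc(peak_ipc, pipeline_depth, branch_freq, mispredict_rate):
            # Extra stall cycles added per instruction executed.
            penalty_per_insn = branch_freq * mispredict_rate * pipeline_depth
            return peak_ipc / (1.0 + peak_ipc * penalty_per_insn)

        for depth in (7, 20):  # a modest pipeline vs. a deep one, same theoretical peak
            print(depth, round(effective_ipc(4.0, depth, 0.20, 0.05), 2))  # -> 3.12 vs. 2.22

    In this toy model the deeper design loses almost half of its theoretical peak to mispredictions alone, before heat and reliability even enter the picture.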

  • by damn_registrars ( 1103043 ) <damn.registrars@gmail.com> on Monday April 05, 2010 @06:29PM (#31742196) Homepage Journal

    The Alpha was a design that managed to do the absolute minimum per clock cycle in each pipeline stage

    That is pretty much what RISC was about, in a nutshell.

    and the high clock speeds made them hot and unreliable

    I don't know what system you were running. I was using an AlphaServer ES40: four 667MHz Alphas with 8GB of RAM. It was one of the most reliable systems I've ever used for HPC. There was a rack of Intel x86 systems of the same era right next to it - something like 32 Intel Xeon CPUs - and the Alpha made the rack look silly and wasteful. On BLAST, the Alpha ran circles around the Intel rack, and it became even more embarrassing for the Intel rack as the data sets got larger. That was only one example, though; pretty much anything we could get source code for ran better on the Alpha. And that was going up against 1.8GHz Xeons.

    By comparison, the Itanium will run native 32-bit x86 code (though it certainly doesn't do it well). The compilers aren't easy to set up (even in Linux) and it's hard to find a Linux distro that runs on one. I have an SGI cluster with Itanium2 CPUs in it; I know the care and feeding of this system well.

  • by Bert64 ( 520050 ) <bert AT slashdot DOT firenzee DOT com> on Monday April 05, 2010 @06:32PM (#31742226) Homepage

    The Alpha didn't even attempt out-of-order execution until the EV6 chip...
    The EV4 and EV5 chips were strict in-order processors.

    The difference with the P4 is that the P4 was expected to run code that was originally optimized for a 386, whereas the original Alpha had code that specifically targeted it... In-order execution works very well when you can target a particular processor specifically (see games consoles), since you can tune the code to the processor's available resources... The compiler for the Alpha was also pretty good; it could beat gcc hands down at floating point code, for instance.

    In terms of Alphas getting hot, the only workstation I remember having heat problems was the rather poorly designed Multia (which used a cut-down Alpha chip anyway)... other Alpha systems I used were rock-solid reliable, and I still have several in the loft somewhere - one of which ran for 6 months after the fans failed before I noticed and shut it down...

    Clock for clock the Alpha was pretty quick too, unlike the P4, which was considerably slower than a P3 at the same clock...
    http://forum.pcvsconsole.com/viewthread.php?tid=11606 [pcvsconsole.com] shows Alphas getting SPECfp2000 scores higher than x86 chips running at 3x the clock rate.

    A lot of people, myself included, think Itanium should never have existed, and that the development effort should have been put into Alpha instead - an architecture that already had a good software and user base...

  • Re:Probably not (Score:5, Interesting)

    by lgw ( 121541 ) on Monday April 05, 2010 @07:15PM (#31742752) Journal

    Microsoft has had a strict policy since the dawn of Windows NT that Windows be built for at least two processor architectures at all times. They really worried about i386-isms creeping into the kernel. It pretty much doesn't matter which two you choose; as long as there's more than one (and they're somewhat different), it keeps the kernel devs honest. I wonder what they're doing now: perhaps they just decided that i386 and "amd64" are different enough to serve the purpose.

  • Re:Probably not (Score:4, Interesting)

    by bhtooefr ( 649901 ) <[gro.rfeoothb] [ta] [rfeoothb]> on Monday April 05, 2010 @07:23PM (#31742848) Homepage Journal

    The other thing is to keep a full build internally.

    The rumor mill says that Microsoft has current versions of Windows built for ARM internally... sorta like how Apple kept x86 builds of Mac OS X internally the whole time.

  • Re:Doubt it. (Score:5, Interesting)

    by stevel ( 64802 ) * on Monday April 05, 2010 @07:32PM (#31742940) Homepage

    I thought Intel had partnered with DEC to make the Alpha chip. Also Intel held the patents on it. Intel finally decided to tell DEC sorry but we (Intel) do not want to use these (the Alpha chip designs) anymore. Or something like that anyway. Intel forced DEC to stop making the CPU which left DEC screwed.

    Sorry, that is not even close. DEC sued Intel over infringements of the Alpha patents in Pentium processors. One of the results of the settlement was that Intel acquired DEC's Hudson, MA fab (which still operates today). In no way were DEC and Intel partners in Alpha, though ironically, Intel ended up making Alpha chips in the Hudson fab for several years under contract to DEC. What killed Alpha was years of neglect by Bob Palmer (DEC CEO) followed by Compaq's cluelessness. HP ended up with both Alpha and Itanium and bet the farm on the latter, but by that time it probably didn't matter.

  • by epine ( 68316 ) on Monday April 05, 2010 @08:21PM (#31743368)

    If the 1.8GHz Xeon was based on the Netburst architecture, first you have to multiply by two-thirds to correct for diet Pepsi clock cycles; then, if your code base is scientific, you have to divide by two for the known x86 floating point catastrophe; and finally, if your scientific application is especially friendly to a large register set, there's another factor of 0.75. So on that particular code base, a 1.8GHz Netburst is about equal to a 400MHz Alpha (I only ever worked with the in-order edition). Netburst usually had some stinking fast benchmarks to show for itself if it happened to have exactly the right SSE instructions for the task at hand. And it gained a lot of relative performance on pure integer code. BTW, were you running the Xeon in 64-bit mode? That could be another factor of 0.75.
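
    For reference, the derating factors above multiply out to roughly the figure claimed. A quick arithmetic sketch using only the numbers stated above:

        clock_mhz = 1800                 # 1.8GHz Netburst-era Xeon
        effective = clock_mhz * (2 / 3)  # "diet Pepsi" clock-cycle correction
        effective *= 0.5                 # x86 floating point penalty on scientific code
        effective *= 0.75                # penalty for code that wants a large register set
        print(round(effective))          # ~450

    which lands in the same ballpark as the 400MHz Alpha figure.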

    A lot of people, myself included, think itanium should never have existed, and that the development effort should have been put into alpha instead - an architecture that already had a good software and user base

    Yeah, you and a lot of clear-headed people with insight into the visible half of the problem space. Not good enough.

    Alpha was a nice little miracle, but it fundamentally cheated in its fabrication tactics. This was a long time ago, but as I recall, in order to get single-cycle 64-bit carry propagation, they added extra metal layers for look-ahead carry generation. For a chip intended for Intel-scale mass production, this kind of thing probably makes an Intel engineer's eyebrows pop off. That chip was tuned like a Ferrari. I'm sure the Alpha was designed to scale, but almost certainly not at a cost of production that generates the fat margins Intel is accustomed to.

    Around the time Itanium was first announced, I spent a week poking into transport triggered architectures. There was some kind of TTA tool download, from HP I think, and I poked my nose into a lot of the rationale and sundry documentation.

    TTA actually contains a lot of valid insight into the design problem. The problem is that Intel muffed the translation, through a combination of monopolistic sugar cravings, management hubris, and cart-before-the-horse engineering objectives. I'm sure many of the Intel engineers would like to take a Mulligan on some of the original design decisions. There might have been a decent chip in there somewhere trying to get out. Itanium was never that chip.

    I pretty much threw in the towel on Itanium becoming the next standard platform for scientific computing when I discovered that the instruction bundles contained three *independent* instructions. They went the wrong way right there. They could have defined the bundles to contain up to seven highly dependent instructions, something like complex number multiplication: four operands, seven operations, two results. It should have been possible to encode that in a single bundle. Either the whole bundle retires, or not at all.

    Dependencies *internal* to a bundle are easy to make explicit with a clever instruction encoding format. You wouldn't need a lot of circuitry to track these local dependencies. What you gain is that you only have to perform four reads from the register file and two writes back to it to complete, in this example, up to seven ALU operations. The number of ports on the register file is one of the primary bottlenecks in TTA theory.

    What you lose is that these bundles have a very long flight time before final retirement. Using P6 latencies, it's about ten clock cycles for the complex multiplication mul/add tree in this example (not assuming a fused mul-add). This means you have to keep a lot of the complexity of the P6 on the ROB (reorder buffer) side. But that also functions as a shock absorber for non-determinism, and takes a huge burden off the shoulders of the compiler writers. This was apparent to me long before the dust settled on the failure of the Itanium compiler initiative.
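
    As an illustration of the kind of tightly dependent bundle described above, here is complex multiplication written out as a dataflow - an illustrative sketch only, not Itanium code or any real bundle encoding:

        # (a + bi) * (c + di): four source operands in, two results out.
        # Every intermediate value lives only inside the "bundle", so the
        # register file sees just four reads and two writes for the whole tree.
        def complex_mul(a, b, c, d):
            t0 = a * c       # independent multiplies...
            t1 = b * d
            t2 = a * d
            t3 = b * c
            re = t0 - t1     # ...feeding dependent add/subtract steps
            im = t2 + t3
            return re, im    # the only two values written back

    Everything between the four reads and the two writes stays internal to the bundle, which is exactly why the register-file port pressure drops.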

    In my intuitively preferred approach, instructions within bundles would be tightly bound and s

  • by epine ( 68316 ) on Monday April 05, 2010 @09:27PM (#31743912)

    This is a response to my own post. Sometimes after uncorking a minor screed, I note to myself "that was more obnoxious than normal" and then my subconscious goes "ding!" and I get what's grinding me.

    The secret of x86 longevity is to have been so coyote-ugly that it turns into pablum the brain of any x86-hater who tries to make a chip to rid the planet of the scourge once and for all.

    For three decades right-thinking chip designers have *wanted* x86 to prove as bad in reality as ugliness ought to dictate.

    Instead of having a balanced perspective on beauty, the x86-haters succumb to the rule of thumb that the less like x86, the better. And almost always, that led to a mistake, because x86 was never in fact rotten to the core. You need a big design team, and it bleeds heat, but in all other respects it proved salvageable over and over and over again.

    On the empirical evidence, high standards of beauty in CPU design are overrated. Instead, we should have been employing high standards of pragmatic compromise.

    If any design team had aimed merely for "a hell of a lot less ugly", instead of becoming mired in some beauty-driven conceptual over-reaction, x86 might have died already.

    Maybe instruction sets aren't meant to be beautiful. Of course, viewed that way, this is an age-old debate.

    The Rise of "Worse is Better" [mit.edu]

    Empirically, x86 won.

    The lingering question is this: is less worse less better, or was there a way out, and all the beauty mongers failed to find it?

  • by Peach Rings ( 1782482 ) on Monday April 05, 2010 @09:48PM (#31744020) Homepage

    I don't know, Itanium seems pretty impressive. This presentation [infoq.com] appeared on Slashdot a while ago and does a good job of giving a face to the name Itanium instead of just reading "Failed processor line that was really expensive."

    The huge amount of instruction-level parallelism (dependent on a very good compiler) really seems like the best way to do things. It's too bad it doesn't work out in practice.

  • by IntlHarvester ( 11985 ) on Monday April 05, 2010 @09:53PM (#31744038) Journal

    The Alpha was supposed to run Unix - Tru64 Unix in particular. Running in a proper 64-bit environment, the Alpha was an incredible chip.

    This is a pretty gross oversimplification. First of all, Microsoft spent a lot of money writing a portable OS partially because the conventional wisdom at the time was that RISC would bury x86. (Keep in mind they could have just kept using OS/2.) Digital also badly needed volume for their chip production and made a somewhat serious attempt at the Windows workstation/server market. That the Alpha was pigeonholed as a Unix chip is one of the main reasons it failed.

  • by evilviper ( 135110 ) on Monday April 05, 2010 @10:09PM (#31744118) Journal

    x86 isn't a passable architecture at all. What it has going for it is MONEY. Intel, AMD, and others have dumped tons of money into it to keep it moving along, against all odds. This is because the whole world is tied to, and fixated on, x86, which itself came about way back when because IBM wanted a second supplier - so x86 was the only chip out there with competition, and therefore no proprietary lock-in. Other companies like DEC, MIPS, ARM, etc., held patents on their tech with no license agreements, so there was no real attempt to one-up them. Competition for x86 right out of the gate made it a healthy ecosystem, which then precluded all others, and which then became self-sustaining.

  • Re:Probably not (Score:1, Interesting)

    by Anonymous Coward on Tuesday April 06, 2010 @12:37PM (#31749754)

    They wrote some of the early NT code on MIPS, so it ran on that in the early version.
