Oracle Claims Intel Is Looking To Sink the Itanic

Blacklaw writes "Intel's ill-fated Itanium line has lost another supporter, with Oracle announcing that it is to immediately stop all software development on the platform. 'After multiple conversations with Intel senior management Oracle has decided to discontinue all software development on the Intel Itanium microprocessor,' a company spokesperson claimed. 'Intel management made it clear that their strategic focus is on their x86 microprocessor and that Itanium was nearing the end of its life.'"
  • Re:Sparc (Score:5, Interesting)

    by blair1q ( 305137 ) on Wednesday March 23, 2011 @08:36PM (#35594012) Journal

    x86 is a small part of what's in a modern x86 CPU.

    There's hardly any good reason to choose anything else over it, either. You can't beat it on performance the way Alpha did. PPC lost its simplicity long ago (and comes with some annoyances that make me wish it would just die).

    Intel's latest stuff is the best that ever was. Nobody else comes close, nor ever has.

  • by Mysticalfruit ( 533341 ) on Wednesday March 23, 2011 @09:02PM (#35594208) Homepage Journal
    I've long argued that Itanium was Intel's vehicle to kill PA-RISC and get HP out of the high performance computing market, and it worked. Intel let that CPU die a death of a thousand committee compromises while simultaneously plundering all the technology they could out of Alpha and rolling out Xeon CPUs at much higher clock speeds, with features that weren't in Itanium.

    I worked at a computer company that built PA-RISC servers at the time. We got our hands on some Itanium samples and, needless to say, decided to migrate the platform to Xeon instead.
  • Ah well (Score:4, Interesting)

    by Mr Z ( 6791 ) on Wednesday March 23, 2011 @09:05PM (#35594226) Homepage Journal

    I work directly with a VLIW architecture myself (the TI C6000 family of DSPs). From that perspective, I'm a little sad to see Itanium go. I realize EPIC isn't exactly VLIW, but they had an awful lot in common. Much of HP's and Intel's compiler research helps us other VLIW folks too.

    I think EPIC tried to live up to its name a little too much. The original Merced overreached, and so it ended up shipping far too late for its performance to be compelling. Everybody always zooms in on the lackluster x86 performance, but x86 wasn't at all interesting in the spaces Itanium originally wanted to play in. It wanted to go after the markets dominated by non-x86 architectures such as Alpha, PA-RISC, MIPS and SPARC. Had it come out about three years earlier, it might have had a chance there, thinning the field and consolidating the high-end server space behind EPIC.

    Instead, it launched late as a room-heating yawner. And putting crappy x86 emulation on board only invited comparisons to the native x86 line. That it made it all the way to Poulson is rather impressive, but it smells more like contractual obligation than anything else.

    Rest in peace, Itanium.

  • by Anonymous Coward on Thursday March 24, 2011 @12:19AM (#35595274)

    I agree with Oracle that it is close to over for the chip. Intel lost every good engineer working on it to AMD in Fort Collins, CO, and can't (even with massive financial incentives) coax anybody on their x86 teams to transfer over. Itanium is considered the kiss of death on a resume, so they are having a hard time even finding people willing to work on it. Work on Itanium is about six years behind the original schedules. Originally designed and marketed as a performance leader over the Xeon series, it has fallen so far behind that it had to be re-marketed with FUD about quality, scalability, and stability. While I agree it has better quality and stability than the i3/i5/i7 series, Intel has a hard time explaining how it is better in those terms than their higher-end Xeon series.

  • by BLToday ( 1777712 ) on Thursday March 24, 2011 @01:09AM (#35595450)

    My old college roommate was offered a job at Intel Itanium's unit after finishing his PhD in compiler theory. He turned it down because "life's too short to spend it fixing Itanium."

  • Re:Sparc (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Thursday March 24, 2011 @07:43AM (#35596944) Journal

    Since this is an article about Itanium, it's worth noting that Itanium copies the predicated instruction model from ARM. This didn't just make the code denser; it meant that ARM could get away without a branch predictor for a very long time (newer ARM chips have one). Predication works very nicely with superscalar architectures, because the instructions are always executed and the results are only retired if the condition is met. You always know the state of the condition flag by the time the predicated instructions emerge from the pipeline, so it's trivial to implement compared with the kind of speculative execution required for predicted branches on x86.
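
    As a rough illustration of what predication buys you (a compiler-level sketch in C, not actual ARM or Itanium code): the branchy loop below can be if-converted so that every iteration runs straight through, with the condition folded into the data path, which is exactly the kind of transformation a predicated ISA makes cheap.

        #include <stddef.h>

        /* Branchy version: the hardware has to predict the branch or
         * speculate past it. */
        int count_negatives_branchy(const int *v, size_t n)
        {
            int count = 0;
            for (size_t i = 0; i < n; i++) {
                if (v[i] < 0)
                    count++;
            }
            return count;
        }

        /* If-converted version: the condition becomes a data value, so a
         * predicated (or conditional-move) ISA can run every iteration as a
         * straight-line sequence, always executing the add and only keeping
         * a non-zero contribution when the condition holds. */
        int count_negatives_predicated(const int *v, size_t n)
        {
            int count = 0;
            for (size_t i = 0; i < n; i++)
                count += (v[i] < 0);
            return count;
        }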

    Lots of people seem to assume that x86 is translated into RISC-like micro-ops and that the x86 instruction set then has no impact on the rest of the execution pipeline. This is absolutely not the case. The x86 instruction set is horrible. Lots of instructions have side effects, like setting condition registers, which cause complex interactions between instructions in a pipelined implementation and insanely complex interactions in an out-of-order design. This complexity all has to be replicated in the micro-ops. Something like a Xeon then has a pass that tries to simplify the micro-ops. You effectively have an optimising JIT, implemented in hardware, which does things like transforming operations that generate side effects into ones that don't when the relevant condition flags are guaranteed to be replaced by something else before they are accessed. All of this adds complexity and adds to the power requirement.
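
    To make that flag-elimination idea concrete, here's a toy sketch in C (the uop structure and pass are invented for illustration, not Intel's actual machinery): walk the micro-op stream backwards and strip the flag-writing side effect from any op whose flags are guaranteed to be overwritten before anything reads them.

        #include <stdbool.h>
        #include <stddef.h>

        /* Hypothetical micro-op: does it read the condition flags, and does
         * it write them as a side effect? (opcode/operands elided) */
        struct uop {
            bool reads_flags;
            bool writes_flags;
        };

        /* Backwards liveness pass: if no later uop reads the flags before
         * the next flag write, the write is a dead side effect and the
         * serialising dependency it creates can be dropped. */
        void strip_dead_flag_writes(struct uop *stream, size_t n)
        {
            bool flags_live = true;   /* assume flags are needed at the end */
            for (size_t i = n; i-- > 0; ) {
                if (stream[i].writes_flags && !flags_live)
                    stream[i].writes_flags = false;  /* dead: drop the side effect */
                if (stream[i].writes_flags)
                    flags_live = false;  /* this write shadows older ones */
                if (stream[i].reads_flags)
                    flags_live = true;   /* older flag values are needed after all */
            }
        }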

    Oh, and some of these interactions are not even intentional. Some of the old Intel guys tell a story about the first test simulations of the Pentium. It implemented all of the documented logic, but then they found that most of the games they tried running on it failed. On the 486, one of the instructions was accidentally setting a condition flag due to a bug in the design. Game designers found that they could shorten some instruction sequences by taking advantage of this. In the Pentium, they didn't recreate this bug, and software broke. After the first phase of testing, they had to go back and recreate it (adding some horrible hacks to the Pentium design in the process), because if games suddenly crashed when people upgraded to a Pentium, people would blame Intel (Windows 95 had a hacky work-around to prevent SimCity crashing on a use-after-free bug, for the same reason). All of these things add to complexity, and in hardware, complexity equals power consumption.

    Or, if you are that way inclined, you could argue that Java/.NET bytecode compiled at run time achieves the same thing.

    And, if you are, then Thumb-2EE is a much nicer target than x86 for running this code. It has instructions for things like bounds-checked array access, which really help performance in JIT'd VM code.
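
    For a sense of what that buys a JIT (illustrative C with a made-up VM helper, not any real ABI): on a target without checked-access instructions, every managed array load has to expand to roughly the sequence below, whereas Thumb-2EE can fold the check into the access itself.

        #include <stdint.h>

        /* Hypothetical VM helper that raises the language's bounds exception;
         * it never returns. */
        extern void vm_throw_bounds_error(void);

        /* What a JIT has to emit for a managed array load on an ISA without
         * checked accesses: an explicit compare-and-branch before the load. */
        static inline int32_t jit_array_load(const int32_t *base,
                                             uint32_t length, uint32_t index)
        {
            if (index >= length)          /* bounds check synthesised by the JIT */
                vm_throw_bounds_error();
            return base[index];
        }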

