Intel Hardware News

Oracle Claims Intel Is Looking To Sink the Itanic 235

Blacklaw writes "Intel's ill-fated Itanium line has lost another supporter, with Oracle announcing that it is to immediately stop all software development on the platform. 'After multiple conversations with Intel senior management Oracle has decided to discontinue all software development on the Intel Itanium microprocessor,' a company spokesperson claimed. 'Intel management made it clear that their strategic focus is on their x86 microprocessor and that Itanium was nearing the end of its life.'"
This discussion has been archived. No new comments can be posted.

  • Sparc (Score:5, Informative)

    by Gary Franczyk ( 7387 ) on Wednesday March 23, 2011 @08:22PM (#35593922)

    Now that Oracle owns Sparc processors from Sun, there is no reason for them to help out their competitor.

    • And they managed to get in a good, FUDdy parting shot on their way out (lovely chaps, those folks at Oracle).

      Unless of course they're telling the truth. Which would be a shame, if not a surprise. Itanium deserved at least a slightly better life than it got (and Intel, once burned, may never try moving away from i86 again, god help us).

      • Re:Sparc (Score:5, Interesting)

        by blair1q ( 305137 ) on Wednesday March 23, 2011 @08:36PM (#35594012) Journal

        x86 is a small part of what's in a modern x86 CPU.

        There's hardly any good reason to choose anything else over it, either. You can't beat it on performance the way Alpha did. PPC lost its simplicity long ago (and comes with some annoyances that make me wish it would just die).

        Intel's latest stuff is the best that ever was. Nobody else comes close, or ever has.

        • Re:Sparc (Score:5, Insightful)

          by schmidt349 ( 690948 ) on Wednesday March 23, 2011 @08:49PM (#35594104)

          There's hardly any good reason to choose anything else over it, either.

          Well, yes and no. Certainly in the space between the notebook computer and any but the mightiest supercomputers there's no reason at all not to go with x86. But in the mobile processor space, where ultra-low TDP is the order of the day, ARM has a big leg up on x64. Intel sold off their XScale division (which was only ARMv5 anyway) and now they're losing this increasingly important segment of the market.

          I'm not counting Intel out by a long shot in that race, but ARM is the new hotness for most geeks.

          • Comment removed (Score:5, Interesting)

            by account_deleted ( 4530225 ) on Wednesday March 23, 2011 @09:56PM (#35594486)
            Comment removed based on user account deletion
            • by c0lo ( 1497653 )

              Well, ARM uses a hell of a lot less power, but it is also a hell of a lot less powerful clock for clock, so it evens out, doesn't it? I mean, sure, in a cell phone, where its main job is running a highly specialized OS with tons of little support chips to help it out, it does great, but I wouldn't want to do my day-to-day desktop computing on it.

              Why do you think ARM is synonymous with less computing power? Maybe that is so for the present, but it doesn't [wikipedia.org] seem so for [wikipedia.org] the near future [linuxfordevices.com]

            • Re:Sparc (Score:5, Informative)

              by Darinbob ( 1142669 ) on Wednesday March 23, 2011 @11:11PM (#35594922)

              The problem is that the x86 is like the living dead. It's an ancient architecture that was bad even when it was new, and is now being held together with duct tape and an oxygen tent. Yes it's very fast, but it's very expensive to make it that way too. It works because Intel has tons of resources to throw at it. It is saddled with decades of backwards compatibility issues as well, 16-bit modes, segmentation, IO ports, and other things that no one uses anymore if they can help it. It requires tons more support chips than many embedded CPUs. The real reason x86 should die is that it's an embarrassment to computer scientists to see this dinosaur still lumbering about.

              ARM on the other hand has some decent designs. It's not low power because it was designed to be low power, but because it's got a relatively simple RISC design, and because it was easily licensed for people to fabricate, so it got used a lot in low-power designs (i.e., an ARM core included as part of a larger ASIC). But there are faster ARM designs too, and with the same resources that the x86 has it would be really great. ARM is not inherently a "small chip". The problem is trying to compete head to head with x86 when everyone knows it will lose. So its high-power designs are not intended for PC desktops, but for specialized purposes.

              Internally the modern x86 is really a RISC at heart anyway. But it's got a really massive support system on top of that that converts the older style CISC instruction set into a VLIW/RISC style that's more efficiently executed in a superscalar way. Just like the original RISC argument, it makes sense to rip out that complexity and then either use the resources to make things faster or leave it out entirely to get a cheaper and more efficient design.

              Anytime a better design is out there it seems to be clobbered in the marketplace because it just doesn't pick up enough steam to compete with x86. This is why alternative CPUs tend to be used for embedded systems, specialized high speed routers, or parallel supercomputers. Even Intel can't compete with itself; Itanium isn't the only alternative they've tried. It's not just performance either: most Unix workstations had models that ran rings around x86, but they were expensive too because of low sales volumes.

              The public doesn't understand this stuff. Sadly, neither do a lot of computer professionals. All they like to think about is "how fast is it?" or "does it run Windows?"

              The analogy with cars is wrong. X86 isn't a Peterbilt truck; it's a V8 Chrysler land yacht with a cast iron engine, or maybe a gas-guzzling SUV. People stick with it because they don't trust funny little foreign cars, they feel safer wrapped in all that steel, they need to compensate for inadequacies, they feel more patriotic when they use more gas, etc. It's what you drive if you don't want to be different from everyone else.

              • Re:Sparc (Score:4, Informative)

                by PCM2 ( 4486 ) on Thursday March 24, 2011 @12:44AM (#35595370) Homepage

                It is saddled with decades of backwards compatibility issues as well, 16-bit modes, segmentation, IO ports, and other things that no one uses anymore if they can help it.

                Actually, Google Native Client (NaCl) uses segmentation to sandbox downloaded code. It's either a brutal hack or a totally clever trick, I guess, depending on your POV.

                • In that case, it won't work on x86-64, because segmentation doesn't work in 64-bit mode. Xen also uses segmentation, so the hypervisor and guest can share a linear address space and not need a TLB flush on hypercalls.

                  Segmentation is actually really nice, but the x86 implementation sucks. You have two segment tables. The GDT is shared between all processes, the LDT is per process (TLB flush required to change it, next few dozen memory accesses will be very slow). Each contains 8192 entries. For an OO
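                  For reference, a rough C sketch of what one of those 8192 descriptor entries looks like, using the usual IA-32 layout; the field names here are purely illustrative:

                  ```c
                  #include <stdint.h>

                  /* Rough sketch of one x86 GDT/LDT entry (a segment descriptor).
                   * The base and limit are scattered across split fields, which is
                   * a good illustration of why the x86 take on segmentation feels
                   * so clunky. Field names are illustrative, not from any OS. */
                  struct segment_descriptor {
                      uint16_t limit_low;    /* limit bits 0..15 */
                      uint16_t base_low;     /* base bits 0..15 */
                      uint8_t  base_middle;  /* base bits 16..23 */
                      uint8_t  access;       /* present bit, privilege level, type */
                      uint8_t  flags_limit;  /* limit bits 16..19 plus granularity/size flags */
                      uint8_t  base_high;    /* base bits 24..31 */
                  } __attribute__((packed));
                  ```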

              • Re:Sparc (Score:5, Informative)

                by Waffle Iron ( 339739 ) on Thursday March 24, 2011 @02:10AM (#35595668)

                Internally the modern x86 is really a RISC at heart anyway. But it's got a really massive support system on top of that that converts the older style CISC instruction set into a VLIW/RISC style that's more efficiently executed in a superscalar way.

                If you look at a picture of any modern CPU die, the real estate is totally dominated by the caches. That "massive support system" (which in reality is only a tiny fraction of the whole die area) serves largely as a decoder that unpacks the compact CISC-style opcodes (many of which are only one or two bytes long) into whatever obscure internal superscalar architecture is in vogue this year. This saves huge amounts of instruction cache space compared to unpacking bloated one-size-fits-all RISC-style opcodes into some similar internal architecture du jour. Thus, the X86 can end up needing less die area overall. This is one reason that, despite what elitist geeks say, over the years X86 has usually provided more bang for the buck than any competing processor family.

                This scheme is so advantageous that even ARM has tacked on a similarly convoluted opcode decompressor. If ARM ever evolves into a mainstream general-purpose high-end CPU, there will undoubtedly be dozens more layers of cruft added to the ARM architecture to make it competitive with X86, at which point it will be similarly complex. (For another example, take a look at how the POWER architecture ended up over time. You can hardly call it RISC any more.)

                • The ARM Thumb decompressor isn't that convoluted. And it has fixed-length 16-bit words like most RISC machines (even the 16-bit version of MIPS does the same). It means it has a nice simple fetch cycle. If you have no cache (as in smaller ARM versions) then it's one instruction fetch per two instructions. Thumb mode basically is a tradeoff of space for performance; you end up executing more instructions overall.

                • by jabjoe ( 1042100 )
                  ARM is more compact than a normal RISC architecture because most instructions are conditional and can include a bit shift or rotation too. This doesn't just mean fewer instructions for common tasks, it also means less branching, which isn't only faster but again leads to fewer instructions, as 'branches' can share instructions. In the old Acorn days I remember how much noise there was about how much more compact it was than x86. ARM isn't the RISC your daddy knew. Thumb just makes it even more comp
                  • Re:Sparc (Score:5, Interesting)

                    by TheRaven64 ( 641858 ) on Thursday March 24, 2011 @07:43AM (#35596944) Journal

                    Since this is an article about Itanium, it's worth noting that Itanium copies the predicated instruction model from ARM. This doesn't just make the code denser; it also meant that ARM could get away without having a branch predictor for a very long time (new ARM chips have one). It works very nicely with superscalar architectures, because the instructions are always executed, and the results are only retired if the condition is met. You always know the state of the condition flag by the time the predicated instructions emerge from the pipeline, so it's trivial to implement in comparison with the kind of speculative execution required for predicted branches on x86.

                    Lots of people seem to assume that x86 is translated into RISC and then x86 has no impact on the rest of the execution pipeline. This is absolutely not the case. The x86 instruction set is horrible. Lots of things have side effects like setting condition registers, which cause complex interactions between instructions in a pipelined implementation, and insanely complex interactions in an out-of-order design. This complexity all has to be replicated in the micro-ops. Something like Xeon then has a pass that tries to simplify the micro-ops. You effectively have an optimising JIT, implemented in hardware, which does things like transforming operations that generate side effects into ones that don't if the relevant condition flags are guaranteed to be replaced by something else before they are accessed. All of this adds to complexity and adds to the power requirement.

                    Oh, and some of these interactions are not even intentional. Some of the old Intel guys tell a story about the first test simulations of the Pentium. It implemented all of the documented logic, but then they found that most of the games that they tried running on it failed. On the 486, one of the instructions was accidentally setting a condition flag due to a bug in the design. Game designers found that they could shorten some instruction sequences by taking advantage of this. In the Pentium, they didn't recreate this bug, and software broke. After the first phase of testing, they had to go back and recreate it (adding some horrible hacks in the Pentium design in the process), because if games suddenly crashed when people upgraded to a Pentium then people would blame Intel (Windows 95 had a hacky work-around to prevent SimCity crashing on a use-after-free bug, for the same reason). All of these things add to complexity and in hardware complexity equals power consumption.

                    Or, if you are that way inclined, you could argue that Java/.NET bytecode compiled at run time achieves the same thing.

                    And, if you are, then Thumb-2EE is a much nicer target than x86 for running this code. It has instructions for things like bounds-checked array access, which really help performance in JIT'd VM code.
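                    To make the predicated-execution point above concrete, here is a tiny C function (an invented example, not from the discussion); the assembly in the comment is the sort of branch-free sequence an ARM compiler can emit for it:

                    ```c
                    #include <stdint.h>

                    /* Invented example of predicated execution.
                     * With a in r0 and b in r1, classic ARM can compile this without a branch:
                     *     CMP   r0, r1        ; set the condition flags once
                     *     MOVLE r0, r1        ; executed only when a <= b; r0 already holds a
                     * The conditional move simply retires or doesn't, so no branch predictor is
                     * needed for it. A plain x86 translation would use a conditional jump (or
                     * CMOV, the nearest x86 analogue). */
                    int32_t max32(int32_t a, int32_t b)
                    {
                        return (a > b) ? a : b;
                    }
                    ```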

              • There is an almost trivial migration path that Intel/AMD could take to get rid of those features whilst still retaining a large part of the market. They could produce x86-64 CPUs that boot into 64-bit long mode right from the start, scrapping most of the compatibility modes and features (real mode and virtual 8086 mode can go). x86-64 is a really neat, clean architecture when taken on its own.

                Such CPUs could be badged as "pure 64-bit" CPUs. They'd require a 64-bit OS and drivers and they wouldn't run older sof

            • by Sycraft-fu ( 314770 ) on Wednesday March 23, 2011 @11:54PM (#35595144)

              Comes from the general geek thing of liking the underdog (though one has to ask how much of an underdog they really are, given their massive market share in embedded devices) and from hating CISC. A lot of geeks take CS classes and learn a bit about processor theory, but not any of the CE/EE needed to understand the lower levels, and thus decide CISC = bad, RISC = good.

              What it all adds up to is they hate on Intel and love ARM, and want to see ARM in the desktop space.

              As you said, I've yet to see anything showing ARM is faster than Intel in an equal setting. Yes, a Core i7 uses a lot of power. However, it does a lot. Not only is it fast at the sort of operations ARM does, it does other things as well. Like 64-bit. You think ARM isn't doing that just because they are jerks? No, it is because 64-bit needs more silicon, and thus more power. How about heavy-hitting vector units? Same deal.

              ARM is great for what it does but those who think that it is some amazing x86 replacement just haven't done any looking. Turns out Intel is pretty much the best there ever was when it comes to getting a lot out of silicon. They produce some powerful chips. Could ARM design one as powerful? Maybe, but guess what? It wouldn't be a tiny fraction of a watt deal anymore. It'd be as big and power hungry as Intel's offerings.

              You can see this from other companies as well. If x86 really was the problem, and another architecture could do so much more with less, then why doesn't anyone else do it? Remember IBM, Hitachi, Sun: they all made non-x86 chips. Yet none of them are killing Intel in terms of performance per watt. IBM's POWER chips are a great example. They have an apt name: they are fast as hell, and draw a ton of energy. They really are for high end servers (which is what IBM designed them for). Despite being RISC based (though you find desktop/server RISC chips are quite complex both in terms of number of instructions and capability) they are not some amazing low power monsters that can rip x86 apart. They are fast, powerful, high end chips that take a lot of silicon and a lot of juice to do what they do. Go have a look at the massive heatsink for a POWER5 chip on Wikipedia.

              Different chips, different markets.

              • by jabjoe ( 1042100 ) on Thursday March 24, 2011 @05:24AM (#35596304)
                My money on 'why' is Windows compatibility and closed source locking in the platform, more than chip design. The best design doesn't always win; in fact, it often doesn't. This happened because Windows reached critical mass, and x86 with it. So much money was poured into x86 that you just got more for your money with it, and the more that was the case, the more it sold, and the more it stayed the case. This meant it came into the server market from the bottom, and then the same happened there. It's a good example of a bottom-up revolution. Now it looks like Wintel compatibility doesn't matter so much (web/free software), and something similar could happen with ARM, driven by it being "good enough", cheap and low power. That's why Intel are pooing their pants and MS are hedging their bets with Windows on ARM.
            • by dkf ( 304284 )

              Well, ARM uses a hell of a lot less power, but it is also a hell of a lot less powerful clock for clock, so it evens out, doesn't it?

              The ARM does more than x86 per watt but less per clock. What this means is that which is best depends entirely on what your bottleneck is. If you're cooling-limited (which a lot of installations are, especially when it comes to servers; getting the heat out of the racks and out of the building is the limiting factor) then the ARM looks a lot sweeter because it allows you to pack processors in much more densely. That in turn saves massively.

              OTOH, if you're not cooling-limited (and not in a low-power situatio
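              A toy back-of-the-envelope version of that cooling-limited argument, with all numbers invented purely for illustration:

              ```c
              #include <stdio.h>

              /* Toy model of the cooling-limited case: the rack has a fixed power
               * budget, so total throughput = (chips that fit in the budget) x
               * (performance per chip). All figures below are made up. */
              int main(void)
              {
                  const double rack_power_budget = 10000.0;          /* watts per rack */
                  const double big_watts = 100.0, big_perf = 100.0;  /* fast, hot chip  */
                  const double small_watts = 5.0, small_perf = 10.0; /* slow, cool chip */

                  printf("big-core rack throughput:   %.0f units\n",
                         rack_power_budget / big_watts * big_perf);
                  printf("small-core rack throughput: %.0f units\n",
                         rack_power_budget / small_watts * small_perf);
                  return 0;
              }
              ```

              If the limit is sockets rather than watts, the faster-per-chip design wins instead, which is the "depends on your bottleneck" point.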

            • by dbIII ( 701233 )
              On the other hand I have no clue what a "Peterbuilt" is, but you can see a Kia in any country. The aim of ARM stuff is cheap and nasty but everywhere.
              Also a little nitpick, since you are off by about a decade: Intel solved the 4GB problem way back with the Pentium Pro. People think otherwise because Microsoft's low-end software couldn't go past 4GB while their high-end stuff could on the same hardware. After 1995 the 4GB barrier was a cheap-end-of-town Microsoft problem. Other vendors and Linux were comp
          • by c0lo ( 1497653 )

            There's hardly any good reason to choose anything else over it, either.

            Well, yes and no. Certainly in the space between the notebook computer and any but the mightiest supercomputers there's no reason at all not to go with x86. But in the mobile processor space, where ultra-low TDP is the order of the day, ARM has a big leg up on x64

            Yep. But, in the context of the Oracle behemoth database server, do mobile processors have any relevance? It seems that they do - even if an ARM-based server [linuxfordevices.com] is no longer what one would call "mobile".

            Putting one thing on top of the other, might it be that the Itanium heavyweight approach is indeed a dinosaur of the past?

          • Re:Sparc (Score:4, Insightful)

            by mcrbids ( 148650 ) on Wednesday March 23, 2011 @11:49PM (#35595112) Journal

            And this segment is *important* because already I do as much browsing and web-surfing on my Motorola Droid 2 Android phone as on my fire-breathing Intel Core i7 laptop.

            Remember that x86 started out as the cheap chip on the block that was "good enough" for basic stuff that little people could afford, and it slowly grew upward and expanded into new market segments until it now occupies the high end of the marketplace.

            ARM is now potentially in a similar situation. And like the x86 before it, it has tremendous inertia in the smartphone market, where almost any device is easily capable enough to operate as a PC for most uses for most people. It uses something less than 1/100th the power of my laptop and is a reasonable, convenient stand-in for said laptop for pretty much all personal use other than my work. (I'm a software engineer.)

            I've already started to note the conflict: do stuff on the phone or the laptop? So far, it's mostly worked because stuff I do on the phone is pretty much "in the cloud" and is accessible from the PC.

            But pictures? I've taken a few hundred pictures, and keeping them in sync starts to become a hassle...

            At some point, it could make sense to jump, to switch from one to the other. Why couldn't my phone have a plug or a bluetooth connection to a keyboard, monitor, etc?

            • by Sj0 ( 472011 )

              I think you've got a false dichotomy here.

              It's highly, HIGHLY unlikely that x86 is going to be usurped by anything any time soon. Part of the reason is, despite seemingly everyone here hating it, the legacy of x86.

              There's always going to be that one application that you just can't find a replacement for. Even among FOSS software, there's a good chunk that is non-trivial to port to a non-x86 architecture. This is fine in sectors like smart phones, where the segment isn't so bogged down in legacy app

              • by Surt ( 22457 )

                How much of that x86 software that simply cannot be ported also can't run on an emulator?

                • by Sj0 ( 472011 )

                  History shows us that "we can emulate it" is not an acceptable alternative most of the time.

                  Apple managed to get away with it, but they managed to get away with a lot of dramatic platform shifts because they have dictatorial control over their product. They could switch their entire product line over to ARM tomorrow and apple fans would have no choice but to switch.

                  X86 and the PC architecture are different than that. They're more democratic, which often means innovation must maintain the status quo or risk

              • Even among FOSS software, there's a good chunk that is non-trivial to port to a non-x86 architecture.

                Can you give some examples, please? (I'm not trolling, I'm just curious.) I've heard that Debian and recently Ubuntu have ARM ports, but I've never used them. What's missing from these ports that's commonly available in the "normal" x86 distributions?

                 • Not the most widely used software maybe, but SBCL [sbcl.org] is taking its time porting to ARM, Clozure CL [clozure.com] doesn't have a port I'm aware of, nor does CLISP [gnu.org]. The only Common Lisp implementation I know of that works on ARM is ECL [sourceforge.net].
        • Or maybe Intel is more worried about the new ARMs [slashdot.org] race.

        • Re: (Score:2, Informative)

          by jd ( 1658 )

          Immaterial. The x86 is a lousy architecture and adding onto it hasn't helped any.

          Intel's latest stuff is certainly not the best that ever was. It has no support for content-addressable memory and no support for MIMD, it isn't asynchronous, it's not 128-bit, it doesn't use wafer-scale integration, it doesn't support HyperTransport (which is faster than PCI Express) and it can't do on-the-fly instruction set translation --- all these things have been done on other architectures, making those architectures sup

          • by Sj0 ( 472011 )

            This story is about the further decay of Intel's once-flagship product, the Merced. If anything, this story shows that x86 and extensions of it DO have a very important place in the market. Despite 64-bit architectures having existed forever before x86-64, it wasn't until the Opteron and Athlon 64 that 64-bit computing became commonplace. It wasn't Merced, it wasn't DEC Alpha, it wasn't a Motorola processor.

            Ignoring the practical reasons why x86 continues to survive may make sense in a vacuum of academic computer science, bu

        • by JanneM ( 7445 )

          On the low-power mobile and embedded side x86 is out. Never mind power-performance; the absolute power level is what matters most. And the big volume in CPUs is in this market, from smartphones on the upper end down to windshield wiper controllers and stuff like that on the low end.

          On the very, very high end, again, there's good reason not to use x86 and instead do something like Hitachi's SPARC-based CPUs. You have basically low or no concern for binary compatibility - you're most likely running a custom-ro

      • Re:Sparc (Score:5, Insightful)

        by pavon ( 30274 ) on Wednesday March 23, 2011 @08:51PM (#35594122)

        Unless of course they're telling the truth.

        Intel is strongly denying [intel.com] Oracle's claims that Itanium is near end-of-life. So it looks like more Oracle FUD, and probably intended to harm HP-UX rather than Intel.

        • by Sycraft-fu ( 314770 ) on Wednesday March 23, 2011 @09:29PM (#35594336)

          http://www.businessweek.com/news/2011-03-23/hp-calls-oracle-move-shameless-gambit-to-hurt-competition.html [businessweek.com]

          I'm much more inclined to believe Intel and HP on it. While the Itanium did not become the be-all, end-all for computers that Intel hoped (they wanted to go to it because their cross licensing is for x86, not IA-64), it has not been a failure. People like to joke about it and rag on it, but all it means is they've done little to no research. It is a competitive chip in the super high end market. When you need massive DB servers or the like, it is a real option and one that people use.

          Now as to what kind of future it'll have, I can't say. The high end segment keeps shrinking as normal desktop hardware gets better and better. You can drop four 8-core Xeons into a system right now and get some great performance at a good (relatively speaking) price.

          At any rate I wouldn't listen to anything Oracle says, particularly about competitors. They are not known for their truthfulness, or for their sense of fair play.

          • by afabbro ( 33948 )

            While the Itanium did not become the be-all, end-all for computers that Intel hoped (they wanted to go to it because their cross licensing is for x86, not IA-64), it has not been a failure. People like to joke about it and rag on it, but all it means is they've done little to no research. It is a competitive chip in the super high end market. When you need massive DB servers or the like, it is a real option and one that people use.

            The only people running "massive DB servers" on Itanium are people who had HP-UX shops before PA-RISC went away and migrated to Itanium. I don't think I've ever met anyone who went out and bought HP-UX + Itanium and introduced it into their shop.

            Itanium is a fine processor - but it solves all the wrong problems. It's fantastic for scientific compute apps - 64 64-bit registers, woo-hoo! - but it's not really a competitive solution for mainstream business use.

            At any rate I wouldn't listen to anything Oracle says, particularly about competitors. They are not known for their truthfulness, or for their sense of fair play.

            I don't think anyone has made better moves over

        • by afabbro ( 33948 )

          Unless of course they're telling the truth.

          Intel is strongly denying [intel.com] Oracle's claims that Itanium is near end-of-life. So it looks like more Oracle FUD, and probably intended to harm HP-UX rather than Intel.

          That's a really silly analysis. Oracle could not care less about HP-UX because they don't compete in the proprietary Unix market. No one does. Yes, Oracle owns Solaris, but Ellison's smart enough to know that proprietary Unices only exist to sell the servers attached to them. There's no money in selling proprietary Unix operating systems by themselves.

          Now that PA-RISC is gone, the only thing HP-UX runs on is Itanium. Already, you can't run any Microsoft or Red Hat on Itanium. And those are just compan

      • Intel is obligated to continue developing Itanium, or HP sues them. Itanium isn't going anywhere, and Oracle is spreading FUD.
        • by c0lo ( 1497653 )

          Intel is obligated to continue developing Itanium, or HP sues them. Itanium is going nowhere, and Oracle is spreading FUD.

          FTFY. Other than that, all your other assertions ring true to me.

        • by syousef ( 465911 )

          Intel is obligated to continue developing Itanium, or HP sues them. Itanium isn't going anywhere, and Oracle is spreading FUD.

          Really? Do you think someone using an Oracle database on IA-64 is going to convert to a different DB? I don't think so.

        • by afabbro ( 33948 )

          Intel is obligated to continue developing Itanium, or HP sues them. Itanium isn't going anywhere, and Oracle is spreading FUD.

          I'm highly skeptical of your argument. Are you saying that HP holds an iron-clad contract saying that Intel must develop Itanium for as long as HP wants?

    • Re:Sparc (Score:4, Insightful)

      by mbkennel ( 97636 ) on Wednesday March 23, 2011 @08:28PM (#35593968)

      It's cleverer, and assholeyer than just saying that.

      Old Lawyer's trick.

      Instead of saying the obvious, i.e. "We won't support our competitor's (HP) fastest computers because we make hardware now," Oracle spreads FUD about the longevity of their competitor's product line by leaking information from anonymous sources at their competitor's sole supplier.

      Even if Intel and HP completely deny it, their customers will be thinking it all along.

    • Now that Oracle owns Sparc processors from Sun, there is no reason for them to help out their competitor.

      Oracle develops and sells both Solaris and their database software for x86 platforms, which they do not own.

      I think it is more the fact that (a) they *never* had a version of Solaris for Itanium; and (b) with both RHEL and HP-UX dropping support for Itanium, they would have no platform to run their databases on.

    • by gl4ss ( 559668 )

      i've seen many sparcs, but itaniums only via ssh.

      and current x86's make much more sense. itanium was a flawed research experiment (though, it did live longer than most such..).

  • by Ultra64 ( 318705 ) on Wednesday March 23, 2011 @08:23PM (#35593936)

    I didn't realize the Itanium was still being produced. I thought they shut it down years ago.

    • by blair1q ( 305137 )

      It's still being used in certain proprietary big-iron systems. And it still kicks some ass. But it won't supplant the genetic ingrainment of x86. Which itself is hardly x86 any more. Intel is still selling it, but only the foolish are buying it to use in new designs.

    • It's still around and a valuable tool in the toolbox. The next-gen one (Poulson) should be the fastest processor in the world at release, assuming Intel manages to get it out on time.
  • by AtariDatacenter ( 31657 ) on Wednesday March 23, 2011 @08:33PM (#35593992)

    I still remember the day the HP sales/technical team came on-site to give us a presentation. Flashy videos with Carly Fiorina's new vision of the future. And a bright tomorrow with a new CPU line... out with PA-RISC and in with Itanic. Their sales team looked at each other nervously as we expressed our evaluation of the arrangement as a failed vision. It didn't take them long to figure out that dumping their in-house CPU to go with the Itanic would doom them to irrelevancy. And it did.

    Now the Itanium itself is sinking into irrelevancy. It took too long. This chip was a disaster. Glad to see it go.

    • Yep, I think HP is the main customer for Itanium nowadays. Windows is going to drop support after Server 2008 R2 (support was limited in Server 2008 to certain parts). Red Hat dropped support for it with RHEL6.

    • by Third Position ( 1725934 ) on Wednesday March 23, 2011 @08:52PM (#35594130)

      You have to wonder what chip architecture HP is going to move to now, considering losing Itanium leaves them high and dry. Of course, Itanium was largely developed by HP. Perhaps HP will continue the processor line?

      It certainly isn't going to do HP any good having to do another architecture switch. To this day, most of the HPUX servers in my shop are PA-RISC. Moving to Itanium has generally been painful enough that when our development teams are forced to upgrade their applications, they generally opt to rehost them on Linux on x86 rather than HPUX on Itanium. Only a few applications where that isn't adequate have made it to HPUX Itanium. Putting their customers through another painful transition isn't going to win HPUX any friends.

      • HP claims to have a 10-year roadmap. I suspect there's a contractual clause that says if Intel kills IA64 development, the IP and some amount of money gets transferred to HP.
    • by Mysticalfruit ( 533341 ) on Wednesday March 23, 2011 @09:02PM (#35594208) Homepage Journal
      I've long argued that Itanium was Intel's vehicle to kill PA-RISC and get HP out of the high performance computing market, and it worked. Intel let that CPU die a death of a thousand committee compromises while simultaneously plundering all of the technology they could out of Alpha and rolling out their Xeon CPUs at much higher clock speeds and with features that weren't in Itanium.

      I worked at a computer company that built servers using PA-RISC CPUs at the time. We got our hands on some Itanium samples and, needless to say, we decided to migrate the platform to Xeon instead.
    • Kind of sad. PA-RISC was a nice design and very fast. The problems are more business-oriented. Developing your own chip is expensive, so companies either want to be chip makers or computer builders, but not both. Second, the high-end market made a leap from being Unix-oriented servers to Windows-based servers, so customers don't like oddball chips that their software suppliers don't support. And in this context, "oddball" means anything that isn't x86. It is also convenient from a business perspect

    • by Apotsy ( 84148 ) on Wednesday March 23, 2011 @11:37PM (#35595044)
      Not that I'm a big fan of Carly, but you can't necessarily blame her for that. The decision for HP to go with Intel's fancy new solution was made in the era when Lew Platt was CEO, well before Fiorina took over. I was at HP in the mid-90s and recall seeing roadmaps that showed HP's UNIX solutions all being based on the super-amazing upcoming new Intel architecture well before the end of the decade. PA-RISC was old and busted, and Intel had the new hotness just around the corner. The suits just couldn't say enough about what an unstoppable juggernaut Intel's new baby was going to be. According to them, it was going to solve everything, do everything, and pretty much take over the world.

      I left in 97, but I am sure those roadmaps had to be quietly adjusted each time Intel's new chip was delayed (over and over). It was well past 2000 when the thing finally came out, and in the end, it was a huge disappointment (dare I say disaster) after PA-RISC had been sailing along smoothly for so long. The perf was terrible, the instruction set was a mess, and pretty much the entire industry did their best to avoid it. I'm surprised it took this long for Intel to throw in the towel on it.

      PA-RISC really was a great series of CPUs. It's a shame it had to die. At one point I believe it actually surpassed the (at the time) much-vaunted DEC Alpha as the fastest thing on the market, if only for a little while. Itanium seemed designed solely to kill off the x86 CPU clone market. Intel came up with a completely new instruction set, and patented it so there would be no clones. Actually making a good chip did not seem to be a consideration.

      Good riddance to Itanium, and a bittersweet farewell and R.I.P. to PA-RISC.
      • by Apotsy ( 84148 ) on Wednesday March 23, 2011 @11:50PM (#35595122)
        Oh, and I have to mention that HP's decision wasn't necessarily a bad one given the trends that were happening in the mid-to-late 1990s. The big story in everyone's minds was that expensive UNIX workstations were on the way out, to be replaced almost overnight by cheap commodity PCs running Windows NT (don't laugh, it was the first "Windows" to be taken seriously). SGI pretty much lost their entire hardware business that way. HP was just trying to save themselves from that fate by hitching their future to what looked to be the industry's dominant player.
    • Can you share some information about the nature of this meeting, and what kind of contract your team was evaluating? (Especially considering this must have been at least 8 to 10 years ago.)

      Itanium, from an engineering standpoint, was a perfectly good architecture -- there are several scenarios in which VLIW architectures can attain truly astounding IPCs. Its weaknesses were essentially software support, power/heat, and price -- which is a vicious cycle of problems -- without software support, you don't

  • Ah well (Score:4, Interesting)

    by Mr Z ( 6791 ) on Wednesday March 23, 2011 @09:05PM (#35594226) Homepage Journal

    I work directly with a VLIW architecture myself (the TI C6000 family of DSPs). From that perspective, I'm a little sad to see Itanium go. I realize EPIC isn't exactly VLIW, but they had an awful lot in common. Much of HP's and Intel's compiler research helps us other VLIW folks too.

    I think EPIC tried to live up to its name a little too much. The original Merced overreached, and so it ended up shipping far too late for its performance to be compelling. Everybody always zooms in on the lackluster x86 performance, but x86 wasn't at all interesting in the spaces Itanium wanted to play in originally. It wanted to go after the markets dominated by non-x86 architectures such as Alpha, PA-RISC, MIPS and SPARC. And had it come out about three years earlier, it might have had a chance there by thinning the field and consolidating the high-end server space behind EPIC.

    Instead, it launched late as a room-heating yawner. And putting crappy x86 emulation on board only invited comparisons to the native x86 line. That it made it all the way to Poulson is rather impressive, but smells more like contractual obligation than anything else.

    Rest in peace, Itanium.

  • by bloodhawk ( 813939 ) on Wednesday March 23, 2011 @09:13PM (#35594272)
    To sink it? Doesn't that imply that at some time it actually floated? That processor line has had all the floating ability of your average house brick since launch. Sure, for a while a few companies tried to fit the brick with lifejackets, but in the end it was always destined to sink to the murky depths of hell.
  • by BBCWatcher ( 900486 ) on Wednesday March 23, 2011 @09:19PM (#35594294)
    HP has very little software to offer, so with major software vendors (Microsoft, Red Hat, and now Oracle) fleeing Itanium, it certainly isn't good news for HP. Oracle Database is probably the most popular software product running on HP-UX, as a matter of fact, but Oracle's announcement represents the end of the line. Oracle has a lot of other significant products, too, like Tuxedo, WebLogic Application Server, and Siebel, among others. Ironically IBM may now be the biggest vendor of Itanium-compatible software. Of course this Oracle announcement is self-serving, but it's also brutally smart business strategy. Itanium really is dead as a doorstop without popular software. This move also kills HP's aspirations of overtaking IBM any time soon, and it also kills one of HP's more profitable business lines. (Well played, Larry.)
    • IBM stopped DB2 support for Linux on Itanium with version 9; 9.5 doesn't support it. Guess that leaves WebSphere for HP-UX, that bloated piece of shit that is an excuse for IBM to suck a client dry with consultants trying to make it useful.
    • by durdur ( 252098 )

      > This move also kills HP's aspirations of overtaking IBM any time soon

      Exactly - HP nowadays really wants to be IBM, a one-stop shop for hardware, software, and services. But they're not. IBM has a better mix of businesses and is executing better. HPQ operating margin - 10.49%, IBM operating margin - 19.97%. HPQ return on equity - 21.85%, IBM return on equity - 64.59% (from Yahoo finance).

  • by sitkill ( 893183 ) on Wednesday March 23, 2011 @09:51PM (#35594446)
    Not sure why the submitter didn't post the Intel response denying it: http://newsroom.intel.com/community/intel_newsroom/blog/2011/03/23/chip-shot-intel-reaffirms-commitment-to-itanium [intel.com] You would think Intel would of course deny it, but considering Intel just took the wraps [realworldtech.com] off the next revision of the Itanium, this is pretty much just FUD coming from Oracle.
  • by BlueCoder ( 223005 ) on Wednesday March 23, 2011 @11:40PM (#35595056)

    In all truthfulness it did have some ideas going for it, but it should have stayed a pet project: an R&D effort that still produced enough chips that the market could play with it in self-built systems. In my opinion they should have basically given the processors away to inspire developers for hobby and niche products. They wouldn't have lost as much money and would have had more realistic ambitions for it. They had the fabs and the prototyping equipment already...

    The Itanium was a processor designed for programming languages that could provide optimization hints... that could have a concept of the L1 cache and manipulate it, and be able to provide feedback to the processor when the compiler could do better branch prediction than the processor. Radical concept; the only problem was that you HAVE TO code to each processor model specifically. Caches changed and the processor logic changed with each revision. That's why they would have made better embedded processors. The generic systems that would benefit the most would be systems with source code you could compile right for the machine, dynamically compiled code, and code that could recompile and optimize itself.

    They should have been much more radical instead and designed for massively parallel systems based on a RISC design with minimal branch prediction. So even if the processors weren't running the most efficient code, a developer could at least attack a problem with the brute force of hundreds of threads at the same time. More or less, they should have aimed for something along the lines of the Cell processor. Another current story here on Slashdot is how the US Air Force took 1700 PS3s and turned them into a computer that qualifies in the top 40 for supercomputers.

    • by gl4ss ( 559668 )

      you forgot that you can sell crap big iron to banks for a hefty profit, no need to keep the production lines online all year even.
      it doesn't matter what the big iron is, you see, as long as it's not the same as the clerk is using on the desktop (or if it is, that it is at least named differently).

      massively parallel machines are easy to build.. but ehm, linear speed is what's interesting, really. that's what people would want at home; there would be so many more possibilities down that route than in going parallel with less speed

  • I don't understand why this is tagged 'yay'. What this means is that the world's largest chip maker, with partnership from the world's largest software company, couldn't get a competing architecture off the ground in any meaningful way. That's not yay; to me that is just a little sad. Sure we have great designs beneath all the baked-into-silicon legacy x86 translation, but as developers (especially the developers of compilers) we'll never get to see any of it, and we'll never get to reclaim any of that si
    • by Sj0 ( 472011 )

      Lamenting silicon use is a little silly, from where I'm standing.

      If you look at a modern processor, the entire decision-making part of the chip is absolutely minuscule. The biggest hog of silicon is cache.

      x86 and the PC standard is a boon to everyone. If you want to see what computers would look like without the benefit of the open architecture, look at smart phones -- even Android, fully open source, has people begging for updates to their phone's OS because everything is too locked down and proprietary (and

    • by gl4ss ( 559668 )

      it's a yay, the platform sucked. they can use the engineers for something else now. it competed for a LONG TIME - and did not do well.

  • by Anonymous Coward

    I agree with Oracle that it is close to over for the chip. Intel lost every good engineer working on it to AMD in Fort Collins, CO, and can't (even with massive financial incentives) coax anybody on their x86 teams to transfer over. Itanium is considered the kiss of death on a resume, so they are having a hard time even finding people willing to work on it. Work on Itanium is about 6 years behind original schedules! Originally designed and marketed as a performance leader over the Xeon series, it has fallen so f

    • Re: (Score:3, Interesting)

      by BLToday ( 1777712 )

      My old college roommate was offered a job in Intel's Itanium unit after finishing his PhD in compiler theory. He turned it down because "life's too short to spend it fixing Itanium."

  • It did exactly what it was supposed to: destroyed all the competition for i386.

    Where is Alpha now? What happened to SGI?
