Oracle Claims Intel Is Looking To Sink the Itanic

Blacklaw writes "Intel's ill-fated Itanium line has lost another supporter, with Oracle announcing that it is to immediately stop all software development on the platform. 'After multiple conversations with Intel senior management Oracle has decided to discontinue all software development on the Intel Itanium microprocessor,' a company spokesperson claimed. 'Intel management made it clear that their strategic focus is on their x86 microprocessor and that Itanium was nearing the end of its life.'"
  • Sparc (Score:5, Informative)

    by Gary Franczyk ( 7387 ) on Wednesday March 23, 2011 @08:22PM (#35593922)

    Now that Oracle owns Sparc processors from Sun, there is no reason for them to help out their competitor.

  • by AtariDatacenter ( 31657 ) on Wednesday March 23, 2011 @08:33PM (#35593992)

    I still remember the day the HP sales/technical team came on-site to give us a presentation: flashy videos with Carly Fiorina's new vision of the future, and a bright tomorrow with a new CPU line... out with PA-RISC and in with the Itanic. Their sales team looked at each other nervously as we told them we considered the arrangement a failed vision. It didn't take long to see that dumping their in-house CPU to go with the Itanic would doom them to irrelevancy. And it did.

    Now the Itanium itself is sinking into irrelevance. It took too long. This chip was a disaster. Glad to see it go.

  • Yep, I think HP is the main customer for Itanium nowadays. Windows is going to drop support after Server 2008 R2 (Itanium support in Server 2008 was already limited to certain editions). Red Hat dropped support for it with RHEL6.

  • by Third Position ( 1725934 ) on Wednesday March 23, 2011 @08:52PM (#35594130)

    You have to wonder what chip architecture HP is going to move to now, considering losing Itanium leaves them high and dry. Of course, Itanium was largely developed by HP. Perhaps HP will continue the processor line?

    Having to do another architecture switch certainly isn't going to do HP any good. To this day, most of the HPUX servers in my shop are PA-RISC. Moving to Itanium has generally been painful enough that when our development teams are forced to upgrade their applications, they usually opt to rehost them on Linux on x86 rather than HPUX on Itanium. Only a few applications where that isn't adequate have made it to HPUX on Itanium. Putting their customers through another painful transition isn't going to win HPUX any friends.

  • Re:Can't blame them (Score:4, Informative)

    by SpazmodeusG ( 1334705 ) on Wednesday March 23, 2011 @09:06PM (#35594234)

    What are you talking about? The early Itaniums were x86-32 compatible.
    "Itanium processors released prior to 2006 had hardware support for the IA-32 architecture to permit support for legacy server applications"

    http://en.wikipedia.org/wiki/Itanium#Architectural_changes [wikipedia.org]

    It wasn't until later that the Itaniums lost their hardware-based x86 compatibility.

  • by sitkill ( 893183 ) on Wednesday March 23, 2011 @09:51PM (#35594446)
    Not sure why the submitter didn't post the Intel response denying it: http://newsroom.intel.com/community/intel_newsroom/blog/2011/03/23/chip-shot-intel-reaffirms-commitment-to-itanium [intel.com] You would think Intel would deny it regardless, but considering Intel just took the wraps [realworldtech.com] off their next revision of the Itanium, this is pretty much just FUD coming from Oracle.
  • Re:Sparc (Score:2, Informative)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Wednesday March 23, 2011 @09:57PM (#35594494) Homepage Journal

    Immaterial. The x86 is a lousy architecture and adding onto it hasn't helped any.

    Intel's latest stuff is certainly not the best that ever was. It has no support for content-addressable memory and no support for MIMD; it isn't asynchronous; it's not 128-bit; it doesn't use wafer-scale integration; it doesn't support HyperTransport (which is faster than PCI Express); and it can't do on-the-fly instruction set translation. All of these things have been done on other architectures, making those architectures superior in these respects to Intel's latest and greatest. Some of them were being done by others back when Intel's best offering was the 8080.

    IBM's POWER7 not only comes close, it beats the crap out of the Intel clone of the AMD x64 design. Yes, Intel were forced to clone AMD's design because theirs stank.

    As for "ever has", the IIT 8087 was two orders of magnitude faster than Intel's. The 64000 was not only better than the 8086, it was a LOT better. The Transputer was 32-bit and could scale to the thousands of cores in a single box when Intel was 16-bit with an absolute limit of one core on one CPU.

    In fact, I would be willing to bet that a 16-way Intel box with the latest CPUs could still be beat in raw processing power AND addressable memory space by a hypercube of Inmos T400s dating to 1984. THAT was "the best stuff that ever was" and I challenge you to show me a single thing Intel can do better now than Inmos could do then.

  • Re:...and? (Score:2, Informative)

    by Anonymous Coward on Wednesday March 23, 2011 @10:06PM (#35594548)

    I'm not surprised at the biased, poorly researched article that was published once again. Intel specifically said that they have no plans on dumping it and that Oracle is full of shit. The headline reads like an attack on Intel even though Intel did nothing besides deny what Oracle said.

  • Re:Sparc (Score:5, Informative)

    by Darinbob ( 1142669 ) on Wednesday March 23, 2011 @11:11PM (#35594922)

    The problem is that the x86 is like the living dead. It's an ancient architecture that was a bad design even when it was new, and it's now being held together with duct tape and an oxygen tent. Yes it's very fast, but it's very expensive to make it that way too. It works because Intel has tons of resources to throw at it. It's saddled with decades of backwards-compatibility baggage as well: 16-bit modes, segmentation, I/O ports, and other things that no one uses anymore if they can help it (a quick sketch of that port I/O interface follows this comment). It requires tons more support chips than many embedded CPUs. The real reason x86 should die is that it's an embarrassment to computer scientists to see this dinosaur still lumbering about.

    ARM, on the other hand, has some decent designs. It's not low power because it was designed to be low power, but because it has a relatively simple RISC design, and because it was easily licensed for people to fabricate, so it got used in a lot of low-power designs (i.e., an ARM core included as part of a larger ASIC). But there are faster ARM designs too, and with the same resources that the x86 has it would be really great. ARM is not inherently a "small chip". The problem is trying to compete head to head with x86 when everyone knows it will lose. So its high-power designs are not intended for PC desktops, but for specialized purposes.

    Internally, the modern x86 is really a RISC at heart anyway. But it's got a really massive support system on top of that which converts the older-style CISC instruction set into a VLIW/RISC style that's more efficiently executed in a superscalar way. Just like the original RISC argument, it makes sense to rip out that complexity and then either use the resources to make things faster or leave them out entirely to get a cheaper and more efficient design.

    Any time a better design is out there, it seems to get clobbered in the marketplace because it just doesn't pick up enough steam to compete with x86. This is why alternative CPUs tend to be used for embedded systems, specialized high-speed routers, or parallel supercomputers. Even Intel can't compete with itself; Itanium isn't the only alternative they've tried. It's not just performance either: most Unix workstations had models that ran rings around x86, but they were expensive too because of low volumes sold.

    The public doesn't understand this stuff. Sadly, neither do a lot of computer professionals. All they like to think about is "how fast is it" or "does it run Windows".

    The analogy with cars is wrong. x86 isn't a Peterbilt truck; it's a V8 Chrysler land yacht with a cast-iron engine, or maybe a gas-guzzling SUV. People stick with it because they don't trust funny little foreign cars, they feel safer wrapped in all that steel, they need to compensate for inadequacies, they feel more patriotic when they use more gas, etc. It's what you drive if you don't want to be different from everyone else.
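
    For the curious, here is roughly what that legacy port I/O interface looks like from user space: a minimal sketch, assuming x86 Linux with glibc's sys/io.h wrappers and root privileges (port 0x80 is the traditional POST-code debug port, so the write is generally harmless).

        /* Minimal sketch of x86 legacy port I/O (assumptions: x86 Linux,
         * glibc's sys/io.h, run as root). The IN/OUT port instructions date
         * back through the 8086 to the 8080. */
        #include <stdio.h>
        #include <sys/io.h>   /* ioperm(), outb() -- glibc, x86 only */

        int main(void) {
            /* Ask the kernel for access to one I/O port (needs CAP_SYS_RAWIO). */
            if (ioperm(0x80, 1, 1) != 0) {
                perror("ioperm (are you root?)");
                return 1;
            }
            outb(0x42, 0x80);   /* OUT instruction: write a byte to port 0x80 */
            printf("wrote POST code 0x42 to port 0x80\n");
            ioperm(0x80, 1, 0); /* drop the port permission again */
            return 0;
        }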

  • Re:Sparc (Score:4, Informative)

    by PCM2 ( 4486 ) on Thursday March 24, 2011 @12:44AM (#35595370) Homepage

    It's saddled with decades of backwards-compatibility baggage as well: 16-bit modes, segmentation, I/O ports, and other things that no one uses anymore if they can help it.

    Actually, Google Native Client (NaCl) uses segmentation to sandbox downloaded code. It's either a brutal hack or a totally clever trick, I guess, depending on your POV.
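
    The trick is that a segment's limit bounds every memory access made through it. Below is a minimal sketch of the underlying mechanism, assuming 32-bit x86 Linux (build with gcc -m32); NaCl's real sandbox is far more elaborate than this.

        /* Minimal sketch (assumptions: 32-bit x86 Linux, built with gcc -m32;
         * NaCl's actual sandbox is far more elaborate). Installs an LDT data
         * segment covering one page, loads it into %fs, and shows that a
         * store through %fs past the segment limit faults (SIGSEGV) instead
         * of reaching the rest of the process. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <sys/mman.h>
        #include <asm/ldt.h>   /* struct user_desc */

        int main(void) {
            /* The one page the segment is allowed to reach. */
            char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            struct user_desc ud;
            memset(&ud, 0, sizeof ud);
            ud.entry_number   = 0;                  /* first LDT slot */
            ud.base_addr      = (unsigned long)buf; /* segment base = our page */
            ud.limit          = 0;                  /* with limit_in_pages=1, */
            ud.limit_in_pages = 1;                  /*   the limit is one page */
            ud.seg_32bit      = 1;
            if (syscall(SYS_modify_ldt, 1, &ud, sizeof ud) != 0) {
                perror("modify_ldt");
                return 1;
            }

            unsigned int sel = (0 << 3) | 4 | 3; /* index 0, TI=1 (LDT), RPL=3 */
            asm volatile("movw %w0, %%fs" : : "q"(sel));

            asm volatile("movb $42, %%fs:0");    /* inside the limit: fine */
            puts("in-bounds store through %fs succeeded");
            asm volatile("movb $42, %%fs:4096"); /* past the limit: faults */
            puts("never reached");
            return 0;
        }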

  • Re:Sparc (Score:2, Informative)

    by Anonymous Coward on Thursday March 24, 2011 @01:45AM (#35595590)

    Download specbench, build, and enjoy. A single P7 core running a single thread is fucking assloads faster than a Nehalem core (and that's _without_ heavy FP or decimal).

    For extra laughs, watch how the gap grows to nearly 2x by moving to GCC on both the POWER7 and Xeon systems.

    "i do this for a living" - is that you, Demerjian?

  • Re:Sparc (Score:5, Informative)

    by Waffle Iron ( 339739 ) on Thursday March 24, 2011 @02:10AM (#35595668)

    Internally, the modern x86 is really a RISC at heart anyway. But it's got a really massive support system on top of that which converts the older-style CISC instruction set into a VLIW/RISC style that's more efficiently executed in a superscalar way.

    If you look at a picture of any modern CPU die, the real estate is totally dominated by the caches. That "massive support system" (which in reality is only a tiny fraction of the whole die area) serves largely as a decoder that unpacks the compact CISC-style opcodes (many of which are only one or two bytes long) into whatever obscure internal superscalar architecture is in vogue this year. This saves huge amounts of instruction cache space compared to unpacking bloated one-size-fits-all RISC-style opcodes into some similar internal architecture du jour, so the x86 can end up needing less die area overall (the code-size sketch after this comment gives a rough feel for the density difference). This is one reason that, despite what elitist geeks say, over the years x86 has usually provided more bang for the buck than any competing processor family.

    This scheme is so advantageous that even ARM has tacked on a similarly convoluted opcode decompressor (Thumb). If ARM ever evolves into a mainstream general-purpose high-end CPU, there will undoubtedly be dozens more layers of cruft added to the ARM architecture to make it competitive with x86, at which point it will be similarly complex. (For another example, look at how the POWER architecture ended up over time. You can hardly call it RISC any more.)
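
    One rough way to see the density argument for yourself: compile the same small function for x86 and for a fixed-width RISC target and compare the emitted code sizes. A sketch (the cross-compiler name is illustrative; exact sizes vary by compiler and version):

        /* density.c -- a toy function for comparing compiled code size across
         * ISAs. Example invocations:
         *   gcc -Os -c density.c && size density.o          # x86 host
         *   arm-linux-gnueabi-gcc -marm -Os -c density.c    # fixed 4-byte ARM
         *   arm-linux-gnueabi-gcc -mthumb -Os -c density.c  # compact Thumb */
        unsigned hash_bytes(const unsigned char *p, unsigned n) {
            unsigned h = 5381;
            while (n--)
                h = h * 33 + *p++;   /* djb2-style mix: load, multiply, add */
            return h;
        }

    The -mthumb build is the "opcode decompressor" mentioned above in action: a compressed encoding that the core expands in hardware.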
