
Intel's Itanium CPUs, Once a Play For 64-bit Servers And Desktops, Are Dead (arstechnica.com) 138

Reader WheezyJoe writes: Four new 9700-series Itanium CPUs will be the last Itaniums Intel ever ships. For those who might have forgotten, Itanium and its IA-64 architecture were intended to be Intel's successor to the 32-bit i386 architecture back in the early 2000s. Developed in conjunction with HP, IA-64 was capable as a server platform but was not backward-compatible with i386 and required emulation to run i386-compiled software. With the release of AMD's Opteron in 2003, featuring the alternative, fully backward-compatible x86-64 architecture, interest in Itanium fell; Intel eventually adopted AMD's technology for its own chips, and x86-64 is dominant today. In spite of this, Itanium continued to be made and sold for the server market, supported in part by an agreement with HP. With that deal expiring this year, these new Itaniums will be Intel's last.
  • It was still alive? (Score:5, Interesting)

    by TWX ( 665546 ) on Friday May 12, 2017 @03:04PM (#54407371)

    I guess I just sort of assumed that IA-64 was dead a long time ago, and figured Intel's gaming the benchmarks was essentially retribution against AMD for the success of the amd64 architecture.

    Does anyone remember the reasoning for dropping native support for i386 when these processors debuted? There have always been growing-pains when a manufacturer drops or severely impacts support for their install-base, but sometimes it's beneficial or necessary if an existing architecture is a dead-end.

    • by F.Ultra ( 1673484 ) on Friday May 12, 2017 @03:13PM (#54407407)
      It was a completely different architecture, so adding native i386 support would have required adding a complete i386 compute core to the chips. And the i386 architecture is a real mess, so they hoped to avoid their old sins, but for some reason didn't understand that their own i386 architecture had already "won". Myself, I have always mourned that Motorola never could increase the frequency of the MC680x0 beyond 66 MHz and keep up with Intel, because that architecture was a real beauty to program in assembler.
      • by TWX ( 665546 )

        Myself, I have always mourned that Motorola never could increase the frequency of the MC680x0 beyond 66 MHz and keep up with Intel, because that architecture was a real beauty to program in assembler.

        Historically that's one of the things that always boggled my mind: Intel's instruction set had no protected mode until quite late, so programmers had to do some interesting things that often resulted in problems and the computer crashing anyway (i.e. Windows BSOD), or had to rely on a single-tasking operating system where it didn't matter so much; Motorola's chips had Supervisor Mode that was a protected mode, but Apple chose to ignore its existence when writing "System"/MacOS, where, running multiple simultaneous applications in a GUI, it would have been highly beneficial. Just never understood that, especially when Apple had left the CLI world even before they completely dropped the Apple II line.

        • by jeremyp ( 130771 )

          Supervisor mode in the 68000 was not, in any sense, a proper protected mode. All it meant was that certain instructions were not available in user mode.

          • by F.Ultra ( 1673484 ) on Friday May 12, 2017 @03:58PM (#54407625)
            It could, however, be connected to an external MMU that would enter supervisor mode when the CPU accessed restricted memory, and later the 68030 included a proper MMU on chip. Even without an MMU, the 68000 could catch segfaults, which Kickstart 2.4 on the Amiga used to avoid bringing down the whole machine when a single process crashed.
            • by Anonymous Coward

              From time wasted on Wikipedia: they say you need a 68010 for that, which is just a fixed-up and slightly better 68000. Also, the Motorola external MMU sucked (it wasted a cycle accessing memory) and a third-party MMU was better.
              Also, spending the money and motherboard space on that was fine for Unix workstation vendors, but Apple, Atari, and Amiga were each making a cheap, highly integrated computer for consumers. Protected memory was vital for making a multiuser machine, but that was not the goal there and perhaps th

            • by Agripa ( 139780 )

              It could however be connected to an external MMU that would enter the supervisor mode when the CPU entered restricted memory and later the 68030 included a proper MMU on chip. Even without a MMU the 68000 could catch segfaults which was used by Kickstart 2.4 on the Amiga to not bring down the whole machine when a single process crashed.

              The 68000 did not save enough instruction state to recover from memory faults. Early 68000 workstations that included protected memory management used two 68000s operating out of phase, so that the trailing 68000 could restart the failed instruction after the leading 68000 triggered a memory fault.

              • It could of course not recover the segfaulted process, but the rest of the system could go on, something that was not that common back then on home computers. The 1.3 Kickstart, for example, had the system-wide Guru Meditation, which in 2.4 was replaced by a popup window letting the rest of the system continue. Not 100% stable, since there was no memory protection, but still a big step forward.
        • The PC drove the development of the IA-32 architecture, and the PC was a toy computer for home and office desktop markets that grew up in the hobbyist world. The same with the Apple product line; it wasn't focused on professional computing. It wasn't until the home and hobbyist computers got beefier and were able to approach the capabilities of other professional computing platforms that the need for more sophisticated operating systems and applications arose. Intel and Motorola were not idle during t

        • by slew ( 2918 )

          Myself, I have always mourned that Motorola never could increase the frequency of the MC680x0 beyond 66 MHz and keep up with Intel, because that architecture was a real beauty to program in assembler.

          Historically that's one of the things that always boggled my mind: Intel's instruction set had no protected mode until quite late, so programmers had to do some interesting things that often resulted in problems and the computer crashing anyway (i.e. Windows BSOD), or had to rely on a single-tasking operating system where it didn't matter so much; Motorola's chips had Supervisor Mode that was a protected mode, but Apple chose to ignore its existence when writing "System"/MacOS, where, running multiple simultaneous applications in a GUI, it would have been highly beneficial.

          Just never understood that, especially when Apple had left the CLI world even before they completely dropped the Apple II line.

          Supervisor mode in the 68k was pretty much just an alternate stack pointer. Virtual memory support and similar protections were a function of the peripheral chips used in the system back in those days (things weren't really highly integrated). The peripheral chip chosen by Apple for the Mac didn't support virtual memory, and without such memory protection it probably wasn't worth the effort. Also, you couldn't really run more than one program at a time anyhow.

          By the time Switcher and MultiFinder features were added to the Finder,

      • by ShanghaiBill ( 739463 ) on Friday May 12, 2017 @04:21PM (#54407757)

        Myself, I have always mourned that Motorola never could increase the frequency of the MC680x0 beyond 66 MHz and keep up with Intel, because that architecture was a real beauty to program in assembler.

        The features that made the 68k so nice for assembly programming, like addressing multiple unaligned memory locations in a single instruction, are precisely what made it so difficult to speed up.

        Imagine that you are a chip designer. You are implementing the silicon for "movl @a1, @a2". So you access the first byte at the address stored in a1, but you get a page fault. You trigger an interrupt, the OS swaps in the page, and then returns from the interrupt. Now, you must restart the instruction and fetch the other 3 bytes, but since the address is not aligned, they can be on a different page, so now you trigger another interrupt. The OS returns, and you can now fetch the 3 bytes. But wait a sec, what about the 1st byte? You can either go back and fetch it again, which might trigger yet another interrupt since that page may no longer be in memory, or you can have some extra "hidden" registers to hold that intermediate value (which can be one, two, or three bytes). So you deal with all that. FINALLY you have all 4 bytes. Whew. Now you need to do the SAME thing for the second operand, but that is even more complicated, because if the address straddles a page boundary you may end up with corrupted memory if one byte of the target is modified but the other bytes are not.

        A single instruction can generate up to 4 page faults (not including double faults). Now how do you deal with cache coherency on a multi-core system when a single operand can be partially read from or written to memory? Can you imagine all the silicon required to handle that?

        Eliminating complex instructions like this is precisely what makes RISC fast. A RISC instruction can load from memory, or store in memory, but never both, and unaligned addresses are usually a fatal error. There is never any dangling state to deal with.

        Instruction sets should be designed for compilers, not humans.
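
        To make the four-fault scenario concrete, here is a minimal C sketch (an illustration only, assuming a 4 KiB page size rather than any particular 68k-era MMU) of why one misaligned 4-byte operand can span two pages and fault twice; with a memory-to-memory move, the source and the destination can each straddle a boundary, which is how a single instruction gets to four:

            #include <stdint.h>
            #include <stdio.h>

            #define PAGE_SIZE 4096u   /* assumed page size, for illustration only */

            /* Does a len-byte access starting at addr cross a page boundary? */
            static int straddles_page(uintptr_t addr, size_t len)
            {
                return (addr / PAGE_SIZE) != ((addr + len - 1) / PAGE_SIZE);
            }

            int main(void)
            {
                /* A 4-byte operand starting 2 bytes before a page boundary:
                   bytes 0-1 sit on one page, bytes 2-3 on the next, so either
                   half can fault independently of the other. */
                uintptr_t addr = 2 * PAGE_SIZE - 2;
                printf("access at %#lx straddles a page: %s\n",
                       (unsigned long)addr,
                       straddles_page(addr, 4) ? "yes" : "no");
                return 0;
            }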

        • It's a common argument, but why does x86 continue to dominate in performance, performance-per-dollar, and performance-per-watt (if not absolute low watts)?

          Is it just because they have the best fabs or designers or the most experience? Those are probably true, but you seem like you know what you're talking about so I'd like to float a different theory:

          - x86 has a pretty expressive instruction set which improves code density enough that more code can fit in cache than RISC-y architectures
          - Modern processo

          • by ShanghaiBill ( 739463 ) on Friday May 12, 2017 @11:04PM (#54409193)

            Is it just because they have the best fabs or designers or the most experience?

            This is a big part of it. Intel has a huge market and can spread high NRE over millions of units. So they can devote a lot of engineering resources to each design iteration, and their fabs are fabulous (sorry).

            - x86 has a pretty expressive instruction set which improves code density enough that more code can fit in cache than RISC-y architectures

            This is only partly true. Original x86 code was dense, but extensions were tacked on that added prefixes to allow extended precision and extra registers. These made the code less dense.

            Code has much better locality of reference than data, since a lot of time is spent in tight loops. So it actually isn't super important to have a really big code cache. The cache can be a Harvard architecture, so code and data are cached separately and the code cache is read-only. Current x86 CPUs use separate "Harvard" caches for L1 (the level closest to the metal) and a unified cache for L2 and L3.
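
            If you want to see that split, a short C sketch (assuming the standard Linux sysfs layout under /sys/devices/system/cpu, which other operating systems won't have) can dump cpu0's cache levels and types; on typical current x86 parts it lists Data and Instruction caches at L1 and Unified caches at L2 and L3:

                #include <stdio.h>
                #include <string.h>

                /* Read one sysfs cache attribute for cpu0 into buf; returns 1 on success. */
                static int read_attr(int idx, const char *name, char *buf, size_t len)
                {
                    char path[128];
                    FILE *fp;

                    snprintf(path, sizeof path,
                             "/sys/devices/system/cpu/cpu0/cache/index%d/%s", idx, name);
                    fp = fopen(path, "r");
                    if (!fp)
                        return 0;
                    if (!fgets(buf, (int)len, fp)) {
                        fclose(fp);
                        return 0;
                    }
                    fclose(fp);
                    buf[strcspn(buf, "\n")] = '\0';
                    return 1;
                }

                int main(void)
                {
                    char level[16], type[32], size[32];

                    /* cpu0 usually exposes index0..index3: L1 Data, L1 Instruction, L2, L3. */
                    for (int i = 0; i < 16; i++) {
                        if (!read_attr(i, "level", level, sizeof level))
                            break;
                        if (!read_attr(i, "type", type, sizeof type) ||
                            !read_attr(i, "size", size, sizeof size))
                            break;
                        printf("L%s %-12s %s\n", level, type, size);
                    }
                    return 0;
                }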

            Intel couldn't figure out how to make a compiler good enough

            One of the early RISC principles espoused by Dave Patterson [wikipedia.org] was that the chip and the compiler should be developed simultaneously. If the compiler guys need a key instruction, then the silicon guys add it to the design. If an instruction is rarely used by the compiler, then it should be eliminated and replaced in software with a slower (but infrequent) sequence.

            A big mistake with Itanium was that this didn't happen. The hardware was done first, and then the design was thrown over the wall to the compiler team. Instead there should have been only ONE team, with compiler and hardware guys working side by side.

            Another big mistake was that Intel tried to monopolize the compiler market. Only their own rather expensive compiler produced acceptable code. But at the time it was released, the server market was rapidly moving to Linux/FreeBSD, and gcc was the FREE pre-installed default compiler. Not many people wanted to pay extra for a compiler. Intel would have been much better off if they had teamed up with the FSF and worked hand-in-hand to have a gcc port working on day one, using feedback from the gcc designers to influence the hardware design as well.

            AMD did exactly this with their x86-64 design. They hired gcc developers and they worked closely with the hardware designers. When the chip was released, gcc was ready, and both Linux and FreeBSD were able to boot up and run on some of the very first systems.

        • Or you simply stop using the page fault semantics designed for processors without this ability and instead give the MMU/OS both addresses and their respective lengths in the page fault so it can bring in both pages at once. However, even if this turned out to be unsolvable, this particular functionality is not what made the 68k family so nice for assembly programming; the most prominent features were sane mnemonics, arguments in the correct order (i.e. source, destination, and not the strange Intel reverse way),
      • It was a completely different architecture, so adding native i386 support would have required adding a complete i386 compute core to the chips

        Someone has a poor memory. The first generation of Itaniums were advertised as being able to run x86 code and came with an x86 emulator for backwards compatibility. It was a key selling point of the architecture: code that was recompiled got all of the shiny new benefits, but legacy code would continue to work. Unfortunately, the emulator performed so badly that the second generation ended up sticking a Pentium on die to run IA32 code. This, of course, drove the price up and made Itanium even less compe

        • Note the key difference between "running code natively" and "using an emulator" here. So not so much with the poor memory ;-)
          • There's not much difference between an emulator that runs in the firmware and running natively. If you want to make that distinction, no x86 chip since the 80286 has executed x86 natively.
      • by LWATCDR ( 28044 )

        Motorola could have if they had the money. Motorola decided to go with the 88000 RISC chip for their future. They then dropped that and teamed up on the PowerPC, and then spun that off as Freescale.
        It is really sad that we only have 4 or so ISAs today: IBM's POWER line, ARM, x86, and SPARC.
        IBM left the low-end mass market years ago and Sun never really played in that market. DEC had the Alpha and HP had the PA-RISC but those have all gone away. Intel keeps trying but after the x86 it seems that the only succ

      • by DarkOx ( 621550 )

        It was a completely different architecture, so adding native i386 support would have required adding a complete i386 compute core to the chips.

        I am not a chip designer, so I don't know for certain, but I don't think this is quite true. i386 has for a long time been implemented on top of a microarchitecture. That is, there is an instruction decoder that translates x86 instructions into one or more micro-instructions, which are then decoded and dispatched to the underlying execution hardware: adders, program counters, arithmetic logic units, etc.

        There would have needed to be a separate x86 decoder but probably not what we think of as an additional 'core
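
        As a toy illustration of that decoder-versus-core split (a deliberately simplified C sketch, not how any real Intel front end is built), two hypothetical decoders for different instruction sets can feed the same pool of micro-ops:

            #include <stdio.h>

            /* Toy micro-ops shared by every front end in this sketch. */
            typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_t;

            /* Hypothetical x86-style decoder: a memory-destination add ("add [mem], reg")
               expands into load + add + store micro-ops for the shared back end. */
            static int decode_x86_add_mem(uop_t *out)
            {
                out[0] = UOP_LOAD;
                out[1] = UOP_ADD;
                out[2] = UOP_STORE;
                return 3;
            }

            /* Hypothetical RISC-style decoder: a register-to-register add maps to a
               single micro-op, but it feeds the very same execution units. */
            static int decode_risc_add(uop_t *out)
            {
                out[0] = UOP_ADD;
                return 1;
            }

            int main(void)
            {
                uop_t uops[3];
                printf("x86-style add decodes to %d micro-ops\n", decode_x86_add_mem(uops));
                printf("RISC-style add decodes to %d micro-op\n", decode_risc_add(uops));
                return 0;
            }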

      • by Agripa ( 139780 )

        If anything, the later 68K was worse than x86 because of things like double-indirect addressing, which really makes instruction restart and fault recovery difficult when pipelining and out-of-order execution are used. Motorola's ColdFire, which replaced the 68K, discarded the difficult instructions and addressing modes.

        This did not make more advanced 68K implementations impossible; however, they were beyond Motorola's capabilities or level of interest.

      • by Bert64 ( 520050 )

        It's not that they couldn't increase the frequency, it's that they chose not to...
        Motorola thought PowerPC was the future, and many of their m68k customers abandoned them to move to other architectures (or in many cases develop their own).
        If anything, m68k would have been easier to scale than x86, but Motorola wanted a clean break and not to be held back by legacy baggage (even though their legacy baggage wasn't as heavy as Intel's).

    • by e r ( 2847683 )
      Itanium didn't drop support for i386. It's a completely different ISA. Itanium no more dropped support for i386 than ARM did.
      • by Darinbob ( 1142669 ) on Friday May 12, 2017 @04:11PM (#54407687)

        I think there's a bubble effect happening with a lot of people: they have only a limited viewpoint, based on seeing a monoculture for so long. Hearing these sorts of questions is sort of like hearing someone ask why Chevrolet exists when we already have Ford. They've only ever seen and used a PC, perhaps, and only vaguely know of other types of computers, and certainly have no background in computer architecture.

        • Hearing these sorts of questions is sort of like hearing someone ask why Chevrolet exists when we already have Ford. They've only ever seen and used a PC, perhaps, and only vaguely know of other types of computers, and certainly have no background in computer architecture.

          And here I am, all out of mod points.

      • The original Itanium (Merced) had hardware i386 support. They removed it in later versions.

    • by ShanghaiBill ( 739463 ) on Friday May 12, 2017 @03:18PM (#54407435)

      Does anyone remember the reasoning for dropping native support for i386 when these processors debuted?

      There was a belief by some that emulation would be "good enough" since the IA-64 would be so blazingly fast, and emulation would only be needed for a few years during the expected phase-out of x86. Meanwhile, new applications and upgrades would be issued as "dual binaries" that could run natively on either platform.

      Two things went wrong with this plan:
      1. The IA-64 turned out to not be as blazingly fast as Intel hoped.
      2. AMD offered a good alternative at lower cost and far less hassle.

      Although Intel's plan may appear unrealistic in hindsight, it actually could have worked. Apple managed a similar transition from 68k to x86 a few years later using the same strategy.

      • Although Intel's plan may appear unrealistic in hindsight, it actually could have worked.

        The problem with Intel's plan was that it depended on magical compiler technology which only they were developing. If AMD hadn't come along with the x86-64 architecture they might have got enough fire under it to get it to go somewhere... eventually. Or if they had somehow got other players involved. Their C compiler may be wondrous on their legacy processors, but that obviously didn't predict whether they could make VLIW performant in a timely fashion.

      • Apple did it twice:

        MC680x0 -> PowerPC
        PowerPC -> x86-64

      • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Friday May 12, 2017 @03:52PM (#54407593)

        Intel managed to kill off the Alpha and the PA-RISC with the Itanium. MIPS left the high end. SPARC and POWER looked quite endangered at one point.

        All this was done while not being actually competitive at any point. You have to admire that, in an Evil Corp sort of way.

        • by Zo0ok ( 209803 )

          Well, "kill off the Alpha [...] with the Itanium"...
          Intel bought the rights for Alpha and discontinued it. That killed it.
          Itanium would never have been able to compete with Alpha and replace it otherwise.

          Alpha was great but when Pentium Pro came out and delivered good performance at a much lower price... that was the beginning of the end for Alpha.

          • by haruchai ( 17472 ) on Friday May 12, 2017 @04:28PM (#54407801)

            Well, "kill off the Alpha [...] with the Itanium"...
            Intel bought the rights for Alpha and discontinued it. That killed it.
            Itanium would never have been able to compete with Alpha and replace it otherwise.

            Alpha was great but when Pentium Pro came out and delivered good performance at a much lower price... that was the beginning of the end for Alpha.

            The Alpha died when Compaq acquired DEC and dumped most of the engineering team, who then joined some of the old Cyrix designers acquired by AMD and became the K7 engineering team that delivered the Athlon in 1999.

      • by plopez ( 54068 ) on Friday May 12, 2017 @04:10PM (#54407675) Journal

        The other thing that went wrong was the use of cheap laptop chips in rack servers. Why buy an expensive Itanium server when you can buy a boatload of cheap ones, allowing for more flexibility, more redundancy for failover, and lower energy consumption?

        • by ShanghaiBill ( 739463 ) on Friday May 12, 2017 @04:56PM (#54407903)

          The other thing that went wrong was the use of cheap laptop chips in rack servers.

          Indeed. In the mid-90s, I often heard about the need for "big iron" in servers and data centers. Many people assumed that servers needed expensive high-powered CPUs and lots of memory, and this would be a lucrative market.

          I realized this was bullcrap when I visited Hotmail in 1996 (a year before they were acquired by Microsoft). I expected to see a few slick-looking million-dollar servers, each filling a rack from floor to ceiling. Nope. Instead there was some cheap metal shelving from Home Depot, covered with cardboard from some old boxes cut up into squares. On each square of cardboard was a cheap commodity motherboard running FreeBSD, and a $2 SLA battery. The cooling was some cheap clip-on desk fans from Walmart. No wonder they were able to provide email for free.

          That night I thought about what I had seen. If Hotmail could do it that way, anyone could. The next day I shorted Sun's stock.

          • by TWX ( 665546 )

            I've heard that Google did much the same thing.

            What's funny now is that what started as cheap commodity motherboards bastardized into "racks" somehow evolved into these massive blade chassis systems like Cisco UCS. So now you have a machine that's the price of big iron but has many of the same problems as using cheap commodity hardware as far as managing the whole lot goes.

            It'll be interesting to see where the next direction takes us, especially if organizations get tired of paying for Smartnet.

      • Intel's plan did work. The IA-64 wasn't planned to be an x86 killer directly, though it would have been a nice bonus if it had worked. There is far more to the computing world and to Intel than the PC. The higher-level IA-64 architecture was far ahead of i386, probably even the Pentium, though the silicon process needed work. Even so, it had a pretty good run in high-end server markets.

        • by 0123456 ( 636235 )

          "The IA-64 wasn't planned to be an x86 killer directly, though it would have been a nice bonus if it had worked."

          Uh, yes, it was. Everyone was supposed to switch to IA-64 after a few years, with x86 becoming a legacy product.

          Problem was, most people and companies had a lot of x86 software lying around, and it ran like a three-legged dog on IA-64, so they couldn't make the switch.

    • Because native support for i386 is technically a pain in the ass, though it does make business sense if you need to be in the home computer market. It was one of Intel's attempts to break free of its own monopoly.

      i386 is a legacy product with a line of design decisions for backwards compatibility that stretches all the way back to the first 4004. Surely we've managed to come up with better designs since then. i386 is unsuitable for modern high-performance computing. IA-64 follows a lot of good RISC princip

      • by TWX ( 665546 )

        When I look at where software has gone, I guess I'm surprised that Intel didn't try to push this architecture wider than servers. The rate of replacement of equipment is astoundingly high these days, and a lot of software is web-delivered and requires post-download work to make it run anyway, so in many ways the base of legacy software to support would have shrunk dramatically if Intel could have gotten OS developers and the big software suite developers on board. Microsoft was already accustomed to writing Windows

    • HPaq exhibiting mastery of the fine dead art of horse beating.

    • by Agripa ( 139780 )

      Does anyone remember the reasoning for dropping native support for i386 when these processors debuted? There have always been growing-pains when a manufacturer drops or severely impacts support for their install-base, but sometimes it's beneficial or necessary if an existing architecture is a dead-end.

      Besides the extra complication of producing Itanium with hardware support for x86, the Itanium's x86 performance fell further behind with every new generation of x86. It would have been sufficient if the Pentium 4 had been the last generation of x86 processors, as Intel intended, but AMD screwed that up by releasing AMD64.

    • The real reason was market monopoly, i.e. an attempt to lock out AMD. Here's the back story...

      * Around 1980, IBM decided to put out an Apple ][ competitor machine, with potential sales of maybe a couple of hundred thousand during its lifetime production run

      * "The IBM Way" included things like insisting on multiple sources for each component, so that no one supplier could demand higher fees on short notice, or go bankrupt and disrupt IBM's production capacity.

      * Back then IBM was *BIG* in computing, and Intel

    • Can I get a few nths for free? As relics, I mean....
  • by isdnip ( 49656 ) on Friday May 12, 2017 @03:07PM (#54407379)

    It never was a very good architecture, but there was a VMS port to it.

    • by Anonymous Coward

      From the start it was 2 generations behind the Pentium 3/4 as far as process technology went. Plus the performance loss of the memory translator hub chips because it was designed assuming RAMBUS had won (I forget if this was both the very early limited production run SDRAM systems, or strictly the DDR1+ systems.) As such the early silicon was slow, hot, had degraded memory performance, and was saddled with a single video device and 1 or 2 PCI busses. Not an auspicious start. By the time Intel had the sense

  • by ErichTheRed ( 39327 ) on Friday May 12, 2017 @03:12PM (#54407405)

    If I remember correctly, it was revealed a few years back that HP was paying Intel to continue developing Itanium simply because it had bet on the processor for its Integrity servers, which run HP-UX and NonStop OS and used to be the only place to run OpenVMS. Obviously these are legacy operating systems, but where they're used they're highly entrenched and can't be written off with an "oh, just migrate to x86 Linux and Java" kind of mindset. OpenVMS is actually living on; HP sold the development rights to a new company, which is porting it to x86 -- interesting to me because that was the first OS I ever supported in any professional capacity. But it looks like HP-UX is probably going to get killed off as slowly as an OS like that can.

    There was also a tiny window where Itanium had some life, around the early 2000s before x86-64 became a thing. If you had an application that required large (for that time) amounts of memory, it was basically your only choice if you didn't want to go AIX, Solaris or similar. I worked on such a system around that time (mainframe migration) and the Itaniums were pretty quirky compared to x86 servers. UEFI is one of the things that lives on from that era and actually made it over to the mainstream x86 platform.

    • HP-UX and the 9000s don't even seem to be available via hp.com. Haven't seen them there in a long, long time.

      • I take that back - hpe.com .....

      • Well no, they died off along with other minicomputers and workstations, due to the influx of the PC monoculture. I think even the Itanium was enough for HP to deprecate its PA-RISC line. The good-enough solution won out, with the 800 lb Intel and AMD gorillas able to keep squeezing more and more performance out of a legacy architecture and supply it at commodity prices.

    • There was also a tiny window where Itanium had some life, around the early 2000s before x86-64 became a thing.

      They stuck tons of cache on it, which made it look good for server machines and fooled HPaq into throwing tons of money at it. That, and a few DEC refugees who somehow survived the sinking.

    • Obviously these are legacy operating systems, but where they're used they're highly entrenched and can't be written off with an "oh, just migrate to x86 Linux and Java" kind of mindset.

      It's funny that you put it that way, because that's exactly what we did with our HP/UX installations. All of our internal stuff was based on UniVerse (I feel dirty just admitting that), and UniVerse existed on both HP/UX and Linux. So we moved UniVerse from HP/UX to Linux, thereby setting us up to eliminate some expensive UniVerse licenses. Interestingly enough, HP reduced our HP/UX license costs to zero to try to keep us from moving to Linux.

      After moving our servers to Linux, we started writing all of o

      • by 0123456 ( 636235 )

        "It's funny that you put in that way, because that's exactly what we did with our HP/UX installations."

        The other day, I was trying to run a program on one of our older Linux systems, which we're currently 'upgrading' to run three servers in VMs on a single real server, instead of the slowly-failing decade-old machines it currently runs on. Because the program wouldn't run, I did a 'file' on it, and discovered it was actually an HP/UX executable from the days when the system used to run on HP/UX, before it w

    • by sconeu ( 64226 )

      NonStop is not a legacy OS. HPE has, however, moved to x86-64 for its newer NonStop servers.

        NonStop is not a legacy OS. HPE has, however, moved to x86-64 for its newer NonStop servers.

        Hey thanks for this, I didn't realize they moved it from IA-64 over to x64:
        https://en.wikipedia.org/wiki/... [wikipedia.org]

        The place I worked at that used Tandems had Itanium versions, but I left in 2012. I wonder if they transitioned over to x64 hardware. This would also allow easier virtualization (?)

        • by sconeu ( 64226 )

          Indeed. x86 systems were released in late 2014/early 2015. They've announced vNonstop, which is a virtualized NonStop, and may have already released it (don't know if it's in GA yet).

    • A place I worked at a few years ago had some really critical software that ran on HP Tandem. Makes me wonder if we can expect to see an x64 release of NonStop OS in the near future.

    • I worked on such a system around that time (mainframe migration)

      Platform Solutions?

      and the Itaniums were pretty quirky compared to x86 servers.

      Quirky, yes. I've worked with Itanium HP-UX systems. They're kind of a pain to debug in assembly when you don't have symbols; the "make the compiler do the work" philosophy behind VLIW does not make for happy times when staring at disassembled instructions and a list of register values.

      Itanium is also not forgiving of Undefined Behavior, which is good in that it helps enforce better coding practices, but bad when it produces weird heisenbugs that take forever to track down.

      An example: Ita
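
      For what it's worth, the classic trigger for that kind of heisenbug is something like reading an uninitialized local; a minimal C sketch (a generic illustration, not necessarily the example the poster had in mind) shows the pattern:

          #include <stdio.h>

          int main(void)
          {
              int x;          /* never initialized */

              /* Undefined behavior: using the indeterminate value of x. On forgiving
                 hardware you merely get a garbage result; a stricter architecture and
                 an aggressive compiler are free to turn this into a crash, or into a
                 bug that comes and goes with optimization level. */
              if (x > 0)
                  printf("positive\n");
              else
                  printf("non-positive\n");
              return 0;
          }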

  • I remember when... (Score:5, Informative)

    by yorgasor ( 109984 ) <ron@@@tritechs...net> on Friday May 12, 2017 @03:19PM (#54407441) Homepage

    The good ol' days, when Intel just announcing the Itanium caused all the other proprietary Unix vendors' stock to crash. Everyone was sure that within one generation, all the SPARC & POWER chips would shrivel up and die. HP rolled over immediately and gave up their line of PA-RISC procs to use Itanium. But Intel crippled their Xeons in fear that the Xeons would eat into their Itanium line, and then AMD walked in and gave people what they really wanted with their Opterons. There were a few years when things were really rocky for Intel, and it was very entertaining to watch, especially since I worked for them at the time :)

    • by Anonymous Coward

      HP didn't roll over. HP developed the Itanium instruction set and Intel inked a contract to produce new silicon revisions until 2017. Oh, would ya look at what year it is...

      It was a joint venture between HP and Intel. HP wanted something better than PA-RISC and Intel wanted to cut into that market.

      • Actually, the Itanium architecture was not very different from PA-RISC; it just allowed issuing multiple instructions to parallel execution units without the CPU trying to solve dependency issues between those execution units, leaving that task to the compiler.
        I am pretty sure HP would have been better off not forfeiting their PA-RISC architecture to Intel at that time - the PA-8000 CPUs ran circles around the x86 CPUs of the same era while using far fewer transistors.
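
        A toy C sketch of that premise (a greedy illustration only; real IA-64 bundles use templates and stop bits, which this ignores) shows the compiler, not the CPU, deciding which operations are independent enough to issue together:

            #include <stdio.h>

            #define BUNDLE_WIDTH 3   /* issue width the "compiler" schedules for */

            /* Toy "instruction": writes one register and reads up to two others. */
            typedef struct {
                const char *text;
                char dst;
                char src[2];         /* '\0' means the slot is unused */
            } insn_t;

            /* Does insn b read anything written earlier in the current bundle? */
            static int depends(const insn_t *b, const char *written, int n)
            {
                for (int i = 0; i < n; i++)
                    if (b->src[0] == written[i] || b->src[1] == written[i])
                        return 1;
                return 0;
            }

            int main(void)
            {
                /* a = x + y;  b = a * 2;  c = z - 1;  d = c + b; */
                insn_t prog[] = {
                    { "add a, x, y", 'a', { 'x', 'y' } },
                    { "mul b, a, 2", 'b', { 'a', 0   } },
                    { "sub c, z, 1", 'c', { 'z', 0   } },
                    { "add d, c, b", 'd', { 'c', 'b' } },
                };
                int n = (int)(sizeof prog / sizeof prog[0]);

                /* Greedy bundling: pack instructions into a bundle until we hit a
                   dependence or run out of slots; the hardware would then issue each
                   bundle's slots to parallel units without checking dependencies. */
                char written[BUNDLE_WIDTH];
                int slots = 0, bundle = 0;

                for (int i = 0; i < n; i++) {
                    if (slots == BUNDLE_WIDTH || depends(&prog[i], written, slots)) {
                        bundle++;
                        slots = 0;
                    }
                    written[slots++] = prog[i].dst;
                    printf("bundle %d: %s\n", bundle, prog[i].text);
                }
                return 0;
            }

        Here the independent mul and sub end up in the same bundle, while the final add, which needs both of their results, starts a new one.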
    • by Anonymous Coward

      Intel seems to cannibalize their own technology transitions by aiming for high margins. Meanwhile, an alternative technology that is easy to scale in production and easier to adopt comes to eat their lunch. They have been doing it once again with their Optane drives, or so it seems. I guess I can finally stop waiting for that cheap Itanium workstation for personal use.

      But Intel crippled their Xeons in fear that the Xeons would eat into their Itanium line,

      I think the only currently missing RAS property in Xeons is the instruction replay. But there may be more.

    • by Anonymous Coward

      Intel really deserved to have AMD embarrass the hell out of them back then and I thoroughly enjoyed it. The whole proprietary RAMBus thing and first-generation Pentium 4s running slower than some Pentium IIIs at 3x the price was very disappointing. The FDIV bug and PR blunder in the '90s were forgivable, but this wasn't. I was so excited to get my first P4 and boy was I disappointed. Not only did it run hotter than anything before it, it really was slower than the Pentium III. It sucked donkey balls big time. Also,

  • by heson ( 915298 ) on Friday May 12, 2017 @03:30PM (#54407489) Journal
    Dupe from many years back. The Itanium has been well dead and buried for a long time; the zombies were only for the extra fooled customers.
  • I had fond memories of the AMD Athlon 64 processor when it first came out. After owning a half-dozen Socket 7 processors and just as many motherboards, this one kicked ass and I had it for a long time. I didn't upgrade to a 64-bit version of Windows until Vista came out and I built a new system, jumping from dual- to quad- to eight-core in ten years.

    https://en.wikipedia.org/wiki/Athlon_64#Single-core_Athlon_64 [wikipedia.org]

    • by haruchai ( 17472 )

      I had fond memories of the AMD Athlon 64 processor when it first came out. After owning a half-dozen Socket 7 processors and just as many motherboards, this one kicked ass and I had it for a long time. I didn't upgrade to a 64-bit version of Windows until Vista came out and I built a new system, jumping from dual- to quad- to eight-core in ten years.

      https://en.wikipedia.org/wiki/Athlon_64#Single-core_Athlon_64 [wikipedia.org]

      I did pretty much the same, except I made the 2-4-8 core journey in 6 years and erased Vista after struggling with it for less than a month. Now that Ryzen is here and seems to be living up to the hype, this Xmas I'll build my first all-new desktop in 5 years.

  • Itanium is a direct result of the hardware people and the software people refusing to rub elbows in the same room.

    Itanium's designers basically declared war against their software peers. Our beautiful machine would run fast, if only your crappy software didn't expose so many execution hazards.

    Thus Intel set up a grand gauntlet for the compiler writers to finally prove their ultimate hardware manhood: by writing an Itanium compiler that didn't suck.

    We all know how that went.

    I've always thought they made the c

  • The real reason Itanium failed isn't because it was inferior but rather because they failed to proliferate support for it in compilers while maintaining a higher price point. If they had ensured before its release that it was well supported by all the major compilers (instead of exclusively by Intel's compilers) and had given it a price similar to x86 chips, then they could have had a real chance. Instead, they relied on their market position and expected people to catch up to them in time.

    This conquer

    • The real reason Itanium failed isn't because it was inferior but rather because they failed to proliferate support for it in compilers while maintaining a higher price point.

      And because it was inferior.

  • The original Itanium1 had i386 compatibility.
    Worked at a telecom at the time that was a big HP customer, so we were given an Itanium1 box as a demo.
    We tried on it:
    - A beta version of HP-UX (11.16), which could not run any PA-RISC code, and nothing was available for it.
    - Itanium RedHat
    - i386 RedHat

    That Itanium1 was as fast^H^H^H^Hslow as a Pentium1 in i386 Compatibility mode. Also it was slower than a PA-RISC with the same clock.

    We gave it back and ordered more PA-RISC machines.

    The Itaniums on the market today are

    • Lol, we got an Itanium1 machine as a demo unit for peanuts. I think we paid $1 for it as a line item on a large order of other stuff.

      It sucked balls so bad we ended up using it as a team MP3 server...
      Even that had issues.

  • Intel's plan was to deliver a high-MHz, average-performance Pentium 4 for the broad market. Remember how the first Pentium 4 at 1.3-1.4 GHz barely managed to keep up with a 1 GHz Pentium 3 (or the Athlon)? The very long pipeline of the Pentium 4 was meant to work well for "multimedia", and for other purposes Intel considered the CPU fast enough.

    The Itanium was supposed to be superior with its 64-bit memory and good general performance, and this would make it the only viable option for servers and high performance wo

  • Does Gentoo still work on these? Does any Linux? Does FreeBSD? HP-UX I'm sure does, but IA64 was well supported by major Linux distros until it was pretty well abandoned, long after it was a clear failure.

    I've considered buying a cheapo old IA64 to screw around with, but I would want to be able to install Linux on it, if possible.

  • Yes, incompatibility was an insurmountable problem for IA-64, and x86-64 was what the market needed and wanted. That said, VLIW had other issues of its own that limited its success.

    VLIW emerged in the mid-'90s as a potential successor to RISC, aimed at improving performance per chip. It had many innovative aspects and more fully leveraged advanced compiler capabilities. Unfortunately, VLIW improved only the "infinite cache" component of uniprocessor performance, and put greater load (per useful instructio
    • When I looked at it years ago, it seemed like they placed a huge burden on compiler writers to get decent performance.

      The instruction set had some really interesting features however. The x86 backwards compatibility was the fatal flaw.

  • ...and finally hit the bottom. It never achieved any mass market, but did carve out a niche that wasn't sustainable.

  • Wonder if Intel is still touchy when people call Itanium by the name Itanic instead.

    Oh well. It's been Itanic for so long, I have to put in effort to remember the actual name.

  • IBM got full of themselves and so did Intel when they implemented the Itanium. It was interesting, but they priced it ridiculously and abandoned their enthusiast (aka early adopter) market.
