Oracle Claims Intel Is Looking To Sink the Itanic
Blacklaw writes "Intel's ill-fated Itanium line has lost another supporter, with Oracle announcing that it is to immediately stop all software development on the platform. 'After multiple conversations with Intel senior management Oracle has decided to discontinue all software development on the Intel Itanium microprocessor,' a company spokesperson claimed. 'Intel management made it clear that their strategic focus is on their x86 microprocessor and that Itanium was nearing the end of its life.'"
Sparc (Score:5, Informative)
Now that Oracle owns Sparc processors from Sun, there is no reason for them to help out their competitor.
Re: (Score:2)
And they managed to get in a good, FUDdy parting shot on their way out (lovely chaps, those folks at Oracle).
Unless of course they're telling the truth. Which would be a shame, if not a surprise. Itanium deserved at least a slightly better life than it got (and Intel, once burned, may never try moving away from i86 again, god help us).
Re:Sparc (Score:5, Interesting)
x86 is a small part of what's in a modern x86 CPU.
There's hardly any good reason to choose anything else over it, either. You can't beat it on performance the way Alpha did. PPC lost its simplicity long ago (and comes with some annoyances that make me wish it would just die).
Intel's latest stuff is the best that ever was. Nobody else does or ever has come close.
Re:Sparc (Score:5, Insightful)
Well, yes and no. Certainly in the space between the notebook computer and any but the mightiest supercomputers there's no reason at all not to go with x86. But in the mobile processor space, where ultra-low TDP is the order of the day, ARM has a big leg up on x64. Intel sold off their XScale division (which was only ARMv5 anyway) and now they're losing this increasingly important segment of the market.
I'm not counting Intel out by a long shot in that race, but ARM is the new hotness for most geeks.
Re: (Score:2)
Well, ARM uses a hell of a lot less power, but it is also a hell of a lot less powerful clock for clock, so it evens out, doesn't it? I mean, sure, in a cell phone, where its main job is running a highly specialized OS with tons of little support chips to help it out, it does great, but I wouldn't want to do my day-to-day desktop computing on it.
Why do you think ARM necessarily means less computing power? Maybe that is so at present, but it doesn't [wikipedia.org] seem so for [wikipedia.org] the near future [linuxfordevices.com]
Re: (Score:2)
I wouldn't want to do my day-to-day desktop computing on it.
If that word must be interpreted stricto sensu, can you please point me to where one can find a desktop computer powered by ARM today? I would fully appreciate the reference, thank you.
Re:Sparc (Score:5, Informative)
The problem is that the x86 is like the living dead. It's an ancient architecture that was badly designed even when it was new, and it is now being held together with duct tape and an oxygen tent. Yes, it's very fast, but it's very expensive to make it that way too. It works because Intel has tons of resources to throw at it. It is saddled with decades of backwards-compatibility issues as well: 16-bit modes, segmentation, IO ports, and other things that no one uses anymore if they can help it. It requires tons more support chips than many embedded CPUs. The real reason x86 should die is that it's an embarrassment to computer scientists to see this dinosaur still lumbering about.
ARM on the other hand has some decent designs. It's not low power because it was designed to be low power, but because it has a relatively simple RISC design, and because it was easily licensed for people to fabricate, so it got used a lot in low-power designs (i.e., an ARM core included as part of a larger ASIC). But there are faster ARM designs too, and with the same resources that the x86 has it would be really great. ARM is not inherently a "small chip". The problem is trying to compete head to head with x86 when everyone knows it will lose. So its high-power designs are not intended for PC desktops, but for specialized purposes.
Internally the modern x86 is really a RISC at heart anyway. But it's got a really massive support system on top of that which converts the older-style CISC instruction set into a VLIW/RISC style that's more efficiently executed in a superscalar way. Just like the original RISC argument, it makes sense to rip out that complexity and then either use the resources to make things faster or just leave it out entirely to get a cheaper and more efficient design.
Any time a better design is out there, it seems to get clobbered in the marketplace because it just doesn't pick up enough steam to compete with x86. This is why alternative CPUs tend to be used for embedded systems, specialized high-speed routers, or parallel supercomputers. Even Intel can't compete with itself; Itanium isn't the only alternative they've tried. It's not just performance either: most unix workstations had models that ran rings around x86, but they were expensive too because of low volumes sold.
The public doesn't understand this stuff. Sadly, neither do a lot of computer professionals. All they like to think about is "how fast is it" or "does it run Windows."
The analogy with cars is wrong. X86 isn't a Peterbilt truck; it's a V8 Chrysler land yacht with a cast-iron engine, or maybe a gas-guzzling SUV. People stick with it because they don't trust funny little foreign cars, they feel safer wrapped in all that steel, they need to compensate for inadequacies, they feel more patriotic when they use more gas, etc. It's what you drive if you don't want to be different from everyone else.
Re:Sparc (Score:4, Informative)
It is saddled with decades of backwards-compatibility issues as well: 16-bit modes, segmentation, IO ports, and other things that no one uses anymore if they can help it.
Actually, Google Native Client (NaCl) uses segmentation to sandbox downloaded code. It's either a brutal hack or a totally clever trick, I guess, depending on your POV.
Re: (Score:3)
In that case, it won't work on x86-64, because segmentation doesn't work in 64-bit mode. Xen also uses segmentation, so the hypervisor and guest can share a linear address space and not need a TLB flush on hypercalls.
Segmentation is actually really nice, but the x86 implementation sucks. You have two segment tables. The GDT is shared between all processes; the LDT is per process (a TLB flush is required to change it, and the next few dozen memory accesses will be very slow). Each contains 8192 entries. For an OO
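For anyone who hasn't stared at the manuals: each entry in those tables is an 8-byte descriptor. A minimal C sketch of the classic layout (the field names are mine, for clarity; the bit assignments follow the Intel manuals):

```c
/* Sketch of one 8-byte x86 segment descriptor, as stored in the
   GDT or LDT described above. Field names are made up; the bit
   layout follows the Intel manuals. */
#include <stdint.h>

struct seg_descriptor {
    uint16_t limit_low;       /* segment limit, bits 0-15           */
    uint16_t base_low;        /* segment base,  bits 0-15           */
    uint8_t  base_mid;        /* segment base,  bits 16-23          */
    uint8_t  access;          /* type, S, DPL (privilege), present  */
    uint8_t  limit_hi_flags;  /* limit bits 16-19 + AVL/L/D/G flags */
    uint8_t  base_high;       /* segment base,  bits 24-31          */
};

/* 8192 entries x 8 bytes: each table tops out at 64 KB. */
```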
Re:Sparc (Score:5, Informative)
Internally the modern x86 is really a RISC at heart anyway. But it's got a really massive support system on top of that that converts the older style CISC instruction set into a VLIW/ RISC style that's more efficiently executed in a superscalar way.
If you look at a picture of any modern CPU die, the real estate is totally dominated by the caches. That "massive support system" (which in reality is only a tiny fraction of the whole die area) serves largely as a decoder that unpacks the compact CISC-style opcodes (many of which are only one or two bytes long) into whatever obscure internal superscalar architecture is in vogue this year. This saves huge amounts of instruction cache space compared to unpacking bloated one-size-fits-all RISC-style opcodes into some similar internal architecture du jour. Thus, the X86 can end up needing less die area overall. This is one reason that, despite what elitist geeks say, over the years X86 has usually provided more bang for the buck than any competing processor family.
This scheme is so advantageous that even ARM has tacked on a similarly convoluted opcode decompressor. If ARM ever evolves into a mainstream general-purpose high-end CPU, there will undoubtedly be dozens more layers of cruft added to the ARM architecture to make it competitive with X86, at which point it will be similarly complex. (For another example, take a look at how the POWER architecture ended up over time. You can hardly call it RISC any more.)
Re: (Score:3)
The ARM Thumb decompressor isn't that convoluted. And it has fixed-length 16-bit words like most RISC machines (even the 16-bit version of MIPS does the same). That means it has a nice, simple fetch cycle. If you have no cache (as in smaller ARM versions), then it's one instruction fetch per two instructions. Thumb mode is basically a tradeoff of performance for space: you end up executing more instructions overall.
Re:Sparc (Score:5, Interesting)
Since this is an article about Itanium, it's worth noting that Itanium copies the predicated instruction model from ARM. This doesn't just make the code denser; it also meant that ARM could get away without a branch predictor for a very long time (newer ARM chips have one). It works very nicely with superscalar architectures, because the instructions are always executed and the results are only retired if the condition is met. You always know the state of the condition flag by the time the predicated instructions emerge from the pipeline, so it's trivial to implement in comparison with the kind of speculative execution required for predicted branches on x86.
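To make the predication idea concrete, here's a toy C function and, in the comment, roughly the branch-free ARM sequence a compiler can emit for it (hand-written for illustration, not actual compiler output):

```c
/* A toy function whose branch ARM-style predication removes.
   With conditional execution a compiler can emit, roughly:
       CMP   r0, r1     ; compare a and b, set condition flags
       MOVLT r0, r1     ; travels the pipeline unconditionally,
                        ; retires only if the LT predicate holds
       BX    lr
   No branch is taken either way, so no branch predictor is needed. */
int max_of(int a, int b) {
    return (a >= b) ? a : b;
}
```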
Lots of people seem to assume that x86 is translated into RISC and then x86 has no impact on the rest of the execution pipeline. This is absolutely not the case. The x86 instruction set is horrible. Lots of things have side effects like setting condition registers, which cause complex interactions between instructions in a pipelined implementation, and insanely complex interactions in an out-of-order design. This complexity all has to be replicated in the micro-ops. Something like Xeon then has a pass that tries to simplify the micro-ops. You effectively have an optimising JIT, implemented in hardware, which does things like transforming operations that generate side effects into ones that don't if the relevant condition flags are guaranteed to be replaced by something else before they are accessed. All of this adds to complexity and adds to the power requirement.
Oh, and some of these interactions are not even intentional. Some of the old Intel guys tell a story about the first test simulations of the Pentium. It implemented all of the documented logic, but then they found that most of the games that they tried running on it failed. On the 486, one of the instructions was accidentally setting a condition flag due to a bug in the design. Game designers found that they could shorten some instruction sequences by taking advantage of this. In the Pentium, they didn't recreate this bug, and software broke. After the first phase of testing, they had to go back and recreate it (adding some horrible hacks in the Pentium design in the process), because if games suddenly crashed when people upgraded to a Pentium then people would blame Intel (Windows 95 had a hacky work-around to prevent SimCity crashing on a use-after-free bug, for the same reason). All of these things add to complexity and in hardware complexity equals power consumption.
Or, if you are that way inclined, you could argue that Java/.NET bytecode compiled at run time achieves the same thing.
And, if you are, then Thumb-2EE is a much nicer target than x86 for running this code. It has instructions for things like bounds-checked array access, which really help performance in JIT'd VM code.
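For a sense of what that buys, here's a C sketch of the check a JIT otherwise has to emit inline around every array access (the helper and its names are hypothetical, purely for illustration):

```c
/* What a JIT must emit inline on targets without a hardware-assisted
   bounds check; a Thumb-2EE-style checked access folds this test into
   the access itself. checked_load/raise_oob are made-up names. */
#include <stddef.h>

int checked_load(const int *arr, size_t len, size_t idx,
                 void (*raise_oob)(void)) {
    if (idx >= len)      /* the test the hardware instruction absorbs */
        raise_oob();     /* VM jumps to its out-of-bounds handler     */
    return arr[idx];
}
```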
Re: (Score:3)
Check the architecture reference for the ARM11. The Thumb decoder on the ARM11 translated Thumb instructions into ARM instructions. With the Cortex A8 and newer chips, there are two separate decoders: one for ARM, one for Thumb-2 (and Thumb-2EE). When you're in Thumb-2 mode, the ARM decoder is powered down. When you're in ARM mode, the Thumb-2 decoder is powered down. This is responsible for a big chunk of the power savings from ARM11 to Cortex A8.
On an x86 chip, the micro-op decoder is about as complex
Re: (Score:2)
There is an almost trivial migration path that Intel/AMD could take to get rid of those features whilst still retaining a large part of the market. They could produce x86-64 CPUs that boot into 64-bit long mode right from the start, scrapping most of the compatibility modes and features (real mode and virtual 8086 mode can go). x86-64 is a really neat, clean architecture when taken on its own.
Such CPUs could be badged as "pure 64-bit" CPUs. They'd require a 64-bit OS and drivers and they wouldn't run older sof
It's just ARM heads (Score:5, Insightful)
Comes from the general geek thing of liking the underdog (though one has to ask how much of an underdog they really are, given their massive marketshare in embedded devices) and from hating CISC. A lot of geeks take CS classes and learn a bit about processor theory, but not any of the CE/EE needed to understand the lower levels, and thus decide CISC = bad, RISC = good.
What it all adds up to is they hate on Intel and love ARM, and want to see ARM in the desktop space.
As you said, I've yet to see anything showing ARM is faster than Intel in an equal setting. Yes, a Core i7 uses a lot of power. However it does a lot. Not only is it fast at the sort of operations ARM does, it does other things as well. Like 64-bit. You think ARM isn't doing that just because they are jerks? No, it is because 64-bit needs more silicon, and thus more power. How about heavy hitting vector units? Same deal.
ARM is great for what it does but those who think that it is some amazing x86 replacement just haven't done any looking. Turns out Intel is pretty much the best there ever was when it comes to getting a lot out of silicon. They produce some powerful chips. Could ARM design one as powerful? Maybe, but guess what? It wouldn't be a tiny fraction of a watt deal anymore. It'd be as big and power hungry as Intel's offerings.
You can see this from other companies as well. If x86 really was the problem, and another architecture could do so much more with less, then why doesn't anyone else do it? Remember IBM, Hitachi, Sun: they all made non-x86 chips. Yet none of them are killing Intel in terms of performance per watt. IBM's POWER chips are a great example. They have an apt name: they are fast as hell, and draw a ton of energy. They really are for high-end servers (which is what IBM designed them for). Despite being RISC-based (though you'll find desktop/server RISC chips are quite complex both in terms of number of instructions and capability), they are not some amazing low-power monsters that can rip x86 apart. They are fast, powerful, high-end chips that take a lot of silicon and a lot of juice to do what they do. Go have a look at the massive heatsink for a POWER5 chip on Wikipedia.
Different chips, different markets.
Re: (Score:2)
Well, ARM uses a hell of a lot less power, but it is also a hell of a lot less powerful clock for clock, so it evens out, doesn't it?
The ARM does more than x86 per watt but less per clock. What this means is that which is best depends entirely on what your bottleneck is. If you're cooling-limited (which a lot of installations are, especially when it comes to servers; getting the heat out of the racks and out of the building is the limiting factor) then ARM looks a lot sweeter, because it allows you to pack processors in much more densely. That in turn saves massively.
OTOH, if you're not cooling-limited (and not in a low-power situatio
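A back-of-the-envelope sketch of the cooling-limited case above (all numbers are invented for the example, not measurements of any real ARM or x86 part):

```c
/* Under a fixed power (cooling) budget per rack, perf-per-watt decides
   total throughput, not perf-per-clock. Illustration values only. */
#include <stdio.h>

int main(void) {
    double rack_budget_w = 8000.0;          /* assumed cooling limit  */
    double big_core_w = 95.0,  big_perf = 100.0;   /* fast, hot chip  */
    double small_core_w = 5.0, small_perf = 10.0;  /* slow, cool chip */

    int n_big   = (int)(rack_budget_w / big_core_w);    /*   84 chips */
    int n_small = (int)(rack_budget_w / small_core_w);  /* 1600 chips */

    printf("big chips:   %4d, total perf %.0f\n", n_big,   n_big * big_perf);
    printf("small chips: %4d, total perf %.0f\n", n_small, n_small * small_perf);
    /* 8400 vs 16000: the slower-per-clock part wins the rack if (and
       only if) its perf-per-watt is higher and the load parallelizes. */
    return 0;
}
```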
Re: (Score:2)
Also, a little nitpick, since you are off by about a decade - Intel solved the 4GB problem way back with the Pentium Pro. People think otherwise because Microsoft's low-end software couldn't go past 4GB while their high-end stuff could on the same hardware. After 1995 the 4GB barrier was a cheap-end-of-town Microsoft problem. Other vendors and linux were comp
Re: (Score:2)
Well, yes and no. Certainly in the space between the notebook computer and any but the mightiest supercomputers there's no reason at all not to go with x86. But in the mobile processor space, where ultra-low TDP is the order of the day, ARM has a big leg up on x64
Yep. But in the context of the Oracle behemoth database server, do mobile processors have any relevance? It seems that they do - even if an ARM-based server [linuxfordevices.com] is no longer what one would call "mobile".
Putting one on top of the other, could it be that the Itanium heavyweight approach is indeed a dinosaur of the past?
Re:Sparc (Score:4, Insightful)
And this segment is *important*, because already I do as much browsing and web-surfing on my Motorola Droid 2 Android phone as on my fire-breathing Intel Core i7 laptop computer.
Remember that x86 started out as the cheap chip on the block that was "good enough" for basic stuff that little people could afford, and it slowly grew upward and increased its applicable market segments until it, now, is the high end of the marketplace.
ARM is now potentially in a similar situation. And like the x86 before it, it has tremendous inertia in the smartphone platform, any model of which is easily capable enough to operate as a PC for most uses for most people. It uses something less than 1/100th the power of my laptop and is a reasonable, convenient stand-in for said laptop for pretty much all personal use other than my work. (I'm a software engineer.)
I've already started to note the conflict: do stuff on the phone or the laptop? So far, it's mostly worked because stuff I do on the phone is pretty much "in the cloud" and is accessible from the PC.
But pictures? I've taken a few hundred pictures, and keeping them in sync starts to become a hassle...
At some point, it could make sense to jump, to switch from one to the other. Why couldn't my phone have a plug or a bluetooth connection to a keyboard, monitor, etc?
Re: (Score:2)
I think you've got a false dichotomy here.
It's highly, HIGHLY unlikely that x86 is going to be usurped by anything any time soon. Part of the reason is, despite the apparently universal hatred of it around here, the legacy of x86.
There's always going to be that one application that you just can't find a replacement for. Even among FOSS software, there's a good chunk that is non-trivial to port to a non-x86 architecture. This is fine in sectors like smart phones, where the segment isn't so bogged down in legacy app
Re: (Score:2)
How much of that x86 software that simply cannot be ported also can't run on an emulator?
Re: (Score:2)
History shows us that "we can emulate it" is not an acceptable alternative most of the time.
Apple managed to get away with it, but they managed to get away with a lot of dramatic platform shifts because they have dictatorial control over their product. They could switch their entire product line over to ARM tomorrow and Apple fans would have no choice but to switch.
X86 and the PC architecture are different than that. They're more democratic, which often means innovation must maintain the status quo or risk
Re: (Score:2)
Even among FOSS software, there's a good chunk that is non-trivial to port to a non-x86 architecture.
Can you give some examples, please? (I'm not trolling, I'm just curious.) I've heard that Debian and recently Ubuntu have ARM ports, but I've never used them. What's missing from these ports that's commonly available in the "normal" x86 distributions?
Re: (Score:2)
Or maybe Intel is more worried about the new ARMs [slashdot.org] race.
Re: (Score:2, Informative)
Immaterial. The x86 is a lousy architecture and adding onto it hasn't helped any.
Intel's latest stuff is certainly not the best that ever was. It has no support for content-addressable memory and no support for MIMD, it isn't asynchronous, it's not 128-bit, it doesn't use wafer-scale integration, it doesn't support HyperTransport (which is faster than PCI Express) and it can't do on-the-fly instruction set translation --- all these things have been done on other architectures, making those architectures sup
Re: (Score:3)
This story is about the further decay of Intel's one-time flagship product, the Merced. If anything, this story shows that x86 and extensions of it DO have a very important place in the market. Despite 64-bit architectures having existed forever before x86-64, it wasn't until the Opteron and Athlon 64 that 64-bit became commonplace. It wasn't Merced, it wasn't DEC Alpha, it wasn't a Motorola processor.
Ignoring the practical reasons why x86 continues to survive may make sense in a vacuum of academic computer science, bu
Re: (Score:3)
On the low-power mobile and embedded side, x86 is out. Never mind power-performance - absolute power levels are what matter most. And the big volume in CPUs is in this market, from smartphones on the upper end down to windshield-wiper controllers and the like on the low end.
On the very, very high end, again, there's good reason not to use x86, and instead do something like Hitachi's Sparc-based CPUs. You have basically low or no concern for binary compatibility - you're most likely running a custom-ro
Re: (Score:2)
And you can buy 7 of the Intel processor systems for the price of a single POWER7 system. A slight performance advantage for a single generation doesn't do you a damn bit of good when Intel is tick-tocking every 2 years while POWER refreshes every 5, and Intel is at least one process tech ahead of everyone else, including IBM. In 6 months to a year the Intel processor will catch up and then exceed the POWER; 5 years later IBM will leap ahead again. That is, providing the trend keeps up and IBM doesn't abandon po
Re: (Score:2)
That's kind of a weird comparison, though. Power7 cores have 4 hw threads. Nehalem has 2 'hyper' threads.
Like any tool, you pick the right one for the job. Nehalem is quite fast on a single thread, but if you have a web server processing boatloads of transactions/second, you may look towards a tool that is fast on many threads and can churn through many transactions concurrently.
Re: (Score:2, Informative)
Download specbench, build, and enjoy. A single P7 core running a single thread is fucking assloads faster than a Nehalem core (and that's _without_ heavy FP or decimal).
For extra laughs, watch how the gap grows to nearly 2x by moving to GCC on the POWER7 and Xeon systems.
"i do this for a living" - is that you, demerjian?
Re:Sparc (Score:5, Insightful)
Unless of course they're telling the truth.
Intel is strongly denying [intel.com] Oracle's claims that Itanium is near end-of-life. So it looks like more Oracle FUD, and probably intended to harm HP-UX rather than Intel.
Ya HP is calling bullshit too (Score:4, Insightful)
http://www.businessweek.com/news/2011-03-23/hp-calls-oracle-move-shameless-gambit-to-hurt-competition.html [businessweek.com]
I'm much more inclined to believe Intel and HP on it. While the Itanium did not become the be-all, end-all for computers that Intel hoped for (they wanted to go to it because their cross-licensing is for x86, not IA-64), it has not been a failure. People like to joke about it and rag on it, but all that means is they've done little to no research. It is a competitive chip in the super high-end market. When you need massive DB servers or the like, it is a real option and one that people use.
Now as to what kind of future it'll have, I can't say. The high-end segment keeps shrinking as normal desktop hardware gets better and better. You can stick four 8-core Xeons in a system right now and get some great performance at a good (relatively speaking) price.
At any rate I wouldn't listen to anything Oracle says, particularly about competitors. They are not known for their truthfulness, or for their sense of fair play.
Re: (Score:2)
While the Itanium did not become the be-all, end-all for computers that Intel hoped for (they wanted to go to it because their cross-licensing is for x86, not IA-64), it has not been a failure. People like to joke about it and rag on it, but all that means is they've done little to no research. It is a competitive chip in the super high-end market. When you need massive DB servers or the like, it is a real option and one that people use.
The only people running "massive DB servers" on Itanium are people who had HP-UX shops before PA-RISC went away and migrated to Itanium. I don't think I've ever met anyone who went out and bought HP-UX + Itanium and introduced it into their shop.
Itanium is a fine processor - but it solves all the wrong problems. It's fantastic for scientific compute apps - 64 64-bit registers, woo-hoo! - but it's not really a competitive solution for mainstream business use.
At any rate I wouldn't listen to anything Oracle says, particularly about competitors. They are not known for their truthfulness, or for their sense of fair play.
I don't think anyone has made better moves over
Re: (Score:3)
Unless of course they're telling the truth.
Intel is strongly denying [intel.com] Oracle's claims that Itanium is near end-of-life. So it looks like more Oracle FUD, and probably intended to harm HP-UX rather than Intel.
That's a really silly analysis. Oracle could not care less about HP-UX because they don't compete in the proprietary Unix market. No one does. Yes, Oracle owns Solaris, but Ellison's smart enough to know that proprietary Unices only exist to sell the servers attached to them. There's no money in selling proprietary Unix operating systems by themselves.
Now that PA-RISC is gone, the only thing HP-UX runs on is Itanium. Already, you can't run any Microsoft or Red Hat products on Itanium. And those are just compan
Re: (Score:2)
Intel is obligated to continue developing Itanium, or HP sues them. Itanium is going nowhere, and Oracle is spreading FUD.
FTFY. Other than that, your assertions all ring true to me.
Re: (Score:2)
Intel is obligated to continue developing Itanium, or HP sues them. Itanium isn't going anywhere, and Oracle is spreading FUD.
Really? Do you think someone using an Oracle database on IA-64 is going to convert to a different DB? I don't think so.
Re: (Score:2)
Intel is obligated to continue developing Itanium, or HP sues them. Itanium isn't going anywhere, and Oracle is spreading FUD.
I'm highly skeptical of your argument. Are you saying that HP holds an iron-clad contract saying that Intel must develop Itanium for as long as HP wants?
Re:Sparc (Score:5, Funny)
Intel is looking over its shoulder at ARM right now
That's a given. When you look over your shoulder, you can't help but see your arm.
Re:Sparc (Score:4, Insightful)
It's cleverer, and assholier, than just saying that.
An old lawyer's trick.
Instead of saying the obvious - i.e., "We won't support our competitor's (HP's) fastest computers because we make hardware now" - Oracle spreads FUD about the longevity of its competitor's product line by leaking information from anonymous sources at its competitor's sole supplier.
Even if Intel and HP completely deny it, their customers will be thinking it all along.
Re: (Score:2)
Now that Oracle owns Sparc processors from Sun, there is no reason for them to help out their competitor.
Oracle develops and sells both Solaris and their database software for x86 platforms, which they do not own.
I think it is more the fact that (a) they *never* had a version of Solaris for Itanium; and (b) with both RHEL and HP-UX dropping support for Itanium, they would have no platform to run their databases on.
Re: (Score:2)
I've seen many Sparcs, but Itaniums only via ssh.
And current x86s make much more sense. Itanium was a flawed research experiment (though it did live longer than most such).
that's still around? (Score:3, Insightful)
I didn't realize the Itanium was still being produced. I thought they shut it down years ago.
Re: (Score:2)
It's still being used in certain proprietary big-iron systems. And it still kicks some ass. But it won't supplant the genetic ingrainment of x86, which itself is hardly x86 any more. Intel is still selling it, but only the foolish are buying it to use in new designs.
The processor that sunk HP's UNIX line (Score:5, Informative)
I still remember the day the HP sales/technical team came on-site to give us a presentation. Flashy videos with Carly Fiorina's new vision of the future. And a bright tomorrow with a new CPU line... out with PA-RISC and in with Itanic. Their sales team looked at each other nervously as we gave our evaluation of the arrangement: a failed vision. It didn't take them long to figure out that dumping their in-house CPU to go with the Itanic would doom them to irrelevancy. And it did.
Now the Itanium itself is sinking into irrelevancy. It took too long. This chip was a disaster. Glad to see it go.
Re:The processor that sunk HP's UNIX line (Score:5, Informative)
Yep, I think HP is the main customer for Itanium nowadays. Windows is going to drop support after Server 2008 R2 (support was limited in Server 2008 to certain parts). Red Hat dropped support for it with RHEL6.
Re:The processor that sunk HP's UNIX line (Score:5, Informative)
You have to wonder what chip architecture HP is going to move to now, considering losing Itanium leaves them high and dry. Of course, Itanium was largely developed by HP. Perhaps HP will continue the processor line?
It certainly isn't going to do HP any good having to do another architecture switch. To this day, most of the HPUX servers in my shop are PA-RISC. Moving to Itanium has generally been painful enough that when our development teams are forced to upgrade their applications, they generally opt to rehost them on Linux on x86 rather than HPUX on Itanium. Only a few applications where that isn't adequate have made it to HPUX Itanium. Putting their customers through another painful transition isn't going to win HPUX any friends.
Re:The processor that sunk HP's UNIX line (Score:4, Interesting)
I worked at a computer company that built servers using PA-RISC CPUs at the time. We got our hands on some Itanium samples, and needless to say, we decided to migrate the platform to Xeon instead.
Re: (Score:2)
Seems unlikely to me. Intel would have moved away from x86 if it could. Thinking about it, the whole architecture is something they don't have very tight control over, as the very existence of AMD and other competitors shows. If Intel had been able to lock the world into something like the i960, I'm sure they'd be happier than a pig in shit.
Intel may have tried to push Itanium, but it wouldn't have worked. They were hardly able to push it as a server solution.
Re: (Score:2)
Kind of sad. PA-RISC was a nice design and very fast. The problems were more business-oriented. Developing your own chip is expensive, so companies want to be either chip makers or computer builders, but not both. Second, the high-end market made a leap from unix-oriented servers to Windows-based servers, so customers don't like oddball chips that their software suppliers don't support. And in this context, "oddball" means anything that isn't x86. It is also convenient from a business perspect
Re:The processor that sunk HP's UNIX line (Score:4, Insightful)
I left in '97, but I am sure those roadmaps had to be quietly adjusted each time Intel's new chip was delayed (over and over). It was well past 2000 when the thing finally came out, and in the end it was a huge disappointment (dare I say disaster) after PA-RISC had been sailing along smoothly for so long. The perf was terrible, the instruction set was a mess, and pretty much the entire industry did its best to avoid it. I'm surprised it took this long for Intel to throw in the towel on it.
PA-RISC really was a great series of CPUs. It's a shame it had to die. At one point I believe it actually surpassed the (at the time) much-vaunted DEC Alpha as the fastest thing on the market, if only for a little while. Itanium seemed designed solely to kill off the x86 CPU clone market. Intel came up with a completely new instruction set, and patented it so there would be no clones. Actually making a good chip did not seem to be a consideration.
Good riddance to Itanium, and a bittersweet farewell and R.I.P. to PA-RISC.
Re: (Score:2)
Can you share some information about the nature of this meeting, and what kind of contract your team was evaluating? (Especially considering this should have been at least 8 to 10 years ago.)
Itanium, from an engineering standpoint, was a perfectly good architecture -- there are several scenarios in which VLIW architectures can attain truly astounding IPCs. Its weaknesses were essentially software support, power/heat, and price -- which is a vicious cycle of problems -- without software support, you don't
Ah well (Score:4, Interesting)
I work directly with a VLIW architecture myself (the TI C6000 family of DSPs). From that perspective, I'm a little sad to see Itanium go. I realize EPIC isn't exactly VLIW, but they had an awful lot in common. Much of HP's and Intel's compiler research helps us other VLIW folks too.
I think EPIC tried to live up to its name a little too much. The original Merced overreached, and so it ended up shipping far too late for its performance to be compelling. Everybody always zooms in on the lackluster x86 performance, but x86 wasn't at all interesting in the spaces Itanium wanted to play in originally. It wanted to go after the markets dominated by non-x86 architectures such as Alpha, PA-RISC, MIPS and SPARC. And had it come out about 3 years earlier, it may've had a chance there by thinning the field and consolidating the high-end server space behind EPIC.
Instead, it launched late as a room-heating yawner. And putting crappy x86 emulation on board only tempted comparisons to the native x86 line. That it made it all the way to Poulson is rather impressive, but smells more like contractual obligation than anything else.
Rest in peace, Itanium.
Sink It? (Score:3)
Oracle Had a Lot of Itanium Software (Score:3)
Re: (Score:3)
> This move also kills HP's aspirations of overtaking IBM any time soon
Exactly - HP nowadays really wants to be IBM, a one-stop shop for hardware, software, and services. But they're not. IBM has a better mix of businesses and is executing better. HPQ operating margin - 10.49%, IBM operating margin - 19.97%. HPQ return on equity - 21.85%, IBM return on equity - 64.59% (from Yahoo finance).
Why not post Intel's response? (Score:5, Informative)
Itanium, from the same people that brought you the P4 (Score:3)
In all truthfulness it did have some ideas going for it, but it should have stayed a pet project: an R&D project, with enough produced that the market could play with it in self-built systems. In my opinion they should have basically given the processors away to inspire developers for hobby and niche products. They wouldn't have lost as much money and would have had more realistic ambitions for it. They had the fabs and the prototyping equipment already...
The Itanium: a processor designed for programming languages that could provide optimization hints... that could have a concept of the L1 cache and manipulate it, and be able to provide feedback to the processor when the compiler could do better branch prediction than the processor. Radical concept; the only problem was you HAVE TO code to each processor model specifically. Caches changed and the processor logic changed with each revision. That's why they would have made better embedded processors. The generic systems that would benefit the most would be systems with source code you could compile right for the machine, dynamically compiled code, and code that could recompile and optimize itself.
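The flavor of hint described there survives in mainstream compilers. __builtin_prefetch and __builtin_expect are real GCC/Clang builtins; the function around them is a made-up example:

```c
/* Software-supplied cache and branch hints, in the spirit of the
   parent's description. The builtins are real GCC/Clang extensions;
   scan() and its parameters are invented for illustration. */
void scan(const int *nodes, int n, int needle, void (*found)(int)) {
    for (int i = 0; i < n; i++) {
        if (i + 16 < n)  /* pull a future element toward the cache   */
            __builtin_prefetch(&nodes[i + 16], 0 /* read */, 1);
        /* tell the compiler a match is the rare case, so it lays the
           loop out for the fall-through path */
        if (__builtin_expect(nodes[i] == needle, 0))
            found(i);
    }
}
```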
They should have been much more radical instead and designed for massively parallel systems based on a RISC design with minimal branch prediction. So even if the processors weren't running the most efficient code, a developer could at least attack a problem with the brute force of hundreds of threads at the same time. More or less they should have aimed for something along the lines of the Cell processor. Another current story here on Slashdot is how the US Air Force took 1,700 PS3s and turned them into a computer that qualifies in the top 40 of supercomputers.
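In that "brute force with lots of threads" spirit, a minimal pthreads sketch (thread count and data size are arbitrary illustration values):

```c
/* Split a big array sum across NTHREADS workers; each thread writes
   only its own result slot, so no locking is needed. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define N 8000000

static double data[N];

struct chunk { size_t lo, hi; double sum; };

static void *partial_sum(void *arg) {
    struct chunk *c = arg;
    double s = 0.0;
    for (size_t i = c->lo; i < c->hi; i++)
        s += data[i];
    c->sum = s;
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    struct chunk ch[NTHREADS];
    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;

    for (int t = 0; t < NTHREADS; t++) {
        ch[t].lo = (size_t)t * N / NTHREADS;       /* chunk bounds */
        ch[t].hi = (size_t)(t + 1) * N / NTHREADS;
        pthread_create(&tid[t], NULL, partial_sum, &ch[t]);
    }
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += ch[t].sum;                        /* reduce results */
    }
    printf("sum = %f\n", total);                   /* expect 8000000 */
    return 0;
}
```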
Re: (Score:2)
You forgot that you can sell crap big iron to banks at a fat profit; no need to keep the production lines online all year, even.
It doesn't matter what the big iron is, you see, as long as it's not the same as what the clerk is using on the desktop (or, if it is, that it is at least named differently).
Massively parallel machines are easy to build... but, ehm, linear speed is what's interesting, really. That's what people would want at home; there would be many more possibilities down that route than in going parallel with less speed.
Yay? (Score:2)
Re: (Score:2)
Lamenting silicon use is a little silly, from where I'm standing.
If you look at a modern processor, the entire decision-making part of the chip is absolutely minuscule. The biggest hog of silicon is cache.
x86 and the PC standard is a boon to everyone. If you want to see what computers would look like without the benefit of the open architecture, look at smart phones -- even Android, fully open source, has people begging for updates to their phones OS because everything is too locked down and proprietary (and
Re: (Score:2)
It's a yay; the platform sucked. They can use the engineers for something else now. It competed for a LONG TIME - and did not do well.
as a former Itanium employee (Score:2, Interesting)
I agree with Oracle that it is close to over for the chip. Intel lost every good engineer working on it to AMD in Fort Collins, CO, and can't (even with massive financial incentives) coax anybody on their x86 teams to transfer over. Itanium is considered the kiss of death on a resume, so they are having a hard time even finding people willing to work on it. Work on Itanium is about 6 years behind the original schedules! Originally designed and marketed as a performance leader over the Xeon series, it has fallen so f
Re: (Score:3, Interesting)
My old college roommate was offered a job at Intel Itanium's unit after finishing his PhD in compiler theory. He turned it down because "life's too short to spend it fixing Itanium."
Nothing ill-fated about itanium (Score:2)
It did exactly what it was supposed to: destroyed all the competition for i386.
Where is Alpha now? What happened to SGI?
Re: (Score:3)
Good thing that they managed to change the new architecture from "AMD64" to "x64".
That would be bad if customers thought that AMD out-innovated them.
Actually, I think AMD originally called it x86-64, and then their marketing department got them to call it AMD64 (not a bad idea, from the marketing point of view). Sun and Microsoft decided to call it "x64", probably after Intel licensed it, perhaps so as not to peeve Intel. Intel thrashed around a bit with names, passing through EM64T before arriving at the innovative name "Intel 64", which does not at all resemble "AMD64".
(Not that Intel invented PA-EPIC^WIA-64^WItanium all by themselves, either.)
Re: (Score:2)
"On top of that Intel only sold Itaniums to enterprise, screeching compiler development for it to a hault."
except for maybe Intel's compilers?
Intel's compilers are very very good. Intel inherited the old DEC compiler group (which was very skilled) after some woeful time at Compaq.
Re: (Score:2)
You think Linux is the platform of choice for high-end RISC servers? Seriously?
Re: (Score:3)
>>On top of that Intel only sold Itaniums to enterprise, screeching compiler development for it to a halt.
I had experience working with the preproduction Intel compilers for it, and they were very, very good.
One of the best things about the platform, really. Kind of like the Tera.
Re: (Score:3)
Itanium has its failings. That isn't one. Those who talk about how that is the problem aren't the people that Itanium is for.
Re: (Score:2)
You can get x86 processor modules for the Unisys Dorado that run Windows, Linux, and the x86 Java VM. And the midrange Dorado 4000 and Libra 4000 actually USE Intel Xeons. http://www.unisys.com/about__unisys/news_a_events/05158777.htm [unisys.com]
You can get x86 processor modules for the IBM Z series, same deal, run Windows and Linux and x86 Java VM.
The IBM PowerVM Lx86 emulator lets one run x86 Linux applications on PowerPC.
Re:Can't blame them (Score:4, Informative)
What are you talking about? The early Itaniums were x86-32 compatible.
"Itanium processors released prior to 2006 had hardware support for the IA-32 architecture to permit support for legacy server applications"
http://en.wikipedia.org/wiki/Itanium#Architectural_changes [wikipedia.org]
It wasn't until later the Itaniums lost their hardware based x86 compatibility.
Re: (Score:3)
What are you talking about? The early Itaniums were x86-32 compatible. "Itanium processors released prior to 2006 had hardware support for the IA-32 architecture to permit support for legacy server applications"
http://en.wikipedia.org/wiki/Itanium#Architectural_changes [wikipedia.org]
It wasn't until later the Itaniums lost their hardware based x86 compatibility.
While true, you omitted the crucial continuation of that sentence:
Re: (Score:3)
... and VMS... and NonStop. Both are systems with a lot of customers who find lots of value in those platforms and don't want to give them up.
NEC's ACOS (Score:2)
Re: (Score:2)
Ask Intel how committed they are to Itanium since they just dropped support from their compilers.
Not true.
Re: (Score:2, Informative)
I'm not surprised at the biased, poorly researched article that was published once again. Intel specifically said that they have no plans to dump it and that Oracle is full of shit. The headline reads like an attack on Intel even though Intel did nothing besides deny what Oracle said.
Re:...and? (Score:4, Insightful)
It's a FUD attack at HP, Oracle's newest enemy, FWIW.
(HP is the only company that really uses Itanium any more.)
Yes, I was... (Score:2)
...I honestly didn't know they still made the Itanium.