Microsoft Announces End of the Line For Itanium Support 227
WrongSizeGlass writes "Ars Technica is reporting that Microsoft has announced on its Windows Server blog the end of its support for Itanium. 'Windows Server 2008 R2, SQL Server 2008 R2, and Visual Studio 2010 will represent the last versions to support Intel's Itanium architecture.' Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?"
Oh Noes! (Score:5, Insightful)
Seriously, though: is this an admission by Microsoft that HP-UX is (somehow) hanging on at the high end, despite HP's every attempt to mismanage it, or (more likely) is this a consequence of the fact that, at this point, there is nothing Itanium can do that Intel couldn't do better and cheaper just by bolting some extra cache and a few extra Itanium features onto Xeons?
DEC Alpha? (Score:4, Insightful)
I am incredibly offended that you would compare this bloated, brute-force abomination of a chip to the incredibly well-designed, elegant, and efficient Alpha (may it rest in peace).
Not Very Comparable (Score:2, Insightful)
The DEC Alpha was a brilliant RISC processor that could outrun a closet full of x86 chips of the same era (or even the era after). The DEC Alpha was sold by a hardware company that distributed its own Unix-derived OS for it, with the proper compilers ready to go as soon as the system booted. The Itanium, on the other hand, was an odd attempt by Intel to make a 64-bit CPU that could - mostly - run 32-bit code as well. Unfortunately, by the time the Itanium was released, the Intel-Microsoft pairing was well established for most consumers, and people wanted it to run Windows Server, which it didn't do particularly well.
So the Itanium may end up killed by the combined factors of lack of a market, lack of consumer interest, lack of consumer knowledge, and poor deployment. The DEC Alpha, on the other hand, was killed by upper-level management who didn't seem to know what they had.
Not dead yet (Score:1, Insightful)
Re:Of course it means the end. (Score:3, Insightful)
Indeed. The ultimate fate of Itanium is to wind up as HP's upgrade to PA-RISC. You have to wonder how much further interest Intel is going to have in its development. I suspect it will end up getting tossed back into HP's lap.
Every Chip is a DEC Alpha (Score:4, Insightful)
They all get outmoded.
No one can stop the x86 train, not even Intel. (Score:5, Insightful)
No one can stop the x86 train, not even Intel.
They will be in millions of homes? (Score:3, Insightful)
Re:Of course it means the end. (Score:5, Insightful)
Competent CPU designers, yes. It's the only reason Itanium has lasted this long. Intel's early solo designs were less than successful. HP designers came in, redid the whole thing, and lo and behold, it worked. HP really needs Intel to fab the chip, not design it.
Re:ding - worse is better (Score:3, Insightful)
x86 isn't a passable architecture at all.
Why does it in fact perform better than supposedly superior architectures for so many workloads? If these other architectures are inherently superior, why don't they run rings around x86 in spite of the difference in dollars spent?
Re:ding - worse is better (Score:3, Insightful)
I'm sorry...I thought you said x86 isn't a passable architecture...at all.
Just last week I found a good word for this: hyperbole.
You'll have to ratchet it down at least a couple of notches to get close to the truth. See the parent's reference to "coyote-ugly" (x86) and x86-haters (you).
Re:ding - worse is better (Score:4, Insightful)
Because they figured out that the instruction set means diddly-squat in the end - it's the branch prediction, floating point, pipelining, and good cache design that make a difference. Get that right, strap an x86 decoder on the front end, and it's perfect.
We love CPUs that perform, and only a very few people really care what that looks like under the hood.
Re:ding - worse is better (Score:5, Insightful)
It's fundamentally irrelevant whether anyone thinks that x86 is "passable" - it's a proven fact. We have 15 years of out-of-order x86 implementations that prove that.
Yeah, you have to handle the brain-dead instruction encodings in the decoder, and you need to emit micro-ops for a bunch of obscure instructions that no one ever uses (to maintain compatibility). You also have to handle the multiple obscure and obsolete memory addressing modes.
But the reality is that no one but engineers gives a crap about this. In a world of 300M+ transistor cores, there just isn't that much overhead to making the CPU compatible. Most of the die space is cache anyway nowadays.
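The "emit micro-ops" step the parent describes can be sketched in a few lines. This is a toy illustration of the idea, not a real decoder: the instruction names, the temporary register, and the split rules are all made up for the example, but they mirror how a read-modify-write x86 instruction gets cracked into load-store-friendly pieces.

```python
def crack(op, dst, src):
    """Split one x86-style instruction into RISC-like micro-ops.

    A memory destination like '[rsi]' forces a load/op/store split,
    since a load-store core can't do read-modify-write in one step;
    a register destination maps straight to a single micro-op.
    't0' is a hypothetical internal temporary register.
    """
    if dst.startswith('['):
        return [
            ('load', 't0', dst),   # read the memory operand
            (op, 't0', src),       # do the arithmetic on the temp
            ('store', dst, 't0'),  # write the result back
        ]
    return [(op, dst, src)]

# Register-to-register form: one micro-op, same shape as a RISC op.
print(crack('add', 'eax', 'ebx'))
# Read-modify-write memory form: cracked into three micro-ops.
print(crack('add', '[rsi]', 'eax'))
```

Once everything downstream of the decoder sees only these uniform micro-ops, the out-of-order machinery doesn't care how ugly the original encoding was - which is the parent's point.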
We can't compare what x86 is to what POWER or MIPS or SPARC "would have been" in some speculative world where Intel wasn't the dominant desktop/server CPU manufacturer. There's no magic bullet that can make load-store architectures amazingly fast but that doesn't apply to x86. Almost all of the technology out there can apply equally to a modern x86 CPU.
What sells CPUs is not having a clean and simple ISA. What sells CPUs is performance, power consumption, and, in many cases, compatibility. If having a clean ISA accomplishes those objectives, so much the better. But Intel and AMD have shown that you can make a fast, low-power, compatible x86 CPU and sell it at a very low price. That's what matters.
RISC vs CISC (x86) coming back again ? (Score:2, Insightful)
Last time I was playing with it (caliper/HP-UX), it was averaging around 1 instruction per cycle. Not very impressive, but not very bad either. That was three years ago, and I doubt compilers have improved dramatically since then. And it runs at about half the clock speed Xeons do.
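The metric being quoted above is simple to state: IPC is retired instructions divided by elapsed core cycles, and raw throughput also scales with clock. The counter values below are invented for illustration; on real hardware they come from profilers like caliper (HP-UX) or perf (Linux).

```python
def ipc(instructions_retired, cycles):
    """Instructions per cycle from two hardware counter readings."""
    return instructions_retired / cycles

def throughput(ipc_value, clock_hz):
    """Instructions per second: at half the Xeon clock, an Itanium
    needs roughly twice the IPC just to keep pace."""
    return ipc_value * clock_hz

# Hypothetical sample: 1.2e9 instructions retired over 1.1e9 cycles.
print(f"IPC = {ipc(1.2e9, 1.1e9):.2f}")
```

This is also why cross-architecture IPC comparisons are slippery: a single x86 instruction can do more work than a single RISC instruction, so equal IPC does not mean equal performance.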
Now that we have Nehalem-EX and similar monsters available, there is not much left on the performance/scalability front for all those RISC designs, no matter how cool they are. The only thing keeping them alive is their hard-to-break big-iron hardware and hard-to-break big-iron (system) software. Vendors might struggle to port these things onto now-conventional x86_64 designs, as they risk losing a significant income stream by doing so. But in the long run most of them will be dead or become niches. The only areas where RISC will still shine are energy-constrained environments (ARM?) and maybe some manycore designs, like some forms of GPUs evolving in that direction. In other words, the original areas where RISC started.
Note that I'm not trashing RISC here - this was a pretty neat idea. It's just history showing a bitter sense of humor: memory bandwidth is now a bottleneck and x86 code is known to be compact.
Re:Not Very Comparable (Score:3, Insightful)
So you're comparing an SMP system with a cluster? Anyone who knows HPC can tell you that's not a fair fight.
Everyone knows that Netburst was not competitive with just about anything on a clock-for-clock basis. That's not the point. The question is whether Alpha was competitive from a performance-per-dollar standpoint.
x86 isn't about being the fastest. It's about being fast and dirt cheap. You can get a complete Core i7 2.8GHz system for around $1000, and there's no non-x86 competitor that's even close, even at 4x the price. Even x86 servers are dirt cheap.
Re:Of course it means the end. (Score:3, Insightful)
They have 45nm CPU fabs? Really?
I was under the impression that designing a modern CPU took a unique combination of a lot of skillful engineers and an extremely expensive modern fab. HP probably has plenty of manufacturing plants, and heck, maybe a few fabs for CCDs and the like for cameras and scanners and other optics... But I doubt they have a CPU fab.