Microsoft Announces End of the Line For Itanium Support 227
WrongSizeGlass writes "Ars Technica is reporting that Microsoft has announced on its Windows Server blog the end of its support for Itanium. 'Windows Server 2008 R2, SQL Server 2008 R2, and Visual Studio 2010 will represent the last versions to support Intel's Itanium architecture.' Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?"
Of course it means the end. (Score:5, Funny)
How could anyone possibly have any use for servers that don't run Windows?
Re:Of course it means the end. (Score:5, Funny)
yeah, servers with windows are like women playing soccer on high heels. nice to look at, until one of them falls and breaks an ankle.
Re:Of course it means the end. (Score:5, Informative)
Re: (Score:3, Insightful)
Indeed. The ultimate fate of Itanium is to wind up as HP's upgrade to PA-RISC. You have to wonder how much further interest Intel is going to have in its development. I suspect it will end up getting tossed back into HP's lap.
Re: (Score:2)
HP has fabs and/or competent CPU designers?
I doubt Intel really cares who they sell it to, as long as someone keeps buying. When HP moves on from Itanium, it's done for.
Re:Of course it means the end. (Score:5, Insightful)
Competent CPU designers, yes. It's the only reason Itanium has lasted this long. Intel's early solo designs were less than successful. HP's designers came in, redid the whole thing, and lo and behold, it worked. HP really needs Intel to fab the chip, not design it.
Re: (Score:2, Interesting)
I don't know, Itanium seems pretty impressive. This presentation [infoq.com] appeared on Slashdot a while ago and does a good job of giving a face to the name Itanium instead of just reading "Failed processor line that was really expensive."
The huge amount of instruction-level parallelism (dependent on a very good compiler) really seems like the best way to do things. It's too bad it doesn't work out in practice.
Re: (Score:2)
(dependent on a very good compiler)
Did anyone manage to write one of those? Last I heard it was extremely hard to write even a decent one for 128 registers.
Re: (Score:2)
The problem is that Intel didn't come up with this concept (ILP through compiler-scheduled instructions), nor were they the first to try it.
VLIW designs have *always* looked great on paper and *always* sucked in practice. Intel did make a bunch of improvements to VLIW with Itanium, but they should have known that you can't just dump th
Re: (Score:2)
Funny thing, I was casually talking to some folks about 64-bit platforms. We didn't bother to look online, but we were all pretty sure Itanium was already dead. :) I'm sure this isn't a nail in the coffin, just one more piece of software that's no longer supporting it. As most have said, most people are using better OSes with them anyway.
I know WinXP x86_64 was like running WinNT for DEC Alpha. It was there, but it was terrible and didn't do everything you needed. Oh god
Re: (Score:3, Insightful)
They have 45nm CPU fabs? Really?
I was under the impression that designing a modern CPU took a unique combination of a lot of skillful engineers and an extremely expensive modern fab. HP probably has plenty of manufacturing plants, and heck, maybe a few fabs for CCDs and the like for cameras and scanners and other optics... But I doubt they have a CPU fab.
Re: (Score:2)
or upgrade from Alpha (for VMS shops) or upgrade from MIPS for NonStop shops
Re:Of course it means the end. (Score:5, Funny)
Blasphemy!! Heretic!! Burn the witch! Burn blasphemer!! Burn!!!!
Re: (Score:2)
Red Hat will not support Itanium in RHEL 6. So that 85% will be 100% in the future.
Re: (Score:2)
Yes, but at least Red Hat will support Enterprise Linux 5 on Itanium 2 until 2014. I work for an HP VAR and I've *never* seen any HP Integrity run any Linux but Red Hat, though there are a few other distros out there.
Re: (Score:2)
In reality that's only going to be of use to customers who are already running Red Hat on Itanium. No one making a decision today is going to commit to a solution that only has a 4 year shelf life. If they want Red Hat today and they're in that enterprise space they'll go Nehalem-EX for the best combination of RAS + performance + price.
No one can stop the x86 train, not even Intel. (Score:5, Insightful)
No one can stop the x86 train, not even Intel.
Re:No one can stop the x86 train, not even Intel. (Score:5, Funny)
No one can stop the x86 train, not even Intel.
Maybe not. But certainly some people are trying to strong-ARM the situation.
Re:No one can stop the x86 train, not even Intel. (Score:5, Informative)
You also get more address space and bigger registers. Bigger registers are not such an issue with ARM. If you are using 64-bit operands, you have to split them between two registers, but that still leaves you with as many 64-bit GPRs as x86-64 has.
What about a bigger address space? This is really two issues: the physical and virtual address space. The physical address space is the amount of real memory the processor can access. Current ARM systems ship with 64-256MB of RAM. I think there may be a few with 512MB, but they're quite rare. Low power DDR is a lot more expensive than desktop / laptop RAM so these numbers are going to stay lower than laptop and desktop versions. Obviously, people will eventually want handhelds with more than 4GB of RAM, but it probably won't happen for a little while.
Note, however, that things like PAE on x86 allow you to access more than 4GB of physical address space. All that you need to do is extend the page tables slightly. On post-Pentium x86 chips, the page tables can map from a 32-bit virtual address space to a 36-bit physical one. This means that you can access 64GB of physical memory, but individual processes are limited to 4GB (unless they do some ugly things). This kind of thing is much easier for ARM because, unlike x86, the ARM architecture does not guarantee backwards compatibility in the privileged instruction set. They could quite easily extend the physical address space without changing the unprivileged instruction set, so you'd need to modify the kernel but no userspace stuff. I won't say 64GB ought to be enough for anyone, but a handheld with more than 64GB won't be affordable for many people for several years.
That leaves the virtual address space. I'm currently running a 64-bit OS on a 64-bit CPU, and looking at my running processes, the biggest one is using around 750MB of virtual address space. The largest I've seen on this machine is around 1.2GB. That was a web browser, and there is no real reason why it should be using that much address space: for security, I'd rather it ran more processes, isolating each site into a separate instance. During my PhD I did a lot of stuff that involved much larger processes, but I don't imagine ray tracing large volume data sets on an ARM machine any time soon.
That's not to say that there aren't things that benefit from a larger virtual address space. If you're doing video editing, for example, it makes life a lot simpler for the programmer if you can just mmap() the raw data files, which, at around 10GB/hour, can easily consume 100GB of virtual address space for a medium sized project. Of course, you can just stream the data as it's required. This involves some extra copying, but it's not a huge amount of effort, and most existing code does this anyway.
If your process doesn't need more than 4GB of virtual address space, then there's a significant
Probably not (Score:2)
Were many Itanium users running Windows? My impression was that most Itanium users were running some sort of *nix. I don't think it's a huge deal for Itanium.
I also don't see Itanium going anywhere any time soon. As much as people like to talk about its demise, its numbers do grow every year. Or at least they were growing up until a couple years ago; I assume they're still growing. They're not growing very quickly, but they're still going.
It's a shame. It's a remarkably beautifully designed architecture,
Re: (Score:3, Interesting)
The only Itanium servers I encounter regularly run OpenVMS in order to host the popular OM stock exchange platform. OM-based stock exchanges (ASX, HKFE, OMX, SGX, IDEM) all seem to be a hell of a lot more stable than the .NET-based Tradelect/Infolect system used on LSE for the last few years. I don't know why anyone would actually want to run Window
Re: (Score:2)
I've racked a bunch of Itanium servers running Windows Server 2003 and supporting SAP installs.
It is not unheard of. And I suspect these will migrate over to a much more desirable platform - in fact, I expect they will decommission these bad boys and I will be in line to scarf up some interesting hardware cheap.
I will not have to try and flim-flam them into a hardware swap. It's the only way they can actually do this. And I don't sell them any hardware. I'm just one of the few around here that seem to b
Re: (Score:2)
What are you talking about? EISA is a bus; EFI is firmware.
Re: (Score:2)
EISA setup was a lot like EFI.
Re:Probably not (Score:5, Interesting)
Microsoft has had a strict policy since the dawn of Windows that Windows be built for at least 2 processor architectures at all times. They really worried about i386-isms creeping into the kernel. It pretty much doesn't matter which two you choose; as long as it's more than one (and they're somewhat different), it keeps the kernel devs honest. I wonder what they're doing now: perhaps they just decided that i386 and "amd64" are different enough to serve their purpose.
Re:Probably not (Score:4, Interesting)
The other thing is, they keep a full build internally.
The rumor mill says that Microsoft has current versions of Windows built for ARM internally... sorta like how Apple kept x86 builds of Mac OS X internally the whole time.
Re: (Score:2)
I'm not an expert on this, but according to this [hoffmanlabs.com], Windows has so far been built only for little-endian architectures, or chips that can change endianness at boot or on the fly. This limits MS's choice of target architectures somewhat.
I'd like to see if they're capable of building a version for big-endian chips like SPARC or the latest PPCs.
Re: (Score:2)
NT 3.51 through 4.0 ran on PowerPC for a short while.
Re: (Score:2)
That should be interesting.
Re:Probably not (Score:4, Informative)
Just to keep this clear: you're talking about NT (which wasn't even called "Windows NT" initially, internally). NT is almost entirely written in C, and the few architecture-specific parts are abstracted from the core codebase, typically living in assembly modules that are maintained for multiple architectures, with the build selecting the appropriate one automatically. There's some use of inline assembly or x86 specifics, but it's all behind #if blocks, with equivalent checks for other CPU architectures. Overall, NT has been ported to at least 5 architectures that I know of - x86 (32-bit), x64, ia64 (Itanium), PPC, and DEC Alpha. If MS wanted to, it would be possible to port it to ARM, MIPS, SPARC, or almost any other reasonably modern architecture of at least 32 bits.
By comparison, Win9x has a ton of assembly code that enabled it to run fast even on low-end machines, keeping the system requirements down (and making it attractive to home users back in the days before consumer hardware caught up with the demands of NT). Of course, use of assembly like this has downsides - 9x was badly unstable, and completely non-portable. It only ever ran on x86, and I'm not even sure it made much use of the features found in any version after the i386.
Re: (Score:2)
The Xbox360 code base is in great part Windows for PPC.
Apparently just the DirectX API was carried over. I don't think there is anything else that is the same. This comes from the developers themselves. Even the kernel is custom for the xbox.
Re: (Score:3, Informative)
Windows 2008 Server will not boot on an x86-32 processor. The fact that, after install, you can then install an optional x86-32 virtual machine is beside the point.
Re: (Score:2)
I should hope not considering a new Itanium processor was just released in February. [wikipedia.org]
I'm a bit surprised to hear M$ dropping support for a 2 month old processor.
Oh Noes! (Score:5, Insightful)
Seriously, though: is this an admission by Microsoft that HP-UX is (somehow) hanging on at the high end, despite HP's every attempt to mismanage it, or (more likely) is this a consequence of the fact that, at this point, there is nothing Itanium can do that Intel couldn't do better and cheaper just by bolting some extra cache and a few extra Itanium features onto Xeons?
Re: (Score:2)
Doubt it. I don't think Microsoft would give up if there was competition to drive out. They'd do like with the Xbox and keep throwing money at it until it worked. I take it this means that even if they had the marketshare, there would be no (or not enough) profit in it.
Re: (Score:2)
That should be exactly right... their portion of that 15% market share was probably not justifying the resources needed to support the additional architecture.
I'm guessing they get to lay off some really expensive Itanium knowledge base from their core dev teams as well as all the other baggage necessary for release/support of the ports. Those guys are really hoping there's room for hire on the HP-UX team now :)
Re: (Score:2)
The difficulty of driving out the competition probably matters also. I wonder how many of the non-Windows Itanium systems are running application software for which there is no drop-in replacement available for Windows? So MS would have to convince the owners that not only is Windows a better/cheaper/whatever OS, but enough so that it's worth replacing application software as well.
Re: (Score:2)
Intel couldn't do better and cheaper just by bolting some extra cache and a few extra Itanium features onto Xeons
That is exactly what Intel is doing. They are rolling some core Itanium features into the next generation Xeon processors. There was an article on it in the Wall Street Journal last week. It came across as a marketing piece from Intel where they were attempting to reassure Itanium owners that they weren't going to be abandoned.
Re: (Score:2)
Won't do any good for the Itanium 2 owners if the Xeons can't run the IA-64 instruction set. The features Intel just brought to Xeon from Itanium include MCA (machine check architecture recovery from failures), security, and virtual machine migration. But not binary compatibility. But maybe HP will port VMS, NonStop, and HP-UX to the new, improved bullet-proof x86-64 (only with the appropriate supporting chipsets, of course).
The Itanic was Gandalf (Score:4, Funny)
Intel also seems to be behind it... (Score:2)
Sans Red Hat too (Score:2)
Well kind of. Red Hat recently announced [redhat.com] that they were dropping Itanium starting with Red Hat Enterprise Linux 6. How long will it be before the rest of the distro gang follow suit?
Re:Sans Red Hat too (Score:4, Funny)
Debian 27 plans to drop support.
Still supported on real OSes like Linux and HPUX. (Score:2)
Itanium has not been worth it in terms of price/performance for a while; this just confirms the inevitable. However, people will still be running this hardware for some time, and I expect HP-UX and Linux to continue to support it for the foreseeable future. Hell, Debian supports the Alpha, and the M68k was removed from official support only in the previous revision of Debian (etch), and then only because it took too long to compile and would slow down updates of the archives.
-molo
Re: (Score:3, Informative)
Itanium has not been worth it in terms of price/performance for a while
Actually, in many categories, it is. Depends on the work to be done. For example, HP Integrity Superdome with HP-UX leads in price/performance and performance running TPC-H on a 10 or 30TB Oracle database. Same for numerical benchmarks that are heavily SMP.
I don't like the Itanium, but on certain database and numerical workloads it still kicks everyone else's butt.
Re:Still supported on real OSes like Linux and HPU (Score:5, Informative)
Oh come on. It's really disingenuous to be quoting that kind of shit. Have you ever taken a really close look at the kind of hardware the vendors use to get these benchmark numbers? Database benchmarks are almost always very sensitive to I/O, and these kinds of numbers are usually generated by systems that have their I/O card slots maxed out, with several hundred (if not thousands of) small high-speed disks behind them. The cost of these solutions in real life would be crippling. Vendor-quoted benchmarks should usually be taken with a generous pinch of salt.
The other shoe to drop would be HP-UX x64 (Score:2)
Nah, the real "end" would be if HP finally bows to the inevitable and ports HP-UX to x64. Don't hold your breath, though...
Re: (Score:2)
Hmm, I have my ear close to the ground on these things and I'm not sure I believe you. If by "ported to X64" you mean there's been a skunkworks project to get them to a point where they do a minimal boot, then possibly. But beyond that I very much doubt it. The amount of effort to port either O/S and all their associated layered products would be huge and expensive. I could see it happening for OpenVMS as that pretty much has its own ecosystem that's separate from and sort of immune from the machinatio
DEC Alpha? (Score:4, Insightful)
I am incredibly offended that you would compare this bloated, brute-force, abomination of a chip to the incredibly well designed, elegant, and efficient Alpha (may it rest in peace).
Re: (Score:2)
The architecture is nice (IMHO) but the obscene amounts of cache do make it look bloated in terms of silicon required. This is partly because it's a high-end chip, of course. But perhaps Intel were also having to take a brute-force approach to performance there (throwing transistors at it) rather than an efficient solution. I'd be sorry to see IA64 go, though, I really liked the design of the instruction set.
Re:DEC Alpha? (Score:4, Informative)
...Both Alpha and Itanium were in-order...
IIRC the Alpha 21264 was out of order actually, see http://courses.ece.illinois.edu/ece512/Papers/21264.pdf [illinois.edu]
Not Very Comparable (Score:2, Insightful)
The DEC Alpha was a brilliant RISC processor that could outrun a closet full of x86 chips of the same era (or even the era after). The DEC Alpha was sold by a hardware company that distributed their own Unix-derived OS for it that had the proper compilers ready to go as soon as the system was booted. The Itanium, on the other hand, was an odd attempt by Intel to make a 6
Re:Not Very Comparable (Score:5, Interesting)
Having used Alpha workstations, I beg to differ. The Alpha was a design that managed to do the absolute minimum per clock cycle in each pipeline stage. This allowed very high clock speeds, and high theoretical peak performance with very deep pipelines. In reality, the deep pipelines' branch misprediction penalty was so bad you never got close to the theoretical peak performance, and the high clock speeds made them hot and unreliable - poor reliability was the main driving factor for switching to SPARC. Everyone should've been able to see the problems with the Pentium 4 well in advance - it was basically an Alpha with an x86 recompiler frontend, so it suffered from all the same problems.
DEC Tru64 had a lot going for it - lots of good ideas in there. When DEC and HP merged, they should have taken what was worthwhile from HP-UX and integrated it into Tru64, then ported the result to HP-PA. That would've produced a system that people wanted. (HP-UX was horrible - nothing behaved quite how it should. I'd be surprised if the thing really passed POSIX conformance without some money under the table.)
Re:Not Very Comparable (Score:5, Interesting)
The Alpha was a design that managed to do the absolute minimum per clock cycle in each pipeline stage
That is pretty much what RISC was about, in a nutshell.
and the high clock speeds made them hot and unreliable
I don't know what system you were running. I was using an AlphaServer ES40: four 667MHz Alphas with 8GB RAM. It was one of the most reliable systems I've ever used for HPC. There was a rack of Intel x86 systems of the same era right next to it - something like 32 Xeon CPUs - and the Alpha made the rack look silly and wasteful. On BLAST, the Alpha ran circles around the Intel rack, and it became even more embarrassing for the rack when the data sets got larger. That was only one example, though; on pretty much anything we could get source code for, the Alpha ran better. And that was going up against 1.8GHz Xeons.
By comparison, the Itanium wants to run native 32-bit code (though it certainly doesn't do it well). The compilers aren't easy to set up (even on Linux) and it's hard to find a Linux distro that runs on one. I have an SGI cluster with Itanium 2 CPUs in it; I know the care and feeding of this system well.
Re: (Score:2)
You might be able to help me. I know this is totally off topic, but I have this old peugeot sound data systems alpha. When I turn it on I get only a blue screen. The case has a lock and I haven't been able to break the lock yet. I wanted to drill it out, but didn't want to get metal shavings all inside. I'm thinking that I have to though. Do you know anything about a blue screen on alphas? Couldn't tell you what OS it was running or anything. Just thought it would be a lot of fun to get a non x86 box runnin
Re: (Score:2)
You might be able to help me.
I'm really more of an experienced Alpha user than an experienced Alpha engineer or support guru. I knew how to make it kick ass on the applications that were important to me, and that was about it.
I have this old peugeot sound data systems alpha
I've heard of a number of third-party vendors that sold Alpha systems, although I've never worked with one of those systems myself.
The case has a lock and I haven't been able to break the lock yet. I wanted to drill it out, but didn't want to get metal shavings all inside. I'm thinking that I have to though.
As much as the architecture of the Alpha is unique amongst most systems you'll see today, it isn't magic. If you decide that you need to drill it out, I doubt that a new level of h
Re: (Score:2)
That's the ARC console - it's probably freezing trying to netboot or init a lost piece of hardware. Hit ESC and you should get to the console.
Check here: http://www.compaq.com/AlphaServer/technology/literature/srmcons.pdf [compaq.com]
Re:Not Very Comparable (Score:5, Interesting)
The Alpha didn't even attempt out-of-order execution until the EV6 chip...
The EV4 and EV5 chips were strict in-order processors.
The difference with the P4 is that the P4 was expected to run code originally optimized for a 386, whereas the original Alpha had code that specifically targeted it... In-order execution works very well when you can target a particular processor (see game consoles), since you can tune the code to the available resources of the processor... The compiler for the Alpha was also pretty good; it could beat gcc hands down at floating point code, for instance.
In terms of Alphas getting hot, the only workstation I remember with heat problems was the rather poorly designed Multia (which used a cut-down Alpha chip anyway)... other Alpha systems I used were rock solid reliable, and I still have several in the loft somewhere - one of which ran for 6 months after the fans failed before I noticed and shut it down...
Clock for clock the Alpha was pretty quick too, unlike the P4, which was considerably slower than a P3 at the same clock...
http://forum.pcvsconsole.com/viewthread.php?tid=11606 [pcvsconsole.com] shows Alphas getting SPECfp2000 scores higher than x86 chips running at 3x the clock rate.
A lot of people, myself included, think Itanium should never have existed, and that the development effort should have been put into Alpha instead - an architecture that already had a good software and user base...
Re:Not Very Comparable (Score:5, Interesting)
If the 1.8GHz Xeon was based on the Netburst architecture, first you have to multiply by 2/3 to correct for diet-Pepsi clock cycles; then, if your code base is scientific, you have to divide by two for the known x86 floating point catastrophe; and finally, if your scientific application is especially friendly to large register sets, there's another factor of 0.75. So on that particular code base, a 1.8GHz Netburst is about equal to a 400MHz Alpha (I only ever worked with the in-order edition). Netburst usually had some stinking fast benchmarks to show for itself if it happened to have exactly the right SSE instructions for the task at hand. And it gained a lot of relative performance on pure integer code. BTW, were you running the Xeon in 64-bit mode? That could be another factor of 0.75.
A lot of people, myself included, think itanium should never have existed, and that the development effort should have been put into alpha instead - an architecture that already had a good software and user base
Yeah, you and a lot of clear headed people with insight into the visible half of the problem space. Not good enough.
Alpha was a nice little miracle, but it fundamentally cheated in its fabrication tactics. This is a long time ago, but as I recall, in order to get single-cycle 64-bit carry propagation, they added extra metal layers for look-ahead carry generation. For a chip intended for Intel-scale mass production, this kind of thing probably makes an Intel engineer's eyebrows pop off. That chip was tuned like a Ferrari. I'm sure the Alpha was designed to scale, but almost certainly not at a cost of production that generates the fat margins Intel is accustomed to.
Around the time Itanium was first announced, I spent a week poking into transport triggered architectures. There was some kind of TTA tool download, from HP I think, and I poked my nose into a lot of the rationale and sundry documentation.
TTA actually contains a lot of valid insight into the design problem. The problem is that Intel muffed the translation, through a combination of monopolistic sugar cravings, management hubris, and cart-before-the-horse engineering objectives. I'm sure many of the Intel engineers would like to take a Mulligan on some of the original design decisions. There might have been a decent chip in there somewhere trying to get out. Itanium was never that chip.
I pretty much threw in the towel on Itanium becoming the next standard platform for scientific computing when I discovered that the instruction bundles contained three *independent* instructions. They went the wrong way right there. They could have defined the bundles to contain up to seven highly dependent instructions, something like complex number multiplication: four operands, seven operations, two results. It should have been possible to encode that in a single bundle. Either the whole bundle retires, or not at all.
Dependencies *internal* to a bundle are easy to make explicit with a clever instruction encoding format. You wouldn't need a lot of circuitry to track these local dependencies. What you gain is that you only have to perform four reads from the register file and two writes to the register file to complete up to, in this example, seven ALU operations. Ports on the register file is one of the primary bottlenecks in TTA theory.
What you lose is that these bundles have a very long flight time before final retirement. Using P6 latencies, it's about ten clock cycles for the mul/add tree of the complex multiplication in this example (not assuming a fused mul-add). This means you have to keep a lot of the complexity of the P6 on the ROB (reorder buffer) side. But that also functions as a shock absorber for non-determinism, and takes a huge burden off the shoulders of the compiler writers. This was apparent to me long before the dust settled on the failure of the Itanium compiler initiative.
In my intuitively preferred approach, instructions within bundles would be tightly bound and s
ding - worse is better (Score:5, Interesting)
This is a response to my own post. Sometimes after uncorking a minor screed, I note to myself "that was more obnoxious than normal" and then my subconscious goes "ding!" and I get what's grinding me.
The secret of x86 longevity is to have been so coyote-ugly that it turns into pablum the brain of any x86-hater who tries to make a chip to rid the planet of the scourge once and for all.
For three decades right-thinking chip designers have *wanted* x86 to prove as bad in reality as ugliness ought to dictate.
Instead of having a balanced perspective on beauty, the x86-haters succumb to the rule of thumb that the less like x86, the better. And almost always, that led to a mistake, because x86 was never in fact rotten to the core. You need a big design team, and it bleeds heat, but in all other respects, it proved salvageable over and over and over again.
On the empirical evidence, high standards of beauty in CPU design are overrated. Instead, we should have been employing high standards of pragmatic compromise.
If any design team had aimed merely for "a hell of a lot less ugly", instead of becoming mired in some beauty-driven conceptual over-reaction, maybe x86 might have died already.
Maybe instruction sets aren't meant to be beautiful. Of course, viewed that way, this is an age-old debate.
The Rise of ``Worse is Better'' [mit.edu]
Empirically, x86 won.
The lingering question is this: is less worse less better, or was there a way out, and all the beauty mongers failed to find it?
Re:ding - worse is better (Score:4, Interesting)
x86 isn't a passable architecture at all. What it has going for it is MONEY. Intel, AMD, and others have dumped tons of money into it to keep it moving along, against all odds. This is because the whole world is tied to, and fixated on, x86, which itself came about way back when because IBM wanted a second supplier - so x86 was the only chip out there with competition, and therefore no proprietary lock-in. Other companies like DEC, MIPS, ARM, etc. have patents on their tech, with no license agreements, so no real attempt to one-up them. x86's competition out of the gate made it a healthy ecosystem, which then precluded all others, which then became self-sustaining.
Re: (Score:3, Insightful)
x86 isn't a passable architecture at all.
Why does it in fact perform better than supposedly superior architectures for so many workloads? If these other architectures are inherently superior, why don't they run rings around x86 in spite of the difference in dollars spent?
Re:ding - worse is better (Score:4, Insightful)
Because they figured out that the instruction set means diddle squat in the end - it's the branch prediction, floating point, pipelining and good cache design that makes a difference. Get that right and strap an X86 decoder on the front end and it's perfect.
We love CPU's that perform, and only a very few people really care what that looks like under the hood.
Re: (Score:3, Insightful)
I'm sorry...I thought you said x86 isn't a passable architecture...at all.
Just last week I found a good word for this: hyperbole.
You'll have to ratchet it down at least a couple of notches to get close to truth. See the parent's reference to "coyote-ugly" (x86) and x86-haters (you).
Re:ding - worse is better (Score:5, Insightful)
It's fundamentally irrelevant whether anyone thinks that x86 is "passable" - it's a proven fact. We have 15 years of out-of-order x86 implementations that prove that.
Yeah, you have to handle the brain-dead instruction encodings in the decoder, and you need to emit micro-ops for a bunch of obscure instructions that no one ever uses (to maintain compatibility). You also have to handle the multiple obscure and obsolete memory addressing modes.
But the reality is that no one but engineers gives a crap about this. In a world of 300M+ transistor cores, there just isn't that much overhead to making the CPU compatible. Most of the die space is cache anyway nowadays.
We can't compare what x86 is to what POWER or MIPS or SPARC "would have been" in some speculative world where Intel wasn't the dominant desktop/server CPU manufacturer. There's no magic bullet that can make load-store architectures amazingly fast but that doesn't apply to x86. Almost all of the technology out there can apply equally to a modern x86 CPU.
What sells CPUs is not having a clean and simple ISA. What sells CPUs is performance, power consumption, and, in many cases, compatibility. If having a clean ISA accomplishes those objectives, so much the better. But Intel and AMD have shown that you can make a fast, low-power, compatible x86 CPU and sell it at a very low price. That's what matters.
Re: (Score:2)
Well, it is easy to bag on things in hindsight, but in '01/'02? If you were doing something like running thousands of Monte Carlo simulations, the Alpha was untouchable for commodity hardware. I won a bittersweet war when I swore up and down, with any data I could muster, that fully populated Sun E420s couldn't even remotely touch a lowly ol' DS20 running at about 1/3 the cost. Ended up with a lot of underutilized Sun boxen.
Re: (Score:2)
It was definitely an ambitious design, and something that needed to be tried. It did what it promised for the first generation, but sadly it was a dead end. You're right - no one could have known in advance that the Alpha would end up hitting insurmountable roadblocks; but Intel should have seen what was coming when they used the concept in the P4's NetBurst. Hopefully, the lessons learned have influenced today's processor designs.
Re: (Score:2)
POSIX was just a bunch of Unix vendors who got together and wrote a 'standard' that was loose enough to cover all the idiosyncrasies of most of their current implementations, with a little horse-trading thrown in for some of the outliers.
Worse than that--DEC's involvement caused some wags to quip that POSIX was DEC's attempt to prove that OpenVMS was the One True Unix. :)
Now, the Single Unix Spec (SUS) on the other hand....
Re: (Score:2)
Wasn't Windows NT 4.0 POSIX compliant?
Re:Not Very Comparable (Score:5, Informative)
The POSIX NT subsystem (and Interix, the user-space software that runs in the subsystem) have existed for a very long time, possibly all the way back to before NT 4. The NT kernel doesn't actually use Win32 (or Win16, DOS, or Win64) system calls; it uses NT system calls, which are a superset of the functionality in all of those, plus the functionality required for OS/2 and POSIX. For example, the NtCreateFile system call implements not only the Win32 CreateFile call (as seen in Win9x) but also the OpenFile call (Win16) and the open call (POSIX). For each API that NT supports, there is a user-mode DLL that translates the API-specific calls (such as open(2)) into NT system calls (such as NtCreateFile()). These are then passed to ntdll.dll, which executes the actual system call (invoking ring-0 kernel code).
The OS/2 subsystem was discontinued years ago, but the POSIX one is still supported. From XP forward, it's been possible to enable the POSIX subsystem and download pre-compiled libraries, shells, utilities, headers, build toolchain (optionally using GCC or MSVC), manpages, and so forth to produce a working, if somewhat bare-bones, UNIX-like environment. Initially called OpenNT and now known as Interix, various third parties have provided additional functionality such as package managers (apt, portage, pkgsrc, or one specifically for Interix from http://suacommunity.com/ [suacommunity.com] ), additional shells, libraries, utilities, X servers, and more.
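That layering can be modeled in miniature (hypothetical Python sketch; the real NtCreateFile takes many more parameters and is reached through ntdll.dll, none of which is shown here - this only illustrates the funnel shape):

```python
# Toy model of NT's subsystem layering: several API "personalities"
# (Win32, POSIX) are thin wrappers that translate their own flag
# conventions into one richer NT-level call. All names illustrative.

import os

NT_OPEN_EXISTING, NT_CREATE_ALWAYS = 1, 2

def nt_create_file(path, disposition):
    """Stand-in for the single NT-level system call."""
    return {"path": path, "disposition": disposition}

def win32_CreateFile(path, creation_disposition):
    # Win32 subsystem: its disposition constants pass through.
    return nt_create_file(path, creation_disposition)

def posix_open(path, flags):
    # POSIX subsystem: map open(2) flags onto NT dispositions.
    disposition = NT_CREATE_ALWAYS if flags & os.O_CREAT else NT_OPEN_EXISTING
    return nt_create_file(path, disposition)

# Both personalities end up at the same NT-level call:
h1 = win32_CreateFile("C:/tmp/a.txt", NT_CREATE_ALWAYS)
h2 = posix_open("C:/tmp/a.txt", os.O_CREAT)
```

The point is that POSIX isn't emulated on top of Win32; both are peers translating down to the same kernel interface.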
Re: (Score:2)
Re: (Score:2)
Correction: the EV6 bus was licensed for use on Athlons. Opteron uses HyperTransport.
killed by upper level management (Score:2)
Like we are going to see happen with SPARC too, I'm afraid.
Re: (Score:2)
I don't see SPARC dying anytime soon. It is manufactured by a variety of chip companies, and I'm sure any of them would license the tech to keep producing their own. If anything, it might go the way of ARM and fragment, but retain the same basic instruction set. I don't see Oracle giving up such lucrative hardware sales anytime in the future either. They might gobble up all the good parts and leave the rest of Sun to slowly bleed to death, but I think it's going to be a slow death. Oracle is going to do to
Re: (Score:2)
Itanium was killed primarily by closed-source software...
A few years ago, an Itanium box made a very good but expensive Linux box, as did Alpha for that matter...
However, while Windows was ported to Itanium, most of the apps people wanted to run weren't, so Windows was effectively useless on Itanium because it had no applications... Very few commercial software companies would write software for it because of the small number of users, and the number of users won't increase because of the lack of software.
Re: (Score:2, Informative)
Compaq's upper level management's arguments about Itanium's inevitability in the marketplace and economies of scale are a prime example of how you should never let management make decisions of real consequence. I listened to meetings at Compaq where not a single engineer in the crowd agreed with management, but there was nothing they could do. Everyone knew that the game was over simply because a bunch of morons with MBAs thought Intel was unbeatable and they wanted to give up.
We couldn't understand it unti
Re: (Score:2)
Given how x86 has utterly destroyed the low-end RISC market, I'm going with the MBAs on this one. Had Compaq's engineers had their way, apparently they would have spent billions of dollars to be the last-place player in a dead-end market. No matter how great Alpha was, the marketing problems were probably insurmountable.
Re: (Score:2)
Compiler for Windows
Therein lies the problem. Why were you running an OS originally written for x86 (as in 8086) on a RISC processor?
What, somebody was running MS-DOS or Windows 95 on Alpha? (Windows NT was originally written for the Intel 80860, and later MIPS, and for 32-bit x86, according to this article [winsupersite.com].)
The Alpha was supposed to run Unix - Tru64 Unix in particular. Running in a proper 64bit environment the Alpha was an incredible chip.
Well, Unix plus OpenVMS, but they both supported a 64-bit environment.
Re:Not Very Comparable (Score:5, Interesting)
The Alpha was supposed to run Unix - Tru64 Unix in particular. Running in a proper 64bit environment the Alpha was an incredible chip.
This is a pretty gross oversimplification. First of all, Microsoft spent a lot of money writing a portable OS partially because the conventional wisdom at the time was that RISC would bury x86. (Keep in mind they could have just kept using OS/2.) Digital also badly needed volume for their chip production and made a somewhat serious attempt at the Windows workstation/server market. That Alpha was pigeonholed as a Unix chip is one of the main reasons it failed.
Re: (Score:3, Informative)
Just as a point of fact, DEC did not at all blow-off the x86 market even while they were flailing away on Alpha. They were the #3 commodity server vendor when Compaq (#1) bought them, ahead of both IBM and HP. (Personally speaking, Digital had some serious credibility with us PC guys.)
Furthermore, Compaq cited Digital's services group's expertise with Wintel as one of the main reasons they bought the company. I would say that of all the old minicomputer companies, Digital did as good a job of adapting as anyone. Sha
Doubt it. (Score:5, Interesting)
Does this mean the end of Itanium? Will it be missed, or was it destined to be another DEC Alpha waiting for its last sunset?
Kinda funny to make that comparison, since the Alpha was killed to enable the Itanium. (A long story involving HP making a deal with Intel to hand over the last of PA-RISC/Itanium processor development to Intel, and Compaq killing Alpha at the same time to clear out the market, since HP was in the process of purchasing DEC/Compaq, although the acquisition was not yet public at the time of the cpucide.)
But I doubt it's the end of Itanium. Itanium models have things that even the latest Xeons don't in terms of RAS. [wikipedia.org] Most customers don't care about that level of fault tolerance and reliability, but the ones who can't migrate to Linux (or Windows) because they are dependent on features of more proprietary OSes like Tandem (now HP) NonStop [wikipedia.org] do need Itanium, and their software is unlikely to be ported to x86 anytime soon (it took roughly 4 years to get NonStop ported to Itanium in the first place).
Re: (Score:2)
I thought Intel had partnered with DEC to make the Alpha chip, and that Intel held the patents on it. Intel finally decided to tell DEC, sorry, but we (Intel) do not want to use these (the Alpha chip designs) anymore. Or something like that, anyway. Intel forced DEC to stop making the CPU, which left DEC screwed. DEC's value dropped enough for HP to buy it.
Wasn't the Pentium II more like a RISC CPU with a CISC interpreter so it could run Windows and the rest of the 32-bit CISC stuff? So Intel needed the Alpha to go
Re:Doubt it. (Score:5, Interesting)
I thought Intel had partnered with DEC to make the Alpha chip. Also Intel held the patents on it. Intel finally decided to tell DEC sorry but we (Intel) do not want to use these (the Alpha chip designs) anymore. Or something like that anyway. Intel forced DEC to stop making the CPU which left DEC screwed.
Sorry, that is not even close. DEC sued Intel over infringements of the Alpha patents in Pentium processors. One of the results of the settlement was that Intel acquired DEC's Hudson, MA fab (which still operates today). In no way were DEC and Intel partners in Alpha, though ironically, Intel ended up making Alpha chips in the Hudson fab for several years under contract to DEC. What killed Alpha was years of neglect by Bob Palmer (DEC CEO) followed by Compaq's cluelessness. HP ended up with both Alpha and Itanium and bet the farm on the latter, but by that time it probably didn't matter.
Re: (Score:2)
That's how I remember it as well. But it wasn't just the Alpha chip that Intel was forced to manufacture (after being forced to buy the Hudson, MA fab) but StrongARM as well. Remember that? Ultimately it was this that set the stage for the death of Alpha. After suffering years of neglect at the hands of Intel in fabrication technology advancements, and missing out on many planned die shrinks that would have kept it ahead, it finally got the axe before the EV8 variant had a chance to see the light of day
Re:Doubt it. (Score:5, Informative)
The WSJ mentioned that Intel was porting a lot of the Itanium specific fault tolerance features over to the Xeons.
Re: (Score:2)
Here is an article scrounged up by a quick Google search that reiterates what I read in the WSJ.
http://www.brightsideofnews.com/news/2009/5/27/intel-nehalem-ex-xeon-spells-the-voice-of-doom-for-itanium.aspx [brightsideofnews.com]
Re: (Score:2)
This may or may not count as irony, but VMS (DEC's main OS) survives solely as an OS for HP's Itanium-based systems. Further weirdness: a major app for this platform is Rdb, a DBMS that Oracle bought from DEC over a decade ago. It's interesting that two companies whose mainstays are competing tech (x86 servers for HP; the Oracle DBMS and now x86 and SPARC Sun servers for Oracle) work so hard to keep this particular legacy stack alive.
What does Netcraft say? (Score:3, Funny)
Re: (Score:2)
Netcraft is dying, Netcraft confirms it.
Every Chip is a DEC Alpha (Score:4, Insightful)
They all get outmoded.
Thank God (Score:2)
Now I won't have to decline all those useless Itanium updates in WSUS console every month.
Re: (Score:2)
Pick up a Pentium 4 CPU. See if you can get one of the 3.6 GHz ones. Watch the type of cooler you use: you want to cook on it, not set the place on fire.
*shovels in some more troll food*
They will be in millions of homes? (Score:3, Insightful)
Re: (Score:2)
Actually, the Xbox 360 is a multi-core PPC derivative: tri-core, in-order execution. (Interesting, actually.) The PS3 is powered by the Cell, whose core is also a POWER derivative.
In a simple analysis, the Cell processor can be split into four components: external input and output structures, the main processor called the Power Processing Element (PPE) (a two-way simultaneous multithreaded Power ISA v.2.03 compliant core), eight fully-functional co-processors called the Synergistic Processing Elements, or SPEs, and a
Re: (Score:2)
Don't forget cars. Most ECUs use PowerPC CPUs.
Re: (Score:2)