Intel's Itanium Will Get x86 Emulation
pissoncutler writes "Intel has announced that they will be releasing a software emulation product to allow 32-bit x86 apps to run on Itanium Processors.
According to these stories (story 1, story 2), the emulator is capable of the x86 performance of a 1.5GHz Xeon (when run on a similar-speed Itanium). Who said that no one cared about x86 anymore?"
Fun (Score:5, Informative)
Mirrors:
story 1 [martin-studio.com]
story 2 [martin-studio.com]
Re:Fun (Score:5, Informative)
You could put the chip in EPIC (ia64) mode and everything would run through the normal pipeline, or in ia32 mode, where things first ran through the ia32 translator and then most of the normal pipeline. Yeah, you took a performance hit in ia32 mode, but it was the price you paid for "100%" backwards compatibility.
So, I am not sure why the change to a software emulator, unless:
1) they ditched the hardware emulator to get back some real estate of the die, or
2) they didn't like switching the chip between ia32 and ia64 modes.
Also, you can tell I've been out of the Itanic design loop for 5 years now. So, some information is out-of-date or lost in the fog of memory. And, I'd like to say that Merced was such a horribly managed project I left engineering.
Re:Fun (Score:5, Interesting)
IMHO, the software emulator is a better long-term solution. A hardware emulator uses some power even if you're not using it and drives up the cost of the chips by taking up real estate and increasing the defect rate. Your design-test cycle is also much faster for a software solution. There's also the marketing point of "we're doing this well so far, and will give you an even better version when it comes out, for free". They can't easily upgrade hardware for free at a later date. The software emulator probably has a lot of overlap with the compiler group, so you might get compiler research almost for free.
Also, I assume most of the guys writing the software emulator aren't experts in hardware design and vice versa. The two projects are completely independent and likely don't steal personnel from each other.
Re:Fun (Score:5, Insightful)
If you make the emulator separate enough from the core of your new architecture, you can switch the power off when you're not using it. A number of big pieces of silicon in our lives do this, including mobile video solutions (I think the latest mobile Radeons, and maybe some of the desktop chips as well?) and some CPUs.
The only really good thing about a software solution is that you could have a microcode update, as you say. And of course it takes up space; that's always a bummer.
Re:Fun (Score:2, Interesting)
Then, they could do the x86->Itanic code conversion rather invisibly and dynamically as the need arose.
Re:Fun (Score:2)
Re:Fun (Score:5, Insightful)
Re:Fun (Score:4, Insightful)
Re:Fun (Score:2)
Intel: Back to its roots (Score:5, Insightful)
Back before Bill Gates and IBM's Entry Systems Division thrust Intel microprocessors into every other home on the planet, Intel actively courted electronic systems designers with the promise of developing products that wouldn't invalidate all existing design work in one swell foop. And, for the most part, they held up their end of that promise, which is why the Pentium 4 still has a little bit of the 8080 in it.
Now, when the i432 came out, it was a completely different beast -- and the i432 died a justified death. The i860 didn't fare that well, either. The i960 has seen quite a number of design-ins, because the solution base the i960 was geared to was sufficiently different from the 80x86 that designers didn't try to replace 80x86 chips with the RISC-based i960.
Intel, that was a clue.
What Intel didn't foresee, but should have, is that the great technological bust of 1999 put a number of companies under. Source code has flown to the four winds; in some cases the foreclosures also nailed every single backup. In short, the migration path via recompilation was no longer an option. (Not to mention that there were no dollars to make even the most trivial changes to the source to deal with 64-bit processors.)
So this announcement is surprising only in that it comes so late in the product development cycle, as Intel is coming out with its second generation of IA64 chips.
Competition. It's a good thing.
Re:Intel: Back to its roots (Score:2, Interesting)
Go back to the first time Greenspan said "irrational exuberance" and start looking there for when the "real" economy started to tank. The stock market dive (again, mid Y2k-ish) was an aftershock of the real problems in the economy. Looking up stock market info would probably be trivial, yet I'm too lazy to do it.
It is backwards compatible (Score:3, Interesting)
Re:Fun (Score:5, Informative)
I think that's a different situation. For starters, Itanium already does IA32 in hardware (it's just really crappy apparently).
DEC wasn't in the x86 market to start with so FX!32 extended their market by making NT/Alpha more attractive. With the 21164, Alpha introduced data handling functionality in hardware that was intended to accelerate x86 emulation. It was probably too late by then.
There must still be major management/direction problems with the Itanium project for them to resort to this kind of hack. It's embarrassing that they can outperform their hardware implementation in software.
The only software emulation I can think of that was successful was Apple's 68k emulation for PPC, but their approach was brilliant and well thought out IMO (smooth transition, fat binaries including code for both chips). At the time, PPC was compelling. I don't think Itanium performance is as compelling even though Itanium 2 is pretty decent from what I've seen. I think for a straight 64 bit Linux system, Itanium 2 is a much better chip.
I suspect Intel and friends (oops almost typed fiends!) will be back with improved hardware support for IA32 because people won't be satisfied with the emulation performance. AMD has to feel pretty good about having Intel/HP in this position.
-Kevin
I would like to run... (Score:3, Funny)
Or something like that... =)
-dave-
Get BearShare! [bearshare.com] for your p2p needs!
Re:I would like to run... (Score:2, Funny)
Opteron (Score:5, Interesting)
Re:Opteron (Score:5, Informative)
And industry won't really adopt a certain chip - I'm sure it'll be just like the x86's today; you can go back and forth between Intel and AMD pretty easily with each new computer you buy - unless you're anti-Intel because they have that agreement with Microsoft.
Re:Opteron (Score:4, Informative)
Actually, this is a pretty major fork between AMD and Intel. Unless there's a new processor made by one of them, the two competing 64-bit "x86" systems are mutually incompatible. People are going to have to commit to one or the other, because the instruction set, hell the coding style, is markedly different in the two architectures. AMD's offering, x86-64, is very much a cleanup of the x86 instruction set, with a few features that should have gone into the architecture long ago. IA-64, on the other hand, is essentially a complete abandonment of x86, which, as others mentioned, is something that really hasn't happened with Intel since they made the 8080 decades ago.
While I feel that eventually there's probably going to be in-processor emulation of the competitor's code, that's not the case now. This is perhaps where the AMD-Intel war gets truly ugly. Since the days of the 286, the rivalry has been essentially tit for tat; a few added features by one side get picked up by the other. This is a lot different -- there is no easy migration back and forth.
Re:Opteron (Score:4, Informative)
instruction set architectures. you're not going to be able to run your x64 kernel on an ia64 chip. it's not in the least similar or analogous to the ia32 situation.
Re:Opteron (Score:4, Informative)
According to their site, if you want to run x86-64 code you cannot use 16-bit legacy apps.
Yes, technically the chip can run all those apps, but then it is just the next Athlon, and not a 64-bit chip with extra registers.
Re:Opteron (Score:4, Insightful)
<quoting sub-section 1.2.3>
Compatibility mode--the second submode of long mode--allows 64-bit operating systems to run existing 16-bit and 32-bit x86 applications.
<end quote>
However, I think what you're looking for is a little earlier in the section.
<quoting sub-section 1.2.1>
Long mode does not support legacy real mode or legacy virtual-8086 mode, and it does not support hardware task switching.
<end quote>
Now, this may seem like a bit of a loss, since DOS was run in real mode, and Linux 1.xx made use of the hardware task switching, but neither of these operating systems is ever going to run in long mode on an x86 chip since they've both long since been EOLed. Even running DOS programs under Win2003 won't require real mode (unless I'm really off as to how the DOS window works).
In short, this is just cruft that would never be used in x86-64 long mode anyway.
Re:Opteron (Score:2)
Re:Opteron (Score:2, Insightful)
Don't kid yourself. Microsoft and Intel are in bed together and have been for a very long time. Once again, I don't have primary sources, much like the parent (I'm sure someone will post some), but I know that Microsoft works very closely with Intel, more so than they do with AMD, if they do at all.
1.5ghz Xeon? (Score:4, Interesting)
Re:1.5ghz Xeon? (Score:5, Informative)
Re:1.5ghz Xeon? (Score:2)
Emulator, converter? (Score:5, Informative)
Ultimately, all an emulator does is convert instructions from one architecture to another. It's almost always more efficient to translate instructions in blocks.
To come up with a really primitive, simple example, imagine a simple instruction set with a load, add, and branch if zero-set.
Code might look like this:
lda avar
add bvar
bre label
Now imagine we were translating to an instruction set that had mostly the same instructions, but needed an explicit test instruction (tstz) to set our conditional flag.
Instruction-by-instruction conversion might turn out like this:
lda avar
tstz
add bvar
tstz
bre label
Now if the conversion was done on the entire block, we might end up with this:
lda avar
add bvar
tstz
bre label
Granted, this is a pretty simple example, but I hope it makes my point. Block conversions allow a great deal more optimization than instruction conversions.
This optimization might sound like a lot of work for the host processor, but if the block in question is a tight loop you more than make that up.
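To make that concrete, here's a rough C sketch of the same idea -- a translator that looks at a whole block and only emits the flag test right before the branch that consumes it. The opcodes and the emit callback are invented for the example; this says nothing about how Intel's actual translator is built.

/* Rough sketch of block-based translation with redundant flag-update
 * elimination. The opcodes and the emit callback are invented for this
 * example; this is not Intel's actual emulator, just the idea. */
#include <stddef.h>

enum op { OP_LDA, OP_ADD, OP_BRE };

struct insn { enum op op; const char *operand; };

/* Translate one basic block at a time. Because the whole block is visible,
 * the flag-setting "tstz" is emitted only right before the branch that
 * consumes it, not after every instruction that could affect the flag. */
static void translate_block(const struct insn *blk, size_t n,
                            void (*emit)(const char *mnemonic, const char *operand))
{
    for (size_t i = 0; i < n; i++) {
        switch (blk[i].op) {
        case OP_LDA: emit("lda", blk[i].operand); break;
        case OP_ADD: emit("add", blk[i].operand); break;
        case OP_BRE:
            emit("tstz", "");              /* set the condition flag just in time */
            emit("bre", blk[i].operand);
            break;
        }
    }
}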
Re:Emulator, converter? (Score:2, Informative)
What do you mean by "whatever normally optimizes the native instruction set"? What normally allows the optimization of assembly is data flow analysis of constructs in the higher-level language being compiled (like C). You can only do limited types of optimization with the raw assembly (pe
Re:1.5ghz Xeon? (Score:2)
Better C|Net story (Score:5, Informative)
(Yes, it's linked from the posted C|Net story).
Duh.. (Score:5, Interesting)
Perhaps that is what doomed Itanium 1 to failure from the start. (Well, that combined with the horrible heat output and power consumption of the Itanium 1.)
Re:Duh.. (Score:3, Insightful)
--- David Wheeler, chief programmer for the EDSAC project in the early 1950s.
Scarily, it's still just as true today...
Emulation (Score:5, Interesting)
conversion? (Score:2)
By Murphy's law, dynamic linking or primitive datatype sizes will keep this from being practical.
Re:conversion? (Score:3, Informative)
Re:conversion? (Score:3, Interesting)
I think this makes it orthogonal to RISC/CISC/VLIW arc
The way I see it (Score:5, Insightful)
Re:The way I see it (Score:5, Insightful)
> And when everyone is changing over, that's CRITICAL.
Pffft. If you want to run 32-bit, get a P4 or Xeon. If you want to run 64-bit, your most important application(s) is/are 64-bit anyway, right?
What uses would a company have to go 64-bit? Big ass database? High performance workstation perhaps? In the database scenario, you'd probably be running a 64-bit database anyway (or you'd be wasting your time and money). It is likely this would be your only, or at least most important, service running on the box.
How about a high performance workstation, like CAD or something. Well, that CAD engineer will probably have 64-bit CAD, which is what he/she will use most of the day. Who cares if MS Outlook or WordPerfect run at only the speed of a 1GHz processor (or whatever the actual emulation speed equivalent is)?
I don't see what the big deal is, but I know the average Slashdotter has a "AMD inside" bumper sticker on his modded chassis.
Re:The way I see it (Score:5, Insightful)
Itanium may be a true server-class chip, capable of pulling off the same stuff PA-RISC and SPARC can. But if there is *any* performance advantage, it is so slight that it is overshadowed by pathetic industry and software support. Sure, you will soon be able to run Windows, and have been able to run Linux, but ultimately there isn't much to run on those systems.
AMD has struck a chord here. A lot of large environments (especially clusters) have been getting by on 32-bit architecture because of the great application support and price/performance ratios. The Opteron falls into the same price/performance league as those 32-bit systems in use, can equal or best those processors in 32-bit tasks, and, as the software matures and gets recompiled, can smoothly migrate to 64-bit operation without a hiccup. When these huge clusters are running software packages that cost millions to develop, there is a vested interest in continuing to use them while simultaneously ironing out the kinks in their 64-bit versions.
There is a damn good reason why IBM and others are finally acknowledging AMD as worthy of building servers around. Itanium sales have been pathetic, and there has been much more customer interest in the possibility of upcoming Opteron products than the reality of existing Itanium systems.
Re:The way I see it (Score:3, Interesting)
I think you are right, but not necessarily about what you think. It is "CRITICAL" because it is "The way [you] see it". I believe it is not the speed that is important here, but the perception.
Do you know what this means??????? (Score:5, Funny)
NOOOOW I can watch my old DOS demos from Unreal and The Humble Crew in less time than my brain can perceive them. Just what topped last year's Christmas list.
pm
Re:Do you know what this means??????? (Score:2, Funny)
And I thought it was just going to be ... (Score:5, Funny)
And I thought it was just going to be a space heater.
Sounds familiar. (Score:5, Insightful)
For Intel to have a long-term future without the embarrassment of junking the whole architecture, they need Itanium x to run IA32 credibly. Advances in x86 performance keep coming at such increasing development costs that I think they would have to be able to migrate the market to IA64 within the next 5-10 years.
I would like for both the IA64 and the Hammer architectures to flourish, but Intel's taken an extremely bold step with EPIC, and I don't want to see them get punished in the market for that alone. I like the spirit of aiming higher.
Re:Sounds familiar. (Score:5, Interesting)
Well, no.
Actually, it was a painful transition. Horrible hacks were required to make it work, and Apple lost considerable market share.
From the user perspective, all the applications that used the FPU stopped working. Worse, the PPC only had (has?) a 64-bit FPU, while the 68K and x86 have 80-bit FPUs. So a simple recompile often wasn't enough. Most of the engineering applications (CAD, EDA) were never ported to the PPC at all. There were unsupported 3rd party FPU emulators for the 68K FPU, but they were really slow, since they had to emulate a wider FPU.
Most of the OS ran in 68K emulation mode for years after the "transition". The PPC interrupt model was mainframe-like, assuming that you didn't do much in an interrupt routine except activate some process. The 68K interrupt model was minicomputer-like, with multiple interrupt levels used as the main locking primitive. Hammering those two together was painful. There were some things you just couldn't do in PPC mode; you had to drop into 68K emulation to prevent interrupts.
The old MacOS had what was euphemistically called "cooperative multiprogramming". That didn't mean you had threads without time-slicing, like a real-time OS. It meant you didn't have real context switching at all. You plugged your code into callbacks at different levels of processing, like "system tasks", "VBI tasks", "timer tasks", "interrupt tasks", etc., none of which could block. No mutexes. No locking. Only interrupt prevention. Trying to do anything in the background was very tough. (I know; I wrote a PPP protocol module for the 68K Mac. I had the only one that could dial the phone in the background without locking up the whole machine, and it wasn't easy.)
Worse, the 68K emulator depended on a jump table with 65536 entries, one for each possible value of an instruction's first 16 bits. Early PPCs didn't have enough cache to keep that entire table in the cache all the time. But if it wasn't all in the cache, 68K emulation performance was terrible.
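For what it's worth, that style of dispatch looks roughly like the sketch below. The names are invented and this is not Apple's actual emulator code, just the general shape of a 65536-entry table-driven decode loop, which makes it obvious why cache behaviour dominates.

/* Rough sketch of a dispatch-table interpreter of the kind described above.
 * All names are invented; this is not Apple's 68K emulator, just the shape
 * of the technique. */
#include <stdint.h>

typedef struct cpu_state cpu_state;        /* emulated registers, flags, ... */
typedef void (*handler_fn)(cpu_state *st, uint16_t opcode);

/* One handler pointer per possible first instruction word: 2^16 entries.
 * At 4 bytes per pointer that is already 256 KB, far bigger than early PPC
 * caches, which is where the performance cliff comes from. */
static handler_fn dispatch[65536];

static void run(cpu_state *st, const uint16_t *code)
{
    uint32_t pc = 0;
    for (;;) {
        uint16_t opcode = code[pc++];
        dispatch[opcode](st, opcode);      /* one indirect call per instruction */
    }
}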
Amusingly, much of the perceived performance advantage of the early PPC machines came from the miserable 68K code generators used on the Mac. The Apple and Zortech compilers were clueless about 68K register allocation, preferring to do all arithmetic in register A0. The PPC code generators were much better. Some high-end apps used to be cross-compiled on Sun 68K machines because the Mac code generators were so bad.
Most of these problems were papered over using the Jobs Reality Distortion Field. But this was the period when Apple started losing market share big-time. Arguably, the PPC transition cost Apple its preeminence.
What Apple really needed was faster 68K CPUs, not a new architecture. Technically, that was quite possible. The Motorola 68060 (never used by Apple, but found in the last 68K Amigas) was faster than the PPC of the same vintage. But Jobs had cut a deal with IBM under which IBM was supposed to make MacOS-compatible machines (!), and that was the motivation for the PPC.
A0 math (Score:2)
A0 was an address register... did you mean D0, or did they actually do math in an address register?
Re:Sounds familiar. (Score:3, Interesting)
But you made one major error:
Jobs was not at Apple when they made the PPC transition. He was at NeXT.
I remember very clearly reading an interview given by Jobs where he ripped Apple's decision to switch to PPC.
Re:Sounds familiar. (Score:4, Informative)
Well, no. Interestingly, you are technically correct on a couple of complex points, but you seem clueless on others. Perhaps your memory has faded. Think C 5's code generator was far better than MPW (Apple's) C or Symantec C++, but Metrowerks C was ultimately much, much better. MPW C tended to frequently do shit like (actual example from disassembling the 7.1-era Finder, IIRC):
mov.l a0, a5
mov.l a5, a0
Note lack of peepholing.
What you call "cooperative multiprogramming" is actually called "interrupt time." All documentation of which I'm aware refers to it as "interrupt time." No euphemism required.
Jobs had been fired over seven years earlier when John Sculley cut the PowerPC CPU deal, and it had nothing to do with PowerMac clones.
Most of these problems were papered over using the Jobs Reality Distortion Field. But this was the period when Apple started losing market share big-time. Arguably, the PPC transition cost Apple its preeminence.
No, dude. I was there. Apple never had "preeminence" or much market share. Apple was always struggling under the "Apple is dying" myth (and still does in some quarters today). In the mid-nineties, Apple had a series of crises caused by Sculley's and his successor's ineptitude. Worse, Apple stopped playing to its traditional strengths (industrial design and hardware/software) under Spindler, a problem that, combined with vigorous and useless penny-pinching in all the wrong places -- Apple's hardware and software quality hit the lowest point it would ever reach at the end of Spindler's reign -- ultimately led to the ouster of Spindler. Amelio failed to recognize this (or much of anything else about Apple), which ultimately led him to buy his own doom in NeXT and the return of Jobs.
Re:Sounds familiar. (Score:2)
I never used Think C, but I used MPW and Symantec/Zortech, and later Metrowerks. As you point out, MPW and Symantec/Zortech weren't very good. Metrowerks was a big improvement.
You're right about Jobs not being there at the PPC transition.
Apple had more market share than IBM in the Apple II days, and it was all downhill after that. When Gil Amelio came in, Apple's market share was about 7%. Now, it's around 2.5%. (Apple likes to emphasize high
Re:Sounds familiar. (Score:2, Informative)
Re:Sounds familiar. (Score:3, Informative)
There was no FPU on the 020, 030, or 040LC - in fact the only Mac chip which had one built-in was the 040, so the possibility of using the math library you mention was well known at the time. People who did use it had no problem (from a math point of view) moving to PowerPC, and those that did were well aware that they
Details? (Score:3, Insightful)
Anybody got the technical details on this "emulation" versus the x86 compatibility in Opteron?
JIT compilation or instruction for instruction?
FX32! for Itanium (Score:5, Interesting)
Re:FX32! for Itanium (Score:3, Informative)
Oops that should be FX!32 (Score:2, Informative)
Re:FX32! for Itanium (Score:2)
I don't know what they bought specifically, but I seem to only remember that they bought the fab for Alphas, as well as DEC's NIC and StrongARM technology. IIRC, DEC kept the Alpha technology, but having been bought by Compaq and then HP, I think there are enough cross-licensing deals in place that Intel might just have a lot of those rights available to them.
Re:FX32! for Itanium (Score:2)
Re:Where to download this "FX32!" ? (Score:3, Interesting)
Oh yeah, this'll work really well... (Score:2, Insightful)
Re:Oh yeah, this'll work really well... (Score:2)
goodbye ia32 on-chip emulation? (Score:3, Informative)
And, really, can't plenty of us just roll our eyes and go back to compiling our systems from source? I mean, once there's a linux kernel + glibc + gcc port, thousands of applications are instantly available to you.
<preachy>Every time you find yourself strapped to a single architecture, ask yourself why you have all this proprietary baggage holding you back. Whether it's that Word
One wonders why Intel didn't do this originally (Score:2)
This, tragically, does hurt AMD quite
Re:One wonders why Intel didn't do this originally (Score:2)
Re:One wonders why Intel didn't do this originally (Score:2, Informative)
Re:One wonders why Intel didn't do this originally (Score:3, Interesting)
I don't think so.
I had read multiple rumors about Intel having something up their bunny-suited sleeves, but most of these rumors had Intel supporting x86-64 -- that is -- copying AMD for the first time. This announcement takes away one of the unique advantages of the Opteron/Athlon64 without following AMD's lead.
If you think running 32-bit code half as fast (1.5 GHz. Xeon vs. 2.8 GHz. Xeon) on a processor that costs four times as much takes away any advantag
Re:One wonders why Intel didn't do this originally (Score:2)
To me it looks like Opteron is around 8x more cost effective at running 32-bit code
I thought the point was to finally move away from "old" 32 bit code. This 32 bit capability is there for backward compatibility until such time as individuals and companies retire their old 32 bit apps.
Things change. The industry moves forward. This is why we don't all run 486's anymore.
Intel is in trouble.
Let's hope not. Competition is a good
Re:One wonders why Intel didn't do this originally (Score:2)
If the Pentium wasn't backwards-compatible, we might still be running on 486's after all!
Now the only question is (Score:2)
I don't think they have a plan (Score:2, Informative)
If you add A
why bother? (Score:2)
Bad Reputation still here for AMD (Score:3, Interesting)
I personally don't agree, but my opinion isn't worth jack inside the corporation, and I already know the system administrator has an "Intel Inside" sticker on his forehead, even if the chips cost 2x as much. They say they pay for "quality". Psssh, what a load of bull.
Re:Bad Reputation still here for AMD (Score:2)
Motives... (Score:5, Interesting)
From what I've seen, I would argue that their motive is the latter. Intel has shown on several occasions that, these days, they simply don't give a damn about the end user. They care about market share, profits, and their precious stock price. Let's not forget the fact that "Pentium" was coined because Intel wasn't allowed to trademark the number 586.
Remember when they released an overclocked Pentium III to the public, and Tom's Hardware had that nice little article exposing it for the failure it was? It choked on GCC, among other things, while Intel steadfastly denied the problem. Then they actually recalled the processors. Competition at the expense of the end user... wonderful!
It is clear AMD is still going to come out on top in performance on this one, unless "software emulation" doesn't mean what I think it means. It is also clear to me that Intel has to do a lot more than throw some software emulation at an issue before I ever buy another Intel processor.
Intel's Missing the Point of the Opteron (Score:3, Insightful)
The Itanium is a marvelous piece of work; however, who's going to adopt something so unknown versus something so familiar? That is the point Intel missed. 32-bit is dead, 64-bit is here; which one will be chosen?
Will this be implemented a la 'code morphing'? (Score:2, Informative)
x86-64 (Score:2, Interesting)
And as for licensing, a clean-room implementation should be very easy considering it is simply an extension of x86.
Re:x86-64 (Score:2)
That would be the best thing that could happen to AMD.
It would validate AMD64 and give them a HUGE cost advantage. We are talking 5X or so at current pricing levels.
Will compiler tech ever get there? (Score:2)
The reality is we may never get compilers that are that good, and we may never have many applications where much parallelism can be drawn out anyway... at least not enough to make it worthwhile.
EPIC is a huge gamble... one that may not pay off in the long run. I'm no fan of x86 per-se, but it seems that AMD has tried to bring it up to speed with x86-64... more registers (always the b
Misleading headline (Score:2, Informative)
Itanium has always had x86 emulation; before, it was just done in hardware, and very, very slowly. (The Itanium 1, at 800MHz, ran x86 software at the speed of a 150MHz Pentium or so.)
A story at The Register, here [theregister.co.uk] explains that this new software will translate some of the x86 assembly to IA-64 assembly at runtime. (See picture [atmarkit.co.jp])
This is the same way that HP's Aries [hp.com] works -- which translates HP-PA instructions into IA-64.
That works pretty well actually, delivering about 80% of the
If Intel hadn't done it (Score:2)
Wonderful example of spin doctoring (Score:3, Interesting)
No, they're trying to spin this story as if it's actually something good and not a patch for a white elephant.
See this story on The Register [theregister.co.uk]
Re:Good work. sort of... (Score:2)
Actually, most of the architecture classes I took at school used the MIPS architecture as an example -- an elegant, clean RISC design, which makes it excellent for teaching. My work in other CS courses was done on platforms from Sun to Mac to SGI, but the x86 was conspicuously absent.
In fact, the only CS course involving any x86 programming whatsoever was a graphics class which use
Re:Good work. sort of... (Score:4, Insightful)
Re:Good work. sort of... (Score:2)
The rest was a good mix of 8-bit embedded devices and some build-your-own-CPU emulations. This was in the early 1990s, however.
Like the OP, I've come across a few very new graduates who were taught x86 only. That wouldn't be so bad if it weren't for the attitude that they must have been taught it because it was the best and only real option.
Most popular perha
Re:Clean Design (Score:5, Informative)
Re:Clean Design (Score:3, Insightful)
Re:Lets see how well it runs java or .net code (Score:5, Insightful)
I don't think you really mean parallel
Compilers that support interleaving can achieve parallelism up to the number of stages in the pipeline (something ridiculous on ia64, like 13 or so).
Now of course, if a compiler can optimize for interleaving without programmer intervention, a JIT can optimize for interleaving.
Re:Lets see how well it runs java or .net code (Score:2)
Interleaving is perhaps the most difficult thing I've had to do as a programmer. It requires really strong math skills and an extremely strong understanding of the architecture itself.
You have to be, more or less, a mathematician, a chip designer, and a software developer all at once to properly optimize. I'd be willing to wager that the most important CS problems to be solved in the near future are in
Re:Lets see how well it runs java or .net code (Score:5, Insightful)
A VM bytecode program contains a lot more structural information about how the original program looked than compiled C or C++ code does. On the Itanium, the compiler has to take a "best guess" or use some profile data to compile for the most common program flow; this is one of the largest factors limiting Itanium performance, since a lot of the run-time hardware optimisations aren't and can't be there.
A VM could analyse program flow and compile different versions of the same function, dynamically changing which one is used, for example (see the sketch below).
Of course this doesn't help the vast majority of C/C++ code out there, but your assertion is hardly correct.
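A minimal sketch of that idea, with names invented for illustration (real VMs like HotSpot are vastly more sophisticated about detecting and replacing hot code):

/* Profile-driven recompilation, reduced to a toy: count calls to a function
 * and swap in a "recompiled" version once it proves hot. Purely illustrative. */
#include <stdint.h>

typedef int (*impl_fn)(int);

static int baseline_impl(int x)  { return x * 2; }   /* quick first-pass code */
static int optimized_impl(int x) { return x + x; }   /* version rebuilt from profile data */

static impl_fn current_impl = baseline_impl;
static uint64_t call_count;

int hot_function(int x)
{
    /* Once the call count says this function is hot, switch to the version
     * the VM recompiled using the program flow it observed at run time. */
    if (++call_count == 10000)
        current_impl = optimized_impl;
    return current_impl(x);
}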
Re:Lets see how well it runs java or .net code (Score:2)
They'll definitely compare favorably if you consider that Java and
Java and
Re:Lets see how well it runs java or .net code (Score:3, Interesting)
EPIC (or VLIW, which is pretty much the same thing) architectures define instructions to be executed (or, at least, that can be executed) in parallel in the encoding of the instructions themselves. Most superscalar machines that evolved from single-issue architectures (PPC, Alpha, x86) only have sequential instructions.
That being said, there are almost always instructions that can be executed in parallel. The only difference between EPIC/VLI
PPC 970 would need to emulate more than IA32 (Score:3, Insightful)
It would help PPC for IBM to produce a software emulator for IA32, but it would also need to
don't hold your breath (Score:2)
The desktop market is not really a priority for IBM, so don't expect IBM to put immense amounts of effort into a piece of software that would almost certainly cost you more than buying an actual additional x86 PC to run your 32 bit apps on.
Re:PPC 970 would need to emulate more than IA32 (Score:2)
I was only talking about the instruction set and the processor itself, because the post I was replying to seemed to assume emulating an x86 processor was enough to run x86 applications. Of course more than just the processor is necessary to support an app. Virtual PC does emulate the rest of the hardware required, and Virtual PC+Windows is a good solution for x86 apps on PPC.
WHY... (Score:2, Interesting)
Why why why?
Pity there is no "-1, 100% Wrong" choice, huh?
Re:This is why open source will rock. (Score:2, Insightful)
Re:This is why open source will rock. (Score:2, Informative)
Please! (this is for all the people arguing this) (Score:2)
The last program I had that sort of trouble with was KDE 2.0-beta3 (may have been fixed anywhere from beta3 to beta5) (and that may have been s
Re:Not quite - emulation of virtual machine unavai (Score:2)
I guess bashing
Re:Why not dual CPUs ? (Score:2)