Intel's Itanium Will Get x86 Emulation

pissoncutler writes "Intel has announced that it will release a software emulation product to allow 32-bit x86 apps to run on Itanium processors. According to these stories (story 1, story 2), the emulator is capable of the x86 performance of a 1.5GHz Xeon (when run on a similarly clocked Itanium). Who said that no one cared about x86 anymore?"
  • Fun (Score:5, Informative)

    by inertia187 ( 156602 ) * on Sunday April 27, 2003 @06:41PM (#5821555) Homepage Journal
    Fun. So now they realize, after they create the chip, that they want 20 years of backwards compatibility. The PowerPC folks knew they wanted this, according to this [slashdot.org] Slashdot article.

    Mirrors:
    story 1 [martin-studio.com]
    story 2 [martin-studio.com]
    • Re:Fun (Score:5, Informative)

      by jbs0902 ( 566885 ) on Sunday April 27, 2003 @07:34PM (#5821764)
      Actually, when we created Merced (the 1st Itanic) it was designed to be FULLY backwards compatible (i.e. boot MS-DOS 1.0). 25%-33% of the chip was actually a HARDWARE ia32-to-ia64 translation engine.
      You could put the chip in EPIC (ia64) mode and everything would run through the normal pipeline, or in ia32 mode, where instructions first ran through the ia32 translator and then most of the normal pipeline. Yeah, you took a performance hit in ia32 mode, but it was the price you paid for "100%" backwards compatibility.

      So, I am not sure why the change to a software emulator, unless:
      1) they ditched the hardware emulator to get back some real estate on the die, or
      2) they didn't like switching the chip between ia32 and ia64 modes.

      Also, you can tell I've been out of the Itanic design loop for 5 years now, so some information is out-of-date or lost in the fog of memory. And I'd like to say that Merced was such a horribly managed project that I left engineering.
      • Re:Fun (Score:5, Interesting)

        by karlm ( 158591 ) on Sunday April 27, 2003 @08:17PM (#5821900) Homepage
        Maybe they just wanted a second option. Hardware emulation probably runs some apps faster and software emulation probably runs others faster.

        IMHO, the software emulator is a better long-term solution. A hardware emulator uses some power even if you're not using it, and drives up the cost of the chips by taking up real estate and increasing the defect rate. Your design-test cycle is also much faster for a software solution. There's also the marketing point of "we're doing this well so far, and will give you an even better version when it comes out, for free". They can't easily upgrade hardware for free at a later date. The software emulator probably has a lot of overlap with the compiler group, so you might get compiler research almost for free.

        Also, I assume most of the guys writing the software emulator aren't experts in hardware design and vice versa. The two projects are completely independent and likely don't steal personnel from each other.

        • Re:Fun (Score:5, Insightful)

          by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday April 27, 2003 @10:24PM (#5822423) Homepage Journal
          A hardware emulator uses some power even if you're not using it and drives up the cost of the chips by taking up realestate and increasing the defect rate.

          If you make the emulator separate enough from the core of your new architecture, you can switch the power off when you're not using it. A number of big pieces of silicon in our lives do this, including mobile video solutions (I think the latest mobile Radeons, and maybe some of the desktop chips as well?) and some CPUs.

          The only really good thing about a software solution is that you could have a microcode update, as you say, and of course it takes up space, that's always a bummer.

        • Re:Fun (Score:2, Interesting)

          by Anonymous Coward
          ...if they're working with the old DEC stuff (via Compaq via HP), how come they don't dust off the code that they used on the RISC boxes and then Alphas that would do JIT emulation of Vax binaries, and end up saving the resulting RISC binaries back as it hit the code?

          Then, they could do the x86->Itanic code conversion rather invisibly and dynamically as the need arose.
      • The x86 emulation circuitry on Itanium is really slow. So slow that they think software emulation would be faster. . .
      • Re:Fun (Score:5, Insightful)

        by atam ( 115117 ) on Monday April 28, 2003 @12:01AM (#5822770)
        From what I read in the article, I think it is not an emulator per se. It is more like a just-in-time compiler/translator. Probably it is something similar to what the Transmeta Crusoe or Alpha FX!32 does. Both of these products already proved that a software implementation could do this pretty efficiently.
        • Re:Fun (Score:4, Insightful)

          by Hoser McMoose ( 202552 ) on Monday April 28, 2003 @03:11AM (#5823269)
          It will be almost exactly like what FX!32 does, because that software is no longer "Alpha FX!32", but rather "Intel FX!32". Intel bought pretty much all of the old Alpha technology, software and design teams, as well as the plant that Digital used to build the chips in. This occurred over quite a number of years (starting just before Compaq bought Digital), but this is the first really obvious sign of Intel technology to come out of it.
    • And so did the Opteron as well - for such a large technological leap, backwards compatibility is a must (read: lessened risk for the corporate consumer)...
    • by satch89450 ( 186046 ) on Sunday April 27, 2003 @07:58PM (#5821844) Homepage
      Fun. So now they realize after they create the chip that they want 20 years of backwards compatibility.

      Back before Bill Gates and IBM's Entry System Division thrust Intel microprocessors into every other home on the planet, electronic systems designers were actively courted by Intel with the claim that it developed products that wouldn't invalidate all existing design work in one swell foop. And, for the most part, they held up their end of that promise, which is why the Pentium 4 still has a little bit of the 8080 in it.

      Now, when the i432 came out, it was a completely different beast -- and the i432 died a justified death. The i860 didn't fare that well, either. The i960 has seen quite a number of design-ins, because the solution base the i960 was geared to was sufficiently different from the 80x86 that designers didn't try to replace 80x86 chips with the RISC-based i960.

      Intel, that was a clue.

      What Intel didn't foresee, but should have, is that the great technological bust of 1999 put a number of companies under. Source code has flown to the four winds; in some cases the foreclosures also nailed every single backup. In short, the migration path via recompilation was no longer an option. (Not to mention that there were no dollars to make even the most trivial changes to the source to deal with 64-bit processors.)

      So this announcement is surprising only in that it comes so late in the product development cycle, as Intel is coming out with its second generation of IA64 chips.

      Competition. It's a good thing.

    • But they say this emulator will run faster than using the built-in x86 decoder.
  • by vDave420 ( 649776 ) on Sunday April 27, 2003 @06:42PM (#5821562)
    ...a C64 emulator on an x86 emulator on 1.5Ghz...

    Or something like that... =)

    -dave-
    Get BearShare! [bearshare.com] for your p2p needs!

  • Opteron (Score:5, Interesting)

    by whig ( 6869 ) on Sunday April 27, 2003 @06:45PM (#5821571) Homepage Journal
    Sounds like a defensive reaction to the release of the Opteron. If AMD is offering a 64-bit chip with support for full-speed 32-bit x86 software, then Intel has to have a competitive answer *before* industry adopts the AMD64 over IA-64 for future migration.
    • Re:Opteron (Score:5, Informative)

      by LBArrettAnderson ( 655246 ) on Sunday April 27, 2003 @06:57PM (#5821614)
      no - Intel has been planning for emulation the whole time. AMD still has the advantage with full compatibility at full speed. But you're right; it sure does sound like it.

      And industry won't really adopt a certain chip - I'm sure it'll be just like the x86's today; you can go back and forth between Intel and AMD pretty easily with each new computer you buy - unless you're anti-Intel because they have that agreement with Microsoft.
      • Re:Opteron (Score:4, Informative)

        by ocelotbob ( 173602 ) <ocelot.ocelotbob@org> on Sunday April 27, 2003 @09:49PM (#5822241) Homepage
        And industry won't really adopt a certain chip - I'm sure it'll be just like the x86's today; you can go back and forth between Intel and AMD pretty easily with each new computer you buy

        Actually, this is a pretty major fork between AMD and Intel. Unless there's a new processor made by one of them, the two competing 64-bit "x86" systems are mutually incompatible. People are going to have to commit to one or the other, because the instruction set, hell the coding style, is markedly different in the two architectures. AMD's offering, x86-64, is very much a cleanup of the x86 instruction set, with a few features that should have gone into the architecture long ago. IA-64, on the other hand, is essentially a complete abandonment of x86, which, as others mentioned, is something that really hasn't happened with Intel since they made the 8080 decades ago.

        While I feel that eventually there's probably going to be in-processor emulation of the competitor's code, that's not the case now. This is perhaps where the AMD-Intel war gets truly ugly. Since the days of the 286, the rivalry has been essentially tit for tat; a few added features by one side get picked up by the other. This is a lot different -- there is no easy migration back and forth.

      • Re:Opteron (Score:4, Informative)

        by aminorex ( 141494 ) on Sunday April 27, 2003 @10:05PM (#5822331) Homepage Journal
        x64 and ia64 are entirely distinct and incompatible instruction set architectures. You're not going to be able to run your x64 kernel on an ia64 chip. It's not in the least similar or analogous to the ia32 situation.
      • Re:Opteron (Score:4, Informative)

        by AvitarX ( 172628 ) <me@@@brandywinehundred...org> on Monday April 28, 2003 @12:34AM (#5822907) Journal
        AMD does not have full compatibility.

        According to their site, if you want to run x86-64 code you can not use 16-bit legacy apps.

        Yes, technically the chip can run all those apps, but then it is just the next Athlon, and not a 64-bit chip with extra registers.

        • Re:Opteron (Score:4, Insightful)

          by Hoser McMoose ( 202552 ) on Monday April 28, 2003 @07:42AM (#5823951)
          You might want to read over those tech-docs again, or more specifically the "AMD64 Architecture Programmer's Manual Volume 1", section 1.2: Modes of Operation.

          <quoting sub-section 1.2.3>
          Compatibility mode--the second submode of long mode--allows 64-bit operating systems to run existing 16-bit and 32-bit x86 applications.
          <end quote>

          However, I think what you're looking for is a little earlier in the section.

          <quoting sub-section 1.2.1>
          Long mode does not support legacy real mode or legacy virtual-8086 mode, and it does not support hardware task switching.
          <end quote>

          Now, this may seem like a bit of a loss, since DOS ran in real mode, and Linux 1.xx made use of the hardware task switching, but neither of these operating systems is ever going to run in long mode on an x86 chip, since they've both long since been EOLed. Even running DOS programs under Win2003 won't require real mode (unless I'm really off as to how the DOS window works).

          In short, this is just cruft that would never be used in x86-64 long mode anyway.
  • 1.5ghz Xeon? (Score:4, Interesting)

    by JanusFury ( 452699 ) <.moc.liamg. .ta. .ddag.nivek.> on Sunday April 27, 2003 @06:47PM (#5821580) Homepage Journal
    Is it really emulation, or does it convert x86 assembly so it can run on the Itanium? If you can get 1.5GHz worth of performance out of EMULATION on the Itanium, then I need a new processor.
    • Re:1.5ghz Xeon? (Score:5, Informative)

      by afidel ( 530433 ) on Sunday April 27, 2003 @06:51PM (#5821602)
      It's probably a bit of both; it is probably very similar to FX!32 for the DEC Alpha version of NT4. What that did was emulate x86 code, but if something was getting called a lot it was dynamically recompiled to native Alpha code. Worked pretty well overall.
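The FX!32-style strategy described above (interpret everything at first, recompile only hot code) can be sketched roughly as below. This is a toy illustration, not Intel's or DEC's actual design; the class, callbacks, and threshold are all invented:

```python
HOT_THRESHOLD = 2  # invented: translate a block after it runs this many times

class BinaryTranslator:
    """Toy hybrid emulator: interpret cold blocks, translate hot ones."""

    def __init__(self, interpret, translate):
        self.interpret = interpret  # slow path: emulate one block of guest code
        self.translate = translate  # one-time recompile of a block to "native" code
        self.counts = {}            # per-block execution counters
        self.native = {}            # cache of translated blocks

    def execute(self, addr):
        # Fast path: a translated version of this block already exists.
        if addr in self.native:
            return self.native[addr]()
        # Slow path: count this execution, and translate once the block is hot.
        n = self.counts.get(addr, 0) + 1
        self.counts[addr] = n
        if n > HOT_THRESHOLD:
            self.native[addr] = self.translate(addr)
        return self.interpret(addr)
```

A tight loop crosses the threshold almost immediately and spends nearly all its time on the translated fast path, which is why this approach can approach native-ISA speeds despite being "emulation."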
    • Emulator, converter? (Score:5, Informative)

      by Trillan ( 597339 ) on Sunday April 27, 2003 @07:01PM (#5821629) Homepage Journal

      Ultimately, all an emulator does is convert instructions from one architecture to another. It's almost always more efficient to translate instructions in blocks.

      To come up with a really primitive, simple example, imagine a simple instruction set with a load, add, and branch if zero-set.

      Code might look like this:

      lda avar
      add bvar
      bre label

      Now imagine we were translating to an instruction set that had mostly the same instructions, but needed a compare instruction to set our conditional flag.

      Instruction-by-instruction conversion might turn out like this:

      lda avar
      tstz
      add bvar
      tstz
      bre label

      Now if the conversion was done on the entire block, we might end up with this:

      lda avar
      add bvar
      tstz
      bre label

      Granted, this is a pretty simple example, but I hope it makes my point. Block conversions allow a great deal more optimization than instruction conversions.

      This optimization might sound like a lot of work for the host processor, but if the block in question is a tight loop you more than make that up.
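The tradeoff above can be sketched in a few lines of Python. The toy ISA (lda/add/tstz/bre) is the one invented in this comment, not any real instruction set, and both translators are illustrative only:

```python
FLAG_SETTERS = ("lda", "add")  # toy instructions that change the zero flag

def translate_insn(op, arg):
    """Naive one-at-a-time translation: conservatively materialize the
    zero flag after every flag-setting instruction."""
    out = [(op, arg)]
    if op in FLAG_SETTERS:
        out.append(("tstz", None))
    return out

def translate_block(block):
    """Block translation: only emit tstz when the next instruction in the
    block actually consumes the flag (here, the bre branch)."""
    out = []
    for i, (op, arg) in enumerate(block):
        out.append((op, arg))
        if op in FLAG_SETTERS:
            nxt = block[i + 1][0] if i + 1 < len(block) else None
            if nxt == "bre":
                out.append(("tstz", None))
    return out
```

On the three-instruction example in the comment, the per-instruction translator emits five instructions while the block translator emits four, matching the hand-worked result above.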

    • This is exactly what AMD and Transmeta chips do. They convert x86 CISC instructions into blocks of VLIW instructions. We have been running VLIW machines for many years, actually.
  • Better C|Net story (Score:5, Informative)

    by Webmonger ( 24302 ) on Sunday April 27, 2003 @06:47PM (#5821582) Homepage
    Here's a more detailed C|Net story [com.com].

    (Yes, it's linked from the posted C|Net story).
  • Duh.. (Score:5, Interesting)

    by OmniVector ( 569062 ) <se e m y h o mepage> on Sunday April 27, 2003 @06:47PM (#5821585) Homepage
    The Itanium had a lot of good ideas, but no matter how much you want to drop an old architecture and start over from scratch, as was the goal of that project, you've got to provide a transition period. AMD is doing this with the Opteron, Apple is doing this with OS X using the Carbon toolkit, etc. The *key* to getting a user base to switch from an older architecture to a newer one is a compatibility layer.
    Perhaps that is what doomed Itanium 1 to failure from the start. (Well, that combined with the horrible heat output and power consumption of the Itanium 1.)
    • Re:Duh.. (Score:3, Insightful)

      by xneilj ( 15004 )
      "Any problem in computer science can be solved with another layer of indirection."
      --- David Wheeler, chief programmer for the EDSAC project in the early 1950s.

      Scarily, it's still just as true today...
  • Emulation (Score:5, Interesting)

    by whig ( 6869 ) on Sunday April 27, 2003 @06:47PM (#5821589) Homepage Journal
    Also, it's worth noting that Itanium has always supported running x86 software without emulation. It just turns out their hardware implementation is slower than emulating the same thing in 64-bit IA-64 mode.
      If the IA-64 instruction set is similar to the IA-32 one, then it shouldn't be hard to write a program that converts binaries from one processor to the other before running the code?

      By Murphy's law, dynamic linking or primitive datatype sizes will keep this from being practical.
      • Re:conversion? (Score:3, Informative)

        by Webmonger ( 24302 )
        The IA-64 instruction set is not similar to IA-32. It's very, very, very, very, very different. Instead of being CISC (like x86) or RISC (like PowerPC) or VLIW, it's EPIC. The IA-32 compatibility is provided by special compatibility circuitry. If you're looking for a 64-bit instruction set that's similar to x86, you want AMD-64.
        • Re:conversion? (Score:3, Interesting)

          by julesh ( 229690 )
          For those who don't know (I graduated in '97 and my computer architectures course had no mention of it, so it must be a fairly recent development...) EPIC stands for 'Explicitly Parallel Instruction Computing' and basically means (as far as I can tell from a 5 min google) that stuff like instruction reordering for the parallel execution cores is handled by the compiler, rather than the processor (the theory being that the compiler should be better at it).

          I think this makes it orthogonal to RISC/CISC/VLIW arc
  • The way I see it (Score:5, Insightful)

    by blitzoid ( 618964 ) on Sunday April 27, 2003 @06:49PM (#5821594) Homepage
    This is great and all, but it's still EMULATION. x86 support in the Itanium seems very 'tacked on', unlike AMD's idea of simply extending the regular x86 instruction set to the realm of 64 bit. The way I see it, AMD chips will always be faster than Intel at x86 stuff. And when everyone is changing over, that's CRITICAL.
    • by hendridm ( 302246 ) * on Sunday April 27, 2003 @07:20PM (#5821704) Homepage

      > And when everyone is changing over, that's CRITICAL.

      Pffft. If you want to run 32-bit, get a P4 or Xeon. If you want to run 64-bit, your most important application(s) is/are 64-bit anyway, right?

      What uses would a company have to go 64-bit? Big ass database? High performance workstation perhaps? In the database scenario, you'd probably be running a 64-bit database anyway (or you'd be wasting your time and money). It is likely this would be your only, or at least most important, service running on the box.

      How about a high performance workstation, like CAD or something. Well, that CAD engineer will probably have 64-bit CAD, which is what he/she will use most of the day. Who cares if MS Outlook or WordPerfect run at only the speed of a 1GHz processor (or whatever the actual emulation speed equivalent is)?

      I don't see what the big deal is, but I know the average Slashdotter has a "AMD inside" bumper sticker on his modded chassis.

      • by Junta ( 36770 ) on Sunday April 27, 2003 @08:33PM (#5821971)
        It's not as simple as that. The people wanting to run 64-bit apps for big things don't have a problem. In essence, Itanium was made to solve a problem already solved. PA-RISC, Sparc, PPC, MIPS, and others already have 64-bit variants. Companies that need the 64-bit address space and such already have solutions, and don't care a bit about MS offerings on their servers. This is evidenced by MS withdrawing their unpopular ports of WinNT to non-x86 platforms years back.

        Itanium may be a true server-class chip and capable of pulling off the same stuff PA-RISC and Sparc can. But if there is *any* performance advantage, it is so slight that it is overshadowed by pathetic industry and software support. Sure, you will soon be able to run Windows, and have been able to run Linux, but ultimately there isn't much to run on those systems.

        AMD has struck a chord here. A lot of large environments (especially clusters) have been getting by on 32-bit architectures because of the great application support and price/performance ratios. The Opteron falls into the same price/performance league as those 32-bit systems in use, can equal or best those processors in 32-bit tasks, and as the software matures and gets recompiled, can smoothly migrate to 64-bit operation without a hiccup. When these huge clusters are running software packages that cost millions to develop, there is a vested interest in continuing to use them while simultaneously ironing out the kinks in their 64-bit versions.

        There is a damn good reason why IBM and others are finally acknowledging AMD as worthy of building servers around. Itanium sales have been pathetic, and there has been much more customer interest in the possibility of upcoming Opteron products than the reality of existing Itanium systems.
    • Re:The way I see it (Score:3, Interesting)

      by debrain ( 29228 )
      The way I see it, AMD chips will always be faster than Intel at x86 stuff. And when everyone is changing over, that's CRITICAL.

      I think you are right, but not necessarily about what you think. It is "CRITICAL" because it is "The way [you] see it". I believe it is not the speed that is important here, but the perception.
  • by peculiarmethod ( 301094 ) on Sunday April 27, 2003 @06:50PM (#5821598) Journal
    Wow, I mean WOW.
    NOOOOW I can watch my old DOS demos from Unreal and The Humble Crew in less time than my brain can perceive them. Just what topped last year's Christmas list.

    pm
  • by craigeyb ( 518670 ) on Sunday April 27, 2003 @06:51PM (#5821603) Homepage

    And I thought it was just going to be a space heater.

  • Sounds familiar. (Score:5, Insightful)

    by Grenamier ( 12799 ) on Sunday April 27, 2003 @07:02PM (#5821641)
    This actually reminds me of Apple's emulation strategy back when they migrated from the old 680x0 series to PowerPC. It was well orchestrated and was actually something of a triumph for them. I hope that bodes well for Intel's attempt.

    For Intel to have a long term future without the embarassment of junking the whole architecture, they need Itanium x to run IA32 credibly. Advances in x86 performance keep coming at such increasing development costs that I think they would have to be able to migrate the market to IA64 within 5-10 years from now.

    I would like for both the IA64 and the Hammer architectures to flourish, but Intel's taken an extremely bold step with EPIC, and I don't want to see them get punished in the market for that alone. I like the spirit of aiming higher.
    • Re:Sounds familiar. (Score:5, Interesting)

      by Animats ( 122034 ) on Sunday April 27, 2003 @09:11PM (#5822075) Homepage
      This actually reminds me of Apple's emulation strategy back when they migrated from the old 680x0 series to PowerPC. It was well orchestrated and was actually something of a triumph for them.

      Well, no.

      Actually, it was a painful transition. Horrible hacks were required to make it work, and Apple lost considerable market share.

      From the user perspective, all the applications that used the FPU stopped working. Worse, the PPC only had (has?) a 64-bit FPU, while the 68K and x86 have 80-bit FPUs. So a simple recompile often wasn't enough. Most of the engineering applications (CAD, EDA) were never ported to the PPC at all. There were unsupported 3rd party FPU emulators for the 68K FPU, but they were really slow, since they had to emulate a wider FPU.

      Most of the OS ran in 68K emulation mode for years after the "transition". The PPC interrupt model was mainframe-like, assuming that you didn't do much in an interrupt routine except activate some process. The 68K interrupt model was minicomputer-like, with multiple interrupt levels used as the main locking primitive. Hammering those two together was painful. There were some things you just couldn't do in PPC mode; you had to drop into 68K emulation to prevent interrupts.

      The old MacOS had what was euphemistically called "cooperative multiprogramming". That didn't mean you had threads without time-slicing, like a real-time OS. It meant you didn't have real context switching at all. You plugged your code into callbacks at different levels of processing, like "system tasks", "VBI tasks", "timer tasks", "interrupt tasks", etc., none of which could block. No mutexes. No locking. Only interrupt prevention. Trying to do anything in the background was very tough. (I know; I wrote a PPP protocol module for the 68K Mac. I had the only one that could dial the phone in the background without locking up the whole machine, and it wasn't easy.)

      Worse, the 68K emulator depended on a jump table with 65536 entries, one for each of the first 16 bits an instruction could have. Early PPCs didn't have enough cache to keep that entire table in the cache all the time. But if it wasn't all in the cache, 68K emulation performance was terrible.
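A first-word dispatch table like the one described can be sketched as follows. The decode rules here are invented toys (only the 0x4E71 NOP encoding is real 68K); the point is just the shape of a 65536-entry jump table indexed by the instruction's first 16-bit word:

```python
TABLE_SIZE = 1 << 16  # one entry per possible first instruction word

def handle_nop(state, word):
    state["pc"] += 1

def handle_addq(state, word):
    # Toy rule (NOT real 68K decoding): low 3 bits are an immediate added to d0.
    state["d0"] += word & 0x7
    state["pc"] += 1

def handle_illegal(state, word):
    raise ValueError(f"illegal opcode {word:#06x}")

def build_table():
    """Precompute a handler for every possible 16-bit first word."""
    table = [handle_illegal] * TABLE_SIZE
    for word in range(TABLE_SIZE):
        if word >> 12 == 0x5:      # invented "addq-like" pattern
            table[word] = handle_addq
        elif word == 0x4E71:       # real 68K NOP encoding
            table[word] = handle_nop
    return table

def run(table, program):
    """Dispatch loop: one table lookup per instruction word."""
    state = {"pc": 0, "d0": 0}
    while state["pc"] < len(program):
        word = program[state["pc"]]
        table[word](state, word)
    return state
```

Each dispatch is a single indexed load plus an indirect call, which is why the whole table wants to stay cache-resident; if it doesn't fit, every instruction risks a cache miss before it even starts executing.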

      Amusingly, much of the perceived performance advantage of the early PPC machines came from the miserable 68K code generators used on the Mac. The Apple and Zortech compilers were clueless about 68K register allocation, preferring to do all arithmetic in register A0. The PPC code generators were much better. Some high-end apps used to be cross-compiled on Sun 68K machines because the Mac code generators were so bad.

      Most of these problems were papered over using the Jobs Reality Distortion Field. But this was the period when Apple started losing market share big-time. Arguably, the PPC transition cost Apple its preeminence.

      What Apple really needed was faster 68K CPUs, not a new architecture. Technically, that was quite possible. The Motorola 68060 (never used by Apple, but in the last 68K Amiga) was faster than the PPC of the same vintage. But Jobs had cut a deal with IBM under which IBM was supposed to make MacOS compatible machines (!), and that was the motivation for the PPC.

      • The Apple and Zortech compilers were clueless about 68K register allocation, preferring to do all arithmetic in register A0.

        A0 was an address register... did you mean D0, or did they actually do math in an address register?

      • Re:Sounds familiar. (Score:3, Interesting)

        by ahchem ( 62628 )
        A lot of what you said was very interesting, and probably accurate.

        But you made one major error:
        Jobs was not at Apple when they made the PPC transition. He was at NeXT.

        I remember very clearly reading an interview given by Jobs where he ripped Apple's decision to switch to PPC.
      • Re:Sounds familiar. (Score:4, Informative)

        by SewersOfRivendell ( 646620 ) on Sunday April 27, 2003 @11:52PM (#5822736)
        Actually, it was a painful transition. Horrible hacks were required to make it work, and Apple lost considerable market share.

        Well, no. Interestingly, you are technically correct on a couple of complex points, but you seem clueless on others. Perhaps your memory has faded. Think C 5's code generator was far better than MPW (Apple's) C or Symantec C++, but Metrowerks C was ultimately much, much better. MPW C tended to frequently do shit like (actual example from disassembling the 7.1-era Finder, IIRC):

        mov.l a0, a5
        mov.l a5, a0

        Note lack of peepholing.
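A peephole pass that would catch exactly that pair is tiny. This sketch over (op, src, dst) triples is purely illustrative of the idea, not how any real compiler of the era was structured:

```python
def peephole(insns):
    """Drop a move that immediately undoes the previous one,
    e.g. mov.l a0,a5 followed by mov.l a5,a0 (second move is a no-op)."""
    out = []
    for ins in insns:
        if out:
            op, src, dst = ins
            pop, psrc, pdst = out[-1]
            if op == pop == "mov.l" and src == pdst and dst == psrc:
                continue  # the value already round-tripped; skip this move
        out.append(ins)
    return out
```

Even this one-pattern window would have cleaned up the quoted Finder disassembly; real peepholers just carry a longer list of such patterns.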

        What you call "cooperative multiprogramming" is actually called "interrupt time." All documentation of which I'm aware refers to it as "interrupt time." No euphemism required.

        Jobs had been fired for over seven years when John Sculley cut the PowerPC CPU deal, and it had nothing to do with PowerMac clones.

        Most of these problems were papered over using the Jobs Reality Distortion Field. But this was the period when Apple started losing market share big-time. Arguably, the PPC transition cost Apple its preeminence.

        No, dude. I was there. Apple never had "preeminence" or much market share. Apple was always struggling under the "Apple is dying" myth (and still does in some quarters today). In the mid-nineties, Apple had a series of crises caused by Sculley's and his successor's ineptitude. Worse, Apple stopped playing to its traditional strengths (industrial design and hardware/software) under Spindler, a problem that, combined with vigorous and useless penny-pinching in all the wrong places -- Apple's hardware and software quality hit the lowest point they'd ever reach at the end of Spindler's reign -- ultimately led to the ouster of Spindler. Amelio failed to recognize this (or much of anything else about Apple), which ultimately led him to buy his own doom in NeXT and the return of Jobs.

        • I'm down with a bad cold today; some of that was wrong.

          I never used Think C, but I used MPW and Symantec/Zortech, and later Metrowerks. As you point out, MPW and Symantec/Zortech weren't very good. Metrowerks was a big improvement.

          You're right about Jobs not being there at the PPC transition.

          Apple had more market share than IBM in the Apple II days, and it was all downhill after that. When Gil Amelio came in, Apple's market share was about 7%. Now, it's around 2.5%. (Apple likes to emphasize high

  • Details? (Score:3, Insightful)

    by CausticWindow ( 632215 ) on Sunday April 27, 2003 @07:03PM (#5821644)

    Anybody got the technical details on this "emulation" versus the x86 compatibility in Opteron?

    JIT compilation or instruction for instruction?

  • FX32! for Itanium (Score:5, Interesting)

    by msgmonkey ( 599753 ) on Sunday April 27, 2003 @07:04PM (#5821647)
    I'd wager that this is FX!32 (which allowed you to do the same on Alpha) reworked for the Itanium. Considering Intel purchased all Alpha-related technology, I wouldn't be surprised. This is not really that bad a thing, since FX!32 was quite good at what it did (within its limits).
    • Lightbulb goes on - you're probably right. FX!32 was never that impressive, even on an Alpha. It could run Solitaire but not much more without bogging down severely.
    • Typo
    • Considering Intel purchased all Alpha related technology I would n't be surprised.

      I don't know what they bought specifically, but I seem to only remember that they bought the fab for Alphas, as well as DEC's NIC and StrongARM technology. IIRC, DEC kept the Alpha technology, but having been bought by Compaq and then HP, I think there are enough cross-licencing deals in place that Intel might just have a lot of those rights available to them.
    • Not necessarily. I can speculate, but I don't know the exact details about the emulation; I can guess what is happening, though. Over the past few years, dynamic compilation, optimization and dynamic execution layer interface projects and papers have been doing the rounds in the academic community. For example dynamo [nec.com], where a dynamic optimizer (which takes code and performs run-time optimization on it - not emulation or translation) showed that the apps in fact ran *faster* even counting the overhead of optimizati
  • Having been the right school age to have dealt with the first "PowerPC" Macintoshes, running System 7.5, this is going to be a huge fiasco. The biggest problem that 7.5 had was that it was not running natively; the OS itself was being emulated. It sucked for performance. Yes, Apple did eventually get an all-PowerPC version out, with 8.0 or so, but at that point it was geared toward the hardware of the time, which weren't 601's. School districts are still dealing with the effects of this screwup, and i
  • by 1nv4d3r ( 642775 ) on Sunday April 27, 2003 @07:22PM (#5821714)
    Does this mean they can now take the ia32 hardware implementation out? I never liked that idea in the first place.

    And, really, can't plenty of us just roll our eyes and go back to compiling our systems from source? I mean, once there's a linux kernel + glibc + gcc port, thousands of applications are instantly available to you.

    <preachy>Every time you find yourself strapped to a single architecture, ask yourself why you have all this proprietary baggage holding you back. Whether it's that Word .doc format you used, or that built-on-contract accounting system you didn't obtain the source for, these days it's usually by your choice that you are in this predicament.</preachy>
  • This is so clearly the right way to go that one has to really wonder what Intel was thinking when they only released an unreasonably slow hardware emulator. I suppose the integration with the operating systems is a bit of a mess, and a moving target at that, but there would have to have been a number of engineers at Intel and HP who saw the tremendous performance difference from the beginning. It's not as if software emulation had never been done before.

    This, tragically, does hurt AMD quite a bit.
    • Trust me, Yamhill is still alive and well somewhere inside Intel; they would be stupid not to hedge their bets. At an intro speed of 1.4GHz the Opteron will be faster at 32-bit code at a fraction of the cost (Newegg shows $315 for a 1.4GHz Opteron (Intel P4 equivalent of ~2.8GHz) vs. $2800 (at Pricewatch) for a 900MHz Itanium 2; what will the 1.5GHz part cost?).
    • I would not be too hopeful about a software emulator. This sounds a lot like FX!32, the software binary translator that let you run x86 Windows programs on the DEC Alpha. While FX!32 was impressive for its time, and certainly a workable product, for most of its life it was not nearly performance-competitive with real x86 hardware. And the Alpha in its heyday was a MUCH faster chip than the Pentium. It's not clear to me that the Itanium CPU is inherently superior to x86 or x86-64 (if you optimize code specif
    • This, tragically, does hurt AMD quite a bit.

      I don't think so.

      I had read multiple rumors about Intel having something up their bunny-suited sleeves, but most of these rumors had Intel supporting x86-64 -- that is -- copying AMD for the first time. This announcement takes away one of the unique advantages of the Opteron/Athlon64 without following AMD's lead.

      If you think running 32-bit code half as fast (a 1.5GHz Xeon vs. a 2.8GHz Xeon) on a processor that costs four times as much takes away any advantage

      • I'm pretty sure it's called Itanium. Is "Itanic" your way of trolling?

        To me it looks like Opteron is around 8x more cost effective at running 32-bit code

        I thought the point was to finally move away from "old" 32-bit code. This 32-bit capability is there for backward compatibility until such time as individuals and companies retire their old 32-bit apps.

        Things change. The industry moves forward. This is why we don't all run 486's anymore.

        Intel is in trouble.

        Let's hope not. Competition is a good thing.
  • Will Intel concede to the AMD x86-64 architecture, or will they try to branch out on their own idea? (HINT: 2nd one is a bad idea... just look at the first "64 bit" itanium)
    • by Anonymous Coward
      The first Itanium was basically designed by Intel. Itanium 2 was HP's attempt to fix it (much better). Intel has lost it. They have great manufacturing and competent marketing, but they let all the best engineers leave (to AMD, IBM, Transmeta, etc.). And now they're starting to behave like Microsoft (threatening OEMs to try to stop them from using competing products). But their grip on the chip market isn't half as strong as Microsoft's on the OS market, so there's a serious risk of backfiring.

      If you add A
  • As far as I can tell, there is no reason to switch to Itanium right now--it seems to be expensive, slow, and hard to compile for. This "compatibility layer" won't change that. When the Itanium is competitive in both performance and cost, then it's worth looking at again.
  • by AvengerXP ( 660081 ) <jeanfrancois,beaulieu&mckesson,ca> on Sunday April 27, 2003 @08:52PM (#5821992)
    Whatever happens, even if the Opteron were 100% backwards-compatible and 2x faster than Itanium, nobody in the server segment, or even the high-end workstation segment, will buy an Opteron, because they think that AMD makes cheap, unstable processors targeted at the nerdy overclocking enthusiast.

    I personally don't agree, but my opinion isn't worth jack inside the corporation, and I already know the systems administrator has an "Intel Inside" sticker on his forehead, even if the chips cost 2x as much. They say they pay for "quality." Psssh, what a load of bull.
  • Motives... (Score:5, Interesting)

    by mraymer ( 516227 ) <mraymer.centurytel@net> on Sunday April 27, 2003 @08:56PM (#5822004) Homepage Journal
    I really have to wonder if Intel is doing this because of customer demand, or simply because they don't want AMD to have the upper hand.

    From what I've seen, I would argue that their motive is the latter. Intel has shown on several occasions that, these days, they simply don't give a damn about the end user. They care about market share, profits, and their precious stock price. Let's not forget that "Pentium" was coined because Intel wasn't allowed to trademark the number 586.

    Remember when they released an overclocked Pentium III to the public, and Tom's Hardware had that nice little article exposing it for the failure it was? It choked on GCC, among other things, while Intel steadfastly denied the problem. Then they actually recalled the processors. Competition at the expense of the end user... wonderful!

    It is clear AMD is still going to come out on top in performance on this one, unless "software emulation" doesn't mean what I think it means. It is also clear to me that Intel has to do a lot more than throw some software emulation at an issue before I ever buy another Intel processor.

  • by 1stflight ( 48795 ) on Sunday April 27, 2003 @09:08PM (#5822054)
    The point of the Opteron isn't that it can do 32-bit fast, but that it can do 64-bit in a way that everyone understands and that has been hammered out for decades.
    The Itanium is a marvelous piece of work; however, who's going to adopt something so unknown versus something so familiar? That is the point Intel missed: 32-bit is dead, 64-bit is here; which one will be chosen?
  • I wonder if the emulation technique they'll be using will be similar to Transmeta's 'code-morphing' [transmeta.com]. I always wondered why Intel didn't license that idea and use it on their Itanium. 'Code-morphing' achieved middle-of-the-road x86 performance on a VLIW (sound like a familiar goal?), but it was still far better than what Itanium gets with its current x86 support.
  • x86-64 (Score:2, Interesting)

    by darthscsi ( 144954 )
    An interesting thought is that the instruction format and register set of AMD's x86-64 is just an extension of x86, so if Intel has a good emulator for x86 running on IA-64, then it should be (from a technical standpoint, not a licensing standpoint) fairly trivial to emulate x86-64 at speeds similar to the x86 emulation. THAT doesn't bode well for AMD.

    And as for licensing, a clean-room implementation should be very easy, considering it is simply an extension of x86.
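For what it's worth, the "simple extension" claim is visible right in the instruction encoding: in 64-bit mode, AMD64 mostly adds a REX prefix byte (0x40-0x4F) in front of otherwise ordinary x86 encodings. A toy sketch of the extra decode step an existing x86 emulator would need (the bit layout follows AMD's published spec; the decoder itself is hypothetical):

```python
def decode_rex(byte):
    """REX prefix layout: 0100WRXB. W=1 selects 64-bit operand size;
    R, X, and B each widen a 3-bit x86 register field to 4 bits,
    exposing the eight new registers r8-r15."""
    if byte & 0xF0 != 0x40:
        return None  # not a REX prefix; fall through to plain x86 decoding
    return {
        "W": (byte >> 3) & 1,
        "R": (byte >> 2) & 1,
        "X": (byte >> 1) & 1,
        "B": byte & 1,
    }

# 0x48 = REX.W, e.g. promoting "mov eax, imm32" to "mov rax, imm64"
print(decode_rex(0x48))  # → {'W': 1, 'R': 0, 'X': 0, 'B': 0}
print(decode_rex(0x90))  # → None (plain x86 NOP, no REX)
```

The rest of the opcode map is largely unchanged, which is why an x86 translator really could be taught x86-64 by widening its register file and handling this one prefix family.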
    • THAT doesn't bode well for AMD

      That would be the best thing that could happen to AMD.

      It would validate AMD64 and give them a HUGE cost advantage. We are talking 5X or so at current pricing levels.

  • Itanium and the IA-64 instruction set depend very, very heavily on a good compiler to draw parallelism out of the code.

    The reality is we may never get compilers that are that good, and we may never have many applications where much parallelism can be drawn out anyway... at least not enough to make it worthwhile.

    EPIC is a huge gamble... one that may not pay off in the long run. I'm no fan of x86 per-se, but it seems that AMD has tried to bring it up to speed with x86-64... more registers (always the b
  • Misleading headline (Score:2, Informative)

    by conway ( 536486 )
    The headline is misleading.
    Itanium has always had x86 emulation; it was just done in hardware before, and very, very slowly. (The Itanium 1, at 800MHz, ran x86 software at the speed of a 150MHz Pentium or so.)
    A story at The Register, here [theregister.co.uk] explains that this new software will translate some of the x86 assembly to IA-64 assembly at runtime. (See picture [atmarkit.co.jp])
    This is the same way that HP's Aries [hp.com] works -- which translates HP-PA instructions into IA-64.
    That works pretty well actually, delivering about 80% of the
  • Someone else would have. Remember, NT 4.0 (and I think earlier versions) had an emulation layer for RISC processors to be able to run 386 binaries. I'm sure it wouldn't have taken long for MS or the open-source people to come up with something similar. At least if Intel does it, they have the inside knowledge to tune it for best performance.
  • by XNormal ( 8617 ) on Monday April 28, 2003 @03:14AM (#5823275) Homepage
    The Itanic always had full 32 bit x86 compatibility and a significant percentage of its die real estate is spent on it. It just sucks so much that it's outperformed by software emulation. Needless to say, if you use the software emulation layer you would *still* be paying for the hardware emulation.

    Now they're trying to spin this story as if it's actually something good, and not a patch for a white elephant.

    See this story on The Register [theregister.co.uk]
