Intel Hardware

Intel Dumps Iitanium's x86 Hardware Compatibility 277

Spinlock_1977 writes "C|Net is running a story that Intel is going back to software x86 emulation on Itanium in order to reclaim chip real estate. (room for another 9MB of cache?) One notable quote about x86 emulation: 'Basically, no one ever used hardware-based IA-32 execution, so better to use the silicon for something else,' said Illuminata analyst Gordon Haff. 'Of course, basically no one uses software-based emulation either, but at least that doesn't cost chip real estate.'"
This discussion has been archived. No new comments can be posted.

Intel Dumps Iitanium's x86 Hardware Compatibility

  • by Anonymous Coward on Thursday January 19, 2006 @07:14PM (#14514553)
    Intel's chips will use that extra silicon for a nice pair of fake breasts. That's sure to up their earnings next quarter. Take that, AMD.
  • by Anonymous Coward on Thursday January 19, 2006 @07:16PM (#14514564)
    Seems most of the better software today falls into one of two camps:
    • Software with source available; so "configure; make; make test; make install" will work, and
    • Virtual machine based stuff (Java/JVM, .NET/CLR) (even popular on cell phones these days)

    I think the days of it mattering what the exact instruction set is are pretty much over.
    • by Eightyford ( 893696 ) on Thursday January 19, 2006 @07:21PM (#14514609) Homepage
      Virtual machine based stuff (Java/JVM, .NET/CLR) (even popular on cell phones these days)

      So that's why it takes two friggen minutes to turn on my cell phone!
    • You missed precompiled binaries targeted for a certain architecture. None of the software I use on my computers runs off of a VM. It's either precompiled or built from the source. So yes, I would say it matters, because I make it a point to avoid most things that use VMs.
      • Still doesn't matter, because in 2006 recompiling a program with a native-code compiler targeted to a random ISA (for a given operating system) is practically free -- especially if the program was written in a reasonably modern language.

        If the Java VM is your beef, check out gcj [gnu.org]. Sure, it still has a runtime system, but its performance is almost always comparable to compiled C code, or better.
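
        As a toy illustration of the point (a hypothetical file; gcc is just the assumed toolchain, nothing Itanium-specific): the same C source builds unchanged for whatever ISA the compiler targets; only the build invocation knows about the target.

            /* hello.c -- a hypothetical example; builds unchanged on x86,
             * x86-64, IA-64, Alpha, PPC, ...
             * e.g. "gcc -o hello hello.c" on the native box, or point a
             * cross-compiler at any other target. The source never names the ISA. */
            #include <stdio.h>

            int main(void)
            {
                printf("Same source, whatever the instruction set.\n");
                return 0;
            }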

    • Indeed (Score:2, Interesting)

      by Anonymous Coward
      I think the days of it mattering what the exact instruction set is are pretty much over.

      Indeed -- which is why it's hilarious that pretty much the entire world is just this moment moving to a single common unified instruction set. The server world has standardized on x86-64, Itanium is a walking corpse; the PC world has standardized on x86 as well, PPC has retreated to video game systems. We are moving to a new world of processor agnosticism at the exact same time processor agnosticism has become largely pointless.
      • Re:Indeed (Score:2, Insightful)

        by captaineo ( 87164 )
        Plus the fact that modern processors do so much internal magic on the code stream that an "instruction set" is more of a transmission protocol than anything having to do with CPU internals.
    • As somebody who's trying to run a 64-bit Linux installation, I must tell you that this software nirvana where the instruction set does not matter is far, far away.

      • Yeah, no kidding, and the worst offenders are those wonderful things that everyone touts as the fix to the problem: the web browser, assorted plugins, and Java. The next biggest is video codecs, courtesy of the Windows people who can't seem to understand why it's nice to be able to play back that precious content if they want you to buy it.

        I tried to run 64-bit Linux with Ubuntu. It wasn't worth it. I spent a week screwing around with it, trying to get it to just reliably play a video, or to even…
  • by DurendalMac ( 736637 ) on Thursday January 19, 2006 @07:16PM (#14514566)
    Sheesh, the Itanic wasn't exactly a success story. How does it fit into their new roadmap with cooler chips that eat less power? That processor was a goddamn space heater.
    • Well, some people are cold in the winter you know...
    • by questionlp ( 58365 ) on Thursday January 19, 2006 @08:00PM (#14514837) Homepage
      Although it may not run as cool and will use around 100W of peak power (+/- 10%), Montecito will be dual-core and run at around 1.6-1.8GHz at launch. 100W is less power than the current high-end Xeon MP and just over the Sun US-IV+ processors, but each of the two cores gets 12MB of L3 cache. Compare that to the ~120-130W power envelope of the mid/high-end Itanium processors available right now.

      Granted, the Itanium is not the fastest enterprise-focused processor out there, but at least they are trying to reduce the overall power consumption and heat generation of the next-gen Itaniums.

      For the workload I deal with every day, the Opteron and US-T1 are better suited.
    • by Anonymous Coward on Thursday January 19, 2006 @08:06PM (#14514876)
      Parent mocked:
      >Sheesh, the Itanic wasn't exactly a success story. How does it fit into their new roadmap with cooler chips that eat less power? That processor was a goddamn space heater.

      See: http://www.ideasinternational.com/benchmark/bench.html [ideasinternational.com]

      Make special note of the SPECint2000 and SPECfp2000 pages, and also make note of the TPC-C scores.

      The Itanium 2 takes the top three SPECint_rate_base2000 spots (128 cores), the top SPECfp_base2000 (single core) and the top two SPECfp_rate_base2000 spots (128 cores). The 64-way HP Superdome (by now they're all Itaniums, so they don't bother noting PA vs Intel) is in four of the top eight nonclustered TPC spots.

      In short, the Itanium 2 is the best scientific computing chip on the market, as proven by the SPECint_base2000 and SPECfp_rate_base2000 stats (beating out the Power5). Also, it's not too shabby on the TPC numbers, only being edged out by the IBM Power5.

      If you don't work with a 16+ core Itanium 2 or Power5, please STFU about them being market failures. They're not marketed at you.
    • Actually, if you think about it, the Itanium looks pretty good heat/performance-wise compared to the Pentium Ds, which have roughly the same heat output.
  • As noted elsewhere (Score:5, Interesting)

    by C. E. Sum ( 1065 ) * on Thursday January 19, 2006 @07:17PM (#14514574) Homepage Journal
    This is very old news. Various sources and die photos have shown this for more than a year... and no one cares.

    The die space reclaimed was somewhat significant, and the software emulation is faster than the hardware emulation.
    • The news here is the poor choice of words by this Gordon Haff. I'm guessing this analyst doesn't own any Intel stock.

      Basically, no one ever used hardware-based IA-32 execution, so better to use the silicon for something else. Of course, basically no one uses software-based emulation either, but at least that doesn't cost chip real estate.

      In other words, no one cares. Sounds like it's a worthless feature included just so they can put it on the list. I'm not sure what the market for this chip is, having…

    • Since you seem to follow this, do you know what they're going to use the die space for?
  • by Anonymous Coward on Thursday January 19, 2006 @07:17PM (#14514577)
    "Of course, basically no one uses Itanium either..."
  • why not Alpha (Score:5, Insightful)

    by xx_chris ( 524347 ) on Thursday January 19, 2006 @07:17PM (#14514580)
    If they are going to dump x86 compatibility, why not dump Itanium compatibility and just go back to Alpha?
    • Re:why not Alpha (Score:5, Insightful)

      by Anonymous Coward on Thursday January 19, 2006 @07:33PM (#14514678)
      Politics. Yes, Alpha is a much superior platform, technically speaking, to pretty much anything else out there today. But for Intel to turn their back on Itanic (thank you, Register, for consistently misnaming the Itanium in such an apt way) would mean admitting that the billions in R&D they spent on it were a waste. HP also has political reasons not to resurrect Alpha.

      Damn shame, that. If they'd poured as much money into Alpha as they did into Itanic, they'd have a platform that would whomp all over everything currently in the marketplace.
      • They'd have a platform that would whomp all over everything currently in the marketplace

        Well, Alpha was a high-end CPU designed for servers. Somehow, I doubt it could whomp the portable market (which is an important part of the computer world these days).
      • they'd have a platform that would whomp all over everything currently in the marketplace.

        And that is exactly why it isn't happening: a substantial part of everything currently in the marketplace is coming from... Intel.

      • Re:why not Alpha (Score:5, Interesting)

        by maraist ( 68387 ) * <michael.maraistN ... gmail.n0spam.com> on Thursday January 19, 2006 @09:15PM (#14515346) Homepage
        Damn shame, that. If they'd poured as much money into Alpha as they did into Itanic, they'd have a platform that would whomp all over everything currently in the marketplace.

        I don't know that I agree. The Alpha was a particular set of optimizations: dual register files, branch-prediction hints, a pure 64-bit design (sub-32-bit data access had to be emulated through a multi-step process), a deep pipeline (for its day).

        But at the same time, they purposefully withheld adding out-of-order execution (it plays havoc w/ their highly optimized register configuration). SPARC had similar problems with its rolling register stack.

        I studied the Alpha prior to the announcement that their new version would have out-of-order execution, so I don't know if they ever did go that route.

        The point is that by adding all of the techniques employed by modern CPUs (aside from slightly higher-speed memory), they would not have maintained much of an advantage. Their performance would be comparable to the AMD-64, but not much faster.

        I'd still love to see the Alpha kept alive; there was absolutely nothing wrong with it, except its price (for general workstation use).
        • Re:why not Alpha (Score:4, Interesting)

          by Paul Jakma ( 2677 ) on Thursday January 19, 2006 @11:22PM (#14516209) Homepage Journal
          I studied the Alpha prior to the announcement that their new version would have out-of-order execution, so I don't know if they ever did go that route.

          Yep, with the 21264 - an aggressively out-of-order CPU. The 21064 and 21164 might not have executed instructions out-of-order; however, they were highly speculative. The AXP arch was designed for out-of-order from the beginning, and the two early CPUs did memory I/O out-of-order. The 21064 had a 32-entry register file it seems, not 2, btw, according to a paper on the AXP 21064 [upc.edu] I found on Google, written by a DECy.

          Their performance would be comparable to the AMD-64, but not much faster.

          Agreed, 'cause guess what: AMD64 is Alpha's progeny-in-spirit. ;)

          The AMD K7 is very Alpha-like (hence so is the K8): highly speculative, out-of-order, wide multiple-issue CPUs like the 21264. Not coincidentally, given that Dirk Meyer, co-architect of the 21264, led the AMD K7 design [eet.com] team. The K7 used the 21164/21264 EV6 PtP interconnect too, and the K8 made it routable with HyperTransport - just as DEC^WCompaq did with EV6 in the 21364. You would still expect this mythical, equivalently developed Alpha to beat AMD64, though, given it'd be able to use the die space 'wasted' on x86 decoding for something more productive (cache or somesuch).
    • Could it be the old "not invented here" syndrome?
      • Considering they eventually caved and started implementing AMD's x86_64 architecture (though they're not willing to call it that), I don't think it's the case. Clearly they realized that the market for 64-bit chips with 32-bit x86 compatibility was all in EM64T/AMD64, so Itanium could focus on 64-bit only stuff.
    • Alpha was great.

      Alpha was intended to have a 25-year life. Unfortunately, it is drawing close to the end of those 25 years. The design team is gone. By the time they could reconstitute it, train everyone, start a design, get it through fab, and get it ready for production systems, it would be close enough to 25 years that it wouldn't matter anyway. There is also the ugly N.I.H. factor, which makes it unlikely they would ever revive it. I'm afraid Alpha is gone. R.I.P.
  • by bstadil ( 7110 ) on Thursday January 19, 2006 @07:18PM (#14514583) Homepage
    'Basically, no one ever used hardware-based IA-32 execution, so better to use the silicon for something else,' said Illuminata analyst Gordon Haff. 'Of course, basically no one uses software-based emulation either, but at least that doesn't cost chip real estate.'

    Why not extend that logic? No one really used the Itanium chip anyway, so why not use the silicon to make Yonahs for Apple?

  • Sheesh. It took them what, 15 years to realize that they needed to DUMP backward compatibility to become efficient? *cough* 640K barrier *cough*

    What strikes me is that only now that they're losing market share to AMD do they begin to search for design flaws (obviously they can't afford to waste silicon on x86 emulation when they're falling behind).
    • The 640KiB barrier was imposed by the IBM PC architecture, not the 8086 hardware. The 8086 can directly address 1MiB of RAM -- 4MiB if you isolate each of CS, DS, SS, and ES into their own banks with additional decoding logic.
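
      A quick back-of-the-envelope sketch of that address math (illustrative C; a hypothetical little program, not anything from the chips themselves):

          /* 8086 real mode: physical = (segment << 4) + offset, giving 20
           * address bits, i.e. 1 MiB. The 640K line is just where the IBM PC
           * memory map put video RAM and ROM, not a CPU limit. */
          #include <stdio.h>
          #include <stdint.h>

          static uint32_t phys(uint16_t seg, uint16_t off)
          {
              return ((uint32_t)seg << 4) + off;  /* classic segment:offset math */
          }

          int main(void)
          {
              printf("0xA000:0x0000 -> 0x%05X (640 KiB, start of video RAM)\n",
                     (unsigned)phys(0xA000, 0x0000));
              printf("0xFFFF:0xFFFF -> 0x%06X (just past 1 MiB)\n",
                     (unsigned)phys(0xFFFF, 0xFFFF));
              return 0;
          }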
    • Except AMD's 64-bit chips are backwards compatible. Thank God, too, because precious few desktop components have 64-bit drivers.
    • by maynard ( 3337 ) on Thursday January 19, 2006 @07:36PM (#14514696) Journal
      I don't seem to remember any "640K" barrier with the 8088 or 8086. Didn't it support up to 20 address lines? Yup... I thought so. That missing 384K was reserved for ROM, video RAM, and whatever else one might need. And let's not forget the bank-switched expanded RAM boards that were around in the day. As one whose family owned an original XT w/ 20MB drive and full 640K from 1983 onward, I can say with assurance that 640K was a whopping amount of RAM in the day. It also cost a buttload.
    • So I don't get the logic behind the politically-correct Intel bashing. On the one hand, one hears that Intel is bad because they have carried on with binary compatibility since microprocessor pre-history (the 8086, even the 8080 to a certain extent), creating an architecture that is indeed in some places a bit quaint.

      Then on the other hand we are supposed to believe that AMD is genius for having led the way in moving to 64 bits through extension of that much-reviled x86 architecture, rather than by starting from scratch.
      • Why not take a look at the top500 list? They have a nice pie chart by processor family [top500.org]. We can see that there are more AMD x86-64 systems than IA-64 systems. You also have to keep in mind that there is a large lag time between when a top500 system is planned and when it's actually purchased, installed, and makes it on the list. Opteron is a newer processor and adoption takes time. Expect to see a huge increase in the number of Opteron systems on the next list.
      • I don't get the logic behind the politically-correct Intel bashing.

        I don't get the logic behind describing Intel-bashing as "politically correct".
    • Oh yeah, look what a big success the Itanium has been. And compare with the backwards-compatible AMD64. Like it or not, dumping x86 backwards compatibility is not a good move.
    • *cough* 640K barrier *cough*

      Damn, where's the (-1, Clueless) mod when you need it?

    • Wrong on both counts. Intel initially planned Itanium to be a replacement for x86. To help in that transition they created a translation layer that they believed would be necessary until vendors made IA64 compiled binaries. The only "inefficiency" is the extra silicon needed to accomplish this, and the IA64 architecture isn't gimped because of it.

      I highly doubt losing market share to AMD has anything to do with the decision to dump x86 compatibility at the chip level. No one is using the x86 compatibility anyway.
      • Business users typically run ancient software, available in binary form only, from companies or consulting companies that no longer exist. Compatibility is more important for Intel than for a company like Apple.

        People who buy PCs do so because it's what everyone else buys.

        It's a mess, and I am glad I am not Intel. I bet HP has a contract forcing Intel to keep making the Itanium too. They killed the Alpha for Itanium, and it's just astounding what a few billion in sunk costs can do to make sure you won't leave for something else.
    • Intel's biggest failure was taking so dang long to get the Itanium to market. I remember when it was announced, and a short decade later the thing arrived. By the time it arrived, things had changed quite a bit at HP, at Intel, and in the computer markets in general.
  • Seriously. Is there any reason to buy one of the things? What does it do that justifies ANYONE buying one? Does it still have the "best" floating-point performance?

    • It still has the best FP performance for a single chip, and it still isn't worth it on a price/performance basis when you consider multi-core Athlons, or even multi-core late-model Pentiums.
    • by friedmud ( 512466 ) on Thursday January 19, 2006 @07:47PM (#14514759)
      I work in computational engineering and can say that I know people who specifically write for EPIC because it is good at pushing _huge_ amounts of serial computation (mostly solving large systems of equations) through the processor quickly.

      I have personally had a dual-Itanium workstation sitting under my desk for around 9 months. It was OK, I suppose. I was doing finite element mechanical simulations on it and it did fairly well (it helped that it had 8 gigs of RAM). I also got Gentoo compiled on it (this was before it was really supported) and it worked fairly well as a desktop (it had an Nvidia Quadro card in it).

      Personally, I think Intel should just give up... they obviously lost the fight. But who knows, maybe it is actually making them _some_ money (although it can't be much).

      Friedmud
    • by m50d ( 797211 ) on Thursday January 19, 2006 @07:57PM (#14514819) Homepage Journal
      If you have the time to hand-optimize your code, it blows anything out of the water. This means it's useful for simple number crunching, but not much else - more processors are generally cheaper than more coders. It was expected that compilers would improve by the time Itanium was adopted, but that hasn't really happened. (I read here that the Hurd coders were able to make their Itanium message-passing routine TEN TIMES faster by writing it in hand-coded assembly, compared to what a compiler churned out.)
    • by Anonymous Coward
      These are my personal opinions and not those of my employer.

      Some users are:
      - Certain well-tuned scientific and engineering applications that are floating-point intensive but not memory-bandwidth bound. Ideally, the code should have few branches. There is a significant performance bonus for code that can fit within the L3. However, the per-processor cost delta over the Opteron is difficult to justify for the standard two-processors-per-node compute cluster model.
      - Large systems. SGI can support up to 512 processors…
  • 'Basically, no one ever used hardware-based IA-32 execution, so better to use the silicon for something else'

    Why not just say....

    Basically, no one ever used Itanium, so better to use the silicon in a more meaningful manner...

    1. Stop making Itanium chips
    2. Harvest saved silicon
    3. ????
    4. Profit!

    Given ???? involves *cough* implants of some type....
    Imagine Intel branded implants.

    I'm talking about cyborg implants, what were you guys thinking about!!

    • Given ???? involves *cough* implants of some type....
      Imagine Intel branded implants.

      So where would the "Intel Inside" stickers be placed?

    • You know, silicon and silicone are not the same thing. Common mistake.

      Silicone jubblies are pretty common in LA, for example.

      Silicon jubblies are best enjoyed in HD (DOA4 for e.g.).

      m-
  • by WasterDave ( 20047 ) <(davep) (at) (zedkep.com)> on Thursday January 19, 2006 @07:31PM (#14514659)
    There's a sense of irony with Apple having, apparently, no problem getting PPC emulation to work on an Intel x86 ... and Intel having no joy running x86 emulation on IA64. If I didn't know better it would look to me like IA64 is a bag of crap.

    Oh, hang on.

    Dave

    • Yeah, especially since (those of us old enough should remember ;-) ) one of the original goals of the PPC consortium (Apple, IBM, Motorola) was to optimize its architecture so that it could efficiently emulate "other processors" (meaning especially x86). RISC purists were disgusted by PPC because it had all those extra instructions... I guess ISA simplicity goes both ways, and now it makes it easy to emulate PPC on x86...

      BTW, what's happening with Transmeta these days?

      Paul B.
    • Irony .... Where? (Score:2, Informative)

      by vanka ( 875029 )
      I'm not sure what your point is with that comment. Apple's emulation of the PPC architecture (Rosetta) is all done in software, which doesn't run at native speed. As I recall, the Itanium had software emulation of x86 at first, and then I guess they added hardware emulation. Now, to cut costs and chip real estate, they are taking out the hardware emulation and reverting to software emulation. I'm missing the irony in this particular situation. How is this ironic?
    • No, there is no irony here. Just a persistent eagerness to mention Apple in every Slashdot story, no matter how unrelated.

      People should start moderating these offtopic.
  • by marshallh ( 947020 ) on Thursday January 19, 2006 @07:37PM (#14514703)
    Perhaps this is an indication that Intel has finally realized that their stranglehold on the CPU market may be threatened by AMD? And that they will have to optimize and trim the fat off their products? Competition is good.
  • by Anonymous Coward
    There's obviously a typo in this headline, which I've corrected:

    Iintel Dumps Iitanium's x86 Hardware Compatibility.

    C'mon Slashdot editors, get with it.
  • This is a good thing. The Itanium can emulate x86 in software faster than the 'good for nothing' 486-class hardware that was on the die. It's worthless, and NOBODY has been using it for a LONG time.
  • About time (Score:3, Interesting)

    by msbsod ( 574856 ) on Thursday January 19, 2006 @07:54PM (#14514799)
    I think removal of the x86 emulation from the Itanium CPU was overdue. It should never have made it into the chip. Every serious software developer would have re-compiled their code for the new chip anyway. What I wish to see next is a dramatic reduction in power consumption and a return to Intel's original promise to make the Itanium a replacement for the aging x86 architecture, not only for expensive servers but also for desktop and notebook PCs. The x86 is a smash hit because it is available for so many different applications. The Itanium, however, was pushed into a niche.
  • What about the PPC emulation on the new Intel Macs?

  • by Sebastopol ( 189276 ) on Thursday January 19, 2006 @07:59PM (#14514833) Homepage
    ...Intel figures out what to do with the probably thousands of people working on Itanic, they'll drop it. You can't just nix such a huge project and bone all the employees. I suspect they've been wanting to drag this thing out back and shoot it for some time; I mean, it gets ZERO real estate or marketing attention on the website or in the corporate SEC prospectus info. I've never read about it adding to the bottom line in any filings.

    Maybe they just make it for the supercomputer folks... a niche market which is probably 10x larger and 100x more profitable than the propeller-beanie AMD fanboy crowd that trolls around here, scoffing at neon-illumination-free chassis.

    • I wonder if they're just trying to keep it alive and not lose too much money while they fulfill some kind of contractual obligations to HP. I'm sure they'd have to pony up some cash to HP if they just dropped it.
    • Re:as soon as... (Score:3, Informative)

      by msbsod ( 574856 )
      OpenVMS/Itanium - two excellent products, very closely connected, and both pushed together into the niche market in absolute silence. The same happened to OpenVMS/Alpha. What a waste!
  • Here's a surprisingly cogent article (surprisingly so for a hobbyist web site like AnandTech, that is) about how trends in CPU manufacturing processes may make Itanium a bigger winner in the near future:

    http://anandtech.com/printarticle.aspx?i=2598 [anandtech.com]
    • I would say that AnandTech is a step above a simple "hobbyist" site. They have put out a lot of very good, in-depth articles on various technologies: memory architectures, pipelining, various GPU things, and many more that I can't think of off the top of my head.
  • Who Cares? (Score:3, Funny)

    by raider_red ( 156642 ) on Thursday January 19, 2006 @08:16PM (#14514923) Journal
    I'm sure that both of the users of the Itanium are thrilled by this development. They should drop it and use the extra fab capacity to make 8-bit microcontrollers. There's still a market for those.
  • by kerecsen ( 807268 ) on Thursday January 19, 2006 @08:48PM (#14515168)
    I find it odd that Intel keeps backtracking to its decade-old Pentium Pro design. Both of their recent high-budget designs, the P4 and the Itanium, proved to be flops to some extent, while the P6/Pentium Pro/PII/PIII/Centrino/Banias architecture has scaled amazingly well since its humble 200 MHz beginnings.

    Was there a generation change at the design offices? What else could have caused the most prominent chip design firm to lose its ability to do solid engineering? Granted, even the golden boys created a dead-end architecture (the i960), but it wasn't quite as expensive a mistake as Itanium...

    I remember that in the nineties new chip generations would be popping up left and right, each of them offering some really unique and cool innovation in terms of memory management, execution streamlining, or heat management. But Transmeta was the last memorable innovation, and since then everyone seems to be exclusively focused on cache megabytes and transistor sizes. I would love to see real experimentation and innovation reintroduced in the CPU arena...
    • Laptops running powerful dual-core CPUs while eating less than 30W mean nothing to you? Dual-core desktops alone are the biggest change in the CPU world since people started having computers in their homes, IMHO.
    • by Edmund Blackadder ( 559735 ) on Thursday January 19, 2006 @09:18PM (#14515363)
      I do not think it has much to do with design. It has much more to do with inertia.

      Back in the day, when new architectures were popping up like mushrooms, there just was not as much software out there, so it was easier for somebody to come up with a workable system based on a new architecture. But more and more software is being created, and users have ever-higher expectations for the software they expect to have running on their systems. It is getting harder and harder to provide an amount of software sufficient to make users happy.

      It seems that free software is a good solution to this problem -- all you have to do is compile a bunch of free software for your new architecture and voilà -- you have an operational system. If I were Intel, I would compile and provide official Itanic builds of every major OS and piece of software. That way the major problem Itanic has -- lack of software -- would be solved.
    • Now that's not quite fair. You might as well say that the K8 architecture is nothing more than a K7 with HT and an on-die memory controller.
    • "I find is odd that Intel keeps backtracking to its 20 year old Pentium Pro design. Both of their recent high-budget designs, the P4 and the Itanium proved to be a flop to some extent, while the P6/Pentium Pro/PII/PIII/Centrino/Banias architecture has scaled amazingly well since its humble 200 MHz beginnings."

      We hit a wall in single-threaded performance at a point that was remarkably similar across all the various architectures. I imagine the Pentium Pro architecture appeared at a time when it was possible to incorporate…
    • by afidel ( 530433 ) on Friday January 20, 2006 @12:12AM (#14516502)
      The i960 was not a failure; it's used in about 50% of RAID controllers and quite a few other embedded applications. Perhaps you were thinking of the i860, or the i432? The i860 was in many ways similar to Itanium. It was a VLIW (Very Long Instruction Word) architecture which, like EPIC, was very overreaching for its time. It was also a floating-point monster that was expensive to produce. Finally, the i860 required massive compiler optimizations to produce efficient code, which the compilers of the day weren't up to. Basically, Intel didn't learn from the i860 and repeated the mistake a decade and a half later.
  • The same real estate is also taken up on the cheap Athlon64 we all use. In time, when there's enough x64-based software out there, AMD should release 64-bit-only chips (meaning remove the legacy parts... of course there are still 32-bit instructions in a 64-bit chip (think ARM)). Since there's so much software out there already, all you need is a new bootloader. The extra space could hold more cache, an initial 8MB part of the RAM space running at 1x speeds, or quite possibly the southbridge chip itself to cut costs.
  • Of course, basically no one uses software-based emulation either, but at least that doesn't cost chip real estate.
    Of course, basically no one uses the Itanium anyway, so all of this is really a moot point. :)
  • by kirk.so ( 944348 ) on Friday January 20, 2006 @01:51AM (#14516953)
    The x86 was never used on Itanium? Crap.
    Sure it was (and I assume is) used - for the firmware. IIRC, the EFI firmware of the Itanium boxen was entirely x86. They use the x86 ISA for running the x86-based firmware of add-on cards. That way, Itanium boxen are able to use about any PCI card out there, without the cards needing any special firmware.
    Alphas did that in software, which mostly worked, but was far from working with everything. SPARCs and the PowerPC-based Apples have PCI, but neither is able to handle standard PCI cards for exactly that reason, which is why you have to shell out $$$ to get the same PCI hardware with native firmware support.

    Ok, any PCI card stuffed in an Itanium box would need decent OS drivers, but at least that is in the realm of the OS vendor, and drivers can be ported. Only very few PCI hardware manufacturers ever did anything but x86 firmware, geared towards the BIOS.

    EFI, the firmware that ships with Itaniums, is quite good at handling that crappy PC-BIOS-type firmware. Need a decent RAID controller? Just stuff it in.

    I'd call that a big plus. There are and have been numerous misconceptions about Itanium from the very beginning, but saying "Nobody needs on-chip x86" is utterly stupid.

    IIRC, the chip "real estate" needed for x86 was in the lowish single-digit percentage of the total chip real estate. And it was a good investment, since it saves $$$ for anybody running Itaniums. It was there for exactly that purpose, until some marketing freak obviously decided to sell it as "backwards compatibility". x86 on Itanium was and is dead slow, but for POST/init purposes, it is sufficient.

    Please, Intel, keep it. If Itanium is ever going to be a success, users will happily welcome the ability to extend systems using standard off-the-shelf components.

    And, while we are at it, start shipping EFI for the "x86 crowd" now. I think I am not alone in the perception that hitting "CTRL-S" or "ESC" or whatever at the right moment during POST to enter some card's firmware configuration tool just plain sucks.

    I want a firmware shell. I want SRM-style firmware for x86. EFI is close to that. Intel even open-sourced major parts of EFI (www.tianocore.org). AFAIK, the Intel-based Apples will use it. I want it too.

    For God's sake, keep x86 in Itaniums.

    Regards
