Despite Aging Design, x86 Still in Charge

An anonymous reader writes "The x86 chip architecture is still kicking, almost 30 years after it was first introduced. A News.com article looks into the reasons why we're not likely to see it phased out any time soon, and the history of a well-known instruction set architecture. 'Every time [there is a dramatic new requirement or change in the marketplace], whether it's the invention of the browser or low-cost network computers that were supposed to make PCs go away, the engineers behind x86 find a way to make it adapt to the situation. Is that a problem? Critics say x86 is saddled with the burden of supporting outdated features and software, and that improvements in energy efficiency and software development have been sacrificed to its legacy. And a comedian would say it all depends on what you think about disco.'"
  • by athloi ( 1075845 ) on Tuesday April 03, 2007 @09:41AM (#18587741) Homepage Journal
    It should be replaced with Esperanto when we all upgrade to Vista.
    • Multilingual User Interface packs only come with Vista Ultimate. Oh, how I hate when a language strengthens monopoly power through such evil, costly means!
    • Re: (Score:2, Funny)

      by SighKoPath ( 956085 )

      when we all upgrade to Vista
      So you mean... never?
    • by Yst ( 936212 ) on Tuesday April 03, 2007 @10:13AM (#18588331)
      Modern English is about 750 years old. English is at least 1550 years old. Tradition is to trace the English presence in Britain to the quasi-historical Anglo-Saxon incursions of the mid-5th century, but migration almost certainly preceded military confrontation. The starting point for the English language (and the Old English era) is the introduction of a continuous Anglic presence to Britain. And that linguistic heritage, termed English, begins at least 1550 years ago.
      • Re: (Score:3, Funny)

        Correction: Modern English [wikipedia.org] is only 25 years old.
    • www.engrish.com

      'nuff said.
  • Let me guess... (Score:5, Insightful)

    by Anonymous Brave Guy ( 457657 ) on Tuesday April 03, 2007 @09:42AM (#18587763)

    A News.com article looks into the reasons why we're not likely to see it phased out any time soon

    I'm going to go with:

    1. Installed base.
    2. Installed base.
    3. Installed base.

    Did I miss anything?

    • by Half a dent ( 952274 ) on Tuesday April 03, 2007 @09:44AM (#18587793)
      4. ???
      5. Profit
    • by morgan_greywolf ( 835522 ) * on Tuesday April 03, 2007 @09:46AM (#18587821) Homepage Journal

      Did I miss anything?


      I think you forgot to mention installed base.
    • by precize ( 83096 ) on Tuesday April 03, 2007 @09:56AM (#18588013) Homepage
      The one time "All your base are belong to us" is actually an on-topic comment
    • by anss123 ( 985305 ) on Tuesday April 03, 2007 @10:06AM (#18588219)
      4. Price / performance. A segment the x86 has done well in.
      5. Security. Will my x86 progs be supported in 20 years? The answer: yes.
      6. Availability. Hmm... Intel, I'd like to order 1,000,000 CPUs. Intel: Sure thing.
      7. Good will. What should we buy, Intel or PPC? PPC? What's that? Go Intel! Yes, boss. (Just look how far Itanium got on Intel's name alone.)

      :D
      • Installed Base...
      • Re: (Score:3, Informative)

        by misleb ( 129952 )

        4. Price / performance. A segment the x86 has done well in.

        Because of installed base

        Security. Will my x86 progs be supported in 20 years? The answer: yes.

        Again, because of installed base. Although as Apple has shown with the PPC -> x86 migration (and also m68k -> ppc) this isn't such a big factor. Major software is constantly being upgraded and old CPUs can always be emulated if necessary. You might say that performance isn't good, but how fast does a 20 year old app have to run?

        Availability. Hmm..

    • Re:Let me guess... (Score:5, Informative)

      by leuk_he ( 194174 ) on Tuesday April 03, 2007 @10:13AM (#18588323) Homepage Journal
      "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."

      I think 50% of the transistors on a modern CPU are cache; you could call that legacy stuff. But the 60% figure makes no sense. The real, seldom-used legacy instructions are handled in Microcode [wikipedia.org], where less effort is spent on optimizing them. And the microcode does not take THAT much space on a CPU.

      Some sources:
      Cpu die picture, est 50% = cache [hexus.net]
      P6 takes ~40% for compatibility reasons [arstechnica.com]. And as the total transistor count grows, that percentage should DECREASE, not INCREASE. If the amount grows, it is for performance reasons, not compatibility reasons.

      However, when you consider the source (XenSource's Chief Technology Officer), it is not surprising that backwards compatibility gets that much attention. A main reason virtualization exists is to run older platforms compatibly.
    • If Linux goes mainstream (and this is a real possibility in many countries outside Europe and the US), there is less to tie us to the x86 family, as many things can just be recompiled.

      But I don't really bet on the x86 being supplanted soon - even Intel couldn't do it. However, I don't see it lasting forever either.

      When the gains for other designs are really an order of magnitude greater than the current design, people will migrate. So far, other prospects were better, but only on the same scale, nothing outrageous b
    • Re: (Score:3, Insightful)

      by ooze ( 307871 )
      The x86 dominance is basically a result of two crooked architectures holding each other up: if MS-DOS weren't so crappy that it depends on x86, then the processor could be changed. If x86 weren't so crappy that it's hard to emulate properly, then MS-DOS and its successors could be changed. As it is, we are stuck with both, because no one wants to change both at the same time, and you cannot really change each independently.
      There is something I hope for:
      Vista tanks mightily, OS X and its successors become the dominant OS in
  • The X86 is a pig. (Score:3, Insightful)

    by LWATCDR ( 28044 ) on Tuesday April 03, 2007 @09:43AM (#18587785) Homepage Journal
    The X86 ISA is a mess. It is a total pig. It is short on registers and it was just an unpleasant ISA to use from day one.
    The problem is that it is a bloody fast and cheap pig that runs a ton of software and has billions or trillions of dollars invested in keeping it useful. I am afraid we are stuck with it. At least the X86-64 is a little better.
    • Re: (Score:3, Interesting)

      by Hoi Polloi ( 522990 )
      I don't know squat about processor design and I'm risking abuse but anyway...

      In this day and age of multi-core CPUs, why not have a processor with an x64 ISA core and a core with the desired architecture? Let them run in parallel like 32/64-bit compatible CPUs. Old software would run on the x64 core and newer software or updated versions could run on the newer core. Maybe this could provide a crutch for the PC world to modernize over time.
      • by fitten ( 521191 ) on Tuesday April 03, 2007 @10:12AM (#18588311)
        Already been done, didn't catch on (see Itanium).

        Because there is such a massive amount of installed x86 software base that you'd be throwing away silicon. To be sure that software ran on the most systems possible, software would still be written for x86 and not the 'desired' architecture.

        That being said, OSS tends to have good inroads in that you get all the source, so you can recompile to whatever architecture you want. However, since x86 still has the huge market share, other architectures get less attention. Also, all of the JIT languages (Java, C#, etc.) make transitioning easier IF you can get the frameworks ported to a stable environment on the 'desired' architecture.

        The main problem is that there is *so* much legacy code in binary (EXE) format only (the source code for many of those has literally been lost) that can be directly tracked to money. Some systems that companies continue to use have so much momentum that changing platforms would require extreme amounts of money: reverse-engineering the current system, complete with quirks and oddities; rewriting it; and (here is a big part that many people fail to add in) retesting and revalidating it. Many companies don't want to spend that kind of money to replace something that 'works'.

        There's so much work/time/effort invested in x86 now that it's hard to jump off that train. AMD's x86-64 is a good approach in that you can run all the old stuff and develop on the new at the same time with few performance penalties. However, I don't know if we'll ever be able to shrug off the burden of x86... at least not for a long time to come. It'd take something truly disruptive to divert from it (and what people are currently envisioning as quantum computing is not that disruption).
        • Re: (Score:3, Interesting)

          by Scott7477 ( 785439 )
          I think it would be interesting to know about applications where the source code has been lost. To me, it would seem that running an app where no one has the source code implies zero vendor support. Is anyone willing to give examples of apps they know of where the source has disappeared and the running binary is a mission-critical app (particularly at any Fortune 500 companies)?
          • Re: (Score:3, Informative)

            by chthon ( 580889 )

            I have one example here. It is a small DOS program, called convert.exe, which somehow does transformations in the linking phase of ELF files in a cross platform environment.

            From what I know, VxWorks licensed this from another company, which does not have the sources anymore.

            From time to time this program crashes due to the output generated by the Tornado compiler. This renders our daily builds unusable for particular targets, which is definitely a show stopper for the daily testing of our embedded software.

      • Re:The X86 is a pig. (Score:4, Interesting)

        by kestasjk ( 933987 ) * on Tuesday April 03, 2007 @10:22AM (#18588499) Homepage

        In this day and age of multi-core CPUs, why not have a processor with an x64 ISA core and a core with the desired architecture? Let them run in parallel like 32/64-bit compatible CPUs.
        Because that uses very valuable die real estate. These days x86 is already converted into micro-ops, which are effectively another instruction set altogether, one that can be more easily re-ordered for efficiency.

        Basically x86 isn't a perfect instruction set for today's landscape, but then again UNIX isn't a perfect operating system for today's landscape; that doesn't mean it's not still very good and we shouldn't praise those who have made it so good.
        Some say plan9 has a better design than Linux, some say that PPC has a better design than x86, but apparently design isn't everything.

        Lots of things could be better if we could get everyone to migrate from what they currently use, but would it be worth it in this case? I don't think so, at least not until we reach the limits that better design & hardware can do.
    • Re: (Score:2, Interesting)

      by phunctor ( 964194 )
      Yabbut... the ISA gets turned into a plasma of pico-ops, which then dispatch, somewhat out of order, on the Real ISA (which changes from each "x86" to the next "x86"). It doesn't really matter how fugly the ISA *was* as long as the Real ISA is apt for keeping the ALUs well fed.

      It's convenient to have a consistent interface layer, and the gate count cost of the translation is asymptotically zero. It makes writing good optimizing compilers for "generic x86" all but impossible, but fortunately the final levels
      • Yabbut....

        push bp          ; save caller's frame pointer
        mov bp,sp        ; set up this function's stack frame
        <function body>
        pop bp           ; restore caller's frame pointer
        ret              ; return to caller

        Has to be fetched from main memory, decoded and executed, no matter what happens internally to the CPU.
        • by Waffle Iron ( 339739 ) on Tuesday April 03, 2007 @10:52AM (#18588941)
          So? Function call sequences have to be executed on RISC CPUs as well. On the X86, most of those instructions are encoded in a single byte each, which is a cache-friendly compact representation. Under the hood, that whole sequence is recast into an optimal representation for the particular chip and usually executes in about two clock cycles. Pre-decoded instructions are usually cached in some form, so the x86-to-RISC translation is not incurred all that often anyway.

          The bottom line is: has any other architecture enabled apps to run significantly faster over multiple CPU generations at comparable costs? Nope. Other architecture fads have come and gone, but the X86 just absorbs the best ideas from each and keeps marching along.
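
          To put some bytes behind the "single byte each" claim, here is that prologue/epilogue again with its 16-bit x86 encodings. This is a sketch from the standard opcode map; an assembler may pick an equivalent byte sequence for the mov.

            bits 16
            push bp        ; 55       (one byte)
            mov bp, sp     ; 89 E5    (two bytes; 8B EC is an equivalent encoding)
            pop bp         ; 5D       (one byte)
            ret            ; C3       (one byte)
            ; five bytes in total, where a fixed 32-bit-per-instruction
            ; encoding would spend sixteen on the same four operations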

    • by gr8_phk ( 621180 )
      Agreed. And a bunch of idiots are going to point out that nobody actually implements it directly. x86 instructions are "translated" on the fly to whatever RISC type processor is actually doing the work - or some such. They'll claim it doesn't matter what the ISA is any more because of this capability. There are two problems with these arguments. 1) it takes circuitry and power to break down crappy instructions into nice ones. 2) the inefficient encoding takes more space - this requires extra unwanted instru
      • Re:The X86 is a pig. (Score:5, Interesting)

        by afidel ( 530433 ) on Tuesday April 03, 2007 @10:28AM (#18588597)
        Actually the encoding is VERY efficient where it matters most: cache density and limiting the number of calls to main memory. Having complex instructions helps in the areas where real-world performance is most hurt, and that is why we have a CISC frontend to an efficient RISC backend. This balance was reached even in the "RISC" camp: look at the PPC970, whose more complex instructions get broken down into uops and dispatched to execution units, very similar in many ways to how modern x86 processors work. The translation layer is less than one percent of die space and probably a much lower percentage of power usage on modern x86 chips.
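
        As a sketch of what that CISC-frontend/RISC-backend split means in practice (the actual micro-ops are model-specific and undocumented, so the breakdown below is illustrative only):

          ; one compact x86 instruction (x86-64 syntax)...
          add dword [rdi], eax
          ; ...is cracked by the decoder into roughly three simple micro-ops:
          ;   load   tmp <- [rdi]        ; read the memory operand
          ;   add    tmp <- tmp + eax    ; the actual ALU work
          ;   store  [rdi] <- tmp        ; write the result back
          ; one instruction in the cache, three easy-to-schedule ops in the core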
    • My understanding is that modern processors don't run x86 natively either, but are doing highly optimized translations of x86 instructions on the fly. The path for this way of doing things was blazed by the likes of Transmeta and HP. Read Ars Technica's CPU theory and praxis articles [arstechnica.com] for more information.
  • lock in (Score:5, Insightful)

    by J.R. Random ( 801334 ) on Tuesday April 03, 2007 @09:46AM (#18587817)
    The x86 instruction set will be retired in the same year as the QWERTY keyboard layout.
  • Simple! (Score:5, Insightful)

    by VincenzoRomano ( 881055 ) on Tuesday April 03, 2007 @09:49AM (#18587887) Homepage Journal
    Just like the four-stroke engine. It's not the best one, it can be greatly enhanced and made better, but it's still here.
    And just like the four-stroke engine, modern engines still just burn gasoline and push the car forward. That is where the similarity with the original engines ends.
    • Re:Simple! (Score:5, Insightful)

      by Wite_Noiz ( 887188 ) on Tuesday April 03, 2007 @09:59AM (#18588075)
      I've heard loads of metaphors about why x86 will be around for years to come, but none of them really hold.
      An engine is a black box (petrol in, kinetic energy out, simply put), whereas the architecture of a processor is not.

      AMD and Intel can make as many additions to x86 as they like, but if they stop supporting the existing instruction set, they'll sell nothing.

      I'm sure Linux would be compiled on to a new architecture overnight, but I doubt MS would move any time soon - and their opinion holds a lot of weight on the desktop.

      RISC ftw!
    • Re:Simple! (Score:5, Funny)

      by Nimey ( 114278 ) on Tuesday April 03, 2007 @12:31PM (#18590469) Homepage Journal
      Slashdot needs a mod tag (-1, Car analogy).
  • Does it matter? (Score:5, Interesting)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Tuesday April 03, 2007 @09:52AM (#18587927) Homepage

    At this point, does it matter as much? As we move on, the future is clearly x86-64, which is MASSIVELY cleaned up and really rather clean compared to x86. Sure, at this point we still boot into 8086 mode and have to switch up to x86-64, but that's not that important; it only lasts a short while.

    As we move off of x86 onto -64, are things really still that bad? Memory isn't segmented, you have like 32 different registers, and you don't have operands tied to specific registers (all add instructions must use AX, or something like that) the way some 16/32-bit instructions did (see the sketch below).

    Of course, we should have used a nice clean architecture like 68k from the start, but that wasn't what was in the first IBM.... and we all know how things went from there.
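
    A sketch of the "operands tied to registers" point, using the classic example of the 8086 widening multiply versus its x86-64 descendant (illustrative only; other instructions had similar accumulator ties):

      bits 16
      ; 8086: the widening multiply is hardwired to the accumulator
      mov ax, 1234     ; the multiplicand must live in AX
      mov bx, 10
      mul bx           ; DX:AX = AX * BX; the destination is implicit

      bits 64
      ; x86-64: imul works with any of the 16 general-purpose registers
      imul r8, r9      ; r8 = r8 * r9, no accumulator required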

    • Re:Does it matter? (Score:4, Insightful)

      by Zo0ok ( 209803 ) on Tuesday April 03, 2007 @10:04AM (#18588161) Homepage
      And since the 386 consisted of 275,000 transistors while modern CPUs have more than 200 million transistors, the cost/waste of backwards compatibility with the 386 is very little.
      • by MBCook ( 132727 )
        Yes, but it is being used less and less. No one really uses the 16-bit support and such in Linux. In the future even the 32-bit support will be used less. When MS drops compatibility at some point (they can't keep going forever) they can put in a software emulation layer. The demand to run 8- and 16-bit DOS programs won't keep being worth it forever. When that happens, after a few years it will be possible to start dropping those portions of the chip, since they are so little used and we have emulators at this point.
    • by renoX ( 11677 )
      >you have like 32 different registers,

      16 integer registers, not 32!

      • by TomRC ( 231027 )
        R0 - R15, xmm0 - xmm15. Yep, that's 16 registers!
        (FP stack doesn't count, MMX is dead)
    • by RetiredMidn ( 441788 ) on Tuesday April 03, 2007 @10:25AM (#18588565) Homepage
      Good points all.

      I would add to this that the ISA mattered a lot more when I wrote code in assembly language. For a clean (and simple) instruction set architecture, I fondly remember the PDP-11 [wikipedia.org]. Later on, the 680x0 offered more powerful addressing modes at the cost of some simplicity (and consistency). Compared to both, the x86 was infuriating to work with.

      ISAs still mattered, but less, in my early "C" days, when source-level debugging was less robust and I needed to understand what the compiler was turning my code into so I could figure out where to optimize.

      Today, it hardly matters at all. Looking at generated code tells me little about how the processor with multiple execution units is going to process it; it is necessary to trust the compiler and its optimization strategy. It matters even less with interpreted or JIT'd languages, where the work eventually performed by the processor is far removed from my code. Knowing what's happening at runtime involves much more important factors than the ISA.

      • by thethibs ( 882667 ) on Tuesday April 03, 2007 @11:52AM (#18589841) Homepage

        What's with all this dissing of the X86?

        Like you, I'm an old fart; I wrote assembler code for the PDP-8, PDP/LSI-11 and the 68k. They were ok: easy to learn and use, but I always preferred the X86.

        Sure, it was harder to learn and I never got past having the blue book on my desk when I was coding but, in the end, it produced smaller, faster code. There were a number of apps I wrote for multiple platforms, so I got to compare. Also, (the same reason I love perl) you could do astounding things with side-effects.

        Commercially, X86 has staying power because it was architected to scale. Variable-length instructions with lots of space in the operator range let Intel adapt the design to any new demands. Most, if not all, of the complaints about X86 (e.g. too few registers) are just version features—yesterday's news if there's a market demand for an improvement.

        Bottom line—it ain't neat, but that doesn't matter; it's programmed once and used millions of times. Programmer convenience is irrelevant.

    • Would it be possible to make a legacy-free x86 chip? I.e., remove from the processor die real, unreal, VM86, and 16-bit protected modes, as well as all traces of the ISA bus, the BIOS, and anything else you can think of? Porting *NIX and Windows to this new platform architecture would be effortless, and it would not change userland compatibility.

      We don't need to support 30 years of backwards compatibility!
  • by InsaneProcessor ( 869563 ) on Tuesday April 03, 2007 @09:55AM (#18587987)
    Yes, the instruction set is old, but it does still work. As a consumer, why should I have to re-invest in software that I purchased and that does the job, just because my hardware failed or faster hardware becomes available and I upgrade? Apple bit that one some time ago. Last year, I had an investment of $4000.00 in software when Intel came out with a significantly faster part that was dropping in price. Just by upgrading my hardware (cost $800) my investment improved significantly. $4800.00 would not have justified the upgrade, but the low cost of hardware alone did. Also, there was no learning curve involved.

    You don't buy a new car just because the tires need replacing (well, some people do, but that is rarely the fiscally responsible thing).

    If it ain't broke, it doesn't need fixing.
    • Re: (Score:3, Funny)

      by richdun ( 672214 )

      You don't buy a new car just because the tires need replacing (well, some people do, but that is rarely the fiscally responsible thing).

      I hate to use a car analogy, but yeah. Cars have changed tremendously over the past 50+ years, but all in all, they're still four tires attached to two axles, with a transmission converting power from the engine to rotational energy in the axles, with a cabin on top of these axles with seats and a single driver's wheel, pedals, and control area. All of those components h

  • by PineHall ( 206441 ) on Tuesday April 03, 2007 @09:56AM (#18588021)
    It has been said that people will not change unless something is perceived to be 10 times better. The problem is nothing has been perceived to be that much better, so people stay with what they know.
    Paul
  • by kabdib ( 81955 ) on Tuesday April 03, 2007 @09:58AM (#18588041) Homepage
    Things would be a lot easier if the darned thing wasn't so bloody complex to emulate. I mean if we were "stuck" with (say) an ARM or even a 68K we'd be able to use virtual machines to dig ourselves out of a similar architectural hole (though with an ARM we'd be unlikely to want to).

    The x86 has so many modes of operation (SMM, real/protected, lots of choices for vectorizing instructions, 16/32/64 bit modes) and special cases that it's a pretty big project to get emulation working correctly (much less fast). You're pretty much stuck with a 10x reduction clock-for-clock on a host. Making an emulated environment secure is hard, too; you don't necessarily need specialized hardware here (e.g., specialized MMU mapping modes), but it helps.

    And now, with transistor speeds bottoming-out, they want to go multicore and make *more* of the things, which is exactly the opposite direction that I want to go in... :-)
    • by Zo0ok ( 209803 )
      How do you mean? Virtual PC for Apple/PPC emulated x86 quite well. I think a 500MHz PPC-processor was roughly able to emulate a 350 MHz Pentium Equivalent Processor.

      Emulating a CISC architecture on a RISC architecture is not that hard. The other way around is much harder - you can't very well emulate a PPC/SPARC/MIPS on an x86 computer. Then you would suffer a 10x clock-for-clock reduction.

      • The other way around is much harder - you cant very well emulate a PPC/SPARC/MIPS on a x86-computer. Then you would suffer 10x clock-for-clock reduction.

        Except I'm doing it now on OS X and it works fine. 60% speed penalty at most.
        • Re: (Score:3, Informative)

          by TheRaven64 ( 641858 )
          Don't forget two things. The first is that one of the design goals of PowerPC was to be able to emulate x86. For this reason, there are a few things that are a bit ugly in the instruction set, and it feels much less clean than something like SPARC or Alpha.

          When it comes to Rosetta, you should also remember that a lot of the process is not actually being emulated. Every time you call something in the standard library, you are executing native code. There's a small overhead for swapping byte orders of

  • by FredDC ( 1048502 ) on Tuesday April 03, 2007 @10:00AM (#18588077)
    If a chipmaker declared its chip could run only software written past some date such as 1990 or 1995, you would see a dramatic decrease in cost and power consumption, Crosby said. The problem is that deep inside Windows is code taken from the MS-DOS operating system of the early 1980s, and that code looks for certain instructions when it boots.
     
    Even new software might (and often does) use the so-called old instructions. If you want to completely redesign the hardware you would also have to completely rewrite the software from scratch as you would not be able to rely on previously written code and libraries. This is simply not feasible on a global scale...
    • Re: (Score:2, Insightful)

      by stevey ( 64018 )

      That isn't entirely true. Sure, code might exist in the wild which uses old instructions, but it wouldn't need to be rewritten - just recompiled with a suitable compiler. (Ignoring people who hand-roll assembly, of course!) (Of course, whether the source still exists is an entirely separate issue!)

      However, with all the microcode on board chips these days, it should be possible to emulate older instructions; provided Intel can persuade compiler-writers to deprecate certain opcodes, the situation should essent

  • by trigeek ( 662294 ) on Tuesday April 03, 2007 @10:00AM (#18588079)
    "There's no reason whatsoever why the Intel architecture remains so complex," said XenSource Chief Technology Officer Simon Crosby. "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."

    Who is this guy and what is he smoking? Over half of a modern processor is cache. The instruction decoding and address decoding are a small fraction of the remainder. Where does he get the 60% from?

  • by WED Fan ( 911325 ) <akahige@t r a s h mail.net> on Tuesday April 03, 2007 @10:05AM (#18588175) Homepage Journal

    I know we all bitch about old designs and legacy support for outdated features, but one of the things that keeps people from moving from one OS to another is "existing base of installed software" and "knowledge of existing software". Like it or not, the major player is Microsoft. No matter how much a geek says MS UIs suck, people are comfy with them. If alternative OSes had the same software offerings with the same UI, people would be able to move to them. The same holds true for processors.

    No matter how well a processor performs, if there is no application base for it, no one is going to buy a machine with that processor. In this case, perception is reality. You walk into a software store, you see 16 rows of Windows applications, half a row of Linux, and 5 rows of Apple.

    What processor family runs each of these? Guess who has moved to the dominant processor?

    The only way to build a software base is to build in legacy support. Then start weaning users away from the legacy features, get programmers to stop using those features (mainly those building the compilers that developers use), and move towards the more advanced features.

    x86 rules for a reason. Microsoft rules for a reason. The customer is comfortable with them, and their perception is reinforced every time they go to the store.

  • And a comedian would say it all depends on what you think about disco.
    Disco Stu does the x86 boogaloo
  • 60% (Score:2, Informative)

    by anss123 ( 985305 )
    From the article:
    "There's no reason whatsoever why the Intel architecture remains so complex," said XenSource Chief Technology Officer Simon Crosby. "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes."
    (Emphasis mine)

    Ehe, according to the latest in-depth articles, the legacy cruft takes less than 10% of the chip. A far cry from Crosby's claim of 60 percent, and that from a Chief Technology Officer, no less :p
  • by scgops ( 598104 ) on Tuesday April 03, 2007 @10:17AM (#18588415)
    Computer manufacturers have tried making non-compatible machines. Commodore 64, VIC 20, Coleco Adam, Atari ST. They all had their place in time and their niche in the market before fading out.

    Something they all had in common, though, is that they sold better than IBM's mostly-compatible PCjr. I attribute that difference to software and compatibility problems. Because of BIOS differences, a number of programs written for the PC couldn't run on the PCjr. That led to a fragmentation of shelf space at software retailers and confusion among retail customers, and led to customers avoiding the platform in favor of easier-to-understand options.

    I would expect something similar to happen if Intel, AMD, or anyone else started making mostly-compatible x86 processors. It wouldn't sell unless all of the software people are used to running still worked. Sure, someone could take Transmeta's approach and emulate little-used functionality in firmware rather than continuing to implement everything in silicon, but it all pretty much needs to keep working, so why bother?

    Seriously, why would anyone undertake the effort and expense needed to slim-down x86 processors when the potential gains are small and the market risk is pretty huge? No chip manufacturer wants to replace the math-challenged Pentium as the most recent mass-market processor to demonstrably not work right.

    Pundits and nerds can talk all they want about why the x86 architecture should be put out to pasture, but it won't happen until a successor is available that can run Windows, OS X, and virtually all current software titles at acceptable speeds. And that seems pretty unlikely to happen on anything other than yet another generation of x86 chips.
  • by astrashe ( 7452 ) on Tuesday April 03, 2007 @10:24AM (#18588547) Journal
    If free software ever goes truly mainstream, and the stacks people use are free from top to bottom, lock in goes away in general. Even hardware lock in.

    A couple of years ago, I was shifting some stuff around and I needed to clean off my main desktop machine, an x86 box. I installed the same linux distro on a G4 mac and just copied my home directory over. Everything was exactly the same -- my browser bookmarks and stored passwords, my email, my office docs, etc.

    A lot of people take Apple's jump from PowerPC to x86 as a sign that x86 is unstoppable. But I'd argue that the comparative ease with which the migration took place shows how weak processor lock in is becoming. The shift from PPC to x86 was nothing compared to the jump from MacOS Classic to OS X.

    The real reason x86 won't go away any time soon is that MS has decided that's the only thing it's going to support, and MS powers most of the computers in the world. Windows is closed, so MS's decision on this is final, and impossible to appeal.
  • A little off-topic:
    I've had a picture of a die for my desktop wallpaper for a while now, and I think it works well. I'd really like some larger pictures of the dies they give here [com.com]. Does anyone know where I would find larger ones?
  • by Vexler ( 127353 ) on Tuesday April 03, 2007 @10:26AM (#18588583) Journal
    As part of an operating systems course I am currently taking, we watched a video of a presenter from Intel who lectured on the changes associated with the Itanium processor. In his presentation (see the video at http://online.stanford.edu/courses/ee380/040218-ee380-100.asx [stanford.edu]), he pointed out that Intel has gone from having one or two major ideas to drive chip design to having fifteen or twenty minor ideas that they can cram in. The thinking is that if they can amass enough of these "little ideas" together, they can probably cobble together enough performance enhancement to justify production and sales of these chips. Part of the issue is that, as the author of this article also admits, there are currently no "big ideas" coming around the bend in terms of truly revolutionary performance increases.

    The problem, though, is that when you introduce many smaller features, you cannot always anticipate how these features will interact with one another. This is why it is counterintuitive to many people that "new and improved" is not always so, and why you actually risk introducing design bugs more subtle than you can detect. That, combined with the continuing support for legacy code, means that complexity (and power consumption) goes through the roof with each iteration. While it is a testament to the robustness and versatility of the x86 architecture that it has survived thus far, one could argue that the architecture *had* to survive because we couldn't come up with the next paradigm shift.

    The good news is that there are solutions to this situation. The bad news is that all of the solutions involve massive change in the way the software industry clings to the tried-and-true, or truly revolutionary innovation in chip re-architecture, or billions of dollars, etc. As the article points out, experience with EPIC has demonstrated how NOT to introduce a completely new architecture. There is no easy way out, but there are several possible paths.
  • by tji ( 74570 ) on Tuesday April 03, 2007 @10:29AM (#18588615)
    The article claims that Windows still requires the old compatibility modes to boot. Is this true? I could see how Win95-like OS's could because they basically boot on DOS. But, for NT and beyond, wouldn't they be fine with removing those old legacy capabilities?

    The question that leads to is: What is gained by removing the legacy junk? The guy from XenSource in the article claimed "There's no reason why they couldn't ditch 60 percent of the transistors on the chip, most of which are for legacy modes." Which seems ridiculous. Maybe he's talking about 60% of the silicon in a certain subsystem of the CPU, because they certainly can't remove 60% of the total transistors.

    If the savings are minimal, and those modes don't affect anything once you've switched to 32- or 64-bit protected mode, then maybe it's a moot point.

    To really shift the Instruction Set, you obviously have to do it in an evolutionary way. Such as, allowing access to the lower level IS (i.e. the instructions that the x86 gets translated into) in a virtual machine environment. So, you could have a more efficient Linux OS running in a VM, and if the benefits of that are substantial, more people might use that mode for the host OS (which could then run x86 VMs for legacy). It's easy to see that being used for Linux and even Mac OS as their portability is already proven, and they began as modern OS's - working only in protected mode.
  • by Simonetta ( 207550 ) on Tuesday April 03, 2007 @10:35AM (#18588709)
    We lose the X86 when another processor comes along that is cheaper, 10x more powerful, and runs all X86 software at a speed that users consider the same as a PC. Until then we keep the X86. Simple as that. Next tech issue, please.
  • by Erich ( 151 ) on Tuesday April 03, 2007 @11:02AM (#18589071) Homepage Journal
    Haven't we learned this by now? Why do we keep going over this same stupid premise?

    The instruction set of a processor architecture with so many resources available to it doesn't really matter, so long as it isn't utterly and completely braindead. X86 isn't braindead enough to qualify... if you had an INTERCAL [catb.org] instruction set or a One Instruction Set Computer [wikipedia.org], it might.

    You really want to do several things to get performance out of an instruction stream -- register renaming, instruction manipulation (breaking instructions apart, joining them together, or changing them into other instructions), elimination of some bad instruction choices, and a host of other things. You would want to do these things even on a "clean" ISA like Alpha or PPC or MIPS. And if you are doing them, the x86 instruction set suddenly becomes much less of a problem. There are even advantages: the code size on x86 tends to be better than on a 32-bits-per-instruction architecture.

    Instruction sets are languages with exact meanings, which means that you can precisely translate from one instruction set to another. And, as it turns out, you can do it fairly easily and efficiently. Which is why Transmeta did pretty well. Which is why Apple's Rosetta and Java JIT compilers work (and DEC's FX!32 before that). Which is why AMD and Intel are right there at the top of the performance curve with x86-style instruction sets: because it JUST DOESN'T MATTER THAT MUCH.

    Why didn't Transmeta kick more butt? Because they didn't have the economies of scale that AMD and Intel have. Because they didn't have the design resources that AMD and Intel have. Because AMD and Intel had better-tuned, faster processes than TSMC or whoever was fabbing Transmeta's chips. THOSE are the most important things, not the instruction set that you have on disk.

    Now a good ISA can help in many ways: SIMD instructions really help to point out data-level parallelism. More registers help a wee bit to prevent unnecessary work being done around the stack for correctness. You can get rid of a bit of logic if you can execute without translation. But these things can either be added to x86 (SSE/x86-64) or aren't expensive enough to be worth it on a 100 sq mm, >50W processor. Maybe in an embedded, low-power processor.
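
    To make the register-renaming point above concrete, a sketch in x86-64 syntax (the physical register names P1/P2 are made up for illustration; renaming is invisible to the programmer):

      ; both blocks reuse eax, but there is no real data dependency
      mov eax, [rdi]     ; renamer maps eax to physical register P1
      add eax, 1
      mov [rdi], eax
      mov eax, [rsi]     ; renamer allocates a fresh physical register P2,
      add eax, 2         ; so this block can run in parallel with the
      mov [rsi], eax     ; block above despite both being written as "eax"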

  • Idiotic... (Score:3, Insightful)

    by evilviper ( 135110 ) on Tuesday April 03, 2007 @11:10AM (#18589209) Journal
    This is the same idiotic argument as always. They don't even try to change it up a little bit...

    The architectural limitations of x86 were probably true up through the Pentium 1 days. After the introduction of Intel's P6 and AMD's K6, everything changed. At that point, x86 was no longer the clumsy CISC snail it used to be. From then on, the fierce competition between Intel and AMD pushed x86 ahead of every other architecture. Others like Alpha held on to the pure performance crown for a few years to come, but they did so by embracing much higher power consumption. These days, new x86 CPUs are falling in power consumption, not rising. And AMD's Geode CPUs can give you a good-performing x86 CPU for embedded systems, OLPC, and anything else, in under 1W. There's really nothing else that is lower power and still performs as well...

    These days, x86 is more than competitive with everything else in sheer performance, performance-per-watt figures, and far ahead in performance per dollar. One at a time, nearly all the limitations of the x86 architecture, that were so often paraded out by competitors, have been worked around. It's most other architectures which were crippled, in that their short-sighted design was only really good in one area, and they only became popular because x86 wasn't quite there at the time. Meanwhile, x86 continued to develop, addressing those shortcomings, and the others did not. The only competitors these days are Power and SPARC, and the two highest-profile companies using them have long since come around, and started selling x86 themselves.

    Backwards compatibility is only the smallest of reasons that x86 is still around. How many Linux/BSD users continue to buy x86 systems, even though they would hardly notice an underlying architecture change? How many super-computing clusters are x86-based? It's only the Windows world that needs x86 compatibility, and though that's about 90% of the market, the other 10% use x86 anyhow.
  • by dpbsmith ( 263124 ) on Tuesday April 03, 2007 @11:23AM (#18589401) Homepage
    ...rather than intelligent design.
    • You say that to be funny, and it is, but it's also insightful.
      One of the things about evolution is that it can only work with what it has, which is why our backs hurt all the time. Evolution can't just suddenly stick a good spine/leg support/locomotion system in, but works with what already exists, intended for quadrupeds. (This is, in essence, the area that the Irreducible Complexity crowd are attacking.)
      But, look at x86 and its dominance over itanium. Itanium is a *good* design, but x86 is outcompeting
  • by Animats ( 122034 ) on Tuesday April 03, 2007 @12:12PM (#18590141) Homepage

    The x86 instruction set is a surprisingly good way to build a computer. The reasons aren't obvious.

    First, the original x86 was a huge pain, with that stupid segmented memory arrangement. But IA-32 was better and cleaner; at last there was a flat 32-bit address space. (Yes, there's a segmented 48-bit mode, and Linux even supports it, but at least apps see a flat address space.) AMD-64 is even more regular; the segmented memory stuff is completely gone in 64 bit mode. So there is progress.

    RISC architectures could yield simple machines that could execute one simple fixed-width instruction per clock cycle. The early DEC Alphas, the MIPS machines, and early IBM Power chips are examples of straightforward RISC machines. This looked like a big win. The ALU was simple, design teams were small (one midrange MIPS CPU was designed by about six people), and debugging wasn't hard. RISC looked like the future around 1990.

    What really changed everything was advanced superscalar architecture. The Pentium Pro could execute significantly more than one instruction per clock. The complexity was appallingly high, far beyond that of supercomputers. The design teams required were huge; Intel peaked somewhere around 3000 people on that project. But it worked. All the clever stuff, like the "retirement unit", actually worked. Even the horrible cases, like code that stored into instructions just ahead of execution, worked. It was possible to beat the RISC machines without changing the software.

    The Pentium Pro was a bit ahead of the available fab technology. It required a multi-chip module, and was expensive to make. But soon fab caught up with architecture, and the result was the Pentium II and III, which delivered this technology to the masses. Then AMD figured out how to do superscalar x86, too, using different approaches than Intel had taken.

    The RISC CPUs went superscalar too. But they lost simplicity when they did. One of the big RISC ideas was to have many, many programmer-visible registers and do as much as possible register-to-register. But superscalar technology used register renaming, where the CPU has more internal registers than the programmer sees. The effect is that references to locations near the top of the stack are as efficient as register references. Once the CPU has that capability, all those programmer-visible registers don't help performance.

    Making all the instructions the same size, as in most RISC machines, leads to code bloat. Look at RISC code in hex, and you'll see that the middle third of most instructions is zero. Not only does this eat up RAM, it eats up memory and cache bandwidth, which is today's scarce resource. Fixed size instructions simplify instruction decode, but that doesn't really affect performance all that much. So x86, which is a rather compact code representation, actually turns out to be useful.
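
    As a back-of-the-envelope illustration of that density argument (byte counts from the standard x86-64 opcode map; the RISC figure simply assumes four bytes per instruction):

      push rbp         ; 1 byte  (55)
      mov  rbp, rsp    ; 3 bytes (48 89 E5)
      add  eax, 1      ; 3 bytes (83 C0 01)
      pop  rbp         ; 1 byte  (5D)
      ret              ; 1 byte  (C3)
      ; 9 bytes of x86-64, versus 20 bytes for five fixed-width
      ; 32-bit instructions doing comparable work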

  • Jeeze ... (Score:3, Funny)

    by smcdow ( 114828 ) on Tuesday April 03, 2007 @12:17PM (#18590243) Homepage
    Why did it have to be a little endian [wikipedia.org] processor?
  • by ceeam ( 39911 ) on Tuesday April 03, 2007 @01:51PM (#18591873)
    Is it only me, or does anyone else feel a bit uneasy about the lost opportunity for a good cleanup when we moved to the x64 ABI (yes, I don't like "x86_64")?

    I mean:

    http://en.wikipedia.org/wiki/X86_calling_conventions [wikipedia.org]

    Why require 16-byte alignment? Oh, so that xmm data can be stored aligned on the stack. But how often do you need it? 0.01% of all stack frames or less? Wouldn't it make more sense to do this alignment when entering the functions that need it (3 assembler instructions, right?)? Why so many registers allocated for args? Why not drop 387 stack support entirely; wouldn't that improve context-switching times? (Hmm, I may be wrong here)... Finally, why did MS feel obligated to come up with their own fucking version of the ABI?! (Ok, that last one is rhetorical)...

    But that's peanuts compared to the whole memory-model / "int" size thing. I mean, do people never learn? At least the 16-bit Unicode problems should've taught us something about bean-picking. So now we have a cache-spoiling-if-nothing-else 32-bit-selecting prefix on every other fucking CPU instruction, and you cannot have more than 4 gigs of executable code. What's that, "640K should be enough for everyone" once again? What if I want some code generator for turning my data into self-processing code? (Old-schoolers may remember "compiled sprites" to get my idea.)
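
    For reference, a sketch of the System V x64 calling convention being complained about here (register assignments and the 16-byte alignment rule are from the SysV ABI; the function names are made up):

      bits 64
      ; first integer args go in rdi, rsi, rdx, rcx, r8, r9;
      ; rsp must be 16-byte aligned at the call instruction
      caller:
          push rbp           ; entry: rsp was 16-aligned minus the 8-byte
          mov  rbp, rsp      ; return address; this push realigns it
          mov  edi, 1        ; first argument
          mov  esi, 2        ; second argument
          call callee        ; rsp is 16-byte aligned here, per the ABI
          pop  rbp
          ret
      callee:
          lea  eax, [rdi + rsi]   ; return arg1 + arg2 in eax
          ret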

    x64 is a great step forward for x86 and it could be better if wiser (IMHO) decisions were made in its infancy. Maybe it's too late now but I guess it will bite our asses in the years to come.
    • Re: (Score:3, Informative)

      by AmunRa ( 166367 )
      Note: The ABI (Application Binary Interface) isn't defined by the chip, it's defined by the Operating System. Linux generally uses the System V ABI (on x86), simply because it was easier to use a common ABI than invent your own. Keeping the Linux ABI for x86-64 similar to the x86 one makes the whole toolchain much easier to develop. There is nothing stopping you from calling functions in any way you see fit, saving and restoring no information if you want, but you'll have fun when interfacing with other pr
  • by ravyne ( 858869 ) on Tuesday April 03, 2007 @02:11PM (#18592203)
    I drive a '96 Cavalier. It's not stylish, it's not particularly fast, it has no power windows or locks, and due to some dings it's not even orthogonal anymore. But it was cheap, relatively fuel-efficient, and reliable, and it gets me from A to B as fast as I'm otherwise allowed. We geeks tend to pine over sleek ISAs like MIPS or Power in much the same way that car enthusiasts wax romantic about the latest sports car. For most of us, however, practicality forces us to drive more modest vehicles. It's not practical to drive a vehicle that requires some exotic fuel, in the same way that it's not practical to run a CPU that digests some exotic instruction set, and for the same reasons: limited use and availability lead to higher cost of ownership overall, while economies of scale and past investment lead to comparatively rock-bottom prices.

    The PC is also bogged down by something far more sinister than the x86 instruction set, namely the PC BIOS. This is only just beginning to go away, with Apple having adopted Intel's EFI firmware (OpenBIOS on their PPC systems before that) and the growing list of LinuxBIOS-supported motherboards (still not ready for personal use, but getting there). Widespread EFI adoption might take place if Microsoft releases a home OS capable of using EFI without the BIOS compatibility layer.

    Another point to watch for in the future is the proliferation of platforms such as the CLR (.NET) and, to a lesser extent, the JVM. These platforms serve as an abstraction layer between the instruction set the software is written for and the instruction set of the hardware on which it runs. With a performance difference of 10% or so now, and that difference shrinking as the technology matures, we'll begin to see the underlying architecture lose its hold on being the defining element of the platform. We're already seeing x86 technology move towards extensions that make virtualization (such as Xen) more efficient, and I suspect it will not be long before it includes features to make the .NET platform and similar technologies run more efficiently as well. If these technologies eventually become the de facto target for software, we may see a future in which the CPU's sole purpose is to efficiently support a higher-level platform defined by software.

    In the embedded world, x86 does not reign; in fact, x86 is a very small portion of the embedded market. PowerPC rules, followed by ARM and 68k, and that doesn't even mention the smaller processing tasks run by microcontrollers like the 8051 or PIC devices. x86 has all but been ousted where engineers are freed from the concerns of backwards compatibility and high performance is not required.
