
x86 Evolution Still Driving the Revolution

An anonymous reader writes "The x86 instruction set may be ancient, in technology terms, but that doesn't mean it's not exciting or innovative. In fact, the future of x86 is looking brighter than it has in years. Geek.com has an article pointing out how, at 30 years old, x86 is still a moving force in technological advancement and, despite calls for change and numerous alternatives, it will still be the technology that gets us where we want to go. Quoting: 'As far as the world of the x86 goes, the future is very bright. There are so many new markets that 45nm products enable. Intel has really nailed the future with this goal. And in the future when they produce 32nm, and underclock their existing processors to allow the extremely low power requirements of cell phones and other items, then the x86 will be the power-house for our home computers, our notebooks, our cell phones, our MIDs and other unrealized devices today.'"


  • by Kjella ( 173770 ) on Friday May 09, 2008 @08:32AM (#23349040) Homepage
    x86 processors aren't x86 processors, and haven't been for many years. They all decode the x86 instruction set to microops which they execute internally. The x86 instruction decoder doesn't take up any significant space, and if there really was an advantage to direct microop code, producers would have offered a "native" microop mode long ago. SSE instructions have provided a lot of the explicit parallelism without touching the standard x86 set. The mathematical complexity doesn't get less than an ADD or MUL anyway, so it would have been all about arranging the queue inside the CPU. So yeah, ADD and MUL survive, but like in mathematics it's just the symbols; in implementation it can be done with everything from microops to an abacus.
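
    To make the "explicit parallelism" point concrete, here is a minimal C sketch using SSE intrinsics. The function and array names are made up for the example; the point is simply that one _mm_add_ps performs four single-precision adds in a single instruction, layered on top of the ordinary scalar x86 set.

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Add two float arrays of length n (n assumed to be a multiple of 4). */
    void add_arrays(float *dst, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);              /* load 4 floats          */
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));   /* 4 adds, 1 instruction  */
        }
    }
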
    • Re: (Score:2, Informative)

      by dreamchaser ( 49529 )
      I respectfully disagree. An x86 processor is any processor that can execute x86 instructions. The underlying architecture (RISC vs CISC, etc.) is irrelevant.
      • I respectfully disagree. An x86 processor is any processor that can execute x86 instructions. The underlying architecture (RISC vs CISC, etc.) is irrelevant.

        RISC and CISC describe instruction sets though, which is what x86 IS, so the underlying implementation can't really be either RISC or CISC - x86 itself is CISC by definition. A RISC instruction set is "smaller" (though I've never seen a set mark for how small they need to be, x86 is most certainly too large to qualify), but more specifically RISC instruction sets must have fixed-length instructions. x86 uses variable-length instructions, and that explicitly makes it CISC.
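
        To make the fixed- versus variable-length point concrete, here is a small illustrative C snippet. The byte values are standard x86 encodings; the array names and the printout are just for the example. Every classic (non-Thumb) 32-bit ARM instruction, by contrast, is exactly 4 bytes.

        #include <stdio.h>

        static const unsigned char x86_nop[]     = { 0x90 };                         /* nop                 : 1 byte  */
        static const unsigned char x86_mov_rr[]  = { 0x89, 0xD8 };                   /* mov eax, ebx        : 2 bytes */
        static const unsigned char x86_mov_imm[] = { 0xB8, 0x78, 0x56, 0x34, 0x12 }; /* mov eax, 0x12345678 : 5 bytes */

        int main(void)
        {
            printf("nop: %zu byte(s), mov r,r: %zu bytes, mov r,imm32: %zu bytes\n",
                   sizeof x86_nop, sizeof x86_mov_rr, sizeof x86_mov_imm);
            return 0;
        }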

        • by mikael ( 484 ) on Friday May 09, 2008 @10:53AM (#23351018)
          Ars Technica has a good article on this debate

          RISC vs. CISC - the Post-RISC Era [arstechnica.com], and Bibliography [arstechnica.com]

          In defence of RISC [embedded.com]

          The majority of software written for any chip is compiled by a relatively small number of compilers, and those compilers tend to use pretty much the same subset of instructions. The UNIX portable C compiler for example used less than 30% of the Motorola 68000 instruction set.
        • I used the RISC vs CISC example because, starting with the original Pentium, x86 processors use a very RISC-like internal architecture and microops. What you say is very true though.
    • Re: (Score:1, Informative)

      by Anonymous Coward
      They are x86 processors. Maybe you don't know that there's more to an ISA than simply how instructions are encoded?

      x86 comes from a time when transistors weren't essentially free, so while its design might have made sense in the microcoded-machine era, x86 processors now have a lot of cruft they have to deal with.

      Its performance is limited by some of this cruft. For example, x86 has a hardcoded page table structure, and its TLB has no application-specific identifiers. Context switches become much more ex
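
      To make the "hardcoded page table structure" point concrete, here is a rough C sketch of the walk that classic 32-bit (non-PAE) x86 paging wires into the hardware; read_phys() is a hypothetical helper, and permission bits, large pages and faults are ignored. Because the TLB that caches these lookups carried no address-space tags at the time, reloading CR3 on a context switch threw the cached translations away.

      #include <stdint.h>

      extern uint32_t read_phys(uint32_t paddr);   /* hypothetical physical-memory read */

      /* cr3 holds the physical base address of the page directory. */
      uint32_t translate(uint32_t cr3, uint32_t vaddr)
      {
          uint32_t dir_idx = (vaddr >> 22) & 0x3FF;   /* top 10 bits: page directory index   */
          uint32_t tbl_idx = (vaddr >> 12) & 0x3FF;   /* next 10 bits: page table index      */
          uint32_t offset  =  vaddr        & 0xFFF;   /* low 12 bits: byte within 4 KB page  */

          uint32_t pde = read_phys((cr3 & ~0xFFFu) + dir_idx * 4);   /* page directory entry */
          uint32_t pte = read_phys((pde & ~0xFFFu) + tbl_idx * 4);   /* page table entry     */

          return (pte & ~0xFFFu) | offset;            /* physical address */
      }
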
    • Re: (Score:3, Interesting)

      by renoX ( 11677 )
      >x86 processors aren't x86 processors, and haven't been for many years. They all decode the x86 instruction set to microops which they execute internally.

      Wrong, even the early x86 processors were microcoded, so all the x86 CPUs have this decoding phase, just with a varying amount of instructions decoded to microops or executed directly.
      All these CPUs are x86 CPUs, because they're *designed* to run the x86 instructions *efficiently* whatever the implementation details are, so an 80286 or Core2 Duo are both x8
    • At least when I was still in school (2001-2002), the logic to decode x86 into the native micro-ops was actually a very sizable fraction of the chip area (almost half IIRC).

      That's a large part of how Transmeta was able to get such insane power reductions with their Crusoe CPUs - they offloaded the x86-to-VLIW-micro-op translation step into software, rather than do it in circuitry. That caused a performance hit but saved a LOT of power.
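
      The general shape of doing that decode step in software rather than silicon looks roughly like the loop below. This is emphatically not Crusoe's actual code-morphing software, just a simplified, hypothetical sketch; the emit_* helpers stand in for a back end that produces the host's native (e.g. VLIW) operations.

      #include <stdint.h>
      #include <stddef.h>

      /* Hypothetical back end that emits native host operations. */
      void emit_native_nop(void);
      void emit_native_return(void);
      void emit_native_trap(uint8_t unknown_opcode);

      /* Translate a block of x86 machine code until a RET (0xC3) is seen. */
      void translate_block(const uint8_t *code, size_t len)
      {
          size_t i = 0;
          while (i < len) {
              uint8_t op = code[i++];
              switch (op) {
              case 0x90:                 /* x86 NOP            */
                  emit_native_nop();
                  break;
              case 0xC3:                 /* x86 RET ends block */
                  emit_native_return();
                  return;
              default:                   /* a real translator decodes everything else */
                  emit_native_trap(op);
                  return;
              }
          }
      }
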
  • by Bert64 ( 520050 ) <bertNO@SPAMslashdot.firenzee.com> on Friday May 09, 2008 @08:37AM (#23349068) Homepage
    It just goes to show what can be achieved in an open market with multiple competitors (intel, amd, cyrix, via, idt etc), as opposed to a stifled closed market with one party or a small number of collaborators (alpha, hppa, ia64)....

    A few years ago, x86 was utter garbage compared to virtually every other architecture out there... But the size and competitiveness of the x86 compatible market has forced companies to invest lots of money in improving their products, to the point that x86 is now ahead of most if not all of its proprietary counterparts.

    The sooner Microsoft's stranglehold on the industry is broken, the better, so that the software world can start providing the benefits we got from the x86 compatible hardware market.
    • by moderatorrater ( 1095745 ) on Friday May 09, 2008 @09:16AM (#23349582)

      The sooner Microsoft's stranglehold on the industry is broken, the better
      It's interesting that you should say that considering everything that's going on. Ubuntu's the friendliest desktop distro to come around ever as far as most people are concerned. Apple keeps gaining market share, slowly but surely eating away at Microsoft. Vista came out and it included things that Macs and Linux have had for years, including a 3d desktop and something akin to sudo. In the desktop market, the pressure's building.

      In the server market Windows has always had much more competition, and it's not getting any smaller. Solaris has ZFS which is creating a lot of buzz; I remember when WinFS sounded cool, now it sounds like it would be an incremental upgrade in the face of the ZFS revolution. It wasn't even a year ago that the story came out about the Microsoft sysadmins who had to switch from Linux to Windows Server and hated it, prompting Microsoft to look into more configuration in text files.

      In the browser market, Microsoft has finally started seeing that they can't rely on IE6 forever, and now they've got IE7 out with IE8 in the works. They're moving closer to standards compliance, although they're taking their sweet time to do it and they're not taking a direct route. Safari's generating buzz, especially on the iphone, opera's dominating the embedded market and they're still the browser of choice for those who like to feel superior, and firefox is spreading like fire as swift as a fox! (it was a stretch, I know, but I couldn't resist)

      The point is that Microsoft is feeling the pinch. Vista came out and showed everyone that they were wounded, and now all the little guys are running up and taking bites out of their markets before Microsoft can respond. They'll come back with efforts to maintain market share, but the competition is heating up and Microsoft can't (and doesn't) ignore it any longer.
      • by Bert64 ( 520050 )
        Yes the situation is improving, but microsoft are still powerful enough to make it very difficult to run anything else... Once those barriers are gone, the situation should change very rapidly.
        • Yes the situation is improving, but microsoft are still powerful enough to make it very difficult to run anything else... Once those barriers are gone, the situation should change very rapidly.
          Problem is a lot of software, especially specialty software, is Windows only. How is that going to change any other way than very slowly? Wine?
        • Unfortunately, Microsoft's strong-arm tactics of "encouraging" Windows on mobile devices (like the eeePC) are keeping them on top.

          The "shrunken" PC and the "enlarged" mobile device will converge soon and that's where the market is at.

          If linux can be on top in the growing mobile market, it will succeed. Otherwise, it will be an even longer battle.

    • by dpilot ( 134227 ) on Friday May 09, 2008 @09:34AM (#23349832) Homepage Journal
      > stifled closed market ... (alpha, hppa, ia64)....

      Into this thought we have to insert IA64, and I'm not sure how the heck we do. With any discussion of IA64, competition, and closed markets, it has to come up. IA64 was designed first and foremost to be a closed market, utterly unclonable. Though an Intel/HP joint venture, neither company owns any of the IP related to IA64. Instead the IP is owned by a separate company, and Intel and HP have a license to the IP from that company. That way, the IA64 IP is protected from any cross-licensing agreements that Intel or HP may have made, or may make in the future, since they don't have the rights to make any such agreements.

      IA64 is closed as no architecture ever has been before. But it has been practical matters preventing its widespread adoption, not the competition-proof IP bomb that is its basic nature.

      Oh yeah, IANAL.
    • Re: (Score:2, Informative)

      I agree with your point about competition being good, but technically, Intel tried to keep x86 closed and proprietary. Competition from AMD and others grew despite the spec not being open.
  • Baloney (Score:4, Informative)

    by LizardKing ( 5245 ) on Friday May 09, 2008 @08:43AM (#23349116)

    The article appears to be written from the perspective of someone who knows fuck all about the embedded market. The majority of embedded products that have something more sophisticated than an 8-bit processor are using Motorola M68K, ARM or MIPS derivatives. That's likely to stay that way, as x86 processors tend to be large, comparatively power hungry and focused on high clock speeds - especially the ones from Intel and AMD. In fact, the only vaguely embedded device I've come across with an x86 chip was using a 486 clone (from Cyrix I think).

    • by Bandman ( 86149 )
      Right, but what they're talking about is having x86 chips that are small enough and less power hungry, able to take the place of less powerful chips in embedded devices.
      • Re: (Score:3, Informative)

        by LizardKing ( 5245 )

        Yup, but the author's argument that familiarity with development tools for x86 (and what seems like an assumption that those don't exist for other architectures) is going to be appealing also shows he's clueless. There are already excellent suites of tools for embedded development, in fact most of them are the same as you'd use for desktop or server development - particularly gcc, gdb and so forth targeted for your particular architecture, along with IDEs and emulators you can run on a typical PC. If the aut

      • by Goaway ( 82658 )
        That would require completely new x86 chips. You can't just re-use desktop processors for embedded systems, there's far too much support circuitry required. Embedded processors need to be highly integrated, with lots of circuitry on-chip.

        And if you need new chips for that, why use x86 for those when you can use ARM?
    • by RupW ( 515653 ) *

      In fact, the only vaguely embedded device I've come across with an x86 chip was using a 486 clone (from Cyrix I think).
      The Madge MkIII token ring network card was built around a lower-power stripped-down x86-clone core. They chose it, IIRC, for the programming tools available. Alas I can't find any more details :-/ and the chip package just says "K2 Ringrunner".
    • Depends on what exactly the definition of embedded device is, but Soekris (http://www.soekris.com/ [soekris.com]) and a number of competitors are quite popular. Very cool products, all of them.

      I'm currently designing a system using one to monitor weather + soil conditions in my garden.
  • Because, like Robespierre, it (and the "inevitability" of Itanic) has killed off all the possible rivals. MIPS, Alpha, PA-RISC, SPARC, PPC, take your choice.
  • by bestinshow ( 985111 ) on Friday May 09, 2008 @08:49AM (#23349210)
    I think that ARM will be rather more tenacious than this guy thinks. 32nm will not be a miracle thing that somehow magically drops x86 (even Atom) down into a mobile phone friendly CPU in terms of power consumption and size (never mind the supporting chipset). Companies with years of ARM code will not suddenly decide to port to x86 on the off-chance that x86 will get more than a tiny proportion of the mobile phone market.

    An ARM core in a CPU costs under a dollar to license. Those ARM SoCs probably cost under $20 each, and they're tiny and have everything you need on them. Intel would have to provide a dozen Atom variants (in terms of features and size, not clock speeds and number of cores) to even gain the interest of this marketplace. That's why 3 billion ARM-based cores are created every year. There's a huge variety of options available in a truly competitive market.
    • by Bandman ( 86149 )
      Companies with years of ARM code will not suddenly decide to port to x86 on the off-chance that x86 will get more than a tiny proportion of the mobile phone market.

      They said the same things about Apple and moto chips.

      Of course, in that case, there was a single controlling power that told people how it would be. There's no "Steve Jobs" of the embedded market.
      • They said the same things about Apple and moto chips.

        Yes, but the article discusses processors for embedded devices. What do you think's inside an iPod or iPhone? An ARM processor.

      • by ajlitt ( 19055 )
        Writing an emulation layer is fine if you're Apple. It's not fine if you're a 10k unit/year medical equipment vendor with hundreds of thousands of dollars spent on qualifying your product for clinical use. It's not fine in the low-margin consumer electronics market where you buy most of your software components, often tied to one architecture or another, to save on development costs.
        • by 4D6963 ( 933028 )

          Writing an emulation layer is fine if you're Apple

          Actually they pretty much just bought Rosetta from whichever company independently made it. Also, if I can add my two cents on the subject, I think ARM pretty much won the embedded market. Maybe not forever, and maybe it has some serious competition out there, but I don't think anyone has to worry about their dominant position for a while.

          • by ajlitt ( 19055 )
            Apple did buy Rosetta, but I was thinking about their 68k-PPC transition for some reason. They wrote that one in-house.

            ARM is pretty much the winner in the 32-bit embedded world, though MIPS has a hold in video apps.
    • by ajlitt ( 19055 ) on Friday May 09, 2008 @09:17AM (#23349588)
      Right on. Besides, the mobile market is fueled by the further integration of peripherals into SOCs. Performance and power aside: if I were going to design a smartphone, I wouldn't want to go with a three-piece cpu and chipset, not to mention licensing and development for BIOS on a new platform. And that's before including special ASICs for functionality not built into the chipset (3D accel, radio interfaces, LCD & touch panel). And then I'd be stuck with one of the few vendors who make modern embedded x86 chips.

      If I go with ARM instead, I get a wide choice of SOCs from which I can pick and choose the built-in features (including the ones mentioned above). Bootloaders are generally included as part of the BSP for any given embedded OS, and if I don't like that there's always redboot or uboot (probably more too, I haven't been in the embedded world in a few years). If I don't want to use vendor A's product on revision 2 of the product, then I choose from one of the many remaining products out there, and my code ports over cleanly.
    • ...normal desktops or laptops that use that ARM?
    • by renoX ( 11677 )
      Nobody has said that the replacement of ARM by x86 would be done in one day, but still Intel has a huge investment in fabs, remember how Intel beat RISCs in PC/servers?

      By putting more transistors in the x86 CPUs (which gave adequate performance) at a lower price than the competitors.

      Sure, using more transistors usually means consuming more power, which is a disadvantage in the embedded market, but if Intel can come up with a better low-power process than the competitors then it's possible that x86 would beat the
      • by pslam ( 97660 )

        So in fact x86 vs ARM is a competition between Intel's fab and TSMC's fab (and the others), and usually Intel has a better process, enough to beat ARM? I don't know..

        Absolutely not. No amount of process refinement is going to push x86 to the same power consumption as ARM. Atom is about 10-100 times the power consumption per MHz of current mobile ARMs. It's orders of magnitude short.

        The mobile and low power embedded industries have long ago found that they don't need to stick to one architecture. In fact, the des

        • by renoX ( 11677 )
          > Atom is about 10-100 times the power consumption per MHz of current mobile ARMs. It's orders of magnitude short.

          That's because Intel didn't target the same power envelope for the Atom as the ARM does:
          Atom targets the OLPC and EEE: ultra-mobile PCs, not phones, that's all..
          BUT Intel has announced that they're going to build a CPU which will be in the same 'power envelope' as ARM; this will be the real competition to ARM, not the Atom.

          As you said the embedded industry is not linked to a given architecture, th
  • Sure, but... (Score:5, Insightful)

    by MostAwesomeDude ( 980382 ) on Friday May 09, 2008 @08:55AM (#23349282) Homepage
    Although it's true that we have been forced to use x86 for quite a while, and as a result have gotten quite good at using it, that doesn't mean that it is an optimal instruction set. amd64 is an ugly hack, as is PAE, and although they do work, they don't change the fact that x86 was never intended to handle 64-bit spaces.

    Consider the various POWER arches, and the ridiculously powerful ARM arch. ARM, for example, has an SIMD extension called Neon, which makes audio decoding possible at something like 15 MHz. These are very cool and potentially powerful architectures that have never been fully explored due to Microsoft's monopoly in the nineties.

    (To be fair, Microsoft couldn't have forced adoption of another arch even if they wanted to; they homogenized the market way too far.)
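
    For concreteness, here is what the Neon extension mentioned above looks like from C. The function and variable names are made up for the example; vqaddq_s16 processes eight 16-bit samples per instruction, which is the kind of data parallelism that makes audio decoding at very low clock rates plausible. Build for an ARM target with Neon enabled (e.g. -mfpu=neon).

    #include <arm_neon.h>
    #include <stdint.h>

    /* Mix two blocks of 16-bit PCM samples (n assumed to be a multiple of 8). */
    void mix_samples(int16_t *dst, const int16_t *a, const int16_t *b, int n)
    {
        for (int i = 0; i < n; i += 8) {
            int16x8_t va = vld1q_s16(a + i);         /* load 8 samples            */
            int16x8_t vb = vld1q_s16(b + i);
            vst1q_s16(dst + i, vqaddq_s16(va, vb));  /* saturating add, 8 at once */
        }
    }
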
    • Re:Sure, but... (Score:5, Interesting)

      by Moridineas ( 213502 ) on Friday May 09, 2008 @09:32AM (#23349794) Journal

      Although it's true that we have been forced to use x86 for quite a while, and as a result have gotten quite good at using it, that doesn't mean that it is an optimal instruction set. amd64 is an ugly hack, as is PAE, and although they do work, they don't change the fact that x86 was never intended to handle 64-bit spaces.
      The point is, who cares one iota if x86 is an "ugly" architecture. It gets the job done, and hands down beats most of the alternatives on what matters most of the time--speed. Saying something like "amd64 is an ugly hack" is just completely irrelevant. If you're one of the very few programmers in the world who regularly write assembly level code, you might have a valid complaint. If you're a more typical developer or an end user, the ancestral design of your CPU couldn't be less important.
      • Re: (Score:3, Insightful)

        by LizardKing ( 5245 )

        Speed comes much further down the list of priorities in most embedded applications. Size, power consumption, heat dissipation and even code size matter more - and code size is related to instruction set. Even when it comes to performance, x86 is relatively inferior compared to something like an ARM processor - it's mostly the higher clock speed and Intel's ability to build new fabs faster than anyone else that's kept them in the game.

        • I'm not arguing the case of embedded applications--though I WOULD point out my other post to this article which mentions devices like Soekris http://www.soekris.com/ [soekris.com] which are x86, powerful, and small.

          No doubt some/many embedded devices benefit greatly from non-x86. X86 is very steadily improving. Part of this is for sure because of Intel+AMD research divisions and fabs. What I'm saying is, the "why" is irrelevant.

          How can you say that x86 is relatively inferior when compared to ARM, performancewise? Show me
        • Re: (Score:3, Interesting)

          by Waffle Iron ( 339739 )

          Even when it comes to performance, x86 is relatively inferior compared to something like an ARM processor - it's mostly the higher clock speed

          I don't believe that. I got a Compaq iPaq PDA a few years back so I could play around with it. I was excited that it had a 200MHz ARM CPU, and I was expecting that it would run with similar performance to a 200MHz Pentium.

          I loaded Linux on to the thing and compiled a few test programs. I was highly disappointed to find out that the CPU actually ran with a performance level closer to a 66MHz 486. Live and learn. Well, it turns out that that's the price you pay for having almost no cache and a single ALU

            • The 200MHz Pentium would be roughly four or five times the size of the ARM found in an iPaq, so something had to give - and that was the cache and some complexity. The ARM chip also ran much cooler and with lower power consumption than the Pentium, which needed a fan and sizable heat sink. My point about an x86 processor being inferior is that it's crippled by the instruction set, which requires a lot of decoding before the RISC-like core can actually do its work. There are diagrams that show how much real estate on

            • There are diagrams that show how much real estate on a number of x86 processors from Intel is taken up by the decoder, and it's considerably more than on a processor with a more efficient and elegant instruction set.
              And the decoder also allows for efficient instruction reordering, etc. This is not nearly so 1-dimensional an issue as you make it seem!
            • Re: (Score:3, Interesting)

              by Waffle Iron ( 339739 )
              The size of the x86 decoder as a percentage of die area has been decreasing ever since the days of the 386. It's now pretty negligible. In return for that, you get a very compact instruction set coding that saves on cache space, thus cutting down on the largest single consumer of real estate on the die.

              I notice that the ARM has added a whole alternative instruction set to save on code size, too. So the idea must have some merit.

      • Re: (Score:3, Interesting)

        by Skapare ( 16644 )

        If all the effort that has been put into x86 had instead been put into another architecture that was cleaner to begin with, and designed specifically for being able to migrate to 64 bit, who's to say we wouldn't be even better off than we are now with the x86 ancestry?

        Sure, I agree, we've made x86 work well. But we are comparing a processor that has had a tremendous focus to a few alternatives that have had much less focus in terms of bringing them up to speed.

        There is what I refer to as "the x86 cost".

        • If all the effort that has been put into x86 had instead been put into another architecture that was cleaner to begin with, and designed specifically for being able to migrate to 64 bit, who's to say we wouldn't be even better off than we are now with the x86 ancestry?

          This is very possible. We've certainly seen some promising architectures that fizzled due to market share or scalability. On the other hand, a lot of designed-from-scratch, clean, forward-looking architectures have virtually never left the starting pad (think Itanium). And then there's x86, which has been consistently cheap and is consistently able to scale. Can we really be doing that much better than x86? I'm not sure.

          And on the other hand, as I mentioned in another post, the hybrid desi

    • Re:Sure, but... (Score:5, Insightful)

      by the_humeister ( 922869 ) on Friday May 09, 2008 @09:53AM (#23350128)

      These are very cool and potentially powerful architectures that have never been fully explored due to Microsoft's monopoly in the nineties.
      How exactly is an ISA monoculture Microsoft's fault? Microsoft did make Windows for multiple CPU architectures. Guess which ones people bought? The x86 version because the hardware is a lot less expensive. If there's any entity to blame, it's IBM, HP, DEC, Sun etc for not bringing down the prices of their architectures.
      • by higuita ( 129722 )
        I even agree with you that the x86 monoculture wasn't MS's fault, but MS didn't help either... they only offered Windows for Alpha, and even then it had many problems and was quickly dropped...
        Alpha didn't last either, and all the other CPUs filled their niches; the only "old" CPUs still around outside their niche markets are MIPS (mostly scaled down to embedded hardware) and PowerPC (in the Mac market, but now, with that lost, the PowerPC computer is a severely minor player).

        IMHO, only PowerPC had a chance to fight the x86, but they
    • ARM, for example, has an SIMD extension called Neon, which makes audio decoding possible at something like 15 MHz.

      What, a specialized processor is able to do a task in fewer cycles than a more general processor? You must be joking!

      The instruction set doesn't dictate how the hardware is built. I could design an ARM processor completely unsuitable for audio decoding which needs 1 GHz to do it in real time. Does that mean the ARM instruction set sucks? No, it just means that my glorious processor is not design

    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Microsoft didn't homogenize anything; it was the hardware manufacturers who did. MS wrote DOS for the IBM PC and when other people copied the PC, they licensed DOS for it. MS wrote software for other personal computers (Apple being the best example), but it was the PC clones that took over the marketplace. Indeed, you could argue that if the market did not become homogenized, home computers would not be the ubiquitous devices they are today. Computers would instead be cheap toys for hobbyists or expensive t
    • Re: (Score:3, Informative)

      by QX-Mat ( 460729 )

      ARM, for example, has an SIMD extension called Neon, which makes audio decoding possible at something like 15 MHz.

      ARM is a heavily pipelined architecture with a reduced instruction set designed to perform specific tasks like decoding. It takes a lot of silicon to allow a pipeline to decode things outside a traditional math/vector unit. Rarely is there any kind of crossover or feedback late in the execution stage, making pipelines less predictable. To make things worse, they're hard to fence, which makes pipelined operations awkward to preempt.

      I don't think it's been an abuse of position (in the vertical monopolistic se

      • Re: (Score:3, Informative)

        by argent ( 18001 )
        Pipelines are an implementation technique, not part of an architecture. Some architectures make it easier to take advantage of pipelining than others, but that doesn't mean they're pipelined architectures. Hell, the intel x86-family processors have had longer pipelines than just about anything else for at least a decade. P4 family chips had up to 33 pipeline stages, neatly beating the profligate G5's max-23-stage pipeline.

        The Core 2 still has 14 stages in its pipeline.

        As for the ARM, the XScale has 5 stages
        • by QX-Mat ( 460729 )

          Pipelines are an implementation technique, not part of an architecture

          I disagree with you somewhat. Pipelines are integral to foundation of the processing of the execution of the architecture and not simply an implementation technique.

          I'm happy to admit the modern demand on data flow is giving the effect that pipelines are a method of implementation (take vector units and the need to poll them), but if you ignore data bottlenecks, you'll still find a von Neumann CPU will be a pipelined machine with more alternative accumulators, execution paths and multipliers than a simila

          • by argent ( 18001 )
            Pipelines are integral to foundation of the processing of the execution of the architecture and not simply an implementation technique.

            I can't parse that.
    • Re:Sure, but... (Score:4, Interesting)

      by edwdig ( 47888 ) on Friday May 09, 2008 @11:34AM (#23351682)
      Although it's true that we have been forced to use x86 for quite a while, and as a result have gotten quite good at using it, that doesn't mean that it is an optimal instruction set. amd64 is an ugly hack, as is PAE, and although they do work, they don't change the fact that x86 was never intended to handle 64-bit spaces.

      x86 wasn't intended to handle 32 bits either. But when it made that jump, they actually cleaned things up and made the instruction set nicer. There are a lot fewer weird limitations on the instruction set in 32-bit mode than in 16-bit mode. The jump to 64-bit mode cleaned things up even further and actually makes things rather nice. It's not an ugly hack in any way, it's actually quite elegantly done.

      PAE, yeah, that's an ugly hack, but it's really all you can do if people are demanding > 4 GB memory on a 32 bit processor. You could do things nicer if you used segmentation, but most people developed a hatred of it due to the weird way it was implemented on the 8086 and refused to consider it ever since.
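
      For reference, a minimal sketch of what the PAE "hack" does to a 32-bit virtual address: it adds a third level (a 4-entry page directory pointer table) and widens the entries to 64 bits so that physical addresses can exceed 4 GB. The field widths below are the standard PAE split; the example address is arbitrary.

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint32_t vaddr = 0xC0ABCDEFu;               /* arbitrary example virtual address     */

          uint32_t pdpt_idx = (vaddr >> 30) & 0x3;    /*  2 bits: 4 PDPT entries               */
          uint32_t dir_idx  = (vaddr >> 21) & 0x1FF;  /*  9 bits: 512 page directory entries   */
          uint32_t tbl_idx  = (vaddr >> 12) & 0x1FF;  /*  9 bits: 512 page table entries       */
          uint32_t offset   =  vaddr        & 0xFFF;  /* 12 bits: byte within the 4 KB page    */

          printf("PDPT %u, PD %u, PT %u, offset 0x%03x\n",
                 pdpt_idx, dir_idx, tbl_idx, offset);
          return 0;
      }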

    • by jamesh ( 87723 )

      To be fair, Microsoft couldn't have forced adoption of another arch even if they wanted to

      Do you remember Windows NT on Alpha? MIPS? PowerPC? SPARC? i860/i960?

      I think Microsoft dropped everything except Alpha by sp6a though.

      So they actually did try and capture some other markets, although you're right in that they would never have gotten people off of x86.
  • I know that several of the cores in the Cell resemble PPCs, and I seem to recall an association of PPCs with one of the Xboxes.

    Is there any reason to use a PPC these days? At least, for desktop usage?
    • For desktop usage? No. You use it to be different (hence the raging Mac vs. PC wars back in the day).
    • by Detritus ( 11846 )
      They were designed into a whole bunch of digital cameras. That's an application that requires low power and high speed.
    • by pstorry ( 47673 )
      The PowerPC's desktop presence was pretty much killed when Apple switched.

      I don't think IBM makes any workstations that use the PPC chips anymore - but they still use the related POWER architecture in their higher end servers.

      So on the desktop, it's dead.

      In the device and embedded market, however, it's quite popular. It has an unusual niche "above" ARM and "below" x86, so to speak.

      This is because it has higher performance capabilities and better integration with commodity computing hardware than most ARM ch
  • you mean x86 Intelligent Design is Still Driving the Revolution. Evolution is a theory, not a fact.
    • Hmm... I think a more appropriate correction would be: "The x86 Revolution Is Still Driving The Evolution"

      Because "Revolution" is a change of ideas, an "Evolution" is a change of fact.

      Evolution, as far as passing or discarding various mutations in the parent animal onto its children goes, may be a "theory" (to some)

      But the evolution of processors is a fact, because it's entirely documented exactly what changed, how it changed, and why it changed, by hundreds if not thousands of individuals.
  • "revolution" (Score:3, Interesting)

    by nguy ( 1207026 ) on Friday May 09, 2008 @05:08PM (#23355958)
    That's "revolution" as in "spinning in place"? :-)

    Seriously, x86 these days is just a compression format for a kind of RISC processor. It's probably not a very good compression format, but that probably also doesn't make a big difference.
  • The article is rather too enthusiastic about x86 pushing into other domains.

    Let's make it clear: a modern x86 decoder is more complex than an entire simple single-issue RISC processor, and it consumes more power.
    Yes, that's a SINGLE unit in the front end of the pipeline that RISC processors do not need.

    About those "risc ops" which x86 instructions are translated into: first, they are HUGE compared to RISC instructions, ~4x as large, since they need to map the worst-case size for each element of every instruction. Those are bits that need

"Pok pok pok, P'kok!" -- Superchicken

Working...