Intel Hardware

Why Can't Intel Kill x86? 605

jfruh writes "As tablets and cell phones become more and more important to the computing landscape, Intel is increasingly having a hard time keeping its chips on the forefront of the industry, with x86 architecture failing to find much success in mobile. The question that arises: Why is Intel so wedded to x86 chips? Well, over the past thirty years, Intel has tried and failed to move away from the x86 architecture on multiple occasions, with each attempt undone by technical, organizational, and short-term market factors."
This discussion has been archived. No new comments can be posted.

  • Re:It will (Score:5, Informative)

    by realityimpaired ( 1668397 ) on Tuesday March 05, 2013 @02:49PM (#43081567)

    What Intel needs is a superior architecture that can successfully microcode Intel instructions with minimal performance cost.

    You mean, like x86-64?

    You don't seriously think that modern Intel processors are actually CISC, right? The underlying instruction set is closer to a DEC Alpha than it is to an 80x86 processor....

  • by Anonymous Coward on Tuesday March 05, 2013 @02:58PM (#43081673)

    Do you even understand what "CISC" and "RISC" are? It doesn't just mean "fewer instructions and stuff." There are, in fact, other design characteristics of "RISC", such as fixed-width instructions (which waste bandwidth and cache) and so on.

    While I'm sure you are attempting to somehow suggest that Intel pays some kind of massive "decode" penalty for all its instructions and will always be less power efficient because of it, things are not quite so simple. You see, a RISC architecture will typically need more instructions to accomplish the same task as a CISC architecture, which has an impact on cache and bus bandwidth (a toy comparison follows this comment). Also, ARM chips still have to decode instructions. It's not a trace cache.

    It's a false dichotomy to say that things are either CISC or RISC. There are various architectures that wouldn't really qualify as either, such as a VLIW architecture, for example.

    So, in summary: no, technology does not "want" to evolve from CISC to RISC. And even ARM isn't really faithful to the RISC "architecture", what with supporting multiple instruction widths (i.e., Thumb, etc.) and various other instructions.

    I look forward to the day when discussions of various CPUs can advance beyond stupid memes and rehashed flamewars from decades ago. But this is Slashdot, so I expect too much.
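
A minimal sketch of the code-density trade-off described above: the same "add a register into a memory word" done as one CISC-style instruction with a memory operand versus a load/add/store sequence on a fixed-width load/store machine. The mnemonics and byte counts below are made up for illustration and are not real x86 or ARM encodings.

```python
# Toy illustration of the code-density trade-off: one CISC-style instruction
# with a memory operand vs. a load/add/store sequence on a fixed-width
# load/store ISA. Mnemonics and byte counts are invented for the sketch.

cisc_sequence = [
    ("add [mem], reg", 6),    # one variable-length instruction
]

risc_sequence = [
    ("load  r1, [mem]", 4),   # fixed 32-bit instructions
    ("add   r1, r1, r2", 4),
    ("store r1, [mem]", 4),
]

for name, seq in (("CISC-style", cisc_sequence), ("load/store RISC", risc_sequence)):
    count = len(seq)
    total = sum(size for _, size in seq)
    print(f"{name}: {count} instruction(s), {total} bytes through the I-cache")
```

The CISC-style version pulls fewer bytes through the instruction cache for the same work, which is the cache and bus-bandwidth effect the comment points at.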

  • by GreatDrok ( 684119 ) on Tuesday March 05, 2013 @03:09PM (#43081793) Journal

    The funny thing about ARM is that back in the late '80s and early '90s, when the first ARM processors were being shipped, they were going out in desktop machines in the form of the Acorn Archimedes. These were astoundingly fast machines in their day, way quicker than any of the x86 boxes of that era, and it took years for x86 to reach performance parity, let alone overtake the ARM chips of that time. I remember using an Acorn R540 workstation in 1991 that was running Acorn's UNIX implementation; this machine was capable of emulating an x86 in software and running Windows 3 just fine, as well as running Acorn's own OS.

    ARM may not be the powerhouse architecture now, but there is nothing about it that prevents it from being so, just the current implementations. ARM is a really nice design, very extensible and very RISC (Acorn RISC Machines == ARM, in case you didn't know), so Intel may very well find itself in trouble this time around. The platforms that are up and coming are all on ARM now, and as demand for more power increases, the chip design can keep up. It's done it before, and those ARM workstations were serious boxes.

    Heck, MS may even take another stab at Windows on ARM and do a full job this time, but even if it doesn't, so what? Chromebooks, Linux, maybe even OS X at some point in the future, and Windows becomes a has-been. It is already down to only around 20% of the machines that people access the internet from, from 95% back in 2005.

  • by NoNonAlphaCharsHere ( 2201864 ) on Tuesday March 05, 2013 @03:10PM (#43081823)
    Don't even bother. There's a whole contingent of "but it's RISC under the hood" folks around here who don't understand that a single accumulator architecture that has gems like "REPNE SCASB" in its instruction set will never be RISC.
  • by PRMan ( 959735 ) on Tuesday March 05, 2013 @03:24PM (#43082021)
    I replaced the slow HD in my Asus EeePC Netbook with an SSD and it works great now. The Atom isn't the problem. It's the dog slow hard drives they put in them.
  • by Bengie ( 1121981 ) on Tuesday March 05, 2013 @03:30PM (#43082117)
    Even Intel talks about Atom's abysmal performance. The good news is the next-gen Atoms will be bringing real performance to low power. They're going to be completely different archs.
  • Re:wtf? (Score:5, Informative)

    by tlhIngan ( 30335 ) <slashdot.worf@net> on Tuesday March 05, 2013 @03:42PM (#43082325)

    The question is idiotic. It sounds more like "asking a question just to ask it". Why should Intel even kill x86? Would anyone even WANT to kill their cash cow? It sounds more like wishful thinking from the camp across the Atlantic (ARM *wink* *wink*). Sure, they would like to initiate or induce an inception of such an idea, but Intel has no reason at all to abandon such a successful platform.

    Because x86 as an ISA is a lousy one?

    32-bit code still relies on 7 basic registers with dedicated functionality, while others sport 16, 32 or more general-purpose registers that can be used mostly interchangeably (most do have a "special" GPR used for things like zero and whatnot).

    The 64-bit extension (x64, amd64, x86-64 or whatever you call it) fixes this by increasing the register count and turning them into general registers.

    In addition, a lot of transistors are wasted doing instruction decoding because x86 instructions are variable length. That was great when you needed high code density, but now it's legacy cruft that serves little purpose other than to complicate instruction caches, in-flight tagging, and instruction processing, since instructions require partial decoding just to figure out their length (a toy sketch of this follows the comment).

    Finally, the biggest thing left over nowadays from the RISC vs. CISC wars is the load/store architecture (where operands work on registers only, while you have to do loads/stores to access memory). A load/store architecture makes it easier on the instruction decoder, as no transistors need to be wasted trying to figure out whether operands need to be fetched from memory in order to execute the instruction - unless it's a load/store, the operands will be in the register file.

    The flip side, though, is that a lot of the tricks used to make x86 faster also benefit other architectures. Things like out-of-order execution, register renaming, and even the whole front end/back end split (where the front end is what's presented to the world, e.g., x86, and the back end is the internal processor itself, e.g., custom RISC-like micro-ops on most Intel and AMD x86 parts).

    After all, ARM picked up OoO in the Cortex-A series (starting with the A9). Register renaming came into play around then as well, though it really exploded in the Cortex-A15. And the next-gen chips are taking superscalar to the extreme. (Heck, PowerPC had all this first, before ARM, especially during the great x86 vs. PowerPC wars.)

    The good side, though, is that x86 is a well-studied architecture, so compilers and such for x86 generally produce very good code and are very mature. Of course, they also have to play to the internal microarchitecture to produce better code, by taking advantage of register renaming and OoO, and knowing how to do this effectively can boost speed.

    And technically, with most x86 processors using a frontend/backend deal, x86 is "dead". What we have from Intel and AMD are processors that emulate x86 in hardware.
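
A toy sketch of the decode point in the comment above: with fixed-width instructions, every instruction boundary falls straight out of the start address, while variable-length instructions have to be partially decoded before the next one can even be located. The "length lives in the first byte" rule below is invented for the sketch and has nothing to do with real x86 encodings.

```python
# Toy sketch of finding instruction boundaries. The "first byte encodes the
# total length" scheme is invented for this example and is NOT real x86.

FIXED_WIDTH = 4  # bytes, as in classic fixed-width RISC encodings

def fixed_boundaries(num_instructions, base=0):
    # The start of instruction i is a pure function of i: decoders can
    # attack many instructions in parallel without looking at the bytes.
    return [base + i * FIXED_WIDTH for i in range(num_instructions)]

def variable_boundaries(code):
    # Each instruction must be partially decoded (here: read its length
    # byte) before the start of the NEXT instruction is even known.
    boundaries, offset = [], 0
    while offset < len(code):
        boundaries.append(offset)
        offset += code[offset]   # length lives in the first byte (toy rule)
    return boundaries

toy_code = bytes([2, 0x90, 5, 1, 2, 3, 4, 3, 7, 8, 2, 0xC3])  # lengths 2, 5, 3, 2

print("fixed-width starts:    ", fixed_boundaries(4))             # [0, 4, 8, 12]
print("variable-length starts:", variable_boundaries(toy_code))   # [0, 2, 7, 10]
```

Real front ends hide some of this by pre-marking likely instruction boundaries ahead of the decoders, which is one of the places the extra decode transistors mentioned above go.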

  • Re:exactly (Score:3, Informative)

    by eabrek ( 880144 ) <eabrek@bigfoot.com> on Tuesday March 05, 2013 @03:54PM (#43082523)

    Individually they aren't too bad. Taken all together they create real problems.

    64 predicate registers (which is way too many) yield 6 bits per syllable (the Itanium term for an instruction). Combine that with 128 integer registers (7 bits each) and 3 register operands, and you've got 27 bits spent before specifying any opcode bits (the arithmetic is sketched after this comment).

    The impact of the middle one (instruction steering) also wasn't seen until late in the design cycle. Instruction decode information got mixed in there, so not every instruction could go into every slot. This led to a large number of NOPs being inserted into the instruction stream. The final code density for Itanium was significantly lower than RISC (and way under x86).

    These factors also work against out-of-order implementations - but there were organizational impediments to that happening anyway...
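
A quick back-of-the-envelope check of that bit budget. The 41-bit slot size (three 41-bit slots plus a 5-bit template per 128-bit bundle) is the commonly described IA-64 encoding; treat the exact figure as an assumption of this sketch.

```python
# Back-of-the-envelope check of the bit budget in the comment above.

PREDICATE_REGS = 64          # naming one takes 6 bits
INT_REGS = 128               # naming one takes 7 bits
REG_OPERANDS = 3             # dest + two sources

predicate_bits = (PREDICATE_REGS - 1).bit_length()          # 6
operand_bits = REG_OPERANDS * (INT_REGS - 1).bit_length()   # 3 * 7 = 21

SLOT_BITS = 41               # assumed IA-64 slot width
used = predicate_bits + operand_bits                        # 27
print(f"{used} bits name registers; {SLOT_BITS - used} bits remain for the opcode etc.")

assert 3 * SLOT_BITS + 5 == 128   # three slots plus a 5-bit template fill a bundle
```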

  • by overshoot ( 39700 ) on Tuesday March 05, 2013 @04:37PM (#43083131)

    Can you guys elaborate for the history challenged?

    The mainframe crowd (mainly IBM, but also GE, Control Data, and the five other Dwarfs) dismissed minicomputers, when they appeared, as nothing more than toys for academics (because even minis weren't in anyone's household budget).

    Later, the microcomputer (early Altairs and other 8080-based systems with the S-100 bus, the Apple II, the TRS-80, Sinclair, etc.) got the same response from minicomputer companies like DEC. They were, in fact, toys -- but they didn't stay toys.

    With the introduction of each successive generation, the previous generation didn't die. After all, we still have mainframes today for jobs that handle godawful amounts of data and/or need to have lotsanines of uptime. What happened, though, was that their markets stopped being real growth segments. We still have minicomputers (although we tend to call them "servers" now.) And we'll always have personal computers. That doesn't mean that they'll resemble today's, just as today's mainframes don't look like those of the 60s. However, there's no reason to be sure that tomorrow's personal computers will be ubiquitous like those from ten years ago, because a lot of the tasks from 2003 (like wasting time on /.) can be done by something more convenient like a phone or a tablet.

  • ARM Mistakes (Score:4, Informative)

    by emil ( 695 ) on Tuesday March 05, 2013 @04:56PM (#43083519)

    I don't program ARM assembly language, but it appears to me that Sophie and Roger made a few calls on the instruction set that proved awkward as the architecture evolved:

    • The original instruction set put the results of compare instructions into the high bits of the program counter, and thus those chips were not full 32-bit CPUs and could not address 4 GB of memory (the arithmetic is sketched after this comment). Relics of this are found in GCC in the -mapcs-26 and -mapcs-32 flags.
    • The program counter is a register like any other, and you are able to mov(e) a value to it directly, causing a branch. This makes branch prediction harder, and has been eliminated on the 64-bit version.

    These design decisions made the best desktop CPU for 10 years, but they came at a price.
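
A rough sketch of the address-space consequence of the first point in the list above. The field layout (a word-aligned PC in bits 2..25 of R15, with flags and mode bits in the remaining bits) follows the usual description of the early 26-bit ARMs; treat it as an assumption rather than a datasheet quote.

```python
# Rough sketch of the 26-bit address space that falls out of packing the
# status flags into R15 on the early ARMs (assumed layout, not a datasheet).

PC_FIELD_BITS = 24        # bits 2..25 of R15 hold the word-aligned PC
WORD_ALIGN_BITS = 2       # the two low address bits are implicitly zero

addressable = 1 << (PC_FIELD_BITS + WORD_ALIGN_BITS)   # 2**26 bytes
print(f"addressable code space: {addressable // 2**20} MB "
      f"(vs. {2**32 // 2**20} MB for a full 32-bit program counter)")
```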

  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Tuesday March 05, 2013 @05:05PM (#43083677)
    Comment removed based on user account deletion
  • by trifish ( 826353 ) on Tuesday March 05, 2013 @05:15PM (#43083823)

    There is a limit to miniaturization. If you don't realize that, then pause for a moment and think how hard it would be to browse the internet using a device that is 1 inch x 1 inch. There is a limit, believe me.

    Hence, your analogy with mainframes and minis is flawed.

  • by Jeremiah Cornelius ( 137 ) on Wednesday March 06, 2013 @02:29PM (#43095417) Homepage Journal

    But the behind-the-scenes politics had MS deliberately kill NT for PPC, MIPS and Alpha.

    Just as surely as board-member executive machinations had HP/Compaq kill Alpha, for Intel.

    They are the dark side of the force, and normally almost unobservable - like a black hole. Which also explains the sucking...

    I'm watching some of these things in real-time, today. Don't worry. They cannot execute well enough to ruin what is done best in software.
