
Why Can't Intel Kill x86?

jfruh writes "As tablets and cell phones become more and more important to the computing landscape, Intel is having an increasingly hard time keeping its chips at the forefront of the industry, with the x86 architecture failing to find much success in mobile. The question that arises: why is Intel so wedded to x86 chips? Well, over the past thirty years, Intel has tried and failed to move away from the x86 architecture on multiple occasions, with each attempt undone by technical, organizational, and short-term market factors."
  • by loufoque ( 1400831 ) on Tuesday March 05, 2013 @02:38PM (#43081385)

    Intel is still the major manufacturer of laptop, desktop, workstation and server chips...
    What if they're not the main provider for cheap toys? It's mostly a matter of price anyway. Whatever they do, Intel chips will always cost significantly more than ARM chips due to their business model.

    • by Jeremiah Cornelius ( 137 ) on Tuesday March 05, 2013 @02:40PM (#43081425) Homepage Journal

      Never forget! i960

    • by CastrTroy ( 595695 ) on Tuesday March 05, 2013 @02:59PM (#43081687)
      But what happens when cheaper, more power efficient ARM chips are powerful enough for desktops and laptops? I haven't bought a new machine because of speed issues since 2006. I bought a machine that year, and it's still running. I've since bought 2 laptops which were pretty much bottom of the line. Computers long ago reached the point where they were fast enough. If I'm able to buy an ARM based computer for $100 that plugs into the back of my screen and provides internet functionality, along with the ability to watch movies, listen to music, and play a few games, why would I spend $500 on a more traditional desktop? Intel chips will probably be around for quite a while on servers and workstations, but I think it won't be long until the laptop and desktop market starts getting eroded by ARM chips.
      • by WilliamGeorge ( 816305 ) on Tuesday March 05, 2013 @03:05PM (#43081747)

        "Computers long ago reached the point where they were fast enough..."

        For you, maybe - but not for everyone. I work with people daily who need more computing power, and in fact would benefit even further if processors were faster even than they are today. "Fast enough" is a fallacy - there is always, and will always be, room for improvement. Folks doing media editing, 3D animation, scientific research, financial calculations, and a whole host of other things need more power from their computers - not to move away to a less capable platform.

        Heck, even in games this is apparent. A lot of new games simply will not play well on processors from 2006 - that is seven years ago now, before quad-core processors were widely available! So please, don't take your one case and assume that means no one else has different needs for their computers.

        • by JDAustin ( 468180 ) on Tuesday March 05, 2013 @03:16PM (#43081903)

          The Core 2 Quad 6600 (Q6600) was released in January 2007. The chip is such a workhorse that it will run any of the new games out there. The limiter is the video card.

          • by pulski ( 126566 ) on Tuesday March 05, 2013 @03:22PM (#43081981)

            There's a lot more to life than gaming. A fast video card won't do a thing to speed up the work I do every day.

          • by The Snowman ( 116231 ) on Tuesday March 05, 2013 @03:36PM (#43082207)

            The Core 2 Quad 6600 (Q6600) was released in January 2007. The chip is such a workhorse that it will run any of the new games out there. The limiter is the video card.

            While the GPU is certainly a much bigger factor, the Q6600 is showing its age. I just handed one down to my wife after upgrading to a Core i7 Ivy Bridge. Part of the problem is that while the GPU is the more limiting factor, the CPU still plays a role, and after seven years games will tax a Q6600. The second issue is that the architecture doesn't support PCI Express 2.0 or greater. While the cards are backwards and forwards compatible, this does not mean you will get acceptable performance. If you can't move data fast enough, that new GPU won't really shine. Compatibility does not equal "takes full advantage of."

            • I noticed the same thing moving from a Q6600 to a Sandy Bridge i7 2600K. Most games double their frame rates with the same GTX 480 I was using with the Q6600.

              Yes your GPU is a very large factor, but don't discount the CPU entirely.

          • by PoolOfThought ( 1492445 ) on Tuesday March 05, 2013 @03:39PM (#43082261)
            I hear what you're saying, but WilliamGeorge is right. You can't just declare that something is "fast enough" for someone else. They are probably a little more qualified to make that decision than you are.

            Maybe I don't need a faster computer to play "Sim City 5" or whatever "games" you're talking about. But there's more to life and computing than the latest FPS.

            Let me know when I can run full system compiles on my video card or run real world business applications on my video card. Until then (and even then), know that I will spend up to an hour each day simply waiting on compiles to complete and unit tests to run. A faster machine is something I look forward to, and one would certainly cut down on the amount of time I spend waiting on my computer to be ready for me to get on with my job.

            Then again, it would also likely cut down on my slashdotting as I often alt-tab over here while waiting on those other tasks to complete.
          • by Nemyst ( 1383049 )
            http://www.anandtech.com/bench/Product/53?vs=288 [anandtech.com]

            You can get up to twice the performance of a Q6600 from a newer processor like the 2500K. Far Cry 2 on high at 1080p goes from too laggy to play on the Q6600 to very playable on the 2500K. The benchmark also doesn't show Battlefield 3, which taxes CPUs very, very hard and benefits tremendously from modern CPUs.

            Just because you haven't encountered an issue doesn't mean issues don't exist.
          • by hypergreatthing ( 254983 ) on Tuesday March 05, 2013 @04:42PM (#43083209)

            And since then there have been a whole lot of improvements. Sure, chips are still packaged as quad cores, since that seems to be the best bang for the buck in terms of processing power, but efficiency has kept climbing, caches have grown, and speeds have increased a lot. Games aren't pushing CPUs as hard because there isn't much left for them to push. Back in the day, video cards didn't have real GPUs, and the ones that were out were really strained and relied a lot on the CPU; as video cards have gotten better and better, the work the CPU has to do has been decreasing.
            Would you still buy a Q6600 today? No.
            Does that mean you don't need a new i5-3570K? It depends: do you need a new computer? You can probably get away with your Q6600 for a while, but if you were in the market for a new one, you'd probably get today's equivalent, and you would probably notice the difference in speed.

        • by jitterman ( 987991 ) on Tuesday March 05, 2013 @03:45PM (#43082357)
          I'll support you on this. I look at processing power as analogous to income - the more most people have, the more ways we find to use all of it, and eventually we find we could certainly use more.
        • by TsuruchiBrian ( 2731979 ) on Tuesday March 05, 2013 @03:51PM (#43082475)

          I don't think things will ever reach a point of "fast enough" in an absolute sense either, but I can see where CastrTroy is coming from.

          I got my first computer in 1992, and it was the most expensive computer I (well, my parents) have ever purchased. Since then I built computers from parts every year (each time becoming cheaper) until about 2001. The computer I built in 2001 lasted 2 years. The computer I built in 2003 lasted 3 years. The computer I built in 2006 lasted 6 years, until 2012.

          Yes, new applications are constantly coming out that demand faster computers for personal use, but the pace seems to be slowing down to me. It's not that technology is slowing down, but that new software seems more able to run on 6-year-old hardware than it used to.

          My Core 2 Duo from 2006 is now the processor for my 20 TB RAID5 NAS, and it's doing great. I didn't even really need an upgrade back in 2012; I just wanted to have a NAS and build a new computer for fun (I hadn't built one in 6 years). My new computer is definitely faster, but all I do on it is play FTL, which I can also do on my crappy laptop from 2006.

      • by GreatDrok ( 684119 ) on Tuesday March 05, 2013 @03:09PM (#43081793) Journal

        The funny thing about ARM is that back in the late 80's and early 90's, when the first ARM processors were being shipped, they were going out in desktop machines in the form of the Acorn Archimedes. These were astoundingly fast machines in their day, way quicker than any of the x86 boxes of that era. It took years for x86 to reach performance parity, let alone overtake the ARM chips of that time. I remember using an Acorn R540 workstation in 1991 that was running Acorn's UNIX implementation; this machine was capable of emulating an x86 in software and running Windows 3 just fine, as well as running Acorn's own OS. ARM may not be the powerhouse architecture now, but there is nothing about it that prevents it being so, just current implementations. ARM is a really nice design, very extensible and very RISC (Acorn RISC Machines == ARM, in case you didn't know), so Intel may very well find itself in trouble this time around. The platforms that are all up and coming are on ARM now, and as demand for more power increases, the chip design can keep up. It's done it before, and those ARM workstations were serious boxes. Heck, MS may even take another stab at Windows on ARM and do a full job this time, but even if it doesn't, so what? Chromebooks, Linux, maybe even OS X at some point in the future, and Windows becomes a has-been. Windows is already down to only around 20% of the devices that people access the internet from, down from 95% back in 2005.

        • And now x86 machines are RISC too. IIRC all the x86 chips translate the x86 instructions into RISC instructions, with a little bit of optimization for their own RISC instruction set. The x86 instruction set, in some ways, simply allows for convenient optimization into the RISC instruction sets, and the option to change them in the background as use priorities change. Probably, at least in part, why x86 caught up and surpassed ARM. Then again, you could make such a translator from ARM to arbitrary internal ins

        • by Chris Burke ( 6130 ) on Tuesday March 05, 2013 @04:15PM (#43082807) Homepage

          ARM is a really nice design, very extensible and very RISC

          It has fixed instruction length and a load/store architecture, the two crucial components of RISC imo, but I wouldn't say "very". The more I learn about ARM, the more delirious my laughter gets as I think that this of all RISC ISAs is the one that is poised to overturn x86.

          For example, it has a flags register. A flags register! Oh man, I cackled when I heard that. I must have sounded very disturbed. Which I was, since only moments before I was envisioning life without that particular albatross hanging around my neck. But I guess x86 wasn't the only architecture built around tradeoffs for scalar minimally-pipelined in-order machines.

          Well, whatever. The long and short of it is that the ISA doesn't matter all that much. It wasn't the ISA that made those Acorn boxes faster than x86 chips. The ISA limits x86 in that the amount of energy spent decoding is non-negligible at the lowest power envelopes. Even in only somewhat constrained systems it does just fine.

          Oh, and on the topic of Intel killing x86 -- they don't really want to kill x86. x86 has done great things for them, with both the patents and its general insane difficulty to implement creating huge barriers to entry for others, helping them maintain their monopoly. Their only serious move to ditch x86 in the markets where x86 was making them tons of money (as opposed to dabbling in embedded markets) was IA64, and the whole reason for that was that then AMD and Via wouldn't have licenses to make compatible chips.

        • ARM Mistakes (Score:4, Informative)

          by emil ( 695 ) on Tuesday March 05, 2013 @04:56PM (#43083519)

          I don't program ARM assembly language, but it appears to me that Sophie and Roger made a few calls on the instruction set that proved awkward as the architecture evolved:

          • The original instruction set put the results from compare instructions into the high bits of the program counter, and thus they were not true 32-bit CPUs and could not address 4 GB of memory (see the sketch after this list). Relics of this are found in GCC with the -mapcs-26 and -mapcs-32 flags.
          • The program counter is a register like any other, and you are able to mov(e) a value to it directly, causing a branch. This makes branch prediction harder, and has been eliminated on the 64-bit version.

          These design decisions made the best desktop CPU for 10 years, but they came at a price.
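
          For reference, here's a rough sketch of the 26-bit R15 layout mentioned above. The bit positions are from memory of ARM2/ARM3-era documentation, so treat it as illustrative rather than authoritative:

          #include <stdint.h>

          /*
           * Illustrative only: on 26-bit ARM, R15 packed the program counter
           * and the status flags into one 32-bit register. Bit positions are
           * quoted from memory; this is a sketch of the idea, not a reference.
           */
          #define R15_MODE_MASK   0x00000003u  /* bits 0-1:  processor mode   */
          #define R15_PC_MASK     0x03FFFFFCu  /* bits 2-25: word-aligned PC  */
          #define R15_FIQ_DISABLE (1u << 26)   /* F: FIQ disable              */
          #define R15_IRQ_DISABLE (1u << 27)   /* I: IRQ disable              */
          #define R15_FLAG_V      (1u << 28)   /* overflow                    */
          #define R15_FLAG_C      (1u << 29)   /* carry                       */
          #define R15_FLAG_Z      (1u << 30)   /* zero                        */
          #define R15_FLAG_N      (1u << 31)   /* negative                    */

          static inline uint32_t r15_pc(uint32_t r15)   { return r15 & R15_PC_MASK; }
          static inline int      r15_zero(uint32_t r15) { return (r15 & R15_FLAG_Z) != 0; }

          With only bits 2-25 carrying the PC, the address space tops out at 64 MB, and any instruction that writes R15 can clobber (or deliberately restore) the flags -- exactly the kind of coupling that the later 32-bit PC, and the 64-bit redesign, got rid of.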

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Not to mention they are the fastest general purpose processors in the world right now. Yet somehow that means they aren't staying at the forefront?

    • by overshoot ( 39700 ) on Tuesday March 05, 2013 @03:10PM (#43081829)

      Intel is still the major manufacturer of laptop, desktop, workstation and server chips... What if they're not the main provider for cheap toys?

      If you weren't around for IBM's reaction to the arrival of minicomputers, or for Digital Equipment's reaction to microcomputers, you wouldn't understand why I'm cleaning up the coffee I just spewed all over my desk. Let's just say that last sentence isn't exactly new.

  • by colin_faber ( 1083673 ) on Tuesday March 05, 2013 @02:38PM (#43081393)
    Really? I mean the Atom line processors are pretty great. The technology is well developed both for hardware and software and Intel basically owns that market. Why would they want to kill it off when they're still making money hand over fist with it?
    • Re: (Score:3, Insightful)

      by hedwards ( 940851 )

      Pretty great? Atom sucks balls compared with AMD's offerings. And it's not even close. Intel offers them so that AMD has some competition in that space, but Intel doesn't have any reason for them to be good as that would take away from their business of selling the more expensive processors.

    • I don't want to get into an argument here about which processor is better. My point was that the Atom works well, as do the Xeon line and Core line processors.

      Whether or not your favorite brand is something else shouldn't make a difference here. The point being that Intel is making piles of cash on technology they've already developed and put piles of money into. Why kill the golden goose just because cell phones use a slightly lower powered alternative?

      The article reads like:

      "My server processors suck f

      • But they don't work well. Watching my mother's netbook struggle to do basic things like opening Windows Explorer, where my equivalent AMD E-350 had no trouble, indicates that it is in fact not something that works well. It certainly doesn't work as well as the Xeon or Core lines do in their respective markets.

        In fact, I have a hard time thinking of anything for which Atom works pretty well. If it can't handle basic Windows 7 stuff, I'm at a bit of a loss as to what it can do very well.

        This isn't about brand pre

        • by PRMan ( 959735 ) on Tuesday March 05, 2013 @03:24PM (#43082021)
          I replaced the slow HD in my Asus EeePC Netbook with an SSD and it works great now. The Atom isn't the problem. It's the dog slow hard drives they put in them.
        • by Bengie ( 1121981 ) on Tuesday March 05, 2013 @03:30PM (#43082117)
          Even Intel talks about Atom's abysmal performance. The good news is the next-gen Atoms will be bringing real performance to low power. They're going to be completely different archs.
        • by Mal-2 ( 675116 )

          Owning both an Atom-based Aspire One and an E-350 Aspire, I'd hardly call them "equivalent". Aside from the not-so-hot 1.6 GHz clock speed, the two have almost nothing in common. The Atom has Hyperthreading, the E-350 has two physical cores. The Atom relies on Intel's graphics, the E-350 has an integrated GPU that has NEVER been the bottleneck for anything I want to run. The Aspire One is limited to 2 GB of RAM (and in this implementation only takes 1.5), the E-350 machine currently has 8 GB installed.

          I wou

    • by overshoot ( 39700 ) on Tuesday March 05, 2013 @03:12PM (#43081843)

      Why would they want to kill it off when they're still making money hand over fist with it?

      Try reading "The Innovator's Dilemma."

      • by PRMan ( 959735 ) on Tuesday March 05, 2013 @03:27PM (#43082077)

        David Packard (of HP) used to say, "We're trying to put ourselves out of business every six months. Because if we don't, someone else will."

        Back then, they came out with the LaserJet and DeskJet series and made tons of money. And every new printer was WAY better than the last one. But then he died and they decided that they should lock their ink cartridges and sue refillers instead of innovating. Now, companies like Brother and Canon are eating their lunch, by...wait for it...putting themselves out of business every 6 months...

        • HP's printers by and large are STILL superior to the competition; it's just the drivers that are a wreck. Of course, Canon UFR drivers aren't much better...

  • by cait56 ( 677299 ) * on Tuesday March 05, 2013 @02:39PM (#43081419) Homepage
    This has been true for decades. Technology wants to evolve from CISC to RISC. The x86 brilliantly hid this by translating CISC to RISC superbly.
    But once you lose the x86 tag, Intel would just be one of many vendors. The closest thing to competition they have had for x86 has been AMD.
    • by Anonymous Coward on Tuesday March 05, 2013 @02:58PM (#43081673)

      Do you even understand what "CISC" and "RISC" are? It doesn't just mean "less instructions and stuff." There are, in fact, other design characteristics of "RISC" such as fixed width instructions (wasted bandwidth and cache) and so on.

      While I'm sure you are attempting to somehow suggest that Intel pays some kind of massive "decode" penalty for all its instructions and will always be less power efficient because of it, things are not quite so simple. You see, a RISC architecture will typically need more instructions to accomplish the same task as a CISC architecture. This has an impact on cache and bus bandwidth. Also, ARM chips still have to decode instructions. It's not a trace cache.

      It's a false dichotomy to say that things are either CISC or RISC. There are various architectures that wouldn't really qualify as either, such as a VLIW architecture, for example.

      So, in summary: no, technology does not "want" to evolve from CISC to RISC. And even ARM isn't really faithful to the RISC "architecture", what with supporting multiple bit formats (i.e., Thumb, etc.) and various other instructions.

      I look forward to the day when discussions of various CPUs can be advanced beyond stupid memes and rehashed flamewars from decades ago. But this is Slashdot, so I expect too much.

      • Re: (Score:2, Informative)

        Don't even bother. There's a whole contingent of "but it's RISC under the hood" folks around here who don't understand that a single accumulator architecture that has gems like "REPNE SCASB" in its instruction set will never be RISC.
        • by KingMotley ( 944240 ) on Tuesday March 05, 2013 @03:27PM (#43082083) Journal

          Apparently there is a whole set of folks who don't understand that the CPU doesn't have an execution engine that can process "REPNE SCASB". "REPNE SCASB" will get translated into a small set of RISC-like instructions internally, and those get executed.

          Or are you trying to say that RISC computers can't possibly run C, because they don't have those complex instructions too? Do you think that RISC assembly can't possibly have a REPNE SCASB macro? Are you confused because the translation happens inside the CPU instead of in the assembler?
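
          For anyone who hasn't met it, REPNE SCASB scans memory for the byte in AL -- roughly what a naive memchr does. A C sketch of the semantics (this is just the architectural behaviour with the direction flag clear, not Intel's actual internal micro-op sequence, which isn't public):

          #include <stddef.h>
          #include <stdint.h>

          /* Scan up to ecx bytes starting at edi for the byte in al,
           * stopping early on a match; edi ends one past the last byte
           * examined. A load/store RISC would simply spell this loop
           * out as separate load, compare, branch and increment ops. */
          static const uint8_t *repne_scasb(const uint8_t *edi, size_t ecx, uint8_t al)
          {
              while (ecx != 0) {
                  ecx--;
                  uint8_t b = *edi++;   /* load, post-increment EDI       */
                  if (b == al)          /* compare sets ZF                */
                      break;            /* REPNE repeats only while ZF==0 */
              }
              return edi;
          }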

    • Don't forget Cyrix, which used to be everywhere in the '90s, and lives on in the form of VIA C7 / Nano today. It's mostly in netbooks now, but there is competition and they know it.

      I like my Intel chips, but I also remember them getting busted for something akin to collusion in a lot of markets. If they had played by the rules, we probably would see a lot of alternatives.

    • But once you lose the x86 tag Intel would just be one of many vendors

      Yea, just like all the other vendors firmly established on 22nm and owning their own fabs.

      Wait, who are these other vendors, again? As I recall, AMD is just one of a very few comfortably at 28nm, and a lot of others are a few gens behind that. Intel is in front because, whatever problems they have, they still make the best CPUs out there and they still have the best tech.

  • This is /. I'm sure we can find a way to blame Microsoft or Windows, this is an easy one! /sarcasm
  • They could drop 32bit at some point, but I don't think even the legacy instruction sets hinder them much.
  • The reason has something to do with the billions of x86 chips currently in operation in the server/desktop/laptop market and the massive amount of legacy software written for x86. Intel tried to introduce a new, non-backwards-compatible CPU architecture before, IA-64, and it failed to catch on, with the backwards-compatible AMD 64-bit x86 variation winning out.

  • wtf? (Score:5, Interesting)

    by etash ( 1907284 ) on Tuesday March 05, 2013 @02:51PM (#43081581)
    The question is idiotic. It sounds more like "asking a question just to ask it". Why should Intel even kill x86? Would anyone even WANT to kill his cash cow? It sounds more like wishful thinking from the camp across the Atlantic (ARM *wink* *wink*). Sure, they would like to initiate or induce an inception of such an idea, but Intel has no reason at all to abandon such a successful platform.
    • Re:wtf? (Score:5, Informative)

      by tlhIngan ( 30335 ) <slashdot.worf@net> on Tuesday March 05, 2013 @03:42PM (#43082325)

      The question is idiotic. It sounds more like "asking a question just to ask it". Why should Intel even kill x86? Would anyone even WANT to kill his cash cow? It sounds more like wishful thinking from the camp across the Atlantic (ARM *wink* *wink*). Sure, they would like to initiate or induce an inception of such an idea, but Intel has no reason at all to abandon such a successful platform.

      Because x86 as an ISA is a lousy one?

      32-bit code still relies on 7 basic registers with dedicated functionality, when others sport 16, 32 or more general purpose registers that can be used mostly interchangeably (most do have a "special" GPR used for things like zero and whatnot).

      The 64-bit extension (x64, amd64, x86-64 or whatever you call it) fixes this by increasing the register count and turning them into general registers.

      In addition, a lot of transistors are wasted doing instruction decoding because x86 instructions are variable length. That was great when you needed high code density, but now it's legacy cruft that serves little purpose other than to complicate instruction caches, in-flight tagging, and instruction processing, since instructions require partial decoding just to figure out their length.
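
      To make that concrete, here's a toy sketch of the length problem. It covers only a handful of simple one-byte-opcode instructions (no prefixes, no ModRM/SIB, no two-byte opcodes), so it's nowhere near a real x86 length decoder, but it shows why the front end has to partially decode instruction N before it even knows where instruction N+1 starts:

      #include <stddef.h>
      #include <stdint.h>

      /* Toy x86 length decoder covering only a few easy cases;
       * returns 0 for "needs more decoding than this sketch does". */
      static size_t insn_length(const uint8_t *p)
      {
          uint8_t op = p[0];

          if (op == 0x90 || op == 0xC3)                 return 1; /* NOP, RET        */
          if (op >= 0x50 && op <= 0x5F)                 return 1; /* PUSH/POP r32    */
          if (op == 0xEB || (op >= 0x70 && op <= 0x7F)) return 2; /* short jumps     */
          if (op >= 0xB8 && op <= 0xBF)                 return 5; /* MOV r32, imm32  */
          if (op == 0xE8 || op == 0xE9)                 return 5; /* CALL/JMP rel32  */
          return 0;
      }

      A fixed-length ISA can find the next N instruction boundaries in parallel just by adding the instruction size repeatedly; x86 front ends spend real transistors (and pre-decode bits stored alongside the instruction cache) recovering that information.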

      Finally, the biggest thing nowadays left over from the RISC vs. CISC wars is the load/store architecture (where ALU operands come from registers only, and you have to do explicit loads/stores to access memory). A load/store architecture makes it easier on the instruction decoder, as no transistors need to be wasted trying to figure out whether operands need to be fetched in order to execute the instruction - unless it's a load/store, the operands will be in the register file.
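
      A simplified illustration of that point, for incrementing a counter in memory (the assembly in the comments is illustrative rather than exact compiler output):

      /* Memory-operand vs. load/store style for the same C statement. */
      void bump(int *counter)
      {
          /*
           * x86 can fold the memory access into the ALU instruction:
           *     add dword ptr [rdi], 1
           * so the decoder has to notice that an operand lives in memory
           * and schedule the load (and the store) itself.
           *
           * A load/store machine spells the steps out:
           *     ldr  r1, [r0]
           *     add  r1, r1, #1
           *     str  r1, [r0]
           * ALU operands are always registers, which keeps decode and
           * issue logic simpler.
           */
          *counter += 1;
      }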

      The flip side, though, is that a lot of the tricks used to make x86 faster also mean that other architectures benefit as well. Things like out-of-order execution, register renaming, and even the whole front end/back end split (where the front end is what's presented to the world, e.g., x86, and the back end is the internal processor itself, e.g., custom RISC on most Intel and AMD x86 parts).

      After all, ARM picked up OOO in the Cortex A series (starting with the A8). Register renaming came into play around then as well, though it really exploded in the Cortex A15. And the next gen chips are taking superscalar to the extreme. (Heck, PowerPC had all this first, before ARM. Especially during the great x86 vs. PowerPC wars).

      The good side, though, is that x86 is a well-studied architecture, so compilers and such for x86 generally produce very good code and are very mature. Of course, they also have to play into the internal microarchitecture to produce better code by taking advantage of register renaming and OOO, and knowing how to do this effectively can boost speed.
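
      One concrete example of "playing into the microarchitecture": a reduction written as a single dependency chain can't use the machine's width no matter how many rename registers exist, so compilers (and hand-tuners) split the chain into independent partial sums:

      #include <stddef.h>

      /* One long dependency chain: each add waits on the previous one. */
      long sum_naive(const long *a, size_t n)
      {
          long s = 0;
          for (size_t i = 0; i < n; i++)
              s += a[i];
          return s;
      }

      /* Two independent chains: the renamer keeps them separate, so the
       * out-of-order core can keep several adds in flight at once.
       * Compilers do this themselves when allowed to reassociate. */
      long sum_split(const long *a, size_t n)
      {
          long s0 = 0, s1 = 0;
          size_t i = 0;
          for (; i + 1 < n; i += 2) {
              s0 += a[i];
              s1 += a[i + 1];
          }
          for (; i < n; i++)
              s0 += a[i];
          return s0 + s1;
      }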

      And technically, with most x86 processors using a frontend/backend deal, x86 is "dead". What we have from Intel and AMD are processors that emulate x86 in hardware.

  • by FuzzNugget ( 2840687 ) on Tuesday March 05, 2013 @02:52PM (#43081603)
    He didn't have to deal with an installed base.
  • by jellomizer ( 103300 ) on Tuesday March 05, 2013 @02:52PM (#43081607)

    Windows, Word, Excel, and Games.

    Microsoft is only just starting to support cross-platform applications and development, so we have decades of legacy software that depends on the x86 architecture.

    Back in the '90s, when Java was becoming popular, Microsoft put an end to that and gave us .NET, which runs slightly faster than Java but only works with Windows on x86; they didn't put any effort into making it cross-platform, trying to keep a hold on the market. If apps could work across OSes and hardware platforms, then people would no longer need Windows - or, more to the point, they could choose not to use Windows.

  • Legacy (Score:5, Interesting)

    by onyxruby ( 118189 ) <onyxruby&comcast,net> on Tuesday March 05, 2013 @02:55PM (#43081637)

    Because the world runs on legacy software, and that legacy software runs on a legacy platform called x86. The answer is really that simple.

    You can come up with a superior platform for power (ARM); it has been done, and it worked really well on phones, where there wasn't a large legacy base of software already in place. You can come up with a superior platform for 64-bit processing (Itanium); it has been done, and it worked really well in a very limited market (servers that handled large databases). However, that market was too limited, and large lawsuits have been filed to try to get out of it.

    Other examples abound and have been made; the payoff to whoever could succeed would be in the billions of dollars (even the Chinese are trying their own homegrown CPU architecture). Every single one of them that has tried to enter the desktop market has failed, though, for the simple reason that it couldn't emulate x86.

    Even Microsoft would dearly love to get out of the x86 business; the payoff in terms of killing legacy software support and selling all new software would be huge (hello, Surface RT). I think you'll notice that sales of Microsoft RT products have been a dismal failure, with manufacturers declining to make new products as fast as they can.

    Until you can build a chip that can emulate x86 and support a different architecture, and do so more cost effectively than just an x86 chip, x86 will live. You can't kill it, Intel can't kill it, AMD can't kill it, Microsoft can't kill it, and you sure as hell can't nuke it from orbit. It's embedded in billions of computers and software programs worldwide, and that is a zombie army that you just can't fight.

    • Until you can build a chip that can emulate x86 and support a different architecture, and do so more cost effectively than just an x86 chip, x86 will live. You can't kill it, Intel can't kill it, AMD can't kill it, Microsoft can't kill it, and you sure as hell can't nuke it from orbit. It's embedded in billions of computers and software programs worldwide, and that is a zombie army that you just can't fight.

      Actually, nuking it from orbit is the only way to kill it; a good EMP pulse from high orbit would take out a lot of the installed base.

      • I thought of that, but then decided there are too many of them scattered about - including in orbit - to ever be able to nuke them from orbit and be sure. I'm not sure we have enough nukes worldwide to actually perform that feat.

        Perhaps someone with more time can calculate how wide of a surface area we can wipe out with an EMP, divide that by the populated surface with a density greater than x and come up with an answer?

    • Re:Legacy (Score:5, Insightful)

      by Cro Magnon ( 467622 ) on Tuesday March 05, 2013 @03:15PM (#43081893) Homepage Journal

      Until you can build a chip that can emulate x86 and support a different architecture, and do so more cost effectively than just an x86 chip, x86 will live. You can't kill it, Intel can't kill it, AMD can't kill it, Microsoft can't kill it, and you sure as hell can't nuke it from orbit. It's embedded in billions of computers and software programs worldwide, and that is a zombie army that you just can't fight.

      That, in fact, is how Apple switched processors. Twice. The PowerPC Macs were so much faster than the old 68K machines that they could emulate the old stuff as fast as the 68K machines ran it, and the native PPC software blew the older machines away. When they switched to (ugh) Intel, the PPC had fallen behind and there was a similar performance gap.

      IIRC, early versions of Windows NT could run emulated x86 software at decent speed on the DEC Alpha, but that machine was too pricey for the mass market.

      So, to kill the x86, we need a machine that is enough faster than the x86 to run legacy software at comparable speed, native software that's faster than anything on X86, and a price low enough for the average consumer.
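
      The "run legacy software at comparable speed" trick boils down to emulation: an interpreter at its simplest, with dynamic binary translation layered on top in the serious products (Apple's 68K emulator, DEC's FX!32, Rosetta). A minimal sketch of the interpreter half, for an imaginary toy ISA rather than any real 68K or x86 subset:

      #include <stdint.h>
      #include <stdio.h>

      /* Fetch-decode-execute loop for a made-up 4-instruction "ISA".
       * Real emulators add translation and caching to get usable speed. */
      enum { OP_HALT, OP_LOADI, OP_ADD, OP_PRINT };

      static void run(const uint8_t *code)
      {
          uint32_t reg[4] = {0};
          size_t pc = 0;

          for (;;) {
              uint8_t op = code[pc++];
              switch (op) {
              case OP_LOADI: {                 /* loadi rd, imm8 */
                  uint8_t rd = code[pc++];
                  reg[rd] = code[pc++];
                  break;
              }
              case OP_ADD: {                   /* add rd, rs */
                  uint8_t rd = code[pc++], rs = code[pc++];
                  reg[rd] += reg[rs];
                  break;
              }
              case OP_PRINT:                   /* print rd */
                  printf("%u\n", reg[code[pc++]]);
                  break;
              case OP_HALT:
              default:
                  return;
              }
          }
      }

      int main(void)
      {
          const uint8_t prog[] = { OP_LOADI, 0, 40, OP_LOADI, 1, 2,
                                   OP_ADD, 0, 1, OP_PRINT, 0, OP_HALT };
          run(prog);                           /* prints 42 */
          return 0;
      }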

  • Vendor Lock-in.

    ...or is that three
  • 95% of the processors on tablets and smartphones are ARM processors. ARM Holdings licenses out ARM to a number of chip vendors. In theory, Intel could license ARM also from ARM Holdings and start to manufacture ARM chips. Given the difference in margins, it is unlikely they will do so until they feel there is a significant threat to the business. Even better for Intel (in terms of non-x86 revenue) would be a cross-licensing agreement with ARM that gives Intel a slice of the ARM pie. So, it is not impossible

    • It was called Xscale and it was among the best at the time. They sold it to Freescale (I believe).
      • by slew ( 2918 )

        Myth 1: Xscale was somehow the best ARM at the time.

        Xscale was basically inherited by Intel from the DEC StrongARM (which arguably might have been the best at its time, back in 1996), but by the time Intel bought it and rebadged it Xscale, it was a pretty middling ARM implementation.

        Myth 2: Intel sold it to Freescale (they actually sold it to Marvell).

  • The reason I use ARM on my iPhone, iPad or Android phone is that there are hundreds of thousands of applications to choose from to do different things.

    Every non-x86 platform for the desktop market has had a lack of software. The OS is useless by itself.

  • by Sebastopol ( 189276 ) on Tuesday March 05, 2013 @03:05PM (#43081743) Homepage

    The last attempt Intel made at a non-x86 architecture was Itanium.

    In 1995.

    And it wasn't an attempt to ditch x86. The Itanium was a server product from the ground up, and only partially a technology vehicle for VLIW because HP (the partner at the time) largely drove that aspect of the ISA.

    This article is pointless. The RISC/CISC debate is moot. Or, more aptly: an academic exercise, free from real-world constraints.

    • by 0123456 ( 636235 )

      And it wasn't an attempt to ditch x86. The Itanium was a server product from the ground up, and only partially a technology vehicle for VLIW because HP (the partner at the time) largely drove that aspect of the ISA.

      Itanium only became 'a server product from the ground up' when it turned out to suck everywhere else. Before that the media was full of 'Itanium is going to replace x86 everywhere' articles.

  • No need (Score:5, Interesting)

    by gman003 ( 1693318 ) on Tuesday March 05, 2013 @03:11PM (#43081841)

    These articles are constantly missing the point.

    x86 is fine. The flaws of the architecture are mostly superficial, and even then, x86-64 cleans a lot of it up. And it's all hidden behind a compiler now anyways - and we have very good compilers.

    ARM has an advantage in the ultra-low-power market because they've been designing for the ultra-low-power market. Intel has been focusing on the laptop/desktop/server market, and so their processors fit into that power bracket.

    But guess what? As ARM is moving into higher-performance chips, they're sucking up more power (compare Cortex-A9 to Cortex-A15). And as Intel is moving into lower-power chips, they're losing performance (compare Atom to Core).

    The ISA doesn't really affect power too much, as it turns out. It affects how easily compilers can use it, and how easily the chip can be designed, but not really power draw or thermal performance. Given the lead Intel has on fabrication, any slight disadvantage of the x86 architecture in that regard is made up for by the software library.

  • by GameboyRMH ( 1153867 ) <`gameboyrmh' `at' `gmail.com'> on Tuesday March 05, 2013 @03:12PM (#43081851) Journal

    Closed-Source Applications.

    They're not stupid like Microsoft is, they know that closed source and multi-arch don't work together.

  • by Inkidu ( 2838387 ) on Tuesday March 05, 2013 @03:15PM (#43081885)
    It's already a bad day for Redmondians. Haswell, slated to be introduced in 2014, will mostly offer the BGA-packaged Broadwell "system-on-a-chip" CPU, pre-soldered onto an Intel motherboard like Atom chips are now. There will be nothing to upgrade - in effect this will be a device in PC clothing. There are rumors of high-end LGA packaging, but the upgrade possibilities will be limited to a few paltry offerings. No one will be making consumer-upgradable parts anymore. Another way of saying it: it will become cheaper for Dell just to replace the whole "PC-thingy" than to repair it. Yet another way: Intel's Ivy Bridge product cycle ends in 2014, and its successor, Haswell, will not have a desktop chip. The English story: http://semiaccurate.com/2012/11/26/intel-kills-off-the-desktop-pcs-go-with-it/#.UTU5hjZMn2A [semiaccurate.com]

    As tablets and smartphones replace desktops and notebooks, Intel, Microsoft and the desktop manufacturers struggle for market share. The end of the desktop in 2014 does not mean the demise of the notebook, or of Microsoft, or of the support jobs they bring, but it does foreshadow their end. This time it's a question of what and who will be left behind. Intel's market-based decision will shrink the computer field in general, and IT departments everywhere. With a paradigm shift away from a smart-client/server model to a dumb-portal/cloud one, the computer becomes just another office supply, and the IT department becomes marginalized.

    Once in the cloud, other services seem more viable. Virtual storage and backup deals mean goodbye to lots of servers, and to that backup guy too. No longer dependent on the IT department, HR, Customer Service - hey, every department can find alternatives in the cloud. And those alternatives will be supplied by the same people who make the software installed on their computers now. By putting Office online, Microsoft separates its biggest revenue stream from its troubled operating system. Microsoft will want to make up for the loss of revenue: they will "incentivise" their cloud products, making services cheaper than anything an IT department can provide. The stakes are even higher because Microsoft has to move into the cloud, which is Google's home turf. Google enters the market meeting Microsoft head on, feature-for-feature and with a better price - for now. Both competitors want a piece of the IT department, especially in these changing times, so count on predatory pricing to make the move even cheaper. These giants are in a fight for their corporate lives, so don't think for one moment they'll do anything that's not in their financial interest. Every perk will have its price. The original story: http://translate.google.com/translate?sl=ja&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&u=http%3A%2F%2Fpc.watch.impress.co.jp%2Fdocs%2Fcolumn%2Fubiq%2F20121122_574440.html [google.com]
    • That article says that "Broadwell" will be BGA only, not Haswell. Haswell will continue to be offered as LGA. Also, the successor to "Broadwell" will apparently be offered as LGA as well, so I doubt this is the end of the line...

  • by overshoot ( 39700 ) on Tuesday March 05, 2013 @03:27PM (#43082081)

    That when Mankind actually launches ships to other star systems, the computers on board will be running a descendant of the x86 ISA, even if it's running 1024-bit words on superconducting molecular circuitry.

    And also that the geeks who know anything about them will be bitching about the <expletive> ancient POS instruction set.

  • by bobbied ( 2522392 ) on Tuesday March 05, 2013 @04:45PM (#43083283)

    The reason for Intel and x86 is the IBM PC and its back-room marriage to Microsoft, which basically boiled down to a simple choice of who would supply the microprocessor on the most desirable terms. Intel won with its x86, not because it was better or faster, but because it agreed to IBM's terms. The rest of the history - the symbiotic (some would argue incestuous) relationship between Microsoft, Intel, and PC manufacturers - has little to do with what would have been better technically.

    The Motorola 68000 series processors were much more capable, flexible and MUCH easier to program (at least at the assembly level). Had Motorola won, we would have enjoyed an instruction set that did not change for the life of the 68000 processor. But as it was, with the x86 progression - 286, 386, Pentium and following - each step introduced multiple instruction set alterations in an effort to keep up with the PC's expansion and performance requirements. None of this would have been necessary with the 68000 through the same progression.

    Another advantage of the 68000 would have been that 64-bit floating point math would have been standard, and using 64 bits would have been seamless to the programs that used it. Operating systems would have been easily ported to 64-bit hardware, because it would have been a device driver exercise, and only for devices that required 64 bits, so the migration would have been piecemeal instead of the hard cut to 64-bit we have now.

    We are stuck with x86 not because it was or is the best, but because it was the chosen one. x86 is the one IBM backed when this all got started, and now it's the primary platform for Windows and Microsoft. These past decisions were made for business reasons and not technical ones. There is a lesson in all this.. :)

    So.. As long as the relationship between Microsoft, Intel and the hardware builders remains intact, and the PC remains the premier computing platform, we will be stuck with x86. The question is how long will this last? Apple tried and fell back to x86 hardware, but I'm not sure Intel is going to control the mobile computing market, which seems to be making inroads into the desktop market.

  • by leandrod ( 17766 ) <l@dutras . o rg> on Tuesday March 05, 2013 @05:15PM (#43083815) Homepage Journal

    There are loads of proprietary, binary-only programs around. Some people even run OS/2 because they won't port their software to something newer. FreeDOS is around and used in production. The Alpha emulated x86 quite competently, and current x86 processors are actually RISC chips with an x86 translation unit.

    Until most software is based on open standards and free components that can be trivially recompiled, all platforms will live much longer than people would like them to.

  • by Animats ( 122034 ) on Tuesday March 05, 2013 @05:52PM (#43084357) Homepage

    What killed the RISC alternatives to x86 was the Pentium Pro. Before the Pentium Pro, the industry consensus was that the way to faster machines was RISC. Then Intel developed a superscalar x86 machine and beat out RISC hardware.

    It was an incredible technical achievement to make an instruction set designed for zero parallelism go superscalar. All previous superscalar machines, from the IBM 7030 and CDC 6600 of the 1960s, had imposed restrictions on what programs could do to accommodate the problems of concurrency.

    The Pentium Pro didn't do that. All the awful cases were handled. Exceptions were exact. Storing into code just ahead of execution was allowed. It took Intel 3,000 engineers to make that work. Nobody had ever put that level of effort into a CPU design before. The design team for a MIPS processor was about 15 people.

    The Pentium Pro was designed for 32-bit code, but still ran 16-bit code. Intel thought that by the time the thing shipped in 1995, the desktop world would be 32-bit. After all, it had been 10 years since the 386 introduced 32-bit mode. The desktop world still wasn't ready. Many users ran Windows 3.1/DOS on the Pentium Pro and complained of slow performance. It ran Windows NT quite well, but NT hadn't achieved much market share yet, much to Microsoft's annoyance. So the Pentium II had more transistors devoted to 16-bit support, fixing that problem. The Pentium II and III used a modified Pentium Pro architecture. The Pentium 4 (late 2000) was the next new design.

    That was the beginning of the end for RISC. RISC could get a simple CPU to one instruction per clock. Superscalar machines could beat one instruction per clock. Superscalar RISC machines had all the complexity of superscalar CISC machines, combined with a lower code density and thus higher demands on memory bandwidth.

    As it turned out, x86 wasn't a bad instruction set to make go fast. RISC thinking was that having lots of registers would help. It doesn't. On a superscalar machine, commits to memory are deferred, and most stack accesses are really coming from registers within the execution units. So there's no win in having lots of user-visible registers. Also, if you have a huge number of registers like a SPARC does, time is wasted saving and restoring them. On the stack, you just move the stack pointer.

    Also, RISC code is about twice as large as x86 code. Making all the instructions the same length bloats all the small ones.

    The Itanium was an attempt to introduce a proprietary architecture that couldn't be cloned. The Itanium has lots of original, patented technology. It was very different from other CPUs. However, it wasn't better. Just different. Compiling fast code for it was really hard. It was a "build it and they will come" architecture, like the Cell. Except they didn't come.
