Why Can't Intel Kill x86?
jfruh writes "As tablets and cell phones become more and more important to the computing landscape, Intel is increasingly having a hard time keeping its chips on the forefront of the industry, with x86 architecture failing to find much success in mobile. The question that arises: Why is Intel so wedded to x86 chips? Well, over the past thirty years, Intel has tried and failed to move away from the x86 architecture on multiple occasions, with each attempt undone by technical, organizational, and short-term market factors."
A hard time keeping on the forefront? (Score:5, Interesting)
Intel is still the major manufacturer of laptop, desktop, workstation and server chips...
What if they're not the main provider for cheap toys? It's mostly a matter of price anyway. Whatever they do, Intel chips will always cost significantly more than ARM chips due to their business model.
Re:A hard time keeping on the forefront? (Score:4, Insightful)
Never forget! i960
Re:A hard time keeping on the forefront? (Score:5, Interesting)
The i960 ended up mostly as a printer/embedded processor; the i860 was the one with strong vector performance, much like the GPUs in today's gaming systems (Xbox, PlayStation). 486/i860 systems were really good, and the modern reliance on GPUs is more or less the same effect.
Re: (Score:3, Insightful)
Re:A hard time keeping on the forefront? (Score:5, Insightful)
Yes. DEC Alpha, which originally ran Slashdot on a 166 MHz Multia, and the great 64-bit MIPS III chips: the R4000 and its descendants.
MS's lost opportunities w/ RISC (Score:3)
Re:MS's lost opportunities w/ RISC (Score:4, Informative)
But the behind-the-scenes politics had MS deliberately kill NT for PPC, MIPS and Alpha.
Just as surely as board-member executive machinations had HP/Compaq kill Alpha, for Intel.
They are the dark side of the force, and normally almost unobservable - like a black hole. Which also explains the sucking...
I'm watching some of these things in real-time, today. Don't worry. They cannot execute well enough to ruin what is done best in software.
Re:A hard time keeping on the forefront? (Score:5, Insightful)
Seriously? You think the BSOD thing is because of the CPU architecture, versus the operating system architecture?
Please provide more information. I think you're getting it wrong here.
The Alpha architecture was nice, but it was expensive, niche, and single-vendor. It had floating point performance that smoked the i387/i487 of the day, and it was a 64-bit architecture long before the PC was. But none of that prevents BSODs.
BSOD is because of poor driver writing, poor system architecture and crappy hardware quality. Not because of the CPU architecture.
Re:A hard time keeping on the forefront? (Score:4, Insightful)
It's more that NT for Alpha had a far more limited, and thus far better tested, set of drivers, and the machines were only mid- to high-end - no low-end questionable hardware to worry about.
The same reason Apple has a reputation for stability, despite these days being based on mostly the same components as any other x86 vendor.
Re: (Score:3)
x86 does this just fine. It didn't before the 286 (well, really the 386), and Windows before NT didn't bother with proper memory protection, but I can assure you that it does now. The problem with drivers and memory access is entirely down to the NT kernel design's choice to let drivers run in kernel space with full access to kernel memory. They could just as well have put them in ring 1 or even ring 3 with limited memory access, but there's a performance cost that, at the time, wasn't acceptable. No
Re: (Score:3)
There's no evidence that the Alpha design would have scaled to higher clock rates. DEC always overpriced the Alpha, and that's what killed it. They misunderstood how the market was changing: it was moving away from single fat machines.
Alpha economics of scale (Score:3)
Comment removed (Score:5, Informative)
Re:A hard time keeping on the forefront? (Score:5, Interesting)
Re:A hard time keeping on the forefront? (Score:5, Insightful)
"Computers long ago reached the point where they were fast enough..."
For you, maybe - but not for everyone. I work with people daily who need more computing power, and in fact would benefit even further if processors were faster even than they are today. "Fast enough" is a fallacy - there is always, and will always be, room for improvement. Folks doing media editing, 3D animation, scientific research, financial calculations, and a whole host of other things need more power from their computers - not to move away to a less capable platform.
Heck, even in games this is apparent. A lot of new games simply will not play well on processors from 2006 - that is seven years ago now, before quad-core processors were widely available! So please, don't take your one case and assume that means no one else has different needs for their computers.
Re:A hard time keeping on the forefront? (Score:5, Insightful)
The Core 2 Quad Q6600 was released in January 2007. The chip is such a workhorse that it will run any of the new games out there. The limiter is the video card's capabilities.
Re:A hard time keeping on the forefront? (Score:5, Insightful)
There's a lot more to life than gaming. A fast video card won't do a thing to speed up the work I do every day.
Re:A hard time keeping on the forefront? (Score:4, Insightful)
GPU acceleration might come in handy if you do any sort of video editing.
There is a lot more to GPUs than video and bitcoins.
The ever increasing power of commodity processors is what makes my business of inexpensive data crunching possible. 10 years ago the kinds of things we do would have required a supercomputer. Today it requires a moderately priced server-class machine.
However, I am drooling at the thought of using something like PGStrom [postgresql.org]: GPU-based database queries.
Re:A hard time keeping on the forefront? (Score:5, Interesting)
While the GPU is certainly a much bigger factor, the Q6600 is showing its age. I just handed one down to my wife after upgrading to a Core i7 Ivy Bridge. Part of the problem is that while the GPU is the more limiting factor, the CPU still plays a role, and after seven years games will tax a Q6600. The second issue is that the architecture doesn't support PCI Express 2.0 or greater. While the cards are backwards and forwards compatible, that does not mean you will get acceptable performance: if you can't move data fast enough, that new GPU won't really shine. Compatibility does not equal "takes full advantage of."
Re: (Score:3)
I noticed the same thing moving from a Q6600 to a Sandy Bridge i7 2600K: most games double their frame rates with the same GTX 480 I was using in the Q6600 system.
Yes, your GPU is a very large factor, but don't discount the CPU entirely.
Re:A hard time keeping on the forefront? (Score:4, Interesting)
Maybe I don't need a faster computer to play "Sim City 5" or whatever "games" you're talking about. But there's more to life and computing than the latest FPS.
Let me know when I can run full system compiles on my video card, or run real-world business applications on my video card. Until then (and even then), know that I spend up to an hour each day simply waiting for compiles to complete and unit tests to run. A faster machine is something I look forward to, and one would certainly cut down on the amount of time I spend waiting on my computer to be ready for me to get on with my job.
Then again, it would also likely cut down on my slashdotting as I often alt-tab over here while waiting on those other tasks to complete.
Re: (Score:3)
You can get up to twice the performance of a Q6600 from a newer processor like the 2500K. In Far Cry 2 on high at 1080p, that's the difference between very playable and too laggy to play. The benchmark also doesn't show Battlefield 3, which taxes CPUs very, very hard and benefits tremendously from modern CPUs.
Just because you haven't encountered an issue doesn't mean issues don't exist.
Re:A hard time keeping on the forefront? (Score:4, Insightful)
And since then there have been a whole lot of improvements. Sure, chips are still packaged as quad core, since that seems to be the best bang for the buck in terms of processing power, but efficiency has kept climbing, cache has increased, and speed has gone up a lot. Sure, games aren't pushing CPUs as much, but that's because there's not much left for the CPU to do. Back in the day, video cards didn't have GPUs, and the ones that were out were really strained and relied a lot on the CPU. So of course, as video cards have kept getting better, the work the CPU has to do has been decreasing.
Would you still buy a q6600 today? No.
Does that mean you don't need a new i5-3570K? That depends: do you need a new computer? You can probably get away with your Q6600 for a while. But if you were in the market for a new one, you'd probably get today's equivalent, and you would probably notice the difference in speed.
Re:A hard time keeping on the forefront? (Score:4, Funny)
Re:A hard time keeping on the forefront? (Score:4, Insightful)
Re:A hard time keeping on the forefront? (Score:4, Insightful)
I don't think things will ever reach a point of "fast enough" in an absolute sense either, but I can see where CastrTroy is coming from.
I got my first computer in 1992, and it was the most expensive computer I (well, my parents) have ever purchased. Since then I built computers from parts every year (each time becoming cheaper) until about 2001. The computer I built in 2001 lasted 2 years. The computer I built in 2003 lasted 3 years. The computer I built in 2006 lasted 6 years, until 2012.
Yes, new applications are constantly coming out that demand faster computers for personal use, but the pace seems to be slowing down to me. It's not that technology is slowing down, but that new software seems more able to run on 6-year-old hardware than it used to.
My Core 2 Duo from 2006 is now the processor for my 20 TB RAID5 NAS, and it's doing great. I didn't even really need an upgrade back in 2012; I just wanted to have a NAS and build a new computer for fun (I hadn't built one in 6 years). My new computer is definitely faster, but all I do on it is play FTL, which I can also do on my crappy laptop from 2006.
Re: (Score:2)
Except "massive processing" power includes simple things like games and video decoding. Even if you are the computing equivalent of a couch potato, there's reason to have a decent amount of computing power at your disposal.
Cheap ARM devices that are throwbacks to the 90s are very limiting in this regard. That's why there are apps like AirVideo and Plex that run on PCs for the benefit of tablets.
It's very easy to overwhelm a weak system built for the "640k is enough" crowd just by doing something inventive or
Re:A hard time keeping on the forefront? (Score:5, Insightful)
I don't see many people buying tablets or smartphones *instead* of a PC / laptop - they are usually purchased (at least in my experience) to augment them, or to fill a new and unique role. Further, mobile sales like that have picked up - but desktop and laptop sales have not yet *dropped* substantially; their growth has slowed, but unless they stop selling altogether I think there is still plenty of market for Intel's processors.
Further, the modern Atom chips from Intel are increasingly capable and viable compared to ARM - and yet they are also full x86. This gives them more flexibility in terms of what they can run, without loss of battery life... and that will only get better in the future, as the Atom line is improved.
Re:A hard time keeping on the forefront? (Score:5, Informative)
The funny thing about ARM is that back in the late 80's and early 90's, when the first ARM processors were being shipped, they were going out in desktop machines in the form of the Acorn Archimedes. These were astoundingly fast machines in their day, way quicker than any of the x86 boxes of that era. It took years for x86 to reach performance parity, let alone overtake the ARM chips of that time. I remember using an Acorn R540 workstation in 1991 that was running Acorn's UNIX implementation, and this machine was capable of emulating an x86 in software and running Windows 3 just fine, as well as running Acorn's own OS.

ARM may not be the powerhouse architecture now, but there is nothing about it that prevents it being so, just the current implementations. ARM is a really nice design, very extensible and very RISC (Acorn RISC Machine == ARM, in case you didn't know), so Intel may very well find itself in trouble this time around. The platforms that are up and coming are all on ARM now, and as demand for more power increases, the chip design can keep up. It's done it before, and those ARM workstations were serious boxes.

Heck, MS may even take another stab at Windows and do a full job this time, but even if it doesn't, so what? Chromebooks, Linux, maybe even OS X at some point in the future, and Windows becomes a has-been. It is already down to only around 20% of the machines people access the internet from, from 95% back in 2005.
Re: (Score:3)
And now x86 machines are RISC too. IIRC, all the x86 chips translate the x86 instructions into RISC instructions, with a little bit of optimization for their own internal RISC instruction set. The x86 instruction set, in some ways, simply allows for convenient optimization into those internal RISC instruction sets, plus the option to change them in the background as usage priorities change. Probably, at least in part, why x86 caught up with and surpassed ARM. Then again, you could make such a translator from ARM to arbitrary internal ins
Re:A hard time keeping on the forefront? (Score:5, Insightful)
ARM is a really nice design, very extensible and very RISC
It has a fixed instruction length and a load/store architecture, the two crucial components of RISC IMO, but I wouldn't call it "very" RISC. The more I learn about ARM, the more delirious my laughter gets as I think that this, of all RISC ISAs, is the one poised to overturn x86.
For example, it has a flags register. A flags register! Oh man, I cackled when I heard that. I must have sounded very disturbed. Which I was, since only moments before I had been envisioning life without that particular albatross hanging around my neck. But I guess x86 wasn't the only architecture built around tradeoffs for scalar, minimally-pipelined, in-order machines.
Well, whatever. The long and short of it is that the ISA doesn't matter all that much. It wasn't the ISA that made those Acorn boxes faster than x86 chips. The ISA does limit x86 in that the energy spent on decoding is non-negligible at the lowest power envelopes; in systems that are even only somewhat less constrained, it does just fine.
Oh, and on the topic of Intel killing x86: they don't really want to kill x86. x86 has done great things for them; both the patents and its generally insane difficulty to implement create huge barriers to entry for others, helping them maintain their monopoly. Their only serious move to ditch x86 in the markets where x86 was making them tons of money (as opposed to dabbling in embedded markets) was IA-64, and the whole reason for that was that AMD and Via wouldn't have licenses to make compatible chips.
Re: (Score:3)
Heh. Given the intimate relationship between the proprietary Unix vendors and their proprietary RISC chips it's not completely bonkers to call em that. I mean did a PA-RISC chip have any purpose besides running HPUX? And did HPUX have any purpose besides being the Unix you got when you bought your PA-RISC systems?
That's an honest question; I've never seen a machine that had one without the other. I'm sure someone runs NetBSD or Linux on it but as far as market presence...
ARM Mistakes (Score:4, Informative)
I don't program ARM assembly language, but it appears to me that Sophie and Roger made a few calls on the instruction set that proved awkward as the architecture evolved:
These design decisions made the best desktop CPU for 10 years, but they came at a price.
Re:A hard time keeping on the forefront? (Score:5, Insightful)
And they've also demonstrated several times that even when they can't beat their competitors on technical merits, they can still use their monopolistic footprint to stomp all over them anyway.
Don't get me wrong; Intel has a huge R&D budget, which buys them a lot of progress when they decide to focus on something that somebody else is currently better at. But sometimes they use that money to simply undercut their competitors (e.g., by selling chips at a loss) so that smaller companies have no hope of surviving: either they sell at a loss too and go out of business, or they maintain their price, nobody buys their chips, and they go out of business. Because of this, Intel has been sued by numerous companies and governments, and has been fined or settled for billions of dollars multiple times.
Re: (Score:3)
Intel has successfully demonstrated several times that they can beat their competitors at whatever they're best at.
But can it beat everyone at doing what each is best at?
I worked at Intel (and hated the management, if not the product), but internally they're a typical megacorp: a sea of bullshit cheapification initiatives, trying to offshore everything they can and using young kids for everything else. The brains are still there, but there aren't many of them, and they can't stay on top of everything. I re
Re: (Score:3)
Intel have shown themselves to be a very competitive company; but that does not give them any magic bullets. x86 is laden with legacy nonsense, and there is a very real possibility that ARM (which is meaner and leaner by design, was ahead of x86 in performance terms at one point, and which has every popular up-and-coming system supporting it ahead of x86) simply has an innate advantage that will carry it through.
Intel know it; as TFA says, they've tried to kill off x86 in favour of better architectures many
Re:A hard time keeping on the forefront? (Score:4, Funny)
Work is making things like movies, music and games.
So what is it that the other 99% of computer-using workers do for 8 hours a day?
Play Solitaire. Thus games.
Re: (Score:3, Insightful)
Not to mention they are the fastest general-purpose processors in the world right now. Yet somehow that means they aren't staying on the forefront?
Re: (Score:3, Insightful)
General purpose means not a GPU, FPGA, etc.
Re:A hard time keeping on the forefront? (Score:5, Insightful)
Intel is still the major manufacturer of laptop, desktop, workstation and server chips... What if they're not the main provider for cheap toys?
If you weren't around for IBM's reaction to the arrival of minicomputers, or for Digital Equipment's reaction to microcomputers, you wouldn't understand why I'm cleaning up the coffee I just spewed all over my desk. Let's just say that last sentence isn't exactly new.
Re: (Score:3)
I'm not convinced history will repeat itself with that one.
You're in good company. Ken Olsen would have agreed with you.
Re:A hard time keeping on the forefront? (Score:5, Informative)
Can you guys elaborate for the history challenged?
The mainframe crowd (mainly IBM, but also GE, Control Data, and the five other Dwarfs) dismissed minicomputers when they appeared as not being anything more than toys for academics (because even minis weren't in anyone's household budget).
Later, the microcomputer (the early Altairs and other 8080-based S-100 bus systems, the Apple II, the TRS-80, Sinclair, etc.) got the same response from minicomputer companies like DEC. They were, in fact, toys -- but they didn't stay toys.
With the introduction of each successive generation, the previous generation didn't die. After all, we still have mainframes today for jobs that handle godawful amounts of data and/or need to have lotsanines of uptime. What happened, though, was that their markets stopped being real growth segments. We still have minicomputers (although we tend to call them "servers" now.) And we'll always have personal computers. That doesn't mean that they'll resemble today's, just as today's mainframes don't look like those of the 60s. However, there's no reason to be sure that tomorrow's personal computers will be ubiquitous like those from ten years ago, because a lot of the tasks from 2003 (like wasting time on /.) can be done by something more convenient like a phone or a tablet.
Re: (Score:3)
IDC foresees that the industry will return to positive growth between 2014 and 2017.
Comment removed (Score:5, Interesting)
Why would intel want to? (Score:4, Insightful)
Re: (Score:3, Insightful)
Pretty great? Atom sucks balls compared with AMD's offerings. And it's not even close. Intel offers them so that AMD has some competition in that space, but Intel doesn't have any reason for them to be good as that would take away from their business of selling the more expensive processors.
Re: (Score:2)
I don't want to get into an argument here about which processor is better. My point was that the Atom works well, just as the Xeon and Core lines do.
Whether or not your favorite brand is something else shouldn't make a difference here. The point is that Intel is making piles of cash on technology they've already developed and put piles of money into. Why kill the golden goose just because cell phones use a slightly lower-powered alternative?
The article reads like:
"My server processors suck f
Re: (Score:2)
But they don't work well. Watching my mother's netbook struggle to do basic things like opening Windows Explorer, where my equivalent AMD E-350 had no trouble, indicates that it is in fact not something that works well. It certainly doesn't work as well as the Xeon or Core lines do in their respective markets.
In fact, I have a hard time thinking of anything for which Atom works pretty well. If it can't handle basic Windows 7 stuff, I'm at a bit of a loss as to what it can do very well.
This isn't about brand pre
Re:Why would intel want to? (Score:4, Informative)
Re:Why would intel want to? (Score:4, Informative)
Re: (Score:3)
Owning both an Atom-based Aspire One and an E-350 Aspire, I'd hardly call them "equivalent". Aside from the not-so-hot 1.6 GHz clock speed, the two have almost nothing in common. The Atom has Hyperthreading, the E-350 has two physical cores. The Atom relies on Intel's graphics, the E-350 has an integrated GPU that has NEVER been the bottleneck for anything I want to run. The Aspire One is limited to 2 GB of RAM (and in this implementation only takes 1.5), the E-350 machine currently has 8 GB installed.
I wou
Re: (Score:3)
You don't think that £90 price might be subsidized by Intel to a very large degree to get a foothold in the market?
Re:Why would intel want to? (Score:5, Interesting)
Why would they want to kill it off when they're still making money hand over fist with it?
Try reading "The Innovator's Dilemma."
Re:Why would intel want to? (Score:5, Insightful)
David Packard (of HP) used to say, "We're trying to put ourselves out of business every six months. Because if we don't, someone else will."
Back then, they came out with the LaserJet and DeskJet series and made tons of money. And every new printer was WAY better than the last one. But then he died and they decided that they should lock their ink cartridges and sue refillers instead of innovating. Now, companies like Brother and Canon are eating their lunch, by...wait for it...putting themselves out of business every 6 months...
Re: (Score:3)
HP's printers by and large are STILL superior to the competition; it's just the drivers that are a wreck. Of course, Canon UFR drivers aren't much better...
Why would Intel want to kill the x86? (Score:5, Insightful)
But once you lose the x86 tag Intel would just be one of many vendors. The closest thing to competition they have had for x86 has been AMD.
Re:Why would Intel want to kill the x86? (Score:4, Informative)
Do you even understand what "CISC" and "RISC" are? It doesn't just mean "less instructions and stuff." There are, in fact, other design characteristics of RISC, such as fixed-width instructions (wasted bandwidth and cache) and so on.
While I'm sure you are attempting to somehow suggest that Intel pays some kind of massive "decode" penalty for all its instructions and will always be less power efficient because of it, things are not quite so simple. You see, a RISC architecture will typically need more instructions to accomplish the same task as a CISC architecture, which has an impact on cache and bus bandwidth. Also, ARM chips still have to decode instructions; it's not a trace cache.
It's a false dichotomy to say that things are either CISC or RISC. There are various architectures that wouldn't really qualify as either, such as a VLIW architecture, for example.
So, in summary: no, technology does not "want" to evolve from CISC to RISC. And even ARM isn't really faithful to the RISC "architecture," what with supporting multiple instruction formats (Thumb, etc.) and various other instructions.
I look forward to the day when discussions of CPUs can move beyond stupid memes and flamewars rehashed from decades ago. But this is Slashdot, so I expect too much.
Re: (Score:2, Informative)
Re:Why would Intel want to kill the x86? (Score:5, Insightful)
There is a whole set of folks who apparently don't understand that the CPU doesn't have an execution engine that can process "REPNE SCASB" directly. "REPNE SCASB" gets translated into a small set of RISC-like instructions internally, and those get executed.
Or are you trying to say that RISC computers can't possibly run C, because they don't have those complex instructions? Do you think that RISC assembly can't possibly have a REPNE SCASB macro? Are you confused because the translation happens inside the CPU instead of in the assembler?
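For reference, here's roughly what REPNE SCASB computes: a minimal C sketch of the semantics (32-bit operands, direction flag clear; the struct and function names are made up for illustration), which is also more or less the loop of simple operations the internal micro-op expansion has to perform.

    #include <stdint.h>
    #include <stdio.h>

    /* Rough C model of REPNE SCASB: scan bytes at EDI for the value in AL,
     * decrementing ECX each step, stopping on a match or when ECX hits zero.
     * Inside the CPU, this "one instruction" becomes a loop of simple
     * micro-ops doing exactly these loads, compares and decrements. */
    typedef struct {
        const uint8_t *edi;   /* pointer after the scan    */
        uint32_t       ecx;   /* remaining count           */
        int            zf;    /* zero flag: 1 if AL found  */
    } scas_result;

    scas_result repne_scasb(uint8_t al, const uint8_t *edi, uint32_t ecx) {
        int zf = 0;
        while (ecx != 0) {
            uint8_t b = *edi++;     /* load, then advance EDI */
            ecx--;                  /* count down             */
            if (b == al) {          /* compare sets the flags */
                zf = 1;
                break;              /* REPNE stops when ZF=1  */
            }
        }
        scas_result r = { edi, ecx, zf };
        return r;
    }

    int main(void) {
        const uint8_t haystack[] = "find the x in here";
        scas_result r = repne_scasb('x', haystack, sizeof haystack);
        printf("found=%d, bytes left=%u\n", r.zf, (unsigned)r.ecx);
        return 0;
    }

Which is basically memchr(): a RISC compiler just emits the equivalent loop (or a library call) instead of needing one monster instruction.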
Re: (Score:2)
Don't forget Cyrix, which used to be everywhere in the '90s, and lives on in the form of VIA C7 / Nano today. It's mostly in netbooks now, but there is competition and they know it.
I like my Intel chips, but I also remember them getting busted for something akin to collusion in a lot of markets. If they had played by the rules, we probably would see a lot of alternatives.
Re: (Score:3)
But once you lose the x86 tag Intel would just be one of many vendors
Yea, just like all the other vendors firmly established on 22nm and owning their own fabs.
Wait, who are these other vendors, again? As I recall, AMD is just one of a very few comfortably at 28nm, and a lot of others are a few gens behind that. Intel is in front because, whatever problems they have, they still make the best CPUs out there and they still have the best tech.
Blame Windows? (Score:2)
They could.... (Score:2)
They tried it before and failed. (Score:2)
The reason has something to do with the billions of x86 chips currently in operation in the server/desktop/laptop market and the massive amount of legacy software written for x86. Intel tried to introduce a new, non-backwards-compatible CPU architecture before, IA-64, and it failed to catch on, with AMD's backwards-compatible 64-bit x86 variant winning out.
wtf? (Score:5, Interesting)
Re:wtf? (Score:5, Informative)
Because x86 as an ISA is a lousy one?
32-bit code still relies on 7 basic registers with dedicated functionality, when others sport 16, 32 or more general-purpose registers that can be used mostly interchangeably (most do have a "special" GPR used for things like zero and whatnot).
The 64-bit extension (x64, amd64, x86-64 or whatever you call it) fixes this by increasing the register count and turning them into general registers.
In addition, a lot of transistors are wasted on instruction decoding because x86 instructions are variable length. Great when you needed high code density, but now it's legacy cruft that serves little purpose other than to complicate instruction caches, in-flight tagging and instruction processing, since instructions require partial decoding just to figure out their length.
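To make the decode point concrete, here's a toy sketch in C of finding the next instruction boundary. The "variable-length" encoding below is a made-up mini-ISA, not real x86, but it shows why the front end has to partially decode bytes one after another before it even knows where the next instruction starts, while a fixed-width ISA gets the boundaries for free.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Fixed-width "RISC-style" fetch: the next instruction is always at
     * pc + 4, so many instructions can be fetched and decoded in parallel. */
    size_t next_pc_fixed(size_t pc) {
        return pc + 4;
    }

    /* Variable-length fetch for a made-up mini-ISA (NOT real x86 encoding):
     * optional prefix bytes, then an opcode whose top bits select a 0-, 1-
     * or 4-byte immediate.  Even this toy has to walk the bytes serially
     * before it knows where the next instruction begins. */
    size_t next_pc_variable(const uint8_t *code, size_t pc) {
        size_t len = 0;
        while (code[pc + len] == 0x66)        /* skip prefix bytes            */
            len++;
        uint8_t opcode = code[pc + len++];
        if ((opcode & 0xC0) == 0x40)          /* opcode class with 1-byte imm */
            len += 1;
        else if ((opcode & 0xC0) == 0x80)     /* opcode class with 4-byte imm */
            len += 4;
        return pc + len;                      /* length known only after decode */
    }

    int main(void) {
        /* prefix + opcode with 1-byte immediate, then a plain opcode */
        const uint8_t code[] = { 0x66, 0x41, 0x07, 0x01 };
        size_t pc = next_pc_variable(code, 0);   /* -> 3 */
        pc = next_pc_variable(code, pc);         /* -> 4 */
        printf("fixed: next at %zu, variable: decoded up to byte %zu\n",
               next_pc_fixed(0), pc);
        return 0;
    }

A real x86 length decoder has to handle far more cases than this (several prefix classes, escape opcodes, ModRM/SIB and so on), and doing that for several instructions per clock is exactly where those wasted transistors go.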
Finally, the biggest thing left over nowadays from the RISC vs. CISC wars is the load/store architecture (where instructions operate on registers only, and you have to do explicit loads/stores to access memory). A load/store architecture makes it easier on the instruction decoder, as no transistors need to be wasted figuring out whether operands have to be fetched from memory before an instruction can execute - unless it's a load/store, the operands will be in the register file.
The flip side, though, is that a lot of the tricks used to make x86 faster also benefit other architectures. Things like out-of-order execution, register renaming, and even the whole front-end/back-end split (where the front end is what's presented to the world, e.g., x86, and the back end is the internal processor itself, e.g., the custom RISC-like core on most Intel and AMD x86 parts).
After all, ARM picked up out-of-order execution in the Cortex-A series (with the A9; the A8 was still in-order). Register renaming came into play around then as well, though it really took off with the Cortex-A15. And the next-gen chips are taking superscalar to the extreme. (Heck, PowerPC had all of this first, before ARM, especially during the great x86 vs. PowerPC wars.)
The good side, though, is that x86 is a well-studied architecture, so compilers for x86 generally produce very good code and are very mature. Of course, they also have to play to the internal microarchitecture to produce better code, by taking advantage of register renaming and OoO, and knowing how to do this effectively can boost speed.
And technically, with most x86 processors using a front-end/back-end split, x86 is "dead": what we have from Intel and AMD are processors that emulate x86 in hardware.
How did God Create the Universe in 6 Days? (Score:5, Funny)
Re:How did God Create the Universe in 6 Days? (Score:5, Funny)
Simple. (Score:3)
Windows, Word, Excel, and Games.
Microsoft is only just starting to make cross-hardware-platform applications and development tools, so we have decades of legacy software that depends on the x86 architecture.
Back in the 90's, when Java was becoming popular, Microsoft put an end to that and instead gave us .NET, which runs slightly faster than Java but only works with Windows on x86; they put no effort into making it cross-platform, trying to keep a hold on the market. If apps could work across OSes and hardware platforms, then people would no longer need Windows - or, more to the point, they could choose not to use Windows.
Legacy (Score:5, Interesting)
Because the world runs on legacy software, and that legacy software runs on a legacy platform called x86. The answer is really that simple.
You can come up with a superior platform for power (ARM); it has been done, and it worked really well on phones, where there wasn't a large legacy base of software already in place. You can come up with a superior platform for 64-bit processing (Itanium); it has been done, and it worked really well in a very limited market (servers handling large databases). However, that market was too limited, and large lawsuits have been filed to try to get out of it.
Other examples abound and have been tried; the payoff to whoever could succeed would be in the billions of dollars (even the Chinese are trying their own homegrown CPU architecture). Every single one that has tried to enter the desktop market has failed, though, for the simple reason that it couldn't emulate x86.
Even Microsoft would dearly love to get out of the x86 business; the payoff in terms of killing legacy software support and selling all-new software would be huge (hello, Surface RT). I think you'll notice that sales of Microsoft RT products have been a dismal failure, with manufacturers declining to make new products as fast as they can.
Until you can build a chip that can emulate x86, support a different architecture, and do so more cost-effectively than a plain x86 chip, x86 will live. You can't kill it, Intel can't kill it, AMD can't kill it, Microsoft can't kill it, and you sure as hell can't nuke it from orbit. It's embedded in billions of computers and software programs worldwide, and that is a zombie army that you just can't fight.
Re: (Score:2)
Until you can build a chip that can emulate x86, support a different architecture, and do so more cost-effectively than a plain x86 chip, x86 will live. You can't kill it, Intel can't kill it, AMD can't kill it, Microsoft can't kill it, and you sure as hell can't nuke it from orbit. It's embedded in billions of computers and software programs worldwide, and that is a zombie army that you just can't fight.
Actually, nuking it from orbit is the only way to kill it: a good EMP pulse from high orbit would take out a lot of the installed base.
Re: (Score:2)
I thought of that, but then decided there are too many of them scattered about - including some in orbit - to ever be able to nuke them all from orbit and be sure. I'm not sure we have enough nukes worldwide to actually perform that feat.
Perhaps someone with more time can calculate how wide a surface area we could wipe out with one EMP, divide the populated surface area (with a density greater than x) by that, and come up with an answer?
Re:Legacy (Score:5, Insightful)
Until you can build a chip that can emulate x86, support a different architecture, and do so more cost-effectively than a plain x86 chip, x86 will live. You can't kill it, Intel can't kill it, AMD can't kill it, Microsoft can't kill it, and you sure as hell can't nuke it from orbit. It's embedded in billions of computers and software programs worldwide, and that is a zombie army that you just can't fight.
That, in fact, is how Apple switched processors. Twice. The PowerPC Macs were so much faster than the old 68K machines that they could emulate the old stuff as fast as the 68K machines ran it, and the native PPC software blew the older machines away. When they switched to (ugh) Intel, the PPC had fallen behind and there was a similar performance gap.
IIRC, early versions of Windows NT could run emulated x86 software at decent speed on the DEC Alpha, but that machine was too pricey for the mass market.
So, to kill the x86, we need a machine that is enough faster than the x86 to run legacy software at comparable speed, native software that's faster than anything on X86, and a price low enough for the average consumer.
Re: (Score:3, Informative)
Individually they aren't too bad. Taken all together they create real problems.
64 predicate registers (which is way too many) yields 6 bits per syllable (the Itanium term for instruction). Combine that with 128 int regs (7 bits per) and 3 register operands - you've got 27 bits before specifying any instruction bits.
The impact of the middle one (instruction steering) was also not seen until late in the design cycle. Instruction decode information got mixed in there, so that not every instruction could go
Two Words (Score:2)
ARM Processors (Score:2)
95% of the processors on tablets and smartphones are ARM processors. ARM Holdings licenses out ARM to a number of chip vendors. In theory, Intel could license ARM also from ARM Holdings and start to manufacture ARM chips. Given the difference in margins, it is unlikely they will do so until they feel there is a significant threat to the business. Even better for Intel (in terms of non-x86 revenue) would be a cross-licensing agreement with ARM that gives Intel a slice of the ARM pie. So, it is not impossible
They used to make ARM (Score:3)
Re: (Score:3)
Myth 1: Xscale was somehow the best ARM at the time.
XScale was basically inherited by Intel from the DEC StrongARM (which arguably might have been the best at its time, back in 1996), but by the time Intel bought it and rebadged it XScale, it was a pretty middling ARM implementation.
Myth 2: Intel sold it to Freescale (they actually sold it to Marvell).
where is the software? (Score:2)
The reason I use ARM on my iPhone, iPad or Android phone is that there are hundreds of thousands of applications to choose from to do different things.
Every non-x86 platform for the desktop market has had a lack of software. The OS is useless by itself.
Article is 20 years too late (Score:3)
The last attempt Intel made at a non-x86 architecture was Itanium.
In 1995.
And it wasn't an attempt to ditch x86. The Itanium was a server product from the ground up, and only partially a technology vehicle for VLIW because HP (the partner at the time) largely drove that aspect of the ISA.
This article is pointless. The RISC/CISC debate is moot. Or, more aptly: an academic exercise, free from real-world constraints.
Re: (Score:2)
And it wasn't an attempt to ditch x86. The Itanium was a server product from the ground up, and only partially a technology vehicle for VLIW because HP (the partner at the time) largely drove that aspect of the ISA.
Itanium only became 'a server product from the ground up' when it turned out to suck everywhere else. Before that the media was full of 'Itanium is going to replace x86 everywhere' articles.
No need (Score:5, Interesting)
These articles are constantly missing the point.
x86 is fine. The flaws of the architecture are mostly superficial, and even then, x86-64 cleans a lot of it up. And it's all hidden behind a compiler now anyways - and we have very good compilers.
ARM has an advantage in the ultra-low-power market because they've been designing for the ultra-low-power market. Intel has been focusing on the laptop/desktop/server market, and so their processors fit into that power bracket.
But guess what? As ARM is moving into higher-performance chips, they're sucking up more power (compare Cortex-A9 to Cortex-A15). And as Intel is moving into lower-power chips, they're losing performance (compare Atom to Core).
The ISA doesn't really affect power too much, as it turns out. It affects how easily compilers can use it, and how easily the chip can be designed, but not really power draw or thermal performance. Given the lead Intel has on fabrication, any slight disadvantage of the x86 architecture in that regard is made up for by the software library.
Three words (Score:3)
Closed Source Applications.
They're not stupid like Microsoft is; they know that closed source and multi-arch don't work together.
Funny you should ask . . . (Score:5, Interesting)
broadwell, not haswell (Score:3)
That article says that "Broadwell" will be BGA only, not Haswell. Haswell will continue to be offered as LGA. Also, the successor to "Broadwell" will apparently be offered as LGA as well, so I doubt this is the end of the line...
You may safely assume (Score:4, Interesting)
That when mankind actually launches ships to other star systems, the computers on board will be running a descendant of the x86 ISA, even if it's running 1024-bit words on superconducting molecular circuitry.
And also that the geeks who know anything about them will be bitching about the <expletive> ancient POS instruction set.
Let's Face it x86 is horrid, but here to stay (Score:3)
The reason for Intel and x86 is the IBM PC: its back-room marriage to Microsoft basically boiled down to a simple choice of who would supply the microprocessor on the most desirable terms. Intel won with the x86, not because it was better or faster, but because they agreed to IBM's terms. The rest of the history - the symbiotic (some would argue incestuous) relationship between Microsoft, Intel, and the PC manufacturers - has little to do with what would have been better technically.
The Motorola 68000 series processors were much more capable, flexible, and MUCH easier to program (at least at the assembly level). Had Motorola won, we would have enjoyed an instruction set that did not change for the life of the 68000 line. But as it was, with the x86 progression - 286, 386, Pentium and following - each generation introduced multiple instruction set alterations in an effort to keep up with the PC's expansion and performance requirements. None of this would have been necessary with the 68000 through the same progression.
Another advantage of the 68000 would have been that 64-bit floating point math would have been standard, and using 64 bits would have been seamless to the programs that used it. Operating systems would have been easily ported to 64-bit hardware, because it would have been a device driver exercise, and ONLY for devices that required 64 bits, so the migration would have been piecemeal instead of the hard cut to 64-bit we have now.
We are stuck with x86 not because it was or is the best, but because it was the chosen one: the one supported by IBM back when this all got started, and now the primary platform for Windows and Microsoft. Those past decisions were made for business reasons, not technical ones. There is a lesson in all this.. :)
So, as long as the relationship between Microsoft, Intel and the hardware builders remains intact, and the PC remains the premier computing platform, we will be stuck with the x86. The question is how long this will last. Apple tried and fell back to x86 hardware, but I'm not sure Intel is going to control the mobile computing market, which seems to be making inroads into the desktop market.
Proprietary binary software (Score:5, Insightful)
There are loads of proprietary, binary-only programs around. Some people even run OS/2 because they won't port their software to something newer. FreeDOS is around and used in production. The Alpha emulated x86 quite competently, and current x86 processors are actually RISC chips with an x86 translation unit.
Until most software is based on open standards and free components that can be trivially recompiled, all platforms will live much longer than people would like them to.
The Pentium Pro did it (Score:3)
What killed the RISC alternatives to x86 was the Pentium Pro. Before the Pentium Pro, the industry consensus was that the way to faster machines was RISC. Then Intel developed a superscalar x86 machine and beat out RISC hardware.
It was an incredible technical achievement to make an instruction set designed for zero parallelism go superscalar. All previous superscalar machines, from the IBM 7030 and CDC 6600 of the 1960s, had imposed restrictions on what programs could do to accommodate the problems of concurrency.
The Pentium Pro didn't do that. All the awful cases were handled. Exceptions were exact. Storing into code just ahead of execution was allowed. It took Intel 3,000 engineers to make that work. Nobody had ever put that level of effort into a CPU design before. The design team for a MIPS processor was about 15 people.
The Pentium Pro was designed for 32-bit code, but still ran 16-bit code. Intel thought that by the time the thing shipped in 1995, the desktop world would be 32-bit. After all, it had been 10 years since the 386 introduced 32-bit mode. The desktop world still wasn't ready. Many users ran Windows 3.1/DOS on the Pentium Pro and complained of slow performance. It ran Windows NT quite well, but NT hadn't achieved much market share yet, much to Microsoft's annoyance. So the Pentium II had more transistors devoted to 16-bit support, fixing that problem. The Pentium II and III use a modified Pentium Pro architecture. The Pentium 4 (late 2000) was the next new design.
That was the beginning of the end for RISC. RISC could get a simple CPU to one instruction per clock. Superscalar machines could beat one instruction per clock. Superscalar RISC machines had all the complexity of superscalar CISC machines, combined with a lower code density and thus higher demands on memory bandwidth.
As it turned out, x86 wasn't a bad instruction set to make go fast. RISC thinking was that having lots of registers would help. It doesn't. On a superscalar machine, commits to memory are deferred, and most stack accesses are really coming from registers within the execution units. So there's no win in having lots of user-visible registers. Also, if you have a huge number of registers like a SPARC does, time is wasted saving and restoring them. On the stack, you just move the stack pointer.
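A minimal register-renaming sketch in C (purely illustrative, not modeled on any real microarchitecture) shows the point: the few architectural register names the ISA exposes are just names, remapped onto a much larger physical register file as instructions are dispatched.

    #include <stdio.h>

    #define ARCH_REGS 8     /* e.g. the eight x86 general-purpose registers */
    #define PHYS_REGS 64    /* much larger physical register file           */

    typedef struct {
        int map[ARCH_REGS];  /* architectural -> physical mapping           */
        int next_free;       /* toy allocator: just a counter               */
    } rename_table;

    void rename_init(rename_table *rt) {
        for (int i = 0; i < ARCH_REGS; i++)
            rt->map[i] = i;                  /* identity mapping at reset   */
        rt->next_free = ARCH_REGS;
    }

    /* Rename the destination of an instruction that writes arch_reg: later
     * readers see the new physical register, while older in-flight
     * instructions keep reading the one they were mapped to. */
    int rename_dest(rename_table *rt, int arch_reg) {
        int phys = rt->next_free++ % PHYS_REGS;  /* real HW recycles freed regs */
        rt->map[arch_reg] = phys;
        return phys;
    }

    int main(void) {
        rename_table rt;
        rename_init(&rt);
        /* Two back-to-back writes to "EAX" (architectural register 0). */
        printf("first  write of EAX -> p%d\n", rename_dest(&rt, 0));
        printf("second write of EAX -> p%d\n", rename_dest(&rt, 0));
        return 0;
    }

Real hardware checkpoints the mapping for branch recovery and recycles freed physical registers, but the basic effect is as described above: two back-to-back writes to "EAX" land in different physical registers and can be in flight at once, so a small architectural register count hurts much less than RISC designers of the era expected.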
Also, RISC code tends to be noticeably larger than x86 code. Making all the instructions the same length bloats all the small ones.
The Itanium was an attempt to introduce a proprietary architecture that couldn't be cloned. The Itanium has lots of original, patented technology. It was very different from other CPUs. However, it wasn't better. Just different. Compiling fast code for it was really hard. It was a "build it and they will come" architecture, like the Cell. Except they didn't come.
Re: (Score:2)
Redmond *can* ship good software... but they're hobbled by backwards compatibility. They're not willing to eat the same poison pill that Apple did when they shifted to OSX.
Redmond's software for platforms where they've declared from the outset that they're not going to try for backwards compatibility is actually pretty good, from a software engineering standpoint. That's the xbox line and the current generation of WinMo. The user interface leaves a lot to be desired, but the actual underlying platform is pr
Re:They just can't do it, cap'n! (Score:5, Interesting)
I can name a whole shitload of things wrong with (pick a version of) Windows, none of which have anything to do with backwards compatibility, or anything else under the hood.
The problem with windows 15 years ago is that Microsoft didn't know how to innovate. All they could do is steal the good ideas of others.
The much worse problem with windows today is that they've stopped stealing good ideas, and started developing horrible ones in-house.
Microsoft is an alchemist that has discovered, after years of toil, a method for turning gold into shit.
Re: (Score:2)
Re:It will (Score:5, Informative)
What intel needs is a superior architecture that can successfully microcode intel instructions with minimal performance cost.
You mean, like x86-64?
You don't seriously think that modern Intel processors are actually CISC, right? The underlying instruction set is closer to a DEC Alpha than it is to an 80x86 processor....
Re:It will (Score:5, Insightful)
And that's really why the story question is misguided. The underlying architecture has nothing to do with the ISA; Intel can build whatever they want and throw an x86 decoder frontend on it and have a suitable x86 CPU. Killing the x86 ISA doesn't do anything for Intel or their customers.
Re: (Score:3)
Re: (Score:2)
What I don't get is why they didn't just throw a couple of x86_64 cores on the same chip as, say, their Itanium processors for workstations and servers, and an Atom-based core along with an ARM core (they owned an ARM license, as I recall) for phones and tablets. They would then have a leg up in both markets because they could still run legacy code. Best of both worlds.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
I remember one of my old friends used to have a 386 or 486 computer that ran Windows 3.1. It even had a 5 1/4 inch floppy drive. It would be interesting if Intel put a 486 into a smartphone, but then the phone's battery would be drained fast, I bet. OK, I'll stop rambling. lol
There are Intel-based Android phones. You can buy them if you want, but from Intel's viewpoint it's mostly an experiment in scale.
And Nokia did put a 386 inside a phone in the '90s: http://en.wikipedia.org/wiki/Nokia_9000_Communicator [wikipedia.org]
I'm getting old, it seems. Damn.
Re:The Curse of Reverse Compatibility (Score:4, Insightful)
They consciously made a profit-seeking management decision that shackled their ability to engineer radically.
Oh come on. Do you honestly think there have been no major innovations in Intel processors since the 8086?
they'd cut off all the old baggage that keeps them weighed down
Except all that stuff that keeps them "weighed down" is the same stuff that generates them millions in profits.