Intel Hardware

Happy Birthday! X86 Turns 30 Years Old 362

Posted by CmdrTaco
from the break-out-the-chips-and-dips dept.
javipas writes "On June 8th, 1978, Intel introduced its first 16-bit microprocessor, the 8086. Intel's slogan at the time was "the dawn of a new era", and they probably didn't know how right they were. Thirty years later we've seen the evolution of PC architectures based on the x86 instruction set, which has been at the core of Intel, AMD and VIA processors. Legendary chips such as the Intel 80386, 80486, Pentium and AMD Athlon owe a great debt to that original processor, and as was recently pointed out on Slashdot, x86 evolution still leads the revolution. Happy birthday and long live x86."
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Thursday June 05, 2008 @09:07AM (#23667643)
    Buggy processors, flawed floating point, and backward integers. Better solutions have come and gone, always squashed by the INTEL 800 pound gorilla.

    Yep, lots to be happy about. Long live mediocrity.

  • by Hankapobe (1290722) on Thursday June 05, 2008 @09:08AM (#23667653)
    Move on to something better. Backwards compatibility can go too far sometimes.
  • Legendary? (Score:0, Insightful)

    by Anonymous Coward on Thursday June 05, 2008 @09:20AM (#23667797)
    Everything the x86 series did, someone else did first on some other processor (68k, Sparc, MIPS, and PPC, to name a few), and usually better, because they didn't have the handicap of backward compatibility.

    But I guess we have the Pentium 4 to thank for conditioning the masses to think that clock speed equals performance to the exclusion of all else, and that it's okay for a CPU to burn 100-150 watts all by itself.
  • by waldo2020 (592242) on Thursday June 05, 2008 @09:23AM (#23667821)
    Motorola always had the better product, just worse marketing. If IBM had chosen the 68K for their machine, instead of the 8086/8085 from the Displaywriters, we would have saved ourselves three decades of segmented address spaces, half a dozen memory models and a non-orthogonal CPU architecture.
  • by quanticle (843097) on Thursday June 05, 2008 @09:25AM (#23667843) Homepage

    But you see, the thing with standards is that the longer they live, the harder they are to kill. At 30 years old, the x86 ISA is damn near invincible now.

  • by oblivionboy (181090) on Thursday June 05, 2008 @09:26AM (#23667861)
    The other big success is their constant work on making the entire system architecture better, and basically giving that work to the industry for free.

    While I'm sure that's how the script was repeated inside Intel, suggesting great generosity ("And we give it away for free!"), what choice did they really have? IBM's whole Micro Channel Architecture fiasco showed what licensing did to the adoption of new advances in system architecture and integration.
  • Before the 8086 was released, I knew a V.P. of Technology who was extremely excited about it. Every time I saw him, he would tell me the date of release, and how much he was waiting for that date.

    On that day, he was very sad. Intel made some horrible design decisions, and we've had to live with them ever since. Starting with the fact that assembly language programming for the x86 architecture is really annoying.
  • by Urkki (668283) on Thursday June 05, 2008 @09:29AM (#23667903)
    What may have been a limitation some time ago might start to be an advantage. I'm under the impression that there's already more than enough transistors to go around per processor, and there's nothing *special* that can be done with them, so it's just cramming more cores and more cache into a single chip. So parsing, splitting and parallelizing complex instructions at the processor may not be very costly after all. OTOH I bet it does reduce the memory bandwidth needed, which definitely is an advantage.
  • Re:Itanium sank (Score:5, Insightful)

    by WMD_88 (843388) <kjwolff8891@yahoo.com> on Thursday June 05, 2008 @09:31AM (#23667925) Homepage Journal
    My theory is that Itanium was secretly never created to replace x86; rather, it was designed to kill off all competitors to x86. Think about it: Intel managed to convince the vendors of several architectures (PA-RISC and Alpha come to mind) that IA-64 was the future. They proceeded to jump on Itanium and abandon the others. When Itanium failed, those companies (along with any hope of reviving the other architectures) went with it, or jumped to x86 to stay in business. Ta-da! x86 is alone and dominant in the very places IA-64 was designed for. Intel 1, CPU tech 0.
  • Die already ! (Score:5, Insightful)

    by DarkDust (239124) * <marc@darkdust.net> on Thursday June 05, 2008 @09:37AM (#23667987) Homepage
    Happy birthday and long live, x86.

    Oh my god, no ! Die already ! The design is bad, the instruction set is dumb, too much legacy stuff from 1978 still around and making CPUs costly, too complex and slow. Anyone who's written assembler code for x86 and other 32-bit CPUs will surely agree that the x86 is just ugly.

    Even Intel didn't want it to live that long. The 8086 was a hack, a beefed-up 8085 (8-bit, a better 8080), and they wanted to replace it with a better design, but iAPX 432 [wikipedia.org] turned out to be a disaster.

    The attempts to improve the design with 80286 and 80386 were not very successful... they merely did the same shit to the 8086 that the 8086 already did to the 8085: double the register size, this time adding a prefix "E" instead of the suffix "X". Oh, and they added the protected mode... which is nice, but looks like a hack compared to other processors, IMHO.

    And here we are: we still have to live with some of the limitations and ugly things from the hastily hacked together CPU that was the 8086, for example no real general purpose registers: all the "normal" registers (E)AX, (E)BX, etc. pp. are bound to certain jobs, at least for some opcodes. No neat stuff like register windows and shit. Oh, I hate the 8086 and that it became successful. The world could be much more beautiful (and faster) without it. But I've been ranting about that for over ten years now, and I guess I'll still be ranting about it on my deathbed.
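The segmentation being complained about here is easy to illustrate. Below is a tiny Python sketch of 8086 real-mode address formation (a simplified model of segment:offset arithmetic only; the function name is mine, not Intel's):

```python
# Simplified model of 8086 real-mode addressing: a physical address is
# segment * 16 + offset, so many different segment:offset pairs alias
# the same physical byte.
def physical_address(segment: int, offset: int) -> int:
    """20-bit physical address formed from a 16-bit segment and offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at 1 MB on the 8086

# Two different pairs naming the same physical byte:
a = physical_address(0x1234, 0x0010)
b = physical_address(0x1235, 0x0000)
assert a == b == 0x12350

# The top corner of the address space wraps around on the 8086:
print(hex(physical_address(0xFFFF, 0xFFFF)))  # 0xffef
```

The aliasing shown by `a == b` is exactly why half a dozen incompatible "memory models" (tiny, small, large, huge...) existed for the same CPU.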
  • Re:Legendary? (Score:4, Insightful)

    by bhtooefr (649901) <bhtooefr&bhtooefr,org> on Thursday June 05, 2008 @09:38AM (#23668001) Homepage Journal
    What about CISC-to-RISC translation?

    I do believe that was first done by an x86 CPU, the NexGen Nx586 (the predecessor to the AMD K6...)
  • by fremen (33537) on Thursday June 05, 2008 @09:40AM (#23668011)
    What you're really saying is that "if only the chip had been a little more expensive to produce things might have been different." Adding a few little tweaks to devices was a heck of a lot more expensive in the 80s than it is today. The reality is that had Intel done what you asked, the x86 might not have succeeded this long at all.
  • NEC V20? (Score:1, Insightful)

    by Anonymous Coward on Thursday June 05, 2008 @09:44AM (#23668055)
    Alright - who remembers replacing the original 8086/8088 with an NEC V20 for the extra 2 MHz?

    -- kickin it old school IBM PCjr style.
  • by divided421 (1174103) on Thursday June 05, 2008 @09:44AM (#23668061)
    You are absolutely correct. It amazes me how large a market x86 commands with the undisputed worst instruction set design. Even x86 processors know their limitations and merely translate the instructions into more RISC-like 'micro-ops' (as Intel calls them) for quick and efficient execution. Lucky for us, this translation 'baggage' only occupies, what, 10% of total chip design now? Had several industry giants competed on perfecting a PowerPC-based design with the same amount of funding x86 has received, we would be years ahead of where we are now.
  • by Gnavpot (708731) on Thursday June 05, 2008 @09:54AM (#23668197)

    If the paragraph size had been 256 bytes, that would have resulted in a 16MB address space. We probably wouldn't have hit the wall for another several years. Companies such as VisiCorp, which were bending heaven and earth to cram products like VisiOn into 640K, might have succeeded; it would have been much easier to do graphics-oriented processing (death of Microsoft and Apple, anyone?). And so on.

    But would the extra RAM have been affordable to typical users of these programs at that time?

    I remember fighting for expensive upgrades from 4 to 8 MB of RAM at my workplace back in the early 90's. At that time PCs had already been able to use more than 1 MB for some years, so the problem you are referring to must have come years earlier, when an upgrade from 1 to 2 MB would probably have been equally expensive.
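The paragraph-size arithmetic behind this counterfactual is easy to check (a back-of-the-envelope sketch assuming the 8086's 16-bit segment and offset registers; the function name is mine):

```python
def address_space(paragraph_size: int, segment_bits: int = 16,
                  offset_bits: int = 16) -> int:
    """Size of a segmented address space: the highest segment scaled by the
    paragraph size, plus the highest offset, plus one."""
    return (2 ** segment_bits - 1) * paragraph_size + (2 ** offset_bits - 1) + 1

# The real 8086: 16-byte paragraphs give just over 1 MB.
print(address_space(16))   # 1114096 bytes
# The counterfactual: 256-byte paragraphs would give about 16 MB.
print(address_space(256))  # 16842496 bytes
```

So the hypothetical 256-byte paragraph buys roughly a 16x larger address space, at the cost of coarser segment granularity.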
  • Re:Itanium sank (Score:2, Insightful)

    by Hal_Porter (817932) on Thursday June 05, 2008 @09:55AM (#23668223)

    Lets not forget the wonderful Itanium processor which was supposed to replace X86 and be the next gen 64-bit king.

    How could Intel have got it so wrong? As Linus said, "they threw out all of the good bits of X86".

    It's good to see, however, that Intel have managed to produce decent processors now that the GHz wars are over. In fact it's been as much about who can produce the lowest-power CPU. AMD seem to just have the edge.
    Not just Itanium. All the x86 alternatives have sunk over the years: MIPS, Alpha, PPC. x86 was originally a hack on the 8080, designed to last a few years. All the others had visions of 25-year lifetimes. But the odd thing is that a hack will be cheaper and reach the market faster. An architecture designed to last 25 years must, by definition, include features which are baggage when it is released. x86, USB, DOS and Windows show that it's better to optimize something for when it is released. Sure, doing this will leave a few holes. But if it succeeds you have the money to fix them. To us limited human engineers this seems inelegant. But that's how evolution works.

    Maybe having vision is overrated. Evolution has no vision; it just hacks stuff blindly. But it designed your brain. Conscious engineers planning for the long term can't do that.
  • Re:Die already ! (Score:2, Insightful)

    by Anonymous Coward on Thursday June 05, 2008 @10:02AM (#23668313)
    The problem is that, like English, even though x86 sucks in so many ways, it just happens to be very successful for those same reasons. For instance, using xor on a register to zero itself is both disgusting and efficient... kind of like "y'all" instead of "all of you".
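The xor-zeroing idiom mentioned here wins mostly on encoding size. A quick sketch comparing the standard 32-bit encodings (the byte values below are the well-known ModR/M forms; treat the comparison itself as illustrative):

```python
# Standard 32-bit x86 encodings, no prefixes:
xor_eax_eax = bytes([0x31, 0xC0])                    # xor eax, eax
mov_eax_0   = bytes([0xB8, 0x00, 0x00, 0x00, 0x00])  # mov eax, 0

# Both leave EAX == 0, but the xor form is less than half the size,
# which matters for code density in instruction caches.
print(len(xor_eax_eax), len(mov_eax_0))  # 2 5
```

On modern cores the xor form is also recognized as a zeroing idiom that breaks the dependency on the old register value, so the "disgusting" spelling really is the faster one.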
  • by imgod2u (812837) on Thursday June 05, 2008 @10:15AM (#23668485) Homepage
    Well, in order to be fast to execute, the code density of the internal u-ops can't be all that high. I don't have exact figures, but if the trace cache in Netburst is any indication, it's a 30% or more increase in code size for the same operations vs x86 - and that's a 30% increase for simple instructions, too. I would imagine it's pretty bloated and not suitable for use external to the processor.

    On top of that, it's probably subject to change with each micro-architecture.
  • by slittle (4150) on Thursday June 05, 2008 @10:25AM (#23668659) Homepage
    Does it really matter? Once you expose the instruction set, it's set in stone. That'll lead us back to exactly where we are in another 30 years. As these instructions are internal, they're free to change to suit the technology of the execution units in each processor generation. And presumably because CISC instructions are bigger, they're more descriptive and the decoder can optimise them better. Intel already tried making the compiler do the optimisation - didn't work out so well.
  • Re:Die already ! (Score:3, Insightful)

    by eswierk (34642) on Thursday June 05, 2008 @10:27AM (#23668697) Homepage
    Does anyone besides compiler developers really care that the x86 instruction set is ugly and full of legacy stuff from 1978?

    Most software developers care more about things like good development tools and the convenience of binary compatibility across a range of devices from supercomputers to laptops to cell phones.

    Cross-compiling will always suck and emulators will always be slow. As lower-power, more highly integrated x86 chipsets become more widespread I expect to see the market for PowerPC, ARM and other embedded architectures shrink rather than grow.
  • Re:How Long? (Score:5, Insightful)

    by compro01 (777531) on Thursday June 05, 2008 @10:29AM (#23668731)

    Wow, I can't imagine what we'll be doing with 18 billion billion bytes of *RAM*. That's what 64 bits of address space gives you.
    [bashing joke]
    maybe that will finally be enough to run vista at a decent speed.
    [/bashing joke]
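The "18 billion billion bytes" figure checks out; a one-liner just to confirm the arithmetic:

```python
ADDRESS_BITS = 64
bytes_addressable = 2 ** ADDRESS_BITS

print(bytes_addressable)             # 18446744073709551616
print(bytes_addressable // 2 ** 60)  # 16 exbibytes (EiB)
```

That is about 18.4 billion billion bytes, or the "16 exabytes" quoted elsewhere in the thread.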
  • by kylben (1008989) on Thursday June 05, 2008 @10:30AM (#23668743) Homepage
    This is probably the first time in the history of advertising that a slogan of such over the top hyperbole turned out to be understated.
  • Re:Die already ! (Score:5, Insightful)

    by Wrath0fb0b (302444) on Thursday June 05, 2008 @10:30AM (#23668755)

    Even Intel didn't want it to live that long. The 8086 was a hack, a beefed-up 8085 (8-bit, a better 8080), and they wanted to replace it with a better design, but iAPX 432 turned out to be a disaster.

    The attempts to improve the design with 80286 and 80386 were not very successful... they merely did the same shit to the 8086 that the 8086 already did to the 8085: double the register size, this time adding a prefix "E" instead of the suffix "X". Oh, and they added the protected mode... which is nice, but looks like a hack compared to other processors, IMHO.
    Perhaps this can be taken as a lesson that it is more fruitful to evolve the same design for the sake of continuity than to start fresh with a new one. The only really successful example of a revolutionary design I can think of is OS X, and even that took two major revisions (10.2) to be fully usable. Meanwhile, Linux still operates on APIs and other conventions from the 70s, and the internet has all this web 2.0 stuff running over HTTP 1.1, which itself runs on TCP -- old, old technology.

    The first instinct of the engineer is always to tear it down and build it again, it is a useful function of the PHB (gasp!) that he prevents this from happening all the time.
  • Re:Itanium sank (Score:5, Insightful)

    by putaro (235078) on Thursday June 05, 2008 @10:36AM (#23668861) Journal
    x86 succeeded for exactly one reason - volume. If IBM had chosen the 68K over the x86 we'd be using that today.

    Back in the 80's it was a lot cheaper to develop a processor. Processors were considerably simpler and slower. The reason there were so many processor architectures around back then was that it was feasible for a small team to develop a processor from scratch. It was even possible for a small team to build, out of discrete components, a processor that was significantly faster than a fully integrated microprocessor, e.g. the Cray-1.

    As the semiconductor processes improved and more, faster, transistors could get squeezed onto a chip, the complexity and the speed of microprocessors increased. Where you're at today is that it takes a billion dollar fab and a huge design team to create a competitive microprocessor. x86 has succeeded because there is such a torrent of money flowing into Intel from x86 sales that it is able to build those fabs and fund those design teams.

    PowerPC, for example, was a much smaller effort than Intel back in the mid-90's. PowerPC was able, for a short time, to significantly outperform Intel and remained fairly competitive for quite a while even though the design team was much smaller and the semiconductor process was not as sophisticated as Intel's. The reason for that was that the architecture was much better designed than Intel, making it easier to get more performance for fewer $$. Eventually, however, the huge amount of $$ going into x86 allowed Intel to pull ahead.
  • by beowulf (12899) on Thursday June 05, 2008 @10:41AM (#23668947)
    Amen to that. I remember in the long ago days of my undergrad CS assembler class (mid '80s) spending the first half of the semester working with the M68000 and thinking, huh, assembly isn't so bad. Then we switched to the 80286. Cue multiple brain explosions around the lab...
  • by hvm2hvm (1208954) on Thursday June 05, 2008 @10:50AM (#23669093) Homepage
    What?
  • Re:How Long? (Score:3, Insightful)

    by SendBot (29932) on Thursday June 05, 2008 @11:02AM (#23669263) Homepage Journal
    y'know, in the future we'll look back on this time and laugh at how silly it was to say "16 exabytes should be enough for anyone"
  • Re:How Long? (Score:3, Insightful)

    by erikdalen (99500) <erik.dalen@mensa.se> on Thursday June 05, 2008 @11:08AM (#23669355) Homepage
    The biggest problem is probably like with 64-bit Windows: drivers.

    Linux can just recompile them and Apple only supports hardware they distribute, so that makes it easier.
  • by Alwin Henseler (640539) on Thursday June 05, 2008 @11:18AM (#23669485) Homepage

    What Intel should do is expose both sets of instructions, act like an x86 if the OS expects it, or act RISC-like if the OS expects that.
    Not that there's any point. From what I understand, many common compilers only use a limited subset of the x86 instruction set anyway. To take advantage of that fact, you'd have to create a new CPU with just the set of instructions that are used, or modify compilers to use a specific subset, and create an optimized CPU for that. The most important thing: ditch the unneeded extra ballast of the x86 instruction set, and perhaps re-arrange the remainder.

    Oh wait, that's been done. It's called porting to a different, more elegant and/or power efficient architecture (like ARM, Mips or other). What you need for that is source code for the software. If you have source code for all the software you need, nothing keeps you from moving to a better CPU architecture. If you don't (like with closed-source apps on Windows), then you can't.

    If the manufacturers of mini-PCs like the Asus EeePC can take a hint, they'd do the smart thing and move the Linux-based versions onto a more power-efficient architecture. You'd lose binary compatibility, but that would be a small loss if you're running Linux and don't do serious PC gaming anyway. In return you could have vastly improved battery life for the Linux-based versions.
  • Re:How Long? (Score:5, Insightful)

    by Hal_Porter (817932) on Thursday June 05, 2008 @11:31AM (#23669693)

    That translation of x86 instructions must have some performance cost to it. What Intel should do is expose both sets of instructions: act like an x86 if the OS expects it, or act RISC-like if the OS expects that. Then everyone can have their Windows installed, and it creates an opening for other operating systems. An OS that uses the native instruction set should be a little faster, giving people a reason to use it over Windows. That will encourage MS to port Windows to the new instruction set, and voila, we are free of x86.
    Actually Windows NT and its descendants are very portable - they were designed to run on i860, MIPS, x86, Alpha and PPC. Even now they run on x86, x64, Itanium and PowerPC (in the Xbox 360). All those ports probably made the code quite easy to bring to new architectures. It's all the binary application software that isn't portable. Or rather, it probably could be ported if you had the source and the time, but lots of people have very old applications that they don't want to buy again. E.g. Photoshop may be portable, but the copy of Photoshop CSx I have on my desk isn't. And I don't want to use the latest Photoshop version because it's slower and costs a lot of money. It's even worse if the company that made the app is out of business. But I buy a new copy of Windows every couple of years. So your hypothetical dual-mode CPU could run Windows 7 natively. Some new apps would be native and some old ones x86. Actually Vista on x64 is already like this - the kernel is 64-bit and most applications stay 32-bit, but x64 is no more native to the processor than x86.

    The question is whether a processor running its native instruction set would be faster. From what I can tell the native instruction format of a modern x86 is wider than the x86 equivalent. Suppose the uops in the pipeline are 48 bit - a 32 bit constant and a 16 bit instruction. That is quite a bit larger than a typical x86 instruction. Wider instructions take more space in Ram and cache. You don't need to decode them, but the extra time fetching them kills the advantage.

    And what is native is very implementation dependent. An AMD chip will have a very different uop format from an Intel one. Actually even between chip generations the uop format might change. Essentially Risc chips tried to make the internal pipeline format the ISA. But in the long run that wasn't good. Original Risc had branch delay slots and later superscalar implementations where branch delays work very differently had to emulate the old behaviour because it was no longer at all native. So if you did this you'd get an advantage for one generation but later generations would be progressively disadvantaged. Or you could keep switching instruction sets. But if most software is distributed as binaries that is impossible.
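The fetch-bandwidth argument above can be made concrete with the hypothetical numbers already floated in this subthread (48-bit uops; the ~3.5-byte average x86 instruction length is my assumption, not a measurement):

```python
# Both figures below are hypotheticals from the discussion, not measured data.
UOP_BITS = 48        # assumed width of an internal uop
AVG_X86_BYTES = 3.5  # assumed average x86 instruction length

uop_bytes = UOP_BITS / 8          # 6 bytes per uop
blowup = uop_bytes / AVG_X86_BYTES

print(f"code size blowup: {blowup:.0%}")  # code size blowup: 171%
```

Under these assumptions, exposing the uop format would make code roughly 1.7x larger, which is the kind of cache and memory penalty that can eat the savings from skipping the decoder.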
  • Re:How Long? (Score:3, Insightful)

    by vux984 (928602) on Thursday June 05, 2008 @01:14PM (#23671469)
    The problem is as usual MSFT

    The problem is NOT Microsoft.

    The problem is end users. They want to use their existing x86 hardware and software. They aren't really interested in not having drivers for anything more than 3 months old, and running all their existing software at 30-70% its current speed.

    Look at the x64 versions of Windows. It highlights exactly the problem. XP x64 was crippled by lack of drivers, and Microsoft HAD to force the issue with Vista because the x86 ram limit was starting to hold things back. But even today most customers don't WANT the x64 edition. This issue isn't really MS. If they could abandon x86 and keep their customer base, they would, in a heartbeat.

    Linux also supports other architectures, but the G5 as a linux platform is a pretty niche thing to do, since you have to compile a lot more stuff from source and not all of it is cpu agnostic, plus no proprietary linux software will run and you can't stick Windows into a VMware VM, etc.

    Yes, I know that a decade ago NT 4.0 ran on PowerPC, and even on a couple of Alpha chips.

    And MS could release a version for another CPU within a couple months if they really wanted to, if not faster. But who is going to step up and rewrite all the drivers? Who is going to step up and rewrite all the applications? Leaving that to the 3rd party vendors? They aren't interested in anything but their current project... they aren't going to go back and recompile squat. Hell, most STILL aren't releasing x64 native code.

    Apple, with a fraction of the software guys, can keep their OS on two majorly different styles of chips, PowerPC and Intel x86, along with 32-bit and 64-bit versions of both.

    1) Apple controls the drivers so that part of the issue is largely solved. Of course your 3rd party hardware might not work after they switch, but at least all the apple hardware works.

    2) Apple didn't want to switch. They had to. Intel was kicking butt in performance, while IBM couldn't even deliver a mobile G5. Consumers were starting to get twitchy about the fact that Windows PCs were getting markedly faster, while mac laptops were still stuck on G4s.

    3) The performance gap from the G4/G5 to the intel stuff had gotten so bad, that by the time Apple switched, running PPC code in emulation on intel was actually an improvement in some cases, and in most cases at least comparable to running on the (slower) native hardware.

    4) Apple is killing off the PPC. Much new software is already intel only, and the next release of OSX is rumoured to be intel only.

    Apple is really a whole other ball game. As for solaris... that's the same as linux... but even more niche. How many people do you know running solaris on ppc?

  • Re:How Long? (Score:3, Insightful)

    by drsmithy (35869) <drsmithy@gmailSLACKWARE.com minus distro> on Thursday June 05, 2008 @01:24PM (#23671625)

    The biggest problem is probably like with 64-bit Windows: drivers.

    No, the biggest problem is applications. Same "problem" that stops people switching from Windows to $OS_DU_JOUR.

    The second biggest problem, of course, is basic economics. What other hardware platform offers even the slightest amount of ROI for Microsoft to expend the effort on porting Windows to? Where's the business case?

  • Re:How Long? (Score:3, Insightful)

    by edalytical (671270) on Thursday June 05, 2008 @01:24PM (#23671635)
    Apple has the OS running on ARM too. That makes three major ISAs.
