Happy Birthday! X86 Turns 30 Years Old

javipas writes "On June 8th, 1978 Intel introduced its first 16-bit microprocessor, the 8086. Intel used then "the dawn of a new era" slogan, and they probably didn't know how certain they were. Thirty years later we've seen the evolution of PC architectures based on the x86 instruction set that has been at the core of Intel, AMD and VIA processors. Legendary chips such as the Intel 80386, 80486, Pentium and AMD Athlon owe a great debt to that original processor, and as was recently pointed out on Slashdot, x86 evolution still leads the revolution. Happy birthday and long live x86."
This discussion has been archived. No new comments can be posted.

  • by suso ( 153703 ) * on Thursday June 05, 2008 @09:00AM (#23667557) Journal
    Intel's own 40th anniversary is coming up on July 18th. I guess the microcomputer industry is officially over the hill.

    Nice self-reference double entendre Taco!
    • Re: (Score:3, Funny)

      by canuck57 ( 662392 )

      Intel's own 40th anniversary is coming up on July 18th. I guess the microcomputer industry is officially over the hill.

      Sure beats being under the hill.

  • by Anonymous Coward on Thursday June 05, 2008 @09:04AM (#23667607)
    The story is a few days early. I think you may have a rounding bug somewhere.
  • How Long? (Score:5, Interesting)

    by dintech ( 998802 ) on Thursday June 05, 2008 @09:07AM (#23667635)
    I'm pretty sure x86 processors will still be in use for another 15 years at least. But, how much further will this architecture evolve? When will we see the demise of x86?
    • Re:How Long? (Score:5, Interesting)

      by flnca ( 1022891 ) on Thursday June 05, 2008 @09:11AM (#23667681) Journal
      The demise of the x86 general architecture will not begin until Windows goes out of fashion. It's the only major platform strongly tied to that CPU architecture. x86 CPUs have been emulating the x86 instruction set in hardware for many years now. I guess, if they could, Intel / AMD / VIA and others would happily abandon the concept, because it leads to all sorts of complexities.
      • by Futurepower(R) ( 558542 ) on Thursday June 05, 2008 @09:27AM (#23667883) Homepage
        Before the 8086 was released, I knew a V.P. of Technology who was extremely excited about it. Every time I saw him, he would tell me the release date and how much he was looking forward to it.

        On that day, he was very sad. Intel made some horrible design decisions, and we've had to live with them ever since. Starting with the fact that assembly language programming for the x86 architecture is really annoying.
        • by IvyKing ( 732111 ) on Thursday June 05, 2008 @09:41AM (#23668015)
          The docs for the 8086 stated that the interrupts below 20H were reserved, so guess what IBM used for the BIOS. The 8086 documentation was emphatic about not using the non-maskable interrupt for the 8087, and guess what IBM used. OTOH, Tim Paterson did pay attention to the docs and started the interrupt usage at 20H, but he wasn't working for either IBM or Microsoft at the time.


          TFA doesn't get into the real reason that the x86 took off: the BIOS for the IBM PC was cloned at least two or three times, which allowed for much cheaper hardware (the original Compaq and IBM 486 machines were going for close to $10K, whereas 486 whiteboxes were available a few months later for $2K).

          • You said, "... Compaq and IBM 486 machines ...".

            I think you mean 8086 computers, or even 8008 computers.
            • Re: (Score:3, Informative)

              by IvyKing ( 732111 )
              "or even 8008 computers"


              Perhaps you meant 8088?


              I mentioned the 486 specifically because low cost 486 machines were available only a few months after the expensive models from the big boys - the first low cost clone of the 8088 came out several years after the PC.

      • Re:How Long? (Score:5, Interesting)

        by peragrin ( 659227 ) on Thursday June 05, 2008 @09:28AM (#23667889)
        Actually Intel keeps trying (Itanium?) and AMD uses a compatibility mode.

        The problem is as usual MSFT, whose Windows only runs on x86. Yes, I know a decade ago NT 4.0 did run on PowerPC, and even a couple of Alpha chips.

        Apple, with a fraction of the software guys, can keep their OS on two very different styles of chip, PowerPC and Intel x86, along with 32-bit and 64-bit versions of both. Sun keeps how many versions of Solaris?

        But Vista only runs on x86. So x86 will remain around as long as it does.
        • by B3ryllium ( 571199 ) on Thursday June 05, 2008 @09:33AM (#23667941) Homepage
          Hrm, I wonder what this HAL thing is ... must be a virus! I'd better remove it.
        • by bsDaemon ( 87307 )
          I think if Intel were really interested, they could force MS to follow suit. The problem is, so long as AMD is willing to provide the compatibility mode, Microsoft doesn't have to change -- and that means that Intel would have to lose out, at least in the home market.

          I have little doubt that Intel could force a change on servers and corporate desktops, and Linux, BSD and Solaris, as well as Apple, would be able to adjust within a very short period of time to run on it.
        • Re: (Score:3, Insightful)

          by erikdalen ( 99500 )
          The biggest problem is probably like with 64-bit Windows: drivers.

          Linux can just recompile them and Apple only supports hardware they distribute, so that makes it easier.
          • Re: (Score:3, Insightful)

            by drsmithy ( 35869 )

            The biggest problem is probably like with 64-bit Windows: drivers.

            No, the biggest problem is applications. Same "problem" that stops people switching from Windows to $OS_DU_JOUR.

            The second biggest problem, of course, is basic economics. What other hardware platform offers even the slightest amount of ROI for Microsoft to expend the effort on porting Windows to? Where's the business case?

        • Re: (Score:3, Insightful)

          by vux984 ( 928602 )
          The problem is as usual MSFT

          The problem is NOT Microsoft.

          The problem is end users. They want to use their existing x86 hardware and software. They aren't really interested in having no drivers for anything more than 3 months old, and running all their existing software at 30-70% of its current speed.

          Look at the x64 versions of Windows. It highlights exactly the problem. XP x64 was crippled by lack of drivers, and Microsoft HAD to force the issue with Vista because the x86 ram limit was starting to hold things
        • Re: (Score:3, Insightful)

          by edalytical ( 671270 )
          Apple has the OS running on ARM too. That makes three major ISAs.
      • Re:How Long? (Score:5, Interesting)

        by Hal_Porter ( 817932 ) on Thursday June 05, 2008 @09:36AM (#23667979)

        The demise of the x86 general architecture will not begin until Windows goes out of fashion. It's the only major platform strongly tied to that CPU architecture. x86 CPUs have been emulating the x86 instruction set in hardware for many years now. I guess, if they could, Intel / AMD / VIA and others would happily abandon the concept, because it leads to all sorts of complexities.
        Yeah, they could move to an architecture with a simple, compact instruction set encoding which makes efficient use of the instruction cache and can be translated to something easier to implement on the fly with extra pipeline stages.

        But wait, that's exactly what x86 is. In terms of code density it does pretty well compared to RISC. Modern x86s don't implement it internally, they translate it to RISC-y uops on the fly and execute those. And over the years compilers have learned to prefer the x86 instructions that are fast in this sort of implementation. And, thanks to AMD, it now supports 64 bit natively in its x64 variant. This is important. 64 bit may be overkill today, but most architectures die because of a lack of address space (see Computer Architecture by Hennessy and Patterson [amazon.com]). But 64 bit address spaces will keep x86/x64 going for at least a while.

        http://cache-www.intel.com/cd/00/00/01/79/17969_codeclean_r02.pdf [intel.com]
        If you know that the variable does not need to be pointer polymorphic (scale with the architecture), use the following guideline to see if it can be typed as 32-bit instead of 64-bit. (This guideline is based on a data expansion model of 1.5 bits per year over 10 years.)

        IIRC 1.5 bits per year address space bloat is from Hennessy and Patterson.

        At this point we have 30 unused bits of address space, assuming current apps need 32GB tops. That gives 64 bit x64 another 20 years lifetime!
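        A quick back-of-the-envelope check of that arithmetic, as a C sketch (the 32GB working-set figure and the 1.5 bits/year growth rate are the assumptions stated above, not measurements):

            #include <math.h>
            #include <stdio.h>

            int main(void) {
                double bytes_in_use = 32.0 * 1024 * 1024 * 1024; /* assumed 32GB working set */
                double bits_in_use  = log2(bytes_in_use);        /* ~35 bits */
                double headroom     = 64.0 - bits_in_use;        /* ~29-30 spare address bits */
                double growth_rate  = 1.5;                       /* bits per year, per H&P */

                printf("bits in use: %.1f, headroom: %.1f bits, ~%.0f years left\n",
                       bits_in_use, headroom, headroom / growth_rate);
                return 0;
            }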
        • Re:How Long? (Score:5, Interesting)

          by TheRaven64 ( 641858 ) on Thursday June 05, 2008 @10:07AM (#23668397) Journal

          Many of the shortest opcodes on modern Intel CPUs are for instructions that are never used. Compare this with ARM, where the 16-bit thumb code is used in a lot of small programs and libraries and there are well-defined calling conventions for interfacing 32-bit and 16-bit code in the same program.

          Modern (Core 2 and later) Intel chips do not just split the ops into simpler ones, they also combine the simpler ones into more complex ones. This was what killed a lot of the original RISC archs - CISC multi-cycle ops became CISC single-cycle ops while compilers for RISC architectures were still generating multiple instructions. On ARM this isn't needed because the instruction set isn't quite so brain-dead. ARM also has much better handling of conditionals (try benchmarking the cost of a branch on x86 - you'll be surprised at how expensive it is), since conditionals are handled by select-style operations (every instruction is conditional), which reduces branch penalties and scales much better to superscalar architectures without the cost of huge register files.
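          To make the "select-style" point concrete, here is a minimal C sketch of a branch-free select, the kind of pattern a compiler can lower to ARM's predicated instructions or x86's cmov instead of a conditional jump (the function names here are purely illustrative):

              #include <stdio.h>

              /* Branchy version: typically compiled to a compare plus a conditional jump. */
              static int max_branchy(int a, int b) {
                  if (a > b)
                      return a;
                  return b;
              }

              /* Branch-free version: builds an all-ones or all-zeros mask from the
                 comparison and blends the two values, so there is nothing to mispredict. */
              static int max_branchfree(int a, int b) {
                  int mask = -(a > b);              /* -1 (all ones) if a > b, else 0 */
                  return (a & mask) | (b & ~mask);
              }

              int main(void) {
                  printf("%d %d\n", max_branchy(3, 7), max_branchfree(3, 7));
                  return 0;
              }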

        • Re:How Long? (Score:5, Informative)

          by bcrowell ( 177657 ) on Thursday June 05, 2008 @10:33AM (#23668807) Homepage

          IIRC 1.5 bits per year address space bloat is from Hennessy and Patterson. [...] At this point we have 30 unused bits of address space, assuming current apps need 32GB tops. That gives 64 bit x64 another 20 years lifetime!
          Empirically, it hasn't been growing at anywhere near that rate. Ca. 1980 my TRS-80 had a 16-bit address space, and had enough memory to exhaust all of the addresses. Today, I'm using computers that have 1 GB of memory, which is 30 bits worth of address space. That's less than 0.5 bits per year.

          Also, in order to keep the actual used address space growing at a constant number of bits per year, Moore's law would have to continue indefinitely. But most experts are saying it will probably stop in 10 to 30 years. If we keep growing at 0.5 bits per year, starting now at 30 bits, and stop growing at the Moore's law rate in 2038, then we'll only be using 45 bits worth of actual address space.

          It's hard to grok how big a 64-bit address space would really be. As a reality check, let's say that I want to own every movie that's ever been listed on IMDB, and store every single one of those in my computer's RAM simultaneously. If each one takes as much storage as a 5 Gb DVD, and IMDB has 400,000 movies listed [imdb.com], then that's a total of 2x10^15 bytes, which is 50 bits. That's 16,000 times smaller than a 64-bit address space.

          As another example, the human brain has about 10^11 neurons. Each of those may be connected to 10^4 other neurons, so the total number of connections is about 10^15. That suggests that the total amount of RAM needed for direct, brute-force modeling of a human brain (assuming we knew enough to program such a model, which we don't, and had parallel processors that could run such a simulation, which we don't) might be about 10^15 bytes, which is a 50-bit address space. A 64-bit address space is 16,000 times bigger than that.

          I think we're likely to see flying cars, Turing-level AI, and vacations on the moon before we need 128-bit pointers.
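          A small C sketch of the growth-rate extrapolation above (the 1980 and 2008 data points and the 2038 cutoff are the comment's assumptions, not predictions):

              #include <stdio.h>

              int main(void) {
                  /* Address bits actually in use at two points in time, per the comment. */
                  double bits_1980 = 16.0;    /* TRS-80 with its address space fully populated */
                  double bits_2008 = 30.0;    /* ~1 GB of memory today */
                  double rate = (bits_2008 - bits_1980) / (2008 - 1980);   /* ~0.5 bits/year */

                  /* Extrapolate to 2038, when Moore's law is assumed to have stopped. */
                  double bits_2038 = bits_2008 + rate * (2038 - 2008);

                  printf("growth: %.2f bits/year, projected need in 2038: %.0f bits\n",
                         rate, bits_2038);
                  return 0;
              }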

          • Re: (Score:3, Interesting)

            You're talking about desktop computers - the serious applications out there (OLAP, scientific apps) are already a few bits ahead of you.

            I'd also suggest that the state variables to describe each neuron and synaptic connection would be fairly complex, so the 16,000 times bigger probably shrinks quite a bit (hint - 1,000 separate connections per neuron can't be efficiently represented in less than 1,000 bits - and if we need FP accuracy, we're talking 32Kb / neuron).

            Give me 128 bit pointers, or give me death!

          • math fail (Score:5, Informative)

            by SendBot ( 29932 ) on Thursday June 05, 2008 @11:21AM (#23669531) Homepage Journal
            pardon me for being such a math nerd, but I enjoy it so:

            Each of those may be connected to 10^4 other neurons, so the total number of connections is about 10^15.

            You're counting a lot of connections more than once (see permutations), not to mention your perilous assumption that a neural connection would only consume one byte in the hypothetical model.

            If each one takes as much storage as a 5 Gb DVD, and IMDB has 400,000 movies listed [imdb.com], then that's a total of 2x10^15 bytes, which is 50 bits. That's 16,000 times smaller than a 64-bit address space.

            Firstly, what you mean is GB (bytes), not Gb (bits). 2e15 bytes would need a 51-bit address space, and 16 exabytes is a little over 9223 times 2e15 bytes.

            I like the direction of your ideas though.
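            For the curious, the corrected figures do check out; a throwaway C sketch (the 400,000 titles and 5 GB per movie are the parent's assumptions):

                #include <math.h>
                #include <stdio.h>

                int main(void) {
                    double movies    = 400000.0;              /* IMDB titles, per the parent */
                    double dvd_bytes = 5e9;                   /* ~5 GB per movie */
                    double total     = movies * dvd_bytes;    /* 2e15 bytes */
                    double addr_bits = ceil(log2(total));     /* bits needed to address it all */
                    double ratio     = pow(2.0, 64) / total;  /* how much bigger 2^64 is */

                    printf("total: %.1e bytes, needs %.0f address bits, 2^64 is %.0fx bigger\n",
                           total, addr_bits, ratio);
                    return 0;
                }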
          • Re:How Long? (Score:5, Interesting)

            by hr raattgift ( 249975 ) on Thursday June 05, 2008 @01:41PM (#23671925)

            I think we're likely to see flying cars, Turing-level AI, and vacations on the moon before we need 128-bit pointers.


            128-bit linear addressing is not so useful, but you can introduce structure into the address so that (for example) the first 64 bits is a network address and the second 64 bits is the address of storage at that network address. This requires distributing the functionality of the MMU across various network elements, but is not especially novel, and from a software perspective is a special case of NUMA. (The special case lends itself to some clever scheduling based on the delay hints available in a further structured network address, especially if you generally organize things such that the XOR of two network addresses is a useful (if not perfect) delay metric from the perspective of an accessor).

            This can even be done "in the small" on a non-networked host by allocating "network addresses" in the top 64 bits to local random access storage. You could look at this as a form of segmented memory (MULTICS style) or as an automatic handling of open(2)+mmap(2) based on (for example) a 64 bit encoding of a path name in the MSBs of the addresses. That is, dereferencing computer memory address 0xDEADBEEF00000001 automatically opens and mmaps a file corresponding to 0xDEADBEEF.

            The opportunities to abstract away networked file systems without losing (or even while gaining) useful information about objects' characteristics (proximity, responsiveness, staleness) suggest that the address size used at the level of a primitive ISA that uses pseudo-flat addressing is mainly limited by the overhead of hauling around extra bytes per memory access. Pseudo-flat addressing can also in principle steal ideas from x86's various addressing models for dealing with addresses of different lengths.

            Ultimately, the difficulty is in the directory problem. That does not go away even if you use radically different "addresses" for objects -- directories are already a pain if you use URLs/URIs for example, or if you use POSIX style filenames, or whatever, and the problem worsens when you have different "addresses" for the same logical object.

            (Fun is when you have to figure out race conditions involving a structured set of bytes that is in a file shared out by AFP, SMB, NFS, and WebDAV, as well as being in use locally, with client software responsible for choosing the most appropriate available access method since there is no guarantee that any one of these methods will work for all clients at all times).

            One possible approach to this is to insist that any reachable object is a persistent object, with a permanent universal name. If you have the permanent universal name, the object is either available to you or errors out. If you do not have the permanent universal name, you are out of luck unless you have a "locator" that points to it (or points to something that points to something that ... points to it). This is in some ways much easier if what is pointed to by a permanent universal name is immutable, and if most such objects are compositions of primitive PUNs, the most primitive and common of which ("well known PUNs") can be cached or recalculated locally.

            [cf Church encoding, Mogensen-Scott encoding and normalization in the computer science sense]
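            A minimal C sketch of the bit-slicing being described, purely illustrative (the struct and helper names are invented here; no real MMU, network layer or file-system hook is implied):

                #include <stdint.h>
                #include <stdio.h>

                /* Hypothetical structured 128-bit address: the high 64 bits name an object
                   (a node, a file, a PUN...), the low 64 bits are an offset inside it. */
                typedef struct {
                    uint64_t object;   /* "network address" / object identifier */
                    uint64_t offset;   /* location within that object's storage */
                } addr128;

                static addr128 make_addr(uint64_t object, uint64_t offset) {
                    addr128 a = { object, offset };
                    return a;
                }

                int main(void) {
                    /* Mirrors the 0xDEADBEEF... example above: object id plus offset 1. */
                    addr128 a = make_addr(0xDEADBEEFull, 1ull);
                    printf("object %#llx, offset %#llx\n",
                           (unsigned long long)a.object, (unsigned long long)a.offset);
                    return 0;
                }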
      • by Hatta ( 162192 )
        That translation of x86 instructions must have some performance cost to it. What Intel should do is expose both sets of instructions: act like an x86 if the OS expects it, or act RISC-like if the OS expects that. Then everyone can have their Windows installed, and it creates an opening for other operating systems. An OS that uses the native instruction set should be a little faster, giving people a reason to use it over Windows. That will encourage MS to port Windows to the new instruction set, and voila, we are free of x86.
        • Re:How Long? (Score:5, Insightful)

          by Hal_Porter ( 817932 ) on Thursday June 05, 2008 @11:31AM (#23669693)

          That translation of x86 instructions must have some performance cost to it. What Intel should do is expose both sets of instructions: act like an x86 if the OS expects it, or act RISC-like if the OS expects that. Then everyone can have their Windows installed, and it creates an opening for other operating systems. An OS that uses the native instruction set should be a little faster, giving people a reason to use it over Windows. That will encourage MS to port Windows to the new instruction set, and voila, we are free of x86.
          Actually Windows NT and its descendants are very portable - they were designed to run on i860, MIPS, x86, Alpha and PPC. Even now they run on x86, x64, Itanium and PowerPC (in the Xbox 360). All those ports probably made the code quite easy to port to new architectures. It's all the binary application software that isn't. Or rather, it probably could be ported if you had the source and the time to do it, but lots of people have very old applications that they don't want to buy again. E.g. Photoshop may be portable, but the copy of Photoshop CSx I have on my desk isn't. And I don't want to use the latest Photoshop version because it's slower and costs a lot of money. It's even worse if the company that made the app is out of business. But I buy a new copy of Windows every couple of years. So your hypothetical dual-mode CPU could run Windows 7 natively. Some new apps would be native and some old ones x86. Actually Vista on x64 is already like this - the kernel is 64 bit and most applications will stay 32 bit, but x64 is no more native to the processor than x86.

          The question is whether a processor running its native instruction set would be faster. From what I can tell, the native instruction format of a modern x86 is wider than the x86 equivalent. Suppose the uops in the pipeline are 48 bit - a 32 bit constant and a 16 bit instruction. That is quite a bit larger than a typical x86 instruction. Wider instructions take more space in RAM and cache. You don't need to decode them, but the extra time spent fetching them kills the advantage.

          And what is native is very implementation dependent. An AMD chip will have a very different uop format from an Intel one. Actually, even between chip generations the uop format might change. Essentially RISC chips tried to make the internal pipeline format the ISA. But in the long run that wasn't good. Original RISC had branch delay slots, and later superscalar implementations where branch delays work very differently had to emulate the old behaviour because it was no longer at all native. So if you did this you'd get an advantage for one generation, but later generations would be progressively disadvantaged. Or you could keep switching instruction sets. But if most software is distributed as binaries, that is impossible.
  • Move on to something better. Backwards compatibility can go too far sometimes.
    • Re: (Score:3, Insightful)

      by quanticle ( 843097 )

      But you see, the thing with standards is that the longer they live, the harder they are to kill. At 30 years old, the x86 ISA is damn near invincible now.

      • by Nullav ( 1053766 )
        So what better birthday present than a new friend (that will later kill and supplant it)?
  • by HW_Hack ( 1031622 ) on Thursday June 05, 2008 @09:12AM (#23667689)
    I spent over 16 yrs with Intel as a HW engineer. I saw many good decisions and a lot of bad ones too. Same goes for opportunities taken and missed. But their focus on cpu development cannot be faulted - they stumbled a few times but always found their focus again.

    The other big success is their constant work on making the entire system architecture better, and basically giving that work to the industry for free. PCI - USB - AGP - all directly driven by Intel.

    It's a bizarro place to work, but my time there was not wasted.
    • by oblivionboy ( 181090 ) on Thursday June 05, 2008 @09:26AM (#23667861)
      The other big success is their constant work on making the entire system architecture better, and basically giving that work to the industry for free.

      While I'm sure that's how the script was repeated inside Intel, suggesting great generosity ("And we give it away for free!"), what choice did they really have? IBM's whole Micro Channel Architecture fiasco showed what licensing did to the adoption of new advances in system architecture and integration.
    • by Simonetta ( 207550 ) on Thursday June 05, 2008 @10:18AM (#23668551)
      Hello,
          Congrats on working at Intel for 16 years. Might I suggest that you document this period of activity in a small book? It would be great for the historical record.

          Typing is a real pain. I suggest using the speech-to-text feature found buried in newer versions of MS Word or in the IBM or Dragon speech programs. Train the system by reading a few chapters off the screen. Then sit back and talk about the Intel years, the projects, the personalities, the cubicles, the picnics, the parking lot, the haircuts, the water cooler stories, anything and everything. Don't worry about punctuation and paragraphing, which can be awkward when using speech-to-text systems. It's important to get a text file of recollections from the people who were there. Intel was 'ground zero' for the digital revolution that transformed the world in the last quarter of the 20th century. Fifty to a hundred years from now, people will want to know what it was really like.

      Thank you.
  • A few tweaks, and... (Score:5, Interesting)

    by kabdib ( 81955 ) on Thursday June 05, 2008 @09:13AM (#23667691) Homepage
    This is a case where just a couple of tweaks to the original x86 architecture might have had a dramatic impact on the industry.

    The paragraph size of the 8086 was 16 bytes; that is, the segment registers were essentially multiplied by 16, giving an address range of 1MB, which resulted in extreme memory pressure (that 640K limit) starting in the mid 80s.

    If the paragraph size had been 256 bytes, that would have resulted in a 24-bit (16MB) address space. We probably wouldn't have hit the wall for another several years. Companies such as VisiCorp, which were bending heaven and earth to cram products like VisiOn into 640K, might have succeeded, and it would have been much easier to do graphics-oriented processing (death of Microsoft and Apple, anyone?). And so on.

    Things might look profoundly different now, if only the 8086 had had four more address pins, and someone at Intel hadn't thought, "Well, 1MB is enough for anyone..."
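    A short C sketch of the real-mode address arithmetic under discussion (the 256-byte-paragraph variant is the comment's hypothetical, not anything Intel shipped; it tops out around 16MB):

        #include <stdint.h>
        #include <stdio.h>

        /* Real 8086: physical address = segment * 16 + offset, giving roughly 1MB. */
        static uint32_t phys_8086(uint16_t seg, uint16_t off) {
            return ((uint32_t)seg << 4) + off;     /* 16-byte paragraphs */
        }

        /* Hypothetical variant with 256-byte paragraphs: a 24-bit (~16MB) space. */
        static uint32_t phys_256(uint16_t seg, uint16_t off) {
            return ((uint32_t)seg << 8) + off;
        }

        int main(void) {
            printf("highest 8086 address:         %#x\n", (unsigned)phys_8086(0xFFFF, 0xFFFF));
            printf("highest with 256B paragraphs: %#x\n", (unsigned)phys_256(0xFFFF, 0xFFFF));
            return 0;
        }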
    • by fremen ( 33537 ) on Thursday June 05, 2008 @09:40AM (#23668011)
      What you're really saying is that "if only the chip had been a little more expensive to produce things might have been different." Adding a few little tweaks to devices was a heck of a lot more expensive in the 80s than it is today. The reality is that had Intel done what you asked, the x86 might not have succeeded this long at all.
    • by Gnavpot ( 708731 ) on Thursday June 05, 2008 @09:54AM (#23668197)

      If the paragraph size had been 256 bytes, that would have resulted in a 24-bit (16MB) address space. We probably wouldn't have hit the wall for another several years. Companies such as VisiCorp, which were bending heaven and earth to cram products like VisiOn into 640K, might have succeeded, and it would have been much easier to do graphics-oriented processing (death of Microsoft and Apple, anyone?). And so on.

      But would the extra RAM have been affordable to typical users of these programs at that time?

      I remember fighting for expensive upgrades from 4 to 8 MB of RAM at my workplace back in the early 90's. At that time PCs had already been able to use more than 1 MB for some years, so the problem you are referring to must have come years earlier, when an upgrade from 1 to 2 MB would probably have been equally expensive.
    • by deniable ( 76198 )
      You've got that the wrong way around. The paragraph size was a result of the register and address bus sizes. I doubt anyone would go out and say "Let's make assembly programmers use segments, they'll appreciate it." It was a way to make 16 bit registers handle a 20 bit address bus.

      Anyway, you're giving me bad flash-backs to EMS and XMS and himem and other things best forgotten.
  • Let's not forget the wonderful Itanium processor, which was supposed to replace x86 and be the next-gen 64-bit king.

    How could Intel have got it so wrong? As Linus said, "they threw out all of the good bits of x86".

    It's good to see, however, that Intel have now managed to produce decent processors now that the GHz wars are over. In fact it's been as much about who can produce the lowest-power CPU. AMD seem to just have the edge.
    • Re: (Score:3, Funny)

      by Anonymous Coward
      Itaniums were great processors. I have a bank of surplus ones installed in my oven as a replacement heating element.
    • Re:Itanium sank (Score:5, Insightful)

      by WMD_88 ( 843388 ) <kjwolff8891@yahoo.com> on Thursday June 05, 2008 @09:31AM (#23667925) Homepage Journal
      My theory is that Itanium was secretly never created to replace x86; rather, it was designed to kill off all competitors to x86. Think about it: Intel managed to convince the vendors of several architectures (PA-RISC and Alpha come to mind) that IA-64 was the future. They proceeded to jump on Itanium and abandon the others. When Itanium failed, those companies (along with any hope of reviving the other arch's) went with it, or jumped to x86 to stay in business. Ta-da! x86 is alone and dominant in the very places IA-64 was designed for. Intel 1, CPU tech 0.
    • Re: (Score:3, Interesting)

      by argent ( 18001 )
      How could Intel have got it so wrong?

      That's what they do best. Getting it wrong.

      x86 segments (we'll make it work like Pascal). Until they gave up on the 64k segments it was excruciating.
      iAPX 432 ... the ultimate CISC (the terminal CISC)
      i860 ... The compilers will make it work (they didn't)
      IA64 ... It's not really VLIW! We'll call it EPIC! The compilers will make it work! Honest!
    • Re: (Score:2, Insightful)

      by Hal_Porter ( 817932 )

      Let's not forget the wonderful Itanium processor, which was supposed to replace x86 and be the next-gen 64-bit king.

      How could Intel have got it so wrong? As Linus said, "they threw out all of the good bits of x86".

      It's good to see, however, that Intel have now managed to produce decent processors now that the GHz wars are over. In fact it's been as much about who can produce the lowest-power CPU. AMD seem to just have the edge.

      Not just Itanium. All the x86 alternatives have sunk over the years. Mips, Alpha, PPC. x86 was originally a hack on 8080, designed to last for a few years. All the others had visions of 25 year lifetimes. But the odd thing is that a hack will be cheaper and reach the market faster. An architecture designed to last for 25 years by definition must include features which are baggage when it is released. x86, USB, Dos and Windows show that it's better to optimize something for when it is released. Sure doing t

      • Re:Itanium sank (Score:5, Insightful)

        by putaro ( 235078 ) on Thursday June 05, 2008 @10:36AM (#23668861) Journal
        x86 succeeded for exactly one reason - volume. If IBM had chosen the 68K over the x86 we'd be using that today.

        Back in the 80's it was a lot cheaper to develop a processor. They were considerably simpler and slower. The reason there were so many processor architectures around back then was that it was feasible for a small team to develop a processor from scratch. It was even possible for a small team to build, out of discrete components, a processor that was (significantly) faster than a fully integrated microprocessor, e.g. the Cray-1.

        As the semiconductor processes improved and more, faster, transistors could get squeezed onto a chip, the complexity and the speed of microprocessors increased. Where you're at today is that it takes a billion dollar fab and a huge design team to create a competitive microprocessor. x86 has succeeded because there is such a torrent of money flowing into Intel from x86 sales that it is able to build those fabs and fund those design teams.

        PowerPC, for example, was a much smaller effort than Intel back in the mid-90's. PowerPC was able, for a short time, to significantly outperform Intel and remained fairly competitive for quite a while, even though the design team was much smaller and the semiconductor process was not as sophisticated as Intel's. The reason for that was that the architecture was much better designed than Intel's, making it easier to get more performance for fewer $$. Eventually, however, the huge amount of $$ going into x86 allowed Intel to pull ahead.
        • Re:Itanium sank (Score:5, Interesting)

          by afidel ( 530433 ) on Thursday June 05, 2008 @12:56PM (#23671145)
          The reason PPC was able to beat x86 for a time was that around then the x86 architecture was moving to being an ISA with the actual execution done by a RISC-y back end. The decode logic at that time was a significant percentage of the available die space; as process improvements came along, that logic remained fairly static in absolute terms but quickly became a smaller and smaller percentage of the available resources, so relative performance went up as the share of the chip available for useful work rose. Today the more compact instruction density of a CISC front end helps increase cache utilization and thus better hide the huge penalty for accessing main RAM.
  • by steve_thatguy ( 690298 ) on Thursday June 05, 2008 @09:14AM (#23667709)
    Kinda makes you wonder how different things might be or how much farther things might've come had a better architecture become the de facto standard of commodity hardware. I've heard it said that most of the processing of x86 architectures goes to breaking down complex instructions to two or three smaller instructions. That's a lot of overhead over time. Even if programmers broke down the instructions themselves so that they were only using basically a RISC-subset of the x86 instructions, there's all that hardware that still has to be there for legacy and to preserve compatibility with the standard. But I'm not a chip engineer, so my understanding may be fundamentally flawed somehow.
    • by Urkki ( 668283 ) on Thursday June 05, 2008 @09:29AM (#23667903)
      What may have been a limitation some time ago might start to be an advantage. I'm under the impression that there's already more than enough transistors to go around per processor, and there's nothing *special* that can be done with them, so it's just cramming more cores and more cache into a single chip. So parsing, splitting and parallelizing complex instructions at the processor may not be very costly after all. OTOH I bet it does reduce the memory bandwidth needed, which definitely is an advantage.
    • Re: (Score:2, Interesting)

      by Hal_Porter ( 817932 )

      Kinda makes you wonder how different things might be or how much farther things might've come had a better architecture become the de facto standard of commodity hardware.

      I've heard it said that most of the processing of x86 architectures goes to breaking down complex instructions to two or three smaller instructions. That's a lot of overhead over time. Even if programmers broke down the instructions themselves so that they were only using basically a RISC-subset of the x86 instructions, there's all that hardware that still has to be there for legacy and to preserve compatibility with the standard.

      But I'm not a chip engineer, so my understanding may be fundamentally flawed somehow.

      I think the important thing to remember is that total chip transistor counts - mostly used for caches - have inflated very rapidly due to Moore's Law, while legacy baggage has grown more slowly. So the x86 compatibility overhead in a modern x86-compatible chip is lower than it was in a 486, for example. Meanwhile the cost of not being x86 compatible has stayed the same. ARM cores are much smaller than x86, for example, but most PC-like devices still use x86 because most applications and OSs are distributed as x8

  • Signed,
    your great-great-great grandson,
    Pentium
  • by marto ( 110299 ) on Thursday June 05, 2008 @09:20AM (#23667789)
    .model small
    .stack
    .data
    message db "Happy Birthday!", "$"   ; DOS strings are '$'-terminated
    .code
    main proc
          mov ax, seg message
          mov ds, ax                    ; point DS at the data segment
          mov ah, 09h                   ; DOS function 09h: print string at DS:DX
          lea dx, message
          int 21h
          mov ax, 4c00h                 ; DOS function 4Ch: exit with return code 0
          int 21h
    main endp
    end main
  • by waldo2020 ( 592242 ) on Thursday June 05, 2008 @09:23AM (#23667821)
    Motorola always had the better product, just worse marketing. If IBM had chosen the 68K in their instruments machine, instead of the 8086/8085 from the Displaywriters, we would have saved ourselves from three decades of segmented address space, half a dozen memory models and a non-orthogonal CPU architecture.
    • Re: (Score:2, Insightful)

      by divided421 ( 1174103 )
      You are absolutely correct. It amazes me how large of a market x86 commands with the undisputed worst instruction set design. Even x86 processors know their limitations and merely translate the instructions into more RISC-like 'micro-ops' (as intel calls them) for quick and efficient execution. Lucky for us, this translation 'baggage' only occupies, what, 10% of total chip design now? Had several industry giants competed on perfecting a PowerPC-based design with the same amount of funding as x86 has receive
    • You forgot the unintuitive (until made standard through pervasiveness) inverted reference scheme. Should one have the LSB or the MSB first? IMHO, Motorola got that one right as well.
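      A minimal C sketch of the LSB-first vs. MSB-first difference in question (x86 stores the least significant byte first, the 68K the most significant byte first):

          #include <stdint.h>
          #include <stdio.h>

          int main(void) {
              uint32_t value = 0x01020304;
              const unsigned char *bytes = (const unsigned char *)&value;

              /* On a little-endian machine (x86) the first byte in memory is 0x04;
                 on a big-endian machine (68K, PowerPC) it is 0x01. */
              printf("first byte in memory: 0x%02x -> %s-endian\n",
                     bytes[0], bytes[0] == 0x04 ? "little" : "big");
              return 0;
          }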
    • Re: (Score:3, Interesting)

      by vivin ( 671928 )
      Very true. I started learning assembly on the Motorola 6811, then the 6800. My final semester at college, I took a graduate course where we wrote a small OS for the Motorola 68k. The 68k was a delight to code for. Beautifully orthogonal and intuitive. The Motorola instruction set was what really got me into assembly. I tried many times to write assembly for the x86, but I simply couldn't get around the ugliness, the endianness (backwards for me), and the reversed format for source and destination... and don
  • by triffid_98 ( 899609 ) on Thursday June 05, 2008 @09:24AM (#23667837)
    Happy birthday, my Intel overlords, and a pox on whoever designed that ugly memory map.
  • Makes you wonder why they didn't fix the MMU issues while they went about evoluting .. :)
  • Die already ! (Score:5, Insightful)

    by DarkDust ( 239124 ) * <marc@darkdust.net> on Thursday June 05, 2008 @09:37AM (#23667987) Homepage
    Happy birthday and long live, x86.

    Oh my god, no! Die already! The design is bad, the instruction set is dumb, and too much legacy stuff from 1978 is still around, making CPUs costly, too complex and slow. Anyone who's written assembler code for x86 and other 32-bit CPUs will surely agree that the x86 is just ugly.

    Even Intel didn't want it to live that long. The 8086 was a hack, a beefed-up 8085 (8-bit, a better 8080), and they wanted to replace it with a better design, but iAPX 432 [wikipedia.org] turned out to be a disaster.

    The attempts to improve the design with 80286 and 80386 were not very successful... they merely did the same shit to the 8086 that the 8086 already did to the 8085: double the register size, this time adding a prefix "E" instead of the suffix "X". Oh, and they added the protected mode... which is nice, but looks like a hack compared to other processors, IMHO.

    And here we are: we still have to live with some of the limitations and ugly things from the hastily hacked-together CPU that was the 8086, for example no real general-purpose registers: all the "normal" registers (E)AX, (E)BX, etc. pp. are bound to certain jobs, at least for some opcodes. No neat stuff like register windows and shit. Oh, I hate the 8086 and that it became successful. The world could be much more beautiful (and faster) without it. But I've been ranting about that for over ten years now, and I guess I'll still be ranting about it on my deathbed.
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      The problem is that x86 is like English: even though it sucks in so many ways, it happens to be very successful for those same reasons. Like, for instance, using xor on a register to zero itself is both disgusting and efficient... kind of like "y'all" instead of "all of you".
    • Re: (Score:3, Insightful)

      by eswierk ( 34642 )
      Does anyone besides compiler developers really care that the x86 instruction set is ugly and full of legacy stuff from 1978?

      Most software developers care more about things like good development tools and the convenience of binary compatibility across a range of devices from supercomputers to laptops to cell phones.

      Cross-compiling will always suck and emulators will always be slow. As lower-power, more highly integrated x86 chipsets become more widespread I expect to see the market for PowerPC, ARM and other
    • Re:Die already ! (Score:5, Insightful)

      by Wrath0fb0b ( 302444 ) on Thursday June 05, 2008 @10:30AM (#23668755)

      Even Intel didn't want it to live that long. The 8086 was a hack, a beefed-up 8085 (8-bit, a better 8080), and they wanted to replace it with a better design, but iAPX 432 turned out to be a disaster.

      The attempts to improve the design with 80286 and 80386 were not very successful... they merely did the same shit to the 8086 that the 8086 already did to the 8085: double the register size, this time adding a prefix "E" instead of the suffix "X". Oh, and they added the protected mode... which is nice, but looks like a hack compared to other processors, IMHO.
      Perhaps this can be taken as a lesson that it is more fruitful to evolve the same design for the sake of continuity than to start fresh with a new design. The only really successful example of a revolutionary design I can think of is OS X, and even that took two major revisions (10.2) to be fully usable. Meanwhile, Linux still operates on APIs and other conventions from the 70s, and the internet has all this web 2.0 stuff running over HTTP 1.1, which itself runs on TCP -- old, old technology.

      The first instinct of the engineer is always to tear it down and build it again, it is a useful function of the PHB (gasp!) that he prevents this from happening all the time.
      • Perhaps this can be taken as a lesson that it is more fruitful to evolve the same design for the sake of continuity than to start fresh with a new design.

        Nope. The best strategy is to push whatever product you have on something that will be sold on a massive number of machines and become so pervasive that it turns into the standard. Then everyone will stick to it because of compatibility with legacy code.

        The 8086/8088 didn't succeed *because* it was a 16-bit hack of the 8008/8080/8085. It succeeded because it was sold in the IBM PC (lots of sales), which in turn got cloned (even more sales of 8088s). By the time you sit back and try thinking about it, there are 8088

  • June 8th (Score:5, Funny)

    by coren2000 ( 788204 ) on Thursday June 05, 2008 @09:38AM (#23667999) Journal
    Why couldn't the poster wait for June 8th to post this story... it's *MY* birthday today, dang it... x86 is totally stealing my day....

    Jerk.
  • and long live x86

    Are you out of your mind!? OK OK, I'll admit that commercially, Intel was genius in making backward-compatibility king.

    But on a technical side, the world would be such a better place if we could just switch to more modern architectures / instruction set. The chips would be smaller and more power efficient, not having to waste space on a front-end decoding those legacy instructions for the core, for example.

    I know, Intel tried to break with the past with the Itanium. They were wrong in betting that the comp

  • Intel used then "the dawn of a new era" slogan, and they probably didn't know how certain they were.

    How "certain" they were? "Certain"?? Surely you mean "correct", "right", or perhaps "prophetic".
  • Intel, 1978 (Score:2, Informative)

    by conureman ( 748753 )
    Rejected my application for employment.
  • by Nullav ( 1053766 ) <[Nullav.gmail] [ta] [com]> on Thursday June 05, 2008 @09:55AM (#23668215)
    Now die, you sputtering son of a whore. :D
  • Out of interest... (Score:3, Interesting)

    by samael ( 12612 ) * <Andrew@Ducker.org.uk> on Thursday June 05, 2008 @09:55AM (#23668219) Homepage
    I know that modern x86 chips convert x86 instructions into RISC-like instructions and then execute _them_ - if the chip only dealt with those instructions, how much more efficient would it be?

    Anyone have any ideas?
    • Re: (Score:3, Insightful)

      by imgod2u ( 812837 )
      Well, in order to be fast in executing, the code density can't be all that high for the internal u-ops. I don't have a rough estimate but if the trace cache in Netburst is any indication, it's a 30% or more increase in code size for the same operations vs x86. We're talking 30% increase of simple instructions too. I would imagine it's pretty bloated and not suitable to be used external to the processor.

      On top of that, it's probably subject to change with each micro-architecture.
    • Re: (Score:3, Insightful)

      by slittle ( 4150 )
      Does it really matter? Once you expose the instruction set, it's set in stone. That'll lead us back to exactly where we are in another 30 years. As these instructions are internal, they're free to change to suit the technology of the execution units in each processor generation. And presumably because CISC instructions are bigger, they're more descriptive and the decoder can optimise them better. Intel already tried making the compiler do the optimisation - didn't work out so well.
  • by cyfer2000 ( 548592 ) on Thursday June 05, 2008 @10:00AM (#23668283) Journal
    The ubiquitous ARM architecture is 25 years old this year and still rising.
    • Yes but not quite...

      Development of the ARM architecture began in 1983, but the first prototype ARM1 processors weren't completed until 1985, with the ready-for-market ARM2 shipping in the Acorn Archimedes computers released in 1987.
  • by xPsi ( 851544 ) * on Thursday June 05, 2008 @10:02AM (#23668323)

    X86 Turns 30 Years Old
    Happy Birthday! But do not be alarmed. That flashing red light on your palm is a natural part of our social order. On Lastday, please report to Carousel for termination at your earliest convenience. The computer is your friend. Oh, wait, you ARE the computer...
  • by FurtiveGlancer ( 1274746 ) <{moc.loa} {ta} {yuGhceTcoHdA}> on Thursday June 05, 2008 @10:16AM (#23668503) Journal
    About the tyranny of backward compatibility? Think how much further we might be in capability without that albatross [virginia.edu] slowing innovation.

    No "it was necessary" arguments please. I'm not panning reverse compatibility, merely lamenting the unfortunate stagnating side effect it has had.
  • by kylben ( 1008989 )
    This is probably the first time in the history of advertising that a slogan of such over the top hyperbole turned out to be understated.
  • by peter303 ( 12292 ) on Thursday June 05, 2008 @10:43AM (#23668989)
    Even Intel recognized early on the limitations of its very early architecture and introduced replacements. But all were commercial failures. Customers were too attached to legacy binary software. And this left openings for companies like AMD who "did Intel better than Intel".

    So what happened is that Intel now emulates itself using more modern architectures. The underlying engine changed to RISC around the 486(?), then wide words, and more recently cells. All emulate the ancient x86 instruction set. Each generation needs proportionately less real estate to do this. Last time I looked it was 5%, but it might be under 1% now.
  • So? (Score:5, Funny)

    by Bluesman ( 104513 ) on Thursday June 05, 2008 @11:18AM (#23669489) Homepage
    What's so special about this?

    Wake me up when it turns 32.
  • by ucblockhead ( 63650 ) on Thursday June 05, 2008 @11:23AM (#23669567) Homepage Journal
    I will declare a far pointer in its honor.
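    For anyone who never had the pleasure, a sketch of what that looks like. The far keyword and the MK_FP/FP_SEG/FP_OFF macros are non-standard and only build with 16-bit DOS compilers (e.g. Turbo C or old Microsoft C), so treat this purely as a historical illustration:

        #include <dos.h>    /* MK_FP, FP_SEG, FP_OFF in 16-bit DOS compilers */
        #include <stdio.h>

        int main(void) {
            /* B800:0000 is the colour text-mode screen buffer in real-mode DOS.
               A far pointer carries an explicit segment:offset pair. */
            unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0x0000);

            video[0] = '!';     /* character cell in the top-left corner */
            video[1] = 0x07;    /* light grey on black attribute byte */

            printf("segment %04X, offset %04X\n", FP_SEG(video), FP_OFF(video));
            return 0;
        }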
