Intel Hardware

Happy Birthday! X86 Turns 30 Years Old

javipas writes "On June 8th, 1978 Intel introduced its first 16-bit microprocessor, the 8086. Intel used the slogan "the dawn of a new era" at the time, and they probably didn't know how right they were. Thirty years later we've seen the evolution of PC architectures based on the x86 instruction set, which has been at the core of Intel, AMD and VIA processors. Legendary chips such as the Intel 80386, 80486, Pentium and AMD Athlon owe a great debt to that original processor, and as was recently pointed out on Slashdot, x86 evolution still leads the revolution. Happy birthday and long live x86."
  • How Long? (Score:5, Interesting)

    by dintech ( 998802 ) on Thursday June 05, 2008 @10:07AM (#23667635)
    I'm pretty sure x86 processors will still be in use for another 15 years at least. But, how much further will this architecture evolve? When will we see the demise of x86?
  • Re:How Long? (Score:5, Interesting)

    by flnca ( 1022891 ) on Thursday June 05, 2008 @10:11AM (#23667681) Journal
    The demise of the x86 general architecture will not begin until Windows goes out of fashion. It's the only major platform strongly tied to that CPU architecture. x86 CPUs have been emulating the x86 instruction set in hardware for many years now. I guess, if they could, Intel / AMD / VIA and others would happily abandon the concept, because it leads to all sorts of complexities.
  • by Anonymous Coward on Thursday June 05, 2008 @10:12AM (#23667685)
    What a mishmash of zany grafted-on non-orthogonal instructions and registers the x86 is. For years its technology lagged Motorola's 68x00. x86 succeeded due to IBM and Microsoft selecting it. Anything will fly given enough propulsion. We can only imagine how much further ahead CPUs would be if not for the x86 monopoly.
  • by HW_Hack ( 1031622 ) on Thursday June 05, 2008 @10:12AM (#23667689)
    I spent over 16 yrs with Intel as a HW engineer. I saw many good decisions and a lot of bad ones too. Same goes for opportunities taken and missed. But their focus on cpu development cannot be faulted - they stumbled a few times but always found their focus again.

    The other big success is their constant work on making the entire system architecture better, and basically giving that work to the industry for free. PCI - USB - AGP - all directly driven by Intel.

    It's a bizarro place to work, but my time there was not wasted.
  • A few tweaks, and... (Score:5, Interesting)

    by kabdib ( 81955 ) on Thursday June 05, 2008 @10:13AM (#23667691) Homepage
    This is a case where just a couple of tweaks to the original x86 architecture might have had a dramatic impact on the industry.

    The paragraph size of the 8086 was 16 bytes; that is, the segment registers were essentially multiplied by 16, giving an address range of 1MB, which resulted in extreme memory pressure (that 640K limit) starting in the mid 80s.

    If the paragraph size had been 256 bytes, that would have resulted in a 24-bit (16MB) address space. We probably wouldn't have hit the wall for another several years. Companies such as VisiCorp, which were bending heaven and earth to cram products like VisiOn into 640K, might have succeeded, and it would have been much easier to do graphics-oriented processing (death of Microsoft and Apple, anyone?). And so on.

    Things might look profoundly different now, if only the 8086 had had four more address pins, and someone at Intel hadn't thought, "Well, 1MB is enough for anyone..."
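
    To make the arithmetic concrete, here is a minimal sketch (in C, with illustrative values) of the real-mode address calculation being described: physical = segment * paragraph_size + offset, so widening the paragraph from 16 to 256 bytes widens the reachable range from roughly 1MB to roughly 16MB.

        #include <stdio.h>
        #include <stdint.h>

        /* Sketch of real-mode address formation. The 16-byte paragraph is the
           real 8086; the 256-byte paragraph is the hypothetical variant
           discussed above. */
        static uint32_t physical(uint16_t segment, uint16_t offset, uint32_t paragraph_size)
        {
            return (uint32_t)segment * paragraph_size + offset;
        }

        int main(void)
        {
            printf("16-byte paragraphs:  top address 0x%06X (~1MB)\n",
                   (unsigned)physical(0xFFFF, 0xFFFF, 16));
            printf("256-byte paragraphs: top address 0x%06X (~16MB)\n",
                   (unsigned)physical(0xFFFF, 0xFFFF, 256));
            return 0;
        }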
  • by steve_thatguy ( 690298 ) on Thursday June 05, 2008 @10:14AM (#23667709)
    Kinda makes you wonder how different things might be or how much farther things might've come had a better architecture become the de facto standard of commodity hardware. I've heard it said that most of the processing of x86 architectures goes to breaking down complex instructions to two or three smaller instructions. That's a lot of overhead over time. Even if programmers broke down the instructions themselves so that they were only using basically a RISC-subset of the x86 instructions, there's all that hardware that still has to be there for legacy and to preserve compatibility with the standard. But I'm not a chip engineer, so my understanding may be fundamentally flawed somehow.
  • Ah, fresh air! (Score:1, Interesting)

    by Icarium ( 1109647 ) on Thursday June 05, 2008 @10:21AM (#23667811)
    Someone advocating better hardware over more efficient code? Heresy I say!
  • Re:1978?? (Score:5, Interesting)

    by Ctrl-Z ( 28806 ) <timNO@SPAMtimcoleman.com> on Thursday June 05, 2008 @10:27AM (#23667879) Homepage Journal
    Are you kidding? The 8088, an 8-bit-bus variant of the 8086, was the processor used in the IBM 5150, also known as the IBM PC, introduced in 1981.
  • Re:How Long? (Score:5, Interesting)

    by peragrin ( 659227 ) on Thursday June 05, 2008 @10:28AM (#23667889)
    Actually Intel keeps trying (Itanium?) and AMD uses a compatibility mode.

    The problem, as usual, is MSFT, whose Windows only runs on x86. Yes, I know that a decade ago NT 4.0 did run on PowerPC, and even on a couple of Alpha chips.

    Apple, with a fraction of the software guys, can keep their OS on two very different styles of chips, PowerPC and Intel x86, along with 32-bit and 64-bit versions of both. Sun keeps how many versions of Solaris?

    But Vista only runs on x86, so x86 will remain around as long as Windows does.
  • The scary thing is (Score:4, Interesting)

    by wiredog ( 43288 ) on Thursday June 05, 2008 @10:36AM (#23667977) Journal
    I was able to follow that, and it's been decades since I had to use x86 assembler.
  • Re:How Long? (Score:5, Interesting)

    by Hal_Porter ( 817932 ) on Thursday June 05, 2008 @10:36AM (#23667979)

    The demise of the x86 general architecture will not begin until Windows goes out of fashion. It's the only major platform strongly tied to that CPU architecture. x86 CPUs have been emulating the x86 instruction set in hardware for many years now. I guess, if they could, Intel / AMD / VIA and others would happily abandon the concept, because it leads to all sorts of complexities.
    Yeah, they could move to an architecture with a simple, compact instruction set encoding which makes efficient use of the instruction cache and can be translated to something easier to implement on the fly with extra pipeline stages.

    But wait, that's exactly what x86 is. In terms of code density it does pretty well compared to RISC. Modern x86s don't implement it internally; they translate it to RISC-like uops on the fly and execute those. And over the years compilers have learned to prefer the x86 instructions that are fast in this sort of implementation. And, thanks to AMD, it now supports 64-bit natively in its x64 variant. This is important: 64-bit may be overkill today, but most architectures die because of a lack of address space (see Computer Architecture by Hennessy and Patterson [amazon.com]). But 64-bit address spaces will keep x86/x64 going for at least a while.

    http://cache-www.intel.com/cd/00/00/01/79/17969_codeclean_r02.pdf [intel.com]
    If you know that the variable does not need to be pointer polymorphic (scale with the architecture), use the following guideline to see if it can be typed as 32-bit instead of 64-bit. (This guideline is based on a data expansion model of 1.5 bits per year over 10 years.)

    IIRC 1.5 bits per year address space bloat is from Hennessy and Patterson.

    At this point we have roughly 30 unused bits of address space, assuming current apps need 32GB tops. That gives 64-bit x64 another 20 years of lifetime!
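
    For the curious, a back-of-the-envelope sketch of that estimate (assuming the parent's 32GB working set and the 1.5-bits-per-year growth rate quoted above):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            /* Both numbers are the parent's assumptions: 32GB as today's largest
               working set, ~1.5 bits of address-space growth per year. */
            double bits_used  = log2(32.0 * 1024 * 1024 * 1024); /* ~35 bits */
            double spare_bits = 64.0 - bits_used;                /* ~29-30 bits */
            printf("~%.0f bits used, ~%.0f spare, roughly %.0f years of x64 headroom\n",
                   bits_used, spare_bits, spare_bits / 1.5);
            return 0;
        }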
  • by IvyKing ( 732111 ) on Thursday June 05, 2008 @10:41AM (#23668015)
    The docs for the 8086 stated that the interrupts below 20H were reserved, so guess what IBM used for the BIOS. The 8086 documentation was emphatic about not using the non-maskable interrupt for the 8087, and guess what IBM used. OTOH, Tim Paterson did pay attention to the docs and started the interrupt usage at 20H, but he wasn't working for either IBM or Microsoft at the time.


    TFA doesn't get into the real reason that the x86 took off: the BIOS for the IBM PC was cloned at least two or three times, which allowed for much cheaper hardware (the original Compaq and IBM 486 machines were going for close to $10K, while 486 whiteboxes were available a few months later for $2K).

  • Re:Itanium sank (Score:3, Interesting)

    by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Thursday June 05, 2008 @10:49AM (#23668125) Homepage Journal
    How could Intel have got it so wrong?

    That's what they do best. Getting it wrong.

    x86 segments (we'll make it work like Pascal). Until they gave up on the 64k segments it was excruciating.
    iApx432 ... the ultimate CISC (the terminal CISC)
    i860 ... The compilers will make it work (they didn't)
    IA64 ... It's not really VLIW! We'll call it EPIC! The compilers will make it work! Honest!
  • Out of interest... (Score:3, Interesting)

    by samael ( 12612 ) * <Andrew@Ducker.org.uk> on Thursday June 05, 2008 @10:55AM (#23668219) Homepage
    I know that modern x86 chips convert x86 instructions into RISC-like instructions and then execute _them_ - if the chip only dealt with those instructions, how much more efficient would it be?

    Anyone have any ideas?
  • by cyfer2000 ( 548592 ) on Thursday June 05, 2008 @11:00AM (#23668283) Journal
    The ubiquitous ARM architecture is 25 years old this year and still rising.
  • by Hal_Porter ( 817932 ) on Thursday June 05, 2008 @11:01AM (#23668293)

    Kinda makes you wonder how different things might be or how much farther things might've come had a better architecture become the de facto standard of commodity hardware.

    I've heard it said that most of the processing of x86 architectures goes to breaking down complex instructions to two or three smaller instructions. That's a lot of overhead over time. Even if programmers broke down the instructions themselves so that they were only using basically a RISC-subset of the x86 instructions, there's all that hardware that still has to be there for legacy and to preserve compatibility with the standard.

    But I'm not a chip engineer, so my understanding may be fundamentally flawed somehow.
    I think the important thing to remember is that total chip transistor counts - mostly used for caches - have inflated very rapidly due to Moore's Law, while legacy baggage has grown more slowly. So the x86 compatibility overhead in a modern x86-compatible chip is lower than it was in a 486, for example. Meanwhile, the cost of not being x86-compatible has stayed the same. ARM cores are much smaller than x86 cores, for example, but most PC-like devices still use x86 because most applications and OSes are distributed as x86 code.
  • Re:How Long? (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Thursday June 05, 2008 @11:07AM (#23668397) Journal

    Many of the shortest opcodes on modern Intel CPUs are for instructions that are never used. Compare this with ARM, where the 16-bit Thumb code is used in a lot of small programs and libraries and there are well-defined calling conventions for interfacing 32-bit and 16-bit code in the same program.

    Modern (Core 2 and later) Intel chips do not just split the ops into simpler ones, they also combine the simpler ones into more complex ones. This was what killed a lot of the original RISC archs - CISC multi-cycle ops became CISC single-cycle ops while compilers for RISC instruction sets were still generating multiple instructions. On ARM, this isn't needed because the instruction set isn't quite so brain-dead. ARM also has much better handling of conditionals (try benchmarking the cost of a branch on x86 - you'll be surprised at how expensive it is), since conditionals are handled by select-style operations (every instruction is conditional), which reduces branch penalties and scales much better to superscalar architectures without the cost of huge register files.
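
    As a rough sketch of the branch-versus-select idea (illustrative C only; whether a compiler emits a branch, a cmov, or a predicated ARM instruction for either form depends on the compiler and flags):

        #include <stdio.h>

        /* Two ways to pick max(a, b). The select form computes both candidates
           and chooses one, which lends itself to cmov on x86 or a predicated
           instruction on classic ARM, avoiding branch-misprediction penalties.
           This is a sketch of the concept, not a benchmark. */
        static int max_branch(int a, int b)
        {
            if (a > b)               /* branchy form */
                return a;
            return b;
        }

        static int max_select(int a, int b)
        {
            return (a > b) ? a : b;  /* select form: compute both, pick one */
        }

        int main(void)
        {
            printf("%d %d\n", max_branch(3, 7), max_select(3, 7));
            return 0;
        }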

  • by FurtiveGlancer ( 1274746 ) <AdHocTechGuy@@@aol...com> on Thursday June 05, 2008 @11:16AM (#23668503) Journal
    About the tyranny of backward compatibility? Think how much further we might be in capability without that albatross [virginia.edu] slowing innovation.

    No "it was necessary" arguments please. I'm not panning reverse compatibility, merely lamenting the unfortunate stagnating side effect it has had.
  • by Simonetta ( 207550 ) on Thursday June 05, 2008 @11:18AM (#23668551)
    Hello,
        Congrats on working at Intel for 16 years. Might I suggest that you document this period of activity into a small book? It would be great for the historical record.

        Typing is a real pain. I suggest using the speech-to-text feature found buried in newer versions of MS Word, or the IBM or Dragon speech programs. Train the system by reading a few chapters off the screen. Then sit back and talk about the Intel years, the projects, the personalities, the cubicles, the picnics, the parking lot, the haircuts, the water cooler stories, anything and everything. Don't worry about punctuation and paragraphing, which can be awkward when using speech-to-text systems. It's important to get a text file of recollections from the people who were there. Intel was 'ground zero' for the digital revolution that transformed the world in the last quarter of the 20th century. Fifty to a hundred years from now, people will want to know what it was really like.

    Thank you.
  • by vivin ( 671928 ) <vivin,paliath&gmail,com> on Thursday June 05, 2008 @11:38AM (#23668905) Homepage Journal
    Very true. I started learning assembly on the Motorola 6811, then the 6800. In my final semester at college, I took a graduate course where we wrote a small OS for the Motorola 68k. The 68k was a delight to code for: beautifully orthogonal and intuitive. The Motorola instruction set was what really got me into assembly. I tried many times to write assembly for the x86, but I simply couldn't get around the ugliness, the endianness (backwards for me), and the reversed format for source and destination... and don't even talk about those ugly segment registers. Ugh.
  • by peter303 ( 12292 ) on Thursday June 05, 2008 @11:43AM (#23668989)
    Even Intel early on recognized the limitations of its very early architecture and introduced replacements. But all were commercial failures. Customers were too attached to legacy binary software. And this left openings for companies like AMD who "did Intel better than Intel".

    So what happened is that Intel emulates itself using more modern architectures. The underlying engine changed to RISC around the 486(?), then to wide words, and more recently to cells. All emulate the ancient x86 instruction set. Each generation needs proportionately less real estate to do this. Last time I looked it was 5%, but it might be under 1% now.
  • Re:How Long? (Score:3, Interesting)

    by aproposofwhat ( 1019098 ) on Thursday June 05, 2008 @12:00PM (#23669235)
    With that much RAM, you might even be able to fit an image of your mental state into it.

    I'd have to defrag mine first, though :P

  • Re:How Long? (Score:3, Interesting)

    by aproposofwhat ( 1019098 ) on Thursday June 05, 2008 @12:07PM (#23669337)
    You're talking about desktop computers - the serious applications out there (OLAP, scientific apps) are already a few bits ahead of you.

    I'd also suggest that the state variables to describe each neuron and synaptic connection would be fairly complex, so the 16,000 times bigger probably shrinks quite a bit (hint - 1,000 separate connections per neuron can't be efficiently represented in less than 1,000 bits - and if we need FP accuracy, we're talking 32Kb / neuron).

    Give me 128 bit pointers, or give me death!

  • Re:How Long? (Score:2, Interesting)

    by bcrowell ( 177657 ) on Thursday June 05, 2008 @12:35PM (#23669755) Homepage

    I'd also suggest that the state variables to describe each neuron and synaptic connection would be fairly complex, so the 16,000 times bigger probably shrinks quite a bit (hint - 1,000 separate connections per neuron can't be efficiently represented in less than 1,000 bits - and if we need FP accuracy, we're talking 32Kb / neuron).
    Sure, let's go with your assumption of 32 kb/neuron rather than 10 kb/neuron. That means you add 2 bits on, and now the estimate is that you need 52 bits. It doesn't affect the result in any significant way. The whole thing is just an order-of-magnitude estimate, and I think you're sort of missing the point. If you could directly simulate a human brain on a computer, it would mean immortality, the end of history, the transformation of the human race into something completely different. The fact that you can do that in a 50- or 52-bit address space means IMO that it's kind of silly even to talk about 128-bit pointers.

    For perspective, let's imagine what it would take to fill up a 256-bit address space. The number of atoms in the observable universe is estimated to be about 10^80. A 256-bit address space would have 10^77 addresses. In other words, if you wanted to manufacture 1000 computers, each of which had enough memory to exhaust a 256-bit address space, you would need to use up all the matter in the observable universe, assuming you could manufacture one bit of memory out of one hydrogen atom. The point here is that if the nth generation of computer chips uses pointers with 8x2^n bits (where n=1 for 16-bit machines in 1980, n=2 for 32-bit machines today, etc.), then the size of the address space varies like O(2^(2^n)), which just gets big ridiculously fast.
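
    A quick sanity check of those numbers (a sketch in C using doubles; the 10^80 atom count is the usual rough estimate):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            /* Order-of-magnitude check of the parent's figures. */
            double addrs_256 = pow(2.0, 256);  /* ~1.2e77 addresses in a 256-bit space */
            double atoms     = 1e80;           /* rough atom count, observable universe */
            printf("2^256 ~ %.1e addresses; ~%.0f such memories would use ~1e80 atoms\n",
                   addrs_256, atoms / addrs_256);

            /* Pointer width 8*2^n bits: the address space squares every generation. */
            for (int n = 1; n <= 5; n++) {
                double bits = 8.0 * pow(2.0, n);
                printf("n=%d: %4.0f-bit pointers -> ~1e%.1f addresses\n",
                       n, bits, bits * log10(2.0));
            }
            return 0;
        }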

  • Re:Itanium sank (Score:1, Interesting)

    by Anonymous Coward on Thursday June 05, 2008 @01:35PM (#23670809)
    Actually, Itanium was more of a bid to remove AMD from the playing field than anything. The only reason that AMD can still (somewhat) compete with Intel is because they were awarded free licensing of the x86 architecture from Intel in a lawsuit. This means that AMD has access to ALL Intel patents that are related to x86 - Itanium would completely negate that, effectively shutting AMD out completely (they don't have the money or legal ability to reverse engineer the EPIC architecture).
    Intel probably knew that the move to 64-bit computing was needed, so they had their chance to completely negate the x86 patent deal they are legally bound to. What a perfect legal way to destroy a competitor!
    Unfortunately for Intel, EPIC turned out to be very expensive and no one ever really jumped on board with it. The main reason EPIC died was that it wasn't x86. There just wasn't incentive to switch to a completely new architecture, especially given the paltry performance gains achieved with Itanium - Itanium processors didn't deliver the performance increase Intel was hoping for until it was already dead (and even then the gains were modest). This had nothing to do with Microsoft, however, as Windows Server (can't remember exactly which versions) had Itanium support.

    As a side note, I want to remind people that Windows is not the anchor to x86. Microsoft HAS been willing to create Windows with support for other architectures (like EPIC and x86-64, and Windows CE has ARM support, I think); the problem is cost. There is a lot of risk in developing a non-x86 variant of Windows, and it costs a lot of man-hours when there is no guarantee of any payoff. If Microsoft wanted to run themselves into the ground, going and making Windows for every architecture that sold itself as "the next best thing" (example: EPIC) would probably be a pretty good way.
  • Re:Itanium sank (Score:5, Interesting)

    by afidel ( 530433 ) on Thursday June 05, 2008 @01:56PM (#23671145)
    The reason PPC was able to beat x86 for a time was that, around then, the x86 architecture was moving to being an ISA with the actual execution done by a RISC-like back end. The decode logic at that time was a significant percentage of the available die space. As process improvements came along, that logic remained fairly static in total resource usage but quickly became a smaller and smaller percentage of the available resources, so relative performance went up as the share of the chip available for useful work rose. Today the more compact instruction density of a CISC front end helps increase cache utilization and thus better hides the huge penalty for accessing main RAM.
  • Re:How Long? (Score:5, Interesting)

    by hr raattgift ( 249975 ) on Thursday June 05, 2008 @02:41PM (#23671925)

    I think we're likely to see flying cars, Turing-level AI, and vacations on the moon before we need 128-bit pointers.


    128-bit linear addressing is not so useful, but you can introduce structure into the address so that (for example) the first 64 bits is a network address and the second 64 bits is the address of storage at that network address. This requires distributing the functionality of the MMU across various network elements, but is not especially novel, and from a software perspective is a special case of NUMA. (The special case lends itself to some clever scheduling based on the delay hints available in a further structured network address, especially if you generally organize things such that the XOR of two network addresses is a useful (if not perfect) delay metric from the perspective of an accessor).

    This can even be done "in the small" on a non-networked host by allocating "network addresses" in the top 64 bits to local random access storage. You could look at this as a form of segmented memory (MULTICS style) or as an automatic handling of open(2)+mmap(2) based on (for example) a 64 bit encoding of a path name in the MSBs of the addresses. That is, dereferencing computer memory address 0xDEADBEEF00000001 automatically opens and mmaps a file corresponding to 0xDEADBEEF.
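
    A minimal sketch of what such a structured "pseudo-flat" address might look like in code (the struct layout and names here are purely illustrative, not from any real system):

        #include <stdio.h>
        #include <stdint.h>

        /* Treat a 128-bit "pointer" as a 64-bit object identifier (a network
           address or a file name encoding such as 0xDEADBEEF...) plus a 64-bit
           offset within that object's storage. */
        typedef struct {
            uint64_t object_id;  /* which store / node / file to resolve */
            uint64_t offset;     /* where inside it */
        } wide_addr;

        static wide_addr split_address(uint64_t hi, uint64_t lo)
        {
            wide_addr a = { hi, lo };
            return a;
        }

        int main(void)
        {
            /* "Dereferencing" this address would first resolve the object
               (open(2)+mmap(2) a local file, or contact a remote node), then
               access byte 1 within it. */
            wide_addr a = split_address(0xDEADBEEF00000000ULL, 1ULL);
            printf("object %016llx, offset %llu\n",
                   (unsigned long long)a.object_id, (unsigned long long)a.offset);
            return 0;
        }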

    The opportunities to abstract away networked file systems without losing (or even while gaining) useful information about objects' characteristics (proximity, responsiveness, staleness) suggest that the address size used at the level of a primitive ISA that uses pseudo-flat addressing is mainly limited by the overhead of hauling around extra bytes per memory access. Pseudo-flat addressing can also in principle steal ideas from x86's various addressing models for dealing with addresses of different lengths.

    Ultimately, the difficulty is in the directory problem. That does not go away even if you use radically different "addresses" for objects -- directories are already a pain if you use URLs/URIs for example, or if you use POSIX style filenames, or whatever, and the problem worsens when you have different "addresses" for the same logical object.

    (Fun is when you have to figure out race conditions involving a structured set of bytes that is in a file shared out by AFP, SMB, NFS, and WebDAV, as well as being in use locally, with client software responsible for choosing the most appropriate available access method since there is no guarantee that any one of these methods will work for all clients at all times).

    One possible approach to this is to insist that any reachable object is a persistent object, with a permanent universal name. If you have the permanent universal name, the object is either available to you or errors out. If you do not have the permanent universal name, you are out of luck unless you have a "locator" that points to it (or points to something that points to something that ... points to it). This is in some ways much easier if what is pointed to by a permanent universal name is immutable, and if most such objects are compositions of primitive PUNs, the most primitive and common of which ("well known PUNs") can be cached or recalculated locally.

    [cf. Church encoding, Mogensen-Scott encoding and normalization in the computer science sense]
  • Re:How Long? (Score:1, Interesting)

    by Anonymous Coward on Thursday June 05, 2008 @03:02PM (#23672205)

    Hrm, I wonder what this HAL thing is ... must be a virus! I'd better remove it.

    From NT 4.0 onwards, the Hardware Abstraction Layer effectively became deprecated; MS started bypassing it for better performance. It was at that time that they decided NT didn't need to be portable: since x86 processors hadn't hit the performance dead-end that was predicted, portability was no longer required.

  • by DrYak ( 748999 ) on Thursday June 05, 2008 @04:19PM (#23673447) Homepage

    Perhaps this can be taken as a lesson that it is more fruitful to evolve the same design for the sake of continuity than to start fresh with a new design.
    Nope. The best strategy is to push whatever product you have on something that will be sold on a massive number of machines and becomes so pervasive that it turns into the standard. Then everyone will stick to it because of compatibility with legacy code.

    The 8086/8088 didn't succeed *because* it was a 16-bit hack of the 8008/8080/8085. It succeeded because it was sold in the IBM PC (lots of sales), which in turn got cloned (even more sales of 8088s). By the time you sit back and try thinking about it, there are 8088s almost everywhere.

    As counterexamples:
    - Motorola 68k: wasn't a hack of the 6800, but a completely new and better architecture. Nevertheless, it managed to get really popular in 16-bit arcade machines and home consoles (to the point that it's really hard to find anything else inside those - the SNES's 65C816 comes to mind as an exception). It was the standard everyone was used to, so it made sense to keep the same chip in the consoles to help with porting arcade titles.

    - ARM: wasn't a hack, and wasn't a successor that tore down an older design either. Just a new chip. It initially attracted designers because of its efficiency, low power, and low cost; found success in embedded applications; and grew fast. Now engineers are so used to it that the architecture simply can't be replaced. At least, unlike x86, it's a very nice one, and nobody is complaining about its dominance.
    You can find it in almost anything that is microprocessor-controlled but isn't a desktop.
    To the point that Intel has a hard time pushing its Atom chip in the PDA world.

    The first instinct of the engineer is always to tear it down and build it again, it is a useful function of the PHB (gasp!) that he prevents this from happening all the time.
    No. He must not avoid tearing down at all costs. He must avoid tearing down something that is very popular and pervasive. He can safely tear apart and rebuild, better, something that nobody cares about.

    The web is a nice example: when HTTP was invented, there were already other transfer protocols. Nevertheless it turned out to be very popular, because, well, the whole web thingy didn't exist before it. HTTP was new *in its own niche* and didn't try to replace something popular before it. On the contrary, it became very widespread itself (thanks to the popularity of the Web, which used it), and thus became a standard that everybody is using today for completely unrelated stuff (HTTP as the transfer protocol for Jabber, BitTorrent, some RPC, etc.).

    Unix was popular when Linux arrived; thus, Linux's compatibility with the "widely used standard" did matter.

    Mac OS X's success is simply explained by the same mechanism: the Mac is a controlled platform - no 3rd-party hardware makers who could be pissed off by an incompatible switch in software or hardware.
    Being more in control of whatever runs in a Mac enables Apple to "abstract" each successive upgrade (68k -> PPC, Classic -> OS X, PPC -> Intel) by putting the former in an emulator running on the latter.
    Thus, for the Apple user, whatever is being used underneath doesn't matter; the applications still run the same - except with more stability.
    And thus, Apple engineers can safely tear it apart and rebuild it.
