Hardware

Is The x86 Obsolete?

levendis writes: "Ars Technica has an excellent article up on the future of the x86 architecture. It touches on new ideas from Transmeta, Intel, HP's Dynamo, and a bunch of other technology that keeps the 20+ year old architecture alive and kicking." As always, the Ars take on this (specifically, Hannibal's) is lucid and thoughtful, and grounded in some interesting history.
  • With a bit of recompiling, I can run the same Linux software on x86, Alpha, Sun, PPC etc. with only a few minor issues. The only incompatibilities are those inherent in C/C++.

    ... and these "incompatibilities inherent in C/C++" would be...?

    There are none. Incompatibilities arise when programmers assume a certain integer size (32-bit, usually... I don't know of anyone who writes code that's meant to be reasonably portable and assumes an integer is 64 bits :-) The size of an integer changes from architecture to architecture based on the size of the registers, not on incompatibilities in C/C++.
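
    For instance, a trivial C program (nothing platform-specific here, purely illustrative) prints different numbers on a 32-bit x86 box than on, say, a 64-bit Alpha - which is exactly where those "incompatibilities" come from:

        #include <stdio.h>

        int main(void)
        {
            /* These sizes are implementation-defined, not fixed by C. */
            printf("int:   %u bytes\n", (unsigned)sizeof(int));
            printf("long:  %u bytes\n", (unsigned)sizeof(long));
            printf("void*: %u bytes\n", (unsigned)sizeof(void *));
            /* Code that stuffs a pointer into an int compiles everywhere
               but breaks where pointers are 64 bits and int is 32. */
            return 0;
        }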
    --
  • Because it's NOT THAT BAD. No, it's not perfect, but anyone who has ever programmed assembler for one must realize that hey, it works. Modern x86es combine the best of the modern RISC chips with the best of the old-style CISC chips (like hardware stacks and more registers).

    No, the reason x86 hasn't died is that it's as bad as it can possibly be without forcing people away from it. Blame Microsoft and idiot peecee buyers for its continued plague upon the earth. Modern x86s are an excellent implementation of what is likely the worst architecture ever created. Their "design", to the extent that it exists, combines the worst of RISC (hard to write good compilers for) with the worst of CISC (lots of useless, confusing instructions, nowhere near enough registers) and some extra Intel-specific bad bits (the stupid CISC->RISC translation mechanism, for example, 16-bit compatibility, the nonlinear memory model). What a crock.

  • Also realize that all of these instructions are fixed at 32 bits on most chips. That's 32 bits to copy a register, 32 bits for a return, etc. This may simplify the hardware, but at the expense of bloat. So you need a bigger instruction cache.

    "This is a feature, not a bug."

    It's a tradeoff; yes, it takes more space to cache fixed-length instructions, but it's easier to pipeline them, faster to look ahead, etc. Speed versus space.
  • I think that Hannibal is dead on with the whole idea of translation technology being built into future chips. I cannot believe that Intel is not trying to duplicate Transmeta's own clever innovations in a manner that does not infringe on patents. And once Intel does it, everyone else is going to have to follow suit.

    That's the hardware end. Programs don't just run off an ISA; they also run off an API. What is the software technology that would best run in combination? Virtual machine technology. My gut feeling is that someone will build a PC with the ability to boot virtual Windows sessions where the programs think they're on an x86 with all the requisite hardware, while the rest of the computer is emulating some nicely streamlined ISA and you've got Java code running in some sandboxed virtual machine elsewhere on the system.

    At this point you have support for legacy ISAs and legacy APIs in a nice simple format. As computer architectures die, they just end up virtualized and emulated/translated. And the computer is designed, from the hardware through the operating system, to do it seamlessly. In time everyone assumes they're on a virtual machine, and new operating systems evolve to adapt to that environment.

    No doubt such a setup will even allow for fine-tuning, for things like emulating old Apple ][ systems as well as original IBM PCs running at 4.77 MHz to get all those old games running just right. Or old Nintendo/Sega/whatever boxes. All those emulation setups get ported to the virtual machine setup and the appropriate ISA, and you've really got emulation there.

    Companies will like it because they can ditch old unsupported hardware but keep the software around forever. Especially companies with several brands of hardware/software that they can suddenly run under a single brand of hardware.

    This is Microsoft's worst nightmare, because all of a sudden switching to a new operating system does not mean dropping old software. It can be done seamlessly and as gradually as people want. To Apple, it is also a bad nightmare, especially if other people work out Mac virtual machines on other hardware.

    As for Intel, they're already taking a bruising from the other CPU manufacturers. Intel will take to the translating setup with a vengeance, but instead of going in the direction of power consumption (or alongside it) they're going to focus on performance, the niche they've always pursued. And their rivals will go and do the same.

    To PC vendors like Dell and Compaq (but not, as I said, Apple), it's something to be welcomed. They're already in the trenches competing against other machines that run all of the same software anyway. Anything that seamlessly expands the range of software and operating systems they can run is fine by them.

    To operating system vendors (except for Microsoft and to a lesser extent Apple) it will be a major blessing. All of a sudden experimenting with a new operating system becomes easier and switching to it becomes useful. Niche operating systems can thrive being used for specialized applications on an existing machine.

    Things like OpenBSD can suddenly become really popular, doing things like being the one virtual machine designated to see the DSL connection to the outside world, while the Windows and Sega Genesis virtual machines are there for playing old games and the Linux or FreeBSD virtual machines are used for getting work done.

    This is the direction I think the PC will evolve in to compete with the small and specialized Internet appliances. They are going to take their strength, flexibility, and push further in that direction. The CPUs will become more flexible, and the operating systems will capitalize on that and take virtual machines to the next level.
  • Patterson and Hennessy make the point in their seminal book - no architecture has survived an address space crunch, which arrives on x86 around 2-4 GB of memory (and is already biting with the Linux limitations on file size). Typical desktops are still a few generations away from this amount of virtual memory, let alone physical memory, but servers are getting to the point where this poses a serious limitation. Ergo, Intel is providing the IA-64, which will remove the limitation (modulo some PCI problems with >2^32 byte address space, IIRC - have they been fixed yet?).
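
    (The file-size bite is easy to see in C: on 32-bit Linux, off_t is a signed 32-bit type unless you ask for the large-file interfaces. A minimal sketch, assuming a glibc with LFS support:)

        /* Must appear before any #include to get a 64-bit off_t
           on a 32-bit Linux system. */
        #define _FILE_OFFSET_BITS 64
        #include <stdio.h>
        #include <sys/types.h>

        int main(void)
        {
            /* Without the define above, off_t is 32 bits signed,
               so files top out at 2 GB. */
            printf("off_t is %u bits\n", (unsigned)(sizeof(off_t) * 8));
            return 0;
        }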

    Once this becomes a more widespread problem, the x86 architecture, in its present form, is doomed. At that point, what the industry will converge on (and whether it will converge at all) is an open question.

  • by phil reed ( 626 ) on Friday June 16, 2000 @05:31AM (#997985) Homepage
    Back when I was in school (showing my age), we had some Perkin-Elmer machines in the computer science department. One of the interesting features of these machines was user-accessible microcode - we could create our own instruction set if we wanted to.

    The IBM 360 and 370 series not only had microcode-based hardware, but IBM could and did ship out microcode updates (originally on 8-inch floppy). Among other things, IBM got in trouble with the anti-trust folks because they would send out microcode "updates" that just happened to break 3rd party peripherals - you installed the REQUIRED update and your Amdahl hard drives stopped working, for instance. IBM also would put high-level instruction code support into their microcode. For a long time, IBM's sort software package ran faster than anybody else's because they had microcode instruction assist - kind of a secret machine instruction that the competitors didn't have. It's like the private APIs in Windows.


    ...phil

  • by Hard_Code ( 49548 ) on Friday June 16, 2000 @06:44AM (#997986)
    Did anybody actually /read/ the article? Hannibal argues that asking whether it is obsolete is not even a meaningful question. x86 was obsolete as soon as there was something more convenient to use. But that's not going to be really relevant anymore, because of the advent of third-generation chips which will support whatever ISA you want. If you don't like x86, use something else. People have been making fast, successful, "obsolete" x86 chips for as long as x86 has been around. Just look at AMD's and Intel's latest processors for evidence of that. The question is whether x86 will be relevant or not.
  • by tealover ( 187148 ) on Friday June 16, 2000 @03:47AM (#997993)
    Haven't the RISC folks been telling us that since, oh, the x86 chips first came out? Eventually, they'll be right.
  • All this stuff aside, there is one very simple, absolutely deterministic way to determine whether something is obsolete: nobody uses it anymore! It doesn't get any simpler than this - the x86 has some redeeming qualities, otherwise nobody would use it.
  • Depends how you count it. :)

    A first post by me was 5 at one point, then got marked back down to 0. It was regarding the May 2nd DMCA protest that Slashdot refused to cover. Unfortunately I was not able to find it in the archives, as I know I posted it in a completely off-topic article.
    The x86/AT architecture is certainly a horrible platform to program for on a metal-bashing level. I've been looking at OS development, and from what I've seen already, the whole architecture is horrible and fiddly to program for.

    The x86 instruction set isn't even that nice; we have extension upon extension, creating a horrible mess of standards and layers, each of which you need to accommodate; a seriously limited number of hardware interrupt lines to attach all the important hardware to; etc., etc.

    Personally, I'd like to see the x86 and the AT die, and quickly, please.
  • ...this is WAY more answer than we needed. Here's what Slashdot wants to hear (and all it can understand): "x86 will die a horrible flaming death in 9 months. Also, its girlfriend will leave it."
    --
    Compaq dropping MAILWorks?
  • Yes, but lots of people can't stand Macs because they don't have any backwards compatibility.

    They don't? I run 68K apps such as Illustrator on my iMac no problem, dude; there's a little calculator app I like that was written when the Mac Plus was cutting edge. It still works fine!

    As for Mac users howling, you bet they did. When I told a guy I worked with that his 3 year old mac was too old to play a game he got for his young daughter he was pissed.

    So you mean 3-year-old PCs can play new games? Often not the case.

  • The x86 is hardly obsolete, except maybe in our own little world. For parts of the world that don't have the income to lease a new SUV every two years, the cheap x86 clones are cutting-edge technology and will continue to be around for quite a long time.

    It's cool to have newer, faster, better hardware but does it actually let you get more work done or have more fun? For a few individuals, yes, but the vast majority of us have had computers faster than we can type for a long, long time.

    Don't make something obsolete when something better comes along - make it obsolete when it ceases to be useful. I still use a 486 as a simple mail and web server just fine, thank you.
  • While I would like to see the x86 hardware go away, it's gonna be around for a long time. I think the real issue is how you get your legacy code to run on different OSes. Your Win9x/NT apps don't run on Linux/x86, yet they have the same hardware. To solve this kind of problem, the OSes will need to do what the microcode is doing: have a low-level OS that runs between the user OS and the hardware. Which pretty much sounds like a Mach microkernel. If Mach were more widespread then legacy might not be as big an issue, but code will always need to be updated for the latest user OS.

    Just some random thoughts.
  • The x86 architecture's design is hindered by backward compatibility requirements, making it significantly less efficient than chips designed from scratch. A powerful Pentium-class CPU burns significantly more power than, say, an equivalent Alpha or PowerPC, and dissipates more heat. The maximum speed, and maximum performance, of such a CPU are inferior to newer designs. And in order to perform reasonably under modern conditions and retain compatibility, such a chip is necessarily much more complex.

    The only reason that the x86 has stayed around is market inertia and economies of scale. Because of the large scale of manufacturing, x86 machines are a lot cheaper than newer architectures, and most binaries are for the x86. Rather sad, really, but that's the way it is.
  • Heh, what up spiralx? Go post something on Smokedot instead of reading this crap :-)

    Anyway, I smoke weed like a fiend - every day, if I can help it. And I get tons of stuff done. I work full time, write open source software [sourceforge.net], do CGI art [umd.edu], and other stuff. I can't usually get anything done if I'm not stoned though :-)

    Alright, this is drifting way off topic.
    --
  • Non-linear memory model is true of any system that uses paging, and is a very VERY good thing.

    I was referring to the "segment" concept, the fact that physical memory is nonlinear (or however you prefer to describe it. The point is that pointers require two registers.) Thankfully protected mode helps somewhat and makes it possible for an OS to offer virtual memory and a linear address space, but the fact that 16-bit real-mode segmented address spaces still have to be dealt with at all, ever, is obnoxious and stupid.
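
    The arithmetic is easy to show in a few lines of C (purely illustrative):

        #include <stdio.h>

        /* Real-mode 8086 address translation: two 16-bit values,
           segment and offset, combine into a 20-bit physical address. */
        unsigned long phys(unsigned seg, unsigned off)
        {
            return ((unsigned long)seg << 4) + off;
        }

        int main(void)
        {
            /* Many different segment:offset pairs alias the same byte. */
            printf("%05lx\n", phys(0xB800, 0x0000));  /* b8000 */
            printf("%05lx\n", phys(0xB000, 0x8000));  /* b8000 again */
            return 0;
        }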


  • I don't want to pretend to be an expert, but isn't the biggest limitation of the Intel processor the bus architecture, which only lets one channel of communication between two devices happen at a time?

    I know that SGI hardware has a type of switched bus that allows multiple devices to talk at the same time, which allows for much higher sustained bandwidth.

    Does anyone know if x86 chips can run on a non-bus architecture, or is functioning on a bus architecture part of the chip's instruction set?

  • 1> Write an emulator in C (since there are more machines that you can target with C than there are x86-es).
    This is exactly the approach that Java is supposed to provide. (For the flavor of it, a toy fetch-decode-execute sketch follows at the end of this comment.)

    And it is proving to be popular. However, it's going to be quite a while before someone writes a Java/C interpreter/compiler/emulator/translator that can provide a good enough environment in which to produce something like Quake VIII.

    Even the guys at Transmeta don't get it.
    No, they do get it. They know that the market is ready right now for the technology that they're providing. Java, on the other hand, is relegated to the non-gaming segment until more advances can be made to the technology. But you're right - this is the approach of the future. It will become more and more mainstream.
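
    As promised above, the core of the "emulator in C" idea is just a fetch-decode-execute loop. A toy sketch with a made-up four-opcode bytecode, nothing like real x86:

        #include <stdio.h>

        /* Toy stack machine: PUSH imm, ADD, PRINT, HALT. */
        enum { PUSH, ADD, PRINT, HALT };

        int main(void)
        {
            unsigned char prog[] = { PUSH, 2, PUSH, 3, ADD, PRINT, HALT };
            int stack[16], sp = 0, pc = 0;

            for (;;) {
                switch (prog[pc++]) {                 /* fetch and decode */
                case PUSH:  stack[sp++] = prog[pc++]; break;
                case ADD:   sp--; stack[sp-1] += stack[sp]; break;
                case PRINT: printf("%d\n", stack[sp-1]); break;  /* prints 5 */
                case HALT:  return 0;
                }
            }
        }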
  • It's not necessarily true that RISC code is good for compilers. ARM assembler is pleasant to code by hand, but most compilers generate relatively poor code for it. At least, gcc was not blazingly fast on ARM last I checked, and neither was Norcroft C.
  • Never criticize those who came before you. They didn't have the benefit of your knowledge.

    The 8086 is not what I'm criticizing. Yes it sucked but so did everything else at that time. What I'm criticizing is the decision of the engineers to let the marketdroids run Intel and thereby prolong the life of a design that had no business surviving past 1988 or so. In the mid-80s the SPARC and MIPS projects were starting to produce marketable CPUs. These CPUs were well-designed and well-implemented, and fast. Intel's engineers are more than capable of competing with such offerings, but they chose instead to allow non-technical idiots to dictate technical policy. Specifically, the 486, Pentium, ... are mistakes and deserve to be treated as such. These CPUs should never have existed because Intel should have abandoned an architecture that by the time of the 386 was already aging very badly. I will gladly excuse technical mistakes made in the absence of information we have today. I will not excuse bad policy decisions which should have been made by engineers, made instead by braindead marketdroids.

    In legal-speak, I'm saying that Intel knew, or should have known, that their current product offerings were technically inferior to those of their competitors, and should have adjusted their product line accordingly.

    The i860 and i960, along with the ability to manufacture x86 CPUs that offer any performance at all, prove that Intel has a great many competent, if not brilliant, engineers. Their mistake was in giving up control of the product line. When Intel's marketdroids announced the 486, every Intel engineer should have either demanded a change, or cut and run. There is no excuse for the continuing existence of the x86.

  • However, you do _not_ have to save all 32 registers, unless you use all 32. You only have to save the ones you are going to use.

    I never said that. I said 15 or more. On the PPC, 15 is about the max. In general, though, there are about 15-20 instructions of overhead in a non-trivial, non-leaf subroutine (in C), but it can be twice that.

    You need to stop and think about what "complex" means. A CALL instruction is not complex. Heck, it was standard on 8-bit processors with fewer than 10,000 transistors. Complex instructions are some of the crazy things done in hardware on the VAX and IBM 360. If a CALL instruction is considered too complex to implement efficiently in hardware, then we shouldn't even bother with things like texture mappers or floating point math. The bottom line is that RISC has gone over the top, making things more simplistic than we really want.
  • Bill Gates was just a tiny little software vendor at the time. His competitor was Digital Research, with a thing called CP/M-86. The decision to use the 8086 was made long before Bill Gates even got involved in the process. He was just a software vendor, with no input at all into the design of the machine.

    Maybe he had delusions of this type of grandeur, but no footing in reality at the time.

    The 68K didn't even exist yet. The Apple Lisa, which was Apple's first 68K machine, was still about 5 years away. Even the 6502, which the Commodore 64 and Apple II used, didn't even exist yet.

    The obvious 8-bit choice would have been the Z80 or the 8080 (more likely the Z80). The Z80 was the standard chip for the late-70's CP/M machines. It is possible that Intel had some input on using the 8086 instead of the 8080. The reason for the 8088 was that it used an 8-bit bus architecture, which allowed IBM to leverage cheaper motherboard designs. 8086s were simply too expensive to build at the time - an 8086 PC with a green mono monitor and two floppies was somewhere around $6000.

  • I see you threw Wine in there. In theory Wine would run fine on any other platform, but there isn't much purpose to it. Wine allows you to run Windows binaries, which are created for x86. Even if you managed to port Wine to SPARC, for example, what SPARC Win32 programs would you run?

    I wouldn't be surprised if winelib was ported though.
  • Yes, but lots of people can't stand Macs because they don't have any backwards compatibility. When I was going to buy my first computer I bought a PC because I knew any Mac I bought would not only be obsolete when I bought it, but also be unsupported by everyone - even Apple - within a few years. Plus Apple alters its hardware specs enough between models that you can't upgrade the hardware to get it compatible again... You just have to buy a new Mac after 3ish years. It's almost as bad as Microsoft, really.

    Wow, what an outrageous load of FUD this is.

    As for Mac users howling, you bet they did. When I told a guy I worked with that his 3 year old mac was too old to play a game he got for his young daughter he was pissed. He should be, by ditching compatibility like that it destroys any value his 3 year old computer had whatsoever.

    Well, you shouldn't lie to people like that. I run UT, Falcon 4.0, and a number of other interesting things on my 3 year old Mac. Maybe your Mac-user friends will wise up and stop asking you for advice on things you know nothing about.


    --

  • by The Man ( 684 ) on Friday June 16, 2000 @09:00AM (#998062) Homepage
    since you can pick up an O2 for about the same amount as a mid to high end powermac

    A reasonably configured used O2 in perfect condition can be had for under US$1500, about the same price as a midrange peecee. An R10k High Impact Indigo2 can be had for about $1300-$1700 as well. That's a fully 64-bit system with a 200 MHz processor (faster than it sounds) and graphics faster than all but the high-end peecee offerings. Even Sun Ultra 2 systems, which are also fully 64-bit and offer dual CPU capability, are less than $3000 in reasonable configurations today, and it's even possible to get them new. You can say what you like about high workstation prices, but in the real world, clever individuals can get nice, if slightly out of date, systems that offer good to excellent performance for prices comparable with peecees.

    On top of that, the only unix box hardware I really appreciate is SGI, but the only commercial unix I would run is Solaris - Which is a fundamental incompatibility.

    That seems odd. Both SGI and Sun build great machines, but I'd rather put a fork in my eye than have to use Solaris. IRIX is ok most of the time though. IMO the only acceptable OS for Sun boxes is Linux. Try it; you'll like it.

    Feel free to feel like you have a larger penis because you've left the PC platform

    [Looks down] Looks pretty standard to me. A refusal to compromise with idiocy doesn't come from the penis, it comes from the brain, and I'm pretty sure ours are within 20% in size.

    Until the cost of systems based on other processors drops

    It has. See above.

    the number of available applications must increase...

    I don't know about you, but I have solid applications - we're talking about things that actually work reliably here - for every task I might possibly want to do on a Unix box. I challenge you to name a task I can't do on a Unix box. That Turd or whatever other flavor of the month isn't available isn't important - what matters is what tasks you can do, and how easily you can do them. I've found that Unix systems offer more applications than I could ever find a use for.

  • Chemical dependence... good... you've identified your problem.

    Not quite... while chemical dependence is a good way to describe my relationship with nicotine, it doesn't describe my relationship with THC. I could never get anything done before I started smoking pot either. I was always very lazy... the only time this isn't true is when I smoke some nice kind bud.
    --
  • by tilly ( 7530 ) on Friday June 16, 2000 @07:14AM (#998073)
    The comments on how to have different platforms be binary compatible are interesting in their own right. What I find interesting is how the same idea in a different form is implicit in what Torvalds writes. For instance read his essay on the kernel [oreilly.com] from Open Sources carefully. Here is a more technical explanation [kernelnotes.org]. In both cases you abstract out from the architecture, OS, library, whatever the interface you want to program to, and then (with appropriate macros etc) set up that interface. Then when you go to port it, you merely need to figure out how to set up all of your macros and the bulk of the code remains untouched.

    Look at that sideways. That is *exactly* what IBM did to make code binary portable. That is the principle the AS/400 uses. If you peek into well-known and widely ported projects (e.g. Perl) you will often find that they take the same approach. (For good reason!)

    The key to wisdom lies in seeing how good ideas about foo look like good ideas about bar and then trying to apply that. There is a good lesson here about portability...
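
    A minimal sketch of the pattern in C (hypothetical macro names, gcc-style inline asm, in the spirit of the kernel's per-arch headers):

        /* Each port supplies the same tiny interface; the bulk of
           the code only ever sees these names. */
        #if defined(__alpha__)
        #  define CACHE_LINE  64
        #  define barrier()   __asm__ __volatile__("mb" ::: "memory")
        #elif defined(__i386__)
        #  define CACHE_LINE  32
        #  define barrier()   __asm__ __volatile__("" ::: "memory")
        #else
        #  error "port me: define CACHE_LINE and barrier()"
        #endif

        /* Portable code below this line never names an architecture. */
        char aligned_buf[CACHE_LINE];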

    Cheers,
    Ben
  • Well shit, I guess I'm hallucinating all of this, since I'm on a PowerMac 8100/80 with a G3 upgrade running OS 8.6. That's six years old for you non-Mac types. Runs Office 98, Photoshop 5.5, Quark 4.1, Illustrator 8, Explorer 5, and Quake 3 like a champ. Besides that, backwards compatibility has nothing to do with an old computer running new software; rather, it's a new computer's ability to run old software that makes for backwards compatibility. A G4-500 with OS 9.0.4 runs the vast majority of Mac software ever made, even 68k code through emulation. When I was running OS X DP3 on a G4 it would do the same in the Classic layer.

    I'm also somewhat dubious of your claims concerning the 3-year-old Mac. In '97 you're talking 604s at 180+ MHz (4400s, 7300s, 8600s, and 9600s) that should run all of today's software without any problem, at least as well as a P200 would on the other side of the fence. The only PowerMacs without an upgrade path to at least a G3 are the original PCI-based Macs (7200s). Those were doomed machines from the start though (the whole Carl Sagan thing ;)).

    The only machines that have been totally ditched, support-wise, are the 68k machines. Macs based on the 68030 and below (9+ years old) were dropped with OS 8, and 68040-based machines (7+ years old) were dropped with OS 8.5. That's all right though, just throw a BSD on the thing and go to town.

    The Performas are a different story (although it's been nearly 5 years since they quit making those junkers) but then again so are the PS/2s.

  • OK, I don't know where that moderator gets off, but my post was NOT redundant. I related a personal view. Redundant?!

    Whoever you are I hope you run out of points soon.

  • Of course the x86 is obsolete, but Intel hasn't yet figured out it could make buckets of money following Microsoft's lead, by making PC users upgrade -- first to x95, then x95 OSR2, then x98, and now x98 SE.

    :)
  • Folks,

    I find it very amusing that people think the x86 CPU architecture is obsolete.

    That may have been true for the 8086 with its 1 MB memory addressing limit and the 80286 with its 16 MB memory addressing limit, but once the 386DX with its 32-bit flat memory addressing scheme became available, in theory the x86 could address as much as 4 GB of system RAM! It's mostly physical memory limits on the motherboard and limits in the motherboard's memory controller chip that have kept computers from addressing all 4 GB until now.

    Besides, the x86 architecture has undergone an unbelievable increase in performance. Remember when the first 386DX CPUs were rated at a meager 12 MHz 15 years ago? We now have Pentium IIIEB and Athlon CPUs running at around 83 times the clock speed of the original 386DX, with vastly better memory management.

    Besides, very few programs for stand-alone workstations demand more than 256 MB of RAM nowadays. And most server applications run extremely well with 1 GB of RAM, especially on the Linux server machines.

    The big bottleneck is no longer the CPU; it's mostly hard disk access times and access times through the network adapter card that hold your system back. Now you know why RAID 5 hard drive arrays and Gigabit Ethernet NICs are used on high-end servers.

    However, I do see that non-x86 architectures may become more prominent in the next three to four years. Projects such as LinuxPPC will allow Linux applications to run on systems that use the PowerPC CPU, a CPU with superb memory addressing capability and an equally superb FPU. If Linux becomes popular enough, maybe we might even see a revival of the PReP platform in an updated version running LinuxPPC, machines sold to people who need serious FPU processing power, such as engineers and computer animation artists.
  • A powerful Pentium-class CPU burns significantly more power than, say, an equivalent Alpha or PowerPC, and dissipates more heat.

    What are you smoking? Alphas suck power like there's no tomorrow, and my 600 MHz EV56 heats my room. I moved it into another room, and now with only my P2 my room is much cooler (and quieter for that matter, but that's not the processor's fault). Ever seen an Alpha laptop? Wanna know why?

    (of course, somebody is going to respond saying "I've seen one!" but there were only like 1 or 2 models made so save it.)

    You're right about the PowerPC though.
    --
  • by Jon Erikson ( 198204 ) on Friday June 16, 2000 @04:00AM (#998102)

    Hardly. Whilst I don't know of anyone who likes the x86, saying that it's obsolete is extremely premature - look at the increases in processing power that have gone on over the last few years and are still continuing with things like AMD's forthcoming Sledgehammer.

    The fact is that despite its poor design, chip makers have done some amazing things to push it to greater speeds - the Athlon CPU looks and works nothing like the 8086; they just happen to run the same instruction set. And this year we'll be seeing the GHz barrier broken - hardly the sign of an "obsolete" chip, is it?

    As long as the chips are still getting faster and people are still buying them I think calling the x86 platform obsolete is incorrect. A pain in the ass? Sure, we'd all like a brand new chip design, even Intel, but it works, and it's still growing.


    ---
    Jon E. Erikson
  • A reasonably configured used O2 in perfect condition can be had for under US$1500, about the same price as a midrange peecee. An R10k High Impact Indigo2 can be had for about $1300-$1700 as well.

    That's not what's holding me back. It's that $15K for Alias|Wavefront Power Animator, or $5K for Maya :)

    Of course, as soon as Maya gets released for OS X, I'm sure I could get it illegally at all the usual places...

    Pope

    Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
  • From day 1 there were better architectures available, and at a lower cost. The only thing that kept it going in the early days was the market perception that PCs were business machines, so all the businesses bought them.
  • Do you not understand that the 3DNow! and MMX instructions are correlated to hardware within the chip? To add MMX to the Pentiums, Intel had to increase the size of the integer instruction unit. Telling your system not to include MMX would just shut down the MMX section of the IU. That is rather stupid. If you made a really simple RISC architecture with an emulation layer you'd get so much of a performance hit you'd be shooting yourself in the foot. Without any complex instruction units you're left with a Crusoe, and it's already patented.
  • Thus you're saying the x86 needs a single instruction and about ten thousand registers

    Not even that. All we have to do is apply multiple cycles of Phil's Law of Program Optimization:

    1. Every program can be made one byte smaller.
    2. Every program has at least one bug.
    Conclusion: Every program can be optimized until it's only one byte long. But, it will be the wrong byte.

    That makes it a perfect match for your single-instruction x86.


    ...phil

  • by MartinG ( 52587 ) on Friday June 16, 2000 @04:03AM (#998120) Homepage Journal
    Much as I hate people who post definitions of words in /. comments, here goes.

    obsolete
    adj.

    1) No longer in use: an obsolete word. See Synonyms at old.

    No. x86 is not obsolete.

    2) Outmoded in design, style, or construction: an obsolete locomotive.

    Yes, x86 is obsolete.

  • Ever play any games in the Quake series? They've got plenty of parts coded to the metal, IIRC. There's one thing you're forgetting with the different architectures: will my internal components be able to run? Sure, my software can be compiled on x86, SPARC, or PPC, but can I stick in my SoundBlaster and have it work in said OS? Can I plug in my AGP video card, stick in a driver CD, and have it all work dandy? No I can't. Sure, companies could write drivers for 20 different operating systems, but why would they want to spend that sort of cash? If writing for different architectures meant different OSes, they'd have to write OS-specific drivers, or completely open their product specs, which sort of defeats their patents, which means there's no incentive to sell their product. The open source world is socialistic in thinking that a money fairy is going to put clothes on their backs and heat their houses for them. You also need to remember that not all processors are equal; not everyone has the same operations. So if SoftwareA needs to use FunctionFoo and a SPARC or PPC doesn't have a FunctionFoo, you're not going to have a very functional program.
  • When cosmic rays hit certain materials (metals especially) they cause a tiny nuclear fission, which usually makes a charged alpha particle fly off. Alpha particles aren't too large, but they can pack a wallop to very small electronics - the sort of electronics found in processors. If you have a very tight die with little space between components, there is a much larger chance of several of these being bamboozled by a stray alpha particle. Wider dies mean that if a single part of a circuit is taken out, the unit as a whole can still work decently enough for your needs. Make yourself some effective radiation screens and you can stick a brand new .13 micron processor inside and it will have a reasonable run aboard said spacecraft.
  • It's a bit of both. RISC architectures try to make the instructions easy to decode, more like the microcode on a CISC architecture, if you've ever examined a CISC microcode listing. They also jettison complex instructions and addressing modes that take multiple cycles and screw up pipelines. The VAX POLY instruction is my favorite example. That doesn't stop them from adding a bunch of new, simple, easy to decode instructions.
  • The only thing keeping x86 alive is the fact that if people tried to ditch it today, 90% of the world's desktop software wouldn't have anyplace to run tomorrow.

    For that you can thank Microsoft's amazing track record at porting to different platforms.

    Otherwise you wouldn't see so many companies trying to keep the architecture limping along.

    --
  • "22x faster with Multimedia than any other JVM!"

    Don't believe everything they tell you. Since the introduction of HotSpot there's nothing wrong with the execution speed of Java (check the recent story on Java performance; it's actually faster than C in some cases). The reason why Java applications (graphical and multimedia in particular) are still slow is the way its libraries are implemented. Java2D and the Swing classes in particular are slow compared to their C/C++ counterparts. A faster interpreter does not help very much. In fact, the early Swing implementations ran faster with the JIT disabled! So, unless Tao completely reimplemented Swing and the other libraries, I'd be surprised if it performed significantly better than other JVMs.

    I really liked the discussion in the article about obsolete versus irrelevant. The real thing that has become obsolete is not the ISA but the static compiler that ties programs to it. The only reason x86 is still around is that there is no convenient way to convert an x86 binary to, say, an Alpha binary on the fly without performance loss. x86 will be around as long as people choose to statically compile their programs. The interesting question is not how long x86 will be around but how long hardware implementations of x86 will be around. Hardly anybody codes directly to an instruction set anymore. A simple recompile will port Linux applications to other processors; often no changes are needed to the code. So the dependency of a program on a specific ISA is not functional. In fact, as HP's Dynamo shows, it is counterproductive.

    Transmeta's Crusoe is the first of a generation of processors that can execute x86 efficiently without having a hardware implementation of it. My guess is that in five years or so, all major chip manufacturers will have stopped implementing instruction sets in hardware.

    Java is ahead of its time in a way, since it is not dependent on specific instruction sets. The HotSpot idea is not fundamentally different from what Crusoe does.
  • I've programmed a variety of modern chips at a low level--MIPS, PPC, x86, SHx--and there's more to be said for the x86 than many people realize. For example, on a RISC chip, the subroutine call overhead can be stifling. Yes, it only takes a cycle or two to make the call, but then there's no hardware stack, so the return address has to be saved manually (usually two instructions to save it and two to restore) except for leaf functions. And then you may have to save 15 or more registers, at one instruction each, and restore them at the end of the routine. This all comes down to 20-40 instructions of overhead per subroutine. Is that progress? On the x86, subroutine calls are much faster and cleaner.

    Also realize that all of these instructions are fixed at 32 bits on most chips. That's 32 bits to copy a register, 32 bits for a return, etc. This may simplify the hardware, but at the expense of bloat. So you need a bigger instruction cache.

    Is the x86 perfect? No. If you look at an x86 reference, you'll find that over 50% of the instructions are either (1) really old things that mattered in the 1970s but not any more, like daa; (2) instructions from the 8086 and 80286 that run poorly on more recent chips, like lods and leave; (3) along the same lines, instructions for managing segment registers and other 16-bit relics; (4) MMX- or Katmai-related; or (5) specialized instructions that we could easily live without, like the set family. If you take all of this out, you pretty much have a RISC chip. And you'd still be compatible with 95% of the code that runs on the Pentium II and III. I expect we'll be seeing this kind of thing soon from either Intel or AMD.
  • by gmhowell ( 26755 ) <gmhowell@gmail.com> on Friday June 16, 2000 @04:05AM (#998131) Homepage Journal
    Geez, it's a 30 year old OS, there are tons of newer ones available that handle everything almost as well as *nix platforms do. It's time we got rid of this albatross and moved on.
  • Beowulf?

    Why do so many people misunderstand? It's not simply like having a 300-processor machine.
  • When GCC is compiling, what exactly do you think it is compiling to? It isn't usually the bare metal; it is compiling to the ISA. And why the fuck would I want to recompile my software if I bought an upgraded CPU? Large programs with a lot of linking take a while to compile even on fast processors, which means I'd have to wait to actually do anything important on my system. And then I'd have to compile all the programs I use. Do I really want to waste my time compiling everything (long live binary packaging!)? No I don't. The way to write directly to the metal of an Athlon is to write in Athlon assembler. Writing in assembler is fine and good and optimized if you're a good assembler programmer and the chip manufacturer doesn't change any of the chip components you're writing to. Do you realize how expensive it would be to write 30 different versions of Quake 3 or Office 2000 just to deal with a particular iteration of a processor?
  • AFAIK you seem to have IDE confused with Intel memory architecture. Intel chipsets use DMA, which lets all devices talk directly to one another rather than through a bus controller and such. SGIs use a unified memory model, which says the memory bus is the central point of the system rather than the processor. Intel's says the processor is the king of the motherboard. I can't recall any real bandwidth restrictions, just addressing restrictions, which severely limit the number of processors you can have sharing a single bus. If you really push MIPS systems you can get 128 chips all on the same memory bus.
  • The 286 was the first protected mode Intel processor. And it could switch into protected mode, but required a reset for switching back to real mode. This is like having a really nice new Alpine CD player where you have to coax the CD back out with a bread knife.

    To be fair to the Intel engineers, when the 80286 was being designed, real-mode was supposed to be a bootstrap into protected mode. The idea was that once you were in protected mode, you would stay there. The problem was that these decisions were being made years before the 80286 went into production, before there was a huge base of real-mode software (8086 assembler) that couldn't be easily modified to run in protected mode.

  • Why the fuck would I run OpenBSD on a virtual machine? The reason I'd run OpenBSD is that it's really secure due to lots of security audits. Who's to say the virtual machine you're talking about is perfect software with no bugs? The point of having a kernel is to provide a layer of abstraction between the management of the hardware and the functions of an application. If developers had to write hardware-management code in order to write an office app or game, you'd see almost no development houses around, because that would be way too costly.
  • Fuck. Beowulf clusters != true clusters. A Beowulf is a really, really dumb cluster that has no real hardware relation, merely a software relation over network cables. With a Beowulf, a controlling computer gives all the nodes an algorithm or function and then gives them some data to perform the algorithm or function on. They perform said task and send their results to the controlling computer. A true cluster runs programs as if it were an SMP box. The Crusoe is NOT, I REPEAT, NOT SMP-capable. This means it will not be showing up in true clusters, only Beowulf-style setups that use embarrassingly parallel computations. You'd be hard pressed to get a machine to turn programs that have only one thread or process into programs that have multiple threads or processes. The computer has no real knowledge of what you're trying to get at with a function. All it knows is what you tell it. If you're going to tell it how to make your code multi-threaded, you might as well just do it yourself.
  • Software shmoftware; how about the sound card and video card that even let you use RealPlayer? In order for you to switch to an Alpha you'd need to find an Alpha mobo that works just dandy with your existing hardware, and then drivers to make it all work on said new architecture. Software is a bit trivial at this point; it's getting the sound card in my Linux box working that bothers me.
  • Well, it wouldn't be progress, if your assertions were accurate. I'm not sure about any other RISC architecture, but I can sure as heck say that the SPARC architecture does not have this particular limitation. On a SPARC, you almost never need to save your registers, and you almost never need to save your stack pointer.

    Ah, true, I was thinking in terms of the RISC CPUs that have gotten widely used in consumer hardware, like the SHx (in the Sega Saturn & Dreamcast), MIPS (in some CE devices and Sony's game machines), and the PPC (Mac, of course). None of these chips have the register window feature of the SPARC, so quite a lot of code gets generated for subroutine entry and exit--up to 20% or more of the total code in a project, in many cases.

    What's wrong with this picture is that writing very small subroutines has become the accepted norm--and rightly so--but most hardware is not designed for that style of programming. Increased emphasis on inlining has been the result, but it sure would be nicer to just have single-cycle subroutine calls without the needless overhead. The SPARC method sounds good.
  • by reimero ( 194707 ) on Friday June 16, 2000 @04:08AM (#998154)

    Way back in 1989 or 1990 I was taking a class in Assembler and the teacher was remarking that the x86 was making itself obsolete. IIRC, he said that the memory addressing was horrendous and the whole reason they stuck with it was for backwards-compatibility.

    There comes a time when backwards-compatibility needs to be sacrificed for genuine improvement or development. Apple no longer supports the 68k series of processors, barely supports any PPC lower than a 604, and is moving strongly toward G3 (or G4) only. Mac users howled, but it was expensive and counterproductive to try to keep too much backwards compatibility. Use older OSes and older apps for older computers and let newer computers become truly cutting-edge. IMHO there's no need for gigahertz PIIIs or Athlons to be able to run WordStar.

    Just my $0.02.
  • What kind of question is that? Of COURSE x86 is obsolete (academically, anyway). However, I couldn't care less if it is. It serves our purposes and we keep it around anyway. Academically, UNIX is obsolete too; better designs have already been made. Still, it works with the apps we have, it does its job decently, and that's why we continue to use it.
  • I always thought RISC meant reducing the complexity of each instruction, so every one could execute in 1 clock cycle. Not reducing the number of instructions.

    Also, IBM was doing research on RISC in 1974, which predates the x86.
  • Patterson and Hennessy make the point in their seminal book - no architecture has survived an address space crunch, which arrives on x86 around 2-4 GB of memory...

    Oh? Like the Intel 4004 - 8008 - 8080 - 8086 - 286 - 486 - Pentium - Pentium Pro - Pentium II - Pentium III line? Or the Motorola 68000 - 68010 - 68020 - 68030 line?

    Hacking round address space limitations has been done many times. In fact, Pentiums and up have a 48-bit segmented addressing mode, and some of the parts bring out 38 bits of address lines already.

    There's a lot to be said for segmented addressing. It has a bad rep in the PC world because it was such a pain in the pre-386 era, but the modern x86 machines have it done right. We may well see expansion of the x86 architecture beyond 4GB. The Merced VLIW machines may be another Intel dead end, like their iAPX 432, i860, and i960 lines.

  • What makes the x86 a major proctalgia is the nonorthogonality it inherits from the 4004 et al.; nearly every register is "magic" in some way, i.e. there's some instruction that requires an operand to be in that register

    This was true in the days of 16-bit code, but it has never been true for 32-bit x86 code. You can use any register for any addressing mode or operation. In the 16-bit days you had to use bx, si, or di for memory addressing, for example, which was horrible.
  • One interesting thing here is that "using" a CPU architecture is such a fuzzy concept these days. I mean, on a good day, I might write a couple of hundred lines of C code, thereby implementing new functionality in my current project, perhaps making new demos possible or whatever. But, that code was C code, which almost by definition is more or less independent of the fact that I use AMD's take on the x86 ISA to run it. The code would be the same on a PowerPC, SPARC, Alpha, MIPS, or any other reasonable processor. So, am I really using the ISA itself? I spent the money (um, no, my employer did), and I run the system for 10 hours a day, but I still don't feel like I'm primarily using an instruction set architecture.

    Perhaps the largest group of people who make sense as a "target audience" for a new ISA is the various compiler writers out there?

    Back in the old days of the Amiga, most programs were written in assembler, and they would only run on MC680x0-based Amiga machines, of course. Then it made a lot more sense to think about programming as actively using an ISA - in higher-level languages, it doesn't. Of course, things such as the POSIX standards for operating system interfaces also help make code less tied to specific machines.

    But then again, as long as Intel keeps introducing three or four (or is it more?) new implementations of their architecture every year, each time with new refinements (artificial life support?), it doesn't make sense to talk about it as being "dead", either... Although I think I must add myself to the camp of people waiting for something else to take over. Once, we had this dream that it would be the PowerPC, but that seems to have failed.

    Um, end rant. I guess I just confused everybody else, now. ;^)
  • I think that your use of obsolete is redundant.

    Given that another has opined that there is a simple, empirical way to determine if something is obsolete (see if anyone uses it; if so, it isn't), which addressed only the first definition of obsolete, posting the definition is an excellent rebuttal and not in the least redundant (unless you refer to definitions #3 and #4 of redundant and assume redundancy on the part of the Slashdot servers, which would make every word ever posted on this forum redundant and become a silly exercise in sophistry).

    According to the dictionary definition of obsolete, the x86 architecture qualifies, despite Intel apologist arguments to the contrary. It is obsolete hardware which is, alas, still in widespread use. Horses and buggies are obsolete, but you still see them on the roads in central Illinois and Pennsylvania. This doesn't make them any less obsolete. When the oil is gone, automobiles may well become obsolete while horses and buggies become the pinnacle of technology. Unless, of course, the patents on hydrogen cells the energy cartels are keeping under wraps are finally freed, but that's a diatribe for another day...
  • Did you bother to read the article? It declares that x86 as a solution was obsolete before the 486, but legacy x86 compatibility as a problem isn't going away.

    Sure, Intel could have killed their version of the x86 by not issuing the 486. In that case, business purchases would have simply turned Cyrix and AMD into huge companies, because their chips would have solved the business problem of having to run legacy code. Intel either would have had to go back to x86, or would have become a non-player in the desktop market.

    Steven E. Ehrbar
  • by ChrisRijk ( 1818 ) on Friday June 16, 2000 @04:09AM (#998165)
    Looking at the replies above, it looks like nobody has actually read the article.

    It's not trying to say "the x86 ISA is obsolete", far from it.

  • by tofus ( 201424 ) on Friday June 16, 2000 @04:13AM (#998169)
    x86 is the architecture of the future! I mean, we already have the dual-CPU(x2) and quad-CPU(x4) motherboards that are getting ever more popular. I bet every self-respecting motherboard manufacturer is way ahead of that, and currently working on secret octohexal-CPU (x86) architecture! I bet! One other great thing about octohexal architecture is that it'll keep your room warm. See? x86's uprise has only just started!
  • Is the x86 architecture obsolete? Sure it is, but there's so many of 'em out there - or at least chips that bear as much resemblance to 'em as a Ferrari does to a VW Bug - but hey, they are both cars and you drive 'em in more or less the same way...

    The issue of whether the design is dead or not will never be settled by the question "Is it obsolete?" but rather by "Does it still work?" The 486 and 386 should already have gone the way of the dodo by any standard of obsolescence, but those two old boxes suit me very well, thank you, as a Linux firewall and NFS server respectively. If they ever end up so short on power that they stop working in those roles then they will get upgraded, but until then the upgrades are limited to the usual round of patch it, break it, patch it again :) Now admittedly I have no intention of using either as my main workstation (that's a K6), but for as long as they do the job I need 'em to, those older chips may be obsolete but they sure ain't dead.

    # human firmware exploit
    # Word will insert into your optic buffer
    # without bounds checking

  • x86 has about three orders of magnitude too many instructions and a similar factor too few registers

    If I remember my physics correctly, an order of magnitude is a power of ten; three orders of magnitude would be 10^3, or a thousand. Thus you're saying the x86 needs a single instruction and about ten thousand registers. :)
  • by debrain ( 29228 ) on Friday June 16, 2000 @04:16AM (#998189) Journal
    I don't think it's really relevant whether or not the technology is obsolete. There are still valid reasons to use a VAX, although I don't know any offhand, mostly for legacy compatibility with existing systems. Is the x86 architecture obsolete? I think so; it's old, and it has a great number of architectural issues that have no easy resolution.

    But technical obsolescence isn't really that relevant; the market factors governing success really come down to the presence of a transparent upgrade process, like Transmeta's Crusoe chip, for example. Something may be technically obsolete, but it is not socially or economically obsolete. Since we live in an economically governed society, not a technically governed one, the principles that affect the growth and distribution of new technology are economic, not technical.

    Thus we see x86 and DOS compatibility, two of the first and (economically) most popular personal computer architectures, resilient even today. One might note that it is proprietary technology that is resistant to change, and that open architectures (like ARPANet => Internet:TCP/IP) evolved dramatically. One can only speculate, of course, what would have happened if the internet had been composed of closed minds and standards (and we can only agree to disagree at this time), or equivalently if DOS/x86 had been developed by open minds with open standards in mind (again, only agree to disagree if you do).

    So is x86 obsolete? Yes. But there is no clear, economically sound upgrade path at this time, though we are certainly seeing ones arise, especially with the advent of the internet and the universal movement "community" on that internet.

  • > Or even so, if Wine were ported, would that make it so that Windows apps were just a recompile away from running on other processors/operating systems?

    That's part of the benefits of running OSS apps. In principle, your whole system is just a recompile away from a hardware switch, if you think some non-x86 platform is a better buy.

    In practice I suspect there are some 32/64-bit issues, but if a hot new platform came out that gave a great bang:buck ratio, the world of OSS programmers would be all over it in a heartbeat. When/if Linux ever gets up to 20% of the desktop market share, you may start seeing some "breakaway" desktop architectures.

    [Posting from my freshly downloaded M16.]


    --
  • The original 8086 processor had a 1 MB addressing limit, but the x86 architecture still survived.

    The 80286 had a 16 MB addressing limit, but the x86 architecture still survived.

    The Athlon faces a 4 GB addressing limit, but AMD is developing a 64-bit version with a potential 16 EB addressing limit.

    Addressing limits are *not* why the x86 won't survive.

    Steven E. Ehrbar
  • Sure, 4 GB looks like a lot of RAM now... but look at what SGI machines could do in the early '90s... theoretically, they could handle an awful lot more than that (and I'm sure several people could give examples)... MIPS, Alpha, etc.

    All bad marketing... better tech.
  • DEC developed a technology called FX!32 for the Alpha processor version of Windows NT. This was available when NT 4.0 came out in 1996. Not having used an Alpha NT box in several years, I'll have to just assume it's still around. Basically, FX!32 runs x86 binary applications in emulation mode on the Alpha, all the while watching the execution and translating the binary into a native Alpha version. The first several times you run an application, it gets quite a bit faster each time. At best it's still slower than a comparable Intel box, but it's better than having two machines. Here's a link to some info at Compaq:

    http://www.digital.com/amt/fx32/fx-relnotes.html [digital.com]

  • The only thing keeping x86 alive is the fact that if people tried to ditch it today, 90% of the world's desktop software wouldn't have anyplace to run tomorrow.

    Agreed. However, the server market isn't quite so dependent on x86 compatibility (yet). Do the high end chips (Xeon, Itanium etc.) still contain the full instruction set, or have they dumped any of the legacy instructions that were only present to support backwards compatibility? After all, how many people actually run MS-DOS 2.x on a Xeon? My guess is that if they don't already do this, future generations of processors probably will, as extreme backwards compatibility becomes less important. Of course, Win2K could prove to be the wildcard here, as MS try to blur the boundaries between desktop and server. Yes, Linux does the same, but Linux has never had to run 16-bit code...

  • I have always loathed the x86's bloody segmented architecture and arse-backwards way of doing things since day one.

    The 68000 had a MUCH better architecture than the 8086 - nice linear memory space from 0 to as much as the memory bus could hold - bytes the right way round in long numbers - a delight to program for us old Hex hackers...

    Had IBM chosen a 68000 processor, the history of personal computing would have been very different, and very probably a whole load better. Then again IBM deliberately knackered the PC design, so maybe they chose the inferior processor deliberately?
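
    For anyone who never had the pleasure: real mode forms a 20-bit physical address as segment * 16 + offset, so many different segment:offset pairs alias the same byte. A quick illustration (mine, not the poster's):

        #include <stdio.h>
        #include <stdint.h>

        /* Real-mode 8086: physical = (segment << 4) + offset, mod 1 MB. */
        static uint32_t phys(uint16_t seg, uint16_t off)
        {
            return (((uint32_t)seg << 4) + off) & 0xFFFFF;
        }

        int main(void)
        {
            /* Two different segment:offset pairs, one physical byte. */
            printf("1234:0005 -> %05X\n", (unsigned)phys(0x1234, 0x0005)); /* 12345 */
            printf("1000:2345 -> %05X\n", (unsigned)phys(0x1000, 0x2345)); /* 12345 */
            return 0;
        }

    The 68000's flat address space needed no such arithmetic, which is exactly the point.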
  • How's this?

    William Stallings lists four primary characteristics of the RISC architecture: One instruction per cycle, register-to-register operations, simple address modes, and simple instruction formats. With one simple instruction per machine cycle, CPU idle-time is significantly reduced. RISC instructions are hardwired into the chip as opposed to residing in a microcode ROM, theoretically reducing execution time by eliminating references to the microcode program as well as freeing up valuable chip space. Simple instructions greatly reduce the complexity of the chip, and eliminate such complications as performing an operation on several parameters while at the same time calculating effective memory addresses.

    It doesn't say anything about the number of instructions.
  • Ugh, very true. The Wintel combination has meant that each of them has held back to keep in line with the other, and because of this neither has been able to break away from the past.

    Case in point - real mode. In real terms it has been obsolete since the 286 introduced protected mode and the 386 enhanced it. But it's only now, with Windows Millennium and 2000, that real mode is no longer used by MS operating systems. This means that the chips require extra transistors for real mode and virtual 8086 mode, making them more expensive and hotter, and Windows has required extra code to handle legacy apps which use them.

    No, you might have got a speed increase by changing the core from CISC to RISC, but you could also get one by just removing all the extraneous crap that's still there. Hopefully Intel or AMD will abandon their wish for every new chip to be able to pretend it is an 8086...


    ---
    Jon E. Erikson
    Yes, that's obvious. Just pointing out ambiguities in language between fields of study. Truly, a dizzying waste of time and intellect...
    It isn't tough, but it is time-consuming. And gcc on some processors really blows. Take Linux on an Alpha, for example: using gcc, programs run 20% slower than they do with the free compiler released by Compaq. The dude I was replying to wanted everything to be released as source, with the end user having to compile everything. I'm not a big fan of compiling because I don't have the free time to sit for hours while my system compiles. Not only would you have to recompile 30 versions of Quake 3, but you would have to test and debug all those versions of it. Quake 3 also has a lot of processor-specific assembler to make it run a bit more efficiently. You'd need more people coding for projects, ones that were gurus with a particular chip.
    x86 has always been obsolete, since RISC technology has always been ahead of it. It's just a favorite. Macs have, on a fundamental design level, faster processors; the G4s have been pulling a terraflop for a good while now. There are countless other processors that are just plain better. The x86's market share is just so large that they can sell chips cheap, and they were smart enough to open the architecture.
  • Wouldn't an 8mbit connection "suck" if you were used to it? I mean, why don't we all have gigabit ethernet connections? Damn those engineers for not having the foresight for making a system ready for 50 years in the future! The nature of tech is that there's something better five minutes after you buy the top of the line.
    The x86 architecture has been obsolete for at least ten years. Then again, nothing in the Pentium2-class arena could be called an x86 by my definition; they've done some (admittedly) amazing stuff to keep hacking the speed higher and higher, determined to keep their precious cash cow from dying. The P3, by this time, is so different from the 8086 that they're hardly the same chip or architecture. The only similarity is in the instruction set, and even then there are differences (let's see you run a KNI-enhanced program on an original 8086).

    This said, the x86 instruction set really does have a lot of problems; it reflects the era in which it was made all too well. Very few chips made today have these problems; they've learned from x86's mistakes. Intel itself has hacked bits onto the x86 ISA for decades now, trying to patch up the problems, but in doing so they've also made the ISA a complete mess.

    Does x86 have a performance limit? Theoretically, no; it's an instruction set, not a chip. But it's one hell of an instruction set to burden a processor with. IBM chose that chip for its PC line specifically to hobble it, so that the PC could never compete with IBM's then-profitable minicomputer line. This seems to have failed rather miserably, thanks to the amazing work done at Intel/AMD/etc. In time, x86 will die; every computing concept and program dies given enough time (anyone here still seriously use VisiCalc?). Frankly, it is long past due; cleaner architectures have existed almost as long as x86 itself. But I suppose we should be patient. The day will come (probably when/if Intel finally gets IA-64 out the door).
  • Isn't it teraflop anyways?

    Terraflop would have something to do with Land crashing or something.
    No doubt they have a nice product, but a 30x performance improvement, as mentioned in one of the posts, sounds like marketing crap to me. In any case, I won't believe it until I see it.

    Also note that their VM targets PersonalJava, which is a slimmed-down version of Java for embedded machines. Probably the competition uses an interpreter rather than a JIT to save on memory usage (this would explain the performance difference). No doubt speed is useful in some situations, but the real bottleneck of embedded machines is usually memory size. The more memory you have available, the more features you can put into the tiny space.

    In any case, thanks for drawing attention to an interesting virtual machine. Diversity is a good thing.
  • I already read it yesterday ^-^
    They do it in hardware, not in software: limited optimizations, no profiling of the running system, a fixed instruction set. There isn't a single transistor on the Crusoe implementing an x86 instruction; it is all done in software, so the Crusoe doesn't have these limitations. The software that does the translation can be updated, meaning that bugs in the translation can be fixed and bugs in the processor hardware can be worked around. New translations for new instruction sets can be added, and, most importantly, it can do runtime optimizations using profiling information.

  • From the hang-on-and-we'll-get-you-out-of-there dept.:

    I believe it is generally understood that the x86 architecture is not the most superb set of instructions and such that we could get. RISC obviously has much value, and the newer embedded systems and such will increasingly be the wave of things to come. However, it's going to take some time. Here's what must happen before a new standard (whatever that is) is accepted:
    1. Companies must stop supporting old architectures, regardless of the reaction of consumers.
    2. Old hardware must die off, either broken or unable to run with any usefulness. My hobby is collecting vintage machines and making them run and do useful tasks. If I couldn't get them to run, I wouldn't use them. Period.
    3. Major business and educational software must be written primarily for those new architectures. You want Linux to be god? So do I. However, until most packages write their software for Linux, it won't happen. If things were written for the Amiga, I'd have an Amiga. Since things are mostly written for Win32, I have a Winblows box. (not that I enjoy the pain, you realize)
    4. Hype has to be mutated into standard. Sure, we all love to play with 1GHz Athlons, but are they standard yet? Hardly. Similar with other architectures. When a 64-bit processor becomes standard and not "the newest thing on the block since swiss cheese", it'll happen.
    5. Computer industry professionals (techies) and computer savvy people (geeks) must promote these new and alternate technologies. Break the mold. Go 64-bit. Recommend that your neighbor do it, too. Send a memo to your boss, tell them to convert. Until we push for this to happen, it won't.
    In conclusion, this change will happen. When is a matter of many factors, including the ones above.

    The 4004 was a 4-bit microprocessor which was not binary compatible with *anything* else. The 8008 and 8080 were both 8-bit processors which were *not* binary compatible with the 8086.

    The 286 ISA was a superset of the 8086 - any code that used protected mode was *not* backwards compatible. Ditto the 386 - it added new modes that were not backwards compatible. However, the 386 ISA has stayed mostly unchanged (notable exceptions include MMX, 3DNOW, and whatever Intel's latest hack is called) through the days of the 486, Pentium, PPro, PII, and PIII, as well as the Cyrix and AMD equivalents.

    Yes, the weird-ass segmented addressing modes exist, but I haven't seen anybody show any enthusiasm for trying to *use* them.

    AMD's Sledgehammer proposal might successfully extend the x86 architecture to support a true 64-bit address space. It might be a horrible flop. Whichever way it goes, Sledgehammer code will *not* run on anything other than a Sledgehammer processor. Ergo, Sledgehammer is a new ISA, related though it may be to the original x86 one.

  • by Dwindlehop ( 62388 ) on Friday June 16, 2000 @04:33AM (#998250) Homepage

    Did you, in fact, read the article? Hannibal said as much in his article. Obsolescence is the wrong question here; timothy [monkey.org] should be ashamed of himself for titling this Is The x86 Obsolete?.

    Here's the short version for people too lazy to read the article or too dumb to understand what Hannibal is talking about:

    Due to the incredible amount of software written for the x86 architecture, machines that execute x86 instructions will be around for some time yet. Everyone agrees (even Intel) that x86 is not a good ISA (instruction set architecture), but the ability to run all the programs written for it makes it too costly to scrap. In order to achieve better and better performance, the current generation of microprocessors (Athlons and PIIIs) emulate x86 in hardware. The actual execution on these machines takes place using a completely different, RISC-style set of instructions (x86 being CISC, for those who don't know).

    This information addresses only half of Hannibal's article. The other and more interesting half describes the latest ideas computer architects have for circumventing the problems of the x86 ISA. The primary advancement is translation of x86 instructions into another architecture; this translation occurs only once, as opposed to emulation, and can be very aggressively optimized for the particular hardware it is running on because it is performed at runtime. Because the performance hit is only incurred once and because of the further, machine-specific optimizations, machines which execute x86 instructions will continue to increase in performance.

    Furthermore, executing x86 instructions by translation means that computer architects have the freedom to change the native architecture of their machines without worrying about executing legacy code. These issues were addressed by emulation; translation is a further step in this direction.
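
    A toy sketch of the distinction (mine, not Hannibal's; the names are invented): emulation pays the decode cost on every execution, while translation pays it once, caches the result, and can afford aggressive machine-specific optimization because the cost is amortized.

        #include <stdint.h>

        typedef void (*NativeBlock)(void);

        /* Hypothetical translation-cache plumbing. */
        extern NativeBlock cache_lookup(uint32_t guest_pc);
        extern NativeBlock translate_and_optimize(uint32_t guest_pc);
        extern void        cache_insert(uint32_t guest_pc, NativeBlock nb);

        void run_block(uint32_t guest_pc)
        {
            NativeBlock nb = cache_lookup(guest_pc);
            if (!nb) {
                /* One-time cost: decode x86, optimize for this exact
                   microarchitecture, emit native code. */
                nb = translate_and_optimize(guest_pc);
                cache_insert(guest_pc, nb);
            }
            nb();   /* every subsequent hit runs at native speed */
        }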

    As I said before, the obsolescence of the x86 ISA is a ridiculous and unanswerable question. However, I believe that the x86 ISA will continue to be a relevant problem until we leave 32 bit machines behind for 64 bit and larger.


    Jonathan David Pearce

    Yes, the x86 might survive as a backwards compatibility mode, but any *new* 64-bit code is not going to run natively on old machines.

    If you're going to have to change ISA, and cope with all the nuisances that entails, why wouldn't you swap to the one offering the very best price/performance compromise? As far as backwards compatibility goes, you can run x86 code on the Intel/HP IA-64, and, if it comes down to it, on the PPC and Alpha through emulation. What's the special attraction of the Sledgehammer?

    If you think the P6 line is 32-bit, you are _extremely_ out of touch. The P6 has been doing 36-bit physical addressing (via PAE) for several years. Perhaps you should check your facts in the future before you claim to know what is going on. Please log on to Intel's developer site (http://developer.intel.com) and get volumes 1-3 of the developer manuals.
  • by theonetruekeebler ( 60888 ) on Friday June 16, 2000 @04:35AM (#998262) Homepage Journal
    I think that issues of legacy code compatibility matter less and less these days, at least in regards to processor instruction sets. Why? HLL compilers and operating systems.

    The x86 ISA has been closely married to the fate of a single operating system for quite some time now. After the shift from CLI to GUI, most of the compatibility issues in software have been WRT how to talk to the OS, not anything underlying. Nobody talks to the hard drive or keyboard directly--you talk to the driver. Likewise, the only programs that generally need to understand the underlying architecture are compilers.

    There is so much standardization at levels above the processor instruction set that particular CPU architectures matter only while writing compilers and operating systems. Open source software distribution is making architectural irrelevancy even more thorough.

    I will freely admit that there are applications which need good familiarity with the underlying hardware; most of these, however, are drivers. The rest are heavily optimized scientific computing tools that need to bum every single instruction out of a loop because the loop is going to run sixty-nine trillion times.

    As for the rest of the world, though, nearly transparent portability of operating systems and application suites across architectures is a reality that lags only a few hours or days after the compiler is written. I'll offer two examples: Unix and Java.

    When does compatibility with prehistoric applications become a reality? In places other than the x86 architecture. I do DBA work for an RBOC, and yes, we have ancient COBOL and FORTRAN applications that first ran in the 1960s. For those groups, Y2K was a genuine nightmare. But all those apps run on MVS and other mainframe environments--not exactly the x86's stomping grounds. As for other, pre-x86 micro architectures, well, I can run all my old Atari 400 apps under an emulator on my Pentium 200, because I have cycles to spare even to a badly written emulator.

    So, no, the x86 isn't obsolete. The newer generations have some obsolete components, though.

    --

  • by ColonelPanic ( 138077 ) on Friday June 16, 2000 @04:36AM (#998263)
    Is the x86 deficient relative to other instruction set architectures for microprocessing? Of course. Does it matter?

    What we lose in the x86 is performance. While I'm quite aware of the heroic measures taken by AMD, Intel, Transmeta, et al. to run x86 code quickly, you can't escape the fact that an optimizing compiler for x86 has extremely limited power of expression.

    In computer architecture you want the ISA to be such that the compiler can do what compilers do best (static analyses over large regions) and hardware can do what it does best (dynamic adjustment to unpredictable runtime conditions). A bad ISA can bottleneck both the compiler and the hardware. x86 is poorly balanced in this regard. So's IA-64 (in the other direction), IMO.

    On the plus side of the tradeoff, with x86 you get billions of dollars in fab R&D and commodity pricing, not to mention a huge installed base. It's never going to go away. Sigh. But life would be so much better for compiler writers, systems software people, and (indirectly!) users if all this business were centered around a nice 64-bit ISA rather than the x86 monstrosity. I very much enjoyed using the Alpha ISA on the Cray MPP machines and commend it as a model among the publicly known ISAs. No condition codes, delay slots, segments, or special-purpose registers; just lots and lots of registers.

    There are other things besides sheer power that make people choose one architecture over another. Oftentimes availability and cost are much more important, especially to the Linux geek (IMHO). Other `geek' factors play in as well, beyond just the raw speed pulsing inside the heart of your machine.

    x86 is fairly cheap, highly available, and easily self-serviceable. Therefore it is a quasi jack of all trades, master of none. That's not a bad thing in my book.

    Bad Mojo [rps.net]
  • Old hardware must die off, either broken or unable to run with any usefulness. My hobby is collecting vintage machines and making them run and do useful tasks. If I couldn't get them to run, I wouldn't use them. Period.

    I would imagine that depends on your definition of usefulness. Most people would think that companies would jump ship the moment it became more cost-effective to do so, or something along those lines.

    People grow very attached to old systems, and often get burned by vaporware upgrades. Ancient systems are often in place, limping along, at most businesses that have been around for longer than a decade. There was a flat Tandy PC with a four-line LED screen (no, not LCD), and the Palm Beach Post reporters still use it, because it has a real touch-type keyboard and runs for weeks on four or six AA batteries. Plenty of COBOL and FORTRAN routines are running happy and live deep in the bowels of many companies. For a more recent example, FoxPro and dBase are alive and well, and *new* apps are being written in-house, because all of the rest of the company's business data is stored there.

    So, all of this is considered "Obsolete", but businesses use it, and buy new equipment, hire IT people and maintain and grow their "obsolete" equipment.

    Now, *home* use is a completely different matter. You have a hardcore group of people who still use Apple ][s, people who are looking for NES cartridges, 2600 joysticks, and eight inch floppies, but those are geeks doing it for enjoyment.

    There are plenty of people, however, who are actually *using* 386 class machines. They do email, type reports, print them on dot matrix printers (I get the question "where can I get tractor feed paper?" more and more often, rather than less). They can't afford to upgrade, either because they don't have the money, their perception is that new computers still cost thousands, or they don't consider $300 worth it.

    Obsolescence is a matter of perception, not a matter of logic.

    --
    Evan

  • IMHO there's no need for gigahertz PIIIs or Athlons to be able to run WordStar.

    True. There's also no need to delete Lord-knows-how-many programs written and compiled for x86 machines. Using the x86 ISA is a question of economics, not of "genuine improvement or development." The field of computer architecture has come a long way since some Intel engineers sat down and designed the 8086. I hope nobody refutes that.

    But today architects have some awfully good ideas about how to squeeze more performance out of a machine that has to be able to execute x86 instructions. These sorts of breakthroughs keep the performance of x86-compatible machines climbing, with minimal performance hits compared to other architectures. If maintaining x86 compatibility were so "expensive and counterproductive" that it made sense to leapfrog from architecture to architecture, then modern (i.e. designed in the last decade) architectures would rule the market in terms of sales and performance. They do not; hence, maintaining backwards compatibility does not significantly adversely affect an architecture.

    What are your gripes with the x86 ISA? It is rather clunky and, from an academic standpoint, not optimal. Also, I'm sure the Intel engineers curse themselves (or their predecessors) on a weekly basis. ;) In spite of that, though, machines which execute x86 instructions have the advantages of low price and large amounts of software; what more can you ask for? Lucky you, I'll tell you:

    The two problems of the machines which execute x86 instructions currently are power requirements and die size, because all the current schemes for circumventing the problems of x86 require additional hardware. These are not serious issues currently because of the widespread desktop computing paradigm. As the market moves away from big, stationary computers and towards smaller devices, x86 will become less and less viable. You can already see this trend happening. How many hand-held devices can you think of that execute x86 instructions?


    Jonathan David Pearce

    The __only__ serious limitation in the IA-32 architecture is the fact that there are only 8 general-purpose registers, and the fact that they aren't that general purpose (e.g. the MUL instruction always works with the same registers, the stack pointer must be in ESP, etc.)

    Aside from this, the IA-32 architecture is actually considerably simpler than most other architectures to program on.

    A few examples...

    IA-32 does all alignment checking for you. There is no problem doing a store split across a line or even a page, and the microarchitecture takes care of this. On something like Alpha, this is illegal, and will generate an exception and the OS must do two stores to perform the operation.
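
    To make the alignment point concrete, here's a hedged illustration (my own, not the poster's). The commented-out cast is routine on IA-32, where the hardware quietly handles the split access, but can fault on strict-alignment machines like Alpha:

        #include <stdint.h>
        #include <string.h>

        uint32_t read_unaligned(const unsigned char *buf)
        {
            /* Fine on IA-32, may trap to an OS fixup handler on Alpha:
               return *(const uint32_t *)(buf + 1);                     */

            /* Portable version: let the compiler emit safe accesses. */
            uint32_t v;
            memcpy(&v, buf + 1, sizeof v);
            return v;
        }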

    Cache coherency. IA-32 has very well defined cache coherency protocols, and again works in all cases such as split words. Many architectures, including Alpha, leave coherency to the programmer, and you have to do locks yourself. This is extremely complicated, especially for false sharing when it is not clear what is on the same line.

    Memory ordering. Ditto the above. Many of the RISC architectures have very chaotic memory-ordering rules, especially the Alpha, which does all sorts of weird out-of-order and speculative loads, so you have to insert fences everywhere. A real mess.

    Despite this, IA-32 is still the fastest architecture around. The fastest CPU currently shipping on SPECint2000 is the 1 GHz Pentium III. The RISC architectures are more difficult to program, but are also slower!

    One good benefit of the CISC IA-32 architecture is instruction density. You can code in two bytes (a CISC ALU instruction, for example) what it takes 8 bytes to code in RISC (a load/store, then the ALU op). When your code density is 2x-4x greater, this helps tremendously for i-cache! Also, it really cuts down on the relatively expensive decode process (which is really the only expensive part of the IA-32 architecture).
    Interesting. I've programmed a similar set of CPUs and have come to exactly the opposite conclusion. I fail to see the difference between a "hardware stack", which apparently means there are extra instructions just for pushing and popping, and a software stack, which presumably means normal load/store/add/subtract instructions are used with one register considered the stack pointer (and maybe another for a frame pointer). The instructions do the same things. They take, in general, about the same time to execute.

    So what it really comes down to is more instructions to do the same things, which means more die size, which means more heat, more power, and more manufacturing cost. Some deal that is.

    The only reason it can take so long to save registers on a real CPU is that there are so many. Sure, it's fast to push your six general-purpose registers. But that's not enough to make up for your memory-accessing instructions and the register-shuffling you have to do to keep useful values in your pitiful register file. If a register has to be saved, it has to be saved. That's true of any architecture. SPARC tries to avoid this and actually does a very good job of lowering call overhead, but the bottom line is that there will always be times when things have to be pushed onto the stack.

    Also realize that all of these instructions are fixed at 32-bits on most chips. That's 32-bits to copy a register, 32-bits for a return, etc. This may simplify the hardware, but at the expense of bloat. So you need a bigger instruction cache.

    This really depends on your instruction mix. There are longer instructions on x86 too. And let's remember that simpler hardware means less die size, less heat, less power, and less cost. And remember that the SHx has 16-bit instructions, not 32. So on that architecture your code size will always be less than equivalent x86 code.

    The bottom line is that x86 has about three orders of magnitude too many instructions and a similar factor too few registers. It exists without the grace of design or forethought. It's too big, too bloated, too hot, and more expensive than it needs to be. Programming it is a nightmare. The only positive thing I'll say for it is that the performance isn't terrible given its complete lack of design. This says good things about Intel's engineers. Of course, if they can do as well with x86, imagine how much better they could do with a decent architecture. In other words, if Intel manufactured MIPS and SPARC chips, they could crush the existing implementations in performance.

    The x86 was obsolete 12 years ago. The sacrifice of sanity on the altar of backward compatibility is disgraceful and foolish. I don't use x86 any more, thank God. I just wish nobody else did either. We'd all be better off if x86 died the death immediately or sooner.

    There might be no hardware stack, but this is often compensated for by register windowing - which, with the large register sets available in most RISC chips, allows the majority of subroutine calls to be made directly from the register set - even nested ones! Good examples of this are SPARC, TMS9994 (remember that? ;-) and ARM.

    All in all though, it would seem that RISC and CISC have converged so much in recent years its hard to tell them apart. Shame the same isn't true for the software that runs on them.
    Yes, but lots of people can't stand Macs because they don't have any backwards compatibility. When I was going to buy my first computer I bought a PC because I knew any Mac I bought would not only be obsolete when I bought it, but also be unsupported by everyone - even Apple - within a few years. Plus, Apple alters its hardware specs enough between models that you can't upgrade the hardware to get it compatible again... you just have to buy a new Mac after 3ish years. It's almost as bad as Microsoft, really.

    As for Mac users howling, you bet they did. When I told a guy I worked with that his 3-year-old Mac was too old to play a game he got for his young daughter, he was pissed. He should be; by ditching compatibility like that, Apple destroys any value his 3-year-old computer had whatsoever.

  • by DaveHowe ( 51510 ) on Friday June 16, 2000 @06:26AM (#998295)
    Hmm. I can see two big new markets as the Transmeta cached cross-compiler model takes off:
    1. Products will have a "burn-in" time on new machines - optimising and path-mapping of the cached version will take many passes through the code; I can see an active after-market of pre-burned-in copies of popular packages, provided the optimisation image is extractable.
    2. If the Transmeta model can support ONE ISA, it can support two or three. We should start to see manufacturers competing over new ISA designs that best utilise the underlying hardware, without being so restrictive that the next generation of chips runs legacy ISA programs faster than the new ones. Bonus points if you can get "new model" code to run under an "old model" OS such as Windows, and vice versa - I would expect new-model Linux (for example) to support old-model RPMs, at least in emulation.

    --

