Is The x86 Obsolete? 336
levendis writes: "Ars Technica has an excellent article up on the future of the x86 architecture. It touches on new ideas from Transmeta, Intel, HP's Dynamo, and a bunch of other technology that keeps the 20+ year old architecture alive and kicking." As always, the Ars take on this (specifically, Hannibal's) is lucid and thoughtful, and grounded in some interesting history.
Re:Open Source is the ultimate intermediate format (Score:2)
There are none. Incompatibilities arise when the programmers assume a certain integer size (32-bit, usually... I don't know of anyone who writes code which is meant to be reasonably portable who assumes an integer is 64 bits).
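To make that concrete, here's a tiny C sketch (purely illustrative, not from any particular program) of the classic way an integer-size assumption bites:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* Non-portable: the arithmetic happens at int width, so on a
           compiler with 16-bit ints the intermediate products overflow
           long before the result is widened to long. */
        long bad = 60 * 60 * 24 * 365;

        /* Portable: force the arithmetic to long from the start. */
        long good = 60L * 60 * 24 * 365;

        printf("int is %d bits here\n", (int)(sizeof(int) * CHAR_BIT));
        printf("bad=%ld good=%ld\n", bad, good);
        return 0;
    }

On a 32-bit box both lines print 31536000; on a 16-bit-int compiler the first one is garbage. Same source, different integer sizes.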
--
Re:x86 is popular to hate, but not that bad really (Score:2)
No, the reason x86 hasn't died is because it's as bad as can possibly be without forcing people away from it. Blame Microsoft and idiot peecee buyers for its continued plague upon the earth. Modern x86s are an excellent implementation of what is likely the worst architecture ever created. Their "design," to the extent that it exists, combines the worst of RISC (hard to write good compilers) with the worst of CISC (lots of useless confusing instructions, nowhere near enough registers) and some extra Intel-specific bad bits (stupid CISC->RISC translation mechanism for example, 16-bit compatibility, nonlinear memory model). What a crock.
Re:x86 is popular to hate, but not that bad really (Score:2)
"This is a feature, not a bug."
It's a tradeoff; yes, it takes more space to cache fixed-length instructions, but it's easier to pipeline them, faster to look ahead, etc. Speed versus space.
Translation and virtualization (Score:2)
That's the hardware end. Programs don't just run off an ISA; they also run off an API. What is the software technology that would best run in combination? Virtual machine technology. My gut feeling is that someone will build a PC with the ability to boot virtual Windows sessions, where the programs there think they're on an x86 with all the requisite hardware, while the rest of the computer is emulating some nicely streamlined ISA and you've got Java code running in some sandboxed virtual machine elsewhere on the system.
At this point you have support for legacy ISAs and legacy APIs in a nice simple format. As computer architectures die, they just end up virtualized and emulated/translated. And the computer is designed, from the hardware through the operating system, to do it seamlessly. In time everyone assumes they're on a virtual machine and new operating systems evolve to adapt to that environment.
No doubt such a setup will even allow for fine-tuning for things like emulating old Apple ][ systems, as well as original IBM PCs running at 4.77 MHz, to get all those old games running just right. Or old Nintendo/Sega/whatever boxes. All those emulation setups get ported to the virtual machine setup and the appropriate ISA, and you've really got emulation there.
Companies will like it because they can ditch old unsupported hardware but keep the software around forever. Especially companies with several brands of hardware/software that they can suddenly run under a single brand of hardware.
This is Microsoft's worst nightmare, because all of a sudden switching to a new operating system does not mean dropping old software. It can be done seamlessly and as gradually as people want. To Apple, it is also a bad nightmare, especially if other people work out Mac virtual machines on other hardware.
As for Intel, they're already taking a bruising from the other CPU manufacturers. Intel will take to the translating setup with a vengeance, but instead of going in the direction of power consumption (or alongside it) they're going to focus on performance, the niche they've always pursued. And their rivals will go and do the same.
To the PC vendors like Dell and Compaq (but not, as I said, Apple) it's something to be welcomed. They're already in the trenches competing against other machines that run all of the same software anyway. Anything that seamlessly expands the range of software and operating systems they can run is fine by them.
To operating system vendors (except for Microsoft and to a lesser extent Apple) it will be a major blessing. All of a sudden experimenting with a new operating system becomes easier and switching to it becomes useful. Niche operating systems can thrive being used for specialized applications on an existing machine.
Things like OpenBSD can suddenly start becoming really popular doing things like being the virtual machine that's the only one designated to see the DSL connection to the outside world while the Windows and Sega Genesis virtual machines are there for playing old games and the Linux or FreeBSD virtual machines are used for getting work done.
This is the direction I think the PC will be evolving in to compete with the small and specialized Internet appliances. PCs are going to take their strength, flexibility, and go even further in that direction. The CPUs will become more flexible, and the operating systems will capitalize on that and take virtual machines to the next level.
Address space is going to kill off the x86 (Score:3)
Once this becomes a more widespread problem, the x86 architecture, in its present form, is doomed. At that point, what the industry will converge on (and whether it will converge at all) is an open question.
Microcode (Score:3)
The IBM 360 and 370 series not only had microcode-based hardware, but IBM could and did ship out microcode updates (originally on 8-inch floppy). Among other things, IBM got in trouble with the anti-trust folks because they would send out microcode "updates" that just happened to break 3rd party peripherals - you installed the REQUIRED update and your Amdahl hard drives stopped working, for instance. IBM also would put high-level instruction code support into their microcode. For a long time, IBM's sort software package ran faster than anybody else's because they had microcode instruction assist - kind of a secret machine instruction that the competitors didn't have. It's like the private APIs in Windows.
...phil
Re:A little premature to call it obsolete (Score:5)
Of Course (Score:4)
Hyperbole. (Score:2)
Re:Hmmm, a record for /.??? (Score:2)
A first post by me was 5 at one point, then got marked back down to 0. It was regarding the May 2nd DMCA protest that Slashdot refused to cover. Unfortunately I was not able to find it in the archives, as I know I posted it in a completely off-topic article.
x86, die die die! (Score:2)
The x86 instruction set isn't even that nice; we have extension upon extension that creates a horrible mess of standards and layers, each of which you need to accommodate; a seriously limited set of hardware interrupt lines to attach all the important hardware to, etc. etc.
Personally, I'd like to see the x86 and the AT die, and quickly, please.
Dude... (Score:2)
--
Compaq dropping MAILWorks?
Re:Macs and backwards compatibility (Score:2)
They don't? I run 68K apps such as Illustrator on my iMac no problem, dude. There's a little calculator app I like that was written when the MacPlus was cutting edge. It works fine still!
As for Mac users howling, you bet they did. When I told a guy I worked with that his 3-year-old Mac was too old to play a game he got for his young daughter, he was pissed.
So you mean 3-year-old PCs can play new games? Often not the case.
Our own little world (Score:2)
It's cool to have newer, faster, better hardware but does it actually let you get more work done or have more fun? For a few individuals, yes, but the vast majority of us have had computers faster than we can type for a long, long time.
Don't make something obsolete when something better comes along - make it obsolete when it ceases to be useful. I still use a 486 as a simple mail and web server just fine, thank you.
It's the OS not the CPU (Score:2)
Just some random thoughts.
The x86 has been obsolete for years (Score:2)
The only reason that the x86 has stayed around is market inertia and economies of scale. Because of the large scale of manufacturing, x86 machines are a lot cheaper than newer architectures, and most binaries are for the x86. Rather sad, really, but that's the way it is.
Re:Coincidence (Score:2)
Anyway, I smoke weed like a fiend - every day, if I can help it. And I get tons of stuff done. I work full time, write open source software [sourceforge.net], do CGI art [umd.edu], and other stuff. I can't usually get anything done if I'm not stoned though
Alright, this is drifting way off topic.
--
Re:x86 is popular to hate, but not that bad really (Score:2)
I was referring to the "segment" concept, the fact that physical memory is nonlinear (or however you prefer to describe it; the point is that pointers require two registers). Thankfully protected mode helps somewhat and makes it possible for an OS to offer virtual memory and a linear address space, but the fact that 16-bit real-mode segmented address spaces still have to be dealt with at all, ever, is obnoxious and stupid.
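For anyone who never had the pleasure, here's roughly what real-mode addressing looks like, sketched in C (the seg*16+off arithmetic is the real thing; the rest is just illustration):

    #include <stdio.h>

    /* A real-mode "pointer" is a 16-bit segment plus a 16-bit offset;
       the CPU forms the 20-bit physical address as segment*16 + offset.
       Hence pointers need two registers, and many seg:off pairs alias
       the same byte. */
    unsigned long linear(unsigned int seg, unsigned int off) {
        return ((unsigned long)seg << 4) + (off & 0xFFFFUL);
    }

    int main(void) {
        printf("%05lX\n", linear(0x1234, 0x0010)); /* 12350 */
        printf("%05lX\n", linear(0x1235, 0x0000)); /* 12350, same byte */
        return 0;
    }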
NUMA and Intel Bus architecture (Score:2)
I don't want to pretend to be an expert, but isn't the biggest limitation of the Intel processor the bus architecture, which only lets one channel of communication between two devices happen at a time?
I know that with SGI hardware they have a type of switched bus that allows multiple devices to talk at the same time, which allows for much higher sustained bandwidth.
Does anyone know if x86 chips can run on a non-bus architecture, or is it part of the chip's instruction set to function on a bus architecture?
Re:Poppycock (Score:2)
And it is proving to be popular. However, it's going to be quite a while before someone writes a Java/C interpreter/compiler/emulator/translator that can provide a good enough environment in which to produce something like Quake VIII.
No, they do get it. They know that the market is ready right now for the technology that they're providing. Java, on the other hand, is relegated to the non-gaming segment until more advances can be made to the technology. But you're right - this is the approach of the future. It will become more and more mainstream.
Re:x86 is popular to hate, but not that bad really (Score:2)
The 8086 is not what I'm criticizing. Yes it sucked but so did everything else at that time. What I'm criticizing is the decision of the engineers to let the marketdroids run Intel and thereby prolong the life of a design that had no business surviving past 1988 or so. In the mid-80s the SPARC and MIPS projects were starting to produce marketable CPUs. These CPUs were well-designed and well-implemented, and fast. Intel's engineers are more than capable of competing with such offerings, but they chose instead to allow non-technical idiots to dictate technical policy. Specifically, the 486, Pentium, ... are mistakes and deserve to be treated as such. These CPUs should never have existed because Intel should have abandoned an architecture that by the time of the 386 was already aging very badly. I will gladly excuse technical mistakes made in the absence of information we have today. I will not excuse bad policy decisions which should have been made by engineers, made instead by braindead marketdroids.
In legal-speak, I'm saying that Intel knew, or should have known, that their current product offerings were technically inferior to those of their competitors, and should have adjusted their product line accordingly.
The i860 and i960, along with the ability to manufacture x86 CPUs that offer any performance at all, prove that Intel has a great many competent, if not brilliant, engineers. Their mistake was in giving up control of the product line. When Intel's marketdroids announced the 486, every Intel engineer should have either demanded a change, or cut and run. There is no excuse for the continuing existence of the x86.
Re:Function calls, Code bloat, other reasonable my (Score:2)
I never said that. I said 15 or more. On the PPC, 15 is about the max. In general, though, there's about 15-20 instructions of overhead in a non-trivial, non-leaf subroutine (in C), but it can be twice that.
You need to stop and think about what "complex" means. A CALL instruction is not complex. Heck, it was standard on 8-bit processors with less than 10,000 transistors. Complex instructions are some of the crazy things done in hardware on the VAX and IBM 360. If a CALL instruction is considered too complex to implement efficiently in hardware, then we shouldn't even bother with things like texture mappers or floating point math. The bottom line is that RISC has gone over the top, making things simpler than we really want.
Re:They've been right all along (Score:2)
Maybe he had delusions of this type of grandeur, but no footing in reality at the time.
The 68K didn't even exist yet. The Apple Lisa, one of the first 68K machines, was still about 5 years away. Even the 6502, which the Commodore 64 and Apple II used, didn't even exist yet.
The obvious 8-bit choice would have been the Z80 or the 8080 (more likely the Z80). The Z80 was the standard chip for the late-70s CP/M machines. It is possible that Intel had some input on using the 8086 instead of the 8080. The reason for the 8088 was that it used an 8-bit bus, which allowed IBM to leverage cheaper motherboard designs. 8086s were simply too expensive to build at the time - an 8086 PC with a green mono monitor and two floppies was somewhere around $6000.
Re:Windows again. (Score:2)
I wouldn't be surprised if winelib was ported though.
Re:Macs and backwards compatibility (Score:2)
Wow, what an outrageous load of FUD this is.
As for Mac users howling, you bet they did. When I told a guy I worked with that his 3-year-old Mac was too old to play a game he got for his young daughter, he was pissed. He should be; by ditching compatibility like that, Apple destroyed any value his 3-year-old computer had.
Well, you shouldn't lie to people like that. I run UT, Falcon 4.0, and a number of other interesting things on my 3 year old Mac. Maybe your Mac-user friends will wise up and stop asking you for advice on things you know nothing about.
--
Re:Hyperbole. (Score:3)
A reasonably configured used O2 in perfect condition can be had for under US$1500, about the same price as a midrange peecee. An R10k High Impact Indigo2 can be had for about $1300-$1700 as well. That's a fully 64-bit system with a 200 MHz processor (faster than it sounds) and graphics faster than all but the high-end peecee offerings. Even Sun Ultra 2 systems, which are also fully 64-bit and offer dual CPU capability, are less than $3000 in reasonable configurations today, and it's even possible to get them new. You can say what you like about high workstation prices, but in the real world, clever individuals can get nice, if slightly out of date, systems that offer good to excellent performance for prices comparable with peecees.
On top of that, the only unix box hardware I really appreciate is SGI, but the only commercial unix I would run is Solaris - Which is a fundamental incompatibility.
That seems odd. Both SGI and Sun build great machines, but I'd rather put a fork in my eye than have to use Solaris. IRIX is ok most of the time though. IMO the only acceptable OS for Sun boxes is Linux. Try it; you'll like it.
Feel free to feel like you have a larger penis because you've left the PC platform
[Looks down] Looks pretty standard to me. A refusal to compromise with idiocy doesn't come from the penis, it comes from the brain, and I'm pretty sure ours are within 20% in size.
Until the cost of systems based on other processors drops
It has. See above.
the number of available applications must increase...
I don't know about you, but I have solid applications - we're talking about things that actually work reliably here - for every task I might possibly want to do on a Unix box. I challenge you to name a task I can't do on a Unix box. That Turd or whatever other flavor of the month isn't available isn't important - what matters is what tasks you can do, and how easily you can do them. I've found that Unix systems offer more applications than I could ever find a use for.
Re:Coincidence (Score:2)
Not quite... while chemical dependence is a good way to describe my relationship with nicotine, it's not with THC. I could never get anything done before I started smoking pot either. I was always very lazy... the only time this isn't true is when I smoke some nice kind bud.
--
Reminded of what Torvalds said (Score:4)
Look at that sideways. That is *exactly* what IBM did to make code binary portable. That is the principle that the AS400 uses. If you peek in well-known and widely ported projects (e.g. Perl) you will often find that they take the same approach. (For good reason!)
The key to wisdom lies in seeing how good ideas about foo look like good ideas about bar and then trying to apply that. There is a good lesson here about portability...
Cheers,
Ben
Re:Macs and backwards compatibility (Score:2)
I also am somewhat dubious of your claims concerning the 3-year-old Mac. In '97 you're talking 604s at 180+ MHz (4400s, 7300s, 8600s, and 9600s) that should run all of today's software without any problem, at least as well as a P200 would on the other side of the fence. The only PowerMacs without an upgrade path to at least a G3 are the original PCI-based Macs (7200s). Those were doomed machines from the start though (the whole Carl Sagan thing ;)).
The only machines that have been totally ditched, support-wise, are the 68K machines. Macs based on the 68030 and below (9+ years old) were dropped with OS 8, and 68040-based machines (7+ years old) were dropped with OS 8.5. That's all right though; just throw a BSD on the thing and go to town.
The Performas are a different story (although it's been nearly 5 years since they quit making those junkers) but then again so are the PS/2s.
Redundant? (Score:2)
Whoever you are I hope you run out of points soon.
When is Intel going to start following Microsoft? (Score:2)
x86 still a LONG way from obsolete! (Score:2)
I find it very amusing that people think the x86 CPU architecture is obsolete.
That may have been true for the 8086 with its 1 MB memory addressing limit and the 80286 with its 16 MB memory addressing limit, but once the 386DX with its 32-bit flat memory addressing scheme became available, in theory the x86 could address as much as 4 GB of system RAM! It's mostly physical limits on the motherboard and limits in the memory controller chip that have kept computers from addressing all 4 GB until now.
Besides, the x86 architecture has undergone an unbelievable increase in performance. Remember when the first 386DX CPUs were rated at a meager 12 MHz 15 years ago? We now have Pentium IIIEB and Athlon CPUs running at around 83 times the clock speed of the original 386DX, with vastly better memory management.
Besides, very few programs for stand-alone workstations demand more than 256 MB of RAM nowadays. And most server applications run extremely well with 1 GB of RAM, especially on the Linux server machines.
The big bottleneck is no longer the CPU; it's mostly hard disk access times and access times through the network adapter card that hold your system back. Now you know why RAID 5 hard drive arrays and Gigabit Ethernet NICs are used on high-end servers.
However, I do see that non-x86 architectures may become more prominent in the next three to four years. Projects such as LinuxPPC will allow Linux applications to run on systems that use the PowerPC CPU, a CPU with superb memory addressing capability and an equally superb FPU. If Linux becomes popular enough, we might even see a revival of the PReP platform in an updated version running LinuxPPC, machines sold to people who need serious FPU processing power such as engineers and computer animation artists.
Re:The x86 has been obsolete for years (Score:2)
What are you smoking? Alphas suck power like there's no tomorrow, and my 600 MHz EV56 heats my room. I moved it into another room, and now with only my P2 my room is much cooler (and quieter for that matter, but that's not the processor's fault). Ever seen an Alpha laptop? Wanna know why?
(of course, somebody is going to respond saying "I've seen one!" but there were only like 1 or 2 models made so save it.)
You're right about the PowerPC though.
--
A little premature to call it obsolete (Score:3)
Hardly. Whilst I don't know of anyone that likes the x86, saying that it's obsolete is extremely premature - look at the increases in processing power that have gone on over the last few years and are still continuing with things like AMD's forthcoming Sledgehammer.
The fact is that despite its poor design, chip makers have done some amazing things to push it to greater speeds - the Athlon CPU looks and works nothing like the 8086; they just happen to run the same instruction set. And this year we'll see the GHz barrier broken - hardly the sign of an "obsolete" chip, is it?
As long as the chips are still getting faster and people are still buying them I think calling the x86 platform obsolete is incorrect. A pain in the ass? Sure, we'd all like a brand new chip design, even Intel, but it works, and it's still growing.
---
Jon E. Erikson
software (Score:2)
That's not what's holding me back. It's that $15K for Alias|Wavefront Power Animator, or $5K for Maya
Of course, as soon as Maya gets released for OS X, I'm sure I could get it illegally at all the usual places...
Pope
Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
They've been right all along (Score:2)
Re:perfect?? (Score:2)
Re:x86 is popular to hate, but not that bad really (Score:3)
Not even that. All we have to do is apply multiple cycles of Phil's Law of Program Optimization:
That makes it a perfect match for your single-instruction x86.
...phil
What does obsolete mean? (Score:3)
obsolete
adj.
1) No longer in use: an obsolete word. See Synonyms at old.
No. x86 is not obsolete.
2) Outmoded in design, style, or construction: an obsolete locomotive.
Yes, x86 is obsolete.
Re:Do ISA's even matter these days? (Score:2)
Re:Space electronics (Score:2)
Re:True definition of RISC (Score:2)
Windows again. (Score:2)
For that you can thank Microsoft's amazing track record at porting to different platforms.
Otherwise you wouldn't see so many companies trying to keep the architecture limping along.
--
Re:Outdated yes. Obsolete no yet! (Score:2)
Don't believe everything they tell you. Since the introduction of HotSpot there's nothing wrong with the execution speed of Java (check the recent story on Java performance; it's actually faster than C in some cases). The reason why Java applications (graphical and multimedia in particular) are still slow is because of the way its libraries are implemented. Particularly Java2D and the Swing classes are slow compared to their C/C++ counterparts. A faster interpreter does not help very much. In fact, the early Swing implementations ran faster with the JIT disabled! So, unless Tao completely reimplemented Swing and other libraries, I'd be surprised if it performed significantly better than other JVMs.
I really liked the discussion in the article about obsolete versus irrelevant. The real thing that has become obsolete is not the ISA but the static compiler that ties programs to it. The only reason x86 is still around is because there is no convenient way to convert a binary x86 program to, say, an Alpha binary on the fly without performance loss. x86 will be around as long as people choose to statically compile their programs. The interesting question is not how long x86 will be around but how long hardware implementations of x86 will be around. Hardly anybody codes directly to an instruction set anymore. A simple recompile will port Linux applications to other processors; often no changes are needed to the code. So the dependency of a program on a specific ISA is not functional. In fact, as HP's Dynamo shows, it is counterproductive.
Transmeta's Crusoe is the first of a generation of processors that can execute x86 efficiently without having a hardware implementation of it. My guess is that in five years or so, all major chip manufacturers will have stopped implementing instruction sets in hardware.
Java is ahead of its time in a way, since it is not dependent on specific instruction sets. The HotSpot idea is not fundamentally different from what Crusoe does.
x86 is popular to hate, but not that bad really (Score:4)
Also realize that all of these instructions are fixed at 32-bits on most chips. That's 32-bits to copy a register, 32-bits for a return, etc. This may simplify the hardware, but at the expense of bloat. So you need a bigger instruction cache.
Is the x86 perfect? No. If you look at an x86 reference, you'll find that over 50% of the instructions are either (1) really old things that mattered in the 1970s but not any more, like daa; (2) instructions from the 8086 and 80286 that run poorly on more recent chips, like lods and leave; (3) along the same lines, instructions for managing segment registers and other 16-bit relics; (4) MMX or Katmai related; (5) specialized instructions that we could easily live without, like the set family. If you take all of this out, you pretty much have a RISC chip. And you'd still be compatible with 95% of the code that runs on the Pentium II and III. I expect we'll be seeing this kind of thing soon from either Intel or AMD.
Is Unix Obsolete? (Score:3)
Re:Clustering? (Score:2)
Why do so many people misunderstand? It's not simply like having a 300-processor machine.
Re:two stupid questions: (Score:2)
Re:NUMA and Intel Bus architecture (Score:2)
Re:Twenty Years? More like thirty. (Score:2)
To be fair to the Intel engineers, when the 80286 was being designed, real mode was supposed to be a bootstrap into protected mode. The idea was that once you were in protected mode, you would stay there. The problem was that these decisions were made years before the 80286 went into production, before there was a huge base of real-mode software (8086 assembler) that couldn't easily be modified to run in protected mode.
Re:Translation and virtualization (Score:2)
Re:Clustering? (Score:2)
Re:almost, but not quite ... (Score:2)
Re:x86 is popular to hate, but not that bad really (Score:2)
Ah, true, I was thinking in terms of the RISC CPUs that have gotten widely used in consumer hardware, like the SHx (in the Sega Saturn & Dreamcast), MIPS (in some CE devices and Sony's game machines), and the PPC (Mac, of course). None of these chips have the register window features of the SPARC, so quite a lot of code gets generated for subroutine entry and exit--up to 20% or more of the total code in a project, in many cases.
What's wrong with this picture is that writing very small subroutines has become the accepted norm--and rightly so--but most hardware is not designed for that style of programming. Increased emphasis on inlining has been the result, but it sure would be nicer to just have single-cycle subroutine calls without the needless overhead. The SPARC method sounds good.
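A quick C illustration of the tradeoff (the numbers are ballpark, and the function names are made up):

    struct point { int x, y; };

    /* The useful work here is a single load, but called out-of-line it
       drags along argument setup, prologue/epilogue, and the return --
       on a register-poor target that's easily 15+ instructions. */
    static int get_x(const struct point *p) { return p->x; }

    int sum_x(const struct point *pts, int n) {
        int i, total = 0;
        for (i = 0; i < n; i++)
            total += get_x(&pts[i]); /* a decent compiler inlines this;
                                        if it doesn't, you pay the call
                                        overhead n times */
        return total;
    }

    int main(void) {
        struct point pts[3] = { {1,2}, {3,4}, {5,6} };
        return sum_x(pts, 3) == 9 ? 0 : 1;
    }

Register windows attack the same overhead in hardware instead of at compile time, which is why the SPARC approach sounds attractive here.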
Back in school... (Score:5)
Way back in 1989 or 1990 I was taking a class in Assembler and the teacher was remarking that the x86 was making itself obsolete. IIRC, he said that the memory addressing was horrendous and the whole reason they stuck with it was for backwards-compatibility.
There comes a time when backwards-compatibility needs to be sacrificed for genuine improvement or development. Apple no longer supports the 68k series of processors, barely supports any PPC lower than a 604, and is moving strongly toward G3 (or G4) only. Mac users howled, but it was expensive and counterproductive to try to keep too much backwards compatibility. Use older OSes and older apps for older computers and let newer computers become truly cutting-edge. IMHO there's no need for gigahertz PIIIs or Athlons to be able to run WordStar.
Just my $0.02.
Stupid title. (Score:2)
Re:True definition of RISC (Score:2)
Also, IBM was doing research on RISC in 1974, which predates the x86.
Re:Address space is going to kill off the x86 (Score:2)
Oh? Like the Intel 4004 - 8008 - 8080 - 8086 - x286 - 486 - Pentium - Pentium Pro - Pentium II - Pentium III line? Or the Motorola 68000 - 68010 - 68020 - 68030 line?
Hacking around address space limitations has been done many times. In fact, Pentiums and up have a 48-bit segmented addressing mode, and some of the parts bring out 36 bits of address lines already.
There's a lot to be said for segmented addressing. It has a bad rep in the PC world because it was such a pain in the pre-386 era, but the modern x86 machines have it done right. We may well see expansion of the x86 architecture beyond 4 GB. The Merced VLIW machines may be another Intel dead end, like their iAPX 432, i860, and i960 lines.
Re:x86 is popular to hate, but not that bad really (Score:2)
This was true in the days of 16-bit code, but has never been true for 32-bit x86 code. You can use any register for any addressing mode or operation. In the 16-bit days you had to use either bx, si, or di for memory addressing, for example, which was horrible.
Re:Hyperbole. (Score:2)
One interesting thing here is that "using" a CPU architecture is such a fuzzy concept these days. I mean, on a good day, I might write a couple of hundred lines of C code, thereby implementing new functionality in my current project, perhaps making new demos possible or whatever. But, that code was C code, which almost by definition is more or less independent of the fact that I use AMD's take on the x86 ISA to run it. The code would be the same on a PowerPC, SPARC, Alpha, MIPS, or any other reasonable processor. So, am I really using the ISA itself? I spent the money (um, no, my employer did), and I run the system for 10 hours a day, but I still don't feel like I'm primarily using an instruction set architecture.
Perhaps the largest group of people who make sense as a "target audience" for a new ISA is the various compiler writers out there?
Back in the old days of the Amiga, most programs were written in assembler, and they would only run on MC680x0-based Amiga machines, of course. Then it made a lot more sense to think about programming as actively using an ISA - in higher-level languages, it doesn't. Of course, things such as the POSIX standards for operating system interfaces also help make the code less tied to specific machines.
But then again, as long as Intel keeps introducing three or four (or is it more?) new implementations of their architecture every year, each time with new refinements (artificial life support?), it doesn't make sense to talk about it as being "dead", either... Although I think I must add myself to the camp of people waiting for something else to take over. Once, we had this dream that it would be the PowerPC, but that seems to have failed.
Um, end rant. I guess I just confused everybody else, now.
Re:What does obsolete mean? (Score:2)
Given that another poster has opined that there is a simple, empirical way to determine if something is obsolete (see if anyone uses it; if so, it isn't), which addressed only the first definition of obsolete, posting the definition is an excellent rebuttal and not in the least redundant (unless you refer to definitions #3 and #4 of redundant and assume redundancy on the part of the Slashdot servers, which would make every word ever posted on this forum redundant - a silly exercise in sophistry).
According to the dictionary definition of obsolete, the x86 architecture qualifies, despite Intel apologist arguments to the contrary. It is obsolete hardware which is, alas, still in widespread use. Horses and buggies are obsolete, but you still see them on the roads in Central Illinois and Pennsylvania. This doesn't make them any less obsolete. When the oil is gone, automobiles may well become obsolete while horses and buggies become the pinnacle of technology. Unless, of course, the patents on hydrogen cells the energy cartels are keeping under wraps are finally freed, but that's a diatribe for another day.
Re:x86 is popular to hate, but not that bad really (Score:2)
Sure, Intel could have killed their version of the x86 by not issuing the 486. In that case, business purchases would have simply turned Cyrix and AMD into huge companies, because their chips would have solved the business problem of having to run legacy code. Intel either would have had to go back to x86, or have become a non-player in the desktop market.
Steven E. Ehrbar
Read the article first! (Score:4)
It's not trying to say "the x86 ISA is obsolete", far from it.
x2, x4, x8......x86! (Score:4)
obsolete isnt the real issue (Score:2)
Is the x86 architecture obsolete? Sure it is, but there's so many of 'em out there - or at least chips that bear as much resemblance to 'em as a Ferrari does to a VW Bug - but hey, they are both cars and you drive 'em in more or less the same way...
The issue of whether the design is dead or not will never be settled by the question "Is it obsolete?" but rather by "Does it still work?" The 486 and 386 should already have gone the way of the dodo by any standard of obsolescence, but those two old boxes suit me very well, thank you, as a Linux firewall and NFS server respectively. If they ever end up so short on power that they stop working in those roles then they will get upgraded, but until then the upgrades are limited to the usual round of patch it, break it, patch it again :) Now admittedly I have no intention of using either as my main workstation (that's a K6), but for as long as they do the job I need 'em to, those older chips may be obsolete but they sure ain't dead.
# human firmware exploit
# Word will insert into your optic buffer
# without bounds checking
Re:x86 is popular to hate, but not that bad really (Score:2)
If I remember my physics correctly, an order of magnitude is a power of ten; three orders of magnitude would be 10^3, or a thousand. Thus you're saying the x86 needs a single instruction and about ten thousand registers.
Obsolete Technology? (Score:4)
But technical obsolescence isn't really that relevant; the market factors governing success really come down to the presence of a transparent upgrade process - Transmeta's Crusoe chip, for example. Something may be technically obsolete, but it is not socially or economically obsolete. Since we live in an economically governed society, not a technically governed one, the principles that affect the growth and distribution of new technology are economic, not technical.
Thus, we see x86 and DOS compatibility, two of the first and most popular (economically) primary personal computer architectures, resilient even today. One might note that it is proprietary technology that is resistant to change, and that open architectures (like ARPANet => Internet:TCP/IP) evolved dramatically. One can only speculate, of course, what would have happened if the internet had been composed of closed minds and standards (and we can only agree to disagree at this time), or equivalently if DOS/x86 had been developed by open minds with open standards in mind (again, only agree to disagree if you do).
So is x86 obsolete? Yes. But there is no clear, economically sound upgrade path at this time, though we are certainly seeing ones arise, especially with the advent of the internet and the universal movement "community" on that internet.
Re:Windows again. (Score:2)
That's part of the benefits of running OSS apps. In principle, your whole system is just a recompile away from a hardware switch, if you think some non-x86 platform is a better buy.
In principle I suspect there are some 32/64 bit issues, but if a hot new platform came out that gave a great bang:buck ratio, the world of OSS programmers would be all over it in a heartbeat. When/if Linux ever gets up to 20% of the desktop market share, you may start seeing some "breakaway" desktop architectures.
[Posting from my freshly downloaded M16.]
--
Re:Address space is going to kill off the x86 (Score:2)
The 80286 had a 16 MB addressing limit, but the x86 architecture still survived.
The Athlon faces a 4 GB addressing limit, but AMD is developing a 64-bit version with a potential 16 EB addressing limit.
Addressing limits are *not* why the x86 won't survive.
Steven E. Ehrbar
Re:x86 still a LONG way from obsolete! (Score:2)
All bad marketing.
Translation not all that new . . . (Score:2)
http://www.digital.com/amt/fx32/fx-relnotes.html [digital.com]
Re:Windows again. (Score:3)
Agreed. However, the server market isn't quite so dependent on x86 compatibility (yet). Do the high end chips (Xeon, Itanium etc.) still contain the full instruction set, or have they dumped any of the legacy instructions that were only present to support backwards compatibility? After all, how many people actually run MS-DOS 2.x on a Xeon? My guess is that if they don't already do this, future generations of processors probably will, as extreme backwards compatibility becomes less important. Of course, Win2K could prove to be the wildcard here, as MS try to blur the boundaries between desktop and server. Yes, Linux does the same, but Linux has never had to run 16-bit code...
It always has been (Score:2)
The 68000 had a MUCH better architecture than the 8086 - nice linear memory space from 0 to as much as the memory bus could hold - bytes the right way round in long numbers - a delight to program for us old Hex hackers...
Had IBM chosen a 68000 processor, the history of personal computing would have been very different, and very probably a whole load better. Then again IBM deliberately knackered the PC design, so maybe they chose the inferior processor deliberately?
Re:True definition of RISC (Score:2)
William Stallings lists four primary characteristics of the RISC architecture: One instruction per cycle, register-to-register operations, simple address modes, and simple instruction formats. With one simple instruction per machine cycle, CPU idle-time is significantly reduced. RISC instructions are hardwired into the chip as opposed to residing in a microcode ROM, theoretically reducing execution time by eliminating references to the microcode program as well as freeing up valuable chip space. Simple instructions greatly reduce the complexity of the chip, and eliminate such complications as performing an operation on several parameters while at the same time calculating effective memory addresses.
It doesn't say anything about the number of instructions.
Legacy problems (Score:2)
Ugh, very true. The Wintel combination has meant that each of them has held back to keep in line with the other, and because of this neither has been able to break away from the past.
Case in point - real mode. In real terms it has been obsolete since the 286 introduced protected mode and the 386 enhanced it. But it's only now, with Windows Millennium and 2000, that real mode is no longer used by MS operating systems. This means that the chips require extra transistors for real mode and virtual 8086 mode, making them more expensive and hotter, and Windows has required extra code to handle legacy apps which use them.
No, you might have got a speed increase by changing the core from CISC to RISC, but you could also get one by just removing all the extraneous crap that's still there. Hopefully Intel or AMD will abandon their wish for every new chip to be able to pretend it is an 8086...
---
Jon E. Erikson
Re:x86 is popular to hate, but not that bad really (Score:2)
Re:two stupid questions: (Score:2)
RISC (Score:2)
Re:Hyperbole. (Score:2)
Depends on your point of view... (Score:2)
This said, the x86 instruction set really does have a lot of problems. This reflects the time in which it was made quite well. Very few chips made today have these problems; they've learned from x86's mistakes. Intel itself has hacked bits onto the x86 ISA for decades now, trying to patch up the problems, but they've also made the ISA a complete mess in doing this.
Does x86 have a performance limit? Theoretically, no; it's an instruction set, not a chip. But it's one hell of an instruction set to burden a processor with. IBM chose that chip for its PC line specifically to hobble it; that way the PC could never compete with IBM's then-profitable minicomputer line. This seems to have failed rather miserably, thanks to the amazing work done at Intel/AMD/etc. In time, x86 will die; every computing concept and program dies given enough time (anyone here still seriously use VisiCalc?). Frankly it is past its prime; cleaner architectures have existed almost as long as x86 itself. But I suppose we should be patient. The day will come (probably when/if Intel finally gets IA-64 out the door).
Re:RISC (Score:2)
Terraflop would have something to do with Land crashing or something.
Re:Outdated yes. Obsolete no yet! (Score:2)
Also note that their VM is about PersonalJava, which is a slimmed-down version of Java for embedded machines. Probably the competition uses an interpreter rather than a JIT to save on memory usage (this would explain the performance difference). No doubt speed is useful in some situations, but the real bottleneck of embedded machines is usually memory size. The more memory you have available, the more features you can put into the tiny space.
In any case, thanks for drawing attention to an interesting virtual machine. Diversity is a good thing.
Re:Read the article first! (Score:2)
Re:Outdated yes. Obsolete no yet! (Score:2)
Time, Time, Time (Score:4)
I believe it is generally understood that the x86 architecture is not the most superb set of instructions we could get. RISC obviously has much value, and the newer embedded systems will become more and more the wave of things to come. However, it's going to take some time. Here's what must happen before a new standard (whatever that is) is accepted:
Re:Address space is going to kill off the x86 (Score:2)
The 286 ISA was a superset of the 8086 - any code that used protected mode was *not* backwards compatible. Ditto the 386 - it added new modes that were not backwards compatible. However, the 386 ISA has stayed mostly unchanged (notable exceptions include MMX, 3DNow!, and whatever Intel's latest hack is called) through the days of the 486, Pentium, PPro, PII, and PIII, as well as the Cyrix and AMD equivalents.
Yes, the weird-ass segmented addressing modes exist, but I haven't seen anybody show any enthusiasm for trying to *use* them.
AMD's Sledgehammer proposal might successfully extend the x86 architecture to support a true 64-bit address space. It might be a horrible flop. Whichever way it goes, Sledgehammer code will *not* run on anything other than a Sledgehammer processor. Ergo, Sledgehammer is a new ISA, related though it may be to the original x86 one.
Re:Hyperbole. (Score:5)
Did you, in fact, read the article? Hannibal said as much in his article. Obsolescence is the wrong question here; timothy [monkey.org] should be ashamed of himself for titling this Is The x86 Obsolete?.
Here's the short version for people too lazy to read the article or too dumb to understand what Hannibal is talking about:
Due to the incredible number of programs written for the x86 architecture, machines that execute x86 instructions will be around for some time yet. Everyone agrees (even Intel) that x86 is not a good ISA (instruction set architecture), but the ability to run all the programs written for it makes it too costly to scrap. In order to achieve better and better performance, the current generation of microprocessors (Athlons and PIIIs) emulate x86 in hardware. The actual execution on these machines takes place using a completely different, RISC-style set of instructions (x86 being CISC, for those who don't know).
This information addresses only half of Hannibal's article. The other and more interesting half describes the latest ideas computer architects have for circumventing the problems of the x86 ISA. The primary advancement is translation of x86 instructions into another architecture; this translation occurs only once, as opposed to emulation, and can be very aggressively optimized for the particular hardware it is running on because it is performed at runtime. Because the performance hit is only incurred once and because of the further, machine-specific optimizations, machines which execute x86 instructions will continue to increase in performance.
Furthermore, executing x86 instructions by translation means that computer architects have the freedom to change the native architecture of their machines without worrying about executing legacy code. These issues were addressed by emulation; translation is a further step in this direction.
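If you want the flavor of it, here's a toy translate-once loop in C. (Everything here is made up for illustration; real translators like Dynamo or Crusoe's Code Morphing software are enormously more sophisticated.)

    #include <stdio.h>

    typedef int (*native_fn)(int);

    /* Stand-in for the translator's output: pretend we decoded the x86
       bytes for "add eax, 5" and emitted equivalent native code. */
    static int translated_add5(int x) { return x + 5; }

    static const unsigned char *cached_pc = 0; /* one-entry code cache */
    static native_fn cached_fn = 0;
    static int translations = 0;

    static native_fn translate(const unsigned char *pc) {
        (void)pc;                /* a real translator would decode from pc */
        translations++;          /* the expensive, optimizing step */
        return translated_add5;
    }

    static int run(const unsigned char *pc, int x) {
        if (pc != cached_pc) {   /* miss: translate once, cache forever */
            cached_fn = translate(pc);
            cached_pc = pc;
        }
        return cached_fn(x);     /* hit: straight into native code */
    }

    int main(void) {
        static const unsigned char guest[] = { 0x83, 0xC0, 0x05 }; /* add eax,5 */
        int i, v = 0;
        for (i = 0; i < 1000000; i++)
            v = run(guest, v);
        printf("v=%d after %d translation(s)\n", v, translations);
        return 0;
    }

An emulator pays the decode-and-dispatch cost on every one of the million iterations; the translator pays it once and runs native thereafter, which is where the aggressive machine-specific optimization gets room to pay off.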
As I said before, the obsolescence of the x86 ISA is a ridiculous and unanswerable question. However, I believe that the x86 ISA will continue to be a relevant problem until we leave 32 bit machines behind for 64 bit and larger.
Jonathan David Pearce
Re:Address space is going to kill off the x86 (Score:2)
If you're going to have to change ISA, and cope with all the nuisances that entails, why wouldn't you swap to the one offering the very best price/performance compromise? As far as backwards compatibility goes, you can run x86 code on the Intel/HP IA-64, and, if it comes down to it, on the PPC and Alpha through emulation. What's the special attraction of the Sledgehammer?
Re:Pentium Pro (Score:2)
legacy code issue overrated (Score:5)
The x86 ISA has been closely married to the fate of a single operating system for quite some time now. After the shift from CLI to GUI, most of the compatibility issues in software have been WRT how to talk to the OS, not anything underlying. Nobody talks to the hard drive or keyboard directly--you talk to the driver. Likewise, the only programs that generally need to understand the underlying architecture are compilers.
There is so much standardization at levels above the processor instruction set that particular CPU architectures matter only while writing compilers and operating systems. Open source software distribution is making architectural irrelevancy even more thorough.
I will freely admit that there are applications which need good familiarity with the underlying hardware; most of these, however, are drivers. The rest are heavily optimized scientific computing tools that need to bum every single instruction out of a loop because the loop is going to run sixty-nine trillion times.
As for the rest of the world, though, nearly transparent portability of operating systems and application suites across architectures is a reality that lags only a few hours or days after the compiler is written. I'll offer two examples: Unix and Java.
When does compatibility with prehistoric applications become a reality? In places other than the x86 architecture. I do DBA work for an RBOC, and yes, we have ancient COBOL and FORTRAN applications that first ran in the 1960s. For those groups, Y2K was a genuine nightmare. But all those apps run on MVS and other mainframe environments--not exactly the x86's stomping grounds. As for other, pre-x86 micro architectures, well, I can run all my old Atari 400 apps under an emulator on my Pentium 200, because I have cycles to spare even to a badly written emulator.
So, no, the x86 isn't obsolete. The newer generations have some obsolete components, though.
--
Always a tradeoff (Score:3)
What we lose in the x86 is performance. While I'm quite aware of the heroic measures taken by AMD, Intel, Transmeta, et al. to run x86 code quickly, you can't escape the fact that an optimizing compiler for x86 has extremely limited power of expression.
In computer architecture you want the ISA to be such that the compiler can do what compilers do best (static analyses over large regions) and hardware can do what it does best (dynamic adjustment to unpredictable runtime conditions). A bad ISA can bottleneck both the compiler and the hardware. x86 is poorly balanced in this regard. So's IA-64 (in the other direction), IMO.
On the plus side of the tradeoff, with x86 you get billions of dollars in fab R&D and commodity pricing, not to mention a huge installed base. It's never going to go away. Sigh. But life would be so much better for compiler writers, systems software people, and (indirectly!) users if all this business were centered around a nice 64-bit ISA rather than the x86 monstrosity. I very much enjoyed using the Alpha ISA on the Cray MPP machines and commend it as a model among the publicly known ISAs. No condition codes, delay slots, segments, special-purpose registers; just lots and lots of registers.
Re:Hyperbole. (Score:2)
x86 is fairly cheap, highly available, and easily self-serviceable. Therefore it is a quasi-jack-of-all-trades, master of none. That's not a bad thing in my book.
Bad Mojo [rps.net]
Re:Time, Time, Time (Score:2)
I would imagine that depends on your definition of usefulness. Most people would think that companies would jump at the time that it would be more cost-effective or something along those lines.
People grow very attached to old systems, and often get burned by vaporware upgrades. Often ancient systems are in place, limping along, at most businesses that have been around for longer than a decade. There was a flat Tandy PC with a four-line LED screen (no, not LCD), and the Palm Beach Post reporters still use it, because it has a real touch-type keyboard and runs for weeks on four or six AA batteries. Plenty of COBOL and FORTRAN routines are running happy and live deep in the bowels of many companies. For a more recent example, FoxPro and dBase are alive and well, and *new* apps are being written in-house, because all of the company's business is stored there.
So, all of this is considered "Obsolete", but businesses use it, and buy new equipment, hire IT people and maintain and grow their "obsolete" equipment.
Now, *home* use is a completely different matter. You have a hardcore group of people who still use Apple ][s, people who are looking for NES cartridges, 2600 joysticks, and eight inch floppies, but those are geeks doing it for enjoyment.
There are plenty of people, however, who are actually *using* 386 class machines. They do email, type reports, print them on dot matrix printers (I get the question "where can I get tractor feed paper?" more and more often, rather than less). They can't afford to upgrade, either because they don't have the money, their perception is that new computers still cost thousands, or they don't consider $300 worth it.
Obsolescence is a matter of perception, not a matter of logic.
--
Evan
Re:Back in school... (Score:2)
True. There's also no need to delete Lord-knows-how-many programs written and compiled for x86 machines. Using the x86 ISA is a question of economics, not of "genuine improvement or development." The field of computer architecture has come a long way since some Intel engineers sat down and designed the 8086. I hope nobody disputes that.
But today architects have some awfully good ideas about how to squeeze more performance out of a machine that has to be able to execute x86 instructions. These sorts of breakthroughs keep the performance of x86-compatible machines climbing, with minimal performance hits compared to other architectures. If maintaining x86 compatibility were so "expensive and counterproductive" that it made sense to leapfrog from architecture to architecture, then modern (i.e. designed in the last decade) architectures would rule the market in terms of sales and performance. They do not; hence, maintaining backwards compatibility does not significantly adversely affect an architecture.
What are your gripes with the x86 ISA? It is rather clunky and, from an academic standpoint, not optimal. Also, I'm sure the Intel engineers curse themselves (or their predecessors) on a weekly basis. ;) In spite of that, though, machines which execute x86 instructions have the advantages of low price and large amounts of software; what more can you ask for? Lucky you, I'll tell you:
The two problems of the machines which execute x86 instructions currently are power requirements and die size, because all the current schemes for circumventing the problems of x86 require additional hardware. These are not serious issues currently because of the widespread desktop computing paradigm. As the market moves away from big, stationary computers and towards smaller devices, x86 will become less and less viable. You can already see this trend happening. How many hand-held devices can you think of that execute x86 instructions?
Jonathan David Pearce
Re:x86, die die die! (Score:2)
Aside from this, the IA-32 architecture is actually considerably simpler than most other architectures to program on.
A few examples:
IA-32 does all alignment checking for you. There is no problem doing a store split across a line or even a page; the microarchitecture takes care of this. On something like Alpha, this is illegal and will generate an exception, and the OS must do two stores to perform the operation.
Cache coherency. IA-32 has very well defined cache coherency protocols, and again it works in all cases, such as split words. Many architectures, including Alpha, leave coherency to the programmer, and you have to do locks yourself. This is extremely complicated, especially for false sharing, when it is not clear what is on the same line.
Memory ordering. Ditto the above. Many of the RISC architectures have very chaotic memory ordering rules, especially the Alpha, which does all sorts of weird out-of-order and speculative loads, so you have to insert fences everywhere. A real mess.
Despite this, IA-32 is still the fastest architecture around. The fastest CPU currently shipping on SPECint2000 is the 1 GHz Pentium III. The RISC architectures are more difficult to program, but are also slower!
One good benefit of the CISC IA-32 architecture is instruction density. You can code in two bytes (a CISC ALU instruction, for example) what it takes 8 bytes to code in RISC (a load/store, then the ALU). When your code density is 2x-4x greater, this helps tremendously for i-cache! Also, it really cuts down on the relatively expensive decode process (which is really the only expensive part of the IA-32 architecture).
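To make the alignment point concrete, here's the dance portable code has to do (a minimal C sketch; the Alpha behavior is as described above, with OS fixup for the trap):

    #include <stdio.h>
    #include <string.h>

    /* Load a 32-bit value from an arbitrary byte address. IA-32 does
       this directly in hardware, even across a cache-line or page
       split. On a strict-alignment machine the direct dereference can
       trap, so portable code goes through memcpy (which the compiler
       turns into the cheapest legal sequence for the target). */
    static unsigned int load32(const unsigned char *p) {
        unsigned int v;
        memcpy(&v, p, sizeof v);
        return v;
    }

    int main(void) {
        unsigned char buf[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
        printf("%08X\n", load32(buf + 1)); /* buf+1 is misaligned */
        /* unsigned int bad = *(unsigned int *)(buf + 1);
           -- fine on IA-32, may trap or need OS fixup on Alpha */
        return 0;
    }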
Re:x86 is popular to hate, but not that bad really (Score:5)
Also realize that all of these instructions are fixed at 32-bits on most chips. That's 32-bits to copy a register, 32-bits for a return, etc. This may simplify the hardware, but at the expense of bloat. So you need a bigger instruction cache.
This really depends on your instruction mix. There are longer instructions on x86 too. And let's remember that simpler hardware means less die size, less heat, less power, and less cost. And remember that the SHx has 16-bit instructions, not 32. So on that architecture your code size will always be less than equivalent x86 code.
The bottom line is that x86 has about three orders of magnitude too many instructions and a similar factor too few registers. It exists without the grace of design or forethought. It's too big, too bloated, too hot, and more expensive than it needs to be. Programming it is a nightmare. The only positive thing I'll say for it is that the performance isn't terrible given its complete lack of design. This says good things about Intel's engineers. Of course, if they can do as well with x86, imagine how much better they could do with a decent architecture. In other words, if Intel manufactured MIPS and SPARC chips, they could crush the existing implementations in performance.
The x86 was obsolete 12 years ago. The sacrifice of sanity on the altar of backward compatibility is disgraceful and foolish. I don't use x86 any more, thank God. I just wish nobody else did either. We'd all be better off if x86 died the death immediately or sooner.
Re:x86 is popular to hate, but not that bad really (Score:2)
All in all though, it would seem that RISC and CISC have converged so much in recent years it's hard to tell them apart. Shame the same isn't true for the software that runs on them.
Macs and backwards compatibility (Score:2)
Yes, but lots of people can't stand Macs because they don't have any backwards compatibility. When I was going to buy my first computer I bought a PC because I knew any Mac I bought would not only be obsolete when I bought it, but also be unsupported by everyone - even Apple - within a few years. Plus Apple alters its hardware specs enough between models that you can't upgrade the hardware to get it compatible again... You just have to buy a new Mac after 3ish years. It's almost as bad as Microsoft, really.
As for Mac users howling, you bet they did. When I told a guy I worked with that his 3-year-old Mac was too old to play a game he got for his young daughter, he was pissed. He should be; by ditching compatibility like that, Apple destroyed any value his 3-year-old computer had.
Optimal Paths and New ISAs (Score:3)
--