Hardware

Chipmakers Angling For Support

defence budget writes "According to this article at CNet, what once happened with Intel and Microsoft might be happening with Linux, AMD, and Intel. Apparently, "In a sign of how strategic Linux has become, AMD and Intel are angling to lure open-source programmers to their future chip designs". I can't tell how the low-end market will react to this, but surely the high-end market should see the potential advantages of migrating to systems running on hardware custom-built for Linux?"
This discussion has been archived. No new comments can be posted.


  • Just think about it. Linux has the potential to take the market that Solaris, AIX, and HP-UX have had for years. If Intel and AMD can get Linux apps to perform as well as or better than the proprietary OSes, then they stand to make money, because it will be their hardware in the boxes, not Sun's, HP's, or IBM's. It would just seem to make sense to do what they are doing.

    I wish there was a spell checker plugin for /. posts :-)
    • Servers are Linux's market, but I don't see an x86 taking the place of a SPARC.
    • There's a long way to go before you can remove system boards/CPU's from live systems with an Intel/AMD and Linux combination as you can with Solaris/SPARC. Until then I don't think Sun needs to be overly worried.

      The market you're talking about is expensive. These machines aren't your average $2K PC with Linux/Windows. And let's face it, if you can afford a $500K machine, I don't think a copy of Solaris will break the bank.
      • And let's face it, if you can afford a $500K machine, I don't think a copy of Solaris will break the bank.

        That's true, but most Solaris machines don't cost anywhere near that.

        The Sun Blade on my desk has a single board and CPU in it, and two (non-redundant) hard drives; if I unplug any of these pieces, the system will stop. :)

        Now, I admit Sun makes a lot nicer machines than this one, so I certainly see your point, but a lot of the machines in the Sun/HP range could be replaced with x86 boxes. And Sun is way overpriced for the kind of performance it provides.

        • We are talking high end. What you have there isn't high end. And I'll agree that Sun equipment isn't cheap. But it's generally of higher quality than anything Intel produces at this level, and you also pay for the ability to do things like hot-swapping boards, etc. Also, Sun's tech support is very good in my experience.
          • Still, the article is not talking about price as an issue here.
            It's the quick availability of an OS for the new chip that matters.
            They talk about Microsoft, and how AMD and Intel hope that having Linux running on their chips will put pressure on Microsoft.
          • Well, "high-end" is obviously an extremely fuzzy term, so let me just say I don't think this is competition in the $500k market.

            However I think the original poster was thinking about the workstation market, and Intel/AMD machines might well be competitive there.

            HP already has x86 machines on offer [hp.com]. I imagine Intel and AMD would be keen to see their 64-bit chips in a similar sort of setup.

            Support will depend on the company who ships the workstation of course.

        • > machines in the SUN/HP range could be replaced with x86 boxes

          These are the computers Sun is trying to break into the x86 market with, though.
        • But don't forget, when you buy a Sun you're not only interested in performance, but in total system reliability. There are a lot of cheap, crappy commodity components in the PC world (I just had a motherboard die on me for no apparent reason; the CPU is still fine). With a system like a Sun, you probably don't have to worry much about components failing like that, or about heat dissipation issues (since the whole system is designed properly, rather than thrown together by the user from various off-the-shelf components).

          For my home Linux machine, commodity components are fine since they're dirt cheap, and if something fails, I can go buy a new one in a few days. But if you really don't want your hardware failing you at any time, it's probably a good idea to invest in something like a Sun.
  • LAMDTEL..
    sounds more like a hitech butcher or something.
  • What are people these days using computers for? What drives the market for high speed chips? --Games; that is what people are using Windows for. If a chipmaker can say that their chips are specially designed for l00nix, then people will buy it, and start using l00nix more. The main problem with this trend is the fact that a Windows user cannot at this point in time install and use l00nix like Windows: stick in the CD, and sit back and relax while the OS installs. l00nix is great for tweakers, hackers, and just plain h4rd-c0r3 people, but it is not ready for the general market; it is TOO custom. If tweaked enough, it WILL work on just about any configuration and system, but that is not good enough for gamers. If someone went the extra step and made l00nix more usable to the general public (I love it, I use little else, but I LIKE to tweak), THEN we could blow Windows away... (although admittedly, GNOME is a good start). I am not saying to make it necessarily more Windows-like, just more approachable in terms of usage. However, chipmakers starting the trend of "designed for l00nix" is most definitely good.
    • How many Windows users actually install Windows?
      How many are capable of it?

      I think the issue isn't how easy the OS is to install, or to some extent how easy it is to use; some would argue Windows is hard to use. The issue is getting OEMs to sell Linux boxen already ready to rock and roll. Once that happens, more apps will start to appear and Linux will appear on more desktops.

      BTW, have you installed Mandrake lately? It's the easiest OS I've ever installed, and I've installed everything from BeOS (RIP) to DOS 6.22.

    • Most hardware sales are going to companies, not individuals. And there the decision to buy high-speed CPUs is more of an "our computers are old, buy us new ones" thing from management than anything else. Large companies usually just get the best computers they can 'cause no one bothers to test for what they really need. Have you ever stopped to think of who buys those new systems and chips when they first come out? It's sure as hell not the home users - they can't afford it. Companies can.
      • It really depends on what type of machines you are talking about. If you are speaking of PCs, the companies are the LAST to buy them. As soon as the 1.4GHz T-Bird came out, my buddies and I were the first to buy it. H4rd-c0r3 gamers and techno-geeks are the first to buy the newest stuff, including CPUs. Companies wait a while until the CPUs become cheaper -- still good but not as expensive as the top-of-the-line stuff. For example, the 1GHz Duron costs $32. Also, they wait for companies like Dell to have clearances so they can stock up on, say, 100 1GHz desktop Dells. That makes them perfectly happy.
  • No Integration (Score:5, Interesting)

    by piecewise ( 169377 ) on Monday September 03, 2001 @09:32AM (#2247967) Journal
    Although certainly having a specially-designed chip for Linux systems would be nice, Linux will forever be fragmented in the nature of its architecture simply because of its open-source design. So I think the primary source of reliability will come from the kernel and entire system itself, not so much from the chip on which it runs. And clearly, one of Linux's strong points has been its portability across chip designs. I can run Linux on my G4, but also on a P3 system, if I were so inclined. There are so many Linux-based OSes out there these days.

    Also, are the chip companies even targeting Linux? It seems to me that they're interested in open-source. But open-source does not mean Linux. Open-source is much larger as a concept than Linux is. And of course, I imagine that the future will be this: open-source programmers will be lured away by dollar signs (not in a bad way -- but hey, everyone's gotta eat). The companies will have a vested interest in making sure that these programmers are not working on things outside of the company itself, and in fact will also require that parts of the systems they develop will be proprietary. Just like Apple does. Darwin is open-source, but Aqua, Quartz, etc., are proprietary systems. And Apple nabbed the top guy for BSD, did they not?

    I'm rambling now. But what I'm saying, basically, is that although I think this is primarily a good thing, the waters are still very muddy and the trail itself extends very far out.
    • by Carnage4Life ( 106069 ) on Monday September 03, 2001 @09:59AM (#2248017) Homepage Journal
      Although certainly having a specially-designed chip for Linux systems would be nice, Linux will forever be fragmented in the nature of its architecture simply because of its open-source design.

      1. The article is not about providing a specially-designed chip that runs Linux. The article is about the fact that chip designers are now interested in making sure Linux runs on their chipsets, especially now that it looks like Linux, due to its Open Source nature, will be quicker at supporting new chipsets than Microsoft's offerings, as witnessed by how long Linux has supported Itanium [linuxia64.org] versus Microsoft's recent announcement [yahoo.com].

        Similarly, it looks like Linux on AMD's Hammer chipset [x86-64.org] is already well underway as a project, while according to the article Microsoft has no current plans to support that chipset.

      2. What exactly do you mean by the Linux architecture being too fragmented to ever allow for a chip that runs Linux?
      • Similarly, it looks like Linux on AMD's Hammer chipset [x86-64.org] is already well underway as a project, while according to the article Microsoft has no current plans to support that chipset

        Heh, it shouldn't be too hard, since NetBSD [netbsd.org] already runs on the x86-64, so there should be a compiler and such you can borrow, and TLB faulting code you can take (you can relicense BSD code as GPL; it's just not so easy to go the other way).

      • After examining your resume, I noticed that you do a lot of .NET work. Anything interesting to say on the subject with regards to chipsets?
  • Linux vs Microsoft (Score:4, Interesting)

    by Pink Daisy ( 212796 ) on Monday September 03, 2001 @09:34AM (#2247973) Homepage
    According to the article, the hardware vendors are looking to Linux to force Microsoft to adopt new features. That's a strong testament to the power of competition! I know that Intel has hated its dependence upon Microsoft for a long time, and that Microsoft is delighted about AMD, since it untied them from Intel.

    AMD really needs Linux on the Hammer platform. Actually, they need Windows as well, but Linux is the club to force Microsoft to make the port. Intel is less dependent on Microsoft for the success of IA64 platforms, but mainstream adoption of new technologies like SMT (or hyperthreading, as they say) could really distinguish them from AMD performance-wise.

    I'm usually pro-Microsoft around here, given the amount of nonsense Linux propaganda spewed out, but I will be really happy when Linux can compete across the board, instead of just on servers. The benefits of competition are very high.
  • Re: Why not SPARC? (Score:5, Insightful)

    by Bodero ( 136806 ) on Monday September 03, 2001 @09:35AM (#2247975)
    As for reliability, let's not forget that most PC reliability is based on Redmond's spooky OSes

    I don't know. I have a Matrox Millennium II that only just started working reliably as of Solaris 8 (or Solaris 7 with patches). It seems that when you do a certain thing to the card, it stands about a 50% chance of getting confused and hanging the entire PCI bus.

    Also inside the same case, I have two Western Digital IDE hard drives that won't both talk on the same bus if you set one of them to master and one to slave. It seems to only work if exactly *one* of them is set to cable select.

    I also have an Intel motherboard (which is sitting in a drawer right now) that only allows me to use 64 MB of RAM. I bought that system in 1997. Sun's very first desktop SPARC system (the SPARCstation 1) could expand to 64 MB of RAM, and that was in 1990.

    Also in the drawer, I have a Diamond Viper V770 Ultra whose fan has decided to make loud scraping noises. Diamond refused to sell me a replacement part, so I have an approximate match replacement part that I will install when I feel like getting out the soldering iron.

    The system that had the Intel motherboard originally came with a Toshiba XM-6102B CD-ROM drive. When I first installed Solaris on that thing, I was afraid the driver was confused, because it was reporting all kinds of errors even though Windows didn't seem to have a problem with the drive at all. As time went on, the drive got worse and worse and eventually reached the point where it took 3 or 4 tries for it to recognize a CD.

    All of these experiences with dodgy PC hardware are with *name* *brand* PC hardware that I've taken good care of. And, it's not like I've run through hundreds of systems, either. The amount of PC hardware I have ever owned in my life is not enough to build two working systems.

    Basically, my experience with PC hardware is that it's cheaply made, and any given piece of hardware will probably be somewhere between limping along and working almost right but not quite. (Some hardware will just outright break, and some of it will be trouble-free for years and years, too.) Overall, I think this is a symptom of the fact that most PC consumers don't know to expect better, and also the pressure to make things as cheap as possible.

    There is a lot of stuff out there that is just crap, and there is a lot of stuff out there that sort of works and sort of doesn't. Yes, you can get high-quality PC parts, but the fact is that you have to be pretty choosy about it. Which brings me to my next point...

    And let's not forget that practically everything in a Blade 100 is off-the-shelf PC parts, so that theory goes out the window.

    I tend to think that the Blade 100 is going to be better built than a system you'd buy from some PC vendor, because Sun's attitude is different. Few manufacturers of any complex product like a computer actually make most of the stuff themselves. The reason Sun systems are reliable is that they select good parts, and test the system together as a whole. They have never controlled the whole process, but they do control more of the process for their machines than PC manufacturers do. I think this is what's going to lead to better quality.

    (Part of the reason I think that is that it's my belief that one of the reasons PC hardware and software is so unreliable is the size of the market. It's prohibitively expensive to test everything with everything, and not only that, but it's also just very chaotic. It's difficult to make a system work well under those conditions. Sun doesn't suffer from that problem as much because their market is smaller and not only that but simpler.)

    • who believed that most hardware is shoddily made crap. I can't remember when I last read a comparative review of PCs at a given price point where none of them had a dodgy display driver that failed basic timing tests, or a noisy fan, or a faulty sound card, etc.
    • Well, I think this is something we can thank Microsoft for (not expecting more). Most people are used to computers crashing and not working. Whether it's a hardware failure or just another Windows blue screen of death, most people just generalize it as "computer problems" and don't know much about what's going on underneath. Either that, or when ANYTHING goes wrong they blame it on a virus.
    • (Part of the reason I think that is that it's my belief that one of the reasons PC hardware and software is so unreliable is the size of the market. It's prohibitively expensive to test everything with everything, and not only that, but it's also just very chaotic. It's difficult to make a system work well under those conditions. Sun doesn't suffer from that problem as much because their market is smaller and not only that but simpler.)

      Not just that, but if you do find, say, a glitch in the L2 cache controller on an x86 design that might cause one lock up every year or so you can talk yourself out of fixing it since most x86 machines run Windows, and one extra crash a year will be unnoticed, and blamed on MS anyway.

      The SPARC designers are going to assume you run Solaris, and one hardware caused crash a year may well be the crash for the year. Way more incentive to fix it.

      Lest you think this is totally theoretical: I used to work for a company that owned 100 or so DEC PC machines with a little L2 problem... and we noticed because we were running a real OS.

    • Re: Why not SPARC? (Score:1, Interesting)

      by Anonymous Coward
      Good design costs a lot of money, and well designed parts will cost more to make. Well designed parts will have more layers in the epoxy PCB, gold-plated contacts, mil-spec chips, carefully thought-out design which keeps standing waves and impedance/unit length down, and so on. PCs DO NOT have well designed parts. Paraphrasing Eric Raymond's Hardware Howto: if most of the units barely work, in most machines, under light use, it's good enough for the PC market.

      By the way, one doesn't test everything; it is enough to test a sample, and every manufacturer (except the very worst) does that. If the sample is made large enough, you can drive the failure rate arbitrarily low. If the sample is made small (and thus cheap) enough, a large failure rate can be accepted, in the PC market. If it doesn't work, the customers will just return it. If it fails the day after the warranty runs out, that's a bonus.
  • by jensend ( 71114 ) on Monday September 03, 2001 @09:42AM (#2247987)
    What Intel and AMD are really looking for is not as much for their products to conform to Linux as for Linux to conform to their products. Neither is a bad idea. However, the failure of the community to band together behind GCC 3, fix the major bugs, and get distros and other major software compiled with processor optimizations is going to cause these moves by the processor companies to fade away. A message to all developers everywhere: Help now with what you can in order to get code to compile cleanly on GCC 3!
    • by Anonymous Coward
      Huh? All major distros are moving toward GCC 3.x in the near future. The standard ABI for C++ means that commercial applications (which are very often designed-by-committee baroque C++ monstrosities) will be much easier to port to Linux.
      • The previous comment is quite true about patents, but the understanding I got from the article was that Intel and AMD were thinking about this. If they force the optimizations to stay in their own compilers, they will lose out in many ways, and I thought that they were making the logical move of asking developers to start making these optimizations. I may quite possibly be misinformed. As for the inclusion problem, I do not see major distros moving to GCC 3.x right now. RH is apparently sticking with their modified 2.96 version for 7.2, the latest rant from glibc tells us 3.x will not be adopted, and so on and so forth. This is what I am saying needs to change. (I have no complaints about the ABI; I think it's great.)
    • Ah... but there is something important you are forgetting - patents.

      My understanding is that a lot of the extremely useful optimisations are covered by patents owned by IBM, Intel, Microsoft, etc.

      Now if IBM and Intel just opened up those patents then a lot more useful optimisations could be done. Otherwise we have the much more difficult route of the GCC developers having to come up with their own non-infringing optimisations.
  • custom hard ware. (Score:3, Interesting)

    by Error27 ( 100234 ) <error27@[ ]il.com ['gma' in gap]> on Monday September 03, 2001 @09:45AM (#2247992) Homepage Journal
    >>Surely the high end market should see the potential advantages in migrating to systems running on hardware custom built for Linux?

    Oddly enough, I can't think of any advantage. The trend in high end computing recently seems to be to move to commodity hardware. We have clusters of x86 machines. SGI is moving to an Intel platform. And Compaq has sold the Alpha to Intel.

    I could be wrong of course...

  • Software personality (Score:3, Interesting)

    by Alien54 ( 180860 ) on Monday September 03, 2001 @09:47AM (#2247995) Journal
    I was up way too late last night, so this is not going to be all that coherent. Not that what I say usually is.

    This is just a reflection on the root cause of the obvious success that Linux continues to have, as evidenced by this story.

    Somehow I think that the personality of the main visionary behind a piece of software does occasionally express itself in the software in certain subtle ways.

    In the case of Linux, people want to contribute their energies to some degree; people give things to the project. This is in contrast to MS, where a lot of people do not want to contribute, and where resources are bought, paid for, and taken.

    A lot of this has to do with the social agreements regarding what is right and normal and just behavior for capitalism, big business, etc. It's what "everyone does". But this seems to be changing with the model of contribution and community help.

    This community help model requires a more healthy and alive community to work well, while the typical capitalist model can work in a perverse way with criminal types who steal resources. In fact, it can be difficult to avoid.

    We eventually come to the point where we have the successes that we have today.

    And we can say, with some logic, that the two operating systems and the companies behind them reflect the main personalities involved. Linux is much more community-oriented, while MS is more imperial (or something), in its own way.

    - - -
    Radio Free Nation [radiofreenation.com]
    "If You have a Story, We have a Soap Box"
    - - -

  • What's the point? (Score:3, Insightful)

    by nougatmachine ( 445974 ) <johndagen@@@netscape...net> on Monday September 03, 2001 @09:54AM (#2248008) Homepage
    OK, so chip manufacturers are starting to pay more attention to Linux. Sure, that's great, but what's with the comment about hardware "custom-built" for Linux? Isn't the whole point of open architecture that you can run darn near any operating system on it, including one you just wrote yourself, if you were so inclined? How would a "custom-built" Linux system be any different from the chip architecture it's running on? Linux can even run on closed systems like Macs, for crying out loud. It's not like it particularly needs its own architecture. Matter of fact, that could be a barrier to entry. Say Joe User wants to try an alternative operating system, and he's narrowed it down to a choice between Linux and Mac OS X. One of the attractive things about Linux is that he doesn't need to buy new hardware to run it.

    Bearing all that in mind, why does anyone need custom Linux hardware?

    • by Tim C ( 15259 ) on Monday September 03, 2001 @10:08AM (#2248033)
      In the past, Microsoft and Intel have worked together to produce software and hardware that complement each other.

      This can go beyond merely understanding the best way to structure an executable, or tips and tricks for hand-coding assembler.

      On the one hand, Intel could say to MS "we'd really like to push this new instruction set that we've come up with", so MS say "okay, we'll build support for it into the next DirectX release".

      Alternatively, MS could say "we'd really like to get into the streaming multimedia market, could you help us out?"

      The upshot is that Intel gets support for their latest, expensive features at the OS level, whilst MS get hardware-level optimization for apps they want to write. Wrap the exact details in an NDA or two, and bingo - Windows runs better on Intel hardware, and Intel hardware runs Windows better. (ie Linux on Intel, and Windows on AMD just aren't as good)

      Yes, the whole point is that you can run any OS on any hardware, but sometimes it pays to have a little help.

      Cheers,

      Tim
  • by Quixote ( 154172 ) on Monday September 03, 2001 @09:55AM (#2248011) Homepage Journal
    If the chip makers were serious, they would start helping Linux out today. Case in point: gcc. Why don't the chip makers hand over their internal compilers to the GCC developers, so that GCC can produce optimal code for their processors? The SPEC marks for Intel CPUs are always achieved with some internal Intel compiler, which is sometimes available as a module for MSVC++. Why not release the same for Linux? I know Intel is working on it now, but what took them so long? And the same applies to AMD.
    • Because if Intel released its compilers as open source, anyone (read: AMD) could look at Intel's optimizations and use them to make their chips better.

      As we move to RISC VLIW processors, compilers become more and more important.

      There's a story from the late '80s of how a lot of independent hardware vendors were choosing MIPS over SPARC because MIPS was perceived as being faster. Sun promptly hired MIPS' compiler team and found that, with their optimizations, the SPARC chips were actually faster. Of course, by this time the market had moved to MIPS, so MIPS was able to pump more money into hardware R&D...

      • Because if Intel released its compilers as open source, anyone (read: AMD) could look at Intel's optimizations and use them to make their chips better.


        They can do that already by purchasing a copy and looking at the machine code it generates. The necessary tweaks to generate fast-running code for a particular processor are not kept secret; on the contrary, they need to be as publicized as possible to increase the amount of software that runs well on that processor.



        (At least, that's how it damn well should be, and Intel wouldn't do themselves any favours by having 'secret optimizations'.)

        • Intel has secret opcodes even. Remember SETALC? Sets every bit in the AL register equal to the carry flag. It is actually supposed to be useful for something, but I forget what exactly.
          • It allows AL to be a bit mask.
          • Are they really secret opcodes, or just a consequence of the design? Many chips do strange things on being presented illegal opcodes. The manufacturer wants to keep the behaviour undocumented on these illegal opcodes, so that in the future they can use these opcodes to do something useful, not just what they happen to do today.
    • If the chip makers were serious, they would start helping Linux out today. Case in point: gcc. Why don't the chip makers hand over their internal compilers to the GCC developers, so that GCC can produce optimal code for their processor?

      In the past Intel (at least) has done major work on gcc. The first time I remember seeing anything about it, they dumped a ton of patches off, and they were wrong. There were a lot of Intel-specific patches in the machine-independent parts, and lots of machine-independent changes in the x86-only parts.

      The patches were not accepted (someone did fork off a pgcc or something like that for a while). Much of that work has been re-done right in egcs (now gcc 3).

      I don't know if they have been contributing a lot recently, with luck they will get the two messages "smaller patches tend to be better", and "stick with the framework (we'll give help if you ask)".

      Apple does seem to have learned. A lot of their patches made it into egcs. Unfortunately their precompiled headers code didn't make it in (it is in the gcc that they ship); maybe for 3.1...

    • Uhm. The latest versions of gcc built for x86 are by Intel. If you run gcc --version it'll say "egcs" - The Intel compiler.
  • For the high-enders with cycle-guzzling applications this is important. But for us lowly users this is baaad. I don't want to see a superior Linux on a more expensive chip that locks me into another Intel-style relationship with a vendor. I want freedom to choose chips, mice, screens, OS, the lot. I want it all. I have it all! (almost) So let's not go giving it away by slipping down the platform-dependent route - that way lies hell and OS taunting such as has never before been seen!!!!
    kennygeek "im mugh minmbe mex" {I use poorUX}
    cartmangeek "Awww - cant the little poor boy afford Intel??"
  • I've noticed that Mandrake 8.0 claims to be optimized for the G3 processor. Does this mean that gcc now has PowerPC optimizations? From what I've understood Linux on PowerPC (and possibly other architectures) was somewhat hobbled by the lack of decent PowerPC code generated from the compiler and that gcc pretty much only optimizes for the x86 architecture. Are there compilers out there readily available that now optimize for PowerPC?
    • There MUST be optimisations for the G3/G4 around, because the compiler that comes with Mac OS X, the same compiler Apple uses commercially AFAIK, is egcs, an open source compiler. So either Apple is not using an optimised compiler (which would be crazy) or the optimisations must have been made public.
    • I've noticed that Mandrake 8.0 claims to be optimized for the G3 processor. Does this mean that gcc now has PowerPC optimizations?

      I think so. I was running Linux on this PowerBook (292MHz G3 Wallstreet) about a year ago, and it was a dog. But I installed Mandrake 8/PPC on it a few days ago and it flies - it's almost as snappy as Classic Mac OS is on here (OS X is unusably slow, though). I'm not sure if this is related to a better compiler or just that 2.4 is better on PPC than 2.2 was, but it makes a really nice Linux box now.

      All the hardware (sound, modem, ethernet, display, power management) works beautifully, too.
  • by bockman ( 104837 ) on Monday September 03, 2001 @10:47AM (#2248098)
    The idea of adapting a hardware architecture to run well for a specific OS sounds awful to me. It should be the other way around, given the greater flexibility and dynamic nature of software (what if Linux changes architecture? Should I buy a new PC?). If a chip maker wants an OS to run well on its CPU, it should supply plenty of information and support to the OS developers, but NOT warp the CPU architecture to its exclusive advantage.

    On a related topic, one of the great points of Linux IMO is that it can run on so many architectures. In a dream-world dominated by the Penguin, one could pick the best h/w platform for one's needs, without worrying about software compatibility.
    Therefore, I am worried by anything that restricts the number of platforms on which Linux can run.

    • I think the finer point is being missed. By designing hardware around Linux, Linux will not be bound to that architecture, but will run really well when compiled for that architecture.

      Take Macs, for instance. Apple does a lot of graphics stuff which needs a lot of floating point, and so they have the G4 chip, which does floating point really well. You can do graphics stuff on a Pentium or an Ultra or some other chip, but it's not really built with the graphics model in mind.
      Similar issues come up with a system like Linux. Graphics aren't as important. Process switching becomes an issue; mutexes and shared memory become a major point!
      Look at Windows. It is, for most purposes, a single-user environment. Mutexes are still very important, but not encountered NEARLY as much as in a Unix system running 200+ processes with 150+ user IDs all grabbing for the same system resources.
      I've skipped around a bit and I hope this makes sense. :-) I really would like to just post a really BIG architecture book, but I don't think the publishers would let me. :-)
      • If you say 'an architecture for server tasks' or 'an architecture for home desktop tasks', I'm with you. But an architecture for servers, for instance, should be able to run equally well any Unix-like system as well as Win2000 or WinNT (or the good old VMS).

        An architecture built 'for Linux only' (or for Windows only, or for Mac OS only) is a bad thing IMO. I am aware that they already exist to some extent, but that does not make things better.

  • Virtualisation (Score:5, Interesting)

    by AirSupply ( 210301 ) on Monday September 03, 2001 @12:02PM (#2248302)
    Oddly enough, I was thinking earlier today about a feature that I'd like to see in x86-type CPUs that ain't there yet. I've no idea as to its feasibility, and it might not even be useful, but I'll throw it out into the open here in the hopes that someone else will praise it and run with it, or smack it down and stop me wasting further brain cycles on it.

    The feature in question is better support for virtualisation. I'm led to understand that half the reason projects like Plex86 and proprietary products like VMWare are so clever is that the x86 doesn't lend itself to virtualisation. You can't necessarily retrofit virtualisation, but I suspect you could wrap it around the existing architecture.

    What I imagine this to look like in actual practice is a CPU that boots up in a mode where it's just a typical x86, but has a set of extra commands for creating and managing virtual x86en. A virtualisation-aware OS could then use these (privileged, I suppose) commands to initialise and execute virtual machines. Certain exceptions (configured at VM initialisation) would cause the virtual machine to break right back out to the real machine, dumping the virtual machine status in an appropriate location for later restoration.

    Clearly there's a largish book worth of details I've left out, but this is just meant to be a seminal idea. I don't even pretend to have any real knowledge of the x86 architecture, specifically.

    How would this help Linux? Well hey -- with a little bit of added tweaking, Linux could have 90% of the functionality of VMWare built into it. There are many other applications of virtualisation, and its addition to the core of Linux could make for some interesting possibilities. One application that springs to mind is the idea of having "multi-root" systems, where users can have their own root access to their own virtual system. If the virtualisation commands were also available in the virtual x86, then "virtual" would be a relative concept, and the root user of a virtual system could create more virtual systems of his own.

    I think it's a good idea. Now bring on the applause and the clue-sticks.
