Chipmakers Angling For Support 98
defence budget writes "According to this article at CNet, what once happened with Intel and Microsoft might be happening with Linux, AMD and Intel. Apparently "In a sign of how strategic Linux has become, AMD and Intel are angling to lure open-source programmers to their future chip designs". I cannot see how the low end market will react to this, but surely the high end market should see the potential advantages in migrating to systems running on hardware custom built for Linux?"
High end is the idea (Score:2, Interesting)
I wish there was a spell checker plugin for
Re:High end is the idea (Score:1)
Re:High end is the idea (Score:2)
The market you're talking about is expensive. These machines aren't your average 2K PC with Linux/Windows. And let's face it, if you can afford a 500K machine, I don't think a copy of Solaris will break the bank.
Re:High end is the idea (Score:2, Interesting)
That's true, but most Solaris machines don't cost anywhere near that.
The Sun Blade on my desk has a single board and CPU in it, and two (non-redundant) hard drives; if I unplug any of these pieces, the system will stop. :)
Now I admit Sun makes a lot nicer machines than this one, so I certainly see your point, but a lot of the machines in the Sun/HP range could be replaced with x86 boxes. And Sun is way overpriced for the kind of performance it provides.
Re:High end is the idea (Score:2)
Re:High end is the idea (Score:2, Insightful)
It's the quick availability of an OS for the new chip that matters.
They talk about Microsoft and how they (they being AMD and Intel) hope that having Linux running on their chips will put pressure on Microsoft.
Re:High end is the idea (Score:1)
However I think the original poster was thinking about the workstation market, and Intel/AMD machines might well be competitive there.
HP already has x86 machines on offer [hp.com]. I imagine Intel and AMD would be keen to see their 64-bit chips in a similar sort of setup.
Support will depend on the company who ships the workstation of course.
Re:High end is the idea (Score:1)
These are the computers Sun is trying to break into the x86 market with, though.
Re:High end is the idea (Score:2)
For my home Linux machine, commodity components are fine since they're dirt cheap, and if something fails, I can go buy a new one in a few days. But if you really don't want your hardware failing you at any time, it's probably a good idea to invest in something like a Sun.
Linux AMD and Intel (Score:1)
Sounds more like a high-tech butcher or something.
So, you might want to wait before (Score:1)
;-)
linux and chip-makers (Score:1, Interesting)
Re:linux and chip-makers (Score:1)
Re:linux and chip-makers (Score:2)
I doubt a GeForce25 could help improve this, except by lowering the CPU load a little...
Now, it would also be good to gain more power for productivity's sake; those who have read this book [oreilly.com] or this one [oreilly.com] will understand me for sure.
Re:linux and chip-makers (Score:2, Insightful)
How many are capable of it?
I think the issue isn't how easy the OS is to install, or to some extent how easy it is to use (some would argue Windows is hard to use). The issue is getting OEMs to sell Linux boxen that are ready to rock and roll. Once that happens, more apps will start to appear and Linux will appear on more desktops.
BTW, have you installed Mandrake lately? It's the easiest OS I've ever installed, and I've installed everything from BeOS (RIP) to DOS 6.22.
Games? (Score:1)
Re:Games? (Score:1)
No Integration (Score:5, Interesting)
Also, are the chip companies even targeting Linux? It seems to me that they're interested in open-source. But open-source does not mean Linux. Open-source is much larger as a concept than Linux is. And of course, I imagine that the future will be this: open-source programmers will be lured away by dollar signs (not in a bad way -- but hey, everyone's gotta eat). The companies will have a vested interest in making sure that these programmers are not working on things outside of the company itself, and in fact will also require that parts of the systems they develop will be proprietary. Just like Apple does. Darwin is open-source, but Aqua, Quartz, etc., are proprietary systems. And Apple nabbed the top guy for BSD, did they not?
I'm rambling now. But what I'm saying, basically, is that although I think this is primarily a good thing, the waters are still very muddy and the trail itself extends very far out.
Does anyone read the articles? (Score:4, Informative)
Similarly, it looks like Linux on AMD's Hammer chipset [x86-64.org] is already well underway as a project, while according to the article Microsoft has no current plans to support that chipset.
Re:Does anyone read the articles? (Score:2)
Heh, it shouldn't be too hard, since NetBSD [netbsd.org] already runs on the x86-64, so there should be a compiler and such you can borrow, and TLB faulting code you can take (you can relicense BSD code under the GPL; it's not so easy to go the other way).
Re:Does anyone read the articles? (Score:1)
Re:Does anyone read the articles? (Score:1)
Linux vs Microsoft (Score:4, Interesting)
AMD really needs Linux on the Hammer platform. Actually, they need Windows as well, but Linux is the club to force Microsoft to make the port. Intel is less dependent on Microsoft for the success of IA-64 platforms, but mainstream adoption of new technologies like SMT (or Hyper-Threading, as they say) could really distinguish them from AMD performance-wise.
I'm usually pro-Microsoft around here, given the amount of nonsense Linux propaganda spewed out, but I will be really happy when Linux can compete across the board, instead of just on servers. The benefits of competition are very high.
Re: Why not SPARC? (Score:5, Insightful)
I don't know. I have a Matrox Millennium II that only just started working reliably as of Solaris 8 (or Solaris 7 with patches). It seems that when you do a certain thing to the card, the card stands about a 50% chance of getting confused and hanging the entire PCI bus.
Also inside the same case, I have two Western Digital IDE hard drives that won't both talk on the same bus if you set one of them to master and one to slave. It seems to only work if exactly *one* of them is set to cable select.
I also have an Intel motherboard (which is sitting in a drawer right now) that only allows me to use 64 MB of RAM. I bought that system in 1997. Sun's very first desktop SPARC system (the SPARCstation 1) could expand to 64 MB of RAM, and that was in 1990.
Also in the drawer, I have a Diamond Viper V770 Ultra whose fan has decided to make loud scraping noises. Diamond refused to sell me a replacement part, so I have an approximate match replacement part that I will install when I feel like getting out the soldering iron.
The system that had the Intel motherboard originally came with a Toshiba XM-6102B CD-ROM drive. When I first installed Solaris on that thing, I was afraid the driver was confused, because it was reporting all kinds of errors even though Windows didn't seem to have a problem with the drive at all. As time went on, the drive got worse and worse and eventually reached the point where it took 3 or 4 tries for it to recognize a CD.
All of these experiences with dodgy PC hardware are with *name* *brand* PC hardware that I've taken good care of. And, it's not like I've run through hundreds of systems, either. The amount of PC hardware I have ever owned in my life is not enough to build two working systems.
Basically, my experience with PC hardware is that it's cheaply made, and any given piece of hardware will probably be somewhere between limping along and working almost right but not quite. (Some hardware will just outright break, and some of it will be trouble-free for years and years, too.) Overall, I think this is a symptom of the fact that most PC consumers don't know to expect better, and also the pressure to make things as cheap as possible.
There is a lot of stuff out there that is just crap, and there is a lot of stuff out there that sort of works and sort of doesn't. Yes, you can get high-quality PC parts, but the fact is that you have to be pretty choosy about it. Which brings me to my next point...
And let's not forget that practically everything in a Blade 100 is off-the-shelf PC parts, so that theory goes out the window.
I tend to think that the Blade 100 is going to be better built than a system you'd buy from some PC vendor, because Sun's attitude is different. Few manufacturers of any complex product like a computer actually make most of the stuff themselves. The reason Sun systems are reliable is that they select good parts, and test the system together as a whole. They have never controlled the whole process, but they do control more of the process for their machines than PC manufacturers do. I think this is what's going to lead to better quality.
(Part of the reason I think that is that it's my belief that one of the reasons PC hardware and software is so unreliable is the size of the market. It's prohibitively expensive to test everything with everything, and not only that, but it's also just very chaotic. It's difficult to make a system work well under those conditions. Sun doesn't suffer from that problem as much because their market is smaller and not only that but simpler.)
I thought it was just me (Score:1)
Re: Why not SPARC? (Score:1)
Re: Why not SPARC? (Score:2)
Not just that, but if you do find, say, a glitch in the L2 cache controller on an x86 design that might cause one lockup every year or so, you can talk yourself out of fixing it, since most x86 machines run Windows, and one extra crash a year will go unnoticed, and be blamed on MS anyway.
The SPARC designers are going to assume you run Solaris, and one hardware caused crash a year may well be the crash for the year. Way more incentive to fix it.
Lest you think this is totally theoretical, I used to work for a company that owned 100 or so DEC PC machines with a little L2 problem... and we noticed because we were running a real OS.
Re: Why not SPARC? (Score:1, Interesting)
By the way, one doesn't test everything; it is enough to test a sample, and every manufacturer (except the very worst) does that. If the sample is made large enough, you can drive the undetected failure rate arbitrarily low. If the sample is made small (and thus cheap) enough, a large failure rate just gets accepted, as it is in the PC market. If it doesn't work, the customers will just return it. If it fails the day after the warranty runs out, that's a bonus.
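The arithmetic behind that is simple; here's a rough sketch (the defect rate and sample sizes are numbers I made up for illustration, not anyone's real figures):

```python
# Probability that testing a random sample of n units catches at least one
# defect, given a true per-unit defect rate p: 1 - (1 - p)^n.
def detection_probability(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# A 1% defect rate with a 10-unit sample: defects usually slip through.
small = detection_probability(0.01, 10)
# The same defect rate with a 500-unit sample: nearly certain detection.
large = detection_probability(0.01, 500)
print(round(small, 3), round(large, 3))  # → 0.096 0.993
```

So a cheap manufacturer testing tiny samples will, mathematically, ship its defects to the customers.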
Processor optimization and the open/free community (Score:4, Insightful)
Re:Processor optimization and the open/free commun (Score:1, Insightful)
Re:Processor optimization and the open/free commun (Score:1)
Re:Processor optimization and the open/free commun (Score:2, Insightful)
My understanding is that a lot of the extremely useful optimisations are covered by patents owned by IBM, Intel, Microsoft, etc.
Now if IBM and Intel just opened up those patents then a lot more useful optimisations could be done. Otherwise we have the much more difficult route of the GCC developers having to come up with their own non-infringing optimisations.
custom hard ware. (Score:3, Interesting)
Oddly enough, I can't think of any advantage. The trend in high end computing recently seems to be to move to commodity hardware. We have clusters of x86 machines. SGI is moving to an Intel platform. And Compaq has sold the Alpha to Intel.
I could be wrong of course...
Software personality (Score:3, Interesting)
This is just a reflection on the root cause of the obvious success that Linux continues to have, as evidenced by this story.
Somehow I think that the personality of the main visionary behind a piece of software does occasionally express itself in the software in certain subtle ways.
In the case of Linux, people want to contribute their energies to some degree; people give things to the project. Contrast this with MS, where a lot of people do not want to contribute and where resources are bought, paid for, and taken.
A lot of this has to do with the social agreements regarding what is right and normal and just behavior for capitalism, big business, etc. It's what "everyone does". But this seems to be changing with the model of contribution and community help.
This community help model requires more healthy and alive community to work well, while the typical capitalist model can work in a perverse way with criminal types who steal resources. In fact, it can be difficult to avoid.
We eventually come to the point where we have the successes that we have today.
And we can say, with some logic, that the two operating systems, the companies, etc. reflect the main personalities involved. Linux is much more community oriented, while MS is more imperial (or something), in its own way.
- - -
Radio Free Nation [radiofreenation.com]
"If You have a Story, We have a Soap Box"
- - -
What's the point? (Score:3, Insightful)
Bearing all that in mind, why does anyone need custom Linux hardware?
Re:What's the point? (Score:4, Insightful)
This can go beyond merely understanding the best way to structure an executable, or tips and tricks for hand-coding assembler.
On the one hand, Intel could say to MS "we'd really like to push this new instruction set that we've come up with", so MS say "okay, we'll build support for it into the next DirectX release".
Alternatively, MS could say "we'd really like to get into the streaming multimedia market, could you help us out?"
The upshot is that Intel gets support for their latest, expensive features at the OS level, whilst MS get hardware-level optimization for apps they want to write. Wrap the exact details in an NDA or two, and bingo - Windows runs better on Intel hardware, and Intel hardware runs Windows better. (ie Linux on Intel, and Windows on AMD just aren't as good)
Yes, the whole point is that you can run any OS on any hardware, but sometimes it pays to have a little help.
Cheers,
Tim
Future? What about now? (Score:4, Insightful)
Re:Future? What about now? (Score:3, Insightful)
As we move to RISC VLIW processors, compilers become more and more important.
There is this story from the late '80s of how a lot of independent hardware vendors were choosing MIPS over SPARC because MIPS was perceived as being faster. Sun promptly hired MIPS' compiler team and found that, with their optimizations, the SPARC chips were actually faster. Of course, by this time the market had moved to MIPS, so MIPS was able to pump more money into hardware R+D...
Re:Future? What about now? (Score:2)
They can do that already by purchasing a copy and looking at the machine code it generates. The necessary tweaks to generate fast-running code for a particular processor are not kept secret; on the contrary, they need to be as publicized as possible to increase the amount of software that runs well on that processor.
(At least, that's how it damn well should be, and Intel wouldn't do themselves any favours by having 'secret optimizations'.)
Intel secrets (Score:2)
Re:Intel secrets (Score:1)
Re:Intel secrets (Score:2)
Re:Future? What about now? (Score:3, Interesting)
In the past, Intel (at least) has done major work on gcc. The first time I remember seeing anything about it, they dumped off a ton of patches, and they were wrong. There was a lot of Intel-specific code in the machine-independent parts, and lots of machine-independent code in the x86-only part.
The patches were not accepted (someone did fork off a pgcc or something like that for a while). Much of that work has been re-done right in egcs (now gcc 3).
I don't know if they have been contributing a lot recently, with luck they will get the two messages "smaller patches tend to be better", and "stick with the framework (we'll give help if you ask)".
Apple does seem to have learned. A lot of their patches made it into egcs. Unfortunately their precompiled-headers code didn't make it in (it is in the gcc that they ship); maybe for 3.1...
Re:Future? What about now? (Score:1)
High Enders (Score:1)
kennygeek "im mugh minmbe mex" {I use poorUX}
cartmangeek "Awww - cant the little poor boy afford Intel??"
Speaking of compilers... (Score:2, Interesting)
Re:Speaking of compilers... (Score:1)
Re:Speaking of compilers... (Score:2)
I think so. I was running Linux on this PowerBook (292MHz G3 Wallstreet) about a year ago, and it was a dog. But I installed Mandrake 8/ppc on it a few days ago and it flies - it's almost as snappy as Classic MacOS is on here (OS X is unusably slow, though). I'm not sure if this is related to a better compiler or just that 2.4 is better on PPC than 2.2 was, but it makes a really nice Linux box now.
All the hardware (sound, modem, ethernet, display, power management) works beautifully, too.
Shouldn't be the other way around ? (Score:3, Insightful)
On a related topic, one of the great points of Linux IMO is that it can run on so many architectures. In a dream world dominated by the Penguin, one could pick the best h/w platform for one's needs, without worrying about software compatibility.
Therefore, I am worried by anything that restricts the number of platforms on which Linux can run.
Re:Shouldn't be the other way around ? (Score:3, Interesting)
Take Macs, for instance. Apple does a lot of graphics stuff which needs a lot of floating point, and so they have a G4 chip which does floating point really well. You can do graphics stuff on a Pentium or an Ultra or some other chip, but it's not really built with the graphics model in mind.
Similar issues come up with a system like Linux. Graphics aren't as important. Process switching becomes an issue; mutexes and shared memory become a major point!
Look at Windows. It is, for most purposes, a single-user environment. Mutexes are still very important, but not encountered NEARLY as much as on a Unix system running 200+ processes with 150+ user IDs all grabbing for the same system resources.
I've skipped around a bit and I hope this makes sense.
Re:Shouldn't be the other way around ? (Score:1)
An architecture built 'for Linux only' (or for Windows only, or for Mac OS only) is a bad thing IMO. I am aware that they already exist to some extent, but that does not make things better.
Virtualisation (Score:5, Interesting)
The feature in question is better support for virtualisation. I'm led to understand that half the reason projects like Plex86 and proprietary products like VMWare are so clever is that the x86 doesn't lend itself to virtualisation. You can't necessarily retrofit virtualisation, but I suspect you could wrap it around the existing architecture.
What I imagine this to look like in actual practice is a CPU that boots up in a mode where it's just a typical x86, but has a set of extra commands for creating and managing virtual x86en. A virtualisation-aware OS could then use these (privileged, I suppose) commands to initialise and execute virtual machines. Certain exceptions (configured at VM initialisation) would cause the virtual machine to break right back out to the real machine, dumping the virtual machine status in an appropriate location for later restoration.
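That break-right-back-out behaviour is basically trap-and-emulate. A toy sketch of the idea (the instruction names and the whole "instruction set" here are invented for illustration, nothing like real x86):

```python
# Toy trap-and-emulate monitor: the "guest" runs ordinary instructions
# directly, while privileged ones cause a VM exit back to the monitor,
# which emulates them on the guest's behalf.
PRIVILEGED = {"IN", "OUT", "HLT"}

def run_guest(program):
    emulated = []  # log of instructions the monitor had to handle
    for instr in program:
        if instr in PRIVILEGED:
            # VM exit: control returns to the real machine, which
            # emulates the operation and then resumes the guest.
            emulated.append(instr)
        # else: the instruction executes directly on the real CPU
        # (modelled as a no-op in this toy).
    return emulated

# A guest program mixing ordinary and privileged instructions:
trapped = run_guest(["ADD", "MOV", "OUT", "ADD", "HLT"])
print(trapped)  # → ['OUT', 'HLT']
```

The hardware support I'm imagining would just make that exit/resume cycle cheap and automatic instead of something VMWare has to fake in software.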
Clearly there's a largish book worth of details I've left out, but this is just meant to be a seminal idea. I don't even pretend to have any real knowledge of the x86 architecture, specifically.
How would this help Linux? Well hey -- with a little bit of added tweaking, Linux could have 90% of the functionality of VMWare built into it. There are many other applications of virtualisation, and its addition to the core of Linux could make for some interesting possibilities. One application that springs to mind is the idea of having "multi-root" systems, where users can have their own root access to their own virtual system. If the virtualisation commands were also available in the virtual x86, then "virtual" would be a relative concept, and the root user of a virtual system could create more virtual systems of his own.
I think it's a good idea. Now bring on the applause and the clue-sticks.
Re:Virtualisation - check out the hurd (Score:1)
ie multiple independent servers/users each with limited access to hardware, running in secure environments.
Elivs