Linux For Cell Processor Workstation
News for nerds writes "The Cell processor from Sony, Toshiba and IBM has been known as the chip that powers the upcoming PlayStation 3 computer entertainment system, but beyond that very little is known about how it is applied to real use. This time, at LinuxTag 2005, from 22nd to 25th June 2005 at the Messe- und Kongresszentrum Karlsruhe, Germany, Arnd Bergmann of IBM will speak about the Cell processor programming model under Linux, and about the Linux kernel in the first Cell processor-based workstation computer, which premieres at LinuxTag 2005."
New wave of freedom (Score:3, Insightful)
Re:real use? (Score:3, Insightful)
Another Demo loop (Score:4, Insightful)
Maybe the old man face and duck-in-water tech demos from the PS2 will also appear... Did any PS2 game ever look as good as Sony's tech demos?
Re:So what's the deal with you linux zealots? (Score:2, Insightful)
Oh wow, I don't know where to start.
1. Relevance: This comment has absolutely no relevance to the slashdot article.
2. Open-source software sucks compared to closed-source because it's not done by 'professionals'? Give me a break! Several open-source projects are funded by companies, organizations, and universities and are recognized world-wide.
3. You're saying you can't use those programs because of their silly names, which you somehow read as sexual euphemisms? What about Windows, because it kinda sounds something like 'dildos'? LOL!
4. You're comparing programming to prostitution while discussing the lack of professionalism -- how very... professional!
Congrats Apple and Steve! (Score:1, Insightful)
Time to build a killer AMD64 Linux box...right after I take this now worthless G5 to the dumpster.
Re:real use? (Score:1, Insightful)
And it will happen like this: The first real use will be determined by our graphics programmers who will manage to eat up every new cycle on dynamic lights and high dynamic range textures. Then our physics guys and AI people will ask why there's not much left. Finally the game programmers will show up and have only enough power left to make the sweetest looking version of pong you ever saw. Wait for it, we'll have it ready for 2006.
In related news, that is what happened last time we got next-gen hardware. Games didn't get that much more fun, but they got pretty. A bit sad, really.
Re:The Linux role in hardware design (Score:5, Insightful)
Perhaps because it is a Unix work-alike, and this was the original design goal of Unix?
Re:old wave, actually (Score:3, Insightful)
Cheap, but limited.
--
Evan "My first computer was an S100 bus handbuilt. My first OS wasn't."
The Cell Advantage (Score:3, Insightful)
I picture the PS3 using a camera as a very flexible form of input to allow for more creative game design. Super-fast compression and decompression also come to mind, which could be useful for more complex and fluid internet play.
Recent articles have said the Cell will have some hiccups with physics and AI, because those tasks benefit from branch prediction, but this should be made up for by the fact that the Cell will be able to recognize input at a far more human level than present technology affords.
Re:Congrats Apple and Steve! (Score:1, Insightful)
Where do you live, and which dumpster are you putting it in? I'll be glad to be the thud sound you hear when you throw it in.
Re:And yet again the Cell fanboys (Score:3, Insightful)
Well, not the average application coder, but the compiler guys. And that's the right thing to do. x86 is a hardware VM with a hardware JIT compiler right now. This is a job that is better done in software at compile time, not in real time during execution. (An exception would be bandwidth limitations, as were reported for the Transmeta CPU (IIRC) running native VLIW code.) Abstraction is nice. But it doesn't belong in hardware; it belongs in the language and the compiler.
Re:So what's the deal with you linux zealots? (Score:2, Insightful)
There's only one thing worse than repetitive, uncreative, irrelevant trolls.
It's the fucktards that reply to them on a point-by-point basis as if it does anything other than justify the trolling.
Next time you feel the need to reply to such a lame, obvious troll, try sucking your own cock instead. It's an endeavor that will doubtless keep you occupied for days and be far less distasteful to onlookers.
Re:Perhaps he is right though (Score:5, Insightful)
The other possibility is that Apple has gotten seriously pissed off watching IBM spew out the three-core G5 for the Xbox 360 and the Cell for the PS3, while leaving them with an aging 2.7 GHz CPU.
Re:And yet again the Cell fanboys (Score:2, Insightful)
While it might be the way of the future, it is very much a thing of the future, not the present.
Expect to see lots of carefully hand-tuned code for the Cell to make it behave.
"Real Use" != "Real Work" (Score:3, Insightful)
E.g., you get some real use out of your bed at home, but I wouldn't say sleeping there counts as "work". (Or if it does, where can I sign up to get paid for it?) And screwing doesn't really count as work for most people either.
E.g., you get some real use out of your TV, but most people don't get paid to watch TV, nor consider it "work".
Same here. Playing a game _is_ "real use" of a computer. It might not be "work", but "use" it is.
Re:Another Demo loop (Score:3, Insightful)
Re:Could be the replacement for my Macs (Score:3, Insightful)
I think you'll find the gains from 16 extra registers are less than what [for example] AMD gains from having three pipelines, its register file, etc...
It's like cache: throwing more registers at it pays off big to start [say going from 1 to 2, 2 to 4, or 4 to 8], but the returns diminish quickly after that.
Take apart that 5% of your program that takes 95% of the time and see how many registers it actually uses in the inner loop.
With bignum math, for instance, inner loops usually amount to 3 registers for an accumulator, 1 for a step counter, 2 for source pointers, and 1 for an outer loop counter: 7 registers in total...
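To make that concrete, here's a rough C sketch of such an inner loop (a schoolbook multiply-accumulate; the function and variable names are mine, not from any real bignum library):

#include <stdint.h>
#include <stddef.h>

/* Multiply one word of operand b into operand a, accumulating into dst.
   On a 32-bit machine the live values are roughly: a two-word product
   plus a carry word (the "accumulator", 3 registers), the step counter i,
   the two pointers dst and a, and the caller's outer loop counter:
   about 7 registers for the whole hot loop. */
void muladd_row(uint32_t *dst, const uint32_t *a, uint32_t b_word, size_t n)
{
    uint32_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t acc = (uint64_t)a[i] * b_word + dst[i] + carry;
        dst[i] = (uint32_t)acc;            /* low word back to the result */
        carry  = (uint32_t)(acc >> 32);    /* high word becomes the carry */
    }
    dst[n] += carry;   /* final carry (ignoring further propagation here) */
}

Count the live values and you land right around the 7 described above; doubling the register file doesn't buy this loop much of anything.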
Take the EM64T case: it implements x86_64 as well, but AMD still pwnz it bad. Why? Well, let's see: three [not one] dedicated decoders, three ALU pipelines with 8-step schedulers [re parallelism], etc...
Intel still pwnz AMD when it comes to SSE2 and memory ops, but that gap has been closing with every new AMD release [AMD64, for instance, has more SSE2 opcodes implemented as DirectPath instead of MicroROM], whereas in the Intel camp the CPUs haven't really been getting ANY better...
Tom
Re:Some words about Big Blue (Score:5, Insightful)
Okay what do we know about IBM:
What does that mean?
If I were Intel/Microsoft/Apple/Lenovo, I would be running for the hills. IBM is about to try and redefine computing again.
I am not simply recycling the hype about the CELL being better than sliced bread. I truly think the signs are there that IBM is going to go headlong into the Workstation/Embedded/Client/Server market with a CELL/Linux architecture and is going to try to settle some very old debts with Wintel.
I don't know whether they will succeed. I expect it will come down to whether they can make programming the SPUs as easy as x86. But I think it will be a very interesting few years.
Re:Wrongo (Score:3, Insightful)
In case you don't remember, the point of RISC was to put optimization on the compiler so it wouldn't require massive on-the-fly speculative bibbledy-bop with millions of extra transistors and hideous pipelines like we have nowadays. This was done by providing, essentially, a compiler-accessible cache in the form of lots of registers, and by having an instruction set that was amenable to automated optimization.
Yes, at least in the beginning, in its purest form. Most high-performance RISC architectures eventually adopted all those OoO, pipelining, etc. tricks anyway.
In theory, you don't need any GP registers at all, you could just have memory-memory ops and rely on the cache.
Such "register-less" architectures have been researched, yes. Their primary downfall was that as the compiler has no way of knowing which memory currently happens to reside in cache (as you probably know, cache loading/eviction is decided at runtime based on the memory access pattern), the memory access time is non-deterministic. So there was no way the compilers could schedule the instructions in an intelligent way, and thus such an architecture would have to rely on some really fancy OoO scheme with a huge lookahead (=lots and lots of transistors) to get anywhere near decent performance.
The real problem seems to be that compilers have just not been able to keep up with the last 20 years of theory.
What theory? Optimizing code generation is a very hard problem, and if theory had provided some easy answer to it, the compiler vendors would have implemented it really quickly.
Witness the Itanium: in theory it should have been the ultimate, but they didn't seem to be able to get things optimized for it (other problems, too). Then what happens is that curmudgeons complain about the extra work of optimization and insist on setting us back to early-80s architecture rather than writing a decent compiler.
Well, it seems that we have to agree to disagree then. My opinion is that the godlike compiler, which you seem to think would be just around the corner if only those curmudgeon compiler writers got off their fat sorry asses, hasn't arrived because, despite all the compiler research, we still haven't got much of a clue about how to make it.
Moral of the story: write a decent compiler and stop trying to glorify crappy ISAs that suit your antiquated and inefficient coding habits.
My moral: Write your code in a high-level portable language that isn't tied to some specific ISA. Don't get emotionally attached to ISAs, whether positively or negatively. Judge the goodness of an architecture on how well the compiler + hardware executes the code, not on theoretical figures unlikely to be reached in practice.
Example of the above moral: Despite the supposed crappiness of the x86 ISA, it still manages pretty good performance (and in most cases unbeatable price/performance), even with a performance-wise mediocre compiler like gcc.
Re:*sigh* (Score:1, Insightful)
Me think not...
what a load of crap (Score:1, Insightful)
Ignorant simpletons probably had the same argument as you when the concept of threads was introduced. "They're too hard to use," the PHBs screamed. Well, guess what? Programmers are actually quite smart people, and when you give them a wicked new toy like the Cell, they will figure it out.