
Linux For Cell Processor Workstation

News for nerds writes "The Cell processor from Sony, Toshiba and IBM has been known as the chip that powers the upcoming PlayStation 3 computer entertainment system, but beyond that very little is known about how it is applied to real use. This time, at LinuxTag 2005, from 22nd to 25th June 2005 at the Messe- und Kongresszentrum Karlsruhe, Germany, Arnd Bergmann of IBM will speak about the Cell processor programming model under Linux, and about the Linux kernel in the first Cell processor-based workstation computer, which premieres at LinuxTag 2005."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by XanC ( 644172 ) on Tuesday June 07, 2005 @02:53AM (#12744358)
    We are fast approaching an era where you'll be able to run any OS and any software you want on any architecture you want.
  • Re:real use? (Score:3, Insightful)

    by Taladar ( 717494 ) on Tuesday June 07, 2005 @02:55AM (#12744366)
    Because they are probably written by people who signed NDAs and can't talk about it, so their knowledge of the Cell processor is not available to the public.
  • Another Demo loop (Score:4, Insightful)

    by BagOBones ( 574735 ) on Tuesday June 07, 2005 @02:56AM (#12744369)
    Too bad that at LinuxTag 2005 all you will get to see is a looped video running "in real time" on "similar hardware", simulating the great development advances you will be able to achieve with the new Cell processor.

    Maybe the old man face and duck-in-water tech demos from the PS2 will also appear. Did any PS2 game ever look as good as Sony's tech demos?
  • by rammerhammer ( 590539 ) on Tuesday June 07, 2005 @03:23AM (#12744449)

    Oh wow, I don't know where to start.

    1. Relevance: This comment has absolutely no relevance to the Slashdot article.

    2. Open-source software sucks compared to closed-source because it's not done by 'professionals'? Give me a break! Several open-source projects are funded by companies, organizations, and universities and are recognized world-wide.

    3. You're saying you can't use those programs because of their silly names, which you somehow read as sexual euphemisms? What about Windows, because it kinda sounds like dildos? LOL!

    4. You're comparing programming to prostitution while discussing the lack of professionalism -- how very... professional!

  • by Anonymous Coward on Tuesday June 07, 2005 @03:27AM (#12744465)
    WTG Apple! Steve throws a tantrum because he can't get his G5 PowerBook, and instead of the insanely great stuff IBM is doing with Cell we get dumped into the garbage world of Intel x86, an architecture Intel themselves have been trying to dump for a decade.

    Time to build a killer AMD64 Linux box...right after I take this now worthless G5 to the dumpster.

  • Re:real use? (Score:1, Insightful)

    by The_Hooleyman ( 724719 ) on Tuesday June 07, 2005 @03:40AM (#12744508)
    Use: Games

    And it will happen like this: The first real use will be determined by our graphics programmers who will manage to eat up every new cycle on dynamic lights and high dynamic range textures. Then our physics guys and AI people will ask why there's not much left. Finally the game programmers will show up and have only enough power left to make the sweetest looking version of pong you ever saw. Wait for it, we'll have it ready for 2006.

    In related news, that is what happened last time we got next gen hardware. Games didn't get that much more fun, but they got pretty. A bit sad really.

  • by Anne Thwacks ( 531696 ) on Tuesday June 07, 2005 @04:02AM (#12744573)
    how it has given chip/platform makers a specific, generic target OS that they can use freely to get something useful running on their hardware quickly

    Perhaps because it is a Unix work-alike, and this was the original design goal of Unix?

  • by JabberWokky ( 19442 ) <slashdot.com@timewarp.org> on Tuesday June 07, 2005 @04:10AM (#12744597) Homepage Journal
    To be fair, it was the introduction of the mass-production IC that allowed computers to be priced where people could afford them (as opposed to large corporations and governments). Those early CPUs were very, very underpowered compared to their "real computer" counterparts, and OSes like CP/M and DOS were reflections of those limitations.

    Cheap, but limited.

    --
    Evan "My first computer was an S100 bus handbuilt. My first OS wasn't."

  • The Cell Advantage (Score:3, Insightful)

    by EMIce ( 30092 ) on Tuesday June 07, 2005 @04:17AM (#12744615) Homepage
    Those SPEs will be pretty useful for massaging and distilling large streams of data, which should make the Cell great at tasks like video recognition and real-time market analysis. The Cell may not be that revolutionary, as parallelism has been touted in academia for a long time now, but the DSP-like capabilities plus parallelism will make the Cell much more capable of responding quickly to complex sensory input than commodity hardware currently allows.

    I picture the PS3 using a camera as a very flexible form of input to allow for more creative game design. Super-fast compression and decompression also come to mind, which could be useful for more complex and fluid internet play.

    Recent articles have said the Cell will have some hiccups with physics and AI, because those tasks benefit from branch prediction, but this should be made up for by the fact that the Cell will be able to recognize input at a far more human level than present technology affords.
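
    A minimal C sketch of the kind of stream-massaging kernel described above: a short FIR filter run over a block of samples. The names and sizes are illustrative assumptions, not Cell SDK code; the point is that each output depends only on a small sliding window of input, so long streams can be cut into blocks and processed independently, which is exactly the shape of work an SPE is built for.

        #include <stddef.h>

        #define NTAPS 4

        /* Apply a 4-tap FIR filter to a block of n samples.
         * The first NTAPS-1 outputs are skipped because they lack a full window. */
        void fir_block(const float *in, float *out, size_t n,
                       const float taps[NTAPS])
        {
            for (size_t i = NTAPS - 1; i < n; i++) {
                float acc = 0.0f;
                for (size_t t = 0; t < NTAPS; t++)
                    acc += taps[t] * in[i - t];   /* weighted sliding window */
                out[i] = acc;
            }
        }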
  • by Anonymous Coward on Tuesday June 07, 2005 @04:54AM (#12744701)
    Time to build a killer AMD64 Linux box...right after I take this now worthless G5 to the dumpster.

    Where do you live and which dumpster are you putting it in? I'll be glad to be the thud sound you hear when you throw it in. :D
  • by Sweetshark ( 696449 ) on Tuesday June 07, 2005 @05:34AM (#12744830)
    The Cell also is simple, but in a way that inflates the gflop rating at the cost of programmer time.
    Well, not the average application coder, but the compiler guys. And that's the right thing to do. x86 is a hardware VM with a hardware JIT compiler right now, and that is a job better done in software at compile time, not in real time during execution. (An exception would be bandwidth limitations, as they were reported for the Transmeta CPU (IIRC) running native VLIW code.) Abstraction is nice, but it doesn't belong in hardware - it belongs in the language and the compiler.
  • by Slashcrap ( 869349 ) on Tuesday June 07, 2005 @05:40AM (#12744857)
    Oh wow, I don't know where to start.

    There's only one thing worse than repetitive, uncreative, irrelevant trolls.

    It's the fucktards that reply to them on a point-by-point basis as if it does anything other than justifying the trolling.

    Next time you feel the need to reply to such a lame, obvious troll, try sucking your own cock instead. It's an endeavor that will doubtless keep you occupied for days and be far less distasteful to onlookers.
  • by rpozz ( 249652 ) on Tuesday June 07, 2005 @05:50AM (#12744887)
    I doubt this is the result of a 5-year plan simply because Jobs loves Intel. That's just pure insanity.

    The other possibility is that Apple has got seriously pissed off watching IBM spew out the 3-core G5 for the Xbox 360 and the Cell for the PS3, while leaving them with an aging 2.7GHz CPU.
  • by tesmako ( 602075 ) on Tuesday June 07, 2005 @06:17AM (#12744972) Homepage
    While this is a nice thing to say, it is not realistic today. Intel already tried with the Itanium to push the handling of instruction-level parallelism to the compiler, with poor results. This has been a meme for easily 20 years (VLIW has been a research darling for a long time), but compiler technology has just not measured up to expectations.

    While it might be the way of the future, it is very much a thing of the future, not the present.

    Expect to see lots of carefully hand-tuned code for the Cell to make it behave.

  • by Moraelin ( 679338 ) on Tuesday June 07, 2005 @06:36AM (#12745031) Journal
    You confuse "Real Use" with "Real Work". You can "use" a lot of stuff, without it counting as "work".

    E.g., you get some real use out of your bed at home, but I wouldn't say sleeping there counts as "work". (Or if it does, where can I sign up to get paid for it?) And screwing doesn't really count as work for most people either.

    E.g., you get some real use out of your TV, but most people don't get paid to watch TV, nor consider it "work".

    Same here. Playing a game _is_ "real use" of a computer. It might not be "work", but "use" it is.
  • by arose ( 644256 ) on Tuesday June 07, 2005 @06:38AM (#12745039)
    Sorry, but real-time cutscenes do NOT count. Real games with physics, AI and other overheads do.
  • x86_64 has 16 GPRs and 16 XMM [simd] registers.

    I think you'll find the gains from 16 extra registers are less than what [for example] AMD gains from having three pipelines, the register file, etc...

    It's like cache: throwing more registers at the problem pays off big to start [say, going from 1 to 2, 2 to 4, ...] but dies off quickly after that.

    Take apart that 5% of your program that takes 95% of the time and see how many registers it actually uses in the inner loop.

    With bignum math for instance, inner loops usually amount to 3 registers for an accumulator, 1 for a step counter, 2 for source pointers and 1 for an outer loop counter, 7 registers in total...

    Take the EM64T case: it implements x86_64 as well, but AMD still pwnz it bad. Why? Well, let's see: three [not one] dedicated decoders, three ALU pipelines with 8-step schedulers [re parallelism], etc...

    Intel still pwnz AMD when it comes to SSE2 and memory ops, but that gap has been closing with every new AMD release [AMD64 for instance has more SSE2 opcodes implemented as DirectPath instead of microcode ROM], while in the Intel camp their CPUs haven't really been getting ANY better...

    Tom
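
    A minimal C sketch of the kind of bignum inner loop being counted above: one row of a schoolbook multiply, accumulating one word of the multiplier against the multiplicand. The function and variable names are illustrative assumptions. Counting the live values (the carry/accumulator, the loop counter, two pointers, the multiplier word and the limit) lands in the same neighbourhood of six or seven registers.

        #include <stdint.h>
        #include <stddef.h>

        /* dst[i] += src[i] * multiplier for i in [0, n), 32-bit limbs.
         * Returns the final carry word for the caller to propagate. */
        static uint32_t muladd_row(uint32_t *dst, const uint32_t *src,
                                   size_t n, uint32_t multiplier)
        {
            uint64_t carry = 0;                               /* accumulator  */
            for (size_t i = 0; i < n; i++) {                  /* step counter */
                uint64_t t = (uint64_t)src[i] * multiplier    /* source ptr   */
                           + dst[i] + carry;                  /* dest ptr     */
                dst[i] = (uint32_t)t;
                carry  = t >> 32;
            }
            return (uint32_t)carry;
        }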
  • by bWareiWare.co.uk ( 660144 ) on Tuesday June 07, 2005 @08:26AM (#12745488) Homepage

    Okay what do we know about IBM:

    • They have designed the chips for all the major consoles.
    • They have dumped their Intel-based PC business.
    • They have dumped their partner for Power-based PCs (IBM would hardly have had to bend over backwards to continue the Apple relationship - they must have basically stonewalled them for Jobs to risk a jump to Intel.)
    • They are very Linux friendly.

    What does that mean?

    • They are going to ship an unbelievable volume of chips, allowing them to make high-end chips cost-effectively.
    • They have no ties to the existing PC business and are completely free to do something new.
    • They have a powerful and adaptable OS that they can push for everything from mobile phones to big iron.

    If I were Intel/Microsoft/Apple/Lenovo I would be running for the hills. IBM is about to try to redefine computing again.

    I am not simply recycling the hype about the CELL being better than sliced bread. I truly think the signs are there that IBM is going to go headlong into the Workstation/Embedded/Client/Server market with a CELL/Linux architecture and is going to try to settle some very old debts with Wintel.

    I don't know whether they will succeed. I expect it will come down to whether they can make programming the SPUs as easy as x86. But I think it will be a very interesting few years.

  • Re:Wrongo (Score:3, Insightful)

    by joib ( 70841 ) on Tuesday June 07, 2005 @09:11AM (#12745812)

    In case you don't remember, the point of RISC was to put optimization on the compiler so it wouldn't require massive on-the-fly speculative bibbledy-bop with millions of extra transistors and hideous pipelines like we have nowadays. This was done by providing, essentially, a compiler-accessible cache in the form of lots of registers, and by having an instruction set that was amenable to automated optimization.


    Yes, at least in the beginning, in its purest form. Most high-performance RISC architectures eventually adopted all those OoO, pipelining, etc. tricks anyway.


    In theory, you don't need any GP registers at all, you could just have memory-memory ops and rely on the cache.


    Such "register-less" architectures have been researched, yes. Their primary downfall was that as the compiler has no way of knowing which memory currently happens to reside in cache (as you probably know, cache loading/eviction is decided at runtime based on the memory access pattern), the memory access time is non-deterministic. So there was no way the compilers could schedule the instructions in an intelligent way, and thus such an architecture would have to rely on some really fancy OoO scheme with a huge lookahead (=lots and lots of transistors) to get anywhere near decent performance.


    The real problem seems to be that compilers have just not been able to keep up with the last 20 years of theory.


    What theory? Optimizing code generation is a very hard problem, and if theory had provided some easy answer to it, the compiler vendors would have implemented it really quickly.


    Witness the Itanium--in theory it should have been the ultimate, but they didn't seem to be able to get things optimized for it (other problems, too). Then what happens is curmudgeons complain about the extra work of optimization and insist on setting us back to early-80s architecture rather than writing a decent compiler.


    Well, it seems that we have to agree to disagree then. My opinion is that the godlike compiler you seem to think is just around the corner (if only those curmudgeonly compiler writers would get off their fat sorry asses) hasn't arrived because, despite all the compiler research, we still haven't got much of a clue about how to make it.


    Moral of the story: write a decent compiler and stop trying to glorify crappy ISAs that suit your antiquated and inefficient coding habits.


    My moral: Write your code in a high-level, portable language that isn't tied to some specific ISA. Don't get emotionally attached to ISAs, whether positively or negatively. Judge the goodness of an architecture on how well the compiler + hardware executes the code, not on theoretical figures unlikely to be reached in practice.

    Example of the above moral: Despite the supposed crappiness of the x86 ISA, it still manages pretty good performance (and in most cases unbeatable price/performance), even with a performance-wise mediocre compiler like gcc.
  • Re:*sigh* (Score:1, Insightful)

    by Anonymous Coward on Tuesday June 07, 2005 @09:20AM (#12745863)
    I suppose I need a new label for my p-series and i-series then... IBM GAME CONSOLE... or a Deep Blue Game Console...

    Methinks not...
  • by Anonymous Coward on Tuesday June 07, 2005 @11:18AM (#12747020)
    Tell me, what do you think the GPU on a nice new ATI or NVidia video card is? It's a parallel CPU designed to do a specific task extremely well and extremely quickly, at the cost of programmer time. There is simply no reason, other than fear of change, why the concepts behind modern 3D GPU performance cannot be extended to more general-purpose applications. As the high-level programmer APIs for the Cell evolve, the programmer time required to take advantage of this technology will shrink.

    Ignorant simpletons probably had the same argument as you when the concept of threads was introduced. "They're too hard to use," the PHBs screamed. Well, guess what? Programmers are actually quite smart people, and when you give them a wicked new toy like the Cell, they will figure it out.
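
    A minimal pthreads sketch (illustrative only, not the actual Cell programming API) of the chunked, data-parallel decomposition described above: split an array into slices and hand each slice to a worker, much as a Cell program would hand slices of a stream to its SPEs.

        #include <pthread.h>
        #include <stddef.h>

        #define NWORKERS 4

        struct slice { const float *in; float *out; size_t len; };

        /* Each worker applies the same operation to its own slice. */
        static void *scale_worker(void *arg)
        {
            struct slice *s = arg;
            for (size_t i = 0; i < s->len; i++)
                s->out[i] = s->in[i] * 2.0f;
            return NULL;
        }

        /* Split the array into NWORKERS slices and process them in parallel. */
        void scale_parallel(const float *in, float *out, size_t n)
        {
            pthread_t tid[NWORKERS];
            struct slice sl[NWORKERS];
            size_t chunk = n / NWORKERS;

            for (int w = 0; w < NWORKERS; w++) {
                sl[w].in  = in  + (size_t)w * chunk;
                sl[w].out = out + (size_t)w * chunk;
                sl[w].len = (w == NWORKERS - 1) ? n - (size_t)w * chunk : chunk;
                pthread_create(&tid[w], NULL, scale_worker, &sl[w]);
            }
            for (int w = 0; w < NWORKERS; w++)
                pthread_join(tid[w], NULL);
        }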
