Linux For Cell Processor Workstation

News for nerds writes "The Cell processor from Sony, Toshiba and IBM has been known as the chip that powers the upcoming PlayStation 3 computer entertainment system, but except for that very little is known about how it's applied to a real use. This time, at LinuxTag 2005, from 22nd to 25th June 2005 at the Messe- und Kongresszentrum Karlsruhe, Germany, Arnd Bergmann of IBM will speak about the Cell processor programming model under Linux, and the Linux kernel in the first Cell processor-based workstation computer, which premieres at LinuxTag 2005."
  • real use? (Score:5, Funny)

    by DustyShadow ( 691635 ) on Tuesday June 07, 2005 @01:46AM (#12744323) Homepage
    but except for that very little is known about how it's applied to a real use.

    And why are video games not considered to be "real use" ??
    • And why are video games not considered to be "real use"

      Because the successful ones prevent you from getting any "real work" done.
      • You confuse "Real Use" with "Real Work". You can "use" a lot of stuff, without it counting as "work".

        E.g., you get some real use out of your bed at home, but I wouldn't say sleeping there counts as "work". (Or if it does, where can I sign up to get paid for it?) And screwing doesn't really count as work for most people either.

        E.g., you get some real use out of your TV, but most people don't get paid to watch TV, nor consider it "work".

        Same here. Playing a game _is_ "real use" of a computer. It might not
    • Re:real use? (Score:3, Insightful)

      by Taladar ( 717494 )
      Because they are probably written by people that signed NDAs and can't talk about it, so their knowledge about that Cell processor is not available to the public.
    • Re:real use? (Score:2, Interesting)

      by Criton ( 605617 )
Not a real use? Cell is awesome: an under-$300 chip that eats Xeons for snacks and can eat an Opteron for lunch.
This would be a big seller for people in engineering, the movie industry, etc.
With Linux on it, I want to see a standard PC board with a Cell processor and an x86 emulator in ROM for x86 OSes and for using x86 cards' ROMs.
But for speed it'll run natively compiled Cell applications.
Another odd effect is that if Cell finds its way into printers, we'll have the situation we had back in the 80s where the printer is more powerfu
  • by Anonymous Coward on Tuesday June 07, 2005 @01:47AM (#12744326)
    Can't wait!
  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Tuesday June 07, 2005 @01:53AM (#12744351) Journal
    What has impressed me about Linux is not so much that it has enabled some sort of "software revolution", but rather in how it has given chip/platform makers a specific, generic target OS that they can use freely to get something useful running on their hardware quickly.

    It used to be the case that platform makers would have to either develop their own minimal operating system for testing purposes or work very closely with an OS maker to port their software to the new hardware platform. With Linux, this has been pushed into the anals of history. Now the Linux OS porting goes hand in hand with platform building, as evidenced by the almost immediate support for Linux at the time of hardware release.

I'm not so much interested in how the Cell board is going to revolutionize anything (it won't), but in how we have, in just the past few years, seen a dramatic increase in the number of hardware platforms being released. And not just in numbers, but also in variety: the number of different types of hardware platforms has risen dramatically. Its only limitation is the number of chip instruction sets supported by gcc and the imaginations of hardware manufacturers.

If you want to see how Microsoft's monopoly has hurt the computer industry, look no further than the current industry. Whereas hardware platforms used to be pretty standardized and boring, now, with Linux (and real competition to Microsoft's hegemony), the number of innovative platforms has increased dramatically. We need a Microsoft out there developing consumer-level applications and quality, user-friendly operating systems. However, we also need a real competitor like Linux to push the giant into innovating.
    • by CrankyFool ( 680025 ) on Tuesday June 07, 2005 @02:01AM (#12744384)
      Erm.

      Just for the record: I think you meant "annals of history." "Anals of history" is ...

      different.
    • by ignorant_coward ( 883188 ) on Tuesday June 07, 2005 @02:21AM (#12744438)

      Linux is more popular, but NetBSD allows quicker porting of "something useful".

I agree that Microsoft has dealt a fair amount of damage with crappy APIs and bad QA regarding stability and security. A 'standard turd with a pretty GUI' is still a turd.
    • by Anne Thwacks ( 531696 ) on Tuesday June 07, 2005 @03:02AM (#12744573)
      how it has given chip/platform makers a specific, generic target OS that they can use freely to get something useful running on their hardware quickly

      Perhaps because it is a Unix work-alike, and this was the original design goal of Unix?

      • That really wasn't the original design goal of Unix, though. The original design goal of Unix was for Ken Thompson to be able to play a really funny little game that he had developed while working on Multics.

        The first few years, Unics, like all OSs of the day, was indeed written in assembly. Even after it was rewritten in C, it wasn't portable for another few years, since it still relied on stuff that only worked on PDP-11s. It was only when they decided to try and port it to another architecture (which I

    • by Per Abrahamsen ( 1397 ) on Tuesday June 07, 2005 @03:56AM (#12744714) Homepage
When you had some new hardware, you bought a (relatively cheap) Unix source license, and had something running fast.

Linux is better though, because the GPL encourages hardware vendors to share their modifications.

With Unix, all you had access to was the original source and the ports done by non-commercial/academic groups (such as UCB), not other vendors' code.
    • by RKBA ( 622932 ) on Tuesday June 07, 2005 @03:59AM (#12744720)
...Its only limitation is the number of chip instruction sets supported by gcc and the imaginations of hardware manufacturers.

I have news for you,... we programmers have been letting the hardware designers have FAR too much fun for far too long! It wasn't until my recent retirement from more than 35 years of computer programming (I've had many different titles) that I've had the time to learn the Verilog hardware design language - and it's GREAT FUN!!! :-) Verilog is very liberating because it removes the boring sequential execution of most CPUs and provides a clean slate with which to design any sort of little tiny electronics machine (that's how I think of VLSI design) that my heart desires. There is a GPLed version of SystemC (a higher-level hardware design language than Verilog) on SourceForge that I've been meaning to take a look at, but first I'm creating a 640-bit-wide(!!!) factoring machine in Verilog which I hope to fit into one of the Lattice or Altera FPGA parts.

Really, I highly encourage programmers or anyone interested to learn and use Verilog or some other high-level hardware design language. Verilog is similar in many ways to the C language, so if you're familiar with C then you already know most of Verilog's operators, precedence rules, etc. The only thing that takes a little getting used to is Verilog's inherently parallel nature. That is both its strength and the source of most Verilog design errors (at least for me). Also, Verilog is even more bit-picky than C, but I actually prefer the extra control that languages like C and Verilog give me over the hardware versus languages that try to insulate me from it.

      • I agree HDLs (be it Verilog or VHDL) are much fun to use. And developer boards are becoming more affordable as well. (You can get a dev board with an FPGA and a bunch of ports for a few hundred bucks.)

The drawback is that most of the high-end tools (ModelSim and synthesisers) are extremely expensive. But there are often free tools that work all right; I know Xilinx supplies these for free.
      • I have been thinking for a while that it would be fun to get into hardware design, but I really don't know where to start. Could you point me in the direction of good resources to read (online and offline) and what kind of kit I'd need to buy to start?
  • by XanC ( 644172 ) on Tuesday June 07, 2005 @01:53AM (#12744358)
    We are fast approaching an era where you'll be able to run any OS and any software you want on any architecture you want.
    • We were fast approaching that about 30 years ago. Then the personal computer "revolution" happened, and companies like Microsoft and Apple started from square one, making all the mistakes that their predecessors had been making, and then some: programming in low-level languages, extensive use of assembly, lack of hardware abstraction, etc.

      Unfortunately, the so-called PC-pioneers like Gates, the Apple developers, and others, didn't have a clue what they were doing technically and were learning on the job;
• To be fair, it was the introduction of the mass-produced IC that allowed computers to be priced to where people could afford them (as opposed to large corporations and governments). Those early CPUs were very, very underpowered compared to their "real computer" counterparts, and OSes like CP/M and DOS were reflections of those limitations.

        Cheap, but limited.

        --
        Evan "My first computer was an S100 bus handbuilt. My first OS wasn't."

  • Another Demo loop (Score:4, Insightful)

    by BagOBones ( 574735 ) on Tuesday June 07, 2005 @01:56AM (#12744369)
Too bad that at LinuxTag 2005 all you will get to see is a looped video running in "real time" on "similar hardware", simulating the great development advances you will be able to achieve with the new Cell processor.

Maybe the old man face and duck-in-water tech demos from the PS2 will also appear... Did any PS2 game ever look as good as Sony's tech demos?
    • No, the PS2 games didn't look as good as the demos. The legendary Sony hype machine aside, those Unreal 3 engine demos were realtime, and they looked a hell of a lot like pre-rendered. Very impressive, end of story.

      Cell is a very cool design. I suggest you read the design docs linked to in the news item. IBM, Toshiba and Sony are fairly reputable - this is not some vaporware. If you were trolling about Infinium Labs and the Phantom, I'd understand, but PS3? Come on...
      • Sorry, but that's patently untrue. Everyone seems to forget just how bad the PS2 tech demos were - all of them were surpassed in games within the life of the system. Compare the cutscene-style demos (RR girl, Final Fantasy dance sequence) to something like the real-time cutscenes in MGS2. The Tekken demo to any of the Tekken games on the system (even Tekken Tag Tournament outclassed it). The GT demo was probably one of the most impressive, though it cheated quite a lot to achieve that, and it still doesn't
        • by arose ( 644256 )
          Sorry, but real time cutscenes do NOT count. Real games with physics, AI and other overheads do.
          • Tech demos are always sans physics / AI etc. That's standard and you have to take it into account when viewing them. Especially if they're not showing in game graphics, like the head demo. A fair complaint is when the system simply can't recreate the tech demo. That's cheating. But if the system can run the demo then all's fair.
            • Bingo. Although all of the examples I gave other than the MGS2 cutscenes were in-game, and the MGS2 in-game graphics are still more impressive than the FFVIII dance demo (which is available online if you search a bit, for those people who are having difficulty distinguishing in their memory between the PS2 tech demo and the PSX FMV).
### Did any PS2 game ever look as good as Sony's tech demos?

The current generation of PS2 games looks at least as good as, if not in many cases better than, the tech demos they showed back then. That said, yes, it took them a while to get all the power out of the PS2. This thread [the-magicbox.com] has some videos and pictures to compare.

And last but not least, one should never forget that a tech demo isn't actually gameplay. A tech demo allows the developer to pre-script everything they want, insert cool effects all over the place

  • by emanuelez ( 563534 ) on Tuesday June 07, 2005 @02:14AM (#12744420) Homepage
I really hope that Cell will boost IBM, since in the last few months they sold their personal computer division to Lenovo and have lost their partnership with Apple for PPC processors. I really think IBM still has a lot to give to the IT world and it would be a real waste to lose their know-how!
    • Oh boy... maybe we should start a fund-raiser to save IBM, ya think?
    • Agreed, it'd be a shame to lose them. That said, although the loss of Apple is a PR blow I doubt it counts much financially. Regarding selling the PC dept, I suspect it just suited their business model better. So they're still looking healthy for now :-)
    • by bWareiWare.co.uk ( 660144 ) on Tuesday June 07, 2005 @07:26AM (#12745488) Homepage

      Okay what do we know about IBM:

• They have designed the chips for all the major consoles.
• They have dumped their Intel-based PC business.
• They have dumped their partner for Power-based PCs (IBM would hardly have had to bend over backwards to continue the Apple relationship - they must have basically stonewalled them for Jobs to risk a jump to Intel.)
      • They are very Linux friendly.

      What does that mean?

      • They are going to ship an unbelievable volume of chips, allowing them to make highend chips cost effectively.
• They have no ties to the existing PC business and are completely free to do something new.
      • They have a powerful and adaptable OS that they can push for everything from mobile phones to big iron.

      If I was Intel/Microsoft/Apple/Lenovo I would be running for the hills. IBM is about to try and redefine computing again.

I am not simply recycling the hype about the CELL being better than sliced bread. I truly think the signs are there that IBM is going to go headlong into the Workstation/Embedded/Client/Server market with a CELL/Linux architecture and is going to try and settle some very old debts with Wintel.

I don't know whether they will succeed. I expect it will come down to whether they can make programming the SPUs as easy as x86. But I think it will be a very interesting few years.

  • cell (Score:5, Funny)

    by Eric(b0mb)Dennis ( 629047 ) on Tuesday June 07, 2005 @02:19AM (#12744428)
The Cell is amazing, it will-

    - optimize seamless communities
    - generate vertical e-services
- leverage synergistic convergence

    and best of all

    - engage e-business content

    Perfect solution
  • done that [uq.edu.au]

    OK, so it's not on the Cell architecture, but rather an FPGA-based softCPU, but certainly the problem of integrating asymmetric coprocessing engines into the Linux architecture has been thought about before.

    Cool stuff nonetheless.

  • The IBM Cell workstations used for PS3 dev run a version of the Linux kernel to handle development I/O tasks: file transfers, communications with the PC host, starting/restarting programs etc. The game itself does not run in a Linux environment.

This is similar to the T10K PS2 devkits running Linux (on a separate x86 processor) for similar purposes.

    As with the PS2, the consumer PS3 console itself uses a custom bare-bones kernel; it is NOT Linux based, although I could certainly see Linux being ported t
• What I want to know is: does this 'uber-multiprocessor ready' architecture have some kind of 'priority' flag that one uses on a thread?

    More succinctly: how does it handle its passing of processing requests to other 'cells'?

    Using some (tiny, tiny bits of) ASM, I started to wonder about this. I mean, dear GOD! How do you deal with it? Some form of modified call I would suppose, like:

    call_avail mem_Address_Of_Function, MemAddress_To_Store_Result

And when the result comes in it fires some interrupt. Maybe th
    • by Anonymous Coward
That function name is way too long and descriptive. The actual name would be something like addfunmemdestaddavsize().

      And that would be followed by a series of non-sensical parameters which can be defaulted to NULL and everything still seems to work fine.

      As for your question, that's why they make the big bucks and you are posting on Slashdot. If you knew the answer, you'd be working for them.
      • except no ASM code uses C function notation [e.g. func(x)] ;~)

        But yes. Funny haha ;~)
      • I think you've been programming on Windows for too long. On a POSIX system, the system call would be something like spu_run(). The first parameter would be the load address. The second parameter would be the store address, and the third would be a horrendously complicated control structure with loads of values that need to be set to their defaults in 100% of cases, as well as a single integer representing the signal to be fired when the SPU finishes. Since signals only convey one bit of information, you
    • More succinctly: how does it handle its passing of processing requests to other 'cells'?

Wrong question. As I understand it from the pictures..., no I didn't RTFA ;-), the SPUs are co-processors (like a GPU or floating-point co-processor), with the exception that they're all executing the same copy of the same program. This is the old concept of associative memory, except that in this case the control logic associated with each local block of memory (the Local Store or "LS" blocks in the picture) contains

• Actually the SPUs are independent of each other, each executing a different program. Internally, they have 8-way vector units, which are close to what you describe. There is one more level in the processing hierarchy than you guessed :)
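To make the dispatch pattern being asked about above a bit more concrete, here is a minimal plain-C sketch of the general shape: hand a worker a function address and a result address, then get notified when it finishes. A POSIX thread stands in for an SPU and a condition variable stands in for the completion interrupt; the names (cell_job, cell_worker) are invented for illustration and are not the real Cell or spufs interface.

    #include <pthread.h>
    #include <stdio.h>

    /* A worker thread plays the role of an SPU; a condition variable plays
     * the role of the completion interrupt.  Purely illustrative. */
    struct cell_job {
        double (*fn)(double);       /* "mem_Address_Of_Function" */
        double  arg;
        double *result;             /* "MemAddress_To_Store_Result" */
        int     done;
        pthread_mutex_t lock;
        pthread_cond_t  finished;
    };

    static void *cell_worker(void *p)
    {
        struct cell_job *job = p;
        double r = job->fn(job->arg);           /* run the offloaded work */
        pthread_mutex_lock(&job->lock);
        *job->result = r;
        job->done = 1;
        pthread_cond_signal(&job->finished);    /* "fire the interrupt" */
        pthread_mutex_unlock(&job->lock);
        return NULL;
    }

    static double square(double x) { return x * x; }

    int main(void)
    {
        double result = 0.0;
        struct cell_job job = { square, 21.0, &result, 0,
                                PTHREAD_MUTEX_INITIALIZER,
                                PTHREAD_COND_INITIALIZER };
        pthread_t spu;

        pthread_create(&spu, NULL, cell_worker, &job);  /* "call_avail ..." */

        /* The main core is free to do other work here. */
        pthread_mutex_lock(&job.lock);
        while (!job.done)
            pthread_cond_wait(&job.finished, &job.lock);
        pthread_mutex_unlock(&job.lock);

        printf("result = %f\n", result);
        pthread_join(spu, NULL);
        return 0;
    }

On the real hardware the worker would be an SPU context managed by the kernel (which is what the spufs work being presented at LinuxTag is about), but the request/complete shape is the same.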
  • by Rolman ( 120909 ) on Tuesday June 07, 2005 @03:01AM (#12744564)
    The Cell architecture was developed with powerful and complex math applications in mind. How will existing Linux applications perform on it? It seems to me that the Cell's strengths are not integer math and general purpose computing, so in theory only floating-point intensive and vector applications can get a real kick out of it. There are not many well known applications with these characteristics.

    That said, advances in parallelizing or vectorizing tasks within the kernel or popular applications are possible, but that's not a trivial task, so at first glance Cell's Linux benchmarks could look unimpressive or misleading, even though the architecture itself is revolutionary, at least in theory.

Here I hope IBM has done their homework and shows something really impressive, yet realistic. I want to see things like Apache and GD serving hundreds of thousands of requests for dynamic content, or some real-time encoding/compositing of MPEG4 video for scalable delivery. I want to see Maya or Lightwave rendering a very complex scene. Rubber ducks may be fun to look at and, in all fairness, fit for a videogame-oriented crowd, but I want to see some kick-ass performance based on what it can potentially do for application development.
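As a rough illustration of that point, here are two tiny C functions: a dense floating-point kernel of the kind that vector hardware (and the SPEs) loves, and the branchy, pointer-chasing work that makes up most general-purpose code and gains almost nothing from wide vector units. The function names are just illustrative.

    #include <stddef.h>

    /* y[i] += a * x[i]: dense floating-point work with no branches and
     * regular memory access, i.e. exactly what vector units are good at. */
    void saxpy(float a, const float *x, float *y, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    /* Typical "general purpose" work by contrast: a linked-list walk.
     * Every step is a data-dependent load plus a branch, so deep pipelines
     * and wide vector units buy almost nothing here. */
    struct node { int key; struct node *next; };

    int count_matches(const struct node *p, int key)
    {
        int n = 0;
        for (; p != NULL; p = p->next)
            if (p->key == key)
                n++;
        return n;
    }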
• I mean, optimizing Maya or Lightwave's raytracing/radiosity engines to make use of the Cell is NOT a trivial task.
    • by Anonymous Coward
The Cell arch has very little to do with the processing elements themselves (even though they are quite interesting).
The true power of Cell is the data rates that can flow between the individual processors, memory and the I/O backplane. It is a mini supercomputer on a chip because of the data rates; the processing elements are secondary, as they can be altered and changed for different "Cell" microprocessors.
I wrote up a brief explanation with info about data rates, etc... here
      http://www.friendsglobal [friendsglobal.com]
  • The Cell Advantage (Score:3, Insightful)

    by EMIce ( 30092 ) on Tuesday June 07, 2005 @03:17AM (#12744615) Homepage
Those SPEs will be pretty useful for massaging and distilling large streams of data, which should make the Cell great at tasks like video recognition and real-time market analysis. The Cell may not be that revolutionary, as parallelism has been touted in academia for a long time now, but the DSP-like capabilities plus parallelism will make the Cell much more capable of responding quickly to complex sensory input than commodity hardware currently allows.

    I picture the PS3 using a camera as a very flexible form of input to allow for more creative game design. Super-fast compression and decompression also come to mind, which could be useful for more complex and fluid internet play.

Recent articles have said the Cell will have some hiccups with physics and AI, because those tasks benefit from branch prediction, but this should be made up for by the fact that the Cell will be able to recognize input at a far more human level than present technology affords.
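For a sense of what "massaging and distilling large streams of data" means in code, here is a bare-bones FIR filter, the bread-and-butter DSP kernel: a fixed set of multiply-accumulates slid along an input stream. Kernels of this shape map naturally onto SPE-style vector units; the code below is generic C rather than SPE intrinsics, and exists only to illustrate the pattern.

    #include <stddef.h>

    /* out[i] = sum over k of taps[k] * in[i + k]: a basic FIR filter.
     * Plain C here; on an SPE the inner loop would be unrolled and done
     * with wide vector multiply-adds out of the local store. */
    void fir(const float *in, float *out, size_t n,
             const float *taps, size_t ntaps)
    {
        for (size_t i = 0; i + ntaps <= n; i++) {
            float acc = 0.0f;
            for (size_t k = 0; k < ntaps; k++)
                acc += taps[k] * in[i + k];
            out[i] = acc;
        }
    }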
    • by EMIce ( 30092 )
I just thought of something else. A Cell-powered robot would be incredibly powerful. With the right algorithms to recognize critical feedback for a particular task, the Cell could allow the robot to respond quickly to complex stimuli, filtering and focusing on the few elements which are relevant to the task. Think of a robot capable of competitively playing a physical sport, given the right "muscles".

      Also, while AI and physics performance are limited in some respects, as I mentioned in the last post, I just
  • by mcc ( 14761 ) <amcclure@purdue.edu> on Tuesday June 07, 2005 @03:25AM (#12744646) Homepage
Is this supposed Cell/Linux workstation something we actually know jack squat about, or is it just IBM going "uh, we're gonna make one of these... someday"? Can we make any educated guesses based on what IBM usually does?

    Specifically, is this, like, something that will be actually in the affordable range for people, or is this going to be like some kind of $6000 near-server tank?

    Also, how many Cells is this likely to have? One? Two? Four? These SPEs are all well and good for computational stuff but the rest of the time it's nice not to be stuck with a single processor.
• I have 15K Euro as my guess, and I expect it to have as many Cells as needed to get it up to that price. I also expect it to be big, loud, ugly,.... and black. It will use a currently unknown subset of DVI which will only drive Sony or Toshiba LCD monitors that cost over 3K Euro (and which only come in black, non-widescreen formats). It will use a form of Rambus memory where not only do you have to pay per RAM update cycle, but to use it you must sign a confession stating that you have personally violated SEC re
  • Cell-less (Score:2, Interesting)

    by necrodeep ( 96704 ) *
With all the continuing good news about the evolution of the PPC, including the Cell processor, I find it hard to believe that Apple has chosen now to move to Intel chips... and the developer workstations are only 32-bit, no less (I think they could have at least gone with AMD64).

The good news is that someone is at least taking advantage of the architecture and producing Linux workstations based on the Cell... unfortunately I don't think that will be enough for it to survive in the desktop/workstation marke
  • by __aahlyu4518 ( 74832 ) on Tuesday June 07, 2005 @03:55AM (#12744707)
    Maybe Apple would like to use a nice IBM chip :-)
  • by tesmako ( 602075 ) on Tuesday June 07, 2005 @04:02AM (#12744726) Homepage
In this thread I have already seen several posts talking about the worthlessness of the ill-designed x86 and the wonders of the simple Cell. The problem is that while the x86 instruction set is old and very tacky, the internals of the processors have evolved into best-of-breed modern chips: lots of execution units with excellent out-of-order performance and branch prediction, and very high clock rates with nice IPC.

The Cell also is simple, but in a way that inflates the gflop rating at the cost of programmer time.

• Multicore, requiring the programmers to extract explicit parallelism (granted, this is coming everywhere, but really, the fewer, better-performing cores there are, the easier they are to utilise well).
• A whole pile of vector units (it is very hard to fill even one or two vector units well; this will be a huge time sink for any project trying to utilise it even half-way well).
• An in-order primary CPU core, what is this, the eighties?! And if you think this will be like stepping back to how it was with in-order cores a decade or two ago, think again: memory latencies are higher and pipelines are deeper, so you'd better pray that your compiler gets lucky if you want any real performance out of the primary core (or expect many sleepless nights hand-optimizing it).
• Hand-managed memory hierarchy?! This is not even a throwback to the eighties, this is a whole new level of inconvenience for the programmer. Whereas all normal CPUs carefully handle the memory hierarchy for you, on the Cell it is suddenly up to the software to decide where and when and why memory is in the "cache" of the vector elements (see the sketch after this list).
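A minimal sketch of what that hand-managed staging looks like, using plain C as a stand-in: memcpy plays the role of the DMA engine and a small static array plays the role of an SPE's local store. On real Cell hardware the transfers would be asynchronous DMAs overlapped with computation (double buffering) and issued through the SDK's intrinsics; nothing below is the actual API, it only illustrates the bookkeeping the programmer takes on.

    #include <string.h>
    #include <stddef.h>

    #define LS_CHUNK 4096   /* pretend "local store" capacity, in floats */

    /* Process a large array in chunks that are explicitly staged into a
     * small local buffer.  memcpy stands in for the DMA get/put that an
     * SPE programmer must issue by hand; a conventional cached CPU would
     * simply index main_mem[] directly and let the cache do all of this
     * invisibly. */
    void scale_all(float *main_mem, size_t n, float factor)
    {
        static float ls[LS_CHUNK];              /* the "local store" */

        for (size_t off = 0; off < n; off += LS_CHUNK) {
            size_t len = (n - off < LS_CHUNK) ? (n - off) : LS_CHUNK;

            memcpy(ls, main_mem + off, len * sizeof *ls);   /* "DMA get" */
            for (size_t i = 0; i < len; i++)                /* compute   */
                ls[i] *= factor;
            memcpy(main_mem + off, ls, len * sizeof *ls);   /* "DMA put" */
        }
    }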

By comparison the modern x86 is a dream to program for; just note how two fairly radically different CPUs (the Athlon64 and the P4) handle the same code very nicely without any big performance issues. Compare this to the Cell, where all the explicitness will make sure that any binary you write for the Cell today will run like crap on the next version.

    The point here is that Apple could absolutely not have switched to the Cell, it is inconvenient now and hopeless to upgrade without having to rewrite a ton of assembler and recompile everything for the new explicit requirements.

    The Cell is the thing for number crunching and pro applications where they are willing to spend the time optimizing for every single CPU, but for normal developers it is a step back.

    • Wrongo (Score:4, Interesting)

      by Urusai ( 865560 ) on Tuesday June 07, 2005 @04:25AM (#12744792)
      In case you don't remember, the point of RISC was to put optimization on the compiler so it wouldn't require massive on-the-fly speculative bibbledy-bop with millions of extra transistors and hideous pipelines like we have nowadays. This was done by providing, essentially, a compiler-accessible cache in the form of lots of registers, and by having an instruction set that was amenable to automated optimization.

      In theory, you don't need any GP registers at all, you could just have memory-memory ops and rely on the cache. This is impractical due to the size of memory addresses eating up your bandwidth (incidentally, this is a problem with RISC architectures, eating bandwidth and clogging the cache, but that's another story). As an alternative, you can simply expose the cache as one big honking register file using somewhat smaller addresses, and let your fancy-pants optimizing compiler do its best.

The real problem seems to be that compilers have just not been able to keep up with the last 20 years of theory. Witness the Itanium: in theory it should have been the ultimate, but they didn't seem to be able to get things optimized for it (other problems, too). Then what happens is that curmudgeons complain about the extra work of optimization and insist on setting us back to early-80s architecture rather than writing a decent compiler.

      Moral of the story: write a decent compiler and stop trying to glorify crappy ISAs that suit your antiquated and inefficient coding habits.
      • Re:Wrongo (Score:5, Interesting)

        by tesmako ( 602075 ) on Tuesday June 07, 2005 @05:37AM (#12745035) Homepage
        The problem with that moral is that compiler technology is nowhere near where it needs to be. Doing VLIW and other explicitly parallel architectures has been a research darling for many years, it just so happens that compiler technology fails to really make it work as things stand.

Compilers do manage to do a decent job in some cases, especially with languages that are easier to do semantic analysis over than C/C++, but while it is interesting research it is not a practical way to go. The reality is that C/C++ is prevalent, and highly untuned code is abundant. This also fails to address the problem of migrating between versions of the processor; while recompiling everything every time is a way to go, it is not terribly practical (and when every new processor fails to measure up to the old one in the user's old apps, the user will not be happy).

It is a bit odd that you bring up the Itanium, since it is the best argument for this stance. There has not been any lack of effort in compiler technology for the Itanium; the compilers are real marvels, leveraging the very best the research has to offer. The silicon itself is very powerful: if you manage to actually fill all the instruction slots the thing will really fly. Unfortunately they never do, they get 50% fills and such, and the problem is that a modern sophisticated OoO processor will do an equally good job extracting parallelism on the fly while offering more flexibility.

A large part of the problem, and the reason why multithreaded models are becoming pervasive, is that OoO processors already extract very close to the maximum in instruction-level parallelism, even with near-infinite window sizes (I recommend the paper at http://citeseer.ist.psu.edu/145067.html [psu.edu]), so automatic extraction of ILP is not a field to pin much hope on.

My final note is that while having sophisticated issue logic is fairly complex, the chip real estate is not that large, and the gains to be made are huge. The Cell has a weak primary processor, mostly meant to be an organizing hub for the vector operations; if you don't write vectorized code you are screwed (unless compiler technology does something amazing soon).

        • I think EPIC is the result of flawed thinking. You hear much of moving the complexity of the OoO CPUs from "expensive" silicon into software. Yeah, that's great, but it's not really equivalent because one happens at runtime and one happens at compile time. The CPU has much more information about the code and dataset than the compiler does and can make better decisions. A better comparison would be between an OoO CPU and dynamic translation and optimization in a JIT or the Transmeta "Code Morphing" stuff
      • Re:Wrongo (Score:3, Insightful)

        by joib ( 70841 )

        In case you don't remember, the point of RISC was to put optimization on the compiler so it wouldn't require massive on-the-fly speculative bibbledy-bop with millions of extra transistors and hideous pipelines like we have nowadays. This was done by providing, essentially, a compiler-accessible cache in the form of lots of registers, and by having an instruction set that was amenable to automated optimization.


        Yes, at least in the beginning in its most pure form. Most high performance RISC architectures
The Cell also is simple, but in a way that inflates the gflop rating at the cost of programmer time.
Well, not the average application coder's time, but the compiler guys'. And that's the right thing to do. x86 is a hardware VM with a hardware JIT compiler right now. This is a job that is better done in software at compile time, not in real time during execution. (An exception would be bandwidth limitations, as they were reported for the Transmeta CPU (IIRC) running native VLIW code.) Abstraction is nice. But it doe
      • While this is a nice thing to say it is not realistic today, Intel already tried with the Itanium to push the handling of instruction level parallelism to the compiler, with poor results. This has been a meme for easily 20 years (VLIW has been a research darling for a long time) but compiler technology has just not measured up to the expectations.

        While it might be the way of the future, it is very much a thing of the future, not the present.

        Expect to see lots of carefully hand-tuned code for the Cell to

  • by SleepyHappyDoc ( 813919 ) on Tuesday June 07, 2005 @04:14AM (#12744756)
    I was talking to a friend about this new Cell processor they were going to have in the PS3, that was supposed to have all these nifty new capabilities, and he was looking at me like I'd grown another head. I asked him why he was looking at me so oddly, and he said, "Dude, Celerons are not that good."
• Perhaps Apple could pick up using Cell chips for the new Macs, instead of ruining them by shoving an x86 in them...
