
IBM to use Cell in Blade Servers

taskforce writes "IBM announced on Wednesday that it would be putting versions of its Cell processor inside its increasingly popular low-power blade servers by this summer. From the article: 'For Cell to gain wide acceptance, IBM needs to spur outside programmers to write software that takes advantage of Cell's prowess. That could prove more challenging than usual because Cell's architecture is so different. IBM hopes this summer's release of the Cell-based servers kick-starts work by third-party programmers.'" Also covered in a PCPro article.

  • by Jordan Catalano ( 915885 ) on Thursday February 09, 2006 @12:50PM (#14679294) Homepage
    That could prove more challenging than usual because Cell's architecture is so different. IBM hopes this summer's release of the Cell-based servers kick-starts work by third-party programmers.

    Deja vu? [wikipedia.org]
    • Itanium may perform better for some number crunching apps, but not enough to outweigh the costs, generally.

      The cell processor, on the other hand offers such a giant increase in performance (for some applications) that you will see people investing time and money to take advantage of it. In addition, with Toshiba, Sony and IBM all with product plans and thus the related volume and eco-system surrounding development tools, etc., I think the cell is positioned far better than Itanium to succeed.
      • Itanium offers such a giant increase in performance (for some applications) compared to rival RISC products that you will see people investing time and money to take advantage of it. In addition, with Intel, SGI and HP all with product plans and thus the related volume and eco-system surrounding development tools, etc., I think the Itanium is positioned far better than Alpha to succeed.

        D'oh.
      • by ShadowFlyP ( 540489 ) on Thursday February 09, 2006 @01:53PM (#14679992) Homepage
        Actually, the bigger difference is in how the architecture changed. The Cell processor is more along the lines of a multi-core DSP. The instruction set is different from that of general-purpose computing cores, and there are many of them. The key is that these cores are disjoint: you can run one application on one core and another application on another core.

        The Itanium is different in that it requires instructions to be passed to the CPU as "bundles". Any of the instructions in a bundle can be executed in any order, but they all come from the same application. Thus, in order to extract speed from the Itanium, the compiler is forced to extract parallelism from within functions. This is very difficult, since most programming is fairly sequential. The Cell, on the other hand, lets you execute different tasks, and so puts this control back on the programmer instead of making extra work for the compiler.

        Itanium was (is) a great idea from compiler theory perspective, but doesn't work out all that well (yet) in the real world.
        • You're close to correct. The Cell processor does have a bunch of cores that are basically DSPs (no virtual memory, etc.) BUT there's also another core that's basically a full-blown Power processor. That core is meant to rule the others.

          So while you do still have to program differently for a cell with 8+1 cores than you would for a computer with 9 Power processors, it's still not like being stuck with just 9 DSPs.
        • by mosel-saar-ruwer ( 732341 ) on Thursday February 09, 2006 @03:40PM (#14681144)

          Actually, the bigger difference is in how the architecture changed. Cell processor is more along the lines of multi-core DSPs.

          Standard computer graphics are RGB color at 24-bits per pixel [2^24 = 16777216], i.e. about 16 million colors.

          Standard thinking in the graphics bidness is: if our triangles will only be displayed in 24 bits' worth of color, then why do we need to perform triangle arithmetic in anything higher than maybe 32 bits' worth of floating point?

          Hence floating point calculations are 24-bit in the ATi world, and 32-bit in the nVidia and Playstation3/Cell world.

          Boy, I hope they're upping that floating point number for these "server" chipsets, cause 32-bit single-precision floats are essentially worthless for even something as trivial as computing interest on a bank statement.

          On the other hand, a "Cell" server CPU with a 128-bit FPU would be something to drool over. The problem, though, is that transistor counts on FPU's tend to increase as n^2, so each time you double the FPU bit-count [to 64-bits, then to 128-bits], your transistor count goes through the roof.

          • Hate to break some painful news to you, but "24-bit" RGB refers to each color getting 8 bits -- an UNSIGNED INTEGER value.

            No floating point involved -- at all...

            Now for 3D Graphics, coordinates may be represented in floating point. But during rendering, the values are converted to 8-bit integer values for Red, Green, and Blue components of each pixel.

            And financial calculations are computed using INTEGER arithmetic....

            A lot of things that might appear to require floating point, can often be implemented u
        • "...Thus, in order to extract speed from the Itanium, the compiler was forced to extract parallelism from within functions. This is very difficult since most programming is fairly sequential." No, it's not that hard. The CDC 6600 "super computer" built in the 1960s accepted "bundled" instructions and required a compiler or human programmer to take advantage of the 6600's parallelism. The old FORTRAN compiler could many times beat out an experienced assembly language programmer. It was not really that h
          • The old FORTRAN compiler could many times beat out an experienced assembly language programmer.

            Yes, and I think that FORTRAN code performs quite well on Itanium. The problem is that C code, with its almost unrestricted use of pointers, doesn't lend itself easily to that sort of optimisation. If you have a chunk of code with lots of pointer references, the compiler will need to make some pretty big deductions on where those pointers could be pointing before it can hope to parallelise anything.

    • by John Whitley ( 6067 ) on Thursday February 09, 2006 @01:58PM (#14680042) Homepage
      Deja vu?

      Nice quip, but the realities of the situation are completely different. My take on EPIC nee IA-64 when it was first publicly announced was surprise at an architecture that actually encouraged ultra-complex processor control logic. This, when prevailing trends tended to find ways to manage or reduce that complexity, or at least provide unambiguous chip-compiler synergy. Put another way, Intel made design choices that made the hardware itself very challenging to build and properly synergize with a compiler to achieve high total performance. Intel had certainly shown their chops at this sort of high-complexity chip controller design in the x86 line, but the move still seemed brazen from an outsider's perspective. History now shows that they certainly had trouble going down that path...

      Cell, however, is basically a bog-stock PowerPC with DSP engines at its disposal. Think Altivec/MMX/SSE type units on steroids. This approach provides computing power that isn't applicable to all tasks, but is generally proven to perform well for applications that require high performance mathematical processing. Incidentally, that's precisely the target market that IBM's stated they're after with Cell-based servers. Moreover, Cell's scalability model and hardware complexities are much more manageable.

      To really leverage Cell's power from the software side will require some or all of 1) good compiler and toolchain support, 2) good library support, and 3) dedicated development effort for the specific application. IBM has the expertise and motivation to provide 1 and 2, and developers in the supercomputing world tend to get really good at 3. When your *highly optimized* supercomputer app may take on the order of a year to run, big emphasis tends to be put on making it run fast. Months of work to save years of time.

      It still remains to be seen how this effort will play out in the marketplace, but variants of Cell's basic approach are working right now in many, many devices.
      • Does that mean putting cell chips on a video card could enhance dynamic world creation?

      • Intel made design choices that made the hardware itself very challenging

        You misspelled "HP". Itanium was originally going to be the new PA-RISC chip, (originally named 'PA-WideWord'). HP approached Intel when it became apparent that they wouldn't produce the volume of chips to make it profitable to upgrade their fab (which they would have to do to produce a chip of Itanium's complexity). So, enter Intel ca. 1994. Sun produced a version of Solaris for the new chip, IBM and SCO played together nicely (al
      • Cell, however, is basically a bog-stock PowerPC with DSP engines at its disposal.

        Actually, the PowerPC Unit (PPU) in a Cell is a highly simplified, streamlined PowerPC and nothing at all like the PowerPCs you'll find in a G5 Mac. While it runs at a higher clock rate, it's missing lots of stuff like out-of-order execution and advanced branch prediction, and has a much simpler load-store unit. For example, on Cell there are huge penalties for load-hit-store, but on current-gen PowerPCs there is a unit
    • "The Register" has a recent article about building servers based on the IBM Cell [theregister.co.uk].

      Since the Cell is now integrated into the apparatus of the best-funded military in the world, the Cell will live essentially forever. For the same reason, Ada (i.e., the computer language) will live forever, even though few people in industry use the language.

      By the way, Cell is also IBM's answer to Sun's Niagara. For years, Sun touted Niagara as a new revolution in computing: Niagara is supposedly the fi

  • by gasmonso ( 929871 ) on Thursday February 09, 2006 @12:51PM (#14679299) Homepage

    Take a peek at http://www.research.ibm.com/cell/patents_and_publications.html [ibm.com] to see the patents and whitepapers for cell technology. One interesting point is the Online Game Prototype white paper on there.

    http://religiousfreaks.com/ [religiousfreaks.com]
  • by db32 ( 862117 ) on Thursday February 09, 2006 @12:51PM (#14679302) Journal
    Sun Microsystems has decided to include the Gohan chip to combat IBM's Cell chips.
  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Thursday February 09, 2006 @12:55PM (#14679354) Homepage Journal
    a free optimizing compiler, that takes advantage of the architecture, would do wonders...

    It being command-line compatible with (or simply a back-end of) an existing compiler like gcc is even better.

    Add a port of a good OS, and your platform is suddenly incredibly attractive to developers.

    • I'd even be happy with a CLR or VM (ala .Net or Java) that would take care of the compilation. If they could get the .Net framework to run on it, I can think of a few apps I would be up for redesigning to take advantage of the multi-threading advantages.

      -Rick
    • The main problem, I suspect, is that general purpose code just doesn't run very well on it. You really need to optimise for each application, to tune how you handle your data and what algorithms you use in order to feed the Cell properly, if you want to get the most out of it.
      • by ajs ( 35943 ) <ajs@ajs . c om> on Thursday February 09, 2006 @01:21PM (#14679637) Homepage Journal
        And of course, optimizing in that way is probably analogous to the halting problem, but that doesn't mean that a good general-purpose back-end for GCC could not be written. History teaches us one thing about specialized hardware that we should never forget: the average user of your hardware is going to need to have VERY LITTLE of their code hand-tuned for it. For example, let's say that this hardware tends to be very good at encryption. Your average user would likely be running a Web server or some other sort of networking technology, and almost NONE of that code cares about the 10-100 hand-tuned routines in OpenSSL that you wrote for this platform.

        Get a good compiler and general-purpose OS up and running fast (which, by the way, I'm sure IBM is doing), and you'll see many more people writing special-purpose code where they need it.
    • by vlad_petric ( 94134 ) on Thursday February 09, 2006 @01:46PM (#14679909) Homepage
      It's *very* difficult to get a compiler to exploit this kind of parallelism. Unless you're doing scientific Fortran loopy code, where it's much easier to do things like automatic vectorization/parallelization, it's basically almost impossible for the compiler (out of curiosity, try to use the automatic OpenMP parallelization feature within Intel C Compiler on standard C/C++ code; the results will likely underwhelm you). Unfortunately, even if you do have scientific code, the slave processing units only do single precision (IIRC).

      In my opinion, this thing will run games well, but that's about it. So far I've seen 2 presentations by IBM about the Cell processor (at (micro-)architecture conferences). Both times, the question on everybody's mind was "How do you program these things?" The answer was pretty much a hand-wavy "oh hmmm, well, blah blah blah manual"

      • The SPEs can do double precision, but at half the flops.
        • Actually, a tenth the FLOPS. Which makes Cell very unimpressive compared to a dual-core Opteron for real scientific computations.
          • Hmm. Why is that? I do scientific computing, but my knowledge of chip design is iffy.

            • Cell's peak theoretical performance is 25.6 gigaflops, derived by taking the product of the clock speed (3.2 GHz) and the number of operations per cycle (8). In reality, this figure is highly optimistic. Each SPE only has a single floating-point pipeline. The 8 operations/cycle figure is derived by counting a 4-element single-precision multiply-accumulate as 8 total operations. Moreover, when doing double-precision operations, it takes an additional 5x speed hit, since they must be performed in multiple clock
              • That's a very good summary of the Cell's SPEs.

                For scientific computing, I think the Cell's advantages will heavily depend on how many vectorizable loops are in the existing code. For example, in Molecular Dynamics (MD) code, one frequently calculates the forces between each pair of atoms (basically solving a = F/m over and over). Systems with N atoms have N*(N-1)/2 pairs so when N is large that loop requires significant calculations (this excludes cut off distances and other tricks). MD code is used to (try
              • Actually, I made a math error. Cell's DP theoretical performance is the same as a dual-core Opteron's (10 gigaflops), since the Opteron has 2 FPU pipelines and 2 cores.
      • In my opinion, this thing will run games well, but that's about it.

        Two words, one algorithm: MapReduce [google.com].

        Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with par

    • by Anonymous Coward
      There is a free GCC compiler for Cell. And Linux. And you can get a free simulator to run it on. All at http://www.ibm.com/developerworks/power/cell [ibm.com]

    • by Tune ( 17738 ) on Thursday February 09, 2006 @02:45PM (#14680520)
      First, as others have already commented, a gcc backend is already available and Linux runs on Cell.

      Second, optimizing compilers tend to optimize only small parts of linear code. Simply put, this comes down to filtering binaries and replacing inefficient code sequences by more efficient ones. Depending on the quality of the compiler core, this typically gains a few percent, occasionally some 25%, but that's nowhere near what Cell could offer, namely (theoretically) 800%.
      The problem is refactoring the problem to run in
      - small chunks,
      - independently (parallel)
      - and on a specialized processor.
      A compiler can help only modestly with the last point. In any non-trivial case, this means reanalyzing the problem and reimplementing the solution from the start, making different tradeoffs. That is why people say Cell is difficult.

      IMHO, the benefits of code optimization will be close to irrelevant for almost any successful application on Cell over the coming years. And while Moore's law has provided us with bigger and faster hardware, we programmers are still mostly empty-handed when it comes to program translation for parallel architectures.

      We need a paradigm shift, not an optimizing compiler.
    • Linux:
      Yellow Dog Linux runs on Cell. (Link [linuxdevices.com]; this is the same military product that is linked to in a Register article further up in the thread.) It's being marketed for semi-embedded uses, like in medical imaging systems, sonar and radar, etc., apparently.

      Free Optimizing Compiler:
      I have no idea whether there are any compiler optimizations for it in GCC, I suspect not, though. However there is a version of the IBM XL C compiler for it, available here [ibm.com] (no idea if registration is required, I didn't attempt to
  • Linux on Cell (Score:5, Insightful)

    by morgan_greywolf ( 835522 ) on Thursday February 09, 2006 @12:58PM (#14679387) Homepage Journal
    Considering they've already got Linux on Cell [slashdot.org] and a proposed model for making userland apps to take advantage of the SPUs [linuxtag.org], and have had these since last summer, I wouldn't be surprised if some open source code is already in the process of being ported.

    Anyone know of any specific server apps?
  • by digitaldc ( 879047 ) * on Thursday February 09, 2006 @01:02PM (#14679425)
    Juhi Jotwani, IBM's director of Blade Center and xSeries solutions, holds the company's new Cell processor during a presentation yesterday in New York.

    She said, "Come on, juh know jouwant it!"
  • Sun has 'em beat (Score:5, Interesting)

    by AKAImBatman ( 238306 ) <<moc.liamg> <ta> <namtabmiaka>> on Thursday February 09, 2006 @01:02PM (#14679428) Homepage Journal
    As I understand it, the various pipelines of the Cell chip tend to be more specialized than the Coolthreads technology Sun is using on their new T1 processor. However, even with 32 full-blown pipelines, Sun is also concerned about whether their chips will be put to good use or not.

    I'm not quite sure what IBM is planning to do, but Sun has started a contest [java.net] to see who can build the coolest program that takes advantage of their new Coolthreads technology. The prize is a cool $50,000, so Sun seems to be serious about this. The results of the contest may very well prove whether the new parallel technologies have a future or not.
    • by Anonymous Coward
      The prize is a cool $50,000, so Sun seems to be serious about this.

      If Sun were really serious, they'd put a $500,000 team on it to develop something themselves. Paying for 1/3 - 1/2 a man-year of development is not that serious.
      • Especially when IBM's already laying the groundwork for Cell to be used in supercomputers (for seismic analysis, nuclear warhead simulations, etc.) and for rendering 3D MRIs (reportedly, current image rendering for this is done on Intel Pentium 4s and takes about 4 minutes; when they did the tech demo of it on a Cell platform, it took about 20 seconds).
        • That's great and all, but I think the key here is price and availability. Intel took over the server market with their low-end chips. Companies said they wanted cheap servers, and lots of them. IBM and Sun have a vested interest in the return of big iron, but I don't know if companies want that. I'm curious to see what happens with the new Sun and IBM moves.
    • Re:Sun has 'em beat (Score:4, Informative)

      by ArbitraryConstant ( 763964 ) on Thursday February 09, 2006 @01:33PM (#14679766) Homepage
      "As I understand it, the various pipelines of the Cell chip tend to be more specialized than the Coolthreads technology Sun is using on their new T1 processor."

      Yes. A Cell's SPUs are not PowerPC processors, so you can't run the same code on the PowerPC front end as you do on the SPUs. Not only that, but Cell and Niagara are designed for totally different things. Cell is designed for floating-point intensive apps with pretty poor general purpose capabilities, while a Niagara has 1 floating point unit shared between all 8 cores and 32 threads, but they're all good at the branchy sort of thing servers usually run.

      I think these Cell servers will be more useful for things like render farms. They'll be essentially useless as generic servers for web or database duty.
    • Then all that is needed is a honking big web connection, and something that can be legally downloaded for a while. Seed a couple of thousand torrents, and let the world at it.
    • Sun's new processor is designed for many-connection business server applications. Web stuff.

      The Cell is designed for image processing and other high-volume number crunching.

      The design decisions both companies made were heavily influenced by their target markets for these specific processors, and those target markets are very different.

      These are apples and oranges.
  • by d3ac0n ( 715594 ) on Thursday February 09, 2006 @01:02PM (#14679433)
    Blades in Cells are usually a Bad Thing. Apparently Cells in Blades are a good thing! Go figure...
  • by Orrin Bloquy ( 898571 ) on Thursday February 09, 2006 @01:07PM (#14679485) Journal
    It's a hell of a paradigm shift for programmers to go from writing code that targets one CPU to code that deliberately splinters tasks across a bank of specialized processors.

    It's fun to bash the Cell as a general purpose CPU when no one has actually suggested it's designed for that.

    All of the above being true, it remains to be seen what gains IBM's POWER/Cell system actually offers above present architectures -- RISC was the next big thing, too, until Intel internalized part of it into the x86 architecture.

    Flyover landscape graphics demos are a shopworn rabbit pulled out of a threadbare hat: convert fractals into craggy vertical displacements with extremely primitive lighting/mapping. Show me an architecture that can *realtime* render Incredibles-caliber cloth/hair simulations and I'll get a hard-on while ATI and nVidia executives slit their wrists.
    • It's a hell of a paradigm shift for programmers to go from writing code that targets one CPU to code that deliberately splinters tasks across a bank of specialized processors.

      You mean specialized processors like FPUs, 3d audio accelerators, 3d video accelerators (and the sub-processing units contained in video accelerators), encryption and TCP offload engines, WinModems, MPEG encoder/decoders, and platform management controllers?

      Yeah, they'll have a real hard time adjusting... In 1982.
    • It's a hell of a paradigm shift for programmers to go from writing code that targets one CPU to code that deliberately splinters tasks across a bank of specialized processors.

      Not really, developers have been using co-processors for years -- numeric (a la Weitek or 8087), DSP, odd-wad AI and "dataflow" boxes. And I imagine the early attempts will follow a similar pattern: present the functionality of the co-pro wrapped neatly in a library, then just call the library routines. Presto, your code is automatic
  • Would it be possible to write some kind of virtualization that would present an easy-to-develop-on layer? Besides, if you already have Linux that runs on this platform and compilers written, how would it be any harder than developing for any other platform? A rose by any other name...
    • Would it be possible to write some kind of virtualization that would present an easy-to-develop-on layer?

      You know, that's a really good idea [wikipedia.org]!
    • You can act (pretty much) like it's a Power processor and your apps will run 'fine' on it. But if you want the REAL power (no pun intended) of the Cell, you hand-optimize (and design) your program for the Cell.
      • you hand optimize (and design) your program for the cell.

        Every parallel architecture I've ever programmed for had nice APIs for offloading and directing tasks to the various available processing units. There shouldn't be much 'hand-optimization' involved in the sense you're implying.

        Developers who write code that takes advantage of GPUs in modern gaming PCs are already familiar with this style of programming, and the ones that understand the architecture instead of memorizing the APIs or program out of a cook
        • by 2megs ( 8751 ) on Thursday February 09, 2006 @02:15PM (#14680214)
          Developers who write code that takes advantage of GPUs in modern gaming PCs are already familiar with this style of programming,

          But you can probably count on your fingers the number of developers who are using GPUs for anything other than rendering pixels, or at most some simple vectorizable simulations like water or cloth.

          Taking an arbitrary program and turning it into something that would run well on a GPU (or a Cell SPU) usually requires a significant redesign of the algorithms and data structures as compared to what you would naively and straightforwardly do in C...or it won't get anywhere near peak performance and may even run slower. It's certainly possible to do, but you won't be re-using any of that originally written code, and it's a different way of thinking from what 95% of programmers are used to. I'm speaking from experience as someone who earns his living by being in the remaining 5%. :)

          As the original poster said: you hand optimize (and design) your program for the cell.
          • But you can probably count on your fingers the number of developers who are using GPUs for anything other than rendering pixels,

            And for good reason. GPUs are designed to render pixels, not do other stuff.

            Taking an arbitrary program and turning it into something that would run well on a GPU (or a Cell SPU)

            I don't understand why you think I'm saying that those two things are equivalent. Taking an arbitrary program and turning it into something that would run well on a GPU would be unusual. You're talking abou
    • To the programmer, communicating with the SPU is abstracted to file i/o operations. Go check out IBM developerworks pages for lots of info.
  • PS3 release date? (Score:5, Insightful)

    by nutshell42 ( 557890 ) on Thursday February 09, 2006 @01:09PM (#14679514) Journal
    This probably means that the PS3 will either actually make its "spring" release, or that it is hampered by problems with the Blu-Ray drives/discs rather than a Cell shortage, because otherwise I can't imagine Sony would allow IBM to use even one Cell for something that's not a PS3 during the first 3 months.
    • I read a post somewhere (Kotaku, Gizmodo, Joystiq, or somewhere else) that quoted a Sony/IBM official as saying that yields on the Cell chips were doing very well now, and they got the yields up to the level they are now (whatever that is) MUCH faster than with previous new chips. If that's true, then there may not be any shortage problem with the PS3, at least not from the Cell.

      There is always the chance that the RAM, GPU, Blu-Ray drive, or something else would end up in short supply.

  • by alta ( 1263 )
    So this means I'll be able to take my PS3 and slide it into my IBM Blade chassis when I need more CPU. When I'm done, I pull it out and play.

  • by Thaidog ( 235587 ) <slashdot753@nym.hush. c o m> on Thursday February 09, 2006 @01:19PM (#14679618)
    We've had blades with Cell CPUs on them for quite a while. They're a lot different from any other architecture... resembling the pSeries layout more so than others. One thing I don't like about the prototypes is that the Cell CPUs, along with the BGA memory they use, are fused directly to the logic board. There were a few pictures released to the public about a year ago on the Register, but I cannot find them now. Other than that, they are seriously fast and very clusterable.
  • by killtherat ( 177924 ) on Thursday February 09, 2006 @01:19PM (#14679621)
    IBM has opened the spec for their blade chassis design. Does anybody know if somebody is trying to make a 'desktop' blade chassis? Rather than buying a huge box that holds 14 blades, something that might only hold two.
    This doesn't mean making a desktop out of a blade, because as I understand it, so far the JS20s (IBM's PPC 970 blades) don't even have video cards. You have to set them up over the serial port, and run them over the network.
    But does anybody have a development-sized unit you don't need a server rack and new power circuits for?
    • Portable development units come mounted on their side in a 19" enclosure with a handle on top, semi-attractive looking trim pieces, and appropriate power supplies and cooling on the inside. They cost about three times what you'd pay for a standard rackmount production model.
    • Interesting question. I've been lurking around blade platforms lately and I'm not happy. Why the proprietary chassis? Why the internal storage?

      My ideal "blade" system mounts in standard 19" racks. 2 or 3 complete systems in 1U, 48VDC powered from another 1U transformer. Let me stack 1-8 of these in my rack and don't make me pay for the damn IBM/HP/etc chassis.

      My system also has no storage inside the blades. Just give me 4 network interfaces per "blade", with at least 2 optionally capable of providing
  • Wow. Program for a while, then take a break playing the latest PS3 game -- all without leaving the confines of your own terminal session into the system.
  • But wait (Score:4, Funny)

    by Pakaran2 ( 138209 ) <windrunnerNO@SPAMgmail.com> on Thursday February 09, 2006 @01:46PM (#14679916)
    Won't the Cell reception be poor inside the metal cabinets?

    *looks bright*
  • Why SPEs? (Score:4, Interesting)

    by Guspaz ( 556486 ) on Thursday February 09, 2006 @02:30PM (#14680343)
    Why go with SPEs anyhow? The whole problem with coding for the Cell involves the differences between the PPE and the SPE. The SPE doesn't have branch predictors, making it virtually useless for any sort of flow control.

    Why didn't IBM just pack in a lesser number of PPEs? The PPE already seems to be a very lightweight general purpose processing core, unless I'm missing something. It is about the same size as an SPE. So why not just put 9 PPEs on a Cell chip instead of 1 PPE and 8 SPEs?

    If you had 9 PPEs on the chip, any multithreaded code (servers for example) would see massive benefits without having to rewrite it to try to find aspects of the program that could run on what is effectively a DSP. While everybody else was fooling around with 2-core processors, they'd have a 9-core processor on the market. Sure, slower per-core, but 9 of them, with that number going up in the future.

    Or am I missing something here?
    • Simple: floating point.
      These are not for general-purpose computing; that is what the Power5 and the Power6 will be for. Think DSP, render farms, or simulation, not web or database servers.
      You could create a system with a Power5 blade to do database and general-purpose type stuff and have that feed multiple Cell blades to do rendering and/or DSP.
      A render farm jumps to mind, but I could see it being used for military functions like SIGINT, radar, and sonar, or any number of scientific simulations.
      Not every com
    • by Soong ( 7225 ) on Thursday February 09, 2006 @03:30PM (#14681054) Homepage Journal
      PPEs are bigger. Also, a dedicated slave processor doesn't have to worry about interrupts and context switches and OS crap, it can spend all its cycles on number crunching. Cell SPEs are all about moving large amounts of data and doing a whole lot of compute on that data. They're simpler and more efficient at what they're designed for.
      • From the images of the core that I've seen, PPEs are virtually the same size as the SPEs.

        I refer you to this image:

        http://images.anandtech.com/reviews/cpu/cell/ppehighlight.jpg [anandtech.com]

        Perhaps you mean the PPE and its supporting hardware, such as the cache? That'd ideally be shared among multiple PPEs.

        If you look closely at the PPEs, a huge amount of their real estate seems to go to what looks like their 256KB of cache. Cache takes up a lot of space. Since the PPEs wouldn't each have dedicated cache, they're stil
  • This chip won't be popular until you can package it into a small box, sell it for less than the cost to build it, with high end audio/video built in, and someone develops games for it.

    Nobody would be crazy enough to do that!
  • Two Tutorials (Score:3, Interesting)

    by GrEp ( 89884 ) <crb002@NOSPAM.gmail.com> on Thursday February 09, 2006 @03:41PM (#14681155) Homepage Journal
    IBM needs to release two SIMPLE tutorials if they want programmers to bother porting code specifically to the Cell:

    1. A Cell program that solves linear equations Ax=b efficiently using SPEs. This would help those with data-intensive problems.

    2. A Cell program that speeds up depth-first search (à la SAT, graph coloring, max-clique) using the SPEs. This would help those programming CPU-intensive problems.

    • Tutorials 3 and 4 (Score:2, Interesting)

      by dch24 ( 904899 )
      As a long-time reader over at the IBM forums [ibm.com], I've seen a lot of similar questions and answers going on over there.

      There were a couple that would be really helpful:
      1. An implementation of zlib for the SPE architecture, with a speed comparison to the PPE. (Hopefully, the SPE is very fast...)
      2. Examples of direct SPE-to-SPE streaming.

  • by mnmn ( 145599 ) on Thursday February 09, 2006 @04:45PM (#14681894) Homepage
    They could come up with ATX or mini-ATX boards at really cheap prices, able to take your average DDR DIMMs, power supplies, IDE, etc. Give it maybe 3 PCI slots... or 1 if it's mini-ATX.

    Sold for under $100, and they're making money off it while spreading the love that will increase the developer market for the Cell architecture.

    It goes like this. Make a new architecture. Release a good compiler for free, with awesome documentation and sample programs and libraries. Allow people to buy evaluation boards for low prices. Once you get people hooked enough, sell the chips themselves at high prices. It's the Microchip (tm) model. Their chips don't really do much for the high costs (compared to Atmel, TI, etc.), but since everyone knows how to work with them, they sell, sell, sell. Rabbit Semiconductor, however, is trying hard to get into the market, and their dev tools are cheap. It'll take time.

    IBM can't release a couple of PDFs and one tough software suite and expect the world to jump on it. There's a reason there's so much momentum behind the Power architecture, and Cell is different.
  • Blades, cells, it's getting to be like prison around here.
