IBM Releases Cell SDK

derek_farn writes "IBM has released an SDK running under Fedora Core 4 for the Cell Broadband Engine (CBE) Processor. The software includes many GNU tools, but the underlying compiler does not appear to be GNU-based. For those keen to start running programs before they get their hands on actual hardware, a full system simulator is available. The minimum system requirement specification has obviously not been written by the marketing department: 'Processor - x86 or x86-64; anything under 2GHz or so will be slow to the point of being unusable.'"
  • Well . . . (Score:2, Funny)

    by Yocto Yotta ( 840665 ) *
    But does it run Linux?

    Oh. Well, okay then.
  • Not knowing too much about the Cell processor, I read the Wikipedia article. I came across this: "In other ways the Cell resembles a modern desktop computer on a single chip."

    Why?
    • Because they are offering audio, video, and networking on the same chip as the general-purpose processing.
    • Um. That's kind of a weird statement. I think they mean to say that it encompasses much of the multiprocessing capabilities of a modern PC in a single chip. i.e. It's your CPU and GPU rolled into one.

      Cell processors aren't really anything all that new per se. The multi-core design makes them superficially similar to GPUs (which are also vector processors), with the difference that GPUs use multiple pipelines for parallel processing whereas each cell is a self-contained pipeline capable of true multi-threaded execution. In theory, the interplay between these chips could accelerate a lot of the work currently done through a combination of software and hardware. E.g., all the work that graphics drivers do to process OpenGL commands into vector instructions could be done on one or two cells, thus allowing those cells to feed the other cells with data.

      I guess you could say that the cell processor is the start of a general-purpose vector processing design. I'm not really sure if it will take off, but unbroken throughput on these things is just incredible.
    • by l33t-gu3lph1t3 ( 567059 ) <[moc.liamtoh] [ta] [61legna_hcra]> on Thursday November 10, 2005 @12:45PM (#13998589) Homepage
      Easy answer - the wiki article on "Cell" isn't that good. Cell isn't a System-On-A-Chip. It's just a stripped-down, in-order PowerPC core coupled to 8 single-purpose in-order SIMD units, using an unconventional cache/local memory architecture. It can run perfectly optimized code very, very fast, at extremely low power consumption to boot, but optimization will be/is a bitch. For instance, you have to unroll your "for" loops to start, since those SIMD co-processors can't do loops.

      I'm sure IBM and Sony have much better documentation on the CPU than I do, but that's it in a nutshell. Everything else you hear about it is just marketing. Oh yeah, almost forgot: Microsoft's "Xenon" processor for the Xbox 360 is pretty much just 3 of those stripped-down, in-order PPC cores in one CPU die.
      • Cell isn't a System-On-A-Chip. It's just a stripped-down, in-order PowerPC core coupled to 8 single-purpose in-order SIMD units, using an unconventional cache/local memory architecture

        You know, I'm looking back at all these replies to the poor guy, and I can't help but think that he's sitting in front of his computer wondering, "Can't anyone explain it in ENGLISH?!?" :-P

        For instance, you have to unroll your "for" loops to start, since those SIMD co-processors can't do loops.

        Actually, we need a new program
        • Looks like Ruby to me, although it's a little too verbose ;)

          (0..9).each { |i| puts i }
        • That looks more like syntactic sugar to me. How is that different? More importantly, how would that translate differently into assembler code? You pretty much will wind up with the same thing, that is: "do your thang, increment the accumulator, if the accumulator equals the count, jump to do your thang."

          gcc and other compilers have options such as -funroll-loops, which will unroll loops (no matter how they were specified) for you if the count can be determined at compile time. So you wind up with "Do yo
          • GCC can unroll all loops if you want, including those with variable iteration counts. In those cases it uses a variant of Duff's device (well, on x86 anyway).
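
            For the curious, here's the classic Duff's device sketched in C++ (illustrative only; GCC's generated variant won't look exactly like this, and the function name is made up):

            #include <cstddef>

            // Unrolled copy that handles a count not divisible by 8 by
            // jumping into the middle of the unrolled body. Classic
            // caveat: count must be greater than zero.
            void duff_copy(int *to, const int *from, std::size_t count) {
                std::size_t n = (count + 7) / 8;
                switch (count % 8) {
                case 0: do { *to++ = *from++;
                case 7:      *to++ = *from++;
                case 6:      *to++ = *from++;
                case 5:      *to++ = *from++;
                case 4:      *to++ = *from++;
                case 3:      *to++ = *from++;
                case 2:      *to++ = *from++;
                case 1:      *to++ = *from++;
                        } while (--n > 0);
                }
            }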

            As for the other posters, the real reason you want to unroll loops is basically to avoid the cost of managing the loop, e.g.

            a simple loop like

            for (a = i = 0; i < b; i++) a += data[i];

            In x86 this would amount to

            mov ecx,b
            loop:
            add eax,[ebx]
            add ebx,4
            dec ecx
            jnz loop

            So you have a 50% efficiency at best. Now if you unroll it to

            mov ecx,b
            shr ecx,1
            loop:
            add eax,[e
            • mov ecx,b
              shr ecx,2
              loop:
              add eax,[ebx]
              add eax,[ebx+4]
              add eax,[ebx+8]
              add eax,[ebx+12]
              add ebx,16
              dec ecx
              jnz loop


              With SIMD instructions, you can execute all four of those adds in one instruction. I wish I knew SSE a bit better, then I could rewrite the above. Sadly, I haven't gotten around to learning the precise syntax. :-(

              However, there's a fairly good (if not a bit dated) explanation of SIMD here [mackido.com].
              • Yes, and unrolling would speed that up in the same fashion.

                IIRC the instruction is "paddd", and you'd do four parallel adds, then shuffle and add twice to get the single sum.

                Tom
              • Try;

                mov ecx, b
                shr ecx, 2
                pxor xmm1, xmm1;

                loop_: // you can unroll this loop...
                movdqa xmm0, [ebx] // aligned-- use movdqu for unaligned
                paddd xmm1, xmm0
                add ebx, 16
                dec ecx
                jnz loop_

                // now just need to do a horizontal add...
                // but there
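
                For completeness, the same sum with SSE2 intrinsics, horizontal add included (a sketch; assumes 16-byte-aligned data and a count that's a multiple of 4, and the function name is made up):

                #include <emmintrin.h> // SSE2 intrinsics
                #include <cstddef>

                int simd_sum(const int *data, std::size_t count) {
                    __m128i acc = _mm_setzero_si128(); // pxor xmm1, xmm1
                    for (std::size_t i = 0; i < count; i += 4)
                        acc = _mm_add_epi32(acc,       // paddd
                            _mm_load_si128((const __m128i *)(data + i))); // movdqa
                    // Horizontal add: fold the four lanes down to one scalar.
                    acc = _mm_add_epi32(acc, _mm_shuffle_epi32(acc, _MM_SHUFFLE(1, 0, 3, 2)));
                    acc = _mm_add_epi32(acc, _mm_shuffle_epi32(acc, _MM_SHUFFLE(2, 3, 0, 1)));
                    return _mm_cvtsi128_si32(acc);     // extract lane 0
                }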
        • mov ecx, COUNT
          LOOP_START:
          ;IIRC this is the underlying assembly
          ;construct for looping
          ;
          ;excluding conditional jumps
          LOOP LOOP_START
        • In most cases, I think template metaprogramming (in C++) is pedantic garbage. In this case, however, you could probably use it to great effect (i.e., the compiler will unroll your loops for you). The syntax is still pretty horrible though.
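
          Something like this, roughly (a sketch; the names are made up):

          // Compile-time loop unrolling via template recursion: the
          // compiler expands the recursion into straight-line code,
          // leaving no loop branch at all.
          template <int N>
          struct Sum {
              static int run(const int *data) {
                  return Sum<N - 1>::run(data) + data[N - 1];
              }
          };

          template <>
          struct Sum<0> {
              static int run(const int *) { return 0; } // base case
          };

          // Usage: eight adds, no loop, in the generated code.
          int sum8(const int *data) { return Sum<8>::run(data); }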

      • by plalonde2 ( 527372 ) on Thursday November 10, 2005 @12:55PM (#13998686)
        You are wrong. These SIMD processors do loops just fine. There's a hefty hit for a mis-predicted branch, but the branch hint instruction works wonders for loops.

        The reason you want to unroll loops is to hide various other delays. If it takes 7 cycles to load from the local store to a register, you want to throw a few more operations in there to fill the stall slots. Unrolling can provide those operations, as well as reduce the relative importance of branch overheads.
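
        A generic illustration of that trick (plain C++, not SPE code; the point is the independent chains, not the syntax):

        // Two independent accumulators let the second load/add pair
        // issue while the first load's result is still in flight,
        // filling stall slots a single-accumulator loop would waste.
        int sum_unrolled(const int *data, int count) { // count assumed even
            int a0 = 0, a1 = 0;
            for (int i = 0; i < count; i += 2) {
                a0 += data[i];     // chain 1
                a1 += data[i + 1]; // chain 2, independent of chain 1
            }
            return a0 + a1;
        }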

        • You are wrong. These SIMD processors do loops just fine. There's a hefty hit for a mis-predicted branch, but the branch hint instruction works wonders for loops.

          Um... I'm not sure that's what he's trying to say. SIMD by definition is Single Instruction, Multiple Data. i.e. You give it a couple of instructions and watch it perform them on every item in the stream of data. By definition, a loop is an iteration over each instruction, multiple times. a.k.a. Multiple Instruction Multiple Data (MIMD).

          What's neede
      • You were off to a really good start! But a couple of things:

        Optimization won't be a problem. At least it won't be the main problem. The instruction set is rich enough to provide scalar and vector integer/fp/dp operations along with both conditional branching and conditional assignment. And it can be programmed in C using intrinsics for SIMD instead of assembly. That brings up the really important part-- 128 128-bit registers. Current x86 compilers suck balls at intrinsics mostly because SSE is such

    • The Cell processor is essentially a multi-core chip. It has, IIRC, one "master" CPU, and then multiple slave CPUs on the same die.

      A modern desktop computer has one master CPU, then several smaller CPUs each running their own software. Graphics, Sound, CD/DVD, HD, not to mention all the CPUs in all the peripherals.

      But the analogy ends there. The Cell has certain limitations and wouldn't be able to operate as a full computer system with no other processors very efficiently. I believe the PS3 has a s
    • I suspect the author of the Wikipedia article knows a bit more than he's being given credit for elsethread.

      Each cell processor includes not only the multiple processors mentioned elsethread, but addressable memory, DMA controller, and a controller for what is essentially a proprietary network. The last is somewhat open to argument -- for example, current AMD CPUs include HyperTransport controllers, which are somewhat similar.

      In any case, IBM does (e.g. here [ibm.com]) talk about the Cell as a System on a Chip, t

    • Not knowing too much about the cell processor I read the wikipedia article. I came across this: "In other ways the Cell resembles a modern desktop computer on a single chip."

      Why?


      Actually each of the SPUs resembles a system-on-a-chip. They each have local memory, CPU, and I/O. The Cell itself actually resembles a network-on-a-chip (or, in slashdotology, a Beowulf-Cluster-on-a-Chip) if you consider main memory to be I/O storage.
  • by RManning ( 544016 ) on Thursday November 10, 2005 @12:40PM (#13998532) Homepage

    My favorite quote from TFA...

    ...in addition, the ILAR license states that "You are not authorized to use the Program for productive purposes" -- so make sure that your time spent with these downloads is as unproductive as possible.

  • by frankie ( 91710 ) on Thursday November 10, 2005 @12:41PM (#13998540) Journal
    ...the Cell processor is an upcoming PowerPC variant that will be used in the PlayStation 3. It's great at DSP but terrible at branch prediction, and would not make a very good Mac. If you want to know full tech specs, Hannibal is da man [arstechnica.com].
    • I'm just wondering what information you have on the Cell being "terrible at branch prediction"? I don't know about using it in a Mac, but IBM seems eager to run Linux on it. They've even demonstrated a prototype Cell-based blade server system [slashdot.org] running Linux, back in May.
      • Even though the GP linked to an article that greets you with Inside the Xbox 360, Part II: the Xenon CPU, there are links to some informative articles about the Cell architecture further down.

        Short story: the cool thing about the Cell is the SPEs, which are the best thing since sliced bread if you have lots of matrix-vector operations to perform but more or less useless otherwise.

        IBM is eager to run Linux on it because the Cell could make one hell of a supercomputing grid. (Although it loses lots of flops i

    • It's great at DSP but terrible at branch prediction

      With 8 or more semi-independent "Synergistic Processing Unit" pipelines, it doesn't really need to have a lot of complex branch prediction logic. It could adopt a bit of a quantum methodology and assign a different SPU to proceed for each possible outcome of a compare/branch instruction, and then once the correct outcome has been established, discard the "dead-end" pipelines.

      Then again, I learned microprocessor design principles back when the PPC 601 was s
    • That's not entirely true. The PPE can do branch prediction, the SPEs can't. Whether the PPE's branch prediction is any good or not, I don't know... :)
  • by mustafap ( 452510 ) on Thursday November 10, 2005 @12:47PM (#13998605) Homepage
    That's great news, but as an embedded systems designer and eternal tinkerer, where will I be able to buy a handful of these processors to experiment with? Without having to dismantle loads of games machines ;o)
  • by kuwan ( 443684 ) on Thursday November 10, 2005 @12:49PM (#13998619) Homepage
    As the Cell is basically a PPC processor, I find it strange that the SDK is for x86 processors. Fedora Core 4 (PowerPC), also known as ppc-fc4-rpms-1.0.0-1.i386.rpm, is listed as one of the files you need to download. Maybe it's just because of the large installed base of x86 machines.

    It'd be nice if IBM released a PPC SDK for Fedora; it would have the potential to run much faster than an x86 SDK and simulator.
    • Not sure how much faster it would be, really (though I'm writing this from a PowerBook and I really wish they would release some PPC stuff). A PPC chip acts as the controller, but the actual processing is done on chips with architectures vastly different from both x86 and PPC. For instance, they aren't superscalar, so they do no branch prediction like both x86 and PPC do... so really the emulation speed is pretty independent of its host architecture. I suppose they could use the Altivec found on Apple's CPU
    • I don't know how much of a performance boost you would get. Despite the fact that it has a PowerPC processor on it, it is a very different PowerPC. It does not have out-of-order execution, and it has all those pretty vector processors dangling off of it. I think emulation would be 2x faster, at most.
    • I wonder if it'll take advantage of multi-core chips? Might make sense to do so, especially since that's also (sort of) similar to the hardware being simulated.

    • > Maybe it's just because of the large installed base of x86 machines.

      Got it in one try. Anyone interested in actually using this thing has a spare PC to load FC4 on; almost no one has a spare top-of-the-line PowerMac in the closet. Heck, most don't have a top-of-the-line PowerMac, period.
      • Well, I have a dual 2.5 G5 and as easy as it is to dual boot with OS X I'd devote a firewire disk to it for a while.

        I keep having this fantasy that a PCI-E development board will come out and I'll be able to do something interesting with it (what, I have no idea, but I'm open to suggestions). I'd really like an OS X development environment for it to tinker with.

    • Not a PPC Processor (Score:2, Informative)

      by MJOverkill ( 648024 )

      Once again, the Cell is not a PPC processor. It is not PPC-based. The Cell going into the PlayStation 3 has a POWER-based PPE (Power Processing Element) that is used as a controller, not a main system processor. Releasing an SDK for Macs would not give any advantage over an x86-based SDK because you are still emulating another platform.

      Wiki [wikipedia.org]

    • The simulator is actually maintained on a number of different platforms within IBM. Since the rest of the SDK team (xlc, cross-dev gcc, sample & libs, etc) chose Fedora Core 4 on x86 as a means of enabling the most number of people, we didn't want to confuse too many people by supplying the simulator on a variety of platforms for which the rest of the SDK is not supported. This was somewhat of a big-bang release of quite a bit of software to enable exploration of Cell. Now that we have this released a
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Thursday November 10, 2005 @12:50PM (#13998628)
    Comment removed based on user account deletion
    • Re:GNU toolchain (Score:4, Informative)

      by Have Blue ( 616 ) on Thursday November 10, 2005 @01:24PM (#13999002) Homepage
      IBM may have run into the same problems with the Cell that they did with the PowerPC 970: the chip breaks some fundamental assumptions GCC makes, and to add the best optimization possible it would be necessary to modify the compiler more drastically than the GCC leads would allow (to keep GCC completely platform-agnostic).
      • IBM may have run into the same problems with the Cell that they did with the PowerPC 970: the chip breaks some fundamental assumptions GCC makes, and to add the best optimization possible it would be necessary to modify the compiler more drastically than the GCC leads would allow (to keep GCC completely platform-agnostic).

        Who the heck says they have to keep the GCC they distribute with the software development kit platform-agnostic??? What a stupid concept. It has absolutely NOTHING to do with the GCC leads..

    • Re:GNU toolchain (Score:4, Informative)

      by Wesley Felter ( 138342 ) <wesley@felter.org> on Thursday November 10, 2005 @01:32PM (#13999098) Homepage
      The SDK includes both GCC and XLC. GCC's autovectorization isn't the greatest, but Apple and IBM have been working on it. I think if you want fast SPE code you'll end up using intrinsics anyway.
  • Echoes of Redhat (Score:3, Insightful)

    by delire ( 809063 ) on Thursday November 10, 2005 @12:51PM (#13998649)

    Why Fedora is so often considered the default target distribution I don't know. Even the project page [redhat.com] states it's an unsupported, experimental OS, and one now comparatively marginal when tallied [distrowatch.com].

    Must be a case of 'brand leakage' from a distant past, one that held Red Hat as the most popular desktop Linux distribution.

    Shame, I guess IBM is missing out on where the real action is.
    • Give me a nice clean distro like Gentoo any day. I can't stand that a Fedora install requires 5 CDs and installs some 600 packages that I will never use. Why do I need so many text editors, etc.? I get lost and nervous in the Applications menu. Sure, I tried 30 text editors before I found the one I wanted, but that's all I install on my box during a reinstall or upgrade.

      BTW, this parent might be offtopic, but he is no troll. Shame on you, mods!
    • by LnxAddct ( 679316 ) <sgk25@drexel.edu> on Thursday November 10, 2005 @02:10PM (#13999548)
      Fedora overtook SuSE within a year and a half in terms of users. It is now a close 3rd to Debian, which is a far second from Red Hat (Red Hat and Fedora together have around 3 times the market share of Debian; check Netcraft to confirm those numbers). The numbers on DistroWatch are not downloads or users; that number is how many people clicked on the link to read about Ubuntu. Mark Shuttleworth is obscenely good at getting press about Ubuntu, so the Ubuntu link gets a lot of click-throughs, and now that it is at the top, it is kind of self-fulfilling, as interested people want to read about the top distro so they click on it more.

      When it comes down to it, Fedora is the most advanced Linux distribution out there. It comes standard with SELinux and virtualization. It uses LVM by default and integrates exec-shield and other code-fortifying techniques into all major services. It has the latest and greatest of everything. Things just work in Fedora because a large portion of that code was written by Red Hat. Red Hat maintains GCC and glibc, they commit more kernel code than anyone else, and they play a large role in everything from Apache and GNOME to creating GCJ to get Java to run natively under Linux. Whether you like it or not, Fedora is the distro most professionals go with, despite what the Slashdot popular opinion is and despite the large amounts of noise that a few Ubuntu users create.

      Out of the big two, Novell and Red Hat, Novell has never been worse off and Red Hat has never been healthier. Red Hat doesn't officially provide support for Fedora, but it is built and paid for by Red Hat and their engineers (in addition to the community contributions). By targeting Fedora, IBM knows that they are targeting a stable platform with the largest array of hardware support. IBM is in bed with both Novell and Red Hat; they didn't choose Fedora because they were paid to or something... they chose Fedora based on technical merits. Claiming that Fedora is unstable is no different than claiming GMail is in beta; both products are still the best in their respective industries. Why do people go spreading FUD about such a good product when they've never used it themselves? Whether you want to admit it or not, Fedora is the platform to target for most. It is compatible in large part with RHEL, so you're getting the most bang for your buck.

      IBM doesn't just shit around, or make decisions for dumb reasons. If Fedora is good enough for IBM, it is good enough for anyone. Apparently this is a common opinion, as more and more businesses switch to Fedora desktops. Here [computerworld.com.au] is one recent story of a major Australian company, Kennards, replacing 400 desktops with Fedora. Don't be so close-minded or you might be left behind.
      Regards,
      Steve
    • Fedora's #4 ranking [distrowatch.com] on DistroWatch can hardly be called "marginal". Never mind that one should also question the site's "page hit ranking" methodology before passing it off as representative, much less authoritative.
  • I dunno - telling people they have to upgrade their PC to run the SDK for a new PC architecture seems like a marketer's job.
  • This is very interesting. The Cell has a very non-standard architecture, but it can be used in a very powerful way. The key is software, and thus an emulation SDK is really important for a new architecture. From an HPC (High Performance Computing) perspective, these chips could be very powerful.

    The real question is whether the PS3 will have a Linux hard disk option like the PS2. If that is the case, it may be the cheapest way to get actual development hardware.

  • Cell Hardware... (Score:4, Informative)

    by GoatSucker ( 781403 ) on Thursday November 10, 2005 @01:35PM (#13999133)
    From the article:
    How does one get a hold of a real CBE-based system now? It is not easy: Cell reference and other systems are not expected to ship in volume until spring 2006 at the earliest. In the meantime, one can contact the right people within IBM [ibm.com] to inquire about early access.

    By the end of Q1 2006 (or thereabouts), we expect to see shipments of Mercury Computer Systems' Dual Cell-Based Blades [mc.com]; Toshiba's comprehensive Cell Reference Set development platform [toshiba.co.jp]; and of course the Sony PlayStation 3 [gamespot.com].

    • Sony has poisoned the "Cell" well, as far as I am concerned. They seem to be totally unremorseful regarding their music CD DRM (aka rootkit). At one point I considered the purchase of a PS3 in order to gain experience with the Cell processor. Today, I would not consider the purchase of ANYTHING with Sony's name on it, regardless of how "geeky" it might be.

      Purchasing IBM's (or perhaps Mercury Computer's) reference CBE-based platform is now my only choice. Sony's NRE for the PS3 might make their platform a "best buy" pri
  • Re: (Score:2, Interesting)

    Comment removed based on user account deletion
    • You ever think that Rosetta is more like Wine than it is like Virtual PC? I.e., it handles the upper API calls by translating them to native lower-level ones?
    • It's not the PPE cores that are slow to emulate. It's the 8 additional vector-only processors.

      This is a sim, not just an emulator. It's not just vaguely implementing the output; it is at least to some extent modeling the instruction pipelining, branch miss penalties, and so on.
  • by Animats ( 122034 ) on Thursday November 10, 2005 @02:25PM (#13999729) Homepage
    The "cell" processors have fast access to local, unshared memory, and slow access to global memory. That's the defining property of the architecture. You have to design your "cell" program around that limitation. Most memory usage must be in local memory. Local memory is fast but not large: 256KB of local store per SPE.

    The cell processors can do DMA to and from main memory while computing. As IBM puts it, "The most productive SPE memory-access model appears to be the one in which a list (such as a scatter-gather list) of DMA transfers is constructed in an SPE's local store so that the SPE's DMA controller can process the list asynchronously while the SPE operates on previously transferred data." So the cell processors basically have to be used as pipeline elements in a messaging system.
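
    A sketch of that double-buffering pattern, assuming the SDK's spu_mfcio.h DMA interface (from memory; the function is made up, so check the SDK docs before trusting any of it):

    #include <spu_mfcio.h> // Cell SDK MFC (DMA) interface -- assumed API

    #define CHUNK 4096
    static int buf[2][CHUNK / sizeof(int)] __attribute__((aligned(128)));

    // DMA the next chunk into one buffer while computing on the
    // other, so transfer and compute overlap.
    int process(unsigned long long ea, int nchunks) {
        int total = 0, cur = 0;
        mfc_get(buf[cur], ea, CHUNK, cur, 0, 0); // prefetch the first chunk
        for (int c = 0; c < nchunks; c++) {
            int nxt = cur ^ 1;
            if (c + 1 < nchunks) // kick off the next transfer early
                mfc_get(buf[nxt], ea + (unsigned long long)(c + 1) * CHUNK, CHUNK, nxt, 0, 0);
            mfc_write_tag_mask(1 << cur); // wait only on the current buffer's tag
            mfc_read_tag_status_all();
            for (unsigned i = 0; i < CHUNK / sizeof(int); i++)
                total += buf[cur][i]; // compute while the next DMA runs
            cur = nxt;
        }
        return total;
    }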

    That's a tough design constraint. It's fine for low-interaction problems like cryptanalysis. It's OK for signal processing. It may or may not be good for rendering; the cell processors don't have enough memory to store a whole frame, or even a big chunk of one.

    This is actually an old supercomputer design trick. In the supercomputer world it was not too successful; look up the nCube and the BBN Butterfly, both of which were a bunch of non-shared-memory machines tied to a control CPU. But the problem was that those machines were intended for heavy number-crunching on big problems, and those problems didn't break up well.

    The closest machine architecturally to the "cell" processor is the Sony PS2. The PS2 is basically a rather slow general purpose CPU and two fast vector units. Initial programmer reaction to the PS2 was quite negative, and early games weren't very good. It took about two years before people figured out how to program the beast effectively. It was worth it because there were enough PS2s in the world to justify the programming headaches.

    The small memory per cell processor is going to be a big hassle for rendering. GPUs today let the pixel processors get at the frame buffer, dealing with the latency problem by having lots of pixel processors. The PS2 has a GS unit which owns the frame buffer and does the per-pixel updates. It looks like the cell architecture must do all frame buffer operations in the main CPU, which will bottleneck the graphics pipeline. For the "cell" scheme to succeed in graphics, there's going to have to be some kind of pixel-level GPU bolted on somewhere.

    It's not really clear what the "cell" processors are for. They're fine for audio processing, but seem to be overkill for that alone. The memory limitations make them underpowered for rendering. And they're a pain to program for more general applications. Multicore shared-memory multiprocessors with good caching look like a better bet.

    Read the cell architecture manual. [ibm.com]

    • I think too much emphasis is being placed on "slow" access to system memory for the Cell processor, when it is "slow" only relative to access to the SPUs' local memory. Please remember that system memory for the Cell is about 8 times faster than the memory in today's high-end PCs, with lower latency. XDR is by far the best memory type available; unfortunately, nobody likes Rambus the company. So please, when you are speaking about access to system memory, keep in mind that the Cell processor has about the same
    • > It's not really clear...

      There was a Toshiba demo showing 8 Cells: 6 used to decode forty-eight HDTV MPEG4 streams simultaneously, 1 for scaling the results for display, and one left over. A spare, I guess?

      This reminds me of the Texas Instruments 320C80 processor: 1 RISC general-purpose CPU plus four DSP-oriented CPUs. Each had an on-chip memory chunk of 4KB. 256KB would be fantastic after the experience of programming for the C80. 256KB will be plenty of memory to work on a tile of framebuffer.

      1.
  • I'm very excited about this project; I even spec'd out a new Dell to handle it. But before I can lay down the cash, I just wonder: why?

    Why? Is the Cell processor expected to go anywhere past the PS3? There is obviously no OS port planned, and I have no access to a PS3 game SDK. I have read some pretty awesome posts regarding the technical details of Cell vs. x86 or Mac architectures, but none that would encourage me to download, install, and play around with this with the hope of ever making a buck.
    • Blade servers have already been announced.

      I would buy one of these, and no, I don't plan to get a PS3.
