IBM Releases Cell SDK
derek_farn writes "IBM has released an SDK running under Fedora Core 4 for the Cell Broadband Engine (CBE) processor. The software includes many GNU tools, but the underlying compiler does not appear to be GNU-based. For those keen to start running programs before they get their hands on actual hardware, a full system simulator is available. The minimum system requirement specification has obviously not been written by the marketing department: 'Processor - x86 or x86-64; anything under 2GHz or so will be slow to the point of being unusable.'"
Re:Is this the same Cell processor used in the PS3 (Score:1, Informative)
Since the submitter didn't bother to explain... (Score:5, Informative)
Re:Wikipedia article question (Score:2, Informative)
A modern desktop computer has one master CPU, then several smaller CPUs, each running their own software: graphics, sound, CD/DVD, hard disk, not to mention all the CPUs in all the peripherals.
But the analogy ends there. The Cell has certain limitations and wouldn't be able to operate very efficiently as a full computer system with no other processors. I believe the PS3 has a separate GPU, for instance, and doubtless many other microcontrollers managing the rest of the system.
-Adam
Re:Wikipedia article question (Score:5, Informative)
The reason you want to unroll loops is because of various other delays. If it takes 7 cycles to load from the local store to a register, you want to throw a few more operations in there to fill the stall slots. Unrolling can provide those operations, as well as reduce the relative cost of branch overheads.
Re:Wikipedia article question (Score:2, Informative)
gcc and other compilers have options such as -funroll-loops, which will unroll loops (no matter how they were specified) for you if the count can be determined at compile time. So you wind up with "Do your thang, do your thang, do your thang, do your thang."
Re:GNU toolchain (Score:4, Informative)
Re:Wikipedia article question (Score:1, Informative)
Re:Wikipedia article question (Score:3, Informative)
As for the other posters, the real reason you want to unroll loops is basically to avoid the cost of managing the loop, e.g.
a simple loop like
for (a = i = 0; i < b; i++) a += data[i];
in x86 would amount to
mov ecx,b
loop:
add eax,[ebx]
add ebx,4
dec ecx
jnz loop
So you have a 50% efficiency at best. Now if you unroll it to
mov ecx,b
shr ecx,1
loop:
add eax,[ebx]
add eax,[ebx+4]
add ebx,8
dec ecx
jnz loop
You now have 5 instructions for two iterations, down from the 8 you would have had before. And so on, e.g.
mov ecx,b
shr ecx,2
loop:
add eax,[ebx]
add eax,[ebx+4]
add eax,[ebx+8]
add eax,[ebx+12]
add ebx,16
dec ecx
jnz loop
does 7 opcodes for 4 iterations [down from the 16 required previously, i.e. more than twice as efficient].
Tom
Re:GNU toolchain (Score:4, Informative)
Cell Hardware... (Score:4, Informative)
How does one get hold of a real CBE-based system now? It is not easy: Cell reference and other systems are not expected to ship in volume until spring 2006 at the earliest. In the meantime, one can contact the right people within IBM [ibm.com] to inquire about early access.
By the end of Q1 2006 (or thereabouts), we expect to see shipments of Mercury Computer Systems' Dual Cell-Based Blades [mc.com]; Toshiba's comprehensive Cell Reference Set development platform [toshiba.co.jp]; and of course the Sony PlayStation 3 [gamespot.com].
"cell" architecture is all about local memory (Score:5, Informative)
The cell processors can do DMA to and from main memory while computing. As IBM puts it, "The most productive SPE memory-access model appears to be the one in which a list (such as a scatter-gather list) of DMA transfers is constructed in an SPE's local store so that the SPE's DMA controller can process the list asynchronously while the SPE operates on previously transferred data." So the cell processors basically have to be used as pipeline elements in a messaging system.
That's a tough design constraint. It's fine for low-interaction problems like cryptanalysis. It's OK for signal processing. It may or may not be good for rendering; the cell processors don't have enough memory to store a whole frame, or even a big chunk of one.
This is actually an old supercomputer design trick. In the supercomputer world it was not too successful; look up the nCube and the BBN Butterfly, both of which were a bunch of non-shared-memory machines tied to a control CPU. But the problem was that those machines were intended for heavy number-crunching on big problems, and those problems didn't break up well.
The closest machine architecturally to the "cell" processor is the Sony PS2. The PS2 is basically a rather slow general purpose CPU and two fast vector units. Initial programmer reaction to the PS2 was quite negative, and early games weren't very good. It took about two years before people figured out how to program the beast effectively. It was worth it because there were enough PS2s in the world to justify the programming headaches.
The small memory per cell processor is going to be a big hassle for rendering. GPUs today let the pixel processors get at the frame buffer, dealing with the latency problem by having lots of pixel processors. The PS2 has a GS unit which owns the frame buffer and does the per-pixel updates. It looks like the cell architecture must do all frame buffer operations in the main CPU, which will bottleneck the graphics pipeline. For the "cell" scheme to succeed in graphics, there's going to have to be some kind of pixel-level GPU bolted on somewhere.
It's not really clear what the "cell" processors are for. They're fine for audio processing, but seem to be overkill for that alone. The memory limitations make them underpowered for rendering. And they're a pain to program for more general applications. Multicore shared-memory multiprocessors with good caching look like a better bet.
Read the cell architecture manual. [ibm.com]
Re:Wikipedia article question (Score:3, Informative)
You have this backwards. Optimizing compilers will turn tail-recursive style source into "normal" loops.
You can write a loop in tail-recursive style: a function foo delegates to a helper, foo-helper, and recursion in foo-helper is in the tail position. That is, foo-helper only calls itself as the final operation before returning.
Compiling this naively involves a function call per recursion, which on most architectures results in pushing data onto the stack. However, because we are doing tail-recursion, we can do a tail call elimination optimization.
How this works is that the "return" before the recursion is taken to mean that any automatic variables are dead, any stack space used for the arguments is reusable, and the recursive call is really a jump.
That is, when foo-helper calls itself, it really does an argument rewrite and jump, which in effect "pretends" that foo-helper was called with different arguments in the first place.
In other words, tail call elimination turns recursive loops into iterative loops.
Writing in "tail-recursive style" just means making sure your recursion is done in tail position (i.e., attached to a "return"). Some compilers for a variety of languages can identify recursion which is not done in the tail position, and reorder the recursion into tail position (and then the tail calls are eliminated into iterative loops). However, many compilers can't, and many more don't do tail-call elimination at all.
Once you've optimized recursive loops into iterative ones, you can optimize iterative loops however you like, including partially or fully unrolling them.
In summary, recursion is a way of looping, but function calls are not free. In particular, they usually consume stack space. If you only return the result of your recursion, then you are tail-recursing. Tail recursion can be turned into code which does not incur function-call overhead.
Re:"cell" architecture is all about local memory (Score:2, Informative)
Re:"cell" architecture is all about local memory (Score:2, Informative)
There was a Toshiba demo showing 8 Cells: 6 used to decode forty-eight HDTV MPEG4 streams simultaneously, 1 for scaling the results to display, and 1 left over. A spare, I guess?
This reminds me of the Texas Instruments 320C80 processor: 1 RISC general-purpose CPU plus four DSP-oriented CPUs, each with a 4KB chunk of on-chip memory. After the experience of programming for the C80, 256KB would be fantastic; 256KB will be plenty of memory to work on a tile of framebuffer.
1. DMA tile -> local RAM
2. render to local...
3. ???
4. Profit!
Whoops, where was I going with that, again?
Not a PPC Processor (Score:2, Informative)
Once again, the Cell is not a PPC processor. It is not PPC-based. The Cell going into the PlayStation 3 has a POWER-based PPE (Power Processing Element) that is used as a controller, not a main system processor. Releasing an SDK for Macs would not give any advantage over an x86-based SDK because you are still emulating another platform.
Wiki [wikipedia.org]
Re:"cell" architecture is all about local memory (Score:1, Informative)
WTF?
Just what the world needs, another clown from the peecee world talking about the PS3.
There is no 'GPU' in the PS3. The entire Cell+RSX unit is used to render. The RSX would be best described as the PS3's rasterizer, but even that isn't entirely accurate since the SPEs do a large amount of the painting/modifying of pixels. Physics and graphics data are unified and processed on the Cell side of the system, although the RSX does have vertex transform capabilities itself.
PS3 rendering is best described as a hybrid rendering system where rendering is load balanced between the internal components on the fly depending on the unique characteristics of the scene and world data being processed.
So, no, there isn't a NVidia 'GPU' in the PS3...
Re:Not a PPC Processor (Score:3, Informative)
What is Power Architecture technology? [ibm.com]
"Power Architecture is an umbrella term for the PowerPC® and POWER4(TM) and POWER5(TM) processors produced by IBM, as well as PowerPC processors from other suppliers."
Re:What about a PPC SDK and simulator? (Score:3, Informative)
The NVidia GPU in the PS3 (Score:3, Informative)
SCEA press release:
SONY COMPUTER ENTERTAINMENT INC. AND NVIDIA ANNOUNCE JOINT GPU DEVELOPMENT FOR SCEI'S NEXT-GENERATION COMPUTER ENTERTAINMENT SYSTEM [playstation.com].
TOKYO and SANTA CLARA, CA
DECEMBER 7, 2004
"Sony Computer Entertainment Inc. (SCEI) and NVIDIA Corporation (Nasdaq: NVDA) today announced that the companies have been collaborating on bringing advanced graphics technology and computer entertainment technology to SCEI's highly anticipated next-generation computer entertainment system. Both companies are jointly developing a custom graphics processing unit (GPU) incorporating NVIDIA's next-generation GeForce(TM) and SCEI's system solutions for next-generation computer entertainment systems featuring the Cell* processor".