Linux For Cell Processor Workstation 310
News for nerds writes "The Cell processor from Sony, Toshiba and IBM has been known as the chip that powers the upcoming PlayStation 3 computer entertainment system, but beyond that very little is known about how it is applied to real use. This time, at LinuxTag 2005, from 22nd to 25th June 2005 at the Messe- und Kongresszentrum Karlsruhe, Germany, Arnd Bergmann of IBM will speak about the Cell processor programming model under Linux, and about the Linux kernel in the first Cell-based workstation computer, which premieres at LinuxTag 2005."
real use? (Score:5, Funny)
And why are video games not considered to be "real use" ??
Re:real use? (Score:2)
Because the successful ones prevent you from getting any "real work" done.
"Real Use" != "Real Work" (Score:3, Insightful)
E.g., you get some real use out of your bed at home, but I wouldn't say sleeping there counts as "work". (Or if it does, where can I sign up to get paid for it?) And screwing doesn't really count as work for most people either.
E.g., you get some real use out of your TV, but most people don't get paid to watch TV, nor consider it "work".
Same here. Playing a game _is_ "real use" of a computer. It might not
Re:"Real Use" != "Real Work" (Score:2)
1J = 1Nm
one kilogram-meter squared per second squared
Re:real use? (Score:3, Insightful)
Re:real use? (Score:2, Interesting)
This would be a big seller for people in engineering, the movie industry, etc.
With Linux on it, I want to see a standard PC board with a Cell processor and an x86 emulator in ROM for x86 OSes, able to use x86 cards' ROMs.
But for speed it'll run natively compiled Cell applications.
Another odd effect: if Cell finds its way into printers, we'll have the situation we had back in the '80s, where the printer is more powerful than the computer driving it.
Oh JOY! Tux Racer on the PS3! (Score:3, Funny)
Re:Oh JOY! Tux Racer on the PS3! (Score:4, Funny)
The Linux role in hardware design (Score:5, Interesting)
It used to be the case that platform makers would have to either develop their own minimal operating system for testing purposes or work very closely with an OS maker to port their software to the new hardware platform. With Linux, this has been pushed into the anals of history. Now the Linux OS porting goes hand in hand with platform building, as evidenced by the almost immediate support for Linux at the time of hardware release.
I'm not so much interested in how the Cell board is going to revolutionize anything (it won't), but in how we have, in just the past few years, seen a dramatic increase in the number of hardware platforms being released. And not just in numbers, but also in variety. Its only limitation is the number of chip instruction sets supported by gcc and the imaginations of hardware manufacturers.
If you want to see how Microsoft's monopoly has hurt the computer industry, look no further than the current landscape. Hardware platforms used to be pretty standardized and boring; now, with Linux (and real competition to Microsoft's hegemony), the number of innovative platforms has increased dramatically. We need a Microsoft out there developing consumer-level applications and quality, user-friendly operating systems. However, we also need a real competitor like Linux to push the giant into innovating.
Re:The Linux role in hardware design (Score:5, Funny)
Just for the record: I think you meant "annals of history." "Anals of history" is
different.
Re:The Linux role in hardware design (Score:2)
Re:The Linux role in hardware design (Score:5, Funny)
Re:The Linux role in hardware design (Score:2)
Re:The Linux role in hardware design (Score:2)
Jason.
Re:The Linux role in hardware design (Score:5, Interesting)
Linux is more popular, but NetBSD allows quicker porting of "something useful".
I agree that Microsoft has dealt a fair amount of damage with crappy APIs and bad QA regarding stability and security. A 'standard turd with a pretty GUI' is still a turd.
Re:The Linux role in hardware design (Score:2)
Re:The Linux role in hardware design (Score:3, Funny)
Ooh, aren't we the pretentious one!
Back before I had a computer, I had NetBSD running on an old sofa. Beat that!
Re:The Linux role in hardware design (Score:5, Insightful)
Perhaps because it is a Unix work-alike, and this was the original design goal of Unix?
Re:The Linux role in hardware design (Score:2)
The first few years, Unics, like all OSs of the day, was indeed written in assembly. Even after it was rewritten in C, it wasn't portable for another few years, since it still relied on stuff that only worked on PDP-11s. It was only when they decided to try and port it to another architecture (which I
Unix used to have that role (Score:5, Interesting)
Linux is better though, because the GPL encourages hardware vendors to share their modifications.
With Unix, all you had access to was the original source and the ports done by non-commercial/academic groups (such as UCB). Not other vendors' code.
A Linux kernel in Verilog? ;-) (Score:4, Interesting)
I have news for you... we programmers have been letting the hardware designers have FAR too much fun for far too long! It wasn't until my recent retirement from more than 35 years of computer programming (I've had many different titles) that I've had the time to learn the Verilog hardware design language, and it's GREAT FUN!!! :-) Verilog is very liberating because it removes the boring sequential execution of most CPUs and provides a clean slate with which to design any sort of little tiny electronics machine (that's how I think of VLSI design) that my heart desires. There is a GPLed version of SystemC (a higher-level hardware design language than Verilog) on SourceForge that I've been meaning to take a look at, but first I'm creating a 640-bit-wide(!!!) factoring machine in Verilog which I hope to fit into one of the Lattice or Altera FPGA parts.
Really, I highly encourage programmers or anyone interested to learn and use Verilog or some other high level hardware design language. Verilog is similar in many ways to the C language, so if you're familiar with C then you already know most of Verilog's operators, precedence rules, etc. The only thing that takes a little getting used to is Verilog's inherently parallel nature. That is both its strength and the source of most Verilog design errors (at least for me). Also, Verilog is even more bit-picky than C but I sort of actually prefer the extra control that languages like C and Verilog give me over the hardware versus languages that try to insulate me from it.
Re:A Linux kernel in Verilog? ;-) (Score:2)
The drawback is that most of the high-end tools (ModelSim and the synthesisers) are extremely expensive. But there are often free tools that work all right; I know Xilinx supplies these for free.
O/T: Getting started? (Score:2)
New wave of freedom (Score:3, Insightful)
old wave, actually (Score:2)
Unfortunately, the so-called PC-pioneers like Gates, the Apple developers, and others, didn't have a clue what they were doing technically and were learning on the job;
Re:old wave, actually (Score:3, Insightful)
Cheap, but limited.
--
Evan "My first computer was an S100 bus handbuilt. My first OS wasn't."
Another Demo loop (Score:4, Insightful)
Maybe the old man face and duck-in-water tech demos from the PS2 will also appear... Did any PS2 game ever look as good as Sony's tech demos?
Re:Another Demo loop (Score:2)
Cell is a very cool design. I suggest you read the design docs linked to in the news item. IBM, Toshiba and Sony are fairly reputable - this is not some vaporware. If you were trolling about Infinium Labs and the Phantom, I'd understand, but PS3? Come on...
Re:Another Demo loop (Score:2)
Re:Another Demo loop (Score:3, Insightful)
Re:Another Demo loop (Score:2)
Re:Another Demo loop (Score:2)
Re:Another Demo loop (Score:2)
The current generation of PS2 games looks at least as good as, and in many cases better than, the tech demos they showed back then. That said, yes, it took them a while to get all the power out of the PS2. This thread [the-magicbox.com] has some videos and pictures to compare.
And last but not least, one should never forget that a tech demo isn't actual gameplay. A tech demo allows the developers to pre-script everything they want and insert cool effects all over the place
Re:Another Demo loop (Score:2)
Some words about Big Blue (Score:3, Interesting)
Re:Some words about Big Blue (Score:3, Funny)
Re:Some words about Big Blue (Score:2)
Re:Some words about Big Blue (Score:2)
Re:Some words about Big Blue (Score:5, Insightful)
Okay what do we know about IBM:
What does that mean?
If I was Intel/Microsoft/Apple/Lenovo I would be running for the hills. IBM is about to try and redefine computing again.
I am not simply recycling the hype about the Cell being better than sliced bread. I truly think the signs are there that IBM is going to go headlong into the workstation/embedded/client/server market with a Cell/Linux architecture and try to settle some very old debts with Wintel.
I don't know whether they will succeed. I expect it will come down to whether they can make programming the SPUs as easy as x86. But I think it will be a very interesting few years.
cell (Score:5, Funny)
- optimize seamless communities
- generate vertical e-services
- leverage synergistic convergence
and best of all
- engage e-business content
Perfect solution
Re:cell (Score:2)
Re:cell (Score:2)
- generate vertical e-services
- leverage synergistic convergence
engage e-business content
Perfect solution
I will believe it when I either see this in a powerpoint presentation or hear it come out of the mouth of a funny sock puppet.
Re:cell (Score:2)
Re:cell (Score:4, Funny)
Been there, ... (Score:2)
OK, so it's not on the Cell architecture, but rather an FPGA-based softCPU, but certainly the problem of integrating asymmetric coprocessing engines into the Linux architecture has been thought about before.
Cool stuff nonetheless.
NB: This does not mean PS3 will run Linux (Score:2, Interesting)
This is similar to the T10K PS2 devkits running Linux (on a separate x86 processor) for similar purposes.
As with the PS2, the consumer PS3 console itself uses a custom bare-bones kernel; it is NOT Linux based, although I could certainly see Linux being ported t
I would love to know.... (Score:2)
More succinctly: how does it handle its passing of processing requests to other 'cells'?
Using some (tiny, tiny bits of) ASM, I started to wonder about this. I mean, dear GOD! How do you deal with it? Some form of modified call I would suppose, like:
call_avail mem_Address_Of_Function, MemAddress_To_Store_Result
And when the result comes in it fires some interrupt. Maybe th
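As a toy illustration of the submit-and-interrupt pattern the poster is imagining, here is a C sketch. Every name in it (spu_job, spu_submit, spu_drain) is invented for illustration; this is not a real Cell API, and a flag on the job stands in for the completion interrupt:

```c
#include <stddef.h>

/* Hypothetical sketch -- spu_job, spu_submit and spu_drain are invented
 * names; this is not a real Cell API. */
typedef struct {
    int (*fn)(int);   /* the "mem_Address_Of_Function" */
    int arg;
    int *result;      /* the "MemAddress_To_Store_Result" */
    int done;         /* stands in for the completion interrupt */
} spu_job;

static spu_job queue[8];
static size_t queued;

/* Non-blocking submit: the main core keeps running after this returns. */
static void spu_submit(int (*fn)(int), int arg, int *result) {
    queue[queued++] = (spu_job){ fn, arg, result, 0 };
}

/* Stands in for the coprocessor draining its mailbox; setting done
 * plays the role of firing the interrupt back to the main core. */
static void spu_drain(void) {
    for (size_t i = 0; i < queued; i++) {
        *queue[i].result = queue[i].fn(queue[i].arg);
        queue[i].done = 1;
    }
}

/* Example payload a caller might hand to spu_submit. */
static int square(int x) { return x * x; }
```

The real question (which this sketch dodges) is how the hardware signals completion and how the kernel multiplexes many such requests onto a handful of SPUs.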
You haven't spent enough time in the kernel (Score:2, Funny)
And that would be followed by a series of nonsensical parameters which can be defaulted to NULL, and everything still seems to work fine.
As for your question, that's why they make the big bucks and you are posting on Slashdot. If you knew the answer, you'd be working for them.
Re:You haven't spent enough time in the kernel (Score:2)
But yes. Funny haha
Re:You haven't spent enough time in the kernel (Score:2)
Re:I would love to know.... (Score:2)
Wrong question. As I understand it from the pictures (no, I didn't RTFA ;-) ), the SPUs are co-processors (like a GPU or floating-point co-processor), with the exception that they're all executing the same copy of the same program. This is the old concept of associative memory, except that in this case the control logic associated with each local block of memory (the Local Store or "LS" blocks in the picture) contains
Re:I would love to know.... (Score:2)
Cell may not be impressive at first glance (Score:4, Interesting)
That said, advances in parallelizing or vectorizing tasks within the kernel or popular applications are possible, but that's not a trivial task, so at first glance Cell's Linux benchmarks could look unimpressive or misleading, even though the architecture itself is revolutionary, at least in theory.
Here I hope IBM has done their homework and show something really impressive, yet realistic. I want to see things like Apache and GD serving hundreds of thousands of requests for dynamic content, or some real-time encoding/compositing of MPEG4 video for scalable delivery. I want to see Maya or Lightwave rendering a very complex scene. Rubber ducks may be fun to look at and -in all fairness- fit for a videogame-oriented crowd, but I want to see some kick-ass performance based on what it can potentially do to application development.
Define realistic? (Score:2)
Re:Cell may not be impressive at first glance (Score:2, Interesting)
The true power of Cell is the data rates that can flow between the individual processors, the memory and the IO backplane. It is a mini supercomputer on a chip because of those data rates; the processing elements are secondary, as they can be altered and changed for different "Cell" microprocessors.
I wrote up a brief explanation with info about data rates, etc... here
http://www.friendsglobal [friendsglobal.com]
The Cell Advantage (Score:3, Insightful)
I picture the PS3 using a camera as a very flexible form of input to allow for more creative game design. Super-fast compression and decompression also come to mind, which could be useful for more complex and fluid internet play.
Recent articles have said the Cell will have some hiccups with physics and AI, because those tasks benefit from branch prediction, but this should be made up for by the fact that the Cell will be able to recognize input at a far more human level than present technology affords.
Re:The Cell Advantage (Score:3, Interesting)
Also, while AI and physics performance are limited in some respects, as I mentioned in the last post, I just
How much can we expect this workstation to cost? (Score:3, Interesting)
Specifically, is this, like, something that will be actually in the affordable range for people, or is this going to be like some kind of $6000 near-server tank?
Also, how many Cells is this likely to have? One? Two? Four? These SPEs are all well and good for computational stuff but the rest of the time it's nice not to be stuck with a single processor.
Re:How much can we expect this workstation to cost (Score:2)
Re:How much can we expect this workstation to cost (Score:2)
Cell-less (Score:2, Interesting)
The good news is that someone is at least taking advantage of the architecture and producing Linux workstations based on the Cell... unfortunately I don't think that will be enough for it to survive in the desktop/workstation market
Cool processor (Score:5, Funny)
And yet again the Cell fanboys (Score:5, Interesting)
The Cell also is simple, but in a way that inflates the gflop rating at the cost of programmer time.
By comparison the modern x86 is a dream to program for; just note how two fairly radically different CPUs (the Athlon 64 and the P4) handle the same code very nicely without any big performance issues. Compare this to the Cell, where all the explicitness will make sure that any binary you write for the Cell today will run like crap on the next version.
The point here is that Apple could absolutely not have switched to the Cell, it is inconvenient now and hopeless to upgrade without having to rewrite a ton of assembler and recompile everything for the new explicit requirements.
The Cell is the thing for number crunching and pro applications where they are willing to spend the time optimizing for every single CPU, but for normal developers it is a step back.
Wrongo (Score:4, Interesting)
In theory, you don't need any GP registers at all, you could just have memory-memory ops and rely on the cache. This is impractical due to the size of memory addresses eating up your bandwidth (incidentally, this is a problem with RISC architectures, eating bandwidth and clogging the cache, but that's another story). As an alternative, you can simply expose the cache as one big honking register file using somewhat smaller addresses, and let your fancy-pants optimizing compiler do its best.
The real problem seems to be that compilers have just not been able to keep up with the last 20 years of theory. Witness the Itanium: in theory it should have been the ultimate, but they never seemed able to get things optimized for it (other problems, too). Then curmudgeons complain about the extra work of optimization and insist on setting us back to early-'80s architecture rather than writing a decent compiler.
Moral of the story: write a decent compiler and stop trying to glorify crappy ISAs that suit your antiquated and inefficient coding habits.
Re:Wrongo (Score:5, Interesting)
Compilers do manage to do decent jobs in some cases, especially with languages that are easier to do semantic analysis over than C/C++, but while it is interesting research it is not a practical way to go. The reality is that C/C++ is prevalent, and highly detuned code is abundant. This also fails to address the problem of migrating between versions of the processor, while recompiling everything every time is a way to go it is not terribly practical (and when every new processor will fail to measure up to the old in the users old apps the user will not be happy).
It is a bit odd that you bring up the Itanium, since it is the best argument for this stance: there has been no lack of effort in compiler technology for the Itanium, and the compilers are real marvels, leveraging the very best the research has to offer. The silicon itself is very powerful; if you manage to actually fill all the instruction slots, the thing will really fly. Unfortunately they never do, they get 50% fills and such, and the problem is that a modern sophisticated OoO processor will do an equally good job extracting parallelism on the fly while offering more flexibility.
A large part of the problem, and the reason why multithreaded models are becoming pervasive, is that OoO processors actually extract very close to the maximum in instruction level parallelism even with near-infinite window-sizes (I recommend the paper at http://citeseer.ist.psu.edu/145067.html [psu.edu]), so automatic vectorization of ILP is not a field to pin much hope on.
My final note: while having sophisticated issue logic is fairly complex, the chip real estate is not that large, and the gains to be made are huge. The Cell has a weak primary processor, mostly meant to be an organizing hub for the vector operations; if you don't write vectorized code you are screwed (unless compiler technology does something amazing soon).
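To make the "write vectorized code or you're screwed" point concrete, here is a generic C sketch (nothing Cell-specific) of the difference between a loop a vectorizer can digest and one it cannot:

```c
/* Independent lanes: each iteration touches only its own elements, so a
 * vectorizing compiler (or a hand-written SIMD port) can process several
 * elements per instruction. */
static void scale_add(float *out, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i++)
        out[i] = 2.0f * a[i] + b[i];
}

/* Loop-carried dependency: each iteration needs the value the previous
 * one just wrote, so the iterations cannot run side by side without
 * restructuring the algorithm itself. */
static void iir_filter(float *x, int n, float c) {
    for (int i = 1; i < n; i++)
        x[i] = x[i] + c * x[i - 1];   /* depends on the new x[i-1] */
}
```

On an architecture like Cell's SPUs, code shaped like the first loop flies; code shaped like the second leaves the vector units idle, and no compiler can fix that for you.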
Re:Wrongo (Score:2)
Re:Wrongo (Score:3, Insightful)
In case you don't remember, the point of RISC was to put optimization on the compiler so it wouldn't require massive on-the-fly speculative bibbledy-bop with millions of extra transistors and hideous pipelines like we have nowadays. This was done by providing, essentially, a compiler-accessible cache in the form of lots of registers, and by having an instruction set that was amenable to automated optimization.
Yes, at least in the beginning in its most pure form. Most high performance RISC architectures
Re:And yet again the Cell fanboys (Score:3, Insightful)
Well, not the average application coder, but the compiler guys. And that's the right thing to do. x86 is a hardware VM with a hardware JIT compiler right now. This is a job that is better done in software at compile time, not in real time during execution. (An exception would be bandwidth limitations, as were reported for the Transmeta CPU (IIRC) running native VLIW code.) Abstraction is nice. But it doe
Re:And yet again the Cell fanboys (Score:2, Insightful)
While it might be the way of the future, it is very much a thing of the future, not the present.
Expect to see lots of carefully hand-tuned code for the Cell to
Unfortunate name (Score:5, Funny)
Apple? (Score:2)
Re:I thought that the PS3 was going to be real (Score:2, Informative)
Cell info [blachford.info]
From TFA ... (Score:5, Informative)
"Only the kernel is able to directly communicate with an SPU and therefore needs to abstract the hardware interface into system calls or device drivers. The most important functions of the user interface include loading a program binary into an SPU, transferring memory between an SPU program and a Linux user space application, and synchronizing the execution. Other challenges are the integration of SPU program execution into existing tools like gdb or oprofile."
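As a rough user-space sketch of those three operations, consider the following. Every name here is hypothetical; the interface that actually came out of this work (spufs) differs in its details, and a plain struct stands in for a kernel-managed context. The 256 KB size matches the SPU's local store:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical names, for illustration only. */
enum { SPU_LS_SIZE = 256 * 1024 };   /* each SPU has a 256 KB local store */

typedef struct {
    unsigned char local_store[SPU_LS_SIZE];
} spu_context;

/* "Loading a program binary into an SPU" */
static int spu_load(spu_context *spu, const void *image, size_t len) {
    if (len > SPU_LS_SIZE)
        return -1;        /* SPU code and data must fit in the local store */
    memcpy(spu->local_store, image, len);
    return 0;
}

/* "Transferring memory between an SPU program and user space" --
 * in real hardware this is a DMA transfer, modeled here as a memcpy. */
static void spu_put(spu_context *spu, size_t off, const void *src, size_t len) {
    memcpy(spu->local_store + off, src, len);
}

static void spu_get(spu_context *spu, size_t off, void *dst, size_t len) {
    memcpy(dst, spu->local_store + off, len);
}
```

The hard parts the talk promises to cover, synchronizing execution and hooking gdb/oprofile into SPU programs, are exactly the parts a memcpy sketch like this cannot show.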
Re:From TFA ... (Score:2)
Re:From TFA ... (Score:2)
Re:Should be interesting (Score:3, Informative)
Perhaps you are in the wrong business/hobby (Score:5, Funny)
Perhaps you are in the wrong business or hobby. If inconsequential details like what CPU is sitting at the heart of Apple's proprietary design cause you emotional distress, you really need to reconsider your life. Assuming, of course, that you are not in advertising and needed the faux x86/PPC conflict. If so, please continue with your distress; otherwise, have you considered forestry?
http://data2.itc.nps.gov/digest/usajobs.cfm [nps.gov]
No... HE's right here (Score:3, Funny)
If inconsequential details like what CPU is sitting at the heart of Apple's proprietary design cause you emotional distress, you really need to reconsider your life.
This is Slashdot, man. If we had a "life" to reconsider, we wouldn't be here.
Re:Perhaps he is right though (Score:4, Interesting)
The SPUs are not for the OS, they are for high level libraries or apps. They are for highly specialized computationally intensive jobs. Maybe OpenGL could benefit but not the OS. FYI:
"Unlike existing SMP systems or multi-core chips, only the general purpose PowerPC core is able to run a generic operating system, while the SPUs are specialized on running computational tasks. Porting Linux to run on Cell's PowerPC core is a relatively easy task because of the similarities to existing platforms like IBM pSeries or Apple Power Macintosh, but does not give access to the enormous computing power of the SPUs. Only the kernel is able to directly communicate with an SPU and therefore needs to abstract the hardware interface into system calls or device drivers. The most important functions of the user interface include loading a program binary into an SPU, transferring memory between an SPU program and a Linux user space application, and synchronizing the execution. Other challenges are the integration of SPU program execution into existing tools like gdb or oprofile."
http://www.linuxtag.org/typo3site/freecongress-de
Re:Perhaps he is right though (Score:2)
So what do you think WGF in Longhorn would be called then? Part of the OS or "high level libraries or apps"?
Re:Perhaps he is right though (Score:3, Interesting)
Re:Perhaps he is right though (Score:5, Insightful)
The other possibility is that Apple have got seriously pissed off watching IBM spew out the 3-core G5 for the Xbox 360 and the Cell for the PS3, while leaving them with an aging 2.7GHz CPU.
Re:Perhaps he is right though (Score:2)
Yep, you got it. Apple had no choice. They were going to lose the laptop market completely. In fact they will already be a good deal behind when the first PowerBooks with Intel roll out.
But, I had always expected that if Apple had stayed with IBM for chips that they would get OS X onto the cell processor, and that would
Re:Perhaps he is right though (Score:3, Interesting)
I'd even take it a step further; by going cross-platform with the OS, and abstracting the binary-compatibility issue away with Xcode and Rosetta, they are now no longer beholden to any chipmaker. Intel is probably giving them a sweet deal (they are a pretty high-volume seller, after all), but should that deteriorate, they can always go over to AMD. Or back to IBM for POWER/Cell chips. And in fact, they can do it all at once... if they decide they want the Pentium M in the laptops, Cell in the desktops, and
Re:Perhaps he is right though (Score:3, Interesting)
x86 chips have added silicon AFTER, not BEFORE, Microsoft created all of their sound and video extensions, contrary to the GP's implication. And MMX was a response to the fact that 'omg! People use video and sound!'. Linux and anyone else is free to take advantage of the extra instructions, as is the case with Linux at least.
If Cell doesn't have special instr
Re:Quaternion rotations? (Score:2)
Wait....
Seriously though, do tell. The link in the body really didn't say much, despite the significant number of words.
Re:Quaternion rotations? (Score:2)
Re:*sigh* (Score:5, Interesting)
Consoles are where PowerPC is at from here on out.
Re:*sigh* (Score:2)
Re:*sigh* (Score:2)
Re:So what's the deal with you linux zealots? (Score:2, Insightful)
Oh wow, I don't know where to start.
1. Relevance: This comment has absolutely no relevance to the slashdot article.
2. Open-source software sucks compared to closed-source because it's not done by 'professionals'? Give me a break! Several open-source projects are funded by companies, organizations, and universities and are recognized world-wide.
3. You're saying you can't use those programs because of their silly names which you somehow derived as sexual euphemisms? What about windows cause it kinda sou
Re:So what's the deal with you linux zealots? (Score:2, Insightful)
There's only one thing worse than repetitive, uncreative, irrelevant trolls.
It's the fucktards that reply to them on a point-by-point basis as if it does anything other than justifying the trolling.
Next time you feel the need to reply to such a lame, obvious troll, try sucking your own cock instead. It's an endeavor that will doubtless keep you occupied for days and be far less distasteful to onlookers.
Re:So what's the deal with you linux zealots? (Score:2)
I don't think the reason we call them paedophiles is because 12 yearolds don't give very good hand jobs, is it?
Re:cell chips (Score:2)
Re:Does it run Linux? (Score:2)
Re:Congrats Apple and Steve! (Score:5, Funny)
Re:Could be the replacement for my Macs (Score:2)
There ya go, efficient high performance 64-bit desktop platform which leaves you enough money for a good road trip with some buddies.
Tom
Re:Could be the replacement for my Macs (Score:2)
Re:Could be the replacement for my Macs (Score:3, Insightful)
I think you'll find the gains from 16 extra registers are less than what [for example] AMD gains from having three pipelines, a register file, etc...
It's like cache, throwing more registers pays off big to start [say going from 1 to 2, 2 to 4,
Take apart that 5% of your program that takes 95% of the time and see how many registers it actually uses in the inner loop.
With bignum math for instance, inner loops usually amount
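A generic sketch of such an inner loop (not any particular library's code, though real bignum libraries have a primitive of this shape): multiply a limb array by a single word and accumulate into a result, propagating the carry. The live state is just two pointers, the multiplier, the carry, and the index, roughly five registers' worth regardless of how many the ISA offers:

```c
#include <stddef.h>
#include <stdint.h>

/* r[0..n-1] += a[0..n-1] * b, returning the final carry word.
 * A hypothetical bignum inner loop on 32-bit limbs: note how few
 * distinct values are live across an iteration. */
static uint32_t addmul_1(uint32_t *r, const uint32_t *a, size_t n, uint32_t b) {
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t t = (uint64_t)a[i] * b + r[i] + carry;
        r[i] = (uint32_t)t;          /* low word stays in the result */
        carry = t >> 32;             /* high word feeds the next limb */
    }
    return (uint32_t)carry;
}
```

The loop is bound by the multiply and the serial carry chain, not by register pressure, which is the point being made above.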
Re:Could be the replacement for my Macs (Score:2)
Those are tricks you need EVEN in the RISC world [although perhaps less as the # of registers increases].
While I agree x86_64 is a kludge, it seems to work well. The ideal would be AMD making a new ISA, replacing their existing decoders but keeping the rest the same. That would remove the prefix byte