Ars Technica's Hannibal on IBM's Cell
endersdouble writes "Ars Technica's Jon "Hannibal" Stokes, known for his many articles on CPU technology, has posted a new article on IBM's new Cell processor. This one is the first part of a series, and covers the processor's approach to caching and control logic. Good read."
Apple? (Score:4, Insightful)
Re:Apple? (Score:5, Informative)
The Cell and Apple
Finally, before signing off, I should clarify my earlier remarks to the effect that I don't think that Apple will use this CPU. I originally based this assessment on the fact that I knew that the SPUs would not use VMX/Altivec. However, the PPC core does have a VMX unit. Nonetheless, I expect this VMX to be very simple, and roughly comparable to the Altivec unit of the first G4. Everything on this processor is stripped down to the bare minimum, so don't expect a ton of VMX performance out of it, and definitely not anything comparable to the G5. Furthermore, any Altivec code written for the new G4 or G5 would have to be completely reoptimized due to the in-order nature of the PPC core's issue.
So the short answer is, Apple's use of this chip is within the realm of conceivability, but it's extremely unlikely in the short- and medium-term. Apple is just too heavily invested in Altivec, and this processor is going to be a relative weakling in that department. Sure, it'll pack a major SIMD punch, but that will not be a double-precision, Altivec-type punch.
Re:Apple? (Score:3, Informative)
Firefox, wily, gcc, python, perl, MS office, gimp and so on.
Re:Apple? (Score:3, Funny)
Re:Apple? (Score:4, Funny)
Re:Apple? (Score:2)
You weren't even born yet, loser.
It was a joke (Score:2)
Re:Apple? (Score:2)
Re:Apple? (Score:2)
I don't *quite* know if that's 2000 vintage, but it's gotta be close.
Re:Apple? (Score:2, Informative)
For browsing simple websites or writing emails it works acceptably. For anything even remotely multimedia related, it is rendered useless.
Meanwhile a 400MHz PII running Windows 2K can play flash, mp3s,
Re:Apple? (Score:5, Informative)
5 year old? Your 600MHz G3 iBook came out in October 2001. That machine is just a few months past three years old.
In October of 2001, the P4 was at 2.0GHz, and the Athlon 2000+ was just coming out. Are you going to tell me that a 2GHz P4 isn't adequate for browsing the web, listening to mp3s and importing digital photos?!
Re:Mistake (Score:5, Interesting)
Get off what you 'assume'; assumption is just intuition for idiots.
We have test 200MHz laptops with 80MB of RAM and 5GB hard drives, released in 1997, all running Windows XP Professional (yes, even with the themes turned on), and they benchmark faster than they did when they shipped with Windows 95.
Secondly, they can do full 30fps video as long as it is uncompressed AVI or even WMV 9. QuickTime (MPEG-4), MPEG-2, and Real stutter horribly on video playback, unfortunately.
As for battery, I don't know; these laptops last 3 hours on a single charge, and yes, techs are REQUIRED to use them daily in test scenarios and have no problems doing so.
Now, if you really want to compare laptops to laptops, why don't I show you our 900MHz AMD Compaq laptops? They have JBL sound systems in them, and there isn't a single task they cannot perform, with the exception of running a T&L-based video game, as the integrated video doesn't handle it. Oh wait, the 900MHz PowerBook's video didn't support such features either. (BTW, this is not to say that there aren't several 900-1000MHz-class laptops with upper-end video features; I am just using what we have in our test labs for comparison.)
The 900MHz laptop has a DVD/CD-RW and came out late 2000/early 2001 (trying to remember if we got them before the holidays or not). They do full software DVD decoding with less than 20% CPU utilization and do pretty much anything we throw at them fairly fast. We even have a beta version of Windows 2003 Server running on one with 256MB of RAM. (Yes, we are always pushing the limits, but it works as fast as the Windows XP Pro machine sitting next to it.)
Now, off my rant... Macs truly are great, and the PowerBooks of the time were great, but that DOES NOT MEAN they were the BEST, that they WILL ALWAYS BE THE BEST, or that you should be complacent listening to Apple tell you that what you are getting is the best when it might not be. It is time for us as Mac users to stand up and DEMAND that technology becomes as much a part of what a Mac is as the EASE OF USE of the interface.
The time is now. We need to STOP accepting what they tell us and give us, and force them to truly give us the LATEST technological concepts, not just above-average concepts when compared to the PC world. These are Macs; they SHOULD BE BETTER. It shouldn't even be subject to debate; they should be so far advanced that a debate isn't possible. PERIOD.
Sadly, it just isn't true now, and has not been for many years. OS X has given the Mac world some credibility on OS technology, but now Apple needs to take Macs to the next level.
Even if my comment inspires one Mac user to say "hey Apple, we want better," then maybe we all can be the symbolic person with the hammer from their 1984 video and WAKE THEM UP this time.
Re:Mistake (Score:2)
At the time there was no Sony VAIO, so the PowerBook Titanium was the smallest laptop around. It also had optional wireless, and standard FireWire and gigabit Ethernet built in. OS 10.1 was a bit lacking, but I'd take it over whatever Windows version any day (I tried 98, 2000, and XP Home).
I'd say it was the best.
My article on the new cell processor: (Score:3, Insightful)
Aside from my own (competent) review of the Cell processor, the article is possibly one of the most insightful and technically well-balanced articles posted on Slashdot in a long while!
I'll cover more of the Cell's basic architecture, including the mysterious 64-bit PowerPC core that forms the "brains" of this design.
Looking forward to that... I think that many people will be moving to Mac
Part II is up now (Score:5, Informative)
Like having a whole Beowulf Cluster on one chip... (Score:3, Funny)
Workstation? (Score:5, Interesting)
" Last fall, IBM and Sony said they were developing a workstation based on Cell chips, which is the first product IBM will ship based on Cell."
Regardless of whether this is the first product shipped or not, a workstation is coming. I can't see it running anything but Linux. Given the mass-market targeting of the Cell, I hope Sony makes a strong go at grabbing the market with cheap hardware, rather than trying to milk the high-end content creation market first.
Workstation?-Cell Wars. (Score:2, Informative)
Re:Workstation? (Score:3, Interesting)
OS X is another strong possibility. Sony's President was recently on stage with Steve Jobs at the Macworld Expo, hinting at working with Apple in the future. A recent slashdot story linked to an article which states that 3 PC manufacturers have been begging Apple to license OS X to them. I'll bet Sony was one of them, and IBM would also be a logical suitor.
Since OS X is essentially NEXTSTEP 6, and the Cell workstations would be great for science or 3d, OS X is a
Re:Workstation? (Score:4, Insightful)
Apple at the moment is two companies. One is primarily a computer hardware company that makes software to drive hardware sales and sells the entire package as user experience. The other is a consumer electronics company. Last year, the profits made by both companies were about the same. Whether they wish to transition to being a software and consumer electronics company that also makes some niche hardware is a decision they will have to make.
A real supercomputer chip that CELL copied (Score:2)
It's almost just like Cell, but has on-chip memory to solve the bandwidth problem.
Dally worked for Cray and mentioned that today's supercomputers are not efficient.
When hell freezes over (Score:2)
Hey "never say never", but I don't see Microsoft (xbox2) porting/releasing ANY Windows technology on Sony (ps3) hardware any time soon. The Xbox2/PS3 showdown is going to be the biggest thing since, well, Xbox/PS2.
Re:Workstation? (Score:5, Insightful)
The target market is not home users but rather scientists, animators, engineers, and others who need raw power and aren't concerned with the fact that Word won't work on it; many customers will probably have a cheap PC sitting next to it for office tasks, freeing up the workstation to do nothing but grind through computations. In this world, the various Unices are the only serious choice; SGIs run IRIX or Linux, Suns run Solaris or Linux, and IBMs run AIX or Linux.
Factor in IBM's commitment to Linux, and the fact that many of their customers already use it, and Linux is almost certain to be a major OS choice for Cell workstation customers, particularly those working in a mixed-architecture environment. An AIX port is likely and a Windows port is possible, but the majority of Cell workstations will be running Linux.
More info in these slides (Score:5, Interesting)
e.g. 234M transistors [impress.co.jp] (!) That's why I don't think this will be replacing the G5 any time soon. The die size (at the current prototype's 90nm process) is over 200mm^2.
It'll have to get a fair bit smaller/cheaper before the PS3 can use it without major subsidies, and I don't know why they think general consumer devices will want it. God knows how much power it dissipates with all 8 SPEs clocking along at 4GHz...
Re:More info in these slides (Score:2)
Re:More info in these slides (Score:4, Informative)
Re:More info in these slides (Score:2, Insightful)
Re:More info in these slides (Score:2)
It'll have to be shrunk to 65nm before it can hope to be competitive.
Not if the CPU is too expensive. (Score:3, Informative)
Well, in MS's case, they can pull shit like that. Microsoft makes loads of cash off
Re:More info in these slides (Score:3, Informative)
A complicated CPU may have tens or hundreds of millions of transistors, but a single memory chip has billions.
So when you bump up the cache size on a CPU, the transistor count goes up greatly.
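Rough numbers, for a sense of scale: SRAM takes about six transistors per bit, so a 512KB cache alone is 512 x 1024 x 8 bits x 6 ≈ 25 million transistors, before you even count tags and control logic. That's why cache dominates the transistor budget of a modern CPU.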
Re:If Sony can, Apple can (Score:3, Insightful)
Sony may be able to do that with the 65nm final design, when it arrives some time in 2006. Then we'll see.
Even then, there are other considerations that may make it a less-than-ideal fit for a general purpose computer - all those vector units are great for number crunching, but how much of that do you do each day? And when you're not, that's 3/4 of the cost of your chip sitting around idle. There are more
Re:If Sony can, Apple can (Score:4, Insightful)
I am not convinced by this argument. A lot of OS X code uses AltiVec, but very little actually uses it directly. Apple has spent a lot of effort producing libraries that people can use which wrap AltiVec into something higher level (e.g. QuickTime, vDSP). Most of these could potentially be ported to the SPEs. Things like CoreVideo could also make use of the SPEs.
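For example, something like this (a sketch using vDSP's vector add from the Accelerate framework; the point is the caller never writes AltiVec code, so Apple could retarget the back end, in principle even to SPEs):

    #include <Accelerate/Accelerate.h>

    /* c[i] = a[i] + b[i] for n floats. The caller never touches the
       SIMD hardware directly; the library picks the best path. */
    void add_arrays(const float *a, const float *b, float *c, int n)
    {
        vDSP_vadd(a, 1, b, 1, c, 1, (vDSP_Length)n);
    }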
all those vector units are great for number crunching, but how much of that do you do each day? And when you're not, that's 3/4 of the cost of your chip sitting around idle.
90% of the time, my 1.5GHz G4 is sitting at 20% utilisation or less. You could argue that 80% of the power of the chip is wasted. However, when I am doing things that tax it they are almost always things that would support a large degree of parallelism.
Re:If Sony can, Apple can (Score:3, Informative)
Love those architectural articles (Score:5, Funny)
Hannibal (Score:5, Funny)
Re:Hannibal (Score:5, Funny)
Re:Hannibal (Score:5, Funny)
Re:Hannibal (Score:2)
Re:Hannibal (Score:3, Informative)
Zen, your Google-fu is weak: http://en.wikipedia.org/wiki/Michel_Lotito [wikipedia.org] :)
Re:Hannibal (Score:2)
Hannibal was the greatest general of his era (Score:2)
He was also a great politician after the Punic Wars.
Re:Hannibal was the greatest general of his era (Score:2)
Re:Hannibal (Score:2)
Re:Hannibal (Score:2)
iCell? (Score:2, Interesting)
Even so, I doubt we'd see Cell-based Macs until at least 2007 -
depends on application (Score:2)
With the low performance PPC cpu, I doubt Apple will want these things. Apple has too much interest in the general purpose computer market to care much about so
Re:depends on application (Score:2)
I wonder how the Cell architecture compares to ATi and nVidia GPUs?
Re:iCell? (Score:2)
That's a theoretical performance boost. Few apps will be able to take full advantage of 9 simultaneous processors, even after being coded with specific support for it. Still, it'd give a nice speedup to a couple of specific Photoshop tasks, and that's all you need to feed the Jobs RDF. If a Pentium III can speed up the Internet, then why not?
wouldn't i
Re:iCell? (Score:2)
The primary advantage of consoles (to developers) is that they are a uniform environment,
Re:iCell? (Score:2)
Re:iCell? (Score:2)
How do I code this thing?? (Score:4, Interesting)
Can anybody out there with experience on this architecture, or who even attended the presentation itself, give us mere coders some details? Preferably a website.
Re:How do I code this thing?? (Score:2)
Check out his earlier articles on the PS2 architecture to learn more about those vector units.
Re:How do I code this thing?? (Score:5, Informative)
The architecture of the Cell looks like a much-improved PS2 system, with the PS2's VU0 and VU1 (vector units 0 and 1) replaced by 8 SPEs. Also, the programmable DMA (with chaining ability, allowing it to sequence multiple DMA events one after the other, etc.) looks very similar to the PS2's.
If that turns out to be the case, then PS2 programming is a hint towards how it'll work. On the PS2, you generally configured the DMA controller to upload mini-programs to the vector units, then DMA-chained data as streams from RAM through the just-uploaded program and on to the destination (usually the GS, which rasterised the display).
On the Cell, it looks as though you can DMA-chain code & data through multiple SPEs and ultimately back to RAM/the PPC core/whatever is memory mapped. This is cool - it's software pipelining.
So, my guess is that the PPC acts as a (DMA, IO, etc.) controller (much like the MIPS chip did in the PS2), and the heavy lifting goes on in the vector units, with code and data being streamed in on demand.
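In rough C terms, that setup looks something like this (a sketch only - the tag layout is loosely modelled on the PS2's DMA chain tags, and dma_kick() is invented, since IBM has published nothing of the sort):

    #include <stddef.h>

    /* Each tag tells the DMA engine what to move and where the chain
       goes next; NULL ends the chain. */
    struct dma_tag {
        const void     *src;     /* block to transfer */
        unsigned        qwords;  /* length in 128-bit quadwords */
        struct dma_tag *next;
    };

    /* Invented for the sketch: real hardware walks the chain itself. */
    static void dma_kick(int channel, struct dma_tag *chain)
    {
        (void)channel; (void)chain;
    }

    void stream_batch(const void *vu_prog, const void *verts, unsigned nqw)
    {
        /* First tag uploads the mini-program into the vector unit,
           the second streams vertex data through it - no CPU involved. */
        struct dma_tag data = { verts,   nqw, NULL  };
        struct dma_tag prog = { vu_prog, 16,  &data };
        dma_kick(1, &prog);
    }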
It's a different model to normal programming, and as far as I can see it encourages you to be closer to the metal (i.e. it's harder; I normally expect my L1 cache to take care of itself...), but assuming they release/port gcc for the SPEs, it might not be too hard if you're used to event-driven, highly-threaded programming. Let's just hope they release a Linux port and 'vcl' so we can do something useful with the vector units...
Oh, and if the Xbox was a target for a self-hosting Linux solution, I think the Cell will be irresistible.
Simon
Re:How do I code this thing?? (Score:3, Insightful)
Sounds a lot like pixel/vertex shaders. Is this how we're going to get around all our bandwidth problems now? Slice up our programs into little independent fragments and upl
Re:As a total Cell/PS2-coding n00b... (Score:4, Informative)
A fair question, but no. Consider for example an iterative factorial algorithm:
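(The snippet itself got eaten by the formatting; in C it would be something like this, with f() being the arbitrary step function discussed below:)

    extern int f(int i);   /* some opaque step function - see below */

    int factorial(int n)
    {
        int result = 1;
        for (int i = 1; i <= n; i++)   /* or i = f(i), and then even the
                                          trip count is unknowable */
            result *= i;
        return result;
    }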
Totally unparallelizable.
This is a case where, to execute the next step, you absolutely need the results of the previous step to be completed. There can be other kinds of reasons for this: in this case you don't even know in advance how many times the loop is going to execute. Now, maybe if you're clever you can figure it out, but what if f() is return (rand() * i);? Ick.
To make matters worse, C lets you use pointers and do whatever you want. So given some set of instructions, there could be side effects on i (or n) that are totally unpredictable without executing the program.
What you're looking for - the problem I'm describing - is not a problem with gcc. It's a problem with the C language. If you want to get rid of side-effects and make parallelization easy, try using a pure functional language. But people don't like programming in pure functional languages (well, I don't); they like programming in C (or another procedural-style language).
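For contrast, here's the kind of loop that does parallelize, because nothing carries over between iterations:

    /* No iteration depends on any other, so a vectorizing compiler
       (or an SPE) can chew through this freely. */
    void vec_mul(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = a[i] * b[i];
    }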
Re:How do I code this thing?? (Score:5, Informative)
We'd do our skeletal animation skinning with this. DMA a bunch of verts to scratchpad, transform and weight them on the VU, DMA back to a display list. The thing is, there's really no high-level language support for this... the onus is on the programmer to schedule and memory map everything, mostly in assembly.
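In C-ish terms (the real thing was VU assembly, and all the names here are invented), the inner loop is roughly:

    typedef struct { float x, y, z, w; } vec4;
    typedef struct { vec4 row[4]; } mat4;
    typedef struct { vec4 pos; int b0, b1; float weight; } vert;

    /* in/out/bones all live in scratchpad, already DMA'd in. */
    void skin_batch(const vert *in, vec4 *out, const mat4 *bones, int n)
    {
        for (int i = 0; i < n; i++) {
            const mat4 *m0 = &bones[in[i].b0];
            const mat4 *m1 = &bones[in[i].b1];
            float w = in[i].weight;
            vec4 p = in[i].pos;
            float res[4];
            for (int r = 0; r < 4; r++) {
                /* blend the two bone matrices by the vertex weight... */
                float rx = m0->row[r].x * w + m1->row[r].x * (1.0f - w);
                float ry = m0->row[r].y * w + m1->row[r].y * (1.0f - w);
                float rz = m0->row[r].z * w + m1->row[r].z * (1.0f - w);
                float rw = m0->row[r].w * w + m1->row[r].w * (1.0f - w);
                /* ...then transform the position by the blended row */
                res[r] = rx * p.x + ry * p.y + rz * p.z + rw * p.w;
            }
            out[i] = (vec4){ res[0], res[1], res[2], res[3] };
        }
        /* then DMA 'out' back into the display list */
    }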
The design of the Cell - it's incredible. It's every game programmer's wet dream. I just don't see how it's going to be as useful in other areas, though. It's going to be a compiler-writer's nightmare, and getting real performance from the SPEs is going to take a lot of assembly, or a high-level language construct that I haven't seen yet.
Re:How do I code this thing?? (Score:5, Interesting)
"Starting today, the performance lunch isn't free any more. Sure, there will continue to be generally applicable performance gains that everyone can pick up, thanks mainly to cache size improvements. But if you want your application to benefit from the continued exponential throughput advances in new processors, it will need to be a well-written concurrent (usually multithreaded) application. And that's easier said than done, because not all problems are inherently parallelizable and because concurrent programming is hard."
Obviously, it's not clear whether this is directly relevant to cell processors, but I think it's at least of passing interest. It's also worth considering whether concurrency-oriented languages like Erlang and Oz could become more important with these sorts of processors (not for games but possibly for scientific work).
See also the discussion [lambda-the-ultimate.org] of this article on Lambda [lambda-the-ultimate.org].
Re:How do I code this thing?? (Score:2)
Anyhow, you probably don't have to sweat over it; it's not likely to be that open to just anybody. (It's going to be a nightmare though, as they've pretty much hinted that it's up to the programmer to keep the 8 APUs occupied.)
Re:How do I code this thing?? (Score:3, Insightful)
The real value of the x86 (Score:5, Insightful)
Perhaps I don't quite understand (Score:2, Insightful)
"Reverse engineered implementations exist" is not really much of a meaningful strength if you don't own one such reverse engineered implementation already. You say you can potentially build a 386 chip fab, but the thing is you aren't going to build a 386 chip fab, you're going to just keep on buying Intel and AMD chips, the only noteworthy people currently making x86 c
Re:The real value of the x86 (Score:4, Insightful)
As I recall, at the time, there were lawsuits aplenty by Intel, claiming microcode copyright violations for the most part. The majority of clone makers, though, were making money off the maths co-processor, as Intel's 387 sucked. It was the slowest out there, expensive, with only eight entries on a linear stack.
By moving the coprocessor into the main CPU, Intel tried to destroy clone makers. Anyone who made just 386 clones or 387 clones would be out of business, and those who made both would be years behind combining them on the same die.
Well, history shows that far fewer clone makers existed in the 486 era. Wonder why. But even that wasn't apparently good enough, with Intel trying to claim the chip ID was trademarked. The courts threw that one out, which is why Intel switched to using names. You can't trademark a number.
The Pentium also took some time to clone. No, not because of all the random bugs in the design, but because that's when Intel switched to a hybrid RISC/CISC design. Although it seems to have largely been a cosmetic change, to cash in on the massive publicity surrounding RISC designs at the time, it did put up a major challenge to clone makers, who - for the first time - couldn't just throw the chip together half-assedly and hope to be an order of magnitude faster than Intel.
Intel DID do a few things, around this time, that were puzzling. Their 486DX-50 was never clock-doubled or clock-quadrupled, the way the DX-33 was. The DX-50 placed far higher demands on the surrounding components, true, but it also gave you higher real-term performance than the DX2-66, because the DX2 wasn't able to drive anything any faster than the DX-33. All it could do was run those instructions it had a little faster.
Intel are still playing these numbers games, which is why their multi-gigahertz processors aren't noticeably any faster. The bottleneck isn't in the computing elements, so faster computing elements won't make for a faster chip.
IBM's "cell" design seems to be working much more on the bottlenecks, which means that GHz-for-GHz, they should run faster than Intel's chips for the same tasks.
I think IBM could go further with their design - I think they're being far more conservative than they need be. When you're working in a multi-core environment, you don't always want all parts of the CPU to be in lock-step. It's not efficient to force things to wait, not because of anything they are doing but because some totally unrelated component works at a certain speed and no faster.
It would make sense, then, for the chip to be asynchronous, at least in places, so that nothing is needlessly held up.
However, I can easily imagine that a hybrid synchronous/asynchronous chip that is already a hybrid multi-core DSP/CPU would be a much harder sell to industry, so I can see why they'd avoid that strategy. On the other hand, if they could have pulled that off, this could have been a far more amazing press release than it already is.
Re:The real value of the x86 (Score:2)
I don't quite remember how it was before, but for the 486, AMD had a pretty good clone that was (as far as I remember) both faster and cheaper.
Sparc is open too (Score:2)
Future compatibility (Score:2)
If I were IBM, I'd publish such specs anyway, alongside letting the press know very loudly and clearly that developers should stick to the recommended API if they want any guarantee of future compatibility. OTOH, I do understand their reasoni
Re:Future compatibility (Score:2)
Export controls? (Score:2)
I understand (Score:2, Interesting)
Dare I say....
Oh the Hell....
PowerBook G5!
Not this year (Score:2)
Right now, it has 4x as many transistors as a G5, runs at twice the clock speed, and likely puts out a hell of a lot more heat than a G5 does.
Power5 "lite"? (Score:2)
I'm wondering about the feasibility of such a processor. This design seems to be rather heavily dependent upon the specific design of the OS (namely AIX in this case), and it seems to me that a
Oops (Score:2)
Not useful for scientific computing (Score:5, Interesting)
This isn't terribly useful for scientific computations (there is the same problem with the GPU): the vector units are built for 32-bit floats, while the IEEE is currently working on a standard for 128-bit precision floating point calculations!
Of course for 3D, video and sound, 32-bit precision is good enough, and *if* programmers (a big if) manage to overcome the pain of 'parallel programming', then it could be a big success.
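A quick C illustration of why 32 bits makes scientific users nervous (nothing Cell-specific here, just IEEE single precision running out of digits):

    #include <stdio.h>

    int main(void)
    {
        float sum = 0.0f;
        for (long i = 0; i < 100000000; i++)
            sum += 1.0f;
        /* prints 16777216, not 100000000: once sum reaches 2^24,
           adding 1.0f no longer changes it */
        printf("%.0f\n", sum);
        return 0;
    }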
Re:Not useful for scientific computing (Score:3, Informative)
Re:Not useful for scientific computing (Score:3, Insightful)
Cellection? (Score:2)
Re:Cellection? (Score:3, Interesting)
gcc autovectorization page. [gnu.org]
similar technology... (Score:4, Informative)
Of course, it's all a matter of scale - TI had a 4-DSP, 1-CPU [ti.com] processor a while ago, but it only made 100 MFLOPS. Cradle's first product has 8 DSPs and 6 CPUs - depending on whether you can get your data to pipeline properly through the processors, you can achieve up to 3.6 GFLOPS peak with only a 230MHz clock.
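As a rough sanity check on that figure: 3.6e9 FLOPS / 230e6 Hz works out to about 16 floating-point operations per clock across the whole chip - which you only see when every pipeline is fed on every cycle, hence the "depending on" caveat.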
Golden opportunity for L4/Hurd (Score:3, Interesting)
Hurd might be an interesting candidate for running on Cell because of the highly threaded design. Hurd servers might be able to swap in and out of cells as they require cycles. It seems a good match; i.e. L4 runs in the main core, and various translators and other processes run on the cells. If a cell could be programmed to run the filesystem, for instance, it would totally free up the core for other business.
Because the PS3 will have a highly fixed hardware set, implementing a minimal driver set might be feasible given enough reverse-engineering effort.
I'm not saying that L4/Hurd will kick the nuts off of Linux on an Opteron, I'm just noting that it might be pretty cool to experiment with Hurd on Cell technology. The L4/Hurd team is real close to getting the last pieces in place to compile Mach-based Hurd under L4, and if you've ever tried Debian GNU/Hurd, you know it's pretty near feature-complete and a pretty neat system to run. The next task for L4/Hurd is a driver infrastructure, and it might be wise to look at what Cell is bringing to the table before it gets too far along. Know what I mean?
Re:Golden opportunity for L4/Hurd (Score:2)
Like everything else with the Hurd, it'll come in time. I'd do something with it, but I don't have a clue as how I'd write a device driver, much less an interface for one.
It seems a good match; i.e. L4 runs in the main core, and various translators and other processes run on the cells. If a cell could be programmed to run the filesystem, for instance, it would totally f
Re:Golden opportunity for L4/Hurd (Score:4, Informative)
The L4/Hurd guys are talking about "Deva" which is their vaporous specification for a driver interface. Since Hurd's drivers are all userland, this specification which nobody is working on is probably one of the most important things in the development of computer science right now. Hell, I should go back to university and take some classes so I could work on it. Talk about making history.
Slashdotters constantly bitch and moan about how slow Hurd's progress has been, but all they have to do is send in a patch or write a doc or something. I personally ported GNU Pth to Hurd some years back, making me (in my mind) one of the first people to ever compile and run a pthread app on Hurd (slooooowww). Hehe, but I did make pseudo-history in the world of computer science because of that stupid couple of days I spent fiddling around with autoconf.
L4/Hurd development is total anarchy. Work on whatever you feel like and send in patches. You don't have to "join GNU" or any such nonsense. In fact I have never ever seen RMS post to any Hurd developer list ever. He's more likely to post here.
Slashdotters seem to think that Hurd is RMS's little empire, but in fact he has next to nothing to do with it. Marcus Brinkmann right now is probably the unofficial leader of Hurd, just because he has personally written most of the really hardcore stuff.
No CELL for Macintosh... (Score:2, Redundant)
"Finally, before signing off, I should clarify my earlier remarks to the effect that I don't think that Apple will use this CPU. I originally based this assessment on the fact that I knew that the SPUs would not use VMX/Altivec. However, the PPC core does have a VMX unit. Nonetheless, I expect this VMX to be very simple, and roughly comparable to the Altivec unit o the first G4. Everything on this processor is stripped down to the bare minimum, so don't expect a ton of VMX performance
Digital Rights Management (Score:5, Interesting)
Another article on the Cell design at http://www.theregister.co.uk/2005/02/03/cell_analysis_part_two/ [theregister.co.uk] seems to indicate that there is some sort of DRM built in.
Hannibal doesn't say anything about this (that I noticed) - anyone have more info?
Re:Digital Rights Management (Score:5, Interesting)
But perhaps they've got some technical details (enough that they can count distinct features) that I can't find with a basic google search on the subject. It would certainly be out of Sony's previous style, though I understand they recently pulled their heads out of their collective asses and discovered that they were selling a loose metaphor of cars and crowbars at the same time, and came out with a public apology for sucking.
Re:Digital Rights Management (Score:2)
Eliminating Instruction Window (Score:3, Interesting)
Why? The reason for the instruction window was to simplify software development.
Of course, I like to play devil's advocate with myself, so I'll answer that question.
The purpose of the Cell processor is to enhance home appliances, which rely more on low latency than they do on precision, accuracy, and performance bandwidth. Thus, one can very safely say that the Cell processor will likely have little purpose in scientific calculations.
Re:Eliminating Instruction Window (Score:4, Informative)
This isn't even a general-purpose processor (no MMUs on the cells either, in the traditional sense), nor have they gone superscalar - they have enough registers to keep the thing busy, and software can figure that out. This isn't even that new an idea; a cell looks a lot like one of the media processors that was being sold 5-6 years ago.
You're right, it's not designed to be a scientific processor - but then high-precision scientific processing is a tiny market these days. Way more people want to pay for fast gaming platforms than want to do fluid dynamics or what have you.
A proposal for Apple (Score:4, Interesting)
I don't have an account, but this is an honest idea.
Why doesn't Apple include a Playstation 2 support card into their Macintosh line?
Problem: The OSX platform has almost no games. I own several macs, I love my macs, and I sincerely enjoy OSX. But it has no games, and that will never get better, especially as simpler games migrate to the web and the complex ones bail for the console market. The PC gaming market has essentially peaked.
Solution: Embed (or include as a BTO option) a PS2 chipset in a Macintosh. Run the generated display straight through to the graphical overlay plane. Done.
Everything works. The controllers are trivially converted to use USB. The DVD drive is already there. The display is already there. The USB and FireWire are already there. The hard drive is already there. The "memory cards" are already there.
Reason: The Macintosh game library explodes instantly to encompass something like 3,000 PS1 and PS2 games. With no need for emulation, the games are guaranteed to work out of the box and provide the Apple ease of use everyone loves. Sony increases their market share, Apple gets a viable, expanding game library, and users get a vastly better gaming experience on OSX for maybe $40 of parts and engineering.
Why won't this work?
Re:A proposal for Apple (Score:2)
I think it would sell well.
I don't think Sony would go for it, because they would rather sell the PS2. And since Sony makes the chipset, nothing can happen without them.
Maybe after PS3 comes out... but who would want it then?
Doomed until parallel programming is common (Score:4, Insightful)
Requiring programmers to learn how to write parallel code that makes good use of this processor seems pretty dicey to me. Few programmers have been trained to write parallel code (most struggle with threading). The fact that no popular programming language has a good parallel model is also a big stumbling block.
This problem seems to be looming for all the dual-core processors, but I haven't seen a big effort to teach programmers how to adapt.
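To see why even plain threading trips people up, here's the textbook data race in C (compile with -pthread; the result is almost always well short of two million):

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;

    /* Classic pitfall: counter++ is a load, an add, and a store,
       and two threads can interleave those steps. */
    static void *work(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, work, NULL);
        pthread_create(&b, NULL, work, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter);   /* almost never 2000000 */
        return 0;
    }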
Top 7 Myths of the New Cell Processor: (Score:5, Informative)
Not quite. The Cell is 9 complete yet simple CPUs in one. Each handles its own tasks with its own memory. Imagine 9 computers, each with a really fast network connection to the other 8. You could probably treat them as extra vector processors, but you'd then miss out on a lot of potential applications. For instance, the small processors can talk to each other rather than work with the PowerPC at all.
Hardly. Sony is following the same game plan as they did with their Emotion Engine in the PS2. Everyone thought that they were losing $100-200 per machine at launch, but financial records have shown that, besides the initial R&D (the cost of which is hard to figure out), they were only selling the PS2 at a small loss initially, and were breaking even by the end of the first year. By fabbing their own units, they took a huge risk, but they reaped huge benefits. Their risk and reward is roughly the same now as it was then.
Doubtful. The problem is that though the main CPU is PowerPC-based like current Apple chips, it is stripped down, and the Altivec support will be much weaker than in current G5s. Unoptimized, Apple code would run like a G4 on this hardware. They would have to commit a lot of R&D for their OS to use the additional 8 processors on the chip, and redesign all their tweaked Altivec code. It would not be a simple port. A couple of years to complete, at least.
This is half-true. While it will be hard, most game logic will be performed on the traditional PowerPC part of the Cell, and thus be normal to program. The difficult part will be concentrated in specific algorithms, like a physics engine, or certain AI. The modular nature of this code means that you could buy a physics engine already designed to fit into the 128k limitation of the subprocessor, and add the hooks into your code. Easy as pie.
Bwahahaha! No way. This is a delicate bit of coding that is going to need to be tweaked by highly-paid coders for every single game. Letting an OS predictively determine what code needs to get sent to which processor is insane in this case. The cost of switching out instructions is going to be very high, so any switch will need to be carefully considered by the designer, or the frame rate will hit rock bottom.
This is one myth that could be correct. The Cell is huge (relatively), and given IBM's problems in the recent past with making large, fast PowerPC chips, it's a huge gamble on the part of all parties involved that they can fab enough of these things.
Re:Top 7 Myths of the New Cell Processor: (Score:3, Informative)
"Easy as pie."
and
"This is a delicate bit of coding that is going to need to be tweaked by highly-paid coders for every single game."
I know that you are talking, sort of, about two different things, but they are related. While it may be "easy as pie" to add the hooks into your code to call what is essentially a library, making sure that library is scheduled, running, running in the right place and on the right data, and synchronized with everything else in the rig
Re:Top 7 Myths of the New Cell Processor: (Score:3, Insightful)
Comparing it with trying to work with threads definitely brings up nightmare conditions. But I don't think it has to be a nightmare. We use mammoth parallelization all the time and with great success. We hand off all the re
Division of labor (Score:4, Insightful)
In the Cell, the main PPC CPU appears to identify a piece of work that needs to be done, schedule it to run on an SPE, upload the code snippet to the SPE's LS via DMA transfer, and then go off and do something else worthwhile while the SPE munches on it. I presume there's an interrupt mechanism to let the PPC know that an SPE has some results to return.
Compiler writers ought to be able to handle this new architecture well enough -- it's sort of like the current CPU/GPU split, where you've got the main program running on the system CPU, and specialized graphical transform programlets running on the GPU. There may need to be macros or code-section identifiers in the source to let the compiler know which to target for that bit of code.
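A sketch of that division of labor in C (spe_submit, spe_wait, and the job struct are all made-up names - treat this as pseudocode for the model, not an API; IBM hasn't published one):

    #include <stddef.h>
    #include <stdio.h>

    struct spe_job {
        const void *code;    /* kernel image DMA'd into local store */
        const void *input;   /* data streamed in from main memory */
        void       *output;  /* results streamed back on completion */
        size_t      in_len, out_len;
    };

    static void spe_submit(int spe, struct spe_job *job)
    {
        (void)spe; (void)job;   /* stub: would program the DMA engine
                                   and start the SPE running */
    }

    static void spe_wait(int spe)
    {
        (void)spe;              /* stub: would block on the SPE's
                                   completion interrupt */
    }

    int main(void)
    {
        float in[256] = {0}, out[256] = {0};
        struct spe_job job = { NULL, in, out, sizeof in, sizeof out };
        spe_submit(0, &job);    /* kick the SPE off... */
        /* ...the PPC core does other useful work here... */
        spe_wait(0);            /* interrupt says the results are back */
        printf("job done\n");
        return 0;
    }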
Obviously, this is just the first iteration of the Cell processor. I can see them widening the SPE from single precision to double precision (for the scientific market -- the game market probably doesn't need it), and going to a multi-core design to reduce the die size.
Chip H.
Re:PLEASE HELP: Academic Research Survey (Score:2)
In addition to the pornography... (Score:2, Informative)
Re:In addition to the pornography... (Score:2, Informative)
http://www.coralcdn.org/
Basically, when you see a URL like you reported, it means that the content is actually from (stripping out the
http://minigirls.biz/
Thus, if you think you've seen evidence of child abuse, you should get
Re:Okay... (Score:2)
Upcoming Sony PS3... (Score:2)
However, after having RTFA(s), the Cell processor would look like a very good candidate for a F/OSS VIDEO BOARD - fast multicore processors, a large local memory, simplified RISC with most control in software, and a 64-bit PPC "traffic cop".
One additional area (at least) that I would expect the Cell processor to be incorporated into would be next-generation radar and sonar systems, due to its vector processing capabilities.
I would love to see an IBM development system for this architecture
Re:Interesting (Score:3, Interesting)
"In another part of the article, Blachford claims that the cell processing units have no "cache." Instead, they each have a "local memory" that fetches data from main memory in 1024-bit blocks. Well, that's sort of like saying that an iMac doesn't have a "monitor," but it does have a surface on which visual output is displayed. In other words, the Cell "local memories," which are roughly analogous to the vector units' "scratchpad RAM" on the P