Ars Technica's Hannibal on IBM's Cell

endersdouble writes "Ars Technica's Jon "Hannibal" Stokes, known for his many articles on CPU technology, has posted a new article on IBM's new Cell processor. This one is the first part of a series, and covers the processor's approach to caching and control logic. Good read."

  • Apple? (Score:4, Insightful)

    by tinrobot ( 314936 ) on Wednesday February 09, 2005 @01:45AM (#11615689)
    Why do I have the sneaking suspicion that, if successful, this processor will eclipse the PowerPC on the Mac in the next few years?
    • Re:Apple? (Score:5, Informative)

      by Tropaios ( 244000 ) <.tropaios. .at. .yahoo.com.> on Wednesday February 09, 2005 @02:29AM (#11615889)
      From the article:

      The Cell and Apple

      Finally, before signing off, I should clarify my earlier remarks to the effect that I don't think that Apple will use this CPU. I originally based this assessment on the fact that I knew that the SPUs would not use VMX/Altivec. However, the PPC core does have a VMX unit. Nonetheless, I expect this VMX to be very simple, and roughly comparable to the Altivec unit of the first G4. Everything on this processor is stripped down to the bare minimum, so don't expect a ton of VMX performance out of it, and definitely not anything comparable to the G5. Furthermore, any Altivec code written for the new G4 or G5 would have to be completely reoptimized due to the in-order nature of the PPC core's issue.

      So the short answer is, Apple's use of this chip is within the realm of conceivability, but it's extremely unlikely in the short- and medium-term. Apple is just too heavily invested in Altivec, and this processor is going to be a relative weakling in that department. Sure, it'll pack a major SIMD punch, but that will not be a double-precision Altivec-type punch.
  • by tod_miller ( 792541 ) on Wednesday February 09, 2005 @01:46AM (#11615692) Journal
    I want 2 of them, yesterday.

    Aside from my own (competent) review of the Cell processor, the article is possibly the most insightful and technically well-balanced article posted on Slashdot in a long while!

    I'll cover more of the Cell's basic architecture, including the mysterious 64-bit POWERPC core that forms the "brains" of this design.

    Looking forward to that... I think that many people will be moving to Mac ... on cell... likely?
  • Part II is up now (Score:5, Informative)

    by Anonymous Coward on Wednesday February 09, 2005 @01:50AM (#11615715)
    Part II is up [arstechnica.com] as well.
  • by ABeowulfCluster ( 854634 ) on Wednesday February 09, 2005 @01:52AM (#11615728)
    .. made of risc components.
  • Workstation? (Score:5, Interesting)

    by jericho4.0 ( 565125 ) on Wednesday February 09, 2005 @01:56AM (#11615746)
    From this site [itjungle.com] and others..

    " Last fall, IBM and Sony said they were developing a workstation based on Cell chips, which is the first product IBM will ship based on Cell."

    Regardless of whether this is the first product shipped or not, a workstation is coming. I can't see it running anything but Linux. Given the mass-market targeting of the Cell, I hope Sony makes a strong go at grabbing the market with cheap hardware, rather than trying to milk the high-end content creation market first.

    • Re:Workstation? (Score:3, Interesting)

      by node 3 ( 115640 )
      I can't see it running anything but linux.

      OS X is another strong possibility. Sony's President was recently on stage with Steve Jobs at the Macworld Expo, hinting at working with Apple in the future. A recent slashdot story linked to an article which states that 3 PC manufacturers have been begging Apple to license OS X to them. I'll bet Sony was one of them, and IBM would also be a logical suitor.

      Since OS X is essentially NEXTSTEP 6, and the Cell workstations would be great for science or 3d, OS X is a
      • Re:Workstation? (Score:4, Insightful)

        by TheRaven64 ( 641858 ) on Wednesday February 09, 2005 @06:26AM (#11616629) Journal
        The last time Apple tried licensing the OS, it almost killed them. They licensed it completely indiscriminately and lost out at the low end because clones were built using cheaper components, and at the high end because SMP clones were cheaper. Licensing to Sony or IBM would remain a possibility if the licensing agreement contained some kind of non-competition clause - Apple primarily targets the home user, and so would be happy to let IBM have the corporate market if it meant IBM paying them a royalty on every sale and a whole load of free publicity for OS X.

        Apple at the moment is two companies. One is primarily a computer hardware company that makes software to drive hardware sales and sells the entire package as user experience. The other is a consumer electronics company. Last year, the profits made by both companies were about the same. Whether they wish to transition to being a software and consumer electronics company that also makes some niche hardware is a decision they will have to make.

    • Stanford professor Dally's stream processor. [weblogs.com]

      It's almost just like Cell but has onchip memory to solve the bandwidth problem.

      Dally worked for Cray and mentioned that today's supercomputers are not efficient.
  • by Namarrgon ( 105036 ) on Wednesday February 09, 2005 @01:58AM (#11615757) Homepage
    Scroll down a bit here [impress.co.jp], there's some more tasty tidbits.

    e.g. 234 M transistors [impress.co.jp] (!) That's why I don't think this will be replacing the G5 any time soon. The die size (at the current prototype's 90nm) is over 200 mm².

    It'll have to get a fair bit smaller/cheaper before the PS3 can use it without major subsidies, and I don't know why they think general consumer devices will want it. God knows how much power it dissipates with all 8 SPEs clocking over at 4 GHz...

    • What is the problem here? 200 mm² is a little over half an inch by half an inch.
    • No subsidies required. PS3 will sell enough to write its own ticket. No need to hope others pick up the slack.

      • Not if its CPU costs twice as much to manufacture as e.g. a $300 Pentium 4 CPU. Would you pay $600+ for a PS3?

        It'll have to be shrunk to 65nm before it can hope to be competitive.

      • New consoles are sold at a loss, but there's a limit to how much of a loss companies can take. If the CPU itself ends up costing Sony $300+, they'd be looking at a massive loss on the consoles, probably larger than they are willing to take. That was actually a noted problem with the X-box: the loss per unit was large, so they had to sell quite a few games per unit to make it up. I'm not even sure if they made any money on it.

        Well, in MS's case, they can pull shit like that. Microsoft makes loads of cash off
    • The reason it has so many transistors is the amount of onboard memory. Memory uses a lot more transistors than the logic circuits do.

      A complicated CPU may have tens or hundreds of millions of transistors, but a single memory chip has billions.

      So when you bump up the cache size on a CPU, the transistor count goes up greatly.
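
      Rough numbers, assuming the reported 256 KB of local store per SPE and a standard 6-transistor SRAM cell: 256 KB is about 2 million bits, times 6 is roughly 12-13 million transistors per SPE, so the eight local stores alone would account for something like 100 million of the 234 million - before you count the PPC core, its caches, and all the actual logic.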
  • by hurtfultater ( 745421 ) on Wednesday February 09, 2005 @02:02AM (#11615778)
    Thank god. I've enjoyed his articles in the past, and if experience is any indication, I will have the false impression that I understand this stuff in a nontrivial way for up to three hours. This is not meant to rag on Hannibal, BTW.
  • Hannibal (Score:5, Funny)

    by ndogg ( 158021 ) <the@rhorn.gmail@com> on Wednesday February 09, 2005 @02:08AM (#11615801) Homepage Journal
    With a name like that, I expect to see pictures of him eating those Cell processors, and describing how they taste.
  • iCell? (Score:2, Interesting)

    by mpesce ( 146930 )
    Although the article (which is quite clear) indicates that the AltiVec architecture is closer to G4 than G5, won't the speed increase of having 8 fully-parallel processors (9 if you count the main CPU) more than make up for the issues associated with the loss of the G5's advanced features? It seems to me that this is a natural for Apple - it will give them a 5x - 10x performance boost over anything that's on the drawing boards over at Intel.

    Even so, I doubt we'd see Cell-based Macs until at least 2007 -
    • How well you can use 8 DSPs really depends on your code. I'd guess in most cases the answer is no; you can't use the vector units to make up for the lost performance of the main core. If you get effective use of VMX, then you might be able to, because easily vectorizable calculations should be possible to port to the DSPs more often than most code.

      With the low performance PPC cpu, I doubt Apple will want these things. Apple has too much interest in the general purpose computer market to care much about so

    • It seems to me that this is a natural for Apple - it will give them a 5x - 10x performance boost over anything that's on the drawing boards over at Intel.

      That's a theoretical performance boost. Few apps will be able to take full advantage of 9 simultaneous processors, even after being coded with specific support for it. Still, it'd give a nice speedup to a couple of specific Photoshop tasks, and that's all you need to feed the Jobs RDF. If a Pentium III can speed up the Internet, then why not?

      wouldn't i

  • by MagikSlinger ( 259969 ) on Wednesday February 09, 2005 @02:14AM (#11615827) Homepage Journal
    The one thing I don't understand is how I would code for this thing. As best as I understand it, I now have some instructions for controlling the cache (or LAM, whatever) which sounds cool, but are there any details yet of how I'd write code for this? I'm also disappointed that the article didn't explain how one would use their SIMD instructions if they aren't using any of the existing standards. So I load my vectors with the cache control and ask the processors to ever so kindly add them?

    Can anybody out there with experience on this architecture, or who attended the presentation itself, give us mere coders some details? Preferably a website.
    • Since Toshiba is part of the collaboration, it is quite possible that the Cell's vector units are based on, and are improved versions of, the PS2's vector units. Certainly the information I have seen so far hasn't led me to believe it was unlikely.

      Check out his earlier articles on the PS2 architecture to learn more about those vector units.
    • by Space cowboy ( 13680 ) * on Wednesday February 09, 2005 @02:49AM (#11615958) Journal

      The architecture of the Cell looks like a much-improved PS2 system, with the PS2's vu0 and vu1 (vector units 0 and 1) replaced by 8 SPEs. Also, the programmable DMA (with chaining ability, allowing it to sequence multiple DMA events one after the other, etc.) looks very similar to the PS2's.

      If that turns out to be the case, then PS2 programming is a hint towards how it'll work. On the PS2, you generally configured the DMA controller to upload mini programs to the vector units, then DMA-chained data as streams from RAM through the just-uploaded program and onto the destination (usually the GS which rasterised the display).

      On the Cell, it looks as though you can DMA-chain code & data through multiple SPE's and ultimately back to RAM/the PPC core/whatever is memory mapped. This is cool - it's software pipelining :-)

      So, my guess is that the PPC acts as a (DMA, IO, etc.) controller (much like the mips chip did in the PS2), and the heavy lifting goes on in the vector units, with code and data being streamed in on demand.

      It's a different model to normal programming, and as far as I can see it encourages you to be closer to the metal (ie: it's harder, I normally expect my L1 cache to take care of itself...), but assuming they release/port gcc for the SPE's, it might not be too hard if you're used to event-driven highly-threaded programming. Let's just hope they release a Linux port and 'vcl' so we can do something useful with the vector units...
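
      To make that concrete, here's a totally hypothetical sketch of an SPE-side streaming kernel - the dma_get/dma_put/dma_wait calls are made-up placeholders (nothing has been announced), but the shape is the point: pull a block into local store, crunch it, push it back out.

      extern void dma_get(void *ls_dst, unsigned long ram_src, unsigned long nbytes);  /* hypothetical */
      extern void dma_put(void *ls_src, unsigned long ram_dst, unsigned long nbytes);  /* hypothetical */
      extern void dma_wait(void);                                                      /* hypothetical */

      #define BLOCK 4096                    /* elements per transfer (arbitrary) */

      static float in_buf[BLOCK];           /* these live in the SPE's local store */
      static float out_buf[BLOCK];

      void process_stream(unsigned long src, unsigned long dst, int nblocks)
      {
          for (int i = 0; i < nblocks; i++) {
              dma_get(in_buf, src + (unsigned long)i * sizeof(in_buf), sizeof(in_buf));
              dma_wait();                             /* RAM -> local store */

              for (int j = 0; j < BLOCK; j++)         /* the "mini program" */
                  out_buf[j] = in_buf[j] * 2.0f;

              dma_put(out_buf, dst + (unsigned long)i * sizeof(out_buf), sizeof(out_buf));
              dma_wait();                             /* local store -> RAM */
          }
      }

      A smarter version would kick off the next dma_get before crunching the current block, which is exactly the software-pipelining idea above.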

      Oh, and if the xbox was a target for a self-hosting linux solution, I think the Cell will be irresistible :-)

      Simon
      • If that turns out to be the case, then PS2 programming is a hint towards how it'll work. On the PS2, you generally configured the DMA controller to upload mini programs to the vector units, then DMA-chained data as streams from RAM through the just-uploaded program and onto the destination (usually the GS which rasterised the display).

        Sounds a lot like pixel/vertex shaders. Is this how we're going to get around all our bandwidth problems now? Slice up our programs into little independent fragments and upl
    • by adam31 ( 817930 ) <adam31.gmail@com> on Wednesday February 09, 2005 @03:13AM (#11616059)
      This is similar to the 'scratchpad' RAM that Sony used in the PS2 and PS1. It's 16KB of on-chip (super-fast) memory that can be loaded and manipulated by the programmer, completely separate from the jurisdiction of the cache (which can cause big headaches-- think cache writeback with stale data).

      We'd do our skeletal animation skinning with this. DMA a bunch of verts to scratchpad, transform and weight them on the VU, DMA back to a display list. The thing is, there's really no high-level language support for this... the onus is on the programmer to schedule and memory map everything, mostly in assembly.

      The design of the cell-- it's incredible. It's every game programmer's wet dream. I just don't see how it's going to be as useful in other areas, though. It's going to be a compiler-writer's nightmare, and to get real performance from the SPEs is going to take a lot of assembly or a high-level language construct that I haven't seen yet.

    • by fuzzbrain ( 239898 ) on Wednesday February 09, 2005 @05:21AM (#11616442)
      I don't have much experience or knowledge but there was an interesting article [www.gotw.ca] the other week about how the next revolution in programming languages will be a turn towards concurrency:

      "Starting today, the performance lunch isn't free any more. Sure, there will continue to be generally applicable performance gains that everyone can pick up, thanks mainly to cache size improvements. But if you want your application to benefit from the continued exponential throughput advances in new processors, it will need to be a well-written concurrent (usually multithreaded) application. And that's easier said than done, because not all problems are inherently parallelizable and because concurrent programming is hard."


      Obviously, it's not clear whether this is directly relevant to cell processors, but I think it's at least of passing interest. It's also worth considering whether concurrency-oriented languages like Erlang and Oz could become more important with these sorts of processors (not for games but possibly for scientific work).
      See also the discussion [lambda-the-ultimate.org] of this article on Lambda [lambda-the-ultimate.org].
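
      For a flavour of the easy end of "well-written concurrent", here's a minimal pthreads sketch (plain C, nothing Cell-specific) that just splits an embarrassingly parallel loop across two threads; the hard part Sutter is talking about is everything that doesn't decompose this neatly:

      #include <pthread.h>

      #define N        1000000
      #define NTHREADS 2

      static double data[N];

      struct range { int lo, hi; };

      static void *worker(void *arg)
      {
          struct range *r = arg;
          for (int i = r->lo; i < r->hi; i++)
              data[i] *= 2.0;                  /* independent work, no locking needed */
          return 0;
      }

      int main(void)
      {
          pthread_t tid[NTHREADS];
          struct range r[NTHREADS];

          for (int t = 0; t < NTHREADS; t++) {
              r[t].lo = t * (N / NTHREADS);
              r[t].hi = (t + 1) * (N / NTHREADS);
              pthread_create(&tid[t], 0, worker, &r[t]);
          }
          for (int t = 0; t < NTHREADS; t++)
              pthread_join(tid[t], 0);         /* wait for both halves to finish */
          return 0;
      }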
    • with a lot of blood.

      anyhow.. you probably don't have to sweat over it.. it's not that likely to be that open for just anybody (it's going to be a nightmare though, as they've pretty much hinted that it's up to the programmer to keep the 8 APUs occupied).
    • First, you will use a language that supports a vector type. The languages used for GPU programming do, and there is a vector extension to C supported by GCC. You will write code that manipulates vectors instead of scalars. And that's about it. You try to keep your working set small, and your compiler will try to fit it in the local memory.
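
      For instance, with GCC's vector extensions it looks roughly like this (a minimal sketch - check the GCC docs for exactly which operations your version supports):

      /* A 16-byte vector of four floats; GCC maps arithmetic on these to
         SIMD instructions (Altivec, SSE, ...) where the target has them. */
      typedef float v4sf __attribute__ ((vector_size (16)));

      void vec_add(v4sf *dst, const v4sf *a, const v4sf *b, int n)
      {
          for (int i = 0; i < n; i++)
              dst[i] = a[i] + b[i];       /* one 4-wide add per iteration */
      }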
  • by argoff ( 142580 ) on Wednesday February 09, 2005 @02:16AM (#11615833)
    Is that the 386 instruction set and architecture is so non-proprietary. What made it so popular certainly wasn't that it was better. If I had the dough, I could literally make one in my own fab without asking a single soul. A lot of times it seems companies try to gather into consortiums to mimic the same effect and gather market momentum, but these are doomed to failure because the more valuable the technology becomes, the greater the pressure to differentiate and fence off some "territory" for themselves. We saw this happen first hand with UNIX, where all the flavors would constantly try to group under these unified standards - and they made little progress until Linux came along. The CPU world needs something similar to protect people from patent harassment, for designs, cores, and fabrication.
    • by Anonymous Coward
      Who would conceivably have enough money to build microchip fabrication facilities but not enough money to license the powerpc architecture? [google.com]

      "Reverse engineered implementations exist" is not really much of a meaningful strength if you don't own one such reverse engineered implementation already. You say you can potentially build a 386 chip fab, but the thing is you aren't going to build a 386 chip fab, you're going to just keep on buying Intel and AMD chips, the only noteworthy people currently making x86 c
    • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Wednesday February 09, 2005 @03:09AM (#11616039) Homepage Journal
      True, but at the time it came out, Intel did everything short of pay the US Govt. to take the clone manufacturers out with tac nukes.


      As I recall, at the time, there were lawsuits aplenty by Intel, claiming microcode copyright violations for the most part. The majority of clone makers, though, were making money off the maths co-processor, as Intel's 387 sucked. It was the slowest out there, expensive, with only eight entries on a linear stack.


      By moving the coprocessor into the main CPU, Intel tried to destroy clone makers. Anyone who made just 386 clones or 387 clones would be out of business, and those who made both would be years behind combining them on the same die.


      Well, history shows that far fewer clone makers existed in the 486 era. Wonder why. But even that wasn't apparently good enough, with Intel trying to claim the chip ID was trademarked. The courts threw that one out, which is why Intel switched to using names. You can't trademark a number.


      The Pentium also took some time to clone. No, not because of all the random bugs in the design, but because that's when Intel switched to a hybrid RISC/CISC design. Although it seems to have largely been a cosmetic change, to cash in on the massive publicity surrounding RISC designs at the time, it did put up a major challenge to clone makers, who - for the first time - couldn't just throw the chip together half-assedly and hope to be an order of magnitude faster than Intel.


      Intel DID do a few things, around this time, that were puzzling. Their 486DX-50 was never clock-doubled or clock-quadrupled, the way the DX-33 was. The DX-50 placed far higher demands on the surrounding components, true, but it also gave you higher real-term performance than the DX2-66, because the DX2 wasn't able to drive anything any faster than the DX-33. All it could do was run those instructions it had a little faster.


      Intel are still playing these numbers games, which is why their multi-gigahertz processors aren't noticeably any faster. The bottleneck isn't in the computing elements, so faster computing elements won't make for a faster chip.


      IBM's "cell" design seems to be working much more on the bottlenecks, which means that GHz-for-GHz, they should run faster than Intel's chips for the same tasks.


      I think IBM could go further with their design - I think they're being far more conservative than they need be. When you're working in a multi-core environment, you don't always want all parts of the CPU to be in lock-step. It's not efficient to force things to wait, not because of anything they are doing but because some totally unrelated component works at a certain speed and no faster.


      It would make sense, then, for the chip to be asynchronous, at least in places, so that nothing is needlessly held up.


      However, I can easily imagine that a hybrid synchronous/asynchronous chip that is already a hybrid multi-core DSP/CPU would be a much harder sell to industry, so I can see why they'd avoid that strategy. On the other hand, if they could have pulled that off, this could have been a far more amazing press release than it already is.

      • Well, history shows that far fewer clone makers existed in the 486 era.

        I don't quite remember how it was before, but for the 486, AMD had a pretty good clone that was (as far as I remember) both faster and cheaper.
    • The SPARC V8 spec is open; there's also an open-source implementation, the Leon [gaisler.com], and it's supported by Linux.
  • Pattnaik said that if IBM were to publish the detailed monitoring information for end users to access, then the company would feel obliged to maintain backwards compatibility in future iterations, and so they'd be limited in the changes they could make to the scheme.

    If I were IBM, I'd publish such specs anyway, alongside letting the press know very loudly and clearly that developers should stick to the recommended API if they want any guarantee of future compatibility. OTOH, I do understand their reasoni
  • This chip seems insanely powerful. With 8 APUs capable of doing DSP, you would think that some countries would impose export restrictions on the thing. If you remember, when the G4 came out Apple advertised that the military didn't want that thing leaving the country. But imagine a chip with the ability to do some serious SIMD operations. The CIA, NSA and others doing signal processing have to love this chip.
  • I understand (Score:2, Interesting)

    by JeffTL ( 667728 )
    that it runs at 30 watts, about like a Pentium M. And it's 64-bit. Can we say....

    Dare I say....

    Oh the Hell....

    PowerBook G5!
    • Actually, the quote was, "...it will run at 30 watts." Once it's been shrunk to 65nm, in 2006. Maybe.

      Right now, it has 4x as many transistors as a G5, runs at twice the clock speed, and likely puts out a hell of a lot more heat than a G5 does.

  • As I fully expected, Pattnaik could not discuss a possible workstation-class derivative (read: Apple-oriented derivative) of the POWER5. He also made it clear that he is and has been focused on POWER5 servers only, and any hypothetical workstation-class derivative of the design would be for someone else to discuss.

    I'm wondering about the feasibility of such a processor. This design seems to be rather heavily dependent upon the specific design of the OS (namely AIX in this case), and it seems to me that a
  • by renoX ( 11677 ) on Wednesday February 09, 2005 @02:44AM (#11615938)
    What I find interesting is that the vector processors are restricted to single-precision floating point calculations.
    This isn't terribly useful for scientific computations (there is the same problem with GPUs): currently the IEEE is working on a standard for 128-bit precision floating point calculations!

    Of course for 3D, video and sound, 32bit precision is good enough and *if* programmers (a big if) manage to overcome the pain of 'parallel programming' then it could be a big success.
    • SPEs (CELL SIMD processors..) have double precision units! IBM will discuss DP units for CELL today or tomorrow at ISSCC.
  • But does it run gcc? Or even have a cross-compiler target module? Will gcc become smart enough to emulate some of the SIMD techniques in my regular C++ code, even when I write the same old patterns?
  • by morcheeba ( 260908 ) on Wednesday February 09, 2005 @02:55AM (#11615989) Journal
    Cradle Semiconductor has been working for a while on a similar technology [cradle.com].

    Of course, it's all a matter of scale - TI had a 4 DSP, 1 CPU [ti.com] processor a while ago, but it only made 100 MFLOPS. Cradle's first product has 8 DSPs and 6 CPUs - depending on whether you can get your data to properly pipeline through the processors, you can achieve up to 3.6 GFLOPS peak with only a 230 MHz clock.
  • by The_Dougster ( 308194 ) on Wednesday February 09, 2005 @03:19AM (#11616078) Homepage
    This arch is still a baby, and this would be a great time for L4/Hurd to latch onto this processor. There is already an L4 PowerPC/64 port in some kind of development stage, and the very first platform is likely to be a PS/3 with somewhat fixed hardware specs. Marcus et al. were discussing something today, and they mentioned that there is nobody working on the driver interface for L4/Hurd yet.

    Hurd might be an interesting candidate for running on Cell because of the highly threaded design. Hurd servers might be able to swap in and out of cells as they require cycles. It seems a good match; i.e. L4 runs in the main core, and various translators and other processes run on the cells. If a cell could be programmed to run the filesystem, for instance, it would totally free up the core for other business.

    Because the PS/3 will have a highly fixed hardware set, implementing a minimal driver set might be feasible given enough reverse-engineering effort.

    I'm not saying that L4/Hurd will kick the nuts off of Linux on an Opteron, I'm just noting that it might be pretty cool to experiment with Hurd on Cell technology. The L4/Hurd team is real close to getting the last pieces in place to compile Mach-based Hurd under L4, and if you ever tried Debian GNU/Hurd, you know it's pretty near feature-complete and a pretty neat system to run. The next task for L4/Hurd is a driver infrastructure, and it might be wise to look at what Cell is bringing to the table before it gets too far along. Know what I mean?

    • Marcus et al. were discussing something today, and they mentioned that there is nobody working on the driver interface for L4/Hurd yet.

      Like everything else with the Hurd, it'll come in time. I'd do something with it, but I don't have a clue as how I'd write a device driver, much less an interface for one.

      It seems a good match; i.e. L4 runs in the main core, and various translators and other processes run on the cells. If a cell could be programmed to run the filesystem, for instance, it would totally f
      • by The_Dougster ( 308194 ) on Wednesday February 09, 2005 @07:05AM (#11616745) Homepage
        Like everything else with the Hurd, it'll come in time. I'd do something with it, but I don't have a clue as how I'd write a device driver, much less an interface for one.
        Likewise. I'm in kind of a strange position as I am keenly interested in stuff like this, yet this really isn't my personal genre.

        The L4/Hurd guys are talking about "Deva" which is their vaporous specification for a driver interface. Since Hurd's drivers are all userland, this specification which nobody is working on is probably one of the most important things in the development of computer science right now. Hell, I should go back to university and take some classes so I could work on it. Talk about making history.

        Slashdotters constantly bitch and moan about how slow Hurd's progress has been, but all they have to do is send in a patch or write a doc or something. I personally ported GNU Pth to Hurd some years back, making me (in my mind) one of the first people to ever compile and run a pthread app on Hurd (slooooowww). Hehe, but I did make pseudo-history in the world of computer science because of that stupid couple of days I spent fiddling around with autoconf.

        L4/Hurd development is total anarchy. Work on whatever you feel like and send in patches. You don't have to "join GNU" or any such nonsense. In fact I have never ever seen RMS post to any Hurd developer list ever. He's more likely to post here.

        Slashdotters seem to think that Hurd is RMS's little empire, but in fact he has next to nothing to do with it. Marcus Brinkman right now is probably the unofficial leader of Hurd just because he has personally written most of the really hardcore stuff.

  • In part II, he writes:

    "Finally, before signing off, I should clarify my earlier remarks to the effect that I don't think that Apple will use this CPU. I originally based this assessment on the fact that I knew that the SPUs would not use VMX/Altivec. However, the PPC core does have a VMX unit. Nonetheless, I expect this VMX to be very simple, and roughly comparable to the Altivec unit o the first G4. Everything on this processor is stripped down to the bare minimum, so don't expect a ton of VMX performance
  • by wakejagr ( 781977 ) on Wednesday February 09, 2005 @03:31AM (#11616116) Journal

    Another article on the Cell design at http://www.theregister.co.uk/2005/02/03/cell_analysis_part_two/ [theregister.co.uk] seems to indicate that there is some sort of DRM built in.

    The Cell is designed to make sure media, or third party programs, stay exactly where the owner of the media or program thinks they should stay. While most microprocessor designers agonize about how to make memory accesses as fast as possible, the Cell designers have erected several (four, we count) barriers to ensure memory accesses are as slow and cumbersome as possible - if need be.

    Hannibal doesn't say anything about this (that I noticed) - anyone have more info?

    • by xenocide2 ( 231786 ) on Wednesday February 09, 2005 @04:40AM (#11616310) Homepage
      Sounds like an enormous misinterpretation of the concept of caching. As a multimedia programmer on the Cell, it's likely you'll have sole jurisdiction over where stuff goes on your processor. Think of it like programmable cache management. Usually that's pretty stupid, because you want to write things back for longevity, but media is more transient--streams and whatnot. Barriers within that context would be cache levels.

      But perhaps they've got some technical details (enough that they can count distinct features) that I can't find with a basic google search on the subject. It would certainly be out of Sony's previous style, though I understand they recently pulled their heads out of their collective asses and discovered that they were selling a loose metaphor of cars and crowbars at the same time, and came out with a public apology for sucking.
    • I believe it is DRMd. Makes sense from Sony's point of view. Blachford makes a brief reference to it.
  • by ndogg ( 158021 ) <the@rhorn.gmail@com> on Wednesday February 09, 2005 @03:33AM (#11616122) Homepage Journal
    This RAM functions in the role of the L1 cache, but the fact that it is under the explicit control of the programmer means that it can be simpler than an L1 cache. The burden of managing the cache has been moved into software, with the result that the cache design has been greatly simplified. There is no tag RAM to search on each access, no prefetch, and none of the other overhead that accompanies a normal L1 cache. The SPEs also move the burden of branch prediction and code scheduling into software, much like a VLIW design.

    Why? The reason for the instruction window was to simplify software development.

    Of course, I like to play devil's advocate with myself, so I'll answer that question.

    The purpose of the Cell processor is to enhance home appliances, which have a greater reliance upon low latency than they do on precision, accuracy, and performance bandwidth. Thus, one can very safely say that the Cell processor will likely have little purpose in scientific calculations.
    • by taniwha ( 70410 ) on Wednesday February 09, 2005 @05:36AM (#11616493) Homepage Journal
      read it more carefully - they don't eliminate the instruction window - they set it to 2. They can decode exactly 2 instructions/clock (provided they meet some simple dependency rules between the instructions), which makes for easy decode trees and fast cycle times.

      This isn't even a general purpose processor (no MMUs on the cells either, in the traditional sense), nor have they gone superscalar - they have enough registers to keep the thing busy; software can figure that out. This isn't even that new an idea - a cell looks a lot like one of the media processors that were being sold 5-6 years ago.

      You're right it's not designed to be a scientific processor - but then high precision scientific processing is a tiny market these days - way more people want to pay for fast gaming platforms than want to do fluid dynamics or what have you
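
      To put some flesh on "software can figure that out": the classic pattern on a software-managed local store is double buffering. Purely illustrative sketch - the dma_get/dma_wait calls below are made up, not a real API:

      extern void dma_get(void *ls_dst, unsigned long ram_src, unsigned long nbytes, int tag);  /* hypothetical */
      extern void dma_wait(int tag);                                                            /* hypothetical */
      extern void do_something(float x);               /* placeholder for the real work */

      #define CHUNK 4096

      static float buf[2][CHUNK];                      /* two buffers in local store */

      void consume(unsigned long src, int nchunks)
      {
          int cur = 0;
          dma_get(buf[0], src, sizeof(buf[0]), 0);     /* prime the first buffer */

          for (int i = 0; i < nchunks; i++) {
              int next = cur ^ 1;
              if (i + 1 < nchunks)                     /* kick off the next fetch early */
                  dma_get(buf[next], src + (unsigned long)(i + 1) * sizeof(buf[next]),
                          sizeof(buf[next]), next);

              dma_wait(cur);                           /* make sure the current chunk has landed */
              for (int j = 0; j < CHUNK; j++)
                  do_something(buf[cur][j]);           /* crunch while the other buffer fills */

              cur = next;
          }
      }

      While the code works on one buffer, the DMA engine fills the other - the latency hiding that a hardware cache would otherwise do for you.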

  • A proposal for Apple (Score:4, Interesting)

    by Anonymous Coward on Wednesday February 09, 2005 @04:08AM (#11616221)
    A proposal for Apple

    I don't have an account, but this is an honest idea.

    Why doesn't Apple include a Playstation 2 support card into their Macintosh line?

    Problem: The OSX platform has almost no games. I own several macs, I love my macs, and I sincerely enjoy OSX. But it has no games, and that will never get better, especially as simpler games migrate to the web and the complex ones bail for the console market. The PC gaming market has essentially peaked.

    Solution: Embed (or include as a BTO option) a PS2 chipset into a Macintosh. Run the generated display straight through to the graphical overlay plane. Done.

    Everything works. The controllers are trivially converted to use USB. The DVD drive is already there. The display is already there. The USB and FireWire are already there. The harddrive is already there. The "memory cards" are already there.

    Reason: The Macintosh game library explodes instantly to encompass something like 3,000 PS1 and PS2 games. With no need for emulation, the games are guaranteed to work out of the box and provide the Apple ease of use everyone loves. Sony increases their marketshare, Apple gets a viable expanding game library, and users get a vastly better gaming experience on OSX for maybe $40 of parts and engineering.

    Why won't this work?
  • by rufusdufus ( 450462 ) on Wednesday February 09, 2005 @04:19AM (#11616253)
    The difference is that instead of the compiler taking up the slack (as in RISC), a combination of the compiler, the programmer, some very smart scheduling software

    Requiring programmers to learn how to write parallel code that makes good use of this processor seems pretty dicey to me. Few programmers have been trained to write parallel code (most struggle with threading). The fact that no popular programming language has a good parallel model is also a big stumbling block.

    This problem seems to be looming for all the dual-core processors, but I haven't seen a big effort to teach programmers how to adapt.
  • by Modab ( 153378 ) on Wednesday February 09, 2005 @05:56AM (#11616545)
    There are so many people saying dumb things about the Cell and the upcoming PS3, I have to set some things straight. Here goes:
    1. The Cell is just a PowerPC with some extra vector processing.
      Not quite. The Cell is 9 complete yet simple CPUs in one. Each handles its own tasks with its own memory. Imagine 9 computers each with a really fast network connection to the other 8. You could probably treat them as extra vector processors, but you'd then miss out on a lot of potential applications. For instance, the small processors can talk to each other rather than work with the PowerPC at all.
    2. Sony will have to sell the PS3 at an incredible loss to make it competitive.
      Hardly. Sony is following the same game plan as they did with their Emotion Engine in the PS2. Everyone thought that they were losing 100-200 bucks per machine at launch, but financial records have shown that, besides the initial R&D (the cost of which is hard to figure out), they were only selling the PS2 at a small loss initially, and were breaking even by the end of the first year. By fabbing their own units, they took a huge risk, but they reaped huge benefits. Their risk and reward is roughly the same now as it was then.
    3. Apple is going to use this processor in their new machine.
      Doubtful. The problem is that though the main CPU is PowerPC-based like current Apple chips, it is stripped down, and the Altivec support will be much lower than in current G5s. Unoptimized, Apple code would run like a G4 on this hardware. They would have to commit to a lot of R&D for their OS to use the additional 8 processors on the chip, and redesign all their tweaked Altivec code. It would not be a simple port. A couple of years to complete, at least.
    4. The parallel nature will make it impossible to program.
      This is half-true. While it will be hard, most game logic will be performed on the traditional PowerPC part of the Cell, and will thus be normal to program. The difficult part will be concentrated in specific algorithms, like a physics engine or certain AI. The modular nature of this code means you could buy a physics engine already designed to fit into the 128k limitation of the subprocessor and add the hooks into your code (see the sketch after this list). Easy as pie.
    5. The Cell will do the graphics processing, leaving only rasterization to the video card.
      Most likely false. The high-end video cards coming out now can process the rendering chain as fast as the Cell can, looking at the raw specs of 256 GFLOPS from the Cell, as opposed to about 200 GFLOPS from video cards. In two years, video cards will be capable of much more, and they are already optimized for this, where the Cell is not, so video cards will perform closer to the theoretical limits.
    6. The OS will handle the 8 additional vector processors so the programmer doesn't need to.
      Bwahahaha! No way. This is a delicate bit of coding that is going to need to be tweaked by highly-paid coders for every single game. Letting an OS predictively determine what code needs to get sent to what processor to run is insane in this case. The cost of switching out instructions is going to be very high, so any switch will need to be carefully considered by the designer, or the frame-rate will hit rock-bottom.
    7. The Cell chip is too large to fab efficiently.
      This is one myth that could be correct. The Cell is huge (relatively), and given IBM's problems in the recent past with making large, fast PowerPC chips, it's a huge gamble on the part of all parties involved that they can fab enough of these things.
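
    To illustrate point 4, here's roughly the shape of "adding the hooks" - the phys_* names are invented, not any real middleware API, just what the game-logic side on the PowerPC core might look like:

    struct body;                                             /* whatever the engine exposes */
    extern void phys_step_begin(float dt);                   /* hypothetical: start one sim step on an SPE */
    extern void phys_step_finish(struct body *out, int n);   /* hypothetical: block until results are back */
    extern void run_ai_and_game_logic(void);

    void game_frame(struct body *bodies, int nbodies, float dt)
    {
        phys_step_begin(dt);                 /* the SPE crunches physics...        */
        run_ai_and_game_logic();             /* ...while the PowerPC does the rest */
        phys_step_finish(bodies, nbodies);   /* sync up before rendering           */
    }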
    • Your points #4 and #6 almost conflict...

      "Easy as pie."

      and

      "This is a delicate bit of coding that is going to need to be tweaked by highly-paid coders for every single game."

      I know that you are talking, sort of, about two different things, but they are related. While it may be "easy as pie" to add the hooks into your code to call what is essentially a library, making sure that library is scheduled, running, running in the right place and on the right data, and synchronized with everything else in the rig
      • You bring up a good point. I gloss over it because the Emotion Engine would have had a bit of the same problems, yet developers eventually figured out how to use it... it all depends on the tools Sony ships to work with the platform, and also on how you view this parallel code executing.

        Comparing it with trying to work with threads definitely brings up nightmare conditions. But I don't think it has to be a nightmare. We use mammoth parallelization all the time and with great success. We hand off all the re
  • Division of labor (Score:4, Insightful)

    by chiph ( 523845 ) on Wednesday February 09, 2005 @10:11AM (#11617402)
    Reading the article, it reminds me of the typical mainframe architecture, where you have a central supervisory CPU, but most of the specialized work is done by the channel processors.

    In the Cell, the main PPC CPU appears to identify a piece of work that needs to be done, schedules it to run on a SPE, uploads the code snippet to the SPE's LS via DMA transfer, and then goes off and does something else worthwhile while the SPE munches on it. I presume there's an interrupt mechanism to let the PPC know that a SPE has some results to return.

    Compiler writers ought to be able to handle this new architecture well enough -- it's sort of like the current CPU/GPU split, where you've got the main program running on the system CPU, and specialized graphical transform programlets running on the GPU. There may need to be macros or code section identifiers in the source to let the compiler know which to target for that bit of code.
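
    Purely as a guess at what such a marker might look like (nothing like this has been announced - just illustrating the idea):

    /* Entirely made up - no such pragma exists.  The idea is an annotation telling
       the compiler "build this routine for an SPE and emit the upload/DMA glue
       around calls to it"; the rest of the file compiles for the PPC core as usual. */
    #pragma cell offload
    void mix_audio(float *dst, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 0.5f * (a[i] + b[i]);
    }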

    Obviously, this is just the first iteration of the Cell processor. I can see them widening the SPE from single precision to double precision (for the scientific market -- the game market probably doesn't need it), and going to a multi-core design to reduce the die size.

    Chip H.
