
Prospects For the CELL Microprocessor Beyond Games

News for nerds writes "The ISSCC 2005, the "Chip Olympics", is over, and David T. Wang at Real World Technologies has posted a very thorough, objective review of the CELL processor (the slides from the briefing are also available), covering all the aspects disclosed at the conference. Besides the much-touted 256 GFLOPS of single-precision floating-point performance, the CELL processor delivers 25-30 GFLOPS in double precision, which is useful enough for scientific computation. Linus seems interested in CELL, too."
  • by chris09876 ( 643289 ) on Friday February 11, 2005 @09:30AM (#11641112)
    This is a very positive review of the Cell processor. It does seem like a really exciting new piece of technology. It promises a lot, and if it does everything people say it will, it could give the entire industry a big leap forward.

    That being said, I think it's important not to get too excited about it... it's hard to say if it will live up to everything that people have written about it. I'm a bit skeptical. Until I see some production units doing amazing things, I'm cautiously optimistic.
    • I, too, am skeptical. Especially when I see Rambus mentioned. I keep looking around expecting to see a school of their lawyers circling, biding their time before a patent lawsuit frenzy.
      • by sjf ( 3790 ) on Friday February 11, 2005 @10:13AM (#11641558)
        They licensed technology from Rambus.
      • by BobPaul ( 710574 ) * on Friday February 11, 2005 @10:26AM (#11641734) Journal
        I keep looking around expecting to see a school of their lawyers circling, biding their time before a patent lawsuit frenzy.

        I'd be more worried about that if they DIDN'T use Rambus's technology. Rambus can't sue someone who's licensing their tech... they can only sue someone they THINK is using tech too similar to theirs without licensing it. If Cell used some sort of DDR or maybe an in-house memory tech instead, maybe then Rambus would try to sue.
      • Looks like they've licensed the latest RAC from Rambus ... they won't be charged that much for it - I bet Rambus expects to make money on the DRAM side if they end up using it.

        For all their (business) faults, Rambus makes cool technology - in particular stuff that allows parallelism in the CPU to be exposed to the memory hierarchy (or vice versa) - but their hardware hasn't worked well with existing CPUs (x86 for example) because of the bottleneck that the FSB in traditional designs presents. To use

    • by BobPaul ( 710574 ) * on Friday February 11, 2005 @09:44AM (#11641263) Journal
      That being said, I think it's important not to get too excited about it... it's hard to say if it will live up to everything that people have written about it. I'm a bit skeptical. Until I see some production units doing amazing things, I'm cautiously optimistic

      I'm a little bit concerned about the PowerPC Element. The article states that it's not simply a POWER5 derivative, but a core designed for high clock rates at the cost of per-stage logic depth. To quote the author: "The result is a processing core that operates at a high frequency with relatively low power consumption, and perhaps relatively poorer scalar performance compared to the beefy POWER5 processor core."

      That means the PPE in the Cell at 4 GHz will not perform as well as a POWER5 would, could it reach 4 GHz (but since the Cell has 8 SPEs, I would hope it performs better as a whole than a POWER5 at the same frequency). It would be interesting to know at what frequency the two are similar, but since the PPE is integrated into an extended system, this isn't something that can ever really be benchmarked.
      • That means the PPE in the Cell at 4 GHz will not perform as well as a POWER5 would, could it reach 4 GHz (but since the Cell has 8 SPEs

        No, it means it might not. The author suggested his opinion was open to debate. However, it's important to note the different design goals of a POWER5, a 970 (G5), and a Cell. They have different needs, and for general-purpose computing I think Cell will hold up just fine.

      • Remember (Score:3, Informative)

        by temojen ( 678985 )
        POWER5 is not the same as the PowerPC 970 (G5). POWER5 is a really, really expensive high-performance mainframe chip. The G5 is a server/desktop chip.
        • Re:Remember (Score:3, Informative)

          Also, the PPC 970 (G5) is based on the POWER4 CPU.

          Have you ever seen a picture of the POWER5? It's slightly smaller than a Mac mini.
          • Re:Remember (Score:2, Informative)

            Those photos actually show a ceramic multi-chip module containing 4 POWER5s and 4 cache chips. Up to 8 of them can go into a single chassis. Truly geek porn. Also, I've read that the CELL would be about the same size as the Emotion Engine at 25nm. So Sony has already shown that they aren't afraid of using a big chip.
      • It's worth noting that various research papers have done analysis to determine the optimum level of pipelining, and found about 6 to 8 FO-4 gate delays* per stage is optimal - Intel's cancelled Tejas processor was apparently around there and would likely have run at similar clock speeds to the Cell processor. Note that in the real world, you hit other limitations earlier - right now, the main issue is power: chips that fast just run too hot.

        *an FO-4 gate delay is a "fan-out-of-4" gate delay - it's the amou

      • It would be interesting to know at what frequency the two are similar.

        0 MHz?
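    To put rough numbers on the FO4 discussion a couple of comments up (these are my own ballpark figures, not anything from the article): the clock period is roughly the per-stage logic depth plus latch/clocking overhead, both measured in FO4, multiplied by the FO4 delay of the process. Taking ~8 FO4 of logic plus ~3 FO4 of overhead per stage, and an FO4 delay somewhere around 20-30 ps for a 90 nm high-performance process, gives a cycle time of roughly 220-330 ps, i.e. about 3-4.5 GHz - the neighbourhood of the clock speeds being talked about for Cell. Deeper logic per stage (as in POWER5) buys more work per cycle at a lower clock; shallower logic buys clock speed at the cost of more pipeline stages and a bigger branch-misprediction penalty.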
    • by adam31 ( 817930 ) <adam31 @ g m a i l .com> on Friday February 11, 2005 @11:19AM (#11642612)
      The fact is that this will be a much more difficult processor to program efficiently for. This is the same situation that faced developers when the PS2 came out. It's taken game developers 4 years to finally tame the beast, and this chip is everything that made PS2 programming difficult, times 8.

      But look at the graphics in PS2 games now compared to 1st-gen titles. The improvement is incredible! The hardware hasn't changed: it's still just a 300 MHz CPU with 4 MB of graphics memory and no pixel shading. I think we'll see the same maturation process with Cell/PS3, where the 1st-gen games don't live up to the hype but more and more of the Cell's enormous potential is realized with successive generations.

      The question is whether Sony decides that part of the slow evolution in efficient PS2 programming was because of the small, exclusive development community. I would love to see Sony push a Linux PS3 similar to the Linux kit they released for the PS2.

      • by xero314 ( 722674 ) on Friday February 11, 2005 @12:01PM (#11643216)
        The PS3 should not have nearly the problems that the PS2 had in regards to its difficulty of development (a.k.a. lazy developers). Because Cell is a joint project by IBM, Toshiba and Sony, it will have a much larger install base. Rather than being a specialized chip for a specialized system, it is meant to be a general chip usable in many systems. This means more people will be programming for it, not just game developers, who are notorious for their lack of desire to change (hence why the 68000, 6502 and Z80 were so popular for so long). Cell chips should end up making it into systems designed for scientific computing, where developers (a.k.a. computer scientists) will be willing to take more chances and dig deeper into the architecture.

        We will see some of the typical ramp-up time in Cell programs, but seeing as the Cell, if you believe what you read, is so far above and beyond other modern processors (and lazy developers for the PS3 can always let the NVIDIA GPU carry the load in a more traditional fashion), we should see leaps and bounds in program performance fairly quickly.
        • "game developers which are notorious for there lack of desire to change"

          Tell me about it. All the game developers I know are always "640k polygons a second should be enough for anyone!", and "pixels smaller than your thumb detract from gameplay!" or "why would anyone want stereo!?". Losers. Developing financial software is so much more bleeding edge. Why, some of our kids don't even know FORTRAN! They don't even realize that it was the demand for bigger and bigger spreadsheets that delivered those fancy vi

    • That being said, I think it's important not to get too excited about it... it's hard to say if it will live up to everything that people have written about it.

      That's a reasonable attitude towards any new technology. There's always a difference between how something will perform on paper and how it will perform in the real world. And that's assuming that we have a serious innovation, like this one, rather than the vague hype that's much more common.

      Still, we can hope. In computing, change and innovation

  • Transmeta (Score:3, Funny)

    by Anonymous Coward on Friday February 11, 2005 @09:31AM (#11641121)
    Why should Linus be interested in the cell when he has the Transmeta Crusoe?
    • Re:Transmeta (Score:3, Informative)

      by Anonymous Coward
      Transmeta isn't doing the low-heat processors anymore. Quoted from http://arstechnica.com/news.ars/post/20050105-4501.html [arstechnica.com].

      CPU manufacturer Transmeta, known for their low-power processors, is evaluating an exit from the CPU market. Instead of manufacturing chips themselves, their business focus would shift towards buzzwords: licensing their intellectual property and the formation of strategic alliances to utilize their processor design as well as their research and development skills.
      • Re:Transmeta (Score:3, Insightful)

        by mirko ( 198274 )
        These are not buzzwords: ARM [arm.com] has been doing this for years and is a very profitable R&D company.
      • Re:Transmeta (Score:3, Interesting)

        by BobPaul ( 710574 ) *
        Transmeta isn't doing the low-heat processors anymore. Quoted from http://arstechnica.com/news.ars/post/20050105-4501.html [arstechnica.com].

        Just because they aren't manufacturing anymore doesn't mean they're exiting the business entirely. There just might not be a "Transmeta" anymore. Instead there will be something like an "Intel Pentium 5 using low-power Transmeta technology" (well, probably not, but you get the idea).

        Transmeta will be doing R&D for low-power processors for years to come, I'm quite sure.
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Friday February 11, 2005 @09:33AM (#11641143)
    Comment removed based on user account deletion
    • How much will a PS3 cost to manufacture?
      If I were a computer company, I could buy them without the game-specific stuff, load Linux on them, and sell them as cheap alternative computers... but that's just me (assuming Linux and friends are compiled for CELL in the next few months, of course).


      The problem with that is a PS3 won't be anywhere close to a GP machine. It's going to require a lot of driver tweaks, a load of hardware reconfiguration, and defeating the DRM. By the time someone figures out how to do that cheaply, comp
  • by Anonymous Coward
    ...playing The Game of Life.
    • by ceeam ( 39911 ) on Friday February 11, 2005 @10:03AM (#11641461)
      You mean.. If one part of this chip is surrounded by more than three other parts actually doing anything useful then it will die from overheating? : )
    • Some people may not be familiar with John Conway's Game of Life [bitstorm.org], though they have probably seen screen savers that demonstrate it. It is not really a game but the unfolding of a cellular automaton simulation, where each grid point's state depends on the state of its 8 neighbors according to a set of simple rules.

      I actually thought immediately of Cellular Automata when I read some of the specs on the new Cell, and the name may just be a coincidence, but maybe not. It would be interesting to see a Cell architecture wher

      • The life algo is all integer math, and won't improve much by use of the Cell....

        • True, true, and any modern CPU is way overkill for "The Game of Life" anyway, but this doesn't mean cellular automata have to be written as integer only. What new routines could you write that communicate between 27 processors to simulate 3D processes in a cellular-automata way? Some new protein folding algorithms, perhaps.
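    Since the thread has wandered to cellular automata: the update rule being referenced is tiny, and the "all integer math" point is easy to see in code. A minimal C sketch of one Life generation on a small toroidal grid (grid size, seed pattern, and output format are arbitrary choices of mine, not anything from the article):

        #include <stdio.h>
        #include <string.h>

        #define W 16
        #define H 16

        /* One generation of Conway's Life on a toroidal W x H grid:
           each cell counts its 8 neighbours and applies the B3/S23 rule. */
        static void life_step(unsigned char cur[H][W], unsigned char next[H][W])
        {
            for (int y = 0; y < H; y++) {
                for (int x = 0; x < W; x++) {
                    int n = 0;
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++) {
                            if (dx == 0 && dy == 0) continue;
                            n += cur[(y + dy + H) % H][(x + dx + W) % W];
                        }
                    /* born with exactly 3 neighbours, survives with 2 or 3 */
                    next[y][x] = (n == 3) || (n == 2 && cur[y][x]);
                }
            }
        }

        int main(void)
        {
            unsigned char a[H][W] = {{0}}, b[H][W];
            a[1][2] = a[2][3] = a[3][1] = a[3][2] = a[3][3] = 1;  /* a glider */
            for (int gen = 0; gen < 4; gen++) {
                life_step(a, b);
                memcpy(a, b, sizeof a);
            }
            for (int y = 0; y < H; y++, putchar('\n'))
                for (int x = 0; x < W; x++)
                    putchar(a[y][x] ? '#' : '.');
            return 0;
        }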
  • Deja Vu (Score:5, Interesting)

    by DrSkwid ( 118965 ) on Friday February 11, 2005 @09:36AM (#11641180) Journal
    Sony so badly wants its next-generation game console to offer a super-realistic "virtual reality" experience, the company will design and build its own advanced 128-bit processor to realize this goal.

    Processors inside game consoles usually toil away in anonymity, derided as poor cousins to desktop chips such as Intel's Pentium line. But with Sony Computer Entertainment's ambitious plan, its chips could outclass the offerings of the world's largest chipmaker--if all goes well.

    ...

    The system is so advanced, MicroDesign Resources analyst Keith Diefendorff wrote in a report that the system "has the potential to swipe a chunk of the low-end market from under the noses of PC vendors." He wrote that the platform may "signal the company's intention to move upscale from current game consoles, cutting a wider swath through the living room," with its abilities to function like a stand-alone DVD player and Internet set-top box.

    Sony puts on game face with new chip [com.com]
    Published: May 5, 1999, 1:25 PM PDT
    By Jim Davis
    Staff Writer, CNET News.com
    • Re:Deja Vu (Score:5, Informative)

      by nutshell42 ( 557890 ) on Friday February 11, 2005 @10:12AM (#11641536) Journal
      He wrote that the platform may "signal the company's intention to move upscale from current game consoles, cutting a wider swath through the living room," with its abilities to function like a stand-alone DVD player and Internet set-top box.

      Well, one reason the PS2 sold like hot cakes was that it was one of the cheapest DVD players at the time (at least in Japan). There is media player software available and it's quite popular. The reason it isn't an Internet set-top box is that no one wants Internet set-top boxes; they died a painful death. Now, there's no EE desktop PC because it's too slow, but the differences between Cell and PS2 in this regard are:

      (a) Cell was co-designed by IBM, which has an interest in selling workstations etc. with that chip; Sony doesn't, it's not their business
      (b) Cell is designed for multiprocessor environments, so if it becomes too slow for a task you can simply throw more processors at it
      (c) In 2000 clock speeds were still doubling every 18 months; that has stopped. x86 is going the way of multiple cores too, so programmers will have to get used to parallel design anyway

      That doesn't mean it will replace x86 or even make a dent, but it means that, unlike the EE, it's designed for such stuff, and one of the companies behind it sells specialized workstations, so it's at least a possibility.

      And this time you can find more credible sources than CNET (CNET is part of the yellow press of computer news sites, almost as bad as Yahoo News) who'll tell you that.


      • (a) Cell was co-designed by IBM, which has an interest in selling workstations etc. with that chip; Sony doesn't, it's not their business

        There are a lot of VAIO [sonystyle.com] developers who will be unhappy to hear that.

        Sure, IBM and Sony both like the Cell CPU a lot. However, IBM likes the PPC chip that Apple uses, and yet it still hasn't a) taken over the world, or even b) been put into use by IBM themselves. Why doesn't IBM use Apple workstations across the enterprise? After all, they make the CPU, and for awhile eve
        • Re:Deja Vu (Score:4, Interesting)

          by MrResistor ( 120588 ) <peterahoff@gmYEATSail.com minus poet> on Friday February 11, 2005 @12:36PM (#11643673) Homepage
          Maybe--and this is a big maybe--if you needed a CPU that needed high visualization components. But then I guess you'd go with SGI.

          And why wouldn't IBM be going after SGI's market? I think your points hold in the consumer space, but in a specialized market like that I think it becomes a lot easier to gain a foothold simply based on technical merit.

          Heck, better yet, and in what seems to be more inline with IBM's current direction, why wouldn't they try to get SGI to switch to Cell?

          IBM likes the PPC chip that Apple uses, and yet it still hasn't a) taken over the world, or even b) been put into use by IBM themselves. Why doesn't IBM use Apple workstations across the enterprise? After all, they make the CPU, and for awhile even made the hard drives.

          Are you sure that isn't one of their long-term goals? IBM is a big company, and it hasn't been that long since they've decided to change how they do things. Just because you can't see any evidence that they're making that switch doesn't mean they aren't working on it. I mean, they aren't even out of the Wintel PC business yet, and won't be, at least in name, for another 5 years. Given how much MS loves it when their resellers start offering competitive products, that seems like a very important first step in any such plan.

          When you walk into an IBM facility, what brand of computers are sitting on the desks? I honestly don't know, but I would hope they eat their own dogfood. I very much doubt you'd see a Dell on every desk.

          If Apple has trouble getting developers to code for their CPU, I just don't see who would develop for a VAIO (or ThinkPad) Cell workstation or laptop

          Porting Linux takes care of a large portion of that. Yeah, I know Linux is pretty much in the same boat as Apple, but it's a real easy way to significantly boost their development community, and provides a huge amount of instant functionality.

        • IBM likes the PPC chip that Apple uses [] even b) been put into use by IBM themselves.

          Really? That might surprise IBM [ibm.com]. Guess they better stop selling them then...
          And if by "likes" you mean designed and fabbed the 970 for Apple at their request, then yes, they like it fine. And while you think it hasn't taken over the world, the core design is going to be used (to varying degrees) in all 3 next-gen gaming systems. Since IBM is simply acting as chip fabricator, that ain't bad for them at all. (How many m

      • by simpl3x ( 238301 )
        "Cell was co-designed by IBM which has an interest in selling workstations etc with that chip..."

        That's conjecture... IBM makes more money designing and fabbing chips than it does in PCs, as the sale of the division attests. But could Sony be one of the PC outfits interested in licensing a compatible version of OS X for the living room? Networked workstations running the beast might be of interest to IBM, however. Does your cash register really need Windows?
  • Why would Linux be ported to a gaming platform or a scientific platform? (The current PS2 runs Linux.)

    Why

    Because they can.

    Depending on Sony's marketing, think of the DBZ tie-ins... Imagine playing the Cell Games on a Cell-based game system.
  • by LourensV ( 856614 ) on Friday February 11, 2005 @09:44AM (#11641267)
    Some time ago Chuck Moore [colorforth.com] proposed the 25x [slashdot.org], a single chip holding a 5x5 array of simple processors. That's what this reminded me of when I first read about it. As Mr. Moore said in that Slashdot interview, "[...] the 25x is a solution looking for a problem." Cell theoretically has a lot of performance, and we're talking FLOPS not MIPS. It will certainly be useful or even revolutionary in televisions and game computers, as well as for scientific calculations. I don't see it making your desktop or server much faster though. Those don't need more FLOPS, they need more I/O bandwidth and faster peripherals, and perhaps more MIPS. I can see Cell workstations, but in the same way as you have SPARC workstations and laptops now: as development tools for the "real" hardware.
  • More Cell reviews? (Score:3, Insightful)

    by Anonymous Coward on Friday February 11, 2005 @09:45AM (#11641282)
    Sheesh, /. might as well make a Cell image & category, they post so many articles about it!
    • by adam31 ( 817930 ) <adam31 @ g m a i l .com> on Friday February 11, 2005 @10:44AM (#11642001)
      Well, you can't say it isn't news for nerds. And this article has enough added information in it that I thought it worth posting. Most Cell news stories are dumbed down for the non-nerds, whose most pressing question is "Does it run Windows?" This article is the best source I've seen of all the info we know about Cell, without a painful amount of editorializing.

      It seemed there was a lot of misinformation/confusion going around because some people heard it supported DP floats and some people heard it used AltiVec (which doesn't support DP). So half the people extrapolated that IBM had ditched AltiVec (i.e. VMX), and the other half assumed there was no DP support... both of which angered people. The truth (according to this article) is that it uses both: a version of VMX that supports DP. Whew!

      The article also points out that the SP floats aren't truly 754-compliant, as they round toward zero on cast to int. This makes it compatible with that horrible C/C++ truncation cast (if anyone knows why C opts to round toward zero, please let me know!). However, rest assured, DPs are 854-compliant.

      Also, the article suggests that there is a memory limit (at least initially) of 256MB:

      The maximum of 4 DRAM devices means that the CELL processor is limited to 256 MB of memory, given that the highest capacity XDR DRAM device is currently 512 Mbits. Fortunately, XDR DRAM devices could in theory be reconfigured in such a way so that more than 36 XDR devices can be connected to the same 36 bit wide channel and provide 1 bit wide data bus each to the 36 bit wide point-to-point interconnect. In such a configuration, a two channel XDR memory can support upwards of 16 GB of ECC protected memory with 256 Mbit DRAM devices or 32 GB of ECC protected memory with 512 Mbit DRAM devices.

      • The article also points out that the SP floats aren't truly 754-compliant, as they round-toward-zero on cast to int.

        As far as I remember from implementing the spec years ago, the rounding mode can be varied. Indeed there are C runtime functions on many platforms that set this and other properties for floating point operations.
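    To make the rounding discussion above concrete, here is a small, self-contained C99 sketch of the two behaviours: the truncation you get from a plain cast, versus lrint(), which honours the rounding mode set with fesetround(). This is standard <fenv.h>/<math.h>, not anything Cell-specific; whether a particular Cell toolchain exposes these functions is an assumption on my part.

        #include <stdio.h>
        #include <math.h>
        #include <fenv.h>

        /* Required by C99 when a program changes the FP environment;
           some compilers merely warn that the pragma is unsupported. */
        #pragma STDC FENV_ACCESS ON

        int main(void)
        {
            double x = 2.7;

            /* A plain cast always truncates toward zero, whatever the mode. */
            printf("(int)%.1f           = %d\n", x, (int)x);

            /* lrint() honours the current rounding mode. */
            fesetround(FE_TONEAREST);
            printf("lrint(%.1f) nearest = %ld\n", x, lrint(x));

            fesetround(FE_TOWARDZERO);
            printf("lrint(%.1f) to zero = %ld\n", x, lrint(x));

            fesetround(FE_UPWARD);
            printf("lrint(%.1f) upward  = %ld\n", x, lrint(x));

            return 0;
        }

    (Link with -lm on most Unix-likes.)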
  • by acomj ( 20611 ) on Friday February 11, 2005 @09:48AM (#11641308) Homepage
    I like the fact that the presenters didn't remember/know what all the acronyms were in the Cell diagram. I like the interview technique too. Get 'em drunk and watch 'em talk.

    I was wondering why the article was so in depth.

    Quoth
    "
    After some discussion (and more wine), it was determined that the ATO unit is most likely the Atomic (memory) unit responsible for coherency observation/interaction with dataflow on the EIB. Then, after the injection of more liquid refreshments (CH3CH2OH), it was theorized that the RTB most likely stood for some sort of Register Translation Block whose precise functionality was unknown to those outside of the SPE. However, this theory would turn out to be incorrect.
    "
  • by akc ( 207721 ) on Friday February 11, 2005 @10:10AM (#11641516) Homepage
    I've been reading about the Cell processor for a few weeks now, and there is never any discussion about the operating system architecture necessary to get this thing to perform.

    As I see it, it's a PowerPC of OK quality with 8 subsidiary processors optimised for running a relatively simple task on a relatively small amount of memory.

    So - port Linux to it? But how? Relatively easily, to make use of the main processor; but what sort of subsystem do you build so that the subsidiary processors get used to their full potential? Perhaps part of X could be configured to run on these processors - but that would be a very manual tweak to make use of the architecture. And with the best will in the world, these processors would then sit around unused most of the time.

    What you need is a more general concept, probably at the programming language level, in which algorithms can be expressed in such a way that the operating system can detect that they can be loaded into these subsidiary processors to be executed.

    But there doesn't seem to be anything about that in the news out there. Presumably Sony is going to do something for the PS3 - but what? And is it going to be general purpose, since for their purposes much of the benefit will come from a super motion-graphics processor for games?

    Until we understand what the software infrastructure to make use of this new chip's architecture will be, I can't see how we can make predictions about its success in the more general processor market. Before then it's just marketing hype.
    • by Anonymous Coward
      Well, it seems to be ccNUMA. The coprocessors can access shared memory but copy to local memory to do the processing. The PPC control processor is there to set up stuff for the special processors, since they're not equipped to communicate with the outside world themselves.

      The interesting thing which most commentators seem to have missed is the virtualization technology. If you're going to have Cell-based devices job out stuff to execute on any nearby Cell processors on the network, you're going to need

    • I don't think the operating system could make much use of the APUs. The best that can be hoped for is an OS that somehow allocates apulets to the APUs, but since the APUs will work best if used as stream processors this allocation is... well... non-trivial.

      However, given a way to allocate these units to userspace programs, there are lots of programs that could benefit. X and mplayer come to mind, provided someone implements the critical code for APUs, which may well mean coding in assembly.

      What you nee
    • by ReelOddeeo ( 115880 ) on Friday February 11, 2005 @12:30PM (#11643584)
      What software will it run? Software "cells".

      A software cell runs on one of the APUs (or SPUs, or whatever we're currently calling them). It is sandboxed. When the main processor sends a software cell to one of the sub-processors, it specifies exactly what memory the hardware will allow that processor to access.

      You can run a software cell from an untrusted source. The software cell is a combination of code/data. The processor performs some function on it. While running, the sub processor has access only to the memory that the main processor designated.

      Applications like the X Window System, Xine, MPlayer, mpg123, LAME, XMMS, etc., ad infinitum, can be designed with their own software cells. In fact, entire libraries of software cells can be constructed and reused: libraries of multiplexers, demultiplexers, encoders, decoders, compositing, FFTs, transcoders, renderers, shaders, GIMP filters (blur, effects, etc.), and so on.

      If you're building an application, such as SETI at Home, then you organize your program as software cells. You can farm out as many software cells as you have hardware cell processors to handle.

      Cells can be safely shuffled from device to device. Spare cell capacity in your TV or PS3 can run your SETI at Home, or your Xine cells.

      The Cell processor isn't very helpful for, say OpenOffice.org spreadsheets or drawings, or spellchecking. But word processing isn't the function that usually needs super fire-breathing processor power.

      It is not inconceivable that things like spreadsheet calculations can be effectively improved using software cells. But this is not as obvious (at least to me) as the former applications that I mentioned.

      So if you had a 2 GHz main processor and one or more Cell co-processors (a variable, expandable number) you would have a tremendous amount of computing power. The applications that demand extraordinary power would have it -- even with just one cell coprocessor. And this was quite a list of applications I mentioned above. Just about anything audio-visual or doing massive parallel operations on pixels, or 3d.
      • But how is all this going to be multitasked?
        When my process is switched out on the main CPU, should the running SPUs also be suspended somehow and their context saved along with the main context? Since their local memory isn't protected in any way, that would be quite a massive context, wouldn't it? If this is not to be done, access to the SPUs should be policed by the OS. Say, while some process has a device open that controls access to an SPU, no other process can open the same device.
        • When Apache and Postfix have the main CPU, you don't want your mp3 decoding to stop, do you?

          A single encode/decode task would ideally be coded as a single software cell. Perhaps even multiple functions in a single software cell. I.e. decode mp3, and add reverb as a single software cell that uses up a single SPU.

          I run The GIMP and do a massive filter, and it realizes that there are seven SPUs available, so it issues five hundred software cell problems (non-serial) that are consumed and processed by t
          • What I reckon is needed is something akin to the memory management subsystem in a traditional OS - i.e. something that allocates SPUs to requesting tasks (possibly on a priority basis) and puts the "stalled" tasks on backing store.

            Without doing any sums, it may be that some tasks are sped up so much that the SPU can be multiplexed between lots of tasks per second, so that they are effectively shared by several tasks at the same time - much like the CPU is today.

            The other thing to perhaps consider then is
          • Yeah, yeah, just imagine a Beowulf cluster of... nevermind :)
            But what prevents all these programs from stepping on each others' toes when they submit tasklets to SPUs? Will the arbitration be performed benevolently by a mutual convention or enforced by the OS?
        • Go to the article and read about the streaming in the SPE. You can overlap incoming with outgoing. So thread 1 is DMAing its data in while thread 0 is DMAing its data out.
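    A sketch of the double-buffering pattern described in this sub-thread, in plain portable C. The memcpy() calls stand in for what on a real SPE would be asynchronous DMA transfers between main memory and the local store (so there is no actual overlap here); the buffer size and the process() kernel are placeholders of mine:

        #include <stddef.h>
        #include <string.h>

        #define CHUNK 4096   /* placeholder "local store" buffer size, in floats */

        /* Placeholder compute kernel: scale a block in place. */
        static void process(float *buf, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                buf[i] *= 2.0f;
        }

        /* Double-buffered streaming over a large array in main memory.
           buf[cur] is being processed while buf[1 - cur] holds the chunk
           that, on real hardware, would currently be in flight via DMA. */
        void stream_all(float *main_mem, size_t total)
        {
            static float buf[2][CHUNK];
            int cur = 0;
            size_t done = 0;

            size_t first = total < CHUNK ? total : CHUNK;
            memcpy(buf[cur], main_mem, first * sizeof(float));         /* "DMA get" */

            while (done < total) {
                size_t n = total - done < CHUNK ? total - done : CHUNK;

                /* Fetch the *next* chunk into the other buffer before working
                   on this one; this is the transfer that real hardware would
                   overlap with the computation below. */
                size_t next_off = done + n;
                if (next_off < total) {
                    size_t next_n = total - next_off < CHUNK ? total - next_off : CHUNK;
                    memcpy(buf[1 - cur], main_mem + next_off, next_n * sizeof(float));
                }

                process(buf[cur], n);
                memcpy(main_mem + done, buf[cur], n * sizeof(float));   /* "DMA put" */

                done += n;
                cur = 1 - cur;
            }
        }

    On the real hardware the "get" of the next chunk would be issued asynchronously before process() starts and waited on afterwards, which is what lets transfer and computation overlap.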
  • What's the point? (Score:5, Insightful)

    by jeif1k ( 809151 ) on Friday February 11, 2005 @10:15AM (#11641577)
    Unless you are computing digital orreries, whether it has 256 GFLOPS or 256 TFLOPS makes little difference if the memory bandwidth isn't substantially increased, and people don't increase memory bandwidth because that has expensive consequences all over the system.

    On the whole, my impression is that current mainstream CPUs have a pretty reasonable balance between CPU power and all the other system components. Changing just the CPU without making substantial (and expensive) changes to the rest of the system will not magically give you more performance.
    • Re:What's the point? (Score:5, Informative)

      by dfj225 ( 587560 ) on Friday February 11, 2005 @10:43AM (#11641991) Homepage Journal
      It seems like Cell will have more memory bandwidth than the processors commonly used today. From this article [yahoo.com]:

      " The memory and processor bus interfaces designed by Rambus account for 90% of the Cell processor signal pins, providing an unprecedented aggregate processor I/O bandwidth of approximately 100 gigabytes-per-second. "
    • Re:What's the point? (Score:4, Informative)

      by jdb8167 ( 204116 ) on Friday February 11, 2005 @10:55AM (#11642182)
      Why do you think they licensed the XDR interface from RAMBUS?

      There are 2 dual XDR interfaces. Each interface is running at 6.4 GB/s. So 4*6.4 = 25.6 GBytes/sec.

      So the CELL memory design is at least 4 times faster than current DDR2 memory systems.
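    A quick back-of-envelope on what those numbers mean (rough figures using the peak values quoted in this thread, so treat it as an estimate rather than a measurement): 256 GFLOPS of single-precision compute fed by roughly 25.6 GB/s of main-memory bandwidth works out to about 10 floating-point operations per byte fetched, or around 40 operations per 4-byte float. In other words, code that streams data straight from XDR DRAM and does only a handful of operations per value will be bandwidth-bound long before it touches the peak FLOPS; the design clearly expects you to stage data into the SPEs' local stores and reuse it.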
    • by thpr ( 786837 ) on Friday February 11, 2005 @10:59AM (#11642281)
      Changing just the CPU without making substantial (and expensive) changes to the rest of the system will not magically give you more performance.

      Substantial changes, maybe. Expensive? Perhaps not. This all depends on the base assumptions from which you operate. One of the fundamental assumptions in today's existing systems is that any and all work should be done to maximize the utilization of the CPU. However, when considering how to design other types of systems, such may not be true (it may make sense to minimize the memory footprint, for example).

      If you've ever done some detailed algorithm work, you will quickly realize that there are many algorithms where you can make tradeoffs between memory and CPU time. The 'simplest' of these are the algorithms that are breadth first vs. depth first, which can trade off exponential in memory vs. exponential in time. [For a 'trivial' example, try forming the list of all operational assignments containing 6 variables and which use %, +, -, *, /, ^, &, ~, and ()... less than 50 lines of Perl and you'll quickly blow through the 32-bit memory limit if written depth first, or take overnight to run breadth first]

      The significant question which has been brought up - and which remains unanswered - is what software development tools will be made available. Once this is better answered, we will all be in a better position to determine what fundamental assumptions have been changed, and therefore how we can follow the new assumptions through to conclusions about the net performance of the processor and machine in which it is contained.
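    The breadth-vs-depth example above is hard to reproduce in a few lines, but the general memory-for-time trade described there is easy to illustrate. A toy C sketch using memoization (my own illustration, not the parent's Perl program): the second version spends a table of memory to turn an exponential-time computation into a linear-time one.

        #include <stdio.h>

        /* Exponential time, essentially no extra memory. */
        static unsigned long fib_slow(int n)
        {
            return n < 2 ? (unsigned long)n : fib_slow(n - 1) + fib_slow(n - 2);
        }

        /* Linear time, linear extra memory: the classic trade.
           0 works as the "not yet computed" sentinel because fib(n) > 0
           for n >= 1, and n < 2 is handled before the table is consulted. */
        static unsigned long memo[64];
        static unsigned long fib_memo(int n)
        {
            if (n < 2) return (unsigned long)n;
            if (memo[n] == 0)
                memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
            return memo[n];
        }

        int main(void)
        {
            printf("fib(40) = %lu (naive, takes a second or two)\n", fib_slow(40));
            printf("fib(40) = %lu (memoized, effectively instant)\n", fib_memo(40));
            return 0;
        }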

  • by Anonymous Coward
    Folks need to keep in mind these are max figures assuming software is perfectly written to take advantage of parallelization (does that word exist?). This means that most computer programs will hit nowhere near these rates, but super-optimized versions of things like SETI@home and an MPEG encoder/decoder could take advantage of it.

    Just remember how many developers complained about the Emotion Engine from the PS2 and how it was such a bitch to program for; this will be worse. It's first gonna require a special
    • these are max figures assuming software is perfectly written to take advantage of parallelization ... this means that most computer programs will hit nowhere near these rates, but super-optimized versions could take advantage of it... just remember how many developers complained about the Emotion Engine from the PS2 and how it was such a bitch to program for, this will be worse

      This is essentially what happened with the PS2. 1st gen game teams thought the compiler would handle more of the task of keeping the v

      • All the indications I've seen show that the SPEs will be programmed via a "job" model, not a thread model. So you have jobs ("cells") that contain some code and some data; you ask the OS to ship one off to an SPE, and then go do something else while you wait for the results.
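    A rough sketch of the "job" model described above, using POSIX threads as a stand-in for SPEs, since no Cell toolchain or OS interface was public at the time. The job structure, queue, worker count, and kernel are all placeholders of mine, not any Sony/IBM API:

        #include <pthread.h>
        #include <stdio.h>

        #define NUM_WORKERS 8          /* stand-in for 8 SPEs */
        #define NUM_JOBS    32

        /* A "job": some code (fn) plus the block of data it is allowed to touch. */
        typedef struct {
            void (*fn)(float *data, int n);
            float *data;
            int    n;
        } job_t;

        static job_t jobs[NUM_JOBS];
        static int next_job = 0;
        static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

        /* Worker: repeatedly pull a job off the queue and run it to completion. */
        static void *worker(void *arg)
        {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&qlock);
                int i = next_job < NUM_JOBS ? next_job++ : -1;
                pthread_mutex_unlock(&qlock);
                if (i < 0) return NULL;
                jobs[i].fn(jobs[i].data, jobs[i].n);
            }
        }

        /* Example kernel: square every element of the block handed to this job. */
        static void square_block(float *data, int n)
        {
            for (int i = 0; i < n; i++)
                data[i] *= data[i];
        }

        int main(void)
        {
            enum { N = 1 << 16 };
            static float big[N];
            for (int i = 0; i < N; i++) big[i] = (float)i;

            /* Carve the array into independent jobs, one block each. */
            int block = N / NUM_JOBS;
            for (int j = 0; j < NUM_JOBS; j++) {
                jobs[j].fn   = square_block;
                jobs[j].data = big + j * block;
                jobs[j].n    = block;
            }

            pthread_t tid[NUM_WORKERS];
            for (int w = 0; w < NUM_WORKERS; w++)
                pthread_create(&tid[w], NULL, worker, NULL);
            for (int w = 0; w < NUM_WORKERS; w++)
                pthread_join(tid[w], NULL);

            printf("big[100] = %.1f\n", big[100]);   /* expect 10000.0 */
            return 0;
        }

    The important property is the one the comment above mentions: each job is self-contained (code plus the data it is allowed to touch), so the issuer can hand it off and go do something else until the results come back.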
  • by Doc Ruby ( 173196 ) on Friday February 11, 2005 @10:30AM (#11641795) Homepage Journal
    The real promise of these Cells is Internet MPP. IBM (and Sony) claim that Cell PCs will be able to cluster "natively" across Internet-latency TCP/IP networks, like broadband. If they deliver on that, then performance questions will revolve around interoperable network apps, not just the raw CPU HW.

    Intel's Pentium architecture was built to accommodate 6-way direct CPU interconnects. The idea was to build "cubic" structures for MPP computers. It took until the P4 to really deliver any of those, almost 10 years after the architecture was released. And the software is still bleeding-edge, and hand-rolled for each install. MPP SW techniques have evolved a lot since then, so perhaps the Cell will actually deliver on these "distributed supercomputer" promises.
  • Sure thing (Score:2, Interesting)

    by Reanimated ( 775900 )
    5 years ago the "Emotion Engine" from Sony was supposed to "steal a chunk" of the PC processing market. Didn't happen. Won't happen.
  • by YU Nicks NE Way ( 129084 ) on Friday February 11, 2005 @10:52AM (#11642136)
    You may not like Michael Kanellos usually, but I think he's hit the nail on the head here [com.com].

    This is a bigger, hotter, less stable chip with an exotic and hard-to-write-for architecture. That's fine for a gaming system with a dedicated revenue stream and no competition. It's not gonna make it outside that domain.
  • Maybe... (Score:4, Funny)

    by gUmbi ( 95629 ) on Friday February 11, 2005 @11:19AM (#11642615)
    Since IBM is now involved, should it be called the PS/3 instead of the PS3?
  • My view of the Cell chip is that it's actually 2 different kinds of chips put together. It has a general processor core (the POWER5 core), and essentially co-processors that are optimized for a totally different class of programs. The POWER5 chip would let it run your normal office applications, but the SPEs allow the chip to do things like graphics processing, audio processing, simulations, etc. All those problems that lend themselves naturally to a vectorized solution. Together, the 2 kinds of cores on a
    • It has a general processor core (the POWER5 core)

      Essentially correct, but it's not a Power5 derivative.

      Together, the 2 kinds of cores on a single chip have the potential to do a lot. But there have to be tools to allow developers to make use of that potential. Especially as vectorized programs are not easy to write and optimize, the quality of the development tools will be very important in deciding the success of the chip.

      Right. And it's interesting that the CoreImage and CoreVideo APIs in the next v
  • 250 Gigaflops? (Score:5, Insightful)

    by CTho9305 ( 264265 ) on Friday February 11, 2005 @11:45AM (#11642995) Homepage
    People seem to think this is leaps and bounds above everything else, but they're missing the details. In order to obtain that much performance, you'll need a task which parallelizes well so it can be broken up into chunks for the 8 SPEs. Graphics rendering falls into this set of tasks, but a lot of general applications just don't gain that much from parallel processors. Even when you have a task that does parallelize, writing parallel code is quite a bit harder than writing code for just a single thread of execution.

    I've seen a lot of hype about having the Cell in your laptop talk to the Cells in your desktop, microwave, and TiVo, but you have to consider real-world limitations. When you set up a network like that (presumably wireless), you're going to be limited to around 100Mbps. In computer clusters and supercomputers, one of the main limitations of performance is the communication bandwidth available between processors, and the latency of the network. To build a "home supercomputer", you not only need a task that parallelizes well, but one that doesn't require so much inter-node communication that it's held back by a slow network. You can't work around this problem with hardware magic - if the task you're working on requires lots of communication bandwidth, you're going to be held back.

    So how much beyond a modern PC is 250 GFLOPS anyway? Not much! A GeForce FX at 500 MHz does 200 gigaflops [zdnet.co.uk]. An AMD Athlon's peak performance is 2.4 GFLOPS at 600 MHz [amd.com]... if we scale this up to 2.2 GHz (high-end Athlon), that's 8.8 GFLOPS (note: as we're talking about theoretical performance, nonlinear factors like bus speeds can be ignored). Basically, if the Cell dedicates most of its power to graphics rendering, you'll have computation power in the same range as a fast PC of today. Given that we're not going to see any products based on the Cell for a while, this isn't going to be the end of the world for Intel and nVidia (let alone the fact that Cell isn't x86).

    Consoles using the Cell will have the advantage of only having to render for TV resolutions - at most 1080 lines, while PCs will be rendering at up to 1600x1200. But if you look at recent history, you can compare the Xbox to a then-good PC with a GeForce3 (which came out at around the same time) - the Xbox looked better, but PCs did catch up and surpass its performance, and it didn't take all that long. Consoles have to be very high-end when they're released, because the platform doesn't change for 2-3 years, and they still need to be "good enough" after a couple years, before the next generation is released.
    • The GeForce FX's 200 gigaflops aren't all general-purpose though. A lot of them come from fixed-purpose circuits that you can't use for your own calculations. For a general purpose program, you've got about 80 gigaflops, of which you can extract 50-60 gigaflops in real-world programs.
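    To put a rough number on the "doesn't parallelize" concern raised at the top of this thread: Amdahl's law says the best speedup you can get is 1 / ((1 - p) + p/N), where p is the fraction of the work that parallelizes and N is the number of processors. If 90% of a program parallelizes perfectly across the 8 SPEs, that is 1 / (0.1 + 0.9/8), or about 4.7x rather than 8x; at 50% parallel you get only about 1.8x. These are illustrative numbers of mine, not figures from the article, but they show why the peak GFLOPS figure says little about general application performance.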
  • by Hannibal_Ars ( 227413 ) on Friday February 11, 2005 @01:04PM (#11644018) Homepage
    If you're going to rip the links out of one of my Ars news posts [arstechnica.com] and submit them to slashdot (in the same order in which I linked them, no less), then at least credit your source.

  • think to the future (Score:3, Interesting)

    by coult ( 200316 ) on Friday February 11, 2005 @02:29PM (#11645145)
    Most of you are thinking of today's applications... but what about things like eye/head tracking, voice recognition, face recognition, telepresence, real-time cinema-quality CGI, etc.? Those are tasks requiring large-scale numerical computation, and they all might appear on your desktop in the not-too-distant future thanks to chips like CELL and its descendants.
