Prospects For the CELL Microprocessor Beyond Games
News for nerds writes "The ISSCC 2005, the "Chip Olympics", is over and David T. Wang at Real World Technologies put up a very thorough review of the CELL processor (the slides for the briefing are also available), covering all the aspects disclosed at the conference. Besides the much touted 256 GFlops single-precision floating point performance, the CELL processor has 25-30 GFlops in double-precision, which is useful enough for scientific computation. Linus seems interested in CELL, too."
I'll believe it when I see it (Score:5, Insightful)
That being said, I think it's important not to get too excited about it... it's hard to say if it will live up to everything that people have written about it. I'm a bit skeptical. Until I see some production units doing amazing things, I'm cautiously optimistic.
Re:I'll believe it when I see it (Score:3, Insightful)
Re:I'll believe it when I see it (Score:4, Informative)
Re:I'll believe it when I see it (Score:5, Interesting)
I'd be more worried about that if they DIDN'T use Rambus's technology. Rambus can't sue someone who's licensing their tech... they can only sue someone they THINK is using tech too similar to theirs without licensing it. If Cell used some sort of DDR or maybe an in-house memory tech instead, maybe then Rambus would try to sue.
Re:I'll believe it when I see it (Score:2)
For all their (business) faults Rambus makes cool technology - in particular stuff that allows parallelism in the CPU to be exposed to the memory hierarchy (or vice-versa) - but their hardware hasn't worked well with existing CPUs (x86 for examples) because of the bottleneck that the FSB in traditional designs presents. To use
Re:I'll believe it when I see it (Score:5, Insightful)
I'm a little bit concerned about the PowerPC Element. The article states that it's not simply a Power5 derivative, but a core designed for high clock speed at the cost of per-stage logic depth. To quote the author: "The result is a processing core that operates at a high frequency with relatively low power consumption, and perhaps relatively poorer scalar performance compared to the beefy POWER5 processor core. "
That means the PPE in the CELL @ 4GHz will not perform as well as a Power5 would, could it reach 4GHz (but since the CELL has 8 SPEs, I would hope it performs better as a whole than a POWER5 at the same frequency). It would be interesting to know at what frequency the two are similar, but since the PPE is integrated into an extended system, this isn't something that can ever really be benchmarked.
Re:I'll believe it when I see it (Score:2)
No, it means it might not. The author suggested his opinion was up for debate. However, it's important to note the different design goals of a Power5, a 970 (G5), and a Cell. They have different needs, and for general purpose computing I think Cell will hold up just fine.
Remember (Score:3, Informative)
Re:Remember (Score:3, Informative)
Have you ever seen a picture of the POWER5? It's slightly smaller than a Mac mini.
Re:Remember (Score:2, Informative)
Re:I'll believe it when I see it (Score:3, Interesting)
*a FO-4 gate delay is a "fan-out of 4 gate delay" - it's the amou
Re:I'll believe it when I see it (Score:2)
Figure 2 - Per stage circuit delay depth of 11 FO4 often left only 5~8 FO4 for logic flow [realworldtech.com]
The author of the article seems to think an 11 FO4 is pretty aggressive.
Re:I'll believe it when I see it (Score:2)
It would be interesting to know at what frequency the two are similar.
0 MHz?
Re:I'll believe it when I see it (Score:5, Interesting)
But look at the graphics in PS2 games now compared to 1st gen titles. The improvement is incredible! The hardware hasn't changed: it's still just a 300MHz CPU with 4MB of graphics memory and no pixel shading. I think we'll see the same maturation process with Cell/PS3, where the 1st gen games don't live up to the hype but more and more of the Cell's enormous potential is realized with successive generations.
The question is whether Sony decides that part of the slow evolution in efficient PS2 programming was because of the small, exclusive development community. I would love to see Sony push a Linux PS3 similar to the version of Linux PS2 they released.
Re:I'll believe it when I see it (Score:4, Interesting)
We will see some of the typical ramp-up time in Cell programs, but since the Cell, if you believe what you read, is so far above and beyond other modern processors (and lazy developers for the PS3 can always let the NVIDIA GPU carry the load in a more traditional fashion), we should see leaps and bounds in program performance fairly quickly.
Re:I'll believe it when I see it (Score:2)
Tell me about it. All the game developers I know are always "640k polygons a second should be enough for anyone!", and "pixels smaller than your thumb detract from gameplay!" or "why would anyone want stereo!?". Losers. Developing financial software is so much more bleeding edge. Why, some of our kids don't even know FORTRAN! They don't even realize that it was the demand for bigger and bigger spreadsheets that delivered those fancy vi
Re:I'll believe it when I see it (Score:2)
That's a reasonable attitude towards any new technology. There's always a difference between how something will perform on paper and how it will perform in the real world. And that's assuming that we have a serious innovation, like this one, rather than the vague hype that's much more common.
Still, we can hope. In computing, change and innovation
Transmeta (Score:3, Funny)
Re:Transmeta (Score:3, Informative)
CPU manufacturer Transmeta, known for their low-power processors, is evaluating an exit from the CPU market. Instead of manufacturing chips themselves, their business focus would shift towards buzzwords: licensing their intellectual property and the formation of strategic alliances to utilize their processor design as well as their research and development skills.
Re:Transmeta (Score:3, Insightful)
Re:Transmeta (Score:3, Interesting)
Just because they aren't manufacturing anymore doesn't mean they're exiting the business entirely. There just might not be a "Transmeta" anymore. Instead there will be something like an "Intel Pentium 5 using low-power Transmeta Technology" (well, probably not, but you get the idea.)
Transmeta will be doing R&D for low power processors for years to come, I'm quite sure.
Comment removed (Score:4, Insightful)
Re:hell ya, cheep awsome computers! (Score:2)
If I were a computer company, I could buy them without the game-specific stuff, load on Linux, and sell them as cheap alternative computers... but that's just me. (assuming Linux and friends are compiled for CELL in the next few months, of course).
The problem with that is a PS3 won't be anywhere close to a GP machine. It's going to require a lot of driver tweaks, a load of hardware reconfiguration, and defeating the DRM. By the time someone figures out how to do that cheap, comp
Re:Isn't linux running already ? (Score:2, Informative)
I bet this Cell processor will kick ass... (Score:2, Funny)
Re:I bet this Cell processor will kick ass... (Score:5, Funny)
You kid, but have a point (Score:2)
I actually thought immediately of Cellular Automata when I read some of the specs on the new Cell, and the name may just be a coincidence, but maybe not. It would be interesting to see a Cell architecture wher
Re:You kid, but have a point (Score:2)
Re:You kid, but have a point (Score:2)
Deja Vu (Score:5, Interesting)
Processors inside game consoles usually toil away in anonymity, derided as poor cousins to desktop chips such as Intel's Pentium line. But with Sony Computer Entertainment's ambitious plan, its chips could outclass the offerings of the world's largest chipmaker--if all goes well.
The system is so advanced, MicroDesign Resources analyst Keith Diefendorff wrote in a report that the system "has the potential to swipe a chunk of the low-end market from under the noses of PC vendors." He wrote that the platform may "signal the company's intention to move upscale from current game consoles, cutting a wider swath through the living room," with its abilities to function like a stand-alone DVD player and Internet set-top box.
Sony puts on game face with new chip [com.com]
Published: May 5, 1999, 1:25 PM PDT
By Jim Davis
Staff Writer, CNET News.com
Re:Deja Vu (Score:5, Informative)
Well, one reason the PS2 sold like hot cakes was that it was one of the cheapest DVD players at the time (at least in Japan). There is media player software available and it's quite popular. The reason it isn't an internet set-top box is that no one wants internet set-top boxes; they died a painful death. Now, there's no EE desktop PC because it's too slow, but the differences between Cell and PS2 in this regard are
(a) Cell was co-designed by IBM, which has an interest in selling workstations etc. with that chip; Sony didn't, since it's not their business
(b) Cell is designed for multiprocessor environments, so if it becomes too slow for a task you can simply throw more processors at it
(c) In 2000 clock speeds still doubled every 18 months; that has stopped. x86 is going the way of multiple cores too, so programmers will have to get used to parallel design anyway
That doesn't mean it will replace x86 or even make a dent, but it means that, unlike the EE, it's designed for such stuff, and one of the companies behind it sells specialized workstations, so it's at least a possibility.
And this time you can find more credible sources than CNET (CNET's part of the yellow press of computer news sites. Almost as bad as yahoo news) who'll tell you that.
Re:Deja Vu (Score:2)
(a) Cell was co-designed by IBM, which has an interest in selling workstations etc. with that chip; Sony didn't, since it's not their business
There are a lot of VAIO [sonystyle.com] developers that will be unhappy to hear that.
Sure, IBM and Sony both like the Cell CPU a lot. However, IBM likes the PPC chip that Apple uses, and yet it still hasn't a) taken over the world, or even b) been put into use by IBM themselves. Why doesn't IBM use Apple workstations across the enterprise? After all, they make the CPU, and for awhile eve
Re:Deja Vu (Score:4, Interesting)
And why wouldn't IBM be going after SGI's market? I think your points hold in the consumer space, but in a specialized market like that I think it becomes a lot easier to gain a foothold simply based on technical merit.
Heck, better yet, and in what seems to be more inline with IBM's current direction, why wouldn't they try to get SGI to switch to Cell?
IBM likes the PPC chip that Apple uses, and yet it still hasn't a) taken over the world, or even b) been put into use by IBM themselves. Why doesn't IBM use Apple workstations across the enterprise? After all, they make the CPU, and for awhile even made the hard drives.
Are you sure that isn't one of their long-term goals? IBM is a big company, and it hasn't been that long since they've decided to change how they do things. Just because you can't see any evidence that they're making that switch doesn't mean they aren't working on it. I mean, they aren't even out of the Wintel PC business yet, and won't be, at least in name, for another 5 years. Given how much MS loves it when their resellers start offering competitive products, that seems like a very important first step in any such plan.
When you walk into an IBM facility, what brand of computers are sitting on the desks? I honestly don't know, but I would hope they eat their own dogfood. I very much doubt you'd see a Dell on every desk.
If Apple has trouble getting developers to code for their CPU, I just don't see who would develop for a VAIO (or ThinkPad) Cell workstation or laptop
Porting Linux takes care of a large portion of that. Yeah, I know Linux is pretty much in the same boat as Apple, but it's a real easy way to significantly boost their development community, and provides a huge amount of instant functionality.
Re:Deja Vu (Score:2)
Really? That might surprise IBM [ibm.com]. Guess they better stop selling them then...
And if by likes you mean designed and fabbed the 970 for Apple at their request then yes they likes it fine. And while you think it hasn't taken over the world, the core design is going to be used (to varying degrees) in all 3 next gen gaming systems. Since IBM is simply acting as chip fabricator that ain't bad for them at all. (How many m
IBM (Score:2)
That's conjecture... IBM makes more money designing and fabbing chips than in PCs, as the selling of the division attests. But could Sony be one of the PC outfits interested in licensing a compatible version of OS X for the living room? Networked workstations running the beast might be of interest to IBM, however. Does your cash register really need Windows?
Linux on Cell (Score:2)
Why
Because they can.
Depending on Sony's marketing, think of the DBZ tie ins
Re:Linux on Cell (Score:2, Funny)
I can't stand LCD monitors, CRT all the way
Re:Linux on Cell (Score:2)
You might remember the platform the EFF (and others) used to crack some of the RSA encryption last year. It was dedicated silicon, designed for the purpose. If you wanted to run a GUI on the massive number of processors, I suppose you could
These CELL processors are more general purpose than this example, and they will (on the PS3) have a way to address a display of some sort
Re:Linux on Cell (Score:2)
I find this rather hard to believe! Scientific applications are supposed to be extremely demanding and are the driving force behind expensive workstations and huge clusters. Traditionally, I would expect scientific applications to be coded in C/C++ and Fortran, or maybe some special languages. In my experience with Java applications, they are usually much slower than their native counterparts.
P.
Reminds me of Chuck Moore's 25x multicomputer chip (Score:5, Interesting)
More Cell reviews? (Score:3, Insightful)
Re:More Cell reviews? (Score:5, Informative)
It seemed there was a lot of misinformation/confusion going around because some people heard it supported DP floats and some people heard it used Altivec (which doesn't support DP). So half the people extrapolated that IBM had ditched Altivec (i.e. VMX), and the other half assumed there was no DP support... both of which angered people. The truth (according to this article) is that it uses BOTH: A version of VMX that supports DP. whew!
The article also points out that the SP floats aren't truly 754-compliant, as they round-toward-zero on cast to int. This makes it compatible with that horrible C/C++ truncation cast (If anyone knows why C opts to round-toward-zero, please let me know!). However, rest assured, DPs are 854-compliant.
Also, the article suggests that there is a memory limit (at least initially) of 256MB:
The maximum of 4 DRAM devices means that the CELL processor is limited to 256 MB of memory, given that the highest capacity XDR DRAM device is currently 512 Mbits. Fortunately, XDR DRAM devices could in theory be reconfigured in such a way so that more than 36 XDR devices can be connected to the same 36 bit wide channel and provide 1 bit wide data bus each to the 36 bit wide point-to-point interconnect. In such a configuration, a two channel XDR memory can support upwards of 16 GB of ECC protected memory with 256 Mbit DRAM devices or 32 GB of ECC protected memory with 512 Mbit DRAM devices.
FP rounding mode (Score:2)
As far as I remember from implementing the spec years ago, the rounding mode can be varied. Indeed there are C runtime functions on many platforms that set this and other properties for floating point operations.
Lost acronyms in the article.. (Score:3, Funny)
I was wondering why the article was so in depth.
Quoth
"
After some discussion (and more wine), it was determined that the ATO unit is most likely the Atomic (memory) unit responsible for coherency observation/interaction with dataflow on the EIB. Then, after the injection of more liquid refreshments (CH3CH2OH), it was theorized that the RTB most likely stood for some sort of Register Translation Block whose precise functionality was unknown to those outside of the SPE. However, this theory would turn out to be incorrect.
"
What software will it run (Score:5, Interesting)
As I see it, it's a PowerPC of OK quality with 8 subsidiary processors optimised for running a relatively simple task on a relatively small amount of memory.
So - port Linux to it? But how? Relatively easily, to make use of the main processor, but what sort of subsystem do you build so that the subsidiary processors get used to their full potential? Perhaps part of X could be configured to run on these processors - but that would be a very manual tweak to make use of the architecture. And with the best will in the world, these processors would then sit around unused for most of the time.
What you need is a more general concept, probably at the programming language level, in which algorithms can be expressed in such a way that the operating system can detect that they can be loaded into these subsidiary processors to be executed.
But there doesn't seem to be anything about that in the news out there. Presumably Sony are going to do something for the PS3 - but what? And is it going to be general purpose, given that for their purposes much of the benefit will be a super motion graphics processor for games?
Until we understand what the software infrastructure to make use of the architecture of this new chip will be, I can't see how we can make predictions about its success in the more general processor market. Before then it's just marketing hype.
Re:What software will it run (Score:2, Insightful)
The interesting thing which most commentators seem to have missed is the virtualization technology. If you're going to have cell based devices job out stuff to execute on any nearby cell processors on the network, you're going to need
Re:What software will it run (Score:2, Insightful)
However, given a way to allocate these units to userspace programs, there are lots of programs that could benefit. X and mplayer come to mind, provided someone implements the critical code for APUs, which may well mean coding in assembly.
What you nee
Re:What software will it run (Score:4, Insightful)
A software cell runs on one of the APU's (or SPU's, or whatever we're currently calling them). It is sandboxed. When the main processor sends a software cell to one of the sub processors, it specifies exactly what memory that the hardware will allow that processor to access.
You can run a software cell from an untrusted source. The software cell is a combination of code/data. The processor performs some function on it. While running, the sub processor has access only to the memory that the main processor designated.
Applications like the X Window System, Xine, MPlayer, mpg123, LAME, XMMS, etc., ad infinitum, can be designed with their own software cells. In fact, entire libraries of software cells can be constructed and re-used. Libraries of multiplexors, demultiplexors, encoders, decoders, compositing, FFTs, transcoders, renderers, shaders, GIMP filters (blur, effects, etc.), etc.
If you're building an application, such as SETI at Home, then you organize your program as software cells. You can farm out as many software cells as you have hardware cell processors to handle.
Cells can be safely shuffled from device to device. Spare cell capacity in your TV or PS3 can run your SETI at Home, or your Xine cells.
The Cell processor isn't very helpful for, say, OpenOffice.org spreadsheets or drawings, or spellchecking. But word processing isn't the function that usually needs super fire-breathing processor power.
It is not inconceivable that things like spreadsheet calculations can be effectively improved using software cells. But this is not as obvious (at least to me) as the former applications that I mentioned.
So if you had a 2 GHz main processor and one or more Cell co-processors (a variable, expandable number) you would have a tremendous amount of computing power. The applications that demand extraordinary power would have it -- even with just one cell coprocessor. And this was quite a list of applications I mentioned above. Just about anything audio-visual or doing massive parallel operations on pixels, or 3d.
Re:What software will it run (Score:2)
When my process is being switched out in the main CPU, should the running SPUs be also suspended somehow and their context saved along with the main context? Since their local memory isn't protected in any way, that would be quite a massive context, wouldn't it? If this is not to be done, access to the SPUs should be policed by the OS. Say, while some process has a device opened that controls access to an SPU, no other process can open the same device.
Re:What software will it run (Score:2)
A single encode/decode task would ideally be coded as a single software cell. Perhaps even multiple functions in a single software cell. I.e. decode mp3, and add reverb as a single software cell that uses up a single SPU.
I run The GIMP and do a massive filter, and it realizes that there are seven SPU's available, so it issues five hundred software cell problems (non serial) that are consumed and processed by t
Re:What software will it run (Score:2)
Without doing any sums, it may be that some tasks are sped up so much that the SPU can be multiplexed between lots of tasks per second, so that they are effectively shared by several tasks at the same time - much like the CPU is today.
The other thing to perhaps consider then is
Re:What software will it run (Score:2)
But what prevents all these programs from stepping on each others' toes when they submit tasklets to SPUs? Will the arbitration be performed benevolently by a mutual convention or enforced by the OS?
Re:What software will it run (Score:2)
What's the point? (Score:5, Insightful)
On the whole, my impression is that current mainstream CPUs have a pretty reasonable balance between CPU power and all the other system components. Changing just the CPU without making substantial (and expensive) changes to the rest of the system will not magically give you more performance.
Re:What's the point? (Score:5, Informative)
" The memory and processor bus interfaces designed by Rambus account for 90% of the Cell processor signal pins, providing an unprecedented aggregate processor I/O bandwidth of approximately 100 gigabytes-per-second. "
Re:What's the point? (Score:4, Informative)
There are 2 dual XDR interfaces. Each interface is running at 6.4 GB/s. So 4*6.4 = 25.6 GBytes/sec.
So the CELL memory design is at least 4 times faster than current DDR2 memory systems.
Re:What's the point? (Score:2)
So 1.5 GBytes/s it is not. You may have confused bits with bytes.
For more information look here [serialata.org].
Re:What's the point? - What are your assumptions? (Score:4, Informative)
Substantial changes, maybe. Expensive? Perhaps not. This all depends on the base assumptions from which you operate. One of the fundamental assumptions in today's existing systems is that any and all work should be done to maximize the utilization of the CPU. However, when considering how to design other types of systems, such may not be true (it may make sense to minimize the memory footprint, for example).
If you've ever done some detailed algorithm work, you will quickly realize that there are many algorithms where you can make tradeoffs between memory and CPU time. The 'simplest' of these are the algorithms that are breadth first vs. depth first, which can trade off exponential in memory vs. exponential in time. [For a 'trivial' example, try forming the list of all operational assignments containing 6 variables and which use %, +, -, *, /, ^, &, ~, and ()... less than 50 lines of perl and you'll quickly blow through the 32-bit memory limit if written depth first, or take overnight to run breadth first]
The significant question which has been brought up - and which remains unanswered - is what software development tools will be made available. Once this is better answered, we will all be in a better position to determine what fundamental assumptions have been changed, and therefore how we can follow the new assumptions through to conclusions about the net performance of the processor and machine in which it is contained.
these are max figures (Score:2, Insightful)
Just remember how many developers complained about the Emotion Engine from the PS2 and how it was such a bitch to program for; this will be worse. It's first gonna require a special
Re:these are max figures (Score:3, Informative)
This is essentially what happened with the PS2. 1st gen game teams thought the compiler would handle more of the task of keeping the v
Re:these are max figures (Score:2)
Massively Parallel Promises (Score:4, Interesting)
Intel's Pentium architecture was built to accommodate 6-way direct CPU interconnects. The idea was to build "cubic" structures for MPP computers. It took until the P4 to really deliver any of those, almost 10 years after the architecture was released. And the software is still bleeding-edge, and hand-rolled for each install. MPP SW techniques have evolved a lot since then, so perhaps the Cell will actually deliver on these "distributed supercomputer" promises.
Re:Massively Parallel Promises (Score:2)
Re:Massively Parallel Promises (Score:2)
Sure thing (Score:2, Interesting)
Here's a more accurate review (Score:4, Informative)
This is a bigger, hotter, less stable chip with an exotic and hard to write-for architecture. That's fine for a gaming system with a dedicated revenue stream and no competition. It's not gonna make it outside that domain.
Re:Here's a more accurate review (Score:2)
We should reserve judgment on the "hard to write-for" until we actually have details. This alleged sample code [arstechnica.com] doesn't look too bad.
Maybe... (Score:4, Funny)
It's actually 2 different kinds of processors (Score:2, Interesting)
Re:It's actually 2 different kinds of processors (Score:2)
Essentially correct, but it's not a Power5 derivative.
Together, the 2 kinds of cores on a single chip have the potential to do a lot. But there have to be tools to allow developers to make use of that potential. Especially as vectorized programs are not easy to write and optimize, the quality of the development tools will be very important in deciding the success of the chip.
Right. And it's interesting that the CoreImage and CoreVideo APIs in the next v
250 Gigaflops? (Score:5, Insightful)
I've seen a lot of hype about having the Cell in your laptop talk to the Cells in your desktop, microwave, and TiVo, but you have to consider real-world limitations. When you set up a network like that (presumably wireless), you're going to be limited to around 100Mbps. In computer clusters and supercomputers, one of the main limitations of performance is the communication bandwidth available between processors, and the latency of the network. To build a "home supercomputer", you not only need a task that parallelizes well, but one that doesn't require so much inter-node communication that it's held back by a slow network. You can't work around this problem with hardware magic - if the task you're working on requires lots of communication bandwidth, you're going to be held back.
So how much beyond a modern PC is 250GFLOPS anyway? Not much! A GeForce FX at 500MHz does 200 gigaflops [zdnet.co.uk]. An AMD Athlon's peak performance is 2.4 GFLOPS at 600 MHz [amd.com]... if we scale this up to 2.2 GHz (high-end Athlon), that's 8.8GFLOPS (note: As we're talking about theoretical performance, nonlinear factors like bus speeds can be ignored). Basically, if the Cell dedicates most of its power to graphics rendering, you'll have computation power in the same range as a fast PC of today. Given that we're not going to see any products based on the Cell for a while, this isn't going to be the end of the world for Intel and nVidia (let alone the fact that Cell isn't x86).
Consoles using the Cell will have the advantage of only having to render for TV resolutions - at most 1080 lines, while PCs will be rendering at up to 1600x1200. But if you look at recent history, you can compare the Xbox to a then-good PC with a GeForce3 (which came out at around the same time) - the Xbox looked better, but PCs did catch up and surpass its performance, and it didn't take all that long. Consoles have to be very high-end when they're released, because the platform doesn't change for 2-3 years, and they still need to be "good enough" after a couple years, before the next generation is released.
Re:250 Gigaflops? (Score:2)
To the putz who submitted this news post: (Score:5, Informative)
think to the future (Score:3, Interesting)
Re:Cool, as a co-proc (Score:2, Insightful)
From the reviews I've seen they are touting it as if the cell communicates with other cells to handle all the processor intensive stuff.
So where one Cell would not be as powerful as an x86 CPU, two Cells would be. And the way they have designed the things is as a separate computer on a chip, so you can basically upgrade your ?? just the same way you upgrade your memory.
Or have I gotten the wrong end of the stick and they are designing these thin
Obviously a TROLL (Score:2, Insightful)
Having said that, if the original poster of this thread truly does think it's underpowered, one should provide a bit more elaboration besides a trollish reference to the IBM/Sony marketing machine.
Re:Cool, as a co-proc (Score:4, Interesting)
I see great potential for the STI Cell Processor as a SETI@Home accelerator.
Seriously though, there may be good scientific uses for these exactly as you envisioned - in a coprocessor role. From folding proteins and weather simulations to cryptanalysis, these could provide a great entry for distributed scientific computing.
Re:Cool, as a co-proc (Score:2)
Then again, today's CPUs are way overpowered for the jobs they are actually doing. Most of the power is used for sometimes important, sometimes pointless stuff around the edges such as antialiasing fonts and making icons bounce up and down.
A chip designed to be able to cooperate with others should have an advantage in that kind of environment. If the CPU can concentrate on actually running the word processor, and efficiently coordinate with oth
Re:Cool, as a co-proc (Score:2)
My school's 2000MHz machine running XP feels slower to me than my old 200MHz machine running 98. If they write/pick the OS/software for the Cell appliances correctly I could see it making some headway as a desktop replacement. If most monitors/TVs are shipping with a good office suite and web b
Re:Cool, as a co-proc (Score:3, Informative)
Which is the key, exactly. As Linus wrote in one of his linked forum posts (from the blurb), it's gonna be a pain to program general purpose for those vector units (SPEs).
However, judging from the main review, it doesn't look like the PowerPC Element was castrated too much. It looks like it'll suffer from Pentium4 syndrome (boosting the frequency doesn't do as much as it used to
Re:Cool, as a co-proc (Score:3, Informative)
The role of the G5 cores seems to be to handle higher order logic that prepares and parses out tasks to the very fast vector units (SP
Re:Cool, as a co-proc (Score:3, Interesting)
And that is the whole point of this processor. The G5 NEEDS those pipelines and caches in order to feed the multiple execution units, reorder instructions and avoid reading slow host memory.
The CELL, on the other hand, will have the instruction ordering done in software. All those 'bits' you describe are replaced with software: a much smarter compiler.
Yes this processor will perform poorly with today's code. With appropriately written code it will sc
Re:Cool, as a co-proc (Score:2)
No, it won't. Who uses an ASIC or FPGA to make just a processor? No one, that's who. Processors are often embedded in ASICs (and sometimes even FPGAs) along with lots of other goodies. If you just need a processor, you buy an off-the-shelf ARM, VR, or any of the dozens available. You don't spend the bucks to make an ASIC. This may compete with off-the-shelf processors and some ASSP (
Re:Cool, as a co-proc (Score:2)
That has to do with poor imagination at the game producers and nothing to do with the performance of the cell CPU.
Perhaps, but FOUR of them... (Score:2, Interesting)
...might be used to run the PS3 (assuming this [blachford.info] is true). Outside of a weighty OS (assuming you use Windows, Mac or a Linux GUI with that nVidia) they should do better.
Besides, 256 GFlops in single-prec. [realworldtech.com] can't be too bad either...can it?
Single Precision Rounding error (Score:3, Interesting)
Unfortunately single precision numbers ignore certain rounding conventions in order to boost speed. You'll get super fast single precision results, but they won't be as accurate as on other systems. Probably won't matter for physics rendering in a video game (Sony's Emotion Engine did the same thing) but it could make a big difference when applied to general purpose situations.
Re:Perhaps, but FOUR of them... (Score:2)
Re:x86 compatibility? (Score:5, Informative)
There is this really neat group of operating systems called Unix/Linuxes. They have a major advantage in that you only need a small amount of assembler to get going on a new chip, then the rest can be ported over in C/C++. This has been the situation for decades - Unix (and now Linux) has been the initial OS for almost all new chips.
How fast will this chip be at general purpose stuff? Who cares if it can do 100GFLOPS on a couple operations.
Reasonable point, but FLOPs are a good general measure of speed, as they are fairly complex operations. We all used to measure speed in MIPS (Millions of Instructions Per Second), but as chips got so diverse, one chip's instructions could not be easily compared with another's (particularly if RISC chips were involved, where the instructions could be very minimal). FLOPs are a better measure, as a divide is a divide and a multiply is a multiply no matter what chip architecture you use.
Re:x86 compatibility? (Score:2)
This is the good ol' anti-new-architecture speak. A new architecture is not necessarily a bad thing, provided
1) it's massively scalable to its targeted size and hopefully beyond (either large or small)
2) it's easily portable
3) its architecture doesn't have a super bottleneck (namely, the x87 floating-point stack)
Apple managed to move MacOS from 68K to PowerPC.
HP wrote HP-UX for Itanium (non-emulation mode).
Di
Re:x86 compatibility? (Score:3, Informative)
Which means that the vast majority of software I use every day would work just fine on it.
Although it would be slow... Cell isn't optimized for general purpose, and the extra SPEs add another 128 registers beyond the PowerPC and VMX ISAs, which wouldn't get used by normally compiled PowerPC code.
You would have to have GCC worked over to provide 'vectorized' code to use as many of these SPEs as possible for single threaded applications, and even then you wouldn't ge
OSX Tiger & Longhorn? (Score:2)
No?
Re:Clock speed (Score:2)
Re:Clock speed (Score:3, Interesting)
(1) fetching and prefetching (multiple P4 stages) because the extra processors on Cell can directly address their local 256KB of memory.
(2) decoding x86 instructions into microops - since the extra processors are running code directly rather than running kludgy x86 code on a non-x86 microcore
(3) branch prediction (since the load penalty is a lot lower due to local 256KB of memory and a shallower pipeline)
Re:No more Moores Law? (Score:2)
Re:No more Moores Law? (Score:2)
NOOOOOOOOOOOOOOO!!
We're addicted to the upgrade treadmill.
Wouldn't it be preferable to just keep pushing up clock speeds, even artificially high ones that mean lower actual performance, while at the same time building ever more bloated software applications?
Think of our poor corporations! What will happen to the economy if they are force
Re:No more Moores Law? (Score:2)
Re:Programming secondary processors (Score:2)
Why? Single precision floating point can accommodate 23 bits of precision, and a full 24 counting the implicit leading bit (all sound applications should use zero-centered FP samples, because floating point becomes more precise towards zero). Sure, many modern digital sound systems are exactly 24 bit so there is no margin for errors, but the lowest bits are for marketing and bit-padding pur
Re:Context switching (Score:2)
Re:Context switching (Score:2)
Do you really ne
Re:What about AI? (Score:3, Interesting)