
More Cell Processor Details And First Pictures 535

slashflood writes "After reading two articles on slashdot about the Cell architecture and another one that criticizes the extensive roundup of the STI patents, I found the first pictures of the Cell core. It seems that at least some predictions were true. Seeing is believing." mtgarden points to this ZDNet article which says that the "first version of the chip will run at speeds faster than 4GHz. Engineers were vague on how much faster, but reports from design partners say 4.6GHz is likely. By comparison, the fastest current Pentium PC processor tops out at 3.8GHz." (More below.)

Hack Jandy writes "Anand Shimpi has some details about the upcoming Cell processor (PS3) in his personal blog. According to Anand, "Rambus announced that the new Cell processor uses both Rambus XDR memory and their FlexIO processor bus. Because Rambus designed the interface for both the memory controller(s) and the processor interface, the vast majority of signaling pins are using Rambus interfaces - a total of 90% according to Rambus." Hasn't Rambus been showing up a lot again recently? The fact that Cell uses XDR has been widely speculated, but the fact that it will also use the Rambus bus signalling is something completely new."


Comments Filter:
  • by Anonymous Coward on Monday February 07, 2005 @07:34PM (#11602048)
    http://www.scee.presscentre.com/imagelibrary/detail.asp?MediaDetailsID=25555

    CELL...bringing supercomputer power to everyday life with latest technology optimized for compute-intensive and broadband rich media applications

    SUMMARY:

    Cell is a breakthrough architectural design -- featuring 8 Synergistic Processing Units (SPUs) and a Power-based core, with top clock speeds exceeding 4 GHz (as measured during initial laboratory testing).

    Cell is OS neutral - supporting multiple operating systems simultaneously

    Cell is a multicore chip comprising 8 SPUs and a 64-bit Power processor core capable of massive floating point processing

    Special circuit techniques, rules for modularity and reuse, customized clocking structures, and unique power and thermal management concepts were applied to optimize the design

    CELL is a Multi-Core Architecture

    Contains 8 SPUs each containing a 128 entry 128-bit register file and 256KB Local Store

    Contains a 64-bit Power Architecture(TM) core with VMX that is a dual-thread SMT design - views system memory as a 10-way coherent threaded machine

    2.5MB of on-chip memory (512KB L2 and 8 * 256KB)

    234 million transistors

    Prototype die size of 221mm²

    Fabricated with 90-nanometer (nm) SOI process technology

    Cell is a modular architecture and floating point calculation capabilities can be adjusted by increasing or reducing the number of SPUs

    CELL is a Broadband Architecture

    Compatible with 64b Power Architecture(TM)

    SPU is a RISC architecture with SIMD organization and Local Store

    128+ concurrent transactions to memory per processor

    High speed internal element interconnect bus performing at 96B/cycle

    CELL is a Real-Time Architecture

    Resource allocation (for Bandwidth Management)

    Locking caches (via Replacement Management Tables)

    Virtualization support with real time response characteristics across multiple operating systems running simultaneously

    CELL is a Security-Enabled Architecture

    SPUs dynamically configurable as secure processors for flexible security programming

    CELL is a Confluence of New Technologies

    Virtualization techniques to support conventional and real time applications

    Autonomic power management features

    Resource management for real time human interaction

    Smart memory flow controllers (DMA) to sustain bandwidth
  • RTFA (Score:5, Informative)

    by temojen ( 678985 ) on Monday February 07, 2005 @07:46PM (#11602154) Journal
    The Cell CPU has a POWER Processor with VMX (it's vector based), plus 8 stream processors (which kick ass on vector processing units for some applications). So you've got
    • a regular CPU (good for program flow/logic and interdependent operations),
    • a vector unit (good for large arrays with no conditionals),
    • and 8 stream processors (good for applying the same operations plus flow control to lots of independent chunks of data).
    w00t!
  • Re:Cell (Score:5, Informative)

    by doormat ( 63648 ) on Monday February 07, 2005 @07:59PM (#11602304) Homepage Journal
    234M transistors @ 90nm is actually about as big as most graphics processors. They tend to be 150M-200M @ 110nm or 130nm. I don't see it being terribly difficult to fab, really.
  • by Sycraft-fu ( 314770 ) on Monday February 07, 2005 @08:02PM (#11602329)
    It is, and recently, with developments in chip design and compiler design, the architecture of a chip has become much less of a big deal.

    Back in the day, RISC was important because it allowed pipelining, the ability for a chip to be doing multiple things at once. Old MIPS chips, for example, had an 8-stage pipeline: each instruction took 8 cycles to execute, but with 8 instructions in flight the effective rate was one instruction per cycle. You couldn't do that with CISC. Well, now processors are decoupled from their ISAs. Each instruction is translated into a number of micro-operations, which are what actually get handled by the execution units. It also means there can be more physical registers than the ISA exposes.

    The upshot is that it doesn't matter as much as it used to.
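The decode-into-micro-ops idea the comment describes can be sketched in a few lines. This is a toy model, not any real vendor's decoder; the instruction tuples and the hidden "tmp" register are made up for illustration:

```python
# Toy sketch of how a CISC-style instruction with a memory operand gets
# split into RISC-like micro-ops before hitting the execution units.
def decode(instruction):
    """Translate one (op, dst, src) instruction into a list of micro-ops."""
    op, dst, src = instruction
    micro_ops = []
    if dst.startswith("["):          # memory destination: load-modify-store
        addr = dst.strip("[]")
        micro_ops.append(("load", "tmp", addr))   # read memory into a hidden register
        micro_ops.append((op, "tmp", src))        # do the arithmetic on the hidden register
        micro_ops.append(("store", addr, "tmp"))  # write the result back to memory
    else:                            # register destination: one micro-op suffices
        micro_ops.append((op, dst, src))
    return micro_ops

print(decode(("add", "[rax]", "rbx")))
# [('load', 'tmp', 'rax'), ('add', 'tmp', 'rbx'), ('store', 'rax', 'tmp')]
print(decode(("add", "rcx", "rbx")))  # a register-register add needs no splitting
```

The hidden register is the point: the execution core can have more registers, and simpler operations, than the ISA admits.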

    However, there are still plenty of people who like to vilify Intel for sticking with x86. They declare it to be an old kludge of an architecture that needs to die and makes everything slow. Yet when AMD decided to stick with it, rather than hop on the EPIC bandwagon, they were suddenly heroes for maintaining backwards compatibility -- which is the whole reason Intel has stuck with x86 for so long.

    What I'm pointing out is that the bashing is done against Intel regardless of what they do. Intel is in the "bad" position, no matter what that is. Take the Cell chips and clock speed: Slashdotters have long raged at Intel for making a design that has higher MHz but less performance per MHz (as opposed to AMD), declaring it a marketing gimmick. Now here we have an article talking about Cell chips that are designed to clock even faster, and taking shots at how slowly Intel chips clock by comparison.

    It's not that these people actually have good reasons to like or dislike the decisions, they just dislike Intel and so slam on them.
  • by marshall_j ( 643520 ) on Monday February 07, 2005 @08:05PM (#11602361) Homepage

    You are right. Check out this article [newscientist.com]

    In laboratory tests, the Cell chip reached a top "clock speed" of 4GHz, which means it can perform more than four billion calculations per second. By comparison, the fastest Intel Pentium chip is currently capable of 3.8GHz.

    This difference in basic speed is not large but Richard Doherty, director of the computer industry analysts Envisioneering, in San Francisco, says Cell's modular architecture will give it a more substantial edge for many applications.

    "At first blush I think it's safe to say that it will be 10 to 20 times faster than the fastest graphics cards and processors," Doherty told New Scientist. "We think it is going to revolutionise computer science for entertainment and business."

  • Re:what's funny is.. (Score:3, Informative)

    by be-fan ( 61476 ) on Monday February 07, 2005 @08:06PM (#11602367)
    Prescott has 125M transistors, while the GeForce 6800 has 222M transistors. And on a tangent: this is typical Slashdot. IBM and Sony announce a 256 gigaflop chip, and Slashdotters' first reaction is to bitch about how hot and noisy it will be! Where are the real nerds in the audience?
  • Re:Conspiracy Theory (Score:3, Informative)

    by Sophrosyne ( 630428 ) on Monday February 07, 2005 @08:11PM (#11602416) Homepage
    Here is a link to the Apple E3 Article
    http://www.thinksecret.com/news/0502briefly.html [thinksecret.com]
    Also, if you remember, Sony recently admitted they made a mistake with their new Walkman. You also have to take into account that Japanese culture and the concept of competition are not always thought of in the same respect -- especially when Apple dominates the Japanese computer market (and now the mp3 market).
  • Missing the point (Score:5, Informative)

    by egrinake ( 308662 ) <`erikg' `at' `codepoet.no'> on Monday February 07, 2005 @09:04PM (#11602645)

    There seems to be a lot of confusion surrounding the Cell chip. This is not "just another processor", and it certainly has little to do with clock frequencies -- the Cell is a whole new architecture, which might just be a glimpse into the future of computing.

    To begin with, it might be useful to have some background on the ps2 architecture - there are a couple of really great in-depth articles at Ars Technica [arstechnica.com]; Sound and Vision: A Technical Overview of the Emotion Engine [arstechnica.com] and The PlayStation2 vs. the PC: a system-level comparison of two 3D platforms [arstechnica.com].

    What made the ps2 so awesome was that it was custom-built specifically for multimedia processing, which requires a completely different processing environment than general-purpose computing. Normal PCs are made for computing where you have a large number of instructions working on a small data set (such as a spreadsheet) -- this requires large data caches close to the CPU, while instructions are streamed continually from RAM. Media processing is the other way around: you have "simple" operations (like the calculations for a single pixel) that run over a large set of data, so you don't really need data caches. The ps2 did exactly this; it removed almost all the caches (only a few tiny ones were left), but it had totally insane bus bandwidth. To borrow an analogy from the mentioned Ars Technica article:

    "Here's a goofy example to help you visualize what I'm talking about: imagine a series of large buckets, connected by pipes to a main tank, with a cow lapping water out of each bucket. Since cows don't drink too fast, the pipes don't have to be too large to keep the buckets full and the cows happy. Now imagine that same setup, except with elephants on the other end instead of cows. The elephants are sucking water out so fast that you've got to do something drastic to keep them happy. One option would be to enlarge the pipes just a little (*cough* AGP *cough*), and stick insanely large buckets on the ends of them (*cough* 64MB GeForce *cough*). You then fill the buckets up to the top every morning, leave the water on all day, and pray to God that the elephants don't get too thirsty. This only works to a certain extent though, because a really thirsty elephant would still end up draining the bucket faster than you can fill it. And what happens when the elephants have kids, and the kids are even thirstier? You're only delaying the inevitable with this solution, because the problem isn't with the buckets, it's with the pipes (assuming an infinite supply of water). A better approach would be to just ditch the buckets altogether and make the pipes really, really large. You'd also want to stick some pans on the ends of the pipes as a place to collect the water before it gets consumed, but the pans don't have to be that big because the water isn't staying in them very long."

    So, what does this have to do with the Cell? The Cell takes this concept even further. Cell systems are made up of multiple processors, called APUs (Attached Processing Units), which are connected by an insanely fast data bus. Each APU can be programmed to handle one specific task, and then pass the data on to the next APU for a different task. By doing this, you can just put in more processors to increase the throughput of the system. This works especially well for multimedia processing, which can be pipelined like this pretty easily. Here are a couple of snippets from the Wikipedia entry [wikipedia.org]:

    "While the Cell chip can have a number of different configurations, the workstation and PlayStation 3 version of Cell consists of one "Processing Element" ("PE"), and eight "Attached Processing Units" ("APU"). The PE is based on the POWER Architecture, basis of their existing POWER line and related to the PowerPC used by Apple
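The chained-APU idea the comment describes can be sketched as a stream pipeline: each "APU" is modeled as one stage that transforms a chunk of data and hands it on. This is only an illustration of the programming model; the stage functions (decode, scale, clamp) are made up:

```python
# Toy model of a Cell-style streaming pipeline: stages chained like APUs
# on a fast bus, each chunk of data flowing through every stage in order.
def apu_pipeline(stages, chunks):
    """Run each data chunk through every stage; more stages = more work per chunk."""
    results = []
    for chunk in chunks:
        for stage in stages:
            chunk = stage(chunk)   # hand the chunk to the next "APU"
        results.append(chunk)
    return results

# Hypothetical per-pixel stages for a media workload.
decode = lambda px: [v * 2 for v in px]        # expand raw values
scale  = lambda px: [v + 1 for v in px]        # adjust brightness
clamp  = lambda px: [min(v, 255) for v in px]  # keep values in 8-bit range

print(apu_pipeline([decode, scale, clamp], [[1, 2], [200, 7]]))
# [[3, 5], [255, 15]]
```

Adding throughput in this model really is just adding stages or running more chunks in flight, which is the scalability argument the comment is making.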

  • by drmerope ( 771119 ) on Monday February 07, 2005 @09:33PM (#11602829)
    Indeed. Even in a slow 0.18um technology, I can easily make an 8 GHz 3-inverter ring oscillator. So what?

    The "chip frequency" is determined by
    1) how fast the transistors can switch
    2) how many FO4 inverter equivalents (the standard measure of logic complexity) there are between the latches.

    #1 is just a process technology attribute

    #2 is where all the magic is because it is "how much work can take place in one cycle"

    #2 is commonly reduced in a technique called pipelining.

    General rule: Pipelining increases throughput at the cost of latency.

    Especially with branches, but in other situations as well, latency becomes a limiting factor.

    When that happens, trading latency for clock speed is a bad decision.

    For any given ISA you're likely to reach this break point *somewhere*. The i386 architecture has reached it, because of the latency of decoding its _complex_ instructions.

    A simpler instruction set => less decode latency => can be pipelined further => higher clock speeds and performance benefits from additional pipelining.

    Intel, though, still has probably the best process technology in the world and as a consequence if Intel were manufacturing these cell processors they'd run even faster.

    But simpler instructions tend to do less work. This means you need more instructions for the same task. More instructions may mean larger memory footprints, and larger memory footprints require faster I/O to memory and larger caches to avoid performance penalties. Thus in the end you might gain nothing.

    You can see this effect within amd64. Running in 64-bit mode gives you more registers, and more registers should mean faster programs, but moving around all those 64-bit variables erases the benefit (at least in the compiler run-time benchmarks I've seen).
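The throughput-vs-latency tradeoff in the comment above can be put into a rough back-of-the-envelope model. The numbers here (48 FO4 of logic, 2 FO4 latch overhead, branch and mispredict rates) are illustrative only, not taken from any real design:

```python
# Rough model: deeper pipelines shorten the cycle (fewer FO4 delays per
# stage), but a branch mispredict flushes more stages, raising cycles
# per instruction. Performance is instructions per unit of FO4 delay.
def relative_performance(stages, logic_fo4=48, latch_fo4=2,
                         branch_freq=0.2, mispredict=0.1):
    cycle = logic_fo4 / stages + latch_fo4       # FO4 delays per cycle
    cpi = 1 + branch_freq * mispredict * stages  # flush penalty grows with depth
    return 1 / (cycle * cpi)

for n in (4, 8, 16, 32, 64):
    print(n, round(relative_performance(n), 4))
# gains taper off and eventually reverse as the mispredict penalty dominates
```

Under these made-up parameters, going from 4 to 32 stages helps, but 64 stages is slower than 32: exactly the break point the comment describes.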
  • Re:Cell (Score:3, Informative)

    by IdleTime ( 561841 ) on Monday February 07, 2005 @09:58PM (#11602971) Journal
    There is really no reason for Linux to use a 4.6 GHz processor though.
    This is the biggest rubbish I have ever heard!

    I see a gazillion areas that require speed and processing power. Just because you don't have the need does not mean others don't.

    How someone could moderate that rubbish as insightful is a mystery!
  • by be-fan ( 61476 ) on Monday February 07, 2005 @10:16PM (#11603092)
    So the CPU is just a normal POWER, right?

    No. Each Cell has one main (controller) CPU called the PU, and up to 8 separate vector CPUs called SPEs. The main CPU is a regular 64-bit POWER processor (with SMT --- IBM's equivalent of hyperthreading), while the SPEs are very simple processors with a lot of execution resources and insane bandwidth. Such processors are known as "stream processors" in the literature, because they are designed to handle streams of data.

    it's just a different brandname, right?

    Yes, "AltiVec" (like "G5") is an Apple/Motorola trademark, so IBM can't use it. And you're right, the AltiVec unit is on the PU.

    For what purposes is the VMX more suited?

    It's most likely there so that if you're running code that isn't suitable for the SPEs but still needs to do vector computations, you don't have to send it off to the SPEs.

    Will the SPEs have this same starvation problem?

    Potentially, but probably not. Altivec on the G4 was starved because the G4's bus was exceedingly slow. The SPEs are supposed to be on a shared 128GB/sec internal bus, and the Cell has 100GB/sec of bandwidth to main memory.

    That each of the SPEs has 256k of private memory to work with?

    Yes. In the Cell model, you design your code in "cells". A cell is a clump of code and data that's copied to the SPE's local memory. The code then runs, streaming in additional data from memory, and using the local memory as a workspace.

    Can SPEs freely read other SPEs "local memory", or only their own? And who fills up this memory initially, and who deals with it once it's done?

    The SPEs' local memories are not connected to each other, so each SPE can only read from its own local memory. The memory is filled by the PU when a cell is loaded onto the SPE. The SPE then runs autonomously and, when it finishes, sends the results back to the PU via main memory.

    I.E., do the SPEs have access to main or video memory or other hardware, or do they ever require for the CPU to shuttle data to keep them fed?

    The SPEs and the PU all talk to a single DMAC, which has access to main memory.

    But then the article seems to be saying that SPE access to memory is limited -- i.e. it can only be done in block loads/stores.

    Yes. The DMAC, actually, can only read/write in 1024-bit blocks. This isn't really a big deal if you think about it. When a regular CPU reads a memory address, it doesn't read a byte at a time; it loads a whole cache line. So a P4, for example, usually reads a 128-byte (1024-bit) block from memory anyway.

    Do each of the 8 SPEs actually independently load their own instruction streams?

    Yes. All the processor units run separate instruction streams. Each "software cell" runs in its own thread, if you will.
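The software-cell flow this Q&A walks through (PU stages code and data into a 256KB local store, the SPE runs on its private copy, DMA moves data in whole 1024-bit blocks) can be sketched as a toy model. All names and the API shape here are illustrative, not Cell's actual programming interface:

```python
# Toy model of the "software cell" flow: round DMA transfers up to
# 128-byte (1024-bit) blocks, check the cell fits in the 256KB local
# store, then let the "SPE" run autonomously on its local copy.
BLOCK = 128                # DMAC transfer granularity in bytes (1024 bits)
LOCAL_STORE = 256 * 1024   # per-SPE local memory

def dma_size(nbytes):
    """Round a transfer up to whole blocks, like a cacheline fill."""
    return -(-nbytes // BLOCK) * BLOCK   # ceiling division

def run_cell(kernel, data):
    """PU side: stage data into the local store, run the kernel, return results."""
    assert dma_size(len(data)) <= LOCAL_STORE, "cell too big for local store"
    local = bytearray(data)   # stand-in for the DMA into the SPE's local store
    return kernel(local)      # SPE runs on its own copy, no shared memory

print(dma_size(1))     # 128 -- even a 1-byte read costs a full block
print(dma_size(129))   # 256
print(run_cell(lambda buf: sum(buf), b"\x01\x02\x03"))  # 6
```

The fit check is the key constraint of the model: a cell's code plus its working data must fit in local store, or the kernel has to stream the rest in explicitly.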
  • by iirving ( 73118 ) on Tuesday February 08, 2005 @12:00AM (#11603691) Homepage
    The new Xbox 2 (or Xbox 360?) is using the PowerPC [osviews.com]; in fact, Microsoft is currently using Apple G5s as the development platform [gamesindustry.biz]. So they will have experience with the Power architecture. I seem to remember them doing some work with NT on PPC around '98? but it was killed.
  • by Ideaphile ( 678292 ) on Tuesday February 08, 2005 @02:15AM (#11604297)
    I was at the Cell event today, and quoted in some of the news stories. I also have the ISSCC technical papers.

    The PowerPC core in the Cell prototype chip is NOT a Power5, as speculated here. According to IBM, this core was designed from scratch for this application. One critical difference is that the new pipeline executes instructions in strict program order rather than reordering instructions to improve throughput as is done with Power5.

    Also, IBM has not described the core as "simultaneous multithreaded", just "multithreaded." I presume from this that the multithreading is coarse-grained -- only one thread is active at a time, unlike Power5 which can execute instructions from two different threads in the same cycle.

    The logic design for the Cell CPU was optimized for higher clock speeds in a given process than Power5 can achieve. This is a good tradeoff for more linear multimedia algorithms, but reduces effective throughput on other types of code.

    I think it's reasonable to suppose that if Apple were interested in using the Cell architecture, it would prefer to use a version of the design that includes a Power5 core in place of the one in the Cell prototype.

