
The CPU Redefined: AMD Torrenza and Intel CSI

janp writes "In the near future the Central Processing Unit (CPU) will not be as central anymore. AMD has announced the Torrenza platform that revives the concept of co-processors. Intel is also taking steps in this direction with the announcement of CSI. With these technologies we will be able to put special chips (GPUs, APUs, etc.) directly on the motherboard in a dedicated socket. Hardware.Info has published a clear introduction to AMD Torrenza and Intel CSI and a sneak peek into the future of processors."

  • huh? (Score:4, Insightful)

    by mastershake_phd ( 1050150 ) on Monday March 05, 2007 @08:01AM (#18236200) Homepage
    Weren't the first co-processors FPUs? Aren't they now integrated into the CPU? By having all these things in one chip, they will have much lower latency when communicating with each other. I think all-in-one multi-core chips are the future, if you ask me.
    • Re: (Score:3, Interesting)

      by Chrisq ( 894406 )
      I think it has to do with the number of configuration options. Even if the technology were able to fabricate one super chip, the best possible GPU and sound processor might be great for some people, but others would be better off with extra general-purpose cores, cache, etc. The flexibility of "mix and match" probably outweighs the advantages of having the separate components on a single chip.
      • Some people would like to be able to customize (especially sound cards), but mass producing a "super chip" would be more cost effective. You could of course have different versions of "super chips".
        • Re: (Score:3, Informative)

          by dosquatch ( 924618 )

          You could of course have different versions of "super chips".

          Thereby decreasing their cost effectiveness. 'Tis a vicious circle.

        • Re:huh? (Score:5, Insightful)

          by Fordiman ( 689627 ) <fordiman@@@gmail...com> on Monday March 05, 2007 @09:48AM (#18237138) Homepage Journal
          But think. There is definitely money in non-upgradable computers - especially in the office desktop market. The cheaper the all-in-one solution, the more often the customer will upgrade the whole shebang.

          Example: in my workplace, we have nice-ass Dells which do almost nothing and store all their data on a massive SAN. They're 2.6GHz beasts with a gig of RAM, a 160GB HD, and a SWEET ATI vid card each. Now, while I personally make use of it all proper-like, most people here could get along with a 1GHz/512MB RAM/16GB HD/onboard-video system.

          I think Intel/AMD stand to make a lot of money if they were to build an all-in-one-chip computer, i.e. CPU, RAM, video, sound, network, and a generous flash drive on a single chip.
    • Re:huh? (Score:5, Interesting)

      by MrFlibbs ( 945469 ) on Monday March 05, 2007 @08:18AM (#18236318)
      The CPUs will still be multi-core. They will also integrate as many features as makes sense. However, there are limits on how big the die can be and remain feasible for high volume manufacturing. Using an external co-processor is both more flexible and more powerful.

      The interesting thing about this whole co-processor approach is that the same interface used to connect multiple CPUs to each other is being opened up for other processing devices. This makes it possible to mix and match cores as desired. For example, you could build a mesh of multi-core CPUs in a more "normal" configuration, or you could mate each CPU with a DSP-like number cruncher and make a special-purpose "supercomputer". It will be interesting to see what types of compute beasts emerge from this.
      • Re:huh? (Score:5, Insightful)

        by *weasel ( 174362 ) on Monday March 05, 2007 @09:09AM (#18236756)

        However, there are limits on how big the die can be and remain feasible for high volume manufacturing.

        The limits aren't such a big deal.
        Quad-core processors are already rolling off the lines and user demand for them doesn't really exist.
        They could easily throw together a 2xCPU/1xGPU/1xDSP configuration at similar complexity.
        And the market would actually care about that chip.
        • Re: (Score:3, Informative)

          by sjwaste ( 780063 )
          Intel's quad cores are, and they're actually two Core 2 dies connected together, I believe. "Native" quad core is in the works by AMD and Intel, but is currently not on the consumer market.

          Now, if there are other CPUs out there doing native quad core for general-purpose computing, I'm unaware and withdraw my ignorance if so :)
          • Intel's quad cores are, and they're actually two Core 2 dies connected together, I believe. "Native" quad core is in the works by AMD and Intel, but is currently not on the consumer market.

            At the end of the day, nobody cares whether the CPU is "native" or not. If it's cheap, gives good performance, sucks less electricity and fits in "one" socket, it's good enough for most folks.
            • by sjwaste ( 780063 )
              I agree, but the post I was responding to stated that quad core was rolling out despite a lack of demand and despite die-space limits that might reduce yield. My point was only that because it's two dies put together, it's different from having to put four cores on one die and having the corresponding yield issues.
        • Small parts of the market might be interested, but only the parts that are now served by GPUs integrated into chipsets. The advantage of discrete graphics chips, whether plugged into a CPU socket or into an expansion slot, is upgradability, and the chance to differentiate products. Putting everything on one die lowers total system costs and circuit-board complexity, but is a hard sell in many segments of the PC market.

          My question is what advantage will they get from plugging these coprocessors into CPU sock
    • Re:huh? (Score:5, Interesting)

      by Mr2cents ( 323101 ) on Monday March 05, 2007 @08:23AM (#18236352)
      Adapting another quote: "If you want to create a better computer, you'll end up with an Amiga". It's more or less what they're describing here. The Amiga made heavy use of coprocessors back in the day. It could do some quite heavy stuff (well, at the time) while the CPU usage stayed below 10%.

      One cool thing I discovered while I was learning to program was that you could make one of the coprocessors interrupt when the electron beam of the monitor was at a certain position (roughly along the lines of the copper-list sketch below). Pretty nifty.

      BTW, for those who are too young/old to remember, those were the days of DOS, and friends of mine were bragging about their 16-color EGA cards. The Amiga had 4096 colors at the time.
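
      From (admittedly hazy) memory, the coprocessor in question was the copper, and its program was just a list of 16-bit word pairs. A minimal sketch of such a list, waiting for a scan line, poking a colour register and raising an interrupt, might look like the C array below; the register offsets and bit values are my recollection and may well be off:

      /* Hypothetical copper list: raw word pairs as the copper would execute
       * them. Offsets are relative to the custom-chip base at 0xDFF000. */
      static const unsigned short copper_list[] = {
          0x9601, 0xFFFE,   /* WAIT until the beam reaches vertical line 0x96     */
          0x0180, 0x0F00,   /* MOVE 0x0F00 (red) into COLOR00 (offset 0x180)      */
          0x009C, 0x8010,   /* MOVE: set the COPER bit in INTREQ -> CPU interrupt */
          0xFFFF, 0xFFFE    /* WAIT for an impossible position: end of list       */
      };

      The CPU side just installs an interrupt handler for that bit and does whatever it wants at that beam position, with no polling.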
      • you could make one of the coprocessors interrupt when the electron beam of the monitor was at a certain position
        The Atari 800 could do that easily at the scan-line level with Display List Interrupts, and somewhat harder with cycle counting at points across the line. And that was 1978 technology.
        • On the original IBM PC with a CGA adapter, you had to wait until the vertical flyback interval before updating the video memory. Otherwise the hardware couldn't keep up with sending data to the monitor (or something), and the monitor displayed snow.
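
          In practice that meant polling the CGA status register at I/O port 0x3DA before touching video memory at B800:0000. A rough sketch in DOS-era C (assuming a compiler that provides inp(), e.g. Borland or Microsoft; details from memory):

          #include <conio.h>              /* for inp() on DOS compilers   */

          #define CGA_STATUS    0x3DA     /* CGA status register          */
          #define VRETRACE_BIT  0x08      /* set during vertical retrace  */

          static void wait_for_vertical_retrace(void)
          {
              /* If we're already in retrace, wait for it to end so we
               * get a full retrace window for our writes... */
              while (inp(CGA_STATUS) & VRETRACE_BIT)
                  ;
              /* ...then wait for the next retrace to begin. */
              while (!(inp(CGA_STATUS) & VRETRACE_BIT))
                  ;
          }

          Call that, then copy your updates into video memory inside the window, and most of the snow goes away.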
          • Um, you'll see noise on any array display if you write to the memory while it is drawing. It may not always show up as noise; it could show up as unsynced portions of the display (read: looking sucktastic).

            Tom
            • No, you could update the original monochrome IBM PC display whenever you liked. The snow thing was only a (mis)feature of the colour display adapter.
              • I didn't say it would be snow (as in electrical noise).

                But think about it, if you are say 0.5 frames out of sync with the display, then only half of the display will actually have "live" content. The result (especially with movement) will look very distorted. So even with modern displays, you need to sync to vsync and draw only during vblank (or draw during the frame but use frame switching).

                LCDs are not instantaneous either. Try filming an LCD without the correct shutter. You'll see the same annoying b
      • by cHALiTO ( 101461 )
        I agree the Amiga was a great piece of hardware, but the palette was 4096 colors; it could actually use 32 of them simultaneously on screen (at least the Amiga 500 could; the Amiga 2000 could go up to 4096 colors on-screen in HAM6).
        EGA displays could use 16 out of 64 colors, if I'm not mistaken.

        Ahh those were the days :)
        • Re: (Score:3, Informative)

          by thygrrr ( 765730 )
          Nope, the A500 also had 4096 colours in HAM mode. They were basically the same hardware, except the A2000 had different - and more - expansion slots and was a desktop machine while the A500 was a typical home computer/console kind of thingy.
      • Re:huh? (Score:5, Interesting)

        by walt-sjc ( 145127 ) on Monday March 05, 2007 @08:52AM (#18236572)
        Ahh - the Amiga. My favorite machine during that era. I got my A1000 the first day it was available. Modern OSes could still learn a lot from that 20-year-old OS. Why oh why are we still using "DOS compatible" hardware????

        Amiga had 4096 colors at the time.

        Better put "4096" with a "*" qualifier. You couldn't assign each pixel an exact color - the scheme got you more colors by letting a pixel's control bits say "hold the previous pixel's color and modify one component by x". In this way, they could get more colors using less memory than traditional X-bits-per-color-per-pixel schemes (the Amiga was a bitplane architecture).
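
        For the curious, the effect is easy to see in a little decode loop. This is just my recollection of the HAM6 scheme (control-bit meanings from memory), sketched as plain C rather than anything resembling the real display hardware:

        #include <stddef.h>
        #include <stdint.h>

        typedef struct { uint8_t r, g, b; } Rgb4;    /* 4 bits per component */

        /* pixels: 6-bit HAM values; palette: the 16 base colour registers. */
        void ham6_decode(const uint8_t *pixels, size_t n,
                         const Rgb4 palette[16], Rgb4 *out)
        {
            Rgb4 cur = palette[0];                      /* colour carried along */
            for (size_t i = 0; i < n; i++) {
                uint8_t ctrl = (pixels[i] >> 4) & 0x3;  /* top 2 bits: control  */
                uint8_t data = pixels[i] & 0xF;         /* low 4 bits: data     */
                switch (ctrl) {
                case 0: cur = palette[data]; break;     /* load a base colour   */
                case 1: cur.b = data; break;            /* hold R,G; modify B   */
                case 2: cur.r = data; break;            /* hold G,B; modify R   */
                case 3: cur.g = data; break;            /* hold R,B; modify G   */
                }
                out[i] = cur;                           /* previous colour held */
            }
        }

        The "hold and modify" cases are why you could exceed 32 colours on screen with only 6 bitplanes, and also why HAM images smear horizontally when the colour changes abruptly.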

        Anyway, back on topic: I wish the CPU manufacturers would finally come up with a "generational" standard socket. A well-designed module socket should last as long as an expansion slot standard (ISA, PCI, PCIe) and not change for damn near every model of chip. I should be able to go out and get a 1-, 2-, 4-, or 8-socket motherboard and stick any CPU / GPU / DSP module into it I want. Can we please finally shitcan the 1980s motherboard designs?
        • The problem is that there's a lot of other stuff that has to understand the CPU for the computer to work properly. You'd need to be able to snap in different chipsets so that the CPU could actually work. You'd probably want to be able to plug in new RAM too. As processors get faster and faster, they require more pins. No more 80-pin CPUs for us. If they designed a universal CPU slot today, it would either have twice as many pins as we needed, or we'd run out of pins within 2 years.
          • As processors get faster and faster, they require more pins. No more 80-pin CPUs for us. If they designed a universal CPU slot today, it would either have twice as many pins as we needed, or we'd run out of pins within 2 years.

            That assumes you are plugging in individual processors. This is why I specified "module". Using high speed serial channels, like PCI Express does, you can reduce the pin count on the module level considerably. I would assume that the module would also contain a certain amount of local
        • Re: (Score:3, Interesting)

          by rbanffy ( 584143 )
          OK... Let's rephrase that:

          Folks with 16-bit PCs were bragging about their 16-out-of-64-color EGA cards and single-tasking OSes when even the simplest Amigas had 32-bit processors, 32 out of 4096 colors, PCM audio and a fully multitasking OS coupled with a GUI.

          As for the "processor socket", there are people selling computers that go into passive backplanes. If you put the CPU and memory in a card, there is little reason why you would have to upgrade the rest of the computer when you change the CPU (you w
          • by Yvan256 ( 722131 )
            I like the idea of "CPU slots". Not in the Celeron 300A-style "CPU on a card" concept, but to put everything related on a card, make the motherboard "braindead".

            Put the CPU, chipset and RAM slots on the "processor card", that way the only reason to upgrade a motherboard would be if new slots were introduced (PCI-Express, etc) and you actually needed them.

            Isn't this called "passive backplane" or something? If it already exists for some systems, why not desktop computers?
            • Re:huh? (Score:4, Interesting)

              by default luser ( 529332 ) on Monday March 05, 2007 @02:22PM (#18240870) Journal
              Isn't this called "passive backplane" or something? If it already exists for some systems, why not desktop computers?

              Early high-end computer systems started out like this, utilizing backplanes like VME. They've been phased out, because ultimately that modularity was too expensive, and because the shared-bus architecture hurt performance. Hardware devices that used to require multiple cards can now fit on a single chip, and have their own PCIe drop to increase performance. Memory upgrades that used to require multiple cards just to reach 1MB are now eclipsed by 8- and 16-chip configurations on a single DIMM (a specialized expansion slot), and have their own bus to improve performance.

              Let's say they went with the Single-board computer design (CPU+memory+bus controller) - now your costs go up, because you have to build multiple "processor cards" for all the different backplanes you want to plug into. ISA backplane - 1 model. PCI backplane - 1 model. PCI + ISA backplane - 1 more model, and it also requires a new specification: the new bus designs have to play nice with the limited I/O space at the back of the card, so you end up either making the bus connector larger, or you end up making certain bus combinations impossible.

              With the motherboard-and-attached-bus design, your costs go down because you can provide a mixture of the busses that are the most popular. Thus, you only have one product to design and electrically verify, and only one manufacturing line to test.

              Also, when you move to point-to-point architectures like PCI-Express, with a separate backplane you really limit yourself to the slot configurations you can offer. Unlike with a shared bus, with P2P interconnects you have to make sure the backplane layout matches the connector layout exactly. This means you either standardize on ONE configuration (boring), or you put the ports on the processor card (what we are doing).

              The only places that still use modular bus designs today are embedded developers, and that's because they still need the expandability and modularity that end users do not. They also need the backward compatibility afforded by these old bus specifications (VME especially). They pay for it in terms of performance - most of them bypass the slow backplane of VME or CompactPCI with faster interconnects like Gig/10GigE, Fibre Channel, RapidIO or Infiniband.
        • Got mine on Day 1 also.

          An Amiga 1000, Deluxe Paint, Flight Simulator, Amiga Basic and 2Meg of Ram = $3500.

          Later got the Sidecar for DOS, and Earl Weaver Baseball. Ahhhhhhhh.

          20 years later, and no hardware or software has given me such joy.

          NVidia, Matrox, ATI, AMD, Intel, WTF?

          The Amiga showed you how 20 years ago, and you are just now getting around to it?

          Bring back multi-resolution windows, bitches!

        • by Wavicle ( 181176 )
          Anyway, back on topic, I wish that the CPU manufacturers could finally come up with a "generational" standard socket. A well-designed module socket should last as long as an expansion slot standard (ISA,PCI,PCIe) and not change for damn near every model of chip.

          Unfortunately this isn't as practical as it sounds. As technologies to increase performance continue, the socket technology would quickly become a bottleneck. If we were to go back 10 years (1997) our boards would have 3.3v CPUs, 64 bit memory bu
        • by LWATCDR ( 28044 )
          Been there and done that and it just doesn't work.
          Kaypro and Zenith both offered PCs that let you swap out CPU cards. I have a great poster in my office for the Kaypro PC that says "The End of Obsolescence". It wasn't.
          The problem is that the upgrades tended to cost as much as a new computer.

          You have a smaller potential market, and the cost of a new CPU board is so close to a new motherboard that it just isn't worth it.
          Then you add the improvements in memory systems and as you can see it just doesn't work out.
      • Re:huh? (Score:4, Interesting)

        by evilviper ( 135110 ) on Monday March 05, 2007 @10:18AM (#18237458) Journal

        "If you want to create a better computer, you'll you'll end up with an Amiga". It's more or less what they're describing here.

        That's what he's describing, but I don't believe for a second that's what it's going to be...

        I don't believe for a second practically ANYONE is going to buy an expensive, multi-socket motherboard, just so they can have higher-speed access to their soundcard... Ditto for a "physics" unit.

        This exists solely because CPUs are terrible at the same kinds of calculations ASICs/FPGAs are incredible at. That will be the only killer app here.

        Video cards are a good example on their own. CPUs are so bad, and GPUs are so good, that transferring huge amounts of raw data over a slow bus (AGP/PCIe) still puts you far ahead of trying to get the CPU to process it directly. And it works so well, the video card companies are making it easier to write programs to run on the GPU.

        And GPUs aren't remotely the only case of this. MPEG capture/compression cards, Crypto cards, etc. have been popular for a very long time, because ASICs are extremely fast with those operations, which are extremely slow on CPUs.

        The situation is much more like x87 math co-processors of years past, than it is like the Amiga, with independent processors for everything.

        It is likely that, in time, integrating a popular subset of ASIC functions into the CPU will become practical, and then our high-end video cards will be simple $10 boards, just grabbing the already-processed data sent by the chip, and outputting it to whatever display.

        Then maybe AMD and Intel will finally focus on the problem of interrupts...
      • Definitely. (Score:3, Insightful)

        by Svartalf ( 2997 )
        I remember the Amiga. I remember how much more capable and powerful they were over the other "personal" computers of the day.

        It's a damn shame that Commodore couldn't market/sell their way out of a wet paper bag.
    • Re:huh? (Score:5, Interesting)

      by TheThiefMaster ( 992038 ) on Monday March 05, 2007 @08:26AM (#18236380)
      It's a cost and feasibility thing. The original FPUs were separate because they were expensive, not everyone needed them, and it was impractical to integrate them into the cpu because it would make the die too large and result in large numbers of failed chips. They became part of the chip later once the design was refined and scaled down.

      The same applies to trying to integrate GPUs into the CPU, at the moment a top-end GPU is too large and expensive to integrate, and not everyone needs one. The move to having a GPU in a CPU socket should cut a lot of cost because the GPU manufacturers won't have to create an add-in-card to go with the GPU, they can just design the chip to plug straight into a standardised socket.

      At the same time, low-end GPUs are small and cheap enough that they are being integrated into motherboards, so integrating a basic GPU into the CPU seems like a good next move, and the major CPU manufacturers seem to agree. IIRC Via's smallest boards integrate a basic CPU, northbridge and GPU into one chip? AMD are definitely planning it with their aptly named "Fusion". *Checks Wikipedia* Yeah, Via's is called "CoreFusion".

      Still, you are right, all-in-one cpus are the future, we're just not quite there yet.
      • I think all-in-one / system-on-a-chip designs have been around for a long time, but they just weren't popular because they meant a significant performance hit. They may become more common as the performance becomes "good enough" for most common tasks where a desktop or notebook computer would be unnecessary and overpowered. It hasn't been a very popular idea yet, I think in part because the cost difference wasn't much. The next mainstream computer platform just might be a phone though, I understand that a lot of
        • The next mainstream computer platform just might be a phone though

          Smartphones will (IMHO) evolve into wireless portable computing devices that "oh yeah, can make phone calls too," but the problem is that the screen is still WAY too small, and user input still sucks. Maybe they will finally be able to make LCD-like glasses that really are high-resolution, and maybe they will come up with a neural interface so we can ditch the keyboard / mouse... But I don't see those things being practical within the
      • Re: huh? (Score:5, Insightful)

        by Dolda2000 ( 759023 ) <fredrik.dolda2000@com> on Monday March 05, 2007 @09:56AM (#18237248) Homepage

        Still, you are right, all-in-one cpus are the future, we're just not quite there yet.

        Actually, no thank you. I've had enough problems ever since they started to integrate more and more peripherals on the motherboard. I'd be troubled if I had to choose between a VMX-less, DDR3-capable chip with the GPU I wanted, a VMX- and DDR3-capable chip with a bad GPU, a VMX-capable but DDR2 chip with a good GPU, a chip that has all three but an IO-APIC that isn't supported by Linux, or a chip that I could actually use but costs $500.


        Instead of gaining those last 10% of performance, I'd prefer a modular architecture, thank you. Whatever is so terribly wrong with PCI-Express anyway?

        • If I was reading the picture on the second page correctly, it looks like AMD plans to use a "4x4" type motherboard architecture, but with the second CPU spot made for a dedicated GPU chip instead of another redundant CPU. The CPU and GPU wouldn't be on the same die in this case.

          I think this would make sense to me. Right now when I upgrade my video card, I throw out the ram, GPU, and integrated circuitry of the entire package to replace everything with the new video card upgrade (which happens every 6 mo
    • Re: (Score:3, Informative)

      by mikael ( 484 )
      Weren't the first co-processors FPUs? Aren't they now integrated into the CPU?

      The Intel 8086 had the Intel 8087 [wikipedia.org]
      A whole collection of Intel FPUs is at Intel FPUs [cpu-collection.de]

      TI's TMS34020 (a programmable 2D rasterisation chip) had the TMS34082 coprocessor (capable of vector/matrix operations).
      (Some pictures here [amiga-hardware.com].) Up to four coprocessors could be used.

      Now, both of these form the basis of a current day CPU and GPU (vertex/geometry/pixel shader units).
      • by sconeu ( 64226 )
        Intel also had the 8089, which was a coprocessor for I/O. It's described (along with the 8087) in my vintage July 1981 8086 manual.
    • Re: (Score:3, Insightful)

      by Tim C ( 15259 )
      I think all-in-one multi-core chips are the future, if you ask me.

      Great, so now instead of spending a couple of hundred to upgrade just my CPU or just my GPU, I'll need to spend four, five, six hundred to upgrade both at once, along with a "S[ound]PU", physics chip, etc?

      Never happen. Corporations aren't going to want to have to spend hundreds of pounds more on machines with built-in high-end stuff they don't want or need. At home, I want loads of RAM, processing power and a strong GPU. At work, I absolutely d
    • Re: (Score:2, Interesting)

      But by making specialized chips, you can limit and optimize the instruction set to allow for many more instructions per second. The performance gains of this strategy (as well as using it as a means of heat distribution) could outstrip the latency gains of putting everything on one chip.
    • Re: (Score:3, Insightful)

      by ChrisA90278 ( 905188 )
      You are right. The distance, and therefore the communication time, is better if the device is closer. But putting the device inside the CPU means it is NOT as close to something else. One example is graphics cards. There, you want the GPU to be close to the video RAM, not close to the CPU. Another device is the phone modem (remember those); you want that device close to the phone wire. Now let's look at new types of processors. A disk I/O processor that makes a database run faster. You would want that t
    • Re: (Score:3, Interesting)

      by real gumby ( 11516 )

      Weren't the first co-processors FPUs?

      Actually there was an evolution of processor design into single, monolithic processing units; until well into the '70s it was hardly uncommon that computers would have all sorts of processing units (remember the "CPU" is the "Central Processing Unit.") Of course in this case I'm primarily talking about mainframes; one of the distinctions of the minicomputer (and later microcomputer) was that "everything was together" in the CPU. But even then the systems didn't really

    • "Werent the first co-processors FPUs. Arent they now integrated into the CPU? By having all these thing sin one chip they will have much lower latency with communicating between themselves. I think all in one multi-core chips is the future if you ask me."

      You misunderstand displacement. First of all, there are only so many chips you can pack into a CPU die before bandwidth and memory issues become a problem; high-memory-bandwidth devices will spam the communications channel waiting for data. Really is a disp
  • CSI? (Score:5, Funny)

    by BigBadBus ( 653823 ) on Monday March 05, 2007 @08:01AM (#18236208) Homepage
    CSI? De-centralized CPU? Where will they be located; Miami, New York or Las Vegas?
    • Re:CSI? (Score:5, Funny)

      by 91degrees ( 207121 ) on Monday March 05, 2007 @08:33AM (#18236430) Journal
      CSI? De-centralized CPU? Where will they be located; Miami, New York or Las Vegas?

      Well, clearly, they won't. They're decentralised.

      New on NBC, "CSI: Wherever". We even have a song by The Who for the opening credits - "Anyway, Anyhow, Anywhere".
    • Aside from the jokes, am I the only one who is more than a bit disturbed by Intel's CSI (apparently "Common System Interface")? Did they actually find anything really bad about HyperTransport that TFA fails to mention, or is it just a horrid example of the NIH syndrome?
    • by Jonavin ( 71006 )
      The new Intel CSI Miami. "It looks like, there is, some sort, of, connection." WWWWWAAAAAAAAAAAA!

      [minirant]Stupid David Caruso[/minirant]
  • by G3ckoG33k ( 647276 ) on Monday March 05, 2007 @08:03AM (#18236224)
    The first details emerged half a year ago:


    IBM and Intel Corporation, with support from dozens of other companies, have developed a proposal to enhance PCI Express* technology to address the performance requirements of new usage models, such as visualization and extensible markup language (XML).

    The proposal, codenamed "Geneseo," outlines enhancements that will enable faster connectivity between the processor -- the computer's brain -- and application accelerators, and improve the range of design options for hardware developers.


    http://www.intel.com/pressroom/archive/releases/20060927comp_a.htm [intel.com]
  • Retro-innovation (Score:5, Informative)

    by Don_dumb ( 927108 ) on Monday March 05, 2007 @08:08AM (#18236252)
    Here spins the Wheel Of Reincarnation http://www.catb.org/~esr/jargon/html/W/wheel-of-reincarnation.html [catb.org] - watch how everything comes back and then goes away again and then comes back . . .
    • Are we finally getting back to actually complete computers like the Amiga?
      It had custom-designed processors for sound and video on the motherboard.
      And then it was sold together with a fitting OS, so you got computer and software as a complete functioning machine instead of many loose ends in a PC.
      • by Goaway ( 82658 )
        You want that, get a Mac.

        Seriously, I did, and it's feeling just like the old days.
        • I did, at home, I'm just a bit frustrated with the PCs at work.
        • The problem is that Macs are all built like standard PCs now. If you replace the Apple firmware with a standard BIOS, you could boot DOS. The Mac Mini is basically a standard notebook in a different form factor; ditto for the iMac. The Mac Pro is not much different in design from a Dell desktop. Why? Cost. They get to use standard parts / software. There is NOTHING on the market like the integrated design of the Amiga. The Mac has a lot more in common with a 1982 IBM PC than with an Amiga.
        • by mdwh2 ( 535323 )
          Or get a PC.

          Really, if GPUs and sound chips are sufficient for a comparison to the Amiga's chipset, then PCs have been doing that for at least as long as Macs.

          It's not clear to me why this article is about something more Amiga-like than what modern computers already have (especially since GPUs are fully programmable). The difference about this news is that the chips can be put on the motherboard via a standard socket - but it was never the case with the Amiga that you could plug in chips you wanted, you jus
          • by Goaway ( 82658 )
            I don't think you understood at all what this particular branch of the discussion is about.
            • by mdwh2 ( 535323 )
              Yes, I did - if not, perhaps you would care to explain to the rest of us rather than playing guessing games...
      • by Yvan256 ( 722131 )
        You mean like modern Macs have become? They have a CPU, a GPU, some audio chip (probably not a DSP but still). And the OS knows how to work with both the CPUs and the GPU.

        • I did :-)
          I moved from Amiga 1200 to iMac in 1999. Never had a PC in the house (except, perhaps, the bridgeboard on the A2000, which, back then, made me wonder what all the fuss of PCs was about).
      • Heck, this goes back to the Atari 800 series.
        All this is really doing is bringing a more standardised set of co-processors onto the mobo rather than any number of 3rd-party ones - it would make it much easier to keep the OS stable if you have a more controlled number of architectures to deal with.
        On the downside, if these processors were DRM-hobbled, it would make life harder too.
  • Interesting (Score:3, Interesting)

    by Aladrin ( 926209 ) on Monday March 05, 2007 @08:20AM (#18236330)
    I find the idea of multiple Processing Unit slots on the motherboard that can each take different types of chips very interesting. I'm not sure how well it will work, though. The article mentions 5 types that already exist: CPU, GPU, APU, PPU and AIPU. (Okay, the last doesn't exist yet, but a company is working on it.) There are only 4 slots on the motherboard that's shown. I definitely do NOT want to see a situation where the common user is considering ripping out his AIPU for a while and using a PPU, then switching back later. I can only imagine the tech support nightmares that will cause.

    So the options are to have more slots, or make something I like to call an 'interface card'. See, there'll be these slots on the motherboard that cards fit into... wait, don't we have this already?

    And more slots isn't really an option because the computer would end up being massive with all the cooling fans and memory slots. (Which are apparently separate for each PU.)

    I kind of hope I get proven wrong on this one, but I don't think this is such a great idea. Just very interesting. Having 16 slots and being able to say you want 4 AIPUs, an APU, 4 GPUs, 3 PPUs, and 4 CPUs on my gaming rig and 1 GPU, 1 APU, and 14 CPUs on my work rig would be awesome.
    • Re: (Score:2, Interesting)

      by eddy ( 18759 )

      Maybe if a motherboard featured a very large generic socket to which was attached one cooling solution, it'd work out better. Processing Units, which would be smaller so as to fit as many as possible, would be able to go anywhere in this socket (in a grid-aligned fashion). Easiest solution: the socket is an X*X square grid, and all PUs must be, say, X/2 (or hopefully X/4) squares, which can be arranged in any fashion. Plunk them in, reattach the cooling over all of them, boot, and enjoy that 4CPU, 2GPU, 2FPU configuration.

    • Re:Interesting (Score:5, Interesting)

      by Overzeetop ( 214511 ) on Monday March 05, 2007 @09:18AM (#18236828) Journal
      You are correct - sockets are just a reincarnation of slots, but less flexible because you're limited to what you can put on a single chip instead of an entire card.

      Perhaps the better thing to do would be better slot designs (not that we need more, with all the PCI flavors floating around right now) with integrated, defined cooling channels. If you were to make the card spec with a box design rather than a flat card, you could have a non-connector end mate with a cooling trunk and use a squirrel-cage fan (higher volume, quieter, more efficient) to ventilate the cards.

      • So basically have a passive backplane like industrial computers have been doing for YEARS, except that you allow multiple CPU boards. I like it.
      • Perhaps the better thing to do would be better slot designs (not that we need more with all the PCI flavors floating around right now) with integrated, defined cooling channels.

        Adding a connector means you will have more noise. Using chips means they both have a shorter path and are electrically better connected.

        Most solutions need only two things: a processor and memory. Everything else you see on the card is either there for I/O (video cards have RAMDACs for example, or whatever the chip that handles di

        • I think there are disturbingly few consumer or (volume) business applications which require additional process-specific (FPU, GPU) processors which don't interact with the outside world. Heck, that's what makes computers great - connectivity. Just about any argument that can be made for additional PU sockets could be made better by including them on a single die, save possibly thermal concerns, in which case you can add multiple smaller processors.

          In my opinion, what the slots lose in path length and elect
          • No, unless you're just going to put extra processor slots in so we can add SMP chips as the computer ages, I'll pass.

            If all the chips use HT to speak to one another, and all the chips use the same package, then you can put EITHER a CPU or another type of processor in it.

    • look up HTX slots
    • While there are a wide variety of co-processor options (or at least ideas) right now and few sockets into which to put them, I suspect the solution will more likely come in the form of unified co-processors rather than multiple sockets.

      Motherboard chipsets are becoming the union of a lot of functionality (disk, Ethernet, sound, USB, PCIe and graphics). Even though you can still get best-of-breed add-in cards for many of these functions, the majority of desktop systems do just fine with what the chipset of

  • Amiga? (Score:3, Insightful)

    by myspys ( 204685 ) * on Monday March 05, 2007 @08:29AM (#18236402) Homepage
    Am I the only one who thought "oh, they're reinventing the Amiga" while reading the summary?
  • by Mad_Rain ( 674268 ) on Monday March 05, 2007 @08:35AM (#18236446) Journal
    that revives the concept op co-processors.

    Slashdot's computers might benefit from a co-processor, the function of which is to monitor and correct spelling and grammar errors. It would serve like an editor's job, only better, because, you know, it might actually work.

    (Bye-bye karma!)
  • How 'bout Agnus, Denise and Paula. :)
  • EOISNA (Score:3, Insightful)

    by omega9 ( 138280 ) on Monday March 05, 2007 @08:47AM (#18236546)
    Everything old is new again.
  • by alta ( 1263 ) on Monday March 05, 2007 @09:39AM (#18237028) Homepage Journal
    Prepare to see the pornprocessor soon. I'm not going to give a lot of details here, but it's optimized for specific physics, AI and Graphics.
  • Cell Clusters (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Monday March 05, 2007 @09:43AM (#18237072) Homepage Journal
    How about the Cell uP [wikipedia.org] (first appearing in the PlayStation 3), which embeds a Power core on silicon with a 1.6Tbps token ring connecting up to 8 (more later) "FPUs", extremely fast DSPs. IBM's got 4 of them on a single chip, connected by their "transparent, coherent" bus, a ring of token rings. One Cell can master a slave Cell, and IBM is already debugging 1024-DSP versions, transparently scalable by the compiler or the Power "ringmaster" at runtime.

    These little bastards are inherently distributed computing: a microLAN of parallel processors, linkable in a microInternet.

    Imagine a Beowulf cluster of those! No, really: a Beowulf cluster of Cells [google.com].
  • Is this the same as a bus-oriented system? I remember spec'ing out systems for a defence contractor back in the 90s, and there were systems designed around "daughter-card" processors, something like a modular mainframe on the cheap. It always seemed to me that a bus-centric system had a lot going for it performance-wise, rather than forcing everything in the computer to synch to the CPU.
  • AMIGA! (Score:2, Insightful)

    This sounds vaguely like the Amiga platform of years past (with a fervent following today still)... how innovative to copy someone else!
  • by LordMyren ( 15499 ) on Monday March 05, 2007 @11:19AM (#18238204) Homepage
    To the best of my knowledge, Torrenza is already implemented. The HTX port on many Opteron motherboards is a HyperTransport connection. You can already buy FPGA dev kits from U. of Mannheim that plug into this HyperTransport slot and interface with the rest of your system. Torrenza may continue to advance the HyperTransport / Coprocessor war, but as far as I'm concerned, Torrenza is already here.
    • by mzs ( 595629 )
      Yes! Can you do battery backed SRAM behind this gate array?
    • HTX is currently limited to 16 bits / 800MHz, which is pretty anaemic by today's standards. This is the sort of bandwidth processors had back in the late '90s. For comparison, the main CPU is sitting on the end of a point-to-point HT interconnect that runs at 2.6GHz and is twice the width, giving a total of almost eight times the bandwidth. Since HT messages are composed of 32-bit words, this width will also give a much lower latency (one cycle per word rather than two, and most messages are several word
  • by J.R. Random ( 801334 ) on Monday March 05, 2007 @11:56AM (#18238822)
    There are basically two models of parallelism that are used in practice. One is the Multiple Instruction Multiple Data model, in which you write threaded code with mutexes and the like for synchronization. The other is Single Instruction Multiple Data, in which you write code that operates on vectors of data in parallel, doing pretty much the same thing to each piece of data. (There are other models of parallelism, like dataflow machines, but they don't have much traction in real life.) Multicore CPUs are MIMD machines; GPUs are SIMD machines. All those other processors -- physics processors, video processors, etc. -- are just SIMD machines too, which is why Nvidia and ATI could announce that their processors will do physics too, and why folding@home works so well on the new ATI cards. So I suspect that in real life there will be just two types of processors. At least I hope that is the case, because it will be a real mess if application A requires processors X, Y, and Z while application B requires processors X, Q, and T.
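
    To make the distinction concrete, here is a toy sketch of the same reduction written both ways in C: a MIMD flavour using pthreads and a mutex, and a SIMD flavour using SSE intrinsics. Illustrative only, not tuned, with error handling omitted:

    /* MIMD flavour: several threads each sum a slice; a mutex guards the total. */
    #include <pthread.h>

    static double total;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    struct slice { const double *data; int n; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        double local = 0.0;
        for (int i = 0; i < s->n; i++)
            local += s->data[i];
        pthread_mutex_lock(&lock);        /* synchronization is explicit */
        total += local;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    /* SIMD flavour: one instruction stream, four floats processed per step. */
    #include <xmmintrin.h>

    static float sum_sse(const float *data, int n)    /* n assumed a multiple of 4 */
    {
        __m128 acc = _mm_setzero_ps();
        for (int i = 0; i < n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(data + i));
        float lane[4];
        _mm_storeu_ps(lane, acc);
        return lane[0] + lane[1] + lane[2] + lane[3];
    }

    GPUs, physics chips and the like are essentially the second style scaled up by a couple of orders of magnitude, which is why they can stand in for one another.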
  • We've seen this before. The industry is constantly cycling between specialized co-processors (what I loosely call asymmetric multiprocessing) to increase performance, and increasingly powerful central processors with dumb peripherals to decrease cost and bus latencies. What's old is new again.
