The CPU Redefined: AMD Torrenza and Intel CSI

janp writes "In the near future the Central Processing Unit (CPU) will not be as central anymore. AMD has announced the Torrenza platform, which revives the concept of co-processors. Intel is also taking steps in this direction with the announcement of CSI. With these technologies we will be able to put specialized chips (GPUs, APUs, etc.) directly on the motherboard in a dedicated socket. Hardware.Info has published a clear introduction to AMD Torrenza and Intel CSI and a sneak peek into the future of processors."
This discussion has been archived. No new comments can be posted.

  • huh? (Score:4, Insightful)

    by mastershake_phd ( 1050150 ) on Monday March 05, 2007 @09:01AM (#18236200) Homepage
    Weren't the first co-processors FPUs? Aren't they now integrated into the CPU? By having all these things in one chip, they will have much lower latency communicating between themselves. I think all-in-one multi-core chips are the future if you ask me.
  • Amiga? (Score:3, Insightful)

    by myspys ( 204685 ) * on Monday March 05, 2007 @09:29AM (#18236402) Homepage
    Am I the only one who thought "oh, they're reinventing the Amiga" while reading the summary?
  • EOISNA (Score:3, Insightful)

    by omega9 ( 138280 ) on Monday March 05, 2007 @09:47AM (#18236546)
    Everything old is new again.
  • Re:huh? (Score:3, Insightful)

    by Tim C ( 15259 ) on Monday March 05, 2007 @10:08AM (#18236742)
    I think all-in-one multi-core chips are the future if you ask me.

    Great, so now instead of spending a couple of hundred to upgrade just my CPU or just my GPU, I'll need to spend four, five, six hundred to upgrade both at once, along with a "S[ound]PU", physics chip, etc?

    Never happen. Corporations aren't going to want to have to spend hundreds of pounds more on machines with built-in high-end stuff they don't want or need. At home, I want loads of RAM, processing power and a strong GPU. At work, I absolutely do not require the GPU - anything that can do 1600x1200 @ 32bpp and 60Hz for 2D is perfectly adequate.

    Likewise, the chip builders aren't going to want to have to release these all-in-one chips in a myriad of options, for low/medium/high spec CPU/GPU/PPU/SPU/$fooPU; it simply won't be cost-effective.

    It's lose-lose imho; you're either stuck buying things you don't want, or have a mind-boggling number of options to choose from (consumers/business) and support (manufacturers/OEMs/IT depts).
  • Re:huh? (Score:5, Insightful)

    by *weasel ( 174362 ) on Monday March 05, 2007 @10:09AM (#18236756)

    However, there are limits on how big the die can be and remain feasible for high volume manufacturing.

    The limits aren't such a big deal.
    Quad-core processors are already rolling off the lines and user demand for them doesn't really exist.
    They could easily throw together a 2xCPU/1xGPU/1xDSP configuration at similar complexity.
    And the market would actually care about that chip.
  • Re:huh? (Score:5, Insightful)

    by Fordiman ( 689627 ) <fordiman @ g m a i l . com> on Monday March 05, 2007 @10:48AM (#18237138) Homepage Journal
    But think. There is definitely money in non-upgradable computers - especially in the office desktop market. The cheaper the all-in-one solution, the more often the customer will upgrade the whole shebang.

    Example: in my workplace, we have nice-ass Dells which do almost nothing and store all their data on a massive SAN. They're 2.6GHz beasts with a gig of RAM, a 160GB HD, and a SWEET ATI vid card each. Now, while I personally make use of it all proper-like, most people here could get along with a 1GHZ/512MRAM/16GHD/Onboard video system.

    I think Intel/AMD stand to make a lot of money if they were to build an all-in-one-chip computer, i.e. CPU, RAM, video, sound, network, and a generous flash drive on a single chip.
  • Re: huh? (Score:5, Insightful)

    by Dolda2000 ( 759023 ) <fredrik@dolda200 0 . c om> on Monday March 05, 2007 @10:56AM (#18237248) Homepage

    Still, you are right, all-in-one cpus are the future, we're just not quite there yet.

    Actually, no thank you. I've had enough problems ever since they started to integrate more and more peripherals onto the motherboard. I'd be troubled if I had to choose between a VMX-less, DDR3-capable chip with the GPU I wanted, a VMX- and DDR3-capable chip with a bad GPU, a VMX-capable but DDR2-only chip with a good GPU, a chip that has all three but an IO-APIC that isn't supported by Linux, or a chip that I could actually use but costs $500.


    Instead of gaining those last 10% of performance, I'd prefer a modular architecture, thank you. Whatever is so terribly wrong with PCI-Express anyway?

  • AMIGA! (Score:2, Insightful)

    by elrick_the_brave ( 160509 ) on Monday March 05, 2007 @11:24AM (#18237516)
    This sounds vaguely like the Amiga platform of years past (with a fervent following today still)... how innovative to copy someone else!
  • Re:huh? (Score:4, Insightful)

    by Archangel Michael ( 180766 ) on Monday March 05, 2007 @11:58AM (#18237944) Journal
    "most people here could get along with a 1GHZ/512MRAM/16GHD/Onboard video system."

    Haven't tried to run Vista yet... have you?
  • by J.R. Random ( 801334 ) on Monday March 05, 2007 @12:56PM (#18238822)
    There are basically two models of parallelism that are used in practice. One is the Multiple Instruction Multiple Data model, in which you write threaded code with mutexes and the like for synchronization. The other is Single Instruction Multiple Data, in which you write code that operates on vectors of data in parallel, doing pretty much the same thing on each piece of data. (There are other models of parallelism, like dataflow machines, but they don't have much traction in real life.) Multicore CPUs are MIMD machines, GPUs are SIMD machines. All those other processors -- physics processors, video processors, etc. -- are just SIMD machines too, which is why Nvidia and ATI could announce that their processors will do physics too, and why folding@home works so well on the new ATI cards. So I suspect that in real life there will be just two types of processors. At least I hope that is the case, because it will be a real mess if application A requires processors X, Y, and Z while application B requires processors X, Q, and T.
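
    A minimal sketch of that contrast, in plain standard C++ (the helper names mimd_sum and simd_saxpy are made up for illustration, not from the article or this thread): the MIMD half runs independent threads that synchronize through a mutex, while the SIMD half applies one operation uniformly across a vector -- the pattern GPUs and the other *PUs run across many lanes in lockstep.

    ```cpp
    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    // MIMD: each thread runs its own instruction stream over its own chunk
    // and synchronizes explicitly (here, a mutex guarding a shared total).
    double mimd_sum(const std::vector<double>& v, unsigned nthreads) {
        double total = 0.0;
        std::mutex m;
        std::vector<std::thread> workers;
        const std::size_t chunk = v.size() / nthreads + 1;
        for (unsigned t = 0; t < nthreads; ++t) {
            workers.emplace_back([&, t] {
                const std::size_t lo = t * chunk;
                const std::size_t hi = std::min(v.size(), lo + chunk);
                double local = 0.0;
                for (std::size_t i = lo; i < hi; ++i) local += v[i];
                std::lock_guard<std::mutex> lock(m);  // explicit synchronization
                total += local;
            });
        }
        for (auto& w : workers) w.join();
        return total;
    }

    // SIMD: the same instruction applied to many data elements; a GPU (or an
    // auto-vectorizing compiler) fans this loop out across parallel lanes.
    void simd_saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = a * x[i] + y[i];  // one operation, many data elements
    }

    int main() {
        std::vector<double> v(1000000, 1.0);
        std::cout << "MIMD sum: " << mimd_sum(v, 4) << "\n";

        std::vector<float> x(8, 2.0f), y(8, 1.0f);
        simd_saxpy(3.0f, x, y);
        std::cout << "SIMD saxpy y[0]: " << y[0] << "\n";  // 3*2+1 = 7
    }
    ```

    The first function needs explicit coordination between instruction streams; the second is pure data parallelism, which is why physics, video, and folding workloads all map onto the same SIMD-style hardware.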
  • Definitely. (Score:3, Insightful)

    by Svartalf ( 2997 ) on Monday March 05, 2007 @01:00PM (#18238876) Homepage
    I remember the Amiga. I remember how much more capable and powerful they were over the other "personal" computers of the day.

    It's a damn shame that Commodore couldn't market/sell their way out of a wet paper bag.
  • Re:huh? (Score:3, Insightful)

    by ChrisA90278 ( 905188 ) on Monday March 05, 2007 @01:23PM (#18239200)
    You are right: the distance, and therefore the communication time, is better if the device is closer. But putting the device inside the CPU means it is NOT as close to something else. One example is graphics cards. There, you want the GPU to be close to the video RAM, not close to the CPU. Another device is the phone modem (remember those); you want that device close to the phone line. Now let's look at new types of processors. A disk I/O processor that makes a database run faster? You would want that to be outside of the CPU. It should be located between the PCI bus (or other I/O bus) and the system RAM. Putting it inside the CPU will just cause more traffic on the CPU's bus.

    The reason you might want the processor to NOT be inside the CPU is to keep some data off the CPU's bus. A floating point processor is an example of something you do want inside the CPU, but a RAID chip is best outside the CPU. You need to decide case by case.
  • by Anonymous Coward on Monday March 05, 2007 @03:11PM (#18240716)
    This article ignores the main issue with GPU integration -- it's the memory, stupid. Current high-end GPU boards have an order of magnitude more memory bandwidth than the Torrenza socket provides. At least 75% of the cost of a graphics board is just the memory chips. Sure, you could put the whole lot on the motherboard, but all you're saving is the cost of a connector. As long as it makes sense for GPUs to have their own separate high-performance memory subsystem, it's going to make sense to have them as separate chips on a separate board. Since the cost of memory (bandwidth and latency) has not been decreasing as fast as the cost of CPU transistors in the past, it seems unlikely to do so in the future, so this seems unlikely to change.
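
    A back-of-the-envelope sketch of that bandwidth gap. The figures are assumptions for rough circa-2007 parts -- about a 384-bit GDDR3 board at ~1.8 GT/s versus a 16-bit HyperTransport 3.0 link at ~5.2 GT/s per direction -- not numbers taken from the article:

    ```cpp
    #include <iostream>

    // Peak bandwidth in GB/s: (bus width in bits / 8) * transfer rate in GT/s.
    double peak_gb_per_s(int bus_width_bits, double gtransfers_per_s) {
        return bus_width_bits / 8.0 * gtransfers_per_s;
    }

    int main() {
        // Assumed, approximate figures for illustration only.
        double gpu_board = peak_gb_per_s(384, 1.8);  // ~86 GB/s to local GDDR3
        double ht_link   = peak_gb_per_s(16, 5.2);   // ~10 GB/s per direction
        std::cout << "GPU local memory:  " << gpu_board << " GB/s\n";
        std::cout << "HyperTransport link: " << ht_link << " GB/s\n";
        std::cout << "Ratio: " << gpu_board / ht_link << "x\n";  // roughly 8x
    }
    ```

    Swap in whatever link and memory figures you prefer; the shape of the argument stays the same -- the local memory subsystem wins by close to an order of magnitude, which is the point above.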
