The CPU Redefined: AMD Torrenza and Intel CSI
janp writes "In the near future the Central Processing Unit (CPU) will not be as central anymore. AMD has announced the Torrenza platform, which revives the concept of co-processors. Intel is also taking steps in this direction with the announcement of CSI. With these technologies we will in the future be able to put special chips (GPUs, APUs, etc.) directly on the motherboard in a special socket. Hardware.Info has published a clear introduction to AMD Torrenza and Intel CSI and sneak peeks at the future of processors."
huh? (Score:4, Insightful)
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:3, Informative)
Thereby decreasing their cost effectiveness. 'Tis a vicious circle.
Re:huh? (Score:5, Insightful)
Example: in my workplace, we have nice-ass Dells which do almost nothing and store all their data on a massive SAN. They're 2.6GHz beasts with a gig of RAM, a 160GB HD, and a SWEET ATI vid card each. Now, while I personally make use of it all proper-like, most people here could get along with a 1GHz/512MB RAM/16GB HD/onboard video system.
I think Intel/AMD stand to make a lot of money if they were to build an all-in-one-chip computer, i.e. CPU, RAM, video, sound, network, and a generous flash drive on a single chip.
Re:huh? (Score:4, Insightful)
Haven't tried to run Vista yet
and build it inside an LCD monitor... like iMacs (Score:2)
or end-display Panasonics for sales displays.
Re:huh? (Score:5, Interesting)
The interesting thing about this whole co-processor approach is that the same interface used to connect multiple CPUs to each other is being opened up for other processing devices. This makes it possible to mix and match cores as desired. For example, you could build a mesh of multi-core CPUs in a more "normal" configuration, or you could mate each CPU with a DSP-like number cruncher and make a special-purpose "supercomputer". It will be interesting to see what types of compute beasts emerge from this.
Re:huh? (Score:5, Insightful)
The limits aren't such a big deal.
Quad-core processors are already rolling off the lines and user demand for them doesn't really exist.
They could easily throw together a 2xCPU/1xGPU/1xDSP configuration at similar complexity.
And the market would actually care about that chip.
Re: (Score:3, Informative)
Now, if there are other CPUs out there doing native quad-core for general-purpose computing, I'm unaware of them and stand corrected if so.
Re: (Score:2)
At the end of the day, nobody cares whether the CPU is "native" or not. If it's cheap, gives good performance, sucks less electricity, and fits in "one" socket, it's good enough for most folks.
Re: (Score:2)
Re: (Score:2)
My question is: what advantage will they get from plugging these coprocessors into CPU sockets?
Re:huh? (Score:5, Interesting)
One cool thing I discovered while I was learning to program was that you could make one of the coprocessors interrupt when the electron beam of the monitor was at a certain position. Pretty nifty.
BTW, for those who are too young/old to remember, those were the days of DOS, and friends of mine were bragging about their 16-color EGA cards. Amiga had 4096 colors at the time.
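For anyone curious what that looked like from the CPU side, here's a minimal C sketch of polling the Amiga's beam-position registers. The register addresses and bit layout ($DFF004/$DFF006) are from memory and should be treated as assumptions; real code usually let the Copper coprocessor WAIT on a beam position in hardware instead of busy-polling like this.

/* Sketch: busy-wait until the video beam reaches a given scanline.
   Register addresses/bit layout are assumptions from memory:
   VPOSR ($DFF004) bit 0 = V8, VHPOSR ($DFF006) bits 15-8 = V7-V0. */
#include <stdint.h>

#define VPOSR  ((volatile uint16_t *)0xDFF004)
#define VHPOSR ((volatile uint16_t *)0xDFF006)

static unsigned beam_line(void) {
    return ((*VPOSR & 1u) << 8) | (*VHPOSR >> 8);
}

void wait_for_line(unsigned line) {
    while (beam_line() != line)
        ;  /* spin until the raster reaches the target scanline */
}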
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Tom
Re: (Score:2)
Re: (Score:2)
But think about it: if you are, say, 0.5 frames out of sync with the display, then only half of the display will actually have "live" content. The result (especially with movement) will look very distorted. So even with modern displays, you need to sync to vsync and draw only during vblank (or draw during the frame but use frame switching).
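A minimal sketch of what that means in code; draw_scene, wait_for_vblank and show_buffer are hypothetical platform helpers, not a real API -- the point is only the ordering: render off-screen, then flip during the blank.

/* Double buffering: draw into the hidden buffer, flip during vblank.
   All three helpers are hypothetical stand-ins for platform calls. */
extern void draw_scene(int buffer_index);   /* render one frame      */
extern void wait_for_vblank(void);          /* block until blanking  */
extern void show_buffer(int buffer_index);  /* page-flip the display */

void render_loop(void) {
    int front = 0;                  /* buffer currently being shown  */
    for (;;) {
        int back = 1 - front;       /* draw into the off-screen one  */
        draw_scene(back);
        wait_for_vblank();          /* sync to the display refresh   */
        show_buffer(back);          /* flip while nothing is drawn   */
        front = back;
    }
}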
LCDs are not instantaneous either. Try filming an LCD without the correct shutter. You'll see the same annoying banding.
Re: (Score:2)
EGA displays could use 16 out of 64 if I'm not mistaken.
Ahh those were the days
Re: (Score:3, Informative)
Re:huh? (Score:5, Interesting)
Amiga had 4096 colors at the time.
Better put "4096" with a "*" qualifier. You couldn't assign each pixel an exact color - the scheme got you more colors by letting you set a bit that said the next pixel modifies the previous pixel by "x". In this way, they could get more colors using less memory than traditional X-bits-per-color-per-pixel schemes (the Amiga was a bitplane architecture).
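A hedged C sketch of the scheme being described (HAM6, hold-and-modify): the top two bits of each 6-bit pixel choose between a palette lookup and holding the previous pixel while modifying one component. The struct and palette layout are simplified assumptions, not the real bitplane representation.

/* HAM6 decode sketch: 6 bits per pixel, 4-bit color components.
   This flattens the Amiga's bitplane layout into plain bytes for
   clarity -- an assumption, not how the hardware stored it. */
#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;   /* each component 0-15 */

RGB ham6_decode(uint8_t pixel, RGB prev, const RGB palette[16]) {
    uint8_t ctrl = (pixel >> 4) & 0x3;     /* top 2 bits: mode      */
    uint8_t data = pixel & 0xF;            /* low 4 bits: payload   */
    RGB out = prev;                        /* "hold" previous pixel */
    switch (ctrl) {
        case 0: out = palette[data]; break;  /* palette lookup      */
        case 1: out.b = data; break;         /* "modify" blue only  */
        case 2: out.r = data; break;         /* "modify" red only   */
        case 3: out.g = data; break;         /* "modify" green only */
    }
    return out;   /* 3 x 4 bits = the 4096-color space */
}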
Anyway, back on topic, I wish the CPU manufacturers could finally come up with a "generational" standard socket. A well-designed module socket should last as long as an expansion slot standard (ISA, PCI, PCIe) and not change for damn near every model of chip. I should be able to go out and get a 1-, 2-, 4-, or 8-socket motherboard and stick any CPU/GPU/DSP module into it I want. Can we please finally shitcan the 1980s motherboard designs?
Re: (Score:2)
Re: (Score:2)
That assumes you are plugging in individual processors. This is why I specified "module". Using high-speed serial channels, like PCI Express does, you can reduce the pin count on the module level considerably. I would assume that the module would also contain a certain amount of local memory.
Re: (Score:3, Interesting)
Folks with 16-bit PCs were bragging about their 16-out-of-64-color EGA cards and single-tasking OSs when even the simplest Amigas had 32-bit processors, 32 out of 4096 colors, PCM audio, and a fully multi-tasking OS coupled with a GUI.
As for the "processor socket", there are people selling computers that go into passive backplanes. If you put the CPU and memory in a card, there is little reason why you would have to upgrade the rest of the computer when you change the CPU (you would just swap the card).
Re: (Score:2)
Put the CPU, chipset and RAM slots on the "processor card"; that way the only reason to upgrade a motherboard would be if new slots were introduced (PCI-Express, etc.) and you actually needed them.
Isn't this called "passive backplane" or something? If it already exists for some systems, why not desktop computers?
Re:huh? (Score:4, Interesting)
Early high-end computer systems started out like this, utilizing backplanes like VME. They've been phased out, because ultimately that modularity was too expensive, and because the shared-bus architecture hurt performance. Hardware devices that used to require multiple cards can now fit on a single chip and have their own PCIe drop to increase performance. Memory upgrades that used to require multiple cards just to reach 1MB are now eclipsed by 8- and 16-chip configurations on a single DIMM (a specialized expansion slot), which has its own bus to improve performance.
Let's say they went with the single-board computer design (CPU + memory + bus controller) - now your costs go up, because you have to build multiple "processor cards" for all the different backplanes you want to plug into. ISA backplane - 1 model. PCI backplane - 1 model. PCI + ISA backplane - 1 more model, and it also requires a new specification: the new bus designs have to play nice with the limited I/O space at the back of the card, so you end up either making the bus connector larger or making certain bus combinations impossible.
With the motherboard-and-attached-bus design, your costs go down because you can provide a mixture of the buses that are the most popular. Thus, you only have one product to design and electrically verify, and only one manufacturing line to test.
Also, when you move to point-to-point architectures like PCI-Express, with a separate backplane you really limit yourself to the slot configurations you can offer. Unlike with a shared bus, with P2P interconnects you have to make sure the backplane layout matches the connector layout exactly. This means you either standardize on ONE configuration (boring), or you put the ports on the processor card (what we are doing).
The only places that still use modular bus designs today are embedded developers, and that's because they still need the expandability and modularity that end users do not. They also need the backward compatibility afforded by these old bus specifications (VME especially). They pay for it in terms of performance - most of them bypass the slow backplane of VME or CompactPCI with faster interconnects like Gig/10GigE, Fibre Channel, RapidIO or InfiniBand.
Amiga love.......memories......... (Score:3, Funny)
An Amiga 1000, Deluxe Paint, Flight Simulator, Amiga Basic and 2 Meg of RAM = $3500.
Later got the Sidecar for DOS, and Earl Weaver Baseball. Ahhhhhhhh.
20 years later, and no hardware or software has given me such joy.
NVidia, Matrox, ATI, AMD, Intel, WTF?
The Amiga showed you how 20 years ago, and you are just now getting around to it?
Bring back multi-resolution windows, bitches!
Re: (Score:2)
Unfortunately this isn't as practical as it sounds. As technologies to increase performance continue, the socket technology would quickly become a bottleneck. If we were to go back 10 years (1997), our boards would have 3.3V CPUs, 64-bit memory buses...
Re: (Score:2)
Kaypro and Zenith both offered PCs that let you swap out CPU cards. I have a great poster in my office for the Kaypro PC that says "The End of Obsolescence". It wasn't.
The problem is that the upgrades tended to cost as much as a new computer.
You have a smaller potential market, and the cost for the new CPU board is so close to a new motherboard that it just isn't worth it.
Then you add the improvements in memory systems and as you can see it just doesn't work out.
Re:huh? (Score:4, Interesting)
That's what he's describing, but I don't believe for a second that's what it's going to be...
I don't believe for a second practically ANYONE is going to buy an expensive, multi-socket motherboard, just so they can have higher-speed access to their soundcard... Ditto for a "physics" unit.
This exists solely because CPUs are terrible at the same kinds of calculations ASICs/FPGAs are incredible at. That will be the only killer app here.
Video cards are a good example on their own. CPUs are so bad, and GPUs are so good, that transferring huge amounts of raw data over a slow bus (AGP/PCIe) still puts you far ahead of trying to get the CPU to process it directly. And it works so well, the video card companies are making it easier to write programs to run on the GPU.
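To put rough numbers on that, here's a back-of-envelope C sketch; every constant in it is an illustrative assumption, not a measurement. Even after paying the bus-transfer cost, the GPU's throughput advantage wins easily once the workload does real arithmetic per byte.

/* Back-of-envelope offload model: all numbers are assumptions. */
#include <stdio.h>

int main(void) {
    double data_mb        = 256.0;  /* raw data to process          */
    double bus_gb_s       = 4.0;    /* assumed effective bus speed  */
    double cpu_gflops     = 10.0;   /* assumed CPU throughput       */
    double gpu_gflops     = 300.0;  /* assumed GPU throughput       */
    double flops_per_byte = 20.0;   /* arithmetic intensity         */

    double bytes = data_mb * 1e6;
    double flops = bytes * flops_per_byte;
    double t_cpu = flops / (cpu_gflops * 1e9);
    double t_gpu = flops / (gpu_gflops * 1e9)    /* compute on GPU    */
                 + bytes / (bus_gb_s  * 1e9);    /* plus bus transfer */

    printf("CPU: %.3f s   GPU incl. transfer: %.3f s\n", t_cpu, t_gpu);
    return 0;
}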
And GPUs aren't remotely the only case of this. MPEG capture/compression cards, Crypto cards, etc. have been popular for a very long time, because ASICs are extremely fast with those operations, which are extremely slow on CPUs.
The situation is much more like the x87 math co-processors of years past than it is like the Amiga, with independent processors for everything.
It is likely that, in time, integrating a popular subset of ASIC functions into the CPU will become practical, and then our high-end video cards will be simple $10 boards, just grabbing the already-processed data sent by the chip, and outputting it to whatever display.
Then maybe AMD and Intel will finally focus on the problem of interrupts...
Definitely. (Score:3, Insightful)
It's a damn shame that Commodore couldn't market/sell their way out of a wet paper bag.
Re:huh? (Score:5, Interesting)
The same applies to trying to integrate GPUs into the CPU, at the moment a top-end GPU is too large and expensive to integrate, and not everyone needs one. The move to having a GPU in a CPU socket should cut a lot of cost because the GPU manufacturers won't have to create an add-in-card to go with the GPU, they can just design the chip to plug straight into a standardised socket.
At the same time, low-end GPUs are small and cheap enough that they are being integrated into motherboards, so integrating a basic GPU into the CPU seems like a good next move, and the major CPU manufacturers seem to agree. IIRC Via's smallest boards integrate a basic CPU, northbridge and GPU into one chip? AMD are definitely planning it with their aptly named "Fusion". *Checks Wikipedia* Yeah, Via's is called "CoreFusion".
Still, you are right, all-in-one CPUs are the future; we're just not quite there yet.
Re: (Score:2)
Re: (Score:2)
Smartphones will (IMHO) evolve into wireless portable computing devices that "oh yeah, can make phone calls too," but the problem is still that the screen is WAY too small, and user input still sucks. Maybe they will finally be able to make LCD-like glasses that really are high-resolution, and maybe they will come up with a neural interface so we can ditch the keyboard/mouse... But I don't see those things being practical within the near future.
I have a phone dock for that already (Score:2)
I turn on the WiFi and my phone is part of the internet.
It serves its flash/RAM/ROM/SD via the 9p protocol.
My terminal can boot from it, if I want it to; yours could too, if you were in my authentication server.
I can store encrypted data on it that is useless without TCP access to the dock.
Re: (Score:2)
Re: huh? (Score:5, Insightful)
Actually, no thank you. I've had enough problems ever since they started to integrate more and more peripherals onto the motherboard. I'd be troubled if I had to choose between either a VMX-less, DDR3-capable chip with the GPU I wanted, a VMX- and DDR3-capable chip with a bad GPU, a VMX-capable but DDR2 chip with a good GPU, a chip that has all three but an IO-APIC that isn't supported by Linux, or a chip that I could actually use but costs $500.
Instead of gaining those last 10% of performance, I'd prefer a modular architecture, thank you. What is so terribly wrong with PCI-Express anyway?
looks like a revamped AMD 4x4 (Score:2, Interesting)
This would make sense to me. Right now when I upgrade my video card, I throw out the RAM, GPU, and integrated circuitry of the entire package to replace everything with the new video card upgrade (which happens every 6 months or so).
Re: (Score:2)
So one is replacing one core of a dual-core cpu with a gpu and the other is replacing one cpu of a dual-cpu machine with a bigger gpu, with little change in power or cooling requirements in either case.
Re: (Score:3, Informative)
The Intel 8086 had the Intel 8087 [wikipedia.org]
A whole collection of Intel FPU's is at Intel FPU's [cpu-collection.de]
TI's TMS34020 (a programmable 2D rasterisation chip) had the TMS34082 coprocessor (capable of vector/matrix operations).
(Some pictures here [amiga-hardware.com].) Up to four coprocessors could be used.
Now, both of these form the basis of a current day CPU and GPU (vertex/geometry/pixel shader units).
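As a small illustration of that lineage: the 8087's instructions still exist on x86, and you can invoke them directly. A sketch assuming GCC-style inline assembly (the "=t" constraint means the top of the x87 register stack); modern compilers would normally emit SSE instead.

/* Sketch: calling the x87 FPU (the 8087's descendant) directly.
   Assumes GCC inline asm on an x86 target. */
#include <stdio.h>

static double x87_sqrt(double x) {
    double r;
    __asm__("fsqrt" : "=t"(r) : "0"(x));  /* FSQRT dates to the 8087 */
    return r;
}

int main(void) {
    printf("sqrt(2) = %.15f\n", x87_sqrt(2.0));
    return 0;
}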
Re: (Score:2)
Re: (Score:3, Insightful)
Great, so now instead of spending a couple of hundred to upgrade just my CPU or just my GPU, I'll need to spend four, five, six hundred to upgrade both at once, along with a "S[ound]PU", physics chip, etc?
Never happen. Corporations aren't going to want to spend hundreds of pounds more on machines with built-in high-end stuff they don't want or need. At home, I want loads of RAM, processing power and a strong GPU. At work, I absolutely don't.
Re: (Score:2, Interesting)
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
Actually, there was an evolution of processor design into single, monolithic processing units; until well into the '70s it was hardly uncommon for computers to have all sorts of processing units (remember, the "CPU" is the "Central Processing Unit"). Of course, in this case I'm primarily talking about mainframes; one of the distinctions of the minicomputer (and later the microcomputer) was that "everything was together" in the CPU. But even then, the systems didn't really have everything in one place.
Re: (Score:2)
You misunderstand displacement. First of all, there are only so many chips you can pack into a CPU die before bandwidth and memory issues become a problem; high-memory-bandwidth devices will spam the communications channel waiting for data. It really is a displacement issue.
HTX (Score:2)
CSI? (Score:5, Funny)
Re:CSI? (Score:5, Funny)
Well, clearly, they won't. They're decentralised.
New on NBC, "CSI: Wherever". We even have a song by The Who for the opening credits - "Anyway, Anyhow, Anywhere".
AMD competes with... (Score:5, Funny)
Re: CSI? (Score:2)
Re: (Score:2)
[minirant]Stupid David Caruso[/minirant]
Previous announcements (Score:4, Informative)
IBM and Intel Corporation, with support from dozens of other companies, have developed a proposal to enhance PCI Express* technology to address the performance requirements of new usage models, such as visualization and extensible markup language (XML).
The proposal, codenamed "Geneseo," outlines enhancements that will enable faster connectivity between the processor -- the computer's brain -- and application accelerators, and improve the range of design options for hardware developers.
http://www.intel.com/pressroom/archive/releases/2
Re:Previous announcements (Score:5, Funny)
Since it became bloatware that is capable of wasting 90% of the processing power of a modern computer.
</sarcasm>
Re: (Score:2)
They should use Javascript instead!
*ducks*
Retro-innovation (Score:5, Informative)
Amiga had all processors on the main board (Score:2)
It had custom-designed processors for sound and video on the motherboard.
And then it was sold together with a fitting OS, so you got computer and software as a complete functioning machine instead of the many loose ends in a PC.
Re: (Score:2)
Seriously, I did, and it's feeling just like the old days.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Really, if GPUs and sound chips are sufficient for a comparison to the Amiga's chipset, then PCs have been doing that for at least as long as Macs.
It's not clear to me why this article is about something more Amiga-like than what modern computers already have (especially since GPUs are fully programmable). The difference in this news is that the chips can be put on the motherboard via a standard socket - but it was never the case with the Amiga that you could plug in whatever chips you wanted; you just got what Commodore soldered in.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I moved from an Amiga 1200 to an iMac in 1999. Never had a PC in the house (except, perhaps, the bridgeboard on the A2000, which, back then, made me wonder what all the fuss about PCs was).
Re: (Score:2)
All this is really doing is bringing a more standardised set of co-processors onto the mobo rather than any number of 3rd-party ones - it would make it much easier to keep the OS stable if you have a more controlled number of architectures to deal with.
On the downside, if these processors were DRM-hobbled, it would make life harder too.
Interesting (Score:3, Interesting)
So the options are to have more slots, or make something I like to call an 'interface card'. See, there'll be these slots on the motherboard that cards fit into... wait, don't we have this already?
And more slots isn't really an option, because the computer would end up being massive with all the cooling fans and memory slots. (Which are apparently separate for each PU.)
I kind of hope I get proven wrong on this one, but I don't think this is such a great idea. Just very interesting. Having 16 slots and being able to say you want 4 AIPUs, an APU, 4 GPUs, 3 PPUs, and 4 CPUs on my gaming rig and 1 GPU, 1 APU, and 14 CPUs on my work rig would be awesome.
Re: (Score:2, Interesting)
Maybe if a motherboard featured a very large generic socket to which was attached one cooling solution, it'd work out better. Processing Units, which would be smaller so as to fit as many as possible, would be able to go anywhere in this socket (in a grid-aligned fashion). Easiest solution: the socket is an X*X square grid, and all PUs must be, say, X/2 (or hopefully X/4) squares, which can be arranged in any fashion. Plunk them in, reattach cooling over all of them, boot, and enjoy that 4-CPU, 2-GPU, 2-FPU configuration.
Re:Interesting (Score:5, Interesting)
Perhaps the better thing to do would be better slot designs (not that we need more with all the PCI flavors floating around right now) with integrated, defined cooling channels. If you were to make the card spec with a box design rather than a flat card, you could have a non-connector end mate with a cooling trunk and use a squirrel-cage fan (higher volume, quieter, more efficient) to ventilate the cards.
Re: (Score:2)
Re: (Score:2)
Adding a connector means you will have more noise. Using chips means they both have a shorter path and are electrically better connected.
Most solutions need only two things: a processor and memory. Everything else you see on the card is mostly there for I/O (video cards have RAMDACs, for example, or whatever the chip that handles digital output is called).
Re: (Score:2)
In my opinion, what the slots lose in path length and electrical characteristics, they make up for in flexibility.
Re: (Score:2)
If all the chips use HT to speak to one another, and all the chips use the same package, then you can put EITHER a CPU or another type of processor in it.
Re: (Score:2)
Unification (Score:2)
Motherboard chipsets are becoming the union of a lot of functionality (disk, Ethernet, sound, USB, PCI/e and graphics). Even though you can still get best-of-breed add-in cards for many of these functions, the majority of desktop systems do just fine with what the chipset offers.
Amiga? (Score:3, Insightful)
Re: (Score:2)
The diehard Amiga fans were thinking, "This would really work well if the bus ran faster than any of the cores."
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It's been done before, and with great success. Too bad it took 23 years for the rest of the industry to catch up!!
Slashdot could benefit from a co-processor... (Score:5, Funny)
Slashdot's computers might benefit from a co-processor whose function is to monitor and correct spelling and grammar errors. It would do an editor's job, only better, because, you know, it might actually work.
(Bye-bye karma!)
Amiga v2? (Score:2)
Where have you been, (Score:2)
I mean, do you ever watch American Idol, hello!
EOISNA (Score:3, Insightful)
As rumored, first adopted by the porn industry (Score:3, Funny)
Re:As rumored, first adopted by the porn industry (Score:4, Funny)
AI? For porn? You have seen porn before, right?
Cell Clusters (Score:4, Interesting)
These little bastards are inherently distributed computing: a microLAN of parallel processors, linkable in a microInternet.
Imagine a Beowulf cluster of those! No, really: a Beowulf cluster of Cells [google.com].
Re: (Score:2)
can someone explain? (Score:2)
AMIGA! (Score:2, Insightful)
You can buy Torrenza today (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
Really just two types of processors (Score:4, Insightful)
Cyclical (Score:2)
Just plug in a spellchecker Co-Processor (Score:2, Funny)
Re: (Score:3, Funny)
It's nice to see you've finally caught up with all the people that have made an Amiga comment.
let's think how many ways this is NOT like Amiga. (Score:2)
2) Amiga had multiple processors, but they were all Commodore parts, and soldered in. We're talking about bus standards, and ISAs, and your choice of vendors and upgradability and all that stuff which is more difficult to spec-out AND get buy-in for. It's not a vendor stovepipe.
Hell, a friggin' SNES has 4 coprocessors, and a TurboGrafx-16 had like 6, but you don't see people comparing THAT to PCs or Amigas or anything.
Re: (Score:2)
Except that the (single-source) chips won't be soldered into the motherboard.
On the Amiga, this caused perversions like the blitter (fast memory copying) chip eventually becoming slower than the CPU at copying memory. There was no way to pop in a new, faster video chip or blitter chip. If you wanted a better rig, your only recourse was to head over to West Chester and join everyone else in begging Commodore to design one.
I like this modular (co)processor idea way better.