AMD Quad Cores, Oh My 423
Lullabye_Muse writes "From Engadget we learn that AMD has plans for putting 4 cores on one die by the time Apple has fully gone to Intel processors. Full story here. They say they could eventually have up to 32 cores with scalable technology, but most programs haven't even got the ability to hyperthread, so do we really need the extra cores?"
Hyperthreading (Score:2, Interesting)
Intel working on silicon laser to link cores (Score:5, Interesting)
Re:more cores, more heat (Score:4, Interesting)
Re:Must be a parallel universe you live in (Score:3, Interesting)
I'd even take a multi-core 1 GHz chip (with only a passive heatsink on it...) over a 3.x GHz part with its gas-powered 150K RPM turbine blower keeping enough air moving over it.
Oh, wait. I already have a dual-processor (2x833 MHz P3) server, and it's quite a bit more responsive than my single-CPU workstation. SCSI of course has something to do with that as well.
BEOS!!! (Score:3, Interesting)
Re:wicked (Score:3, Interesting)
Then you'll want to look into YAWS [hyber.org].
Basically, a web server written in Erlang, which supports lightweight processes and high concurrency. In other words, each connection is a completely separate process and shares no information with other processes except by message passing.
Also, a recent paper [www.guug.de] from the primary designer of Erlang, Joe Armstrong.
The key points are that Erlang process creation and message passing are orders of magnitude faster than Java/C# threads. Also, YAWS could handle dedicated traffic from 16 machines. It handled over 80,000 connections while maintaining 800 kB/s of traffic; Apache died around 4,000 connections. The key graphic is on page 4 of the paper. The red lines denote YAWS; notice how it maintains that bandwidth (even though particular connections may drop, the web server keeps chugging along). Threaded Apache is in green; process-forking Apache is in blue.
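The share-nothing, message-passing model the paper describes can be sketched outside Erlang too. Below is a minimal Python illustration (Python threads are far heavier than Erlang processes, so this only models the isolation, not the performance): each "connection" handler keeps all state local and communicates only through queues. All names here are made up for the example.

```python
import queue
import threading

def connection_handler(conn_id, inbox, results):
    """Each 'connection' keeps all state local and talks to the
    outside world only through queues (message passing)."""
    received = 0
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: connection closed
            break
        received += len(msg)
    results.put((conn_id, received))

def run_demo(num_connections=4):
    results = queue.Queue()
    inboxes, workers = [], []
    for cid in range(num_connections):
        inbox = queue.Queue()
        t = threading.Thread(target=connection_handler,
                             args=(cid, inbox, results))
        t.start()
        inboxes.append(inbox)
        workers.append(t)
    # deliver one request to every connection, then close them all
    for inbox in inboxes:
        inbox.put(b"GET / HTTP/1.0")
        inbox.put(None)
    for t in workers:
        t.join()
    return dict(results.get() for _ in range(num_connections))

if __name__ == "__main__":
    print(run_demo())
```

Because no handler touches another handler's state, one dropped connection cannot corrupt the rest, which is exactly the property that lets YAWS keep chugging along.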
Re:Yeah?!? Yeah?!? Well.... (Score:2, Interesting)
Until recently it was thought that the long pipelines were at fault. But the boys at X-bit Labs took a closer look at Intel's patents and did some detailed performance measurements.
Turns out that it goes further. The long P4 pipelines require "replay buffers" to reissue instructions with unresolved dependencies. With common patterns of instruction dependency, these buffers more often than not cause further performance loss and extra power dissipation.
See http://www.xbitlabs.com/articles/cpu/display/repl
Why wait.. it's already here? (Score:2, Interesting)
The last part, about programming architecture, is interesting reading. From job queuing.. to microkernels.. to streaming.. multi-cores are a very good way to do things. And on Cell.. they are all separate cores.. And a server with 14 of these in one box is coming soon..
http://techon.nikkeibp.co.jp/english/NEWS_EN/2005
It's pretty obvious why Intel and AMD are going multicore.. because it works.. and they have to catch up before they're left in the dust.
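The job-queuing idea mentioned above is the easiest of those patterns to show: throw independent jobs at a pool and let the OS spread them across whatever cores exist. A minimal Python sketch (all names invented for the example; `render_tile` stands in for any CPU-heavy job):

```python
from concurrent.futures import ProcessPoolExecutor

def render_tile(tile):
    # stand-in for a CPU-heavy job (shading, physics, transcoding...)
    x, y = tile
    return sum(i * i for i in range(x * y))

def render_frame(width=4, height=4, workers=4):
    """Queue every tile as an independent job; the process pool
    spreads them across however many cores are available."""
    tiles = [(x, y) for x in range(1, width + 1)
                    for y in range(1, height + 1)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(render_tile, tiles))

if __name__ == "__main__":
    print(render_frame())
```

The same code runs unchanged on 2 cores or 32; that scalability without rewriting is a big part of why the multicore bet works.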
Re:MULTIthreading != Hyperthreading (Score:4, Interesting)
It isn't Intel's technology either; Intergraph invented it, although Hyperthreading(TM) is Intel's branding of the idea. Alphas were supposed to get it; maybe EV7 has it, I'm not sure. It might have been something supposed to go into EV8.
Re:Doesn't have to be threads (Score:3, Interesting)
Even if CPU usage is at 100%, benchmarks have shown that interactive processes generally respond in under a millisecond. It's really impressive how a system can be under heavy load, yet you wouldn't be able to tell if it weren't for the network lights blinking like mad, the hard drive chattering, and the CPU temperature climbing.
The Hypervisor will use 'em, I tell ya! (Score:2, Interesting)
Consider this:
Imagine a PC where there is only the hypervisor directly accessing the hardware (and please, NOT one that also loads Outlook Express, IE7, WSH and Media Player). Now imagine all of your operating systems running on top of the hypervisor. All hardware is virtualized for these operating systems, right? So, your physical video card no longer needs a 3-d engine; in fact it doesn't need much more than a 2-d chip and enough memory to show all the pretty colors at whatever resolution is popular. Why, you ask? Because the 3-d rendering will be done by the *virtualized* 3-d card(s) in each virtual machine, and THAT, my friend, will take as many CPU cores on the host machine as you are able to give it. And, since virtual GPUs don't require foundries, it just might mean an Open Source video card. The key is to ensure that the virtualized "hardware" is modular enough to be replaceable.
It's the next step in the ongoing cycle between having the CPU do everything and offloading to specialized chips or subsystems. By virtualizing all of the "offloading" chips such as the GPU, 3-d wavetable synth, some networking functions, etc., the pendulum swings back toward centralizing all of the processing.
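That "modular enough to be replaceable" requirement boils down to defining a fixed device contract that guests see, behind which any implementation can be swapped. A toy Python sketch of the idea (all class and method names are hypothetical, not from any real hypervisor):

```python
from abc import ABC, abstractmethod

class VirtualGPU(ABC):
    """Contract every pluggable 'virtual card' must honor; the
    guest OS only ever sees this interface, never the real card."""
    @abstractmethod
    def draw_triangles(self, count: int) -> str: ...

class SoftwareGPU(VirtualGPU):
    # 3-d work done on host CPU cores, as the comment proposes
    def draw_triangles(self, count: int) -> str:
        return f"rasterized {count} triangles on the CPU"

class Framebuffer2D(VirtualGPU):
    # minimal 2-d-only device: just enough to show the pretty colors
    def draw_triangles(self, count: int) -> str:
        return "2-d only: no triangle support"

class Hypervisor:
    def __init__(self, gpu: VirtualGPU):
        self.gpu = gpu  # swap implementations without touching guests

    def guest_render(self, count: int) -> str:
        return self.gpu.draw_triangles(count)

if __name__ == "__main__":
    print(Hypervisor(SoftwareGPU()).guest_render(100))
```

Replacing `SoftwareGPU` with a better renderer (or an Open Source one) changes nothing for the guests, which is exactly what makes the virtual hardware modular.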
Re:Do we really need the extra cores? (Score:1, Interesting)
Re:Don't count the processes (Score:3, Interesting)
Tell that to the folks whose machines have been made completely unstable by filthware.
The kind of programmer you can get to write utterly unwanted, corrupt code tends to apply the same work ethic toward their employer. Getting a good programmer is difficult enough for honest companies.
Most of the spyware I have looked at has serious security issues; some of these may even be deliberate, a way of creating a deniable backdoor.
The spyware attempts to make itself impossible to uninstall. Often the programmers use O/S facilities that they do not properly understand to do so.