Quad Core Chips From Intel and AMD 412
lubricated writes "According to the San Francisco Chronicle, in an effort to one-up AMD, Intel will be coming out with 4-core CPUs in 2007." From the article: "Chips with two cores have been the latest rage, with both Intel and AMD selling those microprocessors as their high-end offering. Apple Computer Inc.'s new iMac, which started selling last month, uses the dual-core chip ... Not to be outdone, Randy Allen, AMD's corporate vice president of the server and workstation division, said Friday that his firm is working on its own quad-core processor for release next year."
Re:The new race (Score:4, Interesting)
Really. It does seem that there's only so much that can be done to increase the clock. I hope this gives an impetus to improve multi-CPU software performance.
Multi-cores (Score:3, Interesting)
Re:The new race (Score:2, Interesting)
Yes, but therein lies the problem. With the gigahertz race, you were sure to enjoy the benefit. With multiple cores, you need software that can actually use those cores, am I wrong (I'm not really sure of what I'm talking about)? And so far we can't always fully exploit multiple cores, can we?
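The parent's worry is real: a serial program gains nothing from extra cores, and the work has to be explicitly divided up. A minimal sketch (my own illustration, not from the article) of splitting a computation across cores with Python's multiprocessing module:

```python
from multiprocessing import Pool

def sum_squares(chunk):
    """Sum of squares over one chunk -- the per-core unit of work."""
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    """Split [0, n) into roughly equal chunks and farm them out
    to a pool of worker processes, one per core."""
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(sum_squares, chunks))

if __name__ == "__main__":
    # Only work that can be partitioned like this scales with core count;
    # code with serial dependencies sees no benefit from the extra cores.
    print(parallel_sum_squares(1000))
```

The answer matches the plain serial loop; the only thing the extra cores change is how long it takes when the per-chunk work is heavy.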
Good for SGI and Sun. (Score:2, Interesting)
IRIX and Solaris are known to scale far beyond 4 processors. They're proven technologies that are known to work very well on multiprocessor systems.
SGI could easily use this to their advantage, releasing affordable systems that offer the benefit of IRIX on such machines. If they can come out with a system that appeals to developers and business users, then they could take on Apple, Sun, Dell and others again.
Sun, of course, already offers Opteron-based workstations. A dual CPU entry-level system, with four cores per CPU, could be quite useful. When you factor in the superb quality of Solaris, we could really see some truly fantastic workstations, at a very affordable price.
Re:This is pure hype (Score:2, Interesting)
Octacore (Score:4, Interesting)
Bandwidth (Score:3, Interesting)
Re:When will Microsoft change its license? (Score:5, Interesting)
Also, this does not really eat into MS's bottom line compared to Oracle or IBM. Most of MS's revenue comes from the desktop, while they are just competing in servers. SQL Server suddenly becomes more attractive, given Oracle's complicated multi-core policy [sun.com]. (Remember that Oracle earlier announced that every core is a CPU; it's only recently that they realized it would be a disaster and modified their original plans.)
Earlier, CPU speeds doubled every 18 months. Multi-core simply takes another approach to achieving the same thing. I am not sure how this will hurt software companies any more than increasing cycles/sec did.
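To see why per-core licensing policy matters so much here, a quick sketch with hypothetical numbers (the prices and core factors are made up for illustration; real Oracle licensing terms differ):

```python
def license_cost(sockets, cores_per_socket, price_per_unit, core_factor=1.0):
    """Licensing bill when each core counts as `core_factor` of a CPU.

    core_factor=1.0 models the original 'every core is a CPU' stance;
    a fractional factor models the later, softened multi-core policies.
    """
    units = sockets * cores_per_socket * core_factor
    return units * price_per_unit

# Hypothetical: a 2-socket, quad-core box at $10,000 per CPU license.
full = license_cost(2, 4, 10_000, core_factor=1.0)   # every core billed: $80,000
half = license_cost(2, 4, 10_000, core_factor=0.5)   # cores discounted: $40,000
```

Same hardware, same software, and the bill doubles or halves purely on how a "CPU" is defined -- which is exactly why vendors had to back off the core-equals-CPU rule once quad-core parts appeared.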
SUN had 8 core CPUS in 2005 (Score:3, Interesting)
http://www.sun.com/servers/coolthreads/overview/i
Re:OpenSolaris and DragonFly won't take the lead (Score:4, Interesting)
As for DragonFly, I do think that Matt was right about some problems with FreeBSD 5 and 6, but each release is getting faster. 6.1 beta is noticeably faster. DragonFly isn't revolutionary, though. I think some of the ideas from Mach inspired some of their design decisions, and we all know Apple has the most successful Mach kernel in the commercial world.
I don't know if we'll see FreeBSD or DragonFly look super impressive on multicore CPUs, but I do know that OpenBSD and NetBSD may not scale well, depending on what they are working on. I can tell you that FreeBSD 6 does fine on my dual Xeon machine. Solaris 10 on the same system seemed slightly slow, but I think that was driver support more than anything. Linux IS SLOW on the system regardless of the scheduler. For that OS class, I had to install the 2.6.15 kernel and custom-compile it for my system prior to our work on adding system calls. It's not as fast (Gentoo) as FreeBSD 6 was on the system, but faster than FreeBSD 5.x (especially disk I/O). I don't know why Linux seems slow, as it is using both CPUs quite well. Of course, this is perceived speed... I haven't done any formal benchmarks.
Maybe someone should do a serious benchmark on FreeBSD 6, NetBSD 3, DragonFly (whatever the latest is), OpenBSD 3.8, Linux 2.6.15 (Gentoo distro?), and, for kicks, OpenDarwin, all running on the same dual-core hardware. Hell, if I get time this summer, I might do it.
How many have seen / used the Sun T1 processor (Score:3, Interesting)
A nicely loaded Sun T2000 system, with 8 cores, 32GB of RAM, dual 2GB FCAs and 8 Gigabit Ethernet interfaces, comes in at a street price of approximately $30K. Add in Solaris 10 with its container technology, the fact that it only draws 325 watts of power and is light on BTUs, and we're talking a serious datacenter contender for web services, application servers, database servers, etc...
I'm currently looking at consolidating approximately 20 aging systems using over 10KW of power and close to 20K BTUs/hr of thermal output. I am planning on replacing these 20 systems with 4 T2000 servers totaling about 1.5KW and approximately 5K BTUs/hr of thermal output. Not only will I be saving on maintenance for the hardware, but also on software licensing, as common applications like Oracle and BEA are licensing their products at 1/4 CPU per core on these processors. I will also be saving on power and cooling requirements for the datacenter. Not to mention datacenter floor space -- I will be able to empty 2 full racks with this consolidation project. I'm hoping to expand it and end up with 1 rack of T2000s replacing close to the entire datacenter's UNIX population.
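The power arithmetic above is easy to check. A small sketch using the parent's figures (the 500W-per-old-box number is an assumption derived from "20 systems, over 10KW"; the 325W figure is the quoted T2000 draw):

```python
def consolidation_savings(old_count, old_watts_each, new_count, new_watts_each):
    """Wall-power saved by replacing old_count boxes with new_count boxes."""
    old_total = old_count * old_watts_each
    new_total = new_count * new_watts_each
    return old_total - new_total

# Assumed: 20 legacy systems at ~500W each (10KW total) replaced by
# 4 T2000s at 325W each (1.3KW total).
saved = consolidation_savings(20, 500, 4, 325)  # 10,000W - 1,300W = 8,700W
```

That is an 87% reduction in draw before even counting the matching drop in cooling load, which is why the earlier "1,500KW" figure had to be a typo for roughly 1.5KW.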
The one thing I don't do, is video transcoding... (Score:3, Interesting)
Neither of these tasks is important for a desktop user. Large software compiles, scientific analysis -- these are specialized tasks, and sure, multiprocessor machines will help. The market for these, however, is specialized and limited.
IMO, there will be external specialized processor blocks for these kinds of tasks. For example, an external FFT processor board would make a great deal of sense. It's a well-known algorithm, used in a huge variety of analysis, and is easy to predefine with parameters. Think of a PCI-Express board with its own set of RISC processors dedicated to performing FFT transforms, in the same way modern graphics work is offloaded. You define in the driver the frame size, domain size, etc., and get back the results as fast as you can pass them in.
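The FFT really is that well defined -- the whole kernel fits in a few lines, parameterized only by frame size, which is what makes it a natural fit for a fixed-function offload board. A minimal radix-2 Cooley-Tukey sketch for reference:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.

    The algorithm is fixed and parameterized only by the frame size,
    which is exactly why it maps well onto a dedicated offload processor:
    the host just streams frames in and results out.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # FFT of even-indexed samples
    odd = fft(x[1::2])    # FFT of odd-indexed samples
    # Combine halves with the twiddle factors e^(-2*pi*i*k/n).
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])
```

An impulse input [1, 0, 0, 0] comes back as a flat spectrum of all ones, and a constant input [1, 1, 1, 1] concentrates in the DC bin -- the standard sanity checks.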
The point is, what holds back solutions like this the most -- is the front side bus.
Re:The new race (Score:1, Interesting)
In large part, this is why multi-core design is a path of diminishing returns: it's been pretty well shown that beyond 16 cores/processors there isn't much room for improvement, except in very highly specialized OSes and applications. Quad-core chips in a 4-socket board already offer all the cores that most applications, OSes, etc. would have a hard time figuring out a way to get better performance from.
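The diminishing-returns effect the parent describes is the textbook Amdahl's law ceiling: the serial fraction of a workload caps the speedup no matter how many cores you add. A quick sketch (the 90%-parallel figure is an illustrative assumption, not a measurement):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n).
    The serial fraction (1 - p) is untouched by extra cores,
    so it bounds the achievable speedup at 1 / (1 - p)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With code that is 90% parallel, 16 cores yield only ~6.4x,
# and no number of cores can ever exceed 10x.
sixteen = amdahl_speedup(0.9, 16)
```

Past 16 cores the curve is nearly flat for typical applications, which is why only highly specialized workloads (near-100% parallel) keep scaling.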
Re:The new race (Score:3, Interesting)
The rationale for multi-core CPUs is that 4 slow cores can do the same amount of work as a fast single-core system and consume less power. A related rationale is that you can't get a single-core system that's 4X faster per core than a 4-core system. With those two exceptions, there is nothing to prevent what you're proposing.
The CDC 6000 series peripheral processor had an interesting twist on the virtual core: the ALU was time-sliced among 10 register sets so that it appeared to be 10 processors. The Sun Niagara does something similar -- each of the 8 cores has 4 register sets, allowing for very rapid switching between threads. I recently saw a Usenet posting stating that a single T2000 performed twice as fast as a dual-Xeon box -- and the Niagara CPU uses less power than a single Xeon.
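The time-slicing scheme above (often called barrel processing) amounts to rotating one execution unit across the register sets, issuing one instruction per thread per turn. A toy model of that interleaving (purely illustrative; real hardware skips stalled threads and does this per clock cycle):

```python
def interleave(threads):
    """Rotate one 'ALU' round-robin across per-thread instruction streams,
    issuing one instruction per thread per rotation -- the CDC peripheral
    processor / Niagara style. Returns (thread_id, instruction) in issue order."""
    issued = []
    cursors = [0] * len(threads)       # next instruction index per thread
    remaining = sum(len(t) for t in threads)
    i = 0
    while remaining:
        t = i % len(threads)           # whose register set holds the ALU now
        if cursors[t] < len(threads[t]):
            issued.append((t, threads[t][cursors[t]]))
            cursors[t] += 1
            remaining -= 1
        i += 1
    return issued
```

With each thread waiting on memory most of the time, this keeps the shared ALU busy on every rotation, which is how Niagara hides latency instead of chasing clock speed.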