AMD Intel Hardware

Quad Core Chips From Intel and AMD 412

lubricated writes "According to the San Francisco Chronicle, in an effort to one-up AMD, Intel will be coming out with four-core CPUs in 2007." From the article: "Chips with two cores have been the latest rage, with both Intel and AMD selling those microprocessors as their high-end offering. Apple Computer Inc.'s new iMac, which started selling last month, uses the dual-core chip ... Not to be outdone, Randy Allen, AMD's corporate vice president of server and workstation division, said Friday that his firm is working on its own quad-core processor for release next year."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Re:The new race (Score:4, Interesting)

    by Jeff DeMaagd ( 2015 ) on Saturday February 11, 2006 @09:24PM (#14697180) Homepage Journal
    Say bye to the race to the Gigahertz. Say hello to the race to the core count

    Really. It does seem that there's only so much that can be done to increase the clock. I hope this gives an impetus to improve multi-CPU software performance.
  • Multi-cores (Score:3, Interesting)

    by acslat3r ( 848858 ) on Saturday February 11, 2006 @09:24PM (#14697183)
    I am looking forward to multi-core systems. I have a dual-core Athlon 64 3800 running Windows as my main eBay computer, and it can pretty much handle anything I throw at it. It will be interesting to see how future motherboards look and how memory is allocated, since I would assume all of these cores sharing the same memory carries some performance penalty. Adobe Premiere recognizes the dual core during startup, but I don't know of many programs that use both cores; I guess the OS just splits the load between them. I would also expect multi-core processors to scale up sharply in price, due to the lower yield from effectively making two, four, or eight processors at once on a single die.
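
    A minimal sketch of the first step a core-aware program usually takes: asking the OS how many processors are online so it can size its thread pool. This assumes a POSIX system (Linux or Solaris) that provides sysconf(_SC_NPROCESSORS_ONLN); it is an illustration only.

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* Number of processors (cores) currently online. */
            long ncores = sysconf(_SC_NPROCESSORS_ONLN);
            if (ncores < 1)
                ncores = 1;   /* fall back to a single worker */
            printf("online cores: %ld -> size the thread pool to %ld workers\n",
                   ncores, ncores);
            return 0;
        }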
  • Re:The new race (Score:2, Interesting)

    by 4D6963 ( 933028 ) on Saturday February 11, 2006 @09:28PM (#14697199)
    "I hope this gives an impetus to improve multi-CPU software performance."

    Yes, but that is exactly the problem. With the gigahertz race, you were sure to enjoy the benefit automatically. With multiple cores, you need software written to use those cores, don't you (I'm not entirely sure of what I'm talking about)? And so far we can't always fully exploit these multiple cores, can we?
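
    To make that concrete, here is a minimal sketch assuming POSIX threads (compile with -lpthread): a plain serial loop only ever keeps one core busy, while splitting the same index range across worker threads lets the OS spread the work over every core it has.

        #include <pthread.h>
        #include <stdio.h>

        #define NTHREADS 4              /* e.g. one worker per core on a quad-core chip */
        #define N        (1 << 22)

        static double data[N];

        struct slice { long begin, end; double partial; };

        static void *sum_slice(void *arg)
        {
            struct slice *s = arg;
            double acc = 0.0;
            for (long i = s->begin; i < s->end; i++)
                acc += data[i];
            s->partial = acc;           /* each worker sums its own slice */
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[NTHREADS];
            struct slice s[NTHREADS];
            double total = 0.0;

            for (long i = 0; i < N; i++)
                data[i] = 1.0;

            /* Split the index range into NTHREADS contiguous slices. */
            for (int t = 0; t < NTHREADS; t++) {
                s[t].begin = (long)t * (N / NTHREADS);
                s[t].end   = (long)(t + 1) * (N / NTHREADS);
                pthread_create(&tid[t], NULL, sum_slice, &s[t]);
            }
            for (int t = 0; t < NTHREADS; t++) {
                pthread_join(tid[t], NULL);
                total += s[t].partial;
            }
            printf("sum = %.0f\n", total);   /* 4194304 */
            return 0;
        }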

  • by CyricZ ( 887944 ) on Saturday February 11, 2006 @09:28PM (#14697205)
    This is a trend that may play out well for SGI and Sun. Both have been building systems which involve a massive number of CPUs for quite a while now. They have the experience that Microsoft doesn't have, for instance.

    IRIX and Solaris are known to scale far beyond 4 processors. They're proven technologies that are known to work very well on multiprocessor systems.

    SGI could easily use this to their advantage, releasing affordable systems that offer the benefit of IRIX on such machines. If they can come out with a system that appeals to developers and business users, then they could take on Apple, Sun, Dell and others again.

    Sun, of course, already offers Opteron-based workstations. A dual CPU entry-level system, with four cores per CPU, could be quite useful. When you factor in the superb quality of Solaris, we could really see some truly fantastic workstations, at a very affordable price.

  • Re:This is pure hype (Score:2, Interesting)

    by east coast ( 590680 ) on Saturday February 11, 2006 @09:45PM (#14697314)
    If you don't build the hardware the software will never be developed for it.
  • Octacore (Score:4, Interesting)

    by drix ( 4602 ) on Saturday February 11, 2006 @09:55PM (#14697373) Homepage
    Why wait? Sun already makes processors with 8 cores [sun.com]. For realz.
  • Bandwidth (Score:3, Interesting)

    by Aardpig ( 622459 ) on Saturday February 11, 2006 @10:05PM (#14697439)
    I wonder whether the quad-core Intel chips will be as bandwidth-starved as the dual-core chips? Currently, the comparison between a dual-core Pentium and a dual-core Opteron is farcical, especially for memory-limited apps.
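
    For what it's worth, this sort of memory starvation is easy to probe with a STREAM-style triad loop. The sketch below is an illustration only, assuming gettimeofday and a few hundred MB of free RAM; it reports rough sustained bandwidth, which is exactly where a shared front-side bus falls behind an on-die memory controller.

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>

        #define N (16 * 1024 * 1024)    /* 128 MB per array of doubles */

        int main(void)
        {
            double *a = malloc(N * sizeof *a);
            double *b = malloc(N * sizeof *b);
            double *c = malloc(N * sizeof *c);
            if (!a || !b || !c)
                return 1;

            for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

            struct timeval t0, t1;
            gettimeofday(&t0, NULL);

            /* STREAM-style triad: touches three large arrays per pass. */
            for (long i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];

            gettimeofday(&t1, NULL);
            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
            double mb   = 3.0 * N * sizeof(double) / (1024.0 * 1024.0);
            printf("triad moved %.0f MB in %.3f s -> %.0f MB/s\n", mb, secs, mb / secs);

            free(a); free(b); free(c);
            return 0;
        }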
  • by cyberjessy ( 444290 ) <jeswinpk@agilehead.com> on Saturday February 11, 2006 @10:21PM (#14697502) Homepage
    This is silly. Microsoft made a conscious decision to license software per CPU (or per socket) rather than per core, and they announced that they were doing so because multi-core looked like the natural way CPUs would improve, given that the MHz war has ended. In fact, they were the first major company to do so.

    Also, this does not really eat into MS's bottom line the way it does Oracle's or IBM's. Most of MS's revenue comes from the desktop, while they are just one competitor in servers. SQL Server suddenly becomes more attractive, given Oracle's complicated multi-core policy [sun.com]. (Remember that Oracle earlier announced that every core counts as a CPU; it's only recently that they realized it would be a disaster and modified their original plans.)

    Earlier, CPU speeds doubled roughly every 18 months. Multi-core is simply another approach to achieving the same thing. I am not sure how this will hurt software companies any more than increasing cycles per second did.
  • by kireK ( 254264 ) on Saturday February 11, 2006 @10:46PM (#14697606)
    Sun is currently shipping EIGHT-core CPUs, and each core handles 4 threads... so you are talking 32 hardware threads in one RU of rack space.

    http://www.sun.com/servers/coolthreads/overview/index.jsp [sun.com]
  • Well, Sun has changed the threading design several times in Solaris, so you need to be more specific. SunOS 4.x used a model where one CPU was the controller (scheduler) and the other CPUs ran jobs. Early Solaris versions used an M:N model, mapping M user threads onto N kernel threads, similar to how Microsoft's .NET framework creates threads. Newer versions of Solaris (9 and 10) are more like the latest threading libraries in Linux and FreeBSD and make 1:1 relationships between userland and kernel threads (a small illustration of the difference is sketched below). I'm not an expert at this; I've been taking an operating systems class this term where this has been discussed.

    As for DragonFly, I do think Matt was right about some problems with FreeBSD 5 and 6, but each release is getting faster; the 6.1 beta is noticeably faster. DragonFly isn't revolutionary, though. I think some of the ideas from Mach inspired some of their design decisions, and we all know Apple has the most successful Mach kernel in the commercial world.

    I don't know if we'll see FreeBSD or DragonFly look super impressive on multi-core CPUs, but I do know that OpenBSD and NetBSD may not scale well, depending on what the developers are working on. I can tell you that FreeBSD 6 does fine on my dual Xeon machine. Solaris 10 on the same system seemed slightly slow, but I think that was driver support more than anything. Linux IS SLOW on the system regardless of the scheduler. For that OS class, I had to install and custom-compile the 2.6.15 kernel for my system before our work on adding system calls. It (Gentoo) is not as fast as FreeBSD 6 was on the system, but it is faster than FreeBSD 5.x, especially on disk I/O. I don't know why Linux seems slow, as it is using both CPUs quite well. Of course, this is perceived speed; I haven't done any formal benchmarks.

    Maybe someone should do a serious benchmark of FreeBSD 6, NetBSD 3, DragonFly (whatever the latest is), OpenBSD 3.8, Linux 2.6.15 (Gentoo distro?), and, for kicks, OpenDarwin, all running on the same dual-core hardware. Hell, if I get time this summer, I might do it.
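
    A small illustration of the M:N vs. 1:1 point, assuming a system with POSIX threads (compile with -lpthread): pthread_setconcurrency() is the hint that only an M:N library actually acts on.

        #include <pthread.h>
        #include <stdio.h>

        #define NTHREADS 8

        static void *work(void *arg)
        {
            /* CPU-bound or blocking work would go here. */
            return arg;
        }

        int main(void)
        {
            pthread_t tid[NTHREADS];

            /* Under an M:N library (old Solaris libthread) this suggests
             * multiplexing the 8 user threads onto about 4 kernel LWPs.
             * Under a 1:1 library (Solaris 9/10, Linux NPTL, recent FreeBSD)
             * every pthread already has its own kernel thread, so the call
             * is effectively a no-op. */
            pthread_setconcurrency(4);

            for (int i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, work, NULL);
            for (int i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);

            printf("concurrency hint currently set: %d\n", pthread_getconcurrency());
            return 0;
        }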
  • by GuyverDH ( 232921 ) on Saturday February 11, 2006 @11:06PM (#14697683)
    Up to 8 cores, 3MB L2 Cache (total shared), 4 execution threads per core, so effectively 32 execution threads per CPU.

    A nicely loaded Sun T2000 system, with 8 cores, 32 GB of RAM, dual 2 Gb Fibre Channel adapters, and 8 Gigabit Ethernet interfaces, comes in at a street price of approximately $30K. Add in Solaris 10 with its container technology, the fact that the box only draws about 325 watts, and that it is light on BTUs, and we're talking about a serious datacenter contender for web services, application servers, database servers, etc.

    I'm currently looking at consolidating approximately 20 aging systems that draw over 10 kW of power and put out close to 20K BTU/hr of heat. I am planning on replacing those 20 systems with 4 T2000 servers totaling roughly 1.5 kW and approximately 5K BTU/hr of thermal output. Not only will I be saving on hardware maintenance, but also on software licensing, as common applications like Oracle and BEA license their products at 1/4 CPU per core on these processors. I will also be saving on power and cooling requirements for the datacenter, not to mention floor space: I will be able to empty 2 full racks with this consolidation project. I'm hoping to expand it and end up with 1 rack of T2000s replacing close to the entire datacenter's UNIX population.
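
    A quick back-of-the-envelope check of those numbers, assuming the roughly 325 W per T2000 quoted above and the usual conversion of about 3.412 BTU/hr per watt:

        #include <stdio.h>

        int main(void)
        {
            const double watts_per_t2000 = 325.0;   /* figure quoted above        */
            const int    servers         = 4;
            const double btu_per_w_hr    = 3.412;   /* 1 W is about 3.412 BTU/hr  */

            double watts = servers * watts_per_t2000;    /* 1300 W, ~1.3 kW   */
            double btu   = watts * btu_per_w_hr;         /* ~4436 BTU/hr      */

            printf("4 x T2000: %.0f W (~%.1f kW), ~%.0f BTU/hr\n",
                   watts, watts / 1000.0, btu);
            printf("replacing: >10000 W and ~20000 BTU/hr of old gear\n");
            return 0;
        }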
  • ...other than that (which admittedly will dim the lights), I've never kept even a hyperthreading P4 pegged solid for more than a short time. I've written multithreaded audio processing software that can pick DTMF or Motorola QC-II tones out of a live audio stream with good reliability, and I've written anti-spam software that, in testing, processes more than a million messages per hour. In neither case did I overwhelm the processors.

    Neither of these tasks matters to a desktop user. Large software compiles, scientific analysis -- these are specialized tasks, and sure, multiprocessor machines will help with them. The market for these, however, is specialized and limited.

    IMO, there will be external specialized processor blocks for these kinds of tasks. For example, an external FFT processor board would make a great deal of sense. The FFT is a well-known algorithm, used in a huge variety of analysis, and is easy to parametrize up front. Think of a PCI Express board with its own set of RISC processors dedicated to performing FFTs, in the same way modern graphics work is offloaded: you tell the driver the frame size, domain size, etc., and get back results as fast as you can pass data in (a rough sketch of what such a driver interface might look like follows below).

    The point is, what holds back solutions like this the most is the front-side bus.
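
    A rough sketch of the kind of offload interface being imagined, with the caveat that everything in it is hypothetical: the device node, the ioctl number, and the frame-size constant are made up for illustration and do not belong to any real FFT board or driver.

        /* Hypothetical interface to an imagined PCI-Express FFT offload board.
         * None of these names exist in any real driver; they only illustrate
         * the "configure once, stream data through" idea described above. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        #define FFT_IOC_SET_FRAME 0x4601   /* made-up ioctl: set transform size */
        #define FFT_FRAME_SIZE    1024     /* points per transform              */

        int main(void)
        {
            float in[FFT_FRAME_SIZE], out[2 * FFT_FRAME_SIZE];  /* real in, complex out */

            int fd = open("/dev/fft0", O_RDWR);      /* hypothetical device node */
            if (fd < 0) { perror("open /dev/fft0"); return 1; }

            /* One-time setup: frame size, domain, windowing, etc. */
            ioctl(fd, FFT_IOC_SET_FRAME, FFT_FRAME_SIZE);

            /* Steady state: stream frames in, read transformed frames back,
             * much like pushing vertices at a graphics card. */
            while (fread(in, sizeof in, 1, stdin) == 1) {
                write(fd, in, sizeof in);
                read(fd, out, sizeof out);
                fwrite(out, sizeof out, 1, stdout);
            }
            close(fd);
            return 0;
        }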
  • Re:The new race (Score:1, Interesting)

    by Anonymous Coward on Sunday February 12, 2006 @01:03AM (#14698230)
    FWIW, technically speaking, the most 'cores' a video encoder can successfully use when converting a 640x480 stream into MPEG-4 format is about 1200. However, the RAM requirements of a 1200-thread video encoder become something out of a nightmare; they could run as high as 64 GB in a poorly designed encoder. Of course, if your system has 1200 cores, not having 128 GB of RAM would make you look cheap and pathetic.

    In large part this is why 'multi-core design' is a path of diminishing returns: it has been pretty well shown that beyond 16 cores/processors there isn't much room for improvement except in very highly specialized OSes and applications. Quad-core chips in a 4-socket board already offer about as many cores as most applications and OSes can use; they would have a hard time getting better performance from more cores.
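
    The diminishing-returns point is usually framed in terms of Amdahl's law: if a fraction p of the work parallelizes and the rest is serial, the speedup on n cores is 1 / ((1 - p) + p / n). A small sketch with an assumed parallel fraction of 0.95 shows how quickly the curve flattens:

        #include <stdio.h>

        /* Amdahl's law: speedup on n cores when fraction p of the work is parallel. */
        static double amdahl(double p, int n)
        {
            return 1.0 / ((1.0 - p) + p / n);
        }

        int main(void)
        {
            const double p = 0.95;            /* assumed parallel fraction */
            int cores[] = { 1, 2, 4, 8, 16, 64, 1200 };

            for (int i = 0; i < (int)(sizeof cores / sizeof cores[0]); i++)
                printf("%4d cores -> %5.2fx speedup\n", cores[i], amdahl(p, cores[i]));
            /* With p = 0.95 the ceiling is 1/(1-p) = 20x, so going from 16 cores
             * (~9.1x) to 1200 cores (~19.7x) barely doubles the gain. */
            return 0;
        }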
  • Re:The new race (Score:3, Interesting)

    by calidoscope ( 312571 ) on Sunday February 12, 2006 @02:21AM (#14698497)
    Maybe what I mean is not clear, so imagine a quad-core CPU. Your threads can use each core, etc. On a single core, why don't you "cut" your CPU into 4 "virtual cores," so that, for example, one thread acting as if it were using only one "virtual core" would actually be using 25% of the CPU time slices?

    The rationale for multi-core CPUs is that 4 slow cores can do the same amount of work as a fast single-core system while consuming less power. A related rationale is that you can't build a single-core system that is 4x faster than each core of a 4-core system. With those two caveats, there is nothing to prevent what you're proposing.

    The CDC 6000 series peripheral processors had an interesting twist on the virtual core: the ALU was time-sliced among 10 register sets so that it appeared to be 10 processors. The Sun Niagara does something similar; each of the 8 cores has 4 register sets, allowing very rapid switching between threads. I recently saw a Usenet posting stating that a single T2000 performed twice as fast as a dual-Xeon box, and the Niagara CPU uses less power than a single Xeon.
