AMD Unveils Barcelona Quad-Core Details

mikemuch writes, "At today's Microprocessor Forum, Intel's Ben Sander laid out architecture details of the number-two CPU maker's upcoming quad-core Opterons. The processors will feature sped-up floating-point operations, improvements to IPC, more memory bandwidth, and improved power management. In his analysis on ExtremeTech, Loyd Case considers that the shift isn't as major as Intel's move from NetBurst to Core 2, but AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together."
This discussion has been archived. No new comments can be posted.

  • by ExploHD ( 888637 ) on Wednesday October 11, 2006 @03:15AM (#16389479)
    the memory controllers now support full 48-bit hardware addressing, which theoretically allows for 256 terabytes of physical memory.
    256 terabytes should be enough for anybody.
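
    (The arithmetic checks out: 2^48 bytes is 256 tebibytes. A quick sanity check in C, purely illustrative:)

      #include <stdio.h>

      int main(void) {
          unsigned long long bytes = 1ULL << 48;  /* 48-bit physical address space */
          /* shifting right by 40 converts bytes to tebibytes */
          printf("%llu bytes = %llu TB\n", bytes, bytes >> 40);
          return 0;
      }

    (It prints "281474976710656 bytes = 256 TB".)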
  • wha? (Score:3, Funny)

    by macadamia_harold ( 947445 ) on Wednesday October 11, 2006 @03:16AM (#16389483) Homepage
    AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together.

    So Intel's Ben Sander claims that AMD's claim is that Intel claims that their dual-cores grafted together qualify as quad-core technology? That's not confusing at all.
  • Oh snap! (Score:5, Funny)

    by joe_cot ( 1011355 ) on Wednesday October 11, 2006 @03:17AM (#16389489) Homepage
    "In his analysis on ExtremeTech, Loyd Case considers that the shift isn't as major as Intel's move from NetBurst to Core 2, but AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together."
    BUUUUUUUUUURNED
    Next week: Intel responds by telling us how fat AMD's mother is.
  • by Anonymous Coward
    Quad core? Bah! That's only 4.
    Wake me up when they have a processor that goes to eleven.
  • "AMD Unveils Barcelona Quad-Core Details"

    It's the processor that runs like a dog with no nose!
  • Once again... (Score:5, Insightful)

    by tygerstripes ( 832644 ) on Wednesday October 11, 2006 @03:49AM (#16389647)
    Firstly, can I just say that stating that "the shift isn't as major as Intel's move from NetBurst to Core 2" is like... er... comparing a decent incremental car improvement with swapping a bicycle for a car. Or something. I'm not saying Core2Duo isn't great tech, but look; Netburst was shit. Everyone knows it. They flogged that horse for far too long, so comparing on the grounds of proportional improvement is not useful. It's like when the thick kid in school got the "most improved" award, and everyone sat there and went "Well yeah, but what was his alternative?".

    As for the quad-core thing, it's the same story all over again. Intel rush out a solder-together-two-chips job to beat the competition to market, and then the actual innovators come out with something coherent that works more efficiently etc.

    I'm not saying AMD's will necessarily be better. What I'm saying is I don't care who gets to market 2 months earlier. I want the better chip, and I can live with the mystery for a few weeks.

    Although, frankly, I can barely afford to eat having just built a decent Core2Duo rig, so I won't be investing either way just yet...

    • by eebra82 ( 907996 )
      Firstly, can I just say that stating that "the shift isn't as major as Intel's move from NetBurst to Core 2" is like... er... comparing a decent incremental car improvement with swapping a bicycle for a car. Or something. I'm not saying Core2Duo isn't great tech, but look; Netburst was shit. Everyone knows it.

      We all know that already. The point is that AMD needs that kind of jump to get ahead of the competition like it was half a year ago.

      I would say that if the writer's point of view is that AMD need
  • by cperciva ( 102828 ) on Wednesday October 11, 2006 @03:49AM (#16389649) Homepage
    AMD claims that its quad core is true quad core, while Intel's is two dual-cores grafted together

    Note to AMD: We don't care about the implementation details. We care about performance, cost, and power consumption; the clock speed, cache sizes, and how cores talk to each other is irrelevant.

    For all I care, Intel's "quad core" processor could be using a team of psychic circus midgets.
    • by Anonymous Coward on Wednesday October 11, 2006 @05:17AM (#16390099)
      Some of us do care. Some for work, some for fun. AMD's "designed as quad-core" approach has some notable consequences, especially in the cache layout that (on paper, of course) seems very well suited to virtualization -- much more so than the Intel solution in TFA.

      AMD: a shared L3 feeding core-specific L2 caches. Intel: each core-pair sharing a L2 cache. AMD's approach better avoids threads competing for the same data (thanks to copying it from L3 to every L2 that needs it), while keeping access latencies more uniform and predictable (thus better optimizable).

      Other AMD enhancements look more like catch-up to Core 2: SSE [and it's "Extensions", dammit, not "Enhancements"] paths from 64bit to 128bit, more advanced memory handling (out-of-order loads versus Intel's disambiguation et al.), more instructions per clock by beefier decoding (more x86 ops through fast path instead of microcode) and more "free" ops (where Intel added way more discrete execution units from Core to Core 2).

      If AMD's quad manages to be better due to better memory bandwidth and latency (in practice), then they were quite right about "true quad-core" :)
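
      (For anyone who wants to see the cache hierarchy on an actual box rather than on paper: Linux exposes it through sysfs on kernels that populate it. A minimal C sketch, path layout assumed; the shared_cpu_map attribute is what distinguishes a per-core L2 from a core-pair shared L2:)

        #include <stdio.h>

        /* Walk CPU 0's caches, printing level, size and sharing map. */
        int main(void) {
            char path[128], buf[64];
            const char *attrs[] = { "level", "size", "shared_cpu_map" };
            for (int i = 0; ; i++) {
                int found = 0;
                for (int a = 0; a < 3; a++) {
                    snprintf(path, sizeof path,
                             "/sys/devices/system/cpu/cpu0/cache/index%d/%s",
                             i, attrs[a]);
                    FILE *f = fopen(path, "r");
                    if (!f) continue;
                    found = 1;
                    if (fgets(buf, sizeof buf, f))
                        printf("index%d %s: %s", i, attrs[a], buf);
                    fclose(f);
                }
                if (!found) break;  /* no more cache levels */
            }
            return 0;
        }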

      • by dfghjk ( 711126 )
        So if it turns out that AMD's design is faster, then AMD is better? Is that what you are trying to say? For someone bent on refuting the parent's claim, you did a pretty poor job. The claim was that all that mattered was performance (and that is irrefutable).
    • Re: (Score:3, Insightful)

      Is that meant to be sarcastic?

      You don't care because you don't understand. Performance, cost and power consumption are directly affected by such things as clock-speed, cache, core integration, architecture etc, and different aspects offer different advantages for different uses.

      If it were that easy to put a reliable figure on Performance, the Megahurtz shambles would never have happened.

    • by Visaris ( 553352 ) on Wednesday October 11, 2006 @08:25AM (#16391133) Journal
      Note to AMD: We don't care about the implementation details. We care about performance, cost, and power consumption; the clock speed, cache sizes, and how cores talk to each other is irrelevant.

      AMD is taking the route that will give better performance. I hear you saying that soldering some copper pipes with rubber bands would be fine as long as it performed. The point is that it will work... just not very well.

      If you don't think I'm right, look at Intel's own product roadmap. They plan to release a new version of Kentsfield that has all four cores on one piece of Si, with a shared cache, just like AMD is about to do... only later in 2007, after AMD's version comes out. When the two major chip companies move in the same direction, usually that means it is the right one. The only difference is that AMD is going to get there sooner because they didn't bother to play around with this MCM (Multi-Chip-Module) junk. Intel just wants to get to market first; they don't seem to put quality first.
  • by nanoakron ( 234907 ) on Wednesday October 11, 2006 @04:16AM (#16389781)
    AMD: 4=4
    Intel: 4=2x2

    Where do they hire these guys?

    -Nano.
    • by ozbird ( 127571 )
      Intel: 4=2x2

      4=2x2, or 4=2+2? And where does AMD's 4x4(=16?) fit in?
      1st grade maths, yet I'm still confused by what it all means... ;-)
  • Hmmmm Wrong. (Score:3, Informative)

    by Solokron ( 198043 ) on Wednesday October 11, 2006 @04:48AM (#16389965) Homepage
    Looks like someone RTFA a bit wrong. Ben Sander works for AMD. He is one of their media presenters. Here are a few of the events he has done: http://www.cpd.iit.edu/cpd/events.htm [iit.edu] http://www.ewh.ieee.org/r4/chicago/foxvalley/meet.thru.mid2005.html [ieee.org] http://www.instat.com/FallMPF/06/conf1.htm [instat.com] http://mtv.ece.ucsb.edu/MTV/index_files/program-mtv.txt [ucsb.edu]
  • by wysiwia ( 932559 )
    I won't buy any AMD processors anymore until AMD clears up its socket plans and guarantees a minimum of 3 years' availability for processors on a socket. See also http://hardware.slashdot.org/comments.pl?sid=198215&cid=16242757 [slashdot.org].

    O. Wyss
    • by somersault ( 912633 ) on Wednesday October 11, 2006 @05:42AM (#16390215) Homepage Journal
      Why - do you think today's processors won't still be useful in 3 years? Most games don't take advantage of current technology for a year or more, I'd say, and your office applications/OS are going to run fine on any of today's decently specced systems (3000+, 3GHz Pentium, doesn't even matter if they only have one core). The only people who can truly make use of multicore chips are scientists and people who do other kinds of intensive parallel processing, like graphics rendering. In 3 years you'll probably want a new mobo anyway to take advantage of whatever new-fangled technology has come out. I guess you could say I'm becoming less of a geek these days even though I'm an IT manager, but if my computer works and plays the games I like sufficiently (say 1280x1024@60fps with details maxed out), I don't see the need for upgrading my processor (I'd upgrade my graphics card before anything else, since graphics cards come out more often and usually have a larger effect on performance from one generation to the next).

      Since most of the chipset is becoming integrated into the processor these days, your argument will make more sense over time. But if you were more patient and waited for things to come down in price, as they always do (and rather quicker than I expect, sometimes), you'd be able to buy a new mobo, RAM and processor for the same price the new processor alone would have cost 6 months previously (not meant to be a perfect example; I haven't been following prices since I built my last system a couple of years ago, but the idea is sound :p )
      • The only people who can truly make use of multicore chips are scientists and people who do other kinds of intensive parallel processing, like graphics rendering.

        The way people use PCs is drastically changing. Now SMP benefits any gamer, anyone transcoding video (not everyone does it? Uh, Windows Media, digital camcorders, Windows Media Center|MythTV|Other PVR app, and in the case of Windows users, running various spyware in the background without totally dragging down the system ;), and other

        • Personally out of that I only play games and do photo editing, but I accept that more and more people will be able to 'benefit' from multicore, though the examples you list can all be done fine on a single core processor. When you want to do them at the same time, then you have more of an argument, but again most single core processors can handle multitasking/threading and have done so for over a decade.

          And as for a spyware monitor, I don't even believe anyone should have to run one of those if they have
          • Considering that your average "Dude, you're getting a $299 Dell" special comes with spyware preloaded (on entry-level budget systems priced to the point where the kindly-preinstalled spyware subsidizes the cost of the machine), I do not think that Linux is going to be much of an option for most Joe Sixpack-type consumers. And with the upcoming Vista, unfortunately, it is likely that big-box-branded preinstalled spyware is going to be a whole lot more difficult to remove.

            Also, if
    • by wolrahnaes ( 632574 ) <seanNO@SPAMseanharlow.info> on Wednesday October 11, 2006 @05:45AM (#16390223) Homepage Journal
      As the person who responded to your last post explained, that's just not possible with the K8 architecture as it is. The memory controller is on-die and memory technology is evolving, therefore the interface between the processor (where the controller is) and motherboard (where the DIMMs are) must also change.

      The closest thing to a solution we have would be going back to Pentium 2/3-style processor-on-a-card designs, moving the memory slots to an expansion card shared with the processor, which would then have a HyperTransport interface to the motherboard.

      This works, as some motherboard manufacturers (ASRock on the 939DUAL for one) have implemented something along these lines for AM2 expandability. The problem lies in laying out the circuitry for this new slot, not to mention the incompatibility with many of the large coolers we often use today. It also would become even more complex when faced with another one or two extra HyperTransport lanes as found on Opteron 2xx and 8xx chips, respectively.

      AMD made a compromise when they designed K8. On the one hand, the on-die memory controller improves latency by a huge amount and scales much better by completely eliminating the memory and FSB bottlenecks that Intel chips get in a multiprocessor environment. On the other hand, new memory interface = new socket, no way around it.

      From what I understand, the upcoming Socket F Opterons will have over 1200 pins in their socket so as to allow both a direct DDR2 interface and FB-DIMM. If I understand FB-DIMM technology correctly, it should end this issue by providing a standard interface to the DIMM which is then translated for whatever type of memory is in use. Logically this will trickle down to the consumers in another generation. For the time being however, AMD has stated that the upcoming "AM3" processors will still work in AM2 motherboards, as they will have both DDR2 and DDR3 controllers.
      • by beezly ( 197427 )
        Indeed, Socket F has 1207 pins. There are some snippets of information and some more links available at http://en.wikipedia.org/wiki/Socket_F [wikipedia.org]. We're delaying the upgrade of our cluster to wait for Socket F systems to become available (so we can compare them against Intel's latest offering at that point).
      • by wysiwia ( 932559 )
        As the person who responded to your last post explained, that's just not possible with the K8 architecture as it is. The memory controller is on-die and memory technology is evolving, therefore the interface between the processor (where the controller is) and motherboard (where the DIMMs are) must also change.

        Yet that doesn't matter more than the last time you responded. It's no problem to merge a new core (or multi cores) with a memory controller for the 939 socket. It's not even a big problem to put sever
          • First off, that isn't true. Things like Vcc sources have to move around to accommodate new designs. You're also disregarding the move to DDR2, which has a different interface as well.

          You've been able to get 939 and 940 pin boards for a LONG WHILE [even now given AM2 is out]. Sure 754-pin has disappeared but AMD doesn't even sell 754-pin desktop processors anymore [laptops being the exception].

          You might as well bitch out Intel for not being able to get Super Socket 7 motherboards anymore for your P54C proce
          • by wysiwia ( 932559 )
            First off, that isn't true. Things like Vcc sources have to move around to accommodate new designs. You're also disregarding the move to DDR2, which has a different interface as well.

            You haven't done any chip design, have you? What's the speed improvement of DDR2 against DDR?

            But that's all not the point; people are simply annoyed with AMD's socket policy, rightfully or not. Just read http://hardware.slashdot.org/article.pl?sid=06/09/29/0542214 [slashdot.org].

            O. Wyss
            • Sockets change because the technology changes. A different die design may need more Vcc inputs, different memory technology may need more I/O pins, etc. Also, newer Opterons are adding HT links, which definitely require more pins.

              Also, DDR2 is just double-data-rate memory [hence the name]. The diff between DDR1 and DDR2 is that the electrical spec is different, the process is different, there are different memory commands and the frequency is higher.

              In theory a dual-channel DDR2-800 should top out at a max of 12.
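
              (The arithmetic behind that figure, for anyone following along: DDR2-800 does 800 MT/s over a 64-bit, i.e. 8-byte, channel, so one channel peaks at 800 × 8 = 6.4 GB/s and dual-channel at 12.8 GB/s, theoretical.)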
          • Psssst, the P54C installed into Socket 7, as did the P55C. It's the K6 that went into SS7, which would simply also accommodate the P5[45]C.
            • It's hyperbole for a reason.

              Point is, you can still buy 939-pin [shoprbc.com] boards today. So even though AMD is going through new sockets you're not stuck if you need a replacement.

              Also keep in mind AMD is the company bringing on-board memory controllers, HT links and the like. HT v3.0 is around the corner and it promises higher bandwidth, lower latency and more versatility.

              Sure Intel is stuck on 775 today [with no less than a 4 or 5 diff incompatible chipsets] but they're also the company NOT bringing you point-to-p
    • You want the latest and greatest features, but you aren't willing to cope with changing your hardware to keep up?

      CPU manufacturers don't change interface designs for fun. It costs them time and money to design a new interface. They do it because the market demands new technology.

      Besides, looking at recent history, Socket A, 940 and 939 have had roughly 3 years. Socket 754 was a red herring that no one in their right mind should have bought if they were looking for platform longevity.

      If you compare AMD's soc
      • CPU manufacturers don't change interface designs for fun. It costs them time and money to design a new interface. They do it because the market demands new technology.

        Show of hands: Who's been demanding new CPU technology? What percentage of the "market" has already gone to dual-core, and is clamoring for quad-core to run their apps?

        You don't think maybe a manufacturer would push new technologies out the door to get new sales do you? "..the market demands.." my ass.
        • by beezly ( 197427 )
            Maybe I'm biased. I work in the High Performance Computing sector and we can never get enough CPU cycles!
    • by Visaris ( 553352 ) on Wednesday October 11, 2006 @08:18AM (#16391055) Journal
      I won't buy any AMD processors anymore until AMD clears up its socket plans and guarantees a minimum of 3 years' availability for processors on a socket.

      I suppose that means you won't buy an Intel chip either. Look at what happened with Conroe. Core 2 Duo uses a socket with the same name as the P4 socket, with the same number of pins too. But guess what? When Conroe came out there were fewer than a handful of reasonable boards, out of the hundreds of models available, that would actually support it. The voltage requirements changed slightly, the BIOS requirements changed, and the end result was that upgrading to Conroe on a given board was hit or miss. I fail to see how Intel's MB upgrade situation is any better than AMD's. It sounds to me like you're falling for Intel's game: "We kept the socket name and number of pins the same, so that means we have better socket longevity." Sorry, but I'm not falling for it. I've read too many horror stories on the forums from Conroe upgraders who thought they could use their current P4 boards.

      Don't get me started on Intel's TDP scam either (AMD's = max, Intel's = average). AMD may not always have the best tech, but I find them to be a much more straightforward company, with fewer sneaky games designed to trick customers.

      And why are we posting a story about AMD's tech said/written by an Intel employee? Sounds like it was biased before it even started to me.
    • by C_Kode ( 102755 )
      A Meat Socket has a longevity of 5 years... Maybe that's what you need. :)
  • by Kopretinka ( 97408 ) on Wednesday October 11, 2006 @05:48AM (#16390245) Homepage
    Can anyone please shed some light on the difference (for the user) between a true quad-core and a dual dual-core processor? I expect a quad-core can be cheaper because it is more integrated, but is that it?
    • by glwtta ( 532858 )
      Probably has to do with memory/cache access and total available bandwidth between the cores. Memory architecture seems to be the one area where the Core still can't touch the Opteron.

      Of course, I'm just guessing.
    • by Phleg ( 523632 ) <<gro.tesuot> <ta> <nehpets>> on Wednesday October 11, 2006 @07:13AM (#16390677)
      A "true" quad-core means that all of them share the same L2 cache, AFAIK. Basically, performance benefits as they can all use the same high-speed memory cache for L1 misses. This is also extremely useful in the case of multiple processes which aren't bound to a CPU. If process A is scheduled on processor 1, then 2, then 3, then 4, there are going to be a lot of cache misses (since it's in no CPU's L1 cache). With two dual-cores bolted on to each other, processes switching from processors 1-2 to 3-4 are going to incur severe performance penalties as any relevant memory is fetched over the memory bus from RAM.
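
      (The usual workaround for exactly that migration penalty is to pin a process to one core so its working set stays in that core's caches. A rough Linux sketch; the core number is picked arbitrarily:)

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void) {
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(0, &mask);  /* bind the calling process to CPU 0 */
            if (sched_setaffinity(0, sizeof mask, &mask) != 0)
                perror("sched_setaffinity");
            /* ...cache-sensitive work here now stays warm on one core... */
            return 0;
        }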
      • by Phleg ( 523632 )
        As a silly analogy, imagine two cars strapped to each other versus a single car with dual engines but lots of shared components where it makes sense to do so. The one that actually had some engineering and design behind it will likely make better use of resources, rather than the ad-hoc, bolted-together solution.
        • by smithmc ( 451373 )

          As a silly analogy, imagine two cars strapped to each other versus a single car with dual engines but lots of shared components where it makes sense to do so. The one that actually had some engineering and design behind it will likely make better use of resources, rather than the ad-hoc, bolted-together solution.

          Meanwhile, Intel got their two-cars-strapped-together out first, thus meeting the needs of some people who might've needed a solution like this, and will have a huge market lead on AMD by t

    • by tomstdenis ( 446163 ) <tomstdenis@ g m a i l .com> on Wednesday October 11, 2006 @08:23AM (#16391103) Homepage
      As others pointed out, inter-core communication has to hit the FSB. That makes things like owning/modifying/etc. cache lines slower, as you have to communicate that outside the chip.

      There are also process challenges. Two dies take more space than 4 cores on one die, since you have replicated some of the technology [the FSB interface driver, for instance]. Space == money, therefore it's more costly.

      If one dual-core takes 65W [current C2D rating] then two of them will take 130W at least [Intel's ratings are not maximums]. AMD plans on fitting their quad-core within the 95W envelope. Given that this also includes the memory controller, you're saving an additional 20W or so. In theory you could save ~55W going the AMD route.

      Also, current C2D processors have lame power savings: you can only step into one of two modes [at least on the E6300] and it's processor-wide. The quad-core from AMD will allow PER-CORE frequency changes [and with more precision than before], meaning that when the thing isn't under full load you can save quite a bit. For instance, the Opteron 885 [dual-core 2.6GHz] is rated for about 32W at idle, down from 95W at full load. I imagine the quad-core will have a similar idle rating.

      Tom
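
      (Per-core clocks are visible from userspace, so you can watch this for yourself once a cpufreq driver is loaded. A small C sketch, standard sysfs layout assumed:)

        #include <stdio.h>

        /* Print each core's current clock as reported by cpufreq. */
        int main(void) {
            char path[128];
            unsigned long khz;
            for (int cpu = 0; ; cpu++) {
                snprintf(path, sizeof path,
                         "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq",
                         cpu);
                FILE *f = fopen(path, "r");
                if (!f) break;  /* ran out of CPUs */
                if (fscanf(f, "%lu", &khz) == 1)
                    printf("cpu%d: %lu MHz\n", cpu, khz / 1000);
                fclose(f);
            }
            return 0;
        }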
      • by dfghjk ( 711126 )
        "As others pointed out, inter-core communication has to hit the FSB."

        Whereas AMD's cores all share a single on-die memory controller. Just because AMD has HT and a memory controller built in doesn't mean that it has a significant advantage. In a multi-CPU system it's a different story.

        "Two dies take more space than 4 cores on one die"

        The aggregate size of the dies is meaningless. Intel's design requires two dies, but they are cheaper, perhaps even less than half the cost of AMD's die. Cost is proportion
        • I don't know where you went to school but slapping two dies on a chip instead of putting down one moderately larger [than a single] die is going to cost more. There is going to be more die surface area when you have two independent processors on the chip, since you duplicate a lot of housekeeping [e.g. FSB interface, clocks, etc.]. And space is money, since it limits the number of dies per wafer and ultimately their yields.

          Suppose you're right and there is no advantage to using a dual-die approach. Why doesn't
          • by dfghjk ( 711126 )
            "I don't know where you went to school..."

            Apparently where you went to school they teach you to lead with an insult.

            "...slapping two dies on a chip instead of putting one moderately larger [than a single] die is going to cost more."

            Yes, but 2x larger is more than moderate, especially when a 2x larger die may cost 5x to make due to yields. Maybe you should retake that class.

            "There is going to be more die surface area when you have two independent processors on the chip since you duplicate a lot of house keepi
            • I think you are missing something...

              Who is saying a quad-core is larger than two dual-core dies? What I am saying is a dual-core die takes $X mm^2, a quad-core takes $X+$Y mm^2, and that $Y << $X.

              Otherwise, why would Intel EVER move to true dual-core or quad-core? Why not just put 4 dies on the processor?

              Also, HT links ARE memory bandwidth. I can access memory from another NUMA node via HT while simultaneously performing an operation in my process's local node. ...

              Don't bother replying. I don't really care th
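
              (For what using HT links as memory bandwidth looks like from code: libnuma lets you place an allocation on a chosen node and keep the thread next to it. A minimal sketch, two-node Opteron box and libnuma assumed; link with -lnuma:)

                #include <numa.h>
                #include <stdio.h>

                int main(void) {
                    if (numa_available() < 0) {
                        fprintf(stderr, "no NUMA support here\n");
                        return 1;
                    }
                    numa_run_on_node(0);  /* keep this thread on node 0 */
                    /* 1 MB physically backed by node 0's local memory controller */
                    char *buf = numa_alloc_onnode(1 << 20, 0);
                    if (buf) {
                        buf[0] = 42;  /* local access, no HT hop to the far node */
                        numa_free(buf, 1 << 20);
                    }
                    return 0;
                }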
              • by dfghjk ( 711126 )
                It isn't, but it is roughly twice the size of a single dual-core die. The cost of such a die would be 2x IFF the die size were reasonable, but at the sizes these processors use the actual cost will be much more. Designers of these processors have to balance die size with overall cost all the time and it's ludicrous to assume that the quad-core die will always be cheaper. Until process improvements support quad-core, the opposite will likely be true.

                "Otherwise, why would Intel EVER move to true dual-core
                • At least for AMD they are going 65nm in their first quad-core so the die size will be comparable to the dual-core. HT links are used for PCI/PCIE devices too. Even in a single processor box you benefit from HT. That said in server setups HT links are gold because they are used for NUMA.

                  Anyways, I don't even know what we are discussing anymore. All I'm saying is dual-die == dumb, wait for a properly designed quad-core. That doesn't mean specifically AMD but hey if they hit it first all the power to them
                  • by dfghjk ( 711126 )
                    "At least for AMD they are going 65nm in their first quad-core so the die size will be comparable to the dual-core"

                    Intel will do the same with their next generation of course.

                    "HT links are used for PCI/PCIE devices too. Even in a single processor box you benefit from HT."

                    Yes, IO uses HT but that's not memory bandwidth either.

                    "That said in server setups HT links are gold because they are used for NUMA."

                    Yes, and in larger MP machines AMD has a definite advantage. This is a discussion of a single processor, t
                    • Um, actually PCI/PCIE traffic ***IS*** memory bandwidth. Most devices on the market are memory-mapped, and even those which are port-based still use the same god damn bus. That all this is moved to an HT link and off the memory bus means you have more bandwidth.

                      Since you can't sort out even that little detail I'd like to just assume you're a clueless newb. Go hide now.

                      Tom
                    • by dfghjk ( 711126 )
                      Again with the personal attacks. I guess it upsets you when you can't win an argument outright. Care to measure your technical penis with me, Tom? What are your qualifications?

                      Yes, I/O peripherals are frequently memory mapped, but even IO cycles are performed over the bus on Intel (vs HT on AMD) so it doesn't really matter what kind of cycles they are. What does matter is how much total bandwidth is used to program hardware. The answer, of course, is that it's a trivially small amount compared to the G
                    • If you can both fetch data/code for the processor to crunch and feed your graphics card with GL commands then you win. It's not always about bandwidth but about bus availability. For instance, most processors go far underutilized as it is. But in the "crunch" times latency is king and being able to return a result quickly is more important. So moving the I/O devices off the memory bus is a good thing in terms of bandwidth and also latency.

                      As for your DMA comment it's actually better now. You can read
    • True QC versus MCM: (Score:5, Informative)

      by Visaris ( 553352 ) on Wednesday October 11, 2006 @08:39AM (#16391255) Journal
      Intel's QC is really an MCM, or multi-chip module. That means they have literally grabbed two Conroe (Core 2 Duo) chips off the assembly line and mounted them in a single package. From the outside it looks like a single chip, but inside it has two separate pieces of Si, connected over the FSB. That is the problem: the two chips are connected to the same bus. A single chip presents one electrical load on the bus, and two chips present two loads. This means that the speed of the bus needs to be dropped. That is why Kentsfield will have a slower bus speed than normal chips. If you think about it, this is the exact opposite of the situation you want. You have just added cores, so it would be nice to add more bus bandwidth. Instead, the Intel solution lowers the overall bus bandwidth, not to mention that it is a shared bus. The two chips fight each other over a very slow external bus, and this creates a performance bottleneck.

      When all four cores are on a single piece of Si, all sharing an L3 cache, the chips don't need to fight over the external bus as much. The cores can share information between them internally, and do not need to touch the slow external bus to perform cache coherency and other synchronization. Also, a true QC chip presents one load to the outside bus. This means that the bus speed does not need to drop because of electrical load.

      There are many people who don't care how the cores are connected as long as the package works. The point is that the way the cores are connected has a direct impact on performance. We'll be talking about Intel vs. AMD cache hierarchy in 2007, when AMD uses dedicated L2 and shared L3 while Intel uses only shared L2. Expect cache thrashing on Intel's true QC chips with heavily threaded loads when it comes out. Next I'll hear people say that the cache doesn't matter as long as it works. As long as it works for what? Single-threaded, tiny-footprint benchmarks like SuperPi or Prime95? How about a fully threaded and loaded database, or any other app that will actually stress more than the execution units?
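
      (To make the coherency-traffic point concrete: two threads hammering counters that sit on the same cache line force that line to ping-pong between cores, across the external bus on an MCM, on-die on a monolithic part. A toy demo, 64-byte lines assumed; link with -lpthread:)

        #include <pthread.h>
        #include <stdio.h>

        #define N 100000000L

        static long together[2];  /* both counters on one cache line */
        static long apart[32];    /* &apart[0] and &apart[16] sit a full line apart */

        static void *bump(void *p) {
            volatile long *c = p;
            for (long i = 0; i < N; i++) (*c)++;
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            /* time this, then retry with &apart[0] / &apart[16]:
               the spaced-out pair typically runs several times faster */
            pthread_create(&t1, NULL, bump, &together[0]);
            pthread_create(&t2, NULL, bump, &together[1]);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("%ld %ld (apart: %ld)\n", together[0], together[1], apart[0]);
            return 0;
        }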
    • Re: (Score:2, Funny)

      by automattic ( 623690 )
      Hey, if those cores were boobs, I'm almost positive you could pick up the chick with 4 teets (have over 2 and they're not considered boobies anymore, bub) a lot cheaper than scoring two racks of twins.

      Nobody wants her, and you get to reap the rewards of extra mammary performance (or is that memory?)

  • Loyd Case considers that the shift isn't as major as Intel's move from NetBurst to Core 2,

    Well, yeah. AMD was starting with a processor superior to NetBurst in the first place. If they haven't advanced as far over their previous designs as Intel has, perhaps it's because they didn't have as far to go. Pretty stupid remark overall by Loyd, IMHO.

  • the memory controllers now support full 48-bit hardware addressing, which theoretically allows for 256 terabytes of physical memory.

    I always felt the IBM AS/400 had a nice scheme with its revolutionary (at the time) large address space. Not only did every byte -- possibly even bit -- of main RAM have a unique address, but so did all the attached mass storage devices. With this type of addressing, one could bring that same type of architecture to the desktop.
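
    (The closest a desktop Unix gets to that single-level-store idea is mmap, where a file's bytes simply become part of your address space. A sketch; the file name is made up:)

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("data.bin", O_RDONLY);  /* hypothetical file */
          if (fd < 0) { perror("open"); return 1; }
          struct stat st;
          fstat(fd, &st);
          /* after this, every byte of the file has an ordinary address */
          char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }
          printf("first byte: %d\n", p[0]);
          munmap(p, st.st_size);
          close(fd);
          return 0;
      }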

  • Are you going to crack open your quad core Intel and suddenly be disappointed because they didn't put all the cores on one wafer of silicon?

    I think two dual cores in one socket is as good as one quad core in one socket, everything else being equal.

    I think we should complain to AMD that they didn't do eight cores, because intel already made four cores (true or not). AMD is just throwing around insults to cover up the taint of being a "me too!"

    In the end I think we will just let sales numbers talk for themsel
