AMD Showcases Quad-Core Barcelona CPU

Gr8Apes writes "AMD has showcased their new 65nm Barcelona quad-core CPU. It is labeled a quad-core Opteron, but according to InfoWorld's Tom Yeager, it is really a redefinition of x86. Each core has a new vector math processing unit (SSE128), separate integer and floating-point schedulers, and new nested paging tables (to vastly improve hardware virtualization). According to AMD, the new vector math units alone should improve floating-point performance by 80%. Some analysts are skeptical, waiting for benchmarks. Will AMD dethrone Intel again? Only time will tell."
  • Things have come a long way since the heady days of bit-slice processors. The first microcode I wrote was for an XOR operation - I could not think of anything simpler that would actually do something useful...
  • Anyone know what "SSE128" means? SSE registers have been 128 bit from day one.
    • by Zenki ( 31868 ) on Saturday February 10, 2007 @02:45AM (#17960622)
      SSE+ operations have, up until now, been executed 64 bits at a time within the processor. SSE128 just means the new AMD chip will complete an SSE instruction in one pass.

      This was pretty much the reason why most people only bothered with MMX optimizations in their applications.
      • Re: (Score:2, Interesting)

        by pammon ( 831694 )

        SSE+ operations have, up until now, been executed 64 bits at a time within the processor

        Hmm...do you mean specifically on AMD's hardware? That stopped being true for Intel starting with the Core, which has 1-cycle latency on SSE instructions.

        • by waaka! ( 681130 ) on Saturday February 10, 2007 @05:00AM (#17961192)

          Hmm...do you mean specifically on AMD's hardware? That stopped being true for Intel starting with the Core, which has 1-cycle latency on SSE instructions.

          Core2 has single-cycle throughput on most SSE instructions, not single-cycle latency. Most of these instructions still take 3-5 cycles to generate results, which is similar to the Pentium M, but now a vector of results finishes every cycle, instead of every two or four cycles.

          An important consequence of this is that if your instructions are poorly scheduled by the compiler (or assembly programmer) and the processor spends too much time waiting for results of previous operations, the advantages of single-cycle throughput mostly disappear.
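          To make that concrete, here's a rough sketch (my own example, not anything from Intel or AMD; it assumes SSE intrinsics and that n is a multiple of 16). The first loop is latency-bound because every add depends on the previous one; the second keeps four independent accumulators in flight, which is what actually lets single-cycle throughput pay off:

              #include <xmmintrin.h>

              /* Latency-bound: each ADDPS has to wait ~3-4 cycles for the previous one. */
              __m128 sum_serial(const float *a, int n)
              {
                  __m128 acc = _mm_setzero_ps();
                  for (int i = 0; i < n; i += 4)
                      acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));   /* dependency chain */
                  return acc;
              }

              /* Throughput-bound: four independent accumulators hide the add latency. */
              __m128 sum_unrolled(const float *a, int n)      /* n a multiple of 16 */
              {
                  __m128 acc0 = _mm_setzero_ps(), acc1 = _mm_setzero_ps();
                  __m128 acc2 = _mm_setzero_ps(), acc3 = _mm_setzero_ps();
                  for (int i = 0; i < n; i += 16) {
                      acc0 = _mm_add_ps(acc0, _mm_loadu_ps(a + i));
                      acc1 = _mm_add_ps(acc1, _mm_loadu_ps(a + i + 4));
                      acc2 = _mm_add_ps(acc2, _mm_loadu_ps(a + i + 8));
                      acc3 = _mm_add_ps(acc3, _mm_loadu_ps(a + i + 12));
                  }
                  return _mm_add_ps(_mm_add_ps(acc0, acc1), _mm_add_ps(acc2, acc3));
              }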

          • by pammon ( 831694 ) on Saturday February 10, 2007 @05:33AM (#17961332)

            Core2 has single-cycle throughput on most SSE instructions, not single-cycle latency

            Well, certainly you won't be able to get a square root through in one clock cycle, but many/most of the simple integer arithmetic, bitwise, and MOV SSE instructions on the Core 2 really do have single-cycle latency (source [agner.org]). None do on the AMD64, which supports the theory that SSE128 means more "new for us" than "new for everyone." Not to put AMD down - many of the other features sound promising (but the article is long on breathlessness and light on details, alas).

    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Saturday February 10, 2007 @02:50AM (#17960650)
      Comment removed based on user account deletion
      • Re: (Score:3, Interesting)

        by adam31 ( 817930 )
        With the other chips, you have to load the first part (if it's a full 128-bit instruction, or if it's multiple instructions added together), save, load, save, add, execute.

        Please explain this. Do I understand correctly that you think some SSE instructions are 16 bytes? Issuing is one thing, and latency another. In most cases I've found AMD/Intel can issue 1 mulps/shufps/adds per cycle, the *ss instructions at 2 per (AMD sometimes 3 per cycle). If you mean that only the first 64-bits, 2 components, are

      • SSE first appeared in the Katmai (that's why SSE was also known as "KNI" or "Katmai New Instructions") which was produced in a 250 nm (0.25 micron) process. 250 nm was already pretty mature when the Katmai came out so I doubt they ever targeted the design for 350 nm production.
  • Dethrone? No. (Score:2, Insightful)

    by NXprime ( 573188 )
    "Will AMD dethrone Intel again?" Dear AMD, meet Larrabee. http://www.theinquirer.net/default.aspx?article=37 548 [theinquirer.net] AMD might kick Intel in the nuts a little but definitely not dethrone.
    • Re: (Score:2, Interesting)

      No, but a good, hard, well-aimed, holding-nothing-back kick in the nuts can leave them impotent,
      so they'll have to go through some ugly procedures to survive it in the long run. A couple of identical
      blows in the meantime could leave them sterile, so if the current setups begin to die out
      and Intel has no more babies waiting, they will not be dethroned, but they will get
      an honourable mention in the history books.
    • Read the article; that is an x86 GPU, it wouldn't be able to compete with general-purpose CPUs
  • by Weaselmancer ( 533834 ) on Saturday February 10, 2007 @02:50AM (#17960654)

    As long as AMD and Intel continue to chase each other in the x86 market, high end chips become low end in the span of six months. Just keep buying 6 months behind the press releases and you get great processors for next to nothing.

  • It is labeled a quad-core Opteron, but according to InfoWorld's Tom Yeager, it is really a redefinition of x86.
    I don't get the surprise or disappointment here. It appears that the submitter thinks x86 isn't an Opteron or something. As far as I know, the Opteron is the same thing - i.e. an extension to the x86 that can handle 64-bit operations.

    Am I missing something or am I completely wrong?
  • Well.... (Score:2, Interesting)

    Keeping to scientific fact: how much heat has to be generated for 1 MIPS?

    The fact is, absolutely none. It has been shown that only the destruction of information, via AND and similar instructions, creates entropy (heat). As long as you use only 3 types of gates (pass-through, NOT, XOR), you can create a heat-free CPU. Provided we do want to check for bit errors, we could maintain very low heat via ECC-like checking. Estimates on that are 10^8 lower than present.

    We could keep 98% of our efficiency of current day
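    A toy illustration of the information-theory point (my own code, not the parent's; it measures nothing, it just shows that XOR is reversible while AND throws a bit away):

        #include <assert.h>
        #include <stdint.h>

        int main(void)
        {
            uint8_t a = 0xA5, b = 0x3C;

            uint8_t x = a ^ b;   /* XOR: given x and b, a is always recoverable */
            assert((x ^ b) == a);

            uint8_t y = a & b;   /* AND: given y and b, a is generally lost -    */
            (void)y;             /* e.g. y == 0 with b == 0 arises from every a, */
            return 0;            /* so a bit of information has been erased.     */
        }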
    • by DimGeo ( 694000 )
      If memory serves right, you need the constant 1 to form a Boolean basis with XOR.
    • Re: (Score:3, Informative)

      by Khyber ( 864651 )
      Heat-free? Did you forget the Second Law? Or did you just forget about pure friction itself? Moving ANYTHING is going to involve friction. Nothing moves without SOME force, and friction will happen.
      • Electrical resistance formulas cannot be derived from friction formulas. Friction is a macroscopic thing that is the statistical accumulation of many microscopic effects.
      • Take it more fundamental than that. Temperature arises from the motion of atoms in a material. Switching transistors the way they do allows those electrons to really get those atoms moving.
  • AMD64 is very fast (Score:5, Interesting)

    by GreatDrok ( 684119 ) on Saturday February 10, 2007 @03:17AM (#17960760) Journal
    In my own benchmarks (generic C integer and floating point scientific code) I have found that the Core Duo and Core 2 Duo aren't all that quick compared with an AMD64. Clock for clock the AMD64 Opterons we have are about 50% quicker than an equivalent Core 2 Duo for integer work. I know this doesn't agree with all the usual magazine benchmarks but they are heavily biased towards using SSE instructions where possible and it is SSE where the Core 2 Duo has been a real improvement over previous Intel designs and also bests the AMD chips. Hopefully, AMD has recognised this and the new SSE implementation will bring them back on par with Intel for these benchmarks but even today an AMD64 processor is a beast and more than a match for anything Intel produces.
    • Re: (Score:3, Informative)

      by pjbass ( 144318 )
      Care to publish your numbers that debunk all the other hardware sites that are typically AMD-biased anyways?

      And pointing out that it isn't fair to compare because a Core 2 Duo already executes the full SSE instruction in one pass vs. the 2 clocks for a current AMD64 is the same as saying it's not fair to compare the on-die memory controller on AMD's vs. Intel's FSB. But people didn't seem to care when the numbers went in AMD's favor.

      I'd really be interested in seeing your numbers, your programs, and what com
      • by GreatDrok ( 684119 ) on Saturday February 10, 2007 @03:54AM (#17960946) Journal
        "Care to publish your numbers that debunk all the other hardware sites that are typically AMD-biased anyways?"

        OK. I can't give you the code but it is my own implementation of a pretty standard bioinformatics sequence comparison program which doesn't use SSE/MMX type instructions and is single threaded. On all platforms it was compiled using gcc with -O3 optimisation. I have tried adding other optimisations but it doesn't really make much difference to these numbers (no more than a couple of percent at best).

        AMD Opteron 2.0Ghz (HP wx9300) - 205 Million calculations per second
        Intel Core 2 Duo 2.66Ghz (Mac Pro) - 146 Million
        Intel Core Duo 2.0 Ghz (MacBook Pro) - 94 Million
        IBM G5 PPC 2.3 Ghz (Apple Xserve) - 81 Million
        Motorola G4 PPC 1.42 Ghz (Mac mini) - 72 Million
        Intel P4 2.0 Ghz (Dell desktop) - 61 Million
        Intel PIII 1.0 Ghz (Toshiba laptop) - 45 Million

        Interesting things about these numbers. The Core Duo is clearly a close relative of the PIII since the performance at 2Ghz is roughly twice that of the PIII at 1Ghz. The P4 at 2Ghz is really very poor indeed which isn't a huge surprise as it was never very efficient. The G4 PPC puts in a reasonable result easily beating the much higher clocked P4 (what, the Mac people were right? Shock!) although I have to say that the performance of the G5 is disappointing. The Core 2 Duo isn't a bad performer although it does have the highest clock speed of any processor in this set but it is seriously beaten by the Opteron. From these numbers, a Core 2 Duo at 2Ghz would be about half as quick as an Opteron at the same speed.
        • Well, until you show us your source code those numbers are as believable as anything else one might randomly type here...
          • "Well, until you show us your source code those numbers are as believable as anything else one might randomly type here..."

            I can't because the program is really large and it doesn't entirely belong to me (you know, work for people, they own your code).

            You're right, I could just be making these numbers up and if you prefer to believe that then there is nothing I can do to change your mind. All I can say is that this is my own (admittedly anecdotal) experience.
            • Actually I was thinking more about benchmarking/coding flaws than lying on your part.
              • "Actually I was thinking more about benchmarking/coding flaws than lying from your part."

                Certainly a possibility. In my defense I would like to point out that all benchmarks are open to question. I know my own code, I know what it does and it doesn't do much but it does a lot of it so the performance figures are what they are. I originally wrote this code on an SGI, ported it to Linux on a 486, SPARC, Alpha, PPC and so on. Its old and simple but does real work. While I could make it faster using SSE an
        • I hope the benchmarks don't get an advantage out of using 64-bit arithmetic.
          • "I hope the benchmarks don't get an advantage out of using 64-bit arithmetic."

            Nope, straight 32-bit. If it had been 64-bit then the Core 2 Duo would also have seen a more significant boost versus its 32-bit predecessor, not to mention the G5 should have been better than the G4, which it wasn't.
        • by jez9999 ( 618189 )
          Any numbers for an Athlon 64? I just bought a 3800+ single core and would like to be made really excited about it. :-P

          Also which of these chips are single, and which are dual, and which are quad cores?

          What's the point of dual and quad core, anyway? Anyone figured out why it's better than just having 2/4 CPUs?
          • "Any numbers for an Athlon 64? I just bought a 3800+ single core and would like to be made really excited about it. :-P"

            Pretty much the same as the Opteron in this case. The program doesn't really hammer cache or main memory, just the CPU. Work out your clock speed as a percentage of 2Ghz and do the sums and that should be the number.
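            For example (my arithmetic, assuming the scaling really is that linear and the core is otherwise the same): an Athlon 64 at 2.4Ghz should land around 205 x 2.4/2.0 ≈ 246 Million calculations per second.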

            The Opteron, Core 2 Duo and Core Duo are all dual core chips in this test, the others single core although the G5 was a dual processor system. Since the program is single th
          • by ocbwilg ( 259828 )
            What's the point of dual and quad core, anyway? Anyone figured out why it's better than just having 2/4 CPUs?

            It's better than just having 2/4 CPUs because you can now get dual CPU functionality on consumer-level mainboards. You get SMP without having to shell out for workstation or server level hardware. Of course, if you do have workstation or server boards with 2 or 4 CPU sockets on it, then you can put dual or quad core CPUs in those sockets as well. So instead of having 2-way SMP with 2 sockets yo
        • by waaka! ( 681130 ) on Saturday February 10, 2007 @05:19AM (#17961272)

          OK. I can't give you the code but it is my own implementation of a pretty standard bioinformatics sequence comparison program which doesn't use SSE/MMX type instructions and is single threaded. On all platforms it was compiled using gcc with -O3 optimisation. I have tried adding other optimisations but it doesn't really make much difference to these numbers (no more than a couple of percent at best).

          When you say you've tried "adding other optimizations," are you referring only to other GCC optimization flags? If your program's algorithms have any moderate degree of parallelism and you haven't tried vectorization either by compiler (GCC and ICC can both do this) or by hand, the benchmark you've done is not unlike a race where no one is allowed to shift out of first gear. Can you go into any more specifics about how this program does sequence comparisons?

          Also, the disappointing numbers from the G5 may be partially explained by the fact that its integer unit has higher latency than the other desktop processors in that list. The G5 isn't exactly known for blistering integer performance, anyway.

          • I should say that this program was written a very long time ago originally. It implements an efficient but standard Smith and Waterman dynamic programming algorithm. I have done vectorisation of this algorithm in the past and the performance improvement was dramatic (about x20). With this test program though, it hasn't really benefited from extreme compiler optimisations. I do remember running it after compiling with ccc on an Alpha and seeing a 30% speedup so there is definitely room for improvement bu
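            For reference, the kind of inner loop such a program lives in looks roughly like this. This is a generic sketch of the Smith and Waterman recurrence, not the actual benchmark code, and the scoring constants (+2 match, -1 mismatch, gap penalty 2) are placeholders:

                /* H[i][j] = max(0, H[i-1][j-1]+score, H[i-1][j]-GAP, H[i][j-1]-GAP) */
                #include <stddef.h>

                #define GAP 2

                static int max4(int a, int b, int c, int d)
                {
                    int m = a > b ? a : b;
                    m = m > c ? m : c;
                    return m > d ? m : d;
                }

                /* prev and curr are caller-supplied scratch rows of length lb+1 */
                int sw_score(const char *a, size_t la, const char *b, size_t lb,
                             int *prev, int *curr)
                {
                    int best = 0;
                    for (size_t j = 0; j <= lb; j++)
                        prev[j] = 0;
                    for (size_t i = 1; i <= la; i++) {
                        curr[0] = 0;
                        for (size_t j = 1; j <= lb; j++) {
                            int match = (a[i - 1] == b[j - 1]) ? 2 : -1;
                            curr[j] = max4(0, prev[j - 1] + match,
                                              prev[j] - GAP, curr[j - 1] - GAP);
                            if (curr[j] > best)
                                best = curr[j];
                        }
                        int *tmp = prev; prev = curr; curr = tmp;   /* roll the rows */
                    }
                    return best;
                }

            Pure integer work, a tiny working set and a long dependency chain through the inner loop, which would be consistent with the clock-for-clock numbers above saying more about the core than about SSE or the memory system.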
          • We're not testing the compiler. IMHO, turning optimization OFF would be a fine idea, or at least unobjectionable.

            The only important thing is that the compiler choices and options are fair. Using gcc on the Opteron and icc on the Core Duo would not be fair. Using gcc everywhere, with the same options, is completely fair.

            One can also define "fair" as "all systems tweaked to the max", but this is rather difficult to do right. (see also: OS benchmarks, where the benchmarker knows all the ways to tweak the OS he
        • Re: (Score:3, Interesting)

          by NovaX ( 37364 )
          AMD64 is not a processor, it is an instruction set. So you need to clarify whether you compiled your programs using 32-bit or 64-bit x86 instructions. I am not a gcc user, but I'm assuming that it chooses the default architecture based on your environment settings, thus AMD64 on 64-bit Linux. Since you've included a PowerPC processor, it's really not obvious.

          When the Core2 was released, benchmarks made it clear that Intel did not optimize for 64-bit performance. They have the architecture, but they pushed th
          • It depends on what kind of data you are processing. If you are doing 32-bit calculations then you would want to compile your code for 32-bit, assuming your processor can handle it, as most 64-bit CPUs can. If you are using 64-bit calculations then of course the 64-bit CPU would outperform the 32-bit, as you would have to do additional coding steps to simulate 64-bit on a 32-bit architecture: multiple 32-bit operations with bit shifting and the like.

            If you took code that was written for 32 bit operations and
            • by NovaX ( 37364 )
              You're assuming that the only difference between 32-bit and 64-bit x86 instructions is the bit sizes. That's not true, and the most immediate gain from AMD64 is the extra registers. There are a lot of changes to the ISA that will dramatically skew the results. The only negative results you would get compiling 32-bit code to 64-bit would be: A) The cache can contain fewer entries; B) Platform assumptions, such as when performing pointer arithmetic, would break. His code is probably fairly clean since he
        • Intel Core 2 Duo 2.66Ghz (Mac Pro) - 146 Million

          Where did you get a Mac Pro with a Core 2 Duo?

          That should be a 2-socket LGA-771 Xeon (Woodcrest) machine, which wouldn't fit an LGA-775 C2D, right?
        • I have some benchmarks too:

          http://www.vips.ecs.soton.ac.uk/index.php?title=Benchmarks [soton.ac.uk]

          Again, plain C code, no SSE/whatever. It is threaded, which makes it slightly different. The source is there too.

          Results:

          Opteron 850, 2.4 GHz, 4 CPUs, 4.5s
          Opteron 254, 2.7 GHz, 2 CPUs, 6.9s
          P4 Xeon (64 bit), 3.6 GHz, 2 CPUs (4 threads), 7s
          Core Duo, 2.0 GHz, 2 CPUs, 18.1s
          P4 Xeon (32 bit), 3.0 GHz, 2 CPUs (4 threads), 19.7s
          P4 (Dell desktop), 2.4 GHz, 1 CPU, 36.6s
          PM (HP laptop), 1.8 GHz, 1 CPU, 58.5s

          So I agree: an Opteron
        • Good god, how did this get modded as informative? "This just in, random poster redefines reality, AMD64 really faster than Core 2 Duo regardless of the tons of real world application performance data which completely contradicts this!!!"

          Please people, get a grip. This guy's little application does tons of random memory reads. This is the one area where the Opteron still kicks ass because it has an IMC. The number of applications where this is useful is fairly small, and it's been known for a long time.

          • Re: (Score:3, Informative)

            by GreatDrok ( 684119 )
            "This guys little application does tons of random memory reads"

            If only that was the case but actually it is very linear. The application can hold the whole of its memory requirements in cache these days so it hardly has to touch main memory and it was designed to do all the inner loop code using only registers. Heck, I doubled the size of the inner loop just to avoid a single register copy because it made a significant performance increase.

            The reason I like this code is that it shows how many operations y
            • Java is big-endian, like the SPARC and G4.

              Java has strictly-defined floating-point math that is incompatible with the x86. An x86 chip must save floating-point values out to memory to force the exponent to be the right size.

              JIT/emulation systems in general, including Java, do better with more registers. The G4 has about 6x as many once you exclude registers that are unavailable. (about 5 for x86, but at least 30 for the G4)
        • by pjbass ( 144318 )
          One of the things that makes a big impact on performance, on any platform, is the type and speed of memory used. Looking at your list of platforms above, I see an HP workstation used with the Opteron. I don't have one in front of me to verify, but reading on HP's website what chipset and memory are available, you could get a very distinct increase in performance simply due to lower memory latency through the chipset and memory type.

          What chipset(s) and memory were used in the Macs? Were they on par with a w
        • OK, thanks for the interesting result; an IBM POWER5 would be more interesting in this comparison. But your Core Duo vs. Opteron comparison is somewhat flawed in its logic: you basically compare single-core performance of those processors. If you split the problem up into multiple threads then the results will look entirely different, and that is what multiple cores are about: as many threads and processes as possible without significant slowdown. Given modern operating systems hosting 20-100 processes, each of the
  • Comment removed based on user account deletion
  • I want a floating quad.
  • Does SSE128 mean some significant departure from the doomed SSE instruction set?

    I'm not kidding. In SSE I'm familiar with, one of the input registers is always an output register, which means its contents are destroyed. Another flaw is that there aren't enough registers... SSE uses 8, where 32 are commonly not enough when latency is longish (especially with SoA-style programming, where pragmatically a single vec3 occupies 3 128-bit registers).

    ... or Madd. You know, multiply-add. Does it have that?
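    For what it's worth, this is how a multiply-add has to be spelled in the SSE we have today (my sketch; a true fused madd would be one instruction with a single rounding step instead of two):

        #include <xmmintrin.h>

        /* d = a*b + c, four floats at a time: a MULPS followed by an ADDPS */
        __m128 madd_ps(__m128 a, __m128 b, __m128 c)
        {
            return _mm_add_ps(_mm_mul_ps(a, b), c);
        }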

    • Does SSE128 mean some significant departure from the doomed SSE instruction set?

      No. It means 128 bit SSE ops can be done in a single cycle instead of two (64-bit chunks).

      In SSE I'm familiar with, one of the input registers is always an output register, which means its contents are destroyed

      How is this different from regular x86 (non-SSE) instructions? They have two operands where one is a source and destination.

      Another flaw is that there aren't enough registers... SSE uses 8

      AMD64 specifies 16 SSE (XMM) regi
      • by edwdig ( 47888 )
        The trade off to have 32 registers was probably not worth the die space and extra complexity. Having 16 probably gave most of the benefit, and having 32 provided diminishing returns.

        At least with the general purpose registers, AMD wanted to go to 32, but couldn't do it without changing the instruction set. I'd assume the same thing applies to the SSE registers.
        • At least with the general purpose registers, AMD wanted to go to 32, but couldn't do it without changing the instruction set. I'd assume the same thing applies to the SSE registers.

          How so? Unless I'm missing something here, I think the only cost is in the size of the register file and rename register set, but nothing ISA-related.
  • Great (Score:3, Funny)

    by Trogre ( 513942 ) on Saturday February 10, 2007 @04:33AM (#17961096) Homepage
    So now I'll see four penguins at startup!

  • What really interests me is how it compares on single- and double-precision calculations. If AMD gets into the range of Itanium performance, will Intel follow and kill their own Itanium by boosting Core 2 FP?
  • Can we start rejecting 'scoops' that sound like a radio/TV demolition derby or monster-truck madness advertisement?
  • by barracg8 ( 61682 ) on Saturday February 10, 2007 @07:43AM (#17961908)
    • Each of Barcelona's four cores incorporates a new vector math unit referred to as SSE128
    SSE has always been 128-bit (the 64-bit SIMD extensions were called MMX). AMD used to funnel the instructions through a 64-bit execution unit by splitting the work into two halves; the new core has a full 128-bit SSE pipeline so it doesn't need to split the operations. Nothing new here, just a faster internal implementation. Can this deliver an 80% improvement in benchmark performance? Quite possibly. Take a look at the Core 2 FP performance numbers - it also has a full 128-bit implementation of SSE.
    • And separating integer and floating-point schedulers also accelerates this thing called virtualization
    Huh. Hardware virtualization affects how the processor handles certain instructions such as privileged operations. FP instruction execution is unaffected. Virtualized workloads will benefit no more than non-virtualized workloads. Separate issue queues are good but does it specifically benefit virtualization? - no.
    • Barcelona blacks out power to individual portions of the chip that are idled, from in-core execution units to on-die bus controllers. This hasn't made it into PCs before ...
    Intel call this 'intelligent power capability'.
    http://www.intel.com/technology/magazine/computing/core-architecture-0306.htm?iid=search& [intel.com]
    • Barcelona adds Level 3 cache, a newcomer to the x86
    Xeons have featured L3 caches for years. http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors [wikipedia.org]
    • Barcelona is genius, a genuinely new CPU that frees itself entirely of the millstone of the Pentium legacy.
    • Barcelona is a new CPU, not a doubling of cores and not extensions strapped on here and there.
    Barcelona is an Opteron, with a doubling of cores and some extensions strapped on here and there.

    I'm not meaning to detract from AMD here - the fact that they have still not had to make any radical changes to the Opteron micro-architecture is a testament to the quality of the original design. They are slightly ahead of the game on virtualization - they're going to beat Intel to nested page tables - but other than that this chip is playing catch-up. Overall this is going to be a very nice piece of kit to work with. But nothing radical and new here.

    G.

    • Re: (Score:3, Informative)

      by ocbwilg ( 259828 )
      Xeons have featured L3 caches for years. http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors [wikipedia.org]

      Actually, if you go waaaay back to the Socket 7 days you could have L3 cache as well. The AMD K6 and K6-2 CPUs only had on-die L1, and the L2 cache was on the mainboard. But the K6-3 CPU had 256KB or 512KB of on-die L2 and was compatible with the same mainboards. So when you put that K6-3 in a Socket 7 mainboard, the mainboard's cache actually functioned as L3. Sure it wasn't on-chip, but L3 cache
    • Re: (Score:3, Informative)

      I fully agree, the article is mainly empty of information - it took words from AMD briefings and produced a meaningless salad.

      Now, as far as some claims, in detangled order:
      • FPU boost: this seems to be based on several things - one is the obvious widening of SSE2 issues. Others are increasing instruction fetch from 16B/cycle to 32B/cycle, making the FPU scheduler 128bit, unaligned loads and a doubling of cache bandwidth.
      • Virtualization: nested page tables and reduced switching times for the hypervisor.
      • Powe
      • by Björn ( 4836 )
        Here is another article or post [siliconinvestor.com] that has a relatively long list of the improvements in Barcelona.
    • by mczak ( 575986 )

      * And separating integer and floating-point schedulers also accelerates this thing called virtualization

      Separate issue queues are good but does it specifically benefit virtualization? - no.

      True. Additionally, the article implies this is something new. All K8 chips (= Opterons, Athlon64) however have always had separate schedulers for float and int instructions (in contrast to the Intel Core 2 chips, so AMD is touting that as an advantage - it's probably more of a design choice than really a simple "better" or "worse" for either solution). There is a reason the codename of Barcelona is K8L! As you mentioned, it'

      • by Björn ( 4836 )
        There is a reason the codename of Barcelona is K8L! As you mentioned, it's certainly not somehow a completely new chip.

        From an article [theinquirer.net] in The Inquirer:

        "WE'VE BEEN HEARING the "K8L" codename for ages now, but we can say now, straight from the horse's mouth, K8L was never a codename for AMD's upcoming generation of chips."

        If we are to believe the article, K8L was apparently the code name for the Turion64 where the L stands for Low-power. K9 was the X2 processors, so that would make the upcoming Barcelon

  • by master_p ( 608214 )
    Rumours have it that their next CPU model will be named 'Real Madrid'...
  • Paging Tables (Score:5, Informative)

    by Doc Ruby ( 173196 ) on Saturday February 10, 2007 @10:16AM (#17962778) Homepage Journal

    Nested paging tables is a per-core feature that will light the afterburners on x86 hardware virtualization. A paging table holds the map that translates virtual memory addresses to physical memory addresses, and each CPU core has only one. Virtual machines have to load and store their page tables as they get and lose their slice of the CPU. AMD solved the problem with nested paging tables. Simplified, each VM maintains its own paging table that stays fixed in place. Instead of loading and saving paging tables as your system flips from VM to VM, your system just supplies Barcelona with the ID of the virtual machine being activated. The CPU core flips page tables automatically and transparently. This is another feature that's implemented for each core.
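    Conceptually (my own sketch, not real hardware or hypervisor code, and the page-walk routine is just a stand-in), nested paging turns each guest access into two chained translations, so the hypervisor only has to point the core at the right nested table instead of rebuilding shadow page tables on every switch:

        #include <stdint.h>

        typedef uint64_t (*page_walk_fn)(uint64_t table_root, uint64_t addr);

        /* Simplified: real hardware also runs the guest's own table entries
           through the nested table; that detail is omitted here. */
        uint64_t nested_translate(uint64_t guest_cr3,    /* guest page-table root  */
                                  uint64_t nested_root,  /* per-VM host table root */
                                  uint64_t guest_virtual,
                                  page_walk_fn walk)
        {
            uint64_t guest_physical = walk(guest_cr3, guest_virtual);
            return walk(nested_root, guest_physical);    /* host physical address  */
        }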


    Context-switching has long been the weakest design point for x86 in "PCs", especially servers. x86 arch is rooted in single-user, single-threaded, single-context apps. The in-core registers that CPU operations execute directly against have to be swapped out for each context switch. In *nix, that means every time a different process gets a timeslice, it's got to execute two slow copies between registers and at best cache RAM, at worst offchip RAM (over some offchip bus). If the register count is larger than the bus width (even onchip), that's another multiple on that slow cycle. That context-switch overhead can be larger than the timeslice allocated to each process's "turn" in the schedule for lower-latency / higher-response (lower "nice") processes, approaching realtime.
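    As a toy illustration (mine, not kernel code), the per-switch cost being described is roughly the save and reload of this much architectural state, out to and back from memory:

        #include <stdint.h>

        struct task_context {
            uint64_t gpr[16];       /* x86-64 general-purpose registers     */
            uint8_t  fxsave[512];   /* x87/MMX/SSE state as saved by FXSAVE */
        };

        /* cpu_regs stands in for the live register file */
        void switch_context(struct task_context *outgoing,
                            const struct task_context *incoming,
                            struct task_context *cpu_regs)
        {
            *outgoing = *cpu_regs;   /* spill the running task's registers */
            *cpu_regs = *incoming;   /* load the next task's registers     */
        }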

    Unix was designed for multiple users and context-switching from the beginning. The chips it's run on coevolved with it. Linux arrived when x86 CPUs ran fast enough that context-switching was OK, but still a big waste compared with, say, the MicroVAX's multiple register sets. Windows architecture is rooted in the x86 architecture that DOS was designed for, and though perhaps Vista has finally lost all of the old design baggage that originated in the 8088/8086, its long history of UI multitasking means it's context-switching all the time, which will gain in speed. The MacOS switch to BSD means it's got lots of power bound up in the context switches that could be released with Barcelona.

    So while low-level benchmarks might show something like 80% FPU improvement, the high level (application) performance could improve quite a lot more. Recompiling apps to machine code that exploits more registers without the context-switching penalties could find multiples, especially apps with realtime multimedia that run concurrently with other apps. Intel's hyperthreading already gets past some of these bottlenecks in distributing tasks among multiple cores, but the Barcelona paging tables go even deeper, for likely extra performance (on top of Barcelona's own hyperthreading and new L3 cache).

    Aside from the marketing "vapormarks" we'll surely see out of AMD (and their sockpuppets) before it's actually released "midyear", I'm looking forward to seeing how this thing really runs in multitasking apps. I'm expecting "like a greased snake across a griddle".
