
45nm Opteron Performance, Power Efficiency Tested

An anonymous reader writes "Now that Intel has unleashed its next-generation Core i7 processors, all eyes are turned to AMD and its incoming wave of 45nm CPUs. To get a feel for AMD's future competitiveness, The Tech Report has taken a pair of 2.7GHz 45nm Opterons (with 75W power envelopes) and put them through their paces against Intel Xeons and older, 65nm Opterons in an extensive suite of performance and power efficiency tests — from Cinema 4D and SPECjbb to computational fluid dynamics and a custom XML handling benchmark. The verdict: AMD's new 45nm quad-core design is a notable improvement over the 65nm iteration, and it proves to be a remarkably power-efficient competitor to Intel's Xeons. However, 45nm AMD chips likely don't have what it takes to best Intel's Core i7 and future Nehalem-based Xeons."
  • AMD had it going (Score:5, Insightful)

    by bb84 ( 1301363 ) on Tuesday December 02, 2008 @04:30AM (#25956843)
    ...but have since really lost momentum and competitiveness. They truly awakened the sleeping giant when they were kicking Intel's ass a few years ago.
    • Re:AMD had it going (Score:5, Interesting)

      by speed of lightx2 ( 1375759 ) on Tuesday December 02, 2008 @05:22AM (#25957041)

      ...but have since really lost momentum and competitiveness

Seven out of the top ten supercomputers in the latest Top500 list have AMD in them, including the top two, so I don't really see the whole "AMD losing momentum and competitiveness" thing.

      • by c0p0n ( 770852 )
        Sales. Particularly in laptops.
        • Re:AMD had it going (Score:5, Interesting)

          by LordMyren ( 15499 ) on Tuesday December 02, 2008 @06:09AM (#25957249) Homepage

Yes, Intel laptop sales are better.

And I find it hilarious: Intel consistently makes better mobile CPUs, definitely, but everything else they do in the mobile space reeks to high heaven. To this day it's nearly impossible to buy an Atom netbook without an Intel GMA based chipset: that's a 2 watt CPU and a 12-25 watt chipset. If you buy a normal laptop, it's probably a 45W or 35W chip, even though the Pxx00 series is 25W and almost the same price, and again it comes with an absolutely worthless video card that sucks down >10 watts.

AMD certainly doesn't have as nice a processor offering. Their power is close (31W) but the performance just isn't as good. But in my mind they more than make up for it by always having power-thrifty chipsets boasting really good graphics capabilities. AMD's gone even further by offering PowerXpress and CrossfireX, allowing users to switch between integrated and discrete video cards or to use both at once (respectively). I'll take the unnoticeable CPU speed hit for a huge power savings and the boon of good integrated video.

The biggest thing keeping AMD down in the mobile world is the systems. OEMs tend to slap together something in a cheap case, missing half the plugs you'd expect, when they put together Athlon systems.

          • Re: (Score:3, Insightful)

            by wisty ( 1335733 )

            It probably doesn't help that the Megahertz race is no longer relevant, and savvy customers can no longer divide the MHz by the price, and find out which system gives you the most e-penis for your buck. Customers no longer have a clue, so everyone just shrugs and goes with the biggest brand name.
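The old shopper's heuristic is easy to sketch. A minimal illustration, where the chip names, clocks and prices are all made up:

```python
# Hypothetical price list; names and numbers are invented for illustration.
cpus = {
    "Chip A": {"mhz": 3000, "price": 180.0},
    "Chip B": {"mhz": 2600, "price": 120.0},
    "Chip C": {"mhz": 3200, "price": 320.0},
}

def mhz_per_dollar(spec):
    """The old shopper's heuristic: raw clock speed per dollar spent."""
    return spec["mhz"] / spec["price"]

# Rank chips by the heuristic, best "value" first.
ranking = sorted(cpus, key=lambda name: mhz_per_dollar(cpus[name]), reverse=True)
print(ranking)  # Chip B wins on MHz/$ despite having the lowest clock
```

Which is exactly the problem: once clock speed stopped tracking real performance, this one-liner stopped meaning anything.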

          • Re: (Score:3, Insightful)

            by lawaetf1 ( 613291 )

            The biggest thing keeping AMD down in the mobile world is the systems.

Speaking beyond just the mobile market, it's important to keep in mind that Intel is facing antitrust suits around the world, and has already been found guilty in S. Korea, with Europe getting increasingly annoyed at their delays. If the accusations are true, Intel's unlimited R&D budget is ill-gotten via illegal, exclusionary business practices.

Frankly, I'm all but blown away at how a company with a smaller market cap than either NVIDIA or Intel can continue to compete and sometimes win.

          • Re: (Score:3, Informative)

And I find it hilarious: Intel consistently makes better mobile CPUs, definitely, but everything else they do in the mobile space reeks to high heaven. To this day it's nearly impossible to buy an Atom netbook without an Intel GMA based chipset: that's a 2 watt CPU and a 12-25 watt chipset. If you buy a normal laptop, it's probably a 45W or 35W chip, even though the Pxx00 series is 25W and almost the same price, and again it comes with an absolutely worthless video card that sucks down >10 watts.

            Not true at all, wh

You're still talking about best-case chipsets that consistently drain 3x more power than the CPU running full tilt, and that have the most atrocious graphics support imaginable.

Intel's solution to this irredeemable state of affairs was to buy PowerVR IP from Imagination and to ditch their own graphics core entirely. I have yet to see any 3D benches of the PowerVR kit, but I fully expect it to outperform Intel's solutions.

But, if Intel's specs are to be believed, it would seem the overall system power budget nod

Of course Intel's specs are to be believed. If those TDP numbers weren't representative, nobody could do business with them; manufacturers who buy Intel parts have to design their systems with power dissipation in mind. Nvidia recently earned the ire of the industry with their new 9300/9400 chipsets for this very reason.

I wouldn't hold my breath on Intel's PowerVR graphics on board Poulsbo (GMA500). According to this review, it scores 405 in 3DMark 2003. For comparison, the Eee PC 701 sco

      • by bb84 ( 1301363 ) on Tuesday December 02, 2008 @05:42AM (#25957143)
and out of all computer owners in the world, how many of them have supercomputers? Right now, at the consumer level, most people consider Intel's stuff better. No bias here, but just looking at specs and performance, Intel currently sells the best goods. That's exactly what I meant: they awoke the sleeping giant. Intel has more experience, money, employees, and resources. No, that doesn't mean they have to have the best products. However, when you take all that and add the damaged ego from when AMD first whooped 'em, they pooled their talent, money, and everything else and slammed back. I interned at Intel's Fab 20 in 2006. People talked about AMD and how Intel really needed to make a comeback. My impression was that they were not very amused at the ratings then, and ever since the Core 2 Duos, general user preference has been swinging back in their direction because they started delivering a much superior product. AMD needs to get their act together if they want to hold out against Intel in the long run.
        • by LordMyren ( 15499 ) on Tuesday December 02, 2008 @06:31AM (#25957351) Homepage

Your perspective's demented, because you think CPU performance still matters for end users. CPU performance has always been a rat race; the difference is that it's fast enough now.

It's not the number of computers or supercomputers you should be counting, it's the number of cores. Google runs data centers with >50,000 computers; they're working on data center #20 in the States now. Yahoo, Microsoft, Sun, IBM, eBay, Amazon, Pixar... they all need these colossal systems to support their business. These are huge volume sales. Ask how much CPU any of these companies wants and they'll ask how much you can give them.

The desktop, on the other hand, is increasingly irrelevant. The square millimeters of the average desktop CPU are going to shrink considerably; Atom is Intel trying to carve out room for x86 in devices of a much smaller size. Consumers won't need the 6-core or 12-core CPUs AMD's putting out next; most can barely use the dual core they have now. In another decade I am 100% certain most desktops will have been subsumed into phones: phones with Bluetooth keyboards and some HDMI/analog out. Frame buffer limitations aside, we're almost at that power level already.

          In the workplace, virtualization and increasing computing power will probably lead to thin clients again. Why give everyone a $900 workstation when $250 terminals and a couple heavily virtualized servers are easier to maintain?
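The back-of-the-envelope math behind that trade-off is simple enough to sketch. The server price below is a made-up figure purely for illustration:

```python
def fleet_cost(n_users, per_seat, shared=0.0):
    """Total hardware outlay for a fleet: per-seat cost plus shared servers."""
    return n_users * per_seat + shared

# 100 users: $900 workstations each, vs $250 terminals plus two
# hypothetical $15,000 virtualization servers (invented number).
workstations = fleet_cost(100, 900.0)
thin_clients = fleet_cost(100, 250.0, shared=2 * 15000.0)
print(workstations, thin_clients)  # 90000.0 55000.0
```

And that's before counting the maintenance savings, which is the real argument for thin clients anyway.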

What the grandparent and I are saying is: if you want to build big fast machines, you need someone who has a use for those super machines. And frankly I don't see any commitment aside from dedicated gamers and the businesses for whom computing is life.

          • Re: (Score:3, Funny)

            by wisty ( 1335733 )

            I think Steve Ballmer has a cunning plan to prevent this :D

Unfortunately, Ballmer's current cunning plan, Vista, hasn't worked out :-) The next cunning plan, Windows 7, probably won't do worse, and maybe will do better, but by around XP's time the CPU people had pretty much outrun MS, and the RAM people have also gotten ahead of it, even with DDR and certainly by DDR2. One thing I'm missing by not running Vista is the cache-on-flash feature, which is too bad; it'd let my disks spin down most of the time.

              The other traditional driver for the CPU makers has been gamers, who can

Yes, his cunning plan was to let OEMs continue to ship absolutely craptacular Intel integrated video (as opposed to the mildly craptacular Intel integrated?) with stripped-down Vistas, causing an uproar 18 months down the road, inciting Windows 7 to revert to software rendering and skip the video card altogether, thereby inflating the need for powerful CPUs. What a diabolical plan!

          • by smallfries ( 601545 ) on Tuesday December 02, 2008 @09:11AM (#25958211) Homepage

There is a very simple reason that the Top500 is full of Opteron systems: until the i7, Intel did not have an integrated memory controller. Although the Core 2 does more work per cycle, at lower power, and with better caching, there is a measurable difference in large memory-bound workloads. The other factors were enough to make them faster on the desktop, but the lack of an integrated memory controller was killing them in large-scale systems.

            The i7 continues the advantages that Core2 had over the Opteron range, but adds that missing memory controller. It's not clear yet if it is good enough. The memory subsystem graphs in the article are interesting. Intel still has a faster, larger cache, but may lack raw bandwidth to main memory.
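The memory-bandwidth point is usually demonstrated with STREAM-style microbenchmarks. As a rough sketch only (a real test would be written in C, with arrays sized to defeat the cache), something like this measures effective copy bandwidth:

```python
import time

def copy_bandwidth_gbs(n_bytes=64 * 1024 * 1024, repeats=5):
    """Time a large in-memory copy and report effective GB/s moved.
    One bytes() call reads n_bytes and writes n_bytes, so each pass
    moves 2 * n_bytes of data; we keep the fastest pass."""
    src = bytearray(n_bytes)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = bytes(src)  # full read of src plus full write of the new buffer
        best = min(best, time.perf_counter() - t0)
    return (2 * n_bytes) / best / 1e9

print(f"~{copy_bandwidth_gbs():.1f} GB/s effective copy bandwidth")
```

On a memory-bound workload like this, the integrated-memory-controller machines pull ahead regardless of how much work per cycle the core can do.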

            I'm not going to disagree with your comments on the impending death of the desktop (or agree with them either). But I will point out that people have been making exactly the same comments and predictions for 20 years. We still have desktop computers.

            • by h4rm0ny ( 722443 )

              All of which goes to show that unlike most desktop buyers who choose a chip based on easy to grasp numbers like the MHz, people who buy servers look more closely. Intel appear to have taken the speed crown for a bit, but AMD have always offered a good, solid, power-efficient design that does the job. And it looks like they're creating the basis for an impressive line to come. I hope the recession doesn't hit them too hard, they deserve to do well with their recent efforts.
HyperTransport has a good part to do with AMD's standing in the Top500 as well, I believe. And again, Intel has an answer-by-imitation response in QPI, whose viability we have yet to see.

Core i7's raw memory hypotheticals are almost legendary... 48GB/s with tri-channel DDR3-2000. That's very near echelons formerly reserved for video cards.
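That 48GB/s figure is just channels times transfer rate times bytes per transfer, which is easy to check:

```python
def peak_bandwidth_gbs(channels, transfers_per_sec, bus_width_bits=64):
    """Theoretical peak: channels x transfers/s x bytes moved per transfer."""
    return channels * transfers_per_sec * (bus_width_bits // 8) / 1e9

# Tri-channel DDR3-2000: three 64-bit channels at 2000 MT/s each.
print(peak_bandwidth_gbs(3, 2000e6))  # -> 48.0
```

Theoretical peak only, of course; sustained bandwidth is always a fraction of that.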

Intel's making up for all the platform work AMD's been schooling them on for years. Virtualization's supposedly far improved on Nehalem too, but again, like QPI, I haven't seen any

          • In the workplace, virtualization and increasing computing power will probably lead to thin clients again. Why give everyone a $900 workstation when $250 terminals and a couple heavily virtualized servers are easier to maintain?

            It's just like my COBOL teacher in college said!

            No, seriously, I'm not being snarky. He really said that about eight years ago. Said he had seen it happen before and that it would happen again.

Actually, come to think of it, we have seen it happen again. We just didn't notice.

The modern PC is rarely anything more than a thin client: a thin client to the web, where colossal server farms feed us and do our work for us. Sure, it's not VNC, but if you look at where the cycles are getting spent, it's typically far, far away.

          • by LWATCDR ( 28044 )

I tend to agree, but there are some trends that will use more CPU.
HD video editing: you can buy a digital camera that will shoot 720p video for all of $150, and editing and transcoding video takes CPU time.
Gaming is the huge question. The consoles often make more sense for gaming, but I think people will always want to play games on their PCs.
Then you have speech input. That can always use more CPU power.
So we may not be at "good enough" yet, but I think we are getting very close.

Gaming is an exception here; there's a lot of control flow. But with regard to video editing and (almost certainly) speech input, they're both data-intensive tasks that are far better tailored to the GPU. Video's already decoded on the GPU, and it's only a matter of investing the necessary work to get encoding there too. There are already some commercial solutions, and they boast incredible speedups. There are constant murmurings about the video card companies and/or Apple getting prepped for releasing

          • From that perspective, I've been watching the low-power VIA chips slowly creep into desktop speeds. We're at the point where the smaller players can start competing in the desktop and laptop markets soon, and it should be fun.

          • by Jaeph ( 710098 )

"Your perspective's demented, because you think cpu performance still matters for end users."

It depends: in MMORPGs (e.g. DAoC) I have found CPU performance to be a significant factor when it comes to lag. Not the only significant factor, but certainly one of them. Maybe that's updating the UI, I'm not sure, but it is one.

As for the future, if your CPU has enough unused horsepower, why not move some of the graphical functions off the graphics card and back onto the CPU? E.g. do the physics calculations in softw

re: DAoC, I cited games / dedicated gamers as a place where CPU is still important. Also, the game is probably poorly coded if it affects lag.
re: a CPU with enough power for the GPU's job, I think this is a common fallacy. Silicon is free and getting freer (cite: Mr. Gordon Moore). What's not free is power. Why would you spend so much more power running data-parallel computation on a batch of single-threaded monsters? A GPU is a simpler machine, but it's one extremely tailored to data-intensive ops, and it's far mo

        • Since when do most computer consumers know what a processor is, or have an opinion about AMD?

Most people buy Intel because most people are shown an Intel machine by the salespeople. Some of the very large OEMs don't even make AMD systems available. Personally, when I've done custom machines, or advised family on low-end machine purchases, *every* one of them has chosen AMD when I gave the option, simply because the price for a given performance point was better.

          And no, I'm not an AMD fangirl, I'm on an I

      • Re:AMD had it going (Score:4, Informative)

        by Henriok ( 6762 ) on Tuesday December 02, 2008 @06:46AM (#25957411)

Seven out of the top ten supercomputers in the latest Top500 list have AMD in them, including the top two, so I don't really see the whole "AMD losing momentum and competitiveness" thing.

Seven out of the top ten supercomputers have Power Architecture processors in them too, including the top two, but I'd say that Power Architecture has lost its momentum, wouldn't you?

PS. For those who don't know: Roadrunner uses PowerXCell 8i processors, which are Power Architecture. All Cray XT3/4/5 supercomputers use PowerPC 440-based communication processors called SeaStar. BlueGene uses PPC 440/450-based custom CPUs. DS.

        • Re: (Score:2, Insightful)

          by hattig ( 47930 )

Power and PowerPC are doing great: Xbox 360, PS3, Wii, Toshiba TVs, supercomputers, set-top boxes, ...

          Just because it isn't doing well in the desktop PC market doesn't mean it is losing its momentum.

      • Re: (Score:3, Informative)

        by rsmith-mac ( 639075 )

        True, but for how much longer? The reason you find Opterons in such massive servers is because HyperTransport scales up much better in 4P+ designs than Intel's ancient FSB. Now that they have QuickPath Interconnect for Nehalem/Core i7 and its derivatives, they aren't going to be held back by buses any longer. HT was AMD's one last trump card against the Core 2 generation, but they have no such card for use against the Core i7 generation.

No, Intel only plans to refresh the Xeon UP and DP series with Nehalem in Q1 2009. They will only offer QPI for 4P+ systems in late 2009, when they release their Nehalem-based Xeon MPs. So it is safe to say that AMD will continue to dominate the 4P market for at least one more year.
They might, and as far as bang for the buck goes, AMD is still there. The main thing that burns me with AMD is the stunt they pulled with the short-lived Socket 939. AMD left those people hanging, high and dry.

            • Re: (Score:2, Insightful)

Absolutely. I was happy to support the underdog until I was the beneficiary of a '939 shafting: AMD promised AMD-V support on Socket 939 and then pulled it, and although it was not AMD's fault (I'm looking at you, nVidia & ASUS), the chipset/motherboard performance, driver support and hardware reliability left me with a very sour taste.

Now that there is no longer any viable non-x86 solution, I've gone Intel 100%. Not just the CPU: I'll now only buy Intel motherboards; turns out they are _very_ reliable, stabl

The boards are fine (made by Foxconn, I believe). They just don't have as many "offerings" as others, e.g. more ports, overclocking (remember Intel was the first to lock their FSB), etc. Also, while Intel does CPUs great, chipsets aren't their strong point (the same could be said for AMD).

The boards are fine (made by Foxconn, I believe). They just don't have as many "offerings" as others, e.g. more ports, overclocking (remember Intel was the first to lock their FSB), etc. Also, while Intel does CPUs great, chipsets aren't their strong point (the same could be said for AMD).

                  I actually don't mind if the chipset doesn't have every bell and whistle, all I really care about is stability.

To Intel's credit, I've found their motherboards have stability that almost approaches some of the old SPARC, PA-RISC, Power & Alpha boxen I still have in play.

That said, we would all be in a much better position if there were still a viable alternative architecture in the marketplace (HPC and embedded aside). The Intel guys have certainly pulled some clever tricks to take their instruction set

As far as I remember, Socket 939 had a longer and more useful life than the Socket 754 that preceded it. I remember helping someone build a Sempron 3400 on Socket 754; a bit later, when they wanted to upgrade, there wasn't really any place to go.

Of course, the worst lemon in somewhat recent history had to be Intel's Socket 423, of which I have an example: expensive Rambus memory, a 100MHz FSB, and Intel only produced CPUs for it for a year, only supporting P4s up to 2.0GHz.

754 lasted longer.

Also, as Nevarre points out, the 939 was supposed to be THE AMD socket.

                • Socket 754 was kept around for a long time as a budget/mobile socket, but AMD really didn't do anything with it. If you bought into Socket 754 early on, by the end of its life you could still only get a single core Sempron or Athlon 64 for it. At least with Socket 939, you could move up to an Athlon X2 with some faster clock speeds. Not saying that 939 lived a long life or anything, but I found that people who bought into Socket 754 seemed a bit more ticked off than those that bought into Socket 939.


      • ...but have since really lost momentum and competitiveness

Seven out of the top ten supercomputers in the latest Top500 list have AMD in them, including the top two, so I don't really see the whole "AMD losing momentum and competitiveness" thing.

It is incredibly short-sighted to gauge company performance by supercomputing statistics. The reality is that AMD has been second best for quite some time now. This is retail, not how many chips are in supercomputing top tens.

        The truth is, they are not losing their competitiveness or their momentum, they're simply maintaining a fairly steady pace of being second best by a similar margin.

Considering the size disparity between Intel and AMD, it's really very impressive that they keep up at all.

    • Re: (Score:3, Insightful)

      by Joce640k ( 829181 )

      But ... if they're cheaper than Intel then why do they need to be faster?

      PS: These days power efficiency is almost as important as speed.

    • by Hal_Porter ( 817932 ) on Tuesday December 02, 2008 @07:36AM (#25957679)

What scares me is that AMD might decline into a purely budget CPU house like Cyrix did and then leave the market altogether.

Now think back to the Itanium fiasco. If AMD hadn't been around, or hadn't been making high-end chips, Intel could have made the high end IA64-only and gradually migrated the whole market to it. So now we'd be running underpowered and overpriced IA64 chips. In a sense, the thing that prevented that was that chips were dual-sourced, so Intel couldn't force a transition to an inferior successor like Microsoft did with XP to Vista. And IA64 was likely so patented that no one else would ever be able to make compatible chips.

Of course, with AMD around, Intel was forced to adopt x64 and produce the excellent Core, Core 2 and now Core i7 microarchitectures, and to do it very quickly. Just imagine what would have happened if they hadn't been around. Recently I've heard AMD say they will go fabless, for example. TSMC and other commodity fabs don't have the technology to match Intel, so AMD will lag behind. For low-end stuff it doesn't matter much, but it really does for the high end. Mind you, Intel is kicking ass in the netbook market too. It really makes you wonder how long AMD will be around. And if AMD goes under, so would ATI, since they bought it. I actually prefer Intel and NVidia in this generation, but I'm not sure they would be much good if there was no competition.

Not a very comforting thought, is it?

AMD is splitting in two: a fabless AMD and a foundry company (still unnamed), so they will still have the technology to match Intel, but they could use commodity fabs for cost reasons.


      • Of course with AMD around Intel was forced to adopt x64 and produce the excellent Core, Core2 and now Core i7 microarchitectures and do it very quickly.

Actually, the Core 1 doesn't support 64-bit and doesn't use the Core architecture; it's basically just a Pentium M (or two of them). The first CPU to use the Core architecture is the Core 2.

      • by rbanffy ( 584143 )

I think if IA-64 had ever achieved the kind of volume the x86 market has, it would have ended up being a fine processor with lots of room for improvement still. It never really stood a chance: it was marketed as a server processor, and Microsoft offered only half-assed support for it (it's in their best interest to keep computers a commodity, and they will fight any attempt to differentiate in that space). In addition, by the time it could be a viable high-power desktop workstation for developers or data-crunchers (a spa

I fail to see the benefit of explicitly parallel instruction computing over SMT. Why decide in advance what has to get run together, when really all you want to do is keep all your functional units in use? I never saw the appeal in IA64. There's been Linux IA64 support for years, and aside from some number-crunching HPC clusters no one's used it.

          That being said, I would very much like to see x86 die.

          I'd far prefer PPC; it upset me to no end that Apple went from giving up on PPC to nailing the coffin shut by

      • by Kjella ( 173770 )

        Recently I've heard AMD they will go fabless for example. TSMC and other commodity fabs don't have technology to match Intel, so AMD will lag behind.

Well, neither did AMD; being a second-fiddle CPU producer just isn't enough to fund it. These days you're looking at specialty process manufacturing companies like TSMC doing CPUs, GPUs, memory, SSDs and whatever else needs making, with the AMD foundry probably going the same way. At the absurd investment levels we're looking at here, it might not be long before you see one TSMC/IBM/AMD joint venture vs Intel as we head into 10-20nm land.

    • Re: (Score:2, Informative)

      by hattig ( 47930 )

Luckily for them, Nehalem Xeons are a long time off in the computer world, especially in 4- and 8-socket variants, where AMD excels. Indeed, the graphs in the review show that given another two processors, AMD would have been far more competitive. And in the server benchmarks Shanghai performed extremely well from the start, apart from the reimplementation of XMLBench in C# (instead of using the well-tested C, C++ or Java version), which had problems.

      In addition AMD have a platform that has already been

    • Re:AMD had it going (Score:5, Informative)

      by this great guy ( 922511 ) on Tuesday December 02, 2008 @08:13AM (#25957873)

      I wouldn't be so quick to say that AMD has lost it all.

      First of all, in the 4 and 8-socket market, AMD still has no competition. The Intel Xeon MP series is still using the outdated FSB technology. This series also requires expensive and power-consuming FB-DIMM modules instead of DDR2/DDR3. Nehalem-based Xeon MPs are not going to ship before Q4 2009. Etc.

Secondly, in the 1-socket and 2-socket market, and regarding the latest 45nm AMD Shanghai and Intel Nehalem: so far there are very few benchmarks comparing the two microarchitectures directly; most of the hardware review sites make the mistake of comparing Shanghai against the older Intel generation, or the older AMD generation against Nehalem. But from what I have seen, clock-for-clock, for most workloads, Shanghai and Nehalem are very close, within +/-10% in terms of performance, and Shanghai seems to do this in the same or a slightly lower power envelope. Some workloads do exhibit a more significant performance difference, with either Shanghai or Nehalem pulling ahead of its competitor. Now, comparing clock-for-clock isn't really what matters; what matters is dollar-for-dollar comparisons. But what is interesting is that AMD has priced the Shanghai Opterons 23xx to match very closely the Nehalem Xeon 55xx series at equivalent frequencies. This tends to indicate that AMD thinks they offer a clock-for-clock value identical or better than Intel's.
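The dollar-for-dollar comparison boils down to benchmark points per dollar. A sketch with entirely hypothetical scores and prices (real comparisons need published benchmark results and street prices):

```python
# Hypothetical scores and prices, purely for illustration.
parts = {
    "2.7GHz Shanghai Opteron": {"score": 100.0, "price": 700.0},
    "2.66GHz Nehalem Xeon":    {"score": 105.0, "price": 700.0},
}

def value(part):
    """Benchmark points per dollar: the comparison that actually matters."""
    return part["score"] / part["price"]

for name, part in parts.items():
    print(f"{name}: {value(part):.3f} points/$")
```

At matched prices, as in AMD's actual 23xx-vs-55xx pricing, the points-per-dollar winner is simply whichever part benches higher.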

      The only area where AMD will clearly be unable to compete in the 1 and 2-socket market is the very high end: 1-socket Shanghai processors will top out at 3.0 GHz, 2-socket processors will top out at 2.8 GHz, while Intel goes all the way up to 3.2 GHz. However these expensive processors represent a very small proportion of the market share (virtually nobody buys $1000+ processors), so it shouldn't be a huge factor regarding which processor manufacturer "wins" this 45nm battle. Intel will have the bragging rights, but that's about it.

One last point I would like to mention is that AMD will be the only one to offer low-power 1-socket 45nm Shanghai parts for at least the entire first half of 2009: 55W and 75W ACP Opteron 13xx, and 95W TDP Phenom II. Intel, meanwhile, will only offer Core i7 and Xeon 35xx processors rated at 130W TDP (!). They are planning to release lower-power 45nm Nehalems only during the second half of 2009. I find it rather stunning that Intel doesn't care more about power consumption... especially for their Xeon 55xx line, since the server market cares about energy efficiency. We all remember that extravagant power consumption and heat were major factors in the failure of the Pentium 4 NetBurst microarchitecture...
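Why the server market cares: a rough sketch of the per-socket electricity gap, assuming worst-case draw at the rated figure and an illustrative $0.10/kWh (and noting that ACP and TDP are not measured the same way, so this is only indicative):

```python
def annual_energy_cost(watts, price_per_kwh=0.10, hours=24 * 365):
    """Electricity cost of running a part flat-out for a year."""
    return watts / 1000 * hours * price_per_kwh

# Per-socket gap between a 75W Opteron and a 130W TDP Xeon 35xx.
delta = annual_energy_cost(130) - annual_energy_cost(75)
print(f"${delta:.2f} per year per socket")  # -> $48.18 per year per socket
```

Multiply that by thousands of sockets (plus the extra cooling it implies) and the efficiency argument writes itself.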

      • Re:AMD had it going (Score:4, Interesting)

        by LWATCDR ( 28044 ) on Tuesday December 02, 2008 @10:55AM (#25959375) Homepage Journal

You also only kind of hinted at the ease of migration. All that the OEMs need to do to introduce Shanghai is put it in the socket. With the i7 family, they will need to move to a new motherboard as well. For manufacturers this will be a big win, since for many buyers it will be seen as a nice safe evolutionary change. The one thing that worries the server market is big changes. It is all about stability.

That's right. Nehalem requires new chipsets, new sockets, hence new motherboards, *and* new memory (more expensive DDR3 replacing DDR2), as opposed to Shanghai, which can just be dropped into any 2-year-old Socket F motherboard. While Intel had no choice and had to make these architectural changes, this is a factor that is going to hamper the rate of adoption of Nehalem.
          • by LWATCDR ( 28044 )

Well, I would guess most server users will not be doing a CPU upgrade. But for the OEMs it is great: you have a new SKU with just a change of CPU. Does Intel even have a server version of Nehalem yet?

AMD will be moving to HT3 in 2009; it's a new socket. But yes, the initial release is socket-compatible.

I myself unfortunately bought an Intel Core 2 Quad a little over a year ago, and regret big time that I did not wait for the AMD quad instead. The reason is that Intel only allows real virtualization in crippled protected mode!!! That essentially destroyed my objective in buying a quad core... For virtualization, AMD has a much better design that avoids these problems. If you intend to run separate systems such as Windows and Linux, sharing resources such as the network card and so on without losing power, you bett

AMD were royally in the red when they bought ATI. One would assume that they couldn't afford to plough billions into R&D, and so their chips suffered. They have since more or less recovered; ATI is still more than a match for Nvidia and will provide a healthy revenue for AMD (even if AMD's chips aren't the fastest, the HD4870X2 is still the fastest single card on the market, I believe). They'll get back eventually. The trouble (if you can call it that) with Intel is that they have a ridiculously large pot o
AMD desperately needs to realize the synergy between the graphics and CPU factions. They've had plans for GPU-integrated CPUs for a while, and they must deliver in a good way. If they do, they'll be in a fantastic spot.

I'm confident AMD will hold up fine against NVidia. OpenCL should level the playing field that NVidia has dominated in GPGPU: AMD has >2x the double-precision FP performance, and with a common spec for using it, hopefully people will. AMD should do fine in the graphics space; they already ha

    • AMD was kicking Intel's ass only during the time period where Intel was making what turned out to be a losing gamble on the end of x86, putting most of their resources into Itanium instead. They ran their x86 line during that period largely on autopilot, which AMD took advantage of to catch up and surpass Intel in the x86 space, betting (correctly) that x86 would remain the platform of choice, and Itanium would go nowhere.

      When Intel eventually realized this was the case, and shifted most of their R&D ba

  • I'm not wise to all the marketing names that chip vendors use these days: will this 'Opteron' chip be priced competitively as an alternative to the Core i7, or will it just be an expensive server processor? I know that having the fastest top-end chip has a halo effect on the rest of the range but with Intel's mid-range processors being good and cheap, that's where AMD most needs to make an improvement.

    • by thona ( 556334 ) on Tuesday December 02, 2008 @04:39AM (#25956881) Homepage

      No marketing talk in those names.

      Not sure you would call it expensive, but the Opteron chips are by definition server-only chips. The Opteron 23xx series (45nm Shanghai) is dual-processor, while the 83xx series is quad-processor.

      The end-user equivalent is the Phenom series.

      Note that this is a technical difference, not marketing talk. The Opterons use Socket F, while the Phenoms (single-processor only) use the AM2+ socket. Different pin count, different number of interconnect ports (for connecting to other processors).

      45nm Phenoms are IIRC supposed to appear soonish ;) Opterons are starting to become available now - I pick up a new server on Friday.

      • by this great guy ( 922511 ) on Tuesday December 02, 2008 @07:11AM (#25957551)

        Well to be pedantic:

        • The Opteron 1xxx series uses the same AM2+ socket as the Phenom processors and is in fact a rebranded Phenom (no technical difference). But you are correct in that the Opteron 2xxx and 8xxx are completely different animals.
        • The Opteron 8xxx series is for systems with 4 or more sockets (not restricted to 4). This is made possible because each of the 3 HT links per processor runs the cache-coherency protocol (whereas only 1 of the 3 HT links of an Opteron 2xxx runs the protocol).
    • Bit of both, actually. Most often it has been known as a server chip. That being said, there have been Opterons targeting the desktop market. Generally, desktop-market Opterons had better cache or something to that effect compared to their regular retail brothers. A good example is the Opteron 165, a regular-socket CPU known for excellent overclocking potential and definitely not priced for the server market.
  • --- Or at least I can't reach it. I guess their servers don't feature any of the CPUs reviewed ;)

    Mirror anyone?

  • It's too bad AMD rested on their laurels after destroying Intel's Itanium. Too bad they destroyed Itanium by the simple expedient of backwards compatibility, as opposed to superior architecture.

    Because that's what's happened to them now--Intel has them dead beat on core architecture, and no amount of size reduction or megahertzing can save them now.

    Sure hope they're hard at work on some kickass new architecture in their basement, because we desperately need Intel to have a strong competitor.
    • more the reverse (Score:5, Interesting)

      by Trepidity ( 597 ) <delirium-slashdot.hackish@org> on Tuesday December 02, 2008 @07:03AM (#25957493)

      AMD didn't really destroy Itanium and then rest on their laurels. Although you have to give them some credit for coming up with reasonably good chips that the market wanted, it was more that Itanium was the reason AMD was competitive with Intel in the x86 space for a few years in the first place.

      Intel has orders of magnitude more R&D budget and especially capital for fab construction than AMD does. So AMD is perpetually at least a half-generation behind Intel on the tech curve: they keep coming up with chips that could beat Intel... if they had come out a year ago. Now when Intel effectively skips a generation, as they did when they sunk all their resources into Itanium and mostly ignored x86 for a year or two, this is enough to give AMD the lead. But once Intel shifted fully back into x86, they crushed AMD again.

      • On our benchmarks we can't get 4 cores' worth of performance out of an Intel CPU, but we can get nearly 8 cores' worth out of AMD. AMD's memory bus architecture is simply better.

  • It's a shame, too (Score:4, Insightful)

    by kmike ( 31752 ) on Tuesday December 02, 2008 @06:28AM (#25957331)

    I find it disappointing that the test of the supposed server-oriented processors does not include web server tests - after all it's probably the largest market for such processors.

    I mean, does anyone really care about the Folding@Home numbers these processors can crunch? Or the "VRAD map build benchmark"? WTF?

    • That would also come down to chipset, RAM, and HD performance, which would likely distort the question of which CPU would be the better choice.
      • by kmike ( 31752 )

        Don't the other tests, especially those involving HD encoding, use RAM, IO, and the chipset to run?

        I mean, all these semi-synthetic benchmarks tell me nothing about how the system would fare in the real-world usage such as web hosting.

    • Re:It's a shame, too (Score:5, Interesting)

      by ThePhilips ( 752041 ) on Tuesday December 02, 2008 @08:51AM (#25958085) Homepage Journal

      Also, they have used several openly pro-Intel applications: Cinebench and M$ .Net.

      Cinebench never hid the fact that they optimize for Intel and if you want to have best performance you need to buy Intel CPUs.

      M$ .Net XML benchmark - the M$ C/C++ compiler and libraries use Intel's hand-written asm code in many places. And it has always produced code optimized for Intel architectures.

    • Something like a multi HTTP / SMTP server test would be nice, and running a few dozen virtualized servers too.

  • by GNUPublicLicense ( 1242094 ) on Tuesday December 02, 2008 @06:41AM (#25957389)
    Again, I wonder if the benchmarks used AMD-optimized code (they have to use the proper GCC backend). It seems that most of the time, the benchmarks for non-Intel processors are based on Intel-optimized code. I have never seen it mentioned in benchmarks whether the tools were using the best machine code for the targeted processor... yeah... that smells bad.
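    As a rough sketch of what the parent is asking for, a vendor-aware build script could pick GCC's -march flag from the CPU vendor string. The -march values below are real GCC targets of that era; the mapping and the pick_march helper are illustrative assumptions, not any benchmark's actual build tooling:

    ```python
    # Illustrative only: choose a GCC -march flag from the CPU vendor
    # string as reported in /proc/cpuinfo. The mapping is a simplification.
    MARCH_BY_VENDOR = {
        "GenuineIntel": "-march=core2",     # Core 2 / early Xeon era
        "AuthenticAMD": "-march=amdfam10",  # K10: Barcelona / Shanghai
    }

    def pick_march(vendor: str) -> str:
        """Return a tuned -march flag, falling back to generic x86-64."""
        return MARCH_BY_VENDOR.get(vendor, "-march=x86-64")

    print(pick_march("AuthenticAMD"))  # -march=amdfam10
    ```

    Compiling a benchmark with `gcc -O2 -march=amdfam10` rather than an Intel-tuned flag is exactly the kind of difference that could skew cross-vendor results.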
    • Re: (Score:3, Interesting)

      Call me a conspiracy theorist, but I'd like to see the benchmarks on *BSD and GNU/Linux systems as well. I've often wondered if Microsoft has a deal with Intel to slow AMD processors.
      • NUMA (Score:4, Informative)

        by DrYak ( 748999 ) on Tuesday December 02, 2008 @09:57AM (#25958597) Homepage

        i've often wondered if microsoft has a deal with intel to slow amd processors.

        Yes, sort of.
        It's called NUMA - Non-Uniform Memory Access.

        Up until recently Intel platforms had the memory controlled by the northbridge, with all CPUs and all cores having the same access to the memory.

        The newest Intel platforms and all 64-bit AMD chips have the memory controller on the processor package. In a multi-socket configuration, each processor controls its own chunk of memory, so for some ranges access will be faster because the processor is accessing them directly, and for others latency will be higher because the processor has to ask its neighbour over HyperTransport / QuickPath.

        To be able to function in such a configuration, an OS should pay some attention when scheduling processes and threads to cores: ideally, all threads of a given process are scheduled to cores that have direct access to the resources used by that process (while avoiding scheduling two threads onto a physical core and its corresponding hyperthreaded virtual core when another physical core is sitting idle).

        Windows has always deeply sucked at this. Open-source OSes, on the other hand, have had much more work put into it. (That's why they are much more popular on supercomputers.)

        This also introduces technical difficulties (like keeping the caches coherent). That's also why heavily multi-socketed (4 and up) motherboards won't be coming during the first year of Core i7's life. They probably have to fix all the fine details before that. As usual, expect a change in socket format and a new iteration of Core i7 not quite exactly compatible with the previous one.

        On AMD's side, currently sold Opterons are already adapted for configurations of 4 or more sockets. (As explained by other /.ers, the 8000 series runs a coherency protocol on all 3 HT interconnects, which should be enough for 4 or more sockets.)
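        The core-to-memory locality described above is exposed to user space as CPU affinity. A minimal, Linux-only sketch in Python of the mechanism a NUMA-aware scheduler builds on (pinning to CPU 0 is an arbitrary choice, purely for illustration):

        ```python
        import os

        # Ask the kernel which CPUs this process may currently run on
        # (Linux-specific; os.sched_getaffinity is unavailable elsewhere).
        allowed = os.sched_getaffinity(0)
        print("schedulable CPUs:", sorted(allowed))

        # Pin the process to one CPU, as a NUMA-aware scheduler
        # effectively does when keeping a thread near its memory node.
        os.sched_setaffinity(0, {0})
        assert os.sched_getaffinity(0) == {0}

        # Restore the original mask so the sketch has no lasting effect.
        os.sched_setaffinity(0, allowed)
        ```

        On real NUMA boxes admins usually reach for `numactl --cpunodebind`/`--membind` rather than raw affinity calls, but the kernel mechanism underneath is the same mask.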

        • by afidel ( 530433 )
          Windows 2003 is NUMA aware. My problem is the Oracle NUMA patches are not very well tested and hence less stable. We quickly backed out the NUMA commands from our 10GR2 systems after experiencing a number of issues. Perhaps Oracle's code has gotten better in the last ~2 years, but for us the slight performance increase wasn't worth the headaches.
          • by Bert64 ( 520050 )

            Oracle isn't NUMA-aware? Surely it depends more on the OS, though...
            NUMA systems have been around for many years; SGI had such systems over 10 years ago... You'd have thought Oracle would be able to take advantage of them...
            Also, how much does the application need to be NUMA-aware? Shouldn't the OS take care of most of it?

            Linux has a big head start on windows when it comes to NUMA, due to having support from other architectures, and having 64bit AMD support for a lot longer than windows.

            • by afidel ( 530433 )
              Oracle IS NUMA aware, but at least in early 2006 the code extensions that made it NUMA aware were immature and had significant issues in our environment. I'm not sure if that was code that was ported to Windows from the UNIX codebase or new code that was Windows specific, all I know is it caused us issues and the performance increase on our DL585 G1 was in the low double digits so the tradeoff was a non-starter. A financial system that is down is infinitely slower than even the slowest of systems.
    • by Bert64 ( 520050 )

      Yes, comparing a Phenom 9500 (2.3GHz quad core) to a Q6600 (2.4GHz quad core) running Gentoo Linux, compiled with relatively conservative CFLAGS (-O2) with the CPU type set appropriately, the AMD chip performs very well and beats the Intel chip in some benchmarks...

  • AMD has lower-cost chipsets and good onboard video.

    Also, you can use any chipset in a 2P or 4P+ system, unlike Intel, where you are stuck with Nvidia chipsets. Back when the Intel Mac Pro first came out, AMD 2P systems had more PCI-E lanes and a better I/O setup. Also, Intel does not have low-to-mid-range Core i7 chipsets / motherboards; they only have high-mid and up.

    The new Phenom II will work with today's AMD chipsets / boards, unlike Intel. Core i7 is fast but $250 - $300+ for a MB that is over ki

  • The codenames AMD is using for the Opteron line are all cities that host, or used to host, Formula 1 Grands Prix.

    Maranello, São Paulo, Magny-Cours...

    Nice to see my own city (São Paulo) mentioned in the roadmap. Gotta start saving $$$ to buy me one of those "Sao Paulo" chips when they get released.

Each new user of a new system uncovers a new class of bugs. -- Kernighan