
AMD Launches First 45nm Shanghai CPUs

Posted by ScuttleMonkey
from the steady-progress dept.
arcticstoat writes "The wait for AMD's next-gen CPUs is finally over, as the company has now officially launched its first 45nm 'Shanghai' Opteron chips for servers and workstations. 'AMD's move to a 45nm process relies on immersion lithography, where a refractive fluid fills the gap between the lens and the wafer, which AMD says will result in 'dramatic performance and performance-per-watt gains.' It's also enabled AMD to increase the maximum clock speed of the Opterons from 2.3GHz with the Barcelona core to 2.7GHz with the Shanghai core. Shanghai chips also feature more cache than their predecessors, with 6MB of Level 3 cache bumping the total up to 8MB, and the chips share the same cache architecture as Barcelona CPUs, with a shared pool of Level 3 cache and an individual allocation of Level 2 cache for each core.'"
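The cache and clock figures in the summary can be sanity-checked with a quick calculation. Assuming the quad-core Opteron design, the per-core Level 2 size below is derived from the quoted totals (8MB overall minus the 6MB L3 pool), not stated in the story:

```python
# Cache arithmetic for a quad-core Shanghai Opteron, per the summary:
# a shared 6MB L3 pool plus a private L2 slice per core, 8MB in total.
cores = 4
l3_shared_mb = 6.0
total_cache_mb = 8.0

l2_per_core_mb = (total_cache_mb - l3_shared_mb) / cores
print(f"L2 per core: {l2_per_core_mb * 1024:.0f}KB")  # 512KB per core

# Clock-speed headroom from the 65nm Barcelona to the 45nm Shanghai core.
barcelona_ghz, shanghai_ghz = 2.3, 2.7
print(f"Clock increase: {(shanghai_ghz / barcelona_ghz - 1) * 100:.1f}%")  # 17.4%
```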
This discussion has been archived. No new comments can be posted.

  • Which to buy now? (Score:3, Interesting)

    by Ed Avis (5917) <ed@membled.com> on Thursday November 13, 2008 @11:09AM (#25746801) Homepage

    Does this mean that AMD chips are now competitive on price-performance with Intel's? I mean for a fairly high-end desktop or server; obviously different considerations apply in the embedded or netbook market.

  • Re:Which to buy now? (Score:1, Interesting)

    by Anonymous Coward on Thursday November 13, 2008 @11:13AM (#25746865)

    Depends on what you're doing. We need both heaps of memory bandwidth and good floating point, so Intel wasn't competitive with AMD (though this might change now that Intel has released Nehalem, aka Core i7).

    But Barcelona's TLB bug certainly blotted their copybook with us. :-(

  • Re:Which to buy now? (Score:2, Interesting)

    by Anonymous Coward on Thursday November 13, 2008 @11:25AM (#25747023)

    Ed... a fairly high-end desktop isn't anything close to comparable to a high-end server. AMD chips have been superior for building high-end servers for some time now, thanks to bandwidth considerations.

  • ...and so? (Score:1, Interesting)

    by tchernobog (752560) on Thursday November 13, 2008 @11:27AM (#25747037)

    AMD says will result in 'dramatic performance and performance-per-watt gains.'

    Okay, that's marketing talk. I think that at virtually *ANY* presentation of a new CPU in the last twenty years, someone has said that.

    Me, I just have a six-year-old P4 laptop which, compared to today's new models with a Core Duo, isn't much different.

    That's because there are other bottlenecks: hard drive speed, RAM, etc.

    So, why upgrade, for a desktop user? Even for mid-sized business servers, we get by with two eight-year-old Sun machines which are more than adequate for keeping up all the services we need internally. We never see CPU spikes.

    Sometimes I just wonder if all this isn't just a grab at customers' pockets.

  • Making Me Feel Old (Score:5, Interesting)

    by withoutfeathers (743004) on Thursday November 13, 2008 @11:39AM (#25747219)
    The first computer I ever worked on (as a data entry operator in the mid '70s) was an IBM S/360 mainframe with 64KB of "main" (physical) memory.

    The first computer that I was a primary operator on, a S/360-135 plug-compatible 2Pi, had 768KB when it was delivered and was eventually bumped to 1.25MB shortly before I moved on to programming.

    The computer upon which I wrote my first professional (COBOL) program was an IBM 3033 with a (for then) eye-popping 4MB of physical memory.

    The first computer I ever owned was an RCA COSMAC with 4KB of memory.

    The first DIY computer I ever assembled completely from parts (about 15 years ago) had 4MB of interleaved DRAM and a 256KB SRAM cache, and was considered somewhat amazing by everyone who saw how fast it ran OS/2. I eventually boosted it to 16MB.

    Now you get 8MB of on die cache with your four cores... And I still can't get a decent flying car.
  • About Time (Score:3, Interesting)

    by ShakaUVM (157947) on Thursday November 13, 2008 @11:40AM (#25747239) Homepage Journal

    It's about time... I mean, seriously. The CPUs coming out of AMD have stagnated in the last few years. The Phenoms are decent enough, I guess, if you have apps that can take advantage of the three or four cores, but they clock slower than comparable X2s, and two cores is still the optimal point on the diminishing-returns curve for adding more cores.

    I remember the 90s and early 00s when you were basically required to upgrade your processor every year or two or be hopelessly behind when the latest game came out. Now, I'm running the same machine I was back in '04, except with a new video card and an upgrade from a 3800+ (2.4GHz) to a 4800+ X2 (2.6GHz) a year and a half ago.

    I got curious how far behind I was these days, and found that the 4800+ X2 still holds up about as well as anything AMD produces, coming in only about 30% below the top chips AMD makes right now.

    By contrast, Intel has the E8500 which is not only significantly faster, but is heavily, heavily OCable as well. I think Moore's Law has finally broken down for AMD.

  • Re:Which to buy now? (Score:1, Interesting)

    by Anonymous Coward on Thursday November 13, 2008 @11:45AM (#25747295)

    I would go with AMD for servers because of the lower power/heat for the complete system. FB-DIMMs run hot, as do Intel's southbridges, not to mention that Intel understates how much heat its CPUs will generate compared to AMD. Plus, if you go multi-CPU, AMD has the more mature way of handling it efficiently.

  • Re:Oh please. (Score:2, Interesting)

    by mfh (56) on Thursday November 13, 2008 @11:54AM (#25747453) Homepage Journal

    The two companies take turns one-upping each other for the bleeding edge, but every time (10 years running) I've specced out a mid-range (home gamer, single CPU motherboard) to low-end (grandma's email/photo machine) machine, AMD's been the way to go. It's a lot like trying to decide which company's video boards to pick if you're trying to make a game machine without breaking the bank.

    I have to agree. When the quad cores shipped, I tested them and compared the speed per dollar. AMD was half the price for the performance. If you factor in a decent graphics card, and a nice power supply to run the show, then you are ahead of the game.

    Value is what I look for when I buy things, not bleeding edge performance. Money isn't a factor; I could easily spend to get the best available, but I'd have too much remorse wasting an extra thousand bucks on a slight increase. It's not worthwhile to me, considering the frame rates I get in games on my AMD system are good enough for 25-man raiding in WoW, or world PvP.

  • Re:Oh please. (Score:3, Interesting)

    by Lonewolf666 (259450) on Thursday November 13, 2008 @12:00PM (#25747545)

    Starting with some of GP's requirements (game-capable PC but at a reasonable price) and wanting to use ECC RAM for reliability I ended up buying an AMD last year. It is an AMD Athlon64 X2 EE 4600, a dual core with 2x2.4 GHz, not overclocked. In practice, this machine is fast enough, especially considering that I don't run the very latest games.
    The deciding factor in terms of Intel vs. AMD was that ECC-capable mainboards for Intel are expensive. The cheapest C2D would not have been much more expensive than the Athlon (and a tad faster), but on the mainboard side the difference was 100 Euros or more.

  • Re:...and so? (Score:5, Interesting)

    by pinkocommie (696223) on Thursday November 13, 2008 @12:01PM (#25747557)
    Don't know about you, but try playing back a 1080p H.264 video file and watch it choke to death, and then some.
  • by default luser (529332) on Thursday November 13, 2008 @12:01PM (#25747559) Journal

    Just an off-the-cuff calculation on my part shows power consumption dropped over 50% compared to Barcelona, clock-for-clock.

    This is good news, because when AMD moved from 90nm to 65nm, their leakage was so bad that power consumption only dropped around 10% clock-for-clock. Combine this with a better cache architecture (larger, and faster), and AMD may have a winner in the server space.

    I'm not sure if they're going to take back the desktop anytime soon. Intel doesn't have the FB-DIMM downside on desktop systems, and I'm fairly sure that Shanghai didn't add major microarchitecture changes, so a quad-core Core 2, let alone an i7, should continue to dominate the desktop.

    However, it is nice to know that the market once again will have a choice in processors. AMD's 65nm offerings were spanked in terms of performance and power consumption by Intel's lineup, but Shanghai will at least compete on the power front, if not the performance front. We shall see what happens when AMD releases their desktop version.
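The clock-for-clock comparison described above can be sketched as a watts-per-GHz ratio. The wattages below are placeholder assumptions for illustration, not vendor figures or the poster's actual numbers:

```python
# Clock-for-clock (watts-per-GHz) power comparison of two CPUs.
# The wattages are illustrative placeholders, not measured or vendor figures.
def watts_per_ghz(watts: float, ghz: float) -> float:
    return watts / ghz

barcelona = watts_per_ghz(95.0, 2.3)  # hypothetical 95W at 2.3GHz
shanghai = watts_per_ghz(55.0, 2.7)   # hypothetical 55W at 2.7GHz

drop = (1 - shanghai / barcelona) * 100
print(f"Clock-for-clock power drop: {drop:.0f}%")  # 51% for these numbers
```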

  • Re:Congratulations! (Score:3, Interesting)

    by postbigbang (761081) on Thursday November 13, 2008 @12:35PM (#25747975)

    You haven't examined SPECjbb, then, have you? It's a Java-based business transaction kit that seems to have quite a bit of both fairness and repeatability; it's not easily manipulated, like others I've seen. After several thousand runs with it, I find it pretty reasonable for comparing whole-system performance, as opposed to motherboard/subsystem/peripheral benchmarks, which shed light only on one specific characteristic of a machine or are operating-system-specific. Admittedly, it doesn't do things with GPUs, network I/O, and the like.

    But to call it a lie is specious.

  • Re:...and so? (Score:4, Interesting)

    by BLKMGK (34057) <morejunk4me@hotm[ ].com ['ail' in gap]> on Thursday November 13, 2008 @01:07PM (#25748519) Homepage Journal

    So you're advocating Windows? I'd love to do H.264 decoding on the GPU in Linux; which driver and which video card will do that for me?

    I use XBMC, and we're stuck with using the CPU for now, but at least we can use both cores. So far the AMD CPUs haven't fared well with that software for full 1080p H.264 decoding either.

  • Re:Oh please. (Score:3, Interesting)

    by billcopc (196330) <vrillco@yahoo.com> on Thursday November 13, 2008 @01:31PM (#25748903) Homepage

    You might be surprised at the performance leap from a new CPU. I went from an X2 4800 with an 8800GTS, to an overclocked C2Q. I noticed a tremendous improvement in almost all games, despite running on the same GPU. For one, it eliminated any and all stuttering, even in older games. I'd say it pumped a good 30% more fps into my main games like WoW, LoTR, and the shooters of course.

    Today's graphics cards are so ridiculously fast, they're very commonly limited by the CPU. It's like the 3Dfx days all over again!

  • Re:Oh please. (Score:3, Interesting)

    by level_headed_midwest (888889) on Thursday November 13, 2008 @01:55PM (#25749303)

    I have had a similar experience with machines I have had and built/administered:

    1. K6-2/500 on a VIA MVP4 chipset: no problems at all
    2. Celeron 900 on an i810/ICH1 chipset: no problems at all
    3. P4-M 2.2 on an i845MP/ICH3M chipset: integrated Intel PRO/100 NIC died
    4. Duron 1600 on an NForce 2 Ultra 400 chipset: no problems at all
    5. X2 4200+ on an NForce 4 SLI chipset: no problems at all
    6. Dual 2.8 Xeon Irwindale on an E7320/6300 chipset: integrated Intel IDE controller was recognized intermittently
    7. Pentium D 820 on an i945G/ICH7 chipset: southbridge PCIe controller went AWOL, knocking out the integrated NIC
    8. C2D U7500 on an i945GM/ICH7M chipset: no problems at all
    9. Mobile Sempron 3600+ (65 nm) on an AMD M690T/SB600 chipset: no problems at all

    The AMD units have been good to me, while the Intel ones had the problems, particularly with the southbridges.

  • Re:...and so? (Score:5, Interesting)

    by Belial6 (794905) on Thursday November 13, 2008 @02:14PM (#25749569)
    Sometimes it's cheaper to buy the new stuff than not to. I bought a new Athlon X2 5400 a little over a year ago to replace my Athlon 1.2. I plugged the whole system into a Kill-A-Watt to see the power draw, and calculated that it would take 10 months for the energy savings to completely pay for the upgrade. So my faster computer is now not only free, but actually saving me money. Because of this, I have also upgraded my wife's computer, my kid's computer, and my server. My wife's computer and my server should break even over the next month or so, and my kid's computer will have paid itself off in June.

    I also downgraded my speakers. My old Klipsch 5.1 surround sound speakers sounded great, but they drew something like 45 watts. I replaced them with a generic set of 2.1 speakers that don't sound as good, but are more than adequate for the purpose, and I am now only drawing 2 watts.
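The payback arithmetic in the comment above can be sketched like this. Every number here is an illustrative assumption, not the poster's actual Kill-A-Watt readings, electricity rate, or upgrade cost:

```python
# Upgrade payback period from power savings alone.
# All figures are illustrative assumptions, not measured values.
old_watts = 180.0      # assumed draw of the old Athlon 1.2 system
new_watts = 90.0       # assumed draw of the Athlon X2 5400 system
rate_per_kwh = 0.15    # assumed electricity price, $/kWh
hours_per_day = 24     # machine runs around the clock
upgrade_cost = 150.0   # assumed cost of the CPU/board/RAM upgrade

kwh_saved_per_month = (old_watts - new_watts) / 1000 * hours_per_day * 30
savings_per_month = kwh_saved_per_month * rate_per_kwh
payback_months = upgrade_cost / savings_per_month
print(f"Payback period: {payback_months:.1f} months")
```

With these placeholder numbers the upgrade pays for itself in about 15 months; plug in real Kill-A-Watt readings and your local rate to get an actual figure.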
  • by link-error (143838) on Thursday November 13, 2008 @04:26PM (#25751953)

    When hacking my I-Openers, I learned you can pull the BIOS chip from a running computer, put in the bad one, then just re-flash it. You just have to be careful not to short the pins when putting it in. Worked great.

    -Mike

  • by waferbuster (580266) on Friday November 14, 2008 @12:35AM (#25757303)

    The higher numerical aperture lithography tools are definitely helping to make narrower lines (hence faster transistors) for Intel and AMD alike. However, the biggest advantage Intel has in the chip-making business is the use of hafnium [intel.com] for forming the gate of the transistor. As Gordon Moore put it, "It's the biggest change in transistor technology in the past 40 years."

    The rest of the industry is feverishly trying to match/duplicate the hafnium process improvements which Intel discovered. Unless there's some equivalent breakthrough at AMD (which is highly unlikely), Intel will retain the crown for performance.

    Disclaimer: I work for Intel, the above is my opinion and I am not a spokesman for Intel. Heck, I'm just a lowly peon. I'm not even authorized to tell you the time of day!
