AMD's Six-Core Istanbul Opterons

EconolineCrush writes "AMD's latest 'Istanbul' Opterons add two cores per socket, for a grand total of six. Despite the extra cores, these new chips reside within the same power envelope as existing quad-core Opterons, and they're drop-in compatible with current systems. The Tech Report has an in-depth review of the new chips, comparing their performance and power efficiency with those of Intel's Nehalem-based Xeons. Istanbul fares surprisingly well, particularly when one considers its performance-power ratio with highly parallelized workloads."

  • by smittyoneeach ( 243267 ) * on Tuesday June 02, 2009 @07:16AM (#28180349) Homepage Journal
    Istanbul runs your shells
    Through shaves as tight as Dardanelles.
    Use Opteron and the gallant foamy,
    And thus avoid Gallipoli [wikipedia.org].
    Burma Shave
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday June 02, 2009 @08:35AM (#28181147) Homepage Journal

      Istanbul was Constantinople
      Now it's Istanbul, not Constantinople
      So if you were waiting for a core called Constantinople
      It's been released as Istanbul.

      • It all went to hellespont [wikipedia.org].
      • Uhhharg. I thought that song was hilarious the first time I heard it. But it's turned into a mind-worm that goes round and round every time I hear "Istanbul". And I hear it a lot because I'm writing documentation for an Istanbul-based server.

        Just for that, I'm going to force you to watch this really dumb video [youtube.com]. You are required to drink a V8 every time you spot a geographical blooper.

        Actually, a lot of Greeks find this song extremely unfunny, because the name change reflects the way Greek communities have b

        • Istanbul processor with six c (Constantinople) cores that utilizes the HS (Hagia Sophia) memory controller which breaks the OP (Orthodox pathways) with a bus known to bomb on the cross over to the OIC (Olympic image controller).
    • by Lumpy ( 12016 )

      But wasn't Istanbul called Constantinople?

      And what do the Turks think about that?

    • My pappy said, "Son, you're gonna' drive me to drinkin' If you don't stop drivin' that Hot Rod Lincoln"....
  • by OzPeter ( 195038 ) on Tuesday June 02, 2009 @07:20AM (#28180389)
    Or isn't that anyone's business but the Turks?
  • by Anonymous Coward

    Over 9000!!!!1

  • get a couple of these to test? Sounds like we could get some pretty good number-crunching results.

  • That's nothing compared to 14 cores.
    • From http://www.sun.com/processors/UltraSPARC-T2/features.xml [sun.com]

      "Features and Benefits
      With eight cores and 64 threads on one chip, integrated 10 GbE networking, crypto, and PCI-Express expansion, you have the jump on anything else on the market. The opportunities for system consolidation and virtualization are here like never before. Consumes less power per core and thread than any processor in its class - without compromising on performance. The UltraSPARC T2 processor gives OEMs a massively threaded, multi-c

      • Hyperthreading helps you avoid the cost of context switches when multithreading, but a) the cost of context switches is remarkably lower these days due to register renaming and other tricks and b) only on Unix do you care anyway; traditionally we spawn lots of processes on Unix and lots of threads on Windows. It's not necessarily the right way to do things, and the Windows thread-heavy model is paying off now that multicore processors have brought multiprocessing to the masses.

        • Hyperthreading helps you avoid the cost of context switches when multithreading,

          I had the impression that it's not about context switches. In the case of the Sparc T2, they actually try to execute several threads in parallel. If one thread stalls on memory access or I/O, the CPU picks some other thread to execute.

          I can't say overall, but for well-optimized C/C++ programs this is a disaster. My employer did benchmarks on the Sparc T2. With HT enabled, the system couldn't deliver stable latencies: performance figures were scattered all over the graphs. With HT disabled it performed just like other Sparcs,
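Neither comment shows how SMT was controlled for, but one common way to get repeatable latency numbers on any SMT machine (a T2 or an HT-enabled Xeon alike) is to pin the hot thread to a chosen logical CPU so it never shares a physical core with a sibling thread. A minimal Linux/glibc-only sketch; the CPU number is a placeholder, and the actual sibling layout is something you would check under /sys on the box in question:

```c
/* Illustrative only: pin the latency-critical thread to one logical CPU so
 * it never shares a physical core with an SMT/HT sibling. The CPU number is
 * a placeholder; on Linux the sibling map lives in
 * /sys/devices/system/cpu/cpuN/topology/thread_siblings_list. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    int rc = pthread_setaffinity_np(pthread_self(), sizeof set, &set);
    if (rc != 0)
        fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(rc));
}

int main(void)
{
    pin_to_cpu(0);   /* placeholder: first hardware thread of the first core */
    /* ... run the latency-sensitive loop here and record timings ... */
    printf("pinned to CPU 0; workload would run here\n");
    return 0;
}
```

Build with something like `gcc -O2 -pthread pin.c`; disabling SMT in firmware, as described above, is the blunter version of the same idea.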

    • by scotch ( 102596 )

      That's nothing compared to 14 cores.

      You are bad at math.

  • Does the Istanbul have Extended Page Table support like Nehalem does? This is supposed to give a big performance boost to virtual machines, though I haven't seen any hard numbers. Any info?
    • Re:EPT? (Score:4, Informative)

      by KonoWatakushi ( 910213 ) on Tuesday June 02, 2009 @09:20AM (#28181767)

      AMD has supported nested page tables since the Shanghai series processors.

      • by JF-AMD ( 1568173 )
        And we also support it in VMware ESX 3.5. I believe Intel only supports it with VMware 4.0 (vSphere). Upgrading the hypervisor is not on the radar for a lot of customers.
        • We must not have the same customers...all my clients want it and I (we) keep telling them to wait 4-6 months. Ug.
          • by JF-AMD ( 1568173 )
            Wow, I have never come across that. Almost universally, the customers I talk to are loath to change the hypervisor because they have it working across so many different platforms that they don't want to qual version 4.0 across all of them.
  • by iamdrscience ( 541136 ) on Tuesday June 02, 2009 @07:31AM (#28180477) Homepage
    You think it's crazy? It is crazy. But I don't give a shit. From now on, we're the ones who have the edge in the multi-core game. What part of this don't you understand? If two cores is good, and four cores is better, obviously six cores would make us the best fucking processor that ever existed. Comprende? We didn't claw our way to the top of the CPU game by clinging to the two-core industry standard. We got here by taking chances. Well, six cores is the biggest chance of all.

    Here's the report from Engineering. Someone put it in the bathroom: I want to wipe my ass with it. They don't tell me what to invent; I tell them. And I'm telling them to stick two more cores in there. I don't care how. Make the cores so thin they're invisible. I don't care if they have to cram the sixth core in perpendicular to the other five, just do it!
    • by Gldm ( 600518 )
      And suddenly my sig is relevant again. ;)
  • Harnessing multi-CPU machines with these installed is going to be.... Interesting.

    • by Nursie ( 632944 )

      Not really.

      Threaded programming has been around for many years now, and multi-process computing has been around for decades. If you can't utilise multiple cores by now you're way behind the curve.

      That said, I will watch the progress of these languages designed specifically for the task, though I don't see them unseating C/C++/Java any time soon.

      • Yeah, but do we properly use them? Today, the OS uses the cores in a pretty stupid way: you end up with data structures being shared by cores, so you need to lock them (expensive) and copy data between cores (expensive).

        Once the operating systems handle them well, and application programmers are more aware of these issues, things will be much better in multi-core-land.
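To make the locking cost concrete, here is a small illustrative sketch (not from either poster) contrasting a single mutex-protected counter, which every core has to fight over, with padded per-thread counters that are only summed at the end. The thread and iteration counts are arbitrary:

```c
/* Illustrative only: a shared, lock-protected counter bounces the lock and
 * its cache line between cores on every increment, while padded per-thread
 * counters touch private memory and are combined once at the end.
 * Build with something like: gcc -O2 -pthread counters.c */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000L

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_count = 0;

struct padded { long v; char pad[56]; };   /* assumes 64-byte cache lines */

static void *locked_worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);         /* contended by every thread */
        shared_count++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *private_worker(void *arg)
{
    struct padded *c = arg;                /* this thread's own counter */
    for (long i = 0; i < ITERS; i++)
        c->v++;
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    struct padded local[NTHREADS] = {{0}};
    long total = 0;

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, locked_worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, private_worker, &local[i]);
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        total += local[i].v;
    }

    printf("locked: %ld  private: %ld\n", shared_count, total);
    return 0;
}
```

Timing the two phases separately (e.g. with clock_gettime) on a multi-core box typically shows the locked version running far slower, which is the "expensive" part being described above.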

        • It's the applications.

          Actually, you might have a point -- I honestly don't know how well OS kernels are implemented for this sort of thing. On the other hand, Linux has been ported to machines with more cores (and CPUs!) than that before. Worst case, the kernel-level stuff won't receive a boost -- your filesystem won't go much faster -- but how much of your CPU time is currently spent there?

          No, most CPU time is spent in applications, as it should be. And that's where you have the issues you describe -- eith

      • That said, I will watch the progress of these languages designed specifically for the task, though I don't see them unseating C/C++/Java any time soon.

        I think I prefer languages matched primarily to the problem the program is solving, rather than languages matched primarily to the hardware used to run the program (primarily; some degree of the latter is necessary, for example if your hardware is a GPU or an FPGA). ;)

    • No. (Score:5, Insightful)

      by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Tuesday June 02, 2009 @07:54AM (#28180723) Homepage Journal

      Harnessing multi-CPU machines with these installed is going to be.... Interesting.

      No more interesting than existing many-core machines.

      Seriously, having a couple dozen or more cores is nothing new.

      • Think of the shared memory bus. Won't somebody please think of the shared memory bus!?! It's going to get clogged with so many cores.
        • by Gldm ( 600518 )
          HyperTransport is not a big truck! It's a series of tunnels!
          • by cas2000 ( 148703 )

            not 'tunnels'. the word you're looking for is 'tubes'.

            as in the famous revelation about teh internet: "My god, it's full of tubes!"

            • by Gldm ( 600518 )
              I know the Ted Stevens quote. HT is commonly organized as tunnels though, hence the pun.
          • by smithmc ( 451373 ) *

            HyperTransport is not a big truck! It's a series of tunnels!

            Too bad it's not a station wagon full of tapes.

    • by LWATCDR ( 28044 )

      Nope. This is a server CPU. Things like database servers already scale well.
      Virtualization by definition will scale well.
      Or to put it in simple terms:
      You know that old four-socket server with 8 cores total? You can now replace it with a two-socket machine with 12 cores total.
      Or you know that four-socket, 16-core server? Well, you can now upgrade that to a 24-core server.

  • I'll be finally able to run Crysis at a decent framerate.
    • Re: (Score:3, Insightful)

      by Kotoku ( 1531373 )

      I'll be finally able to run Crysis at a decent framerate.

      Just in time to be behind the curve for Crysis 2!

  • by IYagami ( 136831 ) on Tuesday June 02, 2009 @08:24AM (#28181029)

    http://it.anandtech.com/IT/showdoc.aspx?i=3571 [anandtech.com]

    Includes information about virtualization performance: http://it.anandtech.com/IT/showdoc.aspx?i=3571&p=9 [anandtech.com]

    Conclusion:
    "The six-core Opteron is not an alternative to the mighty Xeons in every application. The Xeons are more versatile thanks to the higher clockspeeds, higher IPC, Hyperthreading and higher bandwidth to memory. The Xeon 55xx series is clearly the better choice in OLTP, ERP, webserving, rendering and there is little doubt that it will continue to reign in the bandwidth intensive HPC workloads. There are two types of applications where we feel that the AMD six-core deserves your attention: decision support databases and virtualization."

    • Re: (Score:3, Interesting)

      by Vancorps ( 746090 )

      I believe Anandtech is showing its bias here. I had heard great things about the Xeon 55xx series CPUs so I went and bought a couple of servers. Specifically one web server and one database server. I also had Opteron-based servers performing the same tasks. My webservers are load balanced using a hardware load balancer. During January I was under an extremely heavy load scenario. I ended up having to weight more traffic to the Opteron servers because the Xeons were choking under 100% CPU load. I barely squ

      • by Anonymous Coward

        Wow, you mean for _just you_ your old Opterons perform better than a chip that is quite superior to it in every way including memory bandwidth?

        I think _someone's_ showing their bias here...

        • Who said anything about old Opterons? The chips are not superior in every way so thanks for playing.
    • by MobyDisk ( 75490 )

      Are they seriously touting hyperthreading as a benefit? It's a dubious-enough feature, but with 4 cores, it really stretches believability. I dare someone to find the one application that benefits from seeing 2 additional fake CPUs when there are already 4 real ones.

      • by JF-AMD ( 1568173 )
        When Nehalem launched, there were a few benchmarks that showed negative performance for SMT. Just like the good old days with SQL Server and hyperthreading.
      • Re: (Score:3, Interesting)

        Hyperthreading shows you eight fake cores which map to four real cores. I benchmarked it extensively. Computationally intensive routines with a small memory footprint can gain up to 20%. Bandwidth or memory intensive routines can lose up to 50%. In the extreme case, 8 threads on virtual cores can be half the speed of 4 threads on 4 real cores on a Core i7. Keep in mind, this is on a crazy application that generates lots of data.

        If your algorithm is designed to break up the problem to exploit the ca
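For anyone who wants to reproduce that kind of comparison, the methodology can be as simple as timing one kernel at "one thread per physical core" versus "one thread per logical core". A rough OpenMP sketch; the thread counts, iteration count, and compile command are all illustrative:

```c
/* Illustrative scaling probe: time the same compute-bound loop at two thread
 * counts (say, one per physical core vs one per logical/HT core).
 * Build with something like: gcc -O2 -fopenmp probe.c */
#include <omp.h>
#include <stdio.h>

static double work(long iters)
{
    double x = 0.0;
    #pragma omp parallel for reduction(+:x)
    for (long i = 0; i < iters; i++)
        x += (double)i * 1e-9;             /* tiny footprint: registers/L1 only */
    return x;
}

int main(void)
{
    const long iters = 200000000L;         /* arbitrary */
    const int counts[] = { 4, 8 };         /* placeholders: physical vs logical */

    for (int k = 0; k < 2; k++) {
        omp_set_num_threads(counts[k]);
        double t0 = omp_get_wtime();
        double r  = work(iters);
        double t1 = omp_get_wtime();
        printf("%d threads: %.3f s (checksum %.3f)\n", counts[k], t1 - t0, r);
    }
    return 0;
}
```

Swapping the register-bound loop for a memory-streaming one is how you would expose the "loses up to 50%" side of the numbers quoted above.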
    • I'm most surprised that AMD's extra two cores didn't give it an advantage in many of the server applications. I know that the Xeons are 4-way superscalar (instructions running in the pipeline in each core) versus AMD's 3-way, so as the article said it's only 18 instructions per clock for AMD versus 16 for Intel, instead of 4 versus 3. But this is only for the shorter instructions. 8-core Xeons are expected in autumn, so any tenuous lead AMD has anywhere in performance is going to disappear fairly soon. But never mind,
  • by Gazzonyx ( 982402 ) <scott.lovenberg@nOspam.gmail.com> on Tuesday June 02, 2009 @08:39AM (#28181193)

    [...] Not only that, but it's hitting the market early. AMD had originally planned to introduce this product in the October time frame, but the first spin of Istanbul silicon came back solid, so the firm pulled the launch forward into June. Even with the accelerated schedule, of course, Istanbul comes not a moment too soon, now that Nehalem Xeons are out in the wild.

    Does anyone else think that this seems a little convenient? I'm really hoping that they didn't just tone down the testing to make it to market. I'm thinking they'll go to market and then quickly release a new revision to fix the corners that they cut the first time around. I hope I'm wrong, but AMD has been slipping lately.

    Any EEs out there know the process well enough to confirm or deny my suspicions?

    • by Bigby ( 659157 )

      I think AMD learned from their last mishap. It nearly destroyed the company.

    • Re: (Score:3, Interesting)

      by Narishma ( 822073 )
      From an interview with bit-tech [bit-tech.net]:

      bit-tech: Has the launch of Istanbul been brought forward in response to Nehalem EX's updated launch date?

      Patler: Istanbul being pulled in by five months is a result of excellent execution by our design and manufacturing teams, who were able to take it from first stepping of silicon to production. Also, the fact that Istanbul is based on our existing socket infrastructure enables our OEMs to save time on validation cycles that are normally associated with a new processor that delivers the performance Istanbul can.

    • by dpilot ( 134227 ) on Tuesday June 02, 2009 @11:43AM (#28184101) Homepage Journal

      I'm in the silicon business. Not CPU, but still silicon.

      It sounds as if AMD budgeted time for another pass at the design, and turned out not to need it. The amount of time they pulled out of the schedule looks more like a silicon pass than short-cutting testing and validation. Adding that extra pass, and making sure it was scheduled is probably a result of having been so badly burned last time, but that's good. You can always be a hero by doing better than plan.

    • by JF-AMD ( 1568173 )
      Actually, when we laid out the project, we planned for a major spin and some minor tweaks before we would have production silicon. The first silicon came out strong enough that our partners said "let's take it to market." No corners were cut. When you start with the solid Shanghai silicon it makes it a lot easier.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Actually, testing was increased for 6Core, as our 4Core tests no longer stressed a system with 50% more cores the same way.

      What changed was Process. 6Core uses all of the 'good' tech from Shanghai, then implements a few things differently (rev upgrades, etc). The reason 6Core launched so quickly is that we learned all of our lessons on the initial quad-core fiasco. We did things 'right' this time, and the result is... a launch date that is nearly 12 months ahead of the initial schedule (which was set 2yrs ago

  • Yeah, but will it run a hackintosh?

    • How's this offtopic? It's a legitimate question. I have an older AMD that will not run the hackintosh software. I like AMD products - they _seem_ to be faster - but I'm not spending money on this, as nice as it may be, if it won't run what I want.

      • by Pulzar ( 81031 )

        This is a (very expensive) server CPU... I don't think you're going to spend money on this to run hackintosh either way.

  • How many of your favorite apps have already been re-written to take advantage of the additional cores?
    How many of your favorite compilers have already been re-designed to generate code that uses the additional cores?
    How many of your favorite bosses have already been re-wired to fuss about the additional cores?

    • My favourite apps are written in Fortran, so it only takes a nice compiler to generate multiprocessor code from it. The first time I did something like that was in 2001, so the compilers have certainly been around for a while.
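The workflow described above for Fortran has a close analogue in C with OpenMP: a single directive and the compiler/runtime spread the loop across whatever cores are present. A minimal sketch, with the array size and compile command as placeholders:

```c
/* Minimal compiler-parallelized loop: the single directive is the whole
 * "parallel programming" effort; the compiler and runtime split the
 * iterations across the available cores.
 * Build with something like: gcc -O2 -fopenmp saxpy.c */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)                        /* illustrative size */

int main(void)
{
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    if (!x || !y) return 1;

    for (long i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma omp parallel for               /* spread across cores */
    for (long i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %.1f, max threads = %d\n", y[0], omp_get_max_threads());
    free(x);
    free(y);
    return 0;
}
```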

  • Intel's next processor will go to Eleven!
    When asked by reporters why Eleven was chosen as the target number of cores, Nigel said:
    "It's six louder than AMD! I mean faster..."
  • Energizer corporation is now seeking to purchase AMD and fold it into the Schick lineup, in order to one-up Gillette's vibrating razor.
  • by markhahn ( 122033 ) on Tuesday June 02, 2009 @10:19AM (#28182865)

    the real news here is not the extra couple cores, but coherency snooping. this feature will make 4/8s machines far more attractive; it doesn't hurt that with 48 cores and 32 ddr3/1333 dimms, you have quite a monster. _and_ incidentally something that Intel can't currently answer.

    there's no question that nehalem has put a serious dent in the market, but Intel's going quite slow in rolling out higher-end products. yes, a nehalem socket delivers about 50% more bandwidth than a current opteron socket, but show me the 8s nehalem machines. nehalem-ex is coming, but how soon and at what price?

    one thing I haven't seen is any attempt to measure real SMP performance on new-gen chips. I don't mean something like Stream or VMs, where there is no real sharing inherent to the workload. how long does it take to exchange a _contended_ lock between cores (in the same socket vs remote)?

    finally, the real question is whether there is actual demand for more-core chips. I'm in HPC, and we always want more, and throw good money. but it has to be smart more - the 6-core core2, for instance, was just asinine because even 2c core2 is drastically memory-bandwidth-starved. nehalem-ex seems quite promising, but if it's cheaper to cluster dual-socket machines rather than pay the premium for 4s's, the 4s market will be stunted and less successful in a self-fulfilling way...
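For the contended-lock question raised above, one rough way to measure it is to pin two threads to two chosen logical CPUs, have them hammer a single mutex, and divide elapsed time by total acquisitions. The sketch below is Linux-specific, the CPU ids are placeholders, and a real test would repeat it for same-socket and cross-socket pairs:

```c
/* Rough sketch of a contended-lock probe: two threads pinned to two chosen
 * logical CPUs hammer one mutex; elapsed time / total acquisitions roughly
 * approximates the cost of bouncing the lock between those cores. The CPU
 * ids are placeholders; compare same-socket and cross-socket pairs.
 * Build with something like: gcc -O2 -pthread lockprobe.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define ITERS 2000000L

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared = 0;

static void *worker(void *p)
{
    int cpu = *(int *)p;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    int rc = pthread_setaffinity_np(pthread_self(), sizeof set, &set);
    if (rc != 0)
        fprintf(stderr, "affinity(%d): %s\n", cpu, strerror(rc));

    for (long i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        shared++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    int cpu_a = 0, cpu_b = 1;              /* placeholder logical CPU ids */
    pthread_t ta, tb;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&ta, NULL, worker, &cpu_a);
    pthread_create(&tb, NULL, worker, &cpu_b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%ld acquisitions in %.3f s (%.1f ns each, counter %ld)\n",
           2 * ITERS, secs, secs * 1e9 / (2 * ITERS), shared);
    return 0;
}
```

This only gives an average hand-off cost under full contention; running it across different core pairs is where the NUMA and coherency effects discussed above would show up.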

    • the real news here is not the extra couple cores, but coherency snooping. this feature will make 4/8s machines far more attractive; it doesn't hurt that with 48 cores and 32 ddr3/1333 dimms, you have quite a monster. _and_ incidentally something that Intel can't currently answer.

      That's actually 16 channels of DDR2/800, according to page 1 of TFA. I think it's supposed to be what comes out after this one that goes to 4xDDR3 per socket.

    • Scaling vertically hasn't been a good idea for a long time unless your app has trouble scaling horizontally. I'm in the process of creating a proposal with a back-end database cluster consisting of 4-6 nodes. Now I could achieve the same horsepower by buying an 8s or a 4s server and not have to buy as many machines, but 4s servers seem to be 3 times more expensive than 2s servers, so I can just buy more dual-processor servers and scale out to achieve the target number of connections served.

      Of cours

    • We run a lot of commerical OCR (as in millions of images), which is extremely processor-intensive, disk-intensive, memory-intensive, you name it. Our current main OCR server is a dual quad-core Xeon X5355 box with 16 GB of RAM. Our OCR software multithreads and the processor is no longer the bottleneck -- it's now disk I/O. While current drives continue to increase in size, their read / write speed is what keeps us from getting work done faster. It now takes several orders of magnitude longer to build,

      • by tomz16 ( 992375 )

        Maybe I'm missing something here... but if you process data in 2GB chunks, shouldn't your software just keep it all in memory? Once processing is complete, writing it out to one of those SSD arrays should take 10 seconds (which is nothing for 2 hours' worth of processing time!!!). If you don't have access to the source code, a quick fix is to just mount a RAM drive.

        Furthermore, OCR is stupidly easy to parallelize. The results of each page do not depend on previous pages. You can process each page independ

  • Tried pricing up a decent box for some heavy lifting; there's just so much complexity out there! It's hard to figure out where the bleeding edge is and where the most effective bang-for-the-buck zone is behind all the blood. 286, 386, 486: a man used to be able to tell where computers sat! And then all that Pentium bullshit started. I don't know what the fuck I'm looking at. I'm crossing my fingers and going with a Tom's Hardware recommended build list.

  • Anyone have any clue how Nehalem and these multicore AMD beasts would compare for video editing or render farm applications?
