AMD Hardware

AMD Dual-Core Performance Revealed 318

Timmus writes "In two separate articles, FiringSquad takes a look at the performance of AMD's dual-core Opteron CPU. The first article examines the performance of dual-core in scientific computing applications (MATLAB and LS-DYNA) as well as digital photography, while the second story focuses on the performance of dual-core Opteron paired against Intel's dual-core Pentium Extreme Edition in video encoding, Cinebench, and a few other applications. The performance improvements are pretty impressive in multi-threaded applications that take advantage of the technology."
This discussion has been archived. No new comments can be posted.

  • YESSSS (Score:4, Funny)

    by sehryan ( 412731 ) on Thursday April 21, 2005 @10:52AM (#12302500)
    I am running one right now, which is why I got first post!
  • full article mirrors (Score:5, Interesting)

    by winkydink ( 650484 ) * <sv.dude@gmail.com> on Thursday April 21, 2005 @10:53AM (#12302506) Homepage Journal
    here [networkmirror.com] and here [networkmirror.com].
  • OK then. (Score:4, Insightful)

    by millennial ( 830897 ) on Thursday April 21, 2005 @10:56AM (#12302532) Journal
    So we have:
    scientific computing applications (MATLAB and LS-DYNA)
    digital photography
    video encoding
    Cinebench and
    "a few other applications".
    So what about the average user? Will the college kid who just needs to type their papers, the parents who want to do their taxes, the gamers who want to play high-end stuff, etc. get any sort of boost from this?
    • by ahsile ( 187881 ) on Thursday April 21, 2005 @11:00AM (#12302579) Homepage Journal
      Notice the lack of an Athlon 64 FX version of AMD's dual-core strategy. For the time being, it's recognized that games are exclusively written for single-threaded operation and as such run better on single-threaded processors at elevated frequencies. Thus, the FX series marches on at 2.6GHz for now. ... so for games, keep to your single-core CPU.
      • by millennial ( 830897 ) on Thursday April 21, 2005 @11:02AM (#12302608) Journal
        I suppose that makes sense. The question this raises, though, is whether there are any games designed to work better on hyperthreaded/multiprocessor systems.
        • by phoenix.bam! ( 642635 ) on Thursday April 21, 2005 @11:09AM (#12302693)
          Definitely not on the hyperthreaded system. Hyperthreading is only useful if you have 2 or more low-demand threads. The benefit of hyperthreading disappears when 1 process needs 99% of the CPU, like many games. This can even be detrimental, as your game will never be able to use the entire processor.
          • by John Courtland ( 585609 ) on Thursday April 21, 2005 @11:30AM (#12302886)
            And then detrimental again because both processes share the L1 cache... I don't know if Intel fixed that problem yet, but the cache sharing actually decreased performance compared to a processor with HT disabled while running high-demand single-threaded applications (games).
          • Hyperthreading is only useful if you have 2 or more low-demand threads.

            Not quite. SMT is useful if you have two threads which make mutually exclusive demands on the processor's execution units, for example one doing a lot of integer arithmetic and another doing a lot of floating point calculations. Additionally, SMT is marginally useful if you have two processor intensive threads, since the cost of a context switch between two threads is less on an SMT system than on a single context system (although
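
            To make the workload mix described above concrete, here is a minimal sketch in C with pthreads: one integer-heavy thread and one FP-heavy thread, the case where SMT's shared execution units (or, of course, a real second core) can overlap useful work. File name, function names and loop counts are arbitrary, chosen only for illustration.

            ```c
            /* Minimal sketch: two threads with complementary demands on the
             * execution units -- one integer-heavy, one FP-heavy.
             * Compile with: gcc -O2 -pthread mix.c -o mix
             */
            #include <pthread.h>
            #include <stdio.h>

            #define N 200000000L

            static void *int_work(void *arg)
            {
                unsigned long x = 1;
                for (long i = 0; i < N; i++)
                    x = x * 2654435761UL + 12345;   /* integer multiply/add chain */
                *(unsigned long *)arg = x;
                return NULL;
            }

            static void *fp_work(void *arg)
            {
                double y = 1.0;
                for (long i = 0; i < N; i++)
                    y = y * 1.0000001 + 0.5;        /* FP multiply/add chain */
                *(double *)arg = y;
                return NULL;
            }

            int main(void)
            {
                pthread_t t1, t2;
                unsigned long ires;
                double fres;

                pthread_create(&t1, NULL, int_work, &ires);
                pthread_create(&t2, NULL, fp_work, &fres);
                pthread_join(t1, NULL);
                pthread_join(t2, NULL);

                /* Print the results so the compiler can't discard the loops. */
                printf("int=%lu fp=%g\n", ires, fres);
                return 0;
            }
            ```

            Time it once as written and once with the two loops run back-to-back in a single thread; the difference in wall-clock time is the overlap being talked about here.
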

        • I suppose that makes sense. The question this raises, though, is whether there are any games designed to work better on hyperthreaded/multiprocessor systems.

          I very much doubt it. I've always thought of Blizzard as being one of the better companies when it came to "doing it right" with regard to coding their games. I know playing Warcraft III it always consumed 100% of one processor and did not put a dent in the other. I have not noticed any games that do a better job.

        • Galactic Civilizations [galciv.com] from Stardock [stardock.com] has a mode that can take advantage of hyperthreading. Of course, it is a turn-based strategy game and is able (I assume) to offload a lot of background processing to take advantage of it.
        • IIRC Quake3 used separate threads for sound and video.

          In my own experience with both a dual P2-450 system and a dual AMD 1.2GHz system, the game will run on one CPU and the OS will use the other... simply balancing the load between both processors. Quake2/3/UT/ET ran on the dual 450s with comparable oomph to a single 1GHz system.
        • " The question this raises, though, is whether there are any games designed to work better on hyperthreaded/multiprocessor systems."

          Why worry about the past? If these processors take off, new games will support them.

        • by goates ( 412876 ) on Thursday April 21, 2005 @01:59PM (#12304205)
          Unreal 3 will use multiple threads.

          http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2377&p=3

          goates
      • by DigitumDei ( 578031 ) on Thursday April 21, 2005 @11:29AM (#12302878) Homepage Journal

        Depends, I guess. I know I don't have the luxury of keeping my gaming machine separate from all other applications I use, so my gaming machine is also my work machine and it tends to have a lot of stuff running at any given time. Now, when playing shooter games I often notice a sudden drop in fps when some service or other decides it needs to do something. A dual-core machine would be a lot less prone to this, I guess.

        Also, from the article. "And although the company says dual-core isn't for gamers quite yet, perhaps it is, only in a different usage model. Alan Dang and I were discussing processor benchmarking moving forward and he came up with the idea that we don't run compute-intensive tasks in the background today because we think they can't be done. However, if a dual-core processor enables a DVD encode while you're playing Half-Life 2: Deathmatch, there's a good chance that the way we think about demanding tasks may change. Even though games aren't currently threaded, the background processes a dual-core processor enables may very well catapult the technology into favor with game enthusiasts."

        • by Malc ( 1751 )
          So have there been any multi-threaded games since Quake 2? That was awesome on a dual proc machine. When things in the game got hectic, the framerates didn't drop in the same way they would on a single proc machine.
      • I believe that ut2004 is completely multithreaded
      • The reason games are written without multithreading is because most games are written with consoles in mind. While the PS2 has a type of parallel processing, neither Xbox nor Nintendo supports any kind of SMP.

        The next generation of consoles from Sony and MS are supposedly going to be fully SMP capable, so game developers will start taking advantage. That will make multithreaded ports to PC a no-brainer.

        There's nothing about gaming that makes multithreading less useful - in fact the need to run a real tim

    • Re:OK then. (Score:5, Insightful)

      by antifoidulus ( 807088 ) on Thursday April 21, 2005 @11:02AM (#12302609) Homepage Journal
      Why would they need a "boost"? These are expensive and obviously aimed at high end users. You can already get sub $1k laptops that really do all the stuff you described, so why would they buy a dual core desktop system?
      If you are using a dual core system to run word either a) you have WAY too much money, or b) the code bloat at Microsoft has REALLY gotten out of hand......
      • Re:OK then. (Score:3, Insightful)

        by nmg196 ( 184961 ) *
        > the code bloat at Microsoft has REALLY gotten out of hand......

        I wish people would stop talking about Microsoft code bloat when nobody else does any better.

        Currently, 50 processes. The two highest (memory and VM wise) are Thunderbird, which is using 60MB of main memory, and Firefox, which is using 55MB of main memory. All the Microsoft products I'm running, like Visual Studio .NET 2003, are WAY down the list, as none are using more than 10-15MB of main memory.

        Nearly all popular linux distributions now co
        • I don't know much about memory usage under Windows, but here on Linux I am running X with KDE 3.3.2, Firefox AND Konqueror, Citrix, Konsole and some other stuff, and my total actual RAM in use is about 165MB. Of course there's a bunch of RAM in use by the disk cache but it doesn't make sense to count that. Nothing is in swap.

          On Linux at least the memory usage numbers reported by individual applications can be misleading, since it's hard to account for things like copy-on-write. It's easier to look at a
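
          For reference, here is a rough sketch (C, assuming the usual Linux /proc/meminfo fields MemTotal, MemFree, Buffers and Cached, reported in kB) of the whole-system measurement described above, i.e. "RAM in use" with the disk cache left out:

          ```c
          /* Sketch: report RAM in use excluding buffers and page cache,
           * by parsing /proc/meminfo. Values in /proc/meminfo are in kB.
           */
          #include <stdio.h>
          #include <string.h>

          int main(void)
          {
              FILE *f = fopen("/proc/meminfo", "r");
              if (!f) { perror("/proc/meminfo"); return 1; }

              char line[256], key[64];
              long total = 0, freemem = 0, buffers = 0, cached = 0, val;

              while (fgets(line, sizeof(line), f)) {
                  if (sscanf(line, "%63[^:]: %ld", key, &val) != 2)
                      continue;
                  if (!strcmp(key, "MemTotal")) total = val;
                  else if (!strcmp(key, "MemFree")) freemem = val;
                  else if (!strcmp(key, "Buffers")) buffers = val;
                  else if (!strcmp(key, "Cached")) cached = val;
              }
              fclose(f);

              printf("In use (excluding buffers/cache): %ld MB\n",
                     (total - freemem - buffers - cached) / 1024);
              return 0;
          }
          ```
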
          • On my Gentoo AMD64 -O3 compiled system, running Gnome, Rhythmbox, Evolution, Epiphany, Liferea and GAIM, it is using 365 MB of RAM, not including buffers or disk cache.

            In contrast, Windows XP running a similar set of applications was only using 230 MB.
          • Re:OK then. (Score:3, Insightful)

            by Otter ( 3800 )
            I don't know much about memory usage under Windows, but here on Linux I am running X with KDE 3.3.2, Firefox AND Konqueror, Citrix, Konsole and some other stuff, and my total actual RAM in use is about 165MB.

            I've never used Citrix, so don't know how heavy that is, but -- 165 megs for the OS, desktop, file browser, web browser and a terminal isn't exactly what I'd call svelte. Open a GNOME app, dragging in a whole other suite of libraries, and you'll be pushing 200 megs without doing anything heavy.

          • Re:OK then. (Score:3, Interesting)

            by r_naked ( 150044 )
            As for the "more than one CD" complaint, Linux distributions these days come bundled with about all the software there is. Windows XP doesn't even include a C compiler as far as I know.

            I am sure Microsoft would love to include a stripped down version of VS.NET with XP. With all the grief they are given for including a web browser, what kinda headaches do you think they would get if they started shipping a compiler?
            I hear complaints on here all the time about how you get so many more applicatio
        • Re:OK then. (Score:3, Interesting)

          Oh my, here it comes again. Comparing pears to apples. If I install MS Windows, what do I get? An operating system with a few (let me say: lousy) applications. If I install the Linux distro of my choice, what do I get? Depending on my choice, it can be a full-blown suite of applications ranging from development to office apps to video processing.
          And furthermore, e.g. KDE has been quite successful at speeding up between 3.2 and 3.4. I am not so sure about the memory footprint, but that is no concern for me today (R
          • Re:OK then. (Score:2, Insightful)

            by alecks ( 473298 )
            Meanwhile, everyone's complaining that MS is bundling a friggin' media player and internet browser with THEIR OWN OS!!! I've always wondered, and I'm sure (err, hope) there's a good explanation... Why doesn't Apple get sued for packaging everything with their OS?
            • Re:OK then. (Score:4, Informative)

              by Mycroft_VIII ( 572950 ) on Thursday April 21, 2005 @01:39PM (#12304004) Journal
              The reason Microsoft gets so much more scrutiny and legal flak on the bundling issue is because they have been found (legally) to be a monopoly. This changes the rules for them, so as to prevent them from locking out any future competition or taking over related/linked markets by virtue of their having a near-captive audience.
              Apple, with its small slice of the market, is very unlikely to, say, put Opera out of business by shipping their own browser for free with their operating system like Microsoft did to Netscape (I know that's a simplification of IE/Netscape history, but it serves to illustrate my point, I hope).
              In short, Microsoft is a victim of their own success here.

              Mycroft
        • So if I use Mac OS X, is it okay for me to talk about Microsoft's code bloat?

          There are people who do it way better!
        • Nearly all popular linux distributions now come on more than one CD (even if you ignore the source code) and the default installations are WAY bigger than that of Windows XP.

          That's not really fair, as Windows XP doesn't really include very much with it. A media player, a web browser, a few simple utilities, and that's about it. Most Windows XP users have to install quite a bit of software to do what they want. On the other hand, most Linux distros have tons of packages included, for many users the defa
        • Re:OK then. (Score:5, Insightful)

          by Chris Burke ( 6130 ) on Thursday April 21, 2005 @12:07PM (#12303229) Homepage
          Currently, 50 processes. The two highest (memory and VM wise) are Thunderbird, which is using 60MB of main memory, and Firefox, which is using 55MB of main memory.

          Point conceded. Some OSS software chews up the memory, and FireFoo are major culprits.

          Though I'd hope to hell Visual Studio is way down the list. It's just an IDE! It has a GUI and a text editor. All the memory-chewing hard work is done in the compiler back end. With that comparison, my Emacs session is 6MB.

          Nearly all popular linux distributions now come on more than one CD (even if you ignore the source code) and the default installations are WAY bigger than that of Windows XP.

          Of course they are -- they include reams of free software! Nobody would complain about the large size of Windows installations if that installation came with practically every piece of software you would ever need! Even a 'default' install that doesn't install everything still has vast swaths of software from compilers to office suites to web browsers to web servers to image manipulation to whatever.

          Who could possibly complain about getting more free stuff, even if it takes another CD or two or three to fit it? Consuming disk space for useful things is fine. Windows installs are considered bloated because the size increases but the perception is that you're not actually getting more stuff. Honestly -- what comes with the XP install these days?
          • >With that comparison, my Emacs session is 6MB.

            Oh, come on. It took more than that 20 years ago :)

            More seriously, I just launched one, and it immediately was using 9.5Mb on FreeBSD. It then did a couple of things on its own (???), displaying a notes buffer, and hopped up to 9.5Mb.

            I shudder to think of what it will do if I actually type anything in its window . . .

            hawk
        • Nearly all popular linux distributions now come on more than one CD (even if you ignore the source code) and the default installations are WAY bigger than that of Windows XP.

          Also, they work properly.
        • and most of those come with full suites of software. Microsoft's one CD is just the OS. Then you have to download the latest patches (Which you should do for any OS). I wonder how big Windows would be if it included several text editors, compilers, office suites, games, etc.
    • Re:OK then. (Score:2, Insightful)

      by millermj ( 762822 ) *
      The people who want to do all of that at once, maybe? Honestly, ever tried to MD5SUM your CD1 ISO at the same time as you were encoding your MP3s for CD2? Dual-core processors would make multitasking much smoother.
      • Honestly, ever tried to MD5SUM your CD1 ISO at the same time as you were encoding your MP3s for CD2?

        If you're in that situation so often that it's worth buying a high-end workstation to expedite it, and if the bottleneck is really in the number of CPU cores and not elsewhere in the system, then, good!

        As far as the OP's question about taxes, though -- unless you're really, really good at doing your taxes, the CPU does not even begin to approach being the limiting factor.

    • Re:OK then. (Score:2, Insightful)

      by Anonymous Coward
      So what about the average user? Will the college kid who just needs to type their papers, the parents who want to do their taxes, the gamers who want to play high-end stuff, etc. get any sort of boost from this?

      Here's a clue: The Bottleneck Ain't The Processor.
    • TFA goes on to mention in the conclusion that gamers may like the ability to do real work in the background without interrupting their game.

      What does the average user get out of this? Hopefully a nice price cut on all those boring single-core procs sometime soon.
    • Re:OK then. (Score:3, Informative)

      by NanoGator ( 522640 )
      "So what about the average user?"

      Windows is multi-threaded and behaves better in a multi-processor environment. Even the average user will notice this.

    • Okay, for typing papers, doing taxes, sending email, and surfing the web you do not need even 1GHz. Frankly, a 600MHz PIII will do all that, at least using Linux and/or W2K, and yes, I have done it.
      Playing games, it will depend on the game. Most games are single-threaded, so not yet. When multi-core is common, will that change? Possibly, as soon as the game engines are recoded to take advantage of multi-core. It may help some by running other tasks on the second core, like your antivirus, TCP/IP stack and so o
  • by Amiga Lover ( 708890 ) on Thursday April 21, 2005 @10:56AM (#12302535)
    OK. Anyone have a quick, simple explanation of why dual core over a dual-CPU motherboard? Are there inherent advantages to dual CPUs so close together?

    • Imagine two dual core CPUs plugged into a dual CPU mobo.

      Pop an erection yet?
    • by Anonymous Coward
      It cuts down the wait time of communication between the CPUs, as the two cores on a dual-core chip don't need to go over any sort of mobo bus to communicate, effectively giving a clock-speed bus for inter-CPU communication.

    • by Anonymous Coward on Thursday April 21, 2005 @11:01AM (#12302592)
      Less heat, less space, lower energy requirements, and eventually less money because there is only one chip.
    • by Moraelin ( 679338 ) on Thursday April 21, 2005 @11:03AM (#12302617) Journal
      Probably the biggest advantage is that it's cheaper. (Although if by much, that remains to be seen.)

      Plus, AMD's promise was something like being able to double the number of CPUs without having to buy a new motherboard. Though how much of a saving that will be (I expect AMD to price these pretty high), and whether it will mean you're stuck with much slower cores to keep within the TDP limits, remains to be seen.

      There are other possibilities for improvement, such as using a shared cache and IMC instead of just throwing two cores together and going over HT like on a dual CPU system. But AMD hasn't yet done that.
    • Cost.

      With a BIOS update, I turn my single-socket board into a dual-CPU rig. Now your $799 low-end server gets almost 2x the CPU horsepower. This is killer for a cluster or similar. 2x the CPU in the same rack space.

      For the bigger boxes, it turns a 4-way high-end box into an 8-way. Think database servers, virtualization servers or any other multi-threaded app that uses a lot of CPU. If you need an 8-way box, your cost just went down by 40% or more.

      For the board makers, they no longer have to build a
    • by shawnce ( 146129 ) on Thursday April 21, 2005 @11:07AM (#12302660) Homepage
      Yes.

      One example... dual-core (true dual-core) CPUs have the ability to exchange data between the cores at faster rates and, more importantly, with less latency than when having to exchange data between CPUs on a dual-CPU system. This can improve SMP flow.

      Another example... good dual-core implementations will utilize some form of cache unification to allow better bulk sharing of data between cores while still allowing high levels of independent cache activity (IBM's POWER5 [arcade-eu.info] is a good example of this).
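
      One rough way to get a feel for the core-to-core (or CPU-to-CPU) latency being discussed is to ping-pong a shared cache line between two pinned threads. The sketch below is Linux/GCC-specific and assumes core IDs 0 and 1 map to two distinct cores (adjust for your own topology); the file name and iteration count are arbitrary. It is an illustration, not a rigorous benchmark.

      ```c
      /* Sketch: bounce a cache line between two threads pinned to different
       * cores and report the average round-trip time.
       * Compile with: gcc -O2 -pthread pingpong.c -o pingpong
       */
      #define _GNU_SOURCE
      #include <pthread.h>
      #include <sched.h>
      #include <stdatomic.h>
      #include <stdio.h>
      #include <time.h>

      #define ITERS 1000000

      static atomic_int token = 0;   /* the shared line the two cores fight over */

      static void pin_to_core(int core)
      {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(core, &set);
          pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
      }

      static void *player(void *arg)
      {
          int my_parity = (int)(long)arg;          /* 0 or 1 */
          pin_to_core(my_parity);                  /* assumed core IDs 0 and 1 */
          for (int i = 0; i < ITERS; i++) {
              while ((atomic_load(&token) & 1) != my_parity)
                  ;                                /* wait for our turn */
              atomic_fetch_add(&token, 1);         /* pass the token back */
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          struct timespec t0, t1;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          pthread_create(&a, NULL, player, (void *)0L);
          pthread_create(&b, NULL, player, (void *)1L);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
          printf("~%.0f ns per round trip\n", ns / ITERS);
          return 0;
      }
      ```

      Two cores on one package should show a noticeably lower round-trip time than two sockets that have to go over an external link.
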
      • One example... dual-core (true dual-core) CPUs have the ability to exchange data between the cores at faster rates and, more importantly, with less latency than when having to exchange data between CPUs on a dual-CPU system. This can improve SMP flow.

        Is this really true in the case of the Hammer core? CPU-to-CPU and core-to-core links both use HT.

        And, as you fail to mention but allude to by mentioning the technology, there is no cache unification in Hammer (yet?)

        • Well, I am not an expert on the latest AMD stuff (of course, they had the two-core design in mind from the beginning of the Opteron), but from what I have seen the two cores in the Opteron interface with a common crossbar switch (they each have an independent L2 cache, so no real unification at that level).

          So they are not interconnected via HT on-die but by a crossbar interconnect that presumably allows some level of concurrent point-to-point (core to core, core to HT link, core to memory, etc.) transfers a
    • by hobuddy ( 253368 ) on Thursday April 21, 2005 @11:16AM (#12302761)

      1) Cost.
      Since there need only be half as many sockets, the motherboard can be smaller, less complicated, and therefore less expensive. This is especially true in the case of single-socket motherboards, which are usually 50-60% as expensive as their dual-socket brethren. AMD has sweetened the cost savings even further by arranging it so that most single-socket motherboards already in use with a single-core CPU can accommodate a dual-core CPU after just a BIOS flash.

      2) More efficient interconnection between the cores.
      This advantage currently applies to AMD's design but not Intel's. As explained here [techreport.com], "As you can see, AMD didn't simply glue a pair of K8 cores together on a single piece of silicon. They've actually done some integration work at a very basic level, so that the two CPU cores can act together more effectively. Each of the K8 cores has its own, independent L2 cache onboard, but the two cores share a common system request queue. They also share a dual-channel DDR memory controller and a set of HyperTransport links to the outside world."

      After reading the TechReport article I linked to above, it looks to me like AMD is way ahead in the dual core market in all of the areas that count: better backward-compatibility, better cache coherency, and lower heat.

    • Doubling the calculation power without changing the footprint? (aka layout and space optimization)
    • Not sure if anyone answered this properly yet...

      The main advantage of dual core over dual processor (where the processors are not in the same CPU package) is that it should be possible to allow the two CPUs to communicate at very high speed.

      Inside a single CPU, data is moved around at, or very close to, the clock speed of the CPU (e.g. 2.7GHz). Outside of the chip, the longer distances signals need to travel mean that it is more difficult to run data busses at high speeds. So talking to a hard disk or oth
  • by Prince Vegeta SSJ4 ( 718736 ) on Thursday April 21, 2005 @10:58AM (#12302553)
    where I can encode an MPEG-2 DVD (maybe it will be HD-DVD by then), rip & copy a DVD, download a huge torrent, and play UT with a respectable framerate.
  • by uofitorn ( 804157 ) on Thursday April 21, 2005 @10:59AM (#12302560)
    Wouldn't a better benchmark be to compare a dual core setup to a similarly configured dual processor workstation?
    • Wouldn't a better benchmark be to compare a dual core setup to a similarly configured dual processor workstation?

      If you're not going to RTFA, why do you think you're qualified to discuss what they did and did not benchmark? They did compare dual core and dual processor setups as well. They also discussed the relative advantages of both.

    • I think the benchmark for comparison should be whatever else you can get at a similar price.

      Dual processor motherboards and CPUs were never priced to make them attractive for widespread use, whereas dual core chips supposedly will be. We shall see.

  • The simple future (Score:5, Interesting)

    by caryw ( 131578 ) <.carywiedemann. .at. .gmail.com.> on Thursday April 21, 2005 @10:59AM (#12302561) Homepage
    Why stop at dual core?
    Once a way to link multiple cores of a CPU is firmly implemented scaling the chip to 4, 8, or even 32768 cores should be relatively easy.
    With chip dies getting smaller and smaller the only real reasons not to continue this multi-core scaling would be physical space and power usage.
    Perhaps they could scale multiple cores vertically instead of just making the chip wider and longer.
    And perhaps the cores could only be "turned on" when called for instead of using up juice all the time.

    Interesting look at the future of chips.
    Sony's PlayStation 3 is using a "Cell processor", a similar multi-core design that has already been covered here in the past.

    Arstechnica article on the cell processor here [arstechnica.com].
    --
    NoVA Underground: Fairfax County, Loudoun County, Arlington, Prince William chat and local forums [novaunderground.com]
    • by Detritus ( 11846 ) on Thursday April 21, 2005 @11:16AM (#12302765) Homepage
      It wouldn't work. Why do you think we have processors with two or three levels of cache? There is a serious speed/bandwidth mismatch between the processor and the main memory system. There are ways of increasing main memory bandwidth, but they are very expensive. There's no point in adding more processors if they are going to spend 95% of their time stalled, waiting for cache lines to be filled.
    • Why stop at dual core?
      Once a way to link multiple cores of a CPU is firmly implemented scaling the chip to 4, 8, or even 32768 cores should be relatively easy.


      How about because the benefit from adding a 2nd core is not a 2x speed increase. You get diminishing returns as you rapidly run out of work that can be parallelized. Besides the architectural nightmare of a multiple-core setup, there is the economic factor. Will you pay 4, 8, or 32768 times as much for a 10%, 13%, or 15% speed increase over a dual cor
    • by mobiux ( 118006 ) on Thursday April 21, 2005 @11:32AM (#12302910)
      I am waiting for intel's 32768 core processor.

      IIRC, the P4EE dual core puts out 225 watts at full power.
      That would make the chip put out roughly 3,686,400 watts, or 3.686 megawatts.

      It's too cold on this planet anyway.

      • That would not be too bad. Just use the heat to boil water and run a steam turbine to help power the CPU. Let's not let all the heat go to waste, after all. Many gas turbine systems already do this :) Of course, the CPU will have to run at 100C plus, so something besides water might really be better. Like... NH3... gives the whole liquid-cooled CPU idea a whole new meaning.
    • Re:The simple future (Score:5, Interesting)

      by jd ( 1658 ) <imipak@ y a hoo.com> on Thursday April 21, 2005 @11:36AM (#12302945) Homepage Journal
      IIRC, the latest generation of Sun UltraSPARCs has 6 CPU cores. An alternative approach is to have "virtual" cores - have a stack of registers and pools of computational elements. This does require some extra element of sophistication, to share out resources, but if you have two programs with very different CPU needs, both programs should run faster. Also, if you have fewer programs than there are virtual cores, but instruction parallelization can be performed, you still get a speedup.


      The idea of turning off parts of the CPU would work, if you have a large enough cache. What you would need to do is prefetch all possible paths far enough ahead that you could turn on any deactivated part of the CPU before the instruction needed to be executed. You then have an independent "monitor" processor (an MPU?) which purely scans the cache and turns off all elements on the CPU that aren't needed within the lifetime of any of the contents of the cache.


      Another poster noted the bandwidth issue between processor and main memory. That is certainly a problem, but one that may be fixable. One way is to speed up memory (and the bus). The other is to look at ways of reducing the amount that needs to be transferred, by putting some of the CPU in memory. (The technique is called "Processor-In-Memory", and has been around for about 10-15 years.)

    • Okay, let's think about two things:
      the limited three dimensions of space, and the limited number of layers you can put on the chip wafer as it is fabricated. The limited number of layers on the wafer is a simple concept to get. The limited 3-d space to work in also limits interconnects between multiple processors in clusters by limiting the topology which the cluster can form.

      The uber-cluster concept was in the Thinking Machines (TMC) something-or-other which had 1024 processors linked together in (effect
    • Look at the POWER5, which is a combination of dual-core dies integrated in a multichip package. It provides for L3-to-L3 sharing as well as a ring-style interconnect, I believe.

      Amazingly the following is the best info I could find that isn't private...
      IBM's POWER5 Chip with 8 cores and 144MB cache showcased [anandtech.com]

      Not exactly what you are talking about but close... of course the cell processor is closer to what you would likely get on a single die at this time given feature sizes and heat issues.
    • Each chip has multiple calculation units.

      So really the concept has already been implemented, but managing these cores in applications which can't be optimized for them can yield a net loss in speed. So... But the 875 totally smokes even on old benchmarks.
  • For the lazy... (Score:4, Informative)

    by Anonymous Coward on Thursday April 21, 2005 @11:03AM (#12302615)

    ...and for those who don't want to flip through pages and pages of flash banner ads:

    Scientific Computing

    MATLAB: Though the script includes a moderate amount of matrix math, it doesn't seem like much of it is parallelized. Our recommendation from two years ago still stands - for most MATLAB users, the fastest performance will come from the single Athlon 64 line.

    LS-DYNA: I will bench the CPUs using two classic tests, a 3-vehicle collision and a single front-collision. The 3-vehicle collision takes more than 24 hours to complete - we do not have these numbers ready for this round of articles.

    Digital Imaging

    Capture One: With Capture One only supporting two CPU threads, the dual-core Opteron's lower clockspeed is a disadvantage.

    Bibble: It took only 4 minutes to complete with the 2x Dual-Core Opteron 275. 4 minutes! That's 4.2MB/sec of processing - a 2x Dual-Core Opteron 275 can process RAW images about as fast as it takes to copy them to your computer using a standard-grade USB 2.0 CF card reader!

    Noise Ninja: On the slower Opteron 246, the fastest results were had with 4 threads, but on the faster CPUs, 8 threads was better.

    Video

    After Effects: Since the decoding of WMV-HD does not seem to take advantage of both CPUs, the performance gain from the Dual-Core AMD Opterons is virtually absent.

    • So in summary: This would be good in number crunching applications that are optimized for multiple cores, but not good to run multiple programs simultaneously. (i.e. play Half-Life 2 and run a webserver at the same time)
  • by FlyByPC ( 841016 ) on Thursday April 21, 2005 @11:05AM (#12302650) Homepage
    ...LONG LIVE COMPETITION!

    I wish both AMD and Intel well. All the better for us. Lower prices and better performance.
  • Wow! (Score:5, Interesting)

    by truesaer ( 135079 ) on Thursday April 21, 2005 @11:11AM (#12302715) Homepage
    This is pretty disastrous for Intel. The game benchmarks show significant performance penalties for dual-core chips, as expected. Intel launched its dual core specifically as an Extreme Edition for games.


    On other benchmarks the AMD dual core gets 10-20% better performance! SiSoft Sandra is an exception, where the results are a mixed bag between the two processors.


    This pretty much verifies for me that Intel did a seriously rushed kludge to get this thing out the door. The only reason I can think of to target this at gamers is that no OEMs would want to buy them for server or desktop use, so you have to target people who like the latest technology even if it isn't that great.


    AMD on the other hand seems to have a pretty good product here. I can't wait until the desktop versions come out.

    • Re:Wow! (Score:3, Informative)

      by i41Overlord ( 829913 )
      On other benchmarks the AMD dual core gets 10-20% better performance! SiSoft Sandra is an exception, where the results are a mixed bag between the two processors.

      In the article on Anandtech, they do a pretty good job of explaining this.

      Basically, in the past, programs that had multiple threads heavily favored Intel's Hyperthreading chips since they could handle multiple threads at once. AMD's chips lacked this capability. Intel's dual core chips did pick up a performance boost, but they didn't pick up the larg
      • Re:Wow! (Score:2, Insightful)

        by t35t0r ( 751958 )
        AMD single processors can't handle multiple threads at once? What planet do you live on? So you're saying I can't run multiple threads of the same application on an Athlon and do it effectively? The CPU will automatically split prioritization and CPU processing power evenly between the two. While this may not be as effective as Intel's hyperthreading technology, I'd take an Athlon 64 3200+ 939-pin or Athlon XP 3200+ over a Pentium 4 3.2GHz any day, simply because I haven't noticed any differe
        • Um, think again... (Score:3, Insightful)

          by Svartalf ( 2997 )
          Just because it'll prioritize, etc. doesn't mean that it's running the threads simultaneously, which is what "at once" actually means. The only way to do that is to have Hyperthreading or SMP. In the case of an SMP machine, it'll prioritize the threads and divvy them up across the CPUs/cores on the machine, to be executed as much in parallel as possible.

          On a non-Hyperthreading, non-SMP machine, it's going to execute only as fast as the one-legged man is able to get to kicking asses...
  • by denominateur ( 194939 ) on Thursday April 21, 2005 @11:11AM (#12302716) Homepage
    Now, it's struck me as very peculiar that the benchmarks where the dual dual core setup from AMD really shines leave out any comparison whatsoever to the Intel dual-core offering. This raises the question of whether the person doing this review is a journalist or a marketing representative of AMD.

    "We did not have time to evaluate the Intel platform with the Intel MKL, the P4 3.0GHz is an older reference measurement." is a very cheap excuse and indicates either lazyness or bribes on the side of AMD... I hate hardware review sites!
    • Check out the article at Anandtech linked a little bit above, which does have comparisons with the Pentium lines. The results are still clearly in AMD's favor.
    • Now, it's struck me as very peculiar that the benchmarks where the dual dual core setup from AMD really shines leave out any comparison whatsoever to the Intel dual-core offering.

      They couldn't test a dual core multiprocessor chip from Intel because one doesn't exist yet. They've only released single processor dual core chips so far.

      AMD introduced dual core on their multiprocessor server chips first, with desktop chips coming later on. Intel introduced dual core on their single CPU desktop chips first, w
    • I'm an AMD fanboy, and even I wasn't thinking it would smoke the Intel this hard.

      The dual core is mostly tested on encoding, an area where Intel has traditionally dominated. And it still kicks Intel's ass.
  • Dr. Dobb's last month had an item regarding threading in real-world environments. The author said that while multi-threaded applications can run a lot faster than single-threaded applications, that isn't always so. In addition, there are some significant issues with running in a multi-tasking, multi-threaded environment that aren't solved with the use of mutexes and semaphores.

    Multi-threading and multi-core are definitely the way the industry needs to go, but the current development methodologies and application archite
    • by zx75 ( 304335 ) on Thursday April 21, 2005 @12:12PM (#12303269) Homepage
      The computing theory and architectures are already there. Now that Java has finally jumped on the bandwagon of reliable multi-threading with v1.5 (or v5.0, or whatever the hell they're calling it today), chances are unless you're using really legacy code the language will have the appropriate system calls available to it.

      The difficulty is that in order for multi-threading to be worthwhile, a developer really needs to know their stuff. It is not easy, there are a number of things that must be taken into consideration that simply do not occur in single-threaded programming. A programmer who just picked up a 'C++ in 24 hours' book is most likely not going to have the tools available to them in order to handle or understand the complexities of multi-threaded programming.

      That being said, there are many situations where multi-threading is not appropriate, but if you think the theory needs to play catch-up, you might be surprised at how common it is in professional development.
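
      As a tiny illustration of the kind of pitfall that simply does not exist in single-threaded code, here is a sketch (C with pthreads, file name and loop counts arbitrary) of an unsynchronized counter that loses updates, alongside the mutex that fixes it:

      ```c
      /* Sketch of a classic data race: two threads increment a shared counter.
       * Without the mutex the final count usually falls well short of 2*N,
       * because "counter++" is a read-modify-write, not an atomic operation.
       * Compile with: gcc -O2 -pthread race.c -o race
       */
      #include <pthread.h>
      #include <stdio.h>

      #define N 1000000

      /* volatile keeps the compiler from collapsing the loop; it does NOT
       * make the increment atomic. */
      static volatile long counter = 0;
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

      static void *worker(void *arg)
      {
          int use_lock = *(int *)arg;
          for (int i = 0; i < N; i++) {
              if (use_lock) pthread_mutex_lock(&lock);
              counter++;                       /* the racy read-modify-write */
              if (use_lock) pthread_mutex_unlock(&lock);
          }
          return NULL;
      }

      static long run(int use_lock)
      {
          pthread_t a, b;
          counter = 0;
          pthread_create(&a, NULL, worker, &use_lock);
          pthread_create(&b, NULL, worker, &use_lock);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          return counter;
      }

      int main(void)
      {
          printf("without mutex: %ld (expected %d)\n", run(0), 2 * N);
          printf("with mutex:    %ld (expected %d)\n", run(1), 2 * N);
          return 0;
      }
      ```
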
  • by Luscious868 ( 679143 ) on Thursday April 21, 2005 @11:16AM (#12302757)
    Dell still won't sell servers with them ....
  • by IronChefMorimoto ( 691038 ) on Thursday April 21, 2005 @11:37AM (#12302953)
    Anandtech has an AMD dual-core Opteron and Athlon 64 X2 article that might complement the original poster's story pretty well. It has a sh*tload of benchmarks:

    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2397 [anandtech.com]

    I really wish they wouldn't do gaming benchmarks with an Opteron in stories like these. Just because the Opteron used has similar specs to the desktop processor that hasn't been released doesn't necessarily mean that the gaming benchmarks are all that useful. Just my 2 cents.

    It'll be interesting to see how soon prices fall for these AMD processors (server and desktop) when they go mainstream. Read the cost comparisons for these badboys in the article.

    Finally, I'm glad that Anand decided to demonstrate that the new AMDs will be backwards compatible with Socket 939 motherboards WITH BIOS revisions. Intel's dual core processors don't offer that luxury, from what I read in the article.

    IronChefMorimoto
  • by Anonymous Coward
    Given ASUS has not updated the socket 940 SK8V BIOS in over 6 months, and hasn't even had a new beta BIOS (yuck) since December, what are the odds of any motherboard really supporting these CPUs? How many companies rush to support the older boards?

    In fact, the SK8V is suddenly gone from both the motherboard page and the retired products page on the asus website. Hmmm...
  • by GweeDo ( 127172 ) on Thursday April 21, 2005 @11:50AM (#12303087) Homepage
    With comments like:
    "Even grandmothers own 8-megapixel consumer digital cameras now"

    I really have to question the intelligence of this poor guy. I don't know many grandmas who drop $700-$1000 on digital cameras.
  • Amateurs (Score:5, Insightful)

    by Jah-Wren Ryel ( 80510 ) on Thursday April 21, 2005 @12:41PM (#12303499)
    How can anyone take an article seriously when the very first sentence just screams, "AMATEUR!!" like this one does:

    Intel may very well go down in history as the first processor manufacturer with a dual-core solution, if only by three days.

    IBM Power4, Power5
    HP PA-8800
    Sun UltraSPARC IV

    All full-fledged dual-core processors shipping long before Intel -- HP's been shipping for over a year and IBM's already well into their 2nd generation of dual-core processors with POWER5.

    Sure, you can excuse the author with some hand-waving about x86 context only or whatever. But if they really knew what they were talking about, they would have said it that way - or at least a competent editor would have corrected it. If these guys can't even get the trivial stuff right, how can anyone trust them to get the real technical details right?
    • Re:Amateurs (Score:5, Insightful)

      by leoc ( 4746 ) on Thursday April 21, 2005 @01:00PM (#12303660) Homepage
      Another thing that pisses me off is that he tests these 64-bit CPUs with 32-bit Windows, claiming that Linux is "hardly mainstream".

      What a load of crap.

      These dual-core chips are PERFECT for high-performance NON-GAMER Linux systems, and yet these guys disregard the most mature and stable 64-bit platform to run game benchmarks on 32-bit Windows.
