AMD Hardware Technology

AMD's Dual-core Athlon 64 X2 reviewed 309

ChocolateJesus writes "Weeks after formally announcing its dual-core Athlon 64 X2 desktop processor, reviews are finally trickling out. The Tech Report's coverage tests two flavors of the Athlon 64 X2 against a whopping 17 competitors, including AMD's and Intel's fastest single- and dual-core offerings. They've even thrown in a handful of dual-processor systems (and dual-core, dual-processor systems) for good measure. Testing focuses on multi-threaded applications, and the X2s deliver remarkable performance. Perhaps even more impressive is the fact that unlike Intel's dual-core Pentiums, AMD's X2s consume no more power than single-core chips." Looks like this story has come out of embargo; if you find more reviews, post them in the comments.
This discussion has been archived. No new comments can be posted.
Comments Filter:
  • Cooling (Score:4, Interesting)

    by Anonymous Coward on Monday May 09, 2005 @10:33AM (#12477148)
    I don't get how this can run at the same power level as the single-core chips. Can someone explain how this is possible?
  • by MaceyHW ( 832021 ) <maceyhw@gmail.com> on Monday May 09, 2005 @10:35AM (#12477169)
    will they be able to outmarket AMD again?
  • Rollout process (Score:5, Interesting)

    by fbody98 ( 881072 ) on Monday May 09, 2005 @10:39AM (#12477229)
    I'm relieved to see at least one good thing come out of this launch, and I would hope that other companies would do as much. AMD has clearly defined its rollout process, so there will be no confusion and hopefully no false expectations.

    1. Announcement
    2. Technical Preview (benchmarks Appear)
    3. Launch (OEM Availability)
    4. Ramp-up and Reseller Availability

    They even give dates. If they can keep to them, we might actually have a product launch that doesn't antagonize the community with accusations of a 'paper launch'.

    I'd like to see more companies be more upfront about this.
  • Re:market for this? (Score:1, Interesting)

    by Anonymous Coward on Monday May 09, 2005 @10:40AM (#12477237)
    More power means less needed optimisations! Programmer's dream :-)
  • by amcdiarmid ( 856796 ) <amcdiarm.gmail@com> on Monday May 09, 2005 @10:50AM (#12477366) Journal
    The market for this is everyone who uses an aggressive anti-virus program. The AV will run on one processor, while whatever you are doing runs on the other.

    It's a sad case that as malware becomes more prevalent, hardware vendors win. Really, you can be productive in an office with (for example) Win2K on a 1 GHz machine with 256 MB of RAM. Now add the wait as every file is scanned on access for viruses (per corporate policy), and the machine somehow becomes "too slow."

    Oh well. I guess it's time to put all productivity applications on a server and run them remotely. Again. ;-(
  • by amichalo ( 132545 ) on Monday May 09, 2005 @10:51AM (#12477376)
    Does dual core have to mean 2 of the SAME processor?

    I recall reading a /. comment on a previous news day that suggested using dual core to allow the OS and anti-virus software run on one proc, while applications share another, thus improving stability/security/performance.

    But does a vendor HAVE to make a dual core chip with two of the same processor? Perhaps gains could be made using a less powerful, commodity chip core and pairing it to a top of the line core.

    Costs would be lower, and they could sell more of these hybrid dual-core chips because each would only need one top-of-the-line core.

    Oh, you get what I am saying.
  • Re:market for this? (Score:3, Interesting)

    by Malc ( 1751 ) on Monday May 09, 2005 @10:54AM (#12477422)
    If they ever make it to a significant share of the market, you will see more of the CPU-intensive tasks that people do today becoming multi-threaded. Some of the long-running processes that are common on home computers lend themselves nicely to divide and conquer, such as ripping music or video. By going dual-core or SMP, one can halve the processing time without having to wait a few years for the processing power of CPUs to double.
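A minimal Python sketch of the divide-and-conquer approach described above: independent tracks are farmed out to one worker process per core. The `encode` function is an invented stand-in for a real CPU-bound encoder such as lame, and the track names are illustrative.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def encode(track):
    """Stand-in for a CPU-bound job such as encoding one audio track."""
    total = 0
    for i in range(500_000):
        total += i * i
    return (track, total)

def rip(tracks, workers):
    """Encode every track, spreading the jobs over `workers` processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode, tracks))

if __name__ == "__main__":
    tracks = ["track%02d" % n for n in range(8)]

    start = time.perf_counter()
    serial = [encode(t) for t in tracks]       # one core does everything
    serial_time = time.perf_counter() - start

    start = time.perf_counter()
    parallel = rip(tracks, workers=2)          # one worker per core
    parallel_time = time.perf_counter() - start

    # Same results either way; on a dual-core machine the wall time
    # for the parallel run is roughly halved.
    print(f"serial {serial_time:.2f}s, two workers {parallel_time:.2f}s")
    assert parallel == serial
```

Because the tracks are independent, no locking is needed; this is exactly the "embarrassingly parallel" shape that ripping and batch encoding have.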
  • Re:market for this? (Score:4, Interesting)

    by be-fan ( 61476 ) on Monday May 09, 2005 @11:02AM (#12477490)
    Programmers? Multimedia people? Scientific computing folks? There are quite a few markets that can use dual-core right now. Basically, anybody who buys a PowerMac :) Moreover, in the future, everyone will have to move to dual core (including gamers), because AMD and Intel cannot ramp up the clockspeed of single core chips. So AMD's strategy makes quite a bit of sense. Sell dual-core chips to the high-end now (notice how all of these CPUs are high-end chips that carry quite a price premium), and start getting the ball rolling on multithreaded software.
  • by Xoro ( 201854 ) on Monday May 09, 2005 @11:10AM (#12477550)

    Dual CPU systems though are useless to the home user; they're for businesses and scientists with greater computing needs. Real enterprise applications are multithreaded.

    Not so!

    I was one of the lucky people who bought a cheap dual Celeron setup right after that hack was first discovered, and I can tell you that multiprocessors on the desktop rock. My old system was a dual Celeron 400, and while it couldn't compete with a modern system in terms of benchmark speed, it had my current 1400 MHz Celeron system beat bloody when it comes to interactivity and responsiveness -- that elusive "feel".

    The price is steep now, but don't let arguments about application benchmarks dissuade you from trying out multicore when prices go down. The Anandtech review cited above has some really telling benchmarks about how well a dual system performs when loaded down with multiple tasks.

    Unlike the unnoticeable 200 or 400 MHz incremental bumps you usually see with processors, dual core really brings something of value to the desktop user. Try it and you'll see.

  • RISC (Score:3, Interesting)

    by delirium of disorder ( 701392 ) on Monday May 09, 2005 @11:14AM (#12477583) Homepage Journal
    Packing in all this circuitry will cost more in heat and fabrication than conventional CPUs. SPARC and MIPS CPUs get more flops, mips, and overall throughput per watt and per million transistors on a die. Maybe we will see a resurgence of elegant RISC designs as dual/quad/oct-core chips become more prevalent.
  • Re:market for this? (Score:5, Interesting)

    by quarkscat ( 697644 ) on Monday May 09, 2005 @11:35AM (#12477798)
    Scientific workstations?

    Anyone involved in matrix math (circuit design, mechanical engineering, fluid dynamics, etcetera) would love to be able to do this on their desktop instead of shared time on an HPC. Or combine the computational power of an office full of these machines at night or over weekends for the really big jobs. What's not to like?

    Any scientific organization that has been holding off on capital expenditures while waiting for a clear winner to emerge ((AMD vs. Intel) vs. (PPC vs. SPARC)) will have come that much closer to making a decision.

    Intel's IA64 gambit has not panned out -- their marketing hype has brought down some of their competition (PA-RISC and MIPS), but IA64 has not become the market leader Intel had hoped for. Like a wildfire in the woods, though, it has opened up room for diversity and some new leadership.
  • by ackthpt ( 218170 ) on Monday May 09, 2005 @11:41AM (#12477848) Homepage Journal
    will they be able to outmarket AMD again?

    Intel is obviously relying on fat vendors like Dell, but with performance like this and power consumption like that, buyers will be asking Dell what their problem is. When Dell finally cracks, you'll know Intel have spent too long fixating on their stock price rather than their products. It's a tough thing to recover from, too, and will call for a major shake-up.

    Pity is, companies which go through this usually emerge considerably weaker. AMD looks good, but you have no idea what may be coming out of Japan/Taiwan/China in 10 years.

  • by NanoGator ( 522640 ) on Monday May 09, 2005 @11:48AM (#12477913) Homepage Journal
    "What a lot of people don't realize (including a lot of programmers) is that a lot of applications are not multithreaded."

    Well, we realize it here, because it's BROUGHT UP every single time there's a mention of more than one processor running!! Yeesh. Heh.

    On a lighter note: when these processors become more popular, multi-threaded apps will come. Besides, it's not like our machines aren't keeping up with apps today. Except for my 3D rendering, I don't have anything that would benefit from a faster processor, and I doubt many other people do either.
  • Re:Cooling (Score:3, Interesting)

    by ThosLives ( 686517 ) on Monday May 09, 2005 @11:55AM (#12477969) Journal
    The really interesting thing is they measured system power consumption, not chip consumption. They specified that the power supplies were the same, but the systems have different specs [techreport.com].

    It's hardly accurate to judge a CPU's power consumption from a "power drawn at the wall" measurement.

  • by Big_Breaker ( 190457 ) on Monday May 09, 2005 @12:08PM (#12478109)
    Apps that are written with multiple threads are typically also written with care to avoid locking up the computer.

    Single threaded apps are typically written with far less care and don't leave cycles free for the GUI and OS functions.

    That is why having the second processor is nice. It has free cycles when an app is hogging the other one. A multi-threaded app will use both but will probably not hog both, leaving the GUI still "snappy".
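The pattern this comment describes -- heavy work kept off the thread that services the UI, so the UI always has free cycles -- can be sketched in a few lines of Python. The job and the timings are purely illustrative; a real GUI would use its toolkit's event loop rather than a polling loop.

```python
import threading
import time

result = []

def heavy_job():
    """Simulates a long-running task kept off the UI thread."""
    time.sleep(0.5)          # stand-in for real work (encoding, scanning, ...)
    result.append("done")

worker = threading.Thread(target=heavy_job)
worker.start()

# The "UI" thread keeps servicing events while the worker runs.
serviced = 0
while worker.is_alive():
    serviced += 1            # stand-in for redrawing / handling input
    time.sleep(0.01)
worker.join()

print(f"serviced {serviced} events while the job ran")
assert result == ["done"]
```

On a dual-core machine the two threads can genuinely run at once; on a single core the OS merely interleaves them, which is why the second core makes the difference in "feel" that the comment describes.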
  • by dtjohnson ( 102237 ) on Monday May 09, 2005 @12:30PM (#12478314)
    The performance of the AMD X2s is absolutely amazing but...will anyone really buy them? The big computer companies seem to be offering mostly P4s at about 3 GHz using some elderly Intel core. The newspaper this morning carries an ad from Fry's Electronics offering a wimpy '2800+ Sempron with motherboard' for $69, and that's the only AMD item listed in their ad. Can't be much money for AMD at that price. It just doesn't look like the desktop computer market cares much about performance anymore.

    AMD might be turning out some pretty good products but they are not making any money [networkworld.com] selling them and it is only a matter of time before they have to fold their tent and leave the field to Intel.
  • That statement is a red herring every time it's brought up.

    Most people who post it don't realize that your CPU is context switching dozens of times per second when idle in your OS already. Simply letting two cores handle different interrupts is a benefit for system responsiveness.

    How often is your CPU wanting to do more than one thing at a time? All the time in an OS like Linux or Windows.

    If you're running Linux, run vmstat and check the context switches per second.

    If you install a second CPU, you may not see a 2x performance increase, but you wouldn't if you doubled your CPU speed either.

    You *will*, however, see a much more responsive machine, because the system handles load better.
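The context-switch counter that vmstat reports (the "cs" column) comes from the ctxt line of /proc/stat on Linux, so you can sample the rate directly. A minimal sketch, which assumes a Linux /proc filesystem:

```python
import time

def context_switches():
    """Total context switches since boot, from the 'ctxt' line of /proc/stat."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt"):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

# Sample twice, one second apart, to get switches per second --
# the same figure vmstat prints in its "cs" column.
before = context_switches()
time.sleep(1)
after = context_switches()
print(f"context switches/sec: {after - before}")
```

Even an "idle" desktop typically shows hundreds of switches per second, which is the commenter's point: the OS is already juggling work that a second core can absorb.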
  • by D. Book ( 534411 ) on Monday May 09, 2005 @01:08PM (#12478789)
    What about the operating system's role in multitasking efficiency? Just a few days before joining the latest dual-core drumbeat, Scott Wasson of The Tech Report posted the following item:

    We've already asked you for some input on our possible multitasking tests, but let's talk for a sec about that creamy smoothness that comes from having multiple processors in a well-tuned system. I've said many times that it smoothes over potholes and allows the user experience to feel friction-free. In fact, if you pick up the latest copy of PC Enthusiast magazine, my column this month extols the virtues of dual-core CPUs for multitasking. I use an example of a problem with my own PC slowing down to a halt while checking mail, caused by the convergence of too much client-side spam, virus, and mail filtering. Dual-core processors should make problems like this almost a thing of the past.

    However...

    After writing that article, I decided to troubleshoot the mail-checking slowdown problem one more time, and I realized that I hadn't applied some basic tweaks to this installation of Windows XP Pro. Once I set the OS scheduler preferences to optimize for "background tasks" instead of "applications," my mail problem was largely resolved. I also used registry tweaks to increase the size of the system disk cache and to disable paging of the Windows executive, and all told, my system is much more responsive now.

    Now, I still think dual-core CPUs will be a great thing for multitasking, but this raises the question: How much creamy smoothness can you squeeze out of a box with only one CPU, with or without Hyper-Threading? And what proportion of PC slowdowns and performance "hiccups" are really caused by inadequate CPU power as opposed to lousy OS scheduling, hard drive bottlenecks, running out of RAM, lousy drivers, or the like? Is multitasking nirvana really just a second CPU core away? What, in your experience, has the most impact on your PC's responsiveness, and what upgrades have helped the most?

    Unfortunately, it seemed the question was mostly rhetorical, as The Tech Report prompted their users to "discuss" the issue subjectively rather than getting some multitasking benchmarks going to back up the anecdote.
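For reference, the tweaks Wasson alludes to correspond to a few well-known Windows XP registry values. The sketch below is illustrative, not his exact settings; the meanings and safe values of these entries vary across Windows versions, so verify against Microsoft's documentation before applying anything.

```
Windows Registry Editor Version 5.00

; Equivalent of "Adjust for best performance of: Background services"
; (0x18 favors background tasks over the foreground application)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\PriorityControl]
"Win32PrioritySeparation"=dword:00000018

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
; Enlarge the file system cache
"LargeSystemCache"=dword:00000001
; Keep kernel-mode code out of the pagefile ("paging of the Windows executive")
"DisablePagingExecutive"=dword:00000001
```

These map one-to-one onto the three changes the quoted item describes: scheduler preference, disk cache size, and non-paged kernel code.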
  • by ebrandsberg ( 75344 ) on Monday May 09, 2005 @01:19PM (#12478912)
    Read the article: it was the flash memory business that caused the loss, not the CPU business.
  • by TheRaven64 ( 641858 ) on Monday May 09, 2005 @01:46PM (#12479201) Journal
    Not only that, but scheduling algorithms for heterogeneous processors are a whole lot more complicated than those for homogeneous sets (see the problems with getting good performance out of HyperThreading for an example). It might be possible to do something fairly simple, like run all processes on the slow processor and migrate them to the fast one when they use more than a certain percentage of the CPU speed, but in this case why not just down-clock the faster one when it is not in use?

    The only time when heterogeneous processors are really useful is when each is better than the others at a sub-set of tasks. Current PCs are usually a set of 3 different processors in a single box[1]. They have a reasonably fast general purpose CPU, and on the same die a simple vector processor (e.g. MMX, SSE, AltiVec), which has a different instruction set to the main processor and must be invoked explicitly. They also have a highly parallel large vector processor on a separate chip, which is usually used for graphics. No automatic scheduling is performed between these - it is up to the programmer to explicitly code for each one. Ideally, a heterogeneous processing environment would require code to be JIT compiled for each processor, and then moved between them depending on run-time profiling information.

    [1] Yes, this is an oversimplification.
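The migrate-when-busy policy described above can be modelled in a toy Python sketch. The threshold, task names, and utilisation figures are invented for illustration; a real scheduler would measure utilisation continuously and account for migration cost.

```python
MIGRATE_THRESHOLD = 0.5   # fraction of the slow core's capacity

class Task:
    def __init__(self, name, cpu_share):
        self.name = name
        self.cpu_share = cpu_share   # measured utilisation, 0.0 - 1.0
        self.core = "slow"           # every task starts on the slow core

def schedule(tasks):
    """Promote any task whose observed CPU share exceeds the threshold."""
    for t in tasks:
        t.core = "fast" if t.cpu_share > MIGRATE_THRESHOLD else "slow"
    return {t.name: t.core for t in tasks}

tasks = [Task("mail", 0.05), Task("encoder", 0.95), Task("shell", 0.10)]
placement = schedule(tasks)
print(placement)   # only the busy encoder earns the fast core
```

The sketch also makes the commenter's objection concrete: if only one task ever crosses the threshold, simply down-clocking an idle fast core achieves much the same power saving with none of the migration machinery.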

  • Re:Cooling (Score:3, Interesting)

    by Jherek Carnelian ( 831679 ) on Monday May 09, 2005 @05:57PM (#12482143)
    For what it is worth, those charts are moderately deceptive. Like most amateur sites, Anandtech doesn't have the equipment to measure actual CPU power consumption, so they measured the consumption of the entire system.

    So, assuming they used the same system for all measurements and just swapped out the cpus, the relative differences are accurate. But you can not draw any conclusion about the absolute power requirements of the cpus based solely on Anandtech's review.

    Maybe no one cares, but it would be easy to read that article and come away with the idea that the dual core cpu consumes (and thus must dissipate) 150 watts under load. While that might be in the realm of possibility for Intel's cpus which are little micro-furnaces, the AMD chips are significantly less hot than that.
  • Everything pre-2.5 was pretty bad for interactivity, but 2.6 is excellent.

    I don't really share your user experience. Being a college student at the time, I salivated over a dual system for years, and finally found my opportunity with the dual Pentium II-class Celeron motherboards by Abit -- that brief window in history when you could have a full dual-processor system for under $250. It was dual 433s overclocked to 466. At the exact same time, I had an AMD K5-400 as my main machine. The dual originally ran Red Hat 8, I believe, and is still running to this day as my main home server; I've only upgraded it to Red Hat 9.0 (mainly because if it ain't broke, I ain't fixing it). So as I look at uname, it's still only running kernel 2.4.20-31.9smp. I remember running this puppy side by side with the K5-400, and later my K6 Thunderbird at 800 MHz. The Thunderbird should have blown away the dual 466, but it didn't. I had better MP3 encoding throughput on the dualies (which I was doing a lot of at the time, farming out all my machines at home and at work 24 hours a day some weeks). One thing I specifically played with was encoding single-threaded with grip+lame vs. dual-threaded. Obviously when single-threaded the system was almost perfectly responsive (since lame isn't disk or even memory bound), but even when dual-threaded, the system was more responsive than my faster single-CPU K6. I quickly fell in love with the dual-processor concept, and used it as my main home station for just about anything that wasn't video games. When my K5 literally exploded one day due to moisture damage, I was rather forced to migrate over to the new machine; but it was a welcome change from a mostly unresponsive Windows environment.

    I am convinced that even Linux 2.4 was smoother with multiple CPUs than Windows. Perhaps it is because X is single-threaded, but graphical work happens in application space and is thus inherently multi-process. Thus you get the best mixture of streamlined, race-free code with concurrent processing capability. This is purely speculative. Whatever the deal, it was great.

    Unfortunately, I don't remember if the standard Linux benchmark of doing a parallel make of the kernel was faster on the dual 466 vs. the single 800. I guess one of these days I'll have to fire that 800 back up again to check; the dual's still chugging along fine as my server.

    Unfortunately I haven't had the luxury of having ANY affordable dualies in the past 5 years, so I've just gone for greater single-threaded horse-power for work-stations.

    As for the point of this thread: I seem to recall that the 2.6 kernel had more overhead than the 2.4 kernel. That, along with my anxiety about changing a special-purpose back-end server's OS, kept me from wanting to upgrade the now-ancient machine. Most likely this overhead is compensated for by the better MT support, and is especially unnoticeable at the 2 GHz range. But I find it hard to believe a perceptible difference in UI responsiveness could be found between the 2.4 and 2.6 kernels. Perhaps measurably in application benchmarks, but surely not in the GUI.

    Sadly, as I've said, I can not provide empirical data as I don't have $1,500 to spend on a simple file-server.
