Inside AMD's Phenom Architecture

An anonymous reader writes "InformationWeek has uncovered documentation that provides some details amid today's hype over AMD's announcement of its upcoming Phenom quad-core (previously code-named Agena). AMD's 10h architecture will be used in both the desktop Phenom and the Barcelona (Opteron) quads. The architecture supports wider floating-point units, can fully retire three long instructions per cycle, and has virtual machine optimizations. While the design is solid, Intel will still be first to market with 45nm quads (AMD's first will be 65nm). Do you think this architecture will help AMD regain the lead in its multicore battle with Intel?"
  • Support? (Score:2, Interesting)

    by Sorthum ( 123064 ) on Monday May 14, 2007 @11:47AM (#19115009) Homepage
    Quad core is all well and good, but are there really that many apps as of yet that can take advantage of it? TFA claims this is for servers and for desktops, and I'm not certain of its utility on the latter just yet...
  • by Eukariote ( 881204 ) on Monday May 14, 2007 @11:53AM (#19115099)
    When it comes to multi-processing scalability, AMD's Barcelona/10h/Phenom single-die four-core with HyperTransport inter-chip interconnects will do far better than the two-die four-core shared-bus Intel chips. Also, both the old and new AMD architectures will do relatively better on 64-bit code than the Intel Core 2 architecture: Intel's micro-op fusion does not work in 64-bit mode, and their 64-bit extensions are a relatively recent add-on to the old Core architecture. The FPU power of the new 10h architecture will be excellent as well. On the other hand, Intel chips will remain very competitive on integer code and cache-happy benchmarks, particularly when run in 32-bit mode. Also, the SSE4 extensions of the upcoming 45nm Intel chips will help for encoding/decoding and some rendering applications, provided that the software has been properly optimized to take advantage of them.
  • by Eukariote ( 881204 ) on Monday May 14, 2007 @12:05PM (#19115299)

    Indeed, let's wait for the benchmarks. I would like some more real-world and 64-bit benchmarks: most recent reviews seem to have studiously avoided those in favor of synthetic 32-bit-only benchmarks that are not very representative and are easily skewed with processor-specific optimizations.

    And I'm not sure going to a 45nm process will allow Intel to step back ahead. It seems process improvements have been yielding diminishing results in performance-related areas. Transistor density will go up, though, so Intel can compensate by adding more cache. Also, AMD's process technology is a little more advanced than Intel's at the same feature size: Intel does not do Silicon on Insulator, dual stress liners, and a few other things.

  • Re:Sorry what? (Score:3, Interesting)

    by CastrTroy ( 595695 ) on Monday May 14, 2007 @12:15PM (#19115471) Homepage
    Do multiple cores really help things like video rendering that much? Usually multicore means a faster processor overall, so yes it would help, but do you actually get better performance on 4x1GHz than you would on 1x4GHz? If not, then what you're actually looking for is a faster processor, not necessarily multiple cores. Servers need multiple cores because they are often fulfilling multiple requests at the same time. Desktops, on the other hand, are usually only doing one processor-intensive thing at a time, and therefore would probably not benefit as much as you might think from multiple processors/cores. That being said, it's a lot easier to get a 10 GHz computer with 4x2.5GHz CPUs than it is to make a single 10 GHz CPU.
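    The 4x1GHz vs. 1x4GHz question above is exactly what Amdahl's law quantifies: any serial fraction of the work caps the speedup extra cores can give. A minimal sketch (the 90%-parallel figure is just an illustrative assumption, not from the article):

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n),
     * where p is the parallelizable fraction of the work
     * and n is the number of cores. */
    static double amdahl_speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        /* Even a 90%-parallel workload gets only ~3.08x on four cores,
         * so 4x1GHz falls well short of a hypothetical 1x4GHz chip. */
        double s = amdahl_speedup(0.9, 4);
        printf("speedup on 4 cores: %.2f\n", s);
        assert(s > 3.0 && s < 3.1);
        return 0;
    }
    ```

    For a perfectly parallel job (p = 1.0) the speedup is the full 4x, which is why render farms and servers scale so well while single-task desktop work often doesn't.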
  • Re:Support? (Score:3, Interesting)

    by vertinox ( 846076 ) on Monday May 14, 2007 @12:20PM (#19115547)
    Quad core is all well and good, but are there really that many apps as of yet that can take advantage of it?

    Maya 3D

    Or any other 3d rendering software where every CPU cycle is used to the last drop.

    But other than that I can't think of anything off the top of my head. Still, multiple cores are very important to these types of apps. If it's the difference between waiting 12 hours and 6 hours for the project to render, people will go with the 6 hours.
  • by DrMrLordX ( 559371 ) on Monday May 14, 2007 @01:10PM (#19116571)
    Exactly why is AMD a fool to be concerned with "beating Intel at its own game"? Even Intel tried coming out with a revolutionary new CPU architecture, and look where that got them. Itanic has been undermined by Intel's own Xeon processors. The market has spoken, and it wants x86. Not even Intel has been able to change that (yet).

    A smaller firm operating on tighter margins like AMD could easily go belly-up trying to break out with a new CPU microarchitecture. At least Intel could afford all of Itanic's failures.
  • Re:Sorry what? (Score:1, Interesting)

    by ASBands ( 1087159 ) on Monday May 14, 2007 @02:40PM (#19118365) Homepage

    Yes, Windows Vista uses true multi-core optimization (XP SP2's scheduler does it, but Vista's does it better, according to Microsoft) so that when you're converting all your video files from MPEG TS to H.264 on a dual-core processor, you can convert two movies at once and they will end up on different cores. While Windows isn't exactly open-source, the Windows.h file and the .NET framework allow for operations that imply there is a method Windows uses to switch a thread's processor/core based on apartment state, priority and various other important things. I would imagine this operation takes quite a bit of time.

    The Linux processor scheduler isn't as powerful and I cannot seem to find any documentation as to any multi-core optimizations. This isn't a huge deal, as only a few people would really see a difference (multiple number-crunching operations at once - specialty servers, video transcoders, world simulators) and it would most likely take a fundamental change to the way the scheduler works (which was just completely re-written for the 2.6 kernel).

    I think both Windows and Linux could benefit from an operation for dedicating cores. When a thread is created, a function call such as

    bool RequestDedicatedCore(PTHREAD)

    could be made before the thread is started, requesting that the OS dedicate an entire core to that thread. There might be a 5% performance boost (and that's being generous), since the processor registers would not need reloading on thread changes, but even the slight increase would make somebody happy.
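    Something close to that hypothetical RequestDedicatedCore() can be approximated today on Linux with the GNU extension pthread_attr_setaffinity_np(), which sets a thread's core affinity before the thread ever starts. A sketch (note the OS does not actually reserve the core exclusively; other runnable threads may still be scheduled onto it):

    ```c
    #define _GNU_SOURCE
    #include <assert.h>
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static int worker_ran = 0;

    static void *worker(void *arg) {
        (void)arg;
        worker_ran = 1;  /* the number crunching would go here */
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        cpu_set_t set;
        pthread_t tid;

        /* Request core 0 before the thread starts, in the spirit of
         * the hypothetical RequestDedicatedCore() above. */
        CPU_ZERO(&set);
        CPU_SET(0, &set);

        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

        assert(pthread_create(&tid, &attr, worker, NULL) == 0);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);

        printf("worker ran: %d\n", worker_ran);
        return 0;
    }
    ```

    On Windows the roughly analogous call is SetThreadAffinityMask(); either way the kernel only restricts where the thread may run, it doesn't evict other work from the core.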
  • Re:Sorry what? (Score:3, Interesting)

    by robi2106 ( 464558 ) on Monday May 14, 2007 @03:11PM (#19118965) Journal
    Oh yes. The improvement is easily more than double. I have a P4HT 3.xGHz alienware and a 2GHz T2700 Core 2 Duo made by Lenovo (IBM Thinkpad). Both have 2GB of RAM, though the Alienware has a RAID0 storage system.

    But the Core 2 Duo is easily 2 times as fast to render AND is far superior when previewing video with lots of color correction or lots of layers of generated media (movie credits or text overlays are particularly harsh because of all the alpha blending for each source). The P4 system struggles to play native HD footage (m2t) at 1/2 resolution while the Core 2 Duo has no problem. Remember that native HD stored in the m2t file is highly compressed so just viewing the footage is very taxing on the CPU-to-RAM bus as well as the HD.

    My render app is Sony Vegas (not the cheap movie studio version) which is fully multi-core / multi-cpu aware. You can even set the # threads to take advantage of quad CPU systems. Vegas is entirely CPU dependent (unlike Edius / Avid which have hardware render assist).

    But as other posts mention, aside from the multimedia creators no one will need multi cores. Well, unless they are running Aero on Vista. Good luck with that beast.

    Side note: most media creators cannot move to Vista yet because of how direct sound access is blocked. Sony pretty much says stay away for now.
  • Re:Sorry what? (Score:3, Interesting)

    by tomstdenis ( 446163 ) on Monday May 14, 2007 @04:04PM (#19120075) Homepage
    Linux is aware of SMT, multi-core, SMP and combinations thereof. It calculates migration costs for moving processes around and the like. For example, in a NUMA-aware setup, it won't migrate a process to a different NUMA zone unless it has to. It does round-robin processes through cores, though, kinda like load-balancing [keeps the heat down too].

    You can extract CPU info from the /sys dir, and use sched_setaffinity() to lock your threads to a given core if you want.
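    A minimal sketch of the sched_getaffinity()/sched_setaffinity() calls mentioned above (pid 0 means the calling thread; core 0 is assumed to exist, which it always does):

    ```c
    #define _GNU_SOURCE
    #include <assert.h>
    #include <sched.h>
    #include <stdio.h>

    /* How many cores is the calling thread currently allowed to run on? */
    static int allowed_cores(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        if (sched_getaffinity(0, sizeof(set), &set) != 0)
            return -1;
        return CPU_COUNT(&set);
    }

    /* Lock the calling thread to a single core. Returns 0 on success. */
    static int pin_to_core(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return sched_setaffinity(0, sizeof(set), &set);
    }

    int main(void) {
        printf("allowed cores before pinning: %d\n", allowed_cores());
        if (pin_to_core(0) == 0)
            printf("pinned; allowed cores now: %d\n", allowed_cores());
        assert(allowed_cores() >= 1);
        return 0;
    }
    ```

    Topology details (which logical CPUs share a package or cache) live under /sys/devices/system/cpu if you want to pick the core intelligently rather than hard-coding it.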

