AMD Hardware

Inside AMD's Phenom Architecture 191

An anonymous reader writes "InformationWeek has uncovered documentation that provides some details behind today's hype over AMD's announcement of its upcoming Phenom quad-core (previously code-named Agena). AMD's 10h architecture will be used in both the desktop Phenom and the Barcelona (Opteron) quads. The architecture supports wider floating-point units, can fully retire three long instructions per cycle, and has virtual machine optimizations. While the design is solid, Intel will still be first to market with 45nm quads (AMD's first will be 65nm). Do you think this architecture will help AMD regain the lead in its multicore battle with Intel?"
This discussion has been archived. No new comments can be posted.

  • What?! (Score:4, Funny)

    by rumith ( 983060 ) on Monday May 14, 2007 @10:44AM (#19114951)
    From the TFA:

    However, the dual-core duel became, and remains a performance battle. AMD was widely perceived to have taken an initial lead. Intel was seen as recovering the advantage when its introduced its Core 2 Duo family in mid 1996.
    Looks like it happened in a parallel universe.
  • Sorry what? (Score:5, Insightful)

    by tomstdenis ( 446163 ) <tomstdenis&gmail,com> on Monday May 14, 2007 @10:45AM (#19114959) Homepage
    I had a 2P dual-core opteron 2.6GHz box as my workstation for several months. To be honest I couldn't really find a legitimate use for it. And I was running gentoo and doing a lot of my own OSS development [re: builds].

    While I think quad-cores are important for the server rooms, I just don't see the business case for personal use. It'll just be more wasted energy. Now if you could fully shut off cores [not just gate off] when they're idle, then yeah, hey bring it on. But so long as they sit there wasting 20W per core or whatever at idle, it's just wasted power.

    To get an idea of it, imagine turning on a CF lamp [in addition to the lighting you already have] and leaving it on 24/7. Doesn't that seem just silly? Well, that's what an idling core will look like. It's in addition to the existing processing power and just sits there wasting watts.

    Tom
    • Re:Sorry what? (Score:5, Insightful)

      by LurkerXXX ( 667952 ) on Monday May 14, 2007 @10:50AM (#19115043)
      Certain apps get a big boost from quad cores; lots of others don't. Some of those apps aren't for servers. For example, if you happen to do a ton of video editing, a quad core might be a good choice. I'll agree with you that for most of us it's silly on the desktop right now. That won't necessarily be true in a few years when they write a lot more apps that need and take advantage of multithreading.
      • Re: (Score:3, Interesting)

        by CastrTroy ( 595695 )
        Does multi-cores really help things like video rendering that much? Usually multicore means faster processor so yes it would help, but do you actually get better performance on 4x1GHz than you would on 1x4GHz? If not, then what you're actually looking for is a faster processor, not necessarily dual core. Servers need multiple cores because they are often fulfilling multiple requests at the same time. Desktops on the other hand are usually only doing 1 processor intensive thing at a time, and therefore,
        • Re:Sorry what? (Score:4, Insightful)

          by LurkerXXX ( 667952 ) on Monday May 14, 2007 @11:31AM (#19115745)
          That being said, it's a lot easier to get a 10 GHz computer with 4x2.5GHz CPUs, than it is to make a single 10 GHz CPU.

          That's the entire answer right there.
        • Re: (Score:2, Insightful)

          Well, there's the copious amount of per-core cache. That helps. Then there's the fact that it's a hell of a lot cheaper to make four parts that run at 2 GHz than one part that runs at 8GHz. (Like, it can't be done right now.)
        • Re:Sorry what? (Score:4, Insightful)

          by somersault ( 912633 ) on Monday May 14, 2007 @12:09PM (#19116551) Homepage Journal
          Also your computer tends to be doing quite a lot in the background (especially with lots of 3rd party crapware/virus scanners/firewalls loaded onto it) rather than just running whatever app you currently want to be using. It's nice to be able to experience the full potential of one core in the app that you do want to use while leaving another core to handle background services, though I don't know if Windows automatically organises processor time to do that kind of thing, and I've never tried splitting my tasks over my 2 cores manually. I guess my system is nippier than my old single core one, though the thing is that you tend not to notice stuff that *isn't* there (ever got a shiny new graphics card and just been like "oh.. everything's the same but without the slowdowns!" .. can be kinda anticlimactic!)
          • The real bottleneck for all that Background Unintelligent Transfer Task crap isn't the Processing Unit Scheduling System Yield, it's the Disk Interface and Controller Kludge.
          • Re: (Score:3, Insightful)

            by MrNemesis ( 587188 )
            It'd be nice if things worked like that, but 90% of the time you're bottlenecked on I/O anyway (usually swapping due to insufficient memory to run all those craplets) and you're hard pressed to take advantage of one core, let alone four of the things. Of course, once everyone has their 1GB+ of RAM then SMP might get a better chance to shine...

            I've been telling people not to bother buying fast processors for years now, unless I know they're heavily into their gaming or media editing. Every pound they don't sp
        • by Kjella ( 173770 )
          Does multi-cores really help things like video rendering that much?

          Yes.

          Usually multicore means faster processor so yes it would help, but do you actually get better performance on 4x1GHz than you would on 1x4GHz? If not, then what you're actually looking for is a faster processor, not necessarily dual core.

          Nothing will ever run faster on 4x1GHz than on 1x4GHz. But it might be related to the fact that the former is:
          1) Possible (show me a 10GHz processor, whereas 4x2.5GHz is possible. Must have same IPC)
          2) Mo
        • Re: (Score:3, Interesting)

          by robi2106 ( 464558 )
          Oh yes. The improvement is easily more than double. I have a P4HT 3.xGHz Alienware and a 2GHz T2700 Core 2 Duo made by Lenovo (IBM ThinkPad). Both have 2GB of RAM, though the Alienware has a RAID0 storage system.

          But the Core 2 Duo is easily 2 times as fast to render AND is far superior when previewing video with lots of color correction or lots of layers of generated media (movie credits or text overlays are particularly harsh because of all the alpha blending for each source). The P4 system struggles t
      • by Nevyn ( 5505 ) *

        That won't necessarily be true in a few years when they write a lot more apps that need and take advantage of multithreading.

        I've heard people saying that for years. Explicit threading has been around for a long time now, and even in the server space I still see pretty much no one doing it "well". If you think all the desktop people are going to magically "get it" in the next 5, then good luck ... personally my next desktop machine is going to have two cores (mainly so that when one task goes nuts, it onl

    • Re:Sorry what? (Score:4, Informative)

      by Applekid ( 993327 ) on Monday May 14, 2007 @10:56AM (#19115163)
      According to a writeup on HardOCP back in September, the new design features the ability to pretty much halt cores on-die and save power [hardocp.com]. (Hit next a few times; I wish I could get my hands on the actual PowerPoint.)
    • Re:Sorry what? (Score:5, Informative)

      by TheThiefMaster ( 992038 ) on Monday May 14, 2007 @11:04AM (#19115295)
      My workstation is a core 2 quad, and a full debug build of our project takes 20 minutes, despite using a parallel compiler. On a single core it takes about an hour. You don't want to know how long the optimised build takes on one core.

      So there are plenty of workstation uses for a quad core, but I agree that at the moment it's overkill for a home desktop.
      • less power (Score:5, Insightful)

        by twistedcubic ( 577194 ) on Monday May 14, 2007 @11:16AM (#19115487)
        Actually, I just got a 65W Athlon X2 4600+ from Newegg which uses less power than my current 6-year-old Athlon XP 1800+. The motherboard I ordered (ECS w/ ATI 690G) is supposedly also energy-efficient. I guess I could save $60 by getting a single core, but almost all single-core Athlons are rated at more than 65W. Why buy a single core when it costs more long term and is slower when multi-tasking?
      • by Chirs ( 87576 )
        Heh...our full build takes about 6 hrs on a quad. Full kernel/rootfs built from scratch for 7 separate boards using 4 architectures.
    • Someone makes this same comment every time advances in CPU technology are mentioned.
    • Re:Sorry what? (Score:5, Informative)

      by rrhal ( 88665 ) on Monday May 14, 2007 @11:30AM (#19115733)

      While I think quad-cores are important for the server rooms, I just don't see the business case for personal use. It'll just be more wasted energy. Now if you could fully shut off cores [not just gate off] when they're idle, then yeah, hey bring it on. But so long as they sit there wasting 20W per core or whatever at idle, it's just wasted power.

      AMD's cool & quiet tech will shut down individual cores when you are not using them. I believe this is all new for the Barcelona. It idles down cores when you are not using them fully. It shuts off parts of cores that you aren't using (eg the FPU if you are only using integer instructions).


      • AMD's cool & quiet tech will shut down individual cores when you are not using them. I believe this is all new for the Barcelona. It idles down cores when you are not using them fully. It shuts off parts of cores that you aren't using (eg the FPU if you are only using integer instructions).

        According to the last picture [imageID=9] in the Image Gallery, different cores on the same chip can run at different voltages and clock speeds:

        http://www.informationweek.com/galleries/showImage.jhtml?gall [informationweek.com]
        • You read wrong. It says voltages are locked to the highest utilized core's p-state. So while the frequency will change, the voltage won't, and yes, that will still result in saving power.

          I should point out that Intel's Core 2 Duos can do this already.

          Tom
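
          (For the curious: a rough way to watch per-core frequency scaling on a Linux box is the cpufreq sysfs interface. This is only a sketch; whether those files are exposed depends on the kernel and the driver for your board.

              # current clock (reported in kHz) and governor for each core
              grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
              grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
              # governors such as "ondemand" or "powersave" control how far
              # idle cores are clocked down

          If idle cores really are dropped to a lower p-state, their reported frequencies fall while the loaded core stays at full clock.)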
    • I just don't see the business case for personal use.

      Depends on the business. I can definitely see these being useful in the financial segment or in animation studios. But if you're comparing it to Boss X reading email and loading spreadsheets, then I agree it's overkill.
    • Uh... (Score:3, Informative)

      I had a 2P dual-core opteron 2.6GHz box as my workstation for several months. To be honest I couldn't really find a legitimate use for it. And I was running gentoo and doing a lot of my own OSS development [re: builds].


      Uh, doesn't "make -j 3" give you a good speedup? I'd imagine multi-core being great for development, at least for compiled languages.
    • I had a 2P dual-core opteron 2.6GHz box as my workstation for several months. To be honest I couldn't really find a legitimate use for it. And I was running gentoo and doing a lot of my own OSS development [re: builds].

      man make

      -j [jobs], --jobs[=jobs]
      Specifies the number of jobs (commands) to run simultaneously. If
      there is more than one -j option, the last one is effective. If
      the -j option is given without an argument, make will not limit
      the number of jobs that can run simultaneously

      • You're like the 9th person to point that out. Yes, I know about parallel builds. You don't build a $7000 workstation without knowing the basics of how to use build tools like make.

        My point though is that unless you're doing builds 24/7 it's just not worth it. That Opteron box can build LTC in 8 seconds. A decent dual core can do it in 14 seconds. A single core can do it in ~30 seconds. The single-core box can also use a lot less power.

        I think a reasonable tradeoff for developers is a dual-core box. But for mo
    • Re: (Score:3, Informative)

      by Oddster ( 628633 )
      I work in the games industry, and I assure you, the industry is moving towards taking full advantage of multi-core machines. In fact, the move is good, because it coincides well with the XBox 360 and the PS3 - the 360 has 3 hyper-threaded cores, with 5 hardware threads available for the game, and 1 for the OS. The PS3 has the central processor, and 7 coprocessors which all run independently. PC Hardware moving in this same parallelization direction makes life a little bit easier for game software develop
    • If you are doing builds, you will be gaining a lot out of that CPU if you do multi-threaded compiling. I wouldn't mind using your machine to build binaries for my box; it should build a lot faster.
    • by Pulzar ( 81031 )
      But so long as they sit there wasting 20W per core or whatever at idle, it's just wasted power.

      Well, that's where AMD shines -- the current 65nm X2s idle at under 4W, and that's for 2 cores... So, each idles at 2W. Yeah, they are still wasting power, but not nearly as much as you make it sound. That's 17 kWh per year if you run a core *all the time*, or about $1 a year in electricity.

      Mind you, Intel, idling at 3-4 times that power, is still "free" for even your high-end home user with a couple of computers ru
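
      (Back-of-the-envelope check of that figure, as a sketch; the idle wattage and the electricity rate here are assumptions, not measurements:

          watts=2; hours=8760; rate=0.10   # rate in $/kWh, assumed
          awk -v w="$watts" -v h="$hours" -v r="$rate" \
              'BEGIN { kwh = w*h/1000; printf "%.1f kWh/year, $%.2f/year\n", kwh, kwh*r }'

      At 2W idle that works out to roughly 17.5 kWh and under two dollars a year, in the same ballpark as the numbers above.)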
  • by Smidge204 ( 605297 ) on Monday May 14, 2007 @10:46AM (#19114991) Journal
    Ultimately, it's performance that makes a successful product, not gigahertz or nanometers.

    Sure, the 45nm process has great potential for better performance and higher efficiency, just like faster clock speeds had great potential - until AMD made a better architecture and achieved better performance at a lower clock speed than Intel's offerings at the time.

    Let's wait and see how it really performs before passing judgement.
    =Smidge=
    • by Anonymous Coward on Monday May 14, 2007 @10:55AM (#19115143)
      So what you're saying: size doesn't matter?
    • Re: (Score:3, Interesting)

      by Eukariote ( 881204 )

      Indeed, let's wait for the benchmarks. I would like some more real-world and 64-bit benchmarks: most recent reviews seem to have studiously avoided those in favor of synthetic 32-bit-only benchmarks that are not very representative and are easily skewed with processor-specific optimizations.

      And I'm not sure going to a 45nm process will allow Intel to step back ahead. It seems process improvements have been yielding diminishing results in performance-related areas. Transistor density will go up, though, so

    • by Belial6 ( 794905 )
      Exactly. If AMD can make a faster/cooler processor at 65nm than Intel can at 45nm, AMD has the better processor. This is particularly true for the long run, as Intel is closer to hitting the size wall than AMD is.
  • Support? (Score:2, Interesting)

    by Sorthum ( 123064 )
    Quad core is all well and good, but are there really that many apps as of yet that can take advantage of it? TFA claims this is for servers and for desktops, and I'm not certain of its utility on the latter just yet...
    • Re: (Score:2, Insightful)

      by EvanED ( 569694 )
      MAKE -j6.

      Mmmmmmmm....

      (-j6 instead of -j4 in an effort to counter I/O latencies... Actually that'd be an interesting benchmark; figure out what the optimum level of parallelism is. Too little and processors will be idle, too much and context switches would become an issue.)
      • Re:Support? (Score:5, Informative)

        by Mr Z ( 6791 ) on Monday May 14, 2007 @12:22PM (#19116837) Homepage Journal

        Prevailing wisdom and personal experience suggest using "-j N+1" for N CPUs. I have a 4 CPU setup at home (dual dual-core Opterons). Here are approximate compile times for jzIntv + SDK-1600, [spatula-city.org] which altogether comprise about 80,000 lines of source:

        • -j4: 6.72 seconds
        • -j5: 6.55 seconds
        • -j6: 6.58 seconds
        • -j7: 6.59 seconds
        • -j8: 6.69 seconds

        Now keep in mind, everything was in cache, so disk activity didn't factor in much at all. But, for a typical disk, I imagine the difference between N+1 and N+2 to be largely a wash. N+1 seems to be the sweet spot if the build isn't competing with anything else. Larger increments might make sense if the build is competing with other tasks (large background batch jobs) or high-latency disks (NFS, etc.). But for a local build on a personal workstation? N+1.

        --Joe
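
        (To reproduce this kind of sweep on your own project, here's a minimal sketch; it assumes the tree has a "clean" target and that the default target builds everything:

            #!/bin/bash
            # time a clean build at several -j levels to find the sweet spot
            for j in 4 5 6 7 8; do
                make -s clean
                echo "-j$j:"
                TIMEFORMAT='  %R seconds'
                time make -s -j"$j" > /dev/null
            done

        Run it twice and keep the second set of numbers, so everything is warm in the page cache as it was for the timings above.)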
        • I've found kernel builds mirror those results on my dual 3.06GHz Xeon workstation: with hyperthreading enabled, -j5 gives me the best performance.
        • by renoX ( 11677 )
          Could you explain why N+1 is the 'sweet spot'?

          I would have expected N to be the right choice, not N+1..
          • Re: (Score:3, Informative)

            by Mr Z ( 6791 )

            Happy to. At various points, one or more of the processes will be blocked in I/O. With N+1 tasks running, there's a higher likelihood that all N CPUs will be busy, despite the occasional I/O waits in individual processes. With only N tasks running, an I/O wait directly translates into an idle CPU during that period.

            --Joe
          • by Mr Z ( 6791 )

            Oh, and I should add, as you add more processes, you spend more time context switching and you pollute the caches more, so it's a tradeoff. That's why performance falls as you go to higher and higher parallelism. At very high parallelism, you can go off a cliff if you exceed the available system RAM. That's why kernel devs like to do "make -j" on the kernel as a VM stress test.

            --Joe
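
            (A one-liner version of the N+1 rule, as a sketch; getconf is used here because it's available on most Linux systems:

                make -j"$(( $(getconf _NPROCESSORS_ONLN) + 1 ))"

            _NPROCESSORS_ONLN reports the number of online CPUs, so this launches 5 jobs on a quad.)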
    • Re: (Score:3, Informative)

      by homer_ca ( 144738 )
      Photo and video editing parallelize nicely. Besides gaming, those are the only CPU-intensive tasks that most home computers will run. On the gaming side, most games don't run any better on quad core, but Supreme Commander [hardocp.com] is one of the few that do.
      • by gravesb ( 967413 )
        With consoles becoming multi-core, won't the video game industry have to learn how to better write games that take that into account? Before, most of their audience was single CPU computers (with a GPU of course) and consoles. However, now that most computers are multi, as are consoles, it seems like they have to better use that power. Of course, it may take a few years before they figure out the best way to do so, and apply it consistently.
    • Re: (Score:3, Interesting)

      by vertinox ( 846076 )
      Quad core is all well and good, but are there really that many apps as of yet that can take advantage of it?

      Maya 3D

      Or any other 3d rendering software where every CPU cycle is used to the last drop.

      But other than that I can't think of anything off the top of my head; still, multiple cores are very important to these types of apps. If it's the difference between waiting 12 hours and 6 hours for a project to render, people will go with the 6 hours.
    • Re: (Score:3, Insightful)

      by QuasiEvil ( 74356 )
      So I suppose whatever OS you're using only has one thread/process running at a time? I've never understood the argument that multi-core doesn't benefit the desktop user. As I look at my machine right now, I have two development environments going (one actually in debug), four browser windows, an email client, an IM client, various background junk (virus scanner, 802.1x client for the wireless), and of course the OS itself - XP. None of those needs a more powerful proc, but it's nice when they're all grab
      • Well, the thing to remember is that for most people's usage patterns they will have many processes running at once, but usually only one of them will be CPU-limited. Nevertheless, the argument for dual-core desktops is a slam dunk: one core to run whatever CPU-intensive task you want, one core to run everything else. The performance of the CPU-bound app goes up while the responsiveness of every other task also improves.

        Quad core becomes a little tricky. When one spare cpu can run every background task wi
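
        (If you want to try that split by hand on Linux, taskset from util-linux will pin a process to a core. A rough sketch; "big_encode_job" and the PID are placeholders:

            # pin the heavy job to core 0, leaving core 1 for everything else
            taskset -c 0 big_encode_job &
            # or move an already-running process onto core 0 by PID
            taskset -pc 0 12345

        Whether that actually beats letting the scheduler sort it out is another question.)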
      • by Rolgar ( 556636 )
        Agreed. I've been a Firefox user since version 0.6, and the only thing I've ever disliked was the interface freezing when I load a group of bookmarks together. Last week, I upgraded from an Athlon 64 2400+ with Debian i386 to an Athlon X2 3800+ on Debian AMD64, and I can switch tabs within 2 seconds of clicking on the folder, as opposed to the 10 or more seconds I would wait before. Previously, it would take me minutes to get all of my links loading; now I can be done in about 10 seconds (3 or 4 folders of
    • by TopSpin ( 753 ) *

      but are there really that many apps as of yet that can take advantage of it?

      Desktop apps that can leverage quad-core... Hmm, let's see:

      • Several high end image and video processing tools.
      • Development tools (parallel compilers, etc.)
      • Virtualization.
      • Some very popular games [techreport.com].
      • Most contemporary operating systems.

      Intel released its first Quad almost 6 months ago and by all accounts there are plenty of customers. So, either you're correct and these buyers are morons making ~$300 [1] mistakes or you're wrong and people with the dough to pay for it actually need [2] it.

      Which do you think it is

    • What I don't get is the rationale behind claims that the software market is not ready for quad cores. When so many apps are running a LOT of simultaneous threads (iTunes itself runs about 4 or 5), why can these cores not be made use of?
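
      (If you're curious how threaded your desktop apps already are, on Linux ps can report a per-process thread count; the process name below is just an example:

          # NLWP = number of threads in the process
          ps -o nlwp= -C firefox-bin

      Most GUI apps turn out to run a handful of threads even when they look idle.)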
  • by Eukariote ( 881204 ) on Monday May 14, 2007 @10:53AM (#19115099)
    When it comes to multi-processing scalability, AMD's Barcelona/10h/Phenom single-die four-core with HyperTransport inter-chip interconnects will do far better than the two-die four-core shared-bus Intel chips. Also, both the old and new AMD architectures will do relatively better on 64-bit code than the Intel Core 2 architecture: Intel's micro-op fusion does not work in 64-bit mode, and their 64-bit extensions are a relatively recent add-on to the old Core architecture. The FPU power of the new 10h architecture will be excellent as well. On the other hand, Intel chips will remain very competitive on integer code and cache-happy benchmarks, particularly when run in 32-bit mode. Also, the SSE4 extensions of the upcoming 45nm Intel chips will help for encoding/decoding and some rendering applications, provided that the software has been properly optimized to take advantage of them.
  • Core 2 Duo? (Score:4, Funny)

    by ratboy666 ( 104074 ) <fred_weigel AT hotmail DOT com> on Monday May 14, 2007 @11:14AM (#19115443) Journal
    1996? Wow, have *I* been misled. Mid 1996 is the vintage of my Dual Pentium Pro 200MHz, and I *really* thought that it was state-of-the-art.

    Colour me disappointed...
  • It seems that AMD's research department is only concerned with beating Intel at its own game. This is foolish, IMO. AMD is doomed to always be a follower unless its engineers can come up with a revolutionary new CPU architecture based on a revolutionary software model. The new architecture must address the two biggest problems in the computer industry today: reliability and productivity. Unreliability puts an upper limit on how complex our software systems can be. As an example, we could conceivably be ridi
    • Re: (Score:2, Interesting)

      by DrMrLordX ( 559371 )
      Exactly why is AMD a fool to be concerned with "beating Intel at its own game"? Even Intel tried coming out with a revolutionary new CPU architecture, and look where that got them. Itanic has been undermined by Intel's own Xeon processors. The market has spoken, and it wants x86. Not even Intel has been able to change that (yet).

      A smaller firm operating on tighter margins like AMD could easily go belly-up trying to break out with a new CPU microarchitecture. At least Intel could afford all of Itanic's f
      • Itanic has been undermined by Intel's own Xeon processors. The market has spoken, and it wants x86. Not even Intel has been able to change that (yet).

        This probably has more to do with the fact that IA64 was garbage than any inherent attachment to x86. Microsoft even went to great lengths to support it, which is much more than you can say for SPARC or POWER. There's plenty of room, especially in the *nix server market, for processors unrelated to x86. With Linux or the BSDs, all you really need to do is sen

    • Re: (Score:2, Informative)

      by Anonymous Coward
      wait.. What? AMD is following Intel? Mind telling me how that is exactly? IIRC, both Intel and AMD are using the 64-bit extensions that... guess who... AMD made on their Athlon 64 processors first. Also, AMD was first to move their processors away from a shared bus. The reason why they say their processors are "True" dual or quad core is because their architecture was designed better to scale. Take a look at the multi-processor benchmarks compared to NetBurst, and even take a look at how much better A
      • wait.. What? AMD is following Intel? Mind telling me how that is exactly?

        They are following because they are barely making a profit, the last I heard. Why? Because they have to drastically cut prices to compete head-on with Intel. With a new architecture and a new market niche (mostly embedded systems and mission-critical systems), they would leave Intel in the dirt. The desktop market would follow soon afterwards when the industry comes to its senses and realizes that it has been doing it wr
        • They are following because they are barely making a profit, the last I heard.

          There's a semantic distinction between "following" and "trailing". "Following" is doing whatever your competitor is doing, after they have done it. "Trailing" is merely being behind, as in a race or other competition. AMD may be trailing due to being unprofitable and losing some market share; however, this does not indicate that they are following Intel.

          It seems you want AMD to make a completely new architecture (microarch or ISA?
    • "The new architecture must address the two biggest problems in the computer industry today: reliability and productivity."

      The Actor model is what you ask for.
    • Re: (Score:2, Insightful)

      by agg-1 ( 916902 )
      I hate to break the news to you, but your proposed "silver bullet" is hardly something new. Synchronous dataflow has been with us at least since the 1970s. It's great for designing hardware, DSP software and other simple kinds of algorithms, but as a panacea for all the diseases of the software world? I wish I had some of the stuff that you're smoking. :)

      Now, asynchronous dataflow (with the appropriate support for dealing with complex data structures) might actually be helpful to slash some of the comple
  • What's new in the article? All this was announced at the conference in Germany in January!!! Why is Slashdot even posting this? The only interesting thing is that the hype has made AMD's share price go up 10% in the last two sessions :)
  • They seem a bit slippery there. When will the Barcelona Opterons ship? Anyone know?
    • Re: (Score:2, Informative)

      by DrMrLordX ( 559371 )
      You might be able to get them in Q4 2007 [wikipedia.org]. With launch dates of August 2007, we'll probably see the actual chips in retail channels by October. OEMs/builders should have products featuring the new Opterons much earlier.
