AMD Hardware

65nm Athlons Debut With Lower Power Consumption

TheRaindog writes "AMD has finally rolled out Athlon 64 X2 processors based on 65nm process technology, and The Tech Report has an interesting look at their energy usage and overclocking potential compared to current 90nm models. The new 65nm chips consume less power at idle and under load than their 90nm counterparts, and appear to have plenty of headroom for overclocking. An Athlon 64 X2 5000+ that normally runs at 2.4 GHz was taken all the way up to 2.9 GHz with standard air cooling and only a marginal voltage boost, suggesting that we may see faster chips from AMD soon."

  • HTPC (Score:5, Interesting)

    by tedgyz ( 515156 ) * on Thursday December 21, 2006 @09:27AM (#17324480) Homepage
    The little gem in this story is the Athlon 64 X2 3800+ EE SFF 2.0GHz. At 35W, that sounds like a perfect CPU choice for a super-silent HTPC.
    • If your HTPC needs that much processing power from its CPU, you're doing it wrong.
      • Re: (Score:3, Informative)

        by Mayhem178 ( 920970 )
        What, your HTPC can't render Final Fantasy: The Spirits Within on the fly? Lame. ;)

        Okay, no, seriously. I have an Athlon X2 3800, and it runs deathly quiet for any operation I've thrown at it. Considering that the machine I have it in is my primary gaming PC, I'd say that's noteworthy. And I've never noticed any great amount of heat production, either.
      • Re:HTPC (Score:5, Insightful)

        by Neon Spiral Injector ( 21234 ) on Thursday December 21, 2006 @09:46AM (#17324664)
        How do you suggest that one decode 1080i H.264 transport streams with AC3 5.1 audio? This processor may be slightly more than required, but not by much.
        • by Noehre ( 16438 )
          Why would you decode the AC3? Use passthrough.
          • Re:HTPC (Score:5, Interesting)

            by Neon Spiral Injector ( 21234 ) on Thursday December 21, 2006 @10:27AM (#17325084)
            OK, I'll give you that. But HD H.264 requires a huge amount of CPU to decode. My current dual 1.6 GHz Opteron system can't do it in real time. Doesn't even come close.

            So I was thinking the same thing about this new chip. It sounds pretty close to what I was wanting.
            • by eno2001 ( 527078 )
              What OS and software are you using? I'm able to do it fine on a P4 (with HT) that I bought in 2002, running Gentoo Linux. DV in from a 1080i Sony Handycam works great. I can also easily play back 1080i video files. I've got a WMV file in 1080i and it looks great and plays without hassle, without even making the CPU break a sweat, using Xine.
              • by Tack ( 4642 )
                GP is talking about H.264.
                • 3.4GHz Prescott, 1GB DDR2-533: no issue with H.264 1080p decode; encodes at about 20% of realtime.
                  One up:
                  PIII 550, 512MB PC100, and a PCI Virtex FPGA: no issue with decode and encode of 1080p, both (just) at realtime; wiggle the mouse and there may be jitter.
                  Next up:
                  Spartan FPGA in a PCIe socket with a Core 2 Duo EE and 4GB RAM: should be capable of 1080p encode at 4x realtime. (Just need money. :-)
                  -nB
              • Re: (Score:3, Informative)

                It is just this specific codec; any ffmpeg-based player on either Linux or Windows just dies on 1080i H.264. 720p H.264 is fine, as is 1080i MPEG2. I also have some 1080p WMVs that play fine.
                • by eno2001 ( 527078 )
                  Hmmm... H.264 Quicktime files in Cinelerra seem to work OK from the DV I pull in. But I create them myself, so I can't speak for non-Cinelerra-generated content. And only Cinelerra seems to be able to play them back, not Xine. But that wouldn't seem to be a CPU issue. It seems more like a codec problem.
                  • Re: (Score:3, Interesting)

                    Quicktime only seems to use a subset of the features of H.264. I can easily create videos that play fine with ffmpeg, but are a corrupted mess with the Quicktime player.
                    • Quicktime only seems to use a subset of the features of H.264.

                      Yeah, it's capable of the Baseline profile and partially supports the Main profile. Quicktime doesn't support any of the following:

                      • CABAC
                      • Bidirectional prediction
                      • Macroblock partitions
                      • Weighted prediction
                      • Deblocking

                      You can turn those off in Nero Recode's Standard-AVC profile to make a Quicktime compatible video, or follow this guide [doom9.org] for encoding with x264.
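
                      For illustration, here's a sketch (Python, shelling out to the x264 command-line encoder) of turning off some of the features listed above. The flag set is an assumption based on x264's documented options rather than the doom9 guide's exact settings, and the file names are placeholders.

                      ```python
                      import subprocess

                      # Hypothetical sketch: produce an H.264 stream without CABAC,
                      # B-frames (bidirectional prediction), or the in-loop deblocking
                      # filter -- three features Quicktime reportedly chokes on.
                      subprocess.run([
                          "x264",
                          "--no-cabac",      # CAVLC entropy coding instead of CABAC
                          "--bframes", "0",  # no B-frames, so no bidirectional prediction
                          "--no-deblock",    # disable the in-loop deblocking filter
                          "--output", "qt_compatible.264",
                          "input.y4m",       # placeholder source file
                      ], check=True)
                      ```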

                      Quicktime also obviously doesn't support High profile. A full list of the features it supports

            • My current dual 1.6 GHz Opteron system can't do it in real time. Doesn't even come close.

              CoreAVC's requirements for 1080p24 [coreavc.com] are:
              # 2.8 GHz Pentium 4 or faster processor
              # At least 1GB of RAM
              # 256MB or greater video card

              So if you have a good video card, I don't see why your dual Opteron couldn't do it with CoreAVC. Quicktime is a different story though. But Quicktime has the worst performance of practically any H.264 player/decoder.

              • CoreAVC's requirements for 1080p24 are:
                # 2.8 GHz Pentium 4 or faster processor

                CoreAVC is cheating, though nobody has figured out quite how yet. It doesn't decode H.264 video nearly bit-exact, the way other codecs do.

                You can demonstrate this by comparing the checksums of frames decoded with CoreAVC against the checksums of the same video decoded by anything else.

                It's safe to say CoreAVC is lower quality, as well as closed source, non-free, etc.
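
                Here's a minimal sketch of that comparison (Python; it assumes ffmpeg is installed and that the second decoder can dump raw 4:2:0 frames to stdout from the command line, which CoreAVC, being a DirectShow filter, can't do directly, so "other_decoder" is purely hypothetical).

                ```python
                import hashlib
                import subprocess

                FRAME_BYTES = 1920 * 1080 * 3 // 2  # one raw 1080p YUV 4:2:0 frame

                def frame_md5s(cmd):
                    """Hash each raw frame a decoder writes to stdout."""
                    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
                    hashes = []
                    while True:
                        frame = proc.stdout.read(FRAME_BYTES)
                        if len(frame) < FRAME_BYTES:
                            break
                        hashes.append(hashlib.md5(frame).hexdigest())
                    proc.wait()
                    return hashes

                # ffmpeg's rawvideo output serves as the reference decode.
                a = frame_md5s(["ffmpeg", "-v", "quiet", "-i", "clip.h264",
                                "-f", "rawvideo", "-"])
                b = frame_md5s(["other_decoder", "clip.h264"])  # hypothetical CLI
                print(sum(x != y for x, y in zip(a, b)), "frames differ")
                ```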
                • A checksum wouldn't be quite fair. Lossy compression standards are often specified as a bitstream, not as an encoder or decoder, so it is left open to interpretation exactly how the samples should be reconstructed. Slightly different decoding by a different program will yield vastly different checksums. So what CoreAVC is doing isn't exactly against the spec.

                  It would be interesting to see a visual difference of identical frames one decoded by ffmpeg and the other by CoreAVC. That would g
        • by Fweeky ( 41046 )
          Use GPU-accelerated decoding, à la PureVideo [nvidia.com]. Not that I wouldn't also want the CPU power to do it too.
        • You've got a point there. If you're watching Xvid BitTorrent files, any 1GHz machine will do, but my last computer upgrade (from an Athlon XP 2100 to a Sempron 3400) was done specifically because the Athlon XP was choking down to a crawl on Apple's HD movie trailers. Even the 3400 stutters a bit on the really high-res stuff, but it does OK on 720p so I don't worry about it too much.
        • by aliquis ( 678370 )
          Modern graphics card? Or don't they handle that?
      • Maybe he wants to play PC video games on his HD big-screen. If I had an HD big-screen, I'd certainly play a few games on it. :-)

        steve
    • Re: (Score:3, Interesting)

      by javilon ( 99157 )
      Do you know of any video player that will be capable of taking advantage of two processors?

      As far as I know mplayer doesn't, xine doesn't and vlc doesn't.

      • Re:HTPC (Score:5, Funny)

        by Ultra64 ( 318705 ) on Thursday December 21, 2006 @10:25AM (#17325068)
        Right, because it's totally impossible for a computer to run more than one program at a time.
        It's too bad video playback couldn't happen on one CPU while video compression happened on another.
        Someone should invent that. It could be called "Sametime Many Programs", or "SMP" for short.
      • by tlhIngan ( 30335 )

        Do you know of any video player that will be capable of taking advantage of two processors?

        As far as I know mplayer doesn't, xine doesn't and vlc doesn't.

        If VLC doesn't, then something VERY strange is going on.

        In Windows, I use VLC to test out video playback (because it's the only way I can be sure that stuff like FairUse4WM and QTFairUse actually work!). I've decoded 1080p (1440x1080 - strangely, it displays properly on a 1920x1080 panel...) video that consumes about 18-30% CPU (via Windows Task Manager).

      • Do you know of any video player that will be capable of taking advantage of two processors?

        Kind of a funny question. The only reason a person would ask is if a single processor in their machine was too slow to play a video on its own. I've never heard of that. Otherwise, what's the point in using both processors to decode video? Only one processor is required, and the other processor of your SMP system will take care of any other processes that need to run. Splitting a task that requires less than 100%

        • The only reason a person would ask is if a single processor in their machine was too slow to play a video on its own. I've never heard of that.

          Then you don't pay attention, and you've never heard about people working on high-def playback...

          Your entire post is therefore moot.
          • Your entire post is therefore moot.

            Uh, no. One would expect that people working on decoder software that requires multiple CPUs to run in realtime would probably develop it... for multiple CPUs. Not really the case the OP was talking about I think. The question was, why no multicore codecs for common video formats?

            • One would expect that people working on decoder software that requires multiple CPUs to run in realtime would probably develop it... for multiple CPUs.

              That makes no sense whatsoever.

              There is no magic codec that requires X CPU time. As you change the resolution, bitrate, and encoding options, CPU requirements change dramatically.

              The question was, why no multicore codecs for common video formats?

              That isn't even remotely close to the question asked.

              • There is no magic codec that requires X CPU time. As you change the resolution, bitrate, and encoding options, CPU requirements change dramatically.

                Yes, but you can typically say with certainty whether a particular codec will be able to run with reasonable parameters on a single core or not. Can you do realtime decoding of 1080i video on a Pentium 300? Probably not, and who cares?

                That isn't even remotely close to the question asked.

                Riiiight. Let's go back to the original post:

                Do you know of any vid

                  • Yes, but you can typically say with certainty whether a particular codec will be able to run with reasonable parameters on a single core or not.

                  "Reasonable" is entirely in the eye of the beholder.

                  The fact of the matter remains: just because most systems can't play back high-resolution video on a single core does not mean the people writing the codecs are going to make them threaded.

                  The examples given were three common codecs, all of which are known to work fine on a reasonably modern single-core system.

                  WHAT THE HELL AR

                  • I have no idea where you get "known to work fine on a reasonably modern single-core system" from. How did you possibly determine that? How did you reach that conclusion?

                    Why, you're absolutely right. There is no proof that video players work. I've never seen a computer play video, nor have I ever met a person who has. Point conceded.

      • by Spoke ( 6112 )

        Do you know of any video player that will be capable of taking advantage of two processors?

        On the mythtv mailing lists, a number of people have reported better performance (smoother playback, fewer hiccups) when playing HD content on a dual-core processor. Now, this isn't because the player itself can take advantage of multiple CPUs, but because the player uses a good amount of CPU and so does X. Having two cores lets you dedicate a processor to each process, giving you more headroom.

        Having another core is especially important if it's mixed frontend/backend system where the backend may

      • As far as I know mplayer doesn't, xine doesn't and vlc doesn't.

        MPlayer does.

        For any video that can be played by libavcodec (maybe 95% of them, including practically all the common HDTV formats, and otherwise CPU-intensive ones like WMV9 and H.264) you just need to set the -lavdopts threads= option.

        Threads are also supported for encoding, though you inherently get some quality loss by encoding with separate threads, so it's a trade-off, and I'd prefer to stick with one faster core.
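
        A minimal usage sketch of that option (Python; the file name is a placeholder):

        ```python
        import subprocess

        # Ask libavcodec to decode with two threads via MPlayer's
        # -lavdopts threads= option mentioned above.
        subprocess.run(["mplayer", "-lavdopts", "threads=2", "clip-1080i.h264"],
                       check=True)
        ```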

  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Thursday December 21, 2006 @09:28AM (#17324498)
    If you have only one core, you need to rely on the OS to not get in the way of running processes during task switches. With more than one core, processes can be split amongst the cores so that they do not need to be interrupted all the time by the OS timer interrupt handler. The more cores you have, the better you can scale up, even if the cores themselves are slower than a competing single core chip.

    It's like driving down the highway in your train vs riding the rails in your Audi. Sure, you can try to drive the car on the train tracks for a while, but eventually the springs will break and your tires will pop and you end up walking to your final destination. But if you took the train, you'd probably tear up the road and it would take a while since you couldn't get much traction with the large metal wheels, but since you're carrying a whole lot of stuff in the train cars being pulled behind you, your bandwidth / time ratio is very favorable.
    • by Anonymous Coward on Thursday December 21, 2006 @09:33AM (#17324552)
      A bad analogy is like a leaky screwdriver.
    • by KingArthur10 ( 679328 ) <arthur...bogard@@@gmail...com> on Thursday December 21, 2006 @09:42AM (#17324628)
      Concerning your analogy: I was thinking more along the lines that a train runs on a single track and sometimes has to be held up for another train to use the same track. They have some track switching, but most operations are serial. A car on the highway might not be allowed to go as fast as a train, but it's got four lanes to maneuver through. A bunch of cars will reach their destinations faster than a bunch of trains because the trains have to share single tracks often.
    • Re: (Score:3, Interesting)

      by pclminion ( 145572 )
      Concerning the more serious first part of your post, it seems that ideally what you want to do is dedicate one CPU/core to interactive tasks, and another core for batch tasks. That way, the interactive tasks can easily interrupt each other as often as necessary on one CPU, while the other CPU cranks along on the batch tasks with a much longer time quantum without any unnecessary interruptions.
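
      A minimal sketch of that split on Linux (assuming Python 3.3+ for os.sched_setaffinity; the batch command is a placeholder):

      ```python
      import os
      import subprocess

      # Pin a long-running batch job to core 1 and this (interactive)
      # process to core 0, so the batch work never competes with
      # interactive tasks for the same CPU.
      batch = subprocess.Popen(["nice", "-n", "19", "./batch_job.sh"])  # placeholder
      os.sched_setaffinity(batch.pid, {1})  # batch task -> core 1 only
      os.sched_setaffinity(0, {0})          # pid 0 = this process -> core 0
      ```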
  • by joshetc ( 955226 ) on Thursday December 21, 2006 @09:28AM (#17324500)
    Considering my 3800+ X2 runs at 2.8GHz on 1.5V, 2.9GHz really doesn't seem like much for a higher-end model. I'm thinking they'll need overclocks of at least 3.1GHz or so on air to have much of a chance in most high-end enthusiast rigs.
    • Re: (Score:3, Informative)

      by gone9teen ( 958480 )
      You do realize (well, obviously you don't) that the clock speed of a processor means nothing between different models when it comes to performance. A newer 2.0GHz Core 2 Duo processor SMOKES my 3.0GHz Pentium 4.
      • Re:Interesting.. (Score:5, Informative)

        by joshetc ( 955226 ) on Thursday December 21, 2006 @09:55AM (#17324730)
        Duh, all Athlon 64 dual cores to date are nearly identical clock-for-clock, though. This means clock speed does matter. I can't believe you got modded up for making such a shitty assumption on a "geek" website.
        • by Zaatxe ( 939368 )
          I can't believe you got modded up for making such a shitty assumption on a "geek" website.

          Maybe Slashdot's been attracting Digg's readers...
        • Re: (Score:3, Informative)

          by teg ( 97890 )

          Duh, all Athlon 64 dual cores to date are nearly identical clock-for-clock, though. This means clock speed does matter.

          They're almost identical: cache sizes vary and, more importantly, the new ones (65nm) have higher cache latency [anandtech.com].
  • by IPFreely ( 47576 ) <mark@mwiley.org> on Thursday December 21, 2006 @09:41AM (#17324616) Homepage Journal

    Anand [anandtech.com] has a nice review of these new processors, including performance comparisons.

    The surprise is that it was a little slower than its 90nm counterpart. They traced it down to the cache latency going up from the 90nm to the 65nm part.

    Other than that, it looks good.

    • by MrFlibbs ( 945469 ) on Thursday December 21, 2006 @12:22PM (#17326422)
      According to the AnandTech article you referenced, saying that "it looks good" is a bit of an overstatement. Here are a few quotes from the article:

                "It's clear that these first 65nm chips, while lower power than their 90nm
                counterparts, aren't very good even by AMD's standards."

                "Performance and efficiency are still both Intel's fortes thanks to its Core 2
                lineup, and honestly the only reason to consider Brisbane is if you currently
                have a Socket-AM2 motherboard."

      In every single AnandTech benchmark, Intel wins in both raw performance and performance per watt. And if raw power consumption is important to you, the winner was a 90nm AMD SFF part. In no case was a 65nm AMD better at anything.

      The article does point out that a mature 90nm process is being compared to an immature 65nm process and thus future steppings are bound to be better. However, this doesn't change the fact that the current crop of AMD 65nm parts are a major disappointment.
      • by John Jamieson ( 890438 ) on Thursday December 21, 2006 @02:07PM (#17327776)
        Whenever AMD or Intel moves to a new process, they do not expect much from the first cores (they are happy if they get as many working cores from a wafer as they did before, which, if my sources are correct, Intel didn't manage and AMD has).
        A lot of people forget that when Intel moved to 65nm, the new chips were slower in many ways, and the clock speeds were lower than the top-end 90nm P4's.
        By industry standards, these AMD 65nm chips are a SUCCESS.

        My only beef with the 65nm Athlons is that I cannot buy one at Newegg or order one from Dell. In my world, if I cannot order a PC with one, or buy it at Newegg, IT IS A PAPER LAUNCH!

  • I'm thinking of buying a new notebook. When will these be available?
  • by IYagami ( 136831 ) on Thursday December 21, 2006 @09:59AM (#17324778)
    ...But most of the time irrelevant.

    Anandtech has two good reviews here (lower power) [anandtech.com] and here (lower performance) [anandtech.com]

    The main reason is the increase in L2 cache latency from 12 cycles to 20. But in most of the benchmarks the difference is very small.
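
    A back-of-envelope average-memory-access-time (AMAT) calculation shows why: the extra 8 cycles only apply to L1 misses. The hit rates and latencies below are assumed, illustrative numbers, not measurements.

    ```python
    # AMAT = L1 hit time + L1 miss rate * (L2 latency + L2 miss rate * memory latency)
    l1_hit = 3            # cycles, L1 hit latency (assumed)
    l1_miss_rate = 0.05   # 5% of accesses miss L1 (assumed)
    l2_miss_rate = 0.10   # 10% of L1 misses also miss L2 (assumed)
    mem_latency = 200     # cycles to main memory (assumed)

    for l2_hit in (12, 20):
        amat = l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_latency)
        print(f"L2 latency {l2_hit} cycles -> AMAT {amat:.2f} cycles")
    # 12 cycles -> 4.60; 20 cycles -> 5.00, i.e. under 9% worse overall,
    # and real benchmarks dilute that further.
    ```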
  • Is that 90nm should be enough for anyone....
  • by Black Parrot ( 19622 ) on Thursday December 21, 2006 @10:11AM (#17324918)
    Next time your class stud mentions his 9", you can counter by mentioning that your 6.5" consumes less power and gets the job done faster!
  • Where's the 64 bit ver...

    Oh wait
  • by OneSmartFellow ( 716217 ) on Thursday December 21, 2006 @10:48AM (#17325310)
    If you are about to buy an AMD chip, make sure you buy an AM2 version, because non-AM2 versions do not support low-level hardware virtualization (which means that Xen, and competitors, can only operate in paravirtualization mode).
    • Re: (Score:3, Informative)

      by Courageous ( 228506 )
      This is if you want to run Windows guests. Linux guests are best run paravirtualized for performance reasons. But point well-taken.

      C//
      • Re: (Score:3, Interesting)

        by TheRaven64 ( 641858 )
        True, but only as far as it goes. On x86, you have four protection rings. The hypervisor lives in ring 0, the kernel gets moved to ring 1, and the apps go in ring 3 as usual. When they designed x86-64, AMD 'helpfully' removed rings 1 and 2, so now the kernel and the apps have to share the same ring. They also removed the segmented memory model, so you have to use (more expensive) paged protection mechanisms to protect guests from each other. This makes paravirtualisation more expensive on 64-bit x86 sy
        • You know, I have to say: thank you for taking the time to write up this meaningful reply.

          As it so happens, we at my company are in the midst of a giant virtualization study (and prototype), starting first with a large VI3 deployment, followed shortly by Xen deployments and likewise some Virtuozzo. Your comment that paravirtualization may not perform as well on x86-64 systems as might be expected was interesting enough that I'll now have something to pay close attention to when we get there early ne
  • Okay. Let's test a low-power CPU. We need to stick it in a low-power board to get good measurements, of course. Let's ignore that we've got a 6150 with integrated graphics. Then let's stick on a big-ass 7950 which consumes over 70W on its own at idle.

    Is this a mistake in the article, or is this just... Insane?

    Nice.
    • Re: (Score:2, Insightful)

      They used the same video card on the Intel test rig too. They're just trying to keep as many components as possible in common between the platforms so that the power draw comparisons are more useful.

      Not too complicated really. As to why they chose that particular video card, I don't know, but I'd wager that the reviewer just had it on hand.
    • Yeah, it seems like a silly way to do it. Why not just put a power sensor on the main ground pin for the CPU (or do they have multiple ground pins?) and measure the power draw of the CPU directly? Come on, these are overclockers and hardware hacker geeks, surely they can do something simple like that.
      • by joshetc ( 955226 )
        Because nobody uses JUST a CPU. One of the power-saving features of AMD CPUs comes from the integrated memory controller, which means the motherboard doesn't need one, reducing overall power consumption.
      • by jelle ( 14827 )
        "(or do they have multiple ground pins?)"

        Multiple? More than half the pins on ICs like such CPUs are usually for power, not just to keep the input power stable, but also to prevent 'ground bounce' where the ground goes up from 0v when the chip draws a peak current.

        And, hum, 'just' add a 'power sensor'? You can measure strong currents with a coil around the wire, or smaller currents with a small resistor in-line (and then measuring the voltage over the resistor), but both influence the power flowing from/to
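
        To put rough numbers on the in-line resistor approach (all values assumed for illustration):

        ```python
        r_shunt = 0.005   # ohms: small series resistor in the CPU's 12V feed
        v_shunt = 0.0225  # volts measured across the shunt
        v_rail = 12.0     # volts on the CPU power rail

        current = v_shunt / r_shunt      # I = V / R  -> 4.5 A
        power = current * v_rail         # P = I * V  -> 54 W drawn
        burden = current**2 * r_shunt    # power lost in the shunt itself
        print(f"{current:.1f} A, {power:.1f} W, {burden:.2f} W lost in shunt")
        ```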
  • Apparently Brisbane (65nm) has a 20-cycle L2 cache latency, vs. the 12-cycle latency of the 90nm versions. http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2893&p=3 [anandtech.com]
  • These are nice, but I'm just trying to track down one of the new 35W Semprons that AMD makes (model # SDD*). Unfortunately, nobody carries them at retail. And unless I want to order them in packs of 12, or pay for shipping from Sweden, I seem to be completely out of luck.
    • by tindur ( 658483 )
      Where do you get them in Sweden?
      • A place called Dustin Home [dustinhome.se]. Search for "sempron" and it'll come up, though I have no way of telling if it's in stock. The ones in question are marked "EE" for "Energy Efficient", with model numbers starting with "SDD" as opposed to "SDA". AMD has a nice chart [amdcompare.com] on what the model numbers mean. I just wish that AMD would be a little more explicit in the names on the packaging.
  • I have posted an update to my initial look at AMD's 65nm processors here:

    http://techreport.com/onearticle.x/11486 [techreport.com]

    The update addresses some anomalies in L2 cache performance and raises some possibly related questions about die sizes for the 65nm Athlon 64 X2. It appears this chip is not just a die shrink with the same performance characteristics, after all.
  • My current rig is an Athlon 64 3200+, 10k RPM drive, ATI 9800 Pro, 2GB of RAM.

    There are 2 HUGE mistakes there. The CPU and the drive. Both are HOT, and hungry hungry for $power.

    My next machine I'm looking exclusively at the dual core 35W CPU's, leaning a little to Intel over AMD. For the drive, I'll probably go for a SATA laptop drive, since by 10 min after booting, absolutely everything is in RAM anyway - turns out drive performance is 100% irrelevant.

    The 9800 runs all the games I've played since (Lineage
  • I'm one of the fortunate 'many' that have had nothing but problems with AMD's x2 chips. Everywhere I turn I see the same problem listed and lamented about in forums- random total hard freezes with the X2 chips running a variety of mobos and configurations.

    Some say to disable CnQ, others all USB (like THAT'S a fix nowadays), while still others recommend a full re-installation of everything and hoping it works (it doesn't).

    I've swapped PSUs, memory, motherboard, drives, RAID cards, firewire cards, keyboard wedges
  • You all realize that dynamic power scales roughly with voltage squared at a fixed clock, and closer to voltage CUBED once the clock scales up with the voltage, right?

    So 1.42 volts / 1.35 volts is about a 5% voltage bump,
    and 1.05 cubed is about 1.16, i.e. roughly 16% more power.

    So 16% more power isn't exactly marginal.
    Otherwise, CPU vendors would sell the chip at 1.42 volts.
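
    The same arithmetic as a quick check (voltages from the post above; the exponents are the usual dynamic-power models):

    ```python
    # Dynamic power ~ C * V^2 * f; if clock also scales with voltage, ~V^3.
    v_old, v_new = 1.35, 1.42
    r = v_new / v_old
    print(f"voltage up {100 * (r - 1):.1f}%")       # ~5.2%
    print(f"power up ~{100 * (r**3 - 1):.0f}%")     # ~16% with the cubed model
    print(f"power up ~{100 * (r**2 - 1):.0f}%")     # ~11% at fixed frequency
    ```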
