AMD Hardware

AMD QuadFX Platform and FX-70 Series Launched 130

MojoKid writes, "AMD officially launched their QuadFX platform and FX-70 series processors today, previously known as 4x4. FX-70 series processors will be sold in matched pairs at speeds of 2.6, 2.8, and 3GHz. These chips are currently supported by NVIDIA nForce 680a chipset-based, dual-socket motherboards, namely the Asus L1N64-SLI WS, which is currently the only model available. HotHardware took a fully configured AMD QuadFX system out for a spin and though performance was impressive, the fastest 3GHz quad-core FX-74 configuration couldn't catch Intel's Core 2 Extreme QX6700 quad-core chip in any of the benchmarks. The platform does show promise for the future, however, especially with AMD's Torrenza open socket initiative." And mikemuch writes that the QuadFX "not only fails to take the performance crown from Intel's quad-core Core 2 Extreme QX6700, but in the process burns almost twice as much electricity and runs significantly hotter. ExtremeTech has a plethora of application and synthetic benchmarks on QuadFX, including gaming and media-encoding tests."
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward
  • From core 2 duo to core 2 extreme, just doubling the cores.. how are they going to top that with 8 cores? Core 2 tubular?
  • Until we replace our kitchen hobs and kettles with computer CPUs.
    Having the home server also doing all water heating for the house might be a good idea.

    You need to run some intensive process to heat up enough for a bath or shower.
  • While performance may be disappointing, it's pretty clear that AMD is just releasing this as a stopgap solution to "stay in the game" for the performance sector until their new developments are ready next year. The name is a good choice and reflects that intention - they combine their performance branding, FX, with "Quad", the term Intel is using, to indicate that it fills the same niche as a quad-core processor. I think it does what it is meant to do - give the impression of a comparable offering until AMD

    • Lame though...

      I'd rather have an Opteron for HPC than an Intel box any day.

      For "gaming" and other 1337 chores, I guess the Intel box is the winner. But really until Intel figures out this "not use the FSB" approach they can kiss my HPC using ass good bye.

      [and this is coming from a dude who loves his Core 2 Duo workstation...]

      Tom
      • by afidel ( 530433 )
        Well, it depends on what you're doing for HPC. If you have something with a high ratio of FP work to messaging/data work, then the Intel part might be a good choice, since it is by far the floating-point king. If your workload consists of large data sets or lots of message passing, then the AMD solution might be the right fit. I deal with more classical IT workloads (large database and n-tier systems), so the AMD solution is currently the better fit for my heavy lifting boxes. My blade servers for things like Citr
      • It is rather obvious that the QuadFX is a niche release for gamers, on several grounds.

        1) Only one memory module per channel supported, and no ECC. Not what you want for high reliability or in memory-intensive tasks. This disqualifies the QuadFX as server processor.

        2) Much cheaper than the (closest in performance) Opteron 2220 SE but castrated as described above. I guess AMD did that intentionally to avoid cannibalizing the server market.

        So I'd consider the Intel, with a high quality board, for "affordable
        • Re: (Score:2, Insightful)

          by saider ( 177166 )
          Don't forget...

          Piss off gamers with a problematic part and you might lose some "street cred".

          Piss off IT managers with a problematic part and you will lose significant revenue for many quarters to come.

          If I were going to test out a new product, a bunch of rich kid early adopters would be the market segment to target. They are always willing to try something new and their decisions do not significantly impact your bottom line.

          Once the process kinks are worked out, incorporate the other features for your main
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      Sooo... when Intel had the hotter, more energy-wasting, slower processor... it was "omg look how much Intel sux0rz" but now when AMD is in that boat it's... "this is just a stopgap, you just wait!"

      Fanboys... it just doesn't get more entertaining than that.
      • by traindirector ( 1001483 ) * on Thursday November 30, 2006 @11:13AM (#17050168)
        Sooo... when Intel had the hotter, more energy-wasting, slower processor... it was "omg look how much Intel sux0rz" but now when AMD is in that boat it's... "this is just a stopgap, you just wait!"

        Intel spent years in that boat with no indication that they had an intention to 1) aim for low power consumption (they were happily gloating about the forthcoming Pentium 4 5GHz) or 2) do what it took to gain the performance crown. It was not clear (in recent history) that they had an eye on the super-performance desktop market until the announcement of the Extreme Edition and little indication of concern about power usage on the desktop until they announced that their new desktop processors would be based on the Pentium M.

        On the other hand, we already know AMD's plans for next year, and we have statements of what they hope to achieve. I'm not saying just to wait and that it will be awesome. I'm posting on a Core 2 Duo system built using the remnants of my last Athlon XP system. My previous post indicated my expectations for what AMD is doing from a business perspective, not my feelings about the company or their product.

    • I can only assume that by "give the impression" you actually meant "give the illusion".

      I'm actually surprised the review was so positive. It's somewhat neutral, but really the conclusion is simple - 4x4 is a piece of crap. It uses ridiculous power and still loses the majority of benchmarks. In fact, in most applications, an E6600 would match or exceed it in performance, especially when both are overclocked.

      Expensive, high power requirements, average performance, questionable future. Seriously, pretty

    • It may be true that it's just a "Stay in the game" measure, but that won't stop me from buying Intel from now on. I've had horrible heat problems with AMD's 280 and 285 dual-core CPUs and 854 single-core CPUs (at least on Tyan boards, the only ones I've tried), and now they're asking me to pick up a quad core that's slower, hotter, and has to run on a linux-hating (try temp monitoring) Asus board? I think not. This is enough to make me switch to Intel. And when AMD's ahead in the benchmarks, I'm not going t
  • yep, it's a power hog. 400W @ idle? Youch!

    But, this is a first release, and what's important is the strengths shown. Notably, that 2 AMD 64 processors (granted, the 1207-pin versions) scale up well against Intel's brand-new Core 2 QX series (itself 2 CPUs slapped together). It will be interesting when AMD releases their true quad core CPUs on 65nm in 2007. It looks like they'll be on par with Intel at worst.

    This is only good news for us consumers!
    • Re: (Score:2, Interesting)

      This is true. While AMD lags now, it is still on 90nm while Intel is on 65nm; when both are on 65nm we'll have some real competition going on. -Ed
        • by nbowman ( 799612 )
          Err, no? Your article says: "The sample processors, known as the Penryn family within Intel, are being made at a factory in Oregon, and the company is on track to begin selling the chips in the second half of 2007, said Mark Bohr, director of process architecture and integration at Intel." and "AMD is expected to ship its first processors made using 65-nanometer technology by the end of 2006, and the company has said it wants to get 45-nanometer products on the market by the middle of 2008 as it tries to c
      • by Svartalf ( 2997 ) on Thursday November 30, 2006 @10:50AM (#17049828) Homepage
        Ahh... Someone that gets this concept.

        Intel wins on extra cache - and the benchmarks that keep getting run don't reveal performance snags with the SMP operation.

        Intel's got a shared L2 that's 2-4 times the size of the AMD equivalents' pool.
        AMD's got a coherent, but NON-shared L2 split across multiple CPUs - each core has its own L2. You'll have less L2 thrash with that design.

        Under an SMP load, the AMD design will have an edge if all four cores are busy in different parts of system memory.
        If you pop out of cache, the memory bus design and overall architecture of the AMD parts will have an edge.

        Intel has an edge only due to process shrink and the things they can do as a result thereof. As soon as AMD goes to the
        smaller process size, they'll pick up the lower TDP advantage Intel has right at the moment and then the whole deal will
        flip-flop on who's got the "best" CPU unless Intel comes up with a few new tricks along the way, which may/may not happen
        for them.
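
        To make the "less L2 thrash" and "popping out of cache" points concrete, here is a minimal sketch of cross-core cache-line contention: two threads bump counters that share a cache line, then counters padded onto separate lines. It assumes a POSIX system with pthreads; the 64-byte line size and the iteration count are illustrative assumptions, not measured values.

        /* false_share.c - rough illustration of cache-line thrash between cores.
         * Assumes POSIX threads and a 64-byte cache line (typical, not guaranteed).
         * Build: cc -O2 -pthread false_share.c -o false_share
         */
        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define ITERS 100000000UL

        /* Shared line: both counters land in the same 64-byte block. */
        static struct { volatile unsigned long a, b; } shared;

        /* Padded: each counter gets its own line, so cores don't fight over it. */
        static struct { volatile unsigned long a; char pad[64]; volatile unsigned long b; } padded;

        static void *bump(void *p) {
            volatile unsigned long *c = p;
            for (unsigned long i = 0; i < ITERS; i++) (*c)++;
            return NULL;
        }

        static double timed_run(volatile unsigned long *x, volatile unsigned long *y) {
            pthread_t t1, t2;
            struct timespec s, e;
            clock_gettime(CLOCK_MONOTONIC, &s);
            pthread_create(&t1, NULL, bump, (void *)x);
            pthread_create(&t2, NULL, bump, (void *)y);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            clock_gettime(CLOCK_MONOTONIC, &e);
            return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
        }

        int main(void) {
            printf("same cache line:      %.2f s\n", timed_run(&shared.a, &shared.b));
            printf("separate cache lines: %.2f s\n", timed_run(&padded.a, &padded.b));
            return 0;
        }

        On a dual-core or dual-socket box the padded case is usually several times faster; that coherence traffic is the kind of thrash the split-vs-shared L2 argument is about.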
        • Only problem then is that as it currently stands, Intel is, allegedly, ahead in the process game: " Intel to hit 65nm-45nm cross-over in 2008 [techreport.com]"

          I like AMD for HyperTransport, and Intel needs to go there too sooner or later, but elegance isn't going to get AMD the win in the mass market; only performance can do that (and performance per dollar at that).

          Sadly (for reasons of competition), I'm afraid that Intel may remain on top unless they run into problems with 45nm and AMD can sneak up on them by

        • by fitten ( 521191 ) on Thursday November 30, 2006 @11:14AM (#17050194)
          Intel's got a shared L2 that's 2-4 times the size of the AMD equivalents' pool.
          AMD's got a coherent, but NON-shared L2 split across multiple CPUs - each core has its own L2. You'll have less L2 thrash with that design.

          Under an SMP load, the AMD design will have an edge if all four cores are busy in different parts of system memory.
          If you pop out of cache, the memory bus design and overall architecture of the AMD parts will have an edge.


          In CPU architecture circles, the shared L2 is considered a more ideal design than split L2 for multi-core processors. There are plenty of talks around the 'net as to why.

          As far as cache size, that's a design tradeoff just like any other. Because of the slowness of main memory, you want to have as large a cache system as possible. However, cache system latency increases with the size of the cache, so that is a tradeoff as well. Intel chose to use some chip real estate for cache. "Faulting" them for this is just being an apologist for your puppy.

          There are many types of "SMP loads". Multi-threaded loads where all threads work on the same data will be similar on both, as there is only one pipe to the memory on both the NUMA and the FSB model, for example. Yet on SMP loads that are more 'loose', you can get good benefits from NUMA. By the way, Intel also has the IMC with their equivalent to HT on the roadmaps, so this discussion (NUMA vs FSB) won't be relevant for much longer.

          Additionally, it isn't until AMD's 'next thing' where their NUMA architecture will be able to scale much better (it doesn't do that well with lots of sockets because it falls back to being limited by the number of HT connections so some communication has to be multi-jump with current multi-socket solutions - the new core adds an HT link so that 4+ sockets can have a more direct path around the system).

          There are a number of examples of "popping out of cache" in the tests on various sites. AMD does show that it helps in those when it can use the bandwidth of both NUMA branches, but it isn't convincingly better than Intel's FSB on many/any of the tests that are shown (you'd ideally hope to see a 2x performance improvement on many of those, but even with all the extra bandwidth, AMD doesn't seem to 'blow the doors' off of the Intel parts... in fact, AMD doesn't even beat them even with the added bandwidth... this just shows that there may be more to the picture than an IMC + more bandwidth). Even AMD's latency isn't that much better than Intel's FSB design anymore (the nice advantage it had against NetBurst is pretty much gone).

          I'm eagerly awaiting AMD's next 'real' move, myself, but given that Intel is already sampling 45nm parts and even on 65nm Core is able to clock to 3.5GHz ranges (meaning Intel has a lot of headroom even on 65nm), the short amount of time that Intel and AMD will overlap on 65nm will probably just show equality (at best) between the two. I haven't really seen what performance advantages AMD's new features give, other than the obvious benefits of wider paths and the FPU issue increase (to bring it equal to Intel's issue rate, although AMD has typically had a stronger FPU). AMD claims a lot, but that could simply be marketing at this point.
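
          If you want to see the cache-size/latency tradeoff and the "popping out of cache" cliff for yourself, a pointer-chasing sweep is the usual trick: walk a randomly permuted ring of pointers at growing working-set sizes and watch the average access time jump as the set falls out of L1, then L2, then into main memory. The sketch below is a rough, assumed setup (the sizes and step counts are guesses), not a rigorous benchmark.

          /* chase.c - rough sketch: access latency vs. working-set size.
           * Sizes and iteration counts are illustrative, not tuned.
           * Build: cc -O2 chase.c -o chase
           */
          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          static double chase(size_t bytes, size_t steps) {
              size_t n = bytes / sizeof(void *);
              void **ring = malloc(n * sizeof(void *));
              size_t *perm = malloc(n * sizeof(size_t));
              for (size_t i = 0; i < n; i++) perm[i] = i;
              /* Fisher-Yates shuffle so the walk defeats the hardware prefetcher. */
              for (size_t i = n - 1; i > 0; i--) {
                  size_t j = (size_t)rand() % (i + 1);
                  size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
              }
              for (size_t i = 0; i < n; i++)
                  ring[perm[i]] = &ring[perm[(i + 1) % n]];

              void **p = &ring[perm[0]];
              struct timespec s, e;
              clock_gettime(CLOCK_MONOTONIC, &s);
              for (size_t i = 0; i < steps; i++) p = (void **)*p;   /* dependent loads */
              clock_gettime(CLOCK_MONOTONIC, &e);
              volatile void *sink = p; (void)sink;                  /* keep the chain live */
              free(ring); free(perm);
              double ns = (e.tv_sec - s.tv_sec) * 1e9 + (e.tv_nsec - s.tv_nsec);
              return ns / steps;
          }

          int main(void) {
              size_t sizes[] = { 16 << 10, 256 << 10, 2 << 20, 64 << 20 };  /* 16KB .. 64MB */
              for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++)
                  printf("%8zu KB : %6.1f ns/access\n",
                         sizes[i] >> 10, chase(sizes[i], 20u << 20));
              return 0;
          }

          The jump at the largest size is the "out of cache" case, which is where the IMC-vs-FSB and bandwidth arguments above actually start to matter.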
          • by sxpert ( 139117 )
            the announced quad core AMD has a shared L3 on the chip itself...
          • So... L2 cache speed. When I look at Memtest86+ numbers, I see:

            ~19700 MB/s for L1
            ~4700 MB/s for L2
            ~3000 MB/s for main memory

            This is on an Athlon64 X2 4600+ w/ low-speed DDR2 RAM (4 sticks of 1GB).

            I'm guessing that the L2 gains come from it responding to a memory request faster (fewer clock cycles) rather than from bandwidth? Because the L2 bandwidth of 4.7GB/s doesn't seem to be that exciting anymore once main RAM can feed the CPU at 3GB/s.
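
            For what it's worth, a crude way to sanity-check bandwidth numbers like those is to time a large sequential read yourself. The sketch below uses an assumed setup (256MB buffer, 8 passes) and will read lower than Memtest86+ because it isn't hand-tuned.

            /* bw.c - naive sequential-read bandwidth estimate, rough comparison only.
             * Build: cc -O2 bw.c -o bw
             */
            #include <stdio.h>
            #include <stdlib.h>
            #include <string.h>
            #include <time.h>

            int main(void) {
                size_t bytes = 256u << 20;          /* 256 MB: well past any cache */
                size_t reps = 8;
                unsigned long *buf = malloc(bytes);
                memset(buf, 1, bytes);              /* fault the pages in first */

                unsigned long sum = 0;
                struct timespec s, e;
                clock_gettime(CLOCK_MONOTONIC, &s);
                for (size_t r = 0; r < reps; r++)
                    for (size_t i = 0; i < bytes / sizeof(*buf); i++)
                        sum += buf[i];
                clock_gettime(CLOCK_MONOTONIC, &e);

                double sec = (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
                printf("~%.0f MB/s (checksum %lu)\n",
                       (double)bytes * reps / sec / (1 << 20), sum);
                free(buf);
                return 0;
            }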

            It's interesting that you mention the FPU, because I'm still thinking that there are some nice things that could come out of AMD's Fusion project.

            Fusion may only be a CPU and a graphics card in the same package, targeted at the entry market, as some have speculated.
            But it could also turn out to be something more similar to the Cell processor: a CPU with several general-purpose vector units that could be used for higher-level computation (physics, geometry, or even scientific calculation @Home) while leaving the
            • Re: (Score:2, Insightful)

              by fitten ( 521191 )
              Maybe... Microsoft's DirectX10 has an API in it for offloading vector type work to 'something else' in the system. The interesting thing about it is that it will be a standard API, meaning that hardware can be built to take advantage of it while drivers can also be written to either do it in emulation or by actually handing it off to the specialized hardware. This would help AMD out a lot as far as that kind of hardware goes... without some standard APIs, it would likely end up in a mess, IMO.

              Personally,
        • by geekoid ( 135745 )
          Then Intel will come out with something else, and AMD will be playing catch up again.
          Intel's new arch also has a true 128-bit FPU that does SSE operations in 1 cycle as opposed to 2. AMD will have this with their quad core whenever that gets released.
    • by fitten ( 521191 )

      Notably, that 2 AMD 64 processors (granted, the 1207-pin versions) scale up well against Intel's brand-new Core 2 QX series (itself 2 CPUs slapped together). It will be interesting when AMD releases their true quad core CPUs on 65nm in 2007.

      "true" again... spoken like a "true" fanboi. Anyway... Core has a lot of headroom for frequency increases. In the ExtremeTech article, they overclocked the QX part from 2.66GHz to 3.55GHz even. Not only that, but Intel is already sampling 45nm parts and will likely have 45

      • by fitten ( 521191 )
        overclocked the QX part from 2.66GHz to 3.55GHz even.


        My bad... I read that wrong. The Core2Duo was overclocked to 3.55GHz, not the quad core part. However, this still shows the amount of headroom the core has on 65nm.
      • by Gr8Apes ( 679165 )
        1) I'd buy a Core 2 today if I needed a performance part. No "fanboi" here. Note my comment was about competition and what's good for consumers.

        2) The increases seen from dropping mask scale haven't been that great over the past few drops. Certainly not in the frequency bump arena. Since 65nm can do 3.6 or more, I wouldn't be shocked if they can eke 4GHz out of 45nm, finally, only about 4 years late.

        3) Intel's architecture has already shown problems with the FSB limitations with Woodcrest in Apple benchmarks.
        • by fitten ( 521191 )
          Good points...

          True SMP type multi-threading

          What is "true SMP type multi-threading"? I've worked on parallel (multi-process and multi-threaded) code for almost 20 years now. There is no "one true" SMP processing type. The differentiators are data partitioning and how much serial vs. parallelism are inherent in your problem/algorithms. All are "true" in every sense of the word. Some are more efficient than others and some scale better... that's it.

          As far as L2 cache sizes, they *are* going to grow with t

          • by Gr8Apes ( 679165 )
            On the "true SMP type multi-threading", that would be multi-threading where the cores work on any thread, rather than assigning processes/threads to specific CPUs. This means that task1's work could occur in work units across any of the existing cores. This has a direct performance impact where task1 could run on core 1 with cache 1, then run on core 'x' with cache 'x', with the cache having most likely to be updated each time a task is run.

            This is more related to how a task is run within the system than ho
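
            As a concrete aside on keeping a task's working set in one core's cache: on Linux you can pin a process (or thread) so the scheduler stops bouncing it between cores. Below is a minimal sketch with sched_setaffinity, assuming Linux/glibc; on Windows the equivalent call is SetThreadAffinityMask.

            /* pin.c - sketch: pin the current process to core 0 so its working set
             * stays in that core's cache instead of migrating around.
             * Build: cc -O2 pin.c -o pin   (Linux only)
             */
            #define _GNU_SOURCE
            #include <sched.h>
            #include <stdio.h>
            #include <unistd.h>

            int main(void) {
                cpu_set_t set;
                CPU_ZERO(&set);
                CPU_SET(0, &set);                   /* core 0; any valid core works */
                if (sched_setaffinity(0, sizeof(set), &set) != 0) {
                    perror("sched_setaffinity");
                    return 1;
                }
                printf("pinned pid %d to core 0\n", (int)getpid());
                /* ... run the cache-sensitive work here ... */
                return 0;
            }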
        • 3) Intel's architecture has already shown problems with the FSB limitations with Woodcrest in Apple benchmarks. True SMP type multi-threading causes degraded performance, especially when heavy large-scale memory access occurs that exceeds the cache capacity. I can't imagine this will get better @ 45nm, except that the cache might grow. 16MB L2 cache anyone? 32MB? 64MB? Heck, why not just go for a GB? Who needs system memory anyway? -- that's a joke btw

          http://en.wikipedia.org/wiki/Cache_only_memory_arc [wikipedia.org]

    • by geekoid ( 135745 )
      Throwing a crap product on the market as a 'stopgap' is not good for consumers.

      OTOH, this is typical AMD. Copy Intel, but hotter and a power hog, then try to refine. About the time AMD is actually a real competitor to the Intel version, Intel comes out with a better chip and AMD follows. Rinse, repeat.

  • nforce chipset? (Score:1, Interesting)

    by Anonymous Coward
    It's strange that the new AMD processor is only supported by an NVIDIA chipset, now that AMD owns ATI.
  • Hotter? (Score:4, Informative)

    by afidel ( 530433 ) on Thursday November 30, 2006 @10:41AM (#17049688)
    The QX6700 has the same TDP (125-130W) per socket as the FX-70 through FX-74, so I assume they run at about the same temperature on chip. Overall system temperature might be higher for the FX-based quad-core system since it uses twice as many sockets, but that's a matter of case design; if the case can remove the heat from the heatsinks effectively, I would imagine both systems would run at the same temperature. This is of course ignoring the fact that AMD's TDP is worst case and Intel's is average case.
    • I think the community of folks who follow processor awesomeness generally equate high TDP with system heat (and therefore ability to function as a space heater, cook eggs, burn the fingers of curious children, etc.) It's sort of sensationalist, in both senses of the term - "Man I bet you can feel the heat from that thing on the other side of the house!" Making fun of power draw just doesn't have as much good material.

    • Re:Hotter? (Score:4, Informative)

      by Anonymous Coward on Thursday November 30, 2006 @11:03AM (#17050018)
      The QX6700 uses only one socket; the FX-70 uses two dual-cores in two sockets, hence about double the power requirement.
      • by jelle ( 14827 )
        Nah, imho the socket itself is not the main reason for the difference. The qx6700 'processor' has two separate dies inside too, basically the same as the fx70 but inside the package. Sure, the traces are longer for the fx70 because it doesn't stay inside the package, but imho the main difference in power (besides the well-known difference in what the two manufacturers mean when they say 'tdp') is the fact that the fx70 is built at 90nm and the qx6700 is built at 65nm.

      • Re: (Score:3, Interesting)

        by MojoStan ( 776183 )

        the FX-70 uses two dual-cores in two sockets

        I think it's also worth noting that the QuadFX platform apparently doubles some parts of NVIDIA's power-hungry chipset [hothardware.com] (12 SATA ports??). Back when the single-CPU AM2 platform was launched, the NVIDIA chipset consumed a lot more power than the ATI chipset: somewhere between 20 watts [techreport.com] and 40 watts [hothardware.com].

    • by Kjella ( 173770 )
      The QX6700 has the same TDP(125-130W) per socket as the FX70-74 so I assume they run at about the same temperature on chip [so with enough cooling] both systems would run at the same temperature.

      They might not run hotter (operating temp) but they produce twice the heat (power draw), so you're really pushing the semantics here. More heat = more cooling = more noise = higher power bill = lower battery life. These are what matters to end-users, I couldn't care less if it spends it all in one place or two.
    • but that's a matter of case design, if the case design can eliminate the heat from the heatsink effectively I would imagine both systems would run at the same temperature

      Which brings up another "con" for the QuadFX platform: so far, it's only available in the eATX (extended ATX) form factor. The motherboard is too big to fit in almost all popular gaming cases (which max out at "standard" ATX). In contrast, a Core 2 Quad can be used in standard ATX and even microATX SFF cases like the Falcon NW Fragbox [falcon-nw.com].

      I

  • by Non-CleverNickName ( 1027234 ) on Thursday November 30, 2006 @10:46AM (#17049762)
    I wonder which will come first?

    processors with 10 cores
    or
    razors with 10 blades
  • am still happy with my Duron 1300...
    • You don't run Gentoo, do you?
      • Re: (Score:3, Insightful)

        You don't run Gentoo, do you?

        The more up-to-date version would be:

        You don't do virtualization, do you?

        Start cramming multiple virtual servers onto a single box and all of a sudden dual-core solutions start to seem limiting. And you find yourself wondering just how much a 4-way quad-core machine would cost...

        (That 4-CPU quad-core machine is still going to be cheaper than maintaining 4 separate quad-core servers.)

    • Well I'm satisfied with my P3-800, so there!
      • by raynet ( 51803 )
        And I am happy with my 1MHz Commodore 64...
        • by leoc ( 4746 )
          I'm not very happy with my pile of rocks for counting, so I think I'm going to upgrade to an abacus once the price comes down just a LITTLE further.
    • what do you do with all that processing power?

      i'm barely maxxing out my intel 66mhz with the turbo activated.

    • Then you probably don't do HPC. Yes, most desktop users are perfectly content with their clock speeds currently, and their single core systems (I am content with my home system), but in HPC, it's never enough. There are always bigger simulations, real time needs, etc. that will require more cores, more power, more flops.
    • by Sloppy ( 14984 )
      What's really great, is that the longer you're happy with it, the awesomer your next computer will be, when you finally tire of this one. You won't be stuck with a mere 8-core 6 GHz box like all those other suckers who wasted their money on that obsolete crap back in 2009.
    • by Belial6 ( 794905 )
      I've actually been downgrading machines around the house. EPIAs at 1200MHz are just fine for 99% of tasks, use less power, and are completely silent. I'll keep my primary work machine as a powerful system, and I might have a medium-level machine for my wife to game on, but that will usually be turned off.
  • by eddy ( 18759 )

    I think most here knew that this was always going to be a stupid vanity platform, almost as stupid as water-cooled memory modules [ocztechnology.com]. Now, the only thing more sad and stupid than a vanity platform, is one where the vanity isn't even there.

    This should have ended as abandoned concept art in a drawer.

    (PS. My current gaming rig is AMD X2-based, but if they don't have the performance/$ then they won't get in on the next upgrade)

    • Agreed. AMD should have butched up and just waited until K8L to take on Intel. This product is absolutely ridiculous, it's strictly for AMD fanbois with more money than sense. Let's see.. slower on almost every benchmark, very expensive motherboard and memory requirements, ridiculous power requirements, questionable future.

      Yeah, it's a real winner.

    • by dbIII ( 701233 )
      I think most here knew that this was always going to be a stupid vanity platform

      No - this will be the difference between having an 8 way system that takes up 5U in a rack and an 8 way system that takes up 1U since it only needs two sockets. You don't need a huge board that only goes into oversized cases - that said the huge board does have 16 memory slots.

      • OK - I'm wrong after reading the fine article - this is a 4-way, dual-CPU system and not the 4-core chip I thought it was from the name. The 5U system described above is 4 dual-core AMD Opterons set up to be 8-way on a ridiculously big board that I hope to have very soon. What would be very nice is a two-socket system with 4 cores per socket that can fit into 1U in a rack.

        Any games out there on linux that would really show off an 8 way system to "test" it over the weekend? Whatever happened to the cluster

  • by Wonko the Sane ( 25252 ) on Thursday November 30, 2006 @10:49AM (#17049818) Journal
    The ExtremeTech benchmarks seemed to expose Windows XP's inability to benefit from NUMA. I wonder what testing on a newer Linux kernel with NUMA scheduler support would show.
    • This appears to be exactly the problem. The architecture/scalability is superior on paper, yet in benchmarks it's worse off than slower AMD processors. Take a look at the memory benchmarks. I bet XP is spending a lot of time moving threads between memory banks. All these tests are handicapped by inadequate OS support. Not great press for AMD, but I'm waiting to see what it will do on an OS that supports NUMA.
    • TEST IT on VISTA 64
    • by kscguru ( 551278 ) on Thursday November 30, 2006 @01:06PM (#17052086)
      Having done NUMA benchmarks ... on AMD chips, certain workloads take a latency hit from 60ns / memory access to 80ns / memory access. Bandwidth is halved. Net, it's a 5-10% slowdown across all workloads (5% if you try for average-case performance, 10% if you just hope for the best). Both sites point out that it is NUMA making the difference, yet both sites insisted on staying with Windows XP. A new motherboard like that, it's defaulted for a NUMA OS, so this is the 10% realm. As you point out, it would be extremely simple to run modern Linux (or Windows 2003, or Windows XP x64, or Vista) and see how well a NUMA scheduler works. (Note to Linux fans: Linux didn't have good NUMA scheduling either when Windows XP came out. A fair comparison would be against Linux 2.4.3 or so). This benchmark is fantastically stupid - it's the equivalent of running game benchmarks with a Voodoo3 graphics card to see CPU differences, determining that the graphics card is the bottleneck, then claiming one CPU is faster! Their benchmarks exposed a major slowdown in the memory system, easily corrected with an OS upgrade, and they refused to correct it.

      In short, once you factor NUMA out of these benchmarks, the difference between AMD quad and Intel quad is approximately the same as the difference between AMD's K8 arch and Intel's Core arch for single cores. Umm... duh?
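
      For anyone who wants to probe the local-versus-remote gap described above on a NUMA-aware OS, the usual approach on Linux is libnuma: allocate one buffer on the node you're running on and one on the other node, then time walks over each. The sketch below uses an assumed, simplified setup (node IDs 0 and 1, 64MB buffers) and only makes sense on a machine with two or more NUMA nodes.

      /* numa_touch.c - rough sketch: walk a node-local buffer vs. a remote one.
       * Requires libnuma and a multi-node machine.
       * Build: cc -O2 numa_touch.c -lnuma -o numa_touch
       */
      #include <numa.h>
      #include <stdio.h>
      #include <time.h>

      static double walk(volatile long *buf, size_t n, size_t reps) {
          long sum = 0;
          struct timespec s, e;
          clock_gettime(CLOCK_MONOTONIC, &s);
          for (size_t r = 0; r < reps; r++)
              for (size_t i = 0; i < n; i++) sum += buf[i];
          clock_gettime(CLOCK_MONOTONIC, &e);
          double ns = (e.tv_sec - s.tv_sec) * 1e9 + (e.tv_nsec - s.tv_nsec);
          return ns / (double)(n * reps) + 0.0 * (double)sum;   /* keep sum live */
      }

      int main(void) {
          if (numa_available() < 0 || numa_max_node() < 1) {
              fprintf(stderr, "need libnuma and at least two NUMA nodes\n");
              return 1;
          }
          size_t bytes = 64u << 20, n = bytes / sizeof(long);
          numa_run_on_node(0);                       /* keep execution on node 0 */
          long *local  = numa_alloc_onnode(bytes, 0);
          long *remote = numa_alloc_onnode(bytes, 1);

          printf("node-local buffer:  %.1f ns/element\n", walk(local,  n, 4));
          printf("remote-node buffer: %.1f ns/element\n", walk(remote, n, 4));

          numa_free(local,  bytes);
          numa_free(remote, bytes);
          return 0;
      }

      A sequential walk understates the latency gap (prefetch hides some of it), but the remote case should still come out measurably slower, which is roughly the 60ns-vs-80ns effect described above.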


      • Anybody running a 2.4.2 version of the Linux kernel should be shot. Nobody runs 2.4.2 these days, and anybody suggesting that is far out of touch with what Linux is doing. Compare it against 2.6.19 with all of the NUMA options turned on (CPU-local memory allocators, RCUed algorithms) and you'll see the expected trumping of XP for kernel loads, hands down, because of all the MP work done on it over the past 4 years.
  • HardOCP QuadFX Review [hardocp.com].

    I'd go with the QuadFX platform just so I could swap in two quadcore AMD chips mid-2007, or one quadcore and one Torrenza platform coprocessor... if I had a few $thousand lying around and could make proper use of all that firepower. I suspect that quadcore + coprocessor combination is going to be really, really interesting within a year.
  • But this will hopefully spur them on, to get back where they belong. I just hope they don't get desperate and start slashing prices - that may work in the short run, but long term, AMD will sink into "second tier" status, simply because they go cheap. AMD has surpassed Intel in quality for years, and has mostly shaken off the 'discount chip' label.

    Nobody wants to be seen as the Dollar General of processors...Cyrix anyone?

  • Tom's Hardware... (Score:2, Informative)

    by SirKron ( 112214 )
    has another review [hothardware.com] that reaffirms the same findings. Performance is not beating Intel yet, and the AMD/ASUS solution is very expensive. I feel the only market here is those who cannot wait and have money to burn.
  • Well, having lived with an AMD 64x2 for over a year now, I feel comfortable saying that a dual core proc is pretty useless to me. I've noticed absolutely no difference in my computing experience, either in the newest games or in day-to-day non-game activity. It's no different than my similarly clocked A64 with one core.

    I'm sure quad cores are great for my servers, especially a couple of my mail servers that process a boatload of mail... but honestly, it's completely useless for the desktop. I would go so
    • by geekoid ( 135745 )
      I do.

      I also like to play games and listen to mp3s. Game music has a tendency to get boring.
      I also like to play games while downloading large files.

      Many times I have been watching a DVD while burning a disc, or compiling and burning discs for distribution.

      OTOH, I found that a SCSI 320 with 3 ms seek time takes care of this problem just fine.
    • by N7DR ( 536428 ) on Thursday November 30, 2006 @11:45AM (#17050732) Homepage
      Well, having lived with an AMD 64x2 for over a year now, I feel comfortable saying that a dual core proc is pretty useless to me. I've noticed absolutely no difference in my computing experience, either in the newest games or in day-to-day non-game activity. It's no different than my similarly clocked A64 with one core.

      Stating the blindingly obvious: some people aren't going to notice much (if any) difference; others are going to see a huge difference. Parent falls into the former camp; I fall into the latter. I have also been using a 64 X2 for a year, and no way would I go back to single core. It would be worth having dual core if only for the fact that I can start a job and it will consume a core while all my interactive work runs on the second core, and hence I don't even notice that a huge job is running in the background. Everything else one gets with dual-core is an added bonus. I'm not totally certain that going to 4 cores on the desktop will be as useful, but I can believe that it might be, and it will certainly be worth trying. For me, anyway (and I can't believe that I'm particularly untypical of slashdot users).

      Given my experience, I'm even fairly convinced that the rest of my family (who are much more like ordinary users) would benefit from dual core too. Everything is simply so much more responsive.

      • by Kjella ( 173770 )
        Stating the blindingly obvious: some people aren't going to notice much (if any) difference; others are going to see a huge difference.

        Given the number of PCs that were selling with faaaar too little memory around when XP Home was released, I think most are oblivious. If they can't notice that their machine is swapping like crazy, what are they going to notice? Personally I think my biggest issue seems to be locking IO calls - the worst thing I can do to my Windows box is put in a semi-scratched CD/DVD. It'l
        • by maraist ( 68387 ) *
          I just can't figure out why it needs to CPU lock like that in the first place.

          preface: I-am-not-a-windows-programmer...

          My guess is that windows isn't actually locking, but instead that the file-system directory is locking... But since virtually any OS call is accessing some kind of file object, those OS calls will likely lock too.

          Same is true in UNIX, except UNIX isn't retarded enough to have unrelated virtual paths block one another. Gnome, on the other hand... Well it does still dream about becoming win
      • Given my experience, I'm even fairly convinced that the rest of my family (who are much more like ordinary users) would benefit from dual core too. Everything is simply so much more responsive.

        What? What is more responsive? Someone show me some tangible proof that day-to-day activities are "more responsive" or "smoother." I have yet to see it. Maybe I just set my shit up properly from the get-go, so my computing experience is already hyper responsive, so I don't notice the difference. Maybe dual and qu
    • Are you kidding me? I notice the performance difference all the time.

      I spent 4 years on a 600mhz G3 iBook, and just upgraded to a Quad Mac Pro 2.66. I've also regularly used a P4 2.4 and a Celeron 1.7 laptop at work.

      To start with, actually, I ran a 6-proc 200mhz PPro at an old job I had, and while the thing was never lightning fast, it ran like a train.. NOTHING slowed it down, ever.

      I was continually surprised at how slow the P4 2.4 running XP felt, given my slow iBook at home... so part of it may be the
      • And it's obvious to say that when I stepped up to the Mac Pro, it blew everything out of the water. But I can't believe you're saying multiple cores is only good for running multiple apps. That's simplistic and wrong. To begin with, we're ALL running multiple apps all the time -- MP3 player, web browser, email, and so on. Sure, none of these processes are taking a ton of CPU time, but the ability for the OS to assign them to different cores means your computing experience is much smoother and more consistan
  • So it's more of the same, no wonder it's not so impressive. Once they get 65nm stuff out, we may see real improvements (not only speed, but power consumption too).
      • http://www.tomshardware.com/2006/11/30/brute_force_quad_cores/ [tomshardware.com] Read the Tom's article; they explain the 90nm process and why these chips are more advanced than the Intel chips. Gotta remember this is revision 1 of this system; we have yet to see how far this can be taken.
      • by dlenmn ( 145080 )
        The article you linked to says, "AMD's current 90 nm silicon-on-insulator process is more advanced than the Intel equivalent". So AMD's 90nm process is more advanced than Intel's 90nm process (what else could they mean by "equivalent"?), but Intel is using a 65nm process and is gearing up for a 45nm process. What the article did NOT say (because it's not true) is that AMD's 90nm process is better than Intel's smaller processes, which is what I think you're claiming.

        I still think that AMD's move to 65nm
      • by stevel ( 64802 ) *
        The article does NOT "explain the 90nm process and why these are more advanced than the Intel chips". Rather, it just makes this bald assertion and moves on. Intel, I'm sure, would disagree. AMD and Intel use somewhat different silicon technology, but is one "more advanced" than the other? Depends on whom you ask. What AMD solves with SOI Intel solves a different way. There is, of course, the fact that Intel has been cranking out very successful 65nm chips for about a year now and AMD has not (yet) shi
  • Before you pass judgment on this: it's Rev 1, so let's see how far they can take it. http://www.tomshardware.com/2006/11/30/brute_force_quad_cores/ [tomshardware.com]
  • I don't understand why they say the prices for Intel and AMD quad core system are the same when the Intel QX6700 seems to go for $1500 on newegg, while a pair of AMD CPUs seems to range from $600 to $1000 (couldn't find real prices yet).
  • Unless you're running Dreamworks studios in your basement or running a simulation for MIT, I don't see the usefulness of this.
