AMD Hardware

AMD FX-8350 Review: Does Piledriver Fix Bulldozer's Flaws?

An anonymous reader writes "AMD just officially took the wraps off Vishera, its next generation of FX processors. Vishera is Piledriver-based like the recently released Trinity APUs, and the successor to last year's Bulldozer CPU architecture. The octo-core flagship FX-8350 runs at 4.0 GHz and is listed for just $195. The 8350 is followed by the 3.5 GHz FX-8320 at $169. Hexa-core and quad-core parts are also launching, at $132 and $122, respectively. So how does Vishera stack up to Intel's lineup? The answer to that isn't so simple. The FX-8350 can't even beat Intel's previous-generation Core i5-2550K in single-threaded applications, yet it comes very close to matching the much more expensive ($330), current-gen Core i7-3770K in multi-threaded workloads. Vishera's weak point, however, is power efficiency. On average, the FX-8350 uses about 50 W more than the i7-3770K. Intel aside, the Piledriver-based FX-8350 is a whole lot better than last year's Bulldozer-based FX-8150, which debuted at $235. While some of this has to do with performance improvements, the fact that AMD is asking $40 less this time around certainly doesn't hurt either. At under $200, AMD finally gives the enthusiast builder something to think about, albeit on the low end." Reviews are available at plenty of other hardware sites, too. Pick your favorite: PC Perspective, Tech Report, Extreme Tech, Hot Hardware, AnandTech, and [H]ard|OCP.
  • How about idle?? (Score:5, Interesting)

    by Anonymous Coward on Tuesday October 23, 2012 @01:52PM (#41743569)

    90+% of my CPU is idle time.

    How much power does the new chip use at idle and how does that compare to Intel?

    50W at the top end means about $25/yr if I were running it 24/7 (ballpark math sketched below). But since a typical desktop is mostly idle, what is the power difference there??

    And yes, I don't care about single-threaded performance as much as I care about multithreaded performance. Single-threaded performance has been good enough for the desktop for almost a decade, and the only CPU-intensive task I do is running those pesky `make -j X` commands. No, not emerging world or silly things like that ;)
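
    A rough sketch of that electricity math, for reference -- the ~$0.06/kWh rate and the duty cycles below are assumptions, not figures from the thread or the reviews:

    ```python
    # Back-of-the-envelope cost of an extra 50 W of CPU power draw.
    # Assumed electricity rate: ~$0.06/kWh (varies a lot by region).
    def annual_cost_usd(extra_watts, hours_per_day=24.0, usd_per_kwh=0.06):
        kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
        return kwh_per_year * usd_per_kwh

    print(annual_cost_usd(50))                     # ~26 USD/yr if loaded 24/7
    print(annual_cost_usd(50, hours_per_day=2.4))  # ~2.6 USD/yr at a 10% load duty cycle
    ```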

    • by Animal Farm Pig ( 1600047 ) on Tuesday October 23, 2012 @02:01PM (#41743699)

      I agree about multithreaded performance being an important thing moving forward.

      Regarding power consumption, the anandtech review [anandtech.com] puts total system power consumption for the Vishera system tested at 12-13W more than Ivy Bridge. Scroll to the bottom of the page for the chart. The bar and line graphs at the top of the page are misleading -- they put the x-axis at 50W, not 0W.

      If you are concerned about power consumption, find a 100W light bulb in your house and replace it with a CFL. You will get a greater energy saving.

      • Re: (Score:3, Informative)

        So true. AMD isn't competitive on energy efficiency in any way anymore, but the desktop is probably the only place where it doesn't matter. When you're thinking mobile, saving energy is a priority. On huge server farms, small relative gains can mean a tangible difference in absolute numbers. But on desktops, the difference is about a light bulb's worth at load, and hardly anything when idle. And with PSUs under 350W being increasingly hard to find and an FX-8350-based system only gobbling about 200W at its most intense

      • by amorsen ( 7485 )

        If you are concerned about power consumption, find a 100W light bulb in your house and replace it with a CFL. You will get a greater energy saving.

        Surely you already did that 3 years ago, so now you're preparing for the switch to LED?

        • by Lumpy ( 12016 ) on Tuesday October 23, 2012 @02:32PM (#41744093) Homepage

          "Surely you already did that 3 years ago, so now you're preparing for the switch to LED?"

          Why? So I can get less light output for the same watts used, but instead of spending $8.95 per bulb I get to spend $39.99?

          LED is a joke for home lighting, only fools are buying it right now. CFL is still way more efficient.

          • by amorsen ( 7485 )

            Why? So I can get less light output for the same watts used, but instead of spending $8.95 per bulb I get to spend $39.99?

            LED can come much closer to proper full-spectrum light than CFL ever will. CFL is just a stopgap technology we have to deal with until LED gets there.

            Also, it is possible to make LED spotlights to handle that strange modern trend of building lots of spotlights into ceilings. CFL cannot do that.

            • by afidel ( 530433 )

              What?!? There are CFLs with a CRI of 95; where is the LED bulb that competes? Oh, that's right, they're [earthled.com] the same [kinoflo.com], because they use the same phosphor-based system to create full-spectrum light.

              As far as use in cans goes, both LEDs and CFLs have the same problem with overheating ballasts; unless you have DC power run to your LED can light, you'll run into the same power limit and a similar lumen ceiling that you do with CFLs in that application.

            • Also, it is possible to make LED spotlights to handle that strange modern trend of building lots of spotlights into ceilings. CFL cannot do that.

              If you have a lightsource and an appropriately curved reflector, you have a spotlight. This can definitely be done with CFLs. I'm pretty sure of that, since my house has those ceiling spotlight mounts (originally with halogens, but replaced with CFLs when we bought the house.)

          • by h4rr4r ( 612664 )

            You do realize they last like 20 years, right?

            Please explain why you think someone would be foolish to buy a longer lived and nearly as efficient device?

            • Comment removed based on user account deletion
            • Re: (Score:3, Insightful)

              You live in an apartment and don't plan to be there for 20 years?

              I imagine that for a lot of people, dumping $40 into each light socket is a losing proposition, and a winner for the landlord (who I am sure would greatly appreciate the gift).

            • by Lumpy ( 12016 )

              Really? Which ones have lasted 20 years? Because I have found none that do. Note: I have worked for an LED bulb distributor. The return rate is nasty high. They last about 1-2 years on average. Warranty replacement rates are high as hell on them.

          • I'm fairly certain the light output is basically the same per watt for LEDs compared to CFLs. In fact, here, I looked it up http://cleantechnica.com/2011/09/01/led-vs-cfl-which-light-bulb-is-more-efficient/ [cleantechnica.com] and the LEDs actually produce MORE lumens per watt.

            Now, let's expand and point out something this article got wrong. They say you would only have to replace a CFL three times over the course of an LED lifespan. In my experience, this is at least an order of magnitude on the low side. I have dimmers in
          • I have an outdoor porch light that seems to eat fluorescent bulbs. I've only had the LED bulb in for a few months, but so far it's doing fine.
      • by war4peace ( 1628283 ) on Tuesday October 23, 2012 @02:30PM (#41744077)

        I play games maybe 1h 30m a day on average. My 5-year-old dual-core E6750 overclocked to 3.2 GHz handles most of them gracefully, but there are some new releases which require more processing power. However, in choosing a new platform, I'm mostly looking at TDP, not from a consumption perspective but from a heat-dissipation one. I hate having to use a noisy cooler.
        My current CPU has a TDP of 65W and a Scythe Ninja 1 as its cooler, and the fan usually stays at 0% when the CPU is idling. While gaming, I can't figure out whether it makes noise, because my GPU cooling system makes enough noise to cover it. And I'd like to keep it that way when I pick my new CPU.

        You're saying that the graphs are misleading. No, they're not, if one has half a brain. Looking at the hard numbers, the power consumption difference is about 100W. The i5 3570K draws about 98W and Zambezi and Vishera (who the fuck names these things?) draw around 200W. If you put TWO i5s one on top of the other, they barely reach ONE AMD CPU's power consumption. Thanks, but things DO look bad for AMD. I'll just have to pass.

      • More heat equals louder fans, and more dust on the vents.
        The limousine tax imposed by our ISP overlords is a couple of orders of magnitude more painful.

      • Comment removed (Score:5, Informative)

        by account_deleted ( 4530225 ) on Tuesday October 23, 2012 @04:39PM (#41745629)
        Comment removed based on user account deletion
        • Compared to the Phenom II in 45 nm, I guess that Piledriver is finally faster. There were a few benchmarks where a Phenom II X6 could beat a Bulldozer, but IIRC only by a few percent. Which is not enough to beat the Piledriver in the same tests.

          Now I still wonder how a Phenom II in 32 nm would have performed. That hypothetical chip might still embarrass the Piledriver ;-)

    • Re:How about idle?? (Score:5, Informative)

      by ifiwereasculptor ( 1870574 ) on Tuesday October 23, 2012 @02:01PM (#41743703)

      Idle power seems pretty competitive with Intel's Core offerings. Anand found little difference and attributed it to their selection of a power-hungry motherboard.

    • It's not just 50W at full load; there is another graph showing that for the same workload the Intel system used 220.8 watt-hours while the AMD took 352.5 watt-hours. Not only did the Intel system use less power under load, it finished the work quicker.
  • Finally something positive from AMD. While I'm not interested in this CPU since I'm a gamer, the lack of competition has kept Intel CPU prices stagnant. This new AMD CPU seems to have some strength in multi-threaded applications. But then again, a 2-year-old overclocked Intel i5 eats any game you give it, while the same can't be said for a similarly priced video card. So Intel is not all that evil (watching AMD's marketing troll campaign for Bulldozer, on the other hand, made me hate AMD).
    • I'm a gamer too, and I'm actually interested, mainly because the FX-4300 now seems to be a fierce competitor to Intel's i3 while costing quite a bit less. The FX-8xxx still sucks, but this is a major improvement for AMD in the mainstream segment. They were losing to cheaper Pentiums with the FX-4100; it was embarrassing.

      • Actually, check out the AnandTech review. They do it on Windows 8 with the new scheduler... the difference in performance of the FX series from Windows 7 to Windows 8 is fucking mind-boggling. The FX-8350 actually trounces all but the higher-end i5s and gives the i7 a run for its money in several games. For sub-$200, if you're getting Windows 8 (which I apparently will be now), you can't get anything that's even close performance-wise.

    • You do realize that in a game most of your processing occurs on your GPU, right...?

    • by Z00L00K ( 682162 )

      It certainly will make things more interesting - and an increased number of cores is the way to go today. Most applications and operating systems will benefit from multiple cores - even though some applications may benefit from additional tuning.

      However - most bread-and-butter applications today run well on even the cheap processors on the market; it's only when you start to look at things like gaming and advanced processing that you will really benefit from a faster processor. Most computers will be a lot faster

  • by Gothmolly ( 148874 ) on Tuesday October 23, 2012 @01:57PM (#41743633)

    I put together an 8-way, 32GB machine (no local storage) for $400 to play with ESXi. Courtesy of the freebie VMware download and a reasonably priced 8-way machine, I can get into some pretty serious VM work without spending a ton of dough. I don't need massive performance for a test lab.

    • Re: (Score:3, Interesting)

      by h4rr4r ( 612664 )

      Get an SSD.
      Local storage is a must for performance. iSCSI cannot hold a candle to local SSDs. In a lab you won't need to share the storage with multiple machines anyway.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Local storage is a must for performance

        This is hyperbole. If what you're doing is mostly CPU- or memory-intensive and requires very little disk activity, having fast local storage isn't going to help much, if at all.

        Besides, apparently it isn't a must for the grandparent as he stated he doesn't "need massive performance for a test lab."

        Don't get me wrong, using an SSD to provide storage for a handful of VMs is a great idea (massive read/write IOPs), but it isn't necessary.

      • Without iSCSI you can't really use shared storage, which means 90% of the features of ESXi can't be used. That kind of dampens the whole "for a lab" thing.

        SSDs ARE quite sweet for VMs. I'd recommend setting up a VM that serves out a local SSD as iSCSI over an internal ESXi storage network -- that's actually how things were done during my VCP training. I believe they were using FreeNAS (MIGHT have been openfiler) to serve up iSCSI and NFS targets. It's a little buggy but sufficient for a lab.

    • by rrohbeck ( 944847 ) on Tuesday October 23, 2012 @02:52PM (#41744307)

      Same here. I built a Bulldozer machine for compiling projects in VMs last year and it works very nicely. If Intel had had a CPU with ECC memory and hardware virtualization support at a reasonable price I would probably have bought it, but I would have needed at least a $500 Xeon for that, with a more expensive motherboard, and I wouldn't be able to overclock it. For the same performance I have now I would probably have needed a $1k CPU.

    • by Simulant ( 528590 ) on Tuesday October 23, 2012 @04:03PM (#41745215) Journal
      I converted about 20 physical servers to Hyper-V VMs running on two 2-socket/32-core Bulldozer Hyper-V hosts at the beginning of the year and have been thrilled with the results. Those two servers provide more horsepower than my company needs and, in a pinch, either one of them can run all of the critical VMs itself. Not a single problem since deployment. I paid around $11k/box. Intel would have cost quite a bit more in hardware as well as in the per-socket licensing of Hyper-V for that many cores using Intel chips. I was converting >4-5 year old hardware, and as far as the users are concerned, everything is faster now.

      I will keep buying AMD as long as they are cheaper and "good enough", if only to keep some competition alive.

      Still running a quad-core AMD gaming machine at home as well, and it is still playing everything I throw at it.
  • by CajunArson ( 465943 ) on Tuesday October 23, 2012 @02:09PM (#41743777) Journal

    These chips "excel" at big, heavily threaded workloads. Which is to say that they can beat similarly priced Intel chips that are simply up-clocked laptop parts. Move up to a hyperthreaded 3770K (still a laptop part) and Vishera usually loses. Overclock that 3770K to be on-par with the Vishera clocks while still using massively less power than Vishera and the 3770K wins practically every benchmark.

    Unfortunately, if you *really* care about those workloads (as in money is on the line) then Intel has the LGA-2011 parts that are in a completely different universe than Vishera, including using less total power and being much, much better at performance/watt to boot. I'm not even talking about the $1000 chips either; I'm talking about the sub-$300 3820 that wins nearly every multi-threaded benchmark, not to mention the $500 3930K that wins every one by a wide margin.

    So if you want to play games (which is what 90% of people on Slashdot really care about): Intel is price competitive with AMD and you'll have a lower-power system to boot. If you *really* care about heavily-multithreaded workloads: Intel is price competitive because the initial purchase price turns into a rounding error compared to the potential performance upside and long-term power savings you get with Intel.

    Vishera is definitely better than Bulldozer, but AMD still has a long long way to go in this space.

    • GPU's benchmark game performance...

      • GPUs are very important for games, which is why an Ivy Bridge with an on-die PCIe 3.0 controller is going to do better at keeping next-generation GPUs running full-tilt than the PCIe 2.0 controller on the Northbridge of an AM3+ motherboard.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      AMD has never been about pure performance. It's all bang for buck. You can: buy an AMD system for much less than an Intel one, get a motherboard that has a lot more connectivity than the equivalent Intel board for less, AND get true 2x PCIe x16 (while Intel forces you to get an LGA2011 board, which is much costlier). The tradeoff? You'll get a machine with a CPU that performs maybe 10-20% worse in benchmarks than the Intel equivalent. But seriously, who cares in 2012? Most games are GPU-starved, so you're much better off

    • by Anonymous Coward on Tuesday October 23, 2012 @02:29PM (#41744049)

      You are missing an important point when it comes to "money is on the line". No one in their right mind would use a desktop processor from Intel for anything critical. Why? All non-Xeon processors have been crippled to not support ECC memory. If money really is on the line, there is just no way that is acceptable.

      AMD, on the other hand, does not cripple their CPUs at all. The whole Vishera lineup supports ECC memory, as did Bulldozer.
      The Xeon equivalent of the 3820 is in a completely different price league.

      So please, when you compare price and use cases make sure you fully understand which processors are the actual alternatives.

      • Sure, Vishera theoretically supports ECC memory, but you need a motherboard that takes ECC memory to tango... and those are a rare beast in the consumer space, meaning you really are looking at Opteron-socketed motherboards and Opteron chips (which are nowhere near as cheap as Vishera). So there is no free lunch.

      • by Kjella ( 173770 )

        You are missing an important point when it comes to "money is on the line". No one in their right mind would use a desktop processor from Intel for anything critical. Why? All non-Xeon processors have been crippled to not support ECC memory. If money really is on the line, there is just no way that is acceptable.

        Well, if it's critical then I wouldn't want to use a desktop processor or system in any case; you should get a proper Opteron/Xeon server with all the validation and redundancy and a managed environment. As for the general employee, I've yet to see anyone working on a "workstation"-class machine. Unless they need the horsepower for CAD or something like that, my impression is that 99.9%, from the receptionist to the CEO, use regular non-ECC desktops/laptops to do their work, and I'm pretty sure that means money

    • When you talk about Intel being price competitive, it depends.

      AMD clearly wins on budget gaming systems where the processors and motherboards are much cheaper, but Intel has the fastest high end systems out there right now.

      Just a couple months ago I priced two builds with similar benchmark numbers on NewEgg, and the AMD budget gaming rig was around $800, and the Intel equivalent was around $1100.

    • Look at the Phoronix benchmarks. Vishera beats the 3770K in many benchmarks as long as you're running multithreaded code.
      http://www.phoronix.com/scan.php?page=article&item=amd_fx8350_visherabdver2&num=1 [phoronix.com]
      And do the math about power savings. Unless you're a folder you'll need several years to recoup the additional cost of an Intel CPU.
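
      A rough payback sketch behind that claim, using the ~$135 price gap (i7-3770K at $330 vs. FX-8350 at $195) and the ~50 W load-power gap from the summary; the electricity rate and daily load hours below are assumptions:

      ```python
      # Years for the i7-3770K's price premium to pay for itself in electricity.
      # Assumed rate: $0.11/kWh; hours of heavy load per day are a guess.
      def payback_years(price_delta_usd, watts_saved, load_hours_per_day, usd_per_kwh=0.11):
          kwh_saved_per_year = watts_saved / 1000.0 * load_hours_per_day * 365
          return price_delta_usd / (kwh_saved_per_year * usd_per_kwh)

      print(payback_years(135, 50, 24))  # ~2.8 years if fully loaded around the clock
      print(payback_years(135, 50, 4))   # ~17 years at 4 hours of heavy load per day
      ```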

      • You have a very interesting definition of "many". I would say that the non-overclocked 8350 beats a 3770K in "a few" multi-threaded benchmarks, usually by a small margin, while still losing by much much larger margins in many other multi-threaded benchmarks (in fact the Phoronix article barely has any lightly-threaded benchmarks in the mix at all).

        The Vishera OC'd to 4.6 GHz wins a few more benchmarks, but, as I said above, all you have to do is apply an easy OC to Ivy Bridge to get up to 4G

        • I would say that the non-overclocked 8350 beats a 3770K in "a few" multi-threaded benchmarks, usually by a small margin, while still losing by much much larger margins in many other multi-threaded benchmarks (in fact the Phoronix article barely has any lightly-threaded benchmarks in the mix at all).

          Yes, that's what you would say, and the last thing you would do is mention that the 3770K costs a whopping 50% more than the FX-8350.

          Obviously your fanboyism has clouded your ability to grasp the situation. The AMD chip which beats this Intel chip in "a few multi-threaded benchmarks" is doing it for significantly less cost.

          If you really want to do a point-for-point comparison, and count up how many benchmark tests a CPU wins or loses by, then you need to note the cost as well, or what's the point of "poi

          • by 0123456 ( 636235 )

            Obviously your fanboyism has clouded your ability to grasp the situation. The AMD chip which beats this Intel chip in "a few multi-threaded benchmarks" is doing it for significantly less cost.

            So Intel are presumably beating the snot out of AMD's margins, as the AMD chip is twice the size of the i7 yet has to sell for a fraction of the price because it's unable to compete.

            And unless you're overclocking, I believe the i7 3770 will have the same performance as the 3770K at a lower price (about $30-40 less last I looked).

            • So Intel are presumably beating the snot out of AMD's margins, as the AMD chip is twice the size...

              Typical fanboy knows a fact or two but doesn't know what he is talking about (transistor counts are about equal), and why would I care about AMD's margins anyway?

              You seem to want to poke at AMD without considering what it means to you. I guess if you want to ignore the fact that you have to bend over and let Intel stick it right into you in order to "enjoy" an equally performing chip.. that's fine by me. Have fun paying more.. it makes you look really smart.. really.. it does.. paying more for the same performance

          • I'm an engineer, not a fanboy. It takes a whole lot more engineering talent to design a laptop chip like the 3770K that is faster in the large majority of benchmarks, including the large majority of multithreaded benchmarks, than a chip which is twice the size, has a much, much higher transistor budget, uses much larger caches, has a 50-100% larger practical power envelope, and runs at 15% higher clock speeds.

            I could hand a price gun to a homeless guy out on the street and have him slash the price of practica

  • tl;dr version (Score:5, Informative)

    by gman003 ( 1693318 ) on Tuesday October 23, 2012 @02:24PM (#41743983)

    New AMD processor: higher clocks than the last one but no massive improvements performance-wise. Still rocks at multi-threaded, integer-only workloads, still sucks at single-threaded or floating-point performance, still uses a huge amount of power. AMD is giving up on the high end; their top-end parts are priced against the i5 series, not the i7. Since Intel is overpricing stuff, they're still roughly competitive. Might be good for server stuff, maybe office desktops if they can get the power down, but not looking good for gaming. Overall mood seems to be "AMD isn't dead yet, but they've given up on first place".

    There. Now you don't need to read TFAs.

  • by coder111 ( 912060 ) <coder.rrmail@com> on Tuesday October 23, 2012 @02:25PM (#41743999)
    I've read through some of the reviews. It looks like a nice CPU with a bit too high power usage for my taste.

    And please take benchmark results with a pinch of salt: most of them are compiled with the Intel compiler and will post lower results on AMD CPUs just because the Intel compiler disables a lot of optimizations on AMD CPUs (a conceptual sketch of this follows below).

    I don't know of any site which would have Java application server, MySQL/PostgreSQL, Python/Perl/Ruby, Apache/PHP, or GCC/LLVM benchmarks under Linux. Video transcoding or gaming on Windows is really skewed and nowhere near what I do with my machine.

    --Coder
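
    The optimization-disabling mentioned above refers to the Intel compiler's runtime dispatcher picking code paths based on the CPUID vendor string rather than on the feature flags a CPU actually reports. A purely conceptual sketch of that behavior -- the function and its arguments are hypothetical illustrations, not ICC's actual code:

    ```python
    # Conceptual illustration only: dispatch on the CPUID vendor string instead of
    # on the SSE/AVX feature flags the CPU reports. Not the Intel compiler's real code.
    def pick_code_path(vendor, has_sse2, has_avx):
        if vendor == "GenuineIntel" and has_avx:
            return "avx_path"
        if vendor == "GenuineIntel" and has_sse2:
            return "sse2_path"
        return "generic_path"  # non-Intel CPUs fall through even if they support SSE2/AVX

    print(pick_code_path("AuthenticAMD", has_sse2=True, has_avx=True))  # -> generic_path
    ```
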
    • I think Phoronix took Vishera through some GCC tests. They have another article about GCC optimizations for the Bulldozer architecture in general (it seems to improve some workloads by quite a bit).

      • by coder111 ( 912060 ) <coder.rrmail@com> on Tuesday October 23, 2012 @03:02PM (#41744425)
        I read Phoronix a lot these days. I find more technical news there than on Slashdot.

        However, their benchmarks are often flawed. For example, they did a Linux scheduler benchmark recently which measured throughput (average this or that) and not latency/interactivity (response times), which was totally useless. Well, OK, you can consider it a test checking for throughput regressions in interactivity-oriented schedulers, but it did not measure interactivity at all.

        And regarding their Vishera benchmark, they measured most of their standard stuff, mostly scientific calculation, video/audio encoding, image processing, rendering. I very rarely do any of this.

        The developer related benchmarks they had were Linux kernel compilation times (Vishera won), and you might count OpenSSL as well. They didn't do PostgreSQL, they didn't benchmark different programming languages, nor application servers, nor office applications, nor anything that would really interest me. I wish someone would measure Netbeans/Eclipse and other IDE performance.

        And anyway, did you notice that AMD usually does much better in Phoronix reviews than on Anandtech/Tom's Hardware/whatever sites? That's because Phoronix doesn't use the Intel compiler or Windows, so the results are much less skewed.

        --Coder
        • You're talking about the low-jitter Linux kernel testing where they didn't test jitter at all, right? I've seen that one, it was embarrassing. However, their often incomplete (and sometimes flawed) testing is still the best we can get for Linux.

          Linux already has scheduler fixes for Bulldozer, too, unlike Windows 7. Not a game changer, but it does grant another 1-3% performance, if I remember correctly. Not that the Intel compiler doesn't cripple AMD quite a bit - just run Handbrake on Windows on both platforms

  • For linux... (Score:5, Insightful)

    by ak3ldama ( 554026 ) on Tuesday October 23, 2012 @02:34PM (#41744105) Journal
    Here is a set of benchmarks that are more centered on the Linux world, from Phoronix [phoronix.com], and thus a little less prone to Intel compiler discrimination. The results seem more realistic: better than, worse than, and similar to an i7 at different work; still hard on power usage; low purchase price.
  • The new Trinity, and now these FX Procs, are perfect for "the 99%," that is to say for what 99% of people do with their machines: surf the web, check email, maybe do some photo editing or piecing together home movies.

    They're cheap, reasonably fast, and support all the latest extensions and optimizations. Plus, even for enthusiast prosumers who want to screw around with things like virtualization, you can get into the IOMMU space cheaply with AMD, which is nice for a platform like ESXi that has a mature PCI

    • Don't underestimate these things for video encoding or playing around with scientific computing. They're great for embarrassingly parallel computing problems, and the price is very good.
    • by 0123456 ( 636235 )

      The new Trinity, and now these FX Procs, are perfect for "the 99%," that is to say for what 99% of people do with their machines: surf the web, check email, maybe do some photo editing or piecing together home movies.

      I did all those things on a Pentium-4.

      I agree about AMD's low-end CPUs, but why would you want an 8-core CPU to do any of those things? Does an email program really need eight cores these days?

      • by Billly Gates ( 198444 ) on Tuesday October 23, 2012 @03:24PM (#41744699) Journal

        I am typing this on a Phenom II 6-core system. It is quiet, 45 watts, and at the time (2010) it was only 10-15% slower than a Core i5. What did I get that the Intel Core i5 didn't?

        - My whole system including the graphics card was $599! Also an Asus motherboard, by the way, and one of their extended-warranty boards.
        - A non-crippled BIOS where I can run virtualization extensions (most Intel motherboards turn this off except on Core i7s)
        - 45 watts
        - My ATI 5750 works well with the chipset
        - The AM3 socket can work with multiple CPUs after BIOS updates.

        What the Core i5 has:
        - It is made by Intel
        - It is 15% faster
        - The cost of the CPU alone is 2x the price, and I can pretty much include a motherboard as well if you are talking up to Core i7s.

        A Core i7 system costs $1200 at the store. A Core i5 gaming system similarly specced costs $850 and does not include virtualization support to run VMware or VirtualBox.

        The FX systems... bleh. I am not a fan. But for what I do, AMD offered a quieter, cheaper system that could run Linux VMs and can be upgraded more easily. To me, my graphics card and hard drive are the bottlenecks. I would rather save money on the CPU. I was so hoping AMD would use this to have great graphics for tablets and notebooks :-(

        • Agreed on the core count. I too have a 6-core Thuban and I would never go back to less.

          Sure, when all I am doing is surfing the web then it doesn't matter, but most of the time that I am surfing the web I've got other processes going that are actively doing work.. on top of the browsing I frequently have Netflix/Hulu running, and either a big compile or a video encode is also sometimes taking place (in fact, when something really time-consuming begins, I am *always* browsing the web during it)

          The system
        • by Kjella ( 173770 )

          I am typing this on a Phenom II 6-core system. It is quiet, 45 watts (...) - 45 watts

          I guess if the facts don't support your argument, make something up. All the Phenom II 6-core CPUs have either a 95W or a 125W TDP [wikipedia.org]. But yes, the X6 was quite a competitive chip, offering you 50% more cores for about the same money as an Intel quad. Anandtech's conclusion [anandtech.com] did have a prelude to what was coming, though:

          You start running into problems when you look at lightly threaded applications or mixed workloads that aren't always stressing all six cores. In these situations Intel's quad-core Lynnfield processors (Core i5 700 series and Core i7 800 series) are better buys. They give you better performance in these light or mixed workload scenarios, not to mention lower overall power consumption.

          Let's look at what has happened since 2010:
          Cinebench R10 single-threaded: [anandtech.com]
          Intel Core i5 750: 4238
          Intel Core i5 3570K: 6557
          AMD Phenom II X6 1090T: 3958
          AMD FX-8350: 4319
          Intel has improved 55%. AMD? 9%
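
          A quick check of those percentages against the scores listed above:

          ```python
          # Single-threaded Cinebench R10 gains, 2012 parts vs. 2010 parts, from the scores above.
          intel_gain = 6557 / 4238 - 1  # i5 3570K vs. i5 750
          amd_gain = 4319 / 3958 - 1    # FX-8350 vs. Phenom II X6 1090T
          print(f"Intel: {intel_gain:.0%}, AMD: {amd_gain:.0%}")  # Intel: 55%, AMD: 9%
          ```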

    • The new Trinity, and now these FX Procs, are perfect for "the 99%," that is to say for what 99% of people do with their machines: surf the web, check email, maybe do some photo editing or piecing together home movies.

      If you're building a "good enough" system for a non-technical user, why in the world would you even consider a Vishera FX CPU? It's expensive, power-hungry, and has a high TDP. And it doesn't even have integrated graphics, so you'd have to add the expense of a discrete graphics card.

      For an in

      • What's the intended audience for Vishera?

        The audience is people that want the most performance they can get for their $140, and that's not the "I shopped around to find a great price on CPUs that have piled up in someone's inventory" price.. that's the "I went to the place the most people trust" price.. The straight NewEgg introductory price for the FX-6300 Vishera is $139.99.

  • SATSQ (Score:5, Informative)

    by JDG1980 ( 2438906 ) on Tuesday October 23, 2012 @03:12PM (#41744533)

    AMD FX-8350 Review: Does Piledriver Fix Bulldozer's Flaws?

    No. It still guzzles power like crazy compared to Sandy/Ivy Bridge, and its single-threaded performance still sucks royally. (And that's still very important since many, many programs cannot and will not ever support full multithreading.)

    • Low-end would be fine, but it would need to be low-power there too. Ivy Bridge CPUs are great at sipping power, particularly the dual-core variety. So if what I'm doing is just really light usage like web surfing and so on, I'm better off with that.

      For heavier usage, well, the Intel CPUs are better, particularly at floating-point calculations, which is where most heavy performance is these days, at least on desktops. All the programs I can think of that I have which hit the CPU really heavily are doing FP stuff, not

  • by Billly Gates ( 198444 ) on Tuesday October 23, 2012 @03:55PM (#41745111) Journal

    Man I really want AMD to win!

    I am typing this on a Phenom II, which is a better chip in my opinion and was fast at the time (by 2010 standards, unfortunately). But these things run well over 130 watts, are loud with huge freaking fans at 4.4 GHz, and it seems AMD is trying to pump out as much clock speed as possible to beat Intel's lowest-end chips.

    Just call it Pentium 4 2.0 while we are at it? I am not a fan of Intel because I run VMware and hate that Intel cripples its chips and the BIOS to exclude virtualization on all but the most expensive units. I hate the cost of a high-end Core i7, which in 2010 was only 10-15% faster than a Phenom II but cost 400% more; I can buy a whole system for the cost of a single Intel Core i7 Extreme.

    Well, gentlemen, expect dark days ahead and a return to $1000 desktops, $500 chips, and virtualization only available on Xeon chips by next year. :-(

    With AMD at junk status [arstechnica.com], it is bound to happen now, since these chips can't match Intel's offerings.

  • AMD is getting spanked badly in per-core performance. AMD was actually quite competitive a while back. From the benchmarks it looks like Intel had a very substantial leap in per-core performance with one generation of their Core architecture. What did they do that made such a huge gain? And it's not that they're ahead on fab process. Single-core, per-clock performance jumped. What's up with that? How'd they do it?
  • by sa1lnr ( 669048 ) on Tuesday October 23, 2012 @11:19PM (#41748497)

    When you call an 8-core, 4GHz CPU low-end.
