AMD Hardware

AMD 'Bulldozer' FX CPU Reviews Arrive

I.M.O.G. writes "Today AMD lifted the embargo on their most recent desktop AMD FX architecture, code-named Bulldozer, whose CPU frequency record Slashdot recently covered. The fruition of six years of AMD R&D, this new chip architecture is the most significant news out of AMD since the Phenom II made its debut. The chips are available now in all major retail outlets, and top-tier hardware sites have published the first Bulldozer reviews already." Here are reviews from a few different sites — pick your favorite: Tom's Hardware, PC Perspective, Hot Hardware, [H]ardOCP, or TechSpot. They don't agree on everything, but the consensus seems to be that the new chips aren't blowing anyone's socks off, and that they struggle to compete with Intel's comparable offerings. The architecture shows promise, but performance gains will take time to materialize, making it difficult to leapfrog Intel to any significant degree.
This discussion has been archived. No new comments can be posted.
  • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday October 12, 2011 @07:24AM (#37688774) Journal
    The comparisons seemed pretty much benchmark-based, with a side of price comparison. Fair enough, as these are pitched as 'enthusiast' parts; but it left me wondering about one thing:

    Of late, Intel's somewhat confusing set of model numbers has been distinguished, in addition to differences in speed, by various features being lasered off of certain parts but not others, mostly virtualization-related stuff. AMD generally left those on at all times and distinguished primarily by speed.

    Does anybody have an idea how the price/performance comparisons change (if in fact they do) from the pure-benchmark ones given in TFAs, if the buyer requires that all the relevant virtualization features be enabled?
    • by pankkake ( 877909 ) on Wednesday October 12, 2011 @07:36AM (#37688844) Homepage
      All AMD CPUs support ECC, for instance, so if you require ECC memory it's much cheaper to go with AMD.
      • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday October 12, 2011 @07:46AM (#37688930) Journal
        This is partially the motherboard makers' fault, since they can generally scuttle such features in the BIOS even if they're enabled on die (laptop makers, in particular, seem to revel in doing this); but Intel's "VT-x", for various values of x, is a pit of confusion, and some of those VT-x's make a significant difference for VM workloads.

        It's of interest to me because my next build/config to order is likely to be primarily for VM hosting, with routine desktop/workstation tasks taken care of by the fact that modern CPUs are fast as hell. Unfortunately, a lot of the enthusiast benchmarks generally focus on running Medal Of Warfare fast and cheap, and the virtualization benchmarks generally start from the assumption that you are looking to buy a palletload of 1Us...
    • On the whole, you only see these features turned off on the lower-end CPUs.

      The i7 2600K has all of the bells and whistles enabled. Except maybe ECC memory...
      • by Kjella ( 173770 )

        The i7 2600K has all of the bells and whistles enabled. Except maybe ECC memory...

        No, it doesn't have VT-d or TXT enabled; the 2600 (not the K) does, though. It sells for a fraction less, has slightly worse integrated graphics and is multiplier-locked; it's basically the business version of the 2600K. And like you say, if you want ECC you must get Xeons.

          • The i7 2600 is a $300 CPU. I thought it was really cool that you could get a Sandy Bridge CPU for as low as $57 (G530), and the Newegg page does list "Virtualization Technology Support." VMWare Workstation would still run fine with 1 or 2 guests without VT-d, wouldn't it? Lower speed for a lower price makes sense, but "you cannot run application X on this CPU" is more troubling.
          • by Kjella ( 173770 )

            VMWare Workstation would still run fine with 1 or 2 guests without VT-d, wouldn't it?

            Depends on what you're doing. VT-d is I/O virtualization (directed device access for guests). Both have VT-x, so CPU-intensive guests should run just fine with or without it, but disk access will be slower without it. In my experience running virtualization without it, though, it's not that slow anyway.
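
            If you want to sanity-check what a given box actually exposes, here's a minimal sketch for Linux (assuming the standard /proc/cpuinfo flag names: vmx is VT-x, svm is AMD-V; VT-d/AMD-Vi is a chipset feature, so it shows up as an IOMMU in sysfs rather than as a CPU flag):

            ```python
            #!/usr/bin/env python3
            """Rough check of hardware virtualization support on Linux."""
            import os

            flags = set()
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("flags"):
                        flags.update(line.split(":", 1)[1].split())
                        break

            print("VT-x (vmx):", "vmx" in flags)
            print("AMD-V (svm):", "svm" in flags)
            # Nested paging (EPT on Intel, NPT on AMD) matters for VM memory performance.
            print("EPT/NPT:", bool({"ept", "npt"} & flags))
            # An enabled IOMMU (VT-d / AMD-Vi) shows up here once the BIOS turns it on.
            iommu_dir = "/sys/class/iommu"
            iommus = os.listdir(iommu_dir) if os.path.isdir(iommu_dir) else []
            print("IOMMU:", iommus if iommus else "none visible")
            ```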

    • by vlm ( 69642 )

      Does anybody have an idea how the price/performance comparisons change (if in fact they do) from the pure-benchmark ones given in TFAs, if the buyer requires that all the relevant virtualization features be enabled?

      If I were to set up a business to do this, to cut through the incredibly frustrating marketing from both chip manufacturers in exchange for a small cut of the price, would I currently have any competition?

      The point of a confuse-opoly like CPUs or American cellphone contracts is to screw over the buyer by confusing them. Aside from screwing over the buyers, it also creates a business opportunity for someone to intermediate themselves while un-screw-up-ing the marketplace.

      The general class of idea is something…

      • I generally don't find CPUs too confusing as they don't change toooo often, but graphics cards I just stopped trying to keep up with years ago.

        When I bought a card recently I just googled "x-card vs y-card"; hwcompare.com [hwcompare.com] was generally the top result. It has lots of automatically generated pages which compare benchmarks of one card vs another. If you made a similar site for comparing CPUs/mobos within certain categories or price ranges then you might make some money from advertising revenue, especially…

      • The problems with your business idea are (1) it's a niche business (since most people these days just take whatever CPU their OEM gives them) and (2) the people in the niche find figuring stuff like this out for themselves to be enjoyable, not tedious. They're hardware nerds -- that's why they're building their own rigs in 2011 -- and that means they love poring over spec sheets.

        Generally speaking, good businesses are found by looking for things that people hate doing and offering to do it for them for a small fee…

    • The part most benchmarks are concentrating on is the 8150, which costs retailers $250 – i.e. it ends up costing about the same as an i7 2600 by the time it reaches your wallet. Unfortunately, it seems to perform worse than an i5 2400, so... fail.

      • I've only seen the 8150; no other chips out there yet.

        My main comparison point is the X6, since I'm looking at upgrading from an X4 940. So far it looks like the 8150 has serious trouble beating the X6 1100T in anything but the heaviest threading and a few x264 encoding benches. So it looks like I'll pick up a 1090T for 100 bucks less than the 8150.

        Bulldozer might be interesting from an architectural standpoint, but to me it looks like they gimped the execution hardware and tried to make up for that with rather…

        • by TheLink ( 130905 )

          subsequently Global Foundries fumbled the ball on the production, meaning Bulldozer isn't hitting the clock speeds needed to be competitive.

          125W TDP, 3.6GHz and still not competitive? Why don't they rename it P4/Prescott AMD Edition while they're at it?

          Kinda ironic don't you think?

          • Very ironic indeed. It is very sad to see AMD gamble on the good old NetBurst play and then foul it up (as was to be expected, really). Even if they had the process control which Intel enjoys, building something a tad inefficient and praying for the clocks to compensate is just stupid.

            It also reminds me of the original Phenom: AMD overreached themselves on features, didn't provide solid IPC improvements (although for Phenom, there actually was some improvement over K8), and then fumbled the clock speed so as to m…

  • by CajunArson ( 465943 ) on Wednesday October 12, 2011 @07:28AM (#37688790) Journal

    buy a 6 core Phenom II, overclock it, and pray that AMD can stay around long enough to fix this mess.

    Go check the TechReport review and look at the price/performance chart: the 2500K has slightly higher performance, lower price, and *much* better energy efficiency.

    Go look at the LKML where you'll see Linus & Ingo Molnar calling out AMD for design flaws in Bulldozer's cache that AMD wants to paper-over with kludgy software workarounds in the kernel: http://us.generation-nt.com/answer/patch-x86-amd-correct-f15h-ic-aliasing-issue-help-204200361.html [generation-nt.com]

    I feel bad for AMD's engineers. I *don't* feel bad for the marketing hype machine that has been relying on "geek-cred" from sites like Slashdot and the usual David vs. Goliath myth to get unearned praise. If Intel had come out with Bulldozer instead of AMD, we'd be calling this Prescott version 2.0.
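
    For anyone curious what that LKML thread is about, here's a toy sketch of the aliasing idea (simplified: per the patch discussion, the relevant I-cache structures are indexed by virtual-address bits 14:12, though the real indexing is more involved than this):

    ```python
    # Toy model of the Family 15h (Bulldozer) I-cache aliasing issue from
    # the LKML thread: predecode/branch structures are indexed by virtual
    # address bits [14:12], so code mapped at addresses that agree in those
    # three bits contends for the same entries across processes.
    def icache_index(vaddr: int) -> int:
        return (vaddr >> 12) & 0b111  # bits 14:12

    a = 0x7f1234568000  # e.g. a shared library mapped in process A
    b = 0x7f9876540000  # the same library mapped elsewhere in process B
    print(icache_index(a), icache_index(b))  # equal -> the two mappings alias
    ```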

    • You might want to buy some sort of AMD processor, if only to decorate the shelf, even if you prefer Intel... Consider it an investment.

      The...optimism... of Intel's pricing guys can get a touch out of hand when their competitors get weak.
      • by dbIII ( 701233 )
        The prices of the Intel ten-core stuff are insane, on the level of "if you have to ask, you can't afford it", to take an example from the server space. Meanwhile there are slightly slower AMD 12-core CPUs for under $1000.
        • by Kjella ( 173770 )

          The prices of the Intel ten-core stuff are insane, on the level of "if you have to ask, you can't afford it", to take an example from the server space. Meanwhile there are slightly slower AMD 12-core CPUs for under $1000.

          Intel's most expensive chip is $4616, AMD's is $2649. True, you can get AMD chips for under $1000, but then it's no longer very fair to compare them to Intel's most expensive ones either. The 10-cores are more like the Extreme Edition chips, which despite having two fewer cores perform much higher than the fastest Opteron...

          • by SQL Error ( 16383 ) on Wednesday October 12, 2011 @08:50AM (#37689528)

            We've benchmarked the 10-core 2.0GHz E7 Xeons against the 8-core 2.0GHz Opteron 6128. The Opteron CPUs deliver about 70% of the performance on our workload for about 12% of the price.

            The AMD motherboards are much cheaper too.

            Bulldozer is underwhelming on the desktop, but it could still deliver great price/performance in the server market. We'll soon see.
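
            Running the parent's own numbers makes the gap concrete (normalizing the Xeon box to 1.0 on both axes; the 70%/12% figures are the rough ones quoted above):

            ```python
            # Rough perf-per-dollar from the figures above: the Opteron 6128 setup
            # delivers ~70% of the Xeon E7 throughput at ~12% of the CPU price.
            xeon_perf, xeon_price = 1.00, 1.00   # normalized baseline
            opt_perf, opt_price = 0.70, 0.12     # quoted workload figures
            print(opt_perf / opt_price)          # ~5.8x the Xeon's perf per dollar
            ```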

          • by dbIII ( 701233 )

            which despite having two fewer cores perform much higher than the fastest Opteron.

            Only for single-threaded applications, which are now less relevant than they used to be. Of course, you only look at 12 cores (or ten) if you are going to use all of them most of the time.

        • I just bought a dual 12-core server from Dell. The difference (all other specs being equal) between the R810 with two 10-core Xeons and the R815 with two 12-core AMDs is about $8k (we also have 256GB of RAM in them; the Intel RAM was more expensive for some reason, even though the speed was the same). That difference in price is going towards a Fusion-io card, which will be a nice little benefit to our IO performance on our database.

        • by Bengie ( 1121981 )

          When you drop $60k on a server and another $20k in licensing fees, you really don't care about shaving $2-3k off the price with a slower processor if it means ordering more servers. Give me the fastest CPU.

          • When you drop $60k on a server ...then you're doing it wrong. You can get a fully tricked out 48x 2.5GHz 6100 setup with 512GB RAM from SuperMicro for about GBP 16,000. If you want to throw in a 2U case with loads of disks you might go up to 25,000. But 60k for a server? You're obviously doing something exotic.

            another $20k in licensing fees, you really don't care about shaving $2-3k

            That's per processor, by the way, of which there are four. That's a saving of $8k to $12k, which is beginning to get significant, especially…

            • by Shinobi ( 19308 )

              Or you may just as likely have 48 very CPU-intensive non-parallelizable tasks.

              Or, most likely of all, looking across all the different fields: you have a mix of tasks that utilize CPUs differently, and find that at peak use you need 48 cores.

              • Or you may just as likely have 48 very CPU-intensive non-parallelizable tasks.

                That's kind of the definition of parallelizable and is the ideal case. Actually, it's the case I have. It means that I pay a hefty premium for the fast HT links and large system image, but it's still the cheapest way of getting high density computing.

                Or, most likely of all, looking across all the different fields: you have a mix of tasks that utilize CPUs differently, and find that at peak use you need 48 cores.

                The cluster I use s…

                • by Shinobi ( 19308 )

                  "That's kind of the definition of parallelizable and is the ideal case. Actually, it's the case I have. It means that I pay a hefty premium for the fast HT links and large system image, but it's still the cheapest way of getting high density computing."

                  No, the definition of embarrassingly parallel, aka the ideal case, is when a single task can easily be spread over multiple processors with little performance loss due to overhead/locks/stalls or simply waiting for other processors to finish their job.

                  Raytracing…

                  • No, the definition of embarrassingly parallel, aka the ideal case, is when a single task can easily be spread over multiple processors with little performance loss due to overhead/locks/stalls or simply waiting for other processors to finish their job.

                    Wow, that's arguing semantics. If you have 48 independent tasks to run, then your problem (the one the tasks belong to) is embarrassingly parallel and won't even stress your interconnects. It happens to be the case I have. The problem splits into a large number of e…
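
                    The "48 independent tasks" case really is the trivial one to scale; a minimal sketch (the task body is just a stand-in for a real workload):

                    ```python
                    # 48 independent tasks: no shared state, no communication, so
                    # throughput scales with core count until the pool is saturated.
                    from multiprocessing import Pool

                    def simulate(seed: int) -> float:
                        """One self-contained task; it never talks to its siblings."""
                        x = seed
                        for _ in range(10**6):  # stand-in for real work
                            x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
                        return x / 2**64

                    if __name__ == "__main__":
                        with Pool(48) as pool:  # one worker per core on a 48-way box
                            print(sum(pool.map(simulate, range(48))) / 48)
                    ```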

          • by dbIII ( 701233 )
            The difference is that even with four twelve-core AMD CPUs and 512GB of RAM you won't be dropping as much as $60k on the server. Now do you get the point?
            The CPU price difference alone would be $8000+ at the very top end (assuming the Intels are four-way capable), and a bit more than that if you shave a few hundred MHz off the speed. Of course the boards are also a hell of a lot cheaper for some reason. You also get eight more real cores, which you are going to care a lot about if you are even considering such a machine…
    • AMD had a real good run in the early 2000s; at one point AMD was actually selling more PCs with its chips than Intel. Then Intel's Core 2 Duo processors came out and AMD had to go back to catch-up mode again.

      But I have stopped watching the processor market as closely as I did before. Then I wanted to build myself a PC... I was like, Dag-Nabit! They seem to name all the chips with a code name and a number... Now I would expect the larger number next to the code name to mean it is a better chip than the previous code name…

      • by Kjella ( 173770 )

        AMD had a real good run in the early 2000s; at one point AMD was actually selling more PCs with its chips than Intel. Then Intel's Core 2 Duo processors came out and AMD had to go back to catch-up mode again.

        I'm pretty sure they didn't; it was just a majority of the retail sales outside the big OEMs. I don't think AMD ever had the fab capacity to supply over 50% of the total market.

    • by epine ( 68316 )

      That's a pretty constructive dialog. Is that the norm on LKML these days? Linus, I feel your pain.

      Argh. This is a small disaster, you know that, right? Suddenly we have user-visible allocation changes depending on which CPU you are running on. I just hope that the address-space randomization has caught all the code that depended on specific layouts.

      But honestly, it's not like the transition to AMD64 was all that smooth, either. We're all members of the breakage-of-the-month club. Others should be careful what they drool over.

      • I don't think BD is a failure because it can't beat Sandy Bridge in every benchmark. I think it's a failure because it is a *massive* (2 BILLION transistor) chip with a very large (315 mm^2) die and a LARGE power envelope that still doesn't beat a 2600K, even with higher clockspeeds and even on highly multithreaded code where BD is supposed to be superior.

        If AMD had come out with a chip that had the same performance as BD but was much smaller and more power efficient (basically the chip that AMD promised rather…

        • Look at the transistor count for the i7 - it's certainly pushing 2 billion just like the AMD designs. The big difference is the direction AMD is taking the thing.

          What I suspect is that another couple of generations from now is when we'll start seeing the real benefit of AMD's design.

          Some points to think about

          1. APU = FPU
          2. GPU != FPU
          3. Power Consumption

          I suspect AMD is moving back towards slot-based board designs (Slot A) and putting the entire computer onto a card. The only things a mobo will need to provide are things like…

          • by Bengie ( 1121981 )

            APU != FPU... not directly, anyway.

            The APU works just like a GPU; the only real difference is the 1-2 orders of magnitude lower communication latency (shared L3 does that), which allows smaller matrices of data to be computed effectively.

            APU = super low latency, but mediocre throughput, GPU

            Huge potential, but it still has the basic limitations of a GPU for the type of work, just not the amount of work.
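
            A back-of-envelope sketch of why that latency difference decides where offload pays off (all numbers here are made-up round figures, purely to show the shape of the argument):

            ```python
            # Crossover for offloading an n x n matrix multiply:
            # total time ~ dispatch latency + FLOPs / throughput.
            def offload_time(n, latency_s, gflops):
                flops = 2 * n**3  # multiply-adds in an n x n matmul
                return latency_s + flops / (gflops * 1e9)

            for n in (64, 256, 1024):
                apu = offload_time(n, latency_s=2e-6, gflops=400)    # low latency, modest rate
                gpu = offload_time(n, latency_s=20e-6, gflops=2000)  # discrete: 10x the latency
                print(n, "APU wins" if apu < gpu else "GPU wins")
            ```

            With these toy numbers the small matrix lands on the APU and the large ones on the discrete GPU, which is the whole point of the shared-L3 argument above.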

      • Intel's Pentium was greeted with ridicule in its smoking hot 60MHz incarnation (15 watts, can you believe that?). It went on to great success after a die shrink.

        I talked to the head of the Pentium Pro to Pentium 4 projects (after he left Intel), and he said that their first power wakeup call came with that chip, when they were told by a company in New York that they couldn't upgrade their desktops because their building's power supply wouldn't be able to cope with the increased load. Sadly, it wasn't until after the Pentium 4 that they really learned this lesson to any degree.

      • Prescott was designed as it was for bad engineering reasons. They were trying to glue an extra pair of legs onto the frequency horse, with no concern for hay consumption; it didn't end well.

        Well, wait a minute. Obviously it turned out to be a misstep, but what reason do you have for thinking they knew that going in and were just being disingenuous?

        • Well, wait a minute. Obviously it turned out to be a misstep, but what reason do you have for thinking they knew that going in and were just being disingenuous?

          They didn't know. They (the engineers) thought they were just pursuing the next logical step on the path marketing had decided on with the original P4: selling on high frequency. They had very convincing data showing that they could extend the P4 architecture to well above 10GHz and get good performance. Nobody in the industry called them on it, because nobody else saw the problem that was just around the corner either.

          At 60nm, the leakage current of transistors blew up. What was previously a minor problem…
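
          The arithmetic behind that wall is simple enough (illustrative numbers only, not Prescott's actual electricals): dynamic power goes as C·V²·f, and reaching a higher f needs a higher V, so power grows much faster than clock even before the exploding leakage is added in.

          ```python
          # Dynamic power scales as C * V^2 * f; higher clocks also need more
          # voltage, so power rises far faster than frequency does.
          def dynamic_power(freq_ghz, volts, cap=1.0):
              return cap * volts**2 * freq_ghz

          base = dynamic_power(3.0, 1.20)
          hot  = dynamic_power(4.5, 1.45)   # 1.5x the clock needs a voltage bump too
          print(hot / base)                 # ~2.2x the power for 1.5x the frequency
          ```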

      • But honestly, it's not like the transition to AMD64 was all that smooth, either. We're all members of the breakage-of-the-month club. Others should be careful what they drool over.

        The 64-bit AMD chips, when considered in conjunction with the relevant chipsets and compared to the Intel offerings of the day with their chipsets, were astounding examples of efficiency and performance, and while there were real compatibility issues, they were at least fairly scarce.

        Intel's Pentium was greeted with ridicule in its smoking hot 60MHz incarnation (15 watts, can you believe that?). It went on to great success after a die shrink.

        This is the time when AMD actually failed, with the K6 and its ever so slightly incompatible FPU implementation. Lucky for them, the Pentium had 0.99999999997 or two FPU problems of its own.

        I'm always concerned for it…

        • by Shinobi ( 19308 )

          The problem for AMD is that over the lifespan of a cluster/supercomputer/data center, the major cost isn't manpower; it's floorspace, power and cooling. These Bulldozer cores use drastically more power and run MUCH hotter than the Intel parts that are 10 months older. Also, not all workloads (even in science) are easily parallelized, so the overall balance of performance advantage leans towards Intel.

          Using the same memory, SSDs, GPU and such, the FX-8150 gurgles down 79 watts more under heavy load than the i7-2600K.

          • Are you counting the cost of the chipset? In the past, Intel has been horribly bad at producing chipsets that don't suck down power, but I honestly have no idea whether they've rectified that situation or not. The early Athlon 64 laptops actually had desktop parts in them and still had power consumption comparable to their Intel-based competitors, due in part to the more efficient chipset.

            • by Shinobi ( 19308 )

              This is entire-system power use, minus the monitor. The systems also used the same PSUs, to remove that factor from the comparison.

              The Sandy Bridge chipset and CPU revisions really cut down power consumption.

              The i7-2600K put under heavy load sucked down 164 watts; the FX-8150 sucked down 243 watts. The i5-2500K sucks down 148 watts under the same heavy load. Another interesting comparison is the A8-3850, which sucks down 165 watts under the same heavy load.
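
              That 79 W load delta compounds. Here's what it costs per box per year assuming 24/7 full load and $0.10/kWh (pick your own duty cycle and tariff; both assumptions are illustrative):

              ```python
              # Yearly cost of the 79 W gap (243 W FX-8150 vs 164 W i7-2600K).
              delta_w = 243 - 164
              kwh_per_year = delta_w * 24 * 365 / 1000
              print(kwh_per_year)            # ~692 kWh
              print(kwh_per_year * 0.10)     # ~$69/year per box, before cooling overhead
              ```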

            Also, not all workloads (even in science) are easily parallelized,

            Yes, but if you're running a cluster, you are by definition running problems that parallelize well. If your workload isn't parallelizable, then the best you can do is run the single thread on the most overclocked, most expensive i7 you can get your hands on.

            These Bulldozer cores use drastically more power and run MUCH hotter than the Intel parts that are 10 months older.

            On the server end, with the quad 12-core 6100s, the numbers are less definitive. I…

            • by Shinobi ( 19308 )

              "Yes, but if you're running a cluster, you are by definition running problems that parallelize well. If your workload isn't parallelizable, then the best you can do is run the single thread on the most overclocked, most expensice i7 you can get your hands on."

              This is not true these days, since many use clusters even for tasks that are not easily parallelizable, simply because that's what's available.

              Also, the 12-core 6100 is Magny-Cours, which is not based on Bulldozer. Bulldozer-based Opterons are under th…

              • I specifically stated that the Bulldozers run really hot; I said nothing about other AMD chips.

                Well, given the article (we are discussing the articles, right?), you did indirectly. The x6 is basically the same core as the 6100 (more or less), and there is a power comparison between x6 cores and bulldozer cores in every single linked article. In fact, the performance per watt figures are very similar.

                • by Shinobi ( 19308 )

                  No, I specifically compared Bulldozer with Sandy Bridge.

                  If we add in the X6 (Thuban core), it just gets MORE embarrassing for Bulldozer, considering that Thuban is on the 45nm process while Bulldozer is on 32nm, yet Thuban has lower power consumption and overall comparable performance to the 8150.

          • by afidel ( 530433 )
            In the supercomputer space you'll be comparing Xeon E7 (Westmere) to Interlagos (Bulldozer), and there Bulldozer is doing well enough that next year's #1 or #2 supercomputer is going to be using Interlagos. What will be interesting is the two-socket server space that makes up 85% of the x86 server market, where Xeon E5 and Interlagos will shoot it out. I'm betting it will be highly dependent on your target workload there: if you need good single-threaded performance, E5 will be the obvious choice; if you need…
            • by Shinobi ( 19308 )

              The direct reason Jaguar/Titan is being upgraded with Interlagos is that it only requires a major redesign of the cooling. Everything else, including the internode memory controllers etc., doesn't require much change, making it mostly (keyword: mostly) a drop-in replacement. I'm not saying Interlagos is bad or anything; I'm just pointing out that in the case of Jaguar/Titan, it's for economic and engineering-simplicity reasons, on the hardware side at least.

              The software side is going to be "somewhat" more tricky...

    • by Antisyzygy ( 1495469 ) on Wednesday October 12, 2011 @10:21AM (#37690838)
      Marketing people are almost always morons. That's not to say some don't turn out brilliant, but it's maybe 1 percent. It's one of the easiest degrees to get in college due to the constant dumbing down of college programs, and it's accelerated by the number of people who pick it due to a misconception that business degrees will make you a lot of money for little effort. "Dude! You mean I can do easy homework, make teh big bucks some day and hit keggers every night! Bad-ass!". I sincerely wish business programs would start actually demanding MORE out of people rather than making them more accessible to idiots and burn-outs. It used to be a prestigious thing to have a business degree, as it was hard. Now it's like four more years of high school, with classes about "feelings" and barely any math beyond what you need to balance a checkbook and draw pretty pictures, and sometimes not even that.
    • Look, Intel's not stupid enough to push AMD out of the market. They could have done it by now if they really wanted to, by dropping prices to barely positive margins. Like Microsoft investing in Apple in the 90s, they'll keep AMD around as a defense against anti-trust/monopoly accusations. Instead Intel will just price their CPUs higher and delay release of their next-gen units. People feeling "bad" about AMD dropping out of the market seem to think Intel becoming a monopoly would somehow be a permanent…
  • by epine ( 68316 ) on Wednesday October 12, 2011 @07:29AM (#37688796)

    You have to hope that whatever sacrifice AMD made in this design was made to better enable the CPU and GPU to be fabricated on the same process in Trinity and beyond.

  • by scalarscience ( 961494 ) on Wednesday October 12, 2011 @07:31AM (#37688822)
    Well, now I know for sure why SB-E (Sandy Bridge-E) is not arriving until Q1 2012... Intel is just going to continue to milk SB parts for the time being. Sad, because I really wanted to get an Ivy Xeon rig to replace my current dual-proc mobo, but I'm not sure I can wait until 2013!
  • I buy AMD, mostly when I build a machine myself and when I'm on a budget. Not because I like weak chips. I love CPU speed, but I also love to keep my wallet as full as possible. This is what AMD offers: "bang for your buck". AMD is interesting for anyone who wants to balance between spending money and reasonable performance. Want pure performance and it doesn't matter what it costs? Go Intel... no questions asked. (This wasn't so in the Athlon XP/64 days.)

    Also keep in mind that we are now really on a computing…

    • I love CPU speed, but I also love to keep my wallet as full as possible. This is what AMD offers: "bang for your buck". AMD is interesting for anyone who wants to balance between spending money and reasonable performance.

      Which is why this chip is not a good buy until they drop the price. Right now it is not a good "bang for your buck". Looks like I will be putting Phenom II X6 chips in my existing systems and sitting tight for a year or two.

      • If you want bang-for-buck and you are reaching for a Phenom II X6, then you aren't paying attention...

        The AMD A-series APUs are by far the best value around. The AMD A6-3650 ($120) and the AMD A8-3850 ($135) are on par in performance with the Phenom II 1055T ($150) and the Phenom II 1075T ($160) respectively. If you need the (very) marginal performance boost of the Phenom II 1090T ($170) or the Phenom II 1100T ($190), then you should probably be buying the Intel i5-2400 ($188).

        They perform as well as the x…
        • Note the part where I said "existing systems". If any of my existing systems already had Socket FM1 motherboards, I'd already have APUs in them. ;-)

          There's also the minor detail of the APUs not supporting ECC RAM. I generally use ECC RAM for any box that acts as a server (prefer it for desktops too, actually... but it is less of a "must have" and more of a "nice to have" for desktops).

        • What? I'd like to know how an A8-3850 (which is basically a 2.9 GHz Athlon II X4, better known as an L3-less PII) can keep up with a 1075T.

          Sure, for general browsing/word processing the A8 is more than sufficient, but once you get into stuff you would actually need a quad-core for, the PII chips are superior.

          Yes, if you are building a new surfer-box for mom, by all means get an A6/A8, but if you do any gaming/encoding stuff, the few extra tenners for a PII X4/X6 pay off. I agree with the Intel i5-2400 recommendation…

    • Pentium G840 + decent H61 mobo with 4 RAM slots + 4*4GB of RAM == $215, and beats your A6 system in every respect except GPU performance, and quite frankly, if you're able to cope with a 6530, then you're not looking for a big graphics card, and the HD2000 on the pentium is almost certainly enough.

      • by BrentH ( 1154987 )
        I think going down to 8GB and putting that money towards the A6 is by far the best option. The 6530 smokes any Intel graphics, and be honest: even for non-gamers the 6530 is much preferred over the HD2000. Casual games like The Sims 3 will thank you for it, as will GPU-assisted compute, which is increasingly common.
        • If you're going to roll back to 8GB of RAM, why not put the money towards a Radeon 6770 to go with the G840... that way you have a faster CPU *and* faster GPU.

          • by BrentH ( 1154987 )
            I don't think that card is only $30, though? But yeah, you could get a decent separate card that beats the Intel for that price; still, I doubt you'd get much more performance than the A6.
            • The point is that the Intel system is $35 cheaper... if you're going to say "but I could shave 8GB off the AMD rig", I can equally say "I could shave 8GB off the Intel rig, and use the $70 now left over to get a better graphics card".

  • Similar to Intel Insider? In 2007 there was talk of disallowing users access to the framebuffer; did any of this ever materialise?

    This is what is keeping me away from buying Core i5 CPUs, even if the AMD ones might be a bit slower.
  • The architecture shows promise, but performance gains will take time to materialize, making it difficult to leapfrog Intel to any significant degree.

    Hasn't that been the case for 20 years now?

    • Nope. Compare the original Athlon to the comparable Intel chips at the time. It was both faster and cheaper, making it an obvious win. The K6-2 was about the price of a Pentium, but performance was comparable to a Pentium II or III. My K6-2 350MHz cost about as much as a 266MHz Pentium II - much less if you included motherboard costs in both cases. The K6-2 and K6-3 were a bit slower than the fast Intel chips, but it was very close. The Athlon was faster than the Pentium III and than the early Pentium 4s. The…

      • The outstanding "integrated" graphics performance of the laptop and desktop Llano APUs (ugh, marketing) greatly expands the number of niches where AMD is the better choice. That now includes nearly every home user on the planet.

        I wouldn't worry much about AMD even if this Bulldozer launch is underwhelming.

  • As usual, I feel somewhat obligated to post up AnandTech's review, which always seems much more in-depth and polished than almost all the sites out there:

    http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested [anandtech.com]

  • Intel has limited PCIe on the i3/i5 and low-end i7 boards, which makes USB 3.0 and other onboard stuff eat into the x16 for video.

    With AMD you can get a board with lots of PCIe lanes without needing the high-end CPU, or get a high-end CPU and not need to buy a super-high-end motherboard.

  • It seems this new generation of AMD will run Dwarf Fortress more slowly than Phenom II :(
    When new computers are slower, there's something seriously wrong!
  • by unity100 ( 970058 ) on Wednesday October 12, 2011 @09:33AM (#37690040) Homepage Journal
    - First, there is the huge delay Intel caused by engaging in fraud, paying PC makers not to use AMD chips right at the time AMD was at an advantage.

    - Then there is the fact that these synthetic benchmarks use Intel's proprietary libraries, which were proven to work ineffectively when a 'non-genuine Intel' architecture was detected.

    - Then there is the fact that this is a new platform, and it's just out, and the main deal with it is that it scales easily in core count. So AMD will just add more cores without needing new research; expect 32-core CPUs in a year or so. 16-core parts are already out.

    - As you can understand, these CPUs are geared more toward the server environment, and will take that environment over.

    - AMD is moving to Trinity in a year or so. Trinity is the APU format that all AMD CPUs will take from then on. Llano APUs have been quite successful in gaming, for example 50-80 fps in StarCraft 2 (CrossFired and not) -> you don't need to buy an external card anymore, and if you do, you can CrossFire it with the one in the CPU. http://www.anandtech.com/show/4476/amd-a83850-review/6 [anandtech.com] http://techreport.com/articles.x/21730/8 [techreport.com] Intel is worlds behind on this one.

    And then there is the ultimate question of what the fuck I am going to do with a powerful processor. Really. I bought an overclockable board and an unlocked CPU, and when I played games I found out that it was mostly the video card I added that did most of the work. The CPU I had was way, way over any potential requirement of these games. I didn't need to buy a powerful one at all.

    I went around hardware/software forums asking what I could do with a powerful computer. The answers were 'video encoding', 'benchmarks', 'SETI'. As it seems, all daily usage is WAY behind the power of modern CPUs; to utilize your CPU at all, you need to do unorthodox, unnecessary shit, or be in a profession that works on these things.

    So I think all this performance talk is bullshit. There is no way in hell you will use that performance, even in hardcore gaming with an Eyefinity 3-monitor setup at 5000-wide resolution, with 2x antialiasing and full detail (and I just have 2x 5670 cards).

    The future is in heterogeneous chips, I think. Llano has already been a success, and it's possible to save 30% on the cost of CPU + mobo + graphics card if you go the Llano way over anything Intel, and the gaming performance is incomparable. When Trinity comes, I think there will be a big change in computing, especially when AMD puts out a computing platform like CUDA.
  • The point of Bulldozer, and of pairing it with AMD GPGPUs, is to leverage the wealth of work AMD has put into OpenCL 1.1 alongside OpenGL 4.x.

    When more and more apps leverage OpenCL 1.1 [and the list is growing rapidly], using the likes of LLVM/Clang, which AMD has worked hard on, you'll begin to see a lot of these "benchmarks" being truly useless and tuned specifically for Intel.

    The work AMD is putting in with that marriage should be obvious: http://developer.amd.com/pages/default.aspx [amd.com]

    Until applications…
