AMD Hardware

Bulldozer Server Benchmarks Not Promising 235

New submitter RobinEggs writes "Some reviews of Bulldozer's server performance have arrived. Ars Technica has the breakdown, and the results are pretty ugly. Apparently Bulldozer fares just as poorly with servers as with desktops. From the article: 'One reason for the underwhelming performance on the desktop is that the Bulldozer architecture emphasizes multithreaded performance over single-threaded performance. For desktop applications, where single-threaded performance is still king, this is a problem. Server workloads, in contrast, typically have to handle multiple users, network connections, and virtual machines concurrently. This makes them a much better fit for processors that support lots of concurrent threads. ... It looks as though the decisions that hurt Bulldozer on the desktop continue to hurt it in the server room. Although the server benchmarks don't show the same regressions as were found on the desktop, they do little to justify the design of the new architecture.' It's probably much too early to start editorializing about the end of AMD, or even to say with certainty that Bulldozer has failed, but my untrained eye can't yet see any possible silver lining in these new processors."
This discussion has been archived. No new comments can be posted.

  • by hellop2 ( 1271166 ) on Tuesday November 22, 2011 @09:10AM (#38134492)
    Bulldozers do not make good servers. Use a computer. Problem solved.
  • by unity100 ( 970058 ) on Tuesday November 22, 2011 @09:11AM (#38134498) Homepage Journal
    And yet, 3 supercomputers with those Opterons were ordered in the last 4 weeks? And in a month, one of them - which is being revamped from the #3 supercomputer spot in the world - will be the #1 supercomputer in the world when complete? Was Lockheed Martin also a bunch of morons for choosing an Opteron-based supercomputer?

    Why was an article which was apparently written to bash AMD included on Slashdot, despite its apparent bias?
    • by CajunArson ( 465943 ) on Tuesday November 22, 2011 @09:27AM (#38134578) Journal

      1. Nobody with a sig advertising knock-off PHP plugins even has the right to use the word "supercomputer" in a sentence.

      2. Supercomputers are NOT built based on processor speed. If you took the SPARC CPUs used in the K computer (the world's fastest, and *not* running Opterons) and put them into a regular server or desktop, you'd have a pretty underwhelming computer. Most of the $$$ going into supercomputers goes to the interconnects, not the CPUs. So sure, use the Opterons in the supercomputer, where AMD sells them at fire-sale prices and doesn't make any money. The rest of us will use Xeons and be very happy with the results.

      3. You are a well known AMD fanboi and your repetitive posts are becoming less and less amusing.

      • by PIBM ( 588930 )

        You forgot to point out that many of the highest-performing supercomputers are using tons of NVIDIA video cards to achieve those results.

      • by serviscope_minor ( 664417 ) on Tuesday November 22, 2011 @10:34AM (#38135230) Journal

        Supercomputers are NOT built based on processor speed.

        Um.

        That's rather an oversimplification, to the point of being wrong.

        Supercomputers need good interconnects and lots of processing power. One or the other alone won't do.

        Much of the $$$ goes into the interconnects, but also into the CPUs and the cooling, which is very dependent on the CPUs. All things considered, neither AMD nor Intel has fast interconnects on-die (unlike Fujitsu), so pretty much the main basis for choosing between them is, well, the CPUs themselves.

        And it seems like AMD is the best option at the moment for this kind of workload.

        The rest of us will use Xeons and be very happy with the results.

        No, you will. I'll stick with my Supermicro quad 6100s for as long as I can and be very happy with the immense price/performance they offered.

    • by gman003 ( 1693318 ) on Tuesday November 22, 2011 @09:32AM (#38134596)

      Supercomputer workloads are significantly different from server workloads, as they typically focus on embarrassingly parallel problems and on throughput rather than latency.

      You may as well be saying "why are so many desktops built on x86 chips? It seems like every day I read something on how ARM is better for smartphones".

      • Supercomputer workloads are not embarrassingly parallel problems. For those tasks you use a much cheaper grid computer, connected over gigabit Ethernet or even over the internet. By definition, embarrassingly parallel problems need relatively little communication, so there is no sense wasting the extremely high-end interconnects found in supercomputers on such problems.
      • by prefect42 ( 141309 ) on Tuesday November 22, 2011 @01:05PM (#38137432)

        Hang on, "typically focus on embarrassingly parallel problems"? That's just plainly not true. Pick a classic HPC problem: weather forecasting. You break up the atmosphere into a bunch of cubes and distribute those cubes in a sensible way between your nodes. You model the flows between the cubes on a local machine and pass the edge information to neighbouring nodes. If it were embarrassingly parallel you wouldn't be passing edge information, but that would mean weather wouldn't move from one area to another...

        CFD for modelling heat or air flow, or pathogen propagation. Modelling population trends with microsimulation, or even parallel simulation of software systems. None of that is embarrassingly parallel. You wouldn't spend all your money on low-latency, high-bandwidth interconnects if all the nodes spent their days playing with themselves.

        Something like raytracing *can* be embarrassingly parallel, but I'd say most of what runs on HPC isn't.
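
        A minimal sketch of the halo-exchange pattern described above, in C with MPI (a 1-D decomposition for brevity; the array size, step count, and stencil are illustrative, not taken from any real weather code). The point is the per-timestep neighbour communication, which is exactly what an embarrassingly parallel job would not need:

        ```c
        /* Hypothetical 1-D halo exchange: each rank owns a slab of the domain and
         * trades boundary ("ghost") cells with its neighbours every timestep.
         * Compile with mpicc, run with mpirun. */
        #include <mpi.h>
        #include <stdlib.h>

        #define N 1024  /* cells owned by each rank (illustrative) */

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            int left  = (rank - 1 + size) % size;
            int right = (rank + 1) % size;
            /* local slab plus one ghost cell on each side, double-buffered */
            double *u = calloc(N + 2, sizeof(double));
            double *v = calloc(N + 2, sizeof(double));

            for (int step = 0; step < 100; ++step) {
                /* send rightmost real cell to the right neighbour, receive left ghost */
                MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 0,
                             &u[0], 1, MPI_DOUBLE, left,  0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                /* send leftmost real cell to the left neighbour, receive right ghost */
                MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  1,
                             &u[N + 1], 1, MPI_DOUBLE, right, 1,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

                /* local stencil update that depends on the freshly exchanged ghosts */
                for (int i = 1; i <= N; ++i)
                    v[i] = 0.5 * (u[i - 1] + u[i + 1]);
                double *tmp = u; u = v; v = tmp;
            }

            free(u);
            free(v);
            MPI_Finalize();
            return 0;
        }
        ```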

    • by gl4ss ( 559668 )

      In 'supercomputer' use it's more likely that processes can be herded onto the right cores to get the best performance boost out of the architecture.

      Also, you buy what's actually available in volume when you're ordering in such high numbers.

      The article itself is quite poorly written, at points folding software costs into the performance equation, at times not saying whether the benchmarks are per core (or per "thread", in the new AMD lingo).

    • by the linux geek ( 799780 ) on Tuesday November 22, 2011 @10:45AM (#38135364)
      There is roughly zero overlap between what makes a good HPC processor and what makes a good datacenter processor.

      Hint: AVX throughput matters next to nothing when running an SQL server, but looks very good on Linpack.
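
      To make the parent's point concrete, here is a minimal sketch (mine, not from the article) of the kind of dense floating-point inner loop that Linpack-style benchmarks live in, with an AVX path guarded by a compile-time check. A transaction-processing workload, dominated by pointer chasing and branching, has nothing comparable to hand to these vector units:

      ```c
      /* Hypothetical daxpy-style kernel: y += a * x.  Built with -mavx, the wide
       * path handles 4 doubles per instruction, which is why AVX throughput shows
       * up on Linpack and barely registers on an SQL server. */
      #include <stddef.h>
      #ifdef __AVX__
      #include <immintrin.h>
      #endif

      void daxpy(double a, const double *x, double *y, size_t n) {
          size_t i = 0;
      #ifdef __AVX__
          __m256d va = _mm256_set1_pd(a);
          for (; i + 4 <= n; i += 4) {
              __m256d vx = _mm256_loadu_pd(x + i);
              __m256d vy = _mm256_loadu_pd(y + i);
              vy = _mm256_add_pd(vy, _mm256_mul_pd(va, vx));
              _mm256_storeu_pd(y + i, vy);
          }
      #endif
          for (; i < n; ++i)      /* scalar tail, and the non-AVX fallback */
              y[i] += a * x[i];
      }
      ```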
    • by Junta ( 36770 )

      one of them - which is being revamped from the #3 supercomputer spot in the world - will be the #1 supercomputer in the world when complete?

      You mean Jaguar, which is adding NVIDIA Tesla GPUs and memory and refreshing the cluster interconnect while also moving to Bulldozer? Where the Bulldozers are replacing Istanbul processors and *not* Magny-Cours? Even amongst the Magny-Cours systems at the top, they are 8-core, not 12-core. Even for HPC there is some thought that the 12-core parts will outperform Bulldozer for many workloads due to the shared FPU, *but* GPUs are becoming the vogue way of doing that stuff anyway.

      As others have pointed out, processors matter, but every

      • by Shinobi ( 19308 )

        Fairly good summary of the situation, but I think you can cut it even shorter:

        People chose Cray for the I/O systems and the expertise available. The I/O just happens to be built around Opterons, since that's what it was first designed for, back when the Opterons kicked the Xeons' ass.

  • by raddude99 ( 710064 ) on Tuesday November 22, 2011 @09:14AM (#38134510)
    The standard of writing at "Ars Technica" has declined far more than AMD's performance relative to Intel.
    • by sgt scrub ( 869860 ) <saintium@NOSpAM.yahoo.com> on Tuesday November 22, 2011 @10:23AM (#38135074)

      I completely agree. You have to hunt down which link is the right one to find the specs that they eventually skewed to make an inflammatory point. They write articles to fill pages with advertisements, based on a headline that is sure to piss someone off.

    • by Kjella ( 173770 ) on Tuesday November 22, 2011 @10:28AM (#38135132) Homepage

      I don't go there for the tech articles, but the part on page 2 where they pull AMD's TPC-C numbers apart is pretty damn good.

      AMD claims 1.2 million tpmC for a two-socket Opteron 6282 SE system. The company compares this to a score for a two-socket Opteron 6176 SE system (each socket having 12 cores), (...) AMD also claims that this beats "competing solutions" by "as much as" 18 percent. (...) the reference AMD uses is another official result: dual Xeon X5690s (6 core, 12 thread, 3.46 GHz) with 384GB RAM. (...) looking just at the servers and their storage, and assuming similar discounts, we get prices of around $260,000 for the Opteron 6100 system, $879,000 for the Opteron 6200 system, and $511,000 for the Xeon system.

      Basically, their figures are doped with a massive SSD storage solution to make a slow CPU look good. And they show that if you wanted to spend $879,000 on a system, there are much faster Intel solutions (even though the CPUs cost more). So they're doing pretty well on the economics end, at least.

    • I posted this below but realized it should go here.

      Anandtech.com provides much more knowledgeable and professional reviews. They had this to say about AMD's new chip:

      "Unfortunately, with the current power management in ESXi, we are not satisfied with the Performance/watt ratio of the Opteron 6276. The Xeon needs up to 25% less energy and performs slightly better. So if performance/watt is your first priority, we think the current Xeons are your best option. The Opteron 6276 offers a better performance per
  • Recall the Itanium (Score:5, Insightful)

    by G3ckoG33k ( 647276 ) on Tuesday November 22, 2011 @09:19AM (#38134536)

    Recall the Itanium from Intel and HP. It started out with great hype more than ten years ago. When the first benchmarks came, no one wanted to believe them. Still, that particular architecture is about to die.

    Unfortunately, Bulldozer may end up with a similar fate. The big difference is that Intel had its regular desktop CPU line-up to finance the Itanium disaster. If nothing can be much improved on the AMD CPU side, can the shrinking graphics card business save AMD?

    I hope so.

    • by Xanny ( 2500844 )
      Itanium failed more because it tried to replace x86 with a new 64-bit-only architecture than because of any performance benchmarks. The sad thing for AMD is that Bulldozer is all-around not favorable for anything - it always comes up with a 9/10 where someone else has a 10/10; it is a jack of all trades, but in processor land that is bad. It has somewhat decent power efficiency, but is terrible compared to other 32nm processors from Intel; it is more in 45nm land. It has good performance
    • It's way too soon to call Bulldozer dead. Unlike Itanium, it runs standard software just fine, although it should do much better with software that's compiled for it. The cost will certainly come down and the performance will improve; the leap to this type of architecture is more or less inevitable as time progresses.

      AMD has been way behind before, but this time they are in a better position, as their video cards are still quite good and they can use them to speed the process up. I wouldn't personally

  • I always liked AMD CPUs, mostly for getting almost equal computing power for less money, but at the moment that doesn't really seem to be true anymore when I look at the benchmarks (desktop or server, it doesn't matter).
    • by tqk ( 413719 )

      [I] always liked the AMD CPUs, mostly for almost equal computing power for less money ...

      Me too, and because Intel's a bully. I don't support bullies.

      Besides, CPU performance is only a small part of overall system performance. Doubling the speed of storage or network I/O is much easier/cheaper/more effective than dropping in a faster CPU.

      And, I hate bullies on principle.

  • by TheSunborn ( 68004 ) <mtilstedNO@SPAMgmail.com> on Tuesday November 22, 2011 @09:21AM (#38134546)

    I really don't get the conclusion.

    The Bulldozer is faster than the Xeon chip on all CPU benchmarks that can generate enough threads to fill all the cores.

    Each Bulldozer core is as fast as a core on an Opteron 6100.

    It looks exactly like the CPU I want in my web/db server, and in my supercomputer.

    • by Chrisq ( 894406 ) on Tuesday November 22, 2011 @09:34AM (#38134610)
      I agree. It's a very biased summary. From TFA:

      In AnandTech's benchmarks, the 6200 failed to beat Intel's Xeon processors, in spite of Intel's core and thread deficit. In others, 6200 pulled ahead, with a lead topping out at about 30 percent.

      That's hardly an unmitigated disaster for a cheaper chip and the first release of a new architecture.

    • That's how I read the other reviews as well. It seems like a fairly good chip for servers or workstations.
    • by RobinEggs ( 1453925 ) on Tuesday November 22, 2011 @09:52AM (#38134746)

      I really don't get the conclusion.

      The Bulldozer is faster than the Xeon chip on all CPU benchmarks that can generate enough threads to fill all the cores.

      Each Bulldozer core is as fast as a core on an Opteron 6100.

      It looks exactly like the CPU I want in my web/db server, and in my supercomputer.

      Do the majority of real-world uses 'fill all cores'? Are you arguing that the vast majority of these benchmarks are useless? I can't tell which tests use all of the cores and which don't, but it's not my field.

      However, the results fall far short of a resounding success for AMD. The results are broadly split between "tied with Opteron 6100" and "33 percent or less faster than Opteron 6100." For a processor with 33 percent more cores, running highly scalable multithreaded workloads, that's a poor show. Best-case, AMD has stood still in terms of per-thread performance. Worst case, the Bulldozer architecture is so much slower than AMD's old design that the new design needs four more threads just to match the old design. AMD compromised single-threaded performance in order to allow Bulldozer to run more threads concurrently, and that trade-off simply hasn't been worth it.

      That's the problem. There are several instances in which AMD isn't even beating itself. Almost none of the tests show it working better than the old 6100 Opterons on a per-core basis. And the Xeons the 6200 only sometimes beat are 18 months old; new Xeons ship next quarter. I suppose if I accept your statement about "filling all cores" at face value, given my general ignorance of the server market, then I have to admit that Bulldozer could be superior in situations that filled all of the cores most or all of the time. Is that a significant potential market share? Does it justify an entirely new architecture?

      • by swalve ( 1980968 )
        This sounds depressingly like when the Pentium 4 came out. And what are we all using now? Dual-core Pentium IIIs with extra stuff bolted on.
        • And what are we all using now? Dual-core Pentium IIIs with extra stuff bolted on

          Only if you're still using a Core Solo / Core Duo. The Core 2 and later chips were all a completely different microarchitecture. And one of the things that was bolted on to the Pentium III to make the earlier one was... the Pentium 4's branch predictor.

      • by Curunir_wolf ( 588405 ) on Tuesday November 22, 2011 @10:31AM (#38135160) Homepage Journal

        Do the majority of real-world uses 'fill all cores'? Are you arguing that the vast majority of these benchmarks are useless? I can't tell which tests use all of the cores and which don't, but it's not my field.

        Obviously. The high-performance server market these days doesn't really include web and mail servers. Most machines are being deployed for one of two purposes: (1) large database servers, and (2) virtual-server hosts. Both of those uses will take advantage of this architecture, unlike the contrived "benchmarks" used to test these chips.

        I haven't deployed a single server NOT used in a virtual environment in over 2 years. We are even deploying database servers as virtual machines these days, because the backup and fault-tolerance features are so good. These new Bulldozers look like they'll be on the list for the next set of hardware I need.

        • Well, the question remains: are they better than the Intel chips that will be shipping soon? It appears that per-core performance has not increased over older AMD chips, but the number of cores per chip has. Will this beat the Sandy Bridge Xeons coming out? They will have only 6 cores, but if overall chip performance is higher then it's a new debate.
      • The US Office of Management and Budget (OMB) has a virtual-to-physical server consolidation target of 15:1.

        Every large business, and most medium-sized ones, will try to (at least) match that target.

        (Although memory seems to be a bigger constraint.)

        • by smash ( 1351 )

          The US Office of Management and Budget (OMB) has a virtual-to-physical server consolidation target of 15:1.

          Every large business, and most medium-sized ones, will try to (at least) match that target.

          (Although memory seems to be a bigger constraint.)

          They're still not likely to use all the cores unless they have some peculiar workload. They'll run out of RAM and IO (on a single server) first.

  • by __aazsst3756 ( 1248694 ) on Tuesday November 22, 2011 @09:23AM (#38134554)

    We need healthy competition to Intel, to keep pushing tech forward and prices down. Sadly AMD simply has not performed over the last year or two, with no real answers to Intel's I series.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      Sadly AMD simply has not performed over the last year or two, with no real answers to Intel's I series.

      While I totally agree with your first statement, I don't with the second. The last two years, you say? My desktop is 1 year old, running a quad-core Phenom @ 3.4GHz. Not only was it the best value for money, costing me only 169 euros for the processor, it is also one of the fastest around - to this very day, even for single-threaded tasks.

      Here's a hint: artificial benchmarks don't say a thing. There's one thing where AMD is very, very good and outperforms Intel in every way, and that's memory management. I couldn't ca

    • by serviscope_minor ( 664417 ) on Tuesday November 22, 2011 @10:28AM (#38135124) Journal

      Sadly AMD simply has not performed over the last year or two,

      That's just simply not true. On the server side, the quad 6100 1U servers are very competitive, supplying as much power as (sometimes more than) Intel boxes for considerably less money. At this point they're a bit of a no-brainer in the server room.

      On the desktop it is different. More of the benchmarks show the Core i5 being faster than the Phenom II X6 and the 8150. But some benchmarks show the AMD parts can be considerably faster. The choice is really simple: if your workload is dominated by the kind of things Intel does well, buy Intel; otherwise buy AMD.

      The CPUs are simply too close otherwise.

    • by dc29A ( 636871 ) * on Tuesday November 22, 2011 @11:00AM (#38135580)

      We need healthy competition to Intel, to keep pushing tech forward and prices down. Sadly AMD simply has not performed over the last year or two, with no real answers to Intel's I series.

      I built a Linux server/desktop earlier this year:
      AM3+ motherboard (4 RAM slots, 6 x SATA 6Gb/s ports, 2 x USB 3.0 ports): $90
      AMD 1090T six-core CPU: $160

      Great performance, incredible value. Once Bulldozer gets better, I can seamlessly upgrade it. Now, I'd like to see an Intel equivalent for this.

  • Virtualization (Score:3, Interesting)

    by Anonymous Coward on Tuesday November 22, 2011 @09:29AM (#38134588)

    When someone says that a CPU was designed around multiple threads, I think virtualization. Yeah, you can argue that servers are multithreaded in that they have to handle multiple users connecting, but that's bull. I can write a badly threaded application that doesn't effectively use the multiple cores...

    So how do these cpus perform with something like ESX running on them?

    Scott

  • Great for BOINC! (Score:4, Interesting)

    by courteaudotbiz ( 1191083 ) on Tuesday November 22, 2011 @09:31AM (#38134594) Homepage
    That's perfect for running BOINC though, which is very good at using multiple cores at their full capacity. Useless for the business, but great for contributing to science projects :-)
  • by unity100 ( 970058 ) on Tuesday November 22, 2011 @09:42AM (#38134670) Homepage Journal
    Bulldozer chips are in short supply because of sales. Because they are not able to immediately meet Opteron demand, AMD is keeping 8150 supply low, binning them as Opterons instead, and therefore leaving the desktop market undersupplied. Read the informative thread below.

    http://www.overclock.net/t/1171264/compared-3-different-bulldozer-fx-8120s-want-to-know-the-difference/10 [overclock.net]

    Bulldozer 8150s have been in short supply on Newegg and Amazon. Sometimes they are out of stock, and you can't even put them on a watchlist.

    Way too many sales for a 'failed' processor?
    • Re:And moreover (Score:4, Insightful)

      by PIBM ( 588930 ) on Tuesday November 22, 2011 @10:17AM (#38135008) Homepage

      Or simply, way too low yield...

      • If it were a catastrophe, there wouldn't be enough sales for yield to be an issue either.
        • by PIBM ( 588930 )

          Sadly, there are too many fanboys just like someone I know.

          • Re:And moreover (Score:4, Interesting)

            by TheLink ( 130905 ) on Tuesday November 22, 2011 @11:37AM (#38136080) Journal
            Too many? I don't think so. And please stop trying to convince the AMD fanboys that AMD is producing crap.

            Why?
            1) We need AMD alive and kicking to at least give Intel some competition (look at what has happened now that AMD is weak - Intel started having "unlock codes" to unlock more performance/features for their processors ).
            2) So someone needs to buy the current batch of AMD crap[1] to keep AMD alive till they come up with something better.
            3) I'd rather not buy AMD's current crap. It is inferior for most popular desktop and server tasks.
            4) Therefore we need as many AMD fanboys as possible to continue thinking that AMD is great and buying lots of AMD crap.

            [1] Yes I know AMD produced better stuff than Intel some years ago. However the latest CPUs ironically appear to be AMD's Prescott Edition CPUs.
            • by PIBM ( 588930 )

              I like your point of view. #1, the unlock code, is something that's been done plenty of times before (think hardware RAID!), but it's somewhat valid nonetheless. Besides, as long as unity100 stops astroturfing, I'll be happy ;)

              • by TheLink ( 130905 )
                The reason Intel can do this "unlock code" for extra _performance_ is that they're so far ahead of AMD that they can actually sell CPUs clocked slower than they can run. If AMD releases a similarly priced CPU that's 10% faster, Intel can then price the "unlock code" at whatever they need to compete with AMD. It could even be free - they've already made money from the first sale.

                If AMD's CPUs were more competitive, Intel would have to sell most of their CPUs at the fastest speeds they can run. They wouldn'
        • by PIBM ( 588930 )

          That's great news! That way, no one will make the error of buying one!

          Now, go away.

  • One element has me curious about how these benchmarks were prepared: is the benchmark software compiled on the target platform/CPU combination with all of that platform's available optimisations?

    Many of these benchmarks have a binary/library, or a set thereof, that is written for a single target platform (the platform the original developers of the benchmark were working on): usually pre-compiled, usually for Intel, on an Intel system, by an Intel compiler, with Intel optimisations, or at least two of the four. Th
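
    One way to sanity-check that concern (a sketch of my own, not something the reviews are known to do): have the benchmark harness report at runtime which ISA extensions the machine under test actually offers, and compare that against the code paths the binary can take. GCC's __builtin_cpu_supports makes the CPU side of that trivial; XOP and FMA4 are the Bulldozer-specific extensions an Intel-targeted binary would never exercise:

    ```c
    /* Sketch: print which vector extensions the benchmark host offers.  If the
     * pre-compiled binary never takes an AVX/XOP/FMA4 code path, Bulldozer is
     * being measured with one hand tied behind its back.  GCC-specific builtins. */
    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();
        printf("avx   : %d\n", __builtin_cpu_supports("avx"));
        printf("xop   : %d\n", __builtin_cpu_supports("xop"));   /* Bulldozer only */
        printf("fma4  : %d\n", __builtin_cpu_supports("fma4"));  /* Bulldozer only */
        printf("sse4.2: %d\n", __builtin_cpu_supports("sse4.2"));
        return 0;
    }
    ```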

  • Maybe it's early, but I was having a hard time seeing the comparisons they were trying to make. Also, when Ars was comparing pricing (X system is 400k, Y system is 600k), what the hell was that? Usually stats like that would be accompanied by a link or a site for said system. It said the benchmarks were "here"; I didn't see any. I'd like to see benchmark details such as the OS. It may be too early to judge, as this is a first-generation chip, and will Bulldozer perform better under the next iteration of wind
  • by Anonymous Coward on Tuesday November 22, 2011 @09:56AM (#38134786)

    TPC-C is performed on Windows 2008; see http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=111111501
    Anandtech tested on Windows 7.
    It is known that Windows 7 and 2008 are not optimized for Bulldozer, especially at the task-scheduling level.
    So we do not know the real power of the Bulldozer architecture in the Windows world yet.
    See http://hexus.net/tech/news/cpu/32394-bulldozer-benchmarks-correct-definitive which unfortunately has only a few benchmarks.
    You can also look at the Phoronix site, where Bulldozer is tested on Linux.

    • Tom's Hardware and Tech Report have also discussed and tested this theory. They found the current Windows scheduler does not use the hardware correctly. AMD has whitepapers available explaining the issue and the changes required for the scheduler(s). Microsoft is working to make the changes, as is the Linux kernel team.
    • by smash ( 1351 )
      Be this as it may: Windows is what 99% of people run. By the time Windows 8 hits RTM, the CPU landscape is going to look pretty different.
    • by washu_k ( 1628007 ) on Tuesday November 22, 2011 @02:03PM (#38138516)
      Windows is "not optimized" for Bulldozer because BD lies to the OS. A BD claims to have twice as many cores as it really has and Windows schedules as if this were true. In reality the BD "cores" are just a better form of hyper-threading. If BD said it had hyper-threading instead of real cores then Windows would schedule properly. All Linux and Windows 8 do is ignore the lies from the chip and use the hyper-threading scheduler.
  • by Bleek II ( 878455 ) on Tuesday November 22, 2011 @10:21AM (#38135054)
    Anandtech.com provides much more knowledgeable and professional reviews. They had this to say about AMD's new chip: "Unfortunately, with the current power management in ESXi, we are not satisfied with the Performance/watt ratio of the Opteron 6276. The Xeon needs up to 25% less energy and performs slightly better. So if performance/watt is your first priority, we think the current Xeons are your best option. The Opteron 6276 offers a better performance per dollar ratio. It delivers the performance of $1000 Xeon (X5650) at $800. Add to this that the G34 based servers are typically less expensive than their Intel LGA 1366 counterparts and the price bonus for the new Opteron grows. If performance/dollar is your first priority, we think the Opteron 6276 is an attractive alternative." http://www.anandtech.com/show/5058/amds-opteron-interlagos-6200/14 [anandtech.com]
  • Bad artcile... (Score:5, Insightful)

    by Junta ( 36770 ) on Tuesday November 22, 2011 @10:41AM (#38135302)

    I'm suspicious that Bulldozer is going down remarkably like NetBurst (NetBurst made design compromises for marketable massive clock gains; Bulldozer similarly makes compromises to boost the now-marketable core count), and time may prove that wrong, but this article was crap.

    It looked like they cherry-picked some benchmarks from the world at large, with no controls. As pointed out in the article, the tpmC benchmark had massive storage differences, and the cost delta means there were probably node-count differences as well. There are so many things in play that it is impossible to derive any sort of statement specifically about the processors. The article, however, uses that as a point to show AMD is more expensive and make AMD look bad, but in the same breath says the better SSDs probably drove the benefit, to steal AMD's thunder. He can't have it both ways. I'm inclined to believe the storage architecture was the key factor in both cost and performance, given the nature of the test.

    Later, the article says AMD should have just done a 16-core Magny-Cours. Clearly AMD should hire him, as he is a genius who *must* have considered all the complexities and figured out a way to achieve that core density when no one else in the industry has. No one pretends for a second that a Bulldozer module matches 2 "real" cores, but they can't just wave a wand and make a 16-core package of the old architecture. Bulldozer is all about trying to ascertain the 'important' bits of a core and share the other bits, in the hope that the added resources give most of the benefit of an additional core without the downsides that make it impossible to put that many cores on a socket.

  • Sunk cost fallacy (Score:3, Insightful)

    by JDG1980 ( 2438906 ) on Tuesday November 22, 2011 @10:41AM (#38135306)

    Bulldozer can't consistently beat Phenom X6 in desktop workloads.

    It can't consistently beat Magny-Cours in server workloads.

    It doesn't seem to be any more power-efficient than AMD's last generation, despite being built on a smaller process node (32nm vs 45nm).

    At what point does AMD simply admit Bulldozer is a failure, pull the plug, and write off the sunk costs? Putting good money after bad is a classic business mistake that has killed many companies.

    AMD should continue improving their existing cores on the 32nm process (they already have some of the work done with Llano) and forget about their "revolutionary new" architecture which is basically this decade's Prescott.

    Or, heck, see if it's possible to scale up the Bobcat cores for mainstream desktop use. Don't forget, Intel's very successful Core 2 Duo came from a previous design (the Pentium M) that had been reserved for laptops. AMD will probably have more luck increasing performance (both raw clock and IPC) on Bobcat than trying to tame the heat, insane transistor count, and long pipeline of Bulldozer.

    • by Junta ( 36770 ) on Tuesday November 22, 2011 @12:11PM (#38136554)

      Don't forget, Intel's very successful Core 2 Duo came from a previous design (Pentium M) that had been reserved to laptops

      That was a bit of a special case. It's not a testament to how fundamentally awesome low-power processors are, and more an illustration of *just* how bad NetBurst was. The Pentium M skipped NetBurst entirely because they *couldn't* make it work acceptably in a mobile device.

      *Usually* the low-power parts optimize for overall wattage and *not* performance per watt. If they can get 25% more performance at 10% more power, a desktop part may elect to do it while a mobile part may elect not to.

  • After clicking on links I finally found some benchmarks. As usual, they were bullshit. Can't these people think of a test that puts the machines through real hoops? I used to throw 60GB pcap files (1 minute of traffic) at machines to determine if the hardware could run our IPS software. The machine with the fewest millions of threads not yet processed won. The application opened a thread for every packet that traversed a 1G NIC. The content of each packet was then sent (branched) through the appropriate ins

    • If this is a serious production application, consider optimizing your software. Firstly, spawning endless threads is rarely an efficient use of resources. Once the thread count exceeds the number of hardware threads the CPU can run, the cost of managing threads becomes pure overhead. The degree to which this overhead can be reduced is application-dependent, but it is often worth chasing.

      Additionally, applying 10,000 and 200 rules at a rate of one thread per rule per packet is probably not a se
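
      For what it's worth, the usual shape of that optimization is a fixed-size worker pool fed from a bounded queue, rather than a thread per packet. A bare-bones pthreads sketch (queue capacity, worker count, and process_packet() are placeholders of mine, not anyone's actual IPS code):

      ```c
      /* Sketch: N workers (roughly one per hardware thread) pull packets from a
       * bounded queue.  A full queue applies back-pressure instead of letting
       * millions of unfinished threads pile up.  Compile with -pthread. */
      #include <pthread.h>
      #include <stdio.h>

      #define QUEUE_CAP 1024
      #define WORKERS   16                /* ~ number of hardware threads */

      static void *queue_buf[QUEUE_CAP];
      static int q_head, q_tail, q_count, q_done;
      static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
      static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

      static void process_packet(void *pkt) { (void)pkt; /* run inspection rules here */ }

      static void enqueue(void *pkt) {
          pthread_mutex_lock(&q_lock);
          while (q_count == QUEUE_CAP)    /* back-pressure when the pool is saturated */
              pthread_cond_wait(&not_full, &q_lock);
          queue_buf[q_tail] = pkt;
          q_tail = (q_tail + 1) % QUEUE_CAP;
          q_count++;
          pthread_cond_signal(&not_empty);
          pthread_mutex_unlock(&q_lock);
      }

      static void *worker(void *arg) {
          (void)arg;
          for (;;) {
              pthread_mutex_lock(&q_lock);
              while (q_count == 0 && !q_done)
                  pthread_cond_wait(&not_empty, &q_lock);
              if (q_count == 0 && q_done) {
                  pthread_mutex_unlock(&q_lock);
                  return NULL;
              }
              void *pkt = queue_buf[q_head];
              q_head = (q_head + 1) % QUEUE_CAP;
              q_count--;
              pthread_cond_signal(&not_full);
              pthread_mutex_unlock(&q_lock);
              process_packet(pkt);
          }
      }

      int main(void) {
          pthread_t workers[WORKERS];
          for (int i = 0; i < WORKERS; i++)
              pthread_create(&workers[i], NULL, worker, NULL);

          for (long i = 1; i <= 100000; i++)   /* stand-in for packets read off the NIC */
              enqueue((void *)i);

          pthread_mutex_lock(&q_lock);
          q_done = 1;
          pthread_cond_broadcast(&not_empty);
          pthread_mutex_unlock(&q_lock);

          for (int i = 0; i < WORKERS; i++)
              pthread_join(workers[i], NULL);
          puts("all packets processed");
          return 0;
      }
      ```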
