AMD Details Upcoming Bulldozer Architecture

Vigile writes "AMD is taking the lid off quite a bit of information about its upcoming CPU architecture, known as Bulldozer, the first complete redesign of its current processors. AMD's lineup has been relatively stagnant while Intel continued to innovate with Nehalem and Sandy Bridge (due late this year), and the Bulldozer refresh is badly needed to keep in step. The integrated north bridge, on-die memory controller and large shared L3 cache carry over from the Athlon/Phenom generation to Bulldozer, but AMD is adding features like dual-thread support per core (with a unique implementation using separate execution units for each thread) and support for 256-bit SIMD operations (for upcoming AVX), all running on GlobalFoundries' 32nm SOI process technology."
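For readers wondering what 256-bit SIMD buys you: one AVX instruction operates on a 256-bit register, i.e. eight packed single-precision floats at a time instead of four with 128-bit SSE. Below is a minimal illustrative sketch in C using Intel's AVX intrinsics; it is generic example code, not anything from AMD's disclosure, and it assumes a compiler and CPU with AVX support (e.g. gcc -mavx).

    #include <immintrin.h>   /* AVX intrinsics */
    #include <stdio.h>

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {10, 10, 10, 10, 10, 10, 10, 10};
        float out[8];

        __m256 va  = _mm256_loadu_ps(a);      /* load 8 floats (256 bits) */
        __m256 vb  = _mm256_loadu_ps(b);
        __m256 sum = _mm256_add_ps(va, vb);   /* one instruction, 8 additions */
        _mm256_storeu_ps(out, sum);

        for (int i = 0; i < 8; i++)
            printf("%.0f ", out[i]);          /* prints: 11 12 13 14 15 16 17 18 */
        printf("\n");
        return 0;
    }
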
  • by kg8484 ( 1755554 ) on Tuesday August 24, 2010 @12:25PM (#33357852)

    Compared to such articles as AnandTech's [anandtech.com] coverage of this in November 2009, I don't see much new information. Perhaps the key bit, and this is glossed over but you can tell from the slides AMD gave them, is the difference between the Bulldozer and Bobcat cores. The Bulldozer cores contain the two integer units that have been revealed before, but the Bobcat core only has one, though it still implements hyperthreading.

    • by Rockoon ( 1252108 ) on Tuesday August 24, 2010 @01:23PM (#33358846)
      Just to be clear, when you say "integer units" you mean "integer schedulers" and not actual integer execution units, of which even the old Athlons had 3 per core (and that hasn't changed since then).

      Unlike Intel's designs, with their highly asymmetric execution units, AMD's have had 3 symmetric integer execution units per core since the original Athlon. It's actually a pleasant breeze to write hand-optimized integer code on AMD chips.

      This new design looks (in the diagram) like it actually has 4 symmetric integer execution units per integer scheduler, with Bulldozer having 2 schedulers per core while Bobcat has only 1 per core (I would guess that the logical cores are alternated on rise-and-fall states of the clock on Bobcat, and the diagram certainly makes it look like that is the case).

      Each seems to have two wide floating point execution units, so the floating point performance of Bulldozer and Bobcat is probably equivalent.

      What I think AMD has done here is that Bulldozer, in integer work, is going to behave like it has 2x the number of real cores. So an 8-core (16-thread) chip will perform much like an 8-core CPU in floating point work, but much more like a true 16-core CPU in integer work. This should give it a large advantage over Intel in integer work in equal-core comparisons, but the floating point performance will still lag behind Intel.
      • Re: (Score:3, Insightful)

        by imgod2u ( 812837 )

        I believe Bobcat's 2 FPU paths are 64 bits wide, for a total of 128 bits. It initially will not support the 256-bit AVX instructions that are coming with Sandy Bridge and Bulldozer.

        Its ALUs also appear to be significantly different from Bulldozer's: only one of the integer units can support multiplies and only two of them can support arithmetic. Two others (using a different scheduler) are load/store units. Bulldozer doubles the ALU resources (but not the number of schedulers) compared to Bobcat. So ea

        • Re: (Score:3, Informative)

          by Rockoon ( 1252108 )

          I was never a big fan of the 3x symmetric ALUs in the Athlons. When it comes to integer intensive code, it was rare to have a ton of independent ADDs or MULs that would need that kind of parallelism. And the latency (compared to a sane design like Core, at least) was significantly higher due to the units being multi-purpose.

          In the Phenom II design the latency of most register-to-register integer instructions is exactly 1 cycle, just like the i7. The units being multi-purpose is not a latency sacrifice at all; maybe the original Athlons had poor latency for some other reason, but Agner Fog's reference actually indicates that most register-to-register integer instructions even on the early K7s had 1-cycle latency.

          Even in mem,reg operations, the Phenom II beats out the i7 in latency on many operations (

      • by ravyne ( 858869 )
        Actually, from the article at PC Perspective and AnandTech's coverage, I'm under the impression that there are 2x128-bit paths in the shared FP unit, and that each path can be scheduled to one thread or the other. If that's the case, then traditional floating point and SSE math should perform as if there are two full FPU/SIMD ALUs. Only when using the new 256-bit AVX instructions would you have less performance than a processor with one full AVX unit per processor core. Unless they're saying that the last g
  • Mmm (Score:3, Insightful)

    by elsurexiste ( 1758620 ) on Tuesday August 24, 2010 @12:27PM (#33357874) Journal
    Call me whatever you want, but the only reason AMD is still alive and well is because they've been innovating and building good products for a while now. Itanium, anyone?
    • by mangu ( 126918 )

      the only reason AMD is still alive and well is because they've been innovating and building good products for a while now. Itanium, anyone?

      A processor that's nearly ten years old is relevant today exactly how?

      TFA says: "all good things must come to an end and with the development of the very impressive Nehalem architecture from Intel, and the upcoming Sandy Bridge, AMD's primary CPU architecture is certainly showing its age"

      The market is ruthless, no one buys products from a company that used to do great thi

      • It's relevant because you can learn from history. AMD will be on top again at some point in the future. Besides, why would you want people to stop buying AMD? AMD will eventually go bankrupt without supporters like myself. Then Intel will be the only CPU provider and will end up jacking their prices even higher than they are now, and then they will release inferior products.
      • They've been competing just fine, otherwise they'd be out of business. Intel has been creaming AMD on performance at the high end for some time, but at a cost: the price of its high-end chips has been much higher than that of the high-end AMD chips. And you really don't get that much more performance for your money. Sure, if you really need that performance you're going to pay for it, but most people don't really need that last bit of performance. Still, it is overdue that AMD introduce a complete refresh li
    • Re:Mmm (Score:5, Interesting)

      by Amouth ( 879122 ) on Tuesday August 24, 2010 @01:27PM (#33358892)

      the first AMD64 CPU shipped in April 2003

      the first Itanium shipped June 2001

      So AMD was 23 months late - all they did was tack on to existing x86, whereas Intel was trying to, and did, develop a whole new architecture.

      Almost all of the complaints about the Itanium being slow were due to it having to emulate x86 for software that was not written specifically for IA-64 - code that was and is written for IA-64 runs fast as hell, and there is a reason why they are still used today - just in specific applications.

      Intel's failure was due to them trying to jump to a whole new computing architecture and expecting programmers to go with them - instead programmers resisted and AMD jumped on that by just extending the existing x86.

      Development on what became IA-64 was started by HP in 1989; Intel was brought in in 1994 and the first implementation was in 1998 - hell, it is the reason we don't see Alphas anymore.

      AMD64 started in 1999.

      So in computing terms AMD had many generations to watch Intel actually innovate - and then take the shortcut to market. Please note I'm not putting AMD down for AMD64, I'm just pointing out that you cannot compare its success vs. the Itanium's because they are not the same by a long shot.

      Also, if you want to learn something new - read up on why IA-64 is so different from x86 and you will see why it is worth investing in. Not for the current project, but rather for the knowledge gained by doing it. You would be surprised how much of the R&D that went into the Itanium is currently running in your newer computers and servers.

      • Re:Mmm (Score:5, Insightful)

        by MechaStreisand ( 585905 ) on Tuesday August 24, 2010 @01:38PM (#33359082)
        Bullshit. The Itanium is slow as shit because Intel didn't bother to give it out-of-order execution like every other modern processor has. As a result, it is only fast on DSP-like operations and slow at everything else. Out-of-order execution is essential because the compiler can't know at compile time exactly where everything is going to stall: it's provably impossible.

        Remember, this is the same company that designed the P4 without a barrel shifter.
      • Re:Mmm (Score:5, Insightful)

        by imgod2u ( 812837 ) on Tuesday August 24, 2010 @01:40PM (#33359110) Homepage

        There are plenty of things to learn from Itanium - specifically, what not to do if you want a good general purpose processor. For one, you don't make processor performance so incredibly reliant on instruction scheduling that the biggest compiler team on Earth (Intel's compiler group) couldn't make it run fast on anything except a small subset of problems.

        Secondly, when attempting to gain ISA adoption, making it an exclusive ISA that only you have control of and rights to use is a big no-no. Sure, it'd be heaven for Intel to be the sole supplier.

        And lastly, process and iterations mean more for performance than any fancy ISA. Itanium is consistently one or two process generations behind its x86 counterparts and consistently one or two micro-architectural iterations slower (it takes 2 revisions of the Core micro-arch before Itanium comes out with one).

        You can have as clean and fancy an ISA (which IA-64 was not, btw) as you'd like, but implementation matters far, far more.

        In the end, it wasn't fast enough (the best it ever did was match its x86 counterparts) and it didn't have any other advantages to warrant the switch.

        Now, ARM on the other hand....

      • Re: (Score:3, Informative)

        by coredog64 ( 1001648 )

        Development on what became the IA64 started in 1989 by HP and Intel was brought in in 1994 and the first implementation was in 1998 - hell it is the reason we don't see Alpha's anymore.

        The reason we don't see Alpha anymore is that Intel coerced HP into buying up Compaq and killing it off, by offering to assist HP in porting HP-UX to Itanic.

    • Re: (Score:3, Insightful)

      by petermgreen ( 876956 )

      Itanium, anyone?
      Yes, some time ago Intel was screwing around with Itanium (which hardly anyone wanted because it ran x86 code so badly) and NetBurst (which was slower per clock than a P3) while AMD was pushing ahead with the Hammer architecture.

      However, since Core 2, and especially with Nehalem (where Intel moved to a point-to-point architecture from a shared FSB architecture), Intel has gradually regained the lead, starting with the single sockets and gradually moving up to larger platforms. AMD is resorting to th

  • AMD's stagnant? (Score:5, Insightful)

    by iamhassi ( 659463 ) on Tuesday August 24, 2010 @12:27PM (#33357880) Journal
    "AMD's lineup has been relatively stagnant while Intel continued to innovate with Nehalem and Sandy Bridge (due late this year) and the Bulldozer refresh is badly needed to keep in step."

    AMD just came out with Six-Core processors for $200 [slashdot.org], how is that stagnant? Intel's only 6-core processor is still $1000 [google.com]
    • Re:AMD's stagnant? (Score:5, Insightful)

      by TrisexualPuppy ( 976893 ) on Tuesday August 24, 2010 @12:32PM (#33357936)
      Likely another Intel fanboy trying to spread FUD about the company that he doesn't like and at the same time getting his username posted on the front page.

      AMD may not have the resources that Intel does, but it isn't as though Intel is walking AMD around on a leash. This mindset gets annoying after a while.
      • Re:AMD's stagnant? (Score:5, Informative)

        by blair1q ( 305137 ) on Tuesday August 24, 2010 @12:45PM (#33358168) Journal

        "AMD's lineup has been relatively stagnant while Intel continued to innovate with Nehalem and Sandy Bridge (due late this year) and the Bulldozer refresh is badly needed to keep in step."

        Likely another Intel fanboy trying to spread FUD about the company that he doesn't like and at the same time getting his username posted on the front page.

        The facts in that quote were presented clearly. AMD is a generation behind on architecture, trying to get comparable performance by multiplying old cores, while Intel has been advancing architecture and multiplying cores at the same time. For about 4 years now, Intel has had 2-4 chips performing at levels above anything AMD could produce.

        It remains to be seen if Bulldozer will put AMD anywhere near at-par on a performance/core basis, but it's not 2002 any more, and AMD has no hope of a performance lead.

        • Re:AMD's stagnant? (Score:4, Interesting)

          by Anonymous Coward on Tuesday August 24, 2010 @12:59PM (#33358410)

          Actually, AMD has a great chance of beating Intel in the future. You fail to recognize that AMD has ATI now and they are going to be fusing CPUs and GPUs onto the same die in the future. They benefit from the experience and IP of ATI. Intel's graphics capability so far has been a joke.

          • by 0123456 ( 636235 )

            Actually, AMD has a great chance of beating Intel in the future. You fail to recognize that AMD has ATI now and they are going to be fusing CPUs and GPUs onto the same die in the future.

            If you want fast graphics then you buy a discrete graphics card. If you're using integrated graphics you don't much care whether it's a crappy ATI chip or a crappy Intel chip because it won't run modern games at any reasonable speed either way.

            • Re:AMD's stagnant? (Score:5, Insightful)

              by MBGMorden ( 803437 ) on Tuesday August 24, 2010 @01:30PM (#33358944)

              If you want fast graphics then you buy a discrete graphics card. If you're using integrated graphics you don't much care whether it's a crappy ATI chip or a crappy Intel chip because it won't run modern games at any reasonable speed either way.

              That's conventional wisdom, but conventional wisdom doesn't always hold steady in the computing market. 15 years ago what you said there was true for both audio chips and network cards. Anybody who wanted one that was half-way decent bought a discrete unit because those performed well, and the hokey versions that you might find integrated were pretty much junk.

              Today? All but a few holdouts and professional level users just use the integrated network and sound, because for your average user - even your average power user - the integrated stuff is plenty good enough.

              I'd wager that in less than 8 years your statement of "If you want fast graphics then you buy a discrete graphics card" will sound just as outdated and clueless as "If you want to crunch numbers faster, then you buy a dedicated math co-processor".

              • by 0123456 ( 636235 )

                I'd wager that in less than 8 years your statement of "If you want fast graphics then you buy a discrete graphics card" will sound just as outdated and clueless as "If you want to crunch numbers faster, then you buy a dedicated math co-processor".

                Except there's an infinite capacity to use graphics power, so there's no way that in only eight years we will have reached an effective limit on processing power.

                • Re:AMD's stagnant? (Score:5, Insightful)

                  by MBGMorden ( 803437 ) on Tuesday August 24, 2010 @02:11PM (#33359616)

                  There's an infinite capacity to use floating point arithmetic too, but we abandoned the idea of a separate chip for it long ago. FPUs these days are still getting faster with each chip - no limit on processing power was hit. We simply got to a point where a completely capable FPU could be bundled in with the CPU and its performance was sufficient for most users.

                  Imagine this scenario: the integrated solutions don't suck. Instead of being virtually useless for 3D graphics, they have performance about equal to the mid-line $150 to $200-ish cards of today (and let that scale for whatever cards meet that definition of the time). You can get better performance, but it's going to take huge full-length cards running SLI or the like, and it's going to take several hundred dollars to beat your standard integrated solution.

                  My wager is that 95% of the people who currently buy discrete chips would accept integrated at that point. The chips would still get faster over time, and there still might be a few extreme solutions available, but the average user wouldn't need them anymore. My guess is we'll get there quite soon. And if you're asking why the chip companies would want to sell us 1 chip where they previously sold 2? Simple answer: market competition. If AMD can push out a chip as fast or faster than Intel's that also has an integrated GPU that rivals discrete solutions, then they'll take a lot of business from Intel. That's all the motive they need.

                  • Re: (Score:3, Insightful)

                    by imgod2u ( 812837 )

                    No, what happened was that the most FP-intensive tasks (rendering, 3D modeling for games) moved to another dedicated chip (the GPU). Demand for bigger and better compute capacity there has not stopped since, and shows no signs of slowing down.

                    The only things left that were really compute-intensive on the CPU were video transcoding and precise (production quality) 3D rendering, due to the lack of double-precision support in GPUs as well as the difficulty of using them for compute (i.e. wr

                  • by Belial6 ( 794905 )
                    For the most part, I feel like we already are there. Out of the 9 computers running in my home, only 1 has a discrete graphics card, and I am toying with the idea of removing it because it increases the watts used by the computer over the integrated solution. While we don't play a lot of games that are just released, there are at least 40 man-hours of PC games played in my house each week, so it isn't like we don't play games.
                • Wow, so you have discovered that the human eye is somehow infinite in its capacity to perceive?! You'll win the Nobel prize for this!

                  The reason that audio leveled off is that the human ear and capacity to hear is finite. Once graphics are fully photorealistic, there really is no higher level.
                  • by Runefox ( 905204 )

                    Except that fully photo-realistic graphics have been a long time coming, and will continue to be. We're still using the same resolutions we used a decade ago; we just have better models and fancy post-processing effects that make it look that much better. Strip away the pixel shaders, lighting and texture filtering, and the models and textures are actually still pretty ugly today. We can start talking photorealism when we begin to see screen resolutions in the vicinity of 4096px wide on 17" displays and tex

                • I'd wager that in less than 8 years your statement of "If you want fast graphics then you buy a discrete graphics card" will sound just as outdated and clueless as "If you want to crunch numbers faster, then you buy a dedicated math co-processor".

                  Except there's an infinite capacity to use graphics power, so there's no way that in only eight years we will have reached an effective limit on processing power.

                  Ridiculous.

                  Once upon a time you bought a machine with a separate, discrete math coprocessor because the CPU couldn't handle math on its own. I remember there being a noticeable difference just running a spreadsheet on two computers that were identical except for the presence of a math coprocessor. I remember some pieces of software simply refusing to run because I didn't have an FPU on my machine.

                  Those days are simply gone. These days nobody has a discrete math coprocessor. Sure, yes, you can get fanc

              • by blair1q ( 305137 )

                No, it will always be true.

                The amount of stuff you can cram on a single chip is smaller than the amount of stuff you can cram on two chips, and chips that are twice as big are twice as likely to have catastrophic production flaws.

                At the consumer level, separate CPU and GPU will be the only way to make a buck, at least until someone reformulates both computing and graphics to be indistinguishable (what Intel was trying to do with Larrabee).

                • If your logic reflected to any degree the way things actually are, multicore CPUs wouldn't exist. As with CPU production today, dies with bad CPUs or GPUs will just go in the next bin down. Bad CPU? Put it in a discrete graphics card. Bad GPU? Make it a CPU only. Just as bad hexas become quads and bad quads become triples etc. Intel and AMD both do this all the time.
                  • by blair1q ( 305137 )

                    Not so much for you with the "logic" taunts.

                    You can do things with two simple CPUs better than one complicated CPU. And you still have a GPU to do the video.

                    Putting a simple GPU and a simple CPU on one chip is not the same as having a complicated CPU and complicated GPU on different chips, and putting a complicated CPU and GPU on a chip is way more expensive than putting simple ones together.

                    The "logic" isn't immutable with price point. CPU/GPU combinations will be the way to go in palmtop/smartphone devi

                    • I would be interested in a real world contrast of supposedly 'simple' vs. 'complicated' CPUs. Current multicores are designed to scale. That's why failed multicores are able to downscale. A Core Solo is not a 'complicated' single CPU, nor is a Core Duo a pair of 'simple' CPUs. They are virtually identical, just scaled and linked. The same is true of every other recent architecture.

                      Your hypotheticals about cost are only so much blather until the market ultimately decides what works at which prices.
            • Power consumption. To use the discrete graphics card you need lots of bus drivers on the motherboard and the card. They have to drive the bus inductance and capacitance. The faster they go, the more bits on the bus, the more power they use. Integrate usable graphics and you are driving tiny loads, so the power consumption drops.

              People simply don't want to sit in a fixed position governed by a box and a monitor, which is one reason laptops outsell desktops. The future is untethered, which means low power bat

              • by 0123456 ( 636235 )

                The future is untethered, which means low power battery operated systems. Your discrete graphics card will never be more than a niche market.

                So no-one is going to play PC games anymore? I guess you could be right, but Microsoft better hope you're wrong.

                • You have completely missed my point (you _have_ got electronic systems design experience, haven't you?). Given identical graphics capability, if you can put the GPU on the same die as the main CPU, or next door on the motherboard, you will consume less power, as sure as V = L di/dt.

                  Anyone who has been paying attention for the last 10 years is well aware that the entire consumer electronics industry is largely driven by integration and shrinkage.

                • You know laptops can still run PC games, right?

                • by imgod2u ( 812837 )

                  The next iteration of the Xbox 360 will have an SoC that integrates both the GPU and CPU on a single die.

                  I think we're at a point where only a small niche (well, more so than before) pushes for the $600 behemoth video cards and $900 CPUs.

                  People are moving towards "just enough" machines that are light on price and power consumption.

              • by blair1q ( 305137 )

                Even laptops are more comfortable to use on a desktop. They're actually annoying to use on your lap.

                Regardless, almost all the laptops you've ever used had separate CPU and GPU chips in them. So "untethered" is not driving integration.

                What drives integration is price and the premium that can be charged for a lighter, smaller device. But the integrated chip will not have the same performance as separated chips at the same manufacturing cost. So you will get an integrated CPU/GPU based platform that costs

              • I misread that at first; yes, you're correct, discrete cards are somewhat problematic in that respect. However, it's not quite that bleak. With technology that's been coming for a while, it wouldn't surprise me if before too long the integrated on-die graphics chip were teamed up with either an integrated or, more likely, a discrete graphics card to bump the quality even further.

                But there's a reason why AMD wants to do it: they've had a lot of good luck with integrating things and strategically moving
            • by Nadaka ( 224565 )

              Not always the case. Intel's on-chip and motherboard-integrated graphics are horrible.

              But ATI's HD 3300 line of motherboard-integrated GPUs will actually run games up to about 2 years old fairly well.

              You will always be able to get better discrete GPUs, but it is no longer the case that integrated cannot be used for gaming at all unless you need the latest and greatest.

            • Yeah, you get much better performance going through a northbridge & PCIe than talking to a GPU on the same chip...

          • by blair1q ( 305137 )

            Thinking like that cost AMD $3 billion in goodwill value from when they bought ATI, and led them to have to sell off their production facilities and become a design and marketing company.

            "Fusion," the project they had in mind when they made the acquisition, was supposed to be out three years ago.

            What they'll release this year or next will be a small, low-performing CPU melded clumsily to a small, low-performing GPU.

            And Intel already did it [arstechnica.com].

        • Re:AMD's stagnant? (Score:5, Insightful)

          by cynyr ( 703126 ) on Tuesday August 24, 2010 @01:01PM (#33358454)

          But not per $, which is the whole point. Sure, I can build a screaming rig using a $1500 Intel CPU and a $400 motherboard, and then toss in the ECC RAM that board needs... and all of a sudden I could have bought a Honda Civic....

          Or I could get 80-90% of that same rig, in certain loads 120-150%, for $500 from AMD.

          • by imgod2u ( 812837 )

            The quote you were fanboy-blasting said nothing about price. Technically minded people are concerned about technical features, such as performance and power consumption. And in that, the quote is exactly right. AMD is a generation behind.

            I fail to see what the pricing structure for consumer sales has to do with that.

          • Re: (Score:3, Interesting)

            by blair1q ( 305137 )

            That is how AMD stays in business, by cutting its prices well below the average market price for the performance rating.

            But there is a large chunk of performance rating they can't even approach.

            Here are last year's numbers [techreport.com] (didn't see this year's in the first page of Google results), which should give you an indication of why AMD went looking for more performance from each chip. I'm still not expecting Bulldozer to get AMD up to the top. They might match the second- and third-place chips from Intel, but the

            • Performance per chip, or per core, is important. More cores is nice and all... If you can use them. Not everything can. If your apps use only 2 cores, the other 4 don't do you a lot of good. However maybe those apps need a lot of performance out of the cores they do use (games are like that). As such you are interested in performance per core, not just having more cores.

              • Re: (Score:3, Informative)

                by blair1q ( 305137 )

                If you're running any Windows since XP, you're using all of your cores all the time, and it's benefitting you. You may not get all of them working on the same task, but the fact that the other cores can do background and response tasks while your foreground task pegs one of the cores is always a bonus. If it doesn't show up in outright speed of completion, it hides a vast array of niggling little delays that make things jerky.

          • by Bengie ( 1121981 )

            The real question is how Intel and AMD compete at each price range. Who cares that Intel's top-end CPU is $1k+? I can get an i7 and OC it to 4GHz and beat the crap out of AMD... at least for now, and still keep the cost close.

            CPUs tend to be better at different things. From the sound of it, AMD is claiming about a 50% increase in performance because of reduced per-core die space while offering comparable per-core performance.

            How will that compare to Intel's 32nm 6-core with HT?.. who knows. I hope AMD does well b

        • While AMD inarguably has been a manufacturing generation or two behind for quite some time, you fail to realize that the platform was designed from the ground up to be SMP-friendly. This has helped AMD pretty much rule the roost for several years in the virtualization market. The 12-core Opterons are beating the hell out of the Xeons on price and performance. Intel made a lot of great strides to improve their situation, but in the end AMD has been pretty good about maintaining their lead there. AMD also has the additi

    • Re: (Score:3, Informative)

      by SQL Error ( 16383 )

      AMD are selling six-core dual-socket CPUs for $200 [acmemicro.com] now. They're not quite as fast as the Xeon 5500/5600, but the price/performance is awesome.

    • by adbge ( 1693228 )

      I can't tell if you're trolling or not.

      Intel's hexacore offering features hyperthreading technology, which allows each core to execute two threads simultaneously. This means that Intel's hexacore chips actually have twelve logical cores, while the AMD hexacore chips only have six logical cores. Your number of physical cores comparison is meaningless, and actual performance benchmarks show that the Core i7 980X is more than twice as fast as the AMD Phenom II X6 1055T. [1]

      [1] http://www.cpubenchmark.net/h [cpubenchmark.net]
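      (A side note on the terminology: the "logical cores" an OS schedules onto include hyperthreads, so an HT-enabled hexacore like the 980X shows up as 12 logical processors while a Phenom II X6 shows up as 6. A minimal sketch in C for checking the count, assuming a Linux/glibc system where sysconf accepts _SC_NPROCESSORS_ONLN:)

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* Logical processors currently online: physical cores x threads per
               core, e.g. 12 on an HT-enabled i7 980X, 6 on a Phenom II X6. */
            long logical = sysconf(_SC_NPROCESSORS_ONLN);
            printf("logical processors online: %ld\n", logical);
            return 0;
        }
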

      • I used to have a P4 with HT until some piece of the machine became unstable at normal operating temps, and then got an i7 quad-core with HT. In between, I also purchased an AMD Phenom quad-core, which is still running, though it probably doesn't have enough fans.

        I wouldn't count hyper-threading as doubling the CPUs. Oftentimes I would run a single CPU-bound app and find the "hyperthread" CPU also spiking to 50-100% as shown on conky [sf.net]. So while you may sometimes see a doubling of your processing pow

        • One thing you've got to keep in mind is that more cores, logical or otherwise, only help when you're running multiple processes or processes that spawn (and make effective use of) child threads. If you're doing comparisons with software that's not multithreaded, the only difference you'll see is that of clock speeds and processor efficiency. What I'm trying to say is that you can't expect eight logical cores to be that much faster than four when the software's only running on one and the difference you're se

          • I think you may have misinterpreted what I said.

            A single thread of high CPU usage should only impact a single (virtual?) CPU. However, since the secondary (hyper-thread) CPU was also impacted, it tells me that there are some situations where an HT CPU cannot do two things literally at once. For lack of better statistical methods, I estimate this as the virtual CPU counting only as half a CPU.

            When I run "make -j13", there are so many processes flying around that I can't tell quite so directly whether the hy

        • by Bengie ( 1121981 )

          My cousin was doing server benchmarks before making a large purchase for his datacenter, and he found the i7 with HT disabled was still beating the AMD, and with HT it about doubled in speed. He runs a mix of DB/video compression, everything is run on Solaris/Linux, and he stores several petabytes of data.

          I guess consumer grade apps/hardware tend not to enjoy the extra kernel threads so much.

      • "Your number of physical cores comparison is meaningless, and actual performance benchmarks show that the Core i7 980X is more than twice as fast as the AMD Phenom II X6 1055T. [1]"

        Trolling much? Comparing a $200 CPU with a $1,000 CPU isn't really fair, is it? Someone shopping for a $200 CPU isn't going to even consider a $1,000 CPU and vice versa. Might as well compare a $100,000 Porsche Turbo to a $20,000 Ford Focus.

        Might want to follow your own link and look at the far right column with the price
      • by cynyr ( 703126 )

        http://www.anandtech.com/show/3674/amds-sixcore-phenom-ii-x6-1090t-1055t-reviewed/3 [anandtech.com]
        In some loads the 1090T is better than the 980X....

        Intel's hexacore is also ~$900; I can buy a whole computer with an X6 and a 58xx or GTX 4xx with 8GB of RAM for that... and according to your benchmark get ~60% of the performance, but they don't list the test rigs, instruction sets used, etc., etc...

        Also, 2x performance for 4-5x the cost? Seems worth it, huh?

      • Re:AMD's stagnant? (Score:4, Informative)

        by ArcherB ( 796902 ) on Tuesday August 24, 2010 @01:19PM (#33358766) Journal

        Intel's hexacore offering features hyperthreading technology, which allows each core to execute two threads simultaneously. This means that Intel's hexacore chips actually have twelve logical cores, while the AMD hexacore chips only have six logical cores.

        I think you may be misunderstanding what hyperthreading is. A processor (or core) can only execute one instruction at a time, hyperthreading or not. All hyperthreading does is allow for two sets of instructions to be queued up, so if one thread (or queue) gets hung up for whatever reason, like waiting over a cache miss, the other instructional thread can proceed, rather than patiently waiting in line.

        Think of it as one of those tumbling thingies you have to pass through to get into Six Flags or the subway. It's like that, but hyperthreading has two lines instead of one. If one moron has to stop to find his ticket at the front of the line, the other line may move until he finds it.

        Your number of physical cores comparison is meaningless...

        Um... no. I believe your "virtual" core comparison is meaningless. I'll take a quad core anything over a dual core hyperthreaded-anything-else any day, thank you. Virtual cores don't mean shit until a thread stalls.

        and actual performance benchmarks show that the Core i7 980X is more than twice as fast as the AMD Phenom II X6 1055T. [1]

        From the site you linked:

        Intel Core i7 980X @ 3.33GHz: Score of 10,325 at $989.99*
        AMD Phenom II X6 1055T: Score of 5,146 at $194.99*

        Hmmmm... Twice the performance at over 5x the cost. Strange, I don't know why you chose that AMD chip. It's odd that you would choose the fastest Intel chip and a middle-of-the-road AMD chip. Why not this one?
        AMD Phenom II X6 1090T: Score of 6,057 at $289.99*.

        Oh, I know. Then you wouldn't be able to use the 2x faster line. I get it now.

        Here, take a look at THIS [cpubenchmark.net] chart and pay attention to the price/performance graph. You'll see that your chip delivers about 2.5x less performance per dollar than the AMD Phenom II X4 965 when price is a consideration. Oh, and for nearly everyone that is not living off their mommy's credit cards, price is a consideration.

      • by bhima ( 46039 ) *

        Outside of the benchmark you have listed, how much of the time does the average business user really use those 12 logical cores? All other things being equal (including price), wouldn't most folks be better off buying fewer but faster cores?

    • AMD just came out with Six-Core processors for $200 [slashdot.org], how is that stagnant? Intel's only 6-core processor is still $1000 [google.com]
      And WHY are they selling 6-core processors so cheap? It's because their quad-core processors can't keep up with Intel's, so they are trying to make up for poor performance per core with a higher core count.

      Trouble is, for desktop applications going from 4 cores to 6 cores is only of marginal benefit.

      • Depends on what you are doing. If you are running a VM (or a few VMs), having the extra cores and a lot of RAM is a good thing. If all you're doing is email, basic office tasks, or a few games*, maybe not.

        *Some of the newer games use all the cores. This may not be true going forward. The latest games may benefit from many cores.

        I would say the 'base' new computer should be quad-core today. That is overkill for most regular tasks, but it is enough for the rest of what regular users do. Ultra high end people usually

  • Bulldozer? (Score:3, Funny)

    by Megahard ( 1053072 ) on Tuesday August 24, 2010 @12:29PM (#33357918)
    Sounds like a slow-moving behemoth. Not the best choice for a name.
    • Re: (Score:3, Interesting)

      by rotide ( 1015173 )
      Or it pushes everyone/everything else to the wayside. I guess it depends on your interpretation.
    • Re:Bulldozer? (Score:5, Insightful)

      by sideslash ( 1865434 ) on Tuesday August 24, 2010 @12:39PM (#33358068)
      I think people's mental impression will vary; yours is typical of an office worker. To someone employed in construction or agriculture, a bulldozer is a symbol of getting huge amounts of work done in a very short time. This reminds me of an ad I saw some years back for an "object oriented database", where they showed a photoshopped race car with a tractor on the back end. Their marketing message was "Why do you have a sluggish RDBMS on your web app's back end?" I found it hilarious, because my reflexive response was to ask, "Why is that totally useless racecar pasted on the front of that excellent looking tractor, the kind of vehicle that is used to grow all the crops that feed the world?" :)
      • Re: (Score:2, Funny)

        "Why is that totally useless racecar pasted on the front of that excellent looking tractor, the kind of vehicle that is used to grow all the crops that feed the world?" :

        Maybe it's because the people that were selling that "object oriented database" were far more honest than you assumed.

      • My impression of the name "Bulldozer" is something that gets a lot of shit done, but also burns a shitload of space and power. I thought both the mobile and server worlds were moving to lighter and more efficient architectures, but I guess we just have to keep on using x86 for our closed software.

        (Power-efficient x86 brings to mind names such as Intel Atom and Via Nano, but neither is particularly impressive when you compare them to real mobile/embedded architectures such as ARM and MIPS.)

    • Re: (Score:3, Interesting)

      by Minwee ( 522556 )

      Perhaps the team at AMD had been drinking heavily the night before.

      At eight o'clock on Thursday morning Arthur didn't feel very good. He woke up blearily, got up, wandered blearily round his room, opened a window, saw a bulldozer, found his slippers, and stomped off to the bathroom to wash.
      Toothpaste on the brush -- so. Scrub.
      Shaving mirror -- pointing at the ceiling. He adjusted it. For a moment it reflected a second bulldozer through the bathroom window. Properly adjusted, it reflected Arthur Dent's

    • Sounds like a slow-moving behemoth. Not the best choice for a name.

      If you've ever used a good bulldozer, you might be a slow moving behemoth, but the feeling you get is of an unstoppable juggernaut.

      To paraphrase a popular parody commercial:

      It gets shit done.

  • by lymond01 ( 314120 ) on Tuesday August 24, 2010 @01:04PM (#33358504)

    Processor Speed: Very fast hamsters on well-oiled wheels
    Multiple Cores: Many well-oiled wheels
    On die memory controllers: dangled cheese
    Cache: water trough next to the wheel
    L3 Cache: Camelback packs for each hamster
    Shared L3 Cache: This is where the real innovation comes in and won't be defined as patent is pending.

  • Honestly, I wish Via had the resources AMD and Intel have. Their Nano CPU is pretty nice, but it's languishing. They're only just now coming out with a dual core version. The Nano's on-die crypto extensions, low power use, and higher performance per watt would otherwise make it ideal for server applications, particularly SSL front-ends.

  • by pgn674 ( 995941 )

    I worked with 128-bit SIMD (Single Instruction, Multiple Data) on an Intel x86 processor for my undergrad capstone, specifically SSE4.1. SIMD mainly allows vector operations. As one example, instead of adding 42 to a single 32-bit number in RAM, you can add 42 to four 32-bit numbers in RAM, if they're all next to each other, and do it in almost the same amount of time (see the sketch below). Good for graphics and, well, vector operations. Kind of the CPU's answer to the GPU's specialties.

    My capstone dealt with finding out if an i
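
    A minimal sketch of the operation described above, in C with x86 SIMD intrinsics (illustrative only, not the capstone code; the packed 32-bit add shown here only needs SSE2, with SSE4.1 adding further instructions on top):

      #include <emmintrin.h>   /* SSE2 integer intrinsics */
      #include <stdio.h>

      int main(void)
      {
          int nums[4] = {10, 20, 30, 40};   /* four adjacent 32-bit integers */
          int out[4];

          __m128i v   = _mm_loadu_si128((const __m128i *)nums); /* load all four */
          __m128i c42 = _mm_set1_epi32(42);                      /* broadcast 42 */
          __m128i sum = _mm_add_epi32(v, c42);                   /* one add, four lanes */
          _mm_storeu_si128((__m128i *)out, sum);

          for (int i = 0; i < 4; i++)
              printf("%d ", out[i]);        /* prints: 52 62 72 82 */
          printf("\n");
          return 0;
      }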
