AMD Details Upcoming Bulldozer Architecture
Vigile writes "AMD is taking the lid off quite a bit of information on its upcoming CPU architecture, known as Bulldozer, the first complete redesign since its current processors. AMD's lineup has been relatively stagnant while Intel continued to innovate with Nehalem and Sandy Bridge (due late this year), and the Bulldozer refresh is badly needed to keep in step. The integrated north bridge, on-die memory controller and large shared L3 cache remain key components carried over from the Athlon/Phenom generation to Bulldozer, but AMD is adding features like dual-thread support per core (with a unique implementation using separate execution units for each thread) and support for 256-bit SIMD operations (for upcoming AVX support), all running on GlobalFoundries' 32nm SOI process technology."
Not much new information (Score:4, Informative)
Compared to such articles as AnandTech's [anandtech.com] coverage of this in November 2009, I don't see much new information. Perhaps the key bit (glossed over, but you can tell from the slides AMD gave them) is the difference between the Bulldozer and Bobcat cores. The Bulldozer core contains the two integer units that have been revealed before; the Bobcat core has only one, but it still implements hyperthreading.
Re:Not much new information (Score:5, Informative)
Unlike Intel's designs, with their highly asymmetric execution units, AMD's cores have had 3 symmetric integer execution units apiece since the original Athlon. It's actually a pleasant breeze to write hand-optimized integer code on AMD's.
This new design looks (in the diagram) like it actually has 4 symmetric integer execution units per integer scheduler, with Bulldozer having 2 schedulers per core and Bobcat only 1 per core (I would guess that the logical cores alternate on the rise and fall of the clock on Bobcat, and the diagram certainly makes it look like that is the case).
Each seems to have two wide floating point execution units, so the floating point performance of Bulldozer and Bobcat is probably equivalent.
What I think AMD has done here is that, in integer performance, Bulldozer is going to behave like it has 2x the number of real cores. So an 8-core (16-thread) chip will perform much like an 8-core CPU in floating point work, but much more like a true 16-core CPU in integer work. This should give it a large advantage over Intel in integer work in equal-core comparisons, but the floating point performance will still lag behind Intel.
Re: (Score:3, Insightful)
I believe Bobcat's 2 FPU paths are 64 bits wide, for a total of 128 bits. It initially will not support the 256-bit AVX instructions that are coming with Sandy Bridge and Bulldozer.
Its ALUs also appear to be significantly different from Bulldozer's: only one of the integer units can support multiplies, and only two of them can support arithmetic. Two others (using a different scheduler) are load/store units. Bulldozer doubles the ALU resources (but not the number of schedulers) compared to Bobcat. So ea
Re: (Score:3, Informative)
I was never a big fan of the 3x symmetric ALUs in the Athlons. When it comes to integer-intensive code, having a ton of independent ADDs or MULs that I'd need that kind of parallelism for was rare. And the latency (compared to a sane design like Core, at least) was significantly higher due to the units being multi-purpose.
In the Phenom II design, the latency of most register-to-register integer instructions is exactly 1 cycle, just like the i7. The units being multi-purpose is not a latency sacrifice at all; maybe the original Athlons had poor latency for some other reason, but Agner Fog's reference actually indicates that most register-to-register integer instructions had 1-cycle latency even on the early K7s.
Even in mem,reg operations, the Phenom II beats the i7 in latency on many operations (
Re: (Score:2)
Mmm (Score:3, Insightful)
Re: (Score:2)
A processor that's nearly ten years old is relevant today exactly how?
TFA says: "all good things must come to an end and with the development of the very impressive Nehalem architecture from Intel, and the upcoming Sandy Bridge, AMDs primary CPU architecture is certainly showing its age"
The market is ruthless, no one buys products from a company that used to do great thi
Re: (Score:2)
Re: (Score:2)
Re:Mmm (Score:5, Interesting)
the first AMD64 CPU shipped in April 2003
the first Itanium shipped June 2001
So AMD was 23 months late - all they did was tack onto existing x86, whereas Intel was trying to (and did) develop a whole new architecture.
Almost all of the complaints about the Itanium being slow were due to it having to emulate x86 for software that was not written specifically for IA64. Code that was and is written for IA64 runs fast as hell, and there is a reason they are still used today - just in specific applications.
Intel's failure was due to them trying to jump to a whole new computing architecture and expecting programmers to go with them - instead programmers resisted and AMD jumped on that by just extending the existing x86.
Development on what became the IA64 started in 1989 by HP and Intel was brought in in 1994 and the first implementation was in 1998 - hell it is the reason we don't see Alpha's anymore.
AMD64 started in 1999.
So in computing terms AMD had many generations to watch Intel actually innovate, and then take the short cut to market. Please note I'm not putting AMD down for AMD64; I'm just pointing out that you cannot compare its success vs. the Itanium, because they are not the same by a long shot.
Also, if you want to learn something new, read up on why IA-64 is so different from x86 and you will see why it was worth investing in. Not for the current project, but rather for the knowledge gained by doing it. You would be surprised how much of the R&D that went into the Itanium is currently running in your newer computers and servers.
Re:Mmm (Score:5, Insightful)
Remember, this is the same company that designed the P4 without a barrel shifter.
Re:Mmm (Score:5, Insightful)
There are plenty of things to learn from Itanium, specifically, what not to do if you want a good general purpose processor. For one, you don't make processor performance so incredibly reliant on instruction scheduling that the biggest compiler team on Earth (Intel's compiler group) couldn't make it run fast on anything except a small subsection of problems.
Secondly, when attempting to gain ISA adoption, making it an exclusive ISA that only you have control and rights to use is a big no no. Sure, it'd be heaven for Intel to be the sole supplier.
And lastly, process and iterations mean more for performance than any fancy ISA. Itanium is consistently one or two process generations behind its x86 counterparts and consistently one or two micro-architectural iterations slower (it takes 2 revisions of the Core micro-arch before Itanium comes out with one).
You can have as clean and fancy of an ISA (which IA-64 was not, btw) as you'd like but implementation matters far far more.
In the end, it wasn't fast enough (the best it ever did was match its x86 counterparts) and it didn't have any other advantages to warrant the switch.
Now, ARM on the other hand....
Plus, Windows Server support is over... (Score:3, Insightful)
http://www.microsoft.com/windowsserver2008/en/us/2008-IA.aspx [microsoft.com]
Re: (Score:3, Informative)
Development on what became the IA64 started in 1989 by HP and Intel was brought in in 1994 and the first implementation was in 1998 - hell it is the reason we don't see Alpha's anymore.
The reason we don't see Alpha anymore is that Intel coerced HP into buying up Compaq and killing it off, by offering to assist HP in porting HP-UX to Itanic.
Re: (Score:3, Insightful)
Itanium, anyone?
Yes, some time ago Intel was screwing around with Itanium (which hardly anyone wanted because it ran x86 code so badly) and NetBurst (which was slower per clock than a P3) while AMD was pushing ahead with the Hammer architecture.
However, since Core 2 and especially with Nehalem (where Intel moved from a shared-FSB architecture to a point-to-point architecture), Intel has gradually regained the lead, starting with single sockets and gradually moving up to larger platforms. AMD is resorting to th
Re: (Score:2)
but Itanium's great for its niche.
Itanium really sucks.
Itanium II is really good at FP. But that's a *tiny* niche, and x86_64 is catching up.
It didn't supplant X86.
But Intel wanted it to. That's why they had to copy AMD's 64-bit architecture to stay relevant.
AMD's stagnant? (Score:5, Insightful)
AMD just came out with Six-Core processors for $200 [slashdot.org], how is that stagnant? Intel's only 6-core processor is still $1000 [google.com]
Re:AMD's stagnant? (Score:5, Insightful)
AMD may not have the resources that Intel does, but it isn't as though Intel is walking AMD around on a leash. This mindset gets annoying after a while.
Re:AMD's stagnant? (Score:5, Informative)
"AMD's lineup has been relatively stagnant while Intel continued to innovate with Nehalem and Sandy Bridge (due late this year) and the Bulldozer refresh is badly needed to keep in step."
Likely another Intel fanboy trying to spread FUD about the company that he doesn't like and at the same time getting his username posted on the front page.
The facts in that quote were presented clearly. AMD is a generation behind on architecture, trying to get comparable performance by multiplying old cores, while Intel has been advancing architecture and multiplying cores at the same time. For about 4 years now, Intel has had 2-4 chips performing at levels above anything AMD could produce.
It remains to be seen if Bulldozer will put AMD anywhere near at-par on a performance/core basis, but it's not 2002 any more, and AMD has no hope of a performance lead.
Re:AMD's stagnant? (Score:4, Interesting)
Actually, AMD has a great chance of beating Intel in the future. You fail to recognize that AMD has ATI now, and they are going to be fusing CPUs and GPUs onto the same die in the future. They benefit from the experience and IP of ATI. Intel's graphics capability so far has been a joke.
Re: (Score:2)
Actually, AMD has a great chance of beating Intel in the future. You fail to recognize that AMD has ATI now and they are going to be fusing CPUs and GPUs onto the same die in the future.
If you want fast graphics then you buy a discrete graphics card. If you're using integrated graphics you don't much care whether it's a crappy ATI chip or a crappy Intel chip because it won't run modern games at any reasonable speed either way.
Re:AMD's stagnant? (Score:5, Insightful)
If you want fast graphics then you buy a discrete graphics card. If you're using integrated graphics you don't much care whether it's a crappy ATI chip or a crappy Intel chip because it won't run modern games at any reasonable speed either way.
That's conventional wisdom, but conventional wisdom doesn't always hold steady in the computing market. 15 years ago what you said there was true for both audio chips and network cards. Anybody who wanted one that was half-way decent bought a discrete unit because those performed well, and the hokey versions that you might find integrated were pretty much junk.
Today? All but a few holdouts and professional level users just use the integrated network and sound, because for your average user - even your average power user - the integrated stuff is plenty good enough.
I'd wager that in less than 8 years your statement of "If you want fast graphics then you buy a discrete graphics card." will sound just as outdated and clueless as "If you want to crunch numbers faster then you buy a dedicated math co-processor.".
Re: (Score:2)
I'd wager that in less than 8 years your statement of "If you want fast graphics then you buy a discrete graphics card." will sound just as outdated and clueless as "If you want to crunch numbers faster then you buy a dedicated math co-processor.".
Except there's an infinite capacity to use graphics power, so there's no way that in only eight years we will have reached an effective limit on processing power.
Re:AMD's stagnant? (Score:5, Insightful)
There's an infinite capacity to use floating point arithmetic too, but we abandoned the separate-chip idea for it long ago. FPUs these days are still getting faster with each chip; no limit on processing power was hit. We simply got to a point where a completely capable FPU could be bundled in with the CPU, and its performance was sufficient for most users.
Imagine this scenario: the integrated solutions don't suck. Instead of being virtually useless for 3D graphics, they have performance about equal to the mid-line $150 to $200-ish cards of today (and let that scale for whatever cards meet that definition of the time). You can get better performance, but it's going to take huge full-length cards running SLI or the like, and it's going to take several hundred dollars to beat your standard integrated solution.
My wager is that 95% of the people who currently buy discrete chips would accept integrated at that point. The chips would still get faster over time, and there still might be a few extreme solutions available, but the average user wouldn't need them anymore. My guess is we'll get there quite soon. And if you're asking why the chip companies would want to sell us 1 chip where they previously sold 2? Simple answer: market competition. If AMD can push out a chip as fast as or faster than Intel's that also has an integrated GPU rivaling discrete solutions, then they'll take a lot of business from Intel. That's all the motive they need.
Re: (Score:3, Insightful)
No, what happened was that the most FP intensive tasks (rendering, 3D modeling for games) moved to another dedicated chip (the GPU). Bigger and better compute capacity there has not stopped being in demand ever since and shows no signs of slowing down.
The only thing left that were really compute intensive on the CPU were things like video transcoding and precise (production quality) 3D rendering due to the lack of double-precision support in GPU's as well as the difficulty of using them for compute (i.e. wr
Re: (Score:2)
Re: (Score:2)
The reason that audio leveled off is that the human ear and capacity to hear is finite. Once graphics are fully photorealistic, there really is no higher level.
Re: (Score:2)
Except that fully photo-realistic graphics have been a long time coming, and will continue to be. We're still using the same resolutions we used a decade ago, we just have better models and fancy post-processing effects that make it look that much better. Strip away the pixel shaders, lighting and texture filtering, and the models and textures are still actually pretty ugly today. We can start talking photorealism when we begin to see screen resolutions in the vicinity of 4096px wide on 17" displays and tex
Re: (Score:2)
I'd wager that in less than 8 years your statement of "If you want fast graphics then you buy a discrete graphics card." will sound just as outdated and clueless as "If you want to crunch numbers faster then you buy a dedicated math co-processor.".
Except there's an infinite capacity to use graphics power, so there's no way that in only eight years we will have reached an effective limit on processing power.
Ridiculous.
Once upon a time you bought a machine with a separate, discrete math coprocessor because the CPU couldn't handle math on its own. I remember there being a noticeable difference just running a spreadsheet on two computers that were identical except for the presence of a math coprocessor. I remember some pieces of software simply refusing to run because I didn't have an FPU on my machine.
Those days are simply gone. These days nobody has a discrete math coprocessor. Sure, yes, you can get fanc
Re: (Score:2)
No, it will always be true.
The amount of stuff you can cram on a single chip is smaller than the amount of stuff you can cram on two chips, and chips that are twice as big are twice as likely to have catastrophic production flaws.
At the consumer level, separate CPU and GPU will be the only way to make a buck, at least until someone reformulates both computing and graphics to be indistinguishable (what Intel was trying to do with Larrabee).
Re: (Score:2)
Re: (Score:2)
Not so much for you with the "logic" taunts.
You can do things with two simple CPUs better than one complicated CPU. And you still have a GPU to do the video.
Putting a simple GPU and a simple CPU on one chip is not the same as having a complicated CPU and complicated GPU on different chips, and putting a complicated CPU and GPU on a chip is way more expensive than putting simple ones together.
The "logic" isn't immutable with price point. CPU/GPU combinations will be the way to go in palmtop/smartphone devi
Re: (Score:2)
Your hypotheticals about cost are only so much blather until the market ultimately decides what works at which prices.
Oh yes you do, because the future is not desktop. (Score:2)
People simply don't want to sit in a fixed position governed by a box and a monitor, which is one reason laptops outsell desktops. The future is untethered, which means low power bat
Re: (Score:2)
The future is untethered, which means low power battery operated systems. Your discrete graphics card will never be more than a niche market.
So no-one is going to play PC games anymore? I guess you could be right, but Microsoft better hope you're wrong.
Doesn't follow (Score:2)
Anyone who has been paying attention for the last 10 years is well aware that the entire consumer electronics industry is largely driven by integration and shrinkage.
Re: (Score:2)
You know laptops can still run PC games, right?
Re: (Score:2)
The next iteration of the XBox 360 will have an SoC that integrates both the GPU and CPU on a single die.
I think we're at a point where only a small niche (well, more so than before) pushes for the $600 behemoth video cards and $900 CPU's.
People are moving towards "just enough" machines that are light on price and power consumption.
Re: (Score:2)
Even laptops are more comfortable to use on a desktop. They're actually annoying to use on a lap.
Regardless, almost all the laptops you've ever used had separate CPU and GPU chips in them. So "untethered" is not driving integration.
What drives integration is price and the premium that can be charged for a lighter, smaller device. But the integrated chip will not have the same performance as separated chips at the same manufacturing cost. So you will get an integrated CPU/GPU based platform that costs
Re: (Score:2)
But there's a reason why AMD wants to do it, they've had a lot of good luck with integrating things and strategically moving
Re: (Score:2)
Not always the case. Intel's on-chip and motherboard-integrated graphics are horrible.
But ATI's HD 3300 line of motherboard-integrated GPUs will actually run games up to about 2 years old fairly well.
You will always be able to get better discrete GPUs, but it is no longer the case that integrated cannot be used for gaming at all, unless you need the latest and greatest.
Re: (Score:2)
Yeah, you get much better performance going through a northbridge & PCIe than talking to a GPU on the same chip...
Re: (Score:2)
Thinking like that cost AMD $3 billion in goodwill value from when they bought ATI, and led them to have to sell off their production facilities and become a design and marketing company.
"Fusion," the project they had in mind when they made the acquisition, was supposed to be out three years ago.
What they'll release this year or next will be a small, low-performing CPU melded clumsily to a small, low-performing GPU.
And Intel already did it [arstechnica.com].
Re:AMD's stagnant? (Score:5, Insightful)
But not per $, which is the whole point. Sure, I can build a screaming rig using a $1500 Intel CPU and a $400 motherboard, and then toss in the ECC RAM that board needs... and all of a sudden I could have bought a Honda Civic....
Or I could get 80-90% of that same rig (in certain loads, 120-150%) for $500 from AMD.
Re: (Score:2)
The quote you were fanboy-blasting said nothing about price. Technically minded people are concerned about technical features, such as performance and power consumption. And in that, the quote is exactly right. AMD is a generation behind.
I fail to see what the pricing structure for consumer sales has to do with that.
Re: (Score:3, Interesting)
That is how AMD stays in business, by cutting its prices well below the average market price for the performance rating.
But there is a large chunk of performance rating they can't even approach.
Here's last year's numbers [techreport.com] (didn't see this year's in the first page of google results), which should give you an indication of why AMD went looking for more performance from each chip. I'm still not expecting Bulldozer to get AMD up to the top. They might match the second- and third-place chips from Intel, but the
And to some people (Score:2)
Performance per chip, or per core, is important. More cores is nice and all... If you can use them. Not everything can. If your apps use only 2 cores, the other 4 don't do you a lot of good. However maybe those apps need a lot of performance out of the cores they do use (games are like that). As such you are interested in performance per core, not just having more cores.
Re: (Score:3, Informative)
If you're running any Windows since XP, you're using all of your cores all the time, and it's benefiting you. You may not get all of them working on the same task, but the fact that the other cores can handle background and response tasks while your foreground task pegs one of the cores is always a bonus. If it doesn't show up in outright speed of completion, it hides a vast array of niggling little delays that make things jerky.
Re: (Score:2)
The real question is how Intel vs. AMD compete at each price range. Who cares that Intel's top-end CPU is $1k+? I can get an i7 and OC it to 4GHz and beat the crap out of AMD... at least for now, and still keep the cost close.
CPUs tend to be better at different things. From the sound of it, AMD is claiming about a 50% increase in performance because of reduced per-core die space while offering comparable per-core performance.
How will that compare to Intel's 32nm 6core with HT?.. who knows. I hope AMD does well b
Re: (Score:2)
While AMD inarguably has been a manufacturing generation or two behind for quite some time, you fail to realize that the platform was designed from the ground up to be SMP friendly. This has helped AMD pretty much rule the roost for several years in the virtualization market. The 12-core Opterons are beating the hell out of the Xeons on price and performance. Intel made a lot of great strides to improve their situation, but in the end AMD has been pretty good about maintaining their lead there. AMD also has the additi
Re: (Score:2)
This means AMD's old-generation processors are capable of the same performance as brand-new Intel ones. I don't think this is called being left behind for AMD.
Being faster than a brand new Intel Atom isn't really a great selling point for a modern CPU.
Re: (Score:3, Informative)
AMD are selling six-core dual-socket CPUs for $200 [acmemicro.com] now. They're not quite as fast as the Xeon 5500/5600, but the price/performance is awesome.
Re:AMD's stagnant? (Score:5, Insightful)
Re: (Score:2)
AMD just came out with Six-Core processors for $200 [slashdot.org], how is that stagnant? Intel's only 6-core processor is still $1000 [google.com]
I can't tell if you're trolling or not.
Intel's hexacore offering features hyperthreading technology, which allows each core to execute two threads simultaneously. This means that Intel's hexacore chips actually have twelve logical cores, while the AMD hexacore chips only have six logical cores. Your number of physical cores comparison is meaningless, and actual performance benchmarks show that the Core i7 980X is more than twice as fast as the AMD Phenom II X6 1055T. [1]
[1] http://www.cpubenchmark.net/h [cpubenchmark.net]
Re: (Score:2)
I used to have a P4 with HT until some piece of the machine became unstable at normal operating temps, and then got an i7 quad-core with HT. In between, I also purchased an AMD Phenom quad-core, which is still running, though it probably doesn't have enough fans.
I wouldn't count hyper-threading as doubling the CPUs. Oftentimes I would run a single CPU-bound app and find that the "hyperthread" CPU would also spike to 50-100% as shown on conky [sf.net]. So while you may sometimes see a doubling of your processing pow
Re: (Score:2)
One thing you've got to keep in mind is that more cores, logical or otherwise, only help when you're running multiple processes, or processes that spawn (and make effective use of) child threads. If you're doing comparisons with software that's not multithreaded, the only difference you'll see is that of clock speeds and processor efficiency. What I'm trying to say is that you can't expect eight logical cores to be that much faster than four when the software's only running on one, and the difference you're se
Re: (Score:2)
I think you may have misinterpreted what I said.
A single thread of high CPU usage should only impact a single (virtual?) CPU. However, since the secondary (hyper-thread) CPU also was impacted, it tells me that there are some situations where a HT CPU cannot do two things literally at once. For lack of better statistical methods, I estimate this as the virtual CPU counting only as half a CPU.
When I run "make -j13", there are so many processes flying around that I can't tell quite so directly whether the hy
Re: (Score:2)
My cousin was doing server benchmarks before making a large purchase for his datacenter, and he found the i7 with HT disabled was still beating the AMD, and with HT it about doubled in speed. He runs a mix of DB/video compression, everything is run on Solaris/Linux, and he stores several petabytes of data.
I guess consumer grade apps/hardware tend not to enjoy the extra kernel threads so much.
Re: (Score:2)
Trolling much? Comparing a $200 CPU with a $1,000 CPU isn't really fair, is it? Someone shopping for a $200 CPU isn't going to even consider a $1,000 CPU and vice versa. Might as well compare a $100,000 Porsche Turbo to a $20,000 Ford Focus.
Might want to follow your own link and look at the far right column with the price
Re: (Score:2)
http://www.anandtech.com/show/3674/amds-sixcore-phenom-ii-x6-1090t-1055t-reviewed/3 [anandtech.com]
In some loads the 1090T is better than the 980X....
Intel's hexacore is also ~$900; I can buy a whole computer with an X6 and a 58xx or GTX 4xx with 8GB RAM for that... and according to your benchmark get ~60% of the performance. But they don't list the test rigs, instruction sets used, etc etc...
Also, 2x performance for 4-5x the cost? Seems worth it, huh?
Re:AMD's stagnant? (Score:4, Informative)
Intel's hexacore offering features hyperthreading technology, which allows each core to execute two threads simultaneously. This means that Intel's hexacore chips actually have twelve logical cores, while the AMD hexacore chips only have six logical cores.
I think you may be misunderstanding what hyperthreading is. A processor (or core) can only execute one instruction at a time, hyperthreading or not. All hyperthreading does is allow for two sets of instructions to be queued up, so if one thread (or queue) gets hung up for whatever reason, like waiting over a cache miss, the other instructional thread can proceed, rather than patiently waiting in line.
Think of it as one of those tumbling thingies (turnstiles) you have to pass through to get into Six Flags or the subway. It's like that, but hyperthreading has two lines instead of one. If one moron has to stop to find his ticket at the front of one line, the other line can keep moving until he finds it.
Your number of physical cores comparison is meaningless...
Um... no. I believe your "virtual" core comparison is meaningless. I'll take a quad core anything over a dual core hyperthreaded-anything-else any day, thank you. Virtual cores don't mean shit until a thread stalls.
and actual performance benchmarks show that the Core i7 980X is more than twice as fast as the AMD Phenom II X6 1055T. [1]
From the site you linked:
Intel Core i7 980X @ 3.33GHz: Score of 10,325 at $989.99*
AMD Phenom II X6 1055T: Score of 5,146 at $194.99*
Hmmmm... Twice the performance at over 5x the cost. Strange, I don't know why you chose that AMD chip. It's odd that you would choose the fastest Intel chip and a middle of the road AMD Chip. Why not this one?
AMD Phenom II X6 1090T: Score of 6,057 at $289.99*.
Oh, I know. Then you wouldn't be able to use the 2x faster line. I get it now.
Here, take a look at THIS [cpubenchmark.net] chart and pay attention to the price/performance graph. You'll see that your chip delivers about 2.5x worse price/performance than the AMD Phenom II X4 965. Oh, and for nearly everyone that is not living off their mommy's credit cards, price is a consideration.
Re: (Score:2)
Re: (Score:2)
A processor (or core) can only execute one instruction at a time, hyperthreading or not.
Uh, no. That hasn't been true for years.
Nehalem, I believe, can execute up to five instructions per clock per core; though you'll rarely be able to reach that limit.
I said one instruction at a time, not per clock. Still, the point was that when something went awry, the entire instruction chain would have to be dumped and reloaded, and the processor would sit idle while waiting for this to complete. The P4 had an extremely long instruction pipeline that allowed it to clock at higher speeds, but choke when the pipeline had to be flushed. This is why the Athlons of the day were much faster at lower clock speeds: shorter pipelines. Whenever a stall happened, the Ath
Re: (Score:2)
Outside of the benchmark you have listed, how much time does the average business user really use those 12 logical cores? All other things being equal (including price), wouldn't most folks be better off buying fewer but faster cores?
Re: (Score:2)
Logical cores are a marketing gimmick. Just like you. See what I did thar?
No they're not. They're particularly important on Atoms, which can't use out-of-order execution to hide pipeline delays and need something to fill up those clock cycles, and the Nehalem architecture has additional execution units to greatly increase the chance that it will be able to execute instructions from two threads in a single clock cycle.
Re: (Score:2)
AMD just came out with Six-Core processors for $200 [slashdot.org], how is that stagnant? Intel's only 6-core processor is still $1000 [google.com]
And WHY are they selling 6-core processors so cheap? It's because their quad-core processors can't keep up with Intel's, so they are trying to make up for poor performance per core with higher core count.
Trouble is, for desktop applications going from 4 cores to 6 is only of marginal benefit.
Re: (Score:2)
Depends on what you are doing. If you are running a VM (or a few VMs), having the extra cores and a lot of RAM is a good thing. If all you're doing is email, basic office tasks, or a few games*, maybe not.
*Some of the newer games use all the cores. This wasn't always true, but the latest games may benefit from many cores.
I would say the 'base' new computer should be quad core today. That is overkill for most regular tasks. But it is enough for the rest of regular user tasks. Ultra high end people usually
Bulldozer? (Score:3, Funny)
Re: (Score:3, Interesting)
Re:Bulldozer? (Score:5, Insightful)
Re: (Score:2, Funny)
"Why is that totally useless racecar pasted on the front of that excellent looking tractor, the kind of vehicle that is used to grow all the crops that feed the world?" :
Maybe it's because the people that were selling that "object oriented database" were far more honest than you assumed.
Re: (Score:2)
My impression of the name "Bulldozer" is something that gets a lot of shit done, but also takes up a shitload of space and power. I thought both the mobile and server worlds were moving toward lighter and more efficient architectures, but I guess we just have to keep on using x86 for our closed software.
(Power-efficient x86 brings to mind names such as Intel Atom and Via Nano, but neither is particularly impressive when you compare them to real mobile/embedded architectures such as ARM and MIPS.)
Re: (Score:3, Interesting)
Perhaps the team at AMD had been drinking heavily the night before.
At eight o'clock on Thursday morning Arthur didn't feel very good. He woke up blearily, got up, wandered blearily round his room, opened a window, saw a bulldozer, found his slippers, and stomped off to the bathroom to wash.
Toothpaste on the brush -- so. Scrub.
Shaving mirror -- pointing at the ceiling. He adjusted it. For a moment it reflected a second bulldozer through the bathroom window. Properly adjusted, it reflected Arthur Dent's
Re: (Score:2)
Sounds like a slow-moving behemoth. Not the best choice for a name.
If you've ever used a good bulldozer: it might be a slow-moving behemoth, but the feeling you get is of an unstoppable juggernaut.
To paraphrase a popular parody commercial:
It gets shit done.
Hamster Farm Analogy (Score:5, Funny)
Processor Speed: Very fast hamsters on well-oiled wheels
Multiple Cores: Many well-oiled wheels
On die memory controllers: dangled cheese
Cache: water trough next to the wheel
L3 Cache: Camelback packs for each hamster
Shared L3 Cache: This is where the real innovation comes in and won't be defined as patent is pending.
Re: (Score:2)
Re: (Score:2)
This is Slashdot. We need car analogies!
Well, a train analogy would be a lot easier; the whole multiple-engines thing sort of makes sense there. But I'll give the car analogy a go:
Processor Speed: Compression ratio
Multiple Cores: Number of Cylinders
On die memory controllers: 4 Wheel drive differential
Cache: Rim Size
L3 Cache: Tire Thickness
Shared L3 Cache: Flying Cars?
Re: (Score:2)
On die memory controllers: semi-automatic transmission
Via Nano (Score:2)
Honestly, I wish Via had the resources AMD and Intel have. Their Nano CPU is pretty nice, but it's languishing. They're only just now coming out with a dual core version. The Nano's on-die crypto extensions, low power use, and higher performance per watt would otherwise make it ideal for server applications, particularly SSL front-ends.
SIMD (Score:2)
I worked with 128-bit SIMD (Single Instruction, Multiple Data) on an Intel x86 processor for my undergrad capstone, specifically SSE4.1. SIMD mainly allows vector operations. As one example, instead of adding 42 to a single 32-bit number in RAM, you can add 42 to four 32-bit numbers in RAM, if they're all next to each other, and do it in almost the same amount of time. Good for graphics and, well, vector operations. Kind of the CPU's answer to the GPU's specialties.
My capstone dealt with finding out if an i
Re: (Score:2, Insightful)
And why, exactly, should/do we not care? This is akin to the announcement of i7 or Sandy Bridge. Maybe if you don't care you shouldn't be reading this story.
Re:Nobody cares. (Score:4, Funny)
What's next? People getting excited about new cars?
Comment removed (Score:5, Insightful)
Re: (Score:3, Informative)
Re: (Score:2)
Cool, thanks!
Re: (Score:2)
So, let's see, I can buy a 3.2 GHz hexacore from AMD for ~$300 or one from Intel for ~$900 on NewEgg right now... 5% price difference my ass. Even if it were 10% slower it would still be a killer deal.
Re: (Score:2)
The Intel one does a lot more in 3.2 GHz than the AMD one does. That's the point of AMD needing a new architecture.
Pricing is not linear with performance, and never has been. If you have a performance advantage at the high end that your competitors can't approach, you get paid for it.
But if you want a ghetto computer, by all means buy one.
Ghetto computer (Score:3, Insightful)
Re: (Score:3, Insightful)
Paying 1/3 as much for more than 1/3 the computing power is a viable strategy known as "value-based judgment".
At any given price point where an AMD processor exists, there are few if any Intel CPUs with equal or better performance.
The i5 750 and i7 920 are among the very, very few Intel chips that compete with AMD on value (performance / price).
Re: (Score:2)
Paying 3X as much for 2X the capability and profiting 4X as much from it is a viable strategy known as "making more money".
Seriously, if all you want is enough computer to post to /. on, you have no business worrying about CPUs more recent than the Celeron and Duron.
Re:Sweeeeet nectar (Score:4, Insightful)
Re: (Score:2)
By that logic, you should be buying old Pentium-D chips for $5 a pound and bragging that you've got more performance/$ than the new Bulldozer will have.
Re: (Score:2)
There does come a point where the increase in speed is not at all worth the extra cost. The top-tier AMD parts (the hexa-cores and the 965s) are more than capable of handling any task effectively, and they represent about the mid-high range of the Intel line in terms of performance (high i5/low i7) for about the same price; the 965 Black Edition is more or less on par with the i5-750, and is $30 less expensive. Calling AMD "ghetto" as you have in other posts is wholly incorrect, and remember that it wasn'
Re: (Score:2)
Re: (Score:2)
No.
Re:Will it be compatible with AM3? (Score:5, Informative)
Almost certainly not AM3 (Score:3, Interesting)
It's a whole new architecture and will almost certainly require a new socket. ISTR the article said nothing about memory technologies, either. The good news is that a new architecture on the horizon which almost certainly requires a new socket makes it seem less likely that AMD will bring out another socket before then.
Re: (Score:2)
Well, it seems it will be AM3r2 (AM3+?). In keeping with tradition, Bulldozer may be backwards compatible with AM3, as many people suggest in various forums (like AM2/AM2+). There is no evidence that it won't be compatible, and at this stage that is good news...
Re: (Score:2)
Yes, please. (Score:2)
Just about every iteration of SIMD from Intel and AMD has been utterly worthless. (Not to mention NEON on the ARM.) Altivec was an example of SIMD done right, and AVX finally incorporates some of the better features of it.