AMD Starts Shipping First Bulldozer CPU 202
MrSeb writes "After an awfully long wait, AMD has finally begun shipment of its Bulldozer-based Interlagos (Opteron 6200) server-oriented CPU. If you believe AMD's PR bots, it is the world's first 16-core x86 processor. Unfortunately, and possibly because of reports that AMD is struggling to clock its Bulldozer cores to speeds that are competitive with Intel's Core i7, there's no word of the 8-core desktop-targeted Zambezi CPU. If AMD doesn't move quickly, Intel's Sandy Bridge-E will beat Zambezi to market and AMD will lose any edge that it might have."
Sandy Bridge-E (Score:5, Insightful)
If AMD doesn't watch out their mainline $200 processor will be made obsolete by Intel's $1000 EXTREME CPUs!
Re:Sandy Bridge-E (Score:4, Insightful)
You may laugh but think of it this way... if that $1000 Gulftown CPU from March of 2010 can still beat an 8 core Bulldozer that comes out 19 months later, then you would only have to realize a marginal benefit of about $1.75 per day to make it economically worth your while to have bought the "overpriced" Gulftown chip. (that includes the cost from Intel motherboards that tend to be more expensive and the extra RAM for a triple-channel configuration). Nevermind the fact that 6-core chips have been sold for $600 for some time as well. I can think of a bunch of professional applications that can easily show a $1.75 / day benefit from the extra cores. Maybe not for playing games, but for a lot of real applications.
Bulldozer should beat the consumer-level SB chips at perfectly threaded integer benchmarks, but it remains a very open question whether it will be able to beat the almost 2 year old Gulftowns at the same tasks, and it is an almost foregone conclusion that it won't beat the 6 core SB-E chips at those tasks. Factor in the 315 mm^2 die size of EVERY Bulldozer (not just the 8 core ones, but the cheap 4 core ones too, since AMD only has 1 die design) and the immaturity of AMD's 32nm process, and things could be expensive for AMD on the desktop. That's why it makes sense to ship the server chips first, where AMD has some hope of getting higher ASPs.
Re: (Score:2, Insightful)
I've (honestly) just been asked what our expected budget requirements are for hardware for the next year. Please inform me where I can go to use the patented Intel time travelling technology so that I can retroactively use things before I decide to purchase them.
Re: (Score:2)
Please inform me where I can go to use the patented Intel time travelling technology
Haha, an anti-Intel snarky comment on Slashdot... way to speak truth to power. What's funny is that if I actually told you I had a time machine in 2010 that could bring you the fastest CPU available from AMD at the end of 2011, and you only had to pay a few hundred bucks for my service, you'd probably jump at the chance... which is exactly what Intel effectively did with Gulftown.....
Re: (Score:2)
Re: (Score:2)
The reason for the pinless chips is obviously to reduce returns. With a chip with 1000+ pins, it's easy for one to get bent during transit and then you have customers returning them. On top of that, they pretty much have to use gold conductors for the pins, and have you seen the price of gold lately?
You put the pins on the motherboard and you shift the liability and cost to the motherboard manufacturer. And since Intel designs the socket, that's what they did.
Re: (Score:2)
You put the pins on the motherboard and you shift the liability and cost to the motherboard manufacturer. And since Intel designs the socket, that's what they did.
Bending pins on a CPU is easy. Bending pins on the motherboard is hard if you even remotely follow the instructions.
Re: (Score:2)
Re: (Score:2)
Extreme or not, Intel's price range has really gone down with Sandy Bridge; their highest priced chip (and quite possibly the single fastest consumer CPU on the market) is about $300ish. And I believe their highest price Sandy Bridge Xeon is only $600ish.
Re:Sandy Bridge-E (Score:5, Interesting)
the immaturity of AMD's 32nm process and things could be expensive for AMD on the desktop.
That is true, and as you point out, for the desktop. Machines in a data center are cooled, so the number of cores is a better measure of functionality. If you build machines that run multiple VMs, which is usually the case, that cheaper 6200 will not only outperform Intel's Gulftown but will more likely be preferred when adding more machines to the data center, even over the SB-E chips. If AMD can get a better footing in the "cloud" infrastructure they might make enough to move to a die size smaller than 32nm, which is REALLY what they must do.
Re: (Score:2)
Re: (Score:2)
"the Opteron 6200 has eight Bulldozer modules, and each module contains two independent integer processors, but only a single FPU and shared fetch/decode/execute units; in other words, it has more than one core, but not quite two."
This is an interesting architecture. Most general processing is just moving data so an FPU per core is overkill. Also, the main part of an algorithm (or a benchmark, anyway) is likely to fit in cache so use of the bus fetch hardware is likely to be in bursts. It should give bet
Re: (Score:2)
Most programs can't saturate the FPU anyway.
Intel/AMD FPUs can accept data on every clock cycle, the data bubbles through the system and a result pops out N clock cycles later. It's a rare program that can keep putting data in on every cycle so it makes sense to have one FPU per two cores.
You can easily see the effect, try this:
float array[arraySize];
float result=0;
for (i=0; i<arraySize; ++i) {
    result += array[i];
}
print(result);

float array[arraySize];
float result1=0, result2=0;
for (i=0; i<arraySize; i+=2) {
    result1 += array[i];
    result2 += array[i+1];
}
print(result1+result2);
Re: (Score:2)
Actually, if your compiler is any good it would rewrite the first loop to look like the second (or another even faster variant) at compile time.
Re: (Score:2)
Most programs can't saturate the FPU anyway.
Well yes. No one uses the FPU anymore.
It's a rare program that can keep putting data in on every cycle so it makes sense to have one FPU per two cores.
No, it's actually very easy. Avoid dependency chains.
If your compiler's any good the second loop will run about twice as fast as the first because it keeps the FPU busier
Uhm. Wut? It may run faster, but not for the reasons you state.....
1. If your compiler is any good, they will run at the same speed. Compilers aren't stupid - a good one would move the computation onto the SSE/AVX regs. A semi-good compiler would find the second version harder to optimise. KIS-KIS!
2. Continually assigning a value to the same variable is a big performance no no!! (which you are doing for both i and t
Re: (Score:2)
IIRC, all of AMD machines are full multi core, Intel has patents on the partial application.
Re: (Score:2)
Re:Sandy Bridge-E (Score:5, Funny)
Part of what makes Bulldozer interesting, is that there is no quick straight-forward soundbite answer to that question. It's 16 less-than-complete cores, or 8 more-than-merely-dual-threaded cores.
I look forward to the day when some profiler nerd figures out that the most cost-effective ratio is to make a chip with 3 memory busses, 17 integer units and 11 floating point units, with 23 sets of registers. /proc/cpuinfo will say it's a 23-core machine so that'll be the soundbite answer, and then some math twit will whine that it's really only an 11 core machine. And everyone will be both wrong and right simultaneously.
Re: (Score:2)
a) It was called Niagara.
b) It had no delay when switching.
c) It was slow as shit.
Re: (Score:2)
Could be, could be, but there is no information out there how production Bulldozer processors will perform once they start pumping out consumer chips. AMD may have been forced to wait this long to release Bulldozer just to deal with those 32nm process issues. I am not saying that Bulldozer will beat SB, but on the flip side, AMD machines have been selling well in that $500 and under range to this point, so Bulldozer SHOULD help.
The average consumer doesn't need a LOT of processing power, so if the At
Re: (Score:2)
Not only that (Score:2)
But let's stop pretending that Intel's highest end chip is the only one to talk about. Yes, Intel has, for a long time, had a chip for people with more money than sense. They make an ultra high end chip for $1000 that is only a tiny improvement over the one below it. It is for sale to people who buy for bragging rights, more than anything else. That would be the i7-990X right now. 3.46GHz, 6 cores.
However right below that is a chip with near the performance but around half the cost. Right now that is the i7
Re: (Score:2)
If AMD doesn't move quickly, Intel's Sandy Bridge-E will beat Zambezi to market and AMD will lose any edge that it might have.
Ok, so nothing will be lost?
(No, I'm not trolling or flaming, personally I would get a new machine to play Starcraft II, AFAIK SCII only seem to use two cores. AMD themselves has claimed their chip (best consumer chip?) would be similar to 2600k in performance. 2600k is quad core vs octo core for the AMD chip. Considering the work load the 2600k will still outperform the AMD by a lot. Also with socket 2011 we talk quad channel memory instead of dual channel and even back in 1156 vs 1366 days and with SCII o
Re: (Score:2)
Funny, that's why I went dual-core vs quad-core when I built my game machine (specifically for Starcraft II).
[John]
Re: (Score:2)
There's no evidence to indicate that AMD's "mainline" $200 CPU will be much better than the existing "mainline" $200 2500K that's out right now... Just because Intel offers chips at a higher range than AMD doesn't mean that AMD automatically beats Intel at everything below the highest range. When the 2500K first came out it was priced lower than AMD chips that were substantially slower... AMD "corrected" the price to performance ratio by slashing its own prices, which didn't do too much to help its profita
Re: (Score:3)
There is some, depending on your application of course. If computer chess analysis is your thing, you would see benchmarks results like these [hardwarecanucks.com], where the $189 Phenom II X6 1100T beats the $219 Intel 2500k.
So AMD already has CPUs which are price-performance competitive, surely Bulldozer shouldn't be worse in terms of price-performance.
Re: (Score:2)
Re: (Score:3)
And in the vast majority of tasks the X6 is slower, including some very multithreaded ones:
http://www.anandtech.com/bench/Product/203?vs=363 [anandtech.com]
And that's the real rub (Score:2)
It isn't a question of whether you can find a benchmark the AMD processor does better on; it is a question of how the AMD and Intel processors compare overall. That Anand benchmark shows the answer is: not well. For example the i5 is ahead in x264 encoding. That is a completely parallel activity that will use all cores 100%, so with a slight clockspeed advantage and 2 more cores the AMD processor should stomp the Intel chip and be 50% faster. Instead the Intel chip bests it slightly.
Also an
Re: (Score:2)
You didn't know that Photoshop, DivX, h264, Windows Media, Cinebench, Blender, MS Excel, WinRar, dragon age, dawn of war, WoW, starcraft or power consumption existed? What are you doing on slashdot?
Re: (Score:2)
Sure I know those things existed, but I just don't see what a photography supply store, a failed DVD pseudo-rental scheme, the planet the xenomorphs were found on in Alien, a food processor, an exclamation of surprise or amazement, and that other crap has to do with processor performance!
Re:Sandy Bridge-E (Score:5, Insightful)
Re: (Score:2)
Huh? You can get $70-80 Intel boards with the feature set you're describing, and next year's Intel CPU (Ivy Bridge) will still use LGA1155. Notably, the whole "AMD boards carry on working with future CPUs" thing is a myth – they work for one generation, at most. Bulldozer chips will not work in AM3 boards, only AM3+. Similarly, the current crop of Phenom IIs will not work in AM2 boards.
Re: (Score:2)
Well what do you expect? That they would make a Bulldozer with no memory controller and a 100MHz bus so you can drop it into Socket 7 motherboards? The idea is that you can buy a motherboard, then a year or two later get the next generation processor and drop it in. After that it doesn't even make any sense -- the bottleneck stops being the CPU and becomes the fact that the older socket is using two channels of DDR2 instead of four channels of DDR3 etc., which is the whole reason that sockets change in the
Re: (Score:2)
Agreed, it doesn't make much sense to keep the same socket for many years. This is one of many reasons why the above "zomg, intel changes sockets all the time" mantra is bollocks.
Intel releases a chip on one socket, they then do a die shrink on the same socket. They then move to a new architecture with a new socket. This is not very much different to what AMD is doing, and gets you the same guarantee of being able to upgrade your CPU in a year or two if you really want to on your now-a-bit-out-of-date bo
Re: (Score:2)
Don't forget that those cheaper AMD motherboards also usually include a decent (not great) video card that can run complete circles around any embedded Intel card (for cheap PCs that you might want to play some games on).
Re:Sandy Bridge-E (Score:5, Interesting)
Talk to a server farm when Intel is putting money in the development of Coreboot and we'll talk.
For now AMD is superior, because a server reboot requires about 1/100th of the time that it takes an Intel CPU farm to get back up due to horrible BIOSes. The more motherboards you have, the longer it takes due to serialized bootup. Ouch... Massive ouch...
Downtime versus marginal CPU speed... And fewer cores...
Re: (Score:2)
Any particular boards you recommend for Coreboot server work? I've transitioned to all AMD over the past 9 months, but I still suffer with legacy BIOS.
Re: (Score:2)
http://www.coreboot.org/Supported_Motherboards#Servers [coreboot.org]
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
No, he said his entire cluster has to be booted one at a time. So AMD is better because it uses coreboot rather than UEFI.
If you have an environment that requires that everything goes down together and then comes back up one at a time in order, you've got bigger problems than what code executes at poweron.
The situation described is highly bizarre and would likely benefit from someone with some application architectural expertise.
Re: (Score:2)
You can start working for Microsoft. I heard they were looking for good PR in case Windows 8 throws a Vista...
Re: (Score:3)
You obviously don't run Windows, or at least don't install the patches. ;)
Re: (Score:2)
I'm trying to tell you two things:
1. CPUs statistically mean shit when it comes to uptime; it's the OS. And Windows and Mac OS X and every other OS on the planet crash every now and then;
2. You can spin shit around to make it sound like it's awesome, while it is not.
So you'd make a great PR guy, because you can make shit believable while it's not.
Re: (Score:2)
Yeh... Because AMD's $200 X6 1100T isn't already beaten by intel's $190 i5 2400... oh wait, yes it is.
Re: (Score:3)
On what applications? Post benchmarks...
Re: (Score:2)
http://www.anandtech.com/bench/Product/203?vs=363 [anandtech.com]
Let's see here, according to this:
--All Sysmark tests
--Photoshop CS4
--DivX encoding
--x264's first pass
--Media Encoder 9
--3dsmax r9
--Cinebench single and multi-threaded
--Blender
--Every single game tested
Not what one would call a trivial list. Oh and it uses less power while doing it.
Re: (Score:2)
Here's a much more complete set of benchmarks that don't rely purely on synthetics – oh look, the i5 wins everywhere except for h264 encoding, which the i5 has a hardware unit for.
http://www.anandtech.com/bench/Product/203?vs=363 [anandtech.com]
Re: (Score:2)
We need AMD alive (I hope I don't have to explain why). But I'm not going to buy inferior CPUs. So someone _else_ has to buy them.
So you should stop going around trying to convince the fanboys that AMD is inferior.
Re: (Score:2)
Even at 6 threads, the i5 gives the X6 a very very tough time http://www.anandtech.com/bench/Product/203?vs=363 [anandtech.com]. More very-multithreaded benchmarks go the i5's way than the X6's, and they're all extremely close.
Conclusion... for 6 threads and more they're equally fast, for less than 6 threads the i5 beats it silly. That's the i5 being a faster CPU in my book.
Re: (Score:2)
The i5 isn't overclockable at all, but its barely more expensive bigger brother the 2500k has *way* more overclocking headroom than the Phenom. They go happily to 4GHz on the stock cooler, and 4.5GHz on better air coolers.
Re: (Score:2)
You can't overclock an 1100t to run ALL 6 cores at high speed at once without water cooling and without it drawing such an extreme amount of power that it doesn't make sense to do it in the first place. I've done this myself.
You can overclock an Intel i7 to almost 5 GHz with water cooling and it will use half the power of an overclocked phenom in a similar configuration.
There's no comparison.
The only place where the phenom x 6 might win running all 6 cores is with floating point, because the 4 core x 2 hyp
Re: (Score:2)
You're missing the point, though. The i7 is not a $1000 cpu. Intel differentiates their chipsets and locks the multiplier on many of them BECAUSE THEY HAVE THE PRICING POWER TO DO SO. This is why their lowest-end cpus don't have virtualization and why their highest-end consumer cpus don't have ECC, and why they have *three* different lines of SandyBridge consumer cpus (i3, i5, i7) instead of one.
In any case, the i5 beats the crap out of AMD's high-end offering too. Just because you can overclock an unloc
Re: (Score:2)
So in your universe motherboards are free?
List price of CPU and board.
Re: (Score:2)
okay then...
X6 1100T - $200
cheap AMD mobo - $50
mid ranged mobo that supports SATA 6000 and USB 3 - $100
i5 2400 - $190
cheap Intel mobo - $50
mid ranged mobo that supports SATA 6000 and USB 3 - $65
Re: (Score:2)
Re: (Score:2)
AMD Phenom II x6 1100T [newegg.com] for $189
So he lied about both the cost of the AMD CPU and the cost of the AMD motherboard.
What else did he lie about? I can't find a single SATA 6Gb + USB 3.0 motherboard for Sandy Bridge for under $89. [newegg.com] Hard to prove that he lied here.. absence of evidence and all that, but we know that he is a liar so we cannot accept what he says....
As for you. We kn
Re: (Score:2)
Re: (Score:2)
Uh you're ignoring this fact: if those fanboys ignore the facts and continue buying AMD it certainly helps AMD more than if those fanboys started buying Intel instead.
I don't plan to buy an AMD CPU for my future machine (unless AMD really pulls something amazing out of the bag - yeah right), but someone has to. If it's AMD fanboys doing the buying, what's your big problem with that?
My concern is if AMD goes poof, Intel might not be so focused on making CPUs faster.
Intel are already doing stuff like this: ht [slashdot.org]
Re: (Score:2)
So, where are the USB3 ports on it?
and amd has more MB choice with more pci-e lanes (Score:2)
And AMD has more motherboard choice, with more PCIe lanes on some of the boards, while on Intel you have to take some of the x16 lanes for video or use the x4 DMI bus to fit in USB 3.0, SATA and other stuff.
Re: (Score:2)
Except that USB3 and SATA 6Gb/s are all on the chipset, so you're bullshitting. What are you, as a desktop user (note, server boards have QPI links and hence more PCIe bandwidth) actually going to do with that extra PCI bandwidth?
Re: (Score:2)
Furthermore, only one PCIe x16 2.0 is supported by this generation chipset.
The fact that you are not aware of how stunted the Sandy Bridge chipset is with regards to I/O, is telling... fanboy much?
Re: (Score:2)
Re: (Score:2)
But on the low-end boards you have to cut into the video PCIe lanes just to fit USB 3.0 in, and even if it moves to the chipset, the x4 link from CPU to chipset will have to carry network, sound, SATA, USB and x1 PCIe slots all at once; to get more PCIe lanes you have to get a high-end i7 CPU.
With AMD you can get a low-end to mid-range CPU and boards with lots of PCIe I/O; even the low-end chipsets have it.
On an Intel board, using an x4 CableCARD tuner eats up a lot of the PCIe I/O.
InterLAGOS CPU Family? (Score:2)
Great - My CPU will start generating kernel messages telling me it that it's the SON OF THE TECHNOLOGY MINISTER OF NIGERIA and that it has 25 GIGAFLOPS of PROCESSING that it needs help smuggling out from behind seven proxies and I can have 25% of it in return for overclocking the CPU by 10%.
Re: (Score:3)
It is a negative feedback loop for them - by failing to create competitive technology
You missed the bit about how they create competitive technology and never get any customers because intel has a huge monopoly.
Also, for cramming flops into U's the quad socket AMD 6100s beat out anything Intel has to offer in the same segment.
Re: (Score:2)
Re: (Score:2, Troll)
How is it a monopoly if (as purported) AMD chips are both better AND cheaper?
Well, Intel provided huge kickbacks provided that they didn't sell AMD processors. AMD lost billions when the Opteron was stomping all over the Pentium 4 on the desktop and in the datacentre. Intel got a big fine, but not big enough. Given the amounts involved, there is no way the whole thing wasn't a net benefit to Intel.
Decision makers don't give a crap about the name, they care about the value for the dollar. And Intel has that
Re: (Score:2)
A few years ago, AMD had intel by the balls. I mean, seriously they were the king in terms of price, performance AND efficiency. I'm talking about the Athlon and Opteron 64's, the clawhammer/sledgehammer. They had a brilliant chip design, they pushed 64bit computing and all Intel could offer was the Pentium 4, which struggled to go above 3Ghz and could heat your entire home.
Intel bet on pushing more and more Ghz while AMD decided to go down the efficiency route. Sure, they had to create a new way to name pr
Re: (Score:3)
The real problem for AMD is that they didn't expect Intel to turn it around so quickly with Conroe. They had an excellent design and they expected to continue extracting reasonable margins from it in order to fund Fusion development, but when Core 2 came around they lost that.
The thing is, Bulldozer is a great direction for them. It will not beat Intel's best at single-thread performance, but it isn't supposed to. What it's supposed to do is offer better performance per watt and per rack unit for common ser
Re: (Score:2)
What happened? Intel got their shit together, ... and gave massive bribes to vendors to prevent AMD getting any significant increase market share. This cost AMD enormous amounts of revenue and seriously hurt their ability to compete. This is something that Intel have been found guilty of in court. The damage, however has been done.
Re: (Score:2)
Except that all came about when AMD came around with the Athlon 64's, because people were asking why the likes of Dell refused to sell them.
Re: (Score:3)
Except that all came about when AMD came around with the Athlon 64's,
Well yes, and that really wrecked AMD's revenue. They were number one in terms of absolute performance and price/performance, but a distant number two in sales.
The massive market fixing done by Intel meant that AMD was unable to put nearly as much as they could have into developing future processors. Perhaps as a result (we will never know), Intel was able to catch up. It takes several years to develop a new microprocessor architecture.
That was also
Re: (Score:3)
Well yes, and that really wrecked AMD's revenue. They were number one in terms of absolute performance and price/performance, but a distant number two in sales.
It did more than that. It wrecked their entire financial structure.
For a brief period, AMD's market share was fab limited. So AMD built a huge new fab, acquiring several billion dollars in debt in the process. No big deal, given the possible revenue the extra capacity could supply. Fabs are expensive. It's part of the game.
Once the fab came online, though, AMD's market share barely moved. Their market share was now Intel-limited. This is bad, because while their fabs were formerly full and thus efficie
The price/performance ratio. (Score:4, Interesting)
Re: (Score:2)
The Athlon XP was more or less on par with the P4, P4A, and P4B. I bought into AMD's architecture at the time because it was a bit more bang for the buck, with excellent companion chipsets for gamers and enthusiasts (nForce and nForce 2). They couldn't stand up to the P4C's, so in comes the At
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
First to market is relevant, but not everything (Score:2)
If AMD can kick out a believable press release stating that they'll have the 8-way chips out in a reasonable amount of time, then they'll be fine.
What is the FPU performance of these things...? (Score:5, Interesting)
Does anyone know the FPU performance of these things?
So comparing a 16 "core" 'dozer to a 12 core magny-cours:
The number of parallel integer (and memory addressing) threads has gone up from 12 to 16.
The number of FPUs has dropped from 12 to 8.
The new FPUs are now twice as wide with the AVX instructions.
So, two threads share one wider FPU now. If it's hard keeping an FPU full, then this should make better use of the hardware. It seems that if your code does well for parallel, scalar FPU work already, then there may be a performance drop.
If you have trouble filling the FPU for scalar work, then this should give better utilisation of less hardware. There's a possible performance increase if your utilisation is currently under 67%. Since the two core units can feed the FPU independently, there is a little latency hiding now. This could help even if there are two completely independent processes using the FPU at the same time.
I suppose the reasoning is that there is often fine-grained parallelism to be had, and the problems of fine-grained parallelism and of keeping the FPU full are often independent. So AVX will improve performance there.
So, it seems that the peak FPU performance has increased in the ratio of 16/12.
The actual performance could be all over the place. It will be interesting to see.
The other thing is that these are now single chips with 8 bulldozer units on and 16ish cores. Perhaps AMD will go and make more MCMs like before, giving 32ish cores per socket :)
Re: (Score:2)
No.
Or at least nobody who can talk.
Next question?
Re: (Score:2)
Yep, that is the old problem of choosing processors that never really went away.
You simply can't know what you are buying before you go out and buy one of each to test. And, of course, if you are buying just one or two machines there is no reason at all to test anything.
You can read the specs and go with the processor that is more likely to fit your work, but any detail could change things.
It is all about the die size (Score:5, Informative)
Unfortunately, and possibly because of reports that AMD is struggling to clock its Bulldozer cores to speeds that are competitive with Intel's Core i7, there's no word of the 8-core desktop-targeted Zambezi CPU.
If you increase the clock on the CPU you have to cool it. Reducing the die reduces the amount of cooling that needs to be done. AMD is not able to shrink their die. Yet.
Re: (Score:2, Insightful)
Yeah, Intel's vast capital reserves mean they typically have a generational lead in process tech, and get the increased efficiency / decreased temps of a die shrink "for free". AMD have to out-innovate them just to produce an equivalent CPU, let alone a better one. Unfair, but that's how (near-monopolistic) business works I guess.
Re: (Score:2)
AMD could outsource production to those who have competitive process tech.
Maybe they could get Intel to make their chips.
Re: (Score:2)
AMD could outsource production to those who have competitive process tech.
Intel is the undisputed front-runner in process tech. The closest thing is Global Foundries, formerly AMD's own fabs, which it now outsources production to.
Re: (Score:3)
For the future, I think This [semiaccurate.com] is what we are going to see from AMD. An integrated, heterogeneous MPSoC architecture with open (or at least standardized) on chip interfaces that might allow a mix and match CPU Core, GPU, PIO and Memory. Sort of taki
Re: (Score:2)
It's more complicated than that. As you shrink die size, you have to fight all sorts of things that bleed current through paths it's not intended to go. As you increase frequency, you have to increase voltage to make it work. As you increase voltage, you increase the bleed-through. The better you are at fabbing a particular die size, the less bleed-through you have and the more you can crank up the voltage and frequency. That's what they were talking about when they discuss AMD's problems getting their
Re: (Score:2)
Not necessarily. The smaller die usually means shorter paths through the chip (less resistance so less heat). Also, you can generally get higher speeds with less voltage (within reason). Less voltage generates less heat. And that's the whole point to smaller die sizes. Faster speeds, less power draw, lower heat generation.
Re: (Score:2)
Not necessarily. The smaller die usually means shorter paths through the chip (less resistance so less heat). Also, you can generally get higher speeds with less voltage (within reason). Less voltage generates less heat. And that's the whole point to smaller die sizes. Faster speeds, less power draw, lower heat generation.
That hasn't been true for a while. Nowadays, reduced sizes mean that the heat generation doesn't go down because the gates and chip features are small enough that voltage leaks through and generates waste heat. Intel and AMD have done a lot to avoid or mitigate this through better materials and different process technologies. However, it's still there since the TDP for processors have basically been stagnant for a while.
Re: (Score:2)
TDP has remained constant but actual throughput and power has increased dramatically. The 6 core I'm running now has the same TDP as the dual core I bought years ago. I guarantee my 6 core runs circles around the dual core. So I think the rule still holds true.
Re: (Score:2)
However, it's still there since the TDP for processors have basically been stagnant for a while.
Except the new-generation i5 have very low power consumption for their performance. Leakage current used to give you a CPU that could fry eggs when idling, whereas the i5s use very little power when idle these days and (unless overclocked) only seem to peak at around 50-60W under full load.
That's actual measurements, not TDP.
Is high performance really an issue? (Score:3)
I've never really considered AMD the manufacturer to look towards when looking for high-performance stuff. In my mind, at least, they're the "dirt cheap and good enough" side - I bought a triple-core Phenom for about the price of a low-end Core 2 Duo a year or two back. They've always had the best performance per dollar. Sometimes, yeah, they did even have the best absolute performance, but Intel's back in the lead again.
High performance just isn't a very profitable market segment. Gamers and high-end servers will buy it, but that's not where the big market is. The big market is desktops and laptops - stuff where a 4GHz sextuple-core processor is overkill. A business machine will work fine with half that - and with AMD's price advantage, they've been moving in on business and desktops. Supercomputers might also be enough to sustain the company - they buy by the thousands, and AMD's power efficiency and multi-core design has usually been attractive to the few in that business. There, performance per core isn't nearly as important as cores per watt.
That said, I'm not surprised that AMD is (supposedly) having issues meeting their targeted clock rates. Pre-release info pegged the top desktop processor at 4.2GHz - a record for an x86 processor. The last to get close to that was the last few Pentium IV HTs at 3.8GHz. AMD's top processor to date only reached 3.7GHz (Phenom II X4 980BE), and that was after years of refining their process. AMD set their sights too high, and is having problems for it.
Re: (Score:2)
However I would be interested in having a CPU with multiple cores, where some of the cores could have different architecture, to guarantee real time processing.
I don't think CPU architecture has much to do with that.
Wouldn't it be interesting to be able to have one CPU with normal applications running, but also with an app or part of apps running in the same CPU that could do real time processing - guarantee real time response with a simpler OS running on that core?
Just get yourself an RTOS. These days a s
Re: (Score:2)
You don't think there is an issue with the common 86 architecture for real time processing?
Well, not per se, but there can be a lot of strangenesses of particular implementations. Like the SMBus horror, for instance.
I put together an Atmel based 3d printer controller back in 2003. If the bus for that controller was shared with another processor somehow, then there would be conflicts created in resource sharing
I guess that's the problem with shared busses, though probably not limited to x86. Modern x86 mul