Dell Set to Introduce AMD's Triple-core Phenom CPU 286
An anonymous reader writes "AMD is set to launch what is considered its most important product against Intel's Core 2 Duo processors next week. TG Daily reports that the triple-core Phenoms — quad-core CPUs with one disabled core — will be launching on February 19. Oddly enough, the first company expected to announce systems with triple-core Phenoms will be Dell. Yes, that is the same company that was rumored to be dropping AMD just a few weeks ago. Now we are waiting for the hardware review sites to tell us whether three cores are actually better than two in real-world applications and not just in marketing."
Re:You know what would be even better? (Score:2, Interesting)
Re:You know what would be even better? (Score:1, Interesting)
With a quad-core system, each core can't talk directly to the core diagonal to it, which slows things down.
Three-core systems can communicate among all their cores without that bottleneck, so they'd be faster than both dual-core and quad-core parts.
Re:You know what would be even better? (Score:2, Interesting)
A native triple-core would have equal spacing between the three cores such that any core could talk to any other core without having to go through a middleman.
Re:Yield, effectiveness (Score:5, Interesting)
If one is disabled, it would cycle 1,2,4,1,2,4 (assuming #3 is the bad one).
Moreover, if one of the cores isn't running and you have a cooling system designed for four cores, it really doesn't matter: if it can handle four full-tilt cores, it can handle three. The disabled core's zero heat production is a bigger benefit than a slightly uneven heat distribution. If the die is a suitable thermal medium, the heat will spread through it pretty well even if it's only being produced on one edge. Think of an electric stove burner -- it only has heat applied at one end, but the opposite end heats up pretty well. Obviously it's not perfect, but it doesn't need to be.
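The 1, 2, 4 cycling from the post above can be sketched as a toy round-robin dispatch. To be clear, `run_queue` is a made-up helper, not how a real OS scheduler works -- it just shows why a fused-off core simply drops out of the rotation:

```python
from itertools import cycle

def run_queue(enabled_cores, tasks):
    """Toy round-robin dispatch: hand each task to the next enabled core.
    A disabled core never appears in the set, so the rotation skips it."""
    order = cycle(sorted(enabled_cores))
    return [(task, next(order)) for task in tasks]

# Core 3 fused off: dispatch cycles 1, 2, 4, 1, 2, 4
print(run_queue({1, 2, 4}, ["a", "b", "c", "d", "e", "f"]))
```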
The advantage of dual-core... (Score:5, Interesting)
Dual-core means that for most cases, I can run a video encode, a backup/compression process, a long-ish compilation (of the sort that doesn't like 'make -j2'), etc -- not so much all at once, as I can fire off any background process and not worry about it, as I have a whole other core to use. It's shameful -- Amarok will occasionally use 100% of one core, and I won't notice for hours.
Having more than two cores wouldn't benefit me a lot right now. I wouldn't mind it, certainly -- I've been playing a bit with things like Erlang, which should be able to scale arbitrarily -- but I think real applications are only just catching on to the idea that threading is a good thing. I imagine it's still going to be a while until a quad-core machine is useful for anything other than, say, running virtual machines, since most programming languages don't make threading easy. (Locks and semaphores are almost as bad as manual memory management.)
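For what it's worth, higher-level libraries are starting to hide the locks entirely. A minimal Python sketch, purely illustrative (for CPU-bound work you'd want processes rather than threads, but the shape is the same -- no explicit locks or semaphores in user code):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

def parallel_map(fn, values):
    """Fan work out across however many cores the machine reports --
    two, three, or four -- without writing any locking code yourself."""
    workers = os.cpu_count() or 2
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order regardless of completion order.
        return list(pool.map(fn, values))

print(parallel_map(square, range(8)))
```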
While I'm playing crystal ball, I'll predict that the first application of multicore will be things which were already running on multiple machines in the first place -- video rendering, for instance. Not encoding, rendering.
The second application for it will be gaming. This will take longer, and will only be the larger, higher-quality engines, who will simply throw manpower at the problem of squeezing the most out of whatever hardware is available.
I suspect that the old pattern will be very much in effect, though -- wherein gamers will buy a three-core system and unlock the fourth one (if possible), then use maybe one core, probably half of one, with the video card still being the most important purchase. If there's a perceptible improvement, it'll be because their spyware, IM, torrents, leftover Firefox with 20 MySpace pages and flash ads, etc, won't be able to quite fill the other three cores.
I'd like to add that for most people, including me, one core is plenty if you know how to manage your processes properly -- set priorities, kill Amarok when it gets stuck in that infinite loop, and get off my lawn!
Re:Yield, effectiveness (Score:2, Interesting)
I haven't really looked at Phenom's design, but I highly doubt that it rotates between cores while running. You can't transfer the contents of registers and what's in the pipeline between cores in any sort of efficient manner (unless there is something about the Phenom I don't know about).
Why would the thermal design even matter that much? It'd be equivalent to having hotspots on the motherboard (though nowhere near as dramatic; the die is tiny and conducts heat very well). A heatsink that can handle four loaded cores will easily handle three active cores and one idle one.
Re:You know what would be even better? (Score:2, Interesting)
Then again, I haven't been following CPU product lines in the past few months, so I could be mistaken.
In the end, this CPU will enable AMD to yield more CPUs and actually turn a profit, but it won't be on the market long once AMD perfects the process and yields working quad-core chips most of the time.
software compatibility? (Score:4, Interesting)
I'm guessing there's a lot of code out there that assumes a power-of-two number of cores. A program might run fine with 1, 2, 4, 8, or 16 cores, but with an odd count I wouldn't be surprised if several applications just refused to run. It will be interesting to see what kind of compatibility testing AMD has done with this new processor.
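In fairness, code that divides work correctly doesn't care whether the core count is a power of two. A hypothetical sketch (`split_work` is made up for illustration) that partitions a job evenly across any number of cores, three included:

```python
def split_work(items, n_cores):
    """Divide items into n_cores contiguous chunks whose sizes differ
    by at most one -- works for 3 cores just as well as 2 or 4."""
    if n_cores < 1:
        raise ValueError("need at least one core")
    size, extra = divmod(len(items), n_cores)
    chunks, start = [], 0
    for i in range(n_cores):
        # The first 'extra' chunks absorb one leftover item each.
        end = start + size + (1 if i < extra else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

# Ten work items on three cores: chunks of 4, 3, and 3.
print(split_work(list(range(10)), 3))
```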
In the end, though, this just seems like another last-ditch attempt by AMD to marginally compete with Intel at the low end. Intel says it has no need for three-core chips since its yields are so much higher.
Re:The advantage of dual-core... (Score:2, Interesting)
Dual cores are easy to keep busy: run anything somewhat demanding on one and use the other for everyday tasks. Any kind of encoding will happily use both.
Most quad cores (like the Q6600), however, have four somewhat slower cores. That's often significantly slower for apps that can't make use of more than one core, and it's a lot harder to keep all four cores busy.
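That trade-off is basically Amdahl's law plus clock speed. A quick back-of-envelope sketch -- the clock figures below are hypothetical, not benchmarks of any real part:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

def effective_throughput(clock_ghz, parallel_fraction, cores):
    """Scale a single-thread clock by the Amdahl speedup (rough model)."""
    return clock_ghz * amdahl_speedup(parallel_fraction, cores)

# On a half-parallel workload, a fast dual core edges out a slower quad:
dual = effective_throughput(3.0, 0.5, 2)  # ~4.0
quad = effective_throughput(2.4, 0.5, 4)  # ~3.84
print(dual > quad)
```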
Anyway, I really don't know why Dell would bother with this. The Intel E8400 is faster than AMD's unreleased Phenom 9700 (which has more cores and will likely be clocked higher), and costs about $50 less than the even slower Phenom 9600! AMD just doesn't make anything I want to buy right now. I'm hoping to upgrade to an Intel Q9450 as soon as the price drops a bit (quad core, 45 nm, 12 MB cache, runs cooler and uses less power than the Q6600, overclocks a LOT better, has SSE4, 1333 MHz FSB, etc.). So far Phenom has had significant problems (requiring BIOS patches that make it 10% slower, most existing AM2 motherboards not supporting it, etc.), and it doesn't seem to overclock quite as well as Intel's latest either. I don't think the three-core idea will be popular either. You're essentially buying a partly defective chip, and most people don't like buying partly broken things (would you buy a car with a four-cylinder engine if only three cylinders worked?). The price would have to be quite low for me to even consider buying one.
AMD desperately NEEDS to come up with something better REAL soon.
There is a known problem with current Phenom... (Score:5, Interesting)
The first (and less relevant) problem is the TLB erratum. The second (and more relevant to this discussion) is that core #2 (out of [0,1,2,3]) yields lower than the other three. For example, on the same CPU die, cores [0,1,3] may work fine at 2.6 GHz while core [2] yields only at 2.0 GHz. This is a widespread problem, found out mostly through failed overclocking attempts.
Google it yourself and find out..
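The arithmetic behind binning such a die as a tri-core instead is simple; here's an illustrative sketch using the speeds from the post above (`chip_rating` is a made-up helper, and a real chip's rating involves far more than the slowest core):

```python
def chip_rating(core_limits_ghz, disabled=frozenset()):
    """Simplified model: the chip clocks at its slowest *enabled* core.
    Returns (rated clock, aggregate core-GHz) for the enabled cores."""
    enabled = [g for i, g in enumerate(core_limits_ghz) if i not in disabled]
    clock = min(enabled)
    return clock, len(enabled) * clock

limits = [2.6, 2.6, 2.0, 2.6]  # core 2 is the weak one

quad_clock, quad_total = chip_rating(limits)       # ships as a 2.0 GHz quad
tri_clock, tri_total = chip_rating(limits, {2})    # or a 2.6 GHz tri-core
print(quad_clock, tri_clock)
```

Fusing off the weak core trades a little aggregate throughput (7.8 vs 8.0 core-GHz in this model) for a much better per-core clock.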
Re:You know what would be even better? (Score:5, Interesting)
The distinguishing feature is often the number of tests done to certify the hardware; in some cases it's not a failure in a certain test, but that the test required for the higher spec was never run at all. The rumor about the Celeron mentioned above was that the chips had passed all the tests required for the Pentium II 450 spec, but there were a lot of them in storage and more Celeron 300s were needed -- so they got the "A and circle" symbol to distinguish them from the other Celeron 300s.
Re:You know what would be even better? (Score:3, Interesting)
486s with a working co-processor (floating-point unit) were sold as "DX" models; the ones where it was broken were sold as "SX".
Even better, it created a market for FPU co-processor upgrades, where one could install a co-processor alongside their 486SX later on.
Once production yields improved, this practice continued for a while to maintain a market for both "SX" and "DX" models, with the "SX" models having their FPU deliberately disabled. What on earth moved AMD and Intel not to simply sell the "DX" processors at a price point closer to the "SX" ones, I don't know.
The DRAM market has been much the same for even longer. The ZX Spectrum 48K model (sold by Timex in the U.S.) in fact had 80K of possible RAM on board. The first 8K were a sort of memory-swapping/paging bank, and the remaining 40K consisted of DRAM chips in which only half of each chip worked -- these were cheaper than even the half-size but fully working parts. Replace those DRAM chips with fully working full-size ones and you'd have a whopping 80K in your computer.
(This post outs me as a dinosaur fossil, doesn't it?)
Re:software compatibility? (Score:5, Interesting)
Re:You know what would be even better? (Score:2, Interesting)
It's all about economics and "perceived value", not technology.
Graphics cards too... (Score:3, Interesting)
Re:You know what would be even better? (Score:5, Interesting)
#1: You do test these chips before the saw step (chopping the wafer up into individual die).
#2: It's hard to predict speed/Vcc/temperature-sensitive yields at that stage, but you do test all the die and usually check for full functionality (as much as the test coverage allows).
#3: Once packaged, the chips are "binned" into functional fails, speed grades, etc., and are tested at temperature and Vcc limits for speed sorting. So you could have one core that fails at 30C with a high Vcc while the others are OK (this should be rare, since the cores all sit together on the wafer in close proximity and thus shouldn't vary much from each other).
#4: Nanoscopic defects occur and can take out one or two of the cores on a die. It's advantageous to bin those parts out as tri/dual cores.
#5: I'm 100% sure that if these become popular, there will be some chips that pass all tests fully but have one core disabled anyway. Happens all the time.
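Steps #1-#5 boil down to a simple sort at the end of the line. A simplified sketch (`bin_die` is made up for illustration; real binning also sorts by speed, voltage, and temperature, as noted above):

```python
def bin_die(core_pass):
    """Bin a tested four-core die by how many of its cores passed."""
    good = sum(core_pass)
    bins = {4: "quad-core", 3: "tri-core", 2: "dual-core"}
    return bins.get(good, "scrap")

# A nanoscopic defect took out core 2 -> the part sells as a tri-core.
print(bin_die([True, True, False, True]))
```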
JP