Dell Set to Introduce AMD's Triple-core Phenom CPU 286
An anonymous reader writes "AMD is set to launch what is considered its most important product against Intel's Core 2 Duo processors next week. TG Daily reports that the triple-core Phenoms — quad-core CPUs with one disabled core — will be launching on February 19. Oddly enough, the first company expected to announce systems with triple-core Phenoms will be Dell. Yes, that is the same company that was rumored to be dropping AMD just a few weeks ago. Now we are waiting for the hardware review sites to tell us whether three cores are actually better than two in real world applications and not just in marketing."
Yield, effectiveness (Score:5, Informative)
I am sure some units will make it through the process with a fourth core functional enough to be useful to "overclockers", but I think the majority will have actual defects. That is, unless there is no quad-core version of this processor for the fully working dies to be sold as?
One concern... How do they keep thermal load even if 1/4 of the die is not running?
Re:You know what would be even better? (Score:5, Informative)
Re:You know what would be even better? (Score:5, Informative)
Re:Yield, effectiveness (Score:5, Informative)
No (Score:5, Informative)
At work we have purchased a dual processor system with a quad core CPU in each that runs Vista. All 8 cores show up and are usable by software.
Re:I've been away from IT for very long (Score:5, Informative)
There's no need to wait for any reviews (Score:5, Informative)
For most things, no, 3 cores aren't really going to be much benefit at this point. While there are now multithreaded games out there that make use of 2 cores pretty well, they don't really scale past that yet. I imagine that will change as quad-core processors get more common, but it hasn't yet. As for desktop apps, well, they don't tend to use much CPU, so it won't help much. I suppose it might improve responsiveness a tiny bit in some cases, but I doubt it.
However, for some professional apps it can help. Cakewalk's Sonar makes use of multiple processors quite handily: every effect plugin and every instrument runs as a separate thread, so it can easily use a large number of cores. I've seen it run on a quad-core system and it distributes load quite well across them. I don't imagine anything would be different with 3 cores; it would just have one less to use.
Re:You know what would be even better? (Score:5, Informative)
With a quad-core system, each core can't directly talk to the core diagonal to it, which slows things down.
Re:You know what would be even better? (Score:5, Informative)
Happens all the time in graphics cards. The main difference between different model numbers in the same line is the number of pipelines on the GPU. Top end cards have them all enabled, lower models progressively less. Often the lower end cards will have working pipelines disabled.
Re:You know what would be even better? (Score:5, Informative)
Depends (Score:3, Informative)
In general, most modern OSes do a pretty good job of moving things around. It isn't necessarily an app-per-core situation, since many apps don't use much CPU and thus can all run on a single core, while a single multithreaded app may run on multiple cores at the same time. In general, the OS will move things around to give all threads as much CPU time as they want, while trying to keep CPU left over for new tasks.
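A minimal sketch of that behaviour in Python (the worker function and workload size here are invented purely for illustration): the application just spawns one CPU-bound task per core, and it is the OS scheduler that decides which core actually runs each one.

```python
# Hypothetical sketch: hand the OS one CPU-bound worker per core and
# let the scheduler distribute them. busy_sum and the workload size
# are illustrative, not from any real benchmark.
import multiprocessing as mp

def busy_sum(n):
    # A CPU-bound task; the scheduler is free to place it on any core.
    return sum(i * i for i in range(n))

def run_on_all_cores(n=100_000):
    cores = mp.cpu_count()
    with mp.Pool(processes=cores) as pool:
        # One task per core; the OS, not this code, picks the placement.
        return pool.map(busy_sum, [n] * cores)

if __name__ == "__main__":
    results = run_on_all_cores()
    print(len(results), "tasks finished")
```

Note the code never pins anything to a core; as the comment says, that mapping is the operating system's job.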
Re:The advantage of dual-core... (Score:5, Informative)
In general I'd agree with you, but I've found that a quad-core (which is actually pretty cheap these days) is much better than a dual-core if you watch HD video. h264 at 1080p is pretty taxing on the processor, and on a C2D you generally can't have anything in the background or you'll drop frames. A quad-core means you can run one or two other processor-intensive tasks (usually as you said, video encoding/backup/compilation type stuff) and don't have to pause them when you want to watch video. Also, it's very helpful if you use Mathematica a lot for large computations.
Re:You know what would be even better? (Score:2, Informative)
I don't imagine they'll allow it (Score:3, Informative)
1) Reduces complaints. You'd get people who would enable a defective core and then bitch that their system didn't work, especially since it could be somewhat random when failures happened.
2) Allow them to have a cheaper part. Yields may improve to the point that there are few defective cores, however there may still be demand for the cheaper part. Thus disabling 1 core allows them to continue selling both.
Multicore cpus and threaded games and applications (Score:2, Informative)
Many of the newest Operating Systems, applications, and games are multi-threaded. Multiple cpu cores just allow modern systems to take advantage of them, when available.
I have a dual quad-core computer that dual-boots Windows Vista Ultimate 64-bit and Fedora 8 Linux 64-bit. Many programs do take advantage of this system, including modern PC games such as Crysis and Unreal Tournament 3. UT3 does use all 8 CPU cores during parts of the game.
So, even though multiple cores are not necessary, I find they help in many ways and in many programs. The system seems to perform very smoothly.
Re:I don't imagine they'll allow it (Score:3, Informative)
God, Dell is NOT dropping AMD (Score:4, Informative)
My personal opinion is that they still need to be fleshed out, though. I am not sure why, but all the AMD systems we have only accept unbuffered DDR2, and they have issues with very large amounts of RAM (more than 64 GB). I will admit, however, that they use a lot less power and are much quieter.
Obligatory Onion Article (Score:5, Informative)
We're doing five cores (Score:4, Informative)
For reference, see The Onion [theonion.com] piece "... We're doing five blades" [theonion.com]. (Rough language; if you're at a school it's maybe NSFW.) From February 2004. For the record, the Gillette Fusion, with five blades and two lubricating strips, was introduced in early 2006 [cnn.com].
Hilarious though:
I'm a big AMD fan but three cores are barely better than two. Buy it anyway - AMD needs to live if the computer market is to be bearable at all in ten years. Via makes some interesting stuff too - and they're not afraid to cut the watts and make them small. You can do some very neat stuff [viaarena.com] with a low watt CPU on a small board.
It doesn't take a great deal of insight to see that we're going to 8 cores per processor on the desktop sometime in the next few years. Dual 16-core processors will happen within ten if competition keeps the pressure up. Personally, I don't care if every core is on a separate slab of silicon, as long as they integrate well in the package. Yields are better that way, I imagine. Somebody tell them to get the watts down: generating electricity [intelligen...rprise.com] still mostly means CO2 emissions [doe.gov].
Re:You know what would be even better? (Score:5, Informative)
Moreover, even if a certain program, running on a 4-core system, spawns 4 processes or threads, you still cannot claim that the program "handles 4 cores". It is up to the operating system to manage the system's resources, including where and how a process is run. It might even run all 4 processes or threads on the same core.
Another silly thing you imply, which is clearly wrong, is that a user can only take advantage of multiple cores by running applications that spawn as many processes or threads as there are cores. That is just plain wrong. The operating system manages the execution of all the system's processes and threads, which means it distributes them across all the available cores. So if you run 4 separate single-threaded applications on a decent operating system on a 4-core system, the OS may end up executing those 4 applications on 4 separate cores. And since any desktop computer is running more than 20 different processes (single- or multi-threaded) at any given time, the advantage of having more cores in your system is rather obvious.
But hey, don't let logic and concrete knowledge on the issue get in the way of your judgement.
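The distribution argument above can be shown with a toy model in Python. This is only a sketch of the idea: a real scheduler also balances load, honours affinity and priorities, and migrates tasks dynamically, none of which is modelled here.

```python
# Toy model of the point above: the scheduler, not the application,
# maps runnable processes onto cores. Here 20 single-threaded
# "processes" are dealt out round-robin over 4 cores.
def assign_round_robin(process_ids, num_cores):
    cores = {c: [] for c in range(num_cores)}
    for i, pid in enumerate(process_ids):
        cores[i % num_cores].append(pid)
    return cores

placement = assign_round_robin(list(range(20)), 4)
# Every core ends up with runnable work, even though each
# individual "process" is single-threaded.
print({core: len(pids) for core, pids in placement.items()})
# -> {0: 5, 1: 5, 2: 5, 3: 5}
```

Even this crude model shows why a desktop with 20+ processes keeps extra cores busy without any single application being multithreaded.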
Re:Un No. Well, Not even close... (Score:4, Informative)
Re:You know what would be even better? (Score:4, Informative)
His claim that threads are useful in powers of two is of course complete junk, since threads are usually used one at a time for specific tasks (data acquisition thread, rendering thread, etc.), or in groups (maybe of run-time configurable size) to provide thread pools for specific tasks, e.g. server threads.
Let's not forget also that the OS itself will be competing with whatever application(s) you are running for the CPU, so even a single single-threaded program will benefit from a multi-core CPU by not having to compete with the OS as much for CPU time.
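The pattern described above, dedicated threads plus a pool whose size is a run-time configuration choice rather than a power of two, looks like this in Python's standard library (`handle_request` is a hypothetical stand-in for real per-request work):

```python
# Sketch of a thread pool whose size is configuration, not a power
# of two. handle_request is an invented placeholder for real work.
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # Stand-in for real per-request work (parsing, I/O, ...).
    return f"handled {req_id}"

# Three workers is a perfectly valid pool size -- nothing about
# threading requires 2, 4, or 8.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(handle_request, range(6)))

print(results)
```

The pool size would typically come from a config file or the core count, which on a triple-core machine is simply 3.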
Re:You know what would be even better? (Score:4, Informative)
Or perhaps you're just not comprehending the semantics here. It was purposeful disabling, to work around a defective core (or maybe they're just having thermal problems, for all we know). The cores don't disable themselves; the core was disabled to deal with a defect.
It's not any more misleading than telling you that one Cell SPE is disabled on every PS3.
Re:Really? (Score:3, Informative)
There's also the fact that clockspeed isn't the only metric - an AMD chip at the same clockspeed as an Intel one may actually be slower overall (or faster at some things and slower at others). This is because what you're interested in is work/second, not clocks/second. Assuming you get the same amount of work done per assembly instruction (since it's all x86 with only minor differing extensions, that's not an outlandish assumption), instructions/clock is a crucial metric. Because of various factors, Core2 Duos can do more instructions per clock than Phenoms. Previously, Athlons were beating Pentium4s at instruction/clock. So clockspeed isn't the only metric, and in fact isn't the most crucial one.
Additionally, most CPUs have only one clock and one voltage setter. So either the entire chip runs at 2.6 GHz, or the entire chip runs at 2.0 GHz. You can't mix and match them currently. Because you need a stable processor, you're only as strong as your weakest link - if one core can only hit 2.0 GHz at a set voltage, you have to make the entire processor 2.0 GHz. If disabling that core lets you hit 2.6 GHz with the 3 "healthy" cores, that may be a more attractive option, depending on the workload. Because a lot of software isn't multithreaded, 3 faster cores are sometimes superior to 4 slower ones. Heck, a 3.2 GHz dual-core is sometimes better than a 2.4 GHz quad-core (for some limited workloads).
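A back-of-the-envelope version of the two paragraphs above: useful work per second is roughly IPC times clock, and total throughput also scales with core count. The IPC figures below are made-up illustrative numbers, not measured values for any real chip.

```python
# Rough throughput model: cores * GHz * instructions-per-clock.
# All numbers here are invented for illustration.
def throughput(cores, ghz, ipc):
    # Billions of instructions per second across all cores.
    return cores * ghz * ipc

quad_slow = throughput(4, 2.0, 1.0)   # 4 cores at 2.0 GHz
tri_fast  = throughput(3, 2.6, 1.0)   # 3 cores at 2.6 GHz

# On a perfectly parallel workload the quad still edges it out
# (8.0 vs ~7.8 units)...
print(quad_slow > tri_fast)  # -> True

# ...but a single-threaded program sees only one core, where
# 2.6 GHz beats 2.0 GHz outright.
print(throughput(1, 2.6, 1.0) > throughput(1, 2.0, 1.0))  # -> True
```

The same function with a higher `ipc` for one vendor also captures the Core 2 vs. Phenom point: equal clocks need not mean equal work per second.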
Processors aren't designed individually, they're made by the thousands. They start out as silicon wafers. Then they get put in a machine with a whole bunch of lasers and stuff I don't even pretend to understand, which etches a few dozen processors on the wafer. Because of a variety of factors (manufacturing process issue, stray pieces of dust, impurities in silicon, whatever), some cores wind up testing better than others. A processor which can meet the 2.6 GHz benchmarks gets sold as a 2.6 GHz chip. The chip next to it may fail the 2.6 GHz tests, but meet the 2.2 GHz benchmark, and so gets sold as a 2.2 GHz chip. If a dual-core chip has one busted core (some kind of massive defect in one core but not the other), it gets the bad core blasted off and lives life as a single core chip. If a chip has an issue with some of its cache, then it gets half the cache disabled and is sold as a Celeron.
It's not a hassle to manufacture this extra stuff, whether it's cache or cores. It's actually more of a pain in the butt to completely re-tool all the machines to make a pure triple-core. If you look at the economics of it (and I've only done that from the homework standpoint), most of the cost is the fixed cost of buying the machines and setting them up just right. After that, the goal is to get as much out of the chips you manufacture as possible. The choice you're making is between selling a chip with features disabled for a lower price, or tossing it in the trash.
Each chip has 4 cores, but with the slower core enabled, the chip can only hit 2.0 GHz. Without having to deal with the slow core, the other 3 can run faster (at 2.6 GHz). Obviously, AMD would prefer to sell the chip as a quad-2.6, but they can't. They can sell it at the speed it can hit with 4 cores (2.0 GHz), the speed it can hit with 3 cores (2.6 GHz) by disabling a core, or throw it out as defective.
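The binning decision described above can be sketched as a simple function. The speed each die can hit with 4 or 3 cores would come from post-manufacture testing; the thresholds here, and the preference for a fast tri-core over a slow quad, are invented assumptions for illustration.

```python
# Hypothetical binning rule: sell each die at the best configuration
# its tested speeds support. Thresholds and the tri-over-slow-quad
# preference are assumptions, not AMD's actual policy.
def bin_chip(quad_speed_ghz, tri_speed_ghz):
    if quad_speed_ghz >= 2.6:
        return "quad @ 2.6"
    if tri_speed_ghz >= 2.6:
        # Disable the weak core so the remaining 3 can clock up.
        return "tri @ 2.6"
    if quad_speed_ghz >= 2.0:
        return "quad @ 2.0"
    return "scrap"

# The case from the comment: 4 cores only reach 2.0 GHz, but 3 of
# them can hit 2.6 GHz.
print(bin_chip(2.0, 2.6))  # -> tri @ 2.6
```

Whichever bin wins, the point stands: the alternative to a disabled-core SKU is the trash bin, not a cheaper-to-make triple-core die.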
Re:Un No. Well, Not even close... (Score:2, Informative)
You are living in the past on that quote.
AMD used a per-core cache on older designs. On the new design they use both a per-core cache AND a shared cache. So on a quad-core with 512 KB per core and 2 MB shared, the total cache for a chip with one core disabled, relative to the full chip, is ((512×3) + 2048) / ((512×4) + 2048) = 3584/4096 = 7/8. So instead of disabling 1/4 of the cache they are disabling 1/8; and because they disable 1/4 of the cores while disabling only 1/8 of the cache, it actually helps the cache-per-core ratio instead of hurting it.
Tri-core Phenoms get about 1195 KB of L2/L3 cache per core in that example; quad-core Phenoms get 1024 KB. So the tri-core gets a roughly 16% larger per-core cache by that logic.
Besides, that math is too simplistic, because it only considers L2 and L3 cache. Each core also has 128 KB of L1, but it seems in vogue to ignore it; it makes the math simpler. This matters especially at 45 nm and below, where the L3 cache gets bumped up every time the process improves. 6 MB of L3 plus 512 KB of L2 per core on a tri-core vs. a quad-core gives you a 15/16 total-cache ratio instead of the naive 3/4. The bigger the L3, the bigger the advantage for the tri-core.
At 2560 KB vs. 2048 KB per core in the 6 MB L3 scenario, the 16% per-core advantage grows to a 25% advantage at that node.
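The cache arithmetic above, worked through in one place (all sizes in KB; the per-core L2 and shared L3 figures are the ones used in the comment):

```python
# Per-core share of cache: each core keeps its private L2 and an
# equal slice of the shared L3. Sizes in KB.
def cache_per_core(cores, l2_per_core_kb, shared_l3_kb):
    return (cores * l2_per_core_kb + shared_l3_kb) / cores

# 512 KB L2 per core, 2 MB shared L3 (the Phenom example):
tri  = cache_per_core(3, 512, 2048)   # ~1195 KB per core
quad = cache_per_core(4, 512, 2048)   # 1024 KB per core
print(round(tri), quad)               # ~16% advantage for the tri-core

# With a 6 MB L3 (the 45 nm scenario): 2560 vs 2048 KB per core.
tri6  = cache_per_core(3, 512, 6144)
quad6 = cache_per_core(4, 512, 6144)
print(tri6, quad6, tri6 / quad6 - 1)  # advantage grows to 25%
```

Totals check out too: (3×512 + 6144) / (4×512 + 6144) = 7680/8192 = 15/16, matching the ratio quoted above.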