
Dell Set to Introduce AMD's Triple-core Phenom CPU

An anonymous reader writes "AMD is set to launch what is considered its most important product against Intel's Core 2 Duo processors next week. TG Daily reports that the triple-core Phenoms — quad-core CPUs with one disabled core — will be launching on February 19. Oddly enough, the first company expected to announce systems with triple-core Phenoms will be Dell. Yes, that is the same company that was rumored to be dropping AMD just a few weeks ago. Now we are waiting for the hardware review sites to tell us whether three cores are actually better than two in real-world applications, and not just in marketing."
  • by LaskoVortex ( 1153471 ) on Sunday February 17, 2008 @12:24AM (#22450618)
    Enable that other core!
    • Re: (Score:2, Interesting)

      by TI-8477 ( 1105165 )
      I don't understand why they disabled it in the first place. Anyone care to explain?
      • by Azh Nazg ( 826118 ) on Sunday February 17, 2008 @12:30AM (#22450646) Homepage
        It allows them to sell chips with one of the cores broken, thereby getting higher yields from their production lines.
        • by LaskoVortex ( 1153471 ) on Sunday February 17, 2008 @01:03AM (#22450818)
          Ah, yes. This makes great sense, but the announcement should have read "one of the cores defective", which would be more correct. The word disabled suggests purposeful disabling, which is misleading--but perhaps the announcement was a victim of marketing language chicanery.
          • by edwdig ( 47888 ) on Sunday February 17, 2008 @01:21AM (#22450912)
            If the demand for triple core processors is higher than the supply of quad core processors with one defective core, then AMD could disable a working core on the quad core chips to ensure supply.

            Happens all the time in graphics cards. The main difference between different model numbers in the same line is the number of pipelines on the GPU. Top-end cards have them all enabled, lower models progressively fewer. Often the lower-end cards will have working pipelines disabled.
            • by dbIII ( 701233 ) on Sunday February 17, 2008 @03:30AM (#22451546)
              Had that sort of thing with the Intel Celeron 300A (the one with the stargate-ish A symbol). There was not enough supply, so Intel was rebadging sweet 450MHz, symmetric-multiprocessor-capable Pentium II processors as the cheaper Celeron - just the thing for a two-socket board. It made it possible to have a fast two-CPU system for about the same price as a fast single-CPU system with Pentium II on the label.

              The distinguishing feature is often the number of tests done to certify the hardware, and in some cases it is not a failure in a certain test but that the test required for the higher spec was not run at all. The rumor with the Celeron mentioned above was that they were rebadged after passing all the tests required for the Pentium II 450 spec: there were a lot of them in storage and more Celeron 300s were required, so they got the "A in a circle" symbol to distinguish them from the other Celeron 300s.

            • Which raises the question: are there ways of enabling the extra cores in such devices?
              • Re: (Score:2, Insightful)

                by billcopc ( 196330 )
                Yes and no.

                If the cores aren't actually defective, then yes, AMD will make it relatively easy to unlock because that's what they were once famous for, with the Athlon XP.

                If the cores are crap, then most likely they will lock them down securely to avoid bad PR. Enthusiasts like you and me understand that there are no guarantees once you start tweaking, but we're not the problem. The problem is shady vendors that unlock/overclock to defraud the customer.

                Example: I just finished building a cheap machine for my
          • by Anonymous Coward on Sunday February 17, 2008 @01:27AM (#22450956)

            Ah, yes. This makes great sense, but the announcement should have read "one of the cores defective", which would be more correct. The word disabled suggests purposeful disabling, which is misleading--but perhaps the announcement was a victim of marketing language chicanery.
            So... They disable the defective one. How is this misleading? Other companies do it too -- HDD makers sell bigger HDDs as smaller ones when they fail QA testing, for example.

            Seriously, if the price difference is enough to make buying one of these "tricores" worth it, and more importantly, if these Dells allow me to throw in a "real" Phenom aftermarket (or even ship with the option to buy a true quad-core Phenom...) well, more power to them.

            Not only that, AMD seriously wins in this -- they sell these (likely Dell Precision Workstations and/or Dell XPSes) with their "tri" core CPUs, as well as -- I would wager -- their Quad Core CPUs as an upgrade, and they'll start to finally make some inroads with them. So far the impression I've gotten is that both Intel and AMD's quad core offerings have been kinda DOA with consumers (as opposed to the enterprise). But then again, I typically work with office workstations (Optiplex, PWS, etc).

            Ob-Full Disclosure: I work for Dell as a Prosupport Tech Support Agent.
          • Re: (Score:2, Informative)

            by frostband ( 970712 )
            IANAICFE (IC Fab. Expert) but I do know that in testing for functionality, they just test a small sampling from a batch to determine whether the whole batch is good or not. It's possible that they found that one batch had a bad core by their small sample which means that other chips in that batch of quadcores (that are now selling as 3 cores) possibly had 4 functioning cores. Anyway, to the semantics, one core is definitely disabled but not necessarily defective (yes, I know you said "more correct" sugges
            • Re: (Score:3, Insightful)

              by Jeff DeMaagd ( 2015 )
              I don't think sampling can necessarily tell whether a given batch will have a lot of chips with one defective core. I think they have to go further with testing. It sounds like the kind of defect that depends on something like a microscopic speck of dust falling onto the silicon, but in a fortunate enough place that you can just map out an entire CPU core.
              • Re: (Score:2, Insightful)

                by Hal_Porter ( 817932 )
                You could probably test the chips on the wafer before you chop it up. I can imagine supplying power to the wafer and looping JTAG [wikipedia.org] test lines through all the chips. Then some self-test would run in parallel on all chips and you'd know which chips were bad, which had one bad core, and so on. Actually, just testing the cache would be a good idea. Since most of the die area is cache, most of the dust-speck style defects should be found there.

                Of course a few chips might fail in other ways and you'd catch them after pa
                • by jp102235 ( 923963 ) on Sunday February 17, 2008 @10:12AM (#22453338)
                  ok, I am an IC test engineer:

                  #1: you do test these chips before the saw step (chopping the wafer up into individual die)
                  #2: it's hard to predict speed/vcc/temp-sensitive yields at that stage, but you do test all the die and usually check for full functionality (as much as the test coverage allows)
                  #3: once packaged, the chips are "binned" into functional fails, speed grades, etc., and are tested at temp and vcc limits for speed sorting. So you could have 1 core that fails at 30C with a high vcc, but the others are OK (this should be rare, since they all sit together on the wafer in close proximity and thus shouldn't vary much from each other)
                  #4: nanoscopic defects occur and could take out one or two of the die. It would be advantageous to bin these out as tri/dual cores.
                  #5: I am 100% sure that if these become popular, there will be some chips that pass all tests fully but have one core disabled. Happens all the time.

                  JP
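
                  To put rough numbers on the binning flow described above, here is a toy Monte Carlo sketch in Python; the per-core defect probability is invented purely for illustration, since real yield numbers are process-dependent and not public.

                    import random
                    from collections import Counter

                    # Toy model: each of the 4 cores on a die independently has a
                    # "killer" defect with some probability. Dies with exactly one
                    # bad core get salvaged as tri-cores instead of being scrapped.
                    P_CORE_BAD = 0.10   # invented for illustration; real numbers aren't public
                    N_DIES = 100_000

                    def bin_die(rng: random.Random) -> str:
                        bad = sum(rng.random() < P_CORE_BAD for _ in range(4))
                        if bad == 0:
                            return "quad-core"
                        if bad == 1:
                            return "tri-core"   # fuse off the bad core, sell the other three
                        return "scrap"

                    rng = random.Random(42)
                    bins = Counter(bin_die(rng) for _ in range(N_DIES))
                    for name in ("quad-core", "tri-core", "scrap"):
                        print(f"{name:>9}: {bins[name] / N_DIES:6.1%}")

                  With a 10% per-core defect rate, roughly 66% of dies bin as quad-cores and another 29% - previously scrap - become sellable tri-cores, which is exactly the incentive behind #4 and #5 above.
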
          • by Joce640k ( 829181 ) on Sunday February 17, 2008 @02:37AM (#22451286) Homepage
            You're sold a three core chip, it has three working cores.

            Which part of that is "defective", misleading, or unfit for purpose?

            How many dual core chips are really four core chips with two failed cores? Do you know? Face it, it's just the number three which bugs you, and that's pretty childish...

            • by r_jensen11 ( 598210 ) on Sunday February 17, 2008 @08:51AM (#22452820)

              How many dual core chips are really four core chips with two failed cores? Do you know? Face it, it's just the number three which bugs you, and that's pretty childish...
              The number 3 pisses off a lot of people. I like/tend to attribute it to Mr. Owl's amazing ability to consume Tootsie Roll Pops.
          • Re: (Score:3, Interesting)

            by Swoopy ( 101558 )
            This was happening as far back as the 486 days.
            486es with a working co-processor (floating point unit) were sold as "DX" models; the ones where it was broken were sold as "SX".
            Even better, it allowed a market for FPU co-processor upgrades, where one would install a co-processor alongside their 486SX later on.
            Once production yields improved, this practice was continued for a while, maintaining a market for both "SX" and "DX" models, where the "SX" models would have their FPU deliberately disabled. What on
            • Re: (Score:2, Interesting)

              by Peet42 ( 904274 )
              What about the original Athlons and Durons? The only difference between them was often a cut link on the top surface of the CPU that disabled most of the cache. Have a Google and you'll find lots of instructions on how to remake the link and turn a Duron into a fully functional Athlon.

              It's all about economics and "perceived value", not technology.
              • by Joce640k ( 829181 )
                I've hacked a couple of graphics cards by moving a resistor on the top of the chip. One was a GeForce and it came up afterwards as a "Quadro". The other was an ATI 9500 which came up afterwards as a 9700 (more shaders). Both cards worked perfectly for years.

          • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday February 17, 2008 @09:23AM (#22453016) Homepage Journal

            The word disabled suggests purposeful disabling, which is misleading--but perhaps the announcement was a victim of marketing language chicanery.

            Or perhaps you're just not comprehending the semantics here. It was purposeful disabling, to avoid problems with a defective core (or maybe they're just having thermal problems, for all we know). The cores don't disable themselves; they were disabled to deal with the problem of a defect.

            It's not any more misleading than telling you that one Cell SPE is disabled on every PS3.

      • by jericho4.0 ( 565125 ) on Sunday February 17, 2008 @12:39AM (#22450680)
        Chip yields. A significant number of the quad-core parts have a defect rendering one core useless. For the same reason, the Cell is specced with 8 SPEs, but the PS3 ships with 7.
    • Or: We could activate the other core only partially, let's say it runs at 14.159% capacity. This way, we'd have a π-Core!
    • Re: (Score:3, Funny)

      by ozbird ( 127571 )
      To paraphrase:

      Nigel Tufnel: The numbers all go to three. Look, right across the board, three, three, three and...
      Marty DiBergi: Oh, I see. And most cores go up to two?
      Nigel Tufnel: Exactly.
      Marty DiBergi: Does that mean it's better? Is it any better?
      Nigel Tufnel: Well, it's one better, isn't it? It's not two. You see, most blokes, you know, will be playing at two. You're on two here, all the way up, all the way up, all the way up, you're on two on your computer. Where can you go from there? Where?
      Marty

  • Yield, effectiveness (Score:5, Informative)

    by Sparr0 ( 451780 ) <sparr0@gmail.com> on Sunday February 17, 2008 @12:29AM (#22450642) Homepage Journal
    Making 3-core machines out of 4-core CPUs will do wonders for their yield. So many chips get trashed because of a single tiny failure; this will allow them to keep any chip with any number of failures, as long as those failures are limited to just one of the cores. It's the same sort of benefit Intel saw by using Pentiums with bad cache segments to make Celerons, or nVidia saw when disabling (supposedly) bad pipelines to turn 16-pipe GPUs into cheaper 12-pipe versions.

    I am sure some units will make it through the process with a functional-enough fourth core to be useful to "overclockers", but I think the majority will have actual problems. That is, unless there is no 4-working-core version of this processor for the known-working ones to be sold as?

    One concern... How do they keep thermal load even if 1/4 of the die is not running?
    • Re: (Score:3, Funny)

      by MiniMike ( 234881 )
      > One concern... How do they keep thermal load even if 1/4 of the die is not running?

      If running Windows, the OS will cycle through the cores so 3 are always running, and one is cooling. This will usually not cause a problem before the system crashes due to something else.

      For other OSes, I would think that the conductive layers over the non-functional core would still be working, and capable of distributing the heat evenly. Same problem as when 1 core is running full tilt and (1, 2, 3 for dual, triple,
      • by Sparr0 ( 451780 ) <sparr0@gmail.com> on Sunday February 17, 2008 @12:43AM (#22450702) Homepage Journal
        OK, perhaps I am mis-educated regarding this particular device, but I expect that one of the four cores will be defective on almost every Phenom CPU. That means cycling through them would not be an option.
        • by mr_matticus ( 928346 ) on Sunday February 17, 2008 @01:13AM (#22450872)
          Why?

          If one is disabled, it would cycle 1,2,4,1,2,4 (assuming #3 is the bad one).

          Moreover, if one of the cores isn't running, and you have a cooling system designed for four cores, it really doesn't matter. If it can handle four full-tilt cores, it can handle three. The zero heat production is a bigger benefit than a slightly uneven distribution. If it's truly a suitable medium, the heat generated will be spread throughout pretty well, even if the heat-production is only on one edge of the medium. Think of an electric stove burner--it only has heat applied at one end, but the opposite end heats up pretty well. Obviously it's not perfect, but it doesn't need to be.
        • Re: (Score:2, Funny)

          by Ibiwan ( 763664 )
          Maybe you missed where he specified this was only feasible in Windows... Who's gonna notice something trivial like a non-functioning CPU core a fourth of the time!?
      • Re: (Score:2, Interesting)

        by Iguanadon ( 1173453 )

        If running Windows, the OS will cycle through the cores so 3 are always running, and one is cooling. This will usually not cause a problem before the system crashes due to something else.

        I haven't really looked at the Phenom's design, but I highly doubt that it'll rotate between the cores while running. You can't really transfer the contents of the registers and what's in the pipeline between cores in any sort of efficient manner (unless there is something about the Phenom I don't know about).

        Why would the ther

    • One concern... How do they keep thermal load even if 1/4 of the die is not running?
      I was wondering the same thing and I imagine that they don't. I never heard anyone say it was a problem in dual core systems when Windows pegs 1 core and lets the other sit idle.

      Ultimately, that's what the heat spreader is for. Right?
    • It is getting more common for companies to physically disable the section of a chip that isn't supposed to be used. I'm not sure how it is done, but I imagine just burning the traces with a laser would work. I'm going to guess AMD will be doing this with their 3-core systems. It serves 2 purposes:

      1) Reduces complaints. You'd get people who would enable a defective core and then bitch that their system didn't work, especially since it could be somewhat random when failures happened.

      2) Allow them to have a ch
      • Re: (Score:3, Informative)

        by 91degrees ( 207121 )
        I believe they're designed with the idea of disabling a core afterwards, using fusible tracks. Apply a high voltage to the right pins and part of the chip breaks.
  • Licensing? (Score:5, Funny)

    by kermit1221 ( 75994 ) on Sunday February 17, 2008 @12:32AM (#22450650)
    So, does one have to purchase 1.5 Vista licenses?
    • by Pyrion ( 525584 )
      Ha ha, funny, but why would it matter? This is a single socket we're talking about, so unless Microsoft has changed their licensing to per-core as opposed to per-socket (and AFAIK, they haven't [microsoft.com]), this is a non-issue.
    • No (Score:5, Informative)

      by Sycraft-fu ( 314770 ) on Sunday February 17, 2008 @12:51AM (#22450760)
      Microsoft has declared for all their products that a processor is defined as a physical processor in one socket. No matter how many cores it has, it is a single CPU for licensing purposes. Also, you don't have to buy more licenses to run more processors; you have to buy different versions. Last I checked it was 2 processors for workstation versions, 4 for server, 8 for advanced server and 32 for datacentre. Not sure if that's changed.

      At work we have purchased a dual-processor system, with a quad-core CPU in each socket, that runs Vista. All 8 cores show up and are usable by software.
      • Re: (Score:3, Funny)

        by rastoboy29 ( 807168 ) *
        Wow, how generous of them.
      • Re: (Score:2, Insightful)

        by dbIII ( 701233 )
        If you can't do a bare-metal reinstall without reading a 42-digit number over the phone to somebody in India, their licensing confusion is barely relevant in a serious computing environment. Limits of 2GB and 2 processors show that the world passed them by long ago, let alone bizarre connection limits defined by licences instead of the capabilities of the hardware, which should render them irrelevant for fileservers once you get a company big enough to have more than five computers.

        You have an 8 processor mac

        • There is no 2GB limit on Vista. If you are referring to the 2GB per process limit in 32-bit Windows, that is a function of how they do 32-bit memory allocation, and they aren't alone in that. Windows splits the virtual address space in half, 2GB for user 2GB for kernel. In 64-bit Windows the limit on 32-bit processes is 4GB, and there is no effective limit on 64-bit processes since the virtual space is larger than the physical memory you can get (it again splits the address space in half, however that gives
          • by dbIII ( 701233 )

            If you are referring to the 2GB per process limit in 32-bit Windows, that is a function of how they do 32-bit memory allocation, and they aren't alone in that

            They are completely alone in that, since many other systems properly support the Pentium Pro and later processors - including other 32-bit systems sold by Microsoft. Their hobby line (including their latest 32-bit operating system) unfortunately does not.

  • Schick (Score:4, Funny)

    by Anonymous Coward on Sunday February 17, 2008 @12:54AM (#22450774)
    Works for razors - 2 is better than 1, so 3 has got to be better than 2. I'm not switching from Intel until someone comes out with 5 - count 'em, 5! - micro sharp cores...
  • by Sycraft-fu ( 314770 ) on Sunday February 17, 2008 @12:59AM (#22450798)
    3 cores will be better if you have a use for them. It's that simple, and that answer holds true for any arbitrary number of cores. Basically, you need at least as many threads that each need a lot of CPU time as you have cores. This could all be from one program that's heavily multi-threaded and CPU-intensive, or it could be from multiple applications running at the same time.

    For most things, no, 3 cores aren't really going to be much of a benefit at this point. While there are now multithreaded games out there that make use of 2 cores pretty well, they don't really scale past that at this point. I imagine that'll change as time goes on, since quad-core processors are getting more common, but it hasn't yet. As for desktop apps, well, they don't tend to use much power, so it won't help much. I suppose it might help responsiveness in some cases a tiny bit, but I doubt it.

    However for some professional apps it can help. Cakewalk's Sonar makes use of multiple processors quite handily. Every effect plugin, every instrument, all run as a separate thread so it can easily use a large number of cores. I've seen it run on a quad core system and it distributes load quite well across them. I don't imagine anything would be different with 3 cores, it'd just have one less to use.
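
    The "at least as many busy threads as cores" rule is easy to demonstrate for yourself. Below is a minimal Python sketch (it uses processes rather than threads, since CPython threads don't execute bytecode in parallel): one CPU-bound task finishes no faster no matter how many cores you have, while a batch of them scales.

        import os
        import time
        from concurrent.futures import ProcessPoolExecutor

        def burn(n: int) -> int:
            # Deliberately CPU-bound busywork.
            total = 0
            for i in range(n):
                total += i * i
            return total

        def timed(n_tasks: int) -> float:
            start = time.perf_counter()
            with ProcessPoolExecutor() as pool:   # one worker per core by default
                list(pool.map(burn, [5_000_000] * n_tasks))
            return time.perf_counter() - start

        if __name__ == "__main__":
            cores = os.cpu_count() or 1
            print(f"1 task  : {timed(1):.2f}s  (only one core can help)")
            print(f"{cores} tasks : {timed(cores):.2f}s  (all {cores} cores contribute)")
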
    • Re: (Score:2, Insightful)

      For most things, no, 3 cores aren't really going to be much of a benefit at this point. While there are now multithreaded games out there that make use of 2 cores pretty well, they don't really scale past that at this point.

      But now you can play games and encode a dvd at the same time. It's still useful. And at some point or another there will be games that support use of multiple processors, just like there are games now that support physics processors (though few) even though most people don't have one.

    • by Kjella ( 173770 )
      Unfortunately, that's why I think this processor won't be very useful. If you have poorly threaded applications, then the "one core does the critical task, the other core runs the rest" approach is fine, and the third core is almost idle. And if it scales well, then you'll probably benefit from 4+ cores. I guess it all comes down to price, but I wouldn't be buying AMD stock, to put it that way.
      • I think it is mostly marketing. AMD is likely having poor yields on their quad-core processors, since it is actually 4 cores on one die and not 2 separate 2-core dies in one package, as Intel is doing. So they probably figured that for chips where 1 core fails, they'll just disable it and market the chip as 3 cores. OK, that's fine, but as you noted, it is a solution looking for a problem. Every app I have falls into one of the following categories:

        1) Only uses a single core.
        2) Uses 2 cores, but no more (games mostly).
        3) Can scale to an
    • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Sunday February 17, 2008 @01:36AM (#22450988) Journal
      Ironically, the main advantage of dual-core has nothing to do with applications taking advantage of that second core -- in fact, just the opposite.

      Dual-core means that for most cases, I can run a video encode, a backup/compression process, a long-ish compilation (of the sort that doesn't like 'make -j2'), etc -- not so much all at once, as I can fire off any background process and not worry about it, as I have a whole other core to use. It's shameful -- Amarok will occasionally use 100% of one core, and I won't notice for hours.

      Having more than two cores wouldn't benefit me a lot right now. I wouldn't mind it, certainly -- I've been playing a bit with things like Erlang, which should be able to scale arbitrarily -- but I think the real applications are only just catching on to the idea that threading is a good thing. I imagine it's still going to be a lot longer till a quad-core machine is useful for anything other than, say, running virtual machines, as most programming languages do not make threading easy. (Locks and semaphores are almost as bad as manual memory management.)

      While I'm playing crystal ball, I'll predict that the first application of multicore will be things which were already running on multiple machines in the first place -- video rendering, for instance. Not encoding, rendering.

      The second application for it will be gaming. This will take longer, and will only be the larger, higher-quality engines, who will simply throw manpower at the problem of squeezing the most out of whatever hardware is available.

      I suspect that the old pattern will be very much in effect, though -- wherein gamers will buy a three-core system and unlock the fourth one (if possible), then use maybe one core, probably half of one, with the video card still being the most important purchase. If there's a perceptible improvement, it'll be because their spyware, IM, torrents, leftover Firefox with 20 MySpace pages and flash ads, etc, won't be able to quite fill the other three cores.

      I'd like to add that for most people, including me, one core is plenty if you know how to manage your processes properly -- set priorities, kill Amarok when it gets stuck in that infinite loop, and get off my lawn!
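
      The "locks and semaphores" pain mentioned above is what message-passing designs in the Erlang style avoid. Here is a minimal toy sketch of that style in Python: workers share nothing and communicate only through queues, so there is no shared state to lock.

          import queue
          import threading

          tasks: queue.Queue = queue.Queue()
          results: queue.Queue = queue.Queue()

          def worker() -> None:
              while True:
                  item = tasks.get()
                  if item is None:            # sentinel: shut down cleanly
                      return
                  results.put(item * item)    # touches no shared mutable state

          workers = [threading.Thread(target=worker) for _ in range(4)]
          for w in workers:
              w.start()
          for n in range(100):
              tasks.put(n)
          for _ in workers:
              tasks.put(None)
          for w in workers:
              w.join()
          print(sum(results.get() for _ in range(100)))   # 328350
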
      • by jgrahn ( 181062 )

        Dual-core means that for most cases, I can run a video encode, a backup/compression process, a long-ish compilation (of the sort that doesn't like 'make -j2') ...

        A broken Makefile, that is. I have them too.

        ... etc -- not so much all at once, as I can fire off any background process and not worry about it, as I have a whole other core to use. It's shameful -- Amarok will occasionally use 100% of one core, and I won't notice for hours.

        I bet you would be mostly fine with one core, too. Nothing really b

        • by repvik ( 96666 )
          I'll tell you another useful bit about SMP. Back in the days when I had two Pentium II (or III?) CPUs, I was suckered into clicking a link that got FF and IE to completely freak out and use 101% CPU resources. Fortunately for me this didn't bring my box to a grinding halt like "everybody else's", because they only ran on one of the cores ;-)
      • by Kuciwalker ( 891651 ) on Sunday February 17, 2008 @02:13AM (#22451150)
        Having more than two cores wouldn't benefit me a lot right now. I wouldn't mind it, certainly -- I've been playing a bit with things like Erlang, which should be able to scale arbitrarily -- but I think the real applications are only just catching on to the idea that threading is a good thing. I imagine it's still going to be a lot longer till a quad-core machine is useful for anything other than, say, running virtual machines, as most programming languages do not make threading easy. (Locks and semaphores are almost as bad as manual memory management.)

        In general I'd agree with you, but I've found that a quad-core (which is actually pretty cheap these days) is much better than a dual-core if you watch HD video. H.264 at 1080p is pretty taxing on the processor, and on a C2D you generally can't have anything in the background or you'll drop frames. A quad-core means you can run one or two other processor-intensive tasks (usually, as you said, video encoding/backup/compilation type stuff) and don't have to pause them when you want to watch video. Also, it's very helpful if you use Mathematica a lot for large computations.

        • Re: (Score:2, Interesting)

          by Anonymous Coward
          I have to disagree here. I watch 1080p H.264 video a lot (I even encode some using x264). Using an old version of CoreAVC (CPU doing all the work, no DirectX acceleration of any kind) I get ~20% CPU usage on my C2D. If I use a codec that can make use of my GeForce 8500GT's video acceleration, it drops below 5%. I never, ever drop a single frame. It must be your video card drivers or the codec you use that's being problematic.

          Dual cores are easy to keep busy. Do anything somewhat demanding, and use the othe
          • Agreed, any decent H.264 decoder can run on a ~2GHz single-core CPU, and the great decoders can probably run at ~1GHz. A lot of video codecs are written poorly (like every single one in QuickTime), and the vendors don't care because it helps keep hardware sales up.

            Core AVC is wicked fast. I just downloaded the Iron Man trailer from quicktime.com, and here are some rough figures for the H.264 decoders that I tried on my Athlon X2 running at ~2.5GHz:
            Quicktime - ~80% load
            Nero - ~60% load
            Core AVC - ~25% load
        • I'm a bit afraid that I will be losing most of the benefits of a dual CPU (better GUI response) when applications start using all those cores. Even now, if the applications are run from the same disk, things get really messy. Of course, flash SSDs may alleviate some of that last problem.

          I've had it with my video screwing up when running an IO- or CPU-intensive task. The problem is that prioritizing tasks - for both CPU and IO - on current OSes does not really work, and it is starting to hurt.
          • Vista is actually much better in this area than previous Windows OSes, if you have a DX10-capable card and an EVR-capable media player. The reason is that you can actually have the H.264 video decoding/rendering happening within your video card, rather than having the CPU do the work and output a constantly moving 2D image.

            Say what you will about all the eye candy; if you have a decent video card, Vista performs far better with Aero enabled than disabled, for the simple reason that Aero is actually using
      • by Fweeky ( 41046 )
        Video encoding's been multithreaded in at least some encoders for a while now, some decoders too. Compression can be done multithreaded (e.g. pbzip2, WinRAR), as can generation of par files (I use concurrent par2cmdline [chuchusoft.com] for backups). My audio player [foobar2000.org]'s supported running NUM_CPU conversion/ReplayGain threads for years.

        And yes, even when apps don't use it themselves, SMP's nice; I was doing it 9 years ago when I got my first BP6 [wikipedia.org] and it was great, despite relatively little other than the OS actually making us
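
        The pbzip2-style trick mentioned above is simple enough to sketch: split the input into chunks, compress each chunk on its own core, and concatenate the output (bzip2 streams can be concatenated, and standard decompressors handle multi-stream files). A rough Python version, with an arbitrary chunk size:

            import bz2
            import os
            from concurrent.futures import ProcessPoolExecutor

            CHUNK = 8 * 1024 * 1024   # 8 MiB work units; size is an arbitrary choice

            def compress_chunk(data: bytes) -> bytes:
                return bz2.compress(data, compresslevel=9)

            def parallel_bzip2(src: str, dst: str) -> None:
                # Note: pool.map() consumes its input eagerly, so this reads the
                # whole file up front -- fine for a sketch, not for huge files.
                with open(src, "rb") as fin, open(dst, "wb") as fout:
                    chunks = iter(lambda: fin.read(CHUNK), b"")
                    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
                        for block in pool.map(compress_chunk, chunks):
                            fout.write(block)
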
        • Using ultra-high density drives with larger cache sizes helps substantially.

          Seagate's 7200.11 750GB are a great example; in my informal testing (called "actual use") they do far better than the 36.7GB 10,000rpm Raptor drives I was using previously, and at a fraction of the $/GB cost.
      • One of the more obvious gains with lots of cores (>2), if matched with lots of memory, is that you can still use your system for games while doing a week long matlab computation and transcoding a dvd into xvid...

      • by llzackll ( 68018 )
        What about a program doing a CPU-intensive task that isn't multithread-aware? Would these see any advantage from a multi-core CPU? One reason I ask is because usually multi-core CPUs run at a lower clock speed than the fastest single-core CPUs.
    • Re: (Score:3, Insightful)

      I expect these to be popular for virtualization systems as well, where a spare CPU for the spare OS can do wonders for your performance, and a vastly cheaper set of triple-cores can easily satisfy needs that would otherwise call for some very expensive quad-cores, with an option for upgrades as needed.
    • As I have stated before:

      Many of the newest operating systems, applications, and games are multi-threaded. Multiple CPU cores just allow modern systems to take advantage of them, when available.

      I have a dual quad-core computer that dual-boots Windows Vista Ultimate 64-bit and Fedora 8 Linux 64-bit. Many programs do take advantage of this system, including modern PC games such as Crysis and Unreal Tournament 3. UT3 does use all 8 CPU cores during parts of the game.

      So, even though multiple cores are not n
    • Re: (Score:3, Funny)

      by mpcooke3 ( 306161 )
      Or to put this another way, my girlfriend can now leave two flash adverts open in firefox on her profile before it totally cripples my machine.
  • by fahrbot-bot ( 874524 ) on Sunday February 17, 2008 @01:27AM (#22450954)
    The AMD Triple Track has three cores - one core to cut into the problem, a second to grab what is left before it can snap back into the cache, and a third core to finish it off. The AMD Triple Track, because you'll believe anything!

    [For those too young, the reference is the 1975 SNL parody about the Remco Triple Track Razor - done just after twin-bladed razors first appeared.]

    • Re: (Score:3, Funny)

      by TubeSteak ( 669689 )
      Would someone tell me how this happened? We were the fucking vanguard of computing in this country. The Intel Pentium 4 was the CPU to own. Then the other guy came out with a 64 bit CPU. Were we scared? Hell, no. Because we hit back with a little thing called the Pentium 4 Extreme Edition. That's 3.2GHz and 2 MB of L2 cache. For performance. But you know what happened next? Shut up, I'm telling you what happened--the bastards went to two cores. Now we're standing around with our cocks in our hands, selling
    • by symbolset ( 646467 ) on Sunday February 17, 2008 @04:15AM (#22451726) Journal

      For reference, see The Onion's "... We're doing five blades" [theonion.com]. (Rough language; if you're at a school, maybe NSFW.) From February 2004. For the record, the Gillette Fusion, with five blades and two lubricating strips, was introduced in early 2006 [cnn.com].

      Hilarious though:

      Here's the report from Engineering. Someone put it in the bathroom: I want to wipe my a?? with it. They don't tell me what to invent--I tell them. And I'm telling them to stick two more blades in there. I don't care how. Make the blades so thin they're invisible. Put some on the handle. I don't care if they have to cram the fifth blade in perpendicular to the other four, just do it!

      You're taking the "safety" part of "safety razor" too literally, grandma. Cut the strings and soar. Let's hit it. Let's roll. This is our chance to make razor history. Let's dream big. All you have to do is say that five blades can happen, and it will happen. If you aren't on board, then .... you. And if you're on the board, then .... you and your father. Hey, if I'm the only one who'll take risks, I'm sure as hell happy to hog all the glory when the five-blade razor becomes the shaving tool for the U.S. of "this is how we shave now" A.

      People said we couldn't go to three. It'll cost a fortune to manufacture, they said. Well, we did it. Now some egghead in a lab is screaming "Five's crazy?" Well, perhaps he'd be more comfortable in the labs at Norelco, working on #### electrics. Rotary blades, my white #!

      I'm a big AMD fan, but three cores are barely better than two. Buy it anyway - AMD needs to live if the computer market is to be bearable at all in ten years. VIA makes some interesting stuff too - and they're not afraid to cut the watts and make them small. You can do some very neat stuff [viaarena.com] with a low-watt CPU on a small board.

      It doesn't take a great deal of insight to see we're going to 8 cores per processor on the desktop sometime in the next few years. Dual 16 core processors will happen within ten if competition keeps the pressure up. Personally I don't care if every core is on a separate slab of silicon as long as they integrate in the package well. Yields are better that way I imagine. Somebody tell them to get the watts down. Electricity [intelligen...rprise.com] is mostly made from CO2 emissions [doe.gov]:

      PCs worldwide consume about 80 billion kilowatt-hours of electricity every year.
  • by Doppler00 ( 534739 ) on Sunday February 17, 2008 @02:30AM (#22451252) Homepage Journal
    I think I remember reading an article on Tom's Hardware Guide where they tried running one dual-core and one single-core CPU in the same system, for 3 cores. While they got it to boot the OS, a lot of applications failed to run.

    I'm guessing there is a lot of code out there that expects a power-of-two number of cores. A program might run fine with 1, 2, 4, 8, or 16 cores, but with some kind of odd number I wouldn't be surprised if several applications just refused to run. It will be interesting to see what kind of compatibility testing AMD has done with this new processor.

    In the end though, this just seems like another last ditch attempt by AMD to marginally compete on the lower end market with Intel. Intel says they have no need for 3 core chips since their yields are so much higher.
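
    If applications really do assume a power-of-two core count, the most likely failure mode is work partitioning. A hypothetical Python sketch of such a bug - not taken from any real application - shows how an integer-division split silently drops work on 3 cores while a strided split does not:

        ITEMS = list(range(1000))

        def split_naive(items, cores):
            # Buggy on "odd" core counts: the remainder is never scheduled.
            per_core = len(items) // cores
            return [items[i * per_core:(i + 1) * per_core] for i in range(cores)]

        def split_robust(items, cores):
            # A strided, round-robin split works for any core count.
            return [items[i::cores] for i in range(cores)]

        for cores in (2, 3, 4):
            naive = sum(len(c) for c in split_naive(ITEMS, cores))
            robust = sum(len(c) for c in split_robust(ITEMS, cores))
            print(f"{cores} cores: naive schedules {naive}/1000, robust {robust}/1000")

    A one-item shortfall looks harmless, but the same assumption baked into thread-affinity or work-scheduling code could plausibly crash or refuse to start.
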
    • by flyingfsck ( 986395 ) on Sunday February 17, 2008 @04:52AM (#22451888)
      That may well be true with DOS or Windows ME, but certainly not with any version of Unix.
    • by Fweeky ( 41046 )
      A lot of apps which depend on such things probably only check CPUID and capabilities of a single CPU, and assume it goes for all available ones; this would be Bad if, say, one of your CPUs had SSE3 and one didn't - an app which can make use of it would only work if it happened to only be scheduled on the more capable CPU (so SSE3's always available), or happened to be scheduled on the less capable one when it did its capabilities check (so it never uses it).

      k8temp [hur.st] is a good example; if you have one really o
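
      On Linux you can check for this sort of per-core capability mismatch directly. A small sketch that diffs the feature flags /proc/cpuinfo reports for each logical CPU (this assumes the usual x86 cpuinfo layout; on a homogeneous system it prints nothing):

          def flags_per_cpu(path: str = "/proc/cpuinfo") -> dict:
              cpus, current = {}, None
              with open(path) as f:
                  for line in f:
                      key, _, value = line.partition(":")
                      key, value = key.strip(), value.strip()
                      if key == "processor":
                          current = int(value)
                      elif key == "flags" and current is not None:
                          cpus[current] = frozenset(value.split())
              return cpus

          cpus = flags_per_cpu()
          baseline = next(iter(cpus.values()), frozenset())
          for cpu, flags in sorted(cpus.items()):
              if flags != baseline:
                  print(f"cpu{cpu}: -{sorted(baseline - flags)} +{sorted(flags - baseline)}")
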
  • by ELiTeUI ( 591102 ) on Sunday February 17, 2008 @02:56AM (#22451378)
    There are a couple of known problems with the first spin of the Phenom die (codename Agena).

    The first (and less relevant) problem is the TLB errata. The second (and more relevant to this discussion) is a problem in which core #2 (out of [0,1,2,3]) is lower-yielding than the other three. For example, on the same CPU die, cores [0,1,3] may work fine at 2.6GHz, but core [2] yields only at 2.0GHz. This is a widespread problem, mostly found out through failed overclocking attempts.

    Google it yourself and find out.
  • by WarlockD ( 623872 ) on Sunday February 17, 2008 @04:09AM (#22451698)
    I hate it when people tell me this. They have put WAY too much effort into the whole 6950 and SC1435 lines. Hell, the new 2970s are due out, if not out already.

    My personal opinion is that they still need to be fleshed out, though. I am not sure why, but all the AMD systems we have only accept unbuffered DDR2, as well as having issues with very large amounts of RAM (more than 64 gigs). I will admit, however, that they use a lot less power and are much quieter.
    • Breaking news: Weird sparking coming from basements all over the US.

      People have reported sparking and loud screams all over the US. It is thought that a posting on Slashdot caused the sparking, when thousands of nerds drooled over their laptop computers. In other unrelated news, rugged laptop sales have skyrocketed.
  • by flyingfsck ( 986395 ) on Sunday February 17, 2008 @04:42AM (#22451828)
    With one dead core dropped per processor, that would explain the rumours.
  • IIRC, somebody designed and sells a three socket mobo where all the data paths are also equal. (Ah, here it is: http://hardware.slashdot.org/hardware/07/08/13/1749213.shtml, a three socket Opteron machine with two PCIe slots and two Infiniband 4x ports.) I'd like to see a version for the Phenom 3-core CPUs; even better would be building some sort of Beowulf cluster using three of them, each using a pair of cross-over cables for the interconnects. That would give you one sweet 27-way cluster.
  • Seriously. Will the 3 core Phenoms work with Linux? I'm very excited to see what develops here.
