
AMD Going Dual-Core In 2005 (309 comments)

gr8_phk writes "We recently learned of Intel's plans to go dual-core in late 2005. Well it seems AMD has decided to follow suit. It should be noted that the K8 architecture has had this designed in from the start. Will this be socket 939 or should I try to hold out another year to buy?"
This discussion has been archived. No new comments can be posted.

  • by schwep ( 173358 ) on Monday June 14, 2004 @07:24PM (#9424864)
    I have seen some licensing schemes that apply to per-processor costs... 1 CPU = $1,000, 2 CPU = $2,000 etc.

    How long will it take to argue that consumers with a dual core processor should pay 2x the price? I'm betting not long.
  • by wmeyer ( 17620 ) on Monday June 14, 2004 @07:25PM (#9424870)
    Interestingly, in a review of P4 vs. K8, the K8 had a clear advantage at the 4 processor level and above, apparently because of reduced bus conflicts with their individual memory spaces. If AMD were to proliferate cores on chip, they'd wind up contesting for the memory bandwidth, just like the P4.
  • by cyfer2000 ( 548592 ) on Monday June 14, 2004 @07:29PM (#9424919) Journal
    I can see a big future for the heatsink business in Intel's and AMD's plans.
  • Re:Just get... (Score:5, Interesting)

    by ruiner5000 ( 241452 ) on Monday June 14, 2004 @07:29PM (#9424922) Homepage
    Sure, if you are happier with not only liquid radiator cooling but also copper heatpipe cooling. That's right: as I discovered here [amdzone.com], Apple has had to implement not one but two separate cooling solutions for its 2.5GHz PowerMac G5. What were you saying again? You do realize, don't you, that you will be able to take a single-core dual Opteron system, swap in two dual-core CPUs, and have quad-CPU power? And that makes the G5 an advantage how?
  • by polyp2000 ( 444682 ) on Monday June 14, 2004 @07:34PM (#9424963) Homepage Journal
    To be perfectly honest, it depends how rich you are. At the end of the day, when it comes to buy-now-versus-buy-later, the state of technology is that in most cases (particularly with computer hardware) whatever you invest in becomes obsolete after only a short period of time.

    From my own personal point of view, my dual Athlon 1.5GHz is still holding out beautifully. When the cash comes my way I'm banking on a PowerBook. Truth is, I don't need another desktop just yet. However, if I had a stupid disposable income, and one that would predictably hold out till these dual cores come out, I'd probably get one now and another later.

    When I built this machine I bought the highest-spec parts I could afford at the time, and I haven't upgraded in 2 or 3 years aside from the graphics card. The rule I live by is: get the best you can afford at the time, and it should keep you going for a good while.

    I'm running a Gentoo box; faster processors would be very nice for source compiles, but I gave up on churning out SETI blocks a while ago and don't have a massive need for more processor power...
  • by Vario ( 120611 ) on Monday June 14, 2004 @07:36PM (#9424987)
    Dual-core processors seem to me like a pretty good alternative to a dual-processor system. You don't have the hassle of 2 huge coolers blowing out hot air, the mainboards don't have to be overpriced, and it's already supported by every OS.

    Some years ago I was thinking about getting a dual-processor system. The motherboard alone was twice as expensive as a comparable single-processor one, applications didn't support it at all, and so on. I hope newer applications are ready for dual cores. Quake III was the first game I know of that used two processors, and finally I can consider that animated desktop background.

    Is there a list of which applications can effectively use dual cores, besides obvious things like webservers?
  • Re:What about Apple? (Score:5, Interesting)

    by HiredMan ( 5546 ) on Monday June 14, 2004 @07:44PM (#9425046) Journal
    Dual cores have been in the IBM PPC pipeline for quite a while - of course the (now old) Power4 arch has been multi-core all along.

    In all probability the PPC little brother of the Power5 (rumored to be called the 975) will debut at 90 nanometers, and the next chip will be a ~60 nanometer dual-core version, possibly called the 976.

    Which of these will be called the G6 is left as an exercise for the reader. My money is on the 976. Either way, the PPC has some serious legs.

    =tkk
  • Re:Just get... (Score:2, Interesting)

    by Anonymous Coward on Monday June 14, 2004 @07:45PM (#9425047)
    The reason people buy Apples isn't to churn out SETI data blocks, it's for style, and always will be (now). Apple is sort of like the Mercedes-Benz of computers: they look nice and work nice, but they aren't the power-hungry rice rockets PCs are nowadays.
  • Well (Score:4, Interesting)

    by Sycraft-fu ( 314770 ) on Monday June 14, 2004 @07:47PM (#9425064)
    Seeing as the G5 is, more or less, a single core from the larger IBM Power4 processor, I don't see that it would be a large problem to make dual-core chips.

    I highly doubt Apple will switch to x86; it's a pride thing if nothing else. Also, at this point, a switch would upset everything. It could potentially have been done with the OS X transition: since software already had to be ported to a new OS, a new architecture port would have been just one more thing. Now, however, x86 Macs would be binary-incompatible with PPC Macs. That means emulation, which isn't very efficient.

    I think Apple is pretty much stuck on PPC for good.
  • Re:What about Apple? (Score:2, Interesting)

    by Fooby ( 10436 ) on Monday June 14, 2004 @07:50PM (#9425079)
    No. The PPC architecture has the RISC advantage which makes engineering them that much easier. It would be easier to make multicore PPC than multicore Intel.

    Apples are the only RISC-based consumer desktop platform; it would be tragic if they moved toward Intel, with all its legacy baggage.

  • by Vario ( 120611 ) on Monday June 14, 2004 @07:51PM (#9425089)
    Only if the application is doing time-consuming stuff in at least two threads. You say any modern GUI app, but is Firefox's page rendering multithreaded? What about my DVD player software, games, TeX, Maple?
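[Editor's note: the point above is worth illustrating. A second CPU, or a second core, only helps a CPU-bound job if the work is actually split across threads or processes. A minimal sketch in Python (a toy example, not from the thread; `multiprocessing` is used because Python's interpreter lock serializes CPU-bound threads):

```python
# Toy illustration: a CPU-bound job benefits from a second core only if
# the work is explicitly divided between workers.
from multiprocessing import Pool

def sum_of_squares(bounds):
    """Sum i*i over [lo, hi) -- a stand-in for any CPU-bound kernel."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    # Single-threaded: one core does everything, the other sits idle.
    serial = sum_of_squares((0, n))
    # Split across two workers: only now can a dual-core chip help.
    with Pool(2) as pool:
        parallel = sum(pool.map(sum_of_squares, [(0, n // 2), (n // 2, n)]))
    assert parallel == serial
```

The same structure applies to threads in C or Java; the point is that the split has to exist in the program, not the hardware.]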
  • by philipgar ( 595691 ) <pcg2 AT lehigh DOT edu> on Monday June 14, 2004 @08:03PM (#9425188) Homepage
    While the idea of dual-core CPUs is really cool, and they will take over shortly, due in part to the fact that we need something to do with all those extra transistors, I wonder why the industry's focus is on chip multiprocessors (CMP).

    While CMP processors can give us roughly the same performance as a standard SMP system (somewhat faster due to interprocessor communication and shared memory, but also slower due to a larger memory bottleneck), I don't think a CMP system would compete with a simultaneous multithreading (SMT) solution.

    While Intel's take on SMT (Hyper-Threading) has some benefits, its performance is rather lackluster; the reason has more to do with their particular implementation. In the initial research on SMT, an 8-way SMT processor was shown to outperform a 4-way CMP processor. I must note that the 8-way SMT processor had more functional units than the cores in the 4-way CMP processor, but the overall area of the 8-way SMT processor would be much, much smaller (far fewer structures need to be duplicated for SMT than for CMP). For more information, check out some of the papers at http://www.cs.washington.edu/research/smt/ .

    What I don't understand is the industry's insistence on using CMP first. From everything I've read, an 8-way SMT processor should take up less die space than a two-way CMP processor, even assuming the 8-way processor contains more functional units. It makes sense that a CMP processor is faster when there aren't enough threads to fully utilize an SMT processor (say, only 2 or 3 threads that want full CPU usage). I guess SMT is a big change in the model of programming and application development (I'm currently doing research on the subject, which is why I'm so interested in it). Is the reason to embrace CMP simply that there's less new technology to add (they "just" have to interconnect two cores, as opposed to adding the extra logic for SMT)?

    Does anyone else have any opinions on this, or any idea why no one seems to be fully embracing SMT's potential?

    Philip Garcia
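[Editor's note: a back-of-the-envelope way to see the latency-hiding argument for SMT. This is a toy model of my own, not from the cited papers: a lone thread leaves the pipeline idle while it waits on memory, but an n-way SMT core can issue instructions from other threads during those stall cycles.

```python
# Idealized SMT utilization model (an illustrative sketch, not a simulator).
def utilization(busy, stall, threads=1):
    """Fraction of cycles doing useful work, assuming each thread
    alternates `busy` compute cycles with `stall` memory-stall cycles
    and stalled threads overlap perfectly (an upper bound)."""
    per_thread = busy / (busy + stall)
    return min(1.0, threads * per_thread)

# One thread computing for 2 cycles then stalling for 6 keeps the core
# busy only 25% of the time; four such threads can, ideally, fill it.
print(utilization(2, 6, threads=1))  # 0.25
print(utilization(2, 6, threads=4))  # 1.0
```

Under this model, duplicating the whole core (CMP) buys capacity, while SMT buys occupancy of capacity you already paid for, which is the poster's intuition about die area.]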
  • by Lord Crc ( 151920 ) on Monday June 14, 2004 @08:13PM (#9425272)
    My case is crap. Yet, with an Athlon 3000+, my two WD hard drives are out-noising my 4 case fans + CPU fan by quite a bit (I notice when they're powered down).
    CPU temps in the mid-50s C. Not what I would call screaming...
  • by Nom du Keyboard ( 633989 ) on Monday June 14, 2004 @08:23PM (#9425375)
    Why not take an older processor (e.g. the i80486) that is already basically single-cycle execution -- or the Pentium, which has two execution pipes -- update it to a modern process geometry, which should increase speed and decrease power, and put as many as you can easily fit onto the die? After all, those older cores execute all the basic x86 code, including MMX, with a lot fewer transistors. How much do SSE, SSE2 and HT contribute versus a lot of cores just executing threads with little context switching?
  • by HuguesT ( 84078 ) on Monday June 14, 2004 @08:32PM (#9425454)
    This raises questions regarding stability and Windows.

    While I find that multiprocessor setups under Linux improve things to a significant degree (although there are still outstanding issues with NVidia's proprietary drivers and SMP), I found the opposite true for Windows.

    The last time I tried, which was about 2-3 years ago, many drivers didn't seem to expect true concurrency under Win2k, and I was experiencing significantly more crashes on my dual P-III than when I forced the system to use only one of the CPUs. Yet it probably wasn't the hardware, because that same machine was very stable with Linux.

    With the advent of hyper-threading, have things improved markedly with WinXP?
  • by charnov ( 183495 ) on Monday June 14, 2004 @08:52PM (#9425626) Homepage Journal
    The current Opteron has a dual-channel memory controller. There really isn't that much of a reason to go dual dual-channel when, in many situations, the single-channel Athlon 64s outperform the Opterons because of reduced latency (no registered DIMMs).
  • by lsdino ( 24321 ) on Monday June 14, 2004 @08:54PM (#9425637) Homepage
    While computers typically do use powers of 2, you can run SMP machines without powers of two. For example, I've heard of 3-proc machines; I believe it's just a quad-proc box without the 4th processor. Odd, but true.
  • by NerveGas ( 168686 ) on Monday June 14, 2004 @09:02PM (#9425688)
    No it doesn't

    Yes, it does.

    If you're at all familiar with the Opteron architecture, you'd realize that each chip's memory controller does, indeed, go to its own memory bank.

    As an example, I just bought a 4-way Opteron. It's got four separate banks of memory. Each processor has a 128-bit DDR400 memory controller, all independent of each other.

    If you have a program on each CPU, accessing memory tied to that CPU, the 4-way machine I mentioned would have a theoretical memory throughput of 25.6 gigabytes/second. The theoretical throughput of a dual-Xeon machine is 5.4 gigabytes/second. That's a huge difference.

    You're right, it takes some intelligent work to schedule programs on CPUs that are close to the memory the program will access. If you hadn't been in a hole for the past year or two, you'd know that there has been a lot of work put into Linux to make it handle these NUMA architectures more intelligently. IBM has some VERY large NUMA systems, and has been pouring a lot of development into Linux.

    As for system costs going up so much that it would be prohibitive for a desktop, think again. AMD's entire desktop line is transitioning to the Opteron architecture. Even the lowly 1xx single-proc Opterons and Athlon 64s have nearly all of the features of the highest 8xx 8-way chips. The difference between an 848 and a 148 is just reduced cache and fewer HyperTransport links out of the chip.

    steve
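[Editor's note: the aggregate figure quoted above is easy to verify. A quick arithmetic sketch, assuming only the stated 128-bit controller width and DDR400's 400 million transfers per second:

```python
# Sanity-checking the 25.6 GB/s claim from the stated assumptions.
bus_bytes = 128 // 8                 # 128-bit bus = 16 bytes per transfer
transfers_per_sec = 400e6            # DDR400 = 400 MT/s
per_cpu_gb = bus_bytes * transfers_per_sec / 1e9   # GB/s per controller
total_gb = 4 * per_cpu_gb            # four independent controllers, 4-way box
print(per_cpu_gb, total_gb)          # 6.4 GB/s per CPU, 25.6 GB/s aggregate
```

The key point is the multiplier: because each Opteron brings its own controller, aggregate bandwidth scales with socket count, whereas a shared front-side bus does not.]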
  • Re:What about Apple? (Score:1, Interesting)

    by Anonymous Coward on Monday June 14, 2004 @09:07PM (#9425719)
    Oops -- all AMD and Intel cores have been RISC-based internally for generations. Time to leave the 1980s, methinks.
  • by minator ( 744625 ) on Monday June 14, 2004 @10:17PM (#9426196)
    This is almost exactly what Sun is doing in its next-generation CPUs: taking multiple simple cores and stuffing 8 of them onto a single die. I believe they are based on UltraSPARC II cores with support for 4 threads per core added. On specific types of tasks, the Sun CPUs are going to completely toast any x86 system.
  • by Anonymous Coward on Monday June 14, 2004 @10:33PM (#9426307)
    Not always. I've had some version of 3D Studio, I think it was (I dunno, it was a while ago), that refused to run on a dual-CPU machine without the proper license.
  • by philipgar ( 595691 ) <pcg2 AT lehigh DOT edu> on Monday June 14, 2004 @11:34PM (#9426615) Homepage
    Granted, wide-issue superscalars are area-hungry and more expensive to implement, but that doesn't answer the question I asked. It stands by obvious reasoning that doubling the number of functional units in a processor (and adding 100 or so extra registers to the rename unit) is less expensive than doubling the entire processor die. What may get tricky is the VLSI design of the extra functional units (going from 1 to 2 cores in VLSI layout should be straightforward enough in terms of chip area, but I'm no expert in this matter).

    From this it seems intuitive that a super-wide-issue processor would take up maybe 10-15% more die area than a conventional superscalar. Add another 5-10% of die area for the required duplicated structures and extra rename registers, and you end up with a super-wide-issue SMT processor that keeps the advantages of wide issue and adds the latency-hiding features of SMT, for a speedup of well over 2x.

    I think the most accurate reason may have more to do with the simplicity of going the CMP route.

    Phil
  • by addaon ( 41825 ) <addaon+slashdot.gmail@com> on Monday June 14, 2004 @11:35PM (#9426617)
    Also, you don't need to round trip one signal at a time, you can have multiple signals in flight. Which makes this whole discussion rather moot.
  • by Ninja Programmer ( 145252 ) on Tuesday June 15, 2004 @12:08AM (#9426828) Homepage
    HyperTransport only supports up to 16 nodes, and one of them has to be the southbridge, so you can't get to 16 processors anyway. :) Seriously though, I've only seen topologies of up to 8 processors at once, so quad boards with two cores per processor are probably about as high as they will go with this architecture.
  • by Anonymous Coward on Tuesday June 15, 2004 @12:31AM (#9426923)
    Intel's Itanium 3 (that name is of course not confirmed yet), codenamed "Montecito", will be the first CMP *and* SMT chip in the Itanium line. It's interesting that Intel is going to introduce both CMP and SMT in the very same product. The only other chip that is both CMP and SMT is IBM's POWER5 (not really shipping just yet, but soon).

    By the way, the SMT in Montecito is not "fine-grain" SMT like Hyperthreading, but "coarse-grain" SMT as in IBM's RS64 series of chips (as opposed to POWER series).
  • More Cores (Score:2, Interesting)

    by thebdj ( 768618 ) on Tuesday June 15, 2004 @08:11AM (#9428296) Journal
    Let me explain something, though it may have already been done for many of you. You joke about more cores, but both companies are surely already in the process of adding more cores to their architectures. Granted, I heard my news through a third party, but apparently they know a person at Intel who said there was development of upwards of 16 cores on a single chip.
    The reason more-is-better works out is simply because we can. Think about how small the processes have gotten: most will be over to 90nm (0.09 micron) soon, and there is technology to go even smaller. Previously, the limits on a chip's speed were often set by cache size -- look at the crazy performance gains from doubling the cache on a CPU. The problem is that cache is expensive to put on a chip; cores are not. Expect the new war in the CPU world to be fought over core counts rather than clock speeds. This is part of the reason the companies are trying to break the traditional numbering schemes for processors and are inventing convoluted messes of numbers that literally mean nothing.
    My only concern so far has been the usefulness of dual cores. I assume they have some way for current software to keep treating the chip as a single CPU, because otherwise it would be pretty useless to have what amounts to twice the CPU in the same chip space, since most software isn't multithreaded enough to use multiple processors. But I am sure they have taken care of this. Better stop before I look like I am rambling...
