AMD Quad Cores, Oh My 423

Lullabye_Muse writes "From engadget we learn that AMD has plans for putting 4 cores on one die by the time Apple has fully gone to Intel processors. Full story here. They say they could eventually have up to 32 cores with scalable technology, but most programs haven't even got the ability to hyperthread, so do we really need the extra cores?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Saturday June 11, 2005 @06:10PM (#12791027)
    You must be new here.
  • Hyperthreading (Score:2, Interesting)

    by Anonymous Coward
    What does a developer have to do to take advantage of this? Are there compilers, or when will there be compilers, that automatically take full advantage of multi-core processors?
    • Re:Hyperthreading (Score:4, Informative)

      by Mad Merlin ( 837387 ) on Saturday June 11, 2005 @06:18PM (#12791093) Homepage
      It's more an issue of programs taking advantage of multiple cores or multiple processors than of the compiler. Using multiple cores means that a single program must have either multiple concurrent processes or multiple threads; you can't just magically compile that sort of thing in, and IPC can be a complex beast. That, or you need to run multiple programs at the same time to take advantage of more than one core at a time.
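The parent's point can be made concrete with a short sketch (Python here, purely illustrative): to use more than one core, a program has to split its work into independently schedulable processes or threads itself; the compiler won't do it.

```python
# Sketch of the "multiple concurrent processes" approach: the work is
# split across worker processes, each of which the OS can schedule on
# a separate core. Nothing here is compiled in "magically" -- the
# decomposition into independent chunks is done by hand.
from multiprocessing import Pool

def count_evens(chunk):
    # Independent, CPU-bound work; the only IPC is returning the result.
    return sum(1 for n in chunk if n % 2 == 0)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # one chunk per worker
    with Pool(4) as pool:                     # four worker processes
        print(sum(pool.map(count_evens, chunks)))   # prints 500000
```

The same decomposition problem exists in any language; the hard part is finding chunks of work that don't need to share state.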
      • Perhaps a smarter job scheduler would help. If one of the cores ran at 4x or in a very low-latency mode and the other ones ran at 0.5x, the critical, heavily interrupt-driven tasks could live on the fast core, and other tasks (like Word, Excel, etc.) could be scheduled on the other core(s). That way, even if a user app locked up on one of the non-critical cores, the rest of the system stays up and accessible.

        I'd even take a multi-core 1GHz chip (with only a passive heatsink on it...) vs a 3.x GHz with
      • Re:Hyperthreading (Score:4, Insightful)

        by pla ( 258480 ) on Saturday June 11, 2005 @08:44PM (#12791842) Journal
        That, or you need to run multiple programs at the same time to take advantage of more than one core at a time.

        On my home XP Pro box, freshly after a reboot, I currently have 15 distinct processes running, with FireFox as the only obviously user-interactive one.

        And that on a box with all the useless default XP crap turned off - I frequently see machines at work where, with nothing user-interactive running, the task list doesn't fit on one screen.

        The whole red herring about not having enough multithreaded apps yet (BTW, please write "Hyperthreading does not equal multithreading, nor does it equal multicore" a hundred times on the blackboard) has not mattered since the first version of Windows 95. I can find ways to use a few more CPUs, multithreaded apps or not. Just having a second core, so you can keep your "boring" processes like the OS and antivirus separate from your interactive programs, makes a system immensely more responsive.

        If you want a single-threaded program to run faster, more cores won't help. If you want your entire system to run faster, throw CPUs at it. However, looking at both Intel and AMD's roadmaps, I'd say the days of a MHz race have (finally!) neared their conclusion. They'll keep pushing their clocks, sure, but major leaps will move increasingly toward number of cores and how those cores interconnect (those two will basically need to alternate: A few doublings of core counts leading to memory bottlenecks, then a new way to keep the cores fed, then a few more doublings, rinse wash repeat).

        I wonder, though... Will Microsoft, Apple, or Linux (or some entirely new player) take the first leap to requiring one (or even a few) cores dedicated solely to the OS?
    • Use Fortran, or another language/extension that automatically parallelizes on appropriate code.
    • "What does a developer have to do to take advantage of this?"
      Easy: use threads.

      "When will compilers, or are there, compilers written that will automatically take full advantage of multi-core processors?"
      That may take a new language, or maybe C+++. Multithreading is not all that hard. And yes, I have written code that uses threads.
      However, what most people seem to forget is that you can take advantage of a multi-core CPU right now. Bring up your task manager and look at how many tasks are running.
      • Re:Hyperthreading (Score:5, Insightful)

        by grumpygrodyguy ( 603716 ) on Saturday June 11, 2005 @07:25PM (#12791490)
        "What does a developer have to do to take advantage of this?"
        Easy: use threads.

        Multi-threaded code is very difficult to write correctly and debug. It's hardly 'easy'.

        Multi threading is not all that hard. And yes I have written code that uses threads.

        When, for a school project? There are very few cases where integrating a multi-threaded handler into a program doesn't introduce a formidable degree of complexity. What really needs to take root is a new programming paradigm, one that assumes all procedures, functions, and system calls are concurrent from the get-go. People smarter than most of us need to design a language/compiler that doesn't burden the programmer with the responsibility of keeping track of when to use threads and when not to.
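For what it's worth, the burden the parent describes shows up even in trivial code. A sketch (Python, illustrative only): a shared counter is only safe if every access is guarded, and it is the programmer, not the language, who has to remember that.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:        # drop this lock and updates can be silently
            counter += 1  # lost: "counter += 1" is a read-modify-write

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # prints 40000 -- guaranteed only because of the lock
```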
        • Re:Hyperthreading (Score:4, Insightful)

          by OrangeSpyderMan ( 589635 ) on Saturday June 11, 2005 @08:12PM (#12791716)
          There are very few cases where integrating a multi-threaded handler into a program doesn't introduce a formidable degree of complexity.

          I would not be so categorical. It's a design issue. Making a program that was designed from the ground up with a single thread of "logic" play nicely with many different threads is stupidly complex and usually winds up being very kludgy; much of the threading advantage is eaten away by the hacks that are needed to make it work. Design it from scratch to work this way, however, and the multithreading may not be simple, but it is at least "obvious", and that makes for good, efficient threaded code. Lots of tasks can be broken up quite easily, and once the designer has understood inter-process communication and its constraints and overhead, the decision to create a new thread for a particular task or keep it in the existing one is often far more straightforward than you make out, and yields good results.
          • Re:Hyperthreading (Score:4, Insightful)

            by willy_me ( 212994 ) on Saturday June 11, 2005 @09:58PM (#12792252)
            I agree 100%... In fact, I would go even further. There are times when a multithreaded algorithm is far simpler than its single-threaded equivalent. In the past, the performance hit of running multiple threads on a single CPU often made it better to use the quicker single-threaded algorithm, but with newer hardware this isn't the case.

            Once one knows about the issues in multithreaded programming, it is actually quite simple. However, as the original poster pointed out, it is also very hard to debug and easy to make mistakes in. This is where design comes in: those mistakes shouldn't be made in the first place. Today, programmers have a nasty tendency to jump into code too quickly and rely on tools to debug and evolve the code into the final product. This approach works surprisingly well for simple programs, but you'll crash and burn if you try to use it with a multithreaded application.

            For my undergraduate degree I concentrated on learning to program using threads. I took courses like Distributed Systems, Concurrent Systems, and Parallel Computing... I observed first hand many of the problems associated with using threads, and I also learned by making most of the common mistakes. Looking back, I see that I learned a great deal. I see how multithreaded applications will play a bigger and bigger part in programming in the future. I also see how all those programming habits picked up in previous years will have to be thrown out and how proper software engineering practices must be adopted...


    • There's a certain amount an optimizing compiler could do to take advantage of multithreading technology without requiring anything from the developer (although I don't know which do).

      Writing decent multithreaded programs is as much a discipline as writing decent object-oriented code (although the two go together well). Basically you break a program into a set of independently-operating 'threads'. Thread safety becomes a concern -- if multiple threads access the same global variable you need a way to loc

    • With most OS vendors shipping some sort of hypervisor that lets you run multiple OS's on a machine simultaneously, I can finally get rid of some of the extra boxes sitting around my room.

      It might be nice if these could use separate CPUs, since I never know when one of them might be busy (say, getting slashdotted).

  • Short Answer (Score:2, Redundant)

    by sp0rk173 ( 609022 )
    Yes, Yes we do.
  • by DarkSkiez ( 11259 ) on Saturday June 11, 2005 @06:12PM (#12791038)
    Of cores we do!
  • by Anonymous Coward
    Now that Intel is running with Apple, Intel must be Doomed (tm).
  • by howman ( 170527 ) on Saturday June 11, 2005 @06:12PM (#12791043)
    4 cores on one chip... I guess they will have to call it the Earth Simulator, as the temperature of the chip will be reaching that of the earth's core.
    At least it will open up innovative new designs, like a built-in coffee pot, as well as new uses for old technology, like making pizza pops in your old CD burner.
  • Hyperthread? (Score:2, Informative)

    by Anonymous Coward
    Hyperthread(ing) is a term for a CPU that has two sets of architectural state but one set of execution units. Shouldn't the article use the phrase "multithread"?
    • Re:Hyperthread? (Score:3, Informative)

      by Pandaemonium ( 70120 )
      Yes, the poster should have used 'multithread' instead of the Intel-branded and trademarked term 'Hyper-Threading', which refers to their proprietary virtual-processor technology on Pentium 4s and Xeons.

      Let's not let Intel get the next 'Kleenex'-ing of the English language, shall we?
  • by strredwolf ( 532 ) on Saturday June 11, 2005 @06:13PM (#12791050) Homepage Journal
    Anything to go faster for Gentoo's sake, the better! Anything to make compiles go fast!
  • by fitten ( 521191 ) on Saturday June 11, 2005 @06:13PM (#12791052)
    but most programs haven't even got the ability to hyperthread, so do we really need the extra cores?

    Once upon a time, most programs didn't have the ability to do IEEE754 floating point either so did we really need the FPUs?

    Once upon a time, most programs didn't have the ability to do 3D graphics at 30fps. Do we really need dedicated high performance graphics cards?

    The list goes on... but no one learns...
    • Once upon a time, 640 KB ought to have been enough for everybody... today, >640 MB is common.

      Me, my laptop has 1.25GB, my desktop has 1GB, my backup PC has 512MB and anything below 512MB is marginally usable as far as I am concerned.
      • by Tim C ( 15259 ) on Saturday June 11, 2005 @07:16PM (#12791456)
        My first computer (a Sinclair ZX Spectrum) had 8KB of RAM. My first PC had 32MB.

        My current graphics card has 256MB of RAM.

        Even if none of my apps can take advantage of 4 cores, my PC can - I could be running a lengthy compile and transcoding some video while playing a game and still be contributing to SETI@home or something.

        More to the point, you could have a long-running process (like video transcoding/encoding) running on one or two cores, with the remaining core(s) doing something else for you while you wait.
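On Linux, pinning such a long-running job to particular cores can be done from the process itself. A sketch, assuming Python's os.sched_setaffinity (which exists only on Linux):

```python
import os

allowed = os.sched_getaffinity(0)   # remember the original CPU set

# Pin this process to core 0 only; a long transcode pinned like this
# leaves the remaining core(s) free for interactive work.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0) == {0})   # True while pinned

os.sched_setaffinity(0, allowed)    # restore the original CPU set
```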
    • by Angst Badger ( 8636 ) on Saturday June 11, 2005 @09:24PM (#12792007)
      It's worse than that. Run ps under Linux or the task manager under Windows, and tell me how many processes you see running. Sure, most of them are single-threaded applications, but they're all competing for the same CPU (or two). A 32-way chip would make things much speedier even if there were no multithreaded applications running. (And yes, I'm aware that other issues, like memory contention, come into play.)

      You don't want that 32-way CPU? Well, give it to me and I'll let you have this old Pentium.
      • When I run ps, I see one program running: ps. bash, my shell, is blocked because ps has control of the terminal.

        If I turn around and run top, I see that, indeed, the main program running is top. All the rest are usually sleeping on some event. Unless that event occurs, they won't be woken up. The speed with which Linux can react to my keypresses, read the key presses, send those keypresses into a user-land safe buffer, wake up the userland program waiting on it (in this case, Mozilla), and then schedule
  • by Anonymous Coward
    Only a few programs can use multiple processors/cores (CAD, animation, scientific computing). But just unloading some of the OS processes onto other cores leaves more power for standard programs. (LimeWire + Firefox + XviD compression)
  • Since we seem to have hit a wall as far as ramping up the actual clock speeds of processors, adding more cores so the processor can do more work will be where Intel and AMD will be focusing their development the next few years. So yes, we do need more cores otherwise Intel and AMD will have a hard time selling you a chip that's only 3-5% faster.
    • Speed isn't all about clock rate. Though I'm a hardware simpleton, I do wonder if we'd be better off (after 2 cores) with simply adding a ton more cache.
      • Re:No more Mhz! (Score:2, Insightful)

        by Kufat ( 563166 )
        Cache may well be reaching the point of diminishing returns. I seem to recall reviewers' benchmarks of 1MB vs. 2MB showing almost no gain, although I'm sure Intel has a set of benchmarks showing massive improvements.
  • by tbuckner ( 861471 ) on Saturday June 11, 2005 @06:15PM (#12791070)
    See the MIT Technology Review article: ue/feature_intel.asp [] The silicon laser, being made from the same material as the rest of the chip, would replace the copper wires that connect cores, thus letting Intel 'keep Moore's Law alive for decades', the article says. It would do this by permitting many, many cores in fast communication, with less heat and less energy required than current copper-wired chips. Question: will Intel's possession of silicon lasers shut AMD out?
  • by Lokni ( 531043 ) <reali100.chapman@edu> on Saturday June 11, 2005 @06:16PM (#12791072)
    1.21 Jigawatts!
  • [...] but most programs haven't even got the ability to hyperthread, so do we really need the extra cores?

    Writing code for hyperthreading is not easier than writing code for multi-core/SMP. Both are just writing code targeted at SMP. NUMA-like concerns for systems with multiple chips make more of a difference. If anything, hyperthreading is harder to optimize for, since you have to figure out when to issue PAUSE instructions.

  • by m50d ( 797211 ) on Saturday June 11, 2005 @06:17PM (#12791080) Homepage Journal
    Who still uses one application at a time, really? I know there's less benefit when it's different applications because of register sharing, but if it's cheaper to get 4 cores than 2 cpus it's probably worth it.
    • by wfberg ( 24378 ) on Saturday June 11, 2005 @06:23PM (#12791135)
      I recently ditched a dual Pentium II for an AMD64 3000+... and I miss the SMP machine. Why? Because if some stupid app was taking 100% CPU power, on the old machine that meant it was using 50% of my CPUs, and I had a whole nother CPU available for killing errant apps with.

      Even gamers now do stuff like run skype side-by-side with their resource-hogging game.

      Yes, you need multi-core, multi-processor, whatever.

      • Linux does a pretty reasonable job. I use XP at work and when it's doing something CPU-bound (generating a key pair with putty sticks out in my mind) the machine becomes unresponsive, but doing the same thing on my Linux machine doesn't have any perceptible effect. 2.4 kernels kinda sucked at that, but 2.6 classifies threads based on whether they use up all their CPU time. If they sleep voluntarily or wait for I/O, they are given higher priority.

        Even if the CPU usage is at 100%, benchmarks have shown that
    • by imsabbel ( 611519 ) on Saturday June 11, 2005 @07:00PM (#12791345)
      You don't understand:
      These are 2 complete CPUs plus a crossbar switch on one die. No sharing of execution units/registers, no sharing of anything but the RAM bandwidth.

      AMD dual-core CPUs are FASTER than 2 single-core CPUs in dual-socket boards (with the exception of extremely bandwidth-demanding streaming applications), simply because of much faster on-die cache-coherence communication.

      A quad-core CPU will most likely see more bandwidth problems, but could (with DDR2, etc.) still very well be in the same class as a machine with 4 single-core CPUs.
  • A computer that will burst into flame without being /.ed first... I want one.
    • Re:wicked (Score:3, Interesting)

      by OoSync ( 444928 )
      A computer that will burst into flame without being /.ed first... I want one.

      Then you'll want to look into YAWS [].

      Basically, a web server written in Erlang, which supports lightweight processes and high concurrency. In other words, each connection is a completely separate process and shares no information with other processes except by message passing.

      Also, a recent paper [] from the primary designer of Erlang, Joe Armstrong.

      The key points are that Erlang process creation and message passing ar
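The share-nothing, message-passing style the parent describes isn't Erlang-specific. A rough Python analogue (illustrative only; it doesn't claim Erlang's lightweight-process performance): each worker is a separate OS process that shares no state and communicates solely over queues.

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # The worker shares no state with its parent; it only receives a
    # message and sends one back.
    msg = inbox.get()
    outbox.put(msg.upper())

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put("hello")
    print(outbox.get())   # prints HELLO
    p.join()
```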

  • by Lemurmania ( 846869 ) on Saturday June 11, 2005 @06:17PM (#12791087)
    Need? What is this "need" you speak of? I'm having a very hard time understanding the post's question. If only the poster would use words I can comprehend, such as "want," "desire," "lust" and "pointless splurge."

    What we have here is a failure to communicate.

  • by Anonymous Coward
    It is relatively easy to add multiple cores (copy and paste in your IC layout program), but I wonder if this is just another manifestation of that "megahertz myth" (multicore myth?). Adding bunches of cores is fine and dandy, but you have to keep those cores fed with a wide and fast bus.

    The largest chip packages currently available have fewer than 2000 pins (and I don't expect that to scale as quickly as the number of cores grow) and you can only cram so many DDR/Rambus channels before you run out of I/Os. P
  • One would need either a ton more RAM or faster I/O for the HDD than is standard today or even in the near future. The bottleneck is non-volatile storage throughput; fix that, and even today's systems could be much faster than they are with SATA/SCSI/ATA100/133.
  • Lots of expensive software vendors are pricing expensive software (like SQLServer's "enterprise" version at $40000/CPU) on a per-CPU, not a per-core basis.

    Multiple cores on a single chip are extremely important if you buy such absurdly licensed software.

  • by Animats ( 122034 ) on Saturday June 11, 2005 @06:19PM (#12791109) Homepage
    • CPU 0: Windows Update
    • CPU 1: Virus scanner
    • CPU 2: Client for P2P network decompressing "Star Wars 7 - The Revenge of Jar-Jar"
    • CPU 3: Useful work.
  • but most programs haven't even got the ability to hyperthread

  • You can get your XP-box rooted that much faster. Just think how efficiently Joe Sixpack can finally work on his system while the leeches of the internet get their share too! It is about time...


    Actually, if I can ask a serious question, does multi-core work the same way as multi-processor? (i.e. two procs isn't twice as fast, but closer to 1.5x...) And if it is essentially the same, will this not inevitably lead to far denser blade servers? (i.e. two 8-core chips on a blade as opposed to two one
  • Semiconductor manufacturer AMD petitioned the NRC for a rule change to allow small home-use nuclear reactors, saying in the application that "consumers will need it".

    Also, they announced the acquisition of the Frigidaire refrigeration company for an undisclosed amount, saying that "our product lines have a mutual synergy".
  • It's more a case of it's the only way forward.

    Clock speeds have, for the foreseeable future, hit the wall but transistor counts are still going up.

    Clock speeds have been the way forward to date because they require no change in the way programs are written, yet provide performance improvements.

    Now that the only way to improve performance is to harness increased transistor counts, multi-cores are in, but this means a programming paradigm shift is needed, because current programming languages are insuffici
  • By then, Intel is gonna have, like, a million BILLION cores, with super powers like laser eyes and an invisibility shield!


    Let the macho dick-waving contests begin.
    • The difference is that AMD's quad-core will be faster and take less power than Intel's single core ;-)


      OK, fanboy I may be, but at least AMD is taking actual strides in MEANINGFUL improvements [e.g. the low-power, equal-performance AMD64 Venice core] whereas Intel [outside of the Pentium M] is relying solely on a massively high clock rate [with a massively inefficient ALU] to get attention.

      I mean, why is it that at something like bignum math or compiling, a half-clockrate AMD or Pentium M can get equal or better wall-time
  • by Dun Malg ( 230075 ) on Saturday June 11, 2005 @06:32PM (#12791192) Homepage
    The word "Hyperthreading" describes a specific hardware kludge by Intel to make a single-core CPU pretend it's dual-core. Apps that utilize multiple CPUs are called multithreaded. All you dorks parroting the article submitter and calling it "hyperthreading" are idiots.
    • by Jeff DeMaagd ( 2015 ) on Saturday June 11, 2005 @07:47PM (#12791596) Homepage Journal
      Hyperthreading isn't necessarily a kludge. It works very well and is often well worth the sliver of a die to implement, so long as the operating system knows the difference. It was never intended to be a replacement for a full dual processor system, I don't think it was ever sold as such.

      It isn't Intel's technology either; Intergraph invented it, although Hyperthreading (TM) is Intel's branding of the idea. Alphas were supposed to get it: maybe the EV7 has it, I'm not sure, it might have been something supposed to go into the EV8.
  • ...have a look at these slides from a technology presentation given last Friday: ay.htm?slide=1&a []

    Impressive. If they execute on all that, Intel will have to keep on playing catch-up for the foreseeable future.

  • It's pretty obvious that the next wave of Moore's law seems to be moving computing towards parallelism.

    This is pushing software developers to make their applications multi-threaded in order to exploit the performance gains of parallel processors.
    The interesting thing about this is that writing concurrent multi-threaded applications is extremely difficult. I expect an increased demand for skilled programmers in the near future to overcome this difficulty.

    Look at it this way: the increase in C
  • by trims ( 10010 ) on Saturday June 11, 2005 @06:35PM (#12791215) Homepage

    Yes, Virginia, we can use multi-core. I mean, we're all into SMP heavily in the non-desktop role (does anyone actually make a "server" that doesn't have SMP?)

    There are two big things I love about the multi-core Opterons: they draw less power than equivalent SMP machines (actually, quite a bit less), and they allow multiple "CPUs" to use the same memory controller. Nominally, the second isn't a big win, but it can be for practical purposes.

    Opterons have dedicated memory channels on them, so a current dual-socket Opteron has two DISTINCT DIMM banks - that is, on a motherboard with 8 DIMM sockets, 4 are allocated to each CPU socket. So if you have only one CPU, you can only use 4 DIMM sockets. Since those 4 sockets are often configured as a single bank (i.e. they all have to be filled to work), you can't add another CPU to the system without buying more RAM. This is wasteful. But with a multi-core opteron, all on-chip cores share the same memory bank.

    The gist of this is that it'll be easier to have high-compute, lower-RAM configurations than it currently is reasonable to do. There are a lot of tasks out there for which it is really nice to have a modest amount of RAM (say 4GB), but with huge crunch. Currently, it's hard to buy a config to do that, since you generally end up either way over-paying for CPUs, buying a huge number of tiny DIMMs (which sucks for future expansion), or buying a larger number of motherboards, which draws more power.

    And, hey, they're not too bad in price. Sun's dual-core v40z is less than twice as expensive as their single-core v40z, and you save lots on power/cooling/space.

    Overall, a nice win.


  • The more power the better. We need it. Let's advance the technology and not start worrying about whether we need it :)

    You bet your ass we need it.
  • BEOS!!! (Score:3, Interesting)

    by dextr0us ( 565556 ) <dextr0us AT spl DOT at> on Saturday June 11, 2005 @06:44PM (#12791261) Homepage Journal
    That's why I still run BeOS with a complete lack of application support! Every app is fully threaded... so I might as well run fewer of them!
  • by LionKimbro ( 200000 ) on Saturday June 11, 2005 @06:45PM (#12791270) Homepage
    In his Intel Developer Forum 2005 keynote speech, [] Justin Rattner [] said Intel is working towards having hundreds (at least tens) of cores in there.

    He shows demos and explains several driving forces:
    • voice interaction
    • visual interaction (face recognition, identifying shape, video analysis)
    • 3D graphics
    • machine learning

    An example of video analysis [] is demonstrated. You can get a stable image out of a cell phone, and get a much higher resolution to boot, simply by analyzing lots of images in sequence. Right now, it takes a lot of time to crank out the analysis. But the problem is parallelizable, and Intel thinks we'll have this sort of thing in cell phones by 2015.

    This is also the technology behind automatic construction of 3D from images. [] This is where you pull your cell phone out, walk around, waving it around the room, and get back a 3D model of the room.

    People ask: "Do we really need all this computing power?" Yes, yes we do. There's plenty of stuff to do with it.

    Scott talks about sitting in front of the computer, and not needing to log in, because the computer knows who you are by your face.

    There's all kinds of stuff to do with it.
  • What we need (Score:3, Insightful)

    by fm6 ( 162816 ) on Saturday June 11, 2005 @06:48PM (#12791283) Homepage Journal
    ...but most programs haven't even got the ability to hyperthread, so do we really need the extra cores?"
    You think new systems are designed to run existing software? That's backwards. New software is designed to fully exploit existing systems. When more people have hyperthreading hardware there will be lots of software that uses it. Same for multi-core systems.

    That said, most users run word processors, web browsers, and other simple productivity software that doesn't even fully exploit the old P2s we were running a few years ago. But if you want to run the latest graphics-intensive games, you'd better have the latest hardware.

  • most programs haven't even got the ability to hyperthread, so do we really need the extra cores?

    This statement makes no sense. And, besides:

    zcat foo.gz | bzip2 -c > foo.bz2

    Look, ma! Code that will run twice as fast on a multiprocessor system!

  • Think Fast (Score:3, Funny)

    by Doc Ruby ( 173196 ) on Saturday June 11, 2005 @07:34PM (#12791530) Homepage Journal
    The logic that says the scarcity of hyperthreading-aware programs means multicore dies don't benefit from lots of cores is wrong. Once in the multithreading game, more cores is better. So few multithreaded programs might mean less reason for multicores, but once there is a reason, the more cores the better. It's like saying that most people don't drive, so there's no need for really fast cars. That kind of fuzzy illogic is all too common in the media. On Slashdot, where we'd like to think we can think, it's really irritating.
