Hardware IT

The Quest for More Processing Power

Hack Jandy writes "AnandTech has a very thorough, but not overly technical, article detailing CPU scaling over the last decade or so. The author goes into specific details on how CPUs have overcome limitations of die size, instruction size and power to design the next generation of chips. Part I, published today, talks specifically about the limitations of multiple cores and multiple threads on processors."
  • by 2.7182 ( 819680 ) on Wednesday February 09, 2005 @08:31AM (#11617176)
    The quantum computer!! Until then, we'll have to suck it up with these Si things.
  • by Pan T. Hose ( 707794 ) on Wednesday February 09, 2005 @08:37AM (#11617209) Homepage Journal
    What we need is a better architecture which would allow for a better implementation of algorithms. Will we ever have an MMIX [stanford.edu]-like processor with 256 general-purpose 64-bit registers that each can hold either fixed-point or floating-point numbers? That is what I am waiting for, not more "power," whatever that means.
    • by LiquidCoooled ( 634315 ) on Wednesday February 09, 2005 @08:45AM (#11617262) Homepage Journal
      Didn't the PowerPC have something approaching this?
      I remember the old Motorola 68000 range having 16 32-bit registers for general coding, and one of the prime benefits of the PPC was the vastly greater register capacity.

      I stopped coding assembler when I moved to x86 - what a horrible kludge of a stack-biased platform it is.
    • by cnettel ( 836611 ) on Wednesday February 09, 2005 @08:50AM (#11617290)
      Ok, classic x86 is cramped and the CPU does a lot of register renaming to get around it. I don't agree that more registers would actually do that much good.

      What kind of algorithm are you imagining would benefit from 256 fields of non-vectorized data?

      Of course, those registers could be used in larger code for everything that's worthy of a local variable, but as soon as you run into a stack operation you'll either want to push only a subset of the registers to the stack, or face a harder blow of memory access times by making each function call a 2048-byte write to memory (256 registers × 8 bytes each).

      Explicit encoding of parallelism, hints to branch prediction, and similar stuff, seems far more appropriate.

      Again, few single functions in an imperative language have 256 separate variables without involving arrays of data. Unless the register file is addressable by index from another register (basically turning it into a very small addressed memory, which is what you try to avoid with registers), you have little use for 256 of them. Take, for example, a trivial string iteration algorithm (see the sketch below): most of those registers would be completely useless. The same holds true for common graph algorithms.
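      To make that concrete, here is a minimal sketch of such a string iteration in C (our illustration, not from the thread): the compiler only ever needs two or three live registers for it, so an extra 250 would sit idle.

      #include <stddef.h>

      /* A trivial string iteration: only the cursor, the loaded byte,
         and the start pointer are ever live at once.  Whether the ISA
         offers 8, 16, or 256 registers, the generated code is the same. */
      size_t str_length(const char *s) {
          const char *p = s;           /* one register: the cursor */
          while (*p)                   /* one more: the byte just loaded */
              p++;
          return (size_t)(p - s);      /* length from two live values */
      }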

      • I don't agree that more registers would actually do that much good.

        Clarification: it's easy to see that you move in and out of registers and force the CPU to do register renaming to get good parallelism in x86. I fail to see the real performance benefit once you get above, let's say, 32 of each kind, and I think the 16 available in AMD64 should be fine for most tasks. The problem in x86 is that there are only eight, and even those have locked meanings to some degree.

        • The problem in x86 is that there are only eight, and even those have locked meanings to some degree.

          Locked meanings? I'm not so sure. If we do a MUL EAX, then the result goes into EDX:EAX. Since EAX gets clobbered, it'll get renamed. Combine that with the fact that most compilers generate code that doesn't use the instructions in which registers have special meanings anyway, and I don't think this is actually a problem. (A small sketch of the clobbering follows below.)
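          To see the clobbering concretely, here's a sketch using GCC/Clang inline assembly on x86 (an assumption on our part - MSVC and other compilers use different syntax). The one-operand MUL writes its 64-bit product into the EDX:EAX pair, overwriting both:

          #include <stdio.h>

          int main(void) {
              unsigned int a = 123456789u, b = 10u;
              unsigned int lo, hi;
              /* One-operand MUL: EDX:EAX = EAX * r/m32.  EAX and EDX are
                 both overwritten - the clobbering that renaming hides. */
              __asm__ ("mull %3"
                       : "=a"(lo), "=d"(hi)   /* results in EAX and EDX */
                       : "a"(a), "r"(b));
              unsigned long long product = ((unsigned long long)hi << 32) | lo;
              printf("%u * %u = %llu\n", a, b, product);
              return 0;
          }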

      • by Jeff DeMaagd ( 2015 ) on Wednesday February 09, 2005 @09:23AM (#11617538) Homepage Journal
        Ok, classic x86 is cramped and the CPU does a lot of register renaming to get around it. I don't agree that more registers would actually do that much good.

        It does. Take a look at x86-64. Almost the entire reason 64-bit x86 code is faster when you are using less than 4 gigs of RAM is that it has double the registers. With the same number of registers, 64-bit code normally slows things down measurably, because the pointer size doubles while the instruction word length doesn't change.

        256 registers goes a bit far unless half of them are predication bits.
        • by cnettel ( 836611 ) on Wednesday February 09, 2005 @09:41AM (#11617695)
          Read my own clarification response above yours; I meant that x86 is cramped by its register count (and by the further restrictions on what to use when), but that 256 is very, very many.

          The Itanium has a huge register file with, IIRC, even more registers in total. They are not interchangeable, though; but the (almost) only point of making them interchangeable would be to keep the total number of registers down while staying flexible for most types of code. As I think it's generally easier to make them separate for different execution units, that's not very interesting. Also, note that the Itanium currently has a 2-cycle (again, IIRC) register access time! They tried to be visionary, adding a huge register set in addition to some parallelism encoding and the other things I mentioned in the parent, but they traded (what seems to be) far too much to get it.

          A huge (defined as MMIX-like, not AMD64-like) register file might be great, but you need selective register pushing to the stack to get away with it, unless you or the compiler are performing very aggressive inlining. What's easier, if you're doing assembler -- calling a function and putting a local on the stack, or writing one huge fricking implementation of your main algorithm, taking great care to use different registers for each inlined function?

          • If I recall correctly, MMIX uses its whole huge register file as a stack. All of your instructions specify register numbers as counted from the top of stack. Stack space is allocated and deallocated in frames, not a register at a time. A frame must be small enough to fit in registers. The stack spills to memory if it overflows, and refills from memory if it underflows. It does not have to spill/refill on a frame boundary. But activation records for compiled C routines could nest five or six deep and not spill

      • Branch prediction sounds decent to most people, until it hits reality. Having a program "predicting" that it will need a certain path is *backwards*. If a certain calculation *should* go down a path, it should be able to tell the hardware so in advance.
    • True. These dual-core CPUs are an indication that they are having difficulty increasing CPU throughput.

      As with dual-CPU motherboards, you go dual when you can't get anything more out of the single...

      10GHz CPU, lol. Why not release one that requires a 100GHz clock? If it's only processing every 30th cycle, what's the big deal? Oversimplification, I know, but that is the essence of Intel's laughable strategy: consumer ignorance vs. product innovation. We'll take the ignorance. How long can it last?
    • by Leroy_Brown242 ( 683141 ) on Wednesday February 09, 2005 @09:07AM (#11617386) Homepage Journal
      Smart power, not more power? How un-American!

      TERRORIST!

    • You make a good point here. To add to what you said, I think we don't really need more "raw power" (at least, not for general use); we need more "intelligent" use of the available power. A few of us think the future is some kind of "soft core" where the available cells could perform different functions over time. Kind of like a super-scalar, on-the-fly reprogrammable FPGA. Think of how much of a "classic" processor is just a huge waste of resources most of the time. We need to improve on that.
      • You can already buy PCI boards that will let you do this. It is just that software support is seriously lacking (non-existent).

        My guess is that this would work wonderfully for certain classes of problems, and would be quite useful for things like finite element analysis, MPEG encoding, and the like. The main problem is that an FPGA takes a fair bit of time to load its configuration file. Obviously, you would not want to multitask between two different applications trying to use this FPGA. Otherwise, you
        • Of course, a classic FPGA architecture wouldn't cut it. But there are some more advanced architectures that are being tested already, that allow extremely fast reprogramming. Imagine if some areas of your processor could be reprogrammed in the time it takes for, say, a context switch. And of course the underlying OS needs to be written so as to optimize the processor's use at any given time.
  • by klang ( 27062 ) on Wednesday February 09, 2005 @08:39AM (#11617222)
    That's what's been happening the last 10-15 years. Where are the indications that "time to market" and "sloppy programming" will suddenly vanish?
    • Pun (Score:3, Funny)

      by 2.7182 ( 819680 )
      From my point of view, chips lead to more bloat.
    • by Rinikusu ( 28164 ) on Wednesday February 09, 2005 @09:07AM (#11617387)
      Because, overwhelmingly, no one really cares but a handful of people. The days of hand-tweaked, ASM-optimized code are pretty much over for consumer code. Yes, there will always be a market, but it is ever diminishing even as the size of the market expands. To use an analogy, look at furniture. Go to just about any furniture "gallery" positioned for the great American unwashed and you'll find several hundred almost identical mass-produced fat-ass recliners, some with machine-stitched leather, some with vinyl, some with cloth, etc. Dressers and other cabinetry are stapled, nailed, screwed and glued with machine precision. The demand for hand-built, crafted furniture has dropped tremendously (and the prices for these craft pieces seem to have gone up...). Yes, a "hand-tweaker" coder will probably find work with a small shop somewhere, or create his own consultancy for clients who demand that kind of programming, and chances are that coder will make quite a bit more than the average churn-and-burn programmer (people like me), but for the overwhelming majority, it's overkill.

      (Here's a simple cost analysis: we can pay this guy $100k/year to do hand-optimized tweaks on code that then becomes a liability for future maintenance if that coder dies, quits, or whatever. Or we could add another stick of $100 RAM and buy a new processor next year, for a fraction of his cost, and get a similar performance bump... The math doesn't add up...)
      • by EvilTwinSkippy ( 112490 ) <yoda@NosPAM.etoyoc.com> on Wednesday February 09, 2005 @11:19AM (#11618728) Homepage Journal
        Hey, I shop at Ikea. The stuff isn't even assembled. It's a flat box full of precision-cut boards with bolts and one of those funky Allen keys.

        Getting back to your point, there is still a market for hand-coders. With most consumer electronics - I'm talking kids' toys, alarm clocks, talking dolls - you try to shave off every penny you can in manufacturing costs. Plus, once you start a product line, you run it out for years.

        In that case of high volume and low cost, it is easy to absorb the cost of a $100,000 hand-coder - especially if he can save you $0.10 a unit on lines where volume is measured in millions of units.

        Besides, most of the "hand coders" I know work more in the $36,000 range.

    • From what I've gathered, Apple has been cleaning up OS X's code and improving its speed. A few select open-source products have also reached a "stable" feature set and are working on smoothing things out. On the whole, though, not all of it has gone to bloat. Much of it has gone to abstraction, reuse, and consistency. I'd rather they reused a known, tested component that's 10% or 20% (or, depending on the application, 1000%) slower than rewrite a new custom piece that'll have new bugs.

      Speed is rarely an issue t
  • Quick answer (Score:5, Interesting)

    by LiquidCoooled ( 634315 ) on Wednesday February 09, 2005 @08:40AM (#11617228) Homepage Journal
    Run old software.

    It's only new software that's sucking up all the extra processing power.

    Remember back when 33MHz 486s etc. (and a lot slower) felt really sluggish, and thinking of the ultimate computer as being a whole 50MHz?
    Well, now you've got a computer that's over 10 times faster, with practically infinite capacity.

    Fire up that old operating system and run your original software; you will be in heaven!
    • What percentage of your CPU do you use on average? Unless I run BOINC or something, mine usually sits at 2% of its capacity. When compiling and doing other heavy stuff, of course, it will go up, but I have all the processing power I need for now. The article says that development isn't going as fast as it used to, not that we haven't got enough processing power.
    • Do you think OS/2 Warp has drivers for an ATI 9800 Pro or the chipset for a 3.2 GHz P4 or AMD 64 FX 53? I'm sure it'd fly, as it already flew on a Pentium Pro 180 with 64MB of RAM.
    • Nice idea, but I tried this: Windows 3.11 won't even install on my 1.2GHz Duron, and I'm not even going to try it on my AMD64 3200+...
    • Remember -- if you run DOS on a CPU with a large enough L2 cache, you can fit the entire address space minus extended or expanded memory (or whatever they called it) into L2 cache!
    • Indeed, Office 97 rocks man! ;)

    • Remember back when 33MHz 486s etc. (and a lot slower) felt really sluggish, and thinking of the ultimate computer as being a whole 50MHz?

      I remember when my 16MHz 386 machine was the hottest thing around - it blew the doors off the 6-to-8MHz ATs. Shortly after buying the 386, I picked up a copy of Gato, which used timing loops intended for the 4.77MHz 8088 - it went w-a-y too fast to be playable until I learned how to set the clock-speed compensation in the game.

      Before that when an 8 MHz 8086 was pretty hot stuff (w

  • x86 centric (Score:3, Insightful)

    by Anonymous Coward on Wednesday February 09, 2005 @08:44AM (#11617247)
    Might want to point out that the article is x86-centric. Not that it only applies to x86 - indeed, many or most of the issues apply to processors generally (single vs. multi-core, trace lengths, etc.) - but the article definitely focuses on these issues as they apply to x86.
  • Unbloated URL (Score:5, Informative)

    by rylin ( 688457 ) on Wednesday February 09, 2005 @08:57AM (#11617334)
    http://www.anandtech.com/printarticle.aspx?i=2343 [anandtech.com].
    Same article without 90% of the ad-bloat.
  • by Trolling4Columbine ( 679367 ) on Wednesday February 09, 2005 @08:59AM (#11617344)
    Chances are that you aren't often pushing your CPU to capacity. What I'd like to see is a better way to identify bottlenecks in my system. There's no sense pumping more power into a system if it's all going to be throttled by something like a slow hard drive.
    • Very good point.

      In fact, I would like to see research done on what operations are considered slow. For instance, if your word processor takes 1 second to update the screen, it is considered slow, but nobody will pay any attention if the DVD burning takes 5 or 10 minutes.
    • That's a good one. How can we ask for faster processors if ours aren't even 100% used? The current bottlenecks are memory, motherboards and I/O. Most people have 128 or 256MB of RAM, so what can we expect? A lot of swap! And swap drags your whole system down. When you have processors working in GHz, memory working in MHz, and hard disks working at a few kHz, even a lot of cache memory can't speed up your system. Now they're trying multi-core computers. Fine, sounds good. It's cheaper than a single core.
    • by Ironsides ( 739422 ) on Wednesday February 09, 2005 @12:55PM (#11620076) Homepage Journal
      Most bottlenecks are already known. Here is a breakdown of access times when you are running at processor speeds (a sketch below shows one way to measure these yourself):

      L1 & L2 cache: almost instantaneous; sub-nanosecond response time
      L3 and higher cache: a bit slower, but still pretty quick; nanosecond response time
      Main memory: go do something else while waiting for this; nanosecond-to-microsecond response time

      Hard drive: go to lunch and come back; millisecond response time
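      One rough way to see those numbers yourself - a sketch of ours, not from the parent, assuming a POSIX system (clock_gettime) and a C99 compiler - is to chase pointers through a randomly shuffled ring. Every load depends on the previous one, so the prefetcher can't hide the latency, and the average time per hop approximates the latency of whichever level of the hierarchy the working set fits in:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define HOPS 10000000UL

      static double chase(size_t bytes) {
          size_t n = bytes / sizeof(void *);
          void **ring = malloc(n * sizeof(void *));
          size_t *idx = malloc(n * sizeof(size_t));
          size_t i, j;
          /* Shuffle the visit order so the traversal defeats the prefetcher. */
          for (i = 0; i < n; i++) idx[i] = i;
          for (i = n - 1; i > 0; i--) {
              j = (size_t)rand() % (i + 1);
              size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
          }
          /* Link each slot to the next in shuffled order: one big cycle. */
          for (i = 0; i < n; i++) ring[idx[i]] = &ring[idx[(i + 1) % n]];
          void **p = &ring[idx[0]];
          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (i = 0; i < HOPS; i++) p = (void **)*p;   /* dependent loads */
          clock_gettime(CLOCK_MONOTONIC, &t1);
          double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
          double per_hop = (p != NULL) ? ns / HOPS : 0.0;  /* keep p live */
          free(ring); free(idx);
          return per_hop;
      }

      int main(void) {
          /* Working sets aimed at L1, L2, and main memory (assumed sizes). */
          size_t sizes[] = { 16u << 10, 256u << 10, 64u << 20 };
          for (int s = 0; s < 3; s++)
              printf("%7zu KB: %.1f ns per load\n",
                     sizes[s] >> 10, chase(sizes[s]));
          return 0;
      }

      (Build with something like cc -O2 -std=gnu99; older glibc may need -lrt for clock_gettime.)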

  • by MosesJones ( 55544 ) on Wednesday February 09, 2005 @09:07AM (#11617383) Homepage

    Ummm, my home machine has a 400MHz processor running Suse. I'm thinking of upgrading, as I have every 6 months for 5 years, but I just keep waiting for the "next" best thing rather than upgrading now.

    There are mobile phones more powerful than my home PC, but it does the job.

    The wonder of these future boxes is that we will STILL be able to write code that makes them run slow. Roll on Longhorn I say!
    • Overleap makes a 1.3Ghz Tualatin upgrade for Slot-1 Pentium machines. It's called a SlotWonder 1300C and costs about $100.
    • Re:Limitations... (Score:4, Interesting)

      by Kjella ( 173770 ) on Wednesday February 09, 2005 @09:39AM (#11617666) Homepage
      The wonder of these future boxes is that we will STILL be able to write code that makes them run slow. Roll on Longhorn I say!

      Well, each version of Windows seems to bring new hardware requirements. Most people buy a new Windows version with new hardware; it is more than just a little coincidence. I think Microsoft is well aware that most people aren't able to install Windows themselves, and that making them believe they need a faster box is a good way to keep them upgrading to the "next" level, both in software and hardware.

      Kjella
  • by Anonymous Coward
    The myth is that desktop programming is inherently single-threaded and that there's no benefit to multi-threading. This is in part due to the fact that a lot of multi-threaded programs don't run any faster on a single processor than a single-threaded program does. If there's no benefit to writing multi-threaded programs, then why go to the extra trouble of doing so?

    I expect that once multi-core desktop CPUs become more prevalent, the advantage of multi-threaded programming will become evident and start

    • There are two fundamental truths:

      1) Programming for two or more processors is more work, and prone to more subtle and strange errors.
      2) Most people only have one processor.
      You can draw the obvious conclusions.

      Fact #1 can be dealt with by proper technique, training, and tools (a minimal sketch of the kind of error involved follows below).
      Fact #2 is going to change due to the inability of AMD and Intel to deliver over 4GHz.
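      As a concrete illustration of fact #1, here is a minimal sketch (ours, assuming POSIX threads): two threads bumping a shared counter without a lock lose updates, and the amount lost changes from run to run - exactly the "subtle and strange" kind of error.

      #include <pthread.h>
      #include <stdio.h>

      static long counter = 0;             /* shared and unsynchronized */

      static void *bump(void *arg) {
          (void)arg;
          for (int i = 0; i < 1000000; i++)
              counter++;                   /* not atomic: load, add, store */
          return NULL;
      }

      int main(void) {
          pthread_t a, b;
          pthread_create(&a, NULL, bump, NULL);
          pthread_create(&b, NULL, bump, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          /* Lost updates usually make the total come up short. */
          printf("expected 2000000, got %ld\n", counter);
          return 0;
      }

      (Build with cc -pthread. On a single processor the race window shrinks, which is partly why such bugs often surface only when the code first meets real SMP.)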
      • 1) Programming for two or more processors is more work, and prone to more subtle and strange errors.

        Threaded apps and multitasking OSes have been around for years. Even if an app is single-threaded, the user still benefits from having 2 or more processors, because the system stays responsive even if one app has a CPU completely pegged.
    • The myth is that desktop programming is inherently single threaded and that there's no benefit to multi-threading.

      Blocking, man. There is a ready queue and a blocked list for a reason. Those disk accesses aren't instantaneous. Neither is waiting for input from the user, or waiting on a socket. If your thread is blocked, there might be other work you can do while you wait (see the sketch below).
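      A minimal sketch of that idea (ours; POSIX threads assumed): while the main thread blocks on a read, a worker keeps getting things done.

      #include <pthread.h>
      #include <stdio.h>

      static volatile int done = 0;
      static volatile unsigned long work = 0;

      static void *worker(void *arg) {
          (void)arg;
          while (!done)
              work++;                      /* stand-in for useful background work */
          return NULL;
      }

      int main(void) {
          pthread_t tid;
          char buf[128];
          pthread_create(&tid, NULL, worker, NULL);
          printf("Press Enter; the main thread now blocks on I/O...\n");
          fgets(buf, sizeof buf, stdin);   /* blocking read */
          done = 1;
          pthread_join(tid, NULL);
          printf("worker did %lu units of work while main was blocked\n", work);
          return 0;
      }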
  • by TheLoneCabbage ( 323135 ) on Wednesday February 09, 2005 @09:37AM (#11617652) Homepage
    Multi-threading gets you a speed boost not necessarily in the individual application, but definitely at the OS level. That's why Sun gets away with individual CPUs that are each 1/4 the speed of cheap x86 hardware.

    Most OSes these days are not monolithic. Even MS's is really a collection of smaller pieces, but not nearly to the degree of Linux.

    Linux just scales better than Windows on multiple CPUs. I have no doubt that MS will work Indian programmers day and night to catch up, but this is a game in which they are definitely playing catch-up.

    Linux, in some versions, is scaling past 64 CPUs now (oh, the benefits of forked kernel development!), which should factor nicely when the time comes that AMD ('cause Intel may not be around then) is pushing chips with dozens if not hundreds of micro-cores.

    Last I checked (and I may be out of date on this), Windows started bogging down at 4 CPUs. And never mind its asinine global message loop.

    I fully realize Joe User cares more about perceived performance than real performance (long live xorg!), and explaining Linux's advanced scaling architecture will not win over the desktop, but it will have a significant impact on technical decision markets, from servers to embedded devices (a HUGE market for these clustered chips).
    • I'm sorry, but this is just plain Linux fanboyism with very little technical backing. Windows has had threads as kernel-scheduled entities since the first version of NT; Linux got them in 2.6. The reason you cannot use the stock version of Windows on more than 2 CPUs (32, I believe, for AS) is that it uses a bitmap to identify CPUs, the size of the bitmap being determined at compile time. This approach (at least in theory) will give better performance for small numbers of CPUs, although it scales less
  • Just to note, I am not an Electrical Engineer (but will be in 3 years). From what little I've read, it seems like branch prediction allows the CPU to prefetch data it will need. Smart math people keep coming up with better and better general-purpose algorithms, but these new algorithms need more and more logic behind them, adding a lot to CPU complexity. Now, my question is: once we have an n-core CPU, would it be possible to optimize your main CPU for general-purpose use, the second for video encoding
    • This is already what you have: a general-purpose CPU (Intel or AMD), a graphics CPU (Nvidia or ATI), an audio CPU, MPEG en/decoding, DSPs, vector units, ...
    • You can't design an algorithm that says "for video applications, a branch is more (or less) likely to be taken than not taken". What you can do is put in clues for individual branches. In some languages, exceptions are used for this. An exception is just another sort of branch and can be used as a standard control structure, but the compiler knows that the exceptional condition is less likely to occur, so it will optimise the other path more (and tell the CPU, if this is supported by your ISA).
    • I will offer one suggestion, as one who was also at one point 3 years from an EE degree.

      Forget everything you are told about X being optimal, and Y being old hat. Computer architectures come and go like bell bottoms and short skirts.

      Branch prediction is a workaround. It is not a radical performance-enhancing technology. It is there to keep the CPU busy when it would otherwise be starved for instructions and data. Branch prediction is simply there to allow the CPU to operate at an insanely high clock speed

      • Branch prediction is a workaround. It is not a radical performance-enhancing technology. It is there to keep the CPU busy when it would otherwise be starved for instructions and data. Branch prediction is simply there to allow the CPU to operate at an insanely high clock speed as compared to the memory bus. And it only works well when you have a relatively fixed target to optimize for (namely Windows). Branch prediction is also needed because later generations of the i686 processor have insanely long pipelines

    • When you reduce the pipeline from P4 length to something along the lines of a PowerPC's, the misprediction penalty is much lower. Also, GPUs on graphics cards already off-load a major chunk of the repetitive processing for 3D rendering and 2D video decoding, and occasionally video encoding.

      The "Cell" architecture does something similar to what you've described: different cells handle different tasks of a multimedia system, say a set-top box or the PlayStation 3. Better statistical modeling is what's needed in terms of branch prediction.
    • ISAs already have ways to indicate static branch likelihood for conditional branches. Some ISAs have a "likely to take" flag, indicating that a branch is more likely to be taken than not, while others have rules like "if the branch is backwards, taken is more likely; otherwise not taken is more likely". A compiler can make use of these to do what you're suggesting (a sketch of the compiler-level version follows below).
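      At the compiler level, GCC and Clang expose exactly this through __builtin_expect. A small sketch (the likely()/unlikely() macro names below are our own, borrowed from the Linux kernel convention, not part of any ISA):

      #include <stdio.h>

      #define likely(x)   __builtin_expect(!!(x), 1)
      #define unlikely(x) __builtin_expect(!!(x), 0)

      static int parse_uint(const char *s) {
          if (unlikely(s == NULL)) {     /* error path: kept off the hot path */
              fprintf(stderr, "null input\n");
              return -1;
          }
          int n = 0;
          while (likely(*s >= '0' && *s <= '9'))  /* loop branch: usually taken */
              n = n * 10 + (*s++ - '0');
          return n;
      }

      int main(void) {
          printf("%d\n", parse_uint("12345"));
          return 0;
      }

      On ISAs with static hint bits the compiler can translate these hints directly; elsewhere it simply lays out the code so the expected path falls through.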
  • The author seems obsessed with the Pentium.

    The only reference made to AMD is regarding their ingenious SOI technology. With that exception, the focus is maintained on Intel (whom he calls "#1 in the CPU market"). I find that somewhat absurd, since Intel is largely failing (stretching an obsolete architecture to extreme limits by extending the pipeline) where AMD is innovating and has already largely surpassed them.

    AMD's CPUs do a hell of a lot more per clock cycle than Intel's. The AMD 64-bit

    • Well, he's also talking about the K9 delays. I think the failure of the Prescott in some ways shows what problems both leading x86 vendors are fighting against. Isn't SOI, BTW, more of an IBM deal that AMD cross-licensed?

      And yeah, the only Intel CPU I currently like is the Pentium M, and I hope you can forgive that. If I were buying a new main machine now, it would probably be AMD, but I'm holding out for dual-core releases from them. I like the effects that both "real" SMP and hyper-threading have

    • Correct me if I'm wrong, but wasn't SOI (Silicon On Insulator) IBM's technology?
  • A colleague suggests that 4 GHz may be a hard frequency to exceed (in the short run).

    My leaky brain suggests that this might correspond to the propagation speed in silicon for a given path length and a given process (e.g., 90nm may give us better results).

    --dave

  • I have to question one of the main assumptions in the article -- that most software won't benefit from multiple processors. In a sense it's true, but it's also misleading.

    If you are desperate to run your word processor or spreadsheet faster, then he's got a point. But realistically, don't the current systems already run those kinds of programs just fine? Is this the kind of application where more speed is most needed?

    I think Sony have got it right with their whole "media processor" approach, with high
    • I do agree that most apps and games don't really feel that much faster.

      Compare a game on a 100MHz Pentium and a 200MHz Pentium II: there is a massive difference. That's a gap of just 100MHz.

      Compare a game on a 1.4GHz Pentium 4 and a 2.0GHz Pentium 4: there is hardly any difference. That's a gap of 600MHz.

      The industry is obsessed with number crunching and generic software benchmarking, which is a bad measurement altogether.

  • Tom's Hardware [tomshardware.com] also has "The Mother of All CPU Charts," which is also a good read with many benchmarks.

    It is crazy how far we have come.

    -The only sig I have is a cig with a good single malt.
  • by zenst ( 558964 ) on Wednesday February 09, 2005 @12:31PM (#11619703) Homepage Journal
    Now, I have only a very limited understanding of the issues and the electronics, given my lack of electronics experience. But couldn't the leakage be utilised in some form of integrated Peltier cooling to help pump the heat out of the chip, and, by making it cooler, help in a small way to reduce leakage? Another, call it wacky, thought that struck me was: why not have another layer of silicon that is powered by the leakage? It looks to me like the leaked power from the 70nm process is nearly enough for the total power of the 90nm process, and the leakage from the 90nm would do something just over the 180nm process. In a sense, another form of heat pump :). Anyhow, I'm sure I've given you electronics gurus something to think about, or at the very least, laugh about. Enjoy :)
    • Re:just a thought (Score:3, Informative)

      by ChrisMaple ( 607946 )
      Peltiers have been used, but they are expensive, inefficient, and not very useful at the high power densities of a P4.

      Leakage is not available as a power source. Leakage turns into heat at the exact location where it occurs.

  • www.everythingispossible.com

"It takes all sorts of in & out-door schooling to get adapted to my kind of fooling" - R. Frost

Working...