IBM Hardware Technology

Looking Into The Power Architecture Future 296

vmircea writes "If you think clock speed is the most important measure of a processor, IBM's Bernie Meyerson wants you to reconsider. Meyerson, who heads research and development efforts for Big Blue's semiconductor group, says processor chip speed is old news. Go to ZDNet for the interview."
  • Speed (Score:5, Insightful)

    by Anonymous Coward on Wednesday June 09, 2004 @09:42AM (#9376946)
    The end result that people care about. When a system is purchased, and people are looking at transaction processing capabilities, that is an end result. They are not looking at whether the clock frequency of the microprocessor is 8 percent higher.
    Isn't that how non-idiots have been looking at it, all along? I don't think this is really a new attitude.
    • Re:Speed (Score:5, Insightful)

      by Short Circuit ( 52384 ) <mikemol@gmail.com> on Wednesday June 09, 2004 @09:48AM (#9377022) Homepage Journal
      No, it's how people in the know look at it. There's a difference between being stupid and being ignorant. One is curable.

      (Odd...I feel like I just quoted someone. But I can't remember who.)
    • Re:Speed (Score:4, Insightful)

      by Anonymous Coward on Wednesday June 09, 2004 @09:51AM (#9377049)
      agreed.

      Except that most people stop their research into what chip they want when they see a RBFN with the letters "MHz" or "GHz" printed next to it. Never mind how other factors influence the true optimality of a chip. I personally would much rather see a standard numerical rating be developed (FLOPS may work), except that some (coughIntelCough) won't use that in marketing materials because it shows inefficiency. (Much like how Hummers don't print their gas mileage on showroom display materials.)
      • Re:Speed (Score:5, Interesting)

        by Short Circuit ( 52384 ) <mikemol@gmail.com> on Wednesday June 09, 2004 @10:12AM (#9377284) Homepage Journal
        FLOPS won't work; it ignores workloads that use integer math. It also ignores workloads that specialize in vector math. And workloads that depend a great deal on automated decision-making. And random-number generation.

        The problem is that no matter what metric you use, it won't fit all cases. Different workloads have different requirements. Personally, I'd like to see programmable hardware...Essentially an FPGA section on CPUs. Programs would provide the OS's scheduler with a circuit layout, and the scheduler would have the layout programmed in when needed.

        Each program doesn't necessarily have to have access to the whole grid array, either. The scheduler could divide the array into sections. One section would be for speeding up scheduler operations. The rest would be available to have programs loaded in. You wouldn't even need to erase one program's hardware when another program had something it wanted to implement. With the hardware divided, you could load the new program's code into an empty slot, and leave the old code available for the old program's next timeslice. (To prevent having to reprogram the FPGA section every time the program's turn came about.)
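The scheduler idea above can be sketched in code. This is a hypothetical illustration in Python: slot bookkeeping only, with every class and method name invented for the example, and the actual FPGA bitstream programming abstracted away.

```python
# Hypothetical sketch of the idea above: an OS scheduler that manages an
# on-CPU FPGA region divided into slots, programs a circuit layout into a
# free slot, and leaves it resident across timeslices to avoid constant
# reprogramming. All names here are invented for illustration.

class FpgaScheduler:
    def __init__(self, total_slots, reserved_for_os=1):
        # Slots 0..reserved_for_os-1 are kept for the scheduler's own use.
        self.slots = {i: None for i in range(reserved_for_os, total_slots)}

    def load(self, program_id, bitstream):
        # Already resident? No reconfiguration needed on the next timeslice.
        for slot, owner in self.slots.items():
            if owner == program_id:
                return slot
        # Otherwise program the first empty slot.
        for slot, owner in self.slots.items():
            if owner is None:
                self.slots[slot] = program_id
                return slot
        # All slots busy: evict one (a real scheduler might use LRU).
        victim = next(iter(self.slots))
        self.slots[victim] = program_id
        return victim

sched = FpgaScheduler(total_slots=4)
slot_a = sched.load("prog_a", b"<layout-a>")
slot_b = sched.load("prog_b", b"<layout-b>")
assert slot_a != slot_b                               # both stay resident
assert sched.load("prog_a", b"<layout-a>") == slot_a  # no reprogramming
```

The key design point matches the comment: a program's layout is only evicted under slot pressure, so its next timeslice normally pays no reconfiguration cost.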
        • Re:Speed (Score:3, Informative)

          by afidel ( 530433 )
          SPECfpBase2000 and SPECintBase2000 cover almost everything useful you can do with a computer. Btw, the problem with FPGAs is that gate reconfiguration times are so slow that you REALLY have to be doing a lot of something for it to make sense, like compressing an entire movie, and even then you might be able to get 90+% of the speed by using assembly and vector ops. Add to that the fact that FPGAs have a limited number of rewrite cycles and generally use different fab processes, and it gets to be pretty stupid…
          • Re:Speed (Score:4, Informative)

            by harrkev ( 623093 ) <kevin.harrelson@gm[ ].com ['ail' in gap]> on Wednesday June 09, 2004 @10:40AM (#9377618) Homepage
            Most FPGAs are RAM-based. Reconfigure as much as you want. This includes every Xilinx FPGA made. And there are some Xilinx Spartan II parts under $10. Pretty cool!

            There are only a few FPGAs which use any sort of non-volatile memory (Actel's Pro-Asic being one). Those would have a limited life.
            • Re-programmable (Score:4, Informative)

              by The Conductor ( 758639 ) on Wednesday June 09, 2004 @11:12AM (#9378032)

              There is a trade-off between speed, reliability, cost, and re-programmability.

              SRAM types
              Are re-programmable but require a rather slow serial load at boot-up. Reliability in embedded systems leaves something to be desired, since any brownout-induced glitch can create errors that are even worse (harder to recover from) than software glitches, because wired logic doesn't have anything equivalent to code checksums or interrupt vectors. Well-paid FPGA designers are versed in the arcane art of self-verifying logic.

              EEPROM types
              Come alive at boot up and are much more resistant to glitches. Their performance, however, is slow. And you have limited (100,000 maybe) rewrite cycles.

              Anti-fuse types
              are made by Actel [actel.com]. They have the highest performance and best density. They come alive at boot up and are dead-nuts reliable under the worst of conditions; for example, properly qualified, they can survive the cosmic radiation in spacecraft that would leave other types toasted. The big drawback: the anti-fuse process, which works by melting diodes into short-circuits, is not erasable.

              Desktop systems (say, an add-on FPGA card) would be best served by SRAM types, since you already have a processor that requires gluttonous gobs of puritanically clean DC power. Basement hardware hackers would be better served by EEPROM or antifuse types (depending on performance requirements), since they don't require super-expensive exotic design software.

              • Re:Re-programmable (Score:4, Insightful)

                by harrkev ( 623093 ) <kevin.harrelson@gm[ ].com ['ail' in gap]> on Wednesday June 09, 2004 @11:33AM (#9378316) Homepage
                I would disagree with the last statement. Xilinx FPGAs are perfect for an experimenter. They can be easily programmed with a JTAG cable, just like everybody else's parts. And Xilinx has low-cost and free design suites available. This makes it perfect for development/debug. A home experimenter is likely to make a LOT of mistakes when designing, and EEPROM-based parts take longer to program, and they DO wear out after burning too many times.

                However, programming a Xilinx part in an embedded system (without a PC attached) requires a way to program a serial EEPROM. Programming this might be a pain, but Atmel (for one) makes serial EEPROMs for just this purpose, and will also be happy to sell you a programming cable.
        • Re:Speed (Score:2, Interesting)

          by Anonymous Coward
          Oddly enough, this idea sounds like a research paper I read... like 4 years ago, now; and I'm sure there were more earlier, as the paper referenced other similar adaptable architectures.

          The drawbacks: you either need a FPGA that you can reprogram very quickly (on the order of nanoseconds) or you need a task that takes a long time and can more than benefit from having to take the time it takes to reprogram the FPGA. The latter is not terribly useful for your average desktop machine.
        • Re:Speed (Score:3, Insightful)

          by joe_bruin ( 266648 )
          Personally, I'd like to see programmable hardware...Essentially an FPGA section on CPUs. Programs would provide the OS's scheduler with a circuit layout, and the scheduler would have the layout programmed in when needed.

          sure... great idea. just like the idea of putting an array of dsps on the pci bus, to act as a generic accelerator unit.

          of course, the problem with both of these is: who is going to program for these? some specialized high performance applications, perhaps (openssl, linux kernel, autocad, …)
    • by simpl3x ( 238301 ) on Wednesday June 09, 2004 @10:20AM (#9377395)
      business is often about defining your strategy for approaching new business. ibm is stating that openness will benefit their business. others have recognized this holistic approach to systems design, which from the beginning of computing was really required for high end/specialized systems. some wanted openness for additional reasons, such as freedom in beer and speech. so the same stick has simply been picked up to use in a more competitive environment against businesses not capable of integrating both sides of the computing environment.

      so long as everybody has their needs met, it's a good thing(tm).
  • They're only, what, almost a decade late making the observation that it's no longer as relevant for the average consumer?

    Sorry, I'm a bit bitter today.
    • Well, it is IBM. In the past, their ability to think 'ahead' was rather well portrayed when they decided that there was no home market for computing. But at least they are catching up, right?
    • I beg to differ. It's relevant to the self-esteem of the average consumer,
      when comparing his purchase to that of his neighbor.

      I find it ironic that MIPS were dropped from most advertising, in part, because they were misleading, so manufacturers went back to quoting clock frequency, which is even more so.
    • The hottest consumer electronic product right now may well be the digital camera.

      Given that it takes a lot more cycles to process an image than, say, a spreadsheet, I think the consumer very much wants processing speed.

      The next hot item will likely be digital video actually going somewhere other than the shoebox.

      This will be even more cycle intensive.

      The market responds to technology in fits and starts, but the analysis says many consumer products are still throttled by the speed of the consumer PC.
  • by Seth Finklestein ( 582901 ) on Wednesday June 09, 2004 @09:44AM (#9376969) Journal
    Oh, look. A story on why clock speed doesn't matter. Perhaps this is a cover-up as to why the new G5s [slashdot.org] aren't as fast as Apple promised.

    SHAME on you, IBM, for causing Steve Jobs' promises not to come true.
  • real speed (Score:5, Funny)

    by Anonymous Coward on Wednesday June 09, 2004 @09:45AM (#9376985)
    personally i like to measure the speed by how many eggs i can cook on it per minute.

    my celeron can probably only do 2 or 3, i'm sure the P4 can top that though.
  • Correct (Score:5, Funny)

    by Lord_Dweomer ( 648696 ) on Wednesday June 09, 2004 @09:46AM (#9376994) Homepage
    Right on about processor speed not determining how fast a computer is.

    I mean, everybody knows it's the cold cathode lights, plexiglass windows, and stickers that make it go faster.

    • Re: Correct (Score:5, Funny)

      by Mz6 ( 741941 ) * on Wednesday June 09, 2004 @09:48AM (#9377013) Journal
      "I mean, everybody knows it's the cold cathode lights, plexiglass windows, and stickers that make it go faster"

      You must own a Honda.. perhaps a Civic to be exact?

    • Re:Correct (Score:5, Funny)

      by Ruprecht the Monkeyb ( 680597 ) * on Wednesday June 09, 2004 @09:52AM (#9377063)
      No, it's the speed holes that make it go faster. You know, those tiny little holes in the socket -- the more holes, the faster the processor.

      And I'll let you in on a little secret -- those pins that go in the holes are actually there to slow the CPU down. No need to buy a new processor -- just clip off a couple of the pins on your current PC and it'll go much faster.

      • Re:Correct (Score:5, Funny)

        by Oscaro ( 153645 ) on Wednesday June 09, 2004 @10:13AM (#9377309) Homepage
        And I'll let you in on a little secret -- those pins that go in the holes are actually there to slow the CPU down. No need to buy a new processor -- just clip off a couple of the pins on your current PC and it'll go much faster.

        I tried, but the magic smoke came out of the CPU and it doesn't work anymore (for those who know, the magic smoke is what makes the chip work; if it escapes, the chip dies).
  • by L. VeGas ( 580015 ) on Wednesday June 09, 2004 @09:46AM (#9376996) Homepage Journal
    Clock speed is good, but what I look for in a processor is that ephemeral processor attitude. Can I show it off to friends? Will my mother think it's cute, or is it a little ... dangerous? I want a processor that says something about me. That I'm a rebel that won't take no for an answer. That I'm cool without trying. If a processor can't do that for me, well I'm just not interested.
    • by foidulus ( 743482 ) * on Wednesday June 09, 2004 @10:06AM (#9377220)
      Clock speed is good, but what I look for in a processor is that ephemeral processor attitude. Can I show it off to friends? Will my mother think it's cute, or is it a little ... dangerous? I want a processor that says something about me. That I'm a rebel that won't take no for an answer. That I'm cool without trying. If a processor can't do that for me, well I'm just not interested.
      I have a suggestion, get a personalized tattoo on your ARM!
      *Rim shot
      I apologize.
  • It's interesting that Apple is releasing (in July) the IBM-made G5 that can go to 2.5 GHz. It seems like people still care if a processor can go "more GHz". I think it is smart what AMD did with their 3000+ chips (or any other something+ chip). It makes people think the processor runs faster when it really doesn't.
    • What bugs me is how AMD and Intel kicked up their processor speeds. Both of them made their pipelines deeper.

      While that's fine for some workloads, with more instructions being executed at the same time, it harms workloads that depend heavily on the results of current calculations to figure out what to do next.

      Intel eased the problem by implementing hyperthreading; I'm surprised we haven't seen the same thing come out of AMD's corner.
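The trade-off described above (deeper pipeline, higher clock, bigger misprediction penalty) can be put in rough numbers with a standard effective-CPI model. The figures below are illustrative assumptions, not measurements of any real Intel or AMD part.

```python
# Back-of-envelope model of the point above: a deeper pipeline allows a
# higher clock, but every mispredicted branch flushes roughly the whole
# pipeline. All numbers are illustrative, not measured.

def effective_rate(clock_ghz, pipeline_depth, branch_rate, mispredict_rate):
    # Average cycles per instruction: 1 plus the expected flush cost.
    cpi = 1 + branch_rate * mispredict_rate * pipeline_depth
    return clock_ghz / cpi   # roughly, instructions per nanosecond

shallow = effective_rate(2.0, 12, branch_rate=0.2, mispredict_rate=0.08)
deep    = effective_rate(3.0, 31, branch_rate=0.2, mispredict_rate=0.08)
print(round(shallow, 2), round(deep, 2))   # → 1.68 2.01
# A 50% higher clock buys only ~20% on this branchy workload.
```

On a workload with fewer dependent branches the deep pipeline's clock advantage shows through almost fully, which is exactly why a single MHz figure misleads.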
    • by Jameth ( 664111 ) on Wednesday June 09, 2004 @10:00AM (#9377150)
      "It makes people think the processor runs faster when it really doesn't."

      Actually, it makes people think the processor runs faster when it really *does*. Which is why I like their numbering scheme: it compensates for consumer ignorance.
    • by Anonymous Coward
      It makes people think the processor runs faster when it really doesn't.

      Partially. When I bought my 2200+, I understood that it ran at 1800 MHz. I looked at all the specs first. And then I saw the comparison to the Intel machines that I was looking at. For less money, I bought my AMD parts that put the Intel equivalent to shame. At 1800 MHz, the Intel couldn't come close. At 2200 MHz, as AMD wants you to think of when comparing chips, they were pretty close. The Intel chip beat the AMD in a few categories…
    • A 2.4GHz Opteron would be more than fast enough for me. What I'm looking forward to is the ability to take advantage of the bandwidth available to, say, DDR PC4200 RAM. Seems like we've stalled at 200MHz (PC3200).
  • by Anonymous Coward on Wednesday June 09, 2004 @09:47AM (#9377003)
    if you think clock speed is the most important measure

    I think it is very important for clock speed - the crystal in my watch runs at 64 kHz to keep time, which is quite important. Let's see you Solaris or AMD overclockers beat that!
    • I've overclocked my watch to 67.2 kHz and it rocks! My workouts go much faster now (and better, what with the water cooling rig acting as a wrist weight). Although I never seem to get enough sleep and nobody ever shows up on time for my meetings... Hrm... -jrrl.
  • by MacFury ( 659201 ) <me@johnkramlichP ... minus physicist> on Wednesday June 09, 2004 @09:47AM (#9377004) Homepage
    What other applications besides games really tax the CPU right now?

    I do a fair amount of video editing and image manipulation, even still my two year old computer works fast enough for me...

    Does the average Joe need the computing power they are given?

    • Come on, everyone needs the power they have. You don't think being a spam zombie has no effect on the CPU do you?
    • Does average Joe mean your average web surfer? Only online to pay bills, surf the internet, read mail and send pictures. Or is your average Joe the person who uses it for editing home videos plus the average internet user?

      I think you might be seeing a plateau in "average" users purchasing bigger, better processors or computers. If the computer they have can get the job done for them and they aren't moving on to other computer tasks (ala video editing, rendering, etc..) they don't mind having 2+ year old machines…

    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Wednesday June 09, 2004 @09:58AM (#9377122)
      Comment removed based on user account deletion
      • Having 2 or more CPUs will only help you if your application has the ability to use them. If it's a single-threaded app, then you'll just be heating up your system case more than necessary for no gain.
        • Not exactly. Using SMP helps in other ways. The OS/resources need managing, and when his application is using CPU 1 at 95-100%, CPU 0 is hugging curves and taking names at about 30%, doing memory and disk management, etc. He is having lock-ups because he has one CPU. More RAM, faster disks, more CPUs will help him out. While the application may not be able to make use of multiple threads (hypothetically, it very well might be able to), the system as a whole sure could use multiple-thread execution. HT is n…
      • Check your cooling (Score:2, Informative)

        by Trigun ( 685027 )
        The processor shouldn't just lock up like that. Perhaps the heavy load is overheating it?
    • by glob ( 23034 ) on Wednesday June 09, 2004 @10:02AM (#9377172) Homepage Journal
      > What other applications besides games really tax the CPU right now?

      as a developer i want compilation to be as quick as possible.

      also i make heavy use of vmware, which needs as much grunt as it can get.
        as a developer i want compilation to be as quick as possible.

        Then don't use C++. Seriously.

        I've been using Delphi since it was Turbo Pascal, and compile times have been a non-issue for me since I can remember. Even on my 333MHz P2, I've never had a perceptible compile/link time, ever. Even for full rebuilds.

        And these days, anyone doing heavy software development should be using Perl, Python, Lua, Lisp, anything dynamic and lightweight.
        as a developer i want compilation to be as quick as possible.
        If you're doing x86 stuff, you want an
        AMD [anandtech.com] processor.
    • I work at a company that designs chips. The amount of CPU power that is needed to verify the correctness of a design keeps on going up and up and up.

      Granted, it's not an average application, but I do consider myself to be an average joe, so there you go. :-)

    • There are many applications, like software defined radios and televisions, that require huge amounts of number crunching.
    • Good question.

      I contribute my spare computing cycles to distributed computing efforts [yahoo.com].
      The distributed.net client [distributed.net], at least, does not require the computer to be switched on 24/7 either.
      Even playing DVDs only takes up about 25% of the CPU time, and I've had no problems with overheating. I sometimes think about all that AC power and all the computer cycles being wasted at the university computer rooms.

      Maybe O.S. vendors could include a voluntary option during install to contribute your computing power. They could the…
    • Power is not for PC (Score:5, Interesting)

      by rve ( 4436 ) on Wednesday June 09, 2004 @10:26AM (#9377462)
      The POWER architecture isn't really for the average Joe's computer; it is for servers. In servers, many tasks are done by coprocessors and independent subsystems without taxing the CPU. The extra CPU performance is now suddenly needed because IBM keeps encouraging ISVs to write for WebSphere, in Java, so you now need 10 times more memory and CPU performance than you previously did to perform the same task. In servers, the worst bottleneck at the moment is, afaik, still the moving parts in the disks and tapes.

      The PPC is a cousin of the embedded version of the chip, where performance per watt is relevant. It is hugely successful.

      Sales of Apples with desktop POWER chips aren't really significant. Although IBM aren't ready to admit it yet, the battle for the desktop is long over. No amount of performance advantage is going to outweigh the main advantage of the x86 architecture there: backward compatibility, preserving the value of past investments in software for the customer. IBM should know this, as they still make their zSeries mainframes compatible with the 40+ year old 360 architecture for the same reason.

      In the PC, unlike most servers, most everything goes through the CPU, which is why for the average Joe raw CPU performance _does_ matter.
      • by cbiffle ( 211614 ) on Wednesday June 09, 2004 @11:58AM (#9378685)
        The extra CPU performance is now suddenly needed because IBM keeps encouraging ISVs to write for WebSphere, in Java, so you now need 10 times more memory and CPU performance than you previously did to perform the same task.


        Your post is, for the most part, dead-on and well-put, but I can tell you're not an enterprise Java developer.

        Our transaction processing systems were recently moved to Java from C (Solaris on a Sunfire 6800, 8-way SPARC).

        Yes, they require more memory. This doesn't really concern us because we spend far less time tracking down dangling pointers and memory leaks now. The increase in memory seems to be about 4x-6x for our system, which still brings it in under a gig.

        No, they do not require more CPU. Several parts of our application actually run faster than the C version. I credit the Hotspot on-the-fly optimization crap for this to some degree, but I'm honestly not sure what the deal is. (And I'm our profiling guy. Ain't that sad? :-)

        But more importantly, as you mentioned, on big iron the I/O throughput tends to be the bottleneck anyway. Our transaction-processing systems tend to sit happily with significant idle percentages while positively slamming the disks and databases.

        We're running inside Sun's Solaris JVM in a hacked-up proprietary version of EJB, using Tomcat for the frontend. I can't imagine that Websphere has much higher overhead, though I could certainly be wrong.
        • by rve ( 4436 ) on Wednesday June 09, 2004 @02:09PM (#9380360)
          It's not the programming language Java per se, but writing in an interpreted language for an application server sitting on top of a virtual machine sitting on top of the operating system's HAL sitting on top of the hardware, as opposed to writing a natively compiled app for the HAL as we did before.

          But you're right, I'm not an enterprise Java developer.
        • by rve ( 4436 )
          This doesn't really concern us because we spend far less time tracking down dangling pointers and memory leaks now.

          That isn't really an issue for COBOL programmers hehe :)

    • Paint. Some effects can be fairly intensive.

      CAD and architecture.

      Animation and modeling, the interactive 3D part and rendering.

      Video editing.

      Flash or animated SVG.

      Virtual reality, either games or some kind of virtual presence. There is never going to be enough CPU or graphics power to accurately model reality or unreality. As the CPU, GPU and memory continue to grow, so will the level of realism and complexity of the virtual world. Snow Crash and Diamond Age laid down the gauntlet; it's been slow in coming…
    • What other applications besides games really tax the CPU right now?

      Are you planning on a career in the computing field? If so, please send me your name and address info so I'll know never to hire you.

      If you think the only thing a fast CPU is used for these days is playing "Unreal Tournament", you have no business managing (or even logging in to) the kinds of UNIX boxes some of us deal with on a daily basis. Yes, a faster CPU (or 24 of them) is more than necessary for some of the applications we run in…
    • by Dr. Zowie ( 109983 ) <slashdot@@@deforest...org> on Wednesday June 09, 2004 @11:06AM (#9377958)
      Scientific computing. Here are things I am working on now that I wish I had more power for:

      • Artificial vision - I wrote and use analysis software that tracks motion of magnetic poles on the Sun's surface. There are about 10,000 - 50,000 poles visible on the surface at any given time. It takes a full day for my dual Athlon to process a full day's worth of data. A 10x speedup would be great!
      • Space physics simulations - trying to describe the behavior of solar storms is very computationally intensive. It takes a minimum of ~10^16 floating-point operations to simulate even a simple coronal mass ejection. That's a CPU-year.
      • Sound reprocessing - as I digitize my LP collection, I depop and noise-gate every track. It takes 2-3x as much clock time to digitally process the music as it does to play the LP into my sound card to be digitized.
      • Compilation - enough said
      • Stupidity in scientific software. A lot of my scientific work involves one-off codes to perform a particular operation on my data. Robust, reliable, transparent algorithms are just as important as the final result (after all, how can you trust a result if you haven't spent the time to understand the algorithm and all its nuances?). The stupider the algorithm, the better. If I can save a day of analytical work with a CPU-hogging numerical solution that takes 10 minutes to code and an hour to run, I've saved most of a day -- unless I have to run that code more than ten times. But as my CPU gets better the tradeoff improves and I can get my work done faster. The savings here aren't in direct CPU time; they're in allowing me to use stoopider code more of the time.
      • I'd really like an AI to write /. replies for me so I can get more work done.
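As a quick sanity check of the ~10^16-operation estimate in the list above, assuming a sustained rate of about 300 MFLOPS (a rough guess at a 2004-era desktop's throughput, not a measured figure):

```python
# Sanity check of the ~10^16-flop estimate above, assuming a sustained
# rate of 300 MFLOPS. The rate is an assumption, not a measurement.

total_ops = 1e16
sustained_flops = 3e8                      # 300 MFLOPS, assumed
seconds = total_ops / sustained_flops
years = seconds / (365 * 24 * 3600)
print(round(years, 1))                     # → 1.1
```

So "a CPU-year" checks out under that assumption; a 10x speedup would bring the same run down to about five weeks.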
  • by AKAImBatman ( 238306 ) <`akaimbatman' `at' `gmail.com'> on Wednesday June 09, 2004 @09:47AM (#9377005) Homepage Journal
    He doesn't really appear to offer any substantial concepts for performance improvements. Shrinking the die and upping the clock speed are the most common performance improvements because they are the most effective. Changes to the chip's structure or internal coding only result in a one-time 10-20% performance boost. And concepts like programmable gate arrays still have to follow the laws of physics.

    Sure, you may be able to optimize a few very common pathways. But you simply can't optimize all of them. Thus a "perfect" algorithm for pathway adaptation would again net you one of those 10-20% increases on a general processor. A dedicated machine (e.g. one attempting to calculate ever more digits of pi) could of course see several times the performance, but then you have to weigh an expensive programmable chip against a cheap custom chip.
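The "10-20% on a general processor" intuition above is essentially Amdahl's law: speeding up only the pathways a workload actually exercises bounds the overall gain by their share of runtime. A minimal sketch, with illustrative fractions:

```python
# Amdahl's law: overall speedup when only a fraction of the runtime is
# accelerated. The fractions below are illustrative assumptions.

def amdahl(fraction_accelerated, local_speedup):
    return 1 / ((1 - fraction_accelerated)
                + fraction_accelerated / local_speedup)

# Even an effectively infinite speedup on 15% of execution time yields
# less than a 20% overall gain:
print(round(amdahl(0.15, 1000), 2))        # → 1.18
```

A dedicated machine is the limiting case where the accelerated fraction approaches 100%, which is why it can see several-fold gains where a general processor cannot.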
    • by DigitalDreg ( 206095 ) on Wednesday June 09, 2004 @10:00AM (#9377142)
      You missed the point.

      He is saying that people have run out of the easy optimizations, and that it is more important now to concentrate on the performance of the whole package, not just the core.

      To that end people providing their own macro designs will allow Power to extend in ways IBM isn't planning on. Need better I/O handling? Somebody might sell it to you. Need a cache controller that handles a high number of outstanding cache requests because your software isn't cache friendly? Somebody might have that too. Need to find these people with these designs? They'll all be talking to each other as part of PowerPC consortium ...

      This opens up avenues for more creative uses that complement the basic core, and helps bring down design time. Before, you might not even have contemplated a custom chip based on a PowerPC design. In a few years, you might be able to glue a few building blocks together to get it.
      • It still sounds sensationalist. Sun has had the Sparc processor as an open standard for years. MIPS is an extremely popular chip for customizing. Despite the thousands of different companies who've customized these chips, neither one has seen a significant divergence in design. A third party will usually add a few instructions specific to their device (e.g. SIMD-like instructions were added to MIPS for video games) and leave it at that.

        What history has shown is that general computing is general computing…
      • by Arker ( 91948 ) on Wednesday June 09, 2004 @10:28AM (#9377490) Homepage

        I think you grok this well.

        Clock speed has never been the main factor in the performance of your computer - it's just been a number that works well for marketing. Your typical modern cpu is idle most of the time anyway. When you increase the clock speed, it does increase performance, but not linearly - doubling the clock speed on your chip might only give you a 10% boost or so in terms of real world performance.

        I remember back when the Pentium first came out, having two systems with P60s to compare, the only difference between them being that one had 4 times the cache memory and, I believe, a better cache-logic implementation. The system with the superior motherboard was in a whole higher class, performance-wise, on every task we threw at it; although the effect was much more pronounced on some tasks than others, it was striking in every case.

        As CPU power has been growing far faster than IO capabilities, I would expect the same sort of testing with new systems today would show even more dramatic effects.

        Better IO handling is very important for many different applications. Just look at the difference between running an application that will fit in cache against one that requires constant work with your main RAM bank. It's huge. So is the difference between a program that will fit in main RAM and one that requires page swapping with VM. Massive difference. Increasing clock speed shaves a microsecond off here or there, but it does nothing about all the wasted cycles while the CPU waits on IO.

        CPU speed over the past 20 years has increased incredibly, but IO capabilities in the PC haven't improved at anything like the same rate. Making CPUs smarter (not necessarily faster, but more efficient at using the speed they already have), using bigger, better-designed caches, and improving IO systems are likely to be much more efficient ways of increasing real-world performance than cranking up the clock speed.
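The point above about cycles wasted waiting on IO can be made concrete with a toy model: if stall time doesn't shrink with the clock, doubling the clock buys much less than 2x. The nanosecond figures below are illustrative assumptions, not measurements.

```python
# Toy model: runtime split into clock-scaled compute and fixed memory/IO
# stalls. The nanosecond figures are illustrative assumptions.

def runtime_ns(compute_ns, stall_ns, clock_multiplier):
    # A faster clock shrinks only the compute portion; stalls are unchanged.
    return compute_ns / clock_multiplier + stall_ns

base = runtime_ns(compute_ns=40, stall_ns=60, clock_multiplier=1)
doubled = runtime_ns(compute_ns=40, stall_ns=60, clock_multiplier=2)
print(round(base / doubled, 2))            # → 1.25: 2x clock, 25% faster
```

Shrinking the stall term instead, with better caches or faster IO, attacks the larger share of the runtime in this model, which is the comment's argument in miniature.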

    • by frinkster ( 149158 ) on Wednesday June 09, 2004 @10:08AM (#9377237)
      I had a tough time translating his statements from management/consultant-speak to English, but I think one of the things he was trying to say is that the cost and effort required to continually shrink the die and up the clock rate are growing quickly, so much so that IBM doesn't feel it's worth it to focus on that aspect in the quest for improved performance.

      IBM is surely going to continue to shrink the die and increase clock rate, but it seems as though for the same amount of R&D they feel that there are more gains in performance to be had by looking elsewhere.
  • by Throtex ( 708974 ) on Wednesday June 09, 2004 @09:48AM (#9377010)
    ... doesn't even have a clock, you insensitive clod!
  • RISC (Score:2, Interesting)

    by Coneasfast ( 690509 )
    is the clock speed still relevant for RISC chips too? or should that be measured differently too?

    i would think the clock speed has more meaning for RISC processors.
    • Well, since IBM creates PowerPC RISC processors, that would probably be the type of processor Meyerson is referring to.
    • Re:RISC (Score:3, Informative)

      by drinkypoo ( 153816 )
      Clock speed DOES have more meaning for RISC processors. You can judge their performance more or less by the change in clock speed from a prior chip with the same instruction set. However modern RISC processors are not entirely RISC, they contain coprocessors which have instructions which definitely take more than one cycle to complete. Granted the instruction that ships the data off to them should still only require one instruction, and later it should only take one instruction to get it back, so it becomes
  • Same old (Score:5, Informative)

    by Anonymous Coward on Wednesday June 09, 2004 @09:49AM (#9377029)
    Sounds like the same thing AMD has been trying to convince people of for the past 5 years, while Intel has been lengthening their processor pipelines to ramp up clock speed while effectively lowering instructions per clock. Unfortunately no one bought it when AMD was saying it, so they had to come out with their PR naming system. Let's hope that IBM, with its significantly bigger clout, can change the picture. Intel seems to be getting on board too; there are rumblings of them moving their notebook M processors to the desktop, as things have gone to hell transitioning to 90nm fabrication (in terms of power dissipation).
  • by foidulus ( 743482 ) * on Wednesday June 09, 2004 @09:49AM (#9377036)
    in both hardware and software. While their plan doesn't seem nearly as open as GPL software, it's still a step in the right direction.
    If they succeed, it doesn't bode well for the x86 architecture, which seems to be a victim of its own success. They seem to be trapped into just adding faster clocks instead of changing the architecture. They still have neat things like Centrino, but the marketing droids seem to have control over the engineers there. Every update just seems to be a faster clock speed without regard to how much it actually increases performance (I think this is evident in a lot of consumer PCs, where they put in the latest and greatest Pentium processor but then add a paltry amount of RAM). I'm not saying I know more than the Intel engineers; I think they are doing a fabulous job with what they have to work with, but... I don't know where I am going with this. I'll just sit back and burn some karma now... damn ADD
    • "If they succeed it doesn't bode well for the x86 architecture, which seems to be a victim of its own success. They seem to be trapped into just adding faster clocks instead of changing the architecture."

      It seems you haven't been introduced yet. Foidulus, this is x86-64. He and the Opteron family want to have a little talk with you.
    • Seems IBM is embracing open standards

      It also seems that IBM is a few years late [sparc.org] in that respect. (See: IEEE Standard 1754-1994)

      If they succeed it doesn't bode well for the x86 architecture, which seems to be a victim of its own success. They seem to be trapped into just adding faster clocks instead of changing the architecture.

      As much as I can't believe I'm defending the Intel architecture, Intel *has* been modifying their chip design. Out of order instructions, Superscalar execution, instruction p
      • (Referring to SPARC:)

        It also seems that IBM is a few years late in that respect. (See: IEEE Standard 1754-1994)

        But isn't SPARC (the standardized part) only an instruction set? IBM appears to be opening up much more, revealing details about the actual implementation of the architecture.

        What I still can't tell is, will it be possible for a guy with a fab to retool his factory for building IBM CPU's, selling them indefinitely w/o paying the license.

        Now *that* would be Open Hardware. Being able to improve
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday June 09, 2004 @10:11AM (#9377272) Homepage Journal
      The x86 instruction set is obviously not the future for desktop systems, at least not in the form of x86. In the near term it looks like that future is x86-64, which is not really the same as the x86 instruction set (which itself has changed over time) though it is dramatically similar of course.

      x86 processors have managed to bump the clock and improve the architecture. You have to do both to be successful. Having higher clock rates IS a benefit even if you do nothing else, so long as the rest of the system can keep up. IMO you can give most of the credit for the improvement of x86 to AMD, which really pushed its limits admirably. They also didn't manage to push clock rates as far (at least, not as soon) so they had to add more functional units and tie them all in, which is exactly what you're talking about, an alternative to increasing cycles per second.

    • by SoopahMan ( 706062 ) on Wednesday June 09, 2004 @10:17AM (#9377360)
      I disagree - you're forgetting AMD. Intel has been pushing clock speed for a long time, and many consumers are still fooled by this engineered-for-marketing strategy.

      But AMD very-much uses the x86 architecture, and has long emphasized things other than clock speed. They've already put into action several of the things IBM's Bernie Meyerson seems to think he brilliantly came up with:

      • Efficiency: Athlons just plain get more performance per clock than an Intel. There are a lot of factors that contribute to this including the length of the pipeline, but the design just gets more done with each tick. That's less complexity and less...
      • Power usage: Athlons have 10-12 pipeline stages compared to the Pentium 4's 30. Between that increased efficiency and less need for a large cache (a long pipeline makes cache misses costlier, so the P4 leans on a big cache to stay fed), an Athlon can use far less power than a P4 for the same performance, and consequently generate much less heat.
      • Interacting with software: Also not new; more recent desktop AMD chips internally clock themselves up and down depending on whether you're idling or running an app, and laptop chips have done this for years. That means the PC novice's assumption that leaving a PC on over lunch won't use much power becomes closer to valid. For the power user, the PC churns out less heat overall, since it only pumps heat under peak usage.
      There are things the guy lists that are just freakin' out there:
      We are even building in the capability for the chip to physically morph, if required. For instance, you spot an excessive number of fails occurring in the memory--we have techniques in software that recognize those errors. But if it turns out that for whatever reason, one segment of the chip drives an extreme amount of correction, one can easily envision the system autonomically issuing a command to remove that segment.

      Uh, dude, this isn't an episode of Transformers, it's a CPU. AMD and Intel already resolved this issue by building very strong chips that don't fail. Even if physically modifying the chip to lop off the bad parts is possible, I can only see it leading to a reduction in quality of chips produced, with manufacturers knowing that worst case, if it fails, it'll just lop itself to pieces.

  • by mattkime ( 8466 ) on Wednesday June 09, 2004 @09:52AM (#9377060)
    to IBM's website.

    I hate it when articles are posted about small, obscure companies and then I can't find their website.
  • It's all marketing (Score:5, Insightful)

    by grunt107 ( 739510 ) on Wednesday June 09, 2004 @09:58AM (#9377127)
    The processor speed for marketers is comparable to the engine size wars in the 60s/70s. If I say I have a 402 (6.6L) in my Chevelle and Bob next door has a (snicker) 350 (5.7L) in his Nova, my car gets the approving nods, but may not be faster since the Nova is lighter. Now compare said Chevelle w/today's Z06 'vette. Little 'wimpy' vette has just a 5.7L, but kicks the snot outta the Chevelle in performance. IBM, and other marketing 'geniuses', need to name their products to entice the 'mine is bigger' crowd. Right now, in the consumer computer realm, GHz talks. Most non-IT people I know will spout the "My PC is 4GHz - what's yours?" mantra when a 2.8 Opteron w/SCSI320 will kick its butt. The enlightened will know, but 'tis the general ignorant masses that have the buying power.
    • by pknoll ( 215959 ) on Wednesday June 09, 2004 @10:33AM (#9377539)
      To extend your car analogy a bit further -

      Both a Kenworth over-the-road tractor and a Formula 1 car have about 1000 horsepower. But one will accelerate a LOT faster than the other. And one can tow 20 tons of stuff behind it.

      Even IF MHz were directly comparable, you still couldn't judge the speed of a computer without considering what that computer was built to do.

    • very nice analogy. i would mod you up had i the points to give. speaking of SCSI320... hard disk is one crucial factor in performance to which most people don't pay attention. i've only just come to appreciate it myself, doing my first work with some (relatively) large production databases.

      even on my workstation on the job, my P4 2.8GHz HT processor regularly waits and waits for the puny IDE HD to load or seek through a data file or complete a search of the filesystem. it's like wasting access to a genius:

  • ...telling you that it's the GIRTH that counts!
  • by Hoonis ( 20223 ) on Wednesday June 09, 2004 @10:02AM (#9377169) Homepage
    Corporate datacenters are now filling up with half-full racks because cooling & power requirements are through the roof. You end up being unable to increase compute resources because you have to put in fewer of the faster systems, gaining you nothing.

    So hey, hardware vendors, if you're listening: see if you can't simply make the dang things run cooler on less power before you speed them up!
  • Old news (Score:3, Interesting)

    by LincolnQ ( 648660 ) on Wednesday June 09, 2004 @10:02AM (#9377173)
    Apple has been trying to get this message out for a while. We had a story a few months ago (lazy me, no link for you) about how Intel was dropping the clock from the branding of the processors.

    Clock speed really does not have a direct correlation to computer speed anymore. It seems like we will see more of the trend of newer, better technology that runs at a lower rate but executes a lot more in one tick, so it is much faster. It seems that it will start at 1GHz and move up to 3 before somebody gets a new idea, makes a new "slow" processor and starts it over...
  • by kennykb ( 547805 ) on Wednesday June 09, 2004 @10:02AM (#9377176)
    Most applications nowadays founder on memory hierarchy performance (L1/L2 cache, main store, backing store). Cache misses are a usual killer, and fetch prediction doesn't work very well at all yet.

    Even on the base CPU, the most important metric, I find, is "MIPS per watt". That's what determines how much horsepower you can get off a given amount of cooling, which is the real limiting factor for CPU speed.

  • *Everyone* knows? (Score:5, Interesting)

    by funkdid ( 780888 ) on Wednesday June 09, 2004 @10:04AM (#9377197)
    I think to the /. crowd this is certainly old news. Ever try to explain this to Grandma? Or your girlfriend's little brother? *Most* people, after my speech on how processors work, say "Yeah, but aren't AMD chips slow? Like, a Pentium is 3GHz, AMDs are cheap (meaning cheaply made), right?"

    So I "dumb" my speech down a bit and give it again. The masses don't want to know how processors work; they don't want to know about architecture. They want an even baseline to measure performance. Most people think the CompUSA rep is ripping them off, and as uneducated consumers they are just trying to feel good about their purchase.

    By buying the high clock speed they can compare it to their neighbors', and in their heads they have a Super-Fast PC.

    I'd like to note that most people I talk to look at AMD like most people look at a Yugo. (remember those cars?) In spite of my advice that an AMD is like a new Honda for $2,000.

    That's my 2 cents

    • Rather than trying to explain the nuts and bolts of why clock rate isn't important, perhaps you should try an analogy. I haven't actually had to explain it to anybody, but I figure I would compare it to trying to judge a car's speed by reading the tachometer. In that comparison, the gearing plays the role of the processor's IPC.
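
      To hang numbers on that analogy: throughput is roughly clock rate times instructions per clock (IPC), just as road speed is RPM times gearing. A quick sketch, where both chips and all their figures are made up for illustration:

```python
# Two hypothetical processors: the "tachometer" reads higher on the
# first, but the second covers more ground per revolution.
chips = {
    "HighClock": {"ghz": 3.2, "ipc": 0.9},  # deep pipeline, less work per tick
    "HighIPC":   {"ghz": 2.0, "ipc": 1.6},  # shallow pipeline, more work per tick
}

for name, c in chips.items():
    gips = c["ghz"] * c["ipc"]  # billions of instructions retired per second
    print(f"{name}: {c['ghz']} GHz x {c['ipc']} IPC = {gips:.2f} GIPS")

# 2.0 GHz x 1.6 IPC = 3.20 GIPS beats 3.2 GHz x 0.9 IPC = 2.88 GIPS,
# despite the smaller number on the box.
```

      Real IPC varies per workload, of course, which is why no single number ever settles the argument.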
    • I love AMD man, but honda? I think you need to find a SLIGHTLY better car to compare it to ;)

      How about Mazda?

      (yea, ok, fine, I'm one of those people that wont touch a honda - except a prelude SH or an S2000 - with a 10ft pole)
  • Bogo Mips (Score:5, Interesting)

    by dfn5 ( 524972 ) on Wednesday June 09, 2004 @10:09AM (#9377256) Journal
    Clock speed has never been the definitive CPU performance measurement. As everyone knows it is the Bogo MIP.
  • "says processor chip speed is old news": half the posts on /. already say this. I saw two different posts today under different topic headers; one was from the Apple post on the G5
  • by Omega1045 ( 584264 ) on Wednesday June 09, 2004 @10:11AM (#9377280)
    Didn't AMD take this approach some years back? They have to name their processors to sound like Pentium clock-speed ratings, but they have been preaching for years that clock speed is not the sole issue. I know IBM is a technical leader, but it just smells like IBM, like Intel, is jumping on the AMD bandwagon without calling it the AMD bandwagon.
    would want their clock to go at more than (1/60)Hz? I'm trying to get MORE SLEEP, not less. Geeze, just increasing the clock speed to 1 Hz means that on a typical day, I would get .1333 hours of sleep on a good day.
  • what I want to see (Score:4, Insightful)

    by AviLazar ( 741826 ) on Wednesday June 09, 2004 @10:21AM (#9377402) Journal
    I don't care if they want to name it by processing speed, bus speed, or hell, how much donkey speed it has; I just want it to be consistent!!! That means give me Intel Processor 1, then the future processor named Intel Processor 2 WILL be better than the first one, and the third one will be better, etc. Stop coming out with funky new names to confuse me (and the less informed computer users, a category I am probably in). If I want to look at the specs I will, but at least make it easy for people to tell which processor is the latest and greatest! I think the worst case of this is some of the graphics cards: what's better, the 9600 version or the FX version or the super version or the crack version? Come on, have some sympathy for us... damn marketers!
  • The old MHz measurement was nice for a time, but it just wasn't measuring what computer performance geeks find important. Just like car performance enthusiasts like to talk about the horsepower ratings of their engines, what is really important is a measurement of how the system really performs. That's why a quarter-mile timing can be more informative about a car's performance than just looking at the car's horsepower and torque rating.

    So, the new processor measurement that gets right to the heart of what's important for a CPU performance geek is going to be henceforth the PU. The PU stands for 'penis unit' and it indicates to their fellow cpu performance geeks how big their dick is relative to everyone else's.
  • by Dr. Smeegee ( 41653 ) * on Wednesday June 09, 2004 @10:39AM (#9377605) Homepage Journal
    Addressing a fundamental shift in the landscape of technology, a significant shift in the trajectory of basic technology. The rate of performance enhancement is becoming impacted as holistic design. You have to have a means by which you proactively and holistically address that extraordinary event. I am hoping that people really understand the sort of discontinuity we are talking about the capability for the chip to physically morph, one can easily envision the system autonomically issuing a command moving off to an entirely different plane.

    The world is full of incredibly bright people.
  • IBM is attempting to change the rules of performance measurement. They are doing this by educating their customers. Inherently, people want a single performance metric that says X is better than Y because this performance metric says so.

    IBM would prefer customers to come to them and ask IBM, "Which processor is better?" rather than rely on an external, easily verifiable, though not accurate, single number indicator.

    The truth, as we all know, is that there is no single metric since each processor has strengths and weaknesses and various applications rely on these strengths differently.

    They are also opening their processors to the end users a little more, almost as a jab at Intel. Intel has microcode, but you'll never see it or get to modify it. But the very presence of microcode in almost every modern general purpose CPU means that performance can be enhanced and tailored for each application with very little processor change.

    So IBM is letting people get closer with the processor to enhance performance with very little risk or effort.

    The kicker is that it's not simple, so only a few large manufacturers and some dedicated homebrewers will really have anything to show for it.

    Thus it's a marketing ploy intended to raise questions about current performance metrics in the minds of indecisive consumers.

    But then, when has the CPU war ever been about anything but marketing?

    -Adam
  • by itzdandy ( 183397 ) on Wednesday June 09, 2004 @11:25AM (#9378207) Homepage
    here is a thought.

    When you buy a car, you don't just consider how fast it goes, you consider fuel economy/comfort/quality etc. this could be applied to CPUs

    for instance, you buy a new Dell/HP/whatever, and the machine has three numbers on it, say 12/16/65: 12 is the general office-app benchmark, 16 is the gaming benchmark, and 65 is the mean power usage of the machine.

    so a good office machine is a 14/2/30, but if you are playing games you want a 6/26/130; you don't care as much about the power bill or how fast Office computes a6:c6*c14 or whatever. These numbers would be linear, in that the next-gen would just have higher numbers.

    Each computer's label would have a description of the rating above it, saying "look at this killer gamer system" or whatever.

    I can see the argument that the system would be confusing, but I'd take the least confusing method that was effective, and I think this would be effective.

    ------------------

    Something like this could translate over to the server side with web/fileserving/power requirements or something, but it would allow companies like AMD and IBM, who have not pushed the MHz myth to the extreme, to compete on merit, not MHz.

    thoughts??
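
    A minimal sketch of how a buyer (or a store kiosk) might compare such composite labels; the two machine names, the category layout, and every number here are hypothetical, taken from the examples above:

```python
# Each machine carries the proposed three-number label:
# (office benchmark, gaming benchmark, mean watts).
# Higher benchmark numbers are better; lower watts are better.
machines = {
    "OfficeBox": (14, 2, 30),
    "GamerRig":  (6, 26, 130),
}

def best_for(machines, workload):
    """Return the machine with the highest score for a workload index
    (0 = office, 1 = gaming). Power draw is ignored for simplicity."""
    return max(machines, key=lambda name: machines[name][workload])

print(best_for(machines, 0))  # OfficeBox
print(best_for(machines, 1))  # GamerRig
```

    The point of making the scale linear is exactly this: a dumb comparison like `max` over one column gives the right answer without the buyer knowing anything about pipelines or caches.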
