AMD Hardware

The Gigahertz Race is Back On

An anonymous reader writes "When CPU manufacturers ran up against the power wall in their designs, they announced that 'the Gigahertz race is over; future products will run at slower clock speeds and gain performance through the use of multiple cores and other techniques that won't improve single-threaded application performance.' Well, it seems that the gigahertz race is back on — a CNET story talks about how AMD has boosted the speed of their new Opterons to 3GHz. Of course, the new chips also consume better than 20% more power than their last batch. 'The 2222 SE, for dual-processor systems, costs $873 in quantities of 1,000, according to the Web site, and the 8222 SE, for systems with four or eight processors costs $2,149 for quantities of 1,000. For comparison, the 2.8GHz 2220 SE and 8220 SE cost $698 and $1,514 in that quantity. AMD spokesman Phil Hughes confirmed that the company has begun shipping the new chips. The company will officially launch the products Monday, he said.'"
  • by pipingguy ( 566974 ) * on Saturday April 21, 2007 @06:03AM (#18822585)
    This reminds me of the sign at the local breakfast shop (paraphrased): "Use coffee: do stupid things faster".

    Yeah, this is cool, no doubt. How many users actually *use* how much power they already have? I use a lot, but it's mostly dependent on the graphics card.
    • Re: (Score:3, Insightful)

      by Jartan ( 219704 )

      How many users actually *use* how much power they already have? I use a lot, but it's mostly dependent on the graphics card.


      You're correct that people don't need this much power for their desktops but there are still plenty of uses for more speed in servers and for certain other applications.
      • by Name Anonymous ( 850635 ) on Saturday April 21, 2007 @06:32AM (#18822723)

        How many users actually *use* how much power they already have? I use a lot, but it's mostly dependent on the graphics card.

        You're correct that people don't need this much power for their desktops but there are still plenty of uses for more speed in servers and for certain other applications.
        Actually, I think the correct phrase is "most people don't need..." and even that may be inaccurate. Someone who does heavy video work can certainly chew up a lot of processing power. Heavy image work can use a lot of processing power in bursts.

        Then there is the big fact that programmers these days are sloppy and waste resources. A machine that is faster than one needs today will only be adequate in 2 or 3 years given upgrades to all the programs. (Am I being cynical? Maybe, but then again, maybe not.)
        • Re: (Score:3, Insightful)

          by morcego ( 260031 )

          Then there is the big fact that programmers these days are sloppy and waste resources. A machine that is faster than one needs today will only be adequate in 2 or 3 years given upgrades to all the programs. (Am I being cynical? Maybe, but then again, maybe not.)

          You know, that is something that really pisses me off.

          Yes, I know many times it is not the programmers' fault, and they have to be sloppy to be able to meet that stupid deadline. But c'mon, take a look at the system resources something like Beryl uses.

          • Re: (Score:3, Informative)

            by koreaman ( 835838 )
            Actually, games are heavily optimized and do use everything they're given effectively. The use of a GPU is normal when you consider the graphics on modern games (it takes a LOT of processing power to render something that beautiful 30 times a second...). The CPU is used for physics and game state calculations, which are certainly not negligible.
          • by billcopc ( 196330 ) <vrillco@yahoo.com> on Saturday April 21, 2007 @03:39PM (#18826105) Homepage
            You know what's funny about Beryl/Vista? They're doing the same stuff the Mac was doing years ago on puny hardware. I mean really, how frickin' hard is it to draw a window as a texture on a pair of triangles? Seriously!

            Programmers are sloppy, because sloppy is all the industry wants to pay for. Way back in the day when CPU cycles were super expensive, programmers were paid better money and given the time to tweak the crap out of everything, because if they didn't, the app would run dog slow and people wouldn't buy it. The problem is that somehow, people now tolerate underperforming software. They see it as a reason to upgrade... good god, they actually fall for it! Gee, I certainly remember surfing the web on a 486 with 8MB of RAM back in the day. Now my OS needs a good 50-60MB to itself, and that's after I ripped out all the cruft. Normally it would be 100MB just for sitting idle with a background image and a neon-colored task bar. Gee uh, where'd all my system resources go? Does it really require 7.3 million bytes to house a TCP/IP stack when some embedded devices pull it off with, oh, 6KB or so?

            The truth, however, is that if we were to write code as tightly and meticulously as we did in the 80's and 90's, software would perform, on average, at least 5 to 10 times faster than today, excluding hard bottlenecks like disk access and network bandwidth. It would also take 50 times longer to write the software, and I'd say less than 1% of people who call themselves "programmers" are even able to write such finely tuned code. Everyone doing VB? Out. Everyone doing RAD? Out. All you Ruby on Rails weenies? Follow me to this dark alley *BLAM*

            I remember spending hours on little loops, with a CPU reference manual and a calculator. Sometimes I did little time sketches to figure out the best way to stagger memory accesses so as to not starve the execution pipes. Often times that meant weaving two disparate functions together, one being memory-hungry, the other CPU hungry. Together they filled each other's latency pockets, and my routine ran nearly thrice faster as a result. No C compiler I've ever seen could do such kinky things. Heck one time I even wrote a little assembler demo whose code executed twice: forward, then backward. The opcodes and data were carefully selected to represent valid instructions when reversed. It was more than a nerdy trick, it allowed my routine to fit entirely in the CPU's on-die cache, which gave it a huge speed boost but more importantly, it enabled a lowly 486 to mix 48 sound channels in real-time. Today's Cubase can't even handle a couple dozen channels without stuttering and/or crashing, on computers over 100 times faster than a 486.
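A minimal C sketch of the "weave two disparate functions together" trick described above: one loop is memory-bound, the other is ALU-bound, and fusing them gives the core independent arithmetic to work on while loads are still in flight. The array size and hash constants are arbitrary choices for the demo (nothing from the original 486 routine), and on a modern out-of-order CPU the gain may be small, since the hardware already overlaps independent work on its own.

```c
/* Interleaving a memory-hungry loop with a CPU-hungry one, as described
 * in the comment above. Sizes and constants are arbitrary demo values. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)                       /* 16M elements: far bigger than cache */

static uint64_t sum_array(const uint32_t *a)      /* memory-hungry */
{
    uint64_t s = 0;
    for (size_t i = 0; i < N; i++)
        s += a[i];
    return s;
}

static uint64_t hash_chain(uint64_t x)            /* CPU-hungry */
{
    for (size_t i = 0; i < N; i++)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    return x;
}

/* Fused version: each iteration issues one load plus one multiply-add, so
 * the arithmetic can hide part of the cache-miss latency of the load. */
static void fused(const uint32_t *a, uint64_t *sum_out, uint64_t *hash_out)
{
    uint64_t s = 0, x = *hash_out;
    for (size_t i = 0; i < N; i++) {
        s += a[i];
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    }
    *sum_out = s;
    *hash_out = x;
}

int main(void)
{
    uint32_t *a = malloc(N * sizeof *a);
    if (!a)
        return 1;
    for (size_t i = 0; i < N; i++)
        a[i] = (uint32_t)i;

    uint64_t s1 = sum_array(a), h1 = hash_chain(1);
    uint64_t s2, h2 = 1;
    fused(a, &s2, &h2);

    printf("separate: sum=%llu hash=%llu\n",
           (unsigned long long)s1, (unsigned long long)h1);
    printf("fused:    sum=%llu hash=%llu\n",
           (unsigned long long)s2, (unsigned long long)h2);
    free(a);
    return 0;
}
```

Time the two paths to see whether the fusion actually pays off on a given machine; the point is the shape of the transformation, not a guaranteed speedup.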
        • Re: (Score:3, Informative)

          by Lumpy ( 12016 )
          Someone who does heavy video work can certainly chew up a lot of processing power.

          nope.

          I edit 1080i HD video all day long on an incredibly old P4 3.0GHz processor.
          I even render effects on it as well as CG at those resolutions, and it works just fine and speedy. A 30 minute episode renders in a little over an hour to MPEG-2 for airing at a TV station or for Blu-ray disc. The biggest thing you need in video editing is MEMORY. 4GB or more helps a lot, as well as really fast U320 SCSI drives.

          Processor speed has a
        • "Then there is the big fact that programmers these days are sloppy and waste resources. A machine that is faster than one needs today will only be adequate in 2 or 3 years given upgrades to all the programs. (Am I being cynical? Maybe, but then again, maybe not.)"

          The truth is the power will definitely be used; it just takes *years* of research to develop killer apps. Dragon NaturallySpeaking, for instance, is finally getting to a fairly usable point. The training feature is nice, while it isn't the best appl
        • by fuzz6y ( 240555 ) on Saturday April 21, 2007 @02:13PM (#18825665)

          Then there is the big fact that programmers these days are sloppy and waste resources.

          Just shut the fuck up already. Anyone with more sense than a bag of rocks will conserve scarce resources, not plentiful ones. Clock cycles are cheap. Profiling is expensive. Megabytes are cheap. Time spent coming up with clever bithacks is expensive, especially since only the cleverest and generally highest-paid developers can do it. Second cores are cheap. More time spent coming up with whole new clever bithacks for the Pentium D version because it has a different relative cost for jumps and floating point ops, thus making your last batch of hacks do more harm than good, is expensive.

          Furthermore, programmers don't so much *waste* resources as utilize them to provide more value. Yeah, I know the 2600 had 128 bytes of RAM, and those were some clever fellas who managed to make playable games on it. Let's see you play WoW on it. I know that your multimedia keyboard probably has more processing power than the PCjr that could once run Word. Fire up that version of Word, insert an image and a table, and hit "print preview."

          Of course there are times when computing power is a precious resource. Console games that have to look awesome on 4 year old hardware. System libraries where every wasted clock will be multiplied by 2000 calls by 10000 different programs. Embedded systems where cost and size simply won't allow you to have those few extra Hz you crave. In these situations, when using extra cycles has more severe consequences than offending your sense of computational aesthetics, I believe you will find that these young whippersnappers aren't wasteful at all.

        • by Colin Smith ( 2679 ) on Saturday April 21, 2007 @05:12PM (#18826775)

          Then there is the big fact that programmers these days are sloppy and waste resources. A machine that is faster than one needs today will only be adequate in 2 or 3 years given upgrades to all the programs. (Am I being cynical? Maybe, but then again, maybe not.)
          No. In fact, you can generalise the statement to... Humans are sloppy and waste resources. Basically any resource which is cheap or easy will be fully consumed by the people using it.

          If CPUs stayed the same power, people would write better code to improve performance.

           
    • by Dwedit ( 232252 ) on Saturday April 21, 2007 @06:21AM (#18822681) Homepage
      One word: Flash.
      Flash is ridiculously inefficient, and requires an extremely beefy machine to render real-time full-screen animation.
      • by Bo'Bob'O ( 95398 )
        I used a PII laptop up until January this year as my sort of regular email, browsing, Word machine. I might still be using it, but simple web browsing had become a chore, and Trillian had become endlessly bloated. Sure, I could have sped things up with a Flash blocker, moving to Gaim, digging up some more RAM, and being more vigilant about background applications, but more and more sites are requiring Flash, and it was getting irritating having to start up my gaming machine just to look at a link a friend sen
    • Re: (Score:3, Informative)

      by nyctopterus ( 717502 )
      Anyone doing graphics, even hobbyists. Editing home movies with effects, for example, can use an almost unlimited amount of resources. As an artist working with a graphics tablet on large files in Photoshop, and complex vector graphics, processors are nowhere near fast enough. I want everything now. I don't want to wait for the screen to redraw. I don't want to have to wait for filters. Bollocks to that. Give me a 64-core 24GHz machine and I'd find a way to slow it down.
      • I do graphics, too, but maybe not what you do. How much of your wait time is video card-dependent? Do you know?

        For engineering work, a CAD-dedicated card with 64MB blows away a 256MB consumer card quite easily based on my experience.

        Of course, I'm talking about 3D performance, which you might not need. There are $4000 cards out there, who could possibly need to spend that much money?
        • Re: (Score:3, Informative)

          by muridae ( 966931 )
          Yeah, it really depends on what type of graphic work you do. Some ray-tracing might make more use of the FPU when rendering, but while you are modeling the object it uses the graphics card. CAD tends to have more trouble displaying the work in real time, so those specialty graphics cards work wonders. Something like After Effects might have to keep lots of frames of video in memory to apply an effect, and could be bottlenecked by the CPU or memory or even the bus between the two.

          The only part of a computer

        • by drsmithy ( 35869 )

          Of course, I'm talking about 3D performance, which you might not need. There are $4000 cards out there, who could possibly need to spend that much money?

          Those cards cost $4000 because that's what the market they are sold to will pay, not because they're doing anything a $100 card couldn't do with minor (if any) tweaking.

          I get to see this kind of thing first hand, in the radiology industry. You can buy "certified" displays for $thousands, or buy off-the-shelf hardware that meets all of the necessary stan

      • Re: (Score:3, Interesting)

        by TheRaven64 ( 641858 )
        For simple video editing, disk and RAM are the bottlenecks. When it comes to effects (transitions, blending/warping etc), most of those will run on a one or two generation old GPU as pixel shader programs much faster than on any modern CPU.

        The interesting thing about the CPU market now is that most of the workloads that really tax a general purpose CPU (and there aren't a huge number left) are the ones that perform very badly on a general purpose CPU. For home use, something like one of TI's ARM cores wi

    • by zaibazu ( 976612 ) on Saturday April 21, 2007 @06:30AM (#18822715)
      Displaying myspace profiles. The CPU load they produce is astonishing.
      • Re: (Score:2, Interesting)

        by onedotzero ( 926558 )
        If my mod points hadn't run out yesterday, this would be +1 Insightful. The CSS hacks (primarily transparent overlays which aren't handled too gracefully by Opera) and overloaded Flash content put a strain on my CPU (2.6GHz). It's incredible to see a browser struggle with these things.
        • It's incredible to see a browser struggle with these things.
          If you think that's incredible, try a 67MB SVG document... You need 800MB of RAM just to open it!!!

          I'm referring to Kandid [sourceforge.net]... To top it off, the damn thing isn't even vectorized.
      • Re: (Score:2, Insightful)

        by matt me ( 850665 )

        Displaying myspace profiles. The CPU load they produce is astonishing.
        Let me guess: you use Firefox.
    • by suv4x4 ( 956391 )
      Yeah, this is cool, no doubt. How many users actually *use* how much power they already have? I use a lot, but it's mostly dependent on the graphics card.

      I assume we're talking casual consumers and not pro users. Well, it's not really up to them; their software will bloat up to take whatever CPU "volume" there is and take all of it in the next version.

      At first glance Photoshop 4 isn't THAT much simpler than Photoshop 10. But it's many times faster for all basic operations both support. Wonder how that h
      • AutoCAD is similar, just based on this one user's perception. R14 was very fast, probably the best ever (if we're talking Windows-based). Then Autodesk started loading on stuff and changing things. Sure, ACAD fanboys will claim that this old fart failed to keep up, but it's not my job to play catch-up with the latest software fads; I design things - I'm not a computer operator.
      • Re: (Score:3, Informative)

        by Stormwatch ( 703920 )
        Want to see bloat? Check the latest version of Nero Burning ROM. It installs loads of crap, hoses the whole system and makes it unstable, and takes hundreds of MB on the hard disk. Then some pirate trimmed the crap and made a "lite" version, which works MUCH better.
        • by MsGeek ( 162936 )
          The antidote is K3B. Of course, you have to run Linux to run it. Nobody's ported it to Windows...yet. Since the command-line cdrtools that K3B depends on have been ported to Win32 already, it should be easy for someone who knows what they are doing to do it. K3B is a lot like the Nero of yore, only it never got bloated.
    • You'd be surprised (Score:5, Insightful)

      by Moraelin ( 679338 ) on Saturday April 21, 2007 @07:13AM (#18822905) Journal
      You'd be surprised how much more _can_ be made with a CPU.

      E.g., sure, we like to use the stereotypical old mom as an example of someone who only sends emails to the kids and old friends. Unfortunately it's false. It was true in the 90's, but now digital cameras are everywhere and image manipulation software is very affordable. And so are the computers which can do it. You'd be surprised at the kind of heavy-duty image processing mom does on hundreds of pictures of squirrels and geese and whatever was in the park on that day.

      And _video_ processing isn't too far out of reach either. It's a logical next step too: if you're taking pictures, why not short movies? Picture doing the same image processing on some thousands of frames in a movie instead of one still picture.

      E.g., software development. Try building a large project on an old 800 MHz slot-A Athlon, with all optimizations on, and then tell me I don't need a faster CPU. Plus, nowadays IDEs aren't just dumb editors with a "compile" option in the menus any more. They compile and cross-reference classes all the time as you type.

      E.g., games, since you mention the graphics card. Yeah, ok, at the moment most games are just a glorified graphics engine, and mostly just use the CPU to pump the triangles to the graphics card. Well that's a pretty poor model, and the novelty of graphics alone is wearing off fast.

      How about physics? They're just coming into fashion, and fast. Yeah, we make do at the moment with piss-poor approximations, like Oblivion's bump-into-a-table-and-watch-plates-fly-off-supersonic engine. There's no reason we couldn't do better.

      How about AI? Already in X2 and X3 (the space sim games) it doesn't only simulate the enemies around you, but also what happens in the sectors where your automated trade or patrol ships are. I want to see that in more games.

      Or how about giving good AI to city/empire building games? Tropico already simulated up to 1000 little people in your city, going around their daily lives, making friends, satisfying their needs, etc. Not just doing a dumb loop, like in Pharaoh or Caesar 3, but genuinely trying to solve the problem of satisfying their biggest need at the moment: e.g., if they're hungry, they go buy food (trekking across the whole island if needed), if they're sick, they go to a doctor, etc. I'd like to see more of that, and more complex at that.

      Or let's have that in RPGs, for that matter. Oblivion for example made a big fuss about how smart and realistic their AI is... and it wasn't. But the hype it generated does show that people care about that kind of thing. So how about having games with _big_ cities, not just 4-5 houses, but cities with 1000-2000 inhabitants, which are actually smart. Let's have not just a "fame" and "infamy" rating, let's have people who actually have a graph of acquaintances and friends, and actually gradually spread the rumours. (I.e., you're not just the guy with 2 points infamy, but it's a question of which of your bad deeds did this particular NPC hear about.) Let's not have omniscient guards that teleport, but actually have witnesses calculate a path and run to inform the guards, and lead them to the crime. Etc.

      Or how about procedurally generated content? The idea of creating whole cities, quests and whatnot procedurally isn't a new one, but unfortunately it tends to create boring repetition at the moment. (See Daggerfall or Morrowind.) How about an AI complex enough to generate reasonably interesting stuff. E.g., not just recombine blocks, but come up with a genuinely original fortress from the ground up, based on some constraints. E.g., how about generating whole story arcs? It's not impossible, it's just very hard.

      And if you need to ask "why?", let's just say: non-linear stories. Currently if you want, for example, to play a light side and a dark side, someone has to code two different arcs, although most players will only see one or the other. If you add more points and ways you can branch the story (e.g.
      • I'm glad you mentioned Caesar. [codecad.com] Yeah, I know that's not what you were referring-to.
      • by toejam316 ( 1000986 ) on Saturday April 21, 2007 @07:48AM (#18823047)
        On your Oblivion point, it DOES have incredibly smart game AI; it's just that the sheer numbers-to-performance ratio means they have to be stupid. On a good computer (can't remember the specs) they set up 2 NPCs and tasked one to rake the lawn and another to clip the hedges. They did so; then they swapped the items needed, so the clipping person had a rake and the raking person had clippers. The raker ended up killing the clipper for the rake. Another example of Oblivion's AI: in a test, before you could make it to a quest guy who sold Skooma (a drug), the Skooma addicts had already killed him, ruining the game plot by not allowing you to progress. Seem dumb to you? Oh yeah, I'm not sure if the second example is actually what happened in-game with the AI beefed up, or just in a section with few NPCs.
          • This was their "Radiant AI", which they scrapped in favor of scheduled and balanced task loads, because they found that it would be too much of a pita to use Radiant AI with possibilities like that and still have a playable game - Ex: can't finish the game because npc-a was killed by npc-b for stepping on his lawn, and you need the final boss quest from npc-a...
      • Getting back to photo manipulation and the hypothetical grandmom, what about using face recognition to correctly tag each person in each picture? Or scan all pictures of a person and (based on the composite) get rid of any redeye automatically, with the proper iris color. How about when a picture is taken, automatically instead take 1000 photos within a second and use them to create a composite so there is no blurring or motion artifact?

        Lots of things possible when the horsepower is there.
      • by cerberusss ( 660701 ) on Saturday April 21, 2007 @08:15AM (#18823171) Journal

        nowadays IDEs aren't just dumb editors with a "compile" option in the menus any more. They compile and cross-reference classes all the time as you type.
        I program in C, you insensitive clod!
      • Thank you. That was a very rare post on Slashdot, with some genuine insight. You've nailed exactly what disappoints me in Oblivion. It is certainly a step up from Morrowind (pretty graphics, fixing the obvious game mechanics holes) but ... it's not a very large step. When I walk into the Imperial City of Tamriel, which is so imposing I can see it from everywhere in the surrounding valley, it looks like a sparse little village with one or two people dotted around. The game sets up expectations but then it dashe
      • by Kjella ( 173770 )
        Or how about procedurally generated content? The idea of creating whole cities, quests and whatnot procedurally isn't a new one, but unfortunately it tends to create boring repetition at the moment. (See Daggerfall or Morrowind.) How about an AI complex enough to generate reasonably interesting stuff. E.g., not just recombine blocks, but come up with a genuinely original fortress from the ground up, based on some constraints. E.g., how about generating whole story arcs? It's not impossible, it's just very h
      • How about physics? They're just coming into fashion, and fast. Yeah, we make do at the moment with piss-poor approximations, like Oblivion's bump-into-a-table-and-watch-plates-fly-off-supersonic engine. There's no reason we couldn't do better
        http://www.youtube.com/watch?v=3bKphYfUk-M [youtube.com]
      • I was with you up to the last point. Dynamically branching stories aren't just "very hard"; to be anything other than awful, it'd require an AI smart enough to pass the Turing test (to simulate realistic NPCs), AND it would have to be cleverer and more creative than a good percentage of humans (try playing tabletop D&D with a crappy DM if you doubt the point).

        If a computer could write a convincing dynamic game plot, it could write a decent novel, and that's a loooong way off.
    • by eebra82 ( 907996 )
      How many users actually *use* how much power they already have?

      I disagree with your thinking. It is not about using 100% of the 'power'. It is about the definition of how much power '100%' is. We all hit 100% during intensive operations, but to a lesser extent if the power is exceedingly efficient.
    • by ocbwilg ( 259828 )
      Yeah, this is cool, no doubt. How many users actually *use* how much power they already have? I use a lot, but it's mostly dependent on the graphics card.

      This article is discussing 2-way, 4-way, and 8-way Opteron CPUs for servers. I don't know about you, but with all the virtualization going on nowadays, more computing power in the same size box is a good thing. We can use all the power we can get.
    • Yeah, this is cool, no doubt. How many users actually *use* how much power they already have? I use a lot, but it's mostly dependent on the graphics card.

      Time and again, Intel and AMD come out with new, faster processors, and every time some bozo like you feels the need to say the same stupid thing: "who needs this much computing power on the desktop?" Time and again, someone like me has to post the same damn reply:

      You are not the market for this. The desktop is not the market for this. Games are not the end-all be-all of high-intensity computation. In a more general sense, progress just fucking progresses. Are you saying AMD and Intel should just market

    • How many users are running desktops with server class chips in them? :)
    • by Sloppy ( 14984 )
      Yeah, I've said that before. I think the last time was in reference to the Compaq 386. I mean, sure it was impressively fast, but who needs that kind of power on their workstation? The Compaq 386 is a niche product: good for servers, but overkill for most purposes.
    • The more powerful the computer, the more you can do. I can remember a time when it took minutes just to open a simple word processor or web browser. Now if it takes two seconds I get impatient.

      Or would you rather go back to the days of 100MHz processors?
  • Oh come on (Score:4, Insightful)

    by Anonymous Coward on Saturday April 21, 2007 @06:04AM (#18822605)
    No sane person actually believed that the gigahurtz race was over. But who cares about it anyway, just more power for a little faster operation.

    I muchly prefer a fanless processor.
    • The P4 hit 3 GHz, what, 4 years ago? For Opteron to hit 3GHz only now is just proof of how badly the quest for GHz has atrophied.

      Had the wall not been hit, and had GHz continued to increase as in the 90s, we'd be up to something like 20 GHz by now. So the truth of this story is the exact opposite of "The Gigahertz Race is Back On." RAM and HDD capacity and price have been relatively stagnant for the last few years, too. The only thing still growing by leaps and bounds is flash memory.

      • Re:Oh come on (Score:4, Insightful)

        by Anonymous Coward on Saturday April 21, 2007 @07:33AM (#18822975)
        The P4 hit 3 GHz, what, 4 years ago? For Opteron to hit 3GHz only now

        To understand how this is not a sign of slacking off by the chip designers, you have to understand that the P4 was able to run at high clock speeds only because it was designed to use a very long pipeline of small functional units. This design has proven to be inefficient because it causes too many pipeline stalls and because it requires a higher clock speed and higher power consumption to achieve the same performance. The more complicated functional units of chips with shorter pipelines cannot be clocked as fast, but they perform better at the achievable clock rates than the P4 did at higher clock rates. The last Gigahertz race was ended by a shift of architecture, not by "hitting a wall". Then came multicore designs, which further reduced the need and opportunity for higher clock rates (heat dissipation is somewhat of a "wall"). All this caused clock rates to grow much slower. Now that chip designers have found ways to control power consumption, increasing the clock rate is viable again, so the race is back on.
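A back-of-the-envelope model of that trade-off, with invented numbers rather than measured P4 or Opteron figures: performance is roughly clock divided by cycles-per-instruction, and a deeper pipeline pays a larger flush penalty on every mispredicted branch while its simpler functional units retire fewer instructions per cycle.

```c
/* Toy pipeline model. All figures are made up for illustration only;
 * they are not measured P4 or Opteron numbers. */
#include <stdio.h>

static double effective_gips(double ghz, double base_ipc,
                             double branch_rate, double mispredict_rate,
                             int flush_penalty_cycles)
{
    /* Extra cycles per instruction lost to mispredicted branches. */
    double stall_per_insn = branch_rate * mispredict_rate * flush_penalty_cycles;
    double cycles_per_insn = 1.0 / base_ipc + stall_per_insn;
    return ghz / cycles_per_insn;          /* billions of instructions per second */
}

int main(void)
{
    /* Long pipeline: higher clock, narrower units (lower IPC), bigger flush. */
    double long_pipe  = effective_gips(3.8, 1.0, 0.20, 0.05, 30);
    /* Short pipeline: lower clock, wider units (higher IPC), smaller flush. */
    double short_pipe = effective_gips(2.6, 2.0, 0.20, 0.05, 12);

    printf("long pipeline  @ 3.8 GHz: %.2f G instructions/s\n", long_pipe);
    printf("short pipeline @ 2.6 GHz: %.2f G instructions/s\n", short_pipe);
    return 0;
}
```

With these (invented) parameters the lower-clocked, shorter-pipeline part comes out ahead, which is the shape of the Pentium 4 story the comment describes.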
        • you have to understand that the P4 was able to run at high clock speeds only because it was designed to use a very long pipeline of small functional units... The last Gigahertz race was ended by a shift of architecture, not by "hitting a wall".

          The NetBurst architecture was nothing but Intel's response to hitting the MHz wall. Intel wanted to continue ramping up MHz, which in the past had corresponded very well with overall performance and was thus important to consumers. But because they were starting to

      • Having a really long pipeline with a high clock speed doesn't make your computer faster. It sounds better in marketing terms though - which is why Intel eventually fell flat on its face when consumers found out the P4s were crap. If Intel's high-clockspeed CPUs also performed, we'd still see a MHz war, because Intel would still be focusing on it in their marketing. Right now it's just on the backburner.
    • What I find interesting is the role reversal. Intel in its current generations is focused more on improving IPC. AMD seems to have hit the IPC wall and is instead focusing on clock speed increases. It's a reverse of what the situation was a few years ago, with AMD touting how much more elegant its architecture was with its higher IPC than NetBurst, and Intel pushing high clock speed as its answer.
    • by drerwk ( 695572 )
      It is over in the sense that I do not expect to see a 30GHz processor any time soon. Not too long ago I had to account for the fact that if I did not get my product out in nine months, the target computer would be twice as fast. In 1977 I had a 1MHz 8-bit PC. In 1990 I had a 25MHz 32-bit CPU. In 1996, 200MHz; in 2002, 2GHz. Do you think we will see a factor of 10, to 30GHz, by 2011? Sorry. But I do expect the number of cores to double every 18 months. I expect an 80-core CPU in 5 years. I'm not sure that the software deve
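For what it's worth, the doubling arithmetic in that prediction works out as below. The quad-core 2007 starting point is an assumption; an 80-core part in five years would need roughly an 8-core baseline.

```c
/* Quick check of the "cores double every 18 months" projection. The
 * starting core count is an assumption, not a claim from the post. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double start_cores = 4.0;              /* assume a 2007 quad-core baseline */
    for (int months = 0; months <= 60; months += 12)
        printf("after %2d months: ~%.0f cores\n",
               months, start_cores * pow(2.0, months / 18.0));
    return 0;
}
```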
    • Today AMD reported huge losses due to increasing competition against rival Intel.

      In other news, AMD has started to release overclocked processors which increase the speed at the expense of power consumption, but with no R&D cost at all. I now totally cannot remember what the first news piece was.
    • They picked a good time to bring it back. Two processors are kind of needed, but I've been holding out for four, and I think a lot of people joined me in that.

      Four is more than the maximum number of simultaneous tasks I've ever needed to complete in a hurry.

      Game designers are still churning out games not properly optimized for multi-threading, so the faster single cores still perform better on them (dual cores are actually a bit slower, especially for AMD); once they stop that then no one will care about how fast each co
      • One core that can perform two billion operations per second will always be better than two cores that can each perform one billion ops/sec. Well, unless each core has its own memory controller and there's NUMA trickery going on.

        The reason we're seeing multi-core processors is that Moore's Law is continuing, but it's not possible to turn a doubling in transistors into a doubling of single-core performance. You get tradeoffs like "add a second core OR add some cache and increase speed by a factor of four on
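Amdahl's law makes the parent's point concrete: doubling the clock speeds up everything, while a second core only speeds up the parallel fraction. The fractions below are illustrative, not measurements of any real workload.

```c
/* One core at 2x clock vs. two cores at 1x, via Amdahl's law.
 * Parallel fractions are illustrative values only. */
#include <stdio.h>

static double speedup_two_cores(double parallel_fraction)
{
    /* Serial part runs at 1x; parallel part is split across two cores. */
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / 2.0);
}

int main(void)
{
    const double fractions[] = { 0.0, 0.5, 0.9, 1.0 };

    printf("one core at 2x clock: speedup 2.00 for any workload\n");
    for (int i = 0; i < 4; i++)
        printf("two 1x cores, %3.0f%% parallel: speedup %.2f\n",
               fractions[i] * 100.0, speedup_two_cores(fractions[i]));
    return 0;
}
```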

  • by jibjibjib ( 889679 ) on Saturday April 21, 2007 @06:17AM (#18822657) Journal
    I was kind of hoping the gigahertz race would end so Microsoft would have to stop making each version of Windows slower than the last.
    • by TeknoHog ( 164938 ) on Saturday April 21, 2007 @06:39AM (#18822751) Homepage Journal

      I was kind of hoping the gigahertz race would end so Microsoft would have to stop making each version of Windows slower than the last.

      You're missing the whole point. CPU performance is increasing all the time, which allows Microsoft to continue making everything slower. However, the GHz race had little to do with performance; Intel pushed their Pentium 4 toward 4 GHz even though it performed worse than many competing CPUs running between 2 and 3 GHz. They probably did it because most consumers would only look at raw GHz instead of performance.

      • CPU performance is increasing all the time, which allows Microsoft to continue making everything slower.

        A-ha! Finally the truth comes out! The massive, worldwide adoption of computers is *actually* a global job creation program!

        No, really, think about it.
  • Gigahertz race is back on! AMD increases the clockrate of its chips to 3 GHz! Ok.. race is over.

    Slow news day? When people announced that the GHz race was over, they didn't mean that they'd only decrease the clock rate, did they? Both Intel and AMD still bump the clock rate up on further developments of their models, but we should expect to see chips in the range of 1 GHz - 3.8 GHz and not much higher than that.

    There's no effin' GHz race.
  • So again we're pushing down the throttle a hint more instead of shifting into next gear. I mean, ok, I'm not a hardware guru, but could it be that we might get more done with less speed if we managed to get things done more "intelligently" instead of simply "faster"?

    How about a bit more than just 8 registers? Maybe a bit more "distributed computing" inside the machine, with more than just outsourcing the graphics to a GPU, maybe a chip dedicated to memory or interrupt handling? I dunno, personally it feels
    • by DaleGlass ( 1068434 ) on Saturday April 21, 2007 @07:45AM (#18823041) Homepage

      How about a bit more than just 8 registers?

      AMD64 has 16 registers

      with more than just outsourcing the graphics to a GPU

      AMD seems to be working on putting a GPU into the CPU

      maybe a chip dedicated to memory

      Memory used to be managed by a dedicated chip -- the northbridge. But AMD moved it into the CPU because it was faster that way.

      or interrupt handling

      The APIC? But anyway, the slow part of interrupt handling is done in the OS kernel, which runs on the CPU. So I'm not sure how much a chip would help there.

      Maybe someone with more background in hardware design can enlighten me why the race for more cores and more Hertz.


      I'm not an expert, but my guess is that it's because computers are all-purpose devices. Specialized hardware can accelerate something like encryption or audio mixing, but there doesn't seem to be all that much of that sort of thing that's still worth accelerating. Most people don't need to encrypt the huge amounts of data that would make a dedicated accelerator make much of a difference. Notice also how almost nobody buys sound cards anymore, because you can just mix sound in software.
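For the curious, software mixing really is just a tight inner loop like the sketch below: sum the channels into a wider accumulator and clamp back to the 16-bit sample range. The channel count and buffer size are arbitrary; a real mixer would add per-channel volume, resampling, and so on.

```c
/* Minimal software audio mixer: sum N channels of 16-bit samples and clamp.
 * Channel count, buffer size, and the dummy test signal are arbitrary. */
#include <stdint.h>
#include <stdio.h>

#define CHANNELS 48
#define FRAMES   256

static void mix(const int16_t in[CHANNELS][FRAMES], int16_t out[FRAMES])
{
    for (int f = 0; f < FRAMES; f++) {
        int32_t acc = 0;                     /* wide accumulator avoids overflow */
        for (int c = 0; c < CHANNELS; c++)
            acc += in[c][f];
        if (acc >  32767) acc =  32767;      /* clamp instead of wrapping around */
        if (acc < -32768) acc = -32768;
        out[f] = (int16_t)acc;
    }
}

int main(void)
{
    static int16_t in[CHANNELS][FRAMES];
    int16_t out[FRAMES];

    for (int c = 0; c < CHANNELS; c++)       /* fill with a dummy test pattern */
        for (int f = 0; f < FRAMES; f++)
            in[c][f] = (int16_t)((c + 1) * (f % 64));

    mix(in, out);
    printf("first mixed samples: %d %d %d\n", out[0], out[1], out[2]);
    return 0;
}
```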
      • Notice also how almost nobody buys sound cards anymore, because you can just mix sound in software.

        I guess hardcore gamers and audiophiles would disagree.

        Dedicated soundcards have 2 advantages over letting CPU handle the task: Less CPU load (yes, that doesn't matter in normal applications, but since games stress the CPU to the limit anyway you'll want that task in a dedicated device) and sound quality, which is again often due to "good" sound being actually quite a bit of load on whatever piece of hardw
        • Sound quality, maybe. CPU load, I'm not that sure.

          Some time ago, Creative intimidated John Carmack into supporting EAX in Doom 3 [slashdot.org]. This is yet another reason why I don't use Creative hardware anymore. The SB Live cards causing disk corruption with VIA boards, drivers being unstable on SMP (this was before dual core) and them taking drivers off their site for some time are some others.

          But anyway, why do you think Creative had to basically force John Carmack to support their EAX tech? Apparently because Doom 3
          • I think the biggest thing a discrete audio card can give nowadays is accelerated MIDI. Ever tried playing or creating a MIDI file in software? It can easily use 10% CPU on playback; with a card that accelerates this, it's zero.
            • by edwdig ( 47888 )
              I think the biggest thing a discrete audio card can give nowadays is accelerated MIDI. Ever tried playing or creating a MIDI file in software? It can easily use 10% CPU on playback; with a card that accelerates this, it's zero.

              You've got an absolutely horrendous MIDI implementation then. The GameBoy Advance, with its 16.7 MHz ARM7, is capable of handling 8 channels of MIDI at about 30% CPU usage. On a modern processor, MIDI should take essentially no power.

              Last time I looked into dedicated sound cards (the
              • Meh, I use timidity on Linux. With an emu10k1 (Sound Blaster with hardware MIDI), 1% usage. With timidity, at least 5-10% usage.
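A rough cycle budget behind the GBA figure quoted above, assuming a 32 kHz output rate (an assumption for the estimate, not a quoted spec): it comes out to a couple dozen cycles per channel-sample, which really is a rounding error on a 3 GHz core.

```c
/* Back-of-the-envelope check of the "MIDI should take essentially no power"
 * claim. The 32 kHz output rate is an assumed value. */
#include <stdio.h>

int main(void)
{
    double cpu_hz      = 16.7e6;    /* GBA ARM7 clock */
    double cpu_share   = 0.30;      /* 30% CPU for 8 channels, as quoted */
    double sample_rate = 32000.0;   /* assumed output rate */
    int    channels    = 8;

    double cycles_per_channel_sample =
        cpu_hz * cpu_share / (sample_rate * channels);
    printf("~%.0f cycles per channel-sample on the GBA\n",
           cycles_per_channel_sample);

    /* The same total work expressed as a share of a 3 GHz core. */
    double desktop_share = (cpu_hz * cpu_share) / 3e9;
    printf("same workload on a 3 GHz core: ~%.3f%% of the CPU\n",
           desktop_share * 100.0);
    return 0;
}
```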
          • by laffer1 ( 701823 )
            Let me provide real numbers. Using my Audigy card, I saw a 10 fps increase in Enemy Territory on a 1.3GHz Celeron vs the onboard audio. Many onboard cards use a lot of software to complete their functionality. That competes for CPU time with your game. It does help a little to have a "hardware" card. Of course, you can't use any of the extra Creative software or features, as they would cause the same net result. On a dual core system or SMP box, the difference is much less. I had a dual Xeon 2.0GHz a
  • "The Gigahertz Race is Back On"

    No, I don't think so. It's just that AMD pushed the clock frequency for this CPU, but that works because it was just up to 3 GHz.

    Watch me be right when they don't continue to push that generation to clock speeds 3x higher or so like they could in the past.
  • future products will run at slower clock speeds and gain performance through the use of multiple cores and other techniques that won't improve single-threaded application performance.

    This is misleading. No one gave up improving the performance of single-threaded apps.

    All new chips are striving to improve the performance of each core by packing more executed commands per CPU cycle. This is achieved with better branch prediction, concurrent execution of commands that are in principle serial (this is possible
  • AMD is desperate (Score:2, Insightful)

    by tru-hero ( 1074887 )
    This is a desperation move. AMD is back on their heels and their recovery plan is too far off in the future. In hopes of saving face they are pulling the only lever they have, clock speed.

    Funny, Intel was chumped by AMD just like this a couple of years ago, why did AMD let themselves get tagged back? Intel woke up in a major way. Can AMD? Doesn't look too good...

    • AMD is working on their new Barcelona chip, and it takes time for a NEW CPU to come out. Also, AM2 was forced because DDR1 was starting to go away at the time and they needed to move to DDR2. But AM2 boards will work with the new AM2+ CPUs, unlike Intel, where they use the same socket but need new chips and newer boards with the same chipset to support the newer vcore.
    • Re:AMD is desperate (Score:5, Interesting)

      by ocbwilg ( 259828 ) on Saturday April 21, 2007 @09:56AM (#18823755)
      This is a desperation move. AMD is back on their heels and their recovery plan is too far off in the future. In hopes of saving face they are pulling the only lever they have, clock speed.

      Not so. AMD never said that they wouldn't increase clock speed on their CPUs. In fact, that's pretty much standard practice to get higher performance. So now their manufacturing process is capable of producing 3 GHz CPUs in sufficient volumes to sell, and they're selling them. As the process is refined there may be faster CPUs.

      Intel does the same thing. As the manufacturing process is refined they are able to produce more and more CPUs at higher clock speeds. It's not a sign of anything other than business as usual.

      Funny, Intel was chumped by AMD just like this a couple of years ago, why did AMD let themselves get tagged back? Intel woke up in a major way. Can AMD? Doesn't look too good...

      AMD has more than just clock speed coming, Barcelona (aka K10) is supposed to be shipping in the next month or two. That's generally expected to take back the performance crown from Intel, and even if it doesn't it should at least eliminate the performance gap. For purposes of historical reference, AMD pretty much bitchslapped Intel when they released the Athlon 64. It took Intel 4 years to finally catch up to AMD and pass them with the Core 2 architecture, and even today the Opterons are still higher performers on 4 and 8 processor systems. If Barcelona turns out to be as fast as or faster than Core 2 (and by all rights, it should be) then it will have taken them only 1 year to catch up. Conroe was "previewed" at Spring IDF in 2006, but didn't ship until several months later.

      As for why it's taken AMD a year to catch up, it takes quite a long time to design, layout, test, and debug a new CPU. Once all that is done the manufacturing process has to be designed and tested too. Then the CPUs have to actually be produced, and once production has started it takes almost 2 months to go from silicon wafers to functioning CPUs. However, something to keep in mind is that Intel is a much, much larger company than AMD and that Intel runs several CPU design teams concurrently, while AMD doesn't. Intel has several times the number of designers, engineers, and fabs that AMD does. Because of their resources, Intel is able to completely scrap a CPU project and switch to something else if they need to. AMD can't, or at least not without seriously hurting the company. The fact that AMD is even competitive with Intel says quite a lot about the talent they have in-house.

      The thing that I find most interesting was that last year when Intel was on the ropes, they offered the IDF preview to select web sites in order to generate buzz and FUD regarding Intel vs. AMD. And it worked too, because for 3 months everybody was talking about how Intel was king again even though they still hadn't shipped any Conroe CPUs. This year they're doing the same thing with their new Penryn architecture, and they don't appear to be on the ropes. Why would you tip your hand early if you don't have to? That indicates to me that Intel is concerned about something, and I suspect that something is Barcelona.

      Even more interesting is that none of the previews compare Conroe with Penryn at the same clock speed. Most of the benchmarks that I have seen show a roughly 20% performance advantage for Penryn. But the Penryn CPU was running at about 14% higher clock speed, a 25% higher FSB, and with 50% more L2 cache onboard. Now who's playing the Gigahertz Game? I suspect that if you overclocked a Conroe and its FSB to reach the same speeds, you probably would see little to no difference with Penryn. Which means that Intel's response to the all-new Barcelona is going to be...you guessed it...run up the clock speed and slap on some cache, because we're in for a bumpy ride.
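The per-clock normalization being argued for here is simple arithmetic. Ignoring the FSB and cache differences (so this is a sanity check, not a benchmark), a 20% win at a 14% higher clock leaves only about a 5% per-clock improvement.

```c
/* Normalize the cited Penryn preview numbers by clock speed. The 20% and
 * 14% figures come from the comment above; FSB and cache are ignored. */
#include <stdio.h>

int main(void)
{
    double perf_ratio  = 1.20;      /* Penryn vs. Conroe, as benchmarked */
    double clock_ratio = 1.14;      /* Penryn's clock advantage */

    printf("per-clock advantage: ~%.1f%%\n",
           (perf_ratio / clock_ratio - 1.0) * 100.0);
    return 0;
}
```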
      • by quarter ( 14910 ) on Saturday April 21, 2007 @12:18PM (#18824871) Homepage
        Penryn is mostly just a die shrink. All things equal (clock, FSB, cache) it should not be any faster or slower than a Conroe.

        Moving to 45nm gives you extra headroom for clock speed, extra transistor budget, etc. So they might just be demoing systems with similar power envelopes/cost/whatever.

        Throw some SSE4 enabled apps in the mix and the Penryn would outperform an equalized Conroe by a fair margin.
    • It may be that AMD is worried because they had a roughly $650 million loss last year. Yet I don't think they're worried too much - they are still well in the game. It would take but one thing to be the top seller again: move back to one single socket for all CPUs. And one only.
      When AMD came out with Socket A it was such a relief to know that your hardware would fit, be it in economy, business or first class. If they'd ditch their socket confusion, people would turn to AMD simply for easy o
  • So because AMD is releasing a 3GHz Opteron, suddenly the "Gigahertz Race" is magically back on? Is this some AMD guy trying to win support after the last article talked about their massive operating losses?
  • Frankly, I run my laptop (Pentium M, not sure which model) clocked at 600 MHz whenever it's not connected to a power source. It works great, and I really don't see much of a difference.
  • Lol, 3GHz? We had that when, like 3-4 years ago? If the race was really back on, show me a 5-10GHz CPU on air (not vapo-cooled).
  • The Gigahertz race is not back on. As far as raw speed goes, we've been around 3GHz for probably longer than any other 'milestone' speed in the last 12 years.

    It seems that for all practical purposes, the speed war is dead and will stay that way barring a major change in chip materials.
  • I think it is actually funny that in order to use Vista without it being maddeningly slow, you will need 4GHz to have the same performance with a spreadsheet as you would under CP/M with 4MHz.

    Really says a lot on what happens when you let a monopoly like Microsoft exist, and dictate what utter garbage you will use.

    Cheers
  • by heroine ( 1220 ) on Saturday April 21, 2007 @01:40PM (#18825485) Homepage
    Bumping the speed from 2.5Ghz to 3Ghz is hardly a return to the Ghz race. This stuff is still based on cold war technology and the limit of cold war technology has been reached. They need a serious breakthrough in interconnect speeds now.
  • by mrnick ( 108356 ) on Saturday April 21, 2007 @02:53PM (#18825883) Homepage
    After reading the article and many of the responses here on Slashdot, I think many of the readers here are a little off base about what the issues are behind many of the problems you proposed that increased CPU performance might solve.

    I read many comments about graphic editing. Given that hardly a day goes by where I don't do some graphic editing, I think I am qualified to respond to this. The synergy lab at my university, where I am pursuing my Masters in Computer Science, has a dual Power Mac with 2 Intel dual-core 2.66GHz CPUs but only 1 gigabyte of RAM. At home I have a dual Power Mac G4 with two 800MHz CPUs. I am not trying to argue here that the IBM 970 processors are superior to the Intel, though they may well be (lol), but I have 4 gigabytes of RAM in my home system. I am way more productive working on my home system due to the increased memory it has. Graphic editing by nature is a RAM-intensive process. If I were going to buy a new system that would be dedicated to graphic editing, I would first spend my budgeted amount of money on making sure the system had the maximum amount of RAM (16 gigabytes currently) before I gave any thought to the processor(s) for such a system.

    Also, many people mentioned, either directly or indirectly, processes that simulate AI. I make a point of saying "simulate" because our society has yet to produce any software that can come close to claiming to contain any AI. This is not a problem that can be solved by increased CPU or RAM or any other system resource. The #1 problem that plagues any currently developed program in its attempts to simulate AI is that our society has not developed a strong enough knowledge base of intelligence itself to understand how to write code that gives any acceptable level of simulation of it. If Intel were to release a 500 THz CPU tomorrow, there would be no significant increase in real or simulated AI. With enough CPU speed and RAM it might one day be possible to create a tree (data structure) that contains all the possible moves in a game of chess, which would allow a computer to play a perfect game of chess, but this would not be an application of AI. Although at one time people believed that chess was an application of AI, we have since realized that this is not the case; if a computer did have the complete tree for the game of chess, a significant accomplishment, it would simply be an application of brute force. I have yet to see any application of AI (again, real or simulated) that, faced against a human opponent, can compete at a level that would challenge the human. Again, this is due to a basic lack of understanding and programming skill rather than a lack of processing power. IMHO, someday man may gain enough understanding and programming skill to not only simulate but actually program AI. When I think of this possibility I imagine it will be one of those eureka moments rather than a slow progression based upon our current study of AI. At best we are currently guessing and hoping that we might stumble on something that can simulate AI, and even with all the computing power available in the world I do not believe we would be any further along.
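To make the "complete game tree is just brute force" point concrete, here is an exhaustive enumeration of every possible game of tic-tac-toe, a tree small enough to walk in milliseconds. It uses no heuristics and no learning, which is exactly why nobody would call it AI; chess is the same idea with a tree far too large to ever enumerate.

```c
/* Exhaustive game-tree enumeration for tic-tac-toe: count every possible
 * game ending in an X win, an O win, or a draw. Pure brute force. */
#include <stdio.h>

static long x_wins, o_wins, draws;

static int winner(const char *b)              /* returns 'X', 'O', or 0 */
{
    static const int lines[8][3] = {
        {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}
    };
    for (int i = 0; i < 8; i++) {
        char c = b[lines[i][0]];
        if (c != '.' && c == b[lines[i][1]] && c == b[lines[i][2]])
            return c;
    }
    return 0;
}

static void search(char *b, char to_move)
{
    int w = winner(b);
    if (w == 'X') { x_wins++; return; }
    if (w == 'O') { o_wins++; return; }

    int moved = 0;
    for (int i = 0; i < 9; i++) {
        if (b[i] == '.') {
            b[i] = to_move;
            search(b, to_move == 'X' ? 'O' : 'X');
            b[i] = '.';
            moved = 1;
        }
    }
    if (!moved)
        draws++;                              /* board full, nobody won */
}

int main(void)
{
    char board[] = ".........";
    search(board, 'X');
    printf("terminal games: %ld (X wins %ld, O wins %ld, draws %ld)\n",
           x_wins + o_wins + draws, x_wins, o_wins, draws);
    return 0;
}
```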

    If anything, an increase in hardware performance, be it CPU, RAM, or whatnot, is generally followed by more and more inefficient code. Why make your code more efficient when the lack of performance in your programs can easily be overcome by ever-increasing system resources?

    I remember when one had to upgrade their computer each year to be able to continue to have a viable system. Long gone are those days. I have had my primary system for nearly 7 years now. I will need to upgrade soon but not because my system is lacking in hardware performance but because of the scenario I described above in which programmers continue to use system hardware as a crutch. If some physical limitation were to present itself that prevented the creation of faster CPUs by either increased clock cycles or additional cores then programmers would adapt and we would cont