AMD Hardware

The Gigahertz Race is Back On

An anonymous reader writes "When CPU manufacturers ran up against the power wall in their designs, they announced that 'the Gigahertz race is over; future products will run at slower clock speeds and gain performance through the use of multiple cores and other techniques that won't improve single-threaded application performance.' Well, it seems that the gigahertz race is back on — a CNET story talks about how AMD has boosted the speed of their new Opterons to 3GHz. Of course, the new chips also consume better than 20% more power than their last batch. 'The 2222 SE, for dual-processor systems, costs $873 in quantities of 1,000, according to the Web site, and the 8222 SE, for systems with four or eight processors costs $2,149 for quantities of 1,000. For comparison, the 2.8GHz 2220 SE and 8220 SE cost $698 and $1,514 in that quantity. AMD spokesman Phil Hughes confirmed that the company has begun shipping the new chips. The company will officially launch the products Monday, he said.'"
  • Oh come on (Score:4, Insightful)

    by Anonymous Coward on Saturday April 21, 2007 @07:04AM (#18822605)
    No sane person actually believed that the gigahertz race was over. But who cares anyway? It's just more power for slightly faster operation.

    I much prefer a fanless processor.
  • by Jartan ( 219704 ) on Saturday April 21, 2007 @07:20AM (#18822669)

    How many users actually *use* how much power they already have? I use a lot, but it's mostly dependent on the graphics card.


    You're correct that people don't need this much power for their desktops but there are still plenty of uses for more speed in servers and for certain other applications.
  • by Dwedit ( 232252 ) on Saturday April 21, 2007 @07:21AM (#18822681) Homepage
    One word: Flash.
    Flash is ridiculously inefficient, and requires an extremely beefy machine to render real-time full-screen animation.
  • by Name Anonymous ( 850635 ) on Saturday April 21, 2007 @07:32AM (#18822723)

    How many users actually *use* how much power they already have? I use a lot, but it's mostly dependent on the graphics card.

    You're correct that people don't need this much power for their desktops but there are still plenty of uses for more speed in servers and for certain other applications.
    Actually I think the correct phrase is "most people don't need..." and even that may be inaccurate. Someone who does heavy video work can certainly chew up a lot of processing power. Heavy image work can use a lot of processing power in bursts.

    Then there is the big fact that programmers these days are sloppy and waste resources. A machine that is faster than one needs today will only be adequate in 2 or 3 years given upgrades to all the programs. (Am I being cynical? Maybe, but then again, maybe not.)
  • by TeknoHog ( 164938 ) on Saturday April 21, 2007 @07:39AM (#18822751) Homepage Journal

    I was kind of hoping the gigahertz race would end so Microsoft would have to stop making each version of Windows slower than the last.

    You're missing the whole point. CPU performance is increasing all the time, which allows Microsoft to continue making everything slower. However, the GHz race had little to do with performance; Intel pushed their Pentium 4 toward 4 GHz even though it performed worse than many competing CPUs running between 2 and 3 GHz. They probably did it because most consumers would only look at raw GHz instead of performance.
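
    To put rough numbers on that: delivered throughput is roughly instructions-per-cycle times clock frequency, so a lower-clocked chip with better IPC can come out ahead. A minimal C++ sketch; the IPC figures are invented purely for illustration, not measurements of any real chip:

        #include <cstdio>

        // Rough model: throughput ~ IPC * clock. The IPC figures below are
        // invented for illustration only, not measured values.
        int main() {
            struct Cpu { const char* name; double ghz; double ipc; };
            const Cpu cpus[] = {
                {"long-pipeline chip, high clock",   3.8, 0.9},
                {"short-pipeline chip, lower clock", 2.4, 1.6},
            };
            for (const Cpu& c : cpus)
                std::printf("%-34s %.1f GHz x %.1f IPC = %.2f Ginstr/s\n",
                            c.name, c.ghz, c.ipc, c.ghz * c.ipc);
            return 0;
        }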

  • by Anonymous Coward on Saturday April 21, 2007 @07:51AM (#18822801)
    Just thought I'd remind you all that the article is about Opteron processors for the server platform.

    Servers still do need more power,
    with virtualisation software now allowing dozens of people to share one server to replace their desktop apps.

    We still have a long way to go;
    this is only the beginning.

  • Re:NO NO NO NO NO (Score:5, Insightful)

    by Soul-Burn666 ( 574119 ) on Saturday April 21, 2007 @07:54AM (#18822819) Journal
    So take a top-end 3GHz model, underclock it and reduce its voltage. You still get good performance, with lower power consumption.
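
    The saving comes from the usual dynamic-power relation, roughly P ~ C * V^2 * f: frequency enters linearly but voltage enters squared, so lowering both pays off disproportionately. A minimal C++ sketch with invented voltage and frequency figures:

        #include <cstdio>

        // Dynamic power ~ C * V^2 * f. The voltages/frequencies are invented
        // for illustration; the capacitance term cancels in the ratio.
        int main() {
            const double v_stock = 1.35, f_stock = 3.0;  // volts, GHz (hypothetical)
            const double v_low   = 1.10, f_low   = 2.4;  // undervolted + underclocked
            const double rel_power = (v_low * v_low * f_low) /
                                     (v_stock * v_stock * f_stock);
            std::printf("Relative clock:         %.0f%% of stock\n", 100.0 * f_low / f_stock);
            std::printf("Relative dynamic power: %.0f%% of stock\n", 100.0 * rel_power);
            return 0;
        }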
  • Re:NO no NO no NO (Score:2, Insightful)

    by Cinnamon Whirl ( 979637 ) on Saturday April 21, 2007 @07:59AM (#18822855)
    Don't do THAT please. This ISN'T a comic BOOK. ;)
  • You'd be surprised (Score:5, Insightful)

    by Moraelin ( 679338 ) on Saturday April 21, 2007 @08:13AM (#18822905) Journal
    You'd be surprised how much more _can_ be made with a CPU.

    E.g., sure, we like to use the stereotypical old mom as an example of someone who only sends emails to the kids and old friends. Unfortunately it's false. It was true in the 90's, but now digital cameras are everywhere and image manipulation software is very affordable. And so are the computers which can do it. You'd be surprised at the kind of heavy-duty image processing mom does on hundreds of pictures of squirrels and geese and whatever was in the park on that day.

    And _video_ processing isn't too far out of reach either. It's a logical next step too: if you're taking pictures, why not short movies? Picture doing the same image processing on some thousands of frames in a movie instead of one still picture.

    E.g., software development. Try building a large project on an old 800 MHz slot-A Athlon, with all optimizations on, and then tell me I don't need a faster CPU. Plus, nowadays IDEs aren't just dumb editors with a "compile" option in the menus any more. They compile and cross-reference classes all the time as you type.

    E.g., games, since you mention the graphics card. Yeah, ok, at the moment most games are just a glorified graphics engine, and mostly just use the CPU to pump the triangles to the graphics card. Well that's a pretty poor model, and the novelty of graphics alone is wearing off fast.

    How about physics? They're just coming into fashion, and fast. Yeah, we make do at the moment with piss-poor approximations, like Oblivion's bump-into-a-table-and-watch-plates-fly-off-supersonic engine. There's no reason we couldn't do better.

    How about AI? Already in X2 and X3 (the space sim games) it doesn't only simulate the enemies around you, but also what happens in the sectors where your automated trade or patrol ships are. I want to see that in more games.

    Or how about giving good AI to city/empire building games? Tropico already simulated up to 1000 little people in your city, going around their daily lives, making friends, satisfying their needs, etc. Not just doing a dumb loop, like in Pharaoh or Caesar 3, but genuinely trying to solve the problem of satisfying their biggest need at the moment: e.g., if they're hungry, they go buy food (trekking across the whole island if needed), if they're sick, they go to a doctor, etc. I'd like to see more of that, and more complex at that.

    Or let's have that in RPGs, for that matter. Oblivion for example made a big fuss about how smart and realistic their AI is... and it wasn't. But the hype it generated does show that people care about that kind of thing. So how about having games with _big_ cities, not just 4-5 houses, but cities with 1000-2000 inhabitants, which are actually smart. Let's have not just a "fame" and "infamy" rating, let's have people who actually have a graph of acquaintances and friends, and actually gradually spread the rumours. (I.e., you're not just the guy with 2 points infamy, but it's a question of which of your bad deeds did this particular NPC hear about.) Let's not have omniscient guards that teleport, but actually have witnesses calculate a path and run to inform the guards, and lead them to the crime. Etc.

    Or how about procedurally generated content? The idea of creating whole cities, quests and whatnot procedurally isn't a new one, but unfortunately it tends to create boring repetition at the moment. (See Daggerfall or Morrowind.) How about an AI complex enough to generate reasonably interesting stuff. E.g., not just recombine blocks, but come up with a genuinely original fortress from the ground up, based on some constraints. E.g., how about generating whole story arcs? It's not impossible, it's just very hard.

    And if you need to ask "why?", let's just say: non-linear stories. Currently if you want, for example, to play a light side and a dark side, someone has to code two different arcs, although most players will only see one or the other. If you add more points and ways you can branch the story (e.g.
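
    The needs-driven simulation described above (each inhabitant working on whichever need is currently most pressing, instead of running a fixed loop) boils down to a greedy per-agent update. A bare-bones C++ sketch with invented needs and no pathfinding, just to show the shape of it:

        #include <algorithm>
        #include <cstdio>
        #include <map>
        #include <string>
        #include <vector>

        // Minimal needs-driven agent: each tick, needs grow, then the agent
        // works on whichever need is currently most pressing. Real games add
        // pathfinding, schedules, money, social graphs, and so on.
        struct Agent {
            std::string name;
            std::map<std::string, double> needs;  // 0 = satisfied, 1 = desperate

            void tick() {
                for (auto& n : needs) n.second = std::min(1.0, n.second + 0.05);
                auto worst = std::max_element(
                    needs.begin(), needs.end(),
                    [](const auto& a, const auto& b) { return a.second < b.second; });
                std::printf("%s goes to take care of '%s' (%.2f)\n",
                            name.c_str(), worst->first.c_str(), worst->second);
                worst->second = 0.0;  // pretend the errand succeeded
            }
        };

        int main() {
            std::vector<Agent> town = {
                {"Ana",  {{"food", 0.7}, {"health", 0.2}, {"fun", 0.4}}},
                {"Bert", {{"food", 0.1}, {"health", 0.9}, {"fun", 0.3}}},
            };
            for (int day = 0; day < 3; ++day)
                for (auto& a : town) a.tick();
            return 0;
        }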
  • Re:Oh come on (Score:4, Insightful)

    by Anonymous Coward on Saturday April 21, 2007 @08:33AM (#18822975)
    The P4 hit 3 GHz, what, 4 years ago? For Opteron to hit 3GHz only now

    To understand how this is not a sign of slacking off by the chip designers, you have to understand that the P4 was able to run at high clock speeds only because it was designed to use a very long pipeline of small functional units. This design has proven to be inefficient because it causes too many pipeline stalls and because it requires a higher clock speed and higher power consumption to achieve the same performance. The more complicated functional units of chips with shorter pipelines cannot be clocked as fast, but they perform better at the achievable clock rates than the P4 did at higher clock rates. The last Gigahertz race was ended by a shift of architecture, not by "hitting a wall". Then came multicore designs, which further reduced the need and opportunity for higher clock rates (heat dissipation is somewhat of a "wall"). All this caused clock rates to grow much slower. Now that chip designers have found ways to control power consumption, increasing the clock rate is viable again, so the race is back on.
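
    Extending the IPC-times-clock arithmetic above: every mispredicted branch flushes the pipeline, and the flush cost grows with pipeline depth, which drags down effective IPC. A toy C++ sketch; the depths, issue rates and mispredict rate are all invented for illustration:

        #include <cstdio>

        // Toy model: effective IPC = issue_ipc / (1 + mispredicts_per_instr * depth),
        // treating pipeline depth as the flush penalty in cycles. All numbers invented.
        int main() {
            struct Design { const char* name; double ghz, issue_ipc, depth; };
            const Design designs[] = {
                {"deep pipeline, high clock",   3.8, 1.2, 31.0},
                {"short pipeline, lower clock", 2.4, 1.8, 14.0},
            };
            const double mispredicts_per_instr = 0.05;
            for (const Design& x : designs) {
                const double ipc = x.issue_ipc / (1.0 + mispredicts_per_instr * x.depth);
                std::printf("%-30s effective IPC %.2f, ~%.2f Ginstr/s\n",
                            x.name, ipc, x.ghz * ipc);
            }
            return 0;
        }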
  • AMD is desperate (Score:2, Insightful)

    by tru-hero ( 1074887 ) on Saturday April 21, 2007 @09:36AM (#18823289)
    This is a desperation move. AMD is back on their heels and their recovery plan is too far off in the future. In hopes of saving face they are pulling the only lever they have, clock speed.

    Funny, Intel was chumped by AMD just like this a couple of years ago, why did AMD let themselves get tagged back? Intel woke up in a major way. Can AMD? Doesn't look too good...

  • Re:Oh come on (Score:1, Insightful)

    by Anonymous Coward on Saturday April 21, 2007 @09:37AM (#18823291)
    You're seeing a standstill where things simply improved in a different way. It was easier and cheaper to improve the logical design than to further speed up the clock. That doesn't mean that clock speed increases are impossible. They have just not been the best choice economically for a while. There are technological constraints to simply ramping up the clock rate, like the size of the clock domain, so the chip designer has to change the architecture to enable higher clock speeds.

    The Netburst architecture was one such design change which was exceptionally successful in enabling higher clock speeds. Unfortunately it wasn't as successful in the performance/watt department, so it hit the heat wall. The chip architecture determines the number of switching operations per clock cycle. In the absence of technology which reduces the power consumption per switching operation/transistor, the only way to reduce power consumption (and heat dissipation requirements) is to optimize the architecture.

    It is no coincidence that the last years have been dominated by power efficiency research (on the electrical level), because that is the wall that stands in the way of higher clock speeds right now. Intel's next dual core CPUs will "overclock" if one core is idle. That's just a fancy way of admitting that they could run much faster if they could get rid of the resulting heat. The silicon is not at a gigahertz limit: the chip simply becomes too hot. Every improvement in that department is a step towards higher clock rates, not towards lower power chips.
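
    The "overclock when one core is idle" point follows directly from a shared power budget: per-core dynamic power rises roughly with the cube of frequency once voltage has to rise with the clock, so one active core can legitimately run faster than two within the same wattage. A toy C++ sketch; the budget and the calibration point are invented for illustration:

        #include <cmath>
        #include <cstdio>

        // Toy model: per-core dynamic power ~ k * f^3 (P ~ V^2 * f, with V rising
        // roughly linearly with f). A fixed package budget means fewer active
        // cores can run at a higher clock. All constants are invented.
        int main() {
            const double budget_watts = 95.0;
            // Calibrate k so that two cores at 2.4 GHz exactly fill the budget.
            const double k = budget_watts / (2.0 * std::pow(2.4, 3.0));
            for (int active = 1; active <= 2; ++active) {
                const double f_max = std::cbrt(budget_watts / (active * k));
                std::printf("%d active core(s): up to ~%.2f GHz within %.0f W\n",
                            active, f_max, budget_watts);
            }
            return 0;
        }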
  • by matt me ( 850665 ) on Saturday April 21, 2007 @10:13AM (#18823473)

    Displaying myspace profiles. The CPU load they produce is astonishing.
    Let me guess: you use Firefox.
  • by morcego ( 260031 ) on Saturday April 21, 2007 @11:15AM (#18823905)

    Then there is the big fact that programmers these days are sloppy and waste resources. A machine that is faster than one needs today will only be adequate in 2 or 3 years given upgrades to all the programs. (Am I being cynical? Maybe, but then again, maybe not.)


    You know, that is something that really pisses me off.

    Yes, I know many times it is not the programmers' fault, and they have to be sloppy to be able to meet that stupid deadline. But, c'mon. Take a look at the system resources something like Beryl uses. Then take a look at what Vista's crappy 3D GUI (what's-its-name) uses.
    And I'm pretty sure Beryl could be even more efficient (though I'm not sure it would be worth the effort).

    2GB to RUN an OS? Tons of processing power (both CPU and GPU) to run a simple game? I can understand beefy CPU needs for things like video encoding and cryptographic processing, along with a few other things. But most apps simply SHOULD NOT need that many resources. IT IS JUST PLAIN STUPID.
  • by quarter ( 14910 ) on Saturday April 21, 2007 @01:18PM (#18824871) Homepage
    Penryn is mostly just a die shrink. All things being equal (clock, FSB, cache), it should not be any faster or slower than a Conroe.

    Moving to 45nm gives you extra headroom for clock speed, extra transistor budget, etc. So they might just be demoing systems with similar power envelopes/cost/whatever.

    Throw some SSE4-enabled apps into the mix and Penryn would outperform an equalized Conroe by a fair margin.
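
    For what it's worth, "SSE4-enabled" means code compiled to use the new instructions, for example SSE4.1's packed dot product, which replaces a small multiply-and-add loop with one instruction. A minimal C++ sketch (requires an SSE4.1-capable CPU and a flag such as g++ -msse4.1):

        #include <cstdio>
        #include <smmintrin.h>   // SSE4.1 intrinsics

        int main() {
            // Two 4-element float vectors (lane 0 first when read left to right).
            __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
            __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);

            // SSE4.1 dot product: multiply all four lanes, sum them, and put
            // the result in the lowest lane (mask 0xF1).
            __m128 dp = _mm_dp_ps(a, b, 0xF1);
            std::printf("dot product = %.1f\n", _mm_cvtss_f32(dp));  // 1*5+2*6+3*7+4*8 = 70
            return 0;
        }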
  • by heroine ( 1220 ) on Saturday April 21, 2007 @02:40PM (#18825485) Homepage
    Bumping the speed from 2.5GHz to 3GHz is hardly a return to the GHz race. This stuff is still based on cold war technology and the limit of cold war technology has been reached. They need a serious breakthrough in interconnect speeds now.
  • by fuzz6y ( 240555 ) on Saturday April 21, 2007 @03:13PM (#18825665)

    Then there is the big fact that programmers these days are sloppy and waste resources.

    Just shut the fuck up already. Anyone with more sense than a bag of rocks will conserve scarce resources, not plentiful ones. Clock cycles are cheap. Profiling is expensive. Megabytes are cheap. Time spent coming up with clever bithacks is expensive, especially since only the cleverest and generally highest-paid developers can do it. Second cores are cheap. More time spent coming up with whole new clever bithacks for the Pentium D version because it has a different relative cost for jumps and floating point ops, thus making your last batch of hacks do more harm than good, is expensive.

    Furthermore, programmers don't so much *waste* resources as utilize them to provide more value. Yeah, I know the 2600 had 128 bytes of RAM, and those were some clever fellas who managed to make playable games on it. Let's see you play WoW on it. I know that your multimedia keyboard probably has more processing power than the PCjr that could once run Word. Fire up that version of Word, insert an image and a table, and hit "print preview."

    Of course there are times when computing power is a precious resource. Console games that have to look awesome on 4 year old hardware. System libraries where every wasted clock will be multiplied by 2000 calls by 10000 different programs. Embedded systems where cost and size simply won't allow you to have those few extra Hz you crave. In these situations, when using extra cycles has more severe consequences than offending your sense of computational aesthetics, I believe you will find that these young whippersnappers aren't wasteful at all.

  • by Colin Smith ( 2679 ) on Saturday April 21, 2007 @06:12PM (#18826775)

    Then there is the big fact that programmers these days are sloppy and waste resources. A machine that is faster than one needs today will only be adequate in 2 or 3 years given upgrades to all the programs. (Am I being cynical? Maybe, but then again, maybe not.)
    No. In fact, you can generalise the statement to... Humans are sloppy and waste resources. Basically any resource which is cheap or easy will be fully consumed by the people using it.

    If CPUs stayed the same power, people would write better code to improve performance.

     
  • by try_anything ( 880404 ) on Sunday April 22, 2007 @04:57AM (#18830519)

    hey that looks great on paper until you realize that today's hardware and software often work slower than 10 years ago. Sure they have a gazillion features, but they can't get the basics right. Word processing, spreadsheets, email... these are things that haven't changed much in a decade.

    Those apps have changed. They have to change. Nobody wants the old ones. Funny how that works, eh? A clone of Microsoft Word 2.0, which I used in high school, would be fast, efficient, adequate for virtually all usage, and commercially worthless. You and I may want the old fast versions, but I haven't noticed anybody developing software just for me. (I keep waiting for that to happen. I also want a cell phone without a friggin' camera. And get those kids off my damn lawn.)

    Those old apps weren't that responsive anyway, and they were incredibly unstable by today's standards. I'm younger than you, but I'm old enough to remember when saving your work every ten minutes was just common sense. These days that would be a neurosis. Maybe that has something to do with simpler, less-efficient code?

    Actually, I haven't noticed anything of equivalent complexity that runs worse today than it did ten years ago. What I do notice is that the market for old, well-understood kinds of applications is dominated by the apps with the most features, so those apps aren't comparable to their predecessors of ten years ago. If you want to compare like with like, compare emacs today with emacs ten years ago. Compare gcc with gcc. It turns out this nifty new hardware is really smokin' fast!

    Yes, it does mean you have to be a bit more diligent about alloc/free, but as a professional software developer, you should be perfectly fine with it.

    No, I'm not fine with it. My brainpower, as copious as it is, is the most precious resource I have. Sure, I can handle memory allocation. Right now I do most of my programming in C++, with a bit of Python here and there. Sometimes I have to use the heap, and that means making sure my objects get deleted. Fine. I've had two or three memory leaks in over three years of work as primary developer on a C++ application, and only one made it into production, where it had a minimal impact. (Thank you, smart pointers.) That doesn't mean manual memory management is a good use of my time. Sure, it used to be a good idea, back in the day. Back in the day, people checked the oil in their cars a few times per month. If it were good to submit yourself to every possible discipline that used to be prudent, no matter what the present-day benefit, we'd all be checking our oil once a week, practicing martial arts, learning Latin, and swearing fealty to a feudal lord. Those things might be good fun and beneficial for our personal development, but neglecting them no longer entails unacceptable consequences.

    You said yourself that extraordinary effort was put into optimization because it was the only way to make programs run acceptably fast. Well, that reason doesn't exist any more. You need a new reason. A good programmer invests his effort where he'll get the greatest return. A good programmer knows that meticulously pairing new and delete costs time and brainpower. (A good programmer also knows that meticulously pairing new and delete restricts his design space, sometimes precluding the simplest, most elegant design. Again, thank God for smart pointers.)

    A lot of people regret the passing of the exigencies that forced them to come of age and learn discipline. For you, the exigency was meager hardware, and your measure of discipline is tight code. For the current generation, the exigency is complexity, and the measure of discipline is simplicity. Twenty years ago, restraint meant being frugal with machine resources, even when it caused you hardship. Today, restraint means being frugal with mental resources, including our own, even when it causes us hardship. It isn't all wine and roses. Imagine
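
    The smart pointers credited above are what make scope-bound cleanup possible without pairing every new with a delete by hand; in 2007 that usually meant Boost's smart pointers, and std::unique_ptr/std::shared_ptr are today's standard equivalents. A minimal C++14 sketch with a made-up Connection class:

        #include <cstdio>
        #include <memory>

        // The scope-bound ownership being described: the smart pointer deletes
        // the object when it goes out of scope, so there is no manual 'delete'
        // to forget.
        struct Connection {
            Connection()  { std::puts("connection opened"); }
            ~Connection() { std::puts("connection closed"); }
            void send()   { std::puts("data sent"); }
        };

        void do_work() {
            auto conn = std::make_unique<Connection>();
            conn->send();
            // No delete needed: ~Connection runs automatically here, even if
            // send() had thrown an exception.
        }

        int main() {
            do_work();
            return 0;
        }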
