The Gigahertz Race is Back On
An anonymous reader writes "When CPU manufacturers ran up against the power wall in their designs, they announced that 'the Gigahertz race is over; future products will run at slower clock speeds and gain performance through the use of multiple cores and other techniques that won't improve single-threaded application performance.' Well, it seems that the gigahertz race is back on — a CNET story talks about how AMD has boosted the speed of their new Opterons to 3GHz. Of course, the new chips also consume better than 20% more power than their last batch. 'The 2222 SE, for dual-processor systems, costs $873 in quantities of 1,000, according to the Web site, and the 8222 SE, for systems with four or eight processors, costs $2,149 in quantities of 1,000. For comparison, the 2.8GHz 2220 SE and 8220 SE cost $698 and $1,514 in that quantity. AMD spokesman Phil Hughes confirmed that the company has begun shipping the new chips. The company will officially launch the products Monday, he said.'"
Re:More Power for What? (Score:2, Interesting)
Re:More Power for What? (Score:3, Interesting)
The interesting thing about the CPU market now is that most of the workloads that really tax a general-purpose CPU (and there aren't a huge number left) are the ones that perform very badly on a general-purpose CPU anyway. For home use, something like one of TI's ARM cores with an on-board DSP might well give better performance.
Re:AMD is desperate (Score:5, Interesting)
Not so. AMD never said that they wouldn't increase clock speed on their CPUs. In fact, that's pretty much standard practice to get higher performance. So now their manufacturing process is capable of producing 3 GHz CPUs in sufficient volumes to sell, and they're selling them. As the process is refined there may be faster CPUs.
Intel does the same thing. As the manufacturing process is refined they are able to produce more and more CPUs at higher clock speeds. It's not a sign of anything other than business as usual.
Funny, Intel was chumped by AMD just like this a couple of years ago; why did AMD let themselves get tagged back? Intel woke up in a major way. Can AMD? Doesn't look too good...
AMD has more than just clock speed coming: Barcelona (aka K10) is supposed to ship in the next month or two. That's generally expected to take back the performance crown from Intel, and even if it doesn't, it should at least eliminate the performance gap. For historical reference, AMD pretty much bitchslapped Intel when they released the Athlon 64. It took Intel 4 years to finally catch up to AMD and pass them with the Core 2 architecture, and even today the Opterons are still the higher performers in 4- and 8-processor systems. If Barcelona turns out to be as fast as or faster than Core 2 (and by all rights, it should be), then it will have taken AMD only 1 year to catch up. Conroe was "previewed" at Spring IDF in 2006, but didn't ship until several months later.
As for why it's taken AMD a year to catch up, it takes quite a long time to design, lay out, test, and debug a new CPU. Once all that is done, the manufacturing process has to be designed and tested too. Then the CPUs have to actually be produced, and once production has started it takes almost 2 months to go from silicon wafers to functioning CPUs. Something to keep in mind, however, is that Intel is a much, much larger company than AMD, and that Intel runs several CPU design teams concurrently while AMD doesn't. Intel has several times the number of designers, engineers, and fabs that AMD does. Because of those resources, Intel is able to completely scrap a CPU project and switch to something else if it needs to. AMD can't, or at least not without seriously hurting the company. The fact that AMD is even competitive with Intel says quite a lot about the talent they have in-house.
The thing I find most interesting is that last year, when Intel was on the ropes, they offered the IDF preview to select web sites in order to generate buzz and FUD regarding Intel vs. AMD. And it worked, too, because for 3 months everybody was talking about how Intel was king again even though they still hadn't shipped any Conroe CPUs. This year they're doing the same thing with their new Penryn architecture, and yet they don't appear to be on the ropes. Why would you tip your hand early if you don't have to? That indicates to me that Intel is concerned about something, and I suspect that something is Barcelona.
Even more interesting is that none of the previews compare Conroe with Penryn at the same clock speed. Most of the benchmarks that I have seen show a roughly 20% performance advantage for Penryn. But the Penryn CPU was running at about a 14% higher clock speed, with a 25% higher FSB and 50% more L2 cache onboard. Now who's playing the Gigahertz Game? I suspect that if you overclocked a Conroe and its FSB to reach the same speeds, you would probably see little to no difference from Penryn. Which means that Intel's response to the all-new Barcelona is going to be...you guessed it...run up the clock speed and slap on some cache. We're in for a bumpy ride.
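As a rough back-of-the-envelope check (Python, using only the percentages quoted above; these are the claimed figures, not official benchmark data), dividing out the clock difference suggests how little advantage is left once frequency is accounted for:

    # Back-of-the-envelope: normalize the quoted Penryn-vs-Conroe numbers by clock.
    # The 20% and 14% figures are the ones cited above, not official benchmark data.
    penryn_speedup = 1.20   # claimed overall performance advantage over Conroe
    clock_ratio = 1.14      # the Penryn sample ran at roughly 14% higher clock

    per_clock_gain = penryn_speedup / clock_ratio - 1.0
    print("Per-clock advantage: %.1f%%" % (per_clock_gain * 100))
    # Roughly 5%, before crediting the 25% faster FSB and 50% larger L2 cache,
    # which likely account for much of what remains.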
Re:Show me 5GHZ at least, then the race is back on (Score:3, Interesting)
Here you are [wikipedia.org].
RAM & Programming is what is missing NOT CPU (Score:3, Interesting)
I read many comments about graphic editing. Since hardly a day goes by when I don't do some graphic editing, I think I am qualified to respond to this. The synergy lab at my university, where I am pursuing my Masters in Computer Science, has a Dual Power Mac with two Intel dual-core 2.66 GHz CPUs but only 1 Gigabyte of RAM. At home I have a Dual Power Mac G4 with two 800 MHz CPUs. I am not trying to argue here that the IBM 970 processors are superior to the Intel, though they may well be (lol), but I have 4 Gigabytes of RAM in my home system. I am way more productive working on my home system due to the increased memory it has. Graphic editing by nature is a RAM-intensive process. If I were going to buy a new system dedicated to graphic editing, I would first spend my budgeted money on making sure the system had the maximum amount of RAM (16 Gigabytes currently) before I gave any thought to the processor(s) for such a system.
Also, many people mentioned, either directly or indirectly, processes that simulate AI. I make a point of saying "simulate" because we have yet to produce any software that can come close to claiming to contain real AI. This is not a problem that can be solved by more CPU, RAM, or any other system resource. The #1 problem plaguing any current program that attempts to simulate AI is that we have not developed a strong enough understanding of intelligence itself to know how to write code that simulates it at any acceptable level. If Intel were to release a 500 THz CPU tomorrow, there would be no significant advance in real or simulated AI. With enough CPU speed and RAM it might one day be possible to build a tree (data structure) containing all the possible moves in a game of chess, which would allow a computer to play a perfect game. But that would not be an application of AI. At one time people believed chess was an AI problem; we have since realized it is not, and a computer holding the complete game tree for chess, a significant accomplishment, would simply be applying brute force. I have yet to see any application of AI (again, real or simulated) that can compete against a human opponent at a level that would truly challenge the human. Again, this is due to a basic lack of understanding and programming skill rather than a lack of processing power. IMHO, someday man may gain enough understanding and skill to not only simulate but actually program AI. When I think of that possibility, I imagine it will be one of those eureka moments rather than a slow progression from our current study of AI. At best we are guessing and hoping to stumble on something that can simulate AI, and even with all the computing power in the world I do not believe we would be any further along.
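For what it's worth, the brute-force game-tree idea above is easy to sketch. Here is a minimal, hypothetical version in Python; the Game interface (moves, apply, is_over, score) is assumed purely for illustration, and a real chess tree is of course far too large to enumerate this way:

    # Minimal sketch of exhaustive game-tree search ("brute force", as described
    # above). The Game interface (moves/apply/is_over/score) is hypothetical.
    def minimax(game, maximizing=True):
        if game.is_over():
            return game.score(), None          # e.g. +1 win, 0 draw, -1 loss
        best_score, best_move = None, None
        for move in game.moves():
            score, _ = minimax(game.apply(move), not maximizing)
            better = (best_score is None or
                      (maximizing and score > best_score) or
                      (not maximizing and score < best_score))
            if better:
                best_score, best_move = score, move
        return best_score, best_move

Searching the full tree like this involves no understanding of chess at all, just enumeration, which is exactly the point being made.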
If anything, an increase in hardware performance, be it CPU, RAM, or whatnot, is generally followed by more and more inefficient code. Why make your code more efficient when the lack of performance in your programs can easily be overcome by ever-increasing system resources?
I remember when one had to upgrade their computer every year just to keep a viable system. Those days are long gone. I have had my primary system for nearly 7 years now. I will need to upgrade soon, not because my system is lacking in hardware performance but because of the scenario I described above, in which programmers continue to use system hardware as a crutch. If some physical limitation were to present itself that prevented the creation of faster CPUs, by either increased clock speeds or additional cores, then programmers would adapt and we would cont
Re:More Power for What? (Score:5, Interesting)
Programmers are sloppy, because sloppy is all the industry wants to pay for. Way back in the day when CPU cycles were super expensive, programmers were paid better money and given the time to tweak the crap out of everything, because if they didn't, the app would run dog slow and people wouldn't buy it. The problem is that somehow, people now tolerate underperforming software. They see it as a reason to upgrade... good god, they actually fall for it! Gee, I certainly remember surfing the web on a 486 with 8MB of RAM back in the day. Now my OS needs a good 50-60MB to itself, and that's after I ripped out all the cruft. Normally it would be 100MB just for sitting idle with a background image and a neon-colored task bar. Gee, uh, where'd all my system resources go? Does it really require 7.3 million bytes to house a TCP/IP stack when some embedded devices pull it off with, oh, 6KB or so?
The truth, however, is that if we wrote code as tightly and meticulously as we did in the '80s and '90s, software would perform, on average, at least 5 to 10 times faster than it does today, excluding hard bottlenecks like disk access and network bandwidth. It would also take 50 times longer to write, and I'd say less than 1% of people who call themselves "programmers" are even able to write such finely tuned code. Everyone doing VB? Out. Everyone doing RAD? Out. All you Ruby on Rails weenies? Follow me to this dark alley *BLAM*
I remember spending hours on little loops, with a CPU reference manual and a calculator. Sometimes I did little timing sketches to figure out the best way to stagger memory accesses so as to not starve the execution pipes. Often that meant weaving two disparate functions together, one memory-hungry, the other CPU-hungry. Together they filled each other's latency pockets, and my routine ran nearly three times faster as a result. No C compiler I've ever seen could do such kinky things. Heck, one time I even wrote a little assembler demo whose code executed twice: forward, then backward. The opcodes and data were carefully selected to represent valid instructions when reversed. It was more than a nerdy trick; it allowed my routine to fit entirely in the CPU's on-die cache, which gave it a huge speed boost and, more importantly, enabled a lowly 486 to mix 48 sound channels in real time. Today's Cubase can't even handle a couple dozen channels without stuttering and/or crashing, on computers over 100 times faster than a 486.
Re:You'd be surprised (Score:1, Interesting)
Instead of replacing humans, simply make it easier for a human to record all the possible story branches. Naturally, this is also impossible, so instead build a better algorithm for the current AIs. All you need is a storyline-adaptation AI; at the front driving the story would be the normal AIs (whatever that means). The storyline AI's job is simply to control all the NPCs by looking at how the story is playing out so far, calculating where it should go, and then telling the other AIs about it (assuming they are task-based AIs, all you need to do is give them a new task).
I feel the solution to this problem lies in genetic algorithms. As you might know, the point of genetic algorithms is simply adaptation: changing things based on preset rules while attempting to reach a preset goal. Combined with a form of logical reasoning (as exists in logic programming languages), you can create an adaptable storyline AI whose sole job is to alter the storyline. Since I'm just making this up as I go along, don't expect things to be very well refined.
First, the need for logical reasoning comes from the need to drive the characters of a story: what is the logical reaction to an event for a given character? That is the only need for such reasoning skills, and pretty much any algorithm that produces what we need can be used (the choice here will affect how characters react; a bad choice could lead to volatile personalities that change drastically with only a small alteration to the storyline). The story itself is altered with the genetic algorithm. Each alteration is checked for consistency against the characters' logical-reasoning parts, to make sure the new story is consistent with the characters. Now, I'm no expert on genetic algorithms, and since I'm currently studying other topics I have never used them, so please bear with me as I simply take all I know about them from Wikipedia and try to build one up.
First: chromosome representation; we need to represent the storyline in these terms. Simple enough: each chromosome represents a storyline, and each gene represents an action/task that someone/something/some group performs. Each gene is split, with the task and who performs it kept separate, so that the algorithm can change the gene to have a new person perform the task. In this way, the chromosome reads like a storyline: a list of tasks and who is to do them.
Second: the algorithm itself (selection and reproduction of genes). Since the chromosome represents the story, the algorithm simply needs to produce a logically consistent story. Each gene is composed of two parts, the person and the task, both of which can be altered by the algorithm. First we find all the tasks that the current storyline needs accomplished; for each task we also find someone to perform it. This is done by finding all those who can, then selecting one of them. What we end up with is a set of genes (tasks and the people to do them) that is logically consistent. The algorithm then steps, causing the story to change (since we only selected genes that have yet to complete, the original story is still intact, but how we get there changes). A rough code sketch of this repair step follows below.
The goal of the algorithm is not to solve the story but to correct it. Each time the story becomes unbalanced (say, you killed someone necessary to the story's future), the genetic algorithm is run to correct the imbalance. Anything can cause an imbalance; it's really up to the game to decide. The algorithm is not run all the time, only to correct and adapt the existing story.
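Here is a very rough sketch of what that representation and repair step might look like in Python. can_perform() and is_consistent() are hypothetical hooks into the character-reasoning layer, and this is only the mutate-and-check loop, not a full genetic algorithm with crossover and fitness scoring:

    import random

    # A chromosome is a list of genes; each gene pairs a character with a task.
    # can_perform() and is_consistent() are hypothetical hooks into the game's
    # character-reasoning layer.
    def repair_story(chromosome, characters, can_perform, is_consistent,
                     max_generations=100):
        """Mutate genes until the storyline is consistent again (best effort)."""
        story = list(chromosome)
        for _ in range(max_generations):
            if is_consistent(story):
                return story
            # Pick a gene and reassign its task to someone who can actually do it.
            idx = random.randrange(len(story))
            _character, task = story[idx]
            candidates = [c for c in characters if can_perform(c, task)]
            if candidates:
                story[idx] = (random.choice(candidates), task)
        return story  # no consistent assignment found within the budget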
The story is simply a set of possible tasks, and the conditions upon which the story ends. Various branches would still be programmed b
Re:More Power for What? (Score:1, Interesting)
Have you ever touched an HD camera? Recording in progressive causes strobing UNLESS you are recording at 60p.
Most sports are now being recorded at 1080i; it's easier for the stations to handle.
Progressive has craploads of problems that make it look crappy IF you do not know how to use the camera: fast-moving objects strobe, and incorrect pans or any shots without a tripod also look like crap because of the strobing.
Lumpy is 100% right. As a camera tech at a major network, I can tell you that shooting in 720p or 1080p sucks and can only be done for specific things. Sports is NOT one of them.
What something is broadcast in and what it is SHOT + EDITED in are two different things.
Please take your disposable income, buy an HD camcorder, and then tell me how wonderful your 720p or 1080p recordings look...
Re:You'd be surprised (Score:3, Interesting)
What I had in mind was more Bayesian. You know, in much the same way you can train a computer to translate texts by feeding it texts and translations (Google is doing just that), one could feed it various gaming situations from enough testers to have it learn all sorts of stuff. E.g., to recognize when a game has flown off the hook in your description, or is about to, or which situations are and aren't working for certain play styles. Or, say, historical forts, so it can design one itself.
It doesn't even have to be all pre-trained; it can just collect anonymized data from players daily, send it to a central server, and refine the story-engine data for the clients. So even if you're an early buyer, at least you'd eventually have a reason to replay it, as by then the game will have been retrained a lot.
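As a toy illustration of that retraining loop (Python; the situation names and the idea of simply tallying outcomes are made up for the example), the central server could do little more than count which story situations players abandon:

    from collections import Counter

    # Toy sketch: tally which story situations lead to players quitting versus
    # finishing, so the engine can learn which branches "fly off the hook".
    # Situation names and the upload mechanism are hypothetical.
    situation_outcomes = Counter()

    def record_session(events, finished):
        """Fold one anonymized play session into the pooled counts."""
        for situation in events:
            situation_outcomes[(situation, finished)] += 1

    def abandon_rate(situation):
        """Estimate P(player quits | situation) from the pooled data."""
        quit_count = situation_outcomes[(situation, False)]
        done_count = situation_outcomes[(situation, True)]
        total = quit_count + done_count
        return quit_count / total if total else None

    record_session(["ally_killed_early", "fort_siege"], finished=False)
    record_session(["fort_siege"], finished=True)
    print(abandon_rate("fort_siege"))  # 0.5 with this tiny sample

Clients would upload their sessions daily, and the story engine would steer away from branches with high abandon rates.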
The more I think about it, the more I think the two can be combined. Your genetic approach could be driven by a statistical engine.