Where's My 10 GHz PC?
An anonymous reader writes "Based on decades of growth in CPU speeds, Santa was supposed to drop off my 10 GHz PC a few weeks back, but all I got was this lousy 2 GHz dual processor box -- like it's still 2001...oh please! Dr. Dobbs says the free ride is over, and we now have to come up with some concurrency, but all I have is dollars... What gives?"
Heat is the problem (Score:4, Insightful)
Engineering within limits brings great results (Score:5, Insightful)
Remember when programmers squeezed every single bit of speed and capability out of the machines they had? When computer engineers, faced with limits, still made magic happen.
I hope this ushers that habit back into the profession. We have a lot of great technology right now; let's find a better way to use it and make it more ubiquitous.
Hardware resources and software design (Score:3, Insightful)
Re:Asymptotic (Score:4, Insightful)
A Good Thing? (Score:5, Insightful)
Consider:
We might get some return to efficient coding being the norm, instead of writing systems anyhow and throwing more/faster hardware at it until it runs acceptably (Microsoft; it's you I'm looking at!)
Your (and your business') desktop machine might _not_ become obsolete in no more than 2 years, and might continue in useful service as something more sensible than a whole PC doing the job of a router...
Processor designers might spend more time (I know they already spend some) on innovating new ideas, rather than solving the problems with just ramping up clock speeds.
Cooling/Quietening technology might have a snowball's chance in hell of catching up with heat output?
(and the wild dreaming one)
Games writers might remember about gameplay, rather than better coloured lighting...
Get over it (Score:3, Insightful)
Clock speed isn't everything. (Score:2, Insightful)
Re:Heat is the problem (Score:5, Insightful)
Re:Engineering within limits brings great results (Score:2, Insightful)
I tend to believe that they don't value efficiency and graceful code at all.
Re:Well Moore's Law is not a law... (Score:5, Insightful)
But now even the cheapest PC covers most users' needs. So the CPU designers will continue to innovate, but they will find that people are able to keep their PCs and other electronics longer. Fundamentally, the CPU business will start losing steam and slow down. When people don't need to get new machines, they won't. The perceived premium for the high-end products is getting less and less.
Moore's Law isn't Speed Doubling, it's Transistors (Score:4, Insightful)
Intel has just caved on the speed doubling in particular, by knocking the clock speed off their product designations, mainly because the Pentium M chips were running significantly faster than same-speed P4s. AMD's Athlons have been 'fudging' their numbers by having the product number match not their clock speed, but that of the roughly equivalent P4 chip.
Meanwhile, cache sizes are up, instruction pipes are up, hyperthreading has been here a while, multi-core chips are coming down the pike... we're still getting speed gains, just not in raw clocks.
At the same time, the Amiga philosophy of offloading to other processors is proving true, with more transistors on the high-end graphics processors than there are on the CPUs!
I hate to say it, but what do you think you need 10GHz for anyway? Unless you've got a REALLY fat pipe, there's a limit on how much pr0n you can process
The high-end machines do make good foot-warmers in cold climes.
Re:Hardware resources and software design (Score:4, Insightful)
What about knowing how to use the libraries that have these functions built in, such as the STL? You might not be 100% as efficient with the libraries, but you can be sure that those libraries are tested and optimized, and if you write these functions yourself, they might be buggy and will most likely be slower than what comes with the compiler.
Re:Hardware resources and software design (Score:4, Insightful)
Hogwash! Write first, optimize later...or in the real world: write first, optimize if the customer complains. Even then, what are the chances that I can write a better sorting algorithm than one included in a standard library that was written by someone who studied sorting algorithms? Close to zero.
Concurrency ... again ... (Score:3, Insightful)
...as we've been saying for, oh, at least the last 20 years, which is about the time I was writing up my Ph.D. thesis on concurrent languages and hardware.
As far as I can see (being slightly out of the language/computer design area these days), concurrent machines and languages aren't taking off for the same reasons they didn't take off in the 1980s:
There's more than a handful of generalisations there, but in short: Moore's Law means that nobody is going to buy a highly concurrent computer when consumer PCs are still getting faster, and the people who really need high parallelism (modellers and the like) have their own special-purpose toys to work with.
Re:Engineering within limits brings great results (Score:5, Insightful)
Re:Asymptotic (Score:5, Insightful)
Remember when 9600 baud was close to the limit of copper? Then 33.6. Then they changed how the pair was used, and made 128K ISDN. Then they changed it again and we're getting 7-10 Mbps DSL...sometimes even faster, depending.
I find it hard to say that we're close to the limits of any technology in the computer/telecom field. Someone always seems to find a new way around it.
Re:Hardware resources and software design (Score:4, Insightful)
Maybe so, but it can (and should) be done in specific cases. For example, I maintain a library of binary tree functions, and I do use them frequently. They are well tested and perform beautifully. However, a project I completed recently required a large amount of data to be traversed in a specific manner, so we designed and built our own BTA--specifically optimized for the task.
As you know, poorly designed code will bubble up and bite you in the end... and your project will suffer for it.
Re:Should always specify North or South. (Score:4, Insightful)
That's not true at all. At a mere 2GHz, light can only travel 15cm (6in) through free space in one cycle -- hardly a long distance. Add in modulation and switching delays, and you really can't ignore the board-level latency even with optical interconnect. On the other hand, even on-chip communication takes multiple clock cycles these days, so maybe it wouldn't be that much worse..?
Re:Asymptotic (Score:2, Insightful)
Those two conditions are not mutually exclusive. Actually, they appear to be strongly correlated.
Re:A Good Thing? (Score:5, Insightful)
These both relate to a trend in the market that I believe we're seeing. Consumers are finding that their "old" computers from 2 years ago are still doing their jobs. When I have a 2 GHz Dell that I use for web surfing, word-processing, and e-mail, there's no benefit to upgrading to the newest 3.4 GHz Dell. Though there's a hefty speed bump in there, most users will never know the difference.
Therefore, developers/manufacturers are being forced to focus on things like usability and features. They're making their products smaller and more efficient, easier to use, and making them fit transparently into the user's life better. They're focusing on the whole "convergence" idea.
Instead of people spending money on RAM upgrades, the money is going to smaller/lighter/better digital cameras, iPods, and home theater technology. In short, instead of seeing the same box being rolled out every year with better stats, we're seeing new boxes coming out every year with pretty much the same stats, but better designed boxes-- boxes that are actually more useful than last year's model, and not just faster.
I, for one, hope the trend continues.
Re:Heat is the problem (Score:2, Insightful)
Optical? Optical is not that much faster than electrical. They both run at close to c.
Electrical chips run far below 1% of c.
Optical would be far faster for several reasons:
o no interference between parallel lines
o less waiting at gates for signals coming from the other side of the chip, so they're ready to switch earlier
o generally faster travel time of the signal
angel'o'sphere
Re:Asymptotic (Score:3, Insightful)
My other reasons are a little more subjective, but are largely to do with the fact that both AMD and Intel are investing heavily in developing multi-core CPUs. In Intel's case this has involved the very public scrapping of a promised CPU and a drastic revamp of its roadmap. While breakthroughs in CPU design have come from academia and other companies, the vast majority have come from Intel and IBM. However, neither are investing the R&D in ramping clock speeds ever higher and are focussing on multi-core designs instead.
Hence my original statement: based on what we currently know about silicon-based CPU design, we are at (or very close to) the limits of what is possible. Further R&D or a breakthrough might push that a little or even a significant amount higher, but without the massive R&D efforts of IBM and Intel, the chances of this happening are slim. Also, if the market does start to shift toward multi-core designs, which seems very likely, then the inclination of people to look into better ways of doing things the old way is likely to be reduced further.
Thanks to AMD, no (Score:3, Insightful)
Dude, that is what Intel was doing until AMD came along and forced them to get into this "keeping up with the Joneses" routine.
I can't decide whether to put a smiley face on this or not. I was being sarcastic, but for all we know it might be partially true!
Re:need for speed? (Score:4, Insightful)
As long as there are games and a large number of computer users who want to play them, there will be a need for faster CPUs. While on the graphics side the main work is already done by the GPU, the physics and AI are still done by the CPU. And as opposed to graphics, where games are already quite advanced, AI and physics still tend to be rather primitive in games and will certainly need a lot of additional CPU.
Re:Engineering within limits brings great results (Score:1, Insightful)
New protocols, paradigms, middleware, and the buzzwords and evangelists to sell them are pushed out at a phenomenal rate. We are now going from 'ERP' and 'CRM' to 'web services' to 'service oriented architectures', etc... without the time to even understand what these really are (the vendors don't really want us to anyway), much less what benefit they actually provide.
Once again, politics and money have determined events, and now we are living with the result: Bloated, buggy, incomprehensible, useless-feature-laden, eye candy.
Re:Asymptotic (Score:5, Insightful)
The lack of breakthrough will be due to something entirely different.
So far we have been exploiting the fruits of fundamental material science, physics and chemistry research done in the '60s (if not earlier), the '70s, and to a small extent the '80s. There has been nothing fundamentally new done in the '90s. A lot of nice engineering - yes. A lot of clever manufacturing techniques, silicon-on-insulator being a prime example - yes. But nothing as far as the underlying science is concerned.
This is not just the semiconductor industry. The situation is the same across the board. The charitable foundations and the state, which used to be the prime sources of fundamental research funding, now require a project plan and a date when the supposed product will deliver a result (thinly disguised words for profit). They also do not invest in projects longer than 3 years.
As a result, no one looks at things that may bring a breakthrough, and there shall be no breakthroughs until this situation changes.
Re:Get over it (Score:4, Insightful)
My Athlon64 3200, which isn't top-of-the-line but it's pretty close, still takes quite a bit of time to convert a DVD to divx. It takes a few minutes (because IO needs to get faster) to copy large volumes of files. Photoshop filters on huge, detailed files can take a few minutes to run. Machines only slightly slower choke on playback of HDTV. I can't imagine how long it takes to encode.
When I can do all those things instantly, do accurate global weather predictions in realtime and have my true-to-life recreation of the voyager doctor realize his sentience, THEN computers will be fast enough. Until the next killer app comes, of course.
Re:A Good Thing? (Score:3, Insightful)
>> We might get some return to efficient coding being the norm, instead of writing systems anyhow and throwing more/faster hardware at it until it runs acceptably (Microsoft; it's you I'm looking at!)
Efficient coding is only useful if there is a return on your investment for efficiency. Exponentially increasing hardware capability over time at the same cost point makes this tradeoff obvious. The article is saying the hardware capability will still increase, but the programmer will have to learn to use concurrency to exploit it. This implies a fundamental shift in the single-thread world that we have lived in for a long time now.
>> Your (and your business') desktop machine might _not_ become obsolete in no more than 2 years, and might continue in useful service as something more sensible than a whole PC doing the job of a router...
The only reason a computer becomes obsolete is that something better comes along to replace it. Why else would someone spend money to replace something that's just as good? There must be value there. So you're saying you want to stop or slow down progress? Nobody is holding a gun to your head to upgrade. Keep your old computers and the rest of the world will move on and pay for increased value.
>> Processor designers might spend more time (i know they already spend some) on innovating new ideas, rather than solving the problems with just ramping up clock speeds.
Processor designers spend ALL their time innovating on new ideas. How do you think each and every new chip comes out faster than the last one? If we don't make a better chip, investors won't give us money to build new ones. It's all we do all damn day - build a better mousetrap. Innovation is part of the job. And "just ramping up clock speed" is pretty damn difficult thank you very much.
Re:GaAs??? GaAs is material of the future... (Score:2, Insightful)
As such, it is OK for small stuff (under 20 transistors) but is not going to fly for million-transistor CPUs.
Another Flawed Law. (Score:3, Insightful)
"Andy giveth, and Bill taketh away."
That's only half right, because you don't have to let Bill take away. KDE3 runs well on a 233 MHz PII with 64 MB of RAM, almost a whole order of magnitude less hardware than it takes to make XP happy. The picture is more drastic when you consider the virus load most XP setups must endure. You need a 2 GHz processor just to keep running while your computer serves out spam and kiddie porn.
The changes Dr. Dobbs so wants are already happening in free software. There's a reason things like aRts can play music, games and system noises all at the same time while software on M$ has trouble sharing sound devices. If Beowulf is not a 10 year head start on concurrency, I don't know what is.
Quoth SVDave:
I don't predict good things for Microsoft. Longhorn in 2007, anyone?
Perhaps old Billy should put his money into processor development instead of SCO and FUD.
Re:Asymptotic (Score:4, Insightful)
I might also throw in the possibility that, since the end of the Cold War, there has been very little incentive for governments, etc, to back fundamental research that might (a decade later) lead to radically new technologies. Governments like the status quo, they like the future to be predictable. Fundamental research (except perhaps in really esoteric areas like cosmology or areas with practical benefits for them like medicine) scares the willies out of the people in power -- it might upset their apple cart.
Re:GaAs??? GaAs is material of the future... (Score:3, Insightful)
I'd think the more likely reasons would have to do, for starters, with consumers not wanting or being able to afford a computer that requires constant cooling with liquid nitrogen (or even worse, liquid helium) to work.
Re:Asymptotic (Score:3, Insightful)
Also, the linear speed might be too high to read without interleaving (which pretty much negates the advantage of the higher speed)
Some quick calculations:
Assuming that a 3.5" drive has 2.75" platters, which would have a circumference of 8.64", the rim would move at 129,590 in/min at 15,000 RPM, which equals 122.7 MPH.
If we assume the 5.25" drive has 4.5" platters, these would have a circumference of 14.14", which translates to 212,057 in/min, or 200.8 MPH.
Also, the 5.25" platters have about 2.7 times the area (15.9 in^2 vs 5.9 in^2). Considering that the larger platters will also probably have to be thicker to prevent warping, an estimate of the platters having 3 times as much mass isn't unreasonable. This means much more powerful spindle motors, along with more heat, noise, and vibration.
None of these are insurmountable problems, but I doubt you could solve them economically enough to bring the unit price down so that it's competitive with smaller drives.
Re:GaAs??? GaAs is material of the future... (Score:3, Insightful)
I haven't been in the superconductor field for ten years now... what's the technology being used for the switches/logic gates?
As for GaAs, it's alive and well in the world of RF (analog) amplifiers going up to 100 GHz - I think the current technology uses a 6" wafer. (see, for example, WIN Semiconductor [winfoundry.com])
Re:I've always wondered (Score:2, Insightful)
Re:Lying??? (Score:3, Insightful)
Re:GaAs??? GaAs is material of the future... (Score:3, Insightful)
Re:Asymptotic (Score:4, Insightful)
The government pumped over a half billion a year into the Human Genome Project, and spent $1.6 billion on nanotechnology last year. The government is still willing to spend money on basic research, but I doubt they are willing to create a whole new agency, such as NASA. They would rather have private companies do the work (even if federally funded) than create a new class of federal employees.
I also think you are assuming malice on the part of the government, when instead you should be assuming stupidity. And, since it is a democracy, you don't have to look far to find the root of that stupidity.
That humor isn't so far off (Score:3, Insightful)
I'd like to note that the average 3 GHz PC can do MORE than the equivalent of a 10 GHz 8086 (which originally ran at 5 MHz). Don't forget that it's not just your CPU doing math nowadays; it's that fancy $400 supercomputer-rivaling video card you've got too.
Re:Hardware resources and software design (Score:1, Insightful)
Re:I've always wondered (Score:3, Insightful)
Very true.
Still, the signal needs to be able to cross the distance of the stage in your pipeline during the clock cycle. Smaller stages mean you can have faster clock rates, as the Intel chips demonstrate. All the clock rate benefits have come from making stages smaller (whether by reducing their functionality, optimizing their design, or shrinking the process). It seems we've reached the limit of how small they can be made, with Intel seeing not just diminishing but vanishing returns from reducing stage size to bump up the clock rate.