Where's My 10 GHz PC?
An anonymous reader writes "Based on decades of growth in CPU speeds, Santa was supposed to drop off my 10 GHz PC a few weeks back, but all I got was this lousy 2 GHz dual-processor box -- like it's still 2001...oh please! Dr. Dobb's says the free ride is over, and we now have to come up with some concurrency, but all I have is dollars... What gives?"
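The Dr. Dobb's argument ("the free lunch is over") is that clock speed has stalled, so software now has to scale across cores instead. A minimal Python sketch of the idea (toy job and names, not from the article): split the work into chunks and hand them to a pool of workers rather than waiting for a faster clock.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_chunk(chunk):
    """Sum one slice of the data; each call can run concurrently."""
    return sum(chunk)

def parallel_sum(data, workers=2):
    """Split the job across `workers` tasks instead of one big serial loop."""
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_chunk, chunks))

print(parallel_sum(list(range(1000))))  # 499500
```

The hard part, of course, is that most real workloads don't split this cleanly -- which is exactly the article's point.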
Well Moore's Law is not a law... (Score:4, Informative)
Re:Well Moore's Law is not a law... (Score:5, Informative)
From webopedia [webopedia.com]
(môrz lâ) (n.) The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future.
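Back-of-envelope, the observation is just exponential doubling. A quick sketch, using the Pentium 4's roughly 42 million transistors (2000) as an illustrative starting point:

```python
def transistors(start, years, doubling_period=2.0):
    """Project a transistor count `years` out, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# ~42 million transistors in 2000, doubling every two years:
print(round(transistors(42e6, 10)))  # 1344000000 -- about 1.3 billion a decade later
```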
Leave Moore's law out of this, please (Score:5, Informative)
bring on the diamond wafers (Score:3, Informative)
The main problem: our largest producer (Intel) has said they won't move off silicon while there's still money to be made from it... We know the industry likes to stagger upgrades. Instead of giving us the latest and greatest, they give us everything in between in nice "slow" steps so we spend more money. Personally, I wouldn't mind seeing jumps of 1 GHz at a time: this year 2.0 GHz, next year 3.0, the following year 4.0, and eventually bigger jumps of 5 GHz at a time, and so on.
10 GHz. Huh??! (Score:1, Informative)
Brooklyn.
Re:I've always wondered (Score:5, Informative)
This is why your CPU runs at a faster speed than your L2 cache (which is bigger), which runs at a faster speed than your main memory (which is bigger), which runs at a faster speed than memory in the adjacent NUMA-node (which is bigger), which runs faster than the network (which is bigger),...
Note that I'm talking about latency/clock-rate here; you can get arbitrarily high bandwidth in a big system, but there are times when you have to have low latency and there's no substitute for smallness then; light just isn't that fast!
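To put numbers on "light just isn't that fast": in one cycle of a 10 GHz clock, even light in vacuum covers only about 3 cm, and a signal propagating at half that speed covers about 1.5 cm. A quick sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def cm_per_cycle(clock_hz, fraction_of_c=1.0):
    """Distance a signal covers in one clock period, in centimetres."""
    return fraction_of_c * C / clock_hz * 100

print(round(cm_per_cycle(10e9), 2))       # 3.0  -- light in vacuum, per 10 GHz cycle
print(round(cm_per_cycle(10e9, 0.5), 2))  # 1.5  -- a signal at 50% of c
```

That's why anything that must answer within a cycle has to be physically small.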
Re:Asymptotic (Score:4, Informative)
Hard drives however? Some of the areal densities being demonstrated in R&D labs are significantly higher than what we have now and will allow plenty of capacity growth, if they can be mass-produced cheaply enough. Sure, we're approaching a point where it's not going to be viable to go any further, but we're not going to arrive there for a while yet. There is also the option of making the platters sit closer together so you can fit more of them into a drive, of course. If you really want or need >1TB on a single spindle then I think you'll need to wait just a few more years.
Re:I've always wondered (Score:5, Informative)
Another problem, of course, is heat - if your 1 cm^2 CPU outputs 100 W of heat, a 10 cm^2 CPU is going to dump 1000 W of heat. That's a hell of a lot of heat.
A third problem is reliability. Yields are bad enough with the current core sizes, tripling the core sizes will drop yield even further.
And a fourth problem is what exactly to *do* with the extra space.
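On the yield point: a common first-order model (with illustrative defect densities, not real fab numbers) treats defects as Poisson-distributed over the wafer, so the chance of a defect-free die falls off exponentially with die area:

```python
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    """First-order Poisson yield model: probability a die has zero defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Illustrative: 0.5 defects/cm^2
y1 = poisson_yield(0.5, 1.0)  # ~0.61 for a 1 cm^2 die
y3 = poisson_yield(0.5, 3.0)  # ~0.22 after tripling the die area
print(round(y1, 2), round(y3, 2))  # 0.61 0.22
```

Tripling the area doesn't just triple the failures -- it compounds them, which is why big dies cost so much.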
Re:Moore's Law isn't Speed Doubling, it's Transist (Score:1, Informative)
But thanks for playing!
Re:Leave Moore's law out of this, please (Score:2, Informative)
Moore's Law is a description of semiconductor packing and describes the phenomenon of it doubling in a given time period. A Moore's Theory would attempt to explain WHY this occurs.
What I need 10 GHz for (Score:4, Informative)
authoring a DVD in less than 4 hours from the DV-AVI source?
my own CGI production in my lifetime?
Longhorn Screwed? (Score:4, Informative)
Re:Engineering within limits brings great results (Score:5, Informative)
It seems that we need to review
The Story of Mel.
I'll post it here from several places,
So that the good people of
(and the other people of
Don't wipe out a single server (yeah, right!)
http://www.cs.utah.edu/~elb/folklore/mel.html [utah.edu]
http://www.wizzy.com/andyr/Mel.html [wizzy.com]
http://www.science.uva.nl/~mes/jargon/t/thestoryo
http://www.outpost9.com/reference/jargon/jargon_4
and, of course, many other places.
Re:Asymptotic (Score:3, Informative)
Perhaps not, but things are getting really dicey WRT silicon processes. The latest process shrink, to 90 nm, really hurt and required a bunch of tricks to make it work. Thermal dissipation in particular is a big problem: when you shrink chips, power density goes up and leakage pushes idle power up, so the total thermal power you've got to dissipate increases while the surface area you have to dissipate it from shrinks.
Leakage power is another problem. Sure, that 3.6 GHz Prescott you've got there has a max dissipation of 110 W at full tilt, but it still consumes something like 53 W doing nothing! That's pretty bad, and there's no easy fix: physics and chemistry say so, and it only gets worse the smaller the transistors become. So 65 nm will be a real bitch...
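The 110 W / 53 W figures quoted above make the leakage problem concrete: nearly half the chip's peak power budget is burned doing nothing at all.

```python
def idle_fraction(idle_w, max_w):
    """Fraction of peak power consumed at idle (leakage-dominated)."""
    return idle_w / max_w

# Figures from the post above: ~53 W idle vs. 110 W max for a 3.6 GHz Prescott.
print(f"{idle_fraction(53, 110):.0%}")  # 48%
```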
Re:Heat is the problem (Score:3, Informative)
1) 75% idle time is nonsense. Where did you get that number? With SPECfp on an Athlon or P4 it's more like 20-30% idle. Just look at how SPEC scores scale with frequency to figure out the memory-idle time.
2) Increasing switching speed with optical technology increases bandwidth but does nothing for latency, since nothing travels faster than the speed of light and electrical signals along a wire can already achieve close to 80% of the speed of light. To reduce latency, what we need are smarter architectures and programmers who can prefetch data into lower-latency caches ahead of time.
Re:Asymptotic (Score:5, Informative)
That was never the limit of copper. It was the limit of voiceband phone lines, which have artificially constrained bandwidth. Since voiceband is now transmitted digitally at 64 kbit/s, that's the hard theoretical limit, and 56k analog modems are already asymptotically close to it.
If you hook different equipment to the phone wires without the self-imposed bandwidth filters, then it's easy to get higher bandwidth. Ethernet and its predecessors have been pushing megabits or more over twisted pair for decades.
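The usual back-of-envelope for the voiceband limit is the Shannon-Hartley theorem (bandwidth and SNR figures below are illustrative); the 56k figure itself comes from the 64 kbit/s PCM trunk, as the post says, not from the copper:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley channel capacity: C = B * log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

# A ~3.1 kHz voiceband channel at ~35 dB SNR tops out around 36 kbit/s,
# which is roughly where analog-end-to-end modems (33.6k) stalled.
print(round(shannon_capacity_bps(3100, 35)))
```

56k modems beat that number by treating the digital PCM trunk, not the analog local loop, as the channel.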
Re:I've always wondered (Score:4, Informative)
Light speed is a big issue, but so is stray capacitance and inductance. A capacitor tends to short out a high-frequency signal, and it takes very little capacitance to look like a dead short to a 10 GHz signal. Similarly, the stray inductance of a straight piece of wire has a high reactance at 10 GHz. That's why they run the processor at high speed internally, but have to slow down the signal before sending it out to the real world. If they sent it out over an optical fiber, things would work much better.
And I don't even know if electricity travels at true lightspeed or at something below that.
Under ideal conditions, electric signals can travel at light speed. In real circuits, it's more like 50-80% of that, depending on the dielectric around the conductor.
--Tacky the BSEE
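To put a number on "very little capacitance looks like a dead short": the magnitude of a capacitor's reactance is |X_C| = 1/(2*pi*f*C). A quick sketch with an illustrative 1 pF stray:

```python
import math

def capacitive_reactance_ohms(freq_hz, capacitance_f):
    """|X_C| = 1 / (2*pi*f*C): impedance of a stray capacitance at frequency f."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

# The same 1 pF stray capacitance:
print(round(capacitive_reactance_ohms(1e6, 1e-12)))      # 159155 -- ~159 kOhm at 1 MHz
print(round(capacitive_reactance_ohms(10e9, 1e-12), 1))  # 15.9   -- near a short at 10 GHz
```

Four decades of frequency turn an effectively open circuit into something close to a short.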
Re:Heat is the problem (Score:2, Informative)
This was discussed in an article in the November 2004 issue of Scientific American about optics-based computers.
2) Increasing switching speed with optical technology increases bandwidth but does nothing for latency, since nothing travels faster than the speed of light and electrical signals along a wire can already achieve close to 80% of the speed of light. To reduce latency, what we need are smarter architectures and programmers who can prefetch data into lower-latency caches ahead of time.
Huh, where did I even mention the speed of electrons along wires? I'm simply stating that wires will never be able to deliver enough data for the processor to run at full speed.
Re:Hardware resources and software design (Score:4, Informative)
Suppose you need that first sale of your system to a customer, and when they demo your software, they see it's so slow that they dismiss it and buy the competitor's product. You don't get a second chance. This actually happened with a company I know of. The company pretty much went tits up because the architect neglected performance.
Even then, what are the chances that I can write a better sorting algorithm than one included in a standard library, written by someone who has studied sorting algorithms?
I don't necessarily need to write the sort algorithm, but I do need to understand the effect the various algorithms have on my system and select the correct one accordingly.
Again, the company that failed went with a standard library for some functionality in the product instead of rolling their own, and it had disastrous results. After the customer complained about performance, they found they'd need to completely redesign a significant portion of the product to correct the problem. It wasn't a two- or three-day fix; it would have taken 1-2 months. Try eating that cost when you're a small company.
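A rough sketch of why the algorithm choice can sink a demo: the gap between O(n^2) and O(n log n) is modest on test data and catastrophic on customer-sized data (sizes below are illustrative):

```python
import math

def cost_ratio(n):
    """How much worse an O(n^2) algorithm is vs O(n log n): n^2 / (n log2 n)."""
    return (n * n) / (n * math.log2(n))

print(round(cost_ratio(1_000)))      # 100   -- annoying at n = 1,000
print(round(cost_ratio(1_000_000)))  # 50171 -- fatal at n = 1,000,000
```

A product that works fine on the developer's test set can be four orders of magnitude slower in the field.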
Actually, 56k is the hard limit (Score:5, Informative)
That's also why IDSL is 144k. The total bandwidth of an ISDN line is 144k, but 16k (the D channel) is used for circuit-switching signalling. DSL is point-to-point, so that's unnecessary and the D channel's bandwidth can be used for data.
So 56k is as good as it will ever get for single analogue modems. In theory this could change in the future, I suppose, but I find that rather unlikely given that any new technology is likely to be digital end to end.
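The channel arithmetic behind those figures, for reference:

```python
# ISDN BRI channel arithmetic, as described above.
B_CHANNEL = 64  # kbit/s, bearer (payload) channel
D_CHANNEL = 16  # kbit/s, signalling channel

isdn_total = 2 * B_CHANNEL + D_CHANNEL  # everything on the wire
dialup_payload = 2 * B_CHANNEL          # usable when circuit-switched
idsl_payload = isdn_total               # point-to-point IDSL reclaims the D channel

print(isdn_total, dialup_payload, idsl_payload)  # 144 128 144
```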
Re:Asymptotic (Score:3, Informative)
Just to expand a bit on this - and I'm going to grossly oversimplify. Each "baud" is merely a change in signal, but it's an analog change, not a digital one: the signal levels don't have to be just "0" or "1", they can be "2", "3", "4", and so on (there's a limit here too, I'm sure). 33.6k is merely 3.5 times 9.6k, so we have amplitudes of 0 through 3 (4 discrete values, with one of every two signals carrying an extra parity bit). Using 6 amplitudes (0-5), we get 57.6k, or, minus the parity, 56k. But we're still transmitting at 9600 baud.
Of course, that only matters to geeks. To the rest of the world, baud is irrelevant. It's how fast the pr0n downloads that counts.
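Stated less loosely, the relation is bits/s = baud * log2(levels): each symbol carries log2(levels) bits. A quick sketch:

```python
import math

def bits_per_second(baud, levels):
    """Line rate = symbol rate * bits per symbol (log2 of distinguishable levels)."""
    return baud * math.log2(levels)

# 2 levels at 9600 baud carries 9600 bit/s; 16 levels at the same
# symbol rate carries four bits per symbol, i.e. 38,400 bit/s.
print(int(bits_per_second(9600, 2)), int(bits_per_second(9600, 16)))  # 9600 38400
```

Real high-speed modems combine a higher symbol rate with larger constellations, but the symbol-vs-bit distinction is the same.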
Re:GaAs??? GaAs is material of the future... (Score:4, Informative)
Re:GaAs??? GaAs is material of the future... (Score:3, Informative)
Hmm, I am wondering what kind of logic you were using 10 years ago!
Yes, it is SFQ/RSFQ (Single Flux Quantum) logic, counting individual magnetic flux quanta, but no, it has nothing to do with now over-financed "quantum computing".
As for GaAs, it's alive and well in the world of RF (analog) amplifiers going up to 100 GHz.
And with InP you can go to 150 GHz and maybe higher amplifiers (though not broadband), but there is a huge difference between being able to amplify a signal and being able to do any kind of meaningful digital logic at fixed power consumption... Actually, time for me to get off
Paul B.
Re:GaAs??? GaAs is material of the future... (Score:3, Informative)
As for digital logic, it is REALLY hard to make reproducible Josephson junctions (the active elements in SCE circuits) in HTS. One can make 2-4 of them for SQUID sensors (and that's a big market for HTS too), but for digital stuff you need thousands or millions of them. In a certain way, HTS vs. LTS is similar to GaAs vs. CMOS -- it's easy to make a really nice, but simple, analog front-end in one, while the other can handle much more processing.
Replacing the metal wiring on transistor chips with superconducting wiring will not help that much: yes, the part of the RC constant due to wire resistance will be gone, but you'd still dissipate f*C*V^2/2 of power charging and discharging the line. To fully utilize SCE logic one needs to use SCE active elements (current-sensitive JJs, not voltage-sensitive transistors).
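The f*C*V^2/2 switching power mentioned above, with illustrative (not measured) CPU-scale numbers:

```python
def dynamic_power_w(freq_hz, capacitance_f, voltage_v, alpha=0.5):
    """Switching power: alpha * f * C * V^2 (alpha = 0.5 matches f*C*V^2/2)."""
    return alpha * freq_hz * capacitance_f * voltage_v ** 2

# Illustrative: 3 GHz clock, 50 nF of total switched capacitance, 1.2 V swing.
print(round(dynamic_power_w(3e9, 50e-9, 1.2), 1))  # 108.0 (watts)
```

Note that resistance appears nowhere in the formula -- which is exactly why superconducting wires alone don't eliminate this term.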
I foresee the day that a user will be able to use a superconducting set of electronics on the desk.
Me too!
Paul B.
P.S. There is another fundamental reason to choose LTS rather than HTS superconductors. The beauty of SFQ logic is that it uses an almost quantum-limited amount of energy per switch. When one starts increasing the temperature, thermal noise becomes too high (yes, even at 77 K) and the main advantage -- tiny energy dissipation, which allows for very dense packaging -- goes away.
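The temperature argument in the P.S. is just k_B*T scaling: the characteristic thermal noise energy at liquid-nitrogen temperature (77 K) is roughly 18x that at liquid-helium temperature (4.2 K):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_energy_j(temp_k):
    """k_B * T: the thermal noise energy a quantum-limited switch must beat."""
    return K_B * temp_k

ratio = thermal_energy_j(77) / thermal_energy_j(4.2)
print(round(ratio, 1))  # 18.3 -- times more thermal noise at 77 K than at 4.2 K
```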