Hardware

Where's My 10 GHz PC?

An anonymous reader writes "Based on decades of growth in CPU speeds, Santa was supposed to drop off my 10 GHz PC a few weeks back, but all I got was this lousy 2 GHz dual processor box -- like it's still 2001...oh please! Dr. Dobb's says the free ride is over, and we now have to come up with some concurrency, but all I have is dollars... What gives?"
  • by zoobaby ( 583075 ) on Friday January 07, 2005 @12:40PM (#11288274)
    It was just an observed trend. The trend is breaking, at least as far as retail availability goes, and thus we are not seeing our 10 GHz rigs. (I believe that Moore's law is still trending fine in the labs.)
  • by stupidfoo ( 836212 ) on Friday January 07, 2005 @12:45PM (#11288349)
    Moore's "law" has nothing to do with Hz.

    From webopedia [webopedia.com]
    (môrz lâ) (n.) The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future.
  • by Raul654 ( 453029 ) on Friday January 07, 2005 @12:46PM (#11288351) Homepage
    Moore's law has nothing to do with processor frequency. It says that semiconductor capacity doubles every 18 months, not frequency. (With the corollary that there is no appreciable change in price.) As we all know, semiconductor capacity is roughly proportional to speed, so saying processor speeds double every 18 months is not quite wrong, just a little inaccurate. On the other hand, saying that Moore's law is broken because we're not seeing 10 GHz processors is wrong.
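
    For a feel for what the doubling trend alone predicts, here's a quick sketch; the 18-month period is the figure quoted above, and the starting transistor count is just an illustrative assumption:

        /* Illustrative only: project transistor counts under an assumed
         * 18-month doubling period, starting from a made-up 50M-transistor chip. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            const double start_transistors = 50e6; /* assumed starting point */
            const double doubling_months   = 18.0; /* the commonly quoted period */

            for (int years = 0; years <= 10; years += 2) {
                double doublings = (years * 12.0) / doubling_months;
                double count = start_transistors * pow(2.0, doublings);
                printf("after %2d years: ~%.0f million transistors\n",
                       years, count / 1e6);
            }
            return 0;
        }

    Note that nothing in that projection says anything about clock speed, which is the whole point of the correction above.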
  • by AviLazar ( 741826 ) on Friday January 07, 2005 @12:49PM (#11288392) Journal
    When they get off the silicon and hop onto those nice diamond wafers (there is an article in Wired), then we will see faster processing.

    The main problem: our largest producer (Intel) said it would not stop utilizing silicon until it has made more money from it... We know that the industry likes to stagger upgrades. Instead of giving us the latest and greatest, they give us everything in between in nice "slow" steps so we spend more money. Personally, I wouldn't mind seeing jumps of 1 GHz at a time. This year 2.0 GHz, next year 3.0, the following year 4.0, etc., and then eventually increase it further so it's 5 GHz at a time.
  • 10 GHz. Huh??! (Score:1, Informative)

    by Anonymous Coward on Friday January 07, 2005 @12:51PM (#11288423)
    Raw speed isn't measured in megahertz anymore. Actually, it never really depended on MHz; it was always MFLOPS. For years, and finally getting due recognition, AMD has destroyed Intel despite having a lower-clocked core. MFLOPS was and is the key.

    Brooklyn.
  • by mikeee ( 137160 ) on Friday January 07, 2005 @12:52PM (#11288448)
    No, making it bigger will make it slower. Current digital systems are mostly "clocked" (they don't have to be, but that gets much more complicated), which means that signals have to be able to get from one side of the system to the other within one clock cycle.

    This is why your CPU runs at a faster speed than your L2 cache (which is bigger), which runs at a faster speed than your main memory (which is bigger), which runs at a faster speed than memory in the adjacent NUMA-node (which is bigger), which runs faster than the network (which is bigger),...

    Note that I'm talking about latency/clock-rate here; you can get arbitrarily high bandwidth in a big system, but there are times when you have to have low latency and there's no substitute for smallness then; light just isn't that fast!
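
    To put rough numbers on the "signals have to cross the system within one clock cycle" point, here's a back-of-the-envelope sketch; the assumption that signals propagate at about half the speed of light on real interconnect is mine, not a measured figure:

        /* Rough sketch: distance a signal can cover in one clock period,
         * assuming ~0.5c propagation on real wires (an assumption). */
        #include <stdio.h>

        int main(void) {
            const double c = 3.0e8;              /* speed of light, m/s */
            const double signal_speed = 0.5 * c; /* assumed propagation speed */
            const double freqs_ghz[] = { 1.0, 3.0, 10.0 };

            for (int i = 0; i < 3; i++) {
                double period_s = 1.0 / (freqs_ghz[i] * 1e9);
                double reach_cm = signal_speed * period_s * 100.0;
                printf("%4.1f GHz: one cycle = %6.1f ps, reach ~%.1f cm\n",
                       freqs_ghz[i], period_s * 1e12, reach_cm);
            }
            return 0;
        }

    At 10 GHz the per-cycle budget shrinks to roughly a centimetre and a half, which is why the bigger structures end up on slower clocks.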
  • Re:Asymptotic (Score:4, Informative)

    by Zocalo ( 252965 ) on Friday January 07, 2005 @12:54PM (#11288484) Homepage
    Without a major breakthrough, which isn't something I'd bet on, I'll agree that we are very close to the limits of silicon-based CPUs. Strained Silicon and Silicon on Insulator are effective stopgaps, but multi-core and possibly switching to something like Gallium Arsenide are the most likely ways forward for greater processing power at the moment.

    Hard drives however? Some of the areal densities working in R&D labs are significantly higher than what we have now and will allow for plenty of capacity growth if they can be mass-produced cheaply enough. Sure, we're approaching a point where it's not going to be viable to go any further, but we're not going to arrive there for a while yet. There is also the option of making the platters sit closer together so you can fit more of them into a drive, of course. If you really want or need >1TB on a single spindle then I think you'll need to wait just a few more years.
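
    As a rough feel for the capacity headroom, platter capacity is essentially areal density times usable surface area; every number below (density, usable area, platter count) is an illustrative assumption, not a vendor spec:

        /* Platter capacity ~ areal density * usable area (formatting
         * overhead ignored). All inputs are illustrative assumptions. */
        #include <stdio.h>

        int main(void) {
            const double densities_gbit_in2[] = { 100.0, 250.0, 500.0 };
            const double usable_area_in2 = 10.0; /* both sides of one 3.5" platter, assumed */
            const int platters = 5;

            for (int i = 0; i < 3; i++) {
                double gb_per_platter = densities_gbit_in2[i] * usable_area_in2 / 8.0;
                double drive_gb = gb_per_platter * platters;
                printf("%5.0f Gbit/in^2: ~%.0f GB/platter, ~%.0f GB in a %d-platter drive\n",
                       densities_gbit_in2[i], gb_per_platter, drive_gb, platters);
            }
            return 0;
        }

    Under those assumptions the >1TB-per-spindle mark needs only a few more density doublings, which fits the "few more years" guess.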

  • by ZorbaTHut ( 126196 ) on Friday January 07, 2005 @12:57PM (#11288523) Homepage
    The problem with that is light speed. Transmitting a lightspeed signal across one centimeter takes about 3.3*10^-11 seconds - which doesn't sound like much, until you realize that a single CPU cycle now takes only about 3.3*10^-10 seconds. And I don't even know if electricity travels at true lightspeed or at something below that.

    Another problem, of course, is heat - if your 1cm^2 CPU outputs 100 W of heat, a 10cm^2 CPU at the same power density is going to dump 1000 W of heat. That's a hell of a lot of heat.

    A third problem is reliability. Yields are bad enough with the current core sizes, tripling the core sizes will drop yield even further.

    And a fourth problem is what exactly to *do* with the extra space. :) Yes, you could just fill it with cache, but that still won't give you a computer twice as fast for every twice as much cache - MHz has nothing to do with how many transistors you can pile on a chip. (Of course, you could just put a second CPU on the same chip . . .)
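
    On the yield point, a common first-order model (not necessarily what any given fab uses) assumes randomly scattered defects, so expected yield falls off exponentially with die area; the defect density here is an assumed value:

        /* First-order Poisson yield model: Y = exp(-D * A).
         * Defect density and die areas are illustrative assumptions. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            const double defects_per_cm2 = 0.5;           /* assumed defect density */
            const double areas_cm2[] = { 1.0, 2.0, 3.0 }; /* 1x, 2x, 3x die area */

            for (int i = 0; i < 3; i++) {
                double yield = exp(-defects_per_cm2 * areas_cm2[i]);
                printf("die area %.0f cm^2: expected yield ~%.0f%%\n",
                       areas_cm2[i], yield * 100.0);
            }
            return 0;
        }

    Tripling the die area under those assumptions takes you from roughly 61% yield down to about 22%, which is the reliability problem in a nutshell.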
  • by Anonymous Coward on Friday January 07, 2005 @01:04PM (#11288603)
    Actually, Moore's law states that transistor channel length will halve every 18 months.

    But thanks for playing!
  • by vykor ( 700819 ) on Friday January 07, 2005 @01:06PM (#11288629)
    Theories are explanations of phenomena, supported by evidence and observations. Laws are merely descriptions of phenomena. It's not as if you can eventually promote theories to laws. They are two different types of things.
    Moore's Law is a description of semiconductor packing and describes the phenomenon of it doubling in a given time period. A Moore's Theory would be one that attempted to explain WHY this occurs.
  • by way2trivial ( 601132 ) on Friday January 07, 2005 @01:09PM (#11288665) Homepage Journal
    better than realtime video transcoding maybe?

    authoring a DVD in less than 4 hours from the DV-AVI source?

    my own CGI production in my lifetime?

  • Longhorn Screwed? (Score:4, Informative)

    by SVDave ( 231875 ) on Friday January 07, 2005 @01:21PM (#11288809)
    According to Microsoft, an average [slashdot.org] Longhorn system will need to have a 4-6GHz CPU. But if, when Longhorn arrives, 4GHz CPUs are high-end parts and 6GHz CPUs don't exist, well... I don't predict good things for Microsoft. Longhorn in 2007, anyone? Or maybe 2008...

  • by gardyloo ( 512791 ) on Friday January 07, 2005 @01:27PM (#11288884)
    Ah, yes.
    It seems that we need to review
    The Story of Mel.

    I'll post it here from several places,
    So that the good people of /.
    (and the other people of /.)
    Don't wipe out a single server (yeah, right!)

    http://www.cs.utah.edu/~elb/folklore/mel.html [utah.edu]
    http://www.wizzy.com/andyr/Mel.html [wizzy.com]
    http://www.science.uva.nl/~mes/jargon/t/thestoryofmel.html [science.uva.nl]
    http://www.outpost9.com/reference/jargon/jargon_49.html [outpost9.com]

    and, of course, many other places.
  • Re:Asymptotic (Score:3, Informative)

    by netwiz ( 33291 ) on Friday January 07, 2005 @01:30PM (#11288917) Homepage
    I find it hard to say that we're close to the limits of any technology in the computer/telecom field. Someone always seems to find a new way around it.

    Perhaps not, but things are getting really dicey WRT silicon processes. The latest process shrink to 90nm really hurt, and required bunches of tricks to make it work. Specifically, thermal dissipation is a big problem: when you shrink chips, they get hotter and require more idle power to make them work. This increases the total thermal power you've got to dissipate, while at the same time you've reduced the surface area with which to do so.

    Leakage power is another problem. Sure, that 3.6GHz Prescott you've got there has a max dissipation of 110 W at full tilt, but it still consumes something like 53 W doing nothing! That's pretty bad, and there's absolutely no fix for that. Physics and chemistry say so, and it only gets worse the smaller the transistors become. So 65nm will be a real bitch...
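
    The dynamic part of the power budget alone shows why simply cranking the clock stopped being free: dynamic power goes roughly as C*V^2*f, and higher clocks usually demand higher voltage too. A back-of-the-envelope sketch (the effective switched capacitance and the voltage/frequency pairs are assumed values, and leakage is ignored entirely):

        /* Back-of-the-envelope dynamic power: P ~ C_eff * V^2 * f.
         * All component values are illustrative assumptions; leakage ignored. */
        #include <stdio.h>

        int main(void) {
            const double c_eff = 15e-9; /* assumed effective switched capacitance, F */
            const double points[][2] = { /* {GHz, core volts} - purely illustrative */
                { 2.0, 1.2 }, { 3.6, 1.4 }, { 10.0, 1.8 }
            };

            for (int i = 0; i < 3; i++) {
                double f = points[i][0] * 1e9;
                double v = points[i][1];
                printf("%5.1f GHz @ %.1f V: ~%.0f W dynamic\n",
                       points[i][0], v, c_eff * v * v * f);
            }
            return 0;
        }

    Even before counting the leakage described above, the 10 GHz row lands in space-heater territory, which is roughly why the clock race stalled.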

  • by akuma(x86) ( 224898 ) on Friday January 07, 2005 @01:33PM (#11288957)
    A few problems with your post.

    1) 75% idle time is nonsense. Where did you get that number? With SPECfp on an Athlon or P4 it's more like 20-30% idle. Just look at how spec scores scale with frequency to figure out the memory-idle time.

    2) Increasing switching speed with optical technology increases bandwidth but does nothing for latency, since nothing travels faster than the speed of light and electrical signals along a wire can already achieve close to 80% of the speed of light. To reduce latency, what we need are smarter architectures and programmers that can prefetch data into lower-latency caches ahead of time.
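
    As a toy illustration of the "prefetch ahead of time" idea, here's a sketch using GCC's __builtin_prefetch to request data a few iterations before the loop needs it; the prefetch distance is an arbitrary guess, and a simple linear walk like this is exactly what hardware prefetchers already handle well, so any real win tends to come on less regular access patterns:

        /* Toy software-prefetch sketch using the GCC/Clang builtin.
         * The prefetch distance is an arbitrary guess; real tuning is
         * machine- and workload-specific. */
        #include <stdio.h>
        #include <stdlib.h>

        static double sum_with_prefetch(const double *data, size_t n) {
            const size_t dist = 16; /* elements to run ahead (assumed) */
            double sum = 0.0;
            for (size_t i = 0; i < n; i++) {
                if (i + dist < n)
                    __builtin_prefetch(&data[i + dist], 0, 1); /* read, modest locality hint */
                sum += data[i];
            }
            return sum;
        }

        int main(void) {
            size_t n = 1 << 20;
            double *data = malloc(n * sizeof *data);
            if (!data) return 1;
            for (size_t i = 0; i < n; i++) data[i] = (double)i;
            printf("sum = %.0f\n", sum_with_prefetch(data, n));
            free(data);
            return 0;
        }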
  • Re:Asymptotic (Score:5, Informative)

    by Waffle Iron ( 339739 ) on Friday January 07, 2005 @01:42PM (#11289072)
    Remember when 9600 baud was close to the limit of copper?

    That was never the limit of copper. It was the limit of voiceband phone lines, which have artificially constrained bandwidth. Since voiceband is now transmitted digitally at 64 kbps, that's the hard theoretical limit, and 56K analog modems are already asymptotically close to that.

    If you hook different equipment to the phone wires without the self-imposed bandwidth filters, then it's easy to get higher bandwidth. Ethernet and its predecessors have been pushing megabits or more over twisted pair for decades.
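
    To see where those ceilings come from, here's a quick sketch: the trunk-side limit is just the PCM sampling parameters, and the analog-side limit is Shannon capacity for an assumed bandwidth and signal-to-noise ratio (both picked as typical-ish assumptions, not measurements):

        /* Voiceband ceilings, back of the envelope.
         * The 3100 Hz bandwidth and 35 dB SNR are assumptions. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            /* Digital trunk: 8000 samples/s * 8 bits/sample */
            double pcm_bps = 8000.0 * 8.0;

            /* Analog side: Shannon capacity C = B * log2(1 + SNR) */
            double bandwidth_hz = 3100.0;
            double snr = pow(10.0, 35.0 / 10.0); /* 35 dB as a power ratio */
            double shannon_bps = bandwidth_hz * log2(1.0 + snr);

            printf("PCM trunk limit:      %.0f bps\n", pcm_bps);
            printf("Shannon, analog path: ~%.0f bps\n", shannon_bps);
            return 0;
        }

    Under those assumptions the analog path tops out in the mid-30-kbps range, which is why 33.6k modems sat near the wall and 56k only works by keeping the downstream essentially digital all the way into the 64 kbps trunk.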

  • by Tacky the Penguin ( 553526 ) on Friday January 07, 2005 @01:46PM (#11289111)
    The problem with that is light speed.

    Light speed is a big issue, but so is stray capacitance and inductance. A capacitor tends to short out a high-frequency signal, and it takes very little capacitance to look like a dead short to a 10 GHz signal. Similarly, the stray inductance of a straight piece of wire has a high reactance at 10 GHz. That's why they run the processor at high speed internally, but have to slow down the signal before sending it out to the real world. If they sent it out over an optical fiber, things would work much better.

    And I don't even know if electricity travels at true lightspeed or at something below that.

    Under ideal conditions, electric signals can travel at light speed. In real circuits, it is more like 0.5c to 0.7c due to capacitive effects -- very much (exactly, actually) the same way a dielectric (like glass or water) slows down light.

    --Tacky the BSEE
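
    To see why tiny strays matter at 10 GHz, here's a quick reactance calculation; the 1 pF and 1 nH values are just plausible-looking strays chosen for illustration, not measurements of anything:

        /* Reactance of small stray elements at 10 GHz.
         * Xc = 1/(2*pi*f*C), Xl = 2*pi*f*L. Values are illustrative. */
        #include <stdio.h>

        int main(void) {
            const double pi = 3.141592653589793;
            const double f = 10e9;        /* 10 GHz */
            const double c_stray = 1e-12; /* 1 pF of stray capacitance (assumed) */
            const double l_stray = 1e-9;  /* 1 nH of stray inductance (assumed) */

            double xc = 1.0 / (2.0 * pi * f * c_stray);
            double xl = 2.0 * pi * f * l_stray;

            printf("1 pF at 10 GHz: Xc ~ %.0f ohms (close to a short)\n", xc);
            printf("1 nH at 10 GHz: Xl ~ %.0f ohms (a very real impedance)\n", xl);
            return 0;
        }

    A stray picofarad that would be invisible at audio frequencies looks like a ~16 ohm path at 10 GHz, which is why off-chip signals get slowed down or carried on controlled-impedance transmission lines.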
  • by WaZiX ( 766733 ) on Friday January 07, 2005 @01:57PM (#11289252)
    1) 75% idle time is nonsense. Where did you get that number? With SPECfp on an Athlon or P4 it's more like 20-30% idle. Just look at how spec scores scale with frequency to figure out the memory-idle time.

    In an article in the November 2004 issue of Scientific American about optics-based computers.

    2) Increasing switching speed with optical technology increases bandwidth but does nothing for latency, since nothing travels faster than the speed of light and electrical signals along a wire can already achieve close to 80% of the speed of light. To reduce latency, what we need are smarter architectures and programmers that can prefetch data into lower-latency caches ahead of time.

    Huh, where did I even mention the speed of electrons along wires? I'm simply stating that wires will never be able to deliver enough data for the processor to run at full capacity.
  • by corngrower ( 738661 ) on Friday January 07, 2005 @02:17PM (#11289478) Journal
    Hogwash! Write first, optimize later...or in the real world: write first, optimize if the customer complains.
    Suppose you need that first sale of your system to a customer, and when they demo your software, it's so slow that they dismiss it and buy the competitor's product. You don't get a second chance. This actually happened to a company I know of; it pretty much went tits up because the architect neglected performance.

    Even then, what are the chances that I can write a better sorting algorithm than one included in a standard library that was written by someone who studied sorting algorithms?
    I don't necessarily need to write the sort algorithm, but I need to be concerned with the effect of using the various algorithms on my system and select the correct one accordingly.
    Again, the company that failed used a standard library for some functionality in the product instead of rolling their own, and this had disastrous results. After the customer complained about performance, they found that they'd need to completely redesign a significant portion of the product to correct the problem. It wasn't a two or three day fix. The fix would have taken 1-2 months. Try eating that cost when you're a small company.

  • by Sycraft-fu ( 314770 ) on Friday January 07, 2005 @02:19PM (#11289502)
    Analogue lines aren't like DS-0 lines that get a separate control channel; the control is "bit robbed" from the signal. They take out 8kbps for signaling, giving 56k effective for encoding. That's why with ISDN there is talk of B and D channels. For BRI ISDN you get 2 64k (DS-0) B (bearer) channels that actually carry the signal. There is then a 16k D (data) channel that carries the information on how to route the B channels.

    That's also why IDSL is 144k. The total bandwidth of an ISDN line is 144k, but 16k is used for circuit-switching data. DSL is point-to-point, so that's unnecessary and the D channel's bandwidth can be used for signal.

    So 56k is as good as it will ever get for single analogue modems. I suppose, in theory, this could be changed in the future, but I find that rather unlikely given that any new technology is likely to be digital end to end.
  • Re:Asymptotic (Score:3, Informative)

    by Tanktalus ( 794810 ) on Friday January 07, 2005 @02:24PM (#11289555) Journal

    Just to expand a bit on this. Not much - I'm going to grossly oversimplify this. Each "baud" is merely a change in signal. However, it is an analog change, not a digital change. These signals do not need to be either "0" or "1". They can be "2", "3", "4", etc. (there is a limit here, too, I'm sure). 33.6k is merely 3.5 times 9.6k, so we have amplitudes of 0 through 3 (4 discrete values, one of every two signals has an extra parity bit). Using 6 amplitudes (0-5), we get 57.6k, or, minus the parity, 56k. But we're still transmitting at 9600 baud.

    Of course, that only matters to geeks. To the rest of the world, baud is irrelevant. It's how fast the pr0n downloads that counts.
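
    Setting the fudged numbers above aside, the clean relation is bit rate = symbol rate (baud) times bits per symbol, where bits per symbol is log2 of the number of distinguishable signal states. A quick sketch (the symbol rates and constellation sizes are illustrative, not exact figures from any V.xx standard):

        /* bit rate = symbol rate * log2(number of signal states).
         * The example constellations are illustrative, not exact modem specs. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            const struct { double baud; int states; } ex[] = {
                { 2400,    4 }, /*  2 bits/symbol */
                { 2400,   16 }, /*  4 bits/symbol */
                { 3200,   64 }, /*  6 bits/symbol */
                { 3429, 1024 }, /* 10 bits/symbol */
            };

            for (int i = 0; i < 4; i++) {
                double bps = ex[i].baud * log2((double)ex[i].states);
                printf("%5.0f baud, %4d states: ~%.0f bps\n",
                       ex[i].baud, ex[i].states, bps);
            }
            return 0;
        }

    The symbol rate stays within the voiceband; the extra bits come from packing more distinguishable states into each symbol, until line noise makes the states too close together to tell apart.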

  • by ChrisMaple ( 607946 ) on Friday January 07, 2005 @03:04PM (#11289963)
    Vitesse had CMOS GaAs as small as 0.35u and had to abandon the technology when smaller-geometry silicon caught up in speed with GaAs. The money wasn't there (in 2000) to make a smaller-geometry fab. Also, my understanding is that at smaller geometries the advantage for GaAs is reduced. Indium phosphide is another possible technology. The big problem is that a huge heap of money will be needed to develop a high-speed, high-integration replacement for silicon, and there's no guarantee that it will ever pay off. For the foreseeable future, consumer processors will remain silicon.
  • by PaulBu ( 473180 ) on Friday January 07, 2005 @03:55PM (#11290424) Homepage
    I haven't been in the superconductor field for ten years now... what's the technology being used for the switches/logic gates?

    Hmm, I am wondering what kind of logic you were using 10 years ago! ;-) Good old latching stuff? No, it was 1994; SFQ and Nb trilayer were already out there in the field. Actually, I came to this country to work on it some time in '92, I guess...

    Yes, it is SFQ/RSFQ (Single Flux Quantum) logic, counting individual magnetic flux quanta, but no, it has nothing to do with the now over-financed "quantum computing". ;-) We can put tens of thousands of Josephson junctions per chip now, all connected with matched superconducting transmission lines (i.e., no RC time constants, nor F*CV^2/2 power), through which picosecond-wide pulses fly just fine. If you are interested, I can tell A LOT more -- hey, I'm one of the people who are still interested in pursuing this technology...

    As for GaAs, it's alive and well in the world of RF (analog) amplifiers going up to 100 GHz

    And with InP you can make amplifiers up to 150 GHz and maybe higher (though not broadband), but there is a huge difference between being able to amplify a signal and being able to do any kind of meaningful digital logic at fixed power consumption... Actually, time for me to get off /. and get back to those pesky transistors... ;-)

    Paul B.

  • by PaulBu ( 473180 ) on Friday January 07, 2005 @05:03PM (#11291106) Homepage
    STI and Conductus were successful in marketing PASSIVE HTS components (analog filters for cellular basestation receivers), and their main accomplishment was, actually, getting "normal" systems engineers not to be scared of having a cooler in the system (providing a reliable cooler was also important ;-) ). The brilliant marketing gimmick was that they actually packaged a traditional filter and a switch in parallel with their SC filter in the same box, so if the cooler failed the system would fall back to the traditional normal design, with some loss of capacity, of course, but at least it would still function.

    As to digital logic, it is REALLY hard to make reproducible Josephson junctions (the active elements in SCE circuits) in HTS. One can make 2-4 of them for SQUID sensors (and that is a big market for HTS too), but for digital stuff you need thousands and millions of them. In a certain way HTS vs. LTS is similar to GaAs vs. CMOS -- it is easy to make a really nice, but simple, analog front-end in one, but the other can handle much more processing.

    Replacing metal wiring on transistor chips with superconducting wiring will not help that much: yes, the part of the RC constant that comes from wire resistance will be gone, but you'd still dissipate F*CV^2/2 power to charge/discharge the line. To fully utilize SCE logic one needs to use SCE active elements (current-sensitive JJs, not voltage-sensitive transistors).

    I foresee the day that a user will be able to use a superconducting set of electronics on the desk.

    Me too! ;-) It definitely can be done IF some larger system is built and verified first; then the technology becomes a commodity. Check out, for example, this [sunysb.edu] presentation by my former advisor and one of the godfathers of the whole field; search for the PeT workstation... ;-)

    Paul B.

    P.S. There is another fundamental reason to choose LTS rather than HTS superconductors. The beauty of SFQ logic is that it uses an almost quantum-limited amount of energy per switch. When one starts increasing the temperature, thermal noise becomes too high (yes, even at 77K) and the main advantage -- tiny energy dissipation, which allows for very dense packaging -- goes away.

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...