Hardware

Where's My 10 Ghz PC?

Posted by michael
from the don't-forget-the-flying-car dept.
An anonymous reader writes "Based on decades of growth in CPU speeds, Santa was supposed to drop off my 10 GHz PC a few weeks back, but all I got was this lousy 2 GHz dual processor box -- like it's still 2001... oh please! Dr. Dobb's says the free ride is over, and we now have to come up with some concurrency, but all I have is dollars... What gives?"
  • Asymptotic (Score:4, Interesting)

    by dsginter (104154) on Friday January 07, 2005 @11:38AM (#11288247)
    We've found the limits of silicon and hard drives, and they are being approached asymptotically. Relax...
    • Re:Asymptotic (Score:4, Insightful)

      by BrianHursey (738430) on Friday January 07, 2005 @11:42AM (#11288304) Homepage Journal
      True, we have found the limits of our materials; hence we need to think outside the box and find new ones.
    • Re:Asymptotic (Score:4, Informative)

      by Zocalo (252965) on Friday January 07, 2005 @11:54AM (#11288484) Homepage
      Without a major breakthrough, which isn't something I'd bet on, I'll agree that we are very close to the limits of silicon based CPUs. Strained Silicon and Silicon on Insulator are effective stopgaps, but multi-core and possibly switching to something like Gallium Arsenide are the most likely ways forward for greater processing power at the moment.

      Hard drives, however? Some of the areal densities being achieved in R&D labs are significantly higher than what we have now and will allow for plenty of capacity growth if they can be mass produced cheaply enough. Sure, we're approaching a point where it's not going to be viable to go any further, but we're not going to arrive there for a while yet. There is also the option of making the platters sit closer together so you can fit more of them into a drive, of course. If you really want or need >1TB on a single spindle then I think you'll need to wait just a few more years.

      • Re:Asymptotic (Score:5, Insightful)

        by lucifuge31337 (529072) * <daryl AT introspect DOT net> on Friday January 07, 2005 @12:12PM (#11288694) Homepage
        Without a major breakthrough, which isn't something I'd bet on, I'll agree that we are very close to the limits of silicon based CPUs.

        Remember when 9600 baud was close to the limit of copper? Then 33.6. Then they changed how the pair was used, and made 128K ISDN. Then they changed it again and we're getting 7-10 Mbps DSL... sometimes even faster, depending on the line.

        I find it hard to say that we're close to the limits of any technology in the computer/telecom field. Someone always seems to find a new way around it.
        • Re:Asymptotic (Score:3, Informative)

          by netwiz (33291)
          I find it hard to say that we're close to the limits of any technology in the computer/telecom field. Someone always seems to find a new way around it.

          Perhaps not, but things are getting really dicey WRT silicon processes. The latest process shrink to 90nm really hurt, and required bunches of tricks to make it work. Specifically, thermal dissipation is a big problem: when you shrink chips, they get hotter and require more idle power to make them work. This increases the total thermal power you've got
        • Re:Asymptotic (Score:3, Insightful)

          by Zocalo (252965)
          Yes, there's certainly a possibility that there may be a breakthrough, I just don't see it happening for several reasons. First and foremost we have the laws of physics; you just can't make the traces on the silicon substrate much thinner and still know for sure what's going on. This is something that strained silicon has alleviated a little, but without further size reductions then more GHz equates to more heat.

          My other reasons are a little more subjective, but are largely to do with the fact that both

          • Re:Asymptotic (Score:5, Insightful)

            by arivanov (12034) on Friday January 07, 2005 @12:45PM (#11289104) Homepage
            No,

            The lack of breakthrough will be due to something entirely different.

            So far we have been exploiting the fruits of fundamental material science, physics and chemistry research done in the '60s (if not earlier), the '70s, and to a small extent the '80s. There has been nothing fundamentally new done in the '90s. A lot of nice engineering - yes. A lot of clever manufacturing techniques, silicon on insulator being a prime example - yes. But nothing as far as the underlying science is concerned.

            This is not just the semiconductor industry. The situation is the same across the board. The charitable foundations and the state, which used to be the prime sources of fundamental research funding, now require a project plan and a date when the supposed product will deliver a result (thinly disguised words for profit). They also do not invest in projects longer than 3 years.

            As a result no one looks at things that may bring a breakthrough, and there shall be no breakthroughs until this situation changes.
            • Re:Asymptotic (Score:4, Insightful)

              by AJWM (19027) on Friday January 07, 2005 @01:03PM (#11289311) Homepage
              Mod that +1 insightful.

              I might also throw in the possibility that, since the end of the Cold War, there has been very little incentive for governments, etc, to back fundamental research that might (a decade later) lead to radically new technologies. Governments like the status quo; they like the future to be predictable. Fundamental research (except perhaps in really esoteric areas like cosmology or areas with practical benefits for them like medicine) scares the willies out of the people in power -- it might upset their apple cart.

              • Re:Asymptotic (Score:4, Insightful)

                by JWhitlock (201845) <John-Whitlock AT ieee DOT org> on Friday January 07, 2005 @02:49PM (#11290358)
                I might also throw in the possibility that, since the end of the Cold War, there has been very little incentive for governments, etc, to back fundamental research that might (a decade later) lead to radically new technologies. Governments like the status quo; they like the future to be predictable. Fundamental research (except perhaps in really esoteric areas like cosmology or areas with practical benefits for them like medicine) scares the willies out of the people in power -- it might upset their apple cart.

                The government pumped over a half billion a year into the Human Genome Project, and spent $1.6 billion on nanotechnology last year. The government is still willing to spend money on basic research, but I doubt they are willing to create a whole new agency, such as NASA. They would rather have private companies do the work (even if federally funded) than create a new class of federal employees.

                I also think you are assuming malice on the part of the government, when instead you should be assuming stupidity. And, since it is a democracy, you don't have to look far to find the root of that stupidity.

        • Re:Asymptotic (Score:5, Informative)

          by Waffle Iron (339739) on Friday January 07, 2005 @12:42PM (#11289072)
          Remember when 9600 baud was close to the limit of copper?

          That was never the limit of copper. It was the limit of voiceband phone lines, which have artificially constrained bandwidth. Since voiceband is now transmitted digitally at 64 kbps, that's the hard theoretical limit, and 56K analog modems are already asymptotically close to that.

          If you hook different equipment to the phone wires without the self-imposed bandwidth filters, then it's easy to get higher bandwidth. Ethernet and its predecessors have been pushing megabits or more over twisted pair for decades.
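          The 64 kbps ceiling falls straight out of how voiceband is digitized (G.711 PCM: 8,000 samples per second at 8 bits per sample). A quick sanity check of the figures above, as a sketch:

```python
# Sanity check of the voiceband figures discussed above.
# G.711 PCM digitizes a phone call at 8,000 samples/s x 8 bits/sample.
sample_rate_hz = 8000
bits_per_sample = 8

ds0_bps = sample_rate_hz * bits_per_sample  # one DS-0 channel
print(ds0_bps)  # 64000 -- the "hard theoretical limit"

# A "56k" analog modem is already asymptotically close to that:
print(56000 / ds0_bps)  # 0.875
```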

          • by Sycraft-fu (314770) on Friday January 07, 2005 @01:19PM (#11289502)
            Analogue lines don't get a separate control channel the way ISDN lines do; instead the control is "bit robbed" from the signal. That takes out 8 kbps for signaling, giving 56k effective for encoding. That's why with ISDN there is talk of B and D channels. For BRI ISDN you get two 64k (DS-0) B (bearer) channels that actually carry the signal. There is then a 16k D (data) channel that carries the information on how to route the B channels.

            That's also why IDSL is 144k. The total bandwidth of an ISDN line is 144k, but 16k is used for circuit switching data. DSL is point-to-point, so that's unnecessary and the D channel's bandwidth can be used for signal.

            So 56k is as good as it will ever get for single analogue modems. In theory this could be changed in the future, I suppose, but I find that rather unlikely given that any new technology is likely to be digital end to end.
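            The channel arithmetic in the parent can be written out explicitly (a sketch of the figures above, nothing more):

```python
# Spelling out the DS-0 / ISDN channel arithmetic described above.
DS0_KBPS = 64  # one bearer channel

# Bit-robbing steals 8 kbps of a DS-0 for in-band signaling,
# which is where the 56k analogue modem ceiling comes from:
modem_kbps = DS0_KBPS - 8
print(modem_kbps)  # 56

# BRI ISDN: two 64k B (bearer) channels plus a 16k D channel
# carrying the circuit-switching data:
bri_total_kbps = 2 * DS0_KBPS + 16
print(bri_total_kbps)  # 144

# IDSL is point-to-point, so the D channel's 16 kbps can carry
# payload as well -- hence IDSL runs at the full 144k.
```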
      • by PaulBu (473180) on Friday January 07, 2005 @12:26PM (#11288869) Homepage
        ... and will always be! ;-) I think I first read this quote sometime in the late '80s/early '90s, and it is still true. You know why? Ever looked at the power dissipation specs of even the simplest GaAs chips? You would not want to build a processor out of those; Cray tried with the Cray-4 and failed... ;-(

        Superconductors are the way to go for the highest speeds/most concentrated processing power, due to extremely small power dissipation and extremely high clock frequencies (60 GHz for logic is relatively easy right now), but the problem is that after someone invests $3B in a modern semiconductor fab they do NOT want to build a $30M top-of-the-line superconductor fab to compete with it. IBM would be a good candidate for this, but they got burned on a superconductor computer project back in the '80s and would not touch it with a 10-foot pole now, though both the logic and the fab technology have changed dramatically since then.

        Disclosure: in my day job I design III-V chips, and I used to design superconductor chips up until recently; now trying to push that technology forward is more of a night job for me... ;-)

        Paul B.
        • Superconductors are the way to go for the highest speeds/most concentrated processing power, due to extremely small power dissipation and extremely high clock frequencies (60 GHz for logic is relatively easy right now), but the problem is that after someone invests $3B in a modern semiconductor fab they do NOT want to build a $30M top-of-the-line superconductor fab to compete with it.

          I'd think the more likely reasons would have to do, for starters, with consumers not wanting or being able to afford a computer
        • Disclosure: in my day job I design III-V chips, and I used to design superconductor chips up until recently; now trying to push that technology forward is more of a night job for me... ;-)

          I haven't been in the superconductor field for ten years now... what's the technology being used for the switches/logic gates?

          As for GaAs, it's alive and well in the world of RF (analog) amplifiers going up to 100 GHz - I think the current technology uses a 6" wafer. (see, for example, WIN Semiconductor [winfoundry.com])

          • I haven't been in the superconductor field for ten years now... what's the technology being used for the switches/logic gates?

            Hmm, I am wondering what kind of logic you were using 10 years ago! ;-) Good old latching stuff? No, it was 1994; SFQ and Nb trilayer were already out there in the field. Actually I came to this country to work on it some time in '92, I guess...

            Yes, it is SFQ/RSFQ (Single Flux Quantum) logic, counting individual magnetic flux quanta, but no, it has nothing to do with now over-
        • by ChrisMaple (607946) on Friday January 07, 2005 @02:04PM (#11289963)
          Vitesse had CMOS GaAs as small as 0.35u and had to abandon the technology when smaller geometry silicon caught up in speed with GaAs. The money wasn't there (in 2000) to make a smaller geometry fab. Also, my understanding is that at smaller geometries the advantage for GaAs is reduced. Indium phosphide is another possible technology. The big problem is that a huge heap of money will be needed to develop a high speed, high integration replacement for silicon, and there's no guarantee that it will ever pay off. For the foreseeable future, consumer processors will remain silicon.
        • Given that a liquid nitrogen cascade cooling system is beyond the reach of most consumers, so-called "high temperature superconductors" are basically out of the question. Until we actually have cost-effective room temperature superconductors, I kind of doubt we're going to see much of this. Unless you mean something different than "superconductor" when you say superconductor, I'm at a loss as to where you are going with this.
  • by CPNABEND (742114) on Friday January 07, 2005 @11:39AM (#11288252) Homepage
    Multi-processing is the way to go. We need to do that to help heat dissipation...
    • by WaZiX (766733) on Friday January 07, 2005 @11:50AM (#11288416)
      The CPU spends as much as 75% of its time idle because it's waiting patiently for the memory to give it something to do, with systems only delivering information at a max of 1 GHz and processors going up to almost 4 times as fast... Studies also show that we could in turn squeeze 20 GHz out of wires as long as 20 inches (and only by 2010 will we be able to achieve that), but that would only be sufficient for the 32-nanometer generation of microchips (and we're quite ahead of that)... So I think the future resides in optical connections within the motherboard, allowing processors to finally... well... process ;-)
      • A few problems with your post.

        1) 75% idle time is nonsense. Where did you get that number? With SPECfp on an Athlon or P4 it's more like 20-30% idle. Just look at how SPEC scores scale with frequency to figure out the memory-idle time.

        2) Increasing switching speed with optical technology increases bandwidth but does nothing for latency, since nothing travels faster than the speed of light and electrons flowing along a wire can achieve close to 80% of the speed of light already. To reduce latency, what
    • by dsginter (104154) on Friday January 07, 2005 @11:55AM (#11288502)
      Multi-processing is the way to go. We need to do that to help heat dissipation...

      So, you think that using multiple iterations of an inherently power-hungry technology will somehow solve the power problem? Certainly we could back off clock speeds with multi-processing and reduce heat considerably, but people always want the cutting edge, so the demand to "crank it up" would still be a profitable venture, thus pressuring the price of the lower-end stuff.

      Look at page 8 [intel.com]. Processors are approaching the heat density of a nuclear reactor. Silicon is dead. We'll need something else if we want more clock cycles (or perhaps a new computing paradigm... something "non-Von Neumann").
  • by inertia@yahoo.com (156602) on Friday January 07, 2005 @11:39AM (#11288254) Homepage Journal
    People in Soviet Russia, however, appear to be afflicted with amusing juxtapositions of the aforementioned situation.
  • by zoobaby (583075) on Friday January 07, 2005 @11:40AM (#11288274)
    It was just an observed trend. The trend is breaking, as far as retail availability, and thus we are not seeing our 10GHz rigs. (I believe that Moore's law is still trending fine in the labs.)
    • by stupidfoo (836212) on Friday January 07, 2005 @11:45AM (#11288349)
      Moore's "law" has nothing to do with Hz.

      From webopedia [webopedia.com]
      (môrz lâ) (n.) The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future.
    • by Raul654 (453029) on Friday January 07, 2005 @11:46AM (#11288351) Homepage
      Moore's law has nothing to do with processor frequency. It says that semiconductor capacity doubles every 18 months, not frequency (with the corollary that there is no appreciable change in price). As we all know, semiconductor capacity is roughly proportional to speed, so saying processor speeds double every 18 months is not quite wrong, just a little inaccurate. On the other hand, saying that we're not seeing 10 GHz processors, so Moore's law is broken, is wrong.
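      Taking the 18-month doubling at face value, the compounding is easy to sketch (the 2005 baseline below is a made-up round number, purely for illustration):

```python
# Illustrative sketch of an 18-month transistor-count doubling law.
# The 100-million-transistor 2005 baseline is a hypothetical figure.
def transistors(year, base_year=2005, base=100e6, doubling_years=1.5):
    return base * 2 ** ((year - base_year) / doubling_years)

for y in (2005, 2008, 2011):
    print(y, int(transistors(y)))
# doubling every 18 months: 4x after 3 years, 16x after 6 years
```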
    • by jj_johny (626460) on Friday January 07, 2005 @11:54AM (#11288482)
      No, Moore's law was about price/performance, not about absolute performance. If you look at the cost of a PC, it has consistently gotten better performance while decreasing in price. Nearer to the beginning of the PC revolution it was all performance improvement and very little price drop. Then in the early '90s it was kind of balanced. Then 2000 to 2004 was all about the machines getting cheaper with performance nudging along.

      But now even your cheapest PC covers most users' needs. So the CPU designers will continue to innovate, but they will find that people will be able to keep their PCs and other electronics longer. Fundamentally, the CPU business will start losing steam and slow down. When people don't need to get new machines, they won't. The perceived premium for the high-end products is getting less and less.

  • by skrysakj (32108) * on Friday January 07, 2005 @11:40AM (#11288275) Homepage Journal
    I remember the old days, when programmers nudged every
    single bit of speed and capability out of the machines they had.
    When computer engineers, faced with limits, still made magic
    happen.

    I hope this ushers that habit back into the profession. We have a lot of great technology, right now, let's find a better way to use it and make it more ubiquitous.
    • So I can run four instances of CS and p0wn everyone in my single-player, multi-character clan.
      Viva la VM-Ware
    • I doubt it will go back that way; we are to the point that they can be sloppy and get away with it.

      The limits are high enough now to not care. Back in the old days the limits were low enough that it did make a difference...

      Not only that, but the skills that used to exist in the older days are disappearing... "don't need to know that stuff"...
      • by grumbel (592662) <grumbel@gmx.de> on Friday January 07, 2005 @12:30PM (#11288928) Homepage
        ### The limits are high enough now to not care.

        The trouble is that this assumption is wrong. Computers would in theory be fast enough not to care about optimization all over the place; the trouble is that a lot of bad programming doesn't result in just a linear decrease of speed. If I use linear lookup instead of a hash table, speed will go down quite a bit more than the speed of the CPU increases over time.

        Simple example: Gedit, an extremely basic text editor, takes 4-5 seconds to load on a 1 GHz Athlon, while MSDOS edit on a 386 started in a fraction of a second. From a feature point of view both do basically the same. Gedit for sure has some more advanced rendering and GUI and isn't a text-mode application like MSDOS edit; however, shouldn't it be possible with today's CPUs, which are quite a bit faster than back then, to have an application that has better rendering than text mode but is still at least as fast or faster than back then?
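        The parent's point about linear lookup versus a hash table is easy to measure (a small sketch; the sizes are arbitrary and timings vary by machine):

```python
# Measuring a linear lookup against a hash lookup, as described
# above. The O(n) list scan gets worse as data grows; the O(1)
# set lookup does not, no matter how much faster the CPU gets.
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)
needle = n - 1  # worst case for the linear scan

t_list = timeit.timeit(lambda: needle in as_list, number=100)
t_set = timeit.timeit(lambda: needle in as_set, number=100)
print(f"list scan: {t_list:.4f}s  set lookup: {t_set:.6f}s")
```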
      • Embedded Systems (Score:5, Interesting)

        by crimethinker (721591) on Friday January 07, 2005 @12:46PM (#11289123)
        We're already back to the "old days" in the embedded systems field, if we ever left. When you have to squeeze every bit of life out of a battery, you make sure your code doesn't piss away processor cycles. If the system calls for an 8-bit processor and a certain network daemon, you make the daemon run on the 8-bit processor, even if it was originally written for a 32-bitter. (then you whack the salesweasel upside the head for promising the moon to the customer)

        If you crave the challenge of making tight, efficient code, sometimes with very little under you but the bare chip itself, then embedded systems might be the place for you.

        cue the grumpy old man voice: "Why back in my day, we didn't have 64-bit multi-core chips with gigabytes of memory to waste, no sir, we had to write in assembly code for 8-bit processors, and WE LOVED IT!"

        -paul

    • by gardyloo (512791) on Friday January 07, 2005 @12:27PM (#11288884)
      Ah, yes.
      It seems that we need to review
      The Story of Mel.

      I'll post it here from several places,
      So that the good people of /.
      (and the other people of /.)
      Don't wipe out a single server (yeah, right!)

      http://www.cs.utah.edu/~elb/folklore/mel.html [utah.edu]
      http://www.wizzy.com/andyr/Mel.html [wizzy.com]
      http://www.science.uva.nl/~mes/jargon/t/thestoryofmel.html [science.uva.nl]
      http://www.outpost9.com/reference/jargon/jargon_49.html [outpost9.com]

      and, of course, many other places.
  • by salvorHardin (737162) <adwulf@nOsPAM.gmail.com> on Friday January 07, 2005 @11:40AM (#11288277) Journal
    ...I cannae change the laws'a'physics!
  • by dunsurfin (570404) on Friday January 07, 2005 @11:41AM (#11288283)
    According to most predictions we were meant to be enjoying lives of leisure by this point - working a 5-hour week in the paperless office, and driving to work in our hovercars.
    • by Tackhead (54550) on Friday January 07, 2005 @11:50AM (#11288415)
      > According to most predictions we were meant to be enjoying lives of leisure by this point - working a 5-hour week in the paperless office, and driving to work in our hovercars.

      Judging from these pictures of the Intel retail boxed heatsink [impress.co.jp] for the Pentium 4 560J (3.6 GHz), by the time we get 10 GHz PCs, the hovercar problem will take care of itself.

  • by SIGALRM (784769) * on Friday January 07, 2005 @11:41AM (#11288285) Journal
    Make a CPU ten times as fast, and software will usually find ten times as much to do (or, in some cases, will feel at liberty to do it ten times less efficiently)
    I find that software designers often do not take resource limits seriously. Programming is tedious, hard work. The algorithms chosen *are* important, and in some cases you shouldn't simply reach into the API toolbox and use the third-party solutions. There is no substitute for knowing how to write your own sort routines, specialized linked lists, and binary trees.
    • Right. But you also need to know when to write your own optimized software, and when using the API toolbox won't cause much slowdown and will let you deliver faster and cheaper.

      I would also observe that programming can be a lot of fun.
    • by hng_rval (631871) on Friday January 07, 2005 @11:57AM (#11288519)
      There is no substitute for knowing how to write your own sort routines, specialized linked lists, and binary trees.

      What about knowing how to use the libraries that have these functions built in, such as the STL? You might not be 100% as efficient with the libraries, but you can be sure that those libraries are tested and optimized; if you write these functions yourself, they might be buggy and will most likely be slower than what comes with the compiler.
    • by gUmbi (95629) on Friday January 07, 2005 @12:01PM (#11288575)
      There is no substitute for knowing how to write your own sort routines, specialized linked lists, and binary trees.

      Hogwash! Write first, optimize later... or in the real world: write first, optimize if the customer complains. Even then, what are the chances that I can write a better sorting algorithm than one included in a standard library that was written by someone who studied sorting algorithms? Close to zero.
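      Those odds are easy to check: here's a textbook quicksort in pure Python up against the standard library's sort (a sketch; the exact timings will vary by machine):

```python
# Handwritten textbook quicksort vs. the standard library's sort.
import random
import timeit

def my_quicksort(xs):
    # Correct, but naive: allocates new lists at every level.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (my_quicksort([x for x in rest if x < pivot])
            + [pivot]
            + my_quicksort([x for x in rest if x >= pivot]))

data = [random.random() for _ in range(10_000)]
assert my_quicksort(data) == sorted(data)  # same answer...

t_mine = timeit.timeit(lambda: my_quicksort(data), number=5)
t_stdlib = timeit.timeit(lambda: sorted(data), number=5)
print(f"handwritten: {t_mine:.3f}s  stdlib: {t_stdlib:.3f}s")
```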
      • by SIGALRM (784769) * on Friday January 07, 2005 @12:14PM (#11288723) Journal
        Hogwash! Write first, optimize later
        No, you cannot retrofit quality and performance into a software project.

        what are the chances that I can write a better sorting algorithm than one included in a standard library that was written by someone who studied sorting algorithms? Close to zero
        Maybe so, but it can (and should) be done in specific cases. For example, I maintain a library of binary tree functions, and I do use them frequently. They are well tested and perform beautifully. However, a project I completed recently required a large amount of data to be traversed in a specific manner, so we designed and built our own BTA--specifically optimized for the task.

        As you know, poor design will bubble up through the code and bite you in the end... and your project will suffer for it.
      • by corngrower (738661) on Friday January 07, 2005 @01:17PM (#11289478) Journal
        Hogwash! Write first, optimize later...or in the real world: write first, optimize if the customer complains.
        Supposing that you need that first sale of your system to a customer, and when they demo your software, they see it's so slow that they dismiss it and buy the competitor's product. You don't have a second chance. This actually happened with a company I know of. The company pretty much went tits up because the architect neglected performance.

        Even then, what are the chances that I can write a better sorting algorithm than one included in a standard library that was written by someone who studied sorting algorithms?
        I don't necessarily need to write the sort algorithm, but I need to be concerned with the effect of using the various algorithms on my system and select the correct one accordingly.
        Again, that company that failed went with using a standard library for some functionality in the product instead of rolling their own, and this had disastrous results. After the customer complained about performance, they found that they'd need to completely redesign a significant portion of the product to correct the problem. It wasn't a two or three day fix. The fix would have taken 1-2 months. Try eating that cost when you're a small company.

  • A Good Thing? (Score:5, Insightful)

    by rdc_uk (792215) on Friday January 07, 2005 @11:43AM (#11288319)
    To my mind it _might_ be a good thing if the rampant speed-advance slowed (a lot).

    Consider:

    We might get some return to efficient coding being the norm, instead of writing systems anyhow and throwing more/faster hardware at them until they run acceptably (Microsoft; it's you I'm looking at!)

    Your (and your business's) desktop machine might _not_ become obsolete in 2 years, and might continue in useful service as something more sensible than a whole PC doing the job of a router...

    Processor designers might spend more time (I know they already spend some) innovating new ideas, rather than solving the problems caused by just ramping up clock speeds.

    Cooling/Quietening technology might have a snowball's chance in hell of catching up with heat output?

    (and the wild dreaming one)
    Games writers might remember about gameplay, rather than better coloured lighting...
    • Re:A Good Thing? (Score:5, Insightful)

      by nine-times (778537) <nine.times@gmail.com> on Friday January 07, 2005 @12:33PM (#11288956) Homepage
      Processor designers might spend more time (i know they already spend some) on innovating new ideas, rather than solving the problems with just ramping up clock speeds....Games writers might remember about gameplay, rather than better coloured lighting...

      These both relate to a trend in the market that I believe we're seeing. Consumers are finding that their "old" computers from 2 years ago are still doing their jobs. When I have a 2 GHz Dell that I use for web surfing, word-processing, and e-mail, there's no benefit to upgrading to the newest 3.4 GHz Dell. Though there's a hefty speed bump in there, most users will never know the difference.

      Therefore, developers/manufacturers are being forced to focus on things like usability and features. They're making their products smaller and more efficient, easier to use, and making them fit transparently into the user's life better. They're focusing on the whole "convergence" idea.

      Instead of people spending money on RAM upgrades, the money is going to smaller/lighter/better digital cameras, iPods, and home theater technology. In short, instead of seeing the same box being rolled out every year with better stats, we're seeing new boxes coming out every year with pretty much the same stats, but better designed boxes-- boxes that are actually more useful than last year's model, and not just faster.

      I, for one, hope the trend continues.

    • Thanks to AMD, no (Score:3, Insightful)

      by gosand (234100)
      Processor designers might spend more time (i know they already spend some) on innovating new ideas, rather than solving the problems with just ramping up clock speeds.


      Dude, that is what Intel was doing until AMD came along and forced them to get into this "keeping up with the Joneses" routine.


      I can't decide whether to put a smiley face on this or not. I was being sarcastic, but for all we know it might be partially true!

    • Re:A Good Thing? (Score:3, Insightful)

      by akuma(x86) (224898)
      Oh my...where to begin.

      >> We might get some return to efficient coding being the norm, instead of writing systems anyhow and throwing more/faster hardware at it until it runs acceptably (Microsoft; its you I'm looking at!)

      Efficient coding is only useful if there is a return on your investment for efficiency. Exponentially increasing hardware capability over time at the same cost point makes this tradeoff obvious. The article is saying the hardware capability will still increase, but the programme
  • dual cpu systems (Score:4, Interesting)

    by Lawrence_Bird (67278) on Friday January 07, 2005 @11:43AM (#11288320) Homepage
    Since the mid '90s that's all I have built - they really do extend the time before you feel compelled to upgrade. Sure, there are not that many apps that run threads on each CPU, but to me a large part of it is that I run many applications simultaneously. With 2 CPUs I rarely get any sluggish feel. And if one app is being especially hoggish I can set it to run on one CPU and flip another important app to the other CPU.

    This time around I also sprang for a hardware RAID card and set up a RAID 10 array. That has helped quite a bit with system responsiveness.

    I've also turned off as much eye candy as possible. After a couple of days it's really not missed and things are much snappier.

    Yeah, it would be great if I could run out and get some 10 GHz chips to fry a few eggs on, but I think my dual MP2200's still have a bit of life in them.
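    Pinning a hog to one CPU, as described above, can be scripted; on Linux, Python exposes the scheduler affinity call directly (a sketch -- other platforms need taskset-style tools instead):

```python
import os

# Restrict the current process (pid 0) to CPU 0 only, leaving the
# other CPU free for more important apps. Linux-specific API.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})
    print(os.sched_getaffinity(0))  # the set of allowed CPUs
else:
    print("sched_setaffinity not available on this platform")
```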
  • by UncleRage (515550) on Friday January 07, 2005 @11:44AM (#11288333)
    flying car.

    Where else would it be?

  • I've always wondered (Score:5, Interesting)

    by harks (534599) on Friday January 07, 2005 @11:45AM (#11288336)
    Why the size constraints on processors? Could a processor be made twice as fast if it could be made twice the size? When we hit the limit on how small transistors can be made, could processors continue to increase in speed by being made larger? I see no reason why processors need to be kept to two inches square.
    • by mikeee (137160) on Friday January 07, 2005 @11:52AM (#11288448)
      No, making it bigger will make it slower. Current digital systems are mostly "clocked" (they don't have to be, but that gets much more complicated), which means that signals have to be able to get from one side of the system to the other within one clock cycle.

      This is why your CPU runs at a faster speed than your L2 cache (which is bigger), which runs at a faster speed than your main memory (which is bigger), which runs at a faster speed than memory in the adjacent NUMA-node (which is bigger), which runs faster than the network (which is bigger),...

      Note that I'm talking about latency/clock-rate here; you can get arbitrarily high bandwidth in a big system, but there are times when you have to have low latency and there's no substitute for smallness then; light just isn't that fast!
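The parent's "light just isn't that fast" point is easy to put numbers on; a back-of-the-envelope sketch using the vacuum speed of light (real on-chip signals propagate slower still):

```python
C_CM_PER_S = 3.0e10  # speed of light, ~3*10^10 cm/s (vacuum)

def cm_per_cycle(clock_hz):
    """How far a signal can possibly travel in one clock cycle, in cm."""
    return C_CM_PER_S / clock_hz

# At 1 GHz a signal can cross ~30 cm per cycle; at 10 GHz, only ~3 cm,
# which is why big clocked systems must run slower than small ones.
for hz in (1e9, 3e9, 10e9):
    print(hz / 1e9, "GHz:", cm_per_cycle(hz), "cm per cycle")
```

At 10 GHz even a signal path the width of a motherboard costs multiple cycles, which is exactly the latency wall described above.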
    • by ZorbaTHut (126196) on Friday January 07, 2005 @11:57AM (#11288523) Homepage
      The problem with that is light speed. Transmitting a lightspeed signal across one centimeter takes about 3.3*10^-11 seconds - which sounds like nothing, until you realize that a single CPU cycle now takes only about 3.3*10^-10 seconds. And I don't even know if electricity travels at true lightspeed or at something below that.

      Another problem, of course, is heat - if your 1cm^2 CPU outputs 100W of heat, a 10cm^2 CPU is going to dump 1000W of heat. That's a hell of a lot of heat.

      A third problem is reliability. Yields are bad enough with the current core sizes, tripling the core sizes will drop yield even further.

      And a fourth problem is what exactly to *do* with the extra space. :) Yes, you could just fill it with cache, but that still won't give you a computer twice as fast for every twice as much cache - MHz has nothing to do with how many transistors you can pile on a chip. (Of course, you could just put a second CPU on the same chip . . .)
      • by Tacky the Penguin (553526) on Friday January 07, 2005 @12:46PM (#11289111)
        The problem with that is light speed.

        Light speed is a big issue, but so is stray capacitance and inductance. A capacitor tends to short out a high-frequency signal, and it takes very little capacitance to look like a dead short to a 10 GHz signal. Similarly, the stray inductance of a straight piece of wire has a high reactance at 10 GHz. That's why they run the processor at high speed internally, but have to slow down the signal before sending it out to the real world. If they sent it out over an optical fiber, things would work much better.

        And I don't even know if electricity travels at true lightspeed or at something below that.

        Under ideal conditions, electric signals can travel at light speed. In real circuits, it is more like .5c to .7c due to capacitive effects -- very much (exactly, actually) the same way a dielectric (like glass or water) slows down light.

        --Tacky the BSEE
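The reactances behind Tacky's point are one-line formulas; a sketch with ballpark stray values (the 1 pF and 1 nH figures are illustrative assumptions, not measurements):

```python
import math

def x_c(f_hz, c_farads):
    """Capacitive reactance |X_C| = 1/(2*pi*f*C); falls toward a short as f rises."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

def x_l(f_hz, l_henries):
    """Inductive reactance X_L = 2*pi*f*L; rises with frequency."""
    return 2 * math.pi * f_hz * l_henries

f = 10e9  # 10 GHz
print(x_c(f, 1e-12))  # ~16 ohms: a mere 1 pF of stray capacitance
print(x_l(f, 1e-9))   # ~63 ohms: a mere 1 nH of straight wire
```

At 10 GHz a picofarad of stray capacitance really does look close to a short, and a nanohenry of plain wire adds tens of ohms of reactance, which is why off-chip signals have to be slowed down.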
  • by 99BottlesOfBeerInMyF (813746) on Friday January 07, 2005 @11:46AM (#11288359)

    Ramping up clock speeds is hitting some serious limitations as far as increasing the work done by a machine is concerned. There are lots of ways to get work done faster. They are just harder to market without some good, popular, and independent benchmarking standards. At some point engine manufacturers realized that increasing the cubic centimeters of displacement in an engine was not the best way to make it faster or more powerful. Now most car reviews include horsepower. Clock speed is analogous to CCs.

  • Get over it (Score:3, Insightful)

    by Mirk (184717) <.slashdot. .at. .miketaylor.org.uk.> on Friday January 07, 2005 @11:46AM (#11288363) Homepage
    If, as the Dr. Dobbs article says, "the free lunch is over", then the only sensible thing to do is make do with what we have now. For gosh sakes, people, the computers we have now are already insanely over-powered. How many more gigahertz do we need, already?
    • Re:Get over it (Score:4, Insightful)

      by Wordsmith (183749) on Friday January 07, 2005 @12:46PM (#11289115) Homepage
      Computers won't be fast enough until they can do anything we'd want of them near instantly. If I have to wait for feedback, it's not fast enough.

      My Athlon64 3200, which isn't top-of-the-line but is pretty close, still takes quite a bit of time to convert a DVD to DivX. It takes a few minutes (because I/O needs to get faster) to copy large volumes of files. Photoshop filters on huge, detailed files can take a few minutes to run. Machines only slightly slower choke on playback of HDTV. I can't imagine how long it takes to encode.

      When I can do all those things instantly, do accurate global weather predictions in realtime, and have my true-to-life recreation of the Voyager doctor realize his sentience, THEN computers will be fast enough. Until the next killer app comes, of course.
  • Abstract it away... (Score:3, Interesting)

    by Lodragandraoidh (639696) on Friday January 07, 2005 @11:47AM (#11288378) Journal
    What gives?


    You, sir, are an idiot. :p

    Seriously though, the article recommends building applications concurrently. Short-term this may be the case on a small scale (and really already is the case).

    The fundamental paradigm shift that will occur will be when we build our operating systems to handle concurrency for us; the advent of 4GLs will help move this forward.

    In this model, you would program normally, not worrying about concurrency at all. The OS would do all the dirty work of breaking up your application into pieces that can run concurrently for you. Are we there yet? No. Will we be there? Yes - particularly if you want to keep productivity at high levels. You will have to abstract concurrency from the day to day programmer for this to happen.
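Some of that abstraction already exists in embryo: pool constructs let the programmer write an ordinary map while the runtime farms the work out across processors. A minimal sketch of the idea (the work function is just a stand-in for any pure, CPU-bound kernel):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # Stand-in for a real CPU-bound computation.
    return x * x

def parallel_map(fn, items, workers=4):
    """The programmer writes an ordinary map; the pool handles the concurrency."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

print(parallel_map(work, range(8)))  # same results, same order, as plain map()
```

The programmer never touches a thread or a lock here, which is the direction the parent comment is pointing at, though the runtime still only parallelizes what it is explicitly handed.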
    • by arkanes (521690)
      The OS would do all the dirty work of breaking up your application into pieces that can run concurrently for you.

      In a word, no. At least not with current languages. There's a reason we don't do this already, after all. Provably correct concurrency is very hard to generate, and almost impossible with pure machine code - you either end up with deadlocks and race conditions, or very poor performance because you serialize too much stuff, or incorrect results because data is transparently copied instead of shared.

  • by AviLazar (741826) on Friday January 07, 2005 @11:49AM (#11288392) Journal
    When they get off silicon and hop onto those nice diamond wafers (there is an article in Wired), then we will see faster processing.

    The main problem - our largest producer (Intel) said they would not stop utilizing silicon until they had made more money from it... We know that the industry likes to stagger upgrades. Instead of giving us the latest and greatest, they give us everything in between in nice "slow" steps so we spend more money. Personally, I wouldn't mind seeing jumps of 1GHz at a time: this year 2.0GHz, next year 3.0, the following year 4.0, and eventually increase it further to 5GHz at a time.
  • by melted (227442) on Friday January 07, 2005 @11:49AM (#11288397) Homepage
    When there's no free ride, programmers will have to compete with each other on who can squeeze that last bit of performance out of existing hardware. So you can kinda sorta predict a revival of performance-conscious programming.
  • by Fr05t (69968) on Friday January 07, 2005 @11:51AM (#11288431)
    "Based on decades of growth in CPU speeds, Santa was supposed to drop off my 10 Ghz PC a few weeks back, but all I got was this lousy 2 Ghz dual processor box"

    Santa was unable to deliver your 10Ghz system this year for the following reasons:

    1) Santa's Flying Car has not arrived

    2) Santa could not use his sleigh because it failed the new FCC safety requirements for suborbital ships (something about flaming reindeer poo falling from the sky).

    3) The OS for the new 10Ghz computer is Duke Nukem Forever which isn't currently available - maybe next year or decade.
  • Yeah (Score:3, Funny)

    by Aggrazel (13616) <aggrazel@gmail.com> on Friday January 07, 2005 @11:52AM (#11288454) Journal
    And for that matter, where's my Mr. Fusion, Hovercar conversion, Jaws 17 and perfected weather service? Aren't those supposed to be done by 2015?
  • by nurb432 (527695) on Friday January 07, 2005 @11:53AM (#11288464) Homepage Journal
    Just click here.. and send me your CC number, name, and billing address and I'll get it shipped right out to you.

    Free shipping if you act in the next 24 hours..

    But wait.. there's more..
  • by unfortunateson (527551) on Friday January 07, 2005 @11:56AM (#11288515) Journal
    The fallacy here is that the clock speed has to keep doubling. Moore's law says that the number of transistors on a chip doubles every 18 months, and we're still pretty close to that.

    Intel has just caved on the speed doubling in particular, by knocking the clock speed off their product designations, mainly because the Pentium M chips were running significantly faster than the same-speed P4's. AMD's Athlons have been 'fudging' their numbers by having the product number match not their clock speed, but that of the roughly equivalent P4 chip.

    Meanwhile, cache sizes are up, instruction pipes are up, hyperthreading has been here a while, multi-core chips are coming down the pike... we're still getting speed gains, just not in raw clocks.

    At the same time, the Amiga philosophy of offloading work to other processors holds true, with more transistors on the high-end graphics processors than there are on the CPUs!

    I hate to say it, but what do you think you need 10GHz for anyway? Unless you've got a REALLY fat pipe, there's a limit on how much pr0n you can process ;^)

    The high-end machines do make good foot-warmers in cold climes.
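The transistor-doubling version of Moore's law quoted above compounds quickly; a quick sketch (the 42-million starting count, roughly a 2000-era Pentium 4, is an assumption for illustration):

```python
def transistors(start, months, doubling_period_months=18):
    """Projected transistor count after `months` of Moore's-law doubling."""
    return start * 2 ** (months / doubling_period_months)

# Five years of doubling every 18 months is a bit more than a 10x increase:
print(transistors(42e6, 60) / 1e6)  # ~423 million transistors
```

Those extra transistors can go into cache, extra pipelines, or whole extra cores without the clock moving at all, which is the parent's point.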
  • by Baldrson (78598) * on Friday January 07, 2005 @11:59AM (#11288550) Homepage Journal
    First of all, when DARPA decided to directly back specific technologies such as Danny Hillis' "Connection Machine [base.com]" while supercomputer sales were flagging, they corrupted the market-driven support for supercomputing innovation. As a result just when Seymour Cray had a viable production line for GaAs cpus there was virtually zero market demand for the technology. The lower capacitance as well as higher mobility of the electrons of his version of GaAs technology weren't the sole benefits -- it was also about a factor of 10 cheaper to capitalize the fabrication facilities.

    Whenever the government "picks winners" rather than letting nature pick winners, the technologists and therefore technology loses.

    (Now that Cray is dead, according to the supercomputing FAQ, "The CCC intellectual property was purchased for a mere $250 thousand by Dasu, LLC - a corporation set up and (AFAIK) wholly owned by Mr. Hub Finkelstein, a Texas oilman. He's owned this stuff for five years and hasn't done anything with it.")

    Secondly, as I've discussed before both operating system [slashdot.org] and database [slashdot.org] programming are awaiting the development of relations, most likely via the predicate calculus, as a foundation for software. Both are essentially parallel processing foundations for software.

    This feeds into quantum computing quite nicely as well, as relations are not just inherently parallel, but are parallel in such a way that they precisely model quantum software [boundaryinstitute.org].

  • by C A S S I E L (16009) on Friday January 07, 2005 @12:03PM (#11288601) Homepage
    Concurrency is the next major revolution in how we write software

    ...as we've been saying for, oh, at least the last 20 years, which is about the time I was writing up my Ph.D. thesis on concurrent languages and hardware.

    As far as I can see (being slightly out of the language/computer design area these days), concurrent machines and languages aren't taking off for the same reasons they didn't take off in the 1980's:

    • Implicitly concurrent languages (ones where the concurrency comes for free) are either next to useless (since they tend not to have state, and have problems with a stateful world containing things like, oh, I/O), or end up not being very concurrent at all once they're running;
    • Explicitly concurrent languages (ones with concurrency constructs) are tricky to program with, and debug, if you're trying to exploit the concurrency; shared memory (tricky at the hardware level) gives you multithreading, otherwise you're into the process world with very little in terms of shared objects etc.
    • Concurrent hardware tends to have wacky constraints in order to operate with any degree of efficiency (Inmos Transputer anyone?) and is, again, a pain to program;
    • The fancy concurrent hardware is custom-built, and by the time the boffins have built a concurrent machine that runs reliably based around processors of speed X, delivering concurrency of degree Y, Moore's Law dictates that you can go to your local computer store and buy a $1000 PC with processor speed greater than X * Y.

    There's more than a handful of generalisations there, but in short: Moore's Law means that nobody is going to buy a highly concurrent computer when consumer PCs are still getting faster, and the people who really need high parallelism (modellers and the like) have their own special-purpose toys to work with.
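The "tricky to program with, and debug" point about explicitly concurrent languages is easy to demonstrate with the classic lost-update bug; a minimal sketch in Python, where removing the lock makes the final count liable to come up silently short:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, this read-modify-write can interleave with
        # another thread's and silently lose increments (a race condition).
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; typically less without it
```

The bug only appears under particular interleavings, which is exactly why debugging explicit concurrency is so painful: the broken version passes most test runs.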

  • by TibbonZero (571809) <TibbonNO@SPAMgmail.com> on Friday January 07, 2005 @12:08PM (#11288646) Homepage Journal
    And to think that Apple's CPUs are nearly at the same 'number speed' in the MHz race now!

    Who would'a ever thought to see that happen?
  • There is one law in computer programming that is even more certain than Moore's Law: Over time, the user is going to do less work for the computer and the computer is going to do more work for the user.

    Remember back when users had to wait in line in front of a terminal to run their punchcards through the mainframe? Back then, human time was cheap and computer time expensive. Nowadays the user's time is paramount.

    Multithreaded programming breaks this law: it is hard to do, and humans just don't think that way very well. To do it in a way that an arbitrary program (i.e. not a ray tracer) sees consistent performance gains in a multi-CPU environment is almost PhD-level hard. Making single-threaded software is already a major undertaking, and anyone thinking that, in general, they should start designing all their programs as fundamentally concurrent is going to fall behind their competition on other factors (security, features, etc.).

    Instead, I believe the only way concurrent programming is going to play a major role for the majority of software is at the compiler and OS levels: the OS and compiler designers are going to have to do their utmost to transform single-threaded software to perform optimally in a multi-CPU environment. These folks will have to take up the slack that stalled CPU clock speeds are creating in software performance; concurrent programming at the application level will play only a minor role, in my opinion.
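How little an arbitrary, mostly-serial program gains from extra CPUs is captured by Amdahl's law, which backs up the pessimism above; a quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Overall speedup when only part of the work can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)

# Even a program that is 90% parallel gets nowhere near 4x from 4 CPUs,
# and a half-serial program can never beat 2x no matter how many CPUs.
print(amdahl_speedup(0.9, 4))    # ~3.08
print(amdahl_speedup(0.5, 100))  # ~1.98
```

The serial fraction dominates quickly, so unless compilers and OSes can shrink it automatically, piling on cores yields diminishing returns for ordinary applications.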
  • Longhorn Screwed? (Score:4, Informative)

    by SVDave (231875) on Friday January 07, 2005 @12:21PM (#11288809)
    According to Microsoft, an average [slashdot.org] Longhorn system will need a 4-6GHz CPU. But if, when Longhorn arrives, 4GHz CPUs are high-end parts and 6GHz CPUs don't exist, well... I don't predict good things for Microsoft. Longhorn in 2007, anyone? Or maybe 2008...

    • by twitter (104583)
      Quoth the author:

      "Andy giveth, and Bill taketh away."

      That's only half right, because you don't have to let Bill take away. KDE3 runs well on a 233MHz PII and 64MB of RAM, almost a whole order of magnitude less hardware than it takes to make XP happy. The picture is more drastic when you consider the virus load most XP setups must endure. You need a 2GHz processor just to keep running while your computer serves out spam and kiddie porn.

      The changes Dr. Dobbs so wants are already happening in free software.

  • by Mordaximus (566304) on Friday January 07, 2005 @12:28PM (#11288903)
    Which processor outperforms which:

    1a)486-25SX
    1b)486-25DX

    2a)PIII - 450
    2b)G4 - 450

    3a)G3 - 300
    3b)Playstation 2 - 300

    Moral of the story: there are far, far more important performance measurements than clock frequency. If you think otherwise, you might as well slap a VTEC sticker on your case.

    P.S. As others have pointed out, Moore's law has nothing to do with CPU frequency.
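The quiz above boils down to throughput being roughly clock rate times instructions per cycle, not clock rate alone; a sketch with made-up IPC figures (assumptions for illustration, not benchmarks of the chips named above):

```python
def mips(clock_mhz, instructions_per_cycle):
    """Rough throughput in millions of instructions per second."""
    return clock_mhz * instructions_per_cycle

# A lower-clocked chip with a better IPC can beat a higher-clocked one.
print(mips(450, 1.8))  # hypothetical 450 MHz part: ~810 MIPS
print(mips(600, 1.1))  # hypothetical 600 MHz part: ~660 MIPS
```

Real comparisons are messier still (memory latency, vector units, pipeline stalls), which only strengthens the point that the MHz number alone tells you little.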
  • by bigtrouble77 (715075) on Friday January 07, 2005 @12:31PM (#11288935)
    This was spewed from Intel in 2002:

    "First, by switching to the Pentium 4 architecture, Intel can drastically boost the clock speed. The old server Xeon topped out at 1.4GHz. The new one debuts at 1.8GHz, 2GHz and 2.2GHz, and will eventually pass 10GHz, she said."
    http://news.com.com/2100-1001-843879.html [com.com]

    I can't find the exact quote and article, but another Intel exec/rep stated that this goal would be achieved by 2006.

    Well, it's 2005, the P4 has topped out at 3.6GHz and has been discontinued because Intel has determined that the P4 architecture is stretched to the limit.

    Bottom line is that we should be expecting a 10GHz processor soon, because Intel brazenly stated that they would produce one. Whenever they make these statements the AP drools over the story, stock prices jump, and I'm sure investors get excited.

    Instead, their next-gen processor is a 2GHz Pentium M Dothan. Intel should be ashamed of themselves for lying to the public and should be investigated for inflating their stock value through fictional claims about their processor technology.
