
Moore's Law Will Die Without GPUs

Stoobalou writes "Nvidia's chief scientist, Bill Dally, has warned that the long-established Moore's Law is in danger of joining phlogiston theory on the list of superseded laws, unless the CPU business embraces parallel processing on a much broader scale."
  • An observation (Score:5, Informative)

    by Anonymous Coward on Tuesday May 04, 2010 @09:37AM (#32084140)

    Moore's Law is not a law, but an observation!

    • by binarylarry ( 1338699 ) on Tuesday May 04, 2010 @09:47AM (#32084250)

      Guy who sells GPUs says if people don't start to buy more GPUs, computers are DOOMED.

      I don't know about you, but I'm sold.

      • Re: (Score:3, Insightful)

        by Pojut ( 1027544 )

        Yeah, pretty much this. It's akin to the oil companies advertising the fact that you should use oil to heat your home...otherwise, you're wasting money!

      • by jwietelmann ( 1220240 ) on Tuesday May 04, 2010 @10:46AM (#32085088)
        Wake me up when NVIDIA's proposed solution doesn't double my electric bill and set my computer on fire.
        • Re: (Score:3, Insightful)

          by clarkn0va ( 807617 )

          The article has Dally advocating more efficient processing done in parallel. The potential benefits of this are obvious if you've compared the power consumption of a desktop computer decoding h.264@1080p in software (CPU) and in hardware (GPU). My own machine, for example, consumes less than 10W over idle (+16%) when playing 1080p with hardware decoding, and ~30W over idle (+45%) using software decoding. And no fires. See also the phenomenon of CUDA and PS3s being used as mini supercomputers, again, presumably without catching on

      • Obviously there's a conflict of interest here, but that doesn't mean the guy is necessarily wrong. It just means you should exercise skepticism and independent judgment.

        In my independent judgment, I happen to agree with the guy. Clock speeds have been stalled at ~3 GHz for nearly a decade now. There are only so many ways of getting more done per clock cycle, and radical parallelization is a good answer. Many research communities, such as fluid dynamics, are already performing real computational work on the GP

    • Re: (Score:3, Insightful)

      by maxume ( 22995 )

      It is also a modestly self-fulfilling prediction, as planners have had it in mind as they were setting targets and research investments.

    • Re:An observation (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Tuesday May 04, 2010 @09:58AM (#32084372) Journal
      It's also not in any danger. The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction as would things like SoCs gaining more domain-specific offload hardware (e.g. crypto accelerators).
      • Re:An observation (Score:4, Insightful)

        by hitmark ( 640295 ) on Tuesday May 04, 2010 @10:36AM (#32084928) Journal

        Yep, the "law" basically results in one of two things: more performance for the same price, or the same performance for a lower price.

        The thing is, though, that all of IT is hitched to the higher margins the first option produces, and does not want to go the route of the second. The second, however, is what netbooks hinted at.

        The IT industry is used to boutique pricing, but is rapidly dropping towards commodity.

        • The IT industry is used to boutique pricing, but is rapidly dropping towards commodity.

          Exactly.

          I recently upgraded my 3-year-old computer from a 2.6 GHz dual core to a 3.4 GHz quad core. Well, with overclocking, 3.0 GHz vs 3.7 GHz.

          Honestly, I upgraded more for compatibility with the newest video cards than for CPU reasons. Well, that and my 'server', i.e. the next older computer, was an older single-core unit with AGP graphics, to give you a clue about its age.

          I'm not that impressed. And that's a problem. If my $1k upgrade over a 3 year old $1k upgrade* doesn't impress me, then I'm not going

      • Re: (Score:3, Insightful)

        by Bakkster ( 1529253 )

        The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction as would things like SoCs gaining more domain-specific offload hardware (e.g. crypto accelerators).

        Actually, parallel processing is completely external to Moore's Law, which refers only to transistor quantity/size/cost, not what they are used for.

        So while he's right that for CPU makers to continue to realize performance benefits, parallel computing will probably need to become the norm, it doesn't depend upon nor support Moore's Law. We can continue to shrink transistor size, cost, and distance apart without using parallel computing; similarly by improving speed with multiple cores we neither depend up

      • Re: (Score:3, Interesting)

        by timeOday ( 582209 )

        The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction

      That is not sustainable at all. Let's say we reach the magic number of 1e10 transistors and nobody can figure out how to get performance gains from more transistors. If the price dropped 50% every 18 months, after 10 years CPU costs would drop by 99.1%. Intel's flagship processor would be about $
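      As a quick check of that figure (an editorial aside, not part of the original comment): halving every 18 months over ten years gives

      \[ 0.5^{\,120/18} \approx 0.0098, \]

      i.e. roughly a 99% price drop.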

    • It's a perfect example of a law. It offers no explanation and it predicts. Take Newton's laws of motion. They are just observations too, in the same sense that you're using the word.
      • So? Welcome to science!

        A whole lot of "laws" were formulated and used, considered correct and useful until at one day they were proven incorrect. Considering how insignificant Moore's law is when it comes to the scientific community, I could think of worse contradictions.

      • by umghhh ( 965931 )
        I dare to differ. What Moore's law is, I am not sure, but it is not in the same class of 'observations' as Newton's. For one, Newton's laws of motion seem to hold true to this day, and not because anybody planned it to be this way. Indeed I suspect that Newton's observations were true even before he took a pen to write them down. Now take a look at Moore's law: it describes a certain process closely related to business activity, and one which seems to be valid more due to human intellectual laziness than to an
        • I was merely trying to point out that laws offer no explanation. If an explanation is offered, we get a theory. Newton had laws of motion. Einstein had the theory of gravity. One attempts an explanation, the other does not. So just calling it an observation isn't as devastating a blow to a law as one might think.

          It fits as a law because it is predictive and simple.

    • Well, no, Moore's Law was never passed by any legislative authority, no.

      As for a scientific law, 'laws' in science are like version numbers in software:
      There's no agreed-upon definition whatsoever, but people still seem to attribute massive importance to them for some reason.

      If anything a 'law' is a scientific statement that dates from the 18th or 19th century, more or less.
      Hooke's law is an empirical approximation.
      The Ideal Gas law is exact, but only as a theoretical limit.
      Ohm's law is act
    • by Surt ( 22457 )

      Nope, it's a law:

      http://www.merriam-webster.com/dictionary/law [merriam-webster.com] (definition #1 even!)

      http://en.wikipedia.org/wiki/Law [wikipedia.org]

      Please people, stop making yourselves look foolish claiming Moore's Law isn't a law. This comes up every time!

    • Re: (Score:2, Funny)

      by raddan ( 519638 ) *
      The problem was that Gordon Moore's mouth was full at the time. It wasn't "Moore's Law". It was "more slaw". The rest is history.
  • I am The Law (Score:5, Informative)

    by Mushdot ( 943219 ) on Tuesday May 04, 2010 @09:40AM (#32084166) Homepage

    I didn't realise Moore's Law was the driving force behind CPU development rather than just an observation on semiconductor development. Surely we just say Moore's Law held until a certain point, then someone else's Law takes over?

    As for Phlogiston theory - it was just that, a theory which was debunked.

    • Moore's law describes the human ability to create better processes, leading to better miniaturization and more precise printing of higher-density transistors in smaller spaces. It is not a law that concerns natural processes, obviously -- and although it does hold true for now, it is bound to reach an end of life.

      Moore's law will not be debunked, but we will surely go past it sooner or later. We cannot keep shrinking transistor size forever, as molecules and atoms give us an absolute minimum size

      • by dissy ( 172727 )

        Moore's law will not be debunked, but we will surely go past it sooner or later.

        Moore's law is not a law, nor even a theory. It is an observation, nothing more.

        It can't be debunked by definition, as debunking (proving wrong) can only happen when a statement claims to prove something in the first place.

        An observation always remains true no matter (and in spite of) its predictive power.

        If I see a blue butterfly today, and tomorrow something happens to cause all blue butterflies to go extinct or something, that 100% will change any future predictions based on my observation. It does not

  • Objectivity? (Score:5, Insightful)

    by WrongSizeGlass ( 838941 ) on Tuesday May 04, 2010 @09:41AM (#32084178)
    Dr. Dally believes the only way to continue to make great strides in computing performance is to ... offload some of the work onto GPUs that his company just happens to make? [Arte Johnson] Very interesting [wikipedia.org].

    The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!" Perpetuating Moore's Law isn't an industry requirement, it's a prediction by a guy who was in the chip industry.
    • Re:Objectivity? (Score:5, Interesting)

      by Eccles ( 932 ) on Tuesday May 04, 2010 @10:00AM (#32084390) Journal

      The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!"

      As someone who still spends way too much time waiting for computers to finish tasks, I think there's still room for both. What we really want is CPUs that are lightning-fast and likely multi-parallel (and not necessarily low-power) for brief bursts of time, and low-power the rest of the time.

      My CPU load (3 GHz Core 2 Duo) is at 60% right now thanks to a build running in the background. More power, Scotty!

      • Your CPU spends the vast majority of its time waiting... or doing stuff that the operating system thinks is important and you don't...

        If your CPU is not at 100% then the lag is not due to the CPU

      • by Profound ( 50789 )

        Get faster disks till your CPU is at 100% if you want a faster build.

      • Re:Objectivity? (Score:5, Insightful)

        by PitaBred ( 632671 ) <slashdot&pitabred,dyndns,org> on Tuesday May 04, 2010 @11:10AM (#32085500) Homepage

        If your CPU is running at 60%, you need more or faster memory, and faster main storage, not a faster CPU. The CPU is being starved for data. More parallel processing would mean that your CPU would be even more underutilized.

    • Perhaps nVidia's chief scientist wrote his piece because nVidia wants its very niche CUDA/OpenCL computational offering to expand and become mainstream. There's a problem with that though.

      The computational ecosystems that surround CPUs can't work with hidden, undocumented interfaces such as nVidia is used to producing for graphics. Compilers and related tools hit the user-mode hardware directly, while operating systems fully control every last register on CPUs at supervisor level. There is no room for nV

      • by S.O.B. ( 136083 )

        I rather doubt that the company is going to change its stance on openness, so Dr. Dally's statement opens up the parallel computing arena very nicely to its traditional rival ATI, which under AMD's ownership is now a strongly committed open-source company.

        I would agree with you if ATI's supposed commitment to open source had any impact on reality.

        Has ATI's commitment to open source provided timely Catalyst drivers for the year old Fedora 12 release on my 3-month old (at the time of install) laptop? Oh rig

    • by Kjella ( 173770 )

      The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!"

      Personally I think the form factor will be the defining property, not the power. There are some things you'd rather do on your phone, some you'd rather do on your laptop, and some for which you'd rather have a full-size keyboard, mouse, screen, etc. Maybe there's room for an iPad in that, at least people think there is. Even if all of them would last 12 hours on battery you'd not like to carry a laptop 24/7 or try typing up a novel on a smart phone. I think we will simply have more gadgets, not one even if it runs o

  • by iYk6 ( 1425255 ) on Tuesday May 04, 2010 @09:42AM (#32084190)

    So, a graphics card manufacturer says that graphics cards are the future? And this is news?

  • by 91degrees ( 207121 ) on Tuesday May 04, 2010 @09:44AM (#32084216) Journal
    But the only "law" is that the number of transistors doubles in a certain time (something of a self-fulfilling prophecy these days, since this is the yardstick the chip companies work to).

    Once transistors get below a certain size, of course it will end. Parallel or serial doesn't change things. We either have more processors in the same space, more complex processors or simply smaller processors. There's no "saving" to be done.
    • by camg188 ( 932324 ) on Tuesday May 04, 2010 @10:31AM (#32084856)

      We either have more processors in the same space...

      Hence the need to embrace parallel processing. But the trend seems to be heading toward multiple low power RISC cores, not offloading processing to the video card.

      • by hitmark ( 640295 ) on Tuesday May 04, 2010 @10:47AM (#32085122) Journal

        but parallel is not a magic bullet. Unless one can chop the data worked on into independent parts that do not influence each other, or do so minimally, the task is still more or less linear and so will be done at core speed.

        the only benefit for most users is that one is more likely to be doing something while other, unrelated, tasks are done in the background. But if each task wants to do something with storage media, one is still sunk.

        • Re: (Score:3, Insightful)

          by Surt ( 22457 )

          Parallel is a decently magic bullet. The number of interesting computing tasks I've seen that cannot be partitioned into parallel tasks has been quite small. That's why 100% of the top 500 supercomputers are parallel devices.
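          As a rough illustration of what "chopping the data into independent parts" means in practice (an editorial sketch, not code from the article or from any poster), here is a reduction split across hardware threads in C++11; each worker sums a disjoint slice and the threads only interact at the final combine:

            // Sketch: an embarrassingly parallel reduction. Each thread gets a
            // disjoint chunk, so no coordination is needed until the final sum.
            #include <algorithm>
            #include <cstddef>
            #include <numeric>
            #include <thread>
            #include <vector>

            double parallel_sum(const std::vector<double>& data) {
                unsigned n = std::max(1u, std::thread::hardware_concurrency());
                std::vector<double> partial(n, 0.0);
                std::vector<std::thread> workers;
                std::size_t chunk = data.size() / n;
                for (unsigned i = 0; i < n; ++i) {
                    std::size_t begin = i * chunk;
                    std::size_t end = (i + 1 == n) ? data.size() : begin + chunk;
                    workers.emplace_back([&, begin, end, i] {
                        partial[i] = std::accumulate(data.begin() + begin,
                                                     data.begin() + end, 0.0);
                    });
                }
                for (auto& w : workers) w.join();
                return std::accumulate(partial.begin(), partial.end(), 0.0);
            }

          Tasks with this shape scale almost linearly with core count; tasks where every step depends on the previous one (the case the grandparent describes) get nothing out of extra cores.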

      • GPU offloading has appeared with GPGPU. For example, Windows 7 can perform video transcoding using GPGPU [techeta.com] on the Ion.

        No, it's not as useful as a general chip like the CPU, but with software support it can speed up some tasks considerably.

      • by Anonymous Coward

        Nine processors can't render an image of a baby in one system clock tick, sonny.

  • inevitable (Score:5, Insightful)

    by pastafazou ( 648001 ) on Tuesday May 04, 2010 @09:45AM (#32084228)
    considering that Moore's Law was based on the observation that they were able to double the number of transistors about every 20 months, it would be inevitable that at some point they reach a limiting factor. The factor seems to be the process size, which is a physical barrier. As the process size continues to decrease, the physical size of atoms is a barrier that they can't get past.
    • Re:inevitable (Score:4, Interesting)

      by Junior J. Junior III ( 192702 ) on Tuesday May 04, 2010 @10:00AM (#32084392) Homepage

      At some point, they'll realize that instead of making the die features smaller, they can make the die larger. Or three-dimensional. There are problems with both approaches, but they'll be able to continue doubling transistor count if they figure out how to do this, for a time.

      • by vlm ( 69642 )

        but they'll be able to continue doubling transistor count if they figure out how to do this, for a time.

        32 nm process is off the shelf today. Silicon lattice spacing is about 0.5 nm. A single-atom "crystal" leaves a factor of ~60 possible. Realistically, I think they're stuck at one order of magnitude.

        At best, you could increase CPU die size by two orders of magnitude before the CPU was bigger than my phone or laptop.

        Total 3 orders of magnitude. 2^10 is 1024. So, we've got, at most, 10 more doublings left.
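        To spell out the arithmetic above (an editorial gloss on the numbers already quoted): roughly one order of magnitude left from shrinking the feature size, plus about two orders from growing the die, gives

        \[ 10^{1+2} = 1000 \approx 2^{10}, \]

        which is where the "at most 10 more doublings" figure comes from.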

        • Who says we have to keep using silicon?

          • by vlm ( 69642 )

            Who says we have to keep using silicon?

            Without any numbers at all, the density of crystalline "stuff" doesn't vary by much more than an order of magnitude, and silicon's already on the light end of that scale, compared to iron, tungsten, etc.

            But I'll humour you. Let's consider humble lithium, with a Van der Waals radius around 0.2 nm. Not going to gain very much over silicon. And there are slight problems with the electrical characteristics. On the good side, you could make something that looks vaguely like a transistor out of lithium. On th

        • Plus another order of magnitude by way of decreasing prices.
        • by dissy ( 172727 )

          If we (humanity) figure out how to perform construction tasks at the nano-scale level in large scale (OK, large for the nano scale), we can surpass the physical limits you posted.

          A big *if*, of course, but we are making progress even now. Most people don't ponder 'if' anymore, only 'when'.

          Scientists feel much more comfortable stating the limits of physics, which we mostly know (and any inaccuracies will just raise the bar, not lower it)

          Only so much matter and energy can be in a given space at a time, and t

        • 10 more doublings (1024x) is a lot.

          The Core i7 965 using 7zip as a benchmark rates out at 18 billion instructions per second. That would be 18.4 trillion instructions per second after 10 more doublings.

          To put this in context, high definition 1080p30 video throws 62.2 million pixels per second. That i7 965 could use 289 instructions per pixel, while that 1024x computer could use 295936 instructions per pixel.

          Translation: The future is still a hell of a lot better.
      • by raddan ( 519638 ) *
        Speed of light. That's a rather serious obstacle, and it is already a factor in chip design. Larger dies will suffer timing problems.
  • Umm? (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Tuesday May 04, 2010 @09:52AM (#32084306) Journal
    Obviously "NVIDIA's Chief Scientist" is going to say something about the epochal importance of GPUs; but WTF?

    Moore's law, depending on the exact formulation you go with, posits either that transistor density will double roughly every two years or that density at minimum cost/transistor increases at roughly that rate.

    It is pretty much exclusively a prediction concerning IC fabrication (a business that NVIDIA isn't even in; TSMC handles all of their actual fabbing), without any reference to what those transistors are used for.

    Now, it is true that, unless parallel processing can be made to work usefully on a general basis, Moore's law will stop implying more powerful chips and just start implying cheaper ones (since, if the limits of effective parallel processing mean that you get basically no performance improvements going from X billion transistors to 2X billion transistors, Moore's law will continue; but instead of shipping faster chips each generation, vendors will just ship smaller, cheaper ones).

    In the case of servers, of course, the amount of cleverness and fundamental CS development needed to make parallelism work is substantially lower, since, if you have an outfit with 10,000 apache instances, or 5,000 VMs or something, they will always be happy to have more cores per chip, since that means more apache instances or VMs per chip, which means fewer servers (or the same number of single/dual socket servers instead of much more expensive quad/octal socket servers) even if each instance/VM uses no parallelism at all, and just sits at one core = one instance.
  • by nedlohs ( 1335013 ) on Tuesday May 04, 2010 @09:54AM (#32084322)

    Guy at company that does nothing but parallel processing says that parallel processing is the way to go.

    Moore's law has to stop at some point. It's an exponential function after all. Currently we are in the 10^9 range (2,000,000,000 or so transistors), while our lower estimates for atoms in the universe are around 10^80.

    (80 - 9) * (log(10)/log(2)) = 236.

    So clearly we are going to run into some issues with this doubling thing sometime in the next 236 doublings...

    • Re: (Score:3, Insightful)

      by bmo ( 77928 )

      Parallel processing *is* the way to go if we ever desire to solve the problem of AI.

      Human brains have a low clock speed, and each processor (neuron) is quite small, but there are a lot of them working at once.

      Just because he might be biased doesn't mean he's wrong.

      --
      BMO

      • I don't care about AI (he says ignoring that his PhD dissertation was in the fringe of god-damn-AI)...

        I actually do agree with his fundamental claim; that doesn't change the fact that you need to find someone else to say it for an article that isn't just PR.

      • Human brains have a low clock speed, and each processor (neuron) is quite small, but there are a lot of them working at once.

        The brain's trillions of 3D interconnections blow away anything that has ever been produced on 2D silicon.

        Current parallel processing efforts are hardly interconnected at all, with interprocessor communication being a huge bottleneck. In that sense, the brain is much less parallel than it seems. Individual operations take place in parallel, but they can all intercommunicate simultaneously to become a cohesive unit.

        To match the way the brain takes advantage of lots of logic units, current computer architectu

        • Current parallel processing efforts are hardly interconnected at all, with interprocessor communication being a huge bottleneck.

          yes, but this is because we demand accuracy and determinism from our silicon.

          Even in the case of individual neurons, the same inputs don't always produce the same output, or at least not within a predictable time-frame. It's sloppy/messy stuff happening in our brain. The 'trick' to AI may in fact be the sloppy/messy stuff forcing the need for high (but also sloppy/messy) redundancy.

      • by glwtta ( 532858 )
        Human brains have a low clock speed, and each processor (neuron) is quite small, but there are a lot of them working at once.

        Human brains have nothing whatsoever in common with modern computers, and making facile comparisons is counter-productive.
    • Re: (Score:2, Informative)

      by wwfarch ( 1451799 )
      Nonsense. We'll just build more universes
    • CPUs with a "feature size" of about 22nm are currently in development. A silicon atom is 110pm across, with the largest stable atoms being about 200 pm. In other words, CPU features are currently about 100-200 atoms across. Can't increase too many more times before that becomes a problem...

  • Sometimes I think that parallel programming isn't a "new challenge" but rather something that people do every day with VHDL and Verilog...

    (Insert your own musings about FPGAs and CPUs and the various tradeoffs to be made between these two extremes.)

    • by imgod2u ( 812837 )

      The relative complexity of a C++ program vs what someone can realistically do in HDL is vastly different. Try coding Office in HDL and watch as you go Wayne Brady on your computer.

    • by Kamots ( 321174 )

      Most modern CPUs and the compilers for them are simply not designed for multiple threads/processes to interact with the same data. As an exercise, try writing a lockless single-producer single-consumer queue in C or C++. If you could make the same assumption in this two-thread example that you can make in a single-thread problem, namely that the perceived order of operations is the order that they're coded, then it'd be a snap.

      But you see, once you start playing with more than one thread of execution, yo

      • by PhilHibbs ( 4537 )

        Atomicity is a whole different level of fun as well. I was lucky: at the boundary I was dealing with inherently atomic operations (well, so long as I have my alignment correct, which is not guaranteed by new), but if you're not... it's yet more architecture-specific code.

        That's also the main complication that I raise when the conversation comes around to personality uploading - the brain is a clockless system with no concept of atomicity at all. How do you take a "snapshot" of that?
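        For anyone who wants to try the exercise suggested above, here is a minimal sketch (an editorial illustration, not code from either poster) of a lockless single-producer single-consumer ring buffer using C++11 atomics. The explicit acquire/release ordering is exactly the "perceived order of operations" guarantee you have to ask for once a second thread of execution is involved:

          // Sketch of a lock-free SPSC ring buffer (capacity N-1 usable slots).
          // push() is called only by the producer thread, pop() only by the consumer.
          #include <atomic>
          #include <cstddef>

          template <typename T, std::size_t N>
          class SpscQueue {
              T buf_[N];
              std::atomic<std::size_t> head_{0};  // advanced by the consumer
              std::atomic<std::size_t> tail_{0};  // advanced by the producer
          public:
              bool push(const T& v) {
                  std::size_t t = tail_.load(std::memory_order_relaxed);
                  std::size_t next = (t + 1) % N;
                  if (next == head_.load(std::memory_order_acquire))
                      return false;                              // queue full
                  buf_[t] = v;                                   // write the slot first...
                  tail_.store(next, std::memory_order_release);  // ...then publish it
                  return true;
              }
              bool pop(T& out) {
                  std::size_t h = head_.load(std::memory_order_relaxed);
                  if (h == tail_.load(std::memory_order_acquire))
                      return false;                              // queue empty
                  out = buf_[h];                                 // safe: release/acquire pair
                  head_.store((h + 1) % N, std::memory_order_release);
                  return true;
              }
          };

        Without the release/acquire pair, the compiler and CPU are free to reorder the slot write past the index update, which is exactly the "perceived order" trap described above.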

  • Seriously. The headline for this should read "Moore's Law will Die Without GPUs, Says GPU Maker ."

    Or, to put it another way, the GPU maker keeps invoking Moore's Law, but I do not think it means what he thinks it means. You can't double semiconductor density by increasing the number of chips involved.

  • by dpbsmith ( 263124 ) on Tuesday May 04, 2010 @10:06AM (#32084462) Homepage

    I'm probably being overly pedantic about this, but of course the word "law" in "Moore's Law" is almost tongue-in-cheek. There's no comparison between a simple observation that some trend or another is exponential--most trends are over a limited period of time--and a physical "law." Moore is not the first person to plot an economic trend on semilog paper.

    There isn't even any particular basis for calling Moore's Law anything more than an observation. New technologies will not automatically come into being in order to fulfill it. Perhaps you can call it an economic law--people will not bother to go through the disruption of buying a new computer unless it is 30% faster than the previous one, therefore successive product introductions will always be 30% faster, or something like that.

    In contrast, something like "Conway's Law"--"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"--may not be in the same category as Kepler's Laws, but it is more than an observation--it derives from an understanding of how people work in organizations.

    Moore's Law is barely in the same category as Bode's Law, which says that "the radius of the orbit of planet #N is 0.4 + 0.3 * 2^(N-2) astronomical units, if you call the asteroid belt a planet, pretend that 2^-1 is 0, and, of course, forget Pluto, which we now do anyway."

  • by IBBoard ( 1128019 ) on Tuesday May 04, 2010 @10:07AM (#32084474) Homepage

    Moore's Law isn't exactly "a law". It isn't like "the law of gravity" where it is a certain thing that can't be ignored*. It's more "Moore's Observation" or "Moore's General Suggestion" or "Moore's Prediction". Any of those are only fit for a finite time and are bound to end.

    * Someone's bound to point out some weird branch of Physics that breaks whatever law I pick or says it is wrong, but hopefully gravity is quite safe!

  • It would mean that development cycles slow down, algorithmics finally win over brute force and that software quality would have a chance to improve (after going downhill for a long time).

    GPUs as CPUs? Ridiculous! Practically nobody can program them and very few problems benefit from them. This sounds more like Nvidia desperately trying to market their (now substandard) products.

    • by Surt ( 22457 )

      Algorithms won over brute force a long time ago. We're using brute force on the good algorithms!

      Seriously, there are very few big CPU tasks that have not had a LOT of smart people look at the algorithms. The idea that we'll suddenly take a big leap in algorithmic efficiency when Moore's law ends is laughable.

  • What the hell does Moore's Law have to do with parallel computing?

    It is concerned with the number of transistors on a single chip. Moore's Law has been dead in practical terms for a while (smaller transistors are too expensive / require too much power and cooling), which is the reason parallel computing is becoming essential in the first place.

    TFA fails computer science history forever.

  • Look at any modern GPU and it's trying to shoehorn general-purpose computing functionality into an architecture designed as a graphics pipeline. The likes of CUDA, OpenCL and DirectCompute may be a useful way to tap extra functionality, but IMO it's still a hack. The CPU has to load up the GPU with a program and execute it almost as if it's a scene using shaders that actually crunch numbers.

    Aside from being a bit of a hack, there are 3 competing APIs and some of them are tied to certain combinations of operatin

    • The traditional method of CPU usage is a hack by that standard as well. The BIOS loads the CPU with information; CUDA just adds another layer.

      Which leads me to wonder: do we really need multi-core CPUs? Perhaps we just need a CPU that can handle the throughput for running the OS and its most basic functions, and actually pass off all other processes to dedicated components.

  • by ajlitt ( 19055 ) on Tuesday May 04, 2010 @10:29AM (#32084826)

    Albert P. Carey, CEO of Frito-Lay warns consumers that the continuation of the Cheddar-Dorito law and the survival of humanity ultimately relies on zesty corn chips.

  • if story.contains("Moore's") and story.contains("die", "dead", "end in"):
            story.comment("Moore's Law is an observation not a law! and.... IT WILL NEVER DIE!!!")

  • Moore's Law was an observation of a trend made in 1965 that transistor counts on an integrated circuit had doubled and redoubled over a short period of time, and would continue to do so for at least another ten years (the fact that it has done so for half a century is possibly more than Moore could have hoped for). It was based on observed data that was beyond doubt. Phlogiston Theory was not a theory in the primary definition of the word (from the Greek theoria, meaning observed, the analysis of a set of fa
  • Seriously, in the last few years Intel has produced some good CPUs with good power efficiency. Contrast that with Nvidia, where every generation you need more and more power for their cards, and the latest generation puts out something like 250W of heat. Years ago we used a Compaq all-in-one cluster server at work as a space heater; the way Nvidia is going, all you need to do is buy one of their cards and you can heat your house in the winter and not buy heating oil.

    • by geekoid ( 135745 )

      That's video cards in general. While a CPU is 'fast enough', a video game wants real-time physics and realistic graphics, and that usually means more power.

  • Moore's Law works until the required elements are smaller than quantum objects. Actually, in our current state of technology and anything practical on the horizon, it works until the required elements are smaller than single atoms. Then there is no way to make stuff faster...

    Sort of.

    While GPUs might 'save Moore's Law', actually they just add other CPUs to each system. So more cores = more performance, and Moore's Law is still relevant.

    Now, to change the entire computing paradigm to actually take advantag

  • though I rarely see the usual mistakes being made by the slashdot community.

    I tried explaining to a friend of mine why it was that, in 2004, his standard desktop configuration had a CPU clocking in at 2 GHz, while the standard configuration of the machines available last Christmas had CPUs clocking in at 2.4 GHz (in the same price range). He seemed to think it would be in the 8-10 GHz range by now.

    • by geekoid ( 135745 )

      His assumption would be true based on historic computer performance gains.

      On /., Moore's law is often misunderstood. This 'scientist' doesn't use it correctly in the article, either.

      And what the hell does a chief scientist at Nvidia do? I'd like to see some of his published experiments and data.

  • for not knowing what Moore's law is.

    "ntel’s co-founder Gordon Moore predicted that the number of transistors on a processor would double every year, and later revised this to every 18 months.
    well, thats half of the rule.

  • Sure, you can add more transistors. And you can use those transistors to add more cores. But how useful will they be? That's what Amdahl's Law [wikipedia.org] tells you. And Amdahl's Law is harder to break than Moore's.

    GPUs only add one more dimension to Amdahl's Law: what fraction of the parallelizable portion of a problem is amenable to SIMD processing on a GPU, as opposed to MIMD processing on a standard multi-core processor.
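    For reference, the standard statement of Amdahl's Law (the textbook formula, not something from the comment above): a task whose parallelizable fraction is p, run on n processors, speeds up by at most

    \[ S(n) = \frac{1}{(1 - p) + p/n}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1 - p}. \]

    So a workload that is 90% parallelizable tops out at a 10x speedup no matter how many CPU or GPU cores you throw at it.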

  • by JumpDrive ( 1437895 ) on Tuesday May 04, 2010 @12:43PM (#32087120)
    The CPU industry has been developing quad cores and releasing 8 cores. But a lot of my software can't take advantage of this.
    We just bought the latest version of software from one company and found that it ran a lot slower than the earlier version. I happened to stick it on a VM with only one core and it worked a lot faster.
    We talked about MATLAB yesterday not being able to do 64 bit integers, big deal. I was told that their Neural Network package doesn't have parallel processing capabilities. I was like you have got to be freaking kidding me. A $1000 NN package that doesn't support parallel processing.
