Opportunities From the Twilight of Moore's Law

saccade.com writes "Andrew 'bunnie' Huang just posted an excellent essay, Why the Best Days of Open Hardware are Yet to Come. He shows how the gradually slowing pace of semiconductor density improvements may actually create many new opportunities for smaller-scale innovators and entrepreneurs. It's based on a talk presented at the 2011 Open Hardware Summit. Are we entering an age of heirloom laptops and artisan engineering?"


  • As long as this technology is patentable, corporations will not allow it.
    • by jhoegl ( 638955 )
      To be fair, patents do help companies recoup their costs, but the length of patents, especially in the tech industry, is not realistic.

      Perhaps what you are talking about are "method patents", or software patents. And yes, those are the worst technical innovation inhibitors ever produced by the United States of Corporate America.
    • by Surt ( 22457 )

      So a couple of decades then.

  • The market won't tolerate it. Look at the 1980s: everyone and their brother was making computers, often using the same core parts but totally incompatible with each other. When the IBM clones started to hit the market, all those makers vanished. It was not because the IBM format was better; it was because everyone got even footing on a platform and could be confident that an investment in hardware wouldn't be worthless six weeks later.

    We still see this today, i.e. ohh Windows 8, oh desktop window

    • by 0123456 ( 636235 )

      when the IBM clones started to hit the market, all those makers vanished.

      But that took years. For quite some time the ST, Amiga and the like were considered to be the home computers to own while PCs were primarily for business use; only when Windows 3.0 came along did the PC really take off for home users because it then offered most of the capabilities of those other computers at a lower price.

      • For quite some time the ST, Amiga and the like were considered to be the home computers to own

        Waiting for some US user to come along in 3, 2, 1..... and explain that you're totally wrong because *everyone* knows that the Amiga was a total flop. Where "everyone" is defined as US users who don't know or care that their market *wasn't* synonymous with the situation worldwide and that the Amiga was massively popular in Europe. Then again...

        PCs were primarily for business use; only when Windows 3.0 came along did the PC really take off for home users because it then offered most of the capabilities of those other computers at a lower price.

        You're kind of guilty of the reverse yourself here :-) My understanding is that in the US, the home market went straight from the early-80s 8-bit computers (mainly th

        • The Apple II was pretty popular as well, especially in education, and the Mac wasn't too shabby in terms of market share by the beginning of the '90s either. The '90s really brought the MS-Windows dominance, pushing it much farther ahead of other OSes.
          • The Apple II was pretty popular as well

            I understand this was the case in the US. Again, in the UK however, not so much- though my Dad *did* have one at work in the early 80s and I understand some businesses used them before the PC became the de facto standard for business (if not home and hobbyist) use.

            Probably didn't help that the PAL-compatible (European TV system) versions of the Apple II were apparently incapable of colour because the original US Apples' colour was generated using idiosyncrasies of the US TV system that didn't work with the

        • The problem was that Commodore Europe knew how to market the Amiga; Commodore USA couldn't have marketed water to someone in the desert. An example is when they announced the REU for the 128: marketing never asked the engineers if a RAM expansion for the 128 was even possible.

          Windows 95 and the internet did more to market computers to home users than Win3.11 ever did.
        • Yes, GEM was also included with my Amstrad PC-1512 here in Italy.
          The PC also came with MS-DOS 3.20.

          Both GEM and Windows were completely useless at the time, as there were no useful apps (=games and programming tools).

          The Amstrad 1512 also had a special graphics mode, 640x200x16 colors that was not compatible with any of the graphic standards of the time [Hercules, TGA (Tandy), CGA, EGA]. That special graphics mode was supported only by GEM. But it was pointless, since there were no apps. GEM Paint was the o

          • The Amstrad 1512 also had a special graphics mode, 640x200x16 colors that was not compatible with any of the graphic standards of the time [Hercules, TGA (Tandy), CGA, EGA].

            IIRC according to my Dad, the Amstrads also had text mode(s) that weren't quite standard and caused some programs to crash due to the lack of a bottom line (or something like that). He considered them "almost" compatibles in that there were a few areas like that where they weren't *quite* standard that could cause problems. But I don't get the impression it was a major deal.

            At the time, I looked up at the Amstrad 1640 as the "perfect" computer, with 640K RAM and an EGA adapter (this time, 16 colors on screen _for real!_).

        I remember deciding I wanted a PC at some point in the late 80s, but never could have afforded one then. No great loss- unless yo

      • I don't see why that ever happened anyway. They were still making revisions to 8088 machines in 1985 and slowly moving towards the 80286 with a few models. The clone makers were a bit faster off the mark, but they were still making PC-BIOS/DOS machines.

        I fail to see how a single-tasking, segmented-memory, 1-meg-max machine could be considered even remotely "professional", when even a lowly CoCo could run a multitasking system (OS-9. The non-Apple one). The IBM-PC was completely and totally a member of t

    • by Anonymous Coward

      I see you don't understand the concept the article is presenting.

      Their point is that as the potential gains from developing a new architecture approach 0, the incentive to upgrade before the physical failure of the hardware will also approach 0.

      Thus in a theoretical future where computers have been up against the performance wall for half a decade, manufacturers will have to compete on build quality and expected longevity of their products. Taken to its logical conclusion, if computing power were to completely f

    • by jbolden ( 176878 )

      Actually the grey box market was mainly in the early 1990s. The 1980s clones were much rarer; Compaq was just winning their lawsuits to allow an imitation of the IBM BIOS.

      • by sjames ( 1099 )

        There were plenty in the '80s. The Computer Shopper was packed full of them. By the '90s we quit even worrying about how compatible they were (they were all 100%) and PC started referring to the class of machine rather than meaning specifically the IBM (where the others were called clones).

        Oddly, Compaq was amongst the last to iron out the incompatibilities. TI never did, they just gave up on clones.

        • by jbolden ( 176878 )

          Compaq was the first to have a BIOS. I actually didn't realize the date, but now looking it up, 1982 was when Compaq successfully cloned the BIOS. Phoenix came out in 1988, so that's the date just about anyone could get a BIOS. I think Phoenix is the fair date for when there were clones aplenty.

          • by sjames ( 1099 )

            Phoenix was big and marked an explosion in the clone market, but the (in)famous Packard Bell was 2 years earlier. Before that, we had a variety of odd brands with one-off BIOS around. I don't recall when AT&T put out their AT clone.

  • Was a wood-and-brass-encased laptop with exquisite scrollwork around the keyboard and webcam, inherited by an archeologist who carries it around for data analysis and note taking.

    • by Fallon ( 33975 )
      I believe you are talking about Datamancer's steampunk laptop. http://www.datamancer.net/steampunklaptop/steampunklaptop.htm
      • Thank you for that link again. I'd seen this when it was first completed and loved the effort and detail that went into it.

  • My grandpappy made his own CPU's, you lazy whippersnappers! And if we're going to get back on top, American kids gonna start having to learn how to again. Now git' your lazy ass in that clean room and get to work!

    • by geekoid ( 135745 )

      In my day, we just shouted zeros at the window.

  • by drolli ( 522659 ) on Thursday September 22, 2011 @01:56PM (#37483106) Journal

    and I seriously don't think that Moore's law will end soon. Bumps in both directions, extending over some time, are nothing unusual. New technologies will rise, and Metal-Oxide-Semiconductor processes won't be dominating forever.

    • Re:I am a physicist (Score:5, Interesting)

      by Guspaz ( 556486 ) on Thursday September 22, 2011 @05:03PM (#37485352)

      There are theoretical limitations to how small things can get, and how much work can be done per unit of space, but we're nowhere near that yet.

      The author claims that semiconductor density improvements have been slowing over the past few years, but that's not true at all. One need only look at the past schedule of Intel's die shrinks, or their transistor counts, to realize that we're still going ahead at full steam. The pace of reductions has held pretty much constant to Moore's law for at least the past decade, and Intel's roadmaps seem to show that continuing for at least another two die shrinks (each of which will double density).

      It's kind of amazing, when you think of it. Comparing the best of 2002 to 2012, you get from 90nm to 22nm. In just one decade, that is a 16.7x increase in density, and that doesn't even take architectural improvements into account.
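
      (A quick sanity check of that figure, as a rough Python sketch; the 90nm and 22nm node sizes are the ones quoted above, and it assumes density scales with the inverse square of the feature size:)

          # density scales roughly as 1 / (feature size)^2
          old_node_nm = 90.0   # best of 2002, per the figures above
          new_node_nm = 22.0   # best of 2012, per the figures above
          print((old_node_nm / new_node_nm) ** 2)   # ~16.7x density increase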

      • What are you smoking? Theoretical limitations aren't too far away. There are lots of games that we can yet play to increase performance, but miles of runway we don't have.

        And yes, I remember a headline "Can we break the 1 micron barrier?" and I marvel that we're at 22nm lithography and shrinking.


        • by guruevi ( 827432 )

          The 'theoretical limit' has always been 10 years away. We know we are going to get into some issues (such as quantum physics and molecular issues) but they have said the same about SiO2 since the invention of the microprocessor in every magazine I've read (it's too big, it can't be purified enough, the electrical current is going to be too low, we need to start supercooling, ...). There is always somebody fixing the issues well in advance in the surrounding fields of chemistry, physics and electronics.

          • The 'theoretical limit' has always been 10 years away.

            That doesn't mean the theoretical limit will always stay 10 years away. 20 years ago, the doctors said my father would be dead within a year. They were wrong, and he lived for another 20 years. However, if the doctors keep saying "he'll be dead within a year," then there will come a year when they'll be right. The fact that they were wrong 20 years ago does not mean that my father will live forever.

            • by guruevi ( 827432 )

              Humans deteriorate; a single human doesn't evolve, just like a single non-biological chip doesn't evolve into a faster or better thing. The human race does evolve. Chip-baking processes don't (usually) deteriorate. It's like saying that the human race is no longer evolving because we're good enough as we are, or we haven't seen any major changes in the last 200 years, or we can't see what part of us will evolve next. It's a fact of nature that we're going to continue evolving, but nobody knows how (yet).

              We'll e

              • My point wasn't about the particulars of human biology. My point is that when someone makes a prediction that something will happen in a set timeframe, and it doesn't happen in that timeframe, that doesn't suddenly mean that it will never happen.

                Back in 2005, I remember watching a news show where an economist was complaining about the housing bubble, explaining that prices would eventually have to come down. They laughed at him. He explained his thinking, and I remember someone saying something to the

        • by mcrbids ( 148650 )

          Smoke? Not at all. And I've been hearing about the "theoretical limitations" for a good 20 years now. Yes, the 386 processor was once touted as the last Moore's law processor.

      • by AmiMoJo ( 196126 )

        One need only look at the past schedule of Intel's die shrinks, or their transistor counts, to realize that we're still going ahead at full steam.

        To some extent yes, but everyone is now focused on making designs more efficient rather than throwing brute force at the problem. For example, AMD's 6000 series graphics cards actually have fewer stream processors than the 5000 series, but they are better utilized so performance is higher. Similarly, in the CPU world AMD has been reducing the number of integer and floating point units on its latest generation in favour of having more instruction cores, and Intel is trying to get into the embedded market currently d

        • by Guspaz ( 556486 )

          But this (the drive for power efficiency over performance) is largely representative of market forces, not technology. Smartphones, for example, have a fixed power budget, unlike a desktop, and people want more and more powerful smartphones inside that budget. Before smartphones and tablets became so popular, there wasn't nearly as much pressure on the SoC market for this stuff.

    • by Surt ( 22457 )

      How many atoms currently make up a transistor? How many atoms do you think is the smallest possible number to make up a transistor?
      How many years of Moore's law does that leave?

      (Hint: the answer to the last question is not a large number).
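
      (A very rough sketch of that arithmetic in Python; the silicon lattice constant is a known value, but the assumed ~5nm practical floor and the 18-month doubling period are illustrative guesses, not established limits:)

          import math

          lattice_nm = 0.543   # silicon lattice constant
          feature_nm = 22.0    # current feature size discussed in this thread
          floor_nm = 5.0       # assumed practical floor -- purely illustrative
          print(feature_nm / lattice_nm)                      # ~40 unit cells across a 22nm feature
          doublings_left = math.log2((feature_nm / floor_nm) ** 2)
          print(doublings_left)                               # ~4.3 area doublings left
          print(doublings_left * 1.5)                         # ~6.4 years at one doubling per 18 months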

  • Good for them, but bad for everyone else. Users lose the continual improvements we're used to, and manufacturers and retailers have to deal with people making their kit last years longer since they have fewer reasons to upgrade. Probably very good for Google and other companies who "rent" computers.

    • by sjames ( 1099 )

      It's actually fairly bad for rental. When you can buy a usable low-end system for $300 or so, rental because you can't afford to buy isn't going to happen. That just leaves rental because you expect the thing to be obsolete in short order and want a rapid upgrade cycle. That will actually be less likely if the tech hits the wall.

  • That's what the development graph of open stuff is like.

    You can expect supercomputers made of open-source chips in 10-15 years.
  • Moore (Score:5, Funny)

    by Anonymous Coward on Thursday September 22, 2011 @02:06PM (#37483206)

    The number of people predicting the end of Moore's Law doubles every two years.

  • It seems computers have been stuck at 3GHz (plus or minus a bit) for a while.

    Sure, we've added more cores and the like, but it's interesting to see that plateau at the end of the curve.

    I'm sure some things actually are faster, but in terms of what's available to consumers, it hasn't seemed to get all that much faster the last few years.

    • by BZ ( 40346 ) on Thursday September 22, 2011 @02:14PM (#37483242)

      A 3GHz i7 is a _lot_ faster than a 3GHz P4. Have you tried actually comparing them?

      Heck, a 2GHz Core 2 Duo core was about comparable in raw speed to a 3.6GHz P4 core last I measured. And an i7 is a lot faster than a Core 2 Duo.

      More to the point, Moore's law is about transistor count, not clock speed. Transistor count continues to increase just fine; scaling clock speed just got hard because of power issues and such.

      • scaling clock speed just got hard because of power issues and such.

        Nothing that can't be fixed with a little liquid helium. Now give me my 8GHz processor.

        • The new i5s and i7s are about 7 times faster at AES encryption with the new instruction set than an equivalent processor without it. So try scaling that up to 21GHz if you want to compete with an i5 2600k. Which do you suppose is easier: adding that instruction set or dealing with the TDP of a 21GHz CPU?

      • No, but if I look at consumer laptops, it's more or less the same quad-core AMD CPU as I bought three years ago. There actually was a good solid decade where the CPU speed was growing at super crazy speeds.

        In a lot of ways, I don't miss the whole "oh, crap, the machine I bought six months ago is half the speed of the one I can buy now" ... there was nothing more annoying than spending $2K on a box only to have it become obsolete right away.

        Though, I am hoping that next time I get a new PC I can go beyond

        • Re: (Score:3, Insightful)

          by Desler ( 1608317 )

          For the millionth time, Moore's law is not about processor speed. And no, a 3-year-old CPU is not the same as today's. Take, for example, the Sandy Bridge Intel CPUs. They blow away the older Core 2 Quads in performance, and at lower clock speeds.

            Moore's law is about transistors, and no, the 3-year-old CPU is not "the same as today's".
            But the experienced performance gains now are much slower, and much more workload-dependent, than back in the day.

            And back in the day, more MHz _was_ the (main) way to increase processor performance.
            You could feel the incredible yet predictable pace at which that happened, and suddenly you had to upgrade, since the 3-year-old CPU could not keep up with newer software anymore. This is less the case today.

            • by geekoid ( 135745 )

              components, not just transistors. It's an important distinction.

              In 2006, the dual-core Itanium 2 had 1,700,000,000 components;
              in 2011 (over two doublings later) the 10-core Xeon Westmere-EX has 2,600,000,000.

              Shouldn't that be > 6,800,000,000 components?

              Of course, we are living in a golden age, whining about who has the sweetest wine.
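
              (Checking that against the figures quoted above, as a small Python sketch; the 18-month doubling period is the usual rule-of-thumb assumption:)

                  itanium_2006 = 1_700_000_000          # components, per the figure above
                  westmere_2011 = 2_600_000_000
                  doublings = (2011 - 2006) / 1.5       # ~3.3 doublings at 18 months each
                  print(itanium_2006 * 2 ** doublings)  # ~1.7e10 expected; >6.8e9 after just two doublings
                  print(westmere_2011 / itanium_2006)   # actual ratio: ~1.5x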

            • by BillX ( 307153 )

              and suddenly you had to upgrade, since the 3-year-old CPU could not keep up with newer software anymore. This is less the case today.

              Don't know about that; my work PC was just 'upgraded' from Windows XP to 7 last week. Aye caramba. It used to be that I would start some coffee brewing and do work on the computer while waiting for it to finish. Now I start a process on the computer and go make coffee while I wait for it to finish.

        • by jandrese ( 485 )
          It doesn't help that licensing disputes and general cattiness in the industry have led to weird situations in the past couple of years. For instance, I bought my wife a white MacBook with a Core2Duo clocked at 2.4 Ghz four years ago. A few months ago we needed a second laptop, so I ended up buying a 13" Macbook Pro that had pretty much the same Core2Duo at 2.4 Ghz. Is this because we hit a wall with Moore's Law? No, it's thanks to a stupid licensing dispute between nVidia and Intel that caused Apple to
          • by emt377 ( 610337 )

            A few months ago we needed a second laptop, so I ended up buying a 13" Macbook Pro that had pretty much the same Core2Duo at 2.4 Ghz. Is this because we hit a wall with Moore's Law?

            No, it's because you bought it used or refurbed. The 13" MBP has been all i5/i7 for well over a year now.

          • Your gaming machine is still capable of running the latest games because most games are being developed to support cross-platform with the consoles, which haven't upgraded since around the time you got your computer. Expect a huge leap in system requirements when the next generation of consoles finally make it out the door (guessing we'll see some "surprises" at next year's E3).

        • by sjames ( 1099 )

          We're there. You can get a decent 6 core machine dirt cheap.

        • Though, I am hoping that next time I get a new PC I can go beyond the 4 cores/8GB of RAM I have now and not be outrageously expensive. That would rock.

          You can. It's called Llano, by AMD, and Asus is offering a mATX board that supports 16GB of memory. You do need to spend some money on the 4GB DDR3 sticks, but the price isn't too bad, as this shows: http://www.newegg.com/Product/Product.aspx?Item=N82E16820134927 [newegg.com] - $25 a stick isn't bad at all for 4GB of memory.

          I've been planning a new build for year end and it's going to use the Asus board and the Llano Quad Core with APU. Cost of the hardware w/o the Wacom Cintiq 12WX and Corel Painter 12 is right around a grand (

      • by epine ( 68316 )

        The whole point of Netburst was to achieve premature ejaculation in the frequency domain. This is what happens when people evaluate performance on a proxy indicator (given a choice between simple and correct, what does your average careerist commuter choose?)

        Apparently it was a pretty good ruse, because ten years later, people are busy citing the frequency plateau as if it means a great deal more than it does. It didn't hurt Enron's raping of California that Intel took out a few hundred megawatts of gener

      • by geekoid ( 135745 )

        Yes, but how many transistors per sq. cm are there? And is that double*? And if it is, did they cost the same?
        That is Moore's law. Ironically, it's not the most genius thing about his paper; the fact that he realized very complex systems would be made from smaller pieces is the real forward thinking in the paper.

        *assuming they were fabricated 18 months apart; otherwise adjust based on the 18-month period

        • by BZ ( 40346 )

          P4 used a 180nm process in 2000.

          Core 2 Duo used a 65nm process in 2006. That's about 7.7x the density, which corresponds to doubling roughly every 2 years in there.

          Core i7 in 2008 is a 45nm process, so about 2x the density again. For this one, http://en.wikipedia.org/wiki/Transistor_count [wikipedia.org] has numbers: 731e6 transistors on 263 mm^2.

          Core i7 in 2010 is 1170e6 transistors on 240mm^2, with a 32nm process. So about 1.75x the transistor/mm^2 of the 2008 i7.

          The price might rise along with those, though. Hard to say.

          The doubling
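
          (A rough check of those ratios in Python, using the node sizes and transistor counts quoted above and assuming density scales as the inverse square of the feature size:)

              nodes_nm = {2000: 180, 2006: 65, 2008: 45, 2010: 32}   # per the figures above
              print((nodes_nm[2000] / nodes_nm[2006]) ** 2)   # ~7.7x density, P4 to Core 2
              print((nodes_nm[2006] / nodes_nm[2008]) ** 2)   # ~2.1x, Core 2 to the 45nm i7
              # measured transistors per mm^2 for the two i7s quoted above:
              print((1170e6 / 240) / (731e6 / 263))           # ~1.75x, 2008 i7 to 2010 i7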

    • by LordLimecat ( 1103839 ) on Thursday September 22, 2011 @02:40PM (#37483496)

      I'm sure some things actually are faster, but in terms of what's available to consumers, it hasn't seemed to get all that much faster the last few years.

      Here's a reality check for you.

      I'm speccing out a machine for a pfSense firewall; I've settled on a low-power, 20-watt Xeon E3 1220L. At about 1/5th the power consumption of a Pentium 4 2.8GHz (and at about 75% of the clock rate), it can handle about 13.5Gbit/s of AES encryption, compared to the Pentium's 500Mbps.

      So we're talking a 36-fold improvement in processor performance in the area of encryption, along with a 5-fold reduction in power requirements; not to mention the improved memory bandwidth and whatnot.
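
      (Working those figures through, as a rough Python sketch using the throughput, clock, and power numbers above:)

          xeon_gbps, p4_gbps = 13.5, 0.5        # AES throughput figures quoted above
          clock_ratio, power_ratio = 0.75, 0.2  # ~75% of the P4's clock, ~1/5th the power
          raw = xeon_gbps / p4_gbps
          print(raw)                 # ~27x raw throughput
          print(raw / clock_ratio)   # ~36x per clock
          print(raw / power_ratio)   # ~135x per watt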

      Processors continue to improve at a rapid pace; Intel is supposed to be releasing Ivy Bridge soon, which should bring another ~15% performance increase, and they just released Sandy Bridge, which mostly eliminates the need for a dedicated GPU on laptops and for about 80% of users.

      So when people bemoan the rate of computer improvement, despite the MASSIVE leaps in performance, reductions in power usage, and price drops (a Core i3 @ $100? A Phenom X3 @ $60? Yes please), it boggles my mind. 5 years ago a "modern", decent gaming rig could be had for about $800. Prior to that, getting a fabled 2GB of RAM was like $200 on its own. These days, you can have a decent gaming rig for about $500, with none of your parts costing substantially more than $60. For goodness sake, RAM is down to about $6 per GB.

      Heck, I just priced out and ordered 2 laptops for 2 different clients-- they come with i3s, 4GB of RAM, a 4hr battery life, and very high build quality, all for under $500. Where the heck could you have gotten a laptop anywhere close to that value 3 years ago? A celeron? A crappy AMD mobile?

      Seriously, come back to reality please.

      • I remember when the cost of memory dropped below 1c per bit - $10,000 per megabyte. (IIRC it was about 1972, close to the time when RAM began to take over from core.)

        Now get off my lawn! :D

    At 3 GHz, light travels 10 cm in one clock cycle. Faster speeds would make it hard to send a request for data from one end of the chip to the other and get a response during the execution of a machine instruction. Having to insert no-operation cycles while waiting for data to arrive would negate the usefulness of faster clock rates.
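
    (The arithmetic behind that, as a rough Python sketch:)

        c_m_per_s = 299_792_458.0
        clock_hz = 3e9
        print(c_m_per_s / clock_hz)   # ~0.1 m: light covers roughly 10 cm per clock cycle at 3 GHz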

    • by Surt ( 22457 )

      Moore's law is about the number of transistors, not their clock rate.

  • In the beginning, hardware was not "open." Any antique radio collector will tell you that the schematics of 1920s radios were some of the best-kept secrets the manufacturers had (at the time), since the parts used in their products were readily available. Giving the user a schematic was viewed as a license to compete, and there were hundreds, if not thousands, of radio receiver manufacturers -- many of whom got started by reverse-engineering an existing design.

    It was only in the 1930s that schematics bega

  • by Kjella ( 173770 ) on Thursday September 22, 2011 @02:15PM (#37483252) Homepage

    By the time Moore's law slows down, we'll also have systems on a chip. Replaceable parts? We've moved the other way, from the days you could solder chips yourself to today. Extension cards are almost gone, more and more of the north/south bridge and motherboard chips is moving into the CPU, now even the graphics card is moving into the CPU for many.

    His argument sounds to me like the same logic as arguing that once computers stop getting faster, we'll have to make applications faster, so we'll see a return of assembly language and hand optimization. Somehow, I don't think that's very likely. I'd make a fair bet that custom hardware is even more of a niche in 20-30 years than it is now.

    • "Extension cards are almost gone, more and more of the north/south bridge and motherboard chips is moving into the CPU, now even the graphics card is moving into the CPU for many."

      You're missing the point, unfortunately; the laws of physics demand that expansion cards are here to stay. The memory bandwidth you can get from a dedicated card exceeds that of today's modern CPUs. Specialization is here to stay. The same argument was made by Epic Games over 10 years ago when they thought everything would b

      • by geekoid ( 135745 )

        Right now, I can build a mid-range gaming machine with no expansion cards. HD graphics, 7.1 Dolby.

        So while a card can be faster, it's not necessarily NEEDED.

        If you want to play Quake 10 and you can get HD at 100FPS without adding a component, why would you? And many other people would do the same, for the same reason.

          All hardware advances require killer apps to have meaning; it's a chicken-and-egg scenario. As hardware gets more powerful, someone will find the killer app. No one knew we needed Google until it existed; the same thing will happen as hardware becomes more powerful.

          Every computing era has had periods of centralization and decentralization, and the trend will continue on for the indefinite future. When centralization reaches its limits, decentralization occurs (think the mainframes vs the internet). The s

  • Unlikely (Score:4, Interesting)

    by Nethemas the Great ( 909900 ) on Thursday September 22, 2011 @02:20PM (#37483278)

    Firstly, it's highly unlikely that Moore's law will be retiring any time soon. All we are seeing is a slowdown in the advancement of shrinking the manufacturing process. That doesn't say anything about performance improvements by other means. We are continually seeing advancement in performance per watt, which is enabling CPUs to spread their dies not only "out"; even now we're seeing the prospect of "up" with promising research in layering techniques. Beyond that there are carbon-based rather than silicon-based materials coming online that promise to further improve upon the performance-per-watt angle. We're even starting to see glimmers of hope in the quantum computing arena, which would be game-changing.

    With respect to small companies being able to enter the market and compete with the "big guys", I seriously doubt it. The first and obvious reason is that cost is a barrier to entry. The equipment isn't cheap, and contending in the patent arena is worse. The most you'll ever see here is university-level research being sold off to the big guys.

    • by k8to ( 9046 )

      Well... Moore's law is specifically about transistor density, specifically the rate of its increase over time. So a slowdown in that increase is in fact an end to the law, in that the "law" predicted a specific rate that would no longer be met.

      • by NoSig ( 1919688 )
        Moore's law is not about density, it is about the price per transistor. Doesn't matter how you bring the price down, the law will still hold if it does indeed go down as the law predicts. Density has been the most important component of that so far, that you are right on, but it doesn't have to continue to be true. If you can stack slices of transistors on top of each other, and keep making that process cheaper, that will help with Moore's law too.
        • by k8to ( 9046 )

          Well, you're wrong.

          ftp://download.intel.com/museum/Moores_Law/Articles-Press_Releases/Gordon_Moore_1965_Article.pdf [intel.com]

          There's some discussion of what the trend means for prices, but the core observation is clearly about the density.

          • by NoSig ( 1919688 )
            Thanks for the reference to the original article, which I had not read before and now have. The article is in two parts. The first part is an observation: as you increase the number of transistors on a single component, the price per transistor follows a U-shape. If you graph the sweet spots of these U-shapes, you get an exponential decrease in price per transistor. Moore speculates that he sees no reason that the (exponential) trend shown on the graph on the second page won't continue for at least 10 years. Tha
            • by k8to ( 9046 )

              You're still wrong.

              He's *interested* in the predicted result that they will follow a flat economic behavior while increasing in density, but the observation and prediction is that they will increase in density.

              • by NoSig ( 1919688 )
                The prediction is "will continue for at least 10 years", and it is made before density enters the picture. Consider the difference between law and mechanism for a while longer.
        • by geekoid ( 135745 )

          NO IT IS NOT. It's appalling how ignorant so many posters are about Moore's law on a tech site. Simply appalling.

          Double the number of components per sq. cm at the same cost.

        • by Surt ( 22457 )

          Layering buys you a 15 year extension on Moore's law if you're lucky. After that it gets really hard to layer fast enough to produce the finished chips within 18 months.

          • by NoSig ( 1919688 )
            15 more years of Moore's law sounds good. Let's take it and reevaluate in 15 years :) Though, if the doubling is every 18 months, and we continue for 15 years, that is only 10 doublings, which is 1024 layers. I don't know how much time it takes us to print the surface of a single layer chip today (?), but if it takes an hour then that's only 42 days with no improvements. Also, if we can find a way to make the layers separately and stack them at a later stage, they can be made in parallel. I suspect that hea
            • by Surt ( 22457 )

              It takes a few hours per layer right now. In the ballpark of 12. Going parallel means that Intel, which currently spends billions per fab, is going to then spend trillions per fab. Going parallel to that degree means a fundamental rework of a manufacturing process that has had hundreds of billions of dollars in research already. Not that this is impossible, but getting that kind of fundamental redesign of the process done on the kind of timetable required to keep Moore on schedule? That project will pu
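
              (Putting the two estimates together, as a rough Python sketch using the layer count and per-layer times suggested above:)

                  layers = 2 ** 10                  # 10 doublings over 15 years at 18 months each
                  for hours_per_layer in (1, 12):   # the optimistic guess above vs. the ~12 hours suggested here
                      print(layers * hours_per_layer / 24)   # ~43 days vs. ~512 days per chip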

    • I may be wrong, but I thought Moore's law was specifically referring to the manufacturing process, and not performance. So, Moore's law can retire and performance can still improve. They're not mutually exclusive.
      • by geekoid ( 135745 )

        CORRECT!
        Well done. You can have all the nerd points most of these other yahoos have lost with their carefully planned-out incorrect explanations of Moore's law.

    • Moore's law doesn't say anything about "performance improvement". Moore's law is precisely about "shrinking the manufacturing process" - how many transistors you can fit on a chip.
    • by allanw ( 842185 )
      Processors nowadays are limited by thermal dissipation, instead of how many transistors can fit on a chip. Transistor scaling was supposed to keep power usage the same while increasing density, but supply voltages have stopped decreasing, so the power efficiency gains are very low. Thus it is now important to think about power efficiency: http://hardware.slashdot.org/story/11/09/13/2148202/whither-moores-law-introducing-koomeys-law [slashdot.org]
  • Moore's law has applied, and will apply - at least by inference - to all past and future computing paradigms.

    The exponential growth trends of price/performance started long before CMOS processes were developed. While Moore's law specifically refers to integrated circuits, the facts remain: exponential growth trends were present in relay-based machines, vacuum tube based machines, transistor based machines (pre-IC), and integrated circuits.

    In fact, the exponential growth trends are actually accelerating at

  • And it's far from the only thing that impacts a computer's performance. The advent of SSDs proves there are still major areas of computer performance to be addressed outside of the CPU. While the GPU will be brought onboard before system RAM is, I'm sure that's another area that will find its way onto the die eventually. Gigabit ethernet is good enough for now - I don't know of any broadband connections anywhere that exceed that just yet, but an increase to 10-gigabit isn't out of bounds in the near term, and

    • The advent of the 10" 2560x1600 panel is upon us - put three of those together for a nice 30" display, and you're good to go. The more things are offloaded FROM the CPU, the more irrelevant CPU density and speed becomes.

      To maintain the aspect ratio, you need nine 10"(diag) displays to make a 30"(diag) display.
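
      (The quick check: with the same aspect ratio, tripling the diagonal means tripling both dimensions:)

          panels = (30 / 10) ** 2
          print(panels)   # 9 panels needed, not 3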

      • The advent of the 10" 2560x1600 panel is upon us - put three of those together for a nice 30" display, and you're good to go. The more things are offloaded FROM the CPU, the more irrelevant CPU density and speed becomes.

        To maintain the aspect ratio, you need nine 10"(diag) displays to make a 30"(diag) display.

        Whoops. I knew that, but somehow typed three instead of nine. *sad panda*

        They haven't announced prices on those new panels, so who knows if that will be affordable. I know they're planning to make 2560x1600 panels in sizes ranging from 4" - 10", so let's hope it's affordable. A 30" panel is rather big for me for a desktop display; I'm more comfortable with my current 24", but higher pixel density would be appreciated.

        It's going to be strange having a 4" smartphone with a 2560x1600 display, though. That's a

  • Funny... I was just writing up a post to the http://openhardwaresummit.org/ [openhardwaresummit.org] mailing list about a way to accelerate the process by which enthusiasts can work with the latest mass-produced embedded hardware.

    The initiative, which has a specification here - http://elinux.org/Embedded_Open_Modular_Architecture/PCMCIA [elinux.org] - is based around the fact that, just as mentioned above, the development of processor "speeds" is slowing down. This funnily enough allows so-called "embedded" processors to catch up, and it's the

  • It's all just propaganda.

  • I'm not convinced of the central postulate here. We may not be able to continually increase CPU performance by slavishly increasing the transistor count any more than we could do it by slavishly increasing the clock frequency, but that doesn't mean the Intels of the world can't maintain dominance in performance. There are all sorts of possibilities that haven't yet been fully explored because the industry has billions (trillions?) sunk into the silicon-specific end-to-end fabrication process, and they s
