The Almighty Buck Hardware

Moore's Law Blowout Sale Is Ending, Says Broadcom CTO

Posted by samzenpus
from the paying-the-price dept.
itwbennett writes "Broadcom Chairman and CTO Henry Samueli has some bad news for you: Moore's Law isn't making chips cheaper anymore because it now requires complicated manufacturing techniques that are so expensive they cancel out the cost savings. Instead of getting more speed, less power consumption and lower cost with each generation, chip makers now have to choose two out of three, Samueli said. He pointed to new techniques such as High-K Metal Gate and FinFET, which have been used in recent years to achieve new so-called process nodes. The most advanced process node on the market, defined by the size of the features on a chip, is due to reach 14 nanometers next year. At levels like that, chip makers need more than traditional manufacturing techniques to achieve the high density, Samueli said. The more dense chips get, the more expensive it will be to make them, he said."


  • I thought Intel, Samsung and TSMC claim that the upcoming 350mm wafer is going to bring along another round of cost savings.

    Are they telling the truth, or are they blowing smoke?

    • by CaptBubba (696284) on Thursday December 05, 2013 @11:29PM (#45615627)

      350 may bring costs down, but it isn't a process node advancement and won't help cram more transistors per unit area into a chip.

      Instead it will just let them process more chips at once in the most time-consuming processing steps, such as deposition and oxide growth. The photolithographic systems, which are the most expensive equipment in the entire fab on a cost-per-wafer-processed-per-hour basis, gain somewhat due to fewer wafer exchanges, but the imaging is still done a few square cm at a time, repeated in a step-and-scan manner a hundred times or more per wafer per step. Larger wafers, however, are posing one hell of a problem for maintaining film and etch uniformity, which is extremely important when you have transistor gate oxides on the order of a few atoms thick.

      • Re: (Score:3, Insightful)

        by Xicor (2738029)
        more transistors per unit area on a chip is worthless atm. you can have a million cores on a processor, but it will still be slowed down dramatically due to issues with parallelism. someone needs to find a way to increase parallel processor speed.
        • by x0ra (1249540) on Friday December 06, 2013 @12:15AM (#45615883)
          The problem is not so much in the hardware as in the software nowadays...
          • by artor3 (1344997) on Friday December 06, 2013 @01:25AM (#45616163)

            This hits the nail on the head. For decades, software developers have been able to play fast and loose, while counting on the ever-faster hardware to make up for bloated, inefficient programs. Those days are ending. Programmers will need to be a lot more disciplined, and really engineer their programs, in order to get as much performance as possible out of the hardware. In a lot of ways, it will be similar to the early days of computing.

            • by SuricouRaven (1897204) on Friday December 06, 2013 @03:53AM (#45616735)

              The advancements in hardware were used to allow a saving in software development costs.

              • by byrtolet (1353359)

                The advancements in hardware were used to allow a saving in software development costs.

                Not exactly.

                The hardware isn't advancing equally on all fronts. For example, memory latency hasn't improved noticeably in the last 15 years.

                Our software solves more problems than the software of 20 years ago. Now 95% of the features in each and every piece of software go unused by 95% of the users. 20 years ago it was much different.

                Now we have bloat, but we also have the power and freedom to do much more.

                The software still costs a lot, and it's buggier than ever, because of its quantity.

              • by symbolset (646467) * on Friday December 06, 2013 @08:53AM (#45617671) Journal

                It's not cheap to get rid of that much processor power without improving anything.

                Office XP system requirements: Single core processor at 0.133 GHz minimum. 0.4 GHz recommended. RAM 0.024 GB (OS) + 0.008 GB (Office). Storage 0.21 to 0.26 GB.

            • by Lumpy (12016) on Friday December 06, 2013 @07:39AM (#45617443) Homepage

              "For decades, low skilled software developers have been able to play fast and loose,"

              FTFY

              Embedded system programmers are the only real programmers anymore.

              • by jbolden (176878)

                30 years ago, stuff we do casually today, like networking, multiuser transactional databases, and resolution independence, was considered really hard and esoteric. The real programmers from back then often had far less complexity to deal with.

            • Ha! The joke's on you. Programs running on Wine while running Linux in Javascript is going to save the day.
        • by AdamHaun (43173)

          more transistors per unit area on a chip is worthless atm.

          That assumes your processor is a fixed size. It isn't. The smaller your die is, the cheaper it is. That's how process improvements make things cheaper.
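The economics described here can be sketched with the classic dies-per-wafer approximation. All prices and sizes below are made-up illustrative numbers, not figures from the article:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic approximation: whole dies fitting on a round wafer,
    discounting the partial dies lost around the edge."""
    radius = wafer_diameter_mm / 2.0
    return int(math.pi * radius * radius / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Hypothetical $5,000 wafer cost on a 300 mm wafer
wafer_cost = 5000.0
for area in (100.0, 50.0):  # shrinking the die from 100 mm^2 to 50 mm^2
    n = dies_per_wafer(300.0, area)
    print(area, n, round(wafer_cost / n, 2))
```

With these hypothetical numbers, halving the die area roughly doubles the die count, so cost per die roughly halves. That is the saving the article says the rising cost of advanced process steps is now cancelling out.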

          • by tepples (727027)

            The smaller your die is, the cheaper it is. That's how process improvements make things cheaper.

            The point of the featured article is that this is no longer the case, now that smaller processes require far more complex fabrication.

          • by artor3 (1344997)

            Only to a certain point, which is what the article is getting at. Eventually the cost of the machines required to make smaller die outweighs the cost savings from having more die per wafer.

        • We do more than make parallel processors with silicon chips. We have memory and flash storage too, for instance. Those benefit.

          Some tasks do actually scale well to parallelism if you want to talk processor. Still, one core needs access to memory usually, so designing buses that are fast and short enough to get all the cores proper access to the memory will be a challenge, unless you can shrink the process so you will have enough room on your silicon to put the pathways and logic to do so.

          While most consum

        • by jbolden (176878) on Friday December 06, 2013 @10:30AM (#45618259) Homepage

          They have. Functional programming. By explicitly avoiding side effects, huge chunks of code can execute independently and in different orders. Moreover, by organizing the code using functional looping constructs, parallel compilers can tell how to break things up.

          Functional makes parallelism much easier.
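A minimal sketch of the idea: because a pure function touches no shared state, every call is independent, so a parallel map over it is safe. The function name and numbers here are illustrative only:

```python
from multiprocessing import Pool

def f(x):
    # Pure: the result depends only on the input and no shared state
    # is touched, so calls can run in any order, on any core.
    return x * x + 1

if __name__ == "__main__":
    with Pool(4) as pool:
        # Equivalent to [f(x) for x in range(8)], but spread across 4 workers.
        results = pool.map(f, range(8))
    print(results)
```

The same program with a side-effecting f (say, appending to a global list) would give order-dependent results; purity is what makes the parallel split mechanical.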

      • There will also be a cost reduction from the more efficient use of the ARC (anti-reflective coating), top coats, and Photoresist applications on the larger wafers. Coat dispense volumes do not go up significantly with larger wafers in a spin coat application so you effectively get more imaging for the same volume of chemical. Seeing as many of the lithography materials are some of the most expensive in the process this benefit can be very significant. Of course controlling these thicknesses to within a few

      • by Katatsumuri (1137173) on Friday December 06, 2013 @05:30AM (#45617049)

        I see many emerging technologies that promise further great progress in computing. Here are some of them. I wish some industry people here could post updates on their path to market. They may not literally prolong Moore's Law in terms of transistor count, but they promise great performance gains, which is what really matters.

        3D chips. As materials science and manufacturing precision advances, we will soon have multi-layered (starting at a few layers that Samsung already has, but up to 1000s) or even fully 3D chips with efficient heat dissipation. This would put the components closer together and streamline the close-range interconnects. Also, this increases "computation per rack unit volume", simplifying some space-related aspects of scaling.

        Memristors. HP is ready to produce the first memristor chips but is delaying them for business reasons (how sad is that!). Others are also preparing products. Memristor technology enables a new approach to computing, combining memory and computation in one place. Memristors are also quite fast (competitive with current RAM) and energy-efficient, which means easier cooling and a possible 3D layout.

        Photonics. Optical buses are finding their way into computers, and network hardware manufacturers are looking for ways to perform some basic switching directly with light. Some day these two trends may converge to produce an optical computer chip that would be free from the limitations of electrical resistance/heat and EM interference, and could thus operate at a higher clock speed. It would be more energy efficient, too.

        Spintronics. Probably further in the future, but potentially very high-density and low-power technology actively developed by IBM, Hynix and a bunch of others. This one would push our computation density and power efficiency limits to another level, as it allows performing some computation using magnetic fields, without electrons actually moving in electrical current (excuse me for my layman understanding).

        Quantum computing. This could qualitatively speed up whole classes of tasks, potentially bringing AI and simulation applications to new levels of performance. The only commercial offering so far is D-Wave, and it's not a universal gate-model QC, but with so many labs working on this, results are bound to come soon.

    • Smaller geometries mean a cheaper cost per chip. Unfortunately, wafer yields drop with smaller geometries, meaning the benefits of shrinking silicon get eaten by a reduction in the number of chips that work per wafer. If we could shrink the chips without a drop in yield then Moore's Law would continue (for now). The other issue with shrinking the chips is transistor reliability. As we make them smaller, the chips stop working reliably.
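The yield effect described here is often sketched with a simple Poisson defect model: the fraction of working dies falls exponentially with defect density times die area. The defect density below is a made-up number for illustration:

```python
import math

def poisson_yield(defect_density_per_mm2, die_area_mm2):
    """Expected fraction of working dies: Y = exp(-D * A).
    Bigger dies, or a dirtier (e.g. newly shrunk) process with
    higher effective defect density, both mean lower yield."""
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

# Hypothetical defect density of 0.002 defects per mm^2
for area in (50.0, 100.0, 200.0):
    print(area, round(poisson_yield(0.002, area), 3))
```

A shrink cuts the area A per die, but if the new process raises the effective defect density D, the two effects fight each other, which is exactly the trade-off described above.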
    • Are you aware that 350mm is less than 14in, and that the actual wafers are 450mm (almost 18in)?
      Note also that the larger size doesn't inherently reduce cost or increase yield, much less improve performance (density or speed).
      It may however follow Rock's law, that the price of a semi fab doubles every 4 years... (this set should hit $5B).

    • I thought Intel, Samsung and TSMC claim that the upcoming 350mm wafer is going to bring along another round of cost savings.

      Are they telling the truth, or are they blowing smoke?

      Yes and no... as a bona fide cynic, I will believe that the sun is made of twinkie foam before believing anything that any CEO says about increased costs justifying higher prices or smaller price reductions. But at the same time, a 350mm wafer means more chips per wafer, not smaller chips. That does help them increase yield (number of viable chips per wafer), so that each wafer is worth more (but also costs more, because it is larger... the increased value is greater than the increased cost, however, for Si

  • by Anonymous Coward on Thursday December 05, 2013 @11:19PM (#45615569)

    It used to be that you had to upgrade every 2 years. Now you really have to upgrade every 5 or 7 years. Once every 10 years sounds pretty good to me. As the pace of computer innovation slows, less money has to go towards upgrades. Computers are now more like appliances: you run them down until they physically break.

    Of course, if you manufacture computers or work in IT, then such a proposition is horrible, as a long product lifecycle means less money coming to you. As a consumer, I like it because I no longer have to shell out hundreds of dollars every other year to keep my computers usable.

    • by alexander_686 (957440) on Thursday December 05, 2013 @11:35PM (#45615641)

      No, you are still upgrading at the same rate. Except now, because more and more stuff is being pushed out onto the web, it is the servers that are being upgraded. So it is transparent to you. Oh, and phones too.

    • by Kjella (173770)

      It used to be that you had to upgrade every 2 years. Now you really have to upgrade every 5 or 7 years. Once every 10 years sounds pretty good to me. (...) As a consumer, I like it because I no longer have to shell out hundreds of dollars every other year to keep my computers usable.

      Really, you like products that are just marginally better than before? You wouldn't like it if next year there was a car that could get you to work at twice the speed and half the price? I love that in 2013 I can buy a much better processor for the same amount of (inflation-adjusted) dollars than I could in 2003 or in 1993 and ideally I'd like to say the same about 2023 as well. You really think you'd be better off with 1993-era level of technology and two rebuys because they wore out?

      With real income stagn

      • The problem, for the vast majority, is that they have more power than they can possibly use. To use a car analogy, it's like using a top fuel funny car to go to the store for milk.

        Take my dad for example; he is the perfect "Joe Average" user. He uses social media, watches videos, and uses his bookkeeping software: the kind of everyday tasks the majority do daily. When the price dropped on the Phenom IIs to make way for the FX, I thought "Well, it has been a while since I built him that Phenom I quad, so maybe it's time for an upgrade" and ran a usage monitor for a week to see how hard he was hitting the CPU. What did I find? 35%. That is the average amount of usage that quad was getting. Sure, he'd occasionally get over 50%, but that was only for a few seconds.

        And THAT is why it's really not gonna matter to Joe and Jane Average: their systems already idle more than they run, and the prices are already crazy cheap. I mean, I just got dad a quad core Android tablet for Xmas... think he'll EVER come up with enough to do to peg all 4 cores enough that an upgrade would help? Not likely. Hell, I was the guy that built a new system every other year with a major overhaul on the odd years; now? My system is 4 years old and I have zero reason to upgrade to a new one. Why should I? I have a hexacore, 8GB of RAM, 3TB of HDD; the only thing I upgraded was my HD4850 for an HD7750, and even that was about lowering heat, not performance.

        Let's face it, Moore's Law made systems several orders of magnitude more powerful than the work the masses can come up with for them to do. Who cares if Moore's Law finally winds down when the systems are so powerful they spend more time idling than anything else?

        • Let's face it, Moore's Law made systems several orders of magnitude more powerful than the work the masses can come up with for them to do. Who cares if Moore's Law finally winds down when the systems are so powerful they spend more time idling than anything else?

          I care about Moore's law winding down. For my applications (CFD) it means that not only do I have to start paying attention to the fact that I'm running close-to-metal, e.g. I have to minimize the amount of cache misses, but also that if I want to have a scalable application, I can't do it without MPI. And MPI is tricky.

      • Moore's Law is an expression of exponential growth. All we are seeing is the logical conclusion of applying exponential growth expectations to a real-world finite resource (i.e. the fact that atoms have a finite size). See the Wheat and Chessboard problem [wikipedia.org] for reference.
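The chessboard arithmetic is easy to verify, and the same doubling arithmetic shows why five decades of Moore's Law were such a windfall:

```python
# Wheat and chessboard: one grain on square 1, doubling on each
# of the 64 squares.
grains_on_last_square = 2 ** 63
total_grains = 2 ** 64 - 1
print(grains_on_last_square)
print(total_grains)

# The same arithmetic for Moore's Law-style growth: doubling the
# transistor count every 2 years for 50 years is 25 doublings,
# roughly a 33-million-fold increase.
print(2 ** 25)
```

Any such exponential eventually collides with a physical limit; here, the finite size of atoms.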

        • The historical fact that 20% per year die shrinkage was possible for 50 years running, just means that atoms are a lot smaller than the first IC features.

          It was good while it lasted.

      • by Artifakt (700173)

        One of the things that allows the US government to claim the inflation rate is extremely low is that they get to adjust for improved tech capabilities. If Moore's law is finally hitting its end, the extra value of computation on cheaper, newer iron will stop being one of the things that lets them fudge the reporting. The other most major fudge area is those stagnant wages you alluded to, which will have to become where just about all of the lying with statistics will take place in the future. It's interesti

        • by Dahamma (304068)

          Just because the increase in transistor count slows down, that doesn't mean *inflation* will increase. It just means technological gains will be slower. You think somehow the average computer is just going to jump to $10,000 because Intel just needs to charge $8000 for some absurd chip no one wants? If that were true, the Itanium would have been popular.

          Or to use the usual "car analogy" ;) - cars just haven't changed that drastically year over year for many decades - they have had slow, incremental impro

          • Wrong. It has an impact on the CPI (consumer price index) in the U.S., and the percent change in the CPI is the inflation rate. See http://www.bls.gov/cpi/cpifaccomp.htm [bls.gov]

            This has nothing whatsoever to do with PCs suddenly becoming more expensive due to 'inflation'. It's a technical measure of how the U.S. government statistically 'proves' a computer costs less because it's more powerful than last year's model, even if, in real dollars, the actual selling price is identical in both years. You might also note tha

    • by Opportunist (166417) on Friday December 06, 2013 @12:13AM (#45615865)

      Considering the quality of contemporary components, you'll still be upgrading every 2-3 years. Or however long the warranty in your country is.

    • by Lumpy (12016)

      I need to upgrade every 2. But I am one of those guys that actually uses the computer. Programming, advanced mathematics, and HD video editing all demand the fastest processors or more and more cores. I have 12 cores right now and wish I had 18 or 20 as I could use the speed to get more work done.

  • by 140Mandak262Jamuna (970587) on Thursday December 05, 2013 @11:22PM (#45615581) Journal
    Well, we had a good run. 99% of the computing needs of 99% of the people can be met by existing electronics. For most people, network and bandwidth limit their ability to do things, not raw computing power or memory. So Moore's observation (it ain't no law) running out of steam is no big deal. Of course, the tech companies need to transition from selling shiny new things every two years to a more sedate pace of growth.
    • by Anonymous Coward on Thursday December 05, 2013 @11:36PM (#45615653)

      When people say this, I think that the person is not being imaginative about the future. Sure, we can meet 99% of current computing needs, but what about uses that we have not yet imagined?

        Image processing and AI are still pretty piss poor, and not all bound by network and bandwidth limits. Watch a Roomba crash into the wall as it randomly cleans your room, Dark Ages!

      • by AmiMoJo (196126) *

        I never really understood why Roomba opted for the random-path bump-into-things method. I have a Neato robot that uses lidar and proximity sensors to mostly avoid the bumps, and it cleans in neat back-and-forth lines for the most part. Your carpet ends up looking like a mown football pitch.

        I can only presume that Roombas work that way because they only have a very simple "map" of the room. You can download the data from the Neato over USB, and it does create a detailed outline of everything in the room. Unfortu

    • Get rid of the bloat and start coding in ASM. Of course, those developers aren't cheap are they?

      • by VTBlue (600055)

        Hold the boat, a return to C or C++ would be a HUGE boost, no need to throw the baby out with bath water.

        ASM programmers could never build the rich content apps that the world relies on today. The code would be ridiculous; think the worst COBOL app times 1000, for every application used today.

        No, moving dynamic languages and compiling them to optimized C or chunking out high level critical code into optimized C/C++ is what every major web service is focusing on today. Facebook for example is realizing well ove

        • by rubycodez (864176)

          the most bloated crap on the planet is written in c/c++, by Microsoft

          Putting web-facing services into C is just asking for exploits due to the language's deficiencies (no size or limit checking, etc.). Hell, most malware today propagates via about a dozen common mistakes that "genius" programmers make again and again, because they're so clever they are morons.

          • by Dahamma (304068)

            the most bloated crap on the planet is written in c/c++, by Microsoft

            No, no, it's not. The most bloated crap on the planet can be seen every day on many of your favorite web sites.

            Why do in 20,000 lines of C++, linked efficiently into a binary and shared libraries, what you can do in 50,000+ lines of JavaScript, most of which is included without any knowledge of what's in it and just bloats your browser without ever being executed?

            Not that Microsoft should be absolved of blame there. WinJS is an abomination.

        • by gweihir (88907)

          On MC68xxx it was possible and was being done. It could also be done on Intel, but that assembler model is so cluelessly complex, the language is a real issue. "Content rich" has nothing to do with it.

          As to C, competent people are using it, no need to hold the boat. Just realize that all those that can only do Java are not competent programmers. Also, C coders are highly sought after, see, e.g. http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html [tiobe.com]

          Code reviews I have done confirm this, Java progr

      • by Opportunist (166417) on Friday December 06, 2013 @12:12AM (#45615857)

        Erh... no.

        As an "old" programmer who happens to know a few languages, ASM for a few different machines among them, I can reassure you that you do NOT want to return to the good ol' days of Assembler hacking. For more than one reason.

        The most obvious one is maintenance. I still write ASM for embedded applications where size does matter because you're measuring your available space in Bytes. Not even kBytes. Where it matters that your code takes exactly this or that many cycles, none more, none less. But these are very, very specific routines with a "write once, never touch again" policy in mind. You do not want to be the poor bastard who gets to maintain ASM code. Even less so if it's not your own (which is already anything but trivial). ASM is often a very ugly mess of processor side effects being used for some kind of hack because you simply didn't have the time and/or space to do it "right".

        C is probably the closest you should get to the "metal" today. Unless of course you have a VERY good reason to go lower, but I cannot really think of anything that doesn't deal with the OS itself.

        • by JanneM (7445) on Friday December 06, 2013 @12:21AM (#45615917) Homepage

          As an addendum to the parent (I, too, have a background in ASM programming): You're working at such low level of detail that any application of non-trivial size becomes extremely difficult to write truly effectively. You just can't keep so many details in mind at once. And when you need to work as a team, not alone, interfacing code becomes a nightmare.

          So of course you abstract your assembler code. You define interfaces, develop and use libraries of common application tasks, and just generally structure your code at small and large scales.

          But at that point, you are starting to lose the advantage of ASM. A good, modern C compiler is a lot better than you at finding serendipitous optimization points in structured code, and it is not constrained by human memory and understanding, so it doesn't need to structure the final code in a readable (but slower) way.

          Small, time-critical sections, fine. Small embedded apps on tiny hardware, no problem. But ASM as a general-purpose application language? That stopped making sense decades ago.

          • A good, modern C compiler is a lot better than you at finding serendipitous optimization points in structured code

            Provided that a developer can find and afford a "good, modern C compiler" targeting a given platform. What's the state of the art in compilers for 6502-based* microcontrollers again? Last I checked, code produced by ca65 was fairly bloated compared to equivalent hand-written assembly language. And I'm told that for years, GCC severely lagged behind $6000-per-seat Green Hills compilers.

            * Why 6502? Maybe I'm making an NES game for the competition [nintendoage.com]. Or maybe I need to code a hash table for the storage contro [pagetable.com]

            • by JanneM (7445)

              Provided that a developer can find and afford a "good, modern C compiler" targeting a given platform.

              The thread is about application development on general-use PCs, which means Intel's compiler, the MS compiler, gcc and the like on x86 or ARM.

              • by AmiMoJo (196126) *

                It's a fair point given that the GPs were talking about embedded systems though. One thing that really hobbles the PIC platform is the lack of a good and free compiler for the 12, 16 and 18 ranges. I think the 24 range uses GCC like Atmel does for their AVR line.

                • One thing that really hobbles the PIC platform is the lack of a good and free compiler for the 12, 16 and 18 ranges.

                  The one thing that really hobbles the PIC platform is the lack of a good architecture (save for PIC32, perhaps).

          • As an addendum to the parent (I, too, have a background in ASM programming): You're working at such low level of detail that any application of non-trivial size becomes extremely difficult to write truly effectively. You just can't keep so many details in mind at once. And when you need to work as a team, not alone, interfacing code becomes a nightmare. So of course you abstract your assembler code. You define interfaces, develop and use libraries of common application tasks, and just generally structure your code at small and large scales. But at that point, you are starting to lose the advantage of ASM.

            Perhaps not. ;-) [greenarraychips.com]

        • Oh yes. I just did a small microcontroller project for a customer and just having to re-learn how to just compare two 8 bit bytes was enough for me. "Not this crap again!" Move one operand to the accumulator. Subtract from memory. Um, which way around is it again? ACC - memory or memory - ACC? Do I need to clear the carry bit myself first? Wait, how do I mask that bit again? It's AND for resetting to 0, OR to set to 1? Wait, there's a macro for that already?

          10 minutes later, I'm fairly confident I've put t
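For reference, the masking rules being half-remembered here (AND with an inverted mask clears a bit, OR sets it, XOR flips it) work the same in any language; a quick sketch with a hypothetical register value:

```python
REG = 0b10110100   # hypothetical 8-bit status register contents
BIT3 = 1 << 3      # mask for bit 3

cleared = REG & ~BIT3   # AND with inverted mask: forces bit 3 to 0
set_    = REG | BIT3    # OR with mask: forces bit 3 to 1
toggled = REG ^ BIT3    # XOR with mask: flips bit 3

print(bin(cleared), bin(set_), bin(toggled))
```

In REG above, bit 3 happens to be 0 already, so clearing it is a no-op while setting and toggling both produce 0b10111100.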

        • C is probably the closest you should get to the "metal" today. Unless of course you have a VERY good reason to go lower, but I cannot really think of anything that doesn't deal with the OS itself.

          I would add Fortran (90/95/2003/2008) to that one-item list. Not to spark a flame war here, but Fortran can often be 10x faster for number crunching, given the same amount of programmer-hours to code it. I'm not saying you can't make a C program equally fast, but the default way people structure Fortran code for number crunching results in faster code than the default way people structure C code for number crunching.

          Take IPA for instance: you have to put in some work in order to write C code that will actuall

    • by Dahamma (304068)

      Or someone actually comes up with something *new* in computing and changes the game again. For example, quantum computing has seemed like a bit of a pipe dream so far, but a major breakthrough there would kickstart a whole new era of development.

      Or maybe it will come from another direction - if we can only improve 2 of (speed, power consumption, cost), what if someone came up with an exponentially improved battery technology? And/or drastically reduced the power consumption for the same cost? Those could

    • Well, we had a good run. 99% of the computing needs of 99% of the people can be met by existing electronics.

      Hasn't that been true since the computer was invented?

    • by Bender_ (179208)

      > 99% of the computing needs of 99% of the people can be met by the existing

      You know, this phrase has been uttered so many times that it has become completely meaningless. Please define "computing needs".

      If you had asked someone in the '50s, they would have told you that the average person needs some help with adding the numbers for checkbook balancing. So a simple calculator should be enough, right? Nobody would have considered that people of the 2000s would deem it a worthwhile endeavour to use processing power

  • On schedule (Score:5, Interesting)

    by Animats (122034) on Thursday December 05, 2013 @11:23PM (#45615585) Homepage

    About ten years ago, I went to a talk at Stanford where someone showed that the increasing costs of wafer fabs would make this happen around 2013. We're right on schedule.

    Storage can still get cheaper. We can look forward to a few more generations of flash devices. Those don't have to go faster.

    • by gweihir (88907)

      Indeed. The slow-down has been happening for about a decade now. My personal indicator is that once a year or so, I think about upgrading my CPU. For the last few years, I have not been able to find anything significantly faster. That used to be no problem. I have to admit that I quite like this trend. Maybe we can now start to build better software?

  • Process Node (Score:3, Informative)

    by spectral7 (2030164) on Thursday December 05, 2013 @11:24PM (#45615587)

    The most advanced process node on the market, defined by the size of the features on a chip, is due to reach 14 nanometers next year.

    Actually, the "process node" hasn't meant anything for years now [ieee.org].

  • by CapOblivious2010 (1731402) on Thursday December 05, 2013 @11:36PM (#45615655)
    If that's true, we can only hope that the exponential bloating of software stops as well. Software has been eating the free lunch Moore was providing before it got to the users; the sad reality is that the typical end-user hasn't seen much in the way of performance improvements - in some cases, common tasks are even slower now than 10 years ago.

    Oh sure, we defend it by claiming that the software is "good enough" (or will be on tomorrow's computers, anyway), and we justify the bloat by claiming that the software is better in so many other areas like maintainability (it's not), re-usability (it's not), adherence to "design patterns" (regardless of whether they help or hurt), or just "newer software technologies" (I'm looking at you, XAML&WPF), as if the old ones were rusting away.
    • You know, I'm kinda tempted to see how an ancient version of Windows + Office would run on a contemporary machine.

      Provided they do at all, that is...

    • by mcrbids (148650) on Friday December 06, 2013 @12:08AM (#45615843) Journal

      Software has been eating the free lunch Moore was providing before it got to the users; the sad reality is that the typical end-user hasn't seen much in the way of performance improvements - in some cases, common tasks are even slower now than 10 years ago.

      This point of view is common, even though its odd disparity with reality makes it seem almost anachronistic. Software isn't bloating anywhere near as much as expectations are.

      Oh, sure, it's true that much software is slower than its predecessor. Windows 7 is considerably slower, on the same hardware, than Windows XP, which in turn is a dog compared to Windows 95. But the truth is that we aren't running on the same hardware, and our expectations have risen dramatically. In actual fact, most implementations of compilers and algorithms show consistent improvements in speed. More recent compilers are considerably faster than older ones. Newer compression software is faster (often by orders of magnitude!) than earlier versions. Software processes such as voice recognition, facial pattern matching, lossy compression algorithms for video and audio, and far too many other things to name have all improved consistently over time. For a good example of this type of improvement, take a look at the recent work on "faster than fast" Fourier Transforms [mit.edu] as an easy reference.
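      The algorithmic gains the parent describes are easy to make concrete. A toy comparison of a naive O(n^2) DFT against the classic O(n log n) Cooley-Tukey recursion (just the textbook algorithm, not the MIT sparse-FFT work linked above) shows the same answer computed with vastly less work:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: O(n^2) multiplications."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT: O(n log n); len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddle[k] for k in range(n // 2)] +
            [even[k] - twiddle[k] for k in range(n // 2)])

signal = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
a, b = dft(signal), fft(signal)
assert all(abs(u - v) < 1e-9 for u, v in zip(a, b))  # same result, far fewer ops
```

      For n = 8 the difference is trivial; for n in the millions (audio, video frames) it is the difference between real-time and unusable, with no new hardware involved.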

      So why does it seem that software gets slower and slower? I remember when my Dell Inspiron 600m [cnet.com] was a slick, fast machine. I was amazed at all the power in this little package! And yet, even running the original install of Windows XP, I can't watch Hulu on it - it simply doesn't have the power to run full screen, full motion, compressed video in real time. I was stunned at how long (a full minute?) the old copy of Open Office took to load, even though I remember running it on the same machine! (With my i7 laptop with an SSD and 8 GB of RAM, OpenOffice loads in about 2 seconds.)

      Expectations are what changed more than the software.

      • by AmiMoJo (196126) *

        Oh, sure, it's true that much software is slower than its predecessor. Windows 7 is considerably slower, given the same hardware, than Windows XP

        Except that it isn't. A typical machine with 1GB of RAM will perform better on Win7 than on XP, assuming drivers for everything are available. I have old laptops that have received a nice performance boost this way, with RAM maxed out at 1.5GB.

        It's obvious why. Win7 benefited from a lot of optimization using tools that simply did not exist when XP was being developed. It manages caches better, deals with multiple DLL versions better, reduced memory in key areas and fixed a lot of little bottlenecks that

  • Without adjusting for inflation, Intel's processors cost about as much as they did 20+ years ago.

    http://www.krsaborio.net/intel/research/1991/0422.htm [krsaborio.net]

    http://www.newegg.com/Product/Product.aspx?Item=N82E16819116492 [newegg.com]

    http://www.newegg.com/Product/Product.aspx?Item=N82E16819116899 [newegg.com]

    http://www.nytimes.com/1992/01/09/business/company-news-intel-moves-to-cut-price-of-386-chip.html [nytimes.com]

    http://www.newegg.com/Product/Product.aspx?Item=N82E16819116775 [newegg.com]

    Almost every other component (except maybe the GPU) has dropped tremendously in price.

    • by JanneM (7445)

      Pointless comparison without inflation adjustment. If you do adjust for inflation, CPUs have become a lot cheaper as well.
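      The arithmetic behind the parent's point, using illustrative numbers (a 1991 486DX list price of roughly $650 and cumulative US CPI inflation of roughly 70% from 1991 to 2013 - both assumed round figures, not taken from the links above):

```python
# Rough inflation adjustment with assumed figures.
price_1991 = 650.0               # assumed 1991 486DX list price, USD
inflation_factor = 1.70          # assumed cumulative CPI multiplier, 1991 -> 2013
price_in_2013_dollars = price_1991 * inflation_factor
print(f"~${price_in_2013_dollars:,.0f} in 2013 dollars")
```

      So a chip with the same nominal sticker price today is already about 40% cheaper in real terms - before even counting the enormously larger transistor budget you get for it.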

  • To run MS Office and watch cat videos on Youtube? Not very? Then I guess "Good Enough" computing will moderate the situation....
  • I think a cartel investigation is in order. If someone tries to explain a price hike in a field that is supposedly competitive, especially when the reason given is threadbare at best, it's time to watch for price fixing.

    • by rubycodez (864176)

      no, they are almost at the limit of physics for CMOS. At 10nm, the gate oxide approaches a monolayer, and quantum tunneling becomes a significant problem.

      some exciting alternative technologies exist though.

  • If Intel's investor day meeting is to be believed, this is not true, at least for their next two process nodes:
    http://files.shareholder.com/downloads/INTC/2827417808x0x709360/2D44DBF8-58B8-403F-B0E8-16E114CFF0E8/2013_IM_Smith.pdf [shareholder.com]
    Look at Slide 36.

  • Yawn (Score:5, Funny)

    by jon3k (691256) on Friday December 06, 2013 @12:21AM (#45615923)
    Oh look, the 100th executive to predict the end of Moore's Law in the last month.
    • by jd (1658)

      Oh no! You know what that means! 100 monkeys is the critical threshold! The brains of all of humanity will now be wiped! I can feel it sta....gurhcfjgjxhhfhcCARRIER LOST

    • Re:Yawn (Score:5, Funny)

      by Anonymous Coward on Friday December 06, 2013 @02:16AM (#45616409)

      18 months from now, it will only require 50 executives to predict the end of Moore's law.

  • But an end to Moore's Law has been predicted multiple times before, and it hasn't happened yet. (Things have slowed down, yes, but they're far from stopping.) A few years back hard drives were predicted to reach a storage density limit, but this was solved by turning the magnetic cells vertical. So Moore's Law may finally be coming to an end, but don't be surprised if something new comes along and blows silicon transistors away.
  • Or, y'know.... (Score:5, Interesting)

    by jd (1658) <imipak@yaCOLAhoo.com minus caffeine> on Friday December 06, 2013 @01:31AM (#45616181) Homepage Journal

    Encourage inventors rather than patent troll them into oblivion.

    Just a thought, I know it would destroy much of the current economic model, but maybe - just maybe - those expensive techniques are merely the product of insufficient brains. Does the semiconductor world forget so soon that "cutting edge" in the 1970s was to melt silicon and scrape off the scum on top? Does it ever occur to anyone that, just as we use reduction techniques to obtain silicon today because older methods were crap, there exists the potential that the expensive, low-quality techniques of today could be the rejects of tomorrow?

    There are no inventors any more because silicon is a bloody expensive field to get kicked out of by patent trolls. Mind you, it's also a difficult area to get into, what with TARP being used to fund golden parachutes, bonuses and doubtless a few ladies of the night rather than business loans and venture capital. There's probably a few tens of thousands of mad scientists on Slashdot, and I'm probably one of the maddest. Give each of us 15 million and I guarantee the semiconductor market will never be the same.

    (P.S. For the NSA regulars on Slashdot, and if you don't know who you are, you can look it up, feel free to post on your journals or as an article all the nifty chip ideas you've intercepted that have never been used. After all, you're either for us or for the terrorists.)

  • No more speed increases coupled with decreases in power consumption and cost. Fair enough, but who says increasing cost is the way to go? (That's rhetorical, we all know it's the business people saying that). Focus on less power consumption and at least keep costs the same. Use the chips we have to make systems with more processors. Take advantage of the cloud and Hadoop. Refocus on more efficient coding practices. We're so focused on chips getting faster, but parallel processing is a viable method o
  • If they try to jack up prices they'll see what happens.

  • by fygment (444210) on Friday December 06, 2013 @08:29AM (#45617591)

    If _nothing_ changes, yes this will all come to pass.

    BUT

    Want to bet that in a lab somewhere, there is something that will let Moore's continue?

    Think: how many times has this prediction been made and then proven wrong? Wonder if these statements are just ploys to jack up prices?

    Sit back, relax, and be prepared to be amazed.
