Hardware

The End Of The Innovation Road for CMOS

Elledan writes "According to this EE Times article, CMOS technology (also used to create CPUs) is nearing the point where we will no longer be able to create smaller structures with it. With that date set around 2012 and with no replacement technology in sight, this could become a real problem in the near future, as the article explains."

  • Structures? (Score:2, Insightful)

    by Anonymous Coward
    As in what, code? (CMOS holds data, blah blah blah..)

    Right, right.

    Anyway, I almost wish we would hit impassable physical barriers with all hardware. Everywhere I look, people sacrifice good code for simple fast-to-write code (I'm guilty of this myself on occasion).

    I would love to see what we could come up with if we *had* to scrape every last bit out of the bucket, if we *couldn't* waste anything because there were no additional resources.
    • Now that you can heat your coffee by direct irradiation from the CPU, is there any need to go faster?

      IRL, Microsoft will find a way. If they didn't, XP's great-grandbastard would run like a stoned sloth. Install it on a P100 for a preview of what I mean.
      • FWIW, Windows 2000 runs fine on my Pentium 166, 64 MB RAM. Don't know about XP, that bloated sack of ..
        • Windows 2000 runs fine on my Pentium 166, 64 MB RAM.

          Not in my experience, but I was running Win2k Server with an app.

          233 MHz (or 266 MHz) w/ 168 MB of RAM and it's still a PIG.
        • Windows 2000 runs fine on my Pentium 166, 64 MB RAM

          I have one workplace which uses 32MB P133s, carefully stripped of non-vital processes, as TS clients (only!) under Win2k.

          OTOH, until a power surge killed its serial-port card a few months back, I was using a 486SX40 (i.e. a souped-up '386, no FPU) + 12MB (4x1 32-pin, 1x8 72-pin) + 250MB (Samsung) as a gateway, dial-in (x2), dial-out, SQL server, web server, mail server and name server, with uptimes exactly matching the power outages. It hung from my ceiling [linuxlots.com] and was powered by a real-original IBM PC/XT PSU.
    • If you would like to see that, you can look at 2 different arenas.
      One is embedded. I don't mean that toy called a PDA that we try to pass off as an embedded system, it's not... I'm talking real embedded systems: machine control, aircraft systems, vehicular computer systems. PICs are a really cool example... Write your program in less than 6K and drive a graphic display, store data, get user input, communicate with 5 RS-485 devices, and still have some speed left on the 4 MHz clock. (OK, some are up to 20 MHz now.)

      And finally, look at examples of the "impossible" from the early to mid 80's, where 256K was the maximum you could get on a 6809 or 6502 processor, or a Z80... or best of all, code that ran on mainframes of that time... Spacewar is an excellent example.

      There are 2 things that make code the slop it is today... Management and Laziness... and I for one believe that the first one is the cause of the second.

      Finally, there are some coding projects today that are writing tight, fast code... Look for projects that do things on slow computers. NuppelVideo, for example... video capture on a Pentium 233 with an el-cheapo bt878 video capture card... you can't do that with anything else out there...
  • by panurge ( 573432 ) on Sunday May 19, 2002 @03:17AM (#3545090)
    At what point does the performance of computers become "adequate"? Once a technology becomes mature, a slow rate of improvement becomes acceptable. Reliability gets fixed, design improves, niche markets get filled. Internal combustion engines, houses, aircraft, ships, bridges -- for all of these the lack of a Moore's Law isn't a "problem". Perhaps if Moore's Law finally packs in for computers, we can all stop chasing progress and concentrate on things like social implications, human factors, and software that does something useful.
    • by quantaman ( 517394 ) on Sunday May 19, 2002 @03:48AM (#3545120)
      By many standards the performance of our modern computers is already well beyond adequate. We can browse the Internet with ease, look at pictures, make presentations, watch movies. But whenever we get a little more power we always find a way to use it: a few more features, a new file format, a few more polygons. The fact is, the only point at which I can see home computing reaching "adequate" levels is when the worst-written program can generate a set of stimuli indistinguishable from reality, and even then I'm sure we'll still come up with some new uses. One must also take into account other areas of computing such as high-end physics and weather computers; these systems take into account massive numbers of variables, and I don't believe that it's possible to come up with an adequate level of performance (i.e. taking into account every electron, photon, quark, etc. in the universe, including itself). Then again, I'll be pretty happy when they come up with a server that can single-handedly handle the /. effect!
      • Then again, I'll be pretty happy when they come up with a server that can single-handedly handle the /. effect! You might find that magical server Here [slashdot.org]
        • Unless I'm mistaken, /. along with most other major websites runs off of a number of different servers to spread out the traffic; that's why I made sure to include "single-handedly". That being said, I still appreciate the joke.
      • When we can interactively render a game with the visual quality of _Jurassic Park_ in real time? That would likely require a computer many thousands of times faster than what we have available today.

        C//
    • At what point does the performance of computers become "adequate"?

      Not for a long while. Error-free Voice Recognition? Artificial Intelligence? Robots? Cars that don't need drivers?

      We need Terahertz processors.

      Perhaps if Moore's Law finally packs in for computers, we can all stop chasing progress and concentrate on things like social implications, human factors, and software that does something useful.

      These are not mutually exclusive goals. I'd say they go hand in hand. You can't concentrate on the social implications of progress without first having progress.
      • I'm not convinced that it's faster processors that are keeping us from doing all of that.
      • "At what point does the performance of computers become "adequate"?" Not for a long while. Error-free Voice Recognition? Artificial Intelligence? Robots? Cars that don't need drivers?

        A studied prediction of when computers can approach human-level intelligence:

        http://www.transhumanist.com/volume1/moravec.htm

      • "You can't concetrate on the social implications of progress without first having progress."

        I definitely disagree. I would say that science fiction is very often about examining the social implications of progressing to a very advanced stage of technology.

        For example, there is a movie coming out called "Minority Report" based on a book by Philip K. Dick (Do Androids Dream of Electric Sheep?). The short summary is that the government can predict crimes before they occur and thus stop them from occurring at all.

        On slashdot, we've discussed many times the implications of a fully deployed facial recognition system.
        • For example, there is a movie coming out called "Minority Report" based on a book by Philip K. Dick (Do Androids Dream of Electric Sheep?). The short summary is that the government can predict crimes before they occur and thus stop them from occurring at all.

          It's actually a short story. In the story the predictions are made by psychics, rather than computers. Plenty of his other work covers out-of-control computers or supposedly impartial computers actually subject to human manipulation.
          Anyway the central theme behind "Minority Report" appears to be how such a system can give unexpected results when applied to a government official who knows exactly how things work.
    • At what point does the performance of computers become "adequate"?

      It ain't now, that's for sure. I have a P4 2 GHz, 512 megs of PC800 and a GeForce4 Ti 4600, and I can still only get about 15 fps in Balmora in Morrowind at 1600x1200 (with all eye candy features turned up high). What a fine game it is too. It pushes eye candy to an entirely new level, and the gameplay rocks too...

      It also ain't now because it takes too damn long to re-encode an mpeg video stream. After I cap an hour long episode of my favorite TV series and exercise my fair use rights to edit out commercials for my personal private viewing later, it takes about 30 minutes to re-encode it into a compatible VCD format for my living room's DVD player. (Oh, I'm sorry, that's considered stealing by some. I tell you what, I've seen a lot of commercials a frame at a time and have to pay extra attention to them as I attempt to make a clean cut, just so I can satisfy my stupid collecting habit with a full set of VCDs for some stupid show I most likely will never watch again...)

      And it certainly won't be enough horsepower by the time the next OS release of Windows comes out, because Microsoft, in their infinite wisdom, plans on doing away with a simple file system and replacing it with a database where all the PC's saved data goes, which I'm sure will require a 5 GHz PC with 5 gigs of RAM. (And you think registry corruption is bad...) And this will help people find their old data how? The same people who can't figure out how to construct a decent Google query? Your typical marketing person, for example: "find marketing report -- 2,042 results found." instead of something like "find marketing report where client equals wonka and body includes teenagers, candy and syringes, and month equals april, may, or june and year equals 2000."

      • It ain't now, that's for sure. I have a P4 2 GHz, 512 megs of PC800 and a GeForce4 Ti 4600, and I can still only get about 15 fps in Balmora in Morrowind at 1600x1200 (with all eye candy features turned up high).

        Running at 1600x1200 means you're fill-rate dependent. It has nothing to do with your CPU.

        And most PC games, as much as fanboys don't want to admit it, are sloppily coded. You look at some of the amazing stuff being done on relatively low-end consoles, and it's mind-blowing. Then you look at what's being done on high-end, much more powerful PCs and you often see a lot of bloated crap. On the PC it's not uncommon for levels to take a minute or more to load, whereas you see sub-10-second times all the time on consoles, even though consoles don't come with hard drives.
    • You could very well stay at today's computing power, but if you consider the amount of power today's computers draw there's a lot to be won.

      Smaller processes mean less power; more processors per wafer makes them cheaper; etc.

      There's no race to be the first to the moon anymore, we just want technology in a package that fits the human - maybe even in terms of lifespan.
  • How many times...? (Score:5, Insightful)

    by rhadc ( 14182 ) on Sunday May 19, 2002 @03:18AM (#3545091) Journal
    How many times have we heard this prediction?

    I remember when 200 MHz was the end of the road. 'They' always manage to give us another 10-15 years. It's like drilling for oil.

    Besides, while MHz makes a big difference to speed, design is more important. Even if we hit this wall, we'd just continue to improve in other areas.

    This is a different kind of FUD, but FUD it is.

    rhadc
    • The predictions were made based on current technology. The first predictions were made on the basis that you were using a visible wavelength of light which put a limit on the track size. Then they moved to using X-rays which led to a smaller track size. It's like the estimates for how much oil we have left - they keep increasing because they either find more, or work out ways to drill deeper, extract current deposits more efficiently etc.
      • by anshil ( 302405 )
        I think the oil prediction is now at 40 years. It was 50 years, 15 years ago. Okay, it isn't that accurate, but oil is decreasing. I'm 24 years old, and I estimate that I'll see the beginnings of the end of oil. There will be huge wars for the few remaining resources, I will tell you.
        • by greenrd ( 47933 )
          There already are. Do you think the War on Afghanistan was about fighting terrorism?

    • by -brazil- ( 111867 )
      Yet it's a simple fact that Earth's oil reserves ARE limited and that exponential growth (or shrinkage) IS impossible in our limited universe. Pretending otherwise is just ignorance. With computers, it's not really a problem since nothing really crucially depends on getting more powerful computers all the time. Unfortunately, this is not so with fossil fuel reserves. Unless we find alternative energy sources, mankind is in really deep shit quite soon -- not when fossil fuels run out, but well before that, when they become much more expensive to get out of the ground. Realize that the comfortable predictions of 100 years or more of oil reserves include ones that will be 10 times more expensive to use.
      • Start cutting down and burying trees now!!!

        Sure it will take a little will, but all that oil will be there in the future.

        Help the future generations - think of the little children!!!
      • With computers, it's not really a problem since nothing really crucially depends on getting more powerful computers all the time.

        Oh, yes there is - the profit predictions and stock prices of several big companies.
  • by bravehamster ( 44836 ) on Sunday May 19, 2002 @03:45AM (#3545117) Homepage Journal
    I say this is a good thing. Let the end of CMOS come. It's time for us to move forward. I think this is just the kick in the ass we need to really start focusing on quantum computing. IBM and Fujitsu both have quantum computing research divisions, and I wouldn't be surprised if there aren't quite a few companies out there very quietly working on it. The pressure for faster and better computing will drive us forward. And when the first 64-qubit computer comes rolling down the line, I'm certain Tom's Hardware will be there to tell us how many FPS's we'll be getting in Quake8 with it:

    Tom's Hardware: I can definitely say that this thing smokes. Unfortunately, due to quantum uncertainty we weren't able to give you an exact measurement of FPS's, but we can say with some confidence that it's between 189 and Infinity + 2. However, with quad-sampling anti-aliasing on, don't be surprised to see that number drop to Infinity + 1.

    Damn, I need to get some sleep.
    • 10. To decrypt those files Mulder stole from the Pentagon.
      9. John Connor has smashed your defense grid, and you need an edge, pronto.
      8. Nothing can cheat like a quantum aimbot in Quake 4...
      7. Negative ping times.
      6. The shifty eyed salesmen at CompUSA talked you into it.
      5. Opens up the exciting new possibility of quantum porn.
      4. Windows.NET 2010 runs like a dog on your 2048-cpu, 900 Teraflops cluster with 8 petabits of ram.
      3. The ability to render away the clothes, in real time, of your favorite TV show.
      2. Your scheme to perform nuclear yield simulations with imported Playstation 2's ended in a trade embargo.

      And the #1 reason to like quantum computing is...

      *drum roll*

    • (...and until we looked at it, your cat was either both dead and alive or neither dead nor alive, Mr. Schroedinger.) Ah, this is where the real adventure begins. But the thought of Bill Gates and Feynman diagrams sends a chill of dread down my spine. --dingding66@attbi.com
  • by bertok ( 226922 ) on Sunday May 19, 2002 @03:47AM (#3545119)
    But whatever technology is to take the place of the venerable MOSFET -- be it molecular structures, carbon nanotubes,
    MEMS, or other next-generation technologies -- must be invented now and developed full-bore over the next decade in order to be ready in time, Buss said.

    MEMS isn't an electronic system like MOSFET or CMOS, it's a method for making mechanical systems out of silicon. Oops.

    • Umm, first of all, it's EE Times. Second of all, the quote is from the VP of R&D at TI. Get your facts straight, knucklehead.
    • You can certainly use MEMS techniques to make a better electrical circuit. (Though I am not familiar with applications in digital devices.)

      MEMS techniques can for instance help in creating excellent on-chip inductors, important for RF applications.

      However, it is not a given that the Next Big Thing in digital devices will be electronic at all. Maybe we'll find ways to make micromechanics perform better than electronics.
  • by Have Blue ( 616 ) on Sunday May 19, 2002 @03:56AM (#3545133) Homepage
    Would it really be so bad if manufacturing advancement in the hardware sector slowed or stopped? Companies would be forced to develop new features (like MMX or AltiVec) to differentiate their chips. Work would shift to bringing the rest of the computer up to the top speed of the processors, which it has lagged behind by orders of magnitude for years. The oft-hated hardware upgrade cycle would slow down greatly. Machines would be useful for much longer, and depreciate less. Software developers could no longer rely on increased performance, and would be forced to do real optimization.
  • by colmore ( 56499 ) on Sunday May 19, 2002 @03:57AM (#3545134) Journal
    I don't think anyone is suggesting that this is going to be the end of increased CPU speed, just the end of the usefulness of a certain technology.

    I think perhaps the best thing that could happen would be about a five year freeze on increasing CPU power, so that the burden would again fall on the programmers to write good fast code.

    In the past five years, CPUs have increased in speed tenfold, but computers have gained little apparent speed (applications don't load any quicker, OSes don't boot any faster) and certainly haven't gotten *ten times* more useful.

    We have all these extra cycles, and all we can think to do with them is write slow, clunky but pretty window managers. (A criticism I lay against MS, Apple, and OS alike.) A pause in the mad rush for speed might give some time to think of what to *do* with all that power. DivX is a pretty specific use for so much general purpose hardware.
    • I think the next improvements will not be with the CPU, but getting the rest of the computer up to speed.

      Already, we're seeing that with faster speed expansion slot peripheral connectors, more efficient motherboard chipset architectures, and faster hard drive interfaces (Serial ATA could take hard drive data rates to 6-7 times what ATA-133 does now, and SCSI has reached 320 MB/second).

      Besides, given the technical know-how of companies like Intel, AMD, SGS-Thomson, TSMC, Toshiba, Kyocera, etc., I think we will probably see non-CMOS techniques of increasing chip density by 2005-2006 anyway.
    • I agree, to an extent. I don't see why we need to increase the speed of PCs indefinitely.

      On the other hand, computers are being put to good scientific use (remember: that's what we invented them for..), and in many cases the speed is a real limit to what can be done, so this may be a problem. (Luckily, many of these problems can be solved by parallel computing.. so imagine a Beowulf cluster of those, if you will.)

      Anyway.. I myself am looking forward to this happening: historically, when science or engineering gets 'stuck' like this, there is an explosion of creativity; suddenly, all ideas are worth considering.

      Given the interest and money in computers, we'll probably be seeing more innovation, and more original innovation, in computers than ever before.

      ..and more funding for basic research.. yeah!
    • Having CPU speeds come to a brief halt while process technology retools might not be a bad thing, you know? This would force the various hardware vendors to concentrate on other system components. If I could have any one "fantasy" system component right now, it would be a solid state drive. This would make more difference to performance than any new cpu around.

      C//
    • A pause in the mad rush for speed might give some time to think of what to *do* with all that power.

      You suffer from the fallacious idea that all anybody does is pursue better processors. A simple look around you will reveal that "the industry" as a whole dedicates some small percentage (probably less than 10%, measured by cash) of its efforts in this direction. The rest of us are... thinking about what to "do" with all that power.

      Re-assigning processor engineers to figure out what to "do" with that power would be a waste anyway. They design processors, they don't write window managers.

      Really, just look around; this is rather simple to debunk with just your eyes, unless you live in a fabrication facility... sheesh!
    • I disagree:
      Celeron 300A @ 450 MHz
      256 MB 100 MHz SDRAM
      IBM 75GXP 7200 rpm 30 GB drive
      Matrox G400 Max 32 MB
      TDK 32X CD Burner
      SB Live!
      D-Link 530-TX+ NIC
      Windows XP Professional; 1600x1200; ClearType and all other visual effects on

      Everything runs just fine. Plenty fast for just about anything. No, you don't need a 5 GHz CPU to run modern software.
  • by Restil ( 31903 ) on Sunday May 19, 2002 @04:06AM (#3545143) Homepage
    Certainly, the transistors can't get any smaller. However, you can increase your die size, and you can play fun games (like pipelining and branch prediction) that speed up the processor without speeding up the clock. You make each computation more efficient. You duplicate the processor and create multiple processors in one, running in parallel. Layer it vertically. Increase the bit size. Create more efficient opcode sets. And of course, eventually invent a technology that will get around the physical "limit".

    Then hit the software side. More efficient compilers. Less bloated programs. Develop more complex, but at the same time far more efficient, algorithms. Avoid the creeping featurism. Modularize. Streamline. Use the grey matter in your skull.

    Oh, and never forget. Whoever invents the miracle cure will make a buttload of money. So if there IS a way, it will be found.

    -Restil
  • In the 1970's and 1980's, people had really interesting ideas about parallel computing and all sorts of new processor design. But those ideas never made it to market, because by the time people managed to design and produce a non-mainstream chip, Intel, Motorola, and other "mainstream" vendors had already ground out the next version of their chip using the latest manufacturing techniques and a few tweaks.

    If chip performance hits a limit, people won't just be able to tweak an old design, make it a bit smaller, and beat everybody else in the market. People will have to come up with entirely new architectures and paradigms. And that will have many benefits besides merely continuing improvements in raw computing power.

  • There are so many other factors; it will give others (e.g. parallel computing) a go in the limelight for a while.

    After that, there are other technologies on the horizon, quantum computing is a way off but has amazing potential.

    • It's commonly thought that in the future, quantum computing won't replace the existing breed of computers that are out there today, but rather complement them.

      The one main point of quantum computing that makes it so appealing is its massive parallelism. Yes, this is fantastic, but a couple of bad points appear. Not every single program out there is made to be parallel. Not every program out there can be broken up into smaller chunks that can be done in parallel.

      Take, for instance, finding a Mersenne [mersenne.org] prime. It takes roughly 2 weeks of full CPU utilization to see if a high-value candidate is a Mersenne prime or not. These primes get harder and harder to find with time.

      Of course, the solution is an Internet-wide search like GIMPS, which DOES work in parallel to a degree, but the problem is still the same. Even if you could create a computer that tests 100,000 candidates all at once, each candidate would still take 2 weeks. Whether you're testing one value or millions, it still boils down to that single, inherently serial operation (a rough sketch of the test appears below this comment).

      Also, if I'm not mistaken (I could be wrong), current quantum configurations can switch nowhere near the speed of current processors. So in the case of finding our Mersenne prime, it might take even longer than that to test a single value.

      It's much akin to our human brain. The response time of a neuron is roughly 1 millisecond. Compare that to common computers that can do operations in nanoseconds. Which is more powerful? It depends. Do you want a high-latency, massively parallel computer, or a low-latency, serial computer? For vision analysis, AI, CG, etc., parallelism is key. But even still, I can imagine that today's sort of CPU architecture will be around for a long time to come.
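      A minimal sketch of the Lucas-Lehmer test, the classic check GIMPS runs on each Mersenne candidate 2^p - 1, shows why the work is so hard to parallelize: it is p - 2 squarings, each depending on the previous one. The exponents in the demo are just small known cases for illustration; nothing here reflects GIMPS's actual optimized code.

      ```python
      def lucas_lehmer(p):
          """2**p - 1 is prime iff s == 0 after p - 2 dependent squarings."""
          if p == 2:
              return True                # 2**2 - 1 = 3 is prime
          m = (1 << p) - 1               # the Mersenne candidate
          s = 4
          for _ in range(p - 2):         # each step needs the previous result
              s = (s * s - 2) % m
          return s == 0

      # Small known exponents; 11 and 23 are correctly rejected.
      print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31, 61, 89, 107, 127) if lucas_lehmer(p)])
      ```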

  • Not dead yet... (Score:2, Interesting)

    by lirkbald ( 119477 )
    We may or may not be approaching a wall on shrinking the circuits, but there are any number of architectural ideas still coming down the pipe.

    One cool one I've happened to have been exposed to: asynchronous design. The idea is to design the system without a clock -- basically the chip components individually signal to each other when they are done, instead of having a global clock deciding when everything is ready to move on. This has several major advantages: First, power consumption. A remarkable amount of power in modern processors is used just on clock distribution. By eliminating the clock, we can save a lot of power. Second, we can design for the average case, rather than the worst case. In synchronous computing, you have to be done by the end of the clock, no matter what. With asynchronous, you can use an approach that is faster most of the time, even if it occasionally takes a lot longer (see the sketch after this comment).

    There is, of course, a downside: by eliminating the global synchronization, you run into many of the same problems that one encounters in multithreaded software. There's a group working on this here at Caltech [caltech.edu]; some of the results they've gotten look very promising. I also hear that Intel and others are starting to use this internal to some of their newer processors. Anyway, the point is you don't necessarily have to up the clock speed to up performance; sometimes there's at least as much to be gained from using what you already have a little better.
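    A toy illustration of the average-case win described above: a clocked stage has to budget every cycle for its worst-case delay, while a self-timed stage signals completion as soon as it is actually done. The latencies and probabilities below are invented for illustration only; this is not a model of any real asynchronous design flow.

    ```python
    import random

    def compare(n_ops=100_000, fast=1.0, slow=5.0, p_slow=0.05, seed=1):
        """Total time for one pipeline stage, clocked vs. self-timed (arbitrary units)."""
        rng = random.Random(seed)
        latencies = [slow if rng.random() < p_slow else fast for _ in range(n_ops)]
        sync_total = n_ops * slow      # clocked: every cycle is as long as the worst case
        async_total = sum(latencies)   # self-timed: each op takes only as long as it needs
        return sync_total, async_total

    sync_t, async_t = compare()
    print(f"clocked: {sync_t:.0f}  self-timed: {async_t:.0f}  (~{sync_t / async_t:.1f}x)")
    ```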

  • So here's my theory of what will happen if we hit a wall in processor performance:

    1) Software developers will aim to better optimize the software.
    2) Hardware developers will work at moving software-dependent things off on to hardware.

    Some years back, I had a machine capable (at least to my untrained eyes) of full-screen, full-motion movies, under win 3.1. Of course, this was thanks to a $100 Sigma Designs VLB hardware MPEG decompressor, but ever since, I've wondered what all the excitement has been about in the last year or so with people talking about how great it is to have a CPU fast enough to handle movie playback. (one of these days, I'm going to put the old DX4-100 back together and see if I can get it to play dvd's through that card). But this seems to be a common trend. Stuff lives on hardware because it can be done fast. Stuff moves to software because it can be done cheap. Having major speed increases in the processor market has helped, but I think it'd be a hard sell to say that everything that's done in software currently couldn't be moved off into hardware. Find me 10 people that are convinced that hardware-accelerated 3d is soon to be eclipsed by software, and perhaps I'll consider that as an argument.
    Does this mean that everything needs to be moved off to hardware? Probably not, but I'd like to see some of it offloaded. Some could arguably be better off as hardware (I could be wrong, but I think a cheap usb camera duct-taped to a lava lamp would make a better random number generator than most of the algorithms out there.)

    As for software optimization, here's where the annoying part comes in. How many self-taught people know the difference between O(n) and O(2^n)? It's not the sort of thing you can rely on your compiler to fix for you. Perhaps we're coming to an age where the difference between putting in the time on a formal education and learning the foundations, versus buying a "Teach Yourself C++ in 10 Minutes" book, becomes apparent.
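    For readers who want a concrete picture of that O(n) versus O(2^n) gap, here is a stock example using Fibonacci numbers; it is chosen only because the two versions fit in a few lines, not because it has anything to do with the poster's projects.

    ```python
    def fib_naive(n):
        """Exponential time: the same subproblems are recomputed over and over."""
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    def fib_iterative(n):
        """Linear time: each value is computed exactly once."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # fib_iterative(500) is instantaneous; fib_naive(35) already makes roughly
    # 30 million recursive calls, and each +1 to n multiplies that work by ~1.6.
    print(fib_naive(30), fib_iterative(30))
    ```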
  • by petis ( 139263 ) on Sunday May 19, 2002 @05:26AM (#3545254)
    According to this paper [isy.liu.se] (pdf) entitled "Scaling of Electronics" from 2001, the following conclusions are drawn:
    * Moore's law will hold for 20 more years.
    * There is a potential performance increase of 10000x with current CMOS technology.
    * The minimum gate needs 12(!) electrons to switch.

    We'll see. I wouldn't hold my breath waiting for CMOS to hit the roof though.
    • The primary obstacle to continuing development on our current path will likely not be technological but rather financial.

      New fabs are increasing in cost at a dramatic rate. Unless the semiconductor market increases its growth rate substantially, we'll likely see that, while technologically possible, some next stage of CMOS development will be economically infeasible, as a fab won't be able to recover the cost of building it over its lifetime.

      We are not there yet, and not likely to get there for another ten years, but if present developments continue we will get there some 10-20 years from now.
  • Why this matters (Score:2, Insightful)

    by 00_NOP ( 559413 )
    I find all the "who cares" and "good" posts bizarre.

    The end of Moore's Law - or a 2/5/7/10 year hiatus - is very bad news.

    It means an end to cheaper faster computing power - and that means an end to expansion of the embedded sphere and the increasing use of computing power in business.

    In other words - slower growth, collapse of hardware industry (why buy a new machine if it's not any faster) and programmers out of jobs (what do we need you for - we have all the word processors we need).

    Bad, bad, bad...
    • Not necessarily any of those things, though. It only means an end to the current strategy.

      For instance, it may be what's needed to really push SMP and parallel systems toward the lower end. The chips might max out, so they'll have to sell more of them to make a profit... what better way than to make sure each user is buying 4 or 8 chips at a time? Same with code, programmers won't be so wasteful.

      Efficient code on 8-way CPUs just might buy us another 10 years, enough for a new technology to arise.
      • I can't imagine wanting 4 or 8 way SMP systems in the embedded space. Talk about a great way to destroy any chance of deterministic response or bounded time operation.

        • Well, true. I was talking about the desktop market, and perhaps even game consoles.

          I'm just having trouble picturing what you'd need in the embedded market, though, that a .01 process wouldn't be capable of handling. How much CPU does it require to calculate optimum fuel injector settings, for instance?

          Anything so truly massive that it needs that kind of CPU power can't possibly be realtime, can it? And if it isn't, offload it to a networked machine somewhere. At least that's how it seems to me.
    • In other words - slower growth, collapse of hardware industry (why buy a new machine if it's not any faster) and programmers out of jobs (what do we need you for - we have all the word processors we need).

      I disagree. What you're saying is that the only reason any of us have jobs right now is that computers are cheap and getting cheaper, and that once the balloon pops everything will collapse.

      In fact, once the balloon pops those of us who actually know how to deliver value to the customer will make it big, while all those who provide no value but only glitz and marketing brochures will perish.

    • It means an end to cheaper faster computing power - and that means an end to expansion of the embedded sphere and the increasing use of computing power in business.

      Your conclusions don't follow at all. If something happens and it becomes impossible to ramp up the clock speed, that doesn't mean chips won't get cheaper. If a fab can use a given process for, say, five years instead of the two or so years they can now, costs will go down. The R&D and fab construction costs can be amortized over more time, which should lead to real cost reductions.

      If anything it will be a boon to embedded stuff, as chips should be both cheaper and have more stable designs. Both good traits for embedded. As for hardware not getting any "faster", I don't think that will be a problem either. There are numerous ways to improve the performance of chips without changing a specific manufacturing process. Improved branch prediction. Improved cache controllers. Wider busses. Multi-core chips. While speed improvements might slow down, they wouldn't stop. The focus would just change from brute-force clock increases to better architectural designs.

      Programmers won't be out of jobs either. Well, the less skilled ones might be. Because there is an unbelievable amount of room for improvement in the software industry. A lot of code out there today is crap which is only acceptable because of the huge improvements in clock speed and memory density. If clock speeds stop advancing, improving the software will be one of the more effective means of increasing the performance of a given system. By the logic of "what do we need you for - we have all the word processors we need", we should all be out of jobs, because let's be honest, we had all the word processors we needed about a decade ago.
  • Okay, I admit it. I didn't understand a word of the article. It's not that I'm stupid (although some people might disagree), I just didn't really understand it.

    I do know that CMOS stands for Complementary Metal-Oxide Semiconductor and that it uses N- and P-type transistors to do logic functions (AND, OR, XOR), but after that, it's all a bit hazy.

    Can anyone provide a nice translation to English for us dummies?

    Thanks!

    • Re:CMOS? Huh? (Score:4, Informative)

      by bedessen ( 411686 ) on Sunday May 19, 2002 @09:14AM (#3545657) Journal
      Okay, I admit it. I didn't understand a word of the article

      Here's a few quick explanations of some of the key points mentioned in the article.

      The leakage problem: This is a really difficult and nasty problem. It arises from the fact that designing a chip involves trading off a number of things, among which are clock frequency, operating voltage and power dissipation. It turns out that as you increase voltage, it speeds things up but it also causes power dissipation to rise as well. Ask any overclocker. However, the speedup is roughly proportional to voltage, while the power dissipation goes as the square of voltage. Hence the operating voltage of chips has steadily been decreasing. The bleeding-edge research type chips are down somewhere in the 1V - 2V range. The problem here is that there is a fundamental property of the FET called the threshold voltage, the voltage at which (more or less) the transistor switches from being ON to OFF or vice versa. Of course it's not a sudden transition, so it's desirable to have the system voltage higher (say by 2X to 5X) than the threshold voltage, so that the transistors are turned ON and OFF fully. Otherwise, leakage occurs, and can become a very significant power drain if not kept in check. The problem is that due to physics and some other factors, the threshold voltage cannot be reduced easily past a certain point. There are tricks that the designer can use to attack this, but it's still a very fundamental issue. So what the circuit designers end up doing to meet the design criteria is play a large game of cost-benefit analysis with regards to power, frequency, system voltage, threshold voltage, area (die size), etc. (A small worked example of the voltage/power tradeoff follows this comment.)

      Masks: Integrated circuits are built up in layers. An extremely simple design might have 6 layers; modern CPUs might have 20 or more. Each layer is created with a mask that defines the features of the layer. While enlargement/reduction is used (meaning the mask features are larger than the features on the wafer), mask creation is still very difficult. It's like making a stencil with millions of tiny features. The photolithography involves very expensive machines with extremely precise optics. Indeed you might have heard of the push to "extreme ultraviolet" - this refers to the light source which shines through the mask and exposes features on the silicon wafer. The trend is to use smaller and smaller wavelengths, because the feature size keeps shrinking. The wavelength of light that is used must be significantly smaller than the smallest feature, otherwise you get interference/fringing/etc. Anyway, these masks are very expensive to produce, leading to very little room for error. You want to be sure that those masks are at least functional, and hopefully as bugfree as possible. To a certain extent you can work around some hardware bugs, but it's very stressful because of the huge cost and time delay (many months) of getting a design fabricated. Imagine what development would be like if compiling your source code one time cost you a million dollars and took 6 months. Now try to stay competitive in a market where everybody is screaming at you to get a product to market as quickly as is humanly possible. Simulation is the name of the game here.

      Interconnects: This refers to connecting together the individual transistors to form blocks, connecting the blocks to form modules, etc, up higher and higher levels. Interconnects do not scale well, it's just one of those complexity things. The number of interconnects goes something like N^2 (where N is the number of transistors), and this can quickly get out of hand. The problem is you can't just make the wires longer (by wires I mean the etched paths inside the chip, not the external things) because this increases their resistance and capacitance, which means that they must be driven "harder" to achieve a given performance. To drive them harder you must spend extra area on larger transistors (which just complicates things -- now the chip is even more spread out) or spend more power, which is usually not feasible. A stopgap measure is to use copper instead of the traditional aluminum for the interconnects, but this is only really a one-shot thing, it only buys you so much. Another way is to use more interconnect layers (expand in the "z" direction) but this has its problems as well. The most promising solution to the interconnect issue is with advanced CAD algorithms and plain old good design. Keep related modules close to each other, and design busses to shuttle things around longer distances.

      Capacitance: Capacitance is one of the worst enemies of the circuit designer. It means that on every transition of state, energy must be spent charging (or discharging) a dielectric. This is one of the main reasons for reducing feature size -- smaller things have less capacitance. The article mentions fully depleted SOI, which is basically a very extreme way of trying to reduce capacitance. The bulk substrate is silicon dioxide, an insulator, instead of pure crystalline silicon (a semiconductor). The effect is to decouple the individual transistors from the bulk substrate of the wafer. The result is much less stray capacitance, but the cost is that your transistors no longer work quite right, so it makes circuit design that much more complicated. The article also mentions high-k dielectrics, which basically is a way of increasing the "gain" or drive strength of a transistor without increasing its size, which is the normal way of doing things. It can be really quite frustrating: if a path in your circuit is too slow, you have to increase its drive strength. But this also increases the capacitance (which leads to more power dissipation) and now the thing that drives that circuit also has to be bigger (to compensate for the increased gate area), etc, etc. Any means of increasing the drive strength without increasing area is quite beneficial.

      I hope that was of some use to at least someone.
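      To put rough numbers on the voltage/power tradeoff described under "the leakage problem" above, the usual first-order estimate for CMOS switching power is P ≈ a·C·V²·f. The capacitance, frequency and activity factor below are invented purely for illustration; only the quadratic dependence on supply voltage is the point.

      ```python
      def dynamic_power(switched_capacitance_f, supply_v, freq_hz, activity=0.1):
          """First-order CMOS switching power: P ~ a * C * V^2 * f."""
          return activity * switched_capacitance_f * supply_v ** 2 * freq_hz

      # Hypothetical chip: 10 nF of effective switched capacitance at 1 GHz.
      for vdd in (3.3, 2.5, 1.8, 1.2):
          print(f"Vdd = {vdd} V -> ~{dynamic_power(10e-9, vdd, 1e9):.1f} W")
      # Dropping Vdd from 3.3 V to 1.2 V cuts switching power by (3.3/1.2)^2 ~ 7.6x,
      # which is why supply voltages keep falling toward the threshold-voltage floor.
      ```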

  • by dinotrac ( 18304 ) on Sunday May 19, 2002 @05:56AM (#3545296) Journal
    Chip makers complain because the "CAD Community" isn't coming up with solutions to some of their problems, but University R&D programs are unable to keep up with fabrication standards as the equipment gets more expensive.

    Isn't this a problem waiting for a few self-interested chip-makers to whip their wallets in the direction of a few universities?
    • Chip makers complain because the "CAD Community" isn't coming up with solutions to some of their problems, but University R&D programs are unable to keep up with fabrication standards as the equipment gets more expensive.
      Isn't this a problem waiting for a few self-interested chip-makers to whip their wallets in the direction of a few universities?

      As someone who has participated in CAD and microelectronics research in academia, I can offer a few informed opinions about this situation. Everyone has an opinion of what academia "should" be doing. Very few people take a hard look at -why- academic research in these areas has come to a standstill.

      Research money for CAD development in particular and microelectronics in general is almost impossible to come by. For research in IC processing, the big factor is the cost of capital equipment. Universities simply can't keep up - only industry can afford the multi-multi-million dollar capital equipment costs. Even research in analog and mixed-signal circuit design has pretty much collapsed, simply for lack of funding to fabricate ICs through third-party fabs. What little money is available is quickly soaked up by a tiny number of high-profile researchers, mainly in California schools. Unfortunately, these guys can't even come close to meeting the demand for students and research output.

      CAD is a somewhat different story. The capital cost of CAD research is actually pretty small. All you need are a few good workstations. The bigger problem is lack of qualified graduate students, and lack of any money to pay them. Everybody agrees that universities should do more CAD research, but the federal government won't write the checks (applied research should be funded by industry) and industry won't write the checks (too many intellectual property hassles with universities, plus no guaranteed return on investment). In fact, industry often acts as its own worst enemy by hiring away students and faculty both and eating the academic "seed corn" of future research.

      To top it off, any CAD researcher with even a moderately innovative tool will do MUCH better by leaving academia, finding venture capital, starting his/her own company, getting bought out by Cadence, and retiring wealthy.

      Graduate student recruitment has improved somewhat in the past year due to the recession, but as soon as the microelectronics industry starts to recover, you won't be able to hire a qualified graduate student or faculty member for love or money. It will take massive amounts of cash to change this trend, and frankly I don't see it happening here in the U.S. The microelectronics industry will have to rely on internal R & D from now on. Except for a few remaining high-profile programs, the academic sector is pretty much finished.
  • by AstroMage ( 566990 ) on Sunday May 19, 2002 @06:04AM (#3545303)
    For those of you who have actually read the article, note that it talks about two main issues- the problems with scaling CMOS below 10nm, and the rising costs of masks.

    But even the article repeatedly says that the mask cost issue is a problem for the little guys, not the large ones like Intel. They can and will cheerfully swallow $600k respin costs, and more, to tapeout a successful new processor. So this aspect won't hurt processor development at all.

    As for the CMOS scaling issue, the processor companies- i.e. Intel and AMD, have the pockets AND the incentive to find work-arounds. I promise you all that processors will continue to advance well beyond the year 2012. It may not be CMOS, and it may not be pretty :-), but it will work.

    So for all of you who posted asking "what will we do when processors no longer advance", let me set your mind at ease- THAT won't happen for a long while yet.

    Finally, while the subject of my post is "the end of processor advancement", I'll say a few words regarding other types of chips. I work as a hardware engineer for an ASIC house, and we produce at TSMC using the 0.18u process. The point is, that for our chips there is NO incentive to go to 0.13u or below. Nor will there be a reason for quite a while. The same is more or less true for MANY MANY other ASIC companies. So while the cutting edge- processors, Flash and graphic-chips companies will probably need to switch from CMOS to some other technology around 2012, that will in no way spell the end of CMOS, not for a VERY large segment of the ASICs market, and not for a VERY long time.

    • The point is, that for our chips there is NO incentive to go to 0.13u or below.

      Huh? Doesn't .13u imply more chips per wafer, and therefore a lower cost basis?

      C//

      • Huh? Doesn't .13u imply more chips per wafer, and therefore a lower cost basis?


        As long as your chip is mostly digital, then yes it might.

        However, you must evaluate whether the longer design time, increased mask costs and potentially higher tool costs (timing closure is a bitch on .18 and better) can be offset by the higher yield/wafer (i.e. if you expect enough volume to make it worthwhile).

        As for circuits with analog components: these don't shrink nearly as much as digital ones (indeed, they often grow, due to the exotic solutions which might be needed) with smaller processes.
  • You mean I have to wait till 2012 before there is a reason to remove all the bloat from software so that it runs at optimum speed and does something useful?

    Hmmm, what are the copy-protection-pushing politicians gonna use as an argument for slowing computers down with bloat in 2012?
  • READ THIS! (Score:5, Informative)

    by clark625 ( 308380 ) <clark625 AT yahoo DOT com> on Sunday May 19, 2002 @08:19AM (#3545550) Homepage

    I work in research at a university, and my PhD project is going to help solve this problem (and others) long before 2012. I can't get into specifics because of disclosure issues. But, understand that already a HUGE amount of work has been done behind the scenes and most other researchers don't yet know of what's to come.

    CMOS isn't going to die. Turns out that we're not limited in the horizontal direction like everyone predicted years ago (remember how lithography was always the big problem?). Instead, it's the vertical direction. Our gates are having to get too thin. SiO2 just doesn't work well in 10 Å-thick layers because of trapped charge and whatnot. Also, we can't properly control doping at very shallow levels.

    But all that doesn't matter. Strained-Si technology is where it's going. If you're interested, check out AmberWave [amberwave.com]. It turns out that we can increase the mobility of holes and electrons--so even older .18um fabs could easily be refitted with strained Si material and compete with the .13um fabs. Actually, it's even better than that--the increases in mobility have been up to 8 times over that of Si.

    No, CMOS isn't going to die. It's going to change and morph. Just like it has in the past. We don't need a revolution like many engineers are claiming--we simply need evolution. Strained Si is an evolution that will make for revolutions later. Current fabs can just swap out their current Si wafers and get strained Si ones--most everything else in the fab stays the same. Talk about a huge cost savings to boot (no need to rebuild a new fab for billions).

    • Orthogonal plays (Score:3, Insightful)

      by NanoProf ( 245372 )
      Historically, increased CMOS speeds have come from one thing: shrinking the features. Atoms being small, this works for quite some number of doublings. Techniques such as strained Si, alternative gate dielectrics, etc. are a qualitative change in strategy. They have the potential to help, but they don't have the long-term extendability that we've seen from shrinkage. Let's say strained Si gives a factor of 8 in mobility. That's great, but in 3-4 years it's done and we need some other idea orthogonal to the previous one. Having to come up with a qualitatively new enhancement every 3 years is very different from the make-it-smaller world to date.
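      Back-of-the-envelope arithmetic for that claim, assuming the usual rough figure of about 18 months per performance doubling (an assumption on my part, not something from the article or the parent post):

      ```python
      import math

      mobility_gain = 8.0                   # factor quoted in the parent post
      doublings = math.log2(mobility_gain)  # a one-off 8x gain ~ 3 doublings
      years_per_doubling = 1.5              # assumed Moore's-law cadence
      print(f"{doublings:.0f} doublings ~ {doublings * years_per_doubling:.1f} years of headroom")
      ```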
  • With the date for this moment set around 2012 and with no replacement technology in sight...

    I've seen so many people say something like this, and each time I get really vocal. CMOS will die. Eventually. Big deal. We're counting oxide thickness in angstroms now ("how many atoms are in that?"), but get this -- gate tunneling leakage, source-to-drain leakage -- they're making this a technology we wouldn't want to take further. That's right, DC current is becoming astronomical.

    Replacements? The first one I think of is BiCMOS. That's our old standby. Current FET beta ratios are quoted at 100, but it's lower for each newer technology. Bipolar, on the other hand, is 300. That means that a bipolar transistor is 3 times as strong as a FET in terms of the current it can source (or sink). Bipolars are big, and currently yield poorly. Throw the weight behind the technology and I bet we get some of that learned down. (For the curious, it yields poorly because to make a pnp transistor out of n silicon, you have to dope a big bowl of p, then a smaller bowl of n doped really hard to overcome the p you just did, and finally a pretty small bowl of p doped exceptionally hard to overcome the n you just did. Think about how CMOS makes a p-type FET on p silicon -- a light n doping to make an n-well, then you can dope your source and drain.)

    Oh, and Research [rpi.edu] is being done all the time to replace CMOS.

    "No replacement technology in sight". Bah. Maybe for consumers. I'll throw my professional weight behind this: "All CMOS replacements have their own strengths and weaknesses, just as CMOS does. Some of them are already better at what we have CMOS do."

  • ...the US patent office will close some time before 2012, as there will be nothing left to invent.
  • Why does this become "a real problem"?

    It would seem to me that the rate of development in technology could slow or even pause for a while and still not become "a real problem". But then maybe I don't understand.

    From what I see of things, we already have plenty of wonderful technology that isn't being used to its fullest. I wonder whether the real problem is that we aren't first taking full advantage of the technology we have now, finding more efficient and productive ways to use it.

    Maybe, in a Douglas Adams sort of way, it's because we already have the answer, we just don't know what the question is. Just what is it that we're trying to accomplish? Do we know that?

    I know that in the last few decades the microprocessor and memory seem to have replaced the muscle car. Bigger, faster, badder is better. It's a macho thing, sure. But what really is the point? Why is this "a real problem"?

  • Okay.
    First, how is this the end of innovation? Is the current increase in CMOS detail every year innovation, or just a method being refined? Exactly. It's refining.. not innovation.

    Necessity is the mother of all invention... we've all heard that one before, and it's true. If there is a need for more computing power, we will have it 10 years from now, at the point this article talks about.
    Oh.. and how many technology predictions about how things will be in 10 years are accurate? Not many.

    As for computers being 'fast enough'... that's 2-edged. We can deal with a slowdown in computing advancement at the moment.. we aren't stuck. The rapid increase in speed of CMOS technology has meant less effort in developing better algorithms, tighter code, parallel computing, etcetera. There is plenty of room for more work to squeeze more out of our computers. The paradigm can change.

    Still, there are other technologies out there. There is much more that can be done once we reach the limit of CMOS detail. What about going to chips with more layers? Newer materials that can aid in cooling? Thicker chips with more components? Bigger chips? There are many avenues we can explore to get more speed out of our chips than mere detail.

  • In every other industry, the name of the game is being able to do more with less resources. And in every other industry, quality has improved, productivity has improved, and more can be done now with fewer resources!

    In the software industry, the name of the game is using as many resources as possible to get what you want done. And in the software industry, quality has remained steady, productivity hasn't improved since the first word processors and spreadsheets, and now software takes up more resources than ever before!

    The software industry has been in this situation for decades, and the day that Moore's law slows down is the day that software, like all other goods and services, will need to do more with less. And when that day comes, expect the quality of software to improve drastically, and expect productivity to improve as well.
  • We don't necessarily have to reduce transistor size to improve ICs. We can, at least as an interim technology, use a better semiconductor than the dirt cheap but fairly mediocre "Silicon" that has been in use for decades.

    What about gallium arsenide? Crays used to use this, as did many other supercomputers. Sure, it would make your processor poisonous but it's a small price to pay. Who licks their CPU more than a few times a week anyway?

    What about Germanium? Germanium is an excellent ... though fairly expensive semiconductor.
    IBM has made incredible progress actually creating a hybrid semiconductor of silicon and germanium, which can be read about briefly here [ibm.com]

    Has there ever really been a time in which electronics engineers have been stuck such that computer technology could not advance? No, but there have been many, many times in which there were predictions about how the limits of a technology would stop everything up X years down the road. While this is a good thing, because R&D firms start trying to find the next big thing before it is already needed, does anyone really believe that in ten years we will have no means to increase the number of transistors (or whatever is used then) to improve what they are used in?
    • Has there ever really been a time in which electronics engineers have been stuck such that computer technology could not advance?

      Yes. The main computers used in academia were around 1 MIPS in 1969, and were still around 1 MIPS in 1983. DEC was stuck at 1 MIPS for a long time.

  • If we actually DO reach the limits of CMOS before the next technology becomes available, we'll simply end up with computers with multiple CPUs. Given that in today's world we are processing more complex tasks (which can generally be broken down into multiple threads) and doing more things at once, multiprocessing is more and more obviously the way to go.

    AMD's Hammer chips, for example, use a bus which is designed to make SMP systems easy; you just chain the CPUs along. You can have odd numbers of CPUs. I don't think they do ASMP, so you are still stuck with the problem that they all must run at the speed of the slowest CPU, but that is a relatively small price to pay. Eventually we'll all be using systems with more than one CPU. It looks like, the way Hammer is set up, AMD could actually do processor modules which plug into one socket (or slot or whatever they end up with for Hammer, I'm sure it'll be a socket) and have multiple CPU dies in the same package -- if they could just work out a package that would handle this. Then you'd drop it into your SMP-capable motherboard (a matter of BIOS more than anything else) and bam, you'd have an eight-processor Hammer system.

    Of course, I haven't done all my homework, so there may be reasons other than packaging why this wouldn't work, but it seems to me that their bus standard is intended for this kind of thing, the idea being to minimize the glue logic and support hardware necessary to do SMP. It would be fantastic if they even offered chips which had TWO processors in them, let alone more. But I'm pulling for about eight. Just think, a single-socket board could be an eight-processor 3D graphics rendering powerhouse, especially when coupled with four-way interleaved DDR333 memory.

  • They've been saying the end of the road is 10 years out for around 20 years now. And every few years a new discovery is made that shifts it out another 10 years. So I'll start worrying if in 2010 they're still saying the end of the road is in 2012.
  • by gelfling ( 6534 ) on Sunday May 19, 2002 @06:32PM (#3547335) Homepage Journal
    Remember the good old days when a good engineer could race a computer to a solution with a circular slide rule? I do. Then there were complete IC based computers and we couldn't do that anymore. Then around 1987 we all said 25 nano lithography was the theoretical limit of the physics. Which of course was wrong because it was based on materials science that was already old.

    At any rate - I don't feel comfortable making prognostications about technology 10 years in the future. And every time I think about it, I also think about Turing's paradox: if you need 10 years to solve a problem today, but in 3 years you will probably have the technology to solve it in only 5 years, then you should wait 3 years to start and you will be 2 years ahead of the game already.
  • Maybe this is what it takes to bury the x86 family. By then chip designers will have to do better than just shrinking and speeding up the chips.
    Customers would need to compile software for all sorts of architectures, and therefore would demand open-source software.

    BTW, when were we all supposed to buy IA-64 machines?
  • Isn't that the end of the world according to the Mayans?

    After Earth computes the answer to the ultimate question, then it won't be needed any more will it?
  • It may be the end of the road for general-purpose CPUs, but the door is wide open for more specific hardware solutions. For example, no one questions that having custom texture mapping hardware is The Right Thing. You'd need a 10GHz CPU with its own power supply to do what a GeForce 2 does.

    In the past, the prevailing opinion was that custom hardware was a bad thing. Remember Wirth's Lilith? And Lisp machines? But this is changing, especially as CPUs continue to run hotter and get more and more complex. Ericsson uses a functional, concurrent language for some of its development--cutting edge stuff. Because CPU manufacturers continue to ignore power consumption and heat generation (you do not want a two pound heat sink in embedded systems), they designed their own processor to run their language. This is no big deal any more: you can use an FPGA. What did they find? They got a 30x performance increase over high-end Ultra SPARCs, they cut power consumption by over 90%, and the custom processor solution is cheaper to manufacture in quantity. This is going to become more and more common. The "Look! I got a 12% increase by buying an $800 CPU that uses 20% more power than the last one" incremental frame of mind is coming to a close. Why nickel and dime the increases when there are HUGE leaps to be made with currently available technology?
