Hardware

Silicon Will Get CPUs To .07 Micron

ruiner writes: "This post at EE Times reports that silicon dioxide can apparently be used as an insulator down to a .07 micron process. This will buy processor manufacturers a few more years to develop solutions for smaller processes."
This discussion has been archived. No new comments can be posted.

  • Too bad all these processes only allow companies w/billions of dollars in the bank to play.
  • That's all well and good, but are we really that interested in watching Intel and AMD push the same tech up a few more mega- (or giga-) hertz? I'm all for milking the technology we have, but it's about time they moved on to something a little more exciting. The next step is 10 GHz, not 1.1 GHz. We are fast approaching a time when 200 more MHz doesn't mean a thing.

    SM
  • It's really sad that people have to keep delaying the inevitable wall in processing power. I say scientists should just face it now and forget about working on .07 Micron microprocessors.
  • Oh boy! another how-low-can-you-go estimate from Silicon Valley. Next week it will be .007 microns.
  • Perhaps it would be better, then, to push for more efficient programming use of multiple processors in a single system? From what I know (which isn't a lot), we could make more use of parallel processing than most systems currently allow for, or am I completely wrong?
  • All these advances in processor technology are great, but whenever something like this happens, you will have forgotten about the actual discovery by the time it comes to market in about 10 years, once everybody's expectations of their computers have ramped up to what will then be delivered. We consumers have it even worse than the commercial customer because our products come last. A .07 micron mainframe will come before a .07 micron Pentium XI, because they need more clock cycles.

    What I'd really like to see is a helluva powerful consumer device released before its commercial counterpart. Not just helluva powerful, but something that would be like comparing a GeForce 256 DDR to a CGA card.

    "Assume the worst about people, and you'll generally be correct"

  • by slothbait ( 2922 ) on Monday April 24, 2000 @08:19AM (#1113115)
    > The next step is 10Ghz, not 1.1Ghz

    What an excellent idea! Why didn't *my* company think of it first! Forget that piddly 1 GHz crap, why don't we just jump straight to 10? I'll get right on it...

    Are you really trying to tell me that you were content with your 100 MHz Pentium Classic right up until a month or so ago when the 1 GHz chips came out? All of those small jumps in the middle there didn't mean a thing, I suppose.

    While it's true that the steps by which companies increment their clocks should be increasing (they should now be releasing in 50-100 MHz steps, not 33 MHz steps), the percentage jump should stay constant: 1.5 GHz : 1 GHz :: 150 MHz : 100 MHz. (I think I got that notation right.)

    Now, I *do* think that the race to 1 GHz was kind of silly, but hey! It was marketing. Faster is better, but clock isn't everything. Intel hasn't released a new core since 1996, and you can really feel it. Coppermine and others are slight improvements, but they really need to get their new architecture (IA-64) out the door. Athlon is still eating their lunch.

    just a disgruntled computer architect,
    --Lenny
  • by kawlyn ( 154590 ) on Monday April 24, 2000 @08:19AM (#1113116) Homepage
    Don't get me wrong, this is cool and all, and will probably result in much faster chips, but what are faster 32-bit x86 chips on a PC platform really gonna do for us?

    Faster chips are great, but x86 is getting tired, and the I/O on a PC is really limiting the usefulness of the chips we're capable of making now. We need a platform capable of using what we have now.

  • by Anonymous Coward on Monday April 24, 2000 @08:20AM (#1113117)
    Some readers are probably wondering what all the fuss is about size (as we all know, technique, rather than size, is what is important), so here's a quick introduction to the subject.

    Electrical signals travel at the speed of light. Therefore, the smaller you can make a circuit, the faster a signal will get from one end of it to the other. And of course you can pack more of them into a given area, leading to smaller die size, which equates to more units per wafer, higher yields, and lower prices.

    At this point we're getting to the limits of what can be done on a silicon substrate. The problem here is that with circuits smaller than 0.07 micron, you are in danger of splitting silicon atoms if you pump any energy at all through the circuit. Yes, you read that right -- splitting silicon atoms, resulting (theoretically) in a release of energy equivalent to the Hiroshima bomb. This, as you may have guessed by now, is the real reason the US government considers high-powered CPUs to be "munitions". Just imagine what could happen if a bunch of Islamic terrorists got hold of a few thousand such CPUs and set themselves up as a mail-order PC company.

    This is not, by the way, a problem unique to ICs. The real reason for the classification of data compression products as "munitions" is related to this too. You see, if data is compressed too much, the atoms comprising the individual bits can actually begin to participate in atomic fusion, leading (for a 32 kb block of data) to a release of energy equivalent to the original H-bombs of the 1950s. There are some papers here [usgovernment.com] to document all this.

    Just goes to show... the government doesn't always tell you the real reasons for the decisions they make, but that doesn't mean those reasons aren't justified.

  • And ever they shall continue to speed up, grow smaller, and generate more heat. I'm still waiting for them to end this old idea of using silicon and move to something that can be used more easily and run faster. Technology can make the "bio" chips, etc., but the question is: do they delay because the new tech is still new, or because Intel/AMD want to squeeze a few more bucks out of what they've got? Which leads to another question: how far along are we with these other technologies, and are they truly as fast/cool as we're made to believe?
  • Speed is all very well and good, but don't forget that having chips run with less voltage will let them run at least somewhat cooler. This is great, but I guess it also means that I won't be able to heat my home with my computer while I'm playing Quake III.
  • Agreed, pushing the MHz is no big step ahead anymore. Making quantum processors work would be one huge step ahead, as it would render many boundaries of current theoretical computer science moot. But would Intel and AMD still be the main contenders in that market?
  • This is interesting. We often hear about how new technology is going to completely and totally change everything overnight. Yet it seems to me that this rarely if ever happens. Even the internet has taken a few years to become mainstream and change the way we live. Granted, some technological advances change the playing field immediately, for example, quantum computing will wreak havoc on the Cryptography world. But it seems that often no matter how advanced the new technology may be, it takes time for it to fully impact society.
  • Goody. If the rate of CPU growth slows, it'll force people to realize speed gains by actually writing more efficient code. Eat that, Wintel!
  • Sad that people have become cynical to the point of suggesting stopping scientific advances.

    You may say that other animals don't even consider how many transistors they can fit on a piece of dirt, and they still live fine. However, other animals don't take showers, have life expectancies far shorter than ours, and generally don't care about short-term advancement. Sure, they'd like to evolve into us someday, but it doesn't depend on the discovery and work of individuals to do that.

    We are at a time in our evolution where individuals do matter more than the individuals of other species. We didn't evolve ourselves to rocket into space, we worked at it. That is why we are here.

    "Assume the worst about people, and you'll generally be correct"

  • Are you really trying to tell me that you were content with your 100 MHz Pentium Classic right up until a month or so ago when the 1 GHz chips came out?

    Well, I'm still content with my P2/233, and I run NT at home. If I were a Linux user, I'd probably be happy with a P100.

    --

  • by chazR ( 41002 ) on Monday April 24, 2000 @08:27AM (#1113125) Homepage
    This is stretching the limits of physics. A 70 nm layer is only about 200-300 Si=O bonds thick. We're almost in the area where quantum effects become an overriding concern. I can't be bothered to work out the probability of an electron with a given voltage tunnelling through a layer this thin, but I suspect that we are in the area where voltage regulation and temperature control become *very* important. Put another way, these babies won't be candidates for aggressive overclocking.

    What this really means is that we *may* have a little longer to go before we have to start using 'exotic' oxides. This is good news. One of the great things about SiO2 is that its manufacturing properties are well understood (although, at this size, lithography is going to be, er, interesting).

    And they say there's a chance that they can take it even further. Gordon Moore will be pleased. His law looks good for the foreseeable future.
  • Chip makers seem only to be concerned with whether the gates will live for 10 years. While most 10-year-old chips are so old that they're not even keychains anymore, it seems odd that there's an almost guaranteed failure several years down the road. Most people (at least in the US) get a new computer at least once every ten years, but in less affluent countries this may not be the case. However, it seems that the chip makers (read: evil corporations) are forcing this upon them.
  • by Ralph Wiggam ( 22354 ) on Monday April 24, 2000 @08:29AM (#1113127) Homepage
    That sounds great from a science point of view, but just not realistic from a business point of view. Let's say a big chip company puts no money into .07 micron technology and dumps every last R&D dollar into truly next-generation CPUs. What if the R&D doesn't produce a working chip until 2007? Do you think a spokesperson for AMD could take a podium in 2004 and say, "In response to Intel's announcement of 6.4 GHz CPUs, we would like to ask everyone to hold off for three years, when we will deliver our 150 GHz chips... maybe."? They might as well fire everyone and lock the doors. The trick for those companies is to split the funding between evolutionary and revolutionary R&D so that they can keep products coming down the pipeline right up until that huge leap can be made. I certainly don't envy the people drawing up that budget. If you want to give it a shot, try predicting the weather for June first of next year; "hot" won't cut it.

    -B
  • I asked some masters students at Carnegie Mellon when physical improvements with current techniques would stop being possible and speed would be determined by efficient code. They said something like: without quantum, we've got about 20 years before everything is as small as it's going to get, and running a bunch of chips in parallel will slow things down a lot because of communication time (a rough sketch of that limit follows this comment).

    I figure that once companies realize the limit, they'll collude and only release faster processors in small increments in "competition" with each other to leech as much money as they can. The alternative is that the first one to reach the limit sells to everyone to get the money before the other guy (Intel vs AMD), but then everyone is out of business because the market is satisfied until freaky quantum or holographic or whatever technology is developed.

    How close is anyone to that stuff?

    Instant Crisis
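
    One way to put a number on the communication/serial-overhead limit the students mentioned is Amdahl's law: speedup = 1 / ((1 - p) + p/n) for a parallel fraction p on n processors. A minimal C sketch, with p assumed at 90% purely for illustration:

    #include <stdio.h>

    /* Amdahl's law: even with 90% of the work parallelizable, speedup
       flattens out fast as processors are added - and real communication
       overhead only makes this worse. */
    int main(void) {
        double p = 0.9;   /* assumed parallel fraction */
        int n;
        for (n = 2; n <= 64; n *= 2)
            printf("n=%2d -> speedup %.2f\n", n, 1.0 / ((1.0 - p) + p / n));
        return 0;
    }

    At n=64 the speedup is still under 9x, which is the sort of diminishing return they were alluding to.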

  • First of all, a proton stream (electricity) travels faster than the speed of light... (which is why... theoretically... wire is faster than optical). But that's off topic. As for the rest, I'm not sure if you're actually speaking the truth or just saying something that SOUNDS good but has no real point except to scare people who try to get the fastest chip. (Look honey, a P3 2GHz! Power BOOM!) And you didn't really back up your story... www.usgovernment.com/secrets/ ??? Couldn't you have BS'd a better site? I am curious to know if any of this story is legit, so if anyone knows, please reply.
  • I actually do that. I use my computer to heat my bedroom. I go in and turn it on a little while before I want to use it if my room is a little cool. The room heats right up. I've often joked about my twenty-five hundred dollar space heater. How many people do this? Do I smell a new poll?

    PS. Sorry if this message is duplicated. I'm having trouble submitting. This T3 just doesn't cut it sometimes.

    /*--Why can't I find the QNX OS on any warez sites?
    * (above comment useless as of 4-26-2000)
    */
  • Why don't companies start pushing dual-proc systems? I mean, P3 450s are dirt cheap, so why not slap in two and get a theoretical 900 MHz out of it?

    Plus with the dual proc, it allows for more flexibility. Say one proc could be used for all the I/O tasks and the other could be used for whatever else is needed. I think that would be a more efficient way than 1 blazing fast processor. Am I right?
  • . . .This will buy processor manufacturers a few more years to develop solutions for smaller processes. . .

    Can Slashdot pul-eeze not use the marketing buzzphrase "solution"??? I'm sorry, I hate this meaningless management word and loathe seeing it in my favorite tech spot. The wise Hemos could have just written "develop smaller processors" and meant the same thing.

    Sorry, I started thinking "solution" was overused when Rite Aid started printing it on its receipts. "Your total store solution" or somesuch rot. . .

  • Sounds like a .05 um channel length to me. 1.5 nm oxide thickness is mindblowing, considering I'm working with 70 nm right now. It's great to see silicon die decreasing in size. Too bad it doesn't relate to CPUs at all, at least not at such an early stage in the research. To get reproducible results on a large scale, entire processes will have to be reworked; this will take at least five years to become anywhere near viable for a CPU. Oxide will get ICs to .07 um, but it will take a while for the CPU usability of this process to reach maturity. We've had .25 micron for over ten years. It's only been in use for the past three.

    Lithography is going to take a while to catch up, too. The traditional Novolac/Diazonaphthoquinone (DNQ) resist won't stand up at such a small feature size. It should be interesting to see if PolyHydroxyStyrene (PHS) can even hold up well at such a small feature size. 157 nm lithography is a ways off for industry. 193 nm is nearly standard now, and it's an _extremely_ easy process to wreck.
  • Here's the previous "story" on CPU exploding virii [slashdot.org]

    you are in danger of splitting silicon atoms if you pump any energy at all through the circuit. Yes, you read that right -- splitting silicon atoms, resulting (theoretically) in a release of energy equivalent to the Hiroshima bomb

    So here's a quick way to end the world...
    while (1);

  • I think most people misunderstand what's going on here.
    The story is about the SiO2 insulator thickness. This oxide sits under the gate of the transistor which is used to control the flow between the source and drain.
    This is 0.07um OXIDE THICKNESS, which is NOT the same as the gate length. The gate length is the usual parameter quoted when referring to a process (i.e. 0.25um, 0.18um, etc).

    The problem is that if the gate oxide is too thin, you really screw up the transistor. All sorts of nasty reliability problems with hot carrier damage and whatnot. (Not to mention device performance.)
    But you can't have super-thick oxides relative to the gate width. There would be millions of tiny thin, but tall patterns which you have to expose, wash and deposit properly and it just doesn't work.
  • Hey, I may be wrong, but if it were possible to split an atom with a circuit board, do you really think they would be using tons of super-high explosives and huge missiles to catapult these things halfway around the world? No.
  • No kidding, I've got a dual celeron Seti Screen Saver heater in my back room. I kept the pipes from freezing on my ski vacation.
  • The answer to this quandary lies not in technology, but in economics. The production of new technologies is a piece of the equation, but until those technologies become tenable within the market, nobody but the research institutions and the government will possess them.

    Thus, a new technology has to be researched and produced, and later mass-produced as the costs per unit begin to drop. A combination of lowering costs and high consumer demand make these things household realities.

    -L
  • although, at this size, lithography is going to be, er, interesting.

    Interesting? Try near impossible (though I refuse to say impossible; sub-micron was once considered impossible). .13 is accomplished using 150 nm light; how much lower can we go? Lithography has consistently been the hold-up in manufacturing, not the substrate.
  • Then don't use x86. There are other platforms such as Sparc, Alpha, etc. These chip technologies won't just affect x86.
    ----
  • These smaller processes are great... assuming anybody can figure out how to use them. The problem is, as processes get smaller, the actual chip stays the same size (or sometimes gets bigger). The problem with this is that as processes get smaller, so do the wires on the chip. A wire of the same length as in a previous chip would be slower in the new chip because of the reduced driver sizes, thinner wires (increased resistance), and the relatively unchanging capacitance. (The capacitance per unit length stays about the same at smaller sizes because of fringing effects.) This can make device performance very low, especially if you have wires running from one end of the chip to the other, not to mention more susceptible to noise. There are ways to combat these problems (like inserting inverters periodically along long wires to reduce noise and improve speed--Intel actually has a requirement for this, although it does chew up more power), but I don't think anybody has found a way to get the full performance out of what we currently have. (A back-of-the-envelope sketch of the wire-delay scaling follows this comment.)
    However, I must say that getting rid of the x86 architecture will certainly help....
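
    A rough C sketch of the wire-delay point above, using the distributed-RC (Elmore) approximation delay ~ 0.38*R*C. Every number here is assumed for illustration only:

    #include <stdio.h>

    int main(void) {
        double rho = 2.2e-8;     /* copper resistivity, ohm*m (assumed) */
        double len = 1e-2;       /* a 1 cm cross-chip wire */
        double c_per_m = 2e-10;  /* ~0.2 fF/um; roughly constant due to fringing */
        double widths[] = { 0.25e-6, 0.07e-6 };
        int i;
        for (i = 0; i < 2; i++) {
            double w = widths[i];             /* assume a square cross-section */
            double r = rho * len / (w * w);   /* resistance rises as width shrinks */
            double delay = 0.38 * r * (c_per_m * len);
            printf("width %.2f um: R = %.0f ohm, delay = %.1f ns\n",
                   w * 1e6, r, delay * 1e9);
        }
        return 0;
    }

    Under these toy assumptions the same 1 cm wire is roughly 13x slower at .07 micron, which is why the long cross-chip runs, not the transistors, become the problem.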
  • I used to use a 6200 series TENCOR unit to count dirt (yes, I was a dirt counter, what of it) on silicon wafers after processing, and it could see dirt smaller than that... so if the laser can distinguish dirt at that size, perhaps the beam width is that size or smaller. Maybe they can start burning the traces instead of using the photolithography etching technique. I think people are trying to stretch the envelope of one tech without rethinking other techniques... of course, they only have billions of dollars invested in equipment that does it that way. Plus, who would want to move the ion implanter out of the building... it's only the size of a bus, in a building with 8-foot doorways. Maybe they don't change because no one wants to move that crap. They should just buy a few cases of beer and call some (sucker) friends over!
  • well... 90% of the computing market doesn't support more than one processor, for one. For two, P3-450s are dirt cheap because they're not being made anymore. And for three, in almost any case besides using the BeOS, I feel that people are better served by one fast processor rather than two slow ones.

    But if you do want a dual-processor system, you can buy one from SGI, Dell, Gateway, IBM, Hewlett-Packard, Intergraph, VA Linux, and many others. There's also a huge selection of motherboards available, if you're willing to get your hands dirty.

    And the example you're pointing to... That's rather unfeasible with today's operating systems (though probably completely feasible in the mainframe world). But for what you're asking for, it sounds like you just want a good SCSI controller. Keeps your CPU usage down to around 3% most of the time. Yeah, it costs money and the drives cost more too, but with the savings you'll have from buying a machine and mobo with only one processor, it won't work out too badly, and you'll probably notice a bigger bang for your buck.
  • by foghorn19 ( 108432 ) on Monday April 24, 2000 @08:51AM (#1113144)
    How close is anyone to that stuff?

    Silicon technology is still a bulk technology. The most likely candidates for further circuit miniaturization are what is called "molecular electronics." These involve using organic molecules with dimensions of several dozen angstroms for switches and interconnects. People are already working very hard on metal contacts to organic molecular components. There was a special issue of the Proceedings of the IEEE on Quantum and Nanoscale Devices, and an article in it titled "Molecular Electronics" by Prof. Reed of the Yale EE department surveys the field.

    The most striking figures in that article were (i) a very tiny organic molecular diode which operated at room temperature with voltages around +/- 0.5 volt, and (ii) a resonant tunneling device at room temperature with similar voltages. These are highly practical voltages and temperatures! The biggest obstacle is to integrate these devices.

    The variety of possibilities offered by organic molecules in conjunction with metals and other solid materials is simply staggering. What is going to open the floodgates is development of techniques to integrate these tiny devices with tiny interconnects in an inert matrix.

    Zyvex [zyvex.com] and the Foresight Institute [foresight.org] website are the best resources for information on this subject. Particularly, the writings of Eric Drexler and Ralph Merkle [merkle.com].

  • by chazR ( 41002 ) on Monday April 24, 2000 @08:53AM (#1113145) Homepage
    Advances like this first get used on 'real' computers - serious SMP servers like IBM's SP series of RS6000s, Sun's high-end servers (Starfire), Compaq's WildFire Alpha boxes (drool) and, soon, servers based on AMD Sledgehammer and Intel Merced (Itanium) / Willamette chips.

    Machines like this are used for *serious* numbercrunching. They predict the weather, model the economy, help design planes and spacecraft and find oil. These are tasks for which there is still a serious demand for MIPS.

    Because of the astounding cost of developing these technologies, it takes years for them to trickle through to the desktop.

    I admit that when decent processors get to the desktop, they are wasted. I did some low-level monitoring of my mother's PIII 450 recently. She runs Win98 and MS Word. The processor spends 99.2% of its time idle, and 60% of its active time is spent waiting on cache misses. The cache miss problem isn't going away any time soon, because memory is still not getting faster at a high enough rate. The only realistic cure is for compiler writers to continue developing *very* clever optimisers. This is happening, but optimisations like this are deep magic.

    I/O in modern servers using proprietary technology is awesome. Check out the IBM SP servers for more info. (Can't find the link - I have it on CD). Unfortunately, PCs are hampered by 'legacy' technologies like PCI. There is at least one serious attempt to address this - the Next Generation I/O project [intel.com]

  • Too bad all these processes only allow companies w/billions of dollars in the bank to play.
    Yeah. Damn shame. Modern civilization is oppressing the home hobbyist. Why can't I set up a .07 micron process in my kitchen? It must be a conspiracy. If these lousy scientists weren't all in the pocket of big business we'd see a better world.
    I've got seventy bags of cement and some turbines, but can I generate massive amounts of hydroelectric power? Hell no. Thanks to the military-industrial complex, they've crippled the technology so that it requires a river to function. A river! Who has one of those in their apartment? Nobody. It's all due to the power companies and the oil companies being in bed with Microsoft!
    --Shoeboy
  • by Plasmic ( 26063 )
    Moderate this comment up.
  • There are so many errors in the comment that I'm replying to that I am aghast that somebody would post this.

    Electricity is essentially the movement of electrons, not protons. Protons do not (normally) move from atom to atom. Electrons certainly DO NOT travel faster than the speed of light. Electrons have finite mass; if they traveled at the speed of light, the Lorentz transformation equations assert that they would have infinite mass. I assure you that the electrons in your computer do not have infinite mass. In fact, electrons usually travel far slower than that. The conduction speed in copper wire is in the range of a few thousand meters per second at best. Light (well, actually photons) travels faster than anything else. The speed of light is an absolute speed limit in our universe and is a fundamental physical constraint. Your comment is both off-topic and wrong.

    As for the rest of your post, you've been suckered into responding to a quite funny comment. There's just enough techno-mumbo and vaguely plausible sounding physics to get Slashdot karma whores (tm) to reply with corrections (in analogy with StreetLawyerGuy and DumbMarketingGuy). Kudos to all three.
  • That makes no sense. What do you think the 20th century of computing 'belonged' to? *SOMEONE* had to invent the vacuum tube, *SOMEONE* invented the transistor. And I don't think Lee de Forest or Shockley and his group were geologists!
  • by smasch ( 77993 ) on Monday April 24, 2000 @08:57AM (#1113150)
    No, this is transistor size. The oxide thickness they are talking about is 15 angstroms, which is far smaller than 0.07 microns. Current oxide thicknesses for modern processes are on the order of 100 angstroms or less (which is 0.01 microns or 10 nm).
  • The compiler can only do so much.

    Especially if the programmer messes things up by trying to hand-optimize!

    Having the programmer use goto statements and hand-unroll loops usually makes things worse. It is far, far better to have the programmer concentrate on the high-level algorithm design.

    Small-scale hand-optimization will do squat if your algorithm is exponential.

    The compiler can do a whole lot if the programmer lets it do its job. That includes making careful use of C++ inlining and templates! (A tiny illustration follows this comment.)

    --
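
    A minimal C illustration of the point about algorithms beating micro-optimization - no amount of loop unrolling rescues an exponential algorithm (the function names are made up for the example):

    #include <stdio.h>

    long fib_slow(int n) {   /* O(2^n): recomputes every subproblem */
        return n < 2 ? n : fib_slow(n - 1) + fib_slow(n - 2);
    }

    long fib_fast(int n) {   /* O(n): plain code the compiler optimizes well */
        long a = 0, b = 1;
        while (n-- > 0) { long t = a + b; a = b; b = t; }
        return a;
    }

    int main(void) {
        printf("%ld %ld\n", fib_slow(30), fib_fast(30));
        return 0;
    }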

  • It's really sad that people have to keep delaying the inevitable wall in processing power. I say scientists should just face it now and forget about working on .07 Micron microprocessors.

    Yeah, I agree. I also think that we shouldn't have bothered to improve RAM storage capacity over the years. After all, 640K ought to be enough for anyone.
    ----

  • Couple this new technology with the tiny hard drive technology mentioned on Slashdot a week or two back and we could eventually see supercomputer wrist watches! Or, for that matter, many other devices. We may just see the next really big technological advance in our time. But we must always remember: the government takes roughly 30% of every single paycheck earned in this entire country (U.S.), and on top of that they take about 6% on most retail sales. And there are countless other taxes. There are millions and millions of people working and paying taxes. My point: they have an amazing amount of money and plenty of agendas. They must have absolutely amazing technology. I mean, they have thousands of the very best scientists and practically unlimited funds. One wonders what they have already had for years... and more importantly what they are doing, or planning on doing, with it. But anyway, I'm excited to see where all this goes... If anyone here has any further information about anything like this already in the works (conspiracy theories, or people already planning on using a lot of these new experimental technologies together), please reply =)
  • Quantum electron tunneling can be modelled to first order by psi(x) = e^(-kappa*x) inside the barrier, where kappa = sqrt(8*pi^2*m*(U_b - K_e)/h^2), K_e is the kinetic energy of the electron, and U_b is the height of the potential barrier. The tunneling probability through a barrier of width L is then |psi(L)|^2 = e^(-2*kappa*L), where the standard rule for squaring complex amplitudes applies (psi*psi_bar). My impression (i.e. I didn't plug in any numbers) is that quantum tunnelling effects will not be significant at normal operating parameters if the insulation material has a sufficiently high dielectric constant (greater than 3.5 should be enough).
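
    For anyone who does want to plug in numbers, here is a rough C sketch of that estimate, assuming a ~3.1 eV Si/SiO2 barrier, the free-electron mass, and a 15 angstrom oxide (all assumed figures; compile with -lm):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double hbar = 1.055e-34;  /* J*s */
        double m    = 9.11e-31;   /* electron mass, kg */
        double eV   = 1.602e-19;  /* joules per eV */
        double Ub   = 3.1 * eV;   /* assumed barrier height */
        double Ke   = 0.5 * eV;   /* assumed electron kinetic energy */
        double L    = 1.5e-9;     /* 15 angstrom oxide */

        double kappa = sqrt(2.0 * m * (Ub - Ke)) / hbar;
        double T = exp(-2.0 * kappa * L);  /* first-order transmission estimate */
        printf("kappa = %.3g /m, T = %.3g\n", kappa, T);
        return 0;
    }

    The probability per attempt comes out tiny, but multiplied across hundreds of millions of transistors switching constantly, it is easy to see why gate leakage becomes a first-order concern at these thicknesses.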
  • Two OC'd AMDs, an OC'd Intel, three underclocked Cyrixes, three IBM 21's, and a pair of 15's. Plus the stereo and sometimes a laptop. The thermostat for the baseboard heater died back in 1998. The room stays a steady 65 through the dead of winter with the monitors off and the window cracked. With the monitors on, I need to open the window fully or my pair of AMDs start crapping out because the room is 100 degrees and they've hit 150.
  • Wake me when they deliver some really exciting news about significantly faster bus speeds and RAM... or greatly improved cache.

  • You're making it too easy on the guy. Weather?

    Ask him to predict exactly how much rain a particular county will have that morning. Then he might begin to appreciate what these guys do.
  • Well, that's all well and good if that's what makes you happy. I'm not sure where you live, but I'm in America and we speak English here. American English is a combination of many languages along with MANY valid words we just made up ourselves. I personally don't much care about the Latin language. If you do, that's fine.

    /*--Why can't I find the QNX OS on any warez sites?
    * (above comment useless as of 4-26-2000)
    */
  • This will keep us using 30-year-old technology for another 10 years. It's time to find a replacement for silicon.

    tcd004

  • by Greyfox ( 87712 )
    They've been saying we'll reach the end of the useful life of silicon for quite a while now. And each time they get close, they figure out how to bum it down by a few microns. I betcha when they get close to .07, they'll figure out how to bum it down again...
  • Try E-Beam, or ZPAL, or conformable contact lithography, or X-Ray, etc. Dimensions below 70 nm are achieved regularly TODAY. The manufacturability (throughput, cost, etc.) is an issue, but the processes are very possible.

  • Precisely. Economics and political agendas, which are closely bound, are key factors in scientific advancement. The reason the lunar mission happened so quickly was the politics and economic relations with the former U.S.S.R. during the Cold War. We acted largely out of fear. Likewise, if there were the economic and political incentive to go to Mars, we would likely be there by now. We have the "technology" right now for things like wireless broadband, videophones, cars that hover above the ground and drive themselves to the destination you name audibly, and intensely realistic virtual reality. But to get these technologies off the paper and into the factory, or, where prototypes exist, into large-scale production, requires the right economic and political incentives.
    ----
  • It's nice that oxides that thin don't degrade over time as badly as feared. Now all we have to worry about are
    • gate tunneling
    • channel-dopant mismatch
    • utterly abysmal P-channel performance due to short-channel and vertical-field effects
    • oxide variations (we're down to fingers-of-one-hand atom counts)
    • skyrocketing leakages
    • rising interconnect delays
    • others that I don't remember offhand.

    Looks like smooth sailing to me!

  • Those new technologies aren't even truly technologies at this point; at best they're like the first transistors back in 1947-48: big, clunky, hard to make. It will be years before they get to the practical stage, and there are some who believe that some of the new technologies won't pan out, period.

    Note that some of these, bio and mechanical nanotech, have the same sort of power dissipation problems that conventional semiconductors have. Computing takes power to perform an operation; you make it fast and it burns a lot of power, so you try to make it smaller to reduce the power per functional unit. There are proposed methods to get around this; see
    http://www.ai.mit.edu/~mpf/rc/home.html
    for an example.

    Sometimes technologies are delayed because the established leaders want to hold onto their positions - the US phone companies and digital access is an example. Usually the delay is not very large, and the old guard is passed by. Sometimes technologies are fueled by the current leaders, who want to hold onto their lead and know a little history.

  • It won't really affect "us" in the PC world for a while. There are fundamental changes that the Personal Computer will have to go through before 0.07 micron chips for a Personal System make sense. The effect will hit the big-time server and supercomputing centers first, which is natural.

    I doubt that a 0.07 micron chip will be used in anything for the average consumer for over a decade. By then we will hopefully have a "PnP"-type parallel CPU array and FireWire-type PnP for hot-swapping peripherals... making 150 GHz not all that impressive.

    --// Hartsock //
  • Actually, you proved my point. The reason the plural of virus is viruses is that it's the only thing that makes sense in English. Viruses isn't a Latin word at all. The point is that the made-up "Latin" alternatives don't make sense. The Oxford English Unabridged lists "viruses" as the plural.
    ----
  • First off, why waste time with silicon? Gallium Arsenide is faster, for less effort.

    Secondly, companies aren't going to flog off any more computers, just because the processor is smaller. It would make much more sense, IMHO, to use the scale improvements to build multi-processor CPUs. (If the next generation of Intels or AMDs packed 16 ix64's into a single unit the size of current processors, with all necessary SMP stuff thrown in, you'd have a truly powerful computer.)

    Last, but not least, why use all these improvements on the processor? It's not been the bottleneck for years! If you designed a wafer-scale RAM chip at 0.07 microns, you'd be looking at computer memories in the region of 512+ TERAbytes! Can you imagine how responsive KDE would be with that?

  • Actually, I don't think they will be "organic" molecules, but molecular-scale "hard" materials. Organic implies biology, and biology has nothing to do with nanoscale engineered components.
  • Check out InfiniBand for what could be the newest, greatest I/O thing since sliced bread... well, maybe. At least IBM, Intel, Sun, Compaq, HP, etc. are all hoping it will be. A lot of work is being put in.

    http://www.InfiniBandta.org/home.html
  • by Anonymous Coward
    Gives new meaning to the term "dirt-cheap computers"...
  • what do you do in the summer?
  • The dirt counter doesn't actually "see" the dirt. It uses a property known as oblique Reynolds scattering to detect the glint that dirt reflects back at a photodiode. Photolithography is the only practical technique to mass produce ICs. If they were to use a laser to trace each path, it would literally take years for a single chip to be made (think of how many miles of circuit paths there are in even a small IC).
  • Yup, electrons flow pretty slowly through copper and other metals, but the electric field itself travels closer to the speed of light (depending on the dielectric). That's what is important. In AC circuits, the electrons never get very far, and in DC/switched circuits, they don't go nearly as far as the field does during a switch time.

    His comment was pretty wrong though. I'd love to see a proton stream... and live through it.
  • It's not a Latin word? What have you been smoking? My Latin dictionary (don't have it here right this sec) says that 'virus' means 'poison'.

    The plural of 'virus' is 'viri'.
  • Oh, that means no notepad.exe with a really helpful (read: annoying) paperclip/dog/shakespeare/...

    This always brings up the notion that the base linux kernel actually improves in speed, given the same functionality as earlier releases. Of course, once you add all sorts of snazzy modules in...
    (talk about offtopic)
  • not only those, but the smaller they get, and the closer some of the features, the better chance of having parasitic bipolars in there. Great stuff, especially when they lock up...
  • OK, a silicon transistor as small as 0.07 um (that's the drain-source distance AFAIK) will work, but that doesn't solve everything. I've seen somewhere between .02 and .03 given as the limit for silicon (a previous Slashdot story talked about a .03 um transistor realized in a lab). The real problems are more practical.

    First, how do you build a CPU with a .07 process? You cannot modify the current lithography process to do that. It would require far UV, for which no transparent materials are known. The alternatives range from X-ray (IBM) to electron beam (Lucent), but none of these alternatives is close to being production-ready.

    The second practical problem: cooling! A .07 um CPU the size of a PIII would contain ~200,000,000 transistors. Since it would probably run at a couple GHz, the heat will likely be close to a kilowatt - impossible to cool with just a fan. Plus it would also cost ~$30/month just to leave your computer running 24/7. Breaking the Linux uptime record would thus cost about $1000 in electricity. (The arithmetic is sketched below.)
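
    Checking that electricity arithmetic in C (the rate is assumed at ~4 cents/kWh, a typical circa-2000 US figure):

    #include <stdio.h>

    int main(void) {
        double watts = 1000.0;                        /* ~1 kW, as estimated above */
        double kwh_month = watts / 1000.0 * 24 * 30;  /* = 720 kWh per month */
        double rate = 0.042;                          /* assumed $/kWh */
        printf("$%.0f/month, $%.0f over 3 years\n",
               kwh_month * rate, kwh_month * rate * 36);
        return 0;
    }

    That prints roughly $30/month, or about $1000 over a multi-year uptime run, matching the figures above.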
  • How can we get rid of PCI anytime soon? PCs are still being made with ISA slots! PCI is far from dead in any case. The new PCI-X standard will rejuvenate PCI and extend its life for the foreseeable future. Some info here at serverworks [serverworks.com]. Quote from the article:
    PCI-X is a backward compatible extension of the widely accepted Peripheral Component Interconnect (PCI) standard that forms the basis for the I/O systems for personal computers, workstations and all classes of servers. PCI-X permits the transfer of data between a host CPU and I/O peripherals at speeds in excess of 1-GByte per second, twice as fast as the 533-MByte per second supported by today's fastest (66 MHz) PCI buses, and eight times as fast as the 133-MByte per second peak rate available on most contemporary desktop and laptop personal computers. Industry analysts project that the PCI-X standard will have broad impact on high-end systems used for traditional data processing applications. It will also find broad acceptance in emerging markets for server appliances, storage-area networks (SANs), and high performance network switches. The first commercial products incorporating PCI-X technology are expected to arrive in the market during the second half of this calendar year.

    1 GB/sec is pretty fast compared to the maximum transfer rate 64-bit 66 MHz PCI offers today (533 MB/sec). The PCI-X bus is a 66 or 133 MHz, 64-bit peripheral bus. Looks like it can burst faster than a theoretical AGP 8X. Maybe Intel will make an AGP-X bus for graphics ;) (The bandwidth arithmetic is sketched below.)
    --
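
    Where those bus figures come from - bytes per second is just bus width times clock (66 and 133 MHz are nominally 66.67 and 133.33 MHz):

    #include <stdio.h>

    int main(void) {
        double bytes = 64.0 / 8.0;  /* 64-bit bus = 8 bytes per transfer */
        printf("PCI 32/33:  %.0f MB/s\n", 4.0 * 33.33e6 / 1e6);
        printf("PCI 64/66:  %.0f MB/s\n", bytes * 66.67e6 / 1e6);
        printf("PCI-X 133:  %.0f MB/s\n", bytes * 133.33e6 / 1e6);
        return 0;
    }

    That reproduces the 133 MB/sec desktop, 533 MB/sec, and ~1 GB/sec peak figures quoted in the press release.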
  • will I be able to download porn any faster? (an indirect simpsons reference)

  • 5. Breakup of Microsoft Predicted
    4. Death of x86 Architecture Predicted
    3. Death of FORTRAN Predicted
    2. Year of the Network Predicted (okay, so they finally got one right)

    and finally, the #1 headline we've seen too many times in the last 20 years...

    1. Death of Moore's Law Predicted!

    --j
  • Yes. My wife and I moved into a new house a couple of years ago, and when we were planning out how to use the floorspace, she wanted the study, which turns out to be the coldest room in the house in the winter, as it is on the corner, sharing a wall with the garage, and is the farthest room on the ground floor from the furnace. I took an extra bedroom upstairs as my 'lab'.

    She talked me into trading. I didn't mind, because the study is built over a crawlspace and is closer to the phone lines, cable and power; overall it was a much more convenient location to wire up my network and stuff. After I got my two servers, my masq box, cable modem and my workstation into this room, we noticed that it was appreciably warmer. Almost too warm. We did this around Thanksgiving and I was toasty all through the winter. Worried now, though, about how hot it will get in the summer. Might have to think about putting some ceiling vents in or something.
  • HAH!! Finally, the concept has come full circle. In the olden days (before Windows), code had to be as efficient as possible. The younger programmers are just discovering that efficient coding works faster?? Efficient code is ugly, but looks aren't everything. Functionality is part of the solution as well. The only reason CPU speeds have gone as far up as they have is the sloppy coding for Windows, which makes a speedup necessary in order to operate at a decent speed. Why do you think a short-to-the-point solution is called quick & dirty? Call me the old man from the mountains, but Windows has gotten slower and sloppier each time a version is "Upgraded!!", and I would rather have an operating system that is efficient, not pretty. Cosmetics can be added later.
  • Likewise, if there were the economic and political incentive to go to Mars, we would likely be there by now.

    Definitely. I heard someone once say on /. that you could put people on Mars for the cost of the movie "Titanic."

    Most likely, nobody would have made it to the moon if Kennedy hadn't died. The whole country felt it owed it to him to go to the moon, because that's what Kennedy wanted. So off to the moon they went, and did it before the decade was out; people put a stupid little flag there, just like Kennedy wanted.

    But then a funny thing happened. Nobody cared anymore. Man stomped around up on that rock for a while and then retreated home, and hasn't gone back since. Earthbound people said you could be spending NASA's money on the poor (Fools! Do you really trust politicians to spend that extra money wisely?).

    Here's to hoping that the International Space Station will allow new missions to the moon, since the shuttle can now refuel there and move on, perhaps launching a module out of the cargo bay.

  • Virus is a Latin word. Viruses is not. The root is virus, Latin for poison, yes. But the plural is not viri. Check out this link [perl.com] for more information. Not every word ending in -us has -i in the plural. Virus is a defective noun. Even some Latin dictionaries have that wrong.
    ----
  • The x86 architecture is not the only architecture to use silicon. Every CPU out there today uses silicon, so something like this is not only for x86. So an announcement like this affects pretty much all computers, not just x86 PCs.

    Chris Hagar
  • This argument played out a couple weeks ago on another story...

    Result: Everyone left still thinking that they were right

    1) English is a dynamic language. What comes into common usage becomes language. Therefore, Virii/Viri is/are word/s.

    2) It was a Latin base, so therefore Viri/Virii is the proper plural.

    3) It may have been a Latin base, but it's an English word, and the dictionary says it's viruses.

    4) You all suck, shut up and go away.

    I think that pretty much summarizes the argument.
    I won't take sides, but the longer this goes on, the more I agree with #4...
  • I don't think you read the article correctly. The article refers to a 70nm process (i.e. gate length = 70nm, lambda = 35nm). The thickness of the gate oxide is 15 Angstroms, or roughly 1/50th of 70nm. yow!
  • by Christopher Thomas ( 11717 ) on Monday April 24, 2000 @10:04AM (#1113189)
    Solid state photonics is coming, and there's nothing you can do about it.

    Solid state photonics will still have its feature size limited by the wavelength of the light used within its devices. _Current_ integrated circuit chips use feature sizes that are much smaller - by the time photonics matures, it will already be left in the dust as far as density is concerned.

    Use smaller wavelengths of light? Not unless you want to destroy your material by photoionization.

    Your next logical argument is to point out that most proposed photonic devices are three-dimensional. My logical counterargument is to point out that you can build three-dimensional electrical devices too. It's just currently cheaper to shrink 2D fabrication processes.

    Your next probable point is to make noise about propagation delay in electrical circuits. It turns out that these aren't the limiting issue in conventional ICs - heat dissipation is.

    Your next likely point is to say that a photonic circuit would have less heat dissipation. My response is that I'll believe it when I see it. Absorption happens, and whatever diode lasers are pumping this device won't be perfectly efficient either.

    Lastly, I'd like to point out that most of the effort that goes into designing integrated circuits goes into designing the logic, not the fabrication processes. Computer engineers would still be employed in your hypothetical universe. Electrical engineers design motherboards and specialized analog ICs, both of which would still exist, so they wouldn't be out of work either.

    Summary: Photonics is not the magic wand you hold it out to be.
  • Note that when it mentions backward compatibility, it has the same ramifications as putting a 33 MHz PCI card in a 66 MHz slot. Yeah, it works, but it slows everything else down to its speed (bus width doesn't have the same impact). PCI-X is almost a misnomer, given how different a protocol it is from standard PCI. Good stuff, no doubt about that, and it helps fill the gap between current PCI implementations and the Next Big Thing (Future I/O? Infiniband? Sliced Bread?)
  • wire of the same length as in a previous chip would be slower in the new chip because of the reduced driver sizes, thinner wires (increased resistance), and the relatively unchanging capacitance. (The capacitance per unit length stays about the same at smaller sizes because of fringing effects.)

    Capacitance is still (AFAIK) dominated by the diffusion capacitance of transistor sources/drains connected to the wire. Second contributor, IIRC, was gate capacitance. Both of these go down with feature size.

    You might point out that gate and drain area will only go down in one dimension, as I'll be sticking more devices on the bus, but they'll still go down.

    Wire resistance similarly isn't a huge contributor AFAIK. In all of the sets of parameters that I've seen, even a long bus wire would have resistance lower than the effective resistance of a MOSFET in saturation mode.

    Lastly, while your drivers get smaller, the W/L ratio of the gates remains the same. This means that, should you be inclined to melt down your circuit, you could still pass the same amount of current through a smaller MOSFET.

    Now, as far as using intelligence is concerned... Most of the cynicism I've seen expressed both towards coding and towards IC design has been put forward by people who aren't doing coding or IC design (in general; I don't know what your personal qualifications are). The fact remains that while boneheaded code gets written and while boneheaded ICs are most likely designed, there are still companies that do it right. These gain market share, grow complacent, and fall to the next group that does it right, continuing the grand cycle.

    My point being that you aren't likely to get order-of-magnitude performance improvements by "using intelligence". The people you're competing against already are.

    As far as the ultimate limits of communication on smaller, faster chips are concerned, I doubt this will become a serious problem. Designers will simply focus more on pipelining and asynchronous operation of modules to relax system-wide signal timing constraints.
  • Perhaps it would be better, then, to push for more efficient programming use of multiple processors in a single system? From what I know (which isn't a lot), we could make more use of parallel processing than most systems currently allow for, or am I completely wrong?

    It turns out that, for several reasons, multiprocessors aren't likely to dominate desktops for a few years yet.

    The first reason is that systems with multiple _discrete_ processors are more expensive. You need to pay for multiple processor modules, and the motherboard needs a more complex chipset. Joe Average Gamer is better off spending the same amount of money getting a top-of-the-line video card, and a new single processor six months later. Joe Average Non-Gamer doesn't need a multiprocessor for email and office apps.

    The second reason is that writing good parallel code is much more difficult than writing good sequential code. Race conditions, interprocess communication, and so forth add plenty of complexity (a minimal example follows this comment), and compiler tools won't save you - parallelism is designed in at a higher level than compilers deal with.

    The third reason is that interprocessor communications bandwidth and memory coherency overhead are *big* problems for multi-processor systems, and they keep on getting bigger as more processors are added. Something like a Starfire, for instance, isn't a large set of processors and memory with a bus tacked on - it's the Bus Network of the Gods with processors and memory tacked on as an afterthought. It has to be, to handle supercomputer communications loads. This means that a lot of the money you spend on a parallel system *won't* be on processing power. If, on the other hand, you're willing to wait another design generation, you can get a comparable processor for a much lower price.

    The fourth reason is that while we could indeed integrate many old cores on a new die, we get better performance by doing other things. Adding more cache, for instance, or adding fast, complicated FP units that would have taken too much silicon to add before. Making a bigger translation lookaside buffer (important with a 64-bit address space). Improving branch prediction (a big source of stalls). Adding deeply pipelined load/store units (another big source of stalls). Or adding whatever other performance-enhancing widgets are invented over the next five years. Multiple cores are an interesting idea, but at _present_ aren't the most effective way of increasing performance.

    All of these factors mean that parallel processing isn't used except by those who really, *really* need it (dual-processor doesn't count).

    Now, the caveat.

    Once cache performance saturates - and it eventually will - we'll have a lot of silicon to play with when moving to smaller linewidths. At the same time, we'll also have to break chips into asynchronous pieces to solve the clock skew problem. We may also be reaching limits to superscaling (scheduling is an NP-complete problem, and approximations reach diminishing returns eventually). At this point, it starts to make sense to put multiple cores in a chip, along with the coherency logic and communications pathways needed.

    However, I don't see your desktop machine running a processor like that for 5-15 years, for the reasons mentioned above.
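
    A minimal C illustration of that second reason - the simplest possible shared-memory program is already broken (compile with -lpthread; the counts are arbitrary):

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;

    static void *work(void *arg) {
        int i;
        for (i = 0; i < 1000000; i++)
            counter++;          /* read-modify-write race: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, work, NULL);
        pthread_create(&b, NULL, work, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);  /* usually less */
        return 0;
    }

    Fixing this takes locks or atomics, which cost performance and design effort - exactly the complexity sequential code never pays.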
  • Ha! Try a K6-2/400 system in a TigerDirect midtower case. Now that thing gets toasty. Add to that an iMac 400MHz DV (that thing is a freaking space heater in itself; I fear it's going to melt through the desk) and a 486 with no case. I have a nice warm room. ;]

    In the summer, I leave the iMac and the 486 off and put big fans around my 400MHz Linux box. Works great.

  • Say one proc could be used for all the I/O tasks and the other could be used for whatever else is needed.

    The I/O processor would be idle much of the time on a modern system. Most of the I/O load goes through bus mastering devices these days. The CPU queues it up, and gets an interrupt when the transfer completes (in some cases, overhead is reduced by holding the interrupt until several I/O requests complete).

    What is needed is good processor affinity so that the cache stays warmer. The Linux 2.3.x kernel is moving in that direction with internal structures. 2.2.x already has CPU affinity for user processes.

    There are situations where dedicating one CPU to a specific process can be a good idea, but it's not common enough to be in a mainstream kernel.

  • I used to do some work with GaAs in semiconductor lasers, and I have no idea where you get the idea that GaAs is less effort. For semiconductor lasers it is, but for device fab the major problem is the lack of a native oxide. What this means in layman's terms is that you can process silicon, let it oxidize (rust), use a photomask to lay down a pattern, etch away the oxide in the pattern, and start over again. There is no such native oxide for GaAs, which means that you have to somehow invent a non-native oxide such as GaAlAs, which is a real pain in the ass. As for GaAs being faster, this is also only partially true. I can't remember exactly, but I believe that at low frequencies GaAs has a higher electron mobility, but this effect drops off at higher frequencies to the point where GaAs and Si are similar in speed. The net result is that GaAs has a limited range of applications for which it is actually better, and it always costs more. While it's true that the cost of GaAs is decreasing rapidly, the cost of Si is decreasing just as fast or faster.
  • Capacitance is still (AFAIK) dominated by the diffusion capacitance of transistor sources/drains connected to the wire. Second contributor, IIRC, was gate capacitance. Both of these go down with feature size.


    You might point out that gate and drain area will only go down in one dimension, as I'll be sticking more devices on the bus, but they'll still go down.

    Wire resistance similarly isn't a huge contributor AFAIK. In all of the sets of parameters that I've seen, even a long bus wire would have resistance lower than the effective resistance of a MOSFET in saturation mode.

    Re: wire capacitances - not true (at least directly) anymore, AFAIK.

    We work with various companies in the 0.13 micron region, and our customers are continually complaining about the effects of wire impedance (including resistance, not just capacitance!) on their designs - particularly because the current state-of-the-art logic synthesis tools do not MODEL this wire impedance well and end up making lousy decisions which impact the tail end of the design flow severely. The relative impact of wire impedance versus cell delay is only expected to get worse as feature sizes shrink.

    The performance of individual logic gates has been scaling pretty well as feature size goes down, particularly because companies can spend so much time and computation analyzing and refining each cell. The behavior of the wires, especially in the context of large-scale routing across large "geographic" areas, is analyzed in a far cruder manner, making it much harder to get reliable results.

  • That's an interesting observation. But Moore saw that trend, too. The January issue of Physics Today [aip.org] had an interesting article titled "Physics and the Information Revolution" that described Moore's Second Law. This corollary to the more famous Moore's Law applies a geometric progression to the cost of successive generations of IC foundries. The Physics Today article even postulated that one day the cost of the tooling to make the next generation of ICs will exceed the GNP of the entire world economy, thus setting a practical upper limit on the technology. (A sketch of the progression follows this comment.)

    So it's a matter of which wall we hit first: the physical or the economic.
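
    The corollary as a geometric progression, with assumed round numbers (a $2B fab today, cost doubling each ~3-year process generation, world GNP ~$30T):

    #include <stdio.h>

    int main(void) {
        double cost = 2e9;    /* assumed fab cost today */
        double gnp = 30e12;   /* rough world GNP */
        int gen = 0;
        while (cost < gnp) {
            cost *= 2.0;      /* assumed doubling per process generation */
            gen++;
        }
        printf("fab cost passes world GNP after %d generations (~%d years)\n",
               gen, gen * 3);
        return 0;
    }

    Under these toy assumptions the economic wall is a few decades out - in the same ballpark as the physical one.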

  • There were some 3D chips I saw a few years ago; they had vertical interconnects rather than horizontal. The catch was these were DRAM chips, the idea being you could stack several small (cheap) ICs to make one large one. This might be feasible with processors if they had teeny tiny gates and a relatively low clock, so they didn't generate too much heat.
  • A smaller die means less electrical resistance, which means *drumroll* less energy dissipation! The smaller the die, the less electricity it needs, so it produces less heat.
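
    One way to see the scaling: the textbook first-order model for CMOS dynamic power is P = a * C * V^2 * f, so a shrink pays off linearly in switched capacitance and quadratically in supply voltage. A quick sketch with made-up numbers:

      # First-order CMOS dynamic power: P = activity * C * V^2 * f.
      # All numbers below are made up for illustration only.
      def dynamic_power(activity, c_switched, vdd, freq):
          return activity * c_switched * vdd ** 2 * freq

      old = dynamic_power(0.1, 30e-9, 3.3, 500e6)  # larger process, 3.3 V
      new = dynamic_power(0.1, 20e-9, 1.8, 500e6)  # shrunk die, 1.8 V

      print("old: %.1f W   new: %.1f W" % (old, new))  # ~16.3 W vs ~3.2 W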
  • I think most people misunderstand what's going on here. The story is about the SiO2 insulator thickness. This oxide sits under the gate of the transistor which is used to control the flow between the source and drain. This is 0.07um OXIDE THICKNESS, which is NOT the same as the gate length. The gate length is the usual parameter quoted when referring to a process (i.e. 0.25um, 0.18um, etc).

    Usually, it helps to understand the topic you're discussing. I grew a 700 angstrom (0.07 µm) oxide last week on my PFET wafers. If you were correct (which you're not), this article would be over fifteen years late in coming. Also, "wash and deposit" are not terms used in industry. We use "develop, etch and diffusion"; you forgot a step, too. The size referred to by 0.25 or 0.18 µm is the feature size; the gate/channel length is lambda, which is half of that. Please, for your own sake, understand the topic at hand before posting.
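
    For anyone keeping the units straight at home, a trivial check (using the poster's lambda = feature_size / 2 convention, which not every textbook shares):

      # Angstroms vs. microns, plus the lambda = feature/2 convention above.
      ANGSTROMS_PER_MICRON = 1e4

      print("700 angstroms =", 700 / ANGSTROMS_PER_MICRON, "microns")  # 0.07

      for feature_um in (0.25, 0.18):
          print("feature %.2f um -> lambda %.3f um" % (feature_um, feature_um / 2))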
  • No need to be so sarcastic. Don't you think things would be even more interesting than they already are if you could create custom chips at home with the same ease as you write and compile programs?
  • Actually, the scientifically proven limit of silicon technology is much smaller than this. I remember that a couple of months ago, someone showed that the minimum thickness of the insulator was 4 atoms. Of course, mass-producing on that scale would require lasers with such a short wavelength that the energy delivered by the beam would wreak all kinds of havoc on the chip, so a manufacturing process that small is only possible under extremely limited conditions, ones not likely to be overcome before the next major breakthrough in computing technology.

    Personally, I'm more interested in the molecular scale quantum computing technology. I believe it was Los Alamos that put 3 quantum transistors on a single proline molecule. You know, proline, one of them amino acids. We might even be able to grow our computer chips from a DNA or RNA template in the distant future. That is something that could go a lot farther.
  • Technologies like these (copper interconnect, new insulators, new dielectrics) tend to be used in custom ASIC design long before they find their way into CPUs at all.

    For example, people were using .18 micron processes to produce signal processing gear for cellphones (especially basestations) years before it turned up in a CPU.
  • That's as it should be, because with the Linux kernel you can go through and turn off the stuff you don't need. If I could install MS Office (to stay compatible with my coworkers) and just turn off the assistants, the grammar checker, and the many other wasted things, then I'd be happy. But even if the base speed did increase, I wouldn't know it, because it's buried under 400MB of crap and requires 64MB to load properly.

    Feature bloat is cool, if done right. Dynamically linkable code, conditional installation, and decent application design can cope with this. Simply code all possibly unwanted modules as externals; if users want them, they can install them, and the application then makes calls to the external code. That way not only does the memory footprint get smaller (only what's actually used gets loaded), but so does the disk footprint, since only what you want gets copied from the CD.
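
    A minimal sketch of that pattern in Python (the module and function names are hypothetical): the optional feature ships as a separate module, and the application only loads and calls it if it is actually installed.

      # Optional-feature loading: the "assistant" lives in its own module and
      # costs nothing, on disk or in memory, unless the user installed it.
      # The module name "office_assistant" is hypothetical.
      import importlib

      def load_optional(name):
          try:
              return importlib.import_module(name)
          except ImportError:
              return None  # not installed; the app carries on without it

      assistant = load_optional("office_assistant")
      if assistant is not None:
          assistant.start()  # loaded and run only because the user asked
      else:
          print("assistant not installed; skipping")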

    And the features themselves don't take CPU resources, unless used. The animated paperclip doesn't draw a lot of resources if it's not running. The trick is to not run it if the user doesn't want it.

    But coding this way is like writing good, portable, consistently formatted (indenting, etc.) code: doable, but a pain in the ass. As long as it's possible to get by without doing much, everyone will.

    I long for a day when Abrash's optimization books, or similar ones for the processors of the day, are required reading in university programming courses, and classes on optimization are standard. There's a lot that compilers will never be able to do, because we constrain them with our high-level design. We need to think smart to get good results.
  • Sure, machine flight was said to be impossible and now we do it routinely.

    But only by abandoning the idea of flapping wings and going to an airfoil and lateral thrusters. (Yes, I know some people are still trying flapping aircraft, with some degree of success, but barely.)

    I have no doubt that there's a lower size limit beyond which silicon chips will *not* go. At best, that limit is one-atom-wide pathways.

    That doesn't mean that we'll never get anything better, just that the current 'cheap and easy' process will have reached its limits and we'll be using the less popular (because of price, or whatever) techniques, until those too run out, and we move to whatever new technologies we've found.

    There are a lot of things that can be done with chips to make them faster. Silicon density can be increased by utilizing more of the third dimension. If layer-to-layer connections can be made to waste as little energy as connections within the same layer, then, within the limits of heat dissipation, we'll be able to get a lot more silicon close to other silicon.

    Better software techniques could be used to get parallel-calculation benefits out of almost anything, if nothing else simply by precalculating all possible choices at any branch.
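
    A toy illustration of that idea (the workloads are stand-ins, and this cheerfully ignores the cost of computing the losing branch):

      # Speculatively evaluate both sides of a branch in parallel, then keep
      # whichever result the slow condition actually selects.
      from concurrent.futures import ThreadPoolExecutor

      def if_taken():
          return sum(i * i for i in range(1000000))

      def if_not_taken():
          return sum(i * 3 for i in range(1000000))

      def slow_condition():
          return sum(range(2000000)) % 2 == 0  # pretend this takes a while

      with ThreadPoolExecutor(max_workers=2) as pool:
          taken, not_taken = pool.submit(if_taken), pool.submit(if_not_taken)
          result = taken.result() if slow_condition() else not_taken.result()

      print(result)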

    And sure, some software runs serially only. But we've already found problems, and whole classes of problems, that will remain unsolvable within the life of the universe, even with unimaginably powerful computers. We need to come up with better ways of attacking these problems; just throwing a faster CPU at them merely reduces an eternity-long wait to half an eternity.
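
    The arithmetic behind that is brutal. For a problem that takes 2^n steps, doubling the machine's speed buys exactly one more unit of problem size, as a quick check shows:

      # For exponential algorithms, 2x the CPU adds +1 to the feasible problem
      # size, and 1000x adds only about +10.
      import math

      def max_n(ops_per_sec, seconds):
          # largest n such that 2**n operations finish within the time budget
          return int(math.log2(ops_per_sec * seconds))

      budget = 86400.0  # one day
      for speed in (1e9, 2e9, 1e12):
          print("%.0e ops/s -> n = %d" % (speed, max_n(speed, budget)))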

    Software could, IMHO, be made ten times faster in most cases if hardware were designed around the idea of finite cycles and if the software were properly written.

    (By this, I mean that hardware is often designed to deliver more cycles, rather than to make software development easier. If it were designed to be easier to write good code for, it would probably do better, on average, at any given clock speed. And eventually, the same hardware speed limits will be hit. The G4 CPU architecture, with its tons of registers (128+, most 128 bits wide), will be useful a lot longer than the x86 architecture, with its handful of registers, most only 32 bits wide, a weird floating-point stack, and specialty registers like MMX layered on top.)
  • Bipolar designs are great, but getting the lithium dose right is a royal pain. Ever try to convince a bipolar chip that it needs its meds?
  • I got a couple of old, low-clock-rate boards for nothing, and I only had MII chips on hand to pair them with. So I have a pair of MII-300s clocked at 233 and 266. Their uptime has only been dictated by how long I can go without 'tweaking' something...
  • Two words: 'Utilities included.'

"If it ain't broke, don't fix it." - Bert Lantz

Working...