Silicon Will Get CPUs To .07 Micron 149
ruiner writes: "This post at EE Times reports that it now appears silicon dioxide can be used as an insulator down to a .07 micron process. This will buy processor manufacturers a few more years to develop solutions for smaller processes. "
Only the big boys can play (Score:2)
That's great, but... (Score:1)
SM
Sad that this is necessary (Score:1)
Guiness is getting tired (Score:1)
Re:That's great, but... (Score:2)
Wow - it might come to market in 10 years (Score:1)
What I'd really like to see is a helluva powerful consumer device released before its commercial counterpart. Not just helluva powerful, but something that would be like comparing a GeForce 256 DDR to a CGA card.
"Assume the worst about people, and you'll generally be correct"
You must be kidding. (Score:3)
What an excellent idea! Why didn't *my* company think of it first! Forget that piddly 1 GHz crap, why don't we just jump straight to 10? I'll get right on it...
Are you really trying to tell me that you were content with your 100 MHz Pentium Classic right up until a month or so ago when those 1 GHz chips came out? All of those small jumps in the middle there didn't mean a thing, I suppose.
While it's true that the steps by which companies increment their clocks should be increasing (they should now be releasing in 50-100 MHz steps, not 33 MHz steps), the percentages should scale. 1.5 GHz : 1 GHz
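To put numbers on the percentage-scaling point (the 15% bump and 100 MHz starting clock below are invented for illustration): a constant percentage increase means the absolute step size grows with the base clock.

```python
# Hypothetical release schedule: each new part is 15% faster than the last.
def next_clock(mhz, pct=0.15):
    return mhz * (1.0 + pct)

clock, steps = 100.0, []
for _ in range(5):
    bumped = next_clock(clock)
    steps.append(bumped - clock)  # absolute size of this speed bump
    clock = bumped

# Constant percentage => growing absolute steps (15, 17.25, ~19.8 MHz, ...).
assert abs(steps[0] - 15.0) < 1e-9
assert all(later > earlier for earlier, later in zip(steps, steps[1:]))
```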
Now, I *do* think that the race to 1 GHz was kind of silly, but hey! It was marketing. Faster is better, but clock isn't everything. Intel hasn't released a new core since 1996, and you can really feel it. Coppermine and others are slight improvements, but they really need to get their new architecture (IA-64) out the door. Athlon is still eating their lunch.
just a disgruntled computer architect,
--Lenny
and this is going to affect us how? (Score:4)
Faster chips are great, but x86 is getting tired, and the I/O on a PC is really limiting the usefulness of the chips we're capable of making now. We need a platform capable of using what we have now.
Primer (Score:4)
Electrical signals travel at the speed of light. Therefore, the smaller you can make a circuit, the faster a signal will get from one end of it to the other. And of course you can pack more of them into a given area, leading to smaller die size, which equates to more units per wafer, higher yields, and lower prices.
At this point we're getting to the limits of what can be done on a silicon substrate. The problem here is that with circuits smaller than 0.07 micron, you are in danger of splitting silicon atoms if you pump any energy at all through the circuit. Yes, you read that right -- splitting silicon atoms, resulting (theoretically) in a release of energy equivalent to the Hiroshima bomb. This, as you may have guessed by now, is the real reason the US government considers high-powered CPUs to be "munitions". Just imagine what could happen if a bunch of Islamic terrorists got hold of a few thousand such CPUs and set themselves up as a mail-order PC company.
This is not, by the way, a problem unique to ICs. The real reason for the classification of data compression products as "munitions" is related to this too. You see, if data is compressed too much, the atoms comprising the individual bits can actually begin to participate in atomic fusion, leading (for a 32 kb block of data) to a release of energy equivalent to the original H-bombs of the 1950s. There are some papers here [usgovernment.com] to document all this.
Just goes to show... the government doesn't always tell you the real reasons for the decisions they make, but that doesn't mean those reasons aren't justified.
Smaller, Faster.. (Score:1)
Don't forget about the voltage.... (Score:1)
Re:That's great, but... (Score:1)
Leaps and Bounds or Baby Steps? (Score:1)
we need a slowdown (Score:2)
Re:Sad that this is necessary (Score:1)
You may say that other animals don't even consider how many transistors they can fit on a piece of dirt, and they still live fine. However, other animals don't take showers, have life expectancies far shorter than ours, and generally don't care about short-term advancement. Sure, they'd like to evolve into us someday, but it doesn't depend on the discovery and work of individuals to do that.
We are at a time in our evolution where individuals do matter more than the individuals of other species. We didn't evolve ourselves to rocket into space, we worked at it. That is why we are here.
"Assume the worst about people, and you'll generally be correct"
Re:You must be kidding. (Score:1)
Well, I'm still content with my P2/233, and I run NT at home. If I were a Linux user, I'd probably be happy with a P100.
--
70 nanometres is *tiny* (Score:4)
What this really means is that we *may* have a little longer to go before we have to start using 'exotic' oxides. This is good news. One of the great things about SiO2 is that the manufacturing properties are well understood (although, at this size, lithography is going to be, er, interesting).
And they say there's a chance that they can take it even further. Gordon Moore will be pleased. His law looks good for the foreseeable future.
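For anyone who hasn't worked it through, the law itself is trivial to sketch. The 24-month doubling period and the round 10-million starting count below are assumptions for illustration, not figures from the article.

```python
# Sketch of Moore's-law growth: transistor count doubles every fixed period.
def transistors(years, start=10_000_000, doubling_years=2.0):
    return start * 2 ** (years / doubling_years)

assert transistors(0) == 10_000_000
assert transistors(4) == 40_000_000    # two doublings in four years
assert transistors(10) == 320_000_000  # five doublings in a decade
```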
sure don't make them like they used to... (Score:1)
Re:Sad that this is necessary (Score:5)
-B
Maximum capability (Score:2)
I figure that once companies realize the limit, they'll collude and only release faster processors in small increments in "competition" with each other to leech as much money as they can. The alternative is that the first one to reach the limit sells to everyone to get the money before the other guy (Intel vs AMD), but then everyone is out of business because the market is satisfied until freaky quantum or holographic or whatever technology is developed.
How close is anyone to that stuff?
Instant Crisis
Re:Primer (Score:1)
I'm not the only one? (Score:1)
PS. Sorry if this message is duplicated. I'm having trouble submitting. This T3 just doesn't cut it sometimes
/*--Why can't I find the QNX OS on any warez sites?
* (above comment useless as of 4-26-2000)
*/
Re:That's great, but... (Score:1)
Plus with the dual proc, it allows for more flexibility. Say one proc could be used for all the I/O tasks and the other could be used for whatever else is needed. I think that would be a more efficient way than 1 blazing fast processor. Am I right?
Solution Pollution (Score:1)
Can Slashdot pul-eeze not use the marketing buzzphrase "solution" ??? I'm sorry, I hate this meaningless management word and am loath to see it in my favorite tech spot. The wise Hemos could have just written "develop smaller processors" and meant the same thing.
Sorry, I started thinking "solution" was overused when Rite Aid started printing it on its receipts. "Your total store solution" or somesuch rot. . .
.07µm Feature Size (Score:2)
Lithography is going to take a while for this stuff to catch up, too. The traditional Novolac/Diazonapthoquinone(DNQ) resist won't stand up at such a small feature size. It should be interesting to see if PolyHydroxyStyrene(PHS) can even hold up well at such a small feature size. 157nm Lithography is a ways off for industry. 193nm is nearly standard now, and it's an _extremely_ easy process to wreck.
Re:Primer and malicious virii (Score:1)
Here's the previous "story" on CPU exploding virii [slashdot.org]
you are in danger of splitting silicon atoms if you pump any energy at all through the circuit. Yes, you read that right -- splitting silicon atoms, resulting (theoretically) in a release of energy equivalent to the Hiroshima bomb
So here's a quick way to end the world...
while (1);
This is NOT transistor size! (Score:1)
The story is about the SiO2 insulator thickness. This oxide sits under the gate of the transistor which is used to control the flow between the source and drain.
This is 0.07um OXIDE THICKNESS, which is NOT the same as the gate length. The gate length is the usual parameter quoted when referring to a process (i.e. 0.25um, 0.18um, etc).
The problem is that if the gate oxide is too thin, you really screw up the transistor. All sorts of nasty reliability problems with hot carrier damage and what not. (Not to mention device performance).
But you can't have super-thick oxides relative to the gate width. There would be millions of tiny thin, but tall patterns which you have to expose, wash and deposit properly and it just doesn't work.
Re:ummmm....no? (Score:1)
Re:I'm not the only one? (Score:1)
Re:Leaps and Bounds or Baby Steps? (Score:1)
Thus, a new technology has to be researched and produced, and later mass-produced as the costs per unit begin to drop. A combination of lowering costs and high consumer demand make these things household realities.
-L
Re:70 nanometres is *tiny* (Score:1)
Interesting? Try near impossible (though I refuse to say impossible; sub-micron was once considered impossible).
Re:and this is going to affect us how? (Score:1)
----
Now if only we started to use some intelligence... (Score:1)
However, I must say that getting rid of the x86 architecture will certainly help....
change your paradigm (Score:1)
Re:That's great, but... (Score:1)
But if you do want a dual processor system, you can buy one from SGI, Dell, Gateway, IBM, Hewlett Packard, Intergraph, VA Linux, and many other ones. There's also a huge selection of motherboards available, if you're willing to get your hands dirty.
And the example you're pointing to... That's rather unfeasible with today's operating systems (though probably completely feasible in the mainframe world). But for what you're asking for, it sounds like you just want a good SCSI controller. Keeps your CPU usage down to around 3% most of the time. Yeah, it costs money and the drives cost more too, but with the savings you'll have from buying a machine and mobo with only one processor, it won't work out too badly, and you'll probably notice a bigger bang for your buck.
Re:Maximum capability (Score:4)
Silicon technology is still a bulk technology. The most likely candidates for further circuit miniaturization are what is called "molecular electronics." These involve using organic molecules with dimensions of several dozen angstroms for switches and interconnects. People are already working very hard on metal contacts to organic molecular components. There was a special issue of the Proceedings of the IEEE on Quantum and Nanoscale Devices, and an article titled "Molecular Electronics" by Prof. Reed of the Yale EE dept surveyed the field.
The most striking figures in that article were (i) a very tiny organic molecular diode which operated at room temperature with voltages around +/- 0.5 volt, and (ii) a resonant tunneling device at room temp. with similar voltages. These are highly practical voltages and temperatures! The biggest obstacle is to integrate these devices.
The variety of possibilities offered by organic molecules in conjunction with metals and other solid materials is simply staggering. What is going to open the floodgates is development of techniques to integrate these tiny devices with tiny interconnects in an inert matrix.
Zyvex [zyvex.com] and the Foresight Institute [foresight.org] website are the best resources for information on this subject. Particularly, the writings of Eric Drexler and Ralph Merkle [merkle.com].
Real computers (Score:3)
Machines like this are used for *serious* numbercrunching. They predict the weather, model the economy, help design planes and spacecraft and find oil. These are tasks for which there is still a serious demand for MIPS.
Because of the astounding cost of developing these technologies, it takes years for them to trickle through to the desktop.
I admit that when decent processors get to the desktop, they are wasted. I did some low-level monitoring of my mother's PIII 450 recently. She runs Win98 and MS Word. The processor spends 99.2% of its time idle, and 60% of its active time waiting for cache misses. The cache miss problem isn't going away any time soon, because memory is still not getting faster at a high enough rate. The only realistic cure is for compiler writers to continue developing *very* clever optimisers. This is happening, but optimisations like this are deep magic.
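A back-of-envelope average memory access time (AMAT) calculation shows why those misses hurt so much; the hit latency, miss rate, and miss penalty below are made-up illustrative numbers, not measurements from that PIII.

```python
# AMAT = hit time + miss rate * miss penalty; all numbers hypothetical.
def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    return hit_cycles + miss_rate * miss_penalty_cycles

# 1-cycle hit, 5% miss rate, 50-cycle trip to main memory:
cycles = amat(1.0, 0.05, 50.0)
assert abs(cycles - 3.5) < 1e-9  # the *average* access costs 3.5 hit times
# Even a 95% hit rate leaves the core spending most of its memory time
# stalled, which is the kind of behaviour the monitoring above showed.
```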
I/O in modern servers using proprietary technology is awesome. Check out the IBM SP servers for more info. (Can't find the link - I have it on CD). Unfortunately, PCs are hampered by 'legacy' technologies like PCI. There is at least one serious attempt to address this - the Next Generation I/O project [intel.com]
Re:Only the big boys can play (Score:2)
Yeah. Damn shame. Modern civilization is oppressing the home hobbyist. Why can't I set up a
I've got seventy bags of cement and some turbines, but can I generate massive amounts of hydroelectric power? Hell no. Thanks to the military industrial complex, they've crippled the technology so that it requires a river to function. A river! Who has one of those in their apartment? Nobody. It's all due to the power companies and the oil companies being in bed with Microsoft!
--Shoeboy
(Score:1)
Re:Primer (Score:2)
Electricity is essentially the movement of electrons, not protons. Protons do not (normally) move from atom to atom. Electrons certainly DO NOT travel faster than the speed of light. Electrons have finite mass; if they traveled at the speed of light, the Lorentz transformation equations assert that they would have infinite mass. I assure you that the electrons in your computer do not have infinite mass. In fact, electrons usually travel far slower than that: the drift velocity of electrons in copper wire is well under a meter per second, even though the signal itself propagates at a large fraction of c. Light (well, actually photons) travels faster than anything else. The speed of light is an absolute speed limit in our universe and is a fundamental physical constraint. Your comment is both off-topic and wrong.
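To put a number on why signal speed matters at all, here's the usual back-of-envelope calculation; the 2 cm die edge and 1 GHz clock are illustrative figures, not any real part's.

```python
# Illustrative: how long even light would take to cross a die edge.
c = 3.0e8        # m/s, speed of light in vacuum
die_edge = 0.02  # m, a hypothetical 2 cm die
t = die_edge / c
assert abs(t * 1e12 - 66.7) < 0.1  # about 67 picoseconds
# At a 1 GHz clock (1 ns period) that is already ~7% of a cycle, and
# real on-chip signals propagate well below c, so distance matters.
```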
As for the rest of your post, you've been suckered into responding to a quite funny comment. There's just enough techno-mumbo and vaguely plausible sounding physics to get Slashdot karma whores (tm) to reply with corrections (in analogy with StreetLawyerGuy and DumbMarketingGuy). Kudos to all three.
Re:uhhh and the 20th century? (Score:1)
This IS transistor size! (Score:3)
Re:Write more efficient code? What a concept! (Score:1)
Especially if the programmer messes things up by trying to hand-optimize!
Having the programmer use goto statements and hand-unroll loops usually makes things worse. It is far, far better to have the programmer concentrate on the high-level algorithm design.
Small-scale hand-optimization will do squat if your algorithm is exponential.
The compiler can do a whole lot if the programmer lets it do its job. That includes making careful use of C++ inlining and templates!
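A toy demonstration of the "algorithm beats micro-optimization" point, with naive recursive Fibonacci standing in for an exponential algorithm:

```python
from functools import lru_cache

# Naive recursive Fibonacci is exponential: a stand-in for a bad algorithm.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Same function, O(n) after memoization: an algorithmic fix, not a tweak.
@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(20) == fib_memo(20) == 6765
# fib_memo(100) is instant; no amount of hand-unrolling saves fib_naive(100).
assert fib_memo(100) == 354224848179261915075
```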
--
Re:Sad that this is necessary (Score:1)
Yeah, I agree. I also think that we shouldn't have bothered to improve RAM storage capacity over the years. After all, 640K ought to be enough for anyone.
----
I love it when a plan comes together (Score:1)
Re:70 nanometres is *tiny* (Score:2)
Re:I'm not the only one? (Score:1)
*YAWN* (Score:1)
Wake me when they deliver some really exciting news about significantly faster bus speeds and RAM... or greatly improved cache.
Re:Sad that this is necessary (Score:1)
Ask him to predict exactly how much rain a particular county will have that morning. Then he might begin to appreciate what these guys do.
Re:Latin (Score:1)
/*--Why can't I find the QNX OS on any warez sites?
* (above comment useless as of 4-26-2000)
*/
I'm worried (Score:1)
tcd004
Huh! (Score:2)
Re:70 nanometres is *tiny* (Score:1)
Re:Sure (Score:1)
That's your penis? I thought I was eating a piece of spaghetti!
Re:Leaps and Bounds or Baby Steps? (Score:1)
----
One down, ... (Score:2)
Looks like smooth sailing to me!
Re:Smaller, Faster.. (Score:1)
Note that some of these, bio and mechanical nanotech, have the same sort of power dissipation problems that conventional semiconductors have. Computing takes power to perform an operation; you make it fast and it burns a lot of power, so you try to make it smaller to reduce the power per functional unit. There are proposed methods to get around this, see
http://www.ai.mit.edu/~mpf/rc/home.html
for an example.
Sometimes technologies are delayed because the established leaders want to hold onto their positions; the US phone companies and digital access are an example. Usually the delay is not very large, and the old guard is passed by. Sometimes technologies are fueled by the current leaders, who want to hold onto their lead and know a little history.
Re:and this is going to affect us how? (Score:1)
I doubt that a 0.07 micron chip will be used in anything for the average consumer for over a decade. By then we will hopefully have a "PnP"-type parallel CPU array and FireWire-type PnP for hot-swapping peripherals... making 150 GHz not all that impressive.
--// Hartsock
Re:Latin (Score:1)
----
Interesting... (Score:2)
Secondly, companies aren't going to flog off any more computers, just because the processor is smaller. It would make much more sense, IMHO, to use the scale improvements to build multi-processor CPUs. (If the next generation of Intels or AMDs packed 16 ix64's into a single unit the size of current processors, with all necessary SMP stuff thrown in, you'd have a truly powerful computer.)
Last, but not least, why use all these improvements on the processor? It's not been the bottleneck for years! If you designed a wafer-scale RAM chip at 0.07 microns, you'd be looking at computer memories in the region of 512+ TERAbytes! Can you imagine how responsive KDE would be with that?
Re:Maximum capability (Score:1)
Re:Real computers (Score:2)
http://www.InfiniBandta.org/home.html
*rim shot* (Score:1)
Re:I'm not the only one? (Score:1)
Re:change your paradigm (Score:2)
Re:Primer (Score:1)
His comment was pretty wrong though. I'd love to see a proton stream... and live through it.
Re:Latin (Score:1)
The plural of 'virus' is 'viri'.
Re:we need a slowdown (Score:1)
This always brings up the notion that the base linux kernel actually improves in speed, given the same functionality as earlier releases. Of course, once you add all sorts of snazzy modules in...
(talk about offtopic)
Re:One down, ... (Score:1)
Yes, but... (Score:1)
First, how do you build a CPU with a
The second practical problem: cooling! A
Re:Real computers (Score:1)
1 GB/sec is pretty fast compared to the maximum transfer rate 64-bit 66 MHz PCI offers today (533 MB/sec). The PCI-X bus is a 66 or 133 MHz 64-bit peripheral bus. Looks like it can burst faster than a theoretical AGP 8X. Maybe Intel will make an AGP-X bus for graphics.
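The arithmetic behind those figures is just bus width times clock; the slightly higher quoted numbers (133/533/1066 MB/s) come from the true 33.33/66.66/133.33 MHz clocks, while round clock values are used here.

```python
# Peak bus bandwidth in MB/s (decimal): bytes per transfer * transfers/us.
def peak_bandwidth_mb(width_bits, clock_mhz):
    return width_bits // 8 * clock_mhz

assert peak_bandwidth_mb(32, 33) == 132    # classic PCI, quoted as 133 MB/s
assert peak_bandwidth_mb(64, 66) == 528    # 64-bit/66 MHz PCI, quoted as 533
assert peak_bandwidth_mb(64, 133) == 1064  # PCI-X at 133 MHz, about 1 GB/s
```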
--
The only question that matters... (Score:1)
Top 5 Headlines seen too often in the last 20 yrs. (Score:1)
5. Breakup of Microsoft Predicted
4. Death of x86 Architecture Predicted
3. Death of FORTRAN Predicted
2. Year of the Network Predicted (okay, so they finally got one right)
and finally, the #1 headline we've seen too many times in the last 20 years...
1. Death of Moore's Law Predicted!
--j
Re:I'm not the only one? (Score:1)
She talked me into trading. I didn't mind because the study is built over a crawlspace and it is closer to the phone lines, cable and power, and overall it was a much more convenient location to wire up my network and stuff. After I got my two servers, my masq box, cable modem and my workstation into this room, we noticed that it was appreciably warmer. Almost too warm. We did this around Thanksgiving and I was toasty all through the winter. Worried now, though, about how hot it will get in the summer. Might have to think about putting some ceiling vents in or something.
Re:Write more efficient code? What a concept! (Score:1)
Re:Leaps and Bounds or Baby Steps? (Score:1)
Likewise, if there were the economic and political incentive to go to Mars, we would likely be there by now.
Definitely. I heard someone once say on /. that you could put people on Mars for the cost of the movie "Titanic."
Most likely, nobody would have made it to the moon if Kennedy hadn't died. The whole country felt it owed it to him that they should go to the moon, because that's what Kennedy wanted. So off to the moon they went, and did it before the decade was out; people put a stupid little flag there, just like Kennedy wanted.
But then a funny thing happened. Nobody cared anymore. Man stomped around up on that rock for a while and then retreated home, and hasn't gone back since. Earthbound people said you could be spending NASA's money on the poor (Fools! Do you really trust politicians to spend that extra money wisely?).
Here's to hoping that the International Space Station will allow new missions to the moon, since the shuttle can now refuel there and move on, perhaps launching a module out of the cargo bay.
Re:Latin (Score:1)
----
x86 & silicon (Score:1)
Chris Hagar
Re:Latin (Score:1)
Result: Everyone left still thinking that they were right
1) English is a dynamic language. What comes into common usage becomes language. Therefore, Virii/Viri is/are word/s.
2) It has a Latin base, so therefore Viri/Virii is the proper plural.
3) It may have a Latin base, but it's an English word, and the dictionary says it's viruses.
4) You all suck, shut up and go away.
I think that pretty much summarizes the argument.
I won't take sides, but the longer this goes on, the more I agree with #4...
Re:70 nanometres is *tiny* (Score:2)
Photonics has problems. (Score:3)
Solid state photonics will still have its feature size limited by the wavelength of the light used within its devices. _Current_ integrated circuit chips use feature sizes that are much smaller - by the time photonics matures, it will already be left in the dust as far as density is concerned.
Use smaller wavelengths of light? Not unless you want to destroy your material by photoionization.
Your next logical argument is to point out that most proposed photonic devices are three-dimensional. My logical counterargument is to point out that you can build three-dimensional electrical devices too. It's just currently cheaper to shrink 2D fabrication processes.
Your next probable point is to make noise about propagation delay in electrical circuits. It turns out that these aren't the limiting issue in conventional ICs - heat dissipation is.
Your next likely point is to say that a photonic circuit would have less heat dissipation. My response is that I'll believe it when I see it. Absorption happens, and whatever diode lasers are pumping this device won't be perfectly efficient either.
Lastly, I'd like to point out that most of the effort that goes into designing integrated circuits goes into designing the logic, not the fabrication processes. Computer engineers would still be employed in your hypothetical universe. Electrical engineers design motherboards and specialized analog ICs, both of which would still exist, so they wouldn't be out of work either.
Summary: Photonics is not the magic wand you hold it out to be.
Re:Real computers (Score:1)
Not sure about a couple of these. (Score:2)
Capacitance is still (AFAIK) dominated by the diffusion capacitance of transistor sources/drains connected to the wire. Second contributor, IIRC, was gate capacitance. Both of these go down with feature size.
You might point out that gate and drain area will only go down in one dimension, as I'll be sticking more devices on the bus, but they'll still go down.
Wire resistance similarly isn't a huge contributor AFAIK. In all of the sets of parameters that I've seen, even a long bus wire would have resistance lower than the effective resistance of a MOSFET in saturation mode.
Lastly, while your drivers get smaller, the W/L ratio of the gates remains the same. This means that, should you be inclined to melt down your circuit, you could still pass the same amount of current through a smaller MOSFET.
Now, as far as using intelligence is concerned... Most of the cynicism I've seen expressed both towards coding and towards IC design has been put forward by people who aren't doing coding or IC design (in general; I don't know what your personal qualifications are). The fact remains that while boneheaded code gets written and while boneheaded ICs are most likely designed, there are still companies that do it right. These gain market share, grow complacent, and fall to the next group that does it right, continuing the grand cycle.
My point being that you aren't likely to get order-of-magnitude performance improvements by "using intelligence". The people you're competing against already are.
As far as the ultimate limits of communication on smaller, faster chips are concerned, I doubt this will become a serious problem. Designers will simply focus more on pipelining and asynchronous operation of modules to relax system-wide signal timing constraints.
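A toy RC estimate of the wire-vs-driver resistance point above; every value below is an invented round number chosen only to show the shape of the comparison, not a real process parameter.

```python
# Invented round numbers, only to compare wire vs driver resistance:
r_wire = 100.0       # ohms, a long on-chip bus wire (assumed)
r_driver = 10_000.0  # ohms, MOSFET effective on-resistance (assumed)
c_load = 100e-15     # farads, lumped load capacitance (assumed)

tau_wire = r_wire * c_load      # RC delay contributed by the wire
tau_driver = r_driver * c_load  # RC delay contributed by the driver

# With numbers like these, the driver dominates by two orders of magnitude,
# so shrinking the wire's resistance buys almost nothing.
assert abs(tau_driver / tau_wire - 100.0) < 1e-6
```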
Parallel processing. (Score:2)
It turns out that, for several reasons, multiprocessors aren't likely to dominate desktops for a few years yet.
The first reason is that systems with multiple _discrete_ processors are more expensive. You need to pay for multiple processor modules, and the motherboard needs a more complex chipset. Joe Average Gamer is better off spending the same amount of money getting a top-of-the-line video card, and a new single processor six months later. Joe Average Non-Gamer doesn't need a multiprocessor for email and office apps.
The second reason is that writing good parallel code is much more difficult than writing good sequential code. Race conditions, interprocess communication, and so forth add plenty of complexity, and compiler tools won't save you - parallelism is designed in at a higher level than compilers deal with.
The third reason is that interprocessor communications bandwidth and memory coherency overhead are *big* problems for multi-processor systems, and they keep on getting bigger as more processors are added. Something like a Starfire, for instance, isn't a large set of processors and memory with a bus tacked on - it's the Bus Network of the Gods with processors and memory tacked on as an afterthought. It has to be, to handle supercomputer communications loads. This means that a lot of the money you spend on a parallel system *won't* be on processing power. If, on the other hand, you're willing to wait another design generation, you can get a comparable processor for a much lower price.
The fourth reason is that while we could indeed integrate many old cores on a new die, we get better performance by doing other things. Adding more cache, for instance, or adding fast, complicated FP units that would have taken too much silicon to add before. Making a bigger translation lookaside buffer (important with a 64-bit address space). Improving branch prediction (a big source of stalls). Adding deeply pipelined load/store units (another big source of stalls). Or adding whatever other performance-enhancing widgets are invented over the next five years. Multiple cores are an interesting idea, but at _present_ aren't the most effective way of increasing performance.
All of these factors mean that parallel processing isn't used except by those who really, *really* need it (dual-processor doesn't count).
Now, the caveat.
Once cache performance saturates - and it eventually will - we'll have a lot of silicon to play with when moving to smaller linewidths. At the same time, we'll also have to break chips into asynchronous pieces to solve the clock skew problem. We may also be reaching limits to superscalar scaling (scheduling is an NP-complete problem, and approximations reach diminishing returns eventually). At this point, it starts to make sense to put multiple cores in a chip, along with the coherency logic and communications pathways needed.
However, I don't see your desktop machine running a processor like that for 5-15 years, for the reasons mentioned above.
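The formal version of the serial-bottleneck argument in the post above is Amdahl's law; the 90% parallel fraction below is an assumption picked for illustration.

```python
# Amdahl's law: speedup = 1 / (serial + parallel/n).
def amdahl_speedup(parallel_fraction, n_cpus):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)

# Even a program that is 90% parallel gets nowhere near 16x on 16 CPUs:
assert round(amdahl_speedup(0.90, 16), 2) == 6.4
# And no CPU count can beat 1/serial = 10x for that program.
assert amdahl_speedup(0.90, 1_000_000) < 10.0
```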
Re:I'm not the only one? (Score:2)
In the summer, I leave the iMac and the 486 off and put big fans around my 400 MHz Linux box. Works great.
Re:That's great, but... (Score:2)
Say one proc could be used for all the I/O tasks and the other could be used for whatever else is needed.
The I/O processor would be idle much of the time on a modern system. Most of the I/O load goes through bus mastering devices these days. The CPU queues it up, and gets an interrupt when the transfer completes (in some cases, overhead is reduced by holding the interrupt until several I/O requests complete).
What is needed is good processor affinity so that the cache stays warmer. The Linux 2.3.x kernel is moving in that direction with internal structures. 2.2.x already has CPU affinity for user processes.
There are situations where dedicating one CPU to a specific process can be a good idea, but it's not common enough to be in a mainstream kernel.
Gallium Arsenide is not faster or less effort. (Score:2)
Re:Not sure about a couple of these. (Score:2)
Re: wire capacitances - not true (at least directly) anymore, AFAIK.
Our customers (we work with various companies designing in the 0.13 process region) are continually complaining about the effects of wire impedance (including resistance, not just capacitance!) on their designs, particularly because the current state-of-the-art logic synthesis tools do not MODEL this wire impedance well and end up making lousy decisions which impact the tail-end design flow severely. The relative impact of wire impedance on cell delay is only expected to get worse as feature sizes get smaller.
The performance of individual logic gates has been scaling pretty well as the feature size goes down, particularly because companies can spend so much time and computation analyzing and refining each cell. The behavior of the wires, especially in the context of large-scale routing across large "geographic" areas, is analyzed in a far cruder manner, making it much harder to get reliable results.
Re:Only the big boys can play (Score:2)
So it's a matter of which wall we hit first: the physical or the economic.
Re:Chip Question (Score:2)
Re:Yes, but... (Score:2)
This is NOT transistor size! - YES it is! (Score:2)
Usually, it helps to understand the topic you're discussing. I grew a 700 angstrom (.07 µm) oxide last week on my PFET wafers. If you were correct (which you're not), this article would be over fifteen years late in coming. Also, "wash and deposit" are not terms used in industry. We use "develop, etch and diffusion," since you forgot a step, too. The size which is referred to by
Re:Only the big boys can play (Score:2)
Scientifically proven limit (Score:2)
Personally, I'm more interested in the molecular scale quantum computing technology. I believe it was Los Alamos that put 3 quantum transistors on a single proline molecule. You know, proline, one of them amino acids. We might even be able to grow our computer chips from a DNA or RNA template in the distant future. That is something that could go a lot farther.
Re:Real computers (Score:2)
For example, people were using
Re:we need a slowdown (Score:2)
Feature bloat is cool, if done right. Dynamically linkable code, conditional installation, and decent application design can cope with this. Simply code all possibly unwanted modules as externals; if the user wants them, they can install them, and the program makes calls to the external code. That way not only does the memory footprint get smaller (only what's actually used is loaded) but also the disk footprint, where only what you want gets copied from the CD.
And the features themselves don't take CPU resources, unless used. The animated paperclip doesn't draw a lot of resources if it's not running. The trick is to not run it if the user doesn't want it.
But, coding this way is like writing good portable, and standard (indenting, etc) code, doable, but a pain in the ass. As long as it's possible to get by without doing much, everyone will.
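One sketch of the "code unwanted modules as externals" idea, in Python for brevity; the add-on module name below is hypothetical, standing in for any optional feature the user never installed.

```python
import importlib

# Load a feature module only if it is present; otherwise the feature
# simply stays off, costing no memory, disk, or CPU.
def load_optional(name):
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

# A module that ships with the interpreter loads fine...
assert load_optional("json") is not None
# ...while an uninstalled add-on (hypothetical name) is just skipped.
assert load_optional("animated_paperclip") is None
```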
I long for a day when Abrash's optimization books, or similar ones for the processors of the day, are required reading in university programming courses, and classes on optimization are standard. There's a lot compilers will never be able to do, because we influence it with our high-level design. We need to think smart to get good results.
Re:Maximum capability (Score:2)
But only by abandoning the idea of flapping wings and going to an airfoil and lateral thrusters. (Yes, I know some people are still trying flapping aircraft, and with some degree of success, barely.)
I have no doubt that there's a lower size limit beyond which silicon chips will *not* go. At best, this is one-atom-wide pathways.
That doesn't mean that we'll never get anything better, just that the current 'cheap and easy' process will have reached its limits and we'll be using the less popular (because of price, or whatever) techniques, until those too run out, and we move to whatever new technologies we've found.
There are a lot of things that can be done with chips to make them faster. Silicon density can be made higher by utilizing more of the third dimension. If layer-to-layer connections can be made to waste as little energy as connections on the same layer, then within the limits of heat dissipation, we'll be able to get a lot more silicon close to other silicon.
Better software techniques could be used to get parallel calculation benefits out of almost anything, if nothing else simply by precalculating all possible choices at any branch.
And sure, some software runs serially, only. But we've already found unsolvable problems, or classes of problems, that will be unsolvable in the life of the universe, even with unimaginably powerful computers. We need to come up with better ways of solving these problems; just throwing a faster CPU at them reduces an eternity-long wait to merely half that.
Software could, imho, be made ten times faster in most cases, were hardware designed with the idea of finite cycles, and if the software were properly written.
(By this, I mean that hardware is often designed to get more cycles, instead of to make software development easier. If it were designed to be easier to write good code for, it'd probably do better, on average, at any given clock speed. And eventually, the same hardware speed limits will be hit. The G4 CPU architecture, with tons of registers (128+, most 128b wide), will be useful a lot longer than the x86 architecture, with a handful of registers, most only 32b wide, and a weird floating point stack, etc., with speciality registers layered on top, like the MMX.)
The Secret... (Score:2)
Re:I'm not the only one? (Score:2)
Re:I'm not the only one? (Score:2)