Intel Moving Forward With 10nm, Will Switch Away From Silicon For 7nm 279
An anonymous reader writes: Intel has begun talking about its plans for future CPU architectures. The company is already working on a 10nm manufacturing process, and expects the first such chips to be ready by early 2017. Beyond that, things are getting difficult. Intel says it will need to move away from silicon when it develops a 7nm process. "The most likely replacement for silicon is a III-V semiconductor such as indium gallium arsenide (InGaAs), though Intel hasn't provided any specific details yet." Even the current 14nm chips they're making ran into unexpected difficulties. "While Intel didn't provide any specifics, we strongly suspect that we're looking at the arrival of transistors based on III-V semiconductors. III-V semiconductors have higher electron mobility than silicon, which means that they can be fashioned into smaller and faster (as in higher switching speed) transistors."
amazing (Score:5, Interesting)
Amazing that we're getting to 7nm, and rather than saying we can't do it, there's just casual talk about how they will have to switch away from silicone. Really incredible. Will they just keep marching forward to less than 7nm and into other exotic configs?
To answer your question (Score:5, Funny)
Nope. They've decided to hit 7nm and then call it a day.
Re: (Score:2, Funny)
They've decided to hit 7nm and then call it a day.
I asked Gordon Moore about this and he said it would be illegal.
Re: (Score:2)
One wonders whether, if they reach 'the limits of silicon', implementing optimization techniques in hardware will be the next iteration.
e.g. VLIW inspired designs such as Transmeta Crusoe, Elbrus 2000 or Mill CPU.
Re: (Score:2)
They've always done a lot of hardware optimization techniques. But advanced hardware techniques go hand in hand with extra transistors.
Re: (Score:2, Interesting)
Those are the Tocks in Intel's Tick/Tock model. [wikipedia.org]
Tick is smaller structures.
Tock is new architecture.
Each new architecture is optimized for the most common tasks at that time, together with a bazillion other changes. If they figure out a general optimization technique that still works with the x86 instruction set in the mean time, they'll go for it.
The problem with some optimizations is that they do not work with the x86 instruction set. Abandoning that instruction set is expensive, although we are doing it
Re: (Score:3)
The problem with some optimizations is that they do not work with the x86 instruction set.
I don't see why the x86 instruction set is a problem. Just translate them on the fly, as they've been doing for years.
Re:To answer your question (Score:5, Interesting)
You can, and people do. However, the issue is not translating one x86 instruction to one [insert ISA here] instruction. That has been done since x86 was invented, and was common with previous ISAs before that. The real requirement is to translate source code that maps to a bunch of x86 instructions into ONE [trendy ISA] instruction. This will obviously be easier if x86 is thrown out the window.
Historical note: x86 is a bastardised rip-off of the PDP11 instruction set. The PDP11 was built as a "hardware Fortran machine", i.e. one instruction represents one Fortran instruction as far as was achievable in 1970. C is (just one) PDP11 assembly language! The VAX instruction set was an attempt to achieve a higher level machine code, which worked quite well - most VAX assembly instructions are actually function calls to application specific microcode.
X86 was a poor ISA when the first 8086 chips were made (but good, given hardware capabilities at the time). That was about 40 years ago. MIPS and Sparc (and ARM) are all better than x86.
The moral of this story is that it is "first past the post" in this game, cos people hate it when their favorite app stops working. (See Great Western Railway, Brunel and the 7' gauge).
Re: To answer your question (Score:2)
Re: (Score:3)
It's not even clear that a new ISA would actually improve performance by a meaningful amount.
Re: (Score:3)
They tried that with the Itanium and it did not go well.
What Intel might do is phase out older parts of the ISA, like the 16-bit 8086/80286 instructions, to free up some space on the die. Even that might not be worth the effort, since I am pretty sure those are already "emulated" on modern CPUs in the decoder.
I thought that it would be a good idea for Intel to go 64-bit only on their mobile chips. They have no real installed code base in mobile to worry about.
Re: (Score:2)
The real requirement is to translate source code that maps to a bunch of x86 instructions into ONE [trendy ISA] instruction.
No, the real requirement is to execute the program as quickly as possible. If that can be done by mapping N->1, that's great, and I'm sure Intel already does that where they can. But if you can get the same speed by using multiple instructions in parallel, that works too.
X86 was a poor ISA when the first 8086 chips were made (but good, given hardware capabilities at the time). That was about 40 years ago. MIPS and Sparc (and ARM) are all better than x86.
No, the x86 is a good ISA. You may not think it's pretty, but it gets the job done, as its market share proves. It's also enlightening to look at the ARM ISA. From the original ARM1 to the latest ARM Cortex, there's been a clear trend
Re: (Score:2)
I think translation is becoming less important now because a lot of code is compiled from an intermediate form anyway, e.g. Java and .NET. If you look at x86 Android, which is a mix of Java byte-code compilation to x86 and binary translation of ARM to x86, performance just isn't an issue any more.
The other big issue used to be boot time with hardware that contained x86 code in ROM, executed by the BIOS. Apart from the security implications that meant that you needed special PCI cards for Macs whi
Re: (Score:2)
It tells you something about the skill of their design teams that AMD was able to be competitive for so long in spite of being 1-2 process generations behind Intel.
Of course! They had to wait for the next generation of Intel chip to use them to design the next generation of AMD.
Re: (Score:3)
Re: (Score:2)
Also, the newer architectures require more logic, so they only become feasible after an improvement in process technology.
Re: (Score:3)
Right after Unicode support.
And after Beta goes live.....
We're working on it!
Re:To answer your question (Score:5, Informative)
Intel was heavily invested in VLIW, and developed Itanium. That did not go well, and AMD brought out x64 and ate their lunch. Intel adopted AMD's instruction set and Itanium is basically dead now.
Re:To answer your question (Score:4, Interesting)
Intel did license Transmeta's patents, if only to keep an iron in the fire. According to Wikipedia, Transmeta at the time had code morphing working, supposedly drawing lower power but slower in terms of performance relative to clock speed. Now the balance has switched from the MHz wars to all-day battery life on fanless machines. In competing with ARM, sacrificing a bit of performance for power consumption might be a winner.
I dunno much about Mill but if you read their whitepaper(s), it *sounds* revolutionary in venture capitalist speak.
And for the Russian chip, they have their own native ISA but emulate x86, which some have been saying is a millstone but required for binary compatibility.
I'm not having a go at the folks at Intel, cleverer blokes than me... They did try producing a revolutionary new platform as a successor to x86 - but the Itanic proved less than successful.
Re:To answer your question (Score:4, Interesting)
A buddy's brother works (or worked, who knows now) for Intel, and used to bring along demos of the latest and greatest lab technology when he came for visits. Some of the stuff he had was up to 10-15 years ahead of actual release cycles in terms of performance and capability. I'm sure some of the ideas got scrapped, but a lot of them probably made it into production in the chips we use today.
Wild stuff. Both brothers were major hardware geeks.
I'd love to see what kind of technology he's showing his brother from the labs over Christmas and Easter holidays nowadays. :D
Re:To answer your question (Score:5, Funny)
Re:To answer your question (Score:5, Interesting)
This was a lot of years ago. Things weren't as tightly controlled back then. '386 days...
The 386 debuted in 1985 (the beginning of the "'386 days").
The 486 debuted in 1989 (the end of the "'386 days").
You claimed that you were looking at hardware that was up to 10-15 years ahead in terms of performance and capability.
That means you saw the equivalent of 1995-2000 level hardware in 1985, 1999-2004 level hardware in 1989, or any corresponding range in the years between.
The Pentium 4 was released in 2000.
Care to revise your bullshit claim?
Re:To answer your question (Score:4, Informative)
As to Transmeta, the company that bought them was nVidia. Their Project Denver chips use a lot of the Transmeta ideas. They're particularly interesting in terms of history, as the project was several years along before they decided on the ISA (they spent a while trying to license the relevant patents from Intel to build an x86 chip, failed and went with ARMv8 - which may end up being a strategic error for Intel). Unlike the Transmeta chips, it has a hardware ARM decoder that generates horribly inefficient VLIW instructions from ARM code. This helps alleviate the startup penalty that the older Transmeta chips had, where they had to JIT compile every instruction sequence the first time they encountered it and then run it from their translation cache. The nVidia chips can run the code as soon as they pull it into the instruction cache and can profile it before doing the translation.
Re: To answer your question (Score:5, Funny)
That would be a really tiny computer.....
Re: (Score:2)
Like a cheerleader, the big ones will be happy first, but by the time it gets to the little ones it will be fairly old and unimpressive.
Re: To answer your question (Score:5, Insightful)
You will never be happy, because laptops will never be as powerful as desktops. Simply speaking, if you manage to create a laptop as powerful as a desktop, then you can also create a more powerful desktop. That is not a matter of computing power but of temperature. Desktops are by definition bigger than laptops, so they can dissipate more heat.
Re: (Score:3)
Desktops are by definition bigger than laptops, so they can dissipate more heat.
What if my lap is bigger than my desk?
Re: (Score:2)
Re: To answer your question (Score:5, Insightful)
Re: To answer your question (Score:5, Insightful)
Your request makes no sense. You can always fit more processing power in a big case with lots of cooling than in a small case with very limited airflow (and power constraints on the fans). And it's always going to be cheaper to produce chips that can consume more power and dissipate more heat than ones with similar performance but a lower power budget. The only reason that the prices have become so close is that laptop sales passed desktop sales some years ago and now the economies of scale are on the side of the mobile parts.
If you want a laptop with the power of a desktop, just wait a couple of years and you'll be able to buy a laptop with the power of this generation's desktops. Of course, desktops will be even faster by then.
Re: To answer your question (Score:2)
Re: (Score:2)
Also, if you are buying mainstream hardware rather than building your own, things are much closer. E.g. a Dell XPS using an i7 4770k, I think. If you compare that to the ~$2500 upgraded i7 version of a MacBook Pro, there are only about 200 points difference between the CPU Mark scores. Sure the Mac has a newer CPU, but that might be the way of things: laptops get updated every year or so, but desktops are allowed to age their way into the budget market and then sit there for a couple years before the manufacturer finally has to make
Re: (Score:2)
Laptop manufacturers have you by the short hairs, because if you want to do work while mobile they really are the best option. Since (at least until the last, say, 10 years) business was the main reason for the devices, margins could be a bit higher. Anyway, it isn't like they just said: hey, let's make a low-powered device. There are more thermal and energy considerations in something that has to sit on your lap, be thin, and run on a battery, versus a big honking box, not touching you, that has continual access to
Re: (Score:3)
The thing is, atoms are very, very small, but they still have a finite size. A hydrogen atom, for example, is about 0.1 nanometers, and a caesium atom is around 0.3nm. The atoms used in silicon chip fabrication are around 0.2nm.
source: http://www.extremetech.com/com... [extremetech.com]
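To put those numbers in perspective, here is a rough back-of-the-envelope sketch (using the ~0.2nm atom size quoted above; treating a node name as a literal feature width is a simplification, since marketing node names no longer map to one physical dimension):

```python
# Rough sketch: how many ~0.2nm atoms span a given process feature?
# Caveat: node names ("14nm" etc.) are order-of-magnitude figures only.
ATOM_SIZE_NM = 0.2  # per the parent comment

for node_nm in (14, 10, 7, 5):
    atoms = node_nm / ATOM_SIZE_NM
    print(f"{node_nm}nm feature: ~{atoms:.0f} atoms across")
# 14nm: ~70 atoms, 10nm: ~50, 7nm: ~35, 5nm: ~25 -- not much room left.
```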
Re:amazing (Score:5, Insightful)
There is some debate about whether 5nm will make sense or even be reasonable to do...
Can a 5nm transistor be made? Sure.... Can 5 billion of them be packed onto a chip and sold for $200? That is a different question...
Going to 5nm only helps if it is a functional product that is better than what we have.
Anything further beyond that and it becomes really interesting... it might happen, but we're running out of room in the known universe.
Re:amazing (Score:5, Insightful)
Going to 5nm only helps if it is a functional product that is better than what we have.
We still don't have the processing power of a human brain in a few pounds of silicon, running on 20 Watts. There's still a lot to do.
Re: (Score:3)
We don't? I don't know about you, but I sure can't do a billion math problems in a minute... but my Intel CPU sure can...
I couldn't do a billion math problems in my whole life!
Depends on how you measure processing power of course...
Re: (Score:2)
Depends on how you measure processing power of course...
I was hoping this was obvious from my comment. I'm talking about the silicon chips doing the things that our brain can do, such as designing the next intel chip.
Re:amazing (Score:5, Interesting)
The major stumbling block isn't processor speed or capacity. It's that we don't know how to architect such a system in the first place.
And if you think about it, a lot of the "smart" things we want to automate really don't need anything like human-level or human-like intelligence. A car with the smarts of a mouse would do great as an autonomous vehicle. Real mice manage to navigate around a much more difficult, unpredictable and dangerous environment, using a far more complex and tricky locomotion system, after all.
Re: (Score:2)
The major stumbling block isn't processor speed or capacity. It's that we don't know how to architect such a system in the first place.
We have some ideas on how to architect such a system, but we can't try them out for lack of good hardware. We already had ideas in the '80s for building neural nets, but they failed because the nets weren't big enough. Now people have a lot more success with deep learning, mostly because they've been throwing a lot more hardware at it.
A car with the smarts of a mouse would do great as an autonomous vehicle
Well, we can't make an artificial mouse brain either, so that only reinforces my point that there's still a lot to do.
Re: (Score:2)
Now people have a lot more success with deep learning, mostly because they've been throwing a lot more hardware at it.
Bullshit.
The success of deep learning coincided with the discovery of a novel training method, not improved hardware.
Why even open your mouth?
Re: (Score:2)
I think you underestimate how much of the design is actually done by computers and auto-routing/placement algorithms.
Re: (Score:2)
Computers can help with the low-level design, yes. They can't come up with novel ideas to change the overall design.
Re: (Score:2)
An autonomous car needs a lot more long-term planning than a cockroach does. For a cockroach it's acceptable to run into a wall, or into another cockroach. For a car going 80 mph, not so much.
Re: (Score:2)
Unfortunately, neural computing, as demonstrated by animals, is guesswork. People mostly buy computers cos they want the right answer, not a good guess. Babbage's original intention was to build a machine that gave the right answer, or no answer at all. (Not even "Error at or near line 1, column 1").
If people want a good guess, they ask Uncle Eric, or the postman, or the nice lady in the house opposite (or the Office of National Statistics
Re: (Score:2)
No, but you're doing real-time 3D vision and context-sensitive pattern recognition with an amazing degree of parallelism any time you've got your eyes open. Cue the "I'm blind, you insensitive clod" jokes. Do you know what the processing speed of a neuron is? Roughly 0.2 kHz, give or take a little depending on type. The Apple I from 1976 runs circles around a neuron with its 1 MHz processing speed. The difference? We have a *lot* of neurons with a *lot* of connections.
The brain proves we can do a lot more of ext
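Putting rough numbers on the parallelism point (a sketch; the ~86 billion neuron count is a commonly cited estimate and is my assumption, not from the thread):

```python
# Crude aggregate-throughput comparison: many slow neurons vs. one fast CPU.
NEURONS = 86e9        # commonly cited human brain estimate (assumption)
NEURON_HZ = 200       # ~0.2 kHz per neuron, per the parent
APPLE_I_HZ = 1e6      # 1 MHz Apple I, per the parent

brain_events = NEURONS * NEURON_HZ
print(f"brain: ~{brain_events:.1e} firing events/sec")          # ~1.7e13
print(f"vs 1 MHz CPU: ~{brain_events / APPLE_I_HZ:.0e}x more")  # ~2e7x
```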
Re: (Score:2)
We have a *lot* of neurons with a *lot* of connections.
The second part is the important one. Neurones in the human brain have an average of 7,000 connections to other neurones. That's basically impossible to do on a silicon die, where you only have two dimensions to play with and paths can't cross - you end up needing to build very complex networks-on-chip to get anywhere close.
Re: (Score:3)
On the other hand, silicon is orders of magnitude faster, so you could use less hardware resources and do many things in sequence, rather than in parallel.
Re: (Score:2)
On the other other hand brains are orders of magnitude more energy efficient. I don't know if the efficiency is even related to the parallelism, asynchronicity, and ultra low "clock speed" of the brain, but it seems plausible that it is. The brain is optimized for efficiency above all else, where we have so far made the opposite trade-offs with computers.
We're doing that "real-time 3D vision and context-sensitive pattern recognition" with a few watts. Doing that with a bunch of GPUs would take thousands of
Re: (Score:2)
The second part is the important one. Neurones in the human brain have an average of 7,000 connections to other neurones. That's basically impossible to do on a silicon die, where you only have two dimensions to play with and paths can't cross - you end up needing to build very complex networks-on-chip to get anywhere close.
We can implement that with a fairly simple grid with pass-through. Say you have a grid (x,y) and (1,3) wants to pass a message to (4,7): we just pass it right to (2,3). Each node does a simple compare: if the destination x is greater than its own, pass right; else if the destination y is greater, pass down; repeat until we hit the right grid node. What's hairy is understanding how to program it into doing anything useful.
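For what it's worth, the pass-through scheme described above is essentially dimension-ordered ("XY") routing, as used in many networks-on-chip. A minimal sketch (the function name and the grid-as-tuples representation are made up for illustration):

```python
def route_xy(src, dst):
    """Dimension-ordered (XY) routing on a 2D grid: each node compares the
    destination to its own coordinates, passing right/left until the x
    matches, then down/up until the y matches. Returns the hops taken."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1   # pass right (or left)
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1   # then pass down (or up)
        path.append((x, y))
    return path

# The parent's example: (1,3) wants to reach (4,7); first hop is (2,3).
print(route_xy((1, 3), (4, 7)))
# [(1, 3), (2, 3), (3, 3), (4, 3), (4, 4), (4, 5), (4, 6), (4, 7)]
```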
Re: (Score:3)
You are mistaken. Every time you listen to music your brain is doing something like 20k floating point ops per second; watching a ball and trying to hit it with your tennis racket involves millions of flops; and on top of that, balancing and moving your body would require a robot a few mega- if not gigaflops. Not so sure about your body, perhaps it needs fewer flops than a robot :)
Your brain is by far the most powerful computing device we have on the planet right now. Only beaten by my brain, of course.
I really doubt a peta flo
Re: (Score:3)
Re: (Score:2)
Re:amazing (Score:5, Interesting)
From Metal-Pages:
In: $600/kg
Ga: $220/kg
vs
Si: $3/kg
The material part of the cost of the chip is likely to go up. I think, however, that that part today is minuscule, so its impact on the price will be small. However, I do think the volume benefits of Si technology (50 years of development and industrial support, and with 13 gazillion Si units produced every year) will be very, very hard to beat with any III-V technology. There's so much new stuff to be done: defect density, passivation, via technology, lithography chemistry etc. The investment for III-V to reach the current position of Si will be huge, and ultimately paid by the customers through higher unit prices.
Material cost is largely irrelevant (Score:4, Insightful)
The cost of the raw materials is completely dwarfed by the cost of processing. Even a very large chip (2 cm x 2cm by .5mm thick) masses less than a gram. It's also likely that these high-performance III-V chips will be built on a cheaper substrate, meaning the thickness of the expensive stuff will be much, much smaller.
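To put numbers on "completely dwarfed", here is a sketch combining the per-kg prices quoted upthread with approximate textbook densities (the solid-InGaAs assumption is deliberately pessimistic, and the In/Ga price average is crude):

```python
# Worst case: price out a 2cm x 2cm x 0.5mm die as if it were solid InGaAs
# (in reality only a thin epitaxial layer on a cheaper substrate would be).
volume_cm3 = 2 * 2 * 0.05                        # 0.2 cm^3
DENSITY_G_PER_CM3 = {"Si": 2.33, "InGaAs": 5.5}  # approximate
PRICE_USD_PER_KG = {"Si": 3, "InGaAs": (600 + 220) / 2}  # crude In/Ga mean

for material in ("Si", "InGaAs"):
    grams = volume_cm3 * DENSITY_G_PER_CM3[material]
    cost = grams / 1000 * PRICE_USD_PER_KG[material]
    print(f"{material}: {grams:.2f} g, ~${cost:.3f} of raw material per die")
# Si: ~$0.001/die; InGaAs: ~$0.45/die -- noise next to fabrication cost.
```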
Re: (Score:3)
I think the bigger problem is, what happens when we reach the long-tail of process development, and demand tapers off to the point they can't fund further R&D?
I.e.: systems are "good enough" and people go from buying one every 3 years to "only when they break". That could be 10+ years.
I suppose Intel would just follow the carrot to the next profitable market like they are pushing Atom CPUs lately?
Re: (Score:3)
I don't know if this'll apply to InGaAs, but for silicon, I did a projection based on ITRS numbers. As transistors shrink, they get faster. But at the same time, process variation gets worse, and that uncertainty requires wider safety margins. At what point does the increase in performance equal the increase in safety margin? 5nm.
It's unlikely that InGaAs will suffer less in terms of random dopant fluctuation and lithographic aberrations, unless it's less damaged by UV, in which case at least the lithog
Re: (Score:2)
It's not a new discussion by any means. It was an old debate when people were asking whether a 100MHz bus was as fast as we could get, and 45nm was considered ridiculously small. The GHz barrier on clock speeds seemed insurmountable.
Didn't stop anyone, did it?
If it can be done, someone's going to try. If it can be done profitably, we'll see it on our desks or in our pockets in a few processor generations. That's just how
Re: (Score:3)
And looking around, the GHz barrier *was* pretty damned insurmountable. Sure, it wasn't at exactly 1000MHz, but that particular number was always a "magical thinking" artifact of how the human brain regards numbers. We hit 1GHz back in 2000, and here we are, 15 years later, and we haven't even managed a single order of magnitude increase in clock speed. Let's put that in proper context: 15 years earlier, in 1985, Intel had just released the 12MHz 386 with optional floating point module.
So, from '85
Re: (Score:3, Funny)
I don't know if such would make my PC run faster, but it sounds delicious!
Re: (Score:2)
Re:amazing (Score:4, Informative)
Actually it was 90, 45 and 22 (with some in between), but the explosion in mobile devices and the scramble for smaller, faster, cheaper was still at work in that market.
Mobile has sort of reached a point where shrinking the device has only marginal value, however. Users want or need a certain screen size and the devices need a certain mechanical strength, so "smaller" components aren't a big value driver. I don't see that faster speeds are going to be a huge value in that market either. Lower power/more battery life is still a bonus, and if costs keep going down at each node, the demand will be there.
Now that we're talking about moving away from silicon however, the smaller, faster and lower power are still considerations, but I think the OP is talking about the point where the new technology can achieve that, but only at higher cost. Are there enough products and applications where people are willing to pay a premium for the extra functionality? We shall see.
Re: (Score:2)
For that, you can thank [ieee.org] IBM [extremetech.com].
They have been at the leading edge of a number of computer technologies over the years. It's a shame that IBM has been so poor at capitalizing on them.
Re: (Score:2)
silicon - cone is for... cones.
Re:amazing (Score:5, Informative)
Cray did it first.
http://en.wikipedia.org/wiki/C... [wikipedia.org]
Seymour Cray built a GaAs-based computer almost 20 years ago. It actually worked, but he ran out of money as the end of the Cold War reduced the need for supercomputers.
Re:amazing (Score:5, Funny)
Silicone? Really incredible - transistors made out of flubber. There is a huge difference between silicon and silicone.
And if you keep abreast of technology you will know that silicone has more to do with enlargement than miniaturisation.
Re:amazing (Score:5, Funny)
many people use silicon to watch silicone so maybe they are more closely related than we think.
GaAs, technology of the future: (Score:2)
Always has been, always will be.
Re: (Score:2)
Re: (Score:2)
Also there isn't enough gallium in the world. Literally. Any future solar or computing tech based on gallium is dead on arrival because of this fact.
Please provide a source for your claim.
InGaAs? (Score:5, Interesting)
GaAs was the future of super-fast transistors. The Cray 3 was made from GaAs.
GaAs has a much higher electron mobility than silicon: about 8,500 cm²/(V·s) versus about 1,500 for silicon. This allows for much faster switching. InGaAs has an electron mobility of 10,000, allowing even faster switching.
But that's just electrons, which are the carriers in N channel MOSFETs. For CMOS, you also need P channel MOSFETs. The kicker is that GaAs and InGaAs have respectively lower and much lower hole mobility, so the P channel FETs switch rather slower than silicon's.
CMOS is pretty much the only architecture in use. Historically it is the most power efficient, since it only spends energy switching. On high speed, small scale CMOS, however, lots of power goes into the switching itself; the switching is fast enough that the devices don't really act very ideally, and there's a lot of leakage.
Perhaps at very extreme ends, other architectures can compete, power wise.
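For a rough sense of the asymmetry described above, a sketch using the electron mobility figures from the parent plus approximate textbook hole mobilities (the hole numbers are my assumption, not from the thread):

```python
# Electron/hole mobilities in cm^2/(V*s); electron values per the parent
# comment, hole values are rough textbook figures (assumption).
MOBILITY = {           # (electrons, holes)
    "Si":     (1_500, 450),
    "GaAs":   (8_500, 400),
    "InGaAs": (10_000, 300),
}

si_e, si_h = MOBILITY["Si"]
for name, (mu_e, mu_h) in MOBILITY.items():
    print(f"{name:7s} N-channel: {mu_e / si_e:.1f}x Si, "
          f"P-channel: {mu_h / si_h:.2f}x Si")
# The N-channel FETs get ~6-7x better while the P-channel FETs get *worse*,
# which is exactly the CMOS problem described above.
```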
Re: (Score:2)
Re: (Score:2)
According to Wikipedia, the natural occurrence of indium is 3 times that of silver, but current world production of indium is 40 times lower, so it is reasonable to assume that indium production can be scaled up if there's increasing demand.
Re: (Score:2)
And, apparently, it is three times as abundant as silver in the Earth's crust, so PARENT made no mistake here.
The minerals in the mantle or core are not easily accessible, so the qualifier "in the Earth's crust" matters.
Needs to be in concentrated deposits (Score:3)
It's a bit more complicated than that. Even if an element is somewhat abundant but evenly distributed in the earth's crust, it's difficult to mine. It's only practical to mine something if it's concentrated in some areas. E.g. gold is rare but you can find it in macroscopic flecks or clumps that are concentrated in certain areas. If gold were not concentrated like that but was instead uniformly distributed in the crust, there'd be no economical way to mine it.
That said, it looks like indium is concentr
Re:InGaAs? (Score:4, Interesting)
> CMOS is pretty much the only architecture in use
No it's not. Complementarity is great, but there's no requirement for it to be MOS-based. MOS is just the best choice for silicon. There are transistors using Schottky barriers and other technologies that are far better suited to InGaAs. Five minutes of googling would have revealed this and nullified your "Score 5 Interesting" argument.
No, the main issue with InGaAs is manufacturing difficulty and expense. You can buy InGaAs chips right now. It's just really expensive technology and not nearly as developed as silicon, both in terms of manufacturing steps and lithography tech.
Re: (Score:2)
Re: (Score:2)
AlGaAs is transparent to mid-IR. This clears the path for photonic interconnects.
Perhaps optic fibers can be spun out of the stuff?
Re: (Score:3)
Well, silicon is reaching its limits - much like with aircraft maneuverability, stability tends to come at a price: modern highly maneuverable fighter planes are so unstable that a human pilot couldn't hope to keep them in the air without constant computer assistance. Modern CPU manufacturing, self-monitoring, and thermal self-regulation are all far more advanced than when GaAs "failed" - I'd say it's got a fair chance at a comeback, though doped diamond may prove more viable once synthetic diamond yields g
Re: (Score:3)
You have to realize that modern technology is quite... wonderful in that it allows us to revisit things that were impractical before, and are practical now.
I mean, back in the early days of microch
Goodbye Silicon Valley (Score:2)
Well maybe future improvements (Score:3)
will involve making chips taller, i.e. various forms of 3D ICs. That would mean that we could continue to get the apparent effects of higher densities, at least for a while, though we'd really just be making taller chips or better-interconnected layers. But it would also mean that the cost of transistors wouldn't go down; it would probably go up.
Re: (Score:3)
You can't just stack cpu chips on top of one another. They'd melt and vaporize. You either have to develop really good cooling tech or ways of reducing power consumption.
One near-term solution is to stack memory (cache levels and main RAM) on the cpu chip. Memory doesn't produce that much heat so cooling would be straightforward. It would be a huge boost to speed to have memory right on top of the cpu. A few companies are working on this.
Re: (Score:2)
GaAs chips have a very high thermal tolerance: temperatures of 250°C have been shown to have no impact on MTTF, which is ~250% better than Si. The bigger issue is what you attach them to; most commonly available PCBs can't handle that, though solutions do exist, since I've read about very high temperature GaAs chips used in jet engine monitoring and control.
Not just heat but also stress (Score:3)
Chips that run hotter also have more thermal gradient, which can put mechanical stress on the various delicate layers of the chip. Being able to run hotter means you can support more of a thermal gradient to ambient, and thus support more heat flow and thus more computations/sec. However, at some point you're going to cause mechanical failure of the chip, especially if the stresses cycle.
So not only temperature tolerance, but also coefficient of thermal expansion and strength of all the various materials
Re: (Score:2)
One near-term solution is to stack memory (cache levels and main RAM) on the cpu chip. Memory doesn't produce that much heat so cooling would be straightforward. It would be a huge boost to speed to have memory right on top of the cpu. A few companies are working on this.
Another I've heard about is going vertical with the transistors. You still have increased worries about heat, but you can get a lot more density that way. Shorter average wire runs also result in less heat per transistor, on average, so increased density and efficiency might outweigh any need to throttle to manage heat.
Re:Well maybe future improvements (Score:4, Interesting)
You can't just stack cpu chips on top of one another. They'd melt and vaporize. You either have to develop really good cooling tech or ways of reducing power consumption.
On-chip heat pipes will become a thing to carry heat away from the center of stacks. We found out that water actually goes faster through channels so small that it has to pass one molecule at a time.
Planes, trained agents and planetary automobiles. (Score:4, Informative)
> III-V semiconductor such as indium gallium arsenide (InGaAs
I think the French will like it, and possibly the Swedes. They use gallium- and indium-based semiconductors in airborne electronic warfare systems, which allows for very high RF energy output in physically very small and high-temperature-tolerant packages. (For example used in the Dassault Rafale and Saab Gripen fighter jets). The French SPECTRA jamming suite is especially famous: the Rafale is not stealthy, it only has reduced radar reflection, but the French trusted their system enough that their pilots were already flying deep in Libyan airspace by the time the US Navy started to launch Tomahawk cruise missiles at Gaddafi. Supposedly there is something equal or better in the American F-35 JSF, but that airframe is so buggy one must wonder if it will ever enter service.
On the other hand, non-silicon semiconductors like Ga and In tend to cost twice the price of pure gold per weight or more. At the most extreme end, the Soviet Russians even created diamond-based semiconductors, for use in space weapons and a planned Venus robotic rover. They invented a diamond crystal growing machine for the purpose, which after the Cold War was sold to a US company, which nowadays grows and sells multiple-carat "cultured" yellow diamonds for ladyfolk decoration purposes. Beware, that femme fatale may wear a supercomputer on her finger! Now you know why multiple-finger gesture support was developed by Synaptics...
forget Silicon Valley (Score:2)
The prices in my condo development in Indium Gallium Arsenide Valley are going to explode!
Re:This is the End, Beautiful Friend, the End. (Score:4, Informative)
Moore's Law had a good run, but she's dead Jim.
It doesn't look that dead just yet [wikipedia.org]. While that graph shows a straight diagonal line of transistor count over time, there should also be a flat line alongside showing the number of people who predict that Moore's Law is dead.
Maybe they can partner with Apple and make a really skinny macbook.
Why would they need to partner with Apple when they can just shrink their own competing Ultrabook spec [wikipedia.org]? They own the trademark to it after all.
Re: (Score:2)
I'm surprised Moore's Law lasted this long. Other bottlenecks seem to be more of a factor of late, such that I thought CPUs would take a bit of a rest due to diminishing practical returns, analogous to a Ferrari stuck in traffic.
Re: (Score:2)
It's definitely slowing down, Westmere EX was 2.6B in early 2011, Haswell EP 5.69B in late 2014 so roughly 42 months to double (Haswell die is ~20% bigger accounting for the 220% count instead of 200%) . A large part of that slowdown though might be economics, Westmere was surely started before the financial crisis and Haswell likely during or after so Intel might have slowed development (especially since on these large parts they don't have any meaningful competition except at the very high end from IBM an
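Checking the parent's arithmetic (a quick sketch; the transistor counts, dates, and the 20% die-size correction are taken from the comment itself):

```python
import math

westmere_ex = 2.6e9    # transistors, early 2011 (per the parent)
haswell_ep = 5.69e9    # transistors, late 2014 (per the parent)
months = 42            # roughly the gap between the two
DIE_GROWTH = 1.2       # Haswell EP die ~20% bigger, per the parent

# Normalize for die size, then convert the growth ratio to a doubling time.
ratio = (haswell_ep / DIE_GROWTH) / westmere_ex
print(f"density-adjusted doubling time: ~{months / math.log2(ratio):.0f} months")
# ~48 months -- noticeably slower than the canonical 18-24 of Moore's Law.
```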
Re: (Score:2)
Moore's Law had a good run, but she's dead Jim. Two, maybe 3 shrinks at most, and you're at the end of getting benefit from feature size.
Moore's law is really all about "cost" per transistor. While process shrinks are certainly an important enabler they don't have to be the only driver that keeps things going.
Re: (Score:2)
Seems like you took too long to type yipee there. Better luck next time. Try a few e's less maybe?
Re: (Score:2)
Doubtful, Ga isn't that rare, we mine ~254t per year mostly as a byproduct of Al smelting, this is fairly small compared to ~54,000t for Si use in semiconductors, but is quite high given the fairly small market for it today. To give you an idea Lithium is slightly less common in the crust but annual production is ~30,000t.
Re: (Score:2)
Silver is 3x more rare and is mined at ~18,000t per year, so again you can reasonably expect ~60kt per year if prices support the effort (though that's a bit misleading since elemental silver veins happen in numerous places but In has not been found in similar streaks)
Re: (Score:2)
The "rare" in Rare Earth Metals/Minerals says nothing about actual rarity. It's only a statement on whether they can be found in concentrated ores or not.
Re: (Score:3, Informative)
That is actually not correct ... many of them are actually absolutely not rare.
The name comes from the fact that they were considered rare when they were discovered; the whole third group and the lanthanoids are considered 'rare earth metals'.
Their oxides are rare ores, perhaps you meant that. On the other hand, 'deposits' of these minerals are rare too. But they are usually mined in quantity together with other ores, the primary ore of the deposit in question.
See e.g. http://en.wikipedia.org/wiki/L... [wikipedia.org].
Re:Resource wars (Score:5, Informative)
Despite their name, rare earth elements (with the exception of the radioactive promethium) are relatively plentiful in Earth's crust, with cerium being the 25th most abundant element at 68 parts per million (similar to copper). However, because of their geochemical properties, rare earth elements are typically dispersed and not often found concentrated as rare earth minerals in economically exploitable ore deposits.[3] It was the very scarcity of these minerals (previously called "earths") that led to the term "rare earth".
http://en.wikipedia.org/wiki/R... [wikipedia.org]
Re: (Score:3)
The ingredients are definitely nasty, so there's concern for industrial waste and exposure. However, the finished material has proven to be relatively harmless in animal studies. I was surprised to learn this, but that seems to be the conclusion, so there should be no immediate risk for using the end products.
I'm not sure about the stability of the compounds or how they degrade over time.