Ask Slashdot: Why Are There No Huge Leaps Forward In CPU/GPU Power? 474
dryriver writes: We all know that CPUs and GPUs and other electronic chips get a little faster with each generation produced. But one thing never seems to happen -- a CPU/GPU manufacturer suddenly announcing a next generation chip that is, say, 4-8 times faster than the fastest model they had 2 years ago. There are moderate leaps forward all the time, but seemingly never a HUGE leap forward due to, say, someone clever in R&D discovering a much faster way to process computing instructions. Is this because huge leaps forward in computing power are technically or physically impossible/improbable? Or is nobody in R&D looking for that huge leap forward, and rather focused on delivering a moderate leap forward every 2 years? Maybe striving for that "rare huge leap forward in computing power" is simply too expensive for chip manufacturers? Precisely what is the reason that there is never a next-gen CPU or GPU that is, say, advertised as being 16 times faster than the one that came 2 years before it due to some major breakthrough in chip engineering and manufacturing?
One word (Score:5, Insightful)
Physics
Re:One word (Score:5, Informative)
To elaborate: We can't reliably clock silicon much faster than we're doing right now.
There are other semiconductors (such as GaAs) which can operate reliably at higher frequencies, but they are absurdly expensive, produce too much heat, consume too much power, and so on -- not to mention the fact our tiny process sizes for silicon don't exactly work for entirely different materials (chemistry bites again).
We're running into a similar wall for die shrinkage, on multiple fronts:
- We're getting into the size territory where bits flip due to quantum tunneling, which tends to hurt reliability. Flash storage has started to reach that territory, if my colleagues working for ${SSD MANUFACTURER} are telling me the truth.
- Yields of working units are going down significantly as the die shrinks, and it's taking a lot longer to figure out how to bring yields back up.
In the end, every material has its limits, and we're starting to run into them with Silicon, and there isn't a material that 'stands out' as worth betting the business on.
Re: (Score:2)
Yields of working units are going down significantly as the die shrinks, and it's taking a lot longer to figure out how to bring yields back up.
Actually no, the yield is higher for smaller die sizes at a given technology, since the likelihood of having a defect on your die is lower.
On the other hand, time-to-ramp is longer for advanced tech nodes, although some companies like TSMC have shown impressive numbers.
Re: (Score:2)
I was about to write something like you did; however:
since the likelihood of having a defect on your die is lower.
No, the "likelihood" is the same. You have X defects on a wafer, so up to X chips will have a defect in the end. However, as you increase the number of chips you get per wafer, the percentage of defective chips shrinks. Or in other words: the smaller the chips, the more you get from one wafer. The number of defective chips stays the same, though.
Re: (Score:3)
The original statement was more about bad yields on newer processes. I.e. if your yields are only about 10% on 10nm trying to make a desktop chip then it's a terrible yield no matter what. If your yield for the former process was 80% for the same area then even if your new die is a much smaller version of the former one, it will have a worse yield overall. Wafer cost and design cost also increase.
The 10nm process has to be improved over time to become economical. That's business as usual, but it's taking more
Re: (Score:3)
Thus, with all else being equal, more flaws. Those flaws that previously were too small to matter now do.
I hope I made that clear enough.
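To put numbers on the yield discussion above: a common first-order model (the Poisson yield model, with illustrative figures that are not from this thread) says per-die yield is Y = exp(-D*A), so shrinking die area A at a fixed defect density D raises the fraction of good dies per wafer:

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative numbers only: 0.2 defects/cm^2 on a mature process.
D = 0.2
big_die = die_yield(D, 6.0)    # a ~600 mm^2 GPU-class die
small_die = die_yield(D, 1.0)  # a ~100 mm^2 mainstream die

print(f"large die yield: {big_die:.1%}")   # roughly 30%
print(f"small die yield: {small_die:.1%}") # roughly 82%
```

Note that this supports both posters: the absolute number of defects per wafer is fixed by D, but smaller dies mean fewer defects land on any given chip, so the good-chip percentage goes up. New process nodes start with a much higher D, which is the yield-ramp problem.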
Comment removed (Score:5, Informative)
Re: (Score:3)
Not the physics you are thinking of, though. Let's use the favoured Slashdot car analogy. Why are cars no longer getting more and more powerful? Why isn't the average consumer car capable of 300 km/h? Because it cannot be safely or legally used, and it is a waste of energy and resources; it serves no purpose except to let someone stroke their ego whilst stroking their private parts. Sure there is a market for it, a very tiny market (heh heh), but not sufficient to continue development; most people want more fuel
Plastics? (Score:3)
Central high power cloud machines are just a disaster waiting to happen, how many times does this have to be proven.
Once would be a good start. Do you really think that people are not designing fault-tolerant network infrastructure?
Re:One word (Score:4, Insightful)
The problem is that each new generation of programmers is lazier than the one before them. All the increased CPU power is wasted on bloated libraries, OS processes, etc.
Re:One word (Score:5, Interesting)
While the "end effect" is true, it has nothing to do with laziness.
Paying a programmer is expensive. The employer would rather have you finish quickly and sell your work early, with "drawbacks" e.g. more memory usage and less speed.
And the real culprits are the marketing droids that think programs and OSes need a new UI experience every few years. A huge deal of programming efforts and bloat is wasted and does not bring any value to the users.
Re: (Score:3)
The frameworks themselves aren't bloated because of laziness (generally, per se), but the programs using these frameworks are bloated due to laziness.
e.g: You need to write a program which does 2 or 3 nontrivial but common tasks. You could write your own or research and use 2 or 3 lightweight and efficient libraries for those specific tasks, but that would be effort, so you use a framework you've worked with before which has the 2 or 3 things you need plus 50000 other features. And that's how you end up wit
Laziness (Score:5, Insightful)
Laziness is a virtue in a programmer. [codinghorror.com]
The whole point of this profession is to save labor. That includes programmer labor, especially because it's an expensive commodity.
I don't know who has mod points today but this comment is frankly ridiculous.
Re: (Score:2)
"(...), or to create it faster."
To create it faster *once*, notwithstanding that it will run millions of times!
Re: (Score:3)
So they claim... I've seen perfectly mundane software that's more than 100x larger than older software that still somehow manages to do less than older versions.
That is, equal or lesser complexity, dramatically larger size, unimaginably worse performance.
I blame the attention paid to "do everything" libraries and frameworks used because they're popular, not because they add value. The defense is always "don't reinvent the wheel" and "if we want to add this or that someday" or some variation of the two. If
Re: (Score:2)
Stick with small, special purpose, libraries.
Doesn't only the specific call get linked in and not the whole library? If I call acosh(), I don't need all 120 dead code math functions.
Re: (Score:3)
Comment removed (Score:5, Insightful)
Comment removed (Score:5, Interesting)
Re:One word (Score:5, Interesting)
It's really capacitor charge time. In CMOS technology, you basically have a metallic plate (the gate) sitting on some semiconductor (separated by an insulator).
As electrons flow into the 'plate', they accumulate. This creates an electric field which pushes electrons in the semiconductor away creating a channel of 'holes'. It's through this channel that electrons can flow (drain to source). Note that the electrons moving through the CMOS gate are typically sent to another transistor. And as soon as that plate fills up with electrons, current stops flowing through the device. And since power = current x voltage (IxV), you only dissipate power while the device is switching and this is why there is more current drain (and heating) the faster that you switch. Leakage current blah blah disclaimers.
CMOS Transistor [wikimedia.org]
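The parent's point that power is burned mostly while switching is captured by the standard first-order CMOS dynamic power formula, P = a*C*V^2*f (activity factor times switched capacitance times voltage squared times frequency). A tiny sketch with made-up numbers, not any real chip's parameters:

```python
def dynamic_power(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    """First-order CMOS dynamic (switching) power: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Illustrative numbers only.
base = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.0, f_hz=3e9)
fast = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.2, f_hz=4.5e9)
print(f"baseline: {base:.2f} W, overclocked: {fast:.2f} W ({fast/base:.2f}x)")
```

Because V enters squared, and higher clocks usually need higher voltage, the 1.5x clock bump here costs over 2x the switching power, which is why clock scaling hit the heat wall.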
Re:One word (Score:4, Interesting)
Re:One word (Score:4, Interesting)
- Yields of working units are going down significantly as the die shrinks, and it's taking a lot longer to figure out how to bring yields back up.
In the end, every material has its limits, and we're starting to run into them with Silicon, and there isn't a material that 'stands out' as worth betting the business on.
So, Moore's Law is dead.
Moore's law remains a remarkably correct prediction. However, the prediction concerns both feature size and cost, and it predicts costs rising in pretty much the fashion they have. It's exponential.
However, in terms of computing power, the vast majority of the increases have been architectural, not from process improvements. If we stopped at 10nm and never went below that, computers would continue to get faster. I am aware of techniques that will continue to improve the processing speed of CPUs. They are not feature size improvements. They will come out in due course. But feature size is no longer limited by our ability to shrink it; it's limited by the cost of doing so. Who's going to drop $100,000,000,000 on a fab in 5 years to get below 5nm? Other techniques become more effective per unit dollar.
We push these things on all fronts. I've seen some pretty crazy schemes and I've seen some fail and some succeed.
My personal opinion as someone who works on these CPUs is that the recent (4-5 years) slowing of CPU power increase (note that improvement in instructions per Joule hasn't slowed) is going to change. New things will come down the line that will dramatically increase the speed of doing stuff. It's happened with specific workloads like graphics, or crypto or RNGs or disk I/O. Other things will continue to improve as attention is spent on improving them.
Notice how your CPU isn't awesome at DSP, but there are plenty of DSP oriented CPUs that blow any general purpose CPU out of the water on those tasks. There are datapath oriented architectures that can move data faster than any general purpose CPU sitting in big iron routers everywhere. As the demand for specific workloads change, the general purpose CPUs will follow.
C versus SQL. SQL is understandable, and parallel (Score:5, Interesting)
> trying to teach some of the programmers out there how to program effectively on the various parallel platforms is harder than trying to alter physics.
Which could also be phrased as:
So far, many of the parallel platforms available are much harder to learn.
Programmers can and do learn new and different ways of working, provided that the new ways don't suck.
C, Java, etc are all imperative, scalar, object-based languages. SQL is a completely different paradigm: declarative and set-based. In other words, in most programming languages the programmer tells the computer how to do some task, with some value. In SQL, the programmer tells the computer what the result must be, without specifying how to do it, and all fundamental operations work on sets, not individual values. Yet most programmers can and often do learn the declarative, set-based way of programming just as well as they learn the classic imperative way. They learn two very different ways of thinking and programming, because SQL is reasonably good: it's quite learnable, with or without understanding the underlying mathematical concepts.
There's no fundamental reason you can't have a parallel programming language or library for general purpose programming that's roughly as easy to use as SQL. In fact, SQL may point the way in many respects - besides being a learnable paradigm, it's fundamentally parallelizable precisely because the fundamental operations all use sets as input and output. All the major operations could easily be completely parallelized behind the scenes and the user (programmer) wouldn't have to know or care.
Maybe that's the way to go, since we know programmers can and do use sets - introduce a set-based general purpose language. To avoid leading programmers into temptation, the language should have no loop constructs. With no capability to run this:
foreach blah in group {
result[i++] = do_stuff(blah);
}
programmers will quickly learn to instead write:
results = do_stuff(group);
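Python's standard library already lets you sketch that set-based style: express the work as a mapping over the whole group, and the runtime rather than the programmer decides how to spread it out. (do_stuff and group are the parent's placeholders, not a real API.)

```python
from concurrent.futures import ThreadPoolExecutor

def do_stuff(x: int) -> int:
    # Stand-in for some nontrivial per-element computation.
    return x * x

group = range(10)

# The imperative, order-dependent style the parent wants to discourage:
results_loop = []
for blah in group:
    results_loop.append(do_stuff(blah))

# The declarative, set-based style: one operation over the whole set.
# The executor is free to run the elements in parallel behind the scenes.
with ThreadPoolExecutor() as pool:
    results_set = list(pool.map(do_stuff, group))

assert results_loop == results_set
```

(For CPU-bound work in CPython you'd reach for processes rather than threads because of the GIL, but the shape of the code is the same.)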
Re: (Score:3)
But you don't have to look to future software for this.
ASIC design languages create designs that are explicitly parallel, and they do it easily. Sure, there are synchronizations that have to happen, but that may not apply to much of the design. They are explicitly event-oriented and combinational (when this event occurs, do one of the following things depending on the state of these other two signals). I have sometimes been amazed at how quickly, in how small a description, and with a full test suite,
Re: (Score:3)
It did indeed have a construct like:
Unfortunately, it was not American.
Re: (Score:3)
Maybe that's the way to go, since we know programmers can and do use sets - introduce a set-based general purpose language. To avoid leading programmers into temptation, the language should have no loop constructs. With no capability to run this: foreach blah in group { result[i++] = do_stuff(blah); }
programmers will quickly learn to instead write: results = do_stuff(group);
I agree, but I think you've taken it a step too far here. Look back at maths and how things like sigma summation and similar things like the product function work. Because of the mathematical properties of these, they are order independent, and inherently parallelisable.
Eliminating loops doesn't mean eliminating a "foreach" -- it just means treating each instance of the block as its own scope, and ensuring that no instance can access the variables of another instance. (Talking "instances" instead of "itera
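The sigma-summation point can be shown directly: because addition is associative, you can split the set into chunks, reduce each chunk independently in any order, then combine the partial results, and the answer matches the sequential loop. A minimal sketch:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 1001))
sequential = sum(data)

# Split into chunks, sum each chunk independently (order doesn't matter),
# then combine the partials -- the shape of every parallel reduction.
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(sum, chunks))
parallel = sum(partials)

assert sequential == parallel == 500500
```

This only works because the operation is associative; a loop body that mutates shared state has no such guarantee, which is exactly why loop-free, set-based constructs parallelise so cleanly.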
Re: (Score:3)
Re: (Score:3, Insightful)
I'm not sure what world you are living in, but in the one I'm in, we have CPUs with a lot more cores than 4, 6 or 8.
For starters, mainstream Intel dual socket supporting processors have 22 core options - E5-4669 v4 for example. So, you can get 44 cores into a dual socket machine.
Sun/Oracle got into this game in a big way with their T series processors, and blurred threads vs cores (in a very interesting way), producing things like the T5 with 16 cores and 128 threads - it's like hyperthreading, but very cle
Re: One word (Score:3)
FYI, Apple's languages/tools (like GCD, Swift, and OperationQueues) make it very easy and manageable to take advantage of concurrent programming. (At least compared to other systems I've seen.)
Re: (Score:3)
The power problem is not the cost of the power. The problem is that any machine that does work also generates waste heat according to the laws of thermodynamics. So a chip that uses more power generates more heat which has to be got from the chip to outside the package and dissipated into the environment. If the heat is not removed then the chip temperature increases to the point where so many thermal electrons are generated the chip no longer works.
The issue is also affected by the linewidth shrinks used t
Re: (Score:2)
Re: (Score:3)
Physics
Yes and no.
Yes, as in there is a limit to what we can do with silicon and transistors; but also no, because of the way innovation tapers off after a few decades. It's the same reason that we don't see huge leaps in car, aeroplane and oven technology. The designs have matured to a point where, for the most part, we're just adding minor improvements to tried and tested designs. Intel/AMD/NVIDIA have pretty much reached this point, and it will take a disruptive technology to change that.
Said disrupt
Re: (Score:3)
Computers were first built back in the 40's... we didn't get them into the home until the 70's.
The computers built in the 1940's used valves, not silicon. The first transistor-based computers were in the early 1950's so that's when the clock should start ticking since valve-based computers were clearly never going to be a consumer item. The same may be true of the next generation of computer technology - the current tech for quantum computers is not really consumer friendly if that turns out to be the next generation technological platform.
Re: (Score:3)
Computers were first built back in the 40's... we didn't get them into the home until the 70's.
The first transistor-based computers were in the early 1950's so that's when the clock should start ticking since valve-based computers were clearly never going to be a consumer item. The same may be true of the next generation of computer technology - the current tech for quantum computers is not really consumer friendly if that turns out to be the next generation technological platform.
Fair enough, it's 2:30 where I live and I didn't feel like reading the Wikipedia article on computers to get exact dates. However that's still 20 years from prototype to home product so I stand by my point.
I forget where I read it, (back in high school, which is a while ago for some of us) but it takes 25 years from the point where a new technology becomes available for it to integrate into our lives. Replacements for silicon are largely still theoretical.
Re: (Score:3)
Speed of electrons or even light isn't the problem. It's the capacitance. The destination transistor feels the voltage change at the speed of light, but it doesn't change its own stored charge fast enough to register a "0" or "1". This has much more to do with intrinsic resistance of the material locally than how far the signal has to travel.
The problem is that a material that's a semiconductor will typically straddle some range between conductance and resistance (by definition). So conductance is hard to i
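The parent's capacitance point can be made concrete with the usual first-order RC model: a gate node charged through effective resistance R follows V(t) = Vdd*(1 - e^(-t/RC)), so the time to cross a logic threshold scales with the product R*C, regardless of how fast the wavefront arrives. A sketch with made-up values:

```python
import math

def time_to_threshold(r_ohms: float, c_farads: float, frac: float) -> float:
    """Time for an RC-charged node to reach frac * Vdd: t = -R*C * ln(1 - frac)."""
    return -r_ohms * c_farads * math.log(1.0 - frac)

# Illustrative numbers only: 1 kOhm effective channel resistance, 1 fF gate.
tau = 1e3 * 1e-15  # RC time constant: about 1 picosecond
t50 = time_to_threshold(1e3, 1e-15, 0.5)
print(f"tau = {tau * 1e12:.2f} ps, time to reach 50% of Vdd = {t50 * 1e12:.2f} ps")
```

Shrinking C (smaller gates) or R (better materials) is what actually buys clock speed, not shortening the wires the signal travels down.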
Huge leaps (Score:2)
Market (Score:5, Informative)
Most likely, there is no major competition in the market, and PC sales on the whole have slowed considerably. A modern 6800K processor is as close as you'll come to a leap forward, but it's $1100 Canadian and requires a similarly expensive motherboard + memory. Same with similar chips.
Meanwhile the cheapest system on the market is as fast as a moderately high-grade enthusiast computer from 2010 and probably has reasonable 3D graphics onboard; with an SSD it will feel quite snappy.
So, a) not a lot of market demand for faster systems, b) lots of tablets and game consoles for entertainment out there, c) moderately faster systems exist but cost keeps them low-volume, d) very low-percentage demand for faster computers - definitely less than 1% that will pay a premium for it, e) the majority of gamers are young-ish and they play largely twitch games even on PCs which are more GPU limited than CPU limited.
Re:Market (Score:5, Interesting)
Dude, gamer GPUs are increasing in performance incredibly fast. They double in speed every 2 years. The only reason the desktop is not innovating is because Intel has a monopoly and won. But that is changing starting with Kaby Lake, thanks to AMD Ryzen. It is back to 15% every year again, and maybe even more, as graphics shows no slowdowns anytime soon.
Shoot, for $185 you can get what $399 did in late 2014/2015, at all max settings in games.
Re: (Score:3)
GPUs are increasing incredibly fast because of a couple of reasons. First, they're not anywhere close to the same die size as a CPU. They're roughly 2 generations behind CPU's in shrinking, that means the tolerances can be off and it won't make a huge difference and can "run wild" without the danger of causing errors. But can benefit from all the advances that AMD and Intel have gone through with each die shrink. The second is GPU's are able to increase their die size and transistor count as well as hav
Re: (Score:3)
That's not true; GPUs basically always use the latest process technology available, just like CPUs. Recently, there have been some degenerate cases where a new process is (at least initially) slower and more expensive than the previous one; but in general, they always move to the latest and greatest process, once that process is capable of making a better product.
As for die size, the big GPUs are way bigger than CPUs. A 22-core Xeon Broadwell E5 from 2016 is 7.2 billion transistors, and 456 mm^2. The NVIDIA
Re:Market (Score:4, Informative)
Sadly low power dedicated graphics cards aren't being made, due to integrated graphics removing the OEM market for it. The lone exception is geforce GT710 (and the GT610 before that) with a 19W TDP, and a somewhat rare nvidia GPU (GM108) on some ultrabooks.
Either AMD or nvidia could make a low power GPU like that with the latest technology and some LPDDR or DDR4 memory, if they so wished.
nvidia almost released a 15W graphics card with a Maxwell GPU
http://wccftech.com/nvidia-gef... [wccftech.com]
Re: (Score:3)
Most likely, there is no major competition in the market, and PC sales on the whole have slowed considerably.
Sorry, but I think this is plain wrong because they're always working to lower their own cost. Even in the absence of competition if Intel could make a processor twice as fast, they'd make it half the size and sell the same performance at a much higher profit margin. And while the PC market has shrunk it's still 270 million PCs/year or about 75% of its all time high, it's a huge market even if it's not a growth market anymore.
Business decision (Score:3, Insightful)
Limitations (Score:3, Informative)
Increases in clock speed have hit a wall with current silicon based semiconductors. Exotic semiconductors and incredible cooling systems aren't practical for the mass market.
Re:Limitations (Score:5, Interesting)
In a way, process limitations are a welcome obstacle, that should motivate reflection on legacy decisions, and perhaps finally allow the x86 architecture to be put to rest. Many consider x86 "good enough", but the problems with legacy hardware run a lot deeper than performance, and are largely responsible for the horrific state of computer security today.
The main problem isn't legacy hardware, but legacy software. The x86 architecture is already dead, and most of what we see is a hardware translation of x86 to a CPU architecture that isn't accessible to the coder.
I believe that the only way out of this is for us to start making more heterogeneous parallel chips. At the moment, this only really exists in the form of packages of CPU+GPU on a single chip. But if we had (for example) ARM+x86+GPU, we'd be able to run an ARM-based Linux or Windows environment, but power up the x86 core as required to run any vital legacy apps. This would mean it would slowly become more and more economical to develop for ARM (or whatever your chosen architecture is) and we'd be able to start thinking about retiring x86 sooner. And hell, it's not like even Intel are really fans of x86 themselves -- they've already tried to ditch it once (remember Itanium?), and in the end it was AMD who extended the x86 architecture to 64-bit, not Intel. Intel wants away from x86, the market wants a better architecture, we just need a stepping stone that guarantees legacy software compatibility, and when so many multiple cores lie idle, I don't see why heterogeneous multicore isn't recognised as the solution.
Why Are There No Huge Leaps Forward In CPU power? (Score:5, Insightful)
"Relative to GTX 980 then, we're looking at an average performance gain of 66% at 1440p, and 71% at 4K. This is a very significant step up for GTX 980 owners,"
http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/32 [anandtech.com]
Re:Why Are There No Huge Leaps Forward In CPU powe (Score:5, Informative)
Architecture-wise, Pascal was mostly an incremental upgrade to Maxwell.
The big difference from Maxwell to Pascal was a process upgrade from 28 nm to 16/14 nm which allowed the clock speed to bump 50% from around 1 GHz to around 1.5 GHz.
Couple that with improved memory and a good balance of different types of units for the best performance in typical games of its time.
Re: (Score:2)
Re: (Score:2)
In fact, process improvements are critical as a valid source of performance gains.
That's pretty much Intel's entire chip development model [wikipedia.org]...
Re: Why Are There No Huge Leaps Forward In CPU pow (Score:3)
Breakthroughs are NOT plannable projects (Score:5, Insightful)
The poster asks a question that assumes breakthroughs can be planned just like any other development project. But they cannot be; or rather, the ones that could be planned and worked have already been achieved. The computer science field has been operating awash with funding for at least 55 years.
I'm not saying there are no breakthroughs out there; what I'm saying is that our current project methodology has already discovered all it can, and most future breakthroughs will come from some other methodology.
The target, CPU/GPU power is also not especially compelling -- compared to the past, there is much less pressure to increase performance, and considerable uncertainty how the increase will be helpful.
Re:Breakthroughs are NOT plannable projects (Score:4, Insightful)
I'd mod you up if I could... at this point, it's starting to look like we need a material breakthrough - Silicon appears to be reaching its limits.
Re: (Score:2, Interesting)
Huge breakthroughs happen when some option has not been tried due to lack of funds, vision, laziness, monopoly markets or some other crap. In a field where smart people have been exploring all options at the cutting if not bleeding edge, there won't be an overlooked angle which can suddenly give a 16x jump.
In short a huge breakthrough is not a sign of greatness rather it is a sign that there was something wrong with the field and someone figured out how to fix it.
Huge breakthroughs will never happen in a heal
Re: (Score:2)
Two excellent points in this comment - the obvious one about breakthroughs not being a planned project, and the other, also important: there just isn't a huge financial motivation for a company like Intel to make a chip an order of magnitude faster right now.
That's especially true if you look at the inevitable tradeoffs - if they could make a chip 10x faster using 10x more power, would they bother? Or 10x more power with 10x cost? Probably not, since the market would be so limited. These days - both in m
Playing too much Civilization (Score:3)
The CIV games make young minds think that technological breakthroughs are simply a matter of money and time, then BANG tech advance!
Somebody needs to start airing "Connections" again: http://topdocumentaryfilms.com... [topdocumentaryfilms.com]
Intel just got faster (Score:5, Informative)
The sole reason Kaby Lake got hot and clocked in so fast is because AMD was just around the corner, and it worked to beat Ryzen. I expect the CPU race to heat back up again, as physics has not killed innovation yet.
Proof is that GPUs and phones are still improving at breakneck speed. It is only because of an Intel monopoly that the desktop has come to a standstill.
Re: (Score:3)
I think you will find that Intel is paddling as fast as it can, Qualcomm among others is snapping at their heels.
Most People Only Want a Window to the Internet. (Score:5, Insightful)
Right about 2008/2009 computer hardware became "good enough" to appeal to people's basic needs which really only centered on having a simple window to the internet. Netbooks became available and smartphones started to become good enough to browse the internet on their own. Consumers at the end of the day really only want a platform that's able to view into the internet.
Someone can correct me, but I believe such innovation is still occurring for server technology and niche fields like a/v production, cad, and animation. Though, I do yearn for the olden days when consumer technology was cool and exciting. Being a tech nerd in the 90s was something else!
There used to be (Score:3)
I remember when Pentiums were first coming out. P75, P90, P100, P133, P166. They were faster than the 386s and 486sx and 486dx models. The p166 was noticeably more than twice as fast as the P75 on lots of tests. The Mhz and Ghz races are over.
We can't just ramp up cycles anymore with silicon. It puts out too much heat. Multicore doesn't magically make programs faster unless they lend themselves well to parallelization and are coded properly for it. New architectures have been tried, but ultimately fail because they're costly or proprietary. ARM was a pretty good leap forward for mobile use. New instructions are being included in CPUs all the time -- especially ARM. Try to play a HEVC 1080p video on a 2013 tablet vs one today... you'll notice a difference right away. Check the CPU usage -- one's at 100% and dropping frames left and right while the other barely nudges past 15%.
Intel or AMD could sell you a chip with 256 cores on it, but unless you do a lot of video encoding or physics rendering, it'd be wasted on you... and super expensive b/c they have no incentive to make it in volume. Maybe when VR or AI becomes commonplace, you'll drive demand for such architectures.
CPUs are fast enough for just about anything one could think to do with them at a consumer level. GPUs can be made better, but market forces push for low power that's "good enough" for most users. CPUs and even GPUs aren't the bottlenecks anymore -- it's RAM, SSD, PCI-express lanes, various busses like USB, thunderbolt, HDMI, SATA, etc. Doesn't do much good to stuff a really fast CPU or GPU into a system if you can't feed it data fast enough to max it out. Most CPUs already have several layers of cache as well as branch prediction to help with the crippling latency from other I/O, but it's still not enough.
Changes are usually evolutionary, not revolutionary... and we've tweaked so much with CPUs and GPUs, you're not going to see a big bump until we move away from silicon and PCB to say... diamond or carbon nano-wires and optical computing.
Because there's no such thing as one "performance" (Score:5, Informative)
CPU architect here. I'll try to provide some insight.
Performance for CPU/GPU or any computational tool isn't exactly just a number you hit. It's not like bandwidth for storage or communications nor is it like a battery's capacity.
A CPU, and to a lesser extent a GPU, is able to perform all sorts of computational functions (anything logical). Each of these involves different usage patterns of the different computational paths inside a piece of silicon. And thus, speeding up each of these usage patterns requires different structures.
A single piece of code running something complex like launching an app or opening a webpage will generate hundreds of millions of instructions with lots of different patterns. Think about all those APIs you call. How much code do you think is similar between them?
And thus the problem of improving "performance". The goalpost is a moving one. Speed up one code pattern, and you risk your changes hurting another. Or you can spend extra transistors making a specialized accelerator for that code pattern. But then... it'll be idle 95% of the time.
And if you speed up a particular function by 1000x (it's happened), your average speed increase for a typical benchmark or API call will still be 0-1%. Because that function is only a small piece of the larger codebase.
Think about how many non-similar libraries and functions there are in typical software, and think about how there's any way to speed them *all* up. You can make memcpy or memset (malloc uses these) faster by 5x and that'll speed up javascript processing by....0.01% or so.
The reason "performance" doesn't increase as drastically in the computer world is because computing "performance" is very very multifaceted. Much like how "intelligence" can't just be increased by 5x -- someone can get 5x better at specific tasks, like memorizing or image recognition, but that doesn't make them 5x more "intelligent".
Compare this with a simple metric like 0-60 acceleration or network bandwidth.
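The memcpy example above is Amdahl's law: if a fraction p of total runtime is accelerated by a factor s, the overall speedup is 1 / ((1 - p) + p/s), which collapses toward 1 as p gets small. A quick check with the parent's kind of numbers (the 0.1% figure is illustrative):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when fraction p of the runtime is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# If memcpy-like work is only 0.1% of total runtime, a 5x speedup there...
overall = amdahl_speedup(p=0.001, s=5.0)
print(f"overall speedup: {overall:.5f}x")  # barely above 1x

# ...and even a near-infinite speedup of that 0.1% caps out just above 1.001x.
cap = amdahl_speedup(p=0.001, s=1e12)
print(f"upper bound: {cap:.5f}x")
```

This is why a 1000x win on one function can still round to a 0-1% gain on a whole benchmark.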
Re: (Score:2)
Re: (Score:2)
AMD (Score:2)
AMD, to be fair, has pretty much done this just now with the Ryzen chips.
Re: (Score:2)
AMD moved from 32nm and 28nm to 14nm, and amazingly experienced the same performance increases Intel saw when it moved from one node to another.
I realize that, sadly for some of you guys, CPUs are inexplicable magic boxes, but they just aren't. Put some effort into understanding, or turn in your geek card.
No context (Score:4, Interesting)
This question lacks context. In terms of desktop PCs and common everyday usage, we don't NEED more speed or power. Nothing is going to speed up webpages or Facebook or whatever people typically do on their PCs. And even if something did, you'd then be constrained by the speed of the internet, and there wouldn't be much perceived benefit.
On the mobile side, there is room for more speed, but it comes at the expense of power and is still constrained by connection speeds and website performance on mobile devices, which often sucks. Throwing faster and more processing power at the problem isn't necessarily the fix that is needed.
There are cases where rendering and other heavy duty uses might benefit but the vast majority of people never use those things. Even gaming is usually constrained by other things like the GPU, the game engine, connection speed, and human performance.
The major places where computing power is much more important are in things like supercomputing but those machines don't run desktop programs and don't work the same way. Only the people directly using those machines would ever have any idea how fast they are or how much faster they wish they could be.
So, to recap, desktop PCs are adequate, mobile devices are still finding a balance between power and power usage, gamers are off on their own island but sheer CPU isn't a magic fix, and supercomputing, where extra power would matter, is so far removed from everyday users, there is no way to relate to it.
Re:No context (Score:4, Insightful)
You need a netbook.
I need a 6 GHz 8-core because I do actual work on the computer, like compiling and rendering.
PCs are NOT adequate, because software today is complete shit; almost none of it is written well for multithreading.
Again, mostly because programmers coming out of colleges are poorly trained, and then companies want them to bang out trash rather than well-optimized code that takes advantage of the hardware.
Re: (Score:2)
Gate tunnelling current (Score:5, Informative)
Moore's law had a great run: ~40 years from early 60s to early 00s.
During that time, every generation boosted density, gate count, clock speed, and value per dollar.
The (exponential!) rule of thumb was 2x more every 18 months.
Everyone knew it had to stop sometime: you can't make things smaller than atoms.
What finally did stop it (considerably north of atom-scale) was gate tunnelling current.
In a MOS-FET, the gate is separated from the channel by an insulator (SiO2).
As you scale the transistor down, that insulator gets thinner, along with everything else.
When the insulator thickness is less than the wavelength of an electron, you start to get significant tunnelling current.
This acts like a short circuit from power to ground.
The technology hit the wall around 2003.
Gate tunnelling current was then over half of total power dissipation.
The power density of the CPU chip was 150 W/cm^2 (like a stove top),
and going further was clearly impractical.
As it happens, the clock speed at that design node was 3 GHz,
and that's pretty much where we are today.
Everything since then has been building bigger, not faster: multi-core, caches, SoC;
plus architecture tweaks and optimizations, like pipelining and super-scalar.
It was a great run while it lasted, but it's over,
and we're not getting another one without a fundamental scientific/technological breakthrough,
on the order of coal, or steel, or quantum mechanics.
Re:Gate tunnelling current (Score:5, Funny)
Excellent (and accurate) observations, but :)
can I just say?
The way you did your line-breaks
made me think at first glance that you had written your
Comment in verse. Maybe,
"An Ode to Moore's Law"?
Re: (Score:3)
It's nice to get the real answer amidst all the bullshit. I experienced nearly 20 years of those processor speedups, and it was glorious. Too bad it came to an end. If the trend had continued, we'd all be using some terahertz CPUs by now.
Risk Averse CEOs are holding us back (Score:5, Informative)
Risk-averse CEOs who don't want to sink in the R&D to make carbon-based chips, because there is a risk of it not working.
A synthetic diamond transistor was first built and tested over 13 years ago at 81GHz: http://www.geek.com/blurb/81gh... [geek.com]
More recently they developed a 300GHz Graphene transistor, but that was still 7 years ago: https://www.bit-tech.net/news/... [bit-tech.net]
The technology is there and proven, but scaling it up to processor scale would be a massive investment and a big risk.
Re: (Score:3, Informative)
The chip manufacturers are funding research on these and other technologies, but they are all a long way from viability. It is easy to forget that silicon CPUs with a billion transistors are the outcome of 60 years' research, development, and investment.
Silicon processing is made easier because silicon's oxide is an extremely good insulator. For diamond and graphene, the oxide is a gas, and so insulating areas cannot be created by oxidising the material: another substance must be deposited.
Re:Risk Averse CEOs are holding us back (Score:5, Interesting)
These are different kinds of transistors, and don't operate the way (digitally) MOSFET silicon transistors do.
Diamond is a wide bandgap semiconductor (that's physics for insulator). In special conditions, it can perform well, but those conditions (ranges for temperature, humidity, and field strength) are not practical for consumer devices. Doping diamond is possible, but very difficult, and it still results in a material that is a pretty good insulator. Sorry, it's going to be a lab toy for a long time.
Graphene is a zero-bandgap semiconductor. That means that it never turns off; it just has varying amounts of "on." It's got great numbers on paper (resistivity, mobility). Doping graphene is something immoral scientists talk about doing. The reality is that doping graphene creates a different material that lacks the speed and chemical stability of normal graphene. Your conduction mechanism changes, your gating mechanism changes, your noise sources change. It's a mess. Also, it's really easy to dope graphene by accident and lose your high-end performance. It's the newest material in this space, and the one least understood in the manufacturing realm (despite that, it forms the basis for the commercial product linked above, so obviously it's understood well enough).
You didn't mention carbon nanotubes, but I will, because what was the point of getting a PhD in carbon nanotube electronics if I can't talk about them on Slashdot?! Carbon nanotubes remain the unattainable holy grail of digital electronics. You can have it all: the speed of graphene, the on-off ratio of silicon, low power requirements... It's just that you almost need to assemble your circuit by hand. It's been >25 years we've been working with these materials, and we still don't know how to properly control where they go on a wafer (well, maybe these guys [carbonicsinc.com] know). The problem is that nanotubes want to make a heterogeneous mixed metal-semiconductor plate of spaghetti on the wafer, when you want clean rows of uniform semiconductor. The best guys in the world at this are up to producing postage stamp sized patches in the middle of the wafer. So... there's some work to be done there before anyone starts designing a processor.
Greed (Score:2)
Moore's law ended in 2006 (heard it straight from an Intel engineer). In its place they have been focusing on multi-processing and power savings. In doing so they learned they could make even more money through a much slower upgrade timetable. They do have tech on the back burner that will bring huge improvements in performance (optical interconnects, for instance), but they are going to roll that stuff out like molasses going up a hill. Greed has really taken hold of everything these days.
Because you're counting it wrong! (Score:2)
Instead of thinking about processing power in terms of Hz, you should be looking at a CPU's/GPU's overall computational throughput. When you look at things that way, you will see there has been a massive uptick in processing power in GPUs. x86 CPUs have stagnated a bit due to lack of serious competition at the high end, but everywhere else it's thriving. Massively parallel processing is the real future of computing, so get ready for chips with thousands of sub-GHz cores running independent and identical ta
Weak process improvement/Few ideas waiting (Score:5, Informative)
This kind of thing was rather common until about 2000. Each process node was better in every way than the last. Big jumps in performance at each node advance. Power went down too. And, of course it was much cheaper per gate. You could get doubled performance and 1/4 the cost by just porting over the same design, trace for trace, to the next full node. These "die shrinks" were quite common. Through the 90's you got an extra bonus for new designs. That is because the industry was brimming with ideas that were known to work but were just not practical to implement because they took too much silicon area.
First the idea spigot sputtered. The good mainframe ideas had already been implemented. It was no longer clear what to do with all those gates. New ideas were tried. Some worked. Some didn't. Also, about this time, complexity started to threaten the ability to make chips that actually worked. Bugs became more common. Design progress slowed.
Then process started acting up. Power scaling stopped. More transistors were available, but if you used them, your chip consumed proportionally more power. Run the transistors faster and you had the same problem, only worse. A hot chip was no longer a marketing problem; it was a chip that would not work. More effort and more complexity were needed to tame power. A simple die shrink wouldn't do that much.
Then process started getting messier. The new nodes were not better in every way. Leakage current went up instead of down. Variability went up. Performance scaling slowed. Getting any improvement at all required more development time and money. Progress always slows when development time and cost rise.
Then 20nm planar came, and it was awful. Terrible leakage. Required double patterning. Double patterning means more masks, which means more expense up front and during manufacturing. It actually cost more per transistor than 28nm. What was the point, really?
That is pretty much the mess we are in now. Can't significantly increase clock rate. Can't throw gates at the problem, and wouldn't really know what to do with the gates if we had them. FinFETs temporarily tamed power, but are only available in nodes hobbled by the need for multi-patterning.
Power vs. Power (Score:2)
The people who are actually paying for the products are interested in
a) Power in: do the same amount of computation at half the power so my battery will last longer.
b) Power out: do the same amount of computation at half the power so I can use twice as many devices without blowing my power budget.
Data centers are limited by how much heat you can extract per square foot. Desktops are limited by how loud the fan is. Mobile is limited by the battery size.
Therefore, the designers are designing what people are ac
PCU/GPU gains have been huge recently (Score:2)
Breakthroughs happen all the time (Score:2)
You're just looking in the wrong markets. If you're "just" looking at x86, obviously you have a blueprint you need to follow. Any breakthrough will take quite a few years in order to integrate and fab it. But even then, comparing 5 or 10 year old CPU's to now you can see quite a bit of new circuitry.
Look at AES acceleration and virtualization: we can now fully virtualize a machine, including its hardware, as if they were separate machines, including networking. There is quite a bit of logistics to make that h
Mill Computing and Wintel (Score:4, Interesting)
For a long time, Intel and Microsoft Windows have ruled the computing world. At the bottom of the platform has been Intel's instruction set architecture.
Intel leaped from a 16-bit to a 32-bit architecture, and then from 32-bit to 64-bit, but the basic execution model remains the same. Most of the advances Intel has made from the Pentium onwards in the early '90s have been stopgaps to get as much as possible out of the execution model while still being limited by it.
There are other processors out there, DSPs, that are much faster than x86 at specialized tasks by making them pipelined and parallel. GPUs could be seen as massively parallel DSPs.
But raw computing power is not the problem. The problem is to run general-purpose code well - and general-purpose code has many branches between code paths and that can't be parallelized.
A company called Mill Computing [millcomputing.com] is working on a general-purpose CPU architecture inspired by DSPs and by what they think the Intel IA-64 (Itanium) should have been.
By being vastly different in several significant ways from x86, they claim to be able to achieve a significantly higher performance per watt and performance per clock overall than Intel and AMD's x86.
Money. (Score:2)
The main reason is money. Each generation costs billions to develop and produce, and manufacturers are going to make sure they get a return on their investment. These investments stretch back years, and designs have to be made with assumptions about what will be workable at the current process node at the time the chip is ready to produce. That said, not quite all the low hanging fruit has been picked yet. Ryzen could not carry a 50% IPC improvement over the FX if there was nothing left to work with. Maybe
Everything ... everything is conspiring. (Score:5, Interesting)
The gates are now so small that the electron wave function has a pretty high probability of being "on the other side" of the gate. As gates shrink, leakage power goes up very rapidly. Even when they're "off", the gates are consuming too much power (leaking it to ground.)
Also, think about 5 GHz, IBM's fastest chips. At 5 GHz, the clock period is 200 picoseconds, and a 10-deep pipeline can allocate about 20 ps to each gate transition. That's a lot to ask, given that resistance and capacitance don't scale down linearly with dimensions. You also have to populate your chip with a lot of decoupling capacitors in order to hold the charge locally for each transition (because you can't get the power from off chip in 20 ps). To fight the proportionally increased RC load, you're putting in more buffers (big amplifiers).
As if that weren't enough, you have the fact that a 14 nm gate is about 20 silicon atoms across. When you start doping the substrate, your actual behavior is all over the place because one or two more dopant atoms represent a 10-20% shift, up or down (total shifts of 40-50%.)
So, your gates are too small, they all behave differently, they have to drive a relatively larger load, and the suckers are too hot.
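The timing budget described above is simple arithmetic; a quick sketch to make the numbers explicit:

```python
clock_hz = 5e9                 # 5 GHz clock, per the comment above
period_ps = 1e12 / clock_hz    # clock period in picoseconds
pipeline_depth = 10            # a 10-deep pipeline
stage_budget_ps = period_ps / pipeline_depth  # time available per pipeline stage

print(period_ps)        # 200.0 ps per cycle
print(stage_budget_ps)  # 20.0 ps per stage
```

Every gate transition, wire delay, and charge delivery in a stage has to fit inside that 20 ps window, which is why RC loads that don't scale down become a wall.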
Competition (Score:2)
Competition (academic and free market) makes big jumps unlikely.
Most of the improvements that any one company is trying to do to get 2X or more performance has already been done, by the time they get to market, by other companies trying to beat them to market. Only a percentage of things they manage to do differently (perhaps things that other companies didn't think were worth doing) differentiate the performance of any one company's product.
Status Quo (Score:2)
Patents and profits (Score:2)
Former CPU and GPU staff starting their own brands?
The way to stop that is to control the entire sector. No advanced games or codecs will be offered to support any new startups.
Anything tech that is usable and considered free will be open-sourced by the original brand to control, brand, and shape the free end of the market.
Zilog https://en.wikipedia.org/wiki/... [wikipedia.org] pricing spreading around the world was the reason why the C
Intel's shady tatics (Score:5, Interesting)
Intel is up to their shady tactics again with AMD's new Ryzen release. Maybe they're not outright paying off computer makers; now they are just sponsoring reviewers. The reviewers jump through all kinds of hoops to make sure Intel is on top of the benchmark graphs, and their articles read like an Intel marketing brochure. None of the reviewers disclose that they are sponsored by Intel.
Examples of oddities from reviewers that are sponsored by Intel.
1) Tom's Hardware: Complains about the power consumption being higher than spec, but leaves out that the result was from an overclocked test on an MSI board that has an additional CPU power connector.
2) GamersNexus (one of the worst of them)
a) Compared the 1800X to 6 different Intel processors that were overclocked, with the 6900K overclocked by 700 MHz.
b) Only one AMD processor was OC'd, by -100 MHz (yep). Their OC vs. stock results were almost exactly the same.
c) Makes the 6900K pop to the top of the benchmarks.
d) The 1800X only loses 6 vs. 8 to the Intel 6900K at stock speeds, with only 2 benchmarks where the 1800X loses by more than 7 fps.
e) Pretty much all benchmarks by the same author never included OC tests, but suddenly he had to compare it to 6 different OC benchmarks. http://www.gamersnexus.net/gam... [gamersnexus.net] http://www.gamersnexus.net/gam... [gamersnexus.net]
f) Outright lied, saying AMD told him not to benchmark Ryzen at 1920x1080. AMD just asked him to benchmark at multiple resolutions, not just 1080p.
Vectorization (Score:4, Informative)
For certain operations, AVX made a huge difference. AVX2 made an even huge-r difference. Depending on what you're doing, you can see a 2x speedup, or as much as 10x at the outside, versus a chip without AVX2 but with similar performance characteristics.
Breakthroughs allow continued development (Score:3)
There have been many breakthroughs in the PC industry, incredibly clever inventions which allowed things to move forward. And that's the thing: the smartest things in the industry don't make for a huge processing leap; they enable making progress at all. Each of these developments takes years. Ideas may be simple, but implementing them, especially at the level required for mass production, is hard. Each development also requires more accurate tools. Also, complexity is now so high that, as imgod2u said, even a huge change in some part leads to an overall small change.
So as others have said, physics, but I think the above is a more nuanced answer. I remember when people said that it wouldn't be possible to make transistors under a micron in size. The very fact that we've reached so far is miraculous.
It DOES happen (Score:4, Informative)
It happened about ten years ago with the rise of GPUs for general purpose computing. Suddenly we could do a lot of things 10-100 times faster than before. You program GPUs really differently than CPUs, so we had to rewrite a lot of code and design new algorithms. But the benefit was huge.
It may be happening again with specialized chips for deep learning, like Google's TPU. These chips are designed for just one class of applications, but it's a really important class, and they can be 10x faster or more efficient for those applications.
There've been other times when a new generation brought a sudden major improvement in speed, like with vector units or multicore CPUs. But always at the cost of having to rewrite how your code works.
Now if you want new chips that work just like the old ones and run the same programs as before, just 10x faster, sorry. That isn't likely to happen. Huge jumps like that require major changes of approach.
Because we're already close (Score:4, Informative)
I think the real issue is, semiconductors are so competitive, the current shipping product is always very close to the state of the manufacturing and physics arts. Intel, AMD, nVidia, Samsung, Toshiba, Apple, and others spend billions pushing the processes and architectures to the limit in every product so it stays competitive as long as possible.
To get a 4x or 8x improvement in size, power, or speed would imply there's a revolutionary way to do things that we just don't quite know yet. And it better be something which can be quickly turned to production because Moore's Law hasn't stopped yet. If you have a 4x improvement idea but it takes five years to release, it won't get funded. Plain CMOS silicon has too good a chance of catching up.
There have been plenty of times people rolled the dice on processor moon shots. I was at HP when Itanium was first developed (~95). We thought we'd have working silicon in a few years (~98 or 99) at the astounding clock rate of 500 MHz (oh, and that was potentially retiring something like 6 to 12 instructions per cycle, I forget the details). This was when a good Pentium processor ran at around 100 MHz. We thought Itanium was going to be so frickin' fast there was no way Intel could compete. Then AMD started a clock rate war, x86 got faster really fast, Itanium took much longer to produce than we anticipated, and the rest was history.
I think the bottom line is, it's really hard to produce a system which really is even 2x faster than the competition. 4x is incredible and 8x probably has never been done.
As an analogy, consider cars and mileage. My car, a diesel Passat (which shortly will not be road legal :() actually exceeds 50 MPG on a good day. What would it take to make a car which gets 100 MPG with a 600 mile range? How about 200 MPG? With no compromises? And a sales price of $28k? It's pretty hard to imagine.
Re:milking it (Score:4, Insightful)
Re:milking it (Score:5, Informative)
My SSD based laptop boots a lot faster than Windows 3.1.
As far as "planned obsolescence" goes, I'm running Windows 10 on a Core 2 Duo 2.66 GHz laptop with 4 GB of RAM, a computer that was first sold in 2009. It runs my Plex server and my PlexConnect server.
My mom still uses my 2006-era Mac Mini (Core Duo 1.66) with Windows 7, Office, and Chrome. It has 1.5 GB of RAM. When I go home and use it, it's not unusable as long as you don't try to run too many things at once.
My secondary laptop that I keep upstairs is a circa-2009 2 GHz Pentium Dual-Core with 4 GB of RAM running Windows 7. In day-to-day use, the only thing wrong with it is a battery that won't hold a charge.
You can accuse MS of a lot of things, but failing to optimize Windows to run well on fairly old hardware isn't one of them.
Re: milking it (Score:2)
How do you get 240k MIPS for a modern CPU? That's 60 to 80 instructions per cycle.
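The skepticism checks out arithmetically. Assuming a 3-4 GHz clock (an assumption, since the parent doesn't state one), 240k MIPS does imply 60-80 instructions per cycle:

```python
claimed_mips = 240_000  # 240k MIPS = 240 billion instructions per second

for clock_ghz in (3.0, 4.0):
    clock_mhz = clock_ghz * 1000      # MHz = millions of cycles per second
    ipc = claimed_mips / clock_mhz    # MIPS / MHz = instructions per cycle
    print(clock_ghz, ipc)             # 3.0 GHz -> 80 IPC, 4.0 GHz -> 60 IPC
```

Real x86 cores retire on the order of single-digit instructions per cycle, so the 240k figure only makes sense as an aggregate across many cores or SIMD lanes.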
Re: milking it (Score:5, Insightful)
My Chromebook takes mere seconds to boot, whereas an IBM AT could easily take minutes. And of course, my modern device performs tasks that would have been the domain of supercomputers in the past.
Time to take off the rose colored glasses. I did live through the eighties and nineties, and computing was pathetic back then ... we just didn't know any better
My Commodore 64 took about 0.1 seconds to boot. We just suck at "fast" these days.
Re: (Score:3)
Well, the C64 didn't really do anything on boot: mostly it initialized the 40-character x 25-line display, jumped to BASIC, and started executing. The KERNAL was custom-written for one hardware config and didn't have to work with thousands of different pieces of hardware. No internet, no services at all to run (because no multi-threading). Those machines were extremely simple and really can't be compared to today's Mac, Linux, or Windows OSes.
But modern machines are about 10000x faster. Needless complexity aside, it's just not that much more complicated. Whatever is hardware-specific, cook that up when the hardware changes (how often does that happen?) and park it ready for fast boot again.
We just suck at "fast".
Re: (Score:2)
I was booting computers in milliseconds in the mid-90s (to the point where user-space applications were getting scheduled time). It really depends on what you consider "booted" and what hardware checks you are willing to skip. RAM test? Walking-ones test? Read/write test?
Sometimes you have to set up a piece of hardware to fail and wait for it to time out to verify that a system is working and that alone can take an arbitrary amount of time. 40ms? 2 minutes? Depends on the hardware and what you're loo
Re: (Score:3)
Moore's Law [wikipedia.org] is an observation, made by its namesake, that the density of transistors on a chip doubles approximately once every 18 to 24 months. Gordon Moore first made the prediction in 1965, and it held fairly well until recent years (roughly after 2012).
Processor speeds, although they have increased significantly over the same time period, have not doubled every 18 to 24 months.
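The difference between those two doubling rates compounds dramatically over decades. A rough sketch of the transistor-count growth each cadence implies (the 1971 Intel 4004 baseline of ~2,300 transistors is an illustrative assumption, not from the comment above):

```python
def doublings(years, months_per_doubling):
    """Number of doublings that fit into a span of years."""
    return years * 12 / months_per_doubling

baseline = 2300          # transistors in the Intel 4004 (1971), as a starting point
years = 2012 - 1971      # span up to roughly when the trend bent

for months in (18, 24):
    n = doublings(years, months)
    print(months, baseline * 2 ** n)
```

The 24-month cadence lands in the low billions of transistors, which is roughly where 2012-era high-end chips actually were; the 18-month cadence overshoots by a couple of orders of magnitude.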
Re: (Score:2)
The part where you acted like you knew what you were talking about, instead of just misremembering something you didn't understand when you heard it, is called dishonesty. You, sir, are a liar. You aren't a liar because you are wrong. You are a liar because you pretended to know.
The part where you literally got it all wrong tells us that the last
Re: (Score:2)
LOL
Re: (Score:2)