To Keep Pace With Moore's Law, Chipmakers Turn to 'Chiplets' (wired.com) 130
As chipmakers struggle to keep up with Moore's law, they are increasingly looking for alternatives to boost computers' performance. "We're seeing Moore's law slowing," says Mark Papermaster, chief technology officer at chip designer AMD. "You're still getting more density but it costs more and takes longer. It's a fundamental change." Wired has a feature story which looks at those alternatives and the progress chipmakers have been able to make with them so far. From a report: AMD's Papermaster is part of an industry-wide effort around a new doctrine of chip design that Intel, AMD, and the Pentagon all say can help keep computers improving at the pace Moore's law has conditioned society to expect. The new approach comes with a snappy name: chiplets. You can think of them as something like high-tech Lego blocks. Instead of carving new processors from silicon as single chips, semiconductor companies assemble them from multiple smaller pieces of silicon -- known as chiplets. "I think the whole industry is going to be moving in this direction," Papermaster says. Ramune Nagisetty, a senior principal engineer at Intel, agrees. She calls it "an evolution of Moore's law."
Chip chiefs say chiplets will enable their silicon architects to ship more powerful processors more quickly. One reason is that it's quicker to mix and match modular pieces linked by short data connections than to painstakingly graft and redesign them into a single new chip. That makes it easier to serve customer demand, for example for chips customized to machine learning, says Nagisetty. New artificial-intelligence-powered services such as Google's Duplex bot that makes phone calls are enabled in part by chips specialized for running AI algorithms.
Chiplets also provide a way to minimize the challenges of building with cutting-edge transistor technology. The latest, greatest, and smallest transistors are also the trickiest and most expensive to design and manufacture with. In processors made up of chiplets, that cutting-edge technology can be reserved for the pieces of a design where the investment will most pay off. Other chiplets can be made using more reliable, established, and cheaper techniques. Smaller pieces of silicon are also inherently less prone to manufacturing defects.
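To see why smaller dies are less defect-prone in concrete terms, a common back-of-the-envelope tool is the Poisson yield model, in which the chance of a defect-free die falls exponentially with die area. The defect density and die sizes below are illustrative assumptions, not figures from the article; this is a minimal sketch of the reasoning, not a foundry model:

```python
import math

def die_yield(area_mm2, defects_per_cm2):
    """Poisson yield model: probability a die of the given area has zero defects."""
    return math.exp(-(area_mm2 / 100.0) * defects_per_cm2)

D0 = 0.2  # assumed defect density, defects per cm^2 (purely illustrative)

mono_yield    = die_yield(600, D0)   # one monolithic 600 mm^2 die: ~30% good
chiplet_yield = die_yield(150, D0)   # one 150 mm^2 chiplet: ~74% good

# Wafer area consumed per working product. Chiplets can be tested and binned
# before packaging, so only known-good pieces are assembled.
silicon_mono    = 600 / mono_yield           # ~1990 mm^2 per good monolithic part
silicon_chiplet = 4 * 150 / chiplet_yield    # ~810 mm^2 per good 4-chiplet part

print(f"monolithic: {silicon_mono:.0f} mm^2 of wafer per good product")
print(f"chiplets:   {silicon_chiplet:.0f} mm^2 of wafer per good product")
```

In this toy example, splitting the same silicon into four chiplets roughly halves the wafer area burned per good product, because a defect scraps only one small chiplet instead of the whole monolithic die.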
Dammit (Score:5, Insightful)
Out of all places on the Internet, I want at least /. to admit that there has never been a Moore's law - it was a mere observation. From Wikipedia: "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years".
Whoever decided to call it a "law" was a moron, and now we have this idiocy repeated in every news story. And since it's not a law, we could simply move on and realize that physics doesn't allow it to exist.
Re: (Score:1)
Marketing has always operated in an entirely different reality from the rest of us; no news here. Slashdot shouldn't be repeating marketing gobbledygook wholesale, but given its propensity for "native advertising" I've grown to expect it.
Re:Dammit (Score:4)
Yep, it's just a name that was given to an observation.
There are lots of things called "XXX's law" that aren't physical laws of the universe. Get over it.
Re: Dammit (Score:2)
Re: (Score:3)
Indeed.
Murphy's Computer Laws [imgur.com]
e.g.
Clarke's Third Law
Any sufficiently advanced technology is indistinguishable from magic.
Weinberg's Law
If builders built buildings the way programmers write programs then the first time a woodpecker came along it would destroy civilization.
Re:Dammit (Score:5, Funny)
Especially when there's only one law: Brannigan's Law.
Re: (Score:2)
Especially when there's only one law: Brannigan's Law.
I believe you are neglecting to include an equally and perhaps more important field of law - Bird Law.
Re:Dammit (Score:5, Insightful)
You are correct. It was never a law.
Actually, it was a self-fulfilling prophecy [wikipedia.org]. Since Moore's "Law" provided a reference point for the evolution of transistor density, all designers knew where they needed to get, or otherwise their competitors would surpass them.
Re: (Score:2)
Re: (Score:2)
Just thank the stars that the headline wasn't a question...
Re: (Score:2)
People have always known that there were obvious physical limitations to Moore's law lasting forever. But it was never intended to last forever. And it was not simply an observation: Moore predicted that the pattern would continue to hold for at least ten years. I suppose you could argue that it should be "Moore's Prediction" or "Moore's Conjecture" or "Moore's Educated Guess", but surely we aren't going to quibble about what can [wikipedia.org] and can't [wikipedia.org] be called a law?
Re: (Score:2)
Even better: "You're still getting more density but it costs more and takes longer."
That's a nonsense statement. With new technology, you learn to do things cheaper--with less human labor involved in total. They're saying the technology isn't improving as quickly.
This technology that "costs more and takes longer" will be cheaper, faster, and better than last-generation's new technology when next-generation's new technology is standard. "Costs more and takes longer" means we're being impatient and greedy.
Re: (Score:2)
Are you saying that new fab plants are cheaper, as in a 7nm plant is way cheaper than a 25nm plant? Likewise with the speed of process shrinks? I guess that is why Intel seems to be taking forever to do another process shrink.
Re: (Score:2)
Re: (Score:2)
No, I'm saying that the SAME fab plant to produce the SAME thing will be cheaper at a point further in the future. The same fab plant and process in 2010 is cheaper than it was in 2000.
We're finding that repeating the 2000 to 2010 step takes until 2025, but we're trying to do it in 2020. The 2010 process had similar costs in 2010 to the 2000 process in 2000--and would have cost a whole hell of a lot more to do in 2000. Well, the 2025 process has similar costs in 2025 to the process we used in 2010--and would have cost a whole hell of a lot more to do in 2010.
Re: (Score:2)
Exactly. This isn't a law, it is just a trend that lasted far longer than most people expected.
Re: (Score:3)
Out of all places on the Internet I want at least /. to admit *that's what the phrase scientific law means*. From Wikipedia: "Each scientific law is a statement based on repeated experimental observations that describes some aspect of the Universe."
"physics simply doesn't allow it to exist."
There are plenty of laws for things that don't exist. For example, in a rotating frame of reference there are three fictitious forces (Euler, Coriolis, and centrifugal) that each have well-defined laws to describe their behavior.
Re: (Score:1)
In science, that is exactly what a "law" is: an observation - "this is how things work" - nothing more. Take Newton's laws of motion: they are observations, nothing more. There is no "this is how it must be" or "it can't be any other way", just "we looked and this is what we saw". A scientific law includes no explanation, no mechanism, and no directive that this is the only way things can be.
Re: (Score:2)
"Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years".
Whoever decided to call it a "law" was a moron and now we have this idiocy repeated every news story. And since it's not a law, we could simple move on and realize that physics simply doesn't allow it to exist.
A law, as opposed to a theory, rule, observation, hypothesis, etc., simply means it can be put into a mathematical formula. The above definition fits that very well. It has nothing to do with whether it holds for all cases or whether it has even been tested. Many laws are not exact or apply only over a limited range.
This seems to crop up every few years (Score:2)
Not saying it can't work, but this sort of thing seems to crop up every few years. At least at the chip level it seems to fare better than the ever-recurring schemes for code reuse.
Bicycle reinvented (Score:5, Insightful)
No, they used to be Integrated Circuits (Score:3)
They just use much smaller sockets now. :)
Re:Bicycle reinvented (Score:5, Interesting)
I've wanted a generic coprocessor architecture for a few months now. Imagine if you could stick a chip on your board and it could access the on-board video port (DVI, HDMI), a range of RAM exposed as RAM (it requests address and data), and so forth. Instead of on-CPU graphics, you have a chip that provides that. The same chip can provide things like encryption, encoding, and artificial neural networks.
These things aren't extended CPU instructions as with an FPU. They're actual separate microcomputers. An ANN chip has a completely-different architecture with memory local to the neuron's logic unit instead of in a memory bank. A GPU runs its own program against a memory space.
You don't need a huge riser and ports exposed on the card's edge. You can just plug into the board, get power and an addressing bus, and get appropriate output ports like display and DMA. You can provide multiple functions on one chip. Just make it a chip socket and make it standard.
This works for things that have to run a process on input and output, or on large bulk data. It doesn't work for things that are just extended CPU instructions, like SIMD. Transfer back and forth between processing units and the hop through the memory controller creates too much latency.
You can use a four-wire (RX,TX) LVDS memory bus, too: instead of 64 data lines and 32 addressing lines (1TB physical addressing), you can use two TX and two RX and use a packet protocol. Modern GPUs use 128-byte cache lines (seriously!). You can specify a protocol that sets memory unit size, offset, and then issues READ requests. If you want, your memory controller (on the expansion chip) could send an instruction packet {SET SIZE 512}, {READ 390625}, {READ 390626}, .... The return packet on RX would be the data. CPU's memory controller would carry out the memory read and stream the data to the RX pins.
The memory unit size is just a number of bytes. No trading off number of pins for maximum addressable RAM. There are odd reasons we use parallel buses for RAM, and it's not because parallel is faster; it's because building all of that stuff into DRAM is expensive and power-hungry. Since a coprocessor goes through a memory controller on a CPU, it's cheap there. Latency isn't as much of an issue as sheer bandwidth in this application.
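For what it's worth, a toy model of the packet-style memory link described above might look like the sketch below. All of the names, opcodes, and framing are hypothetical illustrations of the idea (a SET SIZE packet followed by READ requests, with data streamed back), not any real bus protocol:

```python
# Toy model of a packet-style memory link: the coprocessor sends {SET SIZE n}
# once, then {READ unit_index} requests, and the host's memory controller
# streams back n bytes per read. All opcodes and framing here are made up.

HOST_RAM = bytearray(1 << 20)  # stand-in for host memory behind the controller

class HostMemoryController:
    def handle(self, packet):
        op, arg = packet
        if op == "SET_SIZE":
            self.unit = arg                      # transfer unit size in bytes
            return None
        if op == "READ":
            base = arg * self.unit               # addresses are in units, not bytes
            return bytes(HOST_RAM[base:base + self.unit])
        raise ValueError(f"unknown opcode {op!r}")

class CoprocessorLink:
    """The chiplet/coprocessor side of the two-pair serial link."""
    def __init__(self, host, unit=512):
        self.host = host
        self.host.handle(("SET_SIZE", unit))     # negotiate the unit size once

    def read(self, unit_index):
        return self.host.handle(("READ", unit_index))

link = CoprocessorLink(HostMemoryController())
block = link.read(3)   # e.g. {READ 3}; the {READ 390625} above works the same way
print(len(block))      # 512 bytes streamed back over the RX pair
```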
Imagine it. Just pop a graphics chip on your motherboard. 12V supply that can feed 100W. If you need bigger than that, then you buy a 16-lane PCI-Express card.
Re: (Score:2)
They did that thing with graphics chips in laptops. Some were designed to have GPUs in a cartridge-like format, and they were advertised as letting you upgrade the GPU whenever a new one became available. The only problem: they needed all the new GPUs for their new laptops.
Re: (Score:3, Interesting)
No, this isn't co-processors.
This is more about adopting MCM (Multi-Chip Module) techniques for high-end processors. Right now MCMs are a very popular cost-reduction technique where you can package a lot of tiny special-purpose chips into one package. You can build your special-purpose package with a machine and reduce your final assembly costs. I've seen MCM packages that are a half-dozen little grain-of-sand-sized chips, bonded to a lead frame, encased in black plastic. From the outside it looks like an ordinary single chip.
Re: (Score:3)
Absolutely not! These aren't co-processors, they are processors in a modular system design - completely different. Using a modular design means chips can be mixed and matched more easily, so fewer chip masks are needed, and bad or slow chips can be binned more efficiently and put together in the configuration wanted. It also means processors can mix and match different manufacturing processes, as exemplified by the recently announced AMD EPYC2.
Chiclets... (Score:1)
Back to the future (Score:5, Insightful)
It seems that the semiconductor industry goes through these cycles periodically. Whenever they run up against the limits of single-chip integration, they go back to the strategy of wiring separate chips together. Ultimately this proves to be inefficient, and once technology improves they return to putting everything on a single chip.
Re: (Score:2)
It would let them decouple whatever was causing the problem. Long wires are known to cause crosstalk as they operate like antennae.
Re: (Score:2)
There's too much money tied to writing papers about the next end of Moore's Law for Moore's Law to be permitted to actually end.
Re: (Score:2)
Sounds like a legit approach?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Just yesterday I was reading up on Multi-Chip Modules, Hybrid Integrated Circuits, and other similar technologies, and wondering to myself why they didn't do that more often in the 8- and 16-bit era to cut down on package pin count. The answer is fairly obvious: yields, and thus costs, are worse. Often much worse.
This is a good thing (Score:3)
Hitting up against the limits of Moore's law is probably the best thing for the chip industry. It's going to force them to innovate.
Re: (Score:2)
Well, technically they were. Now they have to pursue much harder (read: expensive) innovations. I don't see that as a great thing, any more than reducing dozens of car companies in the U.S. to just the Big 3 was.
Re: (Score:2)
Wait, aren't there only like 4 or 5 big chip makers now?
Re: (Score:2)
4 or 5 sounds about right, depending on how you count. Given the costs involved in the latest processes and the risks of trying to invent new techniques, that could easily come down to two or even one.
Re: (Score:2)
US, China, Taiwan, Korea. These places will maintain a position in semi manufacturing because it is strategically wise to do so.
Europe should be there too, but they aren't good at providing under-the-table support for a local tech industry, so they lose out.
Re: (Score:2)
3 pursuing or manufacturing 7nm+: Intel, TSMC, Samsung.
Re: (Score:2)
The problem is their innovation is to crank out more cores... for workloads that are still largely single-threaded, at least where the workload is heavy enough to stress a modern CPU.
Re: (Score:3)
Yeah! It means the Amiga is making a comeback!
Re: (Score:2)
You could probably have a dozen or more Amigas on one Chiplet :D
Re: how about ASIC chiplets (Score:2)
You wouldn't need it for all tasks, just semi-stable ones. TCP/IP doesn't change much, so putting a stack on ASIC makes a lot of sense.
If you shoved the Linux filesystems and VFS2 onto a series of ASICs that could be placed on the controller card, you'd have far better performance and all OSes would have access.
Re: (Score:2)
So why do companies already do this for high-end cards?
FPGA chiplets too? (Score:4, Insightful)
I wouldn't mind seeing both ASIC chiplets dedicated to specific tasks - like AES array shifting, RSA exponentiation and multiplication, and other tasks a computer commonly does - and FPGAs for most anything else. This can easily allow a hypervisor to run x86 code as well as ARM. Done right, this could also improve security between VMs.
Of course, if someone wants to grind cryptocurrency, next to dedicated ASIC boards, FPGAs are not bad.
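As a concrete example of the kind of fixed, well-understood kernel worth freezing into an ASIC chiplet, RSA boils down to modular exponentiation. A minimal textbook square-and-multiply sketch (toy key sizes, purely illustrative):

```python
def mod_exp(base, exponent, modulus):
    """Square-and-multiply modular exponentiation -- the RSA kernel an ASIC
    chiplet (or FPGA bitstream) would accelerate in hardware."""
    result = 1
    base %= modulus
    while exponent:
        if exponent & 1:
            result = (result * base) % modulus
        base = (base * base) % modulus
        exponent >>= 1
    return result

# Toy round trip with textbook-sized numbers (real RSA keys are 2048+ bits).
n, e, d = 3233, 17, 2753       # n = 61 * 53; e, d are a matching key pair
msg = 65
cipher = mod_exp(msg, e, n)    # "encrypt"
plain  = mod_exp(cipher, d, n) # "decrypt"
assert plain == msg
print(cipher, plain)           # 2790 65
```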
Re: (Score:3)
Comment removed (Score:3)
Re: (Score:2)
I'm sure the EU will figure out a way to fine them for it :)
What's old is new again (Score:4, Informative)
Looks like they've reinvented the IBM 3081 [wikipedia.org] mainframe from 40 years ago:
The elimination of a layer of packaging was achieved through the development of the Thermal Conduction Module (TCM), a flat ceramic module containing about 30,000 logic circuits on up to 118 chips.
Re: (Score:1)
Mainframes, distributed servers, thick clients, personal computers, goto 10
It's all a loop, sometimes through the cycle it skips a step, and not everyone will always see all the steps.
Re: (Score:1)
Or a Transputer.
Re: (Score:2)
No, MCMs have been used for x86 for a long time. The Pentium Pro used a MCM with a separate SRAM chip, later Core i7 chips have used external iGPU memory chips etc.
The new thing isn't using MCMs or even placing several processor chips together on one MCM but the change in systems design to a more modular approach.
Re: (Score:2)
That's not new either. [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
Nothing was more modular than bitslice architectures.
Déjà vu (Score:2)
I'm pretty sure I saw something similar [nocookie.net], almost three decades ago.
Re: only amd (Score:2)
Large dies are good. You can do far more. I'd love to see a wafer-scale SoC at 15nm: more RAM than most machines have hard drive space and more cores than most servers, in something the size of a Kindle.
Re: (Score:2)
Large dies also mean a much lower yield rate. No point in having huge dies if most of your chips fail testing.
Re: (Score:2)
Re: (Score:2)
The failure rate is not bad and can be improved on. Intel reckoned on an 80-core CPU using whole wafers; you can disable 3/4 of it and still have the power of a high-end server. Odds are, you'd not drop below 64 cores.
Other methods apply. Since we're talking daughter cards, you can have some replace disabled logic on the chip.
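A rough way to see how that kind of core harvesting works out is to treat each core as passing independently with some probability and ask how many survive. The 95% per-core yield below is an assumed, illustrative number, not one from the thread:

```python
from math import comb

def prob_at_least(total_cores, good_needed, per_core_yield):
    """Probability that at least `good_needed` of `total_cores` cores are
    defect-free, assuming each core passes independently (a simplification)."""
    return sum(comb(total_cores, k)
               * per_core_yield ** k
               * (1 - per_core_yield) ** (total_cores - k)
               for k in range(good_needed, total_cores + 1))

P = 0.95  # assumed per-core yield on an 80-core die (illustrative only)

print(f"P(>= 64 good cores): {prob_at_least(80, 64, P):.4f}")  # essentially 1.0
print(f"P(>= 72 good cores): {prob_at_least(80, 72, P):.4f}")  # still very likely
print(f"P(all 80 good):      {prob_at_least(80, 80, P):.4f}")  # rare (~0.95^80)
```

Under that assumption, landing at least 64 good cores out of 80 is essentially certain while getting all 80 is rare, which is exactly why harvested SKUs make sense.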
Co-processors are back! (Score:1)
Back in the dark ages we used to put floating-point math on a special chip. Intel had the x87 co-processor, and most computers didn't have one since they didn't need high-speed floating-point math; when you did, you just did the operation in software. This started to go away with the 486, which (mostly) had an integrated FPU on chip, while a few cheaper models had it disabled. The Pentium was the first chip where every model had an FPU.
So the future seems to be de-integrating circuits? Possibly. But
Re: (Score:1)
Just a repeat of previous technology (Score:2)
Wikipedia:MCM [wikipedia.org]
It's like technology companies are starting to behave like Hollywood. Come out with a rehash of what they did a few years ago instead of any new revolutionary ideas.
Re: (Score:2)
They already did basically the same thing to create quad-core processors: two 2-core processors on one MCM.
Chiplets! (Score:1)
Chiplets! Imagine a Beowulf cluster of those. On a chip!
Why not go the other way? (Score:2)
Chiplets are good, but what's wrong with wafer scale integration? You could even combine them, chiplets as daughter boards in a 2+1 dimensional arrangement.
Re: (Score:2)
Defect density is much, much lower these days. Different method of purifying silicon, the increased affordability of isotopically pure silicon, etc.
(You can now produce ultrapure single isotope silicon using benchtop apparatus. Indeed, that is being done.)
We've gone from an 80% pass rate in the 80s to a 99.5% pass rate today.
Summary is all wrong (Score:1)
Making larger chips from smaller "chiplets" will not improve the speed or efficiency of the final combined chip. This is purely a money-saving trick around the age-old problem of yields dropping with every subsequent process technology. If anything, the added circuitry necessary to facilitate interconnects between these chiplets only adds power usage and transmission delays.
Chiplets also do nothing for speeding up chip development, as monolithic chips are already built from various templates stamped over and over across the die.
Re: (Score:1)
These points were exactly what I was thinking as I was reading the summary. I would mod you up if I had mod points.
Although, as a partial counterpoint: if it is noticeably cheaper, it might indirectly shift the balance point between cost, speed, and yield of mass-produced parts toward a bit more speed...
Limits to Lithography (Score:2)
Chip makers are currently doing 7 nm lithography. Copper atoms are 0.2 nm wide. We may not have reached the limits for lithography, but we have to be getting real close.
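A quick order-of-magnitude check of that headroom (keeping in mind that modern node names like "7 nm" are marketing labels rather than literal feature sizes):

```python
# How many atoms fit across a 7 nm feature? (Order-of-magnitude only.)
feature_nm         = 7.0
copper_atom_nm     = 0.2      # figure quoted in the comment above
silicon_lattice_nm = 0.543    # silicon lattice constant, for comparison

print(feature_nm / copper_atom_nm)       # ~35 copper-atom widths
print(feature_nm / silicon_lattice_nm)   # ~13 silicon unit cells
```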
Makes me think of something I once read (Score:1)
To communicate 100 light years away instantaneously all it takes is a long stiff rod where very small movement back and forth can communicate. Of course this is just a non practical idea, but maybe it's not a matter of trying to continue Moore's law at the hardware level but of a mindset change about how much speed you really need - or is it a tortoise-and-hare race in how you approach getting something done? Sometimes I feel that a Commodore 64 is more responsive than today's systems running some overcomplicated software.
Re: (Score:1)
To communicate 100 light years away instantaneously all it takes is a long stiff rod where very small movement back and forth can communicate. Of course this is just a non practical idea ...
I'm not sure if this is sarcasm, but it's not only impractical, it's also not how reality works. Any interactions at one side of the rod would travel the length of the rod at a speed less than light. The exact speed would depend on what the rod was made out of.
Re:Makes me think of something I once read (Score:4, Informative)
To communicate 100 light years away instantaneously all it takes is a long stiff rod where very small movement back and forth can communicate.
Physics doesn't work that way. Force and motion propagate through a material at a finite speed. Perhaps the most well known example of this is the propagation of sound waves. So far, nobody has found a material for which the speed of sound is greater than the speed of light through a vacuum.
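Putting numbers on it: even in a very stiff material, the push travels at roughly the speed of sound in that material. Assuming steel at about 5,000 m/s, the "instant" signal down a 100-light-year rod would take millions of years:

```python
# Time for a mechanical "push" to travel a 100-light-year rod, assuming the
# disturbance propagates at the speed of sound in steel (~5,000 m/s).
LIGHT_YEAR_M    = 9.4607e15
SOUND_IN_STEEL  = 5_000.0       # m/s, approximate longitudinal speed
SPEED_OF_LIGHT  = 2.998e8       # m/s
SECONDS_PER_YR  = 3.156e7

rod_length_m = 100 * LIGHT_YEAR_M
travel_years = rod_length_m / SOUND_IN_STEEL / SECONDS_PER_YR

print(f"{travel_years:,.0f} years for the push to reach the far end")
print(f"the push moves ~{SPEED_OF_LIGHT / SOUND_IN_STEEL:,.0f}x slower than light")
```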
Re: (Score:2)
That's nonsense. Try using a C-64 now, you'll find it has trouble even keeping up with fast typing.
Re: (Score:2)
Part of the reason why computers are slow is the rather stupid scheduling policies that are commonly used in Windows.
A proper scheduler will prioritize handling *input* above all else (i.e., recording keys and mouse movements), as this makes a computer feel responsive. On Amigas this was a separate process, running at a priority higher than everything else, just to make sure nothing got lost. It got even better with an improved scheduler (Executive) which automatically prioritized I/O-bound processes over CPU-bound ones.
sounds like the wheel of reincarnation turning (Score:1)
Every couple of years. . . (Score:1)
Every couple of years somebody says that "Moore's Law is ending" and then it doesn't.
I propose we call this Borehd's Law.
It's the cloud all over again (Score:2)