AMD Intel Hardware

To Keep Pace With Moore's Law, Chipmakers Turn to 'Chiplets' (wired.com) 130

As chipmakers struggle to keep up with Moore's law, they are increasingly looking for alternatives to boost computers' performance. "We're seeing Moore's law slowing," says Mark Papermaster, chief technology officer at chip designer AMD. "You're still getting more density but it costs more and takes longer. It's a fundamental change." Wired has a feature story which looks at those alternatives and the progress chipmakers have been able to make with them so far. From a report: AMD's Papermaster is part of an industry-wide effort around a new doctrine of chip design that Intel, AMD, and the Pentagon all say can help keep computers improving at the pace Moore's law has conditioned society to expect. The new approach comes with a snappy name: chiplets. You can think of them as something like high-tech Lego blocks. Instead of carving new processors from silicon as single chips, semiconductor companies assemble them from multiple smaller pieces of silicon -- known as chiplets. "I think the whole industry is going to be moving in this direction," Papermaster says. Ramune Nagisetty, a senior principal engineer at Intel, agrees. She calls it "an evolution of Moore's law."

Chip chiefs say chiplets will enable their silicon architects to ship more powerful processors more quickly. One reason is that it's quicker to mix and match modular pieces linked by short data connections than to painstakingly graft and redesign them into a single new chip. That makes it easier to serve customer demand, for example for chips customized to machine learning, says Nagisetty. New artificial-intelligence-powered services such as Google's Duplex bot that makes phone calls are enabled in part by chips specialized for running AI algorithms.

Chiplets also provide a way to minimize the challenges of building with cutting-edge transistor technology. The latest, greatest, and smallest transistors are also the trickiest and most expensive to design and manufacture with. In processors made up of chiplets, that cutting-edge technology can be reserved for the pieces of a design where the investment will most pay off. Other chiplets can be made using more reliable, established, and cheaper techniques. Smaller pieces of silicon are also inherently less prone to manufacturing defects.
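
A rough sketch of that yield argument, using the standard Poisson defect model; the defect density and die areas below are made-up illustrative numbers, not figures from the article:

    # Poisson defect model: chance a die of a given area has zero defects.
    import math

    def die_yield(area_cm2, defects_per_cm2=0.2):
        return math.exp(-defects_per_cm2 * area_cm2)

    big_die_cm2 = 6.0      # one large monolithic die
    chiplet_cm2 = 1.5      # the same logic split across four chiplets

    monolithic_yield = die_yield(big_die_cm2)    # ~30%
    chiplet_yield = die_yield(chiplet_cm2)       # ~74%

    # What drives cost is how much silicon you burn per good product:
    # with chiplets you only discard the small die that caught the defect.
    silicon_per_good_big_die = big_die_cm2 / monolithic_yield        # ~20 cm2
    silicon_per_good_chiplet_set = 4 * chiplet_cm2 / chiplet_yield   # ~8 cm2
    print(silicon_per_good_big_die, silicon_per_good_chiplet_set)

The gap widens as dies grow or defect density rises, which is why the argument matters most for the largest cutting-edge parts.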

Comments Filter:
  • Dammit (Score:5, Insightful)

    by Artem S. Tashkinov ( 764309 ) on Thursday November 08, 2018 @12:15PM (#57612322) Homepage

    Out of all places on the Internet, I want at least /. to admit that there has never been a Moore's law - it was a mere observation. From Wikipedia: "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years".

    Whoever decided to call it a "law" was a moron, and now we have this idiocy repeated in every news story. And since it's not a law, we could simply move on and realize that physics simply doesn't allow it to exist.

    • by Anonymous Coward

      Marketing has always operated in an entirely different reality from the rest of us; no news here. Slashdot shouldn't just be repeating marketing gobbledygook wholesale, but given its propensity for "native advertising" I've grown to expect it.

    • Re:Dammit (Score:5, Funny)

      by DontBeAMoran ( 4843879 ) on Thursday November 08, 2018 @12:31PM (#57612442)

      Especially when there's only one law: Brannigan's Law.

      • by dj245 ( 732906 )

        Especially when there's only one law: Brannigan's Law.

        I believe you are neglecting to include an equally and perhaps more important field of law - Bird Law.

    • Re:Dammit (Score:5, Insightful)

      by enriquevagu ( 1026480 ) on Thursday November 08, 2018 @12:51PM (#57612582)

      You are correct. It was never a law.

      Actually, it was a self-fulfilling prophecy [wikipedia.org]. Since Moore's "Law" provided a reference point for the evolution of transistor density, all designers knew where they needed to get, or otherwise their competitors would surpass them.

    • Comment removed based on user account deletion
    • by thelexx ( 237096 )

      Just thank the stars that the headline wasn't a question...

    • by Sneftel ( 15416 )

      People have always known that there were obvious physical limitations to Moore's law lasting forever. But it was never intended to last forever. And it was not simply an observation: Moore predicted that the pattern would continue to hold for at least ten years. I suppose you could argue that it should be "Moore's Prediction" or "Moore's Conjecture" or "Moore's Educated Guess", but surely we aren't going to quibble about what can [wikipedia.org] and can't [wikipedia.org] be called a law?

    • Even better: "You're still getting more density but it costs more and takes longer."

      That's a nonsense statement. With new technology, you learn to do things cheaper--with less human labor involved in total. They're saying the technology isn't improving as quickly.

      This technology that "costs more and takes longer" will be cheaper, faster, and better than last-generation's new technology when next-generation's new technology is standard. "costs more and takes longer" means we're being impatient and gree

      • by dryeo ( 100693 )

        Are you saying that new fab plants are cheaper, as in a 7nm plant is way cheaper than a 25nm plant? Likewise with the speed of process shrinking? I guess that is why Intel seems to be taking forever to do another process shrink.

        • It's because we are reaching a limit on how much we can shrink the circuits using the current materials; the tracks start bleeding electrons (electron tunneling, I think it's called) and bad shit starts happening. Clearly I am no expert, but if we want to get smaller we need to find better/different materials. While we do that, it would be great if we could find materials which generate less heat while operating, which would also greatly reduce energy consumption.
        • No, I'm saying that the SAME fab plant to produce the SAME thing will be cheaper at a point further in the future. The same fab plant and process in 2010 is cheaper than it was in 2000.

          We're finding that repeating the 2000 to 2010 step takes until 2025, but we're trying to do it in 2020. The 2010 process had similar costs in 2010 to the 2000 process in 2000--and would have cost a whole hell of a lot more to do in 2000. Well, the 2025 process has similar costs in 2025 to the process we used in 2010--an

    • Exactly, this isn't a law; it is just a trend that lasted far longer than most people expected.

    • Out of all places on the Internet I want at least /. to admit *that's what the phrase scientific law means*. From Wikipedia: "Each scientific law is a statement based on repeated experimental observations that describes some aspect of the Universe."

      "physics simply doesn't allow it to exist."

      There are plenty of laws for things that don't exist. For example, in a rotating frame of reference there are three fictitious forces (Euler, Coriolis, and centrifugal) that each have well-defined laws to describe their

    • by Anonymous Coward

      In science, that is exactly what a "law" is: an observation, "this is how things work", nothing more. Take Newton's laws of motion: they are observations, nothing more. There is no "this is how it must be" or "it can't be any other way", just "we looked and this is what we saw". A scientific law includes no explanation, no mechanism, no directive that this is the only way things must be and no other way.

    • "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years".

      Whoever decided to call it a "law" was a moron and now we have this idiocy repeated every news story. And since it's not a law, we could simple move on and realize that physics simply doesn't allow it to exist.

      A law, as opposed to a theory, rule, observation, hypothesis, etc., simply means it can be put into a mathematical formula. The above definition fits that very well. It has nothing to do with whether it holds for all cases or whether it has even been tested. Many laws are not exact, or apply only over a limited range.
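
      For what it's worth, the observation really does reduce to a one-line formula. A minimal sketch (the 1971 starting point and the two-year doubling period are the usual textbook figures, not anything from the story itself):

        # Moore's law as a formula: N(t) = N0 * 2**((t - t0) / 2),
        # i.e. transistor count doubling roughly every two years.
        def projected_transistors(n0, t0, t, doubling_years=2.0):
            return n0 * 2 ** ((t - t0) / doubling_years)

        # From the ~2,300 transistors of the 1971 Intel 4004, the curve lands
        # in the tens of billions by 2018, roughly where the largest real
        # chips actually sit.
        print(f"{projected_transistors(2300, 1971, 2018):.2e}")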

  • Not saying it can't work but this sort of thing seems to crop up every few years. At least at chip level it seems to do better than the ever recurring schemes for code reuse.

  • Bicycle reinvented (Score:5, Insightful)

    by sinij ( 911942 ) on Thursday November 08, 2018 @12:19PM (#57612362)
    Bicycle reinvented. These used to be called co-processors.
    • They just use much smaller sockets now. :)

    • by bluefoxlucid ( 723572 ) on Thursday November 08, 2018 @01:27PM (#57612834) Homepage Journal

      I've wanted a generic coprocessor architecture for a few months now. Imagine if you could stick a chip on your board and it could access the on-board video port (DVI, HDMI), a range of RAM exposed as RAM (it requests address and data), and so forth. Instead of on-CPU graphics, you have a chip that provides that. The same chip can provide things like encryption, encoding, and artificial neural networks.

      These things aren't extended CPU instructions as with an FPU. They're actual separate microcomputers. An ANN chip has a completely-different architecture with memory local to the neuron's logic unit instead of in a memory bank. A GPU runs its own program against a memory space.

      You don't need a huge riser and ports exposed on the card's edge. You can just plug into the board, get power and an addressing bus, and get appropriate output ports like display and DMA. You can provide multiple functions on one chip. Just make it a chip socket and make it standard.

      This works for things that have to run a process on input and output, or on large bulk data. It doesn't work for things that are just extended CPU instructions, like SIMD. Transfer back and forth between processing units and the hop through the memory controller creates too much latency.

      You can use a four-wire (RX,TX) LVDS memory bus, too: instead of 64 data lines and 32 addressing lines (1TB physical addressing), you can use two TX and two RX and use a packet protocol. Modern GPUs use 128-byte cache lines (seriously!). You can specify a protocol that sets memory unit size, offset, and then issues READ requests. If you want, your memory controller (on the expansion chip) could send an instruction packet {SET SIZE 512}, {READ 390625}, {READ 390626}, .... The return packet on RX would be the data. CPU's memory controller would carry out the memory read and stream the data to the RX pins.
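
      A minimal sketch of what such a packetized read protocol could look like; the opcodes, field widths, and framing here are invented purely for illustration:

        import struct

        SET_SIZE = 0x01   # set the memory unit size, in bytes
        READ     = 0x02   # read one unit at the given unit offset

        def set_size_packet(size_bytes):
            # {SET SIZE n}: 1-byte opcode + 32-bit unit size
            return struct.pack(">BI", SET_SIZE, size_bytes)

        def read_packet(unit_offset):
            # {READ offset}: 1-byte opcode + 64-bit unit offset
            return struct.pack(">BQ", READ, unit_offset)

        class HostMemoryController:
            """Toy model of the CPU-side controller servicing the serial link."""

            def __init__(self, backing):
                self.backing = backing   # bytes object standing in for DRAM
                self.unit = 64           # default memory unit size

            def handle(self, packet):
                op = packet[0]
                if op == SET_SIZE:
                    (self.unit,) = struct.unpack_from(">I", packet, 1)
                    return b""           # no data payload in the reply
                if op == READ:
                    (off,) = struct.unpack_from(">Q", packet, 1)
                    start = off * self.unit
                    # In hardware this slice would be streamed back on the RX pair.
                    return self.backing[start:start + self.unit]
                raise ValueError("unknown opcode")

        # The coprocessor's controller would emit {SET SIZE 512}, {READ 0},
        # {READ 1}, ... over its TX pair and consume the replies on RX.
        mc = HostMemoryController(bytes(4096))        # 4 KiB of stand-in RAM
        mc.handle(set_size_packet(512))
        assert len(mc.handle(read_packet(3))) == 512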

      The memory unit size is just a number of bytes. No trading off number of pins for maximum addressable RAM. There are odd reasons we use parallel buses for RAM, and it's not because parallel is faster; it's because building all of that stuff into DRAM is expensive and power-hungry. Since a coprocessor goes through a memory controller on a CPU, it's cheap there. Latency isn't as much of an issue as sheer bandwidth in this application.

      Imagine it. Just pop a graphics chip on your motherboard. 12V supply that can feed 100W. If you need bigger than that, then you buy a 16-lane PCI-Express card.

      • by mikael ( 484 )

        They did that thing with graphics chips with laptops. Some were designed to have GPUs in a cartridge-like format. They were advertised as letting you upgrade the GPU whenever a new one became available. The only problem: they needed all the new GPUs for their new laptops.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      No, this isn't co-processors.

      This is more about adopting MCM (Multi-Chip Module) techniques for high-end processors. Right now MCMs are a very popular cost-reduction technique where you can package a lot of tiny special-purpose chips into one package. You can build your special-purpose package with a machine and reduce your final assembly costs. I've seen MCM packages that are a half-dozen little grain-of-sand sized chips, bonded to a lead frame, encased in black plastic. From the outside it looks like an

    • by Megol ( 3135005 )

      Absolutely not! These aren't co-processors, they are processors in a modular system design - completely different. Using a modular system means that one can mix and match chips more easily, which means fewer chip masks are needed; it means bad/slow chips can be binned more efficiently and put together in the configuration wanted. It also means processors can be mixed and matched using different manufacturing processes, as exemplified by the recently announced AMD EPYC2.

  • Did anyone read the headline as being about Chiclets [wikipedia.org]? The chewing gum, not the keyboard.
  • Back to the future (Score:5, Insightful)

    by mspohr ( 589790 ) on Thursday November 08, 2018 @12:22PM (#57612378)

    It seems that the semiconductor industry goes through these cycles periodically. Whenever they run up against limits to single-chip integration, they go back to this strategy of wiring separate chips together. Ultimately, this proves to be inefficient and, once technology improves, they return to putting everything on a single chip.

    • by mikael ( 484 )

      It would let them decouple whatever was causing the problem. Long wires are known to cause crosstalk as they operate like antennae.

    • There's too much money tied to writing papers about the next end of Moore's Law for Moore's Law to be permitted to actually end.

    • by fisted ( 2295862 )

      Sounds like a legit approach?

    • But this time technology will not improve, because there are physical limits.
      • We're not at the physical limits yet. The present problem is that we don't yet have a technology to make mass produced smaller scale complex circuits; the problem is not that smaller circuits are impossible or won't work.
    • Just yesterday I was reading up on Multi-Chip Modules, Hybrid Integrated Circuits, and other similar technologies, and wondering to myself why they didn't do that more often in the 8- and 16-bit era to cut down on package pin count. The answer is fairly obvious: yields, and thus costs, are worse. Often much worse.

  • by Berkyjay ( 1225604 ) on Thursday November 08, 2018 @12:23PM (#57612386)

    Hitting up against the limits of Moore's law is probably the best thing for the chip industry. It's going to force them to innovate.

    • Well, technically they were. Now they have to pursue much harder innovations (read: expensive). I don't see that as a great thing any more than reducing dozens of car companies in the U.S. to just the big 3 was.

      • Wait, aren't there only like 4 or 5 big chip makers now?

        • 4 or 5 sounds about right; it depends on how you count. Given the costs involved in the latest processes and the risks of trying to invent new techniques, that could easily be down to two or even one.

          • US, China, Taiwan, Korea. These places will maintain a position in semi manufacturing because it is strategically wise to do so.

            Europe should be there too, but they aren't good at providing under-the-table support for a local tech industry, so they lose out.

        • by Megol ( 3135005 )

          3 pursuing or manufacturing 7nm+: Intel, TSMC, Samsung.

    • The problem is that their innovation is to crank out more cores ... for workloads that are still largely single-threaded, in the cases where the workload is high enough to stress a modern CPU.

  • by account_deleted ( 4530225 ) on Thursday November 08, 2018 @12:28PM (#57612424)
    Comment removed based on user account deletion
  • by Waffle Iron ( 339739 ) on Thursday November 08, 2018 @12:33PM (#57612452)

    Looks like they've reinvented the IBM 3081 [wikipedia.org] mainframe from 40 years ago:

    The elimination of a layer of packaging was achieved through the development of the Thermal Conduction Module (TCM), a flat ceramic module containing about 30,000 logic circuits on up to 118 chips.

    • by Anonymous Coward

      Mainframes, distributed servers, thick clients, personal computers, goto 10

      It's all a loop, sometimes through the cycle it skips a step, and not everyone will always see all the steps.

    • by Anonymous Coward

      Or a Transputer.

    • by Megol ( 3135005 )

      No, MCMs have been used for x86 for a long time. The Pentium Pro used an MCM with a separate SRAM chip, and later Core i7 chips have used external iGPU memory chips, etc.
      The new thing isn't using MCMs, or even placing several processor chips together on one MCM, but the change in system design to a more modular approach.

  • I'm pretty sure I saw something similar [nocookie.net], almost three decades ago.

  • by Anonymous Coward

    Back in the dark ages we used to put floating point math on a special chip. Intel had the x87 co-processor, and most computers didn't have them since computers didn't need high-speed floating point math, and when you did, you just did the operation in software. This started to go away with the 486, which (mostly) had an integrated FPU on chip, while a few cheaper models disabled it. The Pentium was the first chip where every model had an FPU.

    So the future seems to be de-integrating circuits? Possibly. But

  • Comment removed based on user account deletion
  • So they're going back to the Multi-Chip Module concept? Which has been used by multiple companies before, including Intel (Pentium Pro was the first).

    Wikipedia:MCM [wikipedia.org]

    It's like technology companies are starting to behave like Hollywood. Come out with a rehash of what they did a few years ago instead of any new revolutionary ideas.

  • by Anonymous Coward

    Chiplets! Imagine a Beowulf cluster of those. On a chip!

  • Chiplets are good, but what's wrong with wafer-scale integration? You could even combine them: chiplets as daughter boards in a 2+1-dimensional arrangement.

  • by Anonymous Coward

    Making larger chips from smaller "chiplets" will not improve the speed or efficiency of the final combined chip. This is purely a money-saving trick around the age-old problem of having lower yields with every subsequent process technology. If anything, the added circuitry necessary to facilitate interconnects between these chiplets only adds to power usage and transmission delays.

    Chiplets also do nothing for speeding up chip development as monolithic chips are already built from various templates stamped over a

    • These points were exactly what I was thinking as I was reading the summary. I would mod you up if I had mod points.

      Although as a partial counterpoint, if it is noticeably cheaper, it might indirectly allow the balance point between cost, speed, and yield of mass produced parts to be a bit faster...

  • Chip makers are currently doing 7 nm lithography. Copper atoms are 0.2 nm wide. We may not have reached the limits for lithography, but we have to be getting real close.
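
    Taking those figures at face value (and noting that "7 nm" is a marketing node name rather than a literal feature width), the back-of-the-envelope margin is only a few dozen atoms:

      feature_nm = 7.0   # quoted process figure
      atom_nm = 0.2      # quoted copper atom width
      print(feature_nm / atom_nm)   # 35.0 atom widths across the feature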

  • To communicate 100 light years away instantaneously all it takes is a long stiff rod where very small movement back and forth can communicate. Of course this is just a non practical idea, but maybe it's not a matter of trying to continue Moore's law at the hardware level but one of a mindset change on how much speed you really need, or is it a tortoise and hare race in how you approach getting something done. Sometimes I feel that a Commodore 64 is more responsive than a modern system running some overcomple

    • by Anonymous Coward

      To communicate 100 light years away instantaneously all it takes is a long stiff rod where very small movement back and forth can communicate. Of course this is just a non practical idea ...

      I'm not sure if this is sarcasm, but it's not only impractical, it's also not how reality works. Any interactions at one side of the rod would travel the length of the rod at a speed less than light. The exact speed would depend on what the rod was made out of.

    • by Dragonslicer ( 991472 ) on Thursday November 08, 2018 @03:54PM (#57613706)

      To communicate 100 light years away instantaneously all it takes is a long stiff rod where very small movement back and forth can communicate.

      Physics doesn't work that way. Force and motion propagate through a material at a finite speed. Perhaps the most well known example of this is the propagation of sound waves. So far, nobody has found a material for which the speed of sound is greater than the speed of light through a vacuum.
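
      Rough numbers, assuming a steel rod and the commonly cited value of roughly 6 km/s for the speed of sound in steel:

        LIGHT_YEAR_M = 9.461e15
        SOUND_IN_STEEL_M_S = 5.96e3     # approx. longitudinal wave speed in steel
        SECONDS_PER_YEAR = 3.156e7

        delay_s = 100 * LIGHT_YEAR_M / SOUND_IN_STEEL_M_S
        print(delay_s / SECONDS_PER_YEAR)   # ~5 million years before the far end moves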

    • by swilver ( 617741 )

      That's nonsense. Try using a C-64 now, you'll find it has trouble even keeping up with fast typing.

    • by swilver ( 617741 )

      Part of the reason why computers are slow is the rather stupid scheduling policies that are commonly used in Windows.

      A proper scheduler will prioritize handling *input* above all else (i.e., recording keys and mouse movements), as this makes a computer feel responsive. On Amigas this was a separate process, running at a priority higher than everything else, just to make sure nothing got lost. It got even better with an improved scheduler (Executive) which automatically prioritized I/O-bound processes over CP

  • Every couple of years somebody says that "Moore's Law is ending" and then it doesn't.

    I propose we call this Borehd's Law.

  • It's the cloud all over again. Isn't it?

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde

Working...