The World's Largest Computer Chip (newyorker.com)

silverjacket writes: A feature article at The New Yorker: 'A typical computer chip is the size of a fingernail. Cerebras's is the size of a dinner plate. It is the largest computer chip in the world.' An excerpt from the story: Even competitors find this feat impressive. "It's all new science," Nigel Toon, the C.E.O. and co-founder of Graphcore, told me. "It's an incredible piece of engineering -- a tour de force." At the same time, another engineer I spoke with described it, somewhat defensively, as a science project -- bigness for bigness's sake. Companies have tried to build mega-chips in the past and failed; Cerebras's plan amounted to a bet that surmounting the engineering challenges would be possible, and worth it. "To be totally honest with you, for me, ignorance was an advantage," Vishra said. "I don't know that, if I'd understood how difficult it was going to be to do what they did, I would have had the guts to invest."
  • Wafer-Scale Integration: https://en.wikipedia.org/wiki/Wafer-scale_integration

  • And who are these companies?

    And why should we care?

  • Chip stats (Score:5, Informative)

    by phantomfive ( 622387 ) on Friday August 20, 2021 @02:24PM (#61712241) Journal

    "on-chip SRAM of 40 gigabytes, memory bandwidth to 20 petabytes per second and total fabric bandwidth to 220 petabits per second"

    From Wikipedia.
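
    For a rough sense of scale, dividing those figures by the 850,000-core count cited elsewhere in this thread gives per-core numbers (back-of-the-envelope only, decimal units assumed):

    ```python
    # Rough per-core figures derived from the quoted specs; illustrative only.
    cores = 850_000                  # core count cited elsewhere in this thread
    sram_bytes = 40e9                # 40 GB of on-chip SRAM
    mem_bw_bytes_per_s = 20e15       # 20 PB/s memory bandwidth
    fabric_bw_bits_per_s = 220e15    # 220 Pbit/s fabric bandwidth

    print(f"SRAM per core:      {sram_bytes / cores / 1e3:.0f} KB")
    print(f"Memory BW per core: {mem_bw_bytes_per_s / cores / 1e9:.1f} GB/s")
    print(f"Fabric BW per core: {fabric_bw_bits_per_s / cores / 1e9:.0f} Gbit/s")
    ```

    That works out to roughly 47 KB of SRAM, ~24 GB/s of memory bandwidth, and ~260 Gbit/s of fabric bandwidth per core.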

  • New Yorker? (Score:5, Informative)

    by necro81 ( 917438 ) on Friday August 20, 2021 @02:34PM (#61712267) Journal
    Ah yes, the New Yorker, my go-to source for science and technology journalism.

    The article is long on story, which is engaging, but a bit short on details that this audience would be interested in. For that, I suggest this article from April in IEEE Spectrum [ieee.org]. Or this one from Jan 2020 [ieee.org], or this one from Aug 2019 [ieee.org]. Or these snippets from AnandTech [anandtech.com]. Or this in the EE Times [eetimes.com] that explains the cooling.

    In other words, this article may be bringing Cerebras into the popular consciousness, but it is hardly news.
    • Re: (Score:2, Interesting)

      by DRJlaw ( 946416 )

      In other words, this article may be bringing Cerebras into the popular consciousness, but it is hardly news.

      We here in "the popular consciousness," who don't read IEEE Spectrum or EE times on a regular basis, consider it news. Hint: common definitions include "information or recent events..."

    • An old Soviet joke: "In Soviet Union they make world's fastest clocks and biggest micro chips."
  • by Arnonyrnous Covvard ( 7286638 ) on Friday August 20, 2021 @02:38PM (#61712295)
    ...but because we thought they were going to be easy. (Programmers' Credo)
    • ...but because we thought they were going to be easy. (Programmers' Credo)

      [ Also a former President's credo on tariffs and trade wars ... :-) ]

    • Nah, we do things because they are hard and nobody could do them, including us. (Advanced Programmers' Credo)

  • No real advantage. (Score:4, Insightful)

    by Areyoukiddingme ( 1289470 ) on Friday August 20, 2021 @02:38PM (#61712297)

    A feature article at The New Yorker: 'A typical computer chip is the size of a fingernail. Cerebras's is the size of a dinner plate.'

    It's been possible to do such a thing for a very long time. It's not worth it because your yield is utter garbage. The process is not reliably repeatable unless chip feature sizes are so physically big that the resulting chip is merely an academic curiosity because it's so slow and power hungry.

    AMD has shown that the exact opposite is radically more effective. Their chiplets let them improve yield to quite profitable levels while using the smallest mass-produced feature size in the world. Chiplets even let them spread out the hotspots in the package, reducing thermal management problems. Intel's monolithic chips, especially in mobile applications, are notorious for letting one hard-working, hot-running core bleed heat into adjacent cores, causing them to throttle even under the mild load they're executing.

    Hell, signal propagation time alone is a problem across a chip the size of a dinner plate. That's an enormous distance at today's clockspeeds. The speed of light is a serious problem for such a chip.
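
    To put rough numbers on that (a sketch only; it assumes a ~215 mm die edge and on-chip signal propagation at about half of vacuum c, both illustrative):

    ```python
    # Rough edge-to-edge propagation delay across a wafer-scale die.
    c = 3.0e8                  # speed of light in vacuum, m/s
    die_width_m = 0.215        # ~215 mm across, roughly dinner-plate sized
    signal_velocity = 0.5 * c  # optimistic on-chip propagation speed

    transit_s = die_width_m / signal_velocity   # ~1.4 ns
    for clock_ghz in (1.0, 2.0, 3.0):
        cycles = transit_s * clock_ghz * 1e9
        print(f"At {clock_ghz:.0f} GHz: edge-to-edge transit is about {cycles:.1f} clock cycles")
    ```

    Even under those optimistic assumptions a signal can't cross the die in a single cycle at GHz clocks, which is part of why a design like this ends up as many locally clocked tiles rather than one globally synchronous chip.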

    When it comes to processing, one giant chip causes you grief and solves nothing. They're only useful for niche applications like sensor arrays, and even that's questionable since AMD pioneered mass production of precision placement of chiplets on a silicon interposer.

    • It's been possible to do such a thing for a very long time. It's not worth it because your yield is utter garbage.

      Surely they have some fraction of redundant circuits and a way to bypass the bad ones in testing. Regular-sized chips do.

      Of course like anything novel, it's most likely a bad idea or somebody would already be doing it.

      • "Surely they have some fraction of redundant circuits and a way to bypass the bad ones in testing. Regular-sized chips do. "

        Here's how it goes. Regular-sized chips are actually fabricated on dinner-plate-sized wafers holding dozens if not hundreds of chips. The wafer is then cut into the individual chips. If there's a flaw in the wafer (and a lot of the time there is), you usually toss the chip that has the flaw on it and use the rest. If it's all one chip, you've got to toss the entire thing.
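
        A toy Poisson yield model makes the point (the defect density below is illustrative, not a real process figure):

        ```python
        # Toy yield model: P(die has zero defects) = exp(-defect_density * die_area).
        import math

        defect_density_per_cm2 = 0.1          # assumed killer defects per cm^2
        small_die_cm2 = 1.0                   # roughly fingernail-sized die
        wafer_area_cm2 = math.pi * 15.0 ** 2  # 300 mm wafer, ~707 cm^2

        print(f"Yield of a 1 cm^2 die:         {math.exp(-defect_density_per_cm2 * small_die_cm2):.1%}")
        print(f"Chance of a defect-free wafer: {math.exp(-defect_density_per_cm2 * wafer_area_cm2):.1e}")
        ```

        With those made-up numbers a small die yields around 90%, while a defect-free wafer is essentially impossible, so a wafer-scale part only works if it can tolerate defects rather than requiring a perfect wafer, as the replies below note.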

        • Here, check this out:

          https://skeptics.stackexchange... [stackexchange.com]

          The second answer gives a lot of links.

        • If it's all one chip, you've got to toss the entire thing

          Or you could do the obvious thing: disable the defective subunits and ship the entire thing as a slightly less capable processor. They've no doubt estimated how many defects they are likely to encounter on any given wafer and adjusted their specs to account for it.

    • Not sure what "typical computer chip" means, but the entire chip plus plastic and leads and such is often the size of a fingernail or smaller. Yes, maybe the typical PC chip, but that's not the average, as the tiny chips far outnumber the big guys.

      The idea of what to do with a huge wafer-sized chip, or even let's say a 2-inch-square chip (that's 5 cm, before someone complains), is not always clear. Lots of RAM is potentially pointless. And it's very likely to screw up with typical PC style CPUs, mostly due

      • The idea of what to do with a huge wafer-sized chip, or even let's say a 2-inch-square chip (that's 5 cm, before someone complains), is not always clear. Lots of RAM is potentially pointless.

        In this case it has tons of RAM (40GB), and the memory bandwidth is phenomenal. Their idea is to be good at machine learning.
        There are a lot of interesting experimental architectures coming out recently aimed at machine learning. Most of them will not be good, but some will.

      • by cusco ( 717999 )

        It's for training neural networks and massive parallel processing, with 850,000 cores and 220 petabit/second fabric bandwidth. There's 40 gig of RAM on-board, to eliminate delays, and a 20 petabyte/second memory bandwidth. Takes 20 kilowatts to power it, and needs water cooling.

    • Hell, signal propagation time alone is a problem across a chip the size of a dinner plate. That's an enormous distance at today's clockspeeds. The speed of light is a serious problem for such a chip.

      It's still faster than going off chip, across a motherboard, and into another chip just to do a memory fetch. Most of the chip is RAM.

    • by pz ( 113803 )

      The speed of light is a serious problem for such a chip.

      Yes, but the speed of light is a serious problem for essentially all modern chips. What you might be trying to say is that the speed of light is such a serious problem for chips of that dimension that an extra few layers of technology need to be applied in order to make sure the unavoidable clock skew issues do not become deal breakers.

      When I was a graduate student (at a technical university that should be familiar to Slashdot readers), one of my projects was to look into how to go about synchronizing many

    • by zeeky boogy doog ( 8381659 ) on Friday August 20, 2021 @05:23PM (#61712877)
      What you have written suggests that you don't even know what Cerebras is. It is not a synchronous processor the size of a dinner plate. It does not have a disastrous yield problem. It does not (relatively speaking) have a thermal problem.

      What Cerebras is, is thousands of processors (and SRAM) whose wafer was never broken apart, and which has tens of thousands of bond-wire interconnects jumping across the wafer scoring marks.

      This is a single wafer, with (relatively speaking) a piddling few thousands of cores, that has a level of interconnect bandwidth equal to over ten thousand HDR Infiniband links. That is literally about twenty five times the connectivity of e.g. Summit at ORNL per core. Which is why Cerebras is capable of solving communication heavy problems (like implicit PDEs) faster than leadership-class computers, despite the fact that Cerebras draws about 10KW and fills one third of a single rack, where supercomputers draw a thousand times as much power and space.

      Also, this may surprise you but the team of professional ASIC engineers who designed Cerebras are, in fact, familiar with wafer defect rates and yields. The interconnect system is designed to go around a certain number of nonworking cores per square.
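
      For anyone curious what "go around" nonworking cores could look like, here's a minimal sketch of the general idea of detouring around dead tiles in a 2-D mesh (purely illustrative; not Cerebras's actual scheme, which is configured in hardware at test time):

      ```python
      # Illustrative only: find a path between two tiles in a 2-D mesh while
      # skipping tiles marked as defective.
      from collections import deque

      def route(width, height, src, dst, dead):
          """Breadth-first search over a grid of tiles, avoiding dead ones."""
          queue = deque([(src, [src])])
          seen = {src}
          while queue:
              (x, y), path = queue.popleft()
              if (x, y) == dst:
                  return path
              for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                  nx, ny = step
                  if 0 <= nx < width and 0 <= ny < height and step not in dead and step not in seen:
                      seen.add(step)
                      queue.append((step, path + [step]))
          return None  # destination unreachable

      # A 5x5 mesh with two defective tiles; traffic simply detours around them.
      print(route(5, 5, (0, 0), (4, 4), dead={(2, 2), (2, 3)}))
      ```
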
    • It also depends on the intended application. Integration helps with power and latency for data movement. However, the wafer has to be set up for exactly what you want it to do. Otherwise, the chip area is wasted. Chiplets are great for configurability but worse for power and latency overheads due to data movement. Which way is better? It depends on what the target application is. One other consideration is market economics. A huge wafer that efficiently does exactly what you want it to do likely add

    • A sensor array would be cool. Imagine a wafer-sized CMOS sensor -- it would form a multi-terapixel image. It would beat the crap out of any current exoplanet-hunting telescope design.

      Cerebras should totally design one.

      • by jabuzz ( 182671 )

        Stop imagining then. If you are building a new telescope which will cost hundreds of millions of dollars or euros, then you just keep making a new wafer-sized sensor till you get one that's perfect. If you have to throw a hundred away and it costs a few million, then that's the cost of doing business. After all, the sensor is one of the more critical components of the telescope.

        Though typically they want much larger cells than on your phone for low-light performance, so a multi-terapixel image is not on the c

  • While an engineering challenge, what real value does this have?

    • Shorter interconnects.

    • Density and latency. Same reason a laptop System on a Chip is appealing. Same reason L3 cache is faster than DRAM.

      If you include everything on a motherboard inside of a single chip you can have an entire laptop in a small 25mm x 25mm chip, not a large PCB with loads of surface mounts and connectors etc. Look at the CPU in a computer... then look at the motherboard. The more motherboard you include on the die the smaller the motherboard can be... until you have nothing but IO and power.

      The shorter, smalle

    • It's harder to lose it?
    • While an engineering challenge, what real value does this have?

      You could say the same about any advance in computing power. Some things we take for granted nowadays required serious hardware advances. So now I have a laptop computer, instead of a scientific calculator, pencil, and paper.

  • Where [slashdot.org] have we seen [slashdot.org] this before?

    Still pretty neat. Supply voltage: 0.8V. Supply current: 20000A.
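
    As a sanity check, those two figures multiply out to a power draw in the same ballpark as the ~20 kW quoted elsewhere in the thread (assuming both are nominal values):

    ```python
    # Trivial check: P = V * I, using the figures quoted above.
    voltage = 0.8      # volts
    current = 20_000   # amps
    print(f"Power: {voltage * current / 1000:.0f} kW")  # ~16 kW
    ```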

  • All for it! Maybe GPU pricing and availability will return to normal.

  • That might be doable, including the keyboard and a USB-C power port, but the cheat will probably be adding a display chip of similar size hinged to it.

  • On my old Apple I had 16 RAM chips, a PMMU, a CPU, and I/O chips, all of which could easily be replaced if they broke; then Apple made the iPhone, where the entire board had to be replaced if one memory cell went bad. Now this company is going to make us replace the entire computer!
    • by dskoll ( 99328 )

      Yes, consumers are going to flock to buy machines that consume 50kW of power and cost several hundred thousand dollars. So we'd better make sure they have the right to repair.

      • by HiThere ( 15173 )

        That's this model. One can readily assume that if it goes into production the costs will drop significantly...and that it won't go into production unless the power requirements do.

        • by cusco ( 717999 )

          It's been in production for quite some time, and is in use in places like Argonne National Lab, Lawrence Livermore, and GlaxoSmithKline.

          • by HiThere ( 15173 )

            OK, make that "mass production".

            • by cusco ( 717999 )

              It's a pretty specialized device, you're not going to be dropping one in the server room unless you have a specific need for that type of processing. They're looking at a market of being able to sell thousands of these, probably not even tens of thousands.

    • Now this company is going to make us replace the entire computer!

      No, I'm sure that there are plenty of redundant units on the wafer to handle fabrication errors.

      This company can offer you a subscription to a SPaaS (spare parts as a service) business model, then you'll be able to buy and unlock those repair parts that are already conveniently in your computer!

    • On my old Apple I had 16 RAM chips, a PMMU, a CPU, and I/O chips, all of which could easily be replaced if they broke

      How often does a computer fail because of transistors "going bad"?

      Far more likely failure points are RoHS solder and interconnects, which high levels of integration avoid.

      So the net result of SoCs is lower cost and higher reliability.

  • This chip would predictably have many vulnerabilities. I love the idea, but it is much easier to secure smaller silicon.
    • by dskoll ( 99328 )

      That doesn't make sense. It's probably an array of hundreds or thousands of similar processing units, not a single glob containing all kinds of different non-repeating stuff.

    • Not if you have very few simple elements, and merely replicate them many many times.
      Then all you additionally need to worry about, is effects emerging from how you put the elements together.
      (Think Lego or Minecraft when they were still just simple bricks.)

    • by cusco ( 717999 )

      This chip would predictably have many vulnerabilities. I love the idea, but it is much easier to secure smaller silicon.

      Vulnerabilities? So what? If the thing is sitting on the private network of Lawrence Livermore Labs training neural networks who the fuck is going to attack it? And why?

      • I used to be told the same thing in my privacy and threat-risk assessments for banks, governments, hospitals, schools, and the electrical grid. If there are vulnerabilities, someone will find a way to exploit them; the use cases are endless. First and foremost, espionage sounds apt.
        • by cusco ( 717999 )

          The thing is for training neural networks and massive parallel computing, that's all it does. This isn't a general purpose server that might host a web site or database or run a SCADA system. About the worst you could do would be to corrupt the output in some unknowable and unpredictable way, once you've had a few years to get a doctorate in AI processing.

          • What would happen if I could corrupt the data yet reconstruct it for myself? There is some value there. Considering the value of IP today, I see unlimited use cases for good and bad. IDK, maybe I'm too cynical now, but after a few decades of experience I trust nothing, not even the coffee machine.
    • And what, actually, would you need/want to secure such a chip from?

      I guess you mean, from getting stolen?

  • This is even bigger:

    https://monster6502.com/ [monster6502.com]

    And much more beautiful too!

    Gotta go all the way, mates!
    Gotta have the balls to go discrete components!
    You just didn't have the guts.
    Nice playing.
    Next! ;-)

    Bonus link: https://eater.net/ [eater.net]
    Build your own CPU from discrete components and breadboards.
    Plus graphics card and more.
    The guy's awesome.

  • Can they make a double-sided one, or can the wafer only be polished on one side or something?
  • Pics or GTFO
