Slashdot is powered by your submissions, so send in your scoop



Intel's Roadmap Includes 4nm Fab in 2022 259

Precision submits "Intel Corp., the world's largest chipmaker, has outlined plans to make chips using 4nm process technology in about thirteen years. According to Intel, the integration capacity of chips will increase much more than the process shrink alone would suggest."


  • by captaindomon ( 870655 ) on Monday August 24, 2009 @01:15PM (#29174921)
    These are long-term business forecasts for 10+ years down the line. They are thought experiments only, in my opinion. They are still valuable, and something to consider, but still very much a "projection" and not a "concrete plan with funding".
  • by olsmeister ( 1488789 ) on Monday August 24, 2009 @01:20PM (#29174989)
    This is obviously pie-in-the-sky speak from the marketing dweebs, who don't understand the physical limitations that come with a die shrink.
  • by SlappyBastard ( 961143 ) on Monday August 24, 2009 @01:29PM (#29175121) Homepage
    Even accounting for the successful introduction of new materials for transistors, 12 years to get to 4nm seems a tad ambitious. Also, you have to wonder whether or not they're approaching the top of the S curve.
  • by Anonymous Coward on Monday August 24, 2009 @01:42PM (#29175305)

    Dnap crozak mucky mucky hoodwiggle. Aptach TRS-80 4,whacka-mole wuppa puppa. Bezdig 6502 Assembler!

  • Re:My Roadmap (Score:5, Insightful)

    by Locke2005 ( 849178 ) on Monday August 24, 2009 @01:43PM (#29175313)
    Give it up. The liability from lawsuits by people who sue after getting hit in the head by heavy gold flying pony crap will bankrupt you, just like it did the owners of the goose that laid golden eggs...
  • by ishmalius ( 153450 ) on Monday August 24, 2009 @01:46PM (#29175365)

    I would suspect that unforeseen developments, such as big advances in 3d circuit design, would alter this schedule a lot. This is simply daydreaming.

  • by TheRaven64 ( 641858 ) on Monday August 24, 2009 @01:58PM (#29175519) Journal
    3D chip layouts are part of this roadmap. This kind of roadmap isn't really intended to say what their process will be, however. It's intended to give numbers to their core design teams about how many transistors they will be able to play with, what the latencies will be, and so on. These teams will then start working on designs on the assumption that the predictions are correct, then tweak them a bit if they were wrong. If they go badly wrong, you get something like the Pentium 4.
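The "how many transistors they will be able to play with" point above is just geometry: density scales roughly with the inverse square of the feature size. A minimal sketch, assuming Intel's 2009-era 45nm process as the starting point and ideal linear scaling (both assumptions for illustration, not figures from the roadmap):

```python
def density_gain(start_nm: float, target_nm: float) -> float:
    """Ideal transistor-density multiplier from a full linear shrink."""
    return (start_nm / target_nm) ** 2

# 45nm (assumed 2009 baseline) down to the roadmap's 4nm target
gain = density_gain(45, 4)
print(f"~{gain:.0f}x the transistors in the same die area")  # ~127x
```

Real shrinks never scale perfectly, but even a fraction of that multiplier is the kind of transistor budget the design teams plan around.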
  • by treeves ( 963993 ) on Monday August 24, 2009 @02:25PM (#29175883) Homepage Journal
    Given the choice between this getting modded funny and getting modded insightful, I guess I'll be thankful it was modded funny.
  • by R2.0 ( 532027 ) on Monday August 24, 2009 @02:45PM (#29176121)

    "After all there's a reason you're not actually working in enginerring, when you're such a great engineer..."

    Yeah - the pay is better.

  • by Hurricane78 ( 562437 ) <deleted.slashdot@org> on Monday August 24, 2009 @03:12PM (#29176447)

    What's to stop you from carving them out of the bone of those rat-men? Are they boneless?
    Or are they actually your children by then? ;)

  • by Tanktalus ( 794810 ) on Monday August 24, 2009 @03:54PM (#29177037) Journal

    Um, for those building supercomputers?

    Today, supercomputers are not solely the purview of RISC chips (which could also use this technology, with the proper patent-licensing fees paid), but are also often made of commodity hardware, such as that coming from Intel. See: Google. With the sheer volume of data to mine that we have today, and the accelerated growth of data warehouses and other VLDBs (not just multi-TB, but multi-PB), faster everything is important in order to turn that data into value (sorry - that's already too buzzwordy). Yes, network speeds and hard disk speeds are important here. But not only does Intel not do that (well, they do some network, but that's not the biggest bottleneck anyway in this environment), you can also always fake disk speed by spreading your data over more disks, until SSDs or some other technology displace hard disks in server environments.

    It's not like Intel backing off on this will entice software companies to produce quality software. That suggestion is moot. The server market is huge. Intel wants to make more money by helping its customers do what they need to do with their data faster. I see nothing to complain about here.

    Besides, when we get chip size down, we also get more powerful (and usually more energy-efficient) mobile devices in smaller footprints. A remote control for your home theatre system that can display a second channel on a mini-display, so you know what you're going to before you get there. A phone that you can capture video with and edit right there before uploading to YouTube('s replacement) ... before the cops get there to confiscate it ;-) These don't just drive value/revenue for big corps in their back rooms; they come out and hit us as consumers. Interestingly, the big corps who fund this type of thing through purchases of ever-faster top-end equipment end up making it profitable enough to enter the consumer landscape, meaning they are in effect subsidising the rest of us. That video-editing phone probably wouldn't be profitable enough on its own to drive this development pace, but once the development is paid for by big corps, it's available to the rest of us some time later.
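The "fake disk speed by spreading your data over more disks" remark above is just linear aggregation: sequential throughput of an ideal stripe set scales with the number of spindles. A minimal sketch, assuming a round 100 MB/s per commodity HDD (an assumed 2009-era figure, not a measured one):

```python
PER_DISK_MBPS = 100  # assumed sustained throughput of one commodity HDD

def striped_throughput(n_disks: int, per_disk_mbps: float = PER_DISK_MBPS) -> float:
    """Ideal aggregate read throughput of an n-way stripe (RAID 0 style)."""
    return n_disks * per_disk_mbps

print(striped_throughput(8))  # 800.0 MB/s from eight cheap disks
```

In practice controllers, buses, and access patterns eat into that, but the linear-until-bottleneck shape is why the trick works for data warehouses.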

  • by RightSaidFred99 ( 874576 ) on Monday August 24, 2009 @04:17PM (#29177367)
    Strictly in terms of clock, yes. But if you normalize for performance per clock, it doesn't look that far off. I imagine a 3.2GHz Nehalem would perform somewhere around (or even north of) a 6.7GHz P4.
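The normalization in the comment above is simple: if performance is clock times instructions-per-clock, then a Nehalem matches a P4 running at the Nehalem clock times the IPC ratio. A sketch, where the ~2.1x IPC ratio is an assumption chosen to reproduce the comment's estimate, not a benchmarked figure:

```python
def equivalent_p4_clock(nehalem_ghz: float, ipc_ratio: float = 2.1) -> float:
    """P4 clock that would match the given Nehalem clock, assuming
    performance scales as clock x IPC and the given IPC ratio."""
    return nehalem_ghz * ipc_ratio

print(f"{equivalent_p4_clock(3.2):.1f} GHz")  # ~6.7 GHz P4-equivalent
```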
  • by Grishnakh ( 216268 ) on Monday August 24, 2009 @04:45PM (#29177759)

    No need to use hydrogen, which, BTW, doesn't really exist as a solid at reasonable temperatures.

    Yeah, I was kidding with that. My point is, it seems like we're not talking about anything that's orders of magnitude better than silicon. Before too long, if they keep shrinking things at this rate, they're going to hit a brick wall, right? They can only go so small with silicon, and then if they switch to graphene, they can get features a little smaller, but then they'll run up against the limits there, won't have anywhere to turn, and will have to do something completely different, like 3D chips or something.

  • by andy_t_roo ( 912592 ) on Tuesday August 25, 2009 @01:22AM (#29182557)
    They are, they just split the GHz across 2 bits of silicon (2 x 3GHz) -- today, you get 4 CPUs x 3.2GHz (or a tad under 13GHz).
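The arithmetic in that last comment is just cores times clock, which gives an aggregate cycle budget rather than a real single-thread speed; only workloads that parallelize cleanly see anything close to it. A minimal sketch of the numbers quoted:

```python
def aggregate_ghz(cores: int, clock_ghz: float) -> float:
    """Combined cycle budget across all cores, in GHz. This is raw
    throughput, not single-thread performance."""
    return cores * clock_ghz

print(aggregate_ghz(4, 3.2))  # 12.8 -- "a tad under 13GHz"
```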
