IBM Hardware

IBM Leapfrogs Intel With 22nm Chips

Slatterz writes "Intel may be touting 45nm CPUs, but IBM says it can go much further, with a strategy to produce future chips using a 22nm fabrication process. The company is adopting a technique called 'computational scaling' to manufacture circuits small enough to deliver more powerful and energy-efficient devices. Intel plans to introduce 32nm chips in 2009, but chipmakers have hit a problem: current lithographic methods are not adequate for designs as small as 22nm, owing to fundamental physical limitations. IBM claims to have solved this problem." Unfortunately, the phrase "computational scaling" doesn't actually convey any information about how they've solved it.
  • Well duhhhh.... (Score:5, Insightful)

    by Rod Beauvex ( 832040 ) on Friday September 19, 2008 @01:05AM (#25067495)
    If I figured out how to do something that would lay a serious hurting on my competition, I wouldn't exactly go around saying how I did it either.
    • Re:Well duhhhh.... (Score:5, Insightful)

      by neonleonb ( 723406 ) on Friday September 19, 2008 @01:08AM (#25067511) Homepage
      Patents, dude. That's the reason they're around: so you can tell people how you did it, and still be the only one to do it. Some patents are evil, but I hope *someone* is using the system as it's intended.
      • Re:Well duhhhh.... (Score:5, Informative)

        by Anonymous Coward on Friday September 19, 2008 @01:45AM (#25067727)

        IBM and Intel have comprehensive patent cross-licensing agreements. Anything that IBM patents in the future, Intel already has a license for -- and vice versa. Trade secrets, on the other hand, are legally protected as long as the company with the secret takes adequate steps for it to remain a secret.

        • by Moraelin ( 679338 ) on Friday September 19, 2008 @02:57AM (#25068105) Journal

          Well, unfortunately it's a bit like the problem with conspiracy theories: anything that needs the complete cooperation of thousands to keep a secret isn't really going to stay a secret. Building a 22nm fab is going to require a lot of stuff, and a lot of people knowing what is being done there, how, and why. It takes only one disgruntled employee, or some Chinese subcontractor going, "hmm, I wonder what they'd buy that big an electron gun for... too big for electron microscopy... could it be they're using electrons at this many electron-volts instead of light?" to lose that trade secret in a jiffy.

          • by Anonymous Coward on Friday September 19, 2008 @03:06AM (#25068145)

            A fab is huge; most of the people who work on a site are completely ignorant of the details of how such deep magic is performed. Most of those thousands are only concerned with keeping the xyz network up or replacing/upgrading servers in the datacenter.
            Many of the machines are closed units which only ever get opened by a small number of techs.

            Actual knowledge of how they do what they do can be kept among a surprisingly small group of people.

            Yes, someone could take a stab at it, but much of the time it's the fine details rather than the general idea that make it workable.

            • by John Betonschaar ( 178617 ) on Friday September 19, 2008 @06:03AM (#25068973)

              It's not only about fabs; it's also about R&D on the production technology. The machines that perform the 'deep magic' also need to be developed, tested, and put into production.

              I'm working for ASML myself, which makes more than half of the lithography gear on the market, and I can attest that a surprisingly LARGE number of people on-site here know all the ins and outs of ASML scanner technology, both the stuff already on the market and the bleeding-edge stuff that no one outside is supposed to know about.

              ASML has 6500+ employees, so it's a pretty safe bet knowledge leaks out. I don't see why this would be different for IBM.

              • by epine ( 68316 )

                Some secrets are easier to keep than others. It depends less on the context, more on the secret. Of course, you believe every one of those "leaks". Wikipedia is unreliable, whereas leaky secrets in the general environment are irreproachable. Shifting sands of credibility. I suspect you'd have to level up ten tables at Texas Hold'em to achieve your first belt in industrial counter-espionage. Some pretty important secrets have leaked, and some pretty important secrets didn't. Every secret on its own te

          • Re: (Score:3, Funny)

            Comment removed based on user account deletion
          • by Roxton ( 73137 )

            True, but then the submarine patent emerges, so they get a longer lifespan on the patent and the benefit of a trade secret for as long as it lasts.

          • Re: (Score:3, Insightful)

            by twostix ( 1277166 )

            "Well, unfortunately it's a bit like the problem with conspiracy theories: anything that needs the complete cooperation of thousands to keep a secret, isn't going to really stay a secret."

            Sooo... there's no such thing as secret military weapons development and programmes, and *definitely* no state secrets. Everyone knows the exact inner workings of every aspect of the CIA and NSA, and the exact recipe for Coke and millions of other major trade secrets across the world aren't secrets either.

            Also Germany must ha

          • One employee isn't likely to know all the gory details of such a massive endeavour as a chip fab process. The devil is in the details. Simply saying "oh, they used device X" isn't going to help. It is like saying "Well, I saw the guys who made the building were using hammers and welders. Now you know how they built it."

            On the other hand, knowing a few basic bits of info can help eliminate a lot of dead ends and let you know you are on the right path. Andrei Sakharov said that the info gleaned f

          • It's one thing to risk your career, your money, and your freedom if the cause is worthwhile (saving lives, exposing injustice, etc). It's another thing to throw it all away for the possibility of some money from the competition. Even that's pretty unlikely, since no large company would ever take the risk of illegally buying trade secrets from a competitor's employees.

            You can go to jail for violating your company's trade secrets. Hell, you could go to jail for changing your investment strategies based off

      • Re:Well duhhhh.... (Score:5, Insightful)

        by Anonymous Coward on Friday September 19, 2008 @02:58AM (#25068113)

        Patents are mostly useful when you've outrun the competition by a few seconds. As a reward, you get to beat your competition with a stick for 17 years.

        Any *real* breakthrough is better protected by trade secrets. You stay ahead even longer, avoid having to look for infringement, avoid litigation altogether, and prevent cheap knockoffs from countries that don't enforce patents.

        • Re:Well duhhhh.... (Score:4, Insightful)

          by HungryHobo ( 1314109 ) on Friday September 19, 2008 @05:38AM (#25068875)

          Well, they can still take your product apart and try to build a knockoff.
          And if someone else discovers your trade secret on their own and files a patent, then you can have problems.

          • Re: (Score:2, Interesting)

            by Thyrteen ( 1084963 )
            Prior art.
            • Re:Well duhhhh.... (Score:4, Informative)

              by HungryHobo ( 1314109 ) on Friday September 19, 2008 @06:44AM (#25069221)

              Only applies to published material.

              • by dpilot ( 134227 )

                Shipping a product (not controlled by NDA) using the technology is legally equivalent to publishing.

                Oh, IANAL, etc.

              • Wrong, try again.

                in use or on sale in this country

                35 USC 102

                (a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent, or

                (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States, or

      • by SeaFox ( 739806 )

        Patents, dude. That's the reason they're around: so you can tell people how you did it, and still be the only one to do it.

        Telling how you did it and then defending your patents by taking violators to court is costly and time-consuming. Keeping your mouth shut and forcing your competitors to take apart your product to even begin to comprehend how you did it is much cheaper.

        And then even when they do start to copy you after that, at the very least you got a big market lead time over them you wouldn't have o

        • Re: (Score:3, Insightful)

          by russotto ( 537200 )

          Telling how you did it and then defending your patents by taking violators to court is costly and time-consuming. Keeping your mouth shut and forcing your competitors to take apart your product to even begin to comprehend how you did it is much cheaper

          But IBM has traditionally taken the former strategy. And given the number of partners they have in this (Mentor Graphics, RPI, Toppan) it seems a lot safer for them to get the patent than to try to maintain a lead with trade secrets.

          • by mdfst13 ( 664665 )

            They may be planning on using trade secrets until the proposal is developed enough to enter production. Then they could patent it and get the seventeen years of protection starting then.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        I remember reading in a business strategy book that at least when it comes to GPUs, companies don't bother to patent much since patenting something requires you to disclose too much and the technology advances so fast that by the time you have your patent, it's obsolete and worthless.

        • by Endo13 ( 1000782 )

          That sounds about right. I also seem to recall reading somewhere that a patent can take as long as 5 years to be issued after it's initially applied for. And if that's the case, that definitely lends credence to your statement.

    • Unfortunately, in countries that don't care about patents, they'll use that information to copy your design and sidestep all that pesky and expensive R&D. See China/India/Brazil. Because 22nm processes are a fundamental human right.....

      • by x2A ( 858210 )

        "See China/India/Brazil"

        Well, we use their cheap workforce to save us money... I say let them use our ideas, seems a fair trade!

    • by warrior ( 15708 )

      It actually tells exactly how they did it. For the better part of a decade we've been using interference patterns to draw features that are smaller than the wavelength of the light source being used by the photolith. The mask sets used to do this are incredible - stacks of ten lenses weighing over a ton for low-level features. Generating the correct "serifs" on rectangular layout shapes to create the correct interference pattern is very compute-intensive. It sounds like IBM is taking this beyond the sim

    • Did Apple retain the capability to start making PowerPC Macs again, or have they washed their hands of IBM's tech? IBM just might become CPU king once again (stranger things have happened).
  • by Anonymous Coward on Friday September 19, 2008 @01:08AM (#25067505)

    Instead of just saying they're going to do it.

    Talk is cheap.

    • by x2A ( 858210 )

      No... if you wanna sleep through progress, set yourself an alarm to wake yourself up when it's "all done". Why should anyone else care what you wanna sleep through?

  • I know it's getting harder and harder, especially considering these things are only a handful of atoms across, but why can't they ever skip a generation? Why work on three generations of chips simultaneously? Why not just skip one?


    • Re: (Score:3, Funny)

      by QuantumG ( 50515 ) *

      huh? They skipped 13 generations to get from 45 to 32 and they're skipping 10 generations to get from 32 to 22.

      • by something_wicked_thi ( 918168 ) on Friday September 19, 2008 @03:41AM (#25068321)

        Nanometers aren't discrete units, you know.

        The real reason they don't skip generations is that it's not cost-effective. Intel is making a killing on its tick-tock model, where they shrink the process in one generation and change the architecture in the next. This way, they can pipeline: they can have their semiconductor people working out how to make it smaller while the VHDL people are throwing together a new chip. Each group has twice as long as if they were coordinated, delays in one don't necessarily affect the other, and everybody is kept busy.

        If they wanted to skip a generation, then the fab guys would probably take longer, which means they'd have a time when they weren't pumping out new, incrementally better CPUs to sell to people. They'd make less money, and the consumer would have to wait longer to get something better.

        • Re: (Score:3, Informative)

          by TheRaven64 ( 641858 )

          When you start designing a CPU, you have a transistor budget. Someone looks into their crystal ball and says 'in five years, when you've completed the design, we will be able to give you n transistors and sell the resulting chip in the market segment we want.' This is really hard to do for two reasons. The first is that it requires them to predict what the market will want five years in advance (the P4 was probably the biggest example of a misprediction here). The other is that they need to work out h

          • by poopdeville ( 841677 ) on Friday September 19, 2008 @05:19AM (#25068803)

            That isn't how chip fabrication or design works at all.

            Intel has three design teams, in three countries. They compete for the next Intel release. The Israeli team won with the Core/Core 2 Duo design. All the design teams were expected (and told) to keep Moore's law in mind as the miniaturization teams worked out the shrinking details. The Core/C2D was the most efficient processor for that many transistors.

            The new 80 core machines are also coming out of the Israeli design team. These things don't even have (many) more transistors than a C2D. But each core is basically a streamlined Pentium 2 core (like the Core architecture), and they all share a large cache, and Apple has first dibs. Sweet.

          • Re: (Score:2, Insightful)

            by ichigo 2.0 ( 900288 )

            If they go too fast then this screws up the core team because they suddenly have more transistors than they expect.

            I don't see how that would be a problem. Being able to put more transistors on a chip != have to put more transistors on a chip. In this case, having a better process would lead to higher profits, so there is an incentive to advance process technology ASAP.

          • the P4 was probably the biggest example of a misprediction here

            Wow, you've forgotten the Itanic already? I'd say that that was a much bigger failure than the P4. :)

            Truth be told, IA64 is actually a very nice technology, but Intel massively misjudged advancements in compiler technology as well as what the market could use. A big jump to a new architecture was never going to go down well in a market that is inundated with closed-source code. It isn't as if the users can say "we want this architecture" and recompile all their proprietary software. 5 years after the in

        • by Iamthecheese ( 1264298 ) on Friday September 19, 2008 @04:52AM (#25068691)
          Yer Intel Captains can't do that anyway matey. The kind of bosuns Intel hires are the finest on the seven seas! The finest sailors won't sit on their arses and grind their swords, them kinds like to be up and doing! They like the smell of fresh booty in the morning! If Intel let those people sit, they'd keel-haul the bosses and set sail for new horizons [amd.com]! YARRRRRRRRR
    • by servognome ( 738846 ) on Friday September 19, 2008 @01:32AM (#25067647)

      I know it's getting harder and harder, especially considering these things are only a handful of atoms across, but why can't they ever skip a generation? Why work on three generations of chips simultaneously? Why not just skip one?

      Because it isn't just the technology you develop. You have to get several other companies to align their technology roadmaps with yours. Processing/handling equipment, raw materials, and a number of other technologies are involved in the production of a wafer.
      The semiconductor manufacturing industry pretty much moves together as a whole. Even if one company is out in front in terms of technology, it isn't that far ahead, which is why so many companies just focus on design and have foundries make their stuff.

      • by Kjella ( 173770 ) on Friday September 19, 2008 @02:11AM (#25067865) Homepage

        The semiconductor manufacturing industry pretty much moves together as a whole. Even if one company is out in front in terms of technology, it isn't that far ahead, which is why so many companies just focus on design and have foundries make their stuff.

        Actually it is "that far ahead", but the investments are so absurdly huge only a few companies can afford to keep up. Do the math: going from say 65nm to 45nm means the surface area is halved, but the real business difference is in the margin. Say it costs AMD $100 to make a chip; maybe they can sell it for $110. Enter an Intel 45nm part: they produce it for $50 and still sell it for $110. Which is why AMD's Atom competition is ridiculous - yes, it can conceivably keep up on performance, but the margins are abysmal compared to the extremely small die size of an Atom, which means Intel will be the only one making any money. In the long run it'll be better for everyone if Intel stumbles a little and competition stays intense, because they are bleeding their competitors dry. Notice that Intel is making substantial pushes into UMPCs, mobile devices, motherboards (more than chipsets before), graphics and SSDs. All of that is funded first and foremost by their superior process technology; their designs are good too, but not that spectacular.
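
        To make that margin arithmetic explicit (same illustrative numbers as above, not real cost data):

          sell_price = 110.0
          cost_old = 100.0          # chip on the older, larger process
          cost_new = cost_old / 2   # die area roughly halves with a full node shrink
          for label, cost in (("old process", cost_old), ("new process", cost_new)):
              print(f"{label}: cost ${cost:.0f}, margin ${sell_price - cost:.0f}")
          # old process: margin $10; new process: margin $60 -- same chip, same
          # price, six times the profit per unit for whoever shrinks first.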

        • Re: (Score:3, Insightful)

          by HungryHobo ( 1314109 )

          Yes, I got that impression too, it's not so much that their chips are the most fantastic on the market but rather that they can produce more, faster and for less money than everyone else.

        • Yeah, Intel's way ahead of the competition at this point. TSMC is just rolling out 40/45nm, and they say their process doesn't have any performance advantages over 65nm.

          http://www.semiconductor.net/blog/270000427/post/10025801.html [semiconductor.net]

          AMD's 45nm is coming too, but I haven't heard much about it.

        • Re: (Score:3, Interesting)

          by TheRaven64 ( 641858 )

          Say it costs AMD $100 to make a chip; maybe they can sell it for $110. Enter an Intel 45nm part: they produce it for $50 and still sell it for $110

          True, to a point. This depends on Intel getting the volumes up, however. The vast majority of the cost of a wafer of chips is the cost of building the fab in the first place. Each 45nm fab costs around $1-1.5bn. Intel aims to sell 100m of these chips by 2011. If they are the entire output from one fab in this time, then they have a $10 cost just from fab creation (in practice, they will be the partial output from several fabs - not sure of the exact numbers). If they only sell 50m, then the investment cost is
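
          Spelled out with the commenter's round numbers (not Intel's actual accounting):

            fab_cost = 1_000_000_000        # low end of the quoted $1-1.5bn
            for units in (100_000_000, 50_000_000):
                print(f"{units // 1_000_000}m units -> ${fab_cost // units} of fab cost per chip")
            # 100m units -> $10 per chip; 50m -> $20. Volume, not wafer cost,
            # dominates the economics.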

          • by Kjella ( 173770 )

            Well, that also depends on the costs going down - if AMD is still paying as much for their 45nm fab when they do move, they're just delaying costs that will have to be spread over the next generation of chips, when Intel has their 45nm already paid down (or, well, then the cycle probably continues). And I don't think AMD can survive "jumping over" a generation. Obviously you'd better choose whether you want to be the one fabbing or asking others to fab for you, but you'd damn well better keep up somehow. Being one ge

    • by _merlin ( 160982 )

      Why have children? Why not skip a generation and just have grandchildren?

      It doesn't work like that. The next technology you develop will be the next generation of your chips, just like your kids will be the next generation.

      • by Nullav ( 1053766 ) <mocNO@SPAMliamg.valluN> on Friday September 19, 2008 @03:15AM (#25068185)

        45nm chips do not pop out of the vaginae of 65nm chips. -1, Bad non-car analogy

        • by _merlin ( 160982 )

          But nonetheless, suppose I develop a semiconductor technology. You would call this "_merlin's first-generation semiconductor technology". If I then went on to develop another, better, semiconductor technology, you would call it "_merlin's second-generation semiconductor technology". There is no way that I could produce something you would call my third-generation semiconductor technology without producing a second generation first. Now my second generation of chips could be as good as, or even better th

    • by Joebert ( 946227 )

      especially considering these things are only a handful of atoms across

      I'm not sure you picked the best unit of measure for making your point there.

    • I know it's getting harder and harder, especially considering these things are only a handful of atoms across, but why can't they ever skip a generation? Why work on three generations of chips simultaneously? Why not just skip one?

      Because it takes too long to do the R&D; it would leave them with too much time between releases. Also, it's the nature of how products are developed. Once the early R&D folks come up with something and hand it off to the people who work on small-scale fab, it's not lik

  • the method... (Score:5, Informative)

    by lordholm ( 649770 ) on Friday September 19, 2008 @01:09AM (#25067529) Homepage

    FTFA: "IBM said that computational scaling overcomes these limitations by using mathematical techniques to modify the shape of the masks and the characteristics of the illuminating source used to image the circuits for each layer of an integrated circuit."

    That gives you an idea. They are not being more secretive than normal.
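
    What they're describing is in the same family as optical proximity correction / inverse lithography: simulate how the optics will blur the mask, then pre-distort the mask so the printed pattern comes out right. Here's a toy sketch of that loop in Python -- a Gaussian blur standing in for the real aerial-image model, every number invented for illustration, and emphatically not IBM's actual method:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def print_image(mask, blur=2.0, threshold=0.5):
          # Crude optics model: the lens blurs the mask, the resist thresholds it.
          return gaussian_filter(mask, sigma=blur) > threshold

      def correct_mask(target, steps=50, blur=2.0):
          # Nudge mask transmission against the printed-vs-target error; "serifs"
          # emerge at corners, where the blur rounds features off the most.
          mask = target.copy()
          for _ in range(steps):
              error = print_image(mask, blur).astype(float) - target
              mask = np.clip(mask - 0.3 * gaussian_filter(error, sigma=blur), 0.0, 1.0)
          return mask

      # A rectangle whose corners would round off if printed naively:
      target = np.zeros((64, 64))
      target[24:40, 16:48] = 1.0
      corrected = correct_mask(target)
      naive_err = np.sum(print_image(target) != (target > 0.5))
      opc_err = np.sum(print_image(corrected) != (target > 0.5))
      print(f"misprinted pixels: naive={naive_err}, corrected={opc_err}")

    Doing that accurately for every polygon on every layer, plus optimizing the illumination source itself, is where the "computational" part (and the supercomputer time) comes in.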

    • by RuBLed ( 995686 )
      I'm not under NDA, so I'll give you an additional detail.

      int currentNM = 45;
      int newNM = currentNM - 23; // I like Michael Jordan
      foreach (var currentChip in _chipQueue)
      {
          CreateNewAwesomeChip(newNM);
      }
    • Re: (Score:1, Funny)

      by Anonymous Coward

      Hopefully it's a little more novel than that. Otherwise, this article can be summarized as "IBM discovers OPC (optical proximity correction). A decade late."

    • by marxmarv ( 30295 )

      FTFA: "IBM said that computational scaling overcomes these limitations by using mathematical techniques to modify the shape of the masks and the characteristics of the illuminating source used to image the circuits for each layer of an integrated circuit."

      Heh, unsharp mask?

  • Who knows.. (Score:5, Insightful)

    by eebra82 ( 907996 ) on Friday September 19, 2008 @01:12AM (#25067549) Homepage
    The article doesn't mention when such chips would be ready for production and I doubt that IBM's original press release sheds any light on that subject. So all this COULD mean is that IBM only announced their breakthrough ahead of Intel, not that they are ahead or behind Intel.

    It's still good to see that Moore's law is hanging in there.
    • by Bender_ ( 179208 ) on Friday September 19, 2008 @02:23AM (#25067919) Journal

      Indeed, they have not even demonstrated working devices yet. The press release is nothing but the announcement of the utilization of one specific technique.

      09/19/2008, The Internet. Slashdot user Bender_ announces plans to leapfrog IBM and Intel by building 10 nm structures in his garage.

    • I was always led to believe that you don't look at Intel/AMD or anyone of that ilk for the latest semiconductor technology. You look at people like Samsung who make DRAM. Since it is simpler and there is more of it manufactured, it is used to test and prove new processes long before something with the complexity of an x86 processor is made.

        • The thing about RAM is that it is defect-tolerant. It's easy to add a bit of extra RAM and make the decode circuits programmable. Then, after testing the chip, they can program it to not use the defective blocks.

        With less homogeneous chips like CPUs this is much harder.
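
        A sketch of that trick (hypothetical structure for illustration; real parts typically program the remapping with fuses after wafer test):

          class RedundantRAM:
              def __init__(self, rows=1024, spares=16):
                  self.storage = [[0] * 64 for _ in range(rows + spares)]
                  self.remap = {}          # defective row -> spare row
                  self.next_spare, self.limit = rows, rows + spares

              def mark_defective(self, row):
                  # Programmed after test: steer a bad row to a spare.
                  if self.next_spare >= self.limit:
                      raise RuntimeError("out of spares -- chip is scrap")
                  self.remap[row] = self.next_spare
                  self.next_spare += 1

              def read(self, row, col):
                  return self.storage[self.remap.get(row, row)][col]

          ram = RedundantRAM()
          ram.mark_defective(42)              # row 42 failed wafer test
          ram.storage[ram.remap[42]][0] = 7
          print(ram.read(42, 0))              # 7 -- the defect is invisible to the user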

    • by hackus ( 159037 )

      Chips will be available on the 22nm process spring/summer 2011.

      -Hack

  • AMD's partner IBM? (Score:3, Interesting)

    by fishyfool ( 854019 ) on Friday September 19, 2008 @01:13AM (#25067553) Homepage Journal
    Does this mean the Phenom will be produced on 22nm scale? Could be a very interesting development in the AMD/Intel chip wars.
    • by Hal_Porter ( 817932 ) on Friday September 19, 2008 @01:18AM (#25067577)

      The writeup is misleading. 45nm is in production now, and 32nm is due in 2009. The work at IBM is basic research which will be used by both Intel and IBM to make 22nm chips later on.

      At least I think that's how it works. I guess Intel and IBM license patents from each other to allow them all to use the same level of technology. It certainly seems unlikely that IBM will be ahead of Intel in introducing smaller feature sizes since Intel is usually at the head of the pack.

    • by bussdriver ( 620565 ) on Friday September 19, 2008 @01:57AM (#25067799)

      I'd like to see somebody do something new besides just get smaller. CELL for example.

      Most users are just fine with a fixed system on a chip with no PCI. (ram too if you could pull that off) If you want to reduce power and cost you'd place as much as possible on a single chip. (using crazy IP games they could buy designs for parts on the chip-- consolidating manufacturing as well.)

      How about a working variation of Hyperthreading? Have 1.5 CPUs and manage it so it almost runs like 2 full CPUs? (since pipelines are still problems.)

      At least AMD is going to combine GPUs. But next they need to think about how to better integrate the vector processing that GPUs are taking over - instead of the weak MMX/SSE/etc features which have a lot of overlap in their uses.

      How about hardware accelerated stacks? MMUs that can handle a driver memory space (not just kernel and user.)

      Advances in clockless processing?

      Just slapping more cores on chips is the lazy way out. Most people could use a business-class computer on a single chip with a stick of RAM, maybe even a slower, cheaper, but larger secondary RAM... (since GPU RAM would get used a lot doing all that fluff that every OS now has.)

      • by SirSlud ( 67381 )

        more parallelism = more difficult programming

        it seems silly to complain about the guys at Intel and AMD when nobody has the skilled labour pool in your customer base to take advantage of asynch state machines.

        have 1.5 CPUs and manage it so almost runs like 2 full CPUs

        define CPU, please, I'm curious how you add 1 to 0.5 and end up with something higher than 1.5?

          • Hyperthreading uses the significant amount of idle CPU time to process something else; it is akin to a virtual processor. The gains are not incredibly significant and there are plenty of implementation complaints; however, the idea is still a good one.

            Rather than copy/paste a whole CPU, place a partial CPU that shares common units. More specifically, take that hyperthreading concept of using idle CPU time and provide it extra processing units to leverage -- a "wider" CPU design that makes use of the waste of
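
            A toy model of that in Python (stall rates invented for illustration; real SMT gains are heavily workload-dependent):

              import random
              random.seed(1)

              def throughput(stall_prob, smt=False, cycles=100_000):
                  issued = 0
                  for _ in range(cycles):
                      if random.random() > stall_prob:
                          issued += 1    # primary thread issues this cycle
                      elif smt and random.random() > stall_prob:
                          issued += 1    # second thread grabs the idle slot
                  return issued / cycles

              stall = 0.35  # fraction of cycles spent waiting (cache misses, hazards)
              print(f"one thread: {throughput(stall):.2f} instructions/cycle")
              print(f"with SMT:   {throughput(stall, smt=True):.2f} instructions/cycle")
              # ~0.65 vs ~0.88 here: shared units get used, so "1.5 CPUs" can look
              # almost like 2 -- but only when one thread leaves slots idle.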

      • Just slapping more cores on chips is the lazy way out.

        In most cases it is also the faster and cheaper way out for getting the software to support a chip. For a new architecture, you'll need to create compilers first as an absolute minimum. If you do something radically different, maybe even a new programming language that supports the new concepts.

        Let alone that anything non-x86 means no Microsoft products for your computer these days (except for the XBox360 with its PPC tri-core).

        So I think it will take tw

        • True. However, chips today already crack stuff into micro instructions, and I seem to remember Intel's x86 breaking its instructions down into some sort of internal RISC. Microcode just didn't seem to die.

          It's almost like we have simple hardware emulation going on everywhere in the PC market already.

          Adoption is a problem. I remember when PPC was killing x86 and how it didn't get any traction. I think it still can kill x86, but its big market never was big into killing x86 (just Apple was.) So we have had PPC ch

      • by TheRaven64 ( 641858 ) on Friday September 19, 2008 @05:12AM (#25068765) Journal

        Most users are just fine with a fixed system on a chip with no PCI. (ram too if you could pull that off) If you want to reduce power and cost you'd place as much as possible on a single chip.

        Chips like TI's OMAP series (found in the Nokia handhelds, OpenPandora, and a load of other things) have a CPU, DSP, GPU and more on the same die. They use a stacked-chip design so you can plug 128MB of RAM (256MB coming soon) on top of the package. Power usage is around 250mW.

        How about a working variation of Hyperthreading?

        Hyperthreading is Intel's implementation of an idea that IBM brought to market first (based on an academic research project which produced the first prototypes, with the original designer now working at Sun). Sun and IBM have had it working for years, as have a few others. It's unlikely in ARM chips, since the performance/power benefits in this space are worse than with multi-core (Cortex A9 allows up to 4 cores). It only makes sense for Intel in the Atom because it allows two contexts to share an instruction decoder, which reduces the cost of x86 bloat a bit.

        How about hardware accelerated stacks?

        x86 chips have had hardware accelerated stacks for well over a decade - rewrite an iterative algorithm with a software stack as a recursive implementation and you'll see a speedup.

        MMUs that can handle a driver memory space

        IOMMUs have been in Sun and IBM chips since they introduced 64-bit CPUs and wanted to plug in 32-bit PCI devices. Newer Intel and AMD designs also include them.

        Advances in clockless processing?

        Asynchronous designs have been floating around for a few decades but still don't deliver the kind of performance benefit that offsets the extra complexity (which equates to extra power usage).

        • Hardware stacks?
          Interesting. I never touched x86 assembly (always hoped it would die so I'd never have to touch it, but good compilers pretty much removed the need.) You have any info on this? I'm curious and Google isn't much help.

          MMU
          mainframes have had >2 memory spaces forever. Are you saying the current generation of CPUs all support 3 memory spaces and the OSes are simply not exploiting the features yet? If so, these kernel devs need to get cracking and start moving drivers out of the kernel memory space.

          • Hardware stacks? Interesting. I never touched x86 assembly (always hoped it would die so I'd never have to touch it, but good compilers pretty much removed the need.) You have any info on this? I'm curious and Google isn't much help.

            Push and pop instructions store and load registers to an offset from the stack pointer register. On a modern implementation, these are aliased with some hidden registers, so they're very fast. All local variables will be implemented with these.

            MMU mainframes have had >2 memory spaces forever

            Here I begin to suspect you have no idea what you are talking about. A modern operating system creates a virtual address space for every process. x86 chips have four protection domains (x86-64 drops this down to two, then adds a third one back for virtualisation).

            • I thought Intel didn't have load/store instructions?
              http://en.wikipedia.org/wiki/Stack_machine [wikipedia.org]

              Yes, the OS uses hardware assist for handling memory spaces.

              Letting something like 1394 or PCMCIA get full unrestricted DMA access shouldn't even be possible. Rogue devices can cause crashes or security threats; the devices are not dumb, they have firmware. Ideally, they should not need CPU/MMU time to monitor all their operations.

              Many USB drivers exist in kernel space. Most could run in their own memory space; some even in

              • I thought Intel didn't have load/store instructions?

                No, it has both load/store and push/pop instructions. The architecture only has two stacks (one is the FPU stack, which is a horrible design).

                Letting something like 1394 or PCMCIA get full unrestricted DMA access shouldn't even be possible

                With an IOMMU, the kernel can restrict this, but it has to define a policy. Modern x86 hardware has an IOMMU.

                Many USB drivers exist in kernel space

                Which kernel? It depends on the operating system. Some run the drivers in userspace, some run them in kernel space. It's a design decision made by the kernel designers. If you don't like the kernel you're using, maybe you should pick a different one.

                I guess what I'm thinking of is some old mainframe I was told about over a decade ago, where hardware-imposed limitations on DMA prevented bad devices from taking down the system

                Yes,

  • Description from IBM (Score:5, Informative)

    by wyoung76 ( 764124 ) on Friday September 19, 2008 @01:22AM (#25067587)
  • Dual processor motherboard. Problem solved ;)
  • well, duh (Score:5, Funny)

    by Trailer Trash ( 60756 ) on Friday September 19, 2008 @01:33AM (#25067649) Homepage

    Unfortunately the phrase "computational scaling" doesn't actually convey any information about how they've solved it.

    Using some of SCO's intellectual property, of course...

    • Unfortunately the phrase "computational scaling" doesn't actually convey any information about how they've solved it.

      Well duh: They used a computer to scale things down.

      That wasn't hard.

  • by LS ( 57954 ) on Friday September 19, 2008 @01:41AM (#25067691) Homepage

    "...but chipmakers have hit a problem in that current lithographic methods are not adequate for designs as small as 22nm owing to fundamental physical limitations. IBM claims to have solved this problem."

    This is virtually the same statement made every time a smaller fabrication process is announced. It conveys no information. Obviously some physical limitation was preventing them from making smaller circuits, and then they overcame it to make circuits even smaller.

    LS

  • by TheMiddleRoad ( 1153113 ) on Friday September 19, 2008 @01:58AM (#25067807)

    Let me translate the press release:

    We announce that our future product, someday in the undefined and possibly distant future, will hit 22nm. We're making partnerships to make it happen.

    The slashdot writeup is misleading. For shame!

  • Catch? (Score:2, Interesting)

    by Tablizer ( 95088 )

    Maybe they did achieve 22, but perhaps there's a tiny catch: they don't work. They only claimed 22nm, not working 22nm. Watching all this Nov. 2008 campaign coverage has taught me to read between the lines.

  • by Gldm ( 600518 ) on Friday September 19, 2008 @02:16AM (#25067883)
    What a joke of an article. Every semiconductor manufacturer has several generations of process in various states in the lab. Woo, IBM's showing sneak peeks at 22nm!

    I met with an Intel VP [intel.com] for an interview a while back and talked about where things are going. He had some nice lab-pr0n of what the photos claimed were 11nm transistors. I believe it was said that was "about 15 years out", and meant to offer reassurance that Moore's Law still had a bit more time left to go.

    Actually here, let me go dig up my transcript so I can get a proper quote:

    You're going to see that platforms are going to continue to evolve. We're moving to a faster cadence. The processor cadence is about a two year cadence, in terms of process technologies. By the way this is interesting. We know how to do Moore's Law for about another fifteen years, which we've never had that kind of length of projection before. ...it sort of takes 3D transistors and all that, but we know how to do these things. It's all using standard silicon, it's CMOS, it's extraordinarily well characterized, right? But we've got transistors running at 11 nanometers, I can show you photographs of them. We have the leakage issues but we've got a very good plan.

    That was 2 years ago, early October 2006. Who leapfrogged what now?
  • by ZarathustraDK ( 1291688 ) on Friday September 19, 2008 @02:21AM (#25067905)
    Here in Denmark we want our chips big and crunchy. Silly Americans' chips are so small they can drink them from a mead-horn.
    • Yar, mead be a drink not fit for a swabbie, tis rum we relate to.

      Talk like a Viking day be in March, ye Danish scallawag.
  • It is no secret that IBM's legal department and patent portfolio are what always give IBM the upper hand. I'm sure that they managed to get alien CPU technology in a settlement for alien infringement on one or more of IBM's many patents.

  • by HannethCom ( 585323 ) on Friday September 19, 2008 @02:24AM (#25067931)
    http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=10810046 [eetimes.com]

    Though a more recent article stated that the first plant using 15nm won't be online until late 2011, or early 2012 at the latest.

    In the silicon production market there is usually a period of about 5 years or more between when something is announced and when it is in production. Which means we will see IBM's 22nm process as early as late 2013.
  • So, with this new chip process, I can expect my G5 PowerBook... This Fall?

    And make sure we break that 3Ghz barrier. Best not keep Mr. Jobs waiting any longer.

  • by usul294 ( 1163169 ) on Friday September 19, 2008 @07:21AM (#25069461)
    I'm still in college and we have a big semiconductors lab, so we had to learn the basics of lithography in class. The problem people are running into is that everything uses UV light, which theoretically can make details of 10nm (its wavelength), but this is incredibly hard. There exist techniques, not yet commercially viable, which use x-rays (masking material is an issue), electron beams, and proton beams (de Broglie wavelength). If IBM got one of these to work commercially it would be a big deal. If they built a state-of-the-art one of these and made some 10nm features, no big deal. Probably the single biggest issue is that they have to make a machine accurate enough to be exactly in the focal point of the beam (~0.1 nm), and the smaller the beam you are using, the smaller the focal point, so making more precise machinery is as much of a limiter as small beams.
    • I'm sure you're going to fail your class.
      The light used at the 45nm node is at 193nm wavelength, not "10nm". Features can be made much smaller than the wavelength of the light used because a variety of tricks are employed (immersion lithography, over-exposure, OPC, etc.)
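
      The back-of-the-envelope version, using the Rayleigh criterion (typical published numbers, for illustration only):

        wavelength = 193.0            # nm, ArF excimer laser
        na_dry, na_wet = 0.93, 1.35   # numerical aperture, dry vs water immersion
        k1 = 0.28                     # near the practical single-exposure limit (>= 0.25)
        for label, na in (("dry", na_dry), ("immersion", na_wet)):
            print(f"{label}: ~{k1 * wavelength / na:.0f} nm half-pitch")
        # dry: ~58 nm; immersion: ~40 nm. A single 193nm exposure can't reach 22nm,
        # hence double patterning and heavy computational mask/source optimization.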

  • Sounds like the technique that TrueType fonts use to resolve the problem of rendering fonts at low pixel sizes. If no corrections are performed, the characters (or glyphs) will either merge into each other or skip particular segments of glyphs (e.g. missing out the middle bar of the letter 'm'). The font engine is actually usually a 'virtual machine' with an instruction set that performs geometric calculations (like project point to line, snap point to grid, set axis of projection line) to solve this prob
