IBM Hardware

IBM Creates Commercially Viable, Electronic-Photonic Integrated Chip 71

Posted by samzenpus
from the when-your-powers-combine dept.
An anonymous reader writes "After more than a decade of research, and a proof of concept in 2010, IBM Research has finally cracked silicon nanophotonics (or CMOS-integrated nanophotonics, CINP, to give its full name). IBM has become the first company to integrate electrical and optical components on the same chip, using a standard 90nm semiconductor process. These integrated, monolithic chips will allow for cheap chip-to-chip and computer-to-computer interconnects that are thousands of times faster than current state-of-the-art copper and optical networks. Where current interconnects are generally measured in gigabits per second, IBM's new chip is already capable of shuttling data around at terabits per second, and should scale to peta- and exabit speeds."
Comments Filter:
  • More info (Score:5, Informative)

    by Geoffrey.landis (926948) on Monday December 10, 2012 @11:03AM (#42242829) Homepage

    The article is remarkably lacking in technical details.

    This article from two years ago is a little more detailed: http://www.eetimes.com/electronics-news/4211151/IBM-debuts-CMOS-silicon-nanophotonics [eetimes.com]
    or this press release: http://www-03.ibm.com/press/us/en/pressrelease/33115.wss [ibm.com]

    • by Geoffrey.landis (926948) on Monday December 10, 2012 @11:09AM (#42242869) Homepage

      And here's the IBM press release
      http://researcher.ibm.com/researcher/view_project.php?id=2757 [ibm.com]
      which has a sidebar that has "links to additional information" with a lot more details.

    • Re: (Score:3, Insightful)

      by Shatrat (855151)

      It's also remarkably misleading. Infinera has been doing Photonic Integrated Circuits for a while now, but they're definitely not cheap.
      The only thing IBM may have pioneered is doing it on Silicon. Infinera uses Indium Phosphide.

      • Re:More info (Score:5, Insightful)

        by Anonymous Coward on Monday December 10, 2012 @11:25AM (#42243023)

        The only thing IBM may have pioneered is doing it on Silicon. Infinera uses Indium Phosphide.

        What they've done hardly sounds like a small thing. They've gone from lab-scale to commercial-scale, at least in the lab (if that makes sense).

        They're not the first to make this kind of chip, but they've advanced the state of the art. There aren't many times when a completely new invention comes out, most of the time it's baby steps like this.

        • by Shatrat (855151)

          Infinera was at commercial scale around 7 years ago, at 100 gigabit speeds (10x10Gbit/s). They're very expensive, but cheaper than 10 discrete OTU2/OC192/10GbE LAN-PHY transponders with optics. From what I've read in the article, IBM may be able to use this to lower the cost of LR4 optics in routers; at least, that's what they seem to be aiming at. It won't give us the ability to do anything we can't already do today, though.

          • Re:More info (Score:5, Insightful)

            by bws111 (1216812) on Monday December 10, 2012 @11:45AM (#42243257)

            It won't give us the ability to do anything we can't already do today, though.

            Yes, it will. It will give you the ability to afford the technology, so that applications may turn up in places where you would not be able to put an Infinera type device.

            • You mean we can make the north/south bridge, memory controller, and graphics chipsets discrete components again without a performance impact? And a new PCI standard that includes 4 copper pins for power plus several optical pins for data, with expansion cards being roughly as low-latency and high-bandwidth as on-CPU-die integrated chipsets? How about system RAM access at almost L2 cache speeds?
              • by BitZtream (692029)

                You can't afford RAM at L2 speeds, and that's what the problem has always been.

                • Re:More info (Score:4, Informative)

                  by bluefoxlucid (723572) on Monday December 10, 2012 @02:00PM (#42244579) Journal
                  RAM is incredibly fast. L1 cache is SRAM and faster; RAM access involves a whole lot of slow-clocked machinery. There's a lot of latency because there's a memory controller between everything that works out how to send commands across, get the data, and put it in the CPU, mostly because just accessing RAM outright by attaching it to CPU pins doesn't work anymore (and partly because the memory controller adds features, but that's become less of a factor). Seriously, the CPU can clock a few times by the time an access request actually reaches the RAM.
                  • by Anonymous Coward

                    RAM is incredibly fast

                    Compared to disk, maybe, but time from request to first byte at the pin level on DRAM is still at least 60 CPU clock cycles (@ 3 GHz.) Add in all the rest of the delays (memory controller, caches, TLBs, etc., etc.) and you get a situation where you might as well context-switch if you get a cache miss, since your pipeline is going to be empty or stalled for hundreds or thousands of clock cycles.
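A quick back-of-the-envelope check of those numbers (the 60-cycle and 3 GHz figures come from the comment above; the ~100 ns full-miss figure is an assumed illustrative value, not a measurement):

```python
# Back-of-the-envelope DRAM latency arithmetic.
clock_hz = 3e9                      # 3 GHz CPU clock (from the comment above)
cycle_ns = 1e9 / clock_hz           # one CPU cycle in nanoseconds
pin_latency_cycles = 60             # minimum pin-level DRAM latency (from above)
pin_latency_ns = pin_latency_cycles * cycle_ns

print(f"One cycle:      {cycle_ns:.3f} ns")
print(f"Pin-level DRAM: {pin_latency_ns:.0f} ns")

# A full cache miss (controller, TLBs, queueing) is far larger; assume
# ~100 ns end-to-end as an illustrative figure.
full_miss_ns = 100
print(f"Full miss: ~{full_miss_ns / cycle_ns:.0f} CPU cycles")
```

At 3 GHz, 60 cycles is only ~20 ns at the pins; it's the surrounding machinery that pushes a full miss into the hundreds of cycles.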

                  • by BitZtream (692029)

                    You're comparing different types of RAM and pretending they are the same.

                    I said you can't afford gigabytes of RAM running at those speeds. You can not afford gigabytes of SRAM running at full CPU speed. I'd say it again, but you'll still pretend I said something else.

                    IF you could, you wouldn't need a memory controller in the way.

                    And those few CPU cycles you speak of ... are nothing compared to the waits spent while the data is fetched, buffered and latched by DRAM. Of course some of those cycles are spe

                    • You would need a memory controller, or your e-mail app could stomp all over your web server's memory space.
            • by Shatrat (855151)

              The hypothetical IBM chip is not a competitor to the Infinera device, it's a competitor to CFPs and CXPs produced using discrete optical and electronic components.

      • by bws111 (1216812)

        How is it misleading? It says right in the summary that their claim is that they use silicon and "standard 90nm semiconductor processes". The Infinera site says that Infinera was able to accomplish their stuff 'by not having to evolve existing manufacturing'. So it sounds quite a bit different to me.

        • by Shatrat (855151)

          I say misleading because both TFS and TFA headlines say commercially viable. When you actually dig into it, you find out this has never left the lab. It also implies that something like this hasn't been done before. It has, just not on silicon. And IBM still can't do it purely on doped silicon: if you read deeper, the wafer has to include a germanium layer and carbon nanotubes for the optical components.

      • So, cheap, easy-to-manipulate materials and a process compatible with modern fabrication techniques and machinery? As opposed to exotic, expensive metal ceramics requiring different processes and separate facilities? That's kind of revolutionary. It's like having all this propane gas that you liquefy, and then discovering how to modify engines by shaping steel components slightly different (i.e. a compound machine) to use propane gas instead of liquid fuel. (We actually have experimental gasoline engine
      • by mark-t (151149)
        Misleading, perhaps... but in all fairness, the headline doesn't actually say that IBM were the first ones to do it... only that they did.
    • The article is remarkably lacking in technical details.

      Maybe they are waiting for the patent applications to be processed, before giving out too many details . . . ?

    • The article is remarkably lacking in technical details.

      It's just a prototype. The commercially-viable article will be ready in two years.

  • OpSIS (Score:5, Interesting)

    by Darth Snowshoe (1434515) on Monday December 10, 2012 @11:11AM (#42242897)

    http://opsisfoundry.org/ [opsisfoundry.org]

    OpSIS is a foundry service for integrated photonics/CMOS electronics, similar to MOSIS for CMOS. Academic and research institutions can get small lots of experimental designs built as part of a multi-chip wafer run. They support libraries of standard and example components, some modelling and rules decks. They plan several fab runs a year, and access, last time I checked, three different processes from different vendors. Carver Mead is a booster.

    I had hoped to start designing with their rules a while ago, and got pulled into more immediate projects. I still think it's pretty cool, and would like to get back to it if ever I get a quiet moment.

    • Re:OpSIS (Score:4, Informative)

      by Darth Snowshoe (1434515) on Monday December 10, 2012 @11:14AM (#42242915)

      One thing that's worth looking into - OpSIS hosts or points to web-based training and seminars several times a year, sometimes given by CAD vendors that support their design and fab processes. They are well worth sitting in for if you're trying to spin up on this stuff. Not a plug, just my own experience.

  • by Chewbacon (797801) on Monday December 10, 2012 @11:17AM (#42242959)
    FTFA: "Ultimately, we are talking about a standard computer chip that could be integrated into any electronic device, without significantly impacting the price." This is going to be for high-end applications for quite some time, and pretty damn pricey when it first hits the desktop.
  • by CanadianRealist (1258974) on Monday December 10, 2012 @11:51AM (#42243325)

    Data transmission using photons rather than electrons is better. IBM has figured out how to do parts of that on silicon.

    Processing the data using photons instead of electrons might be better too. How much does what IBM has done help us toward being able to produce photonic logic?

  • So in practice, what does this mean exactly?

    Does it mean we can beat the 3-4Ghz CPU limit?
    Or does it mean we can treat DRAM as if it were more like next-to-the-metal L2 cache?
    Or does it mean we can have faster internet download speeds or lower latency?
    • 1. No, probably not.
      2. No, probably not, but in theory you may get higher bandwidth; latency should be similar to current DRAM.
      3. Probably; it should make for cheaper fiber optic transmitters.
      4. (which you didn't ask) You can electrically isolate components on a board, i.e. keep them off the same ground plane, giving fewer issues with electrical noise and signal propagation.

    • by timeOday (582209)
      I would be delighted if this leads to commodity implementation of optical Thunderbolt. Compared to copper Thunderbolt, which is limited to 3m, optical Thunderbolt can run tens of meters [wikipedia.org]. Instead of maintaining and powering so many computers around the home or office, you could have a centralized "mainframe" with a strand of fiber for each terminal, because you could send uncompressed video signals without the computational load or latency of re-compression.
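The uncompressed-video arithmetic behind that idea is easy to sanity-check (the 1080p60 resolution and 24-bit color depth are assumed example figures; the 10 Gbit/s channel rate matches first-generation Thunderbolt):

```python
# Bandwidth of an uncompressed 1080p60 video stream at 24 bits/pixel.
width, height = 1920, 1080
bits_per_pixel = 24
fps = 60

bps = width * height * bits_per_pixel * fps
print(f"Uncompressed 1080p60: {bps / 1e9:.2f} Gbit/s")

# Roughly 3 Gbit/s -- it fits in a 10 Gbit/s Thunderbolt channel with
# room to spare, so no re-compression is needed, as the comment notes.
```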


  • I see business

    These are exciting times we live in

    Logic enters the 4th dimension

  • This breakthrough will lower our cell phone and cable bills by at least a dollar a year!!
  • by iiii (541004) on Monday December 10, 2012 @02:22PM (#42244775) Homepage

    ...thousands of times faster than current state-of-the-art copper and optical networks...

    Nope. Electrons and photons still moving at the speed of light, which is relatively constant. (c what I did there?!?)

    Ok, mostly I'm just being a smart ass. This may improve throughput and/or latency. But our chips are running into constraints due to the fact that the electrons can only go so far in on clock cycle. The stuff is cool, but it's not going to fix those problems.
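To put numbers on the "electrons can only go so far in one clock cycle" constraint, here is the vacuum light-speed distance per cycle at a few clock rates (a rough sketch; real on-chip signals travel at some fraction of c, and the exact fraction depends on the medium):

```python
C = 299_792_458  # speed of light in vacuum, m/s

for ghz in (1, 3, 4):
    cycle_s = 1 / (ghz * 1e9)        # duration of one clock cycle, seconds
    dist_cm = C * cycle_s * 100      # distance light covers in that time, cm
    print(f"{ghz} GHz: light travels {dist_cm:.1f} cm per clock cycle")

# At 3 GHz light covers roughly 10 cm per cycle in vacuum; signals in
# copper or silicon cover noticeably less, so die and board dimensions
# themselves become a timing constraint.
```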

    • by iiii (541004)
      s/ on / one /
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Electrons don't move at the speed of light. The electric field generated by moving electrons propagates at the speed of light. In other words, when an electron starts moving at one end of a wire, the electric field propagates down the wire at the speed of light and starts the electrons at the other end moving.
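To put numbers on that distinction: the drift velocity of the electrons themselves, v = I / (n·e·A), is minuscule for typical copper-wire values (the current and wire radius below are assumed for illustration):

```python
# Electron drift velocity in a copper wire: v = I / (n * e * A)
import math

I = 1.0            # current in amperes (assumed example value)
n = 8.5e28         # free-electron density of copper, electrons per m^3
e = 1.602e-19      # elementary charge, coulombs
r = 0.001          # wire radius in meters (~2 mm diameter, assumed)
A = math.pi * r ** 2   # cross-sectional area, m^2

v = I / (n * e * A)
print(f"Drift velocity: {v * 1000:.4f} mm/s")
# The field propagates near the speed of light, but the electrons
# themselves crawl along at a few hundredths of a millimeter per second.
```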

  • Does this have any use in the image sensor market?

  • So we've gone from using flashlights and reflectors for signalling with light to a wire capable of 1+ Tbit/s, also using light. Love it.
  • First there were horses, then steam, then gas, now electricity and soon light. It seems that all technology will eventually be powered by electricity, and some day by light, because it is the fastest and cheapest. Look at the price of horses, gasoline, metal and copper for arguments.
