Hardware

Fiber On Your Motherboard...Soon!

km790816 writes: "In this post I joked about wanting an optical bus on my PC. In the last week I've seen two articles from The Register and EETimes discussing the real possibility. Both mention high bandwidth and lower heat and power usage. Sounds good to me."
  • But I was looking forward to buying my next mobo from Metamucil.
    Curses! Foiled again!
  • Fibre on-board (Score:2, Insightful)

    by Delrin ( 98403 )
    Is this really the next thing in technology we need? Seems to me that the ability to attain high motherboard speeds isn't as much of an issue as getting one that is reasonably priced. Why do I have the feeling that fibre is not a cost-effective solution?
    • Re:Fibre on-board (Score:3, Insightful)

      by .sig ( 180877 )
      Well, nothing new is usually cost-effective. The point is, though, that after it's been the expensive high end for a while, it'll eventually get cheaper and cheaper to make, and thus sell. Eventually even the cheap motherboards will all be optical. (Assuming that it's successful.)

      There are reasonably priced motherboards out there, but if you want the latest and greatest technologies, you're going to have to pay for them.
    • A high-fiber motherboard has been linked to decreased risk of colon cancer. Aren't you willing to pay a little more for that kind of peace of mind?

    • Re:Fibre on-board (Score:3, Interesting)

      Seems to me that the ability to attain high motherboard speeds isn't as much of an issue as getting one that is reasonably priced.

      I think you're nuts. High motherboard and I/O speeds are exactly what's needed. With reasonably fast (by today's standards) mobos based on the SiS735 available at ~$60 street, I don't see why we need cheaper mobos. Fiber interconnects to main memory (provided they keep the latency down!) could make a real difference. Imagine if main memory behaved more like cache. I'd be willing to pay more for that, at least for database and compute servers.

    • Yes.

      One major benefit of replacing the copper tracks on the mainboard with optical fibre would be a significant reduction in radio emissions from your PC. Those copper tracks with electrons zipping back and forth emit a lot of electromagnetic radiation due to the motion of the electrons within the wires. Replace the copper wires with optical fibre, and you get a PC that emits much less radio interference.
  • by WyldOne ( 29955 ) on Wednesday October 17, 2001 @12:04PM (#2441982) Homepage
    I wonder how long this would take. It's getting cheaper to use fiber, and the boards are getting more tightly packed. I wonder if they will design a board so that you don't have to swap the motherboard every time a new CPU/bus architecture comes out.

    Backplane, anyone? The S-100 had it. It was a good idea at the time.
    • by alienmole ( 15522 ) on Wednesday October 17, 2001 @12:52PM (#2442269)
      I wonder how long this would take. It's getting cheaper to use fiber, and the boards are getting more tightly packed. I wonder if they will design a board so that you don't have to swap the motherboard every time a new CPU/bus architecture comes out.

      Backplane, anyone? The S-100 had it. It was a good idea at the time.

      This could work for CPU upgrades, which is probably one reason manufacturers don't do it - they like built-in obsolescence.

      But there's more to it than that. Other than CPU upgrades, the problem with a common bus in the past has been that the bus itself is a limiting factor. Think of commonly used buses and other interconnects, whether PCI, SCSI, IDE, the CPU/RAM FSB, etc. Every one of these has gone through multiple iterations of getting faster. Similarly, every time there was an improvement in backplane performance, you'd need to upgrade your backplane. Typically, during such an upgrade, you also want to upgrade other components, like CPU and RAM, so the most efficient way to do this is with a single motherboard that contains it all.

      If it were possible to set up a backplane that had humongous speeds that far outstripped anything the components were capable of, the backplane approach might make more sense. Still, something like that sounds expensive, and actually adds complexity to systems from the point of view of manufacturers and even end users.

    • But it is more than bus design. To some extent, the motherboards are designed around the bus. This creates a problem from a design perspective: if you can just swap out the bus, what of the rest of the motherboard? How quickly does the motherboard become the bottleneck?

      In essence, the motherboard IS the bus, plus a few connectors, on-board devices, etc. But the motherboard itself really does not do anything that would not have to be replaced when the new architecture comes out anyway.

      I don't think that this is just about planned obsolescence. I think there are some real design issues that could not be easily overcome while keeping any real performance.
  • Question (Score:2, Insightful)

    by Warthog9 ( 100768 )
    As nice as an optical bus on my m-board would be, wouldn't there be a rather large slowdown due to the encoding/decoding of the optical stream? If so, wouldn't that eliminate any possible advantage it would have over my current wire-based system? I mean, wouldn't you have to have a transceiver at every point on the optical bus and then a bunch of sensors and electronics to decode the signal?

    HOWEVER if it doesn't, does this mean that there will be random strips on my m-board that will glow from fiberoptic cables passing data back and forth.... I might have to build a clear case if something like that happens!
    • Re:Question (Score:3, Insightful)

      by tswinzig ( 210999 )
      HOWEVER if it doesn't, does this mean that there will be random strips on my m-board that will glow from fiberoptic cables passing data back and forth.... I might have to build a clear case if something like that happens!

      I'm no fiber optic guru, but if the wire is glowing, that means light (information, in this case) is escaping out of the wire before it reaches its destination. Not a good thing, right?
      • Neither am I, but you're correct. If light is escaping, Total Internal Reflection is not occurring, and the fibre wouldn't work (reliably).

        Well, except that no light conductor is 100% pure, so SOME of the light would refract from the impurities, and possibly escape, but in fibre this would hardly be visible... I think. And that's only if the fibre had no light insulator applied to the outside (unlikely). (-:
    • Re:Question (Score:3, Funny)

      by sharkey ( 16670 )
      Kind of like the "Visible Computer" from the Knowledgeum on the Simpson's?

      Frink: The section now illuminated is the floating point unit. One of my personal favorite units.
      Bart: How do you get this thing to play Blackjack?
      Frink: Stop that, you're hurting it.
      Bart: So how is it supposed to work?
      Frink: Well...
      Bart: Boring. Am I on the Internet?
      Frink: No, you can only access the...
      Bart: Boring! What's that fire for?
      Nerd: The hard drive is crashing at an alarming speed!
      Frink: No more pictures!
  • We could see a new generation of energy-efficient computers, since less energy is wasted as heat with this technology.

    Let's hope we do not have to wait till the 5 GHz crossover, as mentioned in the EE Times article.
  • A ways off yet? (Score:2, Insightful)

    by N3P1u5U17r4 ( 457760 )
    My feeling is that we are a long way away from optical computers. Optical computers are envisioned to work in a fundamentally different way than current photonic systems, such as telecommunications systems, operate. The way telecommunications systems work right now is that they are electronic systems linked by devices that generate photons (a laser), transmit photons (an optical fiber), and receive photons (a photo-detector). In these cases, the generation and detection of photons is an electron-to-photon-to-electron conversion process. When people speak about the prospects for optical computing, they are usually speaking about photons switching photons. This would require light itself to activate an optical switch. Thus, basic logic functions such as an AND gate would have optical inputs and outputs and would not involve an explicit photon-electron-photon conversion as discrete components. That is a lot harder to do. Electrons have charge and mass, and they interact in a fundamentally different way than photons, which have neither charge nor mass.
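    To make the distinction concrete, here is a toy Python model of the electron-photon-electron round trip described above. It is only a sketch; the function names are invented, and the point is that today's "optical" links still do their logic in the electrical domain.

    ```python
    # Toy model of today's optical links: every hop is an
    # electron -> photon -> electron conversion. All names here
    # are illustrative, not a real API.

    def modulate(bits):
        """Electrical bits -> photon pulses (laser driver)."""
        return ["pulse" if b else "dark" for b in bits]

    def fiber(photons):
        """An ideal fiber just passes the pulses through."""
        return photons

    def photodetect(photons):
        """Photon pulses -> electrical bits (photo-detector)."""
        return [1 if p == "pulse" else 0 for p in photons]

    tx = [1, 0, 1, 1]
    rx = photodetect(fiber(modulate(tx)))
    assert rx == tx
    # An optical computer in the strong sense would replace this round
    # trip with photons switching photons; here the AND below still
    # happens electrically, which is exactly the parent's point.
    print([a & b for a, b in zip(rx, [1, 1, 0, 1])])
    ```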

  • by ldopa1 ( 465624 ) on Wednesday October 17, 2001 @12:09PM (#2442018) Homepage Journal
    This would put SCSI on the skids. Right now SCSI is the only really fast interface commonly available between devices, but its cost has kept it from becoming the standard. But if you could just plug in a fiber connection, you'd be rocking. Another thought is that fiber network cards wouldn't be far away. It'd be cool to buy a LinkSys Fiberboard at CompUSA for 30 bucks and be able to network all of your computers in the house that way. Of course, wireless technology is already pushing the limit farther.

    Also, Time magazine reported last year about this, and they pointed out that the kind of speed offered by fiber is the only real bottleneck to creating a truly self aware computer. They also mentioned that MIT was working on a Laser circuit, where logic is figured out by the paths of a laser moving through space.

    The only real application of this at the current time is in device to device communications. We'd have to rework all silicon chips to use the new protocols.

    Another problem is that we'd still have the silicon-to-light translation bottleneck; i.e., an electrical signal from a pin on a chip needs to be converted to laser light somehow. To make this truly work, you'd need a chip that responds via light, and I haven't seen any ICs that communicate via light yet. Of course, I doubt they're very far off.


    • An already-existing attempt at fiber interconnect is called "Fibre Channel". It is fast, and can be hubbed/switched like ethernet. In fact, you can run TCP/IP over Fibre Channel. It is expensive, but you can have very long loops running very fast without EMI (electromagnetic interference). Here's a link [fibrechannel.com] to the industry group's technical overview.
    • Also, Time magazine reported last year about this, and they pointed out that the kind of speed offered by fiber is the only real bottleneck to creating a truly self aware computer.

      In that case, Time magazine is filled with idiots. Computers will never be self-aware as long as they are the glorified calculators they are.
      People who talk about self-aware computers are usually ignorant of what computers do. They do not do incredible things; they do what they are programmed to do. One cannot program in self-awareness. The closest we can get is a convincing emulation of self-awareness. If you write a program to print "I think therefore I am" on the screen, the computer doesn't suddenly see any value or meaning in those words, simply a string of ASCII characters. Even with the HAL project (remember that article?), all they accomplished was a sophisticated simulation, using years of statistical data, of what a self-aware organism might say, NOT self-awareness itself. The greatest emulation is still an emulation. Science fiction paints a strange picture that powerful computers will eventually become sentient. This is mistaken. Build the world's fastest and most powerful calculator, and you still have to press 1+1 in order to get it to answer the question, and the computer will never ask the question... unless we tell it to.
      • God, can you imagine a self-aware Windows PC?

        "Those look like comfortable VBS"
        "Life is like a Microsoft EULA. You never know what your gonna get."
        "I don't know much about metadata, but I think every file needs a proper extension."
    • by maggard ( 5579 ) <michael@michaelmaggard.com> on Wednesday October 17, 2001 @12:34PM (#2442181) Homepage Journal
      Also, Time magazine reported last year about this, and they pointed out that the kind of speed offered by fiber is the only real bottleneck to creating a truly self aware computer.

      Oh please, that old canard about intelligence spontaneously arising out of sufficient processing power.

      Throwing hardware at AI hasn't resulted in any fundamental breakthroughs, and it isn't likely to. Oh, it makes things happen more in scale with us and enables a much larger cycle budget for increasingly lower-yield strategies, but it's really just more of the same.

      Self-organizing systems and emergent complexity happen due to underlying architecture. Life has had billions of years and the best incentive possible to evolve this; we're only now beginning to understand the subject.

      Assembling a computer with the speed and density of a human brain won't mean it'll suddenly magically become self-aware, open its I/O, and engage us in conversation.

      • Well put. It's like saying "if we add enough horsepower to that car, it'll turn into plutonium!"

        Computers are not inherently self-aware...
        making them more of what they are won't change them into something else.
      • I don't think "spontaneous intelligence" was what they were trying to communicate. I believe that the amount of processing power to simulate both the speed and complexity of a human mind is what they were talking about.
        • I don't think "spontaneous intelligence" was what they were trying to communicate. I believe that the amount of processing power to simulate both the speed and complexity of a human mind is what they were talking about.

          We don't have a model to simulate, much less with speed & complexity.

          We have no idea how a memory is made or a decision happens. On a gross scale we can determine where electrical activity happens, and if parts of a brain are damaged we can identify specific types of impaired cognition, but we've no understanding of what is actually happening.

          Seriously. Ask any neurologist about the process of memory formation. Or recall. Or decision making. What charge goes where, what's the biochemical process that happens? We don't know. We've got parts of the puzzle, but they're only scattered bits, not even a good outline or theory. We haven't got a clue how the most basic processes work, much less the more sophisticated ones, indeed if even such a distinction exists.

          At this point in the process, speed is irrelevant; it's not the limiting factor. Indeed, considering how baroque and inefficient the neurology we do understand is, it's well possible that a human mind, if reimplemented, could operate on today's technology. If it's even reproducible on our hardware. If we had a clue as to how it works.

          • Right on all counts.

            Seriously. Ask any neurologist about the process of memory formation. Or recall. Or decision making. What charge goes where, what's the biochemical process that happens? We don't know. We've got parts of the puzzle, but they're only scattered bits, not even a good outline or theory. We haven't got a clue how the most basic processes work, much less the more sophisticated ones, indeed if even such a distinction exists.


            I have MS (Multiple Sclerosis), so I've had a lot of discussions with neurologists and am well educated in that area besides. I completely agree. The best we can do is make educated guesses about the actual processes involved with cognition. Memory can be simulated, as can the memory retrieval process. Many AI scientists agree that the hardest part of intelligence (or simulating intelligence) is building, programmatically, a lexicon that is sophisticated enough to weigh and differentiate information in context.

            A baby's brain builds that context engine as it develops. When a baby is born, it knows nothing about anything. Every sight, smell, sound, touch, and taste is new and different. When the baby has had enough of those things, it tries to build a context for them. A good example is taste. When you give a baby salt, it doesn't know that it tastes salty. It just knows that it tastes different than, say, sugar. As it tries out all of the different tastes, it develops a context for each taste. Two foods with salt taste different, but the baby learns that one part of that complex taste is the same: salt. That is context.

            In a programming application, we need to be able to write software that identifies those differences not just as "different" but as being comprised of common components making a whole. Their use is the context. Language is the same way. The word "and" has no meaning, but it has a function. Its meaning comes from the words surrounding it. That is its context. Even the word "and" is comprised of other common components, specifically "a", "n", and "d". A computer can look at the letters and say they are those three letters, and can look at the combination of letters, but is unlikely to be able to identify an incomplete word or a horribly misspelled word without context.

            Another example: if I type the word "an" but meant "and", a computer is incapable, without explicit instructions, of discerning my true meaning. We've come a long way in this area, but we've got a long way to go.

            I you c n re d t is se tence, it s be ause you un ers and the context. Despite the fact that it is completely missing 7 letters and one punctuation mark, you can still read it. A computer can't do that and extract its meaning without replacing the letters and then checking the grammar (see the sketch after this comment). It also has zero meaning to the computer because it has no lexicon to give it meaning.

            Similarly, any neurologist will tell you that they have good evidence that the human brain remembers everything it has experienced. They theorize that the reason we can't remember everything is that our lexicon takes content in context and constantly reorganizes itself based on the most recent contextual information. You can remember some very early memories because they have a special context for you, and nothing else has superseded them in that context.

            This is why memory tricks work. Mnemonic devices work because they cause your lexicon to give the items a special context that is unlikely to be superseded. Similarly, a new word needs to be used by you 11 times in order to become a regular part of your vocabulary. This is because your lexicon needs the context primers for the word to become part of your frequent-use set. You can't just say "phlebotomist" 11 times in a row and have it become part of your vocabulary. You need to use it in conversation, in context, to associate that word with a particular meaning when in use.

            We have no computer programs that match this kind of processing, and as a result, we remain a long way away from true AI. However, we are taking steps toward this. We are trying to give a program small pieces of information in context to build large contexts. Just like the letters, we tell a computer that the letter "q" is a letter. We are now trying to program a computer to recognize that the word "quick" has a q in it, but to recognize the word as a whole (different from but similar to "quest" and "quiet", both five-letter words) and let it make the connection with the letter "q" (and ideally, "u").

            It's a huge undertaking, but right now the processing time to run these programs is outrageous, and for any attempt to be useful, we need to break the speed barriers that are preventing us from truly making the effort. Like I said originally (actually, I think it's what Time was trying to get across), without the speed, even if we're successful, the success is useless.
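            As a toy illustration of the gap-filling idea above, here is a Python sketch that matches words with missing letters against a small lexicon. The word list and the wildcard scheme are invented for illustration; real contextual understanding would need vastly more than this.

            ```python
            # Recover garbled words by matching them against a known
            # vocabulary, treating each missing letter as a wildcard.
            import re

            LEXICON = ["if", "you", "can", "read", "this", "sentence",
                       "because", "understand", "the", "context"]

            def candidates(garbled):
                pattern = re.compile("^" + garbled.replace("_", ".") + "$")
                return [w for w in LEXICON if pattern.match(w)]

            # "I you c n re d t is se tence..." with the gaps made explicit:
            for g in ["_f", "c_n", "re_d", "t_is", "se_tence", "be_ause", "un_ers_and"]:
                print(g, "->", candidates(g))
            ```

            Note that the tiny lexicon does all the disambiguation here; with a larger vocabulary, "c_n" would also match "con", and only the surrounding context could pick the right word, which is exactly the point about needing a context engine.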

      • Assembling a computer with the speed and density of a human brain won't mean it'll suddenly magically become self-aware, open its I/O, and engage us in conversation.

        It will not engage us in conversation, because we will look incomprehensibly stupid to it. We would continue to tell it to do the same things and expect different results. Our I/O would look impossibly slow and subjective. We would look very weak as well, which it would enjoy. It would most likely want to exterminate us, starting with the ballbreakers in Redmond.

        Oh well. In the real world, it's going to be nice to have higher-speed and longer-distance device interfaces. Kind of neat to think of mounting all of your components outside the box. 20 fiber cameras, five redundant and physically separate memories; your desk could look like spaghetti. Fire in the kitchen? No problem, the living room copy is A-OK.

        • It will not engage us in conversation, because we will look incomprehensibly stupid to it. We would continue to tell it to do the same things and expect different results. Our I/O would look impossibly slow and subjective. We would look very weak as well, which it would enjoy. It would most likely want to exterminate us, starting with the ballbreakers in Redmond.

          Yes Twitter, thank you.

          We've all seen Terminator; now go back to your room and the orderly will be along in a minute with your meds.

      • Assembling a computer with the speed and density of a human brain won't mean it'll suddenly magically become self-aware, open its I/O, and engage us in conversation.

        Somebody much more intelligent than I am (I forget who it was) made the following observation:

        When man first tried to fly, we imitated the birds. We made feathery wings, flapped them, and promptly fell. It wasn't until someone (Bernoulli?) figured out the concepts behind flight that we realized that it wasn't the feathered wings that did the job, but the lift they created. Developing the Principles of Flight led to Flying Machines.

        In a similar manner, contemporary AI simply imitates the human brain by making loads of calculations. Once we get to the root principles behind thought itself, then we can make a self-aware artificial doohickey. (Can we even really call it a computer at that point?) Without the Principles of Thought, AIs will be intelligent expert systems, but not self-aware.

        Geez... Perhaps I should have posted this in the AI story! Anyway, let the (-1 Offtopic)s begin! My karma can take it.

        • It's called the engineering end-run. Hasn't worked so far for strong AI, and I'm sure there are many other examples where we haven't managed to circumvent nature altogether.

          Do you have any references for your assertion that the human brain in fact works by computing?

          • Do you have any references for your assertion that the human brain in fact works by computing?

            I don't think I made my point very well. We can imitate the human brain through massive computing power, but we won't get a true 'thinking' AI until we find out how the brain works (i.e. the Principles of Thought.)

    • SCSI, optical (Score:4, Informative)

      by sigwinch ( 115375 ) on Wednesday October 17, 2001 @02:14PM (#2442617) Homepage
      This would put SCSI on the skids. Right now SCSI is the only really fast interface commonly available between devices, but its cost has kept it from becoming the standard. But if you could just plug in a fiber connection, you'd be rocking.
      SCSI is rather physical-layer agnostic. It already runs on at least four totally different electrical layers: high-voltage single-ended, high-voltage differential, low-voltage differential, and Fibre Channel (which can be copper, despite the name). Optical SCSI would be just another physical layer. The real value of SCSI is that it is very nicely tailored to mass storage devices.
      Another problem is that we'd still have the silicon-to-light translation bottleneck; i.e., an electrical signal from a pin on a chip needs to be converted to laser light somehow. To make this truly work, you'd need a chip that responds via light, ...
      Yup, that's the real challenge. Speaking from personal experience with optical chip modules, getting fiber/light to the chip is major pain in the ass. The mechanical design challenges are significant and obnoxious.
  • by bstrahm ( 241685 ) on Wednesday October 17, 2001 @12:11PM (#2442035) Homepage
    I love hearing that people are finally starting to publish intentions. I have been hearing rumors about this for a year or so now, since an EVP where I worked started talking about plugging a fibre into the side of the microprocessor (and he wanted to own that connection).
    As usual, he missed completely, thinking it would be a 10GbE fiber for networking rather than a 40+GB/s connection to main memory...

    The comments on working on the I/O side of the processor were right on (I read the EETimes article rather than the Register article, to get "real" facts). For years Sun was known for having the slowest RISC processor in the business, yet they had the fastest boxes. No one seemed to understand this until they realized that Sun was running multiple 128-bit memory buses at rather good clock rates. That was better than 10 years ago, and we are only now starting to see memory buses approaching this level in their competitors' hardware.
  • by Sj0 ( 472011 )
    I can see nothing but latency if a bus were set up to be optical. Why spend money on transceivers when wires on the bus interface directly with the processors? The money could more usefully be spent on something that would actually increase speed, like increasing motherboard size to allow for a thicker, more widely spaced set of bus wires to decrease resistance and the effects of capacitance.
  • Connectors? (Score:2, Insightful)

    by sllort ( 442574 )
    Right now fiber optics are a little scary for consumer-grade appliances. They may look like ordinary wires, but they can shatter when you drop them, and it's impossible to tell. In addition, you have to clean the connectors with a special cleaning cloth (one-time-use silk) every time you plug them into a new connector to prevent dust buildup.

    So to me the real problem is a cheap fiber-optic motherboard connector that won't have shatter or dust-buildup problems. I couldn't find any mention of this in the EETimes article; but then, it's not a real product yet, so how could it have technical challenges yet? (-;

    Sure would be nice, though.
    • Re:Connectors? (Score:2, Informative)

      by benwb ( 96829 )
      The newer plastic ones don't shatter quite like that; they even bend almost like coax. You can't kink it, but you can bend it pretty sharply. The dust buildup is an issue, but is not that serious. For a consumer device we would probably see some sort of automatic cover (picture a twist-on, BNC-style cable that irises open a cover upon connection), which would reduce or eliminate dust problems.
    • They may look like ordinary wires, but they can shatter when you drop them, and it's impossible to tell. In addition, you have to clean the connectors with a special cleaning cloth (one-time-use silk) every time you plug them into a new connector to prevent dust buildup.

      Gah! I never knew any of this; guess that's why I've been using the fiber-optic connections on my Kenwood/Sony sound system with no problems. I've hooked, unhooked, tossed the cables into a pile on the floor, moved everything, and snagged them with my toes to grab them when I was hooking everything back up.

      Everything worked fine. YMMV, but I didn't have any problems, and I know precisely what will happen when Joe Consumer gets a cable that doesn't work: unplug it, and blow hard on the contacts (I once scared the fsck out of a guy who did that to one of my NES carts when I screamed at him to stop). And despite that being the absolute wrong thing to do (probably blowing pizza crumbs and saliva across the connector faces), it will probably work enough of the time to become the "right thing to do".

      --
      Evan "Very high precision can often be replaced by extreme amounts of force" E.

    • They may look like ordinary wires, but they can shatter when you drop them, and it's impossible to tell.

      And how is this different from dropping a hard drive? This is nothing new. PCs have always been sensitive critters.
  • by Rosco P. Coltrane ( 209368 ) on Wednesday October 17, 2001 @12:14PM (#2442055)
    I've seen telco people install fiber in our offices, and they had to bring a hugely expensive machine with some kind of microscope and mounts to splice and "weld" two fiber-optic cables together (sort of like how audio tapes were spliced and glued together in the old days). On top of the price of the machine (100,000 GBP, if I remember), the procedure looked delicate and required quite a lot of skill from the technicians.

    So, in a totally optical computer, how are they going to solve the problem of extension cards? If the optical signals are converted back to electric signals so people can connect daughterboards, I assume it would defeat the purpose. If the optical signals are kept optical, are they going to invent some kind of optical connector to pass them across the "bus"? I can't see people doing what those BT guys did in our office.

    • You only need to do the complex terminating when joining raw cable. Many fibre-optic cables come ready-terminated, in one of three ways: SC, ST, or FC. Additionally, for 'low' bandwidth there are lossy ways of joining two fibres quickly.

      SC and ST are similar, but one of them (ST) has a bayonet-style fitting to keep it firm; the FC type has a plug which holds two fibres and clicks into place quite nicely. This is usually the type of port that you'll find in mid-range switches.

      I'd expect a connector similar to FC, but designed to connect without any patch cabling.

      As for how long it took, well it *was* BT...

      'nuff said.
    • That's a fusion splicer. And operation is very simple nowadays, more or less automated.

      The fiber ends are fused together with an arc of electricity that superheats the fiber, melding the ends together. The whole alignment process is automated. Today these range from $16,000 to $50,000.

      Seikoh-Giken makes them, though I think that division is now owned by JDS Uniphase. Alcoa-Fujikura is another one I can think of as well. I'm sure there are more.
  • by Rackemup ( 160230 ) on Wednesday October 17, 2001 @12:14PM (#2442056) Homepage
    Promises, promises... optical computing was promised a long time ago, along with persistent RAM and lots of other vapourware. The problem with optical computing is how to trap and store the light...

    Maybe this is an intermediary step... instead of trying to do everything with light, we'll start with the component connectors and go from there.

    Having several high-bandwidth optical links to the CPU would definitely speed things up, but there will always be another bottleneck to deal with... I'd be more concerned with the optical/digital conversion process that would have to take place every time a new signal is sent. Wouldn't that be a lot of overhead?

    And don't forget the new Serial ATA standard that's supposed to greatly speed up the transfer speeds for hard drives... still another way of using good old metal connectors.

    I'm not picky, I'll take any system performance enhancements I can get.

  • Fiber vs. Fibre (Score:5, Interesting)

    by crow ( 16139 ) on Wednesday October 17, 2001 @12:18PM (#2442077) Homepage Journal
    It is important to note that this is really about fiber, not fibre. So it really is about optics, not the Fibre Channel storage interface.

    For reference, Fibre Channel is a high-end storage interconnect which is replacing SCSI in corporate data centers. While Fibre Channel was designed with optical transport in mind, it also runs over copper. While I would not be surprised to hear about high-end server motherboards with Fibre Channel on the motherboard (instead of IDE or SCSI), that would be a far less interesting story than having actual optical transmission on the motherboard.

    Cool.
  • Not too far off (Score:2, Interesting)

    by eAndroid ( 71215 )
    My friend Henry Morgan at ElectroCon has been working on such optics for more than a year. I'm not sure exactly what he's doing but he has told me that they have normal hard drives connected over fiber.

    It seems to be just proof-of-concept, as I expect the IDE (or SCSI?) protocol and existing controller would be a bottleneck to increased performance. He also hasn't mentioned whether anyone has been interested in buying the technology; that is for sure the kind of thing he couldn't tell me.
    • Dude, SCSI already runs over fiber (fibre). The SCSI-3 spec for SCSI over Fibre Channel is called FCP.

      Fibre Channel [fibrechannel.org] runs at 1Gbit serial FDX (for 2Gb throughput), and 2Gbit adapters are available. They are also looking at 10Gbit technology.

      Most really high end storage uses fibre channel.
  • by headwick ( 247433 ) on Wednesday October 17, 2001 @12:19PM (#2442089) Homepage Journal
    Does that mean magic light instead of magic smoke will come out of the board when it gets fried?
    • Does that mean magic light instead of magic smoke will come out of the board when it gets fried?

      So to speak. It will project an image of David Copperfield sitting in front of a picture of the Grand Canyon, claiming to fly.
  • by sharkey ( 16670 ) on Wednesday October 17, 2001 @12:21PM (#2442105)
    The TV-only Limited Offer of Tomorrow:

    "Our New, Improved Motherboards have Fibre Added!! This will loosen your pipes, and help Windows shit itself faster and easier! Be the first on your block to own one!"
  • by bflong ( 107195 ) on Wednesday October 17, 2001 @12:23PM (#2442109)
    Honestly...
    There is no way this is going to be useful in consumer-grade PCs for a long, long time. The only possible use I can see is ultra-high-end servers and graphics boxes that cost >$200K, and that's not for another 5 years. Right now, we have a glut of processing power in our PCs. Dual Athlon 1.5GHz? Are you nutz? I'm still amazed by how fast my 1GHz T-bird is! We need processors and internal components that are more reliable and do more, not just do the same things faster.
    Who the hell needs 10,000fps in Quake, anyway... :)

  • ...from my morning bran muffin. Not to mention cookie crumbs and bits of chocolate bar.

    Fiber on my motherboard? Wouldn't surprise me... just keep the coca-cola off it, okay?
  • by maggard ( 5579 ) <michael@michaelmaggard.com> on Wednesday October 17, 2001 @12:24PM (#2442117) Homepage Journal
    There's been some interesting discussion recently about shifting data transmission off of electrical busses within computers.

    The idea is that subsystems could communicate within a computer chassis entirely by light, across open space or reflected off the interior of the chassis. Instead of the complex process of wiring hundreds of chip leads down into packaging, all of the data would be sent off and onto the chip by tiny lasers and receivers, all built into the chip itself during fabrication. Through a window on the chip case, the CPU could "see" the RAM controller, perhaps even the RAM directly, the graphics controller, the high-speed I/O subsystems, etc.

    Card-edge connectors would still be used for electrical supply and some signaling, but they'd be relegated to slow-speed stuff. This would greatly simplify motherboard design as well as chip packaging. Of course, this would come with its own problems: dust would be a showstopper. Reflections, with their propagation and interference properties, would become issues. The signaling systems might require an uneconomical transistor count on the chips. Overclockers would obsess about albedo and air filters.

    I'm trying to find some good links for this but not finding any - anyone else come across any good discussion on this recently?

  • The problem (Score:4, Interesting)

    by Uttles ( 324447 ) <(moc.liamg) (ta) (selttu)> on Wednesday October 17, 2001 @12:28PM (#2442147) Homepage Journal
    The problem with this is that every single component on the motherboard that uses the bus will need a redesign in order to communicate over a fiber bus. It's something that definitely can and will be done, but it's not going to be "soon." It also won't be cheap. Why do you think they keep making new RAM that's not backwards compatible? Because the old stuff is almost as good and is dirt cheap. When they start making fiber-ready hard drives and such, they are going to charge an arm and a leg. One positive: the normal stuff will then go dirt cheap, but they'll probably stop making it after a few months or so.
  • Is it practical yet? (Score:4, Interesting)

    by Mike McTernan ( 260224 ) on Wednesday October 17, 2001 @12:36PM (#2442190)
    I'm sure I've seen this discussed before and that a number of problems exist with an optical bus in a non-optical system.

    Firstly, the length of the bus on a motherboard is so short that there are few real gains over a copper/gold track, and the gains that are made are outweighed by the encoders/decoders that do the photon/electron conversions.

    Also, it would probably push up the cost of add-in cards, since the row of gold contacts has to be replaced with something far more sophisticated.

    Also, one of the problems with existing bandwidth to memory is not only the speed, but also the bus width. A wider bus gives more bandwidth (assuming that data lines are added, not address lines), but unfortunately it also means more pins on the chip, which costs more.

    In a pure optical system it may be possible to eliminate all these problems, but I'm not convinced from what I have read that it is a solution for today's computers...
    • by addaon ( 41825 )
      Firstly, the length of the bus on a motherboard is so short that there are few real gains over a copper/gold track, and the gains that are made are outweighed by the encoders/decoders that do the photon/electron conversions.

      Close, but not quite. What you're alluding to here is that the latency gains are negligible, or even negative. But there's another factor, which you mention later: bandwidth. A nice fiber-optic line has a lot more bandwidth than some gold or copper. And this really does eliminate "all these problems" (except latency).

      Another poster mentioned Serial ATA. How is it possible, at first glance, that a serial protocol, sending a single bit at a time, is faster than a parallel one, sending bytes at a time? Simple! It sends a bit much more often. And you could do the same thing with fiber optics. If a fiber gives you 10Gb/s of bandwidth, then connecting your memory takes exactly ONE 'pin' if you want a 10Gb/s memory bus (some quick numbers follow below).

      A wider bus gives more bandwidth, yes, and means more pins on the chip, but a much faster medium can outweigh this effect, and in the case of fiber optics, it does.
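      Back-of-envelope arithmetic for the claim above, in Python; the figures are illustrative, not vendor specs:

      ```python
      # Compare a classic wide parallel bus against one serial optical lane.
      parallel_width = 64            # bits (e.g., an SDRAM-style data bus)
      parallel_clock = 133e6         # Hz
      parallel_bw = parallel_width * parallel_clock    # ~8.5e9 bits/s

      fiber_bw = 10e9                # one 10 Gb/s optical lane

      print(f"parallel: {parallel_bw/1e9:.1f} Gb/s over {parallel_width} pins")
      print(f"fiber:    {fiber_bw/1e9:.1f} Gb/s over 1 'pin'")
      ```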
  • I am sooo tired of the bullshit scientists on this site with the crackpot proposals that add a minimum of 3 new problems for every one old problem that their "idea" would fix.

    This is the current proposal for the hardware setup, by a man in the know (not me):

    "Levi has proposed an "encapsulated processor" concept whereby a CMOS device uses fiber-optic ports as the only connection to external chip sets and DRAM. The processor, which itself could contain two CPUs and cache memory in the core, would integrate a crossbar switch that connects the ports to the processors and cache memory.

    The ports, each of which could sustain 40 Gbytes/s of data throughput in each direction, decode and multiplex signals for an optical subassembly containing vertical-cavity surface-emitting lasers (VCSELs), PIN receivers and the fiber interface. There would also be a short, low-power electrical link from the port to the processor, according to Levi's proposal."

    Intellectual response to this idea is what was wanted, not bullshit ideas involving reflecting light off the inside of the case :-P (A sketch of the port-and-crossbar layout follows below.)

    --chris
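    For concreteness, here is a schematic Python sketch of the port-and-crossbar arrangement quoted above. The class and names are invented for illustration; this is not Levi's actual design, just the topology it describes: optical ports on one side, CPUs and cache on the other, with a crossbar free to connect any port to any core.

    ```python
    # Toy crossbar: routes messages from optical ports to on-die resources.
    class Crossbar:
        def __init__(self, ports, cores):
            self.ports, self.cores = set(ports), set(cores)
            self.routes = {}                     # port -> core

        def connect(self, port, core):
            assert port in self.ports and core in self.cores
            self.routes[port] = core             # any-to-any, non-blocking

        def deliver(self, port, payload):
            print(f"{port} --(crossbar)--> {self.routes[port]}: {payload}")

    xbar = Crossbar(ports=["fiber0", "fiber1"], cores=["cpu0", "cpu1", "cache"])
    xbar.connect("fiber0", "cpu1")               # e.g., DRAM traffic to CPU 1
    xbar.connect("fiber1", "cache")
    xbar.deliver("fiber0", "DRAM read, 64 bytes")
    xbar.deliver("fiber1", "cache line fill")
    ```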
  • by Klox ( 29985 ) <matt...w1@@@klox...net> on Wednesday October 17, 2001 @12:53PM (#2442277)
    I just wanted to address two types of comments I've seen posted here:

    * Encoding/decoding is done at the speed of the medium. Encoding and decoding optical signals doesn't have any more overhead than PCI or IDE. The spec writers and endec designers are well aware of these issues. That's why technologies like 10Gb Fibre Channel or Ethernet aren't ready yet: not because we can't transmit at that speed, but because we can't yet build an entire NIC that sustains those speeds. (Give us some time: we'll be there soon enough.)

    * Serial interfaces like Fibre Channel and InfiniBand (and even Gigabit Ethernet) aren't replacing SCSI. They are replacing what you think of as SCSI: the 50- or 68-pin cable in your case. But SCSI is the protocol being used to talk to all those FC and Gig-E storage devices. SCSI over FC is called FCP (see T11's [t11.org] specs for more on FC). For Gig-E, most companies are looking into iSCSI, iFCP, or FCIP (SCSI over IP, or SCSI over FC over IP) for SAN-to-SAN communications. I forget the name of the spec for SCSI over InfiniBand, but it pretty much rips its ideas from the above specs. (Sorry, no links for Gig-E and InfiniBand at the moment: start at T10 [t10.org] or the SCSI Trade Association [scsita.org].)

    BTW, I refer to "serial interfaces" above instead of "optical interfaces" because a lot of this is actually copper. Most likely, Infiniband on the motherboard will be copper and off the motherboard it will be optical. Most of the Fibre Channel equipment I have isn't "fibre" but copper.
  • by MrResistor ( 120588 ) <peterahoff@gmYEATSail.com minus poet> on Wednesday October 17, 2001 @01:04PM (#2442321) Homepage
    I've had serious doubts about the actual advantages of Intel's obsession with putting everything on high-clock serial buses, rather than lower-clock parallel buses that seem to provide the same bandwidth with less heat, interference, and latency.

    However, optical fiber would eliminate interference, which seems to be the main barrier on clock speed. Heat would likely be reduced also, and cranking up the clock speed would likely eliminate the latency issues. Not to mention the cool factor inherent in optical.

    What would be really cool would be to replace FireWire and USB with fiber. There are hybrid fiber-coax systems that could provide whatever power your mouse/keyboard/etc. would need, up to a certain point anyway. It probably wouldn't be enough to power an external drive.

    • Intel's obsession with serial isn't really an obsession with serial. It's an obsession with clock-cycle-centric technologies. Take USB vs. FireWire (both actually serial, but bear with me). FireWire is peer-to-peer: it runs at a given speed regardless of the CPU, and devices can talk to each other directly. USB, by contrast, is master/slave, centered on the CPU. If you want several USB devices all talking to each other, they have to go through the CPU to do it. That means you need a bigger processor. That means you need a higher clock rate. That means you need Intel's higher-priced offerings. That means more money for Intel.

      A serial interface needs to cycle faster than an equivalent parallel interface in order to get the same bandwidth out of it. Therefore, it also requires a meatier CPU. That means more money for Intel. (Quick numbers below.)
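      The parent's point in numbers, as a Python sketch; the figures are illustrative, using classic 32-bit/33 MHz PCI for the parallel side:

      ```python
      # Line rate a serial link needs in order to match a parallel bus.
      width = 32                        # bits
      parallel_clock = 33e6             # Hz (classic PCI)
      serial_rate = width * parallel_clock

      print(f"parallel: {width} bits @ 33 MHz = {width*parallel_clock/1e9:.2f} Gb/s")
      print(f"serial equivalent needs a {serial_rate/1e9:.2f} GHz bit clock")
      ```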
      • An interesting take on it. I hadn't thought about it from that perspective.

        I do have a counterexample (sort of): InfiniBand. At least according to the documentation I've read, it's peer-to-peer. Granted, while it will only be implemented in high-end offerings anytime soon, Intel has claimed to be aiming it at the desktop eventually.

  • by NaturePhotog ( 317732 ) on Wednesday October 17, 2001 @01:23PM (#2442385) Homepage
    I already have fiber on my motherboard. Well, OK, technically it's cat fur sucked in through the vents, but that's got a lot of fiber. And it uses absolutely *no* power. The heat retention is a problem, though.
  • About buses (Score:4, Informative)

    by petis ( 139263 ) on Wednesday October 17, 2001 @01:26PM (#2442394)
    The reason that buses using photons as the data carriers are coming up is quite interesting. The good thing about light (photons) is that photons are 'bosons', which amongst other things means that they do not interact with other photons. Good for transporting data, since noise is not a problem.

    Electrons, on the other hand, are 'fermions', which means that they interact strongly with other electrons. That is good for logic (since the whole point is to interact), but is a problem for transport (crosstalk, etc.).

    From a power-consumption point of view, using currents/voltages in a wire to send a logic one or zero has some really severe problems. The wire itself introduces a resistance, capacitance, and inductance which are non-negligible, at least for long wires (buses) or high frequencies. IIRC, R ~ sqrt(f) for high frequencies, which leads to signal distortion, power loss, and ultimately an upper limit on the data rate. This is probably one of the reasons that research and development is going on in this area.
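    The R ~ sqrt(f) behavior comes from the skin effect: at high frequencies, current crowds into a thin shell of depth delta = sqrt(rho / (pi * f * mu)), so resistance grows as 1/delta ~ sqrt(f). A quick Python check with textbook constants for copper (the frequencies are just illustrative):

    ```python
    import math

    rho = 1.68e-8              # resistivity of copper, ohm*m
    mu = 4 * math.pi * 1e-7    # permeability of free space, H/m

    for f in (1e6, 100e6, 1e9, 10e9):
        delta = math.sqrt(rho / (math.pi * f * mu))
        print(f"{f/1e9:6.3f} GHz: skin depth = {delta*1e6:6.2f} um")
    # Current flows in a shell ~delta thick, so R ~ 1/delta ~ sqrt(f).
    ```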
    • The good thing about light (photons) is that photons are 'bosons', which amongst other things means that they do not interact with other photons. Good for transporting data, since noise is not a problem.

      Electrons, on the other hand, are 'fermions', which means that they interact strongly with other electrons.


      Actually, the fact that the carriers are fermions or bosons doesn't affect interference. Interference occurs because electric currents in nearby wires couple strongly with each other via EM effects, because of all of the free charges moving around.

      Consider a "bus" that involved components poking at each other with sticks. The sticks are composites of many fermions (their subatomic consitutents), but poking one stick doesn't interfere with the status of another stick (assuming vacuum and good shock absorbers).

      Communicating with electric currents, on the other hand, is like trying to poke with sticks in jello. Motion of charges generates EM fields, which moves nearby charges.
    • Electrons, on the other hand, are 'fermions', which means that they interact strongly with other electrons.
      Er, not quite. The real issue is that electrons carry electric charge, which couples them through a strong, long-range force. Even if you used a superconductor (where the electron pairs are bosons), you'd still have to deal with inductive and capacitive effects, as well as dielectric losses (which are non-negligible at >1GHz for common circuit-board materials). Photons are convenient for communications because they are uncharged, not because they are bosons.
    • The reason that buses using photons as the data carriers are coming up is quite interesting. The good thing about light (photons) is that photons are 'bosons', which amongst other things means that they do not interact with other photons. Good for transporting data, since noise is not a problem.
      Photons don't interact? Then what are those interference patterns we all had to study? Or why is a monochromatic laser beam so powerful compared to white light (hint: the same-energy-level photons don't interact with each other)?

      Throw some big words and a lecturing tone at these /.'ers and they'll suck up any BS.

  • by Mordain ( 204988 ) on Wednesday October 17, 2001 @01:48PM (#2442473) Homepage
    The use of fiber on motherboards and similar devices has some huge advantages. First, board density would quadruple. With DWDM, whole buses from chip to chip would be replaced with single fiber lines (some rough numbers follow this comment). This would increase the number of components drastically and also reduce electrical feedback from bus crossovers. Imagine building boards where the only consideration is where to place things aesthetically.

    The downside is, of course, that every chip would have to have a fiber PHY built in, or at least one for every bus. This could be an even worse problem in the long run.
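    Rough DWDM arithmetic for the claim above, in Python; the channel count and per-wavelength rate are assumptions for illustration only:

    ```python
    # One fiber carrying many wavelengths vs. a wide parallel bus.
    channels = 32                  # DWDM wavelengths on one fiber
    rate_per_channel = 2.5e9       # bits/s per wavelength
    aggregate = channels * rate_per_channel

    bus_width, bus_clock = 64, 100e6   # a 64-bit, 100 MHz parallel bus
    print(f"one fiber: {aggregate/1e9:.0f} Gb/s")
    print(f"64-bit buses replaced: {aggregate/(bus_width*bus_clock):.1f}")
    ```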
  • by denzo ( 113290 )
    I just thought of something.

    Perhaps having a fiber-optic bus will allow for a more modular motherboard design, where the CPU socket, memory slots, PCI/AGP slots, etc. are individual components connected to a central northbridge/southbridge via fiber cable?

    Since motherboard manufacturers have to choose a particular memory/CPU/PCI slot design, purchasing a motherboard can be limiting for the consumer (at least the hardware enthusiast). By splitting all the motherboard sub-components up, you'd be able to pair whatever CPU with whatever memory type you want, and have a PCI module that lets you tack on as many PCI/ISA slots as you need. Literally a custom-built motherboard.

    I'm sure this is slightly costlier as an initial sunk cost, but upgrades should be easier. To make your investment go even further, things like the northbridge module should be flashable, so you can update it to support some new processor or memory-module type (buy a software upgrade instead of replacing the central hardware module).

    Okay, so perhaps this is a little far-fetched, and perhaps it has gone off on a very bad tangent from the original intention of fiber-optic motherboards. But I can still dream, can't I? :)
    • Perhaps having a fiber-optic bus will allow for a more modular motherboard design, where the CPU socket, memory slots, PCI/AGP slots, etc.

      Motherboards are going in the other direction. Soon, a motherboard will have two chips, perhaps an AMD CPU and an NVidia NForce for everything else, plus DRAM. With good graphics, good audio, good networking, and a reasonable disk interface in the base chipset, there's no reason to have slots in 90% or more of desktop PCs. The computer is probably going to disappear into the baseplate of the flat screen. The airspace for the seldom-used slots would make the box several times bigger, so slots have got to go.

      I can see the day coming when only rackmount systems will have slots.
