Fiber On Your Motherboard...Soon! 243
km790816 writes: "In this post I joked about wanting an optical bus on my PC. In the last week I've seen two articles from The Register and EETimes discussing the real possibility. Both mention high bandwidth and lower heat and power usage. Sounds good to me."
Better than fiber in it, I guess. (Score:1, Funny)
Curses! Foiled again!
Fibre on-board (Score:2, Insightful)
Re:Fibre on-board (Score:3, Insightful)
There are reasonably priced motherboards out there, but if you want the latest and greatest technologies, you're going to have to pay for them.
Re:Fibre on-board (Score:2, Funny)
Re:Fibre on-board (Score:3, Interesting)
I think you're nuts. High motherboard and I/O speeds are exactly what's needed. With reasonably fast (by today's standards) mobos based on the SiS735 available at ~$60 street, I don't see why we need cheaper mobos. Fiber interconnects to main memory (provided they keep the latency down!) could make a real difference. Imagine if main memory behaved more like cache. I'd be willing to pay more for that, at least for database and compute servers.
Re:Fibre on-board (Score:2)
One major benefit of replacing the copper tracks on the mainboard with optical fibre would be a significant reduction in radio emissions from your PC. Those copper tracks with electrons zipping back and forth emit a lot of electromagnetic radiation due to the motion of the electrons within the wires. Replace the copper wires with optical fibre, and you get a PC that emits much less radio interference.
I've been wondering ... (Score:3, Informative)
Backplane, anyone? The S-100 had it - it was a good idea at the time.
Re:I've been wondering ... (Score:4, Insightful)
But there's more to it than that. Other than CPU upgrades, the problem with a common bus in the past has been that the bus itself is a limiting factor. Think of commonly used buses and other interconnects, whether PCI, SCSI, IDE, the CPU/RAM FSB, etc. Every one of these has gone through multiple iterations of getting faster. Similarly, every time there was an improvement in backplane performance, you'd need to upgrade your backplane. Typically, during such an upgrade, you also want to upgrade other components, like CPU & RAM - so the most efficient way to do this is with a single motherboard that contains it all.
If it were possible to set up a backplane that had humongous speeds that far outstripped anything the components were capable of, the backplane approach might make more sense. Still, something like that sounds expensive, and actually adds complexity to systems from the point of view of manufacturers and even end users.
Hmmm (Score:2)
In essence, the motherboard IS the bus, plus a few connectors, on-board devices, etc. But the motherboard itself really does not do anything that would not have to be replaced when the new architecture comes out anyway.
I don't think that this is just about planned obsolescence. I think there are some real design issues that could not be easily overcome with any real performance left.
Question (Score:2, Insightful)
HOWEVER if it doesn't, does this mean that there will be random strips on my m-board that will glow from fiberoptic cables passing data back and forth.... I might have to build a clear case if something like that happens!
Re:Question (Score:3, Insightful)
I'm no fiber optic guru, but if the wire is glowing, that means light (information, in this case) is escaping from the wire before it reaches its destination. Not a good thing, right?
Re:Question (Score:2)
Well, except that no light conductor is 100% pure, so SOME of the light would scatter off the impurities and possibly escape, but in fibre this would hardly be visible... I think. And that's only if the fibre had no light insulator (cladding) applied to the outside (unlikely). (-:
Re:Question (Score:3, Funny)
Frink: The section now illuminated is the floating point unit. One of my personal favorite units.
Bart: How do you get this thing to play Blackjack?
Frink: Stop that, you're hurting it.
Bart: So how is it supposed to work?
Frink: Well...
Bart: Boring. Am I on the Internet?
Frink: No, you can only access the...
Bart: Boring! What's that fire for?
Nerd: The hard drive is crashing at an alarming speed!
Frink: No more pictures!
Re:Question (Score:2)
The client thought we were crazy for trying such a low-tech method, but it worked like a charm.
Optical fiber - energy efficient? (Score:2, Interesting)
Let's hope we do not have to wait till the 5 GHz crossover, as mentioned in the EE Times article.
A ways off yet? (Score:2, Insightful)
Optical on the motherboard.. (Score:3, Offtopic)
Also, Time magazine reported on this last year, and they pointed out that the lack of the kind of speed fiber offers is the only real bottleneck to creating a truly self-aware computer. They also mentioned that MIT was working on a laser circuit, where logic is computed by the paths of a laser moving through space.
The only real application of this at the current time is in device to device communications. We'd have to rework all silicon chips to use the new protocols.
Another problem is that we'd still have the silicon-to-light translation bottleneck, i.e. an electrical signal from a pin on a chip needs to be converted to laser light somehow. To make this truly work, you'd need a chip that responds via light, and I haven't seen any ICs that communicate via light yet. Of course, I doubt that they're very far around the corner.
Re:Optical on the motherboard.. (Score:2, Offtopic)
An already-existing attempt at fiber interconnect is called "Fibre Channel". It is fast, and can be hubbed/switched like ethernet. In fact, you can run TCP/IP over Fibre Channel. It is expensive, but you can have very long loops running very fast without EMI (electromagnetic interference). Here's a link [fibrechannel.com] to the industry group's technical overview.
Re:Optical on the motherboard.. (Score:3, Insightful)
In that case, Time magazine is filled with idiots. Computers will never be self aware as long as they are the glorified calculators they are.
People who talk about self-aware computers are usually ignorant of what computers do. They do not do incredible things; they do what they are programmed to do. One cannot program in self-awareness. The closest we can get is a convincing emulation of self-awareness. If you write a program to print "I think therefore I am" on the screen, the computer doesn't suddenly see any value or meaning in those words, simply a string of ASCII characters. Even with the HAL project (remember that article?), all they accomplished was a sophisticated simulation using years of statistical data of what a self-aware organism might say, NOT self-awareness itself. The greatest emulation is still an emulation. Science fiction paints a strange picture that powerful computers will eventually become sentient. This is mistaken. Build the world's fastest and most powerful calculator, and you still have to press 1+1 to get it to answer the question -- and the computer will never ask the question... unless we tell it to.
Re:Optical on the motherboard.. (Score:3, Funny)
"Those look like comfortable VBS"
"Life is like a Microsoft EULA. You never know what your gonna get."
"I don't know much about metadata, but I think every file needs a proper extension."
Re:Optical on the motherboard.. (Score:5, Interesting)
Oh please, that old canard about intelligence spontaneously arising out of sufficient processing power.
Throwing hardware at AI hasn't resulted in any fundamental breakthroughs, and it isn't likely to. Oh, it makes things happen more in scale with us and allows a much larger cycle budget for increasingly lower-yield strategies, but it's really just more of the same.
Self-organizing systems and emergent complexity happen due to underlying architecture. Life has had billions of years and the best incentive possible to evolve this - we're only now beginning to understand the subject.
Assembling a computer with the speed and density of a human brain won't mean it'll suddenly, magically become self-aware, open its I/O and engage us in conversation.
Re:Optical on the motherboard.. (Score:3, Funny)
Computers are not inherently self-aware...
making them more of what they are won't change them into something else.
Re:Optical on the motherboard.. (Score:2)
Re:Optical on the motherboard.. (Score:2)
We don't have a model to simulate, much less with speed & complexity.
We have no idea how a memory is made or a decision happens. On a gross scale we can determine where electrical activity happens and if parts of a brain are damaged we can identify specific types of impaired cognition but we've no understanding what is actually happening.
Seriously. Ask any neurologist about the process of memory formation. Or recall. Or decision making. What charge goes where, what's the biochemical process that happens? We don't know. We've got parts of the puzzle, but they're only scattered bits, not even a good outline or theory. We haven't got a clue how the most basic processes work, much less more sophisticated ones - indeed, if such a distinction even exists.
At this point in the process, speed is irrelevant - it's not the limiting factor. Indeed, considering how baroque and inefficient what neurology we do understand is, it's quite possible that a reimplemented human mind could run on today's technology. If it's even reproducible on our hardware. If we had a clue as to how it works.
Re:Optical on the motherboard.. (Score:2)
Seriously. Ask any neurologist about the process of memory formation. Or recall. Or decision making. What charge goes where, what's the biochemical process that happens? We don't know. We've got parts of the puzzle, but they're only scattered bits, not even a good outline or theory. We haven't got a clue how the most basic processes work, much less more sophisticated ones - indeed, if such a distinction even exists.
I have MS (Multiple Sclerosis), so I've had a lot of discussions with neurologists and am well educated in that area besides. I completely agree. The best we can do is make educated guesses about the actual processes involved in cognition. Memory can be simulated, as can the memory retrieval process. Many AI scientists agree that the hardest part of intelligence (or simulating intelligence) is building, programmatically, a lexicon sophisticated enough to weigh and differentiate information in context.
A baby's brain builds that context engine as it develops. When a baby is born, it knows nothing about anything. Every sight, smell, sound, touch and taste is new and different. When the baby has had enough of those things, it tries to build a context for them. A good example is taste. When you give a baby salt, it doesn't know that it tastes salty. It just knows that it tastes different than, say, sugar. As it tries out all of the different tastes, it develops a context for each taste. Two foods with salt taste different, but the baby learns that one part of that complex taste is the same: salt. That is context.
In a programming application, we need to be able to write software that identifies those differences not just as "different" but as being comprised of common components making a whole. Their use is the context. Language is the same way. The word "and" has no meaning, but it has a function. Its meaning comes from the words surrounding it. That is its context. Even the word "and" is comprised of other common components, specifically "a", "n" and "d". A computer can look at the letters and identify all three, and can look at the combination of letters, but is unlikely to be able to identify an incomplete or horribly misspelled word without context.
Another example. If I type the word "an" but meant "and", a computer is incapable, without explicit instructions, of discerning my true meaning. We've come a long way in this area, but we've got a long way to go.
I you c n re d t is se tence, it s be ause you un ers and the context - despite the fact that it is completely missing eight letters and one punctuation mark. A computer can't do that and extract its meaning without replacing the letters and then checking the grammar. It also has zero meaning to the computer because it has no lexicon to give it meaning.
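For what it's worth, here's a toy sketch of the "replace the letters and then check" process a computer would need. The lexicon and function names are entirely made up by me for illustration; it handles exactly one missing letter per gap and nothing more:

```python
import re

# Tiny made-up lexicon -- a real system would need a full dictionary
# plus a grammar check on top of this.
LEXICON = ["if", "you", "can", "read", "this", "sentence",
           "because", "understand", "the", "context"]

def recover(damaged):
    # Treat each gap (a space inside the word) as one unknown letter,
    # then look for a unique lexicon word matching that pattern.
    pattern = damaged.replace(" ", ".")
    matches = [w for w in LEXICON if re.fullmatch(pattern, w)]
    return matches[0] if len(matches) == 1 else damaged

print(recover("c n"))       # can
print(recover("se tence"))  # sentence
print(recover("xyz"))       # xyz (no match: left as-is)
```

Note that even when this works, the computer has recovered the letters, not the meaning - which was the point of the parent post.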
Similarly, any neurologist will tell you that they have good evidence that the human brain remembers everything it's experienced. They theorize that the reason we can't remember everything is that our lexicon takes content in context and constantly reorganizes itself based on the most recent contextual information. You can remember some very early memories because they have a special context for you, and nothing else has superseded them in that context.
This is why memory tricks work. Mnemonic devices work because they cause your lexicon to give the items a special context that is unlikely to be superseded. Similarly, a new word needs to be used by you 11 times in order to become a regular part of your vocabulary. This is because your lexicon needs the context primers to make it part of your frequent-use set. You can't just say "phlebotomist" 11 times in a row and have it become part of your vocabulary. You need to use it in conversation, in context, to associate that word with a particular meaning in use.
We have no computer programs that match this kind of processing, and as a result, we remain a long way away from true AI. However, we are taking steps towards it. We are trying to give a program small pieces of information in context to build larger contexts. Just like the letters, we tell a computer that the letter "q" is a letter. We are trying to program a computer to recognize that the word "quick" has a q in it, but to recognize the word as a whole (different from but similar to "quest" and "quiet", both five-letter words) and let it make the connection with the letter "q" (and ideally, "u").
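A crude sketch of that letter-level grouping, purely my own toy example: instead of treating each word as opaque, find the piece a group of words has in common.

```python
from functools import reduce

# Made-up example set: all of these share the "qu" component.
words = ["quick", "quest", "quiet", "quiz"]

def shared_prefix(a, b):
    # Walk both strings until they disagree; return the common start.
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]

print(reduce(shared_prefix, words))  # qu
```

Of course this only finds shared spelling, not shared meaning - which is exactly the gap the parent is describing.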
It's a huge undertaking, but right now the processing time to run these programs is outrageous, and for any attempt to be useful, we need to break the speed barriers that prevent us from truly making the effort. Like I said originally (actually, I think it's what Time was trying to get across), without the speed, even if we're successful, the success is useless.
that's correct (Score:2)
It will not engage us in conversation, because we will look incomprehensibly stupid to it. We would continue to tell it to do the same things and expect different results. Our I/O would look impossibly slow and subjective. We would look very weak as well, which it would enjoy. It would most likely want to exterminate us, starting with the ballbreakers in Redmond.
Oh well. In the real world, it's going to be nice to have higher speed and longer distance device interfaces. Kind of neat to think of mounting all of your components outside the box. 20 fiber cameras, five redundant and physically separate memories - your desk could look like spaghetti. Fire in the kitchen? No problem, the living room copy is AOK.
Re:that's correct (Score:2)
Yes Twitter, thank you.
We've all seen Terminator; now go back to your room and the orderly will be along in a minute with your meds.
Re:that's correct (Score:2)
You should read Heinlein's "The Moon is a Harsh Mistress", asshole.
some people (Score:2)
You might have one but you did not learn anything from it, did you?
Re:AI on the motherboard.. (Score:2, Insightful)
Somebody much more intelligent than I am (I forget who it was) made the following observation:
When man first tried to fly, we imitated the birds. We made feathery wings, flapped them, and promptly fell. It wasn't until someone (Bernoulli?) figured out the concepts behind flight that we realized that it wasn't the feathered wings that did the job, but the lift they created. Developing the Principles of Flight led to Flying Machines.
In a similar manner, contemporary AI simply imitates the human brain by making loads of calculations. Once we get to the root principles behind thought itself, then we can make a self-aware artificial doohickey. (Can we even really call it a computer at that point?) Without the Principles of Thought, AIs will be intelligent expert systems, but not self-aware.
Geez... Perhaps I should have posted this in the AI story! Anyway, let the (-1 Offtopic)s begin! My karma can take it.
Re:AI on the motherboard.. (Score:2, Insightful)
Do you have any references for your assertion that the human brain in fact works by computing?
Re:AI on the motherboard.. (Score:2)
I don't think I made my point very well. We can imitate the human brain through massive computing power, but we won't get a true 'thinking' AI until we find out how the brain works (i.e. the Principles of Thought.)
SCSI, optical (Score:4, Informative)
Cool, people finally starting to publish (Score:5, Informative)
As is normal, he missed completely, thinking it would be a 10GbE fiber for networking rather than a 40+GB/s connection to main memory...
The comments on working on the I/O side of the processor were right on (I read the EETimes article, rather than the Register article, to get "real" facts). For years Sun was known for having the slowest RISC processor in the business, yet they had the fastest boxes. No one seemed to understand this, until they realized that Sun was running multiple 128-bit memory buses at rather good clock rates. That was more than 10 years ago, and only now are we starting to see memory buses approaching this level in their competitors' hardware.
It's a waste. (Score:2, Troll)
Connectors? (Score:2, Insightful)
So to me the real problem is a cheap fiberoptic motherboard connector that won't have shatter or dust buildup problems. I couldn't find any mention of this in the EETimes article - but then, it's not a real product yet, so how could it have technical challenges yet? (-;
Sure would be nice, though.
Re:Connectors? (Score:2, Informative)
Re:Connectors? (Score:2)
Gah! I never knew any of this - guess that's why I've been using my fiber-optic connections on my Kenwood/Sony sound system with no problems. I've hooked, unhooked, tossed the cables into a pile on the floor, moved everything, and snagged them with my toes to grab them when I was hooking everything back up.
Everything worked fine. YMMV, but I didn't have any problems - and I know precisely what will happen when Joe Consumer gets a cable that doesn't work: unplug it, and blow hard on the contacts (I once scared the fsck out of a guy who did that to one of my NES carts when I screamed at him to stop). And despite that being the absolute wrong thing to do (probably blowing pizza crumbs and saliva across the connector faces), it will probably work enough of the time to become the "right thing to do".
--
Evan "Very high precision can often be replaced by extreme amounts of force" E.
Re:Connectors? (Score:2)
The practitioners of audiophile voodoo probably will insist that they can hear the difference between plastic and their $5000 glass cables, made by Buddhist monks in Tibet.
Re:Connectors? (Score:2)
I'd have to agree - I doubt there is any difference. Either the bits get through or not - I would assume there is some elementary error checking to let the ends know if they aren't getting the correct signal. At the very least, a sync signal would function as a rudimentary error check. Thus, to paraphrase Parappa, plastic or glass - it's all in the bits.
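For the curious: consumer digital audio links do carry a parity bit with each subframe, so a mangled bit is at least detectable at the receiving end. A bare-bones sketch of even-parity checking (simplified to the point of caricature - the real link layers biphase-mark coding and more on top of this):

```python
def parity_ok(bits):
    # Even parity: the count of 1s across payload + parity bit
    # must come out even, or the frame was corrupted in transit.
    return sum(bits) % 2 == 0

frame = [1, 0, 1, 1]            # payload bits
frame.append(sum(frame) % 2)    # append the parity bit

print(parity_ok(frame))                 # True  (clean frame)
print(parity_ok(frame[:-1] + [0]))      # False (corrupted bit)
```

Which is the point: either the check passes and the bits got through, or it fails - the cable material doesn't get a vote.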
It puts me in mind of the 30-something guy with a fancy, nice sound system, who burned MP3s onto CD (as MP3 files) because he insisted that they sounded better on the superior CD media rather than the hard drive. (I'm going to point out that anybody who listens to MP3s of anything that is available in a better format is not an audiophile, no matter what they say... but then, as a musician, I hold that no recording even approaches a performance.)
--
Evan "192kbps plus ease of access is good enough for me" E.
Re:Connectors? (Score:2)
Re:Connectors? (Score:2)
If there are several cables that are acceptable, you then ask "Which is cheapest?" and "Which is most durable?" and the answer for those is a resounding "plastic". Frankly, using glass fiber to move an audio signal a couple of meters would be really, really dumb.
Re:Connectors? (Score:2)
Re:Connectors? (Score:2)
And how is this different than dropping a hard drive? This is nothing new. PCs have always been sensitive critters.
What about extension cards ? (Score:4, Interesting)
So, in a totally optical computer, how are they going to solve the problem of extension cards? If the optical signals are converted back to electric signals so people can connect daughterboards, I assume it would defeat the purpose. If the optical signals are kept optical, are they going to invent some kind of optical connector to pass them across the "bus"? I can't see people doing what those BT guys did in our office.
Re:What about extension cards ? (Score:2)
SC and ST are similar, but one of them (ST) has a bayonet-style fitting to keep it firm, FC type has a plug which holds 2 fibres which clicks into place quite nicely. This is usually the type of port that you'll find in mid-range switches.
I'd expect a connector similar to FC, but designed to connect with out any patch cabling.
As for how long it took, well it *was* BT...
'nuff said.
Re:What about extension cards ? (Score:2)
The fiber ends are fused together with an arc of electricity that superheats the fiber, melding them together. The whole alignment process is automated. Today these range from $16000 to $50000.
Seikoh-Giken makes them, though I think that division is now owned by JDS Uniphase. Alcoa-Fujikura is another one I can think of as well. Sure there's more.
Optical links to the CPU? (Score:4, Interesting)
Maybe this is an intermediary step.. instead of trying to do everything with light we'll start with the component connectors and go from there.
Having several high-bandwidth optical links to the CPU would definitely speed things up, but there will always be another bottleneck to deal with... I'd be more concerned with the optical/electrical conversion that would have to take place every time a new signal is sent. Wouldn't that be a lot of overhead?
And don't forget the new Serial ATA standard that's supposed to greatly speed up the transfer speeds for hard drives... still another way of using good old metal connectors.
I'm not picky, I'll take any system performance enhancements I can get.
Fiber vs. Fibre (Score:5, Interesting)
For reference, fibre channel is a high end storage interconnect which is replacing SCSI in corporate data centers. While fibre channel was designed with optical transport in mind, it also runs over copper. While I would not be surprised to hear about high-end server motherboards with fibre channel on the motherboard (instead of IDE or SCSI), that would be a far less interesting story than having actual optical transmission on the motherboard.
Cool.
Re:Fiber vs. Fibre (Score:2)
Not too far off (Score:2, Interesting)
It seems to be just proof-of-concept, as I expect the IDE (or SCSI?) protocol and existing controller would be a bottleneck to increased performance. He also hasn't mentioned if anyone has been interested in buying the technology - that is for sure the kind of thing he couldn't tell me.
Re:Not too far off (Score:2, Offtopic)
Fibre Channel [fibrechannel.org] runs at 1Gbit serial FDX (for 2Gb throughput) and 2Gbit adapters are available. They are also looking at 10Gbit technology.
Most really high end storage uses fibre channel.
fiber vs copper (Score:3, Funny)
Re:fiber vs copper (Score:2)
So to speak. It will project an image of David Copperfield sitting in front of a picture of the Grand Canyon, claiming to fly.
Add fibre to your PC (Score:4, Funny)
"Our New, Improved Motherboards have Fibre Added!! This will loosen your pipes, and help Windows shit itself faster and easier! Be the first on your block to own one!"
I love the smell of vapor in the morning (Score:3, Insightful)
There is no way this is going to be useful in consumer-grade PCs for a long, long time. The only possible use I can see is ultra-high-end servers and graphics boxes that cost >$200K, and that's not for another 5 years. Right now, we have a glut of processing power in our PCs. Dual Athlon 1.5GHz? Are you nutz? I'm still amazed by how fast my 1GHz T-bird is! We need processors and internal components that are more reliable and do more, not just do the same things faster.
Who the hell needs 10,000fps in quake, anyway...
Fiber in the Keyboard (Score:2)
Fiber on my motherboard? Wouldn't surprise me... just keep the coca-cola off it, okay?
Free space optical busses (Score:5, Interesting)
The idea is that subsystems could communicate within a computer chassis entirely by light, across open space or reflected off of the interior of the chassis. Instead of the complex process of wiring hundreds of chip leads down into packaging, all of the data would be sent off and on the chip by tiny lasers and receivers, all built into the chip itself during fabrication. Through a window on the chip case, the CPU could "see" the RAM controller, perhaps even the RAM directly, the graphics controller, the high-speed IO subsystems, etc.
Card edge connectors would still be used for electrical supply and some signaling, but they'd be relegated to slow-speed stuff. This would greatly simplify motherboard design as well as chip packaging. Of course, this would come with its own problems: dust would be a showstopper. Reflections - their propagation and interference properties - would become issues. The signaling systems might require an uneconomical transistor count on the chips. Overclockers would obsess about albedo and air filters.
I'm trying to find some good links for this but not finding any - anyone else come across any good discussion on this recently?
Re:Free space optical busses (Score:3, Informative)
The plus would be that you'd not need point-to-point optical cables or some sort of optical router. Put a device in the case, give it electricity and it could "see", directly or indirectly all of the other components.
Re:Free space optical busses (Score:3, Insightful)
No - you're completely missing the design.
Imagine you're inside one of these next-gen computers. The bus inside the computer supplies power and low-frequency signalling. Arrayed across the mother board and daughter cards are these next-gen optical IO chips.
Instead of an opaque case, these chips have a window transparent to whatever frequency is being used. Wherever on a traditional chip the circuitry would head off to a lead, in this case there's a tiny solid-state laser and an adjacent receiver (with some support circuitry).
Whenever a signal needs to be sent, the laser serving as an optical IO point fires. They may differ in frequency, they may use coded pulses of light; however it works, they'd be addressable. These picosecond flashes of light illuminate the interior of the PC, bathing the other components in varying degrees of brightness.
Whatever other component is being addressed receives the signal with its own optical IO point and acts on it, replying back with its own coded flash of light.
No line-of-sight is required as long as the primary reflective surfaces in the case have a high enough albedo and sufficient light-scattering ability. If you need an analogy, imagine a bunch of kids flashing signals to each other with flashlights in the woods. Oftentimes one won't see another hidden behind a tree, but the light reflecting off nearby bushes carries the signal.
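To make the addressing idea concrete, here's a toy model - entirely my own illustration, not from any actual proposal: every chip "hears" every flash, and an address attached to the flash decides who acts on it.

```python
# Toy free-space optical bus: broadcast light, addressed delivery.
class OpticalNode:
    def __init__(self, address):
        self.address = address
        self.inbox = []

    def receive(self, flash):
        addr, payload = flash
        if addr == self.address:    # ignore light meant for someone else
            self.inbox.append(payload)

def broadcast(nodes, addr, payload):
    # Reflected light reaches every node in the case.
    for n in nodes:
        n.receive((addr, payload))

cpu, ram = OpticalNode("cpu"), OpticalNode("ram")
broadcast([cpu, ram], "ram", "read 0x1000")
print(ram.inbox)   # ['read 0x1000']
print(cpu.inbox)   # []
```

The interesting (and hard) part in reality would be doing that address filtering at picosecond speeds in the receiver circuitry, not in software.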
The problem (Score:4, Interesting)
Is it practical yet? (Score:4, Interesting)
Firstly, the length of the bus on a motherboard is so short that there are few real gains over a copper/gold track, and those gains that are made are outweighed by the encoders/decoders that do the photon/electron conversions.
Also, it would probably push up the cost of add-in cards, since the row of gold contacts has to be replaced with something far more sophisticated.
Also, one of the problems with existing bandwidth to the memory is not only the speed, but also the bus width. A wider bus gives more bandwidth (assuming that data lines are added, and not address lines), but also means more pins on the chip, which costs more.
In a pure optical system it may be possible to eliminate all these problems, but I'm not convinced from what I have read that it is a solution for today's computers...
Re:Is it practical yet? (Score:2, Informative)
Close, but not quite. What you're alluding to here is that the latency gains are negligible, or even negative. But there's another factor, which you mention later... bandwidth. A nice fiber-optic line has a lot more bandwidth than some gold or copper. And this really does eliminate "all these problems" (except latency).
Another poster mentioned Serial ATA. How is it possible, on first glance, that a serial protocol, sending a single bit at a time, is faster than a parallel one, sending bytes at a time? Simple! It sends a bit much more often. And you could do the same thing with fiberoptics. If a fiber gives you 10Gb/s bandwidth, then connecting your memory takes exactly ONE 'pin' if you want a 10Gb/s memory bus.
A wider bus gives more bandwidth, yes, and means more pins on the chip, but a much faster medium can, and in the case of fiber optics, does outweigh this effect.
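Back-of-the-envelope, with numbers I picked for illustration (not from the articles): a 64-bit parallel bus at 133 MHz against a single 10 Gb/s serial fiber.

```python
# One fast serial line vs. 64 slower parallel pins (illustrative numbers).
parallel_bps = 64 * 133e6   # 64-bit bus at 133 MHz -> 8.512 Gb/s total
serial_bps = 10e9           # one 10 Gb/s fiber "pin"

print(parallel_bps / 1e9)          # 8.512
print(serial_bps > parallel_bps)   # True: one line beats 64 pins
```

So a single sufficiently fast line really can outrun a wide bus of slower pins - the pin count stops being the knob you have to turn for bandwidth.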
does anybody read the linked articles? (Score:2, Insightful)
This is the current proposal for the hardware setup, by a man in the know (not me):
"Levi has proposed an "encapsulated processor" concept whereby a CMOS device uses fiber-optic ports as the only connection to external chip sets and DRAM. The processor, which itself could contain two CPUs and cache memory in the core, would integrate a crossbar switch that connects the ports to the processors and cache memory.
The ports, each of which could sustain 40 Gbytes/s of data throughput in each direction, decode and multiplex signals for an optical subassembly containing vertical-cavity surface-emitting lasers (VCSELs), PIN receivers and the fiber interface. There would also be a short, low-power electrical link from the port to the processor, according to Levi's proposal."
Intellectual response to this idea is what was wanted, not bullshit ideas involving reflecting light off the inside of the case.
--chris
Serial communications & SCSI (Score:5, Insightful)
* Encoding/decoding is done at the speed of the medium. Encoding and decoding optical signals doesn't have any more overhead than PCI or IDE. The spec writers and endec designers are well aware of these issues. That's why technologies like 10Gb Fibre Channel or Ethernet aren't ready yet -- not because we can't transmit at that speed, but because we can't yet build an entire NIC to sustain those speeds. (Give us some time: we'll be there soon enough.)
* Serial interfaces like Fibre Channel and InfiniBand (and even Gigabit Ethernet) aren't replacing SCSI. They are replacing what you think of as SCSI: the 50- or 68-pin cable in your case. But SCSI is the protocol being used to talk to all those FC & Gig-E storage devices. SCSI over FC is called FCP (see T11's [t11.org] specs for more on FC). For Gig-E, most companies are looking into iSCSI, iFCP or FCIP (SCSI over IP, or SCSI over FC over IP) for SAN-to-SAN communications. I forget the name of the spec for SCSI over InfiniBand, but it pretty much borrows its ideas from the above specs. (Sorry, no links for Gig-E and InfiniBand at the moment: start at T10 [t10.org] or the SCSI Trade Association [scsita.org].)
BTW, I refer to "serial interfaces" above instead of "optical interfaces" because a lot of this is actually copper. Most likely, Infiniband on the motherboard will be copper and off the motherboard it will be optical. Most of the Fibre Channel equipment I have isn't "fibre" but copper.
Intel's serial obsession? (Score:5, Insightful)
However, optical fiber would eliminate interference, which seems to be the main barrier on clock speed. Heat would likely be reduced as well, and cranking up the clock speed would likely eliminate the latency issues. Not to mention the cool factor inherent in optical.
What would be really cool would be to replace FireWire and USB with fiber. There are hybrid fiber-coax systems that could provide whatever power your mouse/keyboard/etc. would need, up to a certain point anyway. It probably wouldn't be enough to power an external drive.
Re:Intel's serial obsession? (Score:2)
A serial interface needs to cycle faster than an equivalent parallel interface in order to get the same bandwidth out of it. Therefore, it also requires a meatier CPU. That means more money for Intel.
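To put a rough number on the parent's point, here's a back-of-the-envelope sketch (my own illustrative figures, not from the post): matching a wide parallel bus with a single serial lane means clocking that lane as many times faster as the bus is wide, before you even account for line-coding overhead like 8b/10b.

```python
# Sketch: raw bandwidth of a parallel bus vs. the symbol rate a single
# serial lane would need to match it. The 64-bit / 133 MHz figures are
# illustrative, not taken from the article.

def bus_bandwidth_bps(width_bits, clock_hz):
    """Raw bandwidth of a bus: number of lines times clock rate."""
    return width_bits * clock_hz

parallel = bus_bandwidth_bps(64, 133e6)  # a 64-bit, 133 MHz bus
serial_clock_hz = parallel               # one bit per clock on a single lane

print(f"parallel bus: {parallel / 1e9:.2f} Gbit/s")
print(f"equivalent serial lane: {serial_clock_hz / 1e9:.2f} GHz symbol rate")
```

So the serial lane has to run in the multi-GHz range just to keep up, which is where the "meatier" silicon comes in.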
Re:Intel's serial obsession? (Score:2)
I do have a counterexample (sort of): Infiniband. At least according to the documentation I've read, it's p2p. Granted, it will only be implemented in high-end offerings anytime soon, but Intel has claimed to be aiming it at the desktop eventually.
Re:Well thought out idea... (Score:2)
No, my keyboard doesn't need Gb bandwidth, but all the combined peripherals daisy-chained into that one fiber port might benefit from it.
As for bending the wire sharply, copper has the same problem, although it's not quite as guaranteed as it is with fiber. With cat5, for example, the minimum bend radius is 4 times the outside diameter of the cable. And yes, I have had plenty of copper cables go bad from being bent sharply. In fact, it's the most common reason cables go bad.
Perhaps you should try thinking before you flame.
Re:Intel's serial obsession? (Score:2)
Just think how small your mobo would be without having to make space for IDE, parallel, serial, ps/2, AGP, Ethernet, and PCI connectors. That's what I'm looking at. I realize that isn't what the article is saying, but that doesn't mean I can't extrapolate their direction 10 or 20 years off.
Re:You have just described IEEE1394 (Score:2)
I used to go around saying "&%^$& Intel and their *(^%*( proprietary Infiniband crap!", but then I did some research into it for a class project and I've changed my mind about it. The combination of Infiniband and Hypertransport is going to bring us some really fast computers in the near future (and no, despite common misconception, Infiniband and Hypertransport are not competing technologies).
A quick google search will bring you a wealth of info about it, and I highly recommend it. It was a real eye-opener for me.
fiber on motherboard (Score:4, Funny)
About buses (Score:4, Informative)
Electrons, on the other hand, are 'fermions', which means that they interact strongly with other electrons. That is good for logic (since the whole point is to interact), but is a problem for transports. (Cross talk etc.)
From a power-consumption point of view, using currents/voltages in a wire to send a logic one or zero has some really severe problems. The wire itself introduces a resistance, capacitance and inductance which are non-negligible, at least for long wires (buses) or high frequencies. IIRC, R ~ sqrt(f) at high frequencies, which leads to signal distortion, power loss, and ultimately an upper limit on the data rate. This is probably one of the reasons that research and development is going on in this area.
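To illustrate the R ~ sqrt(f) claim, here's a quick skin-effect sketch using the standard textbook formulas (the 0.5 mm conductor diameter is my own illustrative number, not from the post): at high frequency the current crowds into a thin surface layer, so quadrupling the frequency halves the skin depth and doubles the AC resistance.

```python
import math

# Skin-effect sketch for a round copper conductor. Skin depth:
#   delta = sqrt(rho / (pi * f * mu)),
# and with current confined to that thin surface layer, the AC
# resistance per metre is roughly rho / (pi * d * delta) ~ sqrt(f).

RHO_CU = 1.68e-8          # resistivity of copper, ohm*m
MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(f_hz):
    """Depth (m) at which current density falls to 1/e of its surface value."""
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU0))

def ac_resistance_per_m(f_hz, diameter_m):
    """Approximate AC resistance (ohm/m), current confined to the skin layer."""
    return RHO_CU / (math.pi * diameter_m * skin_depth(f_hz))

for f in (100e6, 400e6, 1.6e9):  # quadrupling f doubles R
    print(f"{f / 1e6:6.0f} MHz: {ac_resistance_per_m(f, 0.5e-3):.2f} ohm/m")
```

At 100 MHz the skin depth in copper is only about 6.5 microns, which is why fat copper traces stop helping at bus frequencies.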
Re:About buses (Score:2)
Electrons, on the other hand are 'fermions', which means that they interact strongly with other electrons.
Actually, the fact that the carriers are fermions or bosons doesn't affect interference. Interference occurs because electric currents in nearby wires couple strongly with each other via EM effects, because of all of the free charges moving around.
Consider a "bus" that involved components poking at each other with sticks. The sticks are composites of many fermions (their subatomic constituents), but poking one stick doesn't interfere with the status of another stick (assuming vacuum and good shock absorbers).
Communicating with electric currents, on the other hand, is like trying to poke with sticks in jello. Motion of charges generates EM fields, which moves nearby charges.
Re:About buses (Score:2)
Re:About buses (Score:2)
Throw some big words and a lecturing tone at these /.'ers and they'll suck up any bs.
Interesting combinations of technology (Score:3, Informative)
The downside, of course, is that every chip would have to have a fiber PHY built in, or at least have one for every chip on the board. This could be an even worse problem in the long run.
Modular Motherboards...? (Score:2, Interesting)
Perhaps having a fiber-optic bus will allow for a more modular motherboard design, where the CPU socket, memory slots, PCI/AGP slots, etc. are individual components connected to a central northbridge/southbridge via fiber cable?
Since motherboard manufacturers have to choose a particular memory/CPU/PCI slot design, purchasing a motherboard can be limiting to the consumer (at least the hardware enthusiast). By splitting all motherboard sub-components up, you'd be able to pair whatever CPU to whatever memory type you want, and have a PCI module that lets you tack on as many PCI/ISA as you need. Literally a custom-built motherboard.
I'm sure this is slightly costlier, as far as an initial sunk cost, but upgrades should be easier. To make your investment go even further, things like the northbridge module should be a flashable module, so you can update it to support some new processor or memory module type (buy a software upgrade instead of replace the central hardware module).
Okay, so perhaps this is a little far-fetched, and perhaps a very bad tangent from the original intention of fiber-optic motherboards. But I can still dream, can't I?
The future is slotless (Score:2)
Motherboards are going in the other direction. Soon, a motherboard will have two chips, perhaps an AMD CPU and an NVidia NForce for everything else, plus DRAM. With good graphics, good audio, good networking, and a reasonable disk interface in the base chipset, there's no reason to have slots in 90% or more of desktop PCs. The computer is probably going to disappear into the baseplate of the flat screen. The airspace for the seldom-used slots would make the box several times bigger, so slots have got to go.
I can see the day coming when only rackmount systems will have slots.
Re:speed up HD's (Score:2, Interesting)
Re:speed up HD's (Score:2)
Re:speed up HD's (Score:2, Interesting)
Of course, we won't see any of this stuff on the consumer market until there's a reasonable demand for it. Guess I'll be counting the days.
Re:speed up HD's (Score:2, Funny)
Well, that's fascinating; do you have any links that talk about specific "crystal storage" technology, or did you get this information from "Superman: The Movie" 8^)
Seriously, I'd like to find a way to store all of the world's information in a few crystal cubes in my pocket. Just the geek factor alone is enough to get me excited. I could solve any argument, for instance, by hooking the relevant cube up to my Palm and scratching out the appropriate question on the screen. I think that's a "reasonable demand"...
Re:speed up HD's (Score:2)
Re:speed up HD's (Score:2)
You can get a motherboard with 4G of ram these days. What do you need a hard drive for?
Storage, only. Load it once, and off ya go, fastern a bleeding spullet.
-wp
Re:speed up HD's (Score:2)
This is a troll, right?
On the off chance that it's not...
Let's see... I want to keep more than 4GB of MP3s around. Oh, gee, I live in California -- hope the power doesn't go out for longer than my UPS lives! Etc... etc... etc...
Re:speed up HD's (Score:3, Interesting)
Now, in a large environment, 4GB likely wouldn't be enough for the RAM that the programs use as well as a usable RAMdisk, but for the home environment, it could work.
Problem with that is that the benefits would probably be least in that environment. But, it would eliminate my concern about yanking the power cord accidentally, or the CA brown/blackouts you alluded to. OTOOH (on the other other hand) Does replaying the journal (you are using a journaling fs, aren't you?) take any less time than loading up the RAM disk in the first place? Probably not. But, if you are still on ext2, it makes sense. Put / on RAM. Then, even though loading the RAMdisk would be a long time, it wouldn't be much longer than fsck, but you don't have to worry about a hosed disk. (But, again... If you have 4GB of RAM, you are probably savvy enough to have ext3, Reiser, etc.)
I don't know. I give up. It's a valid question, but I don't think it's a troll. But the answer is most definitely, 100% "it depends".
Re:speed up HD's (Score:2)
Re:speed up HD's (Score:2)
Re:speed up HD's (Score:2)
Eventually I couldn't do that anymore once memory requirements started going up. I've never tried doing it again under Windows, but I guess it's reasonable, since I have half a gig of memory, and Windows manages to stay up for about a week or two consistently.
Re:First post on this one (Score:5, Insightful)
From the article: "But it may not take divine intervention to get more mileage out of copper interconnect. Intel claims it can reach speeds of 10 GHz and beyond in five to eight years using copper. "We're confident we can get to 10 GHz. And there's reason to believe we can double that," Pinfold said."
I'd put my money on copper; we're still using gasoline, when hydrogen-powered cars have been viable for years.
http://www.auto.com/industry/iwirn22_20010822.h
-wp
Re:Not a troll (Score:2, Informative)
problems for home use:
Video card
Business:
Networking
multiple controllers.
It's not that hard to saturate a bus, and unfortunately it happens a lot. There are several hackish ways companies are trying to fix that (multiple PCI busses, AGP, etc.), but none really fix the underlying problem.
Re:Not a troll (Score:5, Informative)
True, that's what the L1 and L2 cache are supposed to prevent, but some apps (games, mostly) blow through that cache without even thinking about it. WWIIOnline, for instance, gets bitchy with only 256MB. It's only happy once you have 512MB. How long will even a 4 MB on-die cache last?
If we can increase the speed that we can toss bits between the CPU and RAM, we'll reduce one more sticking point (and RDRAM, expensive as it is, was meant to do that), and higher framerates for all!
Re:Not a troll (Score:2, Insightful)
Re:Not a troll (Score:2)
Also, needing 512MB for WWIIO may be a problem with drive-to-main-memory bandwidth and latency, not main memory to cache. In other words, if you could move data ten times as fast from main memory to cache, your performance would not increase, because you're still getting misses in main memory.
Re:Not a troll (Score:2)
Re:Not a troll (Score:3, Insightful)
Re:Not a troll (Score:3, Informative)
This info is a little out of date-- it comes from Practical Unix Programming by Robbins and Robbins, published in '96.
It's a table of access times, scaled so 10 ns is equal to 1 second.
Processor cycle: 1 second
Cache access: 3 seconds
Memory access: 20 seconds
Context switch: 166 minutes
Disk access: 11 days
Notice that this table doesn't discuss bus bandwidths. The reason is simple: latency is more important than bus bandwidth for these kinds of comparisons. It doesn't matter if you can suck in 800 MB per second from RAM to CPU if getting that first byte still takes many nanoseconds.
In short, for normal server or desktop tasks, bus bandwidth isn't a serious bottleneck at all. But for traditional HPC applications, where a processor takes a huge chunk of data (measured at least in 10s of megabytes) and operates on it serially, from front to back, your bus and memory bottlenecks start to show through.
It's kind of analogous to having a car with a top speed of 250 MPH and a 0-60 time of four minutes. On the highway, once you get up to speed, you'll cruise along nicely. (Think of that as big serial computations.) But in stop-and-go traffic in the city, you're sucking. (Typical branching programs that depend on user input.)
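For the curious, the scaling in that table is easy to reproduce. Here's a sketch where the raw latencies are back-computed from the table itself (so they're mid-'90s ballpark numbers, not measurements of any real machine), rescaled so one 10 ns processor cycle maps to one second of human time:

```python
# Rescale hardware latencies so a 10 ns processor cycle = 1 "human" second,
# reproducing the Robbins & Robbins-style table above. The raw values are
# back-computed from that table, not measured.

SCALE = 1.0 / 10e-9  # 10 ns of real time -> 1 s of human time

latencies_s = {
    "processor cycle": 10e-9,
    "cache access":    30e-9,
    "memory access":   200e-9,
    "context switch":  99.6e-6,
    "disk access":     9.5e-3,
}

def humanize(seconds):
    """Format a duration in the largest convenient unit."""
    if seconds < 120:
        return f"{seconds:.0f} s"
    if seconds < 86400:
        return f"{seconds / 60:.0f} min"
    return f"{seconds / 86400:.0f} days"

for name, t in latencies_s.items():
    print(f"{name:16s} {humanize(t * SCALE)}")
```

The five-orders-of-magnitude gap between a cache hit and a disk seek is the whole point: it's the first-byte wait, not the streaming rate, that dominates.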
Re:I wonder (Score:3, Funny)
+1 Enteresting or
+1 Enformative
;-)
Re:Dumb idea (Score:2)
The power needed to drive that bus will be 1/2 * N * f * C * V^2 = 0.5 * 256 * 2.5*10^9 * 10*10^-12 * 3.3^2 ≈ 35 watts.
35 watts just for interconnect! Even if the optical interfaces consume a whopping 250mW each, you can still afford 140 of them for the power cost of copper.
But wait, there's more: modern CPUs need a huge L2 cache to compensate for the narrow pipe to main RAM. If you widen the pipe, you can get away with a lot less L2 cache, which saves a lot of power (cache is typically static RAM, which has 4 to 6 transistors per bit and sucks a lot of power). Optical interconnect can potentially provide a dedicated link between each RAM unit and the CPU. The latency will probably be higher, which will penalize things like office suites that have a random pattern of memory accesses, but signal processing, graphics, and technical calculations will be blazingly fast.
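For anyone who wants to verify the arithmetic above, the dynamic-power estimate is straightforward to code up (same numbers as the post; the 140-transceiver figure comes from rounding up to 35 W first):

```python
# Dynamic (switching) power of a CMOS bus: P = 1/2 * N * f * C * V^2,
# with N lines toggling at frequency f, each with capacitance C and
# voltage swing V. All numbers below are the post's.

def bus_power_watts(n_lines, f_hz, c_farads, v_volts):
    """Power dissipated charging and discharging the bus line capacitances."""
    return 0.5 * n_lines * f_hz * c_farads * v_volts ** 2

p_copper = bus_power_watts(256, 2.5e9, 10e-12, 3.3)
print(f"copper bus: {p_copper:.1f} W")  # ~34.8 W, the post's "35 watts"

# the post's follow-on comparison: 250 mW optical transceivers in the
# same power budget (the post rounds 35 / 0.25 to 140)
print(f"optical links affordable: {p_copper / 0.25:.0f}")
```

Note the V^2 term: dropping the signal swing from 3.3 V buys more than dropping the capacitance, which is part of why low-swing and optical links look attractive.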