Hardware

Multiterabit Switching, No Moving Parts

npongratz writes "Hailing from the world of physics, chemistry, and assorted gee-whiz, Lynx Photonic Networks announced a photonic switch with less than 5ns packet switching. '...multiterabit switching systems...' That's what I call bitchin' switchin'." And unlike certain optical switches discussed here before based on bubbles moving in liquid, this variety "does not have any moving parts, nor does it require a change in the physical state of the light signal." 5 nanoseconds.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Outperforms mirrors and bubbles - Pah!

    The whole point of using mirrors and bubbles is that you can switch hundreds of wavelengths simultaneously. Of course you're not going to read the packet headers on every one of these wavelengths - the switching is for the backbone, and doesn't occur that often. Even optical processing would only spare you from having to have hundreds of processors, one for each wavelength.

    Think of these switches as jumbo jets. Only your rough destination matters, and you get on the right plane to begin with. It's only when you get off the plane that you can start making routing decisions.
  • MEMS uses tiny adjustable mirrors to switch lightwaves

    But isn't an adjustable mirror a movable part? Unless of course adjustable means they can change how reflective it is on the fly.

    -paul
    ---------------------------------------
    The art of flying is throwing yourself at the ground...

  • A few things to think about:

    - you need routers at the edge of any optical network to talk to the copper world, and currently the bottleneck is building terabit/petabit routers that will go fast enough (as well as doing added value stuff on the edge such as QoS and VPN processing). Cisco, Juniper, Avici and co should carry on doing well.

    - over time, optical switches are likely to become MPLS-enabled, meaning that they have an IP+MPLS (Multiprotocol Label Switching) control plane, and *appear* to be routers even though MPLS is laying down all-optical paths through all these switches. There is a lot of work going on in this area, but ultimately it means that ATM switches, Frame Relay switches, SONET cross-connect switches, and DWDM lambda switches will all look like IP routers. This is a Good Thing, since otherwise the edge IP routers would have to peer directly with all the other edge routers, generating too much routing traffic and consuming too much CPU in each router to be viable (the 'large, flat network' problem).

    - Optical Burst Switching is a way of building all-optical switches (at least in the data path) that can at least switch packet trains (bursts). What happens is that an edge device signals ahead of the actual packet burst, on a separate control channel - the delay between the control packet and the data burst gives the electronic control-path part of the optical switch enough time to do the switching (5 ns is a long time at optical rates). This is an alternative to using MPLS or similar to lay down optical paths that are more or less fixed - of course, reliable and fast setup of switching is important to this working well.
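The timing argument in the burst-switching point above can be sanity-checked with a quick sketch (the 10 Gbit/s line rate is an illustrative assumption, not a figure from the article):

```python
# How many bits pass a point on the line while the fabric reconfigures?
# The line rate is an assumed example; the 5 ns figure is from the article.

LINE_RATE_BPS = 10e9     # assumed 10 Gbit/s data channel
SWITCH_TIME_S = 5e-9     # 5 ns switch reconfiguration time

def bits_in_flight(interval_s, rate_bps=LINE_RATE_BPS):
    """Bits that pass a point on the line during interval_s seconds."""
    return interval_s * rate_bps

# Only 50 bits pass during the 5 ns reconfiguration at 10 Gbit/s, so the
# control packet needs just a small head start over the data burst.
print(bits_in_flight(SWITCH_TIME_S))
```

This is why the comment can say "5 ns is a long time at optical rates": the head-start the control channel must give the data burst is tiny.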
  • The MPLS-enabled switches (ATM, Optical, etc) will look like IPv4 and/or IPv6 routers - turning such a switch from IPv4 into dual-stack or IPv6 is just a matter of changing the control plane (routing) software on the switch, and doesn't have any impact on the data (forwarding) plane.

    This is one reason why MPLS may be a good way of deploying IPv6 in large core networks - it removes the need to upgrade silicon to forward IPv6 packets, as long as the core switches/routers are MPLS capable. Only the edge MPLS routers (label edge routers in the jargon) need to actually forward IPv6 packets; all the core LSRs just do label-swapping or optical switching.
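As a rough sketch of the label-swapping described above (table entries and names are invented for illustration): a core LSR looks up only the incoming port and label and rewrites the label, never parsing the IPv4/IPv6 header, which is why the forwarding plane needs no upgrade.

```python
# Toy label-forwarding table (LFIB); entries are made up for illustration.
LFIB = {
    # (in_port, in_label): (out_port, out_label)
    (0, 17): (2, 42),
    (1, 99): (3, 7),
}

def label_swap(in_port, in_label, payload):
    """Core-LSR forwarding: swap the label, leave the payload untouched."""
    out_port, out_label = LFIB[(in_port, in_label)]
    return out_port, out_label, payload

# The payload could be IPv4 or IPv6 -- the core LSR never looks inside it.
print(label_swap(0, 17, b"ipv6-packet"))
```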
  • There are lots of other sources of WWW latency other than packet switching latency. Given that most routers act at wire speed already, much of the delay on an HTTP request is already just round-trip-time, software processing, and queuing. It's unlikely that 25% of this latency occurs waiting for a switch to do its thing.

    The last factor, queuing, is very important to switches, and is simply not addressed by this new technology. If you have two packets coming in that need to go out the same interface, one of them needs to be delayed; it's not clear how their switch handles this from the description in the article.

    Still, it's nice to see optical networking technology that acts at a reasonable scale (nanoseconds rather than milliseconds or 100's of microseconds), but I'd like to see more details before I believe it delivers what it promises.
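The output-contention point above can be shown with a minimal sketch (ports and the arrival pattern are made up): when two packets want the same output in the same slot, one must be buffered, and no switching fabric, optical or otherwise, removes that need.

```python
from collections import deque

def switch_slot(arrivals, buffers):
    """One timeslot of an output-queued switch.
    arrivals: list of (in_port, out_port); each output can transmit
    one packet per slot, and the rest are queued."""
    busy = set()
    sent = []
    for in_port, out_port in arrivals:
        if out_port not in busy:
            busy.add(out_port)
            sent.append((in_port, out_port))
        else:
            buffers[out_port].append(in_port)  # contention: delay this one
    return sent

buffers = {0: deque(), 1: deque()}
sent = switch_slot([(0, 1), (1, 1)], buffers)  # both packets want output 1
print(len(sent), len(buffers[1]))
```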
  • Or put it in perspective for people that just use computers: 5ns is the time between clock cycles on a 200 MHz machine.
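The conversion is just the reciprocal of the period:

```python
period_s = 5e-9                 # 5 ns switching time
frequency_hz = 1 / period_s     # reciprocal gives the equivalent rate
print(frequency_hz / 1e6)       # in MHz
```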
  • You weenies! Whip out your calculators.
    Switching packets at 5ns is switching them at 200MHz.
    Assuming you're not using any double edge clocking scheme, that's only 200Mbit/s.

    With data streams coming in at well over 1Gb/s, that's a pretty high latency. Once you accomplish the switch, you better be pumping a lot of data through that channel so as not to be a bottleneck. Else, it will be holding up all that traffic.
  • No I'm not. Read what I said carefully.
    You are perfectly correct, it can switch between streams of data every 5ns.

    Let's hope that your data arrives in one stream, because if that single 1Gb/s data stream you sent out is switched mid-stream, you have a bottleneck.
  • Backbone switching. These things are used for semi-permanent, long haul routes -- like central links from Atlanta to Chicago, or New York to L.A.

    You don't switch all the time.


    --
    Charles E. Hill
  • It is interesting to hear about technologies like this, but this is the same no-technical-details type article about optical switching that has been around for years.

    5ns is a nice number, but how exactly are you going to figure out where you need to switch that packet in 5ns? The whole optical-electronic translation problem still remains.

    I do have to give them some credit for getting "little tiny very fast mirrors" out of the equation, since this seems like a stupid, non-robust way to do things.

    Now, how long before I can get one of these and a 5ns latency?

  • Weellll... What do we have hear? Another one 'o those youngn's, I reckon. Well, I say, back in my day we had to read the bits over the phone ourselves... None 'o these fancy modem thingamajobs. Yep, it was just "one, zero, one, one, zero, zero, zero, one, zero, two, zero, one, one, zero" all day long. If we wanted to increase transmission speed we'd have to use hex, "one, seven, c, five, nine, two, a, f, six." If we used hexadecimal, and a good phone transfer specialist, we could get up to around 256 bps. Of course, this was on the big iron, so you might expect that they scaled the technology down a bit when they put it in your fancy "modem."
  • Those are the good ones... The newbies could usually only do about 32 bps.

    BTW, 256 bps = 16 hex digits a second.

  • Damn, I was tired when I posted that. My math abilities=shit when tired.
  • oh my god! He's got us! We're all doomed!
  • >(But Yes, I did start with a 200bps modem. So don't tell me that in your day....I played Tic Tac Toe at a godly 4fps.)

    ok, I've gotta ask...where did you find a "200" bps modem? 75, 150, and 300bps modems I've seen (and actively used a 300baud modem), but a 200bps modem? :)
  • This will NOT directly affect your packets. The reporter seems to not really understand what this thing does.

    At BEST this will be used as a junction switch for a bunch of optically connected routers/switches as an 'external' switch fabric. This idea goes well with the way most of the switch vendors are attacking the scalability problems that come with VLOI (Very Large Optical Interfaces).

    The only reason the 'fast switching' is important is in case one of the optical systems using this cross-connect fails; then we switch over to the backup with a minimal loss of internal data.

    Other than that, it is pretty much useless...
  • One manufacturer's 74LS00 quad NAND package has a "time to pull low" worst-case of 15 ns, and a "time to pull high" worst-case of 22 ns. This is per input bit to be processed.

    Don't forget that these are just the transition times for TTL. You also have to hold the signal high or low for a specified time in order to trigger the next logic gate this one is connected to.

    You mention the 74LS00 quad-NAND as an example. Don't forget that this chip exists in a number of different logic families. The 74 prefix means TTL, and the LS means Low-Power Schottky. There's also 74F and 74HC and others I can't remember right now that might be faster and/or lower power (albeit with somewhat-varying voltage levels).

    We don't use TTL in today's computers of course, it's too slow

    TTL is a 'slower' logic family because it runs the BJTs (Bipolar Junction Transistors) in saturation. This is good for power requirements (although not as good as CMOS), but when a transistor is saturated, it takes some time to come out of saturation for the next cycle. That's the inherent limitation of TTL.

    and chips requiring 5V signals produce too much heat for small circuit paths in the chip.

    It isn't just the voltage that causes the waste heat, it's the current too. Remember that TTL signals have low current at 5V, and high current at a low signal. Look at the datasheet [fairchildsemi.com] yourself. (This is for a 74ALS00 in an 8-pin SOIC surface-mount package, but functionally equivalent to the DIP 74LS00 you most likely used in your class.)

    Typical values in the low state are 0.1mA at 0.35 V, for a power of 35 uW. The high state is 3V at 20uA, for a power of 60 uW. So you see it's not just the voltage that causes the heat.

    Because the low states use more power, control signals for gates (for instance, hi-Z output control in tri-state chips) use inverse logic to activate them. That is, a signal that will be used only occasionally will typically have a logic low activate its function. Saves power in the long run, and seems kind of weird when you design TTL circuits at first.

    Now if you want high speed, look at ECL (Emitter-Coupled Logic). This logic family is really fast. Unlike TTL, the BJTs are not run in saturation, so they can switch faster. A side effect of this is that the transistors use more power, and hence run hotter. Like most things, it's a tradeoff. The fastest commercially-available logic family I've seen (and used) is ECLinPS (pronounced Eclipse), for ECL-in-PicoSeconds. These chips can run at several GHz! Pretty sweet. Look here for a datasheet [on-semiconductor.com] for the ECLinPS NAND gate.

    Unfortunately, one of the fastest logic companies went out of business about 10 years ago, and I've only been able to glimpse some of their datasheets. It was GigaBit Logic, who in the late '80s and early '90s had logic devices that beat the pants off of what we have now, implemented in GaAs (Gallium Arsenide). However, it cost way too much to develop profitably, and sadly the company is gone. Datasheets had devices listed at 10GHz (although I haven't tested any, so I can't guarantee how accurate the datasheets are).
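The wattage figures quoted in the comment above follow directly from P = V × I; a quick arithmetic check:

```python
# P = V * I for the two TTL output states quoted above.
low_state_W  = 0.35 * 0.1e-3   # 0.35 V at 0.1 mA -> 35 uW
high_state_W = 3.0  * 20e-6    # 3 V at 20 uA     -> 60 uW
print(round(low_state_W * 1e6), round(high_state_W * 1e6))   # in uW
```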

  • by selectspec ( 74651 ) on Monday April 23, 2001 @04:31AM (#272165)
    Sounds somewhat misleading to me. While clearly this technology is fascinating and will outperform mirrors and bubbles, I raise some doubts about these claims. First, the light signal must be translated into an electronic signal in order for the processor to make the switch (because they don't have an all-optical processor). Second, they do have a moving part in the optical gateway, which is heated in order to polarize the light for a particular channel. What is the durability of this gateway? How will it stand up over time? How far can this trick be expanded? Sounds like 64x64 will be pushing the laws of physics.
  • Is it just me, or did anyone else parse this as "Multi-tier-rabbit" switching?

    I'll get my coat ...

  • I used the TTL example because regular lab oscilloscopes in first-year electronics courses register 5ns. It's a common "get you thinking" exercise. TTL is the goldfish of digital electronics; it's hard to kill the chips. TTL is used by people who fuss with electronics as a hobby (TINI, BasicStamp, etc.), which is a wider audience than electronics professionals alone.

  • Put "5 nanoseconds" into perspective.

    The speed of light 'c' is 299792458 m/s in a vacuum. A nanosecond is 10^-9 s, so that makes an easy 0.299792458 m/ns. While physics is usually not a good time to switch away from the metric system, that's about one foot per nanosecond (11.8028526772 in/ns).

    One foot is roughly one light-nanosecond in a vacuum.

    If you have played around with Transistor-Transistor Logic (TTL) 5Vcc discrete logic chips in school, you probably learned that gates don't settle their state changes instantly. They take some time to drift from ~5V to ~0V output, or vice versa. The NAND gate is the root of all simple TTL gates; you can implement any simple gate with just NANDs.

    One manufacturer's 74LS00 quad NAND package has a "time to pull low" worst-case of 15 ns, and a "time to pull high" worst-case of 22 ns. This is per input bit to be processed. Before that worst-case time has elapsed, you can't be sure you're getting the right answer out of the chip. Managing propagation delays is the biggest reason for providing a CLOCK which drives all the logic at nice clear intervals. The interval has to assume the worst case of any of the involved logic. No wonder the Apple ][ clock was rated at just over 1 MHz, giving ~1000ns between clock ticks.

    We don't use TTL in today's computers of course, it's too slow and chips requiring 5V signals produce too much heat for small circuit paths in the chip.

    This article is saying that a packet of information sent into the switch as a beam of light can be switched intelligently from its current course to some other course in the time it would take that packet to move just five feet(*).

    (*) 'c' is the speed of light through a vacuum; a fiber isn't vacuum; I know that.
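The foot-per-nanosecond rule of thumb above is easy to verify (vacuum figures, as the footnote notes):

```python
C = 299_792_458                   # speed of light in vacuum, m/s

def light_distance_m(seconds):
    """Distance light travels in vacuum in the given time."""
    return C * seconds

m_per_ns      = light_distance_m(1e-9)   # ~0.2998 m, about one foot
inches_per_ns = m_per_ns / 0.0254        # metres -> inches
feet_in_5ns   = light_distance_m(5e-9) / 0.3048   # metres -> feet
print(round(inches_per_ns, 2), round(feet_in_5ns, 2))
```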

  • If you can track down the February edition of Scientific American, you'll find a fantastic article on optical switching technologies. After reading the article, using "little tiny very fast mirrors" AKA MEMS won't seem so stupid. I am at work so I can't remember exactly which month the article was in, but it was Jan-March anyway.
  • If the hardware power is there, then there will soon be software that needs more power than the hardware can provide.
  • As you say, LS TTL is rather slow and rather out-of-the-ark technology. Not quite sure why you're using it as an example.

    Let's take the current commonly available state of the art. Intel's P4 at 1.5GHz has a clock period of 666ps (picoseconds), and it does something useful within this time. This implies that their gate delay will be a maximum of 30% to 50% of this, say 200 to 300ps. (It could be considerably less.) Now 200ps is only 60mm (or 2.3" for you Yanks).

    It's beginning to make the 5ns time look rather slow, don't you think?
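The figures in the comment above check out arithmetically (the gate-delay fraction is the commenter's own assumption):

```python
C = 299_792_458                       # speed of light in vacuum, m/s

clock_hz     = 1.5e9                  # Intel P4 at 1.5 GHz
period_ps    = 1e12 / clock_hz        # ~666.7 ps per cycle
gate_delay_s = 200e-12                # assumed 200 ps gate delay
travel_mm    = C * gate_delay_s * 1000   # light travel in 200 ps, ~60 mm
print(round(period_ps, 1), round(travel_mm, 1))
```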
  • For anyone interested, take a look at some of the work at Washington University [wustl.edu]. The Applied Research Lab is doing some really cool work here on optical burst switching.

    Derek
  • *Ignore this post*

    First of all, for the individual slow user, the fact that they can get a file that much faster, yet take the same time to analyze and/or manipulate it, does not mean the transmit time is invalid or irrelevant. Since much data is handled in pieces or streams and then put together to be presented, or just kept in the background as administration or backend stuff, the issue becomes very important.

    This leads right into quantity. The more data required (and this keeps getting exponentially larger), whether sending or receiving (which of course includes verification and error correction), requires not only singular faster bandwidth, but is also greatly aided in many tasks by parallel data paths, whether physical or logical. A good example of this is the efficiency of SCSI and IDE (regardless of the little suffixes and prefixes). IDE is getting MUCH faster, but besides just the spin rate and bus transfer rate, the bottleneck is of course on the controller side, with the big issue for developers (of the hardware) being the very thing that makes IDE, IDE and not something else. Yet, even though the low-level studies don't show much of an increase in actual data streaming rates, the end performance for applications is much better. SCSI, on the other hand, has parallel data paths. So, besides the redundancy and additive nature of all the paths together, another nice thing about SCSI is that even if the bandwidth is lower (perhaps due to the bottleneck at the motherboard end), the fact that you can perform multiple tasks "at once" and hit multiple random areas really adds to the all-around effectiveness of the technology... albeit with a varied amount depending on what kind of access you are requiring.

    Ok, enough Karma whoring... just wanted to spit that out real quick. BTW, the effect of a larger-bandwidth backbone has consistently proven to be more effective regardless of the end user's connection speed. So even with the CE bottlenecks, the added speed of the fiber network will help everyone. How much, I have NO clue.

  • Yes, MEMS uses movable parts (according to the article).

    The technology described that doesn't use movable parts is PLC (also according to the article).

  • i dunno, but he does seem to put his own personal views and agenda into slashdot articles more often than not... for example, take his 'story' about the canadian government's response to the soldier of fortune video game (it was going to disallow stores to sell it)... while it makes sense to talk about censorship, or violence in media, or even corporate response to society's pressures, he turned the article into a huge rant about vegetarianism/veganism, and why eating animals is so bad, and why vegans rock, etc... now, i can't pretend to really know where he's coming from as i find animals rather tasty (and i happen to be very much in love with my girl), but i think he needs to show a little more journalistic integrity at times...

    ok, i'm done ranting... this isn't a personal slight, just an observation...
    -----

  • The idg link actually goes via ad.doubleclick.net, which some of you may have filtered out.

    Here's a more direct link:
    http://www.nwfusion.com/edge/columnists/2001/0416edge2.html [nwfusion.com]
  • Well, of course you're going to have a faster network if everybody's using Lynx! [browser.org]

    Sorry. Had to say it.

    But seriously... it's old stuff, but many may not be familiar with George Gilder's interesting articles [upenn.edu], particularly Into the Fibersphere [upenn.edu], on the implications of really, really fast networks. Like the notion that computers may become the bottleneck in the network, and that a packet would be better routed to the other side of the world and back through pure fiber, rather than through the computer next door and back. And how compression becomes passé when it's slower to decompress something than to send the thing uncompressed.

    Interesting observation: when computing power was expensive, programmers were paid to conserve it, writing very tight assembly code. Now that it's cheap, programmers are expected to "throw switches at the problem". But bandwidth is expensive, so they write to conserve it. On an all-fiber network, they may be expected to "throw bandwidth at the problem".

    Lots of good stuff here, especially considering it was written in 1995. Hope you like it.

  • This is a different kind of switching - Don't think of it like the hub-replacing switch: It's interface-based. You'd specify rules like, "traffic on interface 0 -> interface 1". As it said in the article, the optical switch is meant to feed multiple routers - Load balancing, if you will.
  • I have but one comment... "This is sweet" :)

    I'd love someone to tell me what the heck you need all this switching grunt
    for just at the moment though (aside from 'whee, imagine a beowul...') :)
    spose we'll all have one of these one day though ;)

  • You can read more about the different optical switching technology in the January 2001 issue of Scientific American. Their special report, "The rise of optical switching" explains the various technologies used to switch photonic circuits.

    Unfortunately, the article is not available on-line, though you can see a related article about the rise of optical networking. You can see the article abstracts at SciAm.Com [sciam.com].

    The switching technology for Lynx's switch sounds like thermo-optic switches. If so, it uses light interference to pass/block signals. The technology is wavelength sensitive, unlike MEMS or bubble switches. Also, liquid-crystal switching (another popular photonic switch technology) is polarization sensitive.

  • Maybe if they used these fully optical transistors [physicsweb.org] it might be faster. However, I have slight doubts about the speed of these babies because of a chemical reaction that is central to their operation. But then again, ordinary silicon transistors are based on the diffusion of electrons, which is slow as hell compared to something purely optical, so these might well turn out a lot faster.

    --
  • but imagine a Beowulf Cluster of these!

    jk
    -c
  • ....and using no moving parts no less. The only problem? Things need to be kept at differing wavelengths (such as network broadcasts), thus biting into bandwidth a little. Nevertheless, if you're churning packets out with 5ns delays, it probably won't matter anyway! :-D
  • The only thing I hope about all this new technology is that the advantages should always include: 1. Lower power consumption, 2. Smaller footprints, 3. More recyclable material. I'm not really an environmentalist, but I think at this point in time we should all become a little concerned with all these new gadgets that are taking increased amounts of power.


    yoink
  • If we used hexadecimal, and a good phone transfer specialist, we could get up to around 256 bps.

    64 hex digits a second? By voice? Now that's impressive...

    --
    BACKNEXTFINISHCANCEL

  • BTW, 256 bps = 16 hex digits a second.

    Not last time I checked... 0xF = 1111b = 4 bits; 256 / 4 = 64 hex digits per second.

    --
    BACKNEXTFINISHCANCEL
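The correction above is straightforward: a hex digit encodes 4 bits, so:

```python
bits_per_second = 256
bits_per_hex_digit = 4    # 0xF == 0b1111
print(bits_per_second // bits_per_hex_digit)   # hex digits per second
```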

  • Wow, this will really increase the throughput of major backbones... but imagine how much we could increase the throughput using today's technology, simply by eliminating Spam and pr0n. (Or maybe just Spam...)

    --brian

  • Last time I checked, photons moved. As a matter of fact, they were moving along at a pretty darn fast rate.
  • > ok, I've gotta ask...where did you find a "200" bps modem?

    Easy. In the olden days we used to overclock 300 baud modems to 500 baud. And dot matrix printers too - from 10 cps to 15 cps. And today's young wankers think they invented overclocking. Pfth! :-P

  • > will all look like IP routers. This is a Good Thing

    IPv4 or IPv6 or both?

  • Concerning power consumption, it is easy to figure out for yourself that this will dramatically lower the required wattage, since the period over which power is needed will be dramatically lower for each switched packet.

    This is faulty logic. Just because the time per packet, and therefore the power per packet, goes down doesn't mean that the overall power will go down. The number of packets will rise in inverse proportion to the time per packet.

  • by SirFlakey ( 237855 ) on Monday April 23, 2001 @04:24AM (#272192) Homepage
    Here is a link [delphion.com] to their related patent with some more info on the tech used. Pretty damn cool.
    --
  • "...and I've developed a program that downloads porn from the internet a million times faster!" -nerd from simpsons
  • ...And switch packets in five nanoseconds
    ...And compile mass libraries of MP3s and warez despite industry controls
    ...And put giant arms on space stations that make the space shuttle arm look like a grasshopper leg [cnn.com]
    ...And sequence the entire human genome
    ...And make god games that can seriously damage your view of reality [bwgame.com].

    Why the HELL am I still getting emails that start like "Looking for HOT, HORNY Teens? Look no further!!!!!!"
  • We can increase the speed at which packets get switched and routed. We can increase the speed at which our computers can receive and process the information (which isn't addressed by this technology). But in the end, the bottleneck comes down to how quickly we, the users, can process and respond to the information.

    Sure, there are situations that don't really require human intervention, at least not till a series of events has completed (say, FTP of a series of large files, or convergence of a network after a routing change). But for anything interactive, the human is the slowest part, and always will be. Get a file in 63 seconds rather than 312 seconds? Nice, saves you four minutes of your life, but generally small compared to the time that you'll use manipulating or using the file.

  • if you take into consideration that until recently, 60ns RAM was considered something that could be used without the end user griping. Of course, end users will always find something to gripe about. But then again, it's not a bug, it's a feature. Security, yeah, that's the ticket....

    Back to the switch. I read about this in this month's Wired, and it's pretty cool, but I need to know... even though they're trying to get rid of the bottlenecks, how long till this kind of thing is "profitable" enough for, say, my broadband provider to be giving me an extra megabit of bandwidth? Hmmm?

    (Yes, I am a spoiled teenager)

    (But Yes, I did start with a 200bps modem. So don't tell me that in your day....I played Tic Tac Toe at a godly 4fps.)

  • You know what, I have absolutely no freaking idea where it came from, I just knew that I had it. Keep in mind, I was all of four, so my memories are clouded, at best.
  • Computers are already the slow link in the chain if you're using gigabit ethernet. If my math is correct, with Gig-E the NIC can receive data faster than a 100MHz bus can throw it at the processor. Crazy.


  • I would hate to get DoS'd from a network of those. [antioffline.com]

  • I truly agree that the claims about this switch are rather bogus. What few people realize is that it's the protocol that is responsible for 90%+ of the latency of a packet through a switch, not the switching itself. Until there is a commercial method to decode packet headers in fiber without converting to electrical signals, and also a major change in header complexity/format, the latency of packets through a LAN, WAN, or whatever will not be reduced!
  • > Computers are already the slow link in the
    > chain if you're using gigabit ethernet. If my
    > math is correct, with Gig-E the NIC can receive
    > data faster than a 100Mhz bus can throw it at
    > the processor. Crazy.

    Only if your bus is pretty narrow. A Gigabit Ethernet connection has a maximum throughput of approximately 120 Megabytes a second. So, unless your computer has a 100 MHz, 10 bit bus, you've still got some leeway.

    Gigabit Ethernet would just about saturate a 33 MHz, 32-bit PCI bus, though.
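The bus arithmetic from the reply above, in sketch form (approximate raw rates, ignoring protocol overhead and bus arbitration):

```python
# Raw throughput comparisons from the comment above, in MB/s.
gige_MBps   = 1e9 / 8 / 1e6         # Gigabit Ethernet: 125 MB/s (~"120")
narrow_MBps = 100e6 * 10 / 8 / 1e6  # a 100 MHz, 10-bit bus: 125 MB/s
pci_MBps    = 33e6 * 32 / 8 / 1e6   # 33 MHz, 32-bit PCI: 132 MB/s
print(gige_MBps, narrow_MBps, pci_MBps)
```

So a plain 32-bit PCI bus barely outruns a saturated Gig-E link, which is the "just about saturate" observation above.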

  • Concerning power consumption, it is easy to figure out for yourself that this will dramatically lower the required wattage, since the period over which power is needed will be dramatically lower for each switched packet.

    Less obviously, smaller footprints are potentially possible in such a system, since smaller switch times require smaller circuits (or whatever...).

    Guess the recyclable material will not be an issue here, since this type of equipment will likely be high-end industrial, not for your $20 ethercard.
  • This is an interesting technology. I think it will be most useful in relatively small-scale switching applications. As the article said, anything above 64x64 would be 'challenging'.
    The whole optical switching arena is full of promising technology using all sorts of methods of switching light from one path to another, but there is no solution (yet) which is cost- or footprint-effective over the whole connection volume range.
    In the medium term the only systems we will see are those that switch on the small scale, say tens of connections. What with the state of the world economy, R&D has been slowed in these areas because no one can afford to purchase a pure optical line system.
    So don't be expecting any optical switching to the home anytime soon (Wow, would that really annoy BT! So much for the local loop).
    As for the instability of mechanical switching, this is a very small consideration, as the MEMS can remain in the same position for several years without intervention. And with protection and restoration systems as well, any interruption to service could be limited to a few seconds at most.
    The potential benefits of optical switching are clear (lower footprint, less power consumption); however, it is not going to replace electrical switching for a long time.

    M Gardner
    ---
    All opinions my own
  • Why are more than 64 ports pushing the law of physics?
  • Am I the only one that read the title of this piece as 'Multi Tier Rabbit Switching" ?
  • Imagine the Internet reconfigured to use these switches...this would cut page seek times by a quarter at least, but it's a little too overpowered to use without much technology to take care of it. Maybe the quantum spin duplication technology that they've been researching would make this an instantaneous transmission technology. Now THAT is power.
  • by jvanderneut ( 444663 ) on Monday April 23, 2001 @05:55AM (#272207)
    Sounds somewhat misleading to me. While clearly this technology is fascinating and will outperform mirrors and bubbles, I raise some doubts about these claims. First, the light signal must be translated into an electronic signal in order for the processor to make the switch (because they don't have an all-optical processor).

    With the electric address signal you could control an electro-optical switch, so the signal can be switched optically. Only the address is translated into an electric signal. With optical-optical switches it may be possible to eliminate this conversion too.

    Second, they do have moving part in the optical gateway which is heated in order to polarize the light for a particular channel. What is the durability of this gateway?

    The heated parts are special materials that have a slightly different refractive index when heated (or cooled). In this case they are used to tune the 'optical length' of the cavity, so that small manufacturing errors are corrected.

    Jasper
  • does this mean we can build data from star trek now? ;-)
  • oops you're right ;-P

    i confused photon with positron, not very trekkie of me
  • I think you are correct. This could not be MEMS technology, there is no way you could get switching speeds at that rate with that technology.
  • Lynx's switch is really a circuit switch and not a packet switch. Let me explain why.
    The term Packet Switching commonly refers to statistical multiplexing of packets onto a lower-layer channel, like say a single WDM channel or a SONET stream. For statistical multiplexing, you need buffers at input or output ports, because if two packets arrive at the same port at the same time, one of them has to be buffered while the other is being transmitted. Thus, the common usage of the term optical Packet Switching implies optical buffering and optical processing available in the switch. In other words, it's like an optical implementation of a packet switch like an IP router, or a cell switch like an ATM switch.

    In contrast, these "photonic switches" in the market like those of Lynx are like circuit switches (like the TDM switches used to set up circuits in telephony). There is no buffering, statistical multiplexing, or intelligent forwarding. The switch may of course still have to do opto-electronic conversion and look at the "packet" or "frame" headers to determine the incoming and then outgoing port numbers. But remember that this makes your switch dependent on the bit rate, clock timing and protocol format.

    In contrast, pure wavelength switching (WDM switching) as opposed to this all-optical "packet switching" is totally independent of bit rate, clock timing and protocol format. You are switching light wavelengths and not packets here. This is one of the major advantages of WDM networks since their capacity can be dynamically upgraded in response to customer demand by just upgrading, i.e. adding more wavelengths or increasing bandwidth per wavelength channel - you don't have to go visit every node in your network core and upgrade all the equipment. This advantage is not available if you made your network out of these optical packet-switches like those of Lynx.

    All said, wavelength switching and optical "packet-switching" are not necessarily competing technologies. The former is more suited to backbones and long haul networks..while the latter is more suited to local and metropolitan areas.
    Note that I wrote "packet switching" in quotes here.
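The capacity-upgrade argument above amounts to simple multiplication (the wavelength counts and per-channel rates here are illustrative, not from the comment):

```python
def wdm_capacity_gbps(n_wavelengths, gbps_per_wavelength):
    """Total capacity of a WDM link: channel count times per-channel rate."""
    return n_wavelengths * gbps_per_wavelength

# Adding wavelengths (or raising the per-channel rate) upgrades capacity
# without touching the wavelength-switched core nodes:
print(wdm_capacity_gbps(16, 10), wdm_capacity_gbps(64, 10))
```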

"Being against torture ought to be sort of a multipartisan thing." -- Karl Lehenbauer, as amended by Jeff Daiell, a Libertarian

Working...