Multiterabit Switching, No Moving Parts
npongratz writes "Hailing from the world of physics, chemistry, and assorted gee-whiz, Lynx Photonic Networks announced a photonic switch with less than 5ns packet switching. "...multiterabit switching systems..." That's what I call bitchin' switchin'." And unlike certain optical switches discussed here before based on bubbles moving in liquid, this variety "does not have any moving parts, nor does it require a change in the physical state of the light signal." 5 nanoseconds.
Re:Somewhat misleading... (Score:1)
The whole point of using mirrors and bubbles is that you can switch hundreds of wavelengths simultaneously. Of course you're not going to read the packet headers on every one of these wavelengths - the switching is for the backbone, and doesn't occur that often. Only optical processing would save you from needing hundreds of processors, one for each wavelength.
Think of these switches as jumbo jets. Only your rough destination matters, and you get on the right plane to begin with. It's only when you get off the plane that you can start making finer routing decisions.
no moving parts? (Score:1)
But isn't an adjustable mirror a movable part? Unless of course adjustable means they can change how reflective it is on the fly.
-paul
---------------------------------------
The art of flying is throwing yourself at the ground...
Routers and Optical Burst Switching (Score:2)
- you need routers at the edge of any optical network to talk to the copper world, and currently the bottleneck is building terabit/petabit routers that will go fast enough (as well as doing added value stuff on the edge such as QoS and VPN processing). Cisco, Juniper, Avici and co should carry on doing well.
- over time, optical switches are likely to become MPLS-enabled, meaning that they have an IP+MPLS (Multiprotocol Label Switching) control plane, and *appear* to be routers even though MPLS is laying down all-optical paths through all these switches. There is a lot of work going on in this area, but ultimately it means that ATM switches, Frame Relay switches, SONET cross-connect switches, and DWDM lambda switches will all look like IP routers. This is a Good Thing, since otherwise the edge IP routers would have to peer directly with all the other edge routers, generating too much routing traffic and consuming too much CPU in each router to be viable (the 'large, flat network' problem).
- Optical Burst Switching is a way of building all-optical switches (at least in the data path) that can at least switch packet trains (bursts). What happens is that an edge device signals ahead of the actual packet burst, on a separate control channel - the delay between the control packet and the data burst gives the electronic control-path part of the optical switch enough time to do the switching (5 ns is a long time at optical rates). This is an alternative to using MPLS or similar to lay down optical paths that are more or less fixed - of course, reliable and fast setup of switching is important to this working well.
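The offset-time idea above can be checked with a toy calculation. Only the 5 ns switch time comes from the article; the 50 ns per-hop control-plane processing figure is purely illustrative:

```python
# Sketch of the Optical Burst Switching timing argument: the control packet
# leads the data burst by an "offset" long enough for every switch on the
# path to reconfigure before the burst arrives.

SWITCH_TIME_NS = 5.0     # reconfiguration time claimed for the photonic switch
PROCESS_TIME_NS = 50.0   # assumed electronic control-plane processing per hop

def min_offset_ns(hops: int) -> float:
    """Minimum control-to-burst offset so every hop is configured in time."""
    # Each hop must process the control packet and finish switching before
    # the burst shows up; the delays accumulate along the path.
    return hops * (PROCESS_TIME_NS + SWITCH_TIME_NS)

if __name__ == "__main__":
    for hops in (1, 4, 10):
        print(f"{hops:2d} hops -> offset >= {min_offset_ns(hops):.0f} ns")
```

With these assumed numbers, even a ten-hop path needs the burst delayed by well under a microsecond, which is why 5 ns of switching time "is a long time at optical rates."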
Re:Routers and Optical Burst Switching (Score:2)
This is one reason why MPLS may be a good way of deploying IPv6 in large core networks - it removes the need to upgrade silicon to forward IPv6 packets, as long as the core switches/routers are MPLS capable. Only the edge MPLS routers (label switch routers in the jargon) need to actually forward IPv6 packets, all the core LSRS just do label-swapping or optical switching.
Re:Now THIS is power (Score:1)
The last factor, queuing, is very important to switches, and is simply not addressed by this new technology. If you have two packets coming in that need to go out the same interface, one of them needs to be delayed; it's not clear how their switch handles this from the description in the article.
Still, it's nice to see optical networking technology that acts at a reasonable scale (nanoseconds rather than milliseconds or 100's of microseconds), but I'd like to see more details before I believe it delivers what it promises.
Re:put in perspective (Score:1)
That's slow (Score:1)
Switching packets at 5ns is switching them at 200MHz.
Assuming you're not using any double edge clocking scheme, that's only 200Mbit/s.
With data streams coming in at well over 1Gb/s, that's a pretty high latency. Once you accomplish the switch, you better be pumping a lot of data through that channel so as not to be a bottleneck. Else, it will be holding up all that traffic.
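The arithmetic behind this objection, worked out (single-edge clocking assumed, as the comment says):

```python
# A 5 ns switching time corresponds to a 200 MHz switching rate; only if you
# had to switch on every single bit would that cap throughput at 200 Mbit/s.

switch_time_s = 5e-9
switch_rate_hz = 1 / switch_time_s             # ~200 MHz
per_bit_throughput_bps = switch_rate_hz * 1    # one bit per switch event

print(switch_rate_hz)           # ~2e8
print(per_bit_throughput_bps)   # the 200 Mbit/s worst case from the comment
```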
Re:That's slow (Score:1)
You are perfectly correct, it can switch between streams of data every 5ns.
Let's hope that your data arrives in one stream, because if that single 1Gb/s data stream you sent out is switched mid-stream, you have a bottleneck.
Re:One comment... (Score:1)
You don't switch all the time.
--
Charles E. Hill
Nice and vague as usual (Score:1)
5ns is a nice number, but how exactly are you going to figure out where you need to switch that packet in 5ns? the whole optical-electronic translation problem still remains.
I do have to give them some credit for getting "little tiny very fast mirrors" out of the equation, since this seems like a stupid, non-robust way to do things.
Now, how long before I can get one of these and a 5ns latency?
Re:That's Hella fast... (Score:1)
Re:That's Hella fast... (Score:1)
BTW, 256 bps = 16 hex digits a second.
Re:That's Hella fast... (Score:1)
Re:-2 Troll him down Moderator (Score:1)
Re:That's Hella fast... (Score:1)
Ok, I've gotta ask... where did you find a "200" bps modem? 75, 150, and 300 bps modems I've seen (and I actively used a 300 baud modem), but a 200 bps modem?
Why this is a big deal (Score:1)
At BEST this will be used as a junction switch for a bunch of optically connected routers/switches, as an 'external' switch fabric. This idea goes well with the way most of the switch vendors are attacking the scalability problems that come with VLOI (Very Large Optical Interfaces).
The only reason the 'fast switching' is important is in case one of the optical systems using this cross-connect fails; then we switch over to the backup with a minimal loss of in-transit data.
Other than that, it is pretty much useless...
Re:put in perspective (Score:2)
Don't forget that these are just the transition times for TTL. You also have to hold the signal high or low for a specified time in order to trigger the next logic gate this one is connected to.
You mention the 74LS00 quad-NAND as an example. Don't forget that this chip exists in a number of different logic families. The 74 prefix means TTL, and the LS means Low-Power Schottky. There's also 74F and 74HC and others I can't remember right now that might be faster and/or lower power (albeit with somewhat-varying voltage levels).
We don't use TTL in today's computers of course, it's too slow
TTL is a 'slower' logic family because it runs the BJTs (Bipolar Junction Transistors) in saturation. This is good for power requirements (although not as good as CMOS), but when a transistor is saturated, it takes some time to come out of saturation for the next cycle. That's the inherent limitation of TTL.
and chips requiring 5V signals produce too much heat for small circuit paths in the chip.
It isn't just the voltage that causes the waste heat, it's the current too. Remember that TTL signals have low current at 5V, and high current at a low signal. Look at the datasheet [fairchildsemi.com] yourself. (This is for the 74ALS00 in an 8-pin SOIC surface-mount package, but it's functionally equivalent to the DIP 74LS00 you most likely used in your class.)
Typical values in the low state are 0.1mA at 0.35 V, for a power of 35 uW. The high state is 3V at 20uA, for a power of 60 uW. So you see it's not just the voltage that causes the heat.
Because the low states use more power, control signals for gates (for instance, hi-Z output control in tri-state chips) use inverse logic to activate them. I.e., a signal that will be used only occasionally will typically have a logic low activate its function. Saves power in the long run, and seems kind of weird when you design TTL circuits at first.
Now if you want high speed, look at ECL (Emitter-Coupled Logic). This logic family is really fast. Unlike TTL, the BJTs are not run in saturation, so they can switch faster. A side effect of this is that the transistors use more power, and hence run hotter. Like most things, it's a tradeoff. The fastest commercially available logic family I've seen (and used) is ECLinPS (pronounced Eclipse), for ECL-in-PicoSeconds. These chips can run at several GHz! Pretty sweet. Look here for a datasheet [on-semiconductor.com] for the ECLinPS NAND gate.
Unfortunately, one of the fastest logic companies went out of business about 10 years ago, and I've only been able to glimpse some of their datasheets. It was GigaBit Logic, who in the late 80's and early 90's had logic devices that beat the pants off of what we have now, implemented in GaAs (Gallium Arsenide). However, it cost way too much to develop profitably, and sadly the company is gone. Datasheets had devices listed at 10GHz (although I haven't tested any, so I can't guarantee how accurate the datasheets are).
Somewhat misleading... (Score:3)
reading problems ... (Score:1)
I'll get my coat ...
Re:put in perspective (Score:1)
I used the TTL example because the regular lab oscilloscopes in first-year electronics courses can resolve 5ns. It's a common "get you thinking" exercise. TTL is the goldfish of digital electronics; it's hard to kill the chips. TTL is used by people who fuss with electronics as a hobby (TINI, BasicStamp, etc.), which is a wider audience than those who work with electronics professionally.
put in perspective (Score:2)
Put "5 nanoseconds" into perspective.
The speed of light 'c' is 299792458 m/s in a vacuum. A nanosecond is 10^-9 s, so that makes an easy 0.299792458 m/ns. While physics is usually not a good time to switch away from the metric system, that's about one foot per nanosecond (11.8028526772 in/ns).
One foot is roughly one light-nanosecond in a vacuum.
If you have played around with Transistor-Transistor Logic (TTL) 5Vcc discrete logic chips in school, you probably learned that gates don't settle their state changes instantly. They take some time to drift from ~5V to ~0V output, or vice versa. The NAND gate is the root of all simple TTL gates; you can implement any simple gate with just NANDs.
One manufacturer's 74LS00 quad NAND package has a "time to pull low" worst-case of 15 ns, and a "time to pull high" worst-case of 22 ns. This is per input bit to be processed. Before that worst-case time has elapsed, you can't be sure you're getting the right answer out of the chip. Managing propagation delays is the biggest reason for providing a CLOCK which drives all the logic at nice clear intervals. The interval has to assume the worst case of any of the involved logic. No wonder the Apple ][ clock was rated at just over 1 MHz, giving ~1000ns between clock ticks.
We don't use TTL in today's computers, of course; it's too slow, and chips requiring 5V signals produce too much heat for small circuit paths in the chip.
This article is saying that a packet of information sent into the switch as a beam of light can be switched intelligently from its current course to some other course in the time it would take that packet to move just five feet(*).
(*) 'c' is the speed of light through a vacuum; a fiber isn't vacuum; I know that.
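The numbers in this comment can be checked quickly (the 22 ns figure is the worst-case pull-high time quoted above; all other values are from the comment):

```python
# Sanity check: one light-nanosecond is about a foot, a worst-case 22 ns
# settle time caps a naive single-chip clock around 45 MHz, and light in
# vacuum covers roughly five feet in 5 ns.

C = 299_792_458            # speed of light in vacuum, m/s
M_PER_NS = C * 1e-9        # ~0.2998 m per nanosecond
FT_PER_M = 1 / 0.3048      # feet per metre

print(M_PER_NS * FT_PER_M)        # ~0.98 ft/ns: "one foot per nanosecond"
print(1 / 22e-9 / 1e6)            # ~45 MHz: max clock from a 22 ns worst case
print(5 * M_PER_NS * FT_PER_M)    # ~4.9 ft travelled by light in 5 ns
```

The Apple ][ at ~1 MHz sat far below even that 45 MHz ceiling, which is the point about budgeting for worst-case propagation delays.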
Re:Nice and vague as usual (Score:1)
Re:Now THIS is power (Score:1)
Re:put in perspective (Score:2)
Let's take the current commonly available state of the art. Intel's P4 at 1.5GHz has a clock period of 666ps (picoseconds), and it does something useful within this time. This implies that their gate delay will be at most 30% to 50% of this, say 200 to 300ps (it could be considerably less). Now 200ps is only 60mm (or about 2.36" for you Yanks).
It's beginning to make the 5ns time look rather slow, don't you think?
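The comparison above, worked out (the 30% gate-delay budget is the commenter's low-end assumption):

```python
# A 1.5 GHz P4 clock period, an assumed 30% gate-delay budget, and the
# distance light covers in that time.

C = 299_792_458                          # m/s
period_ps = 1e12 / 1.5e9                 # ~666.7 ps clock period
gate_delay_ps = period_ps * 0.3          # ~200 ps, the low end assumed above
distance_mm = C * gate_delay_ps * 1e-12 * 1000

print(period_ps)      # ~666.7 ps
print(distance_mm)    # ~60 mm (~2.36 inches)
```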
Re:Routers and Optical Burst Switching (Score:1)
Derek
down the road (Score:1)
First of all, for the individual slow user, the fact that they can get a file that much faster yet take the same time to analyze and/or manipulate it does not mean the transmit time is irrelevant. Since much data is handled in pieces or streams and then assembled for presentation, or just kept in the background as administration or backend stuff, the issue becomes very important.
This leads right into quantity. The more data required (and this keeps getting exponentially larger), whether sending or receiving (which of course includes verification and error correction), the more you need not just a single faster pipe but also parallel data paths, whether physical or logical.
A good example is the relative efficiency of SCSI and IDE (regardless of the little suffixes and prefixes). IDE is getting MUCH faster, but beyond the spin rate and bus transfer rate, the bottleneck is of course on the controller side, with the big issue for the hardware developers being the very thing that makes IDE, IDE and not something else. Yet even though low-level studies don't show much of an increase in actual data streaming rates, the end performance for applications is much better. SCSI, on the other hand, has parallel data paths. So besides the redundancy and additive nature of all the paths together, another nice thing about SCSI is that even if the bandwidth is lower (perhaps due to a bottleneck at the motherboard end), being able to perform multiple tasks "at once" and hit multiple random areas really adds to the all-around effectiveness of the technology... albeit by a varied amount depending on what kind of access you require.
Ok, enough karma whoring... just wanted to spit that out real quick. BTW, a larger (bandwidth) backbone has consistently proven to be more effective regardless of the end user's connection speed. So even with the CE bottlenecks, the added speed of the fiber network will help everyone. How much, I have NO clue.
Re:no moving parts? (Score:1)
The technology described that doesn't use movable parts is PLC (also according to the article).
Re:Which Department? (Score:1)
ok, i'm done ranting... this isn't a personal sleight, just an observation...
-----
More direct link to article. (Score:1)
Here's a more direct link:
http://www.nwfusion.com/edge/columnists/2001/0416
Gilder (Score:2)
Sorry. Had to say it.
But seriously... it's old stuff, but many may not be familiar with George Gilder's interesting articles [upenn.edu], particularly Into the Fibersphere [upenn.edu], on the implications of really, really fast networks. Like the notion that computers may become the bottleneck in the network, and that a packet would be better routed to the other side of the world and back through pure fiber than through the computer next door and back. And how compression becomes passé when it's slower to decompress something than to send it uncompressed.
Interesting observation: when computing power was expensive, programmers were paid to conserve it, writing very tight assembly code. Now that it's cheap, programmers are expected to "throw hardware at the problem". But bandwidth is expensive, so they write to conserve it. On an all-fiber network, they may be expected to "throw bandwidth at the problem".
Lots of good stuff here, especially considering it was written in 1995. Hope you like it.
Re:Nice and vague as usual (Score:1)
One comment... (Score:1)
I'd love someone to tell me what the heck you need all this switching grunt
for just at the moment though (aside from 'whee, imagine a beowul...')
spose we'll all have one of these one day though
More info Scientific American, Jan. 2001 (Score:1)
You can read more about the different optical switching technologies in the January 2001 issue of Scientific American. Their special report, "The rise of optical switching", explains the various technologies used to switch photonic circuits.
Unfortunately, the article is not available on-line, though you can see a related article about the rise of optical networking. You can see the article abstracts at SciAm.Com [sciam.com].
The switching technology for Lynx's switch sounds like thermo-optic switches. If so, it uses light interference to pass/block signals. The technology is wavelength sensitive, unlike MEMS or bubble switches. Also, liquid-crystal switching (another popular photonic switch technology) is polarization sensitive.
Fully optical switching... (Score:2)
--
i can't believe no one said it yet... (Score:1)
jk
-c
Photonic switching? Cool! (Score:1)
Anonymous Subject (Score:2)
yoink
Re:That's Hella fast... (Score:1)
If we used hexadecimal, and a good phone transfer specialist, we could get up to around 256 bps.
64 hex digits a second? By voice? Now that's impressive...
--
BACKNEXTFINISHCANCEL
Re:That's Hella fast... (Score:2)
BTW, 256 bps = 16 hex digits a second.
Not last time I checked... 0xF = 1111b = 4 bits; 256 / 4 = 64 hex digits per second.
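The correction above, in one line:

```python
# A hex digit carries 4 bits (0xF == 0b1111), so 256 bps is 64 hex digits
# per second, not 16.

bits_per_hex_digit = 4
print(256 / bits_per_hex_digit)   # 64.0
```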
--
BACKNEXTFINISHCANCEL
One of two solutions to the problem... (Score:1)
--brian
Multiterabit Switching, No Moving Parts?!?!? (Score:1)
Re:That's Hella fast... (Score:1)
Easy. In the olden days we used to overclock 300 baud modems to 500 baud. And dot matrix printers too - from 10 cps to 15 cps. And today's young wankers think they invented overclocking. Pfth! :-P
Re:Routers and Optical Burst Switching (Score:1)
IPv4 or IPv6 or both?
Re:Lower power consumptions... (Score:1)
This is faulty logic. Just because the time per packet, and therefore the power per packet, goes down doesn't mean that the overall power will go down. The number of packets will rise in inverse proportion to the time per packet.
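The point can be put in numbers (the energy and rate values are purely hypothetical):

```python
# Halving the energy per packet does not halve total power if the packet
# rate doubles to fill the link: energy/packet * packets/second is unchanged.

def total_power_w(energy_per_packet_j: float, packets_per_s: float) -> float:
    return energy_per_packet_j * packets_per_s

slow = total_power_w(2e-9, 1e8)   # 2 nJ/packet at 100 Mpps
fast = total_power_w(1e-9, 2e8)   # 1 nJ/packet at 200 Mpps

print(slow, fast)   # identical total power in both regimes
```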
And because /dotters want to know .. (Score:4)
--
Is it possible now? (Score:2)
If we can send a man to the moon... (Score:2)
Why the HELL am I still getting emails that start like "Looking for HOT, HORNY Teens? Look no further!!!!!!"
The slowest bottleneck: Human (Score:1)
Sure, there are situations that don't really require human intervention, at least not until a series of events has completed (say, FTP of a series of large files, or convergence of a network after a routing change). But for anything interactive, the human is the slowest part, and always will be. Get a file in 63 seconds rather than 312 seconds? Nice, it saves you four minutes of your life, but that's generally small compared to the time you'll spend manipulating or using the file.
That's Hella fast... (Score:1)
Back to the switch. I read about this in this month's Wired, and it's pretty cool, but I need to know... even though they're trying to get rid of the bottlenecks, how long till this kind of thing is "profitable" enough for, say, my broadband provider to be giving me an extra megabit of bandwidth? Hmmm?
(Yes, I am a spoiled teenager)
(But Yes, I did start with a 200bps modem. So don't tell me that in your day....I played Tic Tac Toe at a godly 4fps.)
Re:That's Hella fast... (Score:1)
Re:Gilder (Score:1)
Computers are already the slow link in the chain if you're using gigabit ethernet. If my math is correct, with Gig-E the NIC can receive data faster than a 100 MHz bus can throw it at the processor. Crazy.
mindblowing (Score:1)
I would hate to get DoS'd from a network of those. [antioffline.com]
Re:Somewhat misleading... I Agree! (Score:1)
Re:Gilder (Score:1)
> chain if you're using gigabit ethernet. If my
> math is correct, with Gig-E the NIC can receive
> data faster than a 100Mhz bus can throw it at
> the processor. Crazy.
Only if your bus is pretty narrow. A Gigabit Ethernet connection has a maximum throughput of approximately 120 megabytes a second. So, unless your computer has a 100 MHz, 10-bit bus, you've still got some leeway.
Gigabit Ethernet would just about saturate a 33 MHz, 32-bit PCI bus, though.
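The bandwidth comparison above checks out:

```python
# Gigabit Ethernet line rate vs. bus throughput for the two buses mentioned.

def bus_mbytes_per_s(clock_hz: float, width_bits: int) -> float:
    """Peak throughput of a simple synchronous bus, in MB/s."""
    return clock_hz * width_bits / 8 / 1e6

gig_e = 1e9 / 8 / 1e6                    # 125 MB/s Gig-E line rate
narrow = bus_mbytes_per_s(100e6, 10)     # the 100 MHz, 10-bit bus: 125 MB/s
pci = bus_mbytes_per_s(33e6, 32)         # classic 33 MHz/32-bit PCI: 132 MB/s

print(gig_e, narrow, pci)   # Gig-E roughly saturates classic PCI
```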
Lower power consumptions... (Score:2)
Concerning power consumption, it is easy to see that this will dramatically lower the required wattage, since the period over which power is needed for each switched packet will be dramatically shorter.
Less obviously, smaller footprints may be possible in such a system, since shorter switch times allow for smaller circuits (or whatever...).
Guess recyclable material will not be an issue here, since this type of equipment will likely be high-end industrial, not your $20 Ethernet card.
Fast switching, but not on a large scale (Score:1)
The whole optical switching arena is full of promising technology using all sorts of methods of switching light from one path to another, but there is no solution (yet) which is cost or footprint effective over the whole connection volume range.
In the medium term the only systems we will see are those that switch on the small scale, say tens of connections. What with the state of the world economy, R&D has slowed in these areas because no one can afford to purchase a pure optical line system.
So don't expect any optical switching to the home anytime soon. (Wow, would that really annoy BT! So much for the local loop.)
As for the instability of mechanical switching, this is a very small consideration, as the MEMS can remain in the same position for several years without intervention. And with protection and restoration systems, any interruption to service could be limited to a few seconds at most.
The potential benefits of optical switching are clear (lower footprint, less power consumption) however it is not going to replace electrical switching for a long time.
M Gardner
---
All opinions my own
Stupid Question (Score:1)
Am I the only one... (Score:1)
Now THIS is power (Score:1)
Re:Somewhat misleading... (Score:4)
With the electric address signal you could control an electro-optical switch, so the signal can be switched optically. Only the address is translated into an electric signal. With optical-optical switches it may be possible to eliminate this conversion too.
Second, they do have a moving part in the optical gateway, which is heated in order to polarize the light for a particular channel. What is the durability of this gateway?
The heated parts are special materials that have a slightly different refractive index when heated (or cooled). In this case they are used to tune the 'optical length' of the cavity, so that small manufacturing errors are corrected.
Jasper
data from star trek (Score:1)
Re:data from star trek (Score:1)
I confused photon with positron; not very Trekkie of me.
Re:no moving parts? (Score:1)
meaning of the term Packet switch. (Score:2)
The term packet switching commonly refers to statistical multiplexing of packets onto a lower-layer channel, say a single WDM channel or a SONET stream. For statistical multiplexing, you need buffers at input or output ports, because if two packets are destined for the same port at the same time, one of them has to be buffered while the other is being transmitted. Thus, the common usage of the term optical packet switching implies optical buffering and optical processing in the switch. In other words, it's like an optical implementation of a packet switch such as an IP router, or a cell switch such as an ATM switch.
In contrast, these "photonic switches" on the market, like those of Lynx, are circuit switches (like the TDM switches used to set up circuits in telephony). There is no buffering, no statistical multiplexing, and no intelligent forwarding. The switch may of course still have to do opto-electronic conversion and look at the "packet" or "frame" headers to determine the incoming and outgoing port numbers. But remember that this makes your switch dependent on the bit rate, clock timing, and protocol format.
In contrast, pure wavelength switching (WDM switching) is totally independent of bit rate, clock timing, and protocol format: you are switching light wavelengths, not packets. This is one of the major advantages of WDM networks, since their capacity can be dynamically upgraded in response to customer demand just by adding more wavelengths or increasing bandwidth per wavelength channel - you don't have to visit every node in your network core and upgrade all the equipment. This advantage is not available if you build your network out of optical packet switches like those of Lynx.
All said, wavelength switching and optical "packet-switching" are not necessarily competing technologies. The former is more suited to backbones and long haul networks..while the latter is more suited to local and metropolitan areas.
Note that I wrote "packet switching" in quotes here.
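The output-contention argument above (two packets for one port, one must wait) can be sketched as a toy model, assuming one packet per port per time slot and FIFO output queues:

```python
# Toy model of statistical multiplexing: contention for an output port
# forces queueing, which a pure circuit switch has no way to provide.

from collections import deque

def packet_switch(arrivals):
    """arrivals: list of (time, out_port) tuples.
    Returns (out_port, departure_time) pairs, delaying packets that
    contend for a busy output port."""
    queues = {}      # out_port -> deque of scheduled departure times
    departures = []
    for t, port in sorted(arrivals):
        q = queues.setdefault(port, deque())
        # Depart no earlier than arrival, and one slot after the previous
        # packet queued for this port.
        depart = max(t, q[-1] + 1) if q else t
        q.append(depart)
        departures.append((port, depart))
    return departures

# Two packets hit port 0 in the same slot: the second is delayed one slot,
# while the packet for port 1 sails through.
print(packet_switch([(0, 0), (0, 0), (0, 1)]))
# -> [(0, 0), (0, 1), (1, 0)]
```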