
Squeezing More Bandwidth Out of Fiber

EigenHombre writes "The New York Times reports on efforts underway to squeeze more bandwidth out of the fiber-optic connections that form the backbone of the Internet. With traffic doubling every two years, current networks are approaching saturation. The new technology from Alcatel-Lucent uses the polarization and phase of light (in addition to intensity) to double or quadruple current speeds."
  • Dark Fiber (Score:3, Interesting)

    by mehrotra.akash ( 1539473 ) on Sunday October 10, 2010 @02:03PM (#33853218)
    Isn't some large percentage of the fiber not being used anyway? Rather than change the equipment on the current fiber, why not use more of the current equipment and light up more fiber?
  • by s52d ( 1049172 ) on Sunday October 10, 2010 @02:27PM (#33853396)

    Assuming we have 5 THz of usable bandwidth (limited by today's fiber and optical amplifiers),
    and applying some technology known from radio for quite some time:

    Advanced modulation (1024-QAM): 10 bits per symbol
    Polarization diversity (or 2x2 MIMO): x2

    So, 100 Tbit/sec is an approximate reasonable limit for one fiber.
    There is some minor work left to transfer the technology from experimental labs to the field,
    but that is just a matter of time.

    Wavelength multiplexing just makes things a bit simpler:
    instead of one pair of A/D converters doing 100 Tbit/sec, we might use 1000 of them doing 100 Gbit/sec each.

    In 2010, speeds above 60 Tbit/sec were already demonstrated in the lab.

    Eh, will we soon be saying: "Life is too short to surf at 1 Gbit/sec"?
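
    A back-of-envelope version of that arithmetic, as a small Python sketch (the 5 THz band, 1024-QAM, and dual polarization are the assumptions stated above; it also assumes roughly one symbol per second per hertz of bandwidth):

        # Rough per-fiber capacity estimate, using the assumptions from the comment above
        usable_band_hz = 5e12       # ~5 THz, limited by today's fiber and amplifiers
        bits_per_symbol = 10        # 1024-QAM carries log2(1024) = 10 bits per symbol
        polarizations = 2           # polarization multiplexing / 2x2 MIMO

        total_bps = usable_band_hz * bits_per_symbol * polarizations
        print(f"rough per-fiber limit: {total_bps / 1e12:.0f} Tbit/s")    # -> 100 Tbit/s

        # DWDM splits this across many channels instead of one huge transceiver
        channels = 1000
        print(f"per channel: {total_bps / channels / 1e9:.0f} Gbit/s")    # -> 100 Gbit/s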

  • by ls671 ( 1122017 ) * on Sunday October 10, 2010 @02:39PM (#33853478) Homepage

    FTFS: > to double or quadruple current speeds.

    Of course, they must have been talking about capacity rather than speed: sending more information concurrently through the same pipe. Every bit of information would still travel at pretty much the same speed, obviously.

  • Re:Dark Fiber (Score:5, Interesting)

    by Anonymous Coward on Sunday October 10, 2010 @02:46PM (#33853522)

    Speaking as someone involved in all of this, the days of dark fiber have, by and large, gone away. Back in 2002/2003 there had been massive amounts of overbuilding due to the machinations of MCI and Enron, no joke. Telcos worldwide had been looking at MCI's and Enron's sales numbers, vouched for by Arthur Andersen, and had launched big builds of their own, figuring that they were going to get a piece of that pie as well. When the ball dropped and it turned out that MCI and Enron had been lying with the collusion of Arthur Andersen, the big telcos realized that the purported traffic wasn't there. They dried up capex (which killed off a number of telecom equipment suppliers and gave the rest a near-death experience) and hawked dark fiber to anybody who would bite.

    Those days have come and gone. Think back to what it was like in 2002: lots of us now have 10 Mb/s connections to the ISP, 3G phones, 4G phones, iPods, iPads, iPhones, Androids, IPTV, and it goes on and on. The core networks aren't quite crunched yet, but the growth this time is for real and has used up most if not all of the old dark fiber.

    Now telcos are going for more capacity, really, and it's a lot cheaper to put 88 channels of 100 Gb/s light on an existing single fiber than to fire up the ditch diggers. If you're working in the area, it's becoming fun again!

  • Re:Hmmm... (Score:5, Interesting)

    by garnser ( 1592501 ) on Sunday October 10, 2010 @02:48PM (#33853538) Homepage
    Actually, talking with one of the major Tier 1 providers, they only saw a 30% drop in total throughput over the first 24 hours after shutting down TPB, and it took about a month to recover. YouTube is probably a better candidate if we want to save some bandwidth http://www.wired.com/magazine/2010/08/ff_webrip/all/1 [wired.com] ;)
  • Probably not (Score:5, Interesting)

    by Sycraft-fu ( 314770 ) on Sunday October 10, 2010 @03:05PM (#33853628)

    You find that each time you go up an order of magnitude in bandwidth, the next order matters much less.

    100 bps is just painfully slow. Even doing the simplest of text over that is painful. You want to minimize characters at all costs (hence UNIX's extremely pithy commands).

    1 kbps is ok for straight text, but nothing else. When you start doing ANSI for formatting or something it quickly gets noticeably slow.

    10 kbps is enough that on an old text display, everything is pretty zippy. Even with formatting, colour, all that, it is pretty much realtime for interactivity. Anything above that is slow though; even simple markup slows it down a good bit. Hard to surf the modern Internet, just too much waiting.

    100 kbps will let you browse even pretty complicated markup in a short amount of time. Images also aren't horrible here, if they are small. Modern web pages take time to load, but usually 10 seconds or less. Browsing is perfectly doable, just a little sluggish.

    1 mbps is pretty good for browsing. You wait a bit on content heavy pages, but only maybe a second or two. Much of the web is sub-second loading times. This is also enough to stream SD video, with a bit of buffering. Nothing high quality, but you can watch Youtube. Large downloads, like say a 10GB video game, are hard though; it can take a day or more.

    10 mbps is the point at which currently you notice no real improvements. Web pages load effectively instantly, usually you are waiting on your browser to render them. You can stream video more or less instantly, and you've got enough to stream HD video (720p looks pretty good at 5mbps with H.264). Downloads aren't too big an issue. You can easily get even a massive game while you sleep.

    100 mbps is enough that downloads are easy to do in the background while you do something else, and have them ready in minutes. A 10GB game can be had in about 15 minutes. You can stream any kind of video you want, even multiple streams. At that speed you could stream 1080p 4:2:2 professional video like you'd put into an NLE, if any were available on the web.

    1 gbps is such that the network doesn't really exist for most things. You are now approaching the speed of magnetic media. Latency is a more significant problem than speed. Latency (and CPU use) aside, things tend to run as fast off a network server as they do on your local system. Downloads aren't an issue, you'll spend as much time waiting on your HDD as the data off the network in most cases.

    10 gbps is enough that you can do uncompressed video if you like. You could stream uncompressed 2560x1600 24-bit (no chroma subsampling) 60fps video and still have nearly half your connection left.

    If we get gig to the house, I mean truly have that kind of bandwidth available, I don't think we'll see a need for much more for a long, long time. At that speed, things just come down at amazing rates. You could download an entire 50GB BD movie during the first 6 minutes of viewing it. Things stream so fast over a gig that you can have the data more or less immediately to start watching/playing/whatever and the rest will be there in minutes. The latency you'd face to a server would be more of a problem.

    Even now, going much past 10mbps shows strong diminishing returns. I've gone from 10 to 12 to 20 to 50 in the span of about 2 years. Other than downloading games off of Steam going faster, I don't notice much. 50mbps isn't any faster for surfing the web; I'm already getting the data as fast as I need it. Of course usages will grow: while I could stream a single 1080p Blu-ray quality video (they are usually 30-40mbps streams, video and audio together), I couldn't do 2.

    However, at a gbps you are really looking at being able to do just about everything someone would want to do for any foreseeable future in realtime. I mean, you can find theoretical cases that could use more, but ask yourself how practical they really are.
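
    A quick sanity check on those figures, as a rough Python sketch (the file sizes, resolutions, and link rates are simply the ones quoted in the comment):

        # Transfer times and raw video bitrates for the examples above
        def seconds_to_transfer(size_bytes, link_bps):
            return size_bytes * 8 / link_bps

        print(f"50 GB BD movie at 1 Gbps:  {seconds_to_transfer(50e9, 1e9) / 60:.1f} min")    # ~6.7 min
        print(f"10 GB game at 100 Mbps:    {seconds_to_transfer(10e9, 100e6) / 60:.1f} min")  # ~13 min

        # Uncompressed 2560x1600, 24 bits per pixel, 60 fps
        raw_bps = 2560 * 1600 * 24 * 60
        print(f"uncompressed 2560x1600@60: {raw_bps / 1e9:.1f} Gbit/s of a 10 Gbit/s link")   # ~5.9 Gbit/s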

  • Re:Probably not (Score:5, Interesting)

    by Sycraft-fu ( 314770 ) on Sunday October 10, 2010 @04:33PM (#33854242)

    I was never arguing against the need in the core. Already 10 gbps links are in use just on the campus I work at, never mind a real large ISP or tier-1 provider. I'm talking about connections to the home. While many geeks get all starry-eyed about massive bandwidth, I think they've just never played with it enough to realize the limits of what is useful. 100mbit net connections, and actually even a good deal less, are more than plenty for everything you could want today. That'll grow some with time, but 20-50mbps is plenty for realtime HD streaming, instant surfing, fast downloads of large games, etc. It will be a good bit before we have any real amount of content that can use more than that (or really even use that well).

    Gigabit is just amazingly fast. You discover that copying between two modern 7200rpm drives you get in the 90-100MBytes/sec range if things are going well (like a sequential copy, no seeking). Doing the math, you find gigabit net is 125MBytes/sec, which means even with overhead 100MBytes/sec is no problem. At work I see no difference in speed copying between my internal drives and copying to our storage servers. It could all be local for all I can tell speed-wise.
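
    The conversion, spelled out as a trivial sketch (nothing here beyond 8 bits per byte):

        # Link rate vs. disk throughput, in the units used above
        gigabit_MBps = 1e9 / 8 / 1e6      # 1 Gbit/s -> 125 MBytes/s
        hdd_MBps = 100                    # sequential throughput of a 7200rpm drive, roughly
        print(gigabit_MBps, "MB/s over the wire vs", hdd_MBps, "MB/s off the disk")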

    That's why I think gig will not be "slow" for home surfing any time in the foreseeable future, and maybe ever. You are getting to the point that you can stream whatever you need, transfer things as fast as you need. Faster connections just wouldn't do anything for people.

    A connection to the home only really matters to a certain point. Once you can do everything you want with really no waiting, any more is just for show. At this point, that is somewhere in the 10-20mbps range. You just don't gain much past that, and I say this as someone who has a 50mbps connection (and actually because of the way they do business class I get more like 70-100mbps). That is also part of the reason there isn't so much push for faster net connections. If you can get 20mbps (and you'll find most cable and FIOS customers can, even some DSL customers) you can get enough. More isn't so useful.

  • Re:Oh...of course! (Score:5, Interesting)

    by Soft ( 266615 ) on Sunday October 10, 2010 @06:01PM (#33854718)

    > The article implies that it's easy to do, there was simply never a need before. I seriously doubt that it's a trivial thing to accomplish a four-fold increase in bandwidth on existing infrastructure.

    It's not, as you have pointed out. My interpretation is that, on the contrary, phase and polarization diversity (which I'll lump into "coherent" optical transmissions) are hard enough to do that you'll try all the other possibilities first: DWDM, high symbol rates, differential-phase modulation... All these avenues have been exploited, now, so we have to bite the bullet and go coherent. However, on coherent systems, some problems actually become simpler.

    > Polarization has a habit of wandering around in fiber.

    Quite so. Therefore, on a classical system, you use only polarization-independent devices. (Yes, erbium-doped amplifiers are essentially polarization-independent because you have many erbium ions in different configurations in the glass; Raman amplifiers are something else, but sending two pump beams along orthogonal polarizations should take care of it.)

    For a coherent system, you want to separate polarizations whose axes have turned any which way. Have a look at Wikipedia's article on optical hybrids [wikipedia.org], especially figure 1. You need four photoreceivers (two for each balanced detector), and you reconstruct the actual signal by digital signal processing. And that's just for a single polarization; double this for polarization diversity and use a 2x2 MIMO technique.

    That's why it's so expensive compared to a classical system: the coherent receiver is much more complex. Additionally, you need DSP and especially ADCs working at tens of gigasamples per second. This is only just now becoming possible.
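
    To make the 2x2 MIMO step concrete, here is a minimal toy sketch in Python/NumPy (the QPSK symbols, rotation angle, and noise level are invented for illustration; a real coherent receiver estimates the mixing blindly, e.g. with a constant-modulus algorithm, rather than inverting a known matrix):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000

        def qpsk(n):
            # Random QPSK symbols on the unit circle
            return np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))

        tx = np.vstack([qpsk(n), qpsk(n)])      # two streams: X and Y polarizations

        # The fiber mixes the two polarizations by some unknown rotation (Jones matrix)
        theta = rng.uniform(0, 2 * np.pi)
        J = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        noise = 0.01 * (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n)))
        rx = J @ tx + noise

        # 2x2 MIMO equalizer: undo the mixing (here with the known matrix for brevity)
        eq = np.linalg.inv(J) @ rx
        print("mean recovery error:", np.abs(eq - tx).mean())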

    > Phase-encoding has similar problems. Dispersion, the fact that different frequencies travel at different velocities (this is what leads to prisms separating white light into rainbows), will distort the pulse shape and shift the modulation envelope with respect to the phase. You either need very low dispersion fibers, and they already need to use the best available, or have some fancy processing at a receiver or repeater.

    Indeed. We are at the limit of the "best available" fibers (which are not zero-dispersion, actually, to alleviate nonlinear effects, but that's another story). Now we need the "fancy processing". And lo, when we use it, the dispersion problem becomes much more tractable! Currently, you need all these dispersion-compensating fibers every 100km, and they're not precise enough beyond 40Gbaud (thus 40Gbit/s for conventional systems). With coherent, dispersion is a purely linear channel characteristic, which you can correct straightforwardly in the spectral domain using FFTs. Then the limit becomes how much processing power you have at the receiver.
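
    A minimal sketch of that spectral-domain correction in Python/NumPy (the 17 ps/nm/km dispersion, 1550 nm wavelength, 100 km span, and 50 GSa/s sampling rate are generic textbook assumptions, not figures from the article):

        import numpy as np

        # Frequency-domain chromatic dispersion equalization for a coherent receiver
        c = 3e8                                  # speed of light, m/s
        D = 17e-6                                # 17 ps/(nm km) expressed in s/m^2
        lam = 1550e-9                            # carrier wavelength, m
        L = 100e3                                # 100 km span
        beta2 = -D * lam**2 / (2 * np.pi * c)    # group-velocity dispersion, ~ -21.7 ps^2/km

        fs = 50e9                                # receiver sampling rate, 50 GSa/s
        n = 4096
        rng = np.random.default_rng(0)
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # stand-in for the baseband field

        w = 2 * np.pi * np.fft.fftfreq(n, d=1/fs)
        H = np.exp(1j * beta2 / 2 * w**2 * L)    # all-pass dispersion transfer function

        dispersed = np.fft.ifft(np.fft.fft(x) * H)                    # what the fiber does
        equalized = np.fft.ifft(np.fft.fft(dispersed) * np.conj(H))   # undone with one FFT pass
        print("residual error:", np.abs(equalized - x).max())

    Since dispersion is linear and |H| = 1, the equalizer is just the conjugate filter; the practical limit is, as noted above, how many gigasamples per second of FFT processing the receiver DSP can sustain.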

    > The article downplays how hard these problems are. It implies that the engineers simply didn't think it through the first time around, but that's far from the case. A huge amount of money and effort goes into more efficiently encoding information in fiber. There probably is no drop-in solution, but very clever design in new repeaters and amplifiers might squeeze some bonus bandwidth into existing cable.

    Well, yes, much effort has been devoted to the problem. After all, how many laboratories are competing to break transmission speed records and be rewarded with the prestige of a postdeadline paper at conferences such as OFC and ECOC ;-)?

    As for how much bandwidth can be squeezed into fibers, keep in mind that current systems have an efficiency around 0.2 bit/s/Hz. There's at least an order of magnitude left for improvement; I don't have Essiambre's paper handy, but according to his simulations, I think the lower bound on capacity is around 7-8 bit/s/Hz.
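
    For scale, a short sketch of what those spectral efficiencies mean over a single amplified band (the ~5 THz figure is borrowed from earlier in the thread; whether the 7-8 bit/s/Hz estimate is per polarization is left open):

        # Total fiber capacity = usable optical band x spectral efficiency
        band_hz = 5e12   # ~5 THz of amplified band, as quoted earlier in the thread

        for label, se in [("today's systems, ~0.2 bit/s/Hz", 0.2),
                          ("Essiambre-style estimate, ~8 bit/s/Hz", 8.0)]:
            print(f"{label}: {band_hz * se / 1e12:.0f} Tbit/s per fiber")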

  • by Myrv ( 305480 ) on Sunday October 10, 2010 @07:01PM (#33855044)

    > Well, we'll just have to hope that their competitors will implement the technology

    Already have. Actually, Alcatel is pretty much playing catch-up with all this. Nortel introduced a 40Gb/s dual-polarization coherent terminal 4 years ago (despite many people, including Alcatel, saying it wasn't possible). Furthermore, Nortel Optical (now Ciena) already has a 100Gb/s version available. Alcatel is pretty late to this game.

