
Squeezing More Bandwidth Out of Fiber

EigenHombre writes "The New York Times reports on efforts underway to squeeze more bandwidth out of the fiber-optic connections that form the backbone of the Internet. With traffic doubling every two years, current networks are approaching saturation. The new technology from Alcatel-Lucent uses the polarization and phase of light (in addition to intensity) to double or quadruple current speeds."

Comments Filter:
  • Well, we'll just have to hope that their competitors will implement the technology, because the odds of Alcatel doing a proper job are pretty much zero....
    • Re:Hmmm... (Score:4, Insightful)

      by hedwards ( 940851 ) on Sunday October 10, 2010 @01:58PM (#33853170)
      Or figure out a way of getting cyber criminals off the net. The problem for quite some time has been that they'll suck up as much bandwidth as they can get, and since they don't pay for it, there's little reason to actually throttle back their operations.
      • Re:Hmmm... (Score:5, Funny)

        by nacturation ( 646836 ) * <nacturation AT gmail DOT com> on Sunday October 10, 2010 @02:00PM (#33853184) Journal

        Or figure out a way of getting cyber criminals off the net. The problem for quite some time has been that they'll suck up as much bandwidth as they can get, and since they don't pay for it, there's little reason to actually throttle back their operations.

        Shut down The Pirate Bay? :)

        • Re:Hmmm... (Score:5, Interesting)

          by garnser ( 1592501 ) on Sunday October 10, 2010 @02:48PM (#33853538) Homepage
          Actually, talking with one of the major Tier 1 providers, they saw only a 30% drop in total throughput over the first 24 hours after TPB was shut down, and it took about a month for traffic to recover. Youtube is probably a better candidate if we want to save some bandwidth http://www.wired.com/magazine/2010/08/ff_webrip/all/1 [wired.com] ;)
          • Re: (Score:3, Insightful)

            Hey! That's a good idea! Let's just shut down the main reasons people are using high-speed internet technologies: streaming audio and video. And shutting down BitTorrent obviously wouldn't hurt.

            Then we'll party like it's 1997!

            • Re: (Score:3, Interesting)

              Comment removed based on user account deletion
          • only saw a 30% drop in total throughput over the first 24 hours after shutting down TPB

            Wow... 30%? For only one (admittedly extremely popular) torrent site, that seems like a hell of a lot. I guess that explains why ISPs want to block torrent traffic so badly.

      • Re:Hmmm... (Score:4, Funny)

        by T Murphy ( 1054674 ) on Sunday October 10, 2010 @02:07PM (#33853252) Journal
        So when people go online, the ISP should pop up a EULA saying you can only use the internet for legal activity. Problem solved.
      • by ZDRuX ( 1010435 )
        Can you enlighten us as to how one may acquire copious amounts of bandwidth without paying for it?! Criminals are one thing, but I can't think of a single scenario where a criminal can use bandwidth yet NOBODY pays for it. Perhaps he piggy-backs on someone else's connection, but that connection is STILL BEING PAID FOR by some poor schmuck somewhere.
    • by Myrv ( 305480 ) on Sunday October 10, 2010 @07:01PM (#33855044)

      Well, we'll just have to hope that their competitors will implement the technology

      Already have. Actually Alcatel is pretty much playing catchup with all this. Nortel introduced a 40Gb/s dual polarization coherent terminal 4 years ago (despite many people, including Alcatel, saying it wasn't possible). Furthermore Nortel Optical (now Ciena) already has a 100Gb/s version available. Alcatel is pretty late to this game.

  • by Anonymous Coward on Sunday October 10, 2010 @01:59PM (#33853178)

    If you pump too much data into a fiber optic glass, it will begin to flow under the photonic pressure similar to glass in old church windows. If you look at them, the bottoms are much thicker than at the top. The reason? All that knowledge from God in Heaven up in the sky exerts a downward pressure on churches in particular warping their window glass... the same thing will happen to fiber optics. If you put too many libraries of congress through it, it will start to flow like toothpaste and your computer rooms will have a sticky floor and all your network switches will be gooey.

    Thanks
    Signed,
    Mr KnowItAll.
    (Happy Thanksgiving by the way)

    • Mod parent up! It's so unlikely, it MUST be true.

    • by Steeltoe ( 98226 )

      It's Thanksgiving?

      Agggh, smartass!

  • Dark Fiber (Score:3, Interesting)

    by mehrotra.akash ( 1539473 ) on Sunday October 10, 2010 @02:03PM (#33853218)
    Isn't some large percentage of the fiber not being used anyway? Rather than change the equipment on the current fiber, why not use more of the current equipment and light up more fiber?
    • Re:Dark Fiber (Score:5, Insightful)

      by John Hasler ( 414242 ) on Sunday October 10, 2010 @02:09PM (#33853260) Homepage

      Because the dark fiber is where it is, not where it is needed. One of the fibers that crosses my land runs from Spring Valley, Wisconsin to Elmwood, Wisconsin. Is that going to help with a bandwidth shortage between New York and Chicago?

    • Re:Dark Fiber (Score:5, Informative)

      by phantomcircuit ( 938963 ) on Sunday October 10, 2010 @02:34PM (#33853448) Homepage

      The shortage is almost entirely in the transcontinental links.

    • Re:Dark Fiber (Score:5, Interesting)

      by Anonymous Coward on Sunday October 10, 2010 @02:46PM (#33853522)

      Speaking as someone involved in all of this, the days of dark fiber have, by and large, gone away. Back in 2002/2003 there had been massive amounts of overbuilding due to the machinations of MCI and Enron, no joke. Telcos worldwide had been looking at MCI's and Enron's sales numbers, vouched for by Arthur Andersen, and had launched big builds of their own, figuring that they were going to get a piece of that pie as well. When the ball dropped and it turned out that MCI and Enron had been lying with the collusion of Arthur Andersen, the big telcos realized that the purported traffic wasn't there. They dried up capex (which killed off a number of telecom equipment suppliers and gave the rest a near-death experience) and hawked dark fiber to anybody who would bite.

      Those days have come and gone. Think back to what it was like in 2002: lots of us now have 10 Mb/s connections to the ISP, 3G phones, 4G phones, iPods, iPads, iPhones, Androids, IPTV, and it goes on and on. The core networks aren't crunched, quite, yet, but the growth this time is for real and has used up most if not all of the old dark fiber.

      Now telcos are really going for more capacity, and it's a lot cheaper to put 88 channels of 100 Gb/s light on an existing single fiber than to fire up the ditch diggers. If you're working in the area, it's becoming fun again!

      • by jabuzz ( 182671 )

        Right, so seven years ago lots of fibre was put in; initially it was not all used, and now it is. Hmm, given that a fibre link has a lifetime greater than seven years, any notion of overbuilding has in fact proved to be a load of rubbish.

        In fact I could argue that, given that it is now almost all being used, far from there being massive overbuilding of capacity, there was in fact massive underbuilding of capacity.

  • by s52d ( 1049172 ) on Sunday October 10, 2010 @02:27PM (#33853396)

    Assuming we have 5 THz of usable bandwidth (limited by today's fiber and optical amplifiers),
    and applying some technology known from radio for quite some time:

    Advanced modulation (1024-QAM): 10 bits per symbol
    Polarization diversity (or 2x2 MIMO): times 2

    So, 100 Tbit/sec is an approximate reasonable limit for one fiber.
    There is some minor work to transfer the technology from experimental labs to the field,
    but that is just a matter of time.

    Wavelength multiplexing just makes things a bit simpler:
    instead of one pair of A/D converters doing 100 Tbit/sec, we might use 1000 of them doing 100 Gbit/sec each.

    In 2010, speeds above 60 Tbit/sec were already demonstrated in the lab.

    Eh, will we say soon: "Life is too short to surf using 1 Gbit/sec"?
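A quick back-of-the-envelope check of that estimate, as a sketch (the 5 THz window, the 10 bits per symbol for 1024-QAM, the polarization factor of 2, and the 1000-channel split are all the parent comment's numbers):

```python
# Rough single-fiber capacity estimate using the parent comment's numbers.
usable_bandwidth_hz = 5e12     # ~5 THz of usable amplifier window (comment's assumption)
bits_per_symbol = 10           # 1024-QAM carries log2(1024) = 10 bits per symbol
polarizations = 2              # dual polarization (2x2 MIMO) doubles throughput

# At roughly one symbol per second per hertz, capacity scales with optical bandwidth:
total_bps = usable_bandwidth_hz * bits_per_symbol * polarizations
print(f"Approximate single-fiber limit: {total_bps / 1e12:.0f} Tbit/s")      # 100 Tbit/s

# Wavelength-division multiplexing splits the same total across many channels:
channels = 1000
print(f"Per-channel rate with {channels} WDM channels: "
      f"{total_bps / channels / 1e9:.0f} Gbit/s")                            # 100 Gbit/s
```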

    • Won't increasing the number of bits per symbol as you suggest require a higher SNR, thus meaning amplifiers have to be more closely spaced? Given the desire is to get more out of the existing infrastructure, that might be a problem.

      • by Soft ( 266615 )

        Won't increasing the number of bits per symbol as you suggest require a higher SNR, thus meaning amplifiers have to be more closely spaced?

        Good point. And even having closely-spaced amplifiers may not work, as optical amplifiers have fundamental limitations in terms of noise added (OSNR actually decreases by at least 3dB for each high-gain amplifier).

        At least, that's for classical on-off keying (1 bit per symbol, using light intensity only). Coherent transmission might not have the same limit; I'd
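To make the SNR point concrete: the Shannon bound gives the minimum SNR needed for a given spectral efficiency, and a chain of amplified spans accumulates noise. A minimal sketch; the single-span OSNR and the span counts are chosen purely for illustration:

```python
import math

# Minimum SNR (Shannon bound) needed to carry a given spectral efficiency, per polarization.
def min_snr_db(bits_per_s_per_hz):
    return 10 * math.log10(2 ** bits_per_s_per_hz - 1)

for bits in (1, 2, 4, 10):               # roughly OOK, QPSK, 16-QAM, 1024-QAM efficiencies
    print(f"{bits:2d} bit/s/Hz needs at least {min_snr_db(bits):5.1f} dB SNR")

# A chain of N identical amplified spans accumulates ASE noise roughly linearly,
# so the delivered OSNR falls about 10*log10(N) dB below the single-span value.
single_span_osnr_db = 30.0               # illustrative single-span OSNR (assumption)
for spans in (1, 10, 50, 100):
    print(f"{spans:3d} spans -> ~{single_span_osnr_db - 10 * math.log10(spans):.1f} dB OSNR")
```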

    • Re: (Score:3, Insightful)

      by DigitAl56K ( 805623 )

      Eh, will we say soon: "Life is too short to surf using 1 Gbit/sec"?

      Seriously... damn Flash websites...

    • Probably not (Score:5, Interesting)

      by Sycraft-fu ( 314770 ) on Sunday October 10, 2010 @03:05PM (#33853628)

      You find that each time you go up an order of magnitude in bandwidth, the next order matters much less.

      100 bps is just painfully slow. Even doing the simplest of text over that is painful. You want to minimize characters at all costs (hence UNIX's extremely pithy commands).

      1 kbps is ok for straight text, but nothing else. When you start doing ANSI for formatting or something it quickly gets noticeably slow.

      10 kbps is enough that on an old text display, everything is pretty zippy. Even with formatting, colour, all that, it is pretty much realtime for interactivity. Anything above that is slow though. Even simple markup slows it down a good bit. Hard to surf the modern Internet, just too much waiting.

      100 kbps will let you browse even pretty complicated markup in a short amount of time. Images also aren't horrible here, if they are small. Modern web pages take time to load, but usually 10 seconds or less. Browsing is perfectly doable, just a little sluggish.

      1 mbps is pretty good for browsing. You wait a bit on content-heavy pages, but only maybe a second or two. Much of the web is sub-second loading times. This is also enough to stream SD video, with a bit of buffering. Nothing high quality, but you can watch Youtube. Large downloads, like say a 10GB video game, are hard though; they can take a day or more.

      10 mbps is the point at which currently you notice no real improvements. Web pages load effectively instantly, usually you are waiting on your browser to render them. You can stream video more or less instantly, and you've got enough to stream HD video (720p looks pretty good at 5mbps with H.264). Downloads aren't too big an issue. You can easily get even a massive game while you sleep.

      100 mbps is enough that downloads are easy to do in the background while you do something else, and have them ready in minutes. A 10GB game can be had in about 15 minutes. You can stream any kind of video you want, even multiple streams. At that speed you could stream 1080p 4:2:2 professional video like you'd put into an NLE, if any were available on the web.

      1 gbps is such that the network doesn't really exist for most things. You are now approaching the speed of magnetic media. Latency is a more significant problem than speed. Latency (and CPU use) aside, things tend to run as fast off a network server as they do on your local system. Downloads aren't an issue, you'll spend as much time waiting on your HDD as the data off the network in most cases.

      10 gbps is enough that you can do uncompressed video if you like. You could stream uncompressed 2560x1600 24-bit (no chroma subsampling) 60fps video and still have nearly half your connection left.

      If we get gig to the house, I mean truly have that kind of bandwidth available, I don't think we'll see a need for much more for a long, long time. At that speed, things just come down at amazing rates. You could download an entire 50GB BD movie during the first 6 minutes of viewing it. Things stream so fast over a gig that you can have the data more or less immediately to start watching/playing/whatever and the rest will be there in minutes. The latency you'd face to a server would be more of a problem.

      Even now going much past 10mbps shows strong diminishing returns. I've gone from 10 to 12 to 20 to 50 in the span of about 2 years. Other than downloading games off of Steam going faster, I don't notice much. 50mbps isn't any faster for surfing the web. I'm already getting the data as fast as I need it. Of course usages will grow; while I could stream a single 1080p blu-ray quality video (they are usually 30-40mbps streams, video and audio together), I couldn't do 2.

      However at a gbps, you are really looking at being able to do just about everything someone wants to, for any foreseeable future, in realtime. I mean, you can find theoretical cases that could use more, but ask yourself how practical they really are.
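The arithmetic behind a few of those figures, as a quick sketch (the 10 GB game, the 2560x1600 60 fps uncompressed example, and the 50 GB Blu-ray are the cases mentioned above):

```python
# Download times for a 10 GB game at several of the tiers discussed above.
game_bytes = 10e9
for mbps in (1, 10, 100, 1000):
    minutes = game_bytes * 8 / (mbps * 1e6) / 60
    print(f"{mbps:5d} Mbit/s -> 10 GB in {minutes:8.1f} minutes")

# Uncompressed 2560x1600, 24 bits per pixel, 60 fps (the 10 Gbit/s example).
uncompressed_bps = 2560 * 1600 * 24 * 60
print(f"Uncompressed 2560x1600@60: {uncompressed_bps / 1e9:.1f} Gbit/s "
      f"({uncompressed_bps / 10e9:.0%} of a 10 Gbit/s link)")

# A 50 GB Blu-ray pulled over a 1 Gbit/s link.
print(f"50 GB over 1 Gbit/s: {50e9 * 8 / 1e9 / 60:.1f} minutes")
```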

      • by Soft ( 266615 )

        10 gbps is enough that you can do uncompressed video if you like. [...] If we get gig to the house, I mean truly have that kind of bandwidth available, I don't think we'll see a need for much more for a long, long time.

        Yes and no; I'm sure we'll invent new ways of wast^Wusing bandwidth. (3DTV, telepresence, video editing on remote storage, cloud computing... What's next?)

        But the problem no longer lies in the house. Not everybody has a 100Mbit/s Internet access yet, but that's coming in the next fe

        • Re:Probably not (Score:5, Interesting)

          by Sycraft-fu ( 314770 ) on Sunday October 10, 2010 @04:33PM (#33854242)

          I was never arguing against the need in the core. Already 10 gbps links are in use just on the campus I work at, never mind a real large ISP or tier-1 provider. I'm talking about the home. While many geeks get all starry-eyed about massive bandwidth, I think they've just never played with it enough to realize the limits of what is useful. 100mbit net connections, and actually even a good deal less, are more than plenty for everything you could want today. That'll grow some with time, but 20-50mbps is plenty for realtime HD streaming, instant surfing, fast downloads of large games, etc. It'll be a good bit before we have any real amount of content that can use more than that (or really even use that well).

          Gigabit is just amazingly fast. You discover that copying between two modern 7200rpm drives you get in the 90-100MBytes/sec range if things are going well (like a sequential copy, no seeking). Doing the math, you find gigabit net is 125MBytes/sec, which means even with overhead 100MBytes/sec is no problem. At work I see no difference in speed copying between my internal drives and copying to our storage servers. It could all be local for all I can tell speed-wise.

          That's why I think gig will not be "slow" for home surfing any time in the foreseeable future, and maybe ever. You are getting to the point that you can stream whatever you need, transfer things as fast as you need. Faster connections just wouldn't do anything for people.

          A connection to the home only really matters to a certain point. Once you can do everything you want with really no waiting, any more is just for show. At this point, that is somewhere in the 10-20mbps range. You just don't gain much past that, and I say this as someone who has a 50mbps connection (and actually because of the way they do business class I get more like 70-100mbps). That is also part of the reason there isn't so much push for faster net connections. If you can get 20mbps (and you'll find most cable and FIOS customers can, even some DSL customers) you can get enough. More isn't so useful.

          • And I think it would be nice to have clunky, electro-mechanical/magnetic devices like hard drives out on the cloud where they can be managed by someone whose only job is to make storage work.

            I've played around with Rackspace Cloud, and it's just nice not to have to worry about specific hard drives failing.

      • by Kjella ( 173770 )

        Yup... I used to be on around 2 Mbps ADSL, it was just painful. Now I'm at 25 Mbps cable, and I've realized I don't really need more. With uTorrent+RSS most things I want are downloaded before I even know it. With 5 Mbps upload I have no trouble uploading anything to friends or keeping my ratio. Now can I pretty please soon pay for a service that is half as good as the pirates give me?

        • For games, Steam and Impulse will give you better service than the pirates do. Other than when their servers are heavily loaded due to a free weekend or something, I get games from them at 5MBytes+/sec. Starts fast, stays fast. Of course it also has the advantage of always being what I asked for and all that, and always being available for redownload.

          For movies and so on, sorry got nothing. There are some good streaming services, but they only do HD to a Blu-ray player, they can't trust your evil computer, a

      • "I've gone from 10 to 12 to 20 to 50 in the span of about 2 years."

        fuck you.

        • So you are saying I probably shouldn't tell you that because of how they handle business class accounts, I actually get more than that most of the time? :D

          http://www.speedtest.net/result/985434853.png [speedtest.net]

          That is my actual result from a few minutes ago. It is fun for bragging rights, I'll say that. However truth be told other than Impulse and Steam downloads I notice no difference over 20mbps. Personally I'd take 20/20 if it were offered instead of 50/5. However currently they use 4 downstream channels with DOCS

      • by Skapare ( 16644 )

        All of that applies to "last mile" connections to home. For a business, you have to multiply many of those needs by as many people using them at one time. Then there may be services going in the reverse direction. For an ISP, multiply by the number of customers (divided by the oversell factor). For core infrastructure ISPs, more than 100 Tbps is still going to be needed to service a billion homes with 1gbps and a million businesses with 10gbps.
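Plugging the parent's numbers into the back of an envelope, with a purely illustrative oversell (contention) factor, shows how far past 100 Tbps the core total lands:

```python
# Aggregate core demand from the parent's figures, with an assumed oversell factor.
homes, home_bps = 1e9, 1e9            # a billion homes at 1 Gbit/s
businesses, biz_bps = 1e6, 10e9       # a million businesses at 10 Gbit/s
oversell = 1000                       # illustrative contention ratio (not from the comment)

core_bps = (homes * home_bps + businesses * biz_bps) / oversell
print(f"Core capacity needed: ~{core_bps / 1e12:,.0f} Tbit/s")   # ~1,010 Tbit/s even at 1000:1
```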

        And I'll still need to compress my 5120x2160p120 videos.

        • For a business, you have to multiply many of those needs by as many people using them at one time.

          For home too. I have 1.5mb DSL (very far out, bad S:N ratio if I up it to the 3mb service). Just me and the wife surfing and doing email, no problem. ISO downloads and Windows updates while we sleep, no problem. However, if we use Netflix on the Wii and I want to surf or perhaps listen to some streaming audio, the connection gets very bogged down, Netflix pauses every so often to rebuffer, etc.

          I'd be happy to u

      • by Twinbee ( 767046 )

        Perhaps you're forgetting that we'll also have 20,000 x 10,000 resolutions by then, and maybe even 3D. In particular, loading a voxelized 100,000^3 3D map will still dig into those orders of magnitude for a little bit longer.

      • by lennier ( 44736 )

        However at a gbps, you are really looking at being able to do just about everything someone wants to for any foreseeable future in realtime.

        What about holographic video [mit.edu]?

        Add that to a few dozen banner ads and we could spam up a gigabit link pretty fast.

      • If we get gig to the house, I mean truly have that kind of bandwidth available, I don't think we'll see a need for much more for a long, long time.

        Once speeds like these become available, once the opportunity arises and the market grows, we will find applications for it. Trust me, we always do.

      • 100 mbps is enough that downloads are easy to do in the background while you do something else, and have them ready in minutes. A 10GB game can be had in about 15 minutes. You can stream any kind of video you want, even multiple streams. At that speed you could stream 1080p 4:2:2 professional video like you'd put in to a NLE if there were any available on the web.

        Something like single-stream ProRes 422 is going to be a lot more than 100 Mbit unless you are using a proxy mode. If "professional" means 10 bit

      • by mcrbids ( 148650 )

        I almost agree with you. As you increase the amount of bits, the value-per-bit drops, and I think that's what you are saying. But the value of the bits DOES increase linearly as the number of bits increases exponentially.

        As a professional hosting provider, our upstream is 100 Mbps. NOT the consumer 100 Mbps, but the fully-redundant, backed up, monitored 24x7, 4-hops-from-MAEWest kind of 100 Mbps. For database-driven, graphics-poor applications such as what we provide, it's amazing just how much you can do w

    • Any word on the use of orbital angular momentum for actual bandwidth at this point?
  • I'm sure this will make my 768kbps down and 512kbps up seem so much snappier.
  • This was under development before the dot com bust because back then we were going to run out of bandwidth within 3 years.
  • I'm down with polarity, but what is a phase of light and what does it mean to use 4 of them? Google isn't giving me any love on that.
    • by Soft ( 266615 )

      Polarization, you mean? (As in the direction along which the electrical field vibrates?)

      For phase modulation, try Wikipedia [wikipedia.org]. I like the diagram.

      The problem in optical transmissions, unlike radio or electricity, is that you can't directly access the phase of the light. All you can do is to have two beams of light interfere together (just like with sound: if you hear two tones, very closely spaced, you will hear a low-frequency "beat" which pulses at a frequency equal to the difference between the fre

      • by Krahar ( 1655029 )
        Thanks. Usually Google is excellent at showing relevant Wikipedia pages. Don't know why it fails in this instance. There must be some kind of reference compared to which phase is measured. I wonder if that is done by also sending a single reference on a different wavelength or if it is done by very fine-tuned clocks and knowledge of transmission time. Seems like a reference phase would be by far the easiest and most robust solution.
        • by Soft ( 266615 )

          Seems like a reference phase would be by far the easiest and most robust solution.

          It has been proposed for some modulation types. However, this halves the efficiency: you use one wavelength for each reference beam, but you can't use the same reference for all the other wavelengths, due to the fact that these wavelengths travel down the fiber at different speeds. (This is called "chromatic dispersion" and can be a major pain in the neck at high bit rates.) So there would be a delay between reference

    • It's probably two of them. Two degrees of freedom from polarization, two from phase.

      If your information is carried on an amplitude modulated sine wave, you recover it by demodulating with another sine wave. Demodulating with an orthogonal function, cosine, yields nothing. So you can pack a second carrier with a cosine phase in there and then demodulate each with the correct phase to extract the modulation signal. I don't know much about how much cross-talk there would be but probably, in theory, as long as
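A small numerical sketch of that quadrature idea: two independent values ride on the same carrier frequency, one on the cosine and one on the sine component, and each comes back cleanly when demodulated with the matching phase. The sample rate, carrier frequency, and single-symbol simplification are arbitrary choices for the demo:

```python
import numpy as np

# Two independent values on the same carrier: one on the cosine (I) and one on the
# sine (Q) component.  Sample rate, carrier frequency, and duration are arbitrary.
fs, fc, n = 1_000_000, 50_000, 10_000        # 1 MHz sampling, 50 kHz carrier, 10 ms
t = np.arange(n) / fs
rng = np.random.default_rng(0)
i_sym = rng.choice([-1.0, 1.0])              # one symbol per stream keeps the demo short
q_sym = rng.choice([-1.0, 1.0])

passband = i_sym * np.cos(2 * np.pi * fc * t) + q_sym * np.sin(2 * np.pi * fc * t)

# Demodulate by multiplying with the matching carrier and averaging (a crude low-pass):
# the cross terms average to zero because sine and cosine are orthogonal.
i_hat = 2 * np.mean(passband * np.cos(2 * np.pi * fc * t))
q_hat = 2 * np.mean(passband * np.sin(2 * np.pi * fc * t))
print(f"sent I={i_sym:+.0f} Q={q_sym:+.0f}, recovered I={i_hat:+.2f} Q={q_hat:+.2f}")
```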

  • by Bureaucromancer ( 1303477 ) on Sunday October 10, 2010 @03:53PM (#33854000)
    Nice work they're doing, but nothing is going to get us around the need for new infrastructure. As much as the telcos are trying to deny it, we are going to need another major round of long-distance fiber installations before the global network anything like stabilizes. Actually, I would draw a comparison to the British railway system in the 19th century (flawed, but some interesting points in it), particularly with reference to the boom-bust cycle (including apparent over-construction early on that actually goes over capacity quite soon, and eventual REAL massive over-investment and big collapses and consolidation). Not really sure if we want that outcome, but it seems like a reasonable parallel in some ways; on the one hand massive overbuilding would be nice for users, for a while at least, but as it is we need to be breaking telecom monopolies, not creating more through collapses and consolidation...
    • It makes sense both in economic and practical terms to squeeze as much out of the fiber we have before laying new stuff. Not only does that get more bandwidth easier and cheaper now, but it means that when new stuff is laid it'll last longer. It is real expensive to lay a transatlantic cable (and those are the ones that are the most full). The more we can get out of one, the better. I'd much rather we research the technology to get, say, 100tbits per fiber out of the cable and need a couple thousand fibers spanning a few

      • I'm all for efficiency gains, and the work being done in TFA is great; we just shouldn't try to convince ourselves that we don't need further infrastructure, and pretty soon at that. Bear in mind that the strategy of a lot of the telcos lately seems to have been declaring that increased usage of the network, rather than being a sign of technological maturity, is 'abusive' and needs to be stopped. For that matter, the fact they make these claims is a pretty good sign that there isn't enough competition, seeing a
        • I think maybe you read too much Slashdot and don't look around at what is happening overall with Internet providers. For one, bandwidth per dollar has gone way up in many cases. This is true for home connections, and for business. Only 10 years ago I paid about 20% more (not even counting inflation) for 640k/640k than I currently do for 50m/5m. That's some major increase in bandwidth without an increase in cost. This is true for high end business lines as well. I realize there are cases where it isn't, howe

          • Truth be told, things are for the most part a lot better in the US; take a look at some of the rates that Rogers and Bell try to foist on us in Canada. You end up with numbers like $70 for 80GB or $100 for 175, with $10/GB overage, being competitive. Things are getting better with the resellers like TekSavvy, but the network owners are putting significant effort into killing them, and won't give them access to the full network speeds (although this seems likely to be changed by the CRTC eventually).
  • People are still getting video feeds by HTTP. Multicast was supposed to save on bandwidth for things like IP TV. But it still isn't happening on any real scale. A lot of core infrastructure bandwidth could be reduced by making multicast fully functional and using it. And, of course, we need to do that in a way that precludes some intermediate business deciding what we can, or cannot, receive by multicast. Oh, and how many multicast groups are there? And how do I get one?

    • It's happening on the backend, and it's god damn huge. It's just hidden behind IPTV "cable boxes". If you're watching television on Comcast's cable plant, you're using multicast.

      • by Skapare ( 16644 )

        But Comcast is a video gatekeeper. If they are using IPTV and multicast, then they have far more ability to add channel choices than they are letting on.

        My interest is in deploying it world-wide where there is no gatekeeper and a choice of as many channels as there are people wanting to deliver video. Specifically, the sender and receiver would not be the same entity (it is the same entity when Comcast controls your set top box, and decides what programming networks get to send video out to those set top

        • Not going to happen. If you're either a channel or content provider, you're still going to want to get paid for that content being delivered. Multicast doesn't have that sort of authentication built into the protocol, unless you're able to multicast an encrypted stream, and use another protocol to handle a key exchange (after the video stream access is purchased or authorized because of a monthly subscription a la Netflix).

          It would work for things like NASA TV that are free, and I even think NASA already ha

    • The problem is timeshifting. Thousands of people are watching Hulu, but how many are watching the same content and started it at around the same time? Without any advanced caching, multicast would only help those streams that happen to line up.

      I agree with your point, though. Where is the Internet-wide multicast?

      Kind of an odd scenario where the last mile is ready for multicast naturally because of the shared medium (cable & wireless) but the core is not. I know the solution is not trivial, too.

      John

      • by Skapare ( 16644 )

        One can simply set up a box to pre-record the program when it is transmitted by multicast. A program can afterwards go request missing blocks of the program to eliminate those glitches that the live viewers had to endure. If you want to watch a particular show each week or day, you have it set to do that. It then joins the multicast group a little before it starts and records what it gets. If the sender DRMs it, then you have to view it through some process that can decrypt it (and maybe even gets the ke

        • That's an interesting concept. I had never thought of a DVR/multicast combo. It would require an always-on device and some notion of what you wanted to record. Basically taking an Apple/Google TV, adding a program guide, and setting it to record specific multicast addresses at certain times. Add DRM and a monthly charge and keep the commercials, and the networks may just buy off on it.

    • it's in IPv6. :P
      • by Skapare ( 16644 )

        IPv6 is fine. IPv4 has it, too. But IPv6 has more of it. Now how do I get a multicast address?
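There is no general registry that will hand out a group for the public Internet, which is much of the problem; within a single organization, though, you can pick a group from the administratively scoped 239.0.0.0/8 range and have receivers join it. A minimal IPv4 sketch, with an arbitrary group address and port:

```python
import socket
import struct

# Joining an administratively scoped IPv4 multicast group (239.0.0.0/8 is the
# "organization-local" range, so this address is only an example, not an assignment).
GROUP, PORT = "239.1.2.3", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP tells the kernel (and, via IGMP, the local routers) that this
# host wants traffic sent to GROUP; 0.0.0.0 lets the OS pick the interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(2048)   # blocks until someone multicasts to the group
print(f"{len(data)} bytes from {sender}")
```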

    • by ekhben ( 628371 )

      Multicast is common - at constrained scopes. It's used for router communication (OSPF, IS-IS, RIP2, GLBP), rendezvous/zero-conf (mDNS), and as some other commenters have noted, also for single-carrier IP TV, depending on the carrier. There's a few education and research networks that use it, too.

      Global multicast is non-existent, because it's hard to charge for.

  • With traffic doubling every two years, current networks are approaching saturation.

    That's OK, we have been following Butters' Law for quite some time, just as Moore's Law follows transistor density. The cost of sending data halves every 9 months, and data networks double in speed every 9 months. That stays well ahead of traffic doubling every two years.
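Compounding the two rates shows why, if both trends held exactly, capacity would stay comfortably ahead of traffic (the 9-month and 24-month doubling periods are the ones cited above):

```python
# Compound growth: capacity doubling every 9 months (Butters' Law, as cited above)
# versus traffic doubling every 24 months (from the summary).
years = 10
capacity_growth = 2 ** (12 * years / 9)
traffic_growth = 2 ** (12 * years / 24)
print(f"After {years} years: capacity x{capacity_growth:,.0f}, traffic x{traffic_growth:,.0f}")
# -> capacity grows ~10,000x while traffic grows ~32x over the same decade
```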

    • by Skapare ( 16644 )

      But there are also laws of physics that apply to fiber. And after all the strands already in place are used up, they have to put in more. THAT is very expensive (dig, dig, dig). My company is trying to get fiber in from an ISP right across the street from us, and the installation costs are going to be as high as $12,000. Imagine the cost of putting in more fiber from Boston to New York to Philadelphia to Baltimore to Washington to Richmond to Charlotte to Atlanta. And that's just one run.

  • Oh...of course! (Score:5, Informative)

    by Interoperable ( 1651953 ) on Sunday October 10, 2010 @05:00PM (#33854382)

    The article implies that it's easy to do and that there was simply never a need before. I seriously doubt that it's a trivial thing to accomplish a four-fold increase in bandwidth on existing infrastructure.

    Polarization has a habit of wandering around in fiber. Temperature and physical movement of the fiber will change how the polarization is altered as it passes through the fiber. In a trans-oceanic fiber the effect could be dramatic; the polarization would likely wander around with quite a high frequency. This would need to be corrected for by periodically sending reference pulses through the fiber so that the receivers could be re-calibrated. Not too difficult, but any inaccessible repeaters would still need to be retrofitted. I also don't know if in-fiber amplifiers are polarization-maintaining. They rely on a scattering process that might not be.

    Phase-encoding has similar problems. Dispersion, the fact that different frequencies travel at different velocities (this leads to prisms separating white light into rainbows), will distort the pulse shape and shift the modulation envelope with respect to the phase. You either need very low dispersion fibers, and they already need to use the best available, or have some fancy processing at a receiver or repeater. Adding extra phase encoding simply implies that the current encoding method (probably straight-up, on-off encoding) is inefficient. That's not necessarily lack of foresight, that's because dense encoding is probably really hard to do in a dispersive medium like fiber. Again, it's not a trivial drop-in replacement.

    The article downplays how hard these problems are. It implies that the engineers simply didn't think it through the first time around, but that's far from the case. A huge amount of money and effort goes into encoding information in fiber more efficiently. There probably is no drop-in solution, but very clever design in new repeaters and amplifiers might squeeze some bonus bandwidth into existing cable.
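As a toy illustration of why polarization wandering is recoverable in a coherent receiver: the fiber's (slowly drifting) rotation mixes the two polarization tributaries, and applying the inverse of that 2x2 matrix separates them again; real receivers estimate the matrix adaptively rather than knowing it. The rotation angle and symbol count here are arbitrary:

```python
import numpy as np

# Two independent symbol streams launched on orthogonal polarizations (X and Y).
rng = np.random.default_rng(1)
tx = rng.choice([-1.0, 1.0], size=(2, 8))       # rows: X and Y tributaries

# The fiber rotates the polarization state by some slowly drifting angle.
theta = 0.7                                      # arbitrary rotation, radians
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
rx = rotation @ tx                               # each received output mixes X and Y

# A coherent receiver estimates this 2x2 channel matrix (adaptively, in practice)
# and inverts it -- the essence of polarization demultiplexing / 2x2 MIMO.
recovered = np.linalg.inv(rotation) @ rx
print(np.allclose(recovered, tx))                # True: both tributaries separated
```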

    • Re:Oh...of course! (Score:5, Interesting)

      by Soft ( 266615 ) on Sunday October 10, 2010 @06:01PM (#33854718)

      The article implies that it's easy to do, there was simply never a need before. I seriously doubt that it's a trivial thing to accomplish a four-fold increase in bandwidth on existing infrastructure.

      It's not, as you have pointed out. My interpretation is that, on the contrary, phase and polarization diversity (which I'll lump into "coherent" optical transmissions) are hard enough to do that you'll try all the other possibilities first: DWDM, high symbol rates, differential-phase modulation... All these avenues have been exploited, now, so we have to bite the bullet and go coherent. However, on coherent systems, some problems actually become simpler.

      Polarization has a habit of wandering around in fiber.

      Quite so. Therefore, on a classical system, you use only polarization-independent devices. (Yes, erbium-doped amplifiers are essentially polarization-independent because you have many erbium ions in different configurations in the glass; Raman amplifiers are something else, but sending two pump beams along orthogonal polarizations should take care of it.)

      For a coherent system, you want to separate polarizations whose axes have turned any which way. Have a look at Wikipedia's article on optical hybrids [wikipedia.org], especially figure 1. You need four photoreceivers (two for each balanced detector), and you reconstruct the actual signal by digital signal processing. And that's just for a single polarization; double this for polarization diversity and use a 2x2 MIMO technique.

      That's why it's so expensive compared to a classical system: the coherent receiver is much more complex. Additionally, you need DSP and especially ADCs working at tens of gigasamples per second. This is only just now becoming possible.

      Phase-encoding has similar problems. Dispersion, the fact that different frequencies travel at different velocities (this leads to prisms separating white light into rainbows), will distort the pulse shape and shift the modulation envelope with respect to the phase. You either need very low dispersion fibers, and they already need to use the best available, or have some fancy processing at a receiver or repeater.

      Indeed. We are at the limit of the "best available" fibers (which are not zero-dispersion, actually, to alleviate nonlinear effects, but that's another story). Now we need the "fancy processing". And lo, when we use it, the dispersion problem becomes much more tractable! Currently, you need all these dispersion-compensating fibers every 100km, and they're not precise enough beyond 40Gbaud (thus 40Gbit/s for conventional systems). With coherent, dispersion is a purely linear channel characteristic, which you can correct straightforwardly in the spectral domain using FFTs. Then the limit becomes how much processing power you have at the receiver.

      The article downplays how hard these problems are. It implies that the engineers simply didn't think it through the first time around, but that's far from the case. A huge amount of money and effort goes into more efficiently encoding information in fiber. There probably is no drop in solution, but very clever design in new repeaters and amplifiers might squeeze some bonus bandwidth into existing cable.

      Well, yes, much effort has been devoted to the problem. After all, how many laboratories are competing to break transmission speed records and be rewarded with the prestige of a postdeadline paper at conferences such as OFC and ECOC ;-)?

      As for how much bandwidth can be squeezed into fibers, keep in mind that current systems have an efficiency around 0.2bit/s/Hz. There's at least an order of magnitude left for improvement; I don't have Essiambre's paper handy, but according to his simulations, I think the minimum bound for capacity is around 7-8bit/s/Hz.
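A toy version of the frequency-domain dispersion compensation described above: group-velocity dispersion appears as a quadratic phase across the signal spectrum, so multiplying by the conjugate phase undoes it exactly. The symbol rate and the accumulated dispersion value (roughly 1000 km of standard single-mode fiber) are illustrative assumptions:

```python
import numpy as np

# Frequency-domain chromatic dispersion compensation, toy version.
rng = np.random.default_rng(2)
n, baud = 4096, 28e9                     # samples and symbol rate (assumed)
bits = rng.integers(0, 2, size=(2, n))
symbols = (2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)        # QPSK field samples

beta2_L = -2.17e-20                      # accumulated GVD beta2*L in s^2 (~1000 km SMF, rough)
omega = 2 * np.pi * np.fft.fftfreq(n, d=1 / baud)           # angular frequency grid
H = np.exp(0.5j * beta2_L * omega**2)    # dispersive transfer function of the fiber

received = np.fft.ifft(np.fft.fft(symbols) * H)             # signal after the fiber
equalized = np.fft.ifft(np.fft.fft(received) * np.conj(H))  # conjugate phase removes dispersion
print(np.allclose(equalized, symbols))   # True: a purely linear, fully reversible impairment
```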

    • They would use circular polarization multiplexing. They already use phase shift modulation as well as wavelength division multiplexing.
