Squeezing More Bandwidth Out of Fiber
EigenHombre writes "The New York Times reports on efforts underway to squeeze more bandwidth out of the fiber optic connections that form the backbone of the Internet. With traffic doubling every two years, current networks are getting close to saturation. The new technology from Alcatel-Lucent uses the polarization and phase of light (in addition to intensity) to double or quadruple current speeds."
Hmmm... (Score:2)
Re:Hmmm... (Score:4, Insightful)
Re:Hmmm... (Score:5, Funny)
Or figure out a way of getting cyber criminals off the net. The problem for quite some time has been that they'll suck up as much bandwidth as they can get, and since they don't pay for it, there's little reason for them to throttle back their operations.
Shut down The Pirate Bay? :)
Re:Hmmm... (Score:5, Interesting)
Re: (Score:3, Insightful)
Hey! That's a good idea! Let's just shut down the main reasons people are using high-speed internet technologies: streaming audio and video. And shutting down BitTorrent obviously wouldn't hurt.
Then we'll party like it's 1997!
Re: (Score:3, Interesting)
Re: (Score:2, Insightful)
Aside from a total economic disaster, that is.
Re: (Score:2)
Aside from a total economic disaster, that is.
Don't you mean "Aside from an even totaler economic disaster"? Or did we shut down Youtube and The Pirate Bay in 2009?
Re:Hmmm... (Score:4, Insightful)
Let's stop and think about what people are downloading via TPB... music, movies, media in general. If your gripe is that the legality of these file transfers is in question, let's assume that in the near future everyone is downloading content legitimately. What then?
You dumb asses are taking an interesting article about cutting edge network technology and ruining it with your stupid opinions about things that don't matter. The music and video is going to keep coming, legal or not.
Re: (Score:3, Insightful)
Anyone who doesn't see VOD as the future is daft; the bandwidth must increase, and broadband internet must get to the rural areas of the US.
Re: (Score:2)
Oh, and as a follow up to that, let's mandate driving less (with a 3 strikes you're out policy):
No driving just to get to Bingo
Or bowling.
No driving to a movie. That's just frivolous.
No driving for visual chats with friends.
And if you commit any misdemeanors after driving someplace, we'll take away your license.
People are always thinking of ways to waste highway capacity.
Re: (Score:2, Funny)
I just don't think that the content has added much of anything actually useful.
Riiiiight. And this post is SOOO useful. Time to get off your high horse and shovel some manure.
Re: (Score:2)
Increasing the bandwidth would just encourage more stupid shit to be put onto "the cloud" and cause the problem to persist.
Fortunately, YOU don't get to decide for the rest of us what is "stupid shit" and what is not. According to many people, your post is stupid shit, along with virtually everything else on the internet except lolcats. (or discussions about particle physics, or star trek, or quilting, or nascar, or...)
Re: (Score:2)
The Internet is not your personal research tool. It isn't there for your convenience. If research papers, etc. are of significant importance to your job, you *should* have a JSTOR account.
Re: (Score:2)
only saw a 30% drop in total throughput over the first 24 hours after shutting down TPB
Wow... 30%? For only one (admittedly extremely popular) torrent site, that seems like a hell of a lot. I guess that explains why ISPs want to block torrent traffic so badly.
Re: (Score:2)
OK... let's shut down the things that are making the internet and technology grow... that is a smart idea. While we are at it, why don't we go back to using horses because there are too many cars in the world?
Dear obvious troll: I never suggested any such thing. My flippant comment about The Pirate Bay was a joke (I even included a smiley so dullards like you wouldn't miss it) and I'm not about to explain the joke for slow minds such as yours. I can just picture you rolling your eyes and foaming at the mouth as you typed that. Try not to take life so seriously next time, okay?
There are many legitimate, fully legal uses of bandwidth that cause various internet technologies to grow that we don't need the intern
Re:Hmmm... (Score:4, Funny)
Re:Hmmm... (Score:5, Funny)
Or just get rid of network neutrality so that ISPs can filter packets with the evil bit set.
Re: (Score:2)
With the evil bit set, does that mean no Windows machines can use the internet?
That would shut down 99.99999% of the botnets out there overnight.
Re: (Score:2)
Re: (Score:2, Funny)
A couple of GBU-31s and an F-22 Raptor ought to fix that...
Dual Pol Coherent Systems have already been done.. (Score:5, Interesting)
Well, we'll just have to hope that their competitors will implement the technology
Already have. Actually, Alcatel is pretty much playing catchup with all this. Nortel introduced a 40Gb/s dual-polarization coherent terminal 4 years ago (despite many people, including Alcatel, saying it wasn't possible). Furthermore, Nortel Optical (now Ciena) already has a 100Gb/s version available. Alcatel is pretty late to this game.
If you squeeze glass it flows (Score:5, Funny)
If you pump too much data into a fiber optic glass, it will begin to flow under the photonic pressure similar to glass in old church windows. If you look at them, the bottoms are much thicker than at the top. The reason? All that knowledge from God in Heaven up in the sky exerts a downward pressure on churches in particular warping their window glass... the same thing will happen to fiber optics. If you put too many libraries of congress through it, it will start to flow like toothpaste and your computer rooms will have a sticky floor and all your network switches will be gooey.
Thanks
Signed,
Mr KnowItAll.
(Happy Thanksgiving by the way)
Mod parent up. Re:If you squeeze glass it flows (Score:2)
Mod parent up! It's so unlikely, it MUST be true.
Re: (Score:2)
It's Thanksgiving?
Agggh, smartass!
In Canada. . .well, tomorrow (Score:2)
Canadian Thanksgiving is tomorrow. I'm not Canadian, but heard about it somewhere recently.
Re: (Score:2, Insightful)
My materials class in 1972 was very clear: glass at normal temperatures can be classified as a liquid; ergo, over time it moves. This medieval glass is considerably thicker at the bottom than the top.
Talk about the bleeding obvious... sigh.
This medieval glass is thicker at the bottom because of its fabrication process. Theoretically it should indeed "flow", but the relaxation time is just way too long for it to become noticeable in a matter of centuries...
Source: 2008 polymer class / http://en.wikipedia.org/wiki/Glass#Behavior_of_antique_glass [wikipedia.org]
A bit more and a rough rule of thumb (Score:4, Informative)
Because the speed of flow (creep) is related to the diffusion rate, a very rough rule of thumb is that if the temperature is above 2/3 of the melting point in kelvins, then it will happen given time and stress.
That's why it shows up in very old and large lead pipes (low melting point) but not in large windows (high melting point).
Re: (Score:3, Funny)
Re: (Score:2)
That's really just a metaphor to describe such a disordered solid in simple terms, and you've pushed the metaphor too far. It is not going to flow until it gets hot enough.
Dark Fiber (Score:3, Interesting)
Re:Dark Fiber (Score:5, Insightful)
Because the dark fiber is where it is, not where it is needed. One of the fibers that crosses my land runs from Spring Valley, Wisconsin to Elmwood, Wisconsin. Is that going to help with a bandwidth shortage between New York and Chicago?
Re:Dark Fiber (Score:5, Informative)
The shortage is almost entirely in the transcontinental links.
Re: (Score:2)
Correction: High Frequency Trading, not insider stock market trading. Just as nefarious, but with a prettier name.
Re:Dark Fiber (Score:5, Interesting)
Speaking as someone involved in all of this, the days of dark fiber have, by and large, gone away. Back in 2002/2003 there had been massive amounts of overbuilding due to the machinations of MCI and Enron, no joke. Telcos worldwide had been looking at MCI's and Enron's sales numbers, vouched for by Arthur Andersen, and had launched big builds of their own, figuring that they were going to get a piece of that pie as well. When the ball dropped and it turned out that MCI and Enron had been lying with the collusion of Arthur Andersen, the big telcos realized that the purported traffic wasn't there. They dried up capex (which killed off a number of telecom equipment suppliers and gave the rest a near-death experience) and hawked dark fiber to anybody who would bite.
Those days have come and gone. Think back to what it was like in 2002: lots of us now have 10 Mb/s connections to the ISP, 3G phones, 4G phones, iPods, iPads, iPhones, Androids, IPTV, and it goes on and on. The core networks aren't quite crunched yet, but the growth this time is for real and has used up most if not all of the old dark fiber.
Now telcos are going for more capacity, really, and it's a lot cheaper to put 88 channels of 100 Gb/s light on an existing single fiber than to fire up the ditch diggers. If you're working in the area, it's becoming fun again!
Re: (Score:2)
Right, so seven years ago lots of fibre was put in, and initially it was not all used, and now it is. Hmm, given that a fibre link has a lifetime greater than seven years, any notion of overbuilding has in fact proved to be a load of rubbish.
In fact, I could argue that, given that it is now almost all being used, far from there being massive overbuilding of capacity, there was in fact massive underbuilding of capacity.
Close to Shannon limit (Score:4, Interesting)
Assuming we have 5 THz of usable bandwidth (limited by today's fiber and optical amplifiers),
and applying some technology known from radio for quite some time:
Advanced modulation (1024-QAM): 10 bits per symbol, i.e. ~10 bit/s/Hz
Polarization diversity (or 2x2 MIMO): x2
So, 100 Tbit/sec is an approximate reasonable limit for one fiber.
There is some minor work left to transfer the technology from experimental labs to the field,
but this is just a matter of time.
Wavelength multiplexing just makes things a bit simpler:
instead of one pair of A/D converters doing 100 Tbit/sec, we might use 1000 of them doing 100 Gbit/sec each.
In 2010, speed above 60 Tbit/sec was already demonstrated in the lab.
Eh, will we say soon: "Life is too short to surf using 1 Gbit/sec"?
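The arithmetic above, spelled out (a back-of-the-envelope Python sketch; all inputs are the assumptions stated in this comment, not measured values):

    band_hz = 5e12          # ~5 THz usable amplifier bandwidth (assumed above)
    bits_per_symbol = 10    # 1024-QAM: log2(1024) = 10
    polarizations = 2       # dual polarization (2x2 MIMO)
    capacity = band_hz * bits_per_symbol * polarizations
    print(capacity / 1e12, "Tbit/s")   # -> 100.0 Tbit/s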
Re: (Score:2)
Won't increasing the number of bits per symbol as you suggest require a higher SNR, thus meaning amplifiers have to be more closely spaced? Given the desire is to get more out of the existing infrastructure, that might be a problem.
Re: (Score:2)
Good point. And even having closely-spaced amplifiers may not work, as optical amplifiers have fundamental limitations in terms of noise added (OSNR actually decreases by at least 3dB for each high-gain amplifier).
At least, that's for classical on-off keying (1 bit per symbol, using light intensity only). Coherent transmission might not have the same limit; I'd
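For scale, Shannon's capacity formula already shows why each extra bit per symbol costs exponentially more SNR (a quick Python check; the spectral efficiencies in the loop are just illustrative):

    import math
    # Shannon: C/B = log2(1 + SNR), so SNR = 2**(C/B) - 1
    for se in (1, 2, 4, 10):                    # bit/s/Hz; 10 ~ 1024-QAM
        snr_db = 10 * math.log10(2**se - 1)
        print(se, "bit/s/Hz needs at least", round(snr_db, 1), "dB SNR")

Going from 1 to 10 bit/s/Hz costs roughly 30dB of SNR at the Shannon bound, which is exactly why amplifier spacing and noise figure start to matter.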
Re: (Score:3, Insightful)
Eh, will we say soon: "Life is too short to surf using 1 Gbit/sec"?
Seriously... damn Flash websites...
Probably not (Score:5, Interesting)
You find that each time you go up an order of magnitude in bandwidth, the next order matters much less.
100 bps is just painfully slow. Even doing the simplest of text over that is painful. You want to minimize characters at all costs (hence UNIX's extremely pithy commands).
1 kbps is ok for straight text, but nothing else. When you start doing ANSI for formatting or something it quickly gets noticeably slow.
10 kbps is enough that on an old text display, everything is pretty zippy. Even with formatting, colour, all that, it is pretty much realtime for interactivity. Anything above that is slow though. Even simple markup slows it down a good bit. Hard to surf the modern Internet, just too much waiting.
100 kbps will let you browse even pretty complicated markup in a short amount of time. Images also aren't horrible here, if they are small. Modern web pages take time to load, but usually 10 seconds or less. Browsing is perfectly doable, just a little sluggish.
1 mbps is pretty good for browsing. You wait a bit on content heavy pages, but only maybe a second or two. Much of the web is sub-second loading times. This is also enough to stream SD video, with a bit of buffering. Nothing high quality, but you can watch Youtube. Large downloads, like say a 10GB video game, are hard though; they can take a day or more.
10 mbps is the point at which currently you notice no real improvements. Web pages load effectively instantly, usually you are waiting on your browser to render them. You can stream video more or less instantly, and you've got enough to stream HD video (720p looks pretty good at 5mbps with H.264). Downloads aren't too big an issue. You can easily get even a massive game while you sleep.
100 mbps is enough that downloads are easy to do in the background while you do something else, and have them ready in minutes. A 10GB game can be had in about 15 minutes. You can stream any kind of video you want, even multiple streams. At that speed you could stream 1080p 4:2:2 professional video like you'd put into an NLE, if any were available on the web.
1 gbps is such that the network doesn't really exist for most things. You are now approaching the speed of magnetic media. Latency is a more significant problem than speed. Latency (and CPU use) aside, things tend to run as fast off a network server as they do on your local system. Downloads aren't an issue, you'll spend as much time waiting on your HDD as the data off the network in most cases.
10 gbps is enough that you can do uncompressed video if you like. You could stream uncompressed 2560x1600 24-bit (no chroma subsampling) 60fps video and still have nearly half your connection left.
If we get gig to the house, I mean truly have that kind of bandwidth available, I don't think we'll see a need for much more for a long, long time. At that speed, things just come down at amazing rates. You could download an entire 50GB BD movie during the first 6 minutes of viewing it. Things stream so fast over a gig that you can have the data more or less immediately to start watching/playing/whatever and the rest will be there in minutes. The latency you'd face to a server would be more of a problem.
Even now, going much past 10mbps shows strong diminishing returns. I've gone from 10 to 12 to 20 to 50 in the span of about 2 years. Other than downloading games off of Steam going faster, I don't notice much. 50mbps isn't any faster for surfing the web. I'm already getting the data as fast as I need it. Of course usage will grow; while I could stream a single 1080p Blu-ray-quality video (they are usually 30-40mbps streams, video and audio together), I couldn't do 2.
However at a gbps, you are really looking at being able to do just about everything someone wants to for any foreseeable future in realtime. I mean you can find theoretical cases that could use more but ask yourself how practical they really are.
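For anyone who wants to sanity-check the tiers above, a quick Python sketch (sizes and speeds as stated in the comment; protocol overhead ignored):

    # 1 GB = 8e9 bits; times are ideal-case
    for mbps in (1, 10, 100, 1000):
        for name, gbytes in (("10GB game", 10), ("50GB BD", 50)):
            minutes = gbytes * 8e9 / (mbps * 1e6) / 60
            print(mbps, "mbps /", name, "->", round(minutes, 1), "min")

It lines up: ~22 hours for a 10GB game at 1 mbps, ~13 minutes at 100 mbps, and ~6.7 minutes for a 50GB BD at a gigabit.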
Re: (Score:2)
Yes and no; I'm sure we'll invent new ways of wast^Wusing bandwidth. (3DTV, telepresence, video editing on remote storage, cloud computing... What's next?)
But the problem no longer lies in the house. Not everybody has a 100Mbit/s Internet access yet, but that's coming in the next fe
Re:Probably not (Score:5, Interesting)
I was never arguing against the need in the core. Already 10 gbps links are in use just on the campus I work at, never mind a real large ISP or tier-1 provider. I'm talking to the home. While many geeks get all starry-eyed about massive bandwidth I think they've just never played with it to realize the limits of what is useful. 100mbit net connections, and actually even a good deal less, are more than plenty for everything you could want today. That'll grow some with time, but 20-50mbps is plenty for realtime HD streaming, instant surfing, fast downloads of large games, etc. Be a good bit before we have any real amount of content that can use more than that (or really even use that well).
Gigabit is just amazingly fast. You discover that copying between two modern 7200rpm drives you get in the 90-100MBytes/sec range if things are going well (like a sequential copy, no seeking). Doing the math you find gigabit net is 125MBytes/sec, which means even with overhead 100MBytes/sec is no problem. At work I see no difference in speed copying between my internal drives and copying to our storage servers. It could all be local for all I can tell speed-wise.
That's why I think gig will not be "slow" for home surfing any time in the foreseeable future, and maybe ever. You are getting to the point that you can stream whatever you need, transfer things as fast as you need. Faster connections just wouldn't do anything for people.
A connection to the home only really matters to a certain point. Once you can do everything you want with really no waiting, any more is just for show. At this point, that is somewhere in the 10-20mbps range. You just don't gain much past that, and I say this as someone who has a 50mbps connection (and actually because of the way they do business class I get more like 70-100mbps). That is also part of the reason there isn't so much push for faster net connections. If you can get 20mbps (and you'll find most cable and FIOS customers can, even some DSL customers) you can get enough. More isn't so useful.
Re: (Score:2)
And I think it would be nice to have clunky, electro-mechanical/magnetic devices like hard drives out on the cloud where they can be managed by someone whose only job is to make storage work.
I've played around with Rackspace Cloud, and it's just nice not to have to worry about specific hard drives failing.
Re: (Score:2)
Yup... I used to be on around 2 Mbps ADSL, it was just painful. Now I'm at 25 Mbps cable, and I've realized I don't really need more. With uTorrent+RSS most things I want are downloaded before I even know it. With 5 Mbps upload I have no trouble uploading anything to friends or keeping my ratio. Now can I pretty please soon pay for a service that is half as good as the pirates give me?
Re: (Score:2)
For games, Steam and Impulse will give you better service than the pirates do. Other than when their servers are heavily loaded due to a free weekend or something, I get games from them at 5MBytes+/sec. Starts fast, stays fast. Of course it also has the advantage of always being what I asked for and all that, and always being available for redownload.
For movies and so on, sorry got nothing. There are some good streaming services, but they only do HD to a Blu-ray player, they can't trust your evil computer, a
Excellent comment! but... (Score:2)
"I've gone from 10 to 12 to 20 to 50 in the span of about 2 years."
fuck you.
Re: (Score:2)
So you are saying I probably shouldn't tell you that because of how they handle business class accounts, I actually get more than that most of the time? :D
http://www.speedtest.net/result/985434853.png [speedtest.net]
That is my actual result from a few minutes ago. It is fun for bragging rights, I'll say that. However truth be told other than Impulse and Steam downloads I notice no difference over 20mbps. Personally I'd take 20/20 if it were offered instead of 50/5. However currently they use 4 downstream channels with DOCS
Re: (Score:2)
All of that applies to "last mile" connections to home. For a business, you have to multiply many of those needs by as many people using them at one time. Then there may be services going in the reverse direction. For an ISP, multiply by the number of customers (divided by the oversell factor). For core infrastructure ISPs, more than 100 Tbps is still going to be needed to service a billion homes with 1gbps and a million businesses with 10gbps.
And I'll still need to compress my 5120x2160p120 videos.
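The multiplication described above, sketched in Python (the statistical-multiplexing factor is an outright guess):

    homes = 1e9 * 1e9         # a billion homes at 1 Gbit/s each
    businesses = 1e6 * 10e9   # a million businesses at 10 Gbit/s each
    oversell = 1000           # assumed; real ISPs guard this number closely
    core = (homes + businesses) / oversell
    print(core / 1e12, "Tbit/s")   # -> 1010.0 Tbit/s, well past 100 Tbps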
Re: (Score:2)
For a business, you have to multiply many of those needs by as many people using them at one time.
For home too. I have 1.5mb DSL (very far out, bad S:N ratio if I up it to the 3mb service). Just me and the wife surfing and doing email, no problem. ISO downloads and Windows updates while we sleep, no problem. However, if we use Netflix on the Wii and I want to surf or perhaps listen to some streaming audio, the connection gets very bogged down, Netflix pauses every so often to rebuffer, etc.
I'd be happy to u
Re: (Score:2)
Perhaps you're forgetting that we'll also have 20,000 x 10,000 resolutions by then, and maybe even 3D. In particular, loading a voxelized 100,000^3 3D map will still dig into those orders of magnitude for a little bit longer.
Re: (Score:2)
However at a gbps, you are really looking at being able to do just about everything someone wants to for any foreseeable future in realtime.
What about holographic video [mit.edu]?
Add that to a few dozen banner ads and we could spam up a gigabit link pretty fast.
Re: (Score:2)
If we get gig to the house, I mean truly have that kind of bandwidth available, I don't think we'll see a need for much more for a long, long time.
Once speeds like these become available, once the opportunity arises and the market grows, we will find applications for it. Trust me, we always do.
Re: (Score:2)
Something like single-stream ProRes 422 is going to be a lot more than 100 Mbit unless you are using a proxy mode. If "professional" means 10 bit
Re: (Score:2)
I almost agree with you. As you increase the number of bits, the value-per-bit drops, and I think that's what you are saying. But the value of the bits DOES increase linearly as the number of bits increases exponentially.
As a professional hosting provider, our upstream is 100 Mbps. NOT the consumer 100 Mbps, but the fully-redundant, backed up, monitored 24x7, 4-hops-from-MAEWest kind of 100 Mbps. For database-driven, graphics-poor applications such as what we provide, it's amazing just how much you can do w
Re: (Score:2)
Zoom (Score:2)
New technology? (Score:2)
Phase? (Score:2)
Re: (Score:2)
Polarization, you mean? (As in the direction along which the electrical field vibrates?)
For phase modulation, try Wikipedia [wikipedia.org]. I like the diagram.
The problem in optical transmissions, unlike radio or electricity, is that you can't directly access the phase of the light. All you can do is to have two beams of light interfere together (just like with sound: if you hear two tones, very closely spaced, you will hear a low-frequency "beat" which pulses at a frequency equal to the difference between the fre
Re: (Score:2)
Re: (Score:2)
It has been proposed for some modulation types. However, this halves the efficiency: you use one wavelength for each reference beam, but you can't use the same reference for all the other wavelengths, due to the fact that these wavelengths travel down the fiber at different speeds. (This is called "chromatic dispersion" and can be a major pain in the neck at high bit rates.) So there would be a delay between reference
Re: (Score:2)
It's probably two of them. Two degrees of freedom from polarization, two from phase.
If your information is carried on an amplitude modulated sine wave, you recover it by demodulating with another sine wave. Demodulating with an orthogonal function, cosine, yields nothing. So you can pack a second carrier with a cosine phase in there and then demodulate each with the correct phase to extract the modulation signal. I don't know much about how much cross-talk there would be but probably, in theory, as long as
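A toy numerical check of that orthogonality claim (Python/NumPy; every value is arbitrary):

    import numpy as np
    fs, fc, n = 1e6, 50e3, 10000           # sample rate, carrier, sample count
    t = np.arange(n) / fs
    a, b = 0.7, -0.3                        # two independent message amplitudes
    s = a * np.sin(2*np.pi*fc*t) + b * np.cos(2*np.pi*fc*t)   # same carrier!
    # demodulate with each reference and average (a crude low-pass filter)
    print(2 * np.mean(s * np.sin(2*np.pi*fc*t)))   # ~ 0.7, recovers a
    print(2 * np.mean(s * np.cos(2*np.pi*fc*t)))   # ~-0.3, recovers b

Each reference pulls out only its own component; the cross terms average to zero over whole cycles.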
Interesting, but... (Score:4, Insightful)
Why not get more efficient though? (Score:2)
It makes sense both in economic and practical terms to squeeze as much out of the fiber we have before laying new stuff. Not only does that get more bandwidth easier and cheaper now, but it means when new stuff is laid it'll last longer. It is real expensive to lay a transatlantic cable (and those are the ones that are most full). The more we can get out of one, the better. I'd much rather we research the technology to get, say, 100tbits per fiber out of the cable and need a couple thousand fibers spanning a few
Re: (Score:2)
Re: (Score:2)
I think maybe you read too much Slashdot and don't look around at what is happening overall with Internet providers. For one, bandwidth per dollar has gone way up in many cases. This is true for home connections, and for business. Only 10 years ago I paid about 20% more (not even counting inflation) for 640k/640k than I currently do for 50m/5m. That's some major increase in bandwidth without an increase in cost. This is true for high end business lines as well. I realize there are cases where it isn't, howe
Re: (Score:2)
Where's the multicast? (Score:2)
People are still getting video feeds by HTTP. Multicast was supposed to save on bandwidth for things like IP TV. But it still isn't happening on any real scale. A lot of core infrastructure bandwidth could be reduced by making multicast fully functional and using it. And, of course, we need to do that in a way that precludes some intermediate business deciding what we can, or cannot, receive by multicast. Oh, and how many multicast groups are there? And how do I get one?
Re: (Score:2)
It's happening on the backend, and it's god damn huge. It's just hidden behind IPTV "cable boxes". If you're watching television on Comcast's cable plant, you're using multicast.
IP Multicast in Cable Networks
http://www.cisco.com/en/US/technologies/tk648/tk828/technologies_case_study0900aecd802e2ce2.html [cisco.com]
Re: (Score:2)
But Comcast is a video gatekeeper. If they are using IPTV and multicast, then they have far more ability to add channel choices than they are letting on.
My interest is in deploying it world-wide where there is no gatekeeper and a choice of as many channels as there are people wanting to deliver video. Specifically, the sender and receiver would not be the same entity (it is the same entity when Comcast controls your set top box, and decides what programming networks get to send video out to those set top
Re: (Score:2)
Not going to happen. If you're either a channel or content provider, you're still going to want to get paid for that content being delivered. Multicast doesn't have that sort of authentication built into the protocol, unless you're able to multicast an encrypted stream, and use another protocol to handle a key exchange (after the video stream access is purchased or authorized because of a monthly subscription a la Netflix).
It would work for things like NASA TV that are free, and I even think NASA already ha
Re: (Score:2)
The problem is timeshifting. Thousands of people are watching Hulu, but how many are watching the same content and started it at around the same time? Without any advanced caching, multicast would only help those streams that happen to line up.
I agree with your point, though. Where is the Internet-wide multicast?
Kind of an odd scenario where the last mile is ready for multicast naturally because of the shared medium (cable & wireless) but the core is not. I know the solution is not trivial, too.
John
Re: (Score:2)
One can simply set up a box to pre-record a program when it is transmitted by multicast. The box can afterwards go request missing blocks of the program to eliminate those glitches that the live viewers had to endure. If you want to watch a particular show each week or day, you have it set to do that. It then joins the multicast group a little before it starts and records what it gets. If the sender DRMs it, then you have to view it through some process that can decrypt it (and maybe even gets the ke
Re: (Score:2)
That's an interesting concept. I had never thought of a DVR/multicast combo. It would require an always-on device and some notion of what you wanted to record. Basically taking an Apple/GoogleTV, adding a program guide, and setting it to record specific multicast addresses at certain times. Add DRM and a monthly charge and keep the commercials, and the networks may just buy off on it.
Re: (Score:2)
Re: (Score:2)
IPv6 is fine. IPv4 has it, too. But IPv6 has more of it. Now how do I get a multicast address?
Re: (Score:2)
Multicast is common - at constrained scopes. It's used for router communication (OSPF, IS-IS, RIP2, GLBP), rendezvous/zero-conf (mDNS), and as some other commenters have noted, also for single-carrier IPTV, depending on the carrier. There are a few education and research networks that use it, too.
Global multicast is non-existent because it's hard to charge for.
Butters' Law (Score:2)
With traffic doubling every two years, current networks are getting close to saturation.
That's OK; we have been following Butters' Law for quite some time, just like Moore's Law follows transistor density: the cost of sending data halves every 9 months, and data networks double in speed every 9 months. That comfortably outpaces traffic doubling every two years.
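Compounding those two rates shows the gap (a quick Python check; the 72-month horizon is arbitrary):

    months = 72
    print("network capacity: x", 2 ** (months / 9))    # doubles every 9 months -> x256
    print("traffic:          x", 2 ** (months / 24))   # doubles every 24 months -> x8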
Re: (Score:2)
But there are also laws of physics that apply to fiber. And after all the strands that are already in place are used up, they have to put in more. THAT is very expensive (dig, dig, dig). My company is trying to get fiber in from an ISP right across the street from us, and the installation costs are going to be as high as $12,000. Imagine the cost of putting in more fiber from Boston to New York to Philadelphia to Baltimore to Washington to Richmond to Charlotte to Atlanta. And that's just one run.
Oh...of course! (Score:5, Informative)
The article implies that it's easy to do, there was simply never a need before. I seriously doubt that it's a trivial thing to accomplish a four-fold increase in bandwidth on existing infrastructure.
Polarization has a habit of wandering around in fiber. Temperature and physical movement of the fiber will change how the polarization is altered as it passes through the fiber. In a trans-oceanic fiber the effect could be dramatic; the polarization would likely wander around with quite a high frequency. This would need to be corrected for by periodically sending reference pulses through the fiber so that the receivers could be re-calibrated. Not too difficult, but any inaccessible repeaters would still need to be retrofitted. I also don't know if in-fiber amplifiers are polarization-maintaining. They rely on a scattering process that might not be.
Phase-encoding has similar problems. Dispersion, the fact that different frequencies travel at different velocities (this leads to prisms separating white light into rainbows), will distort the pulse shape and shift the modulation envelope with respect to the phase. You either need very low dispersion fibers, and they already need to use the best available, or have some fancy processing at a receiver or repeater. Adding extra phase encoding simply implies that the current encoding method (probably straight-up, on-off encoding) is inefficient. That's not necessarily lack of foresight, that's because dense encoding is probably really hard to do in a dispersive medium like fiber. Again, it's not a trivial drop-in replacement.
The article downplays how hard these problems are. It implies that the engineers simply didn't think it through the first time around, but that's far from the case. A huge amount of money and effort goes into more efficiently encoding information in fiber. There probably is no drop-in solution, but very clever design in new repeaters and amplifiers might squeeze some bonus bandwidth into existing cable.
Re:Oh...of course! (Score:5, Interesting)
It's not, as you have pointed out. My interpretation is that, on the contrary, phase and polarization diversity (which I'll lump into "coherent" optical transmissions) are hard enough to do that you'll try all the other possibilities first: DWDM, high symbol rates, differential-phase modulation... All these avenues have been exploited, now, so we have to bite the bullet and go coherent. However, on coherent systems, some problems actually become simpler.
Quite so. Therefore, on a classical system, you use only polarization-independent devices. (Yes, erbium-doped amplifiers are essentially polarization-independent because you have many erbium ions in different configurations in the glass; Raman amplifiers are something else, but sending two pump beams along orthogonal polarizations should take care of it.)
For a coherent system, you want to separate polarizations whose axes may have turned any which way. Have a look at Wikipedia's article on optical hybrids [wikipedia.org], especially figure 1. You need four photoreceivers (two for each balanced detector), and you reconstruct the actual signal by digital signal processing. And that's just for a single polarization; double this for polarization diversity and use a 2x2 MIMO technique.
That's why it's so expensive compared to a classical system: the coherent receiver is much more complex. Additionally, you need DSP and especially ADCs working at tens of gigasamples per second. This is only just now becoming possible.
Indeed. We are at the limit of the "best available" fibers (which are not zero-dispersion, actually, to alleviate nonlinear effects, but that's another story). Now we need the "fancy processing". And lo, when we use it, the dispersion problem becomes much more tractable! Currently, you need all these dispersion-compensating fibers every 100km, and they're not precise enough beyond 40Gbaud (thus 40Gbit/s for conventional systems). With coherent, dispersion is a purely linear channel characteristic, which you can correct straightforwardly in the spectral domain using FFTs. Then the limit becomes how much processing power you have at the receiver.
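A toy sketch of that frequency-domain correction (Python/NumPy; beta2 is a typical SMF value near 1550nm, everything else is made up, and real receivers obviously do far more):

    import numpy as np
    n, fs = 4096, 64e9                   # samples; 64 GSa/s receiver ADC (assumed)
    beta2, length = -21.7e-27, 100e3     # GVD in s^2/m (typical SMF); 100 km span
    tx = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], n)   # QPSK-ish samples
    w = 2 * np.pi * np.fft.fftfreq(n, d=1/fs)
    H = np.exp(1j * 0.5 * beta2 * length * w**2)   # dispersion: all-pass, |H| = 1
    rx = np.fft.ifft(np.fft.fft(tx) * H)           # the "fiber"
    eq = np.fft.ifft(np.fft.fft(rx) / H)           # FFT-domain equalizer at the receiver
    print(np.allclose(tx, eq))                     # True: purely linear, fully invertible

That invertibility is the whole point: once you have the full field (amplitude and phase), dispersion stops being destructive and becomes a deterministic filter you undo in DSP.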
Well, yes, much effort has been devoted to the problem. After all, how many laboratories are competing to break transmission speed records and be rewarded with the prestige of a postdeadline paper at conferences such as OFC and ECOC ;-)?
As for how much bandwidth can be squeezed into fibers, keep in mind that current systems have an efficiency around 0.2 bit/s/Hz. There's at least an order of magnitude left for improvement; I don't have Essiambre's paper handy, but according to his simulations, I think the capacity bound is around 7-8 bit/s/Hz.
Re: (Score:3, Informative)
Because it's more complicated to reach a high spectral efficiency. Until now, on fiber, it was possible to just increase the spectral bandwidth (increase the number of wavelengths in a single fiber, in fact). In wireless, on the contrary, the spectrum is much more regulated--if only because it is shared among everybody, whereas what happens in a fiber doesn't affect anything o
Re: (Score:2)
They would use circular polarization multiplexing. They already use phase shift modulation as well as wavelength division multiplexing.
Re: (Score:2, Funny)
Re:Wtf are constants? (Score:4, Interesting)
FTFS: > to double or quadruple current speeds.
Of course, they must have been talking about capacity instead of speed. Sending more information concurrently using the same pipe. Every bit of information would still travel at pretty much the same speed obviously.
Re: (Score:2)
FTFS: > to double or quadruple current speeds.
Of course, they must have been talking about capacity instead of speed. Sending more information concurrently using the same pipe. Every bit of information would still travel at pretty much the same speed obviously.
"Capacity" is still a polysemous term. I could be a static magnitude (a "stock") or a a dynamic one (a "flow"),. For example "1 LoC" can be a unit of capacity. Maybe "throughput" is what they mean, whose unit could be "1 LoC per second".
Re: (Score:2)
OK then, to be more specific: Network bandwidth capacity.
I just thought the "Network bandwidth" part was implicit.
http://en.wikipedia.org/wiki/Bandwidth_(computing)#Network_bandwidth_capacity [wikipedia.org]
Re: (Score:2)
Re:what about color (Score:5, Informative)
In a way they already do this: the different wavelengths are used in something called multiplexing. They can cram a lot of completely different signals down the same pipe at the same time with this technique.
http://en.wikipedia.org/wiki/Multiplexing [wikipedia.org]
That link probably explains it much better than I can.
Also, they do not send individual bits; they send more than bytes: they send packets, or entire frames.
Re:what about color (Score:5, Informative)
Yes, they're using the wavelength aspect of light there.
An electromagnetic waveform can be represented mathematically as a function of time: s(t) = A * cos(wt + p)
Amplitude (A), or the intensity of the light, was always used to represent the on/off states.
Wavelength (related to w), or the 'colour' of the light, is used in wavelength multiplexing. You just inject multiple signals at different wavelengths and filter them back out into the different signals at the receiver.
Phase (p), or the starting position of the wave if you like (imagine shifting a sine wave left or right), is another aspect of a wave that can carry information. Phase Shift Keying (PSK) is already used in many radio frequency digital modulation schemes. Not sure how this method could be used to increase bandwidth through modulation though. Probably worth reading the paper.
Note, I've represented a very basic two-dimensional wave here. Of course polarisation, or the way a 3-D wave is aligned, is another aspect that may be used to encode information. For multiplexing, I imagine the idea is to have multiple waves at the same wavelength but with different polarisations. You would then need to be able to filter out particular polarisations at the receiver.
As a side note, the second paragraph of the article says something about not being able to make light go any faster being a barrier to increased bandwidth. It in fact has no bearing on it. Even if the on/off effect along kilometres of fibre were instantaneous, you would still have to deal with noise and attenuation in the channel.
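To make the "data rides in the phase" idea concrete, a tiny QPSK-style sketch (Python/NumPy; every number is arbitrary):

    import numpy as np
    fs, fc = 1e6, 50e3                             # sample rate and carrier
    t = np.arange(200) / fs                        # one symbol's worth of samples
    phase = {(0,0): np.pi/4, (0,1): 3*np.pi/4,     # 2 bits -> 1 of 4 phase offsets
             (1,1): -3*np.pi/4, (1,0): -np.pi/4}
    def qpsk_symbol(bits, A=1.0):                  # A (the intensity) stays constant;
        return A * np.cos(2*np.pi*fc*t + phase[bits])   # only p carries the data
    wave = qpsk_symbol((1, 0))

The amplitude never changes; two bits per symbol ride entirely in p, which is how phase modulation packs more bits into the same symbol rate.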
Re:what about color (Score:5, Informative)
Already being done; TFA mentions this (for "wavelength" read "color", as the light that is being used is in the infrared).
What limits the number of wavelengths in a single fiber is the bandwidth of the amplifiers: optical fibers slightly absorb light, and current long-haul links require reamplification ca. every 100km. This is done using EDFAs (erbium-doped fiber amplifiers), which work for wavelengths in the 1530-1560nm range (the "C band"; visible light is in the 400-800nm range). Adding wavelengths outside this band would require redeploying new amplifiers along the fiber, which would be expensive; besides, other types of amplifiers aren't quite as mature as EDFAs, and you would need more of them because the fiber attenuates more outside this window.
You could also try to squeeze these wavelengths tighter, to put more of them within the C band, but they are already packed at 0.4-nm intervals, corresponding to a 50-GHz frequency interval, which holds a 10- or 12.5-Gbit/s signal with little margin, as long as conventional optical techniques are used--that is, switching the light on or off for each bit.
There remains the possibility of using smarter ways of modulating the light, using its phase and polarization, to pack e.g. 100Gbit/s into a 50-GHz bandwidth; and that's what Alcatel are doing. They are not the only ones, of course; the field of "coherent optical transmissions" has been a hot topic for the past couple of years. Now commercial solutions are getting into the field.
Note that these techniques are already widely used in radio and DSL systems, and had been proposed for optical systems back in the 1980s, before EDFAs essentially solved the attenuation problem. Now, however, we have again reached a bandwidth limit and have to turn back to coherent transmission. In the 1980s, that meant complicated hardware at the receivers, impossible to deploy outside the labs; now all the complicated stuff can be done with DSP in software. Radio and DSL already do this, but only at a few tens of Gbit/s; doing it at 100Gbit/s for optics is more challenging, and is just now becoming possible.
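For scale, the channel arithmetic implied by the figures above (Python; rough numbers, and real systems trim the band edges differently):

    c, lam = 3e8, 1545e-9               # speed of light; mid-C-band wavelength
    band_hz = c * 30e-9 / lam**2        # the 1530-1560nm window as a frequency width
    print(band_hz / 1e12, "THz")        # ~3.8 THz
    print(round(band_hz / 50e9), "channels at 50-GHz spacing")   # ~75

That's in the same ballpark as the 88-channel systems mentioned upthread (commercial gear stretches the band a bit).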
tens of Mbit/s not Gbit/s (was:what about color) (Score:2, Informative)
Re: (Score:2)
Different wavelengths follow different paths down the fibre and will arrive with different latency and distortion; so multiple wavelengths carry concurrent frames, rather than concurrent bits; but yeah, pretty much.
Also, no production DSP will pull phase information out of optical frequencies; to do so reliably requires a sample rate of at least 4x the frequency, so your 1530nm signal would need to be sampled and processed at around 800,000 GHz (yes, the best part of 1 PHz. Per-channel). Good luck with th
Re: (Score:2)
Well, yes. There are "wavelength-striped" systems in laboratories, but only for short-distance links AFAICT.
Re: (Score:2)
Photonics to the rescue indeed; but I thought wave-synchronised light sources at this distance would be considered part of the lab-experiment grade equipment this was said to be doable without.
Not sure where I got 4x from; been years since I did RF theory (GSM was the big news at the time...), but 2.5x makes sense now.
Still, the more I think about it the more I'm impressed that it works at all at those speeds.
Re: (Score:2)
Right, and this was the big problem with coherent when it was first proposed for optical systems back in the 1980s.
Now, you just ensure that the local oscillator is within a few tens or hundreds of MHz of the signal carrier, which is not too difficult. A residual phase drift of several hundred Mrad/s sounds hi
Re: (Score:2)
Aww, sounds like the network guys are getting all the cool equipment these days.
Just imagine all the chicks you could get with a wave coherent oscillator...
Re: (Score:2)
Installing is very very expensive. Too bad they didn't realize that back when they did put in the dark fiber. They are doing some installs now, but the financial resources are being held back until the government gives them massive tax breaks and helps them cover the costs so they can keep their profits high and not have to cut CEO salaries and bonuses.
Re: (Score:2)
Oh, singlemode fiber isn't better in that regard, but yes, that's certainly SMF they're talking about, if only because that's what's installed in current long-distance links. Also, you can indeed have polarization-maintaining SMF, but not over hundreds of kilometers. For what is actually done to multiplex over polarization, see my earlier post [slashdot.org].