Increasing Wireless Network Speed By 1000% By Replacing Packets With Algebra

MrSeb writes "A team of researchers from MIT, Caltech, Harvard, and other universities in Europe has devised a way of boosting the performance of wireless networks by up to 10 times — without increasing transmission power, adding more base stations, or using more wireless spectrum. The researchers' creation, coded TCP, is a novel way of transmitting data so that lost packets don't result in higher latency or re-sent data. With coded TCP, blocks of packets are clumped together and then transformed into algebraic equations (PDF) that describe the packets. If part of the message is lost, the receiver can solve the equations to derive the missing data. The decoding process is simple and linear, meaning it doesn't require much processing on the part of the router/smartphone/laptop. In testing, coded TCP produced some dramatic improvements. MIT found that campus WiFi (2% packet loss) jumped from 1Mbps to 16Mbps. On a fast-moving train (5% packet loss), the connection speed jumped from 0.5Mbps to 13.5Mbps. Moving forward, coded TCP is expected to have huge repercussions for the performance of LTE and WiFi networks — and the technology has already been commercially licensed to several hardware makers."
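The "algebraic equations" idea in the summary can be sketched in miniature. The toy below treats each coded packet as a GF(2) linear combination (an XOR) of a subset of the original packets, tagged with a coefficient bitmask; the receiver recovers the originals by Gaussian elimination over GF(2). This illustrates the general linear-coding principle only, not the researchers' actual coded TCP scheme:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, masks):
    """Each coded packet is the XOR of the original packets whose bit is
    set in its coefficient mask -- a linear combination over GF(2)."""
    coded = []
    for mask in masks:
        payload = bytes(len(packets[0]))
        for i, pkt in enumerate(packets):
            if mask >> i & 1:
                payload = xor_bytes(payload, pkt)
        coded.append((mask, payload))
    return coded

def decode(coded, n):
    """Online Gauss-Jordan elimination over GF(2). Returns the n original
    packets, or None if too few independent combinations survived."""
    pivots = {}                                   # pivot bit -> (mask, payload)
    for mask, payload in coded:
        for pb, (pmask, ppayload) in pivots.items():
            if mask & pb:                         # reduce by each known pivot
                mask ^= pmask
                payload = xor_bytes(payload, ppayload)
        if mask == 0:
            continue                              # linearly dependent: useless
        bit = mask & -mask                        # lowest set bit is the pivot
        for b, (pmask, ppayload) in list(pivots.items()):
            if pmask & bit:                       # clear new pivot from old rows
                pivots[b] = (pmask ^ mask, xor_bytes(ppayload, payload))
        pivots[bit] = (mask, payload)
    if len(pivots) < n:
        return None
    return [pivots[1 << i][1] for i in range(n)]

packets = [bytes([65 + i] * 4) for i in range(4)]    # b'AAAA' .. b'DDDD'
# four plain packets plus two "algebraic" repair combinations
coded = encode(packets, [0b0001, 0b0010, 0b0100, 0b1000, 0b0011, 0b1111])
survived = [coded[0], coded[3], coded[4], coded[5]]  # packets 2 and 3 lost
print(decode(survived, 4) == packets)                # -> True
```

The receiver never asks "where is packet 2?"; it only asks whether it holds enough independent equations to solve for all four unknowns.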
  • Awesome name (Score:1, Interesting)

    by Anonymous Coward on Tuesday October 23, 2012 @04:17PM (#41744607)

    What the fuck were they thinking?

    It's like if tomorrow I invent a new protocol for mobile phones and I call it GSM.

    Or is this a fucking joke?

  • ECC is old (Score:4, Interesting)

    by mcelrath ( 8027 ) on Tuesday October 23, 2012 @04:26PM (#41744723) Homepage

    So basically they're applying interleaved checksumming error correction (a la RAID5)? Good idea. What they didn't say is how much extra data was required to be sent by their solution. If they want to be able to recover 10% packet loss, presumably that means at least 10% more data sent, and there's still a failure point where the loss is greater than the checksum's size.

    We've had these algorithms for decades. I've long been frustrated that checksums/ECC are not used at every single transmission and receiving point. Let's put this into the expansion bus, memory bus (ECC), and filesystem (btrfs/zfs), and of course, wifi and wired networks. Unfortunately the drive to the price floor resulted in everyone wanting to shave that 10% to make things cheaper. ECC was once commonly available in consumer hardware too; now you can only find it on ultra-specialized and ultra-pricey rackmount server hardware.

    The 1980s assumption that the error rate is 1e-20, and so can be ignored, is demonstrably false in nearly every computer application today. We need to (re-)start designing error correction into everything. Hey, why not use adaptive error correction, which increases the size of the checksum when the measured loss increases?
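Both halves of this comment (the RAID5-style parity analogy and the closing adaptive-redundancy idea) can be sketched in a few lines. The parity math is exactly RAID5's; the scaling policy and its 1.5x safety margin are made-up illustrations, not anything from the article:

```python
import math

def xor_blocks(blocks):
    """RAID5-style parity: XOR equal-length blocks together."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

def repair_count(n_data, measured_loss, margin=1.5):
    """Adaptive redundancy: scale the number of repair blocks per
    n_data data blocks with the observed loss rate plus a margin."""
    return max(1, math.ceil(n_data * measured_loss * margin))

stripe = [b'pkt-0', b'pkt-1', b'pkt-2', b'pkt-3']
parity = xor_blocks(stripe)                # one extra block per stripe (25%)

# any ONE lost block is the XOR of the parity with the survivors
survivors = [stripe[0], stripe[1], stripe[3]]
print(xor_blocks(survivors + [parity]) == stripe[2])      # -> True

# heavier measured loss -> more repair blocks sent
print([repair_count(32, p) for p in (0.01, 0.05, 0.10)])  # -> [1, 3, 5]
```

This also answers the overhead question the comment raises: one parity block per stripe costs 1/N extra data and recovers exactly one loss per stripe; surviving more losses requires proportionally more redundancy.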

  • It's just FEC (Score:5, Interesting)

    by Zarhan ( 415465 ) on Tuesday October 23, 2012 @04:27PM (#41744737)

    Forward error correction - algorithms for it are a dime a dozen.

    The one thing that *does* surprise me is that no such thing is built into the link layer of the 802.11 spec. The physical layer does whatever it can to garner signal from the noise, but there is no redundant data at the higher layers at all.

    All this has of course resulted in a gazillion papers on that very topic; I hope to see practical application soon.

  • by timeOday ( 582209 ) on Tuesday October 23, 2012 @04:31PM (#41744793)
    Forward error correction is a pretty basic principle in encoding and has been used nearly since "the beginning" in the 1940s. Such codes are used in several places up and down the protocol stack; WiMax uses Reed-Solomon coding, for example. But I guess this implementation uses a better algorithm at a different level in the stack.
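For a concrete feel of how basic FEC is, here is the classic Hamming(7,4) code, which predates even Reed-Solomon: 3 parity bits protect 4 data bits, and any single flipped bit in the 7-bit codeword can be located and corrected. (A toy illustration of the principle, not what WiMax or coded TCP actually uses.)

```python
def hamming74_encode(d):
    """Encode 4 data bits (an int 0..15) into a 7-bit codeword with
    3 parity bits, laid out as positions 1..7: p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return (p1 << 6) | (p2 << 5) | (d1 << 4) | (p3 << 3) | \
           (d2 << 2) | (d3 << 1) | d4

def hamming74_decode(c):
    """Return the 4 data bits, correcting a single-bit error if present."""
    bits = [(c >> (7 - i)) & 1 for i in range(1, 8)]   # bits[0] = position 1
    p1, p2, d1, p3, d2, d3, d4 = bits
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = (s3 << 2) | (s2 << 1) | s1              # 0 means no error
    if syndrome:
        c ^= 1 << (7 - syndrome)                       # flip the bad position
        bits = [(c >> (7 - i)) & 1 for i in range(1, 8)]
        p1, p2, d1, p3, d2, d3, d4 = bits
    return (d1 << 3) | (d2 << 2) | (d3 << 1) | d4

word = 0b1011
code = hamming74_encode(word)
corrupted = code ^ (1 << 3)                            # flip one bit in transit
print(hamming74_decode(corrupted) == word)             # -> True
```

The syndrome bits spell out the position of the flipped bit directly, which is why decoding is cheap enough to run on 1940s-era hardware, let alone a router.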
  • by daniel.benoy ( 1810984 ) on Tuesday October 23, 2012 @04:33PM (#41744835)

    Man this is going to be so sweet in 25 years when the patents expire :D

    I also hope they use this as an excuse to popularize SCTP.

  • by Sheetrock ( 152993 ) on Tuesday October 23, 2012 @04:39PM (#41744907) Homepage Journal

    Efficiency in wireless communication is something of a purple elephant, mostly due to interference concerns that aren't at issue in wired Ethernet transactions. True, wired connections will have the occasional collision (though this is largely solved by modern algorithms and operating systems), but digital transmissions over an analog medium are difficult enough when they aren't running into each other in the air. And then you have other interference introduced by microwave radiation, whether from devices like cell phones, microwave ovens, or sunspots. It's a very noisy environment!

    The concept of using algebra is a unique step forward in this field. Most here would agree, if you're in a crowded cafe and trying to carry on a conversation, it's easier to shout "Pythagoras" than to talk about squares and triangles. But with computers it happens to be exactly the opposite because they're designed to compute -- it's what they do and what they like to do. So feed it generalities and, often, it can come up with specifics, much like the Monty Hall Paradox.

    The next step appears to be to move from algebraics to broad descriptions of the type of data you want to download. This is waiting on computers with a great deal more processing power and perhaps emergent AI, but there will come a time where instead of feeding a bunch of packets over a noisy channel the Internet will simply say to your computer "short film with 20-something actor wondering whether to marry now or enjoy life for a while longer" and your system will fill in the rest, completing the transfer mathematically. This is down the road a ways, but newer technology such as lossy compression for data is already available and potentially lucrative for those who are willing to think outside of the conventional box and try something with a few more holes in it.

  • by flowerp ( 512865 ) on Tuesday October 23, 2012 @04:50PM (#41745055)

    This is why I think this will not catch on easily. You can't just put a new router with their coding functionality into your home and expect this to work. It also needs support from the server hosting the content you want to access.

    The way they designed their system is end-to-end, meaning that the Internet server has to run a modified TCP stack and the client system (or, alternatively, your router in between) also has to understand this modified TCP dialect.

    The chance of millions of Internet servers changing to a (likely) patented, proprietary version of TCP is ZERO.

    This is why this idea will fail.


  • by Anonymous Coward on Tuesday October 23, 2012 @04:52PM (#41745071)

    We'll save the packets that were re-transmits, for sure. But the throughput gain (if I understand their sketchy details correctly) is from dropped packets not causing a stall at the TCP window size limit while waiting for the dropped packet. You can just keep streaming packets and generally assume the other end is happy.

    But this doesn't increase the available bandwidth of your transport network. And if every packet from 300 users is going out back-to-back with another user's packet, then it doesn't "fix spectrum crunch" any further than eliminating retries. They were estimating the fast-moving train had a 5% drop rate. Assuming your area's 4G is saturated (I'm looking at you, AT&T) with the same drop rate, you can expect 1) fewer pauses and a steadier transmission, and 2) 5% more bandwidth.

    Applying this technology to the "long fat pipe" problem for ftp-style transfers to Europe however sounds like a grand plan. I hope it becomes an IEEE standard soon.

  • by anom ( 809433 ) on Tuesday October 23, 2012 @05:20PM (#41745399)

    The reason this is a problem at all is because TCP was developed for wired networks in which packet loss was almost always a signal of congestion -- and therefore the logical response was to reduce the rate.

    In these newfangled wireless networks losses can be completely random, yet TCP will still assume that congestion is responsible and reduce its rate. So the answer is either to change TCP or to do correction at a lower layer to "hide" the losses from TCP -- and this has been a subject of research in networking for years now.

    Linear coding certainly isn't new -- it has been proposed for a variety of things, including but not limited to BitTorrent, to reduce the reliance on receiving a specific block and instead rely on simply receiving "enough" information.

    So yes, it is all well and good that we are applying this technique to TCP to reduce the impact of random, noncongestion losses, but there had better be something pretty magical in the way they do it for it to be (IMO) novel enough to be patentable/licensable/etc.
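The size of the penalty this comment describes (TCP reading every random loss as congestion) is captured by the well-known Mathis et al. approximation for steady-state TCP throughput, rate ≈ (MSS/RTT) * sqrt(3/(2p)). The 1460-byte MSS and 50 ms RTT below are illustrative assumptions, not numbers from the article, but they show why a couple percent of random wireless loss is so devastating:

```python
from math import sqrt

def mathis_mbps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation of steady-state TCP throughput:
    rate ~= (MSS / RTT) * sqrt(3 / (2 * loss_rate)). Illustrative only."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5 / loss_rate) / 1e6

for p in (0.0001, 0.02, 0.05):
    print(f"{p:.2%} loss -> ~{mathis_mbps(1460, 0.05, p):.1f} Mbit/s")
```

Throughput falls off as 1/sqrt(p), so hiding random losses from the congestion controller buys far more than the raw lost bandwidth, which is consistent with the 1 Mbps to 16 Mbps jump quoted in the summary.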

  • by Animats ( 122034 ) on Tuesday October 23, 2012 @05:32PM (#41745545) Homepage

    The actual paper, which I need to read a few more times, proposes at least two mechanisms. One problem with TCP is that, when ICMP Source Quench went away in the 1980s, the assumption was made that a lost packet indicated congestion. So a lost packet means slow down. This is a problem for systems with high packet loss rates and no link level retransmit, like WiFi. Also, with TCP, packets need not arrive in order, but if one is missing, all later packets have to be retransmitted, because the ACK scheme has no way to describe which byte ranges are missing, just the last good sequence number. So losing one packet when there are many in flight costs multiple retransmits.

    Their solution involves addressing both of those issues. Then they add the "algebra", which is a simple form of forward error correction. They send M linear combinations of N packets, M > N, from which all N packets can be reconstructed provided that not more than K packets are lost, where K <= M - N. Why do this at the transport layer at all, when loss is a link layer problem that ought to be dealt with at the link layer? Because FEC at the WiFi link layer is apparently not as effective as it should be.
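The counting argument here (N unknowns need N independent equations, so any N of the M combinations suffice) can be seen with ordinary linear algebra. The sketch below works over the rationals for readability; a real coder would use a finite field like GF(256), and the coefficients and "packet" values are invented for illustration:

```python
from fractions import Fraction

def solve(A, b):
    """Exact Gauss-Jordan elimination: solve A x = b over the rationals."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [v / M[col][col] for v in M[col]]     # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * p for v, p in zip(M[r], M[col])]
    return [row[-1] for row in M]

# three "packets" (here just byte values) and four coded transmissions;
# each row of C holds the coefficients of one linear combination (M=4, N=3)
packets = [72, 105, 33]
C = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 2, 3]]
sent = [sum(c * p for c, p in zip(row, packets)) for row in C]

# transmission 2 is lost (K=1 <= M-N); three equations pin down three unknowns
recovered = solve([C[0], C[2], C[3]], [sent[0], sent[2], sent[3]])
print([int(x) for x in recovered])   # -> [72, 105, 33]
```

Lose more than M - N transmissions and the system is underdetermined: no amount of cleverness at the receiver can recover the packets, which is exactly the K <= M - N condition above.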

  • by White Flame ( 1074973 ) on Tuesday October 23, 2012 @06:16PM (#41745943)

    It prevents backtracking the stream.

    Say you have 10 packets to transmit. You encode them into 10 linearly combined results, with a 10-byte coefficient header (1 per packet), and transmit those 10 encoded packets.

    If the 5th packet was lost, in standard TCP you'd need to retransmit packets 5-10. With this encoding, you could in theory transmit only 1 packet to complete the set, regardless of which was lost, based on how the new ACKs describe the algebraic degrees of freedom remaining in solving for the original packet bytes. That means that you put out 11 packets instead of 16 into the same noisy environment, and the existing TCP window controls perceive fewer losses. If everybody does that, the overall contention might go down compared to stock TCP.

    In the case where it's very difficult to get any individual packet through, I could see this still encoding 2-3 packets at a time and saving bandwidth on resending vs regular unencoded serial transmission.

    (given my skimming of the paper)
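The packet-count argument above can be checked with a toy simulation: stream random GF(2) combinations until the receiver holds enough degrees of freedom, versus a crude go-back-N resend policy. This is my own sketch under the comment's 10-packet, 5% loss assumptions, not the paper's protocol; real schemes use a field larger than GF(2), which would shave off the occasional linearly dependent combination:

```python
import random

def coded_transmissions(n, loss, rng):
    """Send fresh random nonzero GF(2) combinations of n packets until the
    receiver holds n linearly independent ones (tracked as an XOR basis)."""
    basis, sent = {}, 0
    while len(basis) < n:
        sent += 1
        if rng.random() < loss:
            continue                         # lost in the air
        m = rng.randrange(1, 1 << n)         # random coefficient vector
        while m:
            hb = m.bit_length() - 1
            if hb not in basis:
                basis[hb] = m                # a fresh degree of freedom
                break
            m ^= basis[hb]                   # reduce; m == 0 means dependent
    return sent

def go_back_n_transmissions(n, loss, rng):
    """Crude go-back-N: the first loss in a round forces that packet and
    everything after it to be resent in the next round."""
    sent, start = 0, 0
    while start < n:
        first_loss = None
        for i in range(start, n):
            sent += 1
            if rng.random() < loss and first_loss is None:
                first_loss = i
        start = n if first_loss is None else first_loss
    return sent

rng = random.Random(42)
trials, n, loss = 2000, 10, 0.05
coded = sum(coded_transmissions(n, loss, rng) for _ in range(trials)) / trials
gbn = sum(go_back_n_transmissions(n, loss, rng) for _ in range(trials)) / trials
print(f"coded: ~{coded:.1f} sends per 10 packets, go-back-N: ~{gbn:.1f}")
```

The coded sender never backtracks: every transmission is a fresh combination, so a loss costs roughly one extra send rather than a resend of the whole tail of the window.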

  • by CmdTako ( 2503216 ) on Tuesday October 23, 2012 @07:17PM (#41746395)
    Nope. Nowadays we have pi on tap.
  • by TheGratefulNet ( 143330 ) on Tuesday October 23, 2012 @08:07PM (#41746737)

    Better: there is an unknown part that was found rattling around inside. You find out what it was and put it back where it should have been.
