Wireless Networking Network Hardware

Increasing Wireless Network Speed By 1000% By Replacing Packets With Algebra

MrSeb writes "A team of researchers from MIT, Caltech, Harvard, and other universities in Europe has devised a way of boosting the performance of wireless networks by up to 10 times — without increasing transmission power, adding more base stations, or using more wireless spectrum. The researchers' creation, coded TCP, is a novel way of transmitting data so that lost packets don't result in higher latency or re-sent data. With coded TCP, blocks of packets are clumped together and then transformed into algebraic equations (PDF) that describe the packets. If part of the message is lost, the receiver can solve the equation to derive the missing data. The process of solving the equations is simple and linear, meaning it doesn't require much processing on the part of the router/smartphone/laptop. In testing, coded TCP resulted in some dramatic improvements. MIT found that campus WiFi (2% packet loss) jumped from 1Mbps to 16Mbps. On a fast-moving train (5% packet loss), the connection speed jumped from 0.5Mbps to 13.5Mbps. Moving forward, coded TCP is expected to have huge repercussions on the performance of LTE and WiFi networks — and the technology has already been commercially licensed to several hardware makers."
  • by Anonymous Coward on Tuesday October 23, 2012 @03:14PM (#41744553)

    ...I don't see how it will solve "spectrum crunch" when every nibble of your LTE bandwidth is oversubscribed by 5 to 1. Whether you have 32 users doing 10 Mbps streams, or 320 users doing 1 Mbps streams, it's all accounted for. I'd certainly like to be one of the 10, but 20 MHz worth of spectrum at 16 symbols/Hertz is not a limitation you can change with fast/excellent forward error correction.

    • by jargonburn ( 1950578 ) on Tuesday October 23, 2012 @03:21PM (#41744653)
      I agree that it's not a magic bullet. The point is, however, that the overall throughput of the network was increased by better usage of the available resources! If the *effective* available bandwidth is increased, then the performance of everyone "nibbling" on that network will *also* presumably increase. Also, think how much more money carriers may be able to squeeze out of users without needing to invest more in infrastructure! [/sarcasm]
      • Overall throughput? I'm skeptical. I have yet to consistently get 3G speeds out of even a 4G phone.

        • 3G speeds out of my 3G phone.
          HSDPA+ 'should' be capable of providing me with WAY more bandwidth than I could possibly need from my phone (42Mbps?).
          I know this is a theoretical speed, but at least in the UK there is an order of magnitude difference between what you actually get using this tech, based on your operator. I.e., the majority of telcos could up their average speed, but for cost reasons choose not to (or more fairly, wouldn't expect the investment to pay off due to the complete lack of interest fr
        • by Firehed ( 942385 ) on Tuesday October 23, 2012 @03:42PM (#41744937) Homepage

          That's kinda the point. Crappy signal results in high packet loss. If you can recover lost packets through some recipient-side magic (clever math, apparently) rather than retransmission, you avoid the overhead of another roundtrip, and get higher bandwidth as a result. This cuts down massively on latency (huge win) and should also decrease network congestion significantly.

          I'm trying to think of a way to put this in the requisite car analogy, but I honestly don't know enough about the lower-level parts of the network stack to do so without sounding like an idiot to someone sufficiently informed. But I'm sure there's something about a car exploding and all traffic backing up for tens of miles behind it ;)

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            We'll get back the extra packets that were re-transmits, for sure. But the throughput gain (if I understand their sketchy details correctly) is from dropped packets not causing a pause at the TCP window size limit while waiting for the dropped packet. You can just keep streaming them and generally assume the other end is happy.

            But this doesn't increase the available bandwidth of your transport network. And if every packet from 300 users is going out back-to-back with another user's packet then it doesn't "fix spectru

          • by Gilmoure ( 18428 ) on Tuesday October 23, 2012 @04:23PM (#41745439) Journal

            Think of it as bitmap vs vector graphic.

          • by Imagix ( 695350 ) on Tuesday October 23, 2012 @05:46PM (#41746183)
            Car analogy: Somebody ships you a car. It arrives with a bent bumper. Instead of having the source shipping you a new car, you just unbend the bumper.
      • by Jeng ( 926980 ) on Tuesday October 23, 2012 @03:46PM (#41744989)

        Also, think how much more money carriers may be able to squeeze out of users without needing to invest more in infrastructure!

        This might actually hurt them then because they charge by what was transmitted, not by what was received.

        • Re: (Score:3, Informative)

          by gr8_phk ( 621180 )

          This might actually hurt them then because they charge by what was transmitted, not by what was received.

          Yeah, but you have to consider how they do math. You assume they calculate profit based on usage and rates. The reality is they calculate the rate based on the desired profit and usage. So when you use less data (fewer retransmits) they will just charge you more for the bits that get through.

        • The point is it will give them higher data rates for free which they can then charge extra for.

        • "Algebraic packet reconstruction fee: $.01/kb"
    • by White Flame ( 1074973 ) on Tuesday October 23, 2012 @05:16PM (#41745943)

      It prevents backtracking the stream.

      Say you have 10 packets to transmit. You encode them into 10 linearly combined results, with a 10-byte coefficient header (1 per packet), and transmit those 10 encoded packets.

      If the 5th packet was lost, in standard TCP you'd need to retransmit packets 5-10. With this encoding, you could in theory transmit only 1 packet to complete the set, regardless of which was lost, based on how the new ACKs describe the algebraic degrees of freedom remaining in solving for the original packet bytes. That means that you put out 11 packets instead of 15 packets into the same noisy environment, and the existing TCP window controls perceive fewer losses. If everybody does that, the overall contention might go down compared to stock TCP.

      In the case where it's very difficult to get any individual packet through, I could see this still encoding 2-3 packets at a time and saving bandwidth on resending vs regular unencoded serial transmission.

      (given my skimming of the paper)
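
      For illustration, here is a minimal Python sketch of the erasure-recovery arithmetic behind the comment above. The paper actually sends random linear combinations over a finite field with coefficient headers; this toy version uses a single XOR parity packet, so 10 data packets become 11 coded packets and any one loss can be repaired without a retransmit. The packet count, sizes, and contents here are made up for the example.

          # Toy sketch, not the paper's coding: 10 payloads plus one XOR parity packet.
          # Any single lost packet can be rebuilt by XORing the survivors with the parity,
          # so one loss costs one extra packet up front instead of a retransmit round trip.
          import os
          from functools import reduce

          def xor_bytes(a, b):
              return bytes(x ^ y for x, y in zip(a, b))

          packets = [os.urandom(32) for _ in range(10)]   # 10 equal-size payloads
          parity = reduce(xor_bytes, packets)             # the 11th, coded packet

          lost = 4                                        # pretend packet 5 never arrives
          survivors = [p for i, p in enumerate(packets) if i != lost]
          rebuilt = reduce(xor_bytes, survivors + [parity])
          assert rebuilt == packets[lost]

      A real scheme generalizes this by sending more independent combinations per block, so that several losses can be tolerated rather than just one.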

    • What they seem to be saying is that they're doing forward error correction, but with giant coding blocks (superblocks) comprising multiple TCP packets. Presumably this is accomplished by swapping the bits around between the packets and then using a conventional block coding algorithm on smaller blocks within each packet. So a whole packet can be lost and the block-decoding algorithm will still decode the superblock correctly even with a missing packet (but not two missing packets).

      It's a clever idea to work around the

  • by Ignorant Aardvark ( 632408 ) <[moc.liamg] [ta] [syewedyc]> on Tuesday October 23, 2012 @03:17PM (#41744585) Homepage Journal

    If you've ever used Usenet, and you've used parity files to recover missing segments of data, then you know exactly how this technique works.

    Frankly, I'm surprised it took so long for someone to apply it to lossy network environments. It seems obvious in hindsight.

    • by timeOday ( 582209 ) on Tuesday October 23, 2012 @03:31PM (#41744793)
      Forward error correction [wikipedia.org] is a pretty basic principle in encoding and has been used nearly since "the beginning" in the 1940s. FEC codes are used in several places up and down the protocol stack; WiMax uses Reed-Solomon [wikipedia.org] coding, for example. But I guess this implementation uses a better algorithm at a different level in the stack.
    • It seems obvious in hindsight.

      So does everything. Or was that the joke?

    • I'm sure that this is pretty much standard for how NASA sends and receives data using the Deep Space Network. Only here it's over TCP. What's actually patentable here?
      • by SpazmodeusG ( 1334705 ) on Tuesday October 23, 2012 @10:59PM (#41748373)

        Yes, Raptor Codes are the specific ones used by NASA on their newest deep space missions link [arxiv.org]. Raptor codes are a type of fountain code [wikipedia.org].

        Fountain codes are worth looking at if you haven't been keeping up with the latest and greatest Comp Sci developments in the last 15 years. With fountain codes you can break up a chunk of data into any number of unique packets. Any subset of those packets that adds up to the size of the original data can be used to reform the original file.

        So say I had a 1MB file to send to Mars. I run a fountain encoder on that and I tell the encoder to make me 10,000,000 packets 1KB in size out of that 1MB file. So the fountain coder gives me 10GB of generated packets from that 1MB file. Now I spam those 10,000,000 packets across a really noisy connection. As soon as the receiver successfully receives any 1,001 of those packets (totalling up to just over 1MB worth of received packets) it can rebuild that file. I don't need to wait for retransmission requests or anything and it doesn't matter which packets make it or not. Just as long as the receiver can eventually successfully receive slightly over X bytes of data it can rebuild an X byte file.

        Traditional error correction codes are great for correcting bit errors in a message that mostly gets there. Fountain codes, on the other hand, are great in situations where entire chunks of the desired message can be lost; they can avoid the retransmission of these lost packets entirely. The only issue is that they require redundancy in transmission in the first place.

        It seems here they are grouping 50 packets of data together into one lot and making 50+R coded packets out of it, where R is some number that's variable depending on how much loss they see. So they might send 60 coded packets. If any 50 of the 60 coded packets make it there, they should have enough to rebuild the original 50 packets using fountain codes.
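
        To make the "any slightly-more-than-N packets will do" idea concrete, here is a small Python sketch of a fountain-style code: each coded packet is the XOR of a random subset of the source blocks, tagged with that subset, and the receiver solves for the blocks by Gaussian elimination over GF(2) once it has collected enough independent packets. This is an illustrative toy with made-up block counts and sizes, not a real Raptor or LT code.

            # Toy fountain-style erasure code over GF(2). Not Raptor/LT, just the idea:
            # keep generating coded packets until the receiver can solve for all blocks.
            import os, random

            def xor_bytes(a, b):
                return bytes(x ^ y for x, y in zip(a, b))

            N, BLOCK = 8, 16
            blocks = [os.urandom(BLOCK) for _ in range(N)]      # the original data

            def coded_packet():
                mask = random.getrandbits(N) or 1               # random nonzero subset
                payload = bytes(BLOCK)
                for i in range(N):
                    if mask >> i & 1:
                        payload = xor_bytes(payload, blocks[i])
                return mask, payload

            rows = {}                                           # pivot bit -> (mask, payload)
            received = 0
            while len(rows) < N:                                # collect until full rank
                mask, payload = coded_packet()
                received += 1
                for rmask, rpayload in rows.values():           # eliminate known pivots
                    if mask & (rmask & -rmask):
                        mask ^= rmask
                        payload = xor_bytes(payload, rpayload)
                if mask:                                        # independent: keep it
                    rows[mask & -mask] = (mask, payload)

            decoded = [None] * N                                # back-substitution
            for pivot in sorted(rows, reverse=True):
                mask, payload = rows[pivot]
                for bit in (1 << j for j in range(pivot.bit_length(), N)):
                    if mask & bit:
                        payload = xor_bytes(payload, decoded[bit.bit_length() - 1])
                decoded[pivot.bit_length() - 1] = payload

            assert decoded == blocks
            print(f"rebuilt {N} blocks from {received} coded packets")

        In runs of this toy, the receiver typically needs only a handful of packets beyond N before the system becomes solvable, which is the kind of small overhead the "any 1,001 of the 10,000,000" figure above is describing.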

  • Math! (Score:2, Insightful)

    by Anonymous Coward

    To everyone who grew up saying that they never used math after high school, and it didn't have any meaningful use further than simple addition... you can kindly eat your own words now.

    I'll just sit here and watch.

    • Where's that politician who said people didn't need to study Algebra ??
    • by bws111 ( 1216812 )

      Yes, clearly this is the first time math has ever been used in computing!

      I'll just sit here and watch. Watch what? All those people who didn't learn math happily using their computers with improved speed? Or do you think it is going to be up to the user to solve the equations to re-create this missing data?

    • Re:Math! (Score:4, Insightful)

      by Grishnakh ( 216268 ) on Tuesday October 23, 2012 @04:13PM (#41745337)

      Sorry, but those people were right. They don't need anything more than simple addition to work at Starbucks making crappy burnt coffee drinks. The fact that their mobile phones will work better as they send inane messages to each other on Facebook is nice, but it doesn't require them to know any algebra or other higher math; the engineers (in other countries where math is valued) implementing this stuff need to know that math, but the users don't.

  • by sootman ( 158191 ) on Tuesday October 23, 2012 @03:19PM (#41744625) Homepage Journal

    ... the new Linksys 802.11x=(-b+/-sqrt(b^2-4ac))/2a router!

  • With coded TCP, blocks of packets are clumped together and then transformed into algebraic equations (PDF) that describe the packets. If part of the message is lost, the receiver can solve the equation to derive the missing data.

    It's been a while since I read the paper on exactly how it works, but isn't this basically the same principle as the par2 file recovery slices that have been used for Usenet binaries for quite some time?

  • by Moskit ( 32486 ) on Tuesday October 23, 2012 @03:24PM (#41744697)

    The article is very light on details (except the "Libraries of Congress" things), but it looks like those guys implemented a kind of error correction code (ECC) to recover lost data through extra data found in other packets. This has been in use for various types of networks (optical, DSL, GSM) for years.

    Of course it all comes down to how good the actual algorithm ("algebra") is in terms of overhead vs extent/capability of error correction vs introduced coding delay. There is always a trade-off, but a particular algorithm can take technology specifics (WiFi) into account and be optimized very well for a given failure mode (a whole packet lost, but not too often).

    Journalists like to put BIG BUZZWORDS to well known things.

  • ECC is old (Score:4, Interesting)

    by mcelrath ( 8027 ) on Tuesday October 23, 2012 @03:26PM (#41744723) Homepage

    So basically they're applying interleaved checksumming error correction (a la RAID5)? Good idea. What they didn't say is how much extra data was required to be sent by their solution. If they want to be able to recover 10% packet loss, presumably that means at least 10% more data sent, and there's still a failure point where the loss is greater than the checksum's size.

    We've had these algorithms for decades. I've long been frustrated that checksums/ECC are not used at every single transmission and receiving point. Let's put this into the expansion bus, memory bus (ECC), and filesystem (btrfs/zfs), and of course, wifi and wired networks. Unfortunately the drive to the price floor resulted in everyone wanting to shave that 10% to make things cheaper. ECC was once commonly available in consumer hardware too; now you can only find it on ultra-specialized and ultra-pricey rackmount server hardware.

    The 1980s assumption that the error rate is 1e-20, and so can be ignored, is demonstrably false in nearly every computer application today. We need to (re-)start designing error correction into everything. Hey, why not use adaptive error correction, which increases the size of the checksum when the measured loss increases?
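
    As a rough illustration of that adaptive idea, a sender could size the per-block redundancy from the loss rate it has recently measured. The sketch below is hypothetical: the block size of 50, the 25% safety margin, and the minimum of one extra packet are arbitrary illustrative choices, not anything from the paper.

        # Sketch of adaptive redundancy: pick how many extra coded packets to send
        # with each block based on the measured loss rate plus a small safety margin.
        import math

        def extra_packets(block_size: int, measured_loss: float) -> int:
            """Extra coded packets to send alongside `block_size` data packets."""
            expected_losses = block_size * measured_loss
            margin = math.ceil(0.25 * expected_losses)        # arbitrary 25% cushion
            return max(1, math.ceil(expected_losses) + margin)

        for loss in (0.01, 0.02, 0.05, 0.10):
            print(f"{loss:.0%} measured loss -> {extra_packets(50, loss)} extra packets per 50")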

  • It's just FEC (Score:5, Interesting)

    by Zarhan ( 415465 ) on Tuesday October 23, 2012 @03:27PM (#41744737)

    Forward error correction - there are different algorithms that are a dime a dozen.

    The one thing that *does* surprise me is that no such thing is built into the link layer of the 802.11 spec. The physical layer does whatever it can to garner signal from the noise, but there is no redundant data at higher layers at all.

    All this has of course resulted in a gazillion papers on that very topic, hoping to see practical application soon.

    • Re:It's just FEC (Score:5, Informative)

      by hdas ( 2759085 ) on Tuesday October 23, 2012 @03:51PM (#41745065)

      This is not plain FEC for point-to-point communication. This is based on network coding, e.g., see http://en.wikipedia.org/wiki/Linear_network_coding [wikipedia.org] and how it can increase the capacity in the butterfly network over traditional packet routing schemes, counter to our intuition for flow networks/water pipes.

      Network coding has been a fairly hot research topic in information theory and coding theory over the last few years, and it is fairly revolutionary in my opinion. It is still early days in terms of practical coding schemes and implementations.

  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Tuesday October 23, 2012 @03:33PM (#41744835)
    Comment removed based on user account deletion
  • If your new error correction technology eliminates lost packets, and you lose 5% normally, then using this you gain 5% back not 10x. What they actually invented is data compression, and it's been around for decades.
    • If your new error correction technology eliminates lost packets, and you lose 5% normally, then using this you gain 5% back not 10x. What they actually invented is data compression, and it's been around for decades.

      It's not that simple, and it's not data compression.

      http://hardware.slashdot.org/comments.pl?sid=3205219&cid=41744891 [slashdot.org]

    • by bws111 ( 1216812 )

      Um, no. First of all, data compression means you are sending less data, and they are not. They are sending more data in total, but can tolerate missing packets.

      Second, no, 5% missing packets does not slow you down by only 5%. Worst case, the sender has to wait for a timeout to occur with no ack received before resending the packet - that is going to be a long time.

  • Efficiency in wireless communication is something of a purple elephant, mostly due to interference concerns that aren't at issue in wired Ethernet transactions. True, wired connections will have the occasional collision (though this is largely solved by modern algorithms and operating systems) but digital transmissions over an analog medium are difficult enough when they aren't running into each other in the air. And then you have other interference introduced by microwaves, whether from devices like cell

    • Re: (Score:3, Informative)

      by Eunuchswear ( 210685 )

      The concept of using algebra is a unique step forward in this field.

      No it isn't. It's the way all error correcting codes work.

      This stuff is maybe 50 years old!

  • by YesIAmAScript ( 886271 ) on Tuesday October 23, 2012 @03:41PM (#41744929)

    It's called forward error correction and it requires sending additional redundant data so you can solve for what is missing. Sending additional redundant data does use more spectrum for the same throughput, because you're sending more data. It may be worth it to avoid retransmissions when data is lost, but it definitely does use additional spectrum.

    This is nothing new, your computer uses FEC on its storage (HDD or SSD) and maybe even on its RAM (if it has ECC RAM).

  • by OrangeTide ( 124937 ) on Tuesday October 23, 2012 @03:45PM (#41744967) Homepage Journal

    The Shannon limit shows that there is only so much information that can fit in a channel.

    Plenty of forward error correction codes (algebraic encodings) exist to enable a channel to approach the Shannon limit. Most of you have heard of Reed-Solomon or Hamming codes before.

    NASA has used these since the 1970s to provide a more robust link, at the cost of using more of that link's bandwidth.

    This is a little fancier than what I mentioned, but conceptually similar I imagine. The advantage of just using some existing forward error correction, perhaps combined with one of the popular compression algorithms, is that techniques that have been in use for the past 4 decades probably can't have enforceable patents placed on them.
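
    For readers who haven't met the codes named above, here is a tiny Python sketch of Hamming(7,4), one of the classic forward error correction codes mentioned in the comment: four data bits become seven transmitted bits, and any single flipped bit can be located and corrected from the parity syndrome. The bit values in the demo are arbitrary.

        # Hamming(7,4): 4 data bits -> 7 code bits; corrects any single bit flip.
        def hamming74_encode(d):                  # d: four data bits
            c = [0] * 8                           # 1-indexed, c[0] unused
            c[3], c[5], c[6], c[7] = d            # data bits
            c[1] = c[3] ^ c[5] ^ c[7]             # parity over positions 1,3,5,7
            c[2] = c[3] ^ c[6] ^ c[7]             # parity over positions 2,3,6,7
            c[4] = c[5] ^ c[6] ^ c[7]             # parity over positions 4,5,6,7
            return c[1:]

        def hamming74_decode(r):                  # r: seven received bits
            c = [0] + list(r)
            syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                        + (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2
                        + (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4)
            if syndrome:                          # syndrome is the flipped position
                c[syndrome] ^= 1
            return [c[3], c[5], c[6], c[7]]

        sent = hamming74_encode([1, 0, 1, 1])
        noisy = list(sent)
        noisy[4] ^= 1                             # flip one bit in transit
        assert hamming74_decode(noisy) == [1, 0, 1, 1]

    Reed-Solomon works on multi-bit symbols rather than single bits and can also repair whole missing chunks (erasures), which is closer to the lost-packet problem the coded TCP work targets.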

  • by XB-70 ( 812342 ) on Tuesday October 23, 2012 @03:46PM (#41745001)
    It's obvious that nothing new has been invented here. It's just that they have put one previous idea cleverly together with another. What they are trying to do is to create Coded TCP... IP!!
    • by bws111 ( 1216812 )

      So putting two previous ideas together 'cleverly' is not a new invention, eh? What exactly would qualify as a new invention?

  • Ah yes, all we have to do is find an algebraic equation whose 1542 roots happen to match packet #1767 of the Angry Birds video. And whose coefficients take up less than 5%, 75 bytes let's say.

    I thought up this compression scheme in 8th grade. Even then I knew there just had to be some basic problem with it.

    • It's not compression. It's a fancy checksum which means fewer packets have to be discarded as lost, meaning less time wasted waiting for resent packets and less chance of the network speed being negotiated down because of said lost packets.
  • Then the mathematical representation of the packet is effectively nothing, and even a computer with an unplugged network cable becomes infinitely fast.

  • by flowerp ( 512865 ) on Tuesday October 23, 2012 @03:50PM (#41745055)

    This is why I think this will not catch on easily. You can't just put a new router with their coding functionality into your home and expect this to work. It also needs support from the server hosting the content you want to access.

    The way they designed their system is end to end, meaning that the internet server has to run a modified TCP stack and the client system (or alternatively your router in between) also has to understand this modified TCP dialect.

    The chance of millions of Internet servers changing to a (likely) patented, proprietary version of TCP is ZERO.

    This is why this idea will fail.

    Christian

    • by flowerp ( 512865 ) on Tuesday October 23, 2012 @03:53PM (#41745095)

      And in case you want to read about the changes they made to TCP, here's the paper:
      http://arxiv.org/pdf/0809.5022.pdf

      The paper mentioned in the summary only does a performance evaluation for this TCP dialect, but is light on details.

    • by suutar ( 1860506 ) on Tuesday October 23, 2012 @04:55PM (#41745783)
      It doesn't necessarily need support from the server hosting the content. Any router along the path could take the old-style packets and wrap them in new-style packets. Since the servers are likely to be using wired connections anyway, this technique might not help them much; the real win is using this to beef up the connection from the radio tower (be it cell or wifi) to the client (be it cellphone or wireless card), and that's a much smaller set of hardware/firmware to update.
  • by AK Marc ( 707885 ) on Tuesday October 23, 2012 @03:56PM (#41745133)
    Hasn't this already been solved before? There are tons of FEC implementations out there. Shoot, from their description, they just PAR'd them all together and transmitted the parity as an (or some) extra packet(s). They just used "algebra" to generate their PAR.

    Cool that someone's using it. Dynamic parity/FEC is a much better loss tolerant scenario than TCP's algorithm, but that doesn't make it new or novel. Silver Peak is a network acceleration device (like Riverbed) that integrates FEC, so it's been commercially available for years. Putting it on an AP (with a matched wireless client) isn't that interesting. I could have done it 2 years ago with Silver Peak VMs running on the AP and my computer, though not as practical.

    Maybe I'm just jaded because FEC and parity are things I've been working with for 20+ years.
  • You mean we get to keep all the hardware as is, and not pay more to get more..... I am in, but alas, Canadian monopoly companies will see through this, and implement some way of getting more money for the lost bandwidth they are able to charge us for....too bad it is not a credible business model.

  • It is news for nerds after all.
  • by anom ( 809433 ) on Tuesday October 23, 2012 @04:20PM (#41745399)

    The reason this is a problem at all is because TCP was developed for wired networks in which packet loss was almost always a signal of congestion -- and therefore the logical response was to reduce the rate.

    In these newfangled wireless networks losses can be completely random, yet TCP will still assume that congestion is responsible and reduce its rate. So, the answer is to either change TCP or do correction at a lower layer to "hide" the losses from TCP -- and, this has been a subject of research in networking for years now.

    Linear coding certainly isn't new -- it has been proposed for a variety of things, including but not limited to bittorrent, to reduce the reliance on receiving a specific block in favor of simply receiving "enough" information.

    So yes, it is all well and good that we are applying this technique to TCP to reduce the impact of random, noncongestion losses, but there had better be something pretty magical in the way they do it for it to be (IMO) novel enough to be patentable/licensable/etc.

  • by Animats ( 122034 ) on Tuesday October 23, 2012 @04:32PM (#41745545) Homepage

    The actual paper, which I need to read a few more times, proposes at least two mechanisms. One problem with TCP is that, when ICMP Source Quench went away in the 1980s, the assumption was made that a lost packet indicated congestion. So a lost packet means slow down. This is a problem for systems with high packet loss rates and no link level retransmit, like WiFi. Also, with TCP, packets need not arrive in order, but if one is missing, all later packets have to be retransmitted, because the ACK scheme has no way to describe which byte ranges are missing, just the last good sequence number. So losing one packet when there are many in flight costs multiple retransmits.

    Their solution involves addressing both of those issues. Then they add the "algebra", which is a simple form of forward error correction. They send M linear combinations of N packets, M > N, from which all N packets can be reconstructed provided that not more than K packets are lost, where K <= M - N. Why do this at the TCP level at all? This is a link layer problem and ought to be dealt with at the link layer. FEC at the WiFi link layer is apparently not as effective as it should be.
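
    As a toy illustration of the "M linear combinations of N packets" idea (with made-up numbers, and ordinary integer arithmetic standing in for the finite-field arithmetic a real implementation would use): take N = 2 payload values, send M = 3 independent combinations, and any single loss still leaves a solvable 2x2 system.

        # Toy example: N = 2 "packets" (as numbers), M = 3 coded combinations.
        # Any single loss (K = 1 <= M - N) still leaves two independent equations.
        p1, p2 = 42, 99
        coded = {                                 # coefficients (a, b) and value a*p1 + b*p2
            'c1': ((1, 1), 1 * p1 + 1 * p2),
            'c2': ((1, 2), 1 * p1 + 2 * p2),
            'c3': ((1, 3), 1 * p1 + 3 * p2),
        }
        del coded['c2']                           # pretend one coded packet is lost

        ((a1, b1), v1), ((a2, b2), v2) = coded.values()
        det = a1 * b2 - a2 * b1                   # solve the 2x2 system (Cramer's rule)
        r1 = (v1 * b2 - v2 * b1) // det
        r2 = (a1 * v2 - a2 * v1) // det
        assert (r1, r2) == (p1, p2)

    A real implementation does the same linear algebra over packet bytes in a finite field, so the coded packets stay the same size as the originals.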

  • Don't Misunderstand (Score:5, Informative)

    by rabtech ( 223758 ) on Tuesday October 23, 2012 @06:02PM (#41746307) Homepage

    Most of the posters here seem to misunderstand what this is; it is NOT just a simple Forward Error Correction scheme.

    TCP has two design decisions (as pointed out by others) that are totally wrong for a WiFi environment where random packet loss is a normal and expected operating condition:

    1. A lost packet means a collision or network congestion, therefore the proper response to a lost packet notification is to slow down the transmit rate

    2. When packet #2 is lost, even though the client has perfectly cromulent copies of packets 3-1094932, they must *all* be thrown away and the entire stream is restarted. There is no facility to go back and retransmit just the missing packet - the ACK can't say "I got packets 1,3-1094932, please just re-send #2".

    This new scheme reconstructs packet #2 in this scenario by using the FEC data in the other packets. This allows the link to tolerate a certain % of packet loss without requiring any re-transmits, thus all those packets from 3 upwards don't have to be retransmitted. It also greatly reduces latency as reconstructing packet #2 is faster (due to the computationally efficient FEC algorithm) than requesting a retransmit. This also prevents the TCP link from scaling back its transmit rate, further improving performance.

    It's definitely clever. One of the downsides of relying on older technology (TCP in this case) is when it makes fundamental assumptions that are completely, horribly wrong for new applications (WiFi).

    To those who ask why not just do this at the link layer? Because then you are wasting the effort on protocols like UDP, etc. that may not want or need this kind of correction. It may also introduce delays that are unacceptable for certain applications (like VoIP). A 50ms delay is a fine price to pay to keep your file transfer from degrading from 10mbit to 0.5mbit, but is completely useless during a VoIP call or a multi-player FPS. Personally I'd like to see this kind of tech rolled into the next version of TCP to make it an official part of the standard... then again I'd like to see the standard MTU size increased given the ubiquity of gigabit ethernet these days, but that still hasn't happened as far as I know, due to incompatibilities, interop issues, etc.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      TCP does not require all the packets be re-transmitted when Selective ACK is supported. Selective ACK is ubiquitous.

      Preventing TCP from scaling back the transmit rate is a defect, not a feature.

  • by russotto ( 537200 ) on Tuesday October 23, 2012 @08:04PM (#41747073) Journal
    They've rediscovered forward error correction?
  • by WaffleMonster ( 969671 ) on Wednesday October 24, 2012 @11:56AM (#41753913)

    When reading this paper it is important to not lose sight of reality.

    "This is due to the fact that TCP requires in-order reception."

    ORLY...

    "A single packet loss within a transmission window forces all subsequent packets in the window to be out of order. Thus, they are discarded by the TCP receiver."

    More like selective amnesia (SACK).

    Translation "our ECC shit only works on sequential frames"

    If you assume the link layer is reliable (trivial to no packet loss in the absence of congestion) and error free, then the Internet works. If you don't, then it quickly turns to mush.

    TCP is simply the wrong place for this crap. You need to do retransmission/correcting codes at the link layer if you have a noisy enough environment to warrant it.

    We don't see TCP checksums being relied upon to catch actual errors; all the heavy lifting is done at the link layer with fancy algorithms for a reason: it is closer to the real world/source of error.

    If you have a radio then integrate predictive moderation of correcting codes with QOS/channel provisioning and current knowledge of the RF environment. This way higher layers do not see mixed signals with respect to congestion avoidance.

    Their answer to this was simply to rely on the threshold of loss where the correction scheme was still able to fill in the gaps. This is naive and far from optimal.

    Another benefit of doing these things at the radio link is that there is less lag on detection of the head-end frame, as other frames for unrelated sessions may be used to assist in rapid detection of missing frames.

    Delaying or otherwise bundling frames into some sort of super cork is not the answer either, as you're just trading throughput for latency.

    Finally, I think it is important to consider the possibility that just maybe TCP/HTTP is not the right solution for every single problem in the first place. Ultimately, attempting to mask these realities will not only never be optimal but can actually cause harm *IF* people are willing to trade congestion avoidance for gains they don't deserve to have in the first place.

    To be clear I'm not saying there is no merit to error correction schemes... I just don't think they belong in IP.
