New WiFi Protocol Boosts Congested Wireless Network Throughput By 700%

MrSeb writes "Engineers at NC State University (NCSU) have discovered a way of boosting the throughput of busy WiFi networks by up to 700%. Perhaps most importantly, the breakthrough is purely software-based, meaning it could be rolled out to existing WiFi networks relatively easily — instantly improving the throughput and latency of the network. As wireless networking becomes ever more prevalent, you may have noticed that your home network is much faster than the WiFi network at the airport or a busy conference center. The primary reason for this is that a WiFi access point, along with every device connected to it, operates on the same wireless channel. This single-channel problem is also compounded by the fact that it isn't just one-way; the access point also needs to send data back to every connected device. To solve this problem, NC State University has devised a scheme called WiFox. In essence, WiFox is some software that runs on a WiFi access point (i.e. it's part of the firmware) and keeps track of the congestion level. If WiFox detects a backlog of data due to congestion, it kicks in and enables high-priority mode. In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal. We don't have the exact details of the WiFox scheme/protocol (it's being presented at the ACM CoNEXT conference in December), but apparently it increased the throughput of a 45-device WiFi network by 700%, and reduced latency by 30-40%."
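
A minimal sketch of the behavior the summary describes, since the actual WiFox algorithm is unpublished until the CoNEXT presentation; the class, method names, and thresholds below are all invented for illustration:

    # Hypothetical sketch of WiFox-style congestion handling as described
    # in the summary. The real algorithm is unpublished; HIGH_WATER,
    # LOW_WATER, and the structure are assumptions, not the NCSU design.
    import collections

    HIGH_WATER = 200   # queued frames that trigger high-priority mode (assumed)
    LOW_WATER = 10     # backlog level at which normal operation resumes (assumed)

    class AccessPoint:
        def __init__(self):
            self.tx_queue = collections.deque()
            self.high_priority = False

        def service(self):
            backlog = len(self.tx_queue)
            if not self.high_priority and backlog >= HIGH_WATER:
                self.high_priority = True    # take over the channel
            elif self.high_priority and backlog <= LOW_WATER:
                self.high_priority = False   # backlog cleared, back to normal

            if self.high_priority:
                # Drain back-to-back, skipping the usual contention back-off.
                while len(self.tx_queue) > LOW_WATER:
                    self.transmit(self.tx_queue.popleft())
            elif self.tx_queue:
                # Ordinary CSMA/CA behavior: one frame, then contend again.
                self.transmit(self.tx_queue.popleft())

        def transmit(self, frame):
            pass  # hand the frame to the radio
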
  • by Anonymous Coward

    In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal

    Yeah, and you'd never have another AP with the same channel on a different 'network'. How is the AP supposed to just instantly have 'total control'?

    • Re:Um... (Score:4, Informative)

      by Bruce Perens ( 3872 ) <bruce@perens.com> on Thursday November 15, 2012 @12:10AM (#41988695) Homepage Journal

      Yeah, and you'd never have another AP with the same channel on a different 'network'. How is the AP supposed to just instantly have 'total control'?

      Just of its own clients and other stations that can hear it. Some packets will potentially be lost due to the hidden-transmitter problem.

      • How does it do that exactly? Isn't flow control an optional part of the protocol? I guess you could use non-standard interframe distances, but that would be non-compliant ... and it would indeed deny other APs access.

        • It is probably using RTS for its own clients, and just not acknowledging them. But yes, it could hog the channel, not using interframe pauses and not backing off until it's done.

          It's all rather silly, because the AP would not have queues that full were it not for bufferbloat.

    • by tattood ( 855883 )
      Doesn't the AP already have complete control over the channel?
      • No. Any AP can use any (legal) channel. In the 2.4 GHz range (802.11b/g/n) this is particularly a problem because of the lack of channels. Anyone with a mobile hotspot is an AP, and they are usually set up to pick the "least congested channel" with no regard for the 1,6,11 rule for 2.4. If you turn on a scanner in an airport you'll see a very ugly picture for the 2.4 range. Tons of overlapping channels and RSSI levels. With people ignoring the 1,6,11 rule of thumb, if you are on channel 6 (for examp

        • The solution is for everyone to move to 5 GHz (802.11a/n/ac). It's happening, slowly. The biggest help for this was probably with Samsung and Apple adding the 5 GHz band to their phones this year. Eventually 2.4 GHz will be a relic, I hope.

          The biggest problem is getting devices that support 5GHz 802.11n; they almost always support 2.4GHz 802.11n, but only the dual-band devices support both 2.4GHz and 5GHz ranges; the single-band devices are almost 100% 2.4GHz devices.

          Glad to hear Samsung and Apple are adding the 5GHz support, but it'll still be a long time before it'll really be useful again.

          BTW, I know this mostly from trying to find a device for my old laptop. I finally settled on a 2.4GHz-only USB device as I couldn't find a dual-band

          • I've made most of my purchases with a 5 GHz radio being a huge deciding factor. It's one of the main reasons I've gone almost all Apple for mobile devices. Every model of the iPad has had both bands. The iPhone finally caught up this year. Most Android tablets have been 2.4 only, which for me was crippling because I work as a network architect for a lot of high-density technical conferences. No matter how many low-power microcell radios you put out, you can't get enough people on with decent performa

            • Not disagreeing, just saying it's a harder issue to tackle. There are too many 2.4GHz only devices out there, and it'll take time to convert things over. Just like it'll take time for devices to upgrade to using 802.11n instead of 802.11g (which is still extremely popular).

              When I upgraded my router (to a Linksys 4200) I got one that is dual-band; and really wanted to be able to use the 802.11n in the 5GHz range - only to be very disappointed in what I could find, as the 802.11n and 802.11g both share the
              • I actually had the E4200. Nice looking, and worked better than its predecessor. For a while my Linksys devices would just degrade very quickly, first with constant reboots needed to keep throughput working, then eventually throughput just dropped down to about 1 Mbps. It was ridiculous.

                Anyway, on any dual-band router, including the E4200, you can name your bands separately. For me I named them after my pets. That's irrelevant but for this purpose it made sense. I named my 2.4 GHz network "Tigger" whi

                  I actually had the E4200. Nice looking, and worked better than its predecessor. For a while my Linksys devices would just degrade very quickly, first with constant reboots needed to keep throughput working, then eventually throughput just dropped down to about 1 Mbps. It was ridiculous.

                  Anyway, on any dual-band router, including the E4200, you can name your bands separately.

                  Yes, I already did that. My problem is not the router - it's the lack of devices that support the 5GHz band. Right now, I only have 1 802.11n device - a USB network card that replaced the miniPCI a/b/g card in my laptop. However, the USB device only supports the 2.4GHz band. So it gets to run on 802.11n with its own AP, while everyone else shares the 802.11g network. I would have much preferred to use it in the 5GHz band, but as I noted in my earlier posts, finding a card to do so is not very easy. (miniPCI-

                  • It sounds like you are running two 2.4 GHz APs then, which means you are using 2 of the 3 non-overlapping channels in the band assuming they are using 1,6, or 11, and not the same one. I would check the channel planning to be sure you're getting the best signal quality. If you're in a busy place (dirty air, high RSSI from other APs around you), you would probably get better all around performance if you use just one AP on the best available (1,6,11) channel. I think the 4200 will let you force n-only for

                    • Then next time you buy something, make sure it's 5 GHz capable. :-D I don't know your local shopping situation, but all the "tech" stores around here (Staples, Best Buy, etc...) have at least a couple models of dual-band USB 2.0 adapters in the $40 range.

                      If you can find a USB 2.0 Dual-Band Wifi 802.11g/n device that uses a minimum amount of space sticking out, then I'd love to hear it. The one I got sticks out maybe 10 mm from the computer - 90% of it is the USB connection; and it's the only one I could find that met my requirements. (I have a 2 year old at home, so I don't want a USB device sticking out that could be easily broken if the laptop were to be knocked over for some reason.)

                      Most of the Dual-band devices I found were large, had cables, etc -

        • The solution is for everyone to move to 5 GHz (802.11a/n/ac). It's happening, slowly. The biggest help for this was probably with Samsung and Apple adding the 5 GHz band to their phones this year. Eventually 2.4 GHz will be a relic, I hope.

          And once 802.11ac is being used to its full potential, using up channels with a width of 80 or even a whopping 160 MHz (4x or 8x the width of a full channel), we'll be forced to move on to yet another frequency band. Until the cycle starts over again and we're all out of usable, legal channels. And then where do we go? Keep going up the spectrum until the wavelengths are so short we have to be in the same room without an object blocking the wall to be able to keep a connection?

          Don't get me wrong--that ex

          • Bleh... a few corrections to mistakes I made:

            "Meanwhile, channel 6 is also caught in the crossfire of both the guy running on channel 3 and the guy running on channel 9."

            And:

            "Keep going up the spectrum until the wavelengths are so short we have to be in the same room without an object blocking the signal between the client and AP to be able to keep a connection?"

          • One good thing 5 GHz has going for it is you don't get cross-channel interference like you do in 2.4 GHz. So of the 23 channels you have to work with, if you bind 2, or 4, or 8 (apparently), you aren't hurting channels adjacent to that. In 2.4, with 802.11n using a 40 MHz window taking up channels 6 through 11, you're still causing interference for 3-4 and 12-14.

            Your point is true. 802.11ac is going to be a pain for people like me that have to provide wireless connectivity for high-density areas and event

            • And home router manufacturers will continue to make bonded channels the default setting to make sure customers experience what they have printed on the outside of the box.

              Yeah... that's the truly bad part. :\

              They should allow wide channels, but *never* enable it by default... because the fact is, there will come a time when these 5 GHz routers become mainstream, and eventually so will these routers that are set to use multiple channels by default, and as soon as that happens... their claims on the box effectively become a big fat lie.

              If these companies were really trying to improve the experience and bandwidth performance for their customers, they would tell their marketing

  • "We don't have the exact details of the WiFox scheme/protocol, but apparently it increased the throughput of a 45-device WiFi network by 700%, and reduced latency by 30-40%." And what makes this different than Quality of Service (QOS)?
    • by Anonymous Coward on Wednesday November 14, 2012 @10:31PM (#41988219)

      > And what makes this different than Quality of Service (QOS)?

      You could, you know, RTFA.

      "If WiFox detects a backlog of data due to congestion, it kicks in and enables high-priority mode. In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal."

      QoS just prioritizes packets in the buffer for transmission. It does nothing about spectrum. This scheme seems to have the AP tell all of the clients to STFU and get off the spectrum whenever it has a backlog of packets to dump.
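
      To make the distinction concrete: a QoS scheduler only reorders the AP's own buffer, roughly like the strict-priority sketch below (the priority levels and names are illustrative, not any particular vendor's implementation); winning the shared spectrum is a separate problem that reordering cannot solve.

          # Strict-priority dequeue: QoS picks *which* queued packet goes next,
          # but transmission still contends for the channel like everyone else.
          import heapq

          class QosBuffer:
              def __init__(self):
                  self._heap = []
                  self._seq = 0  # tie-breaker: FIFO within a priority level

              def enqueue(self, packet, priority):
                  # Lower number = higher priority (e.g. 0 = voice, 2 = best effort)
                  heapq.heappush(self._heap, (priority, self._seq, packet))
                  self._seq += 1

              def dequeue(self):
                  return heapq.heappop(self._heap)[2] if self._heap else None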

      • by icebike ( 68054 ) *

        Still one has to ask how much bandwidth the other units get while the porn downloader gets full bandwidth just so the router can clear its buffers.

        It seems like this scheme favors the worst offenders, and imposes delays on the rest of the network users instead of telling the flooder to STFU.

        We've been using TCP/IP for decades and burned up thousands of hours in queuing theory, simulations, etc. What is the chance they missed something that big?

        • I was at a training session last week by a wireless network vendor. One of the people in the class was a programmer from MetaGeek, who happens to work on Wireless Packet Capture Analysis software.

          He did a 5 minute capture of the activity in the conference and then showed the results. Consistently, the iOS devices in the room were reserving significantly larger amounts of time on the Access Point, and then using a fraction of the time. This would reduce the amount of bandwidth available for other devic

      • by Nutria ( 679911 )

        That seems to turn the WAP from a hub into a switch.

      • I wonder how they can get a 700% performance gain. It sounds like too much.

        Why? Because there is a limit to the total amount of data an access point can handle: this is a hard limit, defined by the WiFi protocols and transceiver hardware. This would suggest that a congested access point handles no more than 12% of the maximum traffic - and that is for all the connected nodes together.

        If a protocol can handle say 10 Mbit/s, then you can't magically make it 80 Mbit/s. So to have 700% gain, you have to start off

        • by suutar ( 1860506 )
          TLDR version: the main performance killer in a wireless net is retransmit delays due to packet loss and collision. Reduce those by any means and your effective speed goes up a _lot_.
          Thing is, there's medium speed, which is what you're talking about, and there's perceived speed, which is 'how long does it take to download my file', which includes things like retransmit delays. There was an article a few weeks ago that illustrated that if you can reduce retransmits by reducing collisions and/or packet loss,
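
          A back-of-envelope version of that point (all numbers are invented for illustration): each lost or collided frame costs an extra transmission plus a back-off wait, so effective speed drops much faster than the raw loss rate suggests.

              def effective_mbps(link_mbps, loss, frame_ms=0.5, backoff_ms=2.0):
                  # Expected transmissions per frame under independent loss,
                  # each retry paying an extra back-off delay. Toy model only.
                  tries = 1.0 / (1.0 - loss)
                  time_per_frame = tries * frame_ms + (tries - 1) * backoff_ms
                  return link_mbps * frame_ms / time_per_frame

              for loss in (0.0, 0.1, 0.3, 0.5):
                  print(f"{loss:.0%} loss -> {effective_mbps(54, loss):.1f} Mbps")
              # 0% -> 54.0, 10% -> 34.7, 30% -> 17.2, 50% -> 9.0 Mbps
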
      • by soldack ( 48581 )

        While the AP is sending faster and more often, clients will suffer. If the AP is blasting to client A, client B will struggle to upload. This may cause client B to disconnect or reassociate. It isn't clear how they prevent starvation at the client transmit side.

        • by suutar ( 1860506 )
          Yeah, but while the AP is blasting to A, A isn't sending more requests (or even ACKs), so the buffers will not just refill. Once the AP is done emptying its buffers, B should have the same ability to talk as A. I'm not saying there can't be pathological cases, but it doesn't seem likely to be a very common occurrence.
          • by soldack ( 48581 )

            My thought is what if a LAN client, C, is sending UDP traffic through the AP to A. As long as it blasts, B can barely speak. It would be interesting if they were dynamically altering the multi-rate retry algorithm to decrease retries as the queue filled and to even turn on ack requests when the queue was all the way filled. That can help at the cost of reliability. Send it and forget it.
            I wonder if they experimented with traditional router methods of dealing with queuing issues. Things like Random early d

    • by MarcQuadra ( 129430 ) on Wednesday November 14, 2012 @11:21PM (#41988475)

      The difference is that most network admins shirk the task of responsibly implementing QoS, but they'd gladly pay a hefty licensing fee to their wireless vendors for a product with a name like WiFox that 'boosts performance' by clobbering the network instead of cleverly balancing it to perform well.

      • Or that most wifi networks aren't managed by actual network admins.

      • by fsterman ( 519061 ) on Wednesday November 14, 2012 @11:48PM (#41988587) Homepage

        And what makes this different than Quality of Service (QOS)?

        - madsci1016

        The difference is that QOS is a passive buffer queuing mechanism that is susceptible to buffer bloat [wikipedia.org], when large buffers (meant to help traditional QOS) trick the congestion avoidance algorithms:

        ...buffers have become large enough to hold several megabytes of data, which translates to 10 seconds or more at a 1 Mbit/s line rate used for residential Internet access. This causes the TCP algorithm that shares bandwidth on a link to react very slowly as its behavior is quadratic in the amount of buffering.

        WiFi has a LOT of dropped packets and huge buffers, so the problem is vastly magnified. QOS over WiFi involves a LOT of voodoo, and it's why P2P had such a negative impact on networks despite QOS. The fix is an active queuing [wikipedia.org] mechanism, like WiFox.

        The difference is that most network admins shirk the task of responsibly implementing QoS, but they'd gladly pay a hefty licensing fee to their wireless vendors for a product with a name like WiFox that 'boosts performance' by clobbering the network instead of cleverly balancing it to perform well.

        - MarcQuadra

        Uhh, no [wikipedia.org] ... the traditional balancing mechanisms are making it worse,

        In a network link between a fast and a slow network packets can get backed up. Especially at the start of a TCP communication when there is a sudden burst of packets, the link to the slower network may not be able to process the sudden communication burst quickly enough. Buffers exist to ease this problem by giving the fast network a place to push packets, to be read by the slower network as fast as it can. However, a buffer has a finite size: it can hold a maximum number of packets, called the window size. The ideal buffer has a window size such that it can handle a sudden burst of communication and match the speed of that burst to the speed of the slower network. This situation is characterized by a temporary delay for packets in the buffer during the communications burst, after which the delay rapidly disappears and the networks reach a balance in offering and handling packets.

        Network links have an inherent balance which is determined by the packet transmission and acknowledgement cycle. When a packet is sent, TCP usually acknowledges it before it will accept the next packet. This means that a network must transmit a packet and then transport the acknowledgement back before the next packet is pushed into the link. The time it takes to transport a packet and transport back the acknowledgement is called the round-trip time (RTT). If a buffer is large enough to handle a burst, the result will be smooth communication with (eventually) a low delay for packets in the buffer. But if the buffer is too small, then the buffer will fill up and will itself not be able to accept more packets than one per RTT cycle. So the buffer will stabilize the rate at which packets enter the network link, but it will do so with a fixed delay for packets in the buffer. This is called bufferbloat: instead of smoothing the communication, the buffer causes communication delays and lowers utilization of the network link (i.e. causes the network link to carry less than its capacity of packets).
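
        The "10 seconds or more at a 1 Mbit/s line rate" figure quoted above falls straight out of the arithmetic: the latency a full buffer adds is just its size divided by the line rate.

            # Queueing delay added by a full buffer = buffer size / line rate.
            buffer_bytes = 1.25e6          # ~1.25 MB of buffered data
            line_rate_bps = 1e6            # 1 Mbit/s residential uplink
            delay_s = buffer_bytes * 8 / line_rate_bps
            print(f"{delay_s:.0f} s of added latency")  # -> 10 s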

        • Whoever modded this down, would you mind explaining why? I honestly want to know if I have any factual errors :)

          • by Anonymous Coward on Thursday November 15, 2012 @02:37AM (#41989265)

            I'm guessing it's because you made a brief mention of P2P which didn't involve bowing down and worshipping the protocol. The more sane mods have apparently fixed it, as you're at +5 Informative right now.

            I would like to add some information about QoS. It's actually got very little or nothing to do with buffer bloat in the way you describe. It's easier to illustrate by thinking about a simple point-to-point link. There are actually two types of buffers: one at each interface and another in the routing/switching engine. The buffer we're talking about exists at the interface level; it's a FIFO-type queue. The switch/router which is sending the data decides what order to place packets on the interface based on the QoS markings, but once they are in the buffer they come out in the same order they went in. On the other end of the link, the packets arrive on the interface and get buffered before arriving on the switching/routing engine - again it's a first-in-first-out queue. Once the switching/routing engine gets the packets, it can then determine what order to push them to the next interface based on QoS markings.
            Generally speaking, as long as there is enough available bandwidth on a link, QoS isn't really going to have much noticeable effect on the traffic. And just for the record, QoS only matters within YOUR network- never expect your QoS markings to have any effect when the traffic goes to someone else's network. If you try to honor external QoS markings, sooner or later some asswipe will notice and just start marking all HIS data at the highest priority level, and then you're back to square one.

            Now what this article is talking about is a different scenario, it's actually a very old and simple problem... unfortunately the solution in this case is not so simple. Wireless access points operate like a hub- everything is in the same collision domain. This means that all the devices are filling up the same buffer on the wireless interface on the AP, and the AP's outgoing buffer is handling the traffic to all those devices as well. And aside from every device using a unique frequency channel (not feasible), there's no way to really resolve this like we can with wired mediums. So what they've done is come up with a software-based solution which makes the wireless AP act like something halfway between a hub and a switch... we don't really know more than that because they haven't released the details yet.

      • by icebike ( 68054 ) *

        The difference is that most network admins shirk from the task of responsibly implementing QoS,

        Maybe that is because it's impossible to implement QOS over anything but the smallest of networks, and then only for a subset of users.

      • by skids ( 119237 )

        You know not of what you speak. WiFi QoS schemes are generally just extensions of diffserv, so they are only really useful for prioritization based on what the endpoints set as a QoS classification, or if you think packet mangling is a good idea, based on what your routers can and cannot do with these flags. You can also base classifications on your own policy about applications on a very coarse level. None that I have seen offer any QoS-based per-connection fairness solution, e.g. WFQ or SF-BLUE, using

    • by mellon ( 7048 )

      Actually, the usual reason for a WiFi network to be slow is not a lack of QoS-based routing, but rather bufferbloat [wikipedia.org], which is what happens when transmit queue buffers in a router or bridge are not tuned to the carrying capacity of the transport. This results in packets being transmitted late, and hence their responses coming late, which fools the TCP congestion control algorithm and causes it to stutter. The result is that although there is sufficient bandwidth on the link, people don't get to use it, b

      • by KZigurs ( 638781 )

        Interesting take. I've always lived by the gospel that if you put a packet on the radio you are playing it mighty close and packet loss will trigger the congestion avoidance sooner or later. AFAIK most mainstream congestion avoidance implementations will actually happily take into account late replies (causing recalculation of mean RTT) and adjust accordingly. Anything I'm missing?

        (also - burn the bastards who like to put extra buffers and proxies in _MY_ TCP conversation)

  • by Anonymous Coward

    If WiFox detects a backlog of data due to congestion, it kicks in and enables high-priority mode. In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal.

    I am sure that there are absolutely no practical downsides

    And, in practice, the system will never thrash when faced with realistic and unpredictable load.

    Sounds like a research project that assumes a nice, friendly environment.

    • by icebike ( 68054 ) *

      Exactly my thoughts.

      What is the likelihood that something simple, fair, and with no elephants hiding in the bushes has been missed by all the queuing experts lo these many years?

  • by Anonymous Coward

    700% faster pr0n downloads!

  • Just when I get a new wireless router, more upgrades come out. I hope the other companies can provide a firmware download so the rest of us low-lifes can enjoy better connections.
  • by Anonymous Coward on Wednesday November 14, 2012 @10:32PM (#41988221)

    Sounds like they're messing with 802.11 CSMA/CA min channel idle time and backoff on the AP to boost its transmit priority (and probably also force RTS/CTS handshaking when there's too many collisions between client transmits).
    Neat idea, but not exactly new. Plenty of APs optimized for WISP usage do this already.
    Maybe the novel part is in dynamically determining congestion level or something...
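
    If that guess is right, the effect follows from the contention arithmetic alone. A sketch with 802.11-flavored numbers (the specific AIFS and contention-window values are assumptions, not from the paper):

        import random

        SLOT_US = 9  # 802.11n slot time, microseconds

        def backoff_us(aifs_slots, cw):
            # Wait AIFS, then a uniform random draw of slots in [0, cw].
            return (aifs_slots + random.randint(0, cw)) * SLOT_US

        client_wait = backoff_us(aifs_slots=3, cw=15)  # normal station
        ap_wait = backoff_us(aifs_slots=1, cw=3)       # "boosted" AP (assumed)
        # The AP's wait is almost always shorter, so it wins the channel.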

    • Yes, this is true for WISPs; however, you have to use the proprietary protocol at both the AP and the client side (subscriber unit). This is something new if it works with just the AP, so the clients don't need to upgrade.

      Also, the AP running slow at the airport is not just because it uses one channel. If someone is hammering WiFi, it tends to let that person use more bandwidth than they should, but your AP at your home out in the middle of the country does OK with one channel and multiple people using it.

      The issue i

  • by cosm ( 1072588 ) <thecosm3@gmai l . c om> on Wednesday November 14, 2012 @10:37PM (#41988261)
    I'm guessing this works to solve temporary channel congestion issues, and not over-subscription (to which the only real solution, in my opinion, is to get a bigger pipe at the over-subscription point). My guess is that they keep buffers for each of the hosts associated with the AP, and when one of the buffers begins to get to some relative threshold they ignore the RTS frames from the other stations and allow said buffer to clear to some min point before sending a CTS to the other stations.

    If all of your associated hosts are simultaneously trying to send data in a full-mesh (all hosts talking to all hosts), I don't see how this would alleviate spectrum congestion (and you would think in this scenario latency would go up if they are round-robin'ing the queue clearing).

    Implementation details would be sweet. To me this sounds like ETS queuing/COS as seen on enterprise wired L2/L3 switches. Have to wonder if there is any RED/WRED when queues reach max size? Speculation....
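
    cosm's buffer-threshold guess above, reduced to pseudologic (entirely speculative; the hysteresis thresholds are invented):

        HIGH, LOW = 100, 10  # per-buffer thresholds in frames (assumed)

        def should_grant_cts(state, buffers):
            # state: {"draining": bool}; buffers: station -> queued downlink frames
            worst = max(buffers.values(), default=0)
            if worst >= HIGH:
                state["draining"] = True    # stop answering RTS, drain downlink
            elif worst <= LOW:
                state["draining"] = False   # backlog cleared, let clients talk
            return not state["draining"]
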
    • by complete loony ( 663508 ) <Jeremy.Lakeman@nOSpaM.gmail.com> on Wednesday November 14, 2012 @11:17PM (#41988449)

      Before a wifi device can transmit a packet, it must wait for a period of silence where no carrier is detected. Any device can simply keep transmitting to hog the channel indefinitely.

      This is the same idea behind the 802.11e burst mode, where you transmit a number of packets in quick succession, then ask the intended recipient to give you a bitmask of all the successfully delivered frames, without pausing long enough for any other devices to jump in.
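
      The burst idea in code form (the radio API here is invented, shown only to make the sequence concrete):

          def send_burst(radio, frames):
              for seq, frame in enumerate(frames):
                  radio.transmit(seq, frame)       # back-to-back, no contention gap
              acked = radio.request_block_ack()    # bitmask of delivered seqs
              # Retransmit only the gaps in the bitmask.
              return [f for seq, f in enumerate(frames)
                      if not (acked >> seq) & 1]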

    • Given the name, I speculate that this software does a barrel roll periodically.
  • The described scheme (blah blah "priority mode" blah blah) addresses the described problem (congested channel) not at all.
  • Coded TCP ? (Score:4, Interesting)

    by bug1 ( 96678 ) on Wednesday November 14, 2012 @10:39PM (#41988273)

    Is this the same as last month's "breakthrough" technology described in the MIT Technology Review?
    http://www.technologyreview.com/news/429722/a-bandwidth-breakthrough [technologyreview.com]

    That breakthrough uses special coded TCP secret technology only known to the select few who sign the NDA. The rest of us have known it since 1951 as Hamming Codes, or more recently Forward Error Correction.

      Is this the same as last month's "breakthrough" technology described in the MIT Technology Review?

      Since it is completely different, and originates from a different group at a different university, I would guess, no.

    • Re:Coded TCP ? (Score:5, Informative)

      by Dan East ( 318230 ) on Wednesday November 14, 2012 @10:58PM (#41988349) Journal

      No, the MIT scheme is a formal protocol which requires both the access point and the client devices to work together. The client is able to "fill in" missing data, because the data itself is expressed in a computational manner that allows the client to perform calculations and solve for missing data.

      What the NC group has done is simply make access points more assertive and "take control" of a channel by ignoring the fact that other devices are transmitting and talking over top of them. That scheme is applied when a backlog of data occurs, and assuming that most clients are consumers of data, it makes sense to push out cached data instead of wasting time listening to clients make additional data requests. Part of the reason this would work is that access points are optimally located in a given coverage area, they use higher gain antennas, and don't worry about reducing power to conserve batteries and the like, which allows them to "talk over" your typical client data consuming device.

      • No, the MIT scheme is a formal protocol which requires both the access point and the client devices to work together. The client is able to "fill in" missing data, because the data itself is expressed in a computational manner that allows the client to perform calculations and solve for missing data.

        I have not read the article on the MIT protocol, but it doesn't sound like it will work in the real world any time soon. There is no way I can expect the first generation iPads, Android 1.x phones, Kindles, Nooks, netbooks, etc. that people bring to my network to support a new protocol, even if the devices do have enough processing horsepower to do the calculations.

        What the NC group has done is simply make access points more assertive and "take control" of a channel by ignoring the fact that other devices are transmitting and talking over top of them.

        When two transmitters talk simultaneously it causes a collision. Normally when a collision is detected the transmitting parties back off and s

      Is this the same as last month's "breakthrough" technology described in the MIT Technology Review?

      It is the same in the sense that someone has claimed they invented something new with spectacular results when the concepts are already well known and deployed in production.

      How many queuing disciplines are there that would do the same thing without the side effect of skewing the send side?

  • Data consumers (Score:5, Informative)

    by Dan East ( 318230 ) on Wednesday November 14, 2012 @10:47PM (#41988305) Journal

    Sounds like this is based on the simple fact that most internet clients are consumers of data, not producers (high download to upload ratio). So if you make the access point more bossy, to the point of no longer playing nice and waiting its turn to transmit (thus it will be transmitting over the other devices in this mode), the overall result is more efficiency when moving larger amounts of data.

    This makes sense on a number of levels. There is no point in letting client devices waste airtime requesting more data (again, assuming they are primarily consumers of data) when there is already a backlog of data that needs to be pushed down the pipe. Additionally, access points are centrally located and have higher gain antennas, thus even when they "double" with another device, there is a good chance that the recipient device will still be able to "hear" the access point over the other devices.

    So I can see how this "high priority mode" would work, even if the formal protocol doesn't support it (ie the client devices can be totally stock). It's like being in a room full of people talking, and instead of waiting for people to quiet down to tell a friend across the room something, you simply start yelling. They are able to hear you because you're louder (higher gain antenna), and the other people don't have to quiet down either (since they're just talking).

    There would likely be problems with this scheme when multiple access points have overlapping coverage - there would be lots of collisions at the fringe areas where they overlap. It would also have problems when someone is performing a large upload at the same time someone is streaming data down, because the access point would keep turning a deaf ear to the uploader. Also, if you had two clients sitting side by side, that extremely close proximity could result in a client-to-client signal too strong for the access point to overcome.

    • If the entire network end-to-end mitigated the present bufferbloat issues, you would not need this. It's only because a local AP gets too-full queues that this problem happens.
    • by AmiMoJo ( 196126 ) *

      Also someone will release a modified firmware that puts your router into shouty mode where it hogs all the bandwidth.

  • by Xonea ( 637183 ) on Wednesday November 14, 2012 @11:24PM (#41988487)
    Like it was/is sometimes used in ham-radio packet radio or in satellite communication.

    http://en.wikipedia.org/wiki/Demand_Assigned_Multiple_Access [wikipedia.org]

    The wikipedia description actually makes it sound a bit more complex than it actually is. In packet-radio DAMA simply meant that the central station polled each node regularly and asked whether it had queued requests. The only thing a client was allowed to send unprompted was the "I am a new client" message.
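
    The polling loop that description boils down to, sketched with invented link primitives:

        def dama_master_cycle(nodes, link):
            # The master polls each known node in turn; nodes stay silent
            # unless polled, except for new-client announcements.
            for node in list(nodes):
                link.poll(node)                     # "anything queued?"
                reply = link.receive(timeout_ms=50)
                if reply:
                    link.forward(reply)
            newcomer = link.receive_announcement()  # "I am a new client"
            if newcomer:
                nodes.append(newcomer)
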
  • My guess, this will be another scheme where the network driver on the client has to respond to congestion control/back-off style requests from the AP, staying off the air for a (random?) amount of time. The AP will just be slightly more sophisticated about who it tells to back-off. Even NTP has this feature.
  • Someone should tell the authors that "a jump from around 1Mbps to around 7Mbps" is a 600% increase, not a 700% increase.

    Why is this concept so hard for people to understand?

    • by mdenham ( 747985 )

      A jump from 900kbps ("around 1Mbps") to 7200kbps ("around 7Mbps") is very much a 700% increase.

      The problem is a lack of significant figures here, which means that the "around 1Mbps to around 7Mbps" increase could be anywhere from as low as ~333% (1.5Mbps to 6.5Mbps) to as high as 1400% (0.5Mbps to 7.5Mbps).

      This response has been brought to you by the -pedantic switch. Have a nice day!

      • The trouble is that the English language is vague. If I say something increased by 100% I clearly mean that it doubled. So, if 1 plus 100% equals 2, then 1 plus 600% is 7.

        However, 2 is 200% of 1, just as 7 is 700% of 1.

        To be more specific, the summary should read "boosting the throughput of busy WiFi networks by up to 7 times", or "boosting the throughput of busy WiFi networks by up to 700% of previous throughput"
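
        For the record, the arithmetic this sub-thread is arguing about:

            old, new = 1.0, 7.0
            print(f"{new / old:.0%} of the original")   # -> 700% of
            print(f"{(new - old) / old:.0%} increase")  # -> 600% increase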

  • by Grieviant ( 1598761 ) * on Thursday November 15, 2012 @12:46AM (#41988851)
    Makes it sound like 700% gains are for everyone in the network. So we're to believe that occasionally discarding the buffered requests of a large subset of users magically solves a persistent congestion problem where more bandwidth is being demanded than is available? I suspect there will be a few happy users (who received priority) along with lots of sad faces.
    • Overall throughput is increased by 700%. How it affects you depends on your usage pattern and your bandwidth requirements.

    • The main issue with the standard is that access points are considered equal peers to the other devices on the network, and all devices should back off if the wifi medium is congested. But if everyone is fetching data from the internet and not from each other, the access point should be transmitting more than anyone else. Plus if we're talking TCP streams, which is likely, those clients are likely to be asking for re-transmissions. This will only make the problem worse, as the AP now has even more packets t

  • by L4t3r4lu5 ( 1216702 ) on Thursday November 15, 2012 @04:05AM (#41989599)

    ... [T]he breakthrough is purely software-based, meaning it could be rolled out to existing WiFi networks relatively easily.

    This is the engineer-speak version.

    Sales speak: "We can slash the R&D budget to nothing for the next 5 years by selling existing hardware with incremental improvements to the software stack, maximising our likelihood of getting a brand new Audi every six months. We should probably put a new fridge in the break room, though, so the peons don't get pissy about us shafting the consumer and not giving them a pay rise for the third year running."

  • by Viol8 ( 599362 ) on Thursday November 15, 2012 @05:16AM (#41989859) Homepage

    Use an ethernet cable if you're at home. Faster AND more secure. OK, you can't if it's a smartphone or an iPad, but I'm talking about when using proper computers, not toys.

    • And what % of homes are wired up?

      A friend of mine had about 20m of coax circumnavigating his rented home and had to apologise when the LAN went down because his gf had tripped over a cable, again.

      • by Viol8 ( 599362 )

        Why do they need to be wired up? Put the computer near the router FFS. Unless you live in a 20 bedroom mansion what's the problem?

        • The router is connected to a phone line (ADSL). The phone outlet isn't in a location that a computer could be set up.

          Not a 20-bedroom mansion. Just a regular suburban home where the number of WiFi-connected devices outnumbers the population of humans: desktop, laser printer, smartphone, laptop.

  • As their txqueue fills up they are just shifting packets from the best effort queue to the video queue and then to the voice queue (highest priority). These queues use more air time and have less space between packets. I am curious how it performs under a variety of traffic conditions (upload vs. download vs. mix). It would seem that if uploads and downloads are done at the same time, the downloads will block the uploads. What if the clients do the same thing?
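
    The escalation described in that comment, as pseudologic (the WMM access categories are real 802.11e concepts; the fill thresholds here are invented):

        WMM_ORDER = ["best_effort", "video", "voice"]  # low -> high priority

        def pick_access_category(queue_fill):  # queue_fill in [0.0, 1.0]
            # A fuller transmit queue gets promoted to a higher-priority WMM
            # category, which waits less between frames and claims more airtime.
            if queue_fill > 0.9:
                return "voice"
            if queue_fill > 0.5:
                return "video"
            return "best_effort"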

  • Then ISPs can throttle traffic to WiFi 1000% more effectively.

  • 802.11b/g operates as ALOHA as I understand it. And 802.11n is DAMA.

    I am beginning to think that what they are talking about here is as follows: the host recognizes it has a backlog of data. It sends all data to all stations at once. After the buffers are emptied, it then begins to poll the connected stations for their ACKs on that large databurst that was just sent. As long as the connected stations can hold onto those ACK packets for a few hundred milliseconds, all should be fine. The current way (
