New WiFi Protocol Boosts Congested Wireless Network Throughput By 700%

MrSeb writes "Engineers at NC State University (NCSU) have discovered a way of boosting the throughput of busy WiFi networks by up to 700%. Perhaps most importantly, the breakthrough is purely software-based, meaning it could be rolled out to existing WiFi networks relatively easily — instantly improving the throughput and latency of the network. As wireless networking becomes ever more prevalent, you may have noticed that your home network is much faster than the WiFi network at the airport or a busy conference center. The primary reason for this is that a WiFi access point, along with every device connected to it, operates on the same wireless channel. The single-channel problem is compounded by the fact that traffic isn't just one-way; the access point also needs to send data back to every connected device. To solve this problem, the NCSU engineers have devised a scheme called WiFox. In essence, WiFox is software that runs on a WiFi access point (i.e. it's part of the firmware) and keeps track of the congestion level. If WiFox detects a backlog of data due to congestion, it kicks in and enables a high-priority mode. In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal. We don't have the exact details of the WiFox scheme/protocol (it's being presented at the ACM CoNEXT conference in December), but apparently it increased the throughput of a 45-device WiFi network by 700% and reduced latency by 30-40%."
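Since the exact WiFox algorithm is unpublished, the backlog-triggered behavior described in the summary can only be sketched speculatively. A minimal illustration of the idea, with every name and the threshold value invented:

```python
# Hypothetical sketch of the WiFox idea: the AP watches its transmit backlog
# and toggles a high-priority mode in which it would claim the channel and
# drain its queue before yielding again. All names and the threshold are
# invented; the real protocol has not been published.
from collections import deque

class AccessPoint:
    def __init__(self, backlog_threshold=50):
        self.tx_queue = deque()
        self.backlog_threshold = backlog_threshold
        self.high_priority = False

    def enqueue(self, packet):
        self.tx_queue.append(packet)
        self._update_mode()

    def _update_mode(self):
        # Enter high-priority mode when the backlog builds up; leave it
        # once the queue has been fully drained.
        if len(self.tx_queue) > self.backlog_threshold:
            self.high_priority = True
        elif not self.tx_queue:
            self.high_priority = False

    def drain(self):
        # In high-priority mode the AP would hold the channel while doing this.
        sent = 0
        while self.tx_queue:
            self.tx_queue.popleft()
            sent += 1
        self._update_mode()
        return sent
```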
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by Anonymous Coward on Wednesday November 14, 2012 @11:31PM (#41988219)

    > And what makes this different than Quality of Service (QOS)?

    You could, you know, RTFA.

    "If WiFox detects a backlog of data due to congestion, it kicks in and enables high-priority mode. In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal."

    QoS just prioritizes packets in the buffer for transmission. It does nothing about spectrum. This scheme seems to have the AP tell all of the clients to STFU and get off the spectrum whenever it has a backlog of packets to dump.
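    The "prioritizes packets in the buffer" part can be sketched as a simple priority queue: QoS changes which of the AP's own packets goes out next, not who holds the spectrum. A toy illustration with invented priority values:

```python
import heapq

# QoS only reorders packets already sitting in the AP's own buffer; a lower
# number means higher priority. It has no effect on which station gets airtime.
buffer = []
for seq, (prio, name) in enumerate([(2, "bulk"), (0, "voip"), (1, "video")]):
    # seq breaks ties so equal-priority packets stay in arrival order
    heapq.heappush(buffer, (prio, seq, name))

order = [heapq.heappop(buffer)[2] for _ in range(len(buffer))]
# voip is dequeued first, then video, then bulk
```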

  • Data consumers (Score:5, Informative)

    by Dan East ( 318230 ) on Wednesday November 14, 2012 @11:47PM (#41988305) Journal

    Sounds like this is based on the simple fact that most internet clients are consumers of data, not producers (high download-to-upload ratio). So if you make the access point more bossy, to the point of no longer playing nice and waiting its turn to transmit (meaning it will transmit over the other devices in this mode), the overall result is more efficiency when moving larger amounts of data.

    This makes sense on a number of levels. There is no point in letting client devices waste airtime requesting more data (again, assuming they are primarily consumers of data) when there is already a backlog of data that needs to be pushed down the pipe. Additionally, access points are centrally located and have higher gain antennas, thus even when they "double" with another device, there is a good chance that the recipient device will still be able to "hear" the access point over the other devices.

    So I can see how this "high priority mode" would work, even if the formal protocol doesn't support it (i.e. the client devices can be totally stock). It's like being in a room full of people talking, and instead of waiting for people to quiet down to tell a friend across the room something, you simply start yelling. They are able to hear you because you're louder (higher gain antenna), and the other people don't have to quiet down either (since they're just talking).

    There would likely be problems with this scheme when multiple access points have overlapping coverage - there would be lots of collisions at the fringe areas where they overlap. It would also have problems when someone is performing a large upload at the same time someone is streaming data down, because the access point would keep turning a deaf ear to the uploader. Also, if you had two clients sitting side by side, that extremely close proximity could produce too strong a client-to-client signal for the access point to overcome.

  • Re:Coded TCP ? (Score:5, Informative)

    by Dan East ( 318230 ) on Wednesday November 14, 2012 @11:58PM (#41988349) Journal

    No, the MIT scheme is a formal protocol which requires both the access point and the client devices to work together. The client is able to "fill in" missing data, because the data itself is expressed in a computational manner that allows the client to perform calculations and solve for missing data.
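    The "solve for missing data" idea can be illustrated with a toy XOR parity packet; the actual MIT scheme uses random linear combinations over larger blocks, so this shows only the flavor of it, with invented packet contents:

```python
# Toy illustration of "solving for" a lost packet: alongside packets a and b,
# the sender transmits their XOR. If any one of the three is lost, the
# receiver can reconstruct it from the other two. Real coded TCP uses random
# linear combinations, not a single XOR parity.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

a = b"hello wi"
b = b"fi world"
parity = xor_bytes(a, b)

# Suppose packet b is lost in transit; recover it from a and the parity.
recovered_b = xor_bytes(a, parity)
assert recovered_b == b
```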

    What the NC group has done is simply make access points more assertive, "taking control" of a channel by ignoring the fact that other devices are transmitting and talking over top of them. That scheme is applied when a backlog of data occurs, and assuming that most clients are consumers of data, it makes sense to push out cached data instead of wasting time listening to clients make additional data requests. Part of the reason this would work is that access points are optimally located in a given coverage area, use higher-gain antennas, and don't worry about reducing power to conserve batteries and the like, which allows them to "talk over" your typical data-consuming client device.

  • by fsterman ( 519061 ) on Thursday November 15, 2012 @12:48AM (#41988587) Homepage

    And what makes this different than Quality of Service (QOS)?

    - madsci1016

    The difference is that QoS is a passive buffer-queuing mechanism that is susceptible to bufferbloat, where large buffers (meant to help traditional QoS) trick the congestion avoidance algorithms:

    "...buffers have become large enough to hold several megabytes of data, which translates to 10 seconds or more at a 1 Mbit/s line rate used for residential Internet access. This causes the TCP algorithm that shares bandwidth on a link to react very slowly, as its behavior is quadratic in the amount of buffering."

    WiFi has a LOT of dropped packets and huge buffers, so the problem is vastly magnified. QoS over WiFi involved a LOT of voodoo, and it's why P2P had such a negative impact on a network despite QoS. The fix is an active queue management mechanism, like WiFox.
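    For flavor, one classic active queue management scheme is Random Early Detection (RED): drop probability ramps up as the queue grows, so senders back off before the buffer bloats. WiFox's actual mechanism is unpublished; this is a minimal sketch with illustrative thresholds:

```python
# Sketch of RED (Random Early Detection), a classic active-queue-management
# scheme: below min_th nothing is dropped, above max_th everything is, and in
# between the drop probability ramps linearly up to max_p. All parameter
# values here are illustrative, not tuned.
def red_drop_probability(queue_len, min_th=5, max_th=15, max_p=0.1):
    if queue_len < min_th:
        return 0.0
    if queue_len >= max_th:
        return 1.0
    return max_p * (queue_len - min_th) / (max_th - min_th)
```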

    The difference is that most network admins shirk the task of responsibly implementing QoS, but they'd gladly pay a hefty licensing fee to their wireless vendors for a product with a name like WiFox that 'boosts performance' by clobbering the network instead of cleverly balancing it to perform well.

    - MarcQuadra

    Uhh, no... the traditional balancing mechanisms are making it worse:

    In a network link between a fast and a slow network packets can get backed up. Especially at the start of a TCP communication when there is a sudden burst of packets, the link to the slower network may not be able to process the sudden communication burst quickly enough. Buffers exist to ease this problem by giving the fast network a place to push packets, to be read by the slower network as fast as it can. However, a buffer has a finite size: it can hold a maximum number of packets, called the window size. The ideal buffer has a window size such that it can handle a sudden burst of communication and match the speed of that burst to the speed of the slower network. This situation is characterized by a temporary delay for packets in the buffer during the communications burst, after which the delay rapidly disappears and the networks reach a balance in offering and handling packets.

    Network links have an inherent balance which is determined by the packet transmission and acknowledgement cycle. When a packet is sent, TCP usually acknowledges it before it will accept the next packet. This means that a network must transmit a packet and then transport the acknowledgement back before the next packet is pushed into the link. The time it takes to transport a packet and transport back the acknowledgement is called the round-trip time (RTT). If a buffer is large enough to handle a burst, the result will be smooth communication with (eventually) a low delay for packets in the buffer. But if the buffer is too large, then the buffer will fill up and will itself not be able to accept more packets than one per RTT cycle. So the buffer will stabilize the rate at which packets enter the network link, but it will do so with a fixed delay for packets in the buffer. This is called bufferbloat: instead of smoothing the communication, the buffer causes communication delays and lowers utilization of the network link (i.e. causes the network link to carry less than its capacity of packets).
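    The "10 seconds or more at a 1 Mbit/s line rate" figure quoted earlier is just queue size divided by drain rate:

```python
# Queuing delay of a full buffer is simply buffered bits divided by line rate.
def queue_delay_seconds(buffered_bytes, line_rate_bps):
    return buffered_bytes * 8 / line_rate_bps

delay = queue_delay_seconds(1_250_000, 1_000_000)  # 1.25 MB at 1 Mbit/s
# → 10.0 seconds of queuing delay before a newly arrived packet even leaves the buffer
```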

  • Re:Um... (Score:4, Informative)

    by Bruce Perens ( 3872 ) on Thursday November 15, 2012 @01:10AM (#41988695) Homepage Journal

    Yeah, and you'd never have another AP with the same channel on a different 'network'. How is the AP supposed to just instantly have 'total control'

    Just of its own clients and other stations that can hear it. Some packets will potentially be lost due to the hidden-transmitter problem.

  • by Anonymous Coward on Thursday November 15, 2012 @03:37AM (#41989265)

    I'm guessing it's because you made a brief mention of P2P which didn't involve bowing down and worshipping the protocol. The more sane mods have apparently fixed it, as you're at +5 Informative right now.

    I would like to add some information about QoS. It actually has very little or nothing to do with bufferbloat in the way you describe. It's easier to illustrate by thinking about a simple point-to-point link. There are actually two types of buffers: one at each interface and another in the routing/switching engine. The buffer we're talking about exists at the interface level; it's a FIFO-type queue. The switch/router which is sending the data decides what order to place packets on the interface based on the QoS markings, but once they are in the buffer they come out in the same order they went in. On the other end of the link, the packets arrive on the interface and get buffered before arriving at the switching/routing engine; again, it's a first-in-first-out queue. Once the switching/routing engine gets the packets, it can then determine what order to push them to the next interface based on QoS markings.
    Generally speaking, as long as there is enough available bandwidth on a link, QoS isn't really going to have much noticeable effect on the traffic. And just for the record, QoS only matters within YOUR network; never expect your QoS markings to have any effect when the traffic goes to someone else's network. If you try to honor external QoS markings, sooner or later some asswipe will notice and just start marking all HIS data at the highest priority level, and then you're back to square one.
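    The two-stage path described above (QoS-aware ordering at the routing engine, strict FIFO at the interface) can be sketched as follows; packet names and QoS values are invented for illustration:

```python
from collections import deque

# Sketch of the two-stage path: the routing engine orders packets by QoS
# marking (lower = higher priority), then hands them to the interface buffer,
# which is strictly first-in-first-out, so the order is fixed from there on.
def schedule(packets):
    ordered = sorted(packets, key=lambda p: p["qos"])  # routing engine
    fifo = deque(ordered)                              # interface FIFO
    return [fifo.popleft()["name"] for _ in range(len(fifo))]

out = schedule([{"name": "bulk", "qos": 2},
                {"name": "voice", "qos": 0},
                {"name": "video", "qos": 1}])
# voice leaves first, then video, then bulk
```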

    Now what this article is talking about is a different scenario; it's actually a very old and simple problem... unfortunately the solution in this case is not so simple. Wireless access points operate like a hub: everything is in the same collision domain. This means that all the devices are filling up the same buffer on the wireless interface on the AP, and the AP's outgoing buffer is handling the traffic to all those devices as well. And short of giving every device a unique frequency channel (not feasible), there's no way to really resolve this as we can with wired media. So what they've come up with is a software-based solution which makes the wireless AP act like something halfway between a hub and a switch... we don't really know more than that because they haven't released the details yet.

  • by tattood ( 855883 ) on Thursday November 15, 2012 @12:31PM (#41992505)

    the AP gets to tell all the stations to STFU, and then handles the traffic of each station one at a time, giving each the full upstream bandwidth while the other stations wait until they are allowed to speak. Switch.

    That is absolutely not how a network switch works. In a switch, every connected device can send and receive on a completely separate collision domain from any other device connected. They basically implemented Token Ring on wireless.
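    Token Ring's turn-taking, for comparison, amounts to a strict round robin: only the station holding the token transmits, then passes it on. A toy sketch with invented station names (real 802.11 uses CSMA/CA contention, not a token):

```python
from itertools import cycle

# Token-Ring-style scheduling: the "token" visits each station in a fixed
# ring order, and only the holder may transmit during its slot.
def token_schedule(stations, slots):
    ring = cycle(stations)
    return [next(ring) for _ in range(slots)]

order = token_schedule(["ap", "sta1", "sta2"], 6)
# each station transmits strictly in turn: ap, sta1, sta2, ap, sta1, sta2
```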
