Wireless Networking Hardware Technology

Nonlinear Neural Nets Smooth Wi-Fi Packets 204

mindless4210 writes "Smart Packets, Inc has developed the Smart WiFi Algorithm, a packet sizing technology which can predict the near future of network conditions based on the recent past. The development was originally started to enable smooth real-time data delivery for applications such as streaming video, but when tested on 802.11b networks it was shown to increase data throughput by 100%. The technology can be applied at the application level, the operating system level, or at the firmware level."


Comments Filter:
  • by wpmegee ( 325603 ) <wpmegee AT yahoo DOT com> on Tuesday May 04, 2004 @09:23PM (#9059113)
    Firmware and/or software:
    The neural network is one of three modules in the company's WiFi Speedzone system which has reached the beta test stage. A second module monitors and analyzes network traffic and the third handles the packet-sizing operation. WiFi Speedzone can be implemented at the application level, the operating system level or as firmware embedded in an 802.11b device, the company said.

    And next time, please RTFA, m'kay?
  • by burns210 ( 572621 ) <maburns@gmail.com> on Tuesday May 04, 2004 @09:26PM (#9059142) Homepage Journal
    The technology can be executed at any of those levels to be effective, not all three at once. So that means Linux could get support for it at the kernel level, someone could write an application for Windows, and Palms could use updated firmware, and all three would effectively take advantage of the algorithm.
  • by Pyro226 ( 715818 ) <Pyro226@REDHAThotmail.com minus distro> on Tuesday May 04, 2004 @09:28PM (#9059156) Journal
    There would be no reason to implement the algorithms in all three. The point is that the implementation isn't so low level or complicated that it requires wireless cards to be designed for it. It can instead be implemented in firmware, hardware, or software.

    Now, a speed increase sounds good to me, but most of my wifi usage is for internet access, and I don't think I've ever been on an internet connection faster than my wifi connection. I'd like to know if it helps with range, but I'm too lazy to RTFA.
  • by ookabooka ( 731013 ) on Tuesday May 04, 2004 @09:37PM (#9059218)
    Wow, because this neural net has maybe 10 nodes, and the average human brain has a few billion. I made a chess engine; to me that's a lot smarter than traffic handling, and it doesn't use neural nets... I would say we are a little ways off from making true AI. Now a distributed neural net... that's interesting...
  • by I_Love_Pocky! ( 751171 ) on Tuesday May 04, 2004 @09:43PM (#9059263)
    Umm... it isn't so simple. You are missing the basic idea of a layered architecture. This is actually really cool that it can be implemented at any layer. Sometimes there are things that can't be done at the application layer because of the constraints created by the layers below it. For instance, it is pretty worthless to do routing at the application layer if you are using IP, because it is already taken care of at the network layer.

    So to say that it is all just "software" misses the fact that there is a significant difference between how these pieces of software work. It is really cool that this can be done at the application layer, because it will allow applications to be developed to take advantage of it without even changing the drivers for your wi-fi card.
  • Chartsengrafs (Score:5, Informative)

    by NanoWit ( 668838 ) on Tuesday May 04, 2004 @09:55PM (#9059334)
    Here's a graph that I ripped out of some lecture notes. It shows how much of a problem congestion is on 802.11b networks.

    http://web.ics.purdue.edu/~dphillip/802.11b.gif [purdue.edu]

    For a little explanation: where it says "Node 50" or "Node 100", that means there are 50 or 100 computers on the wireless network. And the throughput numbers are for the whole network, not per host. So when 100 nodes are getting 3.5 Mbps, that's .035 Mbps per host.

    Thanks to professor Park
  • by Anonymous Coward on Tuesday May 04, 2004 @09:58PM (#9059353)
    It's a new way of determining the optimum packet size on the fly so that collisions, errors & retransmissions are minimized, greatly boosting overall throughput.

    QED
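    A toy reactive sizer illustrates the idea the comment describes (purely illustrative; the actual Smart WiFi algorithm is predictive and proprietary, and every name and threshold below is an assumption):

```python
# Hypothetical sketch: a reactive packet sizer, the kind of baseline
# a predictive scheme would improve on. Thresholds are made up.

MIN_SIZE, MAX_SIZE = 256, 1500  # bytes

def next_packet_size(current, recent_error_rate):
    """Shrink packets when errors rise, grow them when the link is clean."""
    if recent_error_rate > 0.10:   # noisy link: a lost small packet wastes less
        return max(MIN_SIZE, current // 2)
    if recent_error_rate < 0.01:   # clean link: bigger packets cut header overhead
        return min(MAX_SIZE, current * 2)
    return current
```

    The predictive twist would be feeding the recent error history into the neural net to pick the size before errors occur, rather than reacting after them.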
  • by jarran ( 91204 ) on Tuesday May 04, 2004 @10:00PM (#9059362)
    I expect at least part of the answer is that neural networks are trivial to understand and implement compared to support vector machines.

    You might be able to build SVM implementations relatively easily on a real computer using off the shelf libraries etc., I doubt many of these would run on a WiFi card.

    Neural nets have also been around for quite a while, so they have gained acceptance. Although SVMs have been known to the machine learning community for quite a while now, they have only just started being noticed by the wider world quite recently.
  • Re:Why wireless only (Score:2, Informative)

    by Anonymous Coward on Tuesday May 04, 2004 @10:05PM (#9059401)
    Why isn't there something like this for normal internet? Even the "old days" approach of Zmodem, big packets when transfers went well and small packets when they didn't, beats the fixed MTU/MRU we're stuck with now.

    The normal internet has far fewer collisions and errors than wireless Ethernet. And Ethernet switches are now so cheap that it isn't worth your money to buy Ethernet hubs.

    And the advantage here is that it is (allegedly) a successful predictive model of whether to use big or small packets, and not reactive.

    If you react to errors, you can resend, but you've already wasted bandwidth. If you can avoid the error in the first place, it's much better! :)
  • by pla ( 258480 ) on Tuesday May 04, 2004 @10:21PM (#9059533) Journal
    When I see the headline: "Nonlinear Neural Nets Smooth Wi-Fi Packets" and I only understand the words nets, smooth and packets...and none of them in relation to each other

    Simple 'nuff, really...

    Neural net - An arrangement of "dumb" processing nodes in a style mimicking that which the greybacks of AI (such as Minsky and Turing et al) once believed real biological neurons used. Basically, each node has a set of inputs and outputs. It sums all its inputs (each with a custom weight, the part of the algorithm you actually train), performs some very simple operation (such as hyperbolic tangent) called the "transfer function" on that sum, then sets all of its outputs to that value (which other neurons in turn use as their inputs).
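    Such a node fits in a few lines (a minimal sketch, assuming tanh as the transfer function and weights that have already been trained):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One "dumb" node: a weighted sum of its inputs, passed through
    a nonlinear transfer function (here, hyperbolic tangent)."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(total)
```

    A network is just many of these wired together, with each node's output feeding other nodes' input lists.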

    Nonlinear - This refers to the shape of the transfer function. A linear neural net can, at best, perform linear regression. You don't need a neural net to do that well (in fact, you can do it a LOT faster with just a single matrix inversion). So calling it "nonlinear" practically counts as redundant in any modern context.

    Smooth - A common signal processing task involves taking a noisy signal, and cleaning it up.

    Wi-Fi - An example of a fairly noisy signal that would benefit greatly from better prediction of the signal dynamics, and from better ability to clean the signal (those actually go together, believe it or not - In order to "clean" the signal without degrading it, you need to know roughly what it "should" look like).

    Packets - The unit in which "crisps" come. Without these, you can't use a Pringles can to boost the gain on your antenna to near-illegal values. ;-)

    There, all make sense now?
  • Re:Chartsengrafs (Score:1, Informative)

    by Anonymous Coward on Tuesday May 04, 2004 @10:38PM (#9059625)
    You don't need this vapourware when you have frottle [sourceforge.net] already available for free.

    It works. Works well. It's free.

    It's already in use on large wireless WANs and retrofits to existing consumer-grade wireless kit.
  • by Anonymous Coward on Wednesday May 05, 2004 @12:05AM (#9060209)
    I believe packet smoothing refers to taming rapid swings in packet output rates, so the network can adapt in a timely manner and thus drop fewer packets. In the OSI protocol layer model, I guess it's usually best accomplished in the protocol timers of the network and link layers. It would have nothing to do with receiver signal filtering, which is a physical layer process, performed before converting the received signal into packet data bits.
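    One generic way to tame such swings (an illustration of the idea, not necessarily what this product does) is an exponentially weighted moving average on the output rate:

```python
def smooth_rates(raw_rates, alpha=0.2):
    """Exponentially weighted moving average: each smoothed rate moves
    only a fraction alpha toward the latest raw measurement, so a
    sudden burst is spread across several intervals instead of hitting
    the network all at once."""
    smoothed, current = [], raw_rates[0]
    for r in raw_rates:
        current += alpha * (r - current)
        smoothed.append(current)
    return smoothed
```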
  • by pla ( 258480 ) on Wednesday May 05, 2004 @12:23AM (#9060299) Journal
    So, I take it this means that the "greybacks of AI" no longer believe this to be true? What is the new thinking?

    I put that in the past tense for two reasons...

    First, at least the followers of Minsky have apparently deemed connectionist learning models passé. In fact, as far as I can tell, the very field of artificial intelligence has shifted away from the "intelligence" part, preferring to focus on (the far more marketable) automated problem solving and classification rather than trying to mimic aspects of actual consciousness.

    And second, neurophysiology (rather than AI research) has all but obliterated the hope that any basic variation on the standard multilayer feedforward neural net really does all that great a job of modeling the brain. It seems that real neurons do some pretty impressive processing, each having a local store, exceedingly fine-grained delay lines, self-feedback (at the signal, rather than just the obvious neurotransmitter level), and some degree of actual flow control. And that's just what we know; they may have a good many more secrets waiting for someone to notice... For example, recently, a few bright folks noticed that glia, the non-neuronal cells making up literally half of our brains, might do more than just sit there and take up space.


    Please confine your answer to words of less than ten syllables :)

    I apologize for the length of words in this domain, but I didn't make them up, we all just inherited them from people who liked Latin and nominalization waaaaaaaay too much. <G>
  • by 12357bd ( 686909 ) on Wednesday May 05, 2004 @01:16AM (#9060586)

    Just one point.

    'One way' classical layered models can be said to be 'passé', but recurrent or 'looped' connectionist models are far from being understood; in fact, they are a great source of advances.

    What's in a sig?

  • Re: Skeptic (Score:5, Informative)

    by Black Parrot ( 19622 ) on Wednesday May 05, 2004 @01:39AM (#9060697)


    > usually the neural network is just a very simple, possibly linear, adaptive filter which means that really contains no more than a few matrix multiplications ...

    No one in their right mind would use a linear ANN, since ANNs get their computational power from the nonlinearities introduced by their squashing functions. Without the nonlinearities, you'd just be doing linear algebra, e.g. multiplying vectors by matrices to get new vectors.

    As for the computational power of ANNs,

    • A simple feed-forward network with a single hidden layer can approximate any continuous function on the range [0,1] with arbitrary accuracy. (Continuous, not just differentiable.)
    • Certain architectures of recurrent ANNs are equivalent to Turing machines, if the weights are specified with rational numbers.
    • An ANN with real-valued weights (real, not fp) would be a super-Turing device.
    Google a paper by Cybenko for the first result, Siegelmann and Sontag for the second, and Siegelmann (sans Sontag?) for the third.
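    The first bullet can be demonstrated on a toy scale. Below is a single-hidden-layer tanh network fitted to f(x) = x² on [0,1] by plain gradient descent (a sketch in the spirit of the result, not code from any of the cited papers; initialization, learning rate, and hidden-layer size are arbitrary):

```python
import math

def train_1layer(f, hidden=8, epochs=2000, lr=0.1):
    """Fit a single-hidden-layer tanh network to f on [0,1] with
    per-sample gradient descent. Returns (initial MSE, final MSE)."""
    xs = [i / 20 for i in range(21)]
    ys = [f(x) for x in xs]
    # deterministic init; distinct biases keep hidden units from collapsing
    w1 = [1.0] * hidden
    b1 = [-j / hidden for j in range(hidden)]
    w2 = [0.1] * hidden
    b2 = 0.0

    def mse():
        total = 0.0
        for x, y in zip(xs, ys):
            pred = b2 + sum(w2[j] * math.tanh(w1[j] * x + b1[j])
                            for j in range(hidden))
            total += (pred - y) ** 2
        return total / len(xs)

    start = mse()
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            hs = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            err = (b2 + sum(w2[j] * hs[j] for j in range(hidden))) - y
            b2 -= lr * err
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - hs[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * hs[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
    return start, mse()
```

    With enough hidden units this class of network can match any continuous target on [0,1] as closely as you like; the toy above just shows the error shrinking.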

    > yes it has some success in approximating things locally, but terms like "learning" are really misused

    "Neural network" and "learning" are orthogonal concepts. A neural network is a model for computation, and learning is an algorithm.

    In practice we almost always use learning to train neural networks, since programming them for non-trivial tasks would be far too difficult.
  • by ThatGuyInTheHole ( 628205 ) on Wednesday May 05, 2004 @02:33AM (#9060906) Homepage
    Back in the days when computers were large hulking monsters best kept under a desk, some college had a contest matching two computer programs playing the prisoner's dilemma game with roughly equivalent outcomes. A lot of famous computer scientists submitted programs, some many pages in length, but it turned out a really simple program won: Tit for Tat. The program begins with silence, but if it is betrayed, in the next round it will betray you, then switch back to silence. That's pretty much it. TGitH
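    The strategy described is tiny enough to write out in full (a sketch using the standard prisoner's dilemma payoffs; function names are mine):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate on the first round; afterwards echo the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

# standard payoff matrix, from the row player's point of view:
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 5
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat_a, strat_b, rounds=10):
    """Run an iterated prisoner's dilemma and return both totals."""
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(ha, hb), strat_b(hb, ha)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        ha.append(a)
        hb.append(b)
    return score_a, score_b
```

    Two tit-for-tat players cooperate forever; against a pure defector, tit for tat loses only the first round and then matches it.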
  • Re:Why wireless only (Score:2, Informative)

    by mesterha ( 110796 ) <chris@mesterharm.gmail@com> on Wednesday May 05, 2004 @02:53AM (#9060959) Homepage

    And the advantage here is that it is (allegedly) a successful predictive model of whether to use big or small packets, and not reactive.

    If you react to errors, you can resend, but you've already wasted bandwidth. If you can avoid the error in the first place, it's much better! :)

    It predicts based on past performance therefore it is reacting. The savings on switching packet size is based on resending small packets instead of resending large packets. Losing a single small packet is not nearly as bad as losing a large packet. Of course, as the packets get smaller there is more overhead... Hence you need to optimize the size based on the current noise conditions.
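    The trade-off in that last sentence can be made concrete. A rough model (my own illustration, with an assumed header size and independent bit errors, not anything from the article): useful throughput is the payload fraction of the frame times the probability the whole frame arrives error-free.

```python
def goodput(payload, header=34, ber=1e-5):
    """Expected useful fraction of a frame: payload share of the bytes
    sent, times the chance every bit of the frame survives a link with
    bit error rate `ber` (errors assumed independent)."""
    bits = 8 * (payload + header)
    return (payload / (payload + header)) * (1 - ber) ** bits

def best_size(header=34, ber=1e-5, sizes=range(64, 1501, 16)):
    """Payload size that maximizes goodput under the model above."""
    return max(sizes, key=lambda s: goodput(s, header, ber))
```

    On a clean link the optimum sits at the MTU; as the error rate climbs, the optimum shrinks, which is exactly the knob the algorithm is turning.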

  • Re:could be handy.. (Score:5, Informative)

    by ComputerSlicer23 ( 516509 ) on Wednesday May 05, 2004 @05:20AM (#9061323)
    Actually, all you have to do is tweak the parameters of the TCP/IP stack. As I recall, Linux has a specific parameter for this. I want to say it's the transmit window size. They document it as something you only change on a long-haul line, which a satellite feed should certainly count as.

    Specifically, you want to allow a lot more packets to be outstanding than a normal TCP connection will allow. This is a bad idea on a low-latency connection. It has to do with window sizes and buffering. Also, if you use advanced IP tools to ensure that ACKs get sent before anything else, you'll be much happier.
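    The window size in question is governed by the bandwidth-delay product: the amount of data that must be in flight to keep the pipe full (a quick worked example; the link numbers are hypothetical):

```python
def window_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes that must be outstanding
    (sent but unacknowledged) to keep a link fully utilized."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# e.g. a 10 Mbit/s satellite link with a 600 ms round trip needs
# ~750 KB in flight, far more than an old 64 KB default TCP window
```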

    This thread on the LKML seems to have useful information on it: LKML Thread [iu.edu]

    Kirby
