Wireless Networking | Hardware | Technology

Nonlinear Neural Nets Smooth Wi-Fi Packets

mindless4210 writes "Smart Packets, Inc. has developed the Smart WiFi Algorithm, a packet-sizing technology that can predict the near future of network conditions based on the recent past. Development originally started to enable smooth real-time data delivery for applications such as streaming video, but when tested on 802.11b networks the technology was shown to increase data throughput by 100%. It can be applied at the application level, the operating system level, or the firmware level."
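
The story and the articles give no implementation details, so here is a purely illustrative sketch of the general idea in Python: predict near-term channel quality from recent samples, then size packets to match. Every name and constant below is hypothetical; this is not Smart Packets' algorithm.

    # Hypothetical "predict, then size" loop -- not Smart Packets' code.
    # Big frames when the channel looks clean (less header overhead),
    # small frames when it looks lossy (cheaper retransmits).

    RECENT = 8                            # samples that count as "recent past"
    MIN_PAYLOAD, MAX_PAYLOAD = 128, 1500  # bytes; illustrative bounds

    def predict_quality(samples):
        """Crude stand-in for the neural net: a weighted average of the
        recent past, newer samples counting more. 0.0 = bad, 1.0 = clean."""
        window = samples[-RECENT:]
        weights = range(1, len(window) + 1)
        return sum(w * s for w, s in zip(weights, window)) / sum(weights)

    def choose_payload(samples):
        quality = predict_quality(samples)
        return int(MIN_PAYLOAD + quality * (MAX_PAYLOAD - MIN_PAYLOAD))

    # Delivery-success ratios for recent bursts; the channel is degrading:
    print(choose_payload([1.0, 0.95, 0.9, 0.5, 0.4]))  # -> smaller frames
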
  • by wpmegee ( 325603 ) <wpmegee AT yahoo DOT com> on Tuesday May 04, 2004 @09:26PM (#9059146)
    Not necessarily. This improves throughput, but as a general rule wireless always adds 20 ms to your ping, so 50% of that would still be a 10 ms penalty.

    I'm not a network engineer, but latency is more important than bandwidth for ping times and such.

    For an example, play a Half-Life game, open the console, and type net_graph 3. That'll show you your fps, ping, and in/out bandwidth used.
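
    To make the parent's point concrete, here's a back-of-the-envelope model in Python. The latency and bandwidth figures are illustrative assumptions, not measurements:

        # Rough model: delivery time = propagation latency + size / bandwidth.

        LATENCY_S = 0.020          # assume a 20 ms wireless hop
        BANDWIDTH_BPS = 5_000_000  # assume ~5 Mbit/s effective 802.11b rate

        def delivery_time(packet_bytes):
            """Seconds to deliver one packet over the modeled link."""
            return LATENCY_S + (packet_bytes * 8) / BANDWIDTH_BPS

        print(delivery_time(100))   # 100-byte game update: ~20.2 ms, nearly all latency
        print(delivery_time(1500))  # 1500-byte frame: ~22.4 ms, bandwidth barely matters

    Doubling the bandwidth changes almost nothing for small game packets; only cutting the latency itself would.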
  • by women ( 768472 ) on Tuesday May 04, 2004 @09:31PM (#9059176)
    I'm curious as to why they are using Neural Networks for this application. In the last 10 years or so, most machine learning applications have moved away from Neural Networks to more mathematically grounded models such as Support Vector Machines, a generative model (e.g. Naive Bayes), or some kind of Ensemble Method (e.g. Boosting). I suspect they used a NN because the Matlab toolkit made it easy, or someone in research hasn't kept up. I'd look for a paper to come out soon that improves the accuracy by using an SVM.
  • by Anonymous Coward on Tuesday May 04, 2004 @09:43PM (#9059262)
    I'm curious as to why they are using Neural Networks for this application?

    Well, it's quite obviously because a Support Vector Machine is inherently linear, and to make it nonlinear you must insert a nonlinear kernel, which you need to select by hand.

    If you'd read the article, you'd see that they are using a recurrent-feedback neural network; good luck finding a recurrent-feedback nonlinear kernel for an SVM...! You can't just plug in a radial basis function and expect it to work. In this application, they are looking for fast, elastic response to rapidly changing conditions as well as a low tolerance for predictive errors--something an RFNN is ideal for, and that an SVM is absolutely terrible at.
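
    The articles don't publish the network's actual structure, so treat this as a toy illustration only: a recurrent-feedback net keeps a hidden state that carries the "recent past" forward through a nonlinearity. A minimal Elman-style cell in Python (weights are made up; a real net would learn them):

        import math

        class TinyRecurrentCell:
            """Toy recurrent predictor: the hidden state is its memory."""
            def __init__(self, w_in=0.8, w_rec=0.5, w_out=1.2):
                self.w_in, self.w_rec, self.w_out = w_in, w_rec, w_out
                self.h = 0.0  # hidden state, fed back at every step

            def step(self, x):
                """Take one observation (say, a normalized throughput
                sample), return a prediction for the next one."""
                self.h = math.tanh(self.w_in * x + self.w_rec * self.h)
                return self.w_out * self.h

        cell = TinyRecurrentCell()
        for sample in [0.9, 0.85, 0.4, 0.35]:  # throughput dropping
            print(cell.step(sample))           # predictions track the drop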

  • Skeptic (Score:5, Insightful)

    by giampy ( 592646 ) on Tuesday May 04, 2004 @09:49PM (#9059298) Homepage
    Very often the term "neural network" is used just as a selling point, because it sounds like something extremely advanced and "related to artificial intelligence".

    Usually the neural network is just a very simple, possibly linear, adaptive filter, which means it really contains no more than a few matrix multiplications...

    Yes, it has some success in approximating things locally, but terms like "learning" are really misused.

    After RTFA (the second one), it actually seems that they did try two or three things before, but really I wouldn't "welcome our new intelligent packet-sizer overlords" just yet.
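
    For reference, the kind of "very simple adaptive filter" the parent describes is easy to show: a least-mean-squares (LMS) predictor is a handful of multiply-accumulates plus a weight nudge per sample. A generic textbook sketch in Python, not the vendor's code:

        # LMS adaptive filter: predict the next sample as a weighted sum of
        # the last N samples, nudging the weights after each error.

        N = 4                # filter taps (assumed)
        MU = 0.05            # adaptation rate (assumed)
        weights = [0.0] * N
        history = [0.0] * N  # most recent sample first

        def lms_predict_and_update(actual):
            prediction = sum(w * x for w, x in zip(weights, history))
            error = actual - prediction
            for i in range(N):                   # gradient step on each tap
                weights[i] += MU * error * history[i]
            history.pop()                        # slide the window
            history.insert(0, actual)
            return prediction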
  • Why wireless only (Score:5, Insightful)

    by Old Wolf ( 56093 ) on Tuesday May 04, 2004 @09:58PM (#9059351)
    Why isn't there something like this for the normal internet? Even the "old days" approach of Zmodem, big packets if the transfer was going well and small packets if it wasn't, is better than the fixed MTU/MRU we're stuck with now.
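
    Zmodem's heuristic is simple enough to sketch: grow the block while transfers succeed, shrink it when they fail. The constants below are illustrative, not Zmodem's actual ones:

        # Adaptive block sizing, Zmodem style. Illustrative constants only.

        MIN_SIZE, MAX_SIZE = 64, 8192  # bytes

        def next_block_size(current, last_block_ok):
            if last_block_ok:
                return min(current * 2, MAX_SIZE)  # clean link: be greedy
            return max(current // 4, MIN_SIZE)     # errors: back off hard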
  • Re:What? (Score:3, Insightful)

    by Anonymous Coward on Tuesday May 04, 2004 @10:01PM (#9059372)
    In average geek terms please!

    Not gonna happen. The poster was just using random terms that have nothing to do with this article, trying to sound smart, and is probably laughing as the post gets moderated up.

    Everything the poster mentioned, such as Naive Bayes [google.com] and Support Vector Machines [google.com], is used for static tasks, like classification, not for realtime feedback situations. They learn once and predict forever; they don't learn iteratively and keep changing. Follow the Google links I just gave and peruse the first few sites that come up if you're not sure. They are used for things like text classification, handwriting recognition, voice recognition, etc., i.e., "train once, use often."
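
    The distinction the parent is drawing, reduced to a runnable toy (neither class is a real classifier; both just predict a throughput level from a mean):

        # "Train once, use often" vs. online learning, in miniature.

        class BatchMean:
            """Fit once, then frozen -- the static style described above."""
            def __init__(self, training_samples):
                self.mean = sum(training_samples) / len(training_samples)
            def predict(self):
                return self.mean

        class OnlineMean:
            """Keeps adapting as new observations arrive."""
            def __init__(self, alpha=0.3):  # alpha: forgetting rate (assumed)
                self.mean, self.alpha = 0.0, alpha
            def predict(self):
                return self.mean
            def update(self, sample):
                self.mean += self.alpha * (sample - self.mean)

        batch = BatchMean([5.0, 5.1, 4.9])  # trained while the channel was good
        online = OnlineMean()
        for s in [5.0, 4.8, 2.0, 1.9]:      # channel degrades mid-stream
            online.update(s)
        print(batch.predict(), online.predict())  # batch stays ~5.0; online tracks the drop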

  • EE Times Article (Score:4, Insightful)

    by FreeHeel ( 620639 ) on Tuesday May 04, 2004 @10:12PM (#9059458)
    Wow...not 30 minutes ago I read this article [eetimes.com] in this week's EE Times on the same topic.

    This sounds like a great improvement to 802.11x technology...now let's open-source it so we can all benefit!

  • Re:Skeptic (Score:4, Insightful)

    by hawkstone ( 233083 ) on Tuesday May 04, 2004 @11:32PM (#9060027)
    Usually the neural network is just a very simple, possibly linear, adaptive filter, which means it really contains no more than a few matrix multiplications...

    The simplicity of the calculation does not mean it is not a learning algorithm. Real neural networks are quite simple, as each "neuron" is simply a weighted average of the inputs passed through a sigmoid or step function. However, en masse they perform better than most other algorithms at handwriting recognition. They take a training set and operate on it repeatedly, updating their parameters, until some sort of convergence is reached. Their performance on a test set is a measure of how well they have learned. This is a learning algorithm.

    Even linear regression is a learning algorithm. You give it a bunch of training data as input (i.e., x,y pairs), it iterates on that data until it converges, and it is then used to predict new data. There happens to be an analytic solution to the iteration, but that does not make it any less of a learning algorithm.

    I think maybe your definition of "learning" is unnecessarily strict. The simplicity of the computation is not what defines this category of algorithms.
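
    A single neuron of the kind described above, trained by iterating over a toy training set until the weights settle, fits in a few lines of Python (an illustrative sketch, not anyone's production code):

        import math

        def sigmoid(z):
            return 1.0 / (1.0 + math.exp(-z))

        def train(examples, lr=0.5, epochs=1000):
            """One neuron: weighted sum of inputs through a sigmoid,
            weights updated example by example -- i.e., it learns."""
            w, b = [0.0, 0.0], 0.0
            for _ in range(epochs):
                for x, target in examples:
                    out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
                    grad = (target - out) * out * (1 - out)  # sigmoid slope
                    w[0] += lr * grad * x[0]
                    w[1] += lr * grad * x[1]
                    b += lr * grad
            return w, b

        # Toy training set: logical OR.
        data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
        w, b = train(data)
        print([round(sigmoid(w[0] * x0 + w[1] * x1 + b))
               for (x0, x1), _ in data])  # -> [0, 1, 1, 1]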

  • Re:Hahaha (Score:3, Insightful)

    by JessLeah ( 625838 ) * on Wednesday May 05, 2004 @12:45AM (#9060428)
    It seems that CEOs always end up in fields they have no experience in. Remember Sculley (sp.) of Apple shame? He was, what, a Pepsi or Coke exec?
  • by Ayanami Rei ( 621112 ) * <rayanami&gmail,com> on Wednesday May 05, 2004 @12:55AM (#9060484) Journal
    does not suffer from intense negative feedback as does the stock market.

    I think part of the ease of predictability may have a little to do with the kinds of protocols used for collision detection/TDMA in congested 802.11 nets. If they are sufficiently simple, a single node could outmaneuver the others.

    Some questions...
    What is the behavior of this algorithm as the number of enabled clients increases and the bandwidth demand of the clients exceeds the channel capacity? Does it degrade gracefully? Does it unfairly compete with non-enabled clients?
  • Re:Hahaha (Score:3, Insightful)

    by Jeffrey Baker ( 6191 ) on Wednesday May 05, 2004 @01:00AM (#9060508)
    There are lots of scam giveaways in this article. If the protocol "can be implemented" at the application layer, the network layer, or the MAC firmware, that means it *hasn't* been implemented in any of those places at all.
  • by Hast ( 24833 ) on Wednesday May 05, 2004 @05:01AM (#9061283)
    I'd recommend that you read the second article, from EE Times. It actually has some content, which is something their own site is almost completely devoid of.

    And people have done this before; the EE Times article mentions that. Apparently earlier efforts either didn't make as much progress or just didn't make as much of a fuss over it. A quick search for "variable packet length and wireless" turns up quite a lot of results, though. I'm fairly confident that you can find previous research in this area if you look around.

    I'm not entirely convinced that Kalman filters would do a good job, though (a minimal sketch of one follows below). Or rather, they may need more additions to be efficient in this particular problem space, making them bigger and less desirable to put into, e.g., firmware. It's not unreasonable that they haven't considered them, though. They did seem to try some other systems, some of which look a bit too optimistic for this problem space: according to EE Times they made attempts with database lookups and expert systems, for example. Seems like both of those could be "trivially" rejected as not adaptive and accurate enough.

    It's not impossible that there exist methods which are a lot more efficient than NNs at solving this problem. NNs seem to do a good job, though, so I guess it warrants some more looking into.
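
    For the curious, the one-dimensional textbook Kalman filter is tiny; whether it stays tiny once extended for this problem space is exactly the open question above. The noise constants here are assumed:

        # Minimal 1-D Kalman filter tracking a drifting quantity (say,
        # available throughput) from noisy samples. Textbook form.

        Q = 0.01  # process noise: how fast the true value can drift (assumed)
        R = 0.25  # measurement noise: how noisy each sample is (assumed)

        x_est, p = 0.0, 1.0  # state estimate and its variance

        def kalman_step(measurement):
            global x_est, p
            p_pred = p + Q                             # predict: uncertainty grows
            k = p_pred / (p_pred + R)                  # Kalman gain
            x_est = x_est + k * (measurement - x_est)  # correct toward the sample
            p = (1 - k) * p_pred
            return x_est

        for sample in [5.1, 4.9, 5.3, 2.1, 2.0]:
            print(round(kalman_step(sample), 2))  # estimate follows the drop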
