
Wireless LANs Face Huge Scaling Challenges

BobB writes with this excerpt from NetworkWorld: "Early WLANs focused on growing the number of access points to cover a given area. But today, many wireless administrators are focusing more attention on scaling capacity to address a surge in end users and the multimedia content they consume (this is particularly being seen at universities). Supporting this involves everything from rethinking DNS infrastructure to developing a deeper understanding of what access points can handle. And 802.11n is no silver bullet, warn those building big wireless networks. 'These scaling issues are becoming more and more apparent where lots of folks show up and you need to make things happen,' says the former IT director for a big Ivy League campus."
This discussion has been archived. No new comments can be posted.


  • So basically (Score:5, Insightful)

    by Architect_sasyr ( 938685 ) on Saturday August 30, 2008 @05:24AM (#24808299)
    ...we're having the same issues we did when we stopped using dialup and moved to broadband?
    • No (Score:5, Insightful)

      by Colin Smith ( 2679 ) on Saturday August 30, 2008 @05:31AM (#24808335)

      We're having the same scalability issues which existed with 10base2 technology and 10/100baseT on a hub. The solution is "the switch".

       

      • Re:No (Score:5, Interesting)

        by Stellian ( 673475 ) on Saturday August 30, 2008 @06:37AM (#24808615)

        The solution is "the switch".

        In the case of wireless, the role of the switch could be fulfilled by beamforming [wikipedia.org]: a breakthrough that allows the same spectrum to be used by multiple transmitters simultaneously, as long as they are physically separated.
        Unfortunately the math there is harry, and one of the upcoming technologies making use of beamforming, namely WiFi, has failed to deliver thus far.

        • WiFi has failed to deliver thus far.

          I meant WiMAX, of course. Beamforming is also included in 802.11n; I don't know how well it is implemented by the early adopters.

        • "Unfortunately the math there is harry, and one of the upcoming technologies making use of beamforming, namely WiFi has failed to deliver thus far." Well I think I see your problem right there. Harry is have "performance under pressure" issues.
          • by KGIII ( 973947 )

            He's unhappy because the sequel wasn't written, it was meant to be, "Hermine (spelling???) Lets Harry Touch Her Breasts."

        • Isn't that sorta like using a parabolic reflector and/or changing the phase of the signal (e.g. rotating the antenna)?
      • by jstott ( 212041 )

        The solution is "the switch".

        802.11b/g operate on the 2.4000-2.4835 GHz band (so saith wikipedia [wikipedia.org]). That gives you 83.5 MHz of total bandwidth, for a theoretical maximum allowed data rate of 41.75 MBit/sec, or roughly 4 MByte/sec. It doesn't take too many torrents or video streams to suck up 4 MByte/sec (and that's the theoretical maximum; actual performance usually caps out at about half the theoretical max!).

        The problem isn't switching, it's having enough non-interfering access points to deliver th

        • 802.11b/g operate on the 2.4000-2.4835 GHz band (so saith wikipedia). That gives you 83.5 MHz of total bandwidth, for a theoretical maximum allowed data rate of 41.75 MBit/sec, or roughly 4 MByte/sec. It doesn't take too many torrents or video streams to suck up 4 MByte/sec (and that's the theoretical maximum; actual performance usually caps out at about half the theoretical max!).

          I don't have the technical expertise to adequately explain modulation [wikipedia.org], but your understanding of throughput calculation is severely lacking. Standard 802.11g is able to offer 54Mbps (theoretical) in only 20 MHz of bandwidth.
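
          A rough back-of-the-envelope comparison (assumed nominal figures, added here for illustration): 802.11b peaks at 11 Mbit/s in a roughly 22 MHz DSSS channel, while 802.11g's OFDM reaches 54 Mbit/s in a 20 MHz channel, i.e. several bits per second per hertz rather than half a bit.

              # Spectral efficiency = peak PHY rate / channel width (nominal, assumed figures)
              links = {
                  "802.11b (DSSS, 11 Mb/s peak)": (11e6, 22e6),
                  "802.11g (OFDM, 54 Mb/s peak)": (54e6, 20e6),
              }

              for name, (rate_bps, width_hz) in links.items():
                  print(f"{name}: {rate_bps / width_hz:.2f} bit/s per Hz")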

    • Re:So basically (Score:5, Interesting)

      by azgard ( 461476 ) on Saturday August 30, 2008 @06:53AM (#24808679)

      No, I think we are having these issues because we are going backwards. It's like going from cable TV back to wireless broadcast. If we were doing that, we would have fewer TV channels to select from.

      • Re: (Score:3, Informative)

        by Bazman ( 4849 )

        Not really, it all depends on the cable! I just had my house re-roofed, and up there in the eaves were a bunch of old cables and a plastic box marked 'Rediffusion':

        http://rediffusion.info/cablestory.html [rediffusion.info]

        I think that system delivered about 5 TV channels, probably in black and white too. Nowadays I get 40 TV and radio channels over a terrestrial wireless broadcast system.

        • by azgard ( 461476 )

          Actually, I used "we", but I am not American (I cheated a little :-)). In my country (the Czech Republic), there are about 4 available channel frequencies for terrestrial analog TV broadcast (this probably has to do with the fact that in Europe, different states require different sets of channels). However, with cable, you can get some more, as well as other European channels (I would say 30-100 channels). This will probably change with the switch to digital broadcast - then it will be pos

          • by Belial6 ( 794905 )
            And that physical limit can be overcome by pulling more wires. Yes, more bandwidth can be allocated for wireless, but RF is already pretty crowded.
      • by orasio ( 188021 )

        Yes, that happens to me. DirecTV made me go back to only receiving 500 channels at the box.
        That is a drawback, from the 10000 channels I had when I used coaxial.

        TV is especially well suited to wireless distribution. You have very few transmitters, loads of receivers, and no conflict resolution needed. It's so good that they only keep using cable because of its limited access, which is better for control and billing.

        And I don't understand what you mean by "going backwards". I really like typing this from the bed, in a ren

    • Re: (Score:1, Funny)

      by Anonymous Coward

      ...we're having the same issues we did when we stopped using dialup and moved to broadband?

      We're having the same problem we had before we moved away from sparkgap transmitters.

  • Hmmm (Score:5, Insightful)

    by Colin Smith ( 2679 ) on Saturday August 30, 2008 @05:28AM (#24808323)

    Bits of wire are dedicated to individuals, wifi spectrum is shared between individuals. Who'd have thought that might create scalability issues...

    Perhaps dedicating a little bit of the spectrum to each individual might fix the scalability problems.
     

    • Re:Hmmm (Score:4, Insightful)

      by thompson.ash ( 1346829 ) on Saturday August 30, 2008 @05:38AM (#24808369) Journal

      Surely dedicating a segment of that spectrum would cause problems ensuring equality of access?

      At the moment it seems that the more people you have on, the lower your bandwidth - stands to reason.

      Surely allocating fixed bandwidth on a first come first served basis would mean eventually you would run out of bandwidth to allocate and people would be denied access?

      • Re:Hmmm (Score:5, Interesting)

        by nuintari ( 47926 ) on Saturday August 30, 2008 @10:25AM (#24809951) Homepage

        802.11 clients can send and receive pretty much whenever they want to; the access point is expected to work it out, and clients are all expected to behave themselves. 802.11 also assumes that all the clients can see each other, but they frequently cannot - the hidden node problem. Hearing each other is basically how clients are supposed to know when to briefly stop transmitting, so when they can't, they badger the access point like mad and the AP becomes a single waiter in a huge restaurant with everyone ordering at the same time. Stuff gets dropped. The more clients you add, the worse it gets. As the load on an access point increases as a linear function, the performance for each individual station drops exponentially.

        The solution is to give the access point all the control over who sends, who receives, and when. Take it one step further and sync all the access point clocks to the same timing source (most non-802.11 alternatives use the GPS timing pulse for this), and now you can reuse frequencies on access points in relatively close proximity.

        One of these days, someone is going to realize that 802.11, common as it may be, and as universal as it may be, is not the way to go.
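
        A minimal sketch of that contention effect, assuming a much-simplified slotted model where each of n stations transmits in a slot independently with probability p (a toy model, not real 802.11 DCF). The fraction of usable airtime collapses as stations are added:

            # Toy slotted-contention model: a slot is useful only if exactly one station transmits.
            def useful_slot_probability(n, p=0.1):
                """P(exactly one of n stations transmits) = n * p * (1 - p)^(n - 1)."""
                return n * p * (1 - p) ** (n - 1)

            for n in (1, 5, 10, 20, 50):
                print(f"{n:3d} stations: usable-slot probability = {useful_slot_probability(n):.3f}")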

        • The solution is to give the access point all the control over who sends, who receives, and when.

          Ah yes, "token ring" WiFi... I like it.

          • by nuintari ( 47926 )

            Your analogy is rather lacking, and I encourage you to explore several real world non-802.11 wifi implementations that deploy this concept, all with much more success than your average 802.11 shithole.

            Suggested reading: DragonWave, Canopy, Trango.

        • How does a linear function of load cause an exponential drop of performance?
          • If you have an AP with one client connected, there are no conflicts.
            If you have an AP with two clients, you have conflicts between clients A+B.
            With three clients, you have conflicts between A+B, A+C, and B+C.
            With four clients, conflicts between A+B, A+C, A+D, B+C, B+D, and C+D.

            See how it's jacking up pretty quickly?

              • So A has conflicts with n-1 clients, B has conflicts with n-1 clients, and so on, which gives you n×(n-1) conflicts, or n²-n. However, not all clients can conflict at the same time, so there must be more like (n-m)×(n-1) conflicts. Both of these are non-linear, but not exponential.
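
                To make the counting concrete: the number of distinct pairs of clients that can collide is C(n, 2) = n(n-1)/2, which grows quadratically rather than exponentially.

                    # Distinct pairs of clients that can collide with n stations
                    from math import comb

                    for n in (2, 3, 4, 10, 50):
                        print(f"{n:2d} clients -> {comb(n, 2)} possible conflicting pairs")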
      • Surely allocating fixed bandwidth on a first come first served basis would mean eventually you would run out of bandwidth to allocate and people would be denied access?

        You mean like running out of ... ports ... in a hub or a switch?

         

    • Re:Hmmm (Score:5, Interesting)

      by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Saturday August 30, 2008 @06:00AM (#24808465) Homepage

      Technically, bits of wire (beyond the first hub anyway) are shared as well; they just have a much higher bandwidth, so you don't notice.

      This article could have been written 5 years ago; I don't see what's new. Everyone knows wifi doesn't really scale, which is why you keep it to small defined areas like a room per AP (and keep your important infrastructure wired as far as possible). If that's news to an admin then they probably skipped a few classes...

      • Re:Hmmm (Score:5, Interesting)

        by mikael_j ( 106439 ) on Saturday August 30, 2008 @06:21AM (#24808547)

        I suspect you'd be amazed by the number of supposedly technically proficient individuals who don't understand that with WiFi you have to essentially share bandwidth with every other computer and AP using WiFi nearby.

        I used to do first and second line tech support for a line of wireless APs, more than half the calls were from people (who in a lot of cases should've known better) who were pissed at their AP for not letting them connect while there were at least ten other APs nearby...

        Unfortunately a lot of people see WiFi as either a necessity or some kind of "solution" to their cable "problem", and lord have mercy on any fool who suggests that they connect their home NAS using a regular wired network and simply hide the cables, no no no, they NEEEEEEEEEEED WiFi for their home NAS.

        /Mikael

        • Re: (Score:2, Funny)

          I'm the first to admit my knowledge on this is limited...

          I read /. in the vain attempt to learn stuff that my piss-poor university neglected to teach me!

          Congrats guys, you're all honourary lecturers!

          Now keep talking, I'm trying to take notes here!

        • Re:Hmmm (Score:5, Insightful)

          by walt-sjc ( 145127 ) on Saturday August 30, 2008 @07:07AM (#24808731)

          You will find a large number of those individuals right here on /.

          About a year or so ago there was a discussion about WiFi, and I mentioned that I wired my entire house with the standard 2 RG6U, 2 Cat5e, 2 fiber to every room, sometimes two drops in a room. I have jacks EVERYWHERE. People said I was nuts. I said I was future-proofing; they claimed wireless would get faster too. And the response is: of course it will get faster, but so will physical cable, as we have seen.

          The bottom line is that wireless cannot and will not replace physical cable. It can only supplement. Primary connectivity should always be planned to be wired. Yes, it's more expensive. A LOT more expensive. But you need it.

          Wireless by nature is flaky. I can have a laptop 10 feet from an AP and it can drop the connection (and I don't care what brand of laptop or AP you have - it happens.) Why? Because the primary wireless frequency, 2.4 GHz, is a cesspool. I find it highly obnoxious that the FCC refused to allocate a band specifically and ONLY for WiFi - especially considering how extremely important connectivity is in this modern world. But alas, they are only concerned about how much money they can bring in via auctioning off a PUBLIC resource, selling it to a corporate entity which in turn lets the public use that band for insane prices.

          • Re: (Score:2, Insightful)

            by Anonymous Coward

            You have heard about that other part of WiFi - "A" - which is where the N feature is used effectively, and due to the multiple portions of spectrum used by "A" you should really have fewer issues. Toss in MIMO and you have a winner.

            • by rukcus ( 1261492 )

              Mod parent up.

              802.11b/g really is overcrowded. It doesn't help that you have to pay a premium on laptops that offer 802.11a/b/g(/n). Additionally, APs cannot offer the full 11 Mb/s for B and 54 Mb/s for G in all zones. This is, after all, a radio device, and signal intensity follows an inverse-square law with distance. The farther you move away from the AP, the less signal you will receive and the slower your throughput.

            • Of course - while the 5 GHz band is much less crowded, it has other issues: the higher you go in frequency, the less ability you have to go through walls. You trade one problem for another.
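
              A quick free-space path-loss sketch (free space only; wall attenuation makes the 5 GHz penalty worse) shows the built-in ~6 dB handicap of the higher band:

                  from math import log10, pi

                  C = 299_792_458.0  # speed of light, m/s

                  def fspl_db(distance_m, freq_hz):
                      """Free-space path loss in dB (no walls, no fading)."""
                      return 20 * log10(distance_m) + 20 * log10(freq_hz) + 20 * log10(4 * pi / C)

                  for d in (1, 10, 30):
                      extra = fspl_db(d, 5.0e9) - fspl_db(d, 2.4e9)
                      print(f"{d:2d} m: 2.4 GHz {fspl_db(d, 2.4e9):5.1f} dB, "
                            f"5 GHz {fspl_db(d, 5.0e9):5.1f} dB (+{extra:.1f} dB)")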

          • Re:Hmmm (Score:4, Funny)

            by Anonymous Coward on Saturday August 30, 2008 @08:50AM (#24809277)

            Yes. Get the hell out of my 2.4 range.
                  ~ HAM Radio guy

              (those astrofolks enjoyed it as well).

          • Re:Hmmm (Score:5, Insightful)

            by Migraineman ( 632203 ) on Saturday August 30, 2008 @10:36AM (#24810067)
            802.11whatever is an access point solution. Folks who expect it to be a backhaul or backbone solution are ... not well versed in network architectures. I find it amusing that folks think an ad-hoc mesh of 802.11 nodes will *ever* have performance comparable to wired/fibered connections. Just the "shared medium" aspect should be enough to indicate performance will degrade as more connections are added. Shoveling more nodes into the mesh won't magically improve performance.

            Eh, it doesn't surprise me. Evidence of this logical disassociation is everywhere - digital cameras, cars, appliances, computers, tools ... Listen carefully and you'll hear the cries of the oppressed - "I don't want to know how it works, I just want to [do blah]."
          • Re:Hmmm (Score:5, Interesting)

            by adolf ( 21054 ) <flodadolf@gmail.com> on Saturday August 30, 2008 @10:55AM (#24810219) Journal

            And don't forget microwave ovens. It's likely that everyone reading this has a 2.4 GHz radio, with power levels ranging from several hundred watts to over a kilowatt, in the form of a small microwave oven in a nearby kitchen. Yeah, sure, it's shielded and screened and whatnot. But it doesn't take much leakage to completely trash the signal from a common Linksys WRT54G, which only has a 28-milliwatt transmitter.

            Further, at these high frequencies, RF can act a little strange -- my own microwave didn't cause any noticeable interference, until I moved to a different house. After the move, with the same microwave, the same access point, the same laptop, and similar SNR, everything ground to a halt whenever the microwave was in use. Both houses have modern wiring and good grounding. The only real difference is that the microwave is now rotated 180 degrees relative to the portions of the house where there is WiFi gear, which seems to indicate that the oven leaks more in some directions than in others. Switching channels seems to have worked around this issue.

            For reasons like this, as part of the ongoing remodel and rewire, every room gets at least two Cat5e, at least one RG6, and a polyester pull string to some accessible area. (I'd have run some multimode fiber, but currently don't have anything which needs it, don't have any problems which can be solved with it, and don't have any experience terminating it. The pull string should make it easy to install later if the need ever arises.) The wiring, including coax, terminates at a couple of ICC keystone patch panels in an otherwise-useless alcove next to the basement steps, which is also where the switch, routers, and cable modem live.

            Some rooms have more drops than others, like the game room and the library. The office has about a dozen RJ45 jacks, mounted both along the baseboard at regular outlet height and midway on the wall (just above the height of a monitor on a desk) for plugging all manner of things in temporarily for servicing or toying or whatever.

            People think I'm nuts, too, but I'll have more bandwidth available to more independent points than any wireless technology will be able to provide for the foreseeable future. I can plug in new gaming systems, or analog/IP telephones, whatever audio or video gear, or about anything else, wherever I want, without worrying about coverage issues, while keeping my WiFi spectrum clean for those tasks that need it, like listening to Pandora way out in the back yard next to the fire ring with an iPod Touch.

            Structured cabling isn't a problem that needs to be solved, but a solution for all manner of things that need to be connected.

            • Re: (Score:3, Informative)

              by Belial6 ( 794905 )
              I don't recommend running cable to every place that you MIGHT need it. When I do remodeling, I run 'smurf tubing' down the wall in every room. It isn't really any more expensive than running 'just in case' wire. The benefit is that you don't have to worry about what kind of cable you might need in the future. I did this on my last house. When I remodeled each room, I put in a 2" tube from the attic to a face plate in the wall. I didn't pull a single wire until the place was done. After the house was
              • by adolf ( 21054 )

                That's nice, and all. But:

                Needs arise at odd times. I don't want to worry about climbing around in the attic with a 2-man pulling crew just because I've picked up a UPnP media player for the bedroom, or whatever -- I want it to plug in and work. I'd also like my patch panels to be nice, neat, and obvious in their layout; these goals range from difficult to impossible when cabling only happens on an as-needed basis. Further, given a choice, I'd really prefer to only visit each location one time, ins

        • by rukcus ( 1261492 )
          And then you realize that routing wires in false-ceiling environments actually IS more expensive than setting up APs on ceiling mounts. You essentially reduce the total amount of cabling by a factor of 10.

          Ever heard of Cisco's Unified Wireless Architecture?
          http://www.cisco.com/en/US/prod/collateral/wireless/ps5678/ps430/prod_brochure09186a0080184925_ns337_Networking_Solution_Solution_Overview.html [cisco.com]

          Let's remind ourselves here for a moment: large networks are not easy to set up. You run into a number of
        • You're exactly right, very few people understand wireless. Heck, many people in IT probably don't understand the difference between a switch and a hub. An 802.11n wireless AP is essentially a 100 Mbps hub under IDEAL conditions - and unlike a real hub, it also has to deal with signal strength and interference from other APs.

          I couldn't believe the article suggested that it would be a good idea to use 160 Mbps 2.4 GHz 802.11n. That would effectively cut your capacity down to half because you'd be using 40 MH
          • APs aren't five for a dime, ya know...
            • You either spend the money on the access points and limit the number of users to a few people per AP, or you deal with overcrowding on a few APs. If you want the performance, you need to pay for good infrastructure. You can't expect good performance when 100 people are sharing 24 Mbps of bandwidth on an unmanaged wireless hub with a single collision domain.
              • OR, you could allocate more spectrum, and have each AP client on a separate channel. 'Course, the government and big ISPs would never allow it.
                • Huh, what are you talking about? I said you need to build a lot more Access Points in to your infrastructure. I did not say you need a separate channel for each access point.

                  Designing a proper wireless network means you need to put nearby APs on a separate channel with sufficient isolation. You do not need a separate channel for each AP. There's 80 MHz of spectrum in the 2.4 GHz range and 480 MHz of spectrum in the 5 GHz range. Even the largest cell phone providers only have around 100 MHz of spectr
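
                  Rough arithmetic in the spirit of the 100-users example above (the 24 Mb/s usable-throughput-per-AP figure is an assumption for illustration, not a measurement):

                      # Evenly shared throughput per user, assuming ~24 Mb/s usable per AP
                      def per_user_mbps(total_users, access_points, usable_mbps_per_ap=24.0):
                          return usable_mbps_per_ap * access_points / total_users

                      for aps in (1, 4, 10):
                          print(f"100 users across {aps:2d} AP(s): ~{per_user_mbps(100, aps):.1f} Mb/s each")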
    • by nuintari ( 47926 )

      And the switch they all plug into is a shared resource. It's a network; everything comes down to a shared resource eventually. The questions are: how much is there to share, and how well is it being shared?

      Spectrum can be doled out in a very fair and efficient manner for everyone; unfortunately 802.11 doesn't even try to accomplish this. 802.11 is a colossal failure from a design standpoint. The clients can overwhelm an AP not because spectrum is finite (it is, but that isn't the problem), but rather because

  • by MichaelSmith ( 789609 ) on Saturday August 30, 2008 @06:30AM (#24808577) Homepage Journal
    Cellular communication systems get around scaling issues by having smaller cells. A single base station might actually support four cells in different directions. I wonder if you could build a wifi antenna with a single lobe, then cluster the antennas to give a multi-lobe access point.

    The base station would have to support multiple antennas, but this wouldn't require a lot more transceiver hardware. The antennas could be multiplexed.
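
    A toy model of that idea, assuming each lobe gets its own radio and a non-overlapping channel with perfect isolation between sectors (idealized assumptions, for illustration only):

        # Aggregate capacity if each sector has its own radio and channel (idealized)
        def aggregate_capacity_mbps(sectors, per_radio_mbps=24.0):
            return sectors * per_radio_mbps

        for sectors in (1, 3, 4, 6):
            print(f"{sectors} sector(s): ~{aggregate_capacity_mbps(sectors):.0f} Mb/s aggregate")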
    • by atomico ( 162710 ) <miguel@cardo.gmail@com> on Saturday August 30, 2008 @07:57AM (#24808959) Homepage

      Access network planning and optimization is a big expense for mobile network operators: selecting sites, antennas and channel allocation, base stations, base station controllers... lots of complexity which has to be handled carefully to obtain a decent quality of service without breaking the bank. It is a full-grown discipline with its own specialized training, books, professionals, etc.

      Don't expect that WLAN can work magically without a similar effort.

      • Don't forget their protocols are optimized for this kind of thing, whereas 802.11 is not.

        Further, don't forget that cell phone calls are like really long-running, slow-speed transmissions, whereas web traffic is high-bandwidth transmissions in short bursts.

        On top of that, don't forget TCP/IP *hates* mesh networks and *hates* you hopping around on one. All the wireless protocols either have to deal with you moving from access point to access point or they just ignore the problem and don't let you roam.

        Don't expect that WLAN can work magically without a similar effort.

        People pull

    • Cellular systems use even LARGER cell sizes than Wi-Fi. Hell, it's not even in the same ball park. Cellular providers generally have even less spectrum than Wi-Fi and even the biggest companies only have around 100 MHz. The 2.4 GHz band alone has 80 MHz and the 5 GHz band has 480 MHz of total unlicensed spectrum. The difference here is that the cell providers have exclusive access to that spectrum and they're extremely careful about how they ration the resources.
    • I interned for a company named "Xirrus" this past summer; their primary product is similar to what you describe. It's an "Array" of directional access points in a radial pattern, each access point running on a different channel in the 5GHz band (802.11a). These things are pretty expensive and mostly sold to universities, airports, etc, but they work pretty well... Actually I just noticed that Xirrus was mentioned in TFA. Anyway, if you want a look at the inside of one of these things, see here: http://xirr [xirrus.com]
  • by erikdalen ( 99500 ) <erik.dalen@mensa.se> on Saturday August 30, 2008 @06:42AM (#24808633) Homepage

    There was a very interesting research article about DenseAP, which tries to solve this problem, in the latest issue of ;login:. Unfortunately it's still subscribers-only. For Usenix members it's at the link below, and others might find something on Google :)

    http://www.usenix.org/publications/login/2008-08/index.html [usenix.org]

  • But today, many wireless administrators are focusing more attention on scaling capacity to address a surge in end users and the multimedia content they consume

    Here it is after the fixing.

  • by jskline ( 301574 ) on Saturday August 30, 2008 @08:19AM (#24809095) Homepage

    The fact is that this is "radio" for all it's worth. The "radio" part is what carries the signal, much like the Cat5e does with the wired stuff. The problem is that people are thinking about and going about this from the wrong direction. I saw some of this years back when all we had was 802.11b and we tried to fill up a wireless access point with as many connections as we could. The access point started dropping connections erratically, and bandwidth to all connected users was suffering after only about 10 or so users doing concurrent and sustained file transfers. We tried this again later with 802.11g and pretty much got the same issue.

    All they did with 802.11g to get faster throughput was to spread the signal out wider, so it covers about 3 channels compared to what 802.11b uses. It didn't really change the fundamental way in which the radio "wire" is connected and how it's accessed. The sender/receiver can only handle just so much through it.

    This is not really a scaling issue of being able to resolve a large number of hosts behind an access point, but really more a change of the fundamental design of the "carrier" in the first place. My assessment here is that our so-called "WiFi" will actually have to morph to a cellular type of radio rather than what we have now in order to properly scale. A cellular method will carry with it a multi-channeled, multi-homing sender-receiver that can better handle multiple connections, unlike a single transmitter/receiver pair used to handle the whole lot.

    Just my humble opinion.

    • Actually, the progress from B to G and then N corresponds to specific improvements in the radio. B uses "direct sequence" modulation; G (and A) use OFDM, which is more efficient and allows for greater throughput; N uses MIMO, which roughly multiplies the bandwidth of the channel by adding antennas and radios in the same band.
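
      For reference, the progression described above, with commonly cited peak PHY rates (the exact numbers here are added for illustration):

          # PHY generations and commonly cited peak rates
          phys = [
              ("802.11b", "DSSS/CCK",    "2.4 GHz",     11),
              ("802.11a", "OFDM",        "5 GHz",       54),
              ("802.11g", "OFDM",        "2.4 GHz",     54),
              ("802.11n", "OFDM + MIMO", "2.4/5 GHz",  600),  # 4 streams, 40 MHz channels
          ]

          for name, modulation, band, peak_mbps in phys:
              print(f"{name}: {modulation:<12} {band:<10} up to {peak_mbps} Mb/s")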
  • Well duh... (Score:3, Insightful)

    by BlueParrot ( 965239 ) on Saturday August 30, 2008 @09:09AM (#24809387)

    At the end of the day the electromagnetic spectrum can carry only so much information using a given number of frequencies. If you want to send data at this and that many bits per second, you are going to need a frequency with a similar number of periods per second. Okay, it's not quite that simple, but at the end of the day higher data rates mean you need higher frequencies. If you fix the frequency, that instantly caps the theoretical maximum amount of data you can transmit. There are two ways to address this:

    a) Increase the frequency

    b) Deploy more access points so you are less likely to have many computers using the same one.

    The second alternative is essentially equivalent to using more wired networks and fewer wireless ones. Even if all the communication in the network is done in some sort of p2p mesh, increasing the number of access points increases hardware costs, which is the same problem as you have with wired networks.

    Thus to get large data throughput you need to increase the frequency. Eventually you reach frequencies where the waves no longer bend around obstacles and you will need a waveguide, such as a telephone line, a coaxial cable, or optical fibre. This is why wired networks will always outperform wireless. By using a waveguide you are not limited in frequency by the requirement that the signal should have a wavelength long enough to dodge obstacles and diffract around corners, and thus you can increase the frequency far beyond what you will ever achieve with wireless communication, hence getting better bandwidth.

    These are physical limits, not merely technological ones. If you want high bandwidth you will need high frequencies, which in turn means you will eventually need either line of sight between the nodes or a waveguide ( wire ). Ok, theoretically something like a proton beam has a frequency so high you will be limited by other things ( such as energy consumption ) rather than frequency, but you need line of sight for those as well. I guess if you used neutrinos or some other very penetrating radiation you would always have line of sight, but barring any sudden breakthroughs in neutrino detection/generation I doubt that is going to be practical for simple data transfer any time soon.
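
    The ceiling being described here is essentially the Shannon limit, C = B * log2(1 + SNR): for a fixed bandwidth and signal-to-noise ratio, no modulation scheme can do better. A quick sketch with assumed, purely illustrative SNR values:

        from math import log2

        def shannon_capacity_mbps(bandwidth_hz, snr_db):
            """Shannon limit C = B * log2(1 + SNR) for an ideal channel, in Mb/s."""
            return bandwidth_hz * log2(1 + 10 ** (snr_db / 10)) / 1e6

        for bw_hz, snr_db in ((20e6, 10), (20e6, 25), (40e6, 25)):
            print(f"B = {bw_hz/1e6:.0f} MHz, SNR = {snr_db} dB -> "
                  f"C <= {shannon_capacity_mbps(bw_hz, snr_db):.0f} Mb/s")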

    • by drwho ( 4190 )

      When you say frequency, I believe you mean spectrum, or bandwidth.

      • No, he means the carrier frequency, or perhaps just the band.

        If your bandwidth is 10% of your carrier frequency (quite a lot, actually, and the bigger the percentage, the less gain you're gonna get on your antenna), then a 60 Hz carrier will give you something like 6 baud. Not a very high data rate, even with quadrature.

        A 6 GHz carrier, however...
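
        Making that 10% rule of thumb explicit (it is only a heuristic, not a physical or regulatory limit):

            # Bandwidth available under the 10%-of-carrier rule of thumb
            def usable_bandwidth_hz(carrier_hz, fraction=0.10):
                return carrier_hz * fraction

            for carrier_hz in (60.0, 2.4e9, 6.0e9):
                print(f"carrier {carrier_hz:>14,.0f} Hz -> ~{usable_bandwidth_hz(carrier_hz):>13,.0f} Hz usable")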

  • Project died [startribune.com]. I guess. Poles are still all over half the suburb.

  • Yeah right. I don't pay for texts. They're $.20 if I use them... If a text was 5k, which I know it's less, that's 1024/5 x $.20 - over $40 per MB, and well past $100 per MB at realistic message sizes...

    If they want us to use text, they're going to have to make it free.

  • 2.4 GHz is full. 5 GHz is not so good for many users. It would have been great if some of that recently freed-up TV spectrum had been made available for wifi.

    My guess is that within a couple of years there will be 'grey-market' wifi devices that operate in other bands, illegal to use in the US and many other countries but used nonetheless, much as extended-range cordless phones and the CBs of old.

    Regulators would be wise to head off the problem by freeing up some spectrum for additional wifi bands right now.

    Anot

  • if you've got too many people hogging an access point, maybe you should think about implementing some kind of bandwidth throttling or traffic shaping. man tc.
  • by drwho ( 4190 ) on Saturday August 30, 2008 @12:00PM (#24810875) Homepage Journal

    WiFi falls back to lower data rates when signal conditions force it to. Beacons are sent at the lowest data rate, 1 Mbps. If access points refused to lower their data rate below some threshold, more bandwidth would become available on a given channel. The noise floor will also drop. Of course, some users will not be able to use the network because they can't connect at a higher data rate, even with the drop in noise floor. But many of these will be outliers, or people who aren't actually on campus but using campus networks. Too bad for them; assist legitimate users in upgrading equipment.

    If you didn't have the restrictions of backwards compatibility, you could drop support for 802.11b and DSSS completely, and have an 802.11g network. DSSS is less efficient than OFDM when in close proximity. Again, distant users are at a disadvantage.

    If you've ever sniffed a large wifi network you'll see a lot of junk traffic, mostly from Cisco and Microsoft protocols which were meant for a wired environment where bandwidth is cheap. Filtering these at the AP can help the bandwidth problem.

    OK, there's my consulting for today. My bill is in the mail.
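
    One way to see why the low rates hurt everyone: airtime per frame. A rough sketch (payload only; preambles, ACKs and contention overhead make the real numbers worse), assuming 1500-byte frames:

        # Airtime for one 1500-byte frame at different PHY rates (payload only)
        FRAME_BITS = 1500 * 8

        for rate_mbps in (1, 11, 54):
            airtime_ms = FRAME_BITS / (rate_mbps * 1e6) * 1e3
            print(f"{rate_mbps:>2} Mb/s: {airtime_ms:6.2f} ms of airtime per frame")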

  • A few points that might help scalability and transfer rates:

    * Larger spectrum with the ability to use slightly higher power output for increased range. Universities and corporations that require higher output would be designated a section of that spectrum as to not interfere with nearby residential wireless equipment. It is obvious that the current 2.4GHz wireless spectrum is oversaturated with devices. Given that most users leave the wireless channel on the router's default setting (not to mention the s
  • I saw the headline and thought "What? Wireless LANs and face-huggers? Huh?"

    Definitely need more coffee...

  • Video artifacts and loss of audio are the result.
  • To put it simply, I guess Networkworld has missed the arrays Xirrus makes. Plus, it helps to be upgrading the back-end wired equipment (switches, routers, even cabling) to support the faster requirements. Just my penny thought.

"I am, therefore I am." -- Akira

Working...