Wireless Networking Hardware

Apple Clarifies 802.11g Controversy 177

Wireless Spider writes "A couple of days ago there was a controversy over the 802.11g data rates and supposed changes in the IEEE specification. Apple has clarified this controversy, stating that nothing has changed in the spec. It seems the article from Computerworld was somewhat misleading. Quote from an Apple Vice President: "802.11g is still a 54Mbit/sec standard," Bell told MacCentral. "802.11b is 11Mbit/sec, but your actual throughput is somewhere between 4 and 5-1/2Mbit/sec. The number that's quoted is the data rate that's used between the radios (raw data rate, which includes the protocols etc.)" After reading this article featured on Macworld, 802.11g transfer rate controversy meaningless, says Apple, it seems clear that the people at Computerworld didn't do their homework for the article featured on May 22. Also, there seems to be a lot of politics between 802.11g and 802.11a supporters, so not every article posted on the Internet about this subject may be true; some could be politically motivated."
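The raw-rate versus throughput distinction Bell describes can be sketched numerically. The effective-throughput figures below are ballpark assumptions drawn from the numbers quoted in this thread, not official spec values:

```python
# Rough illustration of raw (signalling) rate vs. effective throughput.
# The throughput ranges are assumptions in line with the figures quoted
# in this discussion, not measured or spec-mandated values.
RATES = {
    "802.11b": {"raw_mbps": 11, "throughput_mbps": (4.0, 5.5)},
    "802.11g": {"raw_mbps": 54, "throughput_mbps": (20.0, 25.0)},
}

def efficiency(raw, throughput):
    """Fraction of the quoted radio rate actually seen by applications."""
    return throughput / raw

for std, r in RATES.items():
    lo, hi = r["throughput_mbps"]
    print(f"{std}: raw {r['raw_mbps']} Mbps, effective {lo}-{hi} Mbps "
          f"({efficiency(r['raw_mbps'], lo):.0%}-{efficiency(r['raw_mbps'], hi):.0%})")
```

Either way, both standards lose a comparable fraction of the quoted rate to protocol overhead, which is the point Apple is making.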
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Wow... (Score:1, Funny)

    by Anonymous Coward
    A couple of days ago there was a controversy over the 802.11g data rates and supposed changes in the IEEE specification.

    Wow, if this isn't news for nerds I don't know what is.
  • by Anonymous Coward
    Also, there seems to be a lot of politics between 802.11g and 802.11a supporters, so not every article posted on the Internet about this subject may be true; some could be politically motivated.

    I mean, good fucking lord.

  • by Kethinov ( 636034 ) on Saturday May 24, 2003 @06:53PM (#6032611) Homepage Journal
    I just, 5 minutes before this article popped up, showed a friend of mine the previous Slashdot article saying that 802.11g's 54mbps is not so. Damn contradictory news services! *shakes fist*
    • by sargon ( 14799 ) on Saturday May 24, 2003 @07:08PM (#6032667)
      This is the result of reporters not doing their jobs properly. Those reporters SHOULD have talked with our (IEEE 802.11g) Working Group chairperson. Some did, and some didn't. Some of those who did talk with Sheung Li didn't bother to ask intelligent questions.

      I guess it is a sign of the quality of journalism-school education these days....
      • As someone who speaks to the press, albeit in a different field, it is also the responsibility of the person being interviewed to MAKE SURE the reporter gets it. If they don't ask good questions, make sure you give them the answers to the questions they didn't know to ask. You have to assume that they are unfamiliar with the area they are writing about and thus try to educate them. Yes, that smacks of doing their job for them, but you have to help them do this. Fair? Probably not. Reality? Yep.
      • There are new openings for them at the New York Times......
  • by Anonymous Coward on Saturday May 24, 2003 @06:58PM (#6032631)
    2.5 million B.C.: OOG the Open Source Caveman develops the axe and releases it under the GPL. The axe quickly gains popularity as a means of crushing moderators' heads.

    100,000 B.C.: Man domesticates the AIBO.

    10,000 B.C.: Civilization begins when early farmers first learn to cultivate hot grits.

    3000 B.C.: Sumerians develop a primitive cuneiform perl script.

    2920 B.C.: A legendary flood sweeps Slashdot, filling up a Borland / Inprise story with hundreds of offtopic posts.

    1750 B.C.: Hammurabi, a Mesopotamian king, codifies the first EULA.

    490 B.C.: Greek city-states unite to defeat the Persians. ESR triumphantly proclaims that the Greeks "get it".

    399 B.C.: Socrates is convicted of impiety. Despite the efforts of freesocrates.com, he is forced to kill himself by drinking hemlock.

    336 B.C.: Fat-Time Charlie becomes King of Macedonia and conquers Persia.

    4 B.C.: Following the Star (as in hot young actress) of Bethlehem, wise men travel from far away to troll for baby Jesus.

    A.D. 476: The Roman Empire BSODs.

    A.D. 610: The Glorious MEEPT!! founds Islam after receiving a revelation from God. Following his disappearance from Slashdot in 632, a succession dispute results in the emergence of two troll factions: the Pythonni and the Perliites.

    A.D. 800: Charlemagne conquers nearly all of Germany, only to be acquired by andover.net.

    A.D. 874: Linus the Red discovers Iceland.

    A.D. 1000: The epic of the Beowulf Cluster is written down. It is the first English epic poem.

    A.D. 1095: Pope Bruce II calls for a crusade against the Turks when it is revealed they are violating the GPL. Later investigation reveals that Pope Bruce II had not yet contacted the Turks before calling for the crusade.

    A.D. 1215: Bowing to pressure to open-source the British government, King John signs the Magna Carta, limiting the British monarchy's power. ESR triumphantly proclaims that the British monarchy "gets it".

    A.D. 1348: The ILOVEYOU virus kills over half the population of Europe. (The other half was not using Outlook.)

    A.D. 1420: Johann Gutenberg invents the printing press. He is immediately sued by monks claiming that the technology will promote the copying of hand-transcribed books, thus violating the church's intellectual property.

    A.D. 1429: Natalie Portman of Arc gathers an army of Slashdot trolls to do battle with the moderators. She is eventually tried as a heretic and stoned (as in petrified).

    A.D. 1478: The Catholic Church partners with doubleclick.net to launch the Spanish Inquisition.

    A.D. 1492: Christopher Columbus arrives in what he believes to be "India", but which RMS informs him is actually "GNU/India".

    A.D. 1508-12: Michelangelo attempts to paint the Sistine Chapel ceiling with ASCII art, only to have his plan thwarted by the "Lameness Filter."

    A.D. 1517: Martin Luther nails his 95 Theses to the church door and is promptly moderated down to (-1, Flamebait).

    A.D. 1553: "Bloody" Mary ascends the throne of England and begins an infamous crusade against Protestants. ESR eats his words.

    A.D. 1588: The "IF I EVER MEET YOU, I WILL KICK YOUR ASS" guy meets the Spanish Armada.

    A.D. 1603: Tokugawa Ieyasu unites the feuding pancake-eating ninjas of Japan.

    A.D. 1611: Mattel adds Galileo Galilei to its CyberPatrol block list for proposing that the Earth revolves around the sun.

    A.D. 1688: In the so-called "Glorious Revolution", King James II is bloodlessly forced out of power and flees to France. ESR again triumphantly proclaims that the British monarchy "gets it".

    A.D. 1692: Anti-GIF hysteria in the New World comes to a head in the infamous "Salem GIF Trials", in which 20 alleged GIFs are burned at the stake. Later investigation reveals that many of the supposed GIFs were actually PNGs.

    A.D. 1769: James Watt patents the one-click steam engine.

    A.D. 1776: Trolls, angered by CmdrTaco's passage of the Moderation Ac
    • Man, where are the moderation points when you need them!

      Great post! Funny stuff!

    • Comment removed based on user account deletion
        • When I was a kiddo, the first time I did something it was funny. The second time I did it, my parents told me to stop. The third time, I'd get a spanking. I've seen this post over 5 times word for word in previous topics.... get it?


        Yeesh, where have you been? They've been posting this one for literally years now. . . .

        Somebody dug into the Troll archives, heh.

    • 399 B.C.: Socrates is convicted of impiety. Despite the efforts of freesocrates.com, he is forced to kill himself by drinking hemlock.


      That should be freesocrates.org, unless they aimed to make money off it... or, since it's aimed at Greeks, freesocrates.gr would also make sense...
    • No wonder this comment was posted as AC since it is just a dupe of something posted last year [slashdot.org] and even that comment was posted anonymously so I expect it was copied from elsewhere (though Google is not showing me where that might be).

      Remove the funny points! Do not encourage IP theft!

  • 802.11g spec (Score:4, Informative)

    by sargon ( 14799 ) on Saturday May 24, 2003 @06:59PM (#6032636)
    I voted on the 802.11g spec. We all knew the problems we would have with 802.11b integration (which have been widely reported in various interoperability tests). We had to draw the line somewhere. And when you draw lines, someone will invariably take issue.

    It is obvious that CW's reporter talked to someone who had an axe to grind. Maybe when we publish the spec in June (possibly July---yes, the IEEE also has a bureaucracy) that reporter will sit down and read it instead of reporting what someone else has said.

    This assumes that the reporter can understand what he/she is reading (a BIG assumption these days with reporters).
  • It would have been good if they did this before they introduced the first (802.11b) wireless cards...

    Now, the speed rating makes it seem as if 802.11a cards are several times faster than 802.11g cards.

    Indeed, it does look as if someone is trying to create confusion.
    • If you had bothered to think this through or do some background research... I guess that is too much to ask of most Slashdot readers.

      We in the IEEE are NOT trying to confuse people. You obviously have no idea what standards bodies do.

      You should peruse our Web site (www.ieee.org) and look at the history of the 802 committees and working groups. If you had done this, you would have discovered that there are different groups of people working on different aspects of networking (we call them "working groups"
      • We in the IEEE are NOT trying to confuse people.

        I never claimed that the IEEE themselves were trying to confuse anybody.

        You obviously have no idea what standards bodies do.

        You obviously are a troll, so I'll be going now.
    • Now, the speed rating makes it seem as if 802.11a cards are several times faster than 802.11g cards.

      No. The 802.11a products will say "54Mbps" on them, and the 802.11g products will also say "54Mbps". Since 802.11a and 802.11g are essentially the same speed, it won't be misleading.
      • by afidel ( 530433 ) on Saturday May 24, 2003 @08:40PM (#6033020)
        The problem is that as soon as you introduce one 11b device into the same cell as an 11g network, you will reduce the effective throughput of even the faster devices down to around 11-15Mbps vs the 25+Mbps that a pure 11g or 11a network achieves. Basically you pay a 40-50% real-world performance penalty for mixed-mode operation at 2.4GHz. Since 11a is in the fairly unused 5GHz range, it doesn't have these problems. The reality is it won't matter in 6-9 months, because every chipset provider will have tri-mode dual-band chipsets, so you can use 11b for legacy networks, 11g for those that bought equipment while it was a draft spec, and 11a for those who bought that equipment or who will buy tri-mode equipment in the future.
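The parent's mixed-mode penalty can be sanity-checked with a toy airtime model. This is a sketch: the per-station effective rates (22 and 5.5 Mbps) and the 50/50 airtime split are assumptions in line with the parent post, not measurements:

```python
# Toy model: a slow 11b station consumes far more airtime per byte than
# an 11g station, dragging down cell-wide throughput. Rates and airtime
# shares here are illustrative assumptions, not measured values.
def mixed_cell_throughput(fast_mbps, slow_mbps, slow_airtime_share):
    """Aggregate cell throughput when a slow station owns a share of airtime."""
    fast_share = 1.0 - slow_airtime_share
    return fast_share * fast_mbps + slow_airtime_share * slow_mbps

pure_g = mixed_cell_throughput(22.0, 5.5, 0.0)  # no 11b stations present
mixed  = mixed_cell_throughput(22.0, 5.5, 0.5)  # 11b grabs half the airtime
print(f"pure g: {pure_g:.1f} Mbps, mixed b/g: {mixed:.1f} Mbps, "
      f"penalty: {1 - mixed / pure_g:.0%}")
```

Under these assumptions the mixed cell lands in the 11-15 Mbps band the parent quotes, with a penalty close to the 40-50% range cited.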
        • These issues won't matter to most of us in a home environment since we can run the b and g networks in parallel.

          When I add an 802.11g device to my stable of equipment (Which in all likelihood will be the oh so portable 12" Aluminum PowerBook.) I will of course need one of the new Airport Extreme base stations (or other g access point). My reliable, though by comparison slow, Airport will still work fine but I will assign it a different channel in the b/g spectrum. New stuff will go fast on the g channel a

  • by peterjhill2002 ( 578023 ) <peterjhill AT cmu DOT edu> on Saturday May 24, 2003 @07:16PM (#6032702) Journal
    It is perfectly reasonable to expect only 20 Mbps throughput with an 802.11a or 11g network, for the same reason that 4-5 Mbps is average using a 10BaseT hub or 802.11b. These are all shared media. Clients must use collision detection and avoidance. There is competition for the available bandwidth. All wireless must contend with clients that are connected at different rates. If a host is far enough from an 11a access point that it associates at 12 Mbps, its communications with the AP will take a longer timeslice of the available airspace. Clients associated at a higher rate will have their effective communication rate drastically affected.

    Does it matter? Is it bad to market 11a and 11g at their 5x Mbps? Or 11b at 11Mbps? Not really. (IMHO) Just like hard drives are advertised at their size before putting a file system on them, it is up to the user to understand what the numbers really mean.

    If you are the only client associated with an AP, your throughput will probably be much closer to the theoretical maximum, just as if there are only two things connected to a hub, their communications with each other will be better than if there were five.
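The "longer timeslice" point above can be illustrated with a quick airtime calculation. This is a sketch that ignores MAC and retransmission overhead; the 1500-byte frame size and the two association rates are assumed for illustration:

```python
# Airtime needed to move the same payload at different association rates.
# Ignores MAC overhead; the point is only that a 12 Mbps client occupies
# the shared medium 4.5x longer per frame than a 54 Mbps client.
def airtime_ms(payload_bytes, rate_mbps):
    """Milliseconds of airtime to send one payload at a given data rate."""
    return payload_bytes * 8 / (rate_mbps * 1e6) * 1e3

fast = airtime_ms(1500, 54)  # client near the AP
slow = airtime_ms(1500, 12)  # client at the edge of the cell
print(f"54 Mbps: {fast:.3f} ms/frame, 12 Mbps: {slow:.3f} ms/frame "
      f"(ratio {slow / fast:.1f}x)")
```

Since the medium is shared, every millisecond the slow client holds the air is a millisecond the fast clients cannot transmit, which is why one distant station drags everyone down.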
    • The problem here is that with 802.11b, the theoretical max _is_ about 5.5mbps even if there is only one user associated with the AP.
    • Wireless doesn't do collision detection... because you can't. You only do avoidance (more overhead)

      The bottom line is, what number SHOULD we put on the spec? Call it 11Mbps? It's only approximately that, and that doesn't really tell you anything about the spec. Calling it 54Mbps is totally, completely accurate, and those who misunderstand simply, well, do not understand.

    • You can't expect much more than 5 Mbps because around 5.5 Mbps is the theoretical bandwidth. Quoting 11 Mbps is highly misleading, as this is merely the signalling rate; there is some overhead in the physical-level protocol. If this sort of misleading labelling were used by ethernet, 100 Mbps ethernet would be advertised as 125 Mbps (the signalling rate is 125 Mbps, but it needs 5 bits for each 4 bits of real data transferred).

      So yes, I think it's bad to market 11b at 11 Mbps or 11g at 54 Mbps, as these
    • "It is perfectly reasonable to expect only 20 mbps throughput with a 802.11a or 11g network, for the same reason that 4-5 mbps is average using a 10baset hub or 802.11b. These are all shared mediums"

      So then call it 20mbps for 802.11g or 5mbps for 802.11b. Calling it 11mbps is a scam since you can NEVER reach that speed. I mean, with wired ethernet at least you come close to the spec; with wireless you don't even get half. Saying it's a shared medium is a cop-out. Myself and many other home users are only usi
      • If you bought a Toyota Prius and always drove with 4 people who weighed 300 pounds each in it, you would not get anywhere near the rated mpg.

        If you buy a HD and format it with a lousy file system, you will get nowhere near the rated capacity.

        If you buy an ethernet hub and connected 50 computers to it using daisy chained hubs, you won't get anywhere near 10mbps.

        The fact is, they need to pick a number. The number they pick relates to the maximum throughput the device can transmit. Once you subtract protoco
  • Apple? (Score:4, Insightful)

    by lostchicken ( 226656 ) on Saturday May 24, 2003 @07:17PM (#6032709)
    Why is Apple responsible for defending 802.11g, and why is anyone attacking Apple for the shortcomings (if any) of 'g?

    I have a Linksys 802.11g system, and if there is a problem with the design of the spec, that's the IEEE's fault, not Linksys, Apple or anyone else.
    • Re:Apple? (Score:1, Troll)

      by sargon ( 14799 )
      Why is this the IEEE's fault?

      What is wrong with the spec? Are you even QUALIFIED to comment on the spec? Can you tell us WHY 802.11g is a bad standard?

      Have you even read the 802.11b spec? It is available for free at the IEEE's Web site.

      Why don't you read that, then come back and tell us what YOU would change to give us a better spec.
      • He said *IF* there is a problem with the spec, then it's the IEEE's fault, not someone else's. And he would be right... if there was a problem in the first place.
        • Right on. There is nothing at all wrong with the spec. It's the best networking technology I have ever deployed. I have had not a single problem with it. It's even more reliable than 802.11b, and it just works.

          I am typing on it right now. I give the credit to the IEEE, not Apple or Linksys.
    • Why is Apple responsible for defending 802.11g, and why is anyone attacking Apple for the shortcomings (if any) of 'g?

      They just got their PR people to OK the release first, and they are a more or less neutral party.

      They only said what I and several others said in the first thread, raw performance and measured are apt to be very different. TCP/IP and an application layer protocol add quite a bit of overhead, as do the collision system and so on.

      There are a lot of 802.11a companies who would like to see

      • Umm, what companies support 11a and won't have an 11g product?? None that I am aware of. Everyone has a solution for both, and very soon everyone will have a single solution for a, b, and g. 11a and 11g share signaling methods and 11b and 11g share the same frequency, so supporting all three just makes sense. What some companies like Cisco are and have been saying is that for people who have legacy 11b equipment it makes more sense to have the newer equipment on 11a. This is a simple fact, the implementation
    • by King_TJ ( 85913 ) on Sunday May 25, 2003 @11:48AM (#6035122) Journal
      You have to remember, Apple doesn't really offer a huge product line like some vendors. They have a core set of laptops, desktops, one type of server product, and several accessories and gadgets (mainly the iPod).

      The Apple "Airport Extreme" was the first commercial 802.11g device to market - and Apple did their best to put a "spin" on it that it was somehow their own invention. ("That's right folks... good old Steve J. is bringing you the next insanely great thing. Faster wireless than anyone else offers!") Can't really blame them.... They were the only one willing to stick their neck out and start selling the product at the time. Everyone else waited until Apple had it on the shelves before rushing to release their own.

      If people start publicly attacking the 802.11g spec now and making it look bad, Apple stands to lose the most from it. They've already built all of their systems with it either integrated inside, or upgradable by expansion board.
      • The Apple "Airport Extreme" was the first commercial 802.11g device to market - and Apple did their best to put a "spin" on it that it was somehow their own invention

        Granted, Apple didn't invent 802.11g, but Airport Extreme also includes the Apple software, which makes it easy to set up. And after trying to get the Windows software to play nice on a shared network, I can tell you, that's a definite plus. Actual passwords (as opposed to hexadecimal) and autodetection/autojoin on the level of Airport are

        • Yeah, you're correct there. Apple's wireless is a relative no-brainer to set up (at least compared to many wireless PC config. utilities).

          Still, that's improving on the PC side as well. I recently set up some Belkin 802.11g wireless stuff for a client, and it allowed actual passwords too. (It even showed what they converted to in hex, in a separate "info" window below as you keyed it in.) It also featured auto-detect.
          The Belkin hub had an integrated web-based interface, so using the included Windows setup
  • Thats odd (Score:3, Insightful)

    by dnoyeb ( 547705 ) on Saturday May 24, 2003 @07:21PM (#6032725) Homepage Journal
    Most of the time the quoted speed is the RAW speed. A 100Mbps network card is doing 100Mbps in RAW speed, and actual data-level speed is much lower. So then, shouldn't they always be quoting the higher 54Mbit/sec as opposed to some 11Mbit/sec!!?!

    Anyway, 802.11b is 11Mbps so I can't believe 802.11g would be the same. I am automatically decreeing that 802.11g is faster than 11Mbps...

    Does sound like bad reporting. Shouldn't happen with technically savvy folks.
    • Sorry, but 100Mbps full duplex ethernet performs at near theoretical max. With 1500 byte packets (which happens when you are pushing a lot of data), you get 95Mbps or more.

      The fact that 802.11b is marketed at 11Mbps is a complete joke.
    • Unless 11Mbps for 802.11b was also a raw speed, which seems likely from other comments here.
    • Because in 100Mbps ethernet, the raw speed is NOT much slower.... the max theoretical speed a host can transmit on 100base with ethernet, ip, and tcp overhead is still over 90Mbps.. (I think it's near 97 Mbps, haven't calculated it for a few years). This number is even closer for 10Mbps.. (close to 9.9Mbps)

      Nobody ever really kicked up a fuss about this because the speeds are so damn close... but in wireless, they are very different.

      • ...the max theoretical speed a host can transmit on 100base with ethernet, ip, and tcp overhead is still over 90Mbps.. (I think it's near 97 Mbps, haven't calculated it for a few years). This number is even closer for 10Mbps.. (close to 9.9Mbps)

        Nobody ever really kicked up a fuss about this because the speeds are so damn close... but in wireless, they are very different.


        The max theoretical speeds for Ethernet may be higher, but everyday speeds with normal software, even under pretty decent conditions, ar
        • Two points.. first, what does your software mean by Kbytes (1024 or 1000, it's ambiguous)

          and.. that is showing the limitations of your hardware and computer.

          I regularly get speeds above 90Mbps using ftp between two hosts.

          Your switch is one bottleneck, your computers and network cards are the other.

          Good network cards, and good switches can easily get you up into that 90% range between two hosts.

          TCP overhead doesn't count for that much.

          Here we go, in bytes

          Ethernet frame:
          8 byte preamble
          6 byte destinatio
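Filling out that per-frame accounting with the standard Ethernet field sizes (the TCP/IP header sizes below assume no options), the overhead at full frame size works out to roughly 5%:

```python
# Per-frame byte accounting for full-size (1500-byte payload) Ethernet
# frames, continuing the list above. Standard field sizes:
PREAMBLE_SFD = 8   # preamble + start-of-frame delimiter
DST_MAC      = 6   # destination address
SRC_MAC      = 6   # source address
ETHERTYPE    = 2   # type/length field
FCS          = 4   # frame check sequence
IFG          = 12  # minimum inter-frame gap (in byte times)
MTU          = 1500
IP_HDR, TCP_HDR = 20, 20  # headers without options

wire_bytes = PREAMBLE_SFD + DST_MAC + SRC_MAC + ETHERTYPE + MTU + FCS + IFG
app_bytes  = MTU - IP_HDR - TCP_HDR
efficiency = app_bytes / wire_bytes
print(f"TCP goodput on 100 Mbps Ethernet: {100 * efficiency:.1f} Mbps")  # ~94.9
```

That ~95 Mbps figure matches the "90Mbps or more with 1500-byte packets" claim upthread; anything much below it points at the hosts, NICs, or switch rather than the protocol.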
          • I have to agree with the prior poster. I did not get very close to 10Mbps on my 10Mbps networks when I had them, and I do not get very close to 100Mbps on my 100Mbps networks now that I have them.

            The switch should not add any delay to the transaction, and I do not know why you would blame anything on the switch.

            Also, saying good hosts and good cards, blah blah, does not matter, because his cards undoubtedly labelled themselves as 100Mbps cards, and not 95Mbps cards.

            Further, you are starting at the TCP layer when y
            • Let me qualify by first saying I've worked designing network equipment at a lower level, and have analyzed this stuff in detail. Of course, that doesn't make me right all the time... I just mean, I have actually researched this stuff somewhat seriously, and looked at it with scopes, compared products, etcetera.

              First, the switches DO matter, because despite what you might have been told, switches are NOT all capable of switching at wire speed. If you don't think the switch has an effect, get a better switch
  • So if I don't allow any B clients on my network, I get all-G max speed. But maybe my neighbor has an entire B network that overlaps, from an RF point of view, with my network? If I turn off the compatibility mode, will I sink his B network?
  • Are they using the correct SI form of the prefix Mega, or the now-outdated binary form of the prefix Mega, which has been replaced by the prefix Mebi?

    I wish this stuff would catch on. It's useful.
    • by mindstrm ( 20013 ) on Saturday May 24, 2003 @08:38PM (#6033011)
      It's 54,000,000 bits per second, which is 54 megabits per second... both under the old system AND the new one.

      Yes, I realize this contradicts what you might think about a Kilobyte (now Kibi) being 1024 bytes, and so on and so forth... however, data transmission speeds have ALWAYS been specified in metric units of bits per second.

      A kilobit per second was always 1000 bits per second.

      When someone says megabit, it always meant one million bits per second, not some strange power of two. That only comes about when you are dealing with memory.

      With the internet, it got confusing because people started going from kilobits to kilobytes, or writing software to show upload rates without real knowledge of how things are technically specified, so it got muddy, and you have to guess what people mean.

      However, in the case of 1.544Mbps T1, 10, 100, 1000, or 10000base ethernet, 11Mbps wireless, or 54Mbps wireless, we are talking about powers of 10
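A quick check of the decimal interpretation, with the binary reading shown only for contrast (the "54" here is the 802.11g rate discussed above):

```python
# Link speeds are decimal: 54 Mbps means exactly 54,000,000 bits/s.
# Memory-style binary units (the "mebi" prefixes) do not apply here.
mbps_decimal = 54 * 10**6  # what the 802.11g spec means by 54 Mbps
mibps_binary = 54 * 2**20  # what it would be if "M" were binary

print(mbps_decimal)                 # 54000000
print(mibps_binary - mbps_decimal)  # 2623104 bits/s difference
```

The gap is under 5% at the mega scale, which is part of why the decimal convention for line rates never caused much confusion until file-transfer software started reporting in (binary-ish) kilobytes.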
  • still misleading (Score:5, Informative)

    by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Saturday May 24, 2003 @07:31PM (#6032758)
    You quote raw signal rate and actual throughput for b, but not for g, which is a bit misleading. For those who still haven't figured it out:

    b: 11Mbps signalling rate, 4-5 Mbps effective throughput
    g: 54Mbps signalling rate, ~22 Mbps effective throughput.

    [I don't know anything about a, so I'll let someone else comment about that.]
  • by Anonymous Coward
    Reporters that don't do their homework and Slashdot editors that don't check the facts before posting? What is this, the New York Times?
  • this can also coincide with regulations from the FCC (correct me if I'm wrong), limiting 56K modems to actually having a maximum data transfer rate of 53Kbits..

    so although Apple mentioned the article may have something to do with politics, i'm pretty sure there are regulations being set as well..
  • by berniecase ( 20853 ) * on Saturday May 24, 2003 @07:47PM (#6032821) Homepage Journal
    After reading the article, I did a quick search for 802.11g throughput tests and 802.11a/b tests. I came up with two links:

    Tom's Hardware 802.11g throughput tests [tomshardware.com]

    ExtremeTech's 802.11a and 802.11b throughput tests [extremetech.com]

    There's going to be overhead with any protocol, but I would expect that wireless would have higher overhead than wired protocols. There are certainly a lot of things you have to take into consideration for wireless throughput - obstructions, distance, error correction.
  • by Jon_E ( 148226 ) on Saturday May 24, 2003 @08:13PM (#6032913)
    Computerworld reports that the IEEE has changed the 100BaseT spec to only run at 65Mb/s not 100Mb/s as initially specified, thus slowing down millions of computers world-wide. Additionally gigabit ethernet has also been affected by the IEEE bringing many critical business systems down to a crawl.

    The only people who look bad as a result of this are silly chipset vendors and the 54g collaboration of idiots who put products on the market based loosely on the draft since now all their logos look stupid.
  • by Karpe ( 1147 ) on Saturday May 24, 2003 @08:43PM (#6033029) Homepage
    ...or just a play on words with "Jobs" and "Wozniak"? :) Hell, if I had a name like that I would also be promoted to president of hardware product marketing.
    • Check out the Crazy Apple Rumors site [crazyapplerumors.com] archive via this link [crazyapplerumors.com] -- scroll down to the headline "Apple Announces iClone at MacWorld Tokyo."

      To prove the system works, Jobs revealed that Apple Senior Director of Hardware Product Marketing, Greg Joswiak, is, in fact, the result of an iClone experiment combining the genes of Jobs and Apple co-founder Steve Wozniak. "Half Steve and half me!" Jobs said. "He's great for Hardware Product Marketing, and we grew him in just three weeks!"

  • uhm (Score:4, Insightful)

    by Sacarino ( 619753 ) on Saturday May 24, 2003 @09:18PM (#6033143) Homepage
    ...that every article posted on the Internet about this subject might not be true, or could be politically motivated.

    I'm not sure of the age of the submitter, but if this comes as a surprise to anyone, you really should be ashamed. Just because it's in print, on TV, or online does NOT make it true.
  • bandwidth (Score:2, Insightful)

    by theflea ( 585612 )
    I find the discussions about bandwidth (real and potential) less interesting than whether new APs will have good backward compatibility with a and b clients, have better range, and other usability issues.

    It would be nice to stream high-quality video over wireless links, but that's what wired segments are for. Other factors are more important for the 802.11x's (most applications; most people). Like for instance, I'd like to see a breakdown of how many web surfers a 'g' access point could handle in a mixe
  • by hackus ( 159037 ) on Saturday May 24, 2003 @11:03PM (#6033493) Homepage
    I have a question.

    Since 802.11g and b are backward compatible, it would seem the controversy stems from the fact that, if you already invested in 802.11b equipment, mixing 802.11g into your environment is going to cause the 802.11g access point to step down or send RTS/CTS signals after each packet as a courtesy to 802.11b equipment trying to communicate in the same area.

    So, here is something I propose then:

    Say you decide to deploy 802.11g equipment in your warehouse. You have not invested in anything WiFi and you have a nice radio-free environment.

    So you deploy your 802.11g network in your warehouse and everything is ducky.

    Now, along comes Joe Shmoe. Joe Shmoe decides he is going to open a Steppen Brew right next door to your wharehouse.

    He has this brilliant plan about offering customers free internet access while they sip their lattes.

    So he deploys an 802.11b access point on his roof, next to your warehouse operating with 802.11g equipment.

    All of a sudden, you start getting complaints about crappy throughput on your warehouse wireless LAN.

    You can't seem to figure it out, but your 802.11g network is now half the network it was when you deployed it.

    So you look for anyone using 2.4GHz Bluetooth devices, cordless phones, cordless radio headsets... etc.

    Nothing?

    In short, the question is: will 802.11g equipment step down in the presence of any 802.11b device, or does it only step down if that device is actually transmitting on your network?

    Couldn't find anything in the specs that would rule out this completely NASTY scenario.

    Anyone care to comment?

    -Hack
  • A Good Thing (Score:3, Insightful)

    by coolmacdude ( 640605 ) on Saturday May 24, 2003 @11:36PM (#6033598) Homepage Journal
    It seems to me that what the IEEE decided to do was to label the spec with the actual throughput speed as opposed to the raw one. That makes sense, and I don't know why it wasn't done with b. But apparently some people took this to mean the raw speed had been reduced from 54 to 20, which would have meant a sizeable reduction in actual speed.
  • Credibility (Score:3, Informative)

    by Jade E. 2 ( 313290 ) <slashdot@perlstor[ ]et ['m.n' in gap]> on Saturday May 24, 2003 @11:55PM (#6033678) Homepage
    Oh, sure. *I* posted this [slashdot.org] when the original article came up, and nobody cared. But then some fly-by-night company nobody's ever heard of named 'apple' steals my comment, and suddenly it's news :)
  • ...the land of illusions. The speed of your CPU turns out to be a myth and your 802.11g card is subject to controversy.

    • I guess that's a little like PC land, where your 56K modem never really does 56K, your monitor's screen size isn't really quite what it says on the front of the box, and your hard drive's usable capacity is less than the "unformatted capacity" shown on the label. I think for a long time, Sony was even selling 3.5" high-density floppies that said they were "2MB" (formatted to 1.44MB on the PC, though).

      AMD was labeling their PC CPUs with a mythical speed number too. So let's face it, we're talking "computer-la
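The floppy example above is a nice illustration of mixed units: the "1.44MB" figure is actually 1440 KiB, which is neither a decimal nor a binary megabyte:

```python
# The "2MB unformatted / 1.44MB formatted" floppy label mixes units:
# "1.44MB" is defined as 1440 KiB, i.e. 1,474,560 bytes -- neither
# 1.44 * 10**6 bytes nor 1.44 * 2**20 bytes.
formatted_bytes = 1440 * 1024  # how "1.44MB" is actually defined

print(formatted_bytes)          # 1474560
print(formatted_bytes / 10**6)  # 1.47456 decimal megabytes
print(formatted_bytes / 2**20)  # 1.40625 binary megabytes (MiB)
```

So the floppy's "MB" is a hybrid of decimal kilo and binary kibi, arguably the worst of both conventions being argued about elsewhere in this thread.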

"Money is the root of all money." -- the moving finger

Working...