Wireless Networking Hardware

When 54 Mbps isn't 54 Mbps: 802.11g's Real Speed (127 comments)

eggboard writes "Matthew Gast, author of 802.11 Wireless Networks, filed this article for O'Reilly Networks explaining exactly how fast 802.11g really is: that is, what's the actual data payload and real throughput, not the rated maximum speed. His conclusion? In mixed 802.11b/g networks, which will be common for years to come, g is only 1.6 to 2.4 times faster than b, not 5 times faster as it is in its g-only mode. This article has real math based on the specs, rather than armchair speculation."
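
For a rough sanity check of the summary's "1.6 to 2.4 times faster" figure, here is the arithmetic using the TCP throughput numbers quoted from the article elsewhere in this discussion (roughly 29 Mbps for a g-only network, 13.4 or 8.9 Mbps for g once a b station associates, and about 5.6 Mbps for plain b). The snippet below is only an illustrative check of those quoted figures, not a model of its own:

```python
# Ratio check of the "1.6 to 2.4 times faster" claim, using the TCP
# throughput figures (in Mbps) quoted from the article in this discussion.
b_only      = 5.6    # 802.11b
g_only      = 29.0   # 802.11g with no 802.11b stations present
g_mixed_cts = 13.4   # g with a b station associated (lighter protection exchange)
g_mixed_rts = 8.9    # g with a b station associated (heavier protection exchange)

print(f"g-only vs b:      {g_only / b_only:.1f}x")       # ~5.2x
print(f"mixed, 13.4 Mbps: {g_mixed_cts / b_only:.1f}x")   # ~2.4x
print(f"mixed,  8.9 Mbps: {g_mixed_rts / b_only:.1f}x")   # ~1.6x
```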
This discussion has been archived. No new comments can be posted.

  • In other news: (Score:4, Informative)

    by gerardrj ( 207690 ) on Saturday August 09, 2003 @02:50PM (#6655930) Journal
    When you connect a 10bT NIC to a 100bT switch you get reduced throughput.

    EVERY medium that I've seen specs for publishes the actual bit rate of the wire/cable/fiber, not the end-user throughput. They can't know that, because they don't know what protocols you will be running over the network.

    • by Barbarian ( 9467 ) on Saturday August 09, 2003 @03:06PM (#6656018)
      However, when you connect a 10BaseT NIC to a 100BaseT switch, you don't slow down the rest of the connections to the switch, which can still operate at 100BaseT. The situation with wireless, a shared medium, is more analogous to connecting a 10BaseT NIC to a 100/10BaseT auto-sensing hub: when you hook up that 10BaseT card, it slows down the rest of the hub to 10BaseT.
    • You'll save yourself some grief [slashdot.org] if you get yourself a wireless card.
      I got myself one too. No regrets. ;)
      • by qqtortqq ( 521284 )
        Why weren't they [msnbc.com] taken alive? Why not teargas and SWAT team?

        OK, let's say you are one of the people who entered the home. You were shot at. Do you want to go in again and get shot at more? They tried a SWAT-style entry, were shot at, so they decided to retreat and shoot back from a safer position. Sure, it would have been nice to have them alive, but why risk more US deaths for it? The house had bulletproof windows; who knows what other fortifications lay inside? If you have bulletproof windo
      • I think that not taking them alive was an unfortunate, but reasonable, alternative.

        I used to be a cop, and did SWAT for about seven years... an assault on a fortified target like that is difficult in the best of circumstances, let alone in the midst of a hostile city, where you may or may not be able to guard your flanks. If that situation had turned into a prolonged siege, the brothers might have had the opportunity to contact local resistance elements, get some media attention, and shift the balance of
    • Re:In other news: (Score:1, Redundant)

      by rf0 ( 159958 )
      You of course have the overhead of switching, an overloaded network, etc.

      Rus
  • by Have Blue ( 616 ) on Saturday August 09, 2003 @02:52PM (#6655941) Homepage
    This doesn't sound much better than armchair speculation either... Where are the real-world throughput benchmarks performed with actual equipment?
    • by eggboard ( 315140 ) * on Saturday August 09, 2003 @02:54PM (#6655960) Homepage
      When I said "armchair speculation," I was referring to the mass of articles that come out that talk about Wi-Fi speeds without actually looking at how the technology works.

      Matthew has now provided a baseline. Someone could now perform real-world benchmarks against these theoretical maximums which are built into the standard.

      Matthew's numbers provide optimal performance guidelines for network planning. Real performance will, of course, be even lower.
    • by Wesley Felter ( 138342 ) <wesley@felter.org> on Saturday August 09, 2003 @02:58PM (#6655976) Homepage
      I agree; it seems like it would have been much less work to run benchmarks than to come up with a theoretical model. But at least someone is giving us real data: Small Net Builder 802.11g NeedToKnow - Part 2 [smallnetbuilder.com].
    • Ugh. We've known we were getting screwed since before the "56k" modem. Nothing ever goes that fast. 1GHz processors are actually 998.5MHz. Foot-long hot dogs are actually nine inches.
      • That depends on the bus speed on that particular board and the CPU clock's bus multiplier. I have access to a 1GHz pentium 3 (a Dell, FWIW) machine that is actually 1005 MHz. This is because the clock generator on the board isn't perfect. The system bus is actually 134 MHz instead of being 133. You can't make such a generalization on processor speed because there is always a margin of error. Nothing is perfect.

        I wouldn't be surprised to find a 10/100 NIC that can actually do 105 Mbit/sec (even if the p
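
        The 1005 MHz reading is consistent with the usual core clock = bus clock x multiplier arithmetic; a quick sketch (the 7.5x multiplier is an assumption here, since the post only gives the two bus speeds):

        ```python
        # Core clock = front-side bus clock x multiplier.
        # The 7.5x multiplier is assumed; the post only states the 133 vs. 134 MHz bus clocks.
        multiplier = 7.5
        print(133 * multiplier)  # 997.5 MHz -- the nominal "1 GHz" part
        print(134 * multiplier)  # 1005.0 MHz -- what the slightly fast clock generator yields
        ```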

        • You can't make such a generalization on processor speed because there is always a margin of error. Nothing is perfect.

          Actually, I can't make such a generalization about hot dog lengths either.

          That wasn't the point.

          But thanks for the info.
  • by PrimeWaveZ ( 513534 ) on Saturday August 09, 2003 @02:52PM (#6655943)
    Gigabit ethernet is supposed to be 100 times faster than good ol' 10BaseT. It is, at the root layer. Most devices can't push that much data through the pipe, and with wireless, there is MUCH more error correction that needs to be done in communicating back and forth. Wired networks (normally) don't have the kind of interference that 2.4 GHz-band devices now suffer from.
    • Although you are heavily moderated up as insightful, it appears to me you have not read the article.
      Even with infinitely fast hardware for the error correction and no interference at all, throughput of 802.11g will drop to 13.4 or even 8.9 Mbps once an 802.11b station associates to an 802.11g network. The 802.11b station does not even need to transmit actual data for this!

      So, yes, this is really a new issue stemming from the compatibility between 802.11g and 802.11b.
      Note that 802.11a does not have the same
      • Well, any digital communication device has to have a certain level of redundancy and error correction. An 802.11a device will do pretty much the same thing as a b or g device. However, when b compatibility is introduced, more hopping and use of the channels is done than before. So, if you've got b compatibility enabled, you're going to have more interference compensation (even though data itself may not be transmitted), and things only get worse when you're using things like a 2.4 GHz cordless phone or
        • Well, any digital communication device has to have a certain level of redundancy and error correction. ... Anything RF on the same swath of the spectrum will screw things up.

          You are absolutely right there, but when you read the article, that is not the reason things are as bad as they are.
          If the reasons you mention were the primary ones, there would be some degradation, but not a full 50% drop just from a b device registering without transmitting further data.

          As soon as a .b device register
      • The actual reason that this is not a new issue, per se, is because this limitation (the inevitable consequence of the crosstalk prevention mechanism that's introduced in mixed mode) was discovered, tested, and posted by independent sources months before the pre-official 802.11g devices were released to the general public. Even then it was acknowledged by the vendors, who did not deny that this particular problem would most likely continue to exist after the imminent standardization of this protocol.
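
        For anyone wondering how a silent 802.11b association can cost so much: the protection mechanism makes every g data frame pay for an extra control exchange sent at a b rate, plus b-style inter-frame timing. A crude per-frame airtime budget shows the effect; every timing constant below is an assumed, ballpark value for illustration only, and the article's more careful accounting lands lower, at the 13.4/8.9 Mbps figures quoted above:

        ```python
        # Rough per-frame airtime for one 1500-byte frame on 802.11g with
        # b-compatibility protection active. All constants are assumed,
        # illustrative values -- not taken from the article or the spec.
        payload_bits = 1500 * 8

        difs       = 50    # us, b-compatible inter-frame space
        backoff    = 150   # us, average contention backoff with long (20 us) slots
        protection = 200   # us, CTS-to-self style control frame at a b rate, long preamble
        sifs       = 10    # us
        data_frame = 250   # us, payload plus MAC header at 54 Mbps, plus OFDM preamble
        ack        = 30    # us, ACK plus preamble

        airtime_us = difs + backoff + protection + sifs + data_frame + sifs + ack
        print(f"~{payload_bits / airtime_us:.0f} Mbps")  # ~17 Mbps with these assumed numbers
        ```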
  • Scandal! (Score:3, Funny)

    by Jack_Frost ( 28997 ) on Saturday August 09, 2003 @02:53PM (#6655951)
    100 Megabit Network does not actually deliver 100 Megabit transfer speeds. Film at 11.
    • That's not the point here: the point is that no one has actually provided the math to know what the upper potential limit is. Matthew now has.

      We all know that throughput doesn't equal raw speed. But saying that 100 Mbps != 100 Mbps doesn't add much to the understanding of building networks, does it?
    • 100 Megabit Network does not actually deliver 100 Megabit transfer speeds. Film at 11.

      It does if you have enough processor power and I/O bandwidth. I have an Athlon XP 2500+ and a P-III 700 MHz on my desk. I can scp huge files between the two at about 7 MB/s (or 56 megabits) through a basic 10/100 switch. The limiting factor is processor power on the P-III. With scp, and an otherwise light load, it uses all the available cycles to deliver the 7 MB/s. I'm sure if I used an in-the-clear protocol, lik

      • You'll need something faster than 32bit 33MHz PCI to do so. A single gigabit NIC can saturate a normal 32bit 33MHz PCI bus. Fortunately, 64 bit 66/133 MHz isn't that expensive anymore ...
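
        The bus arithmetic behind that: plain PCI moves 32 bits per clock at 33 MHz, which is only just above gigabit Ethernet's wire speed in one direction, and the bus is shared with everything else in the machine:

        ```python
        # Theoretical 32-bit / 33 MHz PCI peak vs. gigabit Ethernet wire speed.
        pci_bps  = 32 * 33_000_000   # ~1.06 Gbit/s, shared-bus theoretical peak
        gige_bps = 1_000_000_000     # 1 Gbit/s per direction on the wire

        print(pci_bps / 1e9, gige_bps / 1e9)  # 1.056 vs 1.0 -- one NIC alone nearly fills the bus
        ```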
  • Quick rundown: (Score:2, Insightful)

    by dbarclay10 ( 70443 )
    Okay, I read the article, and here's a basic rundown (I think :):

    * 802.11g in a homogeneous network (ie: only 802.11g access points) is faster than 802.11b (by a factor of five or so) *and* 802.11a (just a bit faster)
    * 802.11g in a heterogeneous network (ie: some 802.11g access points, and some 802.11a access points _which have been "associated" with the 802.11g_) is roughly 1.5 to 2.5 times faster than 802.11b, depending on the type of collision-detection algorithm used.

    So, to sum up the summary:
    • by dbarclay10 ( 70443 ) on Saturday August 09, 2003 @03:00PM (#6655986)

      (Sorry for the parent post, I made a typo. Just s/802.11a/802.11b/ in the second bullet point. "oops" :)

      Okay, I read the article, and here's a basic rundown (I think :):

      • 802.11g in a homogeneous network (ie: only 802.11g access points) is faster than 802.11b (by a factor of five or so) and 802.11a (just a bit faster).
      • 802.11g in a heterogeneous network (ie: some 802.11g access points, and some 802.11b access points which have been "associated" with the same network as the 802.11g access points) is roughly 1.5 to 2.5 times faster than 802.11b, depending on the type of collision-detection algorithm used. This setup is not as fast as 802.11a.

      So, to sum up the summary: If you start replacing your 802.11b access points with 802.11g access points, you'll see some performance gain with 802.11g client devices right away. When all your 802.11b client devices are gone (and thus all the 802.11b access points), it'll be way faster. Faster even than 802.11a.

      Why is this billed as a bad thing? You get compatibility with your existing infrastructure, a little bonus performance now, and when the time comes, bang you get a big boost.

      This is the kind of thing that sysadmins such as myself LOVE :)

    • Re:Quick rundown: (Score:5, Informative)

      by rusty0101 ( 565565 ) on Saturday August 09, 2003 @03:13PM (#6656048) Homepage Journal
      Why is this billed as a bad thing?

      For those who understand how this works, it is not a bad thing. However the hardware is being marketed to the general public.

      As a result you can expect that people who see the "5x faster than b" claim are going to completely skip the small print that disclaims it on the back of the box. I think everyone would be surprised if this did not include a significant number of ostensibly technically inclined writers who will report that they did not see the advertised improvements, and who will subsequently give the technology a bad rap.

      One fix for this would be to make APs that ran dual modes, but on different channels. For example, 'b' on channel 3 and 'g' on channel 9. The AP would have to be able to buffer traffic between the two channels, but it would have to do that anyway if it were acting as a repeater, which I believe it has to do to operate in both b and g modes.

      I do not know if this is likely to happen, or is part of the spec already. If it is, then people should expect to see a significant performance boost.

      -Rusty
      • For those who understand how this works, it is not a bad thing. However the hardware is being marketed to the general public.

        As a result you can expect that people who see the "5x faster than b" claim are going to completely skip the small print that disclaims it on the back of the box. I think everyone would be surprised if this did not include a significant number of ostensibly technically inclined writers who will report that they did not see the advertised improvements, and who will subsequently give the t

      • Re:Quick rundown: (Score:3, Interesting)

        by eggboard ( 315140 ) *
        Great idea -- there's a company called Engim [engim.com] that has a very cool set of chips that allow you to run 3 or more channels of Wi-Fi at the same time: you can choose to run some using a, b, or g, depending on the configuration.

        So you could have one AP with "a" on one of the 8 indoor "a" channels, "b" on a non-overlapping 2.4 GHz channel, and "g" on another one. You could offer "g" twice and "b" once. And so on.
          • Just as a reminder, 802.11a is in a different frequency band than 802.11b and g: 5 GHz as compared to 2.4 GHz. There is no overlapping an "a" channel with b and g channels.

          I do like the idea however. I will take a look at the company.

          -Rusty
    • Re:Quick rundown: (Score:2, Informative)

      by Coldeagle ( 624205 )
      What everyone has to remember is that it's not the transfer speed that really matters, IMHO. The additional available bandwidth is what the plus is for me. I had 14 computers on an 802.11b network and they crawled; now, with an 802.11g AP, they cook, because they have more bandwidth to share. If they could come up with an AP that acts as a switch, now that would be cool!
    • So if you have a homogeneous 802.11g network, then some war-driving schmuck pulls up to the curb in front of your house with an 802.11a card, does your network suddenly get slower?
  • by rjstanford ( 69735 ) on Saturday August 09, 2003 @02:59PM (#6655978) Homepage Journal
    Even the manufacturers make this point. From apple's site [apple.com]:

    If a user with an AirPort-enabled computer or a Wi-Fi certified 802.11b product joins an AirPort Extreme wireless network, that user will get up to 11 Mbps and the AirPort Extreme users on the same wireless network will get less than 54 Mbps. To achieve maximum speed of 54 Mbps the wireless network may only have AirPort Extreme-enabled computers on it.

    It's not like this was quite the surprise it's being made out to be...
    • FYI, you can force the AirPort Extreme base station to run in pure 802.11g, mixed mode, or pure 802.11b, so when in pure g mode, you can ensure to a large extent that you won't be losing bandwidth to b clients.
  • Wireless networks have greater latencies than wired networks. It's just a fact. Windows NT (and various linux/bsd/other systems) is usually nice enough to automatically adjust the TCP receive window size to your network latency. Sometimes it gets it right. Other times it gets it wrong.

    For this to be a useful test, you will need to at least publish what the window size was on each end. Also, making sure the immediate area was free of microwaves and blenders helps a bit.

    Now, I fully believe that the test wa
    • Informative? (Score:5, Insightful)

      by Wesley Felter ( 138342 ) <wesley@felter.org> on Saturday August 09, 2003 @03:19PM (#6656062) Homepage
      You didn't even look at the article, did you? There was no testing. The author didn't model TCP windowing at all, and he even failed to take delayed ACKs into account.
    • *sigh* It's not even _about_ TCP. This is a hardware issue. Windowing doesn't factor into it. The article is about the theoretical reduction in throughput (which is, btw, different from latency -- a semi full of hard drives would have high throughput and horrible latency). Also, there was no testing; this is math work. It is about the optimum situation.
    • Wireless networks have greater latencies than wired networks. Its just a fact.

      No; or not significantly. Last time I measured it on my 802.11b network it was well under 2ms. Sure, if you have interference, then you'll be hitting the retries, in which case the average latency will go up; but under good conditions, it's got negligible latency. (Some newbies have suggested that WiFi takes longer due to propagation through the air -- actually the speed of radio waves in air is almost twice that of ethernet signa

  • by chriso11 ( 254041 ) on Saturday August 09, 2003 @03:02PM (#6655994) Journal
    Oh, so I only get a 60% faster connection? Given that soon enough the price differential between B & G will be gone, I still think G is the superior choice. When the wireless cards are only $15 to $20, I think that pure G networks will be much more common. And then you will get much higher throughputs.

    Maybe they should go after Dannon yogurt for decreasing the size of their container to 6oz from 8oz, but keeping the price constant. Then at least they would be reporting on something I could care about.

    • It's not unreasonable to expect 5X when it says 5X right there prominently on the box, and when the throughput is supposed to be about 5 times as much (11 vs 54).

      In my situation (all linksys equip, all G, two feet away for testing) I only get about twice what I used to with B, which was of course already much less than advertised. I can accept that 11 doesn't mean 11, but I can't as readily accept that 54 doesn't mean ~5x whatever-11-is.

      I'm thinking they should just have a different labeling system for t

  • It should be a good thing for SoIP [politrix.org], and for pissing off the RIAA
  • 5.) It's still too slow to download Celeste-Virtual_BJ.avi in a reasonable time
    4.) You're not a cafe communist with a computer and a four dollar cup of coffee.
    3.) The low-bandwidth version of Slashdot doesn't have those cool 1997 .GIF icons.
    2.) The babes dig retro shit these days, like 14.4kbps dial-up.
    1.) Your life revolves around physical things, not six-hundred dollar mp3 players (iPaqs, etc.)
  • by m_chan ( 95943 ) on Saturday August 09, 2003 @03:08PM (#6656027) Homepage
    Thanks to MADWIFI [sourceforge.net] and this post [sourceforge.net] I was able to get my Netgear WAG511 working in a laptop in under five minutes. A walk in the park compared to the last time I configured wireless on my laptop.

    I have not had a chance to thoroughly test it in a multi-signal environment, but the throughput is solid on B. There have been some drop-outs but I blame the D-Link access point to which I am connecting. (DWL-1000AP=junk, but at least it was inexpensive).

    The WAG511 was on sale at Fry's for $80; I haven't seen it significantly cheaper on line, so I grabbed two.

    This afternoon I am working on getting another card to work in a desktop with a PCMCIA adapter to act as a host so I can unload the D-Link; then the higher-speed testing can begin. I have nothing but good things to say about the Netgear card so far. Thanks to all those who are doing the heavy lifting to make A/G support possible.
  • ...is good. Consider the speeds quoted from the article: 29Mbps vs. 5.6Mbps (g vs. b actual throughput) = 5.17x faster, but 54Mbps vs. 11Mbps (g vs. b specs) = 4.90x faster.

    The actual improvement in g-only mode is better than what the specs say.

    Anyways, I don't know why anyone would have a mixed b/g network, unless they are offering it as a public service. It's easy enough to upgrade everything to g-mode only. 802.11g sounds like a big win to me.

  • Definitely not 54Mbps; my pr0n goes more than twice as fast on my 100Mbps connection at school.
  • Basic math... (Score:3, Interesting)

    by SunPin ( 596554 ) <slashspam@@@cyberista...com> on Saturday August 09, 2003 @03:12PM (#6656043) Homepage
    OK, so on a straight g system, you get 5 times the rate of b wireless... b gets ~11Mb per second, times 5 = 55... a nice approximation of 54... where is the problem? Why is this a controversy worth discussing?
  • There are two problems I see with the author's math. First, it doesn't measure real-world conditions. Don't get me wrong, office connectivity is valid. But where's the growth? It's in ISP implementations. Big cities are specifically making use of it. Needless to say, there are a plethora of different-sized buildings in those areas.

    Second, algorithms are an important part of CS, but geez, I have yet to see where fluid conditions have been calculated with necessary precision with just a monolithic algo
  • by rf0 ( 159958 )
    I'm just waiting for the market to see this and realise that you can, totally theoretically, double throughput on a full-duplex connection. So 100Mb/s gives 200Mb/s, so I would guess we will be seeing "108Mb" very soon.

    Rus
  • by kuknalim ( 557660 ) <thonuzo@NOSPaM.yahoo.com> on Saturday August 09, 2003 @03:18PM (#6656061)
    I stopped reading the article when I got to this:

    "Furthermore, the model ignores the sophistication in the TCP acknowledgement model. To avoid constraining throughput, TCP uses "sliding windows" and allows multiple outstanding frames to be transmitted before acknowledgement. In practice, TCP acknowledgements can apply to multiple segments, so this model overstates the impact of higher-layer protocol acknowledgements."

    This reduces the "TCP" he uses to a stop-and-wait protocol.
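
    Why that matters: a stop-and-wait model caps throughput at one segment per round trip, while a windowed sender keeps many segments in flight and is limited by the window or the link, whichever is smaller. A minimal sketch of the two ceilings; the RTT and window size here are assumed, illustrative values, not numbers from the article:

    ```python
    # Stop-and-wait vs. sliding-window throughput ceilings (illustrative only).
    segment_bits = 1460 * 8        # one full TCP segment payload
    rtt_s        = 0.002           # 2 ms WLAN round trip (assumed)
    window_bits  = 64 * 1024 * 8   # 64 KB receive window (assumed)
    link_mbps    = 29.0            # g-only ceiling quoted elsewhere in this discussion

    stop_and_wait_mbps = segment_bits / rtt_s / 1e6             # one segment per RTT
    windowed_mbps      = min(window_bits / rtt_s / 1e6, link_mbps)

    print(f"stop-and-wait: {stop_and_wait_mbps:.1f} Mbps")  # ~5.8 Mbps
    print(f"windowed:      {windowed_mbps:.1f} Mbps")       # link-limited at ~29 Mbps
    ```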

    • by pla ( 258480 ) on Saturday August 09, 2003 @04:05PM (#6656247) Journal
      This reduces the "TCP" he uses to a stop-and-wait protocol.

      Unfortunately, I have no mod points, but I really wish I did so I could throw one your way.

      Apparently, of all the supposed techies reading the article, only you caught that problem (hey, I'll admit it, even I glazed over on the details, so kudos to you). And that one change to his TCP simulation makes ALL the difference -- if you take out all the parts of a protocol that make it play well in a multiple-speed, in-and-out environment, then yes, in fact, it will behave only slightly better than the worst speed in any direction. Almost a trivial statement, yet the parent post's entire premise rests on this one idea.

      Sad. And again, kudos, good catch.
    • Indeed, what kind of dingbat throws out all that "sophisticated" stuff above the link layer and tries to estimate throughput using "real math"? The only way to get REAL numbers is by simply measuring the actual transfer. Not that it's impossible to model TCP's behavior mathematically, but jesus, why bother for this?

      Anyway on a slight tangent here... one thing that's interesting about TCP is that on very low latency media like an ethernet or 802.11 LAN, usually TCP actually performs *better* when you limit its
  • by toupsie ( 88295 ) on Saturday August 09, 2003 @03:24PM (#6656079) Homepage
    I have no complaints about the speed of my neighbor's wifi access point.
    • When I moved a couple months ago, my connection to the net was on my neighbor's wifi until I finally broke down and got my own cable modem. But I definitely had zero complaints about the speed when I was not paying for it...

      Never found out who it is/was; the apt complex is too large. Laptop + Orinoco + cantenna would probably point me in the right direction, but I don't think I could conclusively decide which unit had the signal without knocking on some doors...
  • Now we have the a/b/g and maybe the x standard, and possibly more to come, all of them close to each other in speed and performance. Why aren't they all rolled into one single spec which accommodates 5.4GHz, 2.6GHz, 11GHz and more, rather than making separate specs and causing trouble for the manufacturers, users and buyers?
  • Duh!? (Score:2, Interesting)

    by pkhuong ( 686673 )
    54 Mbps has never been the advertised real bandwidth for g. 54 Mbps is the speed at which data goes between your card and your router. Guess what? There's a lot of correction code, synchronisation, etc.

    Maybe the author should read the docs (RFCs aren't that hard to find, are they?) before jumping on a juicy story?

    Oh, and... DUPE! This "lie" was already covered a few months ago. Heck, there was even the same conclusion: g gives you around 20 Mbps, vs. what, 11 Mbps max on b?
    • Uhm, 802.11 specs are hard to find, actually.

      They're not RFCs, they're IEEE docs. Especially the drafts are a real pain to get a hold of.

      --Dan
    • Only, the disclaimer is in 3-point type on the box (if it is there at all) and the 54Mbps raw rate is in large 72-point type. Joe User will see the larger typeface and will think 54Mbps, even though that's not the rate in the real world.
  • Slashdot posted this story [computerworld.com] back in May:


    The Institute of Electrical and Electronics Engineers Inc. (IEEE) has approved a new and final draft standard for 802.11g wireless LANs that will have a true throughput for Internet-type connections of between 10M and 20Mbit/sec., far lower than 54Mbit/sec. raw data rate initially billed for the standard.

  • Real Speeds (Score:5, Informative)

    by heli0 ( 659560 ) on Saturday August 09, 2003 @03:58PM (#6656206)
    Here are some real numbers. [pcworld.com]

    Best Performance among various hardware

    802.11g
    wep off: 15.5Mbps
    b card on network/wep off: 9.4Mbps
    wep on: 10.3Mbps

    802.11b
    wep off: 4.8Mbps

  • Real Tests (Score:3, Insightful)

    by heli0 ( 659560 ) on Saturday August 09, 2003 @04:00PM (#6656218)
    "his article has real math based on the specs"

    Kinda like judging a car's performance based on "real math based on the specs" when you can actually test the real thing in the Real World.
  • Why do people keep insinuating that comfortable furniture is somehow incompatible with brilliant thought?

    Seriously, I've come up with many a clever solution upon taking pencil and paper to bed with me.
  • Finding A equipment (Score:4, Informative)

    by chill ( 34294 ) on Saturday August 09, 2003 @04:14PM (#6656288) Journal
    Unfortunately, many retailers no longer stock any 802.11a equipment, other than a couple of "universal" a/b/g cards.

    I was in Best Buy and CompUSA and it is wall-2-wall 802.11g -- all "54 Mbps!" in big, bold print.

    It is a shame, since the 5 GHz band is so much less crowded. I think "A" equipment is going to fade into a niche and be harder and harder to find.
  • And I thought this 56K modem in my stack of cards in my closet really put out 56K! ;)
  • "This article has real math based on the specs, rather than armchair speculation."

    How, exactly, is sitting around doing math not "armchair speculation?"
  • by NanoGator ( 522640 ) on Saturday August 09, 2003 @08:02PM (#6657199) Homepage Journal
    ... that the 54Mbit number measured how many bits fly through the air, not how many bits of the data you want carried from one end to the other. If it takes half the bits to guarantee delivery, then you still have a 54Mbit connection, but only 27Mbit of that is the data that you actually see.

    Maybe I'm just used to marketing-ese. I remember when video game cartridges were measured in bits and not bytes. I remember being stunned that the Sega CD could store 4.7 gigs of data. Too bad I had to divide that number by 8.

    Come to think of it, floppies were like that. "2 megs unformatted!"

    Marketing really sucks for computer geeks. We want hard data, they want to give us the highest (or lowest) numbers. Go fig. This particular industry would do much better to appeal to practical #'s and develop trust based on that.
    • Come to think of it, floppies were like that. "2 megs unformatted!"

      That's the one case I don't consider the criticism fair. That floppy can hold 2 MB, you just have to use some storage method less awful than FAT12.

      Personally, I usually use tar, rather than any actual filesystem. You can transfer files seamlessly from/to any operating system. With DOS/Windows you need a program like rawrite to create a file from the contents, but under any form of Unix, you just access the drive's device like a tar file.

      • The overhead between 1.44 MB and 2 MB does not go into the inefficiency of FAT16, but into sector headers and gaps that make it possible to use the disk in 512 byte chunks. Even if you use rawrite or tar, the disk still needs to be formatted (but then only low-level, not high-level). You still end up with only 1.44 MB.
        Only if it were possible to format the floppy with a single sector per track could you cut out most of that overhead and end up with almost 2 MB.
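
        For concreteness, both numbers fall out of the standard HD 3.5" figures (500 kbit/s data rate, 300 RPM, 80 cylinders x 2 sides x 18 sectors of 512 bytes -- standard values, not taken from the post above):

        ```python
        # "2 MB unformatted" vs. "1.44 MB formatted" for a standard HD 3.5" floppy.
        raw_per_track = 500_000 / 8 * 0.2   # 500 kbit/s data rate, 300 RPM -> 0.2 s per revolution
        fmt_per_track = 18 * 512            # 18 usable sectors of 512 data bytes after formatting
        tracks        = 80 * 2              # 80 cylinders, 2 sides

        print(raw_per_track * tracks)   # 2,000,000 bytes -- the "2 MB unformatted" figure
        print(fmt_per_track * tracks)   # 1,474,560 bytes -- the familiar "1.44 MB"
        ```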
        • You still end up with only 1.44 MB.

          Only if it were possible to format the floppy with a single sector per track could you cut out most of that overhead and end up with almost 2 MB.

          Using a particular DOS TSR driver, I have personally formatted disks at up to about 1.88MB (no compression). In fact, that's how I originally managed to fit Quake on a 10-pack of floppies :-). Linux systems transparently support formats of up to about 1.7MB. So, I don't really see a problem with the claim, since the size is n

          • I have never seen a floppy disk formatted with FAT16, although it probably is possible. They are always formatted with FAT12

            You are absolutely right. A floppy disk does not contain more than 4000 clusters, so it will not need FAT16. My bad.

            And yes, a good deal of capacity goes into FAT12

            The FAT12 filesystem takes 1.5 bytes of FAT space per FAT for every cluster on disk.
            Assuming 2 FATs and 512 byte cluster size, this is somewhat less than 0.6%.
            Add a few sectors to this for the root directory.
            Hardly a l
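
            Checking that arithmetic (the 224-entry root directory is the standard 1.44 MB layout, assumed here):

            ```python
            # FAT12 overhead on a 1.44 MB floppy: 1.5 bytes of FAT entry per cluster, two FAT copies.
            clusters  = 2880                 # 512-byte clusters on a 1.44 MB disk
            fat_bytes = clusters * 1.5 * 2   # 8640 bytes across both FAT copies
            root_dir  = 224 * 32             # standard 224-entry root directory, 7168 bytes (assumed layout)

            print(fat_bytes / (clusters * 512))               # ~0.0059, i.e. a bit under 0.6%
            print((fat_bytes + root_dir) / (clusters * 512))  # ~0.011 with the root directory added
            ```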
  • I don't go around telling people I'm 8'11" [roadsideamerica.com] just because it's my theoretical maximum!
  • I've set up a lot of these in people's homes, and I'm at the point where I'm practically begging them to get an electrician to run Cat5 behind the walls. Why? Because 2.4 GHz phones interfere badly with them, the ranges are nowhere near as good as what the manufacturers claim, and they just keep calling me back whenever their connectivity cuts out.
  • test
