The Thinking Behind the 32GB Windows Format Limit On FAT32 (theregister.com) 124

The reason the Windows UI has a 32GB limit on formatting FAT32 volumes is that retired Microsoft engineer Dave Plummer "said so." The confession comes "in the latest of a series of anecdotes hosted on his YouTube channel Dave's Garage," reports The Register. From the report: In the closing years of the last century, Plummer was involved in porting the Windows 95 shell to Windows NT. Part of that was a redo of Windows Format ("it had to be a replacement and complete rewrite since the Win95 system was so markedly different") and, as well as the grungy lower-level bits going down to the API, he also knocked together the classic, stacked Format dialog over the course of an hour of UI creativity. As he admired his design genius, he pondered what cluster sizes to offer the potential army of future Windows NT 4.0 users. The options would define the size of the volume; FAT32 has a set maximum number of clusters in a volume. Making those clusters huge would make for an equally huge volume, but at a horrifying cost in terms of wasted space: select a 32-kilobyte cluster size and even the few bytes needed by a "Hello World" file would snaffle the full 32k.

"We call it 'Cluster Slack'," explained Plummer, "and it is the unavoidable waste of using FAT32 on large volumes." "How large is too large? At what point do you say, 'No, it's too inefficient, it would be folly to let you do that'? That is the decision I was faced with." At the time, the largest memory card Plummer could lay his hands on for testing had an impossibly large 16-megabyte capacity. "Perhaps I multiplied its size by a thousand," he said, "and then doubled it again for good measure, and figured that would more than suffice for the lifetime of NT 4.0. I picked the number 32G as the limit and went on with my day."

While Microsoft's former leader may have struggled to put clear water between himself and the infamous "640K" quote of decades past, Plummer was clear that his decision process was aimed at NT 4.0 and would just be a temporary thing until the UI was revised. "That, however, is a fatal mistake on my part that no one should be excused for making. With the perfect being the enemy of the good, 'good enough' has persisted for 25 years and no one seems to have made any substantial changes to Format since then..." ... However, as Plummer put it: "At the end of the day, it was a simple lack of foresight combined with the age-old problem of the temporary solution becoming de-facto permanent."

  • "no one will need more than 800k" also ;) it is just because it is ;)
    • The only numbers that ever make sense in computing are 0, 1, 2, many, much, and more. And, of course, their negative cousins.

      If you think you need to set a hard fixed number other than those, you should reexamine your design closely to see if that's really the best. If it is, great! You've found the counterexample that proves the rule of thumb.

      • The only numbers that matter are zero, one and infinity.

        • Zero is a thing that doesn't exist. It is only a "number" because it is useful to include it.

          The same is true for infinity.

          If your only number is one, you have no numbers.

          The numbers that matter are Planck's constant and pi, but then you need an (arbitrary) artificial number system so that you can scale those, record intermediate values, and account for "frequency." You sure as fuck aren't going to do any of that with 0, 1, and infinity.

          • by hawk ( 1151 )

            >If your only number is one, you have no numbers.

            Years ago, I took graduate measure theory.

            Starting with the idea of nothing, and then adding the set containing nothing, we built numbers and then math up through at least calculus before the semester ran out.

            • Starting with the idea of nothing, and then adding the set containing nothing, we built numbers and then math up through at least calculus before the semester ran out.

              You waved your hands, and by the end you were saying "calculus."

              If you have nothing, and you add the set containing nothing, you still have nothing, you don't even have a set containing nothing. See also: https://en.wikipedia.org/wiki/... [wikipedia.org]

              In software we solve this by merely admitting that sets are useful constructs we create. Done. Now we can use the tool without getting confused.

              Platonic ideals sometimes remain a useful construct, but the physical world does not actually function in any way like that.

              Pick u

          • Comment removed based on user account deletion
        • No, only 0 and 1. Infinity is just 1/0. And negative numbers are just all numbers bigger than half of infinity for your data type. Everyone knows that! :)

      • For programming, I say the useful numbers are:

        0
        1
        As many as you want

        It's either not allowed (you can have zero), items must be paired (X can have one Y), or it's a list.

        As a random example, in a configuration for some software I wrote you can define email addresses to be notified of alerts. That's a list - notify as many people as you want. On the other hand, each user has exactly one password.

        SQL calls those one-to-one and one-to-many relationships.

        • Pedantic -- wouldn't your example be a many-to-many?

          Presumably there is more than one alert (or class of alerts), and each type of alert might have a different distribution list?

          • > Presumably there is more than one alert (or class of alerts), and each type of alert might have a different distribution list?

            In this particular example, no. But if software DID allow for users to create different groups of alerts, I'd say they should be able to create "as many as you want".

            The folks who built Azure like to allow up to four, of whatever.
            I guess in their database each field is duplicated four times - user1, user2, user3, user4. If I were designing it, that would just be "users

          • It depends. If notification lists are lists of registered users, then yes, it would be a many-to-many relationship between users and alerts. If you have to enter the email addresses for each alert, then it would be a one-to-many relationship, and you would just have email addresses repeated in the table.
        • by raynet ( 51803 )

          I am using ternary computer, you insensitive clod.

        • For programming, I say the useful numbers are:

          0
          1
          As many as you want

          As long as you restrict yourself to a 1-bit computer, this is even true!

        • There is also a very special third number called "null pointer exception" or "bottom" or "unknown" (in logic and science). :)

          No, it is not the same thing as 0/false. If you use it like that, you're gonna have a bad time.

          • Or simply null, and Chris Date wrote an interesting book about that (non)value.

            But is that a number? It's *represented* by a bit string, just as your name is represented by a bit string. But what is null / 2?
            Null - 1? Is null actually a number, or is it a flag?

      • Not entirely true. When you've got an on-disk format (or API headers etc) for addressing things, you have a certain number of bits you can use for addresses. If you pick a bit count that's too large, you're wasting a lot of space on overhead. If you pick a count that's too small, it limits your address space. That's why file sizes can only be 4GB on FAT32: the size field is only 32 bits.
        • When you've got an on-disk format (or API headers etc) for addressing things, you have a certain number of bits you can use for addresses. If you pick a bit count that's too large, you're wasting a lot of space on overhead. If you pick a count that's too small, it limits your address space.

          Not entirely true either.
          There are also varints and other similar schemes.

          e.g.: Google Protocol Buffers' 'keep reading bytes and using their lower 7 bits, for as long as the 8th bit is set' scheme allows encoding an arbitrarily large number while still using fewer bytes for smaller numbers (see the sketch below).

          e.g.: Unicode's UTF-8 also has a variable-length encoding (the number of upper set bits before the first '0' bit indicates which position it occupies in the final number: 0nnn nnnnb for the first 7 bits, then 10nn nnnnb for bits 8
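
          A minimal sketch of the protobuf-style varint scheme described above (the function names are illustrative, not any real library's API):

          def encode_varint(n: int) -> bytes:
              # Each output byte carries 7 payload bits; the high bit says "more follows".
              out = bytearray()
              while True:
                  byte = n & 0x7F
                  n >>= 7
                  if n:
                      out.append(byte | 0x80)   # more bytes to come
                  else:
                      out.append(byte)          # last byte: high bit clear
                      return bytes(out)

          def decode_varint(data: bytes) -> int:
              result = 0
              for shift, byte in enumerate(data):
                  result |= (byte & 0x7F) << (7 * shift)
                  if not byte & 0x80:           # high bit clear ends the number
                      break
              return result

          assert encode_varint(300) == b'\xac\x02'               # small numbers stay small
          assert decode_varint(encode_varint(2**100)) == 2**100  # big ones still fit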

      • Also 3, 65537, and 2^255-19.
    • > the infamous "640K" quote of decades past,

      I know reading TFA has always been passé here, but seriously...
    • You are probably thinking of 640K from Bill Gates? - but yeah

      https://quoteinvestigator.com/... [quoteinvestigator.com]

  • by iggymanz ( 596061 ) on Monday January 04, 2021 @05:59PM (#60896858)

    FAT32 itself can support 16TB volumes with 4K sectors and 16K clusters, and can have files of up to 256GB with fat+.

    So the limitations of the GUI seem like something that can be left behind with newer tools and versions of the OS.

    • I don't have a VM handy, but I recall Windows 98 allowing one to create FAT32 partitions larger than 32GB. The provided versions of FDISK didn't quite work right on drives over 64GB, but Microsoft did release an official patch for that.
      • For formatting large USB drives using Windows, the clear winner is actually Windows ME. It supports creating FAT32 partitions over 32GB, ships with USB mass storage drivers (these are absent on Windows 98) and has the fixed version of fdisk.

        Of course, the real answer is to use Linux.

        • And not FAT at all, but to kill it with fire.

          • by hawk ( 1151 )

            For years, a FAT partition was the only way to safely interchange data while dual booting Linux and FreeBSD.

            While both *claimed* to support the other file system, after running a couple of days, they would do *serious* damage to the other's file system.

            Eventually, I was able to get completely away from Linux . . .

            hawk

    • by Rhipf ( 525263 ) on Monday January 04, 2021 @06:59PM (#60897084)

      From what I was able to determine, fat+ isn't really all that compatible with most use cases. It looks like "FAT32+ and FAT16+ is limited to some versions of DR-DOS and not available in mainstream operating systems" https://en.wikipedia.org/wiki/... [wikipedia.org]
      So although fat+ can have files up to 256GB (minus 1 byte), formatting a drive with fat+ would limit its usefulness.

      • Yeah, nobody uses fat+, it is exfat for the big or new stuff.

        It is annoying, I have to install a kernel module before I can mount my camera media.

        But don't believe that wiki about the linux support; it was already available, though not standard, before whatever MS says they did.

    • by Proudrooster ( 580120 ) on Monday January 04, 2021 @11:07PM (#60897578) Homepage

      It is an artificial limitation used to promote the switch to NTFS.
      To format a FAT32 filesystem larger than 32GB just use Linux. GParted is a nice GUI tool if you need one.

      Link to live bootable version: https://gparted.org/livecd.php [gparted.org]

      • by AmiMoJo ( 196126 )

        NTFS isn't a good option for removable flash memory though, especially flash drives without RAM cache and advanced wear levelling/TRIM support (i.e. 99.99% of them).

        exFAT is okay but Microsoft only made it free-ish a few years ago. There isn't really anything else that is universally readable. FAT32 doesn't have journaling and isn't very robust, not ideal for removable drives.

    • Will Windows ITSELF actually ALLOW you to read/write files to such a volume, though?

      I learned the hard way a few years ago (around the time Linux gained the ability to robustly write to NTFS volumes without jumping through hoops or doing anything special besides using a NTFS-enabled kernel and mounting the volume) that there are a lot of things NTFS would allow you to do... and Linux would, in fact, do... that would make Windows pout, sulk, or worse... characters in filenames and limits on the number of cha

      • Unless you did something special, you will also lose all the extended attributes that way. Like rights and other metadata.

      • Yes, you can write to those big FAT32 filesystems in Windows just fine; you can even create them from the command line in Windows, it's just the GUI that can't handle them.

        Microsoft gets to set what the allowed NTFS filenames are; if you create ones from Linux or another system in an NTFS filesystem that Windows doesn't like, you're violating Windows' standards.

      • by Bert64 ( 520050 )

        NTFS allows you to create two filenames where the only difference is the case of the filename.
        Some malware uses this as a method of hiding: if you browse using the standard Windows tools you will only see one (mundane) file and not the malware.

  • by phantomfive ( 622387 ) on Monday January 04, 2021 @06:00PM (#60896860) Journal

    At the time, wasting 32k was a lot more important than a 32GB filesystem size.

    • He could have made the option configurable as a read-only param in the boot sector, but that would have added extra complexity for minimal value. Extra complexity means all our camera SD cards would be written with buggy drivers.

      • no need, Fat32 itself can support volumes up to 16TB.

        Windows imposed an artificial limit that the filesystem itself doesn't have.

        • Oh, really? Why does Windows impose that artificial limit? What exactly is being limited?

          • by ceoyoyo ( 59147 )

            Sounds like it was just the options provided in the GUI.

          • To promote the use of NTFS vs FAT32.
          • Oh, really? Why does Windows impose that artificial limit? What exactly is being limited?

            If only there was a linkable article that could explain that to you...

    • in a lot of systems it seems like having a clear decision made on something is more important than what the actual decision is.

    • by andymadigan ( 792996 ) <amadigan&gmail,com> on Monday January 04, 2021 @06:44PM (#60897032)
      It's not like it would have introduced any wasted space on a (say) 8GB disk. You choose the cluster size based on the number of (512 byte) sectors on the disk and the number of clusters you can count.

      FAT32 supports 2^28 - 1 clusters, so even using the minimum cluster size of 1 sector (512 bytes), you could support a 128GB disk easily. A cluster size of 8 sectors (4 KiB, which today is the native sector size of some disks) would have given you enough for about 1TB (see the sketch after this comment).

      32GB was just an artificial limit. They could have made an argument for limiting the cluster size to 1, and thus limiting themselves to 128GB, but clearly they were trying to push people to the less-interoperable NTFS filesystem.

      Sad thing is, we still don't really have a better option if you want to format a USB flash drive so that it can be plugged in to a system running Linux, Windows, or macOS.
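
      As a quick sanity check of those numbers, a back-of-the-envelope sketch (assuming 512-byte sectors and the roughly 2^28 usable cluster numbers FAT32 provides):

      MAX_CLUSTERS = 2**28 - 1
      SECTOR = 512
      for sectors_per_cluster in (1, 8, 64):        # 512 B, 4 KiB, 32 KiB clusters
          cluster = sectors_per_cluster * SECTOR
          max_volume_gib = MAX_CLUSTERS * cluster / 2**30
          print(f"{cluster:>6}-byte clusters -> max volume ~{max_volume_gib:,.0f} GiB")
      # 512-byte clusters already allow ~128 GiB; 4 KiB gives ~1 TiB; 32 KiB gives ~8 TiB.
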
      • by edwdig ( 47888 ) on Monday January 04, 2021 @07:28PM (#60897160)

        It's not like it would have introduced any wasted space on a (say) 8GB disk. You choose the cluster size based on the number of (512 byte) sectors on the disk and the number of clusters you can count.

        The root of the issue is that picking too high a value would have wasted a lot of disk space.

        Small cluster sizes are great if you store a lot of small files. Less wasted space per file. Great for people working on code or text documents.

        Large cluster sizes are great if you store mostly large files - there's less space reserved for tracking the clusters. But if you go too big, the benefit decreases and the waste grows (see the sketch below this comment).

        They made it an option so you could pick the choice that made the most sense for your use case. They limited the options so that you wouldn't pick options that didn't really make sense based on hardware that would be available within the next few years. That all made sense. The problem came in when the code far, far outlived its expected lifespan.
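
        A minimal sketch of that tradeoff, for a made-up mix of files (every size and count here is invented purely for illustration):

        import math

        files = [2_000] * 50_000 + [5_000_000] * 2_000    # many small files, a few big ones
        volume = 64 * 2**30                               # a hypothetical 64 GiB volume

        for cluster in (4096, 32768):                     # 4 KiB vs 32 KiB clusters
            slack = sum(math.ceil(size / cluster) * cluster - size for size in files)
            fat_size = (volume // cluster) * 4            # one 32-bit FAT entry per cluster
            print(f"{cluster // 1024:>2} KiB clusters: slack ~{slack / 2**20:.0f} MiB, "
                  f"FAT ~{fat_size / 2**20:.0f} MiB")

        Small clusters trade a bigger FAT for far less slack; big clusters do the opposite.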

        • The "Cluster Slack" problem from the summary (and to some extent edwdig's comment above) was a real issue with FAT16. Any volume over 32MB (not even 1 GB) runs into that issue under FAT16. It is probably the main reason FAT32 was extended from FAT16 in the first place.

          But with FAT32, you can have a much bigger volume before "Cluster Slack" (or "internal fragmentation" or edwdig's "wasted a lot of disk space") becomes an issue. Based on 28 bit cluster numbers as described in https://en.wikipedia.org/wiki/. [wikipedia.org]
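
          For the curious, the rough arithmetic behind that comparison (cluster counts rounded to powers of two for simplicity):

          for name, max_clusters in (("FAT16", 2**16), ("FAT32", 2**28)):
              for cluster in (512, 32 * 1024):            # smallest vs largest cluster size
                  max_vol_mib = max_clusters * cluster / 2**20
                  print(f"{name}, {cluster:>5}-byte clusters: max volume ~{max_vol_mib:,.0f} MiB")
          # FAT16 tops out around 32 MiB before clusters (and slack) must grow;
          # FAT32 reaches ~128 GiB even with the smallest clusters.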

      • Sad thing is, we still don't really have a better option if you want to format a USB flash drive so that it can be plugged in to a system running Linux, Windows, or macOS.

        exFAT.

        • by Megane ( 129182 )
          It was added in 10.6.8 or so. Definitely no longer a problem, but the first time someone handed me one to read, I was still on 10.6.7.
    • by TheNameOfNick ( 7286618 ) on Monday January 04, 2021 @06:47PM (#60897044)

      No, it was a bad decision. The filesystem doesn't have that low cluster size limit and the user interface should have reflected that. If the decision had had an impact on smaller storage devices, it would have been a different choice. But you would not have been required to format small devices with a big cluster size. You would only use big clusters on bigger disks or cards. Is a big cluster size really a problem if you have much bigger storage devices? No, it is not. He needlessly imposed the limits of smaller devices onto bigger devices.

    • At the time, wasting 32k was a lot more important than a 32GB filesystem size.

      Except, the cluster size is settable. One could point out that wasting 32k isn't a big deal once you have 32gig, so maybe just let the cluster size grow more then. If you let it go higher when needed, then it's all a matter of not letting people choose inappropriately high numbers or impossibly low ones.

      • 32kb is a big deal.
        I have perhaps a few hundred thousand emails. Most of them are a few kB big.
        Same for a big Java project. I hardly have any Java sources approaching 20kb, let alone 32kb, not to mention their corresponding *.class files.

        • It's definitely related to use case. For example, a 256GB SD card in a video camera would be just fine. Most of the files would be several MB if not a couple GB at a minimum.

        • I have perhaps a few hundred thousand emails. Most of them are a few kB big.

          This is one of several reasons why email clients don't store each email as a file.

          Same for a big Java project. I hardly have any Java sources approaching 20kb, let alone 32kb, not to mention their corresponding *.class files.

          Okay, and there are what, maybe 10,000 of them in a "big" project? 50,000? Oh no, a whole 100 mb wasted!

          32kb is a big deal.

          Not when you have a 1 TB drive it's not. That's the point - the 32kb is settable

  • by The MAZZTer ( 911996 ) <megazztNO@SPAMgmail.com> on Monday January 04, 2021 @06:10PM (#60896908) Homepage

    For devices using SD cards which don't pay exFAT licensing fees or just don't want to implement it, you're probably stuck with formatting FAT32. So it's important to be able to completely format a large SD card with FAT32.

    A particular device with this issue is all (AFAIK) iterations of the Nintendo 3DS and 2DS. It claims to only support SD cards up to 32GB... but that is due to the Windows restriction! It will happily accept and use larger cards formatted as FAT32.

    I expect this won't be an issue with new devices going forward (and even a little backwards; the 3DS is pretty old now and the Switch does not have this limitation), as large cards become more common and devices will thus be expected to support them out of the box.

    • by tlhIngan ( 30335 )

      For devices using SD cards which don't pay exFAT licensing fees or just don't want to implement it, you're probably stuck with formatting FAT32. So it's important to be able to completely format a large SD card with FAT32.

      A particular device with this issue is all (AFAIK) iterations of the Nintendo 3DS and 2DS. It claims to only support SD cards up to 32GB... but that is due to the Windows restriction! It will happily accept and use larger cards formatted as FAT32.

      I expect this won't be an issue with new de

      • by jrumney ( 197329 )

        Their "royalty free to Linux users" terms are defined very narrowly. Existing fuse based implementations do not qualify, only the kernel support in latest kernels.
        So a lot of embedded devices, which are either stuck on older kernel versions supported by their SoC vendor, or don't use Linux at all are not helped by this, and the problem will likely remain until the patents expire or are properly opened up for royalty free access to all without condition.

        • stuck on older kernel versions supported by their SoC vendor

          All the more reason to stop relying on Google's Android and use a real OS like Mobian with a mainline kernel.

          my Android is the only computer in the house that can't read my 1TB NTFS USB 3 drive.

          • by jrumney ( 197329 )

            They support 3 specific devices, two using AllWinner A64, and one using Freescale iMX8 (with many peripherals not working). The latter is why embedded developers are stuck using the kernel version supported by the upstream vendor.

      • by dryeo ( 100693 ) on Tuesday January 05, 2021 @12:14AM (#60897684)

        Though, just over a year ago Microsoft published the ExFAT specification and has made it royalty free to Linux users. Linux has native ExFAT support (GPL and all) as of kernel 5.4.

        I really don't understand this. Generally I can reuse any part of the Linux kernel and as long as I make the source available (and some documentation such as COPYING), I can re-purpose it. For example I'm currently testing an audio driver based on Alsa from 5.10.4 on OS/2. If I do the same with the ExFat code, it is a patent violation which sure seems to break the GPL v2, which in the preamble has,

        Finally, any free program is threatened constantly by software
        patents. We wish to avoid the danger that redistributors of a free
        program will individually obtain patent licenses, in effect making the
        program proprietary. To prevent this, we have made it clear that any
        patent must be licensed for everyone's free use or not licensed at all.

        and further down,

        7. If, as a consequence of a court judgment or allegation of patent
        infringement or for any other reason (not limited to patent issues),
        conditions are imposed on you (whether by court order, agreement or
        otherwise) that contradict the conditions of this License, they do not
        excuse you from the conditions of this License. If you cannot
        distribute so as to satisfy simultaneously your obligations under this
        License and any other pertinent obligations, then as a consequence you
        may not distribute the Program at all. For example, if a patent
        license would not permit royalty-free redistribution of the Program by
        all those who receive copies directly or indirectly through you, then
        the only way you could satisfy both it and this License would be to
        refrain entirely from distribution of the Program.

        If any portion of this section is held invalid or unenforceable under
        any particular circumstance, the balance of the section is intended to
        apply and the section as a whole is intended to apply in other
        circumstances.

        It is not the purpose of this section to induce you to infringe any
        patents or other property right claims or to contest validity of any
        such claims; this section has the sole purpose of protecting the
        integrity of the free software distribution system, which is
        implemented by public license practices. Many people have made
        generous contributions to the wide range of software distributed
        through that system in reliance on consistent application of that
        system; it is up to the author/donor to decide if he or she is willing
        to distribute software through any other system and a licensee cannot
        impose that choice.

        This section is intended to make thoroughly clear what is believed to
        be a consequence of the rest of this License.

            8. If the distribution and/or use of the Program is restricted in
        certain countries either by patents or by copyrighted interfaces, the
        original copyright holder who places the Program under this License
        may add an explicit geographical distribution limitation excluding
        those countries, so that distribution is permitted only in or among
        countries not thus excluded. In such case, this License incorporates
        the limitation as if written in the body of this License.

        Which seems to me to cause distributing Linux with ExFAT to not be GPL compatible.
        I'm obviously not a lawyer nor an IP expert.

    • For devices using SD cards which don't pay exFAT licensing fees or just don't want to implement it, you're probably stuck with formatting FAT32

      FAT32 has licensing fees. It's why Microsoft makes more money off Android devices than Google does.

  • It's better if you don't know how the sausage is made. Enjoy the taste, or don't; we don't really want to know how much of our software is made based on guessing.

  • I've always wondered why MS didn't just fix the GUI since that's the only thing limiting the file system size. With Linux I've regularly formatted 64 GB SD cards for use with Android devices before the advent of exFAT and it always worked fine. In fact Windows can read and write to them just fine. I'm pretty sure Mac can format larger Fat32 partitions also.

    Now that exFAT is hitting the Linux kernel, there's less and less reason to use FAT32 on larger partitions, so the issue is moot at this point.

    • I formatted an external 12tb RAID array with FAT32 once using gparted just to see if it would do it. It did :)
      I didn't try plugging it into a windows machine, though.

      I also regularly reformat USB sticks that are 128 and 256gb with FAT32, because it gets me around some equipment limitations (no exfat support, FAT32 support listed as 32gb because that's what windows lists the max size of a FAT32 partition as).

  • "Cluster slack"? (Score:2, Informative)

    by Entrope ( 68843 )

    "We call it 'Cluster Slack'," explained Plummer, "and it is the unavoidable waste of using FAT32 on large volumes."

    People who know their theoretical computer science call it internal fragmentation [wikipedia.org] and don't sound like they are reinventing wheels.

    • Re:"Cluster slack"? (Score:4, Informative)

      by belthize ( 990217 ) on Monday January 04, 2021 @07:45PM (#60897190)

      Yes and no. What Plummer referred to as 'Cluster Slack' is a specific form of internal fragmentation unique to FAT32's design. There's nothing wrong with coining a term to describe a very specific instance of a more general concept. It's pretty much the basis of communication.

  • by UnknownSoldier ( 67820 ) on Monday January 04, 2021 @07:10PM (#60897106)

    ... was thinking that it would be "only temporary".

    How many years have users had to suffer due to crap designs? i.e. CP/M and MS-DOS shitty 8.3 filenames, etc.

    /Oblg. Murphy's Law Computer Poster [imgur.com]

    Meskimen's Law: There is never time to do it right, but there is always time to do it over.

    • ... was thinking that it would be "only temporary".

      Some people will hang on to anything just because they can. One cannot blame Microsoft for them. Every OS was and is only temporary, because at the speed computers are improving, one cannot honestly future-proof every aspect of an OS and at the same time expect it to deliver adequate performance. We have always made a compromise in this regard and we will continue to do so.

      So if this is about 32GB as it is here or 4GB, 8-, 10-, 16-, 24-, 32-, 48- or 64-bit, the year 1999-2000 or 2038, the resolution of 640x

    • Nothing lasts longer than a temporary crutch.

    • How many years have users had to suffer due to crap designs? i.e. CP/M and MS-DOS shitty 8.3 filenames, etc.

      Zero years. It's not crap design to design something in a way that ends up outliving its useful life. It's crap use for users to continue to use said system long after they should retire it. All of these "crappy designs" are based on sound engineering decisions to ensure systems performed well within the limits of the designs of the day.

      There was nothing wrong with 8.3 filenames either back in the day when every file could be listed and printed on a dot matrix printer without actually changing the roll of paper.

      Meskimen's Law: There is never time to do it right, but there is always time to do it over.

      I act

      • by hawk ( 1151 )

        My favorite example of that is the Y2K "bug".

        I think it was fairly late 1999 when some economists published their results, having looked at the current value of what it would have taken to avoid the problem in the first place.

        It came to about three times as much as the "repair" costs.

        On a 72 column card at a time when it was more than a buck a month to rent a byte of main memory, saving two bytes made a lot of sense.

      • > There was nothing wrong with 8.3 filenames either back in the day

        BULLSHIT.

        My Apple ][+ computer had filenames that were 32 characters WITH spaces in them in 1980. And in 1983 it supported sub-directories -- albeit with filenames chopped down to 15 characters, but then we got File Types meta-data.

        Filenames exist SOLELY for the USER.

        The file systems of CP/M and MS-DOS (1981) were designed by idiots who didn't have a fucking clue WHY filenames existed in the first place. Let me repeat that for y

  • I only became aware of this limit when MS introduced exFAT, and assumed it was a new artificial limit designed to push adoption of the new patented filesystem as the VFAT patents were about to expire.

    Certainly it was possible to format hard drives with FAT32 up to at least the 2GB limit for signed 32 bit ints in older versions of Windows, though USB drives and SD cards were not available in such large capacities at the time.

    • by jrumney ( 197329 )

      Forget the bit about a 2GB limit; clearly the limit is more than that. I was thinking of TB, but the 32-bit limit is in the GB range, and used to apply to usable RAM.

  • No one will ever need a 1 Petabyte hard drive #hmm
    • Use, yes.

      Need? I don't think 1TB is even strictly /necessary/. Basically only movies and game graphics use that much.

      • What about multi-user use cases?

        Whatever you define as 'necessary' for a user ends up getting multiplied by anywhere between 10s and tens of thousands, plus versioning and backups, for whoever is running the fileserver.
  • by Yaztromo ( 655250 ) on Tuesday January 05, 2021 @12:17AM (#60897692) Homepage Journal

    The real mistake was in just trying to extend FAT and its notion of clusters in the first place. FAT32 was already going to be incompatible with older FAT12/FAT16 based devices anyway, so why even bother to keep its structure? Microsoft could have designed a better filesystem that didn't rely on the already outmoded notion of "clusters" for allocation -- Microsoft's own HPFS386 from the early 1990s proved that.

    UI was the least of the problems with this project -- they needed to ditch FAT altogether back in the 90s. It really shouldn't continue to exist today with modern compute devices, but Microsoft took the easy way out, half-assed things, and this is the result.

    Yaz

    • You mean IBM's HPFS, as used in OS/2, the sane NT that, of course, died?

    • HPFS and NTFS (and most other modern high performance filesystems) require considerably more code and runtime data. They weren't practical for DOS, with its tight memory constraints. FAT32 filled the need for Win95 and Win98 to have larger disks while still being stacked on top of DOS.

    • Microsoft could have designed a better filesystem

      They did. NTFS predated FAT32 as well.

      FAT32 was already going to be incompatible with older FAT12/FAT16 based devices anyway, so why even bother to keep its structure?

      The "device" in question is a computer. Quite a complex beast with a lot of customisation options. While backwards compatibility was directly broken, i.e. you couldn't simply run FAT32 without additional drivers, the idea behind FAT32 was precisely.... backwards compatibility, except to hardware and OSes. FAT32 drivers could run in x86 real-mode which meant you could get drivers to run on DOS, and it did run on DOS and without using any significant additional memory (an

  • At least right after phasing out support for older Windows/DOS versions.

    By modern standards, even back then, FAT was a disgrace.
    It should have died with floppies.

  • That is what makes me the angriest about it - and possibly more than it should. It's just another dev thinking they know better than the user - and disabling everything else. IMHO that dialog should have enabled everything that's technically possible. Not what "makes sense" because that's subjective. Of course, there are certain combinations that have lots of disadvantages for most use cases. But then just spit out a warning and/or let the user type "I know what I'm doing" or something similar.
  • by aRTeeNLCH ( 6256058 ) on Tuesday January 05, 2021 @09:20AM (#60898594)
    He should have defined a maximum waste percentage, and have the system divide up the volume accordingly, with block sizes doubling after a set size of the volume. A 32GB volume could then have 4k blocks all over, while a 320TB volume would have, for example, 16GB of 4k blocks, 32GB of 8k, 64GB of 16k, 128GB of 32k, ... several TB of 64M blocks (see the sketch below). Naturally, the OS must then manage that properly, but with the advent of such large systems the OS capabilities and CPU speed / cycle availability should be in line.
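
    A rough sketch of that idea (all sizes and the zone/block parameters are invented for illustration; this is not how any real filesystem allocates):

    def zones(volume_bytes, first_zone=16 * 2**30, base_block=4096, max_block=64 * 2**20):
        # Carve the volume into zones; each zone covers twice the space of the
        # previous one and uses twice the block size, capped at max_block.
        plan, offset, zone_size, block = [], 0, first_zone, base_block
        while offset < volume_bytes:
            size = min(zone_size, volume_bytes - offset)
            plan.append((offset, size, block))
            offset += size
            zone_size *= 2
            block = min(block * 2, max_block)
        return plan

    for offset, size, block in zones(320 * 2**40):        # a 320 TiB volume, as above
        print(f"zone at {offset / 2**40:7.3f} TiB: "
              f"{size / 2**30:9.0f} GiB of {block // 1024} KiB blocks")
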
  • Personally I found the limit of 4GB for an individual file to be more problematic than the limit on volume size. A common problem was for Microsoft Outlook to break when a PST file needed to exceed this file size limit.
  • When FAT32 was designed, nobody thought that people would routinely be storing files measured in gigabytes; it was before movie files were a significant thing. The slack from a larger cluster size doesn't matter much when files get that big. As nuckfuts points out, the 4GB limit on the size of a file also became an issue. Microsoft then exacerbated the problem by requiring royalty payments for the use of its successor, exFAT, slowing down its adoption; that's why Linux distros did not officially support tha
