The Thinking Behind the 32GB Windows Format Limit On FAT32 (theregister.com) 124
The reason the Windows UI has a 32GB limit on the formatting of FAT32 volumes is that retired Microsoft engineer Dave Plummer "said so." The confession comes "in the latest of a series of anecdotes hosted on his YouTube channel Dave's Garage," reports The Register. From the report: In the closing years of the last century, Plummer was involved in porting the Windows 95 shell to Windows NT. Part of that was a redo of Windows Format ("it had to be a replacement and complete rewrite since the Win95 system was so markedly different") and, as well as the grungy lower-level bits going down to the API, he also knocked together the classic, stacked Format dialog over the course of an hour of UI creativity. As he admired his design genius, he pondered what cluster sizes to offer the potential army of future Windows NT 4.0 users. The options would define the maximum size of the volume; FAT32 has a set maximum number of clusters in a volume. Making those clusters huge would make for an equally huge volume, but at a horrifying cost in terms of wasted space: select a 32-kilobyte cluster size and even the few bytes needed by a "Hello World" file would snaffle the full 32k.
"We call it 'Cluster Slack'," explained Plummer, "and it is the unavoidable waste of using FAT32 on large volumes." "How large is too large? At what point do you say, 'No, it's too inefficient, it would be folly to let you do that'? That is the decision I was faced with." At the time, the largest memory card Plummer could lay his hands on for testing had an impossibly large 16-megabyte capacity. "Perhaps I multiplied its size by a thousand," he said, "and then doubled it again for good measure, and figured that would more than suffice for the lifetime of NT 4.0. I picked the number 32G as the limit and went on with my day."
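The "Cluster Slack" Plummer describes is simple arithmetic: a file always occupies a whole number of clusters, so the tail of its last cluster is wasted. A quick Python sketch (illustrative only; `allocated_bytes` is a hypothetical helper, not Microsoft's code):

```python
def allocated_bytes(file_size: int, cluster_size: int) -> int:
    """Bytes actually consumed on disk: the file rounded up to whole clusters."""
    if file_size == 0:
        return 0
    clusters = -(-file_size // cluster_size)  # ceiling division
    return clusters * cluster_size

hello = len(b"Hello World")              # 11 bytes of actual data
used = allocated_bytes(hello, 32 * 1024)  # with a 32 KiB cluster size
print(used)          # the 11-byte file occupies 32768 bytes on disk
print(used - hello)  # 32757 bytes of "cluster slack"
```

The bigger the cluster, the worse the slack per small file, which is why the dialog had to cap the offered sizes somewhere.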
While Microsoft's former leader may have struggled to put clear water between himself and the infamous "640K" quote of decades past, Plummer was clear that his decision process was aimed at NT 4.0 and would just be a temporary thing until the UI was revised. "That, however, is a fatal mistake on my part that no one should be excused for making. With the perfect being the enemy of the good, 'good enough' has persisted for 25 years and no one seems to have made any substantial changes to Format since then..." ... However, as Plummer put it: "At the end of the day, it was a simple lack of foresight combined with the age-old problem of the temporary solution becoming de-facto permanent."
Didn't someone say something like (Score:2)
Re: (Score:3)
The only numbers that ever make sense in computing are 0, 1, 2, many, much, and more. And, of course, their negative cousins.
If you think you need to set a hard fixed number other than those, you should reexamine your design closely to see if that's really the best. If it is, great! You've found the counterexample that proves the rule of thumb.
Re: (Score:2)
The only numbers that matter are zero, one and infinity.
Re: (Score:2)
Zero is a thing that doesn't exist. It is only a "number" because it is useful to include it.
The same is true for infinity.
If your only number is one, you have no numbers.
The numbers that matter are Planck's constant and pi, but then you need an (arbitrary) artificial number system so that you can scale those, record intermediate values, and account for "frequency." You sure as fuck aren't going to do any of that with 0, 1, and infinity.
Re: (Score:2)
>If your only number is one, you have no numbers.
Years ago, I took graduate measure theory.
Starting with the idea of nothing, and then adding the set containing nothing, we built numbers and then math up through at least calculus before the semester ran out.
Re: (Score:2)
Starting with the idea of nothing, and then adding the set containing nothing, we built numbers and then math up through at least calculus before the semester ran out.
You waved your hands, and by the end you were saying "calculus."
If you have nothing, and you add the set containing nothing, you still have nothing, you don't even have a set containing nothing. See also: https://en.wikipedia.org/wiki/... [wikipedia.org]
In software we solve this by merely admitting that sets are useful constructs we create. Done. Now we can use the tool without getting confused.
Platonic ideals sometimes remain a useful construct, but the physical world does not actually function in any way like that.
Pick u
Re: (Score:2)
Re: (Score:2)
Your brain is also not a number, do you feel triggered yet?
Re: Didn't someone say something like (Score:2)
No, only 0 and 1. Infinity is just 1/0. And negative numbers are just all numbers bigger than half of infinity for your data type. Everyone knows that! :)
0, 1, as many as you want (Score:2)
For programming, I say the useful numbers are:
0
1
As many as you want
It's either not allowed (you can have zero), items must be paired (X can have one Y), or it's a list.
As a random example, in a configuration for some software I wrote you can define email addresses to be notified of alerts. That's a list - notify as many people as you want. On the other hand, each user has exactly one password.
SQL calls those one-to-one and one-to-many relationships.
Re: (Score:2)
Pedantic -- wouldn't your example be a many-to-many?
Presumably there is more than one alert (or class of alerts), and each type of alert might have a different distribution list?
Re: (Score:2)
> Presumably there is more than one alert (or class of alerts), and each type of alert might have a different distribution list?
In this particular example, no. But if software DID allow for users to create different groups of alerts, I'd say they should be able to create "as many as you want".
The folks who built Azure like to allow up to four, of whatever.
I guess in their database each field is duplicated four times - user1, user2, user3, user4. If I were designing it, that would just be "users
Re: (Score:2)
Re: (Score:2)
I am using ternary computer, you insensitive clod.
Re: (Score:2)
For programming, I say the useful numbers are:
0
1
As many as you want
As long as you restrict yourself to a 1-bit computer, this is even true!
Oh kid. (Score:2)
There is also a very special third number called "null pointer exception" or "bottom" or "unknown" (in logic and science). :)
No, it is not the same thing as 0/false. If you use it like that, you're gonna have a bad time.
Re: (Score:2)
Or simply null, and Chris Date wrote an interesting book about that (non)value.
But is that a number? It's *represented* by a bit string, just as your name is represented by a bit string. But what is null / 2?
Null - 1? Is null actually a number, or is it a flag?
Re: (Score:2)
VARINT (Score:2)
When you've got an on-disk format (or API headers etc) for addressing things, you have a certain number of bits you can use for addresses. If you pick a bit count that's too large, you're wasting a lot of space on overhead. If you pick a count that's too small, it limits your address space.
Not entirely true, either.
There are also varints and similar other schemes.
e.g.: Google protocol buffers' 'keep reading bytes and using their lower 7 bits, for as long as the 8th bit is set' scheme allows encoding an arbitrarily large number, while still using fewer bytes for smaller numbers.
e.g.: Unicode's UTF-8 also has a variable length encoding (the number of upper set bits before the first '0' bit indicates which position it occupies in the final number: 0nnn nnnnb for the first 7 bits, then 10nn nnnnb for bits 8
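The base-128 varint scheme described above fits in a few lines of Python. This is a sketch of the encoding idea, not the actual protocol buffers library code:

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative int: 7 bits per byte, MSB set means more bytes follow."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set: more to come
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    n = 0
    for i, byte in enumerate(data):
        n |= (byte & 0x7F) << (7 * i)  # little-endian groups of 7 bits
        if not byte & 0x80:
            return n
    raise ValueError("truncated varint")

print(encode_varint(1).hex())    # one byte for small numbers
print(encode_varint(300).hex())  # grows only as the value needs
```

Small values cost one byte, and the format has no fixed upper bound, which is exactly the property fixed-width address fields lack.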
Re: (Score:2)
Re: (Score:2)
I know reading TFA has always been passe here, but seriously...
Re: (Score:2)
You are probably thinking of 640K from Bill Gates? - but yeah
https://quoteinvestigator.com/... [quoteinvestigator.com]
Re: (Score:2)
not quite seeing the issue (Score:5, Informative)
FAT32 itself can support 16TB volumes with 4K sectors and 16K clusters, and can have files of up to 256GB with fat+.
So the limitations of GUI seem like something that can be left behind with newer tools and version of OS.
Re: (Score:2)
Re: (Score:2)
For formatting large USB drives using Windows, the clear winner is actually Windows ME. It supports creating FAT32 partitions over 32GB, ships with USB mass storage drivers (these are absent on Windows 98) and has the fixed version of fdisk.
Of course, the real answer is to use Linux.
Re: not quite seeing the issue (Score:2)
And not FAT at all, but to kill it with fire.
Re: (Score:2)
For years, a FAT partition was the only way to safely interchange data while dual booting Linux and FreeBSD.
While both *claimed* to support the other file system, after running a couple of days, they would do *serious* damage to the other's file system.
Eventually, I was able to get completely away from Linux . . .
hawk
Re:not quite seeing the issue (Score:4, Interesting)
From what I was able to determine fat+ isn't really all that compatible with most use cases. It looks like "FAT32+ and FAT16+ is limited to some versions of DR-DOS and not available in mainstream operating systems" https://en.wikipedia.org/wiki/... [wikipedia.org]
So although fat+ can have files up to 256GB (minus 1 byte), formatting a drive with fat+ would limit its usefulness.
Re: (Score:2)
Yeah, nobody uses fat+, it is exfat for the big or new stuff.
It is annoying, I have to install a kernel module before I can mount my camera media.
But don't believe that wiki about the linux support; it was already available, though not standard, before whatever MS says they did.
Re:not quite seeing the issue (Score:4, Informative)
It is an artificial limitation used to promote the switch to NTFS.
To format a FAT32 filesystem larger than 32GB just use Linux. GParted is a nice GUI tool if you need one.
Link to live bootable version: https://gparted.org/livecd.php [gparted.org]
Re: (Score:2)
NTFS isn't a good option for removable flash memory though, especially flash drives without RAM cache and advanced wear levelling/TRIM support (i.e. 99.99% of them).
exFAT is okay but Microsoft only made it free-ish a few years ago. There isn't really anything else that is universally readable. FAT32 doesn't have journaling and isn't very robust, not ideal for removable drives.
Re: (Score:2)
Will Windows ITSELF actually ALLOW you to read/write files to such a volume, though?
I learned the hard way a few years ago (around the time Linux gained the ability to robustly write to NTFS volumes without jumping through hoops or doing anything special besides using a NTFS-enabled kernel and mounting the volume) that there are a lot of things NTFS would allow you to do... and Linux would, in fact, do... that would make Windows pout, sulk, or worse... characters in filenames and limits on the number of cha
Re: not quite seeing the issue (Score:3)
Unless you did something special, you will also lose all the extended attributes that way. Like rights and other metadata.
Re: (Score:2)
Yes, you can write to those big FAT32 filesystems in windows just fine, can even create them from command line in windows, just the GUI can't handle them.
Microsoft gets to set what the allowed NTFS filenames are, if you create ones from Linux or other system in an NTFS filesystem that windows doesn't like you're violating windows standards.
Re: (Score:2)
NTFS allows you to create two filenames where the only difference is the case of the filename.
Some malware uses this as a method of hiding, if you browse using the standard windows tools you will only see one (mundane) file and not the malware.
He made the right decision (Score:5, Insightful)
At the time, wasting 32k was a lot more important than a 32GB filesystem size.
Re: (Score:2)
He could have made the option configurable as a read-only param in the boot sector, but that would have added extra complexity for minimal value. Extra complexity means all our camera SD cards would be written with buggy drivers.
Re: (Score:3)
no need, Fat32 itself can support volumes up to 16TB.
Windows imposed an artificial limit that the filesystem itself doesn't have.
Re: (Score:2)
Oh, really? Why does Windows impose that artificial limit? What exactly is being limited?
Re: (Score:2)
Sounds like it was just the options provided in the GUI.
Re: (Score:2)
Re: (Score:3)
Oh, really? Why does Windows impose that artificial limit? What exactly is being limited?
If only there was a linkable article that could explain that to you...
Re: (Score:2)
I haven't tried it but I would assume using the format command in cmd/PowerShell would allow going over the 32GB limit. This is just a limit in the GUI, not a strict Windows limit. There are 3rd-party GUI tools that can format over the 32GB limit, so it shouldn't be a Windows limit, just the Format GUI limit.
Re: (Score:2)
] INIT HELLO
command.
worse is better (Score:2)
in a lot of systems it seems like having a clear decision made on something is more important than what the actual decision is.
Re:He made the right decision (Score:4, Informative)
FAT32 supports 2^28 - 1 clusters, so even using the minimum cluster size of 1 sector (512 bytes), you could support a 128GB disk easily. A cluster size of 8 sectors (4 KiB, which today is the native sector size of some disks) would have given you enough for about 1TB.
32GB was just an artificial limit. They could have made an argument for limiting the cluster size to 1 sector, and thus limiting themselves to 128GB, but clearly they were trying to push people to the less-interoperable NTFS filesystem.
Sad thing is, we still don't really have a better option if you want to format a USB flash drive so that it can be plugged in to a system running Linux, Windows, or macOS.
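The 2^28-cluster arithmetic is easy to check with a back-of-envelope Python loop (assuming traditional 512-byte sectors):

```python
MAX_CLUSTERS = 2**28 - 1   # FAT32 cluster numbers are 28 bits wide
SECTOR = 512               # traditional sector size in bytes

# Maximum volume size for a few cluster sizes (sectors per cluster).
for sectors_per_cluster in (1, 8, 64):
    cluster_bytes = sectors_per_cluster * SECTOR
    max_volume = MAX_CLUSTERS * cluster_bytes
    print(f"{cluster_bytes:6d}-byte clusters -> ~{max_volume / 2**40:.2f} TiB max volume")
```

One sector per cluster already gives roughly 128 GiB, and modest cluster sizes reach multiple TiB, so the 32GB cap was well below what the on-disk format allows.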
Re:He made the right decision (Score:5, Informative)
The root of the issue is that picking too high a value would have wasted a lot of disk space.
Small cluster sizes are great if you store a lot of small files. Less wasted space per file. Great for people working on code or text documents.
Large cluster sizes are great if you store mostly large files - there's less space reserved for tracking the clusters. But if you go too big, the benefit decreases and the waste grows.
They made it an option so you could pick the choice that made the most sense for your use case. They limited the options so that you wouldn't pick options that didn't really make sense based on hardware that would be available within the next few years. That all made sense. The problem came in when the code far, far outlived its expected lifespan.
Re: (Score:2)
The "Cluster Slack" problem from the summary (and to some extent edwdig's comment above) was a real issue with FAT16. Any volume over 32MB (not even 1 GB) runs into that issue under FAT16. It is probably the main reason FAT32 was extended from FAT16 in the first place.
But with FAT32, you can have a much bigger volume before "Cluster Slack" (or "internal fragmentation" or edwdig's "wasted a lot of disk space") becomes an issue. Based on 28 bit cluster numbers as described in https://en.wikipedia.org/wiki/. [wikipedia.org]
Re: (Score:2)
Sad thing is, we still don't really have a better option if you want to format a USB flash drive so that it can be plugged in to a system running Linux, Windows, or macOS.
exFAT.
Re: (Score:2)
Re:He made the right decision (Score:5, Insightful)
No, it was a bad decision. The filesystem doesn't have that low a size limit, and the user interface should have reflected that. If the decision had had an impact on smaller storage devices, it would have been a different choice. But you would not have been required to format small devices with a big cluster size. You would only use big clusters on bigger disks or cards. Is a big cluster size really a problem if you have much bigger storage devices? No, it is not. He needlessly imposed the limits of smaller devices onto bigger devices.
Re: (Score:2)
Except, the cluster size is settable. One could point out that wasting 32k isn't a big deal once you have 32gig, so maybe just let the cluster size grow more then. If you let it go higher when needed, then it's all a matter of not letting people choose inappropriately high numbers or impossibly low ones.
Re: (Score:2)
32kb is a big deal.
I have perhaps a few hundred thousand emails. Most of them are a few kB big.
Same for a big Java project. I hardly have any Java sources approaching 20kb, let alone 32kb, not to mention their corresponding *.class files.
Re: (Score:2)
It's definitely related to use case. For example, a 256GB SD card in a video camera would be just fine. Most of the files would be several MB if not a couple GB at a minimum.
Re: (Score:2)
This is one of several reasons why email clients don't store each email as a file.
Okay, and there are what, maybe 10,000 of them in a "big" project? 50,000? Oh no, a whole 100 MB wasted!
Not when you have a 1 TB drive it's not. That's the point - the 32kb is settable
Large FAT32 is important (Score:5, Informative)
For devices using SD cards which don't pay exFAT licensing fees or just don't want to implement it, you're probably stuck with formatting FAT32. So it's important to be able to completely format a large SD card with FAT32.
A particular device with this issue is all (AFAIK) iterations of the Nintendo 3DS and 2DS. It claims to only support SD cards up to 32GB... but that is due to the Windows restriction! It will happily accept and use larger cards formatted as FAT32.
I expect this won't be an issue with new devices going forward (and even a little backwards; 3DS is pretty old now and the Switch does not have this limitation), as large cards become more common thus devices will be expected to support them out of the box.
Re: (Score:2)
Re: (Score:3)
Their "royalty free to Linux users" terms are defined very narrowly. Existing fuse based implementations do not qualify, only the kernel support in latest kernels.
So a lot of embedded devices, which are either stuck on older kernel versions supported by their SoC vendor, or don't use Linux at all are not helped by this, and the problem will likely remain until the patents expire or are properly opened up for royalty free access to all without condition.
Re: (Score:3)
All the more reason to stop relying on Google's Android and use a real OS like Mobian with a mainline kernel.
my Android is the only computer in the house that can't read my 1TB NTFS USB 3 drive.
Re: (Score:2)
They support 3 specific devices, two using AllWinner A64, and one using Freescale iMX8 (with many peripherals not working). The latter is why embedded developers are stuck using the kernel version supported by the upstream vendor.
Re:Large FAT32 is important (Score:4, Interesting)
Though, just over a year ago Microsoft published the ExFAT specification and has made it royalty free to Linux users. Linux has native ExFAT support (GPL and all) as of kernel 5.4.
I really don't understand this. Generally I can reuse any part of the Linux kernel and as long as I make the source available (and some documentation such as COPYING), I can re-purpose it. For example I'm currently testing an audio driver based on Alsa from 5.10.4 on OS/2. If I do the same with the ExFat code, it is a patent violation which sure seems to break the GPL v2, which in the preamble has,
and further down,
Which seems to me to cause distributing Linux with ExFAT to not be GPL compatible.
I'm obviously not a lawyer nor an IP expert.
Re: (Score:2)
For devices using SD cards which don't pay exFAT licensing fees or just don't want to implement it, you're probably stuck with formatting FAT32
FAT32 has licensing fees. It's why Microsoft makes more money off Android devices than Google does.
Re: Large FAT32 is important (Score:2)
Your comments on drugs are problematic. The main reason we need new drugs is not so the drug companies can make more money (although that certainly is part of their motivation for doing it) but because older drugs aren't panaceas. If we could invent a drug which worked for everyone and had no side effects and didn't interact with anything else, then we'd be set and never need another drug for that again. But we can't. Many drugs get created that work very well but not for everyone who needs that kind of drug
Re: Large FAT32 is important (Score:2)
True enough. The whole "pharmaceutical companies make new drugs only for profit" thing is just one of my pet peeves. I probably shouldn't assume that anyone who says it is a Republican, but I always do. Sorry about that. Lol
sausage (Score:2)
It's better if you don't know how the sausage is made. Enjoy the taste, or don't; we don't really want to know how much of our software is made based on guessing.
Linux and Mac can both format at least 64 GB Fat32 (Score:2)
I've always wondered why MS didn't just fix the GUI since that's the only thing limiting the file system size. With Linux I've regularly formatted 64 GB SD cards for use with Android devices before the advent of exFAT and it always worked fine. In fact Windows can read and write to them just fine. I'm pretty sure Mac can format larger Fat32 partitions also.
Now that exFAT is hitting the Linux kernel, there's less and less reason to use FAT32 on larger partitions, so the issue is moot at this point.
Re: (Score:2)
I formatted an external 12TB RAID array with FAT32 once using gparted just to see if it would do it. It did :)
I didn't try plugging it into a windows machine, though.
I also regularly reformat USB sticks that are 128GB and 256GB with FAT32, because it gets me around some equipment limitations (no exFAT support, FAT32 support listed as 32GB because that's what Windows lists as the max size of a FAT32 partition).
"Cluster slack"? (Score:2, Informative)
People who know their theoretical computer science call it internal fragmentation [wikipedia.org] and don't sound like they are reinventing wheels.
Re:"Cluster slack"? (Score:4, Informative)
Yes and no. What Plummer referred to as 'Cluster Slack' is a specific form of internal fragmentation unique to FAT32's design. There's nothing wrong with coining a term to describe a very specific instance of a more general concept. It's pretty much the basis of communication.
Re: (Score:3)
You could at least read the link I posted before that defines "internal fragmentation". Remove the pole from your eye before you claim your neighbor has a stye in his.
Re: (Score:2)
In the olden days when Dinosaurs ruled the earth we used to have "cylinder slack". There is still "cluster slack" in every filesystem storage format to this very day, nay hour, nay second. And this one too. And the next one, and the one after that.
Btrfs uses tail packing to mitigate that problem. I doubt it's the only one.
The first mistake ... (Score:4, Interesting)
... was thinking that it would be "only temporary".
How many years have users had to suffer due to crap designs? e.g. CP/M and MS-DOS's shitty 8.3 filenames, etc.
Meskimen's Law: There is never time to do it right, but there is always time to do it over.
Re: (Score:3)
... was thinking that it would be "only temporary".
Some people will hang on to anything just because they can. One cannot blame Microsoft for them. Every OS was and is only temporary, because at the speed computers are improving one cannot honestly future-proof every aspect of an OS and at the same time expect it to deliver adequate performance. We have always made a compromise in this regard and we will continue to do so.
So if this is about 32GB as it is here or 4GB, 8-, 10-, 16-, 24-, 32-, 48- or 64-bit, the year 1999-2000 or 2038, the resolution of 640x
Re: The first mistake ... (Score:2)
Re: (Score:2)
Old engineering lesson: (Score:2)
Nothing lasts longer than a temporary crutch.
Re: (Score:2)
How many years have users had to suffer due to crap designs? e.g. CP/M and MS-DOS's shitty 8.3 filenames, etc.
Zero years. It's not crap design to design something in a way that outlives its useful life. It's crap use for users to continue to use said system long after they should have retired it. All of these "crappy designs" are based on sound engineering decisions to ensure systems performed well within the limits of the design of the day.
There was nothing wrong with 8.3 filenames either back in the day, when every file could be listed and printed on a dot matrix printer without actually changing the roll of paper.
Meskimen's Law: There is never time to do it right, but there is always time to do it over.
I act
Re: (Score:2)
My favorite example of that is the Y2K "bug".
I think it was fairly late 1999 when some economists published their results, having looked at the current value of what it would have taken to avoid the problem in the first place.
It came to about three times as much as the "repair" costs.
On a 72 column card at a time when it was more than a buck a month to rent a byte of main memory, saving two bytes made a lot of sense.
Re: (Score:2)
> There was nothing wrong with 8.3 filenames either back in the day
BULLSHIT.
My Apple ][+ computer had filenames that were 32 characters WITH spaces in them in 1980. And in 1983 it supported sub-directories -- albeit with filenames chopped down to 15 character filenames but then we got File Types meta-data.
Filenames exist SOLELY for the USER.
The file systems of CP/M and MS-DOS (1981) were designed by idiots who didn't have a fucking clue WHY filenames existed in the first place. Let me repeat that for y
Not related to exFAT? (Score:2)
I only became aware of this limit when MS introduced exFAT, and assumed it was a new artificial limit designed to push adoption of the new patented filesystem as the VFAT patents were about to expire.
Certainly it was possible to format hard drives with FAT32 up to at least the 2GB limit for signed 32 bit ints in older versions of Windows, though USB drives and SD cards were not available in such large capacities at the time.
Re: (Score:2)
Forget the bit about a 2GB limit; clearly the limit is more than that. I was thinking TB, but the 32-bit limit is in the GB range, and used to apply to usable RAM.
No one (Score:2)
Re: No one (Score:2)
Use, yes.
Need? I don't think 1TB is even strictly /necessary/. Basically only movies and game graphics use that much.
Re: (Score:2)
Whatever you define as 'necessary' for a user ends up getting multiplied by anywhere between 10s and tens of thousands, plus versioning and backups, for whoever is running the fileserver.
The real mistake... (Score:3)
The real mistake was in just trying to extend FAT and its notion of clusters in the first place. FAT32 was already going to be incompatible with older FAT12/FAT16 based devices anyway, so why even bother to keep its structure? Microsoft could have designed a better filesystem that didn't rely on the already outmoded notion of "clusters" for allocation -- Microsoft's own HPFS386 from the early 1990s proved that.
UI was the least of the problems with this project -- they needed to ditch FAT altogether back in the 90s. It really shouldn't continue to exist today with modern compute devices, but Microsoft took the easy way out, half-assed things, and this is the result.
Yaz
Re: The real mistake... (Score:2)
You mean IBM's HPFS, as used in OS/2, the sane NT that, of course, died?
Re: (Score:2)
HPFS and NTFS (and most other modern high performance filesystems) require considerably more code and runtime data. They weren't practical for DOS, with its tight memory constraints. FAT32 filled the need for Win95 and Win98 to have larger disks while still being stacked on top of DOS.
Re: (Score:2)
Microsoft could have designed a better filesystem
They did. NTFS predated FAT32 as well.
FAT32 was already going to be incompatible with older FAT12/FAT16 based devices anyway, so why even bother to keep its structure?
The "device" in question is a computer. Quite a complex beast with a lot of customisation options. While backwards compatibility was directly broken, i.e. you couldn't simply run FAT32 without additional drivers, the idea behind FAT32 was precisely.... backwards compatibility, except to hardware and OSes. FAT32 drivers could run in x86 real-mode which meant you could get drivers to run on DOS, and it did run on DOS and without using any significant additional memory (an
They should have killed FAT. (Score:2)
At least right after phasing out support for older Windows/DOS versions.
By modern standards, even back then, FAT was a disgrace.
It should have died with floppies.
Dev thinking they know what's best for the user (Score:2)
future proofing requires foreseeing exponential gr (Score:3)
4GB File Size Limit (Score:2)
Missed the growth of file sizes (Score:2)
Re: (Score:2)
I'll take Nadella over Ballmer and Bill. Microsoft seems a bit less willing to throw customers under the bus to sell new-ware all the time, and is a bit friendlier to OSS. Still jerks, but slightly less jerky.
Re: Microsoft needs FAR better management. (Score:2)
Microsoft has never had good management.
The current bloke is still following Gates' "World domination" plan, it's just that he's not as loud as Ballmer or Gates.
Re: (Score:2)
MS, Amazon, Google and Apple have mostly divided up the pie and are willing to let the others keep their unique domains. I honestly wouldn't be surprised if they had an actual verbal agreement.
The willingness of Amazon to let Microsoft grow Azure is kind of surprising. I mean some of it would happen anyway, but honestly, I'm surprised Amazon hasn't been pushing Amazon WorkSpaces with a high quality Linux desktop super hard. It's almost like they added it just to appear to have it without being aggressive abou
Re: (Score:2)
> What the fuck does a UI have to do with an on-disk storage format?
Uh... the UI is what the users employ to format the media? Sure, you bought the cheap Tesla because it is *really* capable of the same performance as the more expensive model, but if you have no UI to access that performance it might as well not exist.
It wasn't necessarily dumb not to provide a UI option which had no practical utility at the time. What was dumb was never revisiting that decision decades after the assumptions it was based on became obsolete.
Re: What a Flakey Nutbar (Score:2)
It's somehow sad to see people that think something as basic as formatting requires a GUI.
I bet you copy and paste with the mouse and right click context menu too. (Unix-style middle click is acceptable where applicable.)