Data Storage

HDD Manufacturers Moving To 4096-Byte Sectors 442

Luminous Coward writes "As previously discussed on Slashdot, according to AnandTech and The Tech Report, hard disk drive manufacturers are now ready to bump the size of the disk sector from 512 to 4096 bytes, in order to minimize storage lost to ECC and sync. This may not be a smooth transition, because some OSes do not align partitions on 4K boundaries."
  • by suso ( 153703 ) * on Monday December 28, 2009 @10:30AM (#30571456) Journal

    Why not just move it to 1000 byte sectors, then we could minimize the space lost to advertising.

    (Note to accuracy nazis, this is meant to be funny)

    • by drainbramage ( 588291 ) on Monday December 28, 2009 @11:30AM (#30572158) Homepage

      Mine goes to 11.

  • by 7o9 ( 608315 ) on Monday December 28, 2009 @10:33AM (#30571486)
    According to the Anandtech article, only the pretty much end-of-life Windows XP is out of luck. Linux, OS X and modern Windows versions all work ... Non news?
    • by gbjbaanb ( 229885 ) on Monday December 28, 2009 @10:39AM (#30571538)

      whoooooo. WinXP is end-of-life? You'd best tell that to all the millions of users (including big businesses) out there.

      What's that you say? Upgrade to Windows 7 and use its perfectly infallible XP mode?

      Ah, I understand now. Hi Bill, how's Steve getting on, still a bit sweaty and concerned he's not selling enough?

      • Re: (Score:2, Interesting)

        by iamhassi ( 659463 )
        "WinXP is end-of-life? You'd best tell that to all the millions of users (including big businesses) out there."

        Couldn't agree more. Hopefully I don't have to rehash how horrible Vista was [pcworld.com], and Windows 7 came out only a few months ago, so it's a bit early to proclaim XP dead when its hoped-for replacement just showed up.

        I think 4096-byte sectors are Very Bad News. I have no experience with these drives, but XP doesn't like them [anandtech.com], which is reason enough for me to avoid them. I hope hard drive manufacturers c
        • by iamhassi ( 659463 ) on Monday December 28, 2009 @11:03AM (#30571840) Journal
          ah this was what I was looking for: Drobo, XP Users: Beware of 4K “Advanced Format” Drives! [fosketts.net]

          The article states that not only will XP have problems, but so will many other devices that use a hard drive: media centers, USB enclosures, game consoles, and so on. USB drives will be the worst, though, since 4k drives formatted for XP won't work with Windows 7 and vice versa. Honestly, I think this is too soon. Put it off another 10 years; by then we'll have OSes that have supported 4k for 10+ years already, and all devices should be compatible.
          • by lorenlal ( 164133 ) on Monday December 28, 2009 @11:09AM (#30571928)

            Eventually, you have to put a line in the sand. If you push off the deadline, manufacturers will still take their time, and they'll be in the same place 9 years and 11 months from now.

            Example: IPv6.

          • by Anonymous Coward on Monday December 28, 2009 @03:11PM (#30575020)

            What a bunch of misinformed drivel. That article is missing a couple of things:

            First of all, the issue affects all Windows versions based on a 5.x kernel. That means Windows 2000, XP, Server 2003, and Windows Home Server.

            1) These drives are NOT strictly 4k-sector. The platters may be organized in 4k sectors, but the drive still talks to the OS in 512-byte sectors. And since we're discussing old Windows versions: NTFS has defaulted to 4k clusters since its introduction, so there is NO performance penalty when using NTFS on these drives. You shouldn't be using FAT32 anyway.

            2) The issue can be worked around by creating partitions with a tool that understands 4k sectors, or by re-aligning the partitions after creation/installation (a quick alignment check is sketched after this comment). If you only use a drive in those systems (i.e. no repartitioning), the drive will work as it should. Even if you create partitions that are unaligned, the drive will still work; you will only lose some performance.

            3) The one genuine problem raised in the linked article comes when you want to use these drives in closed-firmware devices. In this case you still have two options: either you use the WD-provided jumper setting, or you pre-create the partitions before you insert the drive.

            I fail to see what the fuss is all about.
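
            To make the alignment point in (2) concrete, here is a minimal Python sketch of the check. The start LBAs are made-up examples (63 is the classic XP-era starting sector, 2048 is what Vista/7 and most current tools use), not anyone's real partition table.

```python
# Check whether a partition's starting LBA (counted in the 512-byte logical
# sectors the drive presents) lands on a 4 KiB physical-sector boundary.
LOGICAL = 512                                # bytes per logical sector
PHYSICAL = 4096                              # bytes per physical sector
SECTORS_PER_PHYSICAL = PHYSICAL // LOGICAL   # 8 logical sectors per physical

def is_aligned(start_lba):
    """True if the partition's first logical sector starts a physical sector."""
    return start_lba % SECTORS_PER_PHYSICAL == 0

for start in (63, 2048):                     # illustrative start LBAs only
    state = "aligned" if is_aligned(start) else "misaligned"
    print(f"partition starting at LBA {start}: {state}")
```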

        • by kill-1 ( 36256 ) on Monday December 28, 2009 @11:04AM (#30571858)

          The new hard drives will have a compatibility mode. It will be slower, though, because it has to read-modify-write behind the scenes.

      • by alen ( 225700 ) on Monday December 28, 2009 @11:12AM (#30571960)

        MS has a clear support policy. Maybe you like Apple's 3 year support policy better than Microsoft's 10 year 7/3 policy?

      • by alen ( 225700 )

        And it's not like corporations will buy all the new 2TB drives to use as OS drives in their ancient XP workstations. They'll just buy a new PC with Windows 7 installed, or use a corporate Windows 7 image.

      • by Idiot with a gun ( 1081749 ) on Monday December 28, 2009 @01:32PM (#30573844)
        I've never understood this long-lived love for XP. The longer I work with it (I'm a support tech), the more I hate it. It genuinely has the feeling of an OS that was grown organically, without any forethought. Wireless control often ends up in the hands of a user-space program instead of the OS (wtf?), and updates are done through a god-awful ActiveX web page. Blech. The long-term (and even short-term) stability of XP these days is poor at best, and I have no clue why everyone claims to love it.

        On the other hand, most people I've met who make fun of Vista never used it. My dad was slamming it earlier: "Did you ever use it?" "...No." The vast majority of complaints about it stemmed from two problems:
        • The so called "power users" always complain about any change, regardless of whether or not it's good.
        • Underpowered machines were marked as "Vista Capable" when they were not.

        And to be honest, 7 is quite good. This is coming from a die-hard Linux user (who actually liked Gentoo).

    • This is news, but not because of the potential problems that could arise. It's interesting from a technological standpoint. Why and how would changing the sector size affect performance? What are the downsides, and why wasn't it done before? Why is 4k the chosen size, and not something even larger? Those questions are what make it news (for nerds). It doesn't have to break something to be newsworthy.
      • by AlecC ( 512609 ) <aleccawley@gmail.com> on Monday December 28, 2009 @11:03AM (#30571848)

        Why wasn't it done before? Sheer inertia. 512 bytes has been the HDD sector size since time immemorial. Some HDDs in the past could be re-sectored to different sizes, and sometimes were. I did it on one generation of disks to optimise storage for a particular reason, but it didn't work reliably on the next generation of disks, so I dropped it. Some disks had a sector of 1080 bits, I think to handle the 33rd bit on the IBM System/38.

        What is the advantage? Every sector has a preamble, a sync mark, a header, the payload data, ECC, and a postamble. These can amount to tens of bytes per sector, especially as you need stronger ECC for weaker signals. By having fewer sectors, you recover that overhead from most of them. This could easily add 10% to the capacity of a drive. And, as posted elsewhere, most OSes do 4K transfers most of the time.
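
        As a back-of-the-envelope sketch of that capacity argument: the per-sector overhead figure below is assumed purely for illustration (it is not from any drive's spec sheet), but it shows how fewer, larger sectors amortize the framing bytes.

```python
# Fraction of raw platter bytes left for user data, for an assumed ~65 bytes
# of per-sector framing (preamble, sync, header, ECC, postamble).
OVERHEAD = 65  # assumed, illustrative overhead in bytes per sector

def format_efficiency(payload, overhead=OVERHEAD):
    return payload / (payload + overhead)

for payload in (512, 4096):
    print(f"{payload}-byte sectors: {format_efficiency(payload):.1%} user data")
# With these numbers, 512-byte sectors lose roughly 11% of the surface to
# framing, while 4096-byte sectors lose under 2% -- the source of the ~10% gain.
```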

    • Really? Go back to your fvwm desktop and stop the rumor-mongering. Ok?

    • According to the Anandtech article, only the pretty much end-of-life Windows XP
      I wouldn't call XP pretty much end-of-life just yet. You can still purchase it with new systems, it's still supported until April 8, 2014 (that's after desktop support expires for the NEXT release of Ubuntu LTS), and I haven't seen much use of either Vista or Win7 in business/academia yet.

      This isn't that bad though: the logical sectors will still be 512 bytes, so it's just a matter of getting the partitions aligned right, and wd wi

    • I don't know what "pretty much end-of-life Windows XP" you speak of. I'm replying to this from Windows XP Media Center Edition. 10-20% of the computers on display at Best Buy last week were netbooks and nettops with Windows XP. Most HP workstations [hp.com] have "Windows XP Professional 32-bit (available through downgrade rights from Genuine Windows® 7 Professional 32-bit)" and "Windows XP Professional 64-bit (available through downgrade rights from Genuine Windows® 7 Professional 64-bit)" as options as

    • Who's gonna install old XP on one of these new HDDs? HIBT?

    • Actually no. (Score:5, Interesting)

      by Mashiki ( 184564 ) <mashiki@gm a i l .com> on Monday December 28, 2009 @12:23PM (#30572944) Homepage

      Most of the drive manufacturers are releasing tools to align the drives to 4k clusters so they can be used under XP. WDC already has theirs out here: WDC Adv Format [wdc.com], plus instructions on all of their new 1TB and larger drives on how to set them up properly. You do have to jumper them and then format them specially, but the drives work fine with 4k clusters. I put one in my work machine on Saturday; it works flawlessly.

      *I only used WDC because that's the brand I picked up recently. I do know other companies have similar tools and jumper settings on their newer drives as well.

  • by daha ( 1699052 ) on Monday December 28, 2009 @10:34AM (#30571494)

    There are certain models of the Western Digital Caviar Green drives that are already shipping with a 4K sector size, such as this one: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136490 [newegg.com]

    • by Pedrito ( 94783 )
      There are certain models of the Western Digital Caviar Green drives that are already shipping with a 4K sector size, such as this one: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136490 [newegg.com]

      Where do you get the 4K sector size from? From here: [wdc.com]

      User Sectors Per Drive: 1,953,525,169
      1.9 billion * 4K sectors ≈ 7.6 TB
      1.9 billion * 512-byte sectors ≈ 972 GB

      Or am I missing something?
    • From the WD website:
      http://www.wdc.com/en/products/products.asp?DriveID=763 [wdc.com]

      Capacity 1 TB
      User Sectors Per Drive 1,953,525,169

      That would be 1 TB / 1,953,525,169 ≈ 512 bytes per sector. I tried to verify with the spec sheet, but the model's PDF is password protected.

      • Re:Looks like 512 (Score:5, Informative)

        by butlerm ( 3112 ) on Monday December 28, 2009 @11:27AM (#30572124)

        Those are "logical" sectors, which can be different from the physical sector size. According to the Anandtech article [anandtech.com] the Western Digital hard drive model numbers that end with "EARS" use the larger, 4KB physical sector size, while presenting a 512 byte logical sector size to the operating system for compatibility reasons.

        Please note, of course, that the logical sector size is a drive interface level concept distinct from the filesystem cluster or block size. Filesystem block sizes have generally been larger than the logical or physical sector size for quite some time.
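
        One way to see the distinction for yourself: on a Linux system with a reasonably recent kernel, both values are exported through sysfs. A small sketch (the device name "sda" is just an example):

```python
# Read the logical and physical sector sizes the kernel reports for a disk.
# Assumes Linux with sysfs; a 512-byte-emulated Advanced Format drive shows
# logical=512 and physical=4096.
from pathlib import Path

def sector_sizes(dev="sda"):
    queue = Path("/sys/block") / dev / "queue"
    logical = int((queue / "logical_block_size").read_text())
    physical = int((queue / "physical_block_size").read_text())
    return logical, physical

logical, physical = sector_sizes("sda")
print(f"logical sector: {logical} bytes, physical sector: {physical} bytes")
```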

        • by chill ( 34294 )

          So, if you're using a compatible OS, will you be able to take advantage of the drive by having it present you with 4,096 byte logical sector sizes? Or is that all in the disk format?

    • There are certain models of the Western Digital Caviar Green drives that are already shipping with a 4K sector size

      I think you mean 4.096K sector size. It's a hard drive [wikipedia.org], after all.

  • "...This may not be a smooth transition, because some OSes do not align partitions on 4K boundaries..."

    In cases like these, it always helps to provide examples. Care to do so? Thanks.

  • Why does the sector size presented by the interface have to reflect anything about the hardware? Isn't this like the CHS/LBA conversion done under the hood? What about the ability to request a particular sector size, with the default being 512 bytes and the recommended value being the hardware size, for optimisation purposes? Memories of 512 versus 2048 in the CD booting of older versions of VMS...

    • by tepples ( 727027 ) <tepples AT gmail DOT com> on Monday December 28, 2009 @10:52AM (#30571708) Homepage Journal

      Why does the sector size presented by the interface have to reflect anything about the hardware?

      If the OS clusters aren't aligned to physical sectors, the hard drive's controller has to read-modify-write all the time.
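
      A toy model of that penalty, assuming the usual 8 logical (512-byte) sectors per 4 KiB physical sector; the offsets are illustrative (0 for an aligned partition, 63 for the classic XP starting LBA):

```python
# Count how many 4 KiB physical sectors a single 4 KiB cluster write touches.
# Touching more than one means the drive must read-modify-write.
PHYS = 8  # logical 512-byte sectors per 4 KiB physical sector

def physical_sectors_touched(start_lba, length=8):
    first = start_lba // PHYS
    last = (start_lba + length - 1) // PHYS
    return last - first + 1

for offset in (0, 63):  # aligned start vs. the classic XP partition offset
    touched = physical_sectors_touched(offset)
    note = " -> read-modify-write" if touched > 1 else ""
    print(f"4 KiB cluster at LBA {offset}: {touched} physical sector(s){note}")
```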

    • It doesn't, and indeed these WD drives will still have 512-byte logical sectors, so there will be 8 logical sectors to one physical sector.

      The problem is that if the partition is misaligned, the OS is likely to make a load of unaligned writes. Those unaligned writes will force the drive to do a read-modify-write (which, afaict, means waiting for a complete rotation in the middle of the operation).

      Add this to the fact that some systems (most notably XP) have a habit of aligning partitions on the boundaries of cy

  • by SharpFang ( 651121 ) on Monday December 28, 2009 @10:49AM (#30571666) Homepage Journal

    It doesn't sound like the 512 bytes per sector is tightly bound to the hardware. It's more like a low-level reformat plus a change of some #defines in the firmware to switch from one type to the other. Which would mean there could be, e.g., a jumper setting for sector size, allowing for backward compatibility.

    Also, the fact an OS doesn't enforce partition alignment doesn't mean it won't respect a disk formatted to aligned partitions. Just provide a 3rd party partitioning tool that aligns the partitions right, and install the OS on pre-made partitions. If your business depends on WinXP so much, your IT dept should be capable of doing it.

  • disable ECC? (Score:4, Interesting)

    by Anonymous Coward on Monday December 28, 2009 @10:54AM (#30571740)

    I heard some talks from the ZFS folks at Sun about how they were floating the idea to HD manufacturers of just disabling ECC on the drives. ZFS checksums every block, and in a RAID configuration it would be able to transparently correct any checksum errors. I think this may have also been the motivation behind bringing triple-redundant RAID to ZFS.

    The motivating idea was that this would reduce the overhead involved on ECC and gain extra space.

    Thoughts?

    • Re:disable ECC? (Score:4, Interesting)

      by MobyDisk ( 75490 ) on Monday December 28, 2009 @10:59AM (#30571796) Homepage

      That wouldn't work with existing file systems that assume the drive does this. That's like deciding to remove the checksums from TCP and IP because a few protocols provide their own checksums. Might work in specialized cases. Probably just adds risk though for no benefit.

      • Well, presumably the idea would be to add an ATA command which allows one to disable ECC on a drive on-the-fly. Or, at minimum, a hardware switch of some kind.

      • Re:disable ECC? (Score:4, Informative)

        by Junta ( 36770 ) on Monday December 28, 2009 @11:41AM (#30572302)

        That's like deciding to remove the checksums from TCP and IP because a few protocols provide their own checksums.

        Funny you should mention IP checksums: that's one feature that was removed from the IP layer in IPv6, precisely because the 'important' protocols (i.e. TCP) do it themselves anyway.

    • Re:disable ECC? (Score:5, Insightful)

      by Waffle Iron ( 339739 ) on Monday December 28, 2009 @11:31AM (#30572162)

      It doesn't seem like a great idea to me. There are a lot of different ECC algorithms and implementations. It seems better to let the hard drive manufacturer select one that closely matches the expected signal and noise characteristics of a particular drive than to rely on some generic algorithm in the filesystem.

    • Re:disable ECC? (Score:4, Interesting)

      by Jeff DeMaagd ( 2015 ) on Monday December 28, 2009 @11:31AM (#30572168) Homepage Journal

      I can see this working for drives made specifically for RAIDs. Lose ECC in a single-drive configuration and you're asking for trouble. At least for RAIDs, a controller would need to be aware of this and do the remapping itself. In the end, I don't know if it's worth doing at all. If some enterprising RAID controller company could prove it works better this way, then I can see it happening.

    • Re:disable ECC? (Score:4, Interesting)

      by TheLink ( 130905 ) on Monday December 28, 2009 @11:47AM (#30572412) Journal
      If they really did that, I'd say they were clueless. Such a feature would increase the odds of error.

      ZFS might checksum every block. But what happens when ZFS is not everywhere? Does the BIOS or its equivalent support ZFS checksumming for reading the boot sectors? Those sectors had better be 100% reliable, or you had better turn the feature off for boot drives. You would have to use ZFS everywhere and for everything. For example, if you ever try to image a 1TB disk without ECC, the odds of bit errors will be high. Even if ZFS can repair it, you'd only find out much later (too late?) and likely after another error-prone write.

      Such a feature would just be creating more opportunities for people to get things wrong.

      And for what benefit?

      > The motivating idea was that this would reduce the overhead involved on ECC and gain extra space.

      I think the people who'd want ZFS or RAID would rather have better reliability than the 10% or so extra space.

      Even if they don't know it at first ;).
    • Re:disable ECC? (Score:5, Informative)

      by butlerm ( 3112 ) on Monday December 28, 2009 @11:48AM (#30572430)

      That's insane. ECC at the hardware / firmware level corrects the vast majority of bit errors transparently in a manner that is invisible to the operating system. If you took out sector level ECC, the drives would be useless in anything other than a ZFS RAID configuration, and even then performance would drop in the presence of trivially ECC correctable errors, due to the re-reads and stripe reconstructions at the filesystem level.

      Drive performance would probably drop because the heads would have to stay in closer alignment without the ability of ECC to correct data read errors caused by small vibrations and electrical noise. In addition, sector relocations would probably increase because tiny flaws that do not impair the ability of a drive to write an ECC correctable sector would force the drive to remap that sector to another part of the disk.

      It is a similar issue with various wire-level data transmission schemes. If DSL connections did not use error correcting codes, they would suffer much higher packet loss rates than they do now, especially at distance. Most of those packets would eventually get retransmitted after transport-level checksum failures, but why resort to performance-impairing fallback measures when the problem can be largely eliminated at a lower level?

    • Re: (Score:3, Insightful)

      by Izmunuti ( 461052 )

      Ugh. Sounds like a bad idea. Hard drive channels are noisy. How will ZFS fare if lots and lots of sectors read from every drive have at least a couple of bits in error? With no ECC in the drive, errors would be common.

    • Re: (Score:3, Insightful)

      by drinkypoo ( 153816 )

      If you were going to eliminate ECC in one place or another, it wouldn't be on the drive. The drives have to operate in the real world of analog states, while the filesystem works in the virtual world of "whatever the disk actually feeds me". Disks have to have correctable ECC just to reliably give you accurate data from magnetic media at these densities. It would make more sense to upgrade the on-disk ECC and give the filesystem better access to the disk's ECC.

  • by AP31R0N ( 723649 ) on Monday December 28, 2009 @10:58AM (#30571776)

    i'm asking because i don't know, not to troll.

    What is the purpose of sectors? Does that purpose still matter?

    • by JordanL ( 886154 ) <jordan DOT ledoux AT gmail DOT com> on Monday December 28, 2009 @11:08AM (#30571916) Homepage
      A sector on a HDD is the minimum writable unit of space. Think of it as a lot in a subdivision. If each lot is 50,000 sq. ft. on a 20-acre plot, and you move to 60,000 sq. ft. lots instead, the plot is still 20 acres, but the development now has fewer lots on it.

      In computers, larger sectors are often better for large files, while smaller sectors are better for smaller partitions and smaller files. If a sector is 4096 bytes, and you create a 1024 byte file, it still occupies 4096 bytes on the disk, as the HDD won't write anything else but that file to the sector. If you have files that are hundreds of megabytes though, you can access the file, with minimum wastage, by using fewer sectors, which reduces thrashing and similar issues.

      The discrepancy between file sizes and sector sizes is what the difference is in Windows when you view a hard drive and it displays "size" and "size on disk". "Size" is the actual file size, while "size on disk" is the amount of space the file occupies on the hard drive.
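
      A rough sketch of that rounding, with an assumed 4096-byte allocation unit and arbitrary example file sizes:

```python
# "Size on disk" is the file length rounded up to whole allocation units.
import math

def size_on_disk(size_bytes, unit=4096):
    return math.ceil(size_bytes / unit) * unit

for size in (1024, 4096, 5000):  # arbitrary example file sizes
    print(f"size {size:>5} B -> size on disk {size_on_disk(size):>5} B")
# A 1024-byte file still occupies a full 4096-byte unit; a 5000-byte file
# occupies two units (8192 bytes), leaving 3192 bytes of slack.
```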
      • by AP31R0N ( 723649 )

        +1 Informative.

        So why have the sectors at all? Bigger sectors seem to incur more waste, though with HDDs growing at the rate they do it might not matter. But if there was no sectoring, then a file could take as much space, and only as much space as it actually needs. The 1024 byte file could then take 1024 bytes.

        i'm not saying sectors aren't needed, i just don't understand why they exist in the first place.

        Is it some kind of addressing thing?

        • by JordanL ( 886154 ) <jordan DOT ledoux AT gmail DOT com> on Monday December 28, 2009 @11:23AM (#30572076) Homepage
          It is indeed. Unless HDD makers created firmware, and programmers created partition formats, that addressed each bit individually (the addressing overhead itself would require an enormous amount of space, much larger than the HDD in fact), you can't live without sectors. The subdivision idea is again relevant. Imagine if every part of the 20-acre plot had to be "addressable" down to the square inch.
          • Re: (Score:3, Interesting)

            by AP31R0N ( 723649 )

            Ah, so it's saying, "Turn left on Evergreen... it's on that block". And the monstrous estate is from Elm to Fern at State. As opposed to GEOCOORD 32'57"(bunchOfDigits) by 32'57"(more digits).

            Got it.

            With the 1024 byte example, could the address just be "from bit X to bit X+1023"? i guess that too would be too much. All those tiny .dlls and .inis would take more space to define than they actually take.

            Thanks!

          • by tepples ( 727027 ) <tepples AT gmail DOT com> on Monday December 28, 2009 @11:36AM (#30572246) Homepage Journal

            Unless HDD makers created firmware, and programmers created partition formats, that addressed each bit individually (the addressing overhead itself would require an enormous amount of space, much larger than the HDD in fact), you can't live without sectors. The subdivision idea is again relevant. Imagine if every part of the 20-acre plot had to be "addressable" down to the square inch.

            It's called block suballocation [wikipedia.org]: store a small file in its entirety in another file's slack space. And yes, it's a "killer" feature.

        • by MBCook ( 132727 )

          Let's take a 1TB disk, which is becoming common. To address it all in 512B sectors, you need 31 address bits (since there are 2 billion sectors).

          If you change to 4KB sectors, you now have 1/8 as many, so you only need to address ~270 million sectors, which takes 28 bits of address space.

          The thing is, disks are given addresses of a certain size. If all addresses are 16 bits, and the sectors only have 512B in them, your disk can't be bigger than 32MB. By using 32 bits, you can go up to ~2TB. If you are using
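
          Spelling that arithmetic out in a quick sketch (the 1 TB figure is decimal, the way drive makers count it):

```python
# Address bits needed for a 1 TB drive at each sector size, and the largest
# drive reachable with a fixed-width sector address.
DRIVE = 10**12  # 1 TB, decimal

for sector in (512, 4096):
    count = DRIVE // sector
    bits = (count - 1).bit_length()   # bits to address sectors 0 .. count-1
    print(f"{sector:>4}-byte sectors: {count:,} sectors -> {bits} address bits")

for bits in (16, 32):
    max_bytes = 2**bits * 512
    print(f"{bits}-bit addresses, 512-byte sectors: max {max_bytes / 2**20:,.0f} MiB")
# 16-bit addressing tops out at 32 MiB; 32-bit addressing gives the familiar
# 2 TiB ceiling (2,097,152 MiB) for 512-byte-sector LBA.
```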

        • Re: (Score:3, Informative)

          by Thanshin ( 1188877 )

          So why have the sectors at all? [...]
          The 1024 byte file could then take 1024 bytes.

          That's not "not having sectors", that's having sectors 1 byte long.

          Thus, apply the reasoning of "bigger sectors, faster treatment of bigger files, and vice-versa".

        • by bdsesq ( 515351 )

          Sectors used to be needed because the drives would lose sync. The sector header would help keep it in sync.

          One thing that didn't get covered very well is "breakage". Breakage is the amount of space lost due to sector size. For each file you lose, on average, one half of a sector. This is because the last sector used by a file has somewhere between one and 512 bytes of data. For the new drives that is between one and 4096 bytes.

          So if you have 1,000,000 files on a drive with 512 byte sectors you lose half
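
          Putting numbers on that expected waste, using the same one-million-file example and the half-sector-per-file average stated above:

```python
# Expected space lost to partially filled last sectors, assuming the last
# sector of each file is on average half empty.
FILES = 1_000_000

for sector in (512, 4096):
    waste = FILES * sector // 2
    print(f"{sector}-byte sectors: ~{waste / 2**20:,.0f} MiB lost to slack")
# Roughly 244 MiB with 512-byte sectors vs. ~1,953 MiB (about 1.9 GiB) with
# 4 KiB sectors -- though most filesystems already allocate in 4 KiB clusters,
# so in practice the filesystem, not the drive, sets this waste.
```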

          • by AP31R0N ( 723649 )

            What does sync mean in this context?

            So they are making the drives faster in the sense that there are fewer sectors, so it's easier to get to a city than to a particular block of a city. They are also keeping the address space small. And they are wasting space because most of the blocks of the city have huge yards and a tiny house.

            Actually that first sentence should be a question. Does having bigger sectors make the drive appear to be faster?

            Fascinating.

        • Re: (Score:3, Informative)

          by TheRaven64 ( 641858 )

          Yes, it's an addressing thing. The grandparent is confusing sectors with allocation units. A filesystem is perfectly at liberty to allocate sub-sectors to different files (some do). A 32-bit disk interface can address 2^32 sectors. If you have one-byte sectors then that means you're limited to 4GB disks. If you have 512 bytes sectors then you're limited to 2TB. If you want a disk bigger than 2TB then you can either make the interface wider or can make the sector size bigger. Making the address wider

      • by DarkOx ( 621550 ) on Monday December 28, 2009 @11:25AM (#30572106) Journal

        You are confusing physical sector size with cluster size. Many file systems already address data in larger blocks; 4096 is very commonly used. They are generally multiples of 512, the physical sector size, so that it is easy to calculate the physical sector that needs to be changed when you know the logical one.

        It's quite possible to have a cluster size smaller than the sector size; the file system would just need to be smart enough to determine which other clusters fall on that sector and write them all, though.

        • by JordanL ( 886154 )
          Sector does have several definitions, but from reading the article I'm fairly certain they are talking about sector size, and not clusters.
      • by blueg3 ( 192743 )

        The first part is true, but the second part isn't. The logical cluster size and its implications are all in software and dependent on the filesystem. It's entirely possible to have filesystems that put multiple small files into a single logical cluster and, for that matter, into a single physical sector. This does mean things are a bit more complicated, though.

        The real answer is simply that a sector is the size of a unit of data that can be read from or written to a hard drive. That constrains how the oper

    • Hard drives are random-access devices and sectors are the smallest atomic unit that a drive can normally physically read and write. It doesn't read or write half a sector. When emulating a write to a 512 byte logical block with 4096 byte physical blocks on the media, it has to read the whole 4K sector, modify it with the changed 512 bytes, and rewrite the entire 4K sector.

      The concept of sectors could be hidden from the interface, theoretically. You could put the whole file system into the drive (OSD), for

    • Re: (Score:2, Informative)

      A sector used to be quite literally a sector of a disc in the mathematical sense, a wedge shape that spins around. Now, with LBA (labeling the hard drive's blocks in series from zero rather than by their physical position), it is just like a block in your filesystem, only in hardware: a blob of data that must be read or written as a whole. The rationale is that you are not likely to ever want to read or write one byte at a time, so there is no reason to make the hard disk handle requests for

  • by Himring ( 646324 ) on Monday December 28, 2009 @11:01AM (#30571818) Homepage Journal
    This may not be a smooth transition, because some OSes do not align partitions on 4K boundaries.

    "One life ends; another begins"
  • Larger sectors mean more empty space at the end of the last sector of a file. Lots of files means lots of wasted space. Modern OSes, especially Windows, use many more, smaller files than past versions did, and the trend continues upwards.

    So larger sectors mean that more of the space you buy on a drive goes unused. Which means more drives bought.

    I can see how drive manufacturers would like that.

    • by ettlz ( 639203 )
      I believe a filesystem with tail-packing support would overcome this.
    • Well, the trend today, especially with large drives, is to go for a cluster size of 4k anyway. Sure, there'll be a lot of system files under 4k, but there'll be far more music and pictures over 4k, and those take up most of the space anyway.

    • Any change in sector size that doesn't affect the filesystem block size will not change the number of KB required to store a file at all. Since virtually every filesystem already uses 4 KB block sizes by default, a change to 4KB logical or physical sector sizes will have no effect on storage requirements.
