Changes in HDD Sector Usage After 30 Years (360 comments)

freitasm writes "A story on Geekzone tells us that IDEMA (the International Disk Drive Equipment and Materials Association) is planning to implement a new standard for HDD sector usage, replacing the old 512-byte sector with a new 4096-byte sector. The association says it will be more efficient. According to the article, Windows Vista will ship with support for this already."
This discussion has been archived. No new comments can be posted.

  • by wesley96 ( 934306 ) on Friday March 24, 2006 @02:10AM (#14986239) Homepage
    Well, CD-ROMs use 2352 bytes per sector, ending up with 2048 actual bytes after error correction. Looking at the size of HDDs these days, a 4096-byte sector seems pretty reasonable.
    • by Ark42 ( 522144 ) <slashdot AT morpheussoftware DOT net> on Friday March 24, 2006 @02:28AM (#14986308) Homepage
      Hard drives do the same thing - for each 512 bytes of real data, they actually store nearly 600 bytes on the disk, including information such as ECC and sector-remapping data for bad sectors. There are also tiny "lead-in" and "lead-out" areas outside each sector, which usually contain a simple pattern of bits to let the drive seek to the sector properly.
      Unlike CD-ROMs, I don't believe you can actually read the sector meta-data without some sort of drive-manufacturer-specific tricks.
      • by alexhs ( 877055 ) on Friday March 24, 2006 @05:17AM (#14986712) Homepage Journal
        Unlike CD-ROMs, I don't believe you can actually read the sector meta-data

        What are you calling meta-data ?
        CDs also have "merging bits", and what is read as a byte is in fact coded on disc as 14 bits. You can't read the C2 error information either; it lies beyond the 2352 bytes, which really are all used as data on an audio CD. An audio sector is 1/75 of a second: 44100/75 * 2 (channels) * 2 (bytes per sample) = 2352 bytes, and it carries correction codes in addition to that. You can, however, read the subchannels (96 bytes per sector). (The arithmetic is sketched at the end of this comment.)

        When dealing with such low-level technology, "reading bits on disk" doesn't really mean anything, because there are no bits on the disc as such: just pits and lands (CD) or magnetic particles (HD) causing small electrical variations in a sensor. No variation is then interpreted as a 0 and a variation as a 1, and you need variations even when writing only 0s, to provide a reference clock.

        without some sort of drive-manufacturer-specific tricks.

        Now of course, since you cannot move HD platters between different drives with different heads the way you can move a CD between players, each manufacturer can (and will!) encode things differently. It has been reported that hard disks of the same model wouldn't "interoperate" when their controller boards were swapped, because of differing firmware versions, whereas the format is standardized for CDs and DVDs.

        they actually store nearly 600 bytes

        (That would be 4800 bits.) In that light, they're not storing bytes, just magnetizing particles; bytes are already quite high-level. There are probably more than ten thousand magnetic variations for a 512-byte sector. What you call bytes is already what you can read :) But there is more "meta-data" than that.

        Here's an interesting read [laesieworks.com] quickly found on Google just for you :)
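        Editor's aside: the figures quoted in this thread are easy to check. A minimal Python sketch (the ~600-bytes-on-disk figure is the parent poster's estimate, not a published spec):

            # Audio CD: one sector is 1/75 of a second of 44.1 kHz, 16-bit, stereo audio
            audio_sector = 44100 // 75 * 2 * 2   # samples * channels * bytes per sample
            print(audio_sector)                  # 2352 bytes

            # Data CD (Mode 1): 2048 user bytes carried inside those 2352
            print(2048 / 2352)                   # ~0.87 of the sector is user data

            # HDD estimate from the parent comment: ~600 raw bytes per 512-byte sector
            print(512 / 600)                     # ~0.85 of the raw capacity is user data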

        • Now of course, since you cannot move HD platters between different drives with different heads the way you can move a CD between players, each manufacturer can (and will!) encode things differently. It has been reported that hard disks of the same model wouldn't "interoperate" when their controller boards were swapped, because of differing firmware versions, whereas the format is standardized for CDs and DVDs.

          I've had to change the controller on a few hard drives for clients who did some really stupid things to their drives, but didn't w

    • I wonder if the 4096 bytes are before or after error correction. If it's after, it might make sense, because (and I'm sure someone will correct me) isn't 4K a relatively common minimum block size in today's filesystems? I know that the default for HFS+ on a Mac is.
      • NTFS has a cluster/allocation size from 512 bytes to 64K. This determines the minimum possible on-disk file size, but I don't think it has too much to do with the sector size.
        • It doesn't, other than that the FS block size should be a multiple of the disk sector size, both to avoid extra reads/writes when accessing or storing an FS block and to avoid wasting space storing one (sketched below).
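          A minimal illustration of that multiple-of-sector-size point (Python sketch; the sizes are just examples):

              def sectors_touched(fs_block_bytes, sector_bytes):
                  # Whole sectors a single filesystem block occupies,
                  # rounded up when the block doesn't fill a sector exactly.
                  return -(-fs_block_bytes // sector_bytes)   # ceiling division

              print(sectors_touched(4096, 512))    # 8 sectors per 4 KB block
              print(sectors_touched(4096, 4096))   # 1 sector per 4 KB block
              print(sectors_touched(512, 4096))    # 1 sector, but 3584 of its bytes go unused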
  • Cluster size? (Score:3, Interesting)

    by dokebi ( 624663 ) on Friday March 24, 2006 @02:13AM (#14986251)
    I thought cluster sizes were already 4KB for efficiency, and LBA for larger drive sizes. So how does changing the sector size change things? (Especially when we don't access drives by sector/cylinder anymore?)
    • Re:Cluster size? (Score:5, Informative)

      by scdeimos ( 632778 ) on Friday March 24, 2006 @02:55AM (#14986366)
      I thought cluster sizes were already 4KB for efficiency, and LBA for larger drive sizes.
      Cluster sizes are variable on most file systems. On our NTFS web servers we tend to use 1k clusters because it's more efficient that way with lots of small files, but the default NTFS cluster size is 4k. LBA is just a different addressing scheme at the media level that makes a volume appear to be a flat array of sectors (as opposed to the old CHS, or Cylinder/Head/Sector, scheme).
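      For anyone who hasn't seen it, the usual CHS-to-LBA mapping looks like this (Python sketch; the geometry numbers are illustrative, not from any particular drive):

          def chs_to_lba(c, h, s, heads_per_cylinder, sectors_per_track):
              # Classic conversion: sectors are numbered 1..N within a track,
              # cylinders and heads from 0.
              return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

          # Example with a made-up 16-head, 63-sectors-per-track geometry
          print(chs_to_lba(0, 0, 1, 16, 63))   # 0    (first sector of the disk)
          print(chs_to_lba(1, 0, 1, 16, 63))   # 1008 (first sector of cylinder 1)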
  • So long as this new format is transparent, built internally into the drives, and doesn't affect older hardware or software, there shouldn't be a problem. It also should not contain any DRM junk.

    All too often, an advantage in speed or efficiency is more than countered by added overhead junk.

    now maybe I should RTFA...
    • It can't be, at least not efficiently. Like flash devices, it's impossible to write less than a sector at a time.

      If this were transparently implemented by the hardware, the OS would frequently try to write a single 512-byte sector. For this to work, the hard drive controller would have to read the existing sector and then write it back with the 512 bytes changed. This is a big waste, as a read followed by a write costs at least a full platter rotation (1/120 second at 7,200 RPM). Do this hundreds or thousands of time
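      A rough sketch of that read-modify-write path, assuming a drive with native 4096-byte sectors still accepting 512-byte writes (Python; the "disk" dictionary and the timing model are illustrative only):

          ROTATION_S = 60 / 7200   # one full rotation at 7,200 RPM, about 8.3 ms

          def emulated_512_write(disk, lba512, data512):
              # The controller fetches the whole 4 KB physical sector, patches
              # 512 bytes of it, and writes the sector back: read + write costs
              # roughly one extra rotation of latency.
              phys = lba512 // 8                      # which 4 KB sector holds this block
              offset = (lba512 % 8) * 512
              sector = bytearray(disk[phys])          # read
              sector[offset:offset + 512] = data512   # modify
              disk[phys] = bytes(sector)              # write
              return ROTATION_S                       # approximate added latency

          disk = {0: bytes(4096)}
          print(emulated_512_write(disk, 3, b"\xff" * 512))   # ~0.0083 seconds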
  • Most "normal use" filesystems nowadays (FAT32, Ext3, HFS, Reiser) all use 4K blocks by default. That means that the smallest amount of data that you can change at a time is 4k, so every time you change a block, the HDD has to do 8 writes or reads. That would leave the drive preforming 8x the number of commands that it would need to.

    As filesystems are slowly moving towards larger block sizes, now that the "wasted" space on drives due to unused space at the ends of blocks is not as noticeable, moving up the s
    • by Anonymous Coward
      You're talking bullshit. In SCSI/SATA you can read/write big chunks of data (even 1MB) in just one command. Just read the standards.
      • by Anonymous Coward
        I'm pretty sure he was talking about operations performed by the drive's internal controller, not those sent through the interface cable.
      • Grandparent is discussing "native command queueing", where the hard disk will parse the OS read/write calls and stack them in a way that optimizes hardware access. Pretend there are three consecutive blocks of data on the hard drive: 1, 2, and 3. The OS calls for 1, 3, and then 2. Instead of going three spins around, NCQ will read the data in one spin in 1, 2, 3 order but then toss it out to the OS in 1, 3, 2 order. Now, I'm not sure how much higher sector sizes will affect NCQ capability, because I thought
    • A file-system block is not a hard-disk block. This means that block sizes smaller than 4096 bytes will not be available, and that tools that talk to the disk at a low level (such as fdisk and parted) will have to be reviewed for any assumption that block sizes are, in fact, 512 bytes. It also means that old drivers that made such assumptions are not going to interoperate correctly with these new disks and controllers, unless the manufacturers are very clever about maintaining interfaces that look ident
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Friday March 24, 2006 @02:19AM (#14986276)
    Small devices like cellphones typically save files of several kilobytes, whether they be the phonebook database or something like camera images. Whether the data is saved in a couple large sectors or 8 times that many small sectors isn't really an issue. Either way will work fine, as far as the data is concerned. The biggest problem is the amount of battery power used to transfer those files. If you have to re-issue a read or write command (well, the filesystem would do this) for each 512-byte block, that means that you will spend 8 times more energy (give or take a bit) to read or write the same 4k block of data.

    Also, squaring away each sector after processing is a round trip back to the filesystem which can be eliminated by reading a larger sector size in the first place.

    Some semi-ATA disks already force a minimum 4096-byte sector size. It's not necessarily the best way to get the most usage out of your disks, but it is one way of speeding up the disk just a little bit more to reduce power consumption.
    • I guess that's why they call you bad analogy guy. :) R/W filesystems tend to abstract media into clusters, which are groups of sectors, so that they can take advantage of multi-sector read/write commands (which have been around since MFM hard disks with CHS addressing schemes, by the way) to get more than one sector's worth of data on/off the hard disk in a single command.
    • If you have to re-issue a read or write command (well, the filesystem would do this) for each 512-byte block, that means that you will spend 8 times more energy (give or take a bit) to read or write the same 4k block of data.

      Well sorry, but that's the way it is.

      Hard drives generally have the ability to read/write multiple sectors with a single command (go read the ATA standards). And DMA is usually used [programmed I/O just plain sucks].

      I don't see how changing the sector size is going to save power... Eit
    • Umm, excuse me, but WTF is a 'semi-ATA disk?' Either it's ATA or it's not, there is no hybrid that I'm aware of.
  • by sinnerman ( 829959 ) on Friday March 24, 2006 @02:23AM (#14986289)
    Well of course Vista will ship with this supported already. Just like WinFS...er..
  • by dltaylor ( 7510 )
    Competent file system handlers can use disk blocks larger or smaller than the file system block size, but there are some benefits to using the same number for both. Although larger blocks may provide more data per drive and let you index larger drives with 32-bit numbers, the drive has to use better (larger and more complex) CRCs to ensure sector data integrity, the granularity of replacement blocks may end up wasting more space simply to provide an adequate count of replacements, and
  • by alanmeyer ( 174278 ) on Friday March 24, 2006 @02:33AM (#14986320)
    HDD manufacturers are looking to increase the amount of data stored on each platter. With larger sector sizes, the HDD vendor can use more efficient codes. This means better format efficiency and more bytes to the end user. The primary argument is that many OSes already use 4K clusters. (A rough illustration follows at the end of this comment.)

    During the transition from 512-byte to 1K, and ultimately 4K sectors, HDDs will be able to emulate 512-byte modes to the host (i.e. making a 1K or 4K native drive 'look' like a standard 512-byte drive). If the OS is using 4K clusters, this will come with no performance decrease. For any application performing random single-block writes, the HDD will suffer 1 rev per write (for a read-modify-write operation), but that's really only a condition that would be found during a test.
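    A rough illustration of the format-efficiency argument (Python sketch; the per-sector overhead figures below are made-up placeholders, since real ECC, sync and gap sizes are vendor-specific):

        def format_efficiency(user_bytes, overhead_bytes):
            # Fraction of the raw on-disk capacity that ends up as user data.
            return user_bytes / (user_bytes + overhead_bytes)

        print(format_efficiency(512, 88))     # 512-byte sectors: ~0.85 of raw capacity
        print(format_efficiency(4096, 188))   # 4096-byte sectors: ~0.96 of raw capacity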
  • Seems good to me. (Score:3, Informative)

    by mathew7 ( 863867 ) on Friday March 24, 2006 @02:37AM (#14986331)
    Almost all filesystems I know of use at least 4KB clusters. NTFS does default to 512-byte clusters on smaller partitions.
    LBA addresses on sector boundaries, so for larger HDDs you need more bits. The current 28-bit LBA, which is all some older BIOSes support, means a maximum of 128GB: 2^28*512 = 2^28*2^9 = 2^37 bytes (checked in the sketch below). Since 512-byte sectors were used for 30 years, I think it is easy to assume they would not last 10 more years before hitting the LBA32 limit. So why not shave off 3 address bits and also make the offset within a sector an even number of bits (12 against 9)?
    There is also something called "multiple block access", where you make only one request for up to 16 sectors (on most HDDs). With 512-byte sectors that is 8K, but with 4K sectors it means 64K. Great for large files (I/O overhead and such).
    On the application side this should not affect anyone using 64-bit sizes (since only the OS would know of sector sizes); for 32-bit sizes it already is a problem (the 4G limit).
    So this should not be a problem, because on a large partition you will not have too much wasted space (I have around 40MB of wasted space on my OS drive for 5520MB of files, and I would even accept 200MB).
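    Checking the LBA arithmetic above (Python sketch):

        print(2**28 * 512)           # 137438953472 bytes = 2**37, the 128 GiB LBA28 limit
        print(2**28 * 4096)          # 2**40 bytes = 1 TiB if the same 28 bits addressed 4 KB sectors
        print(2**32 * 512 / 2**40)   # 2.0 -> LBA32 with 512-byte sectors tops out at 2 TiB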
  • by TrickiDicki ( 559754 ) on Friday March 24, 2006 @02:49AM (#14986358)
    That's a bonus for all those boot-sector virus writers - 8 times more space to do their dirty deeds...
    • That's a bonus for all those boot-sector virus writers - 8 times more space to do their dirty deeds...

      Oh great, now my viruses can be bloatware, too. I guess with that much space, they can even install a GUI for the virus, or maybe "Clippy" to keep me distracted while he formats my hard drive.
  • But really... think about this: if each sector has overhead, then any file over 512 bytes will have less overhead, and you'll effectively get more space in most cases. What percentage of YOUR files are less than 4k?
    • It is not just files that are less than 4k; it is almost all small files. Think about a 5k file that now uses 8k - almost 40% waste. A 9k file uses 12k - about 25% waste. So the more small files you have, the more waste; the larger your files, the less waste. (The numbers are reproduced in the sketch at the end of this comment.)

      Which is good, you don't really want lots of small files anyway.

      If you are using Windows, you can see how much space is wasted at the moment: just right-click on a directory, and it will tell you how much data is in the files, and how much disk space
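      The slack-space percentages above are easy to reproduce (Python sketch, assuming 4 KB allocation units):

          def slack(file_bytes, alloc_bytes=4096):
              # Allocated size is the file size rounded up to whole allocation units.
              allocated = -(-file_bytes // alloc_bytes) * alloc_bytes
              return allocated - file_bytes, (allocated - file_bytes) / allocated

          print(slack(5 * 1024))   # (3072, 0.375) -> a 5 KB file wastes ~37% of its 8 KB
          print(slack(9 * 1024))   # (3072, 0.25)  -> a 9 KB file wastes 25% of its 12 KB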
  • by Animats ( 122034 ) on Friday March 24, 2006 @03:05AM (#14986391) Homepage
    The real reason for this is that as densities go up, the number of bits affected by a bad spot goes up. So it's desirable to error correct over longer bit strings. The issue is not the size of the file allocation unit; that's up to the file system software. It's the size of the block for error correction purposes. See Reed-Solomon error correction. [wikipedia.org]
  • by filterchild ( 834960 ) on Friday March 24, 2006 @03:14AM (#14986409)
    Windows Vista will ship with this support already.

    Oh YEAH? Well Linux has had support for it for eleventeen years, and the Linux approach is more streamlined anyway!
  • File sizes (Score:3, Interesting)

    by payndz ( 589033 ) on Friday March 24, 2006 @03:59AM (#14986518)
    Hmm. This reminds me of the time when I bought my first external FireWire drive (120GB) and used it to back up my 10GB iMac, which had lots of small files (fonts, Word 5.1 documents, etc.). Those 10GB of backups ended up occupying 90GB of drive space because the external drive had been pre-formatted with some large block size, and even the smallest file took up half a megabyte! So I had to reformat the drive and start again...
  • Actually, this almost can't be anything but a good thing.

    First of all, most OSes these days use a memory page size of 4k. Having your I/O block size match your CPU page size makes it much more efficient to DMA data and the like. Testing has shown that this is generally helpful.

    Second, RAID will benefit here. Larger blocks mean larger disk reads and writes. In terms of RAID performance, this is probably a good thing. Of course, the real performance comes from the size of the drive cache, but don't underestimate the benefit of larger blocks. Larger blocks mean the RAID system can spend more time crunching the data and less time handling block overhead. The fact that more data must be crunched for a sector write is of concern, but I'd bet it won't matter too much (it only really matters for massive small writes, not generally a RAID use case).

    Third (and EVERYONE seems to be missing this), some file systems DON'T waste slack space in a sector. Reiserfs (v3 and v4) actually takes the underused blocks at the end of files (the "tail" of the file) and creates blocks with a bunch of them crammed together (often mixed in with metadata). This has been shown to actually increase performance, because the tails of files are usually where they are most active, and tail blocks collect those tails into frequently accessed blocks (which have a better chance of being in the disk cache).

    Netware 4 did something called block suballocation. While not as tightly packed as Reiser tail blocks, it did divide its larger 32KB or 64KB blocks (which were chosen to keep block addresses small and large-file streaming fast) into disk sectors and store file tails in them.

    NTFS has block suballocation akin to Netware's, but as far as tail packing goes Windows users are, to my knowledge, out of luck until MS finally addresses their filesystem (they've been putting this off forever). Windows really would benefit from tail packing (although the infrastructure to support it would make backwards compatibility near impossible).

    To my knowledge, ReiserFS is the only filesystem with tail packing. If you are really interested in this, see your replacement brain on the Internet [wikipedia.org].

    Fourth, larger sectors mean smaller sector numbers. Any filesystem that needs to address sectors has to choose a size for those sector (or cluster) references. Remember FAT8, FAT12, FAT16, and FAT32? Each of those numbers was the width of the block references (and thus how big a filesystem they could address). Larger sectors will postpone the need to crank up the size of filesystem references. (A quick sketch follows at the end of this comment.)

    Finally, someone mentioned sector-size issues with defragmenters and disk optimizers. These programs don't really care, as long as all of the sectors on the system are the same size; additionally, they could be modified to deal with different sector sizes. Ironically, modern filesystems don't really require defragmentation, as they are designed to keep fragments small on their own (usually using "extents"). Ext2, Ext3, Reiserfs and the like do this. NTFS does it too, although it can have problems if the disk ever gets full (basically, magic reserved space called the MFT gets data stored in it, and the management information for the disk gets fragmented permanently). If it weren't for that design choice (I wouldn't call it a flaw as much as a compromise), NTFS wouldn't really need defragmentation. ReiserFS can suffer from a limited form of fragmentation. However, v4 is getting a repacker that will actively defragment and optimize the filesystem in the background (by spreading out the free space evenly to increase performance).

    I really don't see how this can be bad unless somebody makes a mistake on backwards compatibility. For the Linux junkies: I'm not sure about the IDE code, but I bet the SATA code will be overhauled to support it in a matter of weeks (if not a single weekend).
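    On the fourth point, a quick sketch of how the width of a block reference bounds filesystem size (Python; the cluster sizes are examples, and FAT32 in practice uses only 28 of its 32 bits for cluster numbers):

        def max_fs_bytes(address_bits, cluster_bytes):
            # A filesystem that stores cluster numbers in N-bit fields can
            # address at most 2**N clusters.
            return 2**address_bits * cluster_bytes

        print(max_fs_bytes(16, 32 * 1024) / 2**30)   # FAT16 with 32 KB clusters: 2.0 GiB
        print(max_fs_bytes(28, 4 * 1024) / 2**40)    # FAT32 (28 usable bits), 4 KB clusters: 1.0 TiB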
    • Forget waste of space in something as small as a sector.
      If this is an issue, you are using the wrong application - one Word file per phone number?

      File systems became simpler over time. This is a GOOD THING AND THE ONLY WAY TO GO.

      If you try to optimize too much, you end up with something like the IBM mainframe file systems from the 70s, which are still somewhat around.

      Create a simple file, called a data set? Sure, in TSO (what passes for a shell, more or less), you use the ALLOCATE command: http://publibz.b [ibm.com]

  • by cnvogel ( 3905 ) <chris AT hedonism DOT cx> on Friday March 24, 2006 @05:12AM (#14986707) Homepage
    Wow, finally, a new block size, never heard of that idea before.

    Doesn't anyone remember that SCSI drives supporting a changeable block size have been around basically forever? Of course, with hard disks it was used mostly to account for additional error-correcting/parity bits, but magneto-optical media could also be written with 512-byte or 2k blocks (if I remember correctly).

    (first hit I found: http://www.starline.de/en/produkte/hitachi/ul10k300/ul10k300.htm [starline.de] allows 512, 516, 520, 524, 528, but there are devices that do several steps between 128 and 2k or so...)
  • Size != storage (Score:2, Informative)

    by tomstdenis ( 446163 )
    You're all missing one key point. Your 512-byte sector is NOT 512 bytes on disk. The drive stores extra track/ECC/etc. information. So a 4096-byte sector means less waste, more sectors, and more usable space.

    Tom
  • I really don't know much about how drives store data, so this may be a really stupid question. But do larger sectors also mean a larger boot sector? Is this good news for boot loaders?

  • 1983: trying to convince the CDC engineer that yes, I did want him to configure the disk for 336-byte sectors.

    Ah, the joys of using a Harris machine with 24-bit words, 8-bit bytes, and 112-word disk sectors.
  • In 1963, when IBM was still firmly committed to variable-length records on disks, DEC was shipping a block-replaceable personal storage device called the DECtape [wikipedia.org]. This consisted of a wide magnetic tape wrapped around a wheel small enough to fit in your pocket. Unlike the much larger IBM-compatible tape drives, DECtape drives could write a block in the middle of the tape without disturbing other blocks, so it was in effect a slow disk. To make block replacement possible, all blocks had to be the same size, and on the PDP-6 DEC set the size to 128 36-bit words, or 4608 bits. This number (or 4096, a rounder number for 8-bit computers) carried over into later disks, which also used fixed sector sizes. As time passed there were occasional discussions about the proper sector size, but at least once the argument to keep it small won, based on the desire to avoid wasting space within a sector, since the last sector of a file would on average be only half full.
