
Exploring Advanced Format Hard Drive Technology

MojoKid writes "Hard drive capacities are sometimes broken down by the number of platters and the size of each. The first 1TB drives, for example, used five 200GB platters; current-generation 1TB drives use two 500GB platters. These values, however, only refer to the accessible storage capacity, not the total size of the platter itself. Invisible to the end-user, additional capacity is used to store positional information and for ECC. The latest Advanced Format hard drive technology changes a hard drive's sector size from 512 bytes to 4096 bytes. This allows the ECC data to be stored more efficiently. Advanced Format drives emulate a 512 byte sector size, to keep backwards compatibility intact, by mapping eight logical 512 byte sectors to a single physical sector. Unfortunately, this creates a problem for Windows XP users. The good news is, Western Digital has already solved the problem and HotHardware offers some insight into the technology and how it performs."
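As a rough illustration of how the 512-byte emulation maps onto the physical media (illustrative numbers only, not taken from TFA): eight 512-byte logical sectors share one 4096-byte physical sector, so a request stays within a single physical sector only if it starts on an 8-logical-sector boundary.

  # Sketch of 512e emulation: 8 logical 512-byte sectors per 4096-byte physical sector.
  LOGICAL, PHYSICAL = 512, 4096
  RATIO = PHYSICAL // LOGICAL  # 8

  def physical_sectors_touched(start_lba, lba_count):
      """How many physical 4K sectors a logical request covers."""
      first = start_lba // RATIO
      last = (start_lba + lba_count - 1) // RATIO
      return last - first + 1

  print(physical_sectors_touched(2048, 8))  # aligned 4KB write -> 1 physical sector
  print(physical_sectors_touched(63, 8))    # start at LBA 63 -> 2 (read-modify-write)

The misaligned case is essentially the Windows XP problem mentioned above: XP-era partitioning tools start the first partition at LBA 63, which is not divisible by 8, so every 4KB cluster straddles two physical sectors.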
  • I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8192K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDDs this becomes less of an issue, but unless you are dealing with a small number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.

    • by ArcherB ( 796902 )

      I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8192K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDDs this becomes less of an issue, but unless you are dealing with a small number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.

      OK, it's 4K (4096 bytes), not 4096K. I guess that's a bit more doable when we're talking about sizes greater than 1TB.

    • Re: (Score:3, Insightful)

      by BitZtream ( 692029 )

      You want the sector size to be smaller than the average file size or you're going to waste a lot of space. If your average file size is large, and writes are sequential, you want the largest possible sector sizes.

    • Most file systems work by clusters, not sectors.

      NTFS partitions use 4k clusters by default so you already have this problem.

        • Indeed, that is why they are doing this at 4k. Most current filesystems use 4k as their base cluster size. By updating the sector size to match the typical cluster size anyway, they cut the number of ECC blocks required by a factor of eight. You can take two drives of the same physical characteristics and by increasing the sector size to 4k you gain hundreds of megabytes on the average 100 gigabyte drive.

        • You can take two drives of the same physical characteristics and by increasing the sector size to 4k you gain hundreds of megabytes on the average 100 gigabyte drive.

          For the sake of argument, let's assume "hundreds of megabytes" equates to 500MB. That works out to be a saving of 0.5% of the capacity, which isn't really all that useful. If you are using your 100GB drive at peak capacity where 500MB will allow you to store a worthwhile amount of data, you're going to run into other issues such as considerable file fragmentation as there isn't enough free space to defrag it properly.

    • by forkazoo ( 138186 ) on Friday February 26, 2010 @05:09PM (#31291338) Homepage

      I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8192K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDDs this becomes less of an issue, but unless you are dealing with a small number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.

      The filesystem's minimum allocation unit size doesn't necessarily need to have a strong relationship with the physical sector size. Some filesystems don't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit. (IIRC, Reiser is such an FS.)

      Also, we are actually talking about 4 kilobyte sectors. TFS refers to it as 4096k, which would be a 4 megabyte sector. (Which is wildly wrong.) So, worst case for your example of a thousand 1k files is actually 4 megabytes, not 4 gigabytes as you suggest. And, really, if my 2 terabyte drive gets an extra 11% from the more efficient ECC with the 4k sectors, that gives me a free 220000 megabytes, which pretty adequately compensates for the 3 MB I theoretically lose in a worst case filesystem from your example thousand files.
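      Putting quick numbers on that (back-of-the-envelope only, decimal units as in the figures above):

        # Back-of-the-envelope check of the figures above.
        files, file_kb, alloc_kb = 1000, 1, 4
        worst_case_slack_mb = files * (alloc_kb - file_kb) / 1000  # ~3 MB lost to slack
        extra_capacity_mb = 0.11 * 2_000_000                       # ~220,000 MB gained on a 2 TB drive
        print(worst_case_slack_mb, extra_capacity_mb)              # 3.0 220000.0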

      • by tepples ( 727027 )

        Some filesystems don't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit. (IIRC, Reiser is such an FS.)

        True, block suballocation [wikipedia.org] is a killer feature. But other than archive formats such as zip, are there any maintained file systems for Windows or Linux with this feature?

        • by jabuzz ( 182671 ) on Friday February 26, 2010 @05:53PM (#31291782) Homepage

          IBM's GPFS is one; though it ain't free, it does support Linux and Windows both mounting the same file system at the same time. They reckon the optimum block size for the file system is 1MB. I am not convinced of that myself, but I always give my GPFS file systems 1MB block sizes.

          Then there is XFS, which for small files will put the data in with the metadata to save space. However, unless you have millions of files, forget about it. With modern drive sizes the loss of space is not important. If you have millions of files, stop using the file system as a database.

          • Re: (Score:3, Interesting)

            by mgblst ( 80109 )

            GPFS is a ridiculously fast filesystem, probably the fastest in the world, when set up correctly. We used to use it for our cluster of 2000 cores.

        • by Korin43 ( 881732 )
          From the wikipedia page you linked to: Btrfs, ReiserFS, Reiser4, FreeBSD UFS2. All of these are actively maintained, and ReiserFS and UFS2 are stable (although UFS2 is BSD, not Linux).
        • Re: (Score:3, Interesting)

          by Carnildo ( 712617 )

          NTFS uses a limited form of block suballocation: if the file is small enough, the file data can share a block with the metadata.

      • Also, we are actually talking about 4 kilobyte sectors. TFS refers to it as 4096k, which would be a 4 megabyte sector. (Which is wildly wrong.)

        Wanna bet TFS was written by a Verizon employee? ;)

    • by Cyberax ( 705495 )

      Unless you use a clever filesystem which doesn't force file size to be a multiple of sector size.

    • by NFN_NLN ( 633283 )

      I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8192K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDDs this becomes less of an issue, but unless you are dealing with a small number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.

      You had me worried for a while there so I did a quick check. Turns out NONE of my movies or MP3's are less than 4096 bytes so it looks like I dodged a bullet there. However, when Hollywood perfects its movie industry down to 512 different possible re-hashes of the same plot, they might be able to store a movie with better space efficiency on a 512 byte/sector drive again.

      • Turns out NONE of my movies or MP3's are less than 4096 bytes so it looks like I dodged a bullet there.

        But how big are script files and source code files and PNG icons?

        • Extract from ls -l /etc

          -rw-r--r-- 1 root root 10788 2009-07-31 23:55 login.defs
          -rw-r--r-- 1 root root 599 2008-10-09 18:11 logrotate.conf
          -rw-r--r-- 1 root root 3844 2009-10-09 01:36 lsb-base-logging.sh
          -rw-r--r-- 1 root root 97 2009-10-20 10:44 lsb-release

      • Re: (Score:3, Interesting)

        by owlstead ( 636356 )

        You didn't dodge any bullet. Any file whose size goes slightly over a 4096-byte boundary will take more space. For large collections of larger files (such as an MP3 collection), you will, on average, have 2048 bytes of empty space per file in your drive's sectors. Let's say you have an archive which also includes some small files (e.g. playlists, small pictures), so that the overhead is about 3 KB per file, and the average file size is about 3MB. Since 3000 / 3000000 is about 1/1000, you could have a whopping 1 per mille loss.
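        The same rough estimate in code (all numbers assumed, as above):

          # Rough average-case cluster slack for a mostly-large-file collection.
          cluster = 4096
          avg_slack = cluster / 2          # ~2048 bytes wasted per file on average
          overhead_per_file = 3 * 1024     # ~3 KB per file once small files are counted in
          avg_file_size = 3 * 1024 * 1024  # ~3 MB average file
          print(overhead_per_file / avg_file_size)  # ~0.001, i.e. about 1 per mille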

          • Sorry to reply to my own post here: the FS block size should be the minimum allocation size, which may be smaller than the physical sector size. So for your MP3 collection the overhead may be even lower...

    • It isn't that great for the OS's partition, but it works out great for my Media partition.

    • Re: (Score:2, Interesting)

      by Avtuunaaja ( 1249076 )
      You can fix this on the filesystem level by using packed files. For the actual disk, tracking 512-byte sectors when most operating systems actually always read them in groups of 8 is just insane. (If you wish to access files by mapping them to memory, and you do, you must do so at the granularity of the virtual memory page size. Which, on all architectures worth talking about, is 4K.)
    • I see what you mean but will it be like other parts of the computer? I do computation on CPUs, GPUs or FPGAs depending on what hardware is appropriate for the work that needs to be done. Is this similar?

      You have data with certain attributes and store it appropriately.

    • NetWare has been doing block suballocation for a while now [novell.com]. Not a bad way to make use of a larger block size, and it was crucial when early 'large' drives had to tolerate large blocks, at least before LBA was common. Novell tackled a lot of these problems fairly early, as they led the way in PC servers and had to deal with big volumes fairly quickly. Today, we take a lot of this for granted, and we are swimming in disk space so it's not a big deal. But once upon a time, this was not so. 80MB was priceless.

    • You can have file systems that don't use up a full sector for small files. Or you do what the article mentioned and have 8 effective blocks within one physical block.

      On the other hand, with your logic, 512 byte sectors are too big too, because I have lots of files that are much smaller than that...
    • Cluster Size (Score:3, Interesting)

      by krischik ( 781389 )

      I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K.

      Most file systems already use a cluster size of 4096 bytes (clustering 8 sectors). The only file system I know of which used sector = cluster size was IBM's HPFS.

      So NO, we don't lose any extra space. Still, I am wary of this emulation stuff. First the 4096 byte sector is broken down into eight 512 byte "virtual" sectors, and then those eight virtual sectors are clustered back into one cluster. Would it not be better to use an intelligent file system which can handle 4096 byte sectors natively? Any file system which can be formatted onto a DVD-RAM should be able to.

  • XP users (Score:4, Funny)

    by spaceyhackerlady ( 462530 ) on Friday February 26, 2010 @05:02PM (#31291262)

    XP users do not need big hard drives to have problems.

    ...laura

  • by WrongSizeGlass ( 838941 ) on Friday February 26, 2010 @05:06PM (#31291304)
    When this issue came up a few weeks ago there was a problem with XP and with Linux. I see they tackled the XP issue pretty quick but what about Linux?

    This place [slashdot.org] had something about it.
    • Some distro installers do it right and some do it wrong. Give it a few years and I'm sure it will all be sorted out.

    • by marcansoft ( 727665 ) on Friday February 26, 2010 @05:20PM (#31291438) Homepage

      If Advanced Format drives were true 4k drives (i.e. they didn't lie to the OS and claim they were 512 byte drives), they'd work great on Linux (and not at all on XP). Since they lie, Linux tools will have to be updated to assume the drive lies and default to 4k alignment. Anyway, you can already use manual/advanced settings in most Linux partitioning tools to manually work around the issue.
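      For instance, a trivial way to check whether a partition on a 512e drive is 4k-aligned (the start LBAs below are just the two common examples, in the 512-byte units that fdisk and parted report):

        # A partition is 4k-aligned when its start LBA is a multiple of 8.
        def is_4k_aligned(start_lba):
            return start_lba % 8 == 0

        for start in (63, 2048):  # 63: old DOS/XP default, 2048: modern 1 MiB alignment
            print(start, is_4k_aligned(start))  # 63 -> False, 2048 -> True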

    • Re: (Score:2, Informative)

      Linux has had 4096 byte block sizes in the kernel for ages. See this article [idevelopment.info]. The issue, as I recall somebody saying, is that fdisk cannot properly handle this. So use parted and you will be OK. ext3 and jfs and I suppose xfs and a whole bunch of others support the 4096 block size as well. BTW, who "tackled the XP issue pretty quick"? Was it Microsoft or was it the hard drive makers? AFAIK a few hard drive manufacturers are emulating a 512 byte block size so it is not a complete fix.
      • Actually XFS supports a true 4096 byte sector size as well. For example you can format XFS onto a DVD-RAM (sector size = 2048) without trouble. So the best thing for Linux would be if you could tell the drive not to lie about the sector size.

    • Actually it makes me wonder if the virtual 512 byte sector stuff can be switched off. XFS, for example, handles larger sector sizes gracefully.

  • It says 4096K, they mean 4096 bytes (4K). Error is in the original.

  • Speed is irrelevant (Score:4, Interesting)

    by UBfusion ( 1303959 ) on Friday February 26, 2010 @05:26PM (#31291492)

    I can't grasp why these benchmarks (and most others) are so obsessed with speed. Regarding HDs, I'd like to see results relevant to:

    1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?

    2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?

    3. Are there differences in sustained read/write performance? E.g. is the new format more suitable for video editing than the old one?

    For me, the first issue is the most important of all, given that owning huge 2TB disks is in fact like playing Russian roulette: without proper backup strategies, you risk all your data at once.

    • by Surt ( 22457 )

      I think the answer is that:

      #1: only an idiot relies on the MTBF statistic as their backup strategy, so speed matters more (and helps you perform your routine backups faster).

      #2: for energy efficiency, you don't buy a big spinning disk for your laptop, you use a solid state device.

      #3: Wait, I thought you didn't want them to talk about performance? This format should indeed be better performing for video editing, however, since you asked.

    • 1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?

      Yes. By storing user data more efficiently (less ECC and gap overhead per track), each cylinder holds more data, so any given disk capacity needs fewer cylinders and fewer head movements.

      2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?

      Probably sl

    • by jedidiah ( 1196 )

      > I can't grasp why these benchmarks (and most others) are so obsessed with speed. Regarding HDs, I'd like to see results relevant to:

      You really want to be able to copy your stuff. If your stuff is 2TB, then it makes sense that you would want to copy that 2TB in a timely manner.

      So yeah... speed does matter. Sooner or later you will want that drive to be able to keep up with how big it is.

    • This is what RAID, mirroring and scripted backups are for. If you can't write a batch file to copy shit to a USB/Firewire drive, or simply have another cheap blank 2TB disk in the same PC to copy to, you are failing at backup.

      Hard drives are so cheap now that you should simply have massive redundancy; flash USB sticks are also good for one-time files like documents and smaller stuff you want to keep.

  • by JorDan Clock ( 664877 ) <jordanclock@gmail.com> on Friday February 26, 2010 @05:44PM (#31291666)
    Anandtech [anandtech.com] has a much better write-up on this technology, complete with correct conversions from bits to bytes, knowledge of the difference between 4096 bytes and 4096 kilobytes, and no in-text ads.
    • by WrongSizeGlass ( 838941 ) on Friday February 26, 2010 @05:53PM (#31291776)
      That article doesn't sound like fun at all. How are we supposed to mock it if they haven't made multiple errors, typos and other such blunders? We're smug, semi-knowledgeable 'first posters' with nothing better to do than critique articles that we were too lazy to read or too incompetent to write. I'm going to go wait on the homepage to refresh so I can jump into the next thread without a second thought.
  • Unfortunately, this creates a problem for Windows XP users. The good news is, Western Digital has already solved the problem

    Is there a particular reason that we should care that a new technology isn't backwards compatible with an obsolete technology? Especially in light of the fact that it actually is compatible?

  • Partitioning the right way deals with it. You can use fdisk in Linux to do the partitioning for both Linux and Windows.

    First, find out exactly how large the drive is in units of 512 byte sectors. Divide that number by 8192 and round any fraction up; remember that as the number of cylinders. In fdisk, use the "x" command to enter expert commands. Use "s" to set the number of sectors per track to 32. Use "h" to set the number of heads (tracks) per cylinder to 256 (not 255). Use "c" to set the number of cylinders to the value you calculated earlier.
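    As a quick sanity check of that fake geometry (a sketch in the drive's reported 512-byte logical sectors):

      # 32 sectors/track * 256 heads = 8192 logical sectors per cylinder,
      # i.e. 4 MiB, an exact multiple of the 4 KiB physical sector, so
      # cylinder-aligned partitions are automatically 4 KiB-aligned.
      sectors_per_cyl = 32 * 256
      print(sectors_per_cyl)                         # 8192
      print(sectors_per_cyl * 512 // (1024 * 1024))  # 4 (MiB per cylinder)
      print(sectors_per_cyl % 8 == 0)                # True

      def cylinders(total_512b_sectors):
          return -(-total_512b_sectors // 8192)      # divide by 8192, rounding up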

  • Those of us who work with RAID arrays have cared about partition alignment for a long time. If a write spans two RAID-5 stripes, the RAID controller has to work twice as hard to correctly update the parity information. Aligning partitions and filesystem structures on stripe boundaries is essential to obtaining good performance on certain types of RAID arrays.

  • From TFA: "Western Digital believes the technology will prove useful in the future and it's true that after thirty years, the 512 byte sector standard was creaking with age."

    What does "creaking with age" really mean? I mean, the current format performs the same. The basic design is still the same, just with different magic numbers. I usually read "creaking with age" to mean that there's some kind of capacity or speed limit that we hit, but that's not the case. Is this more of a case of "why not" change it i

    • You could have 11% more capacity - but for some unknown reason WD did not exploit that.

      If the drive would not lie about the sector size then you would have a little speed gain as well - but for some unknown reason WD went for compatibility instead.

      So yes: there is some potential in a larger sector size - but it was not exploited.
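      For what it's worth, a figure in that range falls out of a rough overhead estimate. The per-sector numbers below are commonly quoted ballpark values for Advanced Format (about 15 bytes of sync/address mark plus roughly 50 bytes of ECC per 512 byte sector, versus roughly 100 bytes of ECC per 4096 byte sector); they are not from TFA, and inter-sector gaps are ignored:

        # Rough format-efficiency estimate with assumed per-sector overheads.
        def efficiency(data_bytes, overhead_bytes):
            return data_bytes / (data_bytes + overhead_bytes)

        legacy   = efficiency(4096, 8 * (15 + 50))   # eight 512-byte sectors
        advanced = efficiency(4096, 15 + 100)        # one 4096-byte sector
        print(round(legacy, 3), round(advanced, 3))  # ~0.887 vs ~0.973
        print(round(advanced / legacy - 1, 3))       # ~0.096, roughly 10% more usable space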

