Data Storage Hardware

Exploring Advanced Format Hard Drive Technology 165

MojoKid writes "Hard drive capacities are sometimes broken down by the number of platters and the size of each. The first 1TB drives, for example, used five 200GB platters; current-generation 1TB drives use two 500GB platters. These values, however, only refer to the accessible storage capacity, not the total size of the platter itself. Invisible to the end-user, additional capacity is used to store positional information and for ECC. The latest Advanced Format hard drive technology changes a hard drive's sector size from 512 bytes to 4096 bytes. This allows the ECC data to be stored more efficiently. Advanced Format drives emulate a 512 byte sector size, to keep backwards compatibility intact, by mapping eight logical 512 byte sectors to a single physical sector. Unfortunately, this creates a problem for Windows XP users. The good news is, Western Digital has already solved the problem and HotHardware offers some insight into the technology and how it performs."
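
The compatibility problem mentioned at the end comes down to simple alignment arithmetic. Below is a minimal sketch (an editorial illustration in Python, not code from the article or from Western Digital) of how eight 512-byte logical sectors map onto one 4096-byte physical sector, and why Windows XP's traditional partition start at LBA 63 triggers read-modify-write cycles:

    LOGICAL = 512                   # bytes per logical (emulated) sector
    PHYSICAL = 4096                 # bytes per physical sector
    RATIO = PHYSICAL // LOGICAL     # 8 logical sectors per physical sector

    def physical_sector(lba: int) -> int:
        """Physical sector that holds logical sector number `lba`."""
        return lba // RATIO

    def misaligned(lba_start: int, sector_count: int) -> bool:
        """True if a run of logical sectors does not line up with whole
        physical sectors, forcing the drive into a read-modify-write cycle."""
        return lba_start % RATIO != 0 or sector_count % RATIO != 0

    # Windows XP traditionally starts the first partition at LBA 63, so 4 KB
    # filesystem clusters straddle two physical sectors; Vista and later start
    # partitions at LBA 2048, which is a multiple of 8.
    print(physical_sector(63))      # 7 -> logical sector 63 sits inside physical sector 7
    print(misaligned(63, 8))        # True  -> misaligned, RMW penalty
    print(misaligned(2048, 8))      # False -> aligned
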
This discussion has been archived. No new comments can be posted.
  • by ArcherB ( 796902 ) on Friday February 26, 2010 @05:58PM (#31291216) Journal

    I thought the point was to have a small sector size. With large sectors, say 4096 bytes, a 1KB file will actually take up the full 4096 bytes. A 4097-byte file will take up 8192 bytes. A thousand 1KB files will end up taking 4,096,000 bytes. I understand that with larger HDDs this becomes less of an issue, but unless you are dealing with a small number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096-byte boundary.
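
    A quick sketch of the rounding the parent describes (editorial Python illustration; the only assumption is the 4096-byte allocation unit):

        ALLOC = 4096  # bytes per sector/cluster

        def allocated(size_bytes: int) -> int:
            """Space actually consumed when every file is rounded up to ALLOC."""
            if size_bytes == 0:
                return 0
            return ((size_bytes + ALLOC - 1) // ALLOC) * ALLOC

        print(allocated(1024))         # 4096    -> a 1 KB file occupies a full sector
        print(allocated(4097))         # 8192    -> one byte over costs another sector
        print(1000 * allocated(1024))  # 4096000 -> a thousand 1 KB files

    The waste per file is at most ALLOC - 1 bytes, which is why it matters less as average file size grows.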

  • by WrongSizeGlass ( 838941 ) on Friday February 26, 2010 @06:06PM (#31291304)
    When this issue came up a few weeks ago there was a problem with XP and with Linux. I see they tackled the XP issue pretty quickly, but what about Linux?

    This place [slashdot.org] had something about it.
  • Speed is irrelevant (Score:4, Interesting)

    by UBfusion ( 1303959 ) on Friday February 26, 2010 @06:26PM (#31291492)

    I can't grasp why all benchmarks (these in particular, and most in general) are so obsessed with speed. Regarding HDs, I'd like to see results relevant to:

    1. Number of read/write operations per task: does the new format result in fewer head movements, and therefore less wear on the hardware, increasing the drive's life expectancy and MTBF?

    2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?

    3. Are there differences in sustained read/write performance? E.g. is the new format more suitable for video editing than the old one?

    For me, the first issue is the most important of all, given that owning huge 2TB disks is in fact like playing Russian roulette: without a proper backup strategy, you risk all your data at once.

  • by Avtuunaaja ( 1249076 ) on Friday February 26, 2010 @06:37PM (#31291608)
    You can fix this at the filesystem level by using packed files. For the actual disk, tracking 512-byte sectors when most operating systems always read them in groups of 8 is just insane. (If you wish to access files by mapping them into memory, and you do, you must do so at the granularity of the virtual memory page size, which, on all architectures worth talking about, is 4K.)
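
    A small check of the page-granularity point (editorial Python sketch; mmap.ALLOCATIONGRANULARITY is the platform's required file-offset alignment for memory maps, and "example.bin" is just a hypothetical scratch file):

        import mmap

        # The VM page size (typically 4096 on x86 and most common architectures):
        print(mmap.PAGESIZE)
        # File-backed maps may only start at offsets that are multiples of this
        # value (4096 on Linux/macOS, 65536 on Windows):
        print(mmap.ALLOCATIONGRANULARITY)

        GRAN = mmap.ALLOCATIONGRANULARITY

        with open("example.bin", "wb") as f:        # hypothetical scratch file
            f.write(b"\0" * (GRAN + 4096))

        with open("example.bin", "rb") as f:
            # An offset of, say, 512 bytes would be rejected; only page-aligned
            # offsets are accepted by the OS.
            m = mmap.mmap(f.fileno(), 4096, access=mmap.ACCESS_READ, offset=GRAN)
            print(len(m))                           # 4096
            m.close()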
  • by owlstead ( 636356 ) on Friday February 26, 2010 @06:50PM (#31291744)

    You didn't dodge any bullet. Any file whose size goes slightly over a 4096-byte boundary will take more space. For large numbers of larger files (such as an MP3 collection) you will, on average, waste 2048 bytes per file in its last sector. Let's say you have an archive that also includes some small files (e.g. playlists, small pictures), so that the overhead is about 3 KB per file, and the average file size is about 3 MB. Since 3 KB / 3 MB is about 1/1000, you would have a whopping 1 per mille loss, as the quick calculation below shows. That's for MP3s; for movies the percentage will be much lower still. Of course, if your FS already uses a block size of 4096, then you are already paying this 1 per mille of overhead.

    Personally I would not try and sue MS or WD over this issue...
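
    The parent's overhead estimate, spelled out (editorial Python sketch; the 3 KB per-file overhead and 3 MB average file size are the parent's own example figures):

        SECTOR = 4096

        # Average waste per file: the last, partially filled sector wastes
        # anywhere from 0 to SECTOR-1 bytes, so about SECTOR/2 = 2048 on average.
        avg_waste_per_file = SECTOR / 2

        overhead_per_file = 3 * 1024        # ~3 KB, including a few small side files
        avg_file_size = 3 * 1024 * 1024     # ~3 MB average MP3

        print(overhead_per_file / avg_file_size)   # ~0.001, i.e. about 1 per mille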

  • Re:1 byte = 10 bits? (Score:2, Interesting)

    by KPexEA ( 1030982 ) on Friday February 26, 2010 @06:51PM (#31291750)
    Group code recording? http://en.wikipedia.org/wiki/Group_code_recording [wikipedia.org] Back on my old Commodore PET drive, this was how they encoded data, since too many consecutive zeros caused the head to lose its place.
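
    A toy illustration of the idea behind GCR (editorial Python sketch, not the actual Commodore code table): map each 4-bit nibble to a 5-bit code chosen so that no code contains more than two consecutive zero bits, which keeps the read head's clock recovery locked.

        def max_zero_run(bits: str) -> int:
            """Longest run of consecutive '0' characters in a bit string."""
            longest = run = 0
            for b in bits:
                run = run + 1 if b == "0" else 0
                longest = max(longest, run)
            return longest

        # Pick sixteen 5-bit codes with no run of more than two zeros inside the
        # code (a real GCR table also constrains runs across code boundaries).
        codes = [f"{v:05b}" for v in range(32) if max_zero_run(f"{v:05b}") <= 2]
        table = dict(enumerate(codes[:16]))   # nibble -> 5-bit group code

        def encode(data: bytes) -> str:
            out = []
            for byte in data:
                out.append(table[byte >> 4])     # high nibble
                out.append(table[byte & 0x0F])   # low nibble
            return "".join(out)

        # 0x00 no longer encodes as eight flat zero bits in a row.
        print(encode(b"\x00\xff"))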
  • by Carnildo ( 712617 ) on Friday February 26, 2010 @10:22PM (#31293850) Homepage Journal

    NTFS uses a limited form of block suballocation: if the file is small enough, the file data can share a block with the metadata.
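
    A rough illustration of why that matters for tiny files (editorial Python sketch; the 4 KB cluster is standard, but the 1 KB of spare room in each file's metadata record is a hypothetical figure, not an NTFS constant):

        CLUSTER = 4096
        RESIDENT_ROOM = 1024   # hypothetical spare bytes in a file's metadata record

        def on_disk(size_bytes: int, suballocation: bool) -> int:
            """Space charged to one file, with and without small-file suballocation."""
            if suballocation and size_bytes <= RESIDENT_ROOM:
                return 0   # data rides along inside the already-allocated metadata
            return ((size_bytes + CLUSTER - 1) // CLUSTER) * CLUSTER

        small_files = [200] * 10000   # ten thousand 200-byte files
        print(sum(on_disk(s, False) for s in small_files))  # 40,960,000 bytes
        print(sum(on_disk(s, True) for s in small_files))   # 0 extra bytes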

  • by symbolset ( 646467 ) on Saturday February 27, 2010 @01:32AM (#31294890) Journal

    What this really means is that magnetomechanical media is dead.

    When you're doing tricks like this to get a few extra bytes per block, it means you have run out of physical media density technologies. It's kind of like when they moved the Earth, Moon and stars to get dial-up modems from 33.6Kbps to 56Kbps - redefining bps along the way. It's the end. It's an admission that we're out of magnetic media density improvements. There might be one more after this, but it's over, and even now density isn't the important thing any more.

    I warned about this here several years ago: the consolidation of server workloads leads to an I/O choke point. Next month AMD releases its 12-core Magny-Cours processor and Intel replies with new processor technology of its own - both increasing the number of memory channels and the amount of RAM that can be configured in a system to over a terabyte. It's on like Donkey Kong in terms of processing and RAM, but all of this tech will suffocate for lack of I/O.

    The good news is that solid state technologies are here with sufficient capacity, doubling streaming bandwidth, IOPS and storage density at more than an acceptable rate. That they're greener is just a bonus. And then there's the fact that the price per gigabyte - while still not competitive with consumer magneto-mechanical media - is coming down at an even better rate and already beats enterprise media (SAS and FC). There will be an accommodation period much like there was when we moved from analog modems to DSL and beyond, and this is a ripe field for snake oil salesmen. There will be wrenching pain as we realize that an 8Gbps FC SAN can't even effectively serve a 5-pack of properly constructed third-generation SSDs, let alone an entire rack of them (see the back-of-the-envelope figures below). The world will spin about us as multiplexed 4x SAS V2 (24Gbps) connections briefly become the order of the day, unless Intel pulls off a coup and figures out a way to apply a hierarchical routing structure to LightPeak, which isn't even released yet and is arguably already obsolete. Electrical interconnects are right out - they don't have the bandwidth. We're going optical, and I mean right now. 3.5" SAS drives will become the new tape. Tape has already been the new punch card for several years.

    My guess: we'll find a new brand for "Enterprise storage" that uses RAID technologies to aggregate the bandwidth and improve the reliability of flash, in a way that doesn't rate-limit IOPS, provides reliable end-to-end performance, and scales to terabits per second, until it becomes a static storage medium that actually reaches the performance of RAM. An interim solution may include a huge RAM cache on SAS-attached flash drives, backed by supercapacitors so that committed writes survive a power failure and data integrity is preserved at the storage-unit level. FC isn't the interconnect solution and SAS isn't it either - it'll likely be derived from external PCIe, but run over optical media, and probably multiple strands of it.

    This is a big change - a revolutionary rather than an evolutionary change. A bigger change is coming. An extinction level event. When we've mastered the IOPs and the storage capacity of everything that everybody wants to store, then what? When every enterprise has consolidated their workloads down to three servers geographically separated for HA and DR, then what? What do we sell them then?

    Friends, the situation has gotten dynamic. Good luck to you all.
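
    A back-of-the-envelope check of the FC claim above (editorial Python sketch; the ~500 MB/s per-drive figure is an assumed ballpark for a 2010-era SATA/SAS SSD, not a number from the comment):

        # 8Gb FC carries roughly 800 MB/s of usable payload per direction
        # after 8b/10b encoding.
        fc_8g_payload_mb_s = 8_000 / 10

        ssd_mb_s = 500          # assumed sequential throughput of one SSD
        drives = 5

        aggregate = drives * ssd_mb_s
        print(aggregate)                        # 2500 MB/s from the 5-pack
        print(aggregate / fc_8g_payload_mb_s)   # ~3.1x what one 8Gb FC link carries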

  • Cluster Size (Score:3, Interesting)

    by krischik ( 781389 ) <krischik&users,sourceforge,net> on Saturday February 27, 2010 @05:47AM (#31295738) Homepage Journal

    I thought the point was to have a small sector size. With large sectors, say 4096 bytes, a 1KB file will actually take up the full 4096 bytes.

    Most file systems already use a cluster size of 4096 bytes (clustering eight 512-byte sectors). The only file system I know of that used a cluster size equal to the sector size was IBM's HPFS.

    So no, we don't actually use such small sizes anyway. Still, I am wary of this emulation stuff. First the 4096-byte physical sector is broken down into eight 512-byte "virtual" sectors, and then those eight virtual sectors are clustered back into one cluster. Would it not be better to use an intelligent file system that can handle 4096-byte sectors natively? Any file system that can be formatted onto a DVD-RAM should do.

  • by mgblst ( 80109 ) on Saturday February 27, 2010 @06:55AM (#31295914) Homepage

    GPFS is a ridiculously fast filesystem, probably the fastest in the world when set up correctly. We used to use it for our cluster of 2000 cores.

"It's a dog-eat-dog world out there, and I'm wearing Milkbone underware." -- Norm, from _Cheers_

Working...