Data Storage Technology

The Limits To Perpendicular Recording 222

peterkern writes "Samsung has a new hard drive and says it can now store 667 GB on one disk, which works out to about 739 Gb/sq. in. That is more than five times the density on offer when perpendicular recording was introduced back in 2006, and it is getting close to the generally expected soft limit of 1 Tb/sq. in. It's great that we can now store 2 TB on one hard drive and that 3-TB hard drives are already feasible. But how far can it go? It appears that the hard drive industry may soon start talking about heat-assisted magnetic recording again."
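
A quick sanity check on those figures (a sketch in Python; reading "one disk" as one two-sided platter is an assumption, not something the summary states):

    # Back-of-the-envelope check of the summary's numbers.
    capacity_bits = 667e9 * 8   # 667 GB per platter, decimal gigabytes
    density = 739e9             # claimed 739 Gb per square inch
    area = capacity_bits / density
    print(f"implied recording area: {area:.1f} sq. in. per platter")
    # ~7.2 sq. in.; quoted areal densities are peak figures, so this
    # understates the physical surface of a two-sided 3.5" platter.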
This discussion has been archived. No new comments can be posted.

  • TFA is unreadable. (Score:5, Informative)

    by FooAtWFU ( 699187 ) on Tuesday August 03, 2010 @05:35PM (#33130716) Homepage

    However, more density also provides a way to higher capacity 3.5" drives, which means that Samsung is now able to build 2.7 GB and 3.3 GB hard drives with four or five disks, respectively. Such drives are rather unlikely however, as we would expect the density to grow to 750 GB per disk, which could enable 4-disk 3 GB drives.

    Oh, wow, a 3-gigabyte drive! How futuristic!

    Seriously, what sort of monkey messed the article up this badly?

  • by jtownatpunk.net ( 245670 ) on Tuesday August 03, 2010 @06:05PM (#33131114)

    Why not do both in separate product lines? Kinda like what they're already doing right this very moment. If I want a lot of stuff in one place, I buy hard drives. If I want a small amount of stuff accessed very quickly, I buy SSDs. One division increasing capacity doesn't stop an entirely different division from increasing performance. And those SSDs are increasing in size pretty quickly. The Vertex 2 Pro is up to 240 gigs for under $700. Wasn't long ago that the tiniest, crappiest-performing SSD cost that much. Now that's the price of the biggest and fastest. In another year, the $/gig ratio will be even better, along with performance.

    So I think fast storage is coming along just fine and I'm happy to have the slow spinning stuff for my "access occasionally" data like audio, video, backups, etc.

  • Re:Get Perpendicular (Score:1, Informative)

    by Anonymous Coward on Tuesday August 03, 2010 @06:08PM (#33131160)
    Clicky link [youtube.com] for the lazy.
  • by Anonymous Coward on Tuesday August 03, 2010 @06:13PM (#33131236)

    kdawson posted TFA. This explains everything.

  • The Vertex 2 Pro is up to 240 gigs for under $700. Wasn't long ago that the tiniest, crappiest-performing SSD cost that much. Now that's the price of the biggest and fastest.
    I can't comment on fastest, but it's far from the biggest. You can get 512 GB and 1 TB SSDs now (though the 1 TB ones are desktop form factor), but the price is insane.

    In another year, the $/gig ratio will be even better, along with performance.
    I sure hope so.

  • by Grishnakh ( 216268 ) on Tuesday August 03, 2010 @06:29PM (#33131456)

    Theoretically, for many applications, zipping up the 1000 files into 1 compressed file and decompressing it on the fly really is faster, and has been for quite some time. Disk speeds haven't changed that much in the past 10-15 years, but CPUs and memory buses have become far, far faster. Since disk seek times and rotational latency are so long compared to the amount of work a modern (esp. multicore) CPU can do in that time, it frequently makes more sense to compress data and archive disparate files into single larger ones.
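
    A rough way to test that claim at home (a sketch; on a warm page cache both paths are fast, so the seek-time effect only shows up on a cold cache and a spinning disk):

        import os, tarfile, tempfile, time

        # Create 1000 small, compressible files, then compare reading them
        # one by one against streaming one gzip-compressed tar of the same data.
        root = tempfile.mkdtemp()
        payload = b"some moderately compressible text\n" * 128
        for i in range(1000):
            with open(os.path.join(root, f"f{i:04d}.txt"), "wb") as f:
                f.write(payload)

        archive = os.path.join(root, "all.tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            for i in range(1000):
                tar.add(os.path.join(root, f"f{i:04d}.txt"), arcname=f"f{i:04d}.txt")

        t0 = time.perf_counter()
        for i in range(1000):
            with open(os.path.join(root, f"f{i:04d}.txt"), "rb") as f:
                f.read()
        t_files = time.perf_counter() - t0

        t0 = time.perf_counter()
        with tarfile.open(archive, "r:gz") as tar:
            for member in tar:
                tar.extractfile(member).read()
        t_archive = time.perf_counter() - t0

        print(f"1000 separate files: {t_files:.3f}s")
        print(f"one gzip archive:    {t_archive:.3f}s")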

  • More than feasible (Score:4, Informative)

    by davmoo ( 63521 ) on Tuesday August 03, 2010 @06:32PM (#33131496)

    From the summary:

    It's great that we can now store 2 TB on one hard drive and that 3-TB hard drives are already feasible.

    3TB drives are already well past "feasible". Seagate has one for sale in the form of the STAC3000100 FreeAgent GoFlex Desk. It's an enclosure with a single 3TB SATA hard drive. The reason it's currently only available as an external drive is that most motherboards will not support a boot drive that large, hence there's not much reason to offer it as an internal yet.
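
    For reference, the "boot drive that large" problem is just 32-bit LBA arithmetic in the old MBR partition scheme (a quick check, assuming the standard 512-byte sectors):

        # MBR stores partition offsets and sizes as 32-bit sector counts.
        SECTOR_BYTES = 512
        limit = 2**32 * SECTOR_BYTES
        print(f"{limit} bytes = {limit / 1e12:.2f} TB")
        # ~2.2 TB: a 3 TB drive can't be fully addressed without GPT
        # (or 4K sectors), which is where the boot problems start.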

  • by Anonymous Coward on Tuesday August 03, 2010 @06:48PM (#33131656)

    Uh no, I don't think so, because Moore's law is about the number of transistors (or other features) per dollar. And even if it were about speed, that's still not true, because RAM speeds are not increasing exponentially.

    Flash storage capacity on the other hand is developing faster than Moore's law at the moment. That helps to increase capacity and lower power consumption. Capacity and power consumption are extremely important factors in many applications today.

    And wear is not really a big problem. To the extent that it is, it will be solved by increasing storage capacities: if you have 4 TB of flash, it will wear out 1000 times slower than if you have 4 GB of flash, all else being equal.

    So you're not going to see a system with a 1:1 ratio of RAM and flash.
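
    The wear scaling above, in numbers (a sketch; the endurance and workload figures are illustrative assumptions, not vendor specs):

        # Lifetime under a fixed daily write load scales linearly with
        # capacity, assuming ideal wear leveling and equal cell endurance.
        PE_CYCLES = 10_000        # assumed program/erase cycles per cell
        DAILY_WRITES = 20e9       # assumed workload: 20 GB written per day

        def lifetime_years(capacity_bytes):
            return capacity_bytes * PE_CYCLES / DAILY_WRITES / 365

        print(f"4 GB of flash: {lifetime_years(4e9):8,.0f} years")
        print(f"4 TB of flash: {lifetime_years(4e12):8,.0f} years")
        # Same workload, 1000x the capacity: 1000x the life.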

  • by bertok ( 226922 ) on Tuesday August 03, 2010 @07:51PM (#33132312)

    Theoretically, for many applications, zipping up the 1000 files into 1 compressed file and decompressing it on the fly really is faster, and has been for quite some time. Disk speeds haven't changed that much in the past 10-15 years, but CPUs and memory buses have become far, far faster. Since disk seek times and rotational latency are so long compared to the amount of work a modern (esp. multicore) CPU can do in that time, it frequently makes more sense to compress data and archive disparate files into single larger ones.

    You'd be surprised.

    I've recently had to optimise a compression step in a large system, and I was appalled at how slow most compression libraries and programs are, especially the ones in common use.

    Typical (zip-style) compression libraries rarely exceed 10 MB/s compression rates, and 20-30 MB/s in decompression. That's substantially slower than what most mechanical hard drives can do, let alone SSDs. In practice, reading or writing a 'zip' file, which includes all MS Office 2007 formats, XPS, etc., will be CPU limited.

    There's all sorts of silliness: many libraries perform IO operations with tiny buffers (4K or less), perform IO synchronously, and don't take advantage of 64-bit instructions, SSE, or multi-core CPUs. And even if those optimisations were used, most compression formats are very heavy on unaligned byte and bit twiddling, which is inefficient on modern CPUs.
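
    These rates are easy to reproduce with stock zlib (a minimal single-threaded measurement; level 6 is deflate's default, and the figures will vary with machine and data):

        import time, zlib

        # Single-core deflate throughput on moderately compressible data.
        data = b"The quick brown fox jumps over the lazy dog. " * 200_000  # ~9 MB

        t0 = time.perf_counter()
        compressed = zlib.compress(data, 6)
        t_c = time.perf_counter() - t0

        t0 = time.perf_counter()
        zlib.decompress(compressed)
        t_d = time.perf_counter() - t0

        mb = len(data) / 1e6
        print(f"compress:   {mb / t_c:.0f} MB/s")
        print(f"decompress: {mb / t_d:.0f} MB/s")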

  • by guruevi ( 827432 ) on Tuesday August 03, 2010 @11:59PM (#33134002)

    It's not that motherboards won't support it, it's that Windows (even 7) won't: you CANNOT boot Windows from a GPT disk without UEFI firmware, and you CANNOT boot Windows (7) on most of the EFI systems actually out there.

  • by rdebath ( 884132 ) on Wednesday August 04, 2010 @03:15AM (#33134950)

    Except that it can take quite a while to sync two drives of this size, so you will probably find that the second drive spends at least half its time sitting next to the primary drive.

    You actually need THREE drives so one of them is always a safe backup.

    The actual rules:

    1. Make a Backup.
    2. Make it safe.
      eg: offsite
    3. Keep it safe.
      eg: THE backup must stay offsite, it only comes back when it's not THE backup.
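
    Those rules as a rotation schedule (a sketch; the drive names and weekly cadence are made up): the invariant is that THE backup never leaves the offsite location until its replacement has arrived.

        # Three-drive rotation: one drive syncing onsite, one sitting
        # offsite as THE backup, one stale/in transit.
        onsite, offsite, transit = "A", "B", "C"

        for week in range(1, 5):
            print(f"week {week}: sync {onsite} and ship it out; "
                  f"{offsite} stays offsite until {onsite} arrives; "
                  f"{transit} comes home to be synced next")
            onsite, offsite, transit = transit, onsite, offsite
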
  • by ImprovOmega ( 744717 ) on Wednesday August 04, 2010 @12:00PM (#33138714)
    I don't see it happening. At a bare minimum, cache memory will always be faster just because it's baked onto the CPU and the signal has less distance to travel.

    Intuition tells me that no matter how fast non-volatile memory gets, it will always be outstripped by volatile memory, because volatile memory doesn't have to concern itself with permanent storage.
