
Seagate Plans 37.5TB HDD Within Matter of Years

Ralph_19 writes "Wired visited Seagate's R&D labs and learned we can expect 3.5-inch 300-terabit hard drives within a matter of years. Currently Seagate uses perpendicular recording, but in the next decade we can expect heat-assisted magnetic recording (HAMR), which will boost storage densities to as much as 50 terabits per square inch. The technology allows a smaller number of grains to be used for each bit of data, taking advantage of high-stability magnetic compounds such as iron-platinum." In the meantime, Hitachi is shipping a 1 TB HDD sometime this year. It is expected to retail for $399.

Comments Filter:
  • Terabits??? (Score:5, Insightful)

    by Anonymous Coward on Friday January 05, 2007 @10:34AM (#17472926)
    It's bad enough that hard drive manufacturers are dead set on confusing people with 1,000,000,000-byte GBs. Do they really need to start throwing around figures in Terabits? Seriously, enough is enough...
  • That's great. (Score:5, Insightful)

    by Aladrin ( 926209 ) on Friday January 05, 2007 @10:39AM (#17472998)
    That's a great amount of storage and a great price, but what about some REAL information: speed, heat, power consumption? If for the same price I can run four 250GB drives and save on heat and increase speed, this doesn't make sense to do. If I can run six and RAID them, and gain redundancy, it makes even less sense.

    The largest drive in the world isn't any use to me if it's slower than a 3.5" floppy or if I can use it to replace my space heater.
  • Fragmentation? (Score:0, Insightful)

    by Anonymous Coward on Friday January 05, 2007 @10:42AM (#17473036)
    Defragging that baby should be fun.
  • Easy answer (Score:1, Insightful)

    by SNR monkey ( 1021747 ) on Friday January 05, 2007 @11:04AM (#17473400)
    Hi-Def pr0n

    Adult entertainment always spurs innovation.
  • ANOTHER LIE (Score:3, Insightful)

    by Anonymous Coward on Friday January 05, 2007 @11:16AM (#17473562)
    This drive will not be 1TB. It's another scam. Rather than actually being a 1TB drive, as in 1,099,511,627,776 bytes, it's a 931.32~ GB drive, as in 1,000,000,000,000 bytes. Yep, about 93GB short of a terabyte. It's just falsely advertised as a 1TB drive. (A quick sketch of the conversion follows below.)

    Hard drive makers:
    Kilobyte = 1024 bytes
    Megabyte = 1024 kilobytes
    Gigabyte = 1024 megabytes
    Terabyte = 1024 gigabytes

    Label your fscking drives accurately.
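
    A minimal Python sketch of the arithmetic in question, using nothing but the standard decimal (10^12) and binary (2^40) definitions -- the constants and names below are illustrative, not vendor figures:

        # Decimal vs. binary capacity: what a "1TB" drive looks like to the OS.
        DECIMAL_TB = 10**12   # what the label means by "1TB"
        BINARY_TB  = 2**40    # 1,099,511,627,776 bytes
        BINARY_GB  = 2**30    # 1,073,741,824 bytes

        advertised = 1 * DECIMAL_TB
        print(f"Advertised capacity : {advertised:,} bytes")
        print(f"Reported by the OS  : {advertised / BINARY_GB:.2f} binary GB")
        print(f"Short of 2^40 bytes : {(BINARY_TB - advertised) / BINARY_GB:.2f} binary GB")

    Run as-is, it reports 931.32 binary GB and a shortfall of roughly 93GB, which is where the corrected figure above comes from.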

  • by Penguinisto ( 415985 ) on Friday January 05, 2007 @11:16AM (#17473572) Journal
    ...the tape will be in a cartridge holding a 65cm-diameter spool, will store approximately 600TB (1200TB w/ compression), and will require an autoloader that eats at least one rack for the entry-level 8-tape kit. /dev/nst0 will weigh in at 38kg, and cleaning will require a tape w/ 6000-grit sandpaper in place of media.

    All BS aside: you do bring up an excellent point. I'm a guy who has to do backup/recovery, and I've found that even a fully compressed LTO-3 will barely --just barely-- hold up to 1.2TB if you rig it right (by combining hardware/software compression and the love that Bacula gives it), though admittedly sparse file handling has most likely inflated the reported amount of stuff.

    At any rate, that boils down to --maybe-- two full HDDs if the two are 500GB SATAs.

    The good news is, after you pare down the crap you really don't need to back up, it usually isn't all that much for most companies. You can safely exclude most of the OS itself for starters... w/ kickstart on RHEL and a .ks file that replicates what you've got on a given server (partitions, packages, etc.), you can cut a LOT out.

    Even more good news - if you set up a monster RAID array of similar drives (full SAN kitting or just attached to a big ol' server, no biggie), you can use it instead of tapes for most of your day-to-day backup. Then latch your tape drive or autoloader onto it and only commit to tape the really vital stuff that requires a long retention period. Most backup software suites (even Bacula) support writing to file as well as tape, so this shouldn't be too big of a problem for a sysadmin if s/he knows what s/he's doing.

    Adaptation and all that.

    But then, most of the servers in my care consist of a pile of RAID5'ed SCSI drives that range from 36 to 140GB in size... and I doubt that most of them will get much bigger before it's time to replace the servers themselves. Just because you can get monster capacity on a single drive doesn't mean that you need to or even want to.

    Now if I already had a monster robotic multi-drive tape library running 24/7, and the boss wanted to up the HDD capacity on a given pile of servers because he pretty much has to? Yeah. That would require a lot more thought and planning, and at that stage of the game a disk backup solution similar to what's been outlined above would be big and ugly, but it would pretty much be what you're stuck with having to do.

    ...at least until they come out with the LTO-48 ;)

  • by Lethyos ( 408045 ) on Friday January 05, 2007 @11:21AM (#17473664) Journal

    The cost, longevity, performance, and capacity are completely inferior to making backups of disks onto other disks, and have been for quite some time. I have no idea why people still stick with tape at all these days other than for nostalgia. Does it feel good to have a cartridge using a remarkably old-fashioned approach to data storage, or are people just ill-informed?

  • by cdrudge ( 68377 ) on Friday January 05, 2007 @12:01PM (#17474350) Homepage
    It depends on what the "backup" is for. If it's for disaster recovery, then you are right. But if it's an online backup for the "Oh shit, I didn't mean to delete that" type of thing, then dual partitions can make sense.
  • I think the market is right around the corner: high-definition TV.

    The PVR market has been crippled in recent years because of market confusion and compatibility problems (will my TiVo work with my cable box, etc.), plus competition for consumers' money from HDTVs themselves.

    Once people get done buying their HDTV and paying off their credit cards, they're going to start looking at PVRs. I think that's a market that's probably going to explode in the next 5-10 years, even more than it has already. I also think you're going to see PVR functionality being built into the 'standard' cableco boxes, rather than as an upgrade. (Not that it will be free; they'll just charge everyone for it.)

    High-def TV takes up a lot of space. That means if you want to have significant PVR functionality, you need to have a lot of local storage. 37.5TB, or 300Tb (aka 300,000,000Mb, if we use the 'marketing department' definitions), would be about 4,340 hours (180 days) of 19.2Mb/s HDTV. (A rough check of that arithmetic is sketched at the end of this comment.) While that seems impossibly huge, I could imagine a future PVR using it as local cache: constantly downloading and storing programming based on your preferences. Add in a big HD movie library (say the contents of your local Blockbuster) and you can give the customer the impression of many simultaneous channels, even if they only have a relatively narrow pipe. (Narrow being 1 HD channel at a time, or a 20Mb pipe -- fat by today's standards, granted.)

    Content always expands to fill the available capacity. I remember when I first heard about the development of DVDs, back in the early 1990s. They seemed pretty ridiculously big then, too. Now I have stuff that I can't back up to DVDs, because it would be impractical to split it across as many discs as would be required. (Apple's Aperture doesn't even try to have a backup-to-DVD option; it's designed strictly to work with removable hard discs as backup 'Vaults.')

    There was a time when people thought 20MB of removable media was more than a single person would ever need, and we look back and laugh now. There's going to be a time in the future when 40TB looks the same way.
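
    A rough check of the recording-time arithmetic above -- a back-of-the-envelope Python sketch assuming decimal ("marketing") units throughout and the 19.2Mb/s rate quoted in the comment, not a real PVR capacity model:

        # How many hours of 19.2Mb/s HDTV fit on a 37.5TB (decimal) drive?
        capacity_bits = 37.5e12 * 8   # 37.5TB in bits (= 300Tb)
        bitrate_bps   = 19.2e6        # rough ATSC-class HD bitrate

        hours = capacity_bits / bitrate_bps / 3600
        print(f"{hours:,.0f} hours (~{int(hours / 24)} days) of HD video")

    That works out to about 4,340 hours, matching the figure quoted above.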
  • Re:37.5TB HDD (Score:3, Insightful)

    by FirienFirien ( 857374 ) on Friday January 05, 2007 @12:13PM (#17474578) Homepage
    For "ms" read "milliseconds" not "minutes".
  • by Vellmont ( 569020 ) on Friday January 05, 2007 @12:13PM (#17474588) Homepage

    When I asked why, he said that although he didn't want to buy another drive, he understood the importance of having a backup for his data.

    Well, obviously he's not going to be protected from a failure of the drive mechanism. But his strategy isn't totally useless. By copying to a separate partition he's protected himself from accidental erasure and from corruption of the data (whether through software that corrupts it or from a power failure).

    It's really a poor man's archival mechanism. I'd argue that data corruption or unwanted erasure happens more often than drive failure... though I do agree the guy shouldn't have cheaped out; he should have bought two drives, mirrored them with RAID-1, and then figured out some proper archival method like tape, or even a removable drive.
  • Re:HARM (Score:3, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday January 05, 2007 @12:49PM (#17475152) Homepage Journal
    We had a batch come in for some IBM e-servers, and a third of them died within 6 months. Absolutely disgraceful. The ones we have running Hitachi hard drives are all still going.

    Your anecdotal experience runs contrary to most of the anecdotes I've read, which say that Seagate has good reliability (I've always found it to be so, at least, in the post-ST-506 era) and that Hitachi drives are all big pieces of crap.

    Proof, of course, that anecdotal information is worth every penny spent on the study that produced it.

  • Re:Terabits??? (Score:4, Insightful)

    by TheRaven64 ( 641858 ) on Friday January 05, 2007 @01:48PM (#17476208) Journal
    1KB was used to describe 1024 bytes well before 1980. Standards that come along and re-define terms in common usage decades after their first use should be ignored, and they are by everyone except those marketing hard drives.
  • Re:ANOTHER LIE (Score:3, Insightful)

    by Arcaeris ( 311424 ) on Friday January 05, 2007 @01:50PM (#17476258)
    While the names may have been officially changed in 1999, and many of us have been working on computers for far longer than that, that's just old habits dying hard - the real problem is one of perception.

    Windows (and I assume Mac OS?) continues to display file size in terms of base 2, and HD manufacturers have bought into this base 10 thing (to make their hard drives sound larger).

    I don't care either way which one they use, as long as both groups agree on the same thing. This discrepancy between what is on the HD box/computer website and what is shown in the OS confuses a lot of people, and it's a pain to explain to the average Joe consumer why he isn't getting what he thought he'd be getting.
  • Re:300 Terabits. (Score:2, Insightful)

    by WrecklessSandwich ( 1000139 ) on Friday January 05, 2007 @02:21PM (#17476814)
    The real problem is that Windows is reporting drive and file sizes in GiB (while making the mistake of labelling it GB of course). Solution: Insert an option in Explorer (and any other file managers that do this, not sure off the top of my head what does and doesn't for other OSes) to toggle between GiB and "true" GB. Hard drive vendors should then use both units to avoid confusion. When I go on newegg or wherever, I should see drives labelled as 80 GB (74 GiB) Seagate SATA blah blah blah.
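
    A minimal sketch of the dual labelling suggested here, in Python; dual_label is a hypothetical helper, and the 80GB figure is just the example from the comment:

        # Label a vendor (decimal) capacity alongside the binary size the OS reports.
        def dual_label(decimal_gb: float) -> str:
            bytes_total = decimal_gb * 10**9
            gib = bytes_total / 2**30
            return f"{decimal_gb:g} GB ({int(gib)} GiB)"   # truncate, as in the example

        print(dual_label(80))    # -> 80 GB (74 GiB)
        print(dual_label(1000))  # -> 1000 GB (931 GiB)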
  • Re:ANOTHER LIE (Score:2, Insightful)

    by D4rk Fx ( 862399 ) on Friday January 05, 2007 @02:39PM (#17477114) Homepage
    How big are hard disk sectors? That's right. 512 bytes. An average cluster is 8 times that, or 4096 bytes.

    It's still a completely valid, and necessary, way of describing file sizes. If a hard disk manufacturer wants to make the sector size 500 bytes instead of 512, then by all means, the original SI prefixes (kilo, etc.) would be a better way to represent the total space. But since they still use 512 bytes for a sector, it should always be in 2^n.
  • by CityZen ( 464761 ) on Friday January 05, 2007 @03:03PM (#17477560) Homepage
    Do you want to HARM your data or do you want to HAMR it?

    Frankly, neither one sounds very appealing to me.
  • Re:Terabits??? (Score:3, Insightful)

    by Wildclaw ( 15718 ) on Friday January 05, 2007 @06:06PM (#17481250)
    Yes, giga is the standard prefix for billion (10^9).

    So get off your high horse and admit that it is the computer scientists' fault for trying to change the definitions of an already existing prefix system to fit their own domain.

  • Re:300 Terabits. (Score:3, Insightful)

    by toddestan ( 632714 ) on Friday January 05, 2007 @06:58PM (#17482128)
    As others have pointed out, the hard drive manufacturers are following the proper convention, and in fact (if you look into the history), HD manufacturers have been using the "factor of 1000" convention since the very beginning (since the first magnetic platters, really).

    I still have a 20MB hard drive that holds 20,9xx,xxx bytes. The switchover happened back in the '80s, and was a deliberate move by the hard drive manufacturers to deceive people. You can rattle on about standards all you want, but it all started because of a scummy marketing move.

    Besides, they are still one of the few playing that game. When was the last time you saw either a 537MB or a 512MiB memory module for sale?
  • Re:Terabits??? (Score:3, Insightful)

    by Kjella ( 173770 ) on Friday January 05, 2007 @08:35PM (#17483194) Homepage
    Man, you really need to check out some more standards. Plenty of transmission standards use 1kB = 1000B; video codecs and other streaming stuff do too. In short, it's a mess that people get caught up in all the time, not just when buying HDDs. Besides, by your very own argument, it never should have been 1kB = 1024B in the first place. The prefixes are universal across all sciences and in daily speech (e.g. kilometer, kilogram), and it was computer scientists who "came along and re-defined terms in common usage decades after their first use". They created their own little sub-domain where it works differently than everywhere else. It's like trying to use a C++ class and discovering that for this particular class, the + operator does multiplication instead. No matter how you twist and turn it, prefix overloading was a really crappy idea to begin with, and it can either be fixed or remain broken, but don't kid yourself into thinking that it's anything but a very obscure exception to a common standard.

"Here's something to think about: How come you never see a headline like `Psychic Wins Lottery.'" -- Comedian Jay Leno

Working...