Data Storage Hardware Science

Beyond Nobel, Hard Drives Get Smart 156

mattnyc99 writes "Giant magnetoresistance got its day in the sun when it won the Nobel Prize in physics last week—and when Hitachi rode that spotlight by announcing they'd have a 4-terabyte desktop hard drive by 2011. It's about time, says Glenn Derene over at Popular Mechanics, in what amounts to an ode to the rise and future of super hard drive capacity. From his great accompanying interview with data storage visionary and computer science legend Mark Kryder: 'To get to 10 Tbits per square inch will require a drastic change in recording technology ... Hitachi, Seagate, Western Digital and Samsung ... are currently working on this 10-terabits-per-square-inch goal, which would enable a 40-terabyte hard drive.'"
This discussion has been archived. No new comments can be posted.

  • I'd like to know which factors have allowed (forced?) the disk storage industry to continue to advance at such a steady pace. I am well aware of Moore's Law and Kryder's Law [wikipedia.org], but these are just observations, not explanations.

    Why haven't we seen similar improvements in fuel efficiency or internet bandwidth (in the US at least)?
    • by raeb ( 1041430 )
      I agree with you and also am curious about access times. I've got the space, I'd rather they focus a little more on how to access all the info faster. But either way this is awesome.
      • Hybrid drives (Score:5, Insightful)

        by nojayuk ( 567177 ) on Wednesday October 17, 2007 @05:58PM (#21017035)

        I believe the higher capacity drives will force a rethink on how data is stored and accessed on standalone machines like laptops and desktops. I've only got a couple of terabytes of data on this machine and doing a file search over the five (I think, I can't actually remember how many drives I've got fitted in this thing) disks is already pretty time-consuming. The solution will be to add intelligence to the disk interface so that data indexing is done pre-emptively and the results cached on the fly.

        The first generation of hybrid drives is already here, but it's only at the beginning of its development cycle. HDD recording densities will increase, as will flash RAM densities, and that will improve access times, but only for the most commonly accessed data.

        Imagine a 10Tb HDD built in the classic 3.5" wide form factor, with 256Gb of 1024-bit-wide 150MWord/sec flash memory or MRAM on the controller board acting as cache. The spinning disk becomes a backing store for the flash where data is kept "fresh" by a smart algorithm. The drive spins down intelligently when not needed, saving power and reducing heat dissipation.

        Higher recording densities are only one part of the future of disk drive technology.
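
        A minimal sketch of the kind of "smart" write-back caching described above, in Python. This is not any vendor's firmware; the block devices are abstracted into plain dict-like stores, and all the names here are illustrative assumptions:

        ```python
        from collections import OrderedDict

        class HybridCache:
            """Illustrative LRU write-back cache for a hybrid drive.
            'disk' stands in for the platters; flash absorbs reads and writes
            and only writes dirty blocks back when they are evicted."""

            def __init__(self, flash_capacity_blocks, disk):
                self.capacity = flash_capacity_blocks
                self.disk = disk                   # backing store (spinning disk)
                self.flash = OrderedDict()         # block_id -> (data, dirty)

            def read(self, block_id):
                if block_id in self.flash:         # hit: serve from flash
                    self.flash.move_to_end(block_id)
                    return self.flash[block_id][0]
                data = self.disk.get(block_id)     # miss: hit the platters
                self._insert(block_id, data, dirty=False)
                return data

            def write(self, block_id, data):
                self._insert(block_id, data, dirty=True)   # absorb write in flash

            def _insert(self, block_id, data, dirty):
                self.flash[block_id] = (data, dirty)
                self.flash.move_to_end(block_id)
                while len(self.flash) > self.capacity:     # evict LRU block
                    old_id, (old_data, old_dirty) = self.flash.popitem(last=False)
                    if old_dirty:                          # write back if modified
                        self.disk[old_id] = old_data
        ```

        With the hot blocks pinned in flash like this, the platters only need to spin up on a cache miss or when dirty data has to be written back.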

        • Re:Hybrid drives (Score:4, Interesting)

          by ArcherB ( 796902 ) * on Wednesday October 17, 2007 @06:51PM (#21017773) Journal
          Imagine a 10Tb HDD built in the classic 3.5" wide form factor, with 256Gb of 1024-bit-wide 150MWord/sec flash memory or MRAM on the controller board acting as cache. The spinning disk becomes a backing store for the flash where data is kept "fresh" by a smart algorithm. The drive spins down intelligently when not needed, saving power and reducing heat dissipation.

          I'd rather they be broken into separate drives. I'd like a flash based drive for my OS and maybe a few commonly used applications and a spinning HDD for all my data and backups.

          • Re: (Score:3, Informative)

            by Jake73 ( 306340 )
            You know, this is a pretty interesting point.

            Perhaps it would be better to have two drives available to the OS with rated latencies and bandwidths. Then, the OS can make software-based decisions based on the usage profile of the machine (server, workstation, media, etc).

            Alternatively, some rating could be given to each file installed by software installation programs. Things like help databases, samples, aux tools, uninstallers, etc could be thrown on lower-latency spin disks. The critical items like pro
            • Better than that, why don't we just have one storage area for programs, and a totally separate one for data? You could have your OS, all your applications, basically any and all executable files, stored in one place that was difficult to change, and then all your data, temp files, states, etc. could be stored somewhere else.

              Bet nobody's thought of that before.
              • by Jake73 ( 306340 )
                Not a novel idea, but is it actually implemented anywhere like this?
        • Partitions (Score:2, Interesting)

          by Walzmyn ( 913748 )
          What I want to know is if these new larger drives are going to come with new restrictions on partitioning the drives. I would love to be able to test drive a dozen or so different Linux distros, see what BSD is like, have a safe (somewhat) place to stick my /home while I upgrade - but I am limited by the number of partitions (got one taken up with winders). I know you can work it all around and do it with just 4 primaries, but it would really be nice to set up 15 or so partitions. Especially if the drive has
        • by the_olo ( 160789 )

          Imagine a 10Tb HDD built in the classic 3.5" wide form factor, with 256Gb of 1024-bit-wide 150MWord/sec flash memory or MRAM on the controller board acting as cache.

          Isn't PRAM memory [wikipedia.org] seen as a successor to flash memory in the near future? Flash is much less reliable and much slower WRT write operations...

      • by mollog ( 841386 ) on Wednesday October 17, 2007 @06:19PM (#21017301)
        When I see headlines about 1TB drives, I immediately think of losing 1TB of data.

        How about they put a RAID 1 array in a 3.5" form factor? Two separate platters, two head/arm assemblies, two SATA connectors.
        • Because you could just buy two 3.5" drives and run them in RAID1 yourself? It only costs money.
          • Because you could just buy two 3.5" drives and run them in RAID1 yourself? It only costs money.

            I think you're missing the point. You have to do that yourself, it takes money, and most importantly it takes up space and electricity.

            Put two hard drives in a 3.5" enclosure and have them run a seamless RAID 1. The user doesn't have to be involved in that.

            Then if one part of the array fails, sure, you have to replace the array, but you don't lose your data. That's the most important thing. Hard drive costs pale in comparison to the cost of replacing data.

            • Re: (Score:3, Insightful)

              by ckedge ( 192996 )
              Imagine running a RAID-1 array where if one half of the array goes bad, you have to replace BOTH. This immediately doubles your failure cost. No way in heck is anyone going to do that. Instead they're going to RAID-1 their failed "integrated RAID-1" drives until the second half fails itself. Anything else would be a gross waste of money and time.

              > You have to do that yourself,

              How is buying and plugging in a second drive "hard"? You already have to buy one, why not just tell the guy behind the counter
              • The main point I would make is that the RAID is transparent, that there's no actual setup, no hardware or software to fiddle with. But there's no reason that such a drive couldn't be as modular as a regular array. Just pop open the top, snap in a new module, and let the built-in mechanism go to work.

                Even if all those other factors are indeed moot -- in a hypothetical product, mind you -- the convenience alone would be worth it, especially for a home user.
            • Put two hard drives in a 3.5" enclosure and have them run a seamless RAID 1. The user doesn't have to be involved in that.

              Then if one part of the array fails, sure, you have to replace the array, but you don't lose your data. That's the most important thing. Hard drive costs pale in comparison to the cost of replacing data.

              You have to replace the array, plug both units in to transfer all the data over to the new one, wait ages for the transfer to finish, then unplug the old one and throw it away. That's a bit of a waste when it still has a working drive in it.

              With real RAID you can hot swap the new disk in without any downtime and carry on working while the array rebuilds in the background, and not waste any disks.

        • Well, they also mention online storage, but that is more of an economic and trust issue than a technical one. I'm not about to trust my data to a company that could change policies at a whim or go under and take my data along with them. There are also the issues of security, whether they're backing it up, availability, etc.
      • by cnettel ( 836611 )
        7200 RPM gives you a hard wall. Faster rotation is a pain, and arm movement is not (generally) the limiting factor. The only thing I can imagine is putting two heads there, right opposite each other. That creates a nice scheduling problem, but I guess it would be doable. You wouldn't only get RAID 0, because, with two heads free, you could actually cut the time before the right sector is under either head in half. One thing that comes to mind is whether a construction with two arms would be much more (i.e.
        • WREN did this in their WRENiii series ESDI drives. For the time those drives were fast. Rather than a radial axis for the drive heads they used a linear actuator where the head stacks could move independently of each other. Worked like a champ. I would still use them but the controller is VLB (and no MB these days supports that on P4's and up), and the drives were only 160 meg (and 5-1/4" full height beasts).

          -nB
    • by Beyond_GoodandEvil ( 769135 ) on Wednesday October 17, 2007 @05:39PM (#21016775) Homepage
      I'd like to know which factors have allowed (forced?) the disk storage industry to continue to advance at such a steady pace.
      Easy: pr0n. Somebody should calculate how much disk space is required given mpeg2 compression to ensure that someone would have the equivalent of 60+ years of pr0n; that is how big hard disks will get.
      • by daemonc ( 145175 )
        "somebody should calculate how much disk space is required given mpeg2 compression to ensure that someone would have the equivalent of 60+ years of pr0n, that is how big hard disks will get."

        60 years * 365 days per year * 10 minutes of wanking per day * 6 MB per minute of medium quality video = 1314000 MB = 1283 GB = 1.25 TB
      • by julesh ( 229690 )
        somebody should calculate how much disk space is required given mpeg2 compression to ensure that someone would have the equivalent of 60+ years of pr0n

        I performed this calculation when I saw the sizes being discussed and came to the conclusion that half a petabyte of storage ought to be enough for anybody. ;)
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Systems with feedback (technology helps you make more and better technology) often go exponential until they ram into some kind of saturation limit. (Put bacteria in a dish with food provided at a fixed rate, and the population will grow exponentially until it hits the resource limit and flattens out.) Some technologies have already passed their exponential stage and flattened out, whereas we've been fortunate with computer technology. Atomic scales pretty much set the limit there, and we're getting cl
    • Re: (Score:3, Insightful)

      by realmolo ( 574068 )
      Yeah! And why haven't they cured cancer yet? And why does it still cost $9 for a small soda at the movie theater? Lazy-ass researchers.

      The reason that fuel efficiency and internet bandwidth haven't "increased" as much as hard drive space is because they are COMPLETELY DIFFERENT PROBLEMS with COMPLETELY DIFFERENT solutions.
    • Fuel efficiency: We have made strides forward, but American consumers seem to prefer using the added efficiency to improve acceleration rather than gas mileage. (For example, I've heard that even a 2007 Civic has significantly better acceleration and handling than the powerful muscle cars from the 50s.)

      Internet bandwidth: Huh? Ten years ago, almost everyone outside of a university was on dialup, if they had internet access at all; now, over 90% of residences have access to some kind of broadband. Sounds lik
    • Here's my stab at it, though I'm not really an expert:
      • fuel efficiency: Not knowing too much about it, I would assume that part of the problem is actual energy requirements. In order to move an object from point A to point B, you're going to need a certain amount of energy no matter what. So, if there's X amount of energy in 1 gallon of gasoline, and it takes X amount of energy to move a car Y miles, then 1 gallon of gasoline will never move a car more than Y miles. So right now, we're getting (Y-Z) mil
      • You are also not taking into account that current internal combustion engines operate at about 25-50 percent efficiency when converting the stored energy in gasoline into kinetic energy in the pistons. This is where the limited gains in mpg have come from in the last 50 years. They have improved the conversion efficiency. Who's to say that they couldn't improve it more?
        • Well, my attempt to answer fails to do a very good job of taking many things into account, but I'm specifically talking about the engine inefficiency. In a nutshell, I was saying that you can make cars lighter and more aerodynamic and things like that, but that you're really talking about engine efficiency when you're talking about the value "Z". And although 50% could probably be improved on, it's probably not simple. It's not like engineers can just say, "Oh, right, I should just make my engine 100% ef
      • by cnettel ( 836611 )
        In order to move an object from point A to point B, both at equal height, you need no energy at all (or, well, you need to borrow some, but you can pay most of it back when you're done). Evacuated tunnels are maybe not a realistic option, but this shows that aerodynamics and surface contact are everything. And then we haven't even started discussing the actual engine.
        • Well, yeah, sort of. My answer isn't exactly perfect, but there's something to it. Energy=work. Work can't be done without some sort of energy being expended. Therefore, an object cannot be moved from point A to point B without expending energy.

          So, yeah, I guess theoretically the amount of energy needed to move an object depends on how massive it is and how fast you want to move it. Which means, in that sense, the distance doesn't matter. Right? But it takes energy to move it there at 50 MPH.

          But then,

    • Why haven't we seen similar improvements in fuel efficiency

      With regard to fuel efficiency, a practical everyday vehicle has other considerations besides mileage alone. It has long been known, for example, that some very complex concept cars, when maintained meticulously by teams of engineers and employing technologies which are extremely expensive, high-maintenance, impractical, or all of the above, have achieved very high fuel efficiencies on the order of 70+ miles per gallo
    • by iendedi ( 687301 )

      I'd like to know which factors have allowed (forced?) the disk storage industry to continue to advance at such a steady pace. I am well aware of Moore's Law and Kryder's Law [wikipedia.org], but these are just observations, not explanations.

      Why haven't we seen similar improvements in fuel efficiency or internet bandwidth (in the US at least)?

      It is profitable to replace old computer hardware every 18 months. It is not profitable to reduce the demand for fuel, on any timeline.

      The real conspiracy isn't that they keep finding ways to increase storage capacity or decrease die size for semiconductors. The real conspiracy is that they gently walk us through an upgrade curve when they have radically more advanced processes perfected in the labs. In this respect, laws such as Moore's law could be considered to be business guidelines for how quick

  • that video cards will get better, games will get much larger, and of course - we'll all be fighting over which format the game will be on and complain if it's Blu-ray and we have HD-DVD, and vice versa.
    • by Eivind ( 15695 )
      No. HD-DVD/Blu-ray will likely be the last generation of physical media.

      Oh, there'll be media of course, you need to store stuff -somewhere-, but bundling the data, which is what you pay for, with the storage medium is no longer interesting; it makes about as much sense as selling water and insisting it only be stored in YELLOW bottles, not BLUE ones, damnit.

      Music, Movies, Software, these are all just data. Where I want to -store- my data is up to me, I will choose based on price/performance/convenience, but my ch
      • I agree. The wave of the future doesn't seem to be higher capacity read-only media. The future seems to be in offering services to sell entertainment online. I think I'll skip hddvd/bluray and just wait for amazon's, netflix's or apple's solutions for home theater to mature.

        Not sure why you weren't modded up...

    • we'll all be fighting over which format the game will be on and complain if it's Blu-ray and we have HD-DVD, and vice versa.

      Complain ?
      Why complain ?
      By then, most users and all of /. will have no-name Korean multi-format burners that will handle both HD-DVD and Blu-ray.

      HD-DVD vs. Blu-ray is no real format war and has nothing to do with the old VHS vs. Betamax stuff. It's closer to the DVD "plus" vs. "minus" situation, because with discs, multi-format drives are easily doable.

      And are actually already done, several companies have ano

  • And what will this cost?
  • by Anonymous Coward
    Windows Future XP Gee Whiz Penultimate Enterprise Edition®© will have no problem filling those drives.
  • When are they going to stop pushing their latest technology innovations down the consumer's throat? Most households don't need a frigging TB of HDD space.

    They should direct their sales to the server and business market.

    • Re: (Score:2, Funny)

      by Daimanta ( 1140543 )
      In other words: "1TB ought to be enough for anybody"

      We know how that ended ;)
    • You're not thinking like a marketer! Most households also don't not need a frigging TB of HDD space. We're providing a value-add edge over competitive products while maintaining sales figures consistent with current products for a win-win solution to all parties! We're selling the new models at the price of the old model while the old model gets cheap/fades away and the customer gets extra space.

      Now, if you'll excuse me, I need to go erode some pillars with my head (gets that awful marketing after-thought ou
    • "Most households don't need a frigging TB of HDD space."

      Just like how 640K of RAM was "enough for everybody"... hmmm. Just like how Supreme Commander doesn't need 64-bit memory addressing... either way, the thing is we'll find ways to use it, and don't forget the consumer is not the only customer of hard drive technology. The medical and scientific communities need enormous amounts of storage for the volume of data being generated for research purposes.

      I'm already filling up over 2 terabytes of hard
    • by geekoid ( 135745 )
      Home movies and pictures.
  • 2-Way Wrist HD (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Wednesday October 17, 2007 @05:43PM (#21016829) Homepage Journal

    this 10-terabits-per-square-inch goal, which would enable a 40-terabyte hard drive.'

    It could also enable a 750-gigabyte 1" radius HD, if they're really clever. Which could serve the Bluetooth wristphone/player we've all been waiting for. So we can stop referring to that mobile multimedia terminal as a "phone", and again more accurately as a "watch".
    • by thewiz ( 24994 ) *
      It's sad to think we're going to need drives this size just to install Ubuntu in 2011.

      And you thought I was going to say Windows!
  • by Tribbin ( 565963 ) on Wednesday October 17, 2007 @05:51PM (#21016931) Homepage
    The biggest part of our hard disks is spent on movies, music and games.

    Most of these are on thousands of computers.

    Wouldn't a good sharing/streaming protocol/project be the solution for storage for the average person?
    • Re: (Score:3, Insightful)

      by Zironic ( 1112127 )
      For now, and probably for quite a while, disk space is cheaper than bandwidth.
    • Of course, the problem with that is the idea of "copyright". If we were most concerned with conservation of resources, reliability of backups, and easy distribution, we probably would have made a huge shared filesystem using methods similar to BitTorrent, and all movies/music would be stored online and made readily available to anyone with an internet connection. Storing this stuff on your local hard drive would only be necessary for caching it so you could listen offline.

      Still, big hard d

  • by Nonillion ( 266505 ) on Wednesday October 17, 2007 @05:55PM (#21016999)
    Yeah, in 10 years we'll be bitching because we won't have enough space for a decent Windows xx or Linux xx install. I must be getting old, because I remember asking how on earth could you fill a 40 Megabyte hard drive.
    • by dgatwood ( 11270 )

      It was always easy if you were into audio recording. Digital video has, of course, compounded this a lot, but I've wished I could increase the size of my storage for about as long as I can remember. Even for plain old text, 40 megabytes isn't a lot. Are you telling me you didn't have more than 50 3.5" floppies?

    • The 40 Meg drive had a huge 10-year life span, from about 1984 to 1994.
      So 13 to 23 years ago.

      Given that many Slashdotters had their first computer at 10 years old, it's entirely possible for you to be only 23, which for many people isn't old at all.

      If you ever want to put a date on a hard drive you can use my page here:
      http://www.mattscomputertrends.com/harddiskdata.html [mattscomputertrends.com]
      • I'm 24, and I remember running Windows 3.0 on my CompuAdd 386 back in the day. Kids these days don't appreciate how far Windows has come (and technology in general) ;)

        G3t 0ff my l4wn

    • You think you are old? I remember asking how'd you ever fill a 60 minute software tape cassette. ;)
    • The same way you fill a 40TB HDD: porn.
  • this filesystem has been mounted 32 times, checking filesystem.
    634 Hours Remaining.
    • There are plenty of filesystems that don't do that, which are included with any kernel released in the last few years.
      • Comment removed based on user account deletion
        • Luckily most modern file systems are "journaling" which helps prevent against corruption. Either the data has been committed, or it hasn't.

          My point exactly. Any kernel made in the last few years supports a journalling filesystem which recovers in seconds from a dirty shutdown. It probably even recovers faster on larger disks as the journal is usually of a fixed size, and bigger disks are usually faster than smaller ones.

          The grandparent is talking about what would happen if you used ext2 for a drive with a s

    • But it does bring up a serious issue.
      The capacity is growing hugely, but the data transfer speeds aren't really speeding up all that much.

      It's like a giant dam of water with a tiny backyard tap attached at the bottom.
      It makes copying and checking quite time consuming.
      • That could be because in any two-dimensional storage system, as storage density quadruples in two dimensions, it only doubles in one dimension. That means a track on a 160GB platter will only be twice as dense as on a 40GB platter, hence only twice as fast given the same rotational speed. As density increases, you'll start to see a curve in the ratio of speed to capacity, not at all unlike this [hdtune.com]. You can achieve further increases by using multiple platters per drive, but 4 seems to be the limit there, which is
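
        A back-of-the-envelope sketch of that square-root relationship, assuming bits shrink equally along and across the track and the RPM stays fixed (the baseline numbers are made up for illustration):

        ```python
        import math

        def sequential_rate(capacity_gb, baseline_capacity_gb, baseline_rate_mb_s):
            """If areal density scales with platter capacity and bits shrink equally
            in both directions, linear bit density (and thus sequential transfer rate
            at a fixed rotational speed) grows with the square root of capacity."""
            return baseline_rate_mb_s * math.sqrt(capacity_gb / baseline_capacity_gb)

        # Illustrative baseline: a 40 GB platter reading 30 MB/s sequentially
        print(sequential_rate(160, 40, 30))   # 4x capacity  -> ~60 MB/s (2x)
        print(sequential_rate(640, 40, 30))   # 16x capacity -> ~120 MB/s (4x)
        ```

        So capacity goes up 16x while sequential speed only goes up 4x, which is exactly the widening dam-to-tap gap the parent is describing.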
    • Guess I'll schedule that defrag for when I'm on vacation.
  • by HonkyLips ( 654494 ) on Wednesday October 17, 2007 @06:13PM (#21017223)
    I work in animation & video production and a single project can take up a terabyte... I'm all for storage increases but I have no idea how to back it all up... It's all very well for the Blu-Ray and HD-DVD club to go on about storing 30/50 gig on a disc but when your drive holds 4 terabytes (and you just know it will fill up quickly) the backup problems just get bigger too...
    • by Chirs ( 87576 )
      Backing up is essentially straightforward....use more hard drives.

      The real problem is that the transfer rate is not keeping up with the capacity increases, so the amount of time it takes to fully duplicate a drive keeps going up. Maybe it's time for multiple heads per platter, kind of like the 72X CDROM drive from a while back.
    • by geekoid ( 135745 )
      mirror.

      High performance tape library.

      Keep people's work local and on a central server.

      There are solutions. I get 550 an hour to consult, let me know if you need any work done.
  • Why Not Even Bigger? (Score:4, Interesting)

    by Ralph Spoilsport ( 673134 ) on Wednesday October 17, 2007 @06:24PM (#21017383) Journal
    If you can get a TB on a 1-inch drive, why not build a drive with the same density that's LOTS bigger? I imagine warping might be a problem, but I remember 10" Winchester drives!

    A 2.4cm drive has an area (just for this thought experiment) of (1.2 x 1.2)pi, or roughly 4.52 sq cm. Now, a 10 inch drive (24cm) has an area of 45.2 sq cm.

    So, that would make it a 45 TB drive. Data retrieval might be kind of slow, but: if you have massive RAM caching, it could be of great use. Imagine a home theatre with something like this.

    Imagine buying a drive like this that comes pre-installed with every song ever produced by WEA or EMI or Sony/Columbia. Say, everything from 1925 onward. How much would you pay for such a drive?

    Or, ALL the movies ever made by (name your favourite) movie studio between (date x) and (date y).

    I'd pay some serious green for that. All the classic movies. All the great songs of history.

    That's what we're facing, very very soon: the trivialisation of media technology.

    And eventually, that 25cm drive holding 45TB becomes a 2 inch drive holding 90TB.

    We should be able to predict the arrival of the $500 2 inch exabyte drive.

    The entire collection of world culture, audio in mp3, film in mp4, and images in jpg. Japanese, Chinese, American, Canadian, English, French, Italian, Russian, etc etc etc. on one or maybe two drives, or even one for audio, one for video, and one for images.

    What then? With all of audio and visual culture at your fingertips, what will we do with it? What will a society in the future (assuming it doesn't implode with the loss of petroleum, or vapourise itself fighting over it) DO with that much data commonly available to anyone?

    Will it be possible to write a new melody? Will it be possible to tell a new story? Will it be possible to make an image that matters? Some would argue that imaging is dead - eaten alive by advertising. some would argue that film is dead as all the stories are told, and now we're in a grid of "1 from column A, two from column B" kind of mix and match story telling. And some say that even music itself has run its course - washed up on the blandishments of pop, the inaccessibility of the academy, and the dumbed-down rumbling of a sold before it was born hiphop, and an inchoate melange of world music that mimics and fights the imperial culture.

    When it's ALL on your drive, who cares? will culture just gradually wither away?

    Maybe we will do better when the oil runs out, and the machines stop working. We'll have to sing to each other, and tell stories to each other by the fire, instead of sitting around having the fire tell stories to us.

    RS

    • Re: (Score:3, Informative)

      by sconeu ( 64226 )

      A 2.4cm drive has an area (just for this thought experiment) of (1.2 x 1.2)pi, or roughly 4.52 sq cm. Now, a 10 inch drive (24cm) has an area of 45.2 sq cm.


      Math error. 452 cm^2. Remember, you're squaring that 10.
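
      Spelled out, using the parent's own premise of 1 TB per small platter:

      \[
      A = \pi r^{2} = \pi \times (12\ \text{cm})^{2} \approx 452\ \text{cm}^{2} \approx 100 \times 4.52\ \text{cm}^{2},
      \]

      so at the same areal density the 10-inch platter would hold roughly 100 TB, not 45 TB.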
    • by Eivind ( 15695 )
      Yeah, true, pretty soon you'll be able to store all human culture on your wristwatch. Well, except for whatever is made in the last decade, because of course the bandwidth of media will keep going up with storage. (A Blu-ray disc takes more space than a DVD. Whatever we have in 20 years for movies will take more space than Blu-ray does.)
      • pretty soon you'll be able to store all human culture on your wristwatch.

        But most people couldn't be bothered to read anything longer than a book chapter. Sort of like putting a banquet in front of someone who just ate. You can only consume so much.

        • Yeah, but you can consume anything you like at will, even if you ignore the rest. It's about having access to anything, not access to everything.
    • Imagine buying a drive like this that comes pre-installed with every song ever produced by WEA or EMI or Sony/Columbia. Say, everything from 1925 onward. How much would you pay for such a drive?

      You can always sample higher and add more channels. Try 192kHz at 32 bits per sample in 6.1 channels. And, of course, lossless compression or WAV files for pristine sound.

      What good is every song ever made by the big 4 if it sounds like crap in 128 kbps stereo?

      Or, ALL the movies ever made by (name your favour

    • Imagine buying a drive like this that comes pre-installed with every song ever produced by WEA or EMI or Sony/Columbia. Say, everything from 1925 onward. How much would you pay for such a drive?

      My student loan company says I paid roughly $15,000 for it, and it was a do-it-yourself model.

  • The real problem that I see is that drive bandwidth has not been increasing at the same rate as drive capacity, which means that the time to read/write an entire disk keeps going up.

    Maybe it's time that manufacturers start using multiple heads per platter to cut down on seek times and increase bandwidth. I'm sure there are people that would pay for double the bandwidth...why hasn't anyone done this yet?
    • Transfer rate is limited by the bandwidth of the electronics (preamps and channel). After all, you have to pick up microvolts, amplify them to usable amplitudes with low noise, and then do A/D sampling and some fairly complex filtering on them. Today's drives transfer at around a Gb/s; that is not going to increase much. Nobody will want to pay for or cool GaAs read channels. And there's no reason to expect that seek times can be reduced much. Latency could be reduced further in exchange for higher power co
  • Yeah, but... (Score:4, Insightful)

    by unitron ( 5733 ) on Wednesday October 17, 2007 @06:56PM (#21017847) Homepage Journal
    That's great and all, but will we still be limited to 4 primary partitions?
    • by Fweeky ( 41046 )
      I'd hope we're going to see BIOS support for GPT [wikipedia.org] partitions before too long. One more doubling and we're right flat against the limit on MBR partition sizes, so.. what, a year to go?
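
      The MBR ceiling comes straight from its 32-bit sector count (assuming the usual 512-byte sectors):

      \[
      2^{32}\ \text{sectors} \times 512\ \text{bytes/sector} = 2\ \text{TiB} \approx 2.2\ \text{TB},
      \]

      so today's 1 TB drives are indeed one doubling away from it; GPT stores 64-bit sector counts, which pushes the addressable limit out to about 9.4 ZB at the same sector size.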
  • Danger! (Score:3, Funny)

    by RowanS ( 1049078 ) on Wednesday October 17, 2007 @08:42PM (#21019103)

    Giant magnetoresistance got its day in the sun when it won the Nobel Prize in physics last week--and when Hitachi rode that spotlight by announcing they'd have a 4-terabyte desktop hard drive by 2011.
    Oh my god! Four terabytes of sentences like that would contain over 6 x 10^10 mixed metaphors. Crammed into a single 3.5" drive bay the figurative density would be so great that the drive would collapse into a metaphorical black hole, sucking in all nearby figures of speech, similes and allusions. Somebody stop them!
  • It's 15.5 Gb/mm^2.
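
    That follows from the 10 Tbit per square inch figure in the summary, with 1 inch = 25.4 mm:

    \[
    \frac{10^{13}\ \text{bits}}{(25.4\ \text{mm})^{2}} = \frac{10^{13}\ \text{bits}}{645.16\ \text{mm}^{2}} \approx 1.55 \times 10^{10}\ \text{bits/mm}^{2} = 15.5\ \text{Gbit/mm}^{2}.
    \]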
