Hardware

The Past and Future of the Hard Drive 223

Snags writes "Brian Hayes of American Scientist has written a nice little historical review of hard drive technology, from the first hard drive (nice pic) made by IBM in 1956 to what may be available in 10-15 years. He muses on how to fill up a 120 TB hard drive with text, photos, audio, and video (60,000 hours of DVD video). Kind of ironic that this came in my mailbox today considering IBM's announcement."
  • I remember going to the Trenton Computer Fair 8 years ago and finding a great bargain: a 5.25" full-height drive that could hold 1 GB!!! Our BBS was gonna rock (loved them 0-day warez, d00d).. We paid 20 dollars for it, which at the time was like a dream.. so it was probably hot... but it worked :)

    • I remember going to the TCF back when it was at Mercer County Community College... now I have to drive all the way to Edison.

      There was just something nice about that whole spread out deal, with dealers in the gym and around the classrooms and a good hike to the outdoor area. I just don't feel at home there anymore.
    • I remember 17 years ago when a Seagate 20 MB hard drive went for US$500. Those were also the days when Maxtor introduced their 650 MB 5.25" full-height SCSI hard drive, which cost US$6,000 back then. (eek!)

      Nowadays, that same US$500 buys you 320 gigabytes of storage on two 160 GB ATA-100 3.5" 1/3 height hard drives. (thud)
  • The first hard drive on my very own computer was 40 megabytes. I managed to fit Wing Commander 2 on it twice, though why I did this escapes me...
  • More space.. (Score:3, Insightful)

    by PopeAlien ( 164869 ) on Thursday April 18, 2002 @02:13AM (#3363850) Homepage Journal
    I don't care how much space you give me.. it's never enough. I'm running around 300 gigs right now and it's mostly full. Why? Because I do video work and motion graphics design, and I'm not very organized. I've got tons of source files from archive.org and I really don't want to go through the trouble of burning CDs or backing up to tape - I love having it all accessible quickly via hard drive.

    So of course I'm not going to fill all that space with typed notes, but even if I don't do as the article suggests and "document every moment of our lives and create a second-by-second digital diary", I still want that space for massive amounts of easily accessible data.. There's no reason I should ever have to delete anything ever again..

    Uh. Except that I can't find anything I'm looking for anymore.. Can't this search function go faster?
    • Re:More space.. (Score:3, Interesting)

      by heliocentric ( 74613 )
      I really don't want to go through the trouble of burning CDs or backing up to tape... There's no reason I should ever have to delete anything ever again..

      Ok, I like the idea of having all my stuff handy as well, but you need to think beyond just you deleting something... There are other reasons to back up your data, including natural disasters. And no, burning stuff to a single CD and wiping it so you can play back your MP3s isn't a backup - you've just made the natural disaster issue portable.

      Uh. Except that I can't find anything I'm looking for anymore.. Can't this search function go faster?

      No clue what OS you use; do you have some form of *nix with access to the locate command? It doesn't actively poll the drive when you request the info (though the data can be stale).
      • Re:More space.. (Score:1, Offtopic)

        by dimator ( 71399 )
        And no, burning stuff to a single CD and wiping it so you can play back your MP3s isn't a backup - you've just made the natural disaster issue portable.

        It would truly be a disaster if my vast archive of Celine Dion MP3s got damaged...

    • Re:More space.. (Score:2, Insightful)

      Yup, join the party - 400 gigs here and I still need more. DVD authoring, HD Film work, graphics post, you name it.

      Thankfully though, it's on my own machine at home due to the falling price of technology, both hardware and software. I can work at home with the music cranked up, without people bugging me, and get stuff finished faster than in an office.

      Then just pop in the 80 or 160gb removable IDE drive to drop the files on and courier them back to the client or the post-house. Great stuff, this cheap storage.

      But I could still easily use 1tb or more of space to allow me to work on larger sequences, store more video, etc.

      One of the great uses of this amount of space that the author suggests is something like completely capturing an entire digital sat/cable stream.

      No more "using the VCR" to tape the show you're going to miss, but storing an entire weeks' worth of programming or more. Don't have to worry about missing something because you've got the entire broadcast from every channel archived and ready to watch.

      That would be cool. A "central multimedia server" [like central heating?] that would store every TV program received, every radio station programmed, every CD I buy [ya, really!], every DVD I own, etc. Great use for nearly unlimited storage.

      I'm sure that the old folks at the MPAA/RIAA are going into panic-mode even THINKING about that sort of idea, but it's the future folks, deal with it.
    • Re:More space.. (Score:2, Informative)

      by g1n3tix2k ( 219791 )
      I can't remember who posted about it, but it was on /. not long ago. A professor in the US has designed a hard drive which works just like RAM. It's blisteringly fast but persistent, i.e. it will not lose data after a reboot: Resistant Random Access Memory. Apparently the size limits for these babies are phenomenal - we're talking terabytes and terabytes. Whether it will actually be produced or is just vapourware remains to be seen. It would, however, revolutionize our current data storage!
    • by upper ( 373 ) on Thursday April 18, 2002 @03:14AM (#3364003)
      The article's hypothetical drive is 120TB -- 400 times as much space as you complain about filling up. I know video editing takes a lot of space, but would you keep 7 years of video? A movie a day for your whole life? Or 30,000 copies of the one you're working on? I doubt it.

      I used to think that my files would expand to fill all available space, but not anymore. Different tasks, different tools, and different personalities mean different thresholds, but I think everyone has a threshold above which they won't keep their disks full. For me, my disks stopped being full around 10 gigs. My wife's antique PC (running msdos 2.0 and used strictly as a text editor) had a 15 meg drive and never went above 10% full. Obviously your threshold is higher, but I'm sure it exists.

      Even after they hit their thresholds, most people's use will grow over time, but slowly. We'll also start writing things in ways that don't try to conserve disk space. Compression will be used almost exclusively for data transmission. Future filesystems will probably keep every version of every file ever saved (hopefully with an option to delete the occasional residue of an indiscretion or accidental copy of /dev/hda). But even these things won't increase our use by more than a factor of 10 or so. If we really do get 120TB drives, we won't talk about buying new ones very often.


      • If we really do get 120TB drives, we won't talk about buying new ones very often

        I don't know, how much space does full motion holography take up? Seriously, we'll fill the space with something. Not everyone will fill a 120TB drive of course, but there are those of us who will change our habits subconsciously to use more disk space. We just need a "killer" app.

        • Re:I'll fill it. (Score:3, Interesting)

          by colmore ( 56499 )
          A lot of people are mentioning 3D data but...

          There is no way of capturing a fully 3D image.

          Perhaps I'm just low on creativity, but I can't imagine any way of capturing video from real life with a system that isn't functionally equivalent to some finite number of video cameras.

          Now let's say that in the future television is filmed at 10x DVD resolution and from 10 different angles.

          That only adds 100x to current video storage needs. Nothing to sneeze at, but also not so spectacular that you'd need a petabyte drive either.

          Once we have the ability to record any sort of information at fidelity approaching the maximum for human perception, storage growth will rapidly outpace our need.

          People seem to be upset about this, though...
          Once storage reaches the maximum that any user would ever need, it has no place to go but cheaper.

          Also, file system organization will need to be massively overhauled to make a petabyte drive remotely useful.
          • I specifically said holography. This would involve something like taking a normal hologram but instead of the photographic plate you have a very high res CCD. Later, you shine a laser on a very high res LCD (or what ever, not my field). Add in motion and you're talking about a lot of data.

            Perhaps I'm just low on creativity, but I can't imagine any way of capturing video from real life with a system that isn't functionally equivalent to some finite number of video cameras.

            A hologram isn't made by taking lots of photos from different angles. I don't know why you'd suggest it for motion holography. You need to look up how holograms are made.

            You honestly can't see how a holodeck-like virtual recreation of a medieval battle (or whatever) would take up a petabyte? How about a hundred recreations?

      • I can easily imagine that by the time 120TB is available it will be 'fillable'.

        Video editing will be done at higher resolutions because it'll be possible, and with HDTV and digital projection it'll be needed.

        Maybe by that time (10-15 years? I didn't read the article) we'll have 3D displays or movies. That's a whole new dimension of information to store - it could easily account for a couple of orders of magnitude of increase.

        I can also imagine that some people will become "information hoarders" - never throwing anything away, automatically downloading and keeping anything that might be of use or interest later. That'll happen more if people use software/filesystems that allow for easier organisation/searching.

        And software may bloat up even more than it has at present. I can imagine software packages coming with built-in instructional videos to supplement or replace help files. I've seen this on a small scale with some things already.

        Will everyone fill 120TB? No. But some will.

      • No, he'd probably keep uncompressed video.

        As good as DVD is, it's NOT the quality of first-generation (aka straight out of the camera) video tape, even in NTSC format, never mind HDTV. You start talking about data (with lossless compression) running about 12 gig/hour, so you're only talking 10,000 hours per 120TB drive. Now I'd only need an array of 400 of them at work to keep our archive (yes, we have 4 million hours of footage).
      • Try editing uncompressed HDTV. I have a friend who is trying to put together an off-the-shelf system for NASA built for real-time uncompressed HDTV editing. Considering 30 seconds of uncompressed 640x480 sucks up a gig and a half, 1920x1080 is gonna use somewhere on the order of 20GB per minute, or ~1.2 TB per hour (pardon my lousy math). What? I can only store 10 hours of footage at a time for my 1 hour documentary? What? I can't get more storage on the drive because people are complaining it's too big as it is?

        Video editing CAN and WILL take up 120TB in the near future. I myself only do an extremely small amount of video editing on 3D animations for clients, and I am amazed at how fast it sucks up hard drive space. Combine that with huge 3D files and images that are a couple hundred megs a pop, and my 60GB drive disappears fast. Personally, I am amazed you can contain your video editing within 10GB. I need more than that just for project data storage (and I don't have any MP3s). You must do a lot of burning or other media storage.
      • I know video editing takes a lot of space, but would you keep 7 years of video?

        Except that the DVD bitrate is far below video editing needs.

        Assuming I'd work with 720x480 at ~30fps - typical NTSC DVD resolution and framerate - which takes about 4-10Mbps on DVD, it would take more like 230Mbps uncompressed, or 80Mbps losslessly compressed. That's about 35GB/h (losslessly compressed).

        Then consider the possibility of working with 1080p (1920x1080 progressive scan) HDTV material.. In end-user format that'd be about 15-24Mbps MPEG (8GB/h), in editing it'd be about 1900Mbps (down to 600Mbps compressed), or 270GB/h.

        Then go for a higher frame rate (60fps), multiple layers from which the image is formed (8 layers for this example), and a material use rate of e.g. 20% (using one shot out of every five), and you end up with 21TB/h of source material, perhaps 4TB/h of working material, and that 270GB/h end result, for a total of about 25TB/h.

        Of course people working with theatricals and film grade material need more space, but probably not on their home computers.. At least I don't know people working with theatrical material at home.

        Of course the above example isn't completely valid, but that's how things could be if amateur-priced digital motion cameras are available when the disk space is. Guess how many tapes it takes to make one 20-minute amateur movie? Now assume that it was all digital data, and be very much afraid. 120TB drives are so very small.

        OK, how much disk space do I currently use? Some 400GB or so. I have about 600GB, so I consider the disks "full" (>60% utilization). At the moment I'd guess about 3TB would be enough until I had some free time, and then I'd want about 250-300TB more space. I could currently imagine use for about 500-700TB, but not more. Ask again in ten years and I'll give you a figure at least ten times that, but definitely not 1000 times higher.. Then again, ten years back I had some 4GB and could've probably used about 10-20 at most, and couldn't have imagined use for terabytes.
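
        A quick back-of-the-envelope check on those figures, sketched in Python. The resolutions and frame rates come from this thread; the bit depths and the ~3:1 lossless ratio are assumptions for illustration, not the poster's exact parameters:

            # Rough uncompressed-video storage calculator (illustrative assumptions).
            def uncompressed_mbps(width, height, fps, bits_per_pixel=24):
                """Raw video bitrate in megabits per second."""
                return width * height * fps * bits_per_pixel / 1e6

            def gb_per_hour(mbps):
                """Storage consumed per hour, in GB, at a given bitrate."""
                return mbps * 3600 / 8 / 1000

            ntsc = uncompressed_mbps(720, 480, 30)           # ~249 Mbps raw
            print(gb_per_hour(ntsc / 3))                     # ~37 GB/h at ~3:1 lossless, near the 35 GB/h above
            print(uncompressed_mbps(1920, 1080, 30, 30))     # ~1866 Mbps with 10-bit channels, near the 1900 Mbps above
            print(gb_per_hour(600))                          # 600 Mbps compressed HD -> 270 GB/h, as quoted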
  • by wumarkus420 ( 548138 ) <wumarkus@NOSpAm.hotmail.com> on Thursday April 18, 2002 @02:14AM (#3363854) Homepage
    120TB seems like an enormous capacity, but multi-terabyte storage will be a necessity in the future with REAL streaming media. What I'm worried about is the network connection - that has been my current bottleneck (even at 1.5Mbps). When we eventually combine HDTV PVRs, MP3 players, DVD archives, and pictures into a giant media database, the numbers don't look as staggering. But transferring that much data from one machine to another may prove to be the hardest part of all. A 120Tbps network connection - now THAT'S impressive!
    • In ten years I'll probably have 10Gbps switched ethernet or the like at home, so transferring 120TB would take a day or two.. Already, transferring all the data from one computer to another (should there be disk space to do that) would take about a day for me.

      Of course, having a 10Gbps internet connection would be another thing. I believe I'll be stuck with this 10Mbps for a while, but 100Mbps might be possible in the future, and the house is cabled for 1000Base-TX, so it's not completely out of the question.
  • How will you pay for it all? At current prices, buying 120 million books or 40 million songs or 30,000 movies would put a strain on most family budgets

    Don't we have Napster, Gnutella, KaZaA and the like? We don't pay for that stuff, we *cough* borrow *cough* it!
  • Karma Whore (Score:1, Informative)

    In case of the inevitable /. effect, or if you don't want to read it in HTML for some reason, it's also available in other formats:
  • Anybody know how much that first hard drive's capacity was?
  • I remember trying to cram as much as possible on my little 20mb hard drive -- I had DoubleSpace and then Stacker running to compress that thing to the utter maximum -- and I backed it up on floppies, too!

    What my real question is: with as much abstraction going on between the IDE controller and the drive platters as there is now, why haven't we seen more in the way of hard drive firmware beyond better bad-sector remapping and the like? What about hardware compression? A little flash memory or a dedicated on-disk area and a compress/expand chip, and we could probably fit quite a bit more on existing physical head/platter technology with not much speed loss - in fact, we might see some speed GAIN if we only have to pull 100 bytes off the disk to return 1000 bytes of data, etc.. Of course, it wouldn't give you any more space for your DivX ;-) collections, but for those of us who actually store mostly normal files, a lot of source code, etc., it'd be great!

    I guess, unfortunately, it requires a bit (ok, a lot) of work getting an OS to play nice with such a gimmick, but would installing a "driver" for your hard drive's compression electronics really be all that much different than for your video card or the drive controller itself? Then of course there is the question of a filesystem that can handle the indeterminate capacity of this kind of system - i.e. you couldn't necessarily delete a 100M file and fit another 100M file in the same physical storage space...

    Ideas. Something to muse over.
    • If it were in the firmware of the IDE controller, no new drivers would be needed.
      NTFS already supports disks whose available size can change, as can many others (as shown by the fact that NTFS can compress files and have the available size change, and programs like VMWare can use virtual disks of undetermined size).

      mlk
      • Not to nitpick, but VMWare makes you specify the size when you create the virtual disk; while the file that actually gets saved to the host machine can be variable, as far as the virtual OS is concerned, it's fixed. This is necessary for the partition tables AFAIK, so some new scheme for partitioning will have to become available before we can have truly variable disks at anything below the filesystem level (i.e., AFAIK hardware compression is not possible with today's OSes).
    • by mors ( 1419 )
      Adding electronics to drives to gain more space is not a particularly good idea in this day and age. You are not going to get 1000 bytes out of a 100-byte read very often, probably only if you are reading text. As you note, video and graphics are usually stored in formats that are already compressed, so storing them on a drive with hardware compression won't win you much. And as the article says, text doesn't take up any noticeable amount of space anyway. So even if you mostly care about your source code etc., you probably don't have anywhere near a GB, which is 1% of a modern drive.

      Furthermore, compression and decompression takes time, so it would lower the performance of the drive somewhat (no idea how much, but some).

      Wasn't Stacker the company that made a piece of hardware to sit between your drive and the IDE controller to do the compression?
      • Adding electronics to drives to gain more space is not a particularly good idea in this day and age. You are not going to get 1000 bytes out of a 100-byte read very often, probably only if you are reading text. As you note, video and graphics are usually stored in formats that are already compressed, so storing them on a drive with hardware compression won't win you much. And as the article says, text doesn't take up any noticeable amount of space anyway. So even if you mostly care about your source code etc., you probably don't have anywhere near a GB, which is 1% of a modern drive.

        Actually, personally I have about 10GB of source code on my computer. Yeah, I'm not most people, but I can point you to lots of servers with gigs upon gigs of spreadsheets, Word documents, PowerPoints, and loads of other highly compressible data. For a random sampling of data (i.e. not your MP3 collection) you can get 2:1 compression. Even stuff like your browser cache should compress to about 2:1 - maybe higher. The point is that it could buy a lot of people a lot of extra disk space in certain applications.

        Furthermore, compression and decompression takes time, so it would lower the performance of the drive somewhat (no idea how much, but some).

        Well, not if you use hardware that can compress and decompress data at the maximum transfer rate of the controller. That's the point of doing it in hardware. A dedicated chip that can compress or decompress at 166MB/s is not far-fetched at all. If you had such a chip, the only thing you'd experience is a performance GAIN, since it would take the drive less time to read or write the smaller amount of compressed data from the physical disk - you wouldn't have to wait as long for reads or writes to complete. It might affect latency a small bit depending on your algorithm, but compression latency would easily be smaller than a drive's seek time, and both actions could happen at the same time.
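
        A tiny sketch of the ~2:1 ratio claim, using Python's zlib on synthetic English-like text. The 500-word vocabulary and sample size here are made up purely for illustration; real documents and spreadsheets typically land somewhere around 2-3:1 with deflate:

            import random
            import string
            import zlib

            # Build a ~140 KB sample of text-like data from a small random vocabulary.
            random.seed(0)
            vocab = ["".join(random.choices(string.ascii_lowercase, k=random.randint(3, 9)))
                     for _ in range(500)]
            data = " ".join(random.choices(vocab, k=20000)).encode()

            packed = zlib.compress(data, 6)
            print(f"{len(data)} -> {len(packed)} bytes, ratio {len(data) / len(packed):.1f}:1")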

        Wasn't Stacker those guys who made a piece of hardware to place between your drive and the IDE controller, to do the compression?

        I don't know. Someone else who replied to me had heard of a hardware compression product. AFAIK, Stacker was only software.

        ~GoRK
    • If Drivespace had been updated to support FAT32 I'd still be using it -- mostly for installed applications though, not my data.

      There was a single hardware-accelerated disk compression product at one stage, but it never took off as a concept, which is a shame.

      I love disk compression as a concept - it's so twisted. I think I managed to get about 800MB out of a 630MB hard drive using Drivespace 3. Makes uninstalling it tricky ;)

      "Zip Folders" are really the closest thing that survived, but I never bought the Win98 Plus pack. These days I mostly use RAR for compression and I do it manually. Perhaps I should investigate something like Rarissimo [karefil.com].

  • by Nathdot ( 465087 ) on Thursday April 18, 2002 @02:21AM (#3363877)
    He muses on how to fill up a 120 TB hard drive

    Let's see - if the history of the internet serves as an apt model - 120TB drives probably won't meet consumer demands for long.

    First hard drives will start to fill up with fully-immersive holo-pr0n, followed quickly, due to adaptive marketing trends, by fully-immersive unsolicited holo-spam.

    There... that solves that ol' capacity problem quite nicely then.

    :)
      • First hard drives will start to fill up with fully-immersive holo-pr0n, followed quickly, due to adaptive marketing trends, by fully-immersive unsolicited holo-spam.
      Of course, this will require the Macromedia Flesh plug-in...
    • Don't forget about the amount of data you store because you -might- want access to some piece of it.

      I have loads of data on my machine that I will probably never look at, but it's nice to know it's there.
    • Not forgetting that .hprn will be a Microsoft file format and therefore have an anti-TARDIS effect on your hard drive. (BTW, for those who don't know, the TARDIS was bigger on the inside than it looked from the outside.)
  • I remember in the days of 8-bit home computing, when the Amstrad CPC, Spectrum and Commodore 64 were the kings, there would always be some little company that was *about* to release a hard disk for said machine. Usually no more than 80MB, and usually costing around 500 quid. It never happened, though; no doubt someone's got one out by now?
  • That will be simple, once we get three dimensional display technology figured out. Full length three dimensional movies will eat up 500GB of memory easily...

    Damn, just how much memory would it really require to do a simple Holodeck simulation?

  • 120 TB enough? (Score:3, Insightful)

    by quantaman ( 517394 ) on Thursday April 18, 2002 @02:30AM (#3363902)
    Sure it can hold 60,000 hours of DVD. That's why, when the time comes, we'll come up with a more demanding format. I don't believe we will have enough space until it can store a lifetime's worth of information in a format indistinguishable from reality to human senses. Until that point we will always be able to make it a little more lifelike, a little longer; there will always be something else to eat up the space, clock cycles and bandwidth. My 1 GB filled up just as fast as my 256 MB, and I'm sure that my 60 GB will fill up as well. We still have a long way to go before we have "enough" space.
    • Why only one life time? Certainly other people's lifetimes (or parts of them) as well as fictional ones will be interesting to you as well? And why human senses? There will be applications that need more precision than that... I agree with you, I just think the bar is a little higher still.
  • Back in the day... (Score:2, Interesting)

    by SynKKnyS ( 534257 )
    My dad would come home from work and tell me that the computer (probably something 8-bit) at work would fail several times a day. Mainly the hard drive (which was as huge as an ATX full-tower case and only stored 10 MB) would stop responding, so they would have to reboot the computer. How did they do that? They gave a swift kick to the hard drive and the machine started right back up.
    • We had surplus Wang (don't laugh) 30 MB drives at my high school. If they were off balance, you could get them to walk when doing a disk catalog.

      The cartridges were about 2' in diameter....
  • by zorba1 ( 149815 )
    A well-known corollary of Parkinson's Law says that data, like everything else, always expands to fill the volume allotted to it.

    I don't think this extends to distributed computing; I hardly think the collective drivespace of the WWW has been filled to the brim. Even a few percent free space per drive per server equates to huge amounts of unfilled sectors.
  • by Gis_Sat_Hack ( 101484 ) on Thursday April 18, 2002 @02:34AM (#3363913) Homepage
    There are already a number of Terra-class satellites downlinking data at about 4GB/hr, circling from pole to pole in orbits lasting under 2 hrs.

    There are multitudes of airborne surveys churning out digital snapshots at 400MB a frame.

    Mosaiced together at 1m resolution with R, G, B and mean height above sea level, how much storage will a single global snapshot of the earth take?

    Then consider for historical and environmental reasons, most urban/semi rural areas deserve a mosaiced snap at least once a year.

    120 TB is just the start . . .
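
    For scale, a quick estimate in Python. The 4 bytes per square metre (R, G, B plus height) follows the post above; the surface-area figures are standard geography:

        # One global snapshot at 1 m resolution, 4 bytes per square metre.
        EARTH_SURFACE_M2 = 510e12   # ~510 million km^2 of total surface
        LAND_FRACTION = 0.29
        whole_globe_tb = EARTH_SURFACE_M2 * 4 / 1e12
        print(whole_globe_tb)                   # ~2,040 TB, i.e. about 2 PB
        print(whole_globe_tb * LAND_FRACTION)   # land only: ~590 TB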
    • As it says in the article, there will easily be cases which need vast amounts of storage (like your satellite imagery above). However, those situations alone won't make these technologies profitable; what they're after is 120TB drives coming in new PCs that "Mom & Pop" users would buy. As is said, how much will they really use?
      • As I see it, a single one-metre-resolution image of the earth's surface *is* a Mom & Pop application that in 5-6 years' time, if it's not standard with every computer, should at least be in every school and library in the world.

        It makes an ideal backdrop for xchat, can show children where the areas damaged by radiation are, and can have little blinking lights for all the toxic waste ghost ships floating about looking for somewhere to dock.

        The 'high-storage' demand apps are those that have the above data in a time series, or with deep 3D layers for seismic exploration or atmospheric modelling, etc.

        I'm just talking about a good school atlas of the near future. :)
    • And I'm working on the next generation of hyperspectral imagers that generate 80 GB/hr of raw, uncompressed data. Take a look at the data system requirements for AVIRIS [nasa.gov], and you'll see what I mean. AVIRIS scan lines are 224 spectral channels by 614 pixels. Our Advanced Hyperspectral Imager collects 2048 spectral channels over 3072 pixels across a 120° FOV and a spectral range of 360 to 1000 nm. This is adequate to cover the globe with better than 10km resolution daily using a sun-synchronous polar orbit.

      AVIRIS generates about 140MB per frame, which takes about 15 minutes to collect. In comparison, our instrument generates 6MB per frame, but collects 7 frames per second. We throw about half of this away. At this rate, it would take our instrument only about a month to fill a 120TB volume with raw, level 0 data. Fortunately, specific missions will not require the full FOV or spectral range of our laboratory model, so the cost of the downlinks and data systems can be mitigated somewhat.
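
      Those rates check out - a quick verification (not part of the original post) from the stated 6 MB/frame at 7 frames/s:

          # Hyperspectral imager data budget, from the figures above.
          raw_mb_s = 6 * 7                   # MB/s of raw level-0 data
          kept_mb_s = raw_mb_s / 2           # "we throw about half of this away"
          print(kept_mb_s * 3.6)             # ~75.6 GB/h kept -> the "80 GB/hr" figure
          print(120e6 / (raw_mb_s * 86400))  # ~33 days of raw data to fill 120 TB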

  • Information itself becomes free (or do I mean worthless?), but metadata - the means of organizing information - is priceless.
    Put that in your pipe and smoke it, Lars Ulrich! Because graveyhead is a database/SQL nerd, he becomes the star: he can organize all the worthless Metallica tracks!
  • 15 years ago, I was speaking with a friend of my father's about these new exotic 'hard drives' you could get. He was a big-time computer guru, and I was a little kid.

    I said to him that I thought a 10 megabyte HDD would be perfect, and that someday we would be able to buy one for less than $1,000. He scoffed.

    I still remember this very clearly. "Why in heaven's name would you ever need that much space in a hard drive?"

    I worked it out, and every conceivable program I'd use, including saved files, all told would only need 2 megs. It was just impossible to think up enough applications at the time that a home PC really needed.

    I'd love to run into him again and ask him if he remembers that conversation. In 10 years, 100 terabyte drives will seem 'quaint'.
    • "I'd love to run into him again and ask him if he remembers that conversation."

      If I'm not mistaken, I think you'll be able to find your "big time computer guru" [microsoft.com] somewhere in Redmond, Washington.

      Wasn't he also the one who said that 640KB of memory should be enough for everybody?

      • Wasn't he also the one who said that 640Kb memory should be enough for everybody?

        Gods, will you give it a rest? That was how many years ago?

        • I agree, shouldn't there be some sort of penalty for people who keep dragging this up! Like a beheading perhaps?
          • How about a six-month course on the designs of the IBM PC and MS-DOS? There was no 640k limit in DOS. Many people ran it with more than that. The limit was that the memory had to be contiguous to be seen by DOS, and the CGA/EGA/VGA adapters all used the memory starting at 640k. That was IBM's decision; they could have put the CGA at 896k, and DOS would have supported 896k without any changes at all. IBM's decision wasn't stupid either. They had to put RAM starting at 0 so that the interrupt table was programmable, and splitting the available memory space into 60% RAM and 40% hardware memory mapping was perfectly reasonable, especially considering that they allocated 40 times the initial shipping memory.
    • I still remember this very clearly. "Why in heaven's name would you ever need that much space in a hard drive?"
      Which leads me to my theory: when you think the processor is faster than anyone will ever need, when you think your RAM is more than enough to do anything you will ever want to do, or when you think the current state of storage capacity is larger than anyone will ever need... get out of the field. You no longer get what it is all about. You have lost the vision.
    • "..home PC really needed."
      You know, you can get an OS, editor, spreadsheet, browser and email client in less than 10 megs, and still have plenty of room to save the files you need. That is all a home PC really needs.

      If hard drive capacity keeps increasing at the rate it has been, I believe we will end up with drives that will never be completely filled.

    • Let's see here - hard drive capacity is growing at what, 60% per year? The largest consumer hard drive right now is 160G. So, that gives us:
      2002: 160G
      2003: 256G
      2004: 410G
      2005: 655G
      2006: 1.0T
      2007: 1.6T
      2008: 2.6T
      2009: 4.3T
      2010: 6.8T
      2011: 11.0T
      2012: 17.5T
      2013: 28.0T

      So, if history is any indication, it will be somewhat more than 10 years (just under 15) before 100TB drives become available at the consumer level.
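
      The projection is easy to reproduce - a small Python sketch of the same compound-growth assumption (160 GB in 2002, growing 60% per year):

          # When does a consumer drive pass 100 TB at 60% annual growth?
          capacity_gb, year = 160.0, 2002
          while capacity_gb < 100_000:
              year += 1
              capacity_gb *= 1.6
          print(year, capacity_gb / 1000)  # 2016, ~115 TB: "just under 15 years"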

      Despite being able to surmount what were once thought to be intractable magnetic effects limiting hard drive density, I don't believe hard drives will make it past a terabyte or so. We are quickly approaching the point where the energy involved in changing a 1 to a 0 (and yes, I realize that with common data encoding schemes, it's not that simple) is less than the thermal energy present in the system. Just as fickle electrons are being replaced by photons in data transport, I think they will replace electrons in data storage as well. Photons leave each other alone so you can pack them more densely; they're low-power; they're resistant to external forces such as electric and magnetic fields and various forms of radiation.

      I'm confident there will be 100TB storage devices commercially available within 15-20 years (to give myself a little wiggle room), but if it's based on spinning disks of any kind, and one of you can find me, I'll eat my shirt.
  • human imagination (Score:2, Insightful)

    by sahala ( 105682 )
    First off, great article. Well written and even understandable for someone non-technical (we should all take note).

    I do question some of his statements, however, particularly about human creativity:

    Now it seems we face a curious Malthusian catastrophe of the information economy: The products of human creativity grow only arithmetically, whereas the capacity to store and distribute them increases geometrically. The human imagination can't keep up.

    Or maybe it's only my imagination that can't keep up.

    I'd say that the bolded part above (that it's only his imagination that can't keep up) is very likely. He states that he individually can't think of what to do with 120 TB, but collectively I'm confident we'll find a use for it. I've read through a dozen posts already where we've come up with some suggestions for use. Not to be critical or anything, but the surface has barely been scratched. It's not going to be all about data warehouses, streamable content, and how many DVD movies we can rip. Tell the whole world that 120 TB is available for storage, and a variety of uses will come up.

    I'm pretty convinced that the actual consumer use of 120 TB will be for something that, if suggested now, we'd all laugh at and ask why the hell we'd want to pursue such an insane idea. For instance, the article mentioned mounting tiny cameras on eyeglasses to document one's whole life. The article also mentions home digital media hubs. Both are probable uses, but I actually think they are rather conservative ideas in the grand scheme of things. A successful guess now at an interesting and probable use of that much space in the future is more likely to come out of the mouth of some random guy intermittently taking puffs from a giant bong than from any mature, prominent engineer.

    Then again, some founders of successful companies were allegedly (I don't have any factual evidence) pretty fond of the herb [mac.com].

  • autopr0n (Score:5, Funny)

    by Renraku ( 518261 ) on Thursday April 18, 2002 @03:07AM (#3363987) Homepage
    "Thus the 120-terabyte disk will hold some 60,000 hours worth of movies; if you want to watch them all day and all night without a break for popcorn, they will last somewhat less than seven years." Most people can't even last seven minutes with high-res pr0n playing, much less seven YEARS.
  • Not trying to troll or flame...but...

    That 120TB will be filled with heavily bloated DRM, compulsory spyware, and 1200x1024x32bit+ advertisement videos with embedded scripts that have almost virus-like "features" (mark all pics and vids on the HD with logos, insert advertisements in documents, make the system not boot without reading an advertisement, randomly play commercials, etc.). Then there is the 72GB+ Windows install if you go that way...

    Just an observation of the way things are going... :/
  • Suppose I could reach into the future and hand you a 120-terabyte drive right now. What would you put on it? You might start by copying over everything on your present disk--all the software and documents you've been accumulating over the years--your digital universe. Okay. Now what will you do with the other 119.9 terabytes?


    A cynic's retort might be that installing the 2012 edition of Microsoft Windows will take care of the rest, but I don't believe it's true. "Software bloat" has reached impressive proportions, but it still lags far behind the recent growth rate in disk capacity.


    I think they would have no problem occupying 20% of a 120 TB HDD, MS products have done that to my drives for years...

  • i.e., hard-disk drive, aka HDD: the terms only came into general use after 'floppy disks' became a familiar storage medium for 'microcomputers', as they were once called. Pre-floppy, we just called them 'disk drives'.

    Yes, it's a slow day at the office, how could you tell?

  • I bought a 20 Gig drive a couple of years back and it's still doing fine, a couple of Gig left, and it's got pretty much everything I've downloaded or created since, ooo, the early 90s. I'm just upset that I deleted my collection of DOS games I bought in Hong Kong and Malaysia some time around 1990 in a fit of morality. My .MOD/.S3M and .JPG collection (all on floppy disks) went around the same time. Mind you, all of that put together would still only add a couple of hundred meg to my collection.

    A friend burns CDs like there's some sort of deadline. I've only just got myself a CD burner [imation.com]. Maybe if I downloaded MP3s through work I might run out of space a bit faster, but the 20gig I mentioned at the top of this comment is an IDE drive in a removable bay with a USB connection. If I need more storage I just buy another cheap drive and another tray.

    At work, and I am fairly new here so I might get this wrong, we currently have about 6Gig of network storage for students which is more or less full, but our solution is simply to have Zip drives in every PC. Students have a quota and really their data is their own problem. Staff have 30Gig to play with and 5 is still free. And 9 of that is a backup of the Ghost images of the student labs that I made when I arrived.

    Speaking of backups, it's been my experience that hard drive space is useless without the same amount of backup space available. DDS3 tapes only go up to something like 12/24 Gig if they haven't changed in a year or so, meaning that cheap and easy backup really ends at 20Gig. Personally, my 20Gig hard drive is more or less the backup, with data burnt to CD at the same time it's moved from my portable's 4Gig to the 20Gig drive. But burning CDs isn't the best solution for stuff that changes frequently. (Although it seems to be the best option for Mac-oriented graphics designers who have to live in an otherwise PC-only corporate environment.)

    My interest in retro gaming also probably helps keep my storage requirements low. Recently I burnt an 8Mbyte Dreamcast image that had an Atari 2600 emulator and over 100 (Public Domain) ROMs.

    And I mean, who needs 120TB of random access storage? Seriously. I mean, sure it's nice to be able to skip instantly to a particular chapter of a movie, but how often do you do it, really? And the "Random" button on any given MP3 player is fun, but if you had to listen to the tracks in a particular order it wouldn't be the end of the world (imagine a DDS3 MP3 player - that's 12 days of music, solid, and it would probably be able to be smaller than the original Sony Walkman).

    Wow, that was a long post.

    • And I mean, who needs 120TB of random access storage? Right now, probably no one, unless they are archiving the net or something. Fairly soon it will be scientists, sometime after that artists (around which time this sort of capacity will probably start getting onto the desktop), after that pr0n collectors and gamers, and around the same time developers. At least that's what I would think; if there is one thing that time has shown, it's that statements like "who could ever need that much <computer aspect>?" are usually proven wrong after some time. Besides, if I can think of applications requiring this now, certainly the need will come along sooner or later.

      Also, since when is being nice not reason enough to have something? I mean, we are charging ahead with this whole computer innovation thing, not just struggling to get by.

  • It seems to me that, in order to get capacity increases beyond 120 TB, they might just have to increase the physical size of the drive. After all, the form factor they have been working with for the past while has been these 3.5", half-height things.

    I have two boat anchors at home, 10 GB each, which take up essentially two complete 5.25" bays. (Is this what a full-height drive is?) What would happen if they applied the technology used in the smaller form factor to something this size? After all, they should be able to fit something like 12-20 platters inside one of these things, and those platters will be wider. Will cramming all those platters inside a larger box yield some savings in overhead, too?
  • First off, virtual reality files, especially with photorealistic motion images, kineorealistic tactile sensation, and sound, (plus possibly smell).

    Then come your logs, because you're going to log everything. With the encryption we will have, we can do it without fear. The only data we have to worry about is the data we have to compress for movement.

    Program files for the media. While right now we mostly use our data for media, I predict an explosion of data formats, which will require bulky reading/viewing/listening/VR software to operate.

    Distributed computing data.
    Your computer will be part of a p2p distributed computer project of some sort. Of course, since the project is either curing cancer, or earning you money, or evolving an ai, you don't mind.

    Intelligent agents
    IA (AI with a purpose :) - you need to search your data, and if you want it to be at all intuitive when you have over a million files, you will need a program to organize it for you. Plus, AOL-Time-Warner-Sony will want to have your (hopefully) anonymous user data.

    Device controllers for everything
    Everything will be controlled, even your toaster (toaster/oven/microwave/pressure cooker etc., of course). After all, who wouldn't want their auto-drive to know what's in their appointment book :).

    Now, let's hope somebody will repost this with links and get modded up.

    Gryftir
  • ...you drop/kill one of these 120TB drives?
    Surely we want more effort put into making them bulletproof. (Plus, it's all on just one spindle.)
  • The article mentions how we'd need hundreds of thousands of CD-ROMs to make a dent in the 120TB drive, but the author didn't consider the future of optical storage. One company I've been keeping my eye on is Constellation 3D [c-3d.net]. They are making a "Fluorescent Multi-layer Disc" (FMD) which holds information in many layers (12-30), with an initial storage capacity of about 20-100 Gigabytes. I really hope this takes off, as I remember a day when a CD-ROM was a massive amount of information (exceeding most hard drives at the time), but nowadays we use them as we did floppy disks back then :)

    It'd be nice to have an optical disk capacity comparable to hard drives again so that it is practical to do backups.

  • The article is fascinating but a little overcharismatic.
    David A. Thompson and John S. Best of IBM write: An engineer from the original RAMAC project of 1956 would have no problem understanding a description of a modern disk drive.
    No problem? I'd love to see them explain to a cryogenically frozen engineer from 1956: Reed-Solomon error correction codes, realtime FPGA/ASIC design (Hamming basics), RLL coding standards, GMR head construction using nanometer technology, realtime control design of the servo-actuated heads' feedback mechanism (to keep on track without resonant head movements), and electron beam lithography [eetimes.com] to debug the on-drive IDE electronics.

    I'll admit, though, that once they cover all that, the differences between SCSI/EIDE and ATA will be a walk in the park.

    Plus, can IBM be sued for fraud or illegal trading because their 120GXP drives are way off the 200,000-hour MTBF specification? It must be written down in stone somewhere.

  • A 120TB HDD is perfect for distributed file systems, or systems similar to Freenet.

    Think about it: you'd upload a file to Freenet and it would never disappear; every Freenet node that had ever received the file could keep it cached for a long time.

    With such capacity, Freenet or distributed file systems would become the ultimate backup tool; you'd never have to lose data again. All movies, music and books could be stored online and would be readily available from a nearby Freenet node.

    But bigger HDDs will be needed so that even if most of the world is destroyed, the most important data online is preserved on single nodes.

    A more optimistic use for having all of the world's data on a single computer is sending the data along on spaceships to faraway galaxies. Perhaps for the humans on the ship to enjoy themselves during a trip that would take years (or generations), or for exchanging our culture with alien civilisations in other galaxies.

  • Bandwidth? Backups? (Score:3, Informative)

    by T-Punkt ( 90023 ) on Thursday April 18, 2002 @05:53AM (#3364307)
    Whenever I see an announcement of the newest hard disk with record-breaking capacity I think, "How do you back up that beast?"

    Whenever hard disk manufacturers manage to double the number of bits that can be written on an inch of a track, they get a four-fold increase in capacity (because track density doubles at the same time). But unless you increase the rotational speed of the platter, the time to read the whole content of the hard disk will double as well, since the read/write speed is proportional to the linear density alone.

    And the rotational speed only increases very slowly - we recently saw the small jump from 5400 to 7200 RPM for the "standard" consumer (IDE) hard disk, the first in several years. (I personally will stick with 5400 RPM for the cheapo IDE drives for the next few years. Reliability, you know - see IBM.)

    Given that, the lower limit on the time to make a full backup of a hard disk will increase roughly with the square root of the growth in its capacity over time.
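
    A worked example of that square-root scaling (the starting capacity and transfer rate here are assumptions, not figures from this post):

        # Each density generation: capacity x4, sequential read speed only x2
        # (at fixed RPM), so the full-read/backup time doubles per generation.
        cap_gb, read_mb_s = 80, 40
        for _ in range(4):
            hours = cap_gb * 1000 / read_mb_s / 3600
            print(f"{cap_gb} GB @ {read_mb_s} MB/s -> {hours:.1f} h full read")
            cap_gb, read_mb_s = cap_gb * 4, read_mb_s * 2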

    The other problem is that backup devices and media affordable for the home user can't keep pace with the hard disks, so in my eyes the traditional full backup gets more and more impractical.

    One of the most cost-effective backup devices for a hard disk today is another hard disk, but it still takes hours to mirror the content of one disk to another. RAID, or something that keeps two disks in sync automagically in the background, is no solution - it saves you from data loss due to hard disk failures (good if you use IBM GXPs and the like), but it won't help you if you or your software have destroyed some important files you created over the past few weeks or months.

    Well, I don't know what other people do, but I stopped doing full backups of the whole disk: thank God, a large share of the files on my hard disks is not backup-worthy, since in case of loss I won't miss it (think swap space or the contents of a web cache), can easily recreate it (like MP3s made from my own CDs, or object files - if you track netbsd-current you keep them around to save some time on future incremental builds and don't delete them after "make install"), or can get it back from another of my machines, from CD-ROM or from the net.

    So I only have to back up a fraction of my disks, and the good news is that in absolute numbers the amount of data to back up doesn't grow nearly as fast as hard disk capacities. In my case, with compression, it easily fits on an older hard disk for complete backups, and for weekly incremental backups I can still use an old 1GB DAT (DDS) tape drive I got for free. It's not the best solution, since recovering from disaster needs some time and a lot of manual work, but I can sleep better than those who don't do backups at all...
  • I was hired in 1982 to manage a new network of CP/M machines just purchased. The file server consisted of a 40 meg fixed drive and 10 meg removable platters. Cost for the unit was $30,000 and $100 for each 10 meg removable cartridge. Insane.

    Later, in 1986, I bought an external 20 meg (HDSC20) for my Mac Plus for $1,200 and couldn't believe how cheap they had gotten. That same year I spent $700 for two one meg SIMMs for that computer.

    I also remember around 1984 seeing my first b/w 2-bit porn pic on my Mac and being amazed at the quality!

    The old days sucked, but it was also kinda nice to be around during that time. I can really appreciate how good I got it now. I just bought a 160 gig external firewire drive for $400 for example. Sweet...

    No more backup worries either, I just buy firewire disks and tack on as needed, and rsync nightly...


  • I can just imagine the Canadian high schooler trying to flip enough burgers for that shiny new 5 petabyte MP3 player in 2010.

    $100 down, $200,000 to go... just another 200 years and I'll have it!
  • by ka9dgx ( 72702 ) on Thursday April 18, 2002 @07:00AM (#3364413) Homepage Journal
    If you do real HDTV, for an editing suite, etc... you only get 44 hours and 39 minutes of full quality 1080p60 for your 1.2E14 bytes.
    1920 x 1080 pixels
    16 bits/pixel x RGB = 6 bytes /pixel
    60 frames/second
    Yields 746,496,000 bytes/second. (Or about 8 parallel gigabit ethernet cards)

    Do this at full bore, and you get 160,751 seconds of video, less than 2 days worth!
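
    The arithmetic holds up - a quick check using only the figures already given:

        # 1080p60 at 16 bits per RGB channel, against a 1.2E14-byte drive.
        rate = 1920 * 1080 * 6 * 60           # 746,496,000 bytes/second
        seconds = 1.2e14 / rate               # ~160,751 s
        print(rate, seconds, seconds / 3600)  # ~44.65 h = 44 h 39 min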

    Sure, I know you could compress the video, but I've seen 1080p up close and personal. I noticed the artifacts in the video on the monitor of the broadcast-quality HDTV demo, and the sales guy finally confessed that they just had to compress it to make it feasible to record it on tape.

    If I noticed it right off the bat, someone will pay to have this quality level.

    So... when do we get the petabyte storage?

    --Mike--

  • I remember some letter to the editor of some computer magazine.

    Now I own a 20MB and some 4.5, 6, 10, 15, 19 & 30 GB drives, and you know what? I still can't back up my stuff worth crap.

    I have one CD-RW drive for my Linux box (which IS backing up my domain :-) and one CD-RW drive on my TiBook, which I would use for backup except that the drivers from Retrospect don't seem to quite be working.

    It's never been a question of IF the drives fail (I've hung lots of opened-up drives on my cubicle wall), but WHEN. And nobody has EVER addressed that issue properly.
  • by ErikZ ( 55491 ) on Thursday April 18, 2002 @07:47AM (#3364541)

    I'm surprised no one else here has ripped a DVD to their HD before. 2 gig per hour seems like a minimum to me.

    And even if it isn't, by the time the 120 TB disks come out, do you think we're still going to be using the worst DVD format? 720p is supposed to be coming out in a few years. I can't find the page with the details now, but 720p will require a new type of DVD disc, one that can store up to 24 GB.

    That's right, your DVD collection is going to become outdated and worthless as companies republish into the new "Hi-def" DVD format.

    Let us say the average DVD uses 70% of the space: 16.8 GB. That's about 7,100 discs' worth on 120 TB. Compression might be less effective on these discs, since the entire point of having hi-definition DVDs is the extra detail.

    And I've come up with another use for a 120 TB drive. The biggest, most kick-ass TIVO in the world. Imagine having any TV show that was shown in your lifetime available to view.
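
    Checking that estimate - the ~2 hours of main feature per disc is an assumption:

        # Hi-def discs on a 120 TB drive, per the figures above.
        disc_gb = 24 * 0.7           # 70% of a 24 GB disc actually used
        discs = 120e3 / disc_gb      # ~7,140 discs
        print(discs, discs * 2)      # at ~2 h per disc: ~14,000 hours of video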


    • 2 gig per hour seems like a minimum to me.

      Using iMovie and a digital camera, I've seen disk usage of about 12 GB per hour of imported raw video. This is presumably completely uncompressed, consumer-grade camera video. Uncompressed broadcast-quality video has got to be bigger.
    • Most DVDs on the market are single-side, dual-layer discs of about 8.5GB. Some are 4.7GB single-layer discs.
      Typically they have 5-7GB of material, of which some 200-500MB is studio logos and such, then there are extras and stuff, usually leaving a bit more than half for the actual main content (the movie).

      Now, HDTV quality would require 2-2.5 times the current maximum transfer rate of the DVD drive, and the main content would be about 3-4 times the current size. Of course, if all the extras, studio logos and such were also in HDTV quality (720p to 1080i), the total size of a currently typical disc in the hi-def format would be 15-28GB. Seems like Blu-ray is just enough..
  • IBM's "announcement" (Score:2, Interesting)

    by AIXGuy690 ( 530128 )
    First off, the future 120TB drives will be cool. Second, does anybody read? IBM is NOT getting out of the HD business. They are moving most of their HD business into a joint venture with Hitachi. This new company will be 70% owned by Hitachi and 30% owned by IBM (IBM is NOT selling 70% of its HD business). IBM is going to supply most of the technology (and employees, according to a CBS Marketwatch article) and Hitachi will manage most of the business. IBM will still be a leader in HD technology; they just don't have to take on as much of the burden in the poor HD market right now.
  • speed (Score:2, Insightful)

    by winse ( 39597 )
    If the average file size increases to something more like half a gig instead of a couple hundred K, won't disk I/O be prohibitively slow? I mean, copying an entire 80 GB hard drive right now takes forever... even at 15k RPM. I wish that disk I/O could at least attempt to keep up with processor speeds and storage capacity.
  • People will probably want *personal* storage of a thousand hours of media. Beyond that you start forgetting it or never re-using it. We already have this capacity for books and music. A few terabytes gives a thousand hours of conventional TV. Some day there'll be forms of 3D TV, and that will really consume storage.
  • "...today the price of disk storage is headed down toward a tenth of a penny per megabyte, or equivalently a dollar a gigabyte. It is now well below the cost of paper."

    t_t_b

  • I can't see spinning disk drives reaching the 120 TB capacity talked about in the article. Although I think the capacity will surely exist in 10 years, I'd place my money on either:

    - some form of persistent RAM
    - holographic storage media

    Both have come a long way. The holographic option had a usable storage time of close to a year the last time I checked; not good enough yet for sale, but given that it started on the order of *hours* this is a damned impressive advance.

    Plus, the holographic model being talked about now is a cube which you could *actually pop out of the machine*, put into your pocket, and take with you. Imagine going to a friend's house and being able to snap in your entire hard drive as easily as you do a floppy....

    Max
  • Sure, a terabyte on a single disk would be great, especially for large supercomputers like ASCI White or the new Linux supercomputer [slashdot.org]. But wouldn't this result in lower overall I/O throughput?

    Let me explain. If I were to build a supercomputer with a 1TB storage array, I would probably use 100 or so 10GB drives rather than ten 100GB drives. Creating a RAID 0 array with 100 drives would probably be much faster than with 10 drives, even though the 100GB drives transfer data internally faster than their 10GB counterparts (assuming the same RPM and number of platters). I realize the cost of supporting 100 disks as opposed to 10 is much greater, but you must make a tradeoff. Likewise, a single 1TB hard drive would not be as fast as ten 100GB drives.

    Of course, if you are rich like IBM and most major universities (all they have to do is bump up tuition... again), you can just buy a bunch of the biggest drives available and make a super-fast, super-big array.
