Data Storage

Hitachi to Release Half TB Drive Soon 607

samdu writes "Hitachi has announced plans to release a 7200 RPM 3.5 inch 500 GB hard drive in the first quarter of this year." Maybe this one won't require a new motherboard to use. I think I've replaced more mobo's to handle larger drives than I have to support faster CPUs.
This discussion has been archived. No new comments can be posted.


  • yay! (Score:4, Funny)

    by ikea5 ( 608732 ) on Thursday January 06, 2005 @02:44PM (#11278489)
    More porn, yay!
  • by Aliencow ( 653119 ) on Thursday January 06, 2005 @02:44PM (#11278496) Homepage Journal
    Hard drives get bigger and bigger; we might reach the 1TB limit one day! More at 10.
    • Re:Tonight at 10 (Score:3, Interesting)

      by Lumpy ( 12016 )
      call me when they can make them reliable.

      I have replaced more drives that are only 2-3 years old in the past 4 years than in the rest of my 15-year career.

      Drives below the 20 gig mark are much more reliable, and drives over the 120 gig mark seem to be the most unreliable.

      to hell with more space, give me a drive that will actually last the life of the pc.

      It's so bad that I only buy Seagate server-class IDE drives for the workstations here. Dell will ask you funny questions if you order PCs without an OS or hard driv
  • by Anonymous Coward on Thursday January 06, 2005 @02:45PM (#11278505)
    Sorry, but I can't think of a single interesting thing to say about the launch of a new hard drive whose only claim to fame is being a bit bigger than the previous biggest.

    So... anyone got anything interesting to say?
    • If I buy two of these, I can fit a Terabyte in my Power Mac G5.

      Buying one will get me .75 Terabytes.

      That's cool.
    • by dsginter ( 104154 ) on Thursday January 06, 2005 @02:56PM (#11278702)
      So... anyone got anything interesting to say?

      Seriously... isn't this a wonderful industry?

      If you answered "no" to the above question, please exit stage left. Thanks for playing.

      I don't consider myself "old", but my first PC was an XT with *dual* 5.25" floppy drives (that required a soldering iron for overclocking when there was no such word as "overclocking"). My second PC was a 386SX with an 80 megabyte hard drive.

      As much as I knew it would happen (just looking at the graphs in Byte Magazine was enough to see that), it is still amazing to me. I'm just happy to be working in such a dynamic industry.

      Enough nostalgia for now...
      • I don't consider myself "old", but my first PC was an XT with *dual* 5.25" floppy drives

        Bah! My first computer used a cassette to load programs (at about 300 baud, I think). Eventually, we got a single floppy for it (single sided, what's that, 180K?)

        (and, yes, I guess I do consider myself old. though at least I never used 8" disks.)
        • My first computer, a Casio PB-700, had 4 KB of RAM, expandable to 16 KB (I went to 12 KB), with plastic modules the size of cigarette lighters for each 4 KB.

          It had BASIC, and I COULD NOT save programs on any media. It was battery operated (4 AAs, IIRC), and I lost all memory when changing them.

          It had a 160×32 pixel display, and I got good at writing games for it and drawing out quadratic equations. I carried it in my jacket pocket. (That was 1984.)

          link: http://pocket.free.fr/html/casio/pb-700_e.html [pocket.free.fr]

          My friend had a
      • Tandy 1000 with dual 5.25" floppy drives, souped up to 640K memory and an RGB monitor. My second computer was a 486 with a massive 40-meg drive. At 2400 baud it took a while downloading from BBSs, but I did it. I even remember using Stacker 2.0 because drive space was so tight. These days I just yawn at drive sizes.
      • The first computer I used with any regularity was a Mac Plus, with one 3.5", 400K disk drive, and no hard drive. It had SCSI, so you could add one pretty easily, but it didn't have one.

        My next was a Mac SE, with a 20 megabyte hard drive and an 800K disk drive. After that was the 386SX with the 80MB hard drive.

        I remember the first time I ever heard the word gigabyte. My uncle had a Giga-ROM CD for Mac - 650 megs of archives, over a gigabyte of software on one disc! It took me forever to look through a tent
    • Makes me wonder if Slashdot had a story about the first "half GB" hard drive...

      Searching the archives out of curiosity didn't yield any results with the obvious key words...
    • I thought all modern motherboards could read drives up to 2 terabytes, which is the limit in a 32-bit operating system?

      Someone correct me if I am wrong because I plan to go back in the pc industry soon and need to know these things.

    • it will drive the cost down further on the drives we are most likely to use.

      so we got that goin' for us. which is good.
    • This is how technology generally evolves, isn't it? Small steps that have a tremendous impact in the long-term.

      I guess it's not that exciting in day-to-day news, but it's important to realize that the evolution from the first computer to the one you're reading this article on wasn't made in a giant leap -- it took years and many many many small improvements.
    • So... anyone got anything interesting to say?

      My cat's breath smells like cat food. Well it does...
    • 3.5 inch hard drives get bigger capacities and cheaper within a short period of time (6-8 months). But why isn't this carried over to laptop hard drives (2.5 inch)?
      Anything over 40GB still costs a pretty penny, and 5400/7200rpm disks are still the exception rather than the norm in laptops.
      And good luck finding laptop hard drives above 100GB.
      • 3.5" hard drives often have four platters on which to store data. 2.5" hard drives usually only have one or two. In addition, the 2.5" hard drive platters are (obviously) physically much smaller than the 3.5" hard drive platters. For a given data density, not only do you have half the number of platters, you have much less surface area.

        As far as transfer performance, you can transfer the most data where the platter is spinning the fastest - on the outer edge. The 3.5" hard drives' edge spins that much
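The geometry argument above is easy to put into rough numbers. A back-of-the-envelope Python sketch (it treats the nominal form-factor diameters as platter diameters; real platters are somewhat smaller, so take the figures as illustrative only):

```python
from math import pi

def edge_speed_m_s(diameter_in, rpm):
    """Linear speed of the outermost track, where sustained transfer peaks."""
    radius_m = (diameter_in * 0.0254) / 2  # inches -> metres, then radius
    return 2 * pi * radius_m * (rpm / 60.0)

desktop = edge_speed_m_s(3.5, 7200)  # ~33.5 m/s
laptop = edge_speed_m_s(2.5, 5400)   # ~18 m/s
```

At a given areal density the media data rate scales with this linear speed, so the desktop drive's outer edge passes data under the head nearly twice as fast.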

  • 3.0Gb/s - 817 Mb/s? (Score:4, Interesting)

    by stupidfoo ( 836212 ) on Thursday January 06, 2005 @02:46PM (#11278518)
    The specs for the 7K500 (500GB) include an 817 Mb/s max media data rate, 8.5 ms average seek time, 7,200 RPM, 4.17 ms average latency, and ATA-100/Serial ATA 3.0 Gb/s.

    While it's nice to have something as fast as possible, is there a point to having a 3.0 Gb/s interface on a product that can only handle 817 Mb/s?
    • by teg ( 97890 ) on Thursday January 06, 2005 @02:48PM (#11278567)

      While it's nice to have something as fast as possible, is there a point to having a 3.0 Gb/s interface on a product that can only handle 817 Mb/s?

      On-drive cache.

    • While it's nice to have something as fast as possible, is there a point to having a 3.0 Gb/s interface on a product that can only handle 817 Mb/s?

      The drive's onboard cache runs a lot faster than the drive itself.
    • by jm92956n ( 758515 ) on Thursday January 06, 2005 @02:49PM (#11278572) Journal
      Maybe so you can put two drives on one controller?

      Yes.
    • Maybe if the data you want is in the Hard Drive's cache, the transfer rate could be higher than 817 Mb/s.
    • There is a point: cache.

      When you get a read hit, you get it at 3 Gb/s. And more importantly, when you queue a write to the drive, you do it at 3 Gb/s. With SATA, like SCSI or Fibre Channel, you can queue a bunch of writes and have the drive order them in a mechanically optimal manner. Meanwhile, your computer can do other things, including issue reads.
    • by archen ( 447353 )
      Presumably you could have a lot of data pushed down the pipe and have the hard drive's cache queue the data until it can be transferred to the platters. Obviously you wouldn't get anywhere near 3 Gb/s sustained transfer, though. I'm thinking the 3 Gb/s has more to do with the SATA standard, and nothing to do with hard drive technology, which is nowhere near that level.
  • Rooms full of drives (Score:5, Interesting)

    by Stanistani ( 808333 ) on Thursday January 06, 2005 @02:47PM (#11278535) Homepage Journal
    In the eighties, our raised floor had a TB of storage - 48 six-foot by 4-foot cabinets with the power, cooling, and connectivity that implies, as well as thousands of dollars in maintenance fees.

    Now I can hold a TB in one hand...

    I like this decade better.
  • use for backup (Score:3, Insightful)

    by feenberg ( 201582 ) on Thursday January 06, 2005 @02:47PM (#11278538)
    Am I the only one who likes 5400 RPM drives because he thinks they will last 72/54 times as long as 7200 RPM drives? We use large drives for backup, and since the access is all sequential, the high rotation speed isn't that important to us.
    • The idea that 5400 RPM drives may last longer is based on assumptions and hearsay. The high rotation speed helps sequential data transfer speeds too, not just random access.
    • Unless they're laptop ones. We replaced 50 out of the last 400 laptops we got. All 5400 RPM drives.
    • Re:use for backup (Score:3, Insightful)

      by bluGill ( 862 )

      At least you backup...

      I'm not so sure you are gaining anything, though. Your point is correct, and 5.4K drives don't run as hot; two points in your favor.

      However, that assumes everything else is the same. If they used higher-quality components in the faster drive, it might last longer. It wouldn't surprise me; an extra $.05 on bearings can make a large difference in the price after all the layers of suppliers are gone through, enough to account for the difference in price.

      It's all just speculation unl

  • Yay....but (Score:4, Funny)

    by bwcarty ( 660606 ) on Thursday January 06, 2005 @02:47PM (#11278542)
    Will it be big enough to install Longhorn on?
  • Inching up (Score:3, Funny)

    by QuietLagoon ( 813062 ) on Thursday January 06, 2005 @02:48PM (#11278550)
    It's nice to see hard drive capacity start to inch upwards once again. We were stuck in the 250-300GB range for too many years.

    Now, when am I going to see this capacity in my iPod? ...

  • at the increasing capacity of spinning magnetic media, when about 20 years ago (I guess when thin-film heads came about) many pundits said that the medium had reached its physical limits.

    Just where do they squeeze these extra bits from on the same size platter?
    • by mblase ( 200735 ) on Thursday January 06, 2005 @03:15PM (#11279002)
      Just where do they squeeze these extra bits from on the same size platter?

      It's actually a compression algorithm. You know that computers store information as a series of ones and zeroes, right? Well, they just added a driver that writes only the ones, not the zeroes, instantly doubling the storage space.

      After that, it's been a matter of building the drives with smaller and smaller pencils to write those ones side-by-side. When hard disks were first introduced, they used a standard #2 pencil sharpened down to the eraser, but eventually they moved to mechanical pencils, then realized they could use the mechanical pencil lead without the pencil at all.

      Today, special microscopic pencils can be built one molecule at a time. The "eraser threshold" (currently the smallest one is 0.00003 centimeters in diameter) is a key factor in manufacturing drives.
  • by grub ( 11606 ) <slashdot@grub.net> on Thursday January 06, 2005 @02:48PM (#11278560) Homepage Journal

    One day Hitachi invented a 500 gigabyte drive. The RIAA said "The public is evil, that's 100,000 5 MB MP3s!" Then the MPAA cried "The public is evil, that's over seven hundred 700 MB xvid movies!" So their lobbyists went to Washington to get these high capacity drives made illegal. And their shareholders lived happily ever after.

    The End
    • Re:A Fairy Tale (Score:3, Insightful)

      by eddy ( 18759 )

      No, it's going to go like this:

      The ??AA said "The public is evil, they're going to use these devices for theft of our precious "IP"! Since we can't control this, we demand a blanket levy put on these devices, made payable to our puppet umbrella organisation whose purpose it is to "fairly" distribute said levy to ourselves."

      The ??AA members could then lie back and enjoy their new "tax", having no more incentive to actually produce anything. "Who would have thought, that taxes could be so much fun?", They

    • Re:A Fairy Tale (Score:3, Interesting)

      by Just Some Guy ( 3352 )
      Not quite. The RIAA would say: "The public is evil, that's 34.5 million MP3s, so each of these drives costs us 223.47 million dollars". Then, reporters would lurch on about the dastardly teenagers and their "fear-to-fear" networks (that spread spam, viruses, and Linux to your own computer, right under your very nose!).

      Your original quotes were far too logical - and mathematically accurate - to ever originate from the RIAA.

      Even worse, AT&T could claim that the drive could store 897 copies of an old

  • by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Thursday January 06, 2005 @02:48PM (#11278565) Homepage
    Maybe this one won't require a new motherboard to use. I think I've replaced more mobo's to handle larger drives than I have to support faster CPUs.
    Sounds like an OS issue. Linux handles 200+ GB drives just fine on my p3 box with ATA/33 controllers.

    Seriously, as long as you get the kernel in the part of the disk that your motherboard supports, (or don't boot off that disk at all), Linux will work with it, no matter what motherboard you've got. No 128GB limit to worry about, even if you don't have ATA/100 (or is it ATA/133 that is supposedly required to support 128GB+ drives?)

    I've even read those 200+ GB disks on a Pentium 120 Dell's onboard controllers on Linux. No problem -- Linux knew to ignore the BIOS settings on the drive and just made it work.
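As an aside, the 128GB barrier discussed above comes from the original ATA 28-bit LBA scheme; 48-bit LBA (standardized in ATA/ATAPI-6, around the same time ATA/133 appeared) removed it. The arithmetic:

```python
SECTOR = 512  # bytes per logical sector on ATA drives of this era

lba28_limit = (2 ** 28) * SECTOR  # 137,438,953,472 bytes
lba48_limit = (2 ** 48) * SECTOR  # vastly larger (~144 petabytes)

print(lba28_limit / 10 ** 9)   # ~137.4 decimal GB...
print(lba28_limit / 2 ** 30)   # ...which is exactly 128 GiB
```

So the same limit shows up as "137GB" in the drive makers' decimal units and "128GB" in binary ones.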

    • by badfrog ( 45310 ) on Thursday January 06, 2005 @02:56PM (#11278713)
      Yes, I'm quite surprised a Slashdot story included the motherboard comment. My file server is an old P200 running Samba on a 160GB drive, and even with a rather old Red Hat installation (7.3), no extra configuration is required.
      All it can support under DOS/Windows is 8GB. It's so ancient the motherboard doesn't even support IDE CD-ROM booting.
    • No, there are quite a few motherboard chipsets that only show at most the first 130GB of the disk, ignoring the rest. This is the maximum they can address. Some BIOSes are happy to hand addressing off to the OS; some aren't. So your point about getting the kernel below the boundary is a little bit pointless when you want to dual boot. Don't assume that just because you haven't come across it, it must not exist, because it does, and it's a pain.
      • by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Thursday January 06, 2005 @03:35PM (#11279341) Homepage
        This is the maximum they can address. Some BIOSes are happy to hand addressing off to the OS, some arent
        Once you get booted up, it's not up to the BIOS anymore, unless you're using an old OS that uses the BIOS for disk access.

        By old, I mean DOS old -- I don't even think Windows 95 uses the BIOS for disk access once booted up unless it has no other choice. OS/2 had an int 13h driver that it could use if there was no other option -- but you certainly didn't want to use it unless you had to, because the performance sucked.

        The problem is that Windows blindly trusts what the BIOS returns for the drive parameters. A smart OS can ignore the BIOS settings if they don't match what the drive itself returns. It can also look at the partition table and use those settings instead of what the BIOS reports, if that makes more sense.

        so your point of getting the kernel in the lower boundry is a little bit pointless when you want to dual boot.
        I said OS issue. I meant it.
        Dont assume that just because you havent come across it it must not exist, because it does and its a pain.
        Oh, I've come across it. And I know it's a pain. But I certainly wouldn't replace a motherboard for it -- I'd either 1) update the BIOS (if an update is available), 2) add an external IDE card (which has its own BIOS), or 3) pick an OS that can handle the BIOS issue better. Another option might be one of those `boot managers' that come with the large drives -- they add a little bit of code that fools Windows into seeing the correct drive parameters instead of what the BIOS returns.

        But if my P120 box can read a 200 GB disk with its internal controller, I'm guessing that almost anything can. But the BIOS on that computer can't handle anything over 8 GB properly, so Windows would be out of the question.

  • by aquarian ( 134728 ) on Thursday January 06, 2005 @02:49PM (#11278571)
    I wonder what everyone's doing with all these huge drives, other than indulging a compulsive collecting habit. How much music can one listen to, and how many movies can one possibly watch?

  • by cmburns69 ( 169686 ) on Thursday January 06, 2005 @02:50PM (#11278597) Homepage Journal
    What I want to see is an array of HDs made for the consumer. Slap a couple of iPod-style drives together in some sort of RAID configuration, give it a controller, and we'd see a drive with excellent throughput and reliability! .. Just wishing! ...

    • I don't know "why" drives fail, but I've often wondered if it wouldn't be possible in large, multi-platter drives to have some kind of RAID-5 redundancy inside the drive itself (perhaps as a configurable option?).

      The loss of performance and capacity might be worth it in some situations if it mitigated some decent-sized portion of drive failures.

      Another idea I had was the ability to daisy-chain drives directly together and have a "direct" RAID system without a separate controller, using RAID logic integrat
  • Does anyone know why they don't read hard drive platters in parallel? From what I understand, they read them one at a time. If they read them in parallel, throughput would increase by a multiple of the number of platters in a drive.
  • The drives include another key feature of SATA II, staggered spin-up. Staggered, or delayed, spin-up enables the host to individually "spin-up" drives in multi-drive configurations. This reduces the power draw of a booting system, enabling system designers to reduce the size of the power supply and minimize the total cost of ownership for end-users.

    Screw that, keep those system designers off my power supply, I want more power not less!!

  • Why not faster? (Score:5, Interesting)

    by cablepokerface ( 718716 ) on Thursday January 06, 2005 @02:51PM (#11278606)
    Does anyone know the reason why the speeds of these drives are rarely upgraded? I mean, IDE is just 7200 RPM, which it has been for years; S-ATA is 10,000 sometimes, but still not really very much faster.
    Is it technically difficult? Is it unnecessary?
    And now that I think about it, what is taking those solid-state disks so long?
    • Re:Why not faster? (Score:5, Informative)

      by chill ( 34294 ) on Thursday January 06, 2005 @02:58PM (#11278744) Journal
      Seagate Cheetah U320 SCSI drives are available in 15,000 RPM models. Much faster than that and you have problems with the spinning media deforming due to the stress.
    • Re:Why not faster? (Score:2, Informative)

      by stratjakt ( 596332 )
      How fast do you think you can make a pound of metal spin with only a few watts of power, without falling apart or exploding, etc?

      I don't know exactly what the mechanical problems are, but 10,000 RPM is pretty friggin fast. I remember years ago hearing that 4,800 was the absolute fastest speed they could go for some reason or another.

      • Modern 3.5" disk assemblies are barely a few ounces (the platters and the spindle). Certainly nowhere near 'pounds of metal'.

        I think you'd have to spin these things *way* beyond 10K RPM before they would significantly deform under centripetal loading, and many times that before they would actually 'explode'.

        The amount of power used to spin up such a system (and keep it spinning) is almost insignificant if your bearings are sufficiently low-friction.
    • Re:Why not faster? (Score:3, Informative)

      by 314m678 ( 779815 )
      Recall from HS physics that the centripetal acceleration a body experiences is proportional to the square of its velocity. So if you make the platter spin twice as fast, you increase the stress on the drive by a factor of four. --Paul
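That scaling is easy to sanity-check: centripetal acceleration at a fixed radius is a = ω²r, so the load grows with the square of the spin rate. A tiny Python illustration:

```python
def relative_stress(rpm, base_rpm=7200):
    """Centripetal load at a fixed radius, relative to a 7200 RPM baseline."""
    return (rpm / base_rpm) ** 2

print(relative_stress(14400))  # 4.0: doubling the RPM quadruples the load
print(relative_stress(15000))  # ~4.34 for the fastest SCSI drives
```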
    • Well, 7200 RPM with double or quadruple the areal density is like double or 4x the RPM at the original areal density.
      As for solid-state disks, they've come an amazing way. It's just that their cost relative to hard disks is still bad. But relative to their original costs, they've probably done as well as platters in terms of price/bit.
    • Re:Why not faster? (Score:5, Informative)

      by vadim_t ( 324782 ) on Thursday January 06, 2005 @03:12PM (#11278965) Homepage
      Technical issues. It's hard to spin a platter at 10K RPM. It also requires cooling, and makes lots of noise too. 7200 is about the most you can use without having a fan blow on the HDD, and I would prefer not to because they get quite hot. I suppose the manufacturers picture lots of users buying a 10K RPM drive, sticking it into an under-ventilated box and getting a replacement a week later because it died from overheating.

      There's also that RPM is not the only way of making things faster. Basically, the performance of a hard disk is determined by 3 variables:

      Rotational latency: The time it takes for the disk to spin into the right position. That is, once the head is on the right place, this is how long it has to wait for the data to pass under it. More RPM translates into less rotational latency.

      Seeking latency: The time it takes for the drive's assembly to get into the right position.

      These two are often added up in the statistics. Solid-state drives pretty much lack them. I'm now setting up a firewall that boots from CompactFlash on a CF-IDE adapter, and it boots really fast despite a transfer rate of only 2 MB/s. Latency can add up to quite a lot.

      Data rate: The speed at which the drive reads or writes data once everything is in the right place. This is a function of the RPM and data density. More speed means the data passes under the heads faster. More density means there's more data per square inch.

      So, increasing RPM is one way of getting more performance. The other one is packing more data into the same place. Some drives have small platters for this reason. This also means that a bigger drive is often also faster than a smaller one, given identical RPM, platter size, and number of platters.
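The rotational-latency term described above is simple to compute directly; spec-sheet "average latency" figures are just half a revolution at the rated RPM. A quick check in Python:

```python
def avg_rotational_latency_ms(rpm):
    """Average wait for the target sector: half a revolution at the given RPM."""
    return (60_000.0 / rpm) / 2

print(avg_rotational_latency_ms(7200))   # ~4.17 ms, as quoted for the 7K500
print(avg_rotational_latency_ms(15000))  # 2.0 ms
```

This matches the 4.17 ms average latency listed in the 7K500 specs elsewhere in this discussion.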

  • We got the more room for porn post.

    Good job.

    We got the when will we see this kind of capacity on my iPod post.

    Good job.

    Now all we need is -- (drum roll please) --

    But does linux support it yet?

  • Is why in the storage realm, every time they hit some stupid short-sighted limitation, they implement some new addressing scheme or something as a band-aid (LBA, etc.), which is suboptimal but somewhat understandable. *BUT* the solution itself is very short-sighted, providing for capacities only 25-30% beyond the limitation that was hit, and it breaks again a few months later when capacities reach the new limit. I figured at least with SATA they had a chance to mostly start fr
    • Are you posting from 1995 or something?

      LBA is what they SHOULD have done from the start, it abstracts the specific geometry from the amount of space on the drive. Anyone who remembers having to dial-in CHS values knows this, LBA is a godsend. The reason it wasn't implemented from the start was that it would shift processing (sector locating) to the drives themselves, which wasn't cheap to do in the eighties and early nineties. LBA has also been the standard for a LONG while now, and besides a minor bump in
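For the curious, the flat sector number LBA exposes is just the old cylinder/head/sector tuple linearized. A sketch of the standard mapping (the geometry values below are the common BIOS translation figures, used purely as an example):

```python
def chs_to_lba(cylinder, head, sector, heads_per_cyl, sectors_per_track):
    """Standard CHS -> LBA linearization; sector numbers are 1-based."""
    return (cylinder * heads_per_cyl + head) * sectors_per_track + (sector - 1)

# With the common 16-head, 63-sectors-per-track translated geometry:
print(chs_to_lba(0, 0, 1, 16, 63))  # 0, the first sector on the disk
print(chs_to_lba(1, 0, 1, 16, 63))  # 1008, i.e. one full cylinder in
```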
  • Hitachi, feh (Score:3, Insightful)

    by retro128 ( 318602 ) on Thursday January 06, 2005 @02:52PM (#11278627)
    I won't touch Hitachis. I still have a bad taste in my mouth from the last DeathStar I owned. That's nothing compared to a friend of mine though, who had to turn in his 75GXP 4 times under warranty before he finally figured it wasn't worth the trouble and scrapped the drive. The magnets that came out of it are more useful than that drive ever was.

    Yes, I know I was burned by IBM rather than Hitachi, but when I was asking some techs who still work in the trenches about it, saying that they were not big fans of Hitachi drives would be putting it lightly.
    • only the 75GXP line (Score:4, Informative)

      by Macrat ( 638047 ) on Thursday January 06, 2005 @03:29PM (#11279214)
      Only the 75GXP line was a lemon. The 120GXP and later releases have been MUCH higher quality. (Don't argue with me about it, as I have FOUR 7K250 drives, a DOZEN 120GXP drives and a DOZEN 180GXP drives in use 24x7 across a variety of desktop systems.)
  • Thread on SR (Score:4, Informative)

    by Sivar ( 316343 ) <charlesnburns[@]gmail...com> on Thursday January 06, 2005 @02:55PM (#11278683)
    There's an interesting (as far as "new drive is bigger than old ones!" is interesting) thread [storagereview.net] on Storagereview.com which includes some insights as to how this thing is built, and why it uses lower-capacity platters than even Seagate's 400GB drives.
  • "I think I've replaced more mobo's to handle larger drives than I have to support faster CPUs."

    I'm sure there will be a PCI card that you can tie into. These types of monster-size drives aren't typically used for their speed; they're used for storage of various things.

    If it were about performance, you probably wouldn't use this style of drive anyway. When your storage needs aren't limited by size per se but by I/O, you'd be better off investing the money in a SCSI solution. Especially if yo
  • I think I've replaced more mobo's to handle larger drives than I have to support faster CPUs.

    Sounds like the CmdrTaco Center for Pornography Storage is doing pretty well. At least we know the Slashdot subscription fees are going to a worthy cause.
  • Well great, right after I put 4 300GB drives in RAID-5 on an 8-drive SATA card, this comes out. With 500GB drives, the practical size of the array when full would be almost another TB bigger. (The 4 300GB drives in RAID-5 produce about 850GB now, so I assume 8 drives would produce about 2TB.)
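For anyone checking the arithmetic: RAID-5 spends one drive's worth of space on parity, so the nominal usable capacity is:

```python
def raid5_usable_gb(n_drives, drive_gb):
    """Nominal RAID-5 capacity: one drive's worth goes to parity."""
    return (n_drives - 1) * drive_gb

print(raid5_usable_gb(4, 300))  # 900 nominal (less after formatting)
print(raid5_usable_gb(8, 500))  # 3500
```

The gap between the nominal 900GB and the ~850GB reported above is the usual formatted-capacity and decimal-vs-binary overhead.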
  • Maybe this one won't require a new motherboard to use. I think I've replaced more mobo's to handle larger drives than I have to support faster CPUs.

    An alternative to buying a new motherboard is to just buy a PCI IDE controller. The only reason for the upgrade is so that enough bits are used to address all of the sectors on the disk; the interface otherwise doesn't change. In fact, new hard disks sometimes come with controller cards in a bundle if you're too cheap to pay the $20. I'm currently running a pa
  • by multiplexo ( 27356 ) on Thursday January 06, 2005 @02:58PM (#11278734) Journal
    Video files are generally at least two orders of magnitude larger than audio files, so while it has been feasible for the last few years to build an MP3 server to store all of your music (and now it's even feasible for most geeks to build one to store their music in a compressed, lossless format) the same hasn't been true for DVDs.

    But last night I was looking at the price of Hitachi's 400GB IDE drive ($368 at newegg.com) and figured that I could throw a pretty decent video server together for about five kilobux. I was thinking of getting a big case and power supply, eight of these drives, and an Adaptec eight-port SATA RAID controller. Set up a Linux system, set up the drives and RAID controller as RAID-5, and you could get about 2,500GB of storage, which works out to about 265 DVD images (assuming that each image is from a dual-layer disc and 9.4GB in size). Use SMB over gigabit Ethernet to mount these images on your clients and then play whatever you like. Eight 500GB drives would give you about 3,200GB of storage, which works out to 340 images (making the same assumptions about the size of each DVD). I'm sure there are better ways of doing this; this is just what I came up with off the top of my head.

    Note that this assumes that you're not doing any processing on the DVDs. With a tool such as DVD Shrink you could increase the number of images you were able to store by stripping out alternate soundtracks, extra features and even the menus. And with DivX re-encoding you might be able to (I don't know much about DivX, so comments would be appreciated) reprocess the video streams so that they used less space but were not visibly reduced in quality. If I had a spare 5 kilobux to blow right now I'd build one of these as a mighty heigh-ho and fuck you to Bill Gates, Jack Valenti and all of the other assholes in Hollywood and have the pleasure of a whole-house video solution.

    • by jgardn ( 539054 )
      With your scaling of drives, you missed something important. Right now, let's say that once every 4 years one of these drives will fail. That's a pretty good record, I think, for consumer hardware. When you've got four of these running, you are pretty much guaranteed that one will fail every year. With eight, you now have a good chance that one will fail every 6 months.

      I'm not saying they'll fail once every six months. I'm saying that on average they will. More than likely, three will fail in a single mont
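The parent's scaling follows from a simple (and admittedly idealized) model: if drive failures are independent and memoryless, the expected time to the first failure among n drives is the single-drive figure divided by n. A sketch:

```python
def expected_first_failure_years(single_drive_years, n_drives):
    """Mean time to the first failure among n independent drives."""
    return single_drive_years / n_drives

print(expected_first_failure_years(4, 4))  # 1.0 year with four drives
print(expected_first_failure_years(4, 8))  # 0.5 years with eight
```

Real failures cluster (shared heat, vibration, bad batches), so this is a floor on how often you should expect to be swapping drives, not a ceiling.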
    • Just a suggestion, but if you are looking for a good SATA RAID controller, take a look at what 3ware [3ware.com] has to offer. Their 8000 series [3ware.com] controllers are very nice. 3ware has always done its best to work closely with the FOSS community; Adaptec, not so much.
    • It doesn't matter how 'good' DivX encoding is; MPEG-2 is a lossy format. Since MPEG-2 is lossy, conversion to any other lossy format (including MPEG-2) will result in further degradation of the video quality. In the case of DivX, since DivX and MPEG-2 throw away different bits of data, the lossy conversion will be worse than encoding from a lossless codec like HuffYUV.
      So to answer your question, converting to DivX will result in both a generational loss and some MPEG-4-specific loss of quality
    • DIVX Saves Bandwidth (Score:3, Interesting)

      by meehawl ( 73285 )
      I don't know much about DiVX

      I have a 1TB media server RAID-5 NTFS array (vintage 2002, so it's not a speed demon, but still respectable -- it maxes out the PCI bus!). I back it up over FW400 (also not the fastest these days) onto an external [compgeeks.com] 1TB RAID-1 array.

      Anyway, one advantage I have noticed of DivX over DVD is reduced bandwidth. You can get very respectable video quality from 1.5Mbps DivX, versus roughly 4-5 times that for DVD. Either is acceptable over wired connections, but 802.11a barely allows acceptable
  • At the risk of sounding like Mr. Gates' fabled 640K comment back in the day, how in the world is a user supposed to make use of such a product?

    1. My music collection? Nope, DRM prevents me from burning my CDs anymore...

    2. Digital movies? Nope, again DRM requires me to buy a separate copy of each work, even for backup purposes.

    3. Software? Nope, that's all subscription based, I just get to pay my $37.50 a month and be happy with what they choose to offer.

    So, I'm left with .txt, .sxw, and .doc files
  • speed, not space! (Score:2, Interesting)

    by ikea5 ( 608732 )
    I'd rather have a 15,000 RPM/200 gig IDE drive than a 7200 RPM/500 gig one, seeing that HDDs are the major bottleneck on performance.
  • Comment removed based on user account deletion
  • Back in the day when hard drives were measured in MB, not GB, there were tape drives that could back up GBs at a time, so it was possible to back up an entire hard drive relatively cheaply and easily. How the hell am I supposed to back up 500GB?!? That's over 120 DVD-Rs, 20 Blu-rays, 700 CD-Rs... you get the idea. I would imagine I'd need some sort of special hardware just to do something simple like backing up. Jeez, do I need to buy another 500GB drive just as a backup?
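The disc counts work out roughly like this, using nominal single-layer capacities (the exact numbers shift depending on formatted capacity and which disc variants you assume):

```python
drive_gb = 500
media_gb = {"CD-R": 0.7, "DVD-R": 4.7, "BD-R": 25.0}  # decimal GB per disc

for name, capacity in media_gb.items():
    print(name, round(drive_gb / capacity))  # CD-R 714, DVD-R 106, BD-R 20
```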
  • I think I've replaced more mobo's to handle larger drives than I have to support faster CPUs.

    Yeah, I guess spending $25 and dropping in a Promise ATA controller is too much effort. Western Digital was even bundling them with the drives for a while.

    Funny thing, those controllers perform significantly faster than many built-in IDEs. My nforce2 MBs have IDE defects that cause lockups that require power cycles to clear (reset won't do it). I don't even use the on-board IDE on those boxes.

    I have been buyin
  • 1 terabyte? (Score:2, Insightful)

    Not quite. Remember that 500 GB will not REALLY be 500 GB, since drive manufacturers don't count bytes correctly. Plus, 500 full GB plus 500 full GB does not equal 1 TB. 1 TB is still 1024 GB, so you'd need 24 more GB.
  • by Anonymous Bullard ( 62082 ) on Thursday January 06, 2005 @03:47PM (#11279525) Homepage
    Any recommendations for well-supported (under Linux) SATA or SATA II PCI cards to drive these things? RAID isn't needed here but others might be interested in that as well.

    Most motherboards currently in use don't have SATA support built in, and even the new ones that do may come with chipsets that haven't got complete Linux support yet.

    Since my next motherboard and drives may well be all SATA, it would make sense to start adding SATA drives to my current setup using an add-on controller card.
