
Latest SCSI Drive Reviewed

Sivar writes "StorageReview got their hands on a Maxtor Atlas 10K V, the first SCSI hard drive in more than two years to double capacity. Considering how quickly storage was improving just a few years ago, and other news like Intel's cancellation of the 4GHz Pentium 4 despite AMD's lead, you have to wonder if the traditional predictions of the end of Moore's Observation are actually beginning to come true."
  • Thank you (Score:5, Insightful)

    by Rosco P. Coltrane ( 209368 ) on Tuesday November 02, 2004 @07:01PM (#10705912)
    the traditional predictions of the end of Moore's Observation

    Thank you for correctly not calling Moore's observation "Moore's law". It's refreshing once in a while.
    • Re:Thank you (Score:4, Informative)

      by rmarll ( 161697 ) on Tuesday November 02, 2004 @07:08PM (#10705966) Journal
      Unfortunately Moore's observation has nothing to do with clock speed or hard drive capacity. Plus one, and minus two?
      • And why not? While it was originally about transistor count and such, I see no reason why it cannot be applied to other aspects of computing hardware, or to any technology that improves in an exponential rather than linear fashion.
    • Re:Thank you (Score:3, Interesting)

      by Jason1729 ( 561790 )
      What makes you think "Moore's Law" is not a correct term?

      From Wikipedia under "physical law": A physical law or a law of nature is a scientific generalization based on empirical observations.

      Moore's LAW is the empirical observation that every 18 months the transistor density of high-end chips doubles.

      Jason
      ProfQuotes [profquotes.com]
    • Re:Thank you (Score:3, Insightful)

      by gl4ss ( 559668 )
      too bad for you Moore's observation is CALLED "Moore's law".

      it being a real law or just a theory not having much to do with it.

      besides, the whole law is just an old dog for newswriters to kick.

      and as a sidenote (besides that moore's law has very little to do with hd space in any of the usual things moore's law is stated to mean), it could also mean that scsi is being slowly pushed further and further into its niche (and thus having smaller and smaller markets compared to other biz the hd companies could
    • Unfortunately, the correct term is in fact "Moore's Law".
  • by tyroney ( 645227 ) on Tuesday November 02, 2004 @07:02PM (#10705920) Homepage
    The other two models are 73GB and 147GB, according to the article.
  • wtf? (Score:4, Informative)

    by Speare ( 84249 ) on Tuesday November 02, 2004 @07:02PM (#10705922) Homepage Journal
    What the fuck do hard drive capacities have to do with "Moore's Observation," which was about transistors?
    • by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Tuesday November 02, 2004 @07:06PM (#10705950) Journal
      Welcome to the new /. where we just LOOK like we know what the hell we are talking about.

      • Welcome to the new /. where we just LOOK like we know what the hell we are talking about.

        Judging by the fact that half the comments in this story concern the non sequitur invocation of Moore's Observation, "we" don't even look like we know what the hell we're talking about.

      • I submitted this article, and expressly used Moore's Observation in terms of hard drive technology because, like transistor count in ICs, hard drive storage increases exponentially rather than linearly. Who said it can be used ONLY to describe processors and the like? You?

        If you would like to discuss how it relates to storage--and it does--feel free to post to the StorageReview forums, or email me through them (user name: Sivar).
        • http://www.intel.com/research/silicon/mooreslaw.htm [intel.com]

          The Gentleman that came up with Moore's Law doesn't agree with you. I trust Gordon Moore more than almost anyone on /.

          Kind of funny that way.

          From the above link.

          Gordon Moore made his famous observation in 1965, just four years after the first planar integrated circuit was discovered. The press called it "Moore's Law" and the name has stuck. In his original paper, Moore observed an exponential growth in the number of transistors per integrated
          • This misses the point entirely. It doesn't matter that Moore happened to be talking about transistor count when he made his famous observation--the point is that the exponential (rather than linear) growth he observed applies to more than just transistor count. It is a famous observation that is easily recognized, so it was appropriate to compare the slowing of storage technology to the slowing of IC advancement.
            Besides, Slashdot submissions often need a little extra flair to be accepted for publication.
    • Re:wtf? (Score:2, Funny)

      by junkgui ( 69602 )
      Then I would like to be the first to make the observation that hard disk capacity doubles every 12 to 18 months... wow... I want a raise.
    • Bingo!

      Maybe what we will see an end to is people applying "Moore's Observation" to everything that has anything to do with computers.

      SCSI hard drive capacities aren't increasing as fast as they were simply because the demand isn't there; the emphasis is on performance rather than capacity. If there were a market demand for 400GB SCSI drives, they'd be available today.

    • Curiouser (Score:2, Insightful)

      by tepples ( 727027 )

      What the fuck do hard drive capacities have to do with "Moore's Observation," which was about transistors?

      Other than the curious observation that both IC density and magnetic storage density happen to be ceasing to scale up at the same time?

  • by Jason1729 ( 561790 ) on Tuesday November 02, 2004 @07:07PM (#10705962)
    It's a doubling of the density of transistors every 18 months. It doesn't say anything about magnetic storage density or the clock speed of chips. Intel cancelling the 4GHz P4 was just admitting (and it's about time) that cranking up the clock speed is not the best way to improve CPU performance. There is no indication that this will prevent Moore's Law from continuing.

    Jason
    ProfQuotes [profquotes.com]
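(As a quick illustration of the doubling claim in the comment above, here is a short Python sketch of the 18-month-doubling arithmetic. The starting transistor count is an arbitrary assumption, not any real chip's spec.)

```python
# Sketch of the 18-month-doubling arithmetic described above. The starting
# transistor count is an illustrative assumption, not a real chip's figure.
def projected_transistors(initial, years, doubling_months=18):
    doublings = years * 12 / doubling_months
    return initial * 2 ** doublings

# A hypothetical 50-million-transistor chip, projected ten years out:
print(f"{projected_transistors(50e6, 10):.2e}")  # ~5.08e+09, roughly 100x
```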
  • Large caches (Score:5, Interesting)

    by Twillerror ( 536681 ) * on Tuesday November 02, 2004 @07:08PM (#10705964) Homepage Journal
    The article claims that hard drives are starting to clamor for 16MB caches. It seems odd that no one has come out with a standard cache expansion kit.

    A motherboard with an ATA chipset that could plug in older dirt-cheap SRAM, or even newer DDR or better. Imagine a 4GB cache of SRAM attached to your hard drives. A machine left on for a while would start to smoke.

    I have some really high-end SCSI RAID controllers that allow 256 megs of cache... I wonder why there isn't a product out there to add cache to an existing ATA system. Obviously cost is an issue, but it seems like this sort of thing would give a big bang for the buck, as sketched below. High-end gamers will pay anything for a 5% perf increase.
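(A rough sketch of why a big RAM cache in front of a disk pays off, using the standard average-access-time formula. The latency figures are assumed ballparks, not measurements.)

```python
# Effective access time with a RAM cache in front of a disk:
#   t_eff = hit_rate * t_ram + (1 - hit_rate) * t_disk
# DRAM ~100ns and ~13ms per random disk access are assumed round numbers.
def effective_access_ms(hit_rate, t_ram_ns=100, t_disk_ms=13.0):
    return hit_rate * t_ram_ns / 1e6 + (1 - hit_rate) * t_disk_ms

for hit in (0.0, 0.5, 0.9, 0.99):
    print(f"hit rate {hit:.0%}: {effective_access_ms(hit):.3f} ms avg")
# Even at 99% hits the misses dominate -- which is why cache size (and what
# the OS already caches for free) matters more than raw interface speed.
```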

    • A machine left on for a while would start to smoke.

      Yeah, I had a machine like that once. I think dust was blocking the air vents :)
    • Re:Large caches (Score:4, Insightful)

      by Gldm ( 600518 ) on Tuesday November 02, 2004 @07:15PM (#10706032)
      Yes, it would be great to add cache to SATA drives. Or wait, how about an onboard CPU to offload the processor? Oh and wait, let's add RAID5 xoring too! Oh and command queuing, elevator sort seek optimizations, and all the other nice SCSI stuff.

      If only someone made a product like that which supported many drives and most major OSes including linux... [3ware.com]

      If only I could find such a thing under this rock where I've been living the last few years!

      • Well... It would also have been nice if they had not made it even fussier than many "proper" high-end controllers which take SCSI drives.

        As someone who has suffered 3Ware on a chassis with a riser card, I can tell you that you quite often need extra luck to get it going on 20-30% of the motherboards (that is for 850x-SATA).
    • Re:Large caches (Score:5, Interesting)

      by BeerCat ( 685972 ) on Tuesday November 02, 2004 @07:21PM (#10706084) Homepage
      The more serious response...

      It seems odd that no one has come out with a standard cache expansion kit.

      What about a cache expansion kit that is a small daughterboard that can take multiple RAM type designs (SIMM / DIMM / SO-DIMM etc), and which then plugs into the drive's cache socket. This would mean that all the old RAM that you had to remove to upgrade your machine could be put to good use. Even though it would not be as fast as the RAM in main use, it would still be around 1000 times faster than the HD itself. OK, so trying to integrate 30-pin SIMMs would probably be a bit silly (especially with a limit of something like 8MB), but anything from about 168-pin would do.
      • Re:Large caches (Score:3, Interesting)

        by zakezuke ( 229119 )
        What about a cache expansion kit that is a small daughterboard that can take multiple RAM type designs (SIMM / DIMM / SO-DIMM etc), and which then plugs into the drive's cache socket. This would mean that all the old RAM that you had to remove to upgrade your machine could be put to good use. Even though it would not be as fast as the RAM in main use, it would still be around 1000 times faster than the HD itself. OK, so trying to integrate 30-pin SIMMs would probably be a bit silly (especially with a limit
      • I've often thought that a CD-ROM drive would be cool if you could plug a 512MB RAM stick in the back above the IDE connector; you could basically cache almost your whole CD in one go.

        That would be a fast CD-ROM.
    • A mother board with an ATA chipset that could plug in older dirt cheap SRAM or even newer DDR or better. Imagine a 4 gig cache of SRAM attached to your harddrives. A machine left on for a while would start to smoke.

      ...and in the event of power failure/child plug socket scenario/accidental powerbutton press/fuse blow/failing psu/nuclear war your install is completely fscked!
        • "and in the event of power failure/child plug socket scenario/accidental powerbutton press/fuse blow/failing psu/nuclear war your install is completely fscked!"
        1. was that pun intended?
        2. You would want to use the ram as a read buffer not as a write buffer.
    • Because it doesn't really make much of a difference (and nowhere near 5%) on that particular hardware. Consider the two drives mentioned in the article, the 10K V and the MaXLine III. While SR commented on the discrepancy, the 10K V, despite having half the cache, has a read service time nearly twice as fast as the ML3's, thanks to a faster spindle speed.

      The OS disk caching system ultimately provides essentially the same function dynamically. That's not to say that another layer of caching would not
    • The fundamental thing that everybody seems to be missing is the use of *WRITE* cache. If you add a big write cache to a HD, along with a small battery to keep it going should the system fail, you'll blow the doors off of everything. That's what the large fibre-channel RAID arrays do.

      That's why the increase in cache sizes helps so tremendously. You can avoid the spindle delay entirely.
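(To make the write-cache point concrete: a toy write-back cache in Python. The class and values are purely illustrative; real battery-backed controllers are far more involved.)

```python
from collections import OrderedDict

# Toy write-back cache: writes are acknowledged once they land in RAM, and
# the spindle delay is paid later, off the application's critical path.
# The battery keeps the RAM alive so a crash doesn't lose acknowledged writes.
class WriteBackCache:
    def __init__(self):
        self.dirty = OrderedDict()      # pending blocks, oldest first

    def write(self, block, data):
        self.dirty[block] = data        # RAM-speed: no seek, no rotation
        return "ack"                    # caller continues immediately

    def flush(self, disk):
        while self.dirty:               # background destage to the platters
            block, data = self.dirty.popitem(last=False)
            disk[block] = data          # the slow part happens here

disk = {}
cache = WriteBackCache()
cache.write(7, b"journal entry")        # returns without touching the disk
cache.flush(disk)                       # later: disk == {7: b'journal entry'}
```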
    • Guessssss what? (Score:4, Insightful)

      by Ayanami Rei ( 621112 ) * <rayanami@@@gmail...com> on Tuesday November 02, 2004 @11:49PM (#10707504) Journal
      All operating systems do this anyway with your system RAM.

      The memory on the drive is just there as a holding pen for pending reads and writes so that it can give the drive head a chance to get to where it needs to be, perhaps killing multiple birds with one stone.
      At a certain capacity you start needing more cache because you'll be dealing with potentially more complex access patterns (more disparate regions to access data, larger transfer units per track)

      It is not a substitute for a file-system/block cache.
    • I would not be surprised if on-drive cache sizes go to as much as 64MB very soon, especially with Serial ATA possibly reaching four times the burst transfer rate of current devices as early as 2007.
  • SCSI (Score:3, Informative)

    by iamnotacrook ( 816556 ) on Tuesday November 02, 2004 @07:08PM (#10705971)
    i recently used SCSI for the first time when i built a new fileserver for home. yeh, it cost but the performance increase was phenomenal .. i got what i paid for.

    i wont be moving back to IDE.

    • i recently used SCSI for the first time when i built a new fileserver for home. yeh, it cost but the performance increase was phenomenal .. i got what i paid for.

      I also prefer SCSI for the more-common 5-year warranties and very large MTBF ratings. I'm old enough, now, that I'm willing to pay more to not get slave-labor crap, and, for a multi-year investment, the extra $150 for a SCSI disk (I already had a controller) wasn't unreasonable. Even better, after a couple years, it's only two-thirds full, and
    • What's funny about scsi vs ide in general - for me at least - is that the most catastrophic disk failures I've ever had were all on scsi disks, even back when I used to use scsi-2 on older sun4m systems. We're talking disks that wouldn't power up, read, or anything. And I abuse scsi disks just as much as I do ide. And I've always paid more for scsi. I have some 18 gig 10,000 RPM IBM drives around here that cost over $450 apiece as little as 4 years ago.

      In the past 5 years I had only two ide problems - both were
  • Is there much point to new SCSI drives, as Serial ATA becomes a more widespread technology? From what I've heard, it can hit similar speeds but has several benefits over SCSI (not least of which is those nice thin cables) at a lower price.
    • Re:SATA (Score:2, Informative)

      by Anonymous Coward
      mean time between failure (MTBF)...

      SCSI 1.2 million hours

      SATA 0.6 million hours

      That, and SATA is still NOWHERE near the performance of mid-range, let alone higher-end, SCSI drives.

      • Re:SATA (Score:3, Interesting)

        That is true. You might think of a pair of SATA drives, such as the WD Raptor, together in a RAID as approaching the performance of a SCSI drive, but individually? Nope, no how, no way.

        It would be nice to see more hours out of the SATA drives. After all the big huff about the warranty reduction by Maxtor and WD, I picked up one of their "3 year" drives; it still shit the bed after 6 months. Yeah! 2 years' worth of pr0n, Enterprise and Red Dwarf episodes gone. Guess you'll still have to roll the dice on d

    • Re:SATA (Score:3, Informative)

      Serial ATA will not be able to compete with SCSI in the server market until implementations of native command queueing are complete. As far as I know, none of the SATA drives currently on the market support NCQ.
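(For readers wondering what command queuing actually buys: a minimal sketch of the elevator-style reordering idea behind TCQ/NCQ. This is a textbook SCAN toy, not any drive's actual firmware, which also weighs rotational position.)

```python
# Elevator (SCAN-style) reordering -- the core idea behind tagged/native
# command queuing: serve queued requests in head-sweep order instead of
# arrival order, cutting total seek distance.
def elevator_order(pending_tracks, head_at):
    up = sorted(t for t in pending_tracks if t >= head_at)
    down = sorted((t for t in pending_tracks if t < head_at), reverse=True)
    return up + down    # sweep outward, then back for the stragglers

print(elevator_order([98, 183, 37, 122, 14, 124, 65, 67], head_at=53))
# [65, 67, 98, 122, 124, 183, 37, 14]
```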

    • Re:SATA (Score:3, Informative)

      by ewhac ( 5844 )

      SATA:

      • Maximum transfer rate: 150MB/sec
      • Maximum number of devices: 4 (typical; controller-dependent)
      • Available spindle speeds: 7200 RPM
      • Typical seek time: 8.5ms

      SCSI:

      • Maximum transfer rate:
        • LVD Ultra SCSI: 80MB/sec
        • SCSI-160: 160MB/sec
        • SCSI-320: 320MB/sec
      • Maximum number of devices: 15
      • Available spindle speeds: 7200 RPM, 10000 RPM, 15000 RPM
      • Typical seek time: 4.5ms

      And yes, you can tell the difference, even as a "normal" user.

      This doesn't mean SATA sucks. In fact, it's quite good for the target appli

      • Re:SATA (Score:2, Interesting)

        by katz ( 36161 )
        SATA maximum spindle speeds are definitely not 7200 RPM. Western Digital's SATA Raptor drives are 10,000 RPM.
      • bullshit specs (Score:4, Informative)

        by RelliK ( 4466 ) on Tuesday November 02, 2004 @08:35PM (#10706511)

        SATA:

        * Maximum transfer rate: 150MB/sec
        * Maximum number of devices: 4 (typical; controller-dependent)
        * Available spindle speeds: 7200 RPM
        * Typical seek time: 8.5ms

        SCSI:
        * Maximum transfer rate:
        o LVD Ultra SCSI: 80MB/sec
        o SCSI-160: 160MB/sec
        o SCSI-320: 320MB/sec
        * Maximum number of devices: 15
        * Available spindle speeds: 7200 RPM, 10000 RPM, 15000 RPM
        * Typical seek time: 4.5ms


        1. 150MB/s is waaay more than a single drive can push, so it is more than sufficient. SATA is a point-to-point connection, one drive per channel. SCSI may be 320MB/s and support up to 15 devices, but that bandwidth is *shared* among all of them (see the quick arithmetic after this comment). By the time we have HDs that can actually deliver a 150MB/s transfer rate, faster SATA will be available.

        2. Maximum number of devices: that's a number you pulled out of your ass. You can have as many SATA devices as SATA ports. 3ware makes nice 12-port RAID controllers.

        3. Spindle speed & seek time are properties of the *hard drive*, not the *interface* (do you understand the difference?). A SCSI and a SATA HD with otherwise identical specs will have the same performance. Also, there are 10000RPM SATA HDs -- the WD Raptors, though they are not very cost-effective. If reasonably-priced 10K and 15K RPM SATA drives are released, they will totally kill the SCSI market (which is, I suspect, the main reason they are not available).
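(Point 1's shared-bus vs. point-to-point arithmetic, spelled out. The 60MB/s sustained-per-drive figure is an assumed ballpark for drives of this era, not a spec.)

```python
# Shared-bus vs. point-to-point arithmetic from point 1 above. The 60MB/s
# sustained-per-drive figure is an assumed ballpark, not a measured spec.
def aggregate_mb_s(drives, per_drive=60, scsi_bus=320, sata_link=150):
    scsi = min(drives * per_drive, scsi_bus)   # one bus shared by the chain
    sata = drives * min(per_drive, sata_link)  # one dedicated link per drive
    return scsi, sata

for n in (2, 6, 12):
    scsi, sata = aggregate_mb_s(n)
    print(f"{n:2d} drives: SCSI {scsi} MB/s total, SATA {sata} MB/s total")
# 2 drives: 120 vs 120; 6 drives: 320 vs 360; 12 drives: 320 vs 720
```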
      • And it also doesn't suffer from bus termination-fu and other black arts of SCSI configuration.

        that has been a non-issue since LVD scsi came into existence in 1998. active termination cables as well as active termination on the cards solve the issue completely.

        Setting LVD scsi id is super easy (with 16 of them to choose from) and some newer drives will autoset their id.

        anyways, SCSI can have multiple hosts on the chain. I can have a drive array with 2 computers accessing it at the same time. somethin
        • > some newer [LVD] drives will autoset their id.

          Actually, I haven't seen a disk in YEARS that won't do this.

          Of course, it may have something to do with the equipment I'm buying; I was kind of surprised to see that there are still 7200 RPM disks out there when I read the OP's post. Most of my disks are 15,000 RPM with the older ones being 10,000 RPM.

          > we went back to the stack of 12 32gig scsi
          > drives and kept the SATA drives for a storage
          > server use only.

          If you want to ditch the cable nest,
      • SATA: ...
        * Available spindle speeds: 7200 RPM


        Some SATA drives run to 10,000 [westerndigital.com]
      • Actually, there is the Western Digital 'Raptor' line of SATA disks, which are of comparable speed to a SCSI drive.

        -Adam
    • Re:SATA (Score:5, Informative)

      by Lumpy ( 12016 ) on Tuesday November 02, 2004 @07:33PM (#10706176) Homepage
      yes, I have OLD SCSI U160 drives and tried the new fast SATA drives for my Media array.

      the old SCSI drives from 4 years ago kick the ever living crap out of the SATA drives.

      this is non raid performance. When capturing RAW video from a TARGA 3000 card (A $7,500.00 professional video capture card) the SATA drives would drop frames and completely CHOKE after 5 minutes of capture at 40Megabytes per second.

      the old SCSI drives with an even older 29160 scsi card had zero problems.

      I hope that SATA will speed up eventually, but SCSI is drastically faster, even from ages ago.

      I'm betting that SCSI U320 makes the fastest SATA stuff look like a complete joke.
      • PCI (Score:3, Interesting)

        by dspeyer ( 531333 )
        What difference does it make whether your controller/disk interface is 150MB/s or 320MB/s when your data is going to sit on the controller waiting for the 132MB/s PCI bus anyway?

        I'm serious. Is there some way around the PCI bottleneck? Is it not as bad as I think it is? Should we all be using PCI-X anyway?

        • You plug an Ultra320 SCSI Host Bus Adapter into a 64-bit 66MHz PCI slot, not a 32-bit 33MHz PCI slot. That, and most high-end workstation/server motherboards that are going to be used in a SCSI RAID box will have multiple independent PCI buses, so their bandwidth will not be shared with other devices.
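(The bus numbers in this sub-thread follow directly from width times clock; a two-line check.)

```python
# PCI bandwidth is just bus width times clock:
def pci_mb_s(bits, mhz):
    return bits // 8 * mhz   # bytes per transfer * million transfers/sec

print(pci_mb_s(32, 33))  # 132 MB/s -- plain PCI, the bottleneck in question
print(pci_mb_s(64, 66))  # 528 MB/s -- headroom for an Ultra320 adapter
```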
  • No way (Score:5, Informative)

    by Cutie Pi ( 588366 ) on Tuesday November 02, 2004 @07:13PM (#10706010)
    First, Moore's Law has nothing to do with hard drive storage space. That said, hard drive capacities have been growing at a pace exceeding Moore's Law for several years now. If that rate slows down, it'll probably still be a pretty fast pace. Besides, these are fast SCSI hard drives. You have to look at IDE hard drives to really see storage space improvements.

    Second, Intel cancelled their 4GHz CPU because of heat problems. It turns out that Intel's engineers just can't get the leakage current down to low enough levels. But again, Moore's law has nothing to do with clock speed... the metric is the number of transistors on the chip. In this regard, Moore's law is still on track. To counter the heat issue, logic designers will have to rethink their designs to do more work per clock cycle. AMD already does this with their chips. Intel is going down this route too with its Pentium M. Same with IBM's G5. The Pentium 4 is a horrendous example because Intel designed it to be inefficient so they could ramp its clock speed. Well, now the consequences of that stupidity are showing.

    You know, I've heard that the human brain operates at about a 10Hz frequency, has 100Bln neurons, and trillions of interconnections. Amazingly, its power dissipation is at around 40W. (And its MIPS rating is on the order of 10^15 instructions per second). Clearly mother nature got it right for efficient computation.
    • by Anonymous Coward
      "You know, I've heard that the human brain operates at about a 10Hz frequency, has 100Bln neurons, and trillions of interconnections. Amazingly, its power dissipation is at around 40W. (And its MIPS rating is on the order of 10^15 instructions per second). Clearly mother nature got it right for efficient computation."

      Try calculating PI on it.
    • Re:No way (Score:5, Funny)

      by Anonymous Coward on Tuesday November 02, 2004 @07:30PM (#10706145)
      Amazingly, its power dissipation is at around 40W
      And it has a bitchin' liquid cooling system. I've seen some groovy case mods, too.
    • First, Moore's Law has nothing to do with hard drive storage space. That said, hard drive capacities have been growing at a pace exceeding Moore's Law for several years now.

      Ummmm, anyone care to explain this to me? My head hurts thinking about it...
    • Re:No way (Score:3, Insightful)

      by kfg ( 145172 )
      Clearly mother nature got it right for efficient computation.

      At the cost of deterministic precision and data integrity.

      When designing a computational device the ideal depends a good deal on just what it is you are trying to compute and there are always engineering tradeoffs.

      KFG
    • You know, I've heard that the human brain operates at about a 10Hz frequency, has 100Bln neurons, and trillions of interconnections. Amazingly, its power dissipation is at around 40W. (And its MIPS rating is on the order of 10^15 instructions per second). Clearly mother nature got it right for efficient computation.

      Give them time. Evolution has a few billion years head start on R&D.
    • I've heard the numbers are 5 billion neurons, 5000 connections each, 1000 Hz. Toss in the 10% utilization number that's floating around, and you get... 2.5x10^15.
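(Checking the parent's arithmetic with the poster's own rough inputs; none of these figures are established neuroscience.)

```python
# The parent's estimate, multiplied out (all inputs are the poster's
# rough figures, not established neuroscience):
neurons = 5e9
connections_per_neuron = 5000
rate_hz = 1000
utilization = 0.10

print(f"{neurons * connections_per_neuron * rate_hz * utilization:.1e}")
# 2.5e+15 synaptic events/sec, matching the number quoted above
```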
  • by shoppa ( 464619 ) on Tuesday November 02, 2004 @07:18PM (#10706054)
    Forget Moore's law.

    SCSI drive capacities have stayed where they were while IDE drive capacities got bigger because for real-world RAID arrays (where SCSI drives are used) capacity isn't the goal. It's speed. If you need 1 terabyte of really fast RAID storage, it makes far more sense to put in fifteen 73GB SCSI drives (10K RPM, 15K RPM) than it does to use four 300GB IDE drives (7.2K RPM).

    In the meantime IDE drives have begun to be used in RAID arrays, but usually where capacity matters and not performance. Admittedly the lines have blurred, especially for network-connected storage arrays where ethernet pipes are the limit and you cannot really tell the difference between a good IDE array and a regular SCSI array.

    • Exactly.

      I've just ordered a rackmount server with 4 15K RPM 36GB SCSI drives in RAID 10 configuration. I need the speed, not the size.

      In fact, in this particular case the resulting size (72GB) of the array is overkill for what I want.

      I'm hardly likely to exchange my array for a single 300GB IDE drive.
      it makes far more sense to put in fifteen 73GB SCSI drives (10K RPM, 15K RPM) than it does to use four 300GB IDE drives (7.2K RPM).

      Wrong. You can easily get thirty 300GB 7.2kRPM drives for the same or lower price than fifteen 73GB 15kRPM SCSI drives. Now run the IDE/SATA drives in RAID 1+0 configuration (versus RAID 0 over SCSI drives), and you get:

      • 2x the available storage capacity even with RAID 1+0
      • fault-tolerance because of RAID 1+0 in the IDE/SATA setup
      • about the same power consumption (15kRPM drives are power hung
  • Since when does Moore's law apply to hard drives? Does fitting double the transistors in half the space make your hard drive have a higher capacity?

    Obviously, nobody remembers the hard drive capacity lull that happened about '99 or so. Hard drives were quickly nearing their technological limits. Then, IBM got GMR heads [economist.com] working in hard drives, and everyone has been pushing that technology as fast as they could. Perhaps that technology, too, has reached its limit.

    You could be an optimist and say that a
    • whether hard drive technology is maxing out.
      All you need to do is observe how many platters are being used. If there truly is no way to increase the density of platters, you can simply add more platters. Since we're still seeing drives with only two to three platters, it is safe to assume there is still a capacity ramp in the works.
  • by RealAlaskan ( 576404 ) on Tuesday November 02, 2004 @07:25PM (#10706117) Homepage Journal
    So, if Moore's Observation does fail, how bad is it?

    We've said recently that as machines get faster, the software gets slower, so the work we have to do doesn't get sped up much (though expectations for bells and whistles like fancy typesetting go up and up...). So would it really make such a big difference in our lives?

    Here's one nifty thing that will break with Moore's Observation: the optimal slack time for large computations [gil-barad.net]. If you're doing large computations, it would suck to see your slack time evaporate!
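(A sketch of the linked slack-time idea under stated assumptions: if machines double in speed every d years, a job needing T years of today's compute finishes earliest when the wait w minimizes w + T·2^(-w/d). The 1.5-year doubling period and the closed form are my working assumptions, not taken from the linked page.)

```python
import math

# If machines double in speed every `d` years, a job needing `T` years of
# today's compute finishes earliest when the wait w minimizes
#   f(w) = w + T * 2**(-w / d)
# Setting f'(w) = 0 gives w* = d * log2(T * ln 2 / d); start now if negative.
def optimal_wait_years(T, d=1.5):
    return max(d * math.log2(T * math.log(2) / d), 0.0)

for T in (1, 5, 20):
    w = optimal_wait_years(T)
    total = w + T * 2 ** (-w / 1.5)
    print(f"{T:2d}-year job: wait {w:.2f}y, finish after {total:.2f}y")
# If the doubling stops, the slack evaporates and total time is just T.
```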

  • This is a low-performing SCSI drive when compared to the 15K RPM Ultra320 drives out there. With things like out-of-order execution of tagged commands, SCSI beats the snot out of IDE when used in database servers and the like.

    • Capacity is not the only reason people buy SCSI, but that doesn't mean they wouldn't like a lot more capacity anyway.

      In a lot of situations, given a ~10% performance difference vs. a 100% capacity gain, it's a given that I'll go with capacity - especially since the RAID arrays I build typically max out the busses they're connected to, and the small percentage difference won't matter.

      steve
  • by Gldm ( 600518 ) on Tuesday November 02, 2004 @07:43PM (#10706237)
    First of all, Moore said nothing about storage, only transistors in semiconductors.

    If we assume there is a similar correlation with density on magnetic media, it still doesn't necessarily mean it's slowing down now.

    AFAIK, drives had a major slowdown in the past around the 8GB mark, then suddenly 20GB->120GB appeared very rapidly, and then things slowed down a bit again. I'd need to do a lot of research and get some actual data before making a statement about the exponential growth of magnetic storage density and whether or not it is feasible for it to continue, or at what rate, in the future.

    Also, narrowing the comparison to just SCSI devices is foolish, as they are rapidly being supplanted by cheaper ATA based devices. Yes SCSI is superior, it always has been. Except in one place, cost per unit storage. And as they say, quantity has a quality all its own.

    Also, lower-cost disks such as SATA enable alternate means of increasing capacity and performance, such as low-cost RAID. SCSI used the RAID argument over mainframe SLED solutions to win in the market. Now mainstream SATA drives are using the exact same argument vs SCSI. The same principles that were true in the 80s and 90s are true now: more disks have inherent advantages, and can be flexibly arranged to provide whichever one you want, whether it's performance, capacity, or reliability, in varying degrees. All for lower cost, even with the added hardware overhead of the controller.

    Finally, there's one more factor that could be causing the slowdown in disk expansion: file sizes do not expand at the same rate, so demand for larger storage is being outpaced by the increase in density. I'd be interested in seeing what the average webpage size is from 1994-2004. I'm sure it goes up really quickly as features like image support and frames first come in, but then mostly levels off. Word processor documents, even bloated by modern office suites, are still not more than an order of magnitude larger than they were 20 years ago. People still put their school papers and resumes on (GASP!) floppy disks. And floppies' rate of density increase has been zero for quite some time, discounting alternate formats such as zip and usb flash.

    As storage continues to increase, we're seeing people actually have enough storage. I remember having to pick which games I could install on my 286 and 486. Now I just throw them on, and by the time my disk fills in a year I just buy more disk, as it's that cheap. My 105MB hardcard for my 286 cost ~$700 in 1989 or so. The 1.7GB fast SCSI-2 Micropolis HD I replaced my 486's 525MB SCSI-2 Conner with cost $900 in 1994. These days I could go grab a 200GB disk for $99 on sale. But the point isn't just that the technology is better. In 1994 the biggest disk I could get was about 9GB and cost thousands. These days if I want the biggest thing on the block it's 400GB and costs under $400. What the average user gets in a new machine is much closer to the most advanced part on the market than it was 15 years ago, when we had 340MB HDs in home machines and 4GB HDs in high-end servers. Where did the high-end disks go? RAID replaced them. These days if you want an order of magnitude more than what a major OEM ships as standard (say, 160GB*10) you go for a RAID, either SCSI or ATA.

    Once you're paying for RAID hardware, you're getting performance levels in the enabling hardware that make SCSI irrelevant. SCSI has a 320MB/sec bus, command queueing on drives, and a dedicated CPU and cache on the host controller. A high-end SATA RAID like 3Ware's has 150MB/sec per-drive non-shared switched bandwidth, command queuing on drives, and a dedicated CPU and cache on the host controller. Only the 3Ware setup will give you VASTLY more bang for the buck, because you can buy more and larger disks to give whatever performance/capacity/reliability you want. A 12-drive SATA RAID10 is going to utterly destroy a 5-drive SCSI RAID5 in every possible way except for thermal output and physical space, which can be
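(The poster's own price points from the paragraph above, reduced to dollars per gigabyte to show the trend being described.)

```python
# The poster's own price points, as dollars per gigabyte
# (1GB taken as 1000MB for the 1989 entry):
history = [
    (1989, 700, 0.105),  # 105MB hardcard, ~$700
    (1994, 900, 1.7),    # 1.7GB SCSI-2 Micropolis, $900
    (2004, 99, 200.0),   # 200GB disk on sale, $99
]
for year, dollars, gb in history:
    print(f"{year}: ${dollars / gb:,.2f}/GB")
# 1989: $6,666.67/GB -> 1994: $529.41/GB -> 2004: $0.50/GB
```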
  • It's 300GB. Are the editors even trying?
  • Ugh... (Score:4, Informative)

    by Hank Reardon ( 534417 ) on Tuesday November 02, 2004 @08:02PM (#10706352) Homepage Journal

    Maxtor...

    After losing a total of twelve DiamondMax drives to hardware failure, never again. Eight I had purchased, the other four were replacements for four failures.

    I had four in two separate mirror configurations fail within minutes of each other. The original eight were bad within twelve weeks of purchase.

    My local retailer honored the replacement warranties with more DiamondMax drives. I accepted on the first four failures and those died within 6 months.

    Never, EVER again will I buy anything from Maxtor.

    • Of course it's actually a Quantum drive, but Quantum is now owned by Maxtor.
    • Have you had this problem with other brands? With so many failures, it could be your hardware (power supply, etc.)
      • Never with any "real" brands. I had an old drive in a junk box from a manufacturer called Palladium or something like that that went bonkers after a couple of weeks. I expected this one to hork out, so it was just used for extra swap.

        Other than that, no. I even had one power supply fail and send 110V AC through one of the 5V rails, and it blew everything in the box except the 20GB IBM DeskStar I had in at the time.

    • Re:Ugh... (Score:3, Insightful)

      by NerveGas ( 168686 )

      I've bought a lot of Seagates over the past couple of years, and never had a problem - until I got a batch of 120's that started crapping out like flies. Every other drive before then (and after then) has been fine.

      Before that, I'd bought Fujitsus for about a year, until nearly every one of them went belly-up in a short amount of time.

      Wait, I've also had Western Digital drives crap out in large numbers before. And what about the whole IBM "DeathStar" fiasco?

      Every manufacturer gets bad runs.
  • Well, when I saw it was Maxtor, I kind of giggled.

    They've got a shitty reputation for a reason, duh.
  • by TyrranzzX ( 617713 ) on Tuesday November 02, 2004 @08:49PM (#10706574) Journal
    Then all those software companies making investments in more and more inefficient software are gonna take a hit big time. It would definitely be nice to see a good sine curve to Moore's law, where you get peaks of development (meaning progress is doubling every year or so) and drops (where tech is only gaining 1.2-1.3 times capacity every 2 or so years). That gives technicians a chance to catch up and spend time unionizing, gives companies time to review their strategies and focus their designers on better materials and more feature-filled hardware, and it forces software designers, and especially their bosses, to rethink their strategy of creating ultimately trashy, inefficient, flashy software tools.

    As for Moore's law coming to an end, we'll have to see. There's been an awful lot of new stuff on the horizon, and I think we've gotten to growing pains number 4, where major hardware changes are occurring. The first started with the 80386 and 80486, virtual mode, SIMM memory, EISA, IDE, and AT standards. The second came with the Pentium, EIDE, PCI, AGP, MMX, 3DNow, widespread modem use, and CD-ROMs, along with the ATX standard. The third came with the Pentium 4, DDR/QDR, DVD-ROM drives, PCI taking off into never-never land (how many different kinds of cards is that?), and LANing PCs together via DSL lines. Now we're in the 4th generation, where we've got 64-bit datapaths, new instruction set additions, SATA, PCI-X and PCI Express, DVD burners, gigabit Ethernet, usable, pretty Linux, and mini-ITX standards.

    The first set of changes turned the PC into a multi-user, inexpensive platform. The second gave it internetworkability and spurred the internet, as well as driving it into some multimedia stuff. The third added 3D gaming to the platform, perfected the networking aspect, and added a lot more data features and, most importantly, stability. Now we're getting into the most significant of those stages: making machines a *lot* more powerful and easier to configure. Just look at some of the newer 3D games coming out; I remember watching some cutscenes from old FF games as well as some old computer games, and Doom 3 blows their socks off. Again, after these changes have occurred, we'll move into another term of relative peace.

    The 5th generation tech I fully expect to come in around 2007-2008, and it will be centered around public wireless networks (more or less, people leaving open wifi all over the place), portability, and altered reality (think virtual graffiti, waypointing your friends, etc.). It'll also be marked by a major freedom-vs-corporatism fight; DRM vs the internet, for example. DRM will probably seek to segment the internet into trade zones, or as the companies will call them, "trustworthy zones" (example message: "You are leaving the safe zone. If you leave the safe zone, you will be subject to viruses, trojans, malware, and bad stuff. Do you wish to continue?"). As malicious software becomes more prevalent and voracious, we'll see the open source movement gaining a lot of steam, considering these corps will begin digitally enslaving people. Why spend a billion on advertising when you could simply serve it to people off of their own computers?

    So, within the next few years we're going to see a lot of bad and good things happening, and most likely some people's lives turning to hell, namely those who don't care. Those who choose to fight it out will probably be persecuted; breaking DRM is, after all, against the DMCA, and if MS gets angry, they can pull strings to have your linux-coding monkey ass assassinated or thrown into jail as a terrorist. Things'll get interesting, to say the least.
    • Wow. Long post. Careful identification of "stages" of evolution, as though EIDE, PCI, AGP, MMX, 3DNow, and widespread modem use had anything at all to do with each other.

      So, what defines these stages you babble about?
      • Functionality and time. EIDE, PCI, and AGP all added functionality to the PC; all the architectures were designed for multiple uses. EIDE added ATAPI, which allowed for the use of CD-ROM, Zip disk, Jaz drives, etc. on the EIDE bus. PCI added a whole ton of new functions, but mainly automatic configuration and a better interface to the processor. AGP added a direct bus to the processor, allowing MMX and 3DNow to really take off once implemented, as well as some real nice multimedia stuff. In fact, I rea
  • by otis wildflower ( 4889 ) on Tuesday November 02, 2004 @10:05PM (#10706913) Homepage
    ... I can guarantee you most DB guys I know would shit their pants in joy if they could get 15k RPM 9GB drives in bulk. I know of DBAs that buy 18g drives and only use half of them. In theory you only use the inner cylinders, but internal geometry these days is largely divorced from logical geometry.. DBAs who deal with random small writes want lots and lots of spindles striping using lots and lots of hardware RAID adapters.

    The super exciting thing about the 2.5" drives IMHO for SCSI is the possibility of boosting rotational speed thanks to reduced media weight. If you could get 1" 20-40kRPM 9GB SCSI or SAS drives and join together 100 of them that would be unbelievable.
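(The spindle-count argument above, in numbers: random IOPS scale roughly linearly with spindles. The per-drive latency figures below are assumed ballparks for 15K RPM drives, not vendor specs.)

```python
# Random-I/O throughput scales with spindle count, which is why DBAs
# stripe over many small fast drives. Seek and rotation figures are
# assumed ballparks for 15K RPM drives, not vendor specs.
def random_iops(spindles, avg_seek_ms=3.8, rpm=15000):
    half_rotation_ms = 0.5 * 60_000 / rpm    # average rotational wait
    per_drive = 1000 / (avg_seek_ms + half_rotation_ms)
    return spindles * per_drive

for n in (1, 10, 100):
    print(f"{n:3d} spindles: ~{random_iops(n):,.0f} random IOPS")
# 1 -> ~172, 10 -> ~1,724, 100 -> ~17,241
```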

  • Moore's "Observation" (Score:4, Informative)

    by StikyPad ( 445176 ) on Wednesday November 03, 2004 @05:45AM (#10708878) Homepage
    Had nothing to do with hard drives, or even processor speed. It merely stated that technology was advancing such that the number of transistors on a chip doubled every 18 months.
