Seagate Claims 2.5" SCSI Drive is World's Fastest

theraindog writes "Seagate has announced a 2.5" SCSI hard drive that spins at an astounding 15,000RPM. The Savvio 15K is the first 2.5" hard drive with a 15K-RPM spindle speed, but what's more interesting is that Seagate claims it's the fastest hard drive on the market. Indeed, the drive boasts an impressive 2.9ms seek time, which is more than half a millisecond quicker than that of comparable 3.5" SCSI drives. The Savvio 15K also features perpendicular recording technology and a claimed Mean Time Between Failures of 1.6 million hours."
This discussion has been archived. No new comments can be posted.

  • wow (Score:1, Informative)

    by mastershake_phd ( 1050150 ) on Wednesday January 17, 2007 @01:31PM (#17648760) Homepage
    and a claimed Mean Time Between Failures of 1.6 million hours.

    That's 182 years.
  • Re:laptop use? (Score:5, Informative)

    by morgan_greywolf ( 835522 ) on Wednesday January 17, 2007 @01:33PM (#17648784) Homepage Journal
    Generally speaking, Seagate's Savvio line of HDDs is intended for server and enterprise storage (read: SAN/NAS) use, not for laptop use. 2.5" hard drives are particularly useful in some compact storage arrays or in blade servers. They probably consume way too much power for your average laptop. Also, most laptops don't feature SCSI storage. Most use IDE or SATA. It's possible that Seagate could, in the future, come out with a SATA version of this drive, but I don't think it's likely given the power consumption and heat characteristics of 15K RPM drives. Seagate's laptop drives don't even break 7.2K.

  • by Pegasus ( 13291 ) on Wednesday January 17, 2007 @01:34PM (#17648798) Homepage
    I've had 15k RPM disks in production since ... 2002, I think. The poster should have mentioned the data-per-actuator figure from TFA, because that is what really matters.
  • Re:wow (Score:4, Informative)

    by pe1chl ( 90186 ) on Wednesday January 17, 2007 @01:39PM (#17648882)
    Before you think that this means it has a lifetime of 182 years: it doesn't. MTBF is not a lifetime figure.
  • by skoaldipper ( 752281 ) on Wednesday January 17, 2007 @01:45PM (#17648972)
    In less [georgetown.edu] than two years [ntt.co.jp], magnetic storage will sit alongside vacuum tubes and punch cards in the Computing wing at the Smithsonian.
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday January 17, 2007 @01:54PM (#17649126) Homepage Journal
    It's seizegate, so the warranty is five years [crn.com].
  • by ErMaC ( 131019 ) <ermac@@@ermacstudios...org> on Wednesday January 17, 2007 @02:26PM (#17649728) Homepage
    SAS is not designed to be used by a SATA controller. If you wanted your cheapo SATA controller to work with SAS drives, it wouldn't be a cheapo controller. The difference between SAS and SATA is that SAS uses SCSI as its command language, which requires a whole different set of logic on the controller end.
    If you're a workstation user looking for a speed boost, then you use SCSI or SAS drives with a proper controller like workstations have since 1990.

    And Flash drives have almost no chance of penetrating the server market, which is where this drive is targeted (not at laptop or workstation users). Don't let the 2.5" form factor make you think it's for laptops; it's for high-density servers or blades.
  • by jdgeorge ( 18767 ) on Wednesday January 17, 2007 @02:29PM (#17649764)
    You do realize that the SSD you reference is based on flash, right? If you look carefully, you will find that no vendors list write seek times or write IOPS for such devices. The reason is that the performance is just plain awful.

    RAM based SSD is nice, but flash based SSD won't touch a decent 15k drive for any write heavy application.


    The reason "seek time" isn't listed for SSD devices is the same reason dynamic RAM manufacturers don't list "seek time" in their device specifications, namely, it doesn't apply. In storage device parlance "seek time" refers to the time it takes for the drive head to reach the target data on a rotating disk. Read the (ahem) authoritative Wikipedia article here [wikipedia.org].

    Furthermore, the recently announced flash-based SSDs from Samsung and SanDisk have file access times far superior to any rotating disk-based storage device. It is true that the dynamic RAM-based devices have access times approximately 10 times faster than the flash-based devices, but the flash-based devices' file access times are still typically much more than 10 times faster than a disk drive's seek time. For reference, see the SanDisk press release [sandisk.com] for their SSD device.
  • by tppublic ( 899574 ) on Wednesday January 17, 2007 @02:31PM (#17649798)
    You don't see a reason to switch, because the benefits of SAS are in reliability, not in speed. The mechanism inside an enterprise drive is different than that in a consumer drive, and you can see that in the reliability specs and the warranty periods. Given that most consumer data really isn't mission critical (as much as people claim it is), RAID 1 SATA drives are sufficient.

    Seagate Research presented a good technical article [usenix.org] on SCSI vs. SATA back in 2003. Much of it is still relevant today (though now it's SAS vs. SATA).

  • Re:wow (Score:5, Informative)

    by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Wednesday January 17, 2007 @02:31PM (#17649806)
    MTBF is only defined within the drive's expected service life (something like 3 or 5 years). So, if you take 182 drives, you expect about 5 of them to die within 5 years, even if all of them die within 10 years.
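
    As a quick sanity check of the arithmetic in this thread (a sketch of the two readings of MTBF, not anything from Seagate's spec sheet):

    ```python
    # Sanity check of the MTBF arithmetic: 1.6M-hour MTBF, 5-year service life.
    HOURS_PER_YEAR = 24 * 365  # 8760

    mtbf_hours = 1.6e6

    # Naive "lifetime" reading: 1.6 million hours expressed in years.
    naive_years = mtbf_hours / HOURS_PER_YEAR

    # Correct reading: expected failures across a fleet over the service life.
    drives = 182
    service_years = 5
    fleet_hours = drives * service_years * HOURS_PER_YEAR
    expected_failures = fleet_hours / mtbf_hours

    print(round(naive_years, 1), round(expected_failures, 1))  # 182.6 5.0
    ```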
  • by Anonymous Coward on Wednesday January 17, 2007 @02:34PM (#17649850)
    Right now I'm sitting in a room with 40 Seagate 3.5" 15k RPM drives. The noise isn't that bad. I can still hear the heads move over the rotation noise. Of course in a few months when the bearings start going, then they'll start screeching like hell. I'm dreading that, because when the bearings in the 7,200 RPM Seagates we replaced with the 15k ones started failing, I couldn't hear well enough to talk on the phone. I'm sure it's going to be worse with the 15k ones.
  • by Anonymous Coward on Wednesday January 17, 2007 @03:06PM (#17650322)
    You are wrong... your figure is hundreds of inches/hour, not km/hour. 3.5" * pi * 2.54 = ~28 cm circumference. * 15,000 RPM = 4.2e5 cm/min = 4.2 km/min = 251 km/hour. The edge velocity of a 3.5" drive compared to a 2.5" drive is simply the ratio of their diameters.
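
    The unit-conversion chain above works out as follows (a quick sketch; the 2.5" figure is an extrapolation via the diameter ratio, as the comment suggests):

    ```python
    import math

    # Edge velocity of a 3.5" platter spinning at 15,000 RPM.
    diameter_cm = 3.5 * 2.54                  # ~8.89 cm
    circumference_cm = math.pi * diameter_cm  # ~27.9 cm

    # cm/min -> km/min -> km/h
    km_per_hour = circumference_cm * 15000 / 100 / 1000 * 60  # ~251 km/h

    # A 2.5" platter's edge speed scales by the ratio of the diameters.
    km_per_hour_small = km_per_hour * 2.5 / 3.5  # ~180 km/h

    print(round(km_per_hour), round(km_per_hour_small))  # 251 180
    ```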
  • by TopSpin ( 753 ) * on Wednesday January 17, 2007 @03:43PM (#17650880) Journal
    This is at the expense of all that extra storage area.

    The people for whom these high end disks are intended aren't concerned with the "storage area" of individual devices. They care about the ratio of storage to spindles and arms. They buy things like this [tpc.org].

    Why is this front page news?

    Because it's a site about stuff geeks want to read. It's actually rather nice to hit the page and find some news about the latest incremental change in storage, as opposed to more [slashdot.org] move-slash, dot-on politics [slashdot.org].

  • by rrohbeck ( 944847 ) on Wednesday January 17, 2007 @03:52PM (#17651048)

    Can anyone explain to me why SCSI drives always seem to be lagging IDE in terms of capacity?
    The main limitation for bit density on a high speed drive is the channel data rate (since you can't use anything but standard CMOS in a low power, high volume, low margin product.) If you spin faster, at a given maximum bit rate, you lose bit density. Also, for faster seeks, you have to put down more servo information (otherwise you may not see any servo bursts for some time while the head is crossing only data.)
    You can generally stuff more data on a platter by spinning it slower. That's why basic 2.5" drives usually spin at 5400 or even 4500 rpm.
    Of course the interface has nothing to do with it. SCSI=>high end=>faster=>lower capacity. This may actually change with the convergence between SATA and SAS.
  • by Anonymous Coward on Wednesday January 17, 2007 @06:17PM (#17654026)

    You could get just about as high an average seek if you partitioned up a 3.5" 15K drive and only kept data on the inner partition.
    Perhaps, but the 2.5" drives use less power and you can squeeze more of them into the same space. This gives you more performance for a fixed amount of space or power consumption.
  • by Anonymous Coward on Wednesday January 17, 2007 @10:53PM (#17658000)
    Wow, you're correct.  Here's the output from smartctl from one of 90 new Seagate 750 GByte drives we have (w/ the middle unimportant columns deleted for clarity):

    ID# ATTRIBUTE_NAME         RAW_VALUE
      1 Raw_Read_Error_Rate    214270477
      4 Start_Stop_Count       21
      7 Seek_Error_Rate        10434919
      9 Power_On_Hours         163
    194 Temperature_Celsius    42
    195 Hardware_ECC_Recovered 2701

    Notice it has only run 163 hours, but has 214,270,477 read errors!  That's around 365 errors per second.  Something definitely isn't right about the way Seagate counts errors.

    It also claims the drive is at 42 degrees C (about 108 degrees F).  It's in a case with very good airflow in a 65 degree F computer room.  That number is also bogus.

    The other three drives I looked at had similar numbers.  Out of the 90, two have quit so far.  I'm going to have fun trying to keep that 67TByte storage cluster running.
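
    The errors-per-second figure above does check out arithmetically (a minimal sketch of the naive reading; whether Seagate's raw counter actually counts discrete errors is a separate question):

    ```python
    # Naive reading of the smartctl output above: raw read errors per second.
    raw_read_errors = 214_270_477
    power_on_hours = 163

    errors_per_second = raw_read_errors / (power_on_hours * 3600)
    print(round(errors_per_second))  # 365
    ```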
  • by Agripa ( 139780 ) on Wednesday January 17, 2007 @11:40PM (#17658402)
    In this case, they've got many times lower capacity than even their 10k RPM 2.5" HDD, never mind their 3.5" HDDs.

    One of the applications for these drives is systems that are performance-limited by access time rather than capacity and that cannot yet use solid state storage. In a lot of very large storage installations, the existing arrays already have underutilized capacity, because excess spindles and actuators have to be added to lower the average access time under concurrent requests. It is not uncommon to leave even the inner area of 3.5-inch drives unused, because the extra capacity is not needed and doing so marginally lowers the access time, which is what matters most in these systems.

    Personally, I'll wait for 3.5" HDDs with dual servos instead (basically, internal RAID), which will completely smoke this, and everything else out there.

    Dual-actuator drives would indeed help significantly, and they have been tried; however, the price premium over simply using twice as many standard drives would seem to make them too expensive. I suspect solid state storage will become cost-effective before multiple-actuator drives do.
  • Re:Flash Drives (Score:3, Informative)

    by drsmithy ( 35869 ) <drsmithy@nOSPAm.gmail.com> on Thursday January 18, 2007 @12:25AM (#17658774)

    What do you mean? I fully expect that rotating drives are on their way out. There are too many advantages to flash, and the disadvantages of using SSDs in a server environment are being worked out as we speak. I'm willing to wager that within 3 years SSDs will beat high-end HDDs in every desirable metric save price - and price is just a matter of time.

    I doubt SSDs are going to come within a bull's roar of magnetic media in terms of cost-effectiveness any time soon (if they ever do).

    What I *can* see is the growing use of flash [drives] as an intermediate caching device - in SANs/NASes (eg: each physical array comes with an SSD for caching purposes), in individual drives (the drives with flash RAM that have been talked about recently), in some magic device that plugs in between the regular drives and the disk controller, and in the poor-man's DIY version at the OS level (eg: Vista's "ReadyBoost").

    I can also see them being used in small scale, very specific tasks (eg: DB transaction logs).

    But flash completely - or even meaningfully - replacing magnetic media in the foreseeable future? No way. It just can't provide sufficient density at a reasonable cost. Price out a 500G (usable) array of flash disk. Even being generous and using a parity-based RAID scheme, where you only need n+1 or n+2 disks, it is still going to cost vastly more than an array of regular disks (and potentially require more physical space as well).
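
    As a back-of-the-envelope version of that pricing exercise (the per-GB prices here are purely illustrative assumptions, not figures from the comment or any vendor):

    ```python
    # Hypothetical cost comparison for a 500 GB usable parity-based array.
    # Both per-GB prices below are illustrative assumptions, circa 2007.
    usable_gb = 500
    flash_usd_per_gb = 15.00  # assumed flash SSD price
    hdd_usd_per_gb = 0.50     # assumed enterprise HDD price

    # Generous parity scheme: n+1 across 8 spindles, so 8/7 raw overhead.
    raw_gb = usable_gb * 8 / 7

    flash_cost = raw_gb * flash_usd_per_gb
    hdd_cost = raw_gb * hdd_usd_per_gb
    print(round(flash_cost), round(hdd_cost))  # 8571 286
    ```

    The exact prices don't matter much; at any plausible 2007 ratio the flash array comes out more than an order of magnitude more expensive.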
