WD's Terabyte Scorpio Notebook Drive Tested

MojoKid writes "Recently, Western Digital stepped out and announced their new 1TB 9.5mm Scorpio Blue 2.5-inch notebook drive. The announcement was significant in that it's the first drive of this capacity to squeeze that many bits into an industry standard 9.5mm, 2.5" SATA form-factor. To do this, WD drove areal density per platter in their 2.5" platform to 500GB. The Scorpio Blue 1TB drive spins at only 5400RPM but its performance is actually surprising. Since areal density per platter has increased significantly, the drive actually bests some 7200RPM drives."
This discussion has been archived. No new comments can be posted.

  • by fnj ( 64210 )

    And the 2.5" form factor once again pulls into approximately equal volumetric parity with the 3.5" (when you count the actual space consumed by the drive and mounting arrangement for 2-3 2.5" drives compared to 1 3.5" drive). And roughly equal power consumption per GB as well.

    • And the 2.5" form factor once again pulls into approximately equal volumetric parity with the 3.5" (when you count the actual space consumed by the drive and mounting arrangement for 2-3 2.5" drives compared to 1 3.5" drive)

      Actually, two 2.5" drives (70x100x9.5 mm) will fit perfectly on top of a single 3.5" drive (102x146x25.4 mm) (see the Wikipedia entry on dimensions [wikipedia.org]). By volume, one 3.5" HDD = 5.3 2.5" HDDs. So 2.5" drives surpassed 3.5" drives in volumetric data density long ago.

      I suspect the main constrai
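The volumetric arithmetic above is easy to check; a quick sketch using the dimensions quoted in the comment (exact figures vary a little between drive models and height variants):

```python
# Compare enclosure volumes of a 3.5" and a 2.5" drive, using the
# dimensions quoted above (mm). Real drives vary by a millimetre or two.
def volume_mm3(width, depth, height):
    return width * depth * height

v35 = volume_mm3(102, 146, 25.4)  # 3.5" drive
v25 = volume_mm3(70, 100, 9.5)    # 2.5" drive

print(round(v35 / v25, 1))  # -> 5.7
```

These particular dimensions give a ratio of about 5.7, in the same ballpark as the 5.3 figure in the comment (which presumably used slightly different numbers).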

      • Comment removed (Score:4, Interesting)

        by account_deleted ( 4530225 ) on Saturday July 30, 2011 @01:28AM (#36930914)
        Comment removed based on user account deletion
        • Unfortunately I'm seeing the same pattern with WD. When did Hitachi stop making drives? I bought one not long ago. I agree those were good drives.

          • That's the thing with anecdotal evidence: it isn't data. When people bash Seagate/WD/Intel/AMD/Ford/Chevy or whoever without backing it up with facts, it carries no weight.

            For some reason it is popular to bash Seagate, I see this all the time on the hardware forums. But it is important to take that for what it is. Nothing.

          • Comment removed based on user account deletion
      • by Anonymous Coward

        In theory, smaller platters should permit higher rotational speeds (same linear speed) and less stroke per head, so an arbitrary-capacity RAID should be faster with many 2.5" HDDs than with fewer 3.5" HDDs. So at the same volumetric density, 2.5" makes more sense. (In practice, AFAIK both 2.5" and 3.5" drives mostly top out at 15k RPM -- I think there were a few 20k?)
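The rotational side of this argument is simple arithmetic: on average, the head waits half a revolution before the target sector comes around. A quick sketch of average rotational latency per spindle speed:

```python
# Average rotational latency: half a revolution, on average, before the
# target sector passes under the head.
def avg_rotational_latency_ms(rpm):
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

for rpm in (5400, 7200, 10000, 15000):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2))
# 5400 -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.0 ms, 15000 -> 2.0 ms
```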

        But GP's point involved mounting them, which for enterprise RAIDs usually means some sort of hotswap bay, so I think his claim of just reaching

        • Speaking of form factors, anyone remember 5.25" hard drives? I liked the Bigfoots back in the day -- affordable high-capacity, and they were third-height, so you got decent airflow over them in half-height bays.

          Yes, I remember them. Didn't ever have a BigFoot though, I'm pretty sure. Weird-shaped things, weren't they? I may have picked up one from a corpse pile one time.

          Last one I had was the 19GB model

          My last one was ... I can't remember the make, but it was 90MB. Absolute bitch because I had to partition

      • Actually, two 2.5" drives (70x100x9.5 mm) will fit perfectly on top of a single 3.5" drive (102x146x25.4 mm) (wikipedia entry on dimensions [wikipedia.org].

        Yes, in principle you can fit four 2.5 inch drives in the space of one 3.5 inch drive by mounting them sideways. In practice, though, even if you put the screw holes for mounting under the drives, you would struggle to get a backplane into the 2mm of space you have next to the drives. Front to back you also only have 6mm for the incoming SATA connectors and for the mounting hardware that supports the drives. I'm not sure whether such a mount is possible, but if it is then it would require some pretty serious prec

        • Even if you did get four drives into a bay and used this new drive then you would still only be getting 4TB compared to the 3TB you would get by using a single 3.5 inch drive.

          Or 3 TB in RAID 5. Any way you put it, you'd get a massive increase in throughput and/or redundancy.

    • This drive puts them well ahead, actually. It also gives a good indication that Western Digital will be shipping 4 or even 5TB desktop drives soon. These platters have an areal density of around 125GB per square inch. If they can produce that across a 3.5" platter then they'll be looking at 1TB platters, and hence 4-5TB drives.
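The scaling claim above can be sanity-checked: at a fixed areal density, platter capacity grows with recordable surface area, roughly the square of the platter diameter. The diameters below are typical ballpark figures (about 65 mm for 2.5" drives, 95 mm for 3.5"), assumptions rather than numbers from the article:

```python
# Rough platter-capacity scaling at fixed areal density.
# Capacity ~ surface area ~ diameter squared (assumed typical diameters).
cap_25_gb = 500                 # GB per 2.5" platter, per the article
scale = (95 / 65) ** 2          # 3.5" vs 2.5" platter diameter, squared

print(round(cap_25_gb * scale))  # -> 1068, i.e. roughly a 1TB 3.5" platter
```

Which is consistent with the comment's expectation of 1TB platters and 4-5TB desktop drives.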

    • WARNING ! COMMENT BY OLD FART

      Why, in my day, my first HD was bigger than a shoe box, only came in an external model, weighed about 20 lbs., and had the amazing capacity of 20 MB. AND WE LIKED IT!

  • The Scorpio Blue 1TB drive spins at only 5400RPM but its performance is actually surprising. Since areal density per platter has increased significantly, the drive actually bests some 7200RPM drives.

    Has there ever been a single generation of drives in which the next generation of 5400 RPM drives did not beat the existing generation of 7200 RPM drives? Okay, maybe you have to skip two generations. Either way, it's not unusual by any means. When people ask on audio recording boards whether they need 7200 R

    • by bazald ( 886779 )

      The only thing surprising about this drive is that normally the 7200 RPM drives come first, before the 5400 RPM drives at that density.

      That's patently false for 2.5" HDDs. I can't remember a time when I haven't had the choice of a faster 7200 RPM drive or a higher capacity 5400 RPM drive when notebook shopping.

    • by fuzzyfuzzyfungus ( 1223518 ) on Friday July 29, 2011 @10:47PM (#36930374) Journal
      It depends somewhat on your workload:

      Being at the top of the areal density pile will make your nice, long, continuous reads or writes run like a bat out of hell; but it isn't nearly as useful if you are dealing with highly scattered reads and/or writes. If the area you need has passed the head, you just need to wait until it comes around again.

      Long run, high-RPM drives are probably on their way out, since high-density, lower-RPM ones deliver impressive linear performance at absurdly low cost, while decent solid-state gear kicks out the IOPS better than an entire shelf of 15k screamers; but you can certainly construct tests, not entirely artificial, where RPM matters more than density, within reason.
      • by dgatwood ( 11270 )

        True, but just about every passing generation has faster seek/settle speed than the previous generation, too. At this point, that's just a very small part of the total seek time (I think), but IIRC, it used to be a much bigger part.

      • Long run, high-RPM drives are probably on their way out, since high-density, lower-RPM ones do impressive linear performance and absurdly low cost, while decent solid state gear kicks out the I/OPs better than an entire shelf of 15k screamers

        It seems like we'll eventually have both an HDD and an SSD in our systems, with a smart filesystem which automatically puts randomly accessed files on the SSD and sequentially accessed files on the HDD.

        • by Kjella ( 173770 ) on Saturday July 30, 2011 @05:21AM (#36931556) Homepage

          The question is really how it will be managed, and at what level. On one end you have pure heuristics based on usage and access patterns; on the other you have a completely fixed split installation between the SSD and HDD.

          The downside to heuristics is that they don't work until statistics have been gathered, and they don't use any a priori knowledge, even though we know the performance-critical parts of a game are the launcher and engine, not the cinematics. They're prone to misclassification: skip around in a video looking for a particular scene and it could be classified as random access, even though caching it makes no sense. And worst of all from a consumer point of view, the performance is unpredictable: suddenly things are much slower because they've been evicted from cache, and there's no obvious reason why. The current mechanisms also look more at usage than access method; after all, randomly accessed files that you never use don't make sense to cache. But this too is an imperfect approach: the MP3 playlist you have running often may lead to all the MP3s being pulled into cache because they're used so often, even though that makes no sense since they're played at 320kbps or less.

          Personally, I would like to manage my SSD more by myself, picking what goes where but I find I lack the granularity. There's 25GB games and you can either install it all here or all there, there's no in between. I'd like to be able to pick an application and get a slider bar starting with "Full - SSD only" and ending with "None - HDD only" with settings in between.

          Take for example Civilization 5, total size 4.58 GB:
          461 MB is the opening movie in different languages.
          1.21 GB is UI resources (bitmaps).
          959 MB is terrain textures.
          1.46 GB is sounds.
          106 MB is DirectX installers.

          Subtract that and you've got 430 MB that is the "core" of the game - maybe less if you go through it properly. That's small enough that I'd say just install it and keep it on the SSD permanently. That way it'd take >1 TB of installed applications to fill up my 128 GB SSD, not just a few all-or-nothing hogs. Of course there are a few downsides to this approach: you get RAID0-ish reliability, so if one disk fails the entire installation is hosed, and you have to move the two parts in sync if you want your files somewhere else. But overall I'd be pretty cool with such a solution.
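The manual split described above can be approximated today with nothing fancier than symlinks: move the bulk assets to the HDD and link them back so the install still looks whole. A minimal sketch, with hypothetical folder names standing in for a game's bulk assets:

```python
# Sketch of a manual SSD/HDD split: move bulk asset folders to the HDD
# and symlink them back into the install directory on the SSD.
# Folder names ("movies", "sounds", "textures") are hypothetical.
import shutil
import tempfile
from pathlib import Path

def offload(install_dir: Path, hdd_dir: Path,
            bulky=("movies", "sounds", "textures")):
    hdd_dir.mkdir(parents=True, exist_ok=True)
    for name in bulky:
        src = install_dir / name
        if src.is_symlink() or not src.is_dir():
            continue  # already offloaded, or nothing to move
        dst = hdd_dir / name
        shutil.move(str(src), str(dst))                # bulk data to HDD
        src.symlink_to(dst, target_is_directory=True)  # SSD keeps a link

# Tiny demo with throwaway directories standing in for the SSD and HDD.
base = Path(tempfile.mkdtemp())
ssd_install, hdd_store = base / "ssd" / "civ5", base / "hdd" / "civ5"
(ssd_install / "movies").mkdir(parents=True)
(ssd_install / "engine").mkdir()
(ssd_install / "movies" / "intro.bik").write_text("opening movie")

offload(ssd_install, hdd_store)
print((ssd_install / "movies").is_symlink())          # link on the "SSD"
print((hdd_store / "movies" / "intro.bik").exists())  # data on the "HDD"
```

Applications never notice the difference, which is roughly what the old "keep the movies on the CD" install option did.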

          • Sounds like the old option when installing from CDs, where the game would leave some parts of the data, like movies and sound, on the CD.

            Now the question is whether SSD market saturation is high enough to warrant game makers programming that type of option into the game. My gut says no, but it is indeed an interesting idea.

      • Being at the top of the areal density pile will make your nice, long, continuous reads or writes run like a bat out of hell; but it isn't nearly as useful if you are dealing with highly scattered reads and/or writes.

        I thought density mattered in the radial direction too, and the less you have to move the head, the better it is for seek latency.

        If the area you need has passed the head, you just need to wait until it comes around again.

        Agreed. But the way I understand it, this is an argument for high RPM. High density is somewhat orthogonal and improves performance in other ways. So why don't they make 2.5'' drives at 15k RPM? Wait, I think they do, they just package them in 3.5'' cases.

        In my understanding, making computer hardware faster has always been about higher densities and smaller sizes. I don't se

      • "but you can certainly construct tests, not entirely artificial, where RPM matters more than density, within reason."

        You don't need tests, you can definitely notice a quicker boot-up and snappier performance when using a 15,000 RPM drive. Those milliseconds add up fast!

        I always wanted a 15,000 RPM drive, so when they dropped to ~$40 on ebay (they're cheaper than that now) a few years back I picked one up along with a cheap PCI SCSI card. The difference was very noticeable, XP booted much faster than
        • by dgatwood ( 11270 )

          You don't need tests, you can definitely notice a quicker boot-up and snappier performance when using a 15,000 RPM drive. Those milliseconds add up fast!

          Depends on the OS you're running. Mac OS X does a lot of work to make sure that booting consists largely of long sequential reads (the kext cache, etc.). If you saw a huge difference in boot times with Mac OS X (more than a couple of seconds), then you're probably seeing a sequential throughput difference rather than anything to do with seek penalties.

          As

        • by Wolfrider ( 856 )

          --If you don't mind my asking, which SCSI card did you go with? I'm still using an Adaptec 2940, but I know there are faster ones out there that may still allow booting...

    • "I'm always quick to point out that a new 5400 RPM laptop drive approaches the speed of the early 15,000 RPM desktop drives"

      True, but no one buys a 15,000 RPM drive for transfer rates; they buy them for access time, which higher areal density doesn't improve. Even our modern 3TB drives are no match for a 10-year-old 15,000 RPM drive.
  • Fair Warning (Score:3, Interesting)

    by yamamushi ( 903955 ) <yamamushi@gma[ ]com ['il.' in gap]> on Saturday July 30, 2011 @12:16AM (#36930698) Homepage
    These drives have actually been on the market for well over a year now, and I was (un)lucky enough to pick one up last year when my local Fry's Electronics got them in stock. While the drives themselves are handy because of the amount of data you can squeeze into them, making my MacBook Pro a beast of a mobile studio (at the time I was using it for music production), they seem to be prone to issues.

    The first drive lasted about a month before I almost lost several weeks' worth of a project I was working on due to the drive crashing. I was able to retrieve my work from the drive by mounting it externally before it became completely unreadable, and I attribute this to the high-density drives not being able to handle the average bouncing around of a laptop in a backpack. When I attached the drive to one of my Linux workstations, I could hear the disks spinning up, but dmesg wouldn't pick up the drive and they just kept spinning, endlessly louder and louder. The second drive lasted about 2 months before a similar problem occurred, though by that time I had migrated most of my work to a different workstation. I replaced the drive with the original 500GB drive my MacBook came with, and I haven't had any problems since.

    In short, I'm not sure if the early drives off the assembly line were just prone to failure more often or if perhaps I was just extremely unlucky with the ones I procured. Either way, I am rather uncomfortable about putting any important data on one of these drives in the future until they've been on the market for a while and have been thoroughly tested.
    • by PayPaI ( 733999 )

      Why weren't you using Time Machine?

    • Re:Fair Warning (Score:5, Insightful)

      by bemymonkey ( 1244086 ) on Saturday July 30, 2011 @01:03AM (#36930848)

      "These drives have actually been on the market for well over a year now, and I was (un)lucky enough to pick one up last year when my local Fry's Electronics got them in stock."

      The 9.5mm, 1TB version? Over a YEAR? Even Samsung's 9.5mm 1TB drive only came out a few weeks back... it only came into regular stock last week. WD's version isn't even showing up in online shops yet.

      WTF are you talking about?

      • I didn't see the 9.5mm size mentioned at first, but considering that the larger drives failed so often, I am extremely wary about trying out a smaller form factor drive until they too have been thoroughly tested: http://www.frys.com/product/6063518?site=sr:SEARCH:MAIN_RSLT_PG [frys.com]
        • But those previous drives were not for use with laptops. They are not rated to handle the physical abuse that is required of a laptop drive. Instead, they are designed as external backup drives or NAS drives. If you put this drive in a laptop and it failed - no big surprise there.... But these newer drives are designed for laptops so they should be quite different in regards to durability.
          • The issue with high load count on drives without Apple-type firmware could be in play. I've never figured out how one reads the head load/unload cycle count off a disk, but apparently some open-market drives end up cycling at a very elevated rate, which can lead to limited lifetime. It's why I haven't hassled with replacing the 120GB drive in my MBP -- not worth the risk.
      • How the hell is one person's story at all relevant to overall reliability? "Oh no I had two drives fail, they suck!" As the saying goes "The plural of anecdote is not data."

        People need to understand that just because hardware failed for you doesn't mean it is bad overall. You need more data. I cannot name a brand of harddrive I haven't seen fail at work. Every single one, I've seen failures on. None of that indicates they are bad. In terms of systems I use I've seen more WD failures than anything else... Be

    • These drives have actually been on the market for well over a year now, and I was (un)lucky enough to pick one up last year when my local Fry's Electronics got them in stock.

      The drive mentioned in the article has not been out for over a year. The drive you likely bought is this one: http://www.newegg.com/Product/Product.aspx?Item=N82E16822136545 [newegg.com] That drive is 12.5 mm high, not 9.5. They are indeed prone to issues; at work, we have purchased 11 of them for custom NAS servers and 2 of them have been DOA.

      • by adolf ( 21054 )

        DOA is one thing: It's easy to remedy, and it doesn't affect services that are already operating (though it may push back the start date on new services). The solution is easy: Just print an RMA form and label and send it back to the chumps that sold it to you. The causes are varied, but mishandling, ESD, and shipping damage seem like likely candidates.

        Dead after a moderate period of time is another thing entirely. It can disrupt services that people are already accustomed to using, and it's more of a

    • by pebs ( 654334 )

      I almost lost several weeks worth of a project I was working on due to the drive crashing

      It blows my mind when people running OS X don't use Time Machine.

      My hard drive died a few weeks ago, and it was so easy to restore from Time Machine. I was right back where I left off when the drive died. In my case I bought a 2TB 7200 RPM Hitachi Deskstar. I had heard that those tend to fail, but the price was right, and I have enough confidence in Time Machine and the off-site backups I make every few weeks (rotate external drives which have a complete backup of my entire system) that I can take that r

  • Recently, Western Digital stepped out and announced their new 1TB 9.5mm Scorpio Blue 2.5-inch notebook drive. The announcement was significant in that it's the first drive of this capacity to squeeze that many bits into an industry standard 9.5mm, 2.5" SATA form-factor.

    Samsung announced theirs back in early June [engadget.com]. It's been coming in and out of stock since then. I last saw it on Newegg a couple weeks ago [newegg.com], though curiously it's now marked as deactivated.

  • by jafo ( 11982 ) * on Saturday July 30, 2011 @01:12AM (#36930864) Homepage

    The summary makes it sound like "squeezing" 1TB into a laptop drive is impressive, but with 600GB SSDs in the same form-factor (admittedly at almost 10x the price), I'm just not overwhelmed... Especially with the recent stories about optical discs storing 500GB RSN. And the SSD is going to be able to survive being dropped without losing all that data...

    And as far as performance, the summary says that at 5400RPM it bests the 7200RPM competitors... That's really only true for raw streaming, say video or audio production work. People seem to be blinded by the MB/sec rate and forget the average access latency, which IMHO is the most important factor in almost all cases. I had a client who was pushing back on the 15K RPM disks I recommended for their database several years ago, because the 7.2K RPM disks had a higher MB/sec number. Not for their database, they don't...

    Access latency is what, in most cases, makes a computer feel slow.
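The latency point can be made concrete with a back-of-envelope random-read estimate: one random I/O costs roughly an average seek plus half a rotation. The seek times below are ballpark assumptions for the two drive classes, not datasheet values:

```python
# Estimate random-read IOPS: one I/O ~ average seek + half a rotation.
# Seek times are assumed ballpark figures, not datasheet values.
def est_iops(rpm, avg_seek_ms):
    rotational_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_ms)

print(round(est_iops(15000, 3.5)))  # 15k RPM enterprise drive -> ~182 IOPS
print(round(est_iops(7200, 8.5)))   # 7200 RPM desktop drive   -> ~79 IOPS
```

Per-spindle, the 15k drive does well over twice the random I/O of the 7200 RPM drive, regardless of which one posts the bigger MB/sec number.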

  • It seems everyone is always on about performance and storage capacity. But what about the reliability?

    Now, admittedly it's a bit of an edge case but in my home server I have a comparatively ancient 30 GB IDE disk for the system disk and a bunch of SATA drives in RAID-Z for bulk storage and I've been thinking about moving to a new system disk out of pure paranoia (this thing has been in constant use for what seems like an eternity) but I can't seem to find any good statistics for the reliability of current d

    • by goodcow ( 654816 )

      Is there really no one out there who has said "fuck performance, we're gonna build drives that are good for at least five years"?

      They're called Enterprise grade drives. They usually cost 2X more than the consumer level drives.
