
WD's Terabyte Scorpio Notebook Drive Tested

Posted by timothy
from the border-guards-will-not-be-amused dept.
MojoKid writes "Recently, Western Digital announced its new 1TB, 9.5mm Scorpio Blue 2.5-inch notebook drive. The announcement is significant in that this is the first drive to squeeze that much capacity into the industry-standard 9.5mm, 2.5" SATA form factor. To do it, WD pushed areal density in its 2.5" platform to 500GB per platter. The Scorpio Blue 1TB spins at only 5400RPM, but its performance is surprising: because density per platter has increased so much, the drive actually bests some 7200RPM drives."

  • by fuzzyfuzzyfungus (1223518) on Friday July 29, 2011 @11:47PM (#36930374) Journal
    It depends somewhat on your workload:

    Being at the top of the areal density pile will make your nice, long, continuous reads or writes run like a bat out of hell; but it isn't nearly as useful if you're dealing with highly scattered reads and/or writes. If the sector you need has just passed the head, you have to wait until it comes around again (a quick latency calculation below puts numbers on this).

    Long run, high-RPM drives are probably on their way out, since high-density, lower-RPM ones deliver impressive linear performance at absurdly low cost, while decent solid-state gear kicks out the IOPS better than an entire shelf of 15k screamers; but you can certainly construct tests, not entirely artificial, where RPM matters more than density, within reason.
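
    To put numbers on the waiting: average rotational latency is half a revolution, so it depends only on RPM, not on areal density. A minimal back-of-the-envelope sketch in Python (the RPM values are just the common drive classes, not figures from TFA):

        # Average rotational latency: on a random access the target sector
        # is, on average, half a revolution away from the head.
        def avg_rotational_latency_ms(rpm: int) -> float:
            revs_per_sec = rpm / 60.0
            return 0.5 / revs_per_sec * 1000.0

        for rpm in (5400, 7200, 10000, 15000):
            print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")

    That prints 5.56 ms for 5400 RPM versus 4.17 ms for 7200 RPM and 2.00 ms for 15k, and no amount of platter density changes those numbers; that's the gap scattered-I/O workloads feel.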
  • Fair Warning (Score:3, Interesting)

    by yamamushi (903955) <yamamushi@noSPAM.gmail.com> on Saturday July 30, 2011 @01:16AM (#36930698) Homepage
    These drives have actually been on the market for well over a year now, and I was (un)lucky enough to pick one up last year when my local Fry's Electronics got them in stock. While the drives themselves are handy because of the amount of data you can squeeze into them, making my MacBook Pro a beast of a mobile studio (at the time I was using it for music production), they seem to be prone to issues.

    The first drive lasted about a month before I almost lost several weeks' worth of a project to a drive crash. I was able to retrieve my work by mounting the drive externally before it became completely unreadable, and I attribute the failure to high-density drives not handling the average bouncing around of a laptop in a backpack. When I attached the drive to one of my Linux workstations, I could hear the platters spinning up, but dmesg wouldn't pick up the drive and it just kept spinning, endlessly, louder and louder. The second drive lasted about two months before a similar problem occurred, though by then I had migrated most of my work to a different workstation. I replaced it with the original 500GB drive my MacBook came with, and I haven't had any problems since.

    In short, I'm not sure whether the early drives off the assembly line were just more prone to failure or whether I was simply extremely unlucky with the ones I procured. Either way, I'm uncomfortable putting any important data on one of these drives until they've been on the market for a while and have been thoroughly tested.
  • Re:Density (Score:4, Interesting)

    by hairyfeet (841228) <{bassbeast1968} {at} {gmail.com}> on Saturday July 30, 2011 @02:28AM (#36930914) Journal

    Actually I'd say they have already licked the 3.5" heat problem by dropping the speed to 5400 or 5900 RPM. As TFA shows, once you get to a certain level of density, the slower drives keep up quite nicely with the faster ones, especially if the drive has a decent-sized cache and Windows 7 is given a decent amount of RAM to use for Superfetch (a rough throughput model after this comment shows why the density makes up for the RPM).

    Personally I never thought I'd own a drive slower than 7200RPM, as I still had bad memories of the late-90s 5400RPM drives, but after building several kits for customers that came with new 5900RPM drives as the only option, I have to say I was wrong. I took the plunge and bought a 5900RPM Samsung EcoDrive to replace my gaming drive, and damned if it didn't whoop the 400GB Seagate 7200RPM drive it replaced on benchmarks. I guess that 32MB buffer really makes the difference.

    I'm only sad we have lost Samsung and Hitachi as hard drive manufacturers, as they really made great drives. I hope WD keeps their quality up, as Seagate has already turned to shit since buying Maxtor. After having three Seagate 1TB drives die in less than a year, I wouldn't touch their crap again. So congrats WD, please don't pull a Seagate and turn to poo, okay?
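
    The keep-up effect is easy to sanity-check with a rough model: sequential throughput scales with linear bit density times rotational speed, and linear density grows roughly as the square root of areal density. A sketch under those assumptions (the platter capacities are illustrative; real drives vary by recording zone, cache, and firmware):

        import math

        # Rough model: sequential throughput ~ sqrt(GB per platter) * RPM.
        # Doubling areal density buys ~sqrt(2) more bits per track.
        def relative_seq_throughput(gb_per_platter: float, rpm: int) -> float:
            return math.sqrt(gb_per_platter) * rpm

        old = relative_seq_throughput(250, 7200)  # older 7200 RPM generation
        new = relative_seq_throughput(500, 5400)  # 500GB/platter at 5400 RPM
        print(f"ratio: {new / old:.2f}x")         # ~1.06x

    So under this model a 500GB-per-platter 5400 RPM drive slightly out-streams a 250GB-per-platter 7200 RPM drive on sequential work, which matches the benchmark result above; the big cache and Superfetch then paper over some of the seek deficit.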

  • by Kjella (173770) on Saturday July 30, 2011 @06:21AM (#36931556) Homepage

    The question is really more how, and at what level, it will be managed. On one end you have pure heuristics based on usage and access patterns; on the other, a completely fixed split installation between the SSD and HDD. The downside to heuristics is that they don't work until statistics have been gathered, and they use no advance knowledge, even though we know the performance-critical parts of a game are the launcher and engine, not the cinematics. They're prone to misclassification: skip around in a video looking for a particular scene and it could be classified as random access, even though that makes no sense. And worst of all from a consumer point of view, the performance is unpredictable; suddenly things are much slower because something was evicted from cache, with no obvious reason why. The current mechanisms also look more at usage than access method; after all, randomly accessed files that you never use make no sense to cache. But this too is imperfect: the MP3 playlist you have running often may pull all the MP3s into cache because they're used so often, even though they play at 320kbps or less and gain nothing from it (the toy classifier after this comment shows how crude these rules are).

    Personally, I would like to manage my SSD more myself, picking what goes where, but I find I lack the granularity. There are 25GB games where you can either install it all here or all there; there's no in-between. I'd like to be able to pick an application and get a slider starting at "Full - SSD only" and ending at "None - HDD only" with settings in between (a symlink-based sketch of the idea follows this comment).

    Take for example Civilization 5, total size 4.58 GB:
    461 MB is the opening movie in different languages,
    1.21 GB is UI resources (bitmaps),
    959 MB is terrain textures,
    1.46 GB is sounds,
    106 MB is DirectX installers.

    Subtract all that and you've got about 430 MB that is the "core" of the game; maybe less if you go through it properly. That's small enough that I'd say just install it and keep it on the SSD permanently. That way it'd take >1 TB of installed applications to fill up my 128 GB SSD, not just a few all-or-nothing hogs. Of course there are a few downsides to this approach: you get RAID0-ish reliability, where if one disk fails the entire installation is hosed, and you have to move the pieces in sync if you want your files somewhere else. But overall I'd be pretty cool with such a solution.
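
    To make the misclassification point concrete, here is a toy Python version of a purely usage-based promotion rule; the file records, thresholds, and numbers are invented for illustration, not taken from any real caching product:

        from dataclasses import dataclass

        @dataclass
        class FileStats:
            path: str
            reads_per_day: float      # how often the file is touched
            mb_per_sec_needed: float  # throughput the consumer actually requires

        # Usage-only heuristic: promote anything read often enough.
        def promote_to_ssd(f: FileStats) -> bool:
            return f.reads_per_day >= 5

        files = [
            FileStats("game/engine.dll", reads_per_day=6, mb_per_sec_needed=200.0),
            FileStats("music/track.mp3", reads_per_day=20, mb_per_sec_needed=0.04),  # 320kbps
        ]
        for f in files:
            print(f.path, "->", "SSD" if promote_to_ssd(f) else "HDD")

    Both files get promoted, but the MP3 only needs about 0.04 MB/s, which any HDD delivers trivially; that's exactly the wasted caching described above.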
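
    And a minimal sketch of the manual split being asked for, using symlinks so the game still sees a single install. The paths and folder names are hypothetical, and on Windows you'd use NTFS junctions (mklink /J) instead:

        import os, shutil

        SSD_INSTALL = "/mnt/ssd/games/civ5"  # small hot core stays here
        HDD_STORE   = "/mnt/hdd/games/civ5"  # bulky cold assets go here
        COLD_DIRS   = ["Movies", "Sounds", "Assets/Terrain"]  # hypothetical layout

        for rel in COLD_DIRS:
            src = os.path.join(SSD_INSTALL, rel)
            dst = os.path.join(HDD_STORE, rel)
            if os.path.isdir(src) and not os.path.islink(src):
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)   # move the bulk data to the HDD
                os.symlink(dst, src)    # leave a link; the game follows it

    This buys the slider behavior at folder granularity, with exactly the RAID0-ish coupling noted above: lose either disk and the install is hosed.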
