WD's Terabyte Scorpio Notebook Drive Tested
MojoKid writes "Recently, Western Digital announced their new 1TB 9.5mm Scorpio Blue 2.5-inch notebook drive. The announcement is significant in that it's the first drive to squeeze that capacity into the industry-standard 9.5mm, 2.5" SATA form factor. To do this, WD pushed the areal density of their 2.5" platform to 500GB per platter. The Scorpio Blue 1TB spins at only 5400RPM, but its performance is surprising: because areal density per platter has increased so much, the drive actually bests some 7200RPM drives."
Re:How is that surprising? (Score:5, Interesting)
Being at the top of the areal density pile will make your nice, long, continuous reads or writes run like a bat out of hell; but it isn't nearly as useful if you are dealing with highly scattered reads and/or writes. If the area you need has passed the head, you just need to wait until it comes around again.
Long run, high-RPM drives are probably on their way out, since high-density, lower-RPM ones do impressive linear performance and absurdly low cost, while decent solid state gear kicks out the I/OPs better than an entire shelf of 15k screamers; but you can certainly construct tests, not entirely artificial, where RPM matters more than density, within reason.
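To put rough numbers on the RPM-vs-density trade-off above: average rotational latency is half a revolution, so spindle speed only helps scattered I/O, while sequential throughput tracks areal density. A quick back-of-envelope sketch (the RPM figures are the standard drive classes, nothing here is measured):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = time for half a revolution, in ms."""
    return (60_000 / rpm) / 2

for rpm in (5400, 7200, 15000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms avg rotational latency")
```

A 15k screamer waits 2.00 ms on average versus 5.56 ms for a 5400RPM drive, which is why RPM still wins on scattered access, but both are orders of magnitude behind an SSD's sub-0.1 ms.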
Re:How is that surprising? (Score:4, Interesting)
The question is really at what level it will be managed. On one end you have pure heuristics based on usage and access patterns; on the other, a completely fixed split installation between the SSD and HDD. The downside to heuristics is that they don't work until statistics have been gathered, and they use no a priori knowledge even though we know the performance-critical parts of a game are the launcher and engine, not the cinematics. They're also prone to misclassification: scrub around in a video looking for a particular scene and it could be classified as random access, even though caching it makes no sense. And worst of all from a consumer point of view, the performance is unpredictable: suddenly things are much slower because they've been evicted from cache, with no obvious reason why. The current mechanisms also look more at usage than access method; after all, randomly accessed files that you never use don't make sense to cache. But this too is an imperfect approach: the MP3 playlist you often have running may pull all the MP3s into cache because they're used so often, even though that makes no sense since they play at 320kbps or less.
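The promote-on-usage behaviour and its failure modes can be sketched in a few lines. This is a toy model, not any vendor's actual algorithm; the class name, threshold, and eviction policy are all hypothetical:

```python
from collections import Counter

class HybridCache:
    """Toy usage-based SSD cache: promote after N accesses, evict oldest.

    Hypothetical sketch of the heuristic described above -- note it happily
    caches a low-bandwidth MP3 playlist, and evicts silently when full.
    """

    def __init__(self, capacity_mb, promote_after=3):
        self.capacity_mb = capacity_mb
        self.promote_after = promote_after
        self.hits = Counter()
        self.cached = {}  # file -> size_mb, kept in promotion order

    def access(self, name, size_mb):
        self.hits[name] += 1
        if self.hits[name] >= self.promote_after and name not in self.cached:
            self.cached[name] = size_mb
            # Evict oldest promotions when over capacity -- the source of the
            # "suddenly slower, no obvious reason" behaviour.
            while sum(self.cached.values()) > self.capacity_mb:
                del self.cached[next(iter(self.cached))]
        return name in self.cached  # True = would be served from SSD

cache = HybridCache(capacity_mb=1000)
for _ in range(5):
    cache.access("playlist.mp3", 600)  # frequently played, cached anyway
```

After five plays the 600 MB playlist occupies more than half the cache, and the first sufficiently popular game file will push it straight back out.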
Personally, I would like to manage my SSD myself, picking what goes where, but I lack the granularity. There are 25GB games and you can either install them all here or all there; there's no in between. I'd like to be able to pick an application and get a slider starting at "Full - SSD only" and ending at "None - HDD only", with settings in between.
Take for example Civilization 5, total size 4.58 GB:
461 MB is the opening movie in different languages,
1.21 GB is UI resources (bitmaps),
959 MB is terrain textures,
1.46 GB is sounds,
106 MB is DirectX installers.
Subtract all that and you've got 430 MB that is the "core" of the game, maybe less if you go through it properly. That's small enough that I'd say just keep it on the SSD permanently. That way it'd take >1 TB of installed applications to fill up my 128 GB SSD, not just a few all-or-nothing hogs. Of course there are downsides to this approach: you get RAID0-ish reliability, so if one disk fails the entire installation is hosed, and you have to move both halves in sync if you want your files somewhere else. But overall I'd be pretty cool with such a solution.
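For what it's worth, those figures check out if the GB sizes are binary (1 GB = 1024 MB). A quick sanity check using only the numbers quoted above:

```python
# Redo the Civilization 5 arithmetic from the comment, sizes in MB,
# treating 1 GB as 1024 MB (binary units).
total_mb = 4.58 * 1024
bulk_mb = {
    "opening movies": 461,
    "UI bitmaps": 1.21 * 1024,
    "terrain textures": 959,
    "sounds": 1.46 * 1024,
    "DirectX installers": 106,
}
core_mb = total_mb - sum(bulk_mb.values())
print(f"core of the game: {core_mb:.0f} MB")  # ~430 MB, as claimed
```

With decimal units (1 GB = 1000 MB) the core would come out to about 384 MB instead, so the commenter was evidently quoting binary sizes.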