Seagate Launches Hybrid SSD Hard Drive
MojoKid writes "Though there has been some noise in recent years about hybrid storage, it really hasn't made a significant impact on the market. Seagate is taking another stab at the technology and launched the Momentus XT 2.5-inch hard drive that mates 4GB of flash storage with traditional spinning media in an attempt to bridge the gap between hard drives and SSDs. Seagate claims the Momentus XT can offer the same kind of enhanced user experience as an SSD, but with the capacity and cost of a traditional hard drive. That's a pretty tall order, but the numbers look promising, at least compared to current traditional notebook hard drives."
Manageable hybrid (Score:5, Insightful)
Hybrid storage drives should be manually manageable.
You should be able to configure which files/folders/partitions/whatever you want to be accessed fast and which parts are to be left as "long-term", slow-access storage.
Re:Manageable hybrid (Score:2, Insightful)
I have a manageable hybrid.
Read heavy system partitions on a small SSD (/boot, /bin, /etc ...etc), everything on magnetic.
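A split along those lines might look like this in /etc/fstab (device names, partition layout, and mount options here are purely illustrative, not from the post above):

```
# SSD (/dev/sdb): small, read-heavy system partitions
/dev/sdb1  /boot  ext4  defaults,noatime  0 2
/dev/sdb2  /      ext4  defaults,noatime  0 1
# Magnetic drive (/dev/sda): bulk storage
/dev/sda1  /home  ext4  defaults          0 2
```

The noatime option cuts down on needless SSD writes; everything big and sequential stays on the cheap spinning disk.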
Re:ReadyBoost in hw? (Score:3, Insightful)
Re:Hmmm... (Score:4, Insightful)
It wouldn't help start-up time, would it?
Re:Gets Better Over Time (Score:4, Insightful)
Re:Or wait.. (Score:5, Insightful)
SSDs won't be as cheap per GB as traditional drives for many years to come. Chances are that even when a 500GB SSD gets to an acceptable price point, an old-fashioned hard drive will still be cheaper and hold much, much more data at the same time.
This solution provides a cost-effective way to have both performance and storage *right now*.
3rd run (Score:1, Insightful)
Throughout the article, the reviewer praises the abilities of the hybrid drive--after the 3rd run, which provides the drive with enough data accesses for it to predict what data will be checked next and pre-load it into the solid-state memory. However, on the first run, and on larger operations, the drive performs just like the normal 7200 rpm drive that it effectively is.
This hybrid drive certainly is a jump up over traditional HDDs, but I'm glad they benchmarked the drive against a true SSD as well--a quick glance over the graphs and the pure SSD seemed to be about twice as fast as the hybrid drive on several occasions. I chuckled when I first saw the graph on this page:
http://hothardware.com/Articles/Seagate-Momentus-XT-Solid-State-Hybrid-Preview/?page=7
Point is, if you're splashing out money to get a faster drive, get a "budget" SSD, not a budget-conscious traditional hard drive. Even the budget ones are miles above a standard or even a hybrid HDD, and the first-generation issues have been resolved (the disk controller problem being the main one) as far as I am aware.
For the same money you're spending on this hybrid drive, you could just get a massive standard high-speed drive and not be crippled for space. Or if you are spending the money, get a smaller SSD and put your data files (*that's* why you have a 1TB drive currently, your 200GB mp3 collection and 600GB of "other" files) on an external drive, where speed isn't as much of a concern as it is for web browsing or FPSs.
I went for the latter option: I picked up an Intel X25-M, and it's unquestionably rejuvenated a *4* year old laptop for daily use. It's faster than my friends' computers that are half that age, and I fully expect to get a couple extra years of service out of it because of that upgrade.
And some of the other graphs aren't very fair to SSDs, either--they're done on a logarithmic scale instead of a normal one. Transferring 200 MB/s is twice as fast as 100 MB/s, not something like 20% faster (which is what it seems at first glance). The continued dominance of the SSD in the tests is minimized by the choice in graphing.
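The effect is easy to show with a couple of lines (the 200 vs. 100 MB/s figures are my ballpark example, not numbers lifted from the review):

```python
import math

# Two transfer rates: one drive moves 200 MB/s, the other 100 MB/s.
fast, slow = 200, 100

linear_ratio = fast / slow                       # bar heights on a linear axis
log_ratio = math.log10(fast) / math.log10(slow)  # bar heights on a log10 axis

print(linear_ratio)  # 2.0  -- genuinely twice as fast
print(log_ratio)     # ~1.15 -- reads as only ~15% taller on a log axis
```

A log axis has legitimate uses when values span orders of magnitude, but here it visually shrinks a 2x win into what looks like a rounding error.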
Re:Gets Better Over Time (Score:3, Insightful)
No, not really (Score:5, Insightful)
The way to get both performance and storage right now is to buy TWO disks. An amazing concept, I know. Who would have thought it was possible to get more than one HD/SSD into a PC.
Every single story about SSDs seems to bring out the idiots who want everything on one disk. Good thing these guys aren't farmers or they would be trying to plow the field with a Ferrari or cruise town with a tractor.
This drive is only of use to people who can't afford a real SSD and are limited to a laptop with only one drive bay, and even then you would get far better performance with a normal SSD and an external drive for your porn collection.
Yes, yes, there are people who use a laptop AND need far bigger datasets, but on the whole, those people also need far greater access speeds than a traditional laptop HD can offer. I find it amazing to see someone claim he needs to edit video on a laptop with a 500GB 2.5-inch HD running at 5400 rpm. Who are you trying to kid?
And this drive won't be much help here. 4GB is just a cache file; if you are lucky it caches the right files, but if you are doing complex stuff these "smart" caches often get horribly confused and start caching the wrong data. Like Vista trying to cache torrented files. Yes, I know it accesses the file a lot, but please don't try to cache a 10GB file on the same HD. What's the fucking point? If you run a large database from this drive, for instance, I am willing to bet its cache performance will degrade as it simply has too much to cache. Small caches only work when a small set of files is requested a lot and the rest isn't. Like a porn collection on your OS drive. Video editing, databases, and file sharing always screw up caches.
If you really want performance in a laptop, spring for one with two drive bays, put as much memory in it as it can hold, and get an SSD and a HD. A real SSD, not one of the cheap ones some laptop companies put inside. An SSD is NOT just a fast HD; they truly are in a class of their own. And even if you got only a small single SSD, you can still save space by putting your music/porn on a flash card or USB stick instead.
I wonder if people can ever get it into their heads that an SSD is about speed, not about capacity. Then again, since every single netbook these days comes with a 360GB slow-ass HD instead of a small but fast SSD, I think I might be fighting a losing battle. Seems the average customer can only judge something if the number is bigger.
Durability and Power (Score:4, Insightful)
This drive still suffers from the historical bugaboo of spinning platters: it is damaged by shock. Also, it has the power draw (and heat output) of other spinning media.
Those are the two biggest reasons for SSD, especially in notebooks. Performance improvements are a factor, but I think they're the least interesting. In this respect, Seagate still needs to bring an answer, and they need to do it fast to justify their run up in stock price.
Re:No, not really (Score:3, Insightful)
The way to get both performance and storage right now is to buy TWO disks. An amazing concept, I know. Who would have thought it was possible to get more than one HD/SSD into a PC.
In most computers sold today, fitting more than one hard drive is not possible. Besides that, it's a very difficult solution to manage, as people have to decide manually what to put on the fast drive and what to put on the large drive. All in all it's a very fiddly solution, only available to tech-savvy folks with customizable computers. Not to mention the fact that two drives are more expensive than one.
In the real world, a hybrid drive such as Seagate is proposing is a lot better in almost every way thinkable. It's just one drive, so it will fit in basically every computer in existence, and it functions completely automatically, as the user is presented with just one storage medium. The tests in the article show this type of drive is both faster than traditional drives and a lot cheaper than SSDs, so it really is the best of both worlds.
I wonder if people can ever get it into their heads that an SSD is about speed, not about capacity.
That's because hard drives are meant for capacity, not speed. Nobody thinks, "Hey, my computer is slow, let's get a new hard drive." People buy hard drives to store their stuff on, so they want the drive that will hold the most stuff. So if you want to sell a lot of hard drives, you have to make sure they can hold a lot of data first, and then think of a USP on top of that, which is exactly what Seagate is doing by creating these hybrid disks. The result will be large and fast disks for everyone.
Re:4GB? (Score:5, Insightful)
What makes this special is not just that it has a cache; every HDD out there has a cache. This puppy has a "cache" 100x the size of what current drives have. What's more, this cache is persistent/nonvolatile. It survives a reboot, so even at OS load, you see the advantages.
Re:You lost me... (Score:3, Insightful)
No, I would expect a hard drive to work in Linux. A hard drive which relies on ReadyDrive would not be a very good product, as it would only work correctly in Windows. That's why those types of hard disks never caught on, even though Microsoft did try to push the concept.
What Seagate is doing now is using the ReadyDrive concept of hybrid hard drives, but providing ReadyBoost-type technology on the controller of the hard disk instead of relying on the operating system.
Re:So they make a hard drive with a cache? (Score:1, Insightful)
I predict that, initially, there will be an exploit related to this infrastructure change that will not be closable without negating the benefits therein.
Re:No, not really (Score:5, Insightful)
4GB is just a cache file; if you are lucky it caches the right files, but if you are doing complex stuff these "smart" caches often get horribly confused and start caching the wrong data.
You do realize that the reason your computer is so fast is because of progressive layers of cache, right?
The fastest cache on the system is L1 cache. It's also the most expensive. Next is L2 cache, which runs at about 1/10th the speed of L1, but it's much cheaper and so there can be more of it. That it's only an order of magnitude different means the larger L2 cache has time to fill the L1 cache before the L1 cache is completely empty. Then comes L3 cache (usually), which is again about 1/10th the speed of L2, and it keeps the L2 full. Then RAM, which has kept pace pretty well and is about 1/10th the speed of L3 and keeps L3 full. And here is where things break. RAM speeds are measured in nanoseconds. Spinning disk hard drive speeds are still measured in milliseconds, and not even 1 or 2 milliseconds, more like 5-10 milliseconds. That's a couple orders of magnitude slower and breaks the chain of cache that we had going, and it is not enough to maintain full RAM at all times. What we need is a cache that is about 1/10th the speed of RAM to sit between RAM and Hard Disk.
SLC NAND flash, with its sub-millisecond read and write times, fits the bill perfectly. It's basically a scaled-up version of the caching hard drives already use, and because of its size it should be much, much more effective.
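The arithmetic behind the parent's point can be sketched as an expected-access-time calculation. All latencies and hit rates below are illustrative ballpark figures of my own, not measurements from the article:

```python
# Expected access time = sum over levels of (latency * fraction of
# accesses served at that level). Latencies are in nanoseconds.
def avg_access_ns(levels):
    return sum(latency_ns * hit_rate for latency_ns, hit_rate in levels)

# Without a flash tier: the 0.01% of accesses that fall through to a
# ~8 ms disk seek dominate everything else.
no_flash = avg_access_ns([
    (1,         0.90),    # L1 cache
    (10,        0.07),    # L2 cache
    (100,       0.0299),  # RAM
    (8_000_000, 0.0001),  # spinning disk
])

# With sub-millisecond flash absorbing 90% of the former disk misses:
with_flash = avg_access_ns([
    (1,         0.90),
    (10,        0.07),
    (100,       0.0299),
    (100_000,   0.00009),  # SLC NAND flash
    (8_000_000, 0.00001),  # spinning disk
])

print(no_flash, with_flash)  # ~804.6 ns vs ~93.6 ns
```

Even with a tiny miss rate, the disk's millisecond latency wrecks the average; slotting a flash layer between RAM and the platters restores the "each layer keeps the next one full" chain the parent describes.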
Re:Durability and Power (Score:3, Insightful)
The platform is the benefit, though. Right now, at 4GB of flash against 250GB of platter, it's 1.6% flash. Once this proof of concept works, I bet they could make one with closer to 20% flash. At that point it might spin up the platters rarely enough that the power-draw issue goes away. If the drive is usually parked, the shock resistance improves a bit too. That might be a good-enough solution to stick around for 5-10 years before the next thing comes along.
Re:This is the wrong place for this optimization (Score:3, Insightful)
Or, how about this instead?
Phase one, release SSD drives that are clearly faster and make a bunch of money from early adopters who think they can use them.
Phase two, listen to developers who are trying to make them work better. Implement things like the 'release' command. Offer an idea or two of your own, like the controller side copying.
Phase three, release the new version of the drive that supports all of that stuff and make even more money.
Your version relies on back-room deals with proprietary software makers and will probably ultimately result in a worse solution for everybody.
Both versions make a whole bunch of money for the hardware manufacturer. Your version treats users as passive idiots who haven't a clue. My version treats them as active participants in the process, hardly worthy of the word 'user'.
Re:Gets Better Over Time (Score:5, Insightful)
You're forgetting one thing:
Sometimes, a machine will go from seemingly normal to suddenly thrashing about in swap rather heavily, with no warning at all. This has been the bulk of my experience, anyway. When your machine gets to that point, and you're in a graphical environment like the majority of desktop users are, you may not be *able* to look into the problem at all. You have to wait until the damn thing comes to its senses, because you can't even switch to a regular text console, let alone log in from another box. Forget trying to spawn a terminal. Every little program you launch to try to find the cause just makes the machine use more memory or swap at this point, which compounds the problem.
When the offending program finally does end, it's too late to see what went wrong because most programs leave no traces of their actions other than doing whatever they're programmed to do. Unless you're running some kind of process/resource logging program on your box (I'm not aware of anyone who does this outside of security professionals perhaps), good luck finding out what actually caused the problem, unless you saw something visibly bug out just before the machine stopped responding.
There are no two ways about it - this is absolutely the worst way to handle an out-of-memory condition. Most people would much rather have programs complain about lack of memory than have their machine "lock up" for an hour while it sits there churning away in swap. In my experience, the average user figures their computer's being stupid again, and it's time to hit the power switch or the reset button, or maybe call someone for help (which doesn't work anyway, so they're back to square one).
To give an example, I once set my machine off to run a build which should have taken maybe half an hour, and went off to run some errands and watch a movie. It was still going three hours later, and had dug the machine so deep into swap that mouse events were taking 10-20 seconds just to echo to the screen, and keyboard events were nonexistent as far as X was concerned. I tried my level best to bring the machine back to a sane state, but I eventually had to give up and hit it with Alt-SysRq-U/S/B.
I love Linux as much as anyone, but I got sick and tired of this happening on my boxes, and responded the only way that seemed to make sense: I disabled swap entirely on both systems and added enough RAM to each to make up for the lost "memory".
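For anyone wanting to do the same: after `swapoff -a` and removing the swap entry from /etc/fstab, /proc/swaps should list no swap areas. A small parsing sketch (the sample text below is made up for illustration; the real file has the same header-plus-rows shape):

```python
# /proc/swaps lists active swap areas, one per line after a header row.
def active_swaps(swaps_text):
    lines = swaps_text.strip().splitlines()
    return lines[1:]  # drop the "Filename Type Size Used Priority" header

before = """Filename\tType\tSize\tUsed\tPriority
/dev/sda2\tpartition\t4194300\t102400\t-2"""
after = "Filename\tType\tSize\tUsed\tPriority"

print(len(active_swaps(before)))  # 1 -- a swap partition still active
print(len(active_swaps(after)))   # 0 -- what you want after swapoff -a
```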
Aside from older hardware that clearly needs it because of sheer lack of RAM, is there even any reason to recommend/enable swap by default anymore? Modern machines come standard with around 4GB of insanely fast RAM - isn't that enough?
Re:Manageable hybrid (Score:3, Insightful)
I sort of disagree. Humans are really, really bad at this kind of management, and a smart computer algorithm can often do better. Just look at the people who disable swap space because "it makes the computer slower". You can't trust humans to manage this optimally, and computers can, in theory at least, follow extremely sophisticated policies (e.g. "if the user runs this program, he's probably about to read this data, so let's get it onto the SSD ASAP").
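Even the simplest version of such a policy beats manual pinning for most users. Here's a toy sketch of frequency-based promotion (purely illustrative; Seagate hasn't published its actual algorithm, and the class and names below are mine):

```python
from collections import Counter

class HybridCache:
    """Toy model: track accesses per block, keep the hottest in flash."""

    def __init__(self, flash_slots):
        self.flash_slots = flash_slots  # how many blocks fit in flash
        self.hits = Counter()

    def access(self, block):
        self.hits[block] += 1

    def flash_resident(self):
        # Promote the most frequently accessed blocks to the flash tier.
        return {b for b, _ in self.hits.most_common(self.flash_slots)}

cache = HybridCache(flash_slots=2)
for block in ["boot", "boot", "boot", "app", "app", "movie"]:
    cache.access(block)
print(cache.flash_resident())  # the hot set {'boot', 'app'}, not the movie
```

A real firmware policy would also weigh recency and read-vs-write patterns, but the principle is the same: the drive learns the hot set instead of asking the user to guess it.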
Re:4GB? (Score:2, Insightful)
Re:Manageable hybrid (Score:4, Insightful)
Just look at the people who disable swap space because "it makes the computer slower".
There are two main mindsets to designing computer systems.
The batch processing mindset says that what matters is average performance.
The real time systems mindset says that what matters is meeting your deadlines consistently.
IMO desktops are closer to the latter than the former. Tens of milliseconds on each user action won't generally be noticed; the user can't start the next operation that quickly anyway. Tens of seconds on one action WILL be noticed and quite possibly piss the user off, especially if it's unexpected, even if it only happens on a very small subset of actions. Unexpected delays break the flow of thought.
Now consider an app like Firefox. It has a habit of using a LOT of memory (whether this is a leak or a design feature is a subject of many /. arguments and not one I want to get into here). It is also single-threaded, so if any part of the app needs something swapped in, the whole app is blocked. If the OS decides to swap it out for whatever reason (e.g. some app ran away with memory usage and didn't finally fail until after it had swapped out everything, or a long-running overnight batch job caused the OS to swap stuff out and expand the disk cache), then you click on its taskbar icon and wait ages as all the memory pages its state is spread over grind their way back into memory.
You can't trust humans to manage this optimally
True but you can't really trust computers to either. Especially when the computer hasn't really been told what the human considers important or even how the data will be used.
Re:4GB? (Score:3, Insightful)
But what good is cache if it isn't persistent? The OS already has a perfectly fine read cache. It's the write cache that is the problem, and a non-persistent write cache of multi-gigabytes is pretty scary if you suddenly lose power. You could wipe out an entire file system that way.
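This is exactly why software that cares about durability forces volatile write caches to disk before proceeding. A minimal sketch using fsync (standard POSIX behavior; whether the drive itself honors the flush is a separate question):

```python
import os
import tempfile

# A write() returning only means the data reached a cache somewhere.
# fsync() blocks until the kernel has pushed it toward stable storage,
# which is what journaling file systems rely on.
fd, path = tempfile.mkstemp()
os.write(fd, b"journal record\n")
os.fsync(fd)   # don't consider the record durable until this returns
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(data)  # b'journal record\n'
```

With a nonvolatile flash cache like the Momentus XT's, data acknowledged into the cache survives a power cut, which is what makes a multi-gigabyte cache tolerable in the first place.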