

Is the Time Finally Right For Hybrid Hard Drives?
a_hanso writes "Hard drives that combine a traditional spinning platter for mass storage and solid state flash memory for frequently accessed data have always been an interesting concept. They may be slower than SSDs, but not by much, and they are a lot cheaper gigabyte-for-gigabyte. CNET's Harry McCracken speculates on how soon such drives may become mainstream: 'So why would the new Momentus be more of a mainstream hit than its predecessor? Seagate says that it's 70 percent faster than its earlier hybrid drive and three times quicker than a garden-variety, non-hybrid disk. Its benchmarks for cold boots and application launches show the new drive to be just a few seconds slower than an SSD. Or, in some cases, a few seconds faster. In the end, hybrid drives are compromises, neither as cheap as ordinary drives — you can get a conventional 750GB Momentus for about $150 — nor as fast and energy-efficient as SSDs.'"
It'd better happen quick then (Score:5, Insightful)
If there is to be a time for hybrid drives, the window on it is fast closing. As SSDs get cheaper and cheaper, more and more people will opt to just go that route. Most people don't really need massive HDDs, so if smaller SSDs get cheap enough, that'll be the way they'll go. They don't have to be as cheap as HDDs, just cheap enough that the size most people need (probably 200-300GB) is affordable.
For me personally, the time already came and went. I was very enthusiastic about the concept of hybrid drives, particularly since I have vast storage needs (I do audio production). However, no hybrid drive for desktops was forthcoming. Then there was a sale on SSDs, 256GB drives for $200. I picked up two of them. $1/GB was my magic price, the point at which I'd be willing to buy. Now I have 512GB of SSD storage for OS, apps, and primary data. That is then backed by 3TB of HDD storage for media, samples, and so on.
A hybrid drive has no place in that setup. I'd certainly not replace my SSDs; they are far faster than any hybrid drive (even being fairly slow on the SSD scale). Likewise I have no real reason to upgrade my HDDs; they serve the non-speed-intensive stuff.
While I'm willing to spend more than most, it is still a sign of things to come. As those prices drop more and more people will say "screw it" and go all SSD.
Re:It'd better happen quick then (Score:5, Insightful)
Right, but it didn't happen quickly. There is only one model of hybrid hard disk available, which makes it unsuitable for any serious use in mass production. Also, Seagate now tells us that its previous version was actually crap and the new one is much, much better. The price is lower but still high - about 100 dollars for 8 GB of flash. For that money you could get an SSD with 48 GB - and put all your system data on it.
This is a niche product, designed for laptops with only one disk slot that require both fast access and high storage. It is heavily compromised in both aspects, and the price is outrageous.
Re:It'd better happen quick then (Score:4, Interesting)
SSDs typically have large memory caches, whereas HDDs are still stuck around the 32MB mark. With RAM so cheap these days, even the lowest-end graphics cards are coming with 1GB, but not HDDs for some reason.
Re:It'd better happen quick then (Score:4, Informative)
The cache on a hard disk is often used as write cache - store incoming data in cache and defer actually committing it to disk until a convenient opportunity arises.
32MB of cache doesn't take that long to flush. 1GB, OTOH...
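Back-of-the-envelope, assuming an optimistic ~100MB/s sustained rate to the platters: 32MB flushes in about a third of a second, while 1GB takes ten seconds or more, and considerably longer still if the cached writes are scattered around the disk and each one needs a seek.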
Re: (Score:2)
So maybe there could be a little computer and a battery on the HDD along with the gigs of cache?
Re:It'd better happen quick then (Score:5, Informative)
It would add far too much cost to the hard drive, but this is essentially what server-class hardware RAID controllers do. The battery doesn't power the hard disk; it just keeps the cache alive.
Re:It'd better happen quick then (Score:4, Informative)
That's because it doesn't do anything good for hard drives. There was a paper about it some years ago, I'm too lazy to google it up, but even 32 MB is too much (I think the sweet spot was around 2 MB).
If you think about it, it's not surprising: what good would it do that the disk cache in main memory, managed by the OS, didn't already do?
A large on-disk cache would only make sense if it were combined with a battery or something so you don't lose data on crashes.
Re:It'd better happen quick then (Score:5, Interesting)
That's because it doesn't do anything good for hard drives. There was a paper about it some years ago, I'm too lazy to google it up, but even 32 MB is too much (I think the sweet spot was around 2 MB).
Having had the 2MB and 8MB versions of the same Seagate disk (same mechanism) and having seen the 8MB one be substantially faster, I'm pretty sure the sweet spot isn't 2MB.
Re: (Score:3)
How in the hell do we zero out a drive if the file system is a lie?
There's an ATA Secure Erase command, which is a fairly new part of the spec and which drives can optionally support. You have to trust the manufacturer for that, but guess what? Remapping bad blocks has been going on for some time, and there is no way for you to erase those blocks without the cooperation of the disk controller. In theory you could swap the controller and then perform a new low-level format, assuming you trust the disk to do that. I wouldn't personally make assumptions, but if I were, I'd imagine that pro
Re: (Score:3)
You don't seem to understand how the cache memory is used. It holds things like read-ahead data that the drive basically gets for free as it waits for the disk to rotate to the correct place, or metadata like bad-block and reallocation maps. With a larger cache it would be easy for the drive to do background reads when the computer is leaving it idle, increasing the chance that the next read will already be in the cache, like a kind of super read-ahead.
The drive can make smart decisions that the PC can't because it know
Re: (Score:3)
That's because it doesn't do anything good for hard drives. There was a paper about it some years ago, I'm too lazy to google it up, but even 32 MB is too much (I think the sweet spot was around 2 MB).
The sweet spot will be very application and OS dependent. In the old days, the drive didn't have any cache, and the controller couldn't hold much more than 1 sector. So, when the head dropped, you had to wait for your sector to spin around before you could read. If you then needed the adjacent sector, you might have to wait for an entire revolution before you could read it. Schemes like interleaving were devised to get around this. (Logical sectors N and N+1 were physically 2 or 3 sectors apart)
Wi
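A minimal sketch of the interleave mapping described above, with made-up numbers (17 sectors per track, interleave factor 3), just to show how consecutive logical sectors end up a few physical slots apart:

SECTORS_PER_TRACK = 17   # hypothetical track size, chosen coprime with the interleave factor
INTERLEAVE = 3           # logical sectors N and N+1 land three physical slots apart

def physical_slot(logical):
    # classic interleave mapping: spread consecutive logical sectors around the
    # track so the controller has a couple of sector-times to catch its breath
    return (logical * INTERLEAVE) % SECTORS_PER_TRACK

print([physical_slot(n) for n in range(5)])   # [0, 3, 6, 9, 12]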
Re:It'd better happen quick then (Score:5, Insightful)
> Most people don't really need massive HDDs
Are you kidding me?
Record FRAPS videos of your gaming sessions, shoot photography (or RAW), record and edit anything with any modicum of quality? Save said media and final encodings?
Age of Conan: 33 GB. L.A. Noire: 13 GB. Mortal Online: 30 GB.
That is stuff ordinary people do, not just audio producers.
Re: (Score:2)
One word: eSATA
Re: (Score:2)
I was using a Lenovo with a manual hybrid SSD/HDD (i.e. a laptop with an SSD, an HDD, and some logic to link the two into one drive). It actually booted about twice as fast as the HDD alone, and launched applications rather snappily.
Of course, when the drive had problems it was impossible to get at or fix. That's why I went up to a 256GB SSD system drive and a traditional HDD for data/programs. But a more traditional hybrid drive wouldn't have that problem, and would just run faster.
Re:It'd better happen quick then (Score:5, Insightful)
The rewrite figures are going to shit as they move to smaller processing tech; 25nm eMLC is already down to 3000 writes/cell, and they say you won't get $1/GB at normal prices until we get to 19nm, which at least some say will be down to 1000 writes. That you're getting 500MB/s write speed is nice, but if you actually start using that regularly you'll burn through the disk in a matter of months. My first SSD - which I admit I abused thoroughly - died after 8-9000 writes average (it was rated for 10k) after 1.5 years. My current setup is trying to minimize writes to C:, but I still don't expect it to last nearly as long as a HDD. Using it as a read-heavy cache of static files may be a better way to go for those that haven't got hundreds of dollars to spend every time it wears out.
Re:It'd better happen quick then (Score:5, Insightful)
The rewrite figures are going to shit as they move to smaller processing tech; 25nm eMLC is already down to 3000 writes/cell, and they say you won't get $1/GB at normal prices until we get to 19nm, which at least some say will be down to 1000 writes.
Based on the 3000-cycle 25nm tech, the new erase-cycle limit would be ~58% of that (~1700 cycles at 19nm), but the storage capacity per unit area will increase by ~70%.
That you're getting 500MB/s write speed is nice, but if you actually start using that regularly you'll burn through the disk in a matter of months.
The smaller tech sees just as much "heavy use" as the larger tech when equal amounts of board area are dedicated to flash chips. A board with 1 TB of 1700-cycle flash can take a serious write pounding even with considerable write amplification. The same board on the 25nm tech would only have 588 GB of 3000-cycle flash.
"Heavy use" doesn't mean "fastest possible erases." I don't know what you think heavy use means, but even extreme pounding scenarios (such as cycling the entire 1 TB once per day, something you might see in a non-incremental backup server) still give these drives years of cycles to "blow" through. You could technically kill this theoretical drive in a little over a month, but that says nothing about what a "heavy user" will actually witness.
The people with write needs extreme enough that they would burn through the cycles of this theoretical 1 TB drive in less than a year are dedicating a lot more than a single 1 TB drive to their data volume problem.
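Putting rough numbers on that (my own arithmetic from the figures above, not vendor specs):

capacity_gb = 1000        # the theoretical 1 TB drive above
erase_cycles = 1700       # assumed rating for 19nm flash, per the parent post
write_speed_mb = 500      # flat-out sequential write speed

full_pass_seconds = capacity_gb * 1000 / write_speed_mb      # ~2000 s to overwrite the whole drive once
print(erase_cycles * full_pass_seconds / 86400)               # ~39 days writing non-stop at full speed
print(erase_cycles / 365)                                     # ~4.7 years at one full overwrite per day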
Re: (Score:2)
I'd kill for a decent hybrid drive for my laptop right now. I'm currently running Samsung's 1TB 2.5" drive, and that's about halfway full... pretty much the only SSD I'd be able to use is Intel's 320 (or 310?) with 600gigs, which costs about as much as I paid for my Thinkpad. And even with that, I'd be uncomfortably limited due to the lack of room for expansion... not to mention leaving room for wear leveling and such.
Looks like I'll be upgrading to a Thinkpad with two hard drive bays, or one with an mSATA
Re:It'd better happen quick then (Score:5, Informative)
While I love the speed of SSDs (and prices are hitting the "magic" $1/GB), you're forgetting the HUGE elephant in the room with SSDs that almost no one seems to notice ...
SSDs have a TERRIBLE failure rate.
http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html [codinghorror.com]
and ...
http://translate.googleusercontent.com/translate_c?hl=en&ie=UTF8&prev=_t&rurl=translate.google.com&sl=fr&tl=en&twu=1&u=http://www.hardware.fr/articles/843-7/ssd.html&usg=ALkJrhjecZZv1F6d_oT-dr41FPFYOIkVCw [googleusercontent.com]
At the _current_ price point & abysmal failure rate, SSD sadly has a ways to go before it catches on with the mainstream.
Re:It'd better happen quick then (Score:5, Informative)
It wasn't even one of those gradual failures you tend to get with HDDs, where they start throwing faults for a while before dying, giving you a chance to get the data off and order a replacement. One day it was working normally; the next day it wasn't even recognised by the BIOS.
Just to add insult to injury, OCZ have an awful returns policy: I had to pay to get it sent recorded delivery to the Netherlands. Cost me £20. It's going to be a few years before I take the plunge again, and I won't be buying OCZ. Paying premium prices for something so unreliable isn't on, especially given how much of an impact a sudden drive failure has on just about every type of user.
Re: (Score:3)
Ignore the returns policy, send it back to the retailer... Your contract is with the retailer, not the manufacturer. Know your rights!
Re: (Score:2)
Hybrid drives are designed for laptops. Most laptops don't have space for two drives. Thus, the hybrid drive will let media-obsessed folk carry around 750 GB of stuff but give them a speed boost when necessary.
Re: (Score:2)
Most laptops have space for two drives (by default HDD and optical). It's just that for some reason few vendors seem to offer combinations out of the box that don't involve an internal optical drive.
Re: (Score:2)
8 gigs is more than enough for the components in Windows that you actually load on a regular basis. A Windows install may be 17 gigs, but that includes all the utilities you use once in a blue moon, a heap of desktop wallpapers, drivers for all the hardware in the Windows world you DON'T own, sound themes, etc. The actual base OS that is loaded into RAM on boot is likely nearer 1/3 to 1/2 a gig.
Ditto for the apps you install.
Re: (Score:2)
I'm all for getting rid of spinning disks as well but if anything your post legitimizes hybrids.
Re: (Score:3, Insightful)
Frankly I'm not sure the write thing's as much of an issue as people make out.
MTBFs for HDDs and SSDs are both ludicrously high these days. I'd be more worried about the mechanical failure of an HDD than about reaching the write limit on an SSD.
Re: (Score:3)
So the longer MTBF for an SSD is a bad thing for you?
Personally I think you're talking crap. What exactly do you do to hard disks, Miss Jane Q. Poweruser?
Re: (Score:2)
he swaps to them.
Re: (Score:2)
However, the good news is that the article says write endurance has increased a lot in the last few years, with some manufacturers offering SSDs rated at more than 5 million write cycles.
So, the question in my case is not "Will a cheap SSD do the trick?" but "Do I want to spend the money on a high
Re:It'd better happen quick then (Score:5, Informative)
MTBF is a complete BS statistic. Take the first week of a hard drive's life. Make a linear extrapolation of that over the next 1000 years. Post a marketing statistic that is grossly divergent from reality. The Western Digital listed in the thread below has an MTBF of 171 years. Anyone working in a real environment will confirm that is just ludicrous. What you're measuring is that for the first week of a hard drive's life, it behaves as if it would live for 171 years. After the first week, it's all downhill. Back in the real world I kill laptop drives at least every 2 years, and desktop drives every 5.
This makes MTBF an OK but not great cross-device comparison statistic, with the assumption that all hard drives age in about the same way. SSDs really don't age like hard drives. They're less prone to total catastrophic failure. They lose a little capacity on a regular basis. They don't have axle bearings or dust to worry about. They will age and have electrical problems, but nowhere near the mechanical problems of hard drives. They will age in a more linear fashion. A 50 year MTBF of an SSD drive is actually a plausibly useful data point, whereas a 200 year MTBF of a hard disk is a BPOS.
Re: (Score:3, Insightful)
"They will age in a more linear fashion. A 50 year MTBF of an SSD drive is actually a plausibly useful data point, whereas a 200 year MTBF of a hard disk is a BPOS."
Theoretically that may be true, but SSDs are still young enough that I give a lot of weight to "anecdotal evidence", and the majority -- in fact almost all -- of what I have heard is that they actually tend to fail catastrophically.
As far as I am concerned, a "wait and see" approach is still feasible before I spend a bunch of money.
Re: (Score:2, Redundant)
The only time I have really heard of them failing on any large scale is when they are plugged in and just don't work (or work intermittently) due to incompatibilities/software defects, or when someone updates the firmware; generally, if they work for the first week without problems they will run well. I own several and they have been solid, although I have avoided spending large amounts of money and ended up with the smaller sizes, which I use for system files.
With just the system files on SSD the difference
Re: (Score:2)
I'm not saying that's what they typically do. I'm only saying those are the majority of the failure stories I have heard / seen.
I have been an early adopter of many things. But I value my data (which is my bread and butter, after all), and so I will take a conservative approach to this. After more people have had them, for a little while longer, I will revisit the i
Re: (Score:2)
No data I care about resides on any of my SSD drives since I use them for system files only. Anything I care about is on a standard drive (due to size issues) as well as backed up elsewhere.
Re: (Score:2, Insightful)
If you value your data that much (and I understand if you do, because I value mine as well), you shouldn't be trusting either the SSD or the conventional HD. You should be backing up that data to a second device either way, at which point whether or not it is SSD or HD is irrelevant.
My machine is all SSDs, for what it's worth. Data files and system files are all on SSD. Of course the personal data files are all backed up to a server built of conventional HDs that sits in my wiring closet. Not because I th
Re:It'd better happen quick then (Score:5, Insightful)
MTBF is not the failure rate of a single disk; it's the average failure rate across a population of disks. If you have a type of disk with a 100,000-hour MTBF and use 100 of them (whether in a RAID array, a cluster, or 100 individual desktops in a company), then you will (roughly) replace one disk due to failure every 1,000 hours (100,000-hour MTBF / 100 disks), or about every 42 days.
It doesn't try to pretend that a single disk lasts 100,000 hours. That's stupid.
Re:It'd better happen quick then (Score:4, Insightful)
If you would care to read up on what MTBF actually means and how it is used, you would not say it is BS.
If you have a drive with an MTBF of 171 years, how likely is it to fail during its expected usage period of 5 years? About 2.88%.
What is the likelihood that a drive with a 230-year MTBF will fail during the next 2 years? About 0.87%.
The formula is p(a) = 1 - e^(-a/MTBF).
If you cannot work with that, don't blame the engineers who can.
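A quick sanity check of those figures (a minimal sketch; the constant failure rate is the assumption behind the exponential model that MTBF quotes rely on):

import math

def p_fail(years, mtbf_years):
    # probability of at least one failure within `years`, assuming a constant
    # failure rate - the exponential model that MTBF figures are based on
    return 1 - math.exp(-years / mtbf_years)

print(f"{p_fail(5, 171):.2%}")   # ~2.88%
print(f"{p_fail(2, 230):.2%}")   # ~0.87%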
Re: (Score:3)
I'm guessing that you don't make stuff with an MTBF of 10 years. An MTBF of 10 years means that each year roughly 10% of the items are breaking down and requiring repair or replacement. In most industries, with products that unreliable you'd be out of business pretty quickly.
MTBF is a completely different measure from expected lifetime. The expected lifetime is just that - how long the device is expected to work before its performance becomes unacceptable (i.e. when the device becomes w
Re: (Score:3)
"OCZ Vertex 3 240 GB - 2,000,000 hours MTBF Western Digital Caviar Black 640GB - 1,500,000 hours MTBF"
If true, it is the first time I have seen such a figure.
And if true, I would consider it to be good news.
Re: (Score:2)
"OCZ Vertex 3 240 GB - 2,000,000 hours MTBF Western Digital Caviar Black 640GB - 1,500,000 hours MTBF"
But I should also point out that as far as I know, the MTBF figure is not related to the number of rated write-cycles.
But I don't claim to be certain. I'll check.
Re: (Score:2)
as far as I know, the MTBF figure is not related to the number of rated write-cycles.
Of course it isn't, and that's the problem - the numbers are easy to manipulate.
Re: (Score:3)
... I don't think it matters much to me if it takes 171 years or 228 years for the drive to fail...
Won't somebody please think of the great-great-great-grandchildren?
Re: (Score:2, Informative)
LOL! Where did you get that from?
If I bought a million hard drives I'd expect several of them to not even power on. By your definition I'm sure the MTBF of all consumer hardware would be zero.
PS: MTBF means "Mean time between failures" not "Mean time before failure".
Re:It'd better happen quick then (Score:4, Insightful)
I'll lecture you about my practical usage versus your theoretical bullshit.
I've used an SSD for 3 years now and I have had zero problems. It was cheap and it has literally transformed the way I use my computer. It's so fast I'd never go back to mechanicals.
On the other hand, I have had 3 mechanical drives fail on me, after an average of 2-3 years of use.
Until you actually try SSDs, don't lecture other people about them because you don't know what you're talking about.
Re: (Score:2)
I'm glad to say that I've never had a hard drive fail on me, ever. I've had MBR partitions get corrupted, which I thought was a faulty HDD but turned out to be caused by faulty RAM under a specific circumstance, but never an HDD fail.
However, I'm still a firm believer that anything with "moving parts" will inherently be more likely to break than something without, or at the very least will certainly wear over time and fail eventually. SSDs do seem to have their issues and certainly have their own issues wit
Re:why do firmware updates format? (Score:5, Informative)
It sucks, but there's an easy if time-consuming fix that leaves you with the drive contents intact:
Boot a live Linux distro, hook a USB HDD to the system, and mount it. The USB HDD can even be formatted NTFS if the live distro has FUSE installed along with the ntfs-3g driver; most live distros already have them or will let you install them. Assuming your SSD is the primary or only disk in your system, then:
(You need to be root or use sudo; on most live distros you simply type "su root" or "sudo -s")
/dev/sda is the first disk in the system. You may have to run ls /dev/sd* to get a list of disks and partitions. And note, sda is the entire disk block-for-block; sda1 is a partition, just like sda2, sda3, etc. If you have more than one disk and don't know which letter it is, then simply type fdisk -lu /dev/sdX (X being the letter you want to check) and it will dump the drive info.
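Assuming the SSD is /dev/sda and the USB backup disk is mounted at /mnt/backup, the backup command would be something like:

dd if=/dev/sda of=/mnt/backup/ssd-backup.img bs=1M    # paths and block size are only examples; adjust to your setup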
It may take about 5+ hours assuming you have a 512GB SSD and an optimal USB transfer rate of 25MBps to the backup disk (in my experience the average for USB 2.0 write speeds). Faster backup disks and smaller-capacity SSDs will back up much faster.
Once complete, you now have a bit-for-bit, block-level copy of the SSD. The copy is agnostic to the boot sector, boot loaders, partitions, and file systems - they all come along for the ride. It does not matter what OS you had on it, how many partitions there were, or what file system you used. If you're very paranoid and want to wait hours more, then run diff against the disk and the disk image file to be sure they are an exact copy (never did it and never will).
Now reboot and upgrade the firmware the way the manufacturer tells you. So now your data is wiped out, big stinkin deal. Fire up the live Linux distro and again attach your backup disk and then enter the following command:
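Roughly, using the same example paths as the backup step above:

dd if=/mnt/backup/ssd-backup.img of=/dev/sda bs=1M    # writes the saved image back over the raw SSD device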
This writes the image file back to the SSD, and if all goes well (it has never failed me yet, and I have done this dozens of times on various systems) you now have your upgraded firmware with its original contents fully intact.
You can even mount one or more of the partitions contained within the disk image (under Linux of course) if you do a bit of homework (search Google for mounting dd images) or just go here: http://darkdust.net/writings/diskimagesminihowto [darkdust.net] That tutorial is how I started playing with dd images.
You can also move the contents of a smaller, cramped disk to a larger drive. Works for Windows/NTFS too! You simply dd the entire smaller drive to the new drive (works best when both drives are hooked up via SATA). Then you use GParted or some other parted disk GUI to grow the file system on the new drive. Shut down, remove the Linux CD/thumb-drive, remove the old disk, and move the SATA cable from the old disk to the new disk. Boot your PC and, if you're using Windows (2000, XP, Vista, 7), it will run check disk to verify the volume (DON'T SKIP IT!) and reboot. Once it reboots to Windows, open up Explorer and see that you now magically have all that shiny new space without formatting, reinstalling, adding new drive letters, or mounting drives under folders, etc. It's transparent!
Example command:
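Something like:

dd if=/dev/sda of=/dev/sdb bs=1M    # whole-disk copy from the small drive onto the large one; double-check the device letters first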
sda is the small disk and sdb is the new large disk. I have done that trick multiple times as well with a 100% success rate. My friends were amazed.
Re: (Score:2)
I have a 100MB DEC SCSI disk from an old VAX which still boots. It's from the late 1980s and was running 24/7 from manufacture until about 2001; since then it's been used occasionally but has spent most of its time turned off.
I also have 4x 4GB SCSI disks which were used in the late 90s in a server, before I got hold of them in 2000 and built a 4-disk RAID 0 array which I ran continually, with very poor cooling, until about 2004. As far as I know those disks still work, although I have had no reason to access them
Re: (Score:2)
I can't get worries out of my head
Do you also obsessively defragment your hard disks? Change your car's engine oil more often than recommended?
Re: (Score:3, Informative)
(P.S. Please don't lecture me about wear-leveling, etc. I know how they work.)
The last flash process size reduction took away ~39% of the overall erase cycles but added ~64% more capacity per mm^2.
In your view the latest-generation SSDs are even worse than the previous generation because they only have 61% of the erase cycles, right?
If you really knew how SSDs worked, you wouldn't be talking about SSDs with millions of erase cycles per block. I mean what the fuck...
Re: (Score:2)
"If you really knew how SSD's worked, you wouldnt be talking about SSD's with millions of erase cycles per block. I mean what the fuck.."
Write cycles, not erase cycles. But never mind, I really don't want to split hairs.
/. how I could easily write a program to wear one out. I was told I was full of BS. Lo and behold, a program to do just that was made available on the web.
See the article I linked to elsewhere in this thread. It says the life of SSDs is still limited mainly by the number of write cycles. So my concern is perfectly valid.
Further, in regard to how they work: when SSDs were a bit newer, I mentioned right here on
Of cour
"No" (Score:2)
Re: (Score:2)
So it's not the time for hybrid drives - but it is the time for hybrid setups?
On a laptop with a single drive bay I could see use for a hybrid drive.
Re: (Score:3, Insightful)
That's precisely what a hybrid HDD does, except it takes the decision regarding what will benefit most from going in the SSD out of your hands.
Re: (Score:2)
That's what backups are for. You can even have a file monitoring daemon making backups from the SSD to the HD when idle.
movies and video (Score:2)
$50/TB (next year) implies a 4 GB movie stores for 20 cents - not quite zipless: a favorite 1000 movies and videos come to about $200, and backing up with a simple mirror doubles that cost.
Multiple failure points (Score:3)
Is this significantly different from SRT? (Score:4, Interesting)
I don't imagine it is. Anandtech found it wasn't that difficult to evict stuff from the cache you actually wanted [anandtech.com]. Not to mention that if you start copying anything especially large (your MP3 collection, or installing a couple games from a Steam sale, say) you nuke the cache and are back to mechanical HD performance.
Personally, I prefer to do it manually. Stuff I want to load fast (Windows, applications, games, my profile folder) sit on an SSD. Bulk data sits on a mechanical drive.
Re: (Score:2)
But now that I know it's all this cache crap, I'm suddenly not at all interested. If one wants the best of both worlds, simply get two drives and install the OS on the SSD.
For What? (Score:2)
Who is this for?
With only 4-8GB of flash, I can't think of who this is for.
Mid-range consumer desktops/laptops?
Really, with so little cache you might as well just add more RAM.
I wouldn't even dream of putting one of these in a server. It's a shame Linux doesn't have L2ARC support, and it would be nice if there were a drop-in hardware equivalent.
Hybrid can actually be sometimes faster (Score:2)
The core problem with SSDs is write speed on workloads that have a large number of small updates. My testing on the older 500GB Momentus XT showed that in general it had better write speed doing, e.g., a Fedora install than the 80GB Intel SSD that I benchmarked it against (same generation of product, about a year ago), due to the large number of small updates that the non-SSD-aware EXT3/4 filesystems do during the course of installing oodles of RPMs. Because the Momentus only caches *read* requests
Prices! (Score:3, Interesting)
Not only are SSD prices going down, but traditional hard drives are going UP! (At least for the short term)
Prices taken from Newegg.com:
Seagate Barracuda XT 3TB is $399.99 (used to be a lot cheaper)
Seagate Barracuda 1TB SATA III:
About a year ago: On sale for $60, regular $70
Now: $149.99
I think now is the time of the SSD, and the hybrid drive is just not worth the price.
And considering this drive retails at $239.99 while a regular mechanical 750GB drive is between $69.99 (Hitachi Deskstar) and $179.99 (Western Digital Black), there is no reason to buy it.
Just go buy a small SSD and a regular mechanical drive and do it manually.
Cache hasn't helped that much has it? (Score:2)
One lesson I've learnt over the years is that hard disk cache (in this case the traditional RAM-based cache) doesn't matter all that much. Drives with 8MB cache consistently show 99% of the performance of drives with 16MB. And so on for the 128MB vs 64MB vs 32MB varieties of hard disks.
I do realize there's a benchmark there. But I'm still skeptical given the history of how little on-board hard disk cache matters.
Re:Cache hasn't helped that much has it? (Score:5, Informative)
There are only two things drive cache can help with significantly. When rebooting, where memory is empty, you can get memory primed with the most common parts of the OS faster if most of that data can be read from the SSD. Optimizers that reorder the boot files will get you much of the same benefit if they can be used.
Disk cache used for writes is extremely helpful, because it allows write combining and elevator sorting to improve random write workloads, making them closer to sequential. However, you have to be careful, because things sitting in those caches can be lost if the power fails. That can be a corruption issue on things that expect writes to really be on disk, such as databases. Putting some flash to cache those writes, with a supercapacitor to ensure all pending writes complete on shutdown, is a reasonable replacement for the classic approach: using a larger battery-backed power source to retain the cache across power loss or similar temporary failures. The risk with the old way is that the server will be off-line long enough for the battery to discharge. Hybrid drives should be able to flush to SSD just with their capacitor buffer, so you're consistent with the filesystem state, only a moment after the server powers down.
As for why read caching doesn't normally help, the operating system's filesystem cache is giant compared to any size the drive's might be. When OS memory is gigabytes and the drive's is megabytes, you'll almost always be in a double-buffer situation: whatever is in the drive's cache will also still be in the OS's cache, and therefore never be requested. The only way you're likely to get any real benefit from the drive cache is if the drive does read-ahead. Then it might only return the blocks requested to the OS, while caching ones it happened to pass over anyway. If you then ask for those next, you get them at cache speeds. On Linux at least, this is a futile effort; the OS read-ahead is smarter than any of the drive logic, and it may very well ask for things in that order in the first place.
One relevant number for improving read speeds is command queue depth. You can get better throughput by ordering reads better, so they seek around the mechanical drive less. There's a latency issue here though--requests at the opposite edge can starve if the queue gets too big--so excessive tuning in that direction isn't useful either.
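A toy illustration of the elevator idea (nobody's actual firmware, just the concept): given a set of pending block addresses and the current head position, service everything in the direction of travel before turning around.

def elevator_order(pending_lbas, head):
    # SCAN/elevator sketch: sweep upward from the current head position,
    # then turn around and pick up the requests left behind it
    ahead = sorted(lba for lba in pending_lbas if lba >= head)
    behind = sorted((lba for lba in pending_lbas if lba < head), reverse=True)
    return ahead + behind

print(elevator_order([95, 10, 180, 40, 120], head=50))   # [95, 120, 180, 40, 10]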
Larger SSD caches do have advantages (Score:2)
Larger SSD caches bring two advantages: the cache persists across restarts, assisting boot time, and it may also be larger than the amount of memory the OS allocates to its own cache.
Already here... sorta. (Score:3)
If you're willing to make a bit of effort, that is.
Just yesterday I was investigating the Highpoint Rocket 1220 and 1222 HBAs, which imbue their possessor with the power of Creation... the power to create hybrid magnetic-flash storage devices. Hook up an SSD and a good old moving-platter drive, and the HBA does the heavy lifting to create a virtual hybrid drive that will appear as a single device to the host system. It's similar to what is being done with some RAID enclosures of the last couple of years, using chipsets like the JMicron JMB393 to create singular virtual drives that are really RAID arrays. I have no doubt there will be other brands of HBAs of a similar sort joining these Highpoint ones soon enough.
With products like this Highpoint HBA, it's not necessary to be a lady-in-waiting to some royal manufacturer's whim. You can pick and choose an SSD and a disk drive with prices, capacities, and characteristics that suit your specific needs, rather than waiting breathlessly for some one-size-fits-all solution that benefits the maker more than the buyer.
Write back cache (Score:3)
I would buy one now if they would implement it as a write-back cache. It wouldn't be hard to do. Take a GB of flash, structure it as a ring buffer. That eliminates the "small random writes" problem - you're just writing a linear journal, and the places you're writing are pre-erased and ready to go. If the power fails the drive just plays back the cache when the power comes back on.
That would let you have massive improvements in write performance. Metadata updates leave you seeking all over the disk. BTRFS is currently very slow to fsync because of this. But if it could just blast it to a big flash cache, and the drive could confirm that as committed to disk immediately, it'd scream.
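A rough sketch of that journal idea (purely illustrative - the class and method names here are invented, and real firmware would also track erase blocks, checksums, and wear):

from collections import deque

class WriteBackJournal:
    """Flash ring buffer in front of the platters: writes are acknowledged as
    soon as they land in the pre-erased journal and are drained to disk later."""

    def __init__(self, capacity_entries):
        self.capacity = capacity_entries
        self.journal = deque()              # (lba, data) records in arrival order

    def write(self, lba, data):
        if len(self.journal) >= self.capacity:
            self.drain_one()                # make room: commit the oldest record to the platters
        self.journal.append((lba, data))
        return "acknowledged"               # an fsync can return here, not after the seek

    def drain_one(self):
        lba, data = self.journal.popleft()
        # ...seek + write to the spinning disk happens here, at the drive's leisure...

    def replay_after_power_loss(self):
        # the journal lives in flash, so on power-up anything not yet drained is replayed
        while self.journal:
            self.drain_one()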
Unfortunately all the manufacturers seem to just want to use it as a big persistent read cache to make Windows boot faster.
Re: (Score:2)
Sounds nice, but I think the truth is that most people on non-db-server workloads don't really write a lot of random data in the first place. For them, start up speed is probably more important. I know it is to me. :)
I think there's a place for these. (Score:3, Insightful)
A hybrid drive would be great in my laptop. It doesn't have room for "storage" drives, and a 600GB SSD would be heinously expensive. You could also put one in a USB 3.0 external enclosure (I assume they can work like that). That would give you a nice trade-off between speed, capacity and, most importantly, portability.
That seems to be what Seagate is thinking too, since the drive is in the 2.5" form factor.
First hand (Score:4, Informative)
I have one. It works great, but "chirps" occasionally, which I think is the sound of the motor spinning down. None of the firmware updates I've applied that claim to fix the chirp actually fix it.
It runs much faster than my previous drive, but I'm also comparing a 7200RPM drive to a 5400RPM drive, so the speed increase isn't just because it's a hybrid.
I guess the advantage of the SSD cache is that if you use it in a circular fashion you can avoid a lot of the 'read-erase-rewrite' cycles... but I don't know how the cache is organised for sure.
Realize the limitations... (Score:5, Insightful)
Hybrid drives, and even all of the hybrid RAID controllers I've looked at, only use the SSD for read acceleration. They aren't used for writes, from what I could tell from their specs. So you're almost certainly better off upgrading your system to the next larger amount of RAM rather than getting a hybrid drive.
Personally, I looked at my storage usage and realized that if I didn't keep *EVERYTHING* on my laptop (every photo I'd taken for 10+ years, 4 or 5 Linux ISOs, etc) and instead put those on a server at home, I could go from a 500GB spinning disc to an 80GB SSD. So I did and there's been no looking back. The first gen Intel X-25M drives had some performance issues, but since then I've been happy with the performance of them.
The worst of both worlds (Score:2)
Slightly OT, SSD for OS issues (Score:2)
I was all set to buy a new laptop with the OS mounted on the SSD and a second HDD for mass storage. The obvious solution to me would've been to map the user directories to the HDD for file storage. Not a problem with Linux of course, but you can't do this with Windows! Can't recall the details, but there's some path info hard-coded somewhere that prevents you from moving your "My Documents" folder to a different drive. I never saw any workaround that didn't feel like a hack that would cause problems lat
Re: (Score:3)
Using ProfilesDirectory to redirect folders to a drive other than the system volume blocks upgrades. Using ProfilesDirectory to point to a directory that is not the system volume will block SKU upgrades and upgrades to future versions of Windows. For example if you use Windows Vista Home Premium with ProfilesDirectory set to D:\, you will not be able to upgrade to Windows Vista Ultimate or to the next version of Windows. The servicing stack does not handle cross-volume transactions, and it blocks upgrades.
[microsoft.com]
This Drive is CRAP (Score:5, Informative)
This Drive is CRAP
ASSUMING that it still only does read caching.
I bought one of the Gen-1 drives and was very underwhelmed. I wanted write caching; 4GB of non-volatile memory with the performance of SLC flash could allow Windows (or whatever) to write to the drive flat out for many seconds without a single choke due to the drive.
In addition, 4GB of write-back cache is enough to give a significant performance boost for continuous random writes across the drive, and even more so across a small extent such as a database or a .NET native image cache.
But for reading it's insignificant compared to the 3-16Gbytes of (so much faster) main memory that most systems contain, except at boot time when, unlike RAM, it will already contain some data. The problem with this is that it will contain the most recently read data, whereas the boot files can quite reasonably be described as least recently read.
So in the real world it's useless for anything except a machine that's rebooted every five minutes ...
Re: (Score:2)
Considering the price of RAM and flash, I do not really understand these hybrid drives. Wouldn't it be cheaper and make more sense to just put an 8GB (or 16GB, or more) battery-protected RAM cache inside the hard disk rather than flash memory?
P.S. I chose to go the SSD route anyway; hybrid drives never entered my mind as an alternative.
Seagate slashvertisement? (Score:4, Informative)
That's horribly incorrect. I liked the sound of hybrid drives as well when I saw the price... a 500GB laptop hard drive with 4GB of flash for $150 should be awesome... But I, not being an idiot, did some research, and sure enough, the reviews say it's not remotely comparable to a real SSD.
eg. http://www.storagereview.com/seagate_momentus_xt_review [storagereview.com]
It's faster than a drive without such a cache, and it might be a good option for a laptop, but even there I'd say a 32GB SD card would be cheaper, and will work wonders on FreeBSD with ZFS configured for L2ARC...
I have no particular interest in what anyone buys, but the comparison to real SSDs is massively dishonest.
How much faster is fsync? (Score:2)
That's the real question with a hybrid drive. If you're running any kind of database, your performance is limited by how quickly you can fsync. A hybrid ought to be instant, which would be a major speed and reliability win.
Few seconds slower boot is half or a third. (Score:2)
Its benchmarks for cold boots and application launches show the new drive to be just a few seconds slower than a SSD.
My Debian sid boots in a few (noticeably less than ten) seconds into KDM. A few seconds out of ten seconds is a third or more.
"Newfangled tech! Now at least 33% slower!"
Great slogan you got there.
Is the Time Finally Right For Hybrid Hard Drives? (Score:2)
Is the Time Finally Right For Hybrid Hard Drives?
No.
They are the WORST of both worlds (Score:3)
The article seems to think hybrid drives are the best of both worlds, but they are not.
They have the unknown reliability of SSD/flash drives (they do fail) COMBINED with the failure rate of consumer-grade HDs (not that good).
They are not as speedy as pure SSD and not as cheap as pure HD.
So, the people that want speed spend the money on a real SSD and use cheap, reliable HDs for mass storage in a NAS.
The people that want cheap buy regular old HDs and accept the lower performance, or just whine about it without doing anything about it because they are cheap.
The middle market, the people too cheap to buy an SSD but willing to spend far more on a small HD... I guess it just ain't there. ESPECIALLY since this lower class of consumer tends to buy ready-made machines. Notice how the consoles only increase the HD space at the same time netbooks do? When THAT size of laptop HD has reached rock-bottom price and you actually would have to pay more to get a smaller one?
Well, same for budget PC makers. They buy HDs in bulk and put the same size in everything to cut costs. They are NOT going to add several tenners' worth of hardware in the faint hope that budget PC buyers will buy their more expensive model when it sits next to the cheaper models in the shop.
And the high-end PC makers? They simply buy cheap SSD's and charge a premium for them.
Budget and high-end markets are FAR easier to supply for than the mid-range, because the budget people think anything more expensive is a rip-off and the high-end people look down their noses at anything cheap.
Intel's Z68 chipset negates the need for this drive (Score:4, Informative)
and their upcoming Ivy Bridge chipset will take it even further. Both allow for the use of a small SSD drive as a cache against a larger traditional hard drive.
Per the wiki page on their chipsets, the Z68 also added support for transparently caching hard disk data onto solid-state drives (up to 64GB), a technology called Smart Response Technology.
SRT link is http://en.wikipedia.org/wiki/Smart_Response_Technology [wikipedia.org]
Re: (Score:3)
But the SSD doesn't have to be added as a discrete component. You can already get motherboards [newegg.com] that incorporate a small SSD drive to be used with SRT.
Why different just based on usage pattern? (Score:2)
The current idea is to put often-used files on the SSD, and less-used files on the HDD.
I bet you could get even better performance by splitting every file and putting the first few blocks on the SSD.
When a file is accessed, the SSD can start delivering data immediately while the HDD has some time to find the rest of the file and take over from there.
That should make every file access fast.
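A sketch of what that read path could look like (hypothetical - the helper functions are stand-ins, not any real drive's API):

PREFIX_BLOCKS = 8   # first few blocks of every file pinned to flash (made-up figure)

def read_file(total_blocks, read_block_from_ssd, read_block_from_hdd):
    # Serve the file's first blocks from flash immediately; by the time they have
    # been delivered, the spinning disk has had time to seek to the remainder.
    data = []
    for i in range(min(PREFIX_BLOCKS, total_blocks)):
        data.append(read_block_from_ssd(i))
    for i in range(PREFIX_BLOCKS, total_blocks):
        data.append(read_block_from_hdd(i))
    return b"".join(data)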
Never if you're running a database... (Score:2)
So what's stored where? (Score:3)
So how do hybrid drives decide what's stored in the SSD vs the disk? From working in the hard drive business, I can think of several ways to tackle this - which is it?
1. Drive observes usage patterns and stores data on SSD vs disk based on that. This would be cool since it's transparent to the OS, etc., so it can work by "magic" (e.g. like bad block remapping), but it feels like it might be less effective than the other strategies depending on how good a job it does guessing how data is used. Also, there are some cases that are 'rare' (such as boot time) but which are important to optimize, even if statistically it wouldn't appear so.
2. Driver/OS controls what's stored where. This could be great, since they can have much more knowledge of what's going on than the drive.
3. SSD and disk are distinct 'drives'. This would allow the user to optimize (e.g. put boot OS and swap on SSD, big files on disk, etc.). But it requires users to understand and manage tradeoffs explicitly, which most people probably don't want to deal with.
So which is it? Does anyone know?
Re: (Score:2)
DRAM storage is fine until power runs out.
Even that i-RAM only lasts for ~16 hours on battery.
If you work 8 hours a day, don't ever be late. And don't take weekends off.
It might be okay for a cache, but that only helps after booting and then you might as well just add memory on the motherboard.
Re: (Score:2)
place things that are used most often
It's not just about being used often; it's also about the pattern of use. A favorite video may be accessed pretty often, but it's always read sequentially from beginning to end, and the speed of reading is limited by the playback speed, not the hard drive. So there is little point in putting it on an SSD. Program files (and associated program data files), OTOH, are read in a non-sequential pattern while the user is waiting for a program to start or do something, so you do want them on an SSD.
Are the caching algorithm