Top Solid State Disks and TB Drives Reviewed
Lucas123 writes "Computerworld has reviewed six of the latest drives, including 32GB and 64GB solid state disks, a low-energy-consumption 'green' drive, and several terabyte-size drives. With the exception of capacity, the solid state drives appear to beat spinning disk in every category, from CPU utilization and energy consumption to read/write speed. The Samsung SSD was the most impressive, with a read speed of 100MB/sec and write speed of 80MB/sec, compared to an average 59MB/sec read and 60MB/sec write speed for a traditional hard drive."
Number of writes? (Score:3, Interesting)
Why is the ultimate number of writes never taken into account in these comparison reviews? Why are solid state drives tested so that their weaknesses are not probed?
MTBF/Write Cycles (Score:5, Interesting)
Write cycles: even at the lowest estimate, 100,000 write cycles to failure.
Meaning on a 32GB drive, before you start seeing failures, you would have to (thanks to wear leveling) write 32GB x 100,000, or 3.2 petabytes.
At a sustained 60MB/sec write speed, you would need to write (and never, ever read) for 3,200,000,000/60, or ~53 million seconds straight.
53 million divided by 86,400 means you would need to be writing (and never, ever reading) for ~617 days straight (that's roughly 20 months of just writing, no reading, no downtime, etc.).
So... the sky is not falling; these drives are slated to last longer than I've ever gotten a traditional drive to last in my laptop(s).
Almost forgot to mention: recent standard NAND has been more in the 500K-1M write cycle range. 100K was earlier technology, so multiply the numbers accordingly.
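For anyone who wants to plug in their own drive size or cycle count, the arithmetic above can be sketched as a few lines of Python (this assumes perfect wear leveling, which real controllers only approximate):

```python
# Back-of-the-envelope NAND endurance estimate, using the numbers
# from the post above: 32GB drive, 100,000 write cycles per cell,
# 60MB/sec sustained writes, and ideal wear leveling.
capacity_gb = 32          # drive size in GB
write_cycles = 100_000    # erase/write cycles per cell (low estimate)
write_speed_mb_s = 60     # sustained write speed in MB/sec

total_mb = capacity_gb * 1_000 * write_cycles   # 3.2e9 MB, i.e. 3.2 PB
seconds = total_mb / write_speed_mb_s           # ~53.3 million seconds
days = seconds / 86_400                         # ~617 days of non-stop writing

print(f"{days:.0f} days of continuous writing")  # roughly 617
```

Swap in 500,000 or 1,000,000 cycles for modern NAND and the figure stretches to years or decades of continuous writing.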
What about real performance (Score:4, Interesting)
Of course SSD will beat an IDE disk hands-down, but raw speed is not why you buy IDE drives.
I have always used SCSI for my OS/system and IDE for my storage; this combination (in addition to SMP rigs when available) has let my machines outlive 3 generations of processors, saving me money on upgrades.
SSD seems best marketed to 'gamers', so why is it always connected to a very limited I/O bus?
Not the jump I was hoping for (Score:2, Interesting)
Looking at it, the biggest benefit I can see is that solid state drives should be better at withstanding shock and vibration, which normal hard drives hate. If they cannot improve the performance much further (it will still be useful for gamers, servers, and other speed-freak applications), then reliability and security of data are the selling points. I can see rugged notebooks using these.
Speed for a mech. HD is burst, not track-to-track? (Score:4, Interesting)
"Samsung rates the drive with a read speed of 100MB/sec and write speed of 80 MB/sec, compared to 59MB/sec and 60MB/sec (respectively) for a traditional 2.5" hard drive."
The speed quoted for a mechanical hard drive is a burst speed, accurate for reading only one track, and doesn't include the time it takes for a conventional rotating hard drive to change tracks. Isn't that correct?
Re:Longevity of NAND flash (Score:5, Interesting)
Some friends of mine at another company were using them in an I/O-laden system; they wanted to replace the laptop drives to make the machines lower-power and more reliable, and they can blow out a flash drive in about 4 weeks.
Kirby
Re:MTBF/Write Cycles (Score:3, Interesting)
Seriously, this "issue" comes up in every discussion about SSDs, and it seems like people are just unwilling or unable to accept that what was once a huge problem with the technology is now not even remotely an issue. Any SSD you buy today should outlive a spinning disk, regardless of the operating conditions or use pattern. It is no longer 1989, engineers have solved these problems.
So it's inevitable that someone who doesn't know this particular detail, but is already familiar with how platter-based magnetic media work, will bring up that issue in pretty much every discussion.
The problem is it's new. That's all. (Or, perhaps that techno-journalists write about stuff they don't know enough about.)
Re:Longevity of NAND flash (Score:5, Interesting)
Yes, I have. However, I've never had one magically get smaller on me in such a way that fsck decides that you're done fixing the filesystem. With SSD, yes, I've had exactly that happen to me.
In my life, I've lost a total of about 42KB to spinning media that was completely unrecoverable (yes, I mean that number literally). I use RAID extensively; I was the DBA/SA/developer at a place that had ~10TB of disk online for 5 years. In all that time, 42KB is all I lost.

Oh, and that was in the off-line, tertiary backup of the production database (it was one of 5 copies that could be used as a starting point for recovery; we also had the redo logs for 5 days, and each DB was a snapshot from one of the previous 5 days). It was stored on bleeding-edge IDE drives in a RAID 5 array. We used it as a cheap staging area before pushing the data over Firewire/USB to a removable drive that an officer of the company took home as part of the disaster recovery system (it held only the most recent DB and redo logs). The guy didn't RMA the hot spare, and we had two drives fail in about 3 days while the hot spare was waiting for the RMA paperwork to be filled out. In that one particular case, using ddrescue, I recovered all but 42KB of the data off the RAID 5 array (even though it was an ext3 filesystem on LVM, on RAID 5, which made the recovery even more complex).

Every other bit and byte of data in my life from spinning media that I cared about, I've recovered (I've had a number of drives die with data I didn't care about, but could have recovered if need be). Trust me, I know about reliability, backups, and how to manage media so that a failure doesn't mean loss. I know the failure modes of drives. I've hot-swapped my fair share of drives and done the RMA paperwork. I've been in charge of drives where losing any one of ~200 of them would have cost 10 times what I made in a year if I couldn't reproduce the data on it within hours.
If it had been worth $10K, I'd have sent the drive off to get that 42KB of data recovered. But it wasn't. The failure modes of spinning media are well understood. People know exactly how to do things like erase drives securely. People know who to call that has a clean room, someone who can remove the magnetic platters and put them under a microscope to get the data back. SSD isn't nearly as mature in that sense.
All of that is really to say: yes, I know something about disks and drives. My point is that SSDs aren't magic pixie dust in terms of reliability. I've had exactly what he's saying I shouldn't worry about happen to me on a regular basis. Enough that our engineering department has developed specific procedures to deal with failed flash in the field, and we've changed our release procedures to account for them. If you're going to use an SSD or flash drive, go kick the crap out of it first. Don't believe on faith anything you read on Slashdot (including this post, which is anecdotal). We order lots of 5,000 flash disks, and you can bet that at least 100 of them show serious flaws shortly after being fielded. The ones the developers and testers use regularly develop problems in terms of months, not years. The manufacturer tells us, essentially, that it's not worth it to screen those out, so deal with it.
The whole point of replacing the laptop drive was to make the silly thing more reliable. But making it uber-reliable for 4 weeks until the wear leveling crapped out wasn't the idea.
Kirby
Re:Longevity of NAND flash (Score:3, Interesting)
On the other hand, I have 4 WD 250GB IDE drives on my desk that give SMART errors and are just damn flaky... what can I say, you get what you pay for.
Re:Longevity of NAND flash (Score:3, Interesting)
I believe what Kirby was saying, in addition to SSDs crapping out in weeks instead of years, is that he can get the data back from rotating media virtually every time, if it's important enough to be worth spending the money on. Unimportant stuff he doesn't bother to spend the time and money on.
I believe he is also saying that "dead" rotating drives can still have their data recovered, while "dead" SSDs cannot with currently available methods.
As a user who had a lightly used Jump Drive die suddenly after 4 years, I can attest that the failure was complete: every online recovery tool I tried recovered nothing, and the experience discouraged the idea of actually sending it in for full disassembly and attempted recovery. It was simply dead.
And this is not even bringing up the question of sudden, unannounced decreases in SSD capacity. How would you feel about a rather full regular hard drive that suddenly got several percent smaller? That could kill your system right there, even though most of the SSD was intact.
Re:What about real performance (Score:4, Interesting)
On the other hand, random-write issues are "fixable". My company just published tests for various RAID-5 flash SSD setups. Testing 4 drives with 10 threads on Linux 2.6.22 using our MFT "driver", we get:
- 4K random reads: 39,689 IOPS, 155 MB/sec
- 4K random writes: 29,618 IOPS, 115 MB/sec
These are real numbers and the application does see the performance improvement.
For full details on drive performance see:
http://managedflash.com/news/papers/index.htm [managedflash.com]
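Those IOPS and MB/sec columns are consistent with each other, which is a decent sanity check on vendor numbers in general. A quick way to verify (assuming "4K" means 4 KiB transfers and the MB column uses 1 MB = 1024 KiB, which is how these figures appear to line up):

```python
# Cross-check quoted flash benchmark figures: transfer size times IOPS
# should reproduce the quoted bandwidth column.
def iops_to_mb_s(iops: int, block_kib: int = 4) -> float:
    """Convert an IOPS figure to MB/sec for a given transfer size in KiB."""
    return iops * block_kib / 1024

read_mb_s = iops_to_mb_s(39_689)    # ~155 MB/sec, matching the read row
write_mb_s = iops_to_mb_s(29_618)   # ~115.7 MB/sec, quoted as 115

print(f"reads:  {read_mb_s:.1f} MB/sec")
print(f"writes: {write_mb_s:.1f} MB/sec")
```

When the two columns of a benchmark table don't reconcile like this, it usually means the transfer size or the MB definition is different from what the table implies.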
Re:Longevity of NAND flash (Score:5, Interesting)
When I was young and stupid about drives and media, I lost a 1.2GB WD drive and everything on it. I couldn't spell "mkfs" or "fsck" and had no idea how to recover the drive at the time (I also didn't have the money for a second drive to recover to, and no credit card, so I couldn't hold onto the first while waiting out the RMA on its replacement). I was just young and ignorant. I also lost a 1-2GB laptop drive that I literally rode into the ground; I could have copied everything off and moved along. I knew the drive was going bad, but it was just a knock-around system that I didn't care about. In the end, had I been thinking, I'd have saved the e-mail on it. I lost the first ~5-6 years of e-mail I had, but who wants e-mail from when they were 18-24? That was probably a couple of hundred MB that I might regret losing, but of nothing more than sentimental value. I'd never read it, and would only be amused that I could prove I'm getting the same chain letters 15 years later.
I've also had 4-5 drives lost to a virus or pilot error, but not to a mechanical/media problem.
I've RMA'ed probably 100-200 drives due to some type of failure. I've had lots of drives fail in a RAID array, where the mirror saved me. I've had lots of standalone drives fail with a section of bad sectors, and I recovered every byte of data from all of them. Normally you can still recover from a drive that is going bad, for a limited amount of time, and you usually have plenty of lead time, especially with SMART monitoring telling you the drive is going south. As long as you pay attention, spinning media isn't that hard to keep in good shape.
As a professional IT person, 42KB is it, on machines where production work is done for money at a company. And in that case I was bound and determined to recover absolutely everything; I invested a week into that project. I gave up on the 42KB once I proved that it was in a backup of the database that was by then 15 days old (and thus of no use). Had it been necessary or cost-effective, I'd have spent the $1-3K to get that drive image recovered by a professional data recovery shop. I think I've lost a drive or two on my personal machines at work, but the drives were fine; the laptop's controller was overheating. Using fsck, I recovered the entire FS once the controller was replaced. I think I had to re-rip some music from CD, because I failed to back it up prior to sending the laptop in for repair. I re-imaged the drive just to be safe in case the controller had corrupted something important on the OS drive, which was the only reason I actually lost the music.
Again, the problem is that the flash drives we have simply decide they are smaller at the interface level. Running fsck just scragged the system pretty much start to finish. I don't have a clue where the missing blocks went; I have no idea what happened, but upon reboot the drive decided that the block device was smaller. Filesystem recovery tools haven't had a chance to mature enough to understand those types of failures. Flash makers haven't yet decided that access to diagnostics and re-mapping logs might be of value to data recovery tools (at least none that I'm aware of), nor access to the raw data (in case they are holding blocks in reserve). All of these things are reasons to be concerned about wear leveling.
Kirby
Re:Longevity of NAND flash (Score:4, Interesting)
I think that makes perfect sense, but then I'd think all the money they spent making the thing perform faster than, say, a 1MB/sec read rate is totally wasted. I assume folks are trying to push these as replacements in enterprise server machines, which I'd be extremely reluctant to do.
Folks talk about these things in the theoretical (the original poster linked to a story that crunched numbers to show it should be safe). My question is: does anyone have solid experience to show that it has actually been safe for, say, 6-18 months under some well-known duty cycle (a database, a file server, an e-mail server)?
I have actual experience, with crappy flash made by a low-end manufacturer, that shows me it's not terribly reliable. It is my understanding that we've had better luck with other makers, but their parts were too expensive (but software development is free *sigh*).
There are other threads in here that make me want to cram a CF-IDE converter into my machines and try putting my journal onto a flash drive. It sounds like the performance boost and power consumption are a big win, but the fact that every byte of data gets pushed to the journal might be an issue. On a home machine, it might be worth playing with for giggles and performance testing.
Other folks I know who have tried to do things with flash have also been disappointed over the past 12-24 months, despite assurances from various experts that "it should work"... I'm looking for "I've done it, here it is, go play with it." Now, obviously MP3 players have been doing it for a while; I'm more interested in general-purpose usage of a flash drive. Those are the types of things I'm currently working on: cramming flash into a machine that runs ext2/ext3/xfs/reiserfs/jfs or some other FS ready for read/write-heavy usage.
Kirby
Re:where are your logs stored? (Score:3, Interesting)
I used to hook up the IDE->CF [microcenter.com]. But the next time I do this, I will use this instead (cheaper and does not take up a slot) [microcenter.com]. Also, absolutely do not use the cheap CF garbage; there is a lot coming out of China and the quality is horrible. If you do use one of the cheap ones and it goes bad, you will at least understand why quality costs. I used Sandisk [microcenter.com]. I bought it at Micro Center since it was close, but I would go with Newegg if ordering off the web (lots cheaper).
As to the software, it is pretty much a standard install.
Install / to the CF. Keep it SMALL. I am using kubuntu these days, so they automatically do it small. During the install, I added
I actually decided to leave the logs on the CF. They are the one thing that keep causing a disk to spin up.
I moved the data areas of apache, postgres, mysql, and parts of mythtv to the hard disk. They were located in
Squid is in a tmpfs on the system. I figure that since I reboot infrequently, it may actually help to clear it.
BTW, I have a couple of gigs of RAM in the server, and I turned off swap. All in all, my disks now spend the vast majority of their time sleeping, powered down, with the server drawing very little power. Several have commented that the drop could just be the seasonal change, but I started measuring about 1 week after the re-build; the fact that the temperature dropped so much tells you that less power is being used.
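For anyone wanting to replicate this layout, a rough /etc/fstab sketch of the arrangement described above might look like the following. The device names, cache size, and mount points here are illustrative guesses, not details from the original setup:

```
# CF card holds a small root; noatime avoids a flash write on every file read
/dev/hda1   /                  ext3    noatime,errors=remount-ro   0  1
# spinning disk carries the write-heavy data areas (apache, postgres, mysql, mythtv)
/dev/sdb1   /srv/data          ext3    noatime                     0  2
# squid cache lives in RAM and is cleared automatically on every reboot
tmpfs       /var/spool/squid   tmpfs   size=512m                   0  0
```

The key idea is simply that anything written constantly lives on the spinning disk or in RAM, while the mostly-read-only root stays on the CF.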
We're only at 1TB? (Score:3, Interesting)
Why isn't anyone building a big, slow 5.25" drive with:
- 3600 or 4200 RPM rotational speed
- low noise
- low heat
- low power consumption
The reduced speed (less wear and tear on parts) and heat should also lead to greater reliability. If a 3.5" drive can be 1 TB today, a 5.25" drive should be 1.5-2TB. A drive like this would be perfect for a home media server or HTPC, where performance is not critical (SD DVD is only 4 GB/hour; even BluRay is only 25 GB/hour--and I'm pretty happy with ripped DVDs at ~1500 kbps--less than 1 GB per hour) but low heat, low noise, and low power consumption are all desirable traits. (There's more rotating mass, but at lower speed there should be much less energy/momentum/inertia overall.) And as long as CDs and DVDs are still ~5"--and that seems to be the case (DVD, HD-DVD, BluRay, SACD)--we'll already be using properly-sized cases.
* granted, that old thing was slow as hell. Swapping out the stock 8 GB Quantum Bigfoot for a 30 GB Maxtor dropped boot times from 3 minutes to 40 seconds.
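The media-server numbers above are easy to sanity-check. A quick calculation of how many hours of video fit on 1 TB, taking 1 TB as 1,000 GB and using the poster's round figures (the 1500 kbps rip works out to about 0.675 GB/hour):

```python
# Hours of video per terabyte at the bitrates mentioned above.
# 1500 kbps -> kilobits/sec / 8 -> KB/sec, * 3600 -> KB/hour, / 1e6 -> GB/hour
rates_gb_per_hour = {
    "SD DVD": 4,
    "BluRay": 25,
    "1500 kbps rip": 1500 / 8 * 3600 / 1e6,   # ~0.675 GB/hour
}
for name, rate in rates_gb_per_hour.items():
    print(f"{name}: {1000 / rate:.0f} hours per TB")
```

So even at full DVD bitrate, a 1 TB drive holds about 250 hours of video, which is why performance matters so much less than heat and noise for this use case.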