Four SSDs Compared — OCZ, Super Talent, Mtron 206
MojoKid writes "Solid State Drive technology is set to turn the storage industry on its ear — eventually. It's just a matter of time. When you consider the intrinsic benefits of anything built on solid-state technology versus anything mechanical, it doesn't take a degree in physics to understand the obvious advantages. However, as with any new technology, things take time to mature, and the current batch of SSDs on the market does have some caveats and shortcomings, especially when it comes to write performance. This full performance review and showcase of four different Solid State Disks, two MLC-based and two SLC-based, gives a good perspective of where SSDs are currently strong and where they're not. OCZ, Mtron and Super Talent drives are tested here, but Intel's much-anticipated offering hasn't arrived on the market just yet."
Along with SSDs an optimized OS? (Score:2)
Re: (Score:3, Insightful)
I had a 286 laptop with MS-DOS in ROM.
Re: (Score:2)
I had a 286 laptop with MS-DOS in ROM.
I had a Sinclair ZX-81 with OS and BASIC language in ROM. It was the first in a series of machines I owned in which the OS and BASIC were either included in onboard ROM or came on a ROM cartridge that plugged in.
Man, I just realized how old I'm getting ...
Re:Along with SSDs an optimized OS? (Score:5, Funny)
Man, I just realized how old I'm getting ...
No you didn't. You realized that a while back but forgot.
Re: (Score:2)
My father had an HP laptop (I believe it was an 8088) with ROM carts at the bottom, behind a panel. I remember he had a cart for Lotus 123, another one for some drawing program, etc.
It had 4-5 slots... it was pretty cool.
Re: (Score:2)
I have a nametag with a BASIC interpreter in EEPROM.
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:3, Informative)
You can set up your machine this way right now if you want. Just put /home on a traditional disk and have the kernel and maybe a couple more trees of system files on an SSD. This way your SSD doesn't wear out as fast and you have super-quick read access to the kernel and settings.
If you're running something other than Linux, I'm sure there's a less transparent way of doing this. Mac OS doesn't really let you set mount points with Disk Utility, but it won't freak out if you put in your own (MacFUSE does this).
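For example, a minimal /etc/fstab sketch of that split might look like this (device names, filesystems and options here are just placeholder assumptions, not a recommendation):

# hypothetical layout: system on the SSD, user data and swap on the spinning disk
/dev/sda1   /       ext3   defaults,noatime   0   1
/dev/sdb1   /home   ext3   defaults           0   2
/dev/sdb2   none    swap   sw                 0   0

The noatime option just cuts down on incidental writes to the SSD; everything write-heavy lives on the magnetic disk.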
Re: (Score:3, Funny)
Re: (Score:2)
Ultimately I think we're going to see systems with the OS essentially in ROM on a solid state disk, with room for application installation. Data will end up being stored on a traditional disk. I sincerely hope that the developers of next-gen Windows, Linux, MacOS, and others are taking this scenario into account and building an OS that is optimized for it. I think Linux certainly has a head start.
Maybe it's just me, but putting the OS data (least performance sensitive and most easily replaced) on the SSD (most reliab
You can run linux in ram now (Score:2)
For server systems, you can build a fully functional RAM-based server image which will run in less than 200 MB of RAM.
Just add 4-128 GB of memory as required.
Course, all that goes completely out the window the instant the words "Gnome" or "Java" are mentioned. You are welcome to your rotating-metal-disk levels of performance there.
Re: (Score:2)
Ultimately I think we're going to see systems with the OS essentially in ROM on a solid state disk, with room for application installation.
There's really no need for a rom. You can just make a read-only partition for files of this nature. You also shouldn't have to worry about this to begin with if you've properly partitioned your OS during the install.
Oh wait, are you talking about Windows?
Deconstructing solid state. (Score:2, Insightful)
You know, we keep talking about solid state as being better because there are no moving parts and less wear, but chips and circuits have plenty of moving electrons and go through a lot of thermal stress. I know that for a lot of applications a circuit can seem to be more reliable, but do we really have sufficient experience to make such a sweeping statement that solid state is in fact more -reliable- than a mechanical system? There are some steam trains out there that are running and are over 100 years o
Re: (Score:3, Insightful)
Re: (Score:2)
Re:Deconstructing solid state. (Score:5, Interesting)
Well, let's see:
- Magnetic hard drive = solid state (ICs, buffers, etc) + magnetic platter + mechanical (rotating platter(s) + moving heads)
- SSD = solid state
As soon as the price per GB of SSDs is at parity with the magnetic drives, I'm switching. It probably puts out less heat and requires less power, meaning quieter drives too.
Re:Deconstructing solid state. (Score:5, Insightful)
As soon as the price per GB of SSDs is at parity with the magnetic drives, I'm switching.
Actual price parity will likely only occur once the older technology becomes a rarity, and I suspect that for the next decade, magnetic drives will continue to be the cheapest mass storage out there. That being said, for me, I'll buy an SSD when I can get a decently rated 120 gig drive for less than $150.
Re: (Score:2)
Re: (Score:2)
You're going to use more power. Yes, SSDs use less power, that much is obvious. But what you're forgetting is that the CPU does a lot of idling while waiting for hard drives to give it the data it needs to process... With an SSD, the data comes faster and the CPU spends less time idling and more time working, and in turn ends up using more power.
Seriously, go replace your laptop's HDD with an SSD and watch your battery life actually go down. It's because your CPU
Re: (Score:2)
Yes, but if your CPU loads data faster you'll be able to finish your work sooner which means you'll be able to turn the computer off sooner.
For the reasons above I conclude that SSD are directly related to the number of pirates in the world, and thus have a huge effect on global warming.
Re: (Score:2)
I've been debating on which one is more important to me though. (/sarcasm)
Re: (Score:2)
Re: (Score:2)
Not a joke. It's just a pitfall when benchmarking.
Let's say you benchmark how much power is expended while indexing documents for 10 minutes. This is a bad benchmark.
First you test with a hard disk, which spends a large amount of time seeking, during which the CPU sleeps waiting for data to arrive.
Then you test with a SSD, which has near instant seeking, leaving the CPU very little time to sleep.
Then you look at the results: The HDD laptop consumed 100 units of power during 10 minutes. The SSD laptop cons
This is Slashdot. And you forgot Microsoft? (Score:2)
But Microsoft is, in a way, just symbolic of software developers in general. We've had growing SSDs for quite some time now (think thumb drives, CF, RAM disks, and others).
The problem with this is that as RAM becomes cheaper, the software developers deliberately bloat their software, and thus make SSDs again impractical for the latest software. Since government purchases and requirements drive everyone to the new level, SSDs have remained impractic
Re: (Score:2)
That's because you don't have one of those new perpendicular recording hard drives. Every time I load or save a file, I hear disco music [hitachigst.com].
Re:Deconstructing solid state. (Score:5, Funny)
There are some steam trains out there that are running and are over 100 years old... do we really think that a CPU or a RAM or a motherboard can live that long?
I agree completely. I, too, am dismayed at the lack of development in steam-powered computing.
Re: (Score:2)
Yeah ever since we lost Babbage that tech seems to have stalled.
Re:Deconstructing solid state. (Score:4, Interesting)
Re: (Score:2)
Makes you wonder why they're permitted to claim 1,000,000+ h MTBF in their literature when they don't give any assurance. Seems kind of like the sort of scummy propaganda that ought to be illegal. Saturate the media with consistent but unsubstantiated claims, and you make bullshit into
Re: (Score:3, Informative)
I noticed they claim 1,000,000+ h MTBF, but they only warranty for less than 10,000 h (or 20,000 in some cases). Makes you wonder why they have so little faith in their product (or in their own reliability estimate).
You need not wonder. The disks have a limited lifetime - like the brakes on your car, or the tyres, they will wear out eventually, and then you have to replace them. Nothing you can do about that. But that is not the same as "failures". A "failure" happens when your tyre blows after only 10,000 miles of normal use. Say a tyre is worn out after 800 hours of normal use, and one in a thousand tyres has a failure before it is worn out; then you have 800,000 hours MTBF but only 800 hours of lifetime.
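Put as arithmetic (a tiny sketch using the made-up tyre numbers above, nothing vendor-specific):

# Hypothetical tyre numbers from above: lifetime and MTBF measure different things.
fleet_size = 1000        # tyres observed over their normal service life
lifetime_hours = 800     # each tyre simply wears out after this long
failures = 1             # one in a thousand fails *before* wearing out
mtbf = fleet_size * lifetime_hours / failures   # 800,000 hours MTBF
print(mtbf, lifetime_hours)                     # huge MTBF, short lifetime

Same idea for drives: the MTBF figure describes how rarely units die before their rated service life, not how long that service life is.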
Re: (Score:2)
I noticed they claim 1,000,000+ h MTBF, but they only warranty for less than 10,000 h (or 20,000 in some cases).
Which, if you do the math, is well over a year of 24/7 operation; 20,000 is almost 2.5 years. Last I checked you didn't normally get more than a three-year warranty (in many cases less) on consumer HDDs either, so draw whatever conclusion you like, but I don't think they're very different from other sellers.
Re: (Score:2)
1,000,000 hours is over 114 years. Your estate would be making those warranty claims.
Tony.
Re: (Score:2)
So the number is presumable "sim
Re: (Score:2)
I noticed they claim 1,000,000+ h MTBF, but they only warranty for less than 10,000 h (or 20,000 in some cases). Makes you wonder why they have so little faith in their product (or in their own reliability estimate).
Because those are completely different things. One is "this drive has an expected lifespan of 10,000 hours", the other is "if you are using 1,000,000 drives that haven't reached the end of their expected lifespan, you can expect approximately one to fail every hour" (a way to measure how likely a drive is to fail before the end of its expected lifespan). Remember the "bathtub curve"? MTBF is how high the low middle part is, lifespan is where the rise at the end is. Of course, this assumes that the "bathtub
Re:Deconstructing solid state. (Score:5, Interesting)
Re: (Score:2)
There are some steam trains out there that are running and are over 100 years old...
I hope you also realize how much continual maintenance the average train requires, or any large piece of machinery for that matter. I should know, I am a maintenance technician for a plastics factory. Even our most dependable and reliable machines require at least annual maintenance. Tear down and check for part wear, replace or weld up worn parts, change belts, fluids, etc.
If a hard disk could be maintained (replace worn motor, lubricate bearings, etc) then I would agree with you. But they are disposab
Re:Deconstructing solid state. (Score:4, Insightful)
Are those steam trains really running with 100 year old parts?
Or do you regularly go in and maintain the various components of your hard drives?
Re: (Score:2)
Re: (Score:2)
But do you think that the steam trains are still working with all-original parts? Or do you think that those steam trains have been disassembled and had replacement parts swapped in repeatedly over the course of that 100 years? If you were able to treat hard drives the same way, then you could expect longer lives out of them.
And we don't expect computer components to last that long (at least not in production use) much for the same reason none of those steam trains are in regular production use (novelty use is
Re: (Score:2)
There are some steam trains out there that are running and are over 100 years old... do we really think that a CPU or a RAM or a motherboard can live that long?
I assume that these steam engines have been taken apart and rebuilt so many times that none of the original parts are still present.
A train is more analogous to a computer. Swap out the motherboard when it fails, swap out a drive in the RAID1 array when it fails, etc. And you *can* keep a computer running that long... if you can find the parts.
Oh For God's Sake (Score:5, Insightful)
Yet another SSD review by clueless PC dweebs.
The whole point of SSDs is that they have no moving parts, so they don't have the seek time and rotational latency of spinning disks. That translates into faster random access. As the review says:
So what do they measure? Sequential transfer rates.
Gah.
Re: (Score:2, Informative)
The whole point of SSDs is that they have no moving parts, so they don't have the seek time and rotational latency of spinning disks.
Indeed, but it's nice to have some hard numbers to back that claim up. And it's nice to see HOW much faster they are versus a traditional drive.
So what do they measure? Sequential transfer rates.
Actually they measured the drives' performance against each other. They show us that not all SSDs are created equal, and they tell us which SSDs they think are worth the money.
What they
Re: (Score:2)
Makes sense to me... We already know the random access on an SSD blows standard HDDs out of the water (three of the drives are 1 ms access).
The main things people are looking at now are price and performance comparisons with HDDs.
One of the places that SSDs have been shown to be lackluster (and also perform at various levels) is in sequential transfer rates, so it makes sense to be focusing on that.
As it stands... When the prices drop a bit more, I'm seriously considering RAIDing a couple of these
Re: (Score:2)
If you look at those results, you'll find they only ran sequential tests. They talk about random access, but don't run any random access benchmarks.
Disagree (Score:2)
>> When you consider the intrinsic benefits of anything built on solid-state technology versus anything mechanical
As far as I can see there really aren't any, at least for conventional desktop PC use. The most obvious one would be performance, except surprisingly, when compared with the fastest of today's mechanical drives, there's not much if any performance advantage. In some cases SSDs are actually worse.
There's still a lot of other disadvantages to SSDs, like a more limited number of write operations,
Re:Disagree (Score:5, Insightful)
Someone wake me up when there's a 1TB SSD for $250 that can do unlimited rewrite ops.
Um, even mechanical hard drives cannot promise unlimited rewrite ops. Maybe you want to lower your sights just a tad?
Re: (Score:2)
And I want a pony. The point is that even spinning disc drives don't offer unlimited writes. Modern drives don't crap out constantly only because they "cheat" and automatically remap bad sectors (and have tens of thousands of spare sectors).
Re: (Score:2)
At least here in the USA, his desire is already fulfilled, just the manufacturers haven't caught on yet with their box labeling. You see, here in the USA, the maximum device lifetime of any device is 7 years. No manufacturer is expected by law to make any device that lasts longer than that, and they are legally allowed to call any device that is expected to last 7 years of normal service "lifetime" and other BS keywords, which could easily encompass "infinite writes". Because you see, these SSD drives can
Re: (Score:2)
Wow, do you really think the industry isn't going to push to get around technological obstacles?
Amazing how all those video cards managed to sell since the 3dfx, and all those slower processors prior to technology getting to the level it is today. I guess the rest of us were "lower sighted" for using and enjoying the technology of the day.
The way I see it, a portion of what you spend on any new equipment goes back into R&D, thus often resulting in small improvements over time.
The industry would simply c
Re:Disagree (Score:5, Insightful)
Someone wake me up when there's a 1TB SSD for $250 that can do unlimited rewrite ops.
Let me guess, you want a car's drivetrain to promise "unlimited mileage" and your home's A/C refrigerant to promise "unlimited compression/decompression cycles".
I hate to be the one to break it to you, but words like "unlimited" are marketing words only. EVERYTHING is limited and finite. In this case, consumer protection laws state that 7 years of normal usage is long enough to be considered "lifetime" or "infinite" or "unlimited" and all sorts of other key words and tricky phrases.
Those mechanical drives you are comparing SSDs to? They don't offer "unlimited rewrites" except in the marketing sense. 7 years of normal usage. In that same sense, SSDs are already offering unlimited rewrites as they have enough rewrite cycles to last 7 years of normal usage. Just like the mechanical drives.
Re: (Score:2, Funny)
Someone wake me up when there's a 1TB SSD for $250 that can do unlimited rewrite ops.
Ah, finally! We've waited for you to go away forever, and here you offer it to us on a silver platter!
Re: (Score:3, Funny)
It's an SSD, there is no platter!
Article without 60 pages of ads (Score:5, Informative)
http://www.hothardware.com/printarticle.aspx?articleid=1211 [hothardware.com]
Re: (Score:2)
Actually not too far off (Score:2)
I was actually surprised to see the capacities and prices. As someone who's never had a hard drive bigger than 80GB (and even then only used half of it), the capacities of SSDs are starting to look pretty decent. The prices are still an order of magnitude away from what they'd need to be to get me to switch, but hopefully that's only a year or two away?
The big thing for me is the durability to shock. I don't own a desktop; I'm a purely notebook guy. Just recently I toasted a (mechanical) hard drive by drop
Re:Actually not too far off-ORDERS OF MAGNITUDE (Score:2)
Are you speaking of a binary order of magnitude, or a decimal order of magnitude?
And human years, dog years, or Internet years?
I don't understand the problem (Score:4, Interesting)
I am not an expert by any stretch, but it seems to me that write speed issues, at least when it comes to relatively small amounts of writing, could easily be mitigated with a very large on-board RAM buffer controlled by the drive... and by very large, I mean like 1GB at least. And to keep it stable, a capacitor should be enough to keep it alive when power drops, to commit any changes in the buffer to the SSD storage. Maybe what I speak of is impossible or ridiculously expensive, but I don't think either is the case.
Re: (Score:2)
Re:I don't understand the problem (Score:4, Insightful)
You're not the first person to think of such a thing. Problem is: it's pretty risky.
Most high end RAID controllers do this already, if you set them to write-back. But they also have big batteries attached to them, and even then, you have something like 24 hours to power back on or total system corruption can occur. This means that such systems must be actively managed.
Can you imagine what a hassle this would be for the HD makers, particularly in the notebook use case? It would be a never ending chain of angry users blaming the HD maker for their data loss...
I think the right place to do this is way up in the OS, with a file system that is aware of the issues of small page commits to these devices, and therefore doing some kind of page-coalescence thing. Sun's ZFS can do this. Now we just need something over in consumer space.
C//
Re: (Score:2)
Write staging is virtually irrelevant with a modern filesystem. Any log-based, undo-based, or recursive-update (ZFS) based filesystem has virtually no synchronous writing constraints. HAMMER for example can issue a ridiculous amount of write I/O asynchronously even when it is flushing or fsync()ing.
So there are two things going on: First, the OS is caching the write data and disconnecting it from the application. Second, a modern filesystem is further disconnecting writes by reducing write-write dependa
Caveats indeed: limited lifespan (Score:2)
I have magnetic media drives that are 15 years old and still work fine. Would I be able to say the same about any SSD "drive"? I doubt it. Unlike magnetic media, flash RAM technology is known to have a finite, limited, and unpredictable lifespan; it will only tolerate so many rewrites and then begin to fail, and you'll never know exactly when that will be. That sad day will come sooner than 15 years down the road, and you'll have paid a premium for it. Same for phase-change optical media, really. That
Re: (Score:2)
Actually that isn't correct. Magnetic media has thermal bleeding and magnetic cells will deteriorate over time. However, many of the issues surrounding magnetic media are directly related to reading and writing, versus just sitting on a shelf, and if you really needed the data you would still be able to recover it many years down the line even after the bearing lubrication turned to mush.
With flash you are dead in the water once a cell dies. There is no way to recover the data. And flash cells will leak
SSDs are useful, but not for write performance (Score:2)
The usefulness of SSDs is undeniable in small devices, but there isn't much of a point in actually using one as a high-performance write-heavy device, given the limited life of its flash cells. Any heavy write use will quickly wear the device out, and this has already pretty much been proven by people trying to use them as generic HD replacements on Microsoft systems with swap enabled.
The amount of storage is also very tiny compared to a modern hard drive in the same form factor. Combine those two together
Re: (Score:2)
You are not very informed. The Mtron device tested has a write endurance of 140 years at 50 GB/day of writing. Yes, you will probably claim that you write more than that (unlikely), but you could write terabytes per day on average and still not have a problem.
It will be less than a year before every top of the line system has SSD for performance reasons. The only downside is the price, but that has never bothered the people buying top of the line.
Re: (Score:2)
Watch out for MFG figures on writes.
140 years at 50 GB/day is for linear writes. If you are doing random writes you have to scale this based on your average write size.
If your average random write size is 12K, then the amount of writes will be 12KB/2MB of the above amount or about 0.58% as much data. This is 170x worse.
Now this is an SLC drive, so you are probably still OK.
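A back-of-the-envelope version of that scaling (the erase block size and the 140-year linear figure are taken from the post above as givens, and this assumes the worst case of no write coalescing by the controller):

# Worst-case write amplification: each small random write burns a whole erase block.
linear_endurance_years = 140               # vendor figure at 50 GB/day of linear writes
erase_block_bytes = 2 * 1024**2            # 2 MB erase block
avg_random_write_bytes = 12 * 1024         # 12 KB average random write
efficiency = avg_random_write_bytes / erase_block_bytes   # ~0.59% useful data per erase, ~170x worse
print(round(efficiency * 100, 2), round(linear_endurance_years * efficiency, 2))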
Re: (Score:2)
Don't take manufacturer claims at face value, they never tell the whole story. 50 GBytes/day worth of block I/O operations is not the same as adding 50 GBytes/day in new files or data. It isn't even close.
A typical consumer machine running typical office software, or a game, or a typical laptop being used by a typical consumer might get in under the limit and such a claim might be reasonable in that context. Turn on paging to swap, though, and do any sort of moderate workload and you will blow out that 50
Re: (Score:2)
All those laptops sold today with SSD do have swap enabled. And they do not seem to fail within weeks.
The claim that SSDs cannot tolerate writes or swap is no longer true. End of story.
Besides, my machine has 4 GB of swap. The OS has chosen to use 0 bytes of it. Why? Because before you spend a fortune on a SSD drive, you will buy enough RAM that you do not need to swap. Swap is SLOW if you did not notice.
Not a Benefit or Obvious Advantage (Score:2)
I don't find a benefit or obvious advantage in a device that requires wear-leveling to keep from wearing itself out. The fact that it degrades its storage capacity gracefully instead of all at once doesn't offset that swap files can really work over mass storage devices and the first bad sectors have been known to start sh
Re: (Score:3, Informative)
I don't find a benefit or obvious advantage in a device that requires wear-leveling to keep from wearing itself out. The fact that it degrades its storage capacity gracefully instead of all at once doesn't offset that swap files can really work over mass storage devices and the first bad sectors have been known to start showing up after only weeks of use in some cases.
Magnetic media does this too, just not as intelligently. Magnetic media waits until a sector is nearing failure, then reads the data (hopefully) and moves it to a new sector.
You can query your magnetic drive to get a list of bad sectors. The list grows over time.
I just bought an OCZ drive... now I'm selling it (Score:5, Interesting)
I aimed to slash my build time for complex scenarios.
I thought the Compile -> Jar -> War -> Deploy -> Expand -> Launch cycle would be greatly sped up as the files would be accessed quickly.
I hoped, effectively, for a much more targeted and capacious file cache/RAM disk.
Unfortunately, the hype does not turn out to be true.
The enormous time cost of writing files smaller than 8MB (!) [see footnotes] completely counters any read speed increase. Building a project means making thousands of 2KiB files: one of the most pathological cases for these drives.
So is it slow? No, it's just as quick as a sluggish 7K250, but then again I just coughed up £179 for the privilege of the same speed.
So I'm ebaying mine to someone who wants it for a light and quiet laptop, perfect.
-----------------
Some "Terrible small write performance" links I found during research:
* http://www.xbitlabs.com/articles/storage/display/ssd-iram_6.html [xbitlabs.com]
* http://www.alternativerecursion.info/?p=106 [alternativ...rsion.info]
Re:I just bought an OCZ drive... now I'm selling i (Score:2)
I'm with ya. I like the idea of SSDs in specific applications like portables and other quiet, low-power rigs, but for general purpose computing they're kinda pointless.
It's still far better to throw a ton of RAM into your PC and let the disk cache work its magic.
Re:I just bought an OCZ drive... now I'm selling i (Score:2)
Compile -> Jar -> War -> Deploy -> Expand -> Launch
Sun's Matryoshka doll [wikipedia.org] approach to Java has become a bit much.
Re: (Score:2)
This is "fixable".
If you want to test a beta of our MFT software, drop an email to sales@easyco.com or read up at http://easyco.com/ [easyco.com].
ps: my apologies for the blatant advert. I think my karma can stand it. At least I was short ;)
Re: (Score:2)
It is possible you just selected the wrong drive. I have similar workload to what you describe, and I have very much considered the Mtron PRO devices. They can supposedly do random writes in small blocks efficiently.
What would be even awesomer (Score:2)
Recycling the suckers (Score:2)
Since there are a lot fewer moving parts and no thick metal parts like hard drives have, are SSDs more green-friendly in the manufacturing phase and at end-of-life to recycle? Since there's been a lot of consideration in the industry about environmental friendliness, I thought this might be something that should be brought up.
Re: (Score:2, Informative)
Re:1+1+1 != 4 (Score:5, Funny)
RTFM? Shit... comments, articles, now I have to read a damn manual too. Jesus Slashdot is getting harder and harder these days.
1+1+1 = 4 if 1, 1, 1, and 4 are rounded numbers (Score:2)
I did RTFA; at least, the first several screens. Not a word about price. If you've read the whole thing, what do these suckers cost? I want to build a fanless, noise-free media center but I ain't Bill Gates (good thing too, because I'll base it on Linux).
Re:1+1+1 = 4 if 1, 1, 1, and 4 are rounded numbers (Score:4, Informative)
Re: (Score:2)
Ouch! Thanks for the info (mods, please mod him "informative"). I'll have to wait until the price comes down. Seems that since they're solid state, eventually they'll be cheaper than magnetic drives.
Re: (Score:2)
Flash is a block device; you need to copy it to 'real memory' for the processor to use it.
There have been things like MRAM in labs for a while now that would give you what you describe.
But flash won't.
Re: (Score:2)
That's why a 32GB SLC Flash + a big HDD is more interesting than an expensive, not very 'slow', MLC Flash alone.
For a desktop user with 32GB of Flash, except in exceptional cases, all your data is already cached on the Flash, except movies or MP3s, but why would you care about having a 0.1ms access time instead of a 10ms access time for a movie?
Re: (Score:2)
"unless you have a critical need to access a lot of data at high speed while driving a truck over a small post-apocalyptic wasteland."
The Detroit Police Department will be buying in bulk.
Re: (Score:2, Informative)
These guys are idiots. A few points:
- They 'cheated' on ATTO, only configuring it to start at 8k. Last I checked, default sector size is 512b. Regular day-to-day apps, such as Outlook, perform random sector-level access to the PST when downloading mail.
- If you're going to do an SSD roundup, how about at least grabbing a few drives off of the SSD top 10. Specifically, Memoright (#1 on that list) makes an SLC drive that competes with the other SLC drives on price, yet outperforms them all: http://www.stor [storagesearch.com]
Re: (Score:2)
Including 4K numbers makes sense. If you are working inside of a filesystem (NTFS, FAT, ext3, ... and most others), the smallest IO that actually happens is 4K and these are aligned on 4K boundaries. So 512 byte numbers don't actually mean anything.
Re:1+1+1 != 4 (Score:4, Informative)
I know, I RTA...
Re:1+1+1 != 4 (Score:5, Funny)
By what WITCHCRAFT wouldst thou know the article contents?
10 writes per second for 18 years (Score:3, Informative)
For instance, MLC NAND memory has between 1,000 and 10,000 write cycles per cell, SLC memory about 100,000. Some applications will be more write intensive, so they'll wear out the memory faster.
That's why modern CF, SD, and SSD controllers spread writes to a single logical sector over multiple physical sectors [wikipedia.org]. They also dedicate 5 to 7 percent of their space to spare sectors in case one wears out; this accounts for the difference between a GB and a GiB. For example, a half-full 16 GB SSD with blocks of 128 KiB has over 60,000 free blocks. If your app makes 864,000 writes per day (10 writes per second 24/7), then the wear leveling circuitry would go through the entire free memory just under 15 tim
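To put numbers on that claim (a rough sketch using the post's own figures, and assuming the SLC-class ~100,000-cycle endurance mentioned above plus perfect wear leveling):

# Half-full 16 GB drive, 128 KiB blocks, 10 small writes per second, 24/7.
free_bytes = 8 * 1024**3                 # 8 GiB of free space
block_bytes = 128 * 1024                 # 128 KiB wear-leveling block
free_blocks = free_bytes // block_bytes  # 65,536 free blocks
writes_per_day = 10 * 60 * 60 * 24       # 864,000 writes/day
passes_per_day = writes_per_day / free_blocks   # ~13 passes over the free area per day
years = 100_000 / passes_per_day / 365          # ~20 years, the same ballpark as the title
print(free_blocks, round(passes_per_day, 1), round(years, 1))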
Re: (Score:2)
Your math is a bit off. Wear leveling lets you use the entire drive. If it is done correctly (and most SSDs are at least very close), then you can guess how many total "random writes" you can do before the drive wears out.
The first thing to guess is the size of the erase block. With most current drives this is 1MB or 2MB. So a 16GB SLC drive has 16,000 / 2 * 100,000 = 800,000,000 total lifetime available writes. At 86,400 writes/day this is 9,259 days or about 25 years.
The same numbers for MLC are 1/10
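For anyone who wants to check the arithmetic, the same estimate in a few lines (the 2 MB erase block and 100,000-cycle figures are the assumptions from the post, not measured values):

# 16 GB SLC drive, 2 MB erase blocks, ideal wear leveling, one block write per second.
drive_mb = 16_000
erase_block_mb = 2
cycles_per_block = 100_000
lifetime_block_writes = drive_mb // erase_block_mb * cycles_per_block  # 800,000,000
writes_per_day = 86_400
print(lifetime_block_writes / writes_per_day / 365)  # ~25 years; MLC at 10,000 cycles is ~2.5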
Re: (Score:3, Informative)
SSDs will reach $/GB parity with enterprise disks within 2 years. They already beat them on $/IOPS, and will soon on $/MB/s.
A reasonable projection for SATA is 6-7 years. However, if you know technology, that's like talking about what's going to happen in a thousand years. One just cannot know. The cross-industry pressure is definitely going to incentivize the spinning media makers to work on areal density.
In spite of that, I feel pretty sure that SSD's are going to wipe out Tier 1 entirely. Tier 1 is an IO
Re: (Score:2)
I would agree that SSDs can take a large chunk out of the high-performance HD market, but only for read-heavy environments. I can already see it solving issues related to hybrid static and dynamic web content serving, for example. But in write-heavy environments the IOPS capabilities of flash becomes irrelevant because any write-heavy environment will also quickly wear the flash device out. That makes its usefulness questionable as a staging medium for things like database commits. It doesn't take much b
Re: (Score:2)
Flash $/GB is already lower than 15K 36GB 2.5" drives. $/IOPS is silly compared to 15K drives. Power and physical space are also wins on the SSD side, so this leaves only wear.
Our company ships servers with SSDs and software that "linearizes" the writes to the drives. This fixes the two big problems with Flash SSDs. First, random writes are no different than linear writes, so random writes are fast. Often > 20,000 4K write IOPS. Second, the wear performance of the drive improves dramatically.
When y
Re: (Score:2)
I would agree that SSDs can take a large chunk out of the high-performance HD market, but only for read-heavy environments. ...But in write-heavy environments the IOPS capabilities of flash becomes irrelevant because any write-heavy environment will also quickly wear the flash device out.
This has been sufficiently debunked. See comment 24888609 [slashdot.org]. Even in the enterprise storage arena. Those guys will probably have fewer drives to replace, since magnetic drives don't have built in wear leveling.
And I'm looking forward to newer and cooler things that ZFS will do with pools of both types of media. It already uses flash drives as a read cache for magnetic media, what else will it do?
Re: (Score:2)
For a lot of people a 100GB drive already provides more storage than they will ever need. Drives these days are simply big enough for most people, so it doesn't really matter whether they have 9900GB of free unused storage or just 900GB, since both of them will be 'big enough', and what matters when you already have 'big enough' is stuff like reliability, speed, noise, power use and such. I agree on the point that there will still be a gap in price/capacity, I just doubt that it will matter much in what peop
Re: (Score:2)
I don't think this is true any more. My parents already use several hundred gigabytes and the number keeps growing. The reason is that your typical consumer these days is using the storage in the same manner we used to use archival tapes... their entire lives are stored on their hard drive now. Not only are their entire lives stored, but they need backups as well (or in the case of Apple products, simply replicating the entire data set on multiple boxes).
In modern day, that means all the pictures you've
Re: (Score:3, Insightful)
Speaking of handicaps and stalls, isn't that exactly what's going to happen to many of these 1st- and 2nd-generation SSD drives when they reach their maximum # of write cycles and suddenly fail to be writable anymore?
Just like SATA and SCSI drives, it will just build up bad sectors as the system tries to write information, resulting in a "shrinking" drive.
It is actually much less likely this type of storage device will have a sudden, catastrophic failure, whereas it only takes one moving part fouling in a mechanical drive to destroy everything it contained.
Re: (Score:3, Interesting)
Latency, Power, Journaling (Score:3, Informative)
The big win with SSDs is low latency read access - you don't have to wait for rotation or seek time to start fetching your data. That's really useful for many kinds of data applications, speeding up transactions in databases, etc. If you RTFA, and look at some of the benchmarks like Windows Startup, they totally smoke rotating disks - and if you're trying to run servers in a datacenter, you've got less downtime if you ever have to reboot the things.
They also consume less power, which is good for some kind