Samsung 256GB SSD is World's Fastest
i4u submitted one of many holiday weekend slow news day stories, which starts: "Samsung Electronics announced today the world's fastest, 2.5", 256GB multi-level cell (MLC) based solid state drive (SSD) using a SATA II interface.
Performance data of the new Samsung 256GB SSD features a sequential read speed of 200 megabytes per second (MB/s) and sequential write speed of 160MB/s.
The Samsung MLC-based 2.5-inch 256GB SSD is about 2.4 times faster than a typical HDD. Furthermore, the new 256GB SSD is only 9.5 millimeters (mm) thick, and measures 100.3x69.85mm. Samsung is expected to begin mass producing the 2.5-inch, 256GB SSD by year end, with customer samples available in September. A 256GB capacity is getting large enough to replace hard drives for good; now the prices just need to come down further for large-capacity SSDs."
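The announcement's throughput figures imply some easy arithmetic. A quick sketch of what they mean for a full-drive sequential read (the ~83 MB/s HDD figure here is just 200 divided by the quoted 2.4x, not a measured number):

```python
# Rough back-of-the-envelope numbers implied by the press release.
# The HDD throughput is derived from Samsung's own "2.4x faster"
# claim, not from any real drive benchmark.

CAPACITY_MB = 256_000                  # 256 GB, decimal as drive makers count
SSD_READ_MBPS = 200                    # sequential read from the announcement
HDD_READ_MBPS = SSD_READ_MBPS / 2.4    # implied "typical HDD" speed, ~83 MB/s

ssd_minutes = CAPACITY_MB / SSD_READ_MBPS / 60
hdd_minutes = CAPACITY_MB / HDD_READ_MBPS / 60

print(f"Full sequential read: SSD ~{ssd_minutes:.0f} min, HDD ~{hdd_minutes:.0f} min")
# Roughly 21 minutes for the SSD versus 51 for the implied HDD.
```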
Summary (Score:5, Insightful)
42 zillion dollars? (Score:4, Insightful)
When this SSD is cheap enough that I can buy 3-4 of them and stripe that into a bus-raping powerhouse, for less than a mortgage payment, then we'll talk.
Re:42 zillion dollars? (Score:5, Insightful)
Also, it doesn't help to have cheap 32GB SSDs when nobody buys them and you can't really launch into mass production because you are stuck with a niche market. To drive down the price you need to be able to produce them en masse, and in order to do that you need to catch up with (or outstrip) existing technology.
Technology: Still new! Still Improving! Surprised? (Score:4, Insightful)
Solid State Drives for computers? They aren't really out of beta!
Re:Large enough? No way. (Score:5, Insightful)
I'd like to subscribe to your reality if it has Terabyte-sized 2.5 inch drives. Where do I sign up?
Re:Large enough? No way. (Score:5, Insightful)
The largest widely available 7200rpm notebook drive is currently only 200GB. The majority of notebooks ship with 200GB of HD space or less.
Re:Technology: Still new! Still Improving! Surprised? (Score:5, Insightful)
Re:Large enough? No way. (Score:5, Insightful)
I don't think SSD will make an impact in desktops anytime soon, but if I can put an SSD in my notebook and gain a little speed, some battery life, and better shock resistance without giving up any serious capacity (heck, my 2-month-old MacBook Pro has a 250GB HDD in it right now), depending on the price differential I'll probably be all over it.
Also worth thinking about (though it's not in the submitter's link) - I read a couple of releases on this drive yesterday, and though they aren't giving production prices yet, they claim that multi-level cells will make it cheaper than the older models. Between that and the natural pace of price cuts, this drive may reach competitive HD pricing levels sooner than we expect. If I can get a 256GB SSD at a 25% price premium over an HDD of the same size (like you suggest), I think it would be pretty much a no-brainer. That 250GB HDD is only about $150 or so - maybe even less.
256gigs is a lot (Score:5, Insightful)
Re:Large enough? No way. (Score:4, Insightful)
Re:Random write ops? (Score:5, Insightful)
Re:Large enough? No way. (Score:3, Insightful)
Re:Seems like the complexity is lower (Score:2, Insightful)
but just like CDs are cheaper to produce than cassettes, that doesn't mean the price will ever come down.
Re:Large enough? No way. (Score:5, Insightful)
SSDs and spinning disks can still co-exist - in a year or two you will be able to run your OS and programs on a 100GB-200GB SSD and go buy a 2TB disk or 5TB array to store your data on that is less performance critical.
Re:MLC, not SLC. (Score:1, Insightful)
Re:Seems like the complexity is lower (Score:2, Insightful)
Re:Did we not already have? (Score:5, Insightful)
Right?
And if hard disk storage had ever been that expensive, it would have meant the abandonment of the hard disk technology forever.
Right?
Is there a good reason why.... (Score:2, Insightful)
Apples to Celery (Score:4, Insightful)
However, how does an oligopoly selling copyrighted content compare to a commodity market? Basic economics tells you they don't compare, and you can count on one of two things happening. A) SSD prices fall in line with hard drives. Or B) hard drive capacity moves beyond the needs of most consumers and SSD takes up that niche while being only marginally more expensive per GB than hard drives.
Re:Large enough? No way. (Score:5, Insightful)
We are far past the point where the average consumer cares very much about capacity. What do you think they are going to do with 2 terabytes? Unless you are talking about someone who is frequently downloading movies and the like, I don't see how they would use that capacity. OK, there are probably a handful of people who are doing their own hi-def video editing or processing the output of large sensor arrays, but in what world do you define these guys as "most consumers?"
The reality is SSD doesn't have to come anywhere near the price of hard drives. It just needs to provide enough capacity (256-512 GB today) at a reasonable price. If you tell a consumer they can get a regular old hard drive, or pay 10% more for a SSD that doesn't fail when dropped and runs way faster, a lot of regular consumers will pony up for that.
What kind of filesystem? (Score:1, Insightful)
Is this raw access, or over a filesystem? If it's the former, you have a benchmark which doesn't mean much in the real world. If the latter, which filesystem was used?
Choosing the wrong filesystem type will indeed get you non-optimal performance.
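The raw-vs-filesystem distinction the parent raises is easy to see in a benchmark. A minimal sketch of a sequential-read measurement through the filesystem, in Python for illustration; pointing it at a block device node (e.g. `/dev/sda`, with root privileges) instead of a regular file gives the "raw" number. The file name and sizes here are arbitrary examples:

```python
# Minimal sequential-read benchmark through the filesystem. Note that
# reading a file you just wrote mostly measures the page cache; for
# honest device numbers you'd drop caches or use O_DIRECT.

import os
import time

def seq_read_mbps(path, block_size=1 << 20):
    """Return sequential read throughput in MB/s for the given path."""
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 1e6

# Example: create a 16 MB test file and measure it.
with open("testfile.bin", "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))
print(f"{seq_read_mbps('testfile.bin'):.0f} MB/s")
os.remove("testfile.bin")
```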
Re:Random read ops? (Score:3, Insightful)
Re:BAARF (Score:4, Insightful)
RAID6 is a far better option than RAID5. At least it makes it less likely that a double-drive failure will take out the entire array.
OTOH, the failure mode of both RAID5 and RAID6 leaves a lot to be desired. Rebuild time increases linearly as you add more disks to the array, so a 10+ disk RAID5/RAID6 array can have huge rebuild times, leaving you vulnerable for a lot longer (half a day or more to rebuild, or at least a few hours).
Personally, my preference is the more conservative RAID10 approach. Rebuild times are based on the size of an individual disk in the array (not the total array size), which means your vulnerability window is a lot smaller. And depending on luck, you can survive a multi-disk failure. Rebuild times are typically under 2 hours for arrays that are based on 300-500GB drives.
(My preference is to have 1 spare disk for every 6-8 drives in the array. So a 12 disk RAID10 array would probably be RAID10 over 10 disks with the other two as spares.)
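The rebuild-window argument above can be made concrete with a toy model. Nothing here is measured; the 60 MB/s rebuild rate is an assumed figure for drives of that era, and real rebuild times also depend on controller load and concurrent I/O:

```python
# Toy model of RAID rebuild windows. Assumes rebuild is limited by
# sequential throughput, that RAID5/6 must read every surviving disk,
# and that RAID10 only copies one mirror partner.

def rebuild_hours(disk_gb, n_disks, level, mbps=60):
    """Rough rebuild time in hours; purely illustrative."""
    if level in ("raid5", "raid6"):
        # parity rebuild touches the full capacity of every remaining disk
        data_gb = disk_gb * (n_disks - 1)
    elif level == "raid10":
        # mirror rebuild copies a single disk regardless of array size
        data_gb = disk_gb
    else:
        raise ValueError(f"unknown level: {level}")
    return data_gb * 1000 / mbps / 3600

print(f"10x500GB RAID5:  ~{rebuild_hours(500, 10, 'raid5'):.1f} h")
print(f"10x500GB RAID10: ~{rebuild_hours(500, 10, 'raid10'):.1f} h")
```

Under these assumptions the RAID10 rebuild stays constant as the array grows, while the RAID5/6 window scales with disk count, which is the parent's point.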
Smart controllers (Score:3, Insightful)
(In practice NTFS usually uses 4KB blocks, so you'd optimize for that, but the argument stands...)
This would also help a lot with wear levelling, etc., as you'd write the entire disk in a round-robin fashion, remapping blocks as you go.
The controller would need static RAM to hold the remapping table but that's no big deal these days.
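A minimal sketch of the round-robin remapping idea described above, in Python for illustration. Real flash translation layers also track free lists, garbage collection, and erase counts, none of which is modeled here:

```python
# Toy round-robin block remapper: each logical write lands on the next
# physical block in rotation, and a table records the mapping (this is
# the structure the controller would keep in static RAM).

class RoundRobinRemapper:
    def __init__(self, n_blocks):
        self.n_blocks = n_blocks
        self.next_phys = 0               # round-robin write pointer
        self.table = {}                  # logical block -> physical block
        self.writes = [0] * n_blocks     # per-physical-block wear counters

    def write(self, logical):
        """Place a logical write on the next physical block in rotation."""
        phys = self.next_phys
        self.next_phys = (self.next_phys + 1) % self.n_blocks
        self.table[logical] = phys
        self.writes[phys] += 1
        return phys

    def read(self, logical):
        """Look up where a logical block currently lives."""
        return self.table[logical]

r = RoundRobinRemapper(4)
for _ in range(8):
    r.write(0)          # hammering a single logical block...
print(r.writes)         # ...still spreads wear evenly: [2, 2, 2, 2]
```

Even with every write hitting the same logical block, the wear counters stay even, which is the wear-levelling benefit the comment describes.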