Samsung Mass Produces Fast 256GB SSDs
Lucas123 writes "Samsung said it's now mass producing a 256GB solid state disk that it says has sequential read/write rates of 220MB/sec and 200MB/sec, respectively. Samsung said it focused on narrowing the disparity between read and write rates on this model by interleaving NAND flash chips across eight channels, the same way Intel boosts its X25 SSD. The drive doubles the performance of Samsung's previous 64GB and 128GB SSDs. 'The 256GB SSD launches applications 10 times faster than the fastest 7200rpm notebook HDD,' Samsung said in a statement."
Re:10,000 RPM (Score:5, Insightful)
Just imagine the power savings as well. Also, they should last an order of magnitude longer than media that needs to spin all the time.
As soon as these get cheaper and have more capacity, spinning media is dead.
Re:Fuzzy math (Score:5, Insightful)
Hard disks have to position the heads at the right sector before starting a read. Maybe these SSDs don't have a solid state analog to that activity and are thus faster by however long that takes.
I don't know the specifics, but I'd guess that comparing overall program access and launch time to peak transfer speed is apples and oranges.
Re:10,000 RPM (Score:5, Insightful)
Spinning media already is dead. It's just that no one's told it yet.
Actually, spinning media will continue to be used in servers that need huge capacities of storage. But for cheaper devices, the speed, energy efficiency, durability, and price of solid state drives will effectively make using spinning media obsolete in the next few years.
small, cheap, and reliable, please! (Score:5, Insightful)
Re:Fuzzy math (Score:4, Insightful)
That the job time differs by a factor of 3.5 does not mean that data transfer speeds aren't improved tenfold. There are other factors involved, you know. It'd have been a cleaner comparison if they had transferred a single 250GB file from one HDD to another HDD, then a copy of that same file from one SSD to another SSD.
All the same, once capacities reach 750GB or better and the price point is below $200 or so, I'll be buying them. Hell, I'd probably consider buying a 256GB drive just to improve boot times. (When Linux decides it's time to fsck, boot times are slow.)
Question: the fact that they could transfer ten 25GB files to the SSD leads me to think it's 256 gigabytes rather than gibibytes. Are these SSDs rated in actual gigabytes, or in gibibytes with the gigabyte label? I think SSD technology is a great breaking point where manufacturers could/should agree to abandon the misleading gibibyte ratings.
On an unrelated note: Maybe a spyware-infested Windows box will boot in under two minutes now ;)
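For reference, here's what the two unit conventions work out to for a "256GB" drive (a quick sketch; the ten-files figure comes from the article's transfer test):

```python
# Decimal vs. binary capacity units for an advertised "256GB" drive.
GB = 10**9    # gigabyte: decimal unit drive makers advertise
GiB = 2**30   # gibibyte: binary unit most OSes historically report as "GB"

advertised = 256 * GB
print(advertised / GiB)            # ~238.4 "GB" as shown by a binary-unit OS
print(10 * 25 * GB <= advertised)  # ten 25GB files fit if the rating is decimal
```

So if the drive really held 256 GiB, the OS would report it as 256; reporting 238.4 is the tell-tale sign of a decimal rating.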
Re:Fuzzy math (Score:5, Insightful)
This is something that a lot of people tend to overlook, either because they don't understand how a hard drive works, or because they don't stop and think about it. Loading programmes, especially ones which rely on libraries, translation files, multimedia, etc. scattered at other locations on the disk, would greatly slow down an HDD in comparison to an SSD.
In contrast to SSDs, which are pretty much random access devices, reading each of those files from an HDD involves basically three time factors:
1. Seek time. The time it takes to move a reader head to a specific track (ring of data on a platter). Assuming that there is only this read taking place, you can pretty much assume that the reader head moves from its current location to the correct spot on the disk right away. Things are not always this pretty, though.
2. Rotation time. On average, you will have to wait half a rotation for the correct spot on the disk to spin around to the reader heads. There may be algorithms designed to mitigate this by reading even as it waits. In case the read is large enough to span a significant portion of the track, it can append that buffered data later, but I don't know if this is done or not.
3. Read time. This is the amount of time required to read the data off of a single track, and can take up to 1 rotation of the platter to complete.
So while the GP has a point in that people need to be careful about what kinds of statistics they believe, he/she glosses over the fact that reading a single piece of data from an HDD is hardly a random-access, constant-time operation (or linear time for n pieces of data).
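A toy model of those three factors shows how the per-access overhead adds up for small scattered reads. All the numbers here are illustrative assumptions, typical of a 7200rpm notebook-class drive, not measured specs:

```python
# Rough model of the three HDD latency components described above.
avg_seek_ms = 9.0                             # 1. move the head to the track
rpm = 7200
rotation_ms = 60_000 / rpm                    # one full platter rotation: ~8.33ms
avg_rotational_latency_ms = rotation_ms / 2   # 2. wait half a turn on average

def access_time_ms(read_kb, track_kb=500):
    """Seek + rotational latency + time to read the data off the track."""
    read_ms = rotation_ms * (read_kb / track_kb)  # 3. fraction of one rotation
    return avg_seek_ms + avg_rotational_latency_ms + read_ms

# 100 scattered 4KB reads pay the seek + latency cost every single time:
print(access_time_ms(4))                            # ~13.3ms per tiny read
print(sum(access_time_ms(4) for _ in range(100)))   # well over a second total
```

The same 400KB read sequentially off one track would take a single seek plus about one rotation, i.e. around 20ms instead of 1.3 seconds.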
Powers of Two (Score:3, Insightful)
All these flash drives and solid state drives keep advertising capacities in powers of two: 64 GB, 128 GB, 256 GB. So why do they still say a 256 GB SSD is 256 billion bytes?
Re:cant wait (Score:4, Insightful)
Ditto, but I'm waiting for permanent data erasure to become a little more mature. I understand the wear leveling incorporated into SSDs can cause current erasure programs to stumble.
The mean time to failure of the new SSDs is a bucket load better than most regular HDDs.
http://www.google.com/search?q=ssd+mean+failure+vs+hdd [google.com]
Between the 10,000-100,000 write cycles per cell and the wear-leveling logic that tries not to rewrite the same place over and over, they do quite well.
Re:Powers of Two (Score:3, Insightful)
It has 256GiB (about 275 billion bytes) of raw capacity, but some of that is reserved for overhead, so only 256GB is left usable.
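In round numbers, that split looks like this (the exact reserve is an assumption for illustration; Samsung doesn't publish it):

```python
# Raw binary capacity vs. advertised decimal capacity for a "256GB" SSD.
raw = 256 * 2**30      # 256GiB of raw NAND: ~274.9 billion bytes
usable = 256 * 10**9   # 256GB advertised to the user
overhead = raw - usable

print(overhead)              # ~18.9 billion bytes held back
print(overhead / raw * 100)  # ~6.9% reserved for spare blocks / wear leveling
```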
Yep (Score:5, Insightful)
Disk I/O is the one area where I still have an easy time slamming modern computers. In most other areas, it isn't too expensive to simply get enough power to handle what I want in realtime without slowdown. Multiple VMs? No problem, quad cores are cheap. Big audio projects? Hell, I can get 4GB of RAM for less than a month's Internet access... However, when those projects start hitting the disk, I start having problems, even with a RAID array. The sequential stuff isn't the issue; it's the random access that kills it.
Audio only takes about 172Kbytes per second per track (for 32-bit floating point). So you'd figure that, say, 64 tracks isn't a big deal, right? Only about 11Mbytes/sec, way under what a single disk can handle. Yet you'll find that it chokes. The reason is that the audio isn't all nice and sequential: it's written to disk as 32 separate stereo audio files, and you may have some of them reading while others are writing, and so on. The disk gets overloaded trying to seek to the information in time.
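The arithmetic works out like this (assuming a 44.1kHz sample rate, which the figures above imply but don't state):

```python
# Per-track audio bandwidth for 32-bit float samples at 44.1kHz.
sample_rate = 44_100
bytes_per_sample = 4                        # 32-bit floating point
per_track = sample_rate * bytes_per_sample  # 176,400 B/s, ~172 KiB/s

tracks = 64
total_mb_s = tracks * per_track / 10**6
print(total_mb_s)  # ~11.3 MB/s -- trivial as sequential bandwidth
```

The point is that 11MB/s split across 32 files being seeked between is a very different workload from 11MB/s streamed off one contiguous region.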
VMs are the same story. Two VMs running computations at the same time work at full speed: they each use a core of the CPU, so there's no problem. They do contend for memory bandwidth, but that is plenty high. Likewise, one VM doing disk access runs at near-native speed; there's not a lot of overhead in reading and writing to the disk. But get two VMs doing disk access and things grind to a halt. The drive is dancing all over the platter trying to service simultaneous requests from different areas, so throughput collapses.
An SSD would be amazing for apps like this. Not because it has so much more bandwidth, but because its bandwidth stays much higher under intense random access. Where a hard drive might manage 50MB/sec in sequential reads, the same drive might struggle to pull even 5MB/sec in random reads. For an SSD it might be more like 200MB/sec sequential and 180MB/sec random. Even though that isn't full speed, it's as close as makes no odds. With that, the VM and audio workloads would have no throughput problems.
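A back-of-the-envelope model shows why seek-dominated workloads collapse on an HDD but barely dent an SSD. The access times, request size, and sequential rates below are illustrative assumptions, not measured specs:

```python
# Effective throughput when every request pays a fixed access cost up front.
def throughput_mb_s(access_ms, seq_mb_s, req_kb=64):
    """Access time (seek + rotational latency) plus transfer time per request."""
    transfer_ms = req_kb / 1024 / seq_mb_s * 1000  # time spent actually reading
    return (req_kb / 1024) / ((access_ms + transfer_ms) / 1000)

# HDD: ~13ms access, 50MB/s sequential -> random reads crawl
print(round(throughput_mb_s(13.0, 50.0), 1))   # ~4.4 MB/s
# SSD: ~0.1ms access, 200MB/s sequential -> random reads stay near full speed
print(round(throughput_mb_s(0.1, 200.0), 1))   # ~151.5 MB/s
```

That's the whole story in two numbers: the HDD spends ~90% of its time positioning heads, while the SSD spends ~75% of its time transferring data.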
Re:Powers of Two (Score:4, Insightful)
Computer math doesn't work like regular math. Take SATA2, which is billed as 3Gbps. If I showed you a cargo ship with a capacity of 3000 tons, you'd expect to actually load 3000 tons, not to discover that 600 tons of the cargo hold is fixed support beams. But with computers it's somehow okay that 600Mbps of that is just encoding overhead, so you can't actually transfer more than 2400Mbps of data. And we've been stuck with the 1000-vs-1024 mess at least as far back as the "1.44MB" floppy, which is 1.44*1000*1024 bytes and can't be right in any system, and probably longer. Honestly, whoever started this has wasted more time for computer users than whoever dropped the century digits and caused the Y2K problem.
Re:Random access (Score:5, Insightful)
And please stop calling them disks! Disks are circular objects.
Good luck with that. People still "dial" phones, even though phones with dials haven't existed for decades.
Re:Because that's what GB means (Score:3, Insightful)
It was only when somebody couldn't quite make a 1GB hard disk that 1,000,000,000 bytes became "good enough."
Re:cant wait (Score:5, Insightful)
I could be wrong, but you sound like you're being sarcastic, which is a pretty stupid attitude to have here.
Let's say you have a crappy unoptimized database. You can spend tens of thousands of dollars' worth of programmer time to fix it up and optimize it so that it runs fast on your current hardware. Or you can spend perhaps one tenth of the money to upgrade to a super-fast disk, achieving the same end result. Which one is the smarter move?