Seagate Announces First SSD, 2TB HDD
Lucas123 writes "Seagate CEO Bill Watkins said today that the company plans to put out its first solid state disk drive next year as well as a 2TB version of its Barracuda hard disk drive. Watkins also alluded to Seagate's inevitable move from spinning disk to solid state drives, but emphasized it will be years away, saying the storage market is driven by cost-per-gigabyte and though SSDs provide benefits such as power savings, they won't be in laptops in the next few years. A 128GB SSD costs $460, or $3.58 per gigabyte, compared to $60 for a 160GB hard drive, according to Krishna Chander, an analyst at iSuppli. 'It will take three to four years for SSDs to come to parity with hard drives,' on price and reliability."
Every news source (Score:5, Informative)
Seagate is announcing two separate products. One is an SSD and the OTHER is a 2TB hdd.
Re:I simply see market for a hybrid drive (Score:4, Informative)
Re:Oh, no.. Here comes the nostalgia again.. (Score:5, Informative)
Summary and article fail at simple comparisons.... (Score:3, Informative)
Is it that fucking hard to include the cost per gigabyte of the current hard drives ($0.375/GB for the example given)? Why quote one $/GB figure if you can't be bothered to include the other?
Re:I simply see market for a hybrid drive (Score:4, Informative)
The only hybrid drive I see is an 80GB Seagate, although there are likely more offerings.
Re:2TB Hard Drive (Score:2, Informative)
I remember my long-former managers happily paying nearly $10k each, for the damned things...
Re:Every news source (Score:4, Informative)
Re:We would disagree (Score:3, Informative)
The price point of spinning disks goes down way faster than SSDs. Not to mention SSDs can't keep up with long, sequential reads versus a good spinning disk (caveat: a good 8-channel SSD can do 128MB/s, but you will have severe trouble actually achieving this in practice).
Re:Eee PC not a laptop ? (Score:3, Informative)
Or the MacBook Air, or the Lenovo x300.
Re:Me Too! (Score:5, Informative)
Separately, it's nice to know that analysts agree with research I've done that it's only 4 years before SSD surpasses HD, at least in 2.5 inch drives. I've been comparing the relative price improvement of hard disks versus flash, and it's pretty easy to estimate a crossover point.
You can have a look at my data (charts) and conclusions here. http://www.mattscomputertrends.com/flashdiskcomparo.html [mattscomputertrends.com]
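The crossover estimate above is just compounding two price-decline rates until the $/GB curves meet. Here is a minimal sketch of that arithmetic: the two starting prices come from the summary, but the annual decline rates are purely illustrative assumptions, not figures from the linked data.

```python
# Back-of-envelope $/GB crossover estimate.
# Starting prices are from the article summary; the decline
# rates below are assumed for illustration only.
hdd_cost = 60 / 160    # $/GB today ($60 for a 160GB hard drive)
ssd_cost = 460 / 128   # $/GB today ($460 for a 128GB SSD)
hdd_decline = 0.30     # assumed: HDDs get 30% cheaper per year
ssd_decline = 0.60     # assumed: flash falls faster, 60% per year

years = 0
while ssd_cost > hdd_cost:
    hdd_cost *= 1 - hdd_decline
    ssd_cost *= 1 - ssd_decline
    years += 1
print(years)  # crossover in roughly this many years
```

With different assumed decline rates the crossover shifts by a year or two in either direction, which is why such estimates are sensitive to the trend data used.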
Re:We would disagree (Score:1, Informative)
- SSDs aren't as vibration sensitive (neither will take a bullet, but only an SSD can likely survive a normal 2m drop onto concrete)
My new Seagate drive is rated for 300G (non-operating) maximum shock versus 1500G for a Samsung SSD I have. Either will survive much better than the laptop surrounding it, so the fact that an SSD may survive a larger drop is a moot point. The drive in my last laptop survived a 12' drop onto concrete, so I don't buy the argument that this is a selling point for an SSD.
- SSDs don't have the temperature/altitude constraints
My new Seagate laptop drive advertises that it is capable of operation from 0 to 60 degrees Celsius. That's better than the specs on the Corsair USB key I have in my pocket.
- SSDs don't have latency and no rise/shutdown time for green needs, in fact, they use hardly any power at all
My old Seagate used 0.6W at idle versus 0.32W at idle for the Samsung SSD I replaced it with. It didn't make a measurable difference to the battery life, and using 50% of the power doesn't count as "hardly any." It was not worth the money, especially considering the loss of capacity and speed.
- No electromechanicals to wear out.
One laptop Seagate drive I found specs for had a 1 million hour MTBF versus the 2 million hour MTBF for my Samsung SSD. It's not that big of a difference in practical usage. Wearing out a hard drive just isn't a concern unless you manage a lot of drives or keep computers for a long time.
While for some people the reduced physical size of SSDs will make them successful, for most people the desire for more space will ensure that magnetic drives won't go the way of tubes for a long time.
Re:Analysts are dumb (Score:3, Informative)
Finally, the interface has nothing to do with the recording technology. SSDs and HDs use the same interfaces.
Re:Oh, no.. Here comes the nostalgia again.. (Score:2, Informative)
Re:Every news source (Score:4, Informative)
With modern wear leveling algorithms, you can write to an SSD continuously at its maximum write rate for about fifty years before you wear it out. They are, if anything, much more suitable for rapidly changing data than a regular hard drive.
Dashing the dreams of /.ers since 1999 (Score:2, Informative)
It's not impossible to implement that functionality with a dumb SSD and HDD. The easy part is unionfs -- done. The hard part is determining with sufficient accuracy what files are unlikely to be written again -- a first cut could just consider some directories, MIME types and/or file extensions more or less likely to be rewritten than others. The ugly part: file metadata has to be present for both file sets at all times (or at least all directories which are split across both devices), metadata might be changed frequently, the HDD must be on for as little time as possible, and writing to flash must be avoided as much as possible. The only way to satisfy all those constraints is by reading and maintaining a complete write-back cache of the HDD's inodes and dirents in RAM at mount time, flushing dirty entries whenever the HDD spins up and writing through whenever the HDD is on. At 144 bytes apiece a cache for a typical homedir/archive disk could eat up a sizable chunk of RAM.
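The "first cut" placement policy described above, classifying files by directory and extension, can be sketched in a few lines. Everything here is a hypothetical illustration: the path prefixes and extension list are assumptions, not part of any real unionfs setup.

```python
# First-cut placement policy for a hybrid SSD/HDD union:
# decide whether a file is unlikely to be rewritten, so it can
# live on the (mostly spun-down) HDD side. The directory
# prefixes and extensions below are illustrative assumptions.
import os

RARELY_REWRITTEN_EXTS = {".mp3", ".flac", ".jpg", ".mkv", ".iso"}
HOT_DIRS = ("/var", "/tmp", "/home/user/.cache")  # hypothetical hot paths

def place_on_hdd(path):
    """Return True if the file looks write-once (archive it on the
    HDD); False if it is likely to be rewritten (keep it on flash)."""
    if path.startswith(HOT_DIRS):
        return False                     # frequently rewritten area
    ext = os.path.splitext(path)[1].lower()
    return ext in RARELY_REWRITTEN_EXTS  # bulk media: archive to HDD

print(place_on_hdd("/home/user/music/track01.flac"))  # True
print(place_on_hdd("/var/log/syslog"))                # False
```

A real implementation would refine this with MIME sniffing or write-history tracking, but the hard constraints the comment lists (cached metadata, minimizing HDD spin-ups and flash writes) live in the union layer, not in this classifier.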
The usual structure of a storage hierarchy is that each level contains a fast, small subset of the next level. A consequence of this is that at the steady state the final level contains a complete copy of everything. Poor write endurance makes flash SSDs poor participators in this sort of hierarchy.
Re:Price / Performance isn't always king (Score:2, Informative)
SSD Performance (Score:5, Informative)
For 100% read applications, SSDs tend to be similar in performance to hard disks when reading linearly, and a lot faster when reading randomly. This shows up in linear read speeds of 100 MB/sec for a typical Flash SSD, which is "close" to a hard disk. For random 4K reads, Flash SSDs can stomp any hard disk: most Flash SSDs are in the 10,000 4K read IOPS range, whereas 15K SAS drives are in the 250 range, or 40x slower. So for applications that are 100% read, SSDs can be as much as 40x faster, although the average is usually in the range of 15x to 20x.
When you start writing to Flash, things get interesting. Flash is really designed for large, linear, aligned writes. With most drives, you can get maximum write throughput only if you write exactly aligned with the drive's internal erase blocks. Write exactly 2 megabytes on exact 2-megabyte boundaries and you get 100% of the drive's theoretical write throughput. Unfortunately, no application acts like this, so you are at the mercy of the file system and the Flash controller to turn your smaller, probably random, and probably mis-aligned writes into something the drive can handle. The net impact is that good Flash SSDs have 4K random write IOPS in the 120s, which is half the speed of a 15K SAS drive. I have measured Flash SSDs with 4K write IOPS of 135, 120, 64, 43, 24, 13, 4.0, and 3.3.
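The alignment effect described above can be made concrete with a toy model: if every write, however small, costs one full erase-block program, effective throughput collapses for small random writes. The erase-block size and per-block timing below are assumed round numbers for illustration, not measurements of any particular drive.

```python
# Toy model of erase-block alignment (all numbers assumed):
# each write costs one full erase-block operation, regardless of
# how many bytes of payload it actually carries.
ERASE_BLOCK = 2 * 1024 * 1024   # assumed 2MB erase block
BLOCK_PROGRAM_MS = 20.0         # assumed ms to erase+program a block

def effective_mb_per_s(write_size):
    # payload bytes delivered per second, one block op per write
    return (write_size / (BLOCK_PROGRAM_MS / 1000)) / (1024 * 1024)

print(effective_mb_per_s(ERASE_BLOCK))  # aligned 2MB writes: 100 MB/s
print(effective_mb_per_s(4 * 1024))     # 4K random writes: ~0.2 MB/s
```

The same model also predicts random-write IOPS of only 1000/BLOCK_PROGRAM_MS = 50 per second, which lands in the same ballpark as the measured figures quoted above.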
This is why Flash SSD performance is so hard to judge. Random write performance can suck up the available "drive time" and dig a system deep into dirty buffer flushing. We talked with one Dell laptop user who described their system becoming "unusable" while an Outlook indexing operation was randomly updating a big file. Unusable in this case meant 2+ minutes to bring up Task Manager.
These random writes also have a real impact on drive wear. Every random write basically chews up a full write/erase cycle, even if the write is only 4K long. If you look at a drive that claims 50 GB/day for 10 years, that is 50 GB of linear writes on exact erase block boundaries. If you write 4K randomly, the 50 GB really buys you only about 25,000 4K writes, or 100 megabytes of random writes.
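The wear arithmetic above is worth working through explicitly. Assuming the 2MB erase block used earlier in this comment, each 4K random write consumes a whole block's worth of the daily endurance budget (using binary units, the figures come out slightly higher than the comment's round numbers):

```python
# Worked version of the comment's wear arithmetic: a 4K random
# write that lands in a 2MB erase block consumes the whole
# block's erase budget. Erase-block size is assumed per the
# comment; endurance rating is the quoted "50 GB/day".
ERASE_BLOCK = 2 * 1024 * 1024   # 2MB erase block (assumed)
WRITE_SIZE = 4 * 1024           # one 4K random write
DAILY_BUDGET = 50 * 1024**3     # 50 GB/day endurance rating

writes_per_day = DAILY_BUDGET // ERASE_BLOCK  # erase cycles available
useful_bytes = writes_per_day * WRITE_SIZE    # payload actually stored
print(writes_per_day)            # 25600 random 4K writes per day
print(useful_bytes // 1024**2)   # 100 MB of useful random writes
```

So the write-amplification factor for this worst case is ERASE_BLOCK / WRITE_SIZE = 512x: the drive wears 512 times faster than the payload size suggests.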
The solution to this is to not write randomly to the drive. There are file systems designed for Flash that address these issues, typically called "Log File Systems". Unfortunately, none of the generally available ones is really designed for performance. In Linux, the LogFS options are tuned for small-memory, small-storage systems and for hardware where the flash chips are directly accessible. They do help drive wear a lot, but they are just not tuned for gigabytes of space or database-crunching performance.
Another solution is my company's product, MFT (Managed Flash Technology), a software block-mapping layer that runs on the host. It gives you the random write performance benefits and wear benefits of a log file system while allowing you to use whatever file system you wish. MFT was developed on 2.6 Linux and has been ported to Windows. With MFT, the same drives that do 25 4K random write IOPS usually measure over 10,000. The linear speed of the drive is still equal to a hard disk, but the random speed is now closer to symmetric between reads and writes. Thus jobs like updating databases can literally run 20x faster than on the fastest hard disks.
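The core idea behind such a block-mapping layer can be sketched in a few lines. This is an illustration of the general log-structured remapping technique, not the actual MFT implementation: random logical writes are turned into sequential appends to a log, and a map records where each logical block currently lives.

```python
# Minimal sketch of log-structured block remapping (an
# illustration of the general technique, not any real product).
# Random logical overwrites become sequential appends; a map
# tracks the latest physical location of each logical block.
class LogRemapper:
    def __init__(self):
        self.log = []      # physical log: data appended in order
        self.mapping = {}  # logical block addr -> index in the log

    def write(self, lba, data):
        self.mapping[lba] = len(self.log)  # old copy becomes garbage
        self.log.append(data)              # always a sequential append

    def read(self, lba):
        return self.log[self.mapping[lba]]

r = LogRemapper()
r.write(7, b"old")
r.write(3, b"aaa")
r.write(7, b"new")   # a random overwrite is just another append
print(r.read(7))     # b'new' -- the map points at the latest copy
```

What this sketch omits is garbage collection: stale log entries must eventually be reclaimed, which is where most of the engineering effort in a real layer goes.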
In the end, Flash SSDs will initially find specific markets. I can say with certainty that they won't get used for off-line backups or for storing and editing large quantities of HD video. But give them databases or file systems with lots of small files, and they can really smoke a hard drive.
Mac OS X already does that... (Score:3, Informative)
Basically, often-accessed, read-mostly files are moved to the zone of the hard drive where file access is fastest.
It would not be hard to adapt this behaviour to move those files onto the SSD portion of the disk instead.