Seagate Announces First SSD, 2TB HDD 229

Lucas123 writes "Seagate CEO Bill Watkins said today that the company plans to put out its first solid state disk drive next year as well as a 2TB version of its Barracuda hard disk drive. Watkins also alluded to Seagate's inevitable move from spinning disk to solid state drives, but emphasized it will be years away, saying the storage market is driven by cost-per-gigabyte and though SSDs provide benefits such as power savings, they won't be in laptops in the next few years. A 128GB SSD costs $460, or $3.58 per gigabyte, compared to $60 for a 160GB hard drive, according to Krishna Chander, an analyst at iSuppli. 'It will take three to four years for SSDs to come to parity with hard drives,' on price and reliability."
Comments Filter:
  • Every news source (Score:5, Informative)

    by Gewalt ( 1200451 ) on Friday May 30, 2008 @06:44PM (#23605471)
    Every news source has merged those two statements together, and every time, my brain gets stuck on it.

    Seagate is announcing two separate products. One is an SSD and the OTHER is a 2TB HDD.
  • by poeidon1 ( 767457 ) on Friday May 30, 2008 @07:00PM (#23605577) Homepage
    Just found out that hybrid drives are already in the market. http://www.engadget.com/2007/03/07/samsungs-hybrid-hard-drive-hhd-released-to-oems/ [engadget.com]
  • by Detritus ( 11846 ) on Friday May 30, 2008 @07:05PM (#23605627) Homepage
    Ha! I can remember having to order the installation of a new 220V electrical circuit to support the installation of a rack-mount winchester 450 MB hard disk drive. You needed at least two people to lift the drive enclosure off the floor. The new electrical circuit was needed to supply enough current for the drive to spin up. We used 10 MB removable hard disk cartridges that were about the size of a large pizza to store the operating system and user programs.
  • by NewbieProgrammerMan ( 558327 ) on Friday May 30, 2008 @07:18PM (#23605707)

    A 128GB SSD costs $460, or $3.58 per gigabyte, compared to $60 for a 160GB hard drive...

    Is it that fucking hard to include the cost per gigabyte of the current hard drives ($0.375/GB for the example given)? Why quote one $/GB figure if you can't be bothered to include the other?
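    For what it's worth, the missing arithmetic is a one-liner (prices are the ones quoted in the summary):

```python
# Cost-per-gigabyte for the two drives quoted in the summary.
ssd_price, ssd_gb = 460.0, 128
hdd_price, hdd_gb = 60.0, 160

ssd_per_gb = ssd_price / ssd_gb   # ~3.59 $/GB (summary rounds to 3.58)
hdd_per_gb = hdd_price / hdd_gb   # 0.375 $/GB

print(f"SSD: ${ssd_per_gb:.2f}/GB, HDD: ${hdd_per_gb:.3f}/GB")
print(f"Ratio: {ssd_per_gb / hdd_per_gb:.1f}x")  # SSD is ~9.6x pricier per GB
```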
  • by mlts ( 1038732 ) * on Friday May 30, 2008 @07:20PM (#23605729)
    Vista offers a spec to drive makers called ReadyDrive: a hybrid hard disk that combines some flash memory with a mechanical hard disk, letting the drive immediately write contents somewhere permanent. That boosts performance and allows the drive to schedule the optimum way to write out data, as opposed to writing one chunk, waiting for the platter to spin around for another segment, then going back to the first.

    The only hybrid drive I see is an 80 GB Seagate, although there are likely more offerings.
  • Re:2TB Hard Drive (Score:2, Informative)

    by Penguinisto ( 415985 ) on Friday May 30, 2008 @07:26PM (#23605773) Journal
    Heh - I remember when Micropolis (yeah, I'm old, deal with it) sold 9-gigabyte HDDs that were twice as tall, to (IIRC) hold one hell of a tall stack of platters. It ran somewhat warm if you really beat the crap out of it, but otherwise it wasn't much noisier or hotter than the 360MB (not "G", "M") disks that were out around the same time. The only real PITA was getting it to play nice with the other hardware.



    I remember my former managers happily paying nearly $10k each for the damned things...

    /P

  • Re:Every news source (Score:4, Informative)

    by Tubal-Cain ( 1289912 ) on Friday May 30, 2008 @07:37PM (#23605855) Journal
    What you describe is called wear-leveling. On pure flash drives, the data that rarely changes gets written to the most worn sections of the drive.
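    A toy sketch of the idea, for anyone curious (this illustrates the principle only; it is not any vendor's actual FTL):

```python
# Toy model of wear leveling: hot writes go to the least-worn block,
# and rarely-changing ("cold") data is parked on the most-worn blocks
# so that wear stays even across the drive.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block_for_write(self):
        # Frequently rewritten data lands on the least-worn block.
        return min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

    def pick_block_for_cold_data(self):
        # Cold data retires the most-worn block from further churn.
        return max(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

    def erase(self, block):
        self.erase_counts[block] += 1

wl = WearLeveler(4)
for _ in range(100):
    wl.erase(wl.pick_block_for_write())
print(wl.erase_counts)  # wear spreads evenly: [25, 25, 25, 25]
```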
  • by bluefoxlucid ( 723572 ) on Friday May 30, 2008 @07:45PM (#23605933) Homepage Journal
    Tubes are in the expensive high-end hi-fi stuff (as well as some interesting transformer stuff-- because a 1:1.14 transformer winding can transfer a crisp clear square wave while a 1:3 will round and distort it), and are extremely important for building a good guitar amp. Classic rock versus modern day stuff, listen to something like ZZTop or Hendrix and you'll hear tubes. When they play soft it's clean or a little fuzzed; hard and loud and it becomes dirty. That essential sensitivity of a blues amp is something that can't be accurately modeled either (even variations in tube plate structure drastically change the type of overdrive).

    The price point of spinning disks goes down way faster than that of SSDs. Not to mention SSDs can't keep up with long, sequential reads versus a good spinning disk (caveat: a good 8-channel SSD can do 128MB/s; you will have severe trouble implementing this).
  • by dal20402 ( 895630 ) * <dal20402@ m a c . com> on Friday May 30, 2008 @07:54PM (#23606027) Journal

    Or the MacBook Air, or the Lenovo x300.

  • Re:Me Too! (Score:5, Informative)

    by matt21811 ( 830841 ) on Friday May 30, 2008 @08:01PM (#23606085) Homepage
    Spot on. Making a hard disk for a competitive price is hard. That's why there are only a handful of hard disk manufacturers. Making a circuit board with some chips on it can be done by hundreds of companies all over the world. I can't think of any reason to buy a Seagate SSD over any one of the other hundreds of competitors, especially when they all have the same electronics inside.

    Separately, it's nice to know that analysts agree with research I've done that it's only 4 years before SSD surpasses HD, at least in 2.5 inch drives. I've been comparing the relative price improvement of hard disk prices to flash and it's pretty easy to estimate a crossover point.

    You can have a look at my data (charts) and conclusions here. http://www.mattscomputertrends.com/flashdiskcomparo.html [mattscomputertrends.com]
  • by Anonymous Coward on Friday May 30, 2008 @08:01PM (#23606087)
    Are you sure? Hard drives have had a tremendous amount of money invested to make them tough and efficient.

    - SSDs aren't as vibration sensitive (both will not take a bullet, but only SSD can likely survive a normal drop of 2M on to concrete)

    My new Seagate drive is rated for 300G (non-operating) maximum shock versus 1500G for a Samsung SSD I have. Either will survive much better than the laptop surrounding it, so the fact that an SSD may survive a larger drop is a moot point. The drive in my last laptop survived a 12' drop onto concrete, so I don't buy the argument that that is a selling point for an SSD.

    - SSDs don't have the temperature/altitude constraints

    My new Seagate laptop drive advertises that it is capable of operation from 0 to 60 degrees Celsius. That's better than the specs on the Corsair USB key I have in my pocket.

    - SSDs don't have latency and no rise/shutdown time for green needs, in fact, they use hardly any power at all

    My old Seagate used 0.6W at idle versus 0.32W at idle for the Samsung SSD I replaced it with. It didn't make a measurable difference to the battery life, and using 50% of the power doesn't count as "hardly any." It was not worth the money, especially considering the loss of capacity and speed.

    - No electromechanicals to wear out.

    One laptop Seagate drive I found specs for had a 1 million hour MTBF versus the 2 million hour MTBF for my Samsung SSD. It's not that big of a difference in practical usage. Wearing out a hard drive just isn't a concern unless you manage a lot of drives or keep computers for a long time.

    While for some people the reduced physical size of SSDs will make them successful, for most people the desire for more space will ensure that magnetic drives won't go the way of tubes for a long time.
  • Re:Analysts are dumb (Score:3, Informative)

    by rrohbeck ( 944847 ) on Friday May 30, 2008 @08:25PM (#23606285)
    There are several SSDs with >100MB/s transfer rate available today, the transfer rate will go up with Moore's law too (as opposed to hard drive transfer rates) and there are architectural possibilities too (running more chips in parallel.) And those hard drive transfer rates are only applicable when you do linear transfers, as everybody with a fragmented drive found out the hard way. As opposed to SSDs, where it doesn't matter because of zero seek time.
    Finally, the interface has nothing to do with the recording technology. SSDs and HDs use the same interfaces.
  • by oldenuf2knowbetter ( 124106 ) on Friday May 30, 2008 @09:21PM (#23606595)
    That the best you got? I remember having a line of 10Mb drives connected to our Burroughs B5500. Each drive cabinet was the height and depth and about half the width of a washing machine. They had platters over two feet in diameter spinning on a horizontal axle. Every day or two we had to put them back into line as they precessed as the earth rotated. Great fun. Oh, we also had one of those IBM 1401 with a model 1405 drive which provided 10Mb in a cabinet about the size of a new side-by-side refrigerator/freezer. Good times.
  • Re:Every news source (Score:4, Informative)

    by Free the Cowards ( 1280296 ) on Friday May 30, 2008 @09:52PM (#23606731)
    I really don't understand why everyone treats SSDs as being so fragile when writing to them. Yes, they have a limited write cycle. But so does your regular hard drive. The difference is that your SSD's cycle is guaranteed by the manufacturer, whereas your HD could blow up at any moment.

    With modern wear leveling algorithms, you can write to an SSD continuously at its maximum write rate for about fifty years before you wear it out. They are, if anything, much more suitable for rapidly changing data than a regular hard drive.
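    A back-of-envelope version of that lifetime estimate (the capacity, cycle rating, and sustained write rate below are assumed for illustration; real numbers vary a lot by drive):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def ssd_lifetime_years(capacity_bytes, erase_cycles, write_rate_bps):
    """Years of continuous writing before every cell reaches its rated
    erase-cycle count, assuming perfect wear leveling."""
    total_writable = capacity_bytes * erase_cycles
    return total_writable / write_rate_bps / SECONDS_PER_YEAR

# Hypothetical 128 GB SLC drive, 100,000 cycles/cell, written at a
# sustained 10 MB/s (a modest average, not the drive's maximum):
print(f"{ssd_lifetime_years(128e9, 100_000, 10e6):.0f} years")
```

    The answer is extremely sensitive to the inputs: MLC cycle ratings and faster sustained writes shrink it by orders of magnitude.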
  • by marxmarv ( 30295 ) on Friday May 30, 2008 @10:44PM (#23606981) Homepage

    In fact, wouldn't it be great if the drive could be smart about it and--over time--identify files that were mostly read-only (iPhoto archives, MP3s) and migrate them to the flash storage area where fast, low-power reads would be a benefit.
    No. Actually, it'd be awful. The drive has absolutely no business knowing anything about filesystems. That's the OS's job, specifically delegated to the filesystem driver.

    It's not impossible to implement that functionality with a dumb SSD and HDD. The easy part is unionfs -- done. The hard part is determining with sufficient accuracy what files are unlikely to be written again -- a first cut could just consider some directories, MIME types and/or file extensions more or less likely to be rewritten than others. The ugly part: file metadata has to be present for both file sets at all times (or at least all directories which are split across both devices), metadata might be changed frequently, the HDD must be on for as little time as possible, and writing to flash must be avoided as much as possible. The only way to satisfy all those constraints is by reading and maintaining a complete write-back cache of the HDD's inodes and dirents in RAM at mount time, flushing dirty entries whenever the HDD spins up and writing through whenever the HDD is on. At 144 bytes apiece a cache for a typical homedir/archive disk could eat up a sizable chunk of RAM.
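    To put a rough number on that "sizable chunk of RAM" (the 144 bytes/entry figure is the parent's; the file count is an assumption):

```python
# RAM needed for a write-back cache of all inodes/dirents on the HDD.
BYTES_PER_ENTRY = 144          # per cached inode/dirent (parent's figure)
files = 2_000_000              # assumed homedir/archive disk with 2M files

cache_mib = files * BYTES_PER_ENTRY / 2**20
print(f"{cache_mib:.0f} MiB of RAM")  # ~275 MiB
```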

    While we're dreaming, database engines could even be optimized to read only from the SSD-portion of a hybrid drive if a particular data point had not been written to in over N minutes, or since the last collation (explained later), but would write to the platters, and then during quiet cycles, it could do a collation. The collation would move data which was on the platters, but which did not have a pattern of large volumes of writes back to the SSD volume.
    An equal amount of battery-powered RAM as cache and journal for a traditional HDD would under most real workloads beat RAM+SSD or HDD+SSD. If you really wanted to identify (manually or otherwise) cold tables and load them into flash SSDs, the database engine will probably still load and cache them in main memory anyway (costing all of a few extra milliseconds), and any RAM not used to cache those tables can be used to speed up temporary tables or for dynamic caching. (compare Amdahl's Law)

    And... I'd like a pony...
    NOT YOURS

    The usual structure of a storage hierarchy is that each level contains a fast, small subset of the next level. A consequence of this is that at the steady state the final level contains a complete copy of everything. Poor write endurance makes flash SSDs poor participators in this sort of hierarchy.
  • by Auntie Virus ( 772950 ) on Friday May 30, 2008 @10:57PM (#23607031)

    while blank CDs cost about $40 for 50
    WTF? Where are you buying these? Staples??? 50 good-quality white printable 700MB CD-Rs are $9.99 CDN where I shop.
  • SSD Performance (Score:5, Informative)

    by DDumitru ( 692803 ) <doug@easycoOOO.com minus threevowels> on Friday May 30, 2008 @11:33PM (#23607181) Homepage
    is very dependent on the application. In particular it depends on the mix of linear vs random operations and the mix of reads and writes.

    For 100% read applications SSDs tend to be similar in performance to hard disks when reading linearly, and a lot faster than hard disks when reading randomly. This shows up in linear read speeds of 100 MB/sec for a typical Flash SSD, which is "close" to a hard disk. For random 4K reads, Flash SSDs can stomp any hard disk. Most Flash SSDs are in the 10,000 4K read IOPS range, where 15K SAS drives are in the 250 range, or 40x slower. So for applications that are 100% read, SSDs can be as much as 40x faster, although the average is usually in the range of 15x to 20x.

    When you start writing to Flash, things get interesting. Flash is really designed for large, linear, aligned writes. With most drives, you get maximum write throughput only if you write exactly aligned with the drive's internal erase blocks. Thus you can write exactly 2 megabytes on exact 2 megabyte boundaries and get 100% of the theoretical write throughput of the drive. Unfortunately, no application acts like this, so you are at the mercy of the file system and Flash controller to turn your smaller, probably random, and probably mis-aligned writes into what the drive can handle. The net impact is that good Flash SSDs have 4K random write IOPS in the 120s, which is 1/2 the speed of a 15K SAS drive. I have measured Flash SSDs with 4K write IOPS values like 135, 120, 64, 43, 24, 13, 4.0, and 3.3.
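    The worst case a controller has to hide can be sketched as (figures are the erase-block and write sizes from the paragraph above; real controllers with remapping do far better than this naive model):

```python
ERASE_BLOCK = 2 * 2**20   # 2 MiB erase block
WRITE_SIZE = 4 * 2**10    # 4 KiB random write

# Naive worst case with no remapping: each small random write forces
# a full erase-block program cycle.
amplification = ERASE_BLOCK // WRITE_SIZE
print(f"write amplification: {amplification}x")  # 512x
```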

    This is why Flash SSD performance is so hard to judge. The random write performance can suck up the available "drive time" and dig a system deep into dirty buffer flushing. We talked with one Dell laptop user who described their system becoming "unusable" while an Outlook indexing operation was randomly updating a big file. Unusable in this case meant 2+ minutes to bring up Task Manager.

    These random writes also have a real impact on the wear of the drive. Every time you seek and write, you basically chew up a write/erase cycle, even if the write is only 4K long. If you look at a drive that claims 50 GB/day for 10 years, that is 50 GB of linear writes on exact erase block boundaries. If you write 4K randomly, the 50 GB really means 25,000 4K writes, or 100 megabytes of random writes.
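    Spelled out as arithmetic (all figures are the ones quoted above):

```python
# Turning a 50 GB/day linear write budget into a random-write budget.
GB = 10**9
ERASE_BLOCK = 2 * 10**6   # 2 MB erase block
WRITE = 4 * 10**3         # 4 KB random write

daily_budget = 50 * GB                       # rated linear write budget
erase_cycles = daily_budget // ERASE_BLOCK   # 25,000 erase cycles/day
random_bytes = erase_cycles * WRITE          # useful data if every write is 4K random
print(f"{erase_cycles:,} erase cycles -> "
      f"{random_bytes / 10**6:.0f} MB of random writes")
```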

    The solution to this is to not write randomly to the drive. There are file systems designed for Flash that address these issues, typically called "log file systems." Unfortunately, there is no generally available one really designed for performance. In Linux, the LogFS options are tuned for small-memory, small-storage systems and for hardware where the flash chips are directly accessible. They do help drive wear a lot, but they are just not tuned for gigabytes of space or database crunching performance.

    Another solution is my company's product, MFT (Managed Flash Technology), a software block mapping layer that runs on the host. It gives you the random write performance benefits and wear benefits of a log file system while allowing you to use whatever file system you wish. MFT was developed on 2.6 Linux and has been ported to Windows. With MFT, the same drives that do 25 4K random write IOPS usually measure over 10,000. The linear speed of the drive is still equal to a hard disk, but the random speed is now closer to symmetric between reads and writes. Thus jobs like updating databases can literally run 20x faster than the fastest hard disks.
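    The general log-structured remapping idea (a bare-bones sketch of the technique, not MFT itself) looks like this:

```python
# Log-structured block remapping: logical blocks are remapped so that
# every physical write is appended sequentially at the log head,
# regardless of the logical write order.
class LogMapper:
    def __init__(self):
        self.mapping = {}   # logical block -> physical (log) position
        self.log_head = 0

    def write(self, logical_block):
        # A rewrite of an old block simply points it at a new log slot;
        # the stale slot is reclaimed later by garbage collection.
        self.mapping[logical_block] = self.log_head
        self.log_head += 1
        return self.mapping[logical_block]

m = LogMapper()
physical = [m.write(lb) for lb in (907, 13, 560, 13)]
print(physical)  # [0, 1, 2, 3] -- sequential regardless of logical order
```

    The cost of this design is the garbage collection pass that the sketch omits, which is where real implementations earn their keep.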

    In the end, Flash SSDs will find specific markets initially. I can say with certainty that they won't get used for off-line backups or storing/editing large quantities of HD video. But give them databases or file systems with lots of small files, and they can really smoke a hard drive.
  • by mario_grgic ( 515333 ) on Saturday May 31, 2008 @08:49AM (#23608895)
    With hot file adaptive clustering ( http://developer.apple.com/technotes/tn/tn1150.html#HotFile [apple.com] )

    Basically, read-only, often-accessed files are moved to the zone on the hard drive where access is fastest.

    It would not be hard to adapt this behaviour to move such files onto the SSD portion of a hybrid disk.
