Seagate to Offer Solid State Drives in 2008

Lucas123 writes "Seagate will introduce drives based on flash memory in various storage capacities across its range of products, including desktop and notebook PCs, according to Sumner Lemon at IDG News Service. The drives are expected to consume less power (longer battery life), offer faster data transfer rates and be more rugged than spinning disks, which have moving parts that can be damaged by an impact."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Re:Warranty? (Score:5, Informative)

    by imamac ( 1083405 ) on Thursday August 23, 2007 @08:07PM (#20337903)
    The rewrite issue has been rehashed a million times. It will be fine. I promise.
  • by rolfwind ( 528248 ) on Thursday August 23, 2007 @08:11PM (#20337945)
    is projected out in the future? Normal hard drive capacity growth certainly seems to have leveled off lately and is perhaps stagnating. Yes, flash has grown astronomically over the past few years, but is that growth sustainable to the point of meeting and exceeding conventional drives?

    If we had the rate of growth in conventional drives that we had a few years back, we would almost certainly be looking at multi-TB drives right now.
  • by EEPROMS ( 889169 ) on Thursday August 23, 2007 @08:13PM (#20337965)
    The headache now is that most file systems are optimised for mechanical storage media, so won't this also mean we will have to look at changing to new file systems?
  • Re:Lifespan? (Score:4, Informative)

    by smallfries ( 601545 ) on Thursday August 23, 2007 @08:16PM (#20337997) Homepage
    There are different grades of flash chip, with varying numbers of write cycles. The problem with the kind of flash that you get in a USB keyring is not that flash in general is limited in the number of writes, but that cheap low-end flash is. The kind of solid state storage in a drive can take millions of write cycles, which, combined with a file system that spreads the writes evenly across the chip, will give a decent lifespan.

    Cost is still a major issue though. The article only has one number in it: that densities will go up to 160Gb. Do you think they'll take a cheque for that, or do you have to spread and touch your toes in person?
  • Limit on writes... (Score:5, Informative)

    by CannonballHead ( 842625 ) on Thursday August 23, 2007 @08:27PM (#20338135)


    It's not all that bad. If I remember correctly, most flash memory can take 100,000-300,000 write cycles. According to Wikipedia:

    "while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles"

    I did a small (informational) research project on flash recently for school; as of June or so, solid state hard drives were said to handle about 2 million writes.

    That's 2 million writes per sector. You can always move the information around, and algorithms are being written to do that.

    But with all that, it seems like hybrid drives would be the way to go right now.. after all, there's no limit on READING from solid state drives, just writing.

  • Re:Warranty? (Score:5, Informative)

    by Amiga Lover ( 708890 ) on Thursday August 23, 2007 @08:27PM (#20338137)
    Not trolling, I just haven't ever seen recent hard stats on flash/solid state durability over time.

    Take a 40GB hard drive, and pretend it's Flash memory. If you wrote 40GB worth of data to it every single day (with the circuitry inside the drive spreading writes out over cells evenly), then you would average 1 write per day on each cell. Flash memory can be written a minimum of 10,000 times before dying; most is good for an order of magnitude more (100,000 writes). Assuming we have a crappy 10,000-write limit, we could write 40GB to the drive every day for 10,000 days, or 27 years, before failure is an issue.

    Looking at the 40GB drive in one of my machines, the total writes in its uptime comes to about 800MB, which is a shade under 24 hours uptime. That's 800MB worth of writes in a day, 50 times *less* than writing 40GB to the drive every day, so a 40GB flash drive at my current usage rate could be expected to last 27 * 50, or 1350 years.

    A lot longer than I have to worry about. The numbers are going to differ for some people, but the initial stats work out - few people would write to every cell every day, and even then that's decades worth of use.
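
    A minimal sketch of that arithmetic in Python; the 40GB capacity, 10,000-cycle floor and ~800MB/day write volume are the assumptions stated above, not vendor figures:

        # Lifetime estimate under ideal wear-leveling, using the post's assumptions.
        CAPACITY_GB = 40          # drive size
        CYCLES_PER_CELL = 10000   # pessimistic low-end flash endurance
        DAILY_WRITES_GB = 0.8     # ~800 MB of writes per day, as observed above

        total_writable_gb = CAPACITY_GB * CYCLES_PER_CELL   # 400,000 GB
        days = total_writable_gb / DAILY_WRITES_GB           # 500,000 days
        print(f"~{days / 365:.0f} years")                    # ~1370 years, close to the ~1350 above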

  • Re:Warranty? (Score:5, Informative)

    by SixDimensionalArray ( 604334 ) on Thursday August 23, 2007 @08:28PM (#20338147)
    I have been researching some of the more current SSD drives lately, and I know that they greatly improved the technology/algorithms behind how they write data to the physical memory. Most companies use some kind of wear-leveling techniques that evenly distribute the writes over the entire surface of the disk, maximizing the disk's life span. I have also read that the different-sized memory modules have different physical characteristics such that smaller modules are actually outlived by larger ones.

    I can't give exact figures, but I've seen comparisons showing a reasonable life span (>20 years @ 100GB of writes/day) - some of the numbers are even comparable to those of spinning/mechanical hard drives. Considering how often mechanical hard drives seem to fail, it doesn't seem that there will be any major roadblocks in terms of reliability.

    I know what I've written is mostly qualitative (apologies for that), but I know the research into mitigating the life span problem has truly advanced in the last few years as interest in SSDs has increased. Jim Gray, of Microsoft Research fame, predicted that SSDs would replace mechanical drives in the not-too-distant future. Check out his paper "Flash Disk Opportunity for Server-Applications" for more on that.

    SixD

  • Re:Warranty? (Score:5, Informative)

    by spagetti_code ( 773137 ) on Thursday August 23, 2007 @08:35PM (#20338205)
    Current flash technology has 1-5 million *write* cycles MTBF. All modern flash drives use write levelling to ensure writes are evenly spread across the device.

    This article [storagesearch.com] takes those numbers and, using a hypothetical "write logger" app that continually writes, estimates an average life of 51 years.

    MTron specs [mtron.net] for their SSDs estimate:

    Write endurance

    In the case of 32GB capacity Mtron SSD: >85 years @ 100GByte / day erase/write cycles


    So let's lay this one to rest. SSDs are worth it.
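
    For a sense of what that MTron figure implies, here is a rough back-calculation in Python; the per-cell cycle count is derived from the quoted spec under an assumption of perfect wear levelling, not taken from MTron's datasheet:

        # What does ">85 years @ 100GByte/day" on a 32GB drive imply per cell,
        # assuming writes are spread perfectly evenly?
        CAPACITY_GB = 32
        DAILY_GB = 100
        YEARS = 85

        total_gb_written = DAILY_GB * 365 * YEARS         # ~3.1 million GB
        cycles_per_cell = total_gb_written / CAPACITY_GB  # ~97,000 full-drive overwrites
        print("implied cycles per cell:", round(cycles_per_cell))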
  • by Televiper2000 ( 1145415 ) on Thursday August 23, 2007 @08:54PM (#20338371)
    I did some poking around the net for information on NAND write cycles. They've already been quoted in the comments here (100,000 to 2,000,000) so I'm just going to post this neat white paper I found on Zeus drives that explains the endurance they get from their SSD drive. http://www.baydel.com/images/gallery/NAND%20flash%20resilience.pdf [baydel.com]
  • Re:Warranty? (Score:5, Informative)

    by Anonymous Coward on Thursday August 23, 2007 @08:54PM (#20338375)
    The drive controller will do wear leveling, so it will not rewrite the same bits over and over again, even if the OS thinks it does. This has also been rehashed a million times.
  • Re:Warranty? (Score:5, Informative)

    by Clover_Kicker ( 20761 ) <clover_kicker@yahoo.com> on Thursday August 23, 2007 @08:59PM (#20338417)
    Relax, they still sell mechanical hard drives.
  • by Anonymous Coward on Thursday August 23, 2007 @09:40PM (#20338785)
    Let's explore a worst case scenario:

    Let's assume write-speed is 64MB/s, and that the memory is spec'd to one million writes/cell.

    If we assume the same standard as plain ol' disks, 512 byte sectors, and fill the disk to the brim, leaving only a single sector free to write to, let's see what happens.

    64MB/512 = 131072 (aka 128K) writes/second to that single cell.

    1000000 / 131072 ~= 7.629394531

    That's the expected lifetime of that cell. Seven point six three seconds.

    Don't bullshit me with "will not happen in your lifetime", please.
  • by evanbd ( 210358 ) on Thursday August 23, 2007 @10:03PM (#20338971)
    That's why you don't *do* that. Or, more precisely, why the SSD shouldn't *let* you do that. All it needs to do is keep some hidden spare space (10%? 5%? 1%? I don't know, but it's not huge) and dynamically remap sectors to balance writes. If you have GB of remapping room, even a "full" disk with heavy load would take a long time to wear out.
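
    To put numbers on that, here is a rough sketch in Python contrasting the worst case above with even a modest amount of dynamically remapped spare area; the 64MB/s rate, 512-byte sectors, 1-million-cycle endurance and the 1GB spare area are the thread's assumptions, not any vendor's design:

        # Worst case from the AC above vs. remapping over a hidden spare area.
        WRITE_BPS = 64 * 2**20   # 64 MB/s sustained writes
        SECTOR = 512             # bytes per sector
        CYCLES = 1000000         # write cycles per cell

        writes_per_sec = WRITE_BPS / SECTOR             # ~131,072 sector writes/s

        # Naive: every write hammers the single free physical sector.
        print(CYCLES / writes_per_sec, "seconds")        # ~7.6 s

        # With remapping, the same write stream rotates over a 1GB spare area.
        spare_sectors = 2**30 // SECTOR                  # ~2.1 million sectors
        seconds = CYCLES * spare_sectors / writes_per_sec
        print(seconds / 86400, "days of nonstop max-rate writes")   # ~185 days

    Half a year of continuous full-speed writes aimed at one hot spot still isn't "never", but real workloads are nowhere near that, and a larger spare area scales the figure linearly.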
  • Re:Warranty? (Score:2, Informative)

    by TimTucker ( 982832 ) on Thursday August 23, 2007 @10:03PM (#20338973) Homepage
    CompactFlash to IDE adapters can be had for $5 or so and work fine with most motherboards.
  • by Televiper2000 ( 1145415 ) on Thursday August 23, 2007 @10:22PM (#20339157)
    ...and at that rate you lose 1 sector. That's assuming the disk manager was written poorly enough to actually do such a strange and unprecedented thing.
  • Re:Flash/RAM Drives? (Score:4, Informative)

    by Angst Badger ( 8636 ) on Thursday August 23, 2007 @10:29PM (#20339217)
    For that matter, how come we never saw magnetic drives with builtin RAM caches in the GB scale, occasionally written (in parallel) back to the magnetic disc for reliability?

    Possibly because you weren't looking. For all I know, they still exist, but the vendor we got one from went out of business a few years ago. They sold full-length PCI cards packed with 8GB of SDRAM -- and they had larger models -- that presented a SCSI interface to the system and, with the appropriate driver, could mirror to a magnetic drive. The cost was stratospheric, and our storage needs soon outgrew the available space. We also found that not as much of our processing was I/O-bound as we thought. Other than that, it worked great. Given enough money and a motherboard with a sufficiently large number of PCI slots, it might be the ideal solution for certain niche applications, but the cost and size constraints otherwise make them a poor substitute for magnetic drives in most cases.

    That said, it was pretty cool to be able to reformat the "drive" in a few seconds.
  • Re:Warranty? (Score:3, Informative)

    by tknd ( 979052 ) on Thursday August 23, 2007 @11:15PM (#20339563)

    How do you know that the drive will evenly distribute writes per cell?

    You don't. But what we do know is that if you take a balanced 6-sided die and roll it a large number of times, the distribution of faces that come up will be close to uniform; each face has an equal chance of being selected. So if we randomly choose a sector for each write, the wear over large numbers of writes will be spread uniformly over all sectors.

    It's more likely that some cells may remain untouched, while other cells get written or changed much more frequently.

    That's why if you happen to hit a cell that already has data, you relocate the existing data and write to it anyway. Even though you are using more write cycles, as long as you don't max out the capacity the disk will wear evenly and you won't use up all of the write cycles anytime soon. Assuming a 40GB disk with a poor 10,000 write cycle limit, that would be 400,000GB of data to write before the disk completely fails. That means over one year (365 days) you'd have to write about 1095GB of data a day to kill a disk with an optimal wear-leveling algorithm. If the algorithm required an average of 2 writes for every 1 write of actual data because it moves data around, you'd still have to write more than 500GB a day to kill the disk within a year. The truth is most people don't fill the disk until a good year later, if at all. Even then, they would only write in the tens of GBs a day unless they completely outstripped their RAM capacity and were swapping constantly.

    So it's safe to say that the write cycles are nearly unlimited for practical purposes as long as we attempt some kind of uniform distribution across all the cells. Most tech only has a maximum life span of around 10 years, so the write budget of even poor flash cells is pretty much unlimited for its useful lifespan. In a laptop or portable device, I'm willing to bet your battery will give you problems before any other hardware: battery recharge cycles are usually around 500, yet nobody complains about batteries like they do about flash write cycles.
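
    A toy simulation of that "roll the die" argument in Python; the cell count, write volume and purely random placement are illustrative only, and much simpler than what a real controller does:

        import random

        # Spread writes over cells uniformly at random and see how far the
        # worst-worn cell drifts from the average.
        CELLS = 10000
        WRITES = 10000000

        wear = [0] * CELLS
        for _ in range(WRITES):
            wear[random.randrange(CELLS)] += 1

        avg = WRITES / CELLS                          # 1000 writes per cell on average
        print(max(wear), "worst vs", avg, "average")  # worst is typically ~10-15% above

    The bigger the drive and the more writes it sees, the smaller that relative gap gets, which is why "uniform enough" is good enough.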

  • by MtHuurne ( 602934 ) on Thursday August 23, 2007 @11:30PM (#20339671) Homepage
    According to this article [storagesearch.com], it would take decades before the write limits are reached on today's SSDs.
  • by Anonymous Coward on Thursday August 23, 2007 @11:31PM (#20339679)
    That's the great part. With hard drives you only find out that a sector has gone bad on a read. With flash you know immediately when a write fails. When this happens the sector is marked bad and the write is attempted somewhere else. Eventually enough bad sectors will cause the drive to become full, but you never lose any data. Just boot from another disk and make a backup. Reads do not harm the disk in any way, only writes do. Even though flash has a more limited number of write cycles, the fact that it fails more predictably makes it more reliable overall.
  • by RudeIota ( 1131331 ) on Friday August 24, 2007 @12:28AM (#20339987) Homepage
    Flash drives simply don't write the same first bits over and over again. Their firmware is programmed to 'intelligently' spread written data across the entire storage area as fairly as possible.

    Between this, massive storage capacity (think: 'dilution') and what will surely be engineering improvements, flash drives should prove to be very reliable.

    I, for one, welcome our solid state overlords.
  • by Anonymous Coward on Friday August 24, 2007 @01:00AM (#20340185)
    Filesystems created specifically for use on Flash storage already exist -- JFFS2 [wikipedia.org], among others, comes to mind. However, they were built for embedded devices and don't scale well. [wikipedia.org]
  • by sssssss27 ( 1117705 ) on Friday August 24, 2007 @01:58AM (#20340469)
    This is already handled by the drive itself.
  • by Kjella ( 173770 ) on Friday August 24, 2007 @02:53AM (#20340679) Homepage
    Am I safe in assuming SATA transfer rates are sufficient to handle a SSD?
    Will it move choke points elsewhere on the system?
    I'd like to know what other practical benefits such would have other than lower power consumption and durability.


    1. Yes, at least so far. The fastest I've seen is 90MB/sec sustained read on a 150MB/s SATA interface, and if that became a problem they could move to a SATA2 interface and get up to 300MB/sec (NB: since flash doesn't have a cache, there's no point in going to SATA2 unless the flash can actually handle it).
    2. As far as I can tell, not yet.
    3. Primarily responsiveness. I have many annoying applications that block for IO access. With faster random access, those apps should lag a lot less when you're using your disk for a lot of things at once (e.g. bittorrent, playing a movie etc.)
  • by Eivind ( 15695 ) <eivindorama@gmail.com> on Friday August 24, 2007 @02:56AM (#20340697) Homepage
    There are many techniques. If you really want details, get a book or hit wikipedia. But to give you a general idea:

    Each block has, in fact, a bit more storage than the amount exposed. There are error-correcting checksums and the like, allowing the drive to detect (and sometimes correct) errors; among these is, typically, a counter saying how many times the block has been written to.

    If the drive notices that one block has a lot more writes than the average block, it can swap the contents of that block and a less-worn one internally, and then make a note of this swap (just a simple mapping: 0x000FE37 is at 0x00A32B). The host OS never even notices this; it keeps asking for block 0x000FE37, and keeps getting the same content that was always there, only that content is now *physically* stored somewhere else.

    It's a lot as if your office is worn down and needs to be redone, and management puts you in a different office but lets you keep your old phone number. People calling for you won't even notice that *physically* you're now somewhere else; all they know is, they dial that number, they reach you.

    Every OS with virtual memory does the same thing to RAM (though for a different reason): the logical addresses that the programs see are related to the *physical* (actual) RAM locations by a lookup table.

    It's *really* not a hard problem to solve, and it's been thoroughly solved for literally decades. Thus you really *CAN* assume that the entire drive is (approximately) evenly used, which means those calculations aren't bullshit after all. Even if you just constantly rewrote a single block, what would happen is that after a while (say 1000 writes) that block would, internally in the flash, be replaced with another physical block; if you write another 1000 times, that happens again, and so on.

    Yes, there's a slight overhead: every time you do 1000 writes, the flash needs to do (approximately) 1003 writes and 3 reads. That is a small overhead though, and it can be reduced by upping the constant from 1000 to, say, 10000 (which would result in wear being slightly less evenly distributed, but never mind).
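
    For anyone who wants to see the shape of that swap scheme, here is a deliberately simplified sketch in Python; the class name, the fixed threshold and the linear scan for the coldest block are illustrative choices, not how any real controller firmware works:

        # Toy logical->physical remapper with per-block write counters.
        class ToyWearLeveler:
            def __init__(self, blocks, threshold=1000):
                self.map = list(range(blocks))   # logical block -> physical block
                self.writes = [0] * blocks       # wear counter per physical block
                self.data = [b""] * blocks       # contents per physical block
                self.threshold = threshold

            def read(self, logical):
                return self.data[self.map[logical]]

            def write(self, logical, payload):
                phys = self.map[logical]
                # If this physical block is far ahead of the least-worn one,
                # trade places with the coldest block and remap both.
                if self.writes[phys] - min(self.writes) > self.threshold:
                    cold = min(range(len(self.writes)), key=self.writes.__getitem__)
                    other = self.map.index(cold)     # logical block currently on 'cold'
                    self.data[phys], self.data[cold] = self.data[cold], self.data[phys]
                    self.map[logical], self.map[other] = cold, phys
                    phys = cold
                self.writes[phys] += 1
                self.data[phys] = payload

    Hammering one logical block (say, writing logical block 0 a few hundred thousand times) then walks across many different physical blocks, which is exactly the "same phone number, different office" behaviour described above.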
  • It can get trickier. The arrangement of chips varies by size and vendor, and the controller used and how the chips are connected affect this too. There is a very friendly computer shop that lets me try everything and return it if it doesn't work as advertised. I picked up two 2G sticks for about EUR 10,-- each that use a Psion controller and are arranged in 16k blocks. (A four GB stick, of the same Taiwan brand, at EUR 39,-- turned out to be very slow.) You have to test them.

    Except for quick tweaks, it's not wise to make excessive writes: the stick must (in this case) write 16k no matter how small the file (symlinks are hell). I simply allocate the same space on a HD and do 'dd if=/dev/sdN... of=/dev/da1s2a bs=16k', which will run 396M in about 22s on BSD and surprisingly much faster with Linux. I use the sticks as boot loaders with the kernel and userland, and I slice them because ext2fs has different requirements. Reading is not normally a problem with current sticks, but that block size parameter is quite important: I get 30MB/s on BSD (ufs2) and a surprising 90MB/s on Linux (ext2fs). This is _much_ faster than the advertised msdosfs/ntfs rates (12MB/s write, something I've never seen msdosfs/ntfs do). Bottom line is I can have up to a dozen or more systems on two sticks.

    The lower cost units tend to be better, perhaps only because they are smaller or a better match for my filesystems. It may be worth noting that I colour code the USB sockets to avoid mistakes. It is really easy to mess up, so always having a copy on a real HD is very comforting. Since the sticks are effectively ROM, written once per development cycle, they will never wear out electrically (the USB sockets will go much faster). I think we all know what happens if you use DOS. This is my experience, and these things are developing rapidly. They are as fast as ordinary SCSI drives (they are SCSI drives) and indeed somewhat more stable. Expect a hot product from Seagate. :)

         
  • by Inda ( 580031 ) <slash.20.inda@spamgourmet.com> on Friday August 24, 2007 @08:40AM (#20342429) Journal
    Thanks for the very informative link.

    To summarise:

    8 million writes before failure. Failure occurs during write or erase. Stored data does not get corrupted.

    64GB would take 20 years to fill if the same byte were overwritten one million times.

    I hope the rest of the Slashdot ill-informed take note.
  • Re:Here and now (Score:3, Informative)

    by LarryRiedel ( 141315 ) on Friday August 24, 2007 @12:57PM (#20345335)

    Are there any adapters for laptop-sized IDE drives?

    Addonics CF-IDE [addonics.com].

    Larry

  • Re:Here and now (Score:1, Informative)

    by Anonymous Coward on Friday August 24, 2007 @01:57PM (#20346169)

    Bottom line: It's not that you can't use CF for storage, it's just too expensive per GB to do so.

    Err.. unless you actually want solid state storage (for the various speed/reliability advantages that everyone's talking about), and you're in a situation where even a "mere" 16GB is more than you need.

    I don't think he was arguing that CF is competitive in terms of GB/$, just that it's affordable enough to be a reasonable option in situations where number-of-GB doesn't matter. I wouldn't put my media collection on CF (yet), but boot from it and run the "system" from it? Fuck yeah! Why not?

    4GB should be enough for anyone. ;-)
