Data Storage / IT

Top Solid State Disks and TB Drives Reviewed

Lucas123 writes "Computerworld has reviewed six of the latest hard disk drives, including 32GB and 64GB solid state disks, a low-energy-consumption 'green' drive and several terabyte-size drives. With the exception of capacity, the solid state disk drives appear to beat spinning disk in every category, from CPU utilization and energy consumption to read/write speeds. The Samsung SSD drive was the most impressive, with a read speed of 100MB/sec and a write speed of 80MB/sec, compared to average read/write speeds of 59MB/sec and 60MB/sec for a traditional hard drive."
  • by ASkGNet ( 695262 ) on Wednesday December 26, 2007 @11:16AM (#21821626) Homepage
    NAND flash deteriorates with use. When used in high-I/O situations like hard drives, just how long will it be able to work correctly? If I recall correctly, NAND blocks are guaranteed on the order of 100,000 writes.
    • by peragrin ( 659227 ) on Wednesday December 26, 2007 @11:19AM (#21821650)
      Yes, new on-disk topology mappings and new tech give you roughly a million read/write cycles, and the mappings help to distribute the load evenly.
      • by plague3106 ( 71849 ) on Wednesday December 26, 2007 @11:25AM (#21821692)
        This is always claimed as the solution, "evening out" writes. But I think the question of how long the drive will last is still relevant; all it takes is a mostly full disk with a high I/O load. Even with evening out, it seems that at least part of the disk can fail before the rest of it.

        Do traditional drives fail if the same sector is written to over and over again as well?
        • by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Wednesday December 26, 2007 @12:35PM (#21822320) Homepage Journal

          This is always claimed as the solution, "evening out" writes. But I think the question of how long the drive will last is still relevant; all it takes is a mostly full disk with a high I/O load.
          Easy: don't let the drive become mostly full. This means heavy-duty drives will be a 64 GB chip reformatted for 48 GB with the rest designated as spare sectors for wear leveling, but the power consumption and seeking speed benefits can still make it worthwhile.
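
          As a rough illustration of that trade-off (a sketch only; the 64GB/48GB split and the 100,000-cycle figure are the numbers from this thread, not any vendor's spec):

              # With wear leveling spreading writes across every physical
              # block, the total write budget scales with the *physical*
              # capacity; the reserved area just guarantees spare blocks
              # even when the visible volume is 100% full.
              PHYSICAL_GB = 64      # raw chip capacity
              VISIBLE_GB = 48       # capacity exposed after reserving spares
              CYCLES = 100_000      # guaranteed erase/write cycles per block

              write_budget_pb = PHYSICAL_GB * CYCLES / 1_000_000  # petabytes
              spare_fraction = 1 - VISIBLE_GB / PHYSICAL_GB

              print(f"write budget: ~{write_budget_pb:.1f} PB")  # ~6.4 PB
              print(f"spare area:   {spare_fraction:.0%}")       # 25%
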
          • Re: (Score:3, Funny)

            by plague3106 ( 71849 )
            Easy: don't let the drive become mostly full.

            Ya, because THAT is realistic in the real world...
        • by vertinox ( 846076 ) on Wednesday December 26, 2007 @12:59PM (#21822556)
          Do traditional drives fail if the same sector is written to over and over again as well?

          No, but they'll fail over time regardless of whether you're writing or just reading, simply because the drive is moving. Even if you cool your standard drive, it could eventually fail just because it was left on for 10 years (since an active drive is constantly spinning).

          Now, it's not guaranteed to fail, but the chance of failure for a standard HDD that you only read from and don't write to is far greater than for an SSD that you put files on once and never write to again.

          I think SSDs shine for archival uses: things you don't plan on trashing and rewriting that often, such as image collections, movies, and MP3s. Swap disks, scratch disks, and cache directories, on the other hand, would logically still perform better on your spinning-platter drives, and if that drive goes belly up you haven't lost much.
          • SSD doesn't shine at archiving.
            Archiving needs neither fast speeds nor good seek times.

            Normal hard drives have plenty of speed for archiving, would be spun down most of the time (no wear and tear), and they provide what SSD cannot: capacity.
            Having 64 gigs of data archived is great and all, but at home I have 900 gigs of archive data. Small problem, don't you think?
        • by s_p_oneil ( 795792 ) on Wednesday December 26, 2007 @01:06PM (#21822614) Homepage
          "all it takes is a mostly full disk, which has a high I/O load"

          It is a relevant question, but this wouldn't kill your hard drive; it would simply reduce the amount of free disk space. And it's not difficult to imagine a file system smart enough to move files around when this happens: when a sector gets written to too many times, it can simply find a really old file and move it onto that sector, freeing up some of the rarely used sectors of the drive. With the increased performance of SSD, you probably wouldn't even notice it.

          Aside from the rewrite issue, flash memory drives should be WAY more reliable than a mechanical HD. One should never just completely die or start getting bad sectors so fast that you don't have time to retrieve your data. It should also be a lot easier to replace when it starts to degrade, and it shouldn't be as susceptible to damage from being dropped from a height of 3-5 feet, or from heat, cold, vibration, dust, humidity, etc. I'm not sure whether a magnetic field could erase it like a hard drive, but if not, that's another plus for SSD. I imagine SSDs are more susceptible to static electricity, but so is almost everything else plugged into your motherboard, so I'm not sure that could be considered a minus.

          I'm sure if you ever tried an SSD on a laptop, you'd never want to go back to an old HD. The improved performance and battery life would make going back to an old laptop HD seem like going from broadband back to an old 56K modem.
    • by goofy183 ( 451746 ) on Wednesday December 26, 2007 @11:39AM (#21821800)
      Will this ever die? The write cycle counts in modern flash are in the millions now. Doing the math, you very easily get 20+ years before write-cycle wear is a concern: http://www.storagesearch.com/ssdmyths-endurance.html [storagesearch.com]

      How many heavily used spinning drives do you know that last even 10+ years?
      • by baywulf ( 214371 ) on Wednesday December 26, 2007 @11:49AM (#21821882)
        Actually, the endurance of NAND has been going down over the years as manufacturers switched to smaller cell geometries, larger capacities and MLC technology. Some parts are as low as 5,000-cycle endurance. MLC (multi-level cell) NAND also tends to be much slower than SLC (single-level cell) NAND. Most SLC NAND has around 50K or 100K endurance.
        • Re: (Score:2, Informative)

          by theoverlay ( 1208084 )
          With 3-bit and even quad-bit MLC NAND around the corner, we should see faster controllers that will make these drives more attractive and larger. There are even some hybrid controllers that allow multiple NAND types (MLC and SLC), and even NOR, in the same application. One of these is Samsung's Flex-OneNAND. A good site for more information is http://infiniteadmin.com/ [infiniteadmin.com]
      • by ComputerSlicer23 ( 516509 ) on Wednesday December 26, 2007 @12:05PM (#21822066)
        Have you ever actually done this? I work on embedded systems that use flash drives... Even with write leveling, we've had failures. It's lots of fun when your 512MB flash isn't 512MB anymore and suddenly loses ~41MB. As a workaround, we've had to start partitioning with extra space left lying around at the end of the disk. This isn't even a heavy-workload system.

        Some friends of mine at another company, who were using them in an I/O-laden system to replace laptop drives and make the machines lower-power and more reliable, can blow out a flash drive in about 4 weeks.

        Kirby

        • You've never had spinning platter hard drives fail on you?
          • by ComputerSlicer23 ( 516509 ) on Wednesday December 26, 2007 @12:42PM (#21822392)

            Yes I have. However, I've never had one magically get smaller on me in such a way that fsck decides you're done fixing the filesystem. With SSDs, YES, I've had exactly that happen to me.

            In my life, I've had a total of about 42KB on spinning media turn out to be completely unrecoverable (yes, I mean that number literally). I use RAID extensively; I was the DBA/SA/developer at a place that had ~10TB of disk online for 5 years. In all that time, 42KB is all I lost.

            That loss was in the off-line, tertiary backup of the production database (one of 5 copies that could be used as a starting point for recovery; we also had the redo logs for 5 days, and each DB was a snapshot from one of the previous 5 days). It was stored on bleeding-edge IDE drives in a RAID 5 array, which we used as a cheap staging area before pushing the data over FireWire/USB to a removable drive that an officer of the company took home as part of the disaster recovery system (it held only the most recent DB and redo logs). The guy didn't RMA the hot spare, and we had two drives fail in about 3 days while the hot spare was waiting for the RMA paperwork. In that one particular case, using ddrescue, I recovered all of the data off the RAID 5 array but 42KB (even though it was an ext3 filesystem on LVM on a RAID 5 array, which made the recovery even more complex).

            Every other bit and byte of data in my life from spinning media that I cared about, I've recovered (I've had a number of drives die with data I didn't care about, but I could have recovered from them if need be). Trust me, I know about reliability, backups, and how to manage media to ensure that failure doesn't happen. I know about the failure modes of drives. I've hot-swapped my fair share of drives and done the RMA paperwork. I've been in charge of drives where losing any one of the ~200 of them would have cost 10 times as much as I made in a year if I couldn't reproduce the data on it within hours.

            If it had been worth $10K, I'd have sent off the drive to get that 42KB of data recovered. But it wasn't. The failure modes of spinning media are well understood. People know exactly how to do things like erase drives securely. People know who to call that has a clean room and can remove the magnetic media and put it under a microscope to get the data recovered. SSD isn't nearly as mature in that sense.

            All of that is really to say: yes, I know something about disks and drives. My point is that SSDs aren't magic pixie dust in terms of reliability. I've had exactly what he's saying I shouldn't worry about happen to me on a regular basis. Enough that our engineering department has developed specific procedures to deal with them in the field, and we've changed our release procedures to account for them. If you're going to use an SSD or flash drive, go kick the crap out of it. Don't believe on faith anything you read on Slashdot (including this post, which is anecdotal). We order lots of 5,000 flash disks, and you can bet that at least 100 of them show serious flaws soon after being fielded. The ones the developers and testers use regularly develop problems in terms of months, not years. The manufacturer essentially tells us it's not worth it to find those, so deal with it.

            The whole point of replacing the laptop drive was to make the silly thing more reliable. But making it uber-reliable for 4 weeks until the write leveling crapped out wasn't the idea.

            Kirby

            • Re: (Score:2, Insightful)

              by zeet ( 70981 )
              So you're saying that you lost 42KB of data you did care about, and some other unnamed amount of data that you lost but didn't care about? That seems a bit disingenuous. Even if you could have recovered the other data, since you didn't try, it wasn't recovered.
              • Re: (Score:3, Interesting)

                So you're saying that you lost 42KB of data you did care about, and some other unnamed amount of data that you lost but didn't care about? That seems a bit disingenuous. Even if you could have recovered the other data, since you didn't try, it wasn't recovered.

                I believe what Kirby was saying, in addition to SSD's crapping out in weeks instead of years, is that he can get the data back from rotating media virtually every time if it's important enough to be worth spending the $$$s on. Unimportant stuff he d

              • by ComputerSlicer23 ( 516509 ) on Wednesday December 26, 2007 @03:02PM (#21823690)

                When I was young and stupid about drives and media, I lost a 1.2GB WD drive and everything on it. I couldn't spell "mkfs" or "fsck" and had no idea how to recover the drive at the time (I also didn't have the money for a second drive to recover to, and no credit card, so I couldn't hold onto the first drive while having the second during the RMA). I was just young and ignorant. I also lost a 1-2GB laptop drive that I literally just rode into the ground; I could have copied everything off and moved along. I knew the drive was going bad, but it was just a knock-around system that I didn't care about. In the end, had I been thinking, I'd have saved the e-mail on it. I lost the first ~5-6 years of e-mail I had, but who wants e-mail from when they were 18-24? That was probably a couple of hundred MB that I might regret, but of nothing more than sentimental value. I'd never read it, and would only be amused that I could prove I'm getting the same chain letters 15 years later.

                I believe I had 4-5 drives I lost due to a virus or pilot error, but not a mechanical/media problem.

                I've RMA'ed probably 100-200 drives due to some type of failure. I've had lots of drives fail in a RAID array where the mirror saved me. I've had lots of stand-alone drives fail with a section of bad sectors. From all of those I recovered every byte of data. Normally you can still recover from a drive that is going bad, for a very limited amount of time, and normally you have plenty of lead time, especially with SMART monitoring telling you that your drive is going south. As long as you pay attention, spinning media isn't that hard to keep in good shape.

                As a professional IT person, 42KB is it. On machines where production work is done for money at a company, 42KB is it, and in that case I was bound and determined to recover absolutely everything; I invested a week into that project. I gave up on the 42KB once I proved that it was in a backup of the database that was at that point 15 days old (and thus of no use). Had it been necessary or cost-effective, I'd have spent the $1-3K to get that drive image recovered by a professional data recovery shop. I think I've lost a drive or two on my personal machines at work, but the drive was fine; the laptop SATA controller was overheating. Using fsck, I recovered the entire FS once the controller was replaced. I think I had to re-rip some music from CD, because I failed to back it up prior to sending the laptop in for repair. I re-imaged the drive just to be safe in case the controller had corrupted something important on the OS drive, which was the only reason I actually lost the music.

                Again, the problem is that the flash drives we have simply decide they are smaller at the interface level. Running fsck just scragged the system pretty much start to finish. I don't have a clue where the missing blocks went. I have no idea what happened; upon reboot it decided that the block device was smaller. Filesystem recovery tools haven't had a chance to mature to understand those types of failures. Flash makers haven't yet decided that access to diagnostics and re-mapping logs might be of value to data recovery tools (at least none that I'm aware of), nor access to the raw data (in case they are holding blocks in reserve). All of these things are reasons to be concerned about write leveling.

                Kirby

            • by TooMuchToDo ( 882796 ) on Wednesday December 26, 2007 @12:56PM (#21822534)
              I can understand your reluctance to trust flash media. Indeed, it hasn't been proven like spinning media has. Let's take another example. An in-car radio. I want a 100GB hard drive in my car, solid state, that is for all intents and purposes write once. I should be able to dump 10s of GBs of MP3s onto it, and the index should be stored on a replaceable CF card (as the index would be changed often). But why would I remove music from the drive? I can just add more music.

              For the above example, a flash drive works very well. If you need the benefits of flash storage mediums (vs spinning media) you should be prepared to engineer around the situation. Run temporary data out of RAM with battery backup, and only commit the data to flash between reboots and power outages.

              • by ComputerSlicer23 ( 516509 ) on Wednesday December 26, 2007 @03:25PM (#21823898)

                I think that makes perfect sense, but then I'd think that all the money they've spent making the thing perform faster than, say, a 1MB/sec read rate is totally wasted. I assume folks are trying to push these as replacements for enterprise server machines, which I'd be extremely reluctant to do.

                Folks talk about these things in the theoretical (the original poster linked to a story that crunched numbers to show it should be safe). My question is: does anyone have solid experience they can point to showing that it has actually been safe for, say, 6-18 months under some well-known duty cycle (a database, a file server, an e-mail server)?

                I have actual experience, with crappy flash made by a low-end manufacturer, that shows me it's not terribly reliable. It is my understanding that we've had better luck with other makers, but their parts were too expensive (but software development is free *sigh*).

                There are other threads in here that make me want to cram a CF-IDE converter into my machines and try putting my journal onto a flash drive. It sounds like the performance boost and power consumption are a big win, but the fact that every byte of data gets pushed through the journal might be an issue. On a home machine, it might be worth playing with for giggles for performance testing.

                Other folks I know who have tried to do things with flash have also been disappointed over the past 12-24 months, despite assurances from various experts that "it should work"... I'm looking for "I've done it, here it is, go play with it." Now, obviously MP3 players have been doing it for a while; I'm more interested in general-purpose usage of a flash drive. Those are the types of things I'm currently working on: cramming flash into a machine that runs ext2/ext3/xfs/reiserfs/jfs or some other read/write-heavy-usage-ready FS on it.

                Kirby

      • by Lumpy ( 12016 )

        How many heavily used spinning drives do you know that last even 10+ years?


        I have at least 15 of them doing that right now. My last employer changed out the SCSI arrays in a couple of PowerVaults in 2005; I picked the drives out of the trash and have been using them in a PowerVault I got off eBay for $25.00. The drives have been spinning for over 10 years now.

        I have had only 1 drive fail out of the "untrustworthy" ones I got out of the trash.

        SCSI U160 drives are incredibly robust, not like the crap they have
        • Re: (Score:3, Interesting)

          by Amouth ( 879122 )
          I know what you mean. The desktop I am using right now has an IBM 36GB SCSI drive that is pushing 9 years as we speak. Wonderful drives - they truly just don't make them like they used to. On the other hand, just for the sake of it (and its age), I have a Seagate 9.1GB SCSI drive that takes up three 5.25" bays - it was one of the first 9GB drives on the market. Still running, on a dual Pentium Pro box running Slackware, it keeps right on chugging away and keeps spam out of my mailbox.

          on the other hand i have 4
      • Is it worth getting a small one and using it for swap? In other words: Is it faster than a normal HDD? And how long would it last (with this usage)?
        • by fm6 ( 162816 )
          I'd guess it'd be faster — but not as fast as increasing your RAM so you didn't swap as much.
        • Re: (Score:3, Insightful)

          by johannesg ( 664142 )
          Why not just buy enough RAM? It is cheaper than a solid-state disk, and if all you use it for is swap anyway, it really doesn't matter whether it's volatile or not...
    • NAND flash deteriorates with use. When used in high-I/O situations like hard drives, just how long will it be able to work correctly? If I recall correctly, NAND blocks are guaranteed on the order of 100,000 writes.

      Do a web search for "flash wear leveling."

      -a
      • Do a web search for "flash wear leveling."

        And you do a search to see how well that works when your SSD is mostly full, and the swap space is getting hit hard. Leveling doesn't tend to move static files often, meaning when the SSD is mostly full, only a small part of it is getting continually whacked. And when that goes out of service, you have an even smaller pool of free space to handle all the activity.

    • by Khyber ( 864651 ) <techkitsune@gmail.com> on Wednesday December 26, 2007 @12:05PM (#21822064) Homepage Journal
      And this is why we're moving away from NAND, so get that damned term out of your head already! OUM/OVM is coming; it uses a nearly identical manufacturing process to CMOS (it's the same phase-change material found in RW optical media, except you use electricity instead of a laser to change its state), and it has FAR more read/write cycles than anything NAND could ever hope to achieve, in the range of 10^8 as opposed to NAND's 10^5-10^6.
    • Re: (Score:3, Informative)

      by psydeshow ( 154300 )
      Just remember to mount these drives noatime [faqs.org] to avoid a write every time you read a file.

      For that matter, noatime is a sensible default for any desktop OS. When was the last time you actually searched for files you hadn't accessed in six months?
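
      For instance, the relevant /etc/fstab entry would look something like this (device, mount point, and filesystem here are placeholders; the noatime option is the point):

          /dev/sda1  /home  ext3  defaults,noatime  0  2
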
  • by jafiwam ( 310805 ) on Wednesday December 26, 2007 @11:17AM (#21821632) Homepage Journal
    I could do with a 64 GB primary drive on my gaming machine.

    Disk performance is the main roadblock to getting on the server first, which gives a huge advantage over slower-loading players.

    Yes, I am a LPB. Sue* me.

    * By "sue" I mean attempt to frag.
  • Reliability (Score:5, Insightful)

    by RonnyJ ( 651856 ) on Wednesday December 26, 2007 @11:18AM (#21821638)
    It's not mentioned in the summary, but added reliability might make these types of disks more appealing too:

    The no-moving-parts characteristic is, in part, what protects your data longer, since accidentally bumping your laptop won't scramble your stored files. Samsung says the drive can withstand an operating shock of 1,500Gs at 0.5 milliseconds (versus 300Gs at 2 milliseconds for a traditional hard drive). The drive is heartier in one other important way: mean time between failure is rated at over 2 million hours, versus under 500,000 hours for the company's other drives.

    • Samsung says the drive can withstand an operating shock of 1,500Gs at 0.5 milliseconds (versus 300Gs at 2 milliseconds for a traditional hard drive).

      I was just thinking the other day that 300G just wasn't cutting it anymore. I can't count how many times I've thrown my laptop out of the space shuttle and the drive was barely readable after it landed in a concrete parking lot.

  • Hmm (Score:5, Interesting)

    by orclevegam ( 940336 ) on Wednesday December 26, 2007 @11:18AM (#21821644) Journal
    I'm really interested in the SSD drives as high-performance replacements (particularly for holding OS images, where boot times should be nicely reduced), but I've got to wonder how the mean time to failure of one of these compares to a traditional magnetic disk. I know they use write leveling, but that just means everything will have a tendency to fail around the same time later, rather than a spot or two now and then. Anyone have any actual reports on these? I can usually make it 2 or 3 years before I start to see errors crop up on magnetic disks (sometimes more or less, depending on how much thrashing the disk is subjected to). Might it be cheaper to simply buy a decent-sized CF or SD card and an IDE/SATA adapter rather than paying for an actual disk, or is there some inherent advantage to one of these you'd be missing out on?
    • by Sibko ( 1036168 )
      If you had RTFA, you'd probably have noticed it said this:

      "Samsung says the drive can withstand an operating shock of 1,500Gs at .5 miliseconds (versus 300Gs at 2 miliseconds for a traditional hard drive). The drive is heartier in one other important way: Mean time between failure is rated at over 2 million hours"
    • It gets better (Score:4, Interesting)

      by WindBourne ( 631190 ) on Wednesday December 26, 2007 @11:44AM (#21821854) Journal
      My home server has a terabyte of disk, but I added a CF-IDE adaptor card along with a 4GB CF card. I loaded the Linux kernel on it, and then mapped a few dirs to partitions on the HD. After about 6 months of this, I noticed that the temperature in the case had dropped. It appears to be about 5-10C lower (depending on load). The disks spend the bulk of their time sleeping. I have been pleased enough with this server that I am going to do the same to my small shoebox computer: rip out the HD, add CF for /, and then mount my home dir from the server.
    • by DamonHD ( 794830 )
      I already boot my low-power Linux server off CF:

      http://www.earth.org.uk/low-power-laptop.html [earth.org.uk]

      Rgds

      Damon
    • I know they use write leveling, but that just means everything will have a tendency to fail around the same time later, rather than a spot or two now and then.

      Not necessarily. It really depends on the statistical distribution of the number-of-writes-until-failure across the various blocks (or whatever the unit of failure is) in an SSD. If they're normally distributed, then you'd probably see several blocks fail here or there long before the majority of them had failed.

      OTOH, if you or your operating system are
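
      A toy model of that point (a sketch; the normal distribution and its parameters are assumptions for illustration, not measured figures):

          # If per-block endurance is normally distributed, the weakest
          # blocks fail long before the average block wears out.
          import random

          random.seed(42)
          BLOCKS, MEAN, SIGMA = 10_000, 100_000, 10_000
          endurance = [random.gauss(MEAN, SIGMA) for _ in range(BLOCKS)]

          print(f"average block: ~{sum(endurance) / BLOCKS:,.0f} cycles")
          print(f"weakest block: ~{min(endurance):,.0f} cycles")  # far lower
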

  • Is it just me? (Score:4, Informative)

    by crymeph0 ( 682581 ) on Wednesday December 26, 2007 @11:21AM (#21821666)
    Or does the linked article say nothing about TB sized drives, only the flash drive?
    • No, it's not just you. The /. summary seems to bear little resemblance to the actual article. There's also no mention of the pricing or availability of the SSD, but from a quick check on frys.com, it looks like it's not available yet; what is available are the 32GB sizes, and those will set you back about $350.
    • With a little digging around in the "Hardware" section you can find three Terabyte HDD reviews, one for "The 1TB Barracuda" [computerworld.com], one for the WD RE2-GP [computerworld.com] and one for the Hitachi Deskstar 7K1000 [computerworld.com].

      Interestingly, the Seagate has so much space that "[t]he odds are excellent that Windows will never again tell you that you're running low on hard disk space with this 1TB drive, and that alone might be worth the price of admission", while the equally-sized Hitachi "doesn't boast efficiency, but its slightly lower platter
  • Number of writes? (Score:3, Interesting)

    by QuietLagoon ( 813062 ) on Wednesday December 26, 2007 @11:28AM (#21821714)
    With the exception of capacity, the solid state disk drives appear to beat spinning disk in every category,

    Why is the ultimate number of writes never taken into account in these comparison reviews? Why are solid state drives tested so that their weaknesses are not probed?

    • Re:Number of writes? (Score:5, Informative)

      by Planesdragon ( 210349 ) <slashdot&castlesteelstone,us> on Wednesday December 26, 2007 @11:41AM (#21821814) Homepage Journal

      Why is the ultimate number of writes never taken into account in these comparison reviews? Why are solid state drives tested so that their weaknesses are not probed?
      Because it's a measure best reflected by Bayesian Data, and they don't have enough time to test them.

      If you want, buy an HDD and a flash drive of the same cost, hook them up to a program that runs each at equal data-transfer rates, and see how much data you can read and write to each before they fail. Report back to us in the six months it'll take you.

      Oh, and you need to do the trial over a wide sample, so get, oh, at least ten of each.

      • Because it's a measure best reflected by Bayesian Data, and they don't have enough time to test them.

        What's Bayesian Data? [And yes, I am too lazy to Google it.]

        Did you mean Monte Carlo?

        Or maybe Latin Squares?

  • MTBF/Write Cycles (Score:5, Interesting)

    by Lookin4Trouble ( 1112649 ) on Wednesday December 26, 2007 @11:29AM (#21821724)
    Since I've seen this plenty of times, I'll address it.

    Write Cycles: Even at the lowest estimate, 100,000 write cycles to failure

    Meaning on a 32GB drive, before you start seeing failures, you would have to (thanks to wear-leveling) write 32 * 100,000 GB, or 3.2 petabytes.

    At the 60MB/sec write speed of the Samsung drives, you would need to write (and never, ever read) for 3,200,000,000 MB / 60, or ~53 million seconds straight.

    53 million divided by 86,400 means you would need to be writing (and never, ever reading) for ~617 days straight. (That's roughly 20 months of just writing - no reading, no downtime, etc.)

    So... the sky is not falling. These drives are slated to last longer than I've ever gotten a traditional drive to last in my laptop(s).

    Almost forgot to mention: standard NAND of late has been more in the 500K-1M write cycles between failures range. 100K was earlier technology, so multiply the numbers accordingly.
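
    A quick sanity check of that napkin math (a sketch in Python; same assumptions as above - perfect wear leveling, writes only, lowest endurance estimate):

        # Napkin math from the post above: a 32GB drive, 100,000 cycles
        # per block, perfect wear leveling, nonstop 60MB/sec writes.
        CAPACITY_MB = 32 * 1000        # 32GB drive
        CYCLES = 100_000               # lowest endurance estimate
        WRITE_MB_PER_SEC = 60

        total_mb = CAPACITY_MB * CYCLES        # 3,200,000,000 MB = 3.2PB
        seconds = total_mb / WRITE_MB_PER_SEC  # ~53 million seconds
        print(f"~{seconds / 86_400:.0f} days of continuous writing")  # ~617
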

    • by jafiwam ( 310805 )
      What happens when you run the same napkin math on a drive that has Windows, Office, and two big games on it?

      That leaves you about 10 GB of space to use for writes for swap, temp files, etc.
      • Re:MTBF/Write Cycles (Score:4, Interesting)

        by goofy183 ( 451746 ) on Wednesday December 26, 2007 @11:43AM (#21821830)
        Except most wear-leveling MOVES data around on the drive. Since random access is 'free', shuffling mainly read-only data around on the disk periodically is perfectly reasonable.
        • by renoX ( 11677 )
          Please mod parent up! I'm sick of all these posts (modded up!!) that think writing to a mostly full disk removes the effectiveness of wear-leveling; there is no reason why this should be the case.
          • Re: (Score:3, Insightful)

            by orclevegam ( 940336 )
            Well, it may not entirely negate the effectiveness of wear leveling, but it definitely makes the calculations a bit more complicated. Let's look at the theoretical example of a 32GB disk with 31GB used and a 512MB write about to happen. It decides that the free space already has too many writes, and it needs to write the data to a used section of the disk instead, so it finds a 512MB chunk of data that has the lowest write count and copies that to the free space (with a high write count, further increasing its
          • by Skapare ( 16644 )

            If the block on disk has ever been written, the flash device has to keep it. It has no idea that no file inodes point to it anymore. When a write is done, it picks a block from the pool, writes it there, and juggles its own mapping. But I am curious about a flash device that will, on its own, just juggle things around. That could avoid the data stagnation problem, where any data that never gets rewritten is just keeping the zones of writing all that much smaller. But it can also increase the number

      • As long as the driver is smart enough to disable a paging file, not that much writing is done to the hard drive on a Windows box (at least by the OS). When you do updates of course, writing is done. And when you save files, writing is done. But if you're just surfing the web and have 2-4GB of ram, disable caching, and the browser shouldn't write to the disk. If you're running Office, or games, save your work or savefile over webdav to a remote provider or use Amazon S3 to save those small amounts of data.
    • Re: (Score:2, Insightful)

      by everphilski ( 877346 )
      Meaning on a 32GB drive, before you start seeing failures, you would have to (thanks to wear-leveling) write 32 * 100,000 GB, or 3.2 petabytes

      NOT true, unless the drive is completely empty! If you have 31 gigs of data on that drive which you were using as long-term storage, then you'd only have to write (32-31)*100,000 GB of data before failure. You obviously wouldn't be overwriting any data already stored on the drive ...
      • by baywulf ( 214371 )
        Not true if the drive uses static wear leveling algorithms. These algorithms periodically swap data between low-use and high-use NAND regions.
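
        A minimal sketch of the idea (illustrative logic only, not any vendor's firmware): periodically move the coldest data onto the most-worn block, so the lightly-worn block it was occupying rejoins the pool that absorbs new writes.

            # Static wear leveling, crudely: swap cold data onto worn cells.
            def level_wear(blocks, threshold=1_000):
                """blocks: list of {'erase_count': int, 'data': bytes}."""
                hot = max(blocks, key=lambda b: b["erase_count"])
                cold = min(blocks, key=lambda b: b["erase_count"])
                if hot["erase_count"] - cold["erase_count"] > threshold:
                    # Static data now sits on the worn block; the barely-
                    # used block becomes available for write-heavy traffic.
                    hot["data"], cold["data"] = cold["data"], hot["data"]
                    hot["erase_count"] += 1   # the swap itself costs one
                    cold["erase_count"] += 1  # erase/program on each block
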
      • by vadim_t ( 324782 )
        It's still not the same failure mode though.

        On a magnetic hard disk, once you get a failure you can expect the thing to die completely soon, because failures tend to be mechanical. Once there's scraped magnetic material bouncing around on the inside it's only going to get worse, possibly very fast.

        On an SSD, what should happen is that sectors die in a predictable fashion, and they die due to writes, so you can still read and recover your data.
      • Re:MTBF/Write Cycles (Score:5, Informative)

        by NMerriam ( 15122 ) <NMerriam@artboy.org> on Wednesday December 26, 2007 @11:58AM (#21821988) Homepage

        You obviously wouldn't be overwriting any data already stored on the drive


        No, but the wear-leveling routines in the drive will happily move around your existing data so that rarely written sectors are available for heavy writing operations.

        Seriously, this "issue" comes up in every discussion about SSDs, and it seems like people are just unwilling or unable to accept that what was once a huge problem with the technology is now not even remotely an issue. Any SSD you buy today should outlive a spinning disk, regardless of the operating conditions or use pattern. It is no longer 1989, engineers have solved these problems.
        • Re: (Score:3, Interesting)

          by jafiwam ( 310805 )

          No, but the wear-leveling routines in the drive will happily move around your existing data so that rarely written sectors are available for heavy writing operations.

          Seriously, this "issue" comes up in every discussion about SSDs, and it seems like people are just unwilling or unable to accept that what was once a huge problem with the technology is now not even remotely an issue. Any SSD you buy today should outlive a spinning disk, regardless of the operating conditions or use pattern. It is no longer 1989, engineers have solved these problems.

          Actually, I think the issue is there are differences in the drives that don't come up in the articles themselves, so that detail gets left out every time.

          So, it's inevitable that someone who doesn't know this particular detail, but is already familiar with how platter based magnetic media work will come up with that issue in pretty much every discussion.

          The problem is it's new. That's all. (Or, perhaps that techno-journalists write about stuff they don't know enough about.)

    • There's a serious flaw in your analysis - you're assuming a totally empty drive. You're going to be wearing the drive more and more as it gets full, and the combination of an almost-full drive and a busy swap partition might get interesting very quickly.

      I agree that on the whole, flash is a lot more durable now than it used to be, but I'm not quite convinced that these will be suitable as a general-purpose replacement for magnetic disks. Aside from the NAND longevity issue, I'd be concerned about the a
      • by Zerth ( 26112 )
        Smart wear leveling lets the drive swap files sitting on sectors with many writes left (i.e., read-only or rarely changed files) with those on sectors with few writes left (swap, savegames, etc.).

        So performance isn't that far from a nearly empty drive.

        Although I do agree, I'd be concerned about recovering from controller failure more than with a magnetic drive.
      • Why would anyone use flash for virtual memory? You can get 4GB of DDR2 SDRAM for seventy bucks [frys.com], or two gigs for less than half that [newegg.com]. Notebook SO-DIMM prices are about the same [frys.com].

        With DDR2 prices so cheap, I don't see why anyone (with a modern enough system to use DDR2) is swapping data to disk regularly. Certainly not anyone who can afford a SSD.
    • Even at the lowest estimate, 100,000 write cycles to failure

      Hey, I never get this question answered: the bad block map has to be stored somewhere, so is it also limited to 100,000 writes? You can't remap the map, can you? If not, then, are you limited to 100,000 total errors?

  • by B5_geek ( 638928 ) on Wednesday December 26, 2007 @11:33AM (#21821764)
    How do these SSDs compare to a real high-end disk like a 15K RPM Ultra320 SCSI drive?
    Of course an SSD will beat an IDE disk hands down, but that is not why you buy IDE drives.
    I have always used SCSI for my OS/system and IDE for my storage; this combination (in addition to SMP rigs when available) has allowed me to outlive 3 generations of processors, saving me money on upgrades.

    SSD seems best marketed to 'gamers', so why is it always connected to a very limited I/O bus?

    • by jd ( 1658 )
      I've seen SSDs used to cache access to a traditional hard drive, and have even seen traditional hard drives used to cache access to optical mass storage. So long as your disk usage is "typical" (lots of access to a limited range of programs and libraries, infrequent access to the full range), it makes sense to layer the access in this way. You don't then have to care about limited space on the SSDs, you don't have to worry about MTBF because it's very unlikely all layers will fail at the same time (cacheing
    • by KonoWatakushi ( 910213 ) on Wednesday December 26, 2007 @12:14PM (#21822152)
      No need to compare with 15K RPM drives; flash disks lose spectacularly to low-RPM laptop drives in random write performance. For obvious reasons, though, no one ever tests random write performance. Manufacturers also rarely report random write IOPS.

      Flash is great, if your disk is basically read-only.
      • Indeed - good for booting your OS from, then, as another poster has pointed out.
      • by DDumitru ( 692803 ) <.doug. .at. .easyco.com.> on Wednesday December 26, 2007 @02:46PM (#21823522) Homepage
        Random write performance on bare drives is usually quite bad. Most "reputable" vendors do publish random write figures: SanDisk quotes 13 IOPS in its spec sheets; Mtron quotes 120 IOPS. I have not seen quotes from Samsung, but I have tested their old drives at 27 IOPS. I even tested one drive at 3.3 write IOPS.

        On the other hand, random write issues are "fixable". My company just published tests for various RAID-5 flash SSD setups. For 4 drives tested with 10 threads on Linux 2.6.22 using our MFT "driver", we get:

        4K random reads:  39,689 IOPS (155 MB/sec)
        4K random writes: 29,618 IOPS (115 MB/sec)

        These are real numbers and the application does see the performance improvement.

        For full details on drive performance see:

            http://managedflash.com/news/papers/index.htm [managedflash.com]
  • The new solid state drives did beat the older drives in terms of performance, but I can honestly say that I was hoping for a bigger difference between the two. Not just "beating" the older technology, but beating it by an order of magnitude.

    Looking at it, the biggest benefit I can see is that the solid state drives should be better at withstanding shock and vibration - which normal hard drives hate. If they cannot improve the performance (which will still be useful for gamers, serv
    • Re: (Score:2, Informative)

      That's because those are not really performance SSDs. Random access time is much improved, but the transfer rate is way below a good HD. Mtron has some high-performance drives that pulverize everything else, but they cost an arm, a leg and probably one of your kidneys. The only real benefits of these Samsung SSDs are much lower power consumption and no heat or noise. On a laptop this is still very good news.
    • by baywulf ( 214371 )
      It would not be too hard to increase the sequential performance by striping data across more NAND chips. Random read performance is also not too hard. The hard part is always random write performance: if you want to modify a sector of data, all the remaining data in its block must be moved to a new place. The copying of old data takes lots of time; tricks can be used to optimize some of it away, but for true random writes the performance will never be that good with the current NAND limitations.
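
      The striping idea is the same trick RAID 0 plays with spindles; a hypothetical mapping (the chip count and block layout are invented for illustration) might look like:

          # Consecutive logical blocks land on different chips, so
          # sequential transfers keep every chip busy in parallel.
          N_CHIPS = 4

          def locate(logical_block):
              """Map a logical block to (chip, block-within-chip)."""
              return logical_block % N_CHIPS, logical_block // N_CHIPS

          print([locate(b) for b in range(8)])
          # -> blocks 0..7 hit chips 0,1,2,3,0,1,2,3: a 4x sequential
          #    speedup in the ideal case
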
    • by Skapare ( 16644 )

      They should be able to run several flash chips in parallel to increase the speed. Or maybe the old drives already did this?

    • I suppose with flash drives there is more potential for spreading things out in parallel and maybe they are not taking advantage of that yet. For example, when copying a 1 gig file, a flash drive could break the file up and simultaneously read from/write to ten different flash chips.
  • So... yeah. I want one, I'm sure there's more than a few other slashdotters out there who also want one. But none of Samsung's links helped me find a store that sells the 2.5 inch 64GB drives. Does anyone know where these are being sold?
    • Re: (Score:3, Informative)

      by baywulf ( 214371 )
      Try newegg.com
    • Re: (Score:3, Informative)

      Near me, this place [scan.co.uk] has a handful of different ones.
  • by alegrepublic ( 83799 ) on Wednesday December 26, 2007 @11:57AM (#21821964)
    I am still waiting for a reasonably priced low-end drive. An 8GB USB drive [buy.com] can be found for about $50. Packing 4 of them together and replacing the USB circuitry with SATA would make for a 32GB drive for $200. Granted, it may not be the fastest drive around, but sometimes speed is not the most important factor. A 32GB drive would be enough for installing any current OS and still have some room for personal files to carry along on a trip. So, I think the current trend of providing high-end drives only is just an attempt to milk users to the maximum without much concern for what we actually need.
  • by hack slash ( 1064002 ) on Wednesday December 26, 2007 @12:01PM (#21822020)
  • by Futurepower(R) ( 558542 ) on Wednesday December 26, 2007 @12:03PM (#21822044) Homepage
    Quote from the Computerworld article and the Slashdot summary:

    "Samsung rates the drive with a read speed of 100MB/sec and write speed of 80 MB/sec, compared to 59MB/sec and 60MB/sec (respectively) for a traditional 2.5" hard drive."

    The speed quoted for a mechanical hard drive is a burst speed, accurate for reading only one track, and doesn't include the time it takes for a conventional rotating hard drive to change tracks. Isn't that correct?
    • by 0123456 ( 636235 )
      "The speed quoted for a mechanical hard drive is a burst speed, accurate for reading only one track, and doesn't include the time it takes for a conventional rotating hard drive to change tracks. Isn't that correct?"

      Depends. My IDE drives seem to sustain 60-ish MB/second on a large contiguous file even across multiple tracks... but suck if the file is heavily fragmented.
  • With the exception of capacity, the solid state disk drives appear to beat spinning disk in every category,

    Well excuse me, BUT, capacity is the single largest factor in my disc drive purchase decisions. I'll give away speed, power consumption, size, heat, noise, and even cost - everything but reliability - in favor of capacity. Even "slow" hard drives are quite fast historically speaking, and none of those other factors make up for running out of drive space.

    And don't the SSD's cost a lot more too?

    • It's not all one way or the other. At some price point, there are consumers who will buy the fast thing, just to have SOME of the fast thing. One is not required to have only one thing inside one's computer. For example, I could make productive use of a 32 or 64GB flash system if it were fast enough and cheap enough RIGHT NOW, regardless of the fact that I have a 500GB SATA drive in my system in order to store movies and images.

      There are also valuable business applications for the same technology. If 64GB f
      • There are also valuable business applications for the same technology. If 64GB flash were "relatively affordable" and noticeably faster and more effective than a good RAID array, then these flash drives could be an important component of an enterprise storage system.

        We're going to see how effective this is over the coming months: NAS and SAN products are clearly going to start sprouting SSDs either instead of the primary cache or as a mid tier between the RAM and the disk. I'm not expecting miracles: RA

        • All good thoughts. Agree and all that. This being Slashdot, instead of talking at length about agreement we'll have to up the ante and disagree about something (hah). Let's talk about that "you still need to mirror the SSD until you're insane" comment. I'm finding in-box redundancy to be less and less useful. Check out http://www.isilon.com/ [isilon.com]. These guys, and emerging vendors like them, are beginning to move away from in-array replication of data to cross-array/cross-box replication. Packets are striped across
          • by igb ( 28052 )
            It doesn't matter if the replication is within a chassis, within a closely coupled cluster or between boxes linked only by GigE: the point is that only a fool would take a write and place it into a single device, be that device RAM, SSD or rotating media. I currently use the classic ``two sets of electronics fronting one RAID array'' assemblage (Pillar) with replication taking place within a few hours to another box (a pile of second-user DotHill disk, a Sun, the magic of ZFS) twenty miles away. Except t
    • I have never used more than 20 GB on any system I have ever owned. I can go years and years on the same system without using significantly more space. But I don't pirate music and movies, and it's pretty hard to fill up a hard drive without doing those things (I guess if you install lots of software that could do it too, but the "working set" of software for most people is still probably pretty small and you can always uninstall stuff you are not using to make room for new software). I would be exceeding
    • Re: (Score:3, Insightful)

      by Bacon Bits ( 926911 )
      So why not move to mag tape?
  • by Spyder ( 15137 )
    The wear issues originally faced by this tech seem to have been addressed (2 million hours MTBF and at least 5x the sector rewrites), and the drives have become big enough to do most stuff, especially with the current availability of external storage. With a nameplate like Samsung behind it, I'm willing to give it a spin... or, um, try :)

    Now has anyone found any place to GET ONE? I've been looking and I can't even find a model/part number. WTF? Why can't I be the first one on my block to have a 0 spindle laptop? It
  • We're only at 1TB? (Score:3, Interesting)

    by sootman ( 158191 ) on Thursday December 27, 2007 @12:13AM (#21827274) Homepage Journal
    Why no love for the 5.25" form factor? That extra inch and three-quarters gives you a lot of extra real estate to play with: ((5.25/2)^2) / ((3.5/2)^2) = 2.25, if I'm doing this correctly, so even minus room for the spindle, etc., you're still talking about 50-100% more area. Why is no one making a modern version of the Quantum Bigfoot* that came in my sister's 400 MHz Compaq Presario 5150? I would love to see a modern 5.25" HD with...
    - 3600 or 4200 RPM rotational speed
    - low noise
    - low heat
    - low power consumption
    The reduced speed (less wear and tear on parts) and heat should also lead to greater reliability. If a 3.5" drive can be 1 TB today, a 5.25" drive should be 1.5-2TB. A drive like this would be perfect for a home media server or HTPC, where performance is not critical (SD DVD is only 4 GB/hour; even BluRay is only 25 GB/hour--and I'm pretty happy with ripped DVDs at ~1500 kbps, less than 1 GB per hour) but low heat, low noise, and low power consumption are all desirable traits. (There's more rotating mass, but at the lower speed there should be much less energy/momentum/inertia/whatever overall.) And as long as CDs and DVDs are still ~5"--and that seems to be the case (DVD, HD-DVD, BluRay, SACD)--we'll already be using properly-sized cases.

    * granted, that old thing was slow as hell. Swapping out the stock 8 GB Quantum Bigfoot for a 30 GB Maxtor dropped boot times from 3 minutes to 40 seconds.

"...a most excellent barbarian ... Genghis Kahn!" -- _Bill And Ted's Excellent Adventure_

Working...