Data Storage Hardware

AnandTech Gives the Skinny On Recent SSD Offerings 96

Posted by timothy
from the they're-small-and-quiet-the-end dept.
omnilynx writes "With capacity on the rise and prices falling, solid state drives are finally starting to compete with traditional hard drives. However, there are still several issues to take into account when moving to an SSD, not to mention choosing between a widening array of offerings. Anand Lal Shimpi of AnandTech does a better job than anyone could expect detailing those issues (especially those related to performance) and reviewing the new offerings in the SSD arena. Intel's X25 series comes out on top for sheer speed, but OCZ makes a surprise turnaround with its Vertex drive giving perhaps the best value."
  • by vertinox (846076) on Thursday March 19, 2009 @03:43PM (#27260469)

    I saw this article earlier today off a comment from Engadget and read the whole thing (no printer friendly version).

    Out of curiosity, I searched Amazon.com [amazon.com] for current offers on the Intel X25-M, and for both offerings (80GB and 160GB) the reviews say this thing is the greatest thing since sliced bread.

    The only complaint is the price, but people claim it's worth it.

    I did come across a detractor that shows you can't use XP/Vista on bootcamp [apple.com] with the drive because of partition issues with OS X.

    Supposedly Windows 7 will have true-blue SSD support, so I'll wager that by the time it comes out, SSDs will be standard in all machines.

  • WOW (Score:5, Informative)

    by andrewd18 (989408) on Thursday March 19, 2009 @03:47PM (#27260545)
    This may be the most informative and practical article I have read in a long, long time. It's definitely going to influence my SSD hardware purchases for the foreseeable future.
  • by vertinox (846076) on Thursday March 19, 2009 @03:48PM (#27260547)

    I didn't know that. And it sucks.

    No. Not use, but writes: it gets slower the more you write to it. You can read all the time and it doesn't affect speeds.

    The article goes into detail about why writing causes problems. The author concludes that even at its slowest possible speed the Intel model still beats HDDs (he said he ran a simulation where he wrote to all the blocks at least once).

    The other drives he tested weren't as great.

    Apparently it depends on the controller, which affects the speed. Intel put a good one in and the other brands not so good.

    He said it's still noticeable sometimes, though.

  • by phantomfive (622387) on Thursday March 19, 2009 @04:15PM (#27260923) Journal
    But it isn't a physical problem, i.e., the drive itself isn't slowed down; it's a matter of the way things have been allocated. So if you reformat the drive, or if you use a filesystem built specifically for flash, this isn't as much of a problem. You can do this of course if you are using Linux, but if you are using Windows, sorry, too bad. I expect you could set up a special flash filesystem for OS X, but I doubt it is officially supported.
  • Not really, no. (Score:5, Informative)

    by XanC (644172) on Thursday March 19, 2009 @04:47PM (#27261369)

    Reformatting isn't sufficient to get back to new performance, you have to issue an ATA SECURE ERASE command.

    And you can't run a filesystem built specifically for flash on these drives, with Linux or otherwise, because they don't present a flash interface. They present an SATA interface.

    In any case, the take-home message is probably to consider the drive's "used" performance as its real performance. If the drive is not a crummy one (watch out for those), it's still _much_ faster than an HDD, and very worthwhile depending on your application.

  • by vux984 (928602) on Thursday March 19, 2009 @04:56PM (#27261487)

    No. Not use, but writes: it gets slower the more you write to it. You can read all the time and it doesn't affect speeds.

    Not quite. Once a drive runs out of completely free blocks, it 'hits a wall', and from that point on it is significantly slower to write to.

    But it doesn't continue getting slower and slower and slower and slower over time. Just that, at some point, it suddenly becomes x% slower to write to and stays that way.

    The author concludes that even at its slowest possible speed the Intel model still beats HDDs (he said he ran a simulation where he wrote to all the blocks at least once).

    The Intel model is the fastest by far. The Samsung drives are also good. And the OCZ Vertex was also good (not as good as the Intel one, but still 48% faster than the WD VelociRaptors, which is seriously still excellent).

    The important point, however, is that 'still beats HDD' doesn't mean "a little bit faster". These units continue to royally spank an HDD's ass.

    However, the other models, by comparison, are basically unusable.

    Apparently it depends on the controller, which affects the speed. Intel put a good one in and the other brands not so good.

    It's FAR more complicated than that. And the article is 30+ pages long for a reason (30+ real pages, not bullshit 'half-paragraph per page' pages).

    He said it's still noticeable sometimes, though.

    In the sense that, yes, once your drive 'hits the wall' the slowdown can be noticeable relative to when it was new... but it's still two to five times as fast as the fastest alternatives.

    There is also stuff the OS can do to mitigate the problem, once we have SSD aware OSes.

    Essentially, the reason it slows down is that once your drive has used all the blocks, it has to erase a block before it can use it again. This can require it to read multiple pages in, erase the block, and write it back out again, which can take up to half a second.

    The better controllers (which include extra blocks that aren't reported to the OS) and adding OS awareness of the issue can essentially let the drive stay ahead of the random write requests and erase blocks before they are needed, ensuring there is always a pool of completely erased, ready-to-go blocks available, and therefore keep the drive much closer to its 'like new' speed indefinitely.
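    The erase-before-write cycle described above can be sketched with a toy model (hypothetical page/block sizes; real drives use far larger blocks and much smarter firmware than this):

```python
# Toy model of NAND erase-before-write (illustrative numbers only).
PAGES_PER_BLOCK = 4  # real drives: dozens of pages per block

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = erased/empty

def rewrite_page(block, page_index, data):
    """Overwrite one page in a block with no erased pages left.

    NAND can only be cleared a whole block at a time, so the controller
    must read every valid page, erase the block, and write them all back.
    Returns (pages_salvaged, pages_written) to show the amplification.
    """
    saved = list(block.pages)               # read-modify: read all pages
    saved[page_index] = data
    block.pages = [None] * PAGES_PER_BLOCK  # erase the whole block
    for i, p in enumerate(saved):           # write everything back
        block.pages[i] = p
    salvaged = sum(p is not None for p in saved) - 1
    return salvaged, sum(p is not None for p in saved)

blk = Block()
blk.pages = ["a", "b", "c", "d"]   # block is full: no erased pages left
reads, writes = rewrite_page(blk, 1, "B")
print(reads, writes)  # 3 pages salvaged, 4 written, for 1 logical write
```

A drive with spare, pre-erased blocks can skip this whole dance, which is the point the parent makes about the better controllers.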

  • Re:Amazing Article (Score:5, Informative)

    by paitre (32242) on Thursday March 19, 2009 @05:12PM (#27261695) Journal

    "some geek"?

    Anand has been around, reviewing hardware, for close to 10 years now. He is, rightfully, considered an expert in hardware usage, performance tuning and overall systems construction.

    There are others out there with similar cachet.

    He is far, FAR from just being "some geek".

  • by 644bd346996 (1012333) on Thursday March 19, 2009 @05:12PM (#27261705)

    It isn't. The whole point of TRIM is to erase a block before you're waiting to write something to it, i.e. before your disk is full and you need to reuse the space. The garbage collection on a TI calculator is really defragmentation, to eliminate gaps between files. This is necessary because the calculators have the flash attached to the address bus, rather than behind a hard drive controller, and there's no MMU to give programmers a linear address space if their flash apps were discontiguous in memory. Thus, in order to load an app into the calculator flash, there must be a contiguous region of available flash.
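    The calculator-style compaction described above amounts to sliding files together so all the free space becomes one contiguous run. A minimal sketch (the `compact` helper is invented for this example; flash is modeled as a list of cells):

```python
# Toy model of TI-style flash "defragmentation": apps must occupy a
# contiguous run of flash, so freed gaps must be compacted away.
def compact(flash):
    """Slide apps together so all free space is one contiguous tail."""
    apps = [cell for cell in flash if cell is not None]
    return apps + [None] * (len(flash) - len(apps))

flash = ["A", "A", None, None, "B", "B", "B", None]
# Largest contiguous free run is only 2 cells; a 3-cell app won't fit.
flash = compact(flash)
print(flash)  # ['A', 'A', 'B', 'B', 'B', None, None, None]
```

After compaction the three free cells are contiguous, so a 3-cell app now fits; an SSD's flash translation layer makes this unnecessary by remapping logical addresses instead.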

  • Re:Amazing Article (Score:5, Informative)

    by klui (457783) on Thursday March 19, 2009 @05:31PM (#27261949)

    If you take five seconds to search for his credentials you'll find he graduated from N. Carolina State with a CE degree.

    His site has been in existence for quite some time and I find his articles are among the better ones on the net, but you may want to read others and compare. The reason I like his articles over others is their depth. He describes the underlying architecture and offers thoughts on why he thinks a company chose a path, with followups that either reinforce or refute his theories.

  • by lagfest (959022) on Thursday March 19, 2009 @05:38PM (#27262031)

    Mod parent up. I'm not arguing with him, but merely emphasizing a key point.

    ... and adding OS awareness of the issue can essentially let the drive stay ahead of the random write requests and erase blocks before they are needed, ensuring there is always a pool of completely erased, ready-to-go blocks available, and therefore keep the drive much closer to its 'like new' speed indefinitely.

    Actually, this is the part about the new SATA TRIM command. And ironically it's a part where Anand swings and misses completely, or dumbs it down to a level where it is completely misleading.

    It's not so much about making the OS SSD-aware in the sense that the OS now knows about the inner workings of the SSD, but about making the SSD aware of what space is actually used for data and what has been discarded. Knowing which blocks have been discarded means the drive can consolidate them, by moving valid data to other pages and then erasing a block full of discarded pages, so it is ready for writing new data.

    So not only do you get write performance that doesn't degrade with time, but you can also store slightly more, because you don't have to reserve as much space.
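    The difference TRIM makes can be sketched with a toy flash translation layer (purely illustrative; `ToyFTL` and its methods are invented for this example, not any vendor's actual firmware):

```python
# Toy flash translation layer showing why TRIM matters (illustrative only).
class ToyFTL:
    def __init__(self, n_blocks):
        self.erased = set(range(n_blocks))  # blocks pre-erased and ready
        self.live = {}                      # logical addr -> physical block

    def write(self, addr):
        """A fast write needs a pre-erased block; without one the drive
        must do the slow erase-before-write cycle (the 'wall')."""
        if not self.erased:
            return "stall: erase-before-write"
        self.live[addr] = self.erased.pop()
        return "fast write"

    def delete_without_trim(self, addr):
        # The filesystem only updates its own metadata; the drive still
        # believes the old block holds valid data.
        del self.live[addr]

    def delete_with_trim(self, addr):
        # TRIM tells the drive the block's contents are garbage, so it
        # can be erased ahead of time and returned to the ready pool.
        self.erased.add(self.live.pop(addr))

ftl = ToyFTL(n_blocks=2)
ftl.write(0); ftl.write(1)      # pre-erased pool exhausted
ftl.delete_without_trim(0)      # drive never hears about the delete
print(ftl.write(2))             # stall: erase-before-write

ftl2 = ToyFTL(n_blocks=2)
ftl2.write(0); ftl2.write(1)
ftl2.delete_with_trim(0)        # drive reclaims the block early
print(ftl2.write(2))            # fast write
```

With TRIM the discarded block is back in the ready pool before the next write arrives, which is exactly the "stay ahead of the random write requests" behavior described above.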

  • by pyite (140350) on Thursday March 19, 2009 @07:14PM (#27263059)

    it is important to know whether the TRIM command works through a RAID controller and actually reaches the SSD

    Not really. Stop using hardware RAID. It's dangerous, expensive, and not necessary.

    The best thing you can do is use ZFS. It even optimizes for SSDs.

  • by dfn_deux (535506) <datsun510.gmail@com> on Thursday March 19, 2009 @07:18PM (#27263093) Homepage
    yup! Sun's openflash initiative is exactly this.
  • by TheLink (130905) on Friday March 20, 2009 @02:49PM (#27271937) Journal
    There's another thing you might want to do, to workaround the problem.

    In Windows NT/2000/XP, Linux, FreeBSD and a few other operating systems, the O/S by default writes to the drive on every file/directory access to update the "Last Accessed Time".

    This means the O/S will write stuff every time it opens a directory or file, even if it's just for reading!

    This is bad for drive performance whether "conventional HDD" or SSD. And extremely bad for the crappier SSDs that don't do writes well.

    You can turn that "insanity" off but at the risk of screwing up some apps/stuff (badly designed apps IMO).

    For Windows: create a DWORD called HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccessUpdate and set it to 1

    For Linux: you can mount filesystems with "noatime", however this is incompatible with some applications (e.g. mutt). Fortunately for newer versions of Linux you can use relatime (which might already be the default on recent distros), which should reduce the amount of writing.
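    For reference, the two settings described above as plain config (device, mountpoint and filesystem type are placeholders; the same disclaimer below applies):

```
# /etc/fstab entry with noatime (adjust device, mountpoint, fs type);
# on newer kernels you can use relatime instead:
/dev/sda1  /  ext3  defaults,noatime  0  1

# Windows equivalent, run from an elevated command prompt:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1
```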

    Note/warning: Do this at your own risk, YMMV, blahblahblah.

    All I can say is "WORKSFORME" - so far I haven't noticed any probs with the Windows/Linux programs that _I_ use just because Last Access times weren't updated.
