Data Storage Hardware

AnandTech Gives the Skinny On Recent SSD Offerings

omnilynx writes "With capacity on the rise and prices falling, solid state drives are finally starting to compete with traditional hard drives. However, there are still several issues to take into account when moving to an SSD, not to mention choosing between a widening array of offerings. Anand Lal Shimpi of AnandTech does a better job than anyone could expect detailing those issues (especially those related to performance) and reviewing the new offerings in the SSD arena. Intel's X25 series comes out on top for sheer speed, but OCZ makes a surprise turnaround with its Vertex drive giving perhaps the best value."
  • a link to a one-page (printer-friendly) version of the article! Thank you.
    • Totally off topic, but I met Anand recently in Raleigh (I've been a reader of the site for years, and a regular in the forums as well) and he's a very smart guy, quite nice, as well as interesting. The articles they do are usually pretty lengthy, and in-depth enough that I rarely bother to read the whole thing.

      Turns out that as popular as his forums are, a few large companies actually have people on payroll just to read the forums and collect feedback (lots of tech enthusiasts there, especially concerning vid

  • I didn't know that. And it sucks.

    • Re: (Score:3, Informative)

      by vertinox ( 846076 )

      I didn't know that. And it sucks.

      No. Not with use per se, but it gets slower the more you write to it. You can read all the time and that doesn't affect speed.

      The article goes into detail on why writing causes problems. The author concludes that even at its slowest possible speed, the Intel model still beats an HDD (he ran a simulation where he wrote to all the blocks at least once).

      The other drives he tested didn't fare as well.

      Apparently it depends on the controller, which affects the speed. Intel put a goo

      • by vux984 ( 928602 ) on Thursday March 19, 2009 @03:56PM (#27261487)

        No. Not with use per se, but it gets slower the more you write to it. You can read all the time and that doesn't affect speed.

        Not quite. Once it runs out of completely free blocks, the drive 'hits a wall', and from that point on it is significantly slower to write to.

        But it doesn't continue getting slower and slower and slower and slower over time. It's just that, at some point, it suddenly becomes X% slower to write to and stays that way.

        The author concludes that even at its slowest possible speed, the Intel model still beats an HDD (he ran a simulation where he wrote to all the blocks at least once).

        The Intel model is the fastest by far. The Samsung drives are also good. And the OCZ Vertex was also good. (Not as good as the Intel one, but still 48% faster than the WD VelociRaptors, which is still seriously excellent.)

        The important point, however, is that 'still beats HDD' doesn't mean "a little bit faster". These units continue to royally spank an HDD's ass.

        However, the other models, by comparison, are basically unusable.

        Apparently it depends on the controller, which affects the speed. Intel put a good one in and the other brands not so good.

        It's FAR more complicated than that. And the article is 30+ pages long for a reason. (30+ real pages, not bullshit 'half-paragraph per page' pages.)

        He said it's still noticeable sometimes, though.

        In the sense that yes, once your drive 'hits the wall' the slowdown can be noticeable relative to when it was new... but it's still 2x to 5x as fast as the fastest alternatives.

        There is also stuff the OS can do to mitigate the problem, once we have SSD-aware OSes.

        Essentially, the reason it slows down is that once your drive has used all the blocks, it has to erase a block before it can use it again. That can require it to read multiple pages in, erase the block, and write it back out again, which can take up to half a second.

        The better controllers include extra blocks that aren't reported to the OS, and adding OS awareness to the issue can essentially let the drive stay ahead of the random write requests and erase blocks before they are needed, ensuring there is always a pool of completely erased, ready-to-go blocks available, and therefore keeping the drive much closer to its 'like new' speed indefinitely.
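        To make the mechanism concrete, here's a toy Python sketch of that 'wall' (an illustration only; the timings are made-up numbers, not measurements from any real drive, with the half-second worst case taken from the description above):

          # Toy model: writes to pre-erased blocks are cheap; once the pool of
          # erased blocks is empty, every write pays a read-erase-rewrite cycle.
          FAST_WRITE_US = 100       # hypothetical cost to program an erased block
          ERASE_CYCLE_US = 500000   # hypothetical worst case, ~half a second

          def write_cost(free_blocks):
              """Return (cost_in_microseconds, remaining_free_blocks) for one write."""
              if free_blocks > 0:
                  return FAST_WRITE_US, free_blocks - 1   # fast path
              return ERASE_CYCLE_US, 0                    # the 'wall'

          free = 3
          for i in range(5):
              cost, free = write_cost(free)
              print("write %d: %d us" % (i, cost))
          # Three cheap writes, then the cost jumps by orders of magnitude and
          # stays there -- it doesn't keep degrading further, matching the
          # step-change described above.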

        • by lagfest ( 959022 ) on Thursday March 19, 2009 @04:38PM (#27262031)

          Mod parent up. I'm not arguing with him, but merely emphasizing a key point.

          ... and adding OS awareness to the issue can essentially let the drive stay ahead of the random write requests and erase blocks before they are needed, ensuring there is always a pool of completely erased, ready-to-go blocks available, and therefore keeping the drive much closer to its 'like new' speed indefinitely.

          Actually, this is the part about the new SATA TRIM command. And ironically it's a part where Anand swings and misses completely, or it's dumbed down to a level where it is completely misleading.

          It's not so much about making the OS SSD-aware in the sense that the OS now knows about the inner workings of the SSD, but making the SSD aware of what space is actually used for data, and what has been discarded. Knowing which data blocks have been discarded means you can consolidate discarded blocks, by moving valid data to other pages, and then erase the page full of discarded blocks, so it is ready for writing new data.

          So not only do you get write performance that doesn't degrade with time, but you can also store slightly more, because you don't have to reserve as much space.
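          To illustrate, here's a minimal Python sketch of what TRIM tells the drive (heavily simplified and hypothetical; a real flash translation layer tracks pages per erase block, wear leveling, and much more):

            # Without TRIM, the controller must assume every sector it has ever
            # written still holds live data. TRIM lets it forget discarded
            # sectors and erase their blocks ahead of time.
            class ToyFTL:
                def __init__(self):
                    self.live = set()   # sectors the drive believes hold valid data

                def write(self, sector):
                    self.live.add(sector)

                def trim(self, sector):
                    # The filesystem says this sector's contents were deleted.
                    self.live.discard(sector)

            ftl = ToyFTL()
            for s in range(8):
                ftl.write(s)
            print(len(ftl.live))       # 8: everything looks live to the drive
            ftl.trim(3); ftl.trim(5)   # files deleted -> OS issues TRIM
            print(len(ftl.live))       # 6: two sectors can be erased at leisure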

          • Re: (Score:3, Insightful)

            by AmiMoJo ( 196126 )

            Adding cache RAM would mitigate a lot of the problems too. It's a shame only high end RAID cards have it, because it could really help out both HDDs and SSDs.

            Basically you have a relatively small RAM cache for the drive. It could be in the drive itself or on the motherboard, depending on how you want to do it. When any data is written to the drive, it goes into the cache RAM and gets written out as quickly as the drive can manage. From the OS's point of view the write completes as soon as the data is in cac

            • 1. Hard Drives do this already. [wikipedia.org]

              2. Some of the SSDs in question do this already. [anandtech.com] [TFA]

              3. Most filesystems do equivalent things already by delaying and aggregating writes.

              From the OS's point of view the write completes as soon as the data is in cache RAM, which is almost instantly.

              As stated, this is a Bad Thing. Most of the time, it is acceptable, but the filesystem (which is what you meant) does occasionally need to be able to force data to be written to disk immediately.

              What do they say on Everything2? "Your revolutionary ideas on the future of storage have already occurred to others." Depending on your specific nee

              • by AmiMoJo ( 196126 )

                You have failed to understand what I was suggesting.

                Using cache RAM on the drive or the controller is different from having more RAM for use as cache by the filesystem or the OS. You correctly state that sometimes the filesystem needs to know that data has really been committed to disk, and then go on to say that having cache RAM is thus a bad idea. Apparently you didn't read the bit I wrote about backup batteries/capacitors.

                The whole point I was trying to make is that because SSDs are low power, it isn't a

                • I read your post in its entirety. It is uninformed on several levels, and a completely inappropriate solution to any given problem.

                  Do note that this device already exists in a couple different forms, and is selling extremely poorly.

                  This device would be horribly expensive. In essence, you are paying for the same amount of storage twice, plus a bunch of specialized electronics. The base cost for the device would be about $250. Flash memory costs dollars per gigabyte, DRAM costs tens of dollars per gigabyte. A

                  • by AmiMoJo ( 196126 )

                    You clearly have no idea what you are talking about, and still have not managed to grasp my original point. Nicely done.

                    Your costing is way, way off. You also seem to imply that you would need as much RAM as you have solid state storage, but that too simply is not the case. The RAM would be for cache use only, so it does not need to be bigger than, say, 64MB if you really want to save some money. Also, you seem to have no idea about battery/capacitor cost, or what type/size would be required.

                    Most of the faster

        • There's another thing you might want to do to work around the problem.

          In Windows NT/2000/XP, Linux, FreeBSD and a few other operating systems, the O/S by default writes to the drive on every file/directory access to update the "Last Accessed Time".

          This means the O/S will write stuff every time it opens a directory or file, even if it's just for reading!

          This is bad for drive performance whether "conventional HDD" or SSD. And extremely bad for the crappier SSDs that don't do writes well.

          You can turn that "ins
          • by vux984 ( 928602 )

            In Windows NT/2000/XP, Linux, FreeBSD and a few other operating systems, the O/S by default writes to the drive on every file/directory access to update the "Last Accessed Time".

            In most file systems, if not all, I would imagine that these time stamps are stored in the directory "file" itself, not within the actual files in the directory. So it's not like every file is being touched every time it's accessed, just the folders they are in.

            Granted, turning off last access times would reduce writes to the director

            • by TheLink ( 130905 )
              Even if it is just the folders being touched, with SSDs it means small blocks will be written when launching apps or opening files.

              That will cause slowdowns for SSDs with high write latencies.

              You can check the percentage for your usage, start perfmon.msc and add the relevant counters e.g. disk writes/sec and disk transfers/sec. Then do your real world stuff.
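              If you'd rather script the same measurement, here's a rough Python equivalent (an assumption on my part: it uses the third-party psutil library, and the counters are system-wide and cumulative since boot):

                import time
                import psutil

                start = psutil.disk_io_counters()
                time.sleep(60)            # ...do your real world stuff here...
                end = psutil.disk_io_counters()

                reads = end.read_count - start.read_count
                writes = end.write_count - start.write_count
                if reads + writes:
                    pct = 100.0 * writes / (reads + writes)
                    print("writes were %.1f%% of disk operations" % pct)
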
              • by vux984 ( 928602 )

                You can check the percentage for your usage, start perfmon.msc and add the relevant counters e.g. disk writes/sec and disk transfers/sec. Then do your real world stuff.

                Trouble is I intuitively think the percentage is pretty low, so even running the PC for a couple weeks with it on and then a couple more with it off, I don't think I'd be able to positively conclude from the results that the difference could be attributed to timestamp writes. I'm not even sure if existing benchmarking software suite could be used t

            • by XanC ( 644172 )

              Ingo Molnar says [kerneltrap.org]: "I cannot over-emphasize how much of a deal it is in practice. Atime updates are by far the biggest IO performance deficiency that Linux has today. Getting rid of atime updates would give us more everyday Linux performance than all the pagecache speedups of the past 10 years, _combined_."

              • by TheLink ( 130905 )
                Well it's amazing how none of the O/S devs got around to fixing it in 10 years.

                It's not as if nobody noticed - the rest of us were working around that with noatime and similar stuff.
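                If you want to see what your own mount does, here's a quick Python sketch (assumes a writable current directory; note that relatime, now a common default, only updates atime in certain cases, e.g. when it's older than mtime):

                  import os
                  import time

                  path = "atime_test.txt"
                  with open(path, "w") as f:
                      f.write("x")
                  before = os.stat(path).st_atime

                  time.sleep(2)
                  with open(path) as f:
                      f.read()        # a pure read...
                  after = os.stat(path).st_atime

                  # False on a noatime mount: the read wrote nothing to disk.
                  print("read updated atime:", after > before)
                  os.remove(path)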

                Sometimes I wonder if there are people going about looking for ridiculously pathetic stuff like that, and working to get it fixed.

                Another example - for years the hardware people did not create an easy way for the software people to do "gettimeofday" on desktops. The hardware people were telling the software people - no don't u
    • Re: (Score:3, Informative)

      by phantomfive ( 622387 )
      But it isn't a physical problem, i.e., the drive itself isn't slowed down; it's a matter of the way things have been allocated. So if you reformat the drive, or if you use a filesystem built specifically for flash, this isn't as much of a problem. You can do this of course if you are using Linux, but if you are using Windows, sorry, too bad. I expect you could set up a special flash filesystem for OS X, but I doubt it is officially supported.
      • Not really, no. (Score:5, Informative)

        by XanC ( 644172 ) on Thursday March 19, 2009 @03:47PM (#27261369)

        Reformatting isn't sufficient to get back to new performance, you have to issue an ATA SECURE ERASE command.

        And you can't run a filesystem built specifically for flash on these drives, with Linux or otherwise, because they don't present a flash interface. They present an SATA interface.

        In any case, the take-home message is probably to consider the drive's "used" performance as its real performance. If the drive is not a crummy one (watch out for those), it's still _much_ faster than an HDD, and very worthwhile depending on your application.

        • by Methlin ( 604355 )

          And you can't run a filesystem built specifically for flash on these drives, with Linux or otherwise, because they don't present a flash interface. They present an SATA interface.

          However, you CAN run any log-structured file system [wikipedia.org] that isn't tied to MTD devices. That only delays the write performance hit until you've wrapped the drive, though; you'd still need some way to inform the drive that blocks X to Y are unused and can be erased.

          • by XanC ( 644172 )

            Interesting. The ATA TRIM command, which hopefully drive firmware will start supporting, could work for that. But you'd also need some way to turn off the fancy wear-leveling and other algorithms on the drive itself, and also you'd want some way to unlock the extra storage which the drives physically have but don't present over SATA.

        • by mgblst ( 80109 )

          In the article he mentions the new TRIM command, which will be similar to defrag, I guess. But a lot quicker, and it can be run internally on the drive.

          And you can write a file system for these new drives; in the article he mentions how to do it as well.

        • by adisakp ( 705706 )

          And you can't run a filesystem built specifically for flash on these drives, with Linux or otherwise, because they don't present a flash interface. They present an SATA interface.

          The OS can simply query the rotational speed. Anything that responds with 0 is generally a flash drive. There's also no reason that an OS shouldn't allow you to put whatever filesystem you want on a drive when you format it, if you have a flash-optimized fs already present in your OS. Some of the blame lies with OSes as well as the drives.
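          On Linux, for instance, that query is just a sysfs read (a sketch; recent kernels expose this flag, and "sda" is only an example device name):

            def is_non_rotational(device="sda"):
                # "0" means non-rotational (SSD or similar), "1" means spinning disk.
                with open("/sys/block/%s/queue/rotational" % device) as f:
                    return f.read().strip() == "0"

            print(is_non_rotational("sda"))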

          • You sound a little confused here. It's not that the OS doesn't know it's a flash drive, the problem is that you can not access the actual flash chips through the SATA interface. You can't tell the chip to "erase flash block N", since there's no such thing in the SATA interface. No OS can get around this without help from the SSD manufacturers and a standard way to access the real flash chips inside.
            • by TheLink ( 130905 )
              "the problem is that you can not access the actual flash chips "

              I don't see that as a problem. In fact I see that as a good thing. Otherwise think of all the buggy custom drivers you would be plagued with.

              I'd rather the SSD manufacturers improve the technology so that the O/S doesn't have to know about such stuff.

              What's important to have is stuff like "make sure data is flushed to nonvolatile storage", features that will remain important for decades to come.
            • Is there really any reason you can't simply have the OS automatically "zero" the space used by any file that is deleted? This kind of thing can already be done as a security measure for spinning disks. The only problem with that I could see is that if the drive gets really fragmented, you'll eventually reach a point where every block will have some pages containing data, so you'll still have to do the erase-and-rewrite business to write a file.

      • In a way, it is a physical problem. With hard drives, seek times are slow compared to read and write speeds, and reading and writing happen at the same speed. With flash, seek times are negligible, and read speeds are much faster than writes. It's that inequality of read and write speeds that messes up so many of the assumptions our current software makes, and leads to behavior that users find disturbing.

        • I guess the best thing to do is think of your flash disk as being at the end of a very fast ADSL modem. It's a lot slower to "upload" data to it than it is to "download" from it.
      • Actually in the article he specifically points out that it is a physical problem that doesn't depend on any external factors.

        After all of the blocks have been used the first time a new physical delay is introduced: the need to erase old blocks.

      • by klui ( 457783 )
        exFAT is designed for flash-based drives. It was introduced in Windows Vista SP1. Supposedly it can be back-ported to Windows XP through a hotfix. http://en.wikipedia.org/wiki/ExFAT [wikipedia.org]
  • I read this article yesterday and thought it was very interesting. I didn't know much about SSDs besides the common "better performance but not worth the money" opinion. Nor did I know about the 1st gen problems that most of them have. Good stuff; anyone interested in getting an SSD soon should definitely read this.

    • by vux984 ( 928602 )

      Mod parent up. This is REALLY the best article I've EVER read about SSD performance. If you are interested in buying an SSD do yourself a favor and read it, and understand it.

      • Mod parent up. This is REALLY the best article I've EVER read about SSD performance. If you are interested in buying an SSD do yourself a favor and read it, and understand it.

        Anand consistently delivers top notch articles. They've had the same, unobtrusive, useful interface for the last, oh, 8 years now? They haven't pumped the site full of advertisements either like some other *cough tomshardware cough* websites.

        For other good reads from Anandtech check out the DX11 writeup [anandtech.com] or their Intel SSD [anandtech.com] article.

    • Agreed, this is a great article. A point I hope nobody missed is that the manufacturer (OCZ) was pissed about a bad review, but worked with the reviewer, learned something from the review and made massive improvements to their product. And the reviewer gave a full account of the interaction, which is wonderful journalism.

      This is almost the perfect review. Anand gives us his methodology, reveals his contact with manufacturers in context, and explains fully what the tests mean and why they were designed

  • by vertinox ( 846076 ) on Thursday March 19, 2009 @02:43PM (#27260469)

    I saw this article earlier today off a comment from Engadget and read the whole thing (no printer friendly version).

    Out of curiosity, I searched Amazon.com [amazon.com] for current offers of that Intel X25-M, and for both offerings (80GB and 160GB) the reviews say this thing is the greatest thing since sliced bread.

    The only complaints are about the price, but people are claiming it's worth it.

    I did come across a detractor that shows you can't use XP/Vista via Boot Camp [apple.com] with the drive because of partition issues with OS X.

    Supposedly Windows 7 will have true blue SSD support so I'll wager by the time it comes out, SSD will be standard in all machines.

    • Re: (Score:3, Funny)

      by Yvan256 ( 722131 )

      Supposedly Windows 7 will have true blue SSD support

      Did Sony just invent yet another format while nobody was looking?

    • Supposedly Windows 7 will have true blue SSD support so I'll wager by the time it comes out, SSD will be standard in all machines.

      Sure but by then we should also have solved world hunger, cured cancer and conquered Mars!

      (I kid, I kid)

  • Bad controller (Score:2, Insightful)

    by afidel ( 530433 )
    They only got 31.7MB/s with the X25-E @ 4K random writes; that's MUCH slower than I've been able to get out of it. On my HP P400 I get ~75MB/s and on my HP workstation with built-in Intel chipset I get ~150MB/s. I would say it's their testing rig that was seriously holding the drive back, and if they redid it with a better controller they would have come to a very different conclusion. Of course I have yet to find an enterprise class RAID controller that can keep up with a 2 drive RAID-10 of X25-Es so the abs
    • How TF are you getting RAID-10 [acnc.com], which requires 4 drives, out of two drives? I'm guessing you mean RAID-0 [acnc.com] built around two drives. Maybe you have something funky like RAID-10 built from four partitions on two drives, but that makes your redundancy moot.

      • by afidel ( 530433 )
        Sorry, HP calls RAID-1 and RAID-10 both RAID-10 in the setup screen for their controllers.
    • Are you doing random or sequential writes? Your numbers sound like their sequential write numbers.
      • by afidel ( 530433 )
        100% random writes using IOMeter.
    • by tknd ( 979052 )
      The 4k random writes number is after simulating a "used" drive. Fill up your SSD with random files and deletions and then test your 4k random writes again.
      • by afidel ( 530433 )
        I left the write test running overnight, at 150MB/s it doesn't take long to fill a 32GB drive =)
  • WOW (Score:5, Informative)

    by andrewd18 ( 989408 ) on Thursday March 19, 2009 @02:47PM (#27260545)
    This may be the most informative and practical article I have read in a long, long time. It's definitely going to influence my SSD hardware purchases for the foreseeable future.
    • Not only that, but the attention to the whys of the tests was amazing. I had a very pleasant time reading that review. Kudos to the reviewer.
  • With the lower-capacity SSDs offering somewhat more GB/$ than the larger drives, and given the bandwidth improvements of RAID 0, it is important to know whether the TRIM command works through a RAID controller and actually reaches the SSD. Likewise, will a RAID controller report a rotational speed of "0", which the OS may be looking for before it issues a TRIM command?
    • whether the TRIM command works through a RAID controller and actually reaches the SSD?

      Probably not. RAID controllers will need new firmware.

    • Re: (Score:3, Informative)

      by pyite ( 140350 )

      it is important to know whether the TRIM command works through a RAID controller and actually reaches the SSD

      Not really. Stop using hardware RAID. It's dangerous, expensive, and not necessary.

      The best thing you can do is use ZFS. It even optimizes for SSDs.

  • This 'TRIM' procedure sounds like the 'garbage collect' routine run on the internal flash on my TI Calc when it fills up.
    • Re: (Score:3, Informative)

      It isn't. The whole point of TRIM is to erase a block before you're waiting to write something to it, i.e. before your disk is full and you need to reuse the space. The garbage collection on a TI calculator is really defragmentation, to eliminate gaps between files. This is necessary because the calculators have the flash attached to the address bus, rather than behind a hard drive controller, and there's no MMU to give programmers a linear address space if their flash apps were to be discontiguous in memory

  • Perhaps a hybrid model will arise, which keeps usage statistics for files, and allows mostly-read and marked-read-only files to migrate to SSD.
      I would want most of my high-traffic files (development work, application caches) to be fast-write at all times and relatively immune to data congestion. But perhaps silent overhead will just bump up to 50% as prices go down, with a new analogue to defrag, the "flatten" or such.

  • by argent ( 18001 ) <peter@NOsPAm.slashdot.2006.taronga.com> on Thursday March 19, 2009 @03:54PM (#27261461) Homepage Journal

    The real solution is going to be when the OS (which knows what the data really means: which is file data, which is metadata, and which is cache and backing store) and not the flash controller does all the wear leveling and block erasing, bypassing the flash controller as much as possible. That is going to require new APIs and interfaces.

    • Re: (Score:3, Interesting)

      by ruiner13 ( 527499 )
      I personally want my OS involved with the actual writing of the data as little as possible. It has less chance of messing with it. Hand it off to the hardware with as little tampering as possible.
      • You can trust Linux to do it correctly and reliably. Windows, not so much.

        In fact, Linux already has a few flash file systems.

      • by Chonine ( 840828 )

        I can certainly understand the desire to let the OS "not handle" stuff, because there are plenty of OS's that do a bad job... "handling" stuff, to be sure.

        But that's its job. It multiplexes your CPU to handle hundreds of threads at the same time, making each one think it has the entire CPU to itself. That's a pretty big job, and we trust the OS to handle it. Sure, there is a need for a good interface, a good point for what has to be abstracted and handled by the disk, what has to be handled by the OS - b

    • Re: (Score:3, Informative)

      by dfn_deux ( 535506 )
      yup! Sun's openflash initiative is exactly this.
  • Does anyone know which controller / drive is in MacBooks, and how these results match up to those on Macs with SSDs?
  • Amazing Article (Score:4, Insightful)

    by ShooterNeo ( 555040 ) on Thursday March 19, 2009 @04:06PM (#27261625)
    I read the article in its entirety. One thing that impressed me was the tremendous power of internet hardware reviewers. The reviewer in the article is some geek with a website, but he influences thousands, probably millions of dollars in sales. In the article, he figures out that the OCZ Vertex is an SSD that actually offers a good price/performance ratio. After reading that, I checked newegg.com: yep, the top selling SSDs are the OCZ Vertex and Intel's. Geeks really do look for objective, hard benchmarks to decide what to spend their hard-earned cash on.

    More than that, OCZ actually revised their firmware to meet the reviewer's demands. They would not have done this at all if they had been left to their own devices, and the final product is actually usable.

    Finally an SSD upgrade is viable: on Newegg, the smallest OCZ Vertex drive is 30GB, for $108. Two of those in a RAID 0 configuration would be ideal, giving performance exceeding the Intel X25-M for half the cost ($216 versus $350 for the X25-M). I'm strongly tempted to make the purchase, although I know it'll be even cheaper if I just wait a few more months...
    • Re:Amazing Article (Score:5, Informative)

      by paitre ( 32242 ) on Thursday March 19, 2009 @04:12PM (#27261695) Journal

      "some geek"?

      Anand has been around, reviewing hardware, for close to 10 years now. He is, rightfully so, considered an expert in hardware usage, performance tuning, and overall systems construction.

      There are others out there with similar cachet.

      He is far, FAR from just being "some geek".

      • Re:Amazing Article (Score:4, Insightful)

        by ShooterNeo ( 555040 ) on Thursday March 19, 2009 @04:16PM (#27261761)

        Does he have an engineering degree? Could he have made the change to the firmware himself? Does anyone but other geeks know who he is?

        I'm not trying to badmouth him, it's amazing that he does what he does, but it isn't immediately obvious why he carries so much respect.

        • Re:Amazing Article (Score:5, Informative)

          by klui ( 457783 ) on Thursday March 19, 2009 @04:31PM (#27261949)

          If you take 5 seconds to search his credentials you'll find he graduated from North Carolina State with a CE degree.

          His site has been in existence for quite some time and I find his articles among the better ones on the net, but you may want to read others and compare. The reason I like his articles over others is their depth. He describes the underlying architecture and provides thoughts on why he thinks a company chose a path, with followups that either reinforce or refute his theories.

          • I've always liked Anand's articles primarily because he's not afraid to be frank and say something that's bad is bad. He doesn't sugar coat. Sometimes, when a product launches and he reviews it and says it doesn't live up to the hype, or that X thing is missing/wrong/etc., I get bummed, but he does quite well at putting it all into perspective.

            At the same time, I "feel" this giddy-nerd-joy when he writes about something that is ground-breaking or game-changing (RV770, Nehalem, etc). Take a look at this article,

        • by mandolin ( 7248 )

          I'm not trying to badmouth him, it's amazing that he does what he does, but it isn't immediately obvious why he carries so much respect.

          The site at least (AnandTech) gets respect because, from them, articles of this quality level are not perceived as a fluke.

          Merely as an example, if this had come from Tom's Hardware, I would have been floored.

        • >Does anyone but other geeks know who he is?

          Does it matter? You seem to be surprised by how the market works. Products get reputations. In technology this is driven largely by benchmarks. In every industry there are guys like him who are unknown outside of the industry. It doesn't matter if he's unknown to 99.9999999999999% of the people out there.

          Also the web is something of a meritocracy. He's popular because x amount of people think he's good at what he does. He's not some guy who just registered a blogs

      • Anand is alright... I've been reading his site for years. For info on consumer-level tech, he seems to know his stuff, although he seems to slant toward Intel a bit too much.

        GP is right, though. He is basically just a geek that likes to mess around with new HW and also gets to go to all of the consumer electronics shows and things. He just made a business out of it.

  • That was a really, really, good article that Anandtech put tons of work into. So in response Slashdot hammers them with a zillion people jumping directly to the printable page link? No ad impressions for them, and more bandwidth hit from those that don't read the whole thing. That wasn't nice.

    I'm just as likely to hit the printable link as the next guy when a site has terrible ads, or a terrible content/ad ratio, but Anandtech didn't deserve this.

  • Anand rocks (Score:3, Interesting)

    by pak9rabid ( 1011935 ) on Thursday March 19, 2009 @05:23PM (#27262533)
    Anand rocks. This article is very informative and easy to read, not to mention unbiased. Anand is known for his lack of personal bias in his reviews. Highly recommend you give it a good read.
  • I'd be interested in using an SSD as a replacement drive in a laptop but it's passively cooled so I'd need to know it ran cooler than a traditional HD.

  • From reading the article it appears that none of the SSD manufacturers did any real world testing; they only designed their products to maximise sequential read/write performance. Not one ever tried popping a drive into a real machine to see how it performed.

    • by rob1980 ( 941751 )
      My understanding of the article is that all but Samsung and Intel whiffed. Samsung had to get their controller right because a company like Apple isn't going to put up with the nonsense that usually comes to mind when people think of SSD performance, and Intel simply did a superior job on their own. Reading the article and more importantly finding out who's goofing off and letting first impressions based on a lack of extensive testing sell their products vs. who's actually making something worth buying ha
  • The article goes into detail about the trim command, etc. I thought this whole issue could be avoided by just setting the beginning block to align with the SSD and then setting the FS block size to the same as the erase block on the SSD. This way, every time it gets a request to write a block it always writes the whole thing and doesn't have to worry about reading it or doing any copying.
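    The alignment arithmetic is simple enough to sketch in Python (the sizes below are examples only; real erase block sizes vary by drive and usually aren't published):

      SECTOR = 512                # bytes per LBA sector
      ERASE_BLOCK = 512 * 1024    # hypothetical erase block size

      def first_aligned_sector(min_start):
          """Smallest sector >= min_start that starts on an erase-block boundary."""
          per_block = ERASE_BLOCK // SECTOR    # 1024 sectors per erase block here
          return ((min_start + per_block - 1) // per_block) * per_block

      print(first_aligned_sector(63))   # 1024: the old DOS default (sector 63) is misaligned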

    • by amorsen ( 7485 )

      Erase blocks are probably going to get larger. Try a typical Unix file system without tail packing on a 1MB block drive...

  • "SSDs make Vista usable."

    I took the guy just a little less seriously after reading this pie-in-the-sky claim [phrases.org.uk] ;-)
