Data Storage Hardware

Samsung Launches SSD 830 Drive

MojoKid writes "Although they haven't been big hits with enthusiasts, Samsung's solid state drives have been successful due to strong relationships with a number of OEMs, including Apple. With the release of their new SSD 830 Series solid state drives, however, Samsung appears ready to make inroads with enthusiasts as well. The SSD 830 tested here is a 256GB model, with eight 32GB Samsung NAND flash memory chips, 256MB of Samsung DDR2 SDRAM cache memory, and a new Samsung SSD controller. The controller features a three-core ARM design with support for SATA III 6Gb/s interface speeds. Performance-wise, the Samsung SSD 830 Series drive offered the best read performance of the group tested, even versus the latest SandForce-based SSDs, though the SSD 830 couldn't quite catch SandForce in writes."
  • I assume that if "enthusiasts" are anywhere, they're here on Slashdot. So, per the summary, why haven't Samsung's solid state drives "been big hits with enthusiasts"? Whose drives have?

    I'm thinking of sprucing up an old laptop with an SSD - any recommendations?

    • by fuzzyfuzzyfungus ( 1223518 ) on Sunday September 25, 2011 @01:50PM (#37508930) Journal
      Historically, Samsung's offerings have been relatively solid; but quite unexciting in performance terms, and pretty tepid in performance/dollar.

      OEMs love 'em because, while mediocre, they have been comparatively reliable (no equivalents of the JMicron controller debacle, no firmware that makes them show up as only 8MB in size, no assorted bleeding-edge weirdness, and no deal-stopping issues of the "No, we really do have to offer these things under a 3-year warranty to get business customers" variety).

      The enthusiast-darling crown has changed hands a number of times. Intel was the one to have a little while ago; I think they've been eclipsed by some of the newer SandForce gear of late. There are rather more brands than there are chipsets, so brand enthusiasm tends to swing wildly based on cost and on who is releasing the new-hotness chipset this month.
    • by JonySuede ( 1908576 ) on Sunday September 25, 2011 @01:51PM (#37508934) Journal

      Avoid SuperTalent like the plague they are.
      Avoid anything used or refurbished.
      Avoid any hybrid solution as they drain more battery.
      If you don't need a lot of space but need extreme reliability, look at the Intel 311 series (those drives kick ass), or any SLC-based SSD for that matter.
      If you don't need extreme reliability, but don't want to play a game of Russian roulette with 3 bullets instead of one (as with a SuperTalent drive), look at anything SandForce-based.

      Since you have an aging laptop, you do not need something that can saturate SATA 6Gb/s, so try to find something like an OCZ Vertex 2 (1) drive or a Corsair Force (1), as in real life they are quite similar (you do not need the third generation in an aging laptop; both lines have a V3).

      Also, bench the write speed only once or twice, since the more you bench, the slower your drive gets. You can usually bring some of the performance back by emptying the drive: format it to NTFS in Windows 7, then run a force-TRIM utility and wait about 15 minutes (a rough Linux-side sketch of the same idea follows below the footnote). After that you can reformat the drive to your file system of choice and the performance should be OK.

      1- In synthetic benchmarks they differ a little, but the difference is imperceptible in real life unless your main workload happens to be approximated well by the synthetic benchmark you were looking at.
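
      (Not the parent poster, but since a "force trim utility" got mentioned: on Linux the equivalent step is fstrim from util-linux. A minimal sketch, assuming the SSD's filesystem is mounted at the hypothetical /mnt/ssd and that it is run as root:)

      #!/usr/bin/env python3
      # Sketch: tell the SSD which blocks are unused ("force TRIM"), so the
      # controller can reclaim them for garbage collection / wear leveling.
      # Assumes Linux with util-linux's fstrim and a mount point of /mnt/ssd.
      import subprocess

      MOUNTPOINT = "/mnt/ssd"  # hypothetical mount point, adjust to your system

      subprocess.run(["fstrim", "-v", MOUNTPOINT], check=True)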

      • Interesting you mention SuperTalent drives. I have a pair of the 64GB SSDs from 2 years back in RAID0, with no issues. I never upgraded the firmware to support TRIM, if that gives you a time reference. What was the problem with them?
        • Died a month after the update to support TRIM, and the replacement drive died on me too. At that point I gave up and bought something else....

          • Guess it may have to do with that firmware then. I remember there was an issue with it originally: they retracted the original firmware and released another TRIM version, but I never got around to it because of the full wipe.
  • by Jane Q. Public ( 1010737 ) on Sunday September 25, 2011 @01:28PM (#37508806)
    Does anybody have a backup plan for when their SSDs die? After all, unlike magnetic media, SSDs have a limited number of writes. AFAIK, none of them are rated yet for over a million writes, so they are bound to fail at some point.

    When SSDs were newer, I argued here on /. (against vociferous claims to the contrary) that I could write a program that would break an SSD quickly. The wear-leveling is better today, but since then such applications have actually been written and tested, and they work.
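
    (For the curious, the published wear-test tools boil down to something like the sketch below: a hypothetical, minimal endurance hammer that rewrites the same small file forever, fsyncing each pass so every write actually reaches flash, and using incompressible data so a compressing controller can't cheat. The path is made up; run something like this only on a drive you intend to sacrifice.)

    #!/usr/bin/env python3
    # Hypothetical sketch of an SSD write-endurance hammer, not any specific
    # published tool. On a nearly full drive the controller has few spare
    # blocks to rotate through, so the same cells get erased over and over.
    import os

    TARGET = "/mnt/victim/hammer.bin"     # made-up path on the drive under test
    CHUNK = os.urandom(4 * 1024 * 1024)   # 4 MiB of incompressible data

    passes = 0
    with open(TARGET, "wb") as f:
        while True:
            f.seek(0)
            f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())          # force the data out of the page cache
            passes += 1
            if passes % 10000 == 0:
                print(f"{passes} passes, ~{passes * 4 / 1024:.0f} GiB written")
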
    • by Microlith ( 54737 ) on Sunday September 25, 2011 @01:35PM (#37508834)

      AFAIK, none of them are rated yet for over a million writes, so they are bound to fail at some point.

      That rating, mind you, is per cell. Virtually all SSDs do some form of wear leveling and are over-provisioned to ensure that no one erase block gets worn out early. And the "backup plan" is pretty much the same as for a regular hard drive: duplicates on RAID for reliability and backups for failure recovery.

      I could write a program that would break an SSD quickly

      Sure, you can deliberately and forcefully break an SSD. But the amount of IO required to do so tends to go above and beyond what even the average enthusiast will do. And if your typical IO pattern is one that will break an SSD, then you should plan for it and determine if the speedup is worth the cost.
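
      (To make "wear leveling plus over-provisioning" concrete, here is a toy sketch of the idea; real controllers are far more involved, and none of the numbers or structures below come from any actual firmware. New writes always go to the least-erased free block, and there are more physical blocks than logical ones, so no single block wears out early:)

      # Toy flash translation layer: logical blocks map onto a larger pool of
      # physical blocks, and each write is steered to the least-worn free block.
      class ToyFTL:
          def __init__(self, logical_blocks=100, physical_blocks=110):
              self.mapping = {}                         # logical -> physical
              self.erase_count = [0] * physical_blocks  # wear per physical block
              self.free = set(range(physical_blocks))   # currently unmapped blocks

          def write(self, logical_block):
              target = min(self.free, key=lambda b: self.erase_count[b])
              self.free.remove(target)
              old = self.mapping.get(logical_block)
              if old is not None:
                  self.erase_count[old] += 1   # stale copy gets erased...
                  self.free.add(old)           # ...and returned to the free pool
              self.mapping[logical_block] = target

      ftl = ToyFTL()
      for i in range(50000):
          ftl.write(i % 100)                   # hammer the same 100 logical blocks
      print(min(ftl.erase_count), max(ftl.erase_count))   # wear stays nearly uniform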

      • Re: (Score:2, Informative)

        by 0123456 ( 636235 )

        And the "backup plan" is pretty much the same as for a regular hard drive: duplicates on RAID for reliability and backups for failure recovery.

        Mirroring an SSD to another SSD which is likely to fail at almost the same time doesn't seem a great plan to me :).

        • SSDs aren't exactly inexpensive, are they?

          Perhaps I wasn't clear: you'd keep copies on a RAID made of regular disks.

          • by 0123456 ( 636235 )

            SSDs aren't exactly inexpensive, are they?

            Byte-by-byte they're about the same price as the 15k SAS drives we use in the RAID on the servers I maintain; and a lot cheaper than those drives were when first installed a few years ago.

            • Depends on your perspective then, I suppose. Inexpensive for a server isn't exactly inexpensive for the average home user.

            • I've seen installed hard drives that didn't fail until after 27 years in service. Can that be said, or even assumed, of SSDs?

        • by Jeremi ( 14640 )

          Mirroring an SSD to another SSD which is likely to fail at almost the same time doesn't seem a great plan to me :).

          Assuming SSDs are likely to fail after a certain number of writes (as opposed to after a certain number of hours of uptime), a Time-Machine style backup system would work fine (since the number of writes to the backup drive would be much less than the number of writes to the primary). RAID-0 probably would be a bad idea, for the reason you mentioned.

        • SSDs fail on write, not on read, which makes your whole argument rather meaningless.

      • "That rating, mind you, is per cell. Virtually all SSDs do some form of wear leveling and are over-provisioned to ensure that no one erase block gets worn out early."

        I am aware of how they are constructed and how they work, thank you very much. None of that changes the essential point: they can and do wear out, and probably cannot be expected to last as long as a modern hard drive, depending of course on usage.

        The amount of I/O required to break an SSD (if one is doing it deliberately) is nowhere near as much as you seem to think. One only has to do it intelligently.

        I can envision a simple virus that could break SSDs willy-nilly, although its operation would be tr

        • by DamonHD ( 794830 )

          Running a busy USENET server (I think I hovered at ~#10 in the stats for a while) used to wear out normal hard drives too, back in the day; SSDs aren't especially novel in that regard IMHO. It's really only a matter of how frequent and comprehensive your backups are.

          And as my USENET data didn't last longer than about a week, I regarded backups as largely pointless except for some very low-traffic local groups, and just threw away a drive when it died and let the new one fill up again!

          BTW, I've been running a server entirely on a mixture of SD cards and USB Flash for a couple of years so far, so good.

          • "BTW, I've been running a server entirely on a mixture of SD cards and USB Flash for a couple of years so far, so good."

            Unless you have some kind of RAID-style paralleling arrangement, that has to be slow as molasses.

            • by DamonHD ( 794830 )

              It's entirely fast enough for my purposes: I have front-end mirrors and CDN to serve data quickly to end users. It also handles my mail (including many thousands of SPAMs per day), and SVN and so on.

              And it does mean that I've been able to run the entire system off-grid, on a few solar panels propped up against a wall! B^>

              Rgds

              Damon

            • And here I just thought it would be slow as molasses because it's running on a "plug computer" with a marginal amount of memory. Just because one's duty cycle is only 5% doesn't mean you actually want to get by with only 5% of the computer.
              • by DamonHD ( 794830 )

                It works for me. Maybe it wouldn't do for you, I can't tell.

                But as I actually enjoy the challenge of working with resource-constrained 'embedded' systems (my first job was designing and implementing a robotic OS and hardware 25+ years ago), this is part of the fun.

                Rgds

                Damon

      • by Kjella ( 173770 )

        Sure, you can deliberately and forcefully break an SSD. But the amount of IO required to do so tends to go above and beyond what even the average enthusiast will do.

        For me it took all of 18 months. I must say I didn't optimize my system to minimize writes, and I used and abused it heavily: my OS was on it, it was running a Freenet node and downloading incoming torrents, and I kept it almost full so it had to work really hard to wear-level, and so on. And it wasn't a premature failure either; the chips were rated for 10k writes, and when it failed I had an average of 8.7k writes, with the worst cells having almost 15k writes. If I'd taken the easy steps to reduce writes i
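
        (The parent's numbers make for a tidy back-of-the-envelope check. Taking 10k rated cycles and ~8.7k average cycles consumed in 18 months as given:)

        rated_cycles   = 10_000   # per-cell erase cycles the chips were rated for
        avg_after_18mo = 8_700    # average erase count observed after 18 months

        cycles_per_month = avg_after_18mo / 18            # ~483 cycles/month
        months_to_rating = rated_cycles / cycles_per_month
        print(f"rated limit reached in ~{months_to_rating:.1f} months")   # ~20.7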

    • by 0123456 ( 636235 ) on Sunday September 25, 2011 @01:36PM (#37508844)

      Fortunately the Intel SSDs come with a 'wear indicator' showing how much life is left. Mine are all showing 99-100% life left, so unless I hit the Intel 320 8MB bug that randomly trashes the drive I don't see failure being a problem before I replace them.
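
      (If you would rather read that indicator yourself than trust a vendor tool, something like this works on Linux with smartmontools installed. The attribute name below, Media_Wearout_Indicator, is what Intel SSDs of this era usually expose as SMART attribute 233; treat that as an assumption and check your own drive's attribute table:)

      #!/usr/bin/env python3
      # Sketch: pull the wear indicator out of `smartctl -A` output.
      # 100 means a fresh drive; the value counts down toward 0 as cells wear.
      import subprocess

      DEVICE = "/dev/sda"   # adjust to your SSD

      out = subprocess.run(["smartctl", "-A", DEVICE],
                           capture_output=True, text=True, check=True).stdout
      for line in out.splitlines():
          if "Media_Wearout_Indicator" in line:
              print(line.strip())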

    • by vadim_t ( 324782 ) on Sunday September 25, 2011 @01:38PM (#37508852) Homepage

      So? HDDs also die. They're guaranteed to, in fact, since they have plenty of moving parts that will wear out eventually. I've had quite a few drives die on me.

      SSDs, at least in theory, wear out in a predictable manner and can deal with the effects without data loss. Since flash fails on write, an SSD could conceivably (I don't know if any do this) reach a point where it says "that's it, no more redundancy left, read-only access from now on", which is a whole lot better than a head crash.

      Everybody should have a backup plan, regardless of storage tech.

      • Re: (Score:2, Troll)

        by 0123456 ( 636235 )

        So? HDDs also die. They're guaranteed to, in fact, since they have plenty of moving parts that will wear out eventually. I've had quite a few drives die on me.

        HDDs usually fail gracefully starting with a few bad blocks, giving you time to get the data off. SSDs have a marked tendency to fail catastrophically and lose everything.

        • by DamonHD ( 794830 )

          I don't think you've been doing it right and having enough fun! I've experienced plenty of catastrophic HDD failures with little warning or possible recovery, including my last MacBook's internal HDD killed pretty abruptly by static AFAIK. (I managed to recover my SSH key, but that was about it.)

          Rgds

          Damon

        • HDDs usually fail by not spinning up, or by just not answering commands properly, in my experience. Which is from 100 to *Crash* before you know what's happening.

          On my fileserver I've had two disks stop working, with notices like:

          [1995429.300714] sd 11:0:0:0: [sdi] Unhandled error code
          [1995429.300718] sd 11:0:0:0: [sdi] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
          [1995429.300723] sd 11:0:0:0: [sdi] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
          [1995429.300738] end_request: I/O error, dev sdi, sector 0

          Because of RAID, I didn't lose any data, but the funny part was that even after it stopped working properly, SMART still reported it as fully functional.

          • by 0123456 ( 636235 )

            HDDs usually fail by not spinning up, or by just not answering commands properly, in my experience.

            Whereas I've never had any of those happen. Every hard drive failure I've seen has been easily predicted by looking at the SMART data for reallocated blocks.

            In fact, no drive has ever actually stopped working, probably because I get them replaced within a few days of the bad sectors appearing. Even my old laptop drive that had been sitting around in a box for a decade still worked when I plugged it in, though it had developed a bunch of bad sectors over that time.

            • Both of you are giving personal experiences, which don't really mean anything. I've had HDs slowly die where I yanked the data off first, and some that just decided one day not to work at all. Luckily I back things up, so I've not really lost anything of value since probably about 1995, when I tried transferring data from one computer to another via a big-ass stack of floppies and stupidly deleted the source data after putting it on the floppies, only to find out a couple were bad.
          • Google did a study a while back and concluded that once you start seeing SMART errors, the disks are 10x or so more likely to fail than ones without. But when it comes to HDDs, or really any storage medium, once you stop having complete faith in the unit, it's time to get a new one.

            I have a few HDDs that probably will work for some time, but since I don't trust them and I can't prove them to be reliable, they're going to be recycled.

            It kind of sucks, but the disks are a lot less expensive to

          • I've seen RAID-1 fail in such a way that the second drive fails within a few days of the first (twice)... IMHO it's urgent to replace that first failed drive quickly, as the second is likely to go soon. The first time, the second drive died before the RMA replacement made it back... For my NAS box, it's RAID-5 with a spare drive sitting next to the box, just in case.
          • Speaking of not answering commands properly, one of the reasons I switched from using my hardware RAID controller in my home server as an actual RAID controller to just having it pretend to be a bunch of SATA ports was actually one disk which managed to confuse the controller to the point where it would freeze up on its own or get the kernel to hang (FreeBSD btw).

            That one disk worked fine for about a year and a half, then it started to show write errors, but only when it was more than 80% or so full. And th

        • HDDs usually fail gracefully starting with a few bad blocks

          I have never had a hard drive fail in this way. I have never seen a SMART status go bad before I had a very sudden loss.

          I have had several drives that simply would not spin up one day (even after a trip to Mr. Freezer). I have had large swaths of data lost in catastrophic storms, barely able to recover half my data after multiple passes with recovery tools.

          I've been fortunate in that I've lost almost no data, primarily because I was able to recover

          • by TheLink ( 130905 )

            I have never had a hard drive fail in this way. I have never seen a SMART status go bad before I had a very sudden loss.

            What do you use to monitor SMART on your drives?

            • smartmontools [sourceforge.net]

              You can run it daily via cron if you like; I have to admit my approach is more manual.

              • by TheLink ( 130905 )
                I use smartd, and configure stuff to run short self-tests daily, long self-tests weekly, and send email notifications if "stuff happens".

                If you stick to a manual approach you might end up not checking often or regularly enough. That might explain why you never see a SMART status go bad before a sudden loss.

                Yes there can be sudden complete losses, but from my experience, the first time you get a sector, CRC or other problem, you usually have a few hours or even a few days before the drive fails completely.

                SM
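
                (If you do not want to set up smartd, the "short test daily, long test weekly" routine can be approximated with cron and a small script; a sketch, assuming smartmontools is installed and the disk is /dev/sda:)

                #!/usr/bin/env python3
                # Run from a daily cron job: kick off a short SMART self-test most
                # days and a long (extended) one on Sundays. Results appear later
                # in `smartctl -l selftest`; smartd does all this better.
                import datetime
                import subprocess

                DEVICE = "/dev/sda"   # assumed device node

                kind = "long" if datetime.date.today().weekday() == 6 else "short"
                subprocess.run(["smartctl", "-t", kind, DEVICE], check=True)
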
        • If you aren't taking backups, or at the very least using RAID, expect to lose data. "A few blocks", if they're the wrong blocks, is everything.
        • Psh. I've NEVER had an HDD die gracefully, and I've lost at least 20.

          I had a laptop drive die right in the middle of booting into Linux once. That was cool. Typed in my password, boomheadcrash.

          "What was that noise? Why is it taking so long to load?"

      • SSDs, at least in theory, wear out in a predictable manner and can deal with the effects without data loss. Since flash fails on write, an SSD could conceivably (I don't know if any do this) reach a point where it says "that's it, no more redundancy left, read-only access from now on", which is a whole lot better than a head crash.

        My boot drive failed by destroying half the Linux partition. I was able to copy off /etc, kernel config and a bunch of useful scripts and things, but most of it was just a bunch of unreadable sectors. Shortly afterwards the drive failed completely and was no longer recognised as a disk. It was just under a year old - I used the noatime option, swap was on an HDD and it was only about 3/4 full.

      • by izomiac ( 815208 )

        Since flash fails on write, an SSD could conceivably (I don't know if any do this) reach a point where it says "that's it, no more redundancy left, read-only access from now on", which is a whole lot better than a head crash.

        That's been my experience exactly. Every PC I've owned has "died" from a HDD crash, usually sudden. The last SSD I had hit its erase limit in about two years (small SSD and I'm prone to reinstalling various OSes monthly). The lovely thing was that I could run a maintenance tool and see exactly how many erases were left on each cell (BTW the wear leveling was only 1% from mathematically perfect). This allowed a simple extrapolation down to the day some cells would start hitting their advertised capacity,

      • by klui ( 457783 )

        Predictable in theory, but not necessarily in practice. When your metadata goes read-only on you, I think that's why some of the reported failures look like a corrupt file system or a drive that is not recognized at all. The ones about partitions being corrupted are a much bigger issue; you would think one wouldn't normally mess with partition metadata.

        Probably the file systems and low level utilities need to be updated to take into account SSD-type failures.

    • by SiMac ( 409541 ) on Sunday September 25, 2011 @01:38PM (#37508854) Homepage

      Does anybody have a backup plan for when their SSDs die? After all, unlike magnetic media, SSDs have a limited number of writes. AFAIK, none of them are rated yet for over a million writes, so they are bound to fail at some point.

      Buy a new SSD? SSD failure is predictable. If you're lucky, your firmware will not try to write to blocks that are past their rated # of write cycles and so when your SSD reaches the end of its lifespan, your data will become read only. Even if not, you can still tell very easily if you're approaching end of lifespan using SMART status. I suspect that SSD death is much more predictable than HD death...

      • Re: (Score:2, Flamebait)

        "If you're lucky, your firmware will not try to write to blocks that are past their rated # of write cycles"

        You would have to be very lucky, since such a creature does not exist.

        Maintaining a count of how many times any given cell has been written would take a lot more memory (not to mention processing power) than these devices contain.

        Instead, what they do is over-provision, so that a detected bad block is replaced with a spare. (Most hard drives do much the same thing.) However, there are only so many spares.

        As someone else mentioned: with any real luck your firmware might report what percentage of tho

        • by vadim_t ( 324782 ) on Sunday September 25, 2011 @02:26PM (#37509080) Homepage

          Maintaining a count of how many times any given cell has been written would take a lot more memory (not to mention processing power) than these devices contain.

          Bullshit.

          SSDs erase in extremely large blocks, like 256K. Having a counter per block is not a problem. It works out to 16K of memory per GB for a 32 bit counter per block.

           It probably doesn't even take any extra space, since a block probably already contains metadata and ECC, so a simple counter probably fits in there nicely. It won't even cause any extra wear, because the only time you want to change the counter is when the block is being rewritten anyway.
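
           (Spelled out, with the parent's assumptions of a 256K erase block and a 32-bit counter:)

           erase_block = 256 * 1024                   # assumed erase block size, bytes
           counter     = 4                            # 32-bit counter, bytes

           blocks_per_gib = (1024**3) // erase_block  # 4096 blocks per GiB
           ram_per_gib    = blocks_per_gib * counter  # 16384 bytes = 16K
           print(blocks_per_gib, ram_per_gib)
           # For a 256GB drive that is about 4MB of counters, tiny next to the
           # 256MB DRAM cache on the drive in the article.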

          • I stand corrected. GP did write blocks; I was thinking cells.

            On the other hand, large block sizes detract enormously from both the write efficiency and the lifetime of the unit. The smaller the block size, the faster the write time (in general), and the longer the life. But the smaller the block size, the greater the overhead of keeping track of the health of any block.
        • Re:Predictable? (Score:4, Insightful)

          by vadim_t ( 324782 ) on Sunday September 25, 2011 @04:56PM (#37509788) Homepage

          Different kind of failure. You're linking to firmware bugs. HDDs have those as well [mswhs.com].

          In this thread we're discussing wear-induced failure.

          • by TheLink ( 130905 )
            Do you really think the bulk of those 2-3% return rates (see the linked behardware articles) are due to wear-induced failures?

            a) If they are, then the "wear levelling" stuff sure isn't working well enough.
            b) If they aren't, then isn't it ridiculous to talk about wear-induced failures being predictable when the bulk of the failures are due to bugs and other faults? And judging from Google and user feedback, many of those don't seem as predictable.

            One might try to claim the RMAs are mainly due to PEBKAC but note tha
            • by vadim_t ( 324782 )

              But I'm not arguing about that. SSDs are new; bugs are expected until the issues are hammered out. However, both SSDs and HDDs share a considerable number of failure modes (circuitry, firmware, sensitivity to power problems in the computer), so on those parameters neither is intrinsically better than the other.

              Now, the one thing that is different is how the actual storage is done. SSDs can manage the underlying flash in a way that provides for somewhat elegant failure modes.

              Hard disks on the other hand, can't d

      • >>Buy a new SSD? SSD failure is predictable.

        Write failures are predictable and reportable.

        Unfortunately, the more common SSD failure mode is turning it on one day and it doesn't work. As I mentioned elsewhere, I read through thousands of comments on Newegg looking both at the overall score and the 1-egg ratings, to see if failure patterns emerged. One I remembered was a drive that would arbitrarily corrupt its data without a firmware update, but the firmware update also would wipe the drive...

    • This might be of interest to you:

      SSD Write Endurance 25nm Vs 34nm [xtremesystems.org]

    • by fuzzyfuzzyfungus ( 1223518 ) on Sunday September 25, 2011 @01:55PM (#37508960) Journal
      Because they are somewhat more expensive, an SSD failure is a little more painful than an HDD failure; but the basic rules of "don't trust a hard drive" really haven't changed.

      The mechanicals sometimes last a decade if you get lucky, or die within days of install if you don't. Moral of the story: if you store something on just one hard drive, you must not love that something very much. You'd better have backups.

      The shape of the failure probability/time graph is likely a bit different for SSDs; but the "You'd better have backups" message, and the available means of taking those backups are pretty much exactly the same.

      Again, because of the somewhat higher cost, burning your way through SSDs is a little more painful than burning your way through HDDs; but anybody whose plans involve just trusting a hard drive has always been doomed.
    • by jovius ( 974690 )

      What exactly happens when an SSD dies? Are the cells just read-only then, or complete garbage?

    • For the most part I only have applications and the OS on my SSD. On my laptop anything important is in my Dropbox directory, which syncs to my desktop... My desktop gets daily backups (plus whatever is in the Dropbox), and I have a few HDDs in my desktop as well, all also backed up. I have a 4x1TB NAS box (Synology), but have outgrown it, so I will be building an (up to) 11-drive NAS, probably based around FreeNAS, within the next few months.

      I've been a bit tepid in doing the upgrade to the homebrew nas solution,
      • grr... 8.5TB and 11.3 respectively usable... Need to proofread better.
      • As someone who has FINALLY gotten his FreeNAS to a point where I'm satisfied with its performance, I'd like to share a few pointers that may help you out...

        1.) Get a good SATA/SAS controller. I don't care how many SATA ports your motherboard has; ignore that entirely and get a good PCI-Express SATA controller. The one that was recommended to me - and that ultimately solved my problem - was an IBM BR10i force-flashed with the LSI IT firmware, along with a pair of SAS-SATA breakout cables. There are some

    • No drive has unlimited writes. They all die eventually, so the backup plan is the same as before: you either back up nothing and lose it, or back up often and don't worry when it dies.
    • "Does anybody have a backup plan for when their SSDs die? "

      Same plan as ever. If it matters, burn it to DVD at slow speed. If it's large and matters, copy to different computers and external hard drives.

    • by Surt ( 22457 )

      I have bad news for you about your magnetic media, which also has a limited number of writes.

    • by Jonner ( 189691 )

      Does anybody have a backup plan for when their SSDs die?

      Don't you mean "Does anybody have a backup plan for when their storage devices die?" Even if SSDs fail more quickly than hard drives (which isn't necessarily true any more), no storage device will last forever. Everyone should back up whatever he doesn't want to lose, regardless of what type of device he uses. I do.

    • by mcrbids ( 148650 )

      We use SSDs (OCZ Vertex 3) in our CentOS PostgreSQL database servers. The difference has been jaw-dropping. The SSDs, used as the partition the DB servers mount, so thoroughly stomp 15K SCSI drives that the performance ratio is something like 10:1. Server loads drop from 3 to 0.3, and everybody notices the speed.

      I put them into production over the summer, just for safety's sake I'll replace them over Christmas and keep the original set handy for emergencies.

      Oh, and we back up all our databases e

    • Does anybody have a backup plan for when their SSDs die?

      Fud, fud? Fud fud fud fud fud? Fud fud, fud.

      Does anybody have a backup plan for when their HDDs die? After all, unlike SSDs, stepper motor bearings get ground out of round eventually. AFAIK, none of the disks made in the last decade are going to be spinning much longer, so they are bound to fail at some point.

      • by cffrost ( 885375 )

        [U]nlike SSDs, stepper motor bearings get ground out of round eventually.

        Voice coils replaced stepper motors for HDD head actuation a couple of decades ago. Also, don't forget about fluid-dynamic bearings or ramped head parking/unloading, the latter of which reduces the occurrence of stiction (and head crashes).

        I'm not disputing that the laws of physics guarantee that all mechanisms will fail; I'm merely pointing out some friction-reducing technologies present in modern HDDs.

        PS: Mad props for not forcing apostrophes into pluralized initialisms.

  • Big questions (Score:5, Informative)

    by rsborg ( 111459 ) on Sunday September 25, 2011 @01:35PM (#37508832) Homepage

    1) How does this Samsung chipset compare vs latest Sandforce2 in terms of compressed read/writes?
    2) TRIM support?
    3) OSX friendliness?
    4) Cost?
    5) Size max?

    So far I've identified 2 use cases that have very nice sweet-spot answers: a) For a desktop with PCI-e, the OCZ RevoDrive3 X2 just gives amazing performance, completely bypassing SATA and delivering an unbeatable performance/cost ratio. b) For a laptop solution, I'm more interested in max storage/price/performance, and the 512GB Crucial m4 seems unparalleled in delivering this (expensive at $700, but it can completely replace a laptop HDD).

    It will be interesting to see if Samsung is ready to challenge this market.

    • by Nemyst ( 1383049 )

      AnandTech has a review [anandtech.com] up. To your questions: the drive has TRIM support; Samsung has been the SSD manufacturer of choice for Apple, so I'd say OSX support is a given; costs will be in line with the SSD 470, which is within the range of the OCZ Vertex 3, Crucial m4 and Intel SSD 510; and it goes up to 512GB, which is the capacity that was reviewed.

      Notably, they say that this is the first really exciting release by Samsung. Apparently, garbage collection is delayed to moments with low IO activity, making to

  • Price??? (Score:4, Informative)

    by Rick Richardson ( 87058 ) on Sunday September 25, 2011 @01:39PM (#37508856) Homepage
    "It's really hard to rate a solid-state drive (SSD) without knowing its exact pricing, and that's just what we had to do with the Samsung 830 series. Samsung has been very tight-lipped about how much the 830 costs and will not reveal that until the drive is available for purchase in October." - CNET
  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Sunday September 25, 2011 @01:43PM (#37508880)
    Comment removed based on user account deletion
