Data Storage Upgrades

OCZ RevoDrive 400 NVMe SSD Unveiled With Nearly 2.7GB/Sec Tested Throughput (hothardware.com) 117

MojoKid writes: Solid State Drive technology continues to make strides in performance, reliability and cost. At the CES 2016 show there were a number of storage manufacturers on hand showing off their latest gear, though not many made quite the splash that Toshiba's OCZ Technology group made with the announcement of their new RevoDrive 400 NVMe PCI Express SSD. OCZ is tapping Toshiba's NVMe controller technology to deliver serious bandwidth in this consumer-targeted M.2 gumstick-style drive, which also comes with an x4 PCI Express adapter card. The drive's specs are conservatively rated at 2.4GB/sec for reads and 1.6GB/sec for writes in peak sequential transfer bandwidth. IOPS are rated at 210K for reads and 140K for writes, respectively. In the demo ATTO test they were running, the RevoDrive 400 actually peaks at 2.69GB/sec for reads and also hits every bit of that 1.6GB/sec write spec for large sequential transfers.
This discussion has been archived. No new comments can be posted.


  • by ledow ( 319597 )

    Why the huge PCIe card for such a tiny device on a relatively unpopulated PCB?

    • Re: (Score:3, Informative)

      by Anonymous Coward

      It's primarily an M.2 drive (look it up). M.2 is only on very new mobos, so to actually sell the things they ship it with a PCI-E adaptor.

    • by jon3k ( 691256 )
      The PCIe card is functioning as an M.2 adapter. That little card is an M.2 card plugged into a slot on the PCIe card.
    • The PCIe card is also useful if you ever need to recover the data from the M.2 drive. Most of us don't have a second system or an external drive case or adapter that can handle an M.2 drive; putting the module in the PCIe card lets you remove the drive from your laptop and put it in a desktop system to read if necessary.
  • by Anonymous Coward

    I had two of these 120 gigabyte SSDs, one with the old firmware and one with the new firmware, which reduced the usable size of the drive.

    Both failed spectacularly.

    • Re: (Score:2, Offtopic)

      by nuckfuts ( 690967 )

      I had two of these...Both failed spectacularly.

      I think you mean "Twice bitten, once shy".

    • by lucm ( 889690 )

      I've had a few SSD drives die on me too, and every time it was OCZ junk. Never had any problem with Intel ones.

      I'd rather use a refurbished IBM DeathStar than put anything OCZ in my computers ever again.

      • "Never had any problem with Intel ones."

        Aren't the Intel SSDs the drives that fail by bricking, as opposed to failing into read-only mode?

        • But only after being written to continuously for several years. Given that the amount of data written to that drive was well beyond what a normal person would ever do, I wouldn't be worried.
    • Every OCZ 120 gig SSD I owned failed. I no longer use them even if they are better now that Toshiba owns them.
    • I have had similar problems with OCZ drives, and no matter how fast and sexy I wouldn't touch them again.

  • by butlerm ( 3112 ) on Sunday January 10, 2016 @05:43AM (#51271795)

    I think it is more than a little amusing that anyone cares about performance numbers on a drive like this without first asking whether the drive provides any assurance that it won't catastrophically lose all your data, if not be bricked permanently, due to a simple power loss. That seems to be par for the course with most of the SSDs on the market. Preserving your data? That is an enterprise feature.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      regular spinning drives don't provide any assurance against catastrophic data loss, either. you should always have backups.

      • by TeknoHog ( 164938 ) on Sunday January 10, 2016 @06:48AM (#51271877) Homepage Journal

        regular spinning drives don't provide any assurance against catastrophic data loss, either. you should always have backups.

        When the power suddenly goes out, regular spinning drives don't generally lose everything that's already on platters. With SSDs, their internal state is more in flux, as older blocks can still be reorganized all the time. Also, there's much more logic between the actual storage and the outside interface, so a bad controller can easily make everything inaccessible, even if the data is still there in the storage medium.

        • > When the power suddenly goes out, regular spinning drives don't generally lose everything that's already on platters.

          I get that you are saying SSDs will fail more often on losing power, but at Backblaze we regularly see spinning drives catastrophically fail when we power them down NICELY. Something about cooling off, then reheating (we suspect, not sure).

          The bottom line is ALL DRIVES FAIL. You HAVE to backup. You WILL be restoring from backup. And so unless the failure rate is high enough to b
          • by butlerm ( 3112 )

            > The bottom line is ALL DRIVES FAIL. You HAVE to backup. You WILL be restoring from backup.

            Guess what? The entire modern financial system revolves around the proposition that block devices do not lose their entire contents during a power failure. "Sorry, we wired a large sum of money somewhere but don't quite recall where" doesn't quite fly in the real world.

            The quaint notion of a data "backup" does not suffice when you can't lose any data recorded since your last one. Such as email messages recently sent and received, for example.

            • by radish ( 98371 )

              I worked in financial tech for 16 years. You are correct in that the levels of data reliability required are much higher than in many other situations, and that traditional backups don't suffice for every use case (specifically in flight or recently written data).

              However, you avoid that issue by duplication. The underlying device technology is utterly irrelevant because not only could your spinning disk fail on a power outage, it could fail because the server room got filled with super heated steam or the e

              • by butlerm ( 3112 )

                > And when you're doing that - when you're really designing for failure - the type of disk you use really doesn't matter.

                In principle, sure. In practice you don't want a power glitch in your data center to potentially corrupt every disk in the facility beyond the point of recovery. You can't recover your site from a remote location if none of your systems will even boot.

                This could happen at an ordinary office location as well. Lights on and off a couple of times, and everyone's desktop is a brick. I

            • > The bottom line is ALL DRIVES FAIL. You HAVE to backup. You WILL be restoring from backup.

              Guess what? The entire modern financial system revolves around the proposition that block devices do not lose their entire contents during a power failure. "Sorry, we wired a large sum of money somewhere but don't quite recall where" doesn't quite fly in the real world.

              The quaint notion of a data "backup" does not suffice when you can't lose any data recorded since your last one. Such as email messages recently sent and received, for example.

              Going out of your way to make borderline defective devices that will force users to resort to backups of varying degrees of staleness simply because you wanted to save a few cents on manufacturing costs isn't common sense, it is more like engineering malpractice.

              I would presume that pairs of SSDs would be RAID connected, and as was the case with some RAID systems, the cache to the drives was (lead) acid battery backed up, along with sufficient power to ensure the last operation was safely completed.

              • by Agripa ( 139780 )

                I would presume that pairs of SSDs would be RAID connected, and as was the case with some RAID systems, the cache to the drives was (lead) acid battery backed up, along with sufficient power to ensure the last operation was safely completed.

                I am a little surprised nobody makes an interposer for the SSD power connection so that power to the drive is maintained long enough for the SSD to complete any internal operations. I wonder though if this would be enough if the SSD implements scrubbing since i

          • by Agripa ( 139780 )

            The failure mode when unexpectedly being powered down is specific to SSDs and how they use Flash memory.

            SSDs, like hard drives, include a data structure for translating virtual to physical sectors. On hard drives this is used for masking bad physical sectors and "relocating" them, but on SSDs every sector has to be translated because of how the erase and write cycle works, and every write updates this table. Flash memory, however, can corrupt data *other* than the data being written when power is lost, so if po
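
            (A rough sketch of the mapping the parent describes, in hypothetical Python; the class and field names are invented for illustration, not anything from real SSD firmware. Every write lands on a fresh flash page and then updates the logical-to-physical table, so losing that table on power failure can strand data that was never itself being written.)

            # Toy flash translation layer (FTL) -- illustrative only, not real firmware.
            class ToyFTL:
                def __init__(self, num_pages):
                    self.flash = [None] * num_pages  # physical pages (write-once until erased)
                    self.mapping = {}                # logical sector -> physical page
                    self.next_free = 0               # naive log-structured allocator

                def write(self, sector, data):
                    # Flash can't overwrite in place: write to a new page, then update the map.
                    # If power dies before the updated map is made durable, old and new data
                    # can both become unreachable.
                    page = self.next_free
                    self.next_free += 1
                    self.flash[page] = data
                    self.mapping[sector] = page

                def read(self, sector):
                    page = self.mapping.get(sector)
                    return None if page is None else self.flash[page]

            ftl = ToyFTL(num_pages=8)
            ftl.write(0, b"hello")
            ftl.write(0, b"hello v2")   # the page holding "hello" is now orphaned, not erased
            print(ftl.read(0))          # b'hello v2'
            ftl.mapping.clear()         # simulate losing the map on power failure
            print(ftl.read(0))          # None: the bits are still in flash, but unreachable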

        • Since I have been using SSDs for 5 years or so with zero data loss I'm going to call bullshit on your claim.
          • by butlerm ( 3112 )

            I haven't had a hard drive fail hard on me since the IBM DeathStar drives from the early 2000s - the drives that caused them to get out of the hard drive business.

            Should that overwhelming anecdotal evidence convince the reader that hard drives do not catastrophically fail, not ever? Isn't a sample size of a couple of dozen drives enough to make an accurate generalization to a global population? Or am I just lucky?

            • Dear moron. (You are a moron for even thinking you could make such a claim fly on Slashdot.) The point isn't a claim that SSDs never fail. Nobody is making that claim except you, who is merely advancing a Red Herring. The point is the claim that they are unreliable is bullshit. Yes hard drives fail. Yes SSDs fail. No, SSDs are not less reliable than rotating media when used by competent people.
              • by butlerm ( 3112 )

                Most SSDs are garbage from an engineering perspective. No one in his right mind would use them to store important data, not in a RAID or any other configuration, if he can avoid it. They are unreliable and when they fail it is generally not a lost sector or two here and there, it is their entire contents.

                This is nothing but engineering malpractice. A mirrored pair of real hard drives is not generally susceptible to catastrophically losing all your data at a moment's notice. RAID solves that problem. Whe

                • As I said, you are an idiot. Now off you go ....
                  • by butlerm ( 3112 )

                    You can't defend the indefensible. Consumer SSD manufacturers purvey millions of devices with known catastrophic failure modes that could be remedied with a few cents in extra parts.

                    Sort of like if the engineers of the Tacoma Narrows Bridge put it into mass production. It only fails during wind storms you see.
                    https://www.youtube.com/watch?... [youtube.com]

                • Most SSDs are garbage from an engineering perspective. No one in his right mind would use them to store important data, not in a RAID or any other configuration, if he can avoid it. They are unreliable and when they fail it is generally not a lost sector or two here and there, it is their entire contents.

                  This is nothing but engineering malpractice. A mirrored pair of real hard drives is not generally susceptible to catastrophically losing all your data at a moment's notice. RAID solves that problem. Whereas on most SSDs a simple power loss will destroy all the data in your RAID group simultaneously. Say goodbye to everything stored since your most recent backup.

                  RAID means Redundant Array of Inexpensive Drives. It doesn't work with a Redundant Array of Defective Drives, which is what most SSDs are, in this most crucial respect.

                  Simple SSD RAID systems will fail as you indicated. However, other RAID systems that need fast access and reliability have the SSD drives separately powered via battery backup to the drive(s), and they also include extended ECC checking, SMART technology and so forth to detect failures and pending failures.

                  SSDs will replace spinning drives, and it is a fact of life. Should the same argument be made that DDR3 DIMMs will fail catastrophically, causing the entire system to be lost, and therefore all critical

                  • by butlerm ( 3112 )

                    Well designed SSDs with power loss protection and a strong internal software architecture are wonderful, perhaps the best thing to happen to the database world in decades. No disagreement there. My complaint is with the self destruct on power loss variety.

          • The first SSDs I had (two of them) were OCZ - both failed, and did so suddenly after 9-12 months and without prior warning, resulting in total data loss on the drives (thankfully, I have good backups). I've tried other vendors and had similar misery. I bought an expensive workstation about 5 years ago with an Intel SSD, and I'm still using this now, in fact, I'm typing this reply on that machine. Sure, it's had hard power-offs when things have locked up occasionally, and its wear level is slowly decreasing,
            • by Wolfrider ( 856 )

              > I bought an expensive workstation about 5 years ago with an Intel SSD, and I'm still using this now, in fact, I'm typing this reply on that machine

              --Around the 5-year mark, might be worth your while to price the latest Intel SSDs; buy a new one, copy your data over, and start using the older SSD as a secondary or backup drive. It may last quite a while longer, but by replacing it before it fails you will actually be ahead of the curve ;-)

                • Thanks for the tip - I'm aware that I should replace the drive, which I will at some point soon. A new one would be faster, not to mention have a lot more space (GB/$ is much better now). I've never found an imaging solution that works on Windows 7 that can transfer data from a HDD/SSD to another SSD, so the inertia is making me put this off - it'd take me about 3 days to reinstall everything from scratch. I've tried Paragon Migrate OS to SSD, Acronis, Ghost (admittedly, older versions of those two), and
                • by Wolfrider ( 856 )

                  --Hmm. I've had good luck with Acronis in the past, and WD has free transfer software based on it. Admittedly I have only done HD -> HD instead of HD -> SSD so far. Try booting the system normally, but with the SSD attached to an extra SATA port as a spare drive, so the OS loads the driver 1st before trying OS migration... Also try checking the SSD mfr's website, they may have migration software, or try EaseUS - and feel free to email me, I'd be interested to see how it goes.

                  Refs:
                  http://www.easeus.c [easeus.com]

        • by mlts ( 1038732 )

          I notice Intel enterprise SSDs have capacitors on them, and I'm guessing it is there so the drive has enough power to complete any in-flight writes (or at least find a stable, consistent point on the block/page level to stop and power down.) This makes me guess that for the hard-power off issue, this is a solved problem.

          Non-enterprise drives, who knows.

          • by butlerm ( 3112 )

            Internally, all SSDs essentially implement a database of disk sectors, the equivalent of a log structured filesystem with a single file, due to the inability to overwrite existing data in place. A power loss without backup capacitors places the integrity of that database at risk, which is why any power loss can lead to the loss of completely unrelated areas of the logical device, not just areas with pending in flight writes, and often the contents of the device in its entirety.

            It is conceivable that with e
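
            (For illustration, a minimal sketch of the ZFS-style idea being alluded to, in hypothetical Python with made-up names: if every appended record carries its own logical address and sequence number, the sector map never needs to be written separately and can be rebuilt by replaying the log after a crash.)

            # Illustrative only: rebuilding a sector map by replaying an append-only log.
            # Assumes each appended record is atomic and carries (sequence, sector, data).
            log = []

            def append_write(seq, sector, data):
                log.append((seq, sector, data))

            def rebuild_mapping(records):
                mapping = {}
                for seq, sector, data in sorted(records):  # highest sequence number wins
                    mapping[sector] = data
                return mapping

            append_write(1, 0, b"old root")
            append_write(2, 7, b"journal entry")
            append_write(3, 0, b"new root")                # supersedes sequence 1
            # "Power loss": any in-RAM map is gone, but the log itself survives.
            print(rebuild_mapping(log)[0])                 # b'new root'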

            • by tlhIngan ( 30335 )

              It is conceivable that with extremely careful design an SSD could be designed to provide power loss protection without backup capacitors but apparently no one has managed to do that yet - or at least not with adequate performance. Take a good look at the design of something like ZFS for a clue as to how one might go about doing that.

              Actually, the performance issues have been solved - because SATA is a bottleneck. SATA-III can only do about 500-540MB/sec, which is why all SSDs using SATA pretty much quote th

            • by Agripa ( 139780 )

              Internally, all SSDs essentially implement a database of disk sectors, the equivalent of a log structured filesystem with a single file, due to the inability to overwrite existing data in place. A power loss without backup capacitors places the integrity of that database at risk, which is why any power loss can lead to the loss of completely unrelated areas of the logical device, not just areas with pending in flight writes, and often the contents of the device in its entirety.

              It is conceivable that with ex

    • And will provide you with all the power loss protection you might need.

      Is the threat of power loss and the data loss risk associated with it somehow a new thing that people haven't been concerned with and dealing with relatively inexpensively for a long time?

      I think you could also make an argument that SSDs might be less likely to lose data vs. spinning rust, as their greater speed means that disk transactions are more likely to be completed faster, leaving the device idle more often and thus less likely to

      • UPS doesn't protect you from power supply failure. I've had about five of those so far.

      • by vadim_t ( 324782 )

        An UPS won't protect against intentional power cycling.

        Let's say you have a problem that really needs a reboot. Perhaps the UI locked up, or the machine is swapping so much it's unusable, or the video card crapped out and you can't see anything. Is it safe to hit the power button and reboot?

        In the worst SSD implementations the problem is not the swap file getting corrupted, it's the metadata that keeps the SSD itself functional getting corrupted, which risks making the entire drive unusable.

        • Is it safe to hit the power button and reboot?

          No, never; that's why you have a reset button on the motherboard. The failures associated with SSDs are about the inability to write cached data being worked on out to the drive. That's independent of what the computer is doing and a function of the controller itself, which means it should maintain its state quite happily when you hit the reset button. The only difference in enterprise quality drives which don't have this failure mode is some capacitance that keeps power on the drive for a fraction of a second l

      • by butlerm ( 3112 )

        UPS power isn't expensive, just a hundred dollars and ten or twenty pounds of lead acid batteries and special communication cables and automatic shutdown software to make up for the failure of SSD manufacturers to include a gram or two of capacitor protection.

        Real hard drives, by the way, are not at risk of losing their entire contents in an unexpected power loss. Most SSDs are. Not to be used if you actually want your data to still be there when you return in the morning.

        The reason why is that SSDs requi

    • by mlts ( 1038732 )

      SSDs do fail, but they fail in different ways than HDDs. It is wise to have backups regardless of what one's primary media is, but SSDs are nice in the fact that they can take environmental abuse better than a HDD. It is still not good to drop one, but if one drops an external SSD while it is plugged in, it is almost certain to continue working. Drop a HDD, who knows.

      I would say that the benefits of using an SSD as primary storage well overcome their drawbacks. Just the fact that they don't have that bottleneck of

      • by butlerm ( 3112 )

        SSDs that do not put any of their contents at risk during a power loss except possibly areas addressed by recent, unflushed sector writes are probably the best thing that has happened in enterprise hardware in the past decade.

        It is the unnecessarily flaky way consumer grade SSDs are designed and manufactured, to the great concern of approximately no one, that is annoying.

  • Not a fan (Score:3, Insightful)

    by jargonburn ( 1950578 ) on Sunday January 10, 2016 @06:13AM (#51271833)
    I'm pretty sure every drive (small sample size) I've used from OCZ has failed within two years. And half the RAM.

    I don't use them, anymore; I got tired of them using me.

    • Re: (Score:2, Informative)

      by Mashiki ( 184564 )

      OCZ is gone, they're owned by Toshiba now. Both my first-gen OCZ SSDs are still working to this day, they're nearly 8 years old at this point, and both PSUs that I bought from them are still chugging along. It was later, when the president of OCZ decided to start cutting corners and pissing over everyone, that it turned to shit.

    • I've had a 100% success rate. Okay, sample size of 1: my Vertex 3 is still going strong, but you're right, OCZ had a well-deserved reputation... Which is also why they went bust and were bought out by Toshiba.

      The only thing I can't figure out is why Toshiba kept the name. It's not like the OCZ brand had any good value. They are no longer the same drives, not the same controllers, not the same company, but for some reason they still have the shitty name attached to them.

      • 100% success with a sample size of 3

        50GB Vertex 2 spent time in my PC, my laptop, my unRAID server as a cache, and now sits in an enclosure but is never really used

        60GB Vertex 2 spent time in my PC, my laptop, and my unRAID server as a cache, but is no longer being used as it was replaced with a 512GB Crucial

        120GB Vertex 3, sitting in my PC right now

    • These aren't OCZ drives. The only OCZ thing about them is the name. Toshiba bought out the company when it filed for bankruptcy two years ago.
    • Sample of 1 also: a Mushkin Chronos Deluxe has been going nicely for 4 years; after one XP install and 3 Win7 installs (with lots of power failures and whatnot in between) it keeps working nicely, and now it's retired to an old Macbook; just today I did the "upgrade" after a Parted Magic secure erase.

      Now I have an EVO 850 in the PC; hope it delivers the same value as that little SSD, the best hardware upgrade ever. I don't think I'm buying spinning rust again, maybe one of those Caviar Reds for a NAS if I ever need one.
  • by swb ( 14022 ) on Sunday January 10, 2016 @07:16AM (#51271911)

    While I'm sure there's a whole world of forum posters with their disk benchmarks in their warlording signature who go for this kind of thing because they want to be the guy with the best benchmark numbers, what's the actual performance gain in a typical kind of scenario vs. a SATA3 SSD?

    These kinds of sequential benchmarks don't really tell me how much real-world time something like this will shave off booting a computer, launching an application, etc versus a more conventional solution.

    • by Kjella ( 173770 ) on Sunday January 10, 2016 @09:29AM (#51272213) Homepage

      These kinds of sequential benchmarks don't really tell me how much real-world time something like this will shave off booting a computer, launching an application, etc versus a more conventional solution.

      About as much faster as you'd get to work in a race car. If you're not doing anything storage intense, the PCIe bandwidth is not going to make much of a difference. Same with NVMe, main advantage is at big queue depths. This technology isn't coming because consumers are demanding it, but because of enterprise needs. Now they're looking for prosumers who are willing to pay more, but not quite enterprise money for performance. I'm guessing eventually it'll trickle down to consumers since PCIe and NVMe are far more natural interfaces for SSDs than SATA but it won't make much of a real world difference. A bit like DDR4, it doesn't really offer much over DDR3 for consumers but because the enterprise needs it, it's trickled down to Skylake.

      • If you're not doing anything storage intense, the PCIe bandwidth is not going to make much of a difference. Same with NVMe, main advantage is at big queue depths.

        Actually besides queue depth, a lot of the benefit comes from reducing host CPU usage, contention, latency and context switching. AHCI has a single global queue (of pretty limited depth) and so multiple threads doing IO need to either block or else incur the overhead of bouncing the IO to another thread. For your hypothetical enterprise application actually saturating on 16 cores with 32 threads, Amdahl's law starts to actually become an issue. In NVMe, each physical CPU core has its own personal NVMe comma
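
        (A rough, hypothetical sketch of that structural difference in Python; the queue depths are taken from the AHCI and NVMe specs, everything else is invented for illustration. AHCI funnels every core into one shallow shared queue, while NVMe gives each core its own deep submission queue.)

        from collections import deque

        # AHCI-style: a single shared command queue (NCQ depth 32); every submitting
        # core contends on the same structure (and, in a real driver, the same lock).
        class SharedQueue:
            DEPTH = 32
            def __init__(self):
                self.queue = deque()
            def submit(self, cpu, io):
                if len(self.queue) >= self.DEPTH:
                    return False           # caller has to block or bounce the IO elsewhere
                self.queue.append((cpu, io))
                return True

        # NVMe-style: one submission queue per core (up to 64K entries each), so a
        # core can submit without touching any other core's queue.
        class PerCoreQueues:
            DEPTH = 65536
            def __init__(self, num_cores):
                self.queues = [deque() for _ in range(num_cores)]
            def submit(self, cpu, io):
                q = self.queues[cpu]
                if len(q) >= self.DEPTH:
                    return False
                q.append(io)
                return True

        ahci = SharedQueue()
        nvme = PerCoreQueues(num_cores=16)
        for cpu in range(16):
            ahci.submit(cpu, "read")       # all 16 cores funnel into one queue
            nvme.submit(cpu, "read")       # each core lands in its own queue
        print(len(ahci.queue), [len(q) for q in nvme.queues])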

    • by ddtmm ( 549094 )
      So they post the specs and the first thing people talk about is how specs don't mean anything. Is shaving off boot time really your highest priority? Most people don't boot more than once a day, if not once a week. So what if it takes 2 more seconds to boot compared to a SATA3 drive. Most people's usage would never see the difference between a SATA1 and SATA3 SSD, and for the ones that really do need serious read/write like video editors or music composers with huge sample libraries, the 2.4GB/s speed is mass
    • Unless you're doing something storage-intensive with very large files (e.g. real-time video editing), there's very little benefit. The problem is we perceive computer speed in terms of how much time we have to wait, and MB/s is the inverse of wait time (sec/MB). So each doubling of MB/s only results in half the decrease in wait time of the previous doubling. Imagine you needed to read 1 GB of sequential data.

      125 MB/s HDD = 8 sec
      250 MB/s SATA 2 SSD = 4 sec
      500 MB/s SATA 3 SSD = 2 sec
      1 GB/s PCIe SSD = 1 sec
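
      (The arithmetic is easy to reproduce; the 2.4 GB/s row below is just the RevoDrive's quoted read spec added for comparison.)

      # Wait time for a 1 GB (1000 MB) sequential read at various throughputs.
      # Each doubling of MB/s halves the wait, so the absolute saving keeps shrinking.
      size_mb = 1000
      for label, mb_per_s in [("125 MB/s HDD", 125),
                              ("250 MB/s SATA 2 SSD", 250),
                              ("500 MB/s SATA 3 SSD", 500),
                              ("1 GB/s PCIe SSD", 1000),
                              ("2.4 GB/s NVMe SSD", 2400)]:
          print(f"{label:22s} {size_mb / mb_per_s:5.2f} s")
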
  • OCZ = Toshiba (Score:5, Informative)

    by Gravis Zero ( 934156 ) on Sunday January 10, 2016 @07:33AM (#51271925)

    So I thought I recalled that OCZ went bust, and I was right, because this isn't the same OCZ; it's OCZ Storage Solutions, not OCZ Technology Group. [wikipedia.org] Basically, Toshiba bought up the remnants of the company, took the name and logo, and founded OCZ Storage Solutions on January 21, 2014. So of course they are using Toshiba's this and that, because they are Toshiba.

    • Re:OCZ = Toshiba (Score:4, Insightful)

      by TeknoHog ( 164938 ) on Sunday January 10, 2016 @09:14AM (#51272173) Homepage Journal
      Why on Earth would they use the infamous OCZ name when Toshiba was a perfectly good hard drive brand (at least until the Hitachi deal)?
      • "OCZ RevoDrive" does get brand recognition as a fast SSD on a PCIe card.

        • Any advantage that the RevoDrive brand may have is completely undone by putting OCZ in front of it. Seriously, OCZ is a company which failed spectacularly in reliability and in dealing with customers. They should call it Toshiba RevoDrive and get away from the stigma associated with the asshats at OCZ who screwed customers out of warranty claims in any way they could while having orders of magnitude higher unreliability figures.

      • They were infamous for QA problems, but despite that the brand is still recognized for being fast. Toshiba probably thinks they can rehabilitate the brand, but if they screw it up, it won't hurt Toshiba's brand name.

    • I read OCZ and think bad quality and lost data. They should have used a different name.

  • I was one of the ones burned by low reliability on early OCZ SSD drives. I know they were bought by Toshiba, so things might have changed. However, most SSDs are so fast these days for a desktop that I'd rather trade some of the speed for known reliability. Unfortunately, it is hard to get a metric for reliability that you can trust.
    • In general though, a bigger, more recent SSD ought to be more reliable than a smaller, older one.
      Bigger is more reliable: more spare blocks, file system not full, and more consistent write performance. But bigger also means faster, though you can rather quickly hit the interface's speed limit.
      So, I'd like a slow and reliable drive too, but I don't think there is much point to it.

  • I continue to think of the name OCZ as the SSD equivalent to a Yugo auto, even though they have new owners. I will not buy one.
    • by Dog-Cow ( 21281 )

      So even though you know better, you're going to ignore them. Thanks for sharing?

        • You're welcome. My muddled point was that OCZ still has a long way to go to remove the bad taste of the 80% failure rate on the OCZ SSDs that I purchased years ago.
          • Why would one buy lots of SSDs of just one brand? Back in that time the most relevant info for purchasing one was the controller and the cell technology, and OCZ's problems were known very early in the product lifecycle; it's not their fault that you didn't do your homework. After all, if it were only about brand, I had OCZ pegged as another "gamer" focused company with no substance or history that appeared from nowhere. Maybe I had the wrong perception of the company... oh wait, no, they failed hard and went bankrupt.
    • I continue to think of the name OCZ as the SSD equivalent to a Yugo auto, even though they have new owners.

      Why, do you have hard numbers on failure rates of Toshiba OCZ SSDs to back up your assertion?

      I will not buy one.

      Why, because they have the name of a company which no longer exists, which made a product with a shit storage controller that is no longer used, slapped onto memory modules which no longer come from the same manufacturer?

  • IOPS are rated at 210K for reads and 140K for writes, respectively.

    In any SSD lacking infallible power-loss-protection, IOPs should never be cited at a queue depth > 4.

    Of course, we can make a small exception for rainbow-eating Unicorns, whose data-center workloads don't require a corresponding level of data-center fault tolerance.

  • I have not had any non-OCZ drive fail. I bought an OCZ drive a couple of years ago and within two weeks of relatively light duty (Linux boot drive) it bricked.

    Last year I was working on a project and the machine they gave me had an OCZ enterprise class drive. Within two weeks the drive was corrupting data, rendering the machine unusable. I will never buy another OCZ drive again. I still have two OCZ drives but they are backed up daily.

  • I wonder why so many in a tech-savvy and marketing-averse community got so burned by a company that only lived through its marketing, ignoring the writing on the wall about its reliability, which was known almost from the beginning of its failed SSD line. I agree that they should have used the Toshiba brand, but I imagine they are using that one for the enterprise market.
