
Intel's New Desktop SSD Is an Overclocked Server Drive

crookedvulture writes "Most of Intel's recent desktop SSDs have followed a familiar formula. Combine an off-the-shelf controller with next-gen NAND and firmware tweaks. Rinse. Repeat. The new 730 Series is different, though. It's based on Intel's latest datacenter SSD, which combines a proprietary controller with high-endurance NAND. In the 730 Series, these chips are clocked much higher than their usual speeds. The drive is fully validated to run at the boosted frequencies, and it's rated to endure at least 70GB of writes per day over five years. As one might expect, though, this hot-clocked server SSD is rather pricey for a desktop model. It's slated to sell for around $1/GB, which is close to double the cost of more affordable options. And the 730 Series isn't always faster than its cheaper competition. Although the drive boasts exceptional throughput with random I/O, its sequential transfer rates are nothing special."


  • by edmudama ( 155475 ) on Thursday February 27, 2014 @10:17PM (#46365015)

    Hard for any SATA drive to distinguish itself on sequential transfers, given that SATA is capped around 550MB/s
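
    (For reference, a rough sketch of where that ~550MB/s ceiling comes from - SATA 6Gb/s uses 8b/10b line encoding, and the ~10% protocol overhead below is an assumed ballpark rather than a spec figure:)

      line_rate_bps = 6e9            # SATA III signalling rate
      encoding_efficiency = 8 / 10   # 8b/10b line encoding
      overhead = 0.10                # assumed framing/protocol overhead (ballpark)
      raw_mb_s = line_rate_bps * encoding_efficiency / 8 / 1e6   # ~600 MB/s payload
      usable_mb_s = raw_mb_s * (1 - overhead)                    # ~540 MB/s in practice
      print(round(raw_mb_s), round(usable_mb_s))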

    • by Luckyo ( 1726890 ) on Thursday February 27, 2014 @10:28PM (#46365069)

      Pretty much this. The vast majority of SSDs on the market today are very similar in terms of speed in normal usage, because the bottleneck is now SATA. You can overclock it all you want, but you'll need to start pushing disks to PCI-E or a similar bus for it to start to matter.

      And then there's the whole issue of "does it really matter when it's this fast on desktop?"

      • Too bad that DMI is only PCI-E 2.0 x4 and all chipset I/O has to go over that. And Intel chips have limited PCI-E I/O.

        • You can cheat with bridges, which may be acceptable, or play with the PCIe lanes - dropping the graphics card to PCIe 3.0 x8 so you can use the other physical x16 slot is a good deal.
          Now the trouble is that the expected new connector, SATA Express (which ought to become common and cheap, 2x PCIe 2.0), won't be present on the yet-to-be-launched Z97 and lower chipsets. Maybe because of that DMI bus limit, or because they were lazy and risk averse. It had previously been announced that it would launch with that chipset.
          So the arrival of SATA Expr

      • No kidding (Score:5, Insightful)

        by Sycraft-fu ( 314770 ) on Friday February 28, 2014 @01:44AM (#46365609)

        What you discover with SSDs is that for desktop usage pretty much any drive is "fast enough" and that faster doesn't much matter. I went from a SATA-2 SSD that was fairly slow even for that generation (WD SiliconEdge) to a SATA-3 SSD that is fairly fast for this generation (Samsung 840 Pro) and I don't notice any difference. I can benchmark a difference, but I don't see any difference in load times and so on. SSDs are fast enough that they are no longer the bottleneck.

        That's also why there isn't a ton of interest in the PCIe SSDs. You can get way more performance, but it is a somewhat limited set of scenarios (on the desktop at least) where that would matter.

        • Re:No kidding (Score:5, Informative)

          by Luckyo ( 1726890 ) on Friday February 28, 2014 @02:46AM (#46365749)

          The much bigger reason for the lack of interest in PCI-E SSDs is the inability of that interface to pass on TRIM commands in current implementations. In home use that matters far more for speed over the drive's lifetime than theoretical read and write times.

          • Re:No kidding (Score:5, Informative)

            by nateman1352 ( 971364 ) on Friday February 28, 2014 @04:54AM (#46366079)

            Citation Please.

            In truth, current-gen PCIe SSDs [sata-io.org] appear to the OS as a PCIe-connected AHCI controller with a single disk that supports TRIM. That makes it completely transparent... it works exactly the same as a SATA SSD from a software perspective.

            Pretty soon we will start seeing next-gen PCIe SSDs that expose themselves as an NVMe controller [nvmexpress.org] instead of an AHCI controller. Those SSDs will be backwards-incompatible with AHCI, but the command protocol and DMA interface enable extreme parallelism, so we will see pretty incredible performance from them. From a software-stack perspective they use a new NVMe host controller and a new command set (ATA commands are completely gone!), so you need new drivers. There are OSS Win7/8/8.1 drivers available for NVMe, but due to kernel limitations only the Win8/8.1 version of the driver is capable of supporting TRIM (maybe that is where you got confused). Win8.1 also has an NVMe driver in-box from Microsoft.

            Don't worry though, AHCI PCIe/SATA Express SSDs will be with us for a very long time, especially since Win7 is rapidly turning into the next WinXP (the version that everyone likes and uses despite Microsoft's best efforts).
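
            (If you want to check what your own drives report, here's a minimal Linux-only sketch - it assumes the usual sysfs layout, where a non-zero discard_granularity means the block device advertises TRIM/discard, whether it's SATA, AHCI-over-PCIe or NVMe:)

              import pathlib

              # Linux sysfs sketch: non-zero discard_granularity => TRIM/discard supported.
              for dev in sorted(pathlib.Path("/sys/block").iterdir()):
                  gran = dev / "queue" / "discard_granularity"
                  if gran.exists():
                      supported = int(gran.read_text().strip()) > 0
                      print(dev.name, "TRIM/discard" if supported else "no TRIM/discard")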

            • by Luckyo ( 1726890 )

              I've been looking at current-gen PCI-E based SSDs and I haven't found a single one that supports TRIM yet. Maybe some server-grade, ultra-expensive stuff would. But the current generation of consumer products does not.

              This could be because most of these go for maximum speed, and shove a RAID0 controller on the SSD to improve speeds. This would be another factor that strips TRIM commands.

          • That's crazy. All you'd need is a controller that pretends to be SCSI. I'm fairly certain SCSI supports TRIM. Same goes for AHCI (SATA).

            Besides, many "PCI-e" cards are SATA controllers attached to a PCI-e - SATA bridge (often in a RAID-like format, sometimes all exposed to the OS).

          • Yeah, right. Because home users regularly say: "Oh, wow, that is a sweet PCI-E SSD; too bad it doesn't support TRIM or I would have bought the shit out of it!"

            Yes, that is much more probable than "home users can't tell the difference between different SSD experiences."

            • by jedidiah ( 1196 )

              > Yeah, right. Because home users regularly say: "Oh, wow, that is a sweet PCI-E SSD; too bad it doesn't support TRIM or I would have bought the shit out of it!"

              Your average home user probably still isn't even aware that they can gain any advantage out of an SSD. This is despite constant ongoing persistent propaganda about how just adding/replacing an SSD will change everything and even improve the weather.

              "home users" simply aren't relevant here.

              • by Anonymous Coward

                Hi, I'm the average home user and I don't use an SSD because:

                1.) too expensive

                2.) no need

        • by AmiMoJo ( 196126 ) *

          The main improvement is not the extra bandwidth provided by SATA3 but the improved caching using on-board DRAM and improved handling of background processes to shuffle data around. These only affect write speeds, read speeds are mostly unchanged.

          At the moment PCI-E SSDs are fairly pointless because the performance bottleneck is the write speed of the SSD. In benchmarks on an empty, virgin drive write speeds of 550MB/sec are not uncommon, but once the drive starts to get full up and blocks need shuffling or

          • by Bengie ( 1121981 )

            The main improvement is not the extra bandwidth provided by SATA3 but the improved caching using on-board DRAM and improved handling of background processes to shuffle data around. These only affect write speeds, read speeds are mostly unchanged.

            At the moment PCI-E SSDs are fairly pointless because the performance bottleneck is the write speed of the SSD. In benchmarks on an empty, virgin drive write speeds of 550MB/sec are not uncommon, but once the drive starts to get full up and blocks need shuffling or partially re-writing the performance drops to less than half that on most drives.

            Most benchmarks that I've seen of Samsung, Intel, and other top-end drives (which also tend to be some of the cheapest) show almost no discernible difference in performance in synthetic benchmarks, even at 95%+ capacity. They tend to have enough spare space set aside to handle shuffling data around.
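
            (For a feel for how much spare area that can be - the numbers below are assumed, illustrative figures rather than any particular drive's: a controller with 256GiB of raw NAND sold as a 240GB drive keeps roughly 15% aside for wear levelling and garbage collection:)

              raw_nand = 256 * 2**30        # assumed raw flash, binary GiB
              user_capacity = 240 * 10**9   # assumed advertised capacity, decimal GB
              spare_pct = 100 * (raw_nand - user_capacity) / user_capacity
              print(round(spare_pct, 1), "% over-provisioning")   # ~14.5%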

      • Pretty much this. The vast majority of SSDs on the market today are very similar in terms of speed in normal usage, because the bottleneck is now SATA. You can overclock it all you want, but you'll need to start pushing disks to PCI-E or a similar bus for it to start to matter.

        And then there's the whole issue of "does it really matter when it's this fast on desktop?"

        Although I'm running 840 Pros (b/c of cost), I did contemplate grabbing some hard-to-get Hitachi SAS (6Gb/s) SSDs to run off my current LSI 9300-8i 12Gb/s SAS HBA... don't have the link handy, but I read one write-up where these Hitachis were running nearly as fast as the PCI-e stuff Apple is running in their new Mac Pro ... i.e., completely smokes existing SATA SSD throughputs ... maybe these SAS variants will come down in price (I can hope, can't I??)

    • Hard for any SATA drive to distinguish itself on sequential transfers, given that SATA is capped around 550MB/s

      Which is why every fast SSD has data rates for SATA2 and SATA3. SATA3 is a lot harder to cap. But even then, for the ultrafast there are SSD cards, with no SATA involved.

      • by jones_supa ( 887896 ) on Friday February 28, 2014 @12:29AM (#46365445)

        Hard for any SATA drive to distinguish itself on sequential transfers, given that SATA is capped around 550MB/s

        Which is why every fast SSD has data rates for SATA2 and SATA3. SATA3 is a lot harder to cap. But even then, for the ultrafast there are SSD cards, with no SATA involved.

        The 550MB/s figure is for SATA3, and drives have been hitting that cap for a good while already.

        • Hard for any SATA drive to distinguish itself on sequential transfers, given that SATA is capped around 550MB/s

          Which is why every fast SSD has data rates for SATA2 and SATA3. SATA3 is a lot harder to cap. But even then, for the ultrafast there are SSD cards, with no SATA involved.

          The 550MB/s figure is for SATA3, and drives have been hitting that cap for a good while already.

          Did you miss a part of my post?

    • There is a new standard which will increase SATA speed ( http://en.wikipedia.org/wiki/S... [wikipedia.org] )

      Currently, Apple computers use PCIe SSD disks, which increases their performance:

      http://www.anandtech.com/show/... [anandtech.com]

      "I'm very pleased with Apple's PCIe SSD, at least based on Samsung's new PCIe controller. Sequential performance is up considerably over last year's 6Gbps SATA drive. Go back any further and the difference will be like night and day, especially if you were one of the unfortunate few with an older Toshiba

      • In reality, the big advantage of an SSD is near-zero access time for reading gazillions of 4 KB blocks. And there the transfer speed doesn't really make much difference. Even just 100 MB/second is 25,000 random 4 KB blocks per second.
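
        (Worked through with the numbers above, just for scale:)

          throughput_kb_s = 100 * 1000   # 100 MB/s expressed in KB/s
          block_kb = 4                   # random-read block size
          print(throughput_kb_s // block_kb, "random 4KB reads per second")   # 25000
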
        • by jedidiah ( 1196 )

          Yes. Quite.

          SSD tech is still quite expensive. Drives are still tiny. This doesn't leave a lot of room for massive bulk storage. So the use cases for improved sequential I/O access are limited.

          If you run an artificial benchmark you've got to be careful. Blink and you will miss the results.

      • If it's a PCI-e port on a cable, does that mean you can plug non-storage devices in too? I can see applications for things like video walls or GPGPU number-crunchers, where every 'SATA' port is potentially a way to cram another video card in.

        • by tlhIngan ( 30335 )

          If it's a PCI-e port on a cable, does that mean you can plug non-storage devices in too? I can see applications for things like video walls or GPGPU number-crunchers, where every 'SATA' port is potentially a way to cram another video card in.

          And yet, that exists today, it's called Thunderbolt. Which is effectively a PCIe x2 over a cable. Thunderbolt drive arrays exist for performance gains that go beyond what SATA has and all that.

    • by jon3k ( 691256 )
      Well, SATA Express and NVMe will be here soon and we should see a pretty massive jump in sequential throughput.
    • Look at page 4

      http://techreport.com/review/2... [techreport.com]

      The 730 drive is in the middle of the pack for sequential reads. None of the drives reach 550MB/s.

  • by Wrath0fb0b ( 302444 ) on Thursday February 27, 2014 @10:29PM (#46365075)

    For many (but certainly not all) applications, especially when it comes to UI, what matters is 95th-percentile worst-case performance, not peak throughput. From the AnandTech review [anandtech.com], that's where this drive really shines.

    Different tradeoffs have to be made for different workloads -- it can't be boiled down to a single (or even a set of) number(s). Some applications are far more tolerant of worst-case performance than others.

    • by Anonymous Coward

      For many (but certainly not all) applications, especially when it comes to UI, what matters is 95th-percentile worst-case performance, not peak throughput.

      Applications of porn acting being the notable exception.

    • That's an excellent point, and a metric I hadn't paid much attention to despite the fact that I run quite a few drives, including one storage pool of 28 drives and growing.

  • Overclocked? (Score:5, Insightful)

    by mc6809e ( 214243 ) on Thursday February 27, 2014 @10:38PM (#46365107)

    Running something at the speed it was designed and verified to run at by the maker isn't overclocking.

    • by Jeremi ( 14640 )

      Running something at the speed it was designed and verified to run at by the maker isn't overclocking.

      If Intel designs a component to run at speed X, then later finds out that it can run some of those components at speed 1.5X, and verifies and sells them at that higher-than-rated speed, I think it's fair to say Intel is overclocking. The only difference is that in this situation, the warranty will be honored if it stops working.

      • by Anonymous Coward

        If Intel designs a component to run at speed X, then later finds out that it can run some of those components at speed 1.5X, and verifies and sells them at that higher-than-rated speed, I think it's fair to say Intel is overclocking. The only difference is that in this situation, the warranty will be honored if it stops working.

        You mean like how Intel has been increasing processor frequencies for years now?

      • by Kjella ( 173770 )

        Chips have tolerances, which means there's a spread in how fast they'll run. Binning is not overclocking: if Intel finds an i7-4770K that can run 100MHz above the rated speed, they won't sell it now. They'll put it in storage and wait until they have enough of them, then launch a new model, the i7-4790K (coming to you in Q2). People analogy: if you select the best people to go into elite forces and the rest into the regular army, you've binned them, but not overclocked them. That'd be more like putting them on drugs to amplify t

      • by AmiMoJo ( 196126 ) *

        It's not overclocking, it's thermal and power management. For example Intel sells the same CPUs for use in a variety of different devices, and allows the manufacturer to set thermal and power limits via their BIOS. An ultrabook laptop might set the limits much lower than a full size laptop with bigger battery and more cooling.

        In this case Intel set the server drives up for server workloads and realized there was no point in having the high clock rates, so chose instead to generate less heat and save power.

    • Intel (and everybody else) does this for good reason... high-endurance components (milspec, server, whatever) are usually designed for tolerances far beyond the actual spec, because manufacturing issues can cause the tolerances of the finished product to deviate somewhat.

      If they design a [gizmo] to operate at 1.5GHz and sell it as a 1GHz chip, knowing full well there is plenty of headroom and the chances of failure running it at 65% of design speed are pretty much nil, then good for them for meeting the rejection rate.

      The
  • Not supercaps, no, electrolytics.

    What happens if your superduper SSD develops bad cap syndrome?
    https://en.wikipedia.org/wiki/Capacitor_plague

    I am still finding equipment with those sorts of failures today...

    Not recommending, even with two of them in parallel...

    Nope, not for me, sorry

    • by arielCo ( 995647 ) on Friday February 28, 2014 @12:44AM (#46365479)

      tl;dr: these are storage caps, which don't endure the ripple currents that kill filter caps.

      Electrolyte decomposition is usually caused by high ripple current, which is why caps pop mostly (only?) when used as filters, as in motherboard DC-DC converters and gadgets powered by wall-wart adapters. In this particular application, the PSU impedance is quite low and the caps are handled by on-board regulators (V=Q/C and all that), so there's no load ripple and the caps just have to sit pretty and charged with insignificant heat losses until the computer is shut down or an outage occurs. Maybe that's why Intel didn't even bother to use the solid (polymer) kind.

      If these caps dry out due to age or bad quality they just won't hold as much charge for emergency sync'ing, which is still better than ordinary SSDs/HDDs with no caps.
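
      (To get a rough feel for the numbers - the values below are assumed for illustration, not Intel's actual design figures. A cap bank discharging from V1 to V2 into a regulator delivering constant power P holds up for roughly t = C*(V1^2 - V2^2)/(2*P):)

        cap_farads = 2 * 47e-6        # assumed: two 47uF electrolytics in parallel
        v_start, v_min = 12.0, 5.5    # assumed charged rail and regulator drop-out voltage
        flush_power_w = 2.0           # assumed draw while committing in-flight data
        energy_j = 0.5 * cap_farads * (v_start**2 - v_min**2)
        print(round(1000 * energy_j / flush_power_w, 2), "ms of hold-up (order of magnitude only)")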

      • To be fair, an HDD can use its platters as a flywheel to quickly flush its (relatively tiny) buffer. I never did see proof that that was ever done, though.

        • * This is effectively regenerative braking, which I'm not sure you can do with a stepper motor.
          * The arm servo needs extra energy; not sure the platter+rotor have enough.
          * What if it's stopped, heads unloaded, when the power fails?

          • If the heads are unloaded, there shouldn't be any operations going on, so no harm done (unless some genius decided that it wasn't worth it to immediately get the heads in place the moment data comes in). Note that few hard drives actually spin down, but again, if it's not spinning, there's no data flowing (unless you're really unlucky and it just started).

            • by karnal ( 22275 )

              On a side note, it has been YEARS since I've witnessed a laptop that actually had the chance to spin down its drives. Probably since Win3.x days. Software being what it is today (McAfee at my place of employment) seems to have this bug of reading/writing to disk every few seconds, defeating any power management setups.

              • On a laptop, it may make sense to enable Windows' write cache, since there's a battery backup available. That may help you save some power.

                • by Zan Lynx ( 87672 )

                  Eh. Except that a blue screen, or having to do a forced power-off, will lose data and require a chkdsk run. I ran my desktop that way for a little while (with a UPS) but it had problems.

        • by tlhIngan ( 30335 ) <slashdotNO@SPAMworf.net> on Friday February 28, 2014 @12:22PM (#46368597)

          To be fair, an HDD can use its platters as a flywheel to quickly flush its (relatively tiny) buffer. I never did see proof that that was ever done, though.

          None used it to flush the cache because it is too risky - the platters are not maintaining a fixed speed (they're slowing down to generate electricity) so writes to platters become tricky as the timing is off which means you can overwrite more than you expect.

          Far better to just dump the buffers.

          In fact, the electricity generated by the spinning platters slowing down is used to park the heads - it's called an emergency head park because it basically dumps the electricity into the voice coil that flings the heads to the mechanical stops in the park area. It's fairly violent and most hard drives have much less emergency head park life than standard power down (where the drive moves the heads to the parking area in a controlled fashion) life - a drive may have 50,000+ head load/unload cycles, but under 10,000 emergency park cycles.

          You can tell because a soft-park makes only the smallest of clicking sounds on a drive when it spins down. But emergency park it and it's a much louder clunk.

    • by tlhIngan ( 30335 ) <slashdotNO@SPAMworf.net> on Friday February 28, 2014 @01:07AM (#46365555)

      What happens if your superduper SSD develops bad cap syndrome?
      https://en.wikipedia.org/wiki/... [wikipedia.org]

      I am still finding equipment with those sorts of failures today...

      Except those caps are Nippon Chemi-Con: high-end, high-quality capacitors made in Japan, and not the kind involved in the bad-cap plague.

      Bad cap syndrome happens to the cheap caps - stuff like CapXon (aka CrapXon) and such.

      In fact, a lot of bad caps you're finding are the cheap crap ones by the crap manufacturers. You can easily buy them and they will fail.

      That's why you'll find people inspecting caps - and seeing if it's Nippon Chemi-con, Rubycon, Panasonic/Matsushita or other Japanese brand. (You can almost generalize it to those whose brands contain "con" in their name are higher quality - from when they used to be called condensers. The cheap brands all tend to have "cap" in their name).

      So no, I don't see the caps being the weak point because Intel went and spec'd top-quality caps.

    • by ericloewe ( 2129490 ) on Friday February 28, 2014 @05:13AM (#46366119)

      A quality electrolytic capacitor will last a long time.

      The ones used here look like Nippon Chemi-Con, rated at 105 C. They'll most likely last forever.

      • Plus they are one of the few components left that anyone with a soldering iron can diagnose and replace, even in this age of surface-mount stacked-chip fiddliness.

  • Too little, too late. (Score:4, Interesting)

    by Guspaz ( 556486 ) on Thursday February 27, 2014 @11:51PM (#46365337)

    Don't get me wrong, I own five discrete SSDs (all currently in active use), and they're all Intel (one G1, two G2s, and two 330s). However, I've been disappointed with Intel of late. It used to be that they came with a premium price, but also dramatically lower failure rates than the competition, and you could usually find them cheaper than the competition if you waited for the right sale.

    These days, however, Samsung's failure rates are lower than Intel's, and Intel's price premium is so large that no sale is going to get their larger SSDs anywhere near as cheap as Samsung's. I was hoping that they might make a comeback with a new consumer model, but the 730 is a disappointment in terms of its extremely poor performance-per-dollar and capacity-per-dollar.

    I've bought nothing but Intel in the past, because they were the safe bet, but at this point it looks like my next SSD will be from Samsung.

    • Just bought a new system that came with the first SSD I've ever owned.

      Been wondering whether or not Samsung was a good choice by the folks who built it for me.

      I'm guessing this means that it was. Thanks for that. :)

      • the first SSD I've ever owned.

        You'll never go back to a 7200 RPM drive for your operating system / primary storage again. The performance when multi-tasking is just too good.

        As for the warm and fuzzy feeling side: get a good backup program like Acronis True Image Home, or learn to use rdiff-backup or whatever, and write your backups to two different physical drives.
        • I keep /home on a separate physical drive (HDD for now). Makes an upgrade or moving to a new machine much more pleasant. I use an SVN repo for my backups.

          The boot time is incredible. The new box also builds 8 different versions of $popular_foss_db from source, one after another, in about 25 minutes. Takes about 2 hours on my laptop, using exactly the same OS, build script, compiler options, gcc version, etc.

          (Now if I can just figure out why nouveau does not like my graphics card, and why X doesn't like the N

          • by Wolfrider ( 856 )

            > Now if I can just figure out why nouveau does not like my graphics card, and why X doesn't like the Nvidia driver

            --Try searching/posting on the support forum for your distro; you may get results.

    • I'm satisfied with my two Samsung 830s, but given the tendency of most Samsung products I own to let me down, I'm not too willing to buy anything else from Samsung.

    • by Spoke ( 6112 )

      But the main reason for the Intel 730 is to get power loss protection so your data doesn't get scrambled if your computer suddenly loses power.

      The popular Samsung 840 series doesn't include that.

      Where are you getting your failure rates from?

      • by Guspaz ( 556486 )

        French retail return rates:

        http://www.hardware.fr/article... [hardware.fr]

        That's a bit out of date (May 2013), but it includes the previous figures (the comparison numbers are from November 2012). I doubt they've changed too dramatically since then. I think it's fair to say that Intel and Samsung have similar rates, if nothing else, so Intel's huge price premium is hard to justify.

        Power failure protection is nice, but I don't have any computers that don't have some form of battery backup, be it a UPS or a built-in battery, and some

        • by Spoke ( 6112 )

          Even with a built-in battery or UPS, which reduces the risk of unexpected power loss, in my experience it still happens.

          As far as comparing reliability of SSDs to HDDs, an actual study found that SSDs were much more likely to lose lots of data, sometimes bricking the entire drive.

          http://www.cse.ohio-state.edu/... [ohio-state.edu]

          Enterprise HDDs were the most reliable, even the best SSD they tested was not as good (though similar to consumer grade HDD).

          Unfortunately, the study does not reveal which drives were tested.

          • by Guspaz ( 556486 )

            The study you've linked to does not support your claims.

            Only two out of the 15 SSDs in their test suffered from serious issues. One unit suffered from an SSD metadata issue that effectively prevented access to about 30% of the data on it, and another failed entirely, after having been subjected to 136 power failures in rapid succession.

            Only two HDDs were included in the test, and they were subjected to a much smaller number of power failures. HDD #1 was subjected to only a single power failure (after which it corrupted a

  • by timeOday ( 582209 ) on Friday February 28, 2014 @12:21AM (#46365425)
    Not only is this "rated to endure at least 70GB of writes per day over five years," it also comes with a 5 year warranty. Given there's still skepticism about SSD reliability from some quarters, a 5 year warranty is unbeatable.

    I only wish Intel was offering this in a smaller size, say 100 GB. I think a SSD system drive + slow "green" HDD is a great combo in a desktop, and the price premium on this quality of SSD would be easier to swallow if the drive were $110 instead of $250 even though that would be the same $/GB.
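
    (For a sense of scale, the endurance rating works out roughly as below - the 240GB capacity is my assumption for illustration, since the rating is quoted per drive:)

      writes_per_day_gb = 70      # rating from the summary
      days = 365 * 5              # five-year warranty period
      capacity_gb = 240           # assumed capacity for illustration
      total_tb = writes_per_day_gb * days / 1000
      print(round(total_tb, 1), "TB written over the warranty")              # ~128 TB
      print(round(writes_per_day_gb / capacity_gb, 2), "drive writes per day")   # ~0.29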

    • by Anonymous Coward

      I have no idea where this fascination with making Windows boot 5 seconds faster and load up Paint lightning fast comes into play. It's often weeks if not months between times I reboot, and it's all my space-hungry, big-ass applications that are slow, not Calculator.

      • I think maybe because it is something that can be easily shown off, or because it can be done cheaper, or because they have a misguided belief that it makes everything fast.

        Personally if I can't afford an SSD big enough to stick all the apps I normally want on there, I don't bother with an SSD in a system.

      • For me 80 GB is sufficient to store all the applications and data for my family except a few select "big" things that go on the HDD - my DVR recordings, my wife's flatbed scans of her illustrations, my son's screen recordings of Minecraft... When the disk starts to get full, I find the offending directory, move it to the HDD and make a symlink (rough sketch below). This is Linux; I find Windows to be a terrible drive-space waster, and it just grows forever as you apply patches and service packs.

        Laptops are obviously more di
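
        (The move-and-symlink step mentioned above, as a minimal Python sketch - the paths are made-up examples, and it assumes the HDD is mounted and the destination doesn't already exist:)

          import os, shutil

          src = "/home/me/dvr-recordings"    # hypothetical big directory on the SSD
          dst = "/mnt/hdd/dvr-recordings"    # hypothetical destination on the HDD

          shutil.move(src, dst)   # copies across filesystems, then removes the original
          os.symlink(dst, src)    # leave a symlink at the old location pointing to the new one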

        • I'm not familiar with symlinks besides NTFS', but if NTFS is any indication, I'd avoid them wherever possible.

          • NTFS isn't. Symlinks work fine and are very useful on an OS made to handle them. On Windows they are bodged in awkwardly and clumsily. The only reason NTFS supports them at all is so it can claim POSIX compatibility.

            • Symlinks were only introduced in Vista, so I think you're quite wrong.
              The crappy NTFS feature that predates it is called a junction point.

              • Vista introduced the utility to make symbolic links. The filesystem supported them before that, there was just no support for actually using them. You can get them working in XP, it just takes some hackery.

        • by AmiMoJo ( 196126 ) *

          I guess you are not a gamer. Games are ridiculously large these days.

      • It's needed in Windows because updates (all too often) need the computer to reboot. So there's a dance of install update -> initiate reboot -> update installation finalizes at either shutdown or boot-up -> finish boot and log in -> reopen your apps and get back to work.
        You might also do a very long virus scan or spyware scan (esp. the first time you install either utility).

        Thus Windows users "need" an SSD, and then not waiting for an Explorer window etc. is a bonus. Well, in Linux, using some heav

        • Even though I don't reboot my main Linux system often, I find it benefits from a fast system drive because I use the computer for recording TV shows, which basically destroys the filesystem cache (i.e., it evicts all the little files I'll actually be accessing again in order to cache blocks of huge video files I really don't want cached). At least I think that's what happens.
      • I have no idea where this fascination with making Windows boot 5 seconds faster and load up Paint lightning fast comes into play. It's often weeks if not months between times I reboot, and it's all my space-hungry, big-ass applications that are slow, not Calculator.

        When you're doing presentations for customers you want your laptop to boot up fast. You want reads in your applications to zoom because you're attempting to replicate a server-class application on a desktop/laptop PC. There are all kinds of other examples, but if you reboot every 3 weeks you could probably use a 4-year-old machine. You're not utilizing its potential.

    • by AmiMoJo ( 196126 ) *

      Does the warranty still apply if you write more than the rated 70GB/day?

  • by roc97007 ( 608802 ) on Friday February 28, 2014 @01:38AM (#46365599) Journal

    > Although the drive boasts exceptional throughput with random I/O, its sequential transfer rates are nothing special.

    But good random access will give you better overall performance in most cases. You rarely need to deathmarch through the drive.

  • At 90% health too. Intel SSD drives are worth the premium.

"A mind is a terrible thing to have leaking out your ears." -- The League of Sadistic Telepaths

Working...