Data Storage | Hardware | Technology

SSD-HDD Price Gap Won't Go Away Anytime Soon

storagedude (1517243) writes "Flash storage costs have been dropping rapidly for years, but those gains are about to slow, and a number of issues will keep flash from closing the cost gap with HDDs for some time, writes Henry Newman at Enterprise Storage Forum. As SSD density increases, reliability and performance decrease, creating a dilemma for manufacturers who must balance density, cost, reliability and performance. '[F]lash technology and SSDs cannot yet replace HDDs as primary storage for enterprise and HPC applications due to continued high prices for capacity, bandwidth and power, as well as issues with reliability that can only be addressed by increasing overall costs. At least for the foreseeable future, the cost of flash compared to hard drive storage is not going to change.'"
  • RAID? (Score:2, Interesting)

    by sanosuke001 ( 640243 )
    Doesn't creating a striped RAID make up for most of the performance difference between an HDD and an SSD? At that point, isn't it more the bus or CPU that's the limiting factor?
    • Re:RAID? (Score:5, Insightful)

      by Red_Chaos1 ( 95148 ) on Thursday April 17, 2014 @08:36AM (#46778541)

      IIRC it would take 5+ high-end HDDs to match the read/write speeds of a decent SSD. Add to that that RAID 0 has no safety net, so if one drive fails, the whole array is gone. A single SSD (like my Corsair Force GT) will read/write at around 500 MB/s. You just can't beat that right now.

      • Re:RAID? (Score:5, Informative)

        by RyuuzakiTetsuya ( 195424 ) <taiki@cox.net> on Thursday April 17, 2014 @09:19AM (#46778867)

        PCIe SSDs are even faster. The one in the Mac Pro can hit 1 GB/s read/write, for example.

        You'd need a lot of disks to come even close to that. :)

        • Re:RAID? (Score:5, Informative)

          by MachineShedFred ( 621896 ) on Thursday April 17, 2014 @09:52AM (#46779175) Journal

          I was shocked when we got one of the MacPro6,1 units in and I ran a disk benchmark on it. It was sustaining 950 MB/s, which is good enough to write 10-bit YUV 4:2:2 2K video at 117 fps.

          That is a realm you could previously reach only with Fibre Channel or a ridiculously expensive PCIe card with SLC flash.
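
          Running the numbers on that claim (my assumptions, not the parent's: full-aperture 2K is 2048x1556, and 10-bit 4:2:2 is packed v210-style at 16 bytes per 6 pixels), a quick Python sketch lands in the same ballpark:

              # Rough sanity check of the 950 MB/s figure against the video format above.
              # Assumptions: full-aperture 2K = 2048x1556; v210-style packing = 8/3 bytes/pixel.
              WIDTH, HEIGHT = 2048, 1556
              BYTES_PER_PIXEL = 16 / 6
              bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
              fps = 950e6 / bytes_per_frame          # 950 MB/s, decimal megabytes
              print(f"{bytes_per_frame / 1e6:.1f} MB/frame -> {fps:.0f} fps")
              # ~8.5 MB/frame -> ~112 fps sustained, close to the 117 fps quoted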

          • Thanks for posting real benchmarks. I can't afford/don't need a Mac Pro, but if PCIe SSDs become available on other systems, it's nice to know how fast they really operate.
            • Re:RAID? (Score:5, Informative)

              by Luckyo ( 1726890 ) on Thursday April 17, 2014 @10:36AM (#46779657)

              PCIe SSDs were available on PCs long before they debuted on Macs. They often run much faster as well, since they can use internal RAID 0 striping. I've seen drives that use a quad RAID 0 to push utterly insane numbers for long-term storage, at the cost of not letting TRIM commands through.

            • Thanks for posting real benchmarks. I can't afford/don't need a Mac Pro, but if PCIe SSDs become available on other systems, it's nice to know how fast they really operate.

              You mean like this one...

              http://www.newegg.com/Product/... [newegg.com]

            • by SpiceWare ( 3438 )
              Looks like it's already available http://www.amazon.com/gp/produ... [amazon.com]
        • Indeed, and even then, for many usage patterns latency will be much worse for the HDD RAID array, because certain operations take the greatest latency of all the drives (i.e., if you read something striped across all the drives, the read waits on whichever drive takes longest to seek to that data). So in many cases the average latency is skewed for the worse.

          That doesn't even go into the power/cooling savings. SSDs use a tenth of the power, which is great for a laptop.

          And there's no risk of damage from bumping/moving the drive.

      • by Rich0 ( 548339 )

        RAID 0 really only buys you throughput, and I don't think SSDs really have any advantage over HDs for throughput (I'm open to correction there).

        The big difference is in seek time. RAID 1 is what buys you seek time for reads, and of course it has no safety issues. Nothing limits RAID 1 to a single mirror beyond the implementation, either (mdadm supports any number of mirrors and will divide reads across them). Of course, if you have a RAID 1 with 8 drives in it, every write is going to block across

        • by Luckyo ( 1726890 )

          I imagine RAID 10 (0+1) would be an option. But that's going to be pretty hilarious in cost.

          • Re:RAID? (Score:5, Informative)

            by operagost ( 62405 ) on Thursday April 17, 2014 @10:47AM (#46779767) Homepage Journal
            RAID 10 and RAID 0+1 shouldn't be used interchangeably. RAID 10 is a stripe of mirrors, and 0+1 is a mirror of stripes. Both fail once every copy of some mirrored data is lost, but with RAID 10 that leaves only one disk to worry about after the first failure, while with RAID 0+1 a second failure of any disk in the remaining stripe set, which is at least two disks, kills the array.
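
            A quick brute-force check of that distinction, using a hypothetical 4-disk layout in Python (the disk numbering and pairings are illustrative, not from the comment):

                # RAID 10: two mirrored pairs {0,1} and {2,3}, striped together.
                # RAID 0+1: two stripe sets {0,1} and {2,3}, mirrored against each other.
                from itertools import combinations

                def raid10_dead(failed):
                    # Dies only if BOTH disks of some mirror pair fail.
                    return {0, 1} <= failed or {2, 3} <= failed

                def raid01_dead(failed):
                    # Dies once EACH stripe set has lost at least one disk.
                    return bool(failed & {0, 1}) and bool(failed & {2, 3})

                for name, dead in (("RAID 10", raid10_dead), ("RAID 0+1", raid01_dead)):
                    fatal = [c for c in combinations(range(4), 2) if dead(set(c))]
                    print(name, f"{len(fatal)}/6 two-disk failures fatal:", fatal)
                # RAID 10:  2/6 fatal -- only the dead disk's mirror partner matters.
                # RAID 0+1: 4/6 fatal -- any disk in the surviving stripe set kills it.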
            • by Luckyo ( 1726890 )

              Admittedly I had a misconception that they were the same thing, and now that you mention it, it's obvious that they are not.

              Thanks for clarifying.

        • by Bengie ( 1121981 )
          OpenZFS is going to gain async writes for mirroring. You can specify how many drives must successfully write the data before the operation returns as complete. That way you can have 8 drives in a mirror but only wait for 2 to acknowledge, letting the other 6 writes finish in their own time.
    • Re:RAID? (Score:5, Informative)

      by Anonymous Coward on Thursday April 17, 2014 @08:37AM (#46778547)

      For most applications, the performance bottleneck with a hard disk is seek latency, not raw streaming bandwidth. There is basically no way for a mechanical hard disk to match the seek performance of an SSD.

      • Multiple read heads! Cut seek time in half! (Or to a quarter, or an eighth, depending on how crazy you want to go.)

        • Re:RAID? (Score:4, Insightful)

          by Calinous ( 985536 ) on Thursday April 17, 2014 @09:21AM (#46778879)

          Seek time is the time for r/w head movement (closer to or farther from the disk center) PLUS the rotational latency until the wanted data passes under the read/write head. So, unless you put r/w heads over every sector of the drive, you can't eliminate that part of the seek time. You could spin the disks faster (as in 15k RPM SCSI disks), but there's a limit there too.

          Will HDDs ever be performance-competitive with SSDs at the same cost? At the current technology level, no. Will SSDs ever be price-competitive at the same capacity? Hardly, considering that adding another platter and r/w head to a hard drive is quite an inexpensive way to increase capacity, while adding another set of flash memory chips is an expensive one.
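
          Concretely, the rotational-latency half of that is easy to put numbers on: on average the platter must turn half a revolution, i.e. 0.5 / (RPM / 60) seconds (the spindle speeds below are illustrative):

              # Average rotational latency = half a revolution.
              for rpm in (5400, 7200, 10_000, 15_000):
                  latency_ms = 0.5 / (rpm / 60) * 1000
                  print(f"{rpm:>6} RPM: {latency_ms:.2f} ms")
              # 5400 RPM: 5.56 ms ... 15000 RPM: 2.00 ms. Spinning faster helps,
              # but even 15k RPM leaves milliseconds an SSD simply doesn't have.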

          (Oh, and a read/write head for each data track was used in the '50s and '60s; see magnetic drum memory.)

          • Re:RAID? (Score:4, Funny)

            by MightyYar ( 622222 ) on Thursday April 17, 2014 @09:33AM (#46779021)

            > I was recently perusing the /dev directory on a next
            > when I came upon the entry /dev/drum. This seemed a bit odd, I thought
            > that drum memory went out of fashion long, long ago. The man pages
            > didn't have anything to say about drum. Does any have any insight
            > on this odd device entry?

            This actually has nothing to do with drum memory. It's a part of the
            UUCP system.

            Long, long ago, even before version 6, somebody wanted to implement a
            program to copy files between two machines running Unix. At the time
            there were no modems because there weren't even any telephones. A
            Bell Labs researcher who had just visited Africa seized upon the idea
            of communicating by beating on drums, as the native Africans did. He
            added a drum interface to his PDP-11 and the device driver was called,
            of course, /dev/drum. Uucp would call a lower level program called
            `bang' to activate this device driver. Messages could also be sent
            manually by typing `bang drum' at your shell prompt. People soon
            devised shell scripts that would take a mail message, convert it
            appropriately, and call bang to send it. Soon they were sending
            multi-hop messages though several sites this way, which is how the
            `bang path' got its name.

            With the advancements in communications technology (semaphores in
            particular), /dev/drum was removed from UNIX around version 6 or 7, I
            believe. The NeXT developers reinstated it on the NeXT because they
            felt that a true multimedia machine should have as many options as
            possible.

            I hope this explanation helped.

            cjs

            curt@cynic.UUCP | "The unconscious self is the real genius.
            curt@cynic.wimsey.bc.ca | Your breathing goes wrong the minute your
            {uunet|ubc-cs}!van-bc!cynic!curt | conscious self meddles with it." --GBS

          • The intent of the (mostly) joke was in fact to put read heads under multiple sectors, cutting the rotational latency in half (or more if you were to add more heads).

        • by Rich0 ( 548339 )

          That's basically what RAID 1 gets you, though at a cost to write performance. You'll never beat SSD random-write performance via RAID, though writes on SSDs can leave a bit to be desired as well.

        • Sure, but with an SSD you can hit whatever cell you want almost instantly. No waiting for the platter to rotate to where it needs to be.

        • From a review of the Samsung 840 EVO 1TB SSD [storagereview.com] I just stuck in my MacBook Pro:

          • Sequential READ: up to 540 MB/s
          • Sequential WRITE: up to 520 MB/s
          • Random READ: up to 98,000 IOPS
          • Random WRITE: up to 90,000 IOPS

          From the same site reviewing a WD Black 4TB HDD [storagereview.com]:

          Performance from the WD Black scaled from 66 IOPS at 2T/2Q to 86 IOPS at 16T/16Q, versus the 7K4000 which scaled from 82 IOPS to 102 IOPS.

          So even assuming IOPS scaled linearly with the number of heads (it doesn't), you'd need about 1,000 heads to get random-access performance out of HDDs similar to one SSD.

          There's a reason everyone's migrating to SSDs for anything remotely I/O-bound.
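
          Redoing that arithmetic from the quoted figures (the rounding is mine):

              # SSD rated random writes vs. HDD IOPS at 16T/16Q, both from the
              # storagereview.com numbers quoted above.
              ssd_iops = 90_000
              hdd_iops = 86
              print(f"~{ssd_iops / hdd_iops:.0f} HDD heads needed")   # ~1047, i.e. about 1,000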

      • This. Most people still incorrectly concentrate on sequential read/write times. SSDs are only about 4x faster by that metric - 550 MB/s vs 125-150 MB/s.

        Where SSDs really shine is the small, rapid reads and writes. If you look at the 4k r/w benchmarks, a good SSD will top 50 MB/s at 4k, and over 300 MB/s with NCQ. A good HDD manages only about 1.5 MB/s, and maybe 2 MB/s with NCQ, because of seek latency: the head needs to physically move between each 4k sector. That 100-fold difference is what makes S
    • Re: (Score:3, Informative)

      by Anonymous Coward

      Doesn't creating a striped RAID make up for most of the performance difference between an HDD and an SSD? At that point, isn't it more the bus or CPU that's the limiting factor?

      No, RAID does not allow HDDs to perform like SSDs. RAID increases throughput, but it does not decrease access time, which in many cases is far more important than throughput.

      Having a seek time of 8 ms when you are working with many small files is a huge hit on performance. The seek time of SSDs is well under a millisecond. RAID does not help this, no matter how many disks you stripe.
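
      To put that in perspective, here is what seek latency alone costs across a pile of small files (the 8 ms figure is from the comment; the 0.1 ms SSD seek and the 10,000-file workload are illustrative assumptions):

          # Seek cost alone for touching many small files, ignoring transfer time.
          n_files = 10_000
          for name, seek_ms in (("HDD", 8.0), ("SSD", 0.1)):
              print(f"{name}: {n_files * seek_ms / 1000:.1f} s spent just seeking")
          # HDD: 80.0 s vs SSD: 1.0 s -- before a single byte of data is transferred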

      • Re: (Score:2, Troll)

        by drinkypoo ( 153816 )

        No, RAID does not allow HDDs to perform like SSDs. RAID increases throughput, but it does not decrease access time, which in many cases is far more important than throughput.

        RAID doesn't improve first access time, but good RAID improves non-sequential seek times.

        Having a seek time of 8 ms when you are working with many small files is a huge hit on performance. The seek time of SSDs is well under a millisecond.

        Yes, for some workloads it is very important. But for many of those, there's prefetching.

      • Re:RAID? (Score:5, Interesting)

        by Rich0 ( 548339 ) on Thursday April 17, 2014 @09:27AM (#46778935) Homepage

        No, RAID does not allow HDDs to perform like SSDs. RAID increases throughput, but it does not decrease access time, which in many cases is far more important than throughput.

        Having a seek time of 8 ms when you are working with many small files is a huge hit on performance. The seek time of SSDs is well under a millisecond. RAID does not help this, no matter how many disks you stripe.

        RAID does not always mean striping. Mirroring does improve seek performance: it increases the chance that some drive already has a head close to the data you want (if the implementation is smart enough to exploit this), and it also allows seeks to occur in parallel (which isn't exactly the same as latency reduction, but is fairly equivalent in practice, since drives are almost always busy).
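
        A minimal sketch of that "smart implementation" idea in Python (the nearest-head policy and toy LBA model are illustrative; real schedulers also weigh queue depth and rotation):

            class Mirror:
                def __init__(self, name):
                    self.name = name
                    self.head = 0                  # current head position (toy LBA)

            def dispatch_read(mirrors, lba):
                # Send the read to the mirror whose head is closest to the target.
                best = min(mirrors, key=lambda m: abs(m.head - lba))
                best.head = lba                    # head ends up where it just read
                return best

            mirrors = [Mirror("sda"), Mirror("sdb")]
            for lba in (1000, 50, 1100, 80):
                print(f"read {lba:>5} -> {dispatch_read(mirrors, lba).name}")
            # Nearby reads stick to one drive; distant ones go to the other,
            # so two seek streams proceed in parallel.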

    • I think that with a properly set up system, RAID can speed things up considerably. I prefer the multi-drive model for consumer systems, though: a small SSD for the OS and applications, a fat, slow platter drive for storing large media files, and an even fatter and slower drive for backups. 128GB/1TB/2TB is the setup I have on my desktop.
      • Re:RAID? (Score:5, Insightful)

        by jones_supa ( 887896 ) on Thursday April 17, 2014 @08:56AM (#46778689)
        So the backup disk is online in the same system? Sounds dangerous.
        • I have an external 2TB drive I use for backups. (In addition to Dropbox for critical files, although I've been reconsidering that particular service lately.) I unplug it when not in use. So it's in the same system broadly speaking, but not really. It's a consumer system, so there's no need to go as silly as having a separate BDR box.
        • I don't know how his system works, but my PC works like this: I have a big disk with Linux and virtual machines. I have an SSD and a 2.5-inch HDD of the same capacity for Windows, and I periodically back up the SSD to the HDD. The backup is bootable, so if the SSD fails I just boot from the HDD. All the data gets backed up to a disk on a Pogoplug running Debian, which is supposed to be on a separate UPS but isn't right now; at least it's not in the same machine. I don't store any big data on the Windows side, so that's o

      • by mlts ( 1038732 )

        I've seen a couple of hard drives in laptops that present themselves to the BIOS as multiple volumes, although I don't know what brand they are (if someone knows the make/model, please enlighten me). One had a 32 GB SSD partition, then a 512 GB HDD partition. Unlike drives that have an 8 GB cache, having two volumes allows the OS, swap, and perhaps an application to sit on one volume while everything else is on the HDD.

        As for the backup hard disk, that is a wise idea as the first level of defense. It can't h

    • Re:RAID? (Score:4, Interesting)

      by omfgnosis ( 963606 ) on Thursday April 17, 2014 @09:19AM (#46778865)

      Even if this were true, you're creating an artificial advantage. How does a RAID array of HDDs compare to a RAID array of SSDs?

    • Re:RAID? (Score:5, Informative)

      by Mad Merlin ( 837387 ) on Thursday April 17, 2014 @09:24AM (#46778915) Homepage

      Absolutely not. Even 100 RAIDed HDDs (in any RAID type) will struggle to match the IOPS achieved by a single SSD.

      Typical IOPS for a 7200 RPM HDD: 80

      Typical IOPS for a modern consumer level SSD: 20,000-100,000

      http://en.wikipedia.org/wiki/IOPS [wikipedia.org]

      • This. People just don't get this.

        A typical smallish RAID array is 16 drives.

        RAID 5 IOPS for 7.2k drives - 675
        RAID 5 IOPS for 15k drives - 1642
        RAID 5 IOPS for SSD drive - 84,211

        http://www.thecloudcalculator.... [thecloudcalculator.com]

        In an environment running lots of small disk IO, like having a VM or fifty, only one of the above will give you good performance.
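
        The usual back-of-envelope formula behind calculators like that one, as a Python sketch (the 50/50 read/write mix and the per-drive ratings are my assumptions, which is why the totals differ somewhat from the numbers above):

            def raid5_iops(drives, per_drive, read_frac=0.5, write_penalty=4):
                # RAID 5 write penalty of 4: read data + parity, write data + parity.
                raw = drives * per_drive
                return raw / (read_frac + (1 - read_frac) * write_penalty)

            for name, per_drive in (("7.2k HDD", 80), ("15k HDD", 200), ("SSD", 40_000)):
                print(f"16x {name:>8}: {raid5_iops(16, per_drive):,.0f} functional IOPS")
            # ~512 / ~1,280 / ~256,000 -- same orders of magnitude, same punchline.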

    • Re:RAID? (Score:4, Informative)

      by TheRealQuestor ( 1750940 ) on Thursday April 17, 2014 @09:32AM (#46779009)

      Doesn't creating a striped RAID make up for most of the performance difference between an HDD and an SSD? At that point, isn't it more the bus or CPU that's the limiting factor?

      No. My RAID 0 and RAID 5 setups don't even come CLOSE to my SSDs. I've been running two SSDs in RAID 0, and OMG, the speed difference is absolutely crazy. Yes, when one dies all the data is toast, and they DO die. I was dumb and bought 3 OCZ drives, and all 3 have died at least once in the last 1.5 years, but the replacements have held up pretty well. I totally expect to lose one at any time, so I keep really good backups of my C: drive :) Everything else goes on my spinny platters.

      • What is the point of striping SSDs? Seems like the law of diminishing returns takes effect. I'd mirror and take the small relative performance hit. I did some research on this when I set up my computer a year ago, and it didn't seem worth it for the cost.

        I ended up going with an SSD for the OS and 2 mirrored HDDs for reliable storage.

        • by unrtst ( 777550 )

          I ended up going with an SSD for the OS and 2 mirrored HDDs for reliable storage.

          I see a lot of people going with an SSD for the OS + core applications and HDDs for everything else, and I don't quite understand why. Is your usage pattern such that you are frequently rebooting and/or closing all apps and reopening them (more so than working with documents)?

          I completely understand HDDs for media such as MP3s and videos: they handle the throughput for those just fine, and there's hardly any seeking when watching a video. And unless it's a video/music server that services a bunch of clients, an HDD will

      • by Luckyo ( 1726890 )

        How well does your RAID 0 controller do with TRIM commands?

        Many are known to cause problems by not letting TRIM through to the drives.

    • by jensend ( 71114 )

      Absolutely not.

      The main advantage of an SSD for most users is not the 5x faster sequential performance, it's the >100x faster access times. RAID does improve throughput, but it does very little to improve access times and random IOPS.

    • For now it appears that the bus isn't the limiting factor. The HDDs themselves simply are not faster than SSDs; after all, spinning platters and mechanical read/write heads will always be slower than silicon gates. Cost and capacity are the two main advantages of HDDs.
    • by Bengie ( 1121981 )

      Doesn't creating a striped RAID make up for most of the performance difference between an HDD and an SSD? At that point, isn't it more the bus or CPU that's the limiting factor?

      Don't forget about IOPS. A single modern SSD can do about 80k, while a single HDD manages about 2k. You would need about 40 HDDs to match the IOPS.

    • You mean performance issues like power consumption, heat, and noise?
      There is more to performance than speed. Actually, with all the speed we get today even from mechanical hard drives, IMHO those other things are far more interesting than squeezing out a little more speed. Why do I care if a program loads in half a second vs. a quarter second?

  • not really (Score:5, Insightful)

    by hypergreatthing ( 254983 ) on Thursday April 17, 2014 @08:35AM (#46778531)

    Fairly sure that increases in capacity usually mean increases in performance as well. I have not seen any SSD on the market today that shows otherwise.
    We're down to less than $0.50 a gig on SSDs. Prices have been plummeting. You can get a 256 GB drive for ~$100, and 1 TB drives have been approaching the $400 mark.
    When 2 TB SSDs come on the market, you'll see the rest drop in price as well. I'm not quite sure where the author is getting his information. Check the price drops over the last two years and you can see they haven't hit bottom yet.

    • Capacity increases equate to performance increases in SSDs only up to a certain point, and the gap closes very quickly. SSDs don't have things like spindle speeds and areal density to work with to increase throughput, nor do they need them.

      My 60GB Corsair Force GT hits a few MB/s under 500 MB/s in write speed, and nearly 520 MB/s in reads. At those speeds the difference between drives is only a few MB/s. I'd be surprised to see a significantly larger SSD increase speed significantly over that. My 120GB OC

      • .....
        Have you heard of IOPS?
        I have never seen a smaller SSD have a better IOPS number than a larger one.

        • by afidel ( 530433 )

          I have never seen a smaller SSD have a better IOPS number than a larger one.

          I have, plenty of times. SLC has better IOPS/GB than MLC, and within MLC, eMLC has better IOPS/GB than tMLC. So for a given number of dollars, the smaller drive will have better performance.

          • Maybe I should have more strongly implied: within the same series, not across memory/technology types.

          • So for a given number of dollars, the smaller drive will have better performance.

            First, this is a red herring, since the price you pay for an SSD in a given size class won't buy you any significantly larger drive. So, a 60GB dog of an SSD for $60 is still far faster than the zero IOPS you get from a $60 120GB SSD. What you really need to compare is the cost per GB, because then you can compare things like the performance of a pair of 60GB drives in RAID-0 vs. a single 120GB.

            That said, the primary factor in SSD speed is the number of controller channels that can be connected to the fla

            • by afidel ( 530433 )

              SLC is ~10x the IOPS/GB for random writes compared to MLC; reads are generally only 20-30% faster.

              • Not in real-world use. There are no 1M IOPS SLC SSDs (single drive), but there are plenty of 100K IOPS MLC SSDs.

                As a matter of fact, this [wikipedia.org] seems to show that, with the exception of the Fusion-io ioDrive2 SLC variant, all the top-performing single-drive SSDs are MLC. And the MLC variants of the ioDrive2 are only about 10% behind [fusionio.com] the SLC variant.

                You can see from the Wikipedia article that what truly affects final throughput is the bus width and the number of channels of the SSD controller, just like I said. The fas

                • by afidel ( 530433 )

                  Interesting, you're right: the ioDrive2 brings MLC much closer to SLC. There's only a 2x performance delta on an IOPS/GB basis (270k 4k random writes vs. 140k, for the 400GB vs. 365GB models). For the first generation (which I own a number of), the gulf was much wider.

    • by bws111 ( 1216812 )

      Where are you getting those prices? A quick check of Newegg found the cheapest SSD at $160 for 240GB ($0.67/GB). On the other hand, a 10K RPM 1TB disk costs $200 ($0.20/GB). Are you comparing the cheapest consumer SSD to the most expensive enterprise hard disk?

      • slickdeals.net

        I'd say that still counts.
        256GB SanDisk Ultra Plus 2.5" SATA III Solid State Drive: $100 after $20 rebate + free shipping

        Samsung 840 EVO-Series 1TB 2.5-Inch SATA III SSD MZ-7TE1T0BW: $455 @ Amazon

        The historic low on the EVO is $420.

    • by Rich0 ( 548339 )

      When 2 TB SSDs come on the market, you'll see the rest drop in price as well. I'm not quite sure where the author is getting his information. Check the price drops over the last two years and you can see they haven't hit bottom yet.

      Sure, but neither have hard drives. The 1TB SSD of tomorrow may very well be competitive with the 1TB HD of today, but will it be competitive against the 64TB HD of tomorrow?

    • by gl4ss ( 559668 )

      Well, the thing is that HDDs keep getting faster and bigger too.

      $100 buys you 3TB; for $300 you can get 9TB. Of course this is not "enterprise grade," but neither are such cheap SSDs.

      So the gap exists and will continue to exist. Both keep going up in storage space, and there's no reason to think either one would stop growing in size. You can already get 4TB drives.

      • The only speed increase you get out of hard drives is when density goes up; otherwise, platter speeds have been more or less stuck at 5400/7200 RPM, and density increases have slowed down a lot in the past 5 years.
        Eventually SSDs will beat out hard drives in terms of price/capacity. They've already blown them away in terms of reliability and speed.

  • Today we can get an SSD for $0.50/GB. It is already good enough.
    • by The MAZZTer ( 911996 ) <megazzt.gmail@com> on Thursday April 17, 2014 @08:43AM (#46778591) Homepage
      640K ought to be enough for anybody.
      • Well, a 256GB SSD ought to be enough for anybody, and is relatively affordable.
        • by MatthewCCNA ( 1405885 ) on Thursday April 17, 2014 @09:05AM (#46778739)

          Well, a 256GB SSD ought to be enough for anybody, and is relatively affordable.

          enough is never enough.

        • by jythie ( 914043 ) on Thursday April 17, 2014 @09:06AM (#46778751)
          I think one of the big bonuses of SSDs hitting the mainstream is that people (and manufacturers) are re-examining how much capacity people actually need. For a while there was a trend of just throwing the biggest drives possible at every machine made, since a bigger number looks better than a smaller number on marketing material, but it meant a lot of people bought computers with drives that far exceeded their actual use cases.

          For most people, 256GB is more than enough, depending on how they are using it. Though it is nowhere near enough for other uses.

          Personally, for my use case, I have both: a 128GB SSD for the OS and applications, and a 1TB HDD for data. If I kept my data on the SSD it would fill up rapidly, so it is not enough for this 'anybody' at least, and I know people who burn through space a lot faster than I do.
        • by jbolden ( 176878 )

          I have a first-year MBPr. The 256GB SSD has been constraining. I wish I had gone for the 512GB option, even at close to $2/GB at the time.

        • by EvilSS ( 557649 )

          Well, a 256GB SSD ought to be enough for anybody, and is relatively affordable.

          Well, that depends on what you are doing with your machine. If you are a gamer, with game installs running from 20 to 50 GB (I'm looking at you, Titanfall!), a 256GB system drive won't go far.

    • I just upgraded from a 1TB HDD to a 480GB Crucial M500 for $220, so I totally agree. A year ago it would have been closer to $500.
    • Today we can get an SSD for $0.50/GB. It is already good enough.

      I can get a 1TB SATA HDD for $69 at Best Buy. How much would a 1TB SSD cost?

  • by fermion ( 181285 ) on Thursday April 17, 2014 @09:05AM (#46778741) Homepage Journal
    If you are talking about throwaway worker drones or server machines, then no. There is no data on these machines, and the cost to swap them out is minimal. I recall a place that had racks of a few hundred machines, a dedicated person to swap them out, and two dying a day. Putting anything but the cheapest product in there would have been a waste of money. But the data machines, those were special. They probably cost more than the combined servers they fed.

    Likewise, worker-bee machines that are pretty much dumb terminals are not going to use SSDs. But machines that people actually do and store work on, that may be something different.

    Look, tape is on the order of a penny per gigabyte. Hard disks are somewhere between 5 and 10 cents a gigabyte. SSDs are about 50 cents a gigabyte. Many people still back up onto hard disk even though tape is more reliable. We are going to use SSDs where the benefits justify the order-of-magnitude increase.

  • With spinning rust, you might re-engineer the bulk process that coats your disks, but the boost in recording density depends on changing the parameters of the head: one bulk process and one device. Compare that to flash, where to boost density you have to tweak each storage cell, controlling for defects and manufacturing flaws; the yields of the individual cells multiply, so defects are exponentially likely.

    Disks (and to some extent tape) will always have scaling advantages over litho-fabbed storage.

    you can certainly

  • by MatthiasF ( 1853064 ) on Thursday April 17, 2014 @09:12AM (#46778811)
    We need reliable hybrid drives with 120-160+ GB of flash memory, instead of the ridiculously worthless 4-8 GB ones we have now.

    A hybrid with a 1:30 or 1:20 ratio of flash to platter (200 GB for 4 TB, for instance) would be pretty much perfect for anyone, even for enterprise applications if RAID controllers cooperated properly with the hybrid caching.

    We do not need 100% flash; just give us a practical median.

    In fact, I guarantee that if someone made a hard drive whose controller had an mSATA slot for adding an SSD, and the controller could be set up as pass-through (act as two drives) or caching (the SSD keeps a cache of the platter), it would sell like crazy.

    An mSATA card would fit easily beneath the platters of a standard 3.5-inch hard drive.

    http://www.notebookreview.com/... [notebookreview.com]
    • by m.dillon ( 147925 ) on Thursday April 17, 2014 @12:16PM (#46780589) Homepage

      No, we don't. Hybrid drives are stupid. The added software complexity alone makes them a non-starter for anyone who wants reliability. The disparate failure modes make them a non-starter. The SSD portion of a hybrid drive is way, WAY too small to be useful.

      If you care enough to want the performance benefit, you either go with a pure SSD (which is what most people do these days), or you have a separate discrete SSD for booting, performance-oriented data, your swap store, and your HDD caching software.

      -Matt

  • The article is a bit weird. It keeps saying to ignore consumer drives (low price, cheap parts, focus on mobility) as inapplicable to the enterprise, but then it focuses on enterprise disks that aren't far removed from consumer models rather than on enterprise flash like IBM's solutions (e.g. the 840: 33T per U, so more than 1P per rack). If we are going to look at enterprise flash, I don't understand why you would focus on smaller solutions. Obviously the $8-14/GB price is even higher, but it is at those price points that f

  • by wjcofkc ( 964165 ) on Thursday April 17, 2014 @09:23AM (#46778903)
    I have a 120GB SanDisk Extreme II SSD, and as a performance upgrade you really can't do better than an SSD, assuming a minimum of 4 GB of RAM. I was a little skeptical of the claims when I bought it, but I can vouch that people aren't messing around when they talk about instant boots and zero-second load times for applications. Mileage may vary depending on the brand and model, so research and watch the specs closely. A paltry 120GB by itself is not enough for me or most people these days, so I balance things out by installing the OS and applications on the SSD while most files go onto a hard drive. This means a slight change in workflow, but it is entirely worth it.
    • Look into setting up junction points [wikipedia.org] for your HDDs. That way, stupid Windows programs that believe they need to be in C:\[some dir] can think they are on the primary drive even when they are actually on one of the secondary HDDs instead of the primary SSD. I have that setup and it is wonderful.
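
      A minimal sketch of the trick from Python (the paths here are hypothetical; mklink is a cmd.exe builtin, so it has to run through cmd /c):

          import subprocess

          hdd_target = r"D:\Games"       # where the data actually lives
          ssd_link = r"C:\Games"         # where the program insists on looking

          # /J creates a directory junction; the target directory must already exist.
          subprocess.run(["cmd", "/c", "mklink", "/J", ssd_link, hdd_target],
                         check=True)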
  • by Chas ( 5144 ) on Thursday April 17, 2014 @09:47AM (#46779145) Homepage Journal

    due to continued high prices for capacity, bandwidth and power

    How the hell is power an issue? SSDs consume something around 1/100th of the power that a hard drive does.

    • Not necessarily. From the article:

      The Seagate Enterprise 15K 2.5” form factor HDD and Terascale HDD have power consumption needs of 1 W and 6.5 W per drive, respectively. However, SSDs are far more varied. Consumer SSDs, designed for laptops or tablets, often have power consumptions of between 0.1 and 1.5 W per drive, however enterprise SSDs can range from 3 W to 30 W depending on make and model with most falling between 3 W and 10 W.

      A spinning HDD might require more power than an idle SSD, but it is not necessarily true that an HDD requires more power all the time. Also, if you look at wattage per GB, HDDs are more efficient, as you currently need multiple SSDs to match the capacity of one HDD. For consumers it's a small difference, but enterprises running lots of drives look at efficiency more closely.
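
      Working out the watts-per-GB point (the drive wattages are from the article; the capacities paired with them are illustrative assumptions):

          # Power per unit of capacity, not per drive.
          drives = [
              ("Terascale HDD, 4 TB",    6.5, 4000),
              ("enterprise SSD, 800 GB", 8.0,  800),
          ]
          for name, watts, gb in drives:
              print(f"{name}: {watts / gb * 1000:.2f} mW per GB")
          # ~1.6 mW/GB for the HDD vs ~10 mW/GB for the SSD: per unit of capacity
          # the HDD wins, even though a single SSD can draw less than a single HDD.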

  • Platter drive prices have been held artificially high for the past few years... and it will burn the manufacturers unless they start budging on capacity and price, as SSDs will continue to drop.

    With 5TB and 6TB drives finally making it out into the consumer space, platter drive pricing may finally start dropping, but will it be too little, too late? Will there be enough of a market in the consumer space to support the larger drives? I suspect the average user has plenty of storage already - perhaps to the point of full por

  • Let's be honest here: outside of a small percentage of users doing raw uncompressed video work, HDDs are more than fast enough. Drives and OSes both offer large caching of high-use objects, which reduces seek/startup time differences to a very small amount. The biggest difference is at start-up, and even there... do those 5, 10, 15 extra seconds really matter that much? How often are you booting? Or even resuming from hibernation, if that's your thing?

    As to power, idle is now around 5 or 6 watts and standby a
