Data Storage

Hybrid Hard Drives Just Need 8GB of NAND

judgecorp writes "Research from Seagate suggests that hybrid hard drives in general use are virtually as good as solid state drives if they have just 8GB of solid state memory. The research found that normal office computers, not running data-centric applications, access just 9.58GB of unique data per day. 8GB is enough to store most of that, and results in a drive which is far cheaper than an all-Flash device. Seagate is confident enough to ease off on efforts to get data off hard drives quickly, and rely on caching instead. It will cease production of 7200 RPM laptop drives at the end of 2013, and just make models running at 5400 RPM."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by Anonymous Coward on Thursday August 08, 2013 @10:53AM (#44509609)

    No chance this is just the company saying this because they missed the boat on solid state drives?

    • by arth1 ( 260657 )

      No chance this is just the company saying this because they missed the boat on solid state drives?

      Or because 5400 rpm drives are much cheaper to produce, not requiring nearly as stringent tolerance levels as 7200/10k/15k rpm drives?

      What's certain is that the worst case times will increase, and that's when people get irritated. It's easier to live with slower overall than it is to live with faster overall, but much slower at times. That will stand out like a sore thumb, and be a source of irritation.

      • As long as the most regularly used libraries and executables are cached, it shouldn't seem slow. For other files, you only need to wait when they're opened initially. I don't even really notice when I open documents over a wi-fi link, so I don't see why opening documents from a 5,400RPM HDD should be much worse.

        • by arth1 ( 260657 )

          You're assuming that "opening documents" is the bottleneck where people can get irritated. It isn't. It's more likely to be when you need to do some large operations. For a home user, that might be scanning an MP3 collection or several thousand photos. Or copying something big from a thumbdrive.
          For an office user, that might be when IT runs an AV or compliance scan, or your VM saves a snapshot, or you archive Outlook.

          In either case, it's the worst case times that irritate users. Not the normal opening

          • You're assuming that most people do that anymore. I'm the only one in my circle of friends who still maintains an mp3 collection (which is on a file server anyway). Everyone else either stores the files on their phone/mp3 player or, more commonly, streams the media. Likewise with photos, most people store them online now. Besides, how often do you actually look at those photos? We're talking everyday usage, not Aunt-Bertha-Is-In-Town-For-Her-Yearly-Visit usage. Likewise for an office user, any competent IT

          • For things where you're going to have to wait a few minutes anyway, an extra few minutes isn't really an issue, as you'd already be going to get a coffee or check Slashdot or whatever. If it's on the order of seconds, then waiting a few extra seconds isn't a big deal either. Well, that's my preference anyway; maybe yours is different.

        • My laptop is now 7 years old. In that time, I've made three major upgrades to it.

          1) Moving from XP Pro to Win 7 Ultimate
          2) Upgrading from a 5400rpm to a 7200rpm drive (only other major difference between drives was capacity)
          3) Upgrading from 1GB RAM to 2.5GB RAM

          As far as day-to-day performance goes, the hard drive upgrade made the most noticeable difference. The RAM upgrade is great for the relatively rare moments that I have a lot of stuff open on my laptop (it's not my primary computer) and Windows 7 cert

          • You weren't dealing with 500GB platters in the 7 year old 5400RPM drive. Not quite apples to apples. But Seagate going with an SSD portion that's barely bigger than today's RAM upgrades seems silly.

      • What's also certain is that with hybrid drives you get the slow read speed of a 5400 rpm drive, the mechanical disadvantage of a fragile rotating platter, and the catastrophic (read: total) data loss when SSDs fail.

        No thanks. The money savings vs. buying a true SSD are not worth the extra complexity, slower read times, and potential for failure. If I am going to accept the risk of an SSD failure, it had better be fast - through and through!

        I can't wait until flash memory becomes so inexpensive that it becomes standa

        • Having RAID on the drive itself mostly defeats the purpose of RAID (excepting RAID 0, but even that has issues with this approach). RAID is best for combating downtime due to hardware failure. By sticking both "disks" of a RAID-1 on one drive, you have no recourse if one of those "disks" fails. You can't swap out half a drive to let it rebuild on a good 'disk'.

    • by fuzzyfuzzyfungus ( 1223518 ) on Thursday August 08, 2013 @11:02AM (#44509733) Journal

      No chance this is just the company saying this because they missed the boat on solid state drives?

      Given that Seagate makes HDDs and has little or no Flash fabrication capacity, they were obviously going to include an HDD in the plan (and, given the price, so will a lot of buyers). They don't have an obvious bias (other than a general desire for 'less, because that keeps costs low') in terms of how much NAND cache is needed to see meaningful improvements.

      I'd be inclined to distrust flimflam to the effect that 'Sure, hard drives are just as good as SSDs!'; but have no particular reason to doubt that 8GB, rather than 4, or 12, or 16, or 5, or 32, is the approximate amount of flash needed, if that is what they report.

      • Their approximation is taken from a dark place...located below the belt line.

        In other words, that number is crap. It shows a divine lack of future planning for capacity...and a stoic belief that, contrary to all historical evidence, applications and operating systems will not continue to grow in size. Assuming a ROI of at least 3 years, perhaps 5 years for some, you're looking at at least one major OS upgrade, possibly two, which tend to be larger with each iteration; additionally, individual appli

        • by fuzzyfuzzyfungus ( 1223518 ) on Thursday August 08, 2013 @11:44AM (#44510265) Journal

          I have the suspicion that Seagate is planning quite specifically; but just don't care all that much.

          The majority of orders will, presumably, be from OEMs looking to stuff HDD slots on the cheap, while still complying with the Win8 hardware certification requirements (most notably, resume in under 2 seconds) and possibly Intel's "ultrabook" requirements, which have their own I/O demands.

          I suspect that Seagate's calculations of "How cheaply can we build a drive that will satisfy the letter of the requirements that our customers need to meet?" were made with care, and aren't crap at all. They're just something of a lie if you expect that level of performance to be maintained under more stressful loads.

      • The part I find interesting is that Intel SRT (basically using a separate SSD as a cache) won't work with less than a 20GB SSD. When developing SRT, Intel determined that 20-60GB is the appropriate range for caching, so it won't work below 20GB and will ignore capacity beyond 60GB.

  • What about games (Score:4, Insightful)

    by Noughmad ( 1044096 ) on Thursday August 08, 2013 @10:55AM (#44509643) Homepage

    The games I download from Steam are around 5GB each. So if I try playing two games in one day, only the first one will load quickly?

    • Were you using a spinning disk now? Generally as soon as you say "game" it means you are probably not using "normal office computers" and can safely ignore spinning drives altogether.

      • Yep, they mean Browser, Excel/Word, Outlook, etc...
        Same reason why most office computers don't have 16GB of RAM, while gaming rigs do.

    • It's a complicated question. It depends on which parts of the game data the drive decides to store in the flash memory. Maybe it's based on "recent popularity" of HDD sectors. In that case it might mean that the levels which you are currently churning through in both games might reside in the flash cache, meaning that both of the games would start up quickly. However, if both games actually go through all the data during a session, then there will be some occasional slowdowns in either one or both of the games.
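The "recent popularity" caching described above can be sketched as a simple least-recently-used block cache. This is a hypothetical simplification (real SSHD firmware also weighs read frequency and other heuristics, and the names here are made up), but it shows why two games sharing a too-small cache evict each other:

```python
from collections import OrderedDict

class LRUBlockCache:
    """Toy model of a hybrid drive's NAND cache: recently read
    blocks stay in flash, the least recently used are evicted."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block number -> cached marker

    def read(self, block):
        """Return True on a cache hit, False on a miss (platter read)."""
        if block in self.blocks:
            self.blocks.move_to_end(block)  # mark as recently used
            return True
        self.blocks[block] = True           # cache on miss
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return False

# Two "games" sharing a cache big enough for only one of them:
cache = LRUBlockCache(capacity_blocks=4)
game_a, game_b = [0, 1, 2, 3], [10, 11, 12, 13]
first_load = [cache.read(b) for b in game_a]  # all misses (slow)
reload_a = [cache.read(b) for b in game_a]    # all hits (fast)
for b in game_b:
    cache.read(b)                             # loading game B evicts game A
back_to_a = [cache.read(b) for b in game_a]   # all misses again (slow)
print(first_load, reload_a, back_to_a)
```

This matches the load-slow / reload-fast / load-another-game-then-slow-again pattern discussed in the replies.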
    • by Hatta ( 162192 ) on Thursday August 08, 2013 @11:09AM (#44509827) Journal

      The first time you load each game, it will load slowly.

      If you close and reload a game, it will load quickly.

      If you close a game, load another game, then load the first game it will load slowly again.

      • The first time you load each game, it will load slowly.

        If you close and reload a game, it will load quickly.

        If you close a game, load another game, then load the first game it will load slowly again.

        Um, not necessarily. It doesn't need to load the entire 5GB of the game in order to provide good caching response. If you played the entire game, then exited, played the entirety of another game and *then* came back to the first, then, yes, it *might* load slowly again.

        However, if you load a game, play a level (or even a few), exit play another game for a level and went back to the first it is highly probable that your previous session will still be cached, but loading the next level *might* be slow (depe

    • No.
      The point of cache is to hold the data read in memory so it can be loaded again in a fast way.
      Loading things for the first time will never be faster on a hybrid drive. Never.
      That, however, is the whole point of SSDs.

    • What this means is that "Gamers are not normal users." They can cost-save their standard line by going from 7200 to 5400 rpm and charge the same, and create a new Gamer/High Performance line and charge a lot more for it (if they aren't already doing that), while eliminating the lower-tier 7200 line that was doing "good enough" for most gamers or high-end users.

      In other words, if you like 7200 cheap drives, buy one now or hope that competition won't follow suit.
  • by Skapare ( 16644 ) on Thursday August 08, 2013 @10:56AM (#44509651) Homepage

    ... all solid state in my laptop. I hate hybrids.

  • Are hybrid drives working well on Linux yet? Last I checked, support for hybrid SSDs was still in its infancy.

    • by fnj ( 64210 ) on Thursday August 08, 2013 @11:09AM (#44509825)

      SSHDs as implemented by Seagate do not require any support whatsoever in the host. Their caching algorithm does not care about the FS at all; it is block level. I have one working just fine in Arch Linux. Linux just sees it as any other HD, only it is much faster overall. Obviously you will never see any improvement at all in huge file copies.

      WD has some lame Windows-only SSHD tech that does require special software on the host.

      • While it is true that Seagate does hybrids at the block level, so it is transparent, Linux hit some bugs in the drives' firmware quite some time ago that Windows did not, and the results weren't so pretty. That was a few years ago, though; hopefully they have it sorted by now.

        • by FS ( 10110 ) on Thursday August 08, 2013 @11:57AM (#44510491)

          I purchased one of those drives on the day it was available at Newegg for use in Linux, and shortly after bought a pair of them for RAID0 in a desktop (gaming) system where data integrity wasn't my main concern. In both systems I ran into firmware problems and could not natively flash them in the system that was running them. I pulled them into a bench PC I have and flashed them there, and everything was fine. The issue had to do with power saving and would cause some pretty frequent hardware locking issues on both systems that were painful until I was able to resolve them. All 3 of the drives are benched now, but still work fine. I never lost any data due to the lockups - they would just hard lock the PC for a second or three and then continue working like nothing had happened.

          In my experience this is typical early adopter fare.

  • by gl4ss ( 559668 ) on Thursday August 08, 2013 @10:58AM (#44509683) Homepage Journal

    seagate research suggests seagate bargains are good! how amazing!

    hybrid drives blow, I guess better than nothing but no comparison to ssd. that 8 gigs isn't the same every day, or if it is then the machine is acting pretty much just as a terminal and not moving media around etc (yes, there was a time I could get by with a 3.2gbyte fireball, but that was long ago now).

    excuse me as I go do a simple drag'n'drop to my bigger hd. hybrids would be nice, IF they slapped 128gbytes+2tbytes on it and somehow it understood that there's no need to move the video file I'm viewing to the ssd portion ever.

    just playing two different games would outrun an 8gbyte ssd portion... heck, max payne 3 was something like 30 gigs and one session of gaming probably accesses 8 gigs easily, and it would be nice to have the os on the ssd portion.

  • by clinko ( 232501 )

    "640K is all the memory anybody would ever need on a computer." - Bill Gates (Not Really)

    • Nobody claimed Bill Gates said that exact quote. The assertion was that he said, at the time, that 640K ought to be enough for anyone. He has since avidly denied ever saying anything of the sort, but there is still some debate, if anyone cares.

  • Damn (Score:5, Insightful)

    by Nemyst ( 1383049 ) on Thursday August 08, 2013 @11:01AM (#44509727) Homepage
    This looks like Seagate desperately clinging to their old bastion. Even Western Digital bit the bullet and started working on pure SSDs. The problem with Seagate's calculations is that there'll come a time (not that far into the future) when NAND will be cheap enough to get a full SSD for only a moderate price hike over a HDD, while getting all the benefits of a pure SSD. They risk getting left behind by clinging to the hybrid drive idea.
    • Re:Damn (Score:5, Insightful)

      by adisakp ( 705706 ) on Thursday August 08, 2013 @11:31AM (#44510079) Journal
      Hybrids aren't that bad an idea. You can get a 3TB drive for just over $100. HD data is $0.03-$0.05 / GB; SSDs are still in the $0.80-$1.50 / GB range. That's a factor of up to 50X more expensive. You can't even buy a single 3TB consumer SSD, and three 1TB SSDs will cost you around $2000, plus eat up half your SATA ports.

      Although I do disagree on one point -- if a consumer uses ~10GB of data a day, I would overshoot and put 16GB rather than 8GB in a hybrid drive -- it's better to slightly overprovision and almost never hit the platter part of the storage than to underprovision and force yourself onto the slower backstore. Plus the difference should be less than $10 for the drive.

      One problem with hybrid drives, though, is that they aren't necessarily faster than intelligent software caching to SSDs, or than a hardware controller (with possible software assist) that supports caching data from a HDD to a SSD (such as Intel Smart Response SSD Caching, which has been on motherboards since 2011).
  • by barlevg ( 2111272 ) on Thursday August 08, 2013 @11:03AM (#44509747)
    Honest question: how do hybrid drives compare to traditional HDDs when it comes to wear? Do they tend to fail more (less) often / die faster (live longer) than traditional drives? What about pure SSDs?
    • by obarthelemy ( 160321 ) on Thursday August 08, 2013 @11:10AM (#44509833)
      Last time I checked, there was no lifespan issue for SSDs (I think it was 33 years at 10GB/day). Even the bug issues seem to have been dealt with; I haven't heard any of the once-frequent OCZ horror stories (bricked SSD) in a while. I'd assume hybrid drives to be just as good as pure HDDs, actually a bit better since the SSD part will save wear and tear on the HDD part. Bugs notwithstanding.
      • by Chas ( 5144 ) on Thursday August 08, 2013 @11:35AM (#44510123) Homepage Journal

        The lifespan issue with SSDs has three main factors.

        1: Type of flash memory (SLC, MLC, TLC, in order of decreasing durability)
        2: Size of the flash drive (larger drives have more room for wear leveling algorithms to work with, thus staving off flash cell burnouts due to exceeding maximum number of writes).
        3: The amount of throughput on the flash drive. An expected heavy load is roughly 10GB/day. Doubling the load halves the lifetime of the drive. Quadrupling the load quarters it.

        Granted, the cache on a Hybrid is being used a bit differently than how you would use a straight SSD. But, with such a small cache drive, you ARE going to wind up cooking it after a relatively brief period of time.
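A back-of-the-envelope sketch of factors 2 and 3 above. The numbers are illustrative assumptions only (MLC-class flash rated for roughly 3,000 program/erase cycles, perfect wear levelling across the cache), not a vendor specification:

```python
# Rough endurance estimate for a small NAND cache.
# Assumption: ~3000 P/E cycles per cell (MLC-class), ideal wear levelling.
def cache_lifetime_years(cache_gb, pe_cycles, writes_gb_per_day):
    total_writable_gb = cache_gb * pe_cycles   # total GB writable over life
    return total_writable_gb / writes_gb_per_day / 365

# An 8 GB cache absorbing the ~10 GB/day "heavy" load mentioned above:
print(round(cache_lifetime_years(8, 3000, 10), 1))  # ~6.6 years
# Doubling the load halves the lifetime, as the parent notes:
print(round(cache_lifetime_years(8, 3000, 20), 1))  # ~3.3 years
```

Under these assumptions the cache outlives a typical laptop, though write amplification and imperfect levelling (factor 2 above) would shrink the real figure.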

          • by tlhIngan ( 30335 ) on Thursday August 08, 2013 @12:11PM (#44510697)

          The lifespan issue with SSDs has three main factors.

          1: Type of flash memory (SLC, MLC, TLC, in order of decreasing durability)
          2: Size of the flash drive (larger drives have more room for wear leveling algorithms to work with, thus staving off flash cell burnouts due to exceeding maximum number of writes).
          3: The amount of throughput on the flash drive. An expected heavy load is roughly 10GB/day. Doubling the load halves the lifetime of the drive. Quadrupling the load quarters it.

          Granted, the cache on a Hybrid is being used a bit differently than how you would use a straight SSD. But, with such a small cache drive, you ARE going to wind up cooking it after a relatively brief period of time.

          Which for most users and usage scenarios is basically forever. There's been a volunteer-run test of longevity which stresses SSDs until they fail by writing data to them continually, and the tests run until the drive itself dies, which tells you how long you have left. The SMART data also typically gives you plenty of advance warning - the Media Wear Indicator (MWI) tells you how many cycles are left in the array; once it hits zero, the number of write-erase cycles has hit the guaranteed limit and you're running in unknown territory (though there are usually still spare blocks and most drives will still have plenty of life). If you want guarantees, once the MWI hits zero, it's time to back up and get a new SSD. So you generally get a LONG warning of media wear-out.

          However, the biggest problem SSDs face is actually sudden loss and corruption of the FTL tables (the ones that map logical sectors to actual flash sectors). If you hear of SSDs dying prematurely, it's almost always because of table corruption. These tables contain things like sector translation, sector wear, dirty/clean bits, trim status, etc.

          In the past, you could regenerate the tables from the spare area data (typically 16 bytes per 512-byte data area), but enhanced ECC algorithms now consume that space to accommodate better error handling. It also meant way longer mount times, as the controller had to scan the entire media for the information (many seconds).

          These days, controllers come with 512MB or more of RAM to hold the tables in memory for quick access. The problem is the tables are often written out lazily to storage, which means if you yank the power suddenly, the SSD might not be able to write the dirty data to stable media, or worse yet, it'll be in the middle of the write operation which leaves data in an unknown state.

          Good SSDs often have piles of capacitors to serve as emergency power which can keep the array powered for a couple of seconds - more than enough time to flush the tables to storage and protect your data. Of course, this costs a lot more money and is usually present only in the top tier drives and enterprise class SSDs. If an SSD dies suddenly, it's usually because of this.

          Hard drives use the back EMF produced by the spinning platters to perform emergency shutdown procedures, including retracting the heads.

      • I'd assume hybrid drives to be just as good as pure HDDs, actually a bit better since the SSD part will save wear and tear on the HDD part. Bugs notwithstanding.

        I'd expect the opposite. Large SSDs are more reliable because of the large space available to wear leveling algorithms. Much of the data on an SSD will not be changing at all. Writing and erasing cause the wear on the SSD portion, while reading causes little to none. In a small solid state cache, like in the hybrid drives, each memory location will be subject to many more write/erase cycles than in a larger SSD.

    • Re: (Score:3, Informative)

      by rullywowr ( 1831632 )
      The main issue is that if a traditional HDD fails, there is a small chance of data recovery. There also may be a short period where you can catch the drive "on its way out." When an SSD fails, it often fails spectacularly, often "bricking" the entire drive or corrupting the entire contents.
  • Yeah. That's great.

    Until you burn through that dinky little 8GB due to heavy read/write.

    Then what? You now have a 5400 rpm hard drive.

    • by Nemyst ( 1383049 )
      Probably worse: you have a 5400rpm non-functional drive. I very much doubt they've integrated a failsafe which bypasses the NAND cache if it gets exhausted. More likely the performance will worsen as the healthy NAND decreases in size until it's zero.
  • by SirDrinksAlot ( 226001 ) on Thursday August 08, 2013 @11:07AM (#44509789) Journal

    When I was buying a new laptop hard drive I got a hybrid drive, but not before some research. The 7200RPM Momentus XT mops the floor with their new generation of 5400rpm hybrid drives. The performance increase on their 5400rpm drives is insignificant - it was not even worth considering - but I at least found stock of the Momentus XT, which *IS* well worth considering, and ordered TWO. I took into account cost, capacity and performance when choosing the drives. For cost/performance the 5400rpm drives did not deliver, but they had the capacity; the Momentus XT delivered on cost/performance and was only slightly lacking in capacity. A pure SSD only delivered on performance, which in my case wasn't weighted enough to justify it. If the Momentus XT didn't exist I'd have just stuck with a 7200rpm drive.

    So RIP, Seagate's worthwhile mobile HDDs. Unless you've fixed the mediocre performance in your 5400rpm drives, I'll either be going full SSD or just buying somebody else's 7200rpm drive.

    • by vux984 ( 928602 ) on Thursday August 08, 2013 @12:16PM (#44510771)

      I took into account cost, capacity and performance when choosing the drives.

      But not battery life, apparently, which is the one area where 5400 rpm drives beat out 7200 rpm drives, and is possibly the reason they even exist. A 5400rpm hybrid would need to spin up even less and should do even better on the battery front. Not to mention that if you get a cache hit, it doesn't have to spin up at all, which is a big performance boost too.

      So while "benchmark" performance might not be great, the real-world benefit might be substantial, as the hard drive could spin down more, and you could access the drive without spinning it up some of the time, possibly even most of the time.

      There's definitely potential to be both markedly faster in real world laptop use scenarios and consume less battery with a hybrid. Whether that pans out in reality I don't know.

  • Why bother adding 8 gig of solid state storage to your hard disk when you could just add 8 gigs of RAM and use that for disk cache?

    • by hattig ( 47930 )

      Not much use if half of the 8 gig is used for storing system files that are accessed on a reboot / cold boot that wipes the RAM.

      Also 8GB of NAND probably costs $4, which is a lot less than 8GB of RAM.

    • Re:RAM cache? (Score:4, Insightful)

      by TheRaven64 ( 641858 ) on Thursday August 08, 2013 @11:34AM (#44510107) Journal

      RAM cache is useless for speeding up writes. A significant (although workload-dependent) part of the performance problem with spinning disks is that if you issue a write and then need to block until it's on disk (which you need for consistency), it can easily take 5-10ms (or more), and that severely limits performance. Often, non-server workloads include doing a lot of small synchronous writes and then no writes for a while. An SSD as a write-back cache works well here because it can reorder a lot of writes to turn (some of) them into sequential writes, and it can trickle out a lot of writes while the disk is idle. This is also pretty much the best case for flash longevity: you don't need wear levelling, because you just treat the entire flash as a ring buffer and write to one end and write to the disk from the other end. You can keep the translation layer in RAM, and if there's a power failure you just replay the entire flash journal onto the disk.

      The "only reads 8GB of unique data per day" number is meaningless as an indication of how often each thing is used, however. If each day you always access the same 8GB, then an 8GB cache will be perfect for you. If you access 8GB a day but only touch 7.5GB of it once, then a 512MB cache will be fine and you'll get no benefit from more - but you will get a big benefit from a faster underlying storage device.
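The ring-buffer journal described above can be sketched like this. It's a toy model under stated assumptions (the dict-backed "flash" and "disk", and all class and method names, are hypothetical illustrations, not any vendor's firmware):

```python
from collections import deque

class FlashWriteJournal:
    """Sketch of a flash ring-buffer write journal: synchronous writes
    are acknowledged once appended to flash, then drained to the
    platter in order while the disk is idle."""

    def __init__(self):
        self.journal = deque()  # "flash": ordered (sector, data) entries
        self.disk = {}          # backing "platter"

    def write(self, sector, data):
        # Fast path: returns as soon as the entry is on flash.
        self.journal.append((sector, data))

    def drain_one(self):
        # Called while the disk is idle: move the oldest entry to disk.
        if self.journal:
            sector, data = self.journal.popleft()
            self.disk[sector] = data

    def replay(self):
        # Power-failure recovery path: replay the whole journal to disk.
        while self.journal:
            self.drain_one()

j = FlashWriteJournal()
j.write(7, b"metadata")   # acknowledged immediately
j.write(3, b"payload")
j.replay()                # e.g. after a power failure
print(j.disk)             # {7: b'metadata', 3: b'payload'}
```

Because the journal drains from one end while writes land on the other, the flash is cycled evenly, which is the longevity argument made above.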

  • Not in my experience (Score:5, Informative)

    by swb ( 14022 ) on Thursday August 08, 2013 @11:08AM (#44509809)

    I had a Seagate Momentus XT (750 GB hybrid) and I replaced it with a Samsung 750 GB SSD. The pure SSD solution is noticeably faster in all respects, especially boot up, and this is with a machine now using Truecrypt whole-disk encryption (I wasn't using it on the Momentus).

    The Momentus was a good upgrade until SSDs in the size I wanted were reasonably priced, but performance wise it isn't in the same league as a SSD.

    The hybrid SSD solution really shows its weakness when you deviate from "normal" behavior, and this can be anything from an application upgrade, running Windows updates, or accessing stuff you don't use that much. Performance just seems back to dismal levels and I suspect that it takes a while for the cache to re-optimize if the deviating disk activity is at all intensive.

    I think the hybrid concept is interesting, but you need more cache and a way to optimize the cache not just for the most recently accessed blocks but for the operating system and applications in use, too.

  • Yeah, but typical office PCs are already plenty fast for the things they typically do, so they aren't in need of a big boost. That's why PC manufacturers have been concentrating on making them smaller and cheaper rather than more powerful. It's the data-intensive applications that are atypical of office PCs that are the market for high performance drives.

    Besides, if you only need 9.5 GB of unique data per day, you're probably better off upgrading your RAM rather than your hard drive. The stuff you ac

    • by h4rr4r ( 612664 )

      If you only need 9.5GB you should get a small SSD. RAM only helps once you fill the cache. RAM is never going to give you the performance you get from an SSD.

  • It seems that the word "hybrid" is being redefined in common use. It is now being used to combine one old outdated thing that needs to be put out to pasture with a new state-of-the-art idea.

  • Man, tell me I'm not the only one who still remembers a time when 8 GB of RAM was a REALLY big deal. I still only have 8 GB in my gaming computer and 6 in my laptop. Maybe I'm just old.

  • I have the following concerns with hybrid drives.

    1: After a new build or reimage (remember, they are talking about office users here, and offices do reimage from time to time), how long will it take the system to settle down and work out what should be on the hdd and what should be in flash? There is obviously a compromise here between speed of adapting to a changed usage pattern and wear on the flash.
    2: What will happen with writes? Large SSDs spread them over a large area of flash to get decent life. Will writes

  • .... too slow to do backups with if you have any sizable amount of data at all. Mirroring SSDs saves a huge amount of time and fiddling when backing up important data.

    Hard drives are really for huge libraries of stuff you want to keep but don't use often and don't mind slow backups of, because they are of lower relative importance (movies, games, etc).

  • My suggestion is to simply buy more RAM. With 16 GB of RAM there are 10 GB for disk cache and 6 GB for everything else. This sounds like it matches Seagate's usage pattern pretty well. It should be cheaper and RAM can be written endlessly, so it should be more reliable. The only downside is it must refill its cache if you reboot. Gamers might opt for 32 GB if they can find a system with that much.
  • by TheSkepticalOptimist ( 898384 ) on Thursday August 08, 2013 @12:07PM (#44510647)

    SSDs are great, yes, but there are still two big problems with them.

    1) Capacity and price. HDDs are still significantly cheaper per GB and go up to 4TB per drive. It's a hard sell to offer a computer with only 128 GB or 256 GB of system storage when you can also find ones with 3 to 4 TB of storage.

    2) SSDs are still aping HDD technology. While SSDs offer better performance than HDDs, SSDs really should be offering performance on par with RAM rather than with physical spinning disk media. While some companies like Apple are hooking SSDs directly to the PCIe bus, I would expect a drive made up of solid state chips to perform more like RAM rather than just slightly faster than a HDD.

    I'm not saying I want HDDs over SSDs, but I've been around long enough to remember when SSDs were first promised as the "next great thing", and I'm still greatly disappointed at the state they are in today.

    I think the whole SSD industry dropped the ball; I have never seen an industry innovate at such a snail's pace. An SSD is a card full of chips, it's not rocket science, and yet SSDs still have decaying read/write rates, performance is not on par with RAM, and prices are still ridiculous.

    Now compare the HDD industry: they have exceeded the perceived limits on the amount of storage per inch of platter several times now. These have been SIGNIFICANT feats of engineering to squeeze out more bit density - they even moved to stacking bits vertically on the medium.

    I don't know; there is no reason why we don't have terabyte SSDs that cost $50 and rival RAM in speed today, except that the SSD industry has either been unwilling to bring us this technology at this price point OR grossly incompetent in not being able to deliver the goods.

    For now, if you can put 8 GB of cache into a 4 TB hard drive and deliver comparable speeds to an SSD for less money, I think this is a huge win for Seagate, despite all the negative bashing you are all giving them.

    When SSDs grow up and start matching HDDs on price/capacity, or delivering speeds that make RAM obsolete, let me know; for now this has been an over-promised and underwhelming technology.

    Also, will people please stop assuming this is the same as the "we only need 640k of RAM" or "only 9GB!" statements. It's absurd. They are not claiming that people only need 8 GB of storage in total, but that on any given day people access roughly 8 GB on average - so cache that much and give people the high performance of an SSD without the stupid expense, while still having terabytes of storage to access.

  • by gnasher719 ( 869701 ) on Thursday August 08, 2013 @01:04PM (#44511379)
    That would be a good way to check the theory in real life. Apple's Fusion Drive combines a 128 GB SSD with a 1 or 3 TB hard drive, but you can build it yourself with any size SSD. Over at MacRumors, obviously, everyone is up in arms that 128 GB is much too small, while Seagate says 8 GB is enough.

    Building a Fusion drive with 32 GB SSD (I suppose that's the smallest you can buy) and checking it in real life would be a good way to test this.
  • by msobkow ( 48369 ) on Thursday August 08, 2013 @01:41PM (#44511721) Homepage Journal

    Given that people buy machines with 8 and 16 gigs of RAM nowadays (if not more), isn't it more cost effective to just let the OS use that extra memory for caching instead of pushing the memory down to the drive? After all, the OS is making block level IO requests and has far more knowledge about what data it's going to use than the drive ever could.

    And if they're not making 7200 rpm drives any more, then I'm not buying Seagate drives any more. I do not want laptop quality drives in my box -- I run databases and compilers and other such IO intensive loads far too often.

    By the way, when I *rebuild* my projects, it takes about half the time of the initial build because all the source has been loaded into cache by Linux, so all it needs to do is *write* the outputs.

    Perhaps I've answered my own question about RAM cache vs. disk cache: the drive-side cache only wins if it's faster to write to SSD cache than it is to write to physical disk.
