Data Storage Hardware

Seagate Launches Hybrid SSD Hard Drive

MojoKid writes "Though there has been some noise in recent years about hybrid storage, it really hasn't made a significant impact on the market. Seagate is taking another stab at the technology and has launched the Momentus XT 2.5-inch hard drive, which mates 4GB of flash storage with traditional spinning media in an attempt to bridge the gap between hard drives and SSDs. Seagate claims the Momentus XT can offer the same kind of enhanced user experience as an SSD, but with the capacity and cost of a traditional hard drive. That's a pretty tall order, but the numbers look promising, at least compared to current traditional notebook hard drives."
This discussion has been archived. No new comments can be posted.

  • The performance of the drive gets better over time as it 'learns' your most frequently used files. I hope it's smart enough to ignore the 'swapfile'.
    • by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Monday May 24, 2010 @09:27AM (#32323116) Homepage Journal
      What swapfile? I have used Ubuntu on a few PCs with at least half a GB of RAM, and I rarely see swap usage climb above 40 MB. In an environment where reads are cheaper than writes, you'll want to use a low value for the swappiness [ubuntu.com], such as 10% instead of the default 60%.
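      For what it's worth, a minimal sketch of reading and lowering the value programmatically on Linux (assumes root; the sysctl path is standard, the value 10 is just the example above, and the change doesn't persist across reboots -- for that you'd set vm.swappiness in /etc/sysctl.conf):

        # Sketch: lower vm.swappiness so the kernel prefers dropping
        # page cache over swapping anonymous pages out to disk.
        SWAPPINESS = "/proc/sys/vm/swappiness"

        def get_swappiness():
            with open(SWAPPINESS) as f:
                return int(f.read().strip())

        def set_swappiness(value):
            if not 0 <= value <= 100:
                raise ValueError("swappiness must be 0-100")
            with open(SWAPPINESS, "w") as f:  # needs root
                f.write(str(value))

        if __name__ == "__main__":
            print("before:", get_swappiness())
            set_swappiness(10)  # the low value suggested above
            print("after:", get_swappiness())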
      • The problem with Ubuntu (indeed, most Linux distros I've used) is that they also use the swap file as the hibernation file - last time I set up a laptop, I wasn't able to hibernate due to my habit of capping swap partitions/files at 256MB*, and last I looked it was impossible to use a swap file for this; due to the way suspend was implemented, it had to be a partition. More confusingly, Linux (judging by the amount that's written to disc when hibernating) writes the entire contents of memory, rather than

    • Wouldn't you want your swap space to be stored on the faster SSD rather than the slower spinning media?
      • by eln ( 21727 ) on Monday May 24, 2010 @09:47AM (#32323396)
        These days with RAM being so cheap, your swap space is basically a warning that things are going terribly wrong. You want your swap on slow storage because slow storage is cheap and your swap should see very few writes under normal operation. If your machine starts hitting swap like crazy, you'll know immediately because your performance will go straight down the crapper as it feverishly tries to write to slow storage. This is your cue to figure out what's wrong and fix it ASAP so your machine will stop thrashing.
        • by AusIV ( 950840 ) on Monday May 24, 2010 @09:59AM (#32323546)
          The main thing I use swap for these days is hibernating my laptop. What I need is persistent storage - the quicker the better.
        • Re: (Score:3, Insightful)

          by Rogerborg ( 306625 )
          Ooh, good idea. By the way, I've been thinking of making an alarm clock that electrocutes your testicles if you hit the snooze button - can I sign you up for the beta?
        • by VanessaE ( 970834 ) on Monday May 24, 2010 @02:47PM (#32327686)

          You're forgetting one thing:

          Sometimes, a machine will go from seemingly normal to suddenly thrashing about in swap rather heavily, with no warning at all. This has been the bulk of my experience, anyway. When your machine gets to that point, and you're in a graphical environment like the majority of desktop users, you may not be *able* to look into the problem at all. You have to wait until the damn thing comes to its senses, because you can't even switch to a regular text console, let alone log in from another box. Forget trying to spawn a terminal. Every little program you launch to try to find the cause just makes the machine use more memory or swap, which only compounds the problem.

          When the offending program finally does end, it's too late to see what went wrong, because most programs leave no traces of their actions other than doing whatever they're programmed to do. Unless you're running some kind of process/resource logging on your box (which I'm not aware of anyone doing, outside of security professionals perhaps), good luck finding out what actually caused the problem, short of having seen something visibly bug out just before the machine stopped responding.

          There are no two ways about it - this is absolutely the worst way to handle an out-of-memory condition. Most people would much rather have programs complain about lack of memory than have their machine "lock up" for an hour while it sits there churning away in swap. In my experience, the average user figures their computer's being stupid again, and it's time to hit the power switch or the reset button, or maybe call someone for help (which doesn't work anyway, so they're back to square one).

          To give an example, I once set my machine off to run a build which should have taken maybe half an hour, and went off to run some errands and watch a movie. It was still going three hours later, and had dug the machine so deep into swap that mouse events were taking 10-20 seconds just to echo to the screen, and keyboard events were nonexistent as far as X was concerned. I tried my level best to bring the machine back to a sane state, but I eventually had to give up and hit it with Alt-SysRq-U/S/B.

          I love Linux as much as anyone, but I got sick and tired of this happening on my boxes, and responded the only way that seemed to make sense: I disabled swap entirely on both systems and added enough RAM to each to make up for the lost "memory".

          Aside from older hardware that clearly needs it because of sheer lack of RAM, is there even any reason to recommend/enable swap by default anymore? Modern machines come standard with around 4GB of insanely fast RAM - isn't that enough?

    • SLC flash (Score:3, Informative)

      According to the article, it's SLC flash. It should have many more write-erase cycles than MLC.
  • Manageable hybrid (Score:5, Insightful)

    by Thanshin ( 1188877 ) on Monday May 24, 2010 @09:30AM (#32323144)

    Hybrid storage drives should be manually manageable.

    You should be able to configure which files/folders/partitions/whatever you want to be accessed fast and which parts are to be left as "long term", slow-access storage.
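
    To make that concrete, a hypothetical sketch of what such a policy could look like (the paths, tier names, and resolver are all invented for illustration; no shipping drive exposes an interface like this):

      # Hypothetical user-managed tiering policy: map path prefixes to
      # "fast" (flash) or "slow" (platters); longest matching prefix wins.
      TIER_POLICY = {
          "/boot": "fast",
          "/usr/bin": "fast",
          "/home/user/videos": "slow",  # bulk media stays on platters
          "/": "slow",                  # default for everything else
      }

      def tier_for(path):
          best = max((p for p in TIER_POLICY if path.startswith(p)), key=len)
          return TIER_POLICY[best]

      assert tier_for("/boot/vmlinuz") == "fast"
      assert tier_for("/home/user/videos/movie.mkv") == "slow"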

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      I have a manageable hybrid.

      Read-heavy system partitions on a small SSD (/boot, /bin, /etc ...etc), everything else on magnetic.

      • I have a manageable hybrid.

        Read-heavy system partitions on a small SSD (/boot, /bin, /etc ...etc), everything else on magnetic.

        Separating the two like that leaves you with just two options.

        A manageable hybrid would let you have more degrees of speed/size. It would let you use just part of the SSD to accelerate access to part of the magnetic storage, use the rest as pure SSD, and leave the rest of the magnetic drive for low-priority storage.

    • So you mean like having 2 separate drives?

      • So you mean like having 2 separate drives?

        Two drives with a direct connection that allows me to seamlessly save what I'm reading on the SSD so a second access is faster.

        (while keeping all the other options of having both)

      • by dingen ( 958134 )
        Except it would not be two physical drives. Awesome for laptops!
    • by drerwk ( 695572 )
      I feel the same way about swap pages.
    • by Kjella ( 173770 )

      If you mean that applications or the OS should be able to give cache hints, then I agree. If you want essentially two drives and a million manually managed symlinks, I don't want that; it'll be 10% overhead managing to save 1% on performance. And if you're only doing coarse-level management, like installing one app here and saving files there, then get two disks (there are dual-bay laptops too).

    • Re: (Score:3, Insightful)

      by ZorbaTHut ( 126196 )

      I sort of disagree. Humans are really, really bad at this kind of management, and a smart computer algorithm can often do better. Just look at the people who disable swap space because "it makes the computer slower". You can't trust humans to manage this optimally, and computers can, in theory at least, generate extremely complicated structures and processes (e.g. "if the user runs this program, he's probably about to be reading this data, so let's get this onto the SSD ASAP").
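
      To illustrate the kind of thing the computer can do automatically, here's a toy frequency-based promotion heuristic (the real Momentus XT algorithm is proprietary; the block numbers and capacity here are made up):

        # Toy heuristic: count reads per block and promote the hottest
        # blocks to flash, up to the flash capacity.
        from collections import Counter

        class HotBlockTracker:
            def __init__(self, capacity_blocks):
                self.capacity = capacity_blocks
                self.hits = Counter()

            def record_read(self, block):
                self.hits[block] += 1

            def blocks_to_promote(self):
                return {b for b, _ in self.hits.most_common(self.capacity)}

        tracker = HotBlockTracker(capacity_blocks=2)
        for block in [7, 7, 7, 3, 3, 9]:
            tracker.record_read(block)
        print(tracker.blocks_to_promote())  # {3, 7}: the two hottest blocks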

      • by petermgreen ( 876956 ) <plugwash@p[ ]ink.net ['10l' in gap]> on Monday May 24, 2010 @06:25PM (#32330162) Homepage

        Just look at the people who disable swap space because "it makes the computer slower".
        There are two main mindsets to designing computer systems.

        The batch processing mindset says that what matters is average performance.
        The real time systems mindset says that what matters is meeting your deadlines consistently.

        IMO desktops are closer to the latter than the former. Tens of milliseconds on each user action won't generally be noticed; the user can't do the next operation that quickly anyway. Tens of seconds on one action WILL be noticed and quite possibly piss the user off, especially if it's unexpected, even if it only happens on a very small subset of actions. Unexpected delays break the flow of thought.

        Now consider an app like Firefox. It has a habit of using a LOT of memory (whether this is a leak or a design feature is the subject of many /. arguments and not one I want to get into here). It is also single-threaded, so if any part of the app needs something swapped in, the whole app is blocked. If the OS decides to swap it out for whatever reason (e.g. some app ran away with memory usage and didn't finally fail until after it had swapped out everything, or a long-running overnight batch job caused the OS to swap stuff out and expand the disk cache), then you click on its taskbar icon and wait ages as all the memory pages its state is spread over grind their way back into memory.

        You can't trust humans to manage this optimally
        True, but you can't really trust computers either. Especially when the computer hasn't really been told what the human considers important, or even how the data will be used.

  • ReadyBoost in hw? (Score:5, Interesting)

    by W2k ( 540424 ) on Monday May 24, 2010 @09:43AM (#32323338) Journal
    I wonder if this is simply a more expensive version of ReadyBoost. Like ReadyBoost, it takes your most frequently used files and puts them on flash for faster access times, in a way that is transparent to the end user. In that case, I wonder if there would be any speed gain from using this on a PC running Windows 7 with ReadyBoost. Caching always introduces some overhead, so rather than using multiple levels of "flash cache" it might be better to simply turn ReadyBoost off in that case. My experience with ReadyBoost has been that it does indeed improve performance, but nowhere close to using a real SSD as the system drive.
    • Re: (Score:3, Insightful)

      by eqisow ( 877574 )
      Hmm, USB vs SATA... I imagine this would be faster than ReadyBoost. On the other hand, if you're rocking USB 3.0...
    • Re:ReadyBoost in hw? (Score:5, Informative)

      by Sockatume ( 732728 ) on Monday May 24, 2010 @10:03AM (#32323606)

      Microsoft actually did pitch "ReadyDrive" [wikipedia.org] hybrid SSDs as a selling point for Vista back when it launched. It was basically the same as this, except the caching was controlled in the OS and not the drive and it did some fancier stuff like caching boot data on shutdown. It didn't do very well, perhaps because the technology wasn't mature enough in price and speed.

      • by dingen ( 958134 )
        Not to mention it would be a Windows-only product.
        • You expected a built-in feature of the Windows OS to run on Linux?

          • Re: (Score:3, Insightful)

            by dingen ( 958134 )

            No, I would expect a harddrive to work in Linux. A harddrive which relies on ReadyDrive would not be a very good product, as it would only work correctly in Windows. That's why those types of harddisks never caught on, even though Microsoft did try to push the concept.

            What Seagate is doing now is using the ReadyDrive concept of hybrid harddrives, but providing ReadyBoost-type technology on the controller of the harddisk instead of relying on the operating system.

        • by dskzero ( 960168 )

          Not to mention it would be a Windows-only product.

          ... and the point being?

          • by dingen ( 958134 )
            That Seagate's concept of a hybrid drive is a lot better product than what Microsoft was suggesting with ReadyDrive, as Seagate's hybrid drive doesn't rely on functionality only provided by Microsoft Windows, while Microsoft's ReadyDrive does.
        • Well, precisely. A Vista-only product at that.

    • by Bengie ( 1121981 )

      Since Vista already supported hybrid drives, I would *assume* Windows would check for this stuff. Also, Vista/7 already disables ReadyBoost automatically for drives that are SSDs.

      Another issue that would crop up is that SuperFetch runs under the ReadyBoost service, and SuperFetch is really nice.

      Wikipedia: SuperFetch is a technology that pre-loads commonly used applications into memory to reduce their load times.

      It actually caches commonly used files, not just "apps".

      Essentially, it uses free memory to spee

    • by MobyDisk ( 75490 ) *

      ReadyBoost really doesn't help unless you have a really low-memory situation.

      ReadyBoost goes through USB, which is a huge bottleneck. On top of that, most USB flash drives have really slow write speeds. ReadyBoost also has to read the file, then write it to the USB drive. Windows also must assume that the user could remove the flash drive at any point, so it can't cache writes. Integrating the cache into the drive solves all of these problems.

  • Or wait.. (Score:3, Interesting)

    by XMode ( 252740 ) on Monday May 24, 2010 @09:47AM (#32323390)

    OCZ, and I'm sure others, have SSDs up to 500GB now. OK, they cost as much as my car, but they exist. It won't be long before they get up to 1TB, then 2TB. Then it's just a matter of waiting for the price to come down.

    SSDs have caught up to traditional drives' capacity extremely quickly; it won't be long before you can put a 10TB SSD in your laptop and never have to worry again (well, except for loosing it).

    • Re:Or wait.. (Score:5, Insightful)

      by dingen ( 958134 ) on Monday May 24, 2010 @10:01AM (#32323576)

      SSDs won't be as cheap per GB as traditional drives for many years to come. Chances are that even when a 500GB SSD gets to an acceptable price point, an old-fashioned hard drive will still be cheaper and hold much, much more data at the same time.

      This solution provides a cost-effective way to have both performance and storage *right now*.

      • No, not really (Score:5, Insightful)

        by SmallFurryCreature ( 593017 ) on Monday May 24, 2010 @10:44AM (#32324124) Journal

        The way to get both performance and storage right now is to buy TWO disks. An amazing concept, I know. Who would have thought it was possible to get more than one HD/SSD into a PC.

        Every single story about SSD's seems to bring out the idiots who want everything on one disk. Good thing these guys ain't farmers or they would be trying to plow the field with a Ferrari or cruise town with a tractor.

        This drive is only of use to people who can't afford a real SSD and are limited to a laptop with only one drive bay, and even then you would get far better performance with a normal SSD and an external drive for your porn collection.

        Yes yes, there are people who use a laptop AND have need for far bigger datasets, but on the whole, those people also need far greater access speeds than a traditional laptop HD can offer. I find it amazing to see someone claim he needs to edit video on a laptop with a 500GB 2.5-inch HD running at 5400 rpm. Who are you trying to kid?

        And this drive won't be much help here. 4GB is just a cache file; if you are lucky it caches the right files, but if you are doing complex stuff these "smart" caches often get horribly confused and start caching the wrong data. Like Vista trying to cache torrented files. Yes, I know it accesses the file a lot, but please don't try to cache a 10GB file on the same HD. What's the fucking point? If you, for instance, run a large database from this drive, I am willing to bet its cache performance will degrade as it simply has too much to cache. Small caches only work when a small set of files is requested a lot and the rest isn't. Like a porn collection on your OS drive. Video editing, databases, and filesharing always screw up caches.

        If you really want performance in a laptop, spring for one with two drive bays, put as much memory in it as it can hold, and get an SSD and a HD. A real SSD, not one of the cheap ones some laptop companies put inside. An SSD is NOT just a fast HD; they truly are in a class of their own. And even if you've got only a small single SSD, you can still save space by putting your music/porn on a flash card or USB stick instead.

        I wonder if people can ever get it into their heads that an SSD is about speed, not about capacity. Then again, since every single netbook these days comes with a 360GB slow-ass HD instead of a small but fast SSD, I think I might be fighting a losing battle. Seems the average customer can only judge something if the number is bigger.

        • Re: (Score:3, Insightful)

          by dingen ( 958134 )

          The way to get both performance and storage right now is to buy TWO disks. An amazing concept, I know. Who would have thought it was possible to get more than one HD/SSD into a PC.

          In most computers sold today, fitting more than one harddrive is not possible. Besides that, it's a difficult solution to manage, as people have to manually decide what to put on the fast drive and what to put on the large drive. All in all it's a very fiddly solution, only available to tech-savvy folks with customizable computers. Not to mention the fact that two drives are more expensive than one.

          In the real world, a hybrid drive such as Seagate is proposing is a lot better in almost every way thinkable. It's just one drive, so it will fit in basically every computer in existence, and it functions completely automatically, as the user is presented with just one storage medium. The tests in the article prove this type of drive is both faster than traditional drives and a lot cheaper than SSDs, so it really is best of both worlds.

          • by imgod2u ( 812837 )

            In the real world, a hybrid drive such as Seagate is proposing is a lot better in almost every way thinkable. It's just one drive, so it will fit in basically every computer in existence, and it functions completely automatically, as the user is presented with just one storage medium. The tests in the article prove this type of drive is both faster than traditional drives and a lot cheaper than SSDs, so it really is best of both worlds.

            It's not actually the best of both worlds. It's more along the lines of something good from both worlds. First, this only improves read speed as it's not used as a write-cache. Second, random read and write speeds are just as abysmal as traditional HDs. And lastly, it's not quite going to match the price/capacity of traditional drives due to the need for multiple GB of SLC cache.

            Now, it's certainly better than a traditional drive at sustained reads and offers better price/capacity vs SSD drives. But I get the

            • by dingen ( 958134 )

              I would've preferred it if the drive presented the flash cache as a ReadyBoost drive to Windows and let the OS manage what needs to be cached. Certainly Windows knows which of its own disgusting innards need to be readily accessible better than a hardware algorithm does.

              Microsoft would like this too; they presented this idea back in 2006 with the "ReadyDrive" concept. But it never caught on, mainly because of a lack of Vista adoption.

            • by MobyDisk ( 75490 ) *

              First, this only improves read speed as it's not used as a write-cache.

              Are you sure that is how they implemented it? That is very foolish of them. That means this won't help my compile times.

              And lastly, it's not quite going to match the price/capacity of traditional drives due to the need for mult-GB of SLC cache.

              But it is darned close. I saw that this drive is only a $40 premium over existing drives. That sounds very worthwhile on a mid to high-end computer.

        • Re:No, not really (Score:5, Insightful)

          by Bigjeff5 ( 1143585 ) on Monday May 24, 2010 @12:22PM (#32325604)

          4GB is just a cache file, if you are lucky it caches the right files but if you are doing complex stuff these "smart" caches often get horribly confused and start caching the wrong data.

          You do realize that the reason your computer is so fast is because of progressive layers of cache, right?

          The fastest cache on the system is L1 cache. It's also the most expensive. Next is L2 cache, which runs at about 1/10th the speed of L1, but it's much cheaper and so there can be more of it. That it's only an order of magnitude slower means the larger L2 cache has time to refill the L1 cache before L1 is completely empty. Then comes L3 cache (usually), which is again about 1/10th the speed of L2, and it keeps L2 full. Then RAM, which has kept pace pretty well: it's roughly 1/10th the speed of L3 and keeps L3 full.

          And here is where things break. RAM speeds are measured in nanoseconds. Spinning-disk hard drive speeds are still measured in milliseconds, and not even 1 or 2 milliseconds, more like 5-10 milliseconds. That's several orders of magnitude slower, which breaks the chain of caches we had going and is not enough to keep RAM full at all times. What we need is a cache much closer to RAM speed to sit between RAM and the hard disk.

          SLC NAND flash, with its sub-millisecond read and write times, fits the bill perfectly. It's basically a scaled-up version of the caching hard drives already use, and because of its size it should be much, much more effective.
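
          To put rough numbers on that ladder, here's a quick sketch (the latencies are illustrative order-of-magnitude figures, not vendor specs; note that flash lands well more than 10x below RAM, but it still covers most of the gap to the disk):

            # Approximate access latency per tier, in nanoseconds
            # (ballpark figures for illustration only).
            latency_ns = {
                "L1 cache":  1,
                "L2 cache":  10,
                "L3 cache":  40,
                "RAM":       100,
                "SLC flash": 100_000,    # ~0.1 ms
                "HDD":       8_000_000,  # ~8 ms average seek
            }

            prev = None
            for tier, ns in latency_ns.items():
                note = f"  ({ns / prev:.0f}x slower than the tier above)" if prev else ""
                print(f"{tier:9s} ~{ns:>12,} ns{note}")
                prev = ns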

      • by inKubus ( 199753 )

        Multi-tier storage is all the rage in enterprise storage now. With a SAN (storage area network), it's fairly easy to have different types of storage pools available to all of your servers. Then it's just software performing a maintenance process to move data from front-line to near-line to back-line (see the sketch after the list below).

        Typically this would probably go something like this, for a midsize organization with a few hundred TB of live data:

        8-16 SSDs in a RAID10 (~500-1000GB)
        32-128TB 15K SAS Drives RAID10
        32-128TB 7.2K nearline SAS
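
        And the sketch: a toy version of that migration sweep in Python (the pool paths and idle thresholds are invented for illustration; a real SAN migrates at the block or LUN level, not with shutil):

          import os, shutil, time

          # (pool path, days idle before demotion to the next pool down)
          POOLS = [
              ("/pools/ssd", 7),
              ("/pools/sas15k", 30),
              ("/pools/nearline", None),  # last stop, never demoted
          ]

          def sweep():
              now = time.time()
              for (src, max_idle), (dst, _) in zip(POOLS, POOLS[1:]):
                  for name in os.listdir(src):
                      path = os.path.join(src, name)
                      idle_days = (now - os.stat(path).st_atime) / 86400
                      if idle_days > max_idle:
                          shutil.move(path, os.path.join(dst, name))

          if __name__ == "__main__":
              sweep()  # run periodically, e.g. from cron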

    • by inKubus ( 199753 )

      it wont be long before you can put a 10TB SSD in your laptop and never have to worry again (well, except for loosing it)

      Why are you worried about loosing it? Is there something on there the world should be afraid of? Should we all live in fear of the data to be unleashed? Are we all going to be affected when you let it loose?

      • I imagine an SSD drive much like Iron Man's suitcase, except when it unfolds it turns into a giant, Tokyo destroying monster!

    • and...?

      I don't see how the existence of large SSDs invalidates the hybrid drive. What can invalidate the hybrid drive is if it does not deliver on its performance claims. Previous incarnations didn't, but maybe Seagate has something. If they can deliver a good percentage speed increase that beats the increase in cost, then they have something. Most files don't need SSD-like speeds; if the commonly used boot and application files are on the SSD portion, then that would provide most of what I want on SSD witho

  • I read through the article to see the prices:
    "The initial three drives in the Seagate Momentus XT line-up will retail for $156 (500GB), $132 (320GB), and 250GB ($113). Those prices equate to roughly $0.31 to $0.45 per gigabyte, which puts these drive within striking distance of a standard HD in terms of price and much less expensive than any SSD."

    This looks interesting enough for me to look into compatibility with my MacBook Pro. Anyone with a MacBook Pro had sizing problems when replacing the hard drive?
    • No, they have a flexible rubber insert to hold the screw heads & the drive. So there's definitely room for variation.

      However, I can't imagine that they would be built outside of typical spec for laptop hard drives.

    • Laptop-sized (2.5") drives are pretty much all the same in terms of size and mounting points; both of my Intel SSDs actually have a spacer on top so that they "fit" the right size. I put one in my new desktop in March, and it was a huge difference. When I ordered my Macbook Pro earlier this month, I also ordered a 160GB SSD for it. My desktop has an 80GB one for its main OS drive. I am amazed at the difference it makes; Win7 boots in about 15 seconds to the desktop (from BIOS hand-off), and the Macbook
  • When I read specifications on a drive, one of the key things I look for is IOPS and read/write speeds. But Seagate seems to have omitted that. I wonder why?
  • Software / OS hacks (Score:5, Interesting)

    by rwa2 ( 4391 ) * on Monday May 24, 2010 @10:11AM (#32323700) Homepage Journal

    Seems like you could do better if you could simply reorder the files on your traditional hard disk so that you'd get 100% readahead buffer hits. If properly optimized this way, your traditional hard disk should always be transferring near the max block read rate of ~100MB/s.

    I'm guessing this is what some of the boot profilers / optimizers are doing.

    The readahead utility used by Red Hat / Fedora (and also available for Debian) gives you some benefit when loading lots of small files from disk by reordering reads by inode number to minimize head seeks. The next major benefit would come if it could actually reorganize all those files into a single tarfile, and maybe even compress it a bit, so it could do a single large block read to get all that content off the disk and into RAM cache.
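
    For the curious, a minimal sketch of the inode-ordering trick (the choice of /etc is arbitrary; the assumption, which holds on most Unix filesystems, is that inode number roughly tracks on-disk placement):

      import os

      def warm_cache_in_inode_order(paths):
          # Reading in inode order lets the disk head sweep mostly one
          # way instead of seeking back and forth in directory order.
          for path in sorted(paths, key=lambda p: os.stat(p).st_ino):
              with open(path, "rb") as f:
                  f.read()  # pull the file into the page cache

      files = [
          os.path.join("/etc", name) for name in os.listdir("/etc")
          if os.path.isfile(os.path.join("/etc", name))
          and os.access(os.path.join("/etc", name), os.R_OK)
      ]
      warm_cache_in_inode_order(files)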

  • Filesystems have a much better idea of what data is going to be used frequently. This is an optimization they should be making. Seagate can make some good guesses by looking at block-level IO statistics, but that's like trying to optimize bytecode: all the really useful information is gone by the time you get to block-level IO.

    I think hardware vendors should be supporting more interesting experimentation on the filesystem front instead of coming up with proprietary hacks like this that are basically a half solution.

    • by inKubus ( 199753 ) on Monday May 24, 2010 @10:54AM (#32324248) Homepage Journal

      What if this drive could show up as two devices plus a control driver, and the driver allowed a very fast copy between the two (without going over SATA to the mobo)? SCSI has control drivers, used for scanners, tape libraries, etc. It wouldn't be too hard to graft a few controls onto this drive and then have out-of-band front-line to near-line migration that happens on the disk autonomously.

      • Oooh, that would be really interesting, and a very nice feature to have when you're writing a filesystem. :-) I was thinking something similar, but didn't have the idea of the control driver, and that's a very nice addition.

    • I would go as far as to say that Seagate is chasing an impossible dream here. I've done extensive tests with HAMMER on DragonFly with an SSD caching HD content.

      First of all, 4G of flash will not do diddly to improve the performance of a high capacity hard drive in the real world. The minimum you would need is 20G to have any chance of being able to cache filesystem meta-data. The 40G or 80G Intels fit the bill very nicely.

      Secondly, it absolutely matters what data the system decides to throw onto the SSD,

      • Agreed. I'd recommend a 64-80GB SSD for the main OS drive and a larger one for media storage. Now, if I could get even a 40GB SSD and 320+GB of HDD showing up as separate drives on the same device, that would be nice. I'm using about 45GB of my 80GB Intel SSD on my desktop, and my laptop is using a bit more (because my VMs are on the same drive).
    • Except you can't put out something new and be able to sell it to 99% of the market.

      I'm sorry, that's not how business works. You don't just play with something without some plan to get it to market.

      The way to do this is as follows:

      Phase one:
      Create a hybrid drive that is significantly faster than traditional HDDs (though still slower than SSDs) without any of the storage problems of SSDs. Vendors copy you; pretty much all HDDs on the market are eventually hybrid.

      Phase two:
      While this is booming, be talking

      • Re: (Score:3, Insightful)

        by Omnifarious ( 11933 ) *

        Or, how about this instead?

        Phase one, release SSD drives that are clearly faster and make a bunch of money from early adopters who think they can use them.

        Phase two, listen to developers who are trying to make them work better. Implement things like the 'release' command. Offer an idea or two of your own, like the controller side copying.

        Phase three, release the new version of the drive that supports all of that stuff and make even more money.

        Your version relies on back-room deals with proprietary softwar

  • FTA: "To put it simply, the most commonly accessed data on the platters get's copied to the much higher performing, SLC Flash memory, which results in a performance boost." Read more: http://hothardware.com/Articles/Seagate-Momentus-XT-Solid-State-Hybrid-Preview/?page=2#ixzz0orEbgttB [hothardware.com]

    This makes no sense to me -- that would seem to imply the most likely thing to end up on the SSD is my swap partition, which is the last thing I want on SSD. Yeah, reads would be faster, but the wear would be awful. Maybe I'
  • by Johnny Mnemonic ( 176043 ) <mdinsmore@@@gmail...com> on Monday May 24, 2010 @10:49AM (#32324174) Homepage Journal

    This drive still suffers from the historical bugaboo of spinning platters: it is damaged by shock. Also, it has the power draw (and heat output) of other spinning media.

    Those are the two biggest reasons for SSDs, especially in notebooks. Performance improvements are a factor, but I think they're the least interesting. In this respect, Seagate still needs to bring an answer, and they need to do it fast to justify their run-up in stock price.

    • Re: (Score:3, Insightful)

      by MobyDisk ( 75490 ) *

      The platform is the benefit, though. Right now, at 4GB/250GB, it's 1.6% flash. Once this proof-of-concept works, I bet they could make one with closer to 20% flash. At that point it might spin up the platter drive rarely enough that the power-draw issue goes away. If the drive is usually parked, the shock resistance improves a bit too. That might be a good-enough solution to stick around for 5-10 years before the next thing comes along.

  • by lazn ( 202878 ) on Monday May 24, 2010 @11:01AM (#32324348)

    I have one of the old Samsung hybrid drives from 2007, an 80GB with 256MB of flash.
    http://en.wikipedia.org/wiki/Hybrid_drive [wikipedia.org]
    http://www.samsung.com/us/consumer/office/hard-disk-drives/hybrid-hdd-flashon/HM08HHI/index.idx?pagetype=prd_detail [samsung.com]

  • and run them in RAID 10; that beats a budget SSD any time, with large storage for applications. No more worrying about "pre-configuration" in Windows. Just install and run.

  • It's a nice idea, but it'll hinder the progress of SSDs if it catches on. I'd rather SSDs replace HDs completely.
