
Hybrid Seagate Hard Drive Has Performance Issues

EconolineCrush writes "The launch of Seagate's Momentus XT hard drive was discussed here last week, and for good reason. While not the first hybrid hard drive on the market, the XT is the only one that sheds the Windows ReadyDrive scheme for an OS-independent approach Seagate calls Adaptive Memory. While early coverage of the XT was largely positive, more detailed analysis reveals a number of performance issues, including poor sequential read throughput and an apparent problem with command queuing. In a number of tests, the XT is actually slower than Seagate's year-old Momentus 7200.4, a drive that costs $40 less."
This discussion has been archived. No new comments can be posted.

  • expected behaviour (Score:4, Interesting)

    by MonoSynth ( 323007 ) on Wednesday June 02, 2010 @03:40AM (#32428750) Homepage

    poor sequential read throughput

    That's the expected behaviour of this disk. Extremely fast for common tasks (booting and loading apps) and slower for less common and less performance-critical tasks. If you really need the SSD-like performance for all your tasks, buy a 500GB+ SSD, if you have the money for it.

    In a number of tests, the XT is actually slower than Seagate's year-old Momentus 7200.4, a drive that costs $40 less.

    That's because it's probably a $40 cheaper disk with an $80 SSD attached to it.

    • Re: (Score:3, Insightful)

      by twisteddk ( 201366 )

      While I don't share your views on the technology, I do agree that this is expected behavior from a hybrid drive. I have yet to see a hybrid drive that actually performs significantly better than a normal drive, and that just isn't happening yet.

      I'm uncertain whether this is because of poor design, bad queueing, or other issues. But the very BEST hybrid I've seen performs only a couple of percent better than a normal drive, and then not even across the board, but only in specific tests.

      Hybrid drives have a l…

      • Re: (Score:3, Interesting)

        by MonoSynth ( 323007 )

        Hybrid drives aren't made to be first choice. They're made to be an affordable choice. If you want to assemble an affordable but fast PC nowadays, you'll probably end up with a 40GB SSD for OS+Apps with a cheap, silent and big hard disk for storage. The problem with this approach is the barrier at 40GB. What if your SSD needs more space? What if it turns out that some frequently-used data is on the hard disk? Or that 60% of the OS files are hardly used? Hybrid drives try to decide for themselves which data should be optimized.

        • by hey ( 83763 )

          Maybe both... the 40GB SSD *and* a hybrid drive is the answer.

        • by jgrahn ( 181062 )

          If you want to assemble an affordable but fast PC nowadays, you'll probably end up with a 40GB SSD for OS+Apps with a cheap, silent and big hard disk for storage. The problem with this approach is the barrier at 40GB. What if your SSD needs more space? What if it turns out that some frequently-used data is on the hard disk? Or that 60% of the OS files are hardly used? Hybrid drives try to decide for themselves which data should be optimized.

          I fail to see why the OS and "apps" should be the things that go on…

      • The XT was pretty much made for laptops. It's the only place where getting a true hybrid (as opposed to HDD + SSD) really makes sense.

        For the XT, the SSD works as a read cache, and read cache only. You're only going to be seeing performance increases on whatever it has cached (4GB). So if you have a few frequently used programs with long startup times, you'll see more than 'a couple of percent' better. And that's about it.

        Hybrid drives will always be the compromise between HDD and SSD. You will never…
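
        A back-of-envelope model makes the parent's point concrete: average access time is just a hit-ratio blend of flash and platter latency, so the cache only helps on data it actually holds. A minimal C sketch, with illustrative (not measured) latencies:

        #include <stdio.h>

        /* Assumed figures for illustration only: ~0.1 ms for a flash hit,
         * ~12 ms for a platter seek. Not measured from the Momentus XT. */
        static double avg_ms(double hit_ratio)
        {
            const double t_flash = 0.1, t_disk = 12.0;
            return hit_ratio * t_flash + (1.0 - hit_ratio) * t_disk;
        }

        int main(void)
        {
            printf("app launch, mostly cached:  %.2f ms\n", avg_ms(0.9)); /* ~1.3 ms */
            printf("big media file, not cached: %.2f ms\n", avg_ms(0.0)); /* 12.0 ms */
            return 0;
        }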
  • by toygeek ( 473120 ) on Wednesday June 02, 2010 @03:44AM (#32428766) Journal

    Does anyone not remember the growing pains of previous technologies? It's not like this has never happened before. $Vendor releases $Product that does not meet $Expectations, charges a premium for it, and then fixes it later. Intel put out a whole slew of processors that couldn't even do proper math!

    So, if you're going to live life on the edge of the newest technology, this kind of thing is to be expected. Anybody with higher expectations should stick to last year's technology and get the best of *that* instead of the newest $uberware to come out.

    • Re: (Score:3, Insightful)

      Anybody with higher expectations should stick to last year's technology and get the best of *that* instead of the newest $uberware to come out.

      I take that to an extreme; the PC I'm using now is about 5 years old and has no real scalability at this point, but it still works great for my purposes (especially since I got rid of the Windows user who was using it and swapped it to Ubuntu). Yeah, it'd be nice to have something nice and shiny and new, but it's not worth the goddam headache.

      • Wow. That really is EXTREME!
        • More like she needed a better computer that could handle her haphazard exploits on the internet more than I needed a new shiny toy. I can wait a year; if I'd forced her to wait a year, there would be smouldering ashes on the floor by now, accompanied by a keyboard missing several keys. I can only maintain a semi-stable XP machine for so long; at this point I consider a high-risk 2.5 years an accomplishment.
      • Re: (Score:3, Insightful)

        by toygeek ( 473120 )

        I understand that completely. But when I built my new PC this year, I bought the best of 1-2 year old technology. I'm not a gamer, so I didn't need the best of the best, just something fast. It's stable, has no issues, and just works the way it is supposed to.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Absolutely. For a few years, as somewhat-affordable consumer SSDs were entering the market, many of them were total shit, and even the good ones kept needing firmware upgrades.

    • I think I will wait until the ATA-8 spec is released with a standardized version of this.

  • by Anonymous Coward on Wednesday June 02, 2010 @03:46AM (#32428780)

    The caching and everything is all happening at a level below the OS and the file system, but these tests seem to have all been run in Windows 7 Ultimate x64, whatever that is.

    Would another file system (ext4, for example) on Linux/*BSD, or HFS+ on Mac OS, yield different results, I wonder, with and without swap? Can there be clashing optimization techniques here?

    • Can there be clashing optimization techniques here?

      No, because Vista's flash-based disk cache (ReadyDrive) depends on having access to the flash volume, and this drive doesn't provide that, so it is not being used.

    • by deroby ( 568773 )

      The effects should be (more or less) identical across OSes, as they mostly use HDDs in the same way. Sure, there will be differences in the way each filesystem puts things to disk, but in the end it's always a bunch of data blocks laid down in a structured way so they can be retrieved easily afterwards. (Exceptions exist, e.g. log-structured file systems, but in 99.999% of cases the general idea is the same.)

      What I do wonder is: how does this thing feel 'in reality'? Most of these tests…

  • With hard drive access times in the very low milliseconds, it has me baffled why a fully associative cache can't be implemented with write-back.

    This strikes me as pretty much the ideal solution. Surely the hardware is fast enough these days to support such a system?

    Yes, I know the cache hit search becomes the bottleneck, but we're talking hundreds of microseconds here! Use volatile memory for the LRU indexes / search and it would be damn quick for hits. Ensure that the sector tag is still kept for each line…
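
    A minimal sketch of what that could look like: sector tags and LRU state in volatile RAM, cached sectors in non-volatile flash, with lazy flushing to the platter. All names and sizes here are illustrative, and a linear scan stands in for whatever hash/LRU structure real firmware would use:

    #include <stdint.h>

    #define N_LINES (1u << 20)   /* e.g. 4GB of flash in 4KB sectors */

    struct line {
        uint64_t lba;            /* sector tag */
        uint32_t last_use;       /* recency counter for LRU eviction */
        uint8_t  valid, dirty;
    };

    static struct line idx[N_LINES]; /* RAM-resident search structure */
    static uint32_t now;

    /* Stubs standing in for the flash and platter back ends. */
    static void flash_write(unsigned slot, const void *data) { (void)slot; (void)data; }
    static void flush_to_platter(uint64_t lba, unsigned slot) { (void)lba; (void)slot; }

    /* Return the slot already holding lba, or evict the LRU slot. */
    static unsigned find_or_evict(uint64_t lba)
    {
        unsigned victim = 0;
        for (unsigned i = 0; i < N_LINES; i++) {
            if (idx[i].valid && idx[i].lba == lba)
                return i;                               /* hit */
            if (!idx[i].valid || idx[i].last_use < idx[victim].last_use)
                victim = i;                             /* best eviction candidate */
        }
        if (idx[victim].valid && idx[victim].dirty)
            flush_to_platter(idx[victim].lba, victim);  /* lazy write-back */
        return victim;
    }

    /* Writes land in flash, not on the platter; dirty data survives
     * power loss because the flash is non-volatile. */
    void cache_write(uint64_t lba, const void *data)
    {
        unsigned slot = find_or_evict(lba);
        flash_write(slot, data);
        idx[slot] = (struct line){ .lba = lba, .last_use = ++now,
                                   .valid = 1, .dirty = 1 };
    }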

    • Three reasons:
      - RAM is expensive
      - The OS can do it better than the disk (except at boot time)
      - Doing it right is not trivial (complicated firmware is a bad thing)

      If you want a disk cache with write-back, buy more memory for your system; that's what the OS does with it.

      • by scharman ( 308566 ) on Wednesday June 02, 2010 @06:53AM (#32429434)

        (a) Volatile memory is cheap in the amount needed just for the cache search (all it has to store is maybe 16 bytes per sector, which is tiny). The RAM index is a trivial part of the cost compared to the flash memory, which is where your sectors are actually stored.
        (b) Re-read what I listed above: I'm not suggesting you remove the OS tier of disk caching.
        (c) A fully associative algorithm is trivial in complexity in contrast to their 'adaptive' algorithms. A CS101 undergrad could write a reasonable implementation in an hour. This is trivial stuff.

        The OS is awful at write-back because if the power fails you've lost state. The benefit of a hybrid drive is that the flash is non-volatile. Writing to flash is cheap. Writing to the disk is expensive. You get the best of both worlds with a flash-based write-back cache.

        The benefit of flash is that it's cheaper than RAM, so you can have more of it, whilst still being far faster than a mechanical disk. A 32 or 64GB flash hybrid drive provides enough cache that most user operations only rarely need to fall back to the disk, without forcing a split between a 'system' drive and a 'data' drive. As far as the system is concerned, it's just presented as one very fast 2TB drive (or whatever).

        The only time the system will slow down is when you begin to outstrip the cache, which is perfectly reasonable, as it means you've exhausted the flash capacity. For 99.999% of usage situations this will never occur, and it will feel just like a very, very quick 2TB flash drive.
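
        For concreteness, one hypothetical layout for those 16 bytes per sector is sketched below; the sector data itself lives in flash, so only this small record needs RAM. With 4GB of flash in 4KB sectors that is about a million entries, roughly 16MB of RAM for the whole search index:

        #include <stdint.h>
        #include <stdio.h>

        /* Guessed layout, for illustration; the actual data sits in flash. */
        struct cache_entry {
            uint64_t lba;        /* 8 bytes: tag of the cached 4096-byte sector */
            uint32_t flash_slot; /* 4 bytes: where the data sits in flash       */
            uint32_t lru_stamp;  /* 4 bytes: recency counter for eviction       */
        };                       /* 16 bytes total, no padding needed           */

        int main(void)
        {
            unsigned long long entries = (4ULL << 30) / 4096; /* 4GB / 4KB */
            printf("%zu bytes/entry, %llu entries, %llu MB of RAM\n",
                   sizeof(struct cache_entry), entries,
                   entries * sizeof(struct cache_entry) >> 20);
            return 0; /* prints: 16 bytes/entry, 1048576 entries, 16 MB of RAM */
        }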

        • by Glonoinha ( 587375 ) on Wednesday June 02, 2010 @07:28AM (#32429550) Journal

          I'm curious - what sort of algorithm would you use that can effectively store the data needed for a cache search while representing 4096-byte sectors in 16 bytes each?

          As for 'the only time the system will slow down is when you outstrip the cache' - that's exactly the scenario the OP is describing: massive serial reads on files that are larger than the cache. IIRC the cache was sized on the order of a few megabytes, and every multimedia file I read all day / all night (music files, video files, game VOBs, etc.) is at least that large; most are much larger.

          PS - A CS101 undergrad could write a reasonable implementation in an hour.
          Now that's funny. Most first semester CS 101 undergrad students I've met couldn't pour rocks out of a box if the instructions were printed on the underside of the box.

          • by TheLink ( 130905 )
            > Most first semester CS 101 undergrad students I've met couldn't pour rocks out of a box if the instructions were printed on the underside of the box.

            Because rocks are heavy, and most CS101 undergrad students aren't very strong? :)
        • The OS is awful at write-back because if the power fails you've lost state. The benefit of a hybrid drive is that the flash is non-volatile. Writing to flash is cheap. Writing to the disk is expensive. You get the best of both worlds with a flash-based write-back cache.

          Unfortunately, flash is quite slow at writing, especially with only a single channel to write to. While a drive like this would probably still beat a pure HD on write latency (and hence perceived performance), synthetic sequential write benchmarks would take such a pounding that the drive simply wouldn't sell.

          As an example, consider the cheap Intel 40GB SSD. It has only 5 channels (half the channels and flash of the 80GB drive) and can only write 35MB/s sequentially. That works out at about 7MB/s per channel! Try se…

        • Wait, what? Oh, I see - are you proposing to add a fully associative cache in front of the 4GB of flash memory to speed up cache lookups, and thus lazily store writes as well?

          I thought you were caching the stored data in a cache. I must admit I kinda glossed over the "fully associative with write-back" bit :-)

          I suppose that can work - SLC is great for caching writes on. However, it's a lot more work than simply copying hot reads onto the flash and caching them there. What you're proposing means a lot of new wo…

        • by soppsa ( 1797376 )

          A CS101 undergrad could write a reasonable implementation in an hour. This is trivial stuff.

          Methinks you are not a computer scientist, or do not remember the average CS101 student...

      • by AHuxley ( 892839 )
        It also seems good firmware is expensive.
        SandForce and Intel seem to have the skills.
        Where did Seagate buy theirs from? ;)
      • by deroby ( 568773 )

        The main problem with OS caches is that they need to be read into memory before they can be used.
        => It's all great if the OS keeps a copy of something.dll in memory because it has learned over time that this file is needed very often, but it still has to read it at least once first.

        Although the HDD will not realise what it is caching, it can have the relevant blocks already sitting in its cache long before the OS asks for them.

        So yes, I agree, adding RAM on the HDD makes it more expensive, adding the cache-lo…

        • This doesn't work well in practice. About the only thing the HDD can actually cache is unrequested data that passes under the head while it is going after requested data. For example, it can cache data ahead of linear read requests, even if several programs are doing linear reads from different parts of the disk. This is what the HD's zone cache does. Usually around 16 'zones' can be tracked in this manner.

          Unfortunately this is data which is already readily accessible, so once the HD caches enough to bu…
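
          A rough sketch of that stream tracking: the drive watches for reads that extend a known sequential stream and reads ahead into that stream's buffer. The 16-zone figure is from the parent comment; the names and the recycling rule are guesses:

          #include <stdint.h>

          #define N_ZONES 16  /* per the parent: ~16 trackable streams */

          struct zone {
              uint64_t next_lba;  /* where this stream is expected to read next */
              uint32_t last_use;
              int      active;
          };

          static struct zone zones[N_ZONES];
          static uint32_t now;

          /* Called on every read; returns 1 if it extends a known stream,
           * in which case firmware would read ahead past next_lba. */
          int track_read(uint64_t lba, uint32_t nblocks)
          {
              unsigned victim = 0;
              for (unsigned i = 0; i < N_ZONES; i++) {
                  if (zones[i].active && zones[i].next_lba == lba) {
                      zones[i].next_lba = lba + nblocks;  /* stream continues */
                      zones[i].last_use = ++now;
                      return 1;
                  }
                  if (!zones[i].active || zones[i].last_use < zones[victim].last_use)
                      victim = i;
              }
              /* Random or new access: recycle the stalest zone for this stream. */
              zones[victim] = (struct zone){ lba + nblocks, ++now, 1 };
              return 0;
          }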

          • by deroby ( 568773 )

            You assume here that the information in the disk cache needs to be read from the disk. While this is true for "current average hardware" using disk read-ahead/read-behind caching as you describe, it is not true for the non-volatile cache used in the disks being tested here.

            [... the HD has no way to determine what data should be preferentially cached whereas the OS does ...]

            That's where the 'learning' comes into play, which is what makes this kind of drive special. While I agree that the drive does not know…

      • Re: (Score:3, Informative)

        by hey ( 83763 )

        I'd love to buy more memory, but I already have 4GB (the limit for many machines).

        • by bored ( 40072 )

          Buy more memory and put your swap on it. There are a number of ramdisks that can use the memory above 4GB on 32-bit M$ OSes. With a little Google searching you can find a few that work well and are free. Sure, it won't act as disk cache, but it works fantastically for running applications that are willing to consume multiple GB of disk (like a whole bunch of VMware sessions).

          Or call M$ and bitch about them restricting desktop users to 3GB of memory while the 32-bit server versions can access 64GB.

  • In some particular benchmarks it doesn't have as high sequential read speeds as you might expect, and yet these "mp3" and "video" read benchmarks probably don't require the maximum bandwidth the drive can deliver. It might be working EXACTLY as expected if it's streaming MP3s from the flash media, which may have a "slower, but fast enough for media streaming" sequential speed, and it's doing it so that the platter mechanism is free for anything else that might come up.

    I don't rate the performance of this drive…

  • ... what do you expect? It's a Hybrid...
  • Why do none of the symbols in the keys match the chart body? For example, Scorpio has a black triangle in the key, but the line on the chart has black diamonds. Is the chart software flaky or are the results being rigged?
