Data Storage Hardware

Seagate Firmware Performance Differences 177

Derkjan de Haan writes "The Seagate 7200.10 disk was the first generally available desktop drive featuring perpendicular recording for increased data density. This made higher-capacity disks with excellent performance cheaper to produce. Their sequential throughput actually exceeded that of the performance king — the Western Digital Raptor, which runs at 10,000 RPM vs. the more common 7,200 RPM. But reports began to surface on the Net claiming that some 7200.10 disks had much lower performance than other, seemingly identical disks. Attention soon focused on the firmware, designated AAK, in the lower-performing disks. Units with other firmware, AAE or AAC, performed as expected. Careful benchmarks showed very mixed results. The claims found on the Net, however, have been confirmed: the AAK disk does have a much lower throughput rate than the AAE disk. While firmware can tune various aspects of performance, it is highly unusual for it to affect sequential throughput. This number is pretty much a 'fact' of the disk, and should not be affected by different firmware."
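The throughput claims in the summary are easy to sanity-check yourself. A minimal sketch for Linux, assuming GNU dd: reading the raw device (with `if=` pointed at something like /dev/sdX, run as root on an idle, unmounted disk, with `iflag=direct`) gives the drive's real sequential rate. The version below runs the same invocation against a scratch file so it is safe to try anywhere; /dev/sdX and the scratch filename are placeholders.

```shell
# Create a 64 MiB scratch file, then time a sequential read of it.
# For a real drive test, point 'if=' at the raw device (/dev/sdX is a
# placeholder) and add iflag=direct to bypass the page cache.
scratch=./seqtest.img
dd if=/dev/zero of="$scratch" bs=1M count=64 conv=fsync 2>/dev/null

# dd reports bytes copied, elapsed time, and throughput on stderr.
dd if="$scratch" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$scratch"
```

Note that reading a scratch file right after writing it mostly measures the page cache, which is exactly why `iflag=direct` matters when benchmarking real hardware.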
This discussion has been archived. No new comments can be posted.

  • bug (Score:5, Insightful)

    by Anonymous Coward on Tuesday August 28, 2007 @05:02PM (#20390051)
    When the performance of a lower-end drive is better than that of a higher-end one (or, god forbid, a SCSI drive!), this is a serious bug that of course needs to be fixed in the firmware update.
  • Reliability (Score:5, Insightful)

    by PlusFiveInsightful ( 1148175 ) on Tuesday August 28, 2007 @05:03PM (#20390067) Homepage
    I'll take reliability over performance in a hard drive any day. Nothing sucks more than swapping out drives.
  • by Bellum Aeternus ( 891584 ) on Tuesday August 28, 2007 @05:12PM (#20390191)
    So the whole article comes down to the fact that the new Seagates are really good if you use them for what they're designed for, but not as good at what they're not designed for. Surprise...

    Looks like Seagate designed the new drives for servers (probably file servers) because they're really good at moving large chunks of data around, doing large reads and large writes, but not so good at a ton of little reads and writes. So, don't buy them for your desktop/workstation.

  • by Anonymous Coward on Tuesday August 28, 2007 @05:14PM (#20390213)
    The problem simply is that when you buy a Seagate 7200.10, you don't know which drive you'll end up getting: server or workstation.
  • Re:Reliability (Score:3, Insightful)

    by RingDev ( 879105 ) on Tuesday August 28, 2007 @05:17PM (#20390255) Homepage Journal

    Nothing sucks more than swapping out drives.
    Spoken like a man who's never been kicked in the nuts...

    I'd rather hot-swap a failed RAID drive than bring down a server to increase memory, or redesign a solution from scratch to achieve the same performance gains. Heck, for the cost of having a coder just look at the I/O-intensive code, I could have bought another hard drive.

    -Rick
  • by zdzichu ( 100333 ) on Tuesday August 28, 2007 @05:18PM (#20390269) Homepage Journal
    From TFA page 6 [fluffles.net]:

    A sad detail is that updating an AAK disk to other firmware is impossible, due to physical differences of the two disks.
    (emph. mine)
    Different disks have different performance. News at 11.
  • Re:Reliability (Score:3, Insightful)

    by tepples ( 727027 ) <tepples.gmail@com> on Tuesday August 28, 2007 @05:31PM (#20390437) Homepage Journal

    Heck, for the cost of having a coder just look at the I/O intensive code I could have bought another hard drive.
    In which country? In some countries, high import duties and a weak local currency mean that the price of a hard drive is worth a lot more hours of labor than it would be in, for example, the United States or the United Kingdom. And across how many machines does your app run?
  • by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Tuesday August 28, 2007 @05:31PM (#20390439) Journal
    Two drives sold under identical make and model identifiers should not be that different.
  • Re:RAID1 (Score:3, Insightful)

    by Gandalf_the_Beardy ( 894476 ) on Tuesday August 28, 2007 @05:51PM (#20390711)
    It works for me - we have at least a thousand disks in our datacentre in RAID 5 arrays with 10+ disks per array, all the same make, model, and build date, and we haven't yet had any fail so close together that we couldn't leisurely swap the duff one out and rebuild onto the replacement. Quite why people suddenly think that drives are going to fail catastrophically at the same time is beyond me, when real-world experience says it just isn't so.
  • Re:Reliability (Score:5, Insightful)

    by rcw-work ( 30090 ) on Tuesday August 28, 2007 @07:40PM (#20391883)

    Compared to just replacing the hard drive for $150. Hardware is cheap. Labor is not.

    Your example makes sense, but what if you've already done that? Say your app is SQL-based and does some queries that are O(n^2). You've already spent $20k on a bad-ass server with RAID 10, a bunch of spindles, separate transaction-log drives, and as much RAM as will fit. Now, a year later, there are more records in the system and performance sucks again. Where do you go from there? These disks don't go to 11. If you want to double the performance of that $20k box, you're likely going to spend not $40k but $200k.

    Once you outgrow commodity parts, if you want a 2x speedup, you'll usually have to pay 10x for it. Or wait three years. The price/performance curve is deceptively shallow towards the bottom end.
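To put rough numbers on what an O(n^2) query costs as data grows (the row counts here are made-up assumptions, and real query planners add constants this ignores): a nested-loop join over two 100,000-row tables compares every pair of rows, while an index turns each inner lookup into roughly log2(m) work.

```shell
# Compare the operation counts of a nested-loop join (n*m) against an
# indexed join (n*log2(m)) for two 100,000-row tables. Illustrative
# arithmetic only.
awk 'BEGIN {
  n = 100000; m = 100000
  printf "nested loop: %.0f comparisons\n", n * m
  printf "indexed:     %.0f lookups\n",     n * log(m) / log(2)
}'
```

Ten billion comparisons versus under two million lookups: that gap is why the fix is usually an index or a rewritten query, not more spindles.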

  • Re:RAID1 (Score:1, Insightful)

    by Anonymous Coward on Tuesday August 28, 2007 @07:42PM (#20391909)
    How anyone gets reliable RAID out of the crappy onboard controllers is beyond me. Call me a loser, but I've seen far more RAID issues than HD failure issues when dealing with anything but well-known (expensive and dedicated) HD controllers. Do all SATA chipsets use a compatible on-disk format, or would you have to find the exact same one if your MB or something fails? And what about power outages before a commit? If you're using Windows and your MB fails, are you going to be able to get it running on a different MB using that array?
    Wouldn't it make more sense, and be much easier, to run an occasional rsync or tar.gz (or equivalent) to a second drive for a backup? Taking a drive out of an array and storing it in a bank vault sounds cool, but does that really make sense? I mean, RAID is great for uptime and availability, but it should never be confused with a backup. What about a mouse slip, or a virus, or an OS glitch, or an accidental overwrite of a file? Are you going to be first in line at the bank on the next business day to get that month-old disk?

    I've got a geek card too but using RAID on an end user PC at home does not seem to make sense to me at all.
  • by straponego ( 521991 ) on Tuesday August 28, 2007 @08:30PM (#20392465)
    First, a comment on the Seagate 750G drives: If you run these, and you want to keep them running, make sure you have clean power. I've seen several of them die, usually after a power outage. Never seen one on a UPS die.

    Also, if you're concerned about Linux block device performance, look at the various kernel tunables. On a single drive, such as those Seagates, I can get an extra ~10 MB/s. On RAIDs and LVM volumes, the differences can be much higher: more than twice as fast, in some cases. There are a few parameters that make a difference, and many values you might want to try for each. I have a script that iterates through the various permutations, running IOzone on each, so I can see what does best for read vs. write and large vs. small file performance. But I can't release it just yet (employer makes 100% of its income from Open Source; employer hates Open Source). Anyway, somebody out there can do better than I, I'm sure :)

    This discusses the tunables you'd want to check: http://www.3ware.com/KB/article.aspx?id=11050 [3ware.com]

    Note that these do NOT apply only to 3Ware controllers. And the differences in performance can be massive.

  • Re:RAID1 (Score:5, Insightful)

    by GooberToo ( 74388 ) on Tuesday August 28, 2007 @09:34PM (#20393045)
    as pretty much all mobos these days have RAID1 capability built into the chipset's SATA controller anyway.

    And many of those are actually slower than a pure, software-only RAID solution. Sometimes the "hardware RAID" does nothing but offload checksum calculations or other bits onto slower hardware, resulting in it being a major performance hindrance rather than a performance boost. Worse yet, if your controller card dies, ALL of your data is now inaccessible. Worse yet again, there is no guarantee that future hardware releases, even from the same manufacturer, will be compatible. Heck, some of the really low-end hardware solutions don't even provide mirrored reads, which should provide a 2x performance boost on reads.

    Not all RAID is created equal. And for many, software RAID, especially for Linux users, provides a solution faster than many hardware RAID solutions, is future-proof, and only costs a couple of percent in additional CPU load. Best of all, it's free and works well with LVM. In a day and age when multiple cores are common and few actually use more than one, this option doesn't have much of a downside until you're willing to look at *REAL* RAID hardware.
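For reference, a minimal Linux software-RAID setup of the kind described looks roughly like this. A sketch only: /dev/sdb and /dev/sdc are placeholder devices, the commands need root and the mdadm package, and running them destroys whatever is on those disks.

```shell
# Build a two-disk RAID1 mirror from placeholder devices (destructive!).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem (or an LVM physical volume) directly on the array.
mkfs.ext4 /dev/md0

# Persist the array definition so it assembles at boot
# (the file lives at /etc/mdadm/mdadm.conf on Debian-family systems).
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial resync; the array is usable in the meantime.
cat /proc/mdstat
```

Because md stores its metadata on the member disks themselves, the array can be reassembled on any Linux box with mdadm, which is the "future proof" point the comment makes against proprietary controllers.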

  • Re:RAID1 (Score:2, Insightful)

    by evilbessie ( 873633 ) on Wednesday August 29, 2007 @05:40AM (#20395875)
    Use RAID 6: you can lose any two disks and still have all the data, which means the data stays secure while the array is rebuilding from a single failed drive. Granted, you could lose three disks at once, but that is much less likely than losing one or two, especially if the failure is caught and the rebuild completes quickly.
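Back-of-envelope numbers behind "much less likely" (both inputs are made-up assumptions, not measured rates): with one disk already failed in a 10-disk array, suppose each of the 9 survivors independently has a 1% chance of dying before the rebuild finishes. A degraded RAID 5 is lost if any survivor fails; a degraded RAID 6 still needs two more failures.

```shell
# P(array loss during rebuild) for degraded RAID 5 vs. degraded RAID 6,
# assuming n = 9 surviving disks and p = 0.01 per-disk failure
# probability over the rebuild window (both numbers are assumptions).
awk 'BEGIN {
  n = 9; p = 0.01
  p1 = 1 - (1 - p)^n                   # at least one more failure: RAID 5 loss
  p2 = p1 - n * p * (1 - p)^(n - 1)    # at least two more failures: RAID 6 loss
  printf "P(RAID5 loss) = %.4f\n", p1
  printf "P(RAID6 loss) = %.6f\n", p2
}'
```

Under these toy assumptions the second parity disk buys roughly a 25x reduction in loss probability, though correlated failures (same batch, same power event) erode that in practice.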
