
Seagate Firmware Performance Differences

Derkjan de Haan writes "The Seagate 7200.10 disk was the first generally available desktop drive featuring perpendicular recording for increased data density. This made higher-capacity disks with excellent performance cheaper to produce. Their sequential throughput actually exceeded that of the performance king — the Western Digital Raptor, which runs at 10,000 RPM vs. the more common 7,200 RPM. But reports began to surface on the Net claiming that some 7200.10 disks had much lower performance than other, seemingly identical disks. Attention soon focused on the firmware, designated AAK, in the lower-performing disks. Units with other firmware, AAE or AAC, performed as expected. Careful benchmarks showed very mixed results. The claims found on the Net, however, have been confirmed: the AAK disk does have a much lower throughput rate than the AAE disk. While firmware can tune various aspects of performance it is highly unusual for it to affect sequential throughput. This number is pretty much a 'fact' of the disk, and should not be affected by different firmware."
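
Sequential throughput is easy to sanity-check by timing a large raw read from the start of the disk. The sketch below is a minimal example under assumed conditions (Linux, a hypothetical device path /dev/sdb, root access, a cold page cache); it is not the benchmark used in the linked article.

    #!/usr/bin/env python3
    # Rough sequential-read throughput check (a sketch, not the article's benchmark).
    # Assumptions: Linux, a hypothetical raw device /dev/sdb, root access.
    import time

    DEVICE = "/dev/sdb"          # hypothetical drive under test
    CHUNK = 1024 * 1024          # read in 1 MiB chunks
    TOTAL = 1024 * 1024 * 1024   # read 1 GiB from the start of the disk

    def sequential_read_mb_per_s(path, chunk=CHUNK, total=TOTAL):
        done = 0
        start = time.time()
        # Unbuffered reads; drop the page cache first (or use O_DIRECT) for cleaner numbers.
        with open(path, "rb", buffering=0) as f:
            while done < total:
                data = f.read(chunk)
                if not data:
                    break
                done += len(data)
        elapsed = time.time() - start
        return (done / (1024 * 1024)) / elapsed

    if __name__ == "__main__":
        print(f"{DEVICE}: {sequential_read_mb_per_s(DEVICE):.1f} MB/s sequential read")
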
  • RAID1 (Score:4, Interesting)

    by Anonymous Coward on Tuesday August 28, 2007 @05:16PM (#20390243)
    Disks are cheap. I *always* run a RAID1 mirrored pair in my PCs, as pretty much all mobos these days have RAID1 capability built into the chipset's SATA controller anyway.

    On my main machine at home, I always buy my disks in groups of three whenever I upgrade. Two drives stay in the machine as the mirrored pair, and once a month I pull one out and stash it in a safety deposit box at my bank, then put the third drive into the machine and re-sync the mirror. That way if my house burns down, a tornado smashes it, or whatever other bad thing might happen, I've got a drive with my machine's image on it, no older than one month, stashed offsite in a secure place so I can recover almost all my stuff to a new machine.
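
    If the same rotation is done with Linux software RAID (md) rather than the chipset RAID the parent describes, the monthly swap can be scripted. The sketch below is only an outline under that assumption; the array and member device names are hypothetical, and mdadm needs root.

        #!/usr/bin/env python3
        # Sketch of the monthly mirror rotation described above, assuming Linux md RAID1
        # (not chipset RAID). Array and device names are hypothetical; run as root.
        import subprocess
        import time

        ARRAY = "/dev/md0"       # hypothetical RAID1 array
        OUTGOING = "/dev/sdc1"   # member heading to the safety deposit box
        INCOMING = "/dev/sdd1"   # drive coming back from offsite storage

        def run(*cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        def rotate():
            # Mark the outgoing member failed, then remove it from the array.
            run("mdadm", ARRAY, "--fail", OUTGOING)
            run("mdadm", ARRAY, "--remove", OUTGOING)
            # Add the returning drive; md starts rebuilding onto it automatically.
            run("mdadm", ARRAY, "--add", INCOMING)
            # Wait until /proc/mdstat stops reporting a resync/recovery in progress.
            while any(word in open("/proc/mdstat").read() for word in ("resync", "recovery")):
                time.sleep(60)
            print("mirror rotation complete")

        if __name__ == "__main__":
            rotate()
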
  • drive failure (Score:4, Interesting)

    by leuk_he ( 194174 ) on Tuesday August 28, 2007 @06:04PM (#20390861) Homepage Journal
    Quite why people suddenly think that drives are going to fail catastrophically at the same time like this is beyond me.

    An experienced administrator knows there is always one item in the data center that everything relies on, that no one could ever imagine failing, and it will fail at the most catastrophic time you can think of. It won't be all of those thousands of drives failing at the same time because some plane mistook your server lights for the landing runway; it will be some cheap sprinkler, the security lock on the door, or some manager who decides to shut down a machine to protect it from a denial-of-service attack.

    If there is no such item, a good BOFH will create such a red button.
  • by Devistater ( 593822 ) * <devistaterNO@SPAMhotmail.com> on Tuesday August 28, 2007 @06:19PM (#20391021)
    It's not less RAM; all the 7200.10 perpendicular drives have a 16 MB cache, at least all the ones above 300 GB do, and it looks like some of the 250 GB models do as well:
    http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_7200_10.pdf [seagate.com]

    It's only when you get down to the 80 and 120 GB sizes that the cache is reduced, and that's to save money on production costs since the drive itself sells for less. If people want a cheaper, smaller-capacity drive, they aren't likely to be willing to pay more for the 16 MB cache.

    So "less RAM" can pretty much be eliminated. Your other theories could still be correct though. I personally would lean towards a bug, one that passed the Q&A because it didn't affect all performance characteristics of the drive.
  • by peterxyz ( 315132 ) on Tuesday August 28, 2007 @06:52PM (#20391343) Homepage
    Yup, about a decade ago I worked somewhere where this was an issue. They had a RAID configuration of some kind (I'm a nerd, but not a hardware one), and they had bearing failures in sufficiently close succession that the third failure occurred before all of the swapping from the second failure had been completed.

    Supposedly it was traced to a common fault in the bearings.
  • Re:RAID1 (Score:5, Interesting)

    by Cef ( 28324 ) on Tuesday August 28, 2007 @09:34PM (#20393047)
    I've had disks fail almost all at the same time before.

    It's really annoying when the following happens:

    - Disk 1 dies in a RAID5 set
    - Hot spare (Disk 4) comes online and starts rebuilding
    - Disk 2 dies during the rebuild thrashing
    - Rebuild never completes
    - Put in 2 new disks
    - Restore a backup
    - Disk 3 fails during restoration, pulling in the hot spare (one of the new disks)
    - A year later, the original hot spare (Disk 4) fails, leading to another rebuild

    From my own experience, the main culprit in these sorts of cases tends to be the bearings. Why they have a tendency to go at the same time, I have no idea. I haven't had it happen lately, but I know I'd rather avoid the problem.

    Usually though, it's not the make/model/build date that is the issue, but the batch number (especially for the parts rather than the drive as a whole). Parts tend to get allocated in batches, so if you get a batch of, say, bearings that aren't up to snuff, that batch of drives will probably fail earlier, while others (even ones manufactured on the same date) will be fine.
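
    The cascade above is less surprising once you estimate how long a RAID 5 rebuild leaves the array exposed. The figures in the sketch below are invented for illustration (the annual failure rate and rebuild time are assumptions, and same-batch drives fail in a correlated way, so the real risk is higher than independent-failure math suggests), but it shows the shape of the calculation.

        #!/usr/bin/env python3
        # Back-of-the-envelope odds of losing a second disk while a RAID 5 set rebuilds.
        # All figures are illustrative assumptions; correlated same-batch failures make
        # the real-world risk worse than this independent-failure estimate.

        AFR = 0.05            # assumed annualised failure rate per drive (5%)
        REBUILD_HOURS = 24    # assumed rebuild time under thrashing load
        SURVIVORS = 3         # remaining drives in a 4-disk RAID 5 after one failure

        HOURS_PER_YEAR = 24 * 365
        p_one = AFR * (REBUILD_HOURS / HOURS_PER_YEAR)   # one survivor dying in the window
        p_any = 1 - (1 - p_one) ** SURVIVORS             # any survivor dying in the window

        print(f"Per-drive chance of failing during the rebuild: {p_one:.4%}")
        print(f"Chance the rebuild loses another drive:         {p_any:.4%}")
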
