Data Storage Hardware

Seagate Firmware Performance Differences 177

Derkjan de Haan writes "The Seagate 7200.10 disk was the first generally available desktop drive featuring perpendicular recording for increased data density. This made higher-capacity disks with excellent performance cheaper to produce. Their sequential throughput actually exceeded that of the performance king, the Western Digital Raptor, which runs at 10,000 RPM vs. the more common 7,200 RPM. But reports began to surface on the Net claiming that some 7200.10 disks had much lower performance than other, seemingly identical disks. Attention soon focused on the firmware, designated AAK, in the lower-performing disks. Units with other firmware, AAE or AAC, performed as expected. Careful benchmarks showed very mixed results. The claims found on the Net, however, have been confirmed: the AAK disk does have a much lower throughput rate than the AAE disk. While firmware can tune various aspects of performance, it is highly unusual for it to affect sequential throughput. This number is pretty much a 'fact' of the disk, and should not be affected by different firmware."
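The summary's claim that sequential throughput is a physical "fact" of the disk can be sanity-checked with back-of-the-envelope arithmetic: it is roughly bytes-per-track times revolutions per second. A minimal sketch, with illustrative numbers that are assumptions rather than published 7200.10 or Raptor specs:

```python
# Sequential throughput ~ (bytes per track) x (revolutions per second).
# Both track-size figures below are made-up, plausible values for
# illustration only, not manufacturer specifications.

def seq_throughput_mb_s(bytes_per_track: float, rpm: int) -> float:
    """Approximate outer-track sequential rate in MB/s."""
    return bytes_per_track * (rpm / 60.0) / 1e6

# Denser perpendicular-recording platter at 7,200 RPM:
print(round(seq_throughput_mb_s(640_000, 7200), 1))    # 76.8
# Less dense platter at 10,000 RPM (Raptor-class spindle speed):
print(round(seq_throughput_mb_s(450_000, 10000), 1))   # 75.0
```

This is why a sufficiently dense 7,200 RPM platter can out-stream a 10,000 RPM drive, as the summary notes, and why firmware normally shouldn't change the number.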
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • bug (Score:5, Insightful)

    by Anonymous Coward on Tuesday August 28, 2007 @04:02PM (#20390051)
    When the performance of a lower-end drive is better than that of a higher-end one (or, god forbid, a SCSI drive!), this is a serious bug that of course needs to be fixed in the firmware update.
    • Firmware can't fix everything.
    • Maybe I'm oversimplifying this, but these days I expect to do firmware/BIOS updates on my motherboard, RAID controller, routers (there are even alternate firmware options like OpenWRT [openwrt.org]), and possibly even a video card. Guess what, even your monitors and TVs can have firmware updates now! What's the big deal about getting a firmware update on a hard drive?

      It seems to me that the only reason people make a big deal out of this is that historically nobody is used to updating their hard drives.
    • From the article:

      Updating firmware?
      Questions as to why the AAK firmware exists are still not answered. A sad detail is that updating an AAK disk to other firmware is impossible, due to physical differences of the two disks.

      I'm not sure where the article got this from, but it implies that it could still be a hardware difference (e.g. slightly lower spec'd DSP in the controller?)

    • The difference between these drives is not only the firmware; the hardware is also different. If you look at the bottom of the drives, you can see the board has a completely different layout and presumably (the pictures I've seen were too low-quality, and the memory was not on the visible side of the AAK drives) different chips. According to Seagate, the AAK drives were for an OEM customer (unfortunately, they didn't mention which one). But how or why those drives made it to retail channels (Seagate and the OEM-
  • Reliability (Score:5, Insightful)

    by PlusFiveInsightful ( 1148175 ) on Tuesday August 28, 2007 @04:03PM (#20390067) Homepage
    I'll take reliability over performance of a hard drive any day. Nothing sucks more than swapping out drives.
    • Sure, but there are people out there who would have picked up these drives instead of the 10K rpm WD drives explicitly because of the better performance for less heat/energy usage.

      It's the return of the old "specifications subject to change without notice". Haven't seen an abuse of that one for a while.
    • RAID1 (Score:4, Interesting)

      by Anonymous Coward on Tuesday August 28, 2007 @04:16PM (#20390243)
      Disks are cheap. I *always* run a RAID1 mirrored pair in my PCs, as pretty much all mobos these days have RAID1 capability built into the chipset's SATA controller anyway.

      On my main machine at home, I always buy my disks in groups of three drives whenever I upgrade. Two drives stay in the machine as the mirrored pair, and once a month I pull one out and stash it in a safety deposit box at my bank, and put the third drive into the machine and re-sync the mirror. That way if my house burns down / tornado smashes it or whatever bad thing that might happen, I've got a drive with my machine's image on it, no older than one month, stashed away offsite in a secure place so I can recover most all my stuff to a new machine.
      • [...]I always buy my disks in groups of three drives whenever I upgrade.

        not sure if I get you right but:

        Same vendor, same product, roughly same manufacturing date? Great strategy.

        • Re: (Score:3, Insightful)

          It works for me - we have at least a thousand disks in our datacentre in RAID5 arrays with 10+ disks per array - all the same make, model and build date and haven't yet had any fail so close that we couldn't leisurely swap the duff one out and rebuild onto the replacement. Quite why people suddenly think that drives are going to fail catastrophically at the same time like this is beyond me, when real-world experience says it just isn't so.
          • drive failure (Score:4, Interesting)

            by leuk_he ( 194174 ) on Tuesday August 28, 2007 @05:04PM (#20390861) Homepage Journal
            Quite why people suddenly think that drives are going to fail catastrophically at the same time like this is be

            An experienced administrator knows there is one item in the data center that everything relies on, that no one could ever imagine failing, and it will fail at the most catastrophic time you can think of. It won't be all of those 1,000 drives failing at the same time because some plane mistook your server lights for the landing runway. It will be some cheap sprinkler, the security lock on the door, or some manager who decides to shut down a machine to protect it from a denial-of-service attack.

            If there is no such item, a good BOFH will create such a red button.
            • That's why we have a hot standby datacentre with real time replication to it. Shame that one of our contractors reversed over the gas main and we evacuated leaving all the access cards to the hot standby in the evacuated building.....
            • by Jeff Carr ( 684298 ) <(ofni.rracffej) (ta) (moc.todhsals)> on Tuesday August 28, 2007 @08:13PM (#20392817) Homepage

              If there is no such item, a good BOFH will create such a red button.
              One of the data centers I worked at had just such a red button. It was designed to immediately kill all power to the room. Behind a plastic case, clearly marked "Emergency Shutoff".

              The security for the door was malfunctioning earlier this summer, and the alarm was going off. The security guard thought the button was a shutoff switch for the security system... Luckily we had redundant servers at another location... Of course half of those didn't work...

              Luckily also, this was the smaller data center at that site, so it only housed a few hundred servers... including the servers that ran many of our ATMs, and our server inventory and trouble tracking software... which didn't fail over to their backups... of course.

              In addition, we had no idea where the server housing our server inventory information was... It turns out it was housed on a server called Skywalker... which we couldn't find... It turned out to be a cluster of Anakin and Amidala...

              Fracking geeks.
              • A client of mine years ago had exactly such a button in their server room, but in this case the button was accidentally pressed by a maintenance worker who fell from a ladder. I don't know what somersaults the poor guy had to do to accomplish this, but there it was - a freak accident.
          • It hasn't happened to me either.

            But OTOH I also never lost all my data in the flames of my burning home, nor did I have any data damaged by a virus, nor was any sensitive data stolen from me, nor ..., nor ..., nor ...

            You see, a lot of these security measures are a little paranoid, but I still consider the argument for not buying exactly the same stuff for backup valid. If there is a problem with the batch, chances are high that it will affect both HDs.

            Since security things tend to be a little paranoid, yo
          • Really? Mine always fail in batches. One month there will be 30 or 40 dead hard drives, whereas the previous month there were 2 or 3. When you look at the date, they are always three years and one or two months from the date of manufacture. Funny how that happens, but it happens reliably, year after year.

          • yup, about a decade ago I worked somewhere where this was an issue - they had a RAID configuration of some kind (I'm a nerd, but not a hardware one) and they had bearing failures in sufficiently close succession that the third failure occurred before all of the swapping from the second failure hadn't been completed.

            supposedly it was traced to a common fault in the bearings
            • by dwater ( 72834 )
              > ...the third failure occurred before all of the swapping from the second failure hadn't been completed.

              Then it was all swapped in time then, right?
          • Re:RAID1 (Score:5, Interesting)

            by Cef ( 28324 ) on Tuesday August 28, 2007 @08:34PM (#20393047)
            I've had disks fail almost all at the same time before.

            It's really annoying when the following happens:

            - Disk 1 dies in a RAID5 set
            - Hot spare (Disk 4) comes online and starts rebuilding
            - Disk 2 dies during the rebuild thrashing
            - Rebuild never completes
            - Put in 2 new disks
            - Restore a backup
            - Disk 3 fails during restoration, pulling in the hot swap (one of the new disks)
            - A year later, the original hot spare (Disk 4) fails, leading to another rebuild

            From my own experience, the main culprit in these sorts of cases tends to be the bearings. Why they have a tendency to go at the same time, I have no idea. Haven't had it happen lately, but I know I'd rather avoid the problem.

            Usually though, it's not the make/model/build date that is the issue, but the batch number (especially for the parts rather than the drive). Parts tend to get allocated in batches, so if you get a batch of, say, bearings that aren't up to snuff, that batch of drives will probably fail earlier, while others (even ones manufactured on the same date) will be fine.
            • Re: (Score:2, Insightful)

              by evilbessie ( 873633 )
              Use RAID 6: you can lose any 2 disks and still have all the data, which means the data stays secure while the array is rebuilding from a single failed drive. Alright, you could lose 3 disks at once, but that is much less likely than losing one or two, especially if the failure/rebuild is dealt with quickly.
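For what it's worth, the "much less likely" claim can be put in rough numbers. A sketch under the idealized assumption that disk failures are independent (which, as other comments in this thread note, real batch failures violate):

```python
from math import comb

def p_array_loss(n: int, p: float, tolerate: int) -> float:
    """Probability that more than `tolerate` of n disks fail in the
    same window, with independent per-disk failure probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(tolerate + 1, n + 1))

# 12-disk array, assuming each disk has a 1% chance of dying during
# a rebuild window (the 1% is an illustrative guess, not a measurement):
raid5 = p_array_loss(12, 0.01, 1)   # dies on a 2nd concurrent failure
raid6 = p_array_loss(12, 0.01, 2)   # survives any 2 failures
print(f"RAID5 loss: {raid5:.1e}  RAID6 loss: {raid6:.1e}")
```

Correlated (same-batch) failures push both numbers up, which is the whole point of the surrounding thread.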
          • ...all the same make, model and build date and haven't yet had any fail so close that we couldn't leisurely swap the duff one out and rebuild onto the replacement

            Problem is, while there is a very low average chance of a drive failure, those failures actually tend to happen in clusters. I figure there are failures in the environmental systems allowing in contaminants; you could think of them as "Friday syndrome"; whatever it is, the effect is very real. If one drive in a batch fails early, the odds another

            • That's a month between failures though - an array only takes a few hours to rebuild onto a disk, even a large one. Yes, they will fail in clusters, but they are very unlikely to fail so close together that you cannot rebuild onto the hot spare in time, even in larger arrays.
      • Re:RAID1 (Score:5, Insightful)

        by GooberToo ( 74388 ) on Tuesday August 28, 2007 @08:34PM (#20393045)
        as pretty much all mobos these days have RAID1 capability built into the chipset's SATA controller anyway.

        And many of those are actually slower than a pure, software-only RAID solution. Sometimes the "hardware RAID" does nothing but offload checksum calculations or other bits onto slower hardware, resulting in a major performance hindrance rather than a performance boost. Worse yet, if your controller card dies, ALL of your data is now inaccessible. Worse yet again, there is no guarantee that future hardware releases, even by the same manufacturer, will be compatible. Heck some of the really low end hardware solutions don't even provide mirrored reads, which should provide a 2x read-only performance boost.

        Not all RAID is created equal. And for many, software RAID, especially for Linux users, provides a solution faster than many hardware RAID solutions, is future-proof, and only costs a couple of percent in additional CPU load. Best of all, it's free and works well with LVM. In a day and age where multiple cores are common and few actually use more than one, this option doesn't have much of a downside until you're willing to look at *REAL* RAID hardware.
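As a concrete sketch of the software-RAID-plus-LVM setup described here, the Linux md stack needs only a few commands. Device names and sizes are placeholders, and this is a sketch rather than a tested recipe:

```shell
# Create a software RAID1 mirror from two placeholder disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Layer LVM on top, as the comment suggests:
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0
mkfs.ext3 /dev/vg0/data

# Watch the initial sync / any rebuild:
cat /proc/mdstat
```

Because the RAID metadata format is the kernel's own, the mirror is readable on any Linux box, which is the "future proof" point made above.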

        • by Agripa ( 139780 )
          Worse yet, if your controller card dies, ALL of your data is now inaccessible.

          This is not always the case. With my old 3ware cards, for instance, the RAID 1 format does preclude using either of the drives on another controller. I have, however, come across some motherboard and other RAID controllers which can mirror an existing non-RAID drive into a RAID 1 setup, and either new drive can be used alone with a standard controller.

          Heck some of the really low end hardware solutions don't even provide mirrored reads,
      • Watch out for one gotcha - I've read that some (many? all?) RAID arrays are built so that they're only usable with the controller that built them. In the case of software RAID arrays, you're probably pretty safe because Windows/Linux/whatever will probably work the same way over a long period of time.

        However, in the case of hardware controllers, the array format may be different between implementations. This means that you're protected against drive failure but not against controller failure/theft/burnina

    • Re: (Score:3, Insightful)

      by RingDev ( 879105 )

      Nothing sucks more than swapping out drives.
      Spoken like a man who's never been kicked in the nuts...

      I'd rather hot swap a failed raid drive than bring down a server to increase memory or redesign a solution from scratch in order to achieve the same performance gains. Heck, for the cost of having a coder just look at the I/O intensive code I could have bought another hard drive.

      -Rick
      • Re: (Score:3, Insightful)

        by tepples ( 727027 )

        Heck, for the cost of having a coder just look at the I/O intensive code I could have bought another hard drive.
        In which country? In some countries, high import duties and a weak local currency mean that the price of a hard drive is worth a lot more hours of labor than it would be in, for example, the United States or the United Kingdom. And across how many machines does your app run?
        • by RingDev ( 879105 )

          In which country? In some countries, high import duties and a weak local currency mean that the price of a hard drive is worth a lot more hours of labor than it would be in, for example, the United States or the United Kingdom. And across how many machines does your app run?

          In the USA. Let's say I have an app that is so dependent on performance that the hit taken by running a slower hard drive is currently a show stopping issue. A new Seagate 7200.10 400GB hard drive costs right around $100 in the US, heck we'll call it $150 for OMGNeedItNow shipping or local retail price. Let's figure that there is believed to be a performance issue in the code, but that no one has worked on the project for 6 months to 1 year. Figure it takes about 2 hours for a developer to get the correct

          • by _merlin ( 160982 )
            Swapping the disk incurs labour costs, too. You need to send someone to do the work and test the system to ensure it's still stable after the swap.
            • Dude, he already did the math for you and you still don't get it?
              Hardware is cheaper than labor, by orders of magnitude.

              In fact, when your hardware costs begin to escalate, most of the time it is because you were cheap (read: shortsighted) on your labor in the past, and your underqualified "write-you-many-lines-of-code-for-$50-an-hour" contractors left you with an app that scales like a lame donkey.

              Btw, swapping disks and ensuring stability is called "regular maintenance", and you should have full-time staff
          • Re:Reliability (Score:5, Insightful)

            by rcw-work ( 30090 ) on Tuesday August 28, 2007 @06:40PM (#20391883)

            Compared to just replacing the hard drive for $150. Hardware is cheap. Labor is not.

            Your example makes sense, but what if you've already done that? Say your app is SQL-based and does some queries that are O(n^2) complex. You've already spent $20k on a bad-ass server with RAID10, a bunch of spindles, separate transaction-log drives, and as much RAM as will fit. Now, a year later, there are more records in the system and performance sucks again. Where do you go from there? These disks don't go to 11. If you want to double the performance of that $20k box, you're likely going to spend not $40k but $200k.

            Once you outgrow commodity parts, if you want a 2x speedup, you'll usually have to pay 10x for it. Or wait three years. The price/performance curve is deceptively shallow towards the bottom end.
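A toy illustration of why the O(n^2) query shape defeats hardware upgrades: doubling the data quadruples the work, so fixing the access pattern (when possible) beats a bigger box. This is a plain-Python stand-in for a nested-loop versus hash join, not a real SQL engine:

```python
def nested_loop_join(left, right):
    # O(len(left) * len(right)) comparisons: the curve that
    # "doesn't go to 11" no matter how many spindles you buy.
    return [x for x in left for y in right if x == y]

def hash_join(left, right):
    # O(len(left) + len(right)): build a set once, probe once per row.
    seen = set(right)
    return [x for x in left if x in seen]

left = list(range(2000))
right = list(range(1000, 3000))
assert nested_loop_join(left, right) == hash_join(left, right)
# Same answer; the nested loop did 4,000,000 comparisons to get it.
```

In SQL terms this is roughly what an index (or a query planner picking a hash join) buys you over an unindexed correlated scan.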

            • Your example makes sense, but what if you've already done that? Say your app is SQL-based and does some queries that are O(n^2) complex.

              This is why we have things like database clusters and distributed queries. Sometimes scaling horizontally makes more sense and is cheaper than trying to scale vertically. Which probably explains why it is so popular. ;)
              • This is why we have things like database clusters and distributed queries. Sometimes scaling horizontally makes more sense and is cheaper than trying to scale vertically. Which probably explains why it is so popular. ;)

                Agreed. But if the app was never created with that in mind, it's rarely an option, and, if poorly/naively implemented, it can cause problems much worse than performance.

                I think the only two multi-master database implementations I've seen so far that have been done right are Usenet and Act

          • In the USA. Let's say I have an app that is so dependent on performance that the hit taken by running a slower hard drive is currently a show stopping issue.

            Then the person who spec'd the system is either an idiot or incompetent. By definition you have a system with a single point of failure which forces your system to operate below acceptable minimums. With such a system you should have one or more drives on hot standby, or additional active disks in your array to absorb the performance hit.

            Given the rec
            • by RingDev ( 879105 )
              Wow guys. I created that scenario as a greatly simplified example to try to show that the cost of replacing a hard drive is insignificant when compared to other costs in IT. Nothing more. Having multiple fail-overs, scaling horizontally, and pushing to other bottlenecks are all great ideas, but they just cloud up the central point I was making of labor costing more than the hard drive.

              -Rick
  • Is there a tool to check what firmware my hard drive has in Linux? I've got one of these Seagates, and it's SATA, so that means hdparm can't talk to it.
    • Re: (Score:3, Informative)

      by Nimey ( 114278 )
      Sigh, never mind. Ubuntu's been updated since I put this computer together, so now hdparm /can/ talk to a SATA drive.

      Wouldn't you know that I've got an AAK disk.
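A small sketch of doing the same check programmatically by parsing `hdparm -I` output (needs root; the "Firmware Revision" line format is assumed from typical hdparm output, and ST3320620AS is used as an example 7200.10 model number):

```python
import re
import subprocess

def parse_firmware(hdparm_text: str):
    """Extract the firmware revision from `hdparm -I` output text."""
    m = re.search(r"Firmware Revision:\s*(\S+)", hdparm_text)
    return m.group(1) if m else None

def firmware_revision(device: str):
    """Query a drive directly, e.g. firmware_revision("/dev/sda")."""
    out = subprocess.run(["hdparm", "-I", device],
                         capture_output=True, text=True).stdout
    return parse_firmware(out)

# Against a captured fragment:
sample = "\tModel Number:       ST3320620AS\n" \
         "\tFirmware Revision:  3.AAK\n"
print(parse_firmware(sample))   # 3.AAK
```

`smartctl -i` reports the same field, if you have smartmontools installed instead.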
  • by Bellum Aeternus ( 891584 ) on Tuesday August 28, 2007 @04:12PM (#20390191)
    So the whole article comes down to the fact that the new Seagates are really good if you use them for what they're designed for, but not as good at what they're not designed for. Surprise...

    Looks like Seagate designed the new drives for servers (probably file servers), because they're really good at moving large chunks of data around, doing large reads and large writes, but not so good at a ton of little reads and writes. So don't buy them for your desktop/workstation.

    • Re: (Score:1, Insightful)

      by Anonymous Coward
      The problem simply is that when you buy a Seagate 7200.10, you don't know which drive you'll end up getting: server or workstation.
  • Looking at the graphs on the first page of the article, it seems like the AAK firmware has some kind of performance cap on it. When you get to ~80 on the horizontal scale, the curves in the two graphs appear to sync up again.

    So does this mean that they've put some kind of speed governor on their hard drives, or am I totally misinterpreting the results?
  • by zdzichu ( 100333 ) on Tuesday August 28, 2007 @04:18PM (#20390269) Homepage Journal
    From TFA page 6 [fluffles.net]:

    A sad detail is that updating an AAK disk to other firmware is impossible, due to physical differences of the two disks.
    (emph. mine)
    Different disks have different performance. News at 11.
    • by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Tuesday August 28, 2007 @04:31PM (#20390439) Journal
      Two drives sold under identical make and model identifiers should not be that different.
      • According to whom? The drives are advertised based on capacity, not throughput. RPM, cache, sure, but I've never seen a manufacturer include anything other than the interface speed in a drive's specs.
        • Indeed - I had 4 75GB IBM 75GXP disks purchased at the same time (about a week apart) - two were actually smaller by some fraction (I don't recall, 10 or 100 MB-ish) and had a different number of sectors. The two bigger ones had the all-too-well-known failure; the others are still operational.
        • Nothing personal, but you might want to consider getting your hard drive specs elsewhere. I prefer going directly to the manufacturer's site. The 7200.10 specification sheet {in .PDF format} for the Barracuda line is here. [seagate.com]
          • Those are interface speeds. If they were actual expected speeds, you should be sending your hard drive back or starting a class action, because there's not a single 7200.10 that will hit 100MB/s (let alone 300MB/s) for more than 0.16 seconds -- the best-case time it takes to empty/fill the cache at that speed.
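The 0.16-second figure is just the cache size divided by the interface rate; a quick check:

```python
# 16 MB of on-drive cache drained at the 100 MB/s burst rate:
cache_mb = 16
interface_mb_s = 100
print(cache_mb / interface_mb_s)   # 0.16 (seconds)
# At a 300 MB/s burst rate the cache empties even faster, after
# which you are back to the platter's sustained rate anyway.
```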
      • by jhesse ( 138516 ) on Tuesday August 28, 2007 @06:44PM (#20391935) Homepage
        Tell that to D-Link.

        They were selling a USB 802.11g dongle (model DWG-122, IIRC): one model number, *THREE* different chipsets, each requiring different drivers, only one of which had drivers for anything other than Windows.

        Nothing on the box other than an "A", "B" or "C" in tiny print in a corner.

        • by dwater ( 72834 )
          > Nothing on the box other than an "A", "B" or "C" in tiny print in a corner.

          Well, that's enough, isn't it?

          If these drives had such a marking, then this article wouldn't be here.
        • DLink 530TX: Via Rhine chipset. 530TX+: Realtek 8139. Apparently the + sign meant "more sucky". After validating that one has worked well, and then ending up with the other, and not having it work well (it crippled a basement closet NFS server), I can get a little choked about these small distinctions for a very long time. Bought more expensive Intel fxp cards for a long time afterward.

          Here's the thing. If I order a 530TX from my favorite rock-solid discount house, they will fill the order with a 530TX
  • by garcia ( 6573 ) on Tuesday August 28, 2007 @04:23PM (#20390329)
    This number iis pretty much a 'fact' of the disk, and should not be affected by different firmware.

    Poor spell checking is pretty much a 'fact' of the browser you use when you submit articles to Slashdot, and should be affected by different editors.

    Perhaps kdawson's firmware is broken? :)
  • by Anonymous Coward on Tuesday August 28, 2007 @04:26PM (#20390359)
    Whatever you do, don't stream audio from one of the -K drives across Vista!
  • And they are in STRIPE0 (RAID 0) and I am definitely smokin' with file stuff.

    I am juggling a 14 GByte PDF with a breeze (albeit Acrobat Reader doesn't seem to work with files larger than 4 GByte).

    • by daeg ( 828071 )
      Just curious: what the hell do you have as a 14GB PDF? I've never worked with one that large; I've always seen them split into smaller pieces and rejoined at press time (assuming you're working with press info). Not related to the article, just curious.
      • by McNihil ( 612243 )
        Yes, true that... it was just a fun exercise to use pdftk to concatenate the entire thing and check out the speed and size, and whether things would still work :-D

        But yes, a 200+ page book, and a VERY graphics-intensive one (half vector and the other half compressed PNG, due to the nature of the material), will cause the size to be large.

        inkscape + home brewed stuff on Linux no less.
  • That is hogwash. According to Wikipedia [wikipedia.org], it has been on the market for 2 years now in various forms. Searching Newegg [newegg.com], there are many drives from a few brands that have it already, and I know I've seen them on there for at least a year.
    • I think the Seagate 7200.10 DRIVE was the first non-SCSI drive with perpendicular recording, not this particular firmware on the drive. This is a revision of the 7200.10 firmware.
      In other words, the /. summary is technically/semantically correct in calling it the first DRIVE. But the AAK firmware wasn't the firmware used over a year ago when the 7200.10 drive was first released, AFAIK.

      BTW, 2007 - 2006 = 1 year, not 2 years. Even if you count month 9 minus month 4 (Aug - Apr) = 5 months, it's still less than 1.5 years, so
    • The Seagate? Have you considered that "AAK" implies that there may have been as many as 11 previous revisions of this drive?
  • by Froggie ( 1154 )
    It's interesting to note that the general-purpose benchmarks come out with AAK in the lead, while the others, all very much sequential-read focused, don't. So the question is, what exactly are the operations that the AAK is doing faster in the mixed benchmarks? Seeking? Or maybe it's a bus bandwidth limit at the hard drive end?

    Sadly, we can't tell, because the author has focused on the sensationalism of poor performance rather than asking these questions. Seems to need a few more experiments setting up
    • Re: (Score:3, Interesting)

      by Devistater ( 593822 ) *
      It's not less RAM; all the 7200.10 perpendicular drives have a 16 MB cache, at least all the ones above 300 GB, and it looks like some of the 250 GB models as well:
      http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_7200_10.pdf [seagate.com]

      It's only when you get down to the 80 and 120 GB sizes that the cache is reduced. And that's to save money on production costs, since the drive itself sells for less. If people want a cheaper, smaller-capacity drive, they aren't likely to be willing to pay more for the 16 MB cache.

      So "less
  • It's true (Score:5, Informative)

    by fifirebel ( 137361 ) on Tuesday August 28, 2007 @04:49PM (#20390685)

    I have been setting up a couple of 8-drive RAID-5 arrays with these drives for some customers, and I also found out that 3.AAE drives performed much better than 3.AAK. No idea why. Seagate was unresponsive to queries about flashing the firmware, and I had to replace all the 3.AAK drives with 3.AAEs.

    The manufacturing country had nothing to do with it. I had some Chinese 3.AAE and 3.AAK as well as Taiwanese (or was that Thai?) 3.AAE and 3.AAK. 3.AAE would always perform better.

    The kind of testing I performed was:

    • hdparm -t /dev/sdN (AAK: 50 MB/s vs AAE: 72 MB/s)
    • time dd if=/dev/sdN of=/dev/null bs=1M (AAK was 10-15% slower)
    • iozone over ext3 showed slightly worse results with AAK than with AAE, but it was probably within the sampling/error margin (< 5%).

    Also, if you buy a retail kit (which I found cheaper than OEM at Fry's), there is no way to find out the firmware level from the box. You have to open the retail boxes to check the firmware revision on the drive itself.

    One theory I have is that these drives can supposedly be configured for server or workstation workloads. It could be that AAK drives are configured for server workloads by default (unless overridden) while the AAE are configured for workstation workloads by default. I have no idea how to toggle this under Linux.
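For anyone wanting to reproduce this kind of measurement without hdparm, here is a rough dd-style sequential-read sketch in Python. Point it at a raw device (as root) or a large file, ideally after dropping the page cache (`echo 3 > /proc/sys/vm/drop_caches`) for honest device numbers:

```python
import time

def seq_read_mb_s(path: str, block_size: int = 1 << 20,
                  max_bytes: int = 256 << 20) -> float:
    """Read up to max_bytes sequentially and report throughput in MB/s."""
    done = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while done < max_bytes:
            chunk = f.read(block_size)
            if not chunk:
                break
            done += len(chunk)
    elapsed = max(time.perf_counter() - start, 1e-9)
    return done / elapsed / 1e6

# e.g. seq_read_mb_s("/dev/sdb"), directly comparable to the
# hdparm -t and dd numbers quoted in the comment above.
```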

    • If it's configured for different loads, it's probably locked. I doubt it can be configured for other loads, even in Windows or DOS.
      The most low-level configuration for drives themselves I've ever heard of is adjusting the noise levels; some drives have configurable audio profiles: they can be quieter, but slower.

      If you could configure the firmware for different loads, that would be really cool. But I'd imagine that hdd manufacturers would be against that. If you could configure a normal desktop hdd for server con
      • by Agripa ( 139780 )
        The most low-level configuration for drives themselves I've ever heard of is adjusting the noise levels; some drives have configurable audio profiles: they can be quieter, but slower.

        Quantum once gave me the files to replace the firmware image on a 5.25" Bigfoot series drive. The buggy firmware had a problem where, if you reread the same sector with a delay between reads, corrupted data would be returned.
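The "configurable audio profiles" mentioned in this sub-thread are ATA Automatic Acoustic Management (AAM); on Linux, hdparm exposes the setting (the device name is a placeholder, and not all drives support it):

```shell
hdparm -M /dev/sda        # show the current acoustic management value
hdparm -M 128 /dev/sda    # quietest setting (slower seeks)
hdparm -M 254 /dev/sda    # fastest setting (loudest)
```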
    • I have been setting up a couple of 8-drive RAID-5 arrays with these drives for some customers, and I also found out that 3.AAE drives performed much better that 3.AAK.

      I have two questions that are going to sound a bit trollish, so I apologize in advance, but you sound like the right person to ask. I really wouldn't give a damn about this except that I buy a hard drive now and then and don't like to buy from companies that jerk people around.

      According to the article, the AAK drives perform a bit better than

  • The implication is that Seagate has crippled their 7200rpm drives so as not to cannibalize sales of the 10k RPM drives. Assuming this is true, it shouldn't remain so for very long. There are plenty of other purveyors of 7200rpm drives (without an interest in selling more 10K RPM SATA drives), and Seagate doesn't hold exclusive rights to perpendicular recording technology. Soon enough someone will make a 7200rpm drive that isn't crippled, and then I suspect we'll see the 7200.10 series magically return to
    • Um, you might want to check your facts on that. Seagate doesn't sell any 10k RPM SATA drives. They do sell 10k RPM SCSI/SAS drives, but I doubt they are worried about cannibalizing the sales of these drives, as they are server class hardware, not desktop class.

      Whereas, I'm sure they are happy to cannibalize Western Digital's sales of 10k RPM SATA drives.
  • Speculating is fun, so I will. Many physical devices that require exacting manufacturing processes are sold under different models of varying specs. The devices with the fewest manufacturing defects are the high-end, expensive models, while those with more defects are sold for less. The best example is CPUs, the difference between speed grades being the amount of manufacturing defects. So perhaps with these drives they have to use different firmware depending on the quality of the platter, and for marketing th
  • I had a Samsung 250 GB HM250JI 2.5" SATA on order, in Europe it's less than 150 Euro and that is a bargain for a 250 Gig laptop drive. The problem was that a little googling showed massive performance problems with some drives. Some had miserable speed benchmarks, others (in OS X) failed to mount or mounted sporadically. Others performed just fine.

    Turned out Samsung had a couple of different firmware versions on shipping drives, and it is possible to burn new firmware to a CD and boot from it under OS X to
  • by straponego ( 521991 ) on Tuesday August 28, 2007 @07:30PM (#20392465)
    First, a comment on the Seagate 750G drives: If you run these, and you want to keep them running, make sure you have clean power. I've seen several of them die, usually after a power outage. Never seen one on a UPS die.

    Also, if you're concerned about Linux block device performance, look at the various kernel tunables. On a single drive, such as those Seagates, I can get an extra ~10MB/s. On RAIDs and LVM volumes, the differences can be much higher -- more than twice as fast, in some cases. There are a few parameters that make a difference, and many values you might want to try for each. I have a script that iterates through the various permutations, running IOZone on each, so I can see what does best for read vs. write and large vs. small file performance. But I can't release it just yet (employer makes 100% of income from Open Source; employer hates Open Source). Anyway, somebody out there can do better than I, I'm sure :)

    This discusses the tunables you'd want to check: http://www.3ware.com/KB/article.aspx?id=11050 [3ware.com]

    Note that these do NOT apply only to 3Ware controllers. And the differences in performance can be massive.
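The sweep the parent describes could look something like this minimal sketch. The device name, the candidate scheduler and readahead values, and the dd-based timing are assumptions standing in for the IOZone runs mentioned above; the sysfs paths (/sys/block/<dev>/queue/scheduler and read_ahead_kb) are standard Linux block-layer tunables. Set DRY_RUN=1 to preview the combinations without touching /sys (writing to /sys requires root):

```shell
# Sketch: iterate over combinations of common block-layer tunables and
# benchmark each one with a sequential read. Values are illustrative;
# substitute your own candidate lists and a real benchmark (e.g. iozone).
sweep_tunables() {
  dev="${1:-sda}"
  for sched in deadline cfq noop; do        # I/O schedulers of the era
    for ra_kb in 128 512 4096; do           # readahead sizes to try, in KB
      if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "would set $dev: scheduler=$sched read_ahead_kb=$ra_kb"
      else
        echo "$sched" > "/sys/block/$dev/queue/scheduler"
        echo "$ra_kb" > "/sys/block/$dev/queue/read_ahead_kb"
        # Time a 256 MB sequential read, bypassing the page cache
        dd if="/dev/$dev" of=/dev/null bs=1M count=256 iflag=direct 2>&1 | tail -1
      fi
    done
  done
}
```

The same loop structure extends to other knobs the 3ware article mentions, such as nr_requests, by adding another nested loop.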

    • But I can't release it just yet (employer makes 100% of income from Open Source; employer hates Open Source).
      Psst... Might want to keep that to yourself around these parts.

      So of course I went spelunking who your employer might be... No luck, but I got a new sig.

    • Note that these do NOT apply only to 3Ware controllers. And the differences in performance can be massive.

      Funny; 3ware controllers are slow as shit. Buy an Areca, and enjoy performance that is TENFOLD that.

      I've seen twelve drive arrays on 3ware cards struggle to do better than 25MB/sec. Areca cards easily hit well over 200MB/sec from just a handful of drives (ie, take single drive speed, multiply by # of drives, subtract a little.)

      • Uh... yeah, with the tunables I hit mid-500MB/s on 3wares. I like Arecas, and the tunables help them too. I wasn't advocating a particular card, just pointing to some documentation that applies to all Linux block devices.
  • by Jaxoreth ( 208176 ) on Tuesday August 28, 2007 @08:48PM (#20393159)
    For those who are unclear on what perpendicular recording is, Hitachi made a video [youtube.com] explaining how it works. It's a bit dry and technical, but I figure the Slashdot crowd is savvy enough to grok it.
  • by Distan ( 122159 ) on Tuesday August 28, 2007 @11:13PM (#20394193)
    I am an insider in the drive industry, so while I need to be vague on some things, I can add clarification on others.

    A hard drive is a very complex subsystem inside your computer, more complex than many people realize. A hard drive contains one or more CPUs, memory, firmware, and dedicated hardware devoted to the functions of storing and retrieving data.

    There is no single "right" way to draw the line between what is firmware and what is hardware in a hard drive. Algorithms could be coded in VHDL or Verilog and synthesized into the silicon, or they could be compiled in C (or hand coded in assembly) and be embedded in firmware. Each drive company has their own philosophy for where to draw the line.

    Some drive companies choose to implement only fundamental functions in silicon, and implement everything else in firmware. For these companies, comparing their firmware to the BIOS in a PC is a poor analogy. A better analogy would be to compare the firmware to the operating system.

    In a system with "lite" firmware, the firmware typically would be responsible for configuring a few control registers and buffers, and then the hardware would take over. But for a system with "heavy" firmware, the firmware behaves much more like a kernel. Data is not going to be moved in or out of buffers, or be sent to and from platters, without the active involvement of the firmware scheduling and ordering that activity.

    The author of the OP wrote "it is highly unusual for (firmware) to affect sequential throughput". The author is wrong. In a system with "heavy" firmware, all performance is highly dependent on the firmware. It can easily make the same difference in performance as you would see running Windows 95 v. Windows XP v. Windows Vista v. RH 7.2 v. RHEL 3.0 on the same PC hardware.

    I do not know if the Seagate drive in question is a "heavy" or "lite" firmware drive, but I do know that the assumption that firmware takes a minor role in hard drive performance is mistaken.
  • There have been far, far too many reports 'across the web' claiming the drives are either dead silent or noisy as hell.
    This would concur with the news post here; I'd say it's the AAM (Automatic Acoustic Management) stuff.

    On top of this there have been more reports of faults than there were with the 7200.9's (although nothing Deathstar-esque). If you go to the newegg.com feedback section for the various 7200.10's you'll see a surprising variety of votes, yet newegg is traditionally filled with 'fanboy' response
  • I phoned Seagate's tech support number to ask about this. As soon as I started talking about firmware, the 1st-level tech support guy escalated me without asking anything else. The 2nd-level guy did a bit of reading and seemed to think this AAK firmware is an OEM firmware, and that Seagate isn't obliged to do anything for me at all. I was told to contact the store I bought it from, as it is an OEM drive and the OEM is responsible for any support or replacement options, etc., etc. What a joke. He says the A
