Google Proposes New Hard Drive Format For Data Centers (thestack.com) 202

An anonymous reader writes: In a new research paper the VP of Infrastructure at Google argues for hard drive manufacturers and data center provisioners to consider revisions to the current 3.5" form-factor in favour of taller, multi-platter form factors — with the possibility of combining the new format with HDDs of smaller circumference which hold less data but have better seek times. Eric Brewer, also a professor at UC Berkeley, writes "The current 3.5" HDD geometry was adopted for historic reasons – its size inherited from the PC floppy disk. An alternative form factor should yield a better TCO overall. Changing the form factor is a long term process that requires a broad discussion, but we believe it should be considered."
  • by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Friday February 26, 2016 @09:54AM (#51590947) Homepage
    Also, I thought the world was going SSD anyway, which is thinner, not thicker?
    • by drinkypoo ( 153816 ) <martin.espinoza@gmail.com> on Friday February 26, 2016 @10:02AM (#51590979) Homepage Journal

      The world will probably keep using spinning rust until the purchase price (not TCO) of SSDs is lower. I wouldn't be surprised if makers went back to 5.25" half-height and low spindle speeds. That would still permit large throughput via high density, but seeks would be slower. Not a big deal with enough caching in front of them, and/or with enough disks in an array. As SSDs approach HDD prices, they will take over more of the workloads that actually have to be fast anyway.

      • by swb ( 14022 )

        I think you're right, but I think it only counts/matters for organizations operating at the spreadsheet analysis scale where the potential savings are really only realized across many thousands of disks in extremely customized environments.

        It also wouldn't surprise me if this was being floated by Google to induce hard disk makers to leverage their existing manufacturing base to mass-produce something that only a very small number of customers are likely to have any interest in.

        • I agree with you that this only makes sense for very large customers of hard drives. If Google really thinks this is a good idea, they should approach one of the vendors with a long-term commitment to buy the drives, or a large investment for them to develop the drives. That's really the only way I see a Seagate or WD spending their time/money to develop a product that has no retail purpose and may have no commercial customers. Let the HD companies know it won't be a wasted investment and they'll come up with something.
          • by swb ( 14022 )

            I could see some of the larger SAN vendors getting behind this, if only as a way to keep customers paying top dollar for SSD and tiering features. They would gain an additional way of charging more for less (super magic form factor high performance high density hard disks that only work in our custom enclosures..).

            My guess, though, is that they're probably going to see some of their business erode from SSD-only vendors whose products will provide better performance at less cost.

      • So we keep using hard drives because they are cheaper, yet raising their costs through development, retooling, deployment, etc. is somehow supposed to be a good thing? Raising the cost of something you use precisely because it's cheap seems like the opposite of what you want.
      • by Dutch Gun ( 899105 ) on Friday February 26, 2016 @01:13PM (#51592225)

        You could very well be right. Speaking of oddball heights, the first 500 *MB* drive I bought (back when the main network drive was 120MB) cost $1000, and it was actually a 3 1/2" double-height size, meaning the bay next to it had to be clear before I could install it. It wasn't a problem since I was simply installing it in a workstation. This obviously wouldn't work for Google, since I'm certain they use computers with front-mounted hot-swappable 3 1/2" drive bays all neatly packed together - I've seen how nicely these work with my Synology 5-bay NAS. Unless a new form factor becomes standardized, you can't really hack in a solution... at least not on the scale Google is dealing with.

        I don't think Google is going to get its way here with a new standardized size, at least at mass adoption scales. Inertia is pretty damn hard to overcome, even if potentially superior solutions exist. I mean, the US is still using imperial measurements, for heaven's sake. The fact that we measure them as 3 1/2" drives should tell you something about how hard it is to change standards.

      • by Megol ( 3135005 )

        I guess you call your computer chips molten sand?

    • by p4ul13 ( 560810 ) on Friday February 26, 2016 @10:04AM (#51590999) Homepage
      SSD is the heir apparent, but platter based disk storage will likely provide higher capacity at denser, more affordable prices for quite some time to come. I suspect Google is proposing this altered platter HD design as something that could bridge the gap until SSD reaches an affordability / density point that can catch up / replace conventional platter HD designs.
    • by LWATCDR ( 28044 )

      This is more enterprise than consumer. File systems like ZFS can use SSDs as a cache for spinning platters. On a modern server you may have a system that uses RAM as a traditional disk cache followed by an SSD or array of SSDs as a second cache layer, and then disks as the mass storage.
      It can even be pretty smart, using an ageing system to move files in and out of the SSD cache based on when they were last used; you could even tag some files to always be in the SSD cache and others to never be cached at all.
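
      A minimal sketch of that tiering idea, assuming a plain LRU ageing policy with pin/exclude tags (the class and method names are made up for illustration, not ZFS's actual interface):

```python
from collections import OrderedDict

class SSDCache:
    """Toy second-tier cache: LRU ageing plus pin/never-cache tags.

    Illustrative only -- real systems (e.g. ZFS's L2ARC) use more
    sophisticated replacement policies than plain LRU.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # name -> data, oldest first
        self.pinned = set()            # tagged: always keep on SSD
        self.excluded = set()          # tagged: never cache on SSD

    def access(self, name, data):
        if name in self.excluded:
            return data                # bypass the SSD tier entirely
        if name in self.entries:
            self.entries.move_to_end(name)   # refresh age on a hit
        else:
            self.entries[name] = data
            # Evict the least-recently-used unpinned entry when full.
            while len(self.entries) > self.capacity:
                for victim in self.entries:
                    if victim not in self.pinned:
                        del self.entries[victim]
                        break
                else:
                    break  # everything left is pinned; stop evicting
        return data
```

      In real ZFS the nearest knobs are the `primarycache`/`secondarycache` dataset properties, which are coarser (all/metadata/none) than per-file tags.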

  • by Anne Thwacks ( 531696 ) on Friday February 26, 2016 @10:04AM (#51590997)
    but keep the compatible mounting holes!

    Multi-platter was always a good idea, I assume it stopped in a desperate attempt to cut costs.

    8" hard drives often had 4 or even 8 double sided platters - and SCSI interfaces! Early 5.25" drives often had two, double sided platters. They desperately needed to access more data with less head movement because they had quite low areal bit density and used floppy-derived stepper motors for head positioning!

    • by mrchaotica ( 681592 ) * on Friday February 26, 2016 @10:15AM (#51591061)

      Multi-platter was always a good idea, I assume it stopped in a desperate attempt to cut costs.

      Wait, what? Last time I opened up a dead 3.5" hard drive (which was only a few years ago) it had either three or four platters. Are you saying they typically only have one now?

      But yes, I agree that if they want taller drives, 5 1/4" full height would be a good form factor. Maybe even not with 5" platters! If they want quicker speeds, they could maybe put four separate spindles of the platters from 2.5" drives inside the same box.

      • Actually, I just realized they could do even better: put four spindles of 2.5" platters in a 5.25" case, then put a fifth spindle in the center with the platters vertically offset to interleave with the others!

      • by andphi ( 899406 )

        It depends on the age of the drive, the manufacturer, and capacity, I think. Mostly capacity, probably. Most of the SATA drives I've taken apart recently had only one platter. A few have had two or three.

        • by jandrese ( 485 )
          You are probably taking apart drives that were purchased by cheapskates. What usually happens is you see a manufacturer announce a new drive line with 2TB, 4TB, and 6TB capacities and what they do is sell you a drive with either 1, 2, or 3 platters. If your purchasing guy is looking to cut costs he will only buy the lowest end drive in the line.
      • Parent poster is probably just buying small drives. Economies of scale say it's cheaper to manufacture one platter density and just vary the number of platters. So most 500GB drives are single-platter now (either a 500GB platter or a partially-defective 1TB platter). Most newer drives probably use 1TB platters. So anyone who avoids 3TB drives because they're "unreliable" is missing the point.

    • by jeffb (2.718) ( 1189693 ) on Friday February 26, 2016 @10:20AM (#51591097)

      It sounds like you think that manufacturers have stopped making multi-platter drives. That's not true. Seagate and WD both use seven platters in their highest-capacity (10TB, standard-height) drives [arstechnica.com]. The linked article further states that they use seven platters "instead of the usual six".

      I don't know how prevalent single-platter drives are today, but multi-platter drives certainly haven't disappeared.

    • Cost may have been a driver, but another driver was the lighter and therefore potentially faster head positioner assembly. Lighter positioners allow you to either move them faster or use less power, or some of each.

  • Multiple heads (Score:4, Interesting)

    by chriswaco ( 37809 ) on Friday February 26, 2016 @10:11AM (#51591035)

    Multiple heads on each side of the platter might be a better solution, one for the inner part and one for the outer.

    • I've also been curious as to why HDDs never introduced multiple heads. I've envisioned it as two heads placed on opposite sides. Combined with smart NCQ, it could be quite sweet.
      • Re:Multiple heads (Score:5, Informative)

        by nojayuk ( 567177 ) on Friday February 26, 2016 @10:41AM (#51591213)

        There were SCSI drives with four head actuators, one in each corner of the drive casing. They were treated as four separate drives logically and used to speed up reads on a "first to deliver the requested block" basis. They were horrendously expensive and it turned out to be very difficult to optimise the read process to gain the desired performance boost.
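
        The "first to deliver" trick generalizes to any redundant read path. A hedged sketch of the idea using threads racing over replicas (the replica names and delays are invented for illustration; real drives would race actuators, not sleeps):

```python
import threading
import queue
import time

def read_block(replica, delay, result_q):
    """Simulate one actuator/replica fetching the same block."""
    time.sleep(delay)                  # stand-in for seek + rotational latency
    result_q.put((replica, b"block-data"))

def first_to_deliver(replicas):
    """Issue the same read everywhere; take whichever answers first."""
    result_q = queue.Queue()
    for name, delay in replicas:
        t = threading.Thread(target=read_block,
                             args=(name, delay, result_q), daemon=True)
        t.start()
    return result_q.get()              # blocks until the fastest one lands
```

        The hard part the post alludes to is that the losers' work is wasted, so effective throughput drops even as latency improves.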

      • by Megol ( 3135005 )

        IIRC (it was a while since I last saw an answer to that question) it wouldn't make economic sense: making two HDDs would cost about the same and would perform as well or better. Also, the combined unit would be more sensitive to a head crash.

    • Re:Multiple heads (Score:4, Informative)

      by Andrew Lindh ( 137790 ) on Friday February 26, 2016 @10:34AM (#51591171)

      This has been done before.... Both outside/middle dual heads and dual independent actuators on each side. Multi heads can increase performance, but cost space, power, and money. Also more parts = lower MTBF. They don't increase storage density. If you want performance use SSD.

      http://www.tomshardware.com/ne... [tomshardware.com]

    • I had a Fujitsu Eagle from the 80s which used multiple heads per side. The drive was used on a PDP-11, was 19 inch rack mount, and had a perspex cover, so you could watch the heads seeking when the drive was in use.

    • Why not just use immovable heads, one head per track (heads staggered, of course, since a head is much bigger than a track, but sufficient flux at the center pushes the domains over)?

      • by dfsmith ( 960400 )
        A few years ago, drives were about 40,000 tracks per inch. Each head costs about $2. Any more questions?
  • I have a feeling that in a few years we'll be left with just expensive SSDs and even more expensive "datacenter" drives.

    • Expensive? SSD prices have been dropping like a rock for several years, getting closer to HDD by the month.

    • Just bought a half-terabyte laptop SSD for about $150, which was roughly what a 2.5" magnetic drive with equivalent capacity cost me three or four years ago, and only about twice the current 2.5" magnetic drive cost. I know, I know, still not comparable to 3.5" costs, but it gives you some idea of how quickly the prices are plummeting.
  • by GuB-42 ( 2483988 ) on Friday February 26, 2016 @10:13AM (#51591053)

    There are form factors other than the typical low-profile 3.5".
    In particular there is the "half-size" thickness, which is the thickness of 5.25" bays. It was a rather common form factor for 3.5" SCSI drives.

    • In particular there is the "half-size" thickness, which is the thickness of 5.25" bays. It was a rather common form factor for 3.5" SCSI drives.

      It was popular from the end of the ST-506 era up into the early days of Ultra SCSI. However, the benefit of making a taller drive is being able to stack in more platters, which means you also need more heads and so on. Instead, manufacturers improved areal density so that they could make the disks shorter. Now we're all married to the 3.5"x1" format because of drive sleds and so on. We are, however, free to use 5.25" storage devices of whatever height we want, whether that's 1", half-height, or full-height.

      • Google could build a full-height 5.25" 'sled' that had a logic controller on it and a slide-out tray housing six 'data cubes', each containing platters and heads that could be plugged in or out as they failed or needed upgrades. Replicating the logic 6x over is silly given today's CPUs. Frankly these things need to be SAS for compatibility, but really, just run PCIe to the sled and skip the discrete controller too, to get costs down more.

        I'd buy such things if they were on the market.

  • 2.5" 4X drives (Score:5, Insightful)

    by wren337 ( 182018 ) on Friday February 26, 2016 @10:15AM (#51591065) Homepage
    Surprised they haven't just gone with 2X or 4X height 2.5" drives. Same connectors, same platters, easy retrofit. You just need a different bracket.
    • Re: (Score:3, Interesting)

      I'm surprised that they haven't just done away with the 'hard drive' as is. SSDs are just a bunch of chips. I'm thinking of a 1U server that is just a board populated with chips, a fiber interface and a power supply. Treat the 1U server as a single unit.

      When you start to add up hard drive casing, interface connectors, etc. you end up wasting a lot of space for no reason. For the home user with only 1-2 drives they make sense, but not for someone like Google that may have thousands of drives.

    • Surprised they haven't just gone with 2X or 4X height 2.5" drives. Same connectors, same platters, easy retrofit. You just need a different bracket.

      I'm not (surprised). The case size of a PC tower has been trending steadily downwards for the better part of a decade. There's not room for an additional drive of that size in the common consumer tower anymore.

  • by kschendel ( 644489 ) on Friday February 26, 2016 @10:45AM (#51591225) Homepage

    Taller, more heads, smaller platter, less seek distance -- the logical end point is the drum! I'm sure we can do better than the FH-1782 today.

    Everything old is new again...

    • From a programmer's POV, drums were wonderful. Select an address, then read or write. No cylinder/head/sector calculations. No variable transfer rates. If you needed better "seek" time, you installed multiple sets of read/write heads. Unfortunately, they were bulky and cost a LOT.

  • Horse sense (Score:5, Insightful)

    by TheRealHocusLocus ( 2319802 ) on Friday February 26, 2016 @11:19AM (#51591419)

    The current 3.5" HDD geometry was adopted for historic reasons --- its size inherited from the PC floppy disk.

    The form factor of 3.5" floppy drives was decided during the early planning stage of the Great Data Railroad. You can place exactly 16 3.54" (90mm) bare floppy discs side by side within the standard railroad gauge of 4 feet 8.5 inches. For the original 1982 HP single sided format of ~280kB [wikipedia.org] this yields roughly ~4.5MB along every 3.5" of railroad track, or 137 rows along the floor of a standard 40-foot railroad boxcar without the use of stacking. Thus ~600MB was the capacity of an original single density data railroad car, though it was only ~1mm in height.

    While the floppy disc made data railroads possible, media stacking made them practical. A cylinder of bare floppy media ~10 feet high is roughly 3048 discs, so your standard railroad boxcar held ~1.8TB of floppy storage, in 1982! With an average rail speed of 18mph a single boxcar passes every ~1.5 seconds, which is ~1.2 terabytes per second, or ~9200 gigabits per second! By 1998 floppy media storage density had improved ~714-fold, yielding transfer rates of 6568800Gb/s or ~821 TB/s.
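
    For what it's worth, the arithmetic above checks out to within rounding; a quick replay (decimal units, 1 mm media thickness and 3.5" row spacing, all carried over from the post):

```python
# Replaying the data-railroad boxcar arithmetic from the post above.
DISC_KB = 280                        # 1982 single-sided 3.5" format, ~280 kB
DISCS_PER_ROW = 16                   # side by side across standard gauge
ROWS = int(40 * 12 / 3.5)            # rows of discs along a 40-ft boxcar floor
floor_mb = DISCS_PER_ROW * ROWS * DISC_KB / 1000

STACK_DISCS = int(10 * 12 * 25.4)    # ~10 ft stack of 1 mm discs
boxcar_tb = floor_mb * STACK_DISCS / 1e6

seconds_per_car = 40 / (18 * 5280 / 3600)     # one 40-ft car at 18 mph
gbit_s = boxcar_tb * 8 * 1000 / seconds_per_car
```

    This lands near 1.87 TB per boxcar and roughly 9900 Gb/s; the post's ~9200 figure is the same calculation with coarser rounding of the per-car interval.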

    So why was the floppy data railroad ultimately limited to this 'arbitrary' ~821 TB/s? The northern rail gauge of the US railway is based on the English rail system, which was based on tramways, which used the same jigs used to build wagons, whose wheel base was determined by ancient ruts left by Roman chariots, which were sized to accommodate the width of two horses' asses. As not-quite debunked here [snopes.com].

    So the short story is, any chain of decisions regarding technology leads back to some horse's ass.

  • by Spaham ( 634471 ) on Friday February 26, 2016 @11:25AM (#51591457)

    The research paper is not available. Any pointers?

  • Most of the data within these companies is "cool", meaning it's not actively being accessed. Take for example the massive number of photos within FB. When was the last time you looked at a photo from 6 months ago? If one needs to be accessed frequently, as when it goes viral, then you move the photo from HDD to SSD. Sure, there are 3-4TB SSDs coming, but they're still much more expensive in $/GB than HDDs.

    Also, Google's point isn't so much about $/GB but rather that they don't need as much reliability from the drive.

  • Any new solution would have to maintain backwards compatibility. The new standard would have to be either 3.5" x 2, 3, or 4 bays; or 5.25" x 1, 2, 3, or 4 bays. The industry has 30 years behind existing bay standards; it would take them a long time to change their tooling.

    Personally I thought the Sun Fire X4500 (a/k/a Thumper) was a very efficient way to maximize storage density.

    • No! They can totally change the form factor, as servers are mostly cycled out after five years. Those that hang onto them longer can scrounge for older drives or cut the metal dividers out of their drive cages. We can't let dinosaurs rule the server world for stupid reasons.

  • This is not bounded by reality, but just some back of the napkin types stuff.
    Let's say you have a 3" platter w/ 1TB capacity. And you can get up to 7 in a 1-inch high 3.5" drive.
    That's 7TB.
    The spindle is about 1" in diameter, but from looking at the IBM microdrive, it may be possible to reduce that to 0.33"
    Next let's shrink the platter to 0.75". Because we're talking single speed, the amount of data is proportional to r (instead of r^2). So it's 0.42/2.5 = 0.168 TB.
    The drive is 5.75" deep.

    • Looking at it another way. If you take a microdrive at 1.42" x 1.65" x 0.197", and shrink it to 1" x 1.23" x 0.197" you could fit ~81-84 in the same space as a 3.5" drive. The top end microdrive had an 8GB capacity. The top 3.5" drive at the time had a 750GB vs 10TB now, a 33.3x increase. The size decrease is something like 0.25"/0.67" = 0.37x. Multiplied together, we can expect the stack to have somewhere between 6.5 TB and 8 TB.

  • More platters mean more heads, and more components to fail. That will raise failure rates at a given capacity compared with a similar-capacity 3.5" drive that has a lower platter count.

    Spinning media is still hard to beat on price. Desktop 7200 RPM drives are at $0.03/GB. "Enterprise" 7200 RPM SATA at volume is between $0.03/GB and $0.05/GB. Cheap SSD is around $0.60/GB to $1.20/GB.

    A lot of data is still cold. At volume, this price difference matters a lot.
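
    To put that $/GB spread in fleet terms (100 PB is a hypothetical cold-data footprint; the prices are the ones quoted above):

```python
# Rough fleet-cost illustration at the quoted $/GB figures.
cold_gb = 100 * 1_000_000                      # 100 PB of cold data, decimal units
hdd_cost = (0.03 * cold_gb, 0.05 * cold_gb)    # enterprise SATA range
ssd_cost = (0.60 * cold_gb, 1.20 * cold_gb)    # cheap SSD range
ratio_low = ssd_cost[0] / hdd_cost[1]          # best case for SSD: 12x the cost
ratio_high = ssd_cost[1] / hdd_cost[0]         # worst case: 40x the cost
```

    Even in the best case, SSD media costs roughly an order of magnitude more than disk at this scale, which is the whole argument for keeping cold bytes on spinning media.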

  • I don't understand the logic of sacrificing storage capacity for seek time. You merely end up with an incompetent SSD, and defeat the whole purpose of having an HDD in the first place.

    Wouldn't it make more sense to leverage the real advantage of an HDD and go strictly for capacity, using more intelligent caching or hybrid technology to reduce seek time? You can already fit a lot of platters into the 3.5" format.

  • Storage drum systems had many heads, arranged in a spiral around the drum so there was time at the end of a "ring" to select the next head. (Apparently nobody thought of making a straight line of heads and spiraling the data.) One of the later models of IBM multi-platter disk drives had 2 sets of head arms. All of these are mechanically complex, which is part of the reason for RAID (Redundant Array of Independent Disks) (redundancy of course being the other). Instead of trying to make one disk drive big, use an array of smaller ones.
