Google Proposes New Hard Drive Format For Data Centers (thestack.com)
An anonymous reader writes: In a new research paper, the VP of Infrastructure at Google argues for hard drive manufacturers and data center provisioners to consider revising the current 3.5" form factor in favour of taller, multi-platter form factors - with the possibility of combining the new format with HDDs of smaller circumference, which hold less data but have better seek times. Eric Brewer, also a professor at UC Berkeley, writes: "The current 3.5" HDD geometry was adopted for historic reasons - its size inherited from the PC floppy disk. An alternative form factor should yield a better TCO overall. Changing the form factor is a long-term process that requires a broad discussion, but we believe it should be considered."
Form Factor not "Format" (Score:5, Insightful)
Re:Form Factor not "Format" (Score:5, Interesting)
The world will probably keep using spinning rust until the purchase price (not TCO) of SSDs is lower. I wouldn't be surprised if makers went back to 5.25" half-height and low spindle speeds. That would still permit high throughput with high density, but seeks would be slower - not a big deal with enough caching in front of them, and/or with enough disks in an array. As SSDs approach HDD prices, they will take over more of the workloads that actually have to be fast anyway.
Re: (Score:3)
I think you're right, but I think it only matters for organizations operating at spreadsheet-analysis scale, where the potential savings are only realized across many thousands of disks in extremely customized environments.
It also wouldn't surprise me if this was being floated by Google to induce hard disk makers to leverage their existing manufacturing base to mass-produce something that only a very small number of customers are likely to have any interest in. Hard disks ar
Re: (Score:3)
Re: (Score:2)
I could see some of the larger SAN vendors getting behind this, if only as a way to keep customers paying top dollar for SSD and tiering features. They would gain an additional way of charging more for less (super magic form factor high performance high density hard disks that only work in our custom enclosures..).
My guess, though, is that they're probably going to see some of their business erode from SSD-only vendors whose products will provide better performance at less cost because they can eliminate s
Re: (Score:2)
Re:Form Factor not "Format" (Score:4, Interesting)
You could very well be right. Speaking of oddball heights, the first 500 *MB* drive I bought (back when the main network drive was 120MB) cost $1000, and it was actually a 3 1/2" double-height size, meaning the bay next to it had to be clear before I could install it. It wasn't a problem since I was simply installing it in a workstation. This obviously wouldn't work for Google, since I'm certain they use computers with front-mounted hot-swappable 3 1/2" drive bays all neatly packed together - I've seen how nicely these work with my Synology 5-bay NAS. Unless a new form factor becomes standardized, you can't really hack in a solution... at least not on the scale Google is dealing with.
I don't think Google is going to get its way here with a new standardized size, at least at mass-adoption scale. Inertia is pretty damn hard to overcome, even if potentially superior solutions exist. I mean, the US is still using imperial measurements, for heaven's sake. The fact that we still measure these as 3 1/2" drives should tell you something about how hard it is to change standards.
Re: (Score:2)
I guess you call your computer chips molten sand?
Re:Form Factor not "Format" (Score:5, Insightful)
That's odd, because my laptop's SSD is four years old and still has plenty of usable life left - and it's from a middle-line vendor, from the early SATA3 days, so it's not even a particularly good SSD. The hard drive in the same laptop (dual-bay) is actually reporting as closer to failure. Maybe that's because it's a laptop, so it suffers more vibration and temperature variation, which is harder on hard drives than solid-state.
And the rest of your bitching seems to be based more on shoddy cloud hosts than SSDs, or on badly-configured servers. "SSDs are too fast, they bring down the entire system by filling up RAM"... wouldn't that be true of hard drives as well, IF they could transfer data that quickly?
Re: (Score:3)
SSDs have been in data centers for years now. They are more reliable than HDDs. That wasn't always true, but it is now. You don't use them for "low write/high-bandwidth" workloads; you use them for high-IOPS/random access like database indexes, or the entire database if you can cost-justify it. I have never seen a Linux server lock up when copying files to or from it, and we have thousands of Linux servers; you might need to tune something if that is happening. We have multi-tiered storage with 7200/10000/flash drives. The do
Re: (Score:3)
Old SSDs died quickly under DB loads - not enough write count in their lifetime. New ones are better, but still won't last as long as HDDs. This is only going to improve over time, though, and at the right price, who cares about a 2 vs. 4 year lifetime?
The HDDs you have to use for high-IOPS DB loads are damned expensive in the first place: it's the last domain of big-box storage (think 10x consumer drive cost, 100x with fancy replication software built in).
Google doesn't use that big-box crap of course, but I'
Re: (Score:3, Interesting)
Old SSDs died quickly under DB loads - not enough write count in their lifetime. New ones are better, but still won't last as long as HDDs.
Let's take a decent 15k drive: even ignoring seek times, the rotational latency limits us to about 500 IOPS.
Saturating that with 4k writes 24/7 for a year comes to about 63.12 TB written.
What's the write rating on a 200GB Intel DC P3700? 3737.6 TB.
Do you honestly expect that HD to survive for nearly 60 years?
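To sanity-check the arithmetic above: a minimal back-of-the-envelope sketch, assuming 4 kB means 4,000 bytes and a 365.25-day year (which is how the 63.12 TB figure falls out).

```python
# Back-of-the-envelope check of the endurance comparison above.
# Assumptions: a 15k drive saturated at 500 random 4 kB writes/s, 24/7;
# 4 kB taken as 4,000 bytes; 365.25-day year.
IOPS = 500
WRITE_BYTES = 4_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600

tb_per_year = IOPS * WRITE_BYTES * SECONDS_PER_YEAR / 1e12
print(f"Saturated HDD: ~{tb_per_year:.2f} TB written/year")      # ~63.12 TB

SSD_RATING_TB = 3737.6  # the endurance rating cited above
print(f"Years to reach that rating: ~{SSD_RATING_TB / tb_per_year:.0f}")  # ~59 years
```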
Re: (Score:2)
And what's the price difference?
Intel DC 3700-series SSDs, while good, are really expensive. A 100TB server made from them would be really expensive, and there is no point in that when a big RAID10 array of hard drives with 1TB of SSD caching (using the 3700 series) is much cheaper - especially if the server itself is used to store video files.
Re: (Score:2)
For high IOPS big-box load, you don't use 1 TB drives though. You use ~500GB 15K SAS drives, and maybe use half the space, because it's all about spindle count.
Enterprise-quality SSDs are already cheaper than big-box storage, but that doesn't really help - you can't replace a FC/iSCSI-attached storage array with some local drives. There are a variety of competing "off-brand" SSD storage arrays, but EMC has really good salesmen and really good FUD.
SSDs will conquer when big, less-technical companies dare t
Re: (Score:2)
It all depends on the usage scenario of the storage. For a database, random access is very important. For a big VOD server it's not as important, especially if the cache is big enough. Even 5400RPM 6TB drives are good enough for that (a 34-drive RAID10 array of them, that is), as the access is not done in 4K blocks but in 128K-1M blocks or even bigger.
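A rough service-time model shows why request size dominates here. The three constants below are illustrative assumptions (roughly plausible for a 5400 RPM drive: ~12 ms average seek, ~5.6 ms rotational latency, ~150 MB/s sequential rate), not measurements of any particular model.

```python
# Effective per-drive throughput vs. request size for a slow spinning drive.
# All three constants are illustrative assumptions, not vendor specs.
SEEK_S = 0.012          # ~12 ms average seek
ROT_LATENCY_S = 0.0056  # half a revolution at 5400 RPM
SEQ_RATE_BPS = 150e6    # ~150 MB/s sustained sequential transfer

for block in (4_096, 131_072, 1_048_576):
    service_time = SEEK_S + ROT_LATENCY_S + block / SEQ_RATE_BPS
    print(f"{block // 1024:>5} KiB requests: ~{block / service_time / 1e6:6.1f} MB/s")
# 4 KiB:   ~0.2 MB/s (all positioning, no transfer)
# 128 KiB: ~7 MB/s
# 1 MiB:   ~43 MB/s  (positioning cost amortized)
```

Multiply the 1 MiB figure across a 34-drive array and streaming video is easy; the same array doing 4K random I/O would be hopeless.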
Re: (Score:3)
Funny... I use an SSD for VMs and swap, and although it's thrashed heavily when I fire up all the VMs at once, I've lost no capacity and no errors have been reported. Reliability has been so solid I'm thinking of replacing the spindles in my server with SSDs in a RAID5 or RAID6 array.
There are occasional lemon SSD model runs... but that's true of all hard drive manufacturers as well.
Furthermore, empirical evidence from analyses of quite a few data centers indicates that SSDs are more durable than hard
Re: (Score:2)
Part of the issue is that the current form factor is wasteful of space just by itself, regardless of speed or storage. Stack more disks, add a second interface, and you can get more platters into the same amount of space and probably with less power.
Re: Form Factor not "Format" (Score:2)
First thing that came to mind. And that didn't end well!
Re: (Score:3)
The data became blurry?
Re: (Score:2)
It didn't end well because they were designed to be ultra-low-cost. The goal in this case would be higher reliability and lower power consumption as compared to drives with higher spindle speeds.
Re: (Score:2)
That's because the drives were designed to be low-cost consumer devices. There were 5.25" full-height SCSI drives for a while before and after the Bigfoot. I have one such drive - a 1.2GB drive made in 1992. Still works.
What's old is new... (Score:3)
Multiple r/w heads aren't a new concept. Some of the really old drives had them, and in fact the very first magnetic recording "disks" had an r/w head per track. I think in the trade-off of more heads versus faster spinning, faster spinning won out.
It seems that there should be a market for more platters in a slightly different form factor.
Re: (Score:3)
Um, it hasn't been "rust" in decades. That's as condescending as saying SSDs are lumps of sand...
Poetic license. Besides, spinning is the really relevant part.
Re: (Score:3)
So, if it isn't iron anymore, what are the magnetic domains made from?
Re: (Score:3)
Rust is iron oxide, which is chemically different than metallic iron. And cobalt is one of many ferromagnetic materials besides iron.
Re: (Score:2)
> It would still permit large throughput with high density, but seeks would be slower.
If you read the article, that is the exact opposite of what they want.
Does anyone else agree? SSD is cheap enough now to use it for caching in front of HDDs, even if you use it nowhere else. Google has special requirements which involve minimum cost, because their architecture depends on multitudes of nodes.
Re:Form Factor not "Format" (Score:5, Insightful)
Re: (Score:2)
You can? Where? Biggest I can find in 2.5" is 2TB, so where are you finding 16TB SSD drives?
Re: (Score:3)
This is more enterprise than consumer. File systems like ZFS can use SSDs as a cache for spinning platters. On a modern server you may have a system that uses RAM as a traditional disk cache followed by an SSD or array of SSDs as a second cache layer, and then disks as the mass storage.
It can even be pretty smart, using an ageing system to move files in and out of the SSD cache based on when they were used last; you could even tag some files to always be in the SSD cache and others to never be in the
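For illustration, a minimal sketch of that aging idea - a hypothetical policy written for this comment; ZFS's actual ARC/L2ARC logic is considerably more sophisticated:

```python
import time

# Hypothetical age-based SSD tiering policy, as described above.
class SSDCacheTier:
    def __init__(self, max_age_s=3600):
        self.max_age_s = max_age_s
        self.last_access = {}      # file path -> last access timestamp
        self.always_cache = set()  # tagged: pin to the SSD tier
        self.never_cache = set()   # tagged: spinning disk only

    def touch(self, path):
        self.last_access[path] = time.time()

    def should_cache(self, path):
        """Decide whether a file belongs on the SSD tier right now."""
        if path in self.never_cache:
            return False
        if path in self.always_cache:
            return True
        age = time.time() - self.last_access.get(path, 0.0)
        return age <= self.max_age_s
```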
Go for it! Bring back full height 5 1/4" drives (Score:3)
Multi-platter was always a good idea, I assume it stopped in a desperate attempt to cut costs.
8" hard drives often had 4 or even 8 double sided platters - and SCSI interfaces! Early 5.25" drives often had two, double sided platters. They desperately needed to access more data with less head movement because they had quite low areal bit density and used floppy-derived stepper motors for head positioning!
Re:Go for it! Bring back full height 5 1/4" drives (Score:5, Interesting)
Wait, what? Last time I opened up a dead 3.5" hard drive (which was only a few years ago) it had either three or four platters. Are you saying they typically only have one now?
But yes, I agree that if they want taller drives, 5 1/4" full height would be a good form factor. Maybe not even with 5" platters! If they want quicker speeds, they could put four separate spindles of the platters from 2.5" drives inside the same box.
Re: (Score:2)
Actually, I just realized they could do even better: put four spindles of 2.5" platters in a 5.25" case, then put a fifth spindle in the center with the platters vertically offset to interleave with the others!
Re: (Score:2)
Re: (Score:2)
Nowhere, because I'm stupid and didn't think of that problem until after hitting "submit."
Re: (Score:2)
It depends on the age of the drive, the manufacturer, and capacity, I think. Mostly capacity, probably. Most of the SATA drives I've taken apart recently had only one platter. A few have had two or three.
Re: (Score:2)
Re: (Score:2)
Parent poster is probably just buying small drives. Economies of scale say it's cheaper to manufacture one platter density and just vary the number of platters. So most 500GB drives are single-platter now (either a 500GB platter or a partially defective 1TB platter), and most newer drives probably use 1TB platters. So anyone who avoids 3TB drives because they're "unreliable" is missing the point.
Re:Go for it! Bring back full height 5 1/4" drives (Score:4, Interesting)
It sounds like you think that manufacturers have stopped making multi-platter drives. That's not true. Seagate and WD both use seven platters in their highest-capacity (10TB, standard-height) drives [arstechnica.com]. The linked article further states that they use seven platters "instead of the usual six".
I don't know how prevalent single-platter drives are today, but multi-platter drives certainly haven't disappeared.
Re: (Score:2)
Cost may have been a driver, but another driver was the lighter and therefore potentially faster head positioner assembly. Lighter positioners allow you to either move them faster or use less power, or some of each.
Multiple heads (Score:4, Interesting)
Multiple heads on each side of the platter might be a better solution, one for the inner part and one for the outer.
Re: (Score:2)
Re:Multiple heads (Score:5, Informative)
There were SCSI drives with four head actuators, one in each corner of the drive casing. They were treated as four separate drives logically and used to speed up reads on a "first to deliver the requested block" basis. They were horrendously expensive, and it turned out to be very difficult to optimise the read process to gain the desired performance boost.
Re: (Score:2)
IIRC (it's been a while since I last saw an answer to that question), it wouldn't make economic sense, as making two HDDs would cost about the same and would perform as well or better. Also, the combined unit would be more sensitive to a head crash.
Re: (Score:2)
This wouldn't just allow four times the I/O; it would allow four different threads to write at the same time.
Or allow writes to not block reads.
Re: (Score:2)
Re: (Score:2)
There was a drive that had two actuators that could each access the entire platter. That was the design, but I think in the end the complexity of multiple heads accessing the same sector was problematic because of the ordering of operations (i.e., one head could write data to a sector that the other head was reading; if you didn't catch this, you would corrupt the data).
Plus, double the heads doubles the chance of a head crash.
It seems like NCQ, write and read caches (sometimes with flash hybrid modes), etc., in current drives bring enough complexity that additional physical heads would also be reasonable to implement. The abstraction in the drive firmware is much thicker these days.
Re:Multiple heads (Score:4, Informative)
This has been done before... both outside/middle dual heads and dual independent actuators on each side. Multiple heads can increase performance, but they cost space, power, and money. Also, more parts = lower MTBF. They don't increase storage density. If you want performance, use SSD.
http://www.tomshardware.com/ne... [tomshardware.com]
Re: (Score:2)
I had a Fujitsu Eagle from the 80s which used multiple heads per side. The drive was used on a PDP-11, was 19 inch rack mount, and had a perspex cover, so you could watch the heads seeking when the drive was in use.
Re: (Score:2)
Why not just use immovable heads, one head per track? (Heads staggered, of course, since a head is much bigger than a track, but sufficient flux at the center pushes the domains over.)
Re: (Score:2)
So how will this limit desktop users? (Score:2)
I have a feeling that in a few years we'll be left with just expensive SSDs and even more expensive "datacenter" drives.
Re: (Score:2)
Expensive? SSD prices have been dropping like a rock for several years, getting closer to HDD by the month.
Re: (Score:2)
Re: (Score:2)
Why no use existing form factors? (Score:3)
There are other form factors other than the typical low profile 3.5".
In particular there is the "half-height" thickness, which is the thickness of 5.25" bays. It was a rather common form factor for 3.5" SCSI drives.
Re: (Score:2)
In particular there is the "half-height" thickness, which is the thickness of 5.25" bays. It was a rather common form factor for 3.5" SCSI drives.
It was popular from the end of the ST-506 era up into the early days of ultra SCSI. However, the benefit of making a taller drive is being able to stack in more platters, which means you also need more heads and so on. Instead, they improved areal density, so that they could make the disks shorter. Now we're all married to the 3.5x1" format because of drive sleds and so on. We are, however, free to use 5.25" storage devices of whatever height we want, whether that's 1", half-height, or full-height. Those ar
Re: Why no use existing form factors? (Score:2)
Google could build a full-height 5.25" 'sled' with a logic controller on it and a slide-out tray housing six 'data cubes', each containing platters and heads, that could be plugged in or out as they failed or needed upgrades. Replicating the logic 6x over is silly given today's CPUs. Frankly, these things should be SAS for compatibility, but really, just run PCIe to the sled and skip the discrete controller too, to get costs down further.
I'd buy such things if they were on the market.
2.5" 4X drives (Score:5, Insightful)
Re: (Score:3, Interesting)
I'm surprised that they haven't just done away with the 'hard drive' as-is. SSDs are just a bunch of chips. I'm thinking of a 1U server that is just a board populated with chips, a fiber interface, and a power supply. Treat the 1U server as a single unit.
When you start to add up hard drive casing, interface connectors, etc you end up wasting a lot of space for no reason. For the home user that only has 1-2 drives they make sense but for someone like Google that may have thousands of drives just jump up to the
Re: (Score:2)
I'm not (surprised). The case size of a PC tower has been trending steadily downwards for the better part of a decade. There's not room for an additional drive of that size in the common consumer tower anymore.
Drums! (Score:3)
Taller, more heads, smaller platter, less seek distance -- the logical end point is the drum! I'm sure we can do better than the FH-1782 today.
Everything old is new again...
Re: (Score:3)
From a programmer's POV, drums were wonderful. Select an address, then read or write. No cylinder/head/sector calculations. No variable transfer rates. If you needed better "seek" time, you installed multiple sets of read/write heads. Unfortunately, they were bulky and cost a LOT.
Horse sense (Score:5, Insightful)
The current 3.5" HDD geometry was adopted for historic reasons --- its size inherited from the PC floppy disk.
The form factor of 3.5" floppy drives was decided during the early planning stage of the Great Data Railroad. You can place exactly 16 3.54" (90mm) bare floppy discs side by side within the standard railroad gauge of 4 feet 8.5 inches. For the original 1982 HP single-sided format of ~280kB [wikipedia.org], this yields roughly ~4.5MB along every 3.5" of railroad track, or 137 rows along the floor of a standard 40-foot railroad boxcar without the use of stacking. Thus ~600MB was the capacity of an original single-density data railroad car, though it was only ~1mm in height.
While the floppy disc made data railroads possible, media stacking made them practical. A cylinder of bare floppy media ~10 feet high is roughly 3048 discs, so your standard railroad boxcar held ~1.8TB of floppy storage - in 1982! With an average rail speed of 18mph, a single boxcar passes every ~1.5 seconds, which is ~1.2 terabytes or ~9200 gigabits per second! By 1998, floppy media storage density had improved ~714-fold, yielding transfer rates of ~6,568,800Gb/s, or ~821 TB/s. (Figures audited in the sketch below.)
So why was the floppy data railroad ultimately limited to this 'arbitrary' ~821 TB/s? The rail gauge of US railways is based on the English rail system, which was based on tramways, which used the same jigs used to build wagons, whose wheel base was determined by the ancient ruts left by Roman chariots, which were sized to accommodate the width of two horses' asses. As not-quite debunked here [snopes.com].
So the short story is, any chain of decisions regarding technology leads back to some horse's ass.
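The post's arithmetic largely checks out; a quick sketch using only the figures given above (the small deviations from the quoted ~9200 Gb/s and ~821 TB/s come from rounding in the original):

```python
# Auditing the Great Data Railroad figures above (all inputs from the post).
DISC_KB = 280    # ~280 kB per 1982 HP single-sided disc
PER_ROW = 16     # discs side by side across standard gauge
ROWS = 137       # rows along a 40-foot boxcar floor
STACK = 3048     # ~1 mm discs in a ~10-foot cylinder

boxcar_tb = DISC_KB * 1000 * PER_ROW * ROWS * STACK / 1e12
print(f"1982 boxcar: ~{boxcar_tb:.2f} TB")               # ~1.87 TB

ft_per_s = 18 * 5280 / 3600                              # 18 mph in ft/s
car_interval_s = 40 / ft_per_s                           # ~1.5 s per boxcar
gb_per_s = boxcar_tb * 8000 / car_interval_s
print(f"Throughput: ~{gb_per_s:.0f} Gb/s")               # ~9900 Gb/s
print(f"1998, 714x density: ~{gb_per_s * 714 / 8000:.0f} TB/s")  # ~880 TB/s
```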
Re: (Score:2)
<slow clap>
file not found (Score:3)
The research paper is not available. Any pointers?
The majority of data is "cool" (Score:2)
Most of the data within these companies is "cool," meaning it's not actively being accessed. Take, for example, the massive number of photos within FB. When was the last time you looked at a photo from 6 months ago? If a photo needs to be accessed frequently - say, it goes viral - then you move it from HDD to SSD. Sure, there are 3-4TB SSDs coming, but they're still much more expensive in $/GB than HDDs.
Also, Google's point isn't so much about $/GB but rather that they don't need as much reliability of the drive
Must maintain backwards compatibility (Score:2)
Any new solution would have to maintain backwards compatibility. The new standard would have to be either 3.5" x 2, 3, or 4 bays, or 5.25" x 1, 2, 3, or 4 bays. The industry has 30 years behind the existing bay standards; it would take a long time to change the tooling.
Personally I thought the Sun Fire X4500 (a/k/a thumper) was a very efficient way to maximize storage density
Re: (Score:2)
No! They can totally change the form factor, as servers are mostly cycled out after five years. Those that hang onto them longer can scrounge for older drives or cut the metal dividers out of their drive cages. Can't let dinosaurs rule the server world for stupid reasons.
Sideways (Score:2)
This is not bounded by reality, but just some back of the napkin types stuff.
Let's say you have a 3" platter w/ 1TB capacity. And you can get up to 7 in a 1-inch high 3.5" drive.
That's 7TB.
The spindle is about 1" in diameter, but from looking at the IBM microdrive, it may be possible to reduce that to 0.33"
Next let's shrink the platter to 0.75". Because we're talking a single data rate (the same number of bits on every track), the amount of data is proportional to the usable radial extent r (instead of r^2). So it's 0.42/2.5 x 1TB = 0.168 TB.
The drive is 5.75" deep. Assuming 1.75" fo
Re: (Score:2)
Looking at it another way: if you take a Microdrive at 1.42" x 1.65" x 0.197" and shrink it to 1" x 1.23" x 0.197", you could fit ~81-84 in the same space as a 3.5" drive. The top-end Microdrive had an 8GB capacity. The top 3.5" drive at the time held 750GB, vs. 10TB now - a ~13.3x increase. The size decrease is something like 0.25"/0.67" = 0.37x. Multiplied together, we can expect the stack to hold somewhere around 3.2-3.3 TB.
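Recomputing with the post's own inputs (the 0.25"/0.67" shrink factor is taken as given; nothing here is a measured figure):

```python
# Microdrive-stacking estimate, using the post's inputs and the
# 10 TB / 750 GB density ratio.
MICRODRIVE_GB = 8             # top-end 1" Microdrive of its day
DENSITY_SCALE = 10_000 / 750  # ~13.3x areal-density growth since then
SIZE_SCALE = 0.25 / 0.67      # the post's shrink factor, ~0.37x

per_unit_gb = MICRODRIVE_GB * DENSITY_SCALE * SIZE_SCALE  # ~39.8 GB each
for units in (81, 84):
    print(f"{units} units: ~{per_unit_gb * units / 1000:.1f} TB")  # ~3.2-3.3 TB
```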
More platters = higher failure rates (Score:2)
More platters mean more heads, and more components to fail. This will increase failure rates at a given capacity over a similar-capacity 3.5" drive with a lower platter count.
Spinning media is still hard to beat on price. Desktop 7200 RPM drives are at $0.03/GB. "Enterprise" 7200 RPM SATA at volume is between $0.03/GB and $0.05/GB. Cheap SSD is around $0.60/GB to $1.20/GB (worked out at scale in the sketch below).
A lot of data is still cold. At volume, this price difference matters a lot.
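What those $/GB figures mean at fleet scale, as a trivial sketch (prices from the post, nothing else assumed):

```python
# Cost per petabyte at the $/GB figures quoted above.
PB_IN_GB = 1_000_000
tiers = {
    "Desktop 7200 RPM":     0.03,
    "Enterprise 7200 SATA": 0.05,  # top of the quoted range
    "Cheap SSD":            0.60,
    "Pricier SSD":          1.20,
}
for name, per_gb in tiers.items():
    print(f"{name:<22} ${per_gb * PB_IN_GB / 1000:>7,.0f}k per PB")
# $30k-$50k/PB on disk vs $600k-$1,200k/PB on flash: for cold data
# at volume, that ~20x gap is the whole ballgame.
```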
Turning HDDs into SSDs? (Score:2)
I don't understand the logic of sacrificing storage capacity for seek time. In that case, you merely end up with an incompetent SSD and defeat the whole purpose of having an HDD in the first place.
Wouldn't it make more sense to leverage the whole advantage of an HDD and go strictly for capacity, and use more intelligent caching or more hybrid technology to reduce seek time? You can already fit a lot of platters into the 3.5" format, and stuffing more hardware into a single enclosure will probably result in
Bring back Univac Drum Storage & IBM 3330! (Score:2)
Re:Too late? (Score:5, Interesting)
I just wonder if, by the time they agree on this (if they do) the price of SSDs will have dropped enough so that they can be used instead? Storage-wise they are already there, and then some.
The point is to keep spinning platters cost-competitive with SSDs - a taller, smaller-diameter form factor would increase performance and reduce TCO. I'm thinking they're looking at something like lots of 1.8" platters stacked 4" high: they can spin faster, have faster seek times, and package multiple TB per unit, and I think the longer single bearing should be a more favorable geometry than the ultra-thin notebook-compatible drives that have been developed over the last 10 years. It will be slower than SSD, but the power performance (which is the key to TCO) should remain competitive with SSDs for a long time to come. Also, presumably, if this takes off it would be datacenter-focused, so longevity (again, the TCO focus) should also be "baked into" the design in favor of a lower retail price.
Re: (Score:2, Interesting)
The power performance will NEVER COME CLOSE to being as low as an SSD.
Even my first-gen SSDs use far less power than a regular laptop drive. Taller drive geometry = more power to spin the spindle. You do know what an INDUCTIVE LOAD is, right? If not, protip for you: the amount of power you use to spin up those platters alone is all the power I need to find and transfer data from my SSD. And that's done before your drive heads even begin moving!
More platters means less reliable? (Score:2)
I would think that adding additional platters would greatly lower the mean time between failures on these drives.
The disk's spindle motor and actuator are shared across platters, but the media and read/write heads are per-platter. With many small platters your seek times would go down, but the odds of a head crash or media failure would be greatly increased.
Re: (Score:2)
Google$ - the economic case for making almost anything.
Re: (Score:2)
Re: (Score:2)
HTTP protocol improvement was on a slow track until Google introduced SPDY, and then that was used as the framework for HTTP/2. Note that Google didn't get everything it wanted--SPDY requires encryption, while HTTP/2 technically does not (though no major client is implementing it without requiring encryption). It also took input from many industry sources; Google may have been the motivator, but was not remotely the final decider.
I'm not sure about the claim behind TLSv1.3, but so what if Google got it st
Re: (Score:2)
Re: (Score:2)
" Plus you don't need SSD to get LEED certification."
No, but every single power-saving and low-impact device you use brings you closer to attaining LEED certification. Hard drives use platinum (not an easy thing to mine and refine, speaking from personal experience owning two mines here in SoCal); you're ripping the shit out of the environment to get it in most cases.
"The cost of manufacturing a similar size SSD as a typical datacenter size drive is much much much more."
Uhh, what? Do you even know the mat
Re: (Score:2)
Re: (Score:2)
Incorrect is the polite way to express it. Droolingly clueless is a better description of the level of ineptitude displayed.
Re: (Score:2)
Re:Eric Brewer = Moron (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
"SSDs do not solve any problems for data centers"
This is patently incorrect, and the power bills alone prove it. Go ask any datacenter that switched over to SSD. There's a reason my box [imgur.com] uses SSDs (a lower requirement for cooling the TWELVE GPUs INSIDE).
I design these kinds of systems for a living, among other things like owning a mine and mining gems and minerals, designing semiconductors, and much, much more.
"There is more involved in managing data center performance than simple access times or temps."
Most of
Re: (Score:2)
If you have 12 GPUs in a box, I fail to see how using an SSD or HDD would make any noticeable difference in power and heat...
Re: (Score:2)
Re: (Score:2)
$10k for ~12TB max of storage (if you used RAID0). I paid ~$800 for 6x 3TB WD Reds (used in RAID6), getting the same 12TB. According to WD, the drives use 4.1W each when being accessed. That's ~25W total. Let's say the PSU is 80% efficient, so that's ~31W from the wall. Electricity costs $0.12/kWh, so for the $9200 I saved by buying HDDs, I can get 76MWh of electricity, which is enough to run these drives for over 270 years.
Hell, for that money, I can pay all my electricity bills for 3 years.
270 years unti
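The payback math holds up; a minimal sketch using only the figures from the post above:

```python
# Verifying the HDD-vs-SSD payback arithmetic above (figures from the post).
DRIVES = 6
WATTS_EACH = 4.1     # WD Red, active, per the spec cited in the post
PSU_EFF = 0.80
SAVINGS = 9200.0     # $10k SSD build minus ~$800 for six 3TB Reds
KWH_COST = 0.12

wall_w = DRIVES * WATTS_EACH / PSU_EFF             # ~31 W at the wall
kwh_budget = SAVINGS / KWH_COST                    # ~76,667 kWh (~76 MWh)
years = kwh_budget / (wall_w / 1000) / (24 * 365)
print(f"~{wall_w:.0f} W; savings run the array for ~{years:.0f} years")  # ~285
```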
Re: (Score:2)
the cost of manufacturing an SSD is about 25% that of manufacturing a platter HDD
Really? I think if that were anywhere near true, it would be reflected in the cost of SSDs. Do tell, where can I buy a 4TB SSD for $30?
The disk drive market is pretty competitive. I tend to think that if SSDs cost 25% of an HDD to make, they'd be selling for a lot less than they are. And with Google's buying power, probably even less for them.
Re: (Score:2)
"Really? I think if that were anywhere near true it would be reflected in the cost of SSDs."
SSDs command a price premium because of (a) marketing, (b) actual performance, and (c) perceived 'new technology' in the general mass market, so prices remain high. This is basic Economics 101, man.
Re: (Score:2)
A 1TB SSD sells for about $250. The flash ICs used in such a drive cost a total of $500 when bought in quantities of 1000. Even with huge quantity discounts, that doesn't leave much profit margin.
The competing hard drive sells for $50.
The idea that a 1TB SSD can be manufactured for 25% of $50 = $12.50 is beyond absurd.
Re: (Score:2)
Since he keeps emphasizing how hard drives are made with expensive and environmentally-hostile platinum, perhaps he's just comparing the cost per kilo of platinum to the cost per kilo of sand. After all, that's what flash memory is made with, right?
Re: (Score:2)
Probably around the same as you could store in a 1U enclosure filled with 2.5" drives. Okay, a bit less, but at least one could seek in reasonable time.
Re: (Score:2)
It's true, Google are renowned for hiring only morons. I'm told that at the interview people are asked stuff like, is MongoDB web scale? And I'm sure they only promote the total chumps to VP. All he had to do was post an "Ask Slashdot" and you'd have no doubt politely schooled him. What a wankpuffin he must be!
Re: (Score:2)
I've worked for Google. They used to have a serious vetting process. Now it's more of who you know than what you know.
Re: (Score:2)
Oh tell me, wise one: how many years of experience in HDD manufacturing do you have?
Calling one of the most advanced technologies in this world "spinning rust" tells me you have zero knowledge, which would make you a loud-mouthed know-it-all asshole. The lack of insight that HDDs are a leading storage technology with several advantages (including economics) over SSDs just reinforces that.
But if your clueless rant makes you feel better that you haven't (and will not) reach a comparable position, then I guess it is w
Re: (Score:2)
Re: (Score:2)
I thought it was a HOSTS file with a built-in ad server.