AOL Spends $1M On Solid State Memory SAN
Lucas123 writes "AOL recently completed the rollout of a 50TB SAN made entirely of NAND flash to address performance issues with its relational database. While the flash memory fixed the problem, it didn't come cheap, at about four times the cost of a typical Fibre Channel disk array with the same capacity, and it performs at about 250,000 IOPS. One reason the flash SAN is so fast is that it doesn't use a SAS or PCIe backbone, but instead has a proprietary interface that offers 5 to 6Gb/s of throughput. AOL's senior operations architect said the SAN cost about $20 per gigabyte of capacity, or about $1 million. But, as he puts it, 'It's very easy to fall in love with this stuff once you're on it.'"
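The summary's numbers are self-consistent, which is easy to check. A minimal sketch (assuming decimal terabytes, i.e. 1 TB = 1,000 GB, which is how storage vendors usually quote capacity):

```python
# Back-of-the-envelope check of the figures quoted in the summary:
# 50 TB of capacity at $20 per gigabyte.

capacity_gb = 50 * 1000        # 50 TB in GB, assuming decimal (vendor) TB
price_per_gb = 20.0            # dollars per GB, from the summary
total_cost = capacity_gb * price_per_gb

print(f"Total cost: ${total_cost:,.0f}")   # → Total cost: $1,000,000
```

So the "$20 per gigabyte" and "about $1 million" figures line up exactly with the stated 50TB capacity.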
What? (Score:5, Insightful)
As a DBA, I would love to have solid-state storage instead of needing to segment my databases properly and work with the software dev guys to make sure we have reasonable load distribution.
Where can I get someone to pay a million dollars so I can do substandard work?
Re:What? (Score:3, Insightful)
You're the DBA - do what you do best, and start Googling! :)
Re:What? (Score:3, Insightful)
Of course, in the real world, this sort of thing (maybe not at this scale) happens all the time. We just had a customer who was having major performance problems. They demanded we put them on a massive $750,000 whiz-bang SAN device right away to alleviate their problems. So we did, and then their DBAs finally got off their asses, looked at the code, and made some changes that cut their I/O demand in half. Basically, they ended up burning $750,000 on something they didn't even need. I have a feeling AOL just spent $1,000,000 on something they didn't really need as well.
Re:Sas bandwidth constrained??? (Score:5, Insightful)
Now we just need something cheaper than $20/GB
Actually, the price was the most interesting part of this:
at about four times the cost of a typical Fibre Channel disk array with the same capacity
Four times the price and, what, ten times the IOPS? A hundred times? That makes NAND pretty much a no-brainer for any heavy-use database.
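The parent's point can be made concrete as cost per IOPS rather than cost per gigabyte. A rough sketch, where the flash cost and IOPS come from the summary, but the disk-array IOPS figure is an assumption (a 15k RPM FC drive delivers very roughly 180 random IOPS, so a large array might reach a few tens of thousands):

```python
# Cost per IOPS: flash vs. a hypothetical FC disk array.
flash_cost = 1_000_000         # dollars, from the summary
flash_iops = 250_000           # from the summary

disk_cost = flash_cost / 4     # summary: flash was ~4x the disk-array price
disk_iops = 25_000             # assumed: ~140 drives x ~180 IOPS each

flash_cpi = flash_cost / flash_iops   # $4.00 per IOPS
disk_cpi = disk_cost / disk_iops      # $10.00 per IOPS

print(f"Flash: ${flash_cpi:.2f}/IOPS, Disk: ${disk_cpi:.2f}/IOPS")
```

Under those assumptions the flash SAN is 2.5x cheaper per IOPS despite being 4x more expensive per gigabyte, which is exactly why it wins for IOPS-bound workloads.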
RAID 5? (Score:3, Insightful)
They wanted performance and went *RAID 5*? That pretty much sums up the entire approach. Let's not optimise the application first and the database second, but instead hide the problem by throwing hardware at it. Then we'll use a RAID configuration that hobbles the write performance of the arrays, and let's not mention what happens to performance when we lose a disk (don't say it won't happen).
Sure, RAID 5 is the answer to some things, but not when the question is database *PERFORMANCE*.
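The write-performance hobbling the parent describes is the classic RAID write penalty: a small random write on RAID 5 costs four back-end I/Os (read old data, read old parity, write new data, write new parity), versus two for RAID 10. A minimal sketch with a hypothetical raw IOPS figure:

```python
# Back-end I/Os consumed per small random write, per RAID level.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def effective_write_iops(raw_iops, level):
    """Usable random-write IOPS after the RAID write penalty."""
    return raw_iops / WRITE_PENALTY[level]

raw = 100_000  # hypothetical raw back-end IOPS for the array
print(effective_write_iops(raw, "raid5"))   # → 25000.0
print(effective_write_iops(raw, "raid10"))  # → 50000.0
```

Same hardware, half the usable write IOPS on RAID 5, before even counting the rebuild load when a drive fails.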
Also - latency is more important than IOP/s. I don't care how many IOP/s you can do; if your latency is high, the performance won't be. Most garden-variety storage engineers don't seem to grasp this concept.
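Little's law makes the parent's point precise: throughput = outstanding requests / latency, so a big IOPS number can simply mean a deep queue rather than fast responses. A sketch with hypothetical numbers (none of these appear in the article):

```python
# Little's law: IOPS = queue_depth / latency.
def iops(queue_depth, latency_s):
    """Throughput sustained by queue_depth outstanding requests."""
    return queue_depth / latency_s

# The same ~250,000 IOPS can hide wildly different latencies:
fast = iops(queue_depth=25, latency_s=0.0001)   # 100 us per request
slow = iops(queue_depth=2500, latency_s=0.01)   # 10 ms per request
print(fast, slow)  # both are ~250,000 IOPS
```

A single database query waiting on a dependent read chain sees the 100-microsecond array as 100x faster, even though the spec sheets quote the same IOPS.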
Re:What? (Score:2, Insightful)
Certainly the failure of an entire infrastructure after the failure of a single drive is the fault of the drive manufacturer. Spinning disks never fail?