Costly SSDs Worth It, Users Say
Lucas123 writes "When you're paying $30,000 for a PCIe flash card, it had better demonstrate an ROI. While users are still struggling with why solid-state storage costs so much, when they target the technology at the right applications the results can be staggering. For example, when Dan Marbes, a systems engineer at Associated Bank, deployed just three SSDs for his B.I. applications, the flash storage outperformed 60 15,000rpm Fibre Channel disk drives in small-block reads. But when Marbes used the SSDs for large-block random reads and any writes, 'the 60 15K spindles crushed the SSDs,' he said."
My approach (Score:5, Informative)
Small (and cheap) 32GB SSD for my desktop...
Big powerful 12TB file server using traditional disks for the bulk of my data.
Performance for the stuff where the SSD makes a difference (program files), cheap storage for the stuff where it doesn't (just about everything else).
And if that 32GB drive dies (unproven technology.. MTBF is still a guess) .. I'll buy another cheap (probably cheaper at that point) one and restore from my daily backup.
Re: (Score:3)
I do the same thing as you do except I keep a hot spare in my computer, a regular hard drive that automatically mirrors the SSD using Norton Ghost. If the SSD dies, swap the SATA cables and reboot. I've done this a few times just to test it out, and it works.
Re: (Score:2)
Is that "except" or "except also"? If the primary dies whilst being mirrored... may I suggest two spares with an alternating schedule? :)
Re: (Score:2)
Just use rsync with "--link-dest"; I've seen multi-TB backups take less than a minute because most of the files are the same.
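For anyone who hasn't used it, here's roughly what that looks like wrapped in a script - a minimal sketch, where the /data and /backups paths are made up and it just shells out to the real rsync:

```python
#!/usr/bin/env python3
"""Hard-link snapshot backup using rsync --link-dest (sketch; paths are hypothetical)."""
import datetime
import subprocess

SOURCE = "/data/"                     # trailing slash: copy the contents, not the dir itself
BACKUP_ROOT = "/backups"
LATEST = f"{BACKUP_ROOT}/latest"      # symlink to the most recent snapshot
dest = f"{BACKUP_ROOT}/{datetime.date.today().isoformat()}"

# Unchanged files become hard links to the previous snapshot, so a multi-TB
# tree with few changes copies almost nothing. (On the very first run,
# rsync just warns that --link-dest doesn't exist and copies everything.)
subprocess.run(
    ["rsync", "-a", "--delete", f"--link-dest={LATEST}", SOURCE, dest],
    check=True,
)

# Point "latest" at the new snapshot for the next run.
subprocess.run(["ln", "-sfn", dest, LATEST], check=True)
```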
Re: (Score:2)
It depends on the application, and how it stores its data. Not all scenarios are conducive to rsync; it is possible for an rsync to take more time than just doing a copy.
Re: (Score:2)
Those situations are pretty rare, usually as a result of databases that store many GB in a single file. And if you're backing up a database it's usually best to use whatever online backup mechanism the database provides anyway, so that you don't have to take it offline every day for backups.
Re: (Score:2)
You have a 12TB file server? WTF are you putting on that thing, pirated content you'll never have enough time in your life to watch?
Re: (Score:2)
Rips of my fairly massive DVD collection actually!
Ok, I won't lie, I do have some pirated content.. but I do pay for most of my media these days.
It adds up fairly quickly... most are in H.264 with fairly high settings.
And yes, I have seen everything in my collection. I almost always have something playing in the background while I work. I probably make it through my collection every few years ..
Re: (Score:2)
12TB is a lot, unless you start backing up your entire blu ray collection to disk, in which case it might not be that much. Also if you're into shooting HD video.
Re: (Score:2)
12TB isn't a lot once you start storing 8-12GB per movie and ~25GB per season for 1080p rips. Especially if you keep them for rewatching.
Re: (Score:2)
I think the original figure must have been missing a zero. 25 GB per season is less than 480i DVD bitrate (that's only about three dual-layer DVDs, whereas an hour-long TV series has about five or six dual-layer discs per season).
I only have one TV show on Blu-Ray, so I don't have a lot of data to go on, but... Serenity ran for only 14 episodes (basically a half season) and is spread across three dual-layer Blu-Ray discs, for a total of 150 GB. A full season of an hour-long TV show should be on the order o
Re: (Score:2)
So it's only 48 seasons then?
Star Trek TOS: 3 seasons
Star Trek TNG: 7 seasons
Star Trek DS9: 7 seasons
Star Trek VOY: 7 seasons
Star Trek ENT: 4 seasons
Babylon 5: 5 seasons
That's 33 seasons right there. And what self respecting geek wouldn't have those?
Re: (Score:2)
25GB is a pretty standard size for a HDTV-ripped, x264-encoded, ~22-24 episode season on tvtorrents, et al.
Rips from full BR sources seem to come in more
Re: (Score:2)
Nah, he's just storing multiple backups of FRIENDS. Everybody knows you only need FRIENDS and just go back to the start when you run off the end - it's the Möbius strip of television.
Re: (Score:2)
I have a 128GB SSD for OS and scratch partition, with data and programs on a larger traditional HDD. Going to SSD, the boot speed more than doubled. Applications that rely on scratch disks, like Photoshop... Well, Photoshop is a bit confused by the whole setup. But SMARTER applications that rely upon scratch disks are lightning fast. It's not just that the SSD functions as a scratch disk faster, but that offloading those IO interactions means the normal disk is entirely free for more traditional
Re: (Score:2)
Windows 7 / 64 boot times including BIOS POST, but measured to the point where the UI actually responds to input. I do have it skipping most of the BIOS tests and going straight to the right drive, so the BIOS setup is reasonably optimized.
The Linux partition hadn't suffered as much from slow boot times, so it's hard to say how much the SSD is helping. The Linux partition went from fast enough that I didn't notice any slowdown, to fast enough that I didn't notice any slowdown.
That's what RAM is for.. (Score:2)
I run RF simulations at work and loaded up the RAM to cache terrain data.
64GB system memory was $1500 a few months ago. I think it is below $1000 now.
Skip the SSD and load up on ram.
Once it's cached, leave it there.
I'd like to see a thunderbolt RAM drive .. that'd be something; in my youth I'd have given it a go. Put some backup batteries in there, a mirror HD, and voila - block reads to go, in bulk. Sweet little capacitors. Do my bidding!
Re: (Score:2)
Oh grow up...
First time those programs are loaded is blazing fast. Moving to SSD dramatically cut boot time. Yes, subsequent loads are from cache... but having stuff load damn near instantly the first time is significant.
In addition to that, I'm a Gentoo user, and that SSD makes building those program files a hell of a lot faster.
Re:My approach (Score:5, Informative)
You're doing it wrong. Get some RAM and mount a tmpfs, and it'll be a hell of a lot faster than your SSD. It'll be at least 60% cheaper, too.
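For the Gentoo build case above, the tmpfs trick is basically a one-liner; here's a minimal sketch (the mount point and size are made up, it needs root, and everything in it vanishes on reboot):

```python
#!/usr/bin/env python3
"""Mount a RAM-backed tmpfs as a build/scratch area (sketch; mount point and size are hypothetical)."""
import os
import subprocess

MOUNT_POINT = "/var/tmp/build-scratch"
SIZE = "8G"   # backed by RAM (and swap); contents are gone after a reboot

os.makedirs(MOUNT_POINT, exist_ok=True)
# Equivalent /etc/fstab line:
#   tmpfs  /var/tmp/build-scratch  tmpfs  size=8G,mode=1777  0 0
subprocess.run(
    ["mount", "-t", "tmpfs", "-o", f"size={SIZE},mode=1777", "tmpfs", MOUNT_POINT],
    check=True,
)
print(f"tmpfs mounted at {MOUNT_POINT}; point your build TMPDIR there.")
```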
Re: (Score:3)
The devices the GP was talking about have quite a few advantages over SSD, and the contents don't disappear with the power (they generally have a battery that keeps the RAM refreshed).
They don't have a limited number of writes like flash-based SSDs, and they run at the speed of your bus for writing or reading.
The downside: price per gigabyte. This can be a real issue with Windows installs, since Windows likes a lot of space on C:, but for UNIX systems, you can use it for the partitions that need it and use
Re: (Score:2)
I have one, and I have not noticed significant increases in speed. I probably shaved 5 seconds off of a 20 second boot time, but I rarely shut the computer down, so that isn't such a big deal.
Meh (Score:2, Interesting)
Re:Meh (Score:5, Informative)
Now, the comparison in the summary is between 3 SSDs and 60 15K HDDs... in other words, the HDD solution was enormously more expensive (and that's NOT counting the cost of the stack of Fibre Channel RAID enclosures, let alone the power a 60-drive stack draws).
You don't seem to know what you are talking about. SSDs aren't much more expensive per gigabyte than HDDs in performance enterprise environments, and they always significantly outperform for equal investment, with lower power costs. The only place the "cheaper per gigabyte" argument is true is when you can get away with inexpensive HDDs... in other words, you heard people talk about one thing but didn't know that it didn't apply to another.
When you don't know what you are talking about, act like it.
Re:Meh (Score:4, Informative)
Re: (Score:2)
What's the price difference between 2 600GB mechanicals and the 1 1TB drive?
Also, what's the MTBF, and is that SSD using internal RAID0 like I think it is? If so, have fun with data recovery when one flash failure kills the entire 1TB.
Re: (Score:2)
Not the GP, but I suspect he was referring to the fact that the OCZ Colossus 1TB SSD is internally a RAID 0. If an enterprise is running these, they're running RAID0, but may not even realize it.
http://www.ocztechnology.com/ocz-colossus-lt-series-sata-ii-3-5-ssd.html [ocztechnology.com]
That said, it starts at $2500. The 600GB 15k SAS drives start at $650. $1200 (comparing 2 SAS to 1 SSD) buys you a lot of rack space and cooling. If you look at "Enterprise SSD" (whatever that means), you're looking at requiring a PCI-E card
Re: (Score:2)
The limiting factor may be space. If the 1x1TB drive fits into a 1U enclosure but the 2x600GB drive requires a 2U enclosure, it may not be worth doubling your datacenter rack space for the storage.
The limiting factor may be reliability. Mechanical drives fail frequently on startup/shutdown due to temperature changes. Putting two mechanical drives in a host (instead of one solid-state) sounds like a good way to increase failure
Re: (Score:2)
At $30,000 per SSD, times 3, you get $90,000. Divide that by 60 and you get $1,500 per hard drive to break even.
I'm not sure I've ever seen a hard drive that costs $1,500; Newegg says [newegg.com] 450GB SAS 15k drives can be had for $300.
But then, we don't know how much data is being stored by that SSD, or how much was being stored by the mechanicals, or how much parity (if any?) each system had, or whether they were from the same vendor... all of which makes the article pretty darn useless.
They mention, for example, that
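For what it's worth, the break-even arithmetic above, spelled out with only the numbers quoted in this thread:

```python
# Break-even price per spindle if 3 of the $30,000 cards replace 60 drives.
ssd_price = 30_000      # per PCIe flash card, from the summary
ssd_count = 3
hdd_count = 60

break_even_per_hdd = ssd_price * ssd_count / hdd_count
print(f"Break-even HDD price: ${break_even_per_hdd:,.0f}")   # $1,500

hdd_price = 300         # the Newegg 450GB 15k SAS figure quoted above
print(f"60 x 15k SAS drives: ${hdd_price * hdd_count:,}")    # $18,000 vs $90,000
```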
Re: (Score:2)
Modern SSDs don't work like that. The mapping of device sectors to flash pages is not one to one - instead there is an internal mapping maintained that is not unlike the way a filesystem maps the blocks of a very large file onto a block device.
So if the SSD controller needs to overwrite a sector it is almost certainly not going to erase a flash page and rewrite it in place, rather it is much more likely to write the flash page in an empty area and change the internal mapping. Then the old page can be place
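A toy illustration of that out-of-place-write idea - not any real controller's algorithm, just a minimal logical-to-physical map where an overwrite goes to a fresh page and the old page is marked stale for later garbage collection:

```python
class ToyFTL:
    """Minimal flash-translation-layer sketch: overwrites never rewrite in place."""

    def __init__(self, total_pages):
        self.mapping = {}                      # logical page -> physical page
        self.free = list(range(total_pages))   # pool of erased pages
        self.stale = set()                     # old copies awaiting garbage collection

    def write(self, logical_page, data):
        new_phys = self.free.pop(0)            # always take a fresh, erased page
        old_phys = self.mapping.get(logical_page)
        if old_phys is not None:
            self.stale.add(old_phys)           # old copy becomes garbage, erased later
        self.mapping[logical_page] = new_phys
        # (a real controller would also program `data` into new_phys here)
        return new_phys

ftl = ToyFTL(total_pages=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")                            # overwrite lands on a different page
print(ftl.mapping, ftl.stale)                  # {0: 1} {0}
```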
Re: (Score:2)
There's a lot more to enterprise storage than the drives. Taking into account those additional costs, twice as much per drive doesn't add up to a lot more expensive in the big picture.
Dan "Obvious" Marbes (Score:5, Interesting)
For example, when Dan Marbes, a systems engineer at Associated Bank, deployed just three SSDs for his B.I. applications, the flash storage outperformed 60 15,000rpm Fibre Channel disk drives in small-block reads. But when Marbes used the SSDs for large-block random reads and any writes, 'the 60 15K spindles crushed the SSDs,' he said."
So when you need lots of small, random reads, 3x SSDs beat 60x HDDs. Most of the time is spent seeking on the HDDs; a ~4.6 ms random seek time is an order of magnitude or more slower than the flash-based drives. No surprise here.
When you are just transferring large files, most of the time is spent actually transferring data. A modern SSD might manage 300-400 MB/s read, but 20x as many HDDs are still going to beat the crap out of them.
The only mildly surprising part is the HDDs winning for all writes, but I guess that really depends on how the test is set up - unless you are actually writing to random parts of the HDD, it is basically a straight-up sequential write, so only throughput matters - and again, 60x HDDs are going to beat 3x SSDs. (It is worth noting that SSDs are significantly slower at writing than reading in general, although still much faster than an HDD on an individual basis.)
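A rough sanity check on that reasoning, using made-up but plausible round numbers (these are assumptions for illustration, not the bank's actual hardware specs):

```python
# Toy model: small random reads vs large sequential transfers.
hdd_iops, hdd_mbps, hdd_n = 200, 150, 60      # per 15k spindle (assumed figures)
ssd_iops, ssd_mbps, ssd_n = 40_000, 350, 3    # per SATA-era SSD (assumed figures)

# Small random reads: limited by IOPS.
print("random-read IOPS:", hdd_iops * hdd_n, "vs", ssd_iops * ssd_n)
# -> 12,000 vs 120,000: three SSDs win by roughly 10x.

# Large transfers: limited by aggregate bandwidth.
print("streaming MB/s:", hdd_mbps * hdd_n, "vs", ssd_mbps * ssd_n)
# -> 9,000 vs 1,050: sixty spindles win by roughly 8x.
```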
Re: (Score:2)
The only mildly surprising part is that part about the HDDs winning for all writes,
Maybe their SSD doesn't support TRIM. That would certainly explain it.
TFA is a bit more interesting (Score:2)
hopefully SRT can be more fully advanced (Score:2)
I've had a bad experience with SSD drives, having returned 2 due to deal-breaking problems for me... one caused BSODs, the other couldn't handle suspend mode without locking up.
I now have 2x Samsung F3s in RAID 0 (plus backup on a NAS).
Personally I have no desire to do the sort of file management small SSDs currently demand, and from reading the reviews they all pretty much suck... however SRT is a compelling option... all the convenience of big mechanical drives plus the speed boost of an SSD... unl
SSDs for lower power, low noise environments (Score:3)
To me the cost per GB of an SSD really makes it tough to justify in a lot of cases, especially if you want to store large programs like games, where you'll need at least a 100GB SSD. However, one place I have started to use low-capacity (8 or 16GB) SSDs is in low-noise and/or low-power environments. If you team them with an ITX Atom board and the right power supply you can build a small computer with no moving parts whatsoever. And the computer will have very low power usage for applications like HTPCs or network appliances (like firewalls) where the machine might always be powered on.
Re: (Score:2)
Same here, although I don't run Apache but Lighttpd. My workload is basically read-only, so it's very cache-friendly. The flash drive only lights up during boot.
The outcome is not exactly what they said (Score:3)
If you think about it, the outcome of this test is 100% in favor of the SSD.
Think about it:
The tester was willing to test only 3 SSD's versus *60* 15K drives. So the tester thought that 20 times fewer drives was a fair test for the comparison. What is the tester actually saying here? I think I have a feeling I know. :-)
Anyway, 15K drives are not long for the market. Soon, all that will be left are economy class, 10K, and SSD's.
Only a little more maturity, and the enterprise flood gates will open. When that happens, the hapless victim will be the short-stroked IOPS environment, where total IOPS was always the requirement, and that requirement was for more IOPS than capacity. I.e., if a 15K drive offers 400 IOPS and you need 400,000 sustained, but don't have to store very much at all, your only current choice is to buy a lot of 15K drives (a thousand of them) - or far fewer SSDs.
The switchover point is only a heartbeat away.
Bye 15K drive. I'll miss you.
C//
Re: (Score:2)
The tester was willing to test only 3 SSD's versus *60* 15K drives.
Except they COMPLETELY neglected to mention whether there was parity in place on either system, or how much data was being held by each system.
Protip - the mechanicals were holding a TON more data than the SSD systems, and I'm pretty sure the mechanicals can be hot-swapped.
Re: (Score:2)
At 30k RPM, the edge of the platter is travelling at:
8.75 cm (about the diameter of a platter) * pi * 30,000 rpm, which is about 8.25 km/min, or just under 500 km/h.
A 60k drive would be approaching the sound barrier; there is no way that is ever going to happen inside your computer.
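The arithmetic, for anyone who wants to check it (the 8.75 cm platter diameter is an approximation):

```python
import math

def edge_speed_kmh(diameter_cm, rpm):
    """Linear speed of the platter edge in km/h."""
    circumference_km = math.pi * diameter_cm / 100 / 1000
    return circumference_km * rpm * 60

for rpm in (15_000, 30_000, 60_000):
    print(rpm, f"{edge_speed_kmh(8.75, rpm):.0f} km/h")
# 15,000 -> ~247 km/h, 30,000 -> ~495 km/h, 60,000 -> ~990 km/h
# (the speed of sound at sea level is roughly 1,225 km/h)
```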
Don't be poor... (Score:2)
Seeks are an issue (Score:3, Interesting)
Just as an info dump for anyone who's not familiar with why SSDs perform so much better: SSDs have far better seek performance.
A normal HDD takes about 10ms to seek (3ms at the very high end, 15ms at the low end- 10ms is a good rule of thumb), which means you've got a princely 100 seeks per second per spindle (i.e. HDD). SSDs don't have seek limitations. Looking up a contiguous block of data vs not looking up a contiguous block of data makes no difference to an SSD.
It turns out that 100 seeks isn't a lot in serving infrastructure or, in some cases, on a desktop. When you go to read a file off disk, multiple seeks are involved - you need to look up the inode (or equivalent), then find the file, and a large file will probably be split into many different chunks requiring separate seeks to access.
Even on a desktop you'll frequently be seek-bound, not throughput-limited. Let's say you are starting up a largish Java application (Eclipse might be a good example). It references a huge number of library (.jar) files, which are certainly large enough to require many seeks to access. And those libraries are often linked in to system libraries which have their own dependencies, and may have additional dependencies, all of which require further seeks. Plus with Eclipse it will look up the time stamps on files in the project... and so on.
System boot is another time when HDDs are usually seek-bound - lots of different applications/services/daemons are starting at the same time, loading lots of libraries and causing lots and lots of seeks.
On server infrastructure, a highly utilized database will probably be seek-bound, not throughput-limited.
The article is kind of stating the blindingly obvious - if you are seek-bound, SSDs are better. And 60 drives gives you ~6,000 seeks per second. A typical modern-ish desktop HDD can sustain on the order of 100 MB/s (more expensive HDDs quite a lot more), so 60 of them is roughly 6,000 MB/s in aggregate; at 6,000 seeks/second that works out to about 1 MB per seek. So the result makes perfect sense if you are looking up data that is either non-contiguous or much smaller than about 1 MB per request - an SSD will beat you every time on seeks (since it has effectively no seek time).
The limitations of SSDs are: they have throughput limitations, just like HDDs, and more importantly their write performance is usually significantly worse than their read performance, and in bad cases can fall behind an HDD (writing to an SSD often involves reading and re-writing large chunks of data, even for very small writes). You can easily construct tests where HDDs perform better than SSDs (particularly something like a 60-spindle array, where an awful lot of writes can be absorbed by the drives' on-board RAM buffers - common on higher-performance drives, and often battery-backed so they can "guarantee" the write has been committed without having to wait for a write to the magnetic media).
Of course, the other obvious application for SSDs is where you want robustness and silence, i.e. laptops. Oddly enough their power consumption isn't that much better than a normal HDD's (although that might have changed since I last read about it).
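The back-of-the-envelope break-even above, spelled out (the 10 ms seek and 100 MB/s per drive are the same assumed round numbers):

```python
# Rough break-even transfer size per request for a 60-spindle array.
seek_ms = 10                 # per random seek -> ~100 seeks/s per drive (assumed)
mb_per_s = 100               # sustained transfer per drive (assumed)
drives = 60

seeks_per_s = drives * (1000 / seek_ms)        # ~6,000 seeks/s for the array
array_mb_per_s = drives * mb_per_s             # ~6,000 MB/s aggregate streaming
break_even_mb = array_mb_per_s / seeks_per_s   # ~1 MB per request
print(f"{seeks_per_s:.0f} seeks/s, break-even ~{break_even_mb:.1f} MB per request")
# Requests much smaller (or less contiguous) than this are seek-bound,
# which is exactly where the SSDs win.
```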
Re: (Score:2)
Oddly enough their power performance isn't that much better than a normal HDD (although that might have changed since I last read about it).
That depends on the manufacturer. My Intel SSD requires less than 750mA at full tilt. I have an OCZ SSD that uses 2A all the time.
Re: (Score:2)
That 100 seeks per second isn't the entire story, however - basically every drive you can get these days supports Native Command Queuing, which means the drive will try to organize its reads so that it doesn't have to seek to position 50 to grab a block of data, then re-seek back to position 40 to grab another block. With NCQ, it rearranges the requests so that it first gets position 40, then goes on to 50 within the same "rotation" - so both requests are done in under that 10ms.
NCQ makes a huge d
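A toy sketch of the reordering idea - just a nearest-position-first sort, not the drive firmware's real (rotation-aware) scheduler:

```python
def reorder_requests(head_pos, pending):
    """Service pending block positions in a single sweep instead of arrival order."""
    # A real drive also accounts for rotational position; this only models
    # minimizing back-and-forth head movement.
    ordered = []
    pos = head_pos
    remaining = list(pending)
    while remaining:
        nxt = min(remaining, key=lambda p: abs(p - pos))   # closest request next
        ordered.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return ordered

print(reorder_requests(head_pos=38, pending=[50, 40]))        # [40, 50]
print(reorder_requests(head_pos=45, pending=[50, 40, 52, 41]))  # [41, 40, 50, 52]
# One sweep across the platter instead of ping-ponging between requests.
```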
Goodbye defragmentation? (Score:2)
Looking up a contiguous block of data vs not looking up a contiguous block of data makes no difference to an SSD.
Then is it safe to say that we should never again worry about running defrag utilities after moving to SSD?
Re: (Score:2)
Yes, this is one major reason for performance improvements on desktop systems. Fragmentation is not an issue (well, sorta) with SSDs.
(I say sorta, because due to the way NAND erases are handled a highly fragmented filesystem can cause write amplification and slowdowns as blocks are reclaimed. TRIM helps with this.)
Perhaps more relevant to home/SOHO users is . . . (Score:2)
. . . a Storage Review experiment from over a year ago:
http://www.storagereview.com/western_digital_velociraptors_raid_ssd_alternative [storagereview.com]
They put WD Raptors in RAID 0 to form a high performance (yet still affordable) platter drive setup, and then faced them off against Western Digital's new (at the time, first) SSD. Makes sense, right? Except that WD's first SSD was a complete joke, an underperforming [anandtech.com], laughably expensive POS that I forgot about a couple days after Anand's review. When I first read about i
Mix this: (Score:2)
Re: (Score:2)
Sure there is, if your drive doesn't support TRIM (or hasn't TRIM'd in a while) - that can certainly result in a slower drive.
My own very recent experience (2 weeks ago) (Score:4, Informative)
I moved a small 4TB database from 24x 256G 15k SAS drives to 24x 240G OCZ Vertex 3 SATA3 drives. I ran a few queries on the old and the new: same data, same parameters, same amount of data pulled. Both were hooked up via PCIe 8x slots.
The SSDs crushed the SAS. Not just a mere 2x or 3x crushing. A _FIFTEEN TIMES FASTER_ crushing. This was pulling about a million rows out: 12 seconds (SSD) vs 189 seconds (spindles).
Cost difference? Under $50 per drive more expensive for SSD. I think our actual rate was around $10 per drive more. However, the system as a whole (array+drives+computer) was $12k less. No contest... for our particular application, SSD hands down makes it actually work.
We'll be moving the larger database (same data, same function) to SSD as soon as we can.
SSD Is Cheaper... (Score:2)
Anyway, that's my two-sentence rewrite of TFA.
Why if it's just a boot drive? (Score:2)
Eve Online (Score:2)
I would like to see the performance gain in eve. This could reduce lag significantly.
Why is the NetApp Flash Cache so pricey? (Score:3)
On "why does NetApp sell their PCIe NAND flash card $30k?", here is your answer, Chris Rima: http://blog.zorinaq.com/?e=37 [zorinaq.com]
In a 3 words: because NetApp can.
It's not the components or engineering behind the card that cost $30k. NetApp prices it so high because the card boosts the performance of their filers by about the same amount as a ~$50k shelf of SAS disks (click that link and go read NetApp's own marketing documentation). They have got to have price points that make sense to customers.
(I know a fraction of you will think "No way!". Well, arbitrary price markups on enterprise gear do exist. This NetApp Flash Cache is effectively priced $150/GB. How do you think that certain competitors can even sell _enterprise_ flash at well below $10/GB? We are not talking 25 or 50% less, but a whole order of magnitude less expensive!)
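The implied numbers, using only the figures in this post:

```python
card_price = 30_000          # NetApp Flash Cache card price, per the post
price_per_gb = 150           # effective $/GB claimed above
competitor_per_gb = 10       # "well below $10/GB" enterprise flash

implied_capacity_gb = card_price / price_per_gb
same_money_elsewhere_gb = card_price / competitor_per_gb
print(f"~{implied_capacity_gb:.0f} GB card; the same ${card_price:,} buys "
      f"~{same_money_elsewhere_gb:.0f} GB at $10/GB - about 15x the capacity.")
```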
Copypasta with SSDs (Score:2)
So you're saying copypasta can be copied even faster thanks to SSDs?
Re: (Score:2)
Are you sure that's not just a 500GB disk with a 4GB cache?
Re: (Score:2)
Sounds kinda slow. My triple-RAID0 SSD setup clocks 400MB/s (Sony Z11). For your price, I guess it's good enough for general usage. After experiencing this, it's a hard stretch to try to go back to HDD speeds.
Re: (Score:2)
...And my software RAID5 of 3 640GB drives gets 175MB/sec linear read (at the beginning, of course). I just don't see the point: the OS should be doing the caching to spare RAM already, and well, adding more memory never hurt.
Re: (Score:3)
Not quite. Seagate's tech is a simple block cache, where the most frequently accessed blocks get thrown on the SSD portion. It is no doubt quite effective, but we should be able to do better if we move the logic into the OS.
The OS has intimate knowledge of the filesystem, and can easily profile its use. Files that are small or randomly accessed should get put on the SSD, while large sequentially accessed files should get put on the HDD.
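A toy sketch of that placement policy (the thresholds and the SSD/HDD mount points are made up; real implementations work at the block level and track access heat rather than whole files):

```python
import os

# Toy file-placement heuristic in the spirit of the comment above:
# small or randomly accessed files -> SSD, large sequential files -> HDD.
SSD_DIR = "/mnt/ssd"                    # hypothetical mount points
HDD_DIR = "/mnt/hdd"
SMALL_FILE_BYTES = 1 * 1024 * 1024      # ~1 MB, roughly the seek/throughput break-even

def choose_tier(path, randomly_accessed=False):
    """Return the directory a file should live in under this simple policy."""
    size = os.path.getsize(path)
    if size <= SMALL_FILE_BYTES or randomly_accessed:
        return SSD_DIR      # seek-bound workload: flash wins
    return HDD_DIR          # streaming workload: cheap spindles are fine

# e.g. choose_tier("/var/lib/db/index.dat", randomly_accessed=True) -> "/mnt/ssd"
```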
Re:So a good idea would be... (Score:4, Insightful)
Re: (Score:3)
Of course. OS awareness of hardware capabilities lets you avoid workarounds like TRIM. Instead we have TRIM because the nature of the hardware (NAND flash) is hidden behind an interface designed for rotating media that behaves differently.
Well, if you integrate it into the block layer, then it can be utilized by the filesystem with one layer of abstraction. Abstract it completely and you have to hope that the drive logic is capable of making goo
Re: (Score:2)
ZFS with L2ARC seems to do fine with that, haven't looked closely how it does it though. But I hear it does have some optimizations.
Re: (Score:2)
Hybridization generally takes the form of tiered storage in servers. Actual hybrid devices (like the OCZ or Seagate drives) are of limited value in the enterprise.
This is currently a fairly hot area of research, though most of it is occurring behind closed doors at the moment.
Re: (Score:2)
I can't see a whole lot of development going into hybrid drives when it's entirely possible that the price point of SSDs will drop enough in a few years to justify mainstream use.
Re: (Score:2)
It's almost as if nearly all of Slashdot has no idea how much 15K drives cost... hybrid drives make no sense unless you give up on the RPMs.
The high end RAID cards do this (Score:2)
You can cache your data using SSDs but still have a RAID. Adaptec, LSI, Intel, they all have cards that do it.
Re: (Score:3)
In addition to the Momentus XT that several others mentioned, there is also SSD caching on Intel's Z68 motherboards allowing you to designate a 20GB+ drive as an automated cache of whatever data is traveling to and from your hard drives. The effects are quite noticeable and it seems like the system is pretty smart. Plus, it is done in software that is easily updated should a more efficient algorithm be found.
Re:So a good idea would be... (Score:4, Interesting)
Re: (Score:2)
Actually, most newer SANs support an even more tiered setup. Some will manage this automatically and others require a scan-and-move. Various technologies do this with varying levels of intricacy and transparency.
A few solutions looked at the heat map of access and would proactively move data to either SSD storage, traditional SAS storage or high-density SATA storage. If you wanted to spring for the frontend cache systems, they were sporting volatile cache with a memory backend. Though our workloads typical
Re: (Score:2)
Yes, it has been done. Even in software, one of the best known is probably ZFS with L2ARC on Solaris and other systems, look it up.
Have a nice day.
Re: (Score:2)
I remember reading about MS doing research in that area years before SSDs were all the craze. In fact, Vista already supported them (in 2007).
Re: (Score:3)
Am I supposed to just run out and buy SSDs for the whole load?
No. I think that in the open source world you're expected to write the update yourself.
Re:And still no SSD caching for Linux file systems (Score:5, Funny)
Re: (Score:2)
Damn, I wish I had mod points :)
Re: (Score:2)
No, you are supposed to start a flame-war on lkml about how SSD caching is a stupid idea that will never amount to anything. Then hundreds of kernel developers will start developing the code to prove you wrong.
I can't believe it's impossible to view Goatse in Linux!
Re: (Score:2)
Either write it yourself, pay someone to do it, beg nicely or shut up and just wait.
Re: (Score:2)
Because you should be using ZFS?
I troll, but it's true.
Re: (Score:3)
A tiny number of SSDs outperform a huge number of spinning disks, except in certain situations. That's the story right now.