How is this on Slashdot? (Score:-1, Flamebait)
Re: (Score:5, Insightful)
If you don't know, why post?
I'm interested as well, and I want to read suggestions from people who have been there.
Re: (Score:0, Flamebait)
He did give an answer, and from that answer I think he probably does know. In short:
it depends.
I know, it sucks that we can't sum up all aspects of a technology in a single paragraph. Computers aren't easy and are still for people who don't mind doing their own homework.
Used to be "it depends". Now software is better (Score:5, Insightful)
I used to own a backup company, and we did a lot of testing of both software RAID and the major hardware vendors, including some $2,000+ RAID cards.
Once upon a time there were pluses and minuses.
Now, software RAID is better. Specifically Linux MD RAID with LVM on top: it's much more flexible and feature-rich, and it doesn't make your data dependent on a specific controller.
Back in the day, hardware RAID had the advantage of not using CPU cycles. At full write bandwidth, software RAID could use up 10% or even 20% (during
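The MD-plus-LVM stack the parent recommends might look roughly like this. A sketch only: the device names (/dev/sdb, /dev/sdc, /dev/sdd), volume group name, and sizes are hypothetical placeholders, and the commands need root on a machine with real spare disks.

```shell
# Create a 3-disk RAID-5 array with Linux MD (hypothetical devices; adjust for your system)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Layer LVM on top so volumes can be grown or moved later without touching the array
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -L 100G -n lv_backups vg_data
mkfs.ext4 /dev/vg_data/lv_backups

# Record the array so it reassembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

The controller-independence point follows from this: if the motherboard dies, the disks can go into any Linux box and `mdadm --assemble --scan` will find the array from the metadata on the disks themselves, with no matching RAID card required.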
Re: (Score:5, Informative)
Excellent arguments, and for the most part I agree with you.
However, in those rare situations where we're still building actual physical servers instead of deploying cloud-based infrastructure, it sometimes does make sense to use hardware RAID, if only because we don't get the chance to use a software-based solution.
In my case this happens when deploying hypervisors. If I'm installing a VMware vSphere cluster on bare metal, the software won't give me any way to set up a software RAID and we have to rely on t
Re:Used to be "it depends". Now software is better (Score:2)
Wait, vSphere doesn't support software RAID? I know Hyper-V can sit on top of a Windows Storage Space, and I know Proxmox supports both LVM and ZFS arrays directly. I just assumed that vSphere, being the expensive dedicated hypervisor it is, would support some kind of multi-disk software array. TIL.
Re:Used to be "it depends". Now software is better (Score:4, Insightful)
Remember, VMware and EMC are the same company. They like selling storage hardware.
Re: (Score:2)
vSphere will do this via vSAN, but it requires multiple nodes, and the licensing costs almost as much per socket (or more, in some circumstances) as the base vSphere license.
Sure, vSAN is a lot more than just software RAID, but that's what they offer. I'll also note that S2D on Windows requires Datacenter licensing, which is also not cheap.
Re: Used to be "it depends". Now software is bette (Score:2)
Uh... why would they? The storage tier handles RAID at the hardware level with software. Point your goofy little VM at a volume; configure the volume to your requirements. This is bog-standard 101 shit.